Robust biological parametric mapping: an improved technique for multimodal brain image analysis
NASA Astrophysics Data System (ADS)
Yang, Xue; Beason-Held, Lori; Resnick, Susan M.; Landman, Bennett A.
2011-03-01
Mapping the quantitative relationship between structure and function in the human brain is an important and challenging problem. Numerous volumetric, surface, region of interest and voxelwise image processing techniques have been developed to statistically assess potential correlations between imaging and non-imaging metrics. Recently, biological parametric mapping has extended the widely popular statistical parametric approach to enable application of the general linear model to multiple image modalities (both for regressors and regressands) along with scalar valued observations. This approach offers great promise for direct, voxelwise assessment of structural and functional relationships with multiple imaging modalities. However, as presented, the biological parametric mapping approach is not robust to outliers and may lead to invalid inferences (e.g., artifactual low p-values) due to slight mis-registration or variation in anatomy between subjects. To enable widespread application of this approach, we introduce robust regression and robust inference in the neuroimaging context of application of the general linear model. Through simulation and empirical studies, we demonstrate that our robust approach reduces sensitivity to outliers without substantial degradation in power. The robust approach and associated software package provides a reliable way to quantitatively assess voxelwise correlations between structural and functional neuroimaging modalities.
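A minimal sketch of the robust-regression idea at a single voxel, using Huber-weighted iteratively reweighted least squares from statsmodels; the simulated data, covariate names, and noise levels are our own illustration, not the authors' robust BPM software:

```python
# Hedged sketch: robust GLM fit at one voxel with Huber weighting.
# Illustrates robust regression in a voxelwise setting; not the
# authors' released robust BPM package.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_subjects = 40
structure = rng.normal(size=n_subjects)           # e.g., voxel GM density
age = rng.uniform(55, 85, size=n_subjects)        # scalar covariate
function = 0.5 * structure + 0.02 * age + rng.normal(scale=0.1, size=n_subjects)
function[:3] += 2.0                               # outliers, e.g., mis-registration

X = sm.add_constant(np.column_stack([structure, age]))
ols = sm.OLS(function, X).fit()
rlm = sm.RLM(function, X, M=sm.robust.norms.HuberT()).fit()
print("OLS betas:", ols.params)   # pulled toward the outliers
print("RLM betas:", rlm.params)   # close to the true (0, 0.5, 0.02)
```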
Image segmentation-based robust feature extraction for color image watermarking
NASA Astrophysics Data System (ADS)
Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen
2018-04-01
This paper proposes a local digital image watermarking method based on robust feature extraction. Segmentation is achieved by Simple Linear Iterative Clustering (SLIC), on the basis of which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is proposed. Our method can adaptively extract feature regions from the blocks segmented by SLIC, selecting the most robust feature region in every segmented image. Each feature region is decomposed into a low-frequency domain and a high-frequency domain by the Discrete Cosine Transform (DCT), and watermark images are then embedded into the coefficients of the low-frequency domain. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method performs well under various attacks. Furthermore, the proposed method obtains a good trade-off between high robustness and good image quality.
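A minimal sketch of the pipeline's two ingredients, assuming scikit-image's SLIC and a plain parity-quantization embed in place of the paper's DC-DM scheme; picking the largest superpixel as the "feature region" is our simplification of the paper's robustness criterion:

```python
# Hedged sketch: SLIC segmentation + quantization embed of one bit into
# a low-frequency DCT coefficient of a region. Plain QIM, not DC-DM.
import numpy as np
from scipy.fft import dctn, idctn
from skimage import data
from skimage.segmentation import slic

img = data.camera().astype(float)
labels = slic(img, n_segments=50, channel_axis=None)   # superpixel blocks

def embed_bit(block, bit, step=12.0):
    c = dctn(block, norm="ortho")
    q = np.round(c[1, 1] / step)      # quantize a low-frequency coefficient
    if int(q) % 2 != bit:             # force lattice parity to carry the bit
        q += 1
    c[1, 1] = q * step
    return idctn(c, norm="ortho")

# embed into the bounding box of the largest superpixel (our stand-in
# for the paper's "most robust feature region")
ys, xs = np.nonzero(labels == np.bincount(labels.ravel()).argmax())
y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
img[y0:y1 + 1, x0:x1 + 1] = embed_bit(img[y0:y1 + 1, x0:x1 + 1], bit=1)
```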
Robust lane detection and tracking using multiple visual cues under stochastic lane shape conditions
NASA Astrophysics Data System (ADS)
Huang, Zhi; Fan, Baozheng; Song, Xiaolin
2018-03-01
As one of the essential components of environment perception for an intelligent vehicle, lane detection faces challenges including robustness against complicated disturbances and illumination, as well as adaptability to stochastic lane shapes. To address these issues, we propose a robust lane detection method that applies a classification-generation-growth-based (CGG) operator to the detected lines, whereby linear lane markings are identified by synergizing multiple visual cues with a priori knowledge and spatial-temporal information. According to the quality of the linear lane fitting, the method dynamically switches between linear and linear-parabolic models to describe the actual lane. A Kalman filter with adaptive noise covariance and region-of-interest (ROI) tracking are applied to improve robustness and efficiency. Experiments were conducted with images covering various challenging scenarios. The experimental results demonstrate the effectiveness of the presented method under complicated disturbances, illumination variation, and stochastic lane shapes.
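A minimal sketch of the tracking stage, assuming a linear lane model x = a·y + b and a constant-velocity state; the fixed Q and R below replace the paper's adaptive noise covariance:

```python
# Hedged sketch: Kalman tracking of linear lane parameters (a, b).
# State = [a, b, da, db]; fixed noise covariances, not the paper's
# adaptive design.
import numpy as np

dt = 1.0
F = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])
H = np.hstack([np.eye(2), np.zeros((2, 2))])
Q, R = 1e-4 * np.eye(4), 1e-2 * np.eye(2)

def kf_step(x, P, z):
    x = F @ x                          # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                # update with measured (slope, offset)
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
x, P = kf_step(x, P, z=np.array([0.05, 120.0]))
```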
Robust gaze-steering of an active vision system against errors in the estimated parameters
NASA Astrophysics Data System (ADS)
Han, Youngmo
2015-01-01
Gaze-steering is often used to broaden the viewing range of an active vision system. Gaze-steering procedures are usually based on estimated parameters such as image position, image velocity, depth and camera calibration parameters. However, there may be uncertainties in these estimated parameters because of measurement noise and estimation errors. In this case, robust gaze-steering cannot be guaranteed. To compensate for such problems, this paper proposes a gaze-steering method based on a linear matrix inequality (LMI). In this method, we first propose a proportional derivative (PD) control scheme on the unit sphere that does not use depth parameters. This proposed PD control scheme can avoid uncertainties in the estimated depth and camera calibration parameters, as well as inconveniences in their estimation process, including the use of auxiliary feature points and highly non-linear computation. Furthermore, the control gain of the proposed PD control scheme on the unit sphere is designed using LMI such that the designed control is robust in the presence of uncertainties in the other estimated parameters, such as image position and velocity. Simulation results demonstrate that the proposed method provides a better compensation for uncertainties in the estimated parameters than the contemporary linear method and steers the gaze of the camera more steadily over time than the contemporary non-linear method.
Image interpolation via regularized local linear regression.
Liu, Xianming; Zhao, Debin; Xiong, Ruiqin; Ma, Siwei; Gao, Wen; Sun, Huifang
2011-12-01
The linear regression model is a very attractive tool for designing effective image interpolation schemes. Some regression-based image interpolation algorithms have been proposed in the literature, in which the objective functions are optimized by ordinary least squares (OLS). However, interpolation with OLS may have undesirable properties from a robustness point of view: even small amounts of outliers can dramatically affect the estimates. To address these issues, in this paper we propose a novel image interpolation algorithm based on regularized local linear regression (RLLR). Starting from the linear regression model, we replace the OLS error norm with the moving least squares (MLS) error norm, which leads to a robust estimator of local image structure. To keep the solution stable and avoid overfitting, we incorporate the ℓ2-norm as an estimator complexity penalty. Moreover, motivated by recent progress on manifold-based semi-supervised learning, we explicitly consider the intrinsic manifold structure by making use of both measured and unmeasured data points. Specifically, our framework incorporates the geometric structure of the marginal probability distribution induced by unmeasured samples as an additional local smoothness-preserving constraint. The optimal model parameters can be obtained in closed form by solving a convex optimization problem. Experimental results on benchmark test images demonstrate that the proposed method achieves very competitive performance with state-of-the-art interpolation algorithms, especially in image edge structure preservation. © 2011 IEEE
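A minimal sketch of the closed form underlying such estimators, beta = (XᵀWX + λI)⁻¹XᵀWy, assuming Gaussian spatial weights in place of the paper's MLS weights and omitting the manifold-regularization term:

```python
# Hedged sketch: locally weighted ridge regression in closed form.
# Gaussian weights stand in for the MLS error norm; the manifold
# smoothness constraint of RLLR is not reproduced.
import numpy as np

def local_ridge(X, y, center, lam=0.1, h=1.5):
    # X: (n, d) pixel coordinates/features, y: (n,) intensities
    w = np.exp(-np.sum((X - center) ** 2, axis=1) / (2 * h ** 2))
    W = np.diag(w)
    A = X.T @ W @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ W @ y)
```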
An improved artifact removal in exposure fusion with local linear constraints
NASA Astrophysics Data System (ADS)
Zhang, Hai; Yu, Mali
2018-04-01
In exposure fusion, it is challenging to remove artifacts caused by camera motion and moving objects in the scene. An improved artifact removal method is proposed in this paper, which performs local linear adjustment during the artifact removal process. After determining a reference image, we first perform high-dynamic-range (HDR) deghosting to generate an intermediate image stack from the input image stack. Then, a linear intensity mapping function (IMF) is extracted in each window, based on the intensities of the intermediate and reference images and on the intensity mean and variance of the reference image. Finally, with the extracted local linear constraints, we reconstruct a target image stack, which can be directly used for fusing a single HDR-like image. Experiments demonstrate that the proposed method is robust and effective in removing artifacts, especially in the saturated regions of the reference image.
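A minimal sketch of a per-window linear IMF fitted in closed form from window statistics (the same algebra as the guided filter); window size and epsilon are our choices, not the paper's exact formulation:

```python
# Hedged sketch: window-wise linear mapping I_target ≈ a*I_ref + b,
# with (a, b) from local variance/covariance statistics.
import numpy as np
from scipy.ndimage import uniform_filter

def local_linear_imf(ref, inter, radius=7, eps=1e-3):
    size = 2 * radius + 1
    mu_r = uniform_filter(ref, size)
    mu_i = uniform_filter(inter, size)
    var_r = uniform_filter(ref * ref, size) - mu_r ** 2
    cov_ri = uniform_filter(ref * inter, size) - mu_r * mu_i
    a = cov_ri / (var_r + eps)      # slope of the window-wise IMF
    b = mu_i - a * mu_r             # intercept
    return a * ref + b              # locally-linear reconstruction
```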
Robust linearized image reconstruction for multifrequency EIT of the breast.
Boverman, Gregory; Kao, Tzu-Jen; Kulkarni, Rujuta; Kim, Bong Seok; Isaacson, David; Saulnier, Gary J; Newell, Jonathan C
2008-10-01
Electrical impedance tomography (EIT) is a developing imaging modality that is beginning to show promise for detecting and characterizing tumors in the breast. At Rensselaer Polytechnic Institute, we have developed a combined EIT-tomosynthesis system that allows for the coregistered and simultaneous analysis of the breast using EIT and X-ray imaging. A significant challenge in EIT is the design of computationally efficient image reconstruction algorithms which are robust to various forms of model mismatch. Specifically, we have implemented a scaling procedure that is robust to the presence of a thin highly-resistive layer of skin at the boundary of the breast and we have developed an algorithm to detect and exclude from the image reconstruction electrodes that are in poor contact with the breast. In our initial clinical studies, it has been difficult to ensure that all electrodes make adequate contact with the breast, and thus procedures for the use of data sets containing poorly contacting electrodes are particularly important. We also present a novel, efficient method to compute the Jacobian matrix for our linearized image reconstruction algorithm by reducing the computation of the sensitivity for each voxel to a quadratic form. Initial clinical results are presented, showing the potential of our algorithms to detect and localize breast tumors.
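A minimal sketch of one generic linearized reconstruction step, assuming a precomputed Jacobian J; the paper's skin-layer scaling and electrode-exclusion procedures are not reproduced:

```python
# Hedged sketch: Tikhonov-regularized linearized update,
# delta_sigma = (J^T J + lam*I)^{-1} J^T delta_v.
import numpy as np

def linearized_step(J, dv, lam=1e-2):
    # J: (n_meas, n_voxels) sensitivity matrix; dv: voltage perturbation.
    # Rows of J for excluded (poorly contacting) electrodes would simply
    # be dropped before this solve.
    A = J.T @ J + lam * np.eye(J.shape[1])
    return np.linalg.solve(A, J.T @ dv)
```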
Aqil, Muhammad; Jeong, Myung Yung
2018-04-24
The robust characterization of real-time brain activity carries potential for many applications. However, the contamination of measured signals by various instrumental, environmental, and physiological sources of noise introduces a substantial amount of signal variance and, consequently, challenges real-time estimation of contributions from underlying neuronal sources. Functional near-infrared spectroscopy (fNIRS) is an emerging imaging modality whose real-time potential is yet to be fully explored. The objectives of the current study are to (i) validate a time-dependent linear model of hemodynamic responses in fNIRS, and (ii) test the robustness of this approach against measurement noise (instrumental and physiological) and mis-specification of the hemodynamic response basis functions (amplitude, latency, and duration). We propose a linear hemodynamic model with time-varying parameters, which are estimated (adapted and tracked) using a dynamic recursive least squares algorithm. Owing to the linear nature of the activation model, the problem of achieving robust convergence to an accurate estimate of the model parameters is recast as a problem of parameter error stability around the origin. We show that robust convergence of the proposed method is guaranteed in the presence of an acceptable degree of model misspecification, derive an upper bound on the noise under which reliable parameters can still be inferred, and derive a lower bound on the signal-to-noise ratio above which reliable parameters can still be inferred from a channel/voxel. Whilst here applied to fNIRS, the proposed methodology is applicable to other hemodynamics-based imaging technologies such as functional magnetic resonance imaging. Copyright © 2018 Elsevier Inc. All rights reserved.
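A minimal sketch of recursive least squares with a forgetting factor for tracking time-varying weights; the toy basis functions below are placeholders, not the paper's hemodynamic regressors:

```python
# Hedged sketch: RLS with forgetting factor lam for time-varying
# linear-model weights theta_t.
import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    # phi: regressor vector at time t; y: measured sample
    k = P @ phi / (lam + phi @ P @ phi)
    theta = theta + k * (y - phi @ theta)
    P = (P - np.outer(k, phi @ P)) / lam
    return theta, P

theta, P = np.zeros(3), 1e3 * np.eye(3)
for t in range(200):
    phi = np.array([1.0, np.sin(0.1 * t), np.cos(0.1 * t)])  # toy basis
    y = phi @ np.array([0.2, 1.0, -0.5])                     # toy data
    theta, P = rls_update(theta, P, phi, y)
print(theta)   # converges to (0.2, 1.0, -0.5)
```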
LCC-Demons: a robust and accurate symmetric diffeomorphic registration algorithm.
Lorenzi, M; Ayache, N; Frisoni, G B; Pennec, X
2013-11-01
Non-linear registration is a key instrument for computational anatomy to study the morphology of organs and tissues. However, in order to be an effective instrument in clinical practice, registration algorithms must be computationally efficient, accurate and, most importantly, robust to the multiple biases affecting medical images. In this work we propose a fast and robust registration framework based on the log-Demons diffeomorphic registration algorithm. The transformation is parameterized by stationary velocity fields (SVFs), and the similarity metric implements a symmetric local correlation coefficient (LCC). Moreover, we show how the SVF setting provides a stable and consistent numerical scheme for the computation of the Jacobian determinant and the flux of the deformation across the boundaries of a given region, thus providing a robust evaluation of spatial changes. We tested LCC-Demons in the inter-subject registration setting, by comparing with state-of-the-art registration algorithms on publicly available datasets, and in the intra-subject longitudinal registration problem, for statistically powered measurement of longitudinal atrophy in Alzheimer's disease. Experimental results show that LCC-Demons is a generic, flexible, efficient and robust algorithm for the accurate non-linear registration of images, with several applications in the field of medical imaging. Without any additional optimization, it solves intra- and inter-subject registration problems equally well, and compares favorably to state-of-the-art methods. Copyright © 2013 Elsevier Inc. All rights reserved.
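A minimal sketch of a symmetric local correlation coefficient map computed with box-filter statistics; the window size is our choice, and the Demons update equations driven by this metric are not reproduced:

```python
# Hedged sketch: local correlation coefficient (LCC) map between
# two images I and J.
import numpy as np
from scipy.ndimage import uniform_filter

def lcc_map(I, J, size=9, eps=1e-8):
    mI, mJ = uniform_filter(I, size), uniform_filter(J, size)
    vI = uniform_filter(I * I, size) - mI ** 2
    vJ = uniform_filter(J * J, size) - mJ ** 2
    cIJ = uniform_filter(I * J, size) - mI * mJ
    return cIJ / np.sqrt(np.maximum(vI, 0) * np.maximum(vJ, 0) + eps)
```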
Xia, Youshen; Sun, Changyin; Zheng, Wei Xing
2012-05-01
There is growing interest in solving linear L1 estimation problems, both for the sparsity of the solution and for robustness against non-Gaussian noise. This paper proposes a discrete-time neural network which can solve large linear L1 estimation problems quickly. The proposed neural network has a fixed computational step length and is proved to be globally convergent to an optimal solution. The proposed neural network is then efficiently applied to image restoration. Numerical results show that the proposed neural network is not only efficient in solving degenerate problems resulting from the non-unique solutions of linear L1 estimation problems, but also needs much less computational time than related algorithms in solving both linear L1 estimation and image restoration problems.
Robust Models for Optic Flow Coding in Natural Scenes Inspired by Insect Biology
Brinkworth, Russell S. A.; O'Carroll, David C.
2009-01-01
The extraction of accurate self-motion information from the visual world is a difficult problem that has been solved very efficiently by biological organisms utilizing non-linear processing. Previous bio-inspired models for motion detection based on a correlation mechanism have been dogged by issues arising from their sensitivity to undesired properties of the image, such as contrast, which vary widely between images. Here we present a model with multiple levels of non-linear dynamic adaptive components based directly on the known or suspected responses of neurons within the visual motion pathway of the fly brain. By testing the model under realistic high-dynamic-range conditions we show that the addition of these elements makes the motion detection model robust across a large variety of images, velocities and accelerations. Furthermore, the performance of the entire system exceeds the sum of the incremental improvements offered by the individual components, indicating beneficial non-linear interactions between processing stages. The algorithms underlying the model can be implemented in either digital or analog hardware, including neuromorphic analog VLSI, but defy an analytical solution due to their dynamic non-linear operation. The algorithm has applications in the development of miniature autonomous systems in defense and civilian roles, including robotics, miniature unmanned aerial vehicles and collision avoidance sensors. PMID:19893631
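A minimal sketch of the correlation core these models build on, a basic Hassenstein-Reichardt elementary motion detector; the paper's adaptive non-linear stages before and after this core are not reproduced:

```python
# Hedged sketch: delay-and-correlate motion detector on two adjacent
# photoreceptor signals; the output sign encodes motion direction.
import numpy as np

def lowpass(x, tau=5.0):
    # first-order discrete low-pass filter (the delay arm)
    y, a = np.zeros_like(x), 1.0 / tau
    for t in range(1, len(x)):
        y[t] = y[t - 1] + a * (x[t] - y[t - 1])
    return y

def emd(left, right, tau=5.0):
    return lowpass(left, tau) * right - left * lowpass(right, tau)

t = np.arange(200, dtype=float)
left = np.sin(0.2 * t)
right = np.sin(0.2 * t - 0.4)        # rightward-moving grating
print(np.mean(emd(left, right)))     # positive mean -> rightward motion
```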
Efficient and robust model-to-image alignment using 3D scale-invariant features.
Toews, Matthew; Wells, William M
2013-04-01
This paper presents feature-based alignment (FBA), a general method for efficient and robust model-to-image alignment. Volumetric images, e.g. CT scans of the human body, are modeled probabilistically as a collage of 3D scale-invariant image features within a normalized reference space. Features are incorporated as a latent random variable and marginalized out in computing a maximum a posteriori alignment solution. The model is learned from features extracted in pre-aligned training images, then fit to features extracted from a new image to identify a globally optimal locally linear alignment solution. Novel techniques are presented for determining local feature orientation and efficiently encoding feature intensity in 3D. Experiments involving difficult magnetic resonance (MR) images of the human brain demonstrate FBA achieves alignment accuracy similar to widely-used registration methods, while requiring a fraction of the memory and computation resources and offering a more robust, globally optimal solution. Experiments on CT human body scans demonstrate FBA as an effective system for automatic human body alignment where other alignment methods break down. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Ye, Y.
2017-09-01
This paper presents a fast and robust method for the registration of multimodal remote sensing data (e.g., optical, LiDAR, SAR and map). The proposed method is based on the hypothesis that structural similarity between images is preserved across different modalities. We first develop a pixel-wise feature descriptor named Dense Orientated Gradient Histogram (DOGH), which can be computed efficiently at every pixel and is robust to non-linear intensity differences between images. A fast similarity metric based on DOGH is then built in the frequency domain using the Fast Fourier Transform (FFT). Finally, a template matching scheme is applied to detect tie points between images. Experimental results on different types of multimodal remote sensing images show that the proposed similarity metric offers superior matching performance and computational efficiency compared with state-of-the-art methods. Moreover, based on the proposed similarity metric, we also design a fast and robust automatic registration system for multimodal images. This system has been evaluated on a pair of very large SAR and optical images (more than 20000 × 20000 pixels); experimental results show that it outperforms two popular commercial software systems (ENVI and ERDAS) in both registration accuracy and computational efficiency.
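A minimal sketch of FFT-accelerated template matching, the frequency-domain speed-up the abstract refers to; plain image intensities stand in here for the DOGH descriptor maps:

```python
# Hedged sketch: template matching by FFT-based cross-correlation.
import numpy as np
from scipy.signal import fftconvolve

def fft_match(image, template):
    t = template - template.mean()   # zero-mean so bright areas don't dominate
    # flipping the template turns convolution into cross-correlation
    score = fftconvolve(image, t[::-1, ::-1], mode="valid")
    return np.unravel_index(np.argmax(score), score.shape)

rng = np.random.default_rng(1)
img = rng.normal(size=(256, 256))
tpl = img[100:132, 60:92].copy()
print(fft_match(img, tpl))   # -> (100, 60), the true offset
```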
Recognition of blurred images by the method of moments.
Flusser, J; Suk, T; Saic, S
1996-01-01
The article is devoted to the feature-based recognition of blurred images acquired by a linear shift-invariant imaging system against an image database. The proposed approach consists of describing images by features that are invariant with respect to blur and recognizing images in the feature space. The PSF identification and image restoration are not required. A set of symmetric blur invariants based on image moments is introduced. A numerical experiment is presented to illustrate the utilization of the invariants for blurred image recognition. Robustness of the features is also briefly discussed.
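A minimal sketch of the geometric central moments that are the raw ingredients of such blur invariants; the specific invariant polynomials in μ_pq introduced by the paper are not reproduced:

```python
# Hedged sketch: central moments mu_pq of an image up to a given order.
import numpy as np

def central_moments(img, max_order=3):
    y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    mu = {}
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            mu[(p, q)] = ((x - xc) ** p * (y - yc) ** q * img).sum()
    return mu
```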
Qian, Jianjun; Yang, Jian; Xu, Yong
2013-09-01
This paper presents a robust but simple image feature extraction method, called image decomposition based on local structure (IDLS). It is assumed that in the local window of an image, the macro-pixel (patch) of the central pixel, and those of its neighbors, are locally linear. IDLS captures the local structural information by describing the relationship between the central macro-pixel and its neighbors. This relationship is represented with the linear representation coefficients determined using ridge regression. One image is actually decomposed into a series of sub-images (also called structure images) according to a local structure feature vector. All the structure images, after being down-sampled for dimensionality reduction, are concatenated into one super-vector. Fisher linear discriminant analysis is then used to provide a low-dimensional, compact, and discriminative representation for each super-vector. The proposed method is applied to face recognition and examined using our real-world face image database, NUST-RWFR, and five popular, publicly available, benchmark face image databases (AR, Extended Yale B, PIE, FERET, and LFW). Experimental results show the performance advantages of IDLS over state-of-the-art algorithms.
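A minimal sketch of the core relationship IDLS encodes: ridge-regression coefficients expressing a central macro-pixel as a linear combination of its eight neighboring macro-pixels. Patch and window sizes are our choices:

```python
# Hedged sketch: local-structure coefficients via ridge regression.
import numpy as np

def idls_coefficients(window, patch=3, lam=0.1):
    # window: (3*patch, 3*patch) array = 3x3 grid of macro-pixels
    patches = [window[i:i + patch, j:j + patch].ravel()
               for i in range(0, 3 * patch, patch)
               for j in range(0, 3 * patch, patch)]
    center = patches.pop(4)                      # the central macro-pixel
    A = np.stack(patches, axis=1)                # (patch*patch, 8) neighbors
    G = A.T @ A + lam * np.eye(A.shape[1])
    return np.linalg.solve(G, A.T @ center)      # 8 structure coefficients
```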
Hybrid active contour model for inhomogeneous image segmentation with background estimation
NASA Astrophysics Data System (ADS)
Sun, Kaiqiong; Li, Yaqin; Zeng, Shan; Wang, Jun
2018-03-01
This paper proposes a hybrid active contour model for inhomogeneous image segmentation. The data term of the energy function in the active contour consists of a global region fitting term in a difference image and a local region fitting term in the original image. The difference image is obtained by subtracting the background from the original image. The background image is dynamically estimated from a linear filtered result of the original image on the basis of the varying curve locations during the active contour evolution process. As in existing local models, fitting the image to local region information makes the proposed model robust against an inhomogeneous background and maintains the accuracy of the segmentation result. Furthermore, fitting the difference image to the global region information makes the proposed model robust against the initial contour location, unlike existing local models. Experimental results show that the proposed model can obtain improved segmentation results compared with related methods in terms of both segmentation accuracy and initial contour sensitivity.
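A minimal sketch of the background/difference decomposition that feeds the hybrid energy, with Gaussian smoothing standing in for the paper's linear filter; the curve-dependent dynamic re-estimation during evolution is omitted:

```python
# Hedged sketch: background estimate and difference image for the
# global fitting term.
import numpy as np
from scipy.ndimage import gaussian_filter

def split_background(img, sigma=15.0):
    background = gaussian_filter(img, sigma)   # slowly varying bias field
    difference = img - background              # input to the global term
    return background, difference
```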
NASA Astrophysics Data System (ADS)
Hsieh, Cheng-Ta; Huang, Kae-Horng; Lee, Chang-Hsing; Han, Chin-Chuan; Fan, Kuo-Chin
2017-12-01
Robust face recognition under illumination variations is an important and challenging task in a face recognition system, particularly for face recognition in the wild. In this paper, a face image preprocessing approach, called spatial adaptive shadow compensation (SASC), is proposed to eliminate shadows in the face image due to different lighting directions. First, spatial adaptive histogram equalization (SAHE), which uses a face intensity prior model, is proposed to enhance the contrast of each local face region without generating visible noise in smooth face areas. Adaptive shadow compensation (ASC), which performs shadow compensation in each local image block, is then used to produce a well-compensated face image appropriate for face feature extraction and recognition. Finally, null-space linear discriminant analysis (NLDA) is employed to extract discriminant features from SASC-compensated images. Experiments performed on the Yale B, Yale B extended, and CMU PIE face databases have shown that the proposed SASC always yields the best face recognition accuracy. That is, SASC is more robust to face recognition under illumination variations than other shadow compensation approaches.
On use of image quality metrics for perceptual blur modeling: image/video compression case
NASA Astrophysics Data System (ADS)
Cha, Jae H.; Olson, Jeffrey T.; Preece, Bradley L.; Espinola, Richard L.; Abbott, A. Lynn
2018-02-01
Linear system theory is employed to make target acquisition performance predictions for electro-optical/infrared imaging systems in which the modulation transfer function (MTF) may be imposed by a nonlinear degradation process. Previous research relying on image quality metrics (IQM) methods, which heuristically estimate the perceived MTF, has supported the claim that an average perceived MTF can be used to model some types of degradation, such as image compression. Here, we discuss the validity of the IQM approach by mathematically analyzing the associated heuristics from the perspective of reliability, robustness, and tractability. Experiments with standard images compressed by x.264 encoding suggest that the compression degradation can be estimated by a perceived MTF within boundaries defined by well-behaved curves with marginal error. Our results confirm that the IQM linearizer methodology provides a credible tool for sensor performance modeling.
Model based control of dynamic atomic force microscope.
Lee, Chibum; Salapaka, Srinivasa M
2015-04-01
A model-based robust control approach is proposed that significantly improves imaging bandwidth for dynamic mode atomic force microscopy. A model for cantilever oscillation amplitude and phase dynamics is derived and used for the control design. In particular, the control design is based on a linearized model and robust H∞ control theory. This design yields a significant improvement over conventional proportional-integral designs, as verified by experiments.
Automatic Image Registration of Multimodal Remotely Sensed Data with Global Shearlet Features
NASA Technical Reports Server (NTRS)
Murphy, James M.; Le Moigne, Jacqueline; Harding, David J.
2015-01-01
Automatic image registration is the process of aligning two or more images of approximately the same scene with minimal human assistance. Wavelet-based automatic registration methods are standard, but sometimes are not robust to the choice of initial conditions. That is, if the images to be registered are too far apart relative to the initial guess of the algorithm, the registration algorithm does not converge or has poor accuracy, and is thus not robust. These problems occur because wavelet techniques primarily identify isotropic textural features and are less effective at identifying linear and curvilinear edge features. We integrate the recently developed mathematical construction of shearlets, which is more effective at identifying sparse anisotropic edges, with an existing automatic wavelet-based registration algorithm. Our shearlet features algorithm produces more distinct features than wavelet features algorithms; the separation of edges from textures is even stronger than with wavelets. Our algorithm computes shearlet and wavelet features for the images to be registered, then performs least squares minimization on these features to compute a registration transformation. Our algorithm is two-staged and multiresolution in nature. First, a cascade of shearlet features is used to provide a robust, though approximate, registration. This is then refined by registering with a cascade of wavelet features. Experiments across a variety of image classes show an improved robustness to initial conditions, when compared to wavelet features alone.
Multi-linear sparse reconstruction for SAR imaging based on higher-order SVD
NASA Astrophysics Data System (ADS)
Gao, Yu-Fei; Gui, Guan; Cong, Xun-Chao; Yang, Yue; Zou, Yan-Bin; Wan, Qun
2017-12-01
This paper focuses on spotlight synthetic aperture radar (SAR) imaging of point-scattering targets based on tensor modeling. In a real-world scenario, scatterers usually distribute in a block-sparse pattern, a distribution feature scarcely utilized in previous studies of SAR imaging. Our work takes advantage of this structural property of the target scene, constructing a multi-linear sparse reconstruction algorithm for SAR imaging. Multi-linear block sparsity is introduced into higher-order singular value decomposition (SVD) together with a dictionary-construction procedure. Simulation experiments on ideal point targets show the robustness of the proposed algorithm to the noise and sidelobe disturbance that often degrade the imaging quality of conventional methods. The computational resource requirement is further investigated, and the complexity analysis shows that the present method consumes fewer resources than the classic matching pursuit method. Imaging results on practically measured data also demonstrate the effectiveness of the developed algorithm.
Image quality assessment using deep convolutional networks
NASA Astrophysics Data System (ADS)
Li, Yezhou; Ye, Xiang; Li, Yong
2017-12-01
This paper proposes a method of accurately assessing image quality without a reference image by using a deep convolutional neural network. Existing training-based methods usually utilize a compact set of linear filters for learning features of images captured by different sensors to assess their quality. These methods may not be able to learn the semantic features that are intimately related to the features used in human subjective assessment. Observing this drawback, this work proposes training a deep convolutional neural network (CNN) with labelled images for image quality assessment. The ReLU in the CNN allows non-linear transformations for extracting high-level image features, providing a more reliable assessment of image quality than linear filters. To enable the neural network to take images of arbitrary size as input, spatial pyramid pooling (SPP) is introduced connecting the top convolutional layer and the fully-connected layer. In addition, the SPP makes the CNN robust to object deformations to a certain extent. The proposed method takes an image as input, carries out an end-to-end learning process, and outputs the quality of the image. It is tested on public datasets. Experimental results show that it outperforms existing methods by a large margin and can accurately assess the quality of images taken by different sensors at varying sizes.
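A minimal sketch of a spatial pyramid pooling layer that maps an arbitrary-size feature map to a fixed-length vector; the pool levels and tensor shapes are our choices, not the paper's architecture:

```python
# Hedged sketch: SPP layer bridging conv features and an FC layer.
import torch
import torch.nn.functional as F

def spp(features, levels=(1, 2, 4)):
    # features: (batch, channels, H, W) from the last conv layer
    pooled = [F.adaptive_max_pool2d(features, output_size=n).flatten(1)
              for n in levels]
    return torch.cat(pooled, dim=1)  # (batch, channels * sum(n*n))

x = torch.randn(2, 64, 37, 53)       # any spatial size works
print(spp(x).shape)                  # torch.Size([2, 1344]) = 64 * (1+4+16)
```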
Du, Yiping P; Jin, Zhaoyang
2009-10-01
To develop a robust algorithm for tissue-air segmentation in magnetic resonance imaging (MRI) using the statistics of phase and magnitude of the images. A multivariate measure based on the statistics of phase and magnitude was constructed for tissue-air volume segmentation. The standard deviation of first-order phase difference and the standard deviation of magnitude were calculated in a 3 × 3 × 3 kernel in the image domain. To improve differentiation accuracy, the uniformity of phase distribution in the kernel was also calculated and linear background phase introduced by field inhomogeneity was corrected. The effectiveness of the proposed volume segmentation technique was compared to a conventional approach that uses the magnitude data alone. The proposed algorithm was shown to be more effective and robust in volume segmentation in both synthetic phantom and susceptibility-weighted images of human brain. Using our proposed volume segmentation method, veins in the peripheral regions of the brain were well depicted in the minimum-intensity projection of the susceptibility-weighted images. Using the additional statistics of phase, tissue-air volume segmentation can be substantially improved compared to that using the statistics of magnitude data alone. (c) 2009 Wiley-Liss, Inc.
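A minimal sketch of the per-voxel statistics behind the multivariate measure: local standard deviation of magnitude and of first-order phase differences. The 2-D reduction, kernel handling, and any thresholds are our simplifications:

```python
# Hedged sketch: local-std features for tissue-air discrimination.
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(x, size=3):
    m = uniform_filter(x, size)
    return np.sqrt(np.maximum(uniform_filter(x * x, size) - m * m, 0))

def tissue_air_features(magnitude, phase):
    # first-order phase difference along the last axis
    dphase = np.diff(phase, axis=-1, prepend=phase[..., :1])
    return local_std(magnitude), local_std(dphase)
```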
Deep and Structured Robust Information Theoretic Learning for Image Analysis.
Deng, Yue; Bao, Feng; Deng, Xuesong; Wang, Ruiping; Kong, Youyong; Dai, Qionghai
2016-07-07
This paper presents a robust information theoretic (RIT) model to reduce the uncertainties, i.e. missing and noisy labels, in general discriminative data representation tasks. The fundamental pursuit of our model is to simultaneously learn a transformation function and a discriminative classifier that maximize the mutual information of data and their labels in the latent space. In this general paradigm, we discuss three types of RIT implementations: linear subspace embedding, deep transformation, and structured sparse learning. In practice, the RIT and deep RIT are exploited to solve the image categorization task, whose performance is verified on various benchmark datasets. The structured sparse RIT is further applied to a medical image analysis task for brain MRI segmentation that allows group-level feature selection on the brain tissues.
Change Detection via Selective Guided Contrasting Filters
NASA Astrophysics Data System (ADS)
Vizilter, Y. V.; Rubis, A. Y.; Zheltov, S. Y.
2017-05-01
A change detection scheme based on guided contrasting was previously proposed. A guided contrasting filter takes two images (test and sample) as input and forms the output as a filtered version of the test image. Such a filter preserves the similar details and smooths the non-similar details of the test image with respect to the sample image. Due to this, the difference between the test image and its filtered version (difference map) can serve as a basis for robust change detection. Guided contrasting is performed in two steps: at the first step, some smoothing operator (SO) is applied to eliminate test image details; at the second step, all matched details are restored with local contrast proportional to the value of some local similarity coefficient (LSC). The original guided contrasting filter was based on local average smoothing as SO and local linear correlation as LSC. In this paper we propose and implement a new set of selective guided contrasting filters based on different combinations of various SO and thresholded LSC. Linear average and Gaussian smoothing, nonlinear median filtering, and morphological opening and closing are considered as SO. The local linear correlation coefficient, morphological correlation coefficient (MCC), mutual information, mean square MCC and geometrical correlation coefficients are applied as LSC. Thresholding of LSC allows operating with non-normalized LSC and enhances the selective properties of guided contrasting filters: details are either totally recovered or not recovered at all after the smoothing. These different guided contrasting filters are tested as part of the previously proposed change detection pipeline, which contains the following stages: guided contrasting filtering on an image pyramid, calculation of the difference map, binarization, extraction of change proposals, and testing change proposals using local MCC. Experiments on real and simulated image databases demonstrate the applicability of all proposed selective guided contrasting filters. All implemented filters are robust to weak geometrical discrepancies between the compared images. Selective guided contrasting based on morphological opening/closing and thresholded morphological correlation demonstrates the best change detection results.
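A minimal sketch of one selective guided-contrasting filter, assuming Gaussian smoothing as SO and thresholded windowed linear correlation as LSC; all parameter values are illustrative:

```python
# Hedged sketch: smooth, then restore only details whose local
# correlation with the sample exceeds a threshold (all-or-nothing).
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def guided_contrast(test, sample, sigma=2.0, size=9, thr=0.6, eps=1e-8):
    smooth = gaussian_filter(test, sigma)             # step 1: remove details
    mT, mS = uniform_filter(test, size), uniform_filter(sample, size)
    vT = uniform_filter(test * test, size) - mT ** 2
    vS = uniform_filter(sample * sample, size) - mS ** 2
    c = ((uniform_filter(test * sample, size) - mT * mS)
         / np.sqrt(np.maximum(vT, 0) * np.maximum(vS, 0) + eps))
    restore = (c > thr).astype(float)                 # thresholded LSC
    filtered = smooth + restore * (test - smooth)     # step 2: restore matches
    return np.abs(test - filtered)                    # difference map
```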
NASA Astrophysics Data System (ADS)
Unaldi, Numan; Asari, Vijayan K.; Rahman, Zia-ur
2009-05-01
Recently we proposed a wavelet-based dynamic range compression algorithm to improve the visual quality of digital images captured from high dynamic range scenes with non-uniform lighting conditions. The fast image enhancement algorithm, which provides dynamic range compression while preserving the local contrast and tonal rendition, is also a good candidate for real-time video processing applications. Although the colors of the enhanced images produced by the proposed algorithm are consistent with the colors of the original image, the algorithm fails to produce color-constant results for some "pathological" scenes that have very strong spectral characteristics in a single band. The linear color restoration process is the main reason for this drawback; hence, a different approach is required for the final color restoration process. In this paper, the latest version of the proposed algorithm, which addresses this issue, is presented. The results obtained by applying the algorithm to numerous natural images show strong robustness and high image quality.
A robust color signal processing with wide dynamic range WRGB CMOS image sensor
NASA Astrophysics Data System (ADS)
Kawada, Shun; Kuroda, Rihito; Sugawa, Shigetoshi
2011-01-01
We have developed a robust color reproduction methodology based on a simple calculation with a new color matrix, using the formerly developed wide dynamic range WRGB lateral overflow integration capacitor (LOFIC) CMOS image sensor. The image sensor was fabricated in a 0.18 μm CMOS technology and has a 45-degree oblique pixel array, a 4.2 μm effective pixel pitch, and W pixels. A W pixel was formed by replacing one of the two G pixels in the Bayer RGB color filter; the W pixel has a high sensitivity across the visible light waveband. An emerald green and yellow (EGY) signal is generated from the difference between the W signal and the sum of the RGB signals. This EGY signal mainly includes emerald green and yellow lights. These colors are difficult to reproduce accurately with the conventional simple linear matrix because their wavelengths fall in the valleys of the spectral sensitivity characteristics of the RGB pixels. A new linear matrix based on the EGY-RGB signal was developed. Using this simple matrix, highly accurate color processing with a large margin against sensitivity fluctuations and noise has been achieved.
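A minimal sketch of forming the EGY residual and applying a 3×4 linear color matrix to the [R, G, B, EGY] vector; the matrix entries below are made-up placeholders, not the paper's calibrated matrix:

```python
# Hedged sketch: EGY signal plus a linear 3x4 color-correction matrix.
import numpy as np

def egy_color_correct(r, g, b, w, M=None):
    egy = w - (r + g + b)                 # emerald-green/yellow residual
    if M is None:                         # placeholder coefficients only
        M = np.array([[ 1.1, -0.1,  0.0, 0.2],
                      [-0.1,  1.2, -0.1, 0.3],
                      [ 0.0, -0.2,  1.1, 0.1]])
    return M @ np.stack([r, g, b, egy])   # corrected (R', G', B')
```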
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Dong, E-mail: radon.han@gmail.com; Williamson, Jeffrey F.; Siebers, Jeffrey V.
2016-01-15
Purpose: To evaluate the accuracy and robustness of a simple, linear, separable, two-parameter model (basis vector model, BVM) in mapping proton stopping powers via dual energy computed tomography (DECT) imaging. Methods: The BVM assumes that photon cross sections (attenuation coefficients) of unknown materials are linear combinations of the corresponding radiological quantities of dissimilar basis substances (i.e., polystyrene, CaCl2 aqueous solution, and water). The authors have extended this approach to the estimation of electron density and mean excitation energy, which are required parameters for computing proton stopping powers via the Bethe–Bloch equation. The authors compared the stopping power estimation accuracy of the BVM with that of a nonlinear, nonseparable photon cross section Torikoshi parametric fit model (VCU tPFM) as implemented by the authors and by Yang et al. ["Theoretical variance analysis of single- and dual-energy computed tomography methods for calculating proton stopping power ratios of biological tissues," Phys. Med. Biol. 55, 1343–1362 (2010)]. Using an idealized monoenergetic DECT imaging model, proton ranges estimated by the BVM, VCU tPFM, and Yang tPFM were compared to International Commission on Radiation Units and Measurements (ICRU) published reference values. The robustness of the stopping power prediction accuracy to tissue composition variations was assessed for both the BVM and VCU tPFM. The sensitivity of accuracy to CT image uncertainty was also evaluated. Results: Based on the authors' idealized, error-free DECT imaging model, the root-mean-square error of BVM proton stopping power estimation for 175 MeV protons relative to ICRU reference values for 34 ICRU standard tissues is 0.20%, compared to 0.23% and 0.68% for the Yang and VCU tPFM models, respectively. The range estimation errors were less than 1 mm for both the BVM and Yang tPFM models. The BVM estimation accuracy is not dependent on tissue type and proton energy range. The BVM is slightly more vulnerable to CT image intensity uncertainties than the tPFM models. Both the BVM and tPFM prediction accuracies were robust to uncertainties of tissue composition and independent of the choice of reference values. This reported accuracy does not include the impacts of I-value uncertainties and imaging artifacts and may not be achievable on current clinical CT scanners. Conclusions: The proton stopping power estimation accuracy of the proposed linear, separable BVM model is comparable to or better than that of the nonseparable tPFM models proposed by other groups. In contrast to the tPFM, the BVM does not require iterative solving for effective atomic number and electron density at every voxel; this improves the computational efficiency of DECT imaging when iterative, model-based image reconstruction algorithms are used to minimize noise and systematic imaging artifacts of CT images.
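A minimal sketch of the basis-vector idea: recover mixing weights from two measured attenuation coefficients, then express electron density as the same linear combination of the basis materials' values. All numerical values below are placeholders, not the paper's calibration:

```python
# Hedged sketch: two-basis linear solve for BVM-style mixing weights.
import numpy as np

def bvm_weights(mu_low, mu_high, basis_low, basis_high):
    # solve [basis_low; basis_high] @ [c1, c2] = [mu_low, mu_high]
    A = np.array([basis_low, basis_high], dtype=float)
    return np.linalg.solve(A, np.array([mu_low, mu_high]))

c = bvm_weights(0.22, 0.18, basis_low=(0.20, 0.28), basis_high=(0.17, 0.21))
rho_e_basis = np.array([3.34e23, 3.87e23])   # placeholder electron densities
print(c @ rho_e_basis)                       # electron-density estimate
```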
Bin Ratio-Based Histogram Distances and Their Application to Image Classification.
Hu, Weiming; Xie, Nianhua; Hu, Ruiguang; Ling, Haibin; Chen, Qiang; Yan, Shuicheng; Maybank, Stephen
2014-12-01
Large variations in image background may cause partial matching and normalization problems for histogram-based representations, i.e., the histograms of the same category may have bins which are significantly different, and normalization may produce large changes in the differences between corresponding bins. In this paper, we deal with this problem by using the ratios between bin values of histograms, rather than bin values' differences which are used in the traditional histogram distances. We propose a bin ratio-based histogram distance (BRD), which is an intra-cross-bin distance, in contrast with previous bin-to-bin distances and cross-bin distances. The BRD is robust to partial matching and histogram normalization, and captures correlations between bins with only a linear computational complexity. We combine the BRD with the ℓ1 histogram distance and the χ² histogram distance to generate the ℓ1 BRD and the χ² BRD, respectively. These combinations exploit and benefit from the robustness of the BRD under partial matching and the robustness of the ℓ1 and χ² distances to small noise. We propose a method for assessing the robustness of histogram distances to partial matching. The BRDs and logistic regression-based histogram fusion are applied to image classification. The experimental results on synthetic data sets show the robustness of the BRDs to partial matching, and the experiments on seven benchmark data sets demonstrate promising results of the BRDs for image classification.
NASA Astrophysics Data System (ADS)
Su, Yonggang; Tang, Chen; Li, Biyuan; Lei, Zhenkun
2018-05-01
This paper presents a novel optical colour image watermarking scheme based on phase-truncated linear canonical transform (PT-LCT) and image decomposition (ID). In this proposed scheme, a PT-LCT-based asymmetric cryptography is designed to encode the colour watermark into a noise-like pattern, and an ID-based multilevel embedding method is constructed to embed the encoded colour watermark into a colour host image. The PT-LCT-based asymmetric cryptography, which can be optically implemented by double random phase encoding with a quadratic phase system, can provide a higher security to resist various common cryptographic attacks. And the ID-based multilevel embedding method, which can be digitally implemented by a computer, can make the information of the colour watermark disperse better in the colour host image. The proposed colour image watermarking scheme possesses high security and can achieve a higher robustness while preserving the watermark’s invisibility. The good performance of the proposed scheme has been demonstrated by extensive experiments and comparison with other relevant schemes.
Hu, J H; Wang, Y; Cahill, P T
1997-01-01
This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness; this approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further specified using a vector quantizer. The method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet-transform-based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
CT Image Sequence Restoration Based on Sparse and Low-Rank Decomposition
Gou, Shuiping; Wang, Yueyue; Wang, Zhilong; Peng, Yong; Zhang, Xiaopeng; Jiao, Licheng; Wu, Jianshe
2013-01-01
Blurry organ boundaries and soft tissue structures present a major challenge in biomedical image restoration. In this paper, we propose a low-rank decomposition-based method for computed tomography (CT) image sequence restoration, where the CT image sequence is decomposed into a sparse component and a low-rank component. A new point spread function is employed in a Wiener filter to efficiently remove blur in the sparse component, and Wiener filtering with a Gaussian PSF is used to recover the average image of the low-rank component. The recovered CT image sequence is then obtained by combining the recovered low-rank image with the recovered sparse image sequence. Our method achieves restoration results with higher contrast, sharper organ boundaries and richer soft tissue structure information, compared with existing CT image restoration methods. The robustness of our method was assessed with numerical experiments using three different low-rank models: Robust Principal Component Analysis (RPCA), Linearized Alternating Direction Method with Adaptive Penalty (LADMAP) and Go Decomposition (GoDec). Experimental results demonstrated that the RPCA model was the most suitable for CT images with small noise, whereas the GoDec model was the best for CT images with large noise. PMID:24023764
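A minimal sketch of RPCA (principal component pursuit) via singular-value thresholding, decomposing a matrix into low-rank plus sparse parts; the default λ and μ follow the common conventions, and the fixed iteration count replaces a proper stopping rule — this is not the paper's tuned RPCA/LADMAP/GoDec implementations:

```python
# Hedged sketch: inexact-ALM-style RPCA loop, M ≈ L (low-rank) + S (sparse).
import numpy as np

def shrink(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0)

def rpca(M, lam=None, mu=None, iters=100):
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n)) if lam is None else lam
    mu = M.size / (4 * np.abs(M).sum()) if mu is None else mu
    S, Y = np.zeros_like(M), np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * shrink(s, 1.0 / mu)) @ Vt      # singular-value thresholding
        S = shrink(M - L + Y / mu, lam / mu)    # soft-threshold the residual
        Y = Y + mu * (M - L - S)                # dual update
    return L, S
```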
High-resolution Linear Polarimetric Imaging for the Event Horizon Telescope
NASA Astrophysics Data System (ADS)
Chael, Andrew A.; Johnson, Michael D.; Narayan, Ramesh; Doeleman, Sheperd S.; Wardle, John F. C.; Bouman, Katherine L.
2016-09-01
Images of the linear polarizations of synchrotron radiation around active galactic nuclei (AGNs) highlight their projected magnetic field lines and provide key data for understanding the physics of accretion and outflow from supermassive black holes. The highest-resolution polarimetric images of AGNs are produced with Very Long Baseline Interferometry (VLBI). Because VLBI incompletely samples the Fourier transform of the source image, any image reconstruction that fills in unmeasured spatial frequencies will not be unique and reconstruction algorithms are required. In this paper, we explore some extensions of the Maximum Entropy Method (MEM) to linear polarimetric VLBI imaging. In contrast to previous work, our polarimetric MEM algorithm combines a Stokes I imager that only uses bispectrum measurements that are immune to atmospheric phase corruption, with a joint Stokes Q and U imager that operates on robust polarimetric ratios. We demonstrate the effectiveness of our technique on 7 and 3 mm wavelength quasar observations from the VLBA and simulated 1.3 mm Event Horizon Telescope observations of Sgr A* and M87. Consistent with past studies, we find that polarimetric MEM can produce superior resolution compared to the standard CLEAN algorithm, when imaging smooth and compact source distributions. As an imaging framework, MEM is highly adaptable, allowing a range of constraints on polarization structure. Polarimetric MEM is thus an attractive choice for image reconstruction with the EHT.
Linear discriminant analysis based on L1-norm maximization.
Zhong, Fujin; Zhang, Jiashu
2013-08-01
Linear discriminant analysis (LDA) is a well-known dimensionality reduction technique, which is widely used for many purposes. However, conventional LDA is sensitive to outliers because its objective function is based on the distance criterion using the L2-norm. This paper proposes a simple but effective robust LDA version based on L1-norm maximization, which learns a set of locally optimal projection vectors by maximizing the ratio of the L1-norm-based between-class dispersion to the L1-norm-based within-class dispersion. The proposed method is theoretically proved to be feasible and robust to outliers, while overcoming the singularity problem of the within-class scatter matrix in conventional LDA. Experiments on artificial datasets, standard classification datasets and three popular image databases demonstrate the efficacy of the proposed method.
Todorović, Dejan
2008-01-01
Every image of a scene produced in accord with the rules of linear perspective has an associated projection centre. Only if observed from that position does the image provide the stimulus which is equivalent to the one provided by the original scene. According to the perspective-transformation hypothesis, observing the image from other vantage points should result in specific transformations of the structure of the conveyed scene, whereas according to the vantage-point compensation hypothesis it should have little effect. Geometrical analyses illustrating the transformation theory are presented. An experiment is reported to confront the two theories. The results provide little support for the compensation theory and are generally in accord with the transformation theory, but also show systematic deviations from it, possibly due to cue conflict and asymmetry of visual angles.
Use of the Hotelling observer to optimize image reconstruction in digital breast tomosynthesis
Sánchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan
2015-01-01
We propose an implementation of the Hotelling observer that can be applied to the optimization of linear image reconstruction algorithms in digital breast tomosynthesis. The method is based on considering information within a specific region of interest, and it is applied to the optimization of algorithms for detectability of microcalcifications. Several linear algorithms are considered: simple back-projection, filtered back-projection, back-projection filtration, and Λ-tomography. The optimized algorithms are then evaluated through the reconstruction of phantom data. The method appears robust across algorithms and parameters and leads to the generation of algorithm implementations which subjectively appear optimized for the task of interest. PMID:26702408
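A minimal sketch of a Hotelling observer template and its detectability index computed from sample statistics over flattened ROI pixels; the synthetic-data setup and the small regularizer are our choices, not the paper's specific ROI construction:

```python
# Hedged sketch: Hotelling template w = K^{-1}(g1 - g0) and SNR.
import numpy as np

def hotelling_snr(signal_imgs, background_imgs):
    # rows = sample images, columns = flattened ROI pixels
    g1, g0 = np.mean(signal_imgs, 0), np.mean(background_imgs, 0)
    K = 0.5 * (np.cov(signal_imgs, rowvar=False)
               + np.cov(background_imgs, rowvar=False))
    w = np.linalg.solve(K + 1e-6 * np.eye(K.shape[0]), g1 - g0)  # template
    return np.sqrt((g1 - g0) @ w)    # Hotelling detectability index
```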
NASA Astrophysics Data System (ADS)
Brekhna, Brekhna; Mahmood, Arif; Zhou, Yuanfeng; Zhang, Caiming
2017-11-01
Superpixels have gradually become popular in computer vision and image processing applications. However, no comprehensive study has been performed to evaluate the robustness of superpixel algorithms in regard to common forms of noise in natural images. We evaluated the robustness of 11 recently proposed algorithms to different types of noise. The images were corrupted with various degrees of Gaussian blur, additive white Gaussian noise, and impulse noise that either made the object boundaries weak or added extra information to it. We performed a robustness analysis of simple linear iterative clustering (SLIC), Voronoi Cells (VCells), flooding-based superpixel generation (FCCS), bilateral geodesic distance (Bilateral-G), superpixel via geodesic distance (SSS-G), manifold SLIC (M-SLIC), Turbopixels, superpixels extracted via energy-driven sampling (SEEDS), lazy random walk (LRW), real-time superpixel segmentation by DBSCAN clustering, and video supervoxels using partially absorbing random walks (PARW) algorithms. The evaluation process was carried out both qualitatively and quantitatively. For quantitative performance comparison, we used achievable segmentation accuracy (ASA), compactness, under-segmentation error (USE), and boundary recall (BR) on the Berkeley image database. The results demonstrated that all algorithms suffered performance degradation due to noise. For Gaussian blur, Bilateral-G exhibited optimal results for ASA and USE measures, SLIC yielded optimal compactness, whereas FCCS and DBSCAN remained optimal for BR. For the case of additive Gaussian and impulse noises, FCCS exhibited optimal results for ASA, USE, and BR, whereas Bilateral-G remained a close competitor in ASA and USE for Gaussian noise only. Additionally, Turbopixel demonstrated optimal performance for compactness for both types of noise. Thus, no single algorithm was able to yield optimal results for all three types of noise across all performance measures. Conclusively, to solve real-world problems effectively, more robust superpixel algorithms must be developed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blais, AR; Dekaban, M; Lee, T-Y
2014-08-15
Quantitative analysis of dynamic positron emission tomography (PET) data usually involves minimizing a cost function with nonlinear regression, wherein the choice of starting parameter values and the presence of local minima affect the bias and variability of the estimated kinetic parameters. These nonlinear methods can also require lengthy computation time, making them unsuitable for use in clinical settings. Kinetic modeling of PET aims to estimate the rate parameter k3, which is the binding affinity of the tracer to a biological process of interest and is highly susceptible to noise inherent in PET image acquisition. We have developed linearized kinetic models for kinetic analysis of dynamic contrast enhanced computed tomography (DCE-CT)/PET imaging, including a 2-compartment model for DCE-CT and a 3-compartment model for PET. Use of kinetic parameters estimated from DCE-CT can stabilize the kinetic analysis of dynamic PET data, allowing for more robust estimation of k3. Furthermore, these linearized models are solved with a non-negative least squares algorithm, and together they provide other advantages, including: (1) a unique solution that does not require a choice of starting parameter values; (2) parameter estimates comparable in accuracy to those from nonlinear models; and (3) significantly reduced computation time. Our simulated data show that when blood volume and permeability are estimated with DCE-CT, the bias of k3 estimation with our linearized model is 1.97 ± 38.5% for 1,000 runs with a signal-to-noise ratio of 10. In summary, we have developed a computationally efficient technique for accurate estimation of k3 from noisy dynamic PET data.
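The computational appeal of the linearized formulation is that it reduces to a non-negative least squares problem with a unique solution; a sketch under the assumption that the linearized design matrix has already been built from the measured curves:

import numpy as np
from scipy.optimize import nnls

def fit_linearized_model(A, b):
    # A: hypothetical (n_time x n_params) design matrix from the linearized
    #    compartment model; b: measured tissue time-activity curve
    theta, residual_norm = nnls(A, b)  # unique solution, no starting values needed
    return theta, residual_norm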
QR code-based non-linear image encryption using Shearlet transform and spiral phase transform
NASA Astrophysics Data System (ADS)
Kumar, Ravi; Bhaduri, Basanta; Hennelly, Bryan
2018-02-01
In this paper, we propose a new quick response (QR) code-based non-linear technique for image encryption using the Shearlet transform (ST) and the spiral phase transform. The input image is first converted into a QR code and then scrambled using the Arnold transform. The scrambled image is then decomposed into five coefficients using the ST, and the first Shearlet coefficient, C1, is interchanged with a security key before performing the inverse ST. The output after the inverse ST is then modulated with a random phase mask and further spiral phase transformed to get the final encrypted image. The first coefficient, C1, is used as a private key for decryption. The sensitivity of the security keys is analysed in terms of correlation coefficient and peak signal-to-noise ratio. The robustness of the scheme is also checked against various attacks such as noise, occlusion and special attacks. Numerical simulation results are shown in support of the proposed technique, and an optoelectronic set-up for encryption is also proposed.
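The Arnold-transform scrambling step is a standard ingredient and easy to sketch for a square image (one common variant of the cat map is assumed here):

import numpy as np

def arnold_scramble(img, iterations=1):
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "the Arnold transform needs a square image"
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]  # cat map (x,y) -> (x+y, x+2y) mod n
        out = nxt
    return out

Descrambling applies the inverse map, or iterates the forward map up to the period of the transform for the given n.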
Graph-cut based discrete-valued image reconstruction.
Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañón, David; Ünlü, M Selim
2015-05-01
Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.
NASA Technical Reports Server (NTRS)
Lin, Qian; Allebach, Jan P.
1990-01-01
An adaptive vector linear minimum mean-squared error (LMMSE) filter for multichannel images with multiplicative noise is presented. It is shown theoretically that the mean-squared error in the filter output is reduced by making use of the correlation between image bands. The vector and conventional scalar LMMSE filters are applied to a three-band SIR-B SAR image, and their performance is compared. Based on a multiplicative noise model, the per-pel maximum likelihood classifier was derived. The authors extend this to the design of sequential and robust classifiers. These classifiers are also applied to the three-band SIR-B SAR image.
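For the scalar case, the LMMSE estimate under unit-mean multiplicative noise reduces to a Lee-type filter; a sketch in which the window size and noise variance are illustrative:

import numpy as np
from scipy.ndimage import uniform_filter

def lmmse_speckle_filter(y, noise_var=0.05, win=7):
    # Model: y = x * v with E[v] = 1 and var(v) = noise_var
    mean = uniform_filter(y, win)
    var_y = np.maximum(uniform_filter(y * y, win) - mean**2, 0.0)
    var_x = np.maximum((var_y - mean**2 * noise_var) / (1.0 + noise_var), 0.0)
    gain = np.where(var_y > 0, var_x / var_y, 0.0)
    return mean + gain * (y - mean)   # x_hat = y_bar + k (y - y_bar)

The vector filter of the paper generalizes the scalar gain to a matrix that exploits the covariance between bands, which is what lowers the output mean-squared error.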
Robust estimation for partially linear models with large-dimensional covariates
Zhu, LiPing; Li, RunZe; Cui, HengJian
2014-01-01
We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of o(n), where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart, which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures. PMID:24955087
Image distortion analysis using polynomial series expansion.
Baggenstoss, Paul M
2004-11-01
In this paper, we derive a technique for analysis of local distortions which affect data in real-world applications. In the paper, we focus on image data, specifically handwritten characters. Given a reference image and a distorted copy of it, the method is able to efficiently determine the rotations, translations, scaling, and any other distortions that have been applied. Because the method is robust, it is also able to estimate distortions for two unrelated images, thus determining the distortions that would be required to cause the two images to resemble each other. The approach is based on a polynomial series expansion using matrix powers of linear transformation matrices. The technique has applications in pattern recognition in the presence of distortions.
Nonlinear Fusion of Multispectral Citrus Fruit Image Data with Information Contents.
Li, Peilin; Lee, Sang-Heon; Hsu, Hung-Yao; Park, Jae-Sam
2017-01-13
The main issue of vision-based automatic harvesting manipulators is the difficulty of correct fruit identification in images under natural lighting conditions. Mostly, the solution has been based on a linear combination of color components in the multispectral images. However, the results have not reached a satisfactory level. To overcome this issue, this paper proposes a robust nonlinear fusion method to augment the original color image with the synchronized near infrared image. The two images are fused with the Daubechies wavelet transform (DWT) in a multiscale decomposition approach. With the DWT, the background noises are reduced and the necessary image features are enhanced by fusing the color contrast of the color components and the homogeneity of the near infrared (NIR) component. The resulting fused color image is classified with a C-means algorithm for reconstruction. The performance of the proposed approach is evaluated with the statistical F measure in comparison to some existing methods using linear combinations of color components. The results show that the fusion of information in different spectral components has the advantage of enhancing the image quality, therefore improving the classification accuracy in citrus fruit identification in natural lighting conditions.
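A minimal sketch of a Daubechies-wavelet fusion of one color channel with the NIR image; the max-magnitude detail rule below is a common generic choice, not necessarily the paper's exact contrast/homogeneity rule:

import numpy as np
import pywt

def fuse_channel_nir(channel, nir, wavelet="db4", level=3):
    c1 = pywt.wavedec2(channel, wavelet, level=level)
    c2 = pywt.wavedec2(nir, wavelet, level=level)
    fused = [0.5 * (c1[0] + c2[0])]                    # approximation bands: average
    for (h1, v1, d1), (h2, v2, d2) in zip(c1[1:], c2[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip((h1, v1, d1), (h2, v2, d2))))
    return pywt.waverec2(fused, wavelet)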
Robust optical sensors for safety critical automotive applications
NASA Astrophysics Data System (ADS)
De Locht, Cliff; De Knibber, Sven; Maddalena, Sam
2008-02-01
Optical sensors for the automotive industry need to be robust, high performing, and low cost. This paper focuses on the impact of automotive requirements on optical sensor design and packaging. Main strategies to lower optical sensor entry barriers in the automotive market include performing sensor calibration and tuning at the sensor manufacturer, providing on-chip test modes to guarantee functional integrity at operation, and choosing the right package technology. In conclusion, optical sensor applications are growing in the automotive market. Optical sensor robustness has matured to the level of safety-critical applications: Electrical Power Assisted Steering (EPAS) and Drive-by-Wire use systems based on optical linear arrays, while Automated Cruise Control (ACC), Lane Change Assist, and Driver Classification/Smart Airbag Deployment use systems based on camera imagers.
Cai, Ailong; Wang, Linyuan; Zhang, Hanming; Yan, Bin; Li, Lei; Xi, Xiaoqi; Li, Jianxin
2014-01-01
Linear scan computed tomography (CT) is a promising imaging configuration with high scanning efficiency, but its data set is under-sampled and angularly limited, making high-quality image reconstruction challenging. In this work, an edge guided total variation minimization reconstruction (EGTVM) algorithm is developed to deal with this problem. The proposed method is modeled on the combination of total variation (TV) regularization and an iterative edge detection strategy. In the proposed method, the edge weights of intermediate reconstructions are incorporated into the TV objective function. The optimization is efficiently solved by applying the alternating direction method of multipliers. A prudent and conservative edge detection strategy proposed in this paper can obtain the true edges while restricting the errors within an acceptable degree. Based on comparisons on both simulation studies and real CT data set reconstructions, EGTVM provides comparable or even better quality than the non-edge-guided reconstruction and the adaptive steepest descent-projection onto convex sets method. With the utilization of weighted alternating direction TV minimization and edge detection, EGTVM achieves fast and robust convergence and reconstructs high-quality images when applied to linear scan CT with under-sampled data sets.
NASA Astrophysics Data System (ADS)
McDonald, Michael C.; Kim, H. K.; Henry, J. R.; Cunningham, I. A.
2012-03-01
The detective quantum efficiency (DQE) is widely accepted as a primary measure of x-ray detector performance in the scientific community. A standard method for measuring the DQE, based on IEC 62220-1, requires the system to have a linear response, meaning that the detector output signals are proportional to the incident x-ray exposure. However, many systems have a non-linear response due to characteristics of the detector, or post-processing of the detector signals, that cannot be disabled and may involve unknown algorithms considered proprietary by the manufacturer. For these reasons, the DQE has not been considered a practical candidate for routine quality assurance testing in a clinical setting. In this article we describe a method that can be used to measure the DQE of both linear and non-linear systems that employ only linear image processing algorithms. The method was validated on a cesium-iodide-based flat panel system that simultaneously stores a raw (linear) and processed (non-linear) image for each exposure. It was found that the resulting DQE was equivalent to a conventional standards-compliant DQE within measurement precision, and that the gray-scale inversion and linear edge enhancement did not affect the DQE result. While not IEC 62220-1 compliant, the method may be adequate for QA programs.
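Once a linear (raw) signal is available, the DQE follows from the standard IEC-style combination of the presampled MTF and the normalized noise power spectrum; a sketch assuming those quantities have already been measured on a common frequency grid:

import numpy as np

def dqe(mtf, nnps, q):
    # mtf:  presampled MTF sampled on a spatial-frequency grid
    # nnps: noise power spectrum normalized by the squared mean signal, same grid
    # q:    incident photon fluence (photons per unit area) for the beam quality used
    return mtf**2 / (q * nnps)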
Wavefront Sensing for WFIRST with a Linear Optical Model
NASA Technical Reports Server (NTRS)
Jurling, Alden S.; Content, David A.
2012-01-01
In this paper we develop methods to use a linear optical model to capture the field dependence of wavefront aberrations in a nonlinear optimization-based phase retrieval algorithm for image-based wavefront sensing. The linear optical model is generated from a ray trace model of the system and allows the system state to be described in terms of mechanical alignment parameters rather than wavefront coefficients. This approach allows joint optimization over images taken at different field points and does not require separate convergence of phase retrieval at individual field points. Because the algorithm exploits field diversity, multiple defocused images per field point are not required for robustness. Furthermore, because it is possible to simultaneously fit images of many stars over the field, it is not necessary to use a fixed defocus to achieve adequate signal-to-noise ratio despite having images with high dynamic range. This allows high performance wavefront sensing using in-focus science data. We applied this technique in a simulation model based on the Wide Field Infrared Survey Telescope (WFIRST) Intermediate Design Reference Mission (IDRM) imager using a linear optical model with 25 field points. We demonstrate sub-thousandth-wave wavefront sensing accuracy in the presence of noise and moderate undersampling for both monochromatic and polychromatic images using 25 high-SNR target stars. Using these high-quality wavefront sensing results, we are able to generate upsampled point-spread functions (PSFs) and use them to determine PSF ellipticity to high accuracy in order to reduce the systematic impact of aberrations on the accuracy of galactic ellipticity determination for weak-lensing science.
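The core estimation step is linear: with the ray-trace-derived sensitivity matrix, alignment parameters follow from ordinary least squares over wavefront coefficients pooled across field points. A sketch with hypothetical variable names:

import numpy as np

def estimate_alignment(A, measured_coeffs):
    # A: sensitivity matrix mapping mechanical alignment parameters to
    #    wavefront coefficients stacked over all field points (from ray tracing)
    p, *_ = np.linalg.lstsq(A, measured_coeffs, rcond=None)
    return p

In the paper this linear model sits inside a nonlinear phase-retrieval optimization; the sketch only illustrates the parameterization by alignment state rather than by per-field wavefront coefficients.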
Mutual information as a measure of image quality for 3D dynamic lung imaging with EIT
Crabb, M G; Davidson, J L; Little, R; Wright, P; Morgan, A R; Miller, C A; Naish, J H; Parker, G J M; Kikinis, R; McCann, H; Lionheart, W R B
2014-01-01
We report on a pilot study of dynamic lung electrical impedance tomography (EIT) at the University of Manchester. Low-noise EIT data at 100 frames per second (fps) were obtained from healthy male subjects during controlled breathing, followed by magnetic resonance imaging (MRI) subsequently used for spatial validation of the EIT reconstruction. The torso surface in the MR image and electrode positions obtained using MRI fiducial markers informed the construction of a 3D finite element model extruded along the caudal-distal axis of the subject. Small changes in the boundary that occur during respiration were accounted for by incorporating the sensitivity with respect to boundary shape into a robust temporal difference reconstruction algorithm. EIT and MRI images were co-registered using the open source medical imaging software, 3D Slicer. A quantitative comparison of quality of different EIT reconstructions was achieved through calculation of the mutual information with a lung-segmented MR image. EIT reconstructions using a linear shape correction algorithm reduced boundary image artefacts, yielding better contrast of the lungs, and had 10% greater mutual information compared with a standard linear EIT reconstruction. PMID:24710978
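The histogram-based mutual information used for this comparison can be computed in a few lines; the bin count is an illustrative choice:

import numpy as np

def mutual_information(img_a, img_b, bins=64):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                   # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))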
Temporal assessment of radiomic features on clinical mammography in a high-risk population
NASA Astrophysics Data System (ADS)
Mendel, Kayla R.; Li, Hui; Lan, Li; Chan, Chun-Wai; King, Lauren M.; Tayob, Nabihah; Whitman, Gary; El-Zein, Randa; Bedrosian, Isabelle; Giger, Maryellen L.
2018-02-01
Extraction of high-dimensional quantitative data from medical images has become necessary in disease risk assessment, diagnostics and prognostics. Radiomic workflows for mammography typically involve a single medical image for each patient, although medical images may exist from multiple imaging exams, especially in screening protocols. Our study takes advantage of the availability of mammograms acquired over multiple years for the prediction of cancer onset. This study included 841 images from 328 patients who developed subsequent mammographic abnormalities, which were confirmed as either cancer (n=173) or non-cancer (n=155) through diagnostic core needle biopsy. Quantitative radiomic analysis was conducted on antecedent full-field digital mammograms (FFDMs) acquired a year or more prior to diagnostic biopsy. Analysis was limited to the breast contralateral to that in which the abnormality arose. Novel metrics were used to identify robust radiomic features. The most robust features were evaluated in the task of predicting future malignancies on a subset of 72 subjects (23 cancer cases and 49 non-cancer controls) with mammograms over multiple years. Using linear discriminant analysis, the robust radiomic features were merged into predictive signatures by: (i) using features from only the most recent contralateral mammogram, (ii) change in feature values between mammograms, and (iii) ratio of feature values over time, yielding AUCs of 0.57 (SE=0.07), 0.63 (SE=0.06), and 0.66 (SE=0.06), respectively. The AUCs for temporal radiomics (ratio) statistically differed from chance, suggesting that changes in radiomics over time may be critical for risk assessment. Overall, we found that our two-stage process of robustness assessment followed by performance evaluation served well in our investigation on the role of temporal radiomics in risk assessment.
Shift-invariant discrete wavelet transform analysis for retinal image classification.
Khademi, April; Krishnan, Sridhar
2007-12-01
This work involves retinal image classification, for which a novel analysis system was developed. From the compressed domain, the proposed scheme extracts textural features from wavelet coefficients, which describe the relative homogeneity of localized areas of the retinal images. Since the discrete wavelet transform (DWT) is shift-variant, a shift-invariant DWT was explored to ensure that a robust feature set was extracted. To combat the small database size, linear discriminant analysis classification was used with the leave-one-out method. 38 normal and 48 abnormal images (exudates, large drusens, fine drusens, choroidal neovascularization, central vein and artery occlusion, histoplasmosis, arteriosclerotic retinopathy, hemi-central retinal vein occlusion and more) were used, and a specificity of 79% and sensitivity of 85.4% were achieved (the average classification rate is 82.2%). The success of the system can be attributed to the highly robust feature set, which included translation-, scale- and semi-rotation-invariant features. Additionally, this technique is database independent since the features were specifically tuned to the pathologies of the human eye.
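A sketch of shift-invariant wavelet texture features via the stationary (undecimated) 2D transform; the wavelet, level, and energy statistic are illustrative choices:

import numpy as np
import pywt

def swt_texture_features(img, wavelet="db2", level=2):
    # pywt.swt2 requires image dimensions divisible by 2**level
    feats = []
    for _approx, (h, v, d) in pywt.swt2(img, wavelet, level=level):
        for band in (h, v, d):
            feats.append(float(np.mean(band**2)))  # subband energy as a homogeneity cue
    return np.asarray(feats)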
Global gray-level thresholding based on object size.
Ranefall, Petter; Wählby, Carolina
2016-04-01
In this article, we propose a fast and robust global gray-level thresholding method based on object size, where the selection of threshold level is based on recall and maximum precision with regard to objects within a given size interval. The method relies on the component tree representation, which can be computed in quasi-linear time. Feature-based segmentation is especially suitable for biomedical microscopy applications where objects often vary in number, but have limited variation in size. We show that for real images of cell nuclei and synthetic data sets mimicking fluorescent spots the proposed method is more robust than all standard global thresholding methods available for microscopy applications in ImageJ and CellProfiler. The proposed method, provided as ImageJ and CellProfiler plugins, is simple to use and the only required input is an interval of the expected object sizes. © 2016 International Society for Advancement of Cytometry.
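The selection idea can be illustrated with a brute-force stand-in for the component-tree computation (the tree gives the same answer in quasi-linear time; the loop below trades speed for brevity):

import numpy as np
from scipy import ndimage

def threshold_by_object_size(img, size_min, size_max, n_levels=128):
    # Pick the gray level that maximizes the number of connected components
    # whose pixel count falls inside [size_min, size_max].
    best_t, best_count = None, -1
    for t in np.linspace(img.min(), img.max(), n_levels):
        labels, n = ndimage.label(img > t)
        if n == 0:
            continue
        sizes = np.bincount(labels.ravel())[1:]    # skip the background label
        count = int(np.sum((sizes >= size_min) & (sizes <= size_max)))
        if count > best_count:
            best_t, best_count = t, count
    return best_t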
Wiesinger, Florian; Bylund, Mikael; Yang, Jaewon; Kaushik, Sandeep; Shanbhag, Dattesh; Ahn, Sangtae; Jonsson, Joakim H; Lundman, Josef A; Hope, Thomas; Nyholm, Tufve; Larson, Peder; Cozzini, Cristina
2018-02-18
To describe a method for converting Zero TE (ZTE) MR images into X-ray attenuation information in the form of pseudo-CT images and demonstrate its performance for (1) attenuation correction (AC) in PET/MR and (2) dose planning in MR-guided radiation therapy planning (RTP). Proton density-weighted ZTE images were acquired as input for MR-based pseudo-CT conversion, providing (1) efficient capture of short-lived bone signals, (2) flat soft-tissue contrast, and (3) fast and robust 3D MR imaging. After bias correction and normalization, the images were segmented into bone, soft-tissue, and air by means of thresholding and morphological refinements. Fixed Hounsfield replacement values were assigned for air (-1000 HU) and soft-tissue (+42 HU), whereas continuous linear mapping was used for bone. The obtained ZTE-derived pseudo-CT images accurately resembled the true CT images (i.e., Dice coefficient for bone overlap of 0.73 ± 0.08 and mean absolute error of 123 ± 25 HU evaluated over the whole head, including errors from residual registration mismatches in the neck and mouth regions). The linear bone mapping accounted for bone density variations. Averaged across five patients, ZTE-based AC demonstrated a PET error of -0.04 ± 1.68% relative to CT-based AC. Similarly, for RTP assessed in eight patients, the absolute dose difference over the target volume was found to be 0.23 ± 0.42%. The described method enables MR to pseudo-CT image conversion for the head in an accurate, robust, and fast manner without relying on anatomical prior knowledge. Potential applications include PET/MR-AC, and MR-guided RTP. © 2018 International Society for Magnetic Resonance in Medicine.
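The Hounsfield assignment itself is simple once the segmentation is done; a sketch in which the bone HU range and the inverted linear form (denser bone gives weaker ZTE signal) are illustrative assumptions:

import numpy as np

def pseudo_ct(zte, bone_mask, air_mask, hu_bone=(300.0, 1500.0)):
    hu = np.full(zte.shape, 42.0)                 # soft tissue: +42 HU
    hu[air_mask] = -1000.0                        # air: -1000 HU
    z = zte[bone_mask]
    z_norm = (z - z.min()) / max(float(z.max() - z.min()), 1e-9)
    lo, hi = hu_bone
    hu[bone_mask] = hi - z_norm * (hi - lo)       # continuous linear mapping for bone
    return hu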
An improved feature extraction algorithm based on KAZE for multi-spectral image
NASA Astrophysics Data System (ADS)
Yang, Jianping; Li, Jun
2018-02-01
Multi-spectral images contain abundant spectral information, which is widely used in fields such as resource exploration, meteorological observation, and modern military applications. Image preprocessing, such as image feature extraction and matching, is indispensable when dealing with multi-spectral remote sensing images. Although feature matching algorithms based on linear scale spaces, such as SIFT and SURF, are strongly robust, their local accuracy cannot be guaranteed. Therefore, this paper proposes an improved KAZE algorithm, based on a nonlinear scale space, to raise the number of features and to enhance the matching rate by using the adjusted-cosine vector. The experimental results show that the number of features and the matching rate of the improved KAZE are remarkably higher than those of the original KAZE algorithm.
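A baseline KAZE detection-and-matching pipeline is available in OpenCV; the sketch below uses L2 distance with a ratio test in place of the paper's adjusted-cosine similarity:

import cv2

def kaze_match(img1, img2, ratio=0.8):
    kaze = cv2.KAZE_create()                       # nonlinear-scale-space features
    kp1, des1 = kaze.detectAndCompute(img1, None)
    kp2, des2 = kaze.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]    # Lowe-style ratio test
    return kp1, kp2, good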
Robust Adaptive Data Encoding and Restoration
NASA Technical Reports Server (NTRS)
Park, Stephen K.; Rahman, Zia-ur; Halyo, Nesim
2000-01-01
This is the final report for a NASA cooperative agreement and covers the period from 01 October, 1997 to 11 April, 2000. The research during this period was performed in three primary, but related, areas. 1. Evaluation of integrated information adaptive imaging. 2. Improvements in memory utilization and performance of the multiscale retinex with color restoration (MSRCR). 3. Commencement of a theoretical study to evaluate the non-linear retinex image enhancement technique. The research resulted in several publications and an intellectual property disclosure to the NASA patent council in May, 1999.
Pedestrian Detection in Far-Infrared Daytime Images Using a Hierarchical Codebook of SURF
Besbes, Bassem; Rogozan, Alexandrina; Rus, Adela-Maria; Bensrhair, Abdelaziz; Broggi, Alberto
2015-01-01
One of the main challenges in intelligent vehicles concerns pedestrian detection for driving assistance. Recent experiments have shown that state-of-the-art descriptors provide better performances on the far-infrared (FIR) spectrum than on the visible one, even in daytime conditions, for pedestrian classification. In this paper, we propose a pedestrian detector with an on-board FIR camera. Our main contribution is the exploitation of the specific characteristics of FIR images to design a fast, scale-invariant and robust pedestrian detector. Our system consists of three modules, each based on speeded-up robust feature (SURF) matching. The first module generates regions-of-interest (ROI): since pedestrian shapes in FIR images may vary over large scales, but heads usually appear as bright regions, ROI are detected with a high recall rate using a hierarchical codebook of SURF features located in head regions. The second module consists of pedestrian full-body classification using an SVM. This module enhances the precision with low computational cost. In the third module, we combine the mean shift algorithm with inter-frame scale-invariant SURF feature tracking to enhance the robustness of our system. The experimental evaluation shows that our system outperforms, in the FIR domain, the state-of-the-art Haar-like Adaboost-cascade, histogram of oriented gradients (HOG)/linear SVM (linSVM) and MultiFtr pedestrian detectors, trained on the FIR images. PMID:25871724
Local facet approximation for image stitching
NASA Astrophysics Data System (ADS)
Li, Jing; Lai, Shiming; Liu, Yu; Wang, Zhengming; Zhang, Maojun
2018-01-01
Image stitching aims at eliminating multiview parallax and generating a seamless panorama given a set of input images. This paper proposes a local adaptive stitching method, which could achieve both accurate and robust image alignments across the whole panorama. A transformation estimation model is introduced by approximating the scene as a combination of neighboring facets. Then, the local adaptive stitching field is constructed using a series of linear systems of the facet parameters, which enables the parallax handling in three-dimensional space. We also provide a concise but effective global projectivity preserving technique that smoothly varies the transformations from local adaptive to global planar. The proposed model is capable of stitching both normal images and fisheye images. The efficiency of our method is quantitatively demonstrated in the comparative experiments on several challenging cases.
An embedded system for face classification in infrared video using sparse representation
NASA Astrophysics Data System (ADS)
Saavedra M., Antonio; Pezoa, Jorge E.; Zarkesh-Ha, Payman; Figueroa, Miguel
2017-09-01
We propose a platform for robust face recognition in Infrared (IR) images using Compressive Sensing (CS). In line with CS theory, the classification problem is solved using a sparse representation framework, where test images are modeled by means of a linear combination of the training set. Because the training set constitutes an over-complete dictionary, we identify new images by finding their sparsest representation based on the training set, using standard l1-minimization algorithms. Unlike conventional face-recognition algorithms, feature extraction is performed using random projections with a precomputed binary matrix, as proposed in the CS literature. This random sampling reduces the effects of noise and occlusions such as facial hair, eyeglasses, and disguises, which are notoriously challenging in IR images. Thus, the performance of our framework is robust to these noise and occlusion factors, achieving an average accuracy of approximately 90% when the UCHThermalFace database is used for training and testing purposes. We implemented our framework on a high-performance embedded digital system, where the computation of the sparse representation of IR images was performed by dedicated hardware using a deeply pipelined architecture on a Field-Programmable Gate Array (FPGA).
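The classification rule is the usual sparse-representation-based one: code the (randomly projected) test image over the training dictionary and pick the class with the smallest reconstruction residual. A sketch in which Lasso stands in for the l1-minimization solver:

import numpy as np
from sklearn.linear_model import Lasso

def src_classify(train, labels, test_vec, alpha=0.01):
    # train: (n_atoms, n_dims) randomly projected training images; labels: (n_atoms,)
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(train.T, test_vec)                   # columns of train.T are dictionary atoms
    x = coder.coef_
    residuals = {}
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)         # keep only class-c coefficients
        residuals[c] = np.linalg.norm(test_vec - train.T @ xc)
    return min(residuals, key=residuals.get)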
Geometry-based ensembles: toward a structural characterization of the classification boundary.
Pujol, Oriol; Masip, David
2009-06-01
This paper introduces a novel binary discriminative learning technique based on the approximation of the nonlinear decision boundary by a piecewise linear smooth additive model. The decision border is geometrically defined by means of the characterizing boundary points, i.e., points that belong to the optimal boundary under a certain notion of robustness. Based on these points, a set of locally robust linear classifiers is defined and assembled by means of a Tikhonov regularized optimization procedure in an additive model to create a final lambda-smooth decision rule. As a result, a very simple and robust classifier with a strong geometrical meaning and nonlinear behavior is obtained. The simplicity of the method allows its extension to cope with some of today's machine learning challenges, such as online learning, large-scale learning or parallelization, with linear computational complexity. We validate our approach on the UCI database, comparing with several state-of-the-art classification techniques. Finally, we apply our technique in online and large-scale scenarios and in six real-life computer vision and pattern recognition problems: gender recognition based on face images, intravascular ultrasound tissue classification, speed traffic sign detection, Chagas' disease myocardial damage severity detection, old musical scores clef classification, and action recognition using 3D accelerometer data from a wearable device. The results are promising and this paper opens a line of research that deserves further attention.
Defocusing effects of lensless ghost imaging and ghost diffraction with partially coherent sources
NASA Astrophysics Data System (ADS)
Zhou, Shuang-Xi; Sheng, Wei; Bi, Yu-Bo; Luo, Chun-Ling
2018-04-01
The defocusing effect is inevitable and degrades the image quality in the conventional optical imaging process significantly due to the close confinement of the imaging lens. Based on classical optical coherent theory and linear algebra, we develop a unified formula to describe the defocusing effects of both lensless ghost imaging (LGI) and lensless ghost diffraction (LGD) systems with a partially coherent source. Numerical examples are given to illustrate the influence of defocusing length on the quality of LGI and LGD. We find that the defocusing effects of the test and reference paths in the LGI or LGD systems are entirely different, while the LGD system is more robust against defocusing than the LGI system. Specifically, we find that the imaging process for LGD systems can be viewed as pinhole imaging, which may find applications in ultra-short-wave band imaging without imaging lenses, e.g. x-ray diffraction and γ-ray imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bender, Edward T.
Purpose: To develop a robust method for deriving dose-painting prescription functions using spatial information about the risk for disease recurrence. Methods: Spatial distributions of radiobiological model parameters are derived from distributions of recurrence risk after uniform irradiation. These model parameters are then used to derive optimal dose-painting prescription functions given a constant mean biologically effective dose. Results: An estimate for the optimal dose distribution can be derived based on spatial information about recurrence risk. Dose painting based on imaging markers that are moderately or poorly correlated with recurrence risk is predicted to potentially result in inferior disease control when compared to the same mean biologically effective dose delivered uniformly. A robust optimization approach may partially mitigate this issue. Conclusions: The methods described here can be used to derive an estimate for a robust, patient-specific prescription function for use in dose painting. Two approximate scaling relationships were observed: first, the optimal choice for the maximum dose differential when using either a linear or two-compartment prescription function is proportional to R, where R is the Pearson correlation coefficient between a given imaging marker and recurrence risk after uniform irradiation. Second, the predicted maximum possible gain in tumor control probability for any robust optimization technique is nearly proportional to the square of R.
Medical image enhancement using resolution synthesis
NASA Astrophysics Data System (ADS)
Wong, Tak-Shing; Bouman, Charles A.; Thibault, Jean-Baptiste; Sauer, Ken D.
2011-03-01
We introduce a post-processing approach to improve the quality of CT reconstructed images. The scheme is adapted from the resolution synthesis (RS) interpolation algorithm. In this approach, we consider the input image, scanned at a particular dose level, as a degraded version of a high quality image scanned at a high dose level. Image enhancement is achieved by predicting the high quality image by classification-based linear regression. To improve the robustness of our scheme, we also apply the minimum description length principle to determine the optimal number of predictors to use in the scheme, and ridge regression to regularize the design of the predictors. Experimental results show that our scheme is effective in reducing the noise in images reconstructed from filtered back projection without significant loss of image details. Alternatively, our scheme can also be applied to reduce dose while maintaining image quality at an acceptable level.
Geometry-aware multiscale image registration via OBBTree-based polyaffine log-demons.
Seiler, Christof; Pennec, Xavier; Reyes, Mauricio
2011-01-01
Non-linear image registration is an important tool in many areas of image analysis. For instance, in morphometric studies of a population of brains, free-form deformations between images are analyzed to describe the structural anatomical variability. Such a simple deformation model is justified by the absence of an easily expressible prior about the shape changes. Applying the same algorithms used in brain imaging to orthopedic images might not be optimal due to the difference in the underlying prior on the inter-subject deformations. In particular, using an un-informed deformation prior often leads to local minima far from the expected solution. To improve robustness and promote anatomically meaningful deformations, we propose a locally affine and geometry-aware registration algorithm that automatically adapts to the data. We build upon the log-domain demons algorithm and introduce a new type of OBBTree-based regularization in the registration with a natural multiscale structure. The regularization model is composed of a hierarchy of locally affine transformations via their logarithms. Experiments on mandibles show improved accuracy and robustness when the method is used to initialize the demons, and comparable performance in direct comparison with the demons despite significantly fewer degrees of freedom. This closes the gap between polyaffine and non-rigid registration and opens new ways to statistically analyze the registration results.
Meteor tracking via local pattern clustering in spatio-temporal domain
NASA Astrophysics Data System (ADS)
Kukal, Jaromír; Klimt, Martin; Švihlík, Jan; Fliegel, Karel
2016-09-01
Reliable meteor detection is one of the crucial disciplines in astronomy. A variety of imaging systems is used for meteor path reconstruction. The traditional approach is based on analysis of 2D image sequences obtained from a double-station video observation system. Precise localization of a meteor path is difficult due to atmospheric turbulence and other factors causing spatio-temporal fluctuations of the image background. The proposed technique performs non-linear preprocessing of image intensity using the Box-Cox transform, as recommended in our previous work. Both symmetric and asymmetric spatio-temporal differences are designed to be robust in the statistical sense. The resulting local patterns are processed by a data whitening technique, and the obtained vectors are classified via cluster analysis and a Self-Organizing Map (SOM).
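The preprocessing and differencing steps are easy to reproduce; a sketch assuming strictly positive frame intensities (shift the data first if zeros occur):

import numpy as np
from scipy import stats

def preprocess_frames(frames):
    # frames: float array (T, H, W); Box-Cox fitted over the whole sequence
    flat, lam = stats.boxcox(frames.ravel())
    bc = flat.reshape(frames.shape)
    # symmetric temporal difference highlights transient, meteor-like events
    diff = bc[1:-1] - 0.5 * (bc[:-2] + bc[2:])
    return bc, diff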
Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation.
Zana, F; Klein, J C
2001-01-01
This paper presents an algorithm based on mathematical morphology and curvature evaluation for the detection of vessel-like patterns in a noisy environment. Such patterns are very common in medical images. Vessel detection is interesting for the computation of parameters related to blood flow. Its tree-like geometry makes it a usable feature for registration between images that can be of a different nature. In order to define vessel-like patterns, segmentation is performed with respect to a precise model. We define a vessel as a bright pattern, piecewise connected, and locally linear; mathematical morphology is very well adapted to this description, but other patterns also fit it. In order to differentiate vessels from analogous background patterns, a cross-curvature evaluation is performed. Vessels are separated out as they have a specific Gaussian-like profile whose curvature varies smoothly along the vessel. The detection algorithm that derives directly from this modeling is based on four steps: (1) noise reduction; (2) linear pattern with Gaussian-like profile improvement; (3) cross-curvature evaluation; (4) linear filtering. We present its theoretical background and illustrate it on real images of various natures, then evaluate its robustness and its accuracy with respect to noise.
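The morphological part of the model can be sketched with openings by rotated linear elements followed by a top-hat; this is a loose illustration of steps (1)-(2), with element sizes as assumptions, and omits the curvature steps:

import numpy as np
from scipy.ndimage import grey_opening

def line_footprint(length, angle_deg):
    fp = np.zeros((length, length), dtype=bool)
    c, t = length // 2, np.deg2rad(angle_deg)
    for r in range(-c, c + 1):
        fp[c + int(round(r * np.sin(t))), c + int(round(r * np.cos(t)))] = True
    return fp

def enhance_vessels(img, length=15, n_angles=12, width=7):
    # supremum of openings keeps bright, locally linear patterns, removes small noise
    sup_open = np.max([grey_opening(img, footprint=line_footprint(length, a))
                       for a in np.linspace(0, 180, n_angles, endpoint=False)], axis=0)
    # top-hat against a wide opening isolates thin elongated (vessel-like) structures
    return sup_open - grey_opening(sup_open, footprint=np.ones((width, width), dtype=bool))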
Li, Zukui; Ding, Ran; Floudas, Christodoulos A.
2011-01-01
Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented. PMID:21935263
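For the simplest of these sets, the interval (box) set, the robust counterpart of an uncertain linear constraint is itself linear-representable; a sketch using cvxpy with illustrative data names:

import cvxpy as cp
import numpy as np

def robust_lp(c, a0, da, b, bound=10.0):
    # Constraint a^T x <= b must hold for every a with |a_j - a0_j| <= da_j,
    # whose worst case is a0^T x + da^T |x| <= b.
    x = cp.Variable(len(c))
    robust_con = a0 @ x + cp.sum(cp.multiply(da, cp.abs(x))) <= b
    box = [x >= -bound, x <= bound]                # illustrative bounds for well-posedness
    cp.Problem(cp.Minimize(c @ x), [robust_con] + box).solve()
    return x.value

Ellipsoidal and combined sets add norm terms of a scaled x to the same left-hand side, which is how the different uncertainty-set geometries show up in the counterpart formulations.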
NASA Astrophysics Data System (ADS)
Negahdar, Mohammadreza; Beymer, David; Syeda-Mahmood, Tanveer
2018-02-01
Deep Learning models such as Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance in 2D medical image analysis. In clinical practice, however, most analyzed and acquired medical data are formed of 3D volumes. In this paper, we present a fast and efficient 3D lung segmentation method based on V-net, a purely volumetric fully convolutional neural network. Our model is trained on chest CT images through volume-to-volume learning, which mitigates the overfitting problem on a limited number of annotated training data. Adopting a pre-processing step and training an objective function based on the Dice coefficient addresses the imbalance between the number of lung voxels and that of background voxels. We have leveraged the V-net model by using batch normalization for training, which enables us to use a higher learning rate and accelerates the training of the model. To address the inadequacy of training data and obtain better robustness, we augment the data by applying random linear and non-linear transformations. Experimental results on two challenging medical image datasets show that our proposed method achieved competitive results with a much faster speed.
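A common soft-Dice objective of the kind referred to above, sketched in PyTorch for sigmoid outputs:

import torch

def dice_loss(pred, target, eps=1.0e-6):
    # pred: probabilities (N, 1, D, H, W); target: binary masks, same shape
    dims = (1, 2, 3, 4)
    intersection = (pred * target).sum(dims)
    denom = pred.sum(dims) + target.sum(dims)
    dice = (2.0 * intersection + eps) / (denom + eps)
    return 1.0 - dice.mean()   # insensitive to the lung/background voxel imbalance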
Comby, G.
1996-10-01
The Ceramic Electron Multiplier (CEM) is a compact, robust, linear and fast multi-channel electron multiplier. The Multi Layer Ceramic Technique (MLCT) allows metallic dynodes to be built inside a compact ceramic block. The activation of the metallic dynodes enhances their secondary electron emission (SEE). The CEM can be used in multi-channel photomultipliers, multi-channel light intensifiers, ion detection, spectroscopy, analysis of time-of-flight events, particle detection, and Cherenkov imaging detectors.
NASA Astrophysics Data System (ADS)
Sun, Xiao-Dong; Ge, Zhong-Hui; Li, Zhen-Chun
2017-09-01
Although conventional reverse time migration can be perfectly applied to structural imaging, it lacks the capability of enabling detailed delineation of a lithological reservoir due to irregular illumination. To obtain reliable reflectivity of the subsurface, it is necessary to solve the imaging problem using inversion. Least-squares reverse time migration (LSRTM), also known as linearized reflectivity inversion, aims to obtain relatively high-resolution, amplitude-preserving imaging by including the inverse of the Hessian matrix. In practice, the conjugate gradient algorithm is proven to be an efficient iterative method for implementing LSRTM. The velocity gradient can be derived from a cross-correlation between observed data and simulated data, making LSRTM independent of the wavelet signature and thus more robust in practice. Tests on synthetic and marine data show that LSRTM has good potential for use in reservoir description and four-dimensional (4D) seismic imaging compared to traditional RTM and Fourier finite difference (FFD) migration. This paper investigates the first-order approximation of LSRTM, which is also known as the linear Born approximation. For more complex geological structures, however, a higher-order approximation should be considered to improve imaging quality.
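Structurally, LSRTM is a matrix-free linear inversion; a sketch with hypothetical operator wrappers (lsqr plays the role of the paper's conjugate-gradient scheme):

import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

def lsrtm(born_forward, rtm_adjoint, data, n_model, n_data, iters=20):
    # born_forward(m): linearized (Born) modeling; rtm_adjoint(d): reverse time
    # migration, the adjoint of the forward operator -- both assumed given
    L = LinearOperator((n_data, n_model), matvec=born_forward,
                       rmatvec=rtm_adjoint, dtype=float)
    return lsqr(L, data, iter_lim=iters)[0]        # reflectivity estimate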
Biologically-inspired data decorrelation for hyper-spectral imaging
NASA Astrophysics Data System (ADS)
Picon, Artzai; Ghita, Ovidiu; Rodriguez-Vaamonde, Sergio; Iriondo, Pedro Ma; Whelan, Paul F.
2011-12-01
Hyper-spectral data allows the construction of more robust statistical models to sample the material properties than the standard tri-chromatic color representation. However, because of the large dimensionality and complexity of the hyper-spectral data, the extraction of robust features (image descriptors) is not a trivial issue. Thus, to facilitate efficient feature extraction, decorrelation techniques are commonly applied to reduce the dimensionality of the hyper-spectral data with the aim of generating compact and highly discriminative image descriptors. Current methodologies for data decorrelation such as principal component analysis (PCA), linear discriminant analysis (LDA), wavelet decomposition (WD), or band selection methods require complex and subjective training procedures and in addition the compressed spectral information is not directly related to the physical (spectral) characteristics associated with the analyzed materials. The major objective of this article is to introduce and evaluate a new data decorrelation methodology using an approach that closely emulates the human vision. The proposed data decorrelation scheme has been employed to optimally minimize the amount of redundant information contained in the highly correlated hyper-spectral bands and has been comprehensively evaluated in the context of non-ferrous material classification.
Lee, Junhwa; Lee, Kyoung-Chan; Cho, Soojin
2017-01-01
The displacement responses of a civil engineering structure can provide important information regarding structural behaviors that help in assessing safety and serviceability. A displacement measurement using conventional devices, such as the linear variable differential transformer (LVDT), is challenging owing to issues related to inconvenient sensor installation that often requires additional temporary structures. A promising alternative is offered by computer vision, which typically provides a low-cost and non-contact displacement measurement that converts the movement of an object, mostly an attached marker, in the captured images into structural displacement. However, there is limited research on addressing light-induced measurement error caused by the inevitable sunlight in field-testing conditions. This study presents a computer vision-based displacement measurement approach tailored to a field-testing environment with enhanced robustness to strong sunlight. An image-processing algorithm with an adaptive region-of-interest (ROI) is proposed to reliably determine a marker’s location even when the marker is indistinct due to unfavorable light. The performance of the proposed system is experimentally validated in both laboratory-scale and field experiments. PMID:29019950
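The adaptive-ROI idea can be sketched with plain template matching, searching only near the last known marker position; the margin and correlation measure are illustrative choices:

import cv2
import numpy as np

def track_marker(frames, template, roi, margin=40):
    # frames: iterable of grayscale images; roi: (x, y, w, h) around the marker
    positions = []
    x, y, w, h = roi
    th, tw = template.shape
    for frame in frames:
        x0, y0 = max(x - margin, 0), max(y - margin, 0)
        search = frame[y0:y0 + h + 2 * margin, x0:x0 + w + 2 * margin]
        res = cv2.matchTemplate(search, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(res)
        x, y = x0 + max_loc[0], y0 + max_loc[1]    # ROI follows the marker
        positions.append((x + tw / 2.0, y + th / 2.0))
    return np.array(positions)

Conversion from pixel coordinates to physical displacement (via the known marker geometry) is omitted here.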
Effect of using different cover image quality to obtain robust selective embedding in steganography
NASA Astrophysics Data System (ADS)
Abdullah, Karwan Asaad; Al-Jawad, Naseer; Abdulla, Alan Anwer
2014-05-01
One of the common types of steganography is to conceal an image as a secret message in another image, normally called a cover image; the resulting image is called a stego image. The aim of this paper is to investigate the effect of using different cover image qualities, and also to analyse the use of different bit-planes in terms of robustness against well-known active attacks such as gamma, statistical filters, and linear spatial filters. The secret messages are embedded in a higher bit-plane, i.e. other than the Least Significant Bit (LSB), in order to resist active attacks. The embedding process is performed in three major steps: first, the embedding algorithm selectively identifies useful areas (blocks) for embedding based on their lighting conditions; second, it nominates the most useful blocks for embedding based on their entropy and average; third, it selects the right bit-plane for embedding. This kind of block selection makes the embedding process scatter the secret message(s) randomly around the cover image. Different tests have been performed for selecting a proper block size, and this is related to the nature of the used cover image. Our proposed method suggests a suitable embedding bit-plane as well as the right blocks for the embedding. Experimental results demonstrate that the image quality used for the cover images has an effect when the stego image is attacked by different active attacks. Although the secret messages are embedded in a higher bit-plane, they cannot be recognised visually within the stego image.
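Writing a secret bit into a chosen higher bit-plane of a selected block is a one-liner per pixel; a sketch with an illustrative plane index:

import numpy as np

def embed_bit(block, bit, plane=3):
    # block: uint8 pixels of one selected block; plane 0 would be the LSB
    mask = np.uint8(1 << plane)
    out = block & np.uint8(~mask)                  # clear the target plane
    if bit:
        out = out | mask                           # set it when the secret bit is 1
    return out

Higher planes survive mild filtering attacks better, at the cost of visibility, which is exactly why the entropy/average-based block selection described above matters.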
Local structure preserving sparse coding for infrared target recognition
Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lianfa
2017-01-01
Sparse coding performs well in image classification. However, robust target recognition requires a lot of comprehensive template images, and the sparse learning process is complex. We incorporate sparsity into a template matching concept to construct a local sparse structure matching (LSSM) model for general infrared target recognition. A local structure preserving sparse coding (LSPSc) formulation is proposed to simultaneously preserve the local sparse and structural information of objects. By adding a spatial local structure constraint into the classical sparse coding algorithm, LSPSc can improve the stability of sparse representation for targets and inhibit background interference in infrared images. Furthermore, a kernel LSPSc (K-LSPSc) formulation is proposed, which extends LSPSc to the kernel space to weaken the influence of the linear structure constraint in nonlinear natural data. Because of their anti-interference and fault-tolerant capabilities, both LSPSc- and K-LSPSc-based LSSM can implement target identification based on a simple template set, which just needs several images containing enough local sparse structures to learn a sufficient sparse structure dictionary of a target class. Specifically, this LSSM approach has stable performance in target detection under scene, shape and occlusion variations. High performance is demonstrated on several datasets, indicating robust infrared target recognition in diverse environments and imaging conditions. PMID:28323824
Initial evaluation of discrete orthogonal basis reconstruction of ECT images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, E.B.; Donohue, K.D.
1996-12-31
Discrete orthogonal basis restoration (DOBR) is a linear, non-iterative, and robust method for solving inverse problems for systems characterized by shift-variant transfer functions. This simulation study evaluates the feasibility of using DOBR for reconstructing emission computed tomographic (ECT) images. The imaging system model uses typical SPECT parameters and incorporates the effects of attenuation, spatially-variant PSF, and Poisson noise in the projection process. Sample reconstructions and statistical error analyses for a class of digital phantoms compare the DOBR performance for Hartley and Walsh basis functions. Test results confirm that DOBR with either basis set produces images with good statistical properties. No problems were encountered with reconstruction instability. The flexibility of the DOBR method and its consistent performance warrants further investigation of DOBR as a means of ECT image reconstruction.
Li, Zhaoying; Zhou, Wenjie; Liu, Hao
2016-09-01
This paper addresses the nonlinear robust tracking controller design problem for hypersonic vehicles. This problem is challenging due to strong coupling between the aerodynamics and the propulsion system, and the uncertainties involved in the vehicle dynamics including parametric uncertainties, unmodeled model uncertainties, and external disturbances. By utilizing the feedback linearization technique, a linear tracking error system is established with prescribed references. For the linear model, a robust controller is proposed based on the signal compensation theory to guarantee that the tracking error dynamics is robustly stable. Numerical simulation results are given to show the advantages of the proposed nonlinear robust control method, compared to the robust loop-shaping control approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
A sparse representation-based approach for copy-move image forgery detection in smooth regions
NASA Astrophysics Data System (ADS)
Abdessamad, Jalila; ElAdel, Asma; Zaied, Mourad
2017-03-01
Copy-move image forgery is the act of cloning a restricted region in the image and pasting it once or multiple times within that same image. This procedure intends to cover a certain feature, probably a person or an object, in the processed image, or to emphasize it through duplication. The consequences of this malicious operation can be unexpectedly harmful. Hence, the present paper proposes a new approach that automatically detects copy-move forgery (CMF). In particular, this work addresses a common open issue in the CMF research literature, namely detecting CMF within smooth areas. Indeed, the proposed approach represents the image blocks as a sparse linear combination of pre-learned bases (a mixture of texture- and color-wise small patches), which allows a robust description of smooth patches. The reported experimental results demonstrate the effectiveness of the proposed approach in identifying the forged regions in CM attacks.
Automatic multiresolution age-related macular degeneration detection from fundus images
NASA Astrophysics Data System (ADS)
Garnier, Mickaël.; Hurtut, Thomas; Ben Tahar, Houssem; Cheriet, Farida
2014-03-01
Age-related Macular Degeneration (AMD) is a leading cause of legal blindness. As the disease progresses, visual loss occurs rapidly, therefore early diagnosis is required for timely treatment. Automatic, fast and robust screening of this widespread disease should allow an early detection. Most of the automatic diagnosis methods in the literature are based on a complex segmentation of the drusen, targeting a specific symptom of the disease. In this paper, we present a preliminary study for AMD detection from color fundus photographs using a multiresolution texture analysis. We analyze the texture at several scales by using a wavelet decomposition in order to identify all the relevant texture patterns. Textural information is captured using both the sign and magnitude components of the completed model of Local Binary Patterns. An image is finally described with the textural pattern distributions of the wavelet coefficient images obtained at each level of decomposition. We use Linear Discriminant Analysis for feature dimension reduction, to avoid the curse of dimensionality problem, and for image classification. Experiments were conducted on a dataset containing 45 images (23 healthy and 22 diseased) of variable quality and captured by different cameras. Our method achieved a recognition rate of 93.3%, with a specificity of 95.5% and a sensitivity of 91.3%. This approach shows promising results at low cost, in agreement with medical experts, as well as robustness to both image quality and fundus camera model.
A rapid and robust gradient measurement technique using dynamic single-point imaging.
Jang, Hyungseok; McMillan, Alan B
2017-09-01
We propose a new gradient measurement technique based on dynamic single-point imaging (SPI), which allows simple, rapid, and robust measurement of k-space trajectory. To enable gradient measurement, we utilize the variable field-of-view (FOV) property of dynamic SPI, which is dependent on gradient shape. First, one-dimensional (1D) dynamic SPI data are acquired from a targeted gradient axis, and then relative FOV scaling factors between 1D images or k-spaces at varying encoding times are found. These relative scaling factors are the relative k-space position that can be used for image reconstruction. The gradient measurement technique also can be used to estimate the gradient impulse response function for reproducible gradient estimation as a linear time invariant system. The proposed measurement technique was used to improve reconstructed image quality in 3D ultrashort echo, 2D spiral, and multi-echo bipolar gradient-echo imaging. In multi-echo bipolar gradient-echo imaging, measurement of the k-space trajectory allowed the use of a ramp-sampled trajectory for improved acquisition speed (approximately 30%) and more accurate quantitative fat and water separation in a phantom. The proposed dynamic SPI-based method allows fast k-space trajectory measurement with a simple implementation and no additional hardware for improved image quality. Magn Reson Med 78:950-962, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
The performance of projective standardization for digital subtraction radiography.
Mol, André; Dunn, Stanley M
2003-09-01
We sought to test the performance and robustness of projective standardization in preserving invariant properties of subtraction images in the presence of irreversible projection errors. Study design: Twenty bone chips (1-10 mg each) were placed on dentate dry mandibles. Follow-up images were obtained without the bone chips, and irreversible projection errors of up to 6 degrees were introduced. Digitized image intensities were normalized, and follow-up images were geometrically reconstructed by 2 operators using anatomical and fiduciary landmarks. Subtraction images were analyzed by 3 observers. Regression analysis revealed a linear relationship between radiographic estimates of mineral loss and actual mineral loss (R(2) = 0.99; P <.05). The effect of projection error was not significant (general linear model [GLM]: P >.05). There was no difference between the radiographic estimates from images standardized with anatomical landmarks and those standardized with fiduciary landmarks (Wilcoxon signed rank test: P >.05). Operator variability was low for image analysis alone (R(2) = 0.99; P <.05), as well as for the entire procedure (R(2) = 0.98; P <.05). The predicted detection limit was smaller than 1 mg. Subtraction images registered by projective standardization yield estimates of osseous change that are invariant to irreversible projection errors of up to 6 degrees. Within these limits, operator precision is high and anatomical landmarks can be used to establish correspondence.
O'Neill, William; Penn, Richard; Werner, Michael; Thomas, Justin
2015-06-01
Estimation of stochastic process models from data is a common application of time series analysis methods. Such system identification processes are often cast as hypothesis testing exercises whose intent is to estimate model parameters and test them for statistical significance. Ordinary least squares (OLS) regression and the Levenberg-Marquardt algorithm (LMA) have proven invaluable computational tools for models being described by non-homogeneous, linear, stationary, ordinary differential equations. In this paper we extend stochastic model identification to linear, stationary, partial differential equations in two independent variables (2D) and show that OLS and LMA apply equally well to these systems. The method employs an original nonparametric statistic as a test for the significance of estimated parameters. We show gray scale and color images are special cases of 2D systems satisfying a particular autoregressive partial difference equation which estimates an analogous partial differential equation. Several applications to medical image modeling and classification illustrate the method by correctly classifying demented and normal OLS models of axial magnetic resonance brain scans according to subject Mini Mental State Exam (MMSE) scores. Comparison with 13 image classifiers from the literature indicates our classifier is at least 14 times faster than any of them and has a classification accuracy better than all but one. Our modeling method applies to any linear, stationary, partial differential equation and the method is readily extended to 3D whole-organ systems. Further, in addition to being a robust image classifier, estimated image models offer insights into which parameters carry the most diagnostic image information and thereby suggest finer divisions could be made within a class. Image models can be estimated in milliseconds which translate to whole-organ models in seconds; such runtimes could make real-time medicine and surgery modeling possible.
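To make the image-as-2D-system idea concrete, the following minimal sketch fits a first-order causal autoregressive neighborhood by OLS; the model order is an assumption here, and the paper's nonparametric significance test and LMA refinement are not reproduced.

```python
# Fit I[i,j] ~ a*I[i-1,j] + b*I[i,j-1] + c*I[i-1,j-1] by ordinary least squares.
import numpy as np

def fit_ar2d(img):
    y = img[1:, 1:].ravel()
    X = np.stack([img[:-1, 1:].ravel(),    # north neighbor
                  img[1:, :-1].ravel(),    # west neighbor
                  img[:-1, :-1].ravel()],  # north-west neighbor
                 axis=1)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # (a, b, c) serve as a compact image signature

rng = np.random.default_rng(0)
test = rng.standard_normal((64, 64)).cumsum(0).cumsum(1)  # smooth test image
print(fit_ar2d(test))
```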
FRIT characterized hierarchical kernel memory arrangement for multiband palmprint recognition
NASA Astrophysics Data System (ADS)
Kisku, Dakshina R.; Gupta, Phalguni; Sing, Jamuna K.
2015-10-01
In this paper, we present a hierarchical kernel associative memory (H-KAM) based computational model with Finite Ridgelet Transform (FRIT) representation for multispectral palmprint recognition. To characterize a multispectral palmprint image, the Finite Ridgelet Transform is used to achieve a very compact and distinctive representation of linear singularities while also capturing the singularities along lines and edges. The proposed system makes use of the Finite Ridgelet Transform to represent the multispectral palmprint image, which is then modeled by Kernel Associative Memories. Finally, the recognition scheme is thoroughly tested on the benchmark CASIA multispectral palmprint database. For recognition purposes, a Bayesian classifier is used. The experimental results exhibit the robustness of the proposed system under different wavelengths of palm images.
An integration of minimum local feature representation methods to recognize large variation of foods
NASA Astrophysics Data System (ADS)
Razali, Mohd Norhisham bin; Manshor, Noridayu; Halin, Alfian Abdul; Mustapha, Norwati; Yaakob, Razali
2017-10-01
Local invariant features have shown to be successful in describing object appearances for image classification tasks. Such features are robust towards occlusion and clutter and are also invariant against scale and orientation changes. This makes them suitable for classification tasks with little inter-class similarity and large intra-class difference. In this paper, we propose an integrated representation of the Speeded-Up Robust Feature (SURF) and Scale Invariant Feature Transform (SIFT) descriptors, using late fusion strategy. The proposed representation is used for food recognition from a dataset of food images with complex appearance variations. The Bag of Features (BOF) approach is employed to enhance the discriminative ability of the local features. Firstly, the individual local features are extracted to construct two kinds of visual vocabularies, representing SURF and SIFT. The visual vocabularies are then concatenated and fed into a Linear Support Vector Machine (SVM) to classify the respective food categories. Experimental results demonstrate impressive overall recognition at 82.38% classification accuracy based on the challenging UEC-Food100 dataset.
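As an illustration, the sketch below implements one half of such a bag-of-features pipeline with SIFT only (SURF needs the non-free OpenCV build; its vocabulary would be built the same way and the two histograms concatenated before the linear SVM). The vocabulary size and the names train_imgs/train_labels are assumptions.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def bof_histogram(desc, vocab):
    """Normalized histogram of visual-word assignments for one image."""
    words = vocab.predict(desc.astype(np.float64))
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

sift = cv2.SIFT_create()
# Assumed data: train_imgs (grayscale uint8 arrays) and train_labels.
# all_desc = np.vstack([sift.detectAndCompute(im, None)[1] for im in train_imgs])
# vocab = KMeans(n_clusters=500, n_init=4).fit(all_desc)
# X = np.stack([bof_histogram(sift.detectAndCompute(im, None)[1], vocab)
#               for im in train_imgs])
# clf = LinearSVC().fit(X, train_labels)
```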
Assessing the quality of restored images in optical long-baseline interferometry
NASA Astrophysics Data System (ADS)
Gomes, Nuno; Garcia, Paulo J. V.; Thiébaut, Éric
2017-03-01
Assessing the quality of aperture synthesis maps is relevant for benchmarking image reconstruction algorithms, for the scientific exploitation of data from optical long-baseline interferometers, and for the design/upgrade of new/existing interferometric imaging facilities. Although metrics have been proposed in these contexts, no systematic study has been conducted on the selection of a robust metric for quality assessment. This article addresses the question: what is the best metric to assess the quality of a reconstructed image? It starts by considering several metrics and selecting a few based on general properties. Then, a variety of image reconstruction cases are considered. The observational scenarios are phase closure and phase referencing at the Very Large Telescope Interferometer (VLTI), for a combination of two, three, four and six telescopes. End-to-end image reconstruction is accomplished with the MIRA software, and several merit functions are put to test. It is found that convolution by an effective point spread function is required for proper image quality assessment. The effective angular resolution of the images is superior to naive expectation based on the maximum frequency sampled by the array. This is due to the prior information used in the aperture synthesis algorithm and to the nature of the objects considered. The ℓ1-norm is the most robust of all considered metrics because, being linear, it is less sensitive to image smoothing by high regularization levels. For the cases considered, this metric allows the implementation of automatic quality assessment of reconstructed images, with a performance similar to human selection.
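A minimal sketch of the recommended procedure, with a Gaussian standing in for the effective point spread function (its width is an assumption), could look like this:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def l1_quality(recon, truth, psf_sigma=1.5):
    """l1 distance between flux-normalized, PSF-convolved images (lower is better)."""
    r = gaussian_filter(recon.astype(float), psf_sigma)
    t = gaussian_filter(truth.astype(float), psf_sigma)
    r /= r.sum()
    t /= t.sum()
    return np.abs(r - t).sum()
```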
NASA Astrophysics Data System (ADS)
Li, Jun; Song, Minghui; Peng, Yuanxi
2018-03-01
Current infrared and visible image fusion methods do not achieve adequate information extraction, i.e., they cannot extract the target information from infrared images while retaining the background information from visible images. Moreover, most of them have high complexity and are time-consuming. This paper proposes an efficient image fusion framework for infrared and visible images on the basis of robust principal component analysis (RPCA) and compressed sensing (CS). The novel framework consists of three phases. First, RPCA decomposition is applied to the infrared and visible images to obtain their sparse and low-rank components, which represent the salient features and background information of the images, respectively. Second, the sparse and low-rank coefficients are fused by different strategies. On the one hand, the measurements of the sparse coefficients are obtained by the random Gaussian matrix, and they are then fused by the standard deviation (SD) based fusion rule. Next, the fused sparse component is obtained by reconstructing the result of the fused measurement using the fast continuous linearized augmented Lagrangian algorithm (FCLALM). On the other hand, the low-rank coefficients are fused using the max-absolute rule. Subsequently, the fused image is superposed by the fused sparse and low-rank components. For comparison, several popular fusion algorithms are tested experimentally. By comparing the fused results subjectively and objectively, we find that the proposed framework can extract the infrared targets while retaining the background information in the visible images. Thus, it exhibits state-of-the-art performance in terms of both fusion quality and computational efficiency.
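For illustration, the RPCA decomposition step can be sketched as a compact principal component pursuit loop with the conventional default parameters of Candès et al.; the paper's FCLALM solver and the compressed-sensing measurement and fusion stages are not reproduced here.

```python
import numpy as np

def shrink(x, tau):
    """Soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def rpca(D, n_iter=200):
    """Split an image matrix D into low-rank L (background) + sparse S (targets)."""
    lam = 1.0 / np.sqrt(max(D.shape))
    mu = 0.25 * D.size / (np.abs(D).sum() + 1e-12)
    L, S, Y = (np.zeros_like(D) for _ in range(3))
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * shrink(s, 1.0 / mu)) @ Vt       # singular value thresholding
        S = shrink(D - L + Y / mu, lam / mu)     # sparse update
        Y += mu * (D - L - S)                    # dual ascent
    return L, S
```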
Comparison of interpolation functions to improve a rebinning-free CT-reconstruction algorithm.
de las Heras, Hugo; Tischenko, Oleg; Xu, Yuan; Hoeschen, Christoph
2008-01-01
The robust algorithm OPED for the reconstruction of images from Radon data has recently been developed. It reconstructs an image from parallel data within a special scanning geometry that does not need rebinning but only a simple re-ordering, so that the acquired fan data can be used directly for the reconstruction. However, if the number of rays per fan view is increased, empty cells appear in the sinogram. These cells need to be filled by interpolation before the reconstruction can be carried out. The present paper analyzes linear interpolation, cubic splines and parametric (or "damped") splines for the interpolation task. The reconstruction accuracy in the resulting images was measured by the Normalized Mean Square Error (NMSE), the Hilbert Angle, and the Mean Relative Error. The spatial resolution was measured by the Modulation Transfer Function (MTF). Cubic splines were confirmed to be the method of choice. The reconstructed images resulting from cubic spline interpolation show a significantly lower NMSE than the ones from linear interpolation and have the largest MTF for all frequencies. Parametric splines proved to be advantageous only for small sinograms (below 50 fan views).
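The gap-filling comparison can be sketched per fan view as below (damped/parametric splines omitted); valid_mask marks the cells actually measured, and the row is assumed to be a float array.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def fill_row(row, valid_mask):
    """Fill the empty cells of one sinogram row by cubic-spline and linear interpolation."""
    x = np.flatnonzero(valid_mask)
    xm = np.flatnonzero(~valid_mask)
    cubic = row.copy()
    cubic[xm] = CubicSpline(x, row[x])(xm)
    lin = row.copy()
    lin[xm] = np.interp(xm, x, row[x])
    return cubic, lin
```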
Restoration of retinal images with space-variant blur.
Marrugo, Andrés G; Millán, María S; Sorel, Michal; Sroubek, Filip
2014-01-01
Retinal images are essential clinical resources for the diagnosis of retinopathy and many other ocular diseases. Because of improper acquisition conditions or inherent optical aberrations in the eye, the images are often degraded with blur. In many common cases, the blur varies across the field of view. Most image deblurring algorithms assume a space-invariant blur, which fails in the presence of space-variant (SV) blur. In this work, we propose an innovative strategy for the restoration of retinal images in which we consider the blur to be both unknown and SV. We model the blur by a linear operation interpreted as a convolution with a point-spread function (PSF) that changes with the position in the image. To achieve an artifact-free restoration, we propose a framework for a robust estimation of the SV PSF based on an eye-domain knowledge strategy. The restoration method was tested on artificially and naturally degraded retinal images. The results show an important enhancement, significant enough to leverage the images' clinical use.
Sequential and parallel image restoration: neural network implementations.
Figueiredo, M T; Leitao, J N
1994-01-01
Sequential and parallel image restoration algorithms and their implementations on neural networks are proposed. For images degraded by linear blur and contaminated by additive white Gaussian noise, maximum a posteriori (MAP) estimation and regularization theory lead to the same high dimension convex optimization problem. The commonly adopted strategy (in using neural networks for image restoration) is to map the objective function of the optimization problem into the energy of a predefined network, taking advantage of its energy minimization properties. Departing from this approach, we propose neural implementations of iterative minimization algorithms which are first proved to converge. The developed schemes are based on modified Hopfield (1985) networks of graded elements, with both sequential and parallel updating schedules. An algorithm based on a fully standard Hopfield network (binary elements and zero autoconnections) is also considered. Robustness with respect to finite numerical precision is studied, and examples with real images are presented.
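The shared convex problem can be illustrated with a plain gradient-descent sketch of the objective ||Hx - y||^2 + lam*||∇x||^2; the mapping of such iterations onto Hopfield-type networks and the convergence proofs are the paper's contribution and are not reproduced here. The blur kernel is assumed known and normalized.

```python
import numpy as np
from scipy.ndimage import convolve, laplace

def map_restore(y, psf, lam=0.01, step=0.1, n_iter=300):
    """Iterative minimization of the MAP/regularization objective."""
    x = y.astype(float).copy()
    psf_t = psf[::-1, ::-1]                  # adjoint (flipped) blur kernel
    for _ in range(n_iter):
        resid = convolve(x, psf) - y
        grad = convolve(resid, psf_t) - lam * laplace(x)
        x -= step * grad
    return x
```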
Control design for robust stability in linear regulators: Application to aerospace flight control
NASA Technical Reports Server (NTRS)
Yedavalli, R. K.
1986-01-01
Time domain stability robustness analysis and design for linear multivariable uncertain systems with bounded uncertainties is the central theme of the research. After reviewing the recently developed upper bounds on the linear elemental (structured), time varying perturbation of an asymptotically stable linear time invariant regulator, it is shown that it is possible to further improve these bounds by employing state transformations. Then, introducing a quantitative measure called the stability robustness index, a state feedback control design algorithm is presented for a general linear regulator problem and then specialized to the case of modal systems as well as matched systems. The extension of the algorithm to stochastic systems with a Kalman filter as the state estimator is presented. Finally, an algorithm for robust dynamic compensator design is presented using a Parameter Optimization (PO) procedure. Applications in aircraft control and flexible structure control are presented along with a comparison with other existing methods.
A comparative robustness evaluation of feedforward neurofilters
NASA Technical Reports Server (NTRS)
Troudet, Terry; Merrill, Walter
1993-01-01
A comparative performance and robustness analysis is provided for feedforward neurofilters trained with back propagation to filter additive white noise. The signals used in this analysis are simulated pitch rate responses to typical pilot command inputs for a modern fighter aircraft model. Various configurations of nonlinear and linear neurofilters are trained to estimate exact signal values from input sequences of noisy sampled signal values. In this application, nonlinear neurofiltering is found to be more efficient than linear neurofiltering in removing the noise from responses of the nominal vehicle model, whereas linear neurofiltering is found to be more robust in the presence of changes in the vehicle dynamics. The possibility of enhancing neurofiltering through hybrid architectures based on linear and nonlinear neuroprocessing is therefore suggested as a way of taking advantage of the robustness of linear neurofiltering, while maintaining the nominal performance advantage of nonlinear neurofiltering.
NASA Astrophysics Data System (ADS)
Samaille, T.; Colliot, O.; Cuingnet, R.; Jouvent, E.; Chabriat, H.; Dormont, D.; Chupin, M.
2012-02-01
White matter hyperintensities (WMH), commonly seen on FLAIR images in elderly people, are a risk factor for dementia onset and have been associated with motor and cognitive deficits. We present here a method to fully automatically segment WMH from T1 and FLAIR images. Iterative steps of nonlinear diffusion followed by watershed segmentation were applied on FLAIR images until convergence. The diffusivity function and associated contrast parameter were carefully designed to adapt to WMH segmentation. This resulted in piecewise constant images with enhanced contrast between lesions and surrounding tissues. Selection of WMH areas was based on two characteristics: 1) a threshold automatically computed for intensity selection, and 2) the main location of the areas in white matter. False positive areas were finally removed based on their proximity to the cerebrospinal fluid/grey matter interface. Evaluation was performed on 67 patients: 24 with amnestic mild cognitive impairment (MCI), from five different centres, and 43 with Cerebral Autosomal Dominant Arteriopathy with Subcortical Infarcts and Leukoaraiosis (CADASIL) acquired in a single centre. Results showed excellent volume agreement with manual delineation (Pearson coefficient: r=0.97, p<0.001) and substantial spatial correspondence (Similarity Index: 72%+/-16%). Our method appeared robust to acquisition differences across the centres as well as to pathological variability.
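The diffusion stage can be sketched with the classical Perona-Malik scheme below; the paper's carefully designed diffusivity and contrast parameter are replaced here by the standard exponential edge-stopping function with an assumed kappa (intensities scaled to [0, 1]), and the watershed and selection stages are omitted.

```python
import numpy as np

def nonlinear_diffusion(img, kappa=0.1, dt=0.2, n_iter=50):
    """Explicit 4-neighbor nonlinear diffusion (Perona-Malik style)."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # small diffusion across strong edges
    for _ in range(n_iter):
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```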
Mohammadi, Siawoosh; Hutton, Chloe; Nagy, Zoltan; Josephs, Oliver; Weiskopf, Nikolaus
2013-01-01
Diffusion tensor imaging is widely used in research and clinical applications, but this modality is highly sensitive to artefacts. We developed an easy-to-implement extension of the original diffusion tensor model to account for physiological noise in diffusion tensor imaging using measures of peripheral physiology (pulse and respiration), the so-called extended tensor model. Within the framework of the extended tensor model two types of regressors, which respectively modeled small (linear) and strong (nonlinear) variations in the diffusion signal, were derived from peripheral measures. We tested the performance of four extended tensor models with different physiological noise regressors on nongated and gated diffusion tensor imaging data, and compared it to an established data-driven robust fitting method. In the brainstem and cerebellum the extended tensor models reduced the noise in the tensor-fit by up to 23% in accordance with previous studies on physiological noise. The extended tensor model addresses both large-amplitude outliers and small-amplitude signal-changes. The framework of the extended tensor model also facilitates further investigation into physiological noise in diffusion tensor imaging. The proposed extended tensor model can be readily combined with other artefact correction methods such as robust fitting and eddy current correction. PMID:22936599
Joint graph cut and relative fuzzy connectedness image segmentation algorithm.
Ciesielski, Krzysztof Chris; Miranda, Paulo A V; Falcão, Alexandre X; Udupa, Jayaram K
2013-12-01
We introduce an image segmentation algorithm, called GC(sum)(max), which combines, in a novel manner, the strengths of two popular algorithms: Relative Fuzzy Connectedness (RFC) and (standard) Graph Cut (GC). We show, both theoretically and experimentally, that GC(sum)(max) preserves the robustness of RFC with respect to the seed choice (thus avoiding the "shrinking problem" of GC), while keeping GC's stronger control over the problem of "leaking through poorly defined boundary segments." The analysis of GC(sum)(max) is greatly facilitated by our recent theoretical results that RFC can be described within the framework of Generalized GC (GGC) segmentation algorithms. In our implementation of GC(sum)(max) we use, as a subroutine, a version of the RFC algorithm (based on the Image Forest Transform) that runs (provably) in linear time with respect to the image size. This results in GC(sum)(max) running in a time close to linear. Experimental comparison of GC(sum)(max) to GC, an iterative version of RFC (IRFC), and power watershed (PW), based on a variety of medical and non-medical images, indicates superior accuracy performance of GC(sum)(max) over these other methods, resulting in a rank ordering of GC(sum)(max)>PW∼IRFC>GC. Copyright © 2013 Elsevier B.V. All rights reserved.
Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y
2018-03-08
Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites.
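A hedged sketch of the robust coplanarity idea is shown below, using scikit-learn's minimum covariance determinant estimator so that outlier points (dust, moving objects) do not skew the fitted normal; the flatness threshold is illustrative, not the paper's value.

```python
import numpy as np
from sklearn.covariance import MinCovDet

def robust_plane(points, flat_tol=0.01):
    """points: (n, 3) array; returns (normal, centroid, is_planar)."""
    mcd = MinCovDet().fit(points)
    evals, evecs = np.linalg.eigh(mcd.covariance_)  # ascending eigenvalues
    normal = evecs[:, 0]                            # smallest-variance direction
    is_planar = evals[0] / evals.sum() < flat_tol
    return normal, mcd.location_, is_planar
```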
Vavadi, Hamed; Zhu, Quing
2016-01-01
Imaging-guided near infrared diffuse optical tomography (DOT) has demonstrated great potential as an adjunct modality for differentiation of malignant and benign breast lesions and for monitoring treatment response of breast cancers. However, diffused light measurements are sensitive to artifacts caused by outliers and errors in measurements due to probe-tissue coupling, patient and probe motions, and tissue heterogeneity. In general, pre-processing of the measurements is needed by experienced users to manually remove these outliers and therefore reduce imaging artifacts. An automated method of outlier removal, data selection, and filtering for diffuse optical tomography is introduced in this manuscript. This method consists of multiple steps to first combine several data sets collected from the same patient at the contralateral normal breast and form a single robust reference data set using statistical tests and linear fitting of the measurements. The second step improves the perturbation measurements by filtering out outliers from the lesion site measurements using model based analysis. The results of 20 malignant and benign cases show similar performance between manual data processing and automated processing, and an improvement of about 27% in the malignant-to-benign ratio for tissue characterization. PMID:27867711
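The reference-formation step might look like the sketch below, which rejects outlier repeats channel-by-channel with a median/MAD test before averaging; the 3-sigma-equivalent cutoff is an assumption, not the paper's statistical test.

```python
import numpy as np

def robust_reference(meas, cutoff=3.0):
    """meas: (n_repeats, n_channels) measurements from the contralateral breast."""
    med = np.median(meas, axis=0)
    mad = np.median(np.abs(meas - med), axis=0) + 1e-12
    keep = np.abs(meas - med) <= cutoff * 1.4826 * mad  # MAD-based z-score test
    ref = np.where(keep, meas, np.nan)
    return np.nanmean(ref, axis=0)  # robust per-channel reference
```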
Takei, Takaaki; Ikeda, Mitsuru; Imai, Kuniharu; Yamauchi-Kawaura, Chiyo; Kato, Katsuhiko; Isoda, Haruo
2013-09-01
The automated contrast-detail (C-D) analysis methods developed so far cannot be expected to work well on images processed with nonlinear methods, such as noise reduction methods. Therefore, we have devised a new automated C-D analysis method by applying a support vector machine (SVM), and tested it for robustness to nonlinear image processing. We acquired the CDRAD (a commercially available C-D test object) images at a tube voltage of 120 kV and a milliampere-second product (mAs) of 0.5-5.0. A partial differential equation based technique was used as the noise reduction method. Three radiologists and three university students participated in the observer performance study. The training data for our SVM method were the classification data scored by one radiologist for the CDRAD images acquired at 1.6 and 3.2 mAs and their noise-reduced images. We also compared the performance of our SVM method with the CDRAD Analyser algorithm. The mean C-D diagrams (that is, a plot of the mean of the smallest visible hole diameter vs. hole depth) obtained from our devised SVM method agreed well with the ones averaged across the six human observers for both original and noise-reduced CDRAD images, whereas the mean C-D diagrams from the CDRAD Analyser algorithm disagreed with the ones from the human observers for both original and noise-reduced CDRAD images. In conclusion, our proposed SVM method for C-D analysis will work well for images processed with the non-linear noise reduction method as well as for the original radiographic images.
Zheng, Bo; von See, Marc P.; Yu, Elaine; Gunel, Beliz; Lu, Kuan; Vazin, Tandis; Schaffer, David V.; Goodwill, Patrick W.; Conolly, Steven M.
2016-01-01
Stem cell therapies have enormous potential for treating many debilitating diseases, including heart failure, stroke and traumatic brain injury. For maximal efficacy, these therapies require targeted cell delivery to specific tissues followed by successful cell engraftment. However, targeted delivery remains an open challenge. As one example, it is common for intravenous deliveries of mesenchymal stem cells (MSCs) to become entrapped in lung microvasculature instead of the target tissue. Hence, a robust, quantitative imaging method would be essential for developing efficacious cell therapies. Here we show that Magnetic Particle Imaging (MPI), a novel technique that directly images iron-oxide nanoparticle-tagged cells, can longitudinally monitor and quantify MSC administration in vivo. MPI offers near-ideal image contrast, depth penetration, and robustness; these properties make MPI both ultra-sensitive and linearly quantitative. Here, we imaged, for the first time, the dynamic trafficking of intravenous MSC administrations using MPI. Our results indicate that labeled MSC injections are immediately entrapped in lung tissue and then clear to the liver within one day, whereas standard iron oxide particle (Resovist) injections are immediately taken up by liver and spleen. Longitudinal MPI-CT imaging also indicated a clearance half-life of MSC iron oxide labels in the liver at 4.6 days. Finally, our ex vivo MPI biodistribution measurements of iron in liver, spleen, heart, and lungs after injection showed excellent agreement (R2 = 0.943) with measurements from induction coupled plasma spectrometry. These results demonstrate that MPI offers strong utility for noninvasively imaging and quantifying the systemic distribution of cell therapies and other therapeutic agents. PMID:26909106
Robust H(∞) positional control of 2-DOF robotic arm driven by electro-hydraulic servo system.
Guo, Qing; Yu, Tian; Jiang, Dan
2015-11-01
In this paper, an H∞ positional feedback controller is developed to improve robust performance under structural and parametric uncertainties and disturbances in an electro-hydraulic servo system (EHSS). The robust control model is described as a linear state-space equation by upper linear fractional transformation. According to the solution of the H∞ sub-optimal control problem, the robust controller is designed and simplified to a lower order linear model which is easily realized in the EHSS. The simulation and experimental results validate the robustness of the proposed method. The comparison with PI control shows that the robust controller is suitable for this EHSS under the critical condition where the desired system bandwidth is high and the external load of the hydraulic actuator is close to its capacity limit. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, Langwen; Xie, Wei; Wang, Jingcheng
2017-11-01
In this work, synthesis of robust distributed model predictive control (MPC) is presented for a class of linear systems subject to structured time-varying uncertainties. By decomposing a global system into smaller dimensional subsystems, a set of distributed MPC controllers, instead of a centralised controller, are designed. To ensure the robust stability of the closed-loop system with respect to model uncertainties, distributed state feedback laws are obtained by solving a min-max optimisation problem. The design of robust distributed MPC is then transformed into solving a minimisation problem with linear matrix inequality constraints. An iterative online algorithm with adjustable maximum iteration is proposed to coordinate the distributed controllers to achieve global performance. The simulation results show the effectiveness of the proposed robust distributed MPC algorithm.
Robust estimation approach for blind denoising.
Rabie, Tamer
2005-11-01
This work develops a new robust statistical framework for blind image denoising. Robust statistics addresses the problem of estimation when the idealized assumptions about a system are occasionally violated. The contaminating noise in an image is considered as a violation of the assumption of spatial coherence of the image intensities and is treated as an outlier random variable. A denoised image is estimated by fitting a spatially coherent stationary image model to the available noisy data using a robust estimator-based regression method within an optimal-size adaptive window. The robust formulation aims at eliminating the noise outliers while preserving the edge structures in the restored image. Several examples demonstrating the effectiveness of this robust denoising technique are reported and a comparison with other standard denoising filters is presented.
Korany, Mohamed A; Gazy, Azza A; Khamis, Essam F; Ragab, Marwa A A; Kamal, Miranda F
2018-06-01
This study outlines two robust regression approaches, namely least median of squares (LMS) and iteratively re-weighted least squares (IRLS), and investigates their application in the instrumental analysis of nutraceuticals (that is, fluorescence quenching of merbromin reagent upon lipoic acid addition). These robust regression methods were used to calculate calibration data from the fluorescence quenching reaction (∆F and F-ratio) under ideal or non-ideal linearity conditions. For each condition, data were treated using three regression fittings: Ordinary Least Squares (OLS), LMS and IRLS. Assessment of linearity, limits of detection (LOD) and quantitation (LOQ), accuracy and precision were carefully studied for each condition. LMS and IRLS regression line fittings showed significant improvement in correlation coefficients and all regression parameters for both methods and both conditions. In the ideal linearity condition, the intercept and slope changed insignificantly, but a dramatic change was observed for the intercept under the non-ideal linearity condition. Under both linearity conditions, LOD and LOQ values after the robust regression line fitting of data were lower than those obtained before data treatment. The results obtained after statistical treatment indicated that the linearity ranges for drug determination could be expanded to lower limits of quantitation by enhancing the regression equation parameters after data treatment. Analysis results for lipoic acid in capsules, using both fluorimetric methods, treated by parametric OLS and after treatment by robust LMS and IRLS, were compared for both linearity conditions. Copyright © 2018 John Wiley & Sons, Ltd.
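For reference, compact sketches of the two robust fits are given below for a straight-line calibration y = a + b*x; the subsample count and the Huber tuning constant are conventional defaults rather than the paper's settings.

```python
import numpy as np

def lms_fit(x, y, n_trials=500, seed=0):
    """Least median of squares via random two-point candidate lines."""
    rng = np.random.default_rng(seed)
    best, best_med = None, np.inf
    for _ in range(n_trials):
        i, j = rng.choice(x.size, 2, replace=False)
        if x[i] == x[j]:
            continue
        b = (y[j] - y[i]) / (x[j] - x[i])
        a = y[i] - b * x[i]
        med = np.median((y - a - b * x) ** 2)
        if med < best_med:
            best, best_med = (a, b), med
    return best

def irls_fit(x, y, k=1.345, n_iter=20):
    """Iteratively re-weighted least squares with Huber weights."""
    X = np.stack([np.ones_like(x), x], axis=1)
    w = np.ones_like(y)
    for _ in range(n_iter):
        beta = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)[0]
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12            # robust scale
        w = np.sqrt(np.minimum(1.0, k * s / (np.abs(r) + 1e-12)))
    return beta  # (intercept, slope)
```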
Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study
Bornschein, Jörg; Henniges, Marc; Lücke, Jörg
2013-01-01
Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here and optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study, therefore, suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex. PMID:23754938
Robson, Philip M; Grant, Aaron K; Madhuranthakam, Ananth J; Lattanzi, Riccardo; Sodickson, Daniel K; McKenzie, Charles A
2008-10-01
Parallel imaging reconstructions result in spatially varying noise amplification characterized by the g-factor, precluding conventional measurements of noise from the final image. A simple Monte Carlo based method is proposed for all linear image reconstruction algorithms, which allows measurement of signal-to-noise ratio and g-factor and is demonstrated for SENSE and GRAPPA reconstructions for accelerated acquisitions that have not previously been amenable to such assessment. Only a simple "prescan" measurement of noise amplitude and correlation in the phased-array receiver, and a single accelerated image acquisition are required, allowing robust assessment of signal-to-noise ratio and g-factor. The "pseudo multiple replica" method has been rigorously validated in phantoms and in vivo, showing excellent agreement with true multiple replica and analytical methods. This method is universally applicable to the parallel imaging reconstruction techniques used in clinical applications and will allow pixel-by-pixel image noise measurements for all parallel imaging strategies, allowing quantitative comparison between arbitrary k-space trajectories, image reconstruction, or noise conditioning techniques. (c) 2008 Wiley-Liss, Inc.
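In outline, the Monte Carlo procedure looks like the sketch below; recon stands in for any linear reconstruction (SENSE, GRAPPA, ...), the coil covariance is assumed to come from the noise prescan, and the replica count is illustrative.

```python
import numpy as np

def pseudo_replica_snr(kspace, recon, coil_cov, n_rep=100, seed=1):
    """Pixelwise SNR map from synthetic correlated-noise replicas."""
    rng = np.random.default_rng(seed)
    chol = np.linalg.cholesky(coil_cov)          # imposes the measured correlation
    img0 = recon(kspace)
    reps = []
    for _ in range(n_rep):
        n = rng.standard_normal(kspace.shape) + 1j * rng.standard_normal(kspace.shape)
        n = np.einsum("ij,j...->i...", chol, n)  # coil axis assumed first
        reps.append(recon(kspace + n))
    noise_sd = np.std(np.stack(reps), axis=0)
    return np.abs(img0) / (noise_sd + 1e-12)
```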
Long, Zhiying; Chen, Kewei; Wu, Xia; Reiman, Eric; Peng, Danling; Yao, Li
2009-02-01
Spatial independent component analysis (sICA) has been widely used to analyze functional magnetic resonance imaging (fMRI) data. The well-accepted implicit assumption is the spatial statistical independence of the intrinsic sources identified by sICA, which makes sICA difficult to apply to data containing interdependent sources and confounding factors. This interdependency can arise, for instance, from fMRI studies investigating two tasks in a single session. In this study, we introduced a linear projection approach and considered its utilization as a tool to separate task-related components from two-task fMRI data. The robustness and feasibility of the method are substantiated through simulations on computer data and real resting-state fMRI data. Both simulated and real two-task fMRI experiments demonstrated that sICA in combination with the projection method succeeded in separating spatially dependent components and had better detection power than a purely model-based method when estimating activation induced by each task as well as both tasks.
Multiscale Morphological Filtering for Analysis of Noisy and Complex Images
NASA Technical Reports Server (NTRS)
Kher, A.; Mitra, S.
1993-01-01
Images acquired with passive sensing techniques suffer from illumination variations and poor local contrasts that create major difficulties in interpretation and identification tasks. On the other hand, images acquired with active sensing techniques based on monochromatic illumination are degraded with speckle noise. Mathematical morphology offers elegant techniques to handle a wide range of image degradation problems. Unlike linear filters, morphological filters do not blur the edges and hence maintain higher image resolution. Their rich mathematical framework facilitates the design and analysis of these filters as well as their hardware implementation. Morphological filters are easier to implement and are more cost effective and efficient than several conventional linear filters. Morphological filters to remove speckle noise while maintaining high resolution and preserving thin image regions that are particularly vulnerable to speckle noise were developed and applied to SAR imagery. These filters used combination of linear (one-dimensional) structuring elements in different (typically four) orientations. Although this approach preserves more details than the simple morphological filters using two-dimensional structuring elements, the limited orientations of one-dimensional elements approximate the fine details of the region boundaries. A more robust filter designed recently overcomes the limitation of the fixed orientations. This filter uses a combination of concave and convex structuring elements. Morphological operators are also useful in extracting features from visible and infrared imagery. A multiresolution image pyramid obtained with successive filtering and a subsampling process aids in the removal of the illumination variations and enhances local contrasts. A morphology-based interpolation scheme was also introduced to reduce intensity discontinuities created in any morphological filtering task. The generality of morphological filtering techniques in extracting information from a wide variety of images obtained with active and passive sensing techniques is discussed. Such techniques are particularly useful in obtaining more information from fusion of complex images by different sensors such as SAR, visible, and infrared.
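The oriented-opening idea can be sketched as follows: open the image with short line segments at 0, 45, 90 and 135 degrees and keep the pointwise maximum, so thin bright structures aligned with any of the four directions survive; the element length is an assumption.

```python
import numpy as np
from scipy.ndimage import grey_opening

def oriented_opening(img, length=7):
    """Max over grey openings with four oriented linear structuring elements."""
    eye = np.eye(length, dtype=bool)
    footprints = [
        np.ones((1, length), dtype=bool),  # 0 degrees
        np.ones((length, 1), dtype=bool),  # 90 degrees
        eye,                               # 45 degrees
        eye[::-1],                         # 135 degrees
    ]
    return np.max([grey_opening(img, footprint=fp) for fp in footprints], axis=0)
```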
Orthogonal sparse linear discriminant analysis
NASA Astrophysics Data System (ADS)
Liu, Zhonghua; Liu, Gang; Pu, Jiexin; Wang, Xiaohong; Wang, Haijun
2018-03-01
Linear discriminant analysis (LDA) is a linear feature extraction approach, and it has received much attention. Building on LDA, researchers have carried out extensive work, and many variant versions of LDA have been proposed. However, these variants do not solve the inherent problems of LDA very well. The major disadvantages of the classical LDA are as follows. First, it is sensitive to outliers and noise. Second, only the global discriminant structure is preserved, while the local discriminant information is ignored. In this paper, we present a new orthogonal sparse linear discriminant analysis (OSLDA) algorithm. The k nearest neighbour graph is first constructed to preserve the local discriminant information of sample points. Then, an L2,1-norm constraint on the projection matrix is used as the loss function, which makes the proposed method robust to outliers in the data points. Extensive experiments have been performed on several standard public image databases, and the experimental results demonstrate the performance of the proposed OSLDA algorithm.
Hierarchical Feature Extraction With Local Neural Response for Image Recognition.
Li, Hong; Wei, Yantao; Li, Luoqing; Chen, C L P
2013-04-01
In this paper, a hierarchical feature extraction method is proposed for image recognition. The key idea of the proposed method is to extract an effective feature, called local neural response (LNR), of the input image with nontrivial discrimination and invariance properties by alternating between local coding and maximum pooling operation. The local coding, which is carried out on the locally linear manifold, can extract the salient feature of image patches and leads to a sparse measure matrix on which maximum pooling is carried out. The maximum pooling operation builds the translation invariance into the model. We also show that other invariant properties, such as rotation and scaling, can be induced by the proposed model. In addition, a template selection algorithm is presented to reduce computational complexity and to improve the discrimination ability of the LNR. Experimental results show that our method is robust to local distortion and clutter compared with state-of-the-art algorithms.
NASA Astrophysics Data System (ADS)
Kim, Minwoo; Park, Hyeon K.; Yun, Gunsu; Lee, Jaehyun; Lee, Jieun; Lee, Woochang; Jardin, Stephen; Xu, X. Q.; Kstar Team
2015-11-01
The modeling of the edge-localized mode (ELM) should be rigorously pursued for reliable and robust ELM control for steady-state long-pulse H-mode operation in ITER as well as DEMO. In the KSTAR discharge #7328, the linear stability of the ELMs is investigated using the M3D-C1 and BOUT++ codes. This is achieved by linear simulation of the n = 8 mode structure of the ELM observed by the KSTAR electron cyclotron emission imaging (ECEI) systems. In the process of analysis, variations of the ELM growth rate due to the plasma equilibrium profiles and transport coefficients are investigated, and simulation results with the two codes are compared. The numerical simulations are extended to the nonlinear phase of the ELM dynamics, which includes saturation and crash of the modes. Preliminary results of the nonlinear simulations are compared with the measured images, especially from the saturation to the crash. This work is supported by NRF of Korea under contract no. NRF-2014M1A7A1A03029865, US DoE by LLNL under contract DE-AC52-07NA27344 and US DoE by PPPL under contract DE-AC02-09CH11466.
Liang, Gaozhen; Dong, Chunwang; Hu, Bin; Zhu, Hongkai; Yuan, Haibo; Jiang, Yongwen; Hao, Guoshuang
2018-05-18
Withering is the first step in the processing of congou black tea. Given the deficiencies of traditional water content detection methods, a machine vision based NDT (Non Destructive Testing) method was established to detect the moisture content of withered leaves. First, a computer vision system collected visible light images of the tea leaf surfaces in time sequence, and color and texture characteristics were extracted from the spatial changes in color. Then, quantitative prediction models for the moisture content of withered tea leaves were established through linear PLS (Partial Least Squares) and non-linear SVM (Support Vector Machine) regression. The results showed correlation coefficients higher than 0.8 between the water contents and the green component mean value (G), the lightness component mean value (L * ) and the uniformity (U), which means that the extracted characteristics have great potential to predict the water contents. The performance parameters of the SVM prediction model, namely the correlation coefficient of the prediction set (Rp), the root-mean-square error of prediction (RMSEP), and the residual predictive deviation (RPD), are 0.9314, 0.0411 and 1.8004, respectively. The non-linear modeling method can better describe the quantitative analytical relations between the image and the water content. With its superior generalization and robustness, the method provides a new approach and theoretical basis for online water content monitoring in the automated production of black tea.
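The two calibration models might be set up as in the sketch below, with assumed hyperparameters; X holds the per-image color/texture features (e.g. G, L*, U) and y the reference moisture contents.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def fit_moisture_models(X, y):
    """Return the linear PLS and nonlinear (RBF) SVM regression models."""
    pls = PLSRegression(n_components=min(5, X.shape[1])).fit(X, y)
    svm = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)).fit(X, y)
    return pls, svm
```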
Optimization-Based Robust Nonlinear Control
2006-08-01
New control algorithms were developed for robust stabilization of nonlinear dynamical systems. Novel, linear matrix inequality-based synthesis ... was to further advance optimization-based robust nonlinear control design, for general nonlinear systems (especially in discrete time), for linear ...
Inferring the most probable maps of underground utilities using Bayesian mapping model
NASA Astrophysics Data System (ADS)
Bilal, Muhammad; Khan, Wasiq; Muggleton, Jennifer; Rustighi, Emiliano; Jenks, Hugo; Pennock, Steve R.; Atkins, Phil R.; Cohn, Anthony
2018-03-01
Mapping the Underworld (MTU), a major initiative in the UK, is focused on addressing the social, environmental and economic consequences arising from the inability to locate buried underground utilities (such as pipes and cables) by developing a multi-sensor mobile device. The aim of the MTU device is to locate different types of buried assets in real time with the use of automated data processing techniques and statutory records. The statutory records, though typically inaccurate and incomplete, provide useful prior information on what is buried under the ground and where. However, the integration of information from multiple sensors (raw data) with these qualitative maps and their visualization is challenging and requires the implementation of robust machine learning/data fusion approaches. In this paper, an approach for the automated creation of revised maps was developed as a Bayesian mapping model by integrating the knowledge extracted from the sensors' raw data and the available statutory records. Statutory records were combined with the hypotheses from the sensors to form an initial estimate of what might be found underground and roughly where. The maps were (re)constructed using automated image segmentation techniques for hypotheses extraction and Bayesian classification techniques for segment-manhole connections. The model, consisting of an image segmentation algorithm and various Bayesian classification techniques (segment recognition and the expectation maximization (EM) algorithm), provided robust performance on various simulated as well as real sites in terms of predicting linear/non-linear segments and constructing refined 2D/3D maps.
Fast global image smoothing based on weighted least squares.
Min, Dongbo; Choi, Sunghwan; Lu, Jiangbo; Ham, Bumsub; Sohn, Kwanghoon; Do, Minh N
2014-12-01
This paper presents an efficient technique for performing spatially inhomogeneous edge-preserving image smoothing, called the fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for a 2D image), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory- and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables applying a linear-time tridiagonal matrix algorithm to solve d three-point Laplacian matrices iteratively. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a runtime comparable to the fast edge-preserving filters, but its global optimization formulation overcomes many limitations of the local filtering approaches. Our method also achieves results of quality comparable to the state-of-the-art optimization-based techniques, but runs ∼10-30 times faster. Besides, considering the flexibility in defining an objective function, we further propose generalized fast algorithms that perform Lγ norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
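One 1D pass of such a separable solver can be sketched as below: solve the three-point system (I + lam*L_w)u = f with edge-aware weights via the banded (Thomas-type) solver; the weighting function and parameters are assumptions.

```python
import numpy as np
from scipy.linalg import solve_banded

def wls_smooth_1d(f, lam=30.0, sigma=0.1):
    """Solve min_u sum (u-f)^2 + lam * sum w_i (u_{i+1}-u_i)^2 in linear time."""
    f = np.asarray(f, dtype=float)
    w = np.exp(-np.abs(np.diff(f)) / sigma)       # small weight across edges
    diag = 1.0 + lam * (np.r_[w, 0.0] + np.r_[0.0, w])
    upper = np.r_[0.0, -lam * w]                  # superdiagonal
    lower = np.r_[-lam * w, 0.0]                  # subdiagonal
    ab = np.vstack([upper, diag, lower])          # banded storage for solve_banded
    return solve_banded((1, 1), ab, f)
```

For a 2D image the same pass would be applied along rows and then columns, iterated a few times, in the spirit of the separable scheme the paper describes.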
A New Pansharpening Method Based on Spatial and Spectral Sparsity Priors.
He, Xiyan; Condat, Laurent; Bioucas-Dias, José; Chanussot, Jocelyn; Xia, Junshi
2014-06-27
The development of multisensor systems in recent years has led to great increase in the amount of available remote sensing data. Image fusion techniques aim at inferring high quality images of a given area from degraded versions of the same area obtained by multiple sensors. This paper focuses on pansharpening, which is the inference of a high spatial resolution multispectral image from two degraded versions with complementary spectral and spatial resolution characteristics: a) a low spatial resolution multispectral image; and b) a high spatial resolution panchromatic image. We introduce a new variational model based on spatial and spectral sparsity priors for the fusion. In the spectral domain we encourage low-rank structure, whereas in the spatial domain we promote sparsity on the local differences. Given the fact that both panchromatic and multispectral images are integrations of the underlying continuous spectra using different channel responses, we propose to exploit appropriate regularizations based on both spatial and spectral links between panchromatic and the fused multispectral images. A weighted version of the vector Total Variation (TV) norm of the data matrix is employed to align the spatial information of the fused image with that of the panchromatic image. With regard to spectral information, two different types of regularization are proposed to promote a soft constraint on the linear dependence between the panchromatic and the fused multispectral images. The first one estimates directly the linear coefficients from the observed panchromatic and low resolution multispectral images by Linear Regression (LR) while the second one employs the Principal Component Pursuit (PCP) to obtain a robust recovery of the underlying low-rank structure. We also show that the two regularizers are strongly related. The basic idea of both regularizers is that the fused image should have low-rank and preserve edge locations. We use a variation of the recently proposed Split Augmented Lagrangian Shrinkage (SALSA) algorithm to effectively solve the proposed variational formulations. Experimental results on simulated and real remote sensing images show the effectiveness of the proposed pansharpening method compared to the state-of-the-art.
Robust head pose estimation via supervised manifold learning.
Wang, Chao; Song, Xubo
2014-05-01
Head poses can be automatically estimated using manifold learning algorithms, with the assumption that with the pose being the only variable, the face images should lie in a smooth and low-dimensional manifold. However, this estimation approach is challenging due to other appearance variations related to identity, head location in image, background clutter, facial expression, and illumination. To address the problem, we propose to incorporate supervised information (pose angles of training samples) into the process of manifold learning. The process has three stages: neighborhood construction, graph weight computation and projection learning. For the first two stages, we redefine inter-point distance for neighborhood construction as well as graph weight by constraining them with the pose angle information. For Stage 3, we present a supervised neighborhood-based linear feature transformation algorithm to keep the data points with similar pose angles close together but the data points with dissimilar pose angles far apart. The experimental results show that our method has higher estimation accuracy than the other state-of-the-art algorithms and is robust to identity and illumination variations. Copyright © 2014 Elsevier Ltd. All rights reserved.
SU-F-207-16: CT Protocols Optimization Using Model Observer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tseng, H; Fan, J; Kupinski, M
2015-06-15
Purpose: To quantitatively evaluate the performance of different CT protocols using task-based measures of image quality. This work studies the task of size and contrast estimation for rods of different iodine concentration inserted in head- and body-sized phantoms using different imaging protocols. These protocols are designed to deliver the same dose level (CTDIvol) while using different X-ray tube voltage settings (kVp). Methods: Objects with different concentrations of iodine inserted in a head-size phantom and a body-size phantom are imaged on a 64-slice commercial CT scanner. Scanning protocols with various tube voltages (80, 100, and 120 kVp) and current settings are selected, all of which deliver the same absorbed dose level (CTDIvol). Because the phantom design (size of the iodine objects, the air gap between the inserted objects and the phantom) is not ideal for a model observer study, the acquired CT images are used to generate simulated images with four different sizes and five different contrasts of iodine objects. For each type of object, 500 images (100 x 100 pixels) are generated for the observer study. The observer selected in this study is the channelized scanning linear observer, which can be applied to estimate both size and contrast. The figure of merit used is the correct estimation ratio. The mean and the variance are estimated by the shuffle method. Results: The results indicate that the protocols with the 100 kVp tube voltage setting provide the best performance for iodine insert size and contrast estimation for both head and body phantom cases. Conclusion: This work presents a practical and robust quantitative approach using the channelized scanning linear observer to study contrast and size estimation performance of different CT protocols. Different protocols at the same CTDIvol setting can result in different image quality performance. The relationship between the absorbed dose and the diagnostic image quality is not linear.
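As a rough illustration of the observer, the sketch below channelizes each image and scores it against a bank of mean channelized templates, one per candidate (size, contrast) pair, taking the maximizing candidate as the estimate. The channel matrix, template bank, and the omission of covariance whitening are simplifying assumptions, not the study's exact observer.

import numpy as np

def channelize(images, T):
    # images: (N x P) flattened images; T: (P x C) channel matrix.
    return images @ T

def scanning_linear_estimate(v, templates, params):
    # v: (N x C) channelized data; templates: (K x C) mean channelized
    # signals for the K candidate (size, contrast) pairs in params.
    scores = v @ templates.T                 # linear test statistic per candidate
    return [params[k] for k in scores.argmax(axis=1)]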
Penn, Richard; Werner, Michael; Thomas, Justin
2015-01-01
Background Estimation of stochastic process models from data is a common application of time series analysis methods. Such system identification processes are often cast as hypothesis testing exercises whose intent is to estimate model parameters and test them for statistical significance. Ordinary least squares (OLS) regression and the Levenberg-Marquardt algorithm (LMA) have proven invaluable computational tools for models described by non-homogeneous, linear, stationary, ordinary differential equations. Methods In this paper we extend stochastic model identification to linear, stationary, partial differential equations in two independent variables (2D) and show that OLS and LMA apply equally well to these systems. The method employs an original nonparametric statistic as a test for the significance of estimated parameters. Results We show that gray scale and color images are special cases of 2D systems satisfying a particular autoregressive partial difference equation which estimates an analogous partial differential equation. Several applications to medical image modeling and classification illustrate the method by correctly classifying OLS models of axial magnetic resonance brain scans from demented and normal subjects according to their Mini Mental State Exam (MMSE) scores. Comparison with 13 image classifiers from the literature indicates our classifier is at least 14 times faster than any of them and has a classification accuracy better than all but one. Conclusions Our modeling method applies to any linear, stationary, partial differential equation and is readily extended to 3D whole-organ systems. Further, in addition to being a robust image classifier, estimated image models offer insights into which parameters carry the most diagnostic image information and thereby suggest finer divisions that could be made within a class. Image models can be estimated in milliseconds, which translates to whole-organ models in seconds; such runtimes could make real-time medicine and surgery modeling possible. PMID:26029638
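A minimal version of the image-modeling step might fit a first-order autoregressive partial difference equation by OLS, regressing each pixel on its causal neighbors; the coefficients and residual variance then summarize the image. The particular neighbor set below is an illustrative assumption.

import numpy as np

def fit_2d_ar(img):
    # Regress each pixel on its west, north and north-west neighbors.
    y = img[1:, 1:].ravel()
    X = np.column_stack([
        img[1:, :-1].ravel(),    # west neighbor
        img[:-1, 1:].ravel(),    # north neighbor
        img[:-1, :-1].ravel(),   # north-west neighbor
    ])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, resid.var()     # model parameters + noise variance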
SC-GRAPPA: Self-constraint noniterative GRAPPA reconstruction with closed-form solution.
Ding, Yu; Xue, Hui; Ahmad, Rizwan; Ting, Samuel T; Simonetti, Orlando P
2012-12-01
Parallel MRI (pMRI) reconstruction techniques are commonly used to reduce scan time by undersampling the k-space data. GRAPPA, a k-space based pMRI technique, is widely used clinically because of its robustness. In GRAPPA, the missing k-space data are estimated by solving a set of linear equations; however, this set of equations does not take advantage of the correlations within the missing k-space data. All k-space data in a neighborhood acquired from a phased-array coil are correlated. The correlation can be estimated easily as a self-constraint condition and formulated as an extra set of linear equations to improve the performance of GRAPPA. The authors propose a modified k-space based pMRI technique called self-constraint GRAPPA (SC-GRAPPA), which combines the linear equations of GRAPPA with these extra equations to solve for the missing k-space data. Since SC-GRAPPA utilizes a least-squares solution of the linear equations, it has a closed-form solution that does not require an iterative solver. The SC-GRAPPA equation was derived by incorporating GRAPPA as a prior estimate. SC-GRAPPA was tested in a uniform phantom and two normal volunteers. MR real-time cardiac cine images with acceleration rates of 5 and 6 were reconstructed using GRAPPA and SC-GRAPPA. SC-GRAPPA showed a significantly lower artifact level, and a greater than 10% overall signal-to-noise ratio (SNR) gain over GRAPPA, with more significant SNR gain observed in low-SNR regions of the images. SC-GRAPPA offers improved pMRI reconstruction, and is expected to benefit clinical imaging applications in the future.
Extensions of algebraic image operators: An approach to model-based vision
NASA Technical Reports Server (NTRS)
Lerner, Bao-Ting; Morelli, Michael V.
1990-01-01
Researchers extend their previous research on a highly structured and compact algebraic representation of grey-level images, which can be viewed as fuzzy sets. Addition and multiplication are defined for the set of all grey-level images, which can then be described as polynomials of two variables. Utilizing this new algebraic structure, researchers devised an innovative, efficient edge detection scheme. An accurate method for deriving gradient component information from this edge detector is presented. Based upon this new edge detection system, researchers developed a robust method for linear feature extraction by combining the techniques of a Hough transform and a line follower. The major advantage of this feature extractor is its general, object-independent nature. Target attributes, such as line segment lengths, intersections, angles of intersection, and endpoints are derived by the feature extraction algorithm and employed during model matching. The algebraic operators are global operations which are easily reconfigured to operate on any size or shape region. This provides a natural platform from which to pursue dynamic scene analysis. A method for optimizing the linear feature extractor which capitalizes on the spatially reconfigurable nature of the edge detector/gradient component operator is discussed.
Measurement Consistency from Magnetic Resonance Images
Chung, Dongjun; Chung, Moo K.; Durtschi, Reid B.; Lindell, R. Gentry; Vorperian, Houri K.
2010-01-01
Rationale and Objectives In quantifying medical images, length-based measurements are still obtained manually. Due to possible human error, a measurement protocol is required to guarantee the consistency of measurements. In this paper, we review various statistical techniques that can be used in determining measurement consistency. The focus is on detecting a possible measurement bias and determining the robustness of the procedures to outliers. Materials and Methods We review correlation analysis, linear regression, the Bland-Altman method, the paired t-test, and analysis of variance (ANOVA). These techniques were applied to measurements, obtained by two raters, of head and neck structures from magnetic resonance images (MRI). Results Correlation analysis and linear regression were shown to be insufficient for detecting measurement inconsistency. They are also very sensitive to outliers. The widely used Bland-Altman method is a visualization technique, so it lacks numerical quantification. The paired t-test tends to be sensitive to small measurement bias. On the other hand, ANOVA performs well even under small measurement bias. Conclusion In almost all cases, using only one method is insufficient, and it is recommended to use several methods simultaneously. In general, ANOVA performs the best. PMID:18790405
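For reference, the two-rater comparisons reviewed above can be computed in a few lines; the sketch below returns the Bland-Altman bias and limits of agreement together with a paired t-test for systematic bias (an illustrative helper, not the authors' code).

import numpy as np
from scipy import stats

def rater_agreement(a, b):
    # a, b: paired length measurements from two raters.
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    spread = 1.96 * diff.std(ddof=1)
    loa = (bias - spread, bias + spread)    # Bland-Altman limits of agreement
    t, p = stats.ttest_rel(a, b)            # paired t-test for measurement bias
    return bias, loa, p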
A long-term target detection approach in infrared image sequence
NASA Astrophysics Data System (ADS)
Li, Hang; Zhang, Qi; Li, Yuanyuan; Wang, Liqiang
2015-12-01
An automatic target detection method for long-term infrared (IR) image sequences from a moving platform is proposed. Firstly, based on non-linear histogram equalization, target candidates are coarse-to-fine segmented by using two self-adaptive thresholds generated in the intensity space. Then the real target is captured via two different selection approaches. At the beginning of the image sequence, the genuine target, which has little texture, is discriminated from other candidates by using a contrast-based confidence measure. On the other hand, when the target becomes larger, we apply an online EM method to iteratively estimate and update the distributions of the target's size and position based on the prior detection results, and then recognize the genuine one that satisfies both the size and position constraints. Experimental results demonstrate that the presented method is accurate, robust and efficient.
NASA Astrophysics Data System (ADS)
Lancaster, N.; LeBlanc, D.; Bebis, G.; Nicolescu, M.
2015-12-01
Dune-field patterns are believed to behave as self-organizing systems, but what causes the patterns to form is still poorly understood. The most obvious (and in many cases the most significant) aspect of a dune system is the pattern of dune crest lines. Extracting meaningful features such as crest length, orientation, spacing, bifurcations, and merging of crests from image data can reveal important information about the specific dune-field morphological properties, development, and response to changes in boundary conditions, but manual methods are labor-intensive and time-consuming. We are developing the capability to recognize and characterize patterns of sand dunes on planetary surfaces. Our goal is to develop a robust methodology and the necessary algorithms for automated or semi-automated extraction of dune morphometric information from image data. Our main approach uses image processing methods to extract gradient information from satellite images of dune fields. Typically, the gradients have a dominant magnitude and orientation. In many cases, the images have two major dominant gradient orientations, for the sunny and shaded sides of the dunes. A histogram of the gradient orientations is used to determine the dominant orientation. A threshold is applied to the image based on the gradient orientations that agree with the dominant orientation. The contours of the binary image can then be used to determine the dune crest-lines, based on pixel intensity values. Once the crest-lines have been extracted, the morphological properties can be computed. We have tested our approach on a variety of images of linear and crescentic (transverse) dunes and compared the dune detection algorithms with manually-digitized dune crest lines, achieving true positive rates of 0.57-0.99 and false positive rates of 0.30-0.67, indicating that our approach is generally robust.
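A compact sketch of the orientation-histogram thresholding step is shown below: gradient orientations are binned (weighted by magnitude), the dominant bin is found, and pixels whose orientation agrees with it are kept. The bin count and tolerance are illustrative assumptions.

import numpy as np

def dominant_orientation_mask(img, tol_deg=20.0):
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0      # orientation mod 180
    hist, edges = np.histogram(ang, bins=36, range=(0, 180), weights=mag)
    k = hist.argmax()
    dom = 0.5 * (edges[k] + edges[k + 1])             # dominant orientation
    dev = np.minimum(np.abs(ang - dom), 180.0 - np.abs(ang - dom))
    return (dev < tol_deg) & (mag > mag.mean())       # crest-line candidates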
Robust control of a parallel hybrid drivetrain with a CVT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayer, T.; Schroeder, D.
1996-09-01
In this paper the design of a robust control system for a parallel hybrid drivetrain is presented. The drivetrain is based on a continuously variable transmission (CVT) and is therefore a highly nonlinear multiple-input-multiple-output (MIMO) system. Input-output linearization offers the possibility of linearizing and decoupling the system. Since, for example, the vehicle mass varies with the load and the efficiency of the gearbox depends strongly on the actual operating point, an exact linearization of the plant will mostly fail. Therefore, a robust sliding-mode control algorithm is used to control the drivetrain.
Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J; Stayman, J Webster; Zbijewski, Wojciech; Brock, Kristy K; Daly, Michael J; Chan, Harley; Irish, Jonathan C; Siewerdsen, Jeffrey H
2011-04-01
A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5 +/- 2.8) mm compared to (3.5 +/- 3.0) mm with rigid registration. A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.
NASA Astrophysics Data System (ADS)
Han, Xiaobao; Li, Huacong; Jia, Qiusheng
2017-12-01
For dynamic decoupling of polynomial linear parameter varying (PLPV) systems, a robust dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMIs) using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design is reduced to a normal convex optimization problem with standard linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduled pre-compensator is obtained that satisfies both robustness and decoupling performance requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.
M-estimation for robust sparse unmixing of hyperspectral images
NASA Astrophysics Data System (ADS)
Toomik, Maria; Lu, Shijian; Nelson, James D. B.
2016-10-01
Hyperspectral unmixing methods often use a conventional least squares based lasso, which assumes that the data follow the Gaussian distribution. The normality assumption is an approximation which is generally invalid for real imagery data. We consider a robust (non-Gaussian) approach to sparse spectral unmixing of remotely sensed imagery which reduces the sensitivity of the estimator to outliers and relaxes the linearity assumption. The method combines several appropriate penalty terms. We propose to use an lp norm with 0 < p < 1 in the sparse regression problem, which induces more sparsity in the results but makes the problem non-convex. The problem can nevertheless be solved quite straightforwardly with an extensible algorithm based on iteratively reweighted least squares. To deal with the huge size of modern spectral libraries we introduce a library reduction step, similar to the multiple signal classification (MUSIC) array processing algorithm, which not only speeds up unmixing but also yields superior results. In the hyperspectral setting we extend the traditional least squares method to the robust heavy-tailed case and propose a generalised M-lasso solution. M-estimation replaces the Gaussian likelihood with a fixed function ρ(e) that restrains outliers. The M-estimate function reduces the effect of errors with large amplitudes or even assigns the outliers zero weights. Our experimental results on real hyperspectral data show that noise with large amplitudes (outliers) often exists in the data. The ability to mitigate the influence of such outliers can therefore offer greater robustness. Qualitative hyperspectral unmixing results on real hyperspectral image data corroborate the efficacy of the proposed method.
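The iteratively reweighted least squares idea can be sketched in a few lines: each iteration solves a weighted ridge problem whose weights majorize the non-convex lp penalty. Parameter choices below are illustrative, and the real problem would add non-negativity constraints and the library-reduction step.

import numpy as np

def lp_irls_unmix(A, y, p=0.5, lam=1e-2, iters=50, eps=1e-6):
    # Approximately solve min_x ||Ax - y||^2 + lam * ||x||_p^p, 0 < p < 1.
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(iters):
        w = p * (np.abs(x) + eps) ** (p - 2)       # weights majorizing |x|^p
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
    return x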
Robust pedestrian detection and tracking from a moving vehicle
NASA Astrophysics Data System (ADS)
Tuong, Nguyen Xuan; Müller, Thomas; Knoll, Alois
2011-01-01
In this paper, we address the problem of multi-person detection, tracking and distance estimation in a complex scenario using multiple cameras. Specifically, we are interested in a vision system for supporting the driver in avoiding any unwanted collision with a pedestrian. We propose an approach using Histograms of Oriented Gradients (HOG) to detect pedestrians in static images and a particle filter as a robust tracking technique to follow targets from frame to frame. Because the depth map requires expensive computation, we extract depth information of targets using the Direct Linear Transformation (DLT) to reconstruct the 3D coordinates of corresponding points found by running Speeded Up Robust Features (SURF) on two input images. Using the particle filter, the proposed tracker can efficiently handle target occlusions in a simple background environment. However, to achieve reliable performance in complex scenarios with frequent target occlusions and complex cluttered background, results from the detection module are integrated to create feedback and recover the tracker from tracking failures due to the complexity of the environment and target appearance model variability. The proposed approach is evaluated on different data sets both in a simple background scenario and a cluttered background environment. The results show that, by integrating detector and tracker, reliable and stable performance is possible even if occlusion occurs frequently in a highly complex environment. A vision-based collision avoidance system for an intelligent car, as a result, can be achieved.
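For the detection stage, a minimal OpenCV sketch using the stock HOG people detector (a stand-in for the authors' trained detector) might look as follows; the file name and scan parameters are assumptions.

import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("frame.png")            # hypothetical input frame
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
for (x, y, w, h) in rects:                 # detections feed the particle filter
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)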
NASA Astrophysics Data System (ADS)
Zafar, I.; Edirisinghe, E. A.; Acar, S.; Bez, H. E.
2007-02-01
Automatic vehicle Make and Model Recognition (MMR) systems provide useful performance enhancements to vehicle recognition systems that are solely based on Automatic License Plate Recognition (ALPR). Several car MMR systems have been proposed in the literature. However, these approaches are based on feature detection algorithms that can perform sub-optimally under adverse lighting and/or occlusion conditions. In this paper we propose a real-time, appearance based, car MMR approach using Two Dimensional Linear Discriminant Analysis (2D-LDA) that is capable of addressing this limitation. We provide experimental results to analyse the proposed algorithm's robustness under varying illumination and occlusion conditions. We have shown that the best performance with the proposed 2D-LDA based car MMR approach is obtained when the eigenvectors of lower significance are ignored. For the given database of 200 car images of 25 different make-model classifications, a best accuracy of 91% was obtained with the 2D-LDA approach. We use a direct Principal Component Analysis (PCA) based approach as a benchmark to compare and contrast the performance of the proposed 2D-LDA approach to car MMR. We conclude that in general the 2D-LDA based algorithm surpasses the performance of the PCA based approach.
NASA Astrophysics Data System (ADS)
Manolakis, Dimitris G.
2004-10-01
The linear mixing model is widely used in hyperspectral imaging applications to model the reflectance spectra of mixed pixels in the SWIR atmospheric window or the radiance spectra of plume gases in the LWIR atmospheric window. In both cases it is important to detect the presence of materials or gases and then estimate their amount, if they are present. The detection and estimation algorithms available for these tasks are related but not identical. The objective of this paper is to theoretically investigate how the heavy tails observed in hyperspectral background data affect the quality of abundance estimates and how robust the F-test, used for endmember selection, is to the presence of heavy tails when the model fits the data.
A 3D Freehand Ultrasound System for Multi-view Reconstructions from Sparse 2D Scanning Planes
Yu, Honggang; Pattichis, Marios S; Agurto, Carla; Beth Goens, M
2011-01-01
Background A significant limitation of existing 3D ultrasound systems comes from the fact that the majority of them work with fixed acquisition geometries. As a result, the users have very limited control over the geometry of the 2D scanning planes. Methods We present a low-cost and flexible ultrasound imaging system that integrates several image processing components to allow for 3D reconstructions from limited numbers of 2D image planes and multiple acoustic views. Our approach is based on a 3D freehand ultrasound system that allows users to control the 2D acquisition imaging using conventional 2D probes. For reliable performance, we develop new methods for image segmentation and robust multi-view registration. We first present a new hybrid geometric level-set approach that provides reliable segmentation performance with relatively simple initializations and minimum edge leakage. Optimization of the segmentation model parameters and its effect on performance is carefully discussed. Second, using the segmented images, a new coarse to fine automatic multi-view registration method is introduced. The approach uses a 3D Hotelling transform to initialize an optimization search. Then, the fine scale feature-based registration is performed using a robust, non-linear least squares algorithm. The robustness of the multi-view registration system allows for accurate 3D reconstructions from sparse 2D image planes. Results Volume measurements from multi-view 3D reconstructions are found to be consistently and significantly more accurate than measurements from single view reconstructions. The volume error of multi-view reconstruction is measured to be less than 5% of the true volume. We show that volume reconstruction accuracy is a function of the total number of 2D image planes and the number of views for calibrated phantom. In clinical in-vivo cardiac experiments, we show that volume estimates of the left ventricle from multi-view reconstructions are found to be in better agreement with clinical measures than measures from single view reconstructions. Conclusions Multi-view 3D reconstruction from sparse 2D freehand B-mode images leads to more accurate volume quantification compared to single view systems. The flexibility and low-cost of the proposed system allow for fine control of the image acquisition planes for optimal 3D reconstructions from multiple views. PMID:21251284
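The fine-scale registration step can be illustrated as a robust nonlinear least squares fit of a 3D rigid transform to matched feature points, with a soft-L1 loss down-weighting mismatched features; this is a sketch under an assumed Euler-angle parameterization, not the authors' implementation.

import numpy as np
from scipy.optimize import least_squares

def register_rigid_3d(src, dst):
    # src, dst: (N x 3) matched feature points from the two views.
    def residuals(theta):
        rx, ry, rz, tx, ty, tz = theta
        cx, sx = np.cos(rx), np.sin(rx)
        cy, sy = np.cos(ry), np.sin(ry)
        cz, sz = np.cos(rz), np.sin(rz)
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        R = Rz @ Ry @ Rx
        return ((src @ R.T + [tx, ty, tz]) - dst).ravel()

    sol = least_squares(residuals, np.zeros(6), loss="soft_l1")  # robust fit
    return sol.x   # rotation angles and translation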
Giraudo, Chiara; Motyka, Stanislav; Weber, Michael; Resinger, Christoph; Feiweier, Thorsten; Traxler, Hannes; Trattnig, Siegfried; Bogner, Wolfgang
2017-08-01
The aim of this study was to investigate the origin of random image artifacts in stimulated echo acquisition mode diffusion tensor imaging (STEAM-DTI), assess the role of averaging, develop an automated artifact postprocessing correction method using weighted mean of signal intensities (WMSIs), and compare it with other correction techniques. Institutional review board approval and written informed consent were obtained. The right calf and thigh of 10 volunteers were scanned on a 3 T magnetic resonance imaging scanner using a STEAM-DTI sequence. Artifacts (ie, signal loss) in STEAM-based DTI, presumably caused by involuntary muscle contractions, were investigated in volunteers and ex vivo (ie, in a human cadaver calf and a turkey leg using the same DTI parameters as for the volunteers). An automated postprocessing artifact correction method based on the WMSI was developed and compared with previous approaches (ie, iteratively reweighted linear least squares and informed robust estimation of tensors by outlier rejection [iRESTORE]). Diffusion tensor imaging and fiber tracking metrics, using different averages and artifact corrections, were compared for region of interest- and mask-based analyses. One-way repeated measures analysis of variance with Greenhouse-Geisser correction and Bonferroni post hoc tests was used to evaluate differences among all tested conditions. Qualitative assessment (ie, image quality) of native and corrected images was performed using the paired t test. Randomly localized and shaped artifacts affected all volunteer data sets. Artifact burden during voluntary muscle contractions increased on average from 23.1% to 77.5%, but artifacts were absent ex vivo. Diffusion tensor imaging metrics (mean diffusivity, fractional anisotropy, radial diffusivity, and axial diffusivity) showed heterogeneous behavior, but remained within the range reported in the literature. Fiber track metrics (number, length, and volume) significantly improved in both calves and thighs after artifact correction in region of interest- and mask-based analyses (P < 0.05 each). Iteratively reweighted linear least squares and iRESTORE showed equivalent results, but WMSI was faster than iRESTORE. Muscle delineation and artifact load significantly improved after correction (P < 0.05 each). Weighted mean of signal intensity correction significantly improved STEAM-based quantitative DTI analyses and fiber tracking of lower-limb muscles, providing a robust tool for musculoskeletal applications.
Linear, multivariable robust control with a mu perspective
NASA Technical Reports Server (NTRS)
Packard, Andy; Doyle, John; Balas, Gary
1993-01-01
The structured singular value is a linear algebra tool developed to study a particular class of matrix perturbation problems arising in robust feedback control of multivariable systems. These perturbations are called linear fractional, and are a natural way to model many types of uncertainty in linear systems, including state-space parameter uncertainty, multiplicative and additive unmodeled dynamics uncertainty, and coprime factor and gap metric uncertainty. The structured singular value theory provides a natural extension of classical SISO robustness measures and concepts to MIMO systems. Structured singular value analysis, coupled with approximate synthesis methods, makes it possible to study the tradeoff between performance and uncertainty that occurs in all feedback systems. In MIMO systems, the complexity of the spatial interactions in the loop gains makes it difficult to heuristically quantify the tradeoffs that must occur. This paper examines the role played by the structured singular value (and its computable bounds) in answering these questions, as well as its role in the general robust, multivariable control analysis and design problem.
van Ijsseldijk, E A; Valstar, E R; Stoel, B C; Nelissen, R G H H; Reiber, J H C; Kaptein, B L
2011-10-13
Accurate in vivo measurement methods of wear in total knee arthroplasty are required for a timely detection of excessive wear and to assess new implant designs. Component separation measurements based on model-based Roentgen stereophotogrammetric analysis (RSA), in which 3-dimensional reconstruction methods are used, have shown promising results, yet the robustness of these measurements is unknown. In this study, the accuracy and robustness of this measurement for clinical usage was assessed. The validation experiments were conducted in an RSA setup with a phantom of a knee in a vertical orientation. 72 RSA images were created using different variables for knee orientation, two prosthesis types (fixed-bearing Duracon knee and fixed-bearing Triathlon knee) and accuracies of the reconstruction models. The measurement error was determined for absolute and relative measurements, and the effects of knee positioning and true separation distance were determined. The measurement method overestimated the separation distance by 0.1 mm on average. The precision of the method was 0.10 mm (2*SD) for the Duracon prosthesis and 0.20 mm for the Triathlon prosthesis. A slight difference in error was found between the measurements with 0° and 10° anterior tilt (difference = 0.08 mm, p = 0.04). An accuracy of 0.1 mm and a precision of 0.2 mm can be achieved for linear wear measurements based on model-based RSA, which is more than adequate for clinical applications. The measurement is robust in clinical settings. Although anterior tilt seems to influence the measurement, the size of this influence is small and clinically irrelevant. Copyright © 2011 Elsevier Ltd. All rights reserved.
Image Alignment for Multiple Camera High Dynamic Range Microscopy.
Eastwood, Brian S; Childs, Elisabeth C
2012-01-09
This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.
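An exposure-robust, descriptor-based alignment in the spirit described above can be sketched with OpenCV: ORB features are matched between the two exposures and a RANSAC homography warps one onto the other. Operating on calibrated radiant power images (not shown) is what the study found most robust; the detector choice and thresholds here are assumptions.

import cv2
import numpy as np

def align_exposures(img_a, img_b):
    orb = cv2.ORB_create(2000)
    ka, da = orb.detectAndCompute(img_a, None)
    kb, db = orb.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(da, db)
    src = np.float32([ka[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kb[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # robust to outliers
    return cv2.warpPerspective(img_a, H, img_b.shape[1::-1])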
Tail mean and related robust solution concepts
NASA Astrophysics Data System (ADS)
Ogryczak, Włodzimierz
2014-01-01
Robust optimisation might be viewed as a multicriteria optimisation problem where objectives correspond to the scenarios, although their probabilities are unknown or imprecise. The simplest robust solution concept represents a conservative approach focused on optimisation of the worst-case scenario results. A softer concept allows one to optimise the tail mean, thus combining performances under multiple worst scenarios. We show that, when considering robust models that allow the probabilities to vary only within given intervals, the tail mean represents the robust solution only for upper-bounded probabilities. For arbitrary intervals of probabilities the corresponding robust solution may be expressed by the optimisation of appropriately combined mean and tail mean criteria, thus remaining easily implementable with auxiliary linear inequalities. Moreover, we use the tail mean concept to develop linear-programming-implementable robust solution concepts related to risk-averse optimisation criteria.
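The auxiliary-linear-inequality implementation mentioned above can be sketched as a standard LP: minimizing the mean of the k worst scenario losses introduces a free threshold eta and slack variables u_s. Restricting decisions to the simplex here is purely for illustration.

import numpy as np
from scipy.optimize import linprog

def tail_mean_solution(C, k):
    # C: (S x n) per-scenario linear loss coefficients; minimize the
    # tail mean via  min eta + (1/k) * sum(u),  u_s >= c_s.x - eta, u_s >= 0.
    S, n = C.shape
    c = np.r_[np.zeros(n), 1.0, np.full(S, 1.0 / k)]       # vars: x, eta, u
    A_ub = np.hstack([C, -np.ones((S, 1)), -np.eye(S)])    # c_s.x - eta - u_s <= 0
    b_ub = np.zeros(S)
    A_eq = np.r_[np.ones(n), 0.0, np.zeros(S)].reshape(1, -1)   # sum(x) = 1
    b_eq = [1.0]
    bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n]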
Robust image modeling techniques with an image restoration application
NASA Astrophysics Data System (ADS)
Kashyap, Rangasami L.; Eom, Kie-Bum
1988-08-01
A robust parameter-estimation algorithm for a nonsymmetric half-plane (NSHP) autoregressive model, where the driving noise is a mixture of a Gaussian and an outlier process, is presented. The convergence of the estimation algorithm is proved. An algorithm to estimate parameters and original image intensity simultaneously from the impulse-noise-corrupted image, where the model governing the image is not available, is also presented. The robustness of the parameter estimates is demonstrated by simulation. Finally, an algorithm to restore realistic images is presented. The entire image generally does not obey a simple image model, but a small portion (e.g., 8 x 8) of the image is assumed to obey an NSHP model. The original image is divided into windows and the robust estimation algorithm is applied for each window. The restoration algorithm is tested by comparing it to traditional methods on several different images.
Registration-based interpolation applied to cardiac MRI
NASA Astrophysics Data System (ADS)
Ólafsdóttir, Hildur; Pedersen, Henrik; Hansen, Michael S.; Lyksborg, Mark; Hansen, Mads Fogtmann; Darkner, Sune; Larsen, Rasmus
2010-03-01
Various approaches have been proposed for segmentation of cardiac MRI. An accurate segmentation of the myocardium and ventricles is essential to determine parameters of interest for the function of the heart, such as the ejection fraction. One problem with MRI is the poor resolution in one dimension. A 3D registration algorithm will typically use a trilinear interpolation of intensities to determine the intensity of a deformed template image. Due to the poor resolution across slices, such linear approximation is highly inaccurate since the assumption of smooth underlying intensities is violated. Registration-based interpolation is based on 2D registrations between adjacent slices and is independent of segmentations. Hence, rather than assuming smoothness in intensity, the assumption is that the anatomy is consistent across slices. The basis for the proposed approach is the set of 2D registrations between each pair of slices, both ways. The intensity of a new slice is then weighted by (i) the deformation functions and (ii) the intensities in the warped images. Unlike the approach by Penney et al. 2004, this approach takes into account deformation both ways, which gives more robustness where correspondence between slices is poor. We demonstrate the approach on a toy example and on a set of cardiac CINE MRI. Qualitative inspection reveals that the proposed approach provides a more convincing transition between slices than images obtained by linear interpolation. A quantitative validation reveals significantly lower reconstruction errors than both linear and registration-based interpolation based on one-way registrations.
A hybrid robust fault tolerant control based on adaptive joint unscented Kalman filter.
Shabbouei Hagh, Yashar; Mohammadi Asl, Reza; Cocquempot, Vincent
2017-01-01
In this paper, a new hybrid robust fault tolerant control scheme is proposed. A robust H∞ control law is used in the non-faulty situation, while a Non-Singular Terminal Sliding Mode (NTSM) controller is activated as soon as an actuator fault is detected. Since a linear robust controller is designed, the system is first linearized through the feedback linearization method. To switch from one controller to the other, a fuzzy-based switching system is used. An Adaptive Joint Unscented Kalman Filter (AJUKF) is used for fault detection and diagnosis. The proposed method is based on the simultaneous estimation of the system states and parameters. In order to show the efficiency of the proposed scheme, a simulated 3-DOF robotic manipulator is used. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Low-contrast underwater living fish recognition using PCANet
NASA Astrophysics Data System (ADS)
Sun, Xin; Yang, Jianping; Wang, Changgang; Dong, Junyu; Wang, Xinhua
2018-04-01
Quantitative and statistical analysis of ocean creatures is critical to ecological and environmental studies, and living fish recognition is one of the most essential requirements for the fishery industry. However, light attenuation and scattering phenomena in the underwater environment make underwater images low-contrast and blurry. This paper designs a robust framework for accurate fish recognition. The framework introduces a two-stage PCA Network to extract abstract features from fish images. On a real-world fish recognition dataset, we use a linear SVM classifier and set penalty coefficients to address the data imbalance issue. Feature visualization results show that our method can avoid feature distortion in the boundary regions of underwater images. Experimental results show that the PCA Network can extract discriminative features and achieve promising recognition accuracy. The framework improves the recognition accuracy of underwater living fishes and can be easily applied to the marine fishery industry.
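A simplified stand-in for the classification stage is sketched below: plain PCA features (in place of the two-stage PCA Network) feed a linear SVM whose balanced class weights play the role of the per-class penalty coefficients. File names and dimensions are hypothetical.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

X_train = np.load("fish_images_train.npy")   # hypothetical (N x H x W) array
y_train = np.load("fish_labels_train.npy")

clf = make_pipeline(
    PCA(n_components=64),                        # stand-in for PCANet features
    LinearSVC(class_weight="balanced", C=1.0),   # penalties counter imbalance
)
clf.fit(X_train.reshape(len(X_train), -1), y_train)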
An enhanced CCRTM (E-CCRTM) damage imaging technique using a 2D areal scan for composite plates
NASA Astrophysics Data System (ADS)
He, Jiaze; Yuan, Fuh-Gwo
2016-04-01
A two-dimensional (2-D) non-contact areal scan system was developed to image and quantify impact damage in a composite plate using an enhanced zero-lag cross-correlation reverse-time migration (E-CCRTM) technique. The system comprises a single piezoelectric actuator mounted on the composite plate and a laser Doppler vibrometer (LDV) for scanning a region to capture the scattered wavefield in the vicinity of the PZT. The proposed damage imaging technique takes into account the amplitude, phase, geometric spreading, and all of the frequency content of the Lamb waves propagating in the plate; thus, the reflectivity coefficients of the delamination can be calculated and potentially related to damage severity. Comparisons are made in terms of damage imaging quality between 2-D areal scans and linear scans as well as between the proposed and existing imaging conditions. The experimental results show that the 2-D E-CCRTM performs robustly when imaging and quantifying impact damage in large-scale composites using a single PZT actuator with a nearby areal scan using LDV.
NASA Astrophysics Data System (ADS)
Wei, Hai-Rui; Liu, Ji-Zhen
2017-02-01
It is very important to seek an efficient and robust quantum algorithm demanding fewer quantum resources. We propose one-photon three-qubit original and refined Deutsch-Jozsa algorithms with polarization and two linear-momentum degrees of freedom (DOFs). Our schemes are constructed solely using linear optics. Compared to the traditional ones with one DOF, our schemes are more economic and robust because the necessary photons are reduced from three to one. Our linear-optic schemes work in a determinate way, and they are feasible with current experimental technology.
Automatic segmentation of vessels in in-vivo ultrasound scans
NASA Astrophysics Data System (ADS)
Tamimi-Sarnikowski, Philip; Brink-Kjær, Andreas; Moshavegh, Ramin; Arendt Jensen, Jørgen
2017-03-01
Ultrasound has become a highly popular means of monitoring atherosclerosis by scanning the carotid artery. The screening involves measuring the thickness of the vessel wall and the diameter of the lumen. An automatic segmentation of the vessel lumen can enable the determination of lumen diameter. This paper presents a fully automatic algorithm for robustly segmenting the vessel lumen in longitudinal B-mode ultrasound images. The automatic segmentation is performed using a combination of B-mode and power Doppler images. The proposed algorithm includes a series of preprocessing steps, and performs vessel segmentation by use of the marker-controlled watershed transform. The ultrasound images used in the study were acquired using the bk3000 ultrasound scanner (BK Ultrasound, Herlev, Denmark) with two transducers, "8L2 Linear" and "10L2w Wide Linear" (BK Ultrasound, Herlev, Denmark). The algorithm was evaluated empirically and applied to a dataset of 1770 in-vivo images recorded from 8 healthy subjects. The segmentation results were compared to manual delineation performed by two experienced users. The results showed a sensitivity and specificity of 90.41+/-11.2% and 97.93+/-5.7% (mean+/-standard deviation), respectively. The overlap between automatic and manual segmentations, measured by the Dice similarity coefficient, was 91.25+/-11.6%. The empirical results demonstrated the feasibility of segmenting the vessel lumen in ultrasound scans using a fully automatic algorithm.
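A minimal marker-controlled watershed in the spirit of the pipeline above is sketched here with scikit-image: power Doppler flow supplies lumen markers, bright B-mode tissue supplies background markers, and the watershed runs on the B-mode gradient. Both thresholds are illustrative, and the paper's preprocessing steps are omitted.

import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

def segment_lumen(bmode, power_doppler):
    # bmode, power_doppler: co-registered 2D images scaled to [0, 1].
    markers = np.zeros(bmode.shape, dtype=np.int32)
    markers[power_doppler > 0.6] = 1                   # lumen (flow) marker
    markers[bmode > np.percentile(bmode, 90)] = 2      # tissue marker
    labels = watershed(sobel(bmode), markers)          # flood the gradient map
    return ndi.binary_fill_holes(labels == 1)          # lumen mask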
NASA Astrophysics Data System (ADS)
Döring, D.; Solodov, I.; Busse, G.
Sound and ultrasound in air are the products of a multitude of different processes and thus can be favorable or undesirable phenomena. Development of experimental tools for non-invasive measurements and imaging of airborne sound fields is of importance for linear and nonlinear nondestructive material testing as well as noise control in industrial or civil engineering applications. One possible solution is based on acousto-optic interaction, such as light diffraction imaging. The diffraction approach usually requires a sophisticated setup with fine optical alignment that is barely applicable in an industrial environment. This paper focuses on the application of the robust experimental tool of scanning laser vibrometry, which utilizes commercial off-the-shelf equipment. The imaging technique of air-coupled vibrometry (ACV) is based on the modulation of the optical path length by the acoustic pressure of the sound wave. The theoretical considerations focus on the analysis of acousto-optical phase modulation. The sensitivity of the ACV in detecting vibration velocity was estimated as ~1 mm/s. The ACV applications to imaging of linear airborne fields are demonstrated for leaky wave propagation and measurements of ultrasonic air-coupled transducers. For higher-intensity ultrasound, the classical nonlinear effect of second harmonic generation was measured in air. Another nonlinear application includes a direct observation of the nonlinear air-coupled emission (NACE) from damaged areas in solid materials. The source of the NACE is shown to be strongly localized around the damage and is proposed as a nonlinear "tag" to discern and image the defects.
NASA Technical Reports Server (NTRS)
2004-01-01
Topics covered include: Analysis of SSEM Sensor Data Using BEAM; Hairlike Percutaneous Photochemical Sensors; Video Guidance Sensors Using Remotely Activated Targets; Simulating Remote Sensing Systems; EHW Approach to Temperature Compensation of Electronics; Polymorphic Electronic Circuits; Micro-Tubular Fuel Cells; Whispering-Gallery-Mode Tunable Narrow-Band-Pass Filter; PVM Wrapper; Simulation of Hyperspectral Images; Algorithm for Controlling a Centrifugal Compressor; Hybrid Inflatable Pressure Vessel; Double-Acting, Locking Carabiners; Position Sensor Integral with a Linear Actuator; Improved Electromagnetic Brake; Flow Straightener for a Rotating-Drum Liquid Separator; Sensory-Feedback Exoskeletal Arm Controller; Active Suppression of Instabilities in Engine Combustors; Fabrication of Robust, Flat, Thinned, UV-Imaging CCDs; Chemical Thinning Process for Fabricating UV-Imaging CCDs; Pseudoslit Spectrometer; Waste-Heat-Driven Cooling Using Complex Compound Sorbents; Improved Refractometer for Measuring Temperatures of Drops; Semiconductor Lasers Containing Quantum Wells in Junctions; Phytoplankton-Fluorescence-Lifetime Vertical Profiler; Hexagonal Pixels and Indexing Scheme for Binary Images; Finding Minimum-Power Broadcast Trees for Wireless Networks; and Automation of Design Engineering Processes.
Joint Transform Correlation for face tracking: elderly fall detection application
NASA Astrophysics Data System (ADS)
Katz, Philippe; Aron, Michael; Alfalou, Ayman
2013-03-01
In this paper, an iterative tracking algorithm based on a non-linear JTC (Joint Transform Correlator) architecture and enhanced by a digital image processing method is proposed and validated. This algorithm is based on the computation of a correlation plane in which the reference image is updated at each frame. For that purpose, we use the JTC technique in real time to track a patient (target image) in a room fitted with a video camera. The correlation plane is used to localize the target image in the current video frame (frame i). Then, the reference image to be exploited in the next frame (frame i+1) is updated according to the previous one (frame i). To validate our algorithm, our work is divided into two parts: (i) a large study based on different sequences with several situations and different JTC parameters (decimation, non-linearity coefficient, size of the correlation plane, size of the region of interest...) is carried out in order to quantify their effects on tracking performance; (ii) the tracking algorithm is integrated into an elderly fall detection application. The first reference image is a face detected by means of Haar descriptors, and then localized in the new video image thanks to our tracking method. In order to avoid a bad update of the reference frame, a method based on a comparison of image intensity histograms is proposed and integrated in our algorithm. This step ensures robust tracking of the reference frame. This article focuses on optimisation and evaluation of the face tracking step. A supplementary step of fall detection, based on vertical acceleration and position, will be added and studied in further work.
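The histogram-based safeguard for the reference update can be illustrated in a few lines: the candidate reference is accepted only if its intensity histogram correlates strongly with the current one. The bin count and threshold are assumptions.

import numpy as np

def safe_reference_update(ref, candidate, thresh=0.9):
    # ref, candidate: 8-bit grayscale patches of the tracked face.
    h1, _ = np.histogram(ref, bins=64, range=(0, 255), density=True)
    h2, _ = np.histogram(candidate, bins=64, range=(0, 255), density=True)
    corr = np.corrcoef(h1, h2)[0, 1]       # histogram similarity
    return candidate if corr > thresh else ref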
Tri-linear color multi-linescan sensor with 200 kHz line rate
NASA Astrophysics Data System (ADS)
Schrey, Olaf; Brockherde, Werner; Nitta, Christian; Bechen, Benjamin; Bodenstorfer, Ernst; Brodersen, Jörg; Mayer, Konrad J.
2016-11-01
In this paper we present a newly developed linear CMOS high-speed line-scan sensor realized in a 0.35 μm CMOS OPTO process, achieving line rates of 200 kHz in true-RGB mode and 600 kHz in monochrome mode. In total, 60 lines are integrated in the sensor, allowing for electronic position adjustment. The lines are read out in a rolling shutter manner. The high readout speed is achieved by a column-wise organization of the readout chain. At full speed, the sensor provides RGB color images with a spatial resolution down to 50 μm. This feature enables a variety of applications such as quality assurance in print inspection, real-time surveillance of railroad tracks, in-line monitoring in flat panel fabrication lines and many more. The sensor has a fill factor close to 100%, preventing aliasing and color artefacts. Hence, the tri-linear technology is robust against aliasing, ensuring better inspection quality and thus less waste in production lines.
Exploiting structure: Introduction and motivation
NASA Technical Reports Server (NTRS)
Xu, Zhong Ling
1994-01-01
This annual report summarizes the research activities that were performed from 26 Jun. 1993 to 28 Feb. 1994. We continued to investigate the robust stability of systems whose transfer functions or characteristic polynomials are affine multilinear functions of parameters. An approach that differs from 'Stability by Linear Process' and that reduces the computational burden of checking the robust stability of the system with multilinear uncertainty was found for the low-order (second- and third-order) cases. We proved a crucial theorem, the so-called Face Theorem. Previously, we had proven Kharitonov's Vertex Theorem and the Edge Theorem by Bartlett. The detail of this proof is contained in the Appendix. This theorem provides a tool to describe the boundary of the image of the affine multilinear function. For SPR design, we have developed some new results. The third objective for this period is to design a controller for IHM by the H-infinity optimization technique. The details are presented in the Appendix.
Chao, Ming; Wei, Jie; Li, Tianfang; Yuan, Yading; Rosenzweig, Kenneth E; Lo, Yeh-Chi
2017-01-01
We present a study of extracting respiratory signals from cone beam computed tomography (CBCT) projections within the framework of the Amsterdam Shroud (AS) technique. Acquired prior to the radiotherapy treatment, CBCT projections were preprocessed for contrast enhancement by converting the original intensity images to attenuation images, from which the AS image was created. An adaptive robust z-normalization filtering was applied to further augment the weak oscillating structures locally. From the enhanced AS image, the respiratory signal was extracted using a two-step optimization approach to effectively reveal the large-scale regularity of the breathing signals. CBCT projection images from five patients acquired with the Varian Onboard Imager on the Clinac iX System Linear Accelerator (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Stable breathing signals can be reliably extracted using the proposed algorithm. Reference waveforms obtained using an air bellows belt (Philips Medical Systems, Cleveland, OH) were exported and compared to the AS-based signals. The average error for the enrolled patients between the estimated breaths per minute (bpm) and the reference waveform bpm was as low as −0.07, with a standard deviation of 1.58. The new algorithm outperformed the original AS technique for all patients by 8.5% to 30%. The impact of gantry rotation on the breathing signal was assessed with data acquired with a Quasar phantom (Modus Medical Devices Inc., London, Canada), and its effect on the signal frequency was found to be minimal. The new technique developed in this work will provide a practical solution to rendering a markerless breathing signal using CBCT projections for thoracic and abdominal patients. PMID:27008349
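The core AS construction is compact: each log-converted projection is collapsed along the lateral axis and the resulting columns are stacked over gantry angle, after which a per-row robust z-normalization (median/MAD here, as one plausible choice) enhances the weak breathing structure. This is a sketch, not the authors' exact filter.

import numpy as np

def amsterdam_shroud(projections):
    # projections: (n_views, rows, cols) CBCT intensity projections.
    atten = -np.log(np.clip(projections, 1e-6, None))  # intensity -> attenuation
    shroud = atten.sum(axis=2).T                       # rows x gantry angles
    med = np.median(shroud, axis=1, keepdims=True)
    mad = np.median(np.abs(shroud - med), axis=1, keepdims=True) + 1e-6
    return (shroud - med) / mad                        # robust z-normalization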
An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database.
Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang
2016-01-28
Vision navigation via real-time processing of data collected from imaging sensors can determine position and attitude without a high-performance global positioning system (GPS) or an inertial measurement unit (IMU). Vision navigation is widely used in indoor navigation, far space navigation, and multiple sensor-integrated mobile mapping. This paper proposes a novel vision navigation approach that is aided by imaging sensors and uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multiple sensor platforms in environments with poor GPS. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established based on the linear index of a road segment for fast image searches and retrieval. Third, a robust image matching algorithm is presented to search and match a real-time image with the GRID. Subsequently, the image matched with the real-time scene is used to calculate the 3D navigation parameters of multiple sensor platforms. Experimental results show that the proposed approach retrieves images efficiently and has navigation accuracies of 1.2 m in plane and 1.8 m in height under GPS loss for 5 min and within 1500 m.
Statistical analysis and machine learning algorithms for optical biopsy
NASA Astrophysics Data System (ADS)
Wu, Binlin; Liu, Cheng-hui; Boydston-White, Susie; Beckman, Hugh; Sriramoju, Vidyasagar; Sordillo, Laura; Zhang, Chunyuan; Zhang, Lin; Shi, Lingyan; Smith, Jason; Bailin, Jacob; Alfano, Robert R.
2018-02-01
Analyzing spectral or imaging data collected with various optical biopsy methods is often difficult due to the complexity of the underlying biology. Robust methods that can utilize the spectral or imaging data and detect the characteristic spectral or spatial signatures of different tissue types are challenging to develop but highly desired. In this study, we used various machine learning algorithms to analyze a spectral dataset acquired from normal and cancerous human skin tissue samples using resonance Raman spectroscopy with 532 nm excitation. The algorithms, including principal component analysis, nonnegative matrix factorization, and an autoencoder artificial neural network, are used to reduce the dimension of the dataset and detect features. A support vector machine with a linear kernel is used to classify the normal and cancerous tissue samples. The efficacies of the methods are compared.
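As a concrete illustration of the pipeline described (dimensionality reduction followed by a linear-kernel SVM), here is a minimal scikit-learn sketch; the spectra and labels are synthetic stand-ins, and only the PCA branch of the three feature extractors is shown.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for Raman spectra: rows are spectra, y is 0/1.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 1024))      # 120 spectra, 1024 wavenumber bins
y = rng.integers(0, 2, size=120)      # normal (0) vs cancerous (1)

# Dimensionality reduction followed by a linear-kernel SVM classifier.
clf = make_pipeline(StandardScaler(), PCA(n_components=10),
                    SVC(kernel="linear"))
print(cross_val_score(clf, X, y, cv=5).mean())
```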
Optical asymmetric cryptography based on amplitude reconstruction of elliptically polarized light
NASA Astrophysics Data System (ADS)
Cai, Jianjun; Shen, Xueju; Lei, Ming
2017-11-01
We propose a novel optical asymmetric image encryption method based on amplitude reconstruction of elliptically polarized light, which is free from the silhouette problem. The original image is first analytically separated into two phase-only masks, and then the two masks are encoded into the amplitudes of the orthogonal polarization components of an elliptically polarized light. Finally, the elliptically polarized light propagates through a linear polarizer, and the output intensity distribution is recorded by a CCD camera to obtain the ciphertext. The whole encryption procedure can be implemented with commonly used optical elements, and it combines a diffusion process and a confusion process. As a result, the proposed method achieves high robustness against iterative-algorithm-based attacks. Simulation results are presented to prove the validity of the proposed cryptography.
Robustness Analysis of Integrated LPV-FDI Filters and LTI-FTC System for a Transport Aircraft
NASA Technical Reports Server (NTRS)
Khong, Thuan H.; Shin, Jong-Yeob
2007-01-01
This paper proposes an analysis framework for the robustness analysis of a nonlinear dynamic system that can be represented by a polynomial linear parameter varying (PLPV) system with constant bounded uncertainty. The proposed analysis framework contains three key tools: 1) a function substitution method, which can convert a nonlinear system in polynomial form into a PLPV system; 2) a matrix-based linear fractional transformation (LFT) modeling approach, which can convert a PLPV system into an LFT system whose delta block includes the key uncertainty and scheduling parameters; and 3) mu-analysis, a well-known robustness analysis tool for linear systems. The proposed analysis framework is applied to evaluating the performance of the LPV fault detection and isolation (FDI) filters of the closed-loop system of a transport aircraft in the presence of unmodeled actuator dynamics and sensor gain uncertainty. The robustness analysis results are compared with nonlinear time simulations.
Robust control for uncertain structures
NASA Technical Reports Server (NTRS)
Douglas, Joel; Athans, Michael
1991-01-01
Viewgraphs on robust control for uncertain structures are presented. Topics covered include: robust linear quadratic regulator (RLQR) formulas; mismatched LQR design; RLQR design; interpretations of RLQR design; disturbance rejection; and performance comparisons: RLQR vs. mismatched LQR.
Robust linear discriminant analysis with distance based estimators
NASA Astrophysics Data System (ADS)
Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Ali, Hazlina
2017-11-01
Linear discriminant analysis (LDA) is a supervised classification technique concerning the relationship between a categorical variable and a set of continuous variables. The main objective of LDA is to create a function that distinguishes between populations and allocates future observations to previously defined populations. Under the assumptions of normality and homoscedasticity, LDA yields the optimal linear discriminant rule (LDR) between two or more groups. However, the optimality of LDA relies heavily on the sample mean and pooled sample covariance matrix, which are known to be sensitive to outliers. To alleviate these problems, a new robust LDA using distance-based estimators known as the minimum variance vector (MVV) is proposed in this study. The MVV estimators substitute the classical sample mean and classical sample covariance to form a robust linear discriminant rule (RLDR). Simulation and real-data studies were conducted to examine the performance of the proposed RLDR, measured in terms of misclassification error rates. The computational results show that the proposed RLDR is better than the classical LDR and comparable with an existing robust LDR.
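The structure of such a robust plug-in discriminant rule is easy to sketch. The MVV estimator itself is not available in standard libraries, so the sketch below substitutes the minimum covariance determinant (MCD), another robust location/scatter estimator, for the classical mean and pooled covariance; it illustrates the idea, not the paper's exact method.

```python
import numpy as np
from sklearn.covariance import MinCovDet

def robust_ldr(X0, X1):
    """Fisher-type linear discriminant rule with robust location/scatter.

    MCD is used here as a stand-in for the paper's MVV estimators: it
    replaces the outlier-sensitive sample mean and pooled covariance.
    """
    mcd0, mcd1 = MinCovDet().fit(X0), MinCovDet().fit(X1)
    m0, m1 = mcd0.location_, mcd1.location_
    n0, n1 = len(X0), len(X1)
    pooled = (n0 * mcd0.covariance_ + n1 * mcd1.covariance_) / (n0 + n1)
    w = np.linalg.solve(pooled, m1 - m0)        # discriminant direction
    c = w @ (m0 + m1) / 2                       # midpoint threshold
    return lambda X: (X @ w > c).astype(int)    # allocate to group 0/1

rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 1.0, size=(100, 3))
X1 = rng.normal(1.5, 1.0, size=(100, 3))
rule = robust_ldr(X0, X1)
print(rule(X1).mean())   # fraction of group-1 points allocated correctly
```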
Practical robustness measures in multivariable control system analysis. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Lehtomaki, N. A.
1981-01-01
The robustness of the stability of multivariable linear time-invariant feedback control systems with respect to model uncertainty is considered using frequency domain criteria. Available robustness tests are unified under a common framework based on the nature and structure of model errors. These results are derived using a multivariable version of Nyquist's stability theorem, in which the minimum singular value of the return difference transfer matrix is shown to be the multivariable generalization of the distance to the critical point on a single-input, single-output Nyquist diagram. Using the return difference transfer matrix, a very general robustness theorem is presented from which all of the robustness tests dealing with specific model errors may be derived. Robustness tests that explicitly utilize model error structure are able to guarantee feedback system stability in the face of model errors of larger magnitude than those that do not. The robustness of linear quadratic Gaussian control systems is also analyzed.
Towards a robust HDR imaging system
NASA Astrophysics Data System (ADS)
Long, Xin; Zeng, Xiangrong; Huangpeng, Qizi; Zhou, Jinglun; Feng, Jing
2016-07-01
High dynamic range (HDR) images can show more detail and luminance information on a typical display device than low dynamic range (LDR) images. We present a robust HDR imaging system that can deal with blurry LDR images, overcoming the limitations of most existing HDR methods. Experiments on real images show the effectiveness and competitiveness of the proposed method.
2017-01-01
Objective: Electrical Impedance Tomography (EIT) is a powerful non-invasive technique for imaging applications. The goal is to estimate the electrical properties of living tissues by measuring the potential at the boundary of the domain. Being safe with respect to patient health, non-invasive, and having no known hazards, EIT is an attractive and promising technology. However, it suffers from a particular technical difficulty, which consists of solving a nonlinear inverse problem in real time. Several nonlinear approaches have been proposed as a replacement for the linear solver, but in practice very few are capable of stable, high-quality, and real-time EIT imaging because of their very low robustness to errors and inaccurate modeling, or because they require considerable computational effort. Methods: In this paper, a post-processing technique based on an artificial neural network (ANN) is proposed to obtain a nonlinear solution to the inverse problem, starting from a linear solution. While common reconstruction methods based on ANNs estimate the solution directly from the measured data, the method proposed here enhances the solution obtained from a linear solver. Conclusion: Applying a linear reconstruction algorithm before applying an ANN reduces the effects of noise and modeling errors. Hence, this approach significantly reduces the error associated with solving 2D inverse problems using machine-learning-based algorithms. Significance: This work presents radical enhancements in the stability of nonlinear methods for biomedical EIT applications. PMID:29206856
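A minimal sketch of the post-processing idea, with synthetic data standing in for both the linear solver's output and the true conductivity images (the network architecture and noise model are illustrative assumptions):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training pairs: linear EIT reconstructions (inputs) and
# corresponding true conductivity images (targets), both flattened.
rng = np.random.default_rng(0)
true_imgs = rng.uniform(0.5, 2.0, size=(500, 16 * 16))
# Crude stand-in for a linear solver's noisy, error-laden output.
linear_recon = true_imgs + rng.normal(0, 0.2, true_imgs.shape)

# The ANN learns to map the linear solution toward the nonlinear one,
# rather than mapping raw boundary voltages to images directly.
net = MLPRegressor(hidden_layer_sizes=(256,), max_iter=500, random_state=0)
net.fit(linear_recon[:400], true_imgs[:400])
enhanced = net.predict(linear_recon[400:])
print(np.mean((enhanced - true_imgs[400:]) ** 2))  # residual error
```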
A robust optimization methodology for preliminary aircraft design
NASA Astrophysics Data System (ADS)
Prigent, S.; Maréchal, P.; Rondepierre, A.; Druot, T.; Belleville, M.
2016-05-01
This article focuses on a robust optimization of an aircraft preliminary design under operational constraints. According to engineers' know-how, the aircraft preliminary design problem can be modelled as an uncertain optimization problem whose objective (the cost or the fuel consumption) is almost affine, and whose constraints are convex. It is shown that this uncertain optimization problem can be approximated in a conservative manner by an uncertain linear optimization program, which enables the use of the techniques of robust linear programming of Ben-Tal, El Ghaoui, and Nemirovski [Robust Optimization, Princeton University Press, 2009]. This methodology is then applied to two real cases of aircraft design and numerical results are presented.
Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.
Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi
2017-12-01
We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.
A Statistical Analysis of Brain Morphology Using Wild Bootstrapping
Ibrahim, Joseph G.; Tang, Niansheng; Rowe, Daniel B.; Hao, Xuejun; Bansal, Ravi; Peterson, Bradley S.
2008-01-01
Methods for the analysis of brain morphology, including voxel-based morphometry and surface-based morphometry, have been used to detect associations between brain structure and covariates of interest, such as diagnosis, severity of disease, age, IQ, and genotype. The statistical analysis of morphometric measures usually involves two statistical procedures: 1) invoking a statistical model at each voxel (or point) on the surface of the brain or brain subregion, followed by mapping test statistics (e.g., t test) or their associated p values at each of those voxels; 2) correction for the multiple statistical tests conducted across all voxels on the surface of the brain region under investigation. We propose the use of new statistical methods for each of these procedures. We first use a heteroscedastic linear model to test the associations between the morphological measures at each voxel on the surface of the specified subregion (e.g., cortical or subcortical surfaces) and the covariates of interest. Moreover, we develop a robust test procedure that is based on a resampling method, called wild bootstrapping. This procedure assesses the statistical significance of the associations between a measure of given brain structure and the covariates of interest. The value of this robust test procedure lies in its computational simplicity and in its applicability to a wide range of imaging data, including data from both anatomical and functional magnetic resonance imaging (fMRI). Simulation studies demonstrate that this robust test procedure can accurately control the family-wise error rate. We demonstrate the application of this robust test procedure to the detection of statistically significant differences in the morphology of the hippocampus over time across gender groups in a large sample of healthy subjects. PMID:17649909
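In the regression setting, the wild bootstrap amounts to refitting on responses whose null-model residuals are multiplied by random sign flips, which preserves voxel-wise heteroscedasticity. A simplified one-covariate sketch (Rademacher weights, slope test only, no family-wise correction):

```python
import numpy as np

def wild_bootstrap_pvalue(x, y, n_boot=2000, seed=0):
    """Wild-bootstrap test of H0: slope = 0 in y = b0 + b1*x + e,
    valid under heteroscedastic errors (simplified one-covariate sketch)."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones_like(x), x])
    slope = np.linalg.lstsq(X, y, rcond=None)[0][1]
    resid0 = y - y.mean()                        # residuals under the null
    stats = np.empty(n_boot)
    for b in range(n_boot):
        v = rng.choice([-1.0, 1.0], size=len(y))  # Rademacher weights
        y_star = y.mean() + resid0 * v
        stats[b] = np.linalg.lstsq(X, y_star, rcond=None)[0][1]
    return np.mean(np.abs(stats) >= abs(slope))

x = np.linspace(0, 1, 80)
y = 0.3 * x + np.random.default_rng(1).normal(0, 0.1 + 0.5 * x, 80)
print(wild_bootstrap_pvalue(x, y))   # small p-value: slope detected
```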
Izquierdo-Garcia, David; Hansen, Adam E; Förster, Stefan; Benoit, Didier; Schachoff, Sylvia; Fürst, Sebastian; Chen, Kevin T; Chonde, Daniel B; Catana, Ciprian
2014-11-01
We present an approach for head MR-based attenuation correction (AC) based on the Statistical Parametric Mapping 8 (SPM8) software, which combines segmentation- and atlas-based features to provide a robust technique to generate attenuation maps (μ maps) from MR data in integrated PET/MR scanners. Coregistered anatomic MR and CT images of 15 glioblastoma subjects were used to generate the templates. The MR images from these subjects were first segmented into 6 tissue classes (gray matter, white matter, cerebrospinal fluid, bone, soft tissue, and air), which were then nonrigidly coregistered using a diffeomorphic approach. A similar procedure was used to coregister the anatomic MR data for a new subject to the template. Finally, the CT-like images obtained by applying the inverse transformations were converted to linear attenuation coefficients to be used for AC of PET data. The method was validated on 16 new subjects with brain tumors (n = 12) or mild cognitive impairment (n = 4) who underwent CT and PET/MR scans. The μ maps and corresponding reconstructed PET images were compared with those obtained using the gold standard CT-based approach and the Dixon-based method available on the Biograph mMR scanner. Relative change (RC) images were generated in each case, and voxel- and region-of-interest-based analyses were performed. The leave-one-out cross-validation analysis of the data from the 15 atlas-generation subjects showed small errors in brain linear attenuation coefficients (RC, 1.38% ± 4.52%) compared with the gold standard. Similar results (RC, 1.86% ± 4.06%) were obtained from the analysis of the atlas-validation datasets. The voxel- and region-of-interest-based analysis of the corresponding reconstructed PET images revealed quantification errors of 3.87% ± 5.0% and 2.74% ± 2.28%, respectively. The Dixon-based method performed substantially worse (the mean RC values were 13.0% ± 10.25% and 9.38% ± 4.97%, respectively). Areas closer to the skull showed the largest improvement. We have presented an SPM8-based approach for deriving the head μ map from MR data to be used for PET AC in integrated PET/MR scanners. Its implementation is straightforward and requires only the morphologic data acquired with a single MR sequence. The method is accurate and robust, combining the strengths of both segmentation- and atlas-based approaches while minimizing their drawbacks. © 2014 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Robust selectivity to two-object images in human visual cortex
Agam, Yigal; Liu, Hesheng; Papanastassiou, Alexander; Buia, Calin; Golby, Alexandra J.; Madsen, Joseph R.; Kreiman, Gabriel
2010-01-01
We can recognize objects in a fraction of a second in spite of the presence of other objects [1–3]. The responses in macaque areas V4 and inferior temporal cortex [4–15] to a neuron’s preferred stimuli are typically suppressed by the addition of a second object within the receptive field (see however [16, 17]). How can this suppression be reconciled with rapid visual recognition in complex scenes? One option is that certain “special categories” are unaffected by other objects [18], but this leaves the problem unsolved for other categories. Another possibility is that serial attentional shifts help ameliorate the problem of distractor objects [19–21]. Yet psychophysical studies [1–3], scalp recordings [1] and neurophysiological recordings [14, 16, 22–24] suggest that the initial sweep of visual processing contains a significant amount of information. We recorded intracranial field potentials in human visual cortex during presentation of flashes of two-object images. Visual selectivity from temporal cortex during the initial ~200 ms was largely robust to the presence of other objects. We could train linear decoders on the responses to isolated objects and decode information in two-object images. These observations are compatible with parallel, hierarchical and feed-forward theories of rapid visual recognition [25] and may provide a neural substrate to begin to unravel rapid recognition in natural scenes. PMID:20417105
Wegner, Kyle A; Keikhosravi, Adib; Eliceiri, Kevin W; Vezina, Chad M
2017-08-01
The low cost and simplicity of picrosirius red (PSR) staining have driven its popularity for collagen detection in tissue sections. We extended the versatility of this method by using fluorescent imaging to detect the PSR signal and applying automated quantification tools. We also developed the first PSR protocol that is fully compatible with multiplex immunostaining, making it possible to test whether collagen structure differs across immunohistochemically labeled regions of the tissue landscape. We compared our imaging method with two gold standards in collagen imaging, linear polarized light microscopy and second harmonic generation imaging, and found that it is at least as sensitive and robust to changes in sample orientation. As proof of principle, we used a genetic approach to overexpress beta catenin in a patchy subset of mouse prostate epithelial cells distinguished only by immunolabeling. We showed that collagen fiber length is significantly greater near beta catenin overexpressing cells than near control cells. Our fluorescent PSR imaging method is sensitive, reproducible, and offers a new way to guide region of interest selection for quantifying collagen in tissue sections.
Review of LFTs, LMIs, and mu. [Linear Fractional Transformations, Linear Matrix Inequalities]
NASA Technical Reports Server (NTRS)
Doyle, John; Packard, Andy; Zhou, Kemin
1991-01-01
The authors present a tutorial overview of linear fractional transformations (LFTs) and the role of the structured singular value, mu, and linear matrix inequalities (LMIs) in solving LFT problems. The authors first introduce the notation for LFTs and briefly discuss some of their properties. They then describe mu and its connections with LFTs. They focus on two standard notions of robust stability and performance, mu stability and performance and Q stability and performance, and their relationship is discussed. Comparisons with the L1 theory of robust performance with structured uncertainty are considered.
Li, Zukui; Floudas, Christodoulos A.
2012-01-01
Probabilistic guarantees on constraint satisfaction for robust counterpart optimization are studied in this paper. The robust counterpart optimization formulations studied are derived from box, ellipsoidal, polyhedral, “interval+ellipsoidal” and “interval+polyhedral” uncertainty sets (Li, Z., Ding, R., and Floudas, C.A., A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear and Robust Mixed Integer Linear Optimization, Ind. Eng. Chem. Res, 2011, 50, 10567). For those robust counterpart optimization formulations, their corresponding probability bounds on constraint satisfaction are derived for different types of uncertainty characteristic (i.e., bounded or unbounded uncertainty, with or without detailed probability distribution information). The findings of this work extend the results in the literature and provide greater flexibility for robust optimization practitioners in choosing tighter probability bounds so as to find less conservative robust solutions. Extensive numerical studies are performed to compare the tightness of the different probability bounds and the conservatism of different robust counterpart optimization formulations. Guiding rules for the selection of robust counterpart optimization models and for the determination of the size of the uncertainty set are discussed. Applications in production planning and process scheduling problems are presented. PMID:23329868
Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach
Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J.; Stayman, J. Webster; Zbijewski, Wojciech; Brock, Kristy K.; Daly, Michael J.; Chan, Harley; Irish, Jonathan C.; Siewerdsen, Jeffrey H.
2011-01-01
Purpose: A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values (“intensity”). Methods: A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. Results: The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six-case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5±2.8) mm compared to (3.5±3.0) mm with rigid registration. Conclusions: A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance. PMID:21626913
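The per-iteration, tissue-specific intensity correction is the distinctive step. The sketch below shows one conceptual version of it: cluster the currently overlapping voxels into crude intensity classes and fit a linear CBCT-to-CT map per class. The clustering rule, class count, and linear fit are illustrative assumptions; the paper's estimator differs in detail, and the surrounding Demons update loop is omitted.

```python
import numpy as np

def match_intensities(cbct_overlap, ct_overlap, n_classes=3):
    """One iteration's tissue-specific intensity correction: bin the
    currently overlapping voxels into crude intensity classes, then fit
    a linear CBCT->CT mapping per class. Conceptual sketch only."""
    edges = np.quantile(cbct_overlap, np.linspace(0, 1, n_classes + 1))
    labels = np.clip(np.digitize(cbct_overlap, edges[1:-1]), 0, n_classes - 1)
    corrected = np.empty_like(cbct_overlap, dtype=float)
    for c in range(n_classes):
        m = labels == c
        a, b = np.polyfit(cbct_overlap[m], ct_overlap[m], 1)
        corrected[m] = a * cbct_overlap[m] + b
    return corrected

# Inside a Demons loop one would alternate:
#   1) corrected_cbct = match_intensities(cbct[overlap], ct[overlap])
#   2) one Demons displacement update using the corrected intensities
rng = np.random.default_rng(0)
ct = rng.normal(size=10000)
cbct = 0.5 * ct + 0.2 + rng.normal(0, 0.05, ct.shape)  # mismatched "intensity"
print(np.corrcoef(match_intensities(cbct, ct), ct)[0, 1])
```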
NASA Astrophysics Data System (ADS)
Bukhari, Hassan J.
2017-12-01
In this paper, a framework for robust optimization of mechanical design problems and process systems with parametric uncertainty is presented using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, meaning it is minimally sensitive to perturbations in the parameters. The first method uses the price-of-robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting the number of parameters that are allowed to perturb. The second method uses the robust least squares method to determine the optimal parameters when the data itself, rather than the parameters, is subject to perturbations. The last method manages uncertainty by restricting the perturbation on parameters to improve sensitivity, similarly to Tikhonov regularization. The methods are implemented on two sets of problems, one linear and the other nonlinear. The methodology is compared with a prior method based on multiple Monte Carlo simulation runs; the comparison shows that the approach presented in this paper yields better performance.
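The third approach reduces, in its simplest form, to a Tikhonov-style penalized least squares problem. A minimal numpy sketch (the penalty weight lam is an illustrative knob for the robustness/accuracy tradeoff, not a value from the paper):

```python
import numpy as np

def regularized_robust_ls(A, b, lam=0.1):
    """Least squares with a Tikhonov-style penalty restricting parameter
    perturbations: min ||Ax - b||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 5))
x_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
b = A @ x_true + rng.normal(0, 0.1, 50)
# Larger lam shrinks the solution, trading accuracy for robustness.
for lam in (0.0, 0.1, 1.0):
    print(lam, np.round(regularized_robust_ls(A, b, lam), 2))
```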
A new look at the robust control of discrete-time Markov jump linear systems
NASA Astrophysics Data System (ADS)
Todorov, M. G.; Fragoso, M. D.
2016-03-01
In this paper, we make a foray into the role played by a set of four operators in the study of robust H2 and mixed H2/H∞ control problems for discrete-time Markov jump linear systems. These operators appear in the study of mean square stability for this class of systems. By means of new linear matrix inequality (LMI) characterisations of controllers, which include slack variables that, to some extent, separate the robustness and performance objectives, we introduce four alternative approaches to the design of controllers which are robustly stabilising and at the same time provide a guaranteed level of H2 performance. Since each operator provides a different degree of conservatism, the results are unified in the form of an iterative LMI technique for designing robust H2 controllers, whose convergence is attained in a finite number of steps. The method yields a new way of computing mixed H2/H∞ controllers, whose conservatism decreases with iteration. Two numerical examples illustrate the applicability of the proposed results for the control of a small unmanned aerial vehicle, and for an underactuated robotic arm.
Explicit asymmetric bounds for robust stability of continuous and discrete-time systems
NASA Technical Reports Server (NTRS)
Gao, Zhiqiang; Antsaklis, Panos J.
1993-01-01
The problem of robust stability in linear systems with parametric uncertainties is considered. Explicit stability bounds on uncertain parameters are derived and expressed in terms of linear inequalities for continuous systems, and inequalities with quadratic terms for discrete-time systems. Cases where system parameters are nonlinear functions of an uncertainty are also examined.
NASA Technical Reports Server (NTRS)
Turso, James A.; Litt, Jonathan S.
2004-01-01
A method for accommodating engine deterioration via a scheduled Linear Parameter Varying Quadratic Lyapunov Function (LPVQLF)-Based controller is presented. The LPVQLF design methodology provides a means for developing unconditionally stable, robust control of Linear Parameter Varying (LPV) systems. The controller is scheduled on the Engine Deterioration Index, a function of estimated parameters that relate to engine health, and is computed using a multilayer feedforward neural network. Acceptable thrust response and tight control of exhaust gas temperature (EGT) is accomplished by adjusting the performance weights on these parameters for different levels of engine degradation. Nonlinear simulations demonstrate that the controller achieves specified performance objectives while being robust to engine deterioration as well as engine-to-engine variations.
Computational methods of robust controller design for aerodynamic flutter suppression
NASA Technical Reports Server (NTRS)
Anderson, L. R.
1981-01-01
The development of Riccati iteration, a tool for the design and analysis of linear control systems, is examined. First, Riccati iteration is applied to the problem of pole placement and order reduction in two-time-scale control systems. Order reduction, yielding a good approximation to the original system, is demonstrated using a 16th-order linear model of a turbofan engine. Next, a numerical method for solving the Riccati equation is presented and demonstrated on a set of eighth-order random examples. A literature review of robust controller design methods follows, including a number of methods for reducing the trajectory and performance index sensitivity in linear regulators. Lastly, robust controller design for large parameter variations is discussed.
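One standard instance of Riccati iteration is the Newton-Kleinman scheme, in which each step solves a Lyapunov equation for the current closed loop. The sketch below is that generic scheme, not necessarily the thesis' exact algorithm; the plant matrices and the stabilizing initial gain K0 are illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def riccati_iteration(A, B, Q, R, K0, n_iter=20):
    """Newton-Kleinman iteration for the continuous-time algebraic
    Riccati equation. K0 must be stabilizing for A - B K0."""
    K = K0
    for _ in range(n_iter):
        Acl = A - B @ K
        # Solve Acl^T P + P Acl = -(Q + K^T R K) for P.
        P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
        K = np.linalg.solve(R, B.T @ P)   # updated state-feedback gain
    return P, K

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
P, K = riccati_iteration(A, B, Q, R, K0=np.array([[1.0, 1.0]]))
print(np.round(K, 3))   # converged LQR gain
```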
Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei
2015-01-01
Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique that uses a modified quality measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low-contrast images adaptively. Contrast enhancement is obtained by global transformation of the input intensities; the method employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: the threshold, the entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension of a local enhancement technique. The performance of the proposed method has been compared with existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods such as the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization, in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper. PMID:25784928
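The transformation step itself is compact: normalized intensities are mapped through the regularized incomplete beta function, whose shape parameters the CS-PSO search would tune. A sketch with fixed, illustrative (a, b) values standing in for the optimizer:

```python
import numpy as np
from scipy.special import betainc

def beta_enhance(img, a=2.0, b=2.0):
    """Global contrast enhancement via the regularized incomplete beta
    function: normalize intensities to [0, 1], map through I_x(a, b),
    and rescale. (a, b) would be tuned by CS-PSO; fixed here."""
    x = (img.astype(float) - img.min()) / (np.ptp(img) + 1e-12)
    y = betainc(a, b, x)                 # S-shaped transfer curve
    return (y * 255).astype(np.uint8)

img = np.random.default_rng(0).uniform(80, 150, (64, 64)).astype(np.uint8)
print(img.std(), beta_enhance(img).std())   # contrast (std) increases
```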
Robust linear quadratic designs with respect to parameter uncertainty
NASA Technical Reports Server (NTRS)
Douglas, Joel; Athans, Michael
1992-01-01
The authors derive a linear quadratic regulator (LQR) which is robust to parametric uncertainty by using the overbounding method of I. R. Petersen and C. V. Hollot (1986). The resulting controller is determined from the solution of a single modified Riccati equation. It is shown that, when applied to a structural system, the controller gains add robustness by minimizing the potential energy of uncertain stiffness elements, and minimizing the rate of dissipation of energy through uncertain damping elements. A worst-case disturbance in the direction of the uncertainty is also considered. It is proved that performance robustness has been increased with the robust LQR when compared to a mismatched LQR design where the controller is designed on the nominal system, but applied to the actual uncertain system.
A robust watermarking scheme using lifting wavelet transform and singular value decomposition
NASA Astrophysics Data System (ADS)
Bhardwaj, Anuj; Verma, Deval; Verma, Vivek Singh
2017-01-01
The present paper proposes a robust image watermarking scheme using the lifting wavelet transform (LWT) and singular value decomposition (SVD). A second-level LWT is applied to the host/cover image to decompose it into different subbands. SVD is used to obtain the singular values of the watermark image, and these singular values are then updated with the singular values of the LH2 subband. The algorithm is tested on a number of benchmark images, and it is found that the present algorithm is robust against different geometric and image processing operations. A comparison of the proposed scheme with other existing schemes shows that the present scheme is better not only in terms of robustness but also in terms of imperceptibility.
A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF
Ali, Nouman; Bajwa, Khalid Bashir; Sablatnig, Robert; Chatzichristofis, Savvas A.; Iqbal, Zeshan; Rashid, Muhammad; Habib, Hafiz Adnan
2016-01-01
With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local features representations are selected for image retrieval because SIFT is more robust to the change in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration. PMID:27315101
Robust image watermarking using DWT and SVD for copyright protection
NASA Astrophysics Data System (ADS)
Harjito, Bambang; Suryani, Esti
2017-02-01
The objective of this paper is to propose a robust watermarking scheme combining the Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD). The RGB image is called the cover medium, and the watermark image is converted into grayscale. Both are then transformed using the DWT so that they can be split into several subbands, namely LL2, LH2, and HL2. The watermark image is embedded into the cover medium in the LL2 subband. This scheme aims to obtain a higher robustness level than the previous method, which performs SVD matrix factorization of the image for copyright protection. The experimental results show that the proposed method is robust against several image processing attacks such as Gaussian noise, Poisson noise, and salt-and-pepper noise, with average Normalized Correlation (NC) values of 0.574863, 0.889784, and 0.889782, respectively. The watermark image can be detected and extracted.
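A minimal sketch of the embedding side of such a DWT-SVD scheme (grayscale cover, Haar wavelet, additive rule on singular values; the wavelet choice and strength alpha are illustrative assumptions, and extraction plus the RGB handling are omitted):

```python
import numpy as np
import pywt

def embed_watermark(cover_gray, watermark_gray, alpha=0.05):
    """Embed watermark singular values into the LL2 subband of the cover.
    alpha is an illustrative embedding strength."""
    LL1, d1 = pywt.dwt2(cover_gray.astype(float), "haar")   # level 1
    LL2, d2 = pywt.dwt2(LL1, "haar")                        # level 2
    U, S, Vt = np.linalg.svd(LL2, full_matrices=False)
    Sw = np.linalg.svd(watermark_gray.astype(float), compute_uv=False)
    S_marked = S + alpha * Sw[: len(S)]        # additive rule on S
    LL2_marked = (U * S_marked) @ Vt           # rebuild LL2
    LL1_marked = pywt.idwt2((LL2_marked, d2), "haar")
    return pywt.idwt2((LL1_marked, d1), "haar")

cover = np.random.default_rng(0).integers(0, 256, (256, 256))
mark = np.random.default_rng(1).integers(0, 256, (64, 64))
print(embed_watermark(cover, mark).shape)   # (256, 256)
```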
Robust global image registration based on a hybrid algorithm combining Fourier and spatial domain techniques
Crabtree, Peter N.; Seanor, Collin ...
2012-09-01
... demonstrate performance of a hybrid algorithm. These results are from analysis of a set of images of an ISO 12233 [12] resolution chart captured in the ...
Robust Computation of Linear Models, or How to Find a Needle in a Haystack
2012-02-17
robustly, project it onto a sphere, and then apply standard PCA. This approach is due to [LMS+99]. Maronna et al. [MMY06] recommend it as a preferred ... of this form is due to Chandrasekaran et al. [CSPW11]. Given an observed matrix X, they propose to solve the semidefinite problem: minimize ‖P‖_{S1} + γ ... The regularization parameter γ negotiates a tradeoff between the two goals. Candès et al. [CLMW11] study the performance of (2.1) for robust linear
Supervised linear dimensionality reduction with robust margins for object recognition
NASA Astrophysics Data System (ADS)
Dornaika, F.; Assoum, A.
2013-01-01
Linear Dimensionality Reduction (LDR) techniques have become increasingly important in computer vision and pattern recognition since they permit a relatively simple mapping of data onto a lower-dimensional subspace, leading to simple and computationally efficient classification strategies. Recently, many linear discriminant methods have been developed in order to reduce the dimensionality of visual data and to enhance the discrimination between different groups or classes. Many existing linear embedding techniques rely on the use of local margins to achieve good discrimination performance. However, dealing with outliers and within-class diversity has not been addressed by margin-based embedding methods. In this paper, we explore the use of different margin-based linear embedding methods. More precisely, we propose to use the concepts of Median miss and Median hit to build robust margin-based criteria. Based on such margins, we seek the projection directions (linear embedding) such that the sum of local margins is maximized. Our proposed approach has been applied to the problem of appearance-based face recognition. Experiments performed on four public face databases show that the proposed approach can give better generalization performance than the classic Average Neighborhood Margin Maximization (ANMM). Moreover, thanks to the use of robust margins, the proposed method degrades gracefully when label outliers contaminate the training data set. In particular, we show that the concept of Median hit was crucial for obtaining robust performance in the presence of outliers.
Prinyakupt, Jaroonrut; Pluempitiwiriyawej, Charnchai
2015-06-30
Blood smear microscopic images are routinely investigated by haematologists to diagnose most blood diseases. However, the task is quite tedious and time consuming. An automatic detection and classification of white blood cells within such images can accelerate the process tremendously. In this paper we propose a system to locate white blood cells within microscopic blood smear images, segment them into nucleus and cytoplasm regions, extract suitable features and finally, classify them into five types: basophil, eosinophil, neutrophil, lymphocyte and monocyte. Two sets of blood smear images were used in this study's experiments. Dataset 1, collected from Rangsit University, consisted of normal peripheral blood slides under a light microscope with 100× magnification; 555 images with 601 white blood cells were captured by a Nikon DS-Fi2 high-definition color camera and saved in JPG format of size 960 × 1,280 pixels at 15 pixels per 1 μm resolution. In dataset 2, 477 cropped white blood cell images were downloaded from CellaVision.com. They are in JPG format of size 360 × 363 pixels, with a resolution estimated to be 10 pixels per 1 μm. The proposed system comprises a pre-processing step, nucleus segmentation, cell segmentation, feature extraction, feature selection and classification. The main concept of the segmentation algorithm employed uses white blood cells' morphological properties and the calibrated size of a real cell relative to image resolution. The segmentation process combines thresholding, morphological operations and ellipse curve fitting. Consequently, several features were extracted from the segmented nucleus and cytoplasm regions. Prominent features were then chosen by a greedy search algorithm called sequential forward selection. Finally, with the set of selected prominent features, both linear and naïve Bayes classifiers were applied for performance comparison. This system was tested on normal peripheral blood smear slide images from the two datasets. Two sets of comparison were performed: segmentation and classification. The automatically segmented results were compared to those obtained manually by a haematologist. It was found that the proposed method is consistent and coherent in both datasets, with Dice similarity of 98.9% and 91.6% for the average segmented nucleus and cell regions, respectively. Furthermore, the overall correct rate in the classification phase is about 98% and 94% for the linear and naïve Bayes models, respectively. The proposed system, based on normal white blood cell morphology and its characteristics, was applied to two different datasets. The calibrated segmentation process is fast, robust, efficient and coherent on both datasets. Meanwhile, the classification of normal white blood cells into five types shows high sensitivity in both linear and naïve Bayes models, with slightly better results in the linear classifier.
Comparison of different methods for gender estimation from face image of various poses
NASA Astrophysics Data System (ADS)
Ishii, Yohei; Hongo, Hitoshi; Niwa, Yoshinori; Yamamoto, Kazuhiko
2003-04-01
Recently, gender estimation from face images has been studied mainly for frontal facial images. However, it is difficult to obtain such facial images consistently in application systems for security, surveillance and marketing research. In order to build such systems, a method is required to estimate gender from images of various facial poses. In this paper, three different classifiers are compared in appearance-based gender estimation using four directional features (FDF). The classifiers are linear discriminant analysis (LDA), Support Vector Machines (SVMs) and Sparse Network of Winnows (SNoW). Face images used for the experiments were obtained from 35 viewpoints, with the viewing direction varying by +/-45 degrees horizontally and +/-30 degrees vertically at 15-degree intervals. Although LDA showed the best performance for frontal facial images, the SVM with a Gaussian kernel gave the best performance (86.0%) over the facial images from all 35 viewpoints. From these results, the SVM with a Gaussian kernel is considered robust to changes in viewpoint when estimating gender. Furthermore, its estimation rate at each viewpoint was quite close to the average estimation rate over the 35 viewpoints. This suggests that the method can reasonably estimate gender within the range of viewpoints examined by learning face images from multiple directions for each class.
Adaptive ISAR Imaging of Maneuvering Targets Based on a Modified Fourier Transform.
Wang, Binbin; Xu, Shiyou; Wu, Wenzhen; Hu, Pengjiang; Chen, Zengping
2018-04-27
Focusing on inverse synthetic aperture radar (ISAR) imaging of maneuvering targets, this paper presents a new imaging method that works well when the target's maneuvering is not too severe. After translational motion compensation, we describe the equivalent rotation of a maneuvering target by two variables: the relative chirp rate of the linear frequency modulated (LFM) signal and the Doppler focus shift. The first variable indicates the target's motion status, and the second represents the possible residual error of the translational motion compensation. With them, a modified Fourier transform matrix is constructed and then used for cross-range compression. Consequently, the imaging of a maneuvering target is converted into a two-dimensional parameter optimization problem in which a stable and clear ISAR image is guaranteed. A gradient descent optimization scheme is employed to obtain the accurate relative chirp rate and Doppler focus shift. Moreover, we design an efficient and robust initialization process for the gradient descent method; thus, well-focused ISAR images of maneuvering targets can be achieved adaptively. Human intervention is not needed, which is quite convenient for practical ISAR imaging systems. Compared to precedent imaging methods, the new method achieves better imaging quality at reasonable computational cost. Simulation results are provided to validate the effectiveness and advantages of the proposed method.
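The two-parameter optimization can be illustrated with a toy entropy-minimization loop. The sketch below compensates a synthetic chirped signal and descends a finite-difference gradient of the image entropy; the cost, step sizes, and compensation form are illustrative stand-ins for the paper's modified Fourier transform matrix and initialization scheme. Note that in this single-scatterer toy the entropy is insensitive to the shift parameter, so only the chirp term is really exercised.

```python
import numpy as np

def image_entropy(img):
    p = np.abs(img) ** 2
    p = p / p.sum()
    return -(p * np.log(p + 1e-12)).sum()

def focus(phase_hist, chirp, shift):
    """Toy cross-range compression with chirp-rate / Doppler-shift
    compensation applied before the FFT."""
    n = phase_hist.shape[-1]
    t = np.arange(n) / n
    comp = np.exp(-1j * np.pi * chirp * t**2 - 2j * np.pi * shift * t)
    return np.fft.fft(phase_hist * comp, axis=-1)

def refine(phase_hist, theta=np.zeros(2), lr=0.5, steps=200, eps=1e-3):
    """Finite-difference gradient descent on image entropy over the two
    parameters (relative chirp rate, Doppler focus shift)."""
    cost = lambda th: image_entropy(focus(phase_hist, *th))
    for _ in range(steps):
        g = np.array([(cost(theta + eps * e) - cost(theta - eps * e))
                      / (2 * eps) for e in np.eye(2)])
        theta = theta - lr * g
    return theta

n = 256
t = np.arange(n) / n
sig = np.exp(1j * np.pi * 3.0 * t**2 + 2j * np.pi * 10 * t)  # chirped scatterer
print(np.round(refine(sig[None, :]), 2))  # theta[0] should move toward 3.0
```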
NASA Technical Reports Server (NTRS)
Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje Srinivas
2009-01-01
This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method to metrics-driven adaptive control. The bounded linear stability analysis method is used for analyzing the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by stability metrics to achieve robustness. By applying the bounded linear stability analysis method, the adaptive gain is adjusted during adaptation in order to meet certain phase margin requirements. The metrics-driven adaptive control approach is evaluated for a linear model of a damaged twin-engine generic transport aircraft. The analysis shows that the system with the adjusted adaptive gain becomes more robust to unmodeled dynamics or time delay.
A Low-Power Wireless Image Sensor Node with Noise-Robust Moving Object Detection and a Region-of-Interest Based Rate Controller
Ko, Jong Hwan ...
2017-03-01
This paper presents a low-power wireless image sensor node with a noise-robust moving object detection and a region-of-interest based rate controller ...
Multiscale approach to contour fitting for MR images
NASA Astrophysics Data System (ADS)
Rueckert, Daniel; Burger, Peter
1996-04-01
We present a new multiscale contour fitting process which combines information about the image and the contour of the object at different levels of scale. The algorithm is based on energy minimizing deformable models but avoids some of the problems associated with these models. The segmentation algorithm starts by constructing a linear scale-space of an image through convolution of the original image with a Gaussian kernel at different levels of scale, where the scale corresponds to the standard deviation of the Gaussian kernel. At high levels of scale large scale features of the objects are preserved while small scale features, like object details as well as noise, are suppressed. In order to maximize the accuracy of the segmentation, the contour of the object of interest is then tracked in scale-space from coarse to fine scales. We propose a hybrid multi-temperature simulated annealing optimization to minimize the energy of the deformable model. At high levels of scale the SA optimization is started at high temperatures, enabling the SA optimization to find a global optimal solution. At lower levels of scale the SA optimization is started at lower temperatures (at the lowest level the temperature is close to 0). This enforces a more deterministic behavior of the SA optimization at lower scales and leads to an increasingly local optimization as high energy barriers cannot be crossed. The performance and robustness of the algorithm have been tested on spin-echo MR images of the cardiovascular system. The task was to segment the ascending and descending aorta in 15 datasets of different individuals in order to measure regional aortic compliance. The results show that the algorithm is able to provide more accurate segmentation results than the classic contour fitting process and is at the same time very robust to noise and initialization.
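Constructing the linear scale-space that the coarse-to-fine tracking runs over is a one-liner per level. A small scipy sketch (the sigma schedule and toy image are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def linear_scale_space(image, sigmas=(8.0, 4.0, 2.0, 1.0, 0.0)):
    """Linear (Gaussian) scale-space, coarse to fine: the contour would
    be fitted at the coarsest level first (high-temperature SA), then
    tracked toward sigma -> 0 (near-deterministic SA)."""
    return [gaussian_filter(image.astype(float), s) for s in sigmas]

img = np.zeros((128, 128))
img[40:90, 50:80] = 1.0                 # toy "aorta" cross-section
for s, level in zip((8, 4, 2, 1, 0), linear_scale_space(img)):
    print(s, round(level.max(), 3))     # fine scales keep sharp boundaries
```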
Robust synthetic biology design: stochastic game theory approach.
Chen, Bor-Sen; Chang, Chia-Hung; Lee, Hsiao-Ching
2009-07-15
Synthetic biology engineers artificial biological systems to investigate natural biological phenomena and for a variety of applications. However, the development of synthetic gene networks is still difficult, and most newly created gene networks are non-functioning due to uncertain initial conditions and disturbances of the extra-cellular environment on the host cell. At present, how to design a robust synthetic gene network that works properly under these uncertain factors is the most important topic in synthetic biology. A robust regulation design is proposed for a stochastic synthetic gene network to achieve the prescribed steady states under these uncertain factors from the minimax regulation perspective. This minimax regulation design problem can be transformed into an equivalent stochastic game problem. Since it is not easy to solve the robust regulation design problem of synthetic gene networks directly by nonlinear stochastic game methods, the Takagi-Sugeno (T-S) fuzzy model is used to approximate the nonlinear synthetic gene network via the linear matrix inequality (LMI) technique through the Robust Control Toolbox in Matlab. Finally, an in silico example is given to illustrate the design procedure and to confirm the efficiency and efficacy of the proposed robust gene design method. http://www.ee.nthu.edu.tw/bschen/SyntheticBioDesign_supplement.pdf
Robust digital image watermarking using distortion-compensated dither modulation
NASA Astrophysics Data System (ADS)
Li, Mianjie; Yuan, Xiaochen
2018-04-01
In this paper, we propose a robust digital image watermarking method based on robust feature extraction and Distortion-Compensated Dither Modulation (DC-DM). Our proposed local watermarking method provides stronger robustness and better flexibility than traditional global watermarking methods. We improve robustness by introducing feature extraction and the DC-DM method. To extract the robust feature points, we propose a DAISY-based Robust Feature Extraction (DRFE) method that employs the DAISY descriptor and applies filtering based on the entropy calculation. The experimental results show that the proposed method achieves satisfactory robustness while preserving watermark imperceptibility, compared with other existing methods.
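The DC-DM quantization step itself is standard (Chen and Wornell's scheme) and easy to sketch; the step size, compensation factor alpha, and dither below are illustrative defaults, and the feature-point machinery is omitted.

```python
import numpy as np

def dcdm_embed(x, bit, delta=1.0, alpha=0.7, dither=0.0):
    """Distortion-compensated dither modulation: quantize with the
    dithered quantizer for `bit`, then move only a fraction alpha of
    the way toward the quantizer output (the compensation)."""
    d = dither + bit * delta / 2.0
    q = delta * np.round((x - d) / delta) + d
    return x + alpha * (q - x)

def dcdm_decode(y, delta=1.0, dither=0.0):
    """Minimum-distance decoding over the two dithered lattices."""
    dists = []
    for bit in (0, 1):
        d = dither + bit * delta / 2.0
        q = delta * np.round((y - d) / delta) + d
        dists.append(np.abs(y - q))
    return int(np.argmin(dists))

y = dcdm_embed(np.array(3.37), bit=1, alpha=0.8)
print(dcdm_decode(y))  # -> 1 (robust to moderate additive noise)
```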
NASA Astrophysics Data System (ADS)
Kanamori, Katsuhiro
2016-07-01
An endoscopic image processing technique for enhancing the appearance of microstructures on translucent mucosae is described. This technique employs two pairs of co- and cross-polarization images under two different linearly polarized lights, from which the averaged subtracted polarization image (AVSPI) is calculated. Experiments were conducted on an acrylic phantom and excised porcine stomach tissue with a manual experimental setup consisting of ring-type lighting, two rotating polarizers, and a color camera; better results were achieved with the proposed method than with conventional color intensity image processing. An objective evaluation method based on texture analysis was developed and used to evaluate the enhanced microstructure images. This paper introduces two types of online, rigid-type polarimetric endoscopic implementations using a polarized ring-shaped LED and a polarimetric camera. The first type uses a beam-splitter-type color polarimetric camera, and the second uses a single-chip monochrome polarimetric camera. Microstructures on the mucosa surface were enhanced robustly with these online endoscopes regardless of the difference in the extinction ratio of each device. These results show that polarimetric endoscopy using AVSPI is both effective and practical for hardware implementation.
Robust Stabilization of Uncertain Systems Based on Energy Dissipation Concepts
NASA Technical Reports Server (NTRS)
Gupta, Sandeep
1996-01-01
Robust stability conditions obtained through generalization of the notion of energy dissipation in physical systems are discussed in this report. Linear time-invariant (LTI) systems which dissipate energy corresponding to quadratic power functions are characterized in the time domain and the frequency domain, in terms of linear matrix inequalities (LMIs) and algebraic Riccati equations (AREs). A novel characterization of strictly dissipative LTI systems is introduced in this report. Sufficient conditions in terms of dissipativity and strict dissipativity are presented for (1) stability of the feedback interconnection of dissipative LTI systems, (2) stability of dissipative LTI systems with memoryless feedback nonlinearities, and (3) quadratic stability of uncertain linear systems. It is demonstrated that the framework of dissipative LTI systems investigated in this report unifies and extends small gain, passivity, and sector conditions for stability. Techniques for selecting power functions for characterization of uncertain plants and robust controller synthesis based on these stability results are introduced. A spring-mass-damper example is used to illustrate the application of these methods for robust controller synthesis.
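The LMI characterizations referred to above can be checked numerically as semidefinite feasibility problems. A minimal sketch using the cvxpy modeling package (an assumed dependency) for the simplest such test, quadratic (Lyapunov) stability:

```python
import numpy as np
import cvxpy as cp

# Quadratic (Lyapunov) stability as an LMI feasibility problem:
# find P = P^T > 0 with A^T P + P A < 0.
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
P = cp.Variable((2, 2), symmetric=True)
eps = 1e-3   # small margin to enforce strict inequalities
constraints = [P >> eps * np.eye(2),
               A.T @ P + P @ A << -eps * np.eye(2)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status, np.round(P.value, 3))  # 'optimal' => quadratically stable
```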
Robust L1-norm two-dimensional linear discriminant analysis.
Li, Chun-Na; Shao, Yuan-Hai; Deng, Nai-Yang
2015-05-01
In this paper, we propose an L1-norm two-dimensional linear discriminant analysis (L1-2DLDA) with robust performance. Different from the conventional two-dimensional linear discriminant analysis with L2-norm (L2-2DLDA), where the optimization problem is transformed into a generalized eigenvalue problem, the optimization problem in our L1-2DLDA is solved by a simple, justifiable iterative technique whose convergence is guaranteed. Compared with L2-2DLDA, our L1-2DLDA is more robust to outliers and noise since the L1-norm is used. This is supported by our preliminary experiments on a toy example and on face datasets, which show the improvement of our L1-2DLDA over L2-2DLDA. Copyright © 2015 Elsevier Ltd. All rights reserved.
Sparse Coding and Counting for Robust Visual Tracking
Liu, Risheng; Wang, Jing; Shang, Xiaoke; Wang, Yiyang; Su, Zhixun; Cai, Yu
2016-01-01
In this paper, we propose a novel sparse coding and counting method under a Bayesian framework for visual tracking. In contrast to existing methods, the proposed method employs a combination of the L0 and L1 norms to regularize the linear coefficients of an incrementally updated linear basis. The sparsity constraint enables the tracker to handle difficult challenges, such as occlusion and image corruption, effectively. To achieve real-time processing, we propose a fast and efficient numerical algorithm for solving the proposed model. Although it is an NP-hard problem, the proposed accelerated proximal gradient (APG) approach is guaranteed to converge to a solution quickly. In addition, we provide a closed-form solution for the combined L0- and L1-regularized representation to obtain better sparsity. Experimental results on challenging video sequences demonstrate that the proposed method achieves state-of-the-art results in both accuracy and speed. PMID:27992474
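The closed-form step for a combined L0+L1 penalty is typically an elementwise proximal operator. Below is a minimal sketch following the standard derivation (the paper's exact model and notation are assumed): soft-threshold for the L1 term, then keep a coefficient only if doing so beats setting it to zero once the L0 cost is counted.

```python
import numpy as np

def prox_l0_l1(v, lam0, lam1):
    """Elementwise prox of lam0*||x||_0 + lam1*||x||_1 (standard derivation;
    the paper's exact formulation is assumed)."""
    s = np.sign(v) * np.maximum(np.abs(v) - lam1, 0.0)   # soft threshold (L1)
    keep_cost = 0.5 * (s - v) ** 2 + lam1 * np.abs(s) + lam0 * (s != 0)
    zero_cost = 0.5 * v ** 2
    return np.where(keep_cost < zero_cost, s, 0.0)       # hard decision (L0)
```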
Zhu, Bangyan; Li, Jiancheng; Chu, Zhengwei; Tang, Wei; Wang, Bin; Li, Dawei
2016-01-01
Spatial and temporal variations in the vertical stratification of the troposphere introduce significant propagation delays in interferometric synthetic aperture radar (InSAR) observations. Observations of small-amplitude surface deformations and regional subsidence rates are plagued by tropospheric delays, which are strongly correlated with topographic height variations. Phase-based tropospheric correction techniques assuming a linear relationship between interferometric phase and topography have been exploited and developed, with mixed success. Producing robust estimates of tropospheric phase delay, however, plays a critical role in increasing the accuracy of InSAR measurements. Meanwhile, few phase-based correction methods account for the spatially variable tropospheric delay over larger study regions. Here, we present a robust and multi-weighted approach to estimate the correlation between phase and topography that is relatively insensitive to confounding processes such as regional subsidence over larger regions as well as under varying tropospheric conditions. An expanded form of robust least squares is introduced to estimate the spatially variable correlation between phase and topography by splitting the interferograms into multiple blocks. Within each block, the correlation is robustly estimated from the band-filtered phase and topography. Phase-elevation ratios are multiply weighted and extrapolated to each persistent scatterer (PS) pixel. We applied the proposed method to Envisat ASAR images over the Southern California area, USA, and found that our method mitigated the atmospheric noise better than the conventional phase-based method. The corrected ground surface deformation also agreed better with GPS measurements. PMID:27420066
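A per-block robust fit of the phase-elevation ratio can be sketched with an off-the-shelf M-estimator standing in for the paper's expanded multi-weighted robust least squares; the Python below uses scikit-learn's Huber regressor and is illustrative only (band filtering and block splitting are assumed to happen upstream):

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

def block_phase_elevation_ratios(phase, elevation, block_masks):
    """Robust per-block phase-elevation ratios. A Huber M-estimator stands
    in for the paper's expanded multi-weighted robust least squares."""
    ratios = []
    for mask in block_masks:
        h = elevation[mask].reshape(-1, 1)
        model = HuberRegressor().fit(h, phase[mask])   # down-weights outliers
        ratios.append(model.coef_[0])                  # rad per metre
    return np.array(ratios)
```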
Multiple-image hiding using super resolution reconstruction in high-frequency domains
NASA Astrophysics Data System (ADS)
Li, Xiao-Wei; Zhao, Wu-Xiang; Wang, Jun; Wang, Qiong-Hua
2017-12-01
In this paper, a robust multiple-image hiding method using computer-generated integral imaging and a modified super-resolution reconstruction algorithm is proposed. In our work, the host image is first transformed into frequency domains by cellular automata (CA); to preserve the quality of the stego-image, the secret images are embedded into the CA high-frequency domains. The proposed method has the following advantages: (1) robustness to geometric attacks, because of the memory-distributed property of elemental images; (2) improved quality of the reconstructed secret images, as the scheme utilizes the modified super-resolution reconstruction algorithm. The simulation results show that the proposed multiple-image hiding method outperforms similar hiding methods and is robust to several attacks, e.g., Gaussian noise and JPEG compression attacks.
Enhanced echolocation via robust statistics and super-resolution of sonar images
NASA Astrophysics Data System (ADS)
Kim, Kio
Echolocation is a process in which an animal uses acoustic signals to exchange information with its environment. In a recent study, Neretti et al. showed that the use of robust statistics can significantly improve the resiliency of echolocation against noise and enhance its accuracy by suppressing the development of sidelobes in the processing of an echo signal. In this research, the use of robust statistics is extended to problems in underwater exploration. The dissertation consists of two parts. Part I describes how robust statistics can enhance the identification of target objects, which in this case are cylindrical containers filled with four different liquids. In particular, this work employs a variation of an existing robust estimator called an L-estimator, which was first suggested by Koenker and Bassett. As pointed out by Au et al., a 'highlight interval' is an important feature, and it is closely related to many other features known to be crucial for dolphin echolocation. The varied L-estimator described in this text is used to enhance the detection of highlight intervals, which eventually leads to a successful classification of echo signals. Part II extends the problem into two dimensions. Thanks to advances in materials and computer technology, various sonar imaging modalities are available on the market. By registering acoustic images from such video sequences, one can extract more information on the region of interest. Computer vision and image processing allow the application of robust statistics to the acoustic images produced by forward-looking sonar systems, such as Dual-frequency Identification Sonar and ProViewer. The first use of robust statistics for sonar image enhancement in this text is in image registration. Random Sample Consensus (RANSAC) is widely used for image registration. The registration algorithm using RANSAC is optimized for sonar image registration, and its performance is studied. The second use of robust statistics is in fusing the images. It is shown that the maximum a posteriori fusion method can be formulated in a Kalman-filter-like manner, and that the resulting expression is identical to a W-estimator with a specific weight function.
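An L-estimator forms a linear combination of order statistics; the trimmed mean is the textbook instance and stands in here for the dissertation's more specialized varied L-estimator. A minimal sketch with a placeholder echo envelope:

```python
import numpy as np
from scipy.stats import trim_mean

# Trimmed mean as the simplest L-estimator: discard the extreme 10% of
# samples at each tail, then average the remaining order statistics.
rng = np.random.default_rng(0)
echo = np.abs(rng.normal(size=1024))          # placeholder echo envelope
robust_level = trim_mean(echo, proportiontocut=0.1)
```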
A robust correspondence matching algorithm of ground images along the optic axis
NASA Astrophysics Data System (ADS)
Jia, Fengman; Kang, Zhizhong
2013-10-01
Facing the challenges of nontraditional geometry, multiple resolutions, and the same features sensed from different angles, robust correspondence matching is particularly difficult for ground images taken along the optic axis. A method combining the SIFT algorithm with a geometric constraint on the ratio of coordinate differences between an image point and the image principal point is proposed in this paper. Because it provides robust matching across a substantial range of affine distortion, change in 3D viewpoint, and noise, the SIFT algorithm is used to tackle the problem of image distortion. By analyzing the nontraditional geometry of ground images along the optic axis, this paper derives that, for a correspondence pair, the ratio of the distances between the image point and the image principal point in the two images should be a value not far from 1. A geometric constraint for gross-error detection is thereby formed. The proposed approach is tested with real image data acquired by Kodak. The results show that with SIFT and the proposed geometric constraint, the robustness of correspondence matching on ground images along the optic axis can be effectively improved, proving the validity of the proposed algorithm.
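The constraint lends itself to a simple filter over putative SIFT matches. A sketch follows; the tolerance value and the distance-based form of the ratio are assumptions for illustration:

```python
import numpy as np

def filter_by_radial_ratio(pts1, pts2, pp1, pp2, tol=0.2):
    """Keep correspondences whose distances to the respective principal
    points have a ratio near 1 (tol is an assumed tuning parameter).
    pts1/pts2: (N, 2) matched points; pp1/pp2: principal points."""
    d1 = np.linalg.norm(pts1 - pp1, axis=1)
    d2 = np.linalg.norm(pts2 - pp2, axis=1)
    ratio = d1 / np.maximum(d2, 1e-9)
    return np.abs(ratio - 1.0) < tol          # boolean inlier mask
```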
A spectral water index based on visual bands
NASA Astrophysics Data System (ADS)
Basaeed, Essa; Bhaskar, Harish; Al-Mualla, Mohammed
2013-10-01
Land-water segmentation is an important preprocessing step in a number of remote sensing applications such as target detection, environmental monitoring, and map updating. A Normalized Optical Water Index (NOWI) is proposed to accurately discriminate between land and water regions in multi-spectral satellite imagery from DubaiSat-1. NOWI exploits the spectral characteristics of water (using the visible bands) and uses a non-linear normalization procedure that places strong emphasis on small changes in lower brightness values whilst guaranteeing that the segmentation process remains image-independent. The NOWI representation is validated through systematic experiments, evaluated using robust metrics, and compared against various supervised classification algorithms. Analysis indicates that NOWI: a) is a pixel-based method that requires no global knowledge of the scene under investigation, b) can easily be implemented in parallel, c) is image-independent and requires no training, d) works in different environmental conditions, e) provides high accuracy and efficiency, and f) works directly on the input image without any form of pre-processing.
Adding flexibility to the search for robust portfolios in non-linear water resource planning
NASA Astrophysics Data System (ADS)
Tomlinson, James; Harou, Julien
2017-04-01
To date, robust optimisation of water supply systems has sought to find portfolios or strategies that are robust to a range of uncertainties or scenarios. The search for a single portfolio that is robust in all scenarios is necessarily suboptimal compared to portfolios optimised for a single deterministic future scenario. By contrast, establishing a separate portfolio for each future scenario is unhelpful to the planner, who must make a single decision today under deep uncertainty. In this work we show that a middle ground is possible by allowing a small number of different portfolios to be found that are each robust to a different subset of the global scenarios. We use evolutionary algorithms and a simple water resource system model to demonstrate this approach. The primary contribution is to demonstrate that flexibility can be added to the search for portfolios, in complex non-linear systems, at the expense of complete robustness across all future scenarios. In this context we define flexibility as the ability to design a portfolio in which some decisions are delayed, while those decisions that are not delayed are themselves shown to be robust to the future. We recognise that some decisions in our portfolio are more important than others. An adaptive portfolio is found by allowing no flexibility for these near-term "important" decisions, but maintaining flexibility in the remaining longer-term decisions. In this sense we create an effective two-stage decision process for a non-linear water resource supply system. We show how this reduces a measure of regret versus the inflexible robust solution for the same system.
Tian, Zhen; Yuan, Jingqi; Xu, Liang; Zhang, Xiang; Wang, Jingcheng
2018-05-25
As higher requirements are imposed on load regulation and efficiency enhancement, the control performance of boiler-turbine systems has become much more important. In this paper, a novel robust control approach is proposed to improve the coordinated control performance of subcritical boiler-turbine units. To capture the key features of the boiler-turbine system, a nonlinear control-oriented model is established and validated against historical operation data of a 300 MW unit. To achieve system linearization and decoupling, an adaptive feedback linearization strategy is proposed, which asymptotically eliminates the linearization error caused by model uncertainties. Based on the linearized boiler-turbine system, a second-order sliding mode controller is designed with the super-twisting algorithm. Moreover, the closed-loop system is proved robustly stable with respect to uncertainties and disturbances. Simulation results illustrate the effectiveness of the proposed control scheme, which achieves excellent tracking performance, strong robustness, and chattering reduction. Copyright © 2018. Published by Elsevier Ltd.
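The super-twisting controller admits a compact discrete-time sketch in its standard form; the gains, the sliding variable, and the integration step below are placeholders rather than the paper's tuned design:

```python
import numpy as np

def super_twisting_step(s, v, k1, k2, dt):
    """One integration step of the super-twisting algorithm (standard form):
    u = -k1*sqrt(|s|)*sign(s) + v,  v_dot = -k2*sign(s).
    s: sliding variable; v: integrator state; k1, k2: positive gains."""
    u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
    v_next = v - k2 * np.sign(s) * dt
    return u, v_next
```

The square-root term gives a continuous control signal, which is what yields the chattering reduction mentioned in the abstract.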
NASA Astrophysics Data System (ADS)
Zarindast, Atousa; Seyed Hosseini, Seyed Mohamad; Pishvaee, Mir Saman
2017-06-01
A robust supplier selection problem is addressed in a scenario-based approach, where demand and exchange rates are subject to uncertainty. First, a deterministic multi-objective mixed integer linear program is developed; then, the robust counterpart of the proposed mixed integer linear program is presented using recent extensions in robust optimization theory. We model the decision variables, respectively, by a two-stage stochastic planning model, a robust stochastic optimization planning model that integrates the worst-case scenario into the modeling approach, and finally an equivalent deterministic planning model. An experimental study is carried out to compare the performance of the three models. The robust model resulted in remarkable cost savings, illustrating that to cope with such uncertainties we should account for them in advance in our planning. In our case study, different suppliers were selected because of these uncertainties; since supplier selection is a strategic decision, it is crucial to consider these uncertainties in the planning approach.
Gamma/x-ray linear pushbroom stereo for 3D cargo inspection
NASA Astrophysics Data System (ADS)
Zhu, Zhigang; Hu, Yu-Chi
2006-05-01
For evaluating the contents of trucks, containers, cargo, and passenger vehicles with a non-intrusive gamma-ray or X-ray imaging system to determine the possible presence of contraband, three-dimensional (3D) measurements can provide more information than 2D measurements. In this paper, a linear pushbroom scanning model is built for such a commonly used gamma-ray or X-ray cargo inspection system. Accurate 3D measurements of the objects inside a cargo container can be obtained by using two such scanning systems with different scanning angles to construct a pushbroom stereo system. A simple but robust calibration method is proposed to find the important parameters of the linear pushbroom sensors. Then, a fast and automated stereo matching algorithm based on free-form deformable registration is developed to obtain 3D measurements of the objects under inspection. A user interface is designed for 3D visualization of the objects of interest. Experimental results of sensor calibration, stereo matching, 3D measurements, and visualization of a 3D cargo container and the objects inside are presented.
NASA Astrophysics Data System (ADS)
Cao, Faxian; Yang, Zhijing; Ren, Jinchang; Ling, Wing-Kuen; Zhao, Huimin; Marshall, Stephen
2017-12-01
Although sparse multinomial logistic regression (SMLR) provides a useful tool for sparse classification, it suffers from inefficacy in dealing with high-dimensional features and from manually set initial regressor values. This has significantly constrained its applications for hyperspectral image (HSI) classification. To tackle these two drawbacks, an extreme sparse multinomial logistic regression (ESMLR) is proposed for effective classification of HSI. First, the HSI dataset is projected to a new feature space with randomly generated weights and biases. Second, an optimization model is established by the Lagrange multiplier method and the dual principle to automatically determine a good initial regressor for SMLR by minimizing the training error and the regressor value. Furthermore, extended multi-attribute profiles (EMAPs) are utilized for extracting both spectral and spatial features. A combinational linear multiple features learning (MFL) method is proposed to further enhance the features extracted by ESMLR and EMAPs. Finally, logistic regression via variable splitting and augmented Lagrangian (LORSAL) is adopted in the proposed framework to reduce the computational time. Experiments conducted on two well-known HSI datasets, namely the Indian Pines dataset and the Pavia University dataset, demonstrate the fast and robust performance of the proposed ESMLR framework.
Robust nonlinear control of vectored thrust aircraft
NASA Technical Reports Server (NTRS)
Doyle, John C.; Murray, Richard; Morris, John
1993-01-01
An interdisciplinary program in robust control for nonlinear systems with applications to a variety of engineering problems is outlined. Major emphasis will be placed on flight control, with both experimental and analytical studies. This program builds on recent new results in control theory for stability, stabilization, robust stability, robust performance, synthesis, and model reduction in a unified framework using Linear Fractional Transformations (LFTs), Linear Matrix Inequalities (LMIs), and the structured singular value μ. Most of these new advances have been accomplished by the Caltech controls group independently or in collaboration with researchers in other institutions. These recent results offer a new and remarkably unified framework for all aspects of robust control, but what is particularly important for this program is that they also have important implications for system identification and control of nonlinear systems. This combines well with Caltech's expertise in nonlinear control theory, both in geometric methods and in methods for systems with constraints and saturations.
ATMAD: robust image analysis for Automatic Tissue MicroArray De-arraying.
Nguyen, Hoai Nam; Paveau, Vincent; Cauchois, Cyril; Kervrann, Charles
2018-04-19
Over the last two decades, an innovative technology called Tissue Microarray (TMA), which combines multi-tissue and DNA microarray concepts, has been widely used in the field of histology. It consists of a collection of several (up to 1000 or more) tissue samples that are assembled onto a single support - typically a glass slide - according to a design grid (array) layout, in order to allow multiplex analysis by treating numerous samples under identical and standardized conditions. However, during the TMA manufacturing process, the sample positions can be highly distorted from the design grid due to imprecision when assembling tissue samples and deformation of the embedding waxes. Consequently, these distortions may lead to severe errors in (histological) assay results when the sample identities are mismatched between the design and its manufactured output. The development of a robust method for de-arraying TMAs, which localizes and matches TMA samples with their design grid, is therefore crucial to overcome the bottleneck of this prominent technology. In this paper, we propose an Automatic, fast and robust TMA De-arraying (ATMAD) approach dedicated to images acquired with brightfield and fluorescence microscopes (or scanners). First, tissue samples are localized in the large image by applying a locally adaptive thresholding on the isotropic wavelet transform of the input TMA image. To reduce false detections, a parametric shape model is considered for segmenting ellipse-shaped objects at each detected position. Segmented objects that do not meet the size and roundness criteria are discarded from the list of tissue samples before being matched with the design grid. Sample matching is performed by estimating the TMA grid deformation under the thin-plate model. Finally, thanks to the estimated deformation, the true tissue samples that were preliminarily rejected in the early image processing step are recognized by running a second segmentation step. By combining wavelet-based detection, active contour segmentation, and thin-plate spline interpolation, our approach is able to handle TMA images with high dynamic range, poor signal-to-noise ratio, complex background, and non-linear deformation of the TMA grid. In addition, the deformation estimation produces quantitative information to assess the manufacturing quality of TMAs.
NASA Technical Reports Server (NTRS)
Patel, R. V.; Toda, M.; Sridhar, B.
1977-01-01
The paper deals with the problem of quantitatively expressing the robustness (stability) property of a linear quadratic state feedback (LQSF) design in terms of bounds on the perturbations (modeling errors or parameter variations) in the system matrices such that the closed-loop system remains stable. Nonlinear time-varying and linear time-invariant perturbations are considered. The only computation required to obtain a measure of the robustness of an LQSF design is to determine the eigenvalues of two symmetric matrices obtained when solving the algebraic Riccati equation corresponding to the LQSF design problem. The results are applied to a complex dynamic system consisting of the flare control of a STOL aircraft. The design of the flare control is formulated as an LQSF tracking problem.
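The only heavy computation in this robustness measure is the Riccati solve. A minimal Python sketch using SciPy follows; the system matrices are placeholders, and the exact bound expressions follow the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQSF design sketch: solve the ARE for P, form the state-feedback gain,
# and compute eigenvalues of the symmetric matrices entering the bounds.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)              # LQSF gain, u = -K x
eigs_P = np.linalg.eigvalsh(P)               # eigenvalues used in the bounds
eigs_Q = np.linalg.eigvalsh(Q)
print('closed-loop poles:', np.linalg.eigvals(A - B @ K))
```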
NASA Astrophysics Data System (ADS)
Whitney, Heather M.; Drukker, Karen; Edwards, Alexandra; Papaioannou, John; Giger, Maryellen L.
2018-02-01
Radiomics features extracted from breast lesion images have shown potential in diagnosis and prognosis of breast cancer. As clinical institutions transition from 1.5 T to 3.0 T magnetic resonance imaging (MRI), it is helpful to identify robust features across these field strengths. In this study, dynamic contrast-enhanced MR images were acquired retrospectively under IRB/HIPAA compliance, yielding 738 cases: 241 and 124 benign lesions imaged at 1.5 T and 3.0 T and 231 and 142 luminal A cancers imaged at 1.5 T and 3.0 T, respectively. Lesions were segmented using a fuzzy C-means method. Extracted radiomic values for each group of lesions by cancer status and field strength of acquisition were compared using a Kolmogorov-Smirnov test for the null hypothesis that two groups being compared came from the same distribution, with p-values being corrected for multiple comparisons by the Holm-Bonferroni method. Two shape features, one texture feature, and three enhancement variance kinetics features were found to be potentially robust. All potentially robust features had areas under the receiver operating characteristic curve (AUC) statistically greater than 0.5 in the task of distinguishing between lesion types (range of means 0.57-0.78). The significant difference in voxel size between field strength of acquisition limits the ability to affirm more features as robust or not robust according to field strength alone, and inhomogeneities in static field strength and radiofrequency field could also have affected the assessment of kinetic curve features as robust or not. Vendor-specific image scaling could have also been a factor. These findings will contribute to the development of radiomic signatures that use features identified as robust across field strength.
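The screening step described here, a two-sample Kolmogorov-Smirnov test per feature with Holm-Bonferroni correction, can be sketched directly with SciPy and statsmodels; array names and the decision convention are assumptions:

```python
import numpy as np
from scipy.stats import ks_2samp
from statsmodels.stats.multitest import multipletests

def robust_features(f15, f30, alpha=0.05):
    """Flag features whose 1.5 T and 3.0 T distributions are NOT
    significantly different (candidate robust features). f15/f30 are
    (cases x features) arrays of radiomic values."""
    pvals = [ks_2samp(f15[:, j], f30[:, j]).pvalue
             for j in range(f15.shape[1])]
    reject, _, _, _ = multipletests(pvals, alpha=alpha, method='holm')
    return ~reject   # True where no significant distribution shift is found
```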
A smart sensor architecture based on emergent computation in an array of outer-totalistic cells
NASA Astrophysics Data System (ADS)
Dogaru, Radu; Dogaru, Ioana; Glesner, Manfred
2005-06-01
A novel smart-sensor architecture is proposed that is capable of segmenting and recognizing characters in a monochrome image. It provides a list of ASCII codes representing the characters recognized in the monochrome visual field, and it can operate as an aid for the blind or in industrial applications. A bio-inspired cellular model with simple linear neurons was found to be best suited to the nontrivial task of cropping isolated compact objects such as handwritten digits or characters. By attaching a simple outer-totalistic cell to each pixel sensor, emergent computation in the resulting cellular automata lattice provides a straightforward and compact solution to the otherwise computationally intensive problem of character segmentation. A simple and robust recognition algorithm is built into a compact sequential controller accessing the array of cells, so that the integrated device can directly provide a list of codes of the recognized characters. Preliminary simulation tests indicate good performance and robustness to various distortions of the visual field.
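An outer-totalistic cell updates from its own state and the sum of its neighbours' states. A generic sketch follows; the paper's specific cropping rule is not given, so the rule table here is a placeholder (Conway's Game of Life, which happens to be outer-totalistic):

```python
import numpy as np
from scipy.signal import convolve2d

def outer_totalistic_step(grid, rule):
    """One synchronous update of a binary outer-totalistic CA: the next
    state depends only on a cell's own state and the sum of its eight
    neighbours. rule is a (2, 9) lookup table indexed by (state, sum)."""
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    nsum = convolve2d(grid, kernel, mode='same', boundary='fill')
    return rule[grid, nsum]

# Placeholder rule table: Conway's Game of Life.
life = np.zeros((2, 9), dtype=int)
life[1, [2, 3]] = 1   # a live cell survives with 2 or 3 live neighbours
life[0, 3] = 1        # a dead cell becomes live with exactly 3 neighbours
```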
Task-specific image partitioning.
Kim, Sungwoong; Nowozin, Sebastian; Kohli, Pushmeet; Yoo, Chang D
2013-02-01
Image partitioning is an important preprocessing step for many of the state-of-the-art algorithms used to perform high-level computer vision tasks. Typically, partitioning is conducted without regard to the task at hand. We propose a task-specific image partitioning framework that produces a region-based image representation leading to higher task performance than that reached using task-oblivious partitioning frameworks or the few existing supervised partitioning frameworks. The proposed method partitions the image by means of correlation clustering, maximizing a linear discriminant function defined over a superpixel graph. The parameters of the discriminant function, which define task-specific similarity/dissimilarity among superpixels, are estimated with a structured support vector machine (S-SVM) using task-specific training data. The S-SVM learning leads to better generalization ability, while the construction of the superpixel graph used to define the discriminant function allows a rich set of features to be incorporated to improve discriminability and robustness. We evaluate the learned task-aware partitioning algorithms on three benchmark datasets. Results show that task-aware partitioning leads to better labeling performance than the partitioning computed by state-of-the-art general-purpose and supervised partitioning algorithms. We believe that the task-specific image partitioning paradigm is widely applicable to improving performance in high-level image understanding tasks.
Image segmentation with a novel regularized composite shape prior based on surrogate study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu
Purpose: Incorporating training into image segmentation is a good approach to achieve additional robustness. This work aims to develop an effective strategy to utilize shape prior knowledge, so that the segmentation label evolution can be driven toward the desired global optimum. Methods: In the variational image segmentation framework, a regularization for the composite shape prior is designed to incorporate the geometric relevance of individual training data to the target, which is inferred by an image-based surrogate relevance metric. Specifically, this regularization is imposed on the linear weights of composite shapes and serves as a hyperprior. The overall problem is formulated in a unified optimization setting and a variational block-descent algorithm is derived. Results: The performance of the proposed scheme is assessed in both corpus callosum segmentation from an MR image set and clavicle segmentation based on CT images. The resulting shape composition provides a proper preference for the geometrically relevant training data. A paired Wilcoxon signed rank test demonstrates statistically significant improvement of image segmentation accuracy when compared to the multiatlas label fusion method and three other benchmark active contour schemes. Conclusions: This work has developed a novel composite shape prior regularization, which achieves superior segmentation performance compared to typical benchmark schemes.
Super-resolution reconstruction of hyperspectral images.
Akgun, Toygar; Altunbasak, Yucel; Mersereau, Russell M
2005-11-01
Hyperspectral images are used for aerial and space imaging applications, including target detection, tracking, agriculture, and natural resource exploration. Unfortunately, atmospheric scattering, secondary illumination, changing viewing angles, and sensor noise degrade the quality of these images. Improving their resolution has a high payoff, but applying super-resolution techniques separately to every spectral band is problematic for two main reasons. First, the number of spectral bands can be in the hundreds, which increases the computational load excessively. Second, considering the bands separately does not make use of the information that is present across them. Furthermore, separate-band super-resolution does not make use of the inherent low dimensionality of the spectral data, which can effectively be used to improve robustness against noise. In this paper, we introduce a novel super-resolution method for hyperspectral images. An integral part of our work is to model the hyperspectral image acquisition process. We propose a model that enables us to represent the hyperspectral observations from different wavelengths as weighted linear combinations of a small number of basis image planes. Then, a method for applying super-resolution to hyperspectral images using this model is presented. The method fuses information from multiple observations and spectral bands to improve spatial resolution and reconstruct the spectrum of the observed scene as a combination of a small number of spectral basis functions.
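The low-dimensional spectral representation can be sketched with a truncated SVD: each band becomes a weighted combination of a few basis image planes. This is an illustrative stand-in for the paper's acquisition-model-coupled formulation:

```python
import numpy as np

def spectral_basis(cube, k):
    """Represent each band of a (bands, h, w) cube as a linear combination
    of k basis image planes via truncated SVD (a sketch of the
    low-dimensional spectral model; the paper couples this with the
    acquisition model)."""
    bands, h, w = cube.shape
    X = cube.reshape(bands, -1)              # one row per spectral band
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:k].reshape(k, h, w)          # k basis image planes
    weights = U[:, :k] * S[:k]               # per-band mixing weights
    return weights, basis
```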
NASA Astrophysics Data System (ADS)
Gorpas, Dimitris; Politopoulos, Kostas; Yova, Dido; Andersson-Engels, Stefan
2008-02-01
One of the most challenging problems in medical imaging is to "see" a tumour embedded in tissue, which is a turbid medium, by using fluorescent probes for tumour labeling. Despite the efforts made in recent years, this problem has not yet been fully solved, due to the non-linear nature of the inverse problem and the convergence failures of many optimization techniques. This paper describes a robust solution of the inverse problem, based on data fitting and image fine-tuning techniques. As a forward solver, the coupled radiative transfer equation and diffusion approximation model is proposed and solved via a finite element method, enhanced with adaptive multi-grids for faster and more accurate convergence. A database is constructed by applying the forward model to virtual tumours with known geometry, and thus known fluorophore distribution, embedded in simulated tissues. The fitting procedure produces the best match between the real and virtual data, and thus provides the initial estimate of the fluorophore distribution. Using this information, the coupled radiative transfer equation and diffusion approximation model has the required initial values for a computationally reasonable and successful convergence during the image fine-tuning application.
Filtering Based Adaptive Visual Odometry Sensor Framework Robust to Blurred Images
Zhao, Haiying; Liu, Yong; Xie, Xiaojia; Liao, Yiyi; Liu, Xixi
2016-01-01
Visual odometry (VO) estimation from blurred image is a challenging problem in practical robot applications, and the blurred images will severely reduce the estimation accuracy of the VO. In this paper, we address the problem of visual odometry estimation from blurred images, and present an adaptive visual odometry estimation framework robust to blurred images. Our approach employs an objective measure of images, named small image gradient distribution (SIGD), to evaluate the blurring degree of the image, then an adaptive blurred image classification algorithm is proposed to recognize the blurred images, finally we propose an anti-blurred key-frame selection algorithm to enable the VO robust to blurred images. We also carried out varied comparable experiments to evaluate the performance of the VO algorithms with our anti-blur framework under varied blurred images, and the experimental results show that our approach can achieve superior performance comparing to the state-of-the-art methods under the condition with blurred images while not increasing too much computation cost to the original VO algorithms. PMID:27399704
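The intuition behind SIGD is that blur shifts gradient magnitudes toward small values. A crude blur score in that spirit (the threshold is an assumed tuning parameter; the paper's SIGD statistic is more refined):

```python
import numpy as np

def small_gradient_fraction(img, thresh=2.0):
    """Fraction of pixels with small gradient magnitude; this fraction
    rises as an image blurs. Inspired by SIGD but not the paper's exact
    statistic."""
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    return np.mean(mag < thresh)
```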
Robust image obfuscation for privacy protection in Web 2.0 applications
NASA Astrophysics Data System (ADS)
Poller, Andreas; Steinebach, Martin; Liu, Huajian
2012-03-01
We present two approaches to robust image obfuscation based on permutation of image regions and channel intensity modulation. The proposed concept of robust image obfuscation is a step towards end-to-end security in Web 2.0 applications. It helps to protect the privacy of users against threats caused by internet bots and web applications that extract biometric and other features from images for data-linkage purposes. The approaches described in this paper consider that images uploaded to Web 2.0 applications pass through several transformations, such as scaling and JPEG compression, before the receiver downloads them. In contrast to existing approaches, our focus is on usability; therefore, the primary goal is not maximum security but an acceptable trade-off between security and resulting image quality.
Determination of skeleton and sign map for phase obtaining from a single ESPI image
NASA Astrophysics Data System (ADS)
Yang, Xia; Yu, Qifeng; Fu, Sihua
2009-06-01
A robust method for determining the sign map and skeletons of ESPI images is introduced in this paper. ESPI images exhibit high speckle noise, which makes it difficult to obtain fringe information, especially from a single image. To overcome the effects of high speckle noise, local directional computing windows are designed according to the fringe directions. Then, by calculating the gradients of the filtered image in directional windows, the sign map and good skeletons can be determined robustly. Based on the sign map, single-image phase-extracting methods such as the quadrature transform can be improved, and based on the skeletons, fringe phases can be obtained directly by normalization methods. Experiments show that this new method is robust and effective for extracting the phase from a single ESPI fringe image.
Robust Weak Chimeras in Oscillator Networks with Delayed Linear and Quadratic Interactions
NASA Astrophysics Data System (ADS)
Bick, Christian; Sebek, Michael; Kiss, István Z.
2017-10-01
We present an approach to generate chimera dynamics (localized frequency synchrony) in oscillator networks with two populations of (at least) two elements using a general method based on a delayed interaction with linear and quadratic terms. The coupling design yields robust chimeras through a phase-model-based design of the delay and the ratio of linear and quadratic components of the interactions. We demonstrate the method in the Brusselator model and experiments with electrochemical oscillators. The technique opens the way to directly bridge chimera dynamics in phase models and real-world oscillator networks.
Quantitative imaging biomarkers: Effect of sample size and bias on confidence interval coverage.
Obuchowski, Nancy A; Bullen, Jennifer
2017-01-01
Introduction: Quantitative imaging biomarkers (QIBs) are being increasingly used in medical practice and clinical trials. An essential first step in the adoption of a quantitative imaging biomarker is the characterization of its technical performance, i.e. precision and bias, through one or more performance studies. Then, given the technical performance, a confidence interval for a new patient's true biomarker value can be constructed. Estimating bias and precision can be problematic because rarely are both estimated in the same study, precision studies are usually quite small, and bias cannot be measured when there is no reference standard. Methods: A Monte Carlo simulation study was conducted to assess factors affecting nominal coverage of confidence intervals for a new patient's quantitative imaging biomarker measurement and for change in the quantitative imaging biomarker over time. Factors considered include sample size for estimating bias and precision, effect of fixed and non-proportional bias, clustered data, and absence of a reference standard. Results: Technical performance studies of a quantitative imaging biomarker should include at least 35 test-retest subjects to estimate precision and 65 cases to estimate bias. Confidence intervals for a new patient's quantitative imaging biomarker measurement constructed under the no-bias assumption provide nominal coverage as long as the fixed bias is <12%. For confidence intervals of the true change over time, linearity must hold and the slope of the regression of the measurements vs. true values should be between 0.95 and 1.05. The regression slope can be assessed adequately as long as fixed multiples of the measurand can be generated. Even small non-proportional bias greatly reduces confidence interval coverage. Multiple lesions in the same subject can be treated as independent when estimating precision. Conclusion: Technical performance studies of quantitative imaging biomarkers require moderate sample sizes in order to provide robust estimates of bias and precision for constructing confidence intervals for new patients. Assumptions of linearity and non-proportional bias should be assessed thoroughly.
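Under the no-bias assumption, the confidence intervals in question take a simple form. A sketch follows; the 1.96 multiplier assumes normally distributed measurement error, and sd is the within-subject SD from a test-retest study:

```python
import numpy as np

def qib_ci(y, sd, z=1.96):
    """95% CI for a new patient's true value under the no-bias assumption,
    where sd is the within-subject SD from a test-retest precision study."""
    return y - z * sd, y + z * sd

def qib_change_ci(y1, y2, sd, z=1.96):
    """95% CI for true change over time, assuming independent measurement
    errors at the two time points and a measurement-vs-truth slope near 1."""
    half = z * np.sqrt(2.0) * sd
    return (y2 - y1) - half, (y2 - y1) + half
```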
SU-F-I-41: Calibration-Free Material Decomposition for Dual-Energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, W; Xing, L; Zhang, Q
2016-06-15
Purpose: To eliminate tedious phantom calibration or manual region of interest (ROI) selection as required in dual-energy CT material decomposition, we establish a new projection-domain material decomposition framework that incorporates the energy spectrum. Methods: As in dual-energy CT, the integral of the basis material image in our model is expressed as a linear combination of basis functions, which are polynomials of the high- and low-energy raw projection data. To yield the unknown coefficients of the linear combination, the proposed algorithm minimizes the quadratic error between the high- and low-energy raw projection data and the projection calculated using material images. We evaluate the algorithm with an iodine concentration numerical phantom at different dose and iodine concentration levels. The high- and low-energy x-ray spectra are estimated using an indirect transmission method. The derived monochromatic images are compared with the high- and low-energy CT images to demonstrate beam hardening artifact reduction. Quantitative results were measured and compared to the true values. Results: The differences between the true density values used for simulation and those obtained from the monochromatic images are 1.8%, 1.3%, 2.3%, and 2.9% for the dose levels from standard dose to 1/8 dose, and 0.4%, 0.7%, 1.5%, and 1.8% for the four iodine concentration levels from 6 mg/mL to 24 mg/mL. For all of the cases, beam hardening artifacts, especially streaks between dense inserts, are almost completely removed in the monochromatic images. Conclusion: The proposed algorithm provides an effective way to yield material images and artifact-free monochromatic images at different dose levels without the need for phantom calibration or ROI selection. Furthermore, the approach also yields accurate results when the concentration of the iodine insert is very low, suggesting the algorithm is robust in the low-contrast scenario.
Robust image matching via ORB feature and VFC for mismatch removal
NASA Astrophysics Data System (ADS)
Ma, Tao; Fu, Wenxing; Fang, Bin; Hu, Fangyu; Quan, Siwen; Ma, Jie
2018-03-01
Image matching is at the base of many image processing and computer vision problems, such as object recognition and structure from motion. Current methods rely on good feature descriptors and mismatch-removal strategies for detection and matching. In this paper, we propose a robust image matching approach based on the ORB feature and VFC for mismatch removal. ORB (Oriented FAST and Rotated BRIEF) is an outstanding feature; it achieves performance comparable to SIFT at lower cost. VFC (Vector Field Consensus) is a state-of-the-art mismatch-removal method. The experimental results demonstrate that our method is efficient and robust.
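The ORB half of the pipeline is available directly in OpenCV; the sketch below uses placeholder filenames ('a.png', 'b.png'), and because OpenCV ships no VFC implementation, RANSAC on a homography stands in for the paper's VFC mismatch-removal step:

```python
import cv2
import numpy as np

# ORB detection and brute-force Hamming matching, followed by a RANSAC
# stand-in for VFC mismatch removal.
img1 = cv2.imread('a.png', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('b.png', cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
inliers = [m for m, ok in zip(matches, mask.ravel()) if ok]
print(f'{len(inliers)} of {len(matches)} matches kept')
```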
Usuda, Takashi; Kobayashi, Naoki; Takeda, Sunao; Kotake, Yoshifumi
2010-01-01
We have developed a non-invasive blood pressure monitor that can measure blood pressure quickly and robustly. This monitor combines two measurement modes: linear inflation and linear deflation. In inflation mode, faster measurement is realized through a rapid inflation rate; in deflation mode, robust noise reduction is realized. When there is neither noise nor arrhythmia, the inflation mode incorporated in this monitor provides precise, quick, and comfortable measurement. If the inflation mode fails to calculate an appropriate blood pressure due to body movement or arrhythmia, the monitor switches automatically to the deflation mode and measures blood pressure using digital signal processing such as wavelet analysis, filter banks, and filtering combined with FFT and inverse FFT. The inflation mode succeeded in 2440 of 3099 measurements (79%) in an operating room and a rehabilitation room. The newly designed blood pressure monitor provides the fastest measurement for patients with normal circulation and robust measurement for patients with body movement or severe arrhythmia. This fast measurement method also provides comfort for patients.
Modeling heading and path perception from optic flow in the case of independently moving objects
Raudies, Florian; Neumann, Heiko
2013-01-01
Humans are usually accurate when estimating heading or path from optic flow, even in the presence of independently moving objects (IMOs) in an otherwise rigid scene. To invoke significant biases in perceived heading, IMOs have to be large and obscure the focus of expansion (FOE) in the image plane, which is the point of approach. For the estimation of path during curvilinear self-motion no significant biases were found in the presence of IMOs. What makes humans robust in their estimation of heading or path using optic flow? We derive analytical models of optic flow for linear and curvilinear self-motion using geometric scene models. Heading biases of a linear least squares method, which builds upon these analytical models, are large, larger than those reported for humans. This motivated us to study segmentation cues that are available from optic flow. We derive models of accretion/deletion, expansion/contraction, acceleration/deceleration, local spatial curvature, and local temporal curvature, to be used as cues to segment an IMO from the background. Integrating these segmentation cues into our method of estimating heading or path now explains human psychophysical data and extends, as well as unifies, previous investigations. Our analysis suggests that various cues available from optic flow help to segment IMOs and, thus, make humans' heading and path perception robust in the presence of such IMOs. PMID:23554589
Study on Fuzzy Adaptive Fractional Order PIλDμ Control for Maglev Guiding System
NASA Astrophysics Data System (ADS)
Hu, Qing; Hu, Yuwei
The mathematical model of the linear elevator maglev guiding system is analyzed in this paper. Because the linear elevator requires strong stability and robustness to run, the integer-order PID is extended to fractional order. To improve the steady-state precision, rapidity, and robustness of the system and to enhance the accuracy of the parameters in the fractional-order PIλDμ controller, fuzzy control is combined with fractional-order PIλDμ control, with fuzzy logic used to adjust the parameters online. The simulations reveal that the system has a faster response, higher tracking precision, and stronger robustness to disturbance.
An algebraic algorithm for nonuniformity correction in focal-plane arrays.
Ratliff, Bradley M; Hayat, Majeed M; Hardie, Russell C
2002-09-01
A scene-based algorithm is developed to compensate for bias nonuniformity in focal-plane arrays. Nonuniformity can be extremely problematic, especially for mid- to far-infrared imaging systems. The technique is based on use of estimates of interframe subpixel shifts in an image sequence, in conjunction with a linear-interpolation model for the motion, to extract information on the bias nonuniformity algebraically. The performance of the proposed algorithm is analyzed by using real infrared and simulated data. One advantage of this technique is its simplicity; it requires relatively few frames to generate an effective correction matrix, thereby permitting the execution of frequent on-the-fly nonuniformity correction as drift occurs. Additionally, the performance is shown to exhibit considerable robustness with respect to lack of the common types of temporal and spatial irradiance diversity that are typically required by statistical scene-based nonuniformity correction techniques.
Localization of the lumbar discs using machine learning and exact probabilistic inference.
Oktay, Ayse Betul; Akgul, Yusuf Sinan
2011-01-01
We propose a novel, fully automatic approach to localize the lumbar intervertebral discs in MR images with a PHOG-based SVM and a probabilistic graphical model. At the local level, our method assigns a score to each pixel in the target image that indicates whether it is a disc center or not. At the global level, we define a chain-like graphical model that represents the lumbar intervertebral discs, and we use an exact inference algorithm to localize the discs. Our main contributions are the employment of the SVM with the PHOG-based descriptor, which is robust against variations of the discs, and a graphical model that reflects the linear nature of the vertebral column. Our inference algorithm runs in polynomial time and produces globally optimal results. The developed system is validated on a real spine MRI dataset, and the final localization results compare favorably to those reported in the literature.
Quantile rank maps: a new tool for understanding individual brain development.
Chen, Huaihou; Kelly, Clare; Castellanos, F Xavier; He, Ye; Zuo, Xi-Nian; Reiss, Philip T
2015-05-01
We propose a novel method for neurodevelopmental brain mapping that displays how an individual's values for a quantity of interest compare with age-specific norms. By estimating smoothly age-varying distributions at a set of brain regions of interest, we derive age-dependent region-wise quantile ranks for a given individual, which can be presented in the form of a brain map. Such quantile rank maps could potentially be used for clinical screening. Bootstrap-based confidence intervals are proposed for the quantile rank estimates. We also propose a recalibrated Kolmogorov-Smirnov test for detecting group differences in the age-varying distribution. This test is shown to be more robust to model misspecification than a linear regression-based test. The proposed methods are applied to brain imaging data from the Nathan Kline Institute Rockland Sample and from the Autism Brain Imaging Data Exchange (ABIDE) sample. Copyright © 2015 Elsevier Inc. All rights reserved.
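At its core, a region's quantile rank is the individual's position within an age-specific normative distribution. A minimal empirical sketch (the paper estimates smoothly age-varying distributions rather than using a discrete age-matched sample):

```python
import numpy as np

def quantile_rank(value, norm_values):
    """Empirical quantile rank of one individual's regional value against
    age-matched normative values; illustrative only, since the paper fits
    smoothly age-varying distributions."""
    norm_values = np.asarray(norm_values)
    return float(np.mean(norm_values <= value))
```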
Diatomic Metasurface for Vectorial Holography.
Deng, Zi-Lan; Deng, Junhong; Zhuang, Xin; Wang, Shuai; Li, Kingfai; Wang, Yao; Chi, Yihui; Ye, Xuan; Xu, Jian; Wang, Guo Ping; Zhao, Rongkuo; Wang, Xiaolei; Cao, Yaoyu; Cheng, Xing; Li, Guixin; Li, Xiangping
2018-05-09
The emerging metasurfaces, with their exceptional capability of manipulating an arbitrary wavefront, have revived holography with unprecedented prospects. However, most reported metaholograms suffer from limited polarization controls over a restrained bandwidth, in addition to their complicated meta-atom designs with spatially variant dimensions. Here, we demonstrate a new concept of vectorial holography based on diatomic metasurfaces consisting of metamolecules formed by two orthogonal meta-atoms. On the basis of a simple linear relationship between phase and polarization modulations and the displacements and orientations of identical meta-atoms, active diffraction of multiple polarization states and reconstruction of holographic images are simultaneously achieved, robust against both incident angle and wavelength. Leveraging this appealing feature, broadband vectorial holographic images with spatially varying polarization states and dual-way polarization switching functionalities have been demonstrated, suggesting a new route to achromatic diffractive elements, polarization optics, and ultrasecure anticounterfeiting.
Wang, Peng; Zheng, Yefeng; John, Matthias; Comaniciu, Dorin
2012-01-01
Dynamic overlay of 3D models onto 2D X-ray images has important applications in image-guided interventions. In this paper, we present a novel catheter tracking method for motion compensation in Transcatheter Aortic Valve Implantation (TAVI). To address challenges such as catheter shape and appearance changes, occlusions, and distractions from cluttered backgrounds, we present an adaptive linear discriminant learning method that builds a measurement model online to distinguish catheters from the background. An analytic solution is developed to effectively and efficiently update the discriminant model and to minimize the classification errors between the tracked object and the background. The online-learned discriminant model is further combined with an offline-learned detector and robust template matching in a Bayesian tracking framework. Quantitative evaluations demonstrate the advantages of this method over current state-of-the-art tracking methods in tracking catheters for clinical applications.
In situ Visualization of Electrocatalytic Reaction Activity at Quantum Dots for Water Oxidation.
Chen, Ying; Fu, Jiaju; Cui, Chen; Jiang, Dechen; Chen, Zixuan; Chen, Hong-Yuan; Zhu, Jun-Jie
2018-06-11
Exploring electrocatalytic reactions on nanomaterial surfaces can give crucial information for the development of robust catalysts. Here, the electrocatalytic reaction activity of single quantum-dot (QD)-loaded silica microparticles involved in water oxidation is visualized using electrochemiluminescence (ECL) microscopy. Under positive potential, the active redox centers at the QDs induce the generation of hydroperoxide surface intermediates, which act as a coreactant to remarkably enhance ECL emission from a luminol derivative for imaging. For the first time, in situ visualization of catalytic activity in water oxidation at a QD catalyst was achieved, supported by a linear relation between ECL intensity and turnover frequency. A very slight diffusion trend attributed only to luminol species proved the in situ capture of hydroperoxide surface intermediates at the catalytically active sites of the QDs. This work offers tremendous potential for on-line imaging of electrocatalytic reactions and visual evaluation of catalyst performance.
Reflection symmetry detection using locally affine invariant edge correspondence.
Wang, Zhaozhong; Tang, Zesheng; Zhang, Xiao
2015-04-01
Reflection symmetry detection has received increasing attention in recent years. State-of-the-art algorithms mainly use the matching of intensity-based features (such as SIFT) within a single image to find symmetry axes. This paper proposes a novel approach that establishes correspondences of locally affine invariant edge-based features, which are superior to intensity-based features in that they are insensitive to illumination variations and applicable to textureless objects. The local affine invariance is achieved by simple linear algebra for efficient and robust computation, making the algorithm suitable for detection under object distortions such as perspective projection. Commonly used edge detectors and a voting process are, respectively, used before and after the edge description and matching steps to form a complete reflection detection pipeline. Experiments are performed using synthetic and real-world images with both multiple and single reflection symmetry axes. The test results are compared with existing algorithms to validate the proposed method.
NASA Astrophysics Data System (ADS)
Pura, John A.; Hamilton, Allison M.; Vargish, Geoffrey A.; Butman, John A.; Linguraru, Marius George
2011-03-01
Accurate ventricle volume estimates could improve the understanding and diagnosis of postoperative communicating hydrocephalus. For this category of patients, associated changes in ventricle volume can be difficult to identify, particularly over short time intervals. We present an automated segmentation algorithm that evaluates ventricle size from serial brain MRI examinations. The technique combines serial T1-weighted images to increase SNR and segments the mean image to generate a ventricle template. After pre-processing, the segmentation is initiated by a fuzzy c-means clustering algorithm to find the seeds used in a combination of fast marching methods and geodesic active contours. Finally, the ventricle template is propagated onto the serial data via non-linear registration. Serial volume estimates were obtained in an automated, robust, and accurate manner from difficult data.
Deformable templates guided discriminative models for robust 3D brain MRI segmentation.
Liu, Cheng-Yi; Iglesias, Juan Eugenio; Tu, Zhuowen
2013-10-01
Automatically segmenting anatomical structures from 3D brain MRI images is an important task in neuroimaging. One major challenge is to design and learn effective image models accounting for the large variability in anatomy and data acquisition protocols. A deformable template is a type of generative model that attempts to explicitly match an input image with a template (atlas), and thus, they are robust against global intensity changes. On the other hand, discriminative models combine local image features to capture complex image patterns. In this paper, we propose a robust brain image segmentation algorithm that fuses together deformable templates and informative features. It takes advantage of the adaptation capability of the generative model and the classification power of the discriminative models. The proposed algorithm achieves both robustness and efficiency, and can be used to segment brain MRI images with large anatomical variations. We perform an extensive experimental study on four datasets of T1-weighted brain MRI data from different sources (1,082 MRI scans in total) and observe consistent improvement over the state-of-the-art systems.
Feng, Xiang; Deistung, Andreas; Dwyer, Michael G; Hagemeier, Jesper; Polak, Paul; Lebenberg, Jessica; Frouin, Frédérique; Zivadinov, Robert; Reichenbach, Jürgen R; Schweser, Ferdinand
2017-06-01
Accurate and robust segmentation of subcortical gray matter (SGM) nuclei is required in many neuroimaging applications. FMRIB's Integrated Registration and Segmentation Tool (FIRST) is one of the most popular software tools for automated subcortical segmentation based on T1-weighted (T1w) images. In this work, we demonstrate that FIRST tends to produce inaccurate SGM segmentation results in the case of abnormal brain anatomy, such as present in atrophied brains, due to a poor spatial match of the subcortical structures with the training data in the MNI space as well as due to insufficient contrast of SGM structures on T1w images. Consequently, such deviations from the average brain anatomy may introduce analysis bias in clinical studies, which may not always be obvious and potentially remain unidentified. To improve the segmentation of subcortical nuclei, we propose to use FIRST in combination with a special Hybrid image Contrast (HC) and Non-Linear (nl) registration module (HC-nlFIRST), where the hybrid image contrast is derived from T1w images and magnetic susceptibility maps to create subcortical contrast that is similar to that in the Montreal Neurological Institute (MNI) template. In our approach, a nonlinear registration replaces FIRST's default linear registration, yielding a more accurate alignment of the input data to the MNI template. We evaluated our method on 82 subjects with particularly abnormal brain anatomy, selected from a database of >2000 clinical cases. Qualitative and quantitative analyses revealed that HC-nlFIRST provides improved segmentation compared to the default FIRST method. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Anees, Asim; Aryal, Jagannath; O'Reilly, Małgorzata M.; Gale, Timothy J.; Wardlaw, Tim
2016-12-01
A robust non-parametric framework, based on multiple Radial Basis Function (RBF) kernels, is proposed in this study for detecting land/forest cover changes using Landsat 7 ETM+ images. One widely used approach is to find change vectors (a difference image) and use a supervised classifier to differentiate between change and no-change. Bayesian classifiers, e.g., the Maximum Likelihood Classifier (MLC) and Naive Bayes (NB), are widely used probabilistic classifiers that assume parametric models, e.g., Gaussian functions, for the class conditional distributions. However, their performance can be limited if the data set deviates from the assumed model. The proposed framework exploits the useful properties of the Least Squares Probabilistic Classifier (LSPC) formulation, i.e., its non-parametric and probabilistic nature, to model class posterior probabilities of the difference image using a linear combination of a large number of Gaussian kernels. To this end, a simple technique based on 10-fold cross-validation is also proposed for tuning model parameters automatically instead of selecting a (possibly) suboptimal combination from pre-specified lists of values. The proposed framework has been tested and compared with the Support Vector Machine (SVM) and NB for detection of defoliation caused by leaf beetles (Paropsisterna spp.) in Eucalyptus nitens and Eucalyptus globulus plantations of two test areas in Tasmania, Australia, using raw bands and band combination indices of Landsat 7 ETM+. It was observed that, due to its multi-kernel non-parametric formulation and probabilistic nature, the LSPC outperforms the parametric NB with Gaussian assumption in the change detection framework, with Overall Accuracy (OA) ranging from 93.6% (κ = 0.87) to 97.4% (κ = 0.94) against 85.3% (κ = 0.69) to 93.4% (κ = 0.85), and is more robust to changing data distributions. Its performance was comparable to SVM, with the added advantages of being probabilistic and of handling multi-class problems naturally in its original formulation.
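As a rough illustration of the LSPC formulation referenced above, here is a minimal NumPy sketch: each class posterior is modeled as a Gaussian-kernel expansion fitted by regularized least squares against class-indicator targets, with negative outputs clipped and renormalized. The kernel width and regularizer below are placeholders for the values the authors tune by 10-fold cross-validation:

```python
import numpy as np

def gaussian_kernel(a, b, sigma):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lspc_fit(x, y, sigma=1.0, lam=0.1):
    """One kernel weight vector per class, fitted by ridge least squares
    against the class-indicator targets."""
    classes = np.unique(y)
    K = gaussian_kernel(x, x, sigma)
    A = K.T @ K + lam * np.eye(len(x))
    alphas = [np.linalg.solve(A, K.T @ (y == c).astype(float)) for c in classes]
    return classes, np.stack(alphas, axis=1)     # shape (n_samples, n_classes)

def lspc_posterior(x_new, x, alphas, sigma=1.0):
    p = np.maximum(gaussian_kernel(x_new, x, sigma) @ alphas, 0.0)  # clip negatives
    return p / (p.sum(axis=1, keepdims=True) + 1e-12)

# Demo: two noisy 2-D classes.
rng = np.random.default_rng(1)
x = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.repeat([0, 1], 50)
classes, alphas = lspc_fit(x, y)
post = lspc_posterior(x, x, alphas)              # rows are class posteriors
```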
Synthesis Methods for Robust Passification and Control
NASA Technical Reports Server (NTRS)
Kelkar, Atul G.; Joshi, Suresh M. (Technical Monitor)
2000-01-01
The research effort under this cooperative agreement has been essentially a continuation of the work from previous grants. The ongoing work has primarily focused on developing passivity-based control techniques for Linear Time-Invariant (LTI) systems. During this period, significant progress has been made in the area of passivity-based control of LTI systems, and some preliminary results have also been obtained for nonlinear systems. The prior work addressed optimal control design for inherently passive as well as non-passive linear systems. To exploit the robustness characteristics of passivity-based controllers, a passification methodology was developed for LTI systems that are not inherently passive. Various methods of passification were first proposed and then further developed. The robustness of passification was addressed for multi-input multi-output (MIMO) systems for certain classes of uncertainties using frequency-domain methods. For MIMO systems, a state-space approach using a Linear Matrix Inequality (LMI)-based formulation was presented for passification of non-passive LTI systems. An LMI-based robust passification technique was presented for systems with redundant actuators and sensors; the redundancy in actuators and sensors was used effectively for robust passification using the LMI formulation. The passification was designed to be robust to interval-type uncertainties in system parameters. The passification techniques were used to design a robust controller for the Benchmark Active Control Technology wing under parametric uncertainties. The results on passive nonlinear systems, however, are very limited to date. Our recent work in this area was presented, wherein some stability results were obtained for passive nonlinear systems that are affine in control.
NASA Technical Reports Server (NTRS)
Nett, C. N.; Jacobson, C. A.; Balas, M. J.
1983-01-01
This paper reviews and extends the fractional representation theory. In particular, new and powerful robustness results are presented. This new theory is utilized to develop a preliminary design methodology for finite dimensional control of a class of linear evolution equations on a Banach space. The design is for stability in an input-output sense, but particular attention is paid to internal stability as well.
Accurate and robust brain image alignment using boundary-based registration.
Greve, Douglas N; Fischl, Bruce
2009-10-15
The fine spatial scales of the structures in the human brain represent an enormous challenge to the successful integration of information from different images for both within- and between-subject analysis. While many algorithms to register image pairs from the same subject exist, visual inspection shows their accuracy and robustness to be suspect, particularly when there are strong intensity gradients and/or only part of the brain is imaged. This paper introduces a new algorithm called Boundary-Based Registration, or BBR. The novelty of BBR is that it treats the two images very differently. The reference image must be of sufficient resolution and quality to extract surfaces that separate tissue types. The input image is then aligned to the reference by maximizing the intensity gradient across tissue boundaries. Several lower-quality images can be aligned through their alignment with the reference. Visual inspection and fMRI results show that BBR is more accurate than correlation ratio or normalized mutual information and is considerably more robust, even to strong intensity inhomogeneities. BBR also excels at aligning partial-brain images to whole-brain images, a domain in which existing registration algorithms frequently fail. Even in the limit of registering a single slice, we show the BBR results to be robust and accurate.
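The boundary-contrast objective at the heart of BBR is compact enough to sketch. In the snippet below, `volume_sampler` (an interpolator over the input image) and the surface `vertices`/`normals` (from the reference segmentation) are hypothetical helpers, and the exact functional in the published method differs in its details:

```python
import numpy as np

def bbr_cost(volume_sampler, vertices, normals, offset=2.0, slope=0.5):
    # Sample the input image a short distance inside and outside the
    # tissue boundary extracted from the reference image.
    g_in = volume_sampler(vertices - offset * normals)
    g_out = volume_sampler(vertices + offset * normals)
    # Percent contrast across the boundary at each vertex.
    q = 100.0 * (g_out - g_in) / (0.5 * (g_out + g_in))
    # 1 + tanh maps strong contrast of the expected sign (bright inside,
    # e.g. white matter) to a cost near 0; minimize over pose parameters.
    return np.mean(1.0 + np.tanh(slope * q))

# Toy check: intensity increasing along x with outward normals along +x
# (bright outside) gives a cost near 2; flipping the normals gives ~0.
verts = np.zeros((100, 3)); verts[:, 0] = 50.0
norms = np.tile([1.0, 0.0, 0.0], (100, 1))
sampler = lambda p: p[:, 0]
print(bbr_cost(sampler, verts, norms), bbr_cost(sampler, verts, -norms))
```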
Variational and robust density fitting of four-center two-electron integrals in local metrics
NASA Astrophysics Data System (ADS)
Reine, Simen; Tellgren, Erik; Krapp, Andreas; Kjærgaard, Thomas; Helgaker, Trygve; Jansik, Branislav; Høst, Stinne; Salek, Paweł
2008-09-01
Density fitting is an important method for speeding up quantum-chemical calculations. Linear-scaling developments in Hartree-Fock and density-functional theories have highlighted the need for linear-scaling density-fitting schemes. In this paper, we present a robust variational density-fitting scheme that allows for solving the fitting equations in local metrics instead of the traditional Coulomb metric, as required for linear scaling. Results of fitting four-center two-electron integrals in the overlap and the attenuated Gaussian damped Coulomb metric are presented, and we conclude that density fitting can be performed in local metrics at little loss of chemical accuracy. We further propose to use this theory in linear-scaling density-fitting developments.
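For reference, the standard robust-fitting identity that keeps the error bilinear in the fitting residuals (the notation here is mine; tildes denote fitted orbital-pair densities) can be written as:

```latex
(ab|cd) \;\approx\; (\widetilde{ab}\,|\,cd) + (ab\,|\,\widetilde{cd}) - (\widetilde{ab}\,|\,\widetilde{cd}),
\qquad
(ab|cd) - \mathrm{fit} = (\Delta_{ab}\,|\,\Delta_{cd}),
\quad
\Delta_{ab} = ab - \widetilde{ab}.
```

A first-order error in each fitted pair density thus produces only a second-order error in the integral, which is what allows the fitting coefficients to be determined in a local (non-Coulomb) metric at little loss of accuracy.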
NASA Astrophysics Data System (ADS)
Che, Chang; Yu, Xiaoyang; Sun, Xiaoming; Yu, Boyang
2017-12-01
In recent years, the Scalable Vocabulary Tree (SVT) has been shown to be effective in image retrieval. However, for general images, where the foreground is the object to be recognized and the background is cluttered, the performance of the current SVT framework is restricted. In this paper, a new image retrieval framework that incorporates a robust distance metric and information fusion is proposed, which improves the retrieval performance relative to the baseline SVT approach. First, the visual words that represent the background are diminished by using a robust Hausdorff distance between different images. Second, image matching results based on three image signature representations are fused, which enhances the retrieval precision. We conducted intensive experiments on small-scale to large-scale image datasets: Corel-9, Corel-48, and PKU-198, where the proposed Hausdorff metric and information fusion outperform the state-of-the-art methods by about 13%, 15%, and 15%, respectively.
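The paper's robust Hausdorff metric is not fully specified in the abstract; the standard rank-based (partial) Hausdorff distance shown below, which replaces the max with a quantile of nearest-neighbour distances to suppress outlying background features, conveys the idea:

```python
import numpy as np
from scipy.spatial.distance import cdist

def robust_hausdorff(a, b, frac=0.9):
    """Rank-based (partial) Hausdorff distance between point sets a and b.

    Taking the frac-quantile of nearest-neighbour distances instead of the
    maximum makes the measure insensitive to a fraction of outliers."""
    d = cdist(a, b)                      # pairwise distances
    h_ab = np.quantile(d.min(axis=1), frac)   # a -> b direction
    h_ba = np.quantile(d.min(axis=0), frac)   # b -> a direction
    return max(h_ab, h_ba)
```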
Fluorescence hyperspectral imaging (fHSI) using a spectrally resolved detector array
Luthman, Anna Siri; Dumitru, Sebastian; Quiros‐Gonzalez, Isabel; Joseph, James
2017-01-01
The ability to resolve multiple fluorescent emissions from different biological targets in video-rate applications, such as endoscopy and intraoperative imaging, has traditionally been limited by the use of filter-based imaging systems. Hyperspectral imaging (HSI) facilitates the detection of both spatial and spectral information in a single data acquisition; however, instrumentation for HSI is typically complex, bulky and expensive. We sought to overcome these limitations using a novel robust and low-cost HSI camera based on a spectrally resolved detector array (SRDA). We integrated this HSI camera into a wide-field reflectance-based imaging system operating in the near-infrared range to assess the suitability for in vivo imaging of exogenous fluorescent contrast agents. Using this fluorescence HSI (fHSI) system, we were able to accurately resolve the presence and concentration of at least 7 fluorescent dyes in solution. We also demonstrate high spectral unmixing precision, signal linearity with dye concentration and at depth in tissue-mimicking phantoms, and delineate 4 fluorescent dyes in vivo. Our approach, including statistical background removal, could be directly generalised to broader spectral ranges, for example, to resolve tissue reflectance or autofluorescence, and could in future be tailored to video-rate applications requiring snapshot HSI data acquisition.
NASA Astrophysics Data System (ADS)
Xia, Wenze; Ma, Yayun; Han, Shaokun; Wang, Yulin; Liu, Fei; Zhai, Yu
2018-06-01
One of the most important goals of research on three-dimensional nonscanning laser imaging systems is the improvement of the illumination system. In this paper, a new three-dimensional nonscanning laser imaging system based on the illumination pattern of a point-light-source array is proposed. This array is obtained using a fiber array connected to a laser array in which each unit laser has independent control circuits. The system uses a point-to-point imaging process, realized through the exact corresponding optical relationship between the point-light-source array and a linear-mode avalanche photodiode array detector. The complete working process of this system is explained in detail, and a mathematical model of the system containing four equations is established. A simulated contrast experiment and two real contrast experiments, which use a simplified setup without a laser array, are performed. The final results demonstrate that, unlike a conventional three-dimensional nonscanning laser imaging system, the proposed system meets all the requirements of an eligible illumination system. Finally, the imaging performance of the system is analyzed under defocusing, and the analytical results show that the system has good defocusing robustness and can be easily adjusted in real applications.
Hwang, Wonjun; Wang, Haitao; Kim, Hyunwoo; Kee, Seok-Cheol; Kim, Junmo
2011-04-01
The authors present a robust face recognition system for large-scale data sets taken under uncontrolled illumination variations. The proposed face recognition system consists of a novel illumination-insensitive preprocessing method, hybrid Fourier-based facial feature extraction, and a score fusion scheme. First, in the preprocessing stage, a face image is transformed into an illumination-insensitive image, called an "integral normalized gradient image," by normalizing and integrating the smoothed gradients of a facial image. Then, for feature extraction by complementary classifiers, multiple face models based upon hybrid Fourier features are applied. The hybrid Fourier features are extracted from different Fourier domains in different frequency bandwidths, and each feature is individually classified by linear discriminant analysis. In addition, multiple face models are generated from plural normalized face images that have different eye distances. Finally, to combine scores from the multiple complementary classifiers, a log-likelihood-ratio-based score fusion scheme is applied. The proposed system is evaluated using the Face Recognition Grand Challenge (FRGC) experimental protocols; FRGC is a large publicly available data set. Experimental results on the FRGC version 2.0 data sets show that the proposed method achieves an average 81.49% verification rate on 2-D face images under various environmental variations such as illumination changes, expression changes, and time lapse.
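A rough sketch of the "integral normalized gradient image" idea follows: gradients are normalized by a smoothed gradient magnitude and then re-integrated. The cumulative-sum reintegration here is a deliberately crude stand-in for the authors' integration step, so treat this as a conceptual illustration only:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def integral_normalized_gradient_image(img, sigma=2.0, eps=1e-3):
    """Illumination-insensitive preprocessing sketch: divide the gradient
    field by a smoothed gradient magnitude, then re-integrate."""
    gy, gx = np.gradient(img.astype(float))
    mag = gaussian_filter(np.hypot(gx, gy), sigma) + eps
    nx, ny = gx / mag, gy / mag                  # normalized gradient field
    # Crude reintegration by accumulating the normalized derivatives along
    # each axis (a simplistic stand-in for a proper integration scheme).
    rec = np.cumsum(ny, axis=0) + np.cumsum(nx, axis=1)
    return rec / 2.0
```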
Palmprint Based Verification System Using SURF Features
NASA Astrophysics Data System (ADS)
Srinivas, Badrinath G.; Gupta, Phalguni
This paper describes the design and development of a prototype of a robust biometric system for verification. The system uses features of the human hand extracted with the Speeded Up Robust Features (SURF) operator. The hand image is acquired using a low-cost scanner. The extracted palmprint region is robust to hand translation and rotation on the scanner. The system is tested on the IITK database of 200 images and the PolyU database of 7,751 images, and is found to be robust with respect to translation and rotation. It has an FAR of 0.02%, an FRR of 0.01%, and an accuracy of 99.98%, and can be a suitable system for civilian applications and high-security environments.
Robust control of combustion instabilities
NASA Astrophysics Data System (ADS)
Hong, Boe-Shong
A feedback-controlled, robust combustion-fluid dynamics model is decomposed into several interacting dynamical subsystems, each of which has its own time scale and physical significance. On the fast time scale, the phenomenon of combustion instability corresponds to the internal feedback of two subsystems, acoustic dynamics and flame dynamics, which are parametrically dependent on the slow-time-scale mean-flow dynamics controlled for global performance by a mean-flow controller. This dissertation constructs such a control system, through modeling, analysis, and synthesis, to deal with model uncertainties, environmental noise, and time-varying mean-flow operation. The conservation laws are decomposed into fast-time acoustic dynamics and slow-time mean-flow dynamics, which serve to synthesize an LPV (linear parameter varying) L2-gain robust control law, in which a robust observer is embedded for estimating and controlling the internal state while achieving trade-offs among robustness, performance, and operation. The robust controller is formulated as two LPV-type Linear Matrix Inequalities (LMIs), whose numerical solver is developed by the finite-element method. Some important issues related to physical understanding and engineering application are discussed in simulated results of the control system.
Chen, Huipeng; Li, Mengyuan; Zhang, Yi; Xie, Huikai; Chen, Chang; Peng, Zhangming; Su, Shaohui
2018-02-08
Incorporating linear-scanning micro-electro-mechanical systems (MEMS) micromirrors into Fourier transform spectral acquisition systems can greatly reduce the size of the spectrometer equipment, making portable Fourier transform spectrometers (FTS) possible. How to minimize the tilting of the MEMS mirror plate during its large linear scan is a major problem in this application. In this work, an FTS system has been constructed based on a biaxial MEMS micromirror with a large-piston displacement of 180 μm, and a biaxial H∞ robust controller is designed. Compared with open-loop control and proportional-integral-derivative (PID) closed-loop control, H∞ robust control has good stability and robustness. The experimental results show that the stable scanning displacement reaches 110.9 μm under the H∞ robust control, and the tilting angle of the MEMS mirror plate in that full scanning range falls within ±0.0014°. Without control, the FTS system cannot generate meaningful spectra. In contrast, the FTS yields a clean spectrum with a full width at half maximum (FWHM) spectral linewidth of 96 cm⁻¹ under the H∞ robust control. Moreover, the FTS system can maintain good stability and robustness under various driving conditions.
NASA Astrophysics Data System (ADS)
Polat, Esra; Gunay, Suleyman
2013-10-01
One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes overestimation of the regression parameters and increases their variance. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. The SIMPLS algorithm is the leading PLSR algorithm because of its speed and efficiency, and its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; the dependent variables are then regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to show the use of the RPCR and RSIMPLS methods on an econometric data set by comparing the two methods on an inflation model of Turkey. The methods are compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-Validation (R-RMSECV), a robust R2 value, and the Robust Component Selection (RCS) statistic.
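The two-stage RPCR idea (robust scores, then robust regression on the scores) can be mimicked with off-the-shelf scikit-learn pieces. Note the substitutions: classical PCA stands in for ROBPCA, and Huber-loss regression stands in for the authors' robust regression step:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import HuberRegressor

def robust_pcr(X, y, n_components=3):
    """Stage 1: compress predictors to a few scores (RPCR uses ROBPCA;
    classical PCA stands in here). Stage 2: robust regression on scores."""
    pca = PCA(n_components=n_components).fit(X)
    scores = pca.transform(X)
    model = HuberRegressor().fit(scores, y)   # down-weights outlying responses
    return pca, model

# Prediction for new predictor rows X_new:
#   pca, model = robust_pcr(X, y)
#   y_hat = model.predict(pca.transform(X_new))
```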
Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling.
Zhao, Bo; Setsompop, Kawin; Adalsteinsson, Elfar; Gagoski, Borjan; Ye, Huihui; Ma, Dan; Jiang, Yun; Ellen Grant, P; Griswold, Mark A; Wald, Lawrence L
2018-02-01
This article introduces a constrained imaging method based on low-rank and subspace modeling to improve the accuracy and speed of MR fingerprinting (MRF). A new model-based imaging method is developed for MRF to reconstruct high-quality time-series images and accurate tissue parameter maps (e.g., T1, T2, and spin-density maps). Specifically, the proposed method exploits low-rank approximations of MRF time-series images and further enforces temporal subspace constraints to capture magnetization dynamics. This allows the time-series image reconstruction problem to be formulated as a simple linear least-squares problem, which enables efficient computation. After image reconstruction, tissue parameter maps are estimated via dictionary-based pattern matching, as in the conventional approach. The effectiveness of the proposed method was evaluated with in vivo experiments. Compared with the conventional MRF reconstruction, the proposed method reconstructs time-series images with significantly reduced aliasing artifacts and noise contamination. Although the conventional approach exhibits some robustness to these corruptions, the improved time-series image reconstruction in turn provides more accurate tissue parameter maps. The improvement is especially pronounced when the acquisition time becomes short. The proposed method significantly improves the accuracy of MRF and also reduces data acquisition time. Magn Reson Med 79:933-942, 2018.
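The subspace-modeling step can be sketched as follows: the temporal subspace comes from an SVD of the Bloch-simulated dictionary, reconstruction reduces to linear least squares in the subspace coefficients, and parameter maps follow from dictionary matching. The dictionary here is a random stand-in, and the fully sampled case shown collapses the least-squares problem to a projection:

```python
import numpy as np

T, n_atoms, K = 500, 2000, 5
D = np.random.randn(T, n_atoms)                  # stand-in for a Bloch-simulated dictionary

U, _, _ = np.linalg.svd(D, full_matrices=False)  # temporal PCA of the dictionary
Uk = U[:, :K]                                    # T x K low-rank temporal subspace

# With X ~ Uk @ C, recovering C from undersampled k-space data is a linear
# least-squares problem; in the fully sampled case it reduces to a projection.
y = D[:, 123] + 0.05 * np.random.randn(T)        # one voxel's noisy time course
c = Uk.T @ y                                     # subspace coefficients
x_hat = Uk @ c                                   # reconstructed time course

# Dictionary-based pattern matching, as in conventional MRF.
Dn = D / np.linalg.norm(D, axis=0)
best_atom = int(np.argmax(Dn.T @ x_hat))         # index of best-matching fingerprint
```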
Segmentation of histological images and fibrosis identification with a convolutional neural network.
Fu, Xiaohang; Liu, Tong; Xiong, Zhaohan; Smaill, Bruce H; Stiles, Martin K; Zhao, Jichao
2018-07-01
Segmentation of histological images is one of the most crucial tasks for many biomedical analyses involving quantification of certain tissue types, such as fibrosis via Masson's trichrome staining. However, challenges are posed by the high variability and complexity of structural features in such images, in addition to imaging artifacts. Further, the conventional approach of manual thresholding is labor-intensive and highly sensitive to inter- and intra-image intensity variations. An accurate and robust automated segmentation method is therefore of high interest. We propose and evaluate an elegant convolutional neural network (CNN) designed for segmentation of histological images, particularly those with Masson's trichrome stain. The network comprises 11 successive convolution, rectified linear unit, and batch normalization layers. It outperformed state-of-the-art CNNs on a dataset of cardiac histological images (labeling fibrosis, myocytes, and background) with a Dice similarity coefficient of 0.947. With 100 times fewer (only 300,000) trainable parameters than the state-of-the-art, our CNN is less susceptible to overfitting and is efficient. Additionally, it retains image resolution from input to output, captures fine-grained details, and can be trained end-to-end smoothly. To the best of our knowledge, this is the first deep CNN tailored to the problem of concern, and it may potentially be extended to solve similar segmentation tasks to facilitate investigations into pathology and clinical treatment.
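A PyTorch sketch of the resolution-preserving conv-ReLU-BN stack described above (layer widths are illustrative guesses; the abstract does not give the exact channel counts):

```python
import torch
import torch.nn as nn

class HistoSegNet(nn.Module):
    """11 successive conv-ReLU-BN layers with no pooling, so the output
    keeps the input's spatial resolution; final 1x1 conv gives class scores."""
    def __init__(self, n_classes=3, width=32, depth=11):
        super().__init__()
        layers, in_ch = [], 3
        for _ in range(depth - 1):
            layers += [nn.Conv2d(in_ch, width, 3, padding=1),
                       nn.ReLU(inplace=True),
                       nn.BatchNorm2d(width)]
            in_ch = width
        layers.append(nn.Conv2d(in_ch, n_classes, 1))  # per-pixel class scores
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

logits = HistoSegNet()(torch.randn(1, 3, 256, 256))   # same spatial size out
```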
Integrated vision-based GNC for autonomous rendezvous and capture around Mars
NASA Astrophysics Data System (ADS)
Strippoli, L.; Novelli, G.; Gil Fernandez, J.; Colmenarejo, P.; Le Peuvedic, C.; Lanza, P.; Ankersen, F.
2015-06-01
Integrated GNC (iGNC) is an activity aimed at designing, developing, and validating the GNC for autonomously performing the rendezvous and capture phase of the Mars sample return mission as defined during the Mars Sample Return Orbiter (MSRO) ESA study. The validation cycle includes testing in an end-to-end simulator, in a real-time avionics-representative test bench, and, finally, in a dynamic hardware-in-the-loop test bench for assessing the feasibility, performance, and figures of merit of the baseline approach defined during the MSRO study, for both nominal and contingency scenarios. The on-board software (OBSW) is tailored to work with the sensors, actuators, and orbit baseline proposed in MSRO. The whole rendezvous is based on optical navigation, aided by RF Doppler during the search and first orbit determination of the orbiting sample. The simulated rendezvous phase also includes the non-linear orbit synchronization, based on a dedicated non-linear guidance algorithm robust to Mars ascent vehicle (MAV) injection accuracy or MAV failures resulting in elliptic target orbits. The search phase is very demanding for the image processing (IP) due to the very high visual magnitude of the target with respect to the stellar background, and the attitude GNC requires very high pointing stability to fulfil IP constraints. A trade-off of innovative, autonomous navigation filters indicates the unscented Kalman filter (UKF) as the approach that provides the best results in terms of robustness, response to non-linearities, and performance compatible with the computational load. At short range, an optimized IP based on a convex hull algorithm has been developed in order to guarantee LoS and range measurements from hundreds of metres to capture.
Robust Bounded Influence Tests in Linear Models
1988-11-01
Markatou, Marianthi (The University of Iowa) and Hettmansperger, Thomas P. (The Pennsylvania State University), Robust Bounded Influence Tests in Linear Models, November 1988. Related reference: sensitivity analysis and bounded influence estimation, in Evaluation of Econometric Models, J. Kmenta and J.B. Ramsey (eds.), Academic Press, New York.
Flight control application of new stability robustness bounds for linear uncertain systems
NASA Technical Reports Server (NTRS)
Yedavalli, Rama K.
1993-01-01
This paper addresses the issue of obtaining bounds on the real parameter perturbations of a linear state-space model for robust stability. Based on Kronecker algebra, new, easily computable sufficient bounds are derived that are much less conservative than existing bounds, since the technique is designed specifically for real parameter perturbations (in contrast to specializing the complex-variation case to the real-parameter case). The proposed theory is illustrated with application to several flight control examples.
NASA Astrophysics Data System (ADS)
Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.
2016-01-01
Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
DWT-based stereoscopic image watermarking
NASA Astrophysics Data System (ADS)
Chammem, A.; Mitrea, M.; Prêteux, F.
2011-03-01
Watermarking has already established itself as an effective and reliable solution for conventional multimedia content protection (image/video/audio/3D). By persistently (robustly) and imperceptibly (transparently) inserting some extra data into the original content, illegitimate use of the data can be detected without imposing any annoying constraint on a legal user. The present paper deals with stereoscopic image protection by means of watermarking techniques. We first investigate the peculiarities of visual stereoscopic content from the transparency and robustness points of view. Then, we advance a new watermarking scheme designed to reach the trade-off between transparency and robustness while ensuring a prescribed quantity of inserted information. Finally, this method is evaluated on two stereoscopic image corpora (natural images and medical data).
Robust linear discriminant models to solve financial crisis in banking sectors
NASA Astrophysics Data System (ADS)
Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Idris, Faoziah; Ali, Hazlina; Omar, Zurni
2014-12-01
Linear discriminant analysis (LDA) is a widely used technique in pattern classification via an equation that minimizes the probability of misclassifying cases into their respective categories. However, the performance of classical estimators in LDA depends highly on the assumptions of normality and homoscedasticity. Several robust estimators in LDA, such as the Minimum Covariance Determinant (MCD), S-estimators, and the Minimum Volume Ellipsoid (MVE), have been addressed by many authors to alleviate the non-robustness of the classical estimates. In this paper, we investigate the financial crisis of the Malaysian banking institutions using robust LDA and classical LDA methods. Our objective is to distinguish the "distress" and "non-distress" banks in Malaysia by using the LDA models. The hit ratio is used to validate the predictive accuracy of the LDA models. The performance of LDA is evaluated by estimating the misclassification rate via the apparent error rate. The results and comparisons show that the robust estimators provide better performance than the classical estimators for LDA.
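To illustrate how one of the robust estimators named above plugs into LDA, the sketch below swaps the classical per-class mean and covariance for MCD estimates (via scikit-learn) in the usual two-class linear discriminant rule:

```python
import numpy as np
from sklearn.covariance import MinCovDet

def robust_lda_fit(X, y):
    """Two-class LDA with MCD location/scatter replacing the classical
    estimates; prior-weighted pooling is a simplification."""
    means, covs, priors = [], [], []
    for c in (0, 1):
        mcd = MinCovDet().fit(X[y == c])
        means.append(mcd.location_)          # robust class mean
        covs.append(mcd.covariance_)         # robust class scatter
        priors.append(np.mean(y == c))
    pooled = sum(p * S for p, S in zip(priors, covs))
    w = np.linalg.solve(pooled, means[1] - means[0])   # discriminant direction
    b = -0.5 * w @ (means[0] + means[1]) + np.log(priors[1] / priors[0])
    return w, b                              # classify as class 1 if w @ x + b > 0
```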
Zheng, Haiyong; Wang, Ruchen; Yu, Zhibin; Wang, Nan; Gu, Zhaorui; Zheng, Bing
2017-12-28
Plankton, including phytoplankton and zooplankton, are the main source of food for organisms in the ocean and form the base of the marine food chain. As fundamental components of marine ecosystems, plankton are very sensitive to environmental changes, and the study of plankton abundance and distribution is crucial in order to understand environmental changes and protect marine ecosystems. This study was carried out to develop a broadly applicable plankton classification system with high accuracy for the increasing number of various imaging devices. The literature shows that most plankton image classification systems have been limited to a single specific imaging device and a relatively narrow taxonomic scope; a truly practical system for automatic plankton classification does not yet exist, and this study partly fills that gap. Motivated by this analysis and by the development of the technology, we focused on the requirements of practical application and propose an automatic system for plankton image classification combining multiple view features via multiple kernel learning (MKL). First, in order to describe the biomorphic characteristics of plankton more completely and comprehensively, we combined general features with robust features, especially by adding features like the Inner-Distance Shape Context for morphological representation. Second, we divided all the features into different types from multiple views and fed them to multiple classifiers, instead of only one, by optimally combining the different kernel matrices computed from the different types of features via multiple kernel learning. Moreover, we applied a feature selection method to choose optimal feature subsets from redundant features to suit different datasets from different imaging devices. We implemented our proposed classification system on three different datasets covering more than 20 categories from phytoplankton to zooplankton. The experimental results validated that our system outperforms state-of-the-art plankton image classification systems in terms of accuracy and robustness. This study demonstrated an automatic plankton image classification system combining multiple view features using multiple kernel learning. The results indicated that multiple view features combined by NLMKL using three kernel functions (linear, polynomial, and Gaussian) describe and exploit feature information better, thereby achieving higher classification accuracy.
Median Robust Extended Local Binary Pattern for Texture Classification.
Liu, Li; Lao, Songyang; Fieguth, Paul W; Guo, Yulan; Wang, Xiaogang; Pietikäinen, Matti
2016-03-01
Local binary patterns (LBP) are considered among the most computationally efficient high-performance texture features. However, the LBP method is very sensitive to image noise and is unable to capture macrostructure information. To address these disadvantages, in this paper we introduce a novel descriptor for texture classification, the median robust extended LBP (MRELBP). Different from the traditional LBP and many LBP variants, MRELBP compares regional image medians rather than raw image intensities. A multiscale LBP-type descriptor is computed by efficiently comparing image medians over a novel sampling scheme, which can capture both microstructure and macrostructure texture information. A comprehensive evaluation on benchmark data sets reveals MRELBP's high performance, robust to gray-scale variations, rotation changes, and noise, at a low computational cost. MRELBP produces the best classification scores of 99.82%, 99.38%, and 99.77% on three popular Outex test suites. More importantly, MRELBP is shown to be highly robust to image noise, including Gaussian noise, Gaussian blur, salt-and-pepper noise, and random pixel corruption.
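The defining trick, comparing regional medians instead of raw intensities, can be shown in a simplified single-scale form (this omits MRELBP's multiscale sampling scheme and rotation-invariant encoding):

```python
import numpy as np
from scipy.ndimage import median_filter

def median_lbp(img, radius=1):
    """Simplified MRELBP-style code: threshold the local *medians* of the
    8 neighbours against the centre's local median."""
    med = median_filter(img, size=3)            # regional medians
    h, w = med.shape
    c = med[radius:h - radius, radius:w - radius]
    code = np.zeros_like(c, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = med[radius + dy:h - radius + dy, radius + dx:w - radius + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code                                 # one 8-bit pattern per pixel
```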
Real-time scene and signature generation for ladar and imaging sensors
NASA Astrophysics Data System (ADS)
Swierkowski, Leszek; Christie, Chad L.; Antanovskii, Leonid; Gouthas, Efthimios
2014-05-01
This paper describes development of two key functionalities within the VIRSuite scene simulation program, broadening its scene generation capabilities and increasing accuracy of thermal signatures. Firstly, a new LADAR scene generation module has been designed. It is capable of simulating range imagery for Geiger mode LADAR, in addition to the already existing functionality for linear mode systems. Furthermore, a new 3D heat diffusion solver has been developed within the VIRSuite signature prediction module. It is capable of calculating the temperature distribution in complex three-dimensional objects for enhanced dynamic prediction of thermal signatures. With these enhancements, VIRSuite is now a robust tool for conducting dynamic simulation for missiles with multi-mode seekers.
NASA Astrophysics Data System (ADS)
Hsu, Jen-Feng; Dhingra, Shonali; D'Urso, Brian
2017-01-01
Mirror galvanometer systems (galvos) are commonly employed in research and commercial applications in areas involving laser imaging, laser machining, laser-light shows, and others. Here, we present a robust, moderate-speed, and cost-efficient home-built galvo system. The mechanical part of this design consists of one mirror, which is tilted around two axes with multiple surface transducers. We demonstrate the ability of this galvo by scanning the mirror using a computer, via a custom driver circuit. The performance of the galvo, including scan range, noise, linearity, and scan speed, is characterized. As an application, we show that this galvo system can be used in a confocal scanning microscopy system.
Molina, David; Pérez-Beteta, Julián; Martínez-González, Alicia; Martino, Juan; Velasquez, Carlos; Arana, Estanislao; Pérez-García, Víctor M
2017-01-01
Textural measures have been widely explored as imaging biomarkers in cancer. However, their robustness under dynamic range and spatial resolution changes in brain 3D magnetic resonance images (MRI) has not been assessed. The aim of this work was to study potential variations of textural measures due to changes in MRI protocols. Twenty patients harboring glioblastoma with pretreatment 3D T1-weighted MRIs were included in the study. Four different spatial resolution combinations and three dynamic ranges were studied for each patient. Sixteen three-dimensional textural heterogeneity measures were computed for each patient and configuration, including co-occurrence matrix (CM) features and run-length matrix (RLM) features. The coefficient of variation was used to assess the robustness of the measures in two series of experiments corresponding to (i) changing the dynamic range and (ii) changing the matrix size. No textural measures were robust under dynamic range changes. Entropy was the only textural feature robust under spatial resolution changes (coefficient of variation under 10% in all cases). Textural measures of three-dimensional brain tumor images are thus robust under neither dynamic range nor matrix size changes. Standards should be harmonized to use textural features as imaging biomarkers in radiomic-based studies. The implications of this work go beyond the specific tumor type studied here and pose the need for standardization in textural feature calculation of oncological images.
NASA Astrophysics Data System (ADS)
Chen, Wen-Yuan; Liu, Chen-Chung
2006-01-01
The problems with binary watermarking schemes are that they have only a small amount of embeddable space and are not robust enough. We develop a slice-based large-cluster algorithm (SBLCA) to construct a robust watermarking scheme for binary images. In SBLCA, a small-amount cluster selection (SACS) strategy is used to search for a feasible slice in a large-cluster flappable-pixel decision (LCFPD) method, which is used to search for the best location for concealing a secret bit from a selected slice. This method has four major advantages over the others: (a) SBLCA has a simple and effective decision function to select appropriate concealment locations, (b) SBLCA utilizes a blind watermarking scheme without the original image in the watermark extracting process, (c) SBLCA uses slice-based shuffling capability to transfer the regular image into a hash state without remembering the state before shuffling, and finally, (d) SBLCA has enough embeddable space that every 64 pixels could accommodate a secret bit of the binary image. Furthermore, empirical results on test images reveal that our approach is a robust watermarking scheme for binary images.
Application of Bounded Linear Stability Analysis Method for Metrics-Driven Adaptive Control
NASA Technical Reports Server (NTRS)
Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje
2009-01-01
This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method to metrics-driven adaptive control. The bounded linear stability analysis method is used for analyzing the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by stability metrics to achieve robustness. By applying the bounded linear stability analysis method, the adaptive gain is adjusted during adaptation in order to meet certain phase margin requirements. Analysis of metrics-driven adaptive control is carried out for a second-order system that represents the pitch attitude control of a generic transport aircraft. The analysis shows that the system with the metrics-conforming variable adaptive gain becomes more robust to unmodeled dynamics or time delay. The effect of the analysis time window for BLSA is also evaluated in order to meet the stability margin criteria.
Quality Scalability Aware Watermarking for Visual Content.
Bhowmik, Deepayan; Abhayaratne, Charith
2016-11-01
Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by a new wavelet-domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against the quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality scalable content adaptation. Our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.
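As a baseline for the quantization-based embedding discussed above, plain quantization index modulation on wavelet coefficients looks like this; the paper's tree-guided, distortion-constrained atom construction is considerably more elaborate:

```python
import numpy as np

def qim_embed(coeffs, bits, delta=8.0):
    """Snap each coefficient to the quantizer lattice selected by its bit."""
    offsets = np.where(bits, delta / 2.0, 0.0)
    return np.round((coeffs - offsets) / delta) * delta + offsets

def qim_extract(coeffs, delta=8.0):
    """Decide each bit by which lattice the received coefficient is nearer to."""
    zeros = np.zeros(coeffs.shape, dtype=bool)
    ones = np.ones(coeffs.shape, dtype=bool)
    d0 = np.abs(coeffs - qim_embed(coeffs, zeros, delta))
    d1 = np.abs(coeffs - qim_embed(coeffs, ones, delta))
    return d1 < d0

bits = np.array([1, 0, 1, 1, 0], dtype=bool)
marked = qim_embed(np.random.randn(5) * 20, bits, delta=8.0)
assert (qim_extract(marked) == bits).all()   # also survives perturbations < delta/4
```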
Amoroso, N; Errico, R; Bruno, S; Chincarini, A; Garuccio, E; Sensi, F; Tangaro, S; Tateo, A; Bellotti, R
2015-11-21
In this study we present a novel fully automated Hippocampal Unified Multi-Atlas-Networks (HUMAN) algorithm for the segmentation of the hippocampus in structural magnetic resonance imaging. In multi-atlas approaches atlas selection is of crucial importance for the accuracy of the segmentation. Here we present an optimized method based on the definition of a small peri-hippocampal region to target the atlas learning with linear and non-linear embedded manifolds. All atlases were co-registered to a data driven template resulting in a computationally efficient method that requires only one test registration. The optimal atlases identified were used to train dedicated artificial neural networks whose labels were then propagated and fused to obtain the final segmentation. To quantify data heterogeneity and protocol inherent effects, HUMAN was tested on two independent data sets provided by the Alzheimer's Disease Neuroimaging Initiative and the Open Access Series of Imaging Studies. HUMAN is accurate and achieves state-of-the-art performance (Dice_ADNI = 0.929 ± 0.003 and Dice_OASIS = 0.869 ± 0.002). It is also a robust method that remains stable when applied to the whole hippocampus or to sub-regions (patches). HUMAN also compares favorably with a basic multi-atlas approach and a benchmark segmentation tool such as FreeSurfer.
Chunking dynamics: heteroclinics in mind
Rabinovich, Mikhail I.; Varona, Pablo; Tristan, Irma; Afraimovich, Valentin S.
2014-01-01
Recent results of imaging technologies and non-linear dynamics make it possible to relate the structure and dynamics of functional brain networks to different mental tasks and to build theoretical models for the description and prediction of cognitive activity. Such models are non-linear dynamical descriptions of the interaction of the core components—brain modes—participating in a specific mental function. The dynamical images of different mental processes depend on their temporal features. The dynamics of many cognitive functions are transient. They are often observed as a chain of sequentially changing metastable states. A stable heteroclinic channel (SHC) consisting of a chain of saddles—metastable states—connected by unstable separatrices is a mathematical image for robust transients. In this paper we focus on hierarchical chunking dynamics that can represent several forms of transient cognitive activity. Chunking is a dynamical phenomenon that nature uses to perform information processing of long sequences by dividing them into shorter information items. Chunking, for example, makes the use of short-term memory more efficient by breaking up long strings of information (as in language, where a novel is separated into chapters, paragraphs, sentences, and finally words). Chunking is important in many processes of perception, learning, and cognition in humans and animals. Based on anatomical information about the hierarchical organization of functional brain networks, we propose a cognitive network architecture that hierarchically chunks and super-chunks switching sequences of metastable states produced by winnerless competitive heteroclinic dynamics.
Distinguishing one from many using super-resolution compressive sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anthony, Stephen Michael; Mulcahy-Stanislawczyk, Johnathan; Shields, Eric A.
2018-05-14
Distinguishing whether a signal corresponds to a single source or a limited number of highly overlapping point spread functions (PSFs) is a ubiquitous problem across all imaging scales, whether detecting receptor-ligand interactions in cells or detecting binary stars. Super-resolution imaging based upon compressed sensing exploits the relative sparseness of the point sources to successfully resolve sources which may be separated by much less than the Rayleigh criterion. However, as a solution to an underdetermined system of linear equations, compressive sensing requires the imposition of constraints which may not always be valid. One typical constraint is that the PSF is known. However, the PSF of the actual optical system may reflect aberrations not present in the theoretical ideal optical system. Even when the optics are well characterized, the actual PSF may reflect factors such as non-uniform emission of the point source (e.g., fluorophore dipole emission). As such, the actual PSF may differ from the PSF used as a constraint. Similarly, multiple different regularization constraints have been suggested, including the l1-norm, l0-norm, and generalized Gaussian Markov random fields (GGMRFs), each of which imposes a different constraint. Other important factors include the signal-to-noise ratio of the point sources and whether the point sources vary in intensity. In this work, we explore how these factors influence super-resolution image recovery robustness, determining the sensitivity and specificity. We conclude with an approach that is more robust to the types of PSF errors present in actual optical systems.
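The l1-constrained recovery underlying this kind of CS super-resolution can be sketched with plain ISTA; the columns of the (hypothetical) dictionary A are PSF copies centred on candidate sub-pixel positions, and peaks in the recovered x mark point sources:

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=500):
    """Plain ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1, the sparse
    recovery problem at the heart of CS-based super-resolution."""
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L         # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

# A's columns would hold the PSF shifted to each candidate sub-pixel
# position; using the wrong PSF model is exactly the failure mode studied.
```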
Tao, Chenyang; Feng, Jianfeng
2016-03-15
Quantifying associations in neuroscience (and many other scientific disciplines) is often challenged by high dimensionality, nonlinearity, and noisy observations. Many classic methods have either poor power or poor scalability on data sets of the same or different scales, such as genetic, physiological, and image data. Based on the framework of reproducing kernel Hilbert spaces, we proposed a new nonlinear association criterion (NAC) with an efficient numerical algorithm and p-value approximation scheme. We also presented mathematical justification that links the proposed method to related methods such as kernel generalized variance, kernel canonical correlation analysis, and the Hilbert-Schmidt independence criterion. NAC allows the detection of association between arbitrary input domains as long as a characteristic kernel is defined. A MATLAB package is provided to facilitate applications. Extensive simulation examples and four real-world neuroscience examples, including functional MRI causality, calcium imaging, and imaging genetic studies on autism [Brain, 138(5):1382-1393 (2015)] and alcohol addiction [PNAS, 112(30):E4085-E4093 (2015)], are used to benchmark NAC. It demonstrates superior performance over the existing procedures we tested and also yields biologically significant results for the real-world examples. NAC beats its linear counterparts when nonlinearity is present in the data. It also shows more robustness against different experimental setups compared with its nonlinear counterparts. In this work we presented a new and robust statistical approach, NAC, for measuring associations. It could serve as an interesting alternative to the existing methods for datasets where nonlinearity and other confounding factors are present.
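Of the related criteria named above, the Hilbert-Schmidt independence criterion is the easiest to sketch; the biased Gaussian-kernel estimator below conveys the flavour of kernel association measures, though NAC itself differs:

```python
import numpy as np

def hsic(x, y, sigma=1.0):
    """Biased HSIC estimate with Gaussian kernels: a kernel dependence
    measure of the same family as NAC."""
    n = len(x)
    def gram(z):
        z = np.asarray(z, float).reshape(n, -1)
        d2 = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    K, L = gram(x), gram(y)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Nonlinearly dependent data scores markedly higher than independent data.
rng = np.random.default_rng(0)
a = rng.standard_normal(200)
print(hsic(a, a ** 2), hsic(a, rng.standard_normal(200)))
```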
NASA Astrophysics Data System (ADS)
Li, Zhifu; Hu, Yueming; Li, Di
2016-08-01
For a class of linear discrete-time uncertain systems, a feedback feed-forward iterative learning control (ILC) scheme is proposed, comprising an iterative learning controller and two current-iteration feedback controllers. The iterative learning controller is used to improve performance along the iteration direction, and the feedback controllers are used to improve performance along the time direction. First, the uncertain feedback feed-forward ILC system is represented by an uncertain two-dimensional Roesser model. Second, two robust control schemes are proposed: one ensures that the feedback feed-forward ILC system is bounded-input bounded-output stable along the time direction, and the other ensures that it is asymptotically stable along the time direction. Both schemes guarantee that the system is robustly monotonically convergent along the iteration direction. Third, sufficient conditions for robust convergence are given in terms of a linear matrix inequality (LMI); the LMI can also be used to determine the gain matrix of the feedback feed-forward iterative learning controller. Finally, simulation results are presented to demonstrate the effectiveness of the proposed schemes.
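A scalar toy version of the feedback feed-forward ILC structure may help fix ideas: the feed-forward input is refined between trials from the previous trial's error, while a current-iteration feedback term acts along the time axis. The gains, plant, and one-step error shift are illustrative choices, not the paper's LMI-derived design:

```python
import numpy as np

def ilc_trial(u_ff, plant_step, y_ref, l_gain=0.5, k_fb=0.3):
    """Run one trial: apply feed-forward input plus current-iteration
    feedback, then update the feed-forward input from the trial error."""
    n = len(y_ref)
    y = np.zeros(n)
    x = 0.0
    for t in range(n):
        e_now = y_ref[t] - x                     # feedback along the time axis
        x = plant_step(x, u_ff[t] + k_fb * e_now)
        y[t] = x
    e = y_ref - y
    u_next = u_ff + l_gain * np.roll(e, -1)      # one-step-lead learning update
    return u_next, e

plant_step = lambda x, u: 0.9 * x + 0.5 * u      # stand-in first-order plant
u, y_ref = np.zeros(50), np.ones(50)
for k in range(20):
    u, e = ilc_trial(u, plant_step, y_ref)       # error shrinks across iterations
```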
NASA Astrophysics Data System (ADS)
Otake, Yoshito; Wang, Adam S.; Webster Stayman, J.; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Khanna, A. Jay; Gokaslan, Ziya L.; Siewerdsen, Jeffrey H.
2013-12-01
We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14,400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with 'success' defined as PDE <5 mm) using 1,718,664 ± 96,582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial run) the same registration could be solved with 99.993% success in 6.3 s. The ability to register CT to fluoroscopy in a manner robust to patient deformation could be valuable in applications such as radiation therapy, interventional radiology, and an assistant to target localization (e.g., vertebral labeling) in image-guided spine surgery.
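The similarity metric driving the optimization can be approximated by correlating normalized gradient fields of the fluoroscopy image and the simulated projection (DRR); this is a stand-in for the paper's normalized gradient information metric, not its exact definition:

```python
import numpy as np

def normalized_gradient_similarity(fixed, moving, eps=1e-8):
    """Gradient-orientation agreement between a fluoroscopy image and a
    DRR; maximize over the 3D pose parameters generating the DRR."""
    def ngrad(im):
        gy, gx = np.gradient(im.astype(float))
        mag = np.sqrt(gx ** 2 + gy ** 2) + eps
        return gx / mag, gy / mag                # unit gradient field
    fx, fy = ngrad(fixed)
    mx, my = ngrad(moving)
    return np.mean((fx * mx + fy * my) ** 2)     # squared cosine of angle difference
```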
SU-F-BRD-05: Robustness of Dose Painting by Numbers in Proton Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montero, A Barragan; Sterpin, E; Lee, J
Purpose: Proton range uncertainties may cause important dose perturbations within the target volume, especially when steep dose gradients are present, as in dose painting. The aim of this study is to assess robustness against setup and range errors for highly heterogeneous dose prescriptions (i.e., dose painting by numbers) delivered by proton pencil beam scanning. Methods: An automatic workflow, based on MATLAB functions, was implemented through scripting in RayStation (RaySearch Laboratories). It performs a gradient-based segmentation of the dose painting volume from 18FDG-PET images (GTVPET) and calculates the dose prescription as a linear function of the FDG-uptake value of each voxel. The workflow was applied to two patients with head and neck cancer. Robustness against setup and range errors of the conventional PTV margin strategy (prescription dilated by 2.5 mm) versus CTV-based (minimax) robust optimization (2.5 mm setup, 3% range error) was assessed by comparing the prescription with the planned dose for a set of error scenarios. Results: In order to ensure dose coverage above 95% of the prescribed dose in more than 95% of the GTVPET voxels while compensating for the uncertainties, the plans with a PTV generated a high overdose. For the nominal case, up to 35% of the GTVPET received doses 5% beyond prescription. For the worst of the evaluated error scenarios, the volume with 5% overdose increased to 50%. In contrast, for CTV-based plans this 5% overdose was present only in a small fraction of the GTVPET, ranging from 7% in the nominal case to 15% in the worst of the evaluated scenarios. Conclusion: The use of a PTV leads to non-robust dose distributions with excessive overdose in the painted volume. In contrast, robust optimization yields robust dose distributions with limited overdose. RaySearch Laboratories is sincerely acknowledged for providing us with the RayStation treatment planning system and for the support provided.
Measurement, time-stamping, and analysis of electrodermal activity in fMRI
NASA Astrophysics Data System (ADS)
Smyser, Christopher; Grabowski, Thomas J.; Rainville, Pierre; Bechara, Antoine; Razavi, Mehrdad; Mehta, Sonya; Eaton, Brent L.; Bolinger, Lizann
2002-04-01
A low-cost fMRI-compatible system was developed for detecting electrodermal activity without inducing image artifacts. Subject electrodermal activity was measured on the plantar surface of the foot using a standard recording circuit. Filtered analog skin conductance responses (SCRs) were recorded with a general-purpose, time-stamping data acquisition system. A conditioning paradigm involving painful thermal stimulation was used to demonstrate SCR detection and investigate neural correlates of conditioned autonomic activity. 128x128-pixel EPI-BOLD images were acquired with a GE 1.5T Signa scanner. Image analysis was performed using voxel-wise multiple linear regression. The covariate of interest was generated by convolving stimulus event onsets with a standard hemodynamic response function. The function was time-shifted to determine optimal activation. Significance was tested using the t-statistic. Image quality was unaffected by the device, and conditioned and unconditioned SCRs were successfully detected. Conditioned SCRs correlated significantly with activity in the right anterior insular cortex. The effect was more robust when responses were scaled by SCR amplitude. The ability to measure and time-register SCRs during fMRI acquisition enables studies of cognitive processes marked by autonomic activity, including those involving decision-making, pain, emotion, and addiction.
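The regression step described above can be sketched in a few lines of Python; the double-gamma response below is a commonly assumed canonical form, not necessarily the one used in the study.

    import numpy as np
    from scipy.stats import gamma

    def hrf(tr, duration=32.0):
        # Canonical double-gamma hemodynamic response (an assumed form).
        times = np.arange(0, duration, tr)
        h = gamma.pdf(times, 6) - gamma.pdf(times, 16) / 6.0
        return h / h.max()

    def event_regressor(onsets_s, n_scans, tr):
        # Stick function at event onsets, convolved with the HRF.
        stick = np.zeros(n_scans)
        stick[(np.asarray(onsets_s) / tr).astype(int)] = 1.0
        return np.convolve(stick, hrf(tr))[:n_scans]

    def voxel_tstat(y, x):
        # OLS fit of y = b0 + b1*x; returns the t statistic for b1.
        X = np.column_stack([np.ones_like(x), x])
        beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
        dof = len(y) - 2
        sigma2 = np.sum((y - X @ beta) ** 2) / dof
        se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
        return beta[1] / se

    reg = event_regressor([10, 40, 70], n_scans=120, tr=2.0)
    y = 0.8 * reg + np.random.default_rng(0).normal(size=120)
    print("t =", round(voxel_tstat(y, reg), 2))

Time-shifting the regressor, as in the study, amounts to repeating the fit with the HRF delayed by a grid of offsets and keeping the best one.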
High performance multi-spectral interrogation for surface plasmon resonance imaging sensors.
Sereda, A; Moreau, J; Canva, M; Maillart, E
2014-04-15
Surface plasmon resonance (SPR) sensing has proven to be a valuable tool in the field of surface interaction characterization, especially for biomedical applications where label-free techniques are of particular interest. In order to approach the theoretical resolution limit, most SPR-based systems have turned to either angular or spectral interrogation modes, which both offer very accurate real-time measurements, but at the expense of the 2-dimensional imaging capability, therefore decreasing the data throughput. In this article, we show numerically and experimentally how to combine the multi-spectral interrogation technique with 2D imaging, while finding an optimum in terms of resolution, accuracy, acquisition speed, and reduction in data dispersion with respect to the classical reflectivity interrogation mode. This multi-spectral interrogation methodology is based on a robust five-parameter fit of the spectral reflectivity curve, which enables monitoring of the reflectivity spectral shift with a resolution on the order of ten picometers using only five wavelength measurements per point. Ultimately, such a multi-spectral plasmonic imaging system allows biomolecular interaction monitoring in a linear regime independently of variations of the buffer optical index, which is illustrated on a DNA-DNA model case. © 2013 Elsevier B.V. All rights reserved.
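As a rough illustration of fitting a five-parameter spectral reflectivity model to five wavelength samples, consider the sketch below; the functional form (a linear baseline plus a Lorentzian-shaped plasmon dip) and all parameter values are our own assumptions, since the abstract does not commit to a specific parameterization. With exactly five samples and five parameters the fit is fully determined, which is consistent with the "five wavelength measurements per point" regime.

    import numpy as np
    from scipy.optimize import curve_fit

    def reflectivity_model(lam, a, b, c, lam0, w):
        # Assumed shape: linear baseline minus a Lorentzian resonance dip.
        return a + b * lam - c / (1.0 + ((lam - lam0) / w) ** 2)

    def fit_resonance(wavelengths, reflectivity, p0):
        popt, _ = curve_fit(reflectivity_model, wavelengths, reflectivity, p0=p0)
        return popt[3]  # lam0: the resonance position to track over time

    lam = np.array([740.0, 750.0, 760.0, 770.0, 780.0])  # nm, illustrative
    r = reflectivity_model(lam, 0.6, 1e-4, 0.35, 762.0, 12.0)
    print("fitted resonance (nm):", round(fit_resonance(lam, r, (0.6, 0.0, 0.3, 760.0, 10.0)), 2))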
High-performance web viewer for cardiac images
NASA Astrophysics Data System (ADS)
dos Santos, Marcelo; Furuie, Sergio S.
2004-04-01
With the advent of digital devices for medical diagnosis, the use of conventional film in radiology has decreased. Thus, the management and handling of medical images in digital format has become an important and critical task. In cardiology, for example, the main difficulty is displaying dynamic images with the color palette and frame rate used during acquisition by cath, angio, and echo systems. Another difficulty is handling large images in the memory of ordinary personal computers, including thin clients. In this work we present a web-based application that carries out these tasks robustly and with excellent performance, without burdening the server or network. This application provides near-diagnostic-quality display of cardiac images stored as DICOM 3.0 files via a web browser and provides a set of resources for viewing both still and dynamic images. It can access image files from local disks or over a network connection. Its features include real-time playback, dynamic thumbnail viewing during loading, access to patient database information, image processing tools, linear and angular measurements, on-screen annotations, image printing, exporting DICOM images to other image formats, and many others, all within a pleasant, user-friendly interface delivered in a web browser by means of a Java application. This approach offers several advantages over most medical image viewers: ease of installation, integration with other systems through public and standardized interfaces, platform independence, and efficient manipulation and display of medical images, all with high performance.
NASA Astrophysics Data System (ADS)
Norajitra, Tobias; Meinzer, Hans-Peter; Maier-Hein, Klaus H.
2015-03-01
During image segmentation, 3D Statistical Shape Models (SSM) usually conduct a limited search for target landmarks within one-dimensional search profiles perpendicular to the model surface. In addition, landmark appearance is modeled only locally based on linear profiles and weak learners, altogether leading to segmentation errors from landmark ambiguities and limited search coverage. We present a new method for 3D SSM segmentation based on 3D Random Forest Regression Voting. For each surface landmark, a Random Regression Forest is trained that learns a 3D spatial displacement function between the corresponding reference landmark and a set of surrounding sample points, based on an infinite set of non-local randomized 3D Haar-like features. Landmark search is then conducted omni-directionally within 3D search spaces, where voxelwise forest predictions of landmark position contribute to a common voting map that reflects the overall position estimate. Segmentation experiments were conducted on a set of 45 CT volumes of the human liver, of which 40 images were randomly chosen for training and 5 for testing. Without parameter optimization, using a simple candidate selection and a single-resolution approach, excellent results were achieved, while faster convergence and better concavity segmentation were observed, altogether underlining the potential of our approach in terms of increased robustness from distinct landmark detection and from better search coverage.
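A schematic of the regression-voting step is sketched below, using scikit-learn's RandomForestRegressor in place of the authors' forest and random arrays standing in for the Haar-like feature responses; it is a simplified illustration, not the published implementation.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    # Stand-ins: feature responses at sample points and their 3D offsets
    # to the reference landmark (in a real setting, from annotated volumes).
    X_train = rng.normal(size=(500, 20))
    y_train = rng.normal(size=(500, 3))
    forest = RandomForestRegressor(n_estimators=50, random_state=0)
    forest.fit(X_train, y_train)

    def voting_map(points, features, shape):
        # Each sample point casts a vote at its predicted landmark position.
        votes = np.zeros(shape)
        targets = np.round(points + forest.predict(features)).astype(int)
        for x, y, z in targets:
            if 0 <= x < shape[0] and 0 <= y < shape[1] and 0 <= z < shape[2]:
                votes[x, y, z] += 1
        # The landmark estimate is the peak of the accumulated voting map.
        return votes, np.unravel_index(np.argmax(votes), shape)

    votes, landmark = voting_map(rng.uniform(0, 31, size=(200, 3)),
                                 rng.normal(size=(200, 20)), (32, 32, 32))
    print("estimated landmark voxel:", landmark)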
Automatic segmentation of brain MRIs and mapping neuroanatomy across the human lifespan
NASA Astrophysics Data System (ADS)
Keihaninejad, Shiva; Heckemann, Rolf A.; Gousias, Ioannis S.; Rueckert, Daniel; Aljabar, Paul; Hajnal, Joseph V.; Hammers, Alexander
2009-02-01
A robust model for the automatic segmentation of human brain images into anatomically defined regions across the human lifespan would be highly desirable, but such structural segmentations of brain MRI are challenging due to age-related changes. We have developed a new method, based on established algorithms for automatic segmentation of young adults' brains. We used prior information from 30 anatomical atlases, which had been manually segmented into 83 anatomical structures. Target MRIs came from 80 subjects (~12 individuals/decade) aged 20 to 90 years, with equal numbers of men and women, and data from two different scanners (1.5T, 3T), using the IXI database. Each of the adult atlases was registered to each target MR image. By using additional information from segmentation into tissue classes (GM, WM, and CSF) to initialise the warping based on label consistency similarity before feeding this into the subsequent normalised mutual information non-rigid registration, the registration became robust enough to accommodate atrophy and ventricular enlargement with age. The final segmentation was obtained by combining the 30 propagated atlases using decision fusion. Kernel smoothing was used for modelling the structural volume changes with aging. Example linear correlation coefficients with age were, for lateral ventricular volume, r_male = 0.76 and r_female = 0.58, and, for hippocampal volume, r_male = -0.6 and r_female = -0.4 (all p < 0.01).
Kinreich, Sivan; Intrator, Nathan; Hendler, Talma
2011-01-01
One of the greatest challenges involved in studying the brain mechanisms of fear is capturing the individual's unique instantaneous experience. Brain imaging studies to date commonly sacrifice valuable information regarding the individual's real-time conscious experience, especially when focusing on elucidating the amygdala's activity. Here, we assumed that by using a minimally intrusive cue along with a robust clustering approach to probe the amygdala, it would be possible to rate fear in real time and to derive the related network of activation. During functional magnetic resonance imaging scanning, healthy volunteers viewed two excerpts from horror movies and were periodically given an auditory cue to rate their instantaneous experience of "I'm scared." Using graph theory and community mathematical concepts, data-driven clustering of the fear-related functional cliques in the amygdala was performed, guided by the individually marked periods of heightened fear. Individually tailored functions derived from these amygdala activation cliques were subsequently applied as general linear model predictors in a whole-brain analysis to reveal the correlated networks. Our results suggest that by using a localized robust clustering approach, it is possible to probe activation in the right dorsal amygdala that is directly related to individual real-time emotional experience. Moreover, this fear-evoked amygdala activation revealed two opposing networks of co-activation and co-deactivation, which correspond to vigilance and rest-related circuits, respectively.
A robust and fast active contour model for image segmentation with intensity inhomogeneity
NASA Astrophysics Data System (ADS)
Ding, Keyan; Weng, Guirong
2018-04-01
In this paper, a robust and fast active contour model is proposed for image segmentation in the presence of intensity inhomogeneity. By introducing local image intensity fitting functions before the curve evolution, the proposed model can effectively segment images with intensity inhomogeneity, and the computational cost is low because the fitting functions need not be updated at each iteration. Experiments have shown that the proposed model has higher segmentation efficiency than some well-known active contour models based on local region fitting energy. In addition, the proposed model is robust to initialization, which allows the initial level set function to be a small constant function.
Effect of smoothing on robust chaos.
Deshpande, Amogh; Chen, Qingfei; Wang, Yan; Lai, Ying-Cheng; Do, Younghae
2010-08-01
In piecewise-smooth dynamical systems, situations can arise where the asymptotic attractors of the system in an open parameter interval are all chaotic (e.g., there are no periodic windows). This is the phenomenon of robust chaos. Previous works have established that robust chaos can occur through the mechanism of border-collision bifurcation, where the border is the phase-space region in which discontinuities in the derivatives of the dynamical equations occur. We investigate the effect of smoothing on robust chaos and find that periodic windows can arise when a small amount of smoothness is present. We introduce a smoothing parameter and find that the measure of the periodic windows in the parameter space scales linearly with this parameter, regardless of the details of the smoothing function. Numerical support and a heuristic theory are provided to establish the scaling relation. Experimental evidence of periodic windows in a supposedly piecewise-linear dynamical system, which has been implemented as an electronic circuit, is also provided.
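The effect can be reproduced numerically on a toy map: the piecewise-linear map x -> 1 - a|x| has Lyapunov exponent ln(a) > 0 and no periodic windows for 1 < a < 2, while replacing |x| by a smooth approximation can open windows. The map below is our own choice for illustration, not the circuit system studied in the paper.

    import numpy as np

    def smooth_abs(x, eps):
        # eps = 0 recovers |x| (the piecewise-linear, robustly chaotic case).
        return np.sqrt(x * x + eps * eps)

    def lyapunov(a, eps, n=4000, burn=500):
        x, acc = 0.1, 0.0
        for i in range(n):
            d = -a * x / max(smooth_abs(x, eps), 1e-12)  # map derivative
            if i >= burn:
                acc += np.log(abs(d) + 1e-300)
            x = 1.0 - a * smooth_abs(x, eps)             # iterate the map
        return acc / (n - burn)

    # Fraction of parameter space occupied by periodic windows (lambda < 0);
    # the paper's scaling result says this grows linearly with eps.
    for eps in (0.0, 0.01, 0.05):
        a_grid = np.linspace(1.2, 1.9, 200)
        frac = np.mean([lyapunov(a, eps) < 0 for a in a_grid])
        print(f"eps={eps}: window fraction = {frac:.3f}")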
Robust, nonlinear, high angle-of-attack control design for a supermaneuverable vehicle
NASA Technical Reports Server (NTRS)
Adams, Richard J.
1993-01-01
High angle-of-attack flight control laws are developed for a supermaneuverable fighter aircraft. The methods of dynamic inversion and structured singular value synthesis are combined into an approach which addresses both the nonlinearity and robustness problems of flight at extreme operating conditions. The primary purpose of the dynamic inversion control elements is to linearize the vehicle response across the flight envelope. Structured singular value synthesis is used to design a dynamic controller which provides robust tracking of pilot commands. The resulting control system achieves desired flying qualities and guarantees a large margin of robustness to uncertainties for high angle-of-attack flight conditions. The results of linear simulation and structured singular value stability analysis are presented to demonstrate satisfaction of the design criteria. High-fidelity nonlinear simulation results show that the combined dynamic inversion/structured singular value synthesis control law achieves a high level of performance in a realistic environment.
Time-domain least-squares migration using the Gaussian beam summation method
NASA Astrophysics Data System (ADS)
Yang, Jidong; Zhu, Hejun; McMechan, George; Yue, Yubo
2018-04-01
With a finite recording aperture, a limited source spectrum and unbalanced illumination, traditional imaging methods are insufficient to generate satisfactory depth profiles with high resolution and high amplitude fidelity. This is because traditional migration uses the adjoint operator of the forward modeling rather than the inverse operator. We propose a least-squares migration approach based on the time-domain Gaussian beam summation, which helps to balance subsurface illumination and improve image resolution. Based on the Born approximation for the isotropic acoustic wave equation, we derive a linear time-domain Gaussian beam modeling operator, which significantly reduces computational costs in comparison with the spectral method. Then, we formulate the corresponding adjoint Gaussian beam migration, as the gradient of an L2-norm waveform misfit function. An L1-norm regularization is introduced to the inversion to enhance the robustness of least-squares migration, and an approximated diagonal Hessian is used as a preconditioner to speed convergence. Synthetic and field data examples demonstrate that the proposed approach improves imaging resolution and amplitude fidelity in comparison with traditional Gaussian beam migration.
A Markov model for blind image separation by a mean-field EM algorithm.
Tonazzini, Anna; Bedini, Luigi; Salerno, Emanuele
2006-02-01
This paper deals with blind separation of images from noisy linear mixtures with unknown coefficients, formulated as a Bayesian estimation problem. This is a flexible framework, where any kind of prior knowledge about the source images and the mixing matrix can be accounted for. In particular, we describe local correlation within the individual images through the use of Markov random field (MRF) image models. These are naturally suited to express the joint pdf of the sources in a factorized form, so that the statistical independence requirements of most independent component analysis approaches to blind source separation are retained. Our model also includes edge variables to preserve intensity discontinuities. MRF models have proved very efficient in many visual reconstruction problems, such as blind image restoration, and allow separation and edge detection to be performed simultaneously. We propose an expectation-maximization algorithm with the mean field approximation to derive a procedure for estimating the mixing matrix, the sources, and their edge maps. We tested this procedure on both synthetic and real images, in the fully blind case (i.e., no prior information on mixing is exploited), and found that a source model accounting for local autocorrelation is able to increase robustness against noise, even space-variant noise. Furthermore, when the model closely fits the source characteristics, independence is no longer a strict requirement, and cross-correlated sources can be separated as well.
Image analysis-based modelling for flower number estimation in grapevine.
Millan, Borja; Aquino, Arturo; Diago, Maria P; Tardaguila, Javier
2017-02-01
Grapevine flower number per inflorescence provides valuable information that can be used for assessing yield. Considerable research has been conducted toward developing a technological tool for this purpose, based on image analysis and predictive modelling. However, the behaviour of variety-independent predictive models and their yield prediction capabilities on a wide set of varieties has never been evaluated. Inflorescence images from 11 grapevine Vitis vinifera L. varieties were acquired under field conditions. The flower number per inflorescence and the flower number visible in the images were calculated manually, and automatically using an image analysis algorithm. These datasets were used to calibrate and evaluate the behaviour of two linear (single-variable and multivariable) models and a nonlinear variety-independent model. As a result, the integrated tool composed of the image analysis algorithm and the nonlinear approach showed the highest performance and robustness (RPD = 8.32, RMSE = 37.1). The yield estimation capabilities of the flower number in conjunction with fruit set rate (R^2 = 0.79) and average berry weight (R^2 = 0.91) were also tested. This study proves the accuracy of flower number per inflorescence estimation using an image analysis algorithm and a nonlinear model that is generally applicable to different grapevine varieties. This provides a fast, non-invasive and reliable tool for estimation of yield at harvest. © 2016 Society of Chemical Industry.
Robust Linear Models for Cis-eQTL Analysis.
Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C
2015-01-01
Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly with respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
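In practice, such a robust fit can be obtained with an M-estimator. The sketch below uses statsmodels' RLM with a Huber loss on simulated allelic-dosage data with heavy-tailed (t-distributed) noise; the simulated effect sizes are arbitrary.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    genotype = rng.integers(0, 3, size=200).astype(float)  # dosage 0/1/2
    covariate = rng.normal(size=200)
    # Expression with heavy-tailed noise, mimicking outlier-prone data.
    expression = 0.4 * genotype + 0.2 * covariate + rng.standard_t(df=3, size=200)

    X = sm.add_constant(np.column_stack([genotype, covariate]))
    ols_fit = sm.OLS(expression, X).fit()
    rlm_fit = sm.RLM(expression, X, M=sm.robust.norms.HuberT()).fit()
    print("OLS genotype beta:   ", round(ols_fit.params[1], 3))
    print("Robust genotype beta:", round(rlm_fit.params[1], 3))

Under Gaussian noise the two estimates agree closely; under heavy tails the robust fit typically has lower variance, which is the mechanism behind the reduction in type II errors discussed above.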
Programmable Quantitative DNA Nanothermometers.
Gareau, David; Desrosiers, Arnaud; Vallée-Bélisle, Alexis
2016-07-13
Developing molecules, switches, probes or nanomaterials that are able to respond to specific temperature changes should prove of utility for several applications in nanotechnology. Here, we describe bioinspired strategies to design DNA thermoswitches with programmable linear response ranges that can provide either a precise ultrasensitive response over a desired, small temperature interval (±0.05 °C) or an extended linear response over a wide temperature range (e.g., from 25 to 90 °C). Using structural modifications or inexpensive DNA stabilizers, we show that we can tune the transition midpoints of DNA thermometers from 30 to 85 °C. Using multimeric switch architectures, we are able to create ultrasensitive thermometers that display large quantitative fluorescence gains within a small temperature variation (e.g., >700% over 10 °C). Using a combination of thermoswitches of different stabilities or a mix of stabilizers of various strengths, we can create extended thermometers that respond linearly over a temperature range of up to 50 °C. Here, we demonstrate the reversibility, robustness, and efficiency of these programmable DNA thermometers by monitoring temperature changes inside individual wells during polymerase chain reactions. We discuss the potential applications of these programmable DNA thermoswitches in various nanotechnology fields, including cell imaging, nanofluidics, nanomedicine, nanoelectronics, nanomaterials, and synthetic biology.
Superpixel-Augmented Endmember Detection for Hyperspectral Images
NASA Technical Reports Server (NTRS)
Thompson, David R.; Castano, Rebecca; Gilmore, Martha
2011-01-01
Superpixels are homogeneous image regions composed of several contiguous pixels. They are produced by shattering the image into contiguous, homogeneous regions that each cover between 20 and 100 image pixels. The segmentation aims for a many-to-one mapping from superpixels to image features; each image feature can contain several superpixels, but each superpixel occupies no more than one image feature. This conservative segmentation is relatively easy to automate in a robust fashion. Superpixel processing is related to the more general idea of improving hyperspectral analysis through spatial constraints, which can recognize subtle features at or below the level of noise by exploiting the fact that their spectral signatures are found in neighboring pixels. Recent work has explored spatial constraints for endmember extraction, showing significant advantages over techniques that ignore pixels' relative positions. Methods such as AMEE (automated morphological endmember extraction) express spatial influence using fixed isometric relationships, such as a local square window or Euclidean distance in pixel coordinates. In other words, two pixels' covariance is based on their spatial proximity but is independent of their absolute location in the scene. These isometric spatial constraints are most appropriate when spectral variation is smooth and constant over the image. Superpixels are simple to implement, efficient to compute, and empirically effective. They can be used as a preprocessing step with any desired endmember extraction technique. Superpixels also have a solid theoretical basis in the hyperspectral linear mixing model, making them a principled approach for improving endmember extraction. Unlike existing approaches, superpixels can accommodate non-isometric covariance between image pixels (characteristic of discrete image features separated by step discontinuities). These kinds of image features are common in natural scenes. Analysts can substitute superpixels for image pixels during endmember analysis, leveraging the spatial contiguity of scene features to enhance subtle spectral features. Superpixels define populations of image pixels that are independent samples from each image feature, permitting robust estimation of spectral properties and reducing measurement noise in proportion to the area of the superpixel. This permits improved endmember extraction and enables automated search for novel and constituent minerals in very noisy hyperspectral images. This innovation begins with a graph-based segmentation based on the work of Felzenszwalb et al., but expands their approach to the hyperspectral image domain with a Euclidean distance metric. Then, the mean spectrum of each segment is computed, and the resulting data cloud is used as input to sequential maximum angle convex cone (SMACC) endmember extraction.
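A condensed sketch of this preprocessing pipeline is given below, using scikit-image's stock Felzenszwalb segmentation on a three-band composite as a stand-in for the full spectral-distance generalization described above; the synthetic cube and the segmentation parameters are illustrative assumptions.

    import numpy as np
    from skimage.segmentation import felzenszwalb

    rng = np.random.default_rng(2)
    cube = rng.random((64, 64, 30))          # stand-in (rows, cols, bands) cube

    # Segment a 3-band composite; the cited approach instead applies the
    # graph-based merging criterion to full spectral Euclidean distances.
    composite = cube[:, :, [0, 14, 29]]
    labels = felzenszwalb(composite, scale=50, sigma=0.5, min_size=20)

    # Replace pixels by their superpixel mean spectrum before endmember
    # extraction (e.g., SMACC); noise shrinks with superpixel area.
    mean_spectra = np.stack([cube[labels == k].mean(axis=0)
                             for k in np.unique(labels)])
    print(mean_spectra.shape)                # (n_superpixels, n_bands)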
NASA Astrophysics Data System (ADS)
Maalek, R.; Lichti, D. D.; Ruwanpura, J.
2015-08-01
The application of terrestrial laser scanners (TLSs) on construction sites for automating construction progress monitoring and controlling structural dimension compliance is growing markedly. However, current research in construction management relies on the planned building information model (BIM) to assign the accumulated point clouds to their corresponding structural elements, which may not be reliable in cases where the dimensions of the as-built structure differ from those of the planned model and/or the planned model is not available with sufficient detail. In addition, outliers exist in construction site datasets due to data artefacts caused by moving objects, occlusions, and dust. In order to overcome the aforementioned limitations, a novel method for robust classification and segmentation of planar and linear features is proposed to reduce the effects of outliers present in the LiDAR data collected from construction sites. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a robust clustering method. A method is also proposed to robustly extract the points belonging to the flat-slab floors and/or ceilings without performing the aforementioned stages, in order to preserve computational efficiency. The applicability of the proposed method is investigated in two scenarios, namely a laboratory with 30 million points and an actual construction site with over 150 million points. The results obtained from the two experiments validate the suitability of the proposed method for robust segmentation of planar and linear features in contaminated datasets, such as those collected from construction sites.
Image super-resolution via sparse representation.
Yang, Jianchao; Wright, John; Huang, Thomas S; Ma, Yi
2010-11-01
This paper presents a new approach to single-image super-resolution based on sparse signal representation. Research on image statistics suggests that image patches can be well represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low-resolution and high-resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low-resolution image patch can be applied with the high-resolution image patch dictionary to generate a high-resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large number of image patch pairs, reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle super-resolution with noisy inputs in a more unified framework.
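The patch-level recovery step can be sketched with orthogonal matching pursuit as the sparse coder; the paper's own formulation differs in detail, and the random dictionaries below stand in for jointly trained ones purely so the shapes line up.

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(3)
    n_atoms, low_dim, high_dim = 512, 36, 81   # e.g. 6x6 LR and 9x9 HR patches
    D_low = rng.normal(size=(low_dim, n_atoms))
    D_low /= np.linalg.norm(D_low, axis=0)     # unit-norm atoms
    D_high = rng.normal(size=(high_dim, n_atoms))

    def super_resolve_patch(lr_patch_vec, n_nonzero=5):
        # Sparse-code the LR patch over the LR dictionary ...
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                        fit_intercept=False)
        omp.fit(D_low, lr_patch_vec)
        # ... then apply the same coefficients to the HR dictionary.
        return D_high @ omp.coef_

    hr_patch = super_resolve_patch(rng.normal(size=low_dim))
    print(hr_patch.shape)                      # (81,) -> reshape to 9x9

Joint dictionary training is what licenses reusing the LR coefficients on the HR dictionary; with unrelated dictionaries, as here, the output is meaningful only as a shape/flow illustration.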
Closed-loop stability of linear quadratic optimal systems in the presence of modeling errors
NASA Technical Reports Server (NTRS)
Toda, M.; Patel, R.; Sridhar, B.
1976-01-01
The well-known stabilizing property of linear quadratic state feedback design is utilized to evaluate the robustness of a linear quadratic feedback design in the presence of modeling errors. Two general conditions are obtained for allowable modeling errors such that the resulting closed-loop system remains stable. One of these conditions is applied to obtain two more particular conditions which are readily applicable to practical situations where a designer has information on the bounds of modeling errors. Relations are established between the allowable parameter uncertainty and the weighting matrices of the quadratic performance index, thereby enabling the designer to select appropriate weighting matrices to attain a robust feedback design.
Robust Nonlinear Feedback Control of Aircraft Propulsion Systems
NASA Technical Reports Server (NTRS)
Garrard, William L.; Balas, Gary J.; Litt, Jonathan (Technical Monitor)
2001-01-01
This is the final report on the research performed under NASA Glenn grant NASA/NAG-3-1975 concerning feedback control of the Pratt & Whitney (PW) STF 952, a twin-spool, mixed-flow, afterburning turbofan engine. The research focused on the design of linear and gain-scheduled, multivariable inner-loop controllers for the PW turbofan engine using H-infinity and linear parameter-varying (LPV) control techniques. The nonlinear turbofan engine simulation was provided by PW within the NASA Rocket Engine Transient Simulator (ROCETS) simulation software environment. ROCETS was used to generate linearized models of the turbofan engine for control design and analysis, as well as the simulation environment for evaluating the performance and robustness of the controllers. Comparisons are made between the H-infinity and LPV controllers and the baseline multivariable controller developed by Pratt & Whitney engineers that is included in the ROCETS simulation. Simulation results indicate that the H-infinity and LPV techniques effectively achieve desired response characteristics with minimal cross-coupling between commanded values and are very robust to unmodeled dynamics and sensor noise.
Khan, Arif Ul Maula; Torelli, Angelo; Wolf, Ivo; Gretz, Norbert
2018-05-08
In biological assays, automated cell/colony segmentation and counting is imperative owing to huge image sets. Problems arising from drifting image acquisition conditions, background noise, and high variation in colony features across experiments demand a user-friendly, adaptive, and robust image processing/analysis method. We present AutoCellSeg (based on MATLAB), which implements a supervised, automatic, and robust image segmentation method. AutoCellSeg utilizes multi-thresholding aided by a feedback-based watershed algorithm that takes segmentation plausibility criteria into account. It is usable in different operation modes and intuitively enables the user to select object features interactively for supervised image segmentation. It allows the user to correct results with a graphical interface. This publicly available tool outperforms tools like OpenCFU and CellProfiler in terms of accuracy and provides many additional useful features for end-users.
Pérez-Beteta, Julián; Martínez-González, Alicia; Martino, Juan; Velasquez, Carlos; Arana, Estanislao; Pérez-García, Víctor M.
2017-01-01
Purpose Textural measures have been widely explored as imaging biomarkers in cancer. However, their robustness under dynamic range and spatial resolution changes in brain 3D magnetic resonance images (MRI) has not been assessed. The aim of this work was to study potential variations of textural measures due to changes in MRI protocols. Materials and methods Twenty patients harboring glioblastoma with pretreatment 3D T1-weighted MRIs were included in the study. Four different spatial resolution combinations and three dynamic ranges were studied for each patient. Sixteen three-dimensional textural heterogeneity measures were computed for each patient and configuration, including co-occurrence matrix (CM) features and run-length matrix (RLM) features. The coefficient of variation was used to assess the robustness of the measures in two series of experiments corresponding to (i) changing the dynamic range and (ii) changing the matrix size. Results No textural measures were robust under dynamic range changes. Entropy was the only textural feature robust under spatial resolution changes (coefficient of variation under 10% in all cases). Conclusion Textural measures of three-dimensional brain tumor images are robust neither under dynamic range nor under matrix size changes. Standards should be harmonized to use textural features as imaging biomarkers in radiomics-based studies. The implications of this work go beyond the specific tumor type studied here and pose the need for standardization in textural feature calculation of oncological images. PMID:28586353
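The robustness methodology is straightforward to reproduce in outline: compute a feature at several dynamic ranges (quantization levels) and report its coefficient of variation. The sketch below does this for co-occurrence entropy using scikit-image; the quantization levels and GLCM parameters are illustrative, not those of the study.

    import numpy as np
    from skimage.feature import graycomatrix

    def glcm_entropy(img_uint, levels):
        # Entropy of the normalized gray-level co-occurrence matrix.
        glcm = graycomatrix(img_uint, distances=[1], angles=[0],
                            levels=levels, symmetric=True, normed=True)
        p = glcm[:, :, 0, 0]
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def coefficient_of_variation(values):
        values = np.asarray(values, dtype=float)
        return 100.0 * values.std(ddof=1) / values.mean()

    # Quantize one image to three dynamic ranges and compare the feature.
    rng = np.random.default_rng(4)
    img = rng.random((128, 128))
    feats = []
    for levels in (16, 32, 64):
        q = np.floor(img * levels).astype(np.uint8)
        q[q == levels] = levels - 1          # clamp the top bin
        feats.append(glcm_entropy(q, levels))
    print("CV(%) across dynamic ranges:",
          round(coefficient_of_variation(feats), 1))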
H.264/AVC digital fingerprinting based on spatio-temporal just noticeable distortion
NASA Astrophysics Data System (ADS)
Ait Saadi, Karima; Bouridane, Ahmed; Guessoum, Abderrezak
2014-01-01
This paper presents a robust adaptive embedding scheme using a modified spatio-temporal just noticeable distortion (JND) model, designed for tracing the distribution of H.264/AVC video content and protecting it from unauthorized redistribution. The embedding process is performed during coding in selected Intra 4x4 macroblocks within I-frames. The method uses a spread-spectrum technique to obtain robustness against collusion attacks, and the JND model to dynamically adjust the embedding strength and control the energy of the embedded fingerprints so as to ensure their imperceptibility. Linear and nonlinear collusion attacks are performed to show the robustness of the proposed technique against collusion attacks while leaving visual quality unchanged.
Robust Control of Uncertain Systems via Dissipative LQG-Type Controllers
NASA Technical Reports Server (NTRS)
Joshi, Suresh M.
2000-01-01
Optimal controller design is addressed for a class of linear, time-invariant systems which are dissipative with respect to a quadratic power function. The system matrices are assumed to be affine functions of uncertain parameters confined to a convex polytopic region in the parameter space. For such systems, a method is developed for designing a controller which is dissipative with respect to a given power function, and is simultaneously optimal in the linear-quadratic-Gaussian (LQG) sense. The resulting controller provides robust stability as well as optimal performance. Three important special cases, namely, passive, norm-bounded, and sector-bounded controllers, which are also LQG-optimal, are presented. The results give new methods for robust controller design in the presence of parametric uncertainties.
Kallehauge, Jesper F; Sourbron, Steven; Irving, Benjamin; Tanderup, Kari; Schnabel, Julia A; Chappell, Michael A
2017-06-01
Fitting tracer kinetic models using linear methods is much faster than using their nonlinear counterparts, although this often comes at the expense of reduced accuracy and precision. The aim of this study was to derive and compare the performance of the linear compartmental tissue uptake (CTU) model with its nonlinear version with respect to their percentage error and precision. The linear and nonlinear CTU models were initially compared using simulations with varying noise and temporal sampling. Subsequently, the clinical applicability of the linear model was demonstrated on 14 patients with locally advanced cervical cancer examined with dynamic contrast-enhanced magnetic resonance imaging. Simulations revealed equal percentage error and precision when noise was within clinically achievable ranges (contrast-to-noise ratio >10). The linear method was significantly faster than the nonlinear method, with a minimum speedup of around 230 across all tested sampling rates. Clinical analysis revealed that parameters estimated using the linear and nonlinear CTU models were highly correlated (ρ ≥ 0.95). The linear CTU model is computationally more efficient and more stable against temporal downsampling, whereas the nonlinear method is more robust to variations in noise. The two methods may be used interchangeably within clinically achievable ranges of temporal sampling and noise. Magn Reson Med 77:2414-2423, 2017. © 2016 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of the International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
Image Hashes as Templates for Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Janik, Tadeusz; Jarman, Kenneth D.; Robinson, Sean M.
2012-07-17
Imaging systems can provide measurements that confidently assess characteristics of nuclear weapons and dismantled weapon components, and such assessment will be needed in future verification for arms control. Yet imaging is often viewed as too intrusive, raising concern about the ability to protect sensitive information. In particular, the prospect of using image-based templates for verifying the presence or absence of a warhead, or of the declared configuration of fissile material in storage, may be rejected out of hand as being too vulnerable to violation of information barrier (IB) principles. Development of a rigorous approach for generating and comparing reduced-information templates from images, and for assessing the security, sensitivity, and robustness of verification using such templates, is needed to address these concerns. We discuss our efforts to develop such a rigorous approach based on a combination of image-feature extraction and encryption, utilizing hash functions to confirm proffered declarations, providing strong classified data security while maintaining high confidence for verification. The proposed work is focused on developing secure, robust, tamper-sensitive, and automatic techniques that may enable the comparison of non-sensitive hashed image data outside an IB. It is rooted in research on so-called perceptual hash functions for image comparison, at the interface of signal/image processing, pattern recognition, cryptography, and information theory. Such perceptual or robust image hashing (which, strictly speaking, is not truly cryptographic hashing) has extensive application in content authentication and information retrieval, database search, and security assurance. Applying and extending the principles of perceptual hashing to imaging for arms control, we propose techniques that are sensitive to altering, forging, and tampering of the imaged object, yet robust and tolerant to content-preserving image distortions and noise. Ensuring that the information contained in the hashed image data (available outside the IB) cannot be used to extract sensitive information about the imaged object is of primary concern. Thus the techniques are characterized by high unpredictability to guarantee security. We will present an assessment of the performance of our techniques with respect to security, sensitivity, and robustness on the basis of a methodical and mathematically precise framework.
Constructing the L2-Graph for Robust Subspace Learning and Subspace Clustering.
Peng, Xi; Yu, Zhiding; Yi, Zhang; Tang, Huajin
2017-04-01
Under the framework of graph-based learning, the key to robust subspace clustering and subspace learning is to obtain a good similarity graph that eliminates the effects of errors and retains only connections between data points from the same subspace (i.e., intrasubspace data points). Recent works achieve good performance by modeling errors in their objective functions to remove the errors from the inputs. However, these approaches face the limitations that the structure of the errors must be known a priori and that a complex convex problem must be solved. In this paper, we present a novel method to eliminate the effects of the errors from the projection space (representation) rather than from the input space. We first prove that l1-, l2-, l∞-, and nuclear-norm-based linear projection spaces share the property of intrasubspace projection dominance, i.e., the coefficients over intrasubspace data points are larger than those over intersubspace data points. Based on this property, we introduce a method to construct a sparse similarity graph, called the L2-graph. Subspace clustering and subspace learning algorithms are developed upon the L2-graph. We conduct comprehensive experiments on subspace learning, image clustering, and motion segmentation, and consider several quantitative benchmarks: classification/clustering accuracy, normalized mutual information, and running time. Results show that the L2-graph outperforms many state-of-the-art methods in our experiments, including the L1-graph, low-rank representation (LRR), latent LRR, least squares regression, sparse subspace clustering, and locally linear representation.
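The core coding step of an L2-graph has a closed-form ridge-regression solution, which is what makes it cheaper than sparse or low-rank alternatives that require iterative convex solvers. A small sketch is given below; it is our own simplification and omits the paper's subsequent pruning of small coefficients via the projection-dominance property.

    import numpy as np

    def l2_graph(X, lam=0.1):
        """Affinity matrix from data columns X (d x n): each point is coded
        over the remaining points by ridge regression (closed form)."""
        d, n = X.shape
        W = np.zeros((n, n))
        for i in range(n):
            idx = [j for j in range(n) if j != i]
            D = X[:, idx]
            c = np.linalg.solve(D.T @ D + lam * np.eye(n - 1), D.T @ X[:, i])
            W[i, idx] = np.abs(c)
        return np.maximum(W, W.T)   # symmetrize for spectral clustering

    rng = np.random.default_rng(5)
    A = l2_graph(rng.normal(size=(20, 60)))
    print(A.shape)                  # (60, 60) similarity graph

The resulting graph can be fed to any standard spectral clustering routine for the subspace clustering experiments described above.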
Imaging ultrasonic dispersive guided wave energy in long bones using linear radon transform.
Tran, Tho N H T; Nguyen, Kim-Cuong T; Sacchi, Mauricio D; Le, Lawrence H
2014-11-01
Multichannel analysis of dispersive ultrasonic energy requires a reliable mapping of the data from the time-distance (t-x) domain to the frequency-wavenumber (f-k) or frequency-phase velocity (f-c) domain. The mapping is usually performed with the classic 2-D Fourier transform (FT), with a subsequent substitution and interpolation via c = 2πf/k. The extracted dispersion trajectories of the guided modes lack the resolution in the transformed plane to discriminate wave modes. The resolving power associated with the FT is closely linked to the aperture of the recorded data. Here, we present a linear Radon transform (RT) to image the dispersive energies of the recorded ultrasound wave fields. The RT is posed as an inverse problem, which allows implementation of regularization strategies to enhance the focusing power. We choose a Cauchy regularization for the high-resolution RT. Three forms of the Radon transform (adjoint, damped least-squares, and high-resolution) are described and compared with respect to robustness using simulated and cervine bone data. The RT also depends on the data aperture, but not as severely as the FT. With the RT, the resolution of the dispersion panel could be improved by up to around 300% over that of the FT. Among the Radon solutions, the high-resolution RT delineated the guided wave energy with much better imaging resolution (at least 110%) than the other two forms. The Radon operator can also accommodate unevenly spaced records. The results of the study suggest that the high-resolution RT is a valuable imaging tool for extracting dispersive guided wave energies under limited aperture. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
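For orientation, the damped least-squares form can be written per temporal frequency as m = (L^H L + mu I)^{-1} L^H d, where L maps the tau-p (intercept-slowness) model to the recorded offsets; the high-resolution variant replaces the fixed damping with a Cauchy-derived, iteratively reweighted term. A compact sketch (variable names and parameter values our own):

    import numpy as np

    def linear_radon_dls(data, dt, offsets, slowness, mu=0.1):
        """Damped least-squares linear Radon transform (t-x -> tau-p).

        data: (nt, nx) wavefield; offsets in m; slowness in s/m.
        Note offsets need not be evenly spaced.
        """
        nt, nx = data.shape
        D = np.fft.rfft(data, axis=0)                  # (nfreq, nx)
        freqs = np.fft.rfftfreq(nt, dt)
        M = np.zeros((len(freqs), len(slowness)), dtype=complex)
        for k, f in enumerate(freqs):
            L = np.exp(-2j * np.pi * f * np.outer(offsets, slowness))
            lhs = L.conj().T @ L + mu * np.eye(len(slowness))
            M[k] = np.linalg.solve(lhs, L.conj().T @ D[k])
        return np.fft.irfft(M, n=nt, axis=0)           # (nt, n_slowness)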
NASA Astrophysics Data System (ADS)
Peng, Yahui; Jiang, Yulei; Antic, Tatjana; Giger, Maryellen L.; Eggener, Scott; Oto, Aytekin
2013-02-01
The purpose of this study was to investigate T2-weighted magnetic resonance (MR) image texture features and diffusion-weighted (DW) MR image features for distinguishing prostate cancer (PCa) from normal tissue. We collected two image datasets: 23 PCa patients (25 PCa and 23 normal-tissue regions of interest [ROIs]) imaged with Philips MR scanners, and 30 PCa patients (41 PCa and 26 normal-tissue ROIs) imaged with GE MR scanners. A radiologist drew ROIs manually via a consensus histology-MR correlation conference with a pathologist. A number of T2-weighted texture features and apparent diffusion coefficient (ADC) features were investigated, and linear discriminant analysis (LDA) was used to combine selected strong image features. The area under the receiver operating characteristic (ROC) curve (AUC) was used to characterize feature effectiveness in distinguishing PCa from normal-tissue ROIs. Of the features studied, ADC 10th percentile, ADC average, and T2-weighted sum average yielded AUC values (± standard error) of 0.95 ± 0.03, 0.94 ± 0.03, and 0.85 ± 0.05 on the Philips images, and 0.91 ± 0.04, 0.89 ± 0.04, and 0.70 ± 0.06 on the GE images, respectively. The three-feature combination yielded AUC values of 0.94 ± 0.03 and 0.89 ± 0.04 on the Philips and GE images, respectively. ADC 10th percentile, ADC average, and T2-weighted sum average are effective in distinguishing PCa from normal tissue, and appear robust across images acquired from Philips and GE MR scanners.
NASA Astrophysics Data System (ADS)
Deng, Zhipeng; Lei, Lin; Zhou, Shilin
2015-10-01
Automatic image registration is a vital yet challenging task, particularly for non-rigid deformation images, which are more complicated and common in remote sensing, such as distorted UAV (unmanned aerial vehicle) images or scanned images distorted by platform flutter. Traditional non-rigid image registration methods rely on correctly matched corresponding landmarks, which usually requires artificial markers. It is a rather challenging task to locate the accurate positions of the points and to obtain accurate corresponding point sets. In this paper, we propose an automatic non-rigid image registration algorithm that consists of three main steps. First, we introduce an automatic feature point extraction method based on non-linear scale space and a uniform distribution strategy to extract points that are uniformly distributed along the edges of the image. Next, we propose a hybrid point matching algorithm using the DaLI (Deformation and Light Invariant) descriptor and a local affine-invariant geometric constraint based on a triangulation constructed by the K-nearest-neighbor algorithm. Finally, based on the accurate corresponding point sets, the two images are registered using a thin plate spline (TPS) model. Our method is demonstrated by three deliberately designed experiments. The first two experiments evaluate the distribution of the point sets and the correct matching rate on synthetic and real data, respectively. The last experiment is conducted on non-rigidly deformed remote sensing images, and the three experimental results demonstrate the accuracy, robustness, and efficiency of the proposed algorithm compared with other traditional methods.
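The final TPS warping step can be sketched with SciPy's radial basis function interpolator, using one thin-plate spline per output coordinate; the synthetic control points below stand in for the matched point sets that the earlier matching stage would produce.

    import numpy as np
    from scipy.interpolate import Rbf

    rng = np.random.default_rng(8)
    src = rng.uniform(0, 256, size=(40, 2))              # points in image A
    dst = src + rng.normal(scale=2.0, size=src.shape)    # matches in image B

    # One thin-plate spline per output coordinate.
    tps_x = Rbf(src[:, 0], src[:, 1], dst[:, 0], function='thin_plate')
    tps_y = Rbf(src[:, 0], src[:, 1], dst[:, 1], function='thin_plate')

    def warp_points(pts):
        # Map coordinates from image A into image B's frame.
        return np.column_stack([tps_x(pts[:, 0], pts[:, 1]),
                                tps_y(pts[:, 0], pts[:, 1])])

    print(np.abs(warp_points(src) - dst).max())  # ~0 at the control points

In a full pipeline the same mapping would be evaluated on a dense pixel grid to resample one image into the other's geometry.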
Mechanically induced intercellular calcium communication in confined endothelial structures.
Junkin, Michael; Lu, Yi; Long, Juexuan; Deymier, Pierre A; Hoying, James B; Wong, Pak Kin
2013-03-01
Calcium signaling in the diverse vascular structures is regulated by a wide range of mechanical and biochemical factors to maintain essential physiological functions of the vasculature. To properly transmit information, the intercellular calcium communication mechanism must be robust against various conditions in the cellular microenvironment. Using plasma lithography geometric confinement, we investigate mechanically induced calcium wave propagation in organized networks of human umbilical vein endothelial cells. Endothelial cell networks with confined architectures were stimulated at the single-cell level, including with capacitive force probes. Calcium wave propagation in the network was observed using fluorescence calcium imaging. We show that mechanically induced calcium signaling in the endothelial networks is dynamically regulated across a wide range of probing forces and repeated stimulations. The calcium wave is able to propagate consistently in various dimensions, from monolayers to individual cell chains, and in different topologies, from linear patterns to cell junctions. Our results reveal that calcium signaling provides a robust mechanism for cell-cell communication in networks of endothelial cells despite the diversity of microenvironmental inputs and the complexity of vascular structures. Copyright © 2012 Elsevier Ltd. All rights reserved.
A robust color image watermarking algorithm against rotation attacks
NASA Astrophysics Data System (ADS)
Han, Shao-cheng; Yang, Jin-feng; Wang, Rui; Jia, Gui-min
2018-01-01
A robust digital watermarking algorithm based on the quaternion wavelet transform (QWT) and discrete cosine transform (DCT) is proposed for copyright protection of color images. The luminance component Y of a host color image in YIQ space is decomposed by the QWT, and the coefficients of the four low-frequency subbands are then transformed by the DCT. An original binary watermark, scrambled by an Arnold map and an iterated sine chaotic system, is embedded into the mid-frequency DCT coefficients of the subbands. In order to improve the performance of the proposed algorithm against rotation attacks, a rotation detection scheme is implemented before watermark extraction. The experimental results demonstrate that the proposed watermarking scheme shows strong robustness not only against common image processing attacks but also against arbitrary rotation attacks.
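The Arnold map scrambling mentioned above permutes the pixel positions of a square watermark; a plain sketch is given below. The iteration count acts as part of the key, and because the map is periodic, descrambling simply applies the remaining iterations of the period.

    import numpy as np

    def arnold_scramble(img, iterations):
        """Arnold cat map scrambling of a square (N x N) watermark image."""
        n = img.shape[0]
        out = img.copy()
        for _ in range(iterations):
            scrambled = np.empty_like(out)
            for x in range(n):
                for y in range(n):
                    # (x, y) -> (x + y, x + 2y) mod n
                    scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
            out = scrambled
        return out

    wm = (np.random.default_rng(6).random((32, 32)) > 0.5).astype(np.uint8)
    scrambled = arnold_scramble(wm, 10)  # spatially decorrelated watermark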
Robust Short-Lag Spatial Coherence Imaging.
Nair, Arun Asokan; Tran, Trac Duy; Bell, Muyinatu A Lediju
2018-03-01
Short-lag spatial coherence (SLSC) imaging displays the spatial coherence between backscattered ultrasound echoes instead of their signal amplitudes and is more robust to noise and clutter artifacts when compared with traditional delay-and-sum (DAS) B-mode imaging. However, SLSC imaging does not consider the content of images formed with different lags, and thus does not exploit the differences in tissue texture at each short-lag value. Our proposed method improves SLSC imaging by weighting the addition of lag values (i.e., M-weighting) and by applying robust principal component analysis (RPCA) to search for a low-dimensional subspace for projecting coherence images created with different lag values. The RPCA-based projections are considered to be denoised versions of the originals that are then weighted and added across lags to yield a final robust SLSC (R-SLSC) image. Our approach was tested on simulation, phantom, and in vivo liver data. Relative to DAS B-mode images, the mean contrast, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) improvements with R-SLSC images are 21.22 dB, 2.54, and 2.36, respectively, when averaged over simulated, phantom, and in vivo data and over all lags considered, which corresponds to mean improvements of 96.4%, 121.2%, and 120.5%, respectively. When compared with SLSC images, the corresponding mean improvements with R-SLSC images were 7.38 dB, 1.52, and 1.30, respectively (i.e., mean improvements of 14.5%, 50.5%, and 43.2%, respectively). Results show great promise for smoothing out the tissue texture of SLSC images and enhancing anechoic or hypoechoic target visibility at higher lag values, which could be useful in clinical tasks such as breast cyst visualization, liver vessel tracking, and obese patient imaging.
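For reference, the underlying SLSC value at a pixel is a sum over short lags of the normalized correlation between element pairs; R-SLSC then weights and RPCA-denoises the per-lag images before this summation. A bare-bones sketch of the coherence sum (variable layout our own):

    import numpy as np

    def slsc_pixel(channel_data, max_lag):
        """SLSC value for one pixel.

        channel_data: (n_elements, n_depth_samples) delayed channel signals
        within a small axial kernel centered on the pixel.
        """
        n = channel_data.shape[0]
        value = 0.0
        for m in range(1, max_lag + 1):
            # Normalized correlation between element pairs at lag m.
            corrs = []
            for i in range(n - m):
                a, b = channel_data[i], channel_data[i + m]
                denom = np.sqrt(np.sum(a * a) * np.sum(b * b))
                if denom > 0:
                    corrs.append(np.sum(a * b) / denom)
            value += np.mean(corrs)
        return value

    rng = np.random.default_rng(9)
    channels = rng.normal(size=(64, 16))   # 64 elements, 16-sample kernel
    print(slsc_pixel(channels, max_lag=12))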
Robust and fast-converging level set method for side-scan sonar image segmentation
NASA Astrophysics Data System (ADS)
Liu, Yan; Li, Qingwu; Huo, Guanying
2017-11-01
A robust and fast-converging level set method is proposed for side-scan sonar (SSS) image segmentation. First, the noise in each sonar image is removed using an adaptive nonlinear complex diffusion filter. Second, k-means clustering is used to obtain an initial presegmentation from the denoised image, and the distance maps of the initial contours are then reinitialized to guarantee the accuracy of the numerical calculation used in the level set evolution. Finally, a satisfactory segmentation is achieved using a robust variational level set model, where the evolution control parameters are generated from the presegmentation. The proposed method is successfully applied to both synthetic images with speckle noise and real SSS images. Experimental results show that the proposed method requires far fewer iterations and is therefore much faster than the fuzzy local information c-means clustering method, the level set method using a gamma observation model, and the enhanced region-scalable fitting method. Moreover, the proposed method usually obtains more accurate segmentation results than the other methods.
NASA Astrophysics Data System (ADS)
Pu, Zhiqiang; Tan, Xiangmin; Fan, Guoliang; Yi, Jianqiang
2014-08-01
Flexible air-breathing hypersonic vehicles feature significant uncertainties that pose huge challenges to robust controller design. In this paper, four major categories of uncertainty are analyzed: uncertainties associated with flexible effects, aerodynamic parameter variations, external environmental disturbances, and control-oriented modeling errors. A uniform nonlinear uncertainty model is explored for the first three, which lumps all uncertainties together and is consequently beneficial for controller synthesis. The fourth uncertainty is additionally considered in the stability analysis. Based on these analyses, the starting point of the control design is to decompose the vehicle dynamics into five functional subsystems. Then a robust trajectory linearization control (TLC) scheme consisting of five robust subsystem controllers is proposed. In each subsystem controller, TLC is combined with the extended state observer (ESO) technique for uncertainty compensation. The stability of the overall closed-loop system with the four aforementioned uncertainties and additional singular perturbations is analyzed. In particular, the stability of the nonlinear ESO is also discussed from a Liénard system perspective. Finally, simulations demonstrate the strong control performance and uncertainty rejection ability of the robust scheme.
NASA Astrophysics Data System (ADS)
Pourbabaee, Bahareh; Meskin, Nader; Khorasani, Khashayar
2016-08-01
In this paper, a novel robust sensor fault detection and isolation (FDI) strategy using the multiple-model (MM) approach is proposed that remains robust with respect to both time-varying parameter uncertainties and process and measurement noise in all channels. The scheme is composed of robust Kalman filters (RKF) constructed for multiple piecewise linear (PWL) models obtained at various operating points of an uncertain nonlinear system. The parameter uncertainty is modeled by a time-varying norm-bounded admissible structure that affects all the PWL state-space matrices. The robust Kalman filter gain matrices are designed by solving two algebraic Riccati equations (AREs) that are expressed as two linear matrix inequality (LMI) feasibility conditions. The proposed multiple-RKF-based FDI scheme is simulated for a single-spool gas turbine engine to diagnose various sensor faults despite the presence of parameter uncertainties and process and measurement noise. Our comparative studies confirm the superiority of the proposed FDI method over methods available in the literature.
Robust output tracking control of a laboratory helicopter for automatic landing
NASA Astrophysics Data System (ADS)
Liu, Hao; Lu, Geng; Zhong, Yisheng
2014-11-01
In this paper, the robust output tracking control problem of a laboratory helicopter for automatic landing in high seas is investigated. The motion of the helicopter is required to synchronise with that of an oscillating platform, e.g. the deck of a vessel subject to wave-induced motions. A robust linear time-invariant output feedback controller consisting of a nominal controller and a robust compensator is designed. The robust compensator is introduced to restrain the influences of parametric uncertainties, nonlinearities and external disturbances. It is shown that robust stability and the robust tracking property can be achieved simultaneously. Experimental results on the laboratory helicopter demonstrate the effectiveness of the designed control approach.
A Robust Image Watermarking in the Joint Time-Frequency Domain
NASA Astrophysics Data System (ADS)
Öztürk, Mahmut; Akan, Aydın; Çekiç, Yalçın
2010-12-01
With the rapid development of computers and internet applications, copyright protection of multimedia data has become an important problem. Watermarking techniques have been proposed as a solution to copyright protection of digital media files. In this paper, a new, robust, and high-capacity watermarking method based on spatiofrequency (SF) representation is presented. We use the discrete evolutionary transform (DET), calculated by the Gabor expansion, to represent an image in the joint SF domain. The watermark is embedded onto selected coefficients in the joint SF domain. Hence, by combining the advantages of spatial- and spectral-domain watermarking methods, a robust, invisible, secure, and high-capacity watermarking method is obtained. A correlation-based detector is also proposed to detect and extract any possible watermarks on an image. The proposed watermarking method was tested on commonly used test images under different signal processing attacks such as additive noise, Wiener and median filtering, JPEG compression, rotation, and cropping. Simulation results show that our method is robust against all of these attacks.
A single-pixel X-ray imager concept and its application to secure radiographic inspections
NASA Astrophysics Data System (ADS)
Gilbert, Andrew J.; Miller, Brian W.; Robinson, Sean M.; White, Timothy A.; Pitts, William Karl; Jarman, Kenneth D.; Seifert, Allen
2017-07-01
Imaging technology is generally considered too invasive for arms control inspections due to the concern that it cannot properly secure sensitive features of the inspected item. However, this same sensitive information, which could include direct information on the form and function of the items under inspection, could be used for robust arms control inspections. The single-pixel X-ray imager (SPXI) is introduced as a method to make such inspections, capturing the salient spatial information of an object in a secure manner while never forming an actual image. The method is built on the theory of compressive sensing and the single pixel optical camera. The performance of the system is quantified using simulated inspections of simple objects. Measures of the robustness and security of the method are introduced and used to determine how robust and secure such an inspection would be. In particular, it is found that an inspection with low noise (<1%) and high undersampling (>256×) exhibits high robustness and security.
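The single-pixel measurement model admits a compact numerical illustration. The sketch below is not the SPXI system itself: it simulates an inspection of a hypothetical 16×16 object with random binary masks at 2× undersampling and recovers the object by basis pursuit (l1 minimization) with cvxpy.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Hypothetical "inspected object": a single dense square, sparse in the
# pixel basis, so basis pursuit is a reasonable recovery model.
obj = np.zeros((16, 16))
obj[5:11, 5:11] = 1.0

# Random binary masks play the role of structured illuminations; each mask
# yields one scalar detector reading (the "single pixel").
n_pix = obj.size
A = rng.integers(0, 2, size=(n_pix // 2, n_pix)).astype(float)  # 2x undersampled
y = A @ obj.ravel()

# Basis pursuit: the sparsest image consistent with the (noiseless) readings.
x = cp.Variable(n_pix)
cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y]).solve()
recon = x.value.reshape(obj.shape)
print("max reconstruction error:", np.abs(recon - obj).max())
```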
Robust mosaics of close-range high-resolution images
NASA Astrophysics Data System (ADS)
Song, Ran; Szymanski, John E.
2008-03-01
This paper presents a robust algorithm, relying only on the information contained within the captured images, for the construction of massive composite mosaic images from close-range, high-resolution originals, such as those obtained when imaging architectural and heritage structures. We first apply the Harris algorithm to extract a selection of corners and then employ both the intensity correlation and the spatial correlation between corresponding corners to match them. We then estimate the eight-parameter projective transformation matrix using a genetic algorithm. Lastly, image fusion using a weighted blending function together with intensity compensation produces an effective seamless mosaic image.
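A rough sketch of such a corner-based mosaicking pipeline is shown below using OpenCV; note that pyramidal Lucas-Kanade tracking stands in for the paper's intensity/spatial correlation matching, and a RANSAC homography stands in for the genetic-algorithm estimate.

```python
import cv2
import numpy as np

def mosaic_pair(img1, img2):
    """Two-image mosaic sketch: Harris-scored corners, correlation-style
    matching via pyramidal LK tracking, and a RANSAC estimate of the
    eight-parameter projective transform. Inputs are assumed BGR."""
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    p1 = cv2.goodFeaturesToTrack(g1, maxCorners=2000, qualityLevel=0.01,
                                 minDistance=5, useHarrisDetector=True)
    p2, st, _ = cv2.calcOpticalFlowPyrLK(g1, g2, p1, None)
    ok = st.ravel() == 1
    # Projective transform mapping img2 into img1's frame
    H, _ = cv2.findHomography(p2[ok], p1[ok], cv2.RANSAC, 3.0)
    h, w = img1.shape[:2]
    mosaic = cv2.warpPerspective(img2, H, (2 * w, h))
    mosaic[:h, :w] = img1  # hard seam; the paper uses weighted blending
    return mosaic
```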
Lane Marking Detection and Reconstruction with Line-Scan Imaging Data.
Li, Lin; Luo, Wenting; Wang, Kelvin C P
2018-05-20
Lane marking detection and localization are crucial for autonomous driving and lane-based pavement surveys. Numerous studies have been done to detect and locate lane markings for advanced driver assistance systems, in which image data are usually captured by vision-based cameras. However, a limited number of studies have identified lane markings using high-resolution laser images for road condition evaluation. In this study, the laser images are acquired with a digital highway data vehicle (DHDV). Subsequently, a novel methodology is presented for automated lane marking identification and reconstruction, implemented in four phases: (1) binarization of the laser images with a new threshold method (a multi-box segmentation based threshold method); (2) determination of candidate lane markings with closing operations and a marching squares algorithm; (3) identification of true lane markings by eliminating false positives (FPs) using a linear support vector machine; and (4) reconstruction of damaged and dashed lane marking segments into a continuous lane marking based on geometric features such as adjacent lane marking location and lane width. Finally, a case study is given to validate the effectiveness of the methodology. The findings indicate that the new strategy is robust in image binarization and lane marking localization. This study would be beneficial in lane-based pavement condition evaluation such as lane-based rutting measurement and crack classification.
De-noising of 3D multiple-coil MR images using modified LMMSE estimator.
Yaghoobi, Nima; Hasanzadeh, Reza P R
2018-06-20
De-noising is a crucial topic in Magnetic Resonance Imaging (MRI); it aims to suppress noise while preserving Magnetic Resonance (MR) image information and details. Nowadays, multiple-coil MRI systems are preferred to single-coil ones due to their acceleration of the imaging process. Because the noise models of single-coil and multiple-coil MRI systems differ, de-noising methods adapted to single-coil systems do not work appropriately with multiple-coil ones. The noise in single-coil MRI systems is Rician, while in multiple-coil systems (if no subsampling occurs in k-space or GRAPPA reconstruction is performed in the coils) it obeys a noncentral chi (nc-χ) distribution. In this paper, a new filtering method based on the Linear Minimum Mean Square Error (LMMSE) estimator is proposed for multiple-coil MR images corrupted by nc-χ noise. In the presented method, to obtain an optimal similarity selection of voxels, the Bayesian Mean Square Error (BMSE) criterion is used and proved for the nc-χ noise model, and a nonlocal voxel selection methodology is proposed for the nc-χ distribution. The results illustrate robust and accurate performance compared with related state-of-the-art methods, on both ideal nc-χ images and GRAPPA-reconstructed ones.
NASA Astrophysics Data System (ADS)
King, Sharon V.; Yuan, Shuai; Preza, Chrysanthe
2018-03-01
Effectiveness of extended depth of field microscopy (EDFM) implementation with wavefront encoding methods is reduced by depth-induced spherical aberration (SA) due to reliance of this approach on a defined point spread function (PSF). Evaluation of the engineered PSF's robustness to SA, when a specific phase mask design is used, is presented in terms of the final restored image quality. Synthetic intermediate images were generated using selected generalized cubic and cubic phase mask designs. Experimental intermediate images were acquired using the same phase mask designs projected from a liquid crystal spatial light modulator. Intermediate images were restored using the penalized space-invariant expectation maximization and the regularized linear least squares algorithms. In the presence of depth-induced SA, systems characterized by radially symmetric PSFs, coupled with model-based computational methods, achieve microscope imaging performance with fewer deviations in structural fidelity (e.g., artifacts) in simulation and experiment and 50% more accurate positioning of 1-μm beads at 10-μm depth in simulation than those with radially asymmetric PSFs. Despite a drop in the signal-to-noise ratio after processing, EDFM is shown to achieve the conventional resolution limit when a model-based reconstruction algorithm with appropriate regularization is used. These trends are also found in images of fixed fluorescently labeled brine shrimp, not adjacent to the coverslip, and fluorescently labeled mitochondria in live cells.
Sun and aureole spectrometer for airborne measurements to derive aerosol optical properties.
Asseng, Hagen; Ruhtz, Thomas; Fischer, Jürgen
2004-04-01
We have designed an airborne spectrometer system for the simultaneous measurement of the direct Sun irradiance and aureole radiance. The instrument is based on diffraction grating spectrometers with linear image sensors. It is robust, lightweight, compact, and reliable, characteristics that are important for airborne applications. The multispectral radiation measurements are used to derive optical properties of tropospheric aerosols. We extract the altitude dependence of the aerosol volume scattering function and of the aerosol optical depth by using flight patterns with descents and ascents ranging from the surface level to the top of the boundary layer. The extinction coefficient and the product of single scattering albedo and phase function of separate layers can be derived from the airborne measurements.
Distributed Sensing and Processing for Multi-Camera Networks
NASA Astrophysics Data System (ADS)
Sankaranarayanan, Aswin C.; Chellappa, Rama; Baraniuk, Richard G.
Sensor networks with large numbers of cameras are becoming increasingly prevalent in a wide range of applications, including video conferencing, motion capture, surveillance, and clinical diagnostics. In this chapter, we identify some of the fundamental challenges in designing such systems: robust statistical inference, computational efficiency, and opportunistic and parsimonious sensing. We show that the geometric constraints induced by the imaging process are extremely useful for identifying and designing optimal estimators for object detection and tracking tasks. We also derive pipelined and parallelized implementations of popular tools used for statistical inference in non-linear systems, of which multi-camera systems are examples. Finally, we highlight the use of the emerging theory of compressive sensing in reducing the amount of data sensed and communicated by a camera network.
NASA Technical Reports Server (NTRS)
Yedavalli, R. K.
1992-01-01
The problem of analyzing and designing controllers for linear systems subject to real parameter uncertainty is considered. An elegant, unified theory for robust eigenvalue placement is presented for a class of D-regions defined by algebraic inequalities, extending the nominal matrix root clustering theory of Gutman and Jury (1981) to linear uncertain systems. The author presents explicit conditions for matrix root clustering for different D-regions and establishes the relationship between the eigenvalue migration range and the parameter range. The bounds are all obtained by one-shot computation in the matrix domain and do not need any frequency sweeping or parameter gridding. The method uses generalized Lyapunov theory to obtain the bounds.
NASA Astrophysics Data System (ADS)
Azizi, S.; Torres, L. A. B.; Palhares, R. M.
2018-01-01
The regional robust stabilisation by means of linear time-invariant state feedback control for a class of uncertain MIMO nonlinear systems with parametric uncertainties and control input saturation is investigated. The nonlinear systems are described in a differential algebraic representation and the regional stability is handled considering the largest ellipsoidal domain-of-attraction (DOA) inside a given polytopic region in the state space. A novel set of sufficient Linear Matrix Inequality (LMI) conditions with new auxiliary decision variables are developed aiming to design less conservative linear state feedback controllers with corresponding larger DOAs, by considering the polytopic description of the saturated inputs. A few examples are presented showing favourable comparisons with recently published similar control design methodologies.
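A much simpler relative of such LMI conditions can be checked numerically with cvxpy: the sketch below searches for a common quadratic Lyapunov function over the vertices of a polytopic family of system matrices. The vertex matrices are made-up examples, and the saturation and domain-of-attraction machinery of the paper is not reproduced.

```python
import numpy as np
import cvxpy as cp

# Two made-up vertices of a polytopic family of closed-loop "A" matrices.
A1 = np.array([[0.0, 1.0], [-2.0, -1.0]])
A2 = np.array([[0.0, 1.0], [-3.0, -0.5]])

n = 2
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
cons = [P >> eps * np.eye(n)]                    # P positive definite
for A in (A1, A2):
    # Lyapunov inequality at each vertex: A'P + PA negative definite
    cons.append(A.T @ P + P @ A << -eps * np.eye(n))

prob = cp.Problem(cp.Minimize(0), cons)          # pure feasibility problem
prob.solve(solver=cp.SCS)
print(prob.status)   # "optimal" => a common quadratic Lyapunov function exists
print(P.value)
```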
Image-adaptive and robust digital wavelet-domain watermarking for images
NASA Astrophysics Data System (ADS)
Zhao, Yi; Zhang, Liping
2018-03-01
We propose a new frequency-domain, wavelet-based watermarking technique. The key idea of our scheme is twofold: a multi-tier representation of the image and odd-even quantization for watermark embedding and extraction. Because many complementary watermarks need to be hidden, the designed watermark image is image-adaptive. The meaningful, complementary watermark images are embedded into the original (host) image by odd-even quantization of coefficients selected from the detail wavelet coefficients of the original image whose magnitudes exceed their corresponding Just Noticeable Difference (JND) thresholds. Tests show good robustness against well-known attacks such as noise addition, image compression, median filtering, clipping, and geometric transforms. Further research may improve performance by refining the JND thresholds.
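A minimal sketch of odd-even quantization embedding in the wavelet detail coefficients is given below using PyWavelets; selecting the largest-magnitude coefficients stands in for the paper's JND thresholding, and the quantization step `delta` is a placeholder.

```python
import numpy as np
import pywt

def embed_bit(c, bit, delta=8.0):
    """Odd-even quantization: move c to the nearest quantizer level whose
    index parity equals the watermark bit (0 = even, 1 = odd)."""
    q = int(np.round(c / delta))
    if q % 2 != bit:
        q += 1 if c - q * delta >= 0 else -1   # step toward c to limit distortion
    return q * delta

def extract_bit(c, delta=8.0):
    return int(np.round(c / delta)) % 2

def embed(img, bits, delta=8.0):
    """Embed a bit sequence into the largest horizontal detail coefficients."""
    LL, (LH, HL, HH) = pywt.dwt2(img.astype(float), 'haar')
    flat = LH.ravel()
    idx = np.argsort(np.abs(flat))[::-1][:len(bits)]  # stand-in for JND selection
    for i, b in zip(idx, bits):
        flat[i] = embed_bit(flat[i], int(b), delta)
    marked = pywt.idwt2((LL, (flat.reshape(LH.shape), HL, HH)), 'haar')
    return marked, idx          # idx (and delta) act as the extraction key
```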
NASA Astrophysics Data System (ADS)
Rotenberg, David J.
Artifacts caused by head motion are a substantial source of error in fMRI that limits its use in neuroscience research and clinical settings. Real-time scan-plane correction by optical tracking has been shown to correct slice misalignment and non-linear spin-history artifacts; however, residual artifacts due to dynamic magnetic field non-uniformity may remain in the data. A recently developed correction technique, PLACE, can correct for absolute geometric distortion using the complex image data from two EPI images with slightly shifted k-space trajectories. We present a correction approach that integrates PLACE into a real-time scan-plane update system driven by optical tracking, applied to a tissue-equivalent phantom undergoing complex motion and to an fMRI finger-tapping experiment with overt head motion to induce dynamic field non-uniformity. Experiments suggest that including volume-by-volume geometric distortion correction by PLACE can suppress dynamic geometric distortion artifacts in a phantom and in vivo, and provide more robust activation maps.
An Asymmetric Image Encryption Based on Phase Truncated Hybrid Transform
NASA Astrophysics Data System (ADS)
Khurana, Mehak; Singh, Hukum
2017-09-01
To enhance the security of the system and to protect it from attackers, this paper proposes a new asymmetric cryptosystem based on a hybrid approach of Phase Truncated Fourier and Discrete Cosine Transform (PTFDCT), which adds nonlinearity by including cube and cube-root operations in the encryption and decryption paths, respectively. In this cryptosystem, random phase masks are used as encryption keys, phase masks generated after the cube operation in the encryption process are reserved as decryption keys, and a cube-root operation is required to decrypt the image in the decryption process. The cube and cube-root operations introduced in the encryption and decryption paths make the system resistant to standard attacks. The robustness of the proposed cryptosystem has been analysed and verified for various parameters by simulation in MATLAB 7.9.0 (R2008a). The experimental results highlight the effectiveness and suitability of the proposed cryptosystem and show that the system is secure.
NASA Astrophysics Data System (ADS)
Suzuki, H.; Mizuguchi, R.; Matsuhiro, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Ohmatsu, H.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.; Moriyama, N.
2015-03-01
Computed tomography has been used to assess structural abnormalities associated with emphysema. It is important to develop a robust CT-based imaging biomarker that allows quantification of emphysema progression at an early stage. This paper presents the effect of smoking on emphysema progression using annual changes of low attenuation volume (LAV) for each lung lobe, acquired from low-dose CT images in longitudinal screening for lung cancer. The percentage of LAV (LAV%) was measured after applying a CT value threshold method and small-noise reduction. Progression of emphysema was assessed by statistical analysis of the annual changes, represented by linear regression of LAV%. The method was applied to 215 participants in lung cancer CT screening over five years (18 nonsmokers, 85 past smokers, and 112 current smokers). The results showed that LAV% is useful for identifying current smokers with rapid progression of emphysema (0.2%/year, p<0.05). This paper demonstrates the effectiveness of the proposed method for diagnosis and prognosis of early emphysema in CT screening for lung cancer.
NASA Astrophysics Data System (ADS)
Álvarez, Charlens; Martínez, Fabio; Romero, Eduardo
2015-01-01
Pelvic magnetic resonance (MR) images are used in prostate cancer radiotherapy (RT) as part of radiation planning. Modern protocols require manual delineation, a tedious and variable activity that may take about 20 minutes per patient, even for trained experts. That considerable time is an important workflow burden in most radiological services. Automatic or semi-automatic methods might improve efficiency by decreasing measurement times while preserving the required accuracy. This work presents a fully automatic atlas-based segmentation strategy that selects the most similar templates for a new MRI using a robust multi-scale SURF analysis. A new segmentation is then achieved by a linear combination of the selected templates, which are first non-rigidly registered to the new image. The proposed method produces reliable segmentations, obtaining an average Dice coefficient of 79% compared with expert manual segmentation, under a leave-one-out scheme on the training database.
Detection and correction for EPID and gantry sag during arc delivery using cine EPID imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rowshanfarzad, Pejman; Sabet, Mahsheed; O'Connor, Daryl J.
2012-02-15
Purpose: Electronic portal imaging devices (EPIDs) have been studied and used for pretreatment and in-vivo dosimetry applications for many years. The application of EPIDs for dosimetry in arc treatments requires accurate characterization of the mechanical sag of the EPID and gantry during rotation. Several studies have investigated the effects of gravity on the sag of these systems, but each has limitations. In this study, an easy experimental setup and an accurate algorithm are introduced to characterize and correct for the effect of EPID and gantry sag during arc delivery. Methods: Three metallic ball bearings were used as markers in the beam: two fixed to the gantry head and the third positioned at the isocenter. EPID images were acquired during a 360 deg. gantry rotation in cine imaging mode. The markers were tracked in EPID images, and a robust in-house MATLAB code was used to analyse the images and find the EPID sag in three directions, as well as the EPID + gantry sag, by comparison with the reference gantry-zero image. The algorithm results were then tested against independent methods. The method was applied to compare the effect in clockwise and counter-clockwise gantry rotations and at different source-to-detector distances (SDDs). The results were monitored for one linear accelerator over a course of 15 months, and six other linear accelerators from two treatment centers were also investigated using this method. The generalized shift patterns were derived from the data and used in an image registration algorithm to correct for the effect of the mechanical sag in the system. The Gamma evaluation (3%, 3 mm) technique was used to investigate the improvement in alignment of cine EPID images of a fixed field, by comparing both individual images and the sum of images in a series with the reference gantry-zero image. Results: The mechanical sag during gantry rotation was dependent on the gantry angle and was larger in the in-plane direction, although the patterns were not identical for the various linear accelerators. The reproducibility of measurements was within 0.2 mm over a period of 15 months. The direction of gantry rotation and the SDD did not affect the results by more than 0.3 mm. Results of independent tests agreed with the algorithm within the accuracy of the measurement tools. When comparing summed images, the percentage of points with Gamma index <1 increased from 85.4% to 94.1% after correcting for the EPID sag, and to 99.3% after correcting for the gantry + EPID sag. Conclusions: The measurement method and algorithms introduced in this study use cine images and are highly accurate, simple, fast, and reproducible. The method tests all gantry angles and provides a suitable automatic analysis and correction tool to improve EPID dosimetry and perform comprehensive linac QA for arc treatments.
The comparison between SVD-DCT and SVD-DWT digital image watermarking
NASA Astrophysics Data System (ADS)
Wira Handito, Kurniawan; Fauzi, Zulfikar; Aminy Ma’ruf, Firda; Widyaningrum, Tanti; Muslim Lhaksmana, Kemas
2018-03-01
With the internet, anyone can publish their creations as digital data simply and inexpensively, and those data are easily accessed by everyone. However, a problem arises when someone else claims the creation as their own property or modifies part of it. This makes copyright protection necessary; one approach is watermarking of digital images. Applying watermarking to digital data, especially images, enables near-total invisibility when the watermark is inserted into a carrier image, ideally without degrading the carrier image's quality and with the inserted image unaffected by attacks. In this paper, watermarking is implemented on digital images using Singular Value Decomposition (SVD) based on the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT), with the expectation of good watermarking performance. A trade-off occurs between the invisibility and robustness of image watermarking. In the embedding process, the watermarked image has good quality for scaling factors < 0.1. The quality of image watermarking at decomposition level 3 is better than at levels 2 and 1. Embedding the watermark in low-frequency subbands is robust to Gaussian blur, rescaling, and JPEG compression, whereas embedding in high-frequency subbands is robust to Gaussian noise.
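One common SVD-DCT embedding variant can be sketched as follows (an assumption-laden illustration, not necessarily the paper's exact scheme): the singular values of the host's DCT coefficients are perturbed additively by the flattened watermark, with the scaling factor kept below 0.1 as the abstract suggests. Note this sketch is non-blind: the original singular values are retained as a key.

```python
import numpy as np
from scipy.fft import dctn, idctn

def svd_dct_embed(host, wm, alpha=0.05):
    """SVD-DCT watermarking sketch: perturb the singular values of the
    host's 2-D DCT coefficients by the (flattened) watermark, scaled by
    alpha < 0.1. wm must supply at least min(host.shape) values."""
    C = dctn(host.astype(float), norm='ortho')
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    s_marked = s + alpha * wm[:s.size]
    marked = idctn((U * s_marked) @ Vt, norm='ortho')
    return marked, s              # original singular values kept as a key

def svd_dct_extract(marked, s_orig, alpha=0.05):
    C = dctn(marked.astype(float), norm='ortho')
    s_marked = np.linalg.svd(C, compute_uv=False)
    return (s_marked - s_orig) / alpha
```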
Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P
2015-03-01
Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities), one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line, as well as by planar trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers that are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method, which smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical.
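The leveling step can be approximated with a global robust trend fit; the sketch below uses iteratively reweighted least squares with the Tukey biweight, a simpler stand-in for the paper's robust local regression.

```python
import numpy as np

def robust_level(img, n_iter=10, c=4.685):
    """Remove a planar trend fit by iteratively reweighted least squares
    with the Tukey biweight, so edges and outlier pixels barely pull on
    the fit. (The paper uses a robust *local* regression; this global
    plane fit is a simpler illustration of the same idea.)"""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.stack([np.ones(img.size), xx.ravel() / w, yy.ravel() / h], axis=1)
    z = img.ravel().astype(float)
    wgt = np.ones_like(z)
    for _ in range(n_iter):
        sw = np.sqrt(wgt)
        coef, *_ = np.linalg.lstsq(A * sw[:, None], z * sw, rcond=None)
        r = z - A @ coef
        scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust MAD
        u = np.clip(np.abs(r) / (c * scale), 0.0, 1.0)
        wgt = (1.0 - u**2) ** 2      # Tukey biweight: outliers get ~zero weight
    return (z - A @ coef).reshape(h, w)
```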
Robust control of the DC-DC boost converter based on the uncertainty and disturbance estimator
NASA Astrophysics Data System (ADS)
Oucheriah, Said
2017-11-01
In this paper, a robust non-linear controller based on the uncertainty and disturbance estimator (UDE) scheme is developed and implemented for output voltage regulation of the DC-DC boost converter. System uncertainties, external disturbances and unknown non-linear dynamics are lumped into a single signal that is accurately estimated using a low-pass filter, and their effects are cancelled by the controller. This methodology forms the basis of the UDE-based controller. A simple procedure is also developed that systematically determines the parameters of the controller to meet given specifications. Using simulation, the effectiveness of the proposed controller is compared against sliding-mode control (SMC). Experimental tests also show that the proposed controller is robust to system uncertainties and large input and load perturbations.
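To make the UDE mechanism concrete, the sketch below replaces the boost converter with a scalar first-order plant (an assumption made purely for illustration) and implements a realizable first-order-filter estimate of the lumped uncertainty.

```python
import numpy as np

# Stand-in plant x' = a*x + b*u + w(t), where w lumps uncertainty,
# disturbance, and unmodeled dynamics (the boost converter is replaced
# here by a first-order system to isolate the UDE mechanism).
a, b = -1.0, 2.0
a_m, b_m = -5.0, 5.0        # reference model: x_m' = a_m*x_m + b_m*r
T = 0.02                    # UDE filter time constant, G_f(s) = 1/(T*s + 1)
dt, r = 1e-4, 1.0
x = x_m = z = 0.0
for k in range(int(0.5 / dt)):
    t = k * dt
    w = 0.8 * np.sin(20 * t) + 0.5       # "unknown" lumped disturbance
    w_hat = (x - z) / T                  # filtered estimate: w_hat = G_f * w
    u = ((a_m - a) * x + b_m * r - w_hat) / b   # cancel w_hat, impose model
    dx = a * x + b * u + w               # true plant
    dz = a * x + b * u + w_hat           # filter state: d(x - z)/dt = w - w_hat
    x, z = x + dt * dx, z + dt * dz
    x_m += dt * (a_m * x_m + b_m * r)
print(f"x = {x:.3f}  (reference x_m = {x_m:.3f})")
```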
Reduced conservatism in stability robustness bounds by state transformation
NASA Technical Reports Server (NTRS)
Yedavalli, R. K.; Liang, Z.
1986-01-01
This note addresses the issue of 'conservatism' in the time domain stability robustness bounds obtained by the Lyapunov approach. A state transformation is employed to improve the upper bounds on the linear time-varying perturbation of an asymptotically stable linear time-invariant system for robust stability. This improvement is due to the variation of the conservatism of the Lyapunov approach with respect to the basis of the vector space in which the Lyapunov function is constructed. Improved bounds are obtained, using a transformation, on elemental and vector norms of perturbations (i.e., structured perturbations) as well as on a matrix norm of perturbations (i.e., unstructured perturbations). For the case of a diagonal transformation, an algorithm is proposed to find the 'optimal' transformation. Several examples are presented to illustrate the proposed analysis.
Large-scale linear programs in planning and prediction.
DOT National Transportation Integrated Search
2017-06-01
Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...
Efficient robust reconstruction of dynamic PET activity maps with radioisotope decay constraints.
Gao, Fei; Liu, Huafeng; Shi, Pengcheng
2010-01-01
Dynamic PET imaging performs a sequence of data acquisitions to provide visualization and quantification of physiological changes in specific tissues and organs. The reconstruction of activity maps is generally the first step in dynamic PET. State-space H∞ approaches have proven to be robust for PET image reconstruction; however, temporal constraints are not considered during the reconstruction process. In addition, state-space strategies for PET image reconstruction have been computationally prohibitive for practical use because of the need for matrix inversion. In this paper, we present a minimax formulation of the dynamic PET imaging problem in which a radioisotope decay model serves as a physics-based temporal constraint on the photon counts. Furthermore, a robust steady-state H∞ filter is developed to significantly improve computational efficiency with minimal loss of accuracy. Experiments are conducted on Monte Carlo simulated image sequences for quantitative analysis and validation.
A novel approach for fire recognition using hybrid features and manifold learning-based classifier
NASA Astrophysics Data System (ADS)
Zhu, Rong; Hu, Xueying; Tang, Jiajun; Hu, Sheng
2018-03-01
Although image/video-based fire recognition has received growing attention, efficient and robust fire detection strategies are rarely explored. In this paper, we propose a novel approach to automatically identify flame or smoke regions in an image. It is composed of three stages: (1) block processing divides an image into several nonoverlapping blocks, which are identified as suspicious fire regions or not using two color models and a color histogram-based similarity matching method in the HSV color space; (2) because flame and smoke regions have distinctive visual characteristics, two kinds of image features are extracted for fire recognition, where local features are obtained using the Scale Invariant Feature Transform (SIFT) descriptor and the Bags of Keypoints (BOK) technique, and texture features are extracted using Gray Level Co-occurrence Matrices (GLCM) and Wavelet-based Analysis (WA); and (3) a manifold learning-based classifier is constructed from two image manifolds, designed via an improved Globular Neighborhood Locally Linear Embedding (GNLLE) algorithm, with the extracted hybrid features used as input feature vectors to train the classifier, which then classifies images as fire or non-fire. Experiments and comparative analyses with four other approaches are conducted on the collected image sets. The results show that the proposed approach is superior in detecting fire, achieving high recognition accuracy and a low error rate.
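The GLCM texture stage can be sketched as follows with scikit-image (version 0.19 or later, where the functions are named graycomatrix/graycoprops); the SIFT/BOK, wavelet, and manifold-learning stages are not reproduced here.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(block):
    """GLCM texture descriptors for a uint8 candidate region.
    Distances/angles are illustrative choices, not the paper's settings."""
    glcm = graycomatrix(block, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ['contrast', 'homogeneity', 'energy', 'correlation']
    feats = [graycoprops(glcm, p).mean() for p in props]
    p_nz = glcm[glcm > 0]
    feats.append(float(-np.sum(p_nz * np.log2(p_nz))))   # GLCM entropy
    return np.array(feats)
```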
Chaos and Robustness in a Single Family of Genetic Oscillatory Networks
Fu, Daniel; Tan, Patrick; Kuznetsov, Alexey; Molkov, Yaroslav I.
2014-01-01
Genetic oscillatory networks can be mathematically modeled with delay differential equations (DDEs). Interpreting genetic networks with DDEs gives a more intuitive understanding from a biological standpoint. However, it presents a problem mathematically, for DDEs are by construction infinite-dimensional and thus cannot be analyzed using methods common for systems of ordinary differential equations (ODEs). In our study, we address this problem by developing a method for reducing infinite-dimensional DDEs to two- and three-dimensional systems of ODEs. We find that the three-dimensional reductions provide qualitative improvements over the two-dimensional reductions. We find that the reducibility of a DDE corresponds to its robustness. For non-robust DDEs that exhibit high-dimensional dynamics, we calculate analytic dimension lines to predict the dependence of the DDEs' correlation dimension on parameters. From these lines, we deduce that the correlation dimension of non-robust DDEs grows linearly with the delay. On the other hand, for robust DDEs, we find that the period of oscillation grows linearly with delay. We find that DDEs with exclusively negative feedback are robust, whereas DDEs with feedback that changes its sign are not robust. We find that non-saturable degradation damps oscillations and narrows the range of parameter values for which oscillations exist. Finally, we deduce that natural genetic oscillators with highly regular periods likely have solely negative feedback. PMID:24667178
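A toy example of the robust, purely-negative-feedback regime can be simulated directly; the model and parameters below are illustrative, not taken from the paper, but they exhibit the reported linear growth of the period with delay.

```python
import numpy as np

def simulate_dde(beta=4.0, n=4, gamma=1.0, tau=2.0, dt=1e-3, t_end=80.0):
    """Euler integration of a delayed negative-feedback gene oscillator:
       x'(t) = beta / (1 + x(t - tau)**n) - gamma * x(t)."""
    lag = int(round(tau / dt))
    x = np.empty(int(t_end / dt) + lag)
    x[:lag] = 0.5                                  # constant history
    for k in range(lag, x.size):
        x[k] = x[k-1] + dt * (beta / (1.0 + x[k-1-lag]**n) - gamma * x[k-1])
    return x[lag:]

# Period grows with the delay, consistent with the robust (purely negative
# feedback) regime described in the abstract.
for tau in (1.0, 2.0, 3.0):
    x = simulate_dde(tau=tau)
    s = x[40000:] - x[40000:].mean()               # drop the transient
    up = np.where((s[:-1] < 0) & (s[1:] >= 0))[0]  # upward zero crossings
    print(f"tau = {tau}: period ~ {np.diff(up).mean() * 1e-3:.2f}")
```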
A feature-preserving hair removal algorithm for dermoscopy images.
Abbas, Qaisar; Garcia, Irene Fondón; Emre Celebi, M; Ahmad, Waqar
2013-02-01
Accurate segmentation and repair of hair-occluded information in dermoscopy images are challenging tasks for computer-aided detection (CAD) of melanoma. Many hair-restoration algorithms have been developed, but most fail to identify hairs accurately, and their removal techniques are slow and disturb the lesion's pattern. In this article, a novel hair-restoration algorithm is presented that preserves skin lesion features such as color and texture and can segment both dark and light hairs. Our algorithm is based on three major steps: rough hairs are segmented using matched filtering with the first derivative of Gaussian (MF-FDOG) and thresholding, which generates strong responses for both dark and light hairs; the hair mask is refined by morphological edge-based techniques; and the hairs are then repaired through a fast-marching inpainting method. Diagnostic accuracy (DA) and texture-quality measure (TQM) metrics, based on dermatologist-drawn manual hair masks used as ground truth, are utilized to evaluate the performance of the system. The hair-restoration algorithm is tested on 100 dermoscopy images. Comparisons are made with (i) linear interpolation, inpainting by (ii) non-linear partial differential equations (PDE), and (iii) exemplar-based repairing techniques. Among the different hair detection and removal techniques, our proposed algorithm obtained the highest values of DA (93.3%) and TQM (90%). The experimental results indicate that the proposed algorithm is highly accurate and robust, and able to restore hair pixels without damaging the lesion texture. The method is fully automatic and can be easily integrated into a CAD system.
Cai, Bin; Dolly, Steven; Kamal, Gregory; Yaddanapudi, Sridhar; Sun, Baozhou; Goddu, S Murty; Mutic, Sasa; Li, Hua
2018-04-28
To investigate the feasibility of using the kV flat panel detector on a linac for consistency evaluations of kV X-ray generator performance. An in-house designed aluminum (Al) array phantom with six 9×9 cm² square regions of various thicknesses was proposed and used in this study. Through XML script-driven image acquisition, kV images with various acquisition settings were obtained using the kV flat panel detector. Utilizing pre-established baseline curves, the consistency of X-ray tube output characteristics, including tube voltage accuracy, exposure accuracy and exposure linearity, was assessed through image quality assessment metrics including ROI mean intensity, ROI standard deviation (SD) and noise power spectra (NPS). The robustness of this method was tested on two linacs over a three-month period. With the proposed method, tube voltage accuracy can be verified through a consistency check with a 2% tolerance and 2 kVp intervals for forty different kVp settings. The exposure accuracy can be tested with a 4% consistency tolerance for three mAs settings over forty kVp settings. The exposure linearity tested with three mAs settings achieved a coefficient of variation (CV) of 0.1. We propose a novel approach that uses the kV flat panel detector available on the linac for X-ray generator tests; it eliminates the inefficiencies and variability associated with third-party QA detectors while enabling an automated process.
Linear variable narrow bandpass optical filters in the far infrared (Conference Presentation)
NASA Astrophysics Data System (ADS)
Rahmlow, Thomas D.
2017-06-01
We are currently developing linear variable filters (LVFs) with very high wavelength gradients. In the visible, these filters have a wavelength gradient of 50 to 100 nm/mm. In the infrared, the wavelength gradient covers the range of 500 to 900 microns/mm. Filter designs include band pass, long pass and ultra-high-performance anti-reflection coatings. The active area of the filters is on the order of 5 to 30 mm along the wavelength gradient and up to 30 mm in the orthogonal, constant-wavelength direction. Variation in performance along the constant direction is less than 1%. Repeatable performance from filter to filter, absolute placement of the filter relative to a substrate fiducial, and high in-band transmission across the full spectral band are demonstrated. Applications include order sorting filters, direct replacement of the spectrometer, and hyper-spectral imaging. Off-band rejection with an optical density of greater than 3 allows use of the filter as an order sorting filter. The linear variable order sorting filter replaces other filter types such as block filters. The disadvantage of block filters is the loss of pixels due to the transition between filter blocks; the LVF is a continuous gradient without a discrete transition between filter wavelength regions. If the LVF is designed as a narrow bandpass filter, it can be used in place of a spectrometer, thus reducing overall sensor weight and cost while improving the robustness of the sensor. By controlling the orthogonal performance (smile), the LVF can be sized to the dimensions of the detector. When imaging onto a two-dimensional array and operating the sensor in a push-broom configuration, the LVF spectrometer performs as a hyper-spectral imager. This paper presents the performance of LVFs fabricated in the far infrared on substrates sized to available detectors. The impacts of spot size, F-number and filter characterization are presented. Results are also compared to extended-visible LVFs.
Exploiting structure: Introduction and motivation
NASA Technical Reports Server (NTRS)
Xu, Zhong Ling
1993-01-01
Research activities performed during the period of 29 June 1993 through 31 Aug. 1993 are summarized. The robust stability of systems whose transfer function or characteristic polynomial is a multilinear affine function of the parameters of interest was developed in two directions, algorithmic and theoretical. In the algorithmic direction, a new approach that reduces the computational burden of checking the robust stability of a system with multilinear uncertainty was found; this technique, called 'stability by linear process,' yields an algorithm. On the theoretical side, we obtained a robustness criterion for the family of polynomials whose coefficients are multilinear affine functions in the coefficient space, and also a result on the robust stability of diamond families of polynomials with complex coefficients. We obtained limited results for SPR design and provide a framework for solving ACS. Finally, an outline of our results is provided in the appendix, along with administrative materials.
Iterated unscented Kalman filter for phase unwrapping of interferometric fringes.
Xie, Xianming
2016-08-22
A new phase unwrapping algorithm based on an iterated unscented Kalman filter is proposed to estimate the unambiguous unwrapped phase of interferometric fringes. The method combines an iterated unscented Kalman filter with a robust phase gradient estimator based on an amended matrix pencil model and an efficient quality-guided strategy based on heap sort. The iterated unscented Kalman filter, one of the most robust methods in the Bayesian framework for non-linear signal processing to date, is applied to perform noise suppression and phase unwrapping of interferometric fringes simultaneously for the first time; this simplifies the usual pipeline of pre-filtering followed by phase unwrapping and can even eliminate the pre-filtering step. The robust phase gradient estimator efficiently and accurately obtains the phase gradient information from interferometric fringes, which is needed by the iterated unscented Kalman filtering phase unwrapping model. The efficient quality-guided strategy ensures that the proposed method unwraps pixels rapidly along a path from high-quality to low-quality areas of the wrapped phase image, which greatly improves the efficiency of phase unwrapping. Results obtained from synthetic and real data show that the proposed method obtains better solutions with acceptable time consumption compared with some of the most widely used algorithms.
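Only the heap-driven quality-guided path is easy to show compactly; the sketch below assumes a precomputed quality map and omits the iterated unscented Kalman filtering and matrix-pencil gradient estimation.

```python
import heapq
import numpy as np

def quality_guided_unwrap(wrapped, quality):
    """Heap-driven quality-guided 2-D phase unwrapping: flood-fill from the
    highest-quality pixel, unwrapping each neighbor against the pixel it
    was reached from (higher-quality pixels are always processed first)."""
    h, w = wrapped.shape
    unwrapped = wrapped.copy()
    done = np.zeros((h, w), dtype=bool)
    seed = np.unravel_index(np.argmax(quality), quality.shape)
    done[seed] = True
    heap = [(-quality[seed], seed)]          # max-heap via negated quality
    while heap:
        _, (i, j) = heapq.heappop(heap)
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < h and 0 <= nj < w and not done[ni, nj]:
                d = wrapped[ni, nj] - unwrapped[i, j]
                unwrapped[ni, nj] = wrapped[ni, nj] - 2 * np.pi * np.round(d / (2 * np.pi))
                done[ni, nj] = True
                heapq.heappush(heap, (-quality[ni, nj], (ni, nj)))
    return unwrapped
```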
Spin echo SPI methods for quantitative analysis of fluids in porous media.
Li, Linqing; Han, Hui; Balcom, Bruce J
2009-06-01
Fluid density imaging is highly desirable in a wide variety of porous media measurements. The SPRITE class of MRI methods has proven to be robust and general in its ability to generate density images in porous media; however, the short encoding times required, with correspondingly high magnetic field gradient strengths and filter widths, and low flip angle RF pulses, yield sub-optimal S/N images, especially at low static field strength. This paper explores two implementations of pure phase-encode spin-echo 1D imaging, with application to a proposed new petroleum reservoir core analysis measurement. In the first implementation of the pulse sequence, we modify the spin-echo single point imaging (SE-SPI) technique to acquire the k-space origin data point, with a near-zero evolution time, from the free induction decay (FID) following a 90° excitation pulse. Subsequent k-space data points are acquired by separately phase encoding individual echoes in a multi-echo acquisition. T2 attenuation of the echo train yields an image convolution which causes blurring. The T2 blur effect is moderate for porous media with T2 lifetime distributions longer than 5 ms. As a robust, high-S/N, and fast 1D imaging method, this method will be highly complementary to SPRITE techniques for the quantitative analysis of fluid content in porous media. In the second implementation of the SE-SPI pulse sequence, modification of the basic measurement permits fast determination of spatially resolved T2 distributions in porous media through separately phase encoding each echo in a multi-echo CPMG pulse train. An individual T2-weighted image may be acquired from each echo. The echo time (TE) of each T2-weighted image may be reduced to 500 μs or less. These profiles can be fit to extract a T2 distribution from each pixel employing a variety of standard inverse Laplace transform methods. Fluid content 1D images are produced as an essential by-product of determining the spatially resolved T2 distribution. These 1D images do not suffer from T2-related blurring. The above SE-SPI measurements are combined to generate 1D images of the local saturation and the T2 distribution as a function of saturation, upon centrifugation of petroleum reservoir core samples. The logarithmic mean T2 is observed to shift linearly with water saturation. This new reservoir core analysis measurement may provide a valuable calibration of the Coates equation for irreducible water saturation, which has been widely implemented in NMR well logging measurements.
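One standard inverse-Laplace variant of the kind mentioned in the abstract, non-negative least squares on a logarithmic T2 grid, can be sketched as follows; the synthetic two-component decay and grid limits are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def t2_distribution(echo_times, decay, n_t2=64, t2_min=1e-4, t2_max=1.0):
    """Extract a T2 distribution from a CPMG echo decay by non-negative
    least squares on a log-spaced T2 grid (a simple, regularization-free
    variant of the standard inverse Laplace transform methods)."""
    t2_grid = np.logspace(np.log10(t2_min), np.log10(t2_max), n_t2)
    K = np.exp(-echo_times[:, None] / t2_grid[None, :])   # decay kernel matrix
    amps, _ = nnls(K, decay)
    return t2_grid, amps

# Synthetic pixel: two-component decay with 500 us echo spacing (per the paper)
te = 5e-4 * np.arange(1, 129)
decay = 0.7 * np.exp(-te / 0.05) + 0.3 * np.exp(-te / 0.005)
t2, amps = t2_distribution(te, decay)
print("log-mean T2 ~", np.exp(np.sum(amps * np.log(t2)) / amps.sum()))
```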
The effects of SENSE on PROPELLER imaging.
Chang, Yuchou; Pipe, James G; Karis, John P; Gibbs, Wende N; Zwart, Nicholas R; Schär, Michael
2015-12-01
To study how sensitivity encoding (SENSE) impacts periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) image quality, including signal-to-noise ratio (SNR), robustness to motion, precision of motion estimation, and overall image quality. Five volunteers were imaged with three sets of scans. A rapid method for generating the g-factor map was proposed and validated via Monte Carlo simulations. Sensitivity maps were extrapolated to increase the area over which SENSE can be performed and thereby enhance robustness to head motion. The precision of motion estimation of PROPELLER blades unfolded with these sensitivity maps was investigated. An interleaved R-factor PROPELLER sequence was used to acquire data with similar amounts of motion with and without SENSE acceleration. Two neuroradiologists independently and blindly compared 214 image pairs. The proposed method of g-factor calculation gave results similar to those of the Monte Carlo methods. Extrapolation and rotation of the sensitivity maps allowed for continued robustness of SENSE unfolding in the presence of motion. SENSE-widened blades improved the precision of rotation and translation estimation. PROPELLER images with a SENSE factor of 3 outperformed traditional PROPELLER images when reconstructing the same number of blades. SENSE not only accelerates PROPELLER but can also improve the robustness and precision of head motion correction, which improves overall image quality even when SNR is lost due to acceleration. The reduction of SNR, as a penalty of acceleration, is characterized by the proposed g-factor method.
Robust feature matching via support-line voting and affine-invariant ratios
NASA Astrophysics Data System (ADS)
Li, Jiayuan; Hu, Qingwu; Ai, Mingyao; Zhong, Ruofei
2017-10-01
Robust image matching is crucial for many applications of remote sensing and photogrammetry, such as image fusion, image registration, and change detection. In this paper, we propose a robust feature matching method based on support-line voting and affine-invariant ratios. We first use popular feature matching algorithms, such as SIFT, to obtain a set of initial matches. A support-line descriptor based on multiple adaptive binning gradient histograms is subsequently applied in the support-line voting stage to filter outliers. In addition, we use affine-invariant ratios computed by a two-line structure to refine the matching results and estimate the local affine transformation. The local affine model is more robust to distortions caused by elevation differences than the global affine transformation, especially for high-resolution remote sensing images and UAV images. Thus, the proposed method is suitable for both rigid and non-rigid image matching problems. Finally, we extract as many high-precision correspondences as possible based on the local affine extension and build a grid-wise affine model for remote sensing image registration. We compare the proposed method with six state-of-the-art algorithms on several data sets and show that our method significantly outperforms the other methods. The proposed method achieves 94.46% average precision on 15 challenging remote sensing image pairs, while the second-best method, RANSAC, only achieves 70.3%. In addition, the number of detected correct matches of the proposed method is approximately four times the number of initial SIFT matches.
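The initial-match stage can be reproduced with OpenCV as below (SIFT plus Lowe's ratio test); the support-line voting and affine-invariant-ratio refinement are not reproduced, and the ratio threshold is a conventional default rather than the paper's setting.

```python
import cv2

def initial_matches(img1, img2, ratio=0.8):
    """SIFT detection plus the ratio test to produce the initial match set
    that a support-line voting stage would subsequently filter.
    Inputs are assumed grayscale uint8 images."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(d1, d2, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < ratio * n.distance]
    pts1 = [k1[m.queryIdx].pt for m in good]
    pts2 = [k2[m.trainIdx].pt for m in good]
    return pts1, pts2
```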
Modern CACSD using the Robust-Control Toolbox
NASA Technical Reports Server (NTRS)
Chiang, Richard Y.; Safonov, Michael G.
1989-01-01
The Robust-Control Toolbox is a collection of 40 M-files which extend the capability of PC/PRO-MATLAB to do modern multivariable robust control system design. Included are robust analysis tools such as singular values and structured singular values, robust synthesis tools such as continuous/discrete H2/H∞ synthesis and Linear Quadratic Gaussian Loop Transfer Recovery methods, and a variety of robust model reduction tools such as Hankel approximation, balanced truncation and balanced stochastic truncation. The capabilities of the toolbox are described and illustrated with examples to show how easily they can be used in practice. Examples include structured singular value analysis, H∞ loop-shaping and large space structure model reduction.
NASA Astrophysics Data System (ADS)
Markman, A.; Javidi, B.
2016-06-01
Quick-response (QR) codes are barcodes that can store information such as numeric data and hyperlinks. A QR code can be scanned using a QR code reader, such as those built into smartphone devices, revealing the information stored in the code. Moreover, the QR code is robust to noise, rotation, and illumination variations when scanning, due to the error correction built into the QR code design. Integral imaging is an imaging technique that generates a three-dimensional (3D) scene by combining the information from two-dimensional (2D) elemental images (EIs), each with a different perspective of the scene. Transferring these 2D images in a secure manner can be difficult. In this work, we overview two methods to store and encrypt EIs in multiple QR codes. The first method uses run-length encoding with Huffman coding and double-random-phase encryption (DRPE) to compress and encrypt an EI; this information is then stored in a QR code. An alternative compression scheme is to perform photon counting on the EI prior to compression. Photon counting is a non-linear transformation of the data that creates redundant information, thus improving image compression. The compressed data are encrypted using the DRPE. Once the information is stored in the QR codes, it is scanned using a smartphone device, and the scanned information is decompressed and decrypted to recover an EI. Once all EIs have been recovered, a 3D optical reconstruction is generated.
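The DRPE core is small enough to sketch directly with NumPy FFTs; the elemental image and phase keys below are random stand-ins, and the run-length/Huffman compression and QR encoding steps are omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

def drpe_encrypt(img, phase1, phase2):
    """Double-random-phase encryption: one random phase in the spatial
    domain, a second in the Fourier domain."""
    field = img * np.exp(2j * np.pi * phase1)
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(2j * np.pi * phase2))

def drpe_decrypt(cipher, phase1, phase2):
    """Invert both phase modulations to recover the (non-negative) image."""
    field = np.fft.ifft2(np.fft.fft2(cipher) * np.exp(-2j * np.pi * phase2))
    return np.abs(field * np.exp(-2j * np.pi * phase1))

ei = rng.random((64, 64))            # stand-in elemental image
p1, p2 = rng.random((2, 64, 64))     # the two random phase keys
rec = drpe_decrypt(drpe_encrypt(ei, p1, p2), p1, p2)
assert np.allclose(rec, ei)          # lossless round trip with correct keys
```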
NASA Astrophysics Data System (ADS)
Mazzoleni, Paolo; Matta, Fabio; Zappa, Emanuele; Sutton, Michael A.; Cigada, Alfredo
2015-03-01
This paper discusses the effect of pre-processing image blurring on the uncertainty of two-dimensional digital image correlation (DIC) measurements for the specific case of numerically-designed speckle patterns having particles with well-defined and consistent shape, size and spacing. Such patterns are more suitable for large measurement surfaces on large-scale specimens than traditional spray-painted random patterns without well-defined particles. The methodology consists of numerical simulations where Gaussian digital filters with varying standard deviation are applied to a reference speckle pattern. To simplify the pattern application process for large areas and increase contrast to reduce measurement uncertainty, the speckle shape, mean size and on-center spacing were selected to be representative of numerically-designed patterns that can be applied on large surfaces through different techniques (e.g., spray-painting through stencils). Such 'designer patterns' are characterized by well-defined regions of non-zero frequency content and non-zero peaks, and are fundamentally different from typical spray-painted patterns whose frequency content exhibits near-zero peaks. The effect of blurring filters is examined for constant, linear, quadratic and cubic displacement fields. Maximum strains between ±250 and ±20,000 με are simulated, thus covering a relevant range for structural materials subjected to service and ultimate stresses. The robustness of the simulation procedure is verified experimentally using a physical speckle pattern subjected to constant displacements. The stability of the relation between standard deviation of the Gaussian filter and measurement uncertainty is assessed for linear displacement fields at varying image noise levels, subset size, and frequency content of the speckle pattern. It is shown that bias error as well as measurement uncertainty are minimized through Gaussian pre-filtering. This finding does not apply to typical spray-painted patterns without well-defined particles, for which image blurring is only beneficial in reducing bias errors.
NASA Astrophysics Data System (ADS)
Emaminejad, Nastaran; Wahi-Anwar, Muhammad; Hoffman, John; Kim, Grace H.; Brown, Matthew S.; McNitt-Gray, Michael
2018-02-01
Translation of radiomics into clinical practice requires confidence in its interpretations. This may be obtained by understanding and overcoming the limitations in current radiomic approaches, among them a lack of standardization in radiomic feature extraction. In this study we examined a few factors that are potential sources of inconsistency in characterizing lung nodules: (1) different choices of parameters and algorithms in feature calculation, (2) two CT image dose levels, and (3) different CT reconstruction algorithms (WFBP, denoised WFBP, and iterative). We investigated the effect of varying these factors on the entropy textural features of lung nodules. CT images of 19 lung nodules identified in our lung cancer screening program were located by a CAD tool, which provided contours. The radiomic features were extracted by calculating 36 GLCM-based and 4 histogram-based entropy features, in addition to 2 intensity-based features. A robustness index was calculated across the different image acquisition parameters to illustrate the reproducibility of the features. Most GLCM-based and all histogram-based entropy features were robust across the two CT image dose levels. Denoising of images slightly improved the robustness of some entropy features at WFBP. Iterative reconstruction improved robustness in fewer cases and caused more variation in entropy feature values and their robustness. Across different choices of parameters and algorithms, texture features showed a wide range of variation, as much as 75% for individual nodules. The results indicate the need for harmonization of feature calculations and identification of optimal parameters and algorithms in a radiomics study.
A robust and hierarchical approach for the automatic co-registration of intensity and visible images
NASA Astrophysics Data System (ADS)
González-Aguilera, Diego; Rodríguez-Gonzálvez, Pablo; Hernández-López, David; Luis Lerma, José
2012-09-01
This paper presents a new robust approach to integrate intensity and visible images which have been acquired with a terrestrial laser scanner and a calibrated digital camera, respectively. In particular, an automatic and hierarchical method for the co-registration of both sensors is developed. The approach integrates several existing solutions to improve the performance of the co-registration between range-based and visible images: the Affine Scale-Invariant Feature Transform (A-SIFT), the epipolar geometry, the collinearity equations, the Groebner basis solution and the RANdom SAmple Consensus (RANSAC), integrating a voting scheme. The approach presented herein improves the existing co-registration approaches in automation, robustness, reliability and accuracy.
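A hedged sketch of one feature-based co-registration stage in the spirit of the pipeline above follows. Standard SIFT stands in for A-SIFT, a single homography stands in for the full epipolar/collinearity machinery, and the file names are assumptions.

```python
# Sketch of feature matching plus RANSAC outlier rejection between an
# intensity image and a visible image; file names are assumptions.
import cv2
import numpy as np

intensity = cv2.imread("intensity.png", cv2.IMREAD_GRAYSCALE)
visible = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(intensity, None)
k2, d2 = sift.detectAndCompute(visible, None)

# Brute-force matching with cross-checking for putative correspondences.
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC rejects outlier correspondences, analogous to the voting scheme
# described above (assumes enough putative matches survive).
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
```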
Validation of a stereo camera system to quantify brain deformation due to breathing and pulsatility.
Faria, Carlos; Sadowsky, Ofri; Bicho, Estela; Ferrigno, Giancarlo; Joskowicz, Leo; Shoham, Moshe; Vivanti, Refael; De Momi, Elena
2014-11-01
A new stereo vision system is presented to quantify brain shift and pulsatility in open-skull neurosurgeries. The system features synchronous hardware and software image acquisition with timestamp embedding in the captured images, brain-surface-oriented feature detection, and a tracking subroutine robust to occlusions and outliers. A validation experiment for the stereo vision system was conducted against a gold-standard optical tracking system, the Optotrak CERTUS. A static and dynamic analysis of the stereo camera tracking error was performed by tracking a customized object at different positions, orientations, and linear and angular speeds. The system detects the position and orientation of an immobile object with a maximum error of 0.5 mm and 1.6° across the full depth of field, and tracks an object moving at up to 3 mm/s with a median error of 0.5 mm. Three stereo video acquisitions were recorded from a patient immediately after the craniotomy. The cortical pulsatile motion was captured and is represented in the time and frequency domains. The amplitude of motion of the center of mass of the cloud of features was less than 0.8 mm. Three distinct peaks are identified in the fast Fourier transform analysis, related to the sympathovagal balance, breathing, and blood pressure, at 0.03-0.05, 0.2, and 1 Hz, respectively. The stereo vision system presented is a precise and robust system for measuring brain shift and pulsatility, with accuracy superior to that of other reported systems.
Zheng, Xianlin; Lu, Yiqing; Zhao, Jiangbo; Zhang, Yuhai; Ren, Wei; Liu, Deming; Lu, Jie; Piper, James A; Leif, Robert C; Liu, Xiaogang; Jin, Dayong
2016-01-19
Compared with routine microscopy imaging of a few analytes at a time, rapidly scanning the whole sample area of a microscope slide to locate every single target object offers many advantages in simplicity, speed, throughput, and potential for robust quantitative analysis. Existing techniques that accommodate solid-phase samples incorporating individual micrometer-sized targets generally rely on digital microscopy and image analysis, with intrinsically low throughput and reliability. Here, we report an advanced on-the-fly stage scanning method to achieve high-precision target location across the whole slide. By integrating X- and Y-axis linear encoders into a motorized stage as the virtual "grids" that provide real-time positional references, we demonstrate an orthogonal scanning automated microscopy (OSAM) technique that can search a coverslip area of 50 × 24 mm² in just 5.3 min and locate individual 15 μm lanthanide luminescent microspheres with standard deviations of 1.38 and 1.75 μm in the X and Y directions. Alongside implementation of an autofocus unit that compensates in real time for the tilt of a slide along the Z-axis, we increase the luminescence detection efficiency by 35% with an improved coefficient of variation. We demonstrate the capability of advanced OSAM for robust quantification of luminescence intensities and lifetimes for a variety of micrometer-scale luminescent targets, specifically single down-shifting and upconversion microspheres, crystalline microplates, and color-barcoded microrods, as well as quantitative suspension array assays of biotinylated-DNA-functionalized upconversion nanoparticles.
Optimal full motion video registration with rigorous error propagation
NASA Astrophysics Data System (ADS)
Dolloff, John; Hottel, Bryant; Doucette, Peter; Theiss, Henry; Jocher, Glenn
2014-06-01
Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being developed for such registration is presented that models relevant error sources in terms of the expected magnitude and correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori information of the sensor's trajectory and attitude (pointing) information, in order to best deal with non-linearity effects. Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered sensor data and its a posteriori accuracy information are then made available to "down-stream" Multi-Image Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image optimal solution, including reliable predicted solution accuracy, is then performed for the object's 3D coordinates. This paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as estimation of the Essential Matrix which can be very sensitive to noise. The approach used instead is a novel, robust, direct search-based technique.
NASA Astrophysics Data System (ADS)
Liu, Xiyao; Lou, Jieting; Wang, Yifan; Du, Jingyu; Zou, Beiji; Chen, Yan
2018-03-01
Authentication and copyright identification are two critical security issues for medical images. Although zero-watermarking schemes can provide durable, reliable and distortion-free protection for medical images, existing zero-watermarking schemes for medical images still face two problems. On one hand, they rarely consider distinguishability, which is critical because different medical images are sometimes similar to each other. On the other hand, their robustness against geometric attacks, such as cropping, rotation and flipping, is insufficient. In this study, a novel discriminative and robust zero-watermarking (DRZW) scheme is proposed to address these two problems. In DRZW, content-based features of medical images are first extracted using the completed local binary pattern (CLBP) operator to ensure distinguishability and robustness, especially against geometric attacks. Then, master shares and ownership shares are generated from the content-based features and the watermark according to (2,2) visual cryptography. Finally, the ownership shares are stored for authentication and copyright identification. For queried medical images, their content-based features are extracted and master shares are generated, and their watermarks for authentication and copyright identification are recovered by stacking the generated master shares with the stored ownership shares. Two hundred different medical images of 5 types were collected as testing data, and our experimental results demonstrate that DRZW ensures both the accuracy and reliability of authentication and copyright identification. When the false positive rate is fixed at 1.00%, the average false negative rate using DRZW is only 1.75% under 20 common attacks with different parameters.
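The share logic of such zero-watermarking schemes can be illustrated with a toy sketch: a binary feature map combines with a binary watermark into master and ownership shares, with XOR standing in for the (2,2) visual-cryptography construction. All names and sizes are assumptions; the actual DRZW construction is richer than this.

```python
# Toy sketch of zero-watermark share generation and recovery; XOR stands in
# for (2,2) visual cryptography, and all names here are assumptions.
import numpy as np

rng = np.random.default_rng(2)
features = rng.integers(0, 2, size=(32, 32), dtype=np.uint8)   # e.g., CLBP-derived bits
watermark = rng.integers(0, 2, size=(32, 32), dtype=np.uint8)

master_share = features                      # derived from image content only
ownership_share = master_share ^ watermark   # stored for later verification

# Verification: regenerate the master share from the queried image's features
# and stack (XOR) it with the stored ownership share to recover the watermark.
recovered = master_share ^ ownership_share
assert np.array_equal(recovered, watermark)
```

Because nothing is embedded in the image itself, the protection is distortion-free; robustness then hinges entirely on how stable the content-based features are under attack.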
Quantitative analysis of histopathological findings using image processing software.
Horai, Yasushi; Kakimoto, Tetsuhiro; Takemoto, Kana; Tanaka, Masaharu
2017-10-01
In evaluating pathological changes in drug efficacy and toxicity studies, morphometric analysis can be quite robust. In this experiment, we examined whether morphometric changes of major pathological findings in various tissue specimens stained with hematoxylin and eosin could be recognized and quantified using image processing software. Using Tissue Studio, hypertrophy of hepatocytes and adrenocortical cells could be quantified based on the method of a previous report, but the regions of red pulp, white pulp, and marginal zones in the spleen could not be recognized under a single setting condition. Using Image-Pro Plus, lipid-derived vacuoles in the liver and mucin-derived vacuoles in the intestinal mucosa could be quantified using two criteria (area and/or roundness). Vacuoles derived from phospholipid could not be quantified when small lipid deposits coexisted in the liver and adrenal cortex. Mononuclear inflammatory cell infiltration in the liver could be quantified to some extent, except for specimens with many clustered infiltrating cells. Adipocyte size and the mean linear intercept could be quantified easily and efficiently using the morphological processing and macro tools provided in Image-Pro Plus. These methodologies are expected to form the basis of a system that can recognize morphometric features and quantitatively analyze pathological findings through the use of information technology.
Hinrichs, Ruth; Frank, Paulo Ricardo Ost; Vasconcellos, M A Z
2017-03-01
Modifications of cotton and polyester textiles due to shots fired at short range were analyzed with a variable pressure scanning electron microscope (VP-SEM). Different mechanisms of fiber rupture as a function of fiber type and shooting distance were detected, namely fusing, melting, scorching, and mechanical breakage. To estimate the firing distance, the approximately exponential decay of gunshot residue (GSR) coverage as a function of radial distance from the entrance hole was determined from image analysis, instead of relying on chemical analysis with EDX, which is problematic in the VP-SEM. A set of backscattered electron images, with sufficient magnification to discriminate micrometer-wide GSR particles, was acquired at different radial distances from the entrance hole. The atomic number contrast between the GSR particles and the organic fibers enabled a robust procedure for segmenting the micrographs into binary images, in which the white pixel count was attributed to GSR coverage. The decrease of the white pixel count followed an exponential decay, and the reciprocal of the decay constant, obtained from least-squares fitting of the coverage data, showed a linear dependence on the shooting distance.
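The calibration step described above amounts to fitting an exponential decay to coverage-versus-radius data. The sketch below does this with a standard least-squares fit; the data values are illustrative assumptions, not measurements from the study.

```python
# Sketch of fitting GSR coverage vs. radial distance with an exponential
# decay; the data points here are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

radius_mm = np.array([2, 4, 6, 8, 10, 12], dtype=float)
coverage = np.array([0.31, 0.17, 0.10, 0.055, 0.031, 0.018])  # white-pixel fraction

def decay(r, a, k):
    return a * np.exp(-k * r)

(a, k), _ = curve_fit(decay, radius_mm, coverage, p0=(0.5, 0.3))
# Per the study, the reciprocal 1/k varies linearly with shooting distance.
print(f"decay constant k = {k:.3f} per mm, reciprocal 1/k = {1/k:.2f} mm")
```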
Photon-number correlation for quantum enhanced imaging and sensing
NASA Astrophysics Data System (ADS)
Meda, A.; Losero, E.; Samantaray, N.; Scafirimuto, F.; Pradyumna, S.; Avella, A.; Ruo-Berchera, I.; Genovese, M.
2017-09-01
In this review we present the potential and the achievements of the use of non-classical photon-number correlations in twin-beam states for many applications, ranging from imaging to metrology. Photon-number correlations in the quantum regime are easily produced and are rather robust against unavoidable experimental losses, and in some cases noise, compared with entanglement, where losing a single photon can completely compromise the state and its exploitable advantages. Here we focus on quantum enhanced protocols in which only phase-insensitive intensity measurements (photon-number counting) are performed, which allow probing the transmission/absorption properties of a system, leading, for example, to innovative target detection schemes in a strong background. In this framework, one of the advantages is that the experimentally available sources emit a large number of pair-wise correlated modes, which can be intercepted and exploited separately, for example by many pixels of a camera, providing a parallelism that is essential in several applications, such as wide-field sub-shot-noise imaging and quantum enhanced ghost imaging. Finally, non-classical correlation enables new possibilities in quantum radiometry, e.g. the absolute calibration of a spatially resolving detector from the on-off single-photon regime to the linear regime in the same setup.
Direct estimation of evoked hemoglobin changes by multimodality fusion imaging
Huppert, Theodore J.; Diamond, Solomon G.; Boas, David A.
2009-01-01
In the last two decades, both diffuse optical tomography (DOT) and blood oxygen level dependent (BOLD)-based functional magnetic resonance imaging (fMRI) methods have been developed as noninvasive tools for imaging evoked cerebral hemodynamic changes in studies of brain activity. Although these two technologies measure functional contrast from similar physiological sources, i.e., changes in hemoglobin levels, they are based on distinct physical and biophysical principles, giving each method its own limitations and strengths. In this work, we describe a unified linear model to combine the complementary spatial, temporal, and spectroscopic resolutions of concurrently measured optical tomography and fMRI signals. Using numerical simulations, we demonstrate that concurrent optical and BOLD measurements can be used to create cross-calibrated estimates of absolute micromolar deoxyhemoglobin changes. We apply this new analysis tool to experimental data acquired simultaneously with both DOT and BOLD imaging during a motor task, demonstrate the ability to estimate hemoglobin changes more robustly than with DOT alone, and show how this approach can provide cross-calibrated estimates of hemoglobin changes. Using this multimodal method, we estimate the calibration of the 3 tesla BOLD signal to be −0.55% ± 0.40% signal change per micromolar change of deoxyhemoglobin. PMID:19021411
Keller, Brad M; Oustimov, Andrew; Wang, Yan; Chen, Jinbo; Acciavatti, Raymond J; Zheng, Yuanjie; Ray, Shonket; Gee, James C; Maidment, Andrew D A; Kontos, Despina
2015-04-01
An analytical framework is presented for evaluating the equivalence of parenchymal texture features across different full-field digital mammography (FFDM) systems using a physical breast phantom. Phantom images (FOR PROCESSING) are acquired from three FFDM systems using their automated exposure control setting. A panel of texture features, including gray-level histogram, co-occurrence, run length, and structural descriptors, are extracted. To identify features that are robust across imaging systems, a series of equivalence tests are performed on the feature distributions, in which the extent of their intersystem variation is compared to their intrasystem variation via the Hodges-Lehmann test statistic. Overall, histogram and structural features tend to be most robust across all systems, and certain features, such as edge enhancement, tend to be more robust to intergenerational differences between detectors of a single vendor than to intervendor differences. Texture features extracted from larger regions of interest (i.e., [Formula: see text]) and with a larger offset length (i.e., [Formula: see text]), when applicable, also appear to be more robust across imaging systems. This framework and observations from our experiments may benefit applications utilizing mammographic texture analysis on images acquired in multivendor settings, such as in multicenter studies of computer-aided detection and breast cancer risk assessment.
Digital watermarking algorithm research of color images based on quaternion Fourier transform
NASA Astrophysics Data System (ADS)
An, Mali; Wang, Weijiang; Zhao, Zhen
2013-10-01
A watermarking algorithm for color images based on the quaternion Fourier transform (QFFT) and an improved quantization index modulation (QIM) algorithm is proposed in this paper. The original image is transformed by QFFT, the watermark image is processed by compression and quantization coding, and the processed watermark is then embedded into the components of the transformed original image, achieving embedding and blind extraction of the watermark image. The experimental results show that the watermarking algorithm based on the improved QIM algorithm with distortion compensation achieves a good tradeoff between invisibility and robustness, and better robustness against Gaussian noise, salt-and-pepper noise, JPEG compression, cropping, filtering and image enhancement attacks than the traditional QIM algorithm.
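The core QIM idea is that the embedded bit selects one of two interleaved quantizer lattices for a transform coefficient. The sketch below shows plain QIM on a single coefficient; the step size is an assumption, and the distortion-compensation refinement of DC-DM is omitted.

```python
# Minimal sketch of quantization index modulation (QIM) on one coefficient;
# the step size DELTA is an illustrative assumption.
import numpy as np

DELTA = 8.0

def qim_embed(coeff, bit):
    # Bit 0 uses the base lattice, bit 1 the lattice shifted by DELTA/2.
    dither = 0.0 if bit == 0 else DELTA / 2.0
    return DELTA * np.round((coeff - dither) / DELTA) + dither

def qim_extract(coeff):
    # Decide which lattice the received coefficient lies closer to.
    d0 = abs(coeff - qim_embed(coeff, 0))
    d1 = abs(coeff - qim_embed(coeff, 1))
    return 0 if d0 <= d1 else 1

c = 37.3
for b in (0, 1):
    assert qim_extract(qim_embed(c, b)) == b
```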
NASA Astrophysics Data System (ADS)
Tartakovsky, A.; Tong, M.; Brown, A. P.; Agh, C.
2013-09-01
We develop efficient spatiotemporal image processing algorithms for rejection of non-stationary clutter and tracking of multiple dim objects using non-linear track-before-detect methods. For clutter suppression, we include an innovative image alignment (registration) algorithm. The images are assumed to contain elements of the same scene, but taken at different angles, from different locations, and at different times, with substantial clutter non-stationarity. These challenges are typical for space-based and surface-based IR/EO moving sensors, e.g., highly elliptical orbit or low earth orbit scenarios. The algorithm assumes that the images are related via a planar homography, also known as the projective transformation. The parameters are estimated in an iterative manner, at each step adjusting the parameter vector so as to achieve improved alignment of the images. Operating in the parameter space rather than in the coordinate space is a new idea, which makes the algorithm more robust with respect to noise as well as to large inter-frame disturbances, while operating at real-time rates. For dim object tracking, we include new advancements to a particle non-linear filtering-based track-before-detect (TrbD) algorithm. The new TrbD algorithm includes both real-time full image search for resolved objects not yet in track and joint super-resolution and tracking of individual objects in closely spaced object (CSO) clusters. The real-time full image search provides near-optimal detection and tracking of multiple extremely dim, maneuvering objects/clusters. The super-resolution and tracking CSO TrbD algorithm provides efficient near-optimal estimation of the number of unresolved objects in a CSO cluster, as well as the locations, velocities, accelerations, and intensities of the individual objects. We demonstrate that the algorithm is able to accurately estimate the number of CSO objects and their locations when the initial uncertainty on the number of objects is large. We demonstrate performance of the TrbD algorithm both for satellite-based and surface-based EO/IR surveillance scenarios.
Morphology filter bank for extracting nodular and linear patterns in medical images.
Hashimoto, Ryutaro; Uchiyama, Yoshikazu; Uchimura, Keiichi; Koutaki, Gou; Inoue, Tomoki
2017-04-01
Using image processing to extract nodular or linear shadows is a key technique in computer-aided diagnosis schemes. This study proposes a new method for extracting nodular and linear patterns of various sizes in medical images. We have developed a morphology filter bank that creates multiresolution representations of an image. The analysis bank of this filter bank produces nodular and linear patterns at each resolution level, and the synthesis bank can then be used to perfectly reconstruct the original image from these decomposed patterns. In a quantitative evaluation using a synthesized image, our proposed method shows better performance than a conventional method based on the Hessian matrix, which is often used to enhance nodular and linear patterns. In addition, experiments show that our method can be applied as follows: (1) microcalcifications of various sizes can be extracted from mammograms, (2) blood vessels of various sizes can be extracted from retinal fundus images, and (3) thoracic CT images can be reconstructed with normal vessels removed. Our proposed method is useful for extracting nodular and linear shadows or removing normal structures in medical images.
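A crude single-scale stand-in for the nodular/linear separation idea is shown below: grayscale openings with a disk footprint preserve blob-like structures, while openings with oriented line footprints preserve thin elongated ones. The footprint sizes are assumptions, and this is not the paper's filter bank, which works across multiple resolutions with perfect reconstruction.

```python
# Sketch of separating nodular from linear patterns with grayscale openings;
# footprint shapes and sizes are illustrative assumptions.
import numpy as np
from scipy.ndimage import grey_opening

rng = np.random.default_rng(3)
image = rng.random((128, 128))            # stand-in for a medical image

# Disk-like footprint (radius 3) responds to nodular patterns.
y, x = np.ogrid[-3:4, -3:4]
disk = (x * x + y * y <= 9).astype(int)

# Horizontal and vertical line footprints respond to linear patterns.
h_line = np.ones((1, 9), dtype=int)
v_line = np.ones((9, 1), dtype=int)

nodular = grey_opening(image, footprint=disk)
linear = np.maximum(grey_opening(image, footprint=h_line),
                    grey_opening(image, footprint=v_line))
```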
Robust extraction of the aorta and pulmonary artery from 3D MDCT image data
NASA Astrophysics Data System (ADS)
Taeprasartsit, Pinyo; Higgins, William E.
2010-03-01
Accurate definition of the aorta and pulmonary artery from three-dimensional (3D) multi-detector CT (MDCT) images is important for pulmonary applications. This work presents robust methods for defining the aorta and pulmonary artery in the central chest. The methods work on both contrast-enhanced and non-contrast 3D MDCT image data. The automatic methods use a common approach employing model fitting and selection and adaptive refinement. For the occasional case in which more precise vascular extraction is desired or the automatic method fails, we also provide an alternative semi-automatic fail-safe method, which extracts the vasculature by extending the medial axes in a user-guided direction. A ground-truth study over a series of 40 human 3D MDCT images demonstrates the efficacy, accuracy, robustness, and efficiency of the methods.
2011-11-01
"Lightweight, Low Power Robust Means of Removing Image Jitter" (AFRL-RX-TY-TR-2011-0096-02) develops an optimal ... sensor based on the biological vision system of the common housefly, Musca domestica. Several variations of this sensor were designed, simulated extensively, and hardware ...
Robust Spatial Autoregressive Modeling for Hardwood Log Inspection
Dongping Zhu; A.A. Beex
1994-01-01
We explore the application of a stochastic texture modeling method toward a machine vision system for log inspection in the forest products industry. This machine vision system uses computerized tomography (CT) imaging to locate and identify internal defects in hardwood logs. The application of CT to such industrial vision problems requires efficient and robust image...
NASA Astrophysics Data System (ADS)
Guo, Dongwei; Wang, Zhe
2018-05-01
Convolutional neural networks (CNNs) have achieved great success in computer vision: they can learn hierarchical representations from raw pixels and show outstanding performance in various image recognition tasks [1]. However, CNNs are easily fooled: it is possible to produce images totally unrecognizable to human eyes that CNNs believe, with near certainty, to be familiar objects [2]. In this paper, an associative memory model based on multiple features is proposed. Within this model, feature extraction and classification are carried out by a CNN, t-SNE, and an exponential bidirectional associative memory neural network (EBAM). The geometric features extracted by the CNN and the digital features extracted by t-SNE are associated through the EBAM, ensuring recognition robustness via a comprehensive assessment of the two features. Our model attains only an 8% error rate on fraudulent data. In systems that require a high safety factor, or in other critical domains, strong robustness is extremely important: if image recognition robustness can be ensured, network security will be greatly improved and productivity considerably enhanced.
Overcoming Dynamic Disturbances in Imaging Systems
NASA Technical Reports Server (NTRS)
Young, Eric W.; Dente, Gregory C.; Lyon, Richard G.; Chesters, Dennis; Gong, Qian
2000-01-01
We develop and discuss a methodology with the potential to yield a significant reduction in the complexity, cost, and risk of space-borne optical systems in the presence of dynamic disturbances; more robust systems will almost certainly result as well. Many future space-based and ground-based optical systems will employ optical control systems to enhance imaging performance. The goal of the optical control subsystem is to determine the wavefront aberrations and remove them, ideally reducing an aberrated image of the object under investigation to a sufficiently clear (usually diffraction-limited) image. Control will likely be distributed over several elements. These elements may include telescope primary segments, the telescope secondary, the telescope tertiary, deformable mirror(s), and fine steering mirror(s). The last two elements, in particular, may have to provide dynamic control. These control subsystems may become elaborate indeed, but robust system performance will require evaluation of image quality over a substantial range and in a dynamic environment. Candidate systems for improvement in the Earth Sciences Enterprise could include next-generation Landsat systems or atmospheric sensors for dynamic imaging of individual severe storms. The technology developed here could have a substantial impact on the development of new systems in the Space Science Enterprise, such as the Next Generation Space Telescope (NGST) and its follow-on, the Next NGST. Large interferometric systems of non-zero field, such as Planet Finder and the Submillimeter Probe of the Evolution of Cosmic Structure, could benefit. These systems will most likely contain large, flexible optomechanical structures subject to dynamic disturbance. Furthermore, large systems for high-resolution imaging of planets or the sun from space may also benefit, as could tactical and strategic defense systems that need to image very small targets. We discuss a novel speckle imaging technique with the potential to separate dynamic aberrations from static aberrations. Post-processing of a set of image data, using an algorithm based on this technique, should work for all but the lowest light levels and highest-frequency dynamic environments. This technique may serve to reduce the complexity of the control system and provide for robust, fault-tolerant, reduced-risk operation. For a given object, a short exposure image is "frozen" on the focal plane in the presence of the environmental disturbance (turbulence, jitter, etc.). A key factor is that this imaging data exhibits frame-to-frame linear shift invariance. Therefore, although the point spread function varies from frame to frame, the source is fixed, and each short exposure contains object spectrum data out to the diffraction limit of the imaging system. This novel speckle imaging technique uses the Knox-Thompson method. The magnitude of the complex object spectrum is straightforward to determine by well-established approaches. The phase of the complex object spectrum is decomposed into two parts: a single-valued function determined by the divergence of the optical phase gradient, and a multi-valued function determined by the circulation of the optical phase gradient, the "hidden phase." Finite difference equations are developed for the phase. The novelty of this approach is captured in the inclusion of this "hidden phase."
This technique allows the diffraction-limited reconstruction of the object from the ensemble of short exposure frames while simultaneously estimating the phase as a function of time from a set of exposures.
Linear quadratic servo control of a reusable rocket engine
NASA Technical Reports Server (NTRS)
Musgrave, Jeffrey L.
1991-01-01
The paper deals with the development of a design method for a servo component in the frequency domain using singular values and its application to a reusable rocket engine. A general methodology used to design a class of linear multivariable controllers for intelligent control systems is presented. Focus is placed on performance and robustness characteristics, and an estimator design performed in the framework of the Kalman-filter formalism with emphasis on using a sensor set different from the commanded values is discussed. It is noted that loop transfer recovery modifies the nominal plant noise intensities in order to obtain the desired degree of robustness to uncertainty reflected at the plant input. Simulation results demonstrating the performance of the linear design on a nonlinear engine model over all power levels during mainstage operation are discussed.
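For orientation, the sketch below computes a generic continuous-time LQR state-feedback gain by solving the algebraic Riccati equation. The toy system and weighting matrices are assumptions, not the engine model or weights from the paper.

```python
# Generic continuous-time LQR gain computation; the matrices below are
# illustrative assumptions, not the rocket engine model.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])   # state weighting
R = np.array([[1.0]])      # control weighting

P = solve_continuous_are(A, B, Q, R)      # stabilizing Riccati solution
K = np.linalg.solve(R, B.T @ P)           # optimal feedback gain, u = -K x
print("LQR gain K:", K)
```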
H(2)- and H(infinity)-design tools for linear time-invariant systems
NASA Technical Reports Server (NTRS)
Ly, Uy-Loi
1989-01-01
Recent advances in optimal control have brought design techniques based on optimization of H(2) and H(infinity) norm criteria closer to being attractive alternatives to single-loop design methods for linear time-invariant systems. Significant steps forward in this technology are a deeper understanding of the performance and robustness issues of these design procedures and means to perform design trade-offs. However, acceptance of the technology is hindered by the lack of convenient design tools for exercising these powerful multivariable techniques while still allowing single-loop design formulations. Presented is a unique computer tool for designing arbitrary low-order linear time-invariant controllers that encompasses both performance and robustness issues via the familiar H(2) and H(infinity) norm optimization. Application to disturbance-rejection design for a commercial transport is demonstrated.
Robustness in linear quadratic feedback design with application to an aircraft control problem
NASA Technical Reports Server (NTRS)
Patel, R. V.; Sridhar, B.; Toda, M.
1977-01-01
Some new results concerning robustness and asymptotic properties of error bounds of a linear quadratic feedback design are applied to an aircraft control problem. An autopilot for the flare control of the Augmentor Wing Jet STOL Research Aircraft (AWJSRA) is designed based on Linear Quadratic (LQ) theory and the results developed in this paper. The variation of the error bounds to changes in the weighting matrices in the LQ design is studied by computer simulations, and appropriate weighting matrices are chosen to obtain a reasonable error bound for variations in the system matrix and at the same time meet the practical constraints for the flare maneuver of the AWJSRA. Results from the computer simulation of a satisfactory autopilot design for the flare control of the AWJSRA are presented.
Lieb polariton topological insulators
NASA Astrophysics Data System (ADS)
Li, Chunyan; Ye, Fangwei; Chen, Xianfeng; Kartashov, Yaroslav V.; Ferrando, Albert; Torner, Lluis; Skryabin, Dmitry V.
2018-02-01
We predict that the interplay between the spin-orbit coupling, stemming from the transverse electric-transverse magnetic energy splitting, and the Zeeman effect in semiconductor microcavities supporting exciton-polariton quasiparticles, results in the appearance of unidirectional linear topological edge states when the top microcavity mirror is patterned to form a truncated dislocated Lieb lattice of cylindrical pillars. Periodic nonlinear edge states are found to emerge from the linear ones. They are strongly localized across the interface and they are remarkably robust in comparison to their counterparts in honeycomb lattices. Such robustness makes possible the existence of nested unidirectional dark solitons that move steadily along the lattice edge.
NASA Astrophysics Data System (ADS)
Gusriani, N.; Firdaniza
2018-03-01
The existence of outliers in multiple linear regression analysis causes the Gaussian assumption to be unfulfilled. If the least squares method is nevertheless applied to such data, it produces a model that cannot represent most of the data. We therefore need a regression method that is robust against outliers. This paper compares the Minimum Covariance Determinant (MCD) method and the TELBS method on secondary data on the productivity of phytoplankton, which contain outliers. Based on the robust coefficient of determination, the MCD method produces a better model than the TELBS method.
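A generic sketch of MCD-based robust regression follows: flag multivariate outliers with the Minimum Covariance Determinant estimator, then fit on the clean subset. The synthetic data and the chi-square cutoff are illustrative assumptions, and this is a generic construction rather than the paper's exact estimator.

```python
# Sketch of MCD-based outlier trimming followed by least squares;
# data, cutoff, and settings are illustrative assumptions.
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 2))
y = X @ np.array([1.5, -2.0]) + rng.normal(scale=0.3, size=100)
y[:5] += 15.0                                  # inject gross outliers

# Flag outliers via squared Mahalanobis distances under the MCD fit.
Z = np.column_stack([X, y])
mcd = MinCovDet(random_state=0).fit(Z)
inlier = mcd.mahalanobis(Z) < chi2.ppf(0.975, df=Z.shape[1])

coef, *_ = np.linalg.lstsq(X[inlier], y[inlier], rcond=None)
print("robust coefficients:", coef)
```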
Robust Fault Detection and Isolation for Stochastic Systems
NASA Technical Reports Server (NTRS)
George, Jemin; Gregory, Irene M.
2010-01-01
This paper outlines the formulation of a robust fault detection and isolation scheme that can precisely detect and isolate simultaneous actuator and sensor faults for uncertain linear stochastic systems. The given robust fault detection scheme based on the discontinuous robust observer approach would be able to distinguish between model uncertainties and actuator failures and therefore eliminate the problem of false alarms. Since the proposed approach involves precise reconstruction of sensor faults, it can also be used for sensor fault identification and the reconstruction of true outputs from faulty sensor outputs. Simulation results presented here validate the effectiveness of the robust fault detection and isolation system.
Sparse distributed memory: understanding the speed and robustness of expert memory
Brogliato, Marcelo S.; Chada, Daniel M.; Linhares, Alexandre
2014-01-01
How can experts, sometimes in exacting detail, almost immediately and very precisely recall memory items from a vast repertoire? The problem of interest here concerns models of theoretical neuroscience that could explain the speed and robustness of an expert's recollection. The approach is based on Sparse Distributed Memory, which has been shown to be neuroscientifically and psychologically plausible in a number of ways. A crucial characteristic concerns the limits of human recollection, the "tip-of-the-tongue" memory event, which is found at a non-linearity in the model. We expand the theoretical framework, deriving an optimization formula to solve this non-linearity. Numerical results demonstrate how a higher frequency of rehearsal, through work or study, immediately increases the robustness and speed associated with expert memory. PMID:24808842
NASA Astrophysics Data System (ADS)
Zhang, Chuan; Wang, Xingyuan; Luo, Chao; Li, Junqiu; Wang, Chunpeng
2018-03-01
In this paper, we focus on the robust outer synchronization problem between two nonlinear complex networks with parametric disturbances and mixed time-varying delays. Firstly, a general complex network model is proposed. Besides the nonlinear couplings, the network model in this paper can possess parametric disturbances, internal time-varying delay, discrete time-varying delay and distributed time-varying delay. Then, based on the robust control strategy, linear matrix inequalities and Lyapunov stability theory, several outer synchronization protocols are rigorously derived. Simple linear matrix controllers are designed to drive the response network into synchrony with the drive network. Additionally, our results can be applied to complex networks without parametric disturbances. Finally, using the delayed Lorenz chaotic system as the dynamics of all nodes, simulation examples are given to demonstrate the effectiveness of our theoretical results.
Low SWaP multispectral sensors using dichroic filter arrays
NASA Astrophysics Data System (ADS)
Dougherty, John; Varghese, Ron
2015-06-01
The benefits of multispectral imaging are well established in a variety of applications including remote sensing, authentication, satellite and aerial surveillance, machine vision, biomedical, and other scientific and industrial uses. However, many of the potential solutions require more compact, robust, and cost-effective cameras to realize these benefits. The next generation of multispectral sensors and cameras needs to deliver improvements in size, weight, power, portability, and spectral band customization to support widespread deployment for a variety of purpose-built aerial, unmanned, and scientific applications. A novel implementation uses micro-patterning of dichroic filters into Bayer and custom mosaics, enabling true real-time multispectral imaging with simultaneous multi-band image acquisition. Consistent with color image processing, individual spectral channels are de-mosaiced, with each channel providing an image of the field of view. This approach can be implemented across a variety of wavelength ranges and on a variety of detector types including linear, area, silicon, and InGaAs. The dichroic filter array approach can also reduce payloads and increase range for unmanned systems, with the capability to support both handheld and autonomous systems. Recent examples and results of 4-band RGB + NIR dichroic filter arrays in multispectral cameras are discussed, and the benefits and tradeoffs of multispectral sensors using dichroic filter arrays are compared with alternative approaches, including their passivity, spectral range, customization options, and scalable production.
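De-mosaicing such a filter array reduces, at its simplest, to sampling each spectral channel at a fixed offset of the repeating mosaic. The sketch below assumes an illustrative 2x2 R/G over NIR/B layout; the actual mosaic of a given sensor may differ.

```python
# Sketch of de-mosaicing a 2x2 RGB+NIR dichroic filter array by slicing;
# the channel layout (R, G / NIR, B) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(5)
raw = rng.integers(0, 4096, size=(480, 640), dtype=np.uint16)  # raw sensor frame

channels = {
    "R":   raw[0::2, 0::2],
    "G":   raw[0::2, 1::2],
    "NIR": raw[1::2, 0::2],
    "B":   raw[1::2, 1::2],
}
# Each entry is a quarter-resolution image of the field of view; full-resolution
# planes would follow from interpolation, as in color demosaicing.
```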
NASA Astrophysics Data System (ADS)
Zhou, Q.; Tong, X.; Liu, S.; Lu, X.; Liu, S.; Chen, P.; Jin, Y.; Xie, H.
2017-07-01
Visual odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion using stereo images frame by frame. Feature point extraction and matching is one of the key steps of robotic motion estimation and largely influences its precision and robustness. In this work, we choose the Oriented FAST and Rotated BRIEF (ORB) features in consideration of both accuracy and speed. For greater robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on a Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images, and the Brute Force (BF) matcher is used to find correspondences between the two images for space intersection. The EDC and RANSAC algorithms are then applied to eliminate mismatches whose distances exceed a predefined threshold. Similarly, when the left image at the next time step is matched against the current left image, EDC and RANSAC are performed iteratively. Because exceptional mismatched points remain in some cases after these steps, RANSAC is applied a third time to eliminate the effects of those outliers on the estimation of the ego-motion parameters (interior and exterior orientation). The proposed approach has been tested on a real-world vehicle dataset, and the results demonstrate its high robustness.
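A hedged sketch of the matching-plus-rejection stage follows: ORB features, brute-force Hamming matching, a descriptor-distance cut standing in for the EDC check, then RANSAC on the epipolar geometry. File names and thresholds are assumptions.

```python
# Sketch of ORB matching with cascaded outlier rejection; file names and
# thresholds are illustrative assumptions.
import cv2
import numpy as np

left_t0 = cv2.imread("left_t0.png", cv2.IMREAD_GRAYSCALE)
left_t1 = cv2.imread("left_t1.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
k1, d1 = orb.detectAndCompute(left_t0, None)
k2, d2 = orb.detectAndCompute(left_t1, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
matches = [m for m in matches if m.distance < 60]   # crude distance cut

# RANSAC on the fundamental matrix removes geometrically inconsistent
# matches (assumes at least 8 matches survive the distance cut).
src = np.float32([k1[m.queryIdx].pt for m in matches])
dst = np.float32([k2[m.trainIdx].pt for m in matches])
F, inliers = cv2.findFundamentalMat(src, dst, cv2.FM_RANSAC, 1.0, 0.999)
```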
NASA Technical Reports Server (NTRS)
Burken, John J.
2005-01-01
This viewgraph presentation reviews the use of a robust servo linear quadratic regulator (LQR) and a radial basis function (RBF) neural network in reconfigurable flight control designs that adapt to an aircraft part failure. The method combines a robust LQR servomechanism design with model reference adaptive control and RBF neural networks. During the failure, the LQR servomechanism behaved well, and using the neural networks improved the tracking.
Real-time polarization imaging algorithm for camera-based polarization navigation sensors.
Lu, Hao; Zhao, Kaichun; You, Zheng; Huang, Kaoli
2017-04-10
Biologically inspired polarization navigation is a promising approach due to its autonomous nature, high precision, and robustness. Many researchers have built point-source-based and camera-based polarization navigation prototypes in recent years. Camera-based prototypes benefit from their high spatial resolution but incur a heavy computation load. The pattern recognition step in most polarization imaging algorithms involves several nonlinear calculations that impose a significant computation burden. In this paper, the polarization imaging and pattern recognition algorithms are reduced to several linear calculations by exploiting the orthogonality of the Stokes parameters, without loss of precision, based on the features of the solar meridian and the patterns of the polarized skylight. The algorithm contains a pattern recognition stage based on a Hough transform as well as orientation measurement algorithms. The algorithm was loaded and run on a digital signal processing system to test its computational complexity, and the running time decreased from several thousand milliseconds to several tens of milliseconds. Simulations and experiments showed that the algorithm can measure orientation without reducing precision, and it can hence satisfy the practical demands of low computational load and high precision for use in embedded systems.
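The linearity the abstract exploits can be seen in the standard four-angle Stokes formulation sketched below: given intensity images behind polarizers at 0°, 45°, 90°, and 135°, the Stokes parameters are linear combinations and the angle of polarization needs only one arctangent per pixel. Whether the paper uses exactly this four-angle layout is an assumption.

```python
# Sketch of linear Stokes-parameter computation from four polarizer angles;
# the random arrays stand in for the four intensity images.
import numpy as np

rng = np.random.default_rng(6)
I0, I45, I90, I135 = (rng.random((120, 160)) for _ in range(4))  # stand-ins

S0 = 0.5 * (I0 + I45 + I90 + I135)        # total intensity
S1 = I0 - I90                             # linear 0/90 component
S2 = I45 - I135                           # linear 45/135 component

aop = 0.5 * np.arctan2(S2, S1)            # angle of polarization per pixel
dolp = np.sqrt(S1**2 + S2**2) / np.maximum(S0, 1e-9)  # degree of linear polarization
```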
Chan, Rachel W; von Deuster, Constantin; Giese, Daniel; Stoeck, Christian T; Harmer, Jack; Aitken, Andrew P; Atkinson, David; Kozerke, Sebastian
2014-07-01
Diffusion tensor imaging (DTI) of moving organs is gaining increasing attention, but robust performance requires sequence modifications and dedicated correction methods to account for system imperfections. In this study, eddy currents in the "unipolar" Stejskal-Tanner and the velocity-compensated "bipolar" spin-echo diffusion sequences were investigated and corrected for using a magnetic field monitoring approach in combination with higher-order image reconstruction. From the field-camera measurements, increased levels of second-order eddy currents were quantified in the unipolar sequence relative to the bipolar diffusion sequence, while the zeroth and linear orders were found to be similar between the two sequences. Second-order image reconstruction based on field-monitoring data resulted in reduced spatial misalignment artifacts, with residual displacements of less than 0.43 mm and 0.29 mm (in the unipolar and bipolar sequences, respectively) after second-order eddy-current correction. The results demonstrate the need for second-order correction in unipolar encoding schemes, but also show that bipolar sequences benefit from second-order reconstruction to correct for incomplete intrinsic cancellation of eddy currents.
Robust image features: concentric contrasting circles and their image extraction
NASA Astrophysics Data System (ADS)
Gatrell, Lance B.; Hoff, William A.; Sklair, Cheryl W.
1992-03-01
Many computer vision tasks can be simplified if special image features are placed on the objects to be recognized. A review of special image features that have been used in the past is given, and a new image feature, the concentric contrasting circle, is presented. The concentric contrasting circle image feature has the advantages of being easily manufactured, easily and robustly extracted from the image (true targets are found while few false targets are detected), and passive, and its centroid is completely invariant to the three translational and one rotational degrees of freedom and nearly invariant to the remaining two rotational degrees of freedom. There are several examples of existing parallel implementations which perform most of the extraction work. Extraction robustness was measured by recording the probability of correct detection and the false alarm rate in a set of images of scenes containing mockups of satellites, fluid couplings, and electrical components. A typical application of concentric contrasting circle features is to place them on modeled objects for monocular pose estimation or object identification. This feature is demonstrated on a visually challenging background of a specular but wrinkled surface similar to a multilayered-insulation spacecraft thermal blanket.
Robust image registration for multiple exposure high dynamic range image synthesis
NASA Astrophysics Data System (ADS)
Yao, Susu
2011-03-01
Image registration is an important preprocessing step in high dynamic range (HDR) image synthesis. This paper proposes a robust image registration method for aligning a group of low dynamic range (LDR) images captured with different exposure times. Illumination change and photometric distortion between two images can result in inaccurate registration. We propose to transform intensity image data into phase congruency, to eliminate the effect of changes in image brightness, and to use phase cross-correlation in the Fourier transform domain to perform registration. Considering the presence of non-overlapping regions due to photometric distortion, evolutionary programming is applied to search for the accurate translation parameters, so that registration accuracy at the level of a hundredth of a pixel can be achieved. The proposed algorithm works well for under- and over-exposed image registration and has been applied to align LDR images for synthesizing high-quality HDR images.
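The Fourier-domain core of such a method is phase correlation: the normalized cross-power spectrum of two images peaks at the translation between them. The minimal sketch below shows integer-pixel phase correlation only; the phase-congruency transform and the evolutionary sub-pixel search described above are omitted.

```python
# Minimal sketch of translation estimation by phase correlation.
import numpy as np

def phase_correlate(a, b):
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.maximum(np.abs(cross), 1e-12)      # keep phase information only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts.
    dy = dy - a.shape[0] if dy > a.shape[0] // 2 else dy
    dx = dx - a.shape[1] if dx > a.shape[1] // 2 else dx
    return dy, dx

img = np.random.default_rng(7).random((128, 128))
shifted = np.roll(img, (5, -3), axis=(0, 1))
print(phase_correlate(shifted, img))   # approximately (5, -3)
```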
A non-linear regression method for CT brain perfusion analysis
NASA Astrophysics Data System (ADS)
Bennink, E.; Oosterbroek, J.; Viergever, M. A.; Velthuis, B. K.; de Jong, H. W. A. M.
2015-03-01
CT perfusion (CTP) imaging allows for rapid diagnosis of ischemic stroke. Generation of perfusion maps from CTP data usually involves deconvolution algorithms providing estimates of the impulse response function in the tissue. We propose the use of a fast non-linear regression (NLR) method that we postulate has performance similar to the current academic state-of-the-art method (bSVD), but with some important advantages, including the estimation of vascular permeability, improved robustness to tracer delay, and very few tuning parameters, all of which are important in stroke assessment. The aim of this study is to evaluate the fast NLR method against bSVD and a commercial clinical state-of-the-art method. The three methods were tested against a published digital perfusion phantom previously used to illustrate the superiority of bSVD. In addition, the NLR and clinical methods were also tested against bSVD on 20 clinical scans. Pearson correlation coefficients were calculated for each of the tested methods. All three methods showed high correlation coefficients (>0.9) with the ground truth in the phantom. With respect to the clinical scans, the NLR perfusion maps showed higher correlation with bSVD than the perfusion maps from the clinical method. Furthermore, the perfusion maps showed that the fast NLR estimates are robust to tracer delay. In conclusion, the proposed fast NLR method provides a simple and flexible way of estimating perfusion parameters from CT perfusion scans, with high correlation coefficients. This suggests that it could be a better alternative to the current clinical and academic state-of-the-art methods.
Yokoo, Takeshi; Serai, Suraj D; Pirasteh, Ali; Bashir, Mustafa R; Hamilton, Gavin; Hernando, Diego; Hu, Houchun H; Hetterich, Holger; Kühn, Jens-Peter; Kukuk, Guido M; Loomba, Rohit; Middleton, Michael S; Obuchowski, Nancy A; Song, Ji Soo; Tang, An; Wu, Xinhuai; Reeder, Scott B; Sirlin, Claude B
2018-02-01
Purpose To determine the linearity, bias, and precision of hepatic proton density fat fraction (PDFF) measurements by using magnetic resonance (MR) imaging across different field strengths, imager manufacturers, and reconstruction methods. Materials and Methods This meta-analysis was performed in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. A systematic literature search identified studies that evaluated the linearity and/or bias of hepatic PDFF measurements by using MR imaging (hereafter, MR imaging-PDFF) against PDFF measurements by using colocalized MR spectroscopy (hereafter, MR spectroscopy-PDFF) or the precision of MR imaging-PDFF. The quality of each study was evaluated by using the Quality Assessment of Studies of Diagnostic Accuracy 2 tool. De-identified original data sets from the selected studies were pooled. Linearity was evaluated by using linear regression between MR imaging-PDFF and MR spectroscopy-PDFF measurements. Bias, defined as the mean difference between MR imaging-PDFF and MR spectroscopy-PDFF measurements, was evaluated by using Bland-Altman analysis. Precision, defined as the agreement between repeated MR imaging-PDFF measurements, was evaluated by using a linear mixed-effects model, with field strength, imager manufacturer, reconstruction method, and region of interest as random effects. Results Twenty-three studies (1679 participants) were selected for linearity and bias analyses and 11 studies (425 participants) were selected for precision analyses. MR imaging-PDFF was linear with MR spectroscopy-PDFF (R² = 0.96). Regression slope (0.97; P < .001) and mean Bland-Altman bias (-0.13%; 95% limits of agreement: -3.95%, 3.40%) indicated minimal underestimation by using MR imaging-PDFF. MR imaging-PDFF was precise at the region-of-interest level, with repeatability and reproducibility coefficients of 2.99% and 4.12%, respectively. Field strength, imager manufacturer, and reconstruction method each had minimal effects on reproducibility. Conclusion MR imaging-PDFF has excellent linearity, bias, and precision across different field strengths, imager manufacturers, and reconstruction methods. © RSNA, 2017 Online supplemental material is available for this article. An earlier incorrect version of this article appeared online. This article was corrected on October 2, 2017.
An in vivo MRI Template Set for Morphometry, Tissue Segmentation, and fMRI Localization in Rats
Valdés-Hernández, Pedro Antonio; Sumiyoshi, Akira; Nonaka, Hiroi; Haga, Risa; Aubert-Vásquez, Eduardo; Ogawa, Takeshi; Iturria-Medina, Yasser; Riera, Jorge J.; Kawashima, Ryuta
2011-01-01
Over the last decade, several papers have focused on the construction of highly detailed mouse high-field magnetic resonance imaging (MRI) templates via non-linear registration to unbiased reference spaces, allowing for a variety of neuroimaging applications such as robust morphometric analyses. However, work in rats has only provided medium-field MRI averages based on linear registration to biased spaces, with the sole purpose of approximate functional MRI (fMRI) localization. This precludes any morphometric analysis, in spite of the need to explore in detail the neuroanatomical substrates of diseases given the recent advent of rat models. In this paper we present a new in vivo rat T2 MRI template set, comprising average images of both intensity and shape, obtained via non-linear registration. Also, unlike previous rat template sets, we include white and gray matter probabilistic segmentations, expanding its use to those applications demanding prior-based tissue segmentation, e.g., statistical parametric mapping (SPM) voxel-based morphometry. We also provide a preliminary digitalization of the latest Paxinos and Watson atlas for anatomical and functional interpretations within the cerebral cortex. We confirmed that, as with previous templates, forepaw and hindpaw fMRI activations can be correctly localized in the expected atlas structure. To exemplify the use of our new MRI template set, we report the volumes of brain tissues and cortical structures and probe their relationships with ontogenetic development. Other in vivo applications in the near future could include tensor-, deformation-, or voxel-based morphometry, morphological connectivity, and diffusion tensor-based anatomical connectivity. Our template set, freely available through the SPM extension website, could be an important tool for future longitudinal and/or functional extensive preclinical studies. PMID:22275894
Space-time encoding for high frame rate ultrasound imaging.
Misaridis, Thanassis X; Jensen, Jørgen A
2002-05-01
Frame rate in ultrasound imaging can be dramatically increased by using sparse synthetic transmit aperture (STA) beamforming techniques. The two main drawbacks of the method are the low signal-to-noise ratio (SNR) and the motion artifacts, which degrade image quality. In this paper we propose a spatio-temporal encoding for STA imaging based on simultaneous transmission of two quasi-orthogonal tapered linear FM signals. The excitation signals are an up-chirp and a down-chirp with frequency division and a cross-talk of -55 dB. The received signals are first cross-correlated with the appropriate code, then spatially decoded and finally beamformed for each code, yielding two images per emission. The spatial encoding is a Hadamard encoding previously suggested by Chiao et al. [in: Proceedings of the IEEE Ultrasonics Symposium, 1997, p. 1679]. The Hadamard matrix has half the size of the transmit element groups, owing to the orthogonality of the temporally encoded wavefronts. Thus, with this method, the frame rate is doubled compared to previous systems. Another advantage is the use of temporal codes, which are more robust to attenuation. With the proposed technique it is possible to obtain images dynamically focused in both transmit and receive with only two firings, which reduces the problem of motion artifacts. The method has been tested with extensive simulations using Field II. Resolution and SNR are compared with uncoded STA imaging and conventional phased-array imaging. The range resolution remains the same for coded STA imaging with four emissions and is slightly degraded for STA imaging with two emissions due to the -55 dB cross-talk between the signals. The additional proposed temporal encoding adds more than 15 dB to the SNR gain, yielding an SNR of the same order as in phased-array imaging.
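The spatial part of the scheme can be illustrated with a plain Hadamard encode/decode round trip, sketched below. Note this shows full-size Hadamard encoding of element groups only; in the paper the matrix is half the size because the two temporal codes carry the other factor of two, and the sizes here are assumptions.

```python
# Sketch of Hadamard spatial encoding/decoding of transmit element groups;
# group count and sample count are illustrative assumptions.
import numpy as np
from scipy.linalg import hadamard

n_groups, n_samples = 4, 256
H = hadamard(n_groups).astype(float)       # +/-1 firing weights per emission

rng = np.random.default_rng(8)
group_resp = rng.normal(size=(n_groups, n_samples))   # per-group echoes (unknown)

received = H @ group_resp                  # one encoded receive trace per firing
decoded = np.linalg.inv(H) @ received      # spatial decoding recovers the groups
assert np.allclose(decoded, group_resp)
```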
NASA Astrophysics Data System (ADS)
Zahari, Siti Meriam; Ramli, Norazan Mohamed; Moktar, Balkiah; Zainol, Mohammad Said
2014-09-01
In the presence of multicollinearity and multiple outliers, statistical inference of the linear regression model using ordinary least squares (OLS) estimators is severely affected and produces misleading results. To overcome this, many approaches have been investigated. These include robust methods, which were reported to be less sensitive to the presence of outliers. In addition, the ridge regression technique has been employed to tackle the multicollinearity problem. In order to mitigate both problems, a combination of ridge regression and robust methods is discussed in this study. The superiority of this approach is examined when multicollinearity and multiple outliers occur simultaneously in multiple linear regression. This study compares the performance of several well-known estimators in such a situation: M, MM, RIDGE, and the robust ridge regression estimators, namely the Weighted Ridge M-estimator (WRM), Weighted Ridge MM (WRMM) and Ridge MM (RMM). Results of the study show that in the presence of simultaneous multicollinearity and multiple outliers (in both the x- and y-directions), the RMM and RIDGE are more or less similar in terms of superiority over the other estimators, regardless of the number of observations, the level of collinearity and the percentage of outliers used. However, when outliers occur in only a single direction (the y-direction), the WRMM estimator is the most superior among the robust ridge regression estimators, producing the least variance. In conclusion, robust ridge regression is the best alternative to robust and conventional least squares estimators when dealing with the simultaneous presence of multicollinearity and outliers.
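To make the combination concrete, the sketch below assembles a generic robust ridge estimator: Huber M-estimation solved by iteratively reweighted least squares (IRLS) with a ridge penalty added to the normal equations. It is an illustrative construction with assumed defaults (Huber constant c = 1.345, MAD scale estimate), not the exact WRM, WRMM or RMM estimators examined in the study.

```python
import numpy as np

def huber_weights(r, c=1.345):
    """Huber weights psi(r)/r; c = 1.345 gives ~95% efficiency at the normal."""
    a = np.abs(r)
    w = np.ones_like(a)
    w[a > c] = c / a[a > c]
    return w

def robust_ridge(X, y, lam=1.0, n_iter=50):
    """Ridge M-estimator via iteratively reweighted least squares (IRLS)."""
    p = X.shape[1]
    beta = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)   # ridge start
    for _ in range(n_iter):
        r = y - X @ beta
        s = 1.4826 * np.median(np.abs(r - np.median(r)))         # robust scale
        w = huber_weights(r / max(s, 1e-12))
        Xw = X * w[:, None]
        beta = np.linalg.solve(Xw.T @ X + lam * np.eye(p), Xw.T @ y)
    return beta
```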
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fusella, M; Loi, G; Fiandra, C
Purpose: To investigate the accuracy and robustness, against image noise and artifacts (typical of CBCT images), of a commercial algorithm for deformable image registration (DIR) used to propagate regions of interest (ROIs) in computational phantoms based on real prostate patient images. Methods: The Anaconda DIR algorithm, implemented in RayStation, was tested. Two specific Deformation Vector Fields (DVFs) were applied to the reference data set (CTref) using the ImSimQA software, obtaining two deformed CTs. For each dataset, twenty-four different levels of noise and/or capping artifacts were applied to simulate CBCT images. DIR was performed between CTref and each deformed CT and CBCT. In order to investigate the relationship between image quality parameters and the DIR results (expressed by a logit transform of the Dice index), a bilinear regression was defined. Results: More than 550 DIR-mapped ROIs were analyzed. Statistical analysis showed that deformation strength and artifacts were significant prognostic factors of DIR performance, while noise appeared to play a minor role in the DIR process as implemented in RayStation, as expected from the image similarity metric built into the registration algorithm. Capping artifacts played a determinant role in the accuracy of DIR results. Two optimal values for capping artifacts were found to obtain acceptable DIR results (Dice > 0.75/0.85). Various clinical CBCT acquisition protocols were reported to evaluate the significance of the study. Conclusion: This work illustrates the impact of image quality on DIR performance. Clinical issues like Adaptive Radiation Therapy (ART) and dose accumulation need accurate and robust DIR software. The RayStation DIR algorithm proved robust against noise, but sensitive to image artifacts. This result highlights the need for quality assurance of robustness against image noise and artifacts in the commissioning of a commercial DIR system, and underlines the importance of adopting optimized protocols for CBCT image acquisition in ART clinical implementation.
Cook, James P; Mahajan, Anubha; Morris, Andrew P
2017-02-01
Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
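Scheme (i), effective-sample-size weighting of Z-scores, is commonly implemented as in the sketch below; the effective-N formula and the weighted-Z combination follow the convention popularized by meta-analysis tools such as METAL, and the study counts shown are hypothetical.

```python
import numpy as np

def effective_n(n_cases, n_controls):
    """Effective sample size of a case-control study under imbalance."""
    return 4.0 / (1.0 / n_cases + 1.0 / n_controls)

def meta_z(z, n_cases, n_controls):
    """Sample-size-weighted Z-score meta-analysis across studies."""
    w = np.sqrt([effective_n(a, u) for a, u in zip(n_cases, n_controls)])
    return np.sum(w * np.asarray(z)) / np.sqrt(np.sum(w ** 2))

# three hypothetical studies with varying case-control imbalance
print(meta_z([2.1, 1.4, 3.0], [500, 1200, 300], [5000, 1300, 9000]))
```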
NASA Astrophysics Data System (ADS)
Bruynooghe, Michel M.
1998-04-01
In this paper, we present a robust method for automatic object detection and delineation in noisy complex images. The proposed procedure is a three-stage process that integrates image segmentation by multidimensional pixel clustering and geometrically constrained optimization of deformable contours. The first step is to enhance the original image by nonlinear unsharp masking. The second step is to segment the enhanced image by multidimensional pixel clustering, using our reducible neighborhoods clustering algorithm, which has a favorable theoretical worst-case complexity. Then, candidate objects are extracted and initially delineated by an optimized region merging algorithm, which is based on ascendant hierarchical clustering with contiguity constraints and on the maximization of average contour gradients. The third step is to optimize the delineation of the previously extracted and initially delineated objects. Deformable object contours have been modeled by cubic splines. An affine invariant has been used to control the undesired formation of cusps and loops. Nonlinear constrained optimization has been used to maximize the external energy. This avoids the difficult and non-reproducible choice of regularization parameters required by classical snake models. The proposed method has been applied successfully to the detection of fine and subtle microcalcifications in X-ray mammographic images, to defect detection by moire image analysis, and to the analysis of microrugosities of thin metallic films. A future implementation of the proposed method on a digital signal processor paired with a vector coprocessor would allow the design of a real-time object detection and delineation system for applications in medical imaging and in industrial computer vision.
Robust reflective pupil slicing technology
NASA Astrophysics Data System (ADS)
Meade, Jeffrey T.; Behr, Bradford B.; Cenko, Andrew T.; Hajian, Arsen R.
2014-07-01
Tornado Spectral Systems (TSS) has developed the High Throughput Virtual Slit (HTVS™), a robust all-reflective pupil slicing technology capable of replacing the slit in research-, commercial- and MIL-SPEC-grade spectrometer systems. In the simplest configuration, the HTVS allows optical designers to remove the lossy slit from point-source spectrometers and widen the input slit of long-slit spectrometers, greatly increasing throughput without loss of spectral resolution or cross-dispersion information. The HTVS works by transferring etendue between image-plane axes while operating in the pupil domain rather than at a focal plane. While useful for other technologies, this is especially relevant for spectroscopic applications, as it performs the same spectral narrowing as a slit without throwing away light at the slit aperture. HTVS can be implemented in all-reflective designs and only requires a small number of reflections for significant spectral resolution enhancement, so HTVS systems can be efficiently implemented in most wavelength regions. The etendue-shifting operation also provides smooth scaling with input spot/image size without requiring reconfiguration for different targets (such as different seeing disk diameters or different fiber core sizes). Like most slicing technologies, HTVS provides throughput increases of several times without resolution loss over equivalent slit-based designs. HTVS technology enables robust slit replacement in point-source spectrometer systems. By virtue of pupil-space operation, this technology has several advantages over comparable image-space slicer technology, including the ability to adapt gracefully and linearly to changing source size and better vertical packing of the flux distribution. Additionally, this technology can be implemented with large slicing factors in both fast and slow beams and can easily scale from large, room-sized spectrometers through to small, telescope-mounted devices. Finally, this same technology is directly applicable to multi-fiber spectrometers to achieve similar enhancement. HTVS also provides the ability to anamorphically "stretch" the slit image in long-slit spectrometers, allowing the instrument designer to optimize the plate scale in the dispersion and cross-dispersion axes independently without sacrificing spatial information. This allows users to widen the input slit, with the associated gain in throughput and loss of spatial selectivity, while maintaining the spectral resolution of the spectrometer system. This "stretching" places increased requirements on detector focal plane height, as with image slicing techniques, but provides additional degrees of freedom for instrument designers to build the best possible spectrometer systems. We discuss the details of this technology in an astronomical context, covering its applicability from small telescope-mounted spectrometers through long-slit imagers and radial-velocity engines. This powerful tool provides additional degrees of freedom when designing a spectrometer, enabling instrument designers to further optimize systems for the required scientific goals.
NASA Astrophysics Data System (ADS)
Fu, Z.; Qin, Q.; Wu, C.; Chang, Y.; Luo, B.
2017-09-01
Due to the differences in imaging principles, image matching between visible and thermal infrared images still presents challenges and difficulties. Inspired by the complementary spatial and frequency information of geometric structural features, a robust descriptor is proposed for matching visible and thermal infrared images. We first divide the region around a point of interest into two spatial regions, using the histogram of oriented magnitudes, which captures 2-D structural shape information, to describe the larger region and the edge-oriented histogram to describe the spatial distribution within the smaller region. Then the two vectors are normalized and combined into a higher-dimensional feature vector. Finally, our proposed descriptor is obtained by applying principal component analysis (PCA) to reduce the dimension of the combined feature vector, making the descriptor more robust. Experimental results showed that our proposed method provides significant improvements in the number of correct matches, with clear advantages from complementing spatial and frequency structural information.
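The final PCA step is a standard dimensionality reduction; a minimal sketch is given below, with the descriptor dimension (128) and the number of retained components (32) as placeholder assumptions, since the abstract does not fix them.

```python
import numpy as np

def pca_reduce(F, k):
    """Project row-wise feature vectors F (n x d) onto the top-k principal axes."""
    Fc = F - F.mean(axis=0)
    _, _, Vt = np.linalg.svd(Fc, full_matrices=False)   # rows of Vt = axes
    return Fc @ Vt[:k].T

# combined, L2-normalized HOM + EOH descriptors (hypothetical dimensions)
F = np.random.rand(200, 128)
F /= np.linalg.norm(F, axis=1, keepdims=True)
F_low = pca_reduce(F, k=32)
```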
High-Throughput Histopathological Image Analysis via Robust Cell Segmentation and Hashing
Zhang, Xiaofan; Xing, Fuyong; Su, Hai; Yang, Lin; Zhang, Shaoting
2015-01-01
Computer-aided diagnosis of histopathological images usually requires examining all cells for accurate diagnosis. Traditional computational methods may have efficiency issues when performing cell-level analysis. In this paper, we propose a robust and scalable solution to enable such analysis in a real-time fashion. Specifically, a robust segmentation method is developed to delineate cells accurately using Gaussian-based hierarchical voting and a repulsive balloon model. A large-scale image retrieval approach is also designed to examine and classify each cell of a testing image by comparing it with a massive database, e.g., half a million cells extracted from the training dataset. We evaluate this proposed framework on a challenging and important clinical use case, i.e., differentiation of two types of lung cancers (adenocarcinoma and squamous carcinoma), using thousands of lung microscopic tissue images extracted from hundreds of patients. Our method has achieved promising accuracy and running time by searching among half a million cells. PMID:26599156
Chen, Bor-Sen; Hsu, Chih-Yuan
2012-10-26
Background: Collective rhythms of gene regulatory networks have been a subject of considerable interest for biologists and theoreticians, in particular the synchronization of dynamic cells mediated by intercellular communication. Synchronization of a population of synthetic genetic oscillators is an important design goal in practical applications, because such a population distributed over different host cells needs to exploit molecular phenomena simultaneously in order to give rise to a collective biological phenomenon. However, this synchronization may be corrupted by intrinsic kinetic parameter fluctuations and extrinsic environmental molecular noise. Therefore, robust synchronization is an important design topic for nonlinear stochastic coupled synthetic genetic oscillators with intrinsic kinetic parameter fluctuations and extrinsic molecular noise. Results: Initially, the condition for robust synchronization of synthetic genetic oscillators was derived based on a Hamilton-Jacobi inequality (HJI). We found that if the synchronization robustness confers enough intrinsic robustness to tolerate intrinsic parameter fluctuation and enough extrinsic robustness to filter the environmental noise, then robust synchronization of coupled synthetic genetic oscillators is guaranteed. If the synchronization robustness of a population of nonlinear stochastic coupled synthetic genetic oscillators distributed over different host cells cannot be maintained, then robust synchronization can be enhanced by external control input through quorum sensing molecules. In order to simplify the analysis and design of robust synchronization of nonlinear stochastic synthetic genetic oscillators, the fuzzy interpolation method was employed to interpolate several local linear stochastic coupled systems to approximate the nonlinear stochastic coupled system, so that the HJI-based synchronization design problem could be replaced by a simple linear matrix inequality (LMI)-based design problem, which can be solved easily with the help of the LMI toolbox in MATLAB. Conclusion: If the synchronization robustness criterion is satisfied, i.e., synchronization robustness ≥ intrinsic robustness + extrinsic robustness, then the stochastic coupled synthetic oscillators can be robustly synchronized in spite of intrinsic parameter fluctuation and extrinsic noise. If the synchronization robustness criterion is violated, an external control scheme based on adding an inducer can be designed to improve the synchronization robustness of coupled synthetic genetic oscillators. The investigated robust synchronization criteria and proposed external control method are useful for a population of coupled synthetic networks with emergent synchronization behavior, especially for multi-cellular, engineered networks. PMID:23101662
Kargar, Soudabeh; Borisch, Eric A; Froemming, Adam T; Kawashima, Akira; Mynderse, Lance A; Stinson, Eric G; Trzasko, Joshua D; Riederer, Stephen J
2018-05-01
To describe an efficient numerical optimization technique using non-linear least squares to estimate perfusion parameters for the Tofts and extended Tofts models from dynamic contrast enhanced (DCE) MRI data, and to apply the technique to prostate cancer. Parameters were estimated by fitting the two Tofts-based perfusion models to the acquired data via non-linear least squares. We apply Variable Projection (VP) to convert the fitting problem from a multi-dimensional to a one-dimensional line search, improving computational efficiency and robustness. Using simulation and DCE-MRI studies in twenty patients with suspected prostate cancer, the VP-based solver was compared against the traditional Levenberg-Marquardt (LM) strategy for accuracy, noise amplification, convergence reliability, and computation time. The simulation demonstrated that VP and LM were both accurate, in that the medians closely matched assumed values across typical signal-to-noise ratio (SNR) levels for both Tofts models. VP and LM showed similar noise sensitivity. Studies using the patient data showed that the VP method reliably converged and matched results from LM with approximately 3× and 2× reductions in computation time for the standard (two-parameter) and extended (three-parameter) Tofts models. While LM failed to converge in 14% of the patient data, VP converged in the ideal 100%. The VP-based method for non-linear least squares estimation of perfusion parameters for prostate MRI is equivalent to the LM-based method in accuracy and robustness to noise, while being more reliably (100%) convergent and computationally about 3× (TM) and 2× (ETM) faster. Copyright © 2017 Elsevier Inc. All rights reserved.
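The essence of VP for the standard Tofts model can be sketched as follows: for any fixed kep, the model Ct(t) = Ktrans * (Cp convolved with exp(-kep*t)) is linear in Ktrans, so the inner fit has a closed form and the outer problem collapses to a bounded 1-D search. This is a minimal illustration under uniform time sampling and assumed kep bounds, not the authors' solver.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def tofts_vp_fit(t, cp, ct):
    """Standard Tofts fit via variable projection (1-D search in kep)."""
    dt = t[1] - t[0]

    def basis(kep):
        # discrete convolution of the AIF with the exponential residue
        return np.convolve(cp, np.exp(-kep * t))[: t.size] * dt

    def residual_norm(kep):
        b = basis(kep)
        ktrans = (b @ ct) / (b @ b)        # closed-form inner linear LSQ
        return np.sum((ct - ktrans * b) ** 2)

    res = minimize_scalar(residual_norm, bounds=(1e-4, 10.0), method="bounded")
    b = basis(res.x)
    return (b @ ct) / (b @ b), res.x       # Ktrans, kep
```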
Robust and Effective Component-based Banknote Recognition for the Blind
Hasanuzzaman, Faiz M.; Yang, Xiaodong; Tian, YingLi
2012-01-01
We develop a novel camera-based computer vision technology to automatically recognize banknotes to assist visually impaired people. Our banknote recognition system is robust and effective with the following features: 1) high accuracy: a high true recognition rate and a low false recognition rate; 2) robustness: handles a variety of currency designs and bills in various conditions; 3) high efficiency: recognizes banknotes quickly; and 4) ease of use: helps blind users aim the target for image capture. To make the system robust to a variety of conditions including occlusion, rotation, scaling, cluttered background, illumination change, viewpoint variation, and worn or wrinkled bills, we propose a component-based framework using Speeded Up Robust Features (SURF). Furthermore, we employ the spatial relationship of matched SURF features to detect whether there is a bill in the camera view. This process largely alleviates false recognition and can guide the user to correctly aim at the bill to be recognized. The robustness and generalizability of the proposed system are evaluated on a dataset including both positive images (with U.S. banknotes) and negative images (no U.S. banknotes) collected under a variety of conditions. The proposed algorithm achieves a 100% true recognition rate and a 0% false recognition rate. Our banknote recognition system has also been tested by blind users. PMID:22661884
Tian, Huawei; Zhao, Yao; Ni, Rongrong; Cao, Gang
2009-11-23
In a feature-based geometrically robust watermarking system, it is a challenging task to detect geometric-invariant regions (GIRs) that can survive a broad range of image processing operations. Instead of the commonly used Harris detector or Mexican hat wavelet method, a more robust corner detector named multi-scale curvature product (MSCP) is adopted to extract salient features in this paper. Based on such features, disk-like GIRs are found in three steps. First, robust edge contours are extracted. Then, MSCP is utilized to detect the centers of the GIRs. Third, characteristic scale selection is performed to calculate the radius of each GIR. A novel sector-shaped partitioning method for the GIRs is designed, which can divide a GIR into several sector discs with the help of the most important corner (MIC). The watermark message is then embedded bit by bit in each sector by using Quantization Index Modulation (QIM). The GIRs and the divided sector discs are invariant to geometric transforms, so the watermarking method inherently has high robustness against geometric attacks. Experimental results show that the scheme has better robustness against various image processing operations, including common processing attacks, affine transforms, cropping, and random bending attack (RBA), than previous approaches.
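The QIM embedding step can be illustrated with plain dither modulation: each watermark bit selects one of two interleaved quantization lattices for a host coefficient, and decoding picks the nearer lattice. The quantization step delta below is an assumed placeholder; a real system would tune it against the perceptual budget.

```python
import numpy as np

def qim_embed(x, bit, delta=8.0):
    """Quantize coefficient x onto the lattice selected by the bit."""
    d = 0.0 if bit == 0 else delta / 2.0
    return delta * np.round((x - d) / delta) + d

def qim_decode(y, delta=8.0):
    """Decode by choosing the closer of the two dithered lattices."""
    e0 = abs(y - qim_embed(y, 0, delta))
    e1 = abs(y - qim_embed(y, 1, delta))
    return 0 if e0 <= e1 else 1

coeff = 37.3                                # hypothetical host coefficient
print(qim_decode(qim_embed(coeff, 1)))      # -> 1
```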
Song, Jung-Hwan; Lee, Kee-Woong; Lee, Woo-Kyung; Jung, Chul-Ho
2017-01-01
A high resolution inverse synthetic aperture radar (ISAR) technique is presented using modified Doppler history based motion compensation. To this end, a novel wideband ISAR system is developed that accommodates parametric processing over extended aperture length. The proposed method is derived from an ISAR-to-SAR approach that makes use of high resolution spotlight SAR and sub-aperture recombination. It is dedicated to wide aperture ISAR imaging and exhibits robust performance against unstable targets having non-linear motions. We demonstrate that the Doppler histories of the full aperture ISAR echoes from disturbed targets are efficiently retrieved with good fitting models. Experiments have been conducted on real aircraft targets and the feasibility of the full aperture ISAR processing is verified through the acquisition of high resolution ISAR imagery. PMID:28555036
Kovalenko, Lyudmyla Y; Chaumon, Maximilien; Busch, Niko A
2012-07-01
Semantic processing of verbal and visual stimuli has been investigated in semantic violation or semantic priming paradigms, in which a stimulus is either related or unrelated to a previously established semantic context. A hallmark of semantic priming is the N400 event-related potential (ERP), a deflection of the ERP that is more negative for semantically unrelated target stimuli. The majority of studies investigating the N400 and semantic integration have used verbal material (words or sentences), and standardized stimulus sets with norms for semantic relatedness have been published for verbal but not for visual material. However, semantic processing of visual objects (as opposed to words) is an important issue in research on visual cognition. In this study, we present a set of 800 pairs of semantically related and unrelated visual objects. The images were rated for semantic relatedness by a sample of 132 participants. Furthermore, we analyzed low-level image properties and matched the two semantic categories according to these features. An ERP study confirmed the suitability of this image set for evoking a robust N400 effect of semantic integration. Additionally, using a general linear modeling approach on single-trial data, we also demonstrate that low-level visual image properties and semantic relatedness are in fact only minimally overlapping. The image set is available for download from the authors' website. We expect that the image set will facilitate studies investigating mechanisms of semantic and contextual processing of visual stimuli.
Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion
Zhao, Yuanshen; Gong, Liang; Huang, Yixiang; Liu, Chengliang
2016-01-01
Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot due to the various disturbances present in the background of the image. The bottleneck to robust fruit recognition is reducing the influence of two main disturbances: illumination and overlapping. In order to recognize tomatoes in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. Firstly, two novel feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Secondly, wavelet transformation was adopted to fuse the two feature images at the pixel level, combining the feature information of the two source images. Thirdly, in order to segment the target tomato from the background, an adaptive threshold algorithm was used to obtain the optimal threshold. The final segmentation result was processed by a morphology operation to remove a small amount of noise. In the detection tests, 93% of target tomatoes were recognized out of 200 overall samples. This indicates that the proposed tomato recognition method is suitable for low-cost robotic tomato harvesting in uncontrolled environments. PMID:26840313
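A sketch of the feature extraction and pixel-level wavelet fusion, assuming OpenCV and PyWavelets, is shown below. The wavelet family ('db2'), the averaged-approximation/max-magnitude-detail fusion rule, and the Otsu threshold are common conventions standing in for the paper's exact choices, and the input filename is hypothetical.

```python
import cv2
import numpy as np
import pywt

bgr = cv2.imread("tomato.jpg")                         # hypothetical input

# a* component of CIE L*a*b* (the red-green axis highlights ripe fruit)
a_star = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)[:, :, 1].astype(np.float32)

# I component of YIQ, computed from RGB (OpenCV has no YIQ conversion)
r, g, b = [bgr[:, :, c].astype(np.float32) for c in (2, 1, 0)]
i_comp = 0.596 * r - 0.274 * g - 0.322 * b

# Pixel-level wavelet fusion: average the approximations, keep the
# larger-magnitude detail coefficient from either source image
cA1, d1 = pywt.dwt2(a_star, "db2")
cA2, d2 = pywt.dwt2(i_comp, "db2")
cA = (cA1 + cA2) / 2.0
det = tuple(np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(d1, d2))
fused = pywt.idwt2((cA, det), "db2")

# Adaptive (Otsu) threshold to separate fruit from background
fused8 = cv2.normalize(fused, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
_, mask = cv2.threshold(fused8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```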
Robust algebraic image enhancement for intelligent control systems
NASA Technical Reports Server (NTRS)
Lerner, Bao-Ting; Morrelli, Michael
1993-01-01
Robust vision capability for intelligent control systems has been an elusive goal in image processing. The computationally intensive techniques necessary for conventional image processing make real-time applications, such as object tracking and collision avoidance, difficult. In order to endow an intelligent control system with the needed vision robustness, an adequate image enhancement subsystem capable of compensating for the wide variety of real-world degradations must exist between the image capturing and the object recognition subsystems. This enhancement stage must be adaptive and must operate with consistency in the presence of both statistical and shape-based noise. To deal with this problem, we have developed an innovative algebraic approach which provides a sound mathematical framework for image representation and manipulation. Our image model provides a natural platform from which to pursue dynamic scene analysis, and its incorporation into a vision system would serve as the front end to an intelligent control system. We have developed a unique polynomial representation of gray-level imagery and applied this representation to develop polynomial operators on complex gray-level scenes. This approach is highly advantageous since polynomials can be manipulated very easily and are readily understood, thus providing a very convenient environment for image processing. Our model presents a highly structured and compact algebraic representation of gray-level images which can be viewed as fuzzy sets.
A robust firearm identification algorithm of forensic ballistics specimens
NASA Astrophysics Data System (ADS)
Chuan, Z. L.; Jemain, A. A.; Liong, C.-Y.; Ghani, N. A. M.; Tan, L. K.
2017-09-01
Existing firearm identification algorithms suffer from several inherent difficulties, including the need for physical interpretation and high time consumption. Therefore, the aim of this study is to propose a robust algorithm for firearm identification based on extracting a set of informative features from the segmented region of interest (ROI) of simulated noisy center-firing pin impression images. The proposed algorithm comprises a Laplacian sharpening filter, clustering-based threshold selection, an unweighted least squares estimator, and segmentation of a square ROI from the noisy images. A total of 250 simulated noisy images collected from five different pistols of the same make, model and caliber are used to evaluate the robustness of the proposed algorithm. This study found that the proposed algorithm is able to perform the identification task on noisy images with noise levels as high as 70%, while maintaining a firearm identification accuracy rate of over 90%.
3D morphology reconstruction using linear array CCD binocular stereo vision imaging system
NASA Astrophysics Data System (ADS)
Pan, Yu; Wang, Jinjiang
2018-01-01
A binocular vision imaging system, which has a small field of view, cannot reconstruct the 3-D shape of a dynamic object. We built a linear array CCD binocular vision imaging system, which uses different calibration and reconstruction methods. Building on the binocular vision imaging system, the linear array CCD binocular vision imaging system, which has a wider field of view, can reconstruct the 3-D morphology of objects in continuous motion, and the results are accurate. This research mainly introduces the composition and principle of the linear array CCD binocular vision imaging system, including the calibration, capture, matching and reconstruction stages of the imaging system. The system consists of two linear array cameras placed in a special arrangement and a horizontal moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras then capture images of the moving objects, and the results are matched and reconstructed in 3-D. The linear array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects, so this work is of significance for measuring the 3-D morphology of moving objects.
NASA Astrophysics Data System (ADS)
He, Qiang; Schultz, Richard R.; Chu, Chee-Hung Henry
2008-04-01
The concept surrounding super-resolution image reconstruction is to recover a highly-resolved image from a series of low-resolution images via between-frame subpixel image registration. In this paper, we propose a novel and efficient super-resolution algorithm, and then apply it to the reconstruction of real video data captured by a small Unmanned Aircraft System (UAS). Small UAS aircraft generally have a wingspan of less than four meters, so that these vehicles and their payloads can be buffeted by even light winds, resulting in potentially unstable video. This algorithm is based on a coarse-to-fine strategy, in which a coarsely super-resolved image sequence is first built from the original video data by image registration and bi-cubic interpolation between a fixed reference frame and every additional frame. It is well known that the median filter is robust to outliers. If we calculate pixel-wise medians in the coarsely super-resolved image sequence, we can restore a refined super-resolved image. The primary advantage is that this is a noniterative algorithm, unlike traditional approaches based on highly-computational iterative algorithms. Experimental results show that our coarse-to-fine super-resolution algorithm is not only robust, but also very efficient. In comparison with five well-known super-resolution algorithms, namely the robust super-resolution algorithm, bi-cubic interpolation, projection onto convex sets (POCS), the Papoulis-Gerchberg algorithm, and the iterated back projection algorithm, our proposed algorithm gives both strong efficiency and robustness, as well as good visual performance. This is particularly useful for the application of super-resolution to UAS surveillance video, where real-time processing is highly desired.
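The fusion step at the heart of the method is a pixel-wise median over registered, upsampled frames. The sketch below assumes the inter-frame translations are already known (the paper estimates them by registration against a fixed reference frame) and uses OpenCV for warping and bicubic upsampling.

```python
import cv2
import numpy as np

def coarse_to_fine_sr(frames, shifts, scale=2):
    """Fuse registered, bicubically upsampled frames with a pixel-wise median."""
    stack = []
    for f, (dx, dy) in zip(frames, shifts):
        M = np.float32([[1, 0, -dx], [0, 1, -dy]])     # undo the known shift
        reg = cv2.warpAffine(f, M, (f.shape[1], f.shape[0]))
        up = cv2.resize(reg, None, fx=scale, fy=scale,
                        interpolation=cv2.INTER_CUBIC)
        stack.append(up.astype(np.float32))
    # the median rejects outlier pixels (mis-registration, noise) robustly
    return np.median(np.stack(stack, axis=0), axis=0)
```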
Principal component reconstruction (PCR) for cine CBCT with motion learning from 2D fluoroscopy.
Gao, Hao; Zhang, Yawei; Ren, Lei; Yin, Fang-Fang
2018-01-01
This work aims to generate cine CT images (i.e., 4D images with high-temporal resolution) based on a novel principal component reconstruction (PCR) technique with motion learning from 2D fluoroscopic training images. In the proposed PCR method, the matrix factorization is utilized as an explicit low-rank regularization of 4D images that are represented as a product of spatial principal components and temporal motion coefficients. The key hypothesis of PCR is that temporal coefficients from 4D images can be reasonably approximated by temporal coefficients learned from 2D fluoroscopic training projections. For this purpose, we can acquire fluoroscopic training projections for a few breathing periods at fixed gantry angles that are free from geometric distortion due to gantry rotation, that is, fluoroscopy-based motion learning. Such training projections can provide an effective characterization of the breathing motion. The temporal coefficients can be extracted from these training projections and used as priors for PCR, even though the principal components from the training projections are certainly not the same as those of the 4D images to be reconstructed. For this purpose, training data are synchronized with reconstruction data using identical real-time breathing position intervals for projection binning. In terms of image reconstruction, with a priori temporal coefficients, the data fidelity for PCR changes from nonlinear to linear, and consequently, the PCR method is robust and can be solved efficiently. PCR is formulated as a convex optimization problem with the sum of linear data fidelity with respect to spatial principal components and spatiotemporal total variation regularization imposed on 4D image phases. The solution algorithm of PCR is developed based on the alternating direction method of multipliers. The implementation is fully parallelized on GPU with the NVIDIA CUDA toolbox and each reconstruction takes a few minutes. The proposed PCR method is validated and compared with a state-of-the-art method, that is, PICCS, using both simulation and experimental data with the on-board cone-beam CT setting. The results demonstrated the feasibility of PCR for cine CBCT and the significantly improved reconstruction quality of PCR over PICCS for cine CBCT. With a priori estimated temporal motion coefficients using fluoroscopic training projections, the PCR method can accurately reconstruct spatial principal components, and then generate cine CT images as a product of temporal motion coefficients and spatial principal components. © 2017 American Association of Physicists in Medicine.
Feedback system design with an uncertain plant
NASA Technical Reports Server (NTRS)
Milich, D.; Valavani, L.; Athans, M.
1986-01-01
A method is developed to design a fixed-parameter compensator for a linear, time-invariant, SISO (single-input single-output) plant model characterized by significant structured, as well as unstructured, uncertainty. The controller minimizes the H(infinity) norm of the worst-case sensitivity function over the operating band and the resulting feedback system exhibits robust stability and robust performance. It is conjectured that such a robust nonadaptive control design technique can be used on-line in an adaptive control system.
Robust Stability and Control of Multi-Body Ground Vehicles with Uncertain Dynamics and Failures
2010-01-01
NASA Astrophysics Data System (ADS)
de Andrés, Javier; Landajo, Manuel; Lorca, Pedro; Labra, Jose; Ordóñez, Patricia
Artificial neural networks have proven to be useful tools for solving financial analysis problems such as financial distress prediction and audit risk assessment. In this paper we focus on the performance of robust (least absolute deviation-based) neural networks in measuring the liquidity of firms. The problem of learning the bivariate relationship between the components (namely, current liabilities and current assets) of the so-called current ratio is analyzed, and the predictive performance of several modelling paradigms (namely, linear and log-linear regressions, classical ratios and neural networks) is compared. An empirical analysis is conducted on a representative database from the Spanish economy. Results indicate that classical ratio models are largely inadequate as a realistic description of the studied relationship, especially when used for predictive purposes. In a number of cases, especially when the analyzed firms are microenterprises, the linear specification is improved by considering the flexible non-linear structures provided by neural networks.
MIND: modality independent neighbourhood descriptor for multi-modal deformable registration.
Heinrich, Mattias P; Jenkinson, Mark; Bhushan, Manav; Matin, Tahreema; Gleeson, Fergus V; Brady, Sir Michael; Schnabel, Julia A
2012-10-01
Deformable registration of images obtained from different modalities remains a challenging task in medical image analysis. This paper addresses this important problem and proposes a modality independent neighbourhood descriptor (MIND) for both linear and deformable multi-modal registration. Based on the similarity of small image patches within one image, it aims to extract the distinctive structure in a local neighbourhood, which is preserved across modalities. The descriptor is based on the concept of image self-similarity, which has been introduced for non-local means filtering for image denoising. It is able to distinguish between different types of features such as corners, edges and homogeneously textured regions. MIND is robust to the most considerable differences between modalities: non-functional intensity relations, image noise and non-uniform bias fields. The multi-dimensional descriptor can be efficiently computed in a dense fashion across the whole image and provides point-wise local similarity across modalities based on the absolute or squared difference between descriptors, making it applicable for a wide range of transformation models and optimisation algorithms. We use the sum of squared differences of the MIND representations of the images as a similarity metric within a symmetric non-parametric Gauss-Newton registration framework. In principle, MIND would be applicable to the registration of arbitrary modalities. In this work, we apply and validate it for the registration of clinical 3D thoracic CT scans between inhale and exhale as well as the alignment of 3D CT and MRI scans. Experimental results show the advantages of MIND over state-of-the-art techniques such as conditional mutual information and entropy images, with respect to clinically annotated landmark locations. Copyright © 2012 Elsevier B.V. All rights reserved.
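A simplified 2-D sketch of the descriptor computation follows: patch distances to the four immediate neighbours are computed with a box filter of squared differences, normalized by a local variance estimate, and mapped through an exponential. The 4-neighbour search region and patch radius are minimal assumptions; the paper's full formulation is more general.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def mind_2d(img, radius=1):
    """Toy 2-D MIND descriptor (one channel per neighbourhood shift)."""
    img = img.astype(np.float32)
    shifts = [(0, 1), (0, -1), (1, 0), (-1, 0)]        # 4-neighbourhood
    dist = []
    for dy, dx in shifts:
        diff2 = (img - np.roll(img, (dy, dx), axis=(0, 1))) ** 2
        # patch SSD = box filter of the squared pointwise difference
        dist.append(uniform_filter(diff2, size=2 * radius + 1))
    D = np.stack(dist, axis=0)
    V = D.mean(axis=0) + 1e-6                          # local variance estimate
    mind = np.exp(-D / V)
    return mind / mind.max(axis=0, keepdims=True)      # per-pixel normalization
```

Registration then compares descriptors with a simple sum of squared differences, which is what makes the approach usable with standard optimisation schemes.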
Surface and Atmospheric Parameter Retrieval From AVIRIS Data: The Importance of Non-Linear Effects
NASA Technical Reports Server (NTRS)
Green Robert O.; Moreno, Jose F.
1996-01-01
AVIRIS data represent a new and important approach for the retrieval of atmospheric and surface parameters from optical remote sensing data. The development of algorithms to retrieve information from AVIRIS data is an important step toward these new approaches and capabilities, not only as a test for future space systems but also for AVIRIS as an operational airborne remote sensing system. Many things have been learned since AVIRIS became operational, and the successive technical improvements in the hardware and the more sophisticated calibration techniques employed have increased the quality of the data to the point of almost meeting optimum user requirements. However, the potential capabilities of imaging spectrometry over standard multispectral techniques have still not been fully demonstrated. Reasons for this are the technical difficulties in handling the data, the critical aspect of calibration for advanced retrieval methods, and the lack of proper models with which to invert the measured AVIRIS radiances in all the spectral channels. To achieve the potential of imaging spectrometry, these issues must be addressed. In this paper, an algorithm to retrieve information about both atmospheric and surface parameters from AVIRIS data, by using model inversion techniques, is described. Emphasis is placed on the derivation of the model itself as well as on proper inversion techniques that are robust to noise in the data and to the model's limited ability to describe natural variability in the data. The problem of non-linear effects is addressed, as such effects have been demonstrated to be a major source of error in the numerical values retrieved by simpler, linear-based approaches. Non-linear effects are especially critical for the retrieval of surface parameters where both scattering and absorption effects are coupled, as well as in cases of significant multiple-scattering contributions. However, sophisticated modeling approaches can handle such non-linear effects, which are especially important over vegetated surfaces. All the data used in this study were acquired during the 1991 Multisensor Airborne Campaign (MAC-Europe), as part of the European Field Experiment on a Desertification-threatened Area (EFEDA), carried out in Spain in June-July 1991.
Pure endmember extraction using robust kernel archetypoid analysis for hyperspectral imagery
NASA Astrophysics Data System (ADS)
Sun, Weiwei; Yang, Gang; Wu, Ke; Li, Weiyue; Zhang, Dianfa
2017-09-01
A robust kernel archetypoid analysis (RKADA) method is proposed to extract pure endmembers from hyperspectral imagery (HSI). The RKADA assumes that each pixel is a sparse linear mixture of all endmembers and that each endmember corresponds to a real pixel in the image scene. First, it improves the regular archetypal analysis with a new binary sparse constraint, and the adoption of the kernel function constructs the principal convex hull in an infinite Hilbert space and enlarges the divergences between pairwise pixels. Second, the RKADA transfers the pure endmember extraction problem into an optimization problem by minimizing residual errors with the Huber loss function. The Huber loss function reduces the effects of large noise and outliers in the convergence procedure of RKADA and enhances the robustness of the optimization function. Third, random kernel sinks for fast kernel matrix approximation and a two-stage algorithm for optimizing the initial pure endmembers are utilized to improve the computational efficiency of realistic implementations of RKADA. The optimization equation of RKADA is solved by using the block coordinate descent scheme and the desired pure endmembers are finally obtained. Six state-of-the-art pure endmember extraction methods are employed to make comparisons with the RKADA on both synthetic and real Cuprite HSI datasets, including three geometrical algorithms: vertex component analysis (VCA), alternative volume maximization (AVMAX) and orthogonal subspace projection (OSP); and three matrix factorization algorithms: the preconditioning for successive projection algorithm (PreSPA), hierarchical clustering based on rank-two nonnegative matrix factorization (H2NMF) and self-dictionary multiple measurement vector (SDMMV). Experimental results show that the RKADA outperforms all six methods in terms of spectral angle distance (SAD) and root-mean-square error (RMSE). Moreover, the RKADA has short computational times in offline operations and shows significant improvement in identifying pure endmembers for ground objects with small spectral differences. Therefore, the RKADA could be an alternative for pure endmember extraction from hyperspectral images.
Shan, Ying; Sawhney, Harpreet S; Kumar, Rakesh
2008-04-01
This paper proposes a novel unsupervised algorithm for learning discriminative features in the context of matching road vehicles between two non-overlapping cameras. The matching problem is formulated as a same-different classification problem, which aims to compute the probability of vehicle images from two distinct cameras being from the same vehicle or different vehicle(s). We employ a novel measurement vector that consists of three independent edge-based measures and their associated robust measures computed from a pair of aligned vehicle edge maps. The weight of each measure is determined by an unsupervised learning algorithm that optimally separates the same-different classes in the combined measurement space. This is achieved with a weak classification algorithm that automatically collects representative samples from the same-different classes, followed by a more discriminative classifier based on Fisher's Linear Discriminants and Gibbs sampling. The robustness of the match measures and the use of unsupervised discriminant analysis in the classification ensure that the proposed method performs consistently in the presence of missing/false features, temporally and spatially changing illumination conditions, and systematic misalignment caused by different camera configurations. Extensive experiments based on real data of over 200 vehicles at different times of day demonstrate promising results.
A multimedia retrieval framework based on semi-supervised ranking and relevance feedback.
Yang, Yi; Nie, Feiping; Xu, Dong; Luo, Jiebo; Zhuang, Yueting; Pan, Yunhe
2012-04-01
We present a new framework for multimedia content analysis and retrieval which consists of two independent algorithms. First, we propose a new semi-supervised algorithm called ranking with Local Regression and Global Alignment (LRGA) to learn a robust Laplacian matrix for data ranking. In LRGA, for each data point, a local linear regression model is used to predict the ranking scores of its neighboring points. A unified objective function is then proposed to globally align the local models from all the data points so that an optimal ranking score can be assigned to each data point. Second, we propose a semi-supervised long-term Relevance Feedback (RF) algorithm to refine the multimedia data representation. The proposed long-term RF algorithm utilizes both the multimedia data distribution in multimedia feature space and the history RF information provided by users. A trace ratio optimization problem is then formulated and solved by an efficient algorithm. The algorithms have been applied to several content-based multimedia retrieval applications, including cross-media retrieval, image retrieval, and 3D motion/pose data retrieval. Comprehensive experiments on four data sets have demonstrated its advantages in precision, robustness, scalability, and computational efficiency.
Background recovery via motion-based robust principal component analysis with matrix factorization
NASA Astrophysics Data System (ADS)
Pan, Peng; Wang, Yongli; Zhou, Mingyuan; Sun, Zhipeng; He, Guoping
2018-03-01
Background recovery is a key technique in video analysis, but it still suffers from many challenges, such as camouflage, lighting changes, and diverse types of image noise. Robust principal component analysis (RPCA), which aims to recover a low-rank matrix and a sparse matrix, is a general framework for background recovery. The nuclear norm is widely used as a convex surrogate for the rank function in RPCA, which requires computing the singular value decomposition (SVD), a task that is increasingly costly as matrix sizes and ranks increase. However, matrix factorization greatly reduces the dimension of the matrix for which the SVD must be computed. Motion information has been shown to improve low-rank matrix recovery in RPCA, but this method still finds it difficult to handle original video data sets because of its batch-mode formulation and implementation. Hence, in this paper, we propose a motion-assisted RPCA model with matrix factorization (FM-RPCA) for background recovery. Moreover, an efficient linear alternating direction method of multipliers with a matrix factorization (FL-ADM) algorithm is designed for solving the proposed FM-RPCA model. Experimental results illustrate that the method provides stable results and is more efficient than the current state-of-the-art algorithms.
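To make the low-rank-plus-sparse model concrete, here is a toy RPCA decomposition that alternates a truncated SVD (an explicit rank-r factorization of the background) with soft-thresholding of the residual. This is not the motion-assisted FL-ADM solver proposed in the paper, only the underlying decomposition on a hypothetical pixels-by-frames matrix.

```python
import numpy as np

def soft(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def rpca_factorized(D, rank=1, lam=None, n_iter=50):
    """Toy RPCA: D ~ L + S with L restricted to rank r."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    S = np.zeros_like(D)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]       # low-rank background
        S = soft(D - L, lam)                           # sparse foreground
    return L, S

D = np.random.rand(1000, 40)     # hypothetical 1000-pixel, 40-frame video
L, S = rpca_factorized(D, rank=1)
```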
NASA Technical Reports Server (NTRS)
Patel, R. V.; Toda, M.; Sridhar, B.
1977-01-01
Because it is difficult to obtain an exact mathematical representation of a real system, it is often necessary to investigate the robustness (stability) of a linear quadratic state feedback (LQSF) design in the presence of system uncertainty and to obtain some quantitative measure of the perturbations which such a design can tolerate. A study is conducted concerning the problem of expressing the robustness property of an LQSF design quantitatively in terms of bounds on the perturbations (modeling errors or parameter variations) in the system matrices. Bounds are obtained for the general case of nonlinear, time-varying perturbations. It is pointed out that most of the presented results are readily applicable to practical situations for which a designer has estimates of the bounds on the system parameter perturbations. Relations are provided which help the designer to select appropriate weighting matrices in the quadratic performance index to attain a robust design. The developed results are employed in the design of an autopilot logic for the flare maneuver of the Augmentor Wing Jet STOL Research Aircraft.
Cole-Cole, linear and multivariate modeling of capacitance data for on-line monitoring of biomass.
Dabros, Michal; Dennewald, Danielle; Currie, David J; Lee, Mark H; Todd, Robert W; Marison, Ian W; von Stockar, Urs
2009-02-01
This work evaluates three techniques of calibrating capacitance (dielectric) spectrometers used for on-line monitoring of biomass: modeling of cell properties using the theoretical Cole-Cole equation, linear regression of dual-frequency capacitance measurements on biomass concentration, and multivariate (PLS) modeling of scanning dielectric spectra. The performance and robustness of each technique is assessed during a sequence of validation batches in two experimental settings of differing signal noise. In more noisy conditions, the Cole-Cole model had significantly higher biomass concentration prediction errors than the linear and multivariate models. The PLS model was the most robust in handling signal noise. In less noisy conditions, the three models performed similarly. Estimates of the mean cell size were done additionally using the Cole-Cole and PLS models, the latter technique giving more satisfactory results.
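For reference, the Cole-Cole model named above is the standard dielectric dispersion equation; in biomass monitoring the measured capacitance tracks the real part of the permittivity, and the dispersion magnitude scales with the concentration of intact (polarizable) cells. This is the textbook form, not a restatement of the paper's exact fitting equation:

```latex
\[
  \varepsilon^{*}(\omega) \;=\; \varepsilon_{\infty}
  \;+\; \frac{\varepsilon_{s}-\varepsilon_{\infty}}
             {1+\left(j\omega\tau\right)^{1-\alpha}}
\]
% eps_s, eps_inf: low- and high-frequency permittivity limits;
% tau: relaxation time (related to the mean cell size);
% alpha: Cole-Cole broadening parameter.
```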
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Bryan, ThomasC. (Inventor); Book, Michael L. (Inventor)
2004-01-01
A method and system for processing an image, including capturing an image and storing the image as image pixel data. Each image pixel datum is stored in a respective memory location having a corresponding address. Threshold pixel data are selected from the image pixel data, and linear spot segments are identified from the selected threshold pixel data. The positions of only a first pixel and a last pixel for each linear segment are saved. Movement of one or more objects is tracked by comparing the positions of the first and last pixels of a linear segment present in the captured image with the respective first and last pixel positions in subsequent captured images. Alternatively, additional data for each linear data segment are saved, such as the sum of pixels and the weighted sum of pixels (i.e., each threshold pixel value multiplied by that pixel's x-location).
Multimodal Image Alignment via Linear Mapping between Feature Modalities.
Jiang, Yanyun; Zheng, Yuanjie; Hou, Sujuan; Chang, Yuchou; Gee, James
2017-01-01
We propose a novel landmark matching based method for aligning multimodal images, which is accomplished uniquely by resolving a linear mapping between different feature modalities. This linear mapping results in a new measurement on similarity of images captured from different modalities. In addition, our method simultaneously solves this linear mapping and the landmark correspondences by minimizing a convex quadratic function. Our method can estimate complex image relationship between different modalities and nonlinear nonrigid spatial transformations even in the presence of heavy noise, as shown in our experiments carried out by using a variety of image modalities.
NASA Technical Reports Server (NTRS)
Miller, James G.
1995-01-01
Development and application of linear array imaging technologies to address specific aging-aircraft inspection issues is described. Real-time video-taped images were obtained with an unmodified commercial linear-array medical scanner from specimens constructed to simulate typical types of flaws encountered in the inspection of aircraft structures. Results suggest that information regarding the characteristics, location, and interface properties of specific types of flaws in materials and structures may be obtained from the images acquired with a linear array. Furthermore, linear array imaging may offer the advantage of being able to compare 'good' regions with 'flawed' regions simultaneously, and in real time. Real-time imaging permits the inspector to obtain image information from various views and provides the opportunity for observing the effects of introducing specific interventions. Observation of an image in real time can offer the operator the ability to 'interact' with the inspection process, thus providing new capabilities, and perhaps new approaches, to nondestructive inspections.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, Andrew J.; Miller, Brian W.; Robinson, Sean M.
Imaging technology is generally considered too invasive for arms control inspections due to the concern that it cannot properly secure sensitive features of the inspected item. However, this same sensitive information, which could include direct information on the form and function of the items under inspection, could be used for robust arms control inspections. The single-pixel X-ray imager (SPXI) is introduced as a method to make such inspections, capturing the salient spatial information of an object in a secure manner while never forming an actual image. The method is built on the theory of compressive sensing and the single-pixel optical camera. The performance of the system is quantified here using simulated inspections of simple objects. Measures of the robustness and security of the method are introduced and used to determine how such an inspection could be made while maintaining high robustness and security. In particular, it is found that an inspection with low noise (<1%) and high undersampling (>256×) exhibits high robustness and security.
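The compressive-sensing principle behind SPXI can be sketched with random binary masks and an L1-regularized reconstruction. The sketch below assumes the scene is sparse in the pixel basis and uses a plain ISTA solver; the actual recovery method, mask design, and undersampling ratio of the system are not specified here.

```python
import numpy as np

def ista_l1(A, y, lam=0.05, n_iter=300):
    """Minimal ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step on the data fit
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # shrinkage
    return x

n, m = 64 * 64, 1024                       # 4x undersampling (example)
A = np.random.choice([0.0, 1.0], size=(m, n))   # random binary masks
x_true = np.zeros(n)
x_true[np.random.choice(n, 50, replace=False)] = 1.0
y = A @ x_true + 0.01 * np.random.randn(m)      # low-noise bucket sums
x_rec = ista_l1(A, y)
```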
A robust nonlinear filter for image restoration.
Koivunen, V
1995-01-01
A class of nonlinear regression filters based on robust estimation theory is introduced. The goal of the filtering is to recover a high-quality image from degraded observations. Models for desired image structures and contaminating processes are employed, but deviations from strict assumptions are allowed since the assumptions on signal and noise are typically only approximately true. The robustness of filters is usually addressed only in a distributional sense, i.e., the actual error distribution deviates from the nominal one. In this paper, the robustness is considered in a broad sense since the outliers may also be due to inappropriate signal model, or there may be more than one statistical population present in the processing window, causing biased estimates. Two filtering algorithms minimizing a least trimmed squares criterion are provided. The design of the filters is simple since no scale parameters or context-dependent threshold values are required. Experimental results using both real and simulated data are presented. The filters effectively attenuate both impulsive and nonimpulsive noise while recovering the signal structure and preserving interesting details.
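A minimal illustration of the least trimmed squares criterion for the simplest (constant/location) signal model: the estimate minimizes the sum of the h smallest squared residuals, found here by concentration steps. The paper fits richer local signal models; the window size, h, and the nested loops below are assumed defaults in a slow reference implementation.

```python
import numpy as np

def lts_location(window, h=None, n_iter=5):
    """LTS location estimate: minimize the sum of the h smallest squared residuals."""
    w = np.ravel(window)
    h = h if h is not None else w.size // 2 + 1
    est = np.median(w)                         # robust starting point
    for _ in range(n_iter):
        keep = np.argsort((w - est) ** 2)[:h]  # keep the h best-fitting samples
        est = w[keep].mean()                   # re-estimate on the subset
    return est

def lts_filter(img, size=3):
    """Sliding-window LTS filter (slow reference implementation)."""
    pad = size // 2
    p = np.pad(img.astype(np.float32), pad, mode="reflect")
    out = np.empty(img.shape, dtype=np.float32)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = lts_location(p[i:i + size, j:j + size])
    return out
```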
MO-AB-BRA-10: Cancer Therapy Outcome Prediction Based On Dempster-Shafer Theory and PET Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lian, C; University of Rouen, QuantIF - EA 4108 LITIS, 76000 Rouen; Li, H
2015-06-15
Purpose: In cancer therapy, utilizing 18F-FDG PET image-based features for accurate outcome prediction is challenging because of 1) the limited discriminative information within a small number of PET image sets, and 2) fluctuating feature characteristics caused by the inferior spatial resolution and system noise of PET imaging. In this study, we proposed a new Dempster-Shafer theory (DST) based approach, evidential low-dimensional transformation with feature selection (ELT-FS), to accurately predict cancer therapy outcome with both PET imaging features and clinical characteristics. Methods: First, a specific loss function with a sparse penalty was developed to learn an adaptive low-rank distance metric for representing the dissimilarity between different patients' feature vectors. By minimizing this loss function, a linear low-dimensional transformation of the input features was achieved. Imprecise features were excluded simultaneously by applying an l2,1-norm regularization of the learnt dissimilarity metric in the loss function. Finally, the learnt dissimilarity metric was applied in an evidential K-nearest-neighbor (EK-NN) classifier to predict treatment outcome. Results: Twenty-five patients with stage II-III non-small-cell lung cancer and thirty-six patients with esophageal squamous cell carcinomas treated with chemo-radiotherapy were collected. For the two groups of patients, 52 and 29 features, respectively, were utilized. The leave-one-out cross-validation (LOOCV) protocol was used for evaluation. Compared to three existing linear transformation methods (PCA, LDA, NCA), the proposed ELT-FS leads to higher prediction accuracy for the training and testing sets both for lung-cancer patients (100+/-0.0, 88.0+/-33.17) and for esophageal-cancer patients (97.46+/-1.64, 83.33+/-37.8). The ELT-FS also provides superior class separation in both test data sets. Conclusion: A novel DST-based approach has been proposed to predict cancer treatment outcome using PET image features and clinical characteristics. A specific loss function has been designed for robust accommodation of feature-set incertitude and imprecision, facilitating adaptive learning of the dissimilarity metric for the EK-NN classifier.
Optimal design of stimulus experiments for robust discrimination of biochemical reaction networks.
Flassig, R J; Sundmacher, K
2012-12-01
Biochemical reaction networks in the form of coupled ordinary differential equations (ODEs) provide a powerful modeling tool for understanding the dynamics of biochemical processes. During the early phase of modeling, scientists have to deal with a large pool of competing nonlinear models. At this point, discrimination experiments can be designed and conducted to obtain optimal data for selecting the most plausible model. Since biological ODE models have widely distributed parameters due to, e.g., biological variability or experimental variations, model responses become distributed. Therefore, a robust optimal experimental design (OED) for model discrimination can be used to discriminate models based on their response probability distribution functions (PDFs). In this work, we present an optimal control-based methodology for designing optimal stimulus experiments aimed at robust model discrimination. For estimating the time-varying model response PDF, which results from the nonlinear propagation of the parameter PDF under the ODE dynamics, we suggest using the sigma-point approach. Using the model overlap (expected likelihood) as a robust discrimination criterion to measure dissimilarities between expected model response PDFs, we benchmark the proposed nonlinear design approach against linearization with respect to prediction accuracy and design quality for two nonlinear biological reaction networks. As shown, the sigma-point approach outperforms linearization in the case of widely distributed parameter sets and/or multiple steady states. Since the sigma-point approach scales linearly with the number of model parameters, it can be applied to large systems for robust experimental planning. An implementation of the method in MATLAB/AMPL is available at http://www.uni-magdeburg.de/ivt/svt/person/rf/roed.html. Contact: flassig@mpi-magdeburg.mpg.de. Supplementary data are available at Bioinformatics online.
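To make the sigma-point step concrete, the Python sketch below propagates standard unscented-transform sigma points through an ODE to approximate the mean and variance of the model response induced by a Gaussian parameter PDF. The one-state model and its numbers are hypothetical assumptions; the paper's networks, overlap criterion, and MATLAB/AMPL implementation are not reproduced.

```python
import numpy as np
from scipy.integrate import odeint

def sigma_points(mean, cov, kappa=0.0):
    """Symmetric sigma points and weights for the unscented transform."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)   # matrix square root
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def response_moments(rhs, x0, t, p_mean, p_cov):
    """Approximate mean/variance of y(t) under a Gaussian parameter PDF
    by propagating each sigma point through the ODE dynamics."""
    pts, w = sigma_points(p_mean, p_cov)
    ys = np.array([odeint(rhs, x0, t, args=(p,))[:, 0] for p in pts])
    mean = w @ ys
    var = w @ (ys - mean) ** 2
    return mean, var

# Hypothetical one-state model: dx/dt = -p[0]*x + p[1]
rhs = lambda x, t, p: -p[0] * x + p[1]
t = np.linspace(0.0, 5.0, 50)
m, v = response_moments(rhs, 1.0, t, np.array([1.0, 0.5]),
                        np.diag([0.04, 0.01]))
```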
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2015-01-01
This report documents a case study on the application of Reliability Engineering techniques to achieve an optimal balance between performance and robustness by tuning the functional parameters of a complex non-linear control system. For complex systems with intricate and non-linear patterns of interaction between system components, analytical derivation of a mathematical model of system performance and robustness in terms of functional parameters may not be feasible or cost-effective. The demonstrated approach is simple, structured, effective, repeatable, and cost and time efficient. This general approach is suitable for a wide range of systems.
Robust Adaptive Dynamic Programming of Two-Player Zero-Sum Games for Continuous-Time Linear Systems.
Fu, Yue; Fu, Jun; Chai, Tianyou
2015-12-01
In this brief, an online robust adaptive dynamic programming algorithm is proposed for two-player zero-sum games of continuous-time unknown linear systems with matched uncertainties, which are functions of system outputs and states of a completely unknown exosystem. The online algorithm is developed using the policy iteration (PI) scheme with only one iteration loop. A new analytical method is proposed for convergence proof of the PI scheme. The sufficient conditions are given to guarantee globally asymptotic stability and suboptimal property of the closed-loop system. Simulation studies are conducted to illustrate the effectiveness of the proposed method.
Garcia, Jair E; Greentree, Andrew D; Shrestha, Mani; Dorin, Alan; Dyer, Adrian G
2014-01-01
The study of the signal-receiver relationship between flowering plants and pollinators requires a capacity to accurately map both the spectral and spatial components of a signal in relation to the perceptual abilities of potential pollinators. Spectrophotometers can typically recover high resolution spectral data, but the spatial component is difficult to record simultaneously. A technique allowing for an accurate measurement of the spatial component in addition to the spectral factor of the signal is highly desirable. Consumer-level digital cameras potentially provide access to both colour and spatial information, but they are constrained by their non-linear response. We present a robust methodology for recovering linear values from two different camera models: one sensitive to ultraviolet (UV) radiation and another to visible wavelengths. We test responses by imaging eight different plant species varying in shape, size and in the amount of energy reflected across the UV and visible regions of the spectrum, and compare the recovery of spectral data to spectrophotometer measurements. There is often a good agreement of spectral data, although when the pattern on a flower surface is complex a spectrophotometer may underestimate the variability of the signal as would be viewed by an animal visual system. Digital imaging presents a significant new opportunity to reliably map flower colours to understand the complexity of these signals as perceived by potential pollinators. Compared to spectrophotometer measurements, digital images can better represent the spatio-chromatic signal variability that would likely be perceived by the visual system of an animal, and should expand the possibilities for data collection in complex, natural conditions. However, and in spite of its advantages, the accuracy of the spectral information recovered from camera responses is subject to variations in the uncertainty levels, with larger uncertainties associated with low radiance levels.
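As a concrete illustration of recovering linear values from a non-linear camera response, the Python sketch below fits a simple power-law response to imaged grey standards of known relative radiance and inverts it. The power-law form, the grey-standard values, and the raw counts are illustrative assumptions; the paper's per-channel calibration for the UV and visible cameras is more involved.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_response(digital_counts, known_radiance):
    """Fit a power-law camera response d = a * L**g + b from imaged
    grey standards, then return a function that linearizes raw counts."""
    model = lambda L, a, g, b: a * L ** g + b
    (a, g, b), _ = curve_fit(model, known_radiance, digital_counts,
                             p0=(1.0, 0.45, 0.0), maxfev=10000)
    def linearize(d):
        # Invert the fitted response to recover relative radiance.
        return np.clip((d - b) / a, 0, None) ** (1.0 / g)
    return linearize

# Hypothetical 6-step grey scale (relative reflectance) and raw counts
L = np.array([0.03, 0.09, 0.19, 0.36, 0.59, 0.90])
d = np.array([30.0, 55.0, 83.0, 118.0, 156.0, 199.0])
to_linear = fit_response(d, L)
```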
NASA Technical Reports Server (NTRS)
Prakash, OM, II
1991-01-01
Three linear controllers are designed to regulate the end effector of the Space Shuttle Remote Manipulator System (SRMS) operating in Position Hold Mode. In this mode of operation, jet firings of the Orbiter can be treated as disturbances while the controller tries to keep the end effector stationary in an orbiter-fixed reference frame. The three design techniques used are the Linear Quadratic Regulator (LQR), H2 optimization, and H-infinity optimization. The nonlinear SRMS is linearized by modelling the effects of the significant nonlinearities as uncertain parameters. Each regulator design is evaluated for robust stability in light of the parametric uncertainties using both the small gain theorem with an H-infinity norm and the less conservative mu-analysis test. All three regulator designs offer significant improvement over the current system on the nominal plant. Unfortunately, even after dropping performance requirements and designing exclusively for robust stability, robust stability cannot be achieved. The SRMS suffers from lightly damped poles with real parametric uncertainties. Such a system renders the mu-analysis test, which allows for complex perturbations, too conservative.
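Of the three designs, the LQR step is the easiest to sketch. The Python fragment below computes an LQR gain via the continuous algebraic Riccati equation for a hypothetical double-integrator stand-in; the actual linearized SRMS model is much larger and is not reproduced here.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr(A, B, Q, R):
    """Continuous-time LQR gain K (u = -K x) via the algebraic
    Riccati equation."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Hypothetical double-integrator stand-in for one linearized axis
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = lqr(A, B, Q=np.diag([10.0, 1.0]), R=np.array([[0.1]]))
closed_loop = A - B @ K
assert np.all(np.linalg.eigvals(closed_loop).real < 0)  # nominal stability
```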
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fried, D; Meier, J; Mawlawi, O
Purpose: To use a NEMA-IEC PET phantom to assess the robustness of FDG-PET-based radiomics features to changes in reconstruction parameters across different scanners. Methods: We scanned a NEMA-IEC PET phantom on 3 different scanners (GE Discovery VCT, GE Discovery 710, and Siemens mCT) using an FDG source-to-background ratio of 10:1. Images were retrospectively reconstructed using different iterations (2-3), subsets (21-24), Gaussian filter widths (2, 4, 6 mm), and matrix sizes (128, 192, 256). The 710 and mCT used time-of-flight and point-spread functions in reconstruction. The axial image through the center of the 6 active spheres was used for analysis. A region-of-interest containing all spheres was able to simulate a heterogeneous lesion due to partial volume effects. Maximum voxel deviations from all retrospectively reconstructed images (18 per scanner) were compared to our standard clinical protocol. PET images from 195 non-small cell lung cancer patients were used to compare feature variation. The ratio of a feature's standard deviation across the patient cohort versus across the phantom images was calculated to assess feature robustness. Results: Across all images, the percentage of voxels differing by <1 SUV and <2 SUV ranged from 61-92% and 88-99%, respectively. Voxel-voxel similarity decreased when using higher-resolution image matrices (192/256 versus 128) and was comparable across scanners. Taking the ratio of patient and phantom feature standard deviations identified features that were not robust to changes in reconstruction parameters (e.g., co-occurrence correlation). Reasonably robust metrics (standard deviation ratios > 3) included routinely used SUV metrics (e.g., SUVmean and SUVmax) as well as some radiomics features (e.g., co-occurrence contrast, co-occurrence energy, standard deviation, and uniformity). Similar standard deviation ratios were observed across scanners. Conclusions: Our method enabled a comparison of feature variability across scanners and identified features that were not robust to changes in reconstruction parameters.
Acceleration of the direct reconstruction of linear parametric images using nested algorithms.
Wang, Guobao; Qi, Jinyi
2010-03-07
Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.
Wu, Jian; Murphy, Martin J
2010-06-01
To assess the precision and robustness of patient setup corrections computed from 3D/3D rigid registration methods using image intensity, when no ground-truth validation is possible. Fifteen pairs of male pelvic CTs were rigidly registered using four different in-house registration methods. Registration results were compared for different resolutions and image content by varying the image down-sampling ratio and by thresholding out soft tissue to isolate bony landmarks. Intrinsic registration precision was investigated by comparing the different methods and by reversing the source and the target roles of the two images being registered. The translational reversibility errors for successful registrations ranged from 0.0 to 1.69 mm. Rotations were less than 1 degree. Mutual information failed in most registrations that used only bony landmarks. The magnitude of the reversibility error was strongly correlated with the success/failure of each algorithm to find the global minimum. Rigid image registrations have an intrinsic uncertainty and robustness that depend on the imaging modality, the registration algorithm, the image resolution, and the image content. In the absence of an absolute ground truth, the variation in the shifts calculated by several different methods provides a useful estimate of that uncertainty. The difference observed on reversing the source and target images can be used as an indication of robust convergence.
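The source-target reversal test reduces to composing the forward and reverse transforms and measuring the departure from identity. A minimal Python sketch, assuming 4x4 homogeneous rigid transforms, follows.

```python
import numpy as np

def reversibility_error(T_fwd, T_rev):
    """Compose forward (source->target) and reverse (target->source)
    4x4 rigid transforms; the deviation from identity quantifies the
    intrinsic registration uncertainty when no ground truth exists."""
    M = T_rev @ T_fwd
    trans_err = np.linalg.norm(M[:3, 3])       # residual shift (e.g. mm)
    # Rotation angle of the residual rotation block, in degrees
    cos_theta = np.clip((np.trace(M[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    rot_err = np.degrees(np.arccos(cos_theta))
    return trans_err, rot_err
```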
Multi-focus image fusion and robust encryption algorithm based on compressive sensing
NASA Astrophysics Data System (ADS)
Xiao, Di; Wang, Lan; Xiang, Tao; Wang, Yong
2017-06-01
Multi-focus image fusion schemes have been studied in recent years. However, little work has been done on the transmission security of multi-focus images. This paper proposes a scheme that can reduce the data transmission volume and resist various attacks. First, multi-focus image fusion based on wavelet decomposition generates complete scene images and optimizes perception for the human eye. The fused images are sparsely represented with the DCT and sampled with a structurally random matrix (SRM), which reduces the data volume and realizes the initial encryption. The obtained measurements are then further encrypted, to resist noise and cropping attacks, by combining permutation and diffusion stages. At the receiver, the cipher images can be jointly decrypted and reconstructed. Simulation results demonstrate the security and robustness of the proposed scheme.
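The SRM sampling step can be sketched compactly: a random sign flip, a fast orthonormal transform, and random row selection together act as the measurement matrix, with the sign pattern and the selected rows serving as key material. The sketch below uses a DCT as the fast transform and a flattened 1-D signal as a stand-in for the fused image; the permutation-diffusion stage is omitted.

```python
import numpy as np
from scipy.fft import dct

def srm_measure(x, m, rng):
    """Compressive measurements with a structurally random matrix:
    random sign flipping, a DCT, then random subsampling of m rows."""
    n = x.size
    signs = rng.choice([-1.0, 1.0], size=n)        # key component 1
    coeffs = dct(signs * x, norm="ortho")
    rows = rng.choice(n, size=m, replace=False)    # key component 2
    return coeffs[rows], (signs, rows)

rng = np.random.default_rng(42)
x = rng.standard_normal(1024)      # stand-in fused image (flattened)
y, key = srm_measure(x, m=256, rng=rng)
```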
Robustness of speckle imaging techniques applied to horizontal imaging scenarios
NASA Astrophysics Data System (ADS)
Bos, Jeremy P.
Atmospheric turbulence near the ground severely limits the quality of imagery acquired over long horizontal paths. In defense, surveillance, and border security applications, there is interest in deploying man-portable, embedded systems incorporating image reconstruction to improve the quality of imagery available to operators. To be effective, these systems must operate over significant variations in turbulence conditions while also being subject to other variations due to operation by novice users. Systems that meet these requirements and are otherwise designed to be immune to the factors that cause variation in performance are considered robust. In addition to robustness in design, the portable nature of these systems implies a preference for a minimum level of computational complexity. Speckle imaging methods are among a variety of methods recently proposed for use in man-portable horizontal imagers. In this work, the robustness of speckle imaging methods is established by identifying a subset of design parameters that provide immunity to the expected variations in operating conditions while minimizing the computation time necessary for image recovery. This performance evaluation is made possible by a novel technique for simulating anisoplanatic image formation. I find that incorporating as few as 15 image frames and 4 estimates of the object phase per reconstructed frame provides an average 45% reduction in Mean Squared Error (MSE) and a 68% reduction in the deviation of the MSE. In addition, the Knox-Thompson phase-recovery method is demonstrated to produce images in half the time required by the bispectrum. Finally, it is shown that certain blind image quality metrics can be used in place of the MSE to evaluate reconstruction quality in field scenarios. Using blind metrics rather than depending on user estimates allows for reconstruction quality that differs from the minimum MSE by as little as 1%, significantly reducing the deviation in performance due to user action.
Xu, Yuan; Ding, Kun; Huo, Chunlei; Zhong, Zisha; Li, Haichang; Pan, Chunhong
2015-01-01
Very high resolution (VHR) image change detection is challenging due to the low discriminative ability of change features and the difficulty of exploiting multilevel contextual information in the change decision. Most change feature extraction techniques emphasize the change degree description (i.e., to what degree the changes have happened) while ignoring the change pattern description (i.e., in what way the changes have happened), which is of equal importance in characterizing change signatures. Moreover, the simultaneous consideration of classification robust to registration noise and of multiscale region-consistent fusion is often neglected in the change decision. To overcome such drawbacks, in this paper, a novel VHR image change detection method is proposed based on a sparse change descriptor and robust discriminative dictionary learning. The sparse change descriptor combines the change degree component and the change pattern component, which are encoded by the sparse representation error and the morphological profile feature, respectively. Robust change decision is conducted by multiscale region-consistent fusion, which is implemented by superpixel-level cosparse representation with a robust discriminative dictionary and a conditional random field model. Experimental results confirm the effectiveness of the proposed change detection technique. PMID:25918748
A constrained robust least squares approach for contaminant release history identification
NASA Astrophysics Data System (ADS)
Sun, Alexander Y.; Painter, Scott L.; Wittmeyer, Gordon W.
2006-04-01
Contaminant source identification is an important type of inverse problem in groundwater modeling and is subject to both data and model uncertainty. Model uncertainty has rarely been considered in previous studies. In this work, a robust framework for solving contaminant source recovery problems is introduced. The contaminant source identification problem is first cast into one of solving uncertain linear equations, where the response matrix is constructed using a superposition technique. The formulation presented here is general and is applicable to any porous media flow and transport solver. The robust least squares (RLS) estimator, which originated in the field of robust identification, directly accounts for errors arising from model uncertainty and has been shown to significantly reduce the sensitivity of the optimal solution to perturbations in model and data. In this work, a new variant of RLS, the constrained robust least squares (CRLS), is formulated for solving uncertain linear equations. CRLS allows additional constraints, such as nonnegativity, to be imposed. The performance of CRLS is demonstrated through one- and two-dimensional test problems. When the system is ill-conditioned and uncertain, CRLS gives much better performance than its classical counterpart, the nonnegative least squares. The source identification framework developed in this work thus constitutes a reliable tool for recovering source release histories in real applications.
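As a rough illustration of the constrained robust estimation idea, the Python sketch below solves a nonnegativity-constrained, ridge-regularized least-squares problem by augmenting the system and calling standard NNLS. In full CRLS the regularization is derived from the uncertainty bound itself; here the parameter lam is a user-set stand-in.

```python
import numpy as np
from scipy.optimize import nnls

def constrained_robust_ls(A, b, lam):
    """Nonnegative, regularized least squares as a stand-in for CRLS:
    bounded model uncertainty in A motivates a ridge-type term, and
    augmenting the system lets standard NNLS enforce x >= 0."""
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    x, _ = nnls(A_aug, b_aug)
    return x
```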
1974-01-01
Digital Image Restoration under a Regression Model: The Unconstrained, Linear Equality and Inequality Constrained Approaches (Report 520, January 1974; Nelson Delfino d'Avila Mascarenhas). Image restoration is formulated under a regression model; a two-dimensional form adequately describes the linear model, and a discretization is performed using quadrature methods.
Position Accuracy Analysis of a Robust Vision-Based Navigation
NASA Astrophysics Data System (ADS)
Gaglione, S.; Del Pizzo, S.; Troisi, S.; Angrisano, A.
2018-05-01
Using images to determine camera position and attitude is a consolidated method, widespread in applications such as UAV navigation. In harsh environments, where GNSS could be degraded or denied, image-based positioning is a possible candidate for an integrated or alternative system. In this paper, such a method is investigated using a system based on a single camera and 3D maps. A robust estimation method is proposed in order to limit the effect of blunders or noisy measurements on the position solution. The proposed approach is tested using images collected in an urban canyon, where GNSS positioning is very inaccurate. A photogrammetric survey was previously performed to build the 3D model of the tested area. The position accuracy analysis is performed and the effect of the proposed robust method is validated.
Loi, Gianfranco; Dominietto, Marco; Manfredda, Irene; Mones, Eleonora; Carriero, Alessandro; Inglese, Eugenio; Krengli, Marco; Brambilla, Marco
2008-09-01
This note describes a method to characterize the performance of image fusion software (Syntegra) with respect to accuracy and robustness. Computed tomography (CT), magnetic resonance imaging (MRI), and single-photon emission computed tomography (SPECT) studies were acquired from two phantoms and 10 patients. Image registration was performed independently by two teams, each composed of one radiotherapist and one physicist, by superposition of anatomic landmarks. Each team performed the registration jointly and saved it. The two solutions were averaged to obtain the gold-standard registration. A new set of estimators was defined to identify translation and rotation errors along the coordinate axes, independently of point position in the image field of view (FOV). The algorithms evaluated were local correlation (LC) for CT-MRI registrations, and normalized mutual information (MI) for CT-MRI and CT-SPECT registrations. To evaluate accuracy, estimator values were compared to limiting values for the algorithms employed, both in phantoms and in patients. To evaluate robustness, different alignments between images taken from a sample patient were produced and the registration errors determined. The LC algorithm proved accurate for CT-MRI registrations in phantoms but exceeded limiting values in 3 of 10 patients. The MI algorithm proved accurate for CT-MRI and CT-SPECT registrations in phantoms; limiting values were exceeded in one CT-MRI case and never reached in CT-SPECT registrations. Thus, the evaluation of robustness was restricted to the MI algorithm for both CT-MRI and CT-SPECT registrations. The MI algorithm proved to be robust: limiting values were not exceeded with translation perturbations up to 2.5 cm, rotation perturbations up to 10 degrees, and roto-translational perturbations up to 3 cm and 5 degrees.
Robust energy harvesting from walking vibrations by means of nonlinear cantilever beams
NASA Astrophysics Data System (ADS)
Kluger, Jocelyn M.; Sapsis, Themistoklis P.; Slocum, Alexander H.
2015-04-01
In the present work we examine how mechanical nonlinearity can be appropriately utilized to achieve strong robustness of performance in an energy harvesting setting. More specifically, for energy harvesting applications, a great challenge is the uncertain character of the excitation. The combination of this uncertainty with the narrow range of good performance for linear oscillators creates the need for more robust designs that adapt to a wider range of excitation signals. A typical application of this kind is energy harvesting from walking vibrations. Depending on the particular characteristics of the person walking, as well as on the pace of walking, the excitation signal takes completely different forms. In the present work we study a nonlinear spring mechanism that is composed of a cantilever wrapping around a curved surface as it deflects. While for the free cantilever the force acting on the free tip depends linearly on the tip displacement, the utilization of a contact surface with the appropriate distribution of curvature leads to an essentially nonlinear dependence between the tip displacement and the acting force. The studied nonlinear mechanism has favorable mechanical properties such as low frictional losses, minimal moving parts, and a rugged design that can withstand excessive loads. Through numerical simulations we illustrate that by utilizing this essentially nonlinear element in a 2 degrees-of-freedom (DOF) system, we obtain strongly nonlinear energy transfers between the modes of the system. We illustrate that this nonlinear behavior is associated with strong robustness over three radically different excitation signals that correspond to different walking paces. To validate the strong robustness properties of the 2DOF nonlinear system, we perform a direct parameter optimization for 1DOF and 2DOF linear systems, as well as for a class of 1DOF and 2DOF systems with nonlinear springs similar to the cubic spring, that are physically realized by the cantilever-surface mechanism. The optimization results show that the 2DOF nonlinear system presents the best average performance when the excitation signals have three possible forms. Moreover, we observe that while for the linear systems the optimal performance is obtained for small values of the electromagnetic damping, for the 2DOF nonlinear system optimal performance is achieved for large values of damping. This feature is of particular importance for the system's robustness to parasitic damping.
Robust water fat separated dual-echo MRI by phase-sensitive reconstruction.
Romu, Thobias; Dahlström, Nils; Leinhard, Olof Dahlqvist; Borga, Magnus
2017-09-01
The purpose of this work was to develop and evaluate a robust water-fat separation method for T1-weighted symmetric two-point Dixon data. A method for water-fat separation by phase unwrapping of the opposite-phase images by phase-sensitive reconstruction (PSR) is introduced. PSR consists of three steps: (1) identification of clusters of tissue voxels; (2) unwrapping of the phase in each cluster by solving Poisson's equation; and (3) finding the correct sign of each unwrapped opposite-phase cluster, so that the water and fat images are assigned the correct identities. Robustness was evaluated by counting the number of water-fat swap artifacts in a total of 733 image volumes. The method was also compared to commercial software. In the water-fat separated image volumes, the PSR method failed to unwrap the phase of one cluster and misclassified 10. One swap was observed in areas affected by motion and was constricted to the affected area. Twenty swaps were observed surrounding susceptibility artifacts, none of which spread outside the artifact-affected regions. The PSR method had fewer swaps than the commercial software. The PSR method can robustly produce water-fat separated whole-body images based on symmetric two-echo spoiled gradient echo images, under both ideal conditions and in the presence of common artifacts. Magn Reson Med 78:1208-1216, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
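Step (2) of PSR, unwrapping by solving Poisson's equation, is closely related to classical least-squares phase unwrapping. The sketch below implements the generic DCT-based Poisson solve (in the style of Ghiglia and Romero) on a whole image; PSR instead solves it per tissue cluster and adds the sign-determination step, which is not shown.

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(p):
    """Wrap values into (-pi, pi]."""
    return (p + np.pi) % (2 * np.pi) - np.pi

def unwrap_poisson(psi):
    """Least-squares phase unwrapping: form the divergence of the
    wrapped phase gradients and solve the resulting Poisson equation
    with a DCT (Neumann boundary conditions)."""
    dy = wrap(np.diff(psi, axis=0))
    dx = wrap(np.diff(psi, axis=1))
    rho = np.zeros_like(psi)
    rho[:-1, :] += dy; rho[1:, :] -= dy     # divergence, row direction
    rho[:, :-1] += dx; rho[:, 1:] -= dx     # divergence, column direction
    M, N = psi.shape
    D = dctn(rho, norm="ortho")
    i, j = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    denom = 2.0 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2.0)
    D[0, 0] = 0.0        # pin the (unobservable) mean to zero
    denom[0, 0] = 1.0
    return idctn(D / denom, norm="ortho")
```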
Robust and adaptive band-to-band image transform of UAS miniature multi-lens multispectral camera
NASA Astrophysics Data System (ADS)
Jhan, Jyun-Ping; Rau, Jiann-Yeou; Haala, Norbert
2018-03-01
Utilizing miniature multispectral (MS) or hyperspectral (HS) cameras by mounting them on an Unmanned Aerial System (UAS) has the benefits of convenience and flexibility for collecting remote sensing imagery for precision agriculture, vegetation monitoring, and environmental investigation applications. Most miniature MS cameras adopt a multi-lens structure to record discrete MS bands of visible and invisible information. The differences in lens distortion, mounting positions, and viewing angles among lenses mean that the acquired original MS images have significant band misregistration errors. We have developed a Robust and Adaptive Band-to-Band Image Transform (RABBIT) method for the band co-registration of various types of miniature multi-lens multispectral cameras (Mini-MSCs), to obtain band co-registered MS imagery for remote sensing applications. RABBIT utilizes a modified projective transformation (MPT) to transfer the multiple image geometry of a multi-lens imaging system to one sensor geometry, and combines this with a robust and adaptive correction (RAC) procedure to correct several systematic errors and obtain sub-pixel accuracy. This study applies three state-of-the-art Mini-MSCs to evaluate the RABBIT method's performance: the Tetracam Miniature Multiple Camera Array (MiniMCA), the Micasense RedEdge, and the Parrot Sequoia. Six MS datasets acquired at different target distances, dates, and locations are also used to prove its reliability and applicability. The results prove that RABBIT is feasible for different types of Mini-MSCs, with accurate, robust, and rapid image-processing efficiency.
Dera, Dimah; Bouaynaya, Nidhal; Fathallah-Shaykh, Hassan M
2016-07-01
We address the problem of fully automated region discovery and robust image segmentation by devising a new deformable model based on the level set method (LSM) and probabilistic nonnegative matrix factorization (NMF). We describe the use of NMF to calculate the number of distinct regions in the image and to derive the local distribution of the regions, which is incorporated into the energy functional of the LSM. The results demonstrate that our NMF-LSM method is superior to other approaches when applied to synthetic binary and gray-scale images and to clinical magnetic resonance images (MRI) of the human brain with and without a malignant brain tumor, glioblastoma multiforme. In particular, the NMF-LSM method is fully automated, highly accurate, less sensitive to the initial selection of the contour(s) or initial conditions, more robust to noise and model parameters, and able to detect distinct regions as small as desired. These advantages stem from the fact that the proposed method relies on histogram information instead of intensity values and does not introduce nuisance model parameters. These properties provide a general approach for automated, robust region discovery and segmentation in heterogeneous images. Compared with the retrospective radiological diagnoses of two patients with non-enhancing grade 2 and 3 oligodendroglioma, the NMF-LSM method detects earlier progression times and appears suitable for monitoring tumor response. The NMF-LSM method fills an important need for automated segmentation of clinical MRI.
Robust 2DPCA with non-greedy l1 -norm maximization for image analysis.
Wang, Rong; Nie, Feiping; Yang, Xiaojun; Gao, Feifei; Yao, Minli
2015-05-01
2-D principal component analysis based on the l1-norm (2DPCA-L1) is a recently developed approach for robust dimensionality reduction and feature extraction in the image domain. Normally, a greedy strategy is applied because of the difficulty of directly solving the l1-norm maximization problem; such a strategy, however, easily gets stuck in local solutions. In this paper, we propose a robust 2DPCA with non-greedy l1-norm maximization in which all projection directions are optimized simultaneously. Experimental results on face and other datasets confirm the effectiveness of the proposed approach.
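The non-greedy iteration has a compact closed-form update: fix the sign matrices, then solve the resulting orthogonal Procrustes problem by SVD. A minimal Python sketch, assuming centered image matrices of a common width, follows; it is a generic rendering of the non-greedy scheme, not the authors' code.

```python
import numpy as np

def l1_2dpca_nongreedy(X, k, n_iter=50, seed=0):
    """Non-greedy L1-norm 2DPCA: find W (d x k, orthonormal columns)
    maximizing sum_i ||X_i @ W||_1, with all k directions optimized
    jointly. Each step fixes the sign matrices and solves the
    resulting orthogonal Procrustes problem via SVD."""
    d = X[0].shape[1]
    rng = np.random.default_rng(seed)
    W, _ = np.linalg.qr(rng.standard_normal((d, k)))
    for _ in range(n_iter):
        M = sum(Xi.T @ np.sign(Xi @ W) for Xi in X)
        U, _, Vt = np.linalg.svd(M, full_matrices=False)
        W_new = U @ Vt
        if np.allclose(W_new, W, atol=1e-8):
            break
        W = W_new
    return W

# Usage: X is a list of centered image matrices sharing width d, e.g.
# W = l1_2dpca_nongreedy([img - img.mean() for img in images], k=5)
```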
NASA Astrophysics Data System (ADS)
Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.
2018-02-01
In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.
Reversible Watermarking Surviving JPEG Compression.
Zain, J; Clarke, M
2005-01-01
This paper discusses the properties of watermarking medical images. We also discuss the possibility of such images being compressed by JPEG and give an overview of JPEG compression. We then propose a watermarking scheme that is reversible and robust to JPEG compression. The purpose is to verify the integrity and authenticity of medical images. We used 800×600 8-bit ultrasound (US) images in our experiment. The SHA-256 hash of the image is embedded in the least significant bits (LSBs) of an 8×8 block in the Region of Non-Interest (RONI). The image is then compressed using JPEG and decompressed using Photoshop 6.0. If the image has not been altered, the watermark extracted will match the SHA-256 hash of the original image. The results show that the embedded watermark is robust to JPEG compression down to image quality 60 (~91% compressed).
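The fragile LSB part of such a scheme is easy to sketch. The Python fragment below zeroes the LSBs of a block, hashes the image with SHA-256, and writes the 256 digest bits into those LSBs. A 16×16 block is assumed here purely so that 256 pixels match the 256 digest bits; the paper's 8×8 block, reversibility mechanism, and JPEG robustness are not reproduced.

```python
import hashlib
import numpy as np

def embed_hash_lsb(img, top=0, left=0, size=16):
    """Embed SHA-256 of the image into pixel LSBs inside a region of
    non-interest. The block's LSBs are zeroed before hashing so the
    verifier can recompute the same digest. size*size must equal 256."""
    out = img.astype(np.uint8).copy()
    blk = (slice(top, top + size), slice(left, left + size))
    out[blk] &= 0xFE                                  # clear LSBs
    digest = hashlib.sha256(out.tobytes()).digest()   # 32 bytes = 256 bits
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    out[blk] |= bits.reshape(size, size)
    return out

def verify_hash_lsb(img, top=0, left=0, size=16):
    """Recompute the digest with the block LSBs cleared and compare it
    with the bits actually stored in those LSBs."""
    blk = (slice(top, top + size), slice(left, left + size))
    tmp = img.astype(np.uint8).copy()
    extracted = tmp[blk] & 1
    tmp[blk] &= 0xFE
    digest = hashlib.sha256(tmp.tobytes()).digest()
    expected = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    return np.array_equal(extracted.ravel(), expected)
```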
Molina, D.; Pérez-Beteta, J.; Martínez-González, A.; Velásquez, C.; Martino, J.; Luque, B.; Revert, A.; Herruzo, I.; Arana, E.; Pérez-García, V. M.
2017-01-01
Abstract Introduction: Textural analysis refers to a variety of mathematical methods used to quantify the spatial variations in grey levels within images. In brain tumors, textural features have great potential as imaging biomarkers, having been shown to correlate with survival, tumor grade, tumor type, etc. However, these measures should be reproducible under dynamic range and matrix size changes for their clinical use. Our aim is to study this robustness in brain tumors with 3D magnetic resonance imaging, not previously reported in the literature. Materials and methods: 3D T1-weighted images of 20 patients with glioblastoma (64.80 ± 9.12 years old) obtained from a 3T scanner were analyzed. Tumors were segmented using an in-house semi-automatic 3D procedure. A set of 16 3D textural features of the most common types (co-occurrence and run-length matrices) was selected, providing regional (run-length based measures) and local (co-occurrence matrices) information on tumor heterogeneity. Feature robustness was assessed by means of the coefficient of variation (CV) under dynamic range (16, 32 and 64 grey levels) and/or matrix size (256×256 and 432×432) changes. Results: None of the textural features considered was robust under dynamic range changes. The co-occurrence matrix feature Entropy was the only textural feature robust (CV < 10%) under spatial resolution changes. Conclusions: In general, textural measures of three-dimensional brain tumor images are robust neither under dynamic range nor under matrix size changes. Thus, it becomes mandatory to fix standards for image rescaling after acquisition, before the textural features are computed, if they are to be used as imaging biomarkers. For T1-weighted images, a dynamic range of 16 grey levels and a matrix size of 256×256 (and isotropic voxels) is found to provide reliable and comparable results and is feasible with current MRI scanners. The implications of this work go beyond the specific tumor type and MRI sequence studied here and pose the need for standardization in textural feature calculation for oncological images. FUNDING: James S. McDonnell Foundation (USA) 21st Century Science Initiative in Mathematical and Complex Systems Approaches for Brain Cancer [Collaborative award 220020450 and planning grant 220020420], MINECO/FEDER [MTM2015-71200-R], JCCM [PEII-2014-031-P].
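The robustness criterion used here is simply the coefficient of variation of a feature across re-quantized or resampled versions of the same image; a short Python sketch makes the CV < 10% test explicit (the numeric example values are hypothetical).

```python
import numpy as np

def robustness_cv(feature_values):
    """Coefficient of variation (%) of one textural feature computed
    across reconstructions of the same tumor at different dynamic
    ranges / matrix sizes. CV < 10% counts as robust in this study."""
    v = np.asarray(feature_values, dtype=float)
    return 100.0 * v.std(ddof=1) / abs(v.mean())

# e.g. Entropy computed at 16/32/64 grey levels (hypothetical values):
cv = robustness_cv([4.01, 4.05, 3.98])
is_robust = cv < 10.0
```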
Detection of endometrial lesions by degree of linear polarization maps
NASA Astrophysics Data System (ADS)
Kim, Jihoon; Fazleabas, Asgerally; Walsh, Joseph T.
2010-02-01
Endometriosis is one of the most common causes of chronic pelvic pain and infertility and is characterized by the presence of endometrial glands and stroma outside the uterine cavity. A novel laparoscopic polarization imaging system was designed to detect endometriosis by imaging endometrial lesions. Linearly polarized light with varying incident polarization angles illuminated the lesions. Degree-of-linear-polarization image maps of the lesions were constructed from the remitted polarized light and compared with regular laparoscopy images. The degree-of-linear-polarization map contributed to the detection of endometriosis by revealing structures inside the lesion. Rotating the incident polarization angle (IPA) of the linearly polarized light provides an extended understanding of endometrial lesions. The developed polarization system with varying IPA and the collected image maps could provide improved characterization of endometrial lesions via higher visibility of lesion structure, and thereby improve the diagnosis of endometriosis.
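For reference, a degree-of-linear-polarization map is conventionally computed from the linear Stokes parameters measured behind an analyser at four orientations. The Python sketch below shows this standard computation; the paper's system additionally rotates the incident polarization angle, which is not modeled here.

```python
import numpy as np

def dolp(i0, i45, i90, i135):
    """Degree-of-linear-polarization map from four intensity images
    taken behind a linear analyser at 0, 45, 90, and 135 degrees,
    via the linear Stokes parameters S0, S1, S2."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    return np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)
```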
NASA Astrophysics Data System (ADS)
Li, Guo; Xia, Jun; Li, Lei; Wang, Lidai; Wang, Lihong V.
2015-03-01
Linear transducer arrays are readily available for ultrasonic detection in photoacoustic computed tomography. They offer low cost, hand-held convenience, and conventional ultrasonic imaging. However, the elevational resolution of linear transducer arrays, which is usually determined by the weak focus of the cylindrical acoustic lens, is about one order of magnitude worse than the in-plane axial and lateral spatial resolutions. Therefore, conventional linear scanning along the elevational direction cannot provide high-quality three-dimensional photoacoustic images due to the anisotropic spatial resolutions. Here we propose an innovative method to achieve isotropic resolutions for three-dimensional photoacoustic images through combined linear and rotational scanning. In each scan step, we first elevationally scan the linear transducer array, and then rotate the linear transducer array along its center in small steps, and scan again until 180 degrees have been covered. To reconstruct isotropic three-dimensional images from the multiple-directional scanning dataset, we use the standard inverse Radon transform originating from X-ray CT. We acquired a three-dimensional microsphere phantom image through the inverse Radon transform method and compared it with a single-elevational-scan three-dimensional image. The comparison shows that our method improves the elevational resolution by up to one order of magnitude, approaching the in-plane lateral-direction resolution. In vivo rat images were also acquired.
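Because each rotation angle contributes one elevational projection, the reconstruction step is a textbook filtered back-projection. A minimal sketch using scikit-image's inverse Radon transform follows, with the sinogram layout assumed as (detector, angle); the acquisition geometry and photoacoustic specifics are not modeled.

```python
import numpy as np
from skimage.transform import iradon

def reconstruct_elevational_slice(sinogram, angles_deg):
    """Recover an isotropic elevational slice from projections gathered
    by rotating the linear array: filtered back-projection (the inverse
    Radon transform) combines the per-angle projections, as in X-ray CT.

    sinogram: array of shape (n_detectors, n_angles), angles over 0-180.
    """
    return iradon(sinogram, theta=angles_deg,
                  filter_name="ramp", circle=True)
```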
Zhou, Ruixi; Huang, Wei; Yang, Yang; Chen, Xiao; Weller, Daniel S; Kramer, Christopher M; Kozerke, Sebastian; Salerno, Michael
2018-02-01
Cardiovascular magnetic resonance (CMR) stress perfusion imaging provides important diagnostic and prognostic information in coronary artery disease (CAD). Current clinical sequences have limited temporal and/or spatial resolution and incomplete heart coverage. Techniques such as k-t principal component analysis (PCA) or k-t sparsity and low-rank structure (SLR), which rely on the high degree of spatiotemporal correlation in first-pass perfusion data, can significantly accelerate image acquisition, mitigating these problems. However, in the presence of respiratory motion, these techniques can suffer significant degradation of image quality. A number of techniques based on non-rigid registration have been developed; however, to first approximation, breathing motion predominantly results in rigid motion of the heart. Accordingly, a simple, robust motion correction strategy is proposed for k-t accelerated and compressed sensing (CS) perfusion imaging. A respiratory motion compensation (MC) strategy that selectively corrects respiratory motion of the heart was implemented based on linear k-space phase shifts derived from rigid motion registration of a region of interest (ROI) encompassing the heart. A variable-density Poisson disk acquisition strategy was used to minimize coherent aliasing in the presence of respiratory motion, and images were reconstructed using k-t PCA and k-t SLR with or without motion correction. The strategy was evaluated in a CMR-extended cardiac torso digital (XCAT) phantom and in prospectively acquired first-pass perfusion studies in 12 subjects undergoing clinically ordered CMR studies. Phantom studies were assessed using the Structural Similarity Index (SSIM) and Root Mean Square Error (RMSE). In patient studies, image quality was scored in a blinded fashion by two experienced cardiologists. In the phantom experiments, images reconstructed with the MC strategy had higher SSIM (p < 0.01) and lower RMSE (p < 0.01) in the presence of respiratory motion. For patient studies, the MC strategy improved k-t PCA and k-t SLR reconstruction image quality (p < 0.01). Without motion correction, k-t SLR demonstrated improved image quality compared with k-t PCA in the setting of respiratory motion (p < 0.01), while with motion correction there was a trend toward better performance of k-t SLR compared with motion-corrected k-t PCA. Our simple and robust rigid motion compensation strategy greatly reduces motion artifacts and improves image quality for standard k-t PCA and k-t SLR techniques in the setting of respiratory motion due to imperfect breath-holding.
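The core correction, a rigid in-plane translation applied as a linear phase ramp in k-space, follows directly from the Fourier shift theorem. A minimal single-coil, single-frame Python sketch is shown below; the ROI-restricted registration, Poisson-disk sampling, and the k-t reconstructions themselves are not included.

```python
import numpy as np

def kspace_shift(kspace, dx, dy):
    """Apply a rigid in-plane translation of (dx, dy) pixels as a
    linear phase ramp in k-space (Fourier shift theorem)."""
    ny, nx = kspace.shape
    ky = np.fft.fftfreq(ny)[:, None]   # cycles/pixel along rows
    kx = np.fft.fftfreq(nx)[None, :]   # cycles/pixel along columns
    ramp = np.exp(-2j * np.pi * (kx * dx + ky * dy))
    return kspace * ramp

# Image-domain check of the shift:
# shifted = np.fft.ifft2(kspace_shift(np.fft.fft2(img), 3.0, -2.0))
```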
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, J; Qi, H; Wu, S
Purpose: In transmitted X-ray tomography imaging, projections are sometimes incomplete for a variety of reasons, such as geometry inaccuracy, defective detector cells, etc. To address this issue, we have derived a direct consistency condition based on John's equation, and we propose a method to effectively restore incomplete projections based on this condition. Methods: Through parameter substitutions, we derived a direct consistency condition equation from John's equation, in which the left side is only the projection derivative with respect to view and the right side involves projection derivatives with respect to the other geometrical parameters. Based on this consistency condition, a projection restoration method is proposed, which consists of five steps: 1) forward-project the reconstructed image and use linear interpolation to estimate the incomplete projections as the initial result; 2) perform a Fourier transform on the projections; 3) restore the incomplete frequency data using the consistency condition equation; 4) perform an inverse Fourier transform; 5) repeat steps 2)-4) until our criterion for terminating the iteration is met. Results: A beam-blocking-based scatter correction case and a bad-pixel correction case were used to demonstrate the efficacy and robustness of our restoration method. The mean absolute error (MAE), signal-to-noise ratio (SNR) and mean square error (MSE) were employed as evaluation metrics for the reconstructed images. For the scatter correction case, the MAE is reduced from 63.3% to 71.7% with 4 iterations. Compared with the existing Patch method, the MAE of our method is further reduced by 8.72%. For the bad-pixel case, the SNR of the reconstructed image by our method is increased from 13.49% to 21.48%, with the MSE decreased by 45.95% compared with the linear interpolation method. Conclusion: Our studies demonstrate that the restoration method based on the new consistency condition can effectively restore incomplete projections, especially their high-frequency components.
Perriñez, Phillip R.; Kennedy, Francis E.; Van Houten, Elijah E. W.; Weaver, John B.; Paulsen, Keith D.
2010-01-01
Magnetic Resonance Poroelastography (MRPE) is introduced as an alternative to single-phase model-based elastographic reconstruction methods. A three-dimensional (3D) finite element poroelastic inversion algorithm was developed to recover the mechanical properties of fluid-saturated tissues. The performance of this algorithm was assessed through a variety of numerical experiments, using synthetic data to probe its stability and sensitivity to the relevant model parameters. Preliminary results suggest the algorithm is robust in the presence of noise and capable of producing accurate assessments of the underlying mechanical properties in simulated phantoms. Further, a 3D time-harmonic motion field was recorded for a poroelastic phantom containing a single cylindrical inclusion and used to assess the feasibility of MRPE image reconstruction from experimental data. The elastograms obtained from the proposed poroelastic algorithm demonstrate significant improvement over linearly elastic MRE images generated using the same data. In addition, MRPE offers the opportunity to estimate the time-harmonic pressure field resulting from tissue excitation, highlighting the potential for its application in the diagnosis and monitoring of disease processes associated with changes in interstitial pressure. PMID:20199912
Xue, Zhong; Shen, Dinggang; Li, Hai; Wong, Stephen
2010-01-01
The traditional fuzzy clustering algorithm and its extensions have been successfully applied in medical image segmentation. However, because of the variability of tissues and anatomical structures, the clustering results might be biased by tissue population and intensity differences. For example, clustering-based algorithms tend to over-segment the white matter tissues of MR brain images. To solve this problem, we introduce a tissue probability map constrained clustering algorithm and apply it to serial MR brain image segmentation, i.e., segmentation of a series of 3-D MR brain images of the same subject at different time points. Using the new serial image segmentation algorithm within the CLASSIC framework, which iteratively segments the images and estimates the longitudinal deformations, we improved both the accuracy and the robustness of serial image computing and at the same time produced longitudinally consistent segmentations and stable measures. In the algorithm, the tissue probability maps consist of both population-based and subject-specific segmentation priors. An experimental study using both simulated longitudinal MR brain data and Alzheimer's Disease Neuroimaging Initiative (ADNI) data confirmed that using both priors yields more accurate and robust segmentation results. The proposed algorithm can be applied in longitudinal follow-up studies of MR brain imaging with subtle morphological changes in neurological disorders. PMID:26566399
Two-dimensional phase unwrapping using robust derivative estimation and adaptive integration.
Strand, Jarle; Taxt, Torfinn
2002-01-01
The adaptive integration (ADI) method for two-dimensional (2-D) phase unwrapping is presented. The method uses an algorithm for noise-robust estimation of partial derivatives, followed by a noise-robust adaptive integration process. The ADI method can easily unwrap phase images with moderate noise levels, and the resulting images are congruent modulo 2π with the observed, wrapped input images. In a quantitative evaluation, both the ADI and the BLS method (Strand et al.) were better than the least-squares methods of Ghiglia and Romero (GR) and of Marroquin and Rivera (MRM). In a qualitative evaluation, the ADI, the BLS, and a conjugate gradient version of the MRM method (MRMCG) were all compared using a synthetic image with shear, 115 magnetic resonance images, and 22 fiber-optic interferometry images. For the synthetic image and the interferometry images, the ADI method gave consistently better visual results than the other methods. For the MR images, the MRMCG method was best, and the ADI method second best. The ADI method was less sensitive to the mask definition and the block size than the BLS method, and it successfully unwrapped images with shears that were not marked in the masks. The computational requirements of the ADI method for images of nonrectangular objects were comparable to only two iterations of many least-squares-based methods (e.g., GR). We believe the ADI method provides a powerful addition to the ensemble of tools available for 2-D phase unwrapping.
Quantifying and visualizing variations in sets of images using continuous linear optimal transport
NASA Astrophysics Data System (ADS)
Kolouri, Soheil; Rohde, Gustavo K.
2014-03-01
Modern advancements in imaging devices have enabled us to explore the subcellular structure of living organisms and extract vast amounts of information. However, interpreting the biological information mined from the captured images is not a trivial task. Utilizing predetermined numerical features is usually the only hope for quantifying this information. Nonetheless, direct visual or biological interpretation of results obtained from these selected features is non-intuitive and difficult. In this paper, we describe an automatic method for modeling visual variations in a set of images, which allows for direct visual interpretation of the most significant differences, without the need for predefined features. The method is based on a linearized version of the continuous optimal transport (OT) metric, which provides a natural linear embedding for the image data set, in which a linear combination of images leads to a visually meaningful image. This enables us to apply linear geometric data analysis techniques, such as principal component analysis and linear discriminant analysis, in the linearly embedded space and to visualize the most prominent modes, as well as the most discriminant modes, of variation in the dataset. Using the continuous OT framework, we are able to analyze variations in shape and texture in a set of images utilizing each image at full resolution, which cannot otherwise be done with existing methods. The proposed method is applied to a set of nuclei images segmented from Feulgen-stained liver tissues in order to investigate the major visual differences in chromatin distribution of Fetal-Type Hepatoblastoma (FHB) cells compared to normal cells.
Linear Quantum Systems: Non-Classical States and Robust Stability
2016-06-29
This project studied quantum linear systems subject to non-classical quantum fields. Its major outcomes are (i) the derivation of quantum filtering equations for systems with non-classical input states, including single-photon states, and (ii) the determination of how linear … Quantum filtering has a history going back some 50 years, to the birth of modern control theory with Kalman's foundational work on filtering and LQG optimal control.
Multi-Objective Memetic Search for Robust Motion and Distortion Correction in Diffusion MRI.
Hering, Jan; Wolf, Ivo; Maier-Hein, Klaus H
2016-10-01
Effective image-based artifact correction is an essential step in the analysis of diffusion MR images. Many current approaches are based on retrospective registration, which becomes challenging in the realm of high b-values and low signal-to-noise ratio, rendering the corresponding correction schemes more and more ineffective. We propose a novel registration scheme based on memetic search optimization that allows for the simultaneous exploitation of different signal intensity relationships between the images, leading to more robust registration results. We demonstrate the increased robustness and efficacy of our method on simulated as well as in vivo datasets. In contrast to state-of-the-art methods, the median target registration error (TRE) stayed below the voxel size even for high b-values (3000 s/mm² and higher) and low-SNR conditions. We also demonstrate the increased precision of diffusion-derived quantities by evaluating Neurite Orientation Dispersion and Density Imaging (NODDI) derived measures on an in vivo dataset with severe motion artifacts. These promising results will potentially inspire further studies on metaheuristic optimization in diffusion MRI artifact correction and image registration in general.
Multi-Satellite Observation Scheduling for Large Area Disaster Emergency Response
NASA Astrophysics Data System (ADS)
Niu, X. N.; Tang, H.; Wu, L. X.
2018-04-01
Generating an optimal imaging plan plays a key role in coordinating multiple satellites to monitor a disaster area. In this paper, to generate the imaging plan dynamically as disaster relief proceeds, we propose a dynamic satellite task scheduling method for large-area disaster response. First, an initial robust scheduling scheme is generated by a robust satellite scheduling model in which both the profit and the robustness of the schedule are simultaneously maximized. Then, we use a multi-objective optimization model to obtain a series of decomposing schemes. Based on the initial imaging plan, we propose a mixed optimization algorithm named HA_NSGA-II to allocate the decomposition results and thus obtain an adjusted imaging schedule. A real disaster scenario, the 2008 Wenchuan earthquake, is revisited in terms of rapid response using satellite resources and is used to evaluate the performance of the proposed method against state-of-the-art approaches. We conclude that our satellite scheduling model can optimize the usage of satellite resources so as to obtain images for disaster response in a more timely and efficient manner.
Kopriva, Ivica; Persin, Antun; Puizina-Ivić, Neira; Mirić, Lina
2010-07-02
This study was designed to demonstrate the robust performance of a novel dependent component analysis (DCA)-based approach to demarcation of basal cell carcinoma (BCC) through unsupervised decomposition of the red-green-blue (RGB) fluorescent image of the BCC. Robustness to intensity fluctuation is due to the scale-invariance property of DCA algorithms, which exploit spectral and spatial diversities between the BCC and the surrounding tissue. The filtering-based DCA approach used here represents an extension of independent component analysis (ICA) and is necessary to account for the statistical dependence induced by the spectral similarity between the BCC and surrounding tissue. This similarity generates weak edges, which pose a challenge for other segmentation methods as well. By comparative performance analysis with state-of-the-art image segmentation methods such as active contours (level set), K-means clustering, nonnegative matrix factorization, ICA, and ratio imaging, we experimentally demonstrate good performance of DCA-based BCC demarcation in two demanding scenarios in which the intensity of the fluorescent image is varied by almost two orders of magnitude. Copyright 2010 Elsevier B.V. All rights reserved.
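Since DCA is presented as an extension of ICA, the ICA baseline it builds on can be sketched in a few lines: treat each pixel's RGB vector as a mixture of source "tissue" images and unmix it. The fragment below uses scikit-learn's FastICA; it illustrates only the starting point, not the dependent-component extension.

```python
import numpy as np
from sklearn.decomposition import FastICA

def demarcate_ica(rgb):
    """ICA baseline for lesion demarcation: unmix the per-pixel RGB
    vectors into three spatial source images; under favorable spectral
    diversity, one source channel should isolate the lesion."""
    h, w, _ = rgb.shape
    X = rgb.reshape(-1, 3).astype(float)
    sources = FastICA(n_components=3, random_state=0).fit_transform(X)
    return sources.reshape(h, w, 3)
```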
Homographic Patch Feature Transform: A Robustness Registration for Gastroscopic Surgery.
Hu, Weiling; Zhang, Xu; Wang, Bin; Liu, Jiquan; Duan, Huilong; Dai, Ning; Si, Jianmin
2016-01-01
Image registration is a key component of computer assistance in image-guided surgery, and it is a challenging topic in endoscopic environments. In this study, we present an image registration method named Homographic Patch Feature Transform (HPFT) for matching gastroscopic images. HPFT can be used for tracking lesions and for augmented-reality applications during gastroscopy. Furthermore, an overall evaluation scheme is proposed to validate the precision, robustness, and uniformity of the registration results, providing a standard for the rejection of false matching pairs. Finally, HPFT is applied to in vivo gastroscopic data. The experimental results show that HPFT performs stably in gastroscopic applications.
Transfer Alignment Error Compensator Design Based on Robust State Estimation
NASA Astrophysics Data System (ADS)
Lyou, Joon; Lim, You-Chol
This paper examines the transfer alignment problem of the StrapDown Inertial Navigation System (SDINS), which is subject to the ship's roll and pitch. The major error sources for velocity and attitude matching are the lever-arm effect, measurement time delay and ship-body flexure. To reduce these alignment errors, an error compensation method based on state augmentation and robust state estimation is devised. A linearized error model for the velocity- and attitude-matching transfer alignment system is derived first by linearizing the nonlinear measurement equation with respect to its time delay and the dominant Y-axis flexure, and by augmenting the delay state and flexure state into the conventional linear state equations. Then an H∞ filter is introduced to account for modeling uncertainties in the time delay and the ship-body flexure. The simulation results show that this method considerably decreases the azimuth alignment errors.
Robust consensus control with guaranteed rate of convergence using second-order Hurwitz polynomials
NASA Astrophysics Data System (ADS)
Fruhnert, Michael; Corless, Martin
2017-10-01
This paper considers homogeneous networks of general, linear time-invariant, second-order systems. We consider linear feedback controllers and require that the directed graph associated with the network contains a spanning tree and that the systems are stabilisable. We show that consensus with a guaranteed rate of convergence can always be achieved using linear state feedback. To achieve this, we provide a new and simple derivation of the conditions for a second-order polynomial with complex coefficients to be Hurwitz. We apply this result to obtain necessary and sufficient conditions to achieve consensus with networks whose graph Laplacian matrix may have complex eigenvalues. Based on the conditions found, methods to compute feedback gains are proposed. We show that gains can be chosen such that consensus is achieved robustly over a variety of communication structures and system dynamics. We also consider the use of static output feedback.
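The closed-form conditions derived in the paper are not restated here; as a hedged numerical stand-in, a complex-coefficient quadratic can be tested for the Hurwitz property directly from its roots.

```python
# Brute-force Hurwitz test: all roots strictly in the open left half-plane.
import numpy as np

def is_hurwitz(coeffs, tol=1e-12):
    """coeffs: highest order first, may be complex, e.g. s^2 + a1 s + a0."""
    return bool(np.all(np.roots(coeffs).real < -tol))

# Example: s^2 + (2 + 1j) s + (1 + 0.5j), complex coefficients as arise
# from graph Laplacians with complex eigenvalues.
print(is_hurwitz([1, 2 + 1j, 1 + 0.5j]))   # True: both roots have Re < 0
```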
A robust background regression based score estimation algorithm for hyperspectral anomaly detection
NASA Astrophysics Data System (ADS)
Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei
2016-12-01
Anomaly detection has become a hot topic in hyperspectral image analysis and processing in recent years. The most important issue for hyperspectral anomaly detection is background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may obscure the intrinsic data structure in anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is essentially a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, forming the foundation of the regression. Furthermore, a manifold regularization term, which exploits the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by potential anomalies, are jointly appended in the RBR procedure. Finally, a paired-dataset based k-NN score estimation method is applied to the robust background and potential anomaly datasets to produce the detection output. The experimental results show that RBRSE achieves superior ROC curves, AUC values, and background-anomaly separation compared with other state-of-the-art anomaly detection methods, and is easy to implement in practice.
NASA Astrophysics Data System (ADS)
Cheng, Xiang-Qin; Qu, Jing-Yuan; Yan, Zhe-Ping; Bian, Xin-Qian
2010-03-01
In order to improve the security and reliability of autonomous underwater vehicle (AUV) navigation, an H∞ robust fault-tolerant controller was designed after analyzing variations in the state-feedback gain. Operating conditions and the design method were then analyzed so that the control problem could be expressed as a mathematical optimization problem. This permitted the use of linear matrix inequalities (LMIs) to solve for the H∞ controller for the system. Different actuator failure modes were then also expressed mathematically, allowing the H∞ robust controller to accommodate these events and thus be fault-tolerant. Finally, simulation results showed that the H∞ robust fault-tolerant controller could provide precise AUV navigation control with strong robustness.
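The full H∞ fault-tolerant synthesis is beyond a short sketch, but the LMI machinery it rests on can be illustrated: below is the standard Lyapunov stabilization LMI with the change of variables Y = KX, solved with CVXPY. The system matrices are toy placeholders, not an AUV model.

```python
# Stabilizing state feedback via an LMI: find X > 0, Y with
# A X + X A' + B Y + Y' B' < 0, then K = Y X^{-1}.
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0], [2.0, -1.0]])   # toy unstable plant
B = np.array([[0.0], [1.0]])
n, m = B.shape

X = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))
eps = 1e-6
constraints = [X >> eps * np.eye(n),
               A @ X + X @ A.T + B @ Y + Y.T @ B.T << -eps * np.eye(n)]
cp.Problem(cp.Minimize(0), constraints).solve()

K = Y.value @ np.linalg.inv(X.value)       # stabilizing gain
print(np.linalg.eigvals(A + B @ K))        # all real parts negative
```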
Evaluating a robust contour tracker on echocardiographic sequences.
Jacob, G; Noble, J A; Mulet-Parada, M; Blake, A
1999-03-01
In this paper we present an evaluation of a robust visual image tracker on echocardiographic image sequences. We show how the tracking framework can be customized to define an appropriate shape space describing heart shape deformations, which can be learnt from a training data set. We also investigate energy-based temporal boundary enhancement methods to improve image feature measurement. Results are presented demonstrating real-time tracking on real sequences of normal heart motion, and on synthesized and real sequences of abnormal heart motion. We conclude by discussing some of our current research efforts.
Application of separable parameter space techniques to multi-tracer PET compartment modeling.
Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J
2016-02-07
Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
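The essence of the separable formulation can be illustrated on a toy two-exponential model: the linear amplitudes are eliminated by an inner least-squares solve, so the outer search runs only over the nonlinear rate constants. This hedged sketch is not the paper's multi-tracer solution equations; data and names are synthetic.

```python
# Variable projection (separable least squares) on a toy time-activity curve.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.1, 60, 120)
y = 3.0 * np.exp(-0.05 * t) + 1.5 * np.exp(-0.8 * t)    # synthetic "TAC"

def projected_residual(rates):
    k1, k2 = np.abs(rates)
    basis = np.column_stack([np.exp(-k1 * t), np.exp(-k2 * t)])
    amps, *_ = np.linalg.lstsq(basis, y, rcond=None)     # linear inner solve
    return np.sum((basis @ amps - y) ** 2)

res = minimize(projected_residual, x0=[0.1, 1.0], method="Nelder-Mead")
print(np.abs(res.x))   # recovered rates; only 2 nonlinear unknowns remain
```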
NASA Astrophysics Data System (ADS)
Verrelst, Jochem; Malenovský, Zbyněk; Van der Tol, Christiaan; Camps-Valls, Gustau; Gastellu-Etchegorry, Jean-Philippe; Lewis, Philip; North, Peter; Moreno, Jose
2018-06-01
An unprecedented spectroscopic data stream will soon become available with forthcoming Earth-observing satellite missions equipped with imaging spectroradiometers. This data stream will open up a vast array of opportunities to quantify a diversity of biochemical and structural vegetation properties. The processing requirements for such large data streams require reliable retrieval techniques enabling the spatiotemporally explicit quantification of biophysical variables. With the aim of preparing for this new era of Earth observation, this review summarizes the state-of-the-art retrieval methods that have been applied in experimental imaging spectroscopy studies inferring a wide range of vegetation biophysical variables. The identified retrieval methods are categorized into: (1) parametric regression, including vegetation indices, shape indices and spectral transformations; (2) nonparametric regression, including linear and nonlinear machine learning regression algorithms; (3) physically based methods, including inversion of radiative transfer models (RTMs) using numerical optimization and look-up table approaches; and (4) hybrid regression methods, which combine RTM simulations with machine learning regression. For each of these categories, an overview of widely applied methods for mapping vegetation properties is given. A critical aspect of processing imaging spectroscopy data is the challenge of dealing with spectral multicollinearity. The ability to provide robust estimates, retrieval uncertainties and acceptable retrieval processing speed are other important aspects for operational processing. Recommendations towards new-generation spectroscopy-based processing chains for operational production of biophysical variables are given.
Research on registration algorithm for check seal verification
NASA Astrophysics Data System (ADS)
Wang, Shuang; Liu, Tiegen
2008-03-01
Nowadays seals play an important role in China. With the development of the social economy, the traditional method of manual check seal identification can no longer meet the needs of banking transactions. This paper focuses on pre-processing and registration algorithms for check seal verification, using the theory of image processing and pattern recognition. First, the complex characteristics of check seals are analyzed. To eliminate differences in producing conditions and the disturbance caused by background and writing in the check image, several methods are used in the pre-processing stage, such as color component transformation, linear transformation to a gray-scale image, median filtering, Otsu thresholding, and the closing and labeling operations of mathematical morphology. After these processes, a clean binary seal image is obtained. On the basis of the traditional registration algorithm, a two-level registration method comprising rough and precise registration is proposed. The precise registration resolves the deflection angle to within 0.1°. This paper introduces the concepts of inside difference and outside difference, and uses their percentages to judge whether a seal is genuine or forged. Experimental results on a large set of check seals are satisfactory, showing that the presented methods and algorithms are robust to noisy sealing conditions and tolerate within-class variation well.
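A hedged sketch of the two-level rotational registration idea follows: a coarse 1° sweep and then a 0.1° refinement around the best coarse angle, scoring foreground overlap between binarized seals. Thresholds and names are illustrative, not the paper's implementation.

```python
# Rough-then-precise rotation registration of binary seal images.
import numpy as np
from scipy import ndimage

def overlap(ref, test, angle):
    rot = ndimage.rotate(test.astype(np.uint8), angle, reshape=False, order=0)
    return np.sum((ref > 0) & (rot > 0))       # matched foreground pixels

def register_rotation(ref, test):
    coarse = max(range(-180, 180), key=lambda a: overlap(ref, test, a))
    fine = np.arange(coarse - 1, coarse + 1, 0.1)      # 0.1-degree resolution
    return fine[np.argmax([overlap(ref, test, a) for a in fine])]
```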
Vehicle license plate recognition in dense fog based on improved atmospheric scattering model
NASA Astrophysics Data System (ADS)
Tang, Chunming; Lin, Jun; Chen, Chunkai; Dong, Yancheng
2018-04-01
An effective method based on an improved atmospheric scattering model is proposed in this paper to handle the problem of vehicle license plate location and recognition in dense fog. Dense fog detection is performed first by top-hat transformation and vertical edge detection, and the moving vehicle image is separated from the traffic video image. After the vehicle image is decomposed into structure and texture layers, the glow layer is separated from the structure layer to obtain the background layer. Mean-pooling and bicubic interpolation are then applied to predict the atmospheric light map of the background layer, while the transmission of the background layer is estimated from the grayed glow layer, whose gray values are adjusted by linear mapping. Then, according to the improved atmospheric scattering model, the final restored image is obtained by fusing the restored background layer with the optimized texture layer. License plate location is performed next by a series of morphological operations, connected-domain analysis and various validations. Character extraction is achieved by projection. Finally, an offline-trained pattern classifier of hybrid discriminative restricted Boltzmann machines (HDRBM) is applied to recognize the characters. Experimental results on thorough data sets demonstrate that the proposed method achieves high recognition accuracy and works robustly in dense-fog traffic environments around the clock.
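The improved model builds on the classic atmospheric scattering equation I(x) = J(x)t(x) + A(1 - t(x)); as a hedged baseline, the sketch below inverts that equation given estimated atmospheric light and transmission maps. The paper's layer decomposition and estimation steps are not reproduced.

```python
# Invert the atmospheric scattering model to recover scene radiance J.
import numpy as np

def restore(I, A, t, t_min=0.1):
    """I: hazy image, A: atmospheric light map, t: transmission map (floats in [0,1])."""
    t = np.clip(t, t_min, 1.0)                 # avoid amplifying noise where t ~ 0
    return np.clip((I - A) / t + A, 0.0, 1.0)  # J(x) = (I(x) - A) / t(x) + A
```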
NASA Astrophysics Data System (ADS)
Ding, Xuemei; Wang, Bingyuan; Liu, Dongyuan; Zhang, Yao; He, Jie; Zhao, Huijuan; Gao, Feng
2018-02-01
During the past two decades there has been a dramatic rise in the use of functional near-infrared spectroscopy (fNIRS) as a neuroimaging technique in cognitive neuroscience research. Diffuse optical tomography (DOT) and optical topography (OT) can be employed as optical imaging techniques for investigating brain activity. However, most current imagers with analogue detection are limited in sensitivity and dynamic range. Although photon-counting detection can significantly improve detection sensitivity, the intrinsic nature of sequential excitation reduces temporal resolution. To improve temporal resolution, sensitivity and dynamic range, we developed a multi-channel continuous-wave (CW) system for brain functional imaging based on a novel lock-in photon-counting technique. The system consists of 60 light-emitting diode (LED) sources at three wavelengths of 660 nm, 780 nm and 830 nm, which are modulated by current-stabilized square-wave signals at different frequencies, and 12 photomultiplier tubes (PMTs) operating with the lock-in photon-counting technique. This design combines the ultra-high sensitivity of photon counting with the parallelism of digital lock-in detection. We can therefore acquire the diffused light intensity for all source-detector pairs (SD-pairs) in parallel. Performance assessments of the system were conducted using phantom experiments, and demonstrate its excellent measurement linearity, negligible inter-channel crosstalk, strong noise robustness and high temporal resolution.
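A hedged sketch of the digital lock-in idea: each LED is tagged by its modulation frequency, and the binned photon-count stream is correlated against a reference at that frequency to recover the per-source intensity. A phase-aligned reference is assumed; a quadrature channel would handle unknown phase, and the actual hardware pipeline differs.

```python
# Digital lock-in demodulation of a photon-count stream (toy version).
import numpy as np

def lockin_demodulate(counts, fs, source_freqs):
    """counts: photon counts per time bin; fs: bin rate (Hz)."""
    t = np.arange(len(counts)) / fs
    out = {}
    for f in source_freqs:
        ref = np.sign(np.sin(2 * np.pi * f * t))    # square-wave reference
        out[f] = 2 * np.mean(counts * ref)          # in-phase amplitude per source
    return out
```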
2D-3D registration using gradient-based MI for image guided surgery systems
NASA Astrophysics Data System (ADS)
Yim, Yeny; Chen, Xuanyi; Wakid, Mike; Bielamowicz, Steve; Hahn, James
2011-03-01
Registration of preoperative CT data to intra-operative video images is necessary not only to compare the outcome of the vocal fold after surgery with the preplanned shape but also to provide image guidance for fusion of all imaging modalities. We propose a 2D-3D registration method using gradient-based mutual information. The 3D CT scan is aligned to 2D endoscopic images by finding the corresponding viewpoint between the real camera for the endoscopic images and the virtual camera for the CT scans. Even though mutual information has been used successfully to register different imaging modalities, it is difficult to robustly register the CT-rendered image to the endoscopic image due to varying light patterns and the shape of the vocal fold. The proposed method calculates the mutual information in the gradient images as well as the original images, assigning more weight to high-gradient regions. The proposed method can emphasize the vocal fold and allows robust matching regardless of the surface illumination. To find the viewpoint with maximum mutual information, a downhill simplex method is applied in a conditional multi-resolution scheme, which leads to a result less sensitive to local maxima. To validate the registration accuracy, we evaluated the sensitivity to the initial viewpoint of the preoperative CT. Experimental results showed that gradient-based mutual information provided robust matching not only for two identical images with different viewpoints but also for different images acquired before and after surgery. The results also showed that the conditional multi-resolution scheme led to more accurate registration than a single-resolution scheme.
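As a hedged sketch of the similarity measure, mutual information can be computed from a joint histogram, with a gradient-magnitude weight emphasizing high-gradient regions; the particular weighting below is a simplified stand-in for the paper's scheme.

```python
# Mutual information from a (optionally gradient-weighted) joint histogram.
import numpy as np
from scipy import ndimage

def mutual_information(a, b, bins=32, gradient_weight=True):
    w = None
    if gradient_weight:
        g = (ndimage.sobel(a.astype(float), 0) ** 2 +
             ndimage.sobel(a.astype(float), 1) ** 2) ** 0.5
        w = g.ravel() / (g.sum() + 1e-12)    # emphasize high-gradient pixels
    pab, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins, weights=w)
    pab /= pab.sum()
    pa, pb = pab.sum(axis=1), pab.sum(axis=0)
    nz = pab > 0
    return np.sum(pab[nz] * np.log(pab[nz] / np.outer(pa, pb)[nz]))
```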
Onboard Image Registration from Invariant Features
NASA Technical Reports Server (NTRS)
Wang, Yi; Ng, Justin; Garay, Michael J.; Burl, Michael C.
2008-01-01
This paper describes a feature-based image registration technique that is potentially well-suited for onboard deployment. The overall goal is to provide a fast, robust method for dynamically combining observations from multiple platforms into sensor webs that respond quickly to short-lived events and provide rich observations of objects that evolve in space and time. The approach, which has enjoyed considerable success in mainstream computer vision applications, uses invariant SIFT descriptors extracted at image interest points together with the RANSAC algorithm to robustly estimate transformation parameters that relate one image to another. Experimental results for two satellite image registration tasks are presented: (1) automatic registration of images from the MODIS instrument on Terra to the MODIS instrument on Aqua and (2) automatic stabilization of a multi-day sequence of GOES-West images collected during the October 2007 Southern California wildfires.
Widely accessible method for superresolution fluorescence imaging of living systems
Dedecker, Peter; Mo, Gary C. H.; Dertinger, Thomas; Zhang, Jin
2012-01-01
Superresolution fluorescence microscopy overcomes the diffraction resolution barrier and allows the molecular intricacies of life to be revealed with greatly enhanced detail. However, many current superresolution techniques still face limitations and their implementation is typically associated with a steep learning curve. Patterned illumination-based superresolution techniques [e.g., stimulated emission depletion (STED), reversible saturable optical fluorescence transitions (RESOLFT), and saturated structured illumination microscopy (SSIM)] require specialized equipment, whereas single-molecule-based approaches [e.g., stochastic optical reconstruction microscopy (STORM), photo-activation localization microscopy (PALM), and fluorescence-PALM (F-PALM)] involve repetitive single-molecule localization, which requires its own set of expertise and is also temporally demanding. Here we present a superresolution fluorescence imaging method, photochromic stochastic optical fluctuation imaging (pcSOFI). In this method, irradiating a reversibly photoswitching fluorescent protein at an appropriate wavelength produces robust single-molecule intensity fluctuations, from which a superresolution picture can be extracted by a statistical analysis of the fluctuations in each pixel as a function of time, as previously demonstrated in SOFI. This method, which uses off-the-shelf equipment, genetically encodable labels, and simple and rapid data acquisition, is capable of providing two- to threefold-enhanced spatial resolution, significant background rejection, markedly improved contrast, and favorable temporal resolution in living cells. Furthermore, both 3D and multicolor imaging are readily achievable. Because of its ease of use and high performance, we anticipate that pcSOFI will prove an attractive approach for superresolution imaging. PMID:22711840
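The statistical core of SOFI-type analysis admits a very small sketch: the second-order temporal cumulant (variance) of each pixel's fluctuation trace, which sharpens the effective PSF by roughly a factor of √2. Higher orders and the cross-cumulants used in practice are omitted from this hedged version.

```python
# Second-order SOFI image from a fluctuation movie.
import numpy as np

def sofi2(stack):
    """stack: (T, H, W) movie of a blinking/photoswitching label."""
    fluct = stack - stack.mean(axis=0)     # remove the mean (DC) image
    return np.mean(fluct ** 2, axis=0)     # per-pixel 2nd-order cumulant
```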
A hybrid deep learning approach to predict malignancy of breast lesions using mammograms
NASA Astrophysics Data System (ADS)
Wang, Yunzhi; Heidari, Morteza; Mirniaharikandehei, Seyedehnafiseh; Gong, Jing; Qian, Wei; Qiu, Yuchen; Zheng, Bin
2018-03-01
Applying deep learning technology to the medical imaging informatics field has recently attracted extensive research interest. However, the limited size of medical image datasets often reduces the performance and robustness of deep learning based computer-aided detection and/or diagnosis (CAD) schemes. In an attempt to address this technical challenge, this study aims to develop and evaluate a new hybrid deep learning based CAD approach to predict the likelihood that a breast lesion detected on a mammogram is malignant. In this approach, a deep Convolutional Neural Network (CNN) was first pre-trained on the ImageNet dataset and served as a feature extractor. A pseudo-color Region of Interest (ROI) method was used to generate ROIs with RGB channels from the mammographic images as input to the pre-trained deep network. The transferred CNN features from different layers of the CNN were then obtained, and a linear support vector machine (SVM) was trained for the prediction task. Applied to a dataset of 301 suspicious breast lesions with a leave-one-case-out validation method, the areas under the ROC curve (AUC) were 0.762 and 0.792 for the traditional CAD scheme and the proposed deep learning based CAD scheme, respectively. An ensemble classifier that combines the classification scores generated by the two schemes yielded an improved AUC value of 0.813. The study results demonstrate the feasibility and potentially improved performance of applying a new hybrid deep learning approach to develop CAD schemes using a relatively small dataset of medical images.
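A hedged sketch of the pipeline shape follows: an ImageNet-pretrained CNN frozen as a feature extractor feeding a linear SVM. The specific network, layer choices, and pseudo-color ROI construction differ in the study; resnet18 is only a stand-in, and torchvision >= 0.13 is assumed for the weights argument.

```python
# Transfer-learning pipeline: frozen CNN features + linear SVM.
import torch
import torchvision
from sklearn.svm import LinearSVC

net = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
extractor = torch.nn.Sequential(*list(net.children())[:-1])   # drop classifier head

def features(batch):                        # batch: (N, 3, 224, 224), normalized
    with torch.no_grad():
        return extractor(batch).flatten(1).numpy()

# X_train: stacked pseudo-color ROIs; y_train: malignant/benign labels
# clf = LinearSVC().fit(features(X_train), y_train)
```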
Gallium nitride light sources for optical coherence tomography
NASA Astrophysics Data System (ADS)
Goldberg, Graham R.; Ivanov, Pavlo; Ozaki, Nobuhiko; Childs, David T. D.; Groom, Kristian M.; Kennedy, Kenneth L.; Hogg, Richard A.
2017-02-01
The advent of optical coherence tomography (OCT) has permitted high-resolution, non-invasive, in vivo imaging of the eye, skin and other biological tissue. The axial resolution is limited by source bandwidth and central wavelength. With the growing demand for short-wavelength imaging, super-continuum sources and non-linear fibre-based light sources have been demonstrated in tissue imaging applications exploiting the near-UV and visible spectrum. Whilst the potential of gallium nitride devices has been identified owing to the relative maturity of the laser technology, there have been limited reports on using such low-cost, robust devices in imaging systems. A GaN super-luminescent light-emitting diode (SLED) was first reported in 2009, using tilted facets to suppress lasing, with the focus since then on high-power, low-speckle and relatively low-bandwidth applications. In this paper we discuss a method of producing a GaN-based broadband source, including a passive absorber to suppress lasing. The merits of this passive absorber are then discussed with regard to broad-bandwidth applications rather than power applications. For the first time in GaN devices, the performance of the light sources developed is assessed through the point spread function (PSF), which describes an imaging system's response to a point source, calculated from the emission spectra. We show that a sub-7 μm resolution is possible without the use of special epitaxial techniques, ultimately outlining the suitability of these short-wavelength, broadband GaN devices for use in OCT applications.
MR PROSTATE SEGMENTATION VIA DISTRIBUTED DISCRIMINATIVE DICTIONARY (DDD) LEARNING.
Guo, Yanrong; Zhan, Yiqiang; Gao, Yaozong; Jiang, Jianguo; Shen, Dinggang
2013-01-01
Segmenting the prostate from MR images is important yet challenging. Due to the non-Gaussian distribution of prostate appearances in MR images, the popular active appearance model (AAM) has limited performance. Although the newly developed sparse dictionary learning method [1, 2] can model the image appearance in a non-parametric fashion, the learned dictionaries still lack discriminative power between prostate and non-prostate tissues, which is critical for accurate prostate segmentation. In this paper, we propose to integrate a deformable model with a novel learning scheme, namely Distributed Discriminative Dictionary (DDD) learning, which can capture image appearance in a non-parametric and discriminative fashion. In particular, three strategies are designed to boost the tissue discriminative power of DDD. First, minimum Redundancy Maximum Relevance (mRMR) feature selection is performed to constrain the dictionary learning to a discriminative feature space. Second, linear discriminant analysis (LDA) is employed to assemble residuals from different dictionaries for optimal separation between prostate and non-prostate tissues. Third, instead of learning global dictionaries, we learn a set of local dictionaries for local regions (each with small appearance variations) along the prostate boundary, thus achieving better tissue differentiation locally. In the application stage, the DDDs provide appearance cues to robustly drive the deformable model onto the prostate boundary. Experiments on 50 MR prostate images show that our method yields a Dice ratio of 88% compared with manual segmentations, a 7% improvement over the conventional AAM.
Robust Multimodal Dictionary Learning
Cao, Tian; Jojic, Vladimir; Modla, Shannon; Powell, Debbie; Czymmek, Kirk; Niethammer, Marc
2014-01-01
We propose a robust multimodal dictionary learning method for multimodal images. Joint dictionary learning for both modalities may be impaired by a lack of correspondence between image modalities in the training data, for example due to areas of low quality in one of the modalities. Dictionaries learned with such non-corresponding data will induce uncertainty about the image representation. In this paper, we propose a probabilistic model that accounts for image areas that correspond poorly between the image modalities. We cast the problem of learning a dictionary in the presence of problematic image patches as a likelihood maximization problem and solve it with a variant of the EM algorithm. Our algorithm alternates between identifying poorly corresponding patches and refining the dictionary. We tested our method on synthetic and real data, and show improvements in image prediction quality and alignment accuracy when using the method for multimodal image registration. PMID:24505674
The analysis of image feature robustness using CometCloud
Qi, Xin; Kim, Hyunjoo; Xing, Fuyong; Parashar, Manish; Foran, David J.; Yang, Lin
2012-01-01
The robustness of image features is a very important consideration in quantitative image analysis. The objective of this paper is to investigate the robustness of a range of image texture features using hematoxylin-stained breast tissue microarray slides, assessed while simulating different imaging challenges including defocus, changes in magnification, and variations in illumination, noise, compression, distortion, and rotation. We employed five texture analysis methods and tested them while introducing all of the challenges listed above. The texture features evaluated include the co-occurrence matrix, center-symmetric auto-correlation, the texture feature coding method, local binary patterns, and textons. Owing to the independence of each transformation and texture descriptor, a network-structured combination was proposed and deployed on the Rutgers private cloud. The experiments utilized 20 randomly selected tissue microarray cores. All combinations of the image transformations and deformations were calculated, and the whole feature extraction procedure was completed in 70 minutes using a cloud equipped with 20 nodes. Center-symmetric auto-correlation outperforms the other four texture descriptors but also requires the longest computational time; it is roughly 10 times slower than local binary patterns and textons. From a speed perspective, both the local binary pattern and texton features provided excellent performance for classification and content-based image retrieval. PMID:23248759
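As a hedged example of one of the five descriptors, the snippet below computes a uniform local binary pattern histogram via scikit-image; a robustness test would recompute this histogram after each simulated distortion and compare the results.

```python
# Uniform LBP histogram as a texture feature vector.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1.0):
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist               # compare histograms across simulated distortions
```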
Jani, Shyam S; Low, Daniel A; Lamb, James M
2015-01-01
To develop an automated system that detects patient identification and positioning errors between 3-dimensional computed tomography (CT) and kilovoltage CT planning images. Planning kilovoltage CT images were collected for head and neck (H&N), pelvis, and spine treatments, with corresponding 3-dimensional cone beam CT and megavoltage CT setup images from TrueBeam and TomoTherapy units, respectively. Patient identification errors were simulated by registering setup and planning images from different patients. For positioning errors, setup and planning images were misaligned by 1 to 5 cm in the 6 anatomical directions for H&N and pelvis patients. Spinal misalignments were simulated by misaligning to adjacent vertebral bodies. Image pairs were assessed using commonly used image similarity metrics as well as custom-designed metrics. Linear discriminant analysis classification models were trained and tested on the imaging datasets, and misclassification error (MCE), sensitivity, and specificity were estimated using 10-fold cross-validation. For patient identification, our workflow produced MCE estimates of 0.66%, 1.67%, and 0% for H&N, pelvis, and spine TomoTherapy images, respectively. Sensitivity and specificity ranged from 97.5% to 100%. MCEs of 3.5%, 2.3%, and 2.1% were obtained for TrueBeam images of the above sites, respectively, with sensitivity and specificity estimates between 95.4% and 97.7%. MCEs for 1-cm H&N/pelvis misalignments were 1.3%/5.1% and 9.1%/8.6% for TomoTherapy and TrueBeam images, respectively. Two-centimeter MCE estimates were 0.4%/1.6% and 3.1%/3.2%, respectively. MCEs for vertebral body misalignments were 4.8% and 3.6% for TomoTherapy and TrueBeam images, respectively. Patient identification and gross misalignment errors can be robustly and automatically detected using 3-dimensional setup images of different energies across 3 commonly treated anatomical sites.
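A hedged sketch of the classification stage: linear discriminant analysis on image-similarity features with 10-fold cross-validation, mirroring the reported MCE estimation workflow. The feature values here are synthetic.

```python
# LDA classifier with 10-fold cross-validated misclassification error.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 4)),     # correctly aligned image pairs
               rng.normal(2, 1, (100, 4))])    # mismatched / misaligned pairs
y = np.repeat([0, 1], 100)

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=10)
print(f"misclassification error ~ {1 - acc.mean():.3f}")
```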
NASA Astrophysics Data System (ADS)
Eibl, Matthias; Karpf, Sebastian; Hakert, Hubertus; Weng, Daniel; Pfeiffer, Tom; Kolb, Jan Philip; Huber, Robert
2017-07-01
Newly developed microscopy methods aim to give researchers in bio-molecular science a better understanding of processes occurring at the cellular level. Two-photon excited fluorescence (TPEF) microscopy in particular is a readily applied and widespread modality. Compared with one-photon fluorescence imaging, it is possible to image not only the surface but also deeper-lying structures. Together with fluorescence lifetime imaging (FLIM), which provides information on the chemical composition of a specimen, deeper insights at the molecular level can be gained. However, the need for elaborate light sources for TPEF and speed limitations for FLIM hinder even wider application. In this contribution, we present a way to overcome these limitations by combining a robust and inexpensive fiber laser for nonlinear excitation with a fast analog digitization method for rapid FLIM imaging. The applied sub-nanosecond pulsed laser source is perfectly suited for fiber delivery, as the typically limiting nonlinear effects such as self-phase and cross-phase modulation (SPM, XPM) are negligible. Furthermore, compared with the typically applied femtosecond pulses, our longer pulses produce many more fluorescence photons per single shot. In this paper, we show that this higher number of fluorescence photons per pulse, combined with high-analog-bandwidth detection, makes it possible not only to use a single pulse per pixel for TPEF imaging but also to resolve the exponential time decay for FLIM. To evaluate our system, we acquired FLIM images of a dye solution with single-exponential behavior to assess the accuracy of our lifetime determination, as well as FLIM images of a plant stem at a pixel rate of 1 MHz to show the speed performance of our single-pulse two-photon FLIM (SP-FLIM) system.
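A hedged sketch of the per-pixel lifetime step: fitting a single-exponential decay to the digitized transient of one excitation pulse. Timebase, noise, and parameter values are illustrative, not the system's actual acquisition settings.

```python
# Single-exponential lifetime fit to one fluorescence decay trace.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a, tau, c):
    return a * np.exp(-t / tau) + c

t = np.linspace(0, 10e-9, 80)                  # 10 ns digitization window
trace = decay(t, 1.0, 2.5e-9, 0.02) \
        + np.random.default_rng(1).normal(0, 0.01, t.size)
(a, tau, c), _ = curve_fit(decay, t, trace, p0=(1.0, 1e-9, 0.0))
print(f"lifetime ~ {tau * 1e9:.2f} ns")
```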
NASA Astrophysics Data System (ADS)
Xing, Yafei; Macq, Benoit
2017-11-01
With the emergence of clinical prototypes and first patient acquisitions for proton therapy, research on prompt gamma imaging is aiming to make the most of the prompt gamma data for in vivo estimation of any shift from the expected Bragg peak (BP). The apparently simple problem of matching the measured prompt gamma profile of each pencil beam with a reference simulation from the treatment plan is in fact made complex by uncertainties that can translate into distortions during treatment. We illustrate this challenge and demonstrate the robustness of a predictive linear model we proposed for BP shift estimation based on principal component analysis (PCA). It considers the first clinical knife-edge slit camera design in use with anthropomorphic phantom CT data. In particular, 4115 error scenarios were simulated for the learning model. PCA was applied to the training input, randomly chosen from 500 scenarios, to eliminate data collinearities. A total variance of 99.95% was used to represent the testing input from 3615 scenarios. This model improved the BP shift estimation by an average of 63 ± 19%, in a range between -2.5% and 86%, compared with our previous profile shift (PS) method. The robustness of our method was demonstrated by a comparative study in which Poisson noise was applied 1000 times to each profile. In 67% of cases the learning model had lower prediction errors than the PS method. The estimation accuracy ranged between 0.31 ± 0.22 mm and 1.84 ± 8.98 mm for the learning model, while for the PS method it ranged between 0.3 ± 0.25 mm and 20.71 ± 8.38 mm.
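The model's structure can be sketched in a hedged way as PCA retaining 99.95% of the profile variance feeding a linear regressor; the profiles and shifts below are synthetic stand-ins for the simulated error scenarios.

```python
# PCA (99.95% variance) + linear regression for Bragg-peak shift prediction.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
profiles = rng.normal(size=(500, 200))    # stand-in prompt-gamma profiles
shifts = rng.normal(size=500)             # stand-in BP shifts (mm)

model = make_pipeline(PCA(n_components=0.9995), LinearRegression())
model.fit(profiles, shifts)
print(model.predict(profiles[:3]))        # predicted shifts for new profiles
```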
Color normalization for robust evaluation of microscopy images
NASA Astrophysics Data System (ADS)
Švihlík, Jan; Kybic, Jan; Habart, David
2015-09-01
This paper deals with color normalization of microscopy images of Langerhans islets in order to increase robustness of the islet segmentation to illumination changes. The main application is automatic quantitative evaluation of the islet parameters, useful for determining the feasibility of islet transplantation in diabetes. First, background illumination inhomogeneity is compensated and a preliminary foreground/background segmentation is performed. The color normalization itself is done in either lαβ or logarithmic RGB color spaces, by comparison with a reference image. The color-normalized images are segmented using color-based features and pixel-wise logistic regression, trained on manually labeled images. Finally, relevant statistics such as the total islet area are evaluated in order to determine the success likelihood of the transplantation.
Probabilistic sparse matching for robust 3D/3D fusion in minimally invasive surgery.
Neumann, Dominik; Grbic, Sasa; John, Matthias; Navab, Nassir; Hornegger, Joachim; Ionasec, Razvan
2015-01-01
Classical surgery is being overtaken by minimally invasive and transcatheter procedures. As there is no direct view or access to the affected anatomy, advanced imaging techniques such as 3D C-arm computed tomography (CT) and C-arm fluoroscopy are routinely used in clinical practice for intraoperative guidance. However, due to constraints regarding acquisition time and device configuration, intraoperative modalities have limited soft tissue image quality and reliable assessment of the cardiac anatomy typically requires contrast agent, which is harmful to the patient and requires complex acquisition protocols. We propose a probabilistic sparse matching approach to fuse high-quality preoperative CT images and nongated, noncontrast intraoperative C-arm CT images by utilizing robust machine learning and numerical optimization techniques. Thus, high-quality patient-specific models can be extracted from the preoperative CT and mapped to the intraoperative imaging environment to guide minimally invasive procedures. Extensive quantitative experiments on 95 clinical datasets demonstrate that our model-based fusion approach has an average execution time of 1.56 s, while the accuracy of 5.48 mm between the anchor anatomy in both images lies within expert user confidence intervals. In direct comparison with image-to-image registration based on an open-source state-of-the-art medical imaging library and a recently proposed quasi-global, knowledge-driven multi-modal fusion approach for thoracic-abdominal images, our model-based method exhibits superior performance in terms of registration accuracy and robustness with respect to both target anatomy and anchor anatomy alignment errors.
Implementation of an algorithm for cylindrical object identification using range data
NASA Technical Reports Server (NTRS)
Bozeman, Sylvia T.; Martin, Benjamin J.
1989-01-01
One of the problems in 3-D object identification and localization is addressed. In robotic and navigation applications the vision system must be able to distinguish cylindrical or spherical objects as well as those of other geometric shapes. An algorithm was developed to identify cylindrical objects in an image when range data is used. The algorithm incorporates the Hough transform for line detection using edge points which emerge from a Sobel mask. Slices of the data are examined to locate arcs of circles using the normal equations of an over-determined linear system. Current efforts are devoted to testing the computer implementation of the algorithm. Refinements are expected to continue in order to accommodate cylinders in various positions. A technique is sought which is robust in the presence of noise and partial occlusions.
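The arc-location step via normal equations admits a compact hedged sketch: an algebraic least-squares circle fit, linear in the unknowns, applied to edge points from one slice of the range data. Names are illustrative.

```python
# Algebraic least-squares circle fit from edge points in one data slice.
import numpy as np

def fit_circle(x, y):
    # Model: x^2 + y^2 + D x + E y + F = 0  (linear in D, E, F)
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)   # over-determined solve
    cx, cy = -D / 2, -E / 2
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, r
```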
High-Resolution Broadband Spectral Interferometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erskine, D J; Edelstein, J
2002-08-09
We demonstrate solar spectra from a novel interferometric method for compact broadband high-resolution spectroscopy. The spectral interferometer (SI) is a hybrid instrument that uses a spectrometer to externally disperse the output of a fixed-delay interferometer; it has also been called an externally dispersed interferometer (EDI). The interferometer can be used with linear spectrometers for imaging spectroscopy or with echelle spectrometers for very broad-band coverage. The EDI's heterodyning technique enhances the spectrometer's response to high spectral-density features, increasing the effective resolution by factors of several while retaining its bandwidth. The method is extremely robust to instrumental insults such as focal spot size or displacement. The EDI uses no moving parts, unlike purely interferometric FTS spectrometers, and can cover a much wider simultaneous bandpass than other internally dispersed interferometers (e.g. HHS or SHS).
Quasi-eccentricity error modeling and compensation in vision metrology
NASA Astrophysics Data System (ADS)
Shen, Yijun; Zhang, Xu; Cheng, Wei; Zhu, Limin
2018-04-01
Circular targets are commonly used in vision applications for their detection accuracy and robustness. The eccentricity error of a circular target caused by perspective projection is one of the main sources of measurement error and needs to be compensated in high-accuracy measurement. In this study, the impact of lens distortion on the eccentricity error is comprehensively investigated. The traditional eccentricity error becomes a quasi-eccentricity error under the nonlinear camera model. The quasi-eccentricity error model is established by comparing the quasi-center of the distorted ellipse with the true projection of the object circle center. Then, an eccentricity error compensation framework is proposed which compensates the error by iteratively refining the image point towards the true projection of the circle center. Both simulation and real experiments confirm the effectiveness of the proposed method in several vision applications.
Incremental Structured Dictionary Learning for Video Sensor-Based Object Tracking
Xue, Ming; Yang, Hua; Zheng, Shibao; Zhou, Yi; Yu, Zhenghua
2014-01-01
To tackle robust object tracking for video sensor-based applications, an online discriminative algorithm based on incremental discriminative structured dictionary learning (IDSDL-VT) is presented. In our framework, a discriminative dictionary combining positive, negative and trivial patches is designed to sparsely represent the overlapped target patches. Then, a local update (LU) strategy is proposed for sparse coefficient learning. To formulate the training and classification process, a multiple linear classifier group based on a K-combined voting (KCV) function is proposed. As the dictionary evolves, the models are retrained to adapt to target appearance variation in a timely fashion. Qualitative and quantitative evaluations on challenging image sequences, compared with state-of-the-art algorithms, demonstrate that the proposed tracking algorithm achieves more favorable performance. We also illustrate its relay application in visual sensor networks. PMID:24549252
Technical notes and correspondence: Stochastic robustness of linear time-invariant control systems
NASA Technical Reports Server (NTRS)
Stengel, Robert F.; Ray, Laura R.
1991-01-01
A simple numerical procedure for estimating the stochastic robustness of a linear time-invariant system is described. Monte Carlo evaluation of the system's eigenvalues allows the probability of instability and the related stochastic root locus to be estimated. This analysis approach treats not only Gaussian parameter uncertainties but also non-Gaussian cases, including uncertain-but-bounded variation. Confidence intervals for the scalar probability of instability address computational issues inherent in Monte Carlo simulation. Trivial extensions of the procedure admit consideration of alternate discriminants; thus, the probabilities that stipulated degrees of instability will be exceeded or that closed-loop roots will leave desirable regions can also be estimated. Results are particularly amenable to graphical presentation.
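The procedure is compact enough to sketch directly: sample the uncertain parameters, check the closed-loop eigenvalues, and report the estimated probability of instability with a binomial confidence interval. The Gaussian perturbation below is one of the uncertainty models the paper considers; the matrices are toy placeholders.

```python
# Monte Carlo estimate of the probability of instability with a ~95% CI.
import numpy as np

def prob_instability(A_nominal, n_trials=10000, sigma=0.3, seed=0):
    rng = np.random.default_rng(seed)
    unstable = 0
    for _ in range(n_trials):
        A = A_nominal + rng.normal(0, sigma, A_nominal.shape)  # Gaussian uncertainty
        unstable += np.any(np.linalg.eigvals(A).real >= 0)
    p = unstable / n_trials
    half = 1.96 * np.sqrt(p * (1 - p) / n_trials)              # binomial CI half-width
    return p, (max(0.0, p - half), min(1.0, p + half))

print(prob_instability(np.array([[-1.0, 0.5], [0.0, -2.0]])))
```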
A methodology for designing robust multivariable nonlinear control systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Grunberg, D. B.
1986-01-01
A new methodology is described for the design of nonlinear dynamic controllers for nonlinear multivariable systems providing guarantees of closed-loop stability, performance, and robustness. The methodology is an extension of the Linear-Quadratic-Gaussian with Loop-Transfer-Recovery (LQG/LTR) methodology for linear systems, thus hinging upon the idea of constructing an approximate inverse operator for the plant. A major feature of the methodology is a unification of both the state-space and input-output formulations. In addition, new results on stability theory, nonlinear state estimation, and optimal nonlinear regulator theory are presented, including the guaranteed global properties of the extended Kalman filter and optimal nonlinear regulators.