Sample records for pixel level fusion

  1. Blob-level active-passive data fusion for Benthic classification

    NASA Astrophysics Data System (ADS)

    Park, Joong Yong; Kalluri, Hemanth; Mathur, Abhinav; Ramnath, Vinod; Kim, Minsu; Aitken, Jennifer; Tuell, Grady

    2012-06-01

    We extend data fusion from the pixel level to the more semantically meaningful blob level, using the mean-shift algorithm to form labeled blobs with high similarity in the feature domain and connectivity in the spatial domain. We have also developed Bhattacharyya Distance (BD) and rule-based classifiers, and have implemented these higher-level data fusion algorithms in the CZMIL Data Processing System. Applying these new algorithms to recent SHOALS and CASI data at Plymouth Harbor, Massachusetts, we achieved better benthic classification accuracies than those produced with either single-sensor or pixel-level fusion strategies. These results appear to validate the hypothesis that classification accuracy may be generally improved by adopting higher spatial and semantic levels of fusion.

  2. Ghost detection and removal based on super-pixel grouping in exposure fusion

    NASA Astrophysics Data System (ADS)

    Jiang, Shenyu; Xu, Zhihai; Li, Qi; Chen, Yueting; Feng, Huajun

    2014-09-01

    A novel multi-exposure image fusion method for dynamic scenes is proposed. The commonly used techniques for high dynamic range (HDR) imaging are based on combining multiple differently exposed images of the same scene. The drawback of these methods is that ghosting artifacts are introduced into the final HDR image if the scene is not static. In this paper, a super-pixel grouping based method is proposed to detect ghosts in the image sequence. We introduce the zero-mean normalized cross correlation (ZNCC) as a measure of similarity between a given exposure image and the reference. ZNCC is computed at the super-pixel level, and super-pixels that correlate poorly with the reference are excluded by adjusting the weight maps used for fusion. Without any prior information about the camera response function or exposure settings, the proposed method generates low dynamic range (LDR) images that can be shown directly on conventional display devices, with details preserved and ghosting reduced. Experimental results show that the proposed method generates high-quality images with fewer ghost artifacts and better visual quality than previous approaches.
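
The ZNCC similarity at the heart of this ghost-detection step can be sketched as follows; the 0.5 exclusion threshold and the simple label-array stand-in for super-pixel segmentation are illustrative assumptions, not values from the paper:

```python
import numpy as np

def zncc(patch_a, patch_b, eps=1e-12):
    """Zero-mean normalized cross correlation between two equal-size regions."""
    a = patch_a.astype(float).ravel()
    b = patch_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum()) + eps
    return float((a * b).sum() / denom)

def superpixel_weights(image, reference, labels, threshold=0.5):
    """Zero out fusion weights for super-pixels poorly correlated with the reference.

    `labels` assigns a super-pixel id to each pixel (stand-in for a real
    segmentation such as SLIC); `threshold` is illustrative."""
    weights = np.ones_like(image, dtype=float)
    for lab in np.unique(labels):
        mask = labels == lab
        if zncc(image[mask], reference[mask]) < threshold:
            weights[mask] = 0.0  # exclude this super-pixel from fusion
    return weights
```

Identical regions score 1, inverted regions score -1, and low-scoring super-pixels are masked out of the weight map.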

  3. PET-CT image fusion using random forest and à-trous wavelet transform.

    PubMed

    Seal, Ayan; Bhattacharjee, Debotosh; Nasipuri, Mita; Rodríguez-Esparragón, Dionisio; Menasalvas, Ernestina; Gonzalo-Martin, Consuelo

    2018-03-01

    New image fusion rules for multimodal medical images are proposed in this work. The fusion rules are defined by a random forest learning algorithm and a translation-invariant à-trous wavelet transform (AWT). The proposed method is threefold. First, the source images are decomposed into approximation and detail coefficients using the AWT. Second, a random forest chooses pixels from the approximation and detail coefficients to form the approximation and detail coefficients of the fused image. Lastly, the inverse AWT is applied to reconstruct the fused image. All experiments were performed on 198 slices of both computed tomography and positron emission tomography images of a patient. A traditional fusion method based on the Mallat wavelet transform was also implemented on these slices. A new image fusion performance measure, along with four existing measures, is presented, which helps compare the performance of the two pixel-level fusion methods. The experimental results clearly indicate that the proposed method outperforms the traditional method in both visual and quantitative quality, and that the new measure is meaningful.
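
A minimal sketch of the decompose-and-fuse pipeline, with one à-trous-style level and a max-absolute detail rule standing in for the paper's trained random-forest selector (both the B3-spline-like kernel and that substitution are assumptions):

```python
import numpy as np

def atrous_level(img, kernel=None):
    """One à-trous-style decomposition level: a smoothed approximation plane
    plus the detail (residual) plane. A separable B3-spline kernel is the
    classic choice; the one below is illustrative."""
    if kernel is None:
        kernel = np.array([1., 4., 6., 4., 1.]) / 16.0
    approx = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='same'), 0, img)
    approx = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='same'), 1, approx)
    return approx, img - approx

def fuse_two(img_a, img_b):
    """Pixel-wise fusion: average the approximations, keep the larger-magnitude
    detail coefficient (a simple stand-in for the random-forest choice)."""
    a_app, a_det = atrous_level(img_a)
    b_app, b_det = atrous_level(img_b)
    det = np.where(np.abs(a_det) >= np.abs(b_det), a_det, b_det)
    return 0.5 * (a_app + b_app) + det
```

Because approximation plus detail reconstructs each source exactly, fusing an image with itself returns the original image unchanged.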

  4. Weber-aware weighted mutual information evaluation for infrared-visible image fusion

    NASA Astrophysics Data System (ADS)

    Luo, Xiaoyan; Wang, Shining; Yuan, Ding

    2016-10-01

    A performance metric for infrared and visible image fusion is proposed based on Weber's law. To indicate the stimulus of source images, two Weber components are provided. One is differential excitation to reflect the spectral signal of visible and infrared images, and the other is orientation to capture the scene structure feature. By comparing the corresponding Weber component in infrared and visible images, the source pixels can be marked with different dominant properties in intensity or structure. If the pixels have the same dominant property label, the pixels are grouped to calculate the mutual information (MI) on the corresponding Weber components between dominant source and fused images. Then, the final fusion metric is obtained via weighting the group-wise MI values according to the number of pixels in different groups. Experimental results demonstrate that the proposed metric performs well on popular image fusion cases and outperforms other image fusion metrics.
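
The differential-excitation component can be sketched with a standard Weber Local Descriptor style formulation; the 8-neighbour sum and edge padding are illustrative choices, not necessarily the paper's exact definition:

```python
import numpy as np

def differential_excitation(img, eps=1e-6):
    """Weber differential excitation: arctan of the summed intensity
    difference between each pixel and its 8 neighbours, relative to the
    pixel's own intensity (a standard WLD-style formulation)."""
    img = img.astype(float)
    p = np.pad(img, 1, mode='edge')
    diff = np.zeros_like(img)
    h, w = img.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy or dx:
                diff += p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w] - img
    return np.arctan(diff / (img + eps))
```

A flat image produces zero excitation everywhere, and the arctan keeps the response bounded in (-pi/2, pi/2), which is what makes group-wise comparison across source images well behaved.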

  5. Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.

    PubMed

    Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D

    2015-05-08

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
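
The PCA baseline named in the comparison can be sketched as a simple eigenvector-weighted pixel average (a common textbook formulation of PCA fusion, not the authors' MEMD method):

```python
import numpy as np

def pca_fusion(images):
    """Classic PCA pixel-level fusion baseline: weight each source image by
    the leading-eigenvector component of the covariance of the flattened
    images, normalized to sum to one."""
    data = np.stack([im.ravel().astype(float) for im in images])
    cov = np.cov(data)
    vals, vecs = np.linalg.eigh(cov)
    w = np.abs(vecs[:, -1])      # leading eigenvector, made non-negative
    w = w / w.sum()
    return sum(wi * im for wi, im in zip(w, images))
```

Identical inputs receive equal weights, so fusing an image with itself returns the image; dissimilar inputs are weighted toward the source carrying more variance.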

  7. A research on radiation calibration of high dynamic range based on the dual channel CMOS

    NASA Astrophysics Data System (ADS)

    Ma, Kai; Shi, Zhan; Pan, Xiaodong; Wang, Yongsheng; Wang, Jianghua

    2017-10-01

    A dual-channel complementary metal-oxide semiconductor (CMOS) sensor can produce a high dynamic range (HDR) image by extending the gray-level range through fusion of the high-gain and low-gain channel images captured in the same frame. The fusion uses the radiometric response coefficients of each pixel in the two channels to compute its gray level in the HDR image. Because these response coefficients play a crucial role in the fusion, an effective method is needed to acquire them. This article investigates radiometric calibration for high dynamic range imaging with a dual-channel CMOS sensor and designs an experiment to calibrate the response coefficients of the sensor used. Finally, the calibrated response parameters are applied to the dual-channel CMOS sensor, verifying the correctness and feasibility of the proposed method.
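
The dual-gain merge described above can be sketched as a per-pixel selection in which a calibrated gain ratio plays the role of the radiometric response coefficients; the saturation level and ratio below are purely hypothetical:

```python
import numpy as np

def fuse_dual_gain(high, low, gain_ratio, sat_level=4095):
    """Merge a high-gain and a low-gain reading of the same frame into one
    radiance estimate. `gain_ratio` stands in for the calibrated response
    relation between the two channels (the quantity the calibration
    experiment measures); saturated high-gain pixels fall back to the
    scaled low-gain reading."""
    high = high.astype(float)
    low = low.astype(float)
    saturated = high >= sat_level
    return np.where(saturated, low * gain_ratio, high)
```

Unsaturated pixels keep the low-noise high-gain value; saturated ones are replaced by the low-gain value rescaled into the same radiometric units, which is what extends the dynamic range.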

  8. Fast single image dehazing based on image fusion

    NASA Astrophysics Data System (ADS)

    Liu, Haibo; Yang, Jie; Wu, Zhengping; Zhang, Qingnian

    2015-01-01

    Images captured in foggy weather often show faded colors and reduced contrast of the observed objects. An efficient image fusion method is proposed to remove haze from a single input image. First, the initial medium transmission is estimated based on the dark channel prior. Second, under the assumption that the degradation level caused by haze is the same within each region, similar to the Retinex theory, a simple Gaussian filter is used to obtain a coarse medium transmission. Then, pixel-level fusion is performed between the initial and coarse medium transmissions. The proposed method can recover a high-quality haze-free image based on the physical model, and its complexity is only a linear function of the number of input image pixels. Experimental results demonstrate that the proposed method allows a very fast implementation and achieves better restoration of visibility and color fidelity than some state-of-the-art methods.
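
The dark-channel-prior step can be sketched as follows; the patch size, the omega value, and the brute-force minimum filter are illustrative choices:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel prior: per-pixel minimum over the colour channels,
    followed by a local minimum filter over a small patch."""
    mins = img.min(axis=2)
    pad = patch // 2
    p = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    h, w = mins.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = p[y:y + patch, x:x + patch].min()
    return out

def initial_transmission(img, atmos, omega=0.95, patch=3):
    """Initial medium transmission estimate: t = 1 - omega * dark_channel(I / A),
    where A is the atmospheric light (here a per-channel vector)."""
    return 1.0 - omega * dark_channel(img / atmos, patch)
```

A haze-free region (dark channel near zero) yields a transmission near 1, while hazy regions with bright dark channels yield lower transmission, which drives the subsequent restoration.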

  9. Pixel-based image fusion with false color mapping

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Mao, Shiyi

    2003-06-01

    In this paper, we propose a pixel-based image fusion algorithm that combines gray-level image fusion with false color mapping. The algorithm integrates two gray-level images from different sensor modalities or different frequencies and produces a fused false-color image with higher information content than either original; objects in the fused color image are easier to recognize. The algorithm has three steps: first, obtain the fused gray-level image of the two original images; second, compute the generalized high-boost filtered images between the fused gray-level image and each source image; third, generate the fused false-color image. We use a hybrid averaging-and-selection fusion method to obtain the fused gray-level image, which provides better detail than either original image while reducing noise. However, the fused gray-level image cannot contain all the detail in the two source images, and details in a gray-level image cannot be discerned as easily as in a color image, so a color fused image is needed. To create color variation and enhance details in the final fused image, we produce three generalized high-boost filtered images and display them through the red, green, and blue channels respectively, yielding the fused color image. The method was used to fuse two SAR images acquired over the San Francisco area (California, USA). The results show that the fused false-color image enhances the visibility of certain details, and its resolution is the same as that of the input images.
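
The generalized high-boost filtering step can be sketched like this, with a plain box blur standing in for whatever smoothing stage the authors use (an assumption):

```python
import numpy as np

def box_blur(img, size=3):
    """Simple box blur used as the smoothing stage (illustrative choice)."""
    pad = size // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)

def high_boost(img, blurred, k=1.5):
    """Generalized high-boost filtering: the original plus k times the detail
    (original minus its smoothed version). k > 1 amplifies fine detail."""
    return img + k * (img - blurred)
```

Three such boosted images, each routed to one of the R, G, B channels, would then make up the false-color composite described in the abstract.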

  10. Enhancement of low light level images using color-plus-mono dual camera.

    PubMed

    Jung, Yong Ju

    2017-05-15

    In digital photography, improving image quality in low-light shooting is a key user need. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low-light images. A color-plus-mono dual camera consisting of two horizontally separated image sensors, which simultaneously captures a color and mono image pair of the same scene, can improve the quality of low-light images. However, incorrect image fusion between the color and mono images can also have negative effects, such as introducing severe visual artifacts into the fused images. This paper proposes a selective image fusion technique that applies adaptive guided filter-based denoising and selective detail transfer only to those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. Using an experimental color-plus-mono camera system, we demonstrate that BJND-aware denoising and selective detail transfer help improve image quality in low-light shooting.

  11. Twofold processing for denoising ultrasound medical images.

    PubMed

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Ultrasound (US) medical imaging non-invasively pictures the inside of the human body for disease diagnosis. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold applies block-based thresholding, both hard (BHT) and soft (BST), to pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first fold is effective at reducing speckle but also blurs the object of interest. The second fold restores object boundaries and texture with adaptive wavelet fusion: the degraded object in the block-thresholded US image is restored by fusing the wavelet coefficients of the object in the original and block-thresholded US images. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with a normalized differential mean (NDF), to maximize the contrast between denoised pixels and object pixels in the resulting image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results show visual quality improving to an interesting level with the proposed twofold processing, in which the first fold removes noise and the second restores object properties. Peak signal-to-noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. The proposed method is validated by comparison with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images were provided by the AMMA hospital radiology labs at Vijayawada, India.
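
The first-fold hard and soft thresholding rules, applied block by block, can be sketched as follows (the threshold value is illustrative; the paper applies these to wavelet coefficients):

```python
import numpy as np

def hard_threshold(coeffs, t):
    """Hard thresholding: zero coefficients whose magnitude is below t, keep the rest."""
    return np.where(np.abs(coeffs) >= t, coeffs, 0.0)

def soft_threshold(coeffs, t):
    """Soft thresholding: shrink surviving coefficients toward zero by t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def blockwise(coeffs, block, fn, t):
    """Apply a thresholding rule independently to non-overlapping blocks,
    mirroring the paper's 8/16/32/64 block-based variants."""
    out = coeffs.astype(float).copy()
    h, w = coeffs.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y + block, x:x + block] = fn(coeffs[y:y + block, x:x + block], t)
    return out
```

Hard thresholding preserves the magnitude of surviving coefficients (sharper but noisier), while soft thresholding shrinks them (smoother), which is why the paper evaluates both variants.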

  12. Multispectral image fusion for target detection

    NASA Astrophysics Data System (ADS)

    Leviner, Marom; Maltz, Masha

    2009-09-01

    Various methods to perform multi-spectral image fusion have been suggested, mostly at the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in an experiment using MSSF against two established methods, averaging and principal components analysis (PCA), and against its two source bands, visible and infrared. The task that we studied was target detection in a cluttered environment. MSSF proved superior to the other fusion methods. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.

  13. Some new classification methods for hyperspectral remote sensing

    NASA Astrophysics Data System (ADS)

    Du, Pei-jun; Chen, Yun-hao; Jones, Simon; Ferwerda, Jelle G.; Chen, Zhi-jun; Zhang, Hua-peng; Tan, Kun; Yin, Zuo-xia

    2006-10-01

    Hyperspectral Remote Sensing (HRS) is one of the most significant recent achievements of Earth observation technology, and classification is the most commonly employed processing methodology. In this paper three new hyperspectral RS image classification methods are analyzed: object-oriented HRS image classification, HRS image classification based on information fusion, and HRS image classification by Back Propagation Neural Network (BPNN). An OMIS HRS image is used as the example data. Object-oriented techniques have gained popularity for RS image classification in recent years. In such methods, image segmentation first extracts regions from the pixel information based on homogeneity criteria; spectral parameters such as mean vector, texture and NDVI, and spatial/shape parameters such as aspect ratio, convexity, solidity, roundness and orientation are then calculated for each region; finally, the image is classified using the region feature vectors and suitable classifiers such as an artificial neural network (ANN). Object-oriented methods can improve classification accuracy because they use information and features from both the point and its neighborhood, and the processing unit is a polygon in which all pixels are homogeneous and belong to the same class. HRS image classification based on information fusion first divides all bands of the image into different groups and extracts features from every group according to its properties. Three levels of information fusion, data-level, feature-level and decision-level fusion, are applied to HRS image classification. Artificial neural networks can perform well in RS image classification; to advance their use for HRS image classification, the Back Propagation Neural Network (BPNN), the most commonly used neural network, is applied to HRS image classification.

  14. Angiogram, fundus, and oxygen saturation optic nerve head image fusion

    NASA Astrophysics Data System (ADS)

    Cao, Hua; Khoobehi, Bahram

    2009-02-01

    A novel multi-modality optic nerve head image fusion approach has been successfully designed. The new approach has been applied on three ophthalmologic modalities: angiogram, fundus, and oxygen saturation retinal optic nerve head images. It has achieved an excellent result by giving the visualization of fundus or oxygen saturation images with a complete angiogram overlay. During this study, two contributions have been made in terms of novelty, efficiency, and accuracy. The first contribution is the automated control point detection algorithm for multi-sensor images. The new method employs retina vasculature and bifurcation features by identifying the initial good-guess of control points using the Adaptive Exploratory Algorithm. The second contribution is the heuristic optimization fusion algorithm. In order to maximize the objective function (Mutual-Pixel-Count), the iteration algorithm adjusts the initial guess of the control points at the sub-pixel level. A refinement of the parameter set is obtained at the end of each loop, and finally an optimal fused image is generated at the end of the iteration. It is the first time that Mutual-Pixel-Count concept has been introduced into biomedical image fusion area. By locking the images in one place, the fused image allows ophthalmologists to match the same eye over time and get a sense of disease progress and pinpoint surgical tools. The new algorithm can be easily expanded to human or animals' 3D eye, brain, or body image registration and fusion.

  15. A new multi-spectral feature level image fusion method for human interpretation

    NASA Astrophysics Data System (ADS)

    Leviner, Marom; Maltz, Masha

    2009-03-01

    Various methods to perform multi-spectral image fusion have been suggested, mostly at the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in a three-task experiment using MSSF against two established methods, averaging and principal components analysis (PCA), and against its two source bands, visible and infrared. The three tasks that we studied were: (1) simple target detection, (2) spatial orientation, and (3) camouflaged target detection. MSSF proved superior to the other fusion methods in all three tests; MSSF also outperformed the source images in the spatial orientation and camouflaged target detection tasks. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.

  16. Study on polarization image methods in turbid medium

    NASA Astrophysics Data System (ADS)

    Fu, Qiang; Mo, Chunhe; Liu, Boyu; Duan, Jin; Zhang, Su; Zhu, Yong

    2014-11-01

    Polarization imaging captures multi-dimensional polarization information in addition to conventional intensity imagery, improving the probability of target detection and recognition. Applying image fusion to polarization images of targets in turbid media helps obtain high-quality images. Using laser illumination at visible wavelengths, linear polarization intensities were obtained by rotating the angle of a polarizer, and the polarization parameters of targets were measured in turbid media with concentrations ranging from 5% to 10%. Image fusion techniques were then introduced, and several polarization image fusion methods with superior performance in turbid media are discussed, together with tables of processing results and analysis. Pixel-level, feature-level and decision-level fusion algorithms were used to fuse DOLP (degree of linear polarization) images. The results show that as the polarization angle increases, the polarization images become increasingly blurred and their quality deteriorates, while the fused images show clearly improved contrast over any single image; the reasons for the contrast improvement are analyzed.

  17. Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.

    2016-01-01

    Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
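
The regional-variance rule applied to high-frequency components can be sketched as a per-pixel salience comparison; the window size and brute-force implementation are illustrative:

```python
import numpy as np

def regional_variance(img, radius=1):
    """Local variance in a (2*radius+1)^2 window around each pixel."""
    size = 2 * radius + 1
    p = np.pad(img.astype(float), radius, mode='edge')
    acc = np.zeros_like(img, dtype=float)
    acc2 = np.zeros_like(img, dtype=float)
    for dy in range(size):
        for dx in range(size):
            v = p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            acc += v
            acc2 += v * v
    n = size * size
    return acc2 / n - (acc / n) ** 2

def fuse_high_freq(a, b, radius=1):
    """Select, per pixel, the high-frequency coefficient from whichever source
    has the larger regional variance (the salience rule the paper applies to
    high-frequency sub-bands; low-frequency components go through RPCA instead)."""
    return np.where(regional_variance(a, radius) >= regional_variance(b, radius), a, b)
```

A textured source dominates a flat one everywhere, which matches the intuition that local variance tracks in-focus, information-bearing detail.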

  18. Effective Multifocus Image Fusion Based on HVS and BP Neural Network

    PubMed Central

    Yang, Yong

    2014-01-01

    The aim of multifocus image fusion is to fuse images taken of the same scene with different focuses to obtain a resultant image with all objects in focus. In this paper, a novel multifocus image fusion method based on the human visual system (HVS) and a back propagation (BP) neural network is presented. First, three features that reflect the clarity of a pixel are extracted and used to train a BP neural network to determine which pixel is clearer. Second, the clearer pixels are used to construct an initial fused image. Third, the focused regions are detected by measuring the similarity between the source images and the initial fused image, followed by morphological opening and closing operations. Finally, the final fused image is obtained by applying a fusion rule to those focused regions. Experimental results show that the proposed method provides better performance and outperforms several existing popular fusion methods in both objective and subjective evaluations. PMID:24683327
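
A minimal sketch of the clarity-driven selection, using spatial frequency as a single clarity feature in place of the paper's three features and trained BP network (both substitutions are assumptions):

```python
import numpy as np

def spatial_frequency(block):
    """Spatial frequency of an image block: RMS of row-wise and column-wise
    first differences, a common pixel-clarity feature in multifocus fusion."""
    b = block.astype(float)
    rf = np.diff(b, axis=1)
    cf = np.diff(b, axis=0)
    return np.sqrt((rf ** 2).mean() + (cf ** 2).mean())

def clearer_pixelwise(a, b, block=4):
    """Block-wise focus decision: take each block from the source with the
    higher spatial frequency (a hand-crafted stand-in for the trained network)."""
    out = np.empty_like(a, dtype=float)
    for y in range(0, a.shape[0], block):
        for x in range(0, a.shape[1], block):
            sa = a[y:y + block, x:x + block]
            sb = b[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = sa if spatial_frequency(sa) >= spatial_frequency(sb) else sb
    return out
```

An in-focus (high-frequency) block beats a defocused (flat) one, which is the decision the BP network learns from its clarity features.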

  19. The fusion of large scale classified side-scan sonar image mosaics.

    PubMed

    Reed, Scott; Tena, Ruiz Ioseba; Capus, Chris; Petillot, Yvan

    2006-07-01

    This paper presents a unified framework for the creation of classified maps of the seafloor from sonar imagery. Significant challenges in photometric correction, classification, navigation and registration, and image fusion are addressed. The techniques described are directly applicable to a range of remote sensing problems. Recent advances in side-scan data correction are incorporated to compensate for the sonar beam pattern and motion of the acquisition platform. The corrected images are segmented using pixel-based textural features and standard classifiers. In parallel, the navigation of the sonar device is processed using Kalman filtering techniques. A simultaneous localization and mapping framework is adopted to improve the navigation accuracy and produce georeferenced mosaics of the segmented side-scan data. These are fused within a Markovian framework and two fusion models are presented. The first uses a voting scheme regularized by an isotropic Markov random field and is applicable when the reliability of each information source is unknown. The Markov model is also used to inpaint regions where no final classification decision can be reached using pixel level fusion. The second model formally introduces the reliability of each information source into a probabilistic model. Evaluation of the two models using both synthetic images and real data from a large scale survey shows significant quantitative and qualitative improvement using the fusion approach.
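
The voting scheme's data term can be sketched as a per-pixel majority vote over the co-registered classified mosaics (the isotropic Markov random field regularization and inpainting are omitted here):

```python
import numpy as np

def vote_fusion(label_maps, n_classes):
    """Per-pixel majority vote over co-registered classified maps.
    Ties resolve to the lowest class index; the full method would smooth
    this result with an MRF prior."""
    stack = np.stack(label_maps)
    counts = np.stack([(stack == c).sum(axis=0) for c in range(n_classes)])
    return counts.argmax(axis=0)
```

This is the variant the paper recommends when the reliability of each information source is unknown; the second model would instead weight each source's vote by an explicit reliability term.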

  20. A Study of the Impact of Insolation on Remote Sensing-Based Landcover and Landuse Data Extraction

    NASA Astrophysics Data System (ADS)

    Becek, K.; Borkowski, A.; Mekik, Ç.

    2016-06-01

    We examined the dependency of the pixel reflectance of hyperspectral imaging spectrometer data (HISD) on a normalized total insolation index (NTII). The NTII was estimated using a light detection and ranging (LiDAR)-derived digital surface model (DSM). The NTII and the pixel reflectance were dependent, to various degrees, on the band considered, and on the properties of the objects. The findings could be used to improve land cover (LC)/land use (LU) classification, using indices constructed from the spectral bands of imaging spectrometer data (ISD). To study this possibility, we investigated the normalized difference vegetation index (NDVI) at various NTII levels. The results also suggest that the dependency of the pixel reflectance and NTII could be used to mitigate the shadows in ISD. This project was carried out using data provided by the Hyperspectral Image Analysis Group and the NSF-funded Centre for Airborne Laser Mapping (NCALM), University of Houston, for the purpose of organizing the 2013 Data Fusion Contest (IEEE 2014). This contest was organized by the IEEE GRSS Data Fusion Technical Committee.

  1. Exploring the optimal integration levels between SAR and optical data for better urban land cover mapping in the Pearl River Delta

    NASA Astrophysics Data System (ADS)

    Zhang, Hongsheng; Xu, Ru

    2018-02-01

    Integrating synthetic aperture radar (SAR) and optical data to improve urban land cover classification has been identified as a promising approach. However, which integration level is the most suitable remains unclear but important to many researchers and engineers. This study compared different integration levels to provide a scientific reference for the wide range of studies using optical and SAR data. SAR data from TerraSAR-X and ENVISAT ASAR in both WSM and IMP modes were combined with optical data at the pixel level, feature level and decision level using four typical machine learning methods. The experimental results indicated that: 1) fusion at the feature level, using both the original images and extracted features, achieved a significant improvement of up to 10% over optical data alone; 2) different fusion levels call for different methods depending on the data distribution and resolution; for instance, the support vector machine was the most stable at both the feature and decision levels, while the random forest was suitable at the pixel level but not at the decision level; and 3) examination of the SAR feature distributions showed that some features (e.g., homogeneity) exhibit a close-to-normal distribution, explaining the improvement of the maximum likelihood method at the feature and decision levels and indicating the benefit of combining SAR texture features with optical data for land cover classification. The research also showed that combining optical and SAR data does not guarantee an improvement over a single data source for urban land cover classification; the outcome depends on the selection of appropriate fusion levels and fusion methods.
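
The pixel-level and decision-level integration strategies compared in the study can be sketched generically as band stacking and probability averaging; the equal decision weight below is an illustrative assumption:

```python
import numpy as np

def pixel_level_stack(optical, sar):
    """Pixel-level integration: band-stack co-registered optical and SAR
    layers into one feature cube for a single classifier."""
    return np.concatenate([optical, sar], axis=-1)

def decision_level_vote(prob_optical, prob_sar, w=0.5):
    """Decision-level integration: weighted average of per-class probabilities
    from classifiers trained separately on each data source."""
    return w * prob_optical + (1 - w) * prob_sar
```

Feature-level fusion, the level the study found strongest, would sit between the two: texture and other derived features are stacked alongside the original bands before classification.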

  2. Information theoretic partitioning and confidence based weight assignment for multi-classifier decision level fusion in hyperspectral target recognition applications

    NASA Astrophysics Data System (ADS)

    Prasad, S.; Bruce, L. M.

    2007-04-01

    There is a growing interest in using multiple sources for automatic target recognition (ATR) applications. One approach is to take multiple, independent observations of a phenomenon and perform a feature level or a decision level fusion for ATR. This paper proposes a method to utilize these types of multi-source fusion techniques to exploit hyperspectral data when only a small number of training pixels are available. Conventional hyperspectral image based ATR techniques project the high dimensional reflectance signature onto a lower dimensional subspace using techniques such as Principal Components Analysis (PCA), Fisher's linear discriminant analysis (LDA), subspace LDA and stepwise LDA. While some of these techniques attempt to solve the curse of dimensionality, or small sample size problem, these are not necessarily optimal projections. In this paper, we present a divide and conquer approach to address the small sample size problem. The hyperspectral space is partitioned into contiguous subspaces such that the discriminative information within each subspace is maximized, and the statistical dependence between subspaces is minimized. We then treat each subspace as a separate source in a multi-source multi-classifier setup and test various decision fusion schemes to determine their efficacy. Unlike previous approaches which use correlation between variables for band grouping, we study the efficacy of higher order statistical information (using average mutual information) for a bottom up band grouping. We also propose a confidence measure based decision fusion technique, where the weights associated with various classifiers are based on their confidence in recognizing the training data. To this end, training accuracies of all classifiers are used for weight assignment in the fusion process of test pixels. 
The proposed methods are tested using hyperspectral data with known ground truth, such that the efficacy can be quantitatively measured in terms of target recognition accuracies.
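    The confidence-weighted decision fusion step described above can be sketched compactly. The classifier probabilities, training accuracies, and two-class setup below are illustrative assumptions, not the paper's data; the sketch simply normalizes training accuracies into weights and takes a weighted soft vote.

```python
import numpy as np

def confidence_weighted_fusion(class_probs, train_accuracies):
    """Fuse per-classifier class probabilities, weighting each classifier
    by its normalized training accuracy as a proxy for confidence."""
    w = np.asarray(train_accuracies, dtype=float)
    w = w / w.sum()                                 # weights sum to 1
    fused = np.zeros_like(np.asarray(class_probs[0], dtype=float))
    for p, wi in zip(class_probs, w):
        fused += wi * np.asarray(p, dtype=float)    # weighted soft vote
    return int(np.argmax(fused)), fused

# Three hypothetical band-group classifiers voting on one test pixel.
probs = [
    [0.6, 0.4],   # classifier on band group 1
    [0.3, 0.7],   # classifier on band group 2
    [0.2, 0.8],   # classifier on band group 3
]
accs = [0.95, 0.60, 0.55]   # training accuracies used as confidence weights
label, fused = confidence_weighted_fusion(probs, accs)
```

    Because the weights and per-classifier probabilities each sum to one, the fused vector remains a valid probability distribution.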

  3. Graph-based Data Modeling and Analysis for Data Fusion in Remote Sensing

    NASA Astrophysics Data System (ADS)

    Fan, Lei

Hyperspectral imaging provides the capability of increased sensitivity and discrimination over traditional imaging methods by combining standard digital imaging with spectroscopic methods. For each individual pixel in a hyperspectral image (HSI), a continuous spectrum is sampled as the spectral reflectance/radiance signature to facilitate identification of ground cover and surface material. The abundant spectral information allows all available information in the data to be mined. These qualities give hyperspectral imaging wide applications such as mineral exploration, agriculture monitoring, and ecological surveillance. The processing of massive high-dimensional HSI datasets is a challenge, since many data processing techniques have a computational complexity that grows exponentially with the dimension. Moreover, an HSI dataset may contain a limited number of degrees of freedom due to the high correlations between data points and among the spectra. On the other hand, relying solely on the sampled spectrum of an individual HSI data point may produce inaccurate results due to the mixed nature of raw HSI data, such as mixed pixels and optical interference. Fusion strategies are widely adopted in data processing to achieve better performance, especially in the fields of classification and clustering. There are mainly three types of fusion strategies, namely low-level data fusion, intermediate-level feature fusion, and high-level decision fusion. Low-level data fusion combines multi-source data that are expected to be complementary or cooperative. Intermediate-level feature fusion aims at the selection and combination of features to remove redundant information. Decision-level fusion exploits a set of classifiers to provide more accurate results. These fusion strategies have wide applications, including HSI data processing. With the fast development of multiple remote sensing modalities, e.g. 
Very High Resolution (VHR) optical sensors, LiDAR, etc., fusion of multi-source data can in principle produce more detailed information than any single source. On the other hand, besides the abundant spectral information contained in HSI data, features such as texture and shape may be employed to represent data points from a spatial perspective. Furthermore, feature fusion also includes the strategy of removing redundant and noisy features in the dataset. One of the major problems in machine learning and pattern recognition is to develop appropriate representations for complex nonlinear data. In HSI processing, a particular data point is usually described as a vector with coordinates corresponding to the intensities measured in the spectral bands. This vector representation permits the application of linear and nonlinear transformations with linear algebra to find an alternative representation of the data. More generally, HSI is multi-dimensional in nature, and the vector representation may lose the contextual correlations. Tensor representation provides a more sophisticated modeling technique and a higher-order generalization of linear subspace analysis. In graph theory, data points can be generalized as nodes with connectivities measured from the proximity of a local neighborhood. The graph-based framework efficiently characterizes the relationships among the data and allows for convenient mathematical manipulation in many applications, such as data clustering, feature extraction, feature selection and data alignment. In this thesis, graph-based approaches to multi-source feature and data fusion in remote sensing are explored. We mainly investigate the fusion of spatial, spectral and LiDAR information with linear and multilinear algebra under a graph-based framework for data clustering and classification problems.
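    The graph construction underlying such approaches can be sketched in a few lines: pixels become nodes, and edges connect nearest neighbors in a joint spectral-spatial feature space with heat-kernel weights. The toy feature vectors below (two spectral bands concatenated with row/column coordinates) are illustrative assumptions, not data from the thesis.

```python
import numpy as np

def knn_graph(features, k=2, sigma=1.0):
    """Build a symmetric k-nearest-neighbor affinity matrix with
    Gaussian (heat-kernel) edge weights from row-wise feature vectors."""
    X = np.asarray(features, dtype=float)
    n = X.shape[0]
    # pairwise squared Euclidean distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        order = np.argsort(d2[i])
        for j in order[1:k + 1]:            # skip self (distance 0)
            W[i, j] = np.exp(-d2[i, j] / (2 * sigma ** 2))
    return np.maximum(W, W.T)               # symmetrize

# Six hypothetical pixels: two spectral bands plus (row, col) coordinates.
feats = np.array([[0.1, 0.2, 0, 0],
                  [0.1, 0.2, 0, 1],
                  [0.2, 0.1, 1, 0],
                  [0.9, 0.8, 5, 5],
                  [0.8, 0.9, 5, 6],
                  [0.9, 0.9, 6, 5]])
W = knn_graph(feats, k=2)
```

    The resulting affinity matrix feeds directly into graph-based clustering (e.g. spectral clustering on its Laplacian); spectrally and spatially distant pixels stay unconnected.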

  4. A fast fusion scheme for infrared and visible light images in NSCT domain

    NASA Astrophysics Data System (ADS)

    Zhao, Chunhui; Guo, Yunting; Wang, Yulei

    2015-09-01

Fusion of infrared and visible light images is an effective way to obtain a simultaneous visualization of the background details provided by the visible light image and the hidden target information provided by the infrared image, which is more suitable for browsing and further processing. Two crucial goals for infrared and visible light image fusion are improving fusion performance and reducing computational burden. In this paper, a novel fusion algorithm named pixel information estimation is proposed, which determines the weights by evaluating the information of each pixel and is well suited to visible light and infrared image fusion, with better fusion quality and lower time consumption. In addition, a fast realization of the non-subsampled contourlet transform is also proposed to improve computational efficiency. To verify the advantage of the proposed method, this paper compares it with several popular ones on six evaluation metrics over four different image groups. Experimental results show that the proposed algorithm achieves a more effective result in much less time and performs well in both subjective evaluation and objective indicators.

  5. Score Fusion and Decision Fusion for the Performance Improvement of Face Recognition

    DTIC Science & Technology

    2013-07-01

    0.1). A Hamming distance (HD) [7] is calculated with the FP-CGF to measure the similarities among faces. The matched face has the shortest HD from...then put into a face pattern byte (FPB) pixel- by-pixel. A HD is calculated with the FPB to measure the similarities among faces, and recognition is...all query users are included in the database), the recognition performance can be measured by a verification rate (VR), the percentage of the

  6. Applying data fusion techniques for benthic habitat mapping and monitoring in a coral reef ecosystem

    NASA Astrophysics Data System (ADS)

    Zhang, Caiyun

    2015-06-01

Accurate mapping and effective monitoring of benthic habitat in the Florida Keys are critical to developing management strategies for this valuable coral reef ecosystem. In this study, a framework was designed for automated benthic habitat mapping by combining multiple data sources (hyperspectral, aerial photography, and bathymetry data) and four contemporary image processing techniques (data fusion, Object-based Image Analysis (OBIA), machine learning, and ensemble analysis). In the framework, 1-m digital aerial photography was first merged with 17-m hyperspectral imagery and 10-m bathymetry data using a pixel/feature-level fusion strategy. The fused dataset was then preclassified by three machine learning algorithms (Random Forest, Support Vector Machines, and k-Nearest Neighbor). Final object-based habitat maps were produced through ensemble analysis of the outcomes from the three classifiers. The framework was tested for classifying group-level (3-class) and code-level (9-class) habitats in a portion of the Florida Keys. Informative and accurate habitat maps were achieved, with overall accuracies of 88.5% and 83.5% for the group-level and code-level classifications, respectively.
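    The ensemble-analysis step, combining the outputs of several classifiers, is commonly implemented as a per-pixel majority vote. The tiny label maps below are hypothetical stand-ins for the Random Forest, SVM, and k-NN outputs, not the study's actual maps.

```python
import numpy as np

def majority_vote(label_maps):
    """Combine per-classifier label maps by per-pixel majority vote;
    ties resolve to the smallest label id (argmax on counts)."""
    stack = np.stack(label_maps)                 # (n_classifiers, H, W)
    n_classes = int(stack.max()) + 1
    counts = np.stack([(stack == c).sum(0) for c in range(n_classes)])
    return counts.argmax(0)

# Hypothetical 2x2 habitat maps from three classifiers (e.g. RF, SVM, kNN).
rf  = np.array([[0, 1], [2, 2]])
svm = np.array([[0, 1], [1, 2]])
knn = np.array([[0, 2], [2, 2]])
fused = majority_vote([rf, svm, knn])
```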

  7. Poster — Thur Eve — 09: Evaluation of electrical impedance and computed tomography fusion algorithms using an anthropomorphic phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chugh, Brige Paul; Krishnan, Kalpagam; Liu, Jeff

    2014-08-15

Integration of biological conductivity information provided by Electrical Impedance Tomography (EIT) with anatomical information provided by Computed Tomography (CT) imaging could improve the ability to characterize tissues in clinical applications. In this paper, we report results of a study that compared the fusion of EIT with CT using three different image fusion algorithms: weighted averaging, wavelet fusion, and ROI indexing. The ROI-indexing method involves segmenting the regions of interest from the CT image and replacing their pixels with the pixels of the EIT image. The three algorithms were applied to CT and EIT images of an anthropomorphic phantom constructed from five acrylic contrast targets of varying diameter embedded in a base of gelatin bolus. Imaging performance was assessed using Detectability and the Structural Similarity Index Measure (SSIM). Wavelet fusion and ROI indexing resulted in lower Detectability (by 35% and 47%, respectively) yet higher SSIM (by 66% and 73%, respectively) than weighted averaging. Our results suggest that wavelet fusion and ROI indexing yielded more consistent and overall better fusion performance than weighted averaging.
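    Of the three algorithms compared, ROI indexing is the simplest to sketch: keep the CT image everywhere except inside a segmented mask, where EIT pixels are substituted. The pixel values and mask below are made up for illustration.

```python
import numpy as np

def roi_index_fusion(ct, eit, roi_mask):
    """ROI-indexing fusion: keep CT everywhere except inside the
    segmented regions of interest, where EIT pixels are substituted."""
    fused = ct.astype(float).copy()
    fused[roi_mask] = eit[roi_mask]
    return fused

ct  = np.full((4, 4), 100.0)          # hypothetical CT values
eit = np.full((4, 4), 5.0)            # hypothetical EIT conductivity map
roi = np.zeros((4, 4), dtype=bool)
roi[1:3, 1:3] = True                  # segmented target region
fused = roi_index_fusion(ct, eit, roi)
```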

  8. A real-time photogrammetric algorithm for sensor and synthetic image fusion with application to aviation combined vision

    NASA Astrophysics Data System (ADS)

    Lebedev, M. A.; Stepaniants, D. G.; Komarov, D. V.; Vygolov, O. V.; Vizilter, Yu. V.; Zheltov, S. Yu.

    2014-08-01

The paper addresses a promising visualization concept: combining sensor and synthetic images in order to enhance the situation awareness of a pilot during aircraft landing. A real-time algorithm is proposed for fusing a sensor image, acquired by an onboard camera, with a synthetic 3D image of the external view generated in an onboard computer. The pixel correspondence between the sensor and synthetic images is obtained by exterior orientation of a "virtual" camera using runway points as a geospatial reference. The runway points are detected by the Projective Hough Transform, the idea of which is to project the edge map onto a horizontal plane in object space (the runway plane) and then to calculate intensity projections of edge pixels along different directions of the intensity gradient. Experiments performed on simulated images show that on a base glide path the algorithm provides image fusion with pixel accuracy, even in the case of significant navigation errors.

  9. Multiscale image fusion using the undecimated wavelet transform with spectral factorization and nonorthogonal filter banks.

    PubMed

    Ellmauthaler, Andreas; Pagliari, Carla L; da Silva, Eduardo A B

    2013-03-01

Multiscale transforms are among the most popular techniques in the field of pixel-level image fusion. However, the fusion performance of these methods often deteriorates for images derived from different sensor modalities. In this paper, we demonstrate that for such images, results can be improved using a novel undecimated wavelet transform (UWT)-based fusion scheme, which splits the image decomposition process into two successive filtering operations using spectral factorization of the analysis filters. The actual fusion takes place after convolution with the first filter pair. Its significantly smaller support size minimizes the unwanted spreading of coefficient values around overlapping image singularities; such spreading usually complicates the feature selection process and may introduce reconstruction errors into the fused image. Moreover, we show that the nonsubsampled nature of the UWT allows the design of nonorthogonal filter banks, which are more robust to artifacts introduced during fusion, further improving the obtained results. The combination of these techniques leads to a fusion framework that provides clear advantages over traditional multiscale fusion approaches, independent of the underlying fusion rule, and reduces unwanted side effects such as ringing artifacts in the fused reconstruction.
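    The general multiscale-fusion idea can be illustrated with a minimal shift-invariant (undecimated) single-level decomposition: average the low-pass approximations and select detail coefficients by maximum absolute value. This is a generic sketch, not the paper's spectrally factorized UWT scheme; the box filter and test images are assumptions.

```python
import numpy as np

def box_lowpass(img):
    """3x3 undecimated (no downsampling) box low-pass via edge padding."""
    h, w = img.shape
    p = np.pad(img.astype(float), 1, mode='edge')
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def multiscale_fuse(a, b):
    """Fuse approximations by averaging; fuse details by max-abs selection."""
    la, lb = box_lowpass(a), box_lowpass(b)
    da, db = a - la, b - lb                   # detail (high-pass) layers
    detail = np.where(np.abs(da) >= np.abs(db), da, db)
    return (la + lb) / 2.0 + detail

a = np.zeros((8, 8)); a[2, 2] = 9.0      # sharp feature only in image a
b = np.zeros((8, 8)); b[5, 5] = 4.0      # sharp feature only in image b
f = multiscale_fuse(a, b)
```

    Because the decomposition is undecimated, the fusion rule operates at every pixel position, avoiding the shift sensitivity of critically sampled wavelets.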

  10. Decision-Level Fusion of Spatially Scattered Multi-Modal Data for Nondestructive Inspection of Surface Defects

    PubMed Central

    Heideklang, René; Shokouhi, Parisa

    2016-01-01

This article focuses on the fusion of flaw indications from multi-sensor nondestructive materials testing. Because each testing method makes use of a different physical principle, a multi-method approach has the potential of effectively differentiating actual defect indications from the many false alarms, thus enhancing detection reliability. In this study, we propose a new technique for aggregating scattered two- or three-dimensional sensory data. Using a density-based approach, the proposed method explicitly addresses localization uncertainties such as registration errors. This feature marks one of the major advantages of this approach over pixel-based image fusion techniques. We provide guidelines on how to set all the key parameters and demonstrate the technique's robustness. Finally, we apply our fusion approach to experimental data and demonstrate its capability to locate small defects by substantially reducing false alarms under conditions where no single-sensor method is adequate. PMID:26784200
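    The density-based idea can be sketched by summing Gaussian kernels around each sensor's scattered detections: where independent methods agree (within the kernel bandwidth, which absorbs registration error), densities add and a peak forms; isolated single-sensor indications stay low. The sensor names, positions, and bandwidth below are illustrative assumptions, not the paper's data or exact estimator.

```python
import numpy as np

def detection_density(points, grid, bandwidth=0.5):
    """Sum isotropic Gaussian kernels centered at 2-D detection points,
    evaluated on a flat (N, 2) grid -- a simple kernel density surface."""
    pts = np.asarray(points, dtype=float)
    d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2)).sum(1)

# Hypothetical flaw indications (x, y) from two NDT methods.
eddy_current = [(1.0, 1.0), (4.0, 4.0)]
thermography = [(1.1, 0.9), (7.0, 2.0)]

xs, ys = np.meshgrid(np.linspace(0, 8, 81), np.linspace(0, 8, 81))
grid = np.column_stack([xs.ravel(), ys.ravel()])
density = (detection_density(eddy_current, grid)
           + detection_density(thermography, grid)).reshape(xs.shape)

# Both sensors agree near (1, 1): the density peaks there, while the
# isolated single-sensor indications produce only lower bumps.
peak = np.unravel_index(density.argmax(), density.shape)
```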

  11. Quantifying the Uncertainty in High Spatial and Temporal Resolution Synthetic Land Surface Reflectance at Pixel Level Using Ground-Based Measurements

    NASA Astrophysics Data System (ADS)

    Kong, J.; Ryu, Y.

    2017-12-01

Algorithms that fuse high-temporal-frequency and high-spatial-resolution satellite images are widely used to develop dense time series of land surface observations. While many studies have shown that synthesized high-spatial-resolution images can be successfully applied to vegetation mapping and monitoring, the validation and correction of fused images have not received attention commensurate with their importance. To evaluate the precision of a fused image at the pixel level, in-situ reflectance measurements that account for pixel-level heterogeneity are necessary. In this study, synthetic images of land surface reflectance were predicted from coarse high-frequency images acquired by MODIS and high-spatial-resolution images from Landsat-8 OLI using the Flexible Spatiotemporal Data Fusion (FSDAF) method. Ground-based reflectance was measured with a JAZ spectrometer (Ocean Optics, Dunedin, FL, USA) over a rice paddy during five main growth stages in Cheorwon-gun, Republic of Korea, where landscape heterogeneity changes through the growing season. After analyzing the spatial heterogeneity and seasonal variation of land surface reflectance based on the ground measurements, the uncertainties of the fused images were quantified at the pixel level. Finally, this relationship was applied to correct the fused reflectance images and build a seasonal time series of rice paddy surface reflectance. This dataset could be significant for rice planting area extraction, phenological stage detection, and variable estimation.
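    A minimal version of such a correction is a linear error model fitted between fused and ground-measured reflectance. The five reflectance pairs below are fabricated for illustration only (the study's actual uncertainty model is not specified in the abstract); the sketch just shows that a fitted linear model reduces the residual against ground truth.

```python
import numpy as np

# Hypothetical per-stage pairs: fused reflectance vs. ground-measured reflectance.
fused  = np.array([0.12, 0.20, 0.33, 0.41, 0.52])
ground = np.array([0.10, 0.19, 0.30, 0.40, 0.50])

slope, intercept = np.polyfit(fused, ground, 1)   # linear error model
corrected = slope * fused + intercept             # corrected reflectance

rmse_before = np.sqrt(((fused - ground) ** 2).mean())
rmse_after  = np.sqrt(((corrected - ground) ** 2).mean())
```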

  12. Formulation of image fusion as a constrained least squares optimization problem

    PubMed Central

    Dwork, Nicholas; Lasry, Eric M.; Pauly, John M.; Balbás, Jorge

    2017-01-01

    Abstract. Fusing a lower resolution color image with a higher resolution monochrome image is a common practice in medical imaging. By incorporating spatial context and/or improving the signal-to-noise ratio, it provides clinicians with a single frame of the most complete information for diagnosis. In this paper, image fusion is formulated as a convex optimization problem that avoids image decomposition and permits operations at the pixel level. This results in a highly efficient and embarrassingly parallelizable algorithm based on widely available robust and simple numerical methods that realizes the fused image as the global minimizer of the convex optimization problem. PMID:28331885
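    The flavor of a pixel-level least-squares fusion can be shown with a toy per-pixel quadratic objective whose global minimizer has a closed form. This is a deliberately simplified stand-in: the paper's actual objective, constraints, and solver are not reproduced here, and the data-fidelity weight `lam` is an assumed parameter.

```python
import numpy as np

def ls_fuse(mono, color_up, lam=4.0):
    """Per-pixel least squares: minimize
        lam * (y - mono)**2 + (y - color_up)**2
    over the fused value y. The objective is convex, so its unique
    global minimizer is the closed-form weighted average below."""
    return (lam * mono + color_up) / (lam + 1.0)

mono  = np.array([[0.9, 0.1], [0.5, 0.7]])     # high-res monochrome
color = np.array([[0.5, 0.5], [0.5, 0.5]])     # upsampled low-res channel
fused = ls_fuse(mono, color, lam=4.0)
```

    Because the objective decouples per pixel, the computation is embarrassingly parallel, which matches the efficiency argument made in the abstract.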

  13. Object-based land cover classification based on fusion of multifrequency SAR data and THAICHOTE optical imagery

    NASA Astrophysics Data System (ADS)

    Sukawattanavijit, Chanika; Srestasathiern, Panu

    2017-10-01

Land Use and Land Cover (LULC) information is significant for observing and evaluating environmental change. LULC classification using remotely sensed data is a technique popularly employed at global and local scales, particularly in urban areas, which have diverse land cover types. These are essential components of the urban terrain and ecosystem. At present, object-based image analysis (OBIA) is becoming widely popular for land cover classification using high-resolution images. COSMO-SkyMed SAR data were fused with THAICHOTE (formerly THEOS: Thailand Earth Observation Satellite) optical data for object-based land cover classification. This paper presents a comparison between object-based and pixel-based approaches to image fusion. The per-pixel method, support vector machines (SVM), was applied to the fused image based on Principal Component Analysis (PCA). The object-based classification was applied to the fused images to separate land cover classes using a nearest neighbor (NN) classifier. Finally, accuracy was assessed by comparing the land cover maps generated from the fused image dataset and from the THAICHOTE image alone. The object-based fusion of COSMO-SkyMed and THAICHOTE images demonstrated the best classification accuracies, well over 85%. Overall, object-based data fusion provides higher land cover classification accuracy than per-pixel data fusion.

  14. Enhanced Deforestation Mapping in North Korea using Spatial-temporal Image Fusion Method and Phenology-based Index

    NASA Astrophysics Data System (ADS)

    Jin, Y.; Lee, D.

    2017-12-01

North Korea (the Democratic People's Republic of Korea, DPRK) is known to have some of the most degraded forest in the world. The forest landscape of North Korea is complex and heterogeneous; the major vegetation cover types are hillside farms, unstocked forest, natural forest, and plateau vegetation. Better classification of deforested-area cover types at high spatial resolution could provide essential information for decisions about forest management priorities and the restoration of deforested areas. For mapping heterogeneous vegetation cover, phenology-based indices help overcome the reflectance confusion that occurs when using single-season images. Coarse-spatial-resolution images can be acquired at a high repetition rate, which is useful for analyzing phenological characteristics, but they may not capture the spatial detail of the land cover mosaic of the region of interest. Previous spatial-temporal fusion methods either captured only temporal change, or addressed both temporal and spatial change but with low accuracy in heterogeneous landscapes and small patches. In this study, a new spatial-temporal image fusion method focused on heterogeneous landscapes is proposed to produce images at both fine spatial and fine temporal resolution.
We classified pixels into three types between the base image and the target image. The first type shows only reflectance change caused by phenology; these pixels supply reflectance, shape, and texture information. The second type shows both reflectance and spectral change in some bands caused by phenology, as in rice paddies or farmland; these pixels supply only shape and texture information. The third type shows reflectance and spectral change caused by land cover change; these pixels provide no information, because we cannot know how the land cover changed in the target image. A different prediction method is applied to each pixel type. Results show that both STARFM and FSDAF predicted with low accuracy on second-type pixels and small patches. Classification using the images fused by the proposed method achieved an overall accuracy of 89.38%, with a corresponding kappa coefficient of 0.87.

  15. Computational polarization difference underwater imaging based on image fusion

    NASA Astrophysics Data System (ADS)

    Han, Hongwei; Zhang, Xiaohui; Guan, Feng

    2016-01-01

Polarization difference imaging (PDI) can improve the quality of images acquired underwater, whether the background and veiling light are unpolarized or partially polarized. Computational polarization difference imaging, which replaces the mechanical rotation of the polarization analyzer and shortens the time spent selecting the optimum orthogonal ǁ and ⊥ axes, is an improvement over conventional PDI. However, it originally obtains the output image by manually setting the weight coefficient to an identical constant for all pixels. In this paper, an algorithm is proposed to combine the Q and U parameters of the Stokes vector through pixel-level image fusion based on the non-subsampled contourlet transform. An experimental system, built from a green LED array with a polarizer to illuminate a flat target immersed in water and a CCD with a polarization analyzer to capture target images at different angles, is used to verify the proposed algorithm. The results show that the output processed by our algorithm reveals more details of the flat target and has higher contrast than original computational polarization difference imaging.
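    The ingredients of such a scheme can be sketched as follows: compute the linear Stokes components Q and U from four analyzer orientations, then combine them per pixel. The max-contrast rule below is a simple stand-in for the paper's contourlet-domain fusion, and the 2x2 analyzer images are fabricated.

```python
import numpy as np

def stokes_qu(i0, i45, i90, i135):
    """Linear Stokes components from four analyzer orientations."""
    return i0 - i90, i45 - i135

def pdi_fuse(q, u):
    """Per-pixel combination of Q and U: keep the component with the
    larger polarization contrast (a stand-in for the contourlet-domain
    fusion rule of the paper)."""
    return np.where(np.abs(q) >= np.abs(u), q, u)

# Hypothetical 2x2 analyzer images of a target in scattering water.
i0   = np.array([[0.8, 0.5], [0.5, 0.5]])
i45  = np.array([[0.5, 0.9], [0.5, 0.5]])
i90  = np.array([[0.4, 0.5], [0.5, 0.5]])
i135 = np.array([[0.5, 0.3], [0.5, 0.5]])
q, u = stokes_qu(i0, i45, i90, i135)
pd = pdi_fuse(q, u)
```

    Unpolarized background pixels (equal in all four images) cancel to zero, while partially polarized target pixels survive in either Q or U.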

  16. Performance of photovoltaic arrays in-vivo and characteristics of prosthetic vision in animals with retinal degeneration

    PubMed Central

    Lorach, Henri; Goetz, Georges; Mandel, Yossi; Lei, Xin; Kamins, Theodore I.; Mathieson, Keith; Huie, Philip; Dalal, Roopa; Harris, James S.; Palanker, Daniel

    2014-01-01

Loss of photoreceptors during retinal degeneration leads to blindness, but information can be reintroduced into the visual system using electrical stimulation of the remaining retinal neurons. Subretinal photovoltaic arrays convert pulsed illumination into pulsed electric current to stimulate the inner retinal neurons. Since the required irradiance exceeds natural luminance levels, invisible near-infrared (915 nm) light is used to avoid photophobic effects. We characterized the thresholds and dynamic range of cortical responses to prosthetic stimulation with arrays of various pixel sizes and different numbers of photodiodes per pixel. Stimulation thresholds for devices with 140 µm pixels were approximately half those of 70 µm pixels, and with both pixel sizes, thresholds were lower with 2 diodes than with 3 diodes per pixel. In all cases these thresholds were more than two orders of magnitude below the ocular safety limit. At high stimulation frequencies (>20 Hz), the cortical response exhibited flicker fusion. Over one order of magnitude of dynamic range could be achieved by varying either pulse duration or irradiance. However, contrast sensitivity was very limited. Cortical responses could be detected even with only a few illuminated pixels. Finally, we demonstrate that recording the corneal electric potential in response to patterned illumination of the subretinal arrays allows monitoring of the current produced by each pixel, and thereby assessment of changes in implant performance over time. PMID:25255990

  17. Segment fusion of ToF-SIMS images.

    PubMed

    Milillo, Tammy M; Miller, Mary E; Fischione, Remo; Montes, Angelina; Gardella, Joseph A

    2016-06-08

The imaging capabilities of time-of-flight secondary ion mass spectrometry (ToF-SIMS) have not been used to their full potential in the analysis of polymer and biological samples. Imaging has been limited by the size of the dataset and the chemical complexity of the sample being imaged. Pixel- and segment-based image fusion algorithms commonly used in remote sensing, ecology, geography, and geology provide a way to improve the spatial resolution and classification of biological images. In this study, a sample of Arabidopsis thaliana was treated with silver nanoparticles and imaged with ToF-SIMS. These images provide insight into the uptake mechanism of the silver nanoparticles into the plant tissue, giving new understanding of the uptake of heavy metals in the environment. The Munechika algorithm was programmed in-house and applied to achieve pixel-based fusion, which improved the spatial resolution of the obtained image. Multispectral and quadtree segment- (region-) based fusion algorithms were performed using eCognition, a commercially available remote sensing software suite, and used to classify the images. The Munechika fusion improved the spatial resolution for the images containing silver nanoparticles, while the segment fusion allowed classification and fusion based on the tissue types in the sample, suggesting potential pathways for the uptake of the silver nanoparticles.

  18. Region-based multifocus image fusion for the precise acquisition of Pap smear images.

    PubMed

    Tello-Mijares, Santiago; Bescós, Jesús

    2018-05-01

A multifocus image fusion method to obtain a single focused image from a sequence of high-magnification microscopic Papanicolaou smear (Pap smear) images is presented. These images, each captured at a different position of the microscope lens, frequently show partially focused cells or parts of cells, which makes them impractical for the direct application of image analysis techniques. The proposed method obtains a focused image with high preservation of the original pixel information while achieving negligible visibility of fusion artifacts. The method starts by identifying the best-focused image of the sequence; then, it performs a mean-shift segmentation over this image; the focus level of the segmented regions is evaluated in all the images of the sequence, and the best-focused regions are merged into a single combined image; finally, this image is processed with an adaptive artifact removal process. The combination of a region-oriented approach, instead of block-based approaches, and a minimal modification of the value of focused pixels in the original images achieves a highly contrasted image with no visible artifacts, which makes this method especially convenient for the medical imaging domain. The proposed method is compared with several state-of-the-art alternatives over a representative dataset. The experimental results show that our proposal obtains the best and most stable quality indicators. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
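    The region-selection step can be sketched with a standard sharpness score (variance of a 4-neighbor Laplacian) evaluated per segmented region. The focus measure, synthetic checkerboard "texture", and half-image segmentation below are illustrative assumptions; the paper's mean-shift segmentation and artifact-removal stages are not reproduced.

```python
import numpy as np

def region_focus(img, mask):
    """Variance of a 4-neighbor Laplacian restricted to a region mask:
    a standard sharpness (focus) score for that region."""
    p = np.pad(img.astype(float), 1, mode='edge')
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4 * p[1:-1, 1:-1])
    return lap[mask].var()

def fuse_regions(stack, labels):
    """For each segmented region, copy pixels from the sequence image
    that scores highest on the focus measure over that region."""
    fused = np.zeros_like(stack[0], dtype=float)
    for r in np.unique(labels):
        m = labels == r
        best = max(stack, key=lambda im: region_focus(im, m))
        fused[m] = best[m]
    return fused

sharp = np.indices((6, 6)).sum(0) % 2 * 1.0   # checkerboard = in-focus texture
flat = np.full((6, 6), 0.5)                    # defocused: no detail
img_a = np.where(np.arange(6)[None, :] < 3, sharp, flat)  # left half in focus
img_b = np.where(np.arange(6)[None, :] < 3, flat, sharp)  # right half in focus
labels = np.tile((np.arange(6) >= 3).astype(int), (6, 1))  # two regions
fused = fuse_regions([img_a, img_b], labels)
```

    With each half-image region taken from the sequence image that is sharp there, the fused result recovers the fully focused texture.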

  19. Terrestrial hyperspectral image shadow restoration through fusion with terrestrial lidar

    NASA Astrophysics Data System (ADS)

    Hartzell, Preston J.; Glennie, Craig L.; Finnegan, David C.; Hauser, Darren L.

    2017-05-01

    Recent advances in remote sensing technology have expanded the acquisition and fusion of active lidar and passive hyperspectral imagery (HSI) from exclusively airborne observations to include terrestrial modalities. In contrast to airborne collection geometry, hyperspectral imagery captured from terrestrial cameras is prone to extensive solar shadowing on vertical surfaces leading to reductions in pixel classification accuracies or outright removal of shadowed areas from subsequent analysis tasks. We demonstrate the use of lidar spatial information for sub-pixel HSI shadow detection and the restoration of shadowed pixel spectra via empirical methods that utilize sunlit and shadowed pixels of similar material composition. We examine the effectiveness of radiometrically calibrated lidar intensity in identifying these similar materials in sun and shade conditions and further evaluate a restoration technique that leverages ratios derived from the overlapping lidar laser and HSI wavelengths. Simulations of multiple lidar wavelengths, i.e., multispectral lidar, indicate the potential for HSI spectral restoration that is independent of the complexity and costs associated with rigorous radiometric transfer models, which have yet to be developed for horizontal-viewing terrestrial HSI sensors. The spectral restoration performance of shadowed HSI pixels is quantified for imagery of a geologic outcrop through improvements in spectral shape, spectral scale, and HSI band correlation.

  20. 3D Spatial and Spectral Fusion of Terrestrial Hyperspectral Imagery and Lidar for Hyperspectral Image Shadow Restoration Applied to a Geologic Outcrop

    NASA Astrophysics Data System (ADS)

    Hartzell, P. J.; Glennie, C. L.; Hauser, D. L.; Okyay, U.; Khan, S.; Finnegan, D. C.

    2016-12-01

    Recent advances in remote sensing technology have expanded the acquisition and fusion of active lidar and passive hyperspectral imagery (HSI) from an exclusively airborne technique to terrestrial modalities. This enables high resolution 3D spatial and spectral quantification of vertical geologic structures for applications such as virtual 3D rock outcrop models for hydrocarbon reservoir analog analysis and mineral quantification in open pit mining environments. In contrast to airborne observation geometry, the vertical surfaces observed by horizontal-viewing terrestrial HSI sensors are prone to extensive topography-induced solar shadowing, which leads to reduced pixel classification accuracy or outright removal of shadowed pixels from analysis tasks. Using a precisely calibrated and registered offset cylindrical linear array camera model, we demonstrate the use of 3D lidar data for sub-pixel HSI shadow detection and the restoration of the shadowed pixel spectra via empirical methods that utilize illuminated and shadowed pixels of similar material composition. We further introduce a new HSI shadow restoration technique that leverages collocated backscattered lidar intensity, which is resistant to solar conditions, obtained by projecting the 3D lidar points through the HSI camera model into HSI pixel space. Using ratios derived from the overlapping lidar laser and HSI wavelengths, restored shadow pixel spectra are approximated using a simple scale factor. Simulations of multiple lidar wavelengths, i.e., multi-spectral lidar, indicate the potential for robust HSI spectral restoration that is independent of the complexity and costs associated with rigorous radiometric transfer models, which have yet to be developed for horizontal-viewing terrestrial HSI sensors. The spectral restoration performance is quantified through HSI pixel classification consistency between full sun and partial sun exposures of a single geologic outcrop.
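    The scale-factor restoration described above can be sketched in a few lines: since backscattered lidar intensity is insensitive to solar shadowing, the ratio between the lidar-derived signal and the shadowed HSI value in the overlapping band gives a single multiplicative correction for the whole spectrum. The four-band spectrum, uniform shadow attenuation, and unit lidar-to-HSI conversion below are simplifying assumptions for illustration.

```python
import numpy as np

def restore_shadow(spectrum, band_idx, lidar_intensity, lidar_to_hsi=1.0):
    """Scale an entire shadowed HSI spectrum by one factor so that its
    value in the band overlapping the lidar wavelength matches the
    (sun-invariant) backscattered lidar intensity."""
    k = (lidar_to_hsi * lidar_intensity) / spectrum[band_idx]
    return spectrum * k

sunlit = np.array([0.20, 0.40, 0.60, 0.50])   # true material spectrum
shadow = 0.25 * sunlit                         # uniformly attenuated in shade
lidar = 0.60                                   # lidar sees band 2 unshadowed
restored = restore_shadow(shadow, band_idx=2, lidar_intensity=lidar)
```

    Under the uniform-attenuation assumption the single scale factor recovers the sunlit spectrum exactly; in practice shadowing is wavelength dependent, which is why the abstract calls this an approximation.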

  1. Joint sparsity based heterogeneous data-level fusion for target detection and estimation

    NASA Astrophysics Data System (ADS)

    Niu, Ruixin; Zulch, Peter; Distasio, Marcello; Blasch, Erik; Shen, Dan; Chen, Genshe

    2017-05-01

    Typical surveillance systems employ decision- or feature-level fusion approaches to integrate heterogeneous sensor data, which are sub-optimal and incur information loss. In this paper, we investigate data-level heterogeneous sensor fusion. Since the sensors monitor the common targets of interest, whose states can be determined by only a few parameters, it is reasonable to assume that the measurement domain has a low intrinsic dimensionality. For heterogeneous sensor data, we develop a joint-sparse data-level fusion (JSDLF) approach based on the emerging joint sparse signal recovery techniques by discretizing the target state space. This approach is applied to fuse signals from multiple distributed radio frequency (RF) signal sensors and a video camera for joint target detection and state estimation. The JSDLF approach is data-driven and requires minimum prior information, since there is no need to know the time-varying RF signal amplitudes, or the image intensity of the targets. It can handle non-linearity in the sensor data due to state space discretization and the use of frequency/pixel selection matrices. Furthermore, for a multi-target case with J targets, the JSDLF approach only requires discretization in a single-target state space, instead of discretization in a J-target state space, as in the case of the generalized likelihood ratio test (GLRT) or the maximum likelihood estimator (MLE). Numerical examples are provided to demonstrate that the proposed JSDLF approach achieves excellent performance with near real-time accurate target position and velocity estimates.
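    The joint-sparse recovery idea can be illustrated with a greedy solver on a toy discretized state grid: each dictionary column is the stacked multi-sensor response to a target occupying one grid cell, and recovering a sparse support amounts to detecting the occupied cells. The random orthonormal dictionary and orthogonal matching pursuit (OMP) below are stand-in assumptions, not the paper's actual signatures or recovery algorithm.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k dictionary columns
    (here, discretized target states) that best explain measurement y."""
    resid, idx = y.astype(float).copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ resid))))
        x, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        resid = y - A[:, idx] @ x
    return sorted(idx)

# Toy dictionary: 25 orthonormal columns, one per discretized state cell.
rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.standard_normal((25, 25)))
true_cells = [3, 17]                   # two targets on the state grid
y = A[:, true_cells].sum(1)            # noiseless stacked measurement
support = omp(A, y, k=2)               # recovered occupied cells
```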

  2. Review of Fusion Systems and Contributing Technologies for SIHS-TD (Examen des Systemes de Fusion et des Technologies d'Appui pour la DT SIHS)

    DTIC Science & Technology

    2007-03-31

  3. A novel fusion method of improved adaptive LTP and two-directional two-dimensional PCA for face feature extraction

    NASA Astrophysics Data System (ADS)

    Luo, Yuan; Wang, Bo-yu; Zhang, Yi; Zhao, Li-ming

    2018-03-01

    In this paper, addressing the defect that the local texture features of a face image cannot be completely described under different illuminations and random noises because the threshold of the local ternary pattern (LTP) cannot be calculated adaptively, a local three-value model, the improved adaptive local ternary pattern (IALTP), is proposed. Firstly, the difference function between the center pixel and the neighborhood pixel weight is established to obtain the statistical characteristics of the central pixel and the neighborhood pixels. Secondly, an adaptive gradient descent iterative function is established to calculate the difference coefficient, which is defined as the threshold of the IALTP operator. Finally, the mean and standard deviation of the pixel weights of the local region are used as the coding mode of IALTP. In order to reflect the overall properties of the face and reduce the dimension of the features, two-directional two-dimensional PCA ((2D)2PCA) is adopted. The IALTP is used to extract local texture features of the eye and mouth areas. After combining the global features and local features, the fusion features (IALTP+) are obtained. The experimental results on the Extended Yale B and AR standard face databases indicate that under different illuminations and random noises, the algorithm proposed in this paper is more robust than others, and the feature dimension is smaller. The shortest running time reaches 0.3296 s, and the highest recognition rate reaches 97.39%.
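The ternary coding step above can be sketched as follows. The paper derives its threshold via a gradient-descent difference coefficient; here the local standard deviation of the neighborhood serves as a simple stand-in, so this is an illustration of adaptive LTP coding rather than the IALTP operator itself:

```python
import math

# Sketch of an adaptively thresholded local ternary pattern. The threshold
# here is the local standard deviation of the neighborhood, a stand-in for
# the paper's gradient-descent difference coefficient.
def ialtp_codes(center, neighbors):
    n = len(neighbors)
    mean = sum(neighbors) / n
    t = math.sqrt(sum((v - mean) ** 2 for v in neighbors) / n)
    def code(v):
        if v > center + t:
            return 1   # clearly brighter than the center
        if v < center - t:
            return -1  # clearly darker than the center
        return 0       # within the adaptive tolerance band
    return [code(v) for v in neighbors]
```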

  4. Three-Dimensional Road Network by Fusion of Polarimetric and Interferometric SAR Data

    NASA Technical Reports Server (NTRS)

    Gamba, P.; Houshmand, B.

    1998-01-01

    In this paper a fuzzy classification procedure is applied to polarimetric radar measurements, and street pixels are detected. These data are successively grouped into consistent roads by means of a dynamic programming approach based on the fuzzy membership function values. Further fusion of the 2D road network extracted and 3D TOPSAR measurements provides a powerful way to analyze urban infrastructures.

  5. Multi-Sensor Registration of Earth Remotely Sensed Imagery

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Cole-Rhodes, Arlene; Eastman, Roger; Johnson, Kisha; Morisette, Jeffrey; Netanyahu, Nathan S.; Stone, Harold S.; Zavorin, Ilya; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    Assuming that approximate registration is given within a few pixels by a systematic correction system, we develop automatic image registration methods for multi-sensor data with the goal of achieving sub-pixel accuracy. Automatic image registration is usually defined by three steps: feature extraction, feature matching, and data resampling or fusion. Our previous work focused on image correlation methods based on the use of different features. In this paper, we study different feature matching techniques and present five algorithms where the features are either original gray levels or wavelet-like features, and the feature matching is based on gradient descent optimization, statistical robust matching, and mutual information. These algorithms are tested and compared on several multi-sensor datasets covering one of the EOS Core Sites, the Konza Prairie in Kansas, from four different sensors: IKONOS (4m), Landsat-7/ETM+ (30m), MODIS (500m), and SeaWiFS (1000m).
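Of the matching criteria listed, mutual information is the least obvious to compute; a histogram-based sketch for small discrete-valued images, flattened to sequences (our illustration, not the authors' implementation):

```python
import math
from collections import Counter

# Histogram-based mutual information between two equally sized images with
# small discrete intensity alphabets (flattened to sequences). Registration
# methods maximize this quantity over candidate alignments.
def mutual_information(a, b):
    n = len(a)
    joint = Counter(zip(a, b))
    pa, pb = Counter(a), Counter(b)
    mi = 0.0
    for (x, y), c in joint.items():
        # p(x,y) / (p(x) p(y)) == c * n / (count_a * count_b)
        mi += (c / n) * math.log2(c * n / (pa[x] * pb[y]))
    return mi
```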

  6. Shadow-free single-pixel imaging

    NASA Astrophysics Data System (ADS)

    Li, Shunhua; Zhang, Zibang; Ma, Xiao; Zhong, Jingang

    2017-11-01

    Single-pixel imaging is an innovative imaging scheme that has received increasing attention in recent years, since it is applicable to imaging at non-visible wavelengths and under weak light conditions. However, as in conventional imaging, shadows are likely to occur in single-pixel imaging and sometimes have negative effects in practical use. In this paper, the principle of shadow occurrence in single-pixel imaging is analyzed, and a technique for shadow removal is proposed. In the proposed technique, several single-pixel detectors are used to detect the backscattered light at different locations, so that the shadows in the reconstructed images corresponding to the different detectors are complementary. A shadow-free reconstruction can be derived by fusing the shadow-complementary images using a maximum selection rule. To deal with the problem of intensity mismatch in image fusion, we put forward a simple calibration. As experimentally demonstrated, the technique is able to reconstruct monochromatic and full-color shadow-free images.
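The maximum-selection fusion with intensity calibration can be sketched as below; the per-detector gain model is our assumption for the intensity-mismatch calibration, which the paper only describes as "simple":

```python
# Maximum selection fusion of shadow-complementary reconstructions. The
# per-detector gain model for the intensity-mismatch calibration is our
# assumption; the paper proposes a simple calibration for the same purpose.
def fuse_shadow_free(images, gains):
    # images: one reconstructed image (flattened) per single-pixel detector
    return [max(g * v for g, v in zip(gains, pix))
            for pix in zip(*images)]
```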

  7. Multi-focus image fusion based on window empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Qin, Xinqiang; Zheng, Jiaoyue; Hu, Gang; Wang, Jiao

    2017-09-01

    In order to improve multi-focus image fusion quality, a novel fusion algorithm based on window empirical mode decomposition (WEMD) is proposed. WEMD is an improved form of bidimensional empirical mode decomposition (BEMD); because its decomposition process uses an adding-window principle, it effectively resolves the signal concealment problem. We used WEMD for multi-focus image fusion and formulated different fusion rules for the bidimensional intrinsic mode function (BIMF) components and the residue component. For fusion of the BIMF components, the concept of the sum-modified-Laplacian was used and a scheme based on visual feature contrast was adopted; when choosing the residue coefficients, a pixel value based on the local visibility was selected. We carried out four groups of multi-focus image fusion experiments and compared objective evaluation criteria with three other fusion methods. The experimental results show that the proposed fusion approach is effective and performs better at fusing multi-focus images than some traditional methods.
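The sum-modified-Laplacian focus measure used in the BIMF fusion rule has a standard form; a sketch over a small window, where the window half-size and step are illustrative choices rather than the paper's parameters:

```python
# Sum-modified-Laplacian (SML) focus measure at pixel (x, y) of a grayscale
# image given as a list of rows; window half-size and step are illustrative.
def sml(img, x, y, win=1, step=1):
    total = 0.0
    for i in range(x - win, x + win + 1):
        for j in range(y - win, y + win + 1):
            # modified Laplacian: absolute second differences in x and y
            ml = abs(2 * img[i][j] - img[i - step][j] - img[i + step][j]) \
               + abs(2 * img[i][j] - img[i][j - step] - img[i][j + step])
            total += ml
    return total
```

In a fusion rule, the source whose window scores the higher SML is treated as the in-focus one at that location.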

  8. Classification of weld defect based on information fusion technology for radiographic testing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Hongquan; Liang, Zeming, E-mail: heavenlzm@126.com; Gao, Jianmin

    Improving the efficiency and accuracy of weld defect classification is an important technical problem in developing the radiographic testing system. This paper proposes a novel weld defect classification method based on information fusion technology, Dempster–Shafer evidence theory. First, to characterize weld defects and improve the accuracy of their classification, 11 weld defect features were defined based on the sub-pixel level edges of radiographic images, four of which are presented for the first time in this paper. Second, we applied information fusion technology to combine different features for weld defect classification, including a mass function defined based on the weld defect feature information and the quartile-method-based calculation of standard weld defect class which is to solve a sample problem involving a limited number of training samples. A steam turbine weld defect classification case study is also presented herein to illustrate our technique. The results show that the proposed method can increase the correct classification rate with limited training samples and address the uncertainties associated with weld defect classification.

  9. Classification of weld defect based on information fusion technology for radiographic testing system.

    PubMed

    Jiang, Hongquan; Liang, Zeming; Gao, Jianmin; Dang, Changying

    2016-03-01

    Improving the efficiency and accuracy of weld defect classification is an important technical problem in developing the radiographic testing system. This paper proposes a novel weld defect classification method based on information fusion technology, Dempster-Shafer evidence theory. First, to characterize weld defects and improve the accuracy of their classification, 11 weld defect features were defined based on the sub-pixel level edges of radiographic images, four of which are presented for the first time in this paper. Second, we applied information fusion technology to combine different features for weld defect classification, including a mass function defined based on the weld defect feature information and the quartile-method-based calculation of standard weld defect class which is to solve a sample problem involving a limited number of training samples. A steam turbine weld defect classification case study is also presented herein to illustrate our technique. The results show that the proposed method can increase the correct classification rate with limited training samples and address the uncertainties associated with weld defect classification.
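Dempster's rule of combination, the core of the Dempster-Shafer fusion step in the two records above, can be sketched for mass functions over singleton defect classes plus the whole frame Θ (the class names are illustrative; the paper builds its masses from the 11 weld defect features):

```python
# Dempster's rule for mass functions over singleton defect classes plus the
# whole frame THETA (ignorance). Class names are illustrative; the paper
# builds its masses from the 11 weld defect features.
def dempster_combine(m1, m2, theta="THETA"):
    combined, conflict = {}, 0.0
    for h1, v1 in m1.items():
        for h2, v2 in m2.items():
            if h1 == h2:
                h = h1
            elif h1 == theta:
                h = h2
            elif h2 == theta:
                h = h1
            else:               # disjoint singletons: conflicting mass
                conflict += v1 * v2
                continue
            combined[h] = combined.get(h, 0.0) + v1 * v2
    k = 1.0 - conflict          # Dempster normalization constant
    return {h: v / k for h, v in combined.items()}
```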

  10. Dual-band QWIP MWIR/LWIR focal plane array test results

    NASA Astrophysics Data System (ADS)

    Goldberg, Arnold C.; Fischer, Theodore; Kennerly, Stephen; Wang, Samuel C.; Sundaram, Mani; Uppal, Parvez; Winn, Michael L.; Milne, Gregory L.; Stevens, Mark A.

    2000-07-01

    We report on the results of laboratory and field tests on a pixel-registered, 2-color MWIR/LWIR 256 X 256 QWIP FPA with simultaneous integrating capability. The FPA studied contained stacked QWIP structures with spectral peaks at 5.1 micrometer and 9.0 micrometer. Normally incident radiation was coupled into the devices using a diffraction grating designed to operate in both spectral bands. Each pixel is connected to the read-out integrated circuit by three bumps to permit the application of separate bias levels to each QWIP stack and allow simultaneous integration of the signal current in each band. We found the FPA to have high pixel operability, well balanced response, good imaging performance, high optical fill factor, and low spectral crosstalk. We present data on measurements of the noise-equivalent temperature difference of the FPA in both bands as functions of temperature and bias. The FPA data are compared to single-pixel data taken on devices from the same wafer. We also present data on the sensitivity of this FPA to polarized light. It is found that the LWIR portion of the device is very sensitive to the direction of polarization of the incident light. The MWIR part of the device is relatively insensitive to the polarization. In addition, imagery was taken with this FPA of military targets in the field. Image fusion techniques were applied to the resulting images.

  11. Space-Time Data Fusion

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Nguyen, Hai; Olsen, Edward; Cressie, Noel

    2011-01-01

    Space-time Data Fusion (STDF) is a methodology for combining heterogeneous remote sensing data to optimally estimate the true values of a geophysical field of interest and obtain uncertainties for those estimates. The input data sets may have different observing characteristics, including different footprints, spatial resolutions and fields of view, orbit cycles, biases, and noise characteristics. Despite these differences, all observed data can be linked to the underlying field, and therefore to each other, by a statistical model. Differences in footprints and other geometric characteristics are accounted for by parameterizing pixel-level remote sensing observations as spatial integrals of true field values lying within pixel boundaries, plus measurement error. Both spatial and temporal correlations in the true field and in the observations are estimated and incorporated through the use of a space-time random effects (STRE) model. Once the model's parameters are estimated, we use it to derive expressions for optimal (minimum mean squared error and unbiased) estimates of the true field at any arbitrary location of interest, computed from the observations. Standard errors of these estimates are also produced, allowing confidence intervals to be constructed. The procedure is carried out on a fine spatial grid to approximate a continuous field. We demonstrate STDF by applying it to the problem of estimating CO2 concentration in the lower atmosphere using data from the Atmospheric Infrared Sounder (AIRS) and the Japanese Greenhouse Gases Observing Satellite (GOSAT) over one year for the continental US.
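In the degenerate case of independent, unbiased observations of a single field value, the minimum-MSE unbiased estimate reduces to inverse-variance weighting; a sketch of that special case (the full STRE model, with its spatial integrals and space-time correlations, is far richer than this):

```python
# Minimum-MSE, unbiased fusion of independent noisy looks at ONE field value
# reduces to inverse-variance weighting; the full STRE model additionally
# handles footprints, space-time correlation and bias, all omitted here.
def fuse_observations(obs):
    """obs: list of (measured_value, noise_variance) pairs."""
    weights = [1.0 / var for _, var in obs]
    num = sum(w * y for w, (y, _) in zip(weights, obs))
    return num / sum(weights)
```

The same weighting also yields the fused standard error, 1/sqrt(sum of weights), which is what allows confidence intervals to be attached to the estimates.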

  12. Imaging properties of pixellated scintillators with deep pixels

    PubMed Central

    Barber, H. Bradford; Fastje, David; Lemieux, Daniel; Grim, Gary P.; Furenlid, Lars R.; Miller, Brian W.; Parkhurst, Philip; Nagarkar, Vivek V.

    2015-01-01

    We have investigated the light-transport properties of scintillator arrays with long, thin pixels (deep pixels) for use in high-energy gamma-ray imaging. We compared 10×10 pixel arrays of YSO:Ce, LYSO:Ce and BGO (1mm × 1mm × 20 mm pixels) made by Proteus, Inc. with similar 10×10 arrays of LSO:Ce and BGO (1mm × 1mm × 15mm pixels) loaned to us by Saint-Gobain. The imaging and spectroscopic behaviors of these scintillator arrays are strongly affected by the choice of a reflector used as an inter-pixel spacer (3M ESR in the case of the Proteus arrays and white, diffuse-reflector for the Saint-Gobain arrays). We have constructed a 3700-pixel LYSO:Ce Prototype NIF Gamma-Ray Imager for use in diagnosing target compression in inertial confinement fusion. This system was tested at the OMEGA Laser and exhibited significant optical, inter-pixel cross-talk that was traced to the use of a single-layer of ESR film as an inter-pixel spacer. We show how the optical cross-talk can be mapped, and discuss correction procedures. We demonstrate a 10×10 YSO:Ce array as part of an iQID (formerly BazookaSPECT) imager and discuss issues related to the internal activity of 176Lu in LSO:Ce and LYSO:Ce detectors. PMID:26236070

  13. Imaging properties of pixellated scintillators with deep pixels

    NASA Astrophysics Data System (ADS)

    Barber, H. Bradford; Fastje, David; Lemieux, Daniel; Grim, Gary P.; Furenlid, Lars R.; Miller, Brian W.; Parkhurst, Philip; Nagarkar, Vivek V.

    2014-09-01

    We have investigated the light-transport properties of scintillator arrays with long, thin pixels (deep pixels) for use in high-energy gamma-ray imaging. We compared 10x10 pixel arrays of YSO:Ce, LYSO:Ce and BGO (1mm x 1mm x 20 mm pixels) made by Proteus, Inc. with similar 10x10 arrays of LSO:Ce and BGO (1mm x 1mm x 15mm pixels) loaned to us by Saint-Gobain. The imaging and spectroscopic behaviors of these scintillator arrays are strongly affected by the choice of a reflector used as an inter-pixel spacer (3M ESR in the case of the Proteus arrays and white, diffuse-reflector for the Saint-Gobain arrays). We have constructed a 3700-pixel LYSO:Ce Prototype NIF Gamma-Ray Imager for use in diagnosing target compression in inertial confinement fusion. This system was tested at the OMEGA Laser and exhibited significant optical, inter-pixel cross-talk that was traced to the use of a single-layer of ESR film as an inter-pixel spacer. We show how the optical cross-talk can be mapped, and discuss correction procedures. We demonstrate a 10x10 YSO:Ce array as part of an iQID (formerly BazookaSPECT) imager and discuss issues related to the internal activity of 176Lu in LSO:Ce and LYSO:Ce detectors.

  14. A dual-channel fusion system of visual and infrared images based on color transfer

    NASA Astrophysics Data System (ADS)

    Pei, Chuang; Jiang, Xiao-yu; Zhang, Peng-wei; Liang, Hao-cong

    2013-09-01

    The increasing availability and deployment of imaging sensors operating in multiple spectra has led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, most of these algorithms produce gray or false-color fusion results that are not adapted to human vision. Transferring color from a day-time reference image to obtain a natural-color fusion result is an effective way to solve this problem, but the computational cost of color transfer is expensive and cannot meet the requirements of real-time image processing. We developed a dual-channel infrared and visual image fusion system based on a TMS320DM642 digital signal processing chip. The system is divided into an image acquisition and registration unit, an image fusion processing unit, a system control unit, and an image fusion output unit. The image registration of the dual-channel images is realized by combining hardware and software methods in the system. A false-color image fusion algorithm in RGB color space is used to obtain an R-G fused image; the system then chooses a reference image to transfer color to the fusion result. A color lookup table based on statistical properties of the images is proposed to solve the computational complexity problem in color transfer. The mapping calculation between the standard lookup table and the improved color lookup table is simple and is performed only once for a fixed scene. Real-time fusion and natural colorization of infrared and visual images are realized by this system. The experimental results show that the color-transferred images have a natural color perception to human eyes and can highlight the targets effectively with clear background details. Human observers with this system are able to interpret the image better and faster, thereby improving situational awareness and reducing target detection time.
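A statistics-based color lookup table can be illustrated with a per-channel linear transfer computed once per fixed scene; the mean/std matching form below is a common color-transfer choice assumed for illustration, not necessarily the paper's exact mapping:

```python
# A per-channel 256-entry lookup table that matches source-channel statistics
# to the reference image (mean/std matching). This linear form is a common
# color-transfer choice assumed for illustration; the table is computed once
# per fixed scene, so per-pixel application is just an array lookup.
def build_transfer_lut(mu_src, sd_src, mu_ref, sd_ref):
    lut = []
    for v in range(256):
        t = (v - mu_src) * sd_ref / sd_src + mu_ref
        lut.append(min(255, max(0, int(round(t)))))
    return lut
```

Applying `lut[pixel]` replaces the per-pixel statistics computation, which is what makes real-time operation on a DSP feasible.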

  15. A coarse-to-fine approach for medical hyperspectral image classification with sparse representation

    NASA Astrophysics Data System (ADS)

    Chang, Lan; Zhang, Mengmeng; Li, Wei

    2017-10-01

    A coarse-to-fine approach with sparse representation is proposed for medical hyperspectral image classification in this work. A segmentation technique with different scales is employed to exploit the edges of the input image, where coarse super-pixel patches provide global classification information while fine ones further provide detail information. Different from common RGB images, a hyperspectral image has multiple bands, allowing the cluster centers to be adjusted with higher precision. After segmentation, each super-pixel is classified by the recently developed sparse representation-based classification (SRC), which assigns a label to the testing samples in one local patch by means of a sparse linear combination of all the training samples. Furthermore, segmentation with multiple scales is employed because a single scale is not suitable for the complicated distribution of medical hyperspectral imagery. Finally, the classification results for different sizes of super-pixels are fused by some fusion strategy, offering at least two benefits: (1) the final result is obviously superior to that of segmentation with a single scale, and (2) the fusion process significantly simplifies the choice of scales. Experimental results using real medical hyperspectral images demonstrate that the proposed method outperforms the state-of-the-art SRC.

  16. Fusion of GEDI, ICESAT2 & NISAR data for above ground biomass mapping in Sonoma County, California

    NASA Astrophysics Data System (ADS)

    Duncanson, L.; Simard, M.; Thomas, N. M.; Neuenschwander, A. L.; Hancock, S.; Armston, J.; Dubayah, R.; Hofton, M. A.; Huang, W.; Tang, H.; Marselis, S.; Fatoyinbo, T.

    2017-12-01

    Several upcoming NASA missions will collect data sensitive to forest structure (GEDI, ICESAT-2 & NISAR). The LiDAR and SAR data collected by these missions will be used in coming years to map forest aboveground biomass at various resolutions. This research focuses on developing and testing multi-sensor data fusion approaches in advance of these missions. Here, we present the first case study of a CMS-16 grant, with results from Sonoma County, California. We simulate lidar and SAR datasets from GEDI, ICESAT-2 and NISAR using airborne discrete-return lidar and UAVSAR data, respectively. GEDI and ICESAT-2 signals are simulated from high-point-density discrete-return lidar that was acquired over the entire county in 2014 through a previous CMS project (Dubayah & Hurtt, CMS-13). NISAR is simulated from L-band UAVSAR data collected in 2014. These simulations are empirically related to 300 field plots of aboveground biomass as well as a 30 m biomass map produced from the 2014 airborne lidar data. We model biomass independently for each simulated mission dataset and then test two fusion methods for county-wide mapping: (1) a pixel-based approach and (2) an object-oriented approach. In the pixel-based approach, GEDI and ICESAT-2 biomass models are calibrated over field plots and applied in orbital simulations for a 2-year period of the GEDI and ICESAT-2 missions. These simulated samples are then used to calibrate UAVSAR data to produce a 0.25 ha map. In the object-oriented approach, the GEDI and ICESAT-2 data are identical to the pixel-based approach, but calibrate image objects of similar L-band backscatter rather than uniform pixels. The results of this research demonstrate the estimated ability of each of these three missions to independently map biomass in a temperate, high-biomass system, as well as the potential improvement expected from combining mission datasets.

  17. Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT.

    PubMed

    Mazaheri, Samaneh; Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah

    2015-01-01

    Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. In spite of numerous attempts at automating ventricle segmentation and tracking in echocardiography, this problem remains challenging due to low-quality images with missing anatomical details or speckle noise and a restricted field of view. This paper presents a fusion method that particularly intends to increase the segment-ability of echocardiography features such as the endocardium and to improve the image contrast. In addition, it tries to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information regarding an integration feature between all the overlapping images by using a combination of principal component analysis and discrete wavelet transform. For evaluation, a comparison has been made between the results of some well-known techniques and the proposed method. Also, different metrics are implemented to evaluate the performance of the proposed algorithm. It has been concluded that the presented pixel-based method based on the integration of PCA and DWT gives the best result for the segment-ability of cardiac ultrasound images and better performance in all metrics.

  18. Stacked sparse autoencoder in hyperspectral data classification using spectral-spatial, higher order statistics and multifractal spectrum features

    NASA Astrophysics Data System (ADS)

    Wan, Xiaoqing; Zhao, Chunhui; Wang, Yanchun; Liu, Wu

    2017-11-01

    This paper proposes a novel classification paradigm for hyperspectral image (HSI) using feature-level fusion and deep learning-based methodologies. Operation is carried out in three main steps. First, during a pre-processing stage, wave atoms are introduced into the bilateral filter to smooth the HSI; this strategy can effectively attenuate noise and restore texture information. Meanwhile, high-quality spectral-spatial features can be extracted from the HSI by taking geometric closeness and photometric similarity among pixels into consideration simultaneously. Second, higher-order statistics techniques are introduced into hyperspectral data classification for the first time to characterize the phase correlations of spectral curves. Third, multifractal spectrum features are extracted to characterize the singularities and self-similarities of spectra shapes. To this end, feature-level fusion is applied to the extracted spectral-spatial features along with the higher-order statistics and multifractal spectrum features. Finally, a stacked sparse autoencoder is utilized to learn more abstract and invariant high-level features from the multiple feature sets, and then a random forest classifier is employed to perform supervised fine-tuning and classification. Experimental results on two real hyperspectral data sets demonstrate that the proposed method outperforms some traditional alternatives.

  19. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    PubMed

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor: an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems that use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems that work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts: one in which the passive stereo vision helps the active vision, and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method, in which image regions between laser patterns are matched pixel-by-pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms work in real applications, the sensor system is implemented on a robotic system and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.

  20. A multi-focus image fusion method via region mosaicking on Laplacian pyramids

    PubMed Central

    Kou, Liang; Zhang, Liguo; Sun, Jianguo; Han, Qilong; Jin, Zilong

    2018-01-01

    In this paper, a method named Region Mosaicking on Laplacian Pyramids (RMLP) is proposed to fuse multi-focus images captured by a microscope. First, the sum-modified-Laplacian is applied to measure the focus of the multi-focus images. Then a density-based region-growing algorithm is utilized to segment the focused region mask of each image. Finally, the mask is decomposed into a mask pyramid to supervise region mosaicking on a Laplacian pyramid. The region-level pyramid keeps more of the original information than the pixel level. The experimental results show that RMLP has the best performance in quantitative comparison with other methods. In addition, RMLP is insensitive to noise and reduces the color distortion of the fused images on two datasets. PMID:29771912
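Region mosaicking on a Laplacian pyramid can be sketched in 1D: decompose both inputs, blend each level under the correspondingly downsampled mask, and collapse the result. The power-of-two signal lengths and the simple 2-tap down/up filters are our simplifications of the usual Gaussian filtering:

```python
# 1D sketch of RMLP: Laplacian pyramids of both inputs are mosaicked level
# by level under a downsampled focus mask, then collapsed. Power-of-two
# signal lengths and the 2-tap down/up filters are our simplifications.
def down(x):
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]

def up(x):
    out = []
    for v in x:
        out += [v, v]
    return out

def laplacian_pyramid(x, levels):
    pyr = []
    for _ in range(levels):
        coarse = down(x)
        pyr.append([a - b for a, b in zip(x, up(coarse))])
        x = coarse
    pyr.append(x)  # low-pass residue
    return pyr

def fuse_rmlp(a, b, mask, levels):
    pa = laplacian_pyramid(a, levels)
    pb = laplacian_pyramid(b, levels)
    fused, m = [], mask
    for la, lb in zip(pa, pb):
        fused.append([mi * va + (1 - mi) * vb
                      for mi, va, vb in zip(m, la, lb)])
        m = down(m)  # the mask pyramid supervises each level
    x = fused[-1]
    for lap in reversed(fused[:-1]):  # collapse the mosaicked pyramid
        x = [u + v for u, v in zip(up(x), lap)]
    return x
```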

  1. Classification of Urban Aerial Data Based on Pixel Labelling with Deep Convolutional Neural Networks and Logistic Regression

    NASA Astrophysics Data System (ADS)

    Yao, W.; Poleswki, P.; Krzystek, P.

    2016-06-01

    The recent success of deep convolutional neural networks (CNN) in a large number of applications can be attributed to large amounts of available training data and increasing computing power. In this paper, a semantic pixel labelling scheme for urban areas using a multi-resolution CNN and hand-crafted spatial-spectral features of airborne remotely sensed data is presented. Both CNN and hand-crafted features are applied to image/DSM patches to produce per-pixel class probabilities with an L1-norm regularized logistic regression classifier. Evidence theory infers a degree of belief for pixel labelling from the different sources to smooth regions, handling the conflicts present in both classifiers while reducing the uncertainty. The aerial data used in this study were provided by ISPRS as benchmark datasets for 2D semantic labelling tasks in urban areas, and consist of two data sources: LiDAR and a color infrared camera. The test sites are parts of a city in Germany which is assumed to consist of typical object classes including impervious surfaces, trees, buildings, low vegetation, vehicles and clutter. The evaluation is based on the computation of pixel-based confusion matrices by random sampling. The performance of the strategy with respect to scene characteristics and method combination strategies is analyzed and discussed. The competitive classification accuracy can be explained not only by the nature of the input data sources (e.g., the above-ground height of the nDSM highlights the vertical dimension of houses, trees and even cars, and the near-infrared spectrum indicates vegetation), but is also attributed to decision-level fusion of the CNN's texture-based approach with multichannel spatial-spectral hand-crafted features based on the evidence combination theory.

  2. Our solution for fusion of simultaneously acquired whole-body scintigrams and optical images, as a useful tool in clinical practice in patients with differentiated thyroid carcinomas after radioiodine therapy.

    PubMed

    Matovic, Milovan; Jankovic, Milica; Barjaktarovic, Marko; Jeremic, Marija

    2017-01-01

    After radioiodine therapy of differentiated thyroid cancer (DTC) patients, whole-body scintigraphy (WBS) is a standard procedure before releasing the patient from the hospital. A common problem is the precise localization of regions where the iodine-avid tissue is located; sometimes it is practically impossible to perform precise topographic localization of such regions. To address this problem, we have developed a low-cost Vision-Fusion system for web-camera image acquisition simultaneous with routine scintigraphic whole-body acquisition, including an algorithm for fusion of the images given by both cameras. For image acquisition in the gamma part of the spectrum we used an e.cam dual-head gamma camera (Siemens, Erlangen, Germany) in WBS modality, with a matrix size of 256×1024 pixels and a bed speed of 6 cm/min, equipped with a high-energy collimator. For optical image acquisition in the visible part of the spectrum we used a web camera, model C905 (Logitech, USA), with Carl Zeiss® optics, a native resolution of 1600×1200 pixels, a 34° field of view, and 30 g weight, with the autofocus option turned "off" and auto white balance turned "on". The web camera is connected to the upper head of the gamma camera (GC) by a holder made of a lightweight aluminum rod and a plexiglas adapter. Our own Vision-Fusion software for image acquisition and coregistration was developed using the NI LabVIEW programming environment 2015 (National Instruments, Texas, USA) and two additional LabVIEW modules: NI Vision Acquisition Software (VAS) and NI Vision Development Module (VDM). The Vision Acquisition Software enables communication and control between the laptop computer and the web camera. The Vision Development Module is an image processing library used for image preprocessing and fusion. The software starts the web-camera image acquisition before starting image acquisition on the GC and stops it when the GC completes the acquisition. 
The web camera is in continuous acquisition mode with a frame rate f that depends on the speed v of the patient bed movement (f = v/Δcm, where Δcm is a displacement step that can be changed in the Settings option of the Vision-Fusion software; by default, Δcm is set to 1 cm, corresponding to Δp = 15 pixels). All images captured while the patient's bed is moving are processed. Movement of the patient's bed is checked using cross-correlation of two successive images. After each image is captured, the algorithm extracts the central region of interest (ROI) of the image, with the same width as the captured image (1600 pixels) and a height equal to the displacement Δp in pixels. All extracted central ROIs are placed next to each other in the overall whole-body image. Stacking of narrow central ROIs introduces negligible distortion in the overall whole-body image. The first step for fusion of the scintigram and the optical image was determination of the spatial transformation between them. We performed an experiment with two markers (point radioactivity sources of 99mTc pertechnetate, 1 MBq) visible in both images (WBS and optical) to find the transformation of coordinates between the images. The distance between the point markers is used for spatial coregistration of the gamma and optical images. At the end of the coregistration process, the gamma image is rescaled in the spatial domain and added to the optical image (green or red channel, with amplification changeable from the user interface). We tested our system on 10 patients with DTC who received radioiodine therapy (8 women and 2 men, with an average age of 50.10±12.26 years). Five patients received 5.55 GBq, three 3.70 GBq and two 1.85 GBq. Whole-body scintigraphy and optical image acquisition were performed 72 hours after administration of the radioiodine therapy.
Based on our first results during clinical testing, we can conclude that our system can improve the diagnostic capability of whole body scintigraphy to detect thyroid remnant tissue in patients with DTC after radioiodine therapy.
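The acquisition described above ties the web-camera frame rate to the bed speed (f = v/Δcm) and stitches narrow central strips into the whole-body optical image. A minimal sketch of those two steps, assuming the abstract's 1600-pixel frame width and 15-pixel strip default (function names are hypothetical, not from the Vision-Fusion software):

```python
import numpy as np

def frame_rate(bed_speed_cm_s, step_cm=1.0):
    """Web-camera frame rate f = v / delta_cm (frames per second)."""
    return bed_speed_cm_s / step_cm

def stitch_whole_body(frames, strip_px=15):
    """Extract the central horizontal strip of each frame and stack the
    strips top-to-bottom into one whole-body image."""
    strips = []
    for frame in frames:
        h = frame.shape[0]
        top = h // 2 - strip_px // 2
        strips.append(frame[top:top + strip_px, :])
    return np.vstack(strips)
```

For the abstract's bed speed of 6 cm/min (0.1 cm/s) and the 1 cm default step, this gives a frame rate of 0.1 Hz.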

  3. Supervised classification of aerial imagery and multi-source data fusion for flood assessment

    NASA Astrophysics Data System (ADS)

    Sava, E.; Harding, L.; Cervone, G.

    2015-12-01

    Floods are among the most devastating natural hazards, and the ability to produce an accurate and timely flood assessment before, during, and after an event is critical for mitigation and response. Remote sensing technologies have become the de facto approach for observing the Earth and its environment. However, satellite remote sensing data are not always available. For these reasons, it is crucial to develop new techniques to produce flood assessments during and after an event. Recent advancements in techniques for fusing remote sensing data with near-real-time heterogeneous datasets have allowed emergency responders to more efficiently extract increasingly precise and relevant knowledge from the available information. This research presents a fusion technique using satellite remote sensing imagery coupled with non-authoritative data such as Civil Air Patrol (CAP) imagery and tweets. A new computational methodology is proposed, based on machine learning algorithms, to automatically identify water pixels in CAP imagery. Specifically, wavelet transformations are paired with multiple classifiers, run in parallel, to build models discriminating water and non-water regions. The learned classification models are first tested against a set of control cases, and then used to automatically classify each image separately. A measure of uncertainty is computed for each pixel in an image, proportional to the number of models classifying the pixel as water. Geo-tagged tweets are continuously harvested, stored in a MongoDB database, and queried in real time. They are fused with the classified CAP data and with satellite remote sensing derived flood extent results to produce comprehensive flood assessment maps. The final maps are then compared with FEMA-generated flood extents to assess their accuracy. The proposed methodology is applied to two test cases: the 2013 floods in Boulder, CO, and the 2015 floods in Texas.
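The per-pixel uncertainty measure described above is proportional to the number of ensemble models voting "water" at that pixel. A minimal sketch of that voting step (the function name and the 0-1 scaling are illustrative assumptions):

```python
import numpy as np

def water_uncertainty(model_masks):
    """Per-pixel water score: the fraction of ensemble models labeling the
    pixel as water (1 = unanimous water, 0 = unanimous non-water).
    `model_masks` is a list of binary arrays, one per classifier."""
    stack = np.stack(model_masks).astype(float)
    return stack.mean(axis=0)
```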

  4. Computation Methods for NASA Data-streams for Agricultural Efficiency Applications

    NASA Astrophysics Data System (ADS)

    Shrestha, B.; O'Hara, C. G.; Mali, P.

    2007-12-01

    Temporal Map Algebra (TMA) is a novel technique for analyzing time series of satellite imagery using simple algebraic operators, treating the image time series as a three-dimensional dataset in which two dimensions encode planimetric position on the Earth's surface and the third dimension encodes time. Spatio-temporal analytical processing methods such as TMA, which use moderate spatial resolution satellite imagery with high temporal resolution to create multi-temporal composites, are both data intensive and computationally intensive. TMA analysis for multi-temporal composites provides dramatically enhanced usefulness and will yield previously unavailable capabilities to user communities, provided that deployment is coupled with significant High Performance Computing (HPC) capabilities and that interfaces are designed to deliver the full potential of these new technological developments. In this research, cross-platform data fusion and adaptive filtering using TMA were employed to create highly useful daily datasets and cloud-free, high-temporal-resolution vegetation index (VI) composites with enhanced information content for vegetation and bio-productivity monitoring, surveillance, and modeling. Fusion of Normalized Difference Vegetation Index (NDVI) data created from Aqua and Terra Moderate Resolution Imaging Spectroradiometer (MODIS) surface-reflectance data (MOD09) enables the creation of daily composites of immense value to a broad spectrum of global and national applications; these products are also highly desired by natural resources agencies such as USDA/FAS/PECAD. Utilizing data streams collected by similar sensors on different platforms that transit the same areas at slightly different times of day offers the opportunity to develop fused data products with enhanced cloud-free coverage and reduced noise.
Establishing a Fusion Quality Confidence Code (FQCC) provides a metadata product that quantifies the method of fusion for a given pixel and enables a relative quality and confidence factor to be established for each daily pixel value. When coupled with metadata that quantify the source sensor, the day and time of acquisition, and the fusion method used for each pixel in the daily product, a wealth of information is available to assist in deriving new data and information products. These newly developed abilities to create highly useful daily datasets imply that temporal composites for a geographic area of interest may be created for user-defined temporal intervals that emphasize a user-designated day of interest. At the GeoResources Institute, Mississippi State University, solutions have been developed for creating custom composites and cross-platform satellite data fusion using TMA, which are useful for National Aeronautics and Space Administration (NASA) Rapid Prototyping Capability (RPC) and Integrated System Solutions (ISS) experiments for agricultural applications.
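A minimal sketch of cross-platform daily compositing with a per-pixel quality code in the spirit of the FQCC (the fill value, the per-pixel maximum rule, and the code values are illustrative assumptions, not the published scheme):

```python
import numpy as np

def fuse_daily_ndvi(terra, aqua, fill=-1.0):
    """Per-pixel maximum-value compositing of Terra and Aqua NDVI grids,
    ignoring cloud/fill pixels marked with `fill`."""
    t = np.where(terra == fill, np.nan, terra)
    a = np.where(aqua == fill, np.nan, aqua)
    fused = np.fmax(t, a)  # fmax keeps the non-NaN operand when one is NaN
    return np.where(np.isnan(fused), fill, fused)

def fusion_quality_code(terra, aqua, fill=-1.0):
    """Illustrative quality code: 3 = both platforms valid, 2 = Terra only,
    1 = Aqua only, 0 = neither."""
    tv = terra != fill
    av = aqua != fill
    return np.where(tv & av, 3, np.where(tv, 2, np.where(av, 1, 0)))
```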

  5. Design of two-DMD based zoom MW and LW dual-band IRSP using pixel fusion

    NASA Astrophysics Data System (ADS)

    Pan, Yue; Xu, Xiping; Qiao, Yang

    2018-06-01

    To test the anti-jamming ability of a mid-wave infrared (MWIR) and long-wave infrared (LWIR) dual-band imaging system, a zoom mid-wave (MW) and long-wave (LW) dual-band infrared scene projector (IRSP) based on two digital micro-mirror devices (DMDs) was designed using a projection method of pixel fusion. Two illumination systems, which illuminate the two DMDs directly with Köhler telecentric beams, were combined with the projection system in a spatial layout. The distances of the projection entrance pupil and the illumination exit pupil were also analyzed separately. MWIR and LWIR virtual scenes were generated by the two DMDs and fused by a dichroic beam combiner (DBC), resulting in two radiation distributions in the projected image. The optical performance of each component was evaluated by ray tracing simulations. Apparent temperature and image contrast were demonstrated by imaging experiments. Based on the test and simulation results, the aberrations of the optical system were well corrected, and the quality of the projected image meets the test requirements.

  6. Mk x Nk gated CMOS imager

    NASA Astrophysics Data System (ADS)

    Janesick, James; Elliott, Tom; Andrews, James; Tower, John; Bell, Perry; Teruya, Alan; Kimbrough, Joe; Bishop, Jeanne

    2014-09-01

    Our paper describes a recently designed Mk x Nk x 10 um pixel CMOS gated imager intended to be first employed at the LLNL National Ignition Facility (NIF). Fabrication involves stitching M x N 1024x1024x10 um pixel blocks together into a monolithic imager (where M = 1, 2, ..., 10 and N = 1, 2, ..., 10). The imager has been designed for either NMOS or PMOS pixel fabrication using a base 0.18 um/3.3 V CMOS process. Details of the design are discussed, with emphasis on a custom global reset feature that erases unwanted charge from the imager in ~1 us during the fusion ignition process, followed by an exposure to obtain useful data. Performance data generated by prototype imagers designed similarly to the Mk x Nk sensor are presented.

  7. Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT

    PubMed Central

    Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah

    2015-01-01

    Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. In spite of numerous attempts toward automated ventricle segmentation and tracking in echocardiography, this problem remains challenging due to low quality images with missing anatomical details or speckle noise and a restricted field of view. This paper presents a fusion method which particularly aims to increase the segment-ability of echocardiography features, such as the endocardium, and to improve image contrast. In addition, it tries to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information according to an integration feature across all the overlapping images, using a combination of principal component analysis and discrete wavelet transform. For evaluation, a comparison has been made between the results of several well-known techniques and the proposed method, and different metrics are applied to evaluate the performance of the proposed algorithm. It is concluded that the presented pixel-based method based on the integration of PCA and DWT gives the best result for the segment-ability of cardiac ultrasound images and better performance on all metrics. PMID:26089965
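The PCA side of such a pixel-based scheme can be sketched as follows: the first principal component of the two overlapping images' joint covariance supplies per-image fusion weights. This is a common PCA-fusion recipe, offered as an illustration rather than the authors' exact integration with the DWT:

```python
import numpy as np

def pca_fusion_weights(img_a, img_b):
    """Normalized weights from the first principal component of the two
    images' joint covariance matrix."""
    data = np.stack([img_a.ravel(), img_b.ravel()])
    cov = np.cov(data)
    eigvals, eigvecs = np.linalg.eigh(cov)
    pc1 = np.abs(eigvecs[:, np.argmax(eigvals)])
    return pc1 / pc1.sum()

def pca_fuse(img_a, img_b):
    """Weighted per-pixel combination of the two overlapping images."""
    w = pca_fusion_weights(img_a, img_b)
    return w[0] * img_a + w[1] * img_b
```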

  8. Autofocus and fusion using nonlinear correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cabazos-Marín, Alma Rocío; Álvarez-Borrego, Josué, E-mail: josue@cicese.mx; Coronel-Beltrán, Ángel

    2014-10-06

    In this work, a new algorithm is proposed for autofocusing and fusion of images captured by a microscope's CCD. The proposed autofocus algorithm implements spiral scanning of each image in the stack f(x, y)_w to define the vector V_w. The spectrum FV_w of each vector is calculated by fast Fourier transform. The best in-focus image is determined by a focus measure obtained from the nonlinear correlation of the reference vector FV_1 with each of the other FV_w vectors in the stack. In addition, fusion is performed with a subset of selected images f(x, y)_SBF, namely the images with the best focus measurement. Fusion creates a new improved image f(x, y)_F by selecting the pixels of highest intensity.
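The pipeline above can be sketched minimally. Two simplifications are assumed and flagged here: the image is flattened in row-major order instead of the paper's spiral scan, and a linear normalized correlation stands in for the paper's nonlinear correlation:

```python
import numpy as np

def spectrum_vector(image):
    """Flatten the image to a vector (row-major here; the paper scans in a
    spiral) and return the magnitude of its FFT."""
    v = image.ravel().astype(float)
    return np.abs(np.fft.fft(v))

def focus_measures(stack, ref_index=0):
    """Correlate each image's spectrum vector with the reference image's;
    a higher correlation indicates focus closer to the reference."""
    ref = spectrum_vector(stack[ref_index])
    out = []
    for img in stack:
        s = spectrum_vector(img)
        out.append(float(np.dot(ref, s) / (np.linalg.norm(ref) * np.linalg.norm(s))))
    return out

def fuse_max_intensity(best_focused):
    """Fuse the selected best-focused images by per-pixel maximum intensity."""
    return np.max(np.stack(best_focused), axis=0)
```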

  9. The Nonsubsampled Contourlet Transform Based Statistical Medical Image Fusion Using Generalized Gaussian Density

    PubMed Central

    Yang, Guocheng; Li, Meiling; Chen, Leiting; Yu, Jie

    2015-01-01

    We propose a novel medical image fusion scheme based on the statistical dependencies between coefficients in the nonsubsampled contourlet transform (NSCT) domain, in which the probability density function of the NSCT coefficients is concisely fitted using a generalized Gaussian density (GGD), and the similarity of two subbands is computed accurately as the Jensen-Shannon divergence of the two GGDs. To preserve more useful information from the source images, new fusion rules are developed to combine subbands at the various frequencies: the low frequency subbands are fused using two activity measures based on the regional standard deviation and Shannon entropy, while the high frequency subbands are merged via weight maps determined by the saliency values of pixels. The experimental results demonstrate that the proposed method significantly outperforms conventional NSCT-based medical image fusion approaches in both visual perception and evaluation indices. PMID:26557871
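The subband similarity measure rests on the Jensen-Shannon divergence. A sketch over discrete distributions (the paper evaluates it between two fitted GGDs, which is not reproduced here; base-2 logarithms bound the result by 1):

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence of two discrete distributions:
    JS(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), with m = (p + q) / 2."""
    p = np.asarray(p, float); p = p / p.sum()
    q = np.asarray(q, float); q = q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0  # skip zero-probability bins (0 * log 0 = 0)
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```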

  10. Harmonization of Multiple Forest Disturbance Data to Create a 1986-2011 Database for the Conterminous United States

    NASA Astrophysics Data System (ADS)

    Soulard, C. E.; Acevedo, W.; Yang, Z.; Cohen, W. B.; Stehman, S. V.; Taylor, J. L.

    2015-12-01

    A wide range of spatial forest disturbance data exists for the conterminous United States, yet inconsistencies between map products arise because of differing programmatic objectives and methodologies. Researchers on the Land Change Research Project (LCRP) are working to assess spatial agreement, characterize uncertainties, and resolve discrepancies between these national-level forest disturbance datasets. Disturbance maps from the Global Forest Change (GFC), Landfire Vegetation Disturbance (LVD), National Land Cover Database (NLCD), Vegetation Change Tracker (VCT), Web-enabled Landsat Data (WELD), and Monitoring Trends in Burn Severity (MTBS) products were harmonized using a pixel-based data fusion process. The harmonization process reconciled forest harvesting, forest fire, and remaining forest disturbance across four intervals (1986-1992, 1992-2001, 2001-2006, and 2006-2011) by relying on convergence of evidence across all datasets available for each interval. Pixels with high agreement across datasets were retained, while moderate-to-low agreement pixels were visually assessed and either manually edited using reference imagery or discarded from the final disturbance map(s). National results show that annual rates of forest harvest and overall fire have increased over the past 25 years. Overall, this study shows that leveraging the best elements of readily available data improves forest loss monitoring relative to using a single dataset to monitor forest change, particularly by reducing commission errors.
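The convergence-of-evidence step amounts to per-pixel voting across the available disturbance maps. A minimal sketch (the vote threshold separating "high agreement" from "moderate-to-low" is an illustrative parameter, not a value from the study):

```python
import numpy as np

def agreement_mask(disturbance_masks, min_votes):
    """Retain a pixel as disturbed when at least `min_votes` of the
    available binary disturbance maps flag it."""
    votes = np.sum(np.stack(disturbance_masks).astype(int), axis=0)
    return votes >= min_votes
```

Pixels failing the threshold would then go to the visual-assessment stage described above rather than being dropped outright.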

  11. High-Speed Incoming Infrared Target Detection by Fusion of Spatial and Temporal Detectors

    PubMed Central

    Kim, Sungho

    2015-01-01

    This paper presents a method for detecting high-speed incoming targets by the fusion of spatial and temporal detectors, to achieve a high detection rate for an active protection system (APS). The incoming targets have different image velocities according to the target-camera geometry; therefore, single-detector approaches, such as a 1D temporal filter, a 2D spatial filter, or a 3D matched filter, cannot provide a high detection rate with moderate false alarms. The target speed variation was analyzed according to the incoming angle and target velocity: the speed of a distant target at firing time is almost stationary and increases slowly. Speed-varying targets are detected stably by fusing the spatial and temporal filters. The stationary target detector is activated by a near-zero temporal contrast filter (TCF) response and identifies targets using a spatial filter called the modified mean subtraction filter (M-MSF). A small-motion (sub-pixel velocity) target detector is activated by a small TCF value and finds targets using the same spatial filter. A large-motion (pixel-velocity) target detector works when the TCF value is high. The final target detection is determined by fusing the three detectors based on threat priority. Experimental results on various target sequences show that the proposed fusion-based target detector produces the highest detection rate with an acceptable false alarm rate. PMID:25815448
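The three-way arbitration on the temporal contrast filter response can be sketched as follows (the threshold values are illustrative placeholders, not the paper's tuned parameters):

```python
def select_detector(tcf, eps=0.05, small=0.3):
    """Route a pixel to a detector by its temporal contrast filter (TCF)
    response: near zero -> stationary detector (M-MSF spatial filter),
    small -> sub-pixel motion detector, high -> pixel-velocity detector."""
    a = abs(tcf)
    if a < eps:
        return "stationary"
    if a < small:
        return "sub-pixel"
    return "pixel-velocity"
```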

  12. Toward Gleasonian landscape ecology: From communities to species, from patches to pixels

    Treesearch

    Samuel A. Cushman; Jeffrey S. Evans; Kevin McGarigal; Joseph M. Kiesecker

    2010-01-01

    The fusion of individualistic community ecology with the Hutchinsonian niche concept enabled a broad integration of ecological theory, spanning all the way from the niche characteristics of individual species, to the composition, structure, and dynamics of ecological communities. Landscape ecology has been variously described as the study of the structure, function,...

  13. Image Edge Extraction via Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominquez, Jesus A. (Inventor); Klinko, Steve (Inventor)

    2008-01-01

    A computer-based technique for detecting edges in gray level digital images employs fuzzy reasoning to analyze whether each pixel in an image is likely on an edge. The image is analyzed on a pixel-by-pixel basis by analyzing gradient levels of pixels in a square window surrounding the pixel being analyzed. The edge path passing through the pixel with the greatest intensity gradient is used as input to a fuzzy membership function, which employs fuzzy singletons and inference rules to assign a new gray level value to the pixel, related to the pixel's degree of edginess.

  14. Innovative monolithic detector for tri-spectral (THz, IR, Vis) imaging

    NASA Astrophysics Data System (ADS)

    Pocas, S.; Perenzoni, M.; Massari, N.; Simoens, F.; Meilhan, J.; Rabaud, W.; Martin, S.; Delplanque, B.; Imperinetti, P.; Goudon, V.; Vialle, C.; Arnaud, A.

    2012-10-01

    Fusion of multispectral images has been explored for many years for security applications and is used in a number of commercial products. CEA-Leti and FBK have developed an innovative sensor technology that monolithically gathers, on a single focal plane array, pixels sensitive to radiation in three spectral ranges: terahertz (THz), infrared (IR), and visible. This technology has many assets for the volume market: compactness, full CMOS compatibility on 200 mm wafers, advanced functions in the CMOS read-out integrated circuit (ROIC), and operation at room temperature. The ROIC houses visible APS diodes, while IR and THz detection are carried out by microbolometers collectively processed above the CMOS substrate. Standard IR bolometric microbridges (160x160 pixels) surround antenna-coupled bolometers (32x32 pixels) built on a resonant cavity customized for THz sensing. This paper presents the different technological challenges overcome in this development and the first electrical and sensitivity experimental tests.

  15. An Embodied Multi-Sensor Fusion Approach to Visual Motion Estimation Using Unsupervised Deep Networks.

    PubMed

    Shamwell, E Jared; Nothwang, William D; Perlis, Donald

    2018-05-04

    Aimed at improving size, weight, and power (SWaP)-constrained robotic vision-aided state estimation, we describe our unsupervised, deep convolutional-deconvolutional sensor fusion network, Multi-Hypothesis DeepEfference (MHDE). MHDE learns to intelligently combine noisy heterogeneous sensor data to predict several probable hypotheses for the dense, pixel-level correspondence between a source image and an unseen target image. We show how our multi-hypothesis formulation provides increased robustness against dynamic, heteroscedastic sensor and motion noise by computing hypothesis image mappings and predictions at 76-357 Hz depending on the number of hypotheses being generated. MHDE fuses noisy, heterogeneous sensory inputs using two parallel, inter-connected architectural pathways and n (1-20 in this work) multi-hypothesis generating sub-pathways to produce n global correspondence estimates between a source and a target image. We evaluated MHDE on the KITTI Odometry dataset and benchmarked it against the vision-only DeepMatching and Deformable Spatial Pyramids algorithms and were able to demonstrate a significant runtime decrease and a performance increase compared to the next-best performing method.

  16. A scalable multi-DLP pico-projector system for virtual reality

    NASA Astrophysics Data System (ADS)

    Teubl, F.; Kurashima, C.; Cabral, M.; Fels, S.; Lopes, R.; Zuffo, M.

    2014-03-01

    Virtual Reality (VR) environments can offer immersion, interaction, and realistic images to users. A VR system is usually expensive and requires special equipment in a complex setup. One approach is to use Commodity-Off-The-Shelf (COTS) desktop multi-projectors, calibrated manually or with a camera, to reduce the cost of VR systems without a significant decrease in the visual experience. Additionally, for non-planar screen shapes, special optics such as lenses and mirrors are required, increasing costs. We propose a low-cost, scalable, flexible, and mobile solution that allows building complex VR systems that project images onto a variety of arbitrary surfaces, such as planar, cylindrical, and spherical surfaces. This approach combines three key aspects: 1) clusters of DLP pico-projectors to provide homogeneous and continuous pixel density upon arbitrary surfaces without additional optics; 2) LED lighting technology for energy efficiency and light control; 3) a smaller physical footprint for flexibility. The proposed system is therefore scalable in terms of pixel density, energy, and physical space. To achieve these goals, we developed a multi-projector software library called FastFusion that calibrates all projectors into a uniform image presented to viewers. FastFusion uses a camera to automatically calibrate the geometric and photometric correction of images projected from ad-hoc positioned projectors; the only requirement is a few pixels of overlap among them. We present results with eight pico-projectors of 7 lumens (LED) each, based on the DLP 0.17 HVGA chipset.

  17. A Hierarchical Object-oriented Urban Land Cover Classification Using WorldView-2 Imagery and Airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    Wu, M. F.; Sun, Z. C.; Yang, B.; Yu, S. S.

    2016-11-01

    In order to reduce the “salt and pepper” effect in pixel-based urban land cover classification and expand the application of multi-source data fusion in the field of urban remote sensing, WorldView-2 imagery and airborne Light Detection and Ranging (LiDAR) data were used to improve the classification of urban land cover. An object-oriented hierarchical classification approach is proposed in our study. The processing of the proposed method consists of two hierarchies. (1) In the first hierarchy, the LiDAR Normalized Digital Surface Model (nDSM) image was segmented into objects, and NDVI, Coastal Blue, and nDSM thresholds were set for extracting building objects. (2) In the second hierarchy, after removing building objects, a WorldView-2 fused image was obtained by Haze-ratio-based (HR) fusion and segmented, and an SVM classifier was applied to generate road/parking lot, vegetation, and bare soil objects. (3) Trees and grasslands were then split based on an nDSM threshold (2.4 meters). The results showed that, compared with the pixel-based and non-hierarchical object-oriented approaches, the proposed method provided better urban land cover classification performance, with the overall accuracy (OA) and overall kappa (OK) improving to 92.75% and 0.90, respectively. Furthermore, the proposed method reduced the “salt and pepper” effect of pixel-based classification, improved the extraction accuracy of buildings based on LiDAR nDSM image segmentation, and reduced the confusion between trees and grasslands through the nDSM threshold.

  18. Pulse-coupled neural network sensor fusion

    NASA Astrophysics Data System (ADS)

    Johnson, John L.; Schamschula, Marius P.; Inguva, Ramarao; Caulfield, H. John

    1998-03-01

    Perception is assisted by sensed impressions of the outside world but not determined by them. The primary organ of perception is the brain and, in particular, the cortex. With that in mind, we have sought to see how a computer-modeled cortex--the PCNN or Pulse Coupled Neural Network--performs as a sensor fusing element. In essence, the PCNN is composed of an array of integrate-and-fire neurons, with one neuron for each input pixel. In such a system, the neurons corresponding to bright pixels reach firing threshold faster than the neurons corresponding to duller pixels; thus, firing rate is proportional to brightness. In PCNNs, when a neuron fires it sends some of the resulting signal to its neighbors. This linking can cause a near-threshold neuron to fire earlier than it would have otherwise, which leads to synchronization of the pulses across large regions of the image. We can simplify the 3D PCNN output by integrating out the time dimension: over a long enough time interval, the resulting 2D (x,y) pattern IS the input image--the PCNN has taken it apart and put it back together again. The shorter-term time integrals are interesting in themselves and are commented upon in the paper. The main thrust of this paper is the use of multiple PCNNs, mutually coupled in various ways, to assemble a single 2D pattern or fused image. Results of experiments on PCNN image fusion and an evaluation of its advantages are our primary objectives.
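The brightness-to-firing-rate behavior described above can be illustrated with a minimal integrate-and-fire model. This sketch omits the linking term entirely, so it is a simplification of a full PCNN; the step count and threshold are illustrative:

```python
import numpy as np

def fire_counts(image, steps=8, threshold=1.0):
    """Each neuron integrates its pixel's brightness per step and fires when
    the accumulator crosses the threshold; the firing count over `steps` is
    therefore proportional to brightness."""
    acc = np.zeros_like(image, dtype=float)
    counts = np.zeros(image.shape, dtype=int)
    for _ in range(steps):
        acc += image
        fired = acc >= threshold
        counts += fired
        acc[fired] -= threshold  # reset by subtraction after each pulse
    return counts
```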

  19. Quantitative image fusion in infrared radiometry

    NASA Astrophysics Data System (ADS)

    Romm, Iliya; Cukurel, Beni

    2018-05-01

    Towards high-accuracy infrared radiance estimates, measurement practices and processing techniques aimed to achieve quantitative image fusion using a set of multi-exposure images of a static scene are reviewed. The conventional non-uniformity correction technique is extended, as the original is incompatible with quantitative fusion. Recognizing the inherent limitations of even the extended non-uniformity correction, an alternative measurement methodology, which relies on estimates of the detector bias using self-calibration, is developed. Combining data from multi-exposure images, two novel image fusion techniques that ultimately provide high tonal fidelity of a photoquantity are considered: ‘subtract-then-fuse’, which conducts image subtraction in the camera output domain and partially negates the bias frame contribution common to both the dark and scene frames; and ‘fuse-then-subtract’, which reconstructs the bias frame explicitly and conducts image fusion independently for the dark and the scene frames, followed by subtraction in the photoquantity domain. The performances of the different techniques are evaluated for various synthetic and experimental data, identifying the factors contributing to potential degradation of the image quality. The findings reflect the superiority of the ‘fuse-then-subtract’ approach, conducting image fusion via per-pixel nonlinear weighted least squares optimization.
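The 'fuse-then-subtract' approach ends with per-pixel weighted least squares over the multi-exposure stack. Under a simple linear model y_i = q * t_i for exposure time t_i (the paper's camera model and weighting are richer than this), the closed-form estimate can be sketched as:

```python
import numpy as np

def wls_photoquantity(frames, exposure_times, weights=None):
    """Per-pixel weighted least squares: with y_i = q * t_i, the estimate is
    q = sum(w_i * t_i * y_i) / sum(w_i * t_i**2)."""
    frames = np.stack(frames).astype(float)
    t = np.asarray(exposure_times, float)[:, None, None]
    w = np.ones_like(frames) if weights is None else np.stack(weights)
    return np.sum(w * t * frames, axis=0) / np.sum(w * t * t, axis=0)
```

With noise-free synthetic frames the estimator recovers the photoquantity exactly; in practice the weights would down-rank saturated or noisy pixels.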

  20. New false color mapping for image fusion

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; Walraven, Jan

    1996-03-01

    A pixel-based color-mapping algorithm is presented that produces a fused false color rendering of two gray-level images representing different sensor modalities. The resulting images have a higher information content than each of the original images and retain sensor-specific image information. The unique component of each image modality is enhanced in the resulting fused color image representation. First, the common component of the two original input images is determined. Second, the common component is subtracted from the original images to obtain the unique component of each image. Third, the unique component of each image modality is subtracted from the image of the other modality. This step serves to enhance the representation of sensor-specific details in the final fused result. Finally, a fused color image is produced by displaying the images resulting from the last step through, respectively, the red and green channels of a color display. The method is applied to fuse thermal and visual images. The results show that the color mapping enhances the visibility of certain details and preserves the specificity of the sensor information. The fused images also have a fairly natural appearance. The fusion scheme involves only operations on corresponding pixels. The resolution of a fused image is therefore directly related to the resolution of the input images. Before fusing, the contrast of the images can be enhanced and their noise can be reduced by standard image-processing techniques. The color mapping algorithm is computationally simple. This implies that the investigated approaches can eventually be applied in real time and that the hardware needed is not too complicated or too voluminous (an important consideration when it has to fit in an airplane, for instance).
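The four mapping steps can be sketched as follows. The choice of the per-pixel minimum as the common component is an assumption for illustration, and the blue channel is simply zeroed:

```python
import numpy as np

def false_color_fuse(a, b):
    """Pixel-based false-color fusion: common component, per-image unique
    components, cross-subtraction, then display via R and G channels."""
    common = np.minimum(a, b)          # step 1: common component (assumed min)
    ua, ub = a - common, b - common    # step 2: unique components
    r = np.clip(a - ub, 0, None)       # step 3: cross-subtract, clamp at 0
    g = np.clip(b - ua, 0, None)
    # step 4: map enhanced images to the red and green display channels
    return np.stack([r, g, np.zeros_like(r)], axis=-1)
```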

  1. High-Order Model and Dynamic Filtering for Frame Rate Up-Conversion.

    PubMed

    Bao, Wenbo; Zhang, Xiaoyun; Chen, Li; Ding, Lianghui; Gao, Zhiyong

    2018-08-01

    This paper proposes a novel frame rate up-conversion method through high-order model and dynamic filtering (HOMDF) for video pixels. Unlike the constant brightness and linear motion assumptions in traditional methods, the intensity and position of the video pixels are both modeled with high-order polynomials in terms of time. Then, the key problem of our method is to estimate the polynomial coefficients that represent the pixel's intensity variation, velocity, and acceleration. We propose to solve it with two energy objectives: one minimizes the auto-regressive prediction error of intensity variation by its past samples, and the other minimizes the video frame's reconstruction error along the motion trajectory. To efficiently address the optimization problem for these coefficients, we propose a dynamic filtering solution inspired by video's temporal coherence. The optimal estimation of these coefficients is reformulated into a dynamic fusion of the prior estimate from the pixel's temporal predecessor and the maximum likelihood estimate from the current new observation. Finally, frame rate up-conversion is implemented using motion-compensated interpolation by pixel-wise intensity variation and motion trajectory. Benefiting from the advanced model and dynamic filtering, the interpolated frame has much better visual quality. Extensive experiments on natural and synthesized videos demonstrate the superiority of HOMDF over the state-of-the-art methods in both subjective and objective comparisons.

  2. Hyperspectral Imagery Throughput and Fusion Evaluation over Compression and Interpolation

    DTIC Science & Technology

    2008-07-01

    PSNR = 10 log10(MAX^2 / MSE)  (17)

    The PSNR values and compression ratios are shown in Table 1, and a plot of PSNR against the bits per pixel (bpp) is shown in Figure 13:

    PSNR (dB)   Compression Ratio   bpp
    59.3        2.9:1               2.76
    46.0        9.2:1               0.87
    43.2        14.5:1              0.55
    40.8        25.0:1              0.32
    38.7        34.6:1              0.23
    35.5        62.1:1              0.13

    Figure 11. PSNR vs. bits per pixel (bpp). The 3D DCT compression yielded better results than the baseline JPEG.
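The PSNR definition and the table's ratio-to-bpp relationship can be sketched as follows. The 8-bit source depth is inferred from the table itself, where bpp is approximately 8 divided by the compression ratio:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """PSNR = 10 * log10(peak^2 / MSE) between a reference and a test image."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def bits_per_pixel(compression_ratio, source_bpp=8.0):
    """bpp of the compressed stream for a given compression ratio,
    assuming an 8-bit-per-pixel source as the table suggests."""
    return source_bpp / compression_ratio
```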

  3. [An improved low spectral distortion PCA fusion method].

    PubMed

    Peng, Shi; Zhang, Ai-Wu; Li, Han-Lun; Hu, Shao-Xing; Meng, Xian-Gang; Sun, Wei-Dong

    2013-10-01

    Aiming at the spectral distortion produced in the PCA fusion process, this paper proposes an improved low-spectral-distortion PCA fusion method. The method uses the NCUT (normalized cut) image segmentation algorithm to divide a complex hyperspectral remote sensing image into multiple sub-images, increasing the separability of samples and thereby weakening the spectral distortion of traditional PCA fusion. A pixel-similarity weighting matrix and masks are produced using graph theory and clustering theory. These masks cut the hyperspectral image and the high-resolution image into corresponding sub-region objects. Each corresponding pair of sub-region objects from the hyperspectral and high-resolution images is fused using the PCA method, and all sub-regional fusion results are spliced together to produce a new image. In the experiment, Hyperion hyperspectral data and RapidEye data were used. The results show that the proposed method enhances spatial resolution as well as traditional PCA fusion does, while providing greater spectral fidelity.

  4. Multifocus image fusion scheme based on the multiscale curvature in nonsubsampled contourlet transform domain

    NASA Astrophysics Data System (ADS)

    Li, Xiaosong; Li, Huafeng; Yu, Zhengtao; Kong, Yingchun

    2015-07-01

    An efficient multifocus image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain is proposed. Based on the properties of optical imaging and the theory of defocused images, we present a selection principle for the lowpass frequency coefficients and investigate the connection between the low-frequency image and the defocused image. In general, the NSCT decomposes the detail information of an image into bandpass subband coefficients at different scales and in different directions. In order to correctly select the prefused bandpass directional coefficients, we introduce multiscale curvature, which not only inherits the advantages of windows of different sizes but also correctly recognizes the focused pixels in the source images; we then develop a new fusion scheme for the bandpass subband coefficients. The fused image is obtained by the inverse NSCT of the fused coefficients. Several multifocus image fusion methods are compared with the proposed scheme. The experimental results clearly indicate the validity and superiority of the proposed scheme in terms of both visual quality and quantitative evaluation.

  5. Vulnerability of CMOS image sensors in Megajoule Class Laser harsh environment.

    PubMed

    Goiffon, V; Girard, S; Chabane, A; Paillet, P; Magnan, P; Cervantes, P; Martin-Gonthier, P; Baggio, J; Estribeau, M; Bourgade, J-L; Darbon, S; Rousseau, A; Glebov, V Yu; Pien, G; Sangster, T C

    2012-08-27

    CMOS image sensors (CIS) are promising candidates as part of optical imagers for the plasma diagnostics devoted to the study of fusion by inertial confinement. However, the harsh radiative environment of Megajoule Class Lasers threatens the performance of these optical sensors. In this paper, the vulnerability of CIS to the transient and mixed pulsed radiation environment associated with such facilities is investigated during an experiment at the OMEGA facility at the Laboratory for Laser Energetics (LLE), Rochester, NY, USA. The transient and permanent effects of the 14 MeV neutron pulse on CIS are presented. The behavior of the tested CIS shows that active pixel sensors (APS) exhibit better hardness in this harsh environment than a CCD. A first-order extrapolation of the reported results to the higher level of radiation expected for Megajoule Class Laser facilities (Laser Megajoule in France or the National Ignition Facility in the USA) shows that temporarily saturated pixels due to transient neutron-induced single event effects will be the major issue for the development of radiation-tolerant plasma diagnostic instruments, whereas the permanent degradation of the CIS related to displacement damage or total ionizing dose effects could be reduced by applying well-known mitigation techniques.

  6. Forest Stand Segmentation Using Airborne LIDAR Data and Very High Resolution Multispectral Imagery

    NASA Astrophysics Data System (ADS)

    Dechesne, Clément; Mallet, Clément; Le Bris, Arnaud; Gouet, Valérie; Hervieu, Alexandre

    2016-06-01

    Forest stands are the basic units for forest inventory and mapping. Stands are large forested areas (e.g., ≥ 2 ha) of homogeneous tree species composition. The accurate delineation of forest stands is usually performed by human operators through visual analysis of very high resolution (VHR) optical images. This work is highly time-consuming and should be automated for scalability purposes. In this paper, a method based on the fusion of airborne laser scanning data (lidar) and very high resolution multispectral imagery is proposed for automatic forest stand delineation and forest land-cover database updating. The multispectral images give access to the tree species, whereas the 3D lidar point clouds provide geometric information on the trees. Therefore, multi-modal features are computed, both at the pixel and the object levels. The objects are individual trees extracted from the lidar data. A supervised classification is performed at the object level on the computed features in order to coarsely discriminate the existing tree species in the area of interest. The analysis at tree level is particularly relevant since it significantly improves the tree species classification. A probability map is generated through the tree species classification and combined with the pixel-based feature map in an energy framework. The proposed energy is then minimized using a standard graph-cut method (namely QPBO with α-expansion) in order to produce a segmentation map with a controlled level of detail. Comparison with an existing forest land-cover database shows that our method provides satisfactory results both in terms of stand labelling and delineation (matching rates between 94% and 99%).

  7. Pixel-Level Digital-to-Analog Conversion Scheme with Compensation of Thin-Film-Transistor Variations for Compact Integrated Data Drivers of Active Matrix Organic Light Emitting Diodes

    NASA Astrophysics Data System (ADS)

    Kim, Tae-Wook; Park, Sang-Gyu; Choi, Byong-Deok

    2011-03-01

    The previous pixel-level digital-to-analog conversion (DAC) scheme, which implements part of a DAC in the pixel circuit, has turned out to be very efficient for reducing the peripheral area of an integrated data driver fabricated with low-temperature polycrystalline silicon thin-film transistors (LTPS TFTs). However, because LTPS TFTs suffer from random variations in their characteristics, an open issue has been whether the pixel-level DAC is compatible with existing pixel circuits that include compensation schemes for TFT variations and IR drops on supply rails, which are of primary importance for active matrix organic light emitting diode (AMOLED) displays. In this paper, we show that the pixel-level DAC scheme can be successfully used with the previous compensation schemes by giving two examples of voltage- and current-programming pixels. Previous pixel-level DAC schemes require two additional TFTs and one capacitor, but for the newly proposed pixel circuits the overhead is no more than two TFTs, because the already existing capacitor is reused. In addition, a detailed analysis shows that the pixel-level DAC can be expanded to a 4-bit resolution, or be applied together with 1:2 demultiplexing driving, for 6- to 8-in. diagonal XGA AMOLED display panels.

  8. A sea-land segmentation algorithm based on multi-feature fusion for a large-field remote sensing image

    NASA Astrophysics Data System (ADS)

    Li, Jing; Xie, Weixin; Pei, Jihong

    2018-03-01

    Sea-land segmentation is one of the key technologies for sea target detection in remote sensing images. Existing algorithms suffer from low accuracy, low universality and poor automation. This paper puts forward a sea-land segmentation algorithm based on multi-feature fusion for large-field remote sensing images that also removes islands. Firstly, the coastline data are extracted and all land areas are labeled using the geographic information in the large-field remote sensing image. Secondly, three features (local entropy, local texture and local gradient mean) are extracted in the sea-land border area and combined into a 3D feature vector. A multi-Gaussian model is then adopted to describe the 3D feature vectors of the sea background along the coastline. Based on this multi-Gaussian sea background model, the sea and land pixels near the coastline are classified more precisely. Finally, the coarse segmentation result and the fine segmentation result are fused to obtain an accurate sea-land segmentation. Subjective visual comparison and analysis of the experimental results shows that the proposed method has high segmentation accuracy, wide applicability and strong anti-disturbance ability.
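    The Gaussian sea-background classification described above can be sketched as follows; this sketch uses a single diagonal-covariance Gaussian as a simplification of the paper's multi-Gaussian model, and the feature means, variances, and log-likelihood threshold are all illustrative:

    ```python
    import math

    def gaussian_logpdf(x, mean, var):
        """Log-density of a 1D Gaussian."""
        return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

    def is_sea(feature_vec, sea_means, sea_vars, threshold):
        """Score a 3D feature vector (local entropy, local texture, local gradient
        mean) against a diagonal-covariance Gaussian sea-background model; a pixel
        is labeled sea when its log-likelihood exceeds the threshold."""
        score = sum(gaussian_logpdf(x, m, v)
                    for x, m, v in zip(feature_vec, sea_means, sea_vars))
        return score > threshold

    # Illustrative sea background: low entropy, texture and gradient.
    sea_model = ([0.2, 0.1, 0.05], [0.01, 0.01, 0.01])
    print(is_sea([0.25, 0.12, 0.06], *sea_model, threshold=0.0))  # True: close to the model
    print(is_sea([0.9, 0.8, 0.7], *sea_model, threshold=0.0))     # False: land-like features
    ```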

  9. Real-time and encryption efficiency improvements of simultaneous fusion, compression and encryption method based on chaotic generators

    NASA Astrophysics Data System (ADS)

    Jridi, Maher; Alfalou, Ayman

    2018-03-01

    In this paper, an enhancement of an existing optical simultaneous fusion, compression and encryption (SFCE) scheme in terms of real-time requirements, bandwidth occupation and encryption robustness is proposed. We use an approximate form of the DCT to decrease the computational resources required. Then, a novel chaos-based encryption algorithm is introduced in order to achieve the confusion and diffusion effects. In the confusion phase, the Henon map is used for row and column permutations, where the initial condition is related to the original image. Furthermore, the Skew Tent map is employed to generate another random matrix in order to carry out pixel scrambling. Finally, an adaptation of a classical diffusion process scheme is employed to strengthen the security of the cryptosystem against statistical, differential, and chosen-plaintext attacks. Analyses of the key space, histogram, adjacent pixel correlation, sensitivity, and encryption speed of the scheme are provided, and compare favorably with those of the existing crypto-compression system. The proposed method is found to be friendly to both digital and optical implementation, which facilitates the integration of the crypto-compression system in a very broad range of scenarios.
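    A Henon-map-driven permutation of the kind used in the confusion phase can be sketched as follows; rank-ordering a chaotic sequence to obtain a permutation is a common construction, and the map parameters and initial condition here are illustrative (the paper derives the initial condition from the original image):

    ```python
    def henon_sequence(n, x0=0.1, y0=0.1, a=1.4, b=0.3):
        """Iterate the Henon map x' = 1 - a*x^2 + y, y' = b*x and collect n x-values."""
        xs = []
        x, y = x0, y0
        for _ in range(n):
            x, y = 1.0 - a * x * x + y, b * x
            xs.append(x)
        return xs

    def permutation_from_chaos(n, **kw):
        """Rank-order the chaotic sequence to obtain a permutation of 0..n-1,
        usable for row or column scrambling of an image."""
        xs = henon_sequence(n, **kw)
        return sorted(range(n), key=lambda i: xs[i])

    perm = permutation_from_chaos(8)
    print(sorted(perm) == list(range(8)))  # True: always a valid permutation
    ```

    Because the map is deterministic, a receiver holding the same initial condition regenerates the identical permutation and can invert the scrambling.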

  10. Super-resolution fusion of complementary panoramic images based on cross-selection kernel regression interpolation.

    PubMed

    Chen, Lidong; Basu, Anup; Zhang, Maojun; Wang, Wei; Liu, Yu

    2014-03-20

    A complementary catadioptric imaging technique was proposed to solve the problem of low and nonuniform resolution in omnidirectional imaging. To extend this research, our paper focuses on how to generate a high-resolution panoramic image from the captured omnidirectional image. To avoid interference between the inner and outer images while fusing the two complementary views, a cross-selection kernel regression method is proposed. First, in view of the complementary sampling resolution of the inner and outer images in the tangential and radial directions, respectively, the horizontal gradients in the expected panoramic image are estimated from the scattered neighboring pixels mapped from the outer image, while the vertical gradients are estimated using the inner image. Then, the size and shape of the regression kernel are adaptively steered based on the local gradients. Furthermore, the neighboring pixels in the subsequent interpolation step of kernel regression are also selected based on a comparison between the horizontal and vertical gradients. In simulation and real-image experiments, the proposed method outperforms existing kernel regression methods and our previous wavelet-based fusion method in terms of both visual quality and objective evaluation.

  11. Fusion of High Resolution Multispectral Imagery in Vulnerable Coastal and Land Ecosystems.

    PubMed

    Ibarrola-Ulzurrun, Edurne; Gonzalo-Martin, Consuelo; Marcello-Ruiz, Javier; Garcia-Pedrero, Angel; Rodriguez-Esparragon, Dionisio

    2017-01-25

    Ecosystems provide a wide variety of useful resources that enhance human welfare, but these resources are declining due to climate change and anthropogenic pressure. In this work, three vulnerable ecosystems are studied: shrublands, coastal areas with dune systems, and areas of shallow water. Given this decline in resources, remote sensing and image processing techniques could contribute to the management of these natural resources in a practical and cost-effective way, although some improvements are needed to obtain higher-quality information. An important quality improvement is fusion at the pixel level. Hence, the objective of this work is to assess which pansharpening technique provides the best fused image for the different types of ecosystems. After a preliminary evaluation of twelve classic and novel fusion algorithms, four pansharpening algorithms were analyzed using six quality indices. The quality assessment was implemented not only for the whole set of multispectral bands, but also for the subsets of spectral bands inside and outside the wavelength range of the panchromatic image. Better quality is observed in the fused image when using only the bands covered by the panchromatic band range. Notably, these techniques are applied not only to land and urban areas but also, in a novel analysis, to shallow-water ecosystems. Although the algorithms do not differ greatly over land and coastal areas, coastal ecosystems require simpler algorithms, such as fast intensity hue saturation, whereas more heterogeneous ecosystems need advanced algorithms, such as weighted wavelet 'à trous' through fractal dimension maps for shrublands and mixed ecosystems. Moreover, quality map analysis was carried out in order to study the fusion result in each band at the local level. Finally, to demonstrate the performance of these pansharpening techniques, an advanced object-based (OBIA) support vector machine classification was applied, and a thematic map for the shrubland ecosystem was obtained, which corroborates weighted wavelet 'à trous' through fractal dimension maps as the best fusion algorithm for this ecosystem.

  12. Fusion of High Resolution Multispectral Imagery in Vulnerable Coastal and Land Ecosystems

    PubMed Central

    Ibarrola-Ulzurrun, Edurne; Gonzalo-Martin, Consuelo; Marcello-Ruiz, Javier; Garcia-Pedrero, Angel; Rodriguez-Esparragon, Dionisio

    2017-01-01

    Ecosystems provide a wide variety of useful resources that enhance human welfare, but these resources are declining due to climate change and anthropogenic pressure. In this work, three vulnerable ecosystems are studied: shrublands, coastal areas with dune systems, and areas of shallow water. Given this decline in resources, remote sensing and image processing techniques could contribute to the management of these natural resources in a practical and cost-effective way, although some improvements are needed to obtain higher-quality information. An important quality improvement is fusion at the pixel level. Hence, the objective of this work is to assess which pansharpening technique provides the best fused image for the different types of ecosystems. After a preliminary evaluation of twelve classic and novel fusion algorithms, four pansharpening algorithms were analyzed using six quality indices. The quality assessment was implemented not only for the whole set of multispectral bands, but also for the subsets of spectral bands inside and outside the wavelength range of the panchromatic image. Better quality is observed in the fused image when using only the bands covered by the panchromatic band range. Notably, these techniques are applied not only to land and urban areas but also, in a novel analysis, to shallow-water ecosystems. Although the algorithms do not differ greatly over land and coastal areas, coastal ecosystems require simpler algorithms, such as fast intensity hue saturation, whereas more heterogeneous ecosystems need advanced algorithms, such as weighted wavelet ‘à trous’ through fractal dimension maps for shrublands and mixed ecosystems. Moreover, quality map analysis was carried out in order to study the fusion result in each band at the local level. Finally, to demonstrate the performance of these pansharpening techniques, an advanced object-based (OBIA) support vector machine classification was applied, and a thematic map for the shrubland ecosystem was obtained, which corroborates weighted wavelet ‘à trous’ through fractal dimension maps as the best fusion algorithm for this ecosystem. PMID:28125055

  13. Graphene metamaterial spatial light modulator for infrared single pixel imaging.

    PubMed

    Fan, Kebin; Suen, Jonathan Y; Padilla, Willie J

    2017-10-16

    High-resolution and hyperspectral imaging has long been a goal for multi-dimensional data fusion sensing applications - of interest for autonomous vehicles and environmental monitoring. In the long-wave infrared regime this quest has been impeded by size, weight, power, and cost issues, especially as focal-plane array detector sizes increase. Here we propose and experimentally demonstrate a new approach based on a metamaterial graphene spatial light modulator (GSLM) for infrared single pixel imaging. A frequency-division multiplexing (FDM) imaging technique is designed and implemented that relies entirely on the electronic reconfigurability of the GSLM. We compare our approach to the more common raster-scan method and directly show that FDM image frame rates can be 64 times faster with no degradation of image quality. Our device and related imaging architecture are not restricted to the infrared regime, and may be scaled to other bands of the electromagnetic spectrum. The study presented here opens a new approach for fast and efficient single pixel imaging utilizing graphene metamaterials with novel acquisition strategies.

  14. On the creation of high spatial resolution imaging spectroscopy data from multi-temporal low spatial resolution imagery

    NASA Astrophysics Data System (ADS)

    Yao, Wei; van Aardt, Jan; Messinger, David

    2017-05-01

    The Hyperspectral Infrared Imager (HyspIRI) mission aims to provide global imaging spectroscopy data, to the benefit of ecosystem studies in particular. The onboard spectrometer will collect radiance spectra from the visible to short wave infrared (VSWIR) regions (400-2500 nm). The mission calls for fine spectral resolution (10 nm bandwidth) and as such will enable scientists to perform material characterization, species classification, and even sub-pixel mapping. However, the global coverage requirement results in a relatively low spatial resolution (GSD 30m), which restricts applications to objects of similar scales. We have therefore focused on the assessment of sub-pixel vegetation structure from spectroscopy data in past studies. In this study, we investigate the reconstruction of higher spatial resolution imaging spectroscopy data via fusion of multi-temporal data sets to address the drawbacks implicit in low spatial resolution imagery. The projected temporal resolution of the HyspIRI VSWIR instrument is 15 days, which implies that we have access to as many as six data sets for an area over the course of a growth season. Previous studies have shown that select vegetation structural parameters, e.g., leaf area index (LAI) and gross ecosystem production (GEP), are relatively constant in summer and winter for temperate forests; we therefore consider the data sets collected in summer to be from a similar, stable forest structure. The first step, prior to fusion, involves registration of the multi-temporal data. A data fusion algorithm then can be applied to the pre-processed data sets. The approach hinges on an algorithm that has been widely applied to fuse RGB images.
    Ideally, if we have four images of a scene which all meet the following requirements - i) they are captured with the same camera configuration; ii) the pixel size of each image is x; and iii) at least r² images are aligned on a grid of x/r - then a high-resolution image, with a pixel size of x/r, can be reconstructed from the multi-temporal set. The algorithm was applied to data from NASA's classic Airborne Visible and Infrared Imaging Spectrometer (AVIRIS-C; GSD 18m), collected between 2013-2015 (summer and fall) over our study area (NEON's Southwest Pacific Domain; Fresno, CA), to generate higher spatial resolution imagery (GSD 9m). The reconstructed data set was validated via comparison to NEON's imaging spectrometer (NIS) data (GSD 1m). The results showed that the algorithm worked well with the AVIRIS-C data and could be applied to HyspIRI data.
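    The reconstruction rule above (r² aligned images on an x/r grid yielding a pixel size of x/r) can be sketched for r = 2 as a simple sub-grid interleave; this assumes perfect half-pixel registration and identical camera configurations, which real multi-temporal data would only approximate after resampling:

    ```python
    def interleave_2x(imgs):
        """Combine four aligned low-resolution images, offset on a 2x2 sub-pixel
        grid, into one image with half the pixel size (r = 2, so r^2 = 4 inputs).
        Images are lists of rows; offsets are (0,0), (0,0.5), (0.5,0), (0.5,0.5)."""
        i00, i01, i10, i11 = imgs
        h, w = len(i00), len(i00[0])
        hi = [[0.0] * (2 * w) for _ in range(2 * h)]
        for r in range(h):
            for c in range(w):
                hi[2 * r][2 * c] = i00[r][c]          # top-left sub-pixel
                hi[2 * r][2 * c + 1] = i01[r][c]      # top-right sub-pixel
                hi[2 * r + 1][2 * c] = i10[r][c]      # bottom-left sub-pixel
                hi[2 * r + 1][2 * c + 1] = i11[r][c]  # bottom-right sub-pixel
        return hi

    a, b, c, d = [[1]], [[2]], [[3]], [[4]]
    print(interleave_2x([a, b, c, d]))  # [[1, 2], [3, 4]]
    ```

    For imaging spectroscopy data the same interleave would be applied band by band.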

  15. Segmentation via fusion of edge and needle map

    NASA Astrophysics Data System (ADS)

    Ahn, Hong-Young; Tou, Julius T.

    1991-03-01

    This paper presents an integrated image segmentation method using edge and needle maps, which compensates for the deficiencies of purely edge-based or purely region-based approaches. Segmentation of an image is the first and most difficult step toward the symbolic transformation of a raw image, which is essential in image understanding. In industrial applications, the task is further complicated by the ubiquitous presence of specularity on most industrial parts. Three images taken under three different illumination directions are used to separate the specular and Lambertian components in the images. The needle map is generated from the Lambertian component images using the photometric stereo technique. In one channel, edges are extracted and linked from the averaged Lambertian images, providing one source of segmentation. In the other channel, Gaussian curvature and mean curvature values are estimated at each pixel from a least-squares local surface fit of the needle map. A labeled surface-type image is then generated using the signs of the Gaussian and mean curvatures, where one of ten surface types is assigned to each pixel. Connected regions of identical surface-type pixels provide the first-level grouping, a rough initial segmentation. The edge information and the initial surface-type segmentation are fed to an integration module which interprets the edges and regions in a consistent way. During interpretation, regions are merged or split, and edges are discarded or generated, depending upon the global surface fit error and consistency with neighboring regions. The output of the integrated segmentation is an explicit description of the surface type and contours of each region, which facilitates recognition, localization and attitude determination of objects in the image.

  16. Intensity-hue-saturation-based image fusion using iterative linear regression

    NASA Astrophysics Data System (ADS)

    Cetin, Mufit; Tepecik, Abdulkadir

    2016-10-01

    The image fusion process produces a high-resolution image by combining the superior features of a low-spatial-resolution multispectral image and a high-resolution panchromatic image. Despite its common usage due to its fast computation and high sharpening ability, the intensity-hue-saturation (IHS) fusion method may cause color distortions, especially when large gray-value differences exist between the images to be combined. This paper proposes a spatially adaptive IHS (SA-IHS) technique that avoids these distortions by automatically adjusting the exact spatial information to be injected into the multispectral image during the fusion process. The SA-IHS method essentially suppresses the effects of those pixels that cause spectral distortions by assigning weaker weights to them, avoiding a large number of redundancies in the fused image. The experimental database consists of IKONOS images, and the experimental results demonstrate, both visually and statistically, the improvement of the proposed algorithm compared with several other IHS-like methods, such as IHS, generalized IHS, fast IHS, and generalized adaptive IHS.
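    The basic IHS injection that SA-IHS builds on can be sketched for a single pixel as follows; this is the fast-IHS additive form, and the per-pixel adaptive weighting of the proposed method is not modeled in this sketch:

    ```python
    def ihs_fuse(r, g, b, pan):
        """Classic IHS-style fusion for one pixel: replace the intensity of the
        multispectral triple with the panchromatic value by adding the intensity
        difference to each band (the fast-IHS shortcut)."""
        intensity = (r + g + b) / 3.0
        delta = pan - intensity            # spatial detail to inject
        return r + delta, g + delta, b + delta

    fused = ihs_fuse(90.0, 120.0, 150.0, 130.0)
    print(fused)  # intensity was 120, pan is 130 -> each band shifted by +10
    ```

    When the pan value differs strongly from the multispectral intensity, the uniform shift distorts band ratios, which is exactly the color distortion SA-IHS attenuates by down-weighting such pixels.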

  17. Sensor fusion to enable next generation low cost Night Vision systems

    NASA Astrophysics Data System (ADS)

    Schweiger, R.; Franz, S.; Löhlein, O.; Ritter, W.; Källhammer, J.-E.; Franks, J.; Krekels, T.

    2010-04-01

    The next generation of automotive Night Vision Enhancement systems offers automatic pedestrian recognition with a performance beyond current Night Vision systems at a lower cost. This will allow high market penetration, covering the luxury as well as compact car segments. Improved performance can be achieved by fusing a Far Infrared (FIR) sensor with a Near Infrared (NIR) sensor. However, fusing with today's FIR systems will be too costly to get a high market penetration. The main cost drivers of the FIR system are its resolution and its sensitivity. Sensor cost is largely determined by sensor die size. Fewer and smaller pixels will reduce die size but also resolution and sensitivity. Sensitivity limits are mainly determined by inclement weather performance. Sensitivity requirements should be matched to the possibilities of low cost FIR optics, especially implications of molding of highly complex optical surfaces. As a FIR sensor specified for fusion can have lower resolution as well as lower sensitivity, fusing FIR and NIR can solve performance and cost problems. To allow compensation of FIR-sensor degradation on the pedestrian detection capabilities, a fusion approach called MultiSensorBoosting is presented that produces a classifier holding highly discriminative sub-pixel features from both sensors at once. The algorithm is applied on data with different resolution and on data obtained from cameras with varying optics to incorporate various sensor sensitivities. As it is not feasible to record representative data with all different sensor configurations, transformation routines on existing high resolution data recorded with high sensitivity cameras are investigated in order to determine the effects of lower resolution and lower sensitivity to the overall detection performance. 
    This paper also gives an overview of first results, showing that reductions in both FIR sensor resolution and sensitivity can be compensated for using these fusion techniques.

  18. Estimation of urban surface water at subpixel level from neighborhood pixels using multispectral remote sensing image (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Xie, Huan; Luo, Xin; Xu, Xiong; Wang, Chen; Pan, Haiyan; Tong, Xiaohua; Liu, Shijie

    2016-10-01

    Water bodies are fundamental elements of urban ecosystems, and water mapping is critical for urban and landscape planning and management. While remote sensing has increasingly been used for water mapping in rural areas, applying this spatially explicit approach in urban areas remains challenging, because urban water bodies are mostly small and spectral confusion is widespread between water and the complex features of the urban environment. The water index is the most common method for water extraction at the pixel level, and spectral mixture analysis (SMA) has recently been widely employed for analyzing the urban environment at the subpixel level. In this paper, we introduce an automatic subpixel water mapping method for urban areas using multispectral remote sensing data. The objectives of this research are: (1) developing an automatic technique for extracting land-water mixed pixels with a water index; (2) deriving the most representative endmembers of water and land by utilizing neighboring water pixels and an adaptively, iteratively selected optimal neighboring land pixel, respectively; (3) applying a linear unmixing model for subpixel water fraction estimation. Specifically, to automatically extract land-water pixels, locally weighted scatterplot smoothing is first applied to the histogram curve of the water index (WI) image. The Otsu threshold is then derived as the starting point for selecting land-water pixels, with the land and water thresholds determined from the slopes of the histogram curve. Based on this pixel-level processing, the image is divided into three parts: water pixels, land pixels, and mixed land-water pixels. Spectral mixture analysis (SMA) is then applied to the mixed land-water pixels for water fraction estimation at the subpixel level.
    Under the assumption that the endmember signature of a target pixel should be more similar to those of adjacent pixels due to spatial dependence, the water and land endmembers are determined from neighboring pure water or pure land pixels within a given distance. To obtain the most representative endmembers for the SMA, we designed an adaptive iterative endmember selection method based on the spatial similarity of adjacent pixels. Exploiting the spectral similarity within a spatially adjacent region, the land endmember spectrum is determined by selecting the most representative land pixel in a local window, and the water endmember spectrum is determined by averaging the water pixels in the local window. The proposed hierarchical processing method based on WI and SMA (WISMA) is applied to urban areas for reliability evaluation using Landsat-8 Operational Land Imager (OLI) images. For comparison, four methods at the pixel level and subpixel level were chosen. The results indicate that the water maps generated by the proposed method correspond closely with the reference water maps at subpixel precision, and that WISMA achieved the best performance in water mapping under a comprehensive analysis of different accuracy evaluation indexes (RMSE and SE).
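    For a single water-index value and two endmembers, the linear unmixing model of objective (3) reduces to a simple fraction: the mixed pixel is modeled as f·water + (1-f)·land. The endmember values below are illustrative; the method derives them from neighboring pure pixels:

    ```python
    def water_fraction(pixel_wi, water_wi, land_wi):
        """Two-endmember linear unmixing of a water-index value:
        solve pixel = f*water + (1-f)*land for the water fraction f,
        clamped to the physical range [0, 1]."""
        f = (pixel_wi - land_wi) / (water_wi - land_wi)
        return min(1.0, max(0.0, f))

    # Halfway between the land (-0.3) and water (0.5) endmembers -> fraction 0.5
    print(water_fraction(pixel_wi=0.1, water_wi=0.5, land_wi=-0.3))
    ```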

  19. Resolution Enhancement of MODIS-derived Water Indices for Studying Persistent Flooding

    NASA Astrophysics Data System (ADS)

    Underwood, L. W.; Kalcic, M. T.; Fletcher, R. M.

    2012-12-01

    Monitoring coastal marshes for persistent flooding and salinity stress is a high priority issue in Louisiana. Remote sensing can identify environmental variables that can be indicators of marsh habitat conditions, and offer timely and relatively accurate information for aiding wetland vegetation management. Monitoring accuracy is often limited by mixed pixels, which occur when the area represented by a pixel encompasses more than one cover type. Mixtures of marsh grasses and open water in 250m Moderate Resolution Imaging Spectroradiometer (MODIS) data can impede flood area estimation. Flood mapping of such mixtures requires finer spatial resolution data to better represent the cover type composition within a 250m MODIS pixel. Fusion of MODIS and Landsat can improve both the spectral and temporal resolution of time series products to resolve rapid changes from forcing mechanisms like hurricane winds and storm surge. For this study, a method for estimating sub-pixel values from a MODIS time series of the Normalized Difference Water Index (NDWI), using temporal weighting, was implemented to map persistent flooding in Louisiana coastal marshes. Ordinarily, NDWI computed from daily 250m MODIS pixels represents a mixture of fragmented marshes and water. Here, sub-pixel NDWI values were derived for MODIS data using Landsat 30-m data. Each MODIS pixel was disaggregated into a mixture of the eight cover types according to the classified image pixels falling inside the MODIS pixel. The Landsat pixel means for each cover type inside a MODIS pixel were computed for the Landsat data preceding the MODIS image in time and for the Landsat data succeeding the MODIS image. The Landsat data were then weighted exponentially according to closeness in date to the MODIS data. The reconstructed MODIS data were produced by summing the product of fractional cover type with estimated NDWI values within each cover type. 
A new daily time series was produced using both the reconstructed 250 m MODIS data, with enhanced features, and an approximated daily 30 m high-resolution image based on Landsat data. The algorithm was developed and tested over the Calcasieu-Sabine Basin, which was heavily inundated by storm surge from Hurricane Ike, to study the extent and duration of flooding following the storm. Time series for 2000-2009, covering the flooding events of Hurricane Rita in 2005 and Hurricane Ike in 2008, were derived. High-resolution images were formed for all days in 2008 between the first and last cloud-free Landsat scenes. To refine and validate the flooding maps, each time series was compared to Louisiana Coastwide Reference Monitoring System (CRMS) station water levels, adjusted to marsh, to optimize thresholds for the MODIS-derived NDWI time series. Seasonal fluctuations were adjusted for by subtracting the ten-year average NDWI for marshes, excluding the hurricane events. Results from different NDWI indices and from a combination of indices were compared. Flooding persistence mapped with the higher-resolution data showed some improvement over the original MODIS time series estimates. The advantage of this novel technique is improved mapping of the extent and duration of inundation.
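    The disaggregation and temporal-weighting step described above can be sketched as follows. The exponential weighting form and the e-folding constant `tau` are illustrative assumptions, since the abstract does not give the exact formula:

```python
import numpy as np

def reconstruct_modis_ndwi(fractions, ndwi_before, ndwi_after,
                           days_to_before, days_to_after, tau=8.0):
    """Estimate a MODIS-pixel NDWI from per-cover-type Landsat means.

    fractions            -- (n_types,) cover-type fractions inside the MODIS pixel
    ndwi_before/after    -- per-type mean NDWI from the bracketing Landsat scenes
    days_to_before/after -- days between each Landsat scene and the MODIS date
    tau                  -- assumed e-folding constant of the exponential weighting
    """
    w_b = np.exp(-days_to_before / tau)   # the closer-in-time scene gets more weight
    w_a = np.exp(-days_to_after / tau)
    per_type = (w_b * np.asarray(ndwi_before) + w_a * np.asarray(ndwi_after)) / (w_b + w_a)
    # Sum of (fractional cover x estimated per-type NDWI) over the cover types
    return float(np.dot(fractions, per_type))
```

With equal temporal distances the two Landsat scenes contribute equally, so the result reduces to the fraction-weighted mean of the per-type averages.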

  20. Resolution Enhancement of MODIS-Derived Water Indices for Studying Persistent Flooding

    NASA Technical Reports Server (NTRS)

    Underwood, L. W.; Kalcic, Maria; Fletcher, Rose

    2012-01-01


  1. Comparison of Hybrid Classifiers for Crop Classification Using Normalized Difference Vegetation Index Time Series: A Case Study for Major Crops in North Xinjiang, China

    PubMed Central

    Hao, Pengyu; Wang, Li; Niu, Zheng

    2015-01-01

    A range of single classifiers has been proposed to classify crop types using time-series vegetation indices, and hybrid classifiers are used to improve discriminatory power. Traditional fusion rules use the product of multiple single classifiers, but that strategy cannot integrate the classification output of machine-learning classifiers. In this research, the performance of two hybrid strategies, multiple voting (M-voting) and probabilistic fusion (P-fusion), for crop classification using NDVI time series was tested with different training sample sizes at both pixel and object levels; two representative counties in north Xinjiang were selected as the study area. The single classifiers employed in this research were Random Forest (RF), Support Vector Machine (SVM), and See5 (C5.0). The results indicated that classification performance improved substantially with training sample size (mean overall accuracy increased by 5%~10%, and the standard deviation of overall accuracy was reduced by around 1%), and that when the training sample size was small (50 or 100 training samples), hybrid classifiers substantially outperformed single classifiers, with mean overall accuracy higher by 1%~2%. However, when abundant training samples (4,000) were employed, single classifiers could achieve good classification accuracy, and all classifiers obtained similar performance. Additionally, although object-based classification did not improve accuracy, it resulted in greater visual appeal, especially in study areas with a heterogeneous cropping pattern. PMID:26360597
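    The two hybrid strategies can be illustrated with a minimal sketch. The abstract does not give the exact P-fusion rule, so simple averaging of per-classifier class probabilities is assumed here:

```python
import numpy as np
from collections import Counter

def m_voting(predictions):
    """M-voting: majority vote over the hard labels of several classifiers.
    predictions -- list of per-classifier label sequences (one list per classifier)."""
    return [Counter(labels).most_common(1)[0][0] for labels in zip(*predictions)]

def p_fusion(probabilities):
    """P-fusion (averaging assumed): mean the per-classifier class-probability
    arrays of shape (n_samples, n_classes), then take the argmax class."""
    return np.mean(probabilities, axis=0).argmax(axis=1)
```

For example, three classifiers voting `[0, 1]`, `[0, 1]`, `[1, 1]` yield the fused labels `[0, 1]`.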

  2. Dependence of the appearance-based perception of criminality, suggestibility, and trustworthiness on the level of pixelation of facial images.

    PubMed

    Nurmoja, Merle; Eamets, Triin; Härma, Hanne-Loore; Bachmann, Talis

    2012-10-01

    While the dependence of face identification on the level of pixelation of facial images has been well studied, comparable research on face-based trait perception is underdeveloped. Because depiction formats used for hiding individual identity in visual media, and evidential material recorded by surveillance cameras, often consist of pixelized images, knowing the effects of pixelation on person perception has practical relevance. Here, the results of two experiments are presented showing the effect of facial image pixelation on the perception of criminality, trustworthiness, and suggestibility. It appears that individuals (N = 46, M age = 21.5 yr., SD = 3.1 for criminality ratings; N = 94, M age = 27.4 yr., SD = 10.1 for other ratings) can discriminate facial cues indicative of these perceived traits even at a coarse level of image pixelation (10-12 pixels per face horizontally), and that discriminability increases as the pixelation becomes finer. Perceived criminality and trustworthiness appear to be better conveyed by pixelized images than perceived suggestibility.

  3. Design and characterization of high precision in-pixel discriminators for rolling shutter CMOS pixel sensors with full CMOS capability

    NASA Astrophysics Data System (ADS)

    Fu, Y.; Hu-Guo, C.; Dorokhov, A.; Pham, H.; Hu, Y.

    2013-07-01

    In order to exploit the ability to integrate a charge-collecting electrode with analog and digital processing circuitry down to the pixel level, a new type of CMOS pixel sensor with full CMOS capability is presented in this paper. The pixel array is read out with a column-parallel architecture in which each pixel incorporates a diode, a preamplifier with double-sampling circuitry, and a discriminator, completely eliminating analog read-out bottlenecks. The sensor, featuring a pixel array of 8 rows and 32 columns with a pixel pitch of 80 μm × 16 μm, was fabricated in a 0.18 μm CMOS process. The behavior of each pixel-level discriminator, isolated from the diode and the preamplifier, was studied. The experimental results indicate that all in-pixel discriminators are fully operational and can provide significant improvements in the read-out speed and power consumption of CMOS pixel sensors.

  4. Simulating urban land cover changes at sub-pixel level in a coastal city

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaofeng; Deng, Lei; Feng, Huihui; Zhao, Yanchuang

    2014-10-01

    The simulation of urban expansion and land cover change is a major theme in both geographic information science and landscape ecology. Yet until now, almost all previous studies have been based on grid computations at the pixel level. With the prevalence of spectral mixture analysis in urban land cover research, simulation of urban land cover at the sub-pixel level is coming onto the agenda. This study provides a new approach to land cover simulation at the sub-pixel level. Landsat TM/ETM+ images of Xiamen city, China, from January 2002 and January 2007 were used to acquire land cover data through supervised classification. The two classified land cover maps were then used to extract the 2002-2007 transformation rules using logistic regression. The transformation possibility of each land cover type in a given pixel was taken, after normalization, as its percentage of that pixel, and cellular automata (CA) based grid computation was carried out to acquire the simulated land cover for 2007. The simulated 2007 sub-pixel land cover was verified against a validated sub-pixel land cover map, achieved by spectral mixture analysis in our previous studies, for the same date. Finally, the sub-pixel land cover for 2017 was simulated for urban planning and management. The results show that our method is useful for land cover simulation at the sub-pixel level. Although the simulation accuracy is not yet satisfactory for all land cover types, the approach provides an important idea and a good start for CA-based urban land cover simulation at the sub-pixel level.
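    The normalization step, in which each pixel's per-class transformation possibilities become sub-pixel fractions, can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def subpixel_fractions(transition_prob):
    """Turn per-pixel logistic-regression transformation possibilities into
    sub-pixel cover fractions by normalizing each pixel's class values to sum to 1.

    transition_prob -- array of shape (n_pixels, n_classes)
    """
    p = np.asarray(transition_prob, dtype=float)
    return p / p.sum(axis=1, keepdims=True)
```

Each row of the result is a valid mixture (non-negative, summing to one), which is what the CA grid computation then evolves.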

  5. Data processing for soft X-ray diagnostics based on GEM detector measurements for fusion plasma imaging

    NASA Astrophysics Data System (ADS)

    Czarski, T.; Chernyshova, M.; Pozniak, K. T.; Kasprowicz, G.; Byszuk, A.; Juszczyk, B.; Wojenski, A.; Zabolotny, W.; Zienkiewicz, P.

    2015-12-01

    A measurement system based on a GEM (Gas Electron Multiplier) detector has been developed for X-ray diagnostics of magnetic confinement fusion plasmas. The Triple GEM (T-GEM) is presented as a soft X-ray (SXR) energy- and position-sensitive detector. The paper focuses on the measurement process and describes the fundamental data processing needed to obtain reliable characteristics (histograms) useful to physicists; it thus covers the software part of the project, between the electronic hardware and the physics applications. The project is original and was developed by the authors. The multi-channel measurement system and the essential data processing for X-ray energy and position recognition are considered. Several modes of data acquisition, determined by hardware and software processing, are introduced. Typical measurement issues are discussed with a view to enhancing data quality. The primary version, based on a 1-D GEM detector, was applied to the high-resolution X-ray crystal spectrometer KX1 at the JET tokamak. The current version considers 2-D detector structures, initially for investigation purposes. Two detector structures, with single-pixel sensors and with multi-pixel (directional) sensors, are considered for two-dimensional X-ray imaging. Fundamental output characteristics are presented for one- and two-dimensional detector structures. Representative results for a reference source and for tokamak plasma are demonstrated.

  6. Data Field Modeling and Spectral-Spatial Feature Fusion for Hyperspectral Data Classification.

    PubMed

    Liu, Da; Li, Jianxun

    2016-12-16

    Classification is a significant subject in hyperspectral remote sensing image processing. This study proposes a spectral-spatial feature fusion algorithm for the classification of hyperspectral images (HSI). Unlike existing spectral-spatial classification methods, the influences and interactions of the surroundings on each measured pixel are taken into consideration here. Data field theory was employed as the mathematical realization of the field-theory concept in physics, and both the spectral and spatial domains of HSI were treated as data fields, so that the inherent dependency between interacting pixels could be modeled. Using data field modeling, spatial and spectral features were transformed into a unified radiation form and further fused into a new feature using a linear model. In contrast to current spectral-spatial classification methods, which usually simply stack spectral and spatial features together, the proposed method builds an inner connection between the spectral and spatial features and explores hidden information that contributes to classification, so new information is included. The final classification result was obtained using a random forest (RF) classifier. The proposed method was tested on two well-known standard hyperspectral datasets, University of Pavia and Indian Pines. The experimental results demonstrate that the proposed method achieves higher classification accuracies than the traditional approaches.
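    A minimal sketch of the data-field idea, assuming the commonly used Gaussian-decay (nuclear-field) potential form and an illustrative mixing weight `alpha`; the paper's exact potential function and linear model are not specified in the abstract:

```python
import numpy as np

def data_field_potential(values, sigma=1.0):
    """Data-field potential along a 1-D strip of pixels: every pixel radiates
    a field phi_ij = m_j * exp(-(d_ij / sigma)**2) that decays with distance,
    and the potential at pixel i sums the contributions of all pixels
    (potential form assumed)."""
    m = np.asarray(values, dtype=float)
    idx = np.arange(m.size)
    d = np.abs(idx[:, None] - idx[None, :])          # pairwise pixel distances
    return (m[None, :] * np.exp(-(d / sigma) ** 2)).sum(axis=1)

def fuse_features(spectral_phi, spatial_phi, alpha=0.5):
    """Linear fusion of the two unified radiation-form features (alpha assumed)."""
    return alpha * np.asarray(spectral_phi, float) + (1 - alpha) * np.asarray(spatial_phi, float)
```

The fused feature would then be fed to the RF classifier in place of the raw stacked features.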

  7. A label field fusion bayesian model and its penalized maximum rand estimator for image segmentation.

    PubMed

    Mignotte, Max

    2010-06-01

    This paper presents a novel segmentation approach based on a Markov random field (MRF) fusion model that combines several segmentation results, associated with simpler clustering models, to achieve a more reliable and accurate segmentation. The proposed fusion model is derived from the recently introduced probabilistic Rand measure for comparing one segmentation result with one or more manual segmentations of the same image. This non-parametric measure allows us to derive an appealing fusion model of label fields, easily expressed as a Gibbs distribution or as a nonstationary MRF model defined on a complete graph. Concretely, this Gibbs energy model encodes the set of binary constraints, in terms of pairs of pixel labels, provided by each of the segmentation results to be fused. Combined with a prior distribution, this energy-based Gibbs model also allows the definition of an interesting penalized maximum probabilistic Rand estimator, with which the fusion of simple, quickly estimated segmentation results becomes an attractive alternative to the complex segmentation models in the literature. The fusion framework has been successfully applied to the Berkeley image database. The experiments reported in this paper demonstrate that the proposed method is efficient in terms of visual evaluation and quantitative performance measures, and compares well with the best state-of-the-art segmentation methods recently proposed in the literature.
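    The pairwise-constraint energy described above can be sketched as follows; this is a simplified illustration of the probabilistic-Rand idea, not the paper's exact penalized estimator:

```python
import numpy as np
from itertools import combinations

def rand_fusion_energy(candidate, segmentations):
    """Gibbs-style fusion energy of a candidate label field: for every pixel
    pair, each input segmentation imposes a binary same-label/different-label
    constraint, and the energy counts the candidate's violations of those
    constraints (lower energy = better consensus)."""
    cand = np.asarray(candidate).ravel()
    segs = [np.asarray(s).ravel() for s in segmentations]
    energy = 0
    for i, j in combinations(range(cand.size), 2):
        same = cand[i] == cand[j]
        for s in segs:
            if (s[i] == s[j]) != same:   # candidate disagrees with this segmentation
                energy += 1
    return energy
```

A labeling that reproduces the pairwise structure of every input segmentation has energy zero; fusion amounts to minimizing this energy over candidate label fields.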

  8. Evaluation and display of polarimetric image data using long-wave cooled microgrid focal plane arrays

    NASA Astrophysics Data System (ADS)

    Bowers, David L.; Boger, James K.; Wellems, L. David; Black, Wiley T.; Ortega, Steve E.; Ratliff, Bradley M.; Fetrow, Matthew P.; Hubbs, John E.; Tyo, J. Scott

    2006-05-01

    Recent developments in Long-Wave InfraRed (LWIR) imaging polarimeters include incorporating a microgrid polarizer array onto the focal plane array (FPA). Inherent advantages over typical polarimeters include compact packaging and instantaneous acquisition of thermal and polarimetric information, which allows real-time video of thermal and polarimetric products. The microgrid approach has inherent polarization measurement error due to the spatial sampling of a non-uniform scene, residual pixel-to-pixel variations in the gain-corrected responsivity and in the noise equivalent input (NEI), and variations in pixel-to-pixel micro-polarizer performance. The Degree of Linear Polarization (DoLP) is highly sensitive to these parameters and is consequently used as a metric to explore instrument sensitivities. Image processing and fusion techniques are used to take advantage of the inherent thermal and polarimetric sensing capability of this FPA, providing additional scene information in real time. Optimal operating conditions are employed to improve FPA uniformity and sensitivity. Data from two DRS Infrared Technologies, L.P. (DRS) microgrid polarizer HgCdTe FPAs are presented. One FPA resides in a liquid nitrogen (LN2) pour-filled dewar with a nominal operating temperature of 80 K; the other resides in a cryogenic dewar with a nominal operating temperature of 60 K.
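    For reference, the DoLP used above as a sensitivity metric is computed from the four intensities of a microgrid super-pixel with the standard Stokes estimates; this is the idealized textbook form, omitting the gain and NEI corrections the abstract discusses:

```python
import numpy as np

def dolp(i0, i45, i90, i135):
    """Degree of Linear Polarization from a 0/45/90/135-degree microgrid
    super-pixel, using the standard Stokes estimates
    S0 = (I0 + I45 + I90 + I135) / 2, S1 = I0 - I90, S2 = I45 - I135."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    return np.sqrt(s1 ** 2 + s2 ** 2) / s0
```

Because S0 appears in the denominator, small pixel-to-pixel responsivity errors in any of the four intensities propagate directly into DoLP, which is why it serves as a sensitive instrument metric.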

  9. Multispectral image fusion based on fractal features

    NASA Astrophysics Data System (ADS)

    Tian, Jie; Chen, Jie; Zhang, Chunhua

    2004-01-01

    Imagery sensors have become an indispensable part of detection and recognition systems. They are widely used in the fields of surveillance, navigation, guidance and control. However, different imagery sensors depend on diverse imaging mechanisms, work within diverse ranges of the spectrum, perform diverse functions, and have diverse environmental requirements, so it is impractical to accomplish the task of detection or recognition with a single imagery sensor under varying circumstances, backgrounds and targets. Fortunately, the multi-sensor image fusion technique has emerged as an important route to solving this problem, and image fusion has become one of the main technical routines used to detect and recognize objects from images. However, loss of information is unavoidable during the fusion process, so a central concern of image fusion is how to preserve the useful information to the utmost; that is, before designing a fusion scheme, one should consider how to avoid the loss of useful information and how to preserve the features helpful to detection. In consideration of these issues, and of the fact that most detection problems amount to distinguishing man-made objects from natural background, a fractal-based multi-spectral fusion algorithm is proposed in this paper, aiming at the recognition of battlefield targets in complicated backgrounds. In this algorithm, source images are first orthogonally decomposed according to wavelet transform theory, and fractal-based detection is then applied to each decomposed image. At this step, natural background and man-made targets are distinguished by use of fractal models, which imitate natural objects well. Special fusion operators are employed during the fusion of areas that contain man-made targets, so that useful information is preserved and the features of targets are emphasized. 
The final fused image is reconstructed from the composition of source pyramid images, so this fusion scheme is a multi-resolution analysis; the wavelet decomposition of an image can be considered a special pyramid decomposition. According to wavelet decomposition theory, the approximation of an image at resolution 2^(j+1) equals its orthogonal projection onto the corresponding approximation space; that is, it decomposes into Ajf, the low-frequency approximation of the image f(x, y) at resolution 2^j, together with D1jf, D2jf and D3jf, the vertical, horizontal and diagonal wavelet coefficients at resolution 2^j. These coefficients describe the high-frequency information of the image in the vertical, horizontal and diagonal directions respectively. Ajf, D1jf, D2jf and D3jf are independent and can each be considered an image. In this paper J is set to 1, so the source image is decomposed to produce the sub-images Af, D1f, D2f and D3f. To address the detection of artifacts (man-made objects), the concepts of the vertical fractal dimension FD1, the horizontal fractal dimension FD2 and the diagonal fractal dimension FD3 are proposed in this paper: FD1 corresponds to the vertical wavelet coefficient image after the wavelet decomposition of the source image, FD2 to the horizontal coefficient image, and FD3 to the diagonal one. These definitions enrich the description of the source images and are therefore helpful for classifying targets. The detection of artifacts in the decomposed images then becomes a pattern recognition problem in 4-D space: the combination of FD0, FD1, FD2 and FD3 makes up the feature vector (FD0, FD1, FD2, FD3) of the studied image. All parts of the images are classified in the 4-D pattern space created by this vector, so that areas containing man-made objects can be detected. 
This detection can be considered a coarse recognition; the significant areas in each sub-image are then marked so that they can be treated with special rules. Various fusion rules have been developed, each aimed at a particular problem and with different performance, so selecting an appropriate rule is very important in the design of an image fusion system. Recent research indicates that the rule should be adjustable so that it remains suitable for highlighting the features of targets and preserving the pixels carrying useful information. In this paper, since fractal dimension is one of the main features distinguishing man-made targets from natural objects, the fusion rule is defined as follows: if the studied region of the image contains a man-made target, the pixels of the source image whose fractal dimension is minimal are saved as the pixels of the fused image; otherwise, a weighted average operator is adopted to avoid loss of information. Because the main idea of this rule is to keep the pixels with low fractal dimensions, it can be named the Minimal Fractal Dimension (MFD) fusion rule. This fractal-based algorithm is compared with a common weighted-average fusion algorithm, and an objective assessment of the two fusion results is performed. The criteria of Entropy, Cross-Entropy, Peak Signal-to-Noise Ratio (PSNR) and Standard Gray Scale Difference are defined in this paper. In contrast to the idea of constructing an ideal image as the assessment reference, the source images themselves are chosen as the reference; the assessment thus calculates how much the image quality has been enhanced, and the quantity of information increased, when the fused image is compared with the source images. The experimental results imply that the fractal-based multi-spectral fusion algorithm can effectively preserve the information of man-made objects with high contrast. 
It is argued that this algorithm preserves the features of military targets well, because battlefield targets are mostly man-made objects whose images generally differ markedly from fractal models. Furthermore, the fractal features are not sensitive to imaging conditions or to the movement of targets, so this fractal-based algorithm may be very practical.
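    The MFD fusion rule stated above can be sketched as follows; the background averaging weight `w` is an assumption, as the text only specifies "a weighted average operator":

```python
import numpy as np

def mfd_fuse(img_a, img_b, fd_a, fd_b, is_manmade, w=0.5):
    """Minimal Fractal Dimension (MFD) fusion rule: where the region is flagged
    as containing a man-made target, keep the source pixel with the smaller
    fractal dimension; elsewhere take a weighted average of the sources."""
    a, b = np.asarray(img_a, float), np.asarray(img_b, float)
    target = np.where(np.asarray(fd_a) <= np.asarray(fd_b), a, b)  # minimal-FD pixel
    background = w * a + (1 - w) * b                               # weighted average
    return np.where(np.asarray(is_manmade, bool), target, background)
```

In practice the rule would be applied per decomposed sub-image, with the fused pyramid then inverse-transformed to reconstruct the final image.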

  10. Estimating rice yield from MODIS-Landsat fusion data in Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, C. R.; Chen, C. F.; Nguyen, S. T.

    2017-12-01

    Rice production monitoring with remote sensing is an important activity in Taiwan due to official initiatives. Yield estimation is a challenge in Taiwan because rice fields are small and fragmented, so high-spatiotemporal-resolution satellite data providing phenological information on rice crops are required for this monitoring purpose. This research aims to develop data fusion approaches that integrate daily Moderate Resolution Imaging Spectroradiometer (MODIS) and Landsat data for rice yield estimation in Taiwan. In this study, the low-resolution MODIS LST and emissivity data were used as reference sources to obtain high-resolution LST from Landsat data using a mixed-pixel analysis technique, and the time-series EVI data were derived from the fusion of MODIS and Landsat spectral band data using the STARFM method. The simulated LST and EVI showed close agreement with the reference data. The rice-yield model was established using EVI and LST data, based on rice crop phenology information collected from 371 ground survey sites across the country in 2014. The results achieved with the fusion datasets, compared with the reference data, indicated a close relationship between the two datasets, with R2 = 0.75 and a root mean square error (RMSE) of 338.7 kg, more accurate than the results using the coarse-resolution MODIS LST data (R2 = 0.71 and RMSE = 623.82 kg). For the comparison of total production, 64 towns located in the western part of Taiwan were used. The results also confirmed that the model using the fusion datasets produced more accurate results (R2 = 0.95 and RMSE = 1,243 tons) than the model using the coarse-resolution MODIS data (R2 = 0.91 and RMSE = 1,749 tons). This study demonstrates the application of MODIS-Landsat fusion data to rice yield estimation at the township level in Taiwan. 
The results obtained from the methods used in this study could be useful to policymakers, and the methods are transferable to other regions of the world for rice yield estimation.

  11. A Low-Power High-Speed Smart Sensor Design for Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi

    1997-01-01

    A low-power, high-speed smart sensor system based on a large-format active pixel sensor (APS) integrated with a programmable neural processor for space exploration missions is presented. The concept of building an advanced smart sensing system is demonstrated by a system-level microchip design comprising an APS sensor, a programmable neural processor, and an embedded microprocessor in an SOI CMOS technology. This ultra-fast smart-sensor system-on-a-chip design mimics what is inherent in biological vision systems. Moreover, it is programmable and capable of performing ultra-fast machine vision processing at all levels, including image acquisition, image fusion, image analysis, scene interpretation, and control functions. The system provides about one tera-operation-per-second computing power, a two-order-of-magnitude increase over state-of-the-art microcomputers. Its high performance is due to massively parallel computing structures, high data throughput rates, fast learning capabilities, and advanced VLSI system-on-a-chip implementation.

  12. Fusion of local and global detection systems to detect tuberculosis in chest radiographs.

    PubMed

    Hogeweg, Laurens; Mol, Christian; de Jong, Pim A; Dawson, Rodney; Ayles, Helen; van Ginneken, Bram

    2010-01-01

    Automatic detection of tuberculosis (TB) on chest radiographs is a difficult problem because of the diverse presentation of the disease. A combination of detection systems for abnormalities and for normal anatomy is used to improve detection performance. A textural abnormality detection system operating at the pixel level is combined with a clavicle detection system to suppress false positive responses. The output of a shape abnormality detection system operating at the image level is combined in a next step to further improve performance by reducing false negatives. Strategies for combining the systems based on serial and parallel configurations were evaluated using the minimum, maximum, product, and mean probability combination rules. The performance of TB detection, measured as the area under the ROC curve, increased from 0.67 for the textural abnormality detection system alone to 0.86 when the three systems were combined. The best result was achieved using the sum and product rules in a parallel combination of the outputs.
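    The parallel combination rules named in the abstract can be sketched as follows (function name illustrative):

```python
import numpy as np

def combine_parallel(prob_maps, rule="product"):
    """Parallel fusion of per-system abnormality probabilities using one of the
    combination rules named in the abstract (minimum, maximum, product, mean).

    prob_maps -- array-like of shape (n_systems, ...) probability maps
    """
    p = np.asarray(prob_maps, dtype=float)
    ops = {"min": p.min, "max": p.max, "product": p.prod, "mean": p.mean}
    return ops[rule](axis=0)   # combine across the systems axis
```

For two systems scoring a pixel 0.8 and 0.5, the product rule yields 0.4 and the mean rule 0.65; serial configurations would instead feed one system's output into the next.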

  13. Remote sensing of evapotranspiration using automated calibration: Development and testing in the state of Florida

    NASA Astrophysics Data System (ADS)

    Evans, Aaron H.

    Thermal remote sensing is a powerful tool for measuring the spatial variability of evapotranspiration, owing to the cooling effect of vaporization. The residual method is a popular technique that calculates evapotranspiration by subtracting sensible heat from available energy. Estimating sensible heat requires the aerodynamic surface temperature, which is difficult to retrieve accurately. Methods such as SEBAL/METRIC correct for this problem by calibrating the relationship between sensible heat and retrieved surface temperature. The disadvantages of these calibrations are that (1) the user must manually identify extremely dry and wet pixels in the image, and (2) each calibration is applicable only over a limited spatial extent. Producing larger maps is operationally limited by the time required to manually calibrate multiple spatial extents over multiple days. This dissertation develops techniques that automatically detect dry and wet pixels. Landsat imagery is used because it resolves dry pixels. Calibrations using (1) only dry pixels and (2) wet pixels as well are developed. Snapshots of retrieved evaporative fraction and actual evapotranspiration are compared to eddy covariance measurements for five study areas in Florida: (1) Big Cypress, (2) Disney Wilderness, (3) the Everglades, (4) near Gainesville, FL, and (5) Kennedy Space Center. The sensitivity of evaporative fraction to temperature, available energy, roughness length and wind speed is tested. A technique for temporally interpolating evapotranspiration by fusing Landsat and MODIS is developed and tested. The automated algorithm is successful at detecting wet and dry pixels (when they exist). Including wet pixels in the calibration and assuming constant atmospheric conductance significantly improved results for all study areas except Big Cypress and Gainesville. 
Evaporative fraction is not very sensitive to instantaneous available energy, but it is sensitive to temperature when wet pixels are included, because temperature is required for estimating wet-pixel evapotranspiration. Data fusion techniques only slightly outperformed linear interpolation. The eddy covariance comparison and temporal interpolation produced acceptable bias error in most cases, suggesting that automated calibration and interpolation could be used to predict monthly or annual ET. Maps demonstrating the spatial patterns of evapotranspiration at field scale were successfully produced, but only for limited spatial extents. A framework has been established for producing larger maps by creating a mosaic of smaller individual maps.

  14. In vivo characterization of a reporter gene system for imaging hypoxia-induced gene expression.

    PubMed

    Carlin, Sean; Pugachev, Andrei; Sun, Xiaorong; Burke, Sean; Claus, Filip; O'Donoghue, Joseph; Ling, C Clifton; Humm, John L

    2009-10-01

    To characterize a tumor model containing a hypoxia-inducible reporter gene and to demonstrate its utility by comparing reporter gene expression to the uptake and distribution of the hypoxia tracer (18)F-fluoromisonidazole ((18)F-FMISO). Three tumors derived from the rat prostate cancer cell line R3327-AT were grown in each of two rats as follows: (1) parental R3327-AT; (2) positive control R3327-AT/PC, in which the HSV1-tkeGFP fusion reporter gene was expressed constitutively; (3) R3327-AT/HRE, in which the reporter gene was placed under the control of a hypoxia-inducible factor-responsive promoter sequence (HRE). Animals were coadministered a hypoxia-specific marker (pimonidazole) and the reporter gene probe (124)I-2'-fluoro-2'-deoxy-1-beta-d-arabinofuranosyl-5-iodouracil ((124)I-FIAU) 3 h prior to sacrifice. Statistical analysis of the spatial association between (124)I-FIAU uptake and pimonidazole fluorescent staining intensity was then performed on a pixel-by-pixel basis. The utility of this system was demonstrated by assessing reporter gene expression against the exogenous hypoxia probe (18)F-FMISO: two rats, each bearing a single R3327-AT/HRE tumor, were injected with (124)I-FIAU (3 h before sacrifice) and (18)F-FMISO (2 h before sacrifice), and the spatial association between (18)F-FMISO and (124)I-FIAU was analyzed pixel by pixel. Correlation coefficients between (124)I-FIAU uptake and pimonidazole staining intensity were 0.11 in R3327-AT tumors, -0.66 in R3327-AT/PC, and 0.76 in R3327-AT/HRE, confirming that only in the R3327-AT/HRE tumor was HSV1-tkeGFP gene expression associated with hypoxia. The correlation coefficient between (18)F-FMISO and (124)I-FIAU uptake in R3327-AT/HRE tumors was r=0.56, demonstrating good spatial correspondence between the two tracers. 
We have confirmed hypoxia-specific expression of the HSV1-tkeGFP fusion gene in the R3327-AT/HRE tumor model and demonstrated the utility of this model for the evaluation of radiolabeled hypoxia tracers.
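The pixel-by-pixel association analysis described above amounts to computing a correlation coefficient over co-registered image pairs. A minimal sketch, assuming numpy arrays; the function name and the optional region mask are illustrative, not from the paper:

```python
import numpy as np

def pixelwise_corr(img_a, img_b, mask=None):
    # Pearson correlation between two co-registered images,
    # optionally restricted to a region of interest (e.g., the tumor).
    a = np.asarray(img_a, dtype=float).ravel()
    b = np.asarray(img_b, dtype=float).ravel()
    if mask is not None:
        m = np.asarray(mask, dtype=bool).ravel()
        a, b = a[m], b[m]
    return float(np.corrcoef(a, b)[0, 1])
```

Perfectly proportional inputs give +1 and anticorrelated inputs give -1, matching the sign convention of the reported coefficients.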

  15. Multimodal Medical Image Fusion by Adaptive Manifold Filter.

    PubMed

    Geng, Peng; Liu, Shuaiqi; Zhuang, Shanna

    2015-01-01

Medical image fusion plays an important role in the diagnosis and treatment of diseases, for example in image-guided radiotherapy and surgery. A modified local contrast measure is proposed to fuse multimodal medical images. Firstly, the adaptive manifold filter is introduced to filter the source images, yielding the low-frequency part of the modified local contrast. Secondly, the modified spatial frequency of the source images is adopted as the high-frequency part. Finally, the pixel with the larger modified local contrast is selected into the fused image. The presented scheme outperforms the spatial-domain guided filter method, the dual-tree complex wavelet transform-based method, the nonsubsampled contourlet transform-based method, and four classic fusion methods in terms of visual quality. Furthermore, over the six pairs of source images, the mutual information values of the presented method are on average 55%, 41%, and 62% higher than those of the three methods, and its edge-based similarity measures are on average 13%, 33%, and 14% higher.
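The select-the-larger-contrast rule can be sketched as below. This is a structural illustration only: a box filter stands in for the adaptive manifold filter, and the absolute residual stands in for the modified spatial frequency.

```python
import numpy as np

def box_blur(img, r=2):
    # Simple box filter as a stand-in for the adaptive manifold filter.
    k = 2 * r + 1
    padded = np.pad(img, r, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def local_contrast(img):
    low = box_blur(img)             # low-frequency part
    high = np.abs(img - low)        # crude high-frequency proxy
    return high / (low + 1e-6)      # contrast = high / low

def fuse(a, b):
    # Keep, per pixel, the source whose local contrast is larger.
    mask = local_contrast(a) >= local_contrast(b)
    return np.where(mask, a, b)
```

Every fused pixel is copied verbatim from one of the two sources, which is what distinguishes this selection rule from averaging-based fusion.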

  16. Application of spatially resolved high resolution crystal spectrometry to inertial confinement fusion plasmas.

    PubMed

    Hill, K W; Bitter, M; Delgado-Aparacio, L; Pablant, N A; Beiersdorfer, P; Schneider, M; Widmann, K; Sanchez del Rio, M; Zhang, L

    2012-10-01

High resolution (λ∕Δλ ∼ 10 000) 1D imaging x-ray spectroscopy using a spherically bent crystal and a 2D hybrid pixel array detector is used worldwide for Doppler measurements of ion-temperature and plasma flow-velocity profiles in magnetic confinement fusion plasmas. Meter-sized plasmas are diagnosed with cm spatial resolution and 10 ms time resolution. This concept can also be used as a diagnostic of small sources, such as inertial confinement fusion plasmas and targets on x-ray light source beam lines, with spatial resolution of micrometers, as demonstrated by laboratory experiments using a 250-μm (55)Fe source, and by ray-tracing calculations. Throughput calculations agree with measurements, and predict detector counts in the range 10(-8)-10(-6) times the source x-rays, depending on crystal reflectivity and spectrometer geometry. Results of the lab demonstrations, application of the technique to the National Ignition Facility (NIF), and predictions of performance on NIF are presented.

  17. A CMOS image sensor with programmable pixel-level analog processing.

    PubMed

    Massari, Nicola; Gottardi, Massimo; Gonzo, Lorenzo; Stoppa, David; Simoni, Andrea

    2005-11-01

A prototype of a 34 x 34 pixel image sensor implementing real-time analog image processing is presented. Edge detection, motion detection, image amplification, and dynamic-range boosting are executed at the pixel level by means of a highly interconnected pixel architecture based on the absolute value of the difference between neighboring pixels. The analog operations are performed over a kernel of 3 x 3 pixels. The square pixel, consisting of 30 transistors, has a pitch of 35 microm with a fill factor of 20%. The chip was fabricated in a 0.35 microm CMOS technology, and its power consumption is 6 mW with a 3.3 V power supply. The device was fully characterized and achieves a dynamic range of 50 dB with a light power density of 150 nW/mm2 and a frame rate of 30 frames/s. The measured fixed-pattern noise corresponds to 1.1% of the saturation level. The sensor's dynamic range can be extended up to 96 dB using the double-sampling technique.

  18. Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion

    PubMed Central

    Zhao, Yuanshen; Gong, Liang; Huang, Yixiang; Liu, Chengliang

    2016-01-01

Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot, due to the various disturbances present in the image background. The bottleneck for robust fruit recognition is reducing the influence of the two main disturbances: illumination and overlapping. In order to recognize tomatoes in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. Firstly, two novel feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Secondly, wavelet transformation was adopted to fuse the two feature images at the pixel level, combining the feature information of the two source images. Thirdly, in order to segment the target tomato from the background, an adaptive threshold algorithm was used to obtain the optimal threshold. The final segmentation result was processed by morphology operations to reduce a small amount of noise. In the detection tests, 93% of the 200 target tomato samples were recognized. This indicates that the proposed tomato recognition method is suitable for low-cost robotic tomato harvesting in uncontrolled environments. PMID:26840313
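The I-component extraction and adaptive-threshold steps can be sketched with the approximate NTSC YIQ weights and an Otsu-style threshold; the paper's specific adaptive threshold algorithm may differ, so treat this as an assumption-laden illustration:

```python
import numpy as np

def i_component(rgb):
    # YIQ in-phase channel from an RGB image with values in [0, 1],
    # using approximate NTSC coefficients.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.596 * r - 0.274 * g - 0.322 * b

def otsu_threshold(img, bins=256):
    # Adaptive threshold chosen to maximize between-class variance.
    hist, edges = np.histogram(img.ravel(), bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w = hist / hist.sum()
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = w[:i].sum(), w[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (w[:i] * centers[:i]).sum() / w0     # mean of class 0
        m1 = (w[i:] * centers[i:]).sum() / w1     # mean of class 1
        var = w0 * w1 * (m0 - m1) ** 2            # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t
```

On a clearly bimodal image the returned threshold falls between the two modes, which is what the segmentation step relies on.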

  19. General fusion approaches for the age determination of latent fingerprint traces: results for 2D and 3D binary pixel feature fusion

    NASA Astrophysics Data System (ADS)

    Merkel, Ronny; Gruhn, Stefan; Dittmann, Jana; Vielhauer, Claus; Bräutigam, Anja

    2012-03-01

Determining the age of latent fingerprint traces found at crime scenes has been an unresolved research issue for decades. Solving it would provide criminal investigators with the specific time a fingerprint trace was left on a surface, and would therefore enable them to link potential suspects to the time a crime took place, to reconstruct the sequence of events, and to eliminate irrelevant fingerprints to satisfy privacy constraints. Transferring imaging techniques from other application areas, such as 3D image acquisition, surface measurement and chemical analysis, to the lifting of latent biometric fingerprint traces is an emerging trend in forensics. Such non-destructive sensor devices might help to solve the challenge of determining the age of a latent fingerprint trace, since they provide the opportunity to create time series and process them using pattern recognition techniques and statistical methods on digitized 2D, 3D and chemical data; classical, contact-based capturing techniques alter the fingerprint trace and therefore make continuous scans impossible. In prior work, we suggested a feature called binary pixel, a novel approach in the field of fingerprint age determination. The feature uses a Chromatic White Light (CWL) image sensor to continuously scan a fingerprint trace over time and retrieves a characteristic logarithmic aging tendency from the sensor's 2D-intensity as well as 3D-topographic images. In this paper, we propose to combine these two characteristic aging features with other 2D and 3D features from the domains of surface measurement, microscopy, photography and spectroscopy, to increase the accuracy and reliability of a potential future age determination scheme. 
Discussing the feasibility of this variety of sensor devices and possible aging features, we propose a general fusion approach that might combine promising features into a joint age determination scheme in the future. We furthermore demonstrate the feasibility of the introduced approach by fusing, as an example, the binary pixel features based on the 2D-intensity and 3D-topographic images of the mentioned CWL sensor. We conclude that a formula-based age determination approach requires very precise image data, which cannot be achieved at the moment, whereas a machine-learning-based classification approach seems feasible if an adequate number of features can be provided.

  20. Adjacent level effects of bi level disc replacement, bi level fusion and disc replacement plus fusion in cervical spine--a finite element based study.

    PubMed

    Faizan, Ahmad; Goel, Vijay K; Biyani, Ashok; Garfin, Steven R; Bono, Christopher M

    2012-03-01

Studies delineating the adjacent level effects of single-level disc replacement systems have been reported in the literature. The aim of this study was to compare the adjacent level biomechanics of bi-level disc replacement, bi-level fusion, and a construct having disc replacement and fusion at adjoining levels. In total, the biomechanics of four models (intact, bi-level disc replacement, bi-level fusion, and fusion plus disc replacement at adjoining levels) was studied to gain insight into the effects of the various instrumentation systems on the cranial and caudal adjacent levels, using finite element analysis (73.6 N + varying moment). The bi-level fusion models were more than twice as stiff as the intact model during flexion-extension, lateral bending, and axial rotation. The bi-level disc replacement model required lower moments than the intact model (1.5 Nm). The fusion plus disc replacement model required 10-25% more moment than the intact model, except in extension. Adjacent level motions, facet loads, and endplate stresses increased substantially in the bi-level fusion model. On the other hand, adjacent level motions, facet loads, and endplate stresses were similar to intact for the bi-level disc replacement model. For the fusion plus disc replacement model, adjacent level motions, facet loads, and endplate stresses were closer to the intact model than to the bi-level fusion model, except in extension. Based on our finite element analysis, the fusion plus disc replacement procedure has less severe biomechanical effects on adjacent levels than the bi-level fusion procedure. The bi-level disc replacement procedure did not have any adverse mechanical effects on adjacent levels. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. A hybrid MLP-CNN classifier for very fine resolution remotely sensed image classification

    NASA Astrophysics Data System (ADS)

    Zhang, Ce; Pan, Xin; Li, Huapeng; Gardiner, Andy; Sargent, Isabel; Hare, Jonathon; Atkinson, Peter M.

    2018-06-01

The contextual-based convolutional neural network (CNN) with deep architecture and pixel-based multilayer perceptron (MLP) with shallow structure are well-recognized neural network algorithms, representing the state-of-the-art deep learning method and the classical non-parametric machine learning approach, respectively. The two algorithms, which have very different behaviours, were integrated in a concise and effective way using a rule-based decision fusion approach for the classification of very fine spatial resolution (VFSR) remotely sensed imagery. The decision fusion rules, designed primarily based on the classification confidence of the CNN, reflect the generally complementary patterns of the individual classifiers. In consequence, the proposed ensemble classifier MLP-CNN harvests the complementary results acquired from the CNN based on deep spatial feature representation and from the MLP based on spectral discrimination. Meanwhile, limitations of the CNN due to the adoption of convolutional filters, such as the uncertainty in object boundary partition and loss of useful fine spatial resolution detail, were compensated for. The effectiveness of the ensemble MLP-CNN classifier was tested in both urban and rural areas using aerial photography together with an additional satellite sensor dataset. The MLP-CNN classifier achieved promising performance, consistently outperforming the pixel-based MLP, spectral and textural-based MLP, and the contextual-based CNN in terms of classification accuracy. This research paves the way to effectively address the complicated problem of VFSR image classification.
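A confidence-gated decision rule of the kind described can be sketched per pixel as follows; the threshold `tau`, the function name, and the array shapes are assumptions for illustration, not the paper's exact fusion rules:

```python
import numpy as np

def fuse_labels(cnn_probs, mlp_probs, tau=0.8):
    # Per-pixel rule: trust the CNN where its class confidence is high,
    # otherwise fall back to the spectral MLP prediction.
    # cnn_probs, mlp_probs: (..., n_classes) class-probability arrays.
    cnn_conf = cnn_probs.max(axis=-1)
    cnn_label = cnn_probs.argmax(axis=-1)
    mlp_label = mlp_probs.argmax(axis=-1)
    return np.where(cnn_conf >= tau, cnn_label, mlp_label)
```

The gate lets the MLP recover fine boundary detail exactly where the CNN is uncertain, which is the complementary behaviour the abstract describes.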

  2. Archeological treasures protection based on early forest wildfire multi-band imaging detection system

    NASA Astrophysics Data System (ADS)

    Gouverneur, B.; Verstockt, S.; Pauwels, E.; Han, J.; de Zeeuw, P. M.; Vermeiren, J.

    2012-10-01

Various visible and infrared cameras have been tested for the early detection of wildfires to protect archeological treasures. This analysis was possible thanks to the EU Firesense project (FP7-244088). Although visible cameras are low cost and give good results for daytime smoke detection, they fall short under poor visibility conditions. In order to improve the fire detection probability and reduce false alarms, several infrared bands were tested, ranging from the NIR to the LWIR. The SWIR and LWIR bands are helpful for locating the fire through smoke when there is a direct line of sight. Emphasis is also placed on physical and electro-optical system modeling for forest fire detection at short and longer ranges. Fusion of the three bands (visible, SWIR, LWIR) is discussed at the pixel level for image enhancement and fire detection.

  3. The biomechanics of a multilevel lumbar spine hybrid using nucleus replacement in conjunction with fusion.

    PubMed

    Dahl, Michael C; Ellingson, Arin M; Mehta, Hitesh P; Huelman, Justin H; Nuckley, David J

    2013-02-01

    Degenerative disc disease is commonly a multilevel pathology with varying deterioration severity. The use of fusion on multiple levels can significantly affect functionality and has been linked to persistent adjacent disc degeneration. A hybrid approach of fusion and nucleus replacement (NR) has been suggested as a solution for mildly degenerated yet painful levels adjacent to fusion. To compare the biomechanical metrics of different hybrid implant constructs, hypothesizing that an NR+fusion hybrid would be similar to a single-level fusion and perform more naturally compared with a two-level fusion. A cadaveric in vitro repeated-measures study was performed to evaluate a multilevel lumbar NR+fusion hybrid. Eight cadaveric spines (L3-S1) were tested in a Spine Kinetic Simulator (Instron, Norwood, MA, USA). Pure moments of 8 Nm were applied in flexion/extension, lateral bending, and axial rotation as well as compression loading. Specimens were tested intact; fused (using transforaminal lumbar interbody fusion instrumentation with posterior rods) at L5-S1; with a nuclectomy at L4-L5 including fusion at L5-S1; with NR at L4-L5 including fusion at L5-S1; and finally with a two-level fusion spanning L4-S1. Repeated-measures analysis of variance and corrected t tests were used to statistically compare outcomes. The NR+fusion hybrid and single-level fusion exhibited no statistical differences for range of motion (ROM), stiffness, neutral zone, and intradiscal pressure in all loading directions. Compared with two-level fusion, the hybrid affords the construct 41.9% more ROM on average. Two-level fusion stiffness was statistically higher than all other constructs and resulted in significantly lower ROM in flexion, extension, and lateral bending. The hybrid construct produced approximately half of the L3-L4 adjacent-level pressures as the two-level fusion case while generating similar pressures to the single-level fusion case. 
These data portend more natural functional outcomes and fewer adjacent disc complications for a multilevel NR+fusion hybrid compared with the classical two-level fusion. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. Crown-level tree species classification from AISA hyperspectral imagery using an innovative pixel-weighting approach

    NASA Astrophysics Data System (ADS)

    Liu, Haijian; Wu, Changshan

    2018-06-01

Crown-level tree species classification is a challenging task due to the spectral similarity among different tree species. Shadow, underlying objects, and other materials within a crown may decrease the purity of extracted crown spectra and further reduce classification accuracy. To address this problem, an innovative pixel-weighting approach was developed for tree species classification at the crown level. The method utilized high-density discrete LiDAR data for individual tree delineation and Airborne Imaging Spectrometer for Applications (AISA) hyperspectral imagery for pure crown-scale spectra extraction. Specifically, three steps were included: 1) individual tree identification using LiDAR data; 2) pixel-weighted representative crown spectra calculation using hyperspectral imagery, in which pixel-based illuminated-leaf fractions estimated using a linear spectral mixture analysis (LSMA) were employed as weighting factors; and 3) representative-spectra-based tree species classification performed through a support vector machine (SVM) approach. Analysis of the results suggests that the developed pixel-weighting approach (OA = 82.12%, Kc = 0.74) performed better than the treetop-based (OA = 70.86%, Kc = 0.58) and pixel-majority methods (OA = 72.26%, Kc = 0.62) in terms of classification accuracy. McNemar tests indicated that the differences in accuracy between the pixel-weighting and treetop-based approaches, as well as between the pixel-weighting and pixel-majority approaches, were statistically significant.
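Step 2, the pixel-weighted representative crown spectrum, is essentially a weighted average in which each pixel's LSMA-estimated illuminated-leaf fraction acts as its weight. A minimal sketch, with illustrative names:

```python
import numpy as np

def crown_spectrum(pixel_spectra, leaf_fractions):
    # pixel_spectra: (n_pixels, n_bands) spectra of one delineated crown.
    # leaf_fractions: (n_pixels,) illuminated-leaf fraction per pixel.
    # Normalized weighting lets shadowed/background pixels contribute less.
    w = np.asarray(leaf_fractions, dtype=float)
    w = w / w.sum()
    return w @ np.asarray(pixel_spectra, dtype=float)
```

A pixel with fraction 0 (pure shadow or background) drops out entirely, which is why the weighted spectrum is "purer" than a plain crown average.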

  5. A new high dynamic range ROIC with smart light intensity control unit

    NASA Astrophysics Data System (ADS)

    Yazici, Melik; Ceylan, Omer; Shafique, Atia; Abbasi, Shahbaz; Galioglu, Arman; Gurbuz, Yasar

    2017-05-01

This paper presents a new high dynamic range ROIC with a smart pixel that contains two pre-amplifiers controlled by a circuit inside the pixel. Each pixel automatically decides which pre-amplifier is used according to the incoming illumination level. Instead of a single pre-amplifier, two input pre-amplifiers, optimized for different signal levels, are placed inside each pixel, together with a smart circuit that selects the better input circuit for the incoming light level. In short, an individual pixel can select the input amplifier circuit that yields the highest SNR for the incoming signal level. A 32 × 32 ROIC prototype chip was designed in 0.18 μm CMOS technology to demonstrate the concept. The prototype is optimized for the NIR and SWIR bands. Instead of a detector, process-variation-optimized current sources are placed inside the ROIC. The chip achieves a minimum input-referred noise of 8.6 e- and a dynamic range of 98.9 dB, the highest reported in the literature for analog ROICs in the SWIR band. It operates at room temperature with a power consumption of 2.8 μW per pixel.

  6. A multi-temporal fusion-based approach for land cover mapping in support of nuclear incident response

    NASA Astrophysics Data System (ADS)

    Sah, Shagan

    An increasingly important application of remote sensing is to provide decision support during emergency response and disaster management efforts. Land cover maps constitute one such useful application product during disaster events; if generated rapidly after any disaster, such map products can contribute to the efficacy of the response effort. In light of recent nuclear incidents, e.g., after the earthquake/tsunami in Japan (2011), our research focuses on constructing rapid and accurate land cover maps of the impacted area in case of an accidental nuclear release. The methodology involves integration of results from two different approaches, namely coarse spatial resolution multi-temporal and fine spatial resolution imagery, to increase classification accuracy. Although advanced methods have been developed for classification using high spatial or temporal resolution imagery, only a limited amount of work has been done on fusion of these two remote sensing approaches. The presented methodology thus involves integration of classification results from two different remote sensing modalities in order to improve classification accuracy. The data used included RapidEye and MODIS scenes over the Nine Mile Point Nuclear Power Station in Oswego (New York, USA). The first step in the process was the construction of land cover maps from freely available, high temporal resolution, low spatial resolution MODIS imagery using a time-series approach. We used the variability in the temporal signatures among different land cover classes for classification. The time series-specific features were defined by various physical properties of a pixel, such as variation in vegetation cover and water content over time. The pixels were classified into four land cover classes - forest, urban, water, and vegetation - using Euclidean and Mahalanobis distance metrics. 
On the other hand, a high spatial resolution commercial satellite, such as RapidEye, can be tasked to capture images over the affected area in the case of a nuclear event. This imagery served as a second source of data to augment results from the time series approach. The classifications from the two approaches were integrated using an a posteriori probability-based fusion approach. This was done by establishing a relationship between the classes, obtained after classification of the two data sources. Despite the coarse spatial resolution of MODIS pixels, acceptable accuracies were obtained using time series features. The overall accuracies using the fusion-based approach were in the neighborhood of 80%, when compared with GIS data sets from New York State. This fusion thus contributed to classification accuracy refinement, with a few additional advantages, such as correction for cloud cover and providing for an approach that is robust against point-in-time seasonal anomalies, due to the inclusion of multi-temporal data. We concluded that this approach is capable of generating land cover maps of acceptable accuracy and rapid turnaround, which in turn can yield reliable estimates of crop acreage of a region. The final algorithm is part of an automated software tool, which can be used by emergency response personnel to generate a nuclear ingestion pathway information product within a few hours of data collection.
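The distance-based pixel classification described in the time-series step can be sketched as a nearest-centroid rule; with an identity inverse covariance the Mahalanobis distance reduces to the Euclidean case. The class statistics below are invented for illustration only:

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    # Mahalanobis distance of feature vector x from a class distribution.
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def classify(pixel, class_stats):
    # class_stats: {class_name: (mean_vector, inverse_covariance)}.
    # Assign the pixel to the class with the smallest distance.
    return min(class_stats,
               key=lambda c: mahalanobis(pixel, *class_stats[c]))
```

Supplying per-class inverse covariances (rather than the identity) is what lets Mahalanobis distance account for the different temporal variability of, say, vegetation versus water.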

  7. From data to information and knowledge for geospatial applications

    NASA Astrophysics Data System (ADS)

    Schenk, T.; Csatho, B.; Yoon, T.

    2006-12-01

An ever-increasing number of airborne and spaceborne data-acquisition missions with various sensors produce a glut of data. Sensory data rarely contain information in an explicit form that an application can use directly. The processing and analysis of data constitute a real bottleneck; therefore, automating the processes of gaining useful information and knowledge from the raw data is of paramount interest. This presentation is concerned with the transition from data to information and knowledge. By data we refer to the sensor output, and we note that data very rarely provide direct answers for applications. For example, a pixel in a digital image or a laser point from a LIDAR system (data) has no direct relationship with elevation changes of topographic surfaces or the velocity of a glacier (information, knowledge). We propose to employ the computer vision paradigm to extract information and knowledge as they pertain to a wide range of geoscience applications. After introducing the paradigm, we describe the major steps to be undertaken for extracting information and knowledge from sensory input data. Features play an important role in this process; thus we focus on extracting features and their perceptual organization into higher-order constructs. We demonstrate these concepts with imaging data and laser point clouds. The second part of the presentation addresses the problem of combining data obtained by different sensors. An absolute prerequisite for successful fusion is to establish a common reference frame. We elaborate on the concept of sensor-invariant features that allow the registration of such disparate data sets as aerial/satellite imagery, 3D laser point clouds, and multi/hyperspectral imagery. Fusion takes place on the data level (sensor registration) and on the information level. We show how fusion increases the degree of automation for reconstructing topographic surfaces. 
Moreover, fused information gained from the three sensors results in a more abstract surface representation with a rich set of explicit surface information that can be readily used by an analyst for applications such as change detection.

  8. Intershot Analysis of Flows in DIII-D

    NASA Astrophysics Data System (ADS)

    Meyer, W. H.; Allen, S. L.; Samuell, C. M.; Howard, J.

    2016-10-01

Analysis of the DIII-D flow diagnostic data requires demodulation of interference images and inversion of the resultant line-integrated emissivity and flow (phase) images. Four response matrices are pre-calculated: the emissivity line integral and the line integrals of the scalar product of the lines-of-sight with the orthogonal unit vectors of parallel flow. Equilibrium data determine the relative weight of the component matrices used in the final flow inversion matrix. Serial processing has been used for the lower divertor viewing flow camera's 800x600 pixel image. The full cross-section viewing camera will require parallel processing of its 2160x2560 pixel image. We will discuss using a POSIX thread pool and a Tesla K40c GPU in the processing of this data. Prepared by LLNL under Contract DE-AC52-07NA27344. This material is based upon work supported by the U.S. DOE, Office of Science, Fusion Energy Sciences.

  9. Feature level fusion of hand and face biometrics

    NASA Astrophysics Data System (ADS)

    Ross, Arun A.; Govindarajan, Rohin

    2005-03-01

Multibiometric systems utilize the evidence presented by multiple biometric sources (e.g., face and fingerprint, multiple fingers of a user, multiple matchers, etc.) in order to determine or verify the identity of an individual. Information from multiple sources can be consolidated at several distinct levels, including the feature extraction level, match score level and decision level. While fusion at the match score and decision levels has been extensively studied in the literature, fusion at the feature level is a relatively understudied problem. In this paper we discuss fusion at the feature level in 3 different scenarios: (i) fusion of PCA and LDA coefficients of face; (ii) fusion of LDA coefficients corresponding to the R,G,B channels of a face image; (iii) fusion of face and hand modalities. Preliminary results are encouraging and help in highlighting the pros and cons of performing fusion at this level. The primary motivation of this work is to demonstrate the viability of such a fusion and to underscore the importance of pursuing further research in this direction.
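Feature-level fusion in its simplest serial form concatenates the per-modality feature vectors after bringing them to a common scale; the min-max normalization used here is one common choice, not necessarily the paper's:

```python
import numpy as np

def fuse_features(face_vec, hand_vec):
    # Normalize each modality separately, then concatenate
    # (serial feature-level fusion).
    def norm(v):
        v = np.asarray(v, dtype=float)
        span = v.max() - v.min()
        return (v - v.min()) / span if span > 0 else np.zeros_like(v)
    return np.concatenate([norm(face_vec), norm(hand_vec)])
```

Per-modality normalization matters because raw PCA/LDA coefficients from different sources can differ by orders of magnitude, which would otherwise let one modality dominate any distance-based matcher applied to the fused vector.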

  10. Fully Convolutional Network-Based Multifocus Image Fusion.

    PubMed

    Guo, Xiaopeng; Nie, Rencan; Cao, Jinde; Zhou, Dongming; Qian, Wenhua

    2018-07-01

As the optical lenses of cameras always have a limited depth of field, the captured images of the same scene are not all in focus. Multifocus image fusion is an efficient technology that can synthesize an all-in-focus image from several partially focused images. Previous methods have accomplished the fusion task in spatial or transform domains; however, the fusion rules are always a problem in most methods. In this letter, from the aspect of focus region detection, we propose a novel multifocus image fusion method based on a fully convolutional network (FCN) learned from synthesized multifocus images. The primary novelty of this method is that the pixel-wise focus regions are detected through a learned FCN, and the entire image, not just image patches, is exploited to train the FCN. First, we synthesize 4500 pairs of multifocus images, by repeatedly applying a Gaussian filter to each image from PASCAL VOC 2012, to train the FCN. After that, a pair of source images is fed into the trained FCN, and two score maps indicating the focus property are generated. Next, an inversed score map is averaged with the other score map to produce an aggregative score map, which takes full advantage of the focus probabilities in the two score maps. We apply a fully connected conditional random field (CRF) to the aggregative score map to produce and refine a binary decision map for the fusion task. Finally, we exploit a weighted strategy based on the refined decision map to produce the fused image. To demonstrate the performance of the proposed method, we compare its fused results with several state-of-the-art methods on both a gray data set and a color data set. Experimental results show that the proposed method can achieve superior fusion performance in both human visual quality and objective assessment.
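The score-map aggregation and decision steps can be sketched as below. The CRF refinement is omitted here, and the 0.5 threshold and function names are assumptions:

```python
import numpy as np

def decision_map(score_a, score_b):
    # score_a marks focus in image A, score_b marks focus in image B
    # (both in [0, 1]).  Average A's score with the inverse of B's,
    # then threshold into a binary decision map.
    agg = (score_a + (1.0 - score_b)) / 2.0
    return agg > 0.5

def fuse(img_a, img_b, score_a, score_b):
    # Take each pixel from whichever source the decision map favors.
    m = decision_map(score_a, score_b)
    return np.where(m, img_a, img_b)
```

Averaging one map with the inverse of the other uses both networks' focus probabilities at once: a pixel is assigned to A when A looks focused *and* B looks defocused.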

  11. Novel fusion for hybrid optical/microcomputed tomography imaging based on natural light surface reconstruction and iterated closest point

    NASA Astrophysics Data System (ADS)

    Ning, Nannan; Tian, Jie; Liu, Xia; Deng, Kexin; Wu, Ping; Wang, Bo; Wang, Kun; Ma, Xibo

    2014-02-01

In mathematics, optical molecular imaging techniques, including bioluminescence tomography (BLT), fluorescence tomography (FMT) and Cerenkov luminescence tomography (CLT), are concerned with a similar inverse source problem. They all involve the reconstruction of the 3D location of single or multiple internal luminescent/fluorescent sources based on the 3D surface flux distribution. To achieve this, accurate fusion between 2D luminescent/fluorescent images and 3D structural images, which may be acquired from micro-CT, MRI or beam scanning, is extremely critical. However, the absence of a universal method that can effectively convert 2D optical information into 3D makes accurate fusion challenging. In this study, to improve fusion accuracy, a new fusion method for dual-modality tomography (luminescence/fluorescence and micro-CT) based on natural light surface reconstruction (NLSR) and the iterated closest point (ICP) algorithm is presented. It consists of an octree structure, an exact visual hull from marching cubes, and ICP. Different from conventional limited-projection methods, it performs 360° free-space registration and utilizes more luminescence/fluorescence distribution information from unlimited multi-orientation 2D optical images. A mouse-mimicking phantom (one XPM-2 Phantom Light Source, XENOGEN Corporation) and an in-vivo BALB/C mouse with one implanted luminescent light source were used to evaluate the performance of the new fusion method. Compared with conventional fusion methods, the average error of preset markers was improved by 0.3 and 0.2 pixels, respectively. After running the same 3D internal light source reconstruction algorithm on the BALB/C mouse, the distance error between the actual and reconstructed internal source was decreased by 0.19 mm.

  12. Multispectral image sharpening using wavelet transform techniques and spatial correlation of edges

    USGS Publications Warehouse

    Lemeshewsky, George P.; Schowengerdt, Robert A.

    2000-01-01

    Several reported image fusion or sharpening techniques are based on the discrete wavelet transform (DWT). The technique described here uses a pixel-based maximum selection rule to combine respective transform coefficients of lower spatial resolution near-infrared (NIR) and higher spatial resolution panchromatic (pan) imagery to produce a sharpened NIR image. Sharpening assumes a radiometric correlation between the spectral band images. However, there can be poor correlation, including edge contrast reversals (e.g., at soil-vegetation boundaries), between the fused images and, consequently, degraded performance. To improve sharpening, a local area-based correlation technique originally reported for edge comparison with image pyramid fusion is modified for application with the DWT process. Further improvements are obtained by using redundant, shift-invariant implementation of the DWT. Example images demonstrate the improvements in NIR image sharpening with higher resolution pan imagery.
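The pixel-based maximum selection rule can be illustrated with a single-level orthogonal Haar transform; the paper ultimately uses a redundant, shift-invariant DWT, and averaging the approximation bands is one common convention, so this is a simplified sketch rather than the reported method:

```python
import numpy as np

def haar2d(img):
    # One-level 2D Haar transform (image side lengths must be even).
    a = (img[0::2] + img[1::2]) / 2.0          # row averages
    d = (img[0::2] - img[1::2]) / 2.0          # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    # Exact inverse of haar2d.
    h, w = LL.shape
    a = np.zeros((h, 2 * w)); d = np.zeros((h, 2 * w))
    a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH; d[:, 1::2] = HL - HH
    img = np.zeros((2 * h, 2 * w))
    img[0::2] = a + d; img[1::2] = a - d
    return img

def wavelet_fuse(x, y):
    # Average the approximation bands; for each detail band keep the
    # coefficient with the larger magnitude (maximum selection rule).
    cx, cy = haar2d(x), haar2d(y)
    LL = (cx[0] + cy[0]) / 2.0
    details = [np.where(np.abs(cxi) >= np.abs(cyi), cxi, cyi)
               for cxi, cyi in zip(cx[1:], cy[1:])]
    return ihaar2d(LL, *details)
```

Because the transform is exactly invertible, fusing an image with itself returns the image unchanged, a quick sanity check for any coefficient-selection rule.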

  13. Investigating the Importance of Stereo Displays for Helicopter Landing Simulation

    DTIC Science & Technology

    2016-08-11

    visualization. The two instances of X Plane® were implemented using two separate PCs, each incorporating Intel i7 processors and Nvidia Quadro K4200... Nvidia GeForce GTX 680 graphics card was used to administer the stereo acuity and fusion range tests. The tests were displayed on an Asus VG278HE 3D... monitor with 1920x1080 pixels that was compatible with Nvidia 3D Vision2 and that used active shutter glasses. At a 1-m viewing distance, the

  14. Automated identification of retinal vessels using a multiscale directional contrast quantification (MDCQ) strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhen, Yi; Zhang, Xinyuan; Wang, Ningli, E-mail: wningli@vip.163.com, E-mail: puj@upmc.edu

    2014-09-15

    Purpose: A novel algorithm is presented to automatically identify the retinal vessels depicted in color fundus photographs. Methods: The proposed algorithm quantifies the contrast of each pixel in retinal images at multiple scales and fuses the resulting contrast images in a progressive manner by leveraging their spatial difference and continuity. The multiscale strategy is to deal with the variety of retinal vessels in width, intensity, resolution, and orientation; and the progressive fusion is to combine consequent images and meanwhile avoid a sudden fusion of image noise and/or artifacts in space. To quantitatively assess the performance of the algorithm, we tested it on three publicly available databases, namely, DRIVE, STARE, and HRF. The agreement between the computer results and the manual delineation in these databases was quantified by computing their overlap in both area and length (centerline). The measures include sensitivity, specificity, and accuracy. Results: For the DRIVE database, the sensitivities in identifying vessels in area and length were around 90% and 70%, respectively, the accuracy in pixel classification was around 99%, and the precisions in terms of both area and length were around 94%. For the STARE database, the sensitivities in identifying vessels were around 90% in area and 70% in length, and the accuracy in pixel classification was around 97%. For the HRF database, the sensitivities in identifying vessels were around 92% in area and 83% in length for the healthy subgroup, around 92% in area and 75% in length for the glaucomatous subgroup, and around 91% in area and 73% in length for the diabetic retinopathy subgroup. For all three subgroups, the accuracy was around 98%. Conclusions: The experimental results demonstrate that the developed algorithm is capable of identifying retinal vessels depicted in color fundus photographs in a relatively reliable manner.

  15. A 256×256 low-light-level CMOS imaging sensor with digital CDS

    NASA Astrophysics Data System (ADS)

    Zou, Mei; Chen, Nan; Zhong, Shengyou; Li, Zhengfen; Zhang, Jicun; Yao, Li-bin

    2016-10-01

    In order to achieve high sensitivity for low-light-level CMOS image sensors (CIS), a capacitive transimpedance amplifier (CTIA) pixel circuit with a small integration capacitor is used. As the pixel and the column area are highly constrained, it is difficult to achieve analog correlated double sampling (CDS) to remove the noise for low-light-level CIS. So a digital CDS is adopted, which realizes the subtraction between the reset signal and the pixel signal off-chip. The pixel reset noise and part of the column fixed-pattern noise (FPN) can be greatly reduced. A 256×256 CIS with a CTIA array and digital CDS is implemented in a 0.35 μm CMOS technology. The chip size is 7.7 mm × 6.75 mm, and the pixel size is 15 μm × 15 μm with a fill factor of 20.6%. The measured pixel noise is 24 LSB RMS with digital CDS under dark conditions, a 7.8× reduction compared to the image sensor without digital CDS. Running at 7 fps, this low-light-level CIS can capture recognizable images at illumination down to 0.1 lux.
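    The off-chip subtraction behind digital CDS can be sketched as follows (an illustrative model only; the per-column offset stands in for the column FPN component that is common to both reads):

```python
import numpy as np

def digital_cds(reset_frame, signal_frame):
    # off-chip correlated double sampling: subtracting the reset read from
    # the signal read cancels offsets common to both reads (e.g. column FPN)
    return signal_frame.astype(np.int32) - reset_frame.astype(np.int32)

# demo: a per-column fixed-pattern offset cancels exactly
rng = np.random.default_rng(0)
col_fpn = rng.integers(0, 50, size=(1, 256))      # column-wise offsets, in LSB
scene = np.full((256, 256), 100)                  # true photo signal, in LSB
reset = np.zeros((256, 256), dtype=np.int32) + col_fpn
signal = scene + col_fpn
frame = digital_cds(reset, signal)                # offsets removed
```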

  16. Miniaturized LEDs for flat-panel displays

    NASA Astrophysics Data System (ADS)

    Radauscher, Erich J.; Meitl, Matthew; Prevatte, Carl; Bonafede, Salvatore; Rotzoll, Robert; Gomez, David; Moore, Tanya; Raymond, Brook; Cok, Ronald; Fecioru, Alin; Trindade, António Jose; Fisher, Brent; Goodwin, Scott; Hines, Paul; Melnik, George; Barnhill, Sam; Bower, Christopher A.

    2017-02-01

    Inorganic light emitting diodes (LEDs) serve as bright pixel-level emitters in displays, from indoor/outdoor video walls with pixel sizes ranging from one to thirty millimeters to micro displays with more than one thousand pixels per inch. Pixel sizes that fall between those ranges, roughly 50 to 500 microns, are some of the most commercially significant ones, including flat panel displays used in smart phones, tablets, and televisions. Flat panel displays that use inorganic LEDs as pixel level emitters (μILED displays) can offer levels of brightness, transparency, and functionality that are difficult to achieve with other flat panel technologies. Cost-effective production of μILED displays requires techniques for precisely arranging sparse arrays of extremely miniaturized devices on a panel substrate, such as transfer printing with an elastomer stamp. Here we present lab-scale demonstrations of transfer printed μILED displays and the processes used to make them. Demonstrations include passive matrix μILED displays that use conventional off-the-shelf drive ASICs and active matrix μILED displays that use miniaturized pixel-level control circuits from CMOS wafers. We present a discussion of key considerations in the design and fabrication of highly miniaturized emitters for μILED displays.

  17. Fusion of light-field and photogrammetric surface form data

    NASA Astrophysics Data System (ADS)

    Sims-Waterhouse, Danny; Piano, Samanta; Leach, Richard K.

    2017-08-01

    Photogrammetry based systems are able to produce 3D reconstructions of an object given a set of images taken from different orientations. In this paper, we implement a light-field camera within a photogrammetry system in order to capture additional depth information, as well as the photogrammetric point cloud. Compared to a traditional camera that only captures the intensity of the incident light, a light-field camera also provides angular information for each pixel. In principle, this additional information allows 2D images to be reconstructed at a given focal plane, and hence a depth map can be computed. Through the fusion of light-field and photogrammetric data, we show that it is possible to improve the measurement uncertainty of a millimetre scale 3D object, compared to that from the individual systems. By imaging a series of test artefacts from various positions, individual point clouds were produced from depth-map information and triangulation of corresponding features between images. Using both measurements, data fusion methods were implemented in order to provide a single point cloud with reduced measurement uncertainty.

  18. Spatial Aspects of Multi-Sensor Data Fusion: Aerosol Optical Thickness

    NASA Technical Reports Server (NTRS)

    Leptoukh, Gregory; Zubko, V.; Gopalan, A.

    2007-01-01

    The Goddard Earth Sciences Data and Information Services Center (GES DISC) investigated the applicability and limitations of combining multi-sensor data through data fusion, to increase the usefulness of the multitude of NASA remote sensing data sets, and as part of a larger effort to integrate this capability in the GES-DISC Interactive Online Visualization and Analysis Infrastructure (Giovanni). This initial study focused on merging daily mean Aerosol Optical Thickness (AOT), as measured by the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the Terra and Aqua satellites, to increase spatial coverage and produce complete fields to facilitate comparison with models and station data. The fusion algorithm used the maximum likelihood technique to merge the pixel values where available. The algorithm was applied to two regional AOT subsets (with mostly regular and irregular gaps, respectively) and a set of AOT fields that differed only in the size and location of artificially created gaps. The Cumulative Semivariogram (CSV) was found to be sensitive to the spatial distribution of gap areas and, thus, useful for assessing the sensitivity of the fused data to spatial gaps.
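    For independent Gaussian errors, the maximum likelihood merge of overlapping pixel values reduces to inverse-variance weighting. A minimal sketch (array names and the NaN-for-gap convention are assumptions, not the Giovanni implementation):

```python
import numpy as np

def ml_merge(x1, s1, x2, s2):
    # maximum-likelihood (inverse-variance weighted) merge of two independent
    # Gaussian measurements; NaN marks a missing pixel, which gets zero weight
    w1 = np.where(np.isnan(x1), 0.0, 1.0 / s1 ** 2)
    w2 = np.where(np.isnan(x2), 0.0, 1.0 / s2 ** 2)
    num = w1 * np.nan_to_num(x1) + w2 * np.nan_to_num(x2)
    w = w1 + w2
    # pixels missing from both inputs stay NaN (a remaining gap)
    return np.where(w > 0, num / np.maximum(w, 1e-30), np.nan)

# example: Terra and Aqua daily-mean AOT with gaps in different places
terra = np.array([0.2, np.nan, 0.4])
aqua  = np.array([0.3, 0.25, np.nan])
merged = ml_merge(terra, 0.1, aqua, 0.1)
```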

  19. Effects of spatial resolution ratio in image fusion

    USGS Publications Warehouse

    Ling, Y.; Ehlers, M.; Usery, E.L.; Madden, M.

    2008-01-01

    In image fusion, the spatial resolution ratio can be defined as the ratio between the spatial resolution of the high-resolution panchromatic image and that of the low-resolution multispectral image. This paper attempts to assess the effects of the spatial resolution ratio of the input images on the quality of the fused image. Experimental results indicate that a spatial resolution ratio of 1:10 or higher is desired for optimal multisensor image fusion provided the input panchromatic image is not downsampled to a coarser resolution. Due to the synthetic pixels generated from resampling, the quality of the fused image decreases as the spatial resolution ratio decreases (e.g. from 1:10 to 1:30). However, even with a spatial resolution ratio as small as 1:30, the quality of the fused image is still better than the original multispectral image alone for feature interpretation. In cases where the spatial resolution ratio is too small (e.g. 1:30), to obtain better spectral integrity of the fused image, one may downsample the input high-resolution panchromatic image to a slightly lower resolution before fusing it with the multispectral image.
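    The downsampling step mentioned above can be sketched as block averaging (an illustrative stand-in for whichever resampling kernel is actually used):

```python
import numpy as np

def block_average(img, factor):
    # downsample by averaging non-overlapping factor x factor blocks,
    # cropping any remainder rows/columns
    h, w = img.shape
    h2, w2 = h - h % factor, w - w % factor
    return (img[:h2, :w2]
            .reshape(h2 // factor, factor, w2 // factor, factor)
            .mean(axis=(1, 3)))
```

For example, downsampling a 1 m pan image by a factor of 3 before fusing with a 30 m multispectral image moves the resolution ratio from 1:30 to 1:10.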

  20. Joint image registration and fusion method with a gradient strength regularization

    NASA Astrophysics Data System (ADS)

    Lidong, Huang; Wei, Zhao; Jun, Wang

    2015-05-01

    Image registration is an essential process for image fusion, and fusion performance can be used to evaluate registration accuracy. We propose a maximum likelihood (ML) approach to joint image registration and fusion, instead of treating them as two independent processes in the conventional way. To improve the visual quality of a fused image, a gradient strength (GS) regularization is introduced in the cost function of ML. The GS of the fused image is controllable by setting the target GS value in the regularization term. This is useful because a larger target GS brings a clearer fused image, while a smaller target GS makes the fused image smoother and thus restrains noise. Hence, the subjective quality of the fused image can be improved whether or not the source images are polluted by noise. We can obtain the fused image and registration parameters successively by minimizing the cost function using an iterative optimization method. Experimental results show that our method is effective with translation, rotation, and scale parameters in the range of [-2.0, 2.0] pixels, [-1.1 deg, 1.1 deg], and [0.95, 1.05], respectively, and noise variances smaller than 300. It is also demonstrated that our method yields a more visually pleasing fused image and higher registration accuracy compared with a state-of-the-art algorithm.
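    The paper's exact cost function is not reproduced here, but a schematic ML data term with a GS regularizer might look like the following (the choice of data term, the GS definition, and all names are assumptions for illustration):

```python
import numpy as np

def gradient_strength(img):
    # mean gradient magnitude from forward differences (one GS definition)
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return np.mean(np.hypot(gx[:-1, :], gy[:, :-1]))

def fused_cost(f, sources, target_gs, lam=1.0):
    # Gaussian-noise data term (mean squared residual against each
    # registered source) plus a penalty pulling the fused image's GS
    # toward the user-set target value
    data = sum(np.mean((f - s) ** 2) for s in sources)
    return data + lam * (gradient_strength(f) - target_gs) ** 2
```

Minimizing such a cost alternately over the fused image and the registration parameters reflects the joint scheme described above.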

  1. Five major controversial issues about fusion level selection in corrective surgery for adolescent idiopathic scoliosis: a narrative review.

    PubMed

    Lee, Choon Sung; Hwang, Chang Ju; Lee, Dong-Ho; Cho, Jae Hwan

    2017-07-01

    Shoulder imbalance, coronal decompensation, and adding-on phenomenon following corrective surgery in patients with adolescent idiopathic scoliosis are known to be related to the fusion level selected. Although many studies have assessed the appropriate selection of the proximal and distal fusion level, no definite conclusions have been drawn thus far. We aimed to assess the problems with fusion level selection for corrective surgery in patients with adolescent idiopathic scoliosis, and to enhance understanding about these problems. This study is a narrative review. We conducted a literature search of fusion level selection in corrective surgery for adolescent idiopathic scoliosis. Accordingly, we selected and reviewed five debatable topics related to fusion level selection: (1) selective thoracic fusion; (2) selective thoracolumbar-lumbar (TL-L) fusion; (3) adding-on phenomenon; (4) distal fusion level selection for major TL-L curves; and (5) proximal fusion level selection and shoulder imbalance. Selective fusion can be chosen in specific curve types, although there is a risk of coronal decompensation or adding-on phenomenon. Generally, wider indications for selective fusions are usually associated with more frequent complications. Despite the determination of several indications for selective fusion to avoid such complications, no clear guidelines have been established. Although authors have suggested various criteria to prevent the adding-on phenomenon, no consensus has been reached on the appropriate selection of lower instrumented vertebra. The fusion level selection for major TL-L curves primarily focuses on whether distal fusion can terminate at L3, a topic that remains unclear. Furthermore, because of the presence of several related factors and complications, proximal level selection and shoulder imbalance has been constantly debated and remains controversial from its etiology to its prevention. 
Although several difficult problems in the diagnosis and treatment of adolescent idiopathic scoliosis have been resolved by understanding its mechanism and via technical advancement, no definite guideline for fusion level selection has been established. A review of five major controversial issues about fusion level selection could provide better understanding of adolescent idiopathic scoliosis. We believe that a thorough validation study of the abovementioned controversial issues can help address them.

  2. Introducing two Random Forest based methods for cloud detection in remote sensing images

    NASA Astrophysics Data System (ADS)

    Ghasemian, Nafiseh; Akhoondzadeh, Mehdi

    2018-07-01

    Cloud detection is a necessary phase in satellite image processing to retrieve atmospheric and lithospheric parameters. Some cloud detection methods based on the Random Forest (RF) model have been proposed, but they do not consider both the spectral and textural characteristics of the image. Furthermore, they have not been tested in the presence of snow/ice. In this paper, we introduce two RF-based algorithms, Feature Level Fusion Random Forest (FLFRF) and Decision Level Fusion Random Forest (DLFRF), which incorporate visible, infrared (IR), and thermal spectral and textural features (FLFRF), including the Gray Level Co-occurrence Matrix (GLCM) and Robust Extended Local Binary Pattern (RELBP_CI), or visible, IR, and thermal classifiers (DLFRF), for highly accurate cloud detection on remote sensing images. FLFRF first fuses the visible, IR, and thermal features. Thereafter, it uses the RF model to classify pixels as cloud, snow/ice, and background, or as thick cloud, thin cloud, and background. DLFRF considers visible, IR, and thermal features (both spectral and textural) separately and feeds each set of features to the RF model. Then, it retains the vote matrix of each run of the model. Finally, it fuses the classifiers using the majority vote method. To demonstrate the effectiveness of the proposed algorithms, 10 Terra MODIS and 15 Landsat 8 OLI/TIRS images with different spatial resolutions are used in this paper. Quantitative analyses are based on manually selected ground truth data. Results show that after adding RELBP_CI to the input feature set, cloud detection accuracy improves. Also, the average cloud kappa values of FLFRF and DLFRF on MODIS images (1 and 0.99) are higher than those of other machine learning methods: Linear Discriminant Analysis (LDA), Classification And Regression Tree (CART), K Nearest Neighbor (KNN), and Support Vector Machine (SVM) (0.96). The average snow/ice kappa values of FLFRF and DLFRF on MODIS images (1 and 0.85) are higher than those of other traditional methods.
The quantitative values on Landsat 8 images show similar trend. Consequently, while SVM and K-nearest neighbor show overestimation in predicting cloud and snow/ice pixels, our Random Forest (RF) based models can achieve higher cloud, snow/ice kappa values on MODIS and thin cloud, thick cloud and snow/ice kappa values on Landsat 8 images. Our algorithms predict both thin and thick cloud on Landsat 8 images while the existing cloud detection algorithm, Fmask cannot discriminate them. Compared to the state-of-the-art methods, our algorithms have acquired higher average cloud and snow/ice kappa values for different spatial resolutions.
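    The decision-level step of DLFRF, fusing per-classifier vote matrices by majority vote, can be sketched as follows (shapes and names are illustrative, not the authors' code):

```python
import numpy as np

def decision_level_fusion(vote_matrices):
    # vote_matrices: list of (n_pixels, n_classes) vote counts, one per
    # classifier run (e.g. visible, IR, thermal); votes are summed and the
    # class with the most total votes wins per pixel
    total = np.sum(vote_matrices, axis=0)
    return total.argmax(axis=1)

# example: three classifiers voting over two pixels and two classes
visible = np.array([[5, 1], [2, 4]])
ir      = np.array([[4, 2], [1, 5]])
thermal = np.array([[1, 5], [3, 3]])
labels = decision_level_fusion([visible, ir, thermal])
```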

  3. Face-iris multimodal biometric scheme based on feature level fusion

    NASA Astrophysics Data System (ADS)

    Huo, Guang; Liu, Yuanning; Zhu, Xiaodong; Dong, Hongxing; He, Fei

    2015-11-01

    Unlike score level fusion, feature level fusion demands all the features extracted from unimodal traits with high distinguishability, as well as homogeneity and compatibility, which is difficult to achieve. Therefore, most multimodal biometric research focuses on score level fusion, whereas few investigate feature level fusion. We propose a face-iris recognition method based on feature level fusion. We build a special two-dimensional-Gabor filter bank to extract local texture features from face and iris images, and then transform them by histogram statistics into an energy-orientation variance histogram feature with lower dimensions and higher distinguishability. Finally, through a fusion-recognition strategy based on principal components analysis and support vector machine (FRSPS), feature level fusion and one-to-n identification are accomplished. The experimental results demonstrate that this method can not only effectively extract face and iris features but also provide higher recognition accuracy. Compared with some state-of-the-art fusion methods, the proposed method has a significant performance advantage.
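    Serial feature-level fusion followed by a PCA projection, the front half of an FRSPS-style pipeline, can be sketched as below (the Gabor feature extraction and SVM stages are omitted; the z-score normalization is an assumed choice for making the modalities compatible):

```python
import numpy as np

def fuse_features(face_feat, iris_feat):
    # z-score normalize each modality before serial concatenation, so
    # neither modality dominates the fused vector by scale alone
    norm = lambda v: (v - v.mean()) / (v.std() + 1e-9)
    return np.concatenate([norm(face_feat), norm(iris_feat)])

def pca_project(X, k):
    # project samples (rows of X) onto the first k principal components
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```

A classifier (the SVM in FRSPS) would then be trained on the reduced, fused vectors.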

  4. Optimal scan strategy for mega-pixel and kilo-gray-level OLED-on-silicon microdisplay.

    PubMed

    Ji, Yuan; Ran, Feng; Ji, Weigui; Xu, Meihua; Chen, Zhangjing; Jiang, Yuxi; Shen, Weixin

    2012-06-10

    The digital pixel driving scheme makes organic light-emitting diode (OLED) microdisplays more immune to pixel luminance variations and simplifies the circuit architecture and design flow compared to the analog pixel driving scheme. Additionally, it is easily applied in full digital systems. However, the data bottleneck becomes a notable problem as the number of pixels and gray levels grows dramatically. This paper discusses the ability of digital driving to achieve kilo-gray levels for mega-pixel displays. The optimal scan strategy is proposed for creating ultra-high gray levels and increasing light efficiency and contrast ratio. Two correction schemes are discussed to improve the gray level linearity. A 1280×1024×3 OLED-on-silicon microdisplay, with 4096 gray levels, is designed based on the optimal scan strategy. The circuit driver is integrated in the silicon backplane chip in the 0.35 μm 3.3 V-6 V dual-voltage, one-polysilicon-layer, four-metal-layer (1P4M) complementary metal-oxide semiconductor (CMOS) process with custom top metal. The design aspects of the optimal scan controller are also discussed. The test results show the gray level linearity of the correction schemes for the optimal scan strategy is acceptable to the human eye.
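    Digital gray-level generation via binary-weighted subfields can be sketched generically as below (this is the textbook scheme, not the paper's optimal scan strategy; with 12 subfields it yields the 4096 gray levels mentioned above):

```python
def subfield_times(bits, t_lsb=1.0):
    # binary-weighted scan: subfield k is lit for 2**k times the LSB period
    return [t_lsb * (1 << k) for k in range(bits)]

def luminance(gray, bits, t_lsb=1.0):
    # total on-time for a gray code: sum the subfield times whose bit is set,
    # so perceived luminance is linear in the digital gray value
    times = subfield_times(bits, t_lsb)
    return sum(t for k, t in enumerate(times) if (gray >> k) & 1)
```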

  5. CMOS image sensors: State-of-the-art

    NASA Astrophysics Data System (ADS)

    Theuwissen, Albert J. P.

    2008-09-01

    This paper gives an overview of the state-of-the-art of CMOS image sensors. The main focus is put on the shrinkage of the pixels: what is the effect on the performance characteristics of the imagers and on the various physical parameters of the camera? How is the CMOS pixel architecture optimized to cope with the negative performance effects of the ever-shrinking pixel size? On the other hand, the smaller dimensions in CMOS technology allow further integration on the column level and even on the pixel level. This will make CMOS imagers even smarter than they already are.

  6. Adaptive pixel-to-pixel projection intensity adjustment for measuring a shiny surface using orthogonal color fringe pattern projection

    NASA Astrophysics Data System (ADS)

    Chen, Chao; Gao, Nan; Wang, Xiangjun; Zhang, Zonghua

    2018-05-01

    Three-dimensional (3D) shape measurement based on fringe pattern projection techniques has been commonly used in various fields. One of the remaining challenges in fringe pattern projection is that camera sensor saturation may occur if there is a large range of reflectivity variation across the surface that causes measurement errors. To overcome this problem, a novel fringe pattern projection method is proposed to avoid image saturation and maintain high-intensity modulation for measuring shiny surfaces by adaptively adjusting the pixel-to-pixel projection intensity according to the surface reflectivity. First, three sets of orthogonal color fringe patterns and a sequence of uniform gray-level patterns with different gray levels are projected onto a measured surface by a projector. The patterns are deformed with respect to the object surface and captured by a camera from a different viewpoint. Subsequently, the optimal projection intensity at each pixel is determined by fusing different gray levels and transforming the camera pixel coordinate system into the projector pixel coordinate system. Finally, the adapted fringe patterns are created and used for 3D shape measurement. Experimental results on a flat checkerboard and shiny objects demonstrate that the proposed method can measure shiny surfaces with high accuracy.
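    The per-pixel selection of the highest non-saturating projection level can be sketched as below (the saturation threshold is an assumed value, and the camera-to-projector coordinate transform described in the abstract is omitted):

```python
import numpy as np

def optimal_intensity(captures, levels, sat=250):
    # captures: (n_levels, H, W) camera images of the surface under uniform
    # gray-level projections; pick, per pixel, the highest projector level
    # that stays below camera saturation
    ok = captures < sat                                   # True = unsaturated
    idx = ok.shape[0] - 1 - np.argmax(ok[::-1], axis=0)   # last True index
    idx[~ok.any(axis=0)] = 0                              # always saturated:
    return np.asarray(levels)[idx]                        # fall back to lowest
```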

  7. FT-MIR and NIR spectral data fusion: a synergetic strategy for the geographical traceability of Panax notoginseng.

    PubMed

    Li, Yun; Zhang, Jin-Yu; Wang, Yuan-Zhong

    2018-01-01

    Three data fusion strategies (low-level, mid-level, and high-level) combined with a multivariate classification algorithm (random forest, RF) were applied to authenticate the geographical origins of Panax notoginseng collected from five regions of Yunnan province in China. In low-level fusion, the original data from two spectra (Fourier transform mid-IR spectrum and near-IR spectrum) were directly concatenated into a new matrix, which then was applied for the classification. Mid-level fusion was the strategy that inputted variables extracted from the spectral data into an RF classification model. The extracted variables were processed by iterative variable selection of the RF model and principal component analysis. The use of high-level fusion combined the decision making of each spectroscopic technique and resulted in an ensemble decision. The results showed that the mid-level and high-level data fusion took advantage of the information synergy of the two spectroscopic techniques and had better classification performance than independent decision making. High-level data fusion is the most effective strategy since the classification results are better than those of the other fusion strategies: accuracy rates ranged between 93% and 96% for the low-level data fusion, between 95% and 98% for the mid-level data fusion, and between 98% and 100% for the high-level data fusion. In conclusion, the high-level data fusion strategy for Fourier transform mid-IR and near-IR spectra can be used as a reliable tool for correct geographical identification of P. notoginseng. Graphical abstract The analytical steps of Fourier transform mid-IR and near-IR spectral data fusion for the geographical traceability of Panax notoginseng.

  8. Neuromorphic infrared focal plane performs sensor fusion on-plane local-contrast-enhancement spatial and temporal filtering

    NASA Astrophysics Data System (ADS)

    Massie, Mark A.; Woolaway, James T., II; Curzan, Jon P.; McCarley, Paul L.

    1993-08-01

    An infrared focal plane has been simulated, designed and fabricated which mimics the form and function of the vertebrate retina. The `Neuromorphic' focal plane has the capability of performing pixel-based sensor fusion and real-time local contrast enhancement, much like the response of the human eye. The device makes use of an indium antimonide detector array with a 3 - 5 micrometers spectral response, and a switched capacitor resistive network to compute a real-time 2D spatial average. This device permits the summation of other sensor outputs to be combined on-chip with the infrared detections of the focal plane itself. The resulting real-time analog processed information thus represents the combined information of many sensors with the advantage that analog spatial and temporal signal processing is performed at the focal plane. A Gaussian subtraction method is used to produce the pixel output which when displayed produces an image with enhanced edges, representing spatial and temporal derivatives in the scene. The spatial and temporal responses of the device are tunable during operation, permitting the operator to `peak up' the response of the array to spatial and temporally varying signals. Such an array adapts to ambient illumination conditions without loss of detection performance. This paper reviews the Neuromorphic infrared focal plane from initial operational simulations to detailed design characteristics, and concludes with a presentation of preliminary operational data for the device as well as videotaped imagery.
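    The Gaussian-subtraction (center-surround) output can be sketched with a simple local average in place of the switched-capacitor resistive-network blur (an illustrative approximation, not the focal-plane circuit):

```python
import numpy as np

def local_average(img, r):
    # mean over a (2r+1) x (2r+1) neighbourhood via shifted copies
    # (np.roll wraps at the borders, which is fine for this illustration)
    acc = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / (2 * r + 1) ** 2

def center_surround(img, r=2):
    # Gaussian-subtraction analogue: pixel value minus local spatial average,
    # which enhances edges and suppresses uniform background
    return img - local_average(img, r)
```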

  9. Sub-pixel image classification for forest types in East Texas

    NASA Astrophysics Data System (ADS)

    Westbrook, Joey

    Sub-pixel classification is the extraction of information about the proportion of individual materials of interest within a pixel. Landcover classification at the sub-pixel scale provides more discrimination than traditional per-pixel multispectral classifiers for pixels where the material of interest is mixed with other materials. It allows for the un-mixing of pixels to show the proportion of each material of interest. The materials of interest for this study are pine, hardwood, mixed forest and non-forest. The goal of this project was to perform a sub-pixel classification, which allows a pixel to have multiple labels, and compare the result to a traditional supervised classification, which allows a pixel to have only one label. The satellite image used was a Landsat 5 Thematic Mapper (TM) scene of the Stephen F. Austin Experimental Forest in Nacogdoches County, Texas and the four cover type classes are pine, hardwood, mixed forest and non-forest. Once classified, a multi-layer raster dataset was created that comprised four raster layers where each layer showed the percentage of that cover type within the pixel area. Percentage cover type maps were then produced and the accuracy of each was assessed using a fuzzy error matrix for the sub-pixel classifications, and the results were compared to the supervised classification in which a traditional error matrix was used. The overall accuracy of the sub-pixel classification using the aerial photo for both training and reference data had the highest (65% overall) out of the three sub-pixel classifications. This was understandable because the analyst can visually observe the cover types actually on the ground for training data and reference data, whereas using the FIA (Forest Inventory and Analysis) plot data, the analyst must assume that an entire pixel contains the exact percentage of a cover type found in a plot. 
An increase in accuracy was found after reclassifying each sub-pixel classification from nine classes with 10 percent interval each to five classes with 20 percent interval each. When compared to the supervised classification which has a satisfactory overall accuracy of 90%, none of the sub-pixel classification achieved the same level. However, since traditional per-pixel classifiers assign only one label to pixels throughout the landscape while sub-pixel classifications assign multiple labels to each pixel, the traditional 85% accuracy of acceptance for pixel-based classifications should not apply to sub-pixel classifications. More research is needed in order to define the level of accuracy that is deemed acceptable for sub-pixel classifications.
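    The un-mixing of a pixel into cover-type proportions can be sketched as a linear least-squares abundance estimate (a simplified stand-in for the sub-pixel classifier used in the study; the endmember matrix here is hypothetical):

```python
import numpy as np

def unmix(pixel, E):
    # pixel: (n_bands,) spectrum; E: (n_bands, n_materials) endmember spectra.
    # Least-squares abundance estimate, clipped to be non-negative and
    # renormalized so the fractions sum to one.
    f, *_ = np.linalg.lstsq(E, pixel, rcond=None)
    f = np.clip(f, 0.0, None)
    return f / f.sum()

# example: a pixel that is 60% material A and 40% material B
E = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
pixel = E @ np.array([0.6, 0.4])
fractions = unmix(pixel, E)
```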

  10. Enhancing hyperspectral spatial resolution using multispectral image fusion: A wavelet approach

    NASA Astrophysics Data System (ADS)

    Jazaeri, Amin

    High spectral and spatial resolution images have a significant impact in remote sensing applications. Because both spatial and spectral resolutions of spaceborne sensors are fixed by design and it is not possible to further increase the spatial or spectral resolution, techniques such as image fusion must be applied to achieve such goals. This dissertation introduces the concept of wavelet fusion between hyperspectral and multispectral sensors in order to enhance the spectral and spatial resolution of a hyperspectral image. To test the robustness of this concept, images from Hyperion (hyperspectral sensor) and Advanced Land Imager (multispectral sensor) were first co-registered and then fused using different wavelet algorithms. A regression-based fusion algorithm was also implemented for comparison purposes. The results show that the fused images using a combined bi-linear wavelet-regression algorithm have less error than other methods when compared to the ground truth. In addition, a combined regression-wavelet algorithm shows more immunity to misalignment of the pixels due to the lack of proper registration. The quantitative measures of average mean square error show that the performance of wavelet-based methods degrades when the spatial resolution of hyperspectral images becomes eight times less than its corresponding multispectral image. Regardless of what method of fusion is utilized, the main challenge in image fusion is image registration, which is also a very time intensive process. Because the combined regression wavelet technique is computationally expensive, a hybrid technique based on regression and wavelet methods was also implemented to decrease computational overhead. However, the gain in faster computation was offset by the introduction of more error in the outcome. 
The secondary objective of this dissertation is to examine the feasibility and sensor requirements for image fusion for future NASA missions in order to be able to perform onboard image fusion. In this process, the main challenge of image registration was resolved by registering the input images using transformation matrices of previously acquired data. The composite image resulted from the fusion process remarkably matched the ground truth, indicating the possibility of real time onboard fusion processing.

  11. Fusion of LIDAR Data and Multispectral Imagery for Effective Building Detection Based on Graph and Connected Component Analysis

    NASA Astrophysics Data System (ADS)

    Gilani, S. A. N.; Awrangjeb, M.; Lu, G.

    2015-03-01

    Building detection in complex scenes is a non-trivial exercise due to building shape variability, irregular terrain, shadows, and occlusion by highly dense vegetation. In this research, we present a graph based algorithm, which combines multispectral imagery and airborne LiDAR information to completely delineate the building boundaries in urban and densely vegetated area. In the first phase, LiDAR data is divided into two groups: ground and non-ground data, using ground height from a bare-earth DEM. A mask, known as the primary building mask, is generated from the non-ground LiDAR points where the black region represents the elevated area (buildings and trees), while the white region describes the ground (earth). The second phase begins with the process of Connected Component Analysis (CCA) where the number of objects present in the test scene are identified followed by initial boundary detection and labelling. Additionally, a graph from the connected components is generated, where each black pixel corresponds to a node. An edge of a unit distance is defined between a black pixel and a neighbouring black pixel, if any. An edge does not exist from a black pixel to a neighbouring white pixel, if any. This phenomenon produces a disconnected components graph, where each component represents a prospective building or a dense vegetation (a contiguous block of black pixels from the primary mask). In the third phase, a clustering process clusters the segmented lines, extracted from multispectral imagery, around the graph components, if possible. In the fourth step, NDVI, image entropy, and LiDAR data are utilised to discriminate between vegetation, buildings, and isolated building's occluded parts. Finally, the initially extracted building boundary is extended pixel-wise using NDVI, entropy, and LiDAR data to completely delineate the building and to maximise the boundary reach towards building edges. 
The proposed technique is evaluated using two Australian data sets: Aitkenvale and Hervey Bay, for object-based and pixel-based completeness, correctness, and quality. The proposed technique detects buildings larger than 50 m2 and 10 m2 in the Aitkenvale site with 100% and 91% accuracy, respectively, while in the Hervey Bay site it performs better with 100% accuracy for buildings larger than 10 m2 in area.
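
    The phase-two graph construction described above amounts to connected component labelling of the binary mask. As a minimal illustrative sketch (my own toy example, not the authors' implementation), a BFS flood fill labels each 4-connected block of black pixels as one component:

```python
from collections import deque

# Sketch of connected component analysis on a binary primary building mask
# (1 = elevated "black" pixel, 0 = ground): edges exist only between
# neighbouring black pixels, never black-to-white, so each BFS flood fill
# labels one disconnected graph component (a prospective building or tree blob).
def connected_components(mask):
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    n = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] == 1 and labels[r][c] == 0:
                n += 1                       # start a new component
                labels[r][c] = n
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = n
                            queue.append((ny, nx))
    return labels, n

mask = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1]]
labels, count = connected_components(mask)   # two separate blobs
```

    Real masks from non-ground LiDAR points are of course far larger, but the component count and labels play exactly the role of the prospective-building nodes described above.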

  12. Spatial Statistical Data Fusion (SSDF)

    NASA Technical Reports Server (NTRS)

    Braverman, Amy J.; Nguyen, Hai M.; Cressie, Noel

    2013-01-01

    As remote sensing for scientific purposes has transitioned from an experimental technology to an operational one, the selection of instruments has become more coordinated, so that the scientific community can exploit complementary measurements. However, technological and scientific heterogeneity across devices means that the statistical characteristics of the data they collect are different. The challenge addressed here is how to combine heterogeneous remote sensing data sets in a way that yields optimal statistical estimates of the underlying geophysical field, and provides rigorous uncertainty measures for those estimates. Different remote sensing data sets may have different spatial resolutions, different measurement error biases and variances, and other disparate characteristics. A state-of-the-art spatial statistical model was used to relate the true, but not directly observed, geophysical field to noisy, spatial aggregates observed by remote sensing instruments. The spatial covariances of the true field and the covariances of the true field with the observations were modeled. The observations are spatial averages of the true field values, over pixels, with different measurement noise superimposed. A kriging framework is used to infer optimal (minimum mean squared error and unbiased) estimates of the true field at point locations from pixel-level, noisy observations. A key feature of the spatial statistical model is the spatial mixed effects model that underlies it. The approach models the spatial covariance function of the underlying field using linear combinations of basis functions of fixed size. Approaches based on kriging require the inversion of very large spatial covariance matrices, and this is usually done by making simplifying assumptions about spatial covariance structure that simply do not hold for geophysical variables. In contrast, this method does not require these assumptions, and is also computationally much faster. 
This method is fundamentally different from other approaches to data fusion for remote sensing data because it is inferential rather than merely descriptive. All approaches combine data in a way that minimizes some specified loss function; most of these are more or less ad hoc criteria based on what looks good to the eye, or criteria that relate only to the data at hand.

  13. Cellular Neural Network for Real Time Image Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vagliasindi, G.; Arena, P.; Fortuna, L.

    2008-03-12

    Since their introduction in 1988, Cellular Nonlinear Networks (CNNs) have found a key role as image processing instruments. Thanks to their structure, they are capable of processing individual pixels in a parallel way, providing fast image processing capabilities that have been applied to a wide range of fields, among them nuclear fusion. In recent years, indeed, visible and infrared video cameras have become more and more important in tokamak fusion experiments for the twofold aim of understanding the physics and monitoring the safety of the operation. Examining the output of these cameras in real time can provide significant information for plasma control and the safety of the machines. The potential of CNNs can be exploited to this aim. To demonstrate the feasibility of the approach, CNN image processing has been applied to several tasks both at the Frascati Tokamak Upgrade (FTU) and the Joint European Torus (JET).
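
    For readers unfamiliar with the model, the standard Chua-Yang CNN cell dynamics can be sketched as follows. This is a generic textbook toy (classic edge-detection templates on a synthetic image), not the FTU/JET processing chain; all parameter values are illustrative:

```python
import numpy as np

# Toy sketch of the Chua-Yang CNN cell dynamics: each pixel's state x is
# updated in parallel from its 3x3 neighbourhood through a feedback
# template A and a control template B, plus a bias z.
def out(x):                       # piecewise-linear output nonlinearity
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def conv3x3(img, t):              # 3x3 template applied at every pixel
    p = np.pad(img, 1)
    return sum(t[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3))

def cnn_step(x, u, A, B, z, dt=0.1):
    # Forward-Euler step of dx/dt = -x + A*y + B*u + z
    return x + dt * (-x + conv3x3(out(x), A) + conv3x3(u, B) + z)

# Classic edge-detection template pair
A = np.zeros((3, 3)); A[1, 1] = 2.0
B = -np.ones((3, 3)); B[1, 1] = 8.0
z = -1.0

u = np.zeros((8, 8)); u[2:6, 2:6] = 1.0   # input: white square on black
x = np.zeros_like(u)                      # initial state
for _ in range(100):
    x = cnn_step(x, u, A, B, z)
y = out(x)                                # ~ +1 on edges, -1 elsewhere
```

    Because every pixel is updated from local data only, the update maps directly onto massively parallel hardware, which is the source of the real-time capability exploited in the tokamak applications above.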

  14. Fusion of spectral and panchromatic images using false color mapping and wavelet integrated approach

    NASA Astrophysics Data System (ADS)

    Zhao, Yongqiang; Pan, Quan; Zhang, Hongcai

    2006-01-01

    With the development of sensor technology, new image sensors have been introduced that provide a greater range of information to users. But owing to limits on radiated power, there will always be some trade-off between spatial and spectral resolution in the images captured by a specific sensor. Images with high spatial resolution can locate objects with high accuracy, whereas images with high spectral resolution can be used to identify materials. Many applications in remote sensing require fusing low-resolution spectral images with panchromatic images to identify materials at high resolution in clutter. A pixel-based fusion algorithm integrating false color mapping and the wavelet transform is presented in this paper; the resulting images have a higher information content than either of the original images and retain sensor-specific image information. The simulation results show that this algorithm can enhance the visibility of certain details and preserve the distinctions between different materials.
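
    The wavelet half of such a fusion scheme can be sketched with a single Haar decomposition level: keep the spectral band's approximation coefficients and inject the panchromatic image's detail coefficients. This is an illustrative simplification, not the paper's exact algorithm:

```python
import numpy as np

# Illustrative wavelet pan-sharpening sketch: one Haar level, fusing the
# low-resolution band's approximation with the panchromatic detail.
def haar2(img):
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, h, v, d

def ihaar2(a, h, v, d):
    out = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a - h + v - d
    out[1::2, 0::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out

pan = np.random.rand(8, 8)              # high spatial resolution, 1 band
band = np.random.rand(8, 8)             # one spectral band, upsampled

a_b, *_ = haar2(band)                   # spectral content: approximation
_, h_p, v_p, d_p = haar2(pan)           # spatial detail: pan coefficients
fused = ihaar2(a_b, h_p, v_p, d_p)
```

    In a real pipeline this would be repeated per band and over several decomposition levels, with the false-color mapping applied to the fused result.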

  15. Hybrid testing of lumbar CHARITE discs versus fusions.

    PubMed

    Panjabi, Manohar; Malcolmson, George; Teng, Edward; Tominaga, Yasuhiro; Henderson, Gweneth; Serhan, Hassan

    2007-04-20

    An in vitro human cadaveric biomechanical study. To quantify the effects on the operated and other levels, including adjacent levels, of CHARITE disc implantation versus simulated fusion, using a follower load and the new hybrid test method in flexion-extension and bilateral torsion. Spinal fusion has been associated with long-term accelerated degeneration at adjacent levels. As opposed to fusion, artificial discs are designed to preserve motion and diminish adjacent-level effects. Five fresh human cadaveric lumbar specimens (T12-S1) underwent multidirectional testing in flexion-extension and bilateral torsion with a 400 N follower load. Intact specimen total ranges of motion were determined with ±10 Nm unconstrained pure moments. The intact range of motion was used as input for the hybrid tests of 5 constructs: 1) CHARITE disc at L5-S1; 2) fusion at L5-S1; 3) CHARITE discs at L4-L5 and L5-S1; 4) CHARITE disc at L4-L5 and fusion at L5-S1; and 5) 2-level fusion at L4-L5-S1. Using repeated-measures single-factor analysis of variance and Bonferroni statistical tests (P < 0.05), the intervertebral motion redistribution of each construct was compared with that of the intact spine. In flexion-extension, the 1-level CHARITE disc preserved motion at the operated and other levels, while the 2-level CHARITE showed some other-level effects. In contrast, 1- and 2-level fusions increased other-level motions (average, 21.0% and 61.9%, respectively). In torsion, both 1- and 2-level discs preserved motions at all levels. The 2-level simulated fusion increased motions at proximal levels (22.9%), while the 1-level fusion produced no significant changes. In general, CHARITE discs preserved operated- and other-level motions. Fusion simulations affected motion redistribution at other levels, including adjacent levels.

  16. Thermodynamic free-energy minimization for unsupervised fusion of dual-color infrared breast images

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Miao, Lidan; Qi, Hairong

    2006-04-01

    This paper presents the algorithmic details of an unsupervised neural network and an unbiased diagnostic methodology; that is, no lookup table is needed to label the input training data with desired outputs. We deploy the algorithm on two satellite-grade infrared (IR) cameras. Although an early malignant tumor must be small in size and cannot be resolved by a single pixel, which images hundreds of cells, these cells reveal themselves physiologically by spontaneously emitting thermal radiation due to the rapid cell growth of the angiogenesis effect (from the Greek: generation of vessels to increase the tumor's blood supply), which, according to physics, shifts the emission toward a shorter-wavelength IR band. If we use such exceedingly sensitive IR spectral-band cameras, we can in principle detect whether or not a breast tumor is malignant through a thin blouse in a close-up dark room. If this protocol turns out to be reliable in a large-scale follow-on Vatican experiment in 2006, which might generate business interest in the nano-engineering manufacture of a mid-IR nano-camera made of 1-D carbon nanotubes requiring no traditional liquid-nitrogen coolant, then one could accumulate the probability of any type of malignant tumor at every pixel over time in the comfort of privacy, without religious or other concerns. Such a non-intrusive protocol alone may not provide enough information to make the decision, but the changes tracked over time will surely become significant. Such an ill-posed inverse heat-source transfer problem can be solved because of the universal constraint of equilibrium physics governing the blackbody Planck radiation distribution, to be spatio-temporally sampled. Thus, we gather two snapshots with two IR cameras to form a data vector X(t) per pixel and invert the matrix-vector equation X = [A]S pixel-by-pixel independently, known as single-pixel blind source separation (BSS). 
Because the unknown heat transfer matrix, or impulse response function [A], may vary from the point tumor to its neighborhood, we cannot rely on neighborhood statistics as the popular unsupervised independent component analysis (ICA) method does; we instead impose the physics equilibrium condition of the minimum of the Helmholtz free energy, H = E - T₀S. In the case of a point breast cancer, we can assume the constant ground-state energy E₀ to be normalized by the benign neighborhood tissue, and the excited state can then be computed by means of a Taylor series expansion in terms of the pixel I/O data. We can augment the X-ray mammogram technique with passive IR imaging to reduce unwanted X-rays during chemotherapy recovery. When the sequence is animated into a movie and the recovery dynamics are played backward in time, the movie demonstrates the cameras' potential for early detection without suffering the PD = 0.1 search uncertainty. In summary, we applied two satellite-grade dual-color IR imaging cameras and an advanced military automatic target recognition (ATR) spectrum fusion algorithm at the middle-wavelength IR (3-5 μm) and long-wavelength IR (8-12 μm) bands, which are capable of screening malignant tumors, as demonstrated by the time-reversed animated-movie experiments. By contrast, traditional thermal breast scanning/imaging, known for decades as thermography, was IR spectrum-blind and limited to a single night-vision camera, and the necessary wait for a cool-down period before taking a second look for change detection suffers from too many environmental and personnel variabilities.
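
    The per-pixel inversion X = [A]S can be illustrated with a toy two-band example in which the mixing matrix is known; the paper's actual contribution is estimating [A] blindly via free-energy minimization, which is not reproduced here:

```python
import numpy as np

# Toy sketch of the single-pixel inversion X = A @ S, with a hypothetical,
# known 2x2 heat-transfer matrix A shared by all pixels for simplicity.
rng = np.random.default_rng(1)
h, w = 4, 4
S = rng.random((2, h * w))            # two unknown source maps, per pixel
A = np.array([[0.8, 0.3],             # mixing: MWIR and LWIR responses
              [0.2, 0.7]])
X = A @ S                             # two-camera measurement vector per pixel

# Pixel-by-pixel inversion (same A here, so one solve recovers all pixels)
S_hat = np.linalg.solve(A, X)
```

    With a different, unknown [A] at each pixel, the inversion must instead be posed as the single-pixel BSS problem described above.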

  17. Fusion Rate and Clinical Outcomes in Two-Level Posterior Lumbar Interbody Fusion.

    PubMed

    Aono, Hiroyuki; Takenaka, Shota; Nagamoto, Yukitaka; Tobimatsu, Hidekazu; Yamashita, Tomoya; Furuya, Masayuki; Iwasaki, Motoki

    2018-04-01

    Posterior lumbar interbody fusion (PLIF) has become a common surgical method for degenerative lumbar diseases. Although many reports have focused on single-level PLIF, few have focused on 2-level PLIF, and no report has covered the fusion status of 2-level PLIF. The purpose of this study is to investigate clinical outcomes and fusion status for 2-level PLIF by using a combination of dynamic radiographs and multiplanar-reconstruction computed tomography scans. This study consisted of 48 consecutive patients who underwent 2-level PLIF for degenerative lumbar diseases. We assessed surgery duration, estimated blood loss, complications, clinical outcomes as measured by the Japanese Orthopaedic Association score, lumbar sagittal alignment as measured on standing lateral radiographs, and fusion status as assessed by dynamic radiographs and multiplanar-reconstruction computed tomography. Patients were examined at a follow-up point of 4.8 ± 2.2 years after surgery. Thirty-eight patients who did not undergo lumbosacral fusion comprised the lumbolumbar group, and 10 patients who underwent lumbosacral fusion comprised the lumbosacral group. The mean Japanese Orthopaedic Association score improved from 12.1 to 22.4 points by the final follow-up examination. Sagittal alignment was also improved. All patients had fusion at the cranial level. Seven patients had nonunion at the caudal level, and the lumbosacral group (40%) had a significantly poorer fusion rate than the lumbolumbar group (97%). Surgical outcomes of 2-level PLIF were satisfactory. The fusion rate at both levels was 85%. All nonunion was observed at the caudal level and was concentrated at the L5-S level in L4-5-S PLIF. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. High Level Information Fusion (HLIF) with nested fusion loops

    NASA Astrophysics Data System (ADS)

    Woodley, Robert; Gosnell, Michael; Fischer, Amber

    2013-05-01

    Situation modeling and threat prediction require higher levels of data fusion in order to provide actionable information. Beyond the sensor data and sources the analyst has access to, the use of out-sourced and re-sourced data is becoming common. Through the years, some common frameworks have emerged for dealing with information fusion, perhaps the most ubiquitous being the JDL Data Fusion Group and its initial 4-level data fusion model. Since these initial developments, numerous models of information fusion have emerged, hoping to better capture the human-centric process of data analysis within a machine-centric framework. 21st Century Systems, Inc. has developed Fusion with Uncertainty Reasoning using Nested Assessment Characterizer Elements (FURNACE) to address the challenges of high level information fusion and handle bias, ambiguity, and uncertainty (BAU) for situation modeling, threat modeling, and threat prediction. It combines JDL fusion levels with nested fusion loops and state-of-the-art data reasoning. Initial research has shown that FURNACE is able to reduce BAU and improve the fusion process by allowing high level information fusion (HLIF) to affect lower levels without double counting of information or other biasing issues. The initial FURNACE project was focused on the underlying algorithms to produce a fusion system able to handle BAU and repurposed data in a cohesive manner. FURNACE supports analysts' efforts to develop situation models, threat models, and threat predictions to increase situational awareness of the battlespace. FURNACE will not only revolutionize the military intelligence realm, but also benefit the larger homeland defense, law enforcement, and business intelligence markets.

  19. Assessment of Minimum 124I Activity Required in Uptake Measurements Before Radioiodine Therapy for Benign Thyroid Diseases.

    PubMed

    Gabler, Anja S; Kühnel, Christian; Winkens, Thomas; Freesmeyer, Martin

    2016-08-01

    This study aimed to assess a hypothetical minimum administered activity of (124)I required to achieve comparability between pretherapeutic radioiodine uptake (RAIU) measurements by (124)I PET/CT and by a (131)I RAIU probe, the clinical standard. In addition, the impact of different reconstruction algorithms on (124)I RAIU and the evaluation of pixel noise as a parameter for image quality were investigated. Different scan durations were simulated by different reconstruction intervals of 600-s list-mode PET datasets (including 15 intervals up to 600 s and 5 different reconstruction algorithms: filtered-backprojection and 4 iterative techniques) acquired 30 h after administration of 1 MBq of (124)I. The Bland-Altman method was used to compare mean (124)I RAIU levels versus mean 3-MBq (131)I RAIU levels (clinical standard). The data of 37 patients with benign thyroid diseases were assessed. The impact of different reconstruction lengths on pixel noise was investigated for all 5 of the (124)I PET reconstruction algorithms. A hypothetical minimum activity was sought by means of a proportion equation, considering that the length of a reconstruction interval equates to a hypothetical activity. Mean (124)I RAIU and (131)I RAIU already showed high levels of agreement for reconstruction intervals as short as 10 s, corresponding to a hypothetical minimum activity of 0.017 MBq of (124)I. The iterative algorithms proved generally superior to the filtered-backprojection algorithm. (124)I RAIU showed a trend toward higher levels than (131)I RAIU if the influence of retrosternal tissue was not considered, which was proven to be the cause of a slight overestimation by (124)I RAIU measurement. A hypothetical minimum activity of 0.5 MBq of (124)I obtained with iterative reconstruction appeared sufficient both visually and with regard to pixel noise. 
This study confirms the potential of (124)I RAIU measurement as an alternative method for (131)I RAIU measurement in benign thyroid disease and suggests that reducing the administered activity is an option. CT information is particularly important in cases of retrosternal expansion. The results are relevant because (124)I PET/CT allows additional diagnostic means, that is, the possibility of performing fusion imaging with ultrasound. (124)I PET/CT might be an alternative, especially when hybrid (123)I SPECT/CT is not available. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  20. Pulsed excitation terahertz tomography - multiparametric approach

    NASA Astrophysics Data System (ADS)

    Lopato, Przemyslaw

    2018-04-01

    This article deals with pulsed-excitation terahertz computed tomography (THz CT). Unlike x-ray CT, where just a single value (pixel) is obtained, in pulsed THz CT a time signal is acquired at each position. The recorded waveform can be parametrized: many features carrying various information about the examined structure can be calculated. Based on this, a multiparametric reconstruction algorithm is proposed: an inverse-Radon-transform-based reconstruction is applied to each parameter, and the results are then fused. The performance of the proposed imaging scheme was experimentally verified using dielectric phantoms.
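
    The parametrization step can be sketched as follows; the feature set here (peak amplitude, arrival time, pulse energy) is a hypothetical choice for illustration, with each such feature feeding its own inverse-Radon reconstruction in the proposed scheme:

```python
import numpy as np

# Hypothetical sketch of parametrizing one recorded THz waveform into
# scalar features suitable for per-parameter tomographic reconstruction.
def parametrize(t, pulse):
    i = np.argmax(np.abs(pulse))
    dt = t[1] - t[0]
    return {"amplitude": float(np.abs(pulse[i])),       # peak magnitude
            "arrival_time": float(t[i]),                # time of flight
            "energy": float(np.sum(pulse ** 2) * dt)}   # pulse energy

t = np.linspace(0.0, 50.0, 500)                 # time axis, e.g. in ps
pulse = np.exp(-((t - 20.0) / 2.0) ** 2)        # simulated THz pulse
feats = parametrize(t, pulse)
```

    Each feature yields one sinogram over the scan positions and projection angles, and the per-feature reconstructions are then fused into the final image.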

  1. Automatic blood detection in capsule endoscopy video

    NASA Astrophysics Data System (ADS)

    Novozámský, Adam; Flusser, Jan; Tachecí, Ilja; Sulík, Lukáš; Bureš, Jan; Krejcar, Ondřej

    2016-12-01

    We propose two automatic methods for detecting bleeding in wireless capsule endoscopy videos of the small intestine. The first uses solely the color information, whereas the second incorporates assumptions about the blood spot's shape and size. The main original idea is the definition of a new color space that provides good separability of blood pixels and the intestinal wall. Both methods can be applied individually, or their results can be fused together for the final decision. We evaluate their individual performance and various fusion rules on real data manually annotated by an endoscopist.
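
    The paper's new color space is not reproduced here, but the flavor of a color-only blood detector can be sketched with a crude red-dominance rule (purely illustrative thresholds, not the authors' transform):

```python
import numpy as np

# Toy color-only blood detector: flag pixels whose red channel strongly
# dominates green and blue, a crude proxy for blood color. The ratio
# threshold is a hypothetical value chosen for illustration.
def blood_mask(rgb, ratio=2.0):
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return (r > ratio * g) & (r > ratio * b)

frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[0, 0] = (200, 40, 30)     # blood-like pixel
frame[1, 1] = (180, 150, 120)   # mucosa-like pixel
mask = blood_mask(frame)
```

    A purpose-built color space, as in the paper, would replace this ratio rule with a transform in which blood and intestinal wall pixels separate cleanly.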

  2. Hybrid Biosynthetic Autograft Extender for Use in Posterior Lumbar Interbody Fusion: Safety and Clinical Effectiveness.

    PubMed

    Chedid, Mokbel K; Tundo, Kelly M; Block, Jon E; Muir, Jeffrey M

    2015-01-01

    Autologous iliac crest bone graft is the preferred option for spinal fusion, but the morbidity associated with bone harvest and the need for graft augmentation in more demanding cases necessitates combining local bone with bone substitutes. The purpose of this study was to document the clinical effectiveness and safety of a novel hybrid biosynthetic scaffold material consisting of poly(D,L-lactide-co-glycolide) (PLGA, 75:25) combined by lyophilization with unmodified high molecular weight hyaluronic acid (10-12% wt:wt) as an extender for a broad range of spinal fusion procedures. We retrospectively evaluated all patients undergoing single- and multi-level posterior lumbar interbody fusion at an academic medical center over a 3-year period. A total of 108 patients underwent 109 procedures (245 individual vertebral levels). Patient-related outcomes included pain measured on a Visual Analog Scale (VAS). Radiographic outcomes were assessed at 6 weeks, 3-6 months, and 1 year postoperatively. Radiographic fusion or progression of fusion was documented in 221 of 236 index levels (93.6%) at a mean (±SD) time to fusion of 10.2 ± 4.1 months. Single and multi-level fusions were not associated with significantly different success rates. Mean pain scores (±SD) for all patients improved from 6.8 ± 2.5 at baseline to 3.6 ± 2.9 at approximately 12 months. Improvements in VAS were greatest in patients undergoing one- or two-level fusion, with patients undergoing multi-level fusion demonstrating lesser but still statistically significant improvements. Overall, stable fusion was observed in 64.8% of vertebral levels; partial fusion was demonstrated in 28.8% of vertebral levels. Only 15 of 236 levels (6.4%) were non-fused at final follow-up.

  3. An investigation of signal performance enhancements achieved through innovative pixel design across several generations of indirect detection, active matrix, flat-panel arrays

    PubMed Central

    Antonuk, Larry E.; Zhao, Qihua; El-Mohri, Youcef; Du, Hong; Wang, Yi; Street, Robert A.; Ho, Jackson; Weisfield, Richard; Yao, William

    2009-01-01

    Active matrix flat-panel imager (AMFPI) technology is being employed for an increasing variety of imaging applications. An important element in the adoption of this technology has been significant ongoing improvements in optical signal collection achieved through innovations in indirect detection array pixel design. Such improvements have a particularly beneficial effect on performance in applications involving low exposures and/or high spatial frequencies, where detective quantum efficiency is strongly reduced due to the relatively high level of additive electronic noise compared to signal levels of AMFPI devices. In this article, an examination of various signal properties, as determined through measurements and calculations related to novel array designs, is reported in the context of the evolution of AMFPI pixel design. For these studies, dark, optical, and radiation signal measurements were performed on prototype imagers incorporating a variety of increasingly sophisticated array designs, with pixel pitches ranging from 75 to 127 μm. For each design, detailed measurements of fundamental pixel-level properties conducted under radiographic and fluoroscopic operating conditions are reported and the results are compared. A series of 127 μm pitch arrays employing discrete photodiodes culminated in a novel design providing an optical fill factor of ∼80% (thereby assuring improved x-ray sensitivity), and demonstrating low dark current, very low charge trapping and charge release, and a large range of linear signal response. In two of the designs having 75 and 90 μm pitches, a novel continuous photodiode structure was found to provide fill factors that approach the theoretical maximum of 100%. Both sets of novel designs achieved large fill factors by employing architectures in which some, or all, of the photodiode structure was elevated above the plane of the pixel addressing transistor. 
Generally, enhancement of the fill factor in either discrete or continuous photodiode arrays was observed to result in no degradation in MTF due to charge sharing between pixels. While the continuous designs exhibited relatively high levels of charge trapping and release, as well as shorter ranges of linearity, it is possible that these behaviors can be addressed through further refinements to pixel design. Both the continuous and the most recent discrete photodiode designs accommodate more sophisticated pixel circuitry than is present on conventional AMFPIs – such as a pixel clamp circuit, which is demonstrated to limit signal saturation under conditions corresponding to high exposures. It is anticipated that photodiode structures such as the ones reported in this study will enable the development of even more complex pixel circuitry, such as pixel-level amplifiers, that will lead to further significant improvements in imager performance. PMID:19673228

  4. A 25μm pitch LWIR focal plane array with pixel-level 15-bit ADC providing high well capacity and targeting 2mK NETD

    NASA Astrophysics Data System (ADS)

    Guellec, Fabrice; Peizerat, Arnaud; Tchagaspanian, Michael; de Borniol, Eric; Bisotto, Sylvette; Mollard, Laurent; Castelein, Pierre; Zanatta, Jean-Paul; Maillart, Patrick; Zecri, Michel; Peyrard, Jean-Christophe

    2010-04-01

    CEA Leti has recently developed a new readout IC (ROIC) with a pixel-level ADC for cooled infrared focal plane arrays (FPAs). It operates at a 50 Hz frame rate in a snapshot Integrate-While-Read (IWR) mode. It targets applications that provide a large amount of integrated charge thanks to a long integration time. The pixel-level analog-to-digital conversion is based on charge-packet counting. This technique offers a large well capacity that paves the way for a breakthrough in NETD performance. The 15-bit ADC resolution preserves the excellent detector SNR at full well (3 Ge-). These characteristics are essential for LWIR FPAs, as broad intra-scene dynamic-range imaging requires high sensitivity. The ROIC, featuring a 320x256 array with a 25 μm pixel pitch, has been designed in a standard 0.18 μm CMOS technology. The main design challenges for this digital pixel array (SNR, power consumption, and layout density) are discussed. The IC has been hybridized to a LWIR detector fabricated using our in-house HgCdTe process. The first electro-optical test results of the detector dewar assembly are presented. They validate both the pixel-level ADC concept and its circuit implementation. Finally, the benefit of this LWIR FPA in terms of NETD performance is demonstrated.
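
    The charge-packet counting principle can be sketched arithmetically: each time the integrated charge exceeds one packet, a fixed charge quantum is removed and a digital counter increments, so the count is the coarse ADC code and the remainder stays as analog residue. This toy model uses illustrative packet and counter sizes, not CEA Leti's circuit values:

```python
# Toy model of a charge-packet-counting pixel ADC with a 15-bit counter:
# the digital code is the number of fixed charge packets removed during
# integration, and the residue is what remains on the capacitor.
def packet_counting_adc(total_charge_e, packet_e=100_000, max_count=2**15 - 1):
    count = min(int(total_charge_e // packet_e), max_count)
    residue = total_charge_e - count * packet_e
    return count, residue

count, residue = packet_counting_adc(3_000_000_000)   # ~3 Ge- full well
```

    The sketch shows why the effective well capacity scales with the counter depth times the packet size rather than with the integration capacitor alone.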

  5. Disc replacement adjacent to cervical fusion: a biomechanical comparison of hybrid construct versus two-level fusion.

    PubMed

    Lee, Michael J; Dumonski, Mark; Phillips, Frank M; Voronov, Leonard I; Renner, Susan M; Carandang, Gerard; Havey, Robert M; Patwardhan, Avinash G

    2011-11-01

    A cadaveric biomechanical study. To investigate the biomechanical behavior of the cervical spine after cervical total disc replacement (TDR) adjacent to a fusion, as compared with a two-level fusion. There are concerns regarding the biomechanical effects of cervical fusion on the mobile motion segments. Although previous biomechanical studies have demonstrated that cervical disc replacement normalizes adjacent segment motion, there is little information regarding the function of a cervical disc replacement adjacent to an anterior cervical decompression and fusion, a potentially common clinical application. Nine cadaveric cervical spines (C3-T1; age: 60.2 ± 3.5 years) were tested under load- and displacement-control testing. After intact testing, a simulated fusion was performed at C4-C5, followed by C6-C7. The simulated fusion was then reversed, and the response of TDR at C5-C6 was measured. A hybrid construct was then tested with the TDR either below or above a single-level fusion and contrasted with a simulated two-level fusion (C4-C6 and C5-C7). The external fixator device used to simulate fusion significantly reduced the range of motion (ROM) at C4-C5 and C6-C7 by 74.7 ± 8.1% and 78.1 ± 11.5%, respectively (P < 0.05). Removal of the fusion construct restored the motion response of the spinal segments to their intact state. Arthroplasty performed at C5-C6 using the porous-coated motion disc prosthesis maintained the total flexion-extension ROM at the level of the intact controls when used as a stand-alone procedure or when implanted adjacent to a single-level fusion (P > 0.05). The location of the single-level fusion, whether above or below the arthroplasty, did not significantly affect the motion response of the arthroplasty in the hybrid construct. Performing a two-level fusion significantly increased the motion demands on the nonoperated segments as compared to a hybrid TDR-plus-fusion construct when the spine was required to reach the same motion end points. 
The spine with a hybrid construct required significantly less extension moment than the spine with a two-level fusion to reach the same extension end point. The porous-coated motion cervical prosthesis restored the ROM of the treated level to the intact state. When the porous-coated motion prosthesis was used in a hybrid construct, the TDR response was not adversely affected. A hybrid construct seems to offer significant biomechanical advantages over two-level fusion in terms of reducing compensatory adjacent-level hypermobility and also loads required to achieve a predetermined ROM.

  6. Fast distributed large-pixel-count hologram computation using a GPU cluster.

    PubMed

    Pan, Yuechao; Xu, Xuewu; Liang, Xinan

    2013-09-10

    Large-pixel-count holograms are an essential part of large-size holographic three-dimensional (3D) displays, but the generation of such holograms is computationally demanding. In order to address this issue, we have built a graphics processing unit (GPU) cluster with 32.5 Tflop/s computing power and implemented distributed hologram computation on it with speed improvement techniques such as shared memory on the GPU, GPU-level adaptive load balancing, and node-level load distribution. Using these speed improvement techniques on the GPU cluster, we have achieved a 71.4-fold computation speed increase for 186M-pixel holograms. Furthermore, we have used the approaches of diffraction limits and subdivision of holograms to overcome the GPU memory limit in computing large-pixel-count holograms. 745M-pixel and 1.80G-pixel holograms were computed in 343 and 3326 s, respectively, for more than 2 million object points with RGB colors. Color 3D objects with 1.02M points were successfully reconstructed from a 186M-pixel hologram computed in 8.82 s with all three of the above speed improvement techniques. It is shown that distributed hologram computation using a GPU cluster is a promising approach to increasing the computation speed of large-pixel-count holograms for large-size holographic displays.
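
    The node-level load distribution can be illustrated on a single machine: object points are split into chunks (as they would be across GPUs or nodes), each chunk produces a partial point-source hologram, and the complex fields are summed. All names and parameters below are illustrative, not taken from the paper:

```python
import numpy as np

# Single-machine sketch of distributing hologram computation over object
# points: each chunk computes its partial point-source field, and the
# partial complex fields simply add, so the split is embarrassingly parallel.
wavelength = 532e-9
k = 2 * np.pi / wavelength

ys, xs = np.mgrid[0:64, 0:64] * 8e-6          # hologram pixel coordinates (m)
points = np.random.default_rng(2).uniform(
    [0.0, 0.0, 0.1], [5e-4, 5e-4, 0.2], size=(100, 3))   # object points (x, y, z)

def partial_hologram(chunk):
    field = np.zeros(xs.shape, dtype=complex)
    for px, py, pz in chunk:
        r = np.sqrt((xs - px) ** 2 + (ys - py) ** 2 + pz ** 2)
        field += np.exp(1j * k * r) / r        # spherical wave from one point
    return field

chunks = np.array_split(points, 4)            # e.g. one chunk per GPU/node
total = sum(partial_hologram(c) for c in chunks)
reference = partial_hologram(points)          # single-worker result
```

    Because the fields add linearly, the chunked result matches the single-worker result, which is what makes GPU-level and node-level load balancing a pure scheduling problem.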

  7. Self-amplified CMOS image sensor using a current-mode readout circuit

    NASA Astrophysics Data System (ADS)

    Santos, Patrick M.; de Lima Monteiro, Davies W.; Pittet, Patrick

    2014-05-01

    The feature size of CMOS processes has decreased during the past few years, and problems such as reduced dynamic range have become more significant in voltage-mode pixels, even though the integration of more functionality inside the pixel has become easier. This work contributes on both fronts: the possibility of a high signal excursion range using current-mode circuits, together with added functionality through signal amplification inside the pixel. The classic 3T pixel architecture was rebuilt with small modifications to integrate a transconductance amplifier providing a current as output. The matrix of these new pixels operates as one large transistor sourcing an amplified current that is used for signal processing. This current is controlled by the intensity of the light received by the matrix, modulated pixel by pixel. The output current can be controlled by the biasing circuits to achieve a very large range of output signal levels. It can also be controlled through the matrix size, which permits a very high degree of freedom in the signal level, subject to the current densities allowed inside the integrated circuit. In addition, the matrix can operate at very short integration times. Its applications would be those in which fast image processing and high signal amplification are required and low resolution is not a major problem, such as UV image sensors. Simulation results are presented to support the operation, control, design, signal excursion levels, and linearity of a pixel matrix conceived using this new sensor concept.

  8. Iterative algorithm for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution

    NASA Astrophysics Data System (ADS)

    Quan, Haiyang; Wu, Fan; Hou, Xi

    2015-10-01

    A new method for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution is proposed. It is based on a basic iterative scheme and accelerates the Gauss-Seidel method by introducing an acceleration parameter. This modified Successive Over-Relaxation (SOR) method is effective for solving the rotationally asymmetric components with pixel-level spatial resolution, without requiring a fitting procedure. Compared to the Jacobi and Gauss-Seidel methods, the modified SOR method with an optimal relaxation factor converges much faster and saves computational cost and memory space without reducing accuracy. This has been verified with real experimental results.
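
    The core of the scheme is classical SOR: Gauss-Seidel sweeps accelerated by a relaxation factor. A minimal sketch in generic dense-matrix form (not the authors' surface-reconstruction formulation, where the system arises from the measurement geometry):

```python
import numpy as np

def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10000):
    """Successive Over-Relaxation for A x = b.

    omega = 1 reduces to Gauss-Seidel; 1 < omega < 2 typically converges
    faster for suitable (e.g. symmetric positive definite) systems.
    """
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated entries x[:i] (Gauss-Seidel ordering).
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x
```

    Choosing the optimal omega is what gives the speedup over Jacobi and plain Gauss-Seidel that the abstract reports.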

  9. Visual saliency in MPEG-4 AVC video stream

    NASA Astrophysics Data System (ADS)

    Ammar, M.; Mitrea, M.; Hasnaoui, M.; Le Callet, P.

    2015-03-01

    Visual saliency maps have already proved their efficiency in a large variety of image/video communication application fields, ranging from selective compression and channel coding to watermarking. Such saliency maps are generally based on different visual characteristics (like color, intensity, orientation, motion, ...) computed from the pixel representation of the visual content. This paper summarizes and extends our previous work devoted to the definition of a saliency map extracted solely from the MPEG-4 AVC stream syntax elements. The MPEG-4 AVC saliency map thus defined is a fusion of static and dynamic maps. The static saliency map is in its turn a combination of intensity, color, and orientation feature maps. Beyond the particular way in which all these elementary maps are computed, the fusion techniques allowing their combination play a critical role in the final result and are the object of the proposed study. A total of 48 fusion formulas (6 for combining static features and, for each of them, 8 to combine static with dynamic features) are investigated. The performances of the obtained maps are evaluated on a public database organized at IRCCyN, by computing two objective metrics: the Kullback-Leibler divergence and the area under curve.
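
    One of the two evaluation metrics, the Kullback-Leibler divergence between a computed saliency map and a reference map, can be sketched as follows (the normalization and epsilon handling are implementation assumptions, not the paper's exact evaluation code):

```python
import numpy as np

def kl_divergence(saliency, ground_truth, eps=1e-12):
    """Kullback-Leibler divergence between two saliency maps.

    Both maps are normalized to probability distributions; lower values
    mean the predicted map better matches the reference.
    """
    p = np.asarray(ground_truth, dtype=float)
    q = np.asarray(saliency, dtype=float)
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```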

  10. Single-Pixel Optical Fluctuation Analysis of Calcium Channel Function in Active Zones of Motor Nerve Terminals

    PubMed Central

    Luo, Fujun; Dittrich, Markus; Stiles, Joel R.; Meriney, Stephen D.

    2011-01-01

    We used high-resolution fluorescence imaging and single-pixel optical fluctuation analysis to estimate the opening probability of individual voltage-gated calcium (Ca2+) channels during an action potential and the number of such Ca2+ channels within active zones of frog neuromuscular junctions. Analysis revealed ~36 Ca2+ channels within each active zone, similar to the number of docked synaptic vesicles but far less than the total number of transmembrane particles reported based on freeze-fracture analysis (~200–250). The probability that each channel opened during an action potential was only ~0.2. These results suggest why each active zone averages only one quantal release event during every other action potential, despite a substantial number of docked vesicles. With sparse Ca2+ channels and low opening probability, triggering of fusion for each vesicle is primarily controlled by Ca2+ influx through individual Ca2+ channels. In contrast, the entire synapse is highly reliable because it contains hundreds of active zones. PMID:21813687
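
    The estimates of channel number and opening probability follow from the standard binomial fluctuation-analysis relations mean = N·p·i and variance = N·p·(1−p)·i², where i is the single-channel signal. A sketch under that model (the authors' exact estimator may differ):

```python
import numpy as np

def binomial_channel_estimate(trials, unit_signal):
    """Estimate channel number N and opening probability p from
    trial-to-trial fluctuations, assuming a binomial model:
    mean = N*p*i and var = N*p*(1-p)*i**2, with i the single-channel signal.
    """
    trials = np.asarray(trials, dtype=float)
    m, v = trials.mean(), trials.var(ddof=1)
    p = 1.0 - v / (unit_signal * m)   # from var/mean = (1-p)*i
    n = m / (unit_signal * p)         # from mean = N*p*i
    return n, p
```

    With N ≈ 36 and p ≈ 0.2, as reported, the expected mean response per active zone is N·p ≈ 7 open channels per action potential.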

  11. Photovoltaic restoration of sight in rodents with retinal degeneration (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Palanker, Daniel V.

    2017-02-01

    To restore vision in patients who have lost their photoreceptors to retinal degeneration, we developed a photovoltaic subretinal prosthesis which converts light into pulsed electric current, stimulating the nearby inner retinal neurons. Visual information is projected onto the retina by video goggles using pulsed near-infrared (~900 nm) light. This design avoids the use of bulky electronics and wiring, thereby greatly reducing the surgical complexity. Optical activation of the photovoltaic pixels allows scaling the implants to thousands of electrodes, and multiple modules can be tiled under the retina to expand the visual field. We found that, similarly to normal vision, retinal response to prosthetic stimulation exhibits flicker fusion at high frequencies (>20 Hz), adaptation to static images, and non-linear summation of subunits in the receptive fields. Photovoltaic arrays with 70 µm pixels restored visual acuity up to a single pixel pitch, which is only two times lower than natural acuity in rats. If these results translate to the human retina, such implants could restore visual acuity up to 20/250. With eye scanning and perceptual learning, human patients might even cross the 20/200 threshold of legal blindness. In collaboration with Pixium Vision, we are preparing this system (PRIMA) for a clinical trial. To further improve visual acuity, we are developing smaller pixels, down to 40 µm, and a 3-D interface to improve proximity to the target neurons. Scalability, ease of implantation, and the ability to tile these wireless modules to cover a large visual field, combined with high resolution, open the door to highly functional restoration of sight.

  12. Accumulating pyramid spatial-spectral collaborative coding divergence for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Sun, Hao; Zou, Huanxin; Zhou, Shilin

    2016-03-01

    Detection of anomalous targets of various sizes in hyperspectral data has received a lot of attention in reconnaissance and surveillance applications. Many anomaly detectors have been proposed in the literature. However, current methods are susceptible to anomalies within the processing window and often make critical assumptions about the distribution of the background data. Motivated by the fact that anomaly pixels are often distinctive from their local background, in this letter we propose a novel hyperspectral anomaly detection framework for real-time remote sensing applications. The proposed framework consists of four major components: sparse feature learning, pyramid grid window selection, joint spatial-spectral collaborative coding, and multi-level divergence fusion. It exploits the collaborative representation difference in the feature space to locate potential anomalies and is totally unsupervised, without any prior assumptions. Experimental results on airborne hyperspectral data demonstrate that the proposed method adapts to anomalies over a large range of sizes and is well suited for parallel processing.

  13. Biomechanical Analysis of Cervical Disc Replacement and Fusion Using Single Level, Two Level, and Hybrid Constructs.

    PubMed

    Gandhi, Anup A; Kode, Swathi; DeVries, Nicole A; Grosland, Nicole M; Smucker, Joseph D; Fredericks, Douglas C

    2015-10-15

    A biomechanical study comparing arthroplasty with fusion using human cadaveric C2-T1 spines. To compare the kinematics of the cervical spine after arthroplasty and fusion using single-level, 2-level, and hybrid constructs. Previous studies have shown that spinal levels adjacent to a fusion experience increased motion and higher stress, which may lead to adjacent segment disc degeneration. Cervical arthroplasty achieves similar decompression but preserves the motion at the operated level, potentially decreasing the occurrence of adjacent segment disc degeneration. Eleven specimens (C2-T1) were divided into 2 groups (BRYAN and PRESTIGE LP). The specimens were tested in the following order: intact, single-level total disc replacement (TDR) at C5-C6, 2-level TDR at C5-C6-C7, fusion at C5-C6 with TDR at C6-C7 (hybrid construct), and lastly a 2-level fusion. The intact specimens were tested up to a moment of 2.0 Nm. After each surgical intervention, the specimens were loaded until the primary motion (C2-T1) matched the motion of the respective intact state (hybrid control). Arthroplasty preserved motion at the implanted level and maintained normal motion at the nonoperative levels. Arthrodesis resulted in a significant decrease in motion at the fused level and an increase in motion at the unfused levels. In the hybrid construct, the TDR adjacent to the fusion preserved motion at the arthroplasty level, thereby reducing the demand on the other levels. Cervical disc arthroplasty with both the BRYAN and PRESTIGE LP discs not only preserved the motion at the operated level, but also maintained the normal motion at the adjacent levels. Under simulated physiologic loading, the motion patterns of the spine with the BRYAN or PRESTIGE LP disc were very similar, and were closer than fusion to the intact motion pattern. An adjacent segment disc replacement is biomechanically favorable to a fusion in the presence of a pre-existing fusion.

  14. The importance of proximal fusion level selection for outcomes of multi-level lumbar posterolateral fusion.

    PubMed

    Nam, Woo Dong; Cho, Jae Hwan

    2015-03-01

    There are few studies about risk factors for poor outcomes from multi-level lumbar posterolateral fusion limited to three or four levels. The purpose of this study was to analyze the outcomes of multi-level lumbar posterolateral fusion and to search for possible risk factors for poor surgical outcomes. We retrospectively analyzed 37 consecutive patients who underwent multi-level lumbar or lumbosacral posterolateral fusion with posterior instrumentation. The outcomes were deemed either 'good' or 'bad' based on clinical and radiological results. Many demographic and radiological factors were analyzed to examine potential risk factors for poor outcomes. The Student t-test, Fisher exact test, and chi-square test were used based on the nature of the variables. Multiple logistic regression analysis was used to exclude confounding factors. Twenty cases showed a good outcome (group A, 54.1%) and 17 cases showed a bad outcome (group B, 45.9%). The overall fusion rate was 70.3%. Revision procedures (group A: 1/20, 5.0%; group B: 4/17, 23.5%), proximal fusion to L2 (group A: 5/20, 25.0%; group B: 10/17, 58.8%), and severity of stenosis (group A: 12/19, 63.3%; group B: 3/11, 27.3%) were identified as possible factors related to the outcome in univariate analysis. Multiple logistic regression analysis revealed that only the proximal fusion level (superior instrumented vertebra, SIV) was a significant risk factor. Cases in which the SIV was L2 showed inferior outcomes compared to those in which the SIV was L3. The odds ratio was 6.562 (95% confidence interval, 1.259 to 34.203). The overall outcome of multi-level lumbar or lumbosacral posterolateral fusion was not as high as we had hoped it would be. Whether the SIV was L2 or L3 was the only significant risk factor identified for poor outcomes in multi-level lumbar or lumbosacral posterolateral fusion in the current study.
Thus, the authors recommend that proximal fusion levels be carefully determined when multi-level lumbar fusions are considered.
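
    For illustration, an unadjusted odds ratio for proximal fusion to L2 can be computed directly from the reported counts (10/17 in the bad-outcome group vs. 5/20 in the good-outcome group); note this crude value differs from the adjusted odds ratio of 6.562 obtained from the multiple logistic regression:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and 95% CI (log method) for a 2x2 table
    laid out as [[a, b], [c, d]] = [[exposed-case, exposed-control],
    [unexposed-case, unexposed-control]]."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# SIV at L2: 10 of 17 bad outcomes vs. 5 of 20 good outcomes
or_, lo, hi = odds_ratio_ci(10, 7, 5, 15)
```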

  15. The Importance of Proximal Fusion Level Selection for Outcomes of Multi-Level Lumbar Posterolateral Fusion

    PubMed Central

    Nam, Woo Dong

    2015-01-01

    Background There are few studies about risk factors for poor outcomes from multi-level lumbar posterolateral fusion limited to three or four levels. The purpose of this study was to analyze the outcomes of multi-level lumbar posterolateral fusion and to search for possible risk factors for poor surgical outcomes. Methods We retrospectively analyzed 37 consecutive patients who underwent multi-level lumbar or lumbosacral posterolateral fusion with posterior instrumentation. The outcomes were deemed either 'good' or 'bad' based on clinical and radiological results. Many demographic and radiological factors were analyzed to examine potential risk factors for poor outcomes. The Student t-test, Fisher exact test, and chi-square test were used based on the nature of the variables. Multiple logistic regression analysis was used to exclude confounding factors. Results Twenty cases showed a good outcome (group A, 54.1%) and 17 cases showed a bad outcome (group B, 45.9%). The overall fusion rate was 70.3%. Revision procedures (group A: 1/20, 5.0%; group B: 4/17, 23.5%), proximal fusion to L2 (group A: 5/20, 25.0%; group B: 10/17, 58.8%), and severity of stenosis (group A: 12/19, 63.3%; group B: 3/11, 27.3%) were identified as possible factors related to the outcome in univariate analysis. Multiple logistic regression analysis revealed that only the proximal fusion level (superior instrumented vertebra, SIV) was a significant risk factor. Cases in which the SIV was L2 showed inferior outcomes compared to those in which the SIV was L3. The odds ratio was 6.562 (95% confidence interval, 1.259 to 34.203). Conclusions The overall outcome of multi-level lumbar or lumbosacral posterolateral fusion was not as high as we had hoped it would be. Whether the SIV was L2 or L3 was the only significant risk factor identified for poor outcomes in multi-level lumbar or lumbosacral posterolateral fusion in the current study. 
Thus, the authors recommend that proximal fusion levels be carefully determined when multi-level lumbar fusions are considered. PMID:25729522

  16. Data fusion for CD metrology: heterogeneous hybridization of scatterometry, CDSEM, and AFM data

    NASA Astrophysics Data System (ADS)

    Hazart, J.; Chesneau, N.; Evin, G.; Largent, A.; Derville, A.; Thérèse, R.; Bos, S.; Bouyssou, R.; Dezauzier, C.; Foucher, J.

    2014-04-01

    The manufacturing of next-generation semiconductor devices forces metrology tool providers to make an exceptional effort in order to meet the requirements for precision, accuracy, and throughput stated in the ITRS. In the past years, hybrid metrology (based on data fusion theories) has been investigated as a new methodology for advanced metrology [1][2][3]. This paper provides a new point of view on data fusion for metrology through experiments and simulations. The techniques are presented concretely in terms of the equations to be solved. The first point of view is High Level Fusion, which post-processes simple numbers with their associated uncertainties. In this paper, it is divided into two stages: one for calibration to reach accuracy, the second to reach precision through Bayesian Fusion. From our perspective, the first stage is mandatory before applying the second stage, which is the one commonly presented [1]. However, a reference metrology system is necessary for this fusion. So, precision can be improved if and only if the tools to be fused are perfectly matched, at least for some parameters. We provide a methodology, similar to a multidimensional TMU, able to perform this matching exercise. It is demonstrated on a 28 nm node back-end lithography case. The second point of view is Deep Level Fusion, which by contrast works with raw data and their combination. In the approach presented here, the analysis of each tool's raw data is based on a parametric model and on connections between the parameters of each tool. In order to allow OCD/SEM Deep Level Fusion, a SEM compact model derived from [4] has been developed and compared to AFM. As far as we know, this is the first time such techniques have been coupled at the Deep Level. A numerical study on the case of a simple stack for lithography is performed. We show strict equivalence of Deep Level Fusion and High Level Fusion when tools are sensitive and models are perfect. 
When one of the tools can be considered a reference and the second is biased, High Level Fusion is far superior to standard Deep Level Fusion. Otherwise, only the second stage of High Level Fusion (Bayesian Fusion) is possible and does not provide a substantial advantage. Finally, when OCD is equipped with methods for bias detection [5], Deep Level Fusion outclasses the two-stage High Level Fusion and will benefit the industry for the most advanced production nodes.
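
    In the Gaussian case, the Bayesian (second-stage) High Level Fusion of already-matched tools reduces to inverse-variance weighting of the per-tool CD estimates. A minimal sketch (the Gaussian assumption and the interface are ours, not the paper's):

```python
def bayesian_fuse(measurements):
    """Fuse independent measurements given as (value, 1-sigma uncertainty)
    pairs by inverse-variance weighting: the Gaussian Bayesian estimate."""
    weights = [1.0 / (s * s) for _, s in measurements]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    sigma = (1.0 / total) ** 0.5  # fused uncertainty shrinks below each input
    return value, sigma
```

    This is why the calibration (matching) stage must come first: inverse-variance weighting improves precision only if the tools carry no relative bias.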

  17. Performance Evaluation of Fusing Protected Fingerprint Minutiae Templates on the Decision Level

    PubMed Central

    Yang, Bian; Busch, Christoph; de Groot, Koen; Xu, Haiyun; Veldhuis, Raymond N. J.

    2012-01-01

    In a biometric authentication system using protected templates, a pseudonymous identifier is the part of a protected template that can be directly compared. Each compared pair of pseudonymous identifiers results in a decision testing whether both identifiers are derived from the same biometric characteristic. Compared to an unprotected system, most existing biometric template protection methods cause some degree of degradation in biometric performance. Fusion is therefore a promising way to enhance the biometric performance in template-protected biometric systems. Compared to feature-level fusion and score-level fusion, decision-level fusion has not only the least fusion complexity, but also the greatest interoperability across different biometric features, template protection and recognition algorithms, template formats, and comparison score rules. However, performance improvement via decision-level fusion is not obvious. It is influenced by both the dependency and the performance gap among the tests conducted for fusion. We investigate in this paper several fusion scenarios (multi-sample, multi-instance, multi-sensor, multi-algorithm, and their combinations) on the binary decision level, and evaluate their biometric performance and fusion efficiency on a multi-sensor fingerprint database with 71,994 samples. PMID:22778583
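
    Binary decision-level fusion rules of the kind evaluated here can be sketched as follows (the rule names and interface are illustrative, not the paper's API): AND fusion lowers the false accept rate, OR fusion lowers the false reject rate, and majority voting balances the two.

```python
def fuse_decisions(decisions, rule="majority"):
    """Combine binary accept/reject decisions from several comparators
    (multi-sample, multi-instance, multi-sensor, or multi-algorithm)."""
    if rule == "and":
        return all(decisions)        # accept only if every test accepts
    if rule == "or":
        return any(decisions)        # accept if any test accepts
    if rule == "majority":
        return sum(decisions) * 2 > len(decisions)
    raise ValueError(rule)
```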

  18. Fusion of Geophysical Images in the Study of Archaeological Sites

    NASA Astrophysics Data System (ADS)

    Karamitrou, A. A.; Petrou, M.; Tsokas, G. N.

    2011-12-01

    This paper presents results from different fusion techniques applied to geophysical images from different modalities, combining them into one image with higher information content than either of the two original images independently. The resultant image is useful for the detection and mapping of buried archaeological relics. The examined archaeological area is situated at the Kampana site (NE Greece), near the ancient theater of Maronia. Archaeological excavations revealed an ancient theater, an aristocratic house, and a temple of the ancient Greek god Dionysus. Numerous ceramic objects found in the broader area indicated the probable existence of a buried urban structure. In order to accurately locate and map the latter, geophysical measurements were performed using the magnetic method (vertical gradient of the magnetic field) and the electrical method (apparent resistivity). We performed a semi-stochastic pixel-based registration between the geophysical images in order to fine-register them, correcting the local spatial offsets introduced by the use of hand-held devices. After this procedure, we applied three different fusion approaches to the registered images. Image fusion is a relatively new technique that not only allows integration of different information sources, but also takes advantage of the spatial and spectral resolution as well as the orientation characteristics of each image. We used three different fusion techniques: fusion with mean values, fusion with wavelets enhancing selected frequency bands, and fusion with curvelets giving emphasis to specific bands and angles (according to the expected orientation of the relics). In all three cases the fused images gave significantly better results than either of the original geophysical images separately. 
The comparison of the three approaches showed that fusion with curvelets, emphasizing the features' orientation, gives the best fused image. The resultant image shows clear linear and ellipsoidal features corresponding to potential archaeological relics.
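
    The simplest of the three rules, fusion with mean values, can be sketched as follows (per-image min-max normalization before averaging is an assumption; the wavelet and curvelet variants instead combine transform coefficients before inverting):

```python
import numpy as np

def mean_fusion(images):
    """Pixel-wise mean of co-registered, individually normalized images."""
    norm = []
    for im in images:
        im = np.asarray(im, dtype=float)
        rng = im.max() - im.min()
        # Normalize each modality to [0, 1] so neither dominates the mean.
        norm.append((im - im.min()) / rng if rng > 0 else np.zeros_like(im))
    return np.mean(norm, axis=0)
```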

  19. LWIR pupil imaging and longer-term calibration stability

    NASA Astrophysics Data System (ADS)

    LeVan, Paul D.; Sakoglu, Ünal

    2016-09-01

    A previous paper described LWIR pupil imaging and an improved understanding of the behavior of this type of sensor, in which the high-sensitivity focal plane array (FPA), when operated at higher flux levels, exhibits a reversal in signal integration polarity. We have since considered a candidate methodology for efficient, long-term calibration stability that exploits the following two properties of pupil imaging: (1) a fixed pupil position on the FPA, and (2) signal levels from the scene imposed on significant but fixed LWIR background levels. These two properties serve to keep each pixel operating over a limited dynamic range that corresponds to its location in the pupil and to the signal levels generated at this location by the lower and upper calibration flux levels. Exploiting this property, whereby each pixel of the pupil imager operates over its limited dynamic range, the signal polarity reversal between low- and high-flux pixels, which occurs for a circular region of pixels near the upper edges of the pupil illumination profile, can be rectified to unipolar integration with a two-level non-uniformity correction (NUC). Images corrected in real time with standard NUC techniques are still subject to longer-term drifts in pixel offsets between recalibrations. Long-term calibration stability might then be achieved using either a scene-based non-uniformity correction approach, or periodic repointing for off-source background estimation and subtraction. Either approach requires dithering of the field of view: by sub-pixel amounts for the first method, or by large off-source motions outside the 0.38 milliradian FOV for the latter. We report on the results of investigations along both these lines.
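
    A two-level (two-point) NUC derives a per-pixel gain and offset from calibration frames at the lower and upper flux levels. A minimal sketch, assuming a linear pixel response over the limited dynamic range the abstract describes:

```python
import numpy as np

def two_point_nuc(raw, low_cal, high_cal, low_flux, high_flux):
    """Two-point non-uniformity correction.

    low_cal / high_cal: per-pixel responses to the two known calibration
    flux levels; the returned frame maps raw counts back to flux units.
    """
    gain = (high_flux - low_flux) / (high_cal - low_cal)
    offset = low_flux - gain * low_cal
    return gain * raw + offset
```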

  20. Characterization and correction of charge-induced pixel shifts in DECam

    DOE PAGES

    Gruen, D.; Bernstein, G. M.; Jarvis, M.; ...

    2015-05-28

    Interaction of charges in CCDs with the already accumulated charge distribution causes both a flux dependence of the point-spread function (an increase of observed size with flux, also known as the brighter/fatter effect) and pixel-to-pixel correlations of the Poissonian noise in flat fields. We describe these effects in the Dark Energy Camera (DECam) with charge-dependent shifts of effective pixel borders, i.e. the Antilogus et al. (2014) model, which we fit to measurements of flat-field Poissonian noise correlations. The latter fall off approximately as a power law r^(-2.5) with pixel separation r, are isotropic except for an asymmetry in the direct neighbors along rows and columns, are stable in time, and are weakly dependent on wavelength. They show variations from chip to chip at the 20% level that correlate with the silicon resistivity. The charge shifts predicted by the model cause biased shape measurements, primarily due to their effect on bright stars, at levels exceeding weak lensing science requirements. We measure the flux dependence of star images and show that the effect can be mitigated by applying the reverse charge shifts at the pixel level during image processing. Differences in stellar size, however, remain significant due to residuals at larger distance from the centroid.
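
    The flat-field noise correlations the model is fitted to can be estimated from a pair of flats, differenced to cancel the fixed illumination and gain pattern. A sketch of such an estimator (the details are our assumptions, not DECam pipeline code):

```python
import numpy as np

def flat_noise_correlation(flat1, flat2, dx, dy):
    """Correlation coefficient of flat-field noise at pixel lag (dx, dy).

    Differencing two flats of equal exposure removes the fixed pattern,
    leaving (mostly) the Poissonian noise whose correlations are measured.
    """
    diff = np.asarray(flat1, float) - np.asarray(flat2, float)
    diff -= diff.mean()
    ny, nx = diff.shape
    a = diff[0:ny - dy, 0:nx - dx]
    b = diff[dy:ny, dx:nx]
    return float(np.mean(a * b) / np.sqrt(np.mean(a * a) * np.mean(b * b)))
```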

  1. Combinational pixel-by-pixel and object-level classifying, segmenting, and agglomerating in performing quantitative image analysis that distinguishes between healthy non-cancerous and cancerous cell nuclei and delineates nuclear, cytoplasm, and stromal material objects from stained biological tissue materials

    DOEpatents

    Boucheron, Laura E

    2013-07-16

    Quantitative object- and spatial arrangement-level analysis of tissue is detailed using expert (pathologist) input to guide the classification process. A two-step method is disclosed for imaging tissue, by classifying one or more biological materials, e.g. nuclei, cytoplasm, and stroma, in the tissue into one or more identified classes on a pixel-by-pixel basis, and segmenting the identified classes to agglomerate one or more sets of identified pixels into segmented regions. Typically, the one or more biological materials comprise nuclear material, cytoplasm material, and stromal material. The method further allows a user to mark up the image subsequent to the classification to re-classify said materials. The markup is performed via a graphic user interface to edit designated regions in the image.

  2. The Level 0 Pixel Trigger system for the ALICE experiment

    NASA Astrophysics Data System (ADS)

    Aglieri Rinella, G.; Kluge, A.; Krivda, M.; ALICE Silicon Pixel Detector project

    2007-01-01

    The ALICE Silicon Pixel Detector contains 1200 readout chips. Fast-OR signals indicate the presence of at least one hit in the 8192 pixel matrix of each chip. The 1200 bits are transmitted every 100 ns on 120 data readout optical links using the G-Link protocol. The Pixel Trigger System extracts and processes them to deliver an input signal to the Level 0 trigger processor targeting a latency of 800 ns. The system is compact, modular and based on FPGA devices. The architecture allows the user to define and implement various trigger algorithms. The system uses advanced 12-channel parallel optical fiber modules operating at 1310 nm as optical receivers and 12 deserializer chips closely packed in small area receiver boards. Alternative solutions with multi-channel G-Link deserializers implemented directly in programmable hardware devices were investigated. The design of the system and the progress of the ALICE Pixel Trigger project are described in this paper.

  3. Toward Unified Satellite Climatology of Aerosol Properties. 3. MODIS Versus MISR Versus AERONET

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Liu, Li; Geogdzhayev, Igor V.; Travis, Larry D.; Cairns, Brian; Lacis, Andrew A.

    2010-01-01

    We use the full duration of collocated pixel-level MODIS-Terra and MISR aerosol optical thickness (AOT) retrievals and level 2 cloud-screened quality-assured AERONET measurements to evaluate the likely individual MODIS and MISR retrieval accuracies globally over oceans and land. We show that the use of quality-assured MODIS AOTs as opposed to the use of all MODIS AOTs has little effect on the resulting accuracy. The MODIS and MISR relative standard deviations (RSTDs) with respect to AERONET are remarkably stable over the entire measurement record and reveal nearly identical overall AOT performances of MODIS and MISR over the entire suite of AERONET sites. This result is used to evaluate the likely pixel-level MODIS and MISR performances on the global basis with respect to the (unknown) actual AOTs. For this purpose, we use only fully compatible MISR and MODIS aerosol pixels. We conclude that the likely RSTDs for this subset of MODIS and MISR AOTs are 73% over land and 30% over oceans. The average RSTDs for the combined [AOT(MODIS)+AOT(MISR)]/2 pixel-level product are close to 66% and 27%, respectively, which allows us to recommend this simple blend as a better alternative to the original MODIS and MISR data. These accuracy estimates still do not represent the totality of MISR and quality-assured MODIS pixel-level AOTs, since an unaccounted-for and potentially significant source of errors is imperfect cloud screening. Furthermore, many collocated pixels for which one dataset reports a retrieval while the other does not may also be problematic.
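
    Reading RSTD as the RMS difference from AERONET normalized by the mean AERONET AOT (one plausible interpretation; the paper's exact definition may differ), the evaluation and the recommended pixel-level blend can be sketched as:

```python
import numpy as np

def rstd(retrieved, reference):
    """Relative standard deviation of satellite AOT vs. AERONET AOT
    (RMS difference normalized by the mean reference AOT)."""
    retrieved = np.asarray(retrieved, float)
    reference = np.asarray(reference, float)
    rms = np.sqrt(np.mean((retrieved - reference) ** 2))
    return float(rms / reference.mean())

def blend(aot_modis, aot_misr):
    """The simple [AOT(MODIS)+AOT(MISR)]/2 pixel-level blend."""
    return (np.asarray(aot_modis, float) + np.asarray(aot_misr, float)) / 2
```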

  4. Analysis and Enhancement of Low-Light-Level Performance of Photodiode-Type CMOS Active Pixel Images Operated with Sub-Threshold Reset

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata; Yang, Guang; Ortiz, Monico; Wrigley, Christopher; Hancock, Bruce; Cunningham, Thomas

    2000-01-01

    Noise in photodiode-type CMOS active pixel sensors (APS) is primarily due to the reset (kTC) noise at the sense node, since it is difficult to implement in-pixel correlated double sampling for a 2-D array. The signal integrated on the photodiode sense node (SENSE) is calculated by measuring the difference between the voltage on the column bus (COL) before and after the reset (RST) is pulsed. Noise lower than kTC can be achieved with photodiode-type pixels by employing a "soft-reset" technique. Soft reset refers to resetting with both the drain and gate of the n-channel reset transistor kept at the same potential, causing the sense node to be reset by sub-threshold MOSFET current. However, the lower noise is achieved only at the expense of higher image lag and low-light-level non-linearity. In this paper, we present an analysis explaining the noise behavior, show evidence of degraded performance under low light levels, and describe new pixels that eliminate the non-linearity and lag without compromising noise.
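
    The reset noise being suppressed here is the familiar kTC noise: sqrt(kT/C) volts RMS on the sense node, or equivalently sqrt(kTC)/q electrons. A quick calculation (the 10 fF example capacitance is our illustrative assumption):

```python
import math

K_BOLTZMANN = 1.380649e-23   # J/K
ELECTRON_CHARGE = 1.602176634e-19  # C

def ktc_noise_electrons(capacitance_f, temperature_k=300.0):
    """RMS reset (kTC) noise of a sense node, expressed in electrons:
    v_n = sqrt(kT/C) volts, and one electron corresponds to q/C volts,
    so the noise charge is sqrt(kTC)/q electrons."""
    return (math.sqrt(K_BOLTZMANN * temperature_k * capacitance_f)
            / ELECTRON_CHARGE)
```

    For a 10 fF node at 300 K this gives roughly 40 electrons RMS, which shows why halving kTC noise via soft reset is attractive for low-light imaging.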

  5. Daily monitoring of 30 m crop condition over complex agricultural landscapes

    NASA Astrophysics Data System (ADS)

    Sun, L.; Gao, F.; Xie, D.; Anderson, M. C.; Yang, Y.

    2017-12-01

    Crop progress provides information necessary for efficient irrigation, and for scheduling fertilization and harvesting operations at the optimal times for achieving higher yields. In the United States, crop progress reports are released online weekly by the US Department of Agriculture (USDA) National Agricultural Statistics Service (NASS). However, the ground data collection is time consuming and subjective, and these reports are provided only at the district (multiple counties) or state level. Remote sensing technologies have been widely used to map crop conditions, extract crop phenology, and predict crop yield. However, for current satellite-based sensors, it is difficult to acquire both high spatial resolution and frequent coverage. For example, Landsat satellites capture 30 m resolution images, but their long revisit cycle and cloud contamination limit their use in detecting rapid surface changes. On the other hand, MODIS provides daily observations, but with coarse spatial resolutions ranging from 250 to 1000 m. In recent years, multi-satellite data fusion technology such as the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) has been used to combine the spatial resolution of Landsat with the temporal frequency of MODIS. This synthetic dataset can provide more valuable information than images acquired from a single sensor alone. However, the accuracy of STARFM depends on the heterogeneity of the landscape and on the available clear image pairs of MODIS and Landsat. In this study, a new fusion method was developed using the crop vegetation index (VI) time series extracted from "pure" MODIS pixels together with Landsat overpass images to generate daily 30 m VI for crops. The fusion accuracy was validated by comparison with the original Landsat images. Results show that the relative error is around 3-5% in the non-rapid growing period and around 6-8% in the rapid growing period. 
The accuracy is much better than that of STARFM, which is 4-9% in the non-rapid growing period and 10-16% in the rapid growing period based on 13 image pairs. The predicted VI from this approach is consistent and smooth across the SLC-off gap stripes of the Landsat 7 ETM+ image. The new fusion results will be used to map crop phenology and to predict crop yield at field scale in complex agricultural landscapes.
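
    A heavily simplified, shift-based variant of this idea anchors the "pure-pixel" MODIS VI time series so it passes through the Landsat VI on the overpass date; the authors' actual fusion method is more elaborate, so treat this only as an illustration of the temporal-anchoring concept:

```python
import numpy as np

def predict_daily_vi(landsat_vi_t0, modis_vi_series, t0):
    """Predict a daily 30 m VI series for a crop field by shifting the
    crop's MODIS VI time series to match the Landsat VI at overpass
    index t0 (additive anchoring; a simplification, not the paper's method)."""
    modis = np.asarray(modis_vi_series, dtype=float)
    return landsat_vi_t0 + (modis - modis[t0])
```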

  6. [Research Progress of Multi-Model Medical Image Fusion at Feature Level].

    PubMed

    Zhang, Junjie; Zhou, Tao; Lu, Huiling; Wang, Huiqun

    2016-04-01

    Medical image fusion realizes the combined advantages of functional images and anatomical images. This article discusses the research progress of multi-model medical image fusion at the feature level. We first describe the principle of medical image fusion at the feature level. Then we analyze and summarize the applications of fuzzy sets, rough sets, D-S evidence theory, artificial neural networks, principal component analysis, and other fusion methods in medical image fusion. Lastly, we indicate present problems and future research directions for multi-model medical image fusion.

  7. Ambiguity of Quality in Remote Sensing Data

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Leptoukh, Greg

    2010-01-01

    This slide presentation reviews some of the issues in the quality of remote sensing data. Data "quality" is used in several different contexts in remote sensing, with quite different meanings. At the pixel level, quality typically refers to a quality control process exercised by the processing algorithm, not an explicit declaration of accuracy or precision. File-level quality is usually a statistical summary of the pixel-level quality, but is of doubtful use for scenes covering large areal extents. Quality at the dataset or product level, on the other hand, usually refers to how accurately the dataset is believed to represent the physical quantities it purports to measure. This assessment often bears at best an indirect relationship to pixel-level quality. In addition to ambiguity across levels of granularity, ambiguity is endemic within levels: pixel-level quality terms vary widely, as do recommendations for the use of these flags. At the dataset/product level, quality for low-resolution gridded products is often extrapolated from validation campaigns using high spatial resolution swath data, a suspect practice at best. Making use of quality at all levels is further complicated by the dependence on application needs. We present examples of the various meanings of quality in remote sensing data and possible ways forward toward a more unified and usable quality framework.

  8. Ontological Issues in Higher Levels of Information Fusion: User Refinement of the Fusion Process

    DTIC Science & Technology

    2003-01-01

    We explore the higher-level purpose of fusion systems by relating modern-day data fusion questions to the philosophical questions the Greeks posed of an ontology - the thing that is. The Greeks focused on the World of Visible Things, its Appearances, and Belief (pistis); at the Fusion02 conference there are common fusion questions concerning both data fusion and user refinement of the fusion process. The rest of the paper is as follows: Section 2 details the Greek

  9. A color fusion method of infrared and low-light-level images based on visual perception

    NASA Astrophysics Data System (ADS)

    Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa

    2014-11-01

    Color fusion images can be obtained through the fusion of infrared and low-light-level images, and contain the information of both. Such fusion images help observers understand multichannel images comprehensively. However, simple fusion may lose target information, because targets are inconspicuous in long-distance infrared and low-light-level images; and if target extraction is applied blindly, the perception of the scene information is seriously affected. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are incorporated into traditional color fusion methods. The infrared and low-light-level color fusion images are achieved based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method: the fusion images achieved by our algorithm not only improve the detection rate of targets, but also retain rich natural information of the scenes.

  10. Construction of pixel-level resolution DEMs from monocular images by shape and albedo from shading constrained with low-resolution DEM

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Liu, Wai Chung; Grumpe, Arne; Wöhler, Christian

    2018-06-01

    Lunar Digital Elevation Models (DEMs) are important for successful lunar landing and exploration missions. Lunar DEMs are typically generated by photogrammetry or laser altimetry. Photogrammetric methods require multiple stereo images of the region of interest and may not be applicable where stereo coverage is not available. In contrast, reflectance-based shape reconstruction techniques, such as shape from shading (SfS) and shape and albedo from shading (SAfS), use monocular images to generate DEMs with pixel-level resolution. We present a novel hierarchical SAfS method that refines a lower-resolution DEM to pixel-level resolution given a monocular image with known light source. We also estimate the corresponding pixel-wise albedo map in the process and use it to regularize the pixel-level shape reconstruction constrained by the low-resolution DEM. In this study, a Lunar-Lambertian reflectance model is applied to estimate the albedo map. Experiments were carried out using monocular images from the Lunar Reconnaissance Orbiter Narrow Angle Camera (LRO NAC), with spatial resolutions of 0.5-1.5 m per pixel, constrained by the Selenological and Engineering Explorer and LRO Elevation Model (SLDEM), with a spatial resolution of 60 m. The results indicate that local details are well recovered by the proposed algorithm with plausible albedo estimation. The low-frequency topographic consistency depends on the quality of the low-resolution DEM and the resolution difference between the image and the low-resolution DEM.

  11. Symbol recognition via statistical integration of pixel-level constraint histograms: a new descriptor.

    PubMed

    Yang, Su

    2005-02-01

    A new descriptor for symbol recognition is proposed. 1) A histogram is constructed for every pixel to figure out the distribution of the constraints among the other pixels. 2) All the histograms are statistically integrated to form a feature vector with fixed dimension. The robustness and invariance were experimentally confirmed.
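
    The two-step construction can be illustrated with a toy implementation, assuming for concreteness that the per-pixel "constraints" are pairwise distances (the descriptor's actual constraint definition may differ):

```python
import numpy as np

def pixel_histogram_descriptor(points, n_bins=8):
    """Step 1: a histogram per pixel of its 'constraints' (here: distances)
    to all other pixels. Step 2: statistical integration of all histograms
    into one feature vector of fixed dimension (mean and std per bin)."""
    points = np.asarray(points, dtype=float)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d_max = d.max() or 1.0                     # avoid a zero-width range
    hists = []
    for i in range(len(points)):
        others = np.delete(d[i], i)            # distances to the other pixels
        h, _ = np.histogram(others, bins=n_bins, range=(0, d_max), density=True)
        hists.append(h)
    hists = np.array(hists)
    # Statistical integration over all per-pixel histograms.
    return np.concatenate([hists.mean(axis=0), hists.std(axis=0)])

square = [(0, 0), (0, 1), (1, 0), (1, 1)]
desc = pixel_histogram_descriptor(square)
```

    Because the per-pixel histograms depend only on pairwise distances, the integrated vector is unchanged under translation and rotation, consistent with the invariance the abstract reports.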

  12. Decadal Trend in Agricultural Abandonment and Woodland Expansion in an Agro-Pastoral Transition Band in Northern China.

    PubMed

    Wang, Chao; Gao, Qiong; Wang, Xian; Yu, Mei

    2015-01-01

    Land use land cover (LULC) changes frequently in ecotones due to the large climate and soil gradients, and complex landscape composition and configuration. Accurate mapping of LULC changes in ecotones is of great importance for assessment of ecosystem functions/services and policy-decision support. Decadal or sub-decadal mapping of LULC provides scenarios for modeling biogeochemical processes and their feedbacks to climate, and for evaluating the effectiveness of land-use policies, e.g. forest conversion. However, it remains a great challenge to produce reliable LULC maps at moderate resolution and to evaluate their uncertainties over large areas with complex landscapes. In this study we developed a robust LULC classification system using multiple classifiers based on MODIS (Moderate Resolution Imaging Spectroradiometer) data and posterior data fusion. Not only does the system create LULC maps with high statistical accuracy, but it also provides pixel-level uncertainties that are essential for subsequent analyses and applications. We applied the classification system to the Agro-pasture transition band in northern China (APTBNC) to detect the decadal changes in LULC during 2003-2013 and evaluated the effectiveness of the implementation of major Key Forestry Programs (KFPs). In our study, the random forest (RF), support vector machine (SVM), and weighted k-nearest neighbors (WKNN) classifiers outperformed the artificial neural networks (ANN) and naive Bayes (NB) in terms of high classification accuracy and low sensitivity to training sample size. The Bayesian-average data fusion based on the results of RF, SVM, and WKNN achieved a Kappa statistic of 87.5%, higher than any individual classifier or the majority-vote integration. The pixel-level uncertainty map agreed with the traditional accuracy assessment; in addition, however, it conveys the spatial variation of uncertainty.
Specifically, it pinpoints that the southwestern area of APTBNC has higher uncertainty than other parts of the region, and that open shrubland is likely to be misclassified as bare ground in some locations. Forests, closed shrublands, and grasslands in APTBNC expanded by 23%, 50%, and 9%, respectively, during 2003-2013. The expansion of these land cover types is compensated by shrinkages in croplands (20%), bare ground (15%), and open shrublands (30%). The significant decline in agricultural lands is primarily attributed to the KFPs implemented at the end of the last century and to the nationwide urbanization of the recent decade. The increased coverage of grass and woody plants would largely reduce soil erosion, improve mitigation of climate change, and enhance carbon sequestration in this region.
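
    The Bayesian-average fusion step can be sketched as a weighted average of per-classifier class posteriors, with a per-pixel entropy serving as the uncertainty measure. This is a minimal illustration, not the paper's exact formulation; the accuracy-derived weights and the toy posteriors are assumptions:

```python
import numpy as np

def fuse_posteriors(posteriors, weights=None):
    """Weighted (Bayesian-style) average of per-classifier class posteriors.

    posteriors: array (n_classifiers, n_pixels, n_classes); weights could be
    each classifier's validation accuracy. Returns fused labels and a
    per-pixel entropy as an uncertainty measure.
    """
    posteriors = np.asarray(posteriors, dtype=float)
    if weights is None:
        weights = np.ones(posteriors.shape[0])
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    fused = np.tensordot(w, posteriors, axes=1)         # (n_pixels, n_classes)
    fused = fused / fused.sum(axis=1, keepdims=True)
    labels = fused.argmax(axis=1)
    entropy = -(fused * np.log(fused + 1e-12)).sum(axis=1)
    return labels, entropy

# Three classifiers (stand-ins for RF, SVM, WKNN), two pixels, three classes.
p = np.array([
    [[0.8, 0.1, 0.1], [0.4, 0.3, 0.3]],
    [[0.7, 0.2, 0.1], [0.3, 0.4, 0.3]],
    [[0.9, 0.05, 0.05], [0.35, 0.35, 0.3]],
])
labels, entropy = fuse_posteriors(p, weights=[0.85, 0.83, 0.80])
```

    The first pixel, where the classifiers agree, receives a confident label and low entropy; the second, where they disagree, receives near-maximal entropy, which is the kind of pixel-level uncertainty the map conveys.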

  13. The Relationship between Serum Vitamin D Levels and Spinal Fusion Success: A Quantitative Analysis

    PubMed Central

    Metzger, Melodie F.; Kanim, Linda E.; Zhao, Li; Robinson, Samuel T.; Delamarter, Rick B.

    2015-01-01

    Study Design An in vivo dosing study of vitamin D in a rat posterolateral spinal fusion model with autogenous bone grafting. Rats randomized to four levels of Vitamin D adjusted rat chow, longitudinal serum validation, surgeons/observers blinded to dietary conditions, and rats followed prospectively for fusion endpoint. Objective To assess the impact of dietary and serum levels of Vitamin D on fusion success, consolidation of fusion mass, and biomechanical stiffness after posterolateral spinal fusion procedure. Summary of Background Data Metabolic risk factors, including vitamin D insufficiency, are often overlooked by spine surgeons. Currently there are no published data on the causal effect of insufficient or deficient vitamin D levels on the success of establishing solid bony union after a spinal fusion procedure. Methods 50 rats were randomized to four experimentally controlled rat chow diets: normal control, vitamin D-deficient, vitamin-D insufficient, and a non-toxic high dose of vitamin D, four weeks prior to surgery and maintained post-surgery until sacrifice. Serum levels of 25(OH)D were determined at surgery and sacrifice using radioimmunoassay. Posterolateral fusion surgery with tail autograft was performed. Rats were sacrificed 12 weeks post-operatively and fusion was evaluated via manual palpation, high resolution radiographs, μCT, and biomechanical testing. Results Serum 25(OH)D and calcium levels were significantly correlated with vitamin-D adjusted chow (p<0.001). There was a dose dependent relationship between vitamin D adjusted chow and manual palpation fusion with greatest differences found in measures of radiographic density between high and deficient vitamin D (p<0.05). Adequate levels of vitamin D (high and normal control) yielded stiffer fusion than inadequate levels (insufficient and deficient) (p<0.05). Conclusions Manual palpation fusion rates increased with supplementation of dietary vitamin D. 
Biomechanical stiffness, bone volume and density were also positively related to vitamin D and calcium. PMID:25627287

  14. Event-driven charge-coupled device design and applications therefor

    NASA Technical Reports Server (NTRS)

    Doty, John P. (Inventor); Ricker, Jr., George R. (Inventor); Burke, Barry E. (Inventor); Prigozhin, Gregory Y. (Inventor)

    2005-01-01

    An event-driven X-ray CCD imager device uses a floating-gate amplifier or other non-destructive readout device to non-destructively sense the charge level in the charge packet associated with a pixel. The output of the floating-gate amplifier is used to identify each pixel that has a charge level above a predetermined threshold. If the charge level is above that threshold, the charge in the triggering charge packet and in the charge packets from neighboring pixels needs to be measured accurately. A charge delay register is included in the event-driven X-ray CCD imager device to enable recovery of the charge packets from neighboring pixels for accurate measurement. When a charge packet reaches the end of the charge delay register, control logic either dumps the charge packet or, if the charge packet is determined to be one that needs accurate measurement, steers it to a charge FIFO to preserve it. A floating-diffusion amplifier or other low-noise output stage device, which converts charge level to a voltage level with high precision, provides the final measurement of the charge packets. The voltage level is eventually digitized by a high-linearity ADC.

  15. Two-level noncontiguous versus three-level anterior cervical discectomy and fusion: a biomechanical comparison.

    PubMed

    Finn, Michael A; Samuelson, Mical M; Bishop, Frank; Bachus, Kent N; Brodke, Darrel S

    2011-03-15

    Biomechanical study. To determine biomechanical forces exerted on intermediate and adjacent segments after two- or three-level fusion for treatment of noncontiguous levels. Increased motion adjacent to fused spinal segments is postulated to be a driving force in adjacent segment degeneration. Occasionally, a patient requires treatment of noncontiguous levels on either side of a normal level. The biomechanical forces exerted on the intermediate and adjacent levels are unknown. Seven intact human cadaveric cervical spines (C3-T1) were mounted in a custom seven-axis spine simulator equipped with a follower load apparatus and OptoTRAK three-dimensional tracking system. Each intact specimen underwent five cycles each of flexion/extension, lateral bending, and axial rotation under a ± 1.5 Nm moment and a 100-N axial follower load. Applied torque and motion data in each axis of motion and level were recorded. Testing was repeated under the same parameters after C4-C5 and C6-C7 diskectomies were performed and fused with rigid cervical plates and interbody spacers and again after a three-level fusion from C4 to C7. Range of motion was modestly increased (35%) in the intermediate and adjacent levels in the skip fusion construct. A significant or nearly significant difference was reached in seven of nine moments. With the three-level fusion construct, motion at the infra- and supra-adjacent levels was significantly or nearly significantly increased in all applied moments over the intact and the two-level noncontiguous construct. The magnitude of this change was substantial (72%). Infra- and supra-adjacent levels experienced a marked increase in strain in all moments with a three-level fusion, whereas the intermediate, supra-, and infra-adjacent segments of a two-level fusion experienced modest strain moments relative to intact. It would be appropriate to consider noncontiguous fusions instead of a three-level fusion when confronted with nonadjacent disease.

  16. CMOS Active Pixel Sensors for Low Power, Highly Miniaturized Imaging Systems

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R.

    1996-01-01

    The complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) technology has been developed over the past three years by NASA at the Jet Propulsion Laboratory, and has reached a level of performance comparable to CCDs with greatly increased functionality but at a very reduced power level.

  17. Data fusion in cyber security: first order entity extraction from common cyber data

    NASA Astrophysics Data System (ADS)

    Giacobe, Nicklaus A.

    2012-06-01

    The Joint Directors of Labs Data Fusion Process Model (JDL Model) provides a framework for how to handle sensor data to develop higher levels of inference in a complex environment. Beginning from a call to leverage data fusion techniques in intrusion detection, there have been a number of advances in the use of data fusion algorithms in this subdomain of cyber security. While it is tempting to jump directly to situation-level or threat-level refinement (levels 2 and 3) for more exciting inferences, a proper fusion process starts with lower levels of fusion in order to provide a basis for the higher fusion levels. The process begins with first order entity extraction, or the identification of important entities represented in the sensor data stream. Current cyber security operational tools and their associated data are explored for potential exploitation, identifying the first order entities that exist in the data and the properties of these entities that are described by the data. Cyber events that are represented in the data stream are added to the first order entities as their properties. This work explores typical cyber security data and the inferences that can be made at the lower fusion levels (0 and 1) with simple metrics. Depending on the types of events that are expected by the analyst, these relatively simple metrics can provide insight on their own, or could be used in fusion algorithms as a basis for higher levels of inference.
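
    First order entity extraction of this kind can be sketched as pulling host entities out of a raw event stream and attaching observed cyber events as their properties. The log lines and field layout below are hypothetical, a minimal level-0/1 illustration rather than any specific tool's format:

```python
import re
from collections import Counter

# Hypothetical firewall-style log lines; real formats (syslog, IDS) vary widely.
LOGS = [
    "DENY tcp 10.0.0.5:51234 -> 192.168.1.10:22",
    "ALLOW tcp 10.0.0.5:51235 -> 192.168.1.10:80",
    "DENY tcp 10.0.0.9:40000 -> 192.168.1.10:22",
]

IP_PORT = re.compile(r"(\d{1,3}(?:\.\d{1,3}){3}):(\d+)")

def extract_entities(lines):
    """Level 0/1 fusion: identify first-order entities (source hosts) in the
    sensor data stream and attach observed events as their properties."""
    hosts = {}
    for line in lines:
        verdict = line.split()[0]
        (src_ip, _src_port), (_dst_ip, dst_port) = IP_PORT.findall(line)
        host = hosts.setdefault(src_ip, {"ports": Counter(), "verdicts": Counter()})
        host["ports"][int(dst_port)] += 1      # simple per-entity metrics
        host["verdicts"][verdict] += 1
    return hosts

entities = extract_entities(LOGS)
```

    Counts like these are the "relatively simple metrics" the abstract mentions: useful on their own (e.g. many DENYs on port 22 from one host) or as inputs to higher fusion levels.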

  18. Active-Pixel Image Sensor With Analog-To-Digital Converters

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R.; Mendis, Sunetra K.; Pain, Bedabrata; Nixon, Robert H.

    1995-01-01

    Proposed single-chip integrated-circuit image sensor contains 128 x 128 array of active pixel sensors at 50-micrometer pitch. Output terminals of all pixels in each given column connected to analog-to-digital (A/D) converter located at bottom of column. Pixels scanned in semiparallel fashion, one row at a time; during time allocated to scanning row, outputs of all active pixel sensors in row fed to respective A/D converters. Design of chip based on complementary metal oxide semiconductor (CMOS) technology, and individual circuit elements fabricated according to 2-micrometer CMOS design rules. Active pixel sensors designed to operate at video rate of 30 frames/second, even at low light levels. A/D scheme based on first-order Sigma-Delta modulation.

  19. A design of optical modulation system with pixel-level modulation accuracy

    NASA Astrophysics Data System (ADS)

    Zheng, Shiwei; Qu, Xinghua; Feng, Wei; Liang, Baoqiu

    2018-01-01

    Vision measurement has been widely used in the fields of dimensional measurement and surface metrology. However, traditional vision measurement methods have many limits, such as low dynamic range and poor reconfigurability. Optical modulation before image formation has the advantages of high dynamic range, high accuracy and more flexibility, and the modulation accuracy is the key parameter that determines the accuracy and effectiveness of an optical modulation system. In this paper, an optical modulation system with pixel-level accuracy is designed and built based on multi-point reflective imaging theory and a digital micromirror device (DMD). The system consists of the DMD, a CCD camera and a lens. First, we achieve accurate pixel-to-pixel correspondence between the DMD mirrors and the CCD pixels by moiré fringes and an image processing of sampling and interpolation. Then we build three coordinate systems and calculate the mathematical relationship between the coordinates of the digital micromirrors and the CCD pixels using a checkerboard pattern. A verification experiment proves that the correspondence error is less than 0.5 pixel. The results show that the modulation accuracy of the system meets the requirements of modulation. Furthermore, the highly reflective edge of a metal circular piece can be detected using the system, which proves the effectiveness of the optical modulation system.

  20. Multi-energy x-ray detector calibration for Te and impurity density (nZ) measurements of MCF plasmas

    NASA Astrophysics Data System (ADS)

    Maddox, J.; Pablant, N.; Efthimion, P.; Delgado-Aparicio, L.; Hill, K. W.; Bitter, M.; Reinke, M. L.; Rissi, M.; Donath, T.; Luethi, B.; Stratton, B.

    2016-11-01

    Soft x-ray detection with the new "multi-energy" PILATUS3 detector systems holds promise as a magnetically confined fusion (MCF) plasma diagnostic for ITER and beyond. The measured x-ray brightness can be used to determine impurity concentrations, electron temperatures and ne²Zeff products, and to probe the electron energy distribution. However, to be effective, these detectors, which are really large arrays of detectors with photon-energy gating capabilities, must be precisely calibrated for each pixel. The energy dependence of the detector response of the multi-energy PILATUS3 system with 100k pixels has been measured at the Dectris Laboratory. X-rays emitted from a tube under high voltage bombard various elements such that they emit x-ray lines from Zr-Lα to Ag-Kα, between 1.8 and 22.16 keV. Each pixel on the PILATUS3 can be set to a minimum energy threshold in the range from 1.6 to 25 keV. This feature allows a single detector to be sensitive to a variety of x-ray energies, so that it is possible to sample the energy distribution of the x-ray continuum and line emission. PILATUS3 can be configured for 1D or 2D imaging of MCF plasmas with typical spatial, energy and temporal resolutions of 1 cm, 0.6 keV, and 5 ms, respectively.

  1. Multidirectional testing of one- and two-level ProDisc-L versus simulated fusions.

    PubMed

    Panjabi, Manohar; Henderson, Gweneth; Abjornson, Celeste; Yue, James

    2007-05-20

    An in vitro human cadaveric biomechanical study. To evaluate intervertebral rotation changes due to lumbar ProDisc-L compared with simulated fusion, using follower load and multidirectional testing. Artificial discs, as opposed to the fusions, are thought to decrease the long-term accelerated degeneration at adjacent levels. A biomechanical assessment can be helpful, as the long-term clinical evaluation is impractical. Six fresh human cadaveric lumbar specimens (T12-S1) underwent multidirectional testing in flexion-extension, bilateral lateral bending, and bilateral torsion using the Hybrid test method. First, intact specimen total range of rotation (T12-S1) was determined. Second, using pure moments again, this range of rotation was achieved in each of the 5 constructs: A) ProDisc-L at L5-S1; B) fusion at L5-S1; C) ProDisc-L at L4-L5 and fusion at L5-S1; D) ProDisc-L at L4-L5 and L5-S1; and E) 2-level fusion at L4-L5 to L5-S1. Significant changes in the intervertebral rotations due to each construct were determined at the operated and nonoperated levels using repeated measures single factor ANOVA and Bonferroni statistical tests (P < 0.05). Adjacent-level effects (ALEs) were defined as the percentage changes in intervertebral rotations at the nonoperated levels due to the constructs. One- and 2-level ProDisc-L constructs showed only small ALE in any of the 3 rotations. In contrast, 1- and 2-level fusions showed increased ALE in all 3 directions (average, 7.8% and 35.3%, respectively, for 1 and 2 levels). In the disc plus fusion combination (construct C), the ALEs were similar to the 1-level fusion alone. In general, ProDisc-L preserved physiologic motions at all spinal levels, while the fusion simulations resulted in significant ALE.

  2. Fusion of shallow and deep features for classification of high-resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Gao, Lang; Tian, Tian; Sun, Xiao; Li, Hang

    2018-02-01

    Effective spectral and spatial pixel description plays a significant role in the classification of high-resolution remote sensing images. Current approaches to pixel-based feature extraction are of two main kinds: one includes the widely used principal component analysis (PCA) and gray-level co-occurrence matrix (GLCM) as representatives of shallow spectral and shape features, and the other refers to deep learning-based methods which employ deep neural networks and have made great improvements in classification accuracy. However, the former traditional features are insufficient to depict the complex distribution of high-resolution images, while the deep features demand plenty of samples to train the network, and overfitting easily occurs if only limited samples are involved in the training. In view of the above, we propose a GLCM-based convolutional neural network (CNN) approach to extract features and implement classification for high-resolution remote sensing images. The employment of the GLCM represents the original images while eliminating redundant information and undesired noise. Meanwhile, taking shallow features as the input of the deep network contributes to better guidance and interpretability. In consideration of the amount of samples, strategies such as L2 regularization and dropout are used to prevent overfitting. A fine-tuning strategy is also used in our study to reduce training time and further enhance the generalization performance of the network. Experiments with popular data sets such as the PaviaU data set validate that our proposed method leads to a performance improvement compared to the individual approaches involved.
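
    A GLCM of the kind used as the shallow input feature can be computed directly. Below is a minimal sketch for a single pixel offset on a pre-quantized image, with the contrast statistic as one example of a derived texture feature; production code would typically use a library implementation such as scikit-image's:

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for one offset (dx, dy).
    image must already be quantized to integer gray levels < `levels`."""
    img = np.asarray(image)
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1   # count co-occurring pairs
    return m / m.sum()

def contrast(p):
    """One classic Haralick-style texture feature derived from the GLCM."""
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())

quantized = np.array([[0, 0, 1, 1],
                      [0, 0, 1, 1],
                      [2, 2, 3, 3],
                      [2, 2, 3, 3]])
p = glcm(quantized, levels=4)
```

    Features such as `contrast` (or energy, homogeneity, entropy) computed from GLCMs at several offsets form the shallow feature vector that the paper feeds into the CNN.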

  3. Investigation of Joint Visibility Between SAR and Optical Images of Urban Environments

    NASA Astrophysics Data System (ADS)

    Hughes, L. H.; Auer, S.; Schmitt, M.

    2018-05-01

    In this paper, we present a work-flow to investigate the joint visibility between very-high-resolution SAR and optical images of urban scenes. For this task, we extend the simulation framework SimGeoI to enable a simulation of individual pixels rather than complete images. Using the extended SimGeoI simulator, we carry out a case study using a TerraSAR-X staring spotlight image and a WorldView-2 panchromatic image acquired over the city of Munich, Germany. The results of this study indicate that about 55% of the scene is visible in both images, and is thus suitable for matching and data fusion endeavours, while about 25% of the scene is affected by either radar shadow or optical occlusion. Taking the image acquisition parameters into account, our findings can provide support regarding the definition of upper bounds for image fusion tasks, as well as help to improve acquisition planning with respect to different application goals.

  4. Dynamic monitoring of the Poyang Lake wetland by integrating Landsat and MODIS observations

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Chen, Lifan; Huang, Bo; Michishita, Ryo; Xu, Bing

    2018-05-01

    The spatial and temporal adaptive reflectance fusion model (STARFM) has limited practical applications, because it often enforces the invalid assumption that land cover change does not occur between the prior/posterior and target dates. To deal with this challenge, we proposed a spatiotemporal adaptive fusion model for NDVI products (STAFFN) to better blend highly resolved spatial and temporal information from multiple sensors. Compared with existing spatiotemporal fusion models, the proposed model integrates an initial prediction into a hierarchical selection strategy of similar pixels, and can capture landscape changes very well. Experiments comparing spatial detail and temporal abundance among MODIS, Landsat and the fusion results show that the predicted data can accurately capture temporal changes while preserving fine-spatial-resolution details. Model comparison also shows that STAFFN produces consistently lower biases than STARFM and the flexible spatiotemporal data fusion model (FSDAF). A synthetic NDVI product (342 scenes in total) with a 16-day revisit frequency at 30 m spatial resolution was produced with STAFFN for 2000 to 2014. With this product, we further provided a 15-year spatiotemporal change monitoring map of the Poyang Lake wetland. Results show that the water area in the dry season tended to lose 38.3 km² yr⁻¹ in coverage over the past 15 years, decreasing by 18.24% of the lake area between 2001 and 2014. The wetland vegetation group tended to increase in coverage, gaining 10.08% of the lake area over the past 15 years. Our study indicates that the STAFFN model can reasonably be applied to monitoring wetland dynamics, and can easily be adapted for use with other ecosystems.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, Julian; Tate, Mark W.; Shanks, Katherine S.

    Pixel Array Detectors (PADs) consist of an x-ray sensor layer bonded pixel-by-pixel to an underlying readout chip. This approach allows both the sensor and the custom pixel electronics to be tailored independently to best match the x-ray imaging requirements. Here we describe the hybridization of CdTe sensors to two different charge-integrating readout chips, the Keck PAD and the Mixed-Mode PAD (MM-PAD), both developed previously in our laboratory. The charge-integrating architecture of each of these PADs extends the instantaneous counting rate by many orders of magnitude beyond that obtainable with photon counting architectures. The Keck PAD chip consists of rapid, 8-frame, in-pixel storage elements with framing periods <150 ns. The second detector, the MM-PAD, has an extended dynamic range by utilizing an in-pixel overflow counter coupled with charge removal circuitry activated at each overflow. This allows the recording of signals from the single-photon level to tens of millions of x-rays/pixel/frame while framing at 1 kHz. Both detector chips consist of a 128×128 pixel array with (150 µm)² pixels.
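
    The MM-PAD's dynamic-range extension can be modeled arithmetically: the in-pixel counter records how many times a full well of charge was removed, and the recovered signal is the counter times the full-well capacity plus the residual left on the integrator. The full-well number below is illustrative, not the chip's actual capacity:

```python
def mm_pad_read(true_charge, full_well=100_000):
    """Model of an in-pixel overflow counter with charge removal: each time
    the integrator exceeds full_well, one full well of charge is removed and
    the counter increments. Returns (overflow_count, residual_charge)."""
    overflows, residual = divmod(true_charge, full_well)
    return overflows, residual

def reconstruct(overflows, residual, full_well=100_000):
    """Recover the total signal from the counter and the residual."""
    return overflows * full_well + residual

signal = 12_345_678          # e.g. many x-rays in one pixel in one frame
ovf, res = mm_pad_read(signal)
assert reconstruct(ovf, res) == signal
```

    The counter extends the range by a factor of (counter capacity + 1) over the bare integrator, which is how signals of tens of millions of x-rays per pixel per frame remain recordable.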

  6. The effect of imposing 'fractional abundance constraints' onto the multilayer perceptron for sub-pixel land cover classification

    NASA Astrophysics Data System (ADS)

    Heremans, Stien; Suykens, Johan A. K.; Van Orshoven, Jos

    2016-02-01

    To be physically interpretable, sub-pixel land cover fractions or abundances should fulfill two constraints, the Abundance Non-negativity Constraint (ANC) and the Abundance Sum-to-one Constraint (ASC). This paper focuses on the effect of imposing these constraints onto the MultiLayer Perceptron (MLP) for a multi-class sub-pixel land cover classification of a time series of low resolution MODIS images covering the northern part of Belgium. Two constraining modes were compared: (i) an in-training approach that uses 'softmax' as the transfer function in the MLP's output layer and (ii) a post-training approach that linearly rescales the outputs of the unconstrained MLP. Our results demonstrate that the pixel-level prediction accuracy is markedly increased by the explicit enforcement, both in-training and post-training, of the ANC and the ASC. For aggregations of pixels (municipalities), the constrained perceptrons perform at least as well as their unconstrained counterparts. Although the difference in performance between the in-training and post-training approaches is small, we recommend the former for integrating the fractional abundance constraints into MLPs meant for sub-pixel land cover estimation, regardless of the targeted level of spatial aggregation.
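
    The two constraining modes can be sketched for a single pixel. Softmax satisfies the ANC and ASC by construction; the post-training variant shown here clips negative outputs and then rescales linearly (the clipping step is an assumption for illustration, as the paper's exact rescaling is not reproduced here):

```python
import numpy as np

def softmax(z):
    """In-training constraint: softmax output layer yields non-negative
    fractions that sum to one by construction."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def post_training_rescale(raw):
    """Post-training constraint: clip negatives (ANC), then linearly
    rescale so the fractions sum to one (ASC)."""
    clipped = np.clip(raw, 0.0, None)
    return clipped / clipped.sum(axis=-1, keepdims=True)

# Hypothetical unconstrained MLP outputs for one pixel, 3 land-cover classes.
raw = np.array([0.7, 0.45, -0.05])
in_training = softmax(raw)                 # mode (i)
post_training = post_training_rescale(raw)  # mode (ii)
```

    Both outputs are valid abundance vectors, but they differ: softmax smooths the distribution, while rescaling preserves the ratios of the positive raw outputs, which is one reason the two modes can yield slightly different accuracies.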

  7. On Certain New Methodology for Reducing Sensor and Readout Electronics Circuitry Noise in Digital Domain

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Miko, Joseph; Bradley, Damon; Heinzen, Katherine

    2008-01-01

    NASA's Hubble Space Telescope (HST) and upcoming cosmology science missions carry instruments with multiple focal planes populated with many large sensor detector arrays. These sensors are passively cooled to low temperatures for low-level light (L3) and near-infrared (NIR) signal detection, and the sensor readout electronics circuitry must perform at extremely low noise levels to enable new required science measurements. Because we are at the technological edge of enhanced performance for sensors and readout electronics circuitry, as determined by the thermal noise level at a given temperature in the analog domain, we must find new ways of further compensating for the noise in the digital signal domain. To facilitate this new approach, state-of-the-art sensors are augmented at their array hardware boundaries by non-illuminated reference pixels, which can be used to reduce noise attributed to the sensors. A few methodologies have been proposed for processing the information carried by reference pixels in the digital domain, as employed by the Hubble Space Telescope and James Webb Space Telescope projects. These methods use spatial and temporal statistical parameters derived from boundary reference pixel information to enhance the active (non-reference) pixel signals. To make a step beyond this heritage methodology, we apply the NASA-developed technology known as the Hilbert-Huang Transform Data Processing System (HHT-DPS) to reference pixel information processing, for use in reconfigurable hardware on-board a spaceflight instrument or in post-processing on the ground. The methodology examines signal processing for a 2-D domain, in which high-variance components of the thermal noise are carried by both active and reference pixels, similar to the processing of low-voltage differential signals and the subtraction of a single analog reference pixel from all active pixels on the sensor.
Heritage methods using the aforementioned statistical parameters in the digital domain (such as statistical averaging of the reference pixels themselves) zero out the high-variance components, and the counterpart components in the active pixels remain uncorrected. This paper describes how the new methodology was demonstrated through analysis of fast-varying noise components using the Hilbert-Huang Transform Data Processing System (HHT-DPS) tool developed at NASA and the high-level programming language MATLAB (trademark of MathWorks Inc.), as well as alternative methods for correcting for the high-variance noise component, using HgCdTe sensor data. Post-processing of NASA Hubble Space Telescope data, as well as on-board processing of data from all sensor channels in future deep-space cosmology instruments, would benefit from this effort.
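
The heritage reference-pixel correction that the abstract contrasts with can be sketched in a few lines. This is a minimal illustration only, not the HHT-DPS methodology; the array layout (non-illuminated reference columns at the left and right edges) and the row-wise averaging scheme are assumptions for the example:

```python
import numpy as np

def reference_pixel_correct(frame: np.ndarray, n_ref: int = 4) -> np.ndarray:
    """Subtract the per-row mean of boundary reference pixels from the frame.

    Assumes the left and right `n_ref` columns are non-illuminated
    reference pixels, as on HST/JWST-style NIR sensor arrays.
    """
    ref = np.hstack([frame[:, :n_ref], frame[:, -n_ref:]])
    row_offset = ref.mean(axis=1, keepdims=True)  # common-mode drift per row
    return frame - row_offset

rng = np.random.default_rng(0)
signal = np.zeros((8, 16))
signal[:, 4:-4] = 100.0                 # illuminated active region
drift = rng.normal(0, 5, size=(8, 1))   # row-wise common-mode noise
frame = signal + drift                  # same drift on ref and active pixels
corrected = reference_pixel_correct(frame)
```

Because the simulated drift is common to reference and active pixels, subtracting the per-row reference mean removes it exactly; the abstract's point is that this kind of averaging removes only the components the reference pixels share with the active pixels, leaving the fast-varying components to methods such as HHT-DPS.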

  8. A Data Field Method for Urban Remotely Sensed Imagery Classification Considering Spatial Correlation

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Qin, K.; Zeng, C.; Zhang, E. B.; Yue, M. X.; Tong, X.

    2016-06-01

    Spatial correlation between pixels is important information for remotely sensed imagery classification. The data field method and spatial autocorrelation statistics have been utilized to describe and model the spatial information of local pixels. The original data field method can represent the spatial interactions of neighbourhood pixels effectively. However, its focus on measuring the grey-level change between the central pixel and the neighbourhood pixels results in exaggerating the contribution of the central pixel to the whole local window. Besides, Geary's C has also been proven to characterise and quantify the spatial correlation between each pixel and its neighbourhood pixels well, but the extracted object is badly delineated, with a distracting salt-and-pepper effect of isolated misclassified pixels. To correct this defect, we introduce the data field method for filtering and noise limitation. Moreover, the original data field method is enhanced by considering each pixel in the window as the central pixel to compute statistical characteristics between it and its neighbourhood pixels. The last step employs a support vector machine (SVM) for the classification of multi-features (e.g. the spectral feature and spatial correlation feature). In order to validate the effectiveness of the developed method, experiments are conducted on different remotely sensed images containing multiple complex object classes. The results show that the developed method outperforms the traditional method in terms of classification accuracies.
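
Geary's C, which the abstract uses to quantify local spatial correlation, can be computed for a local window as follows. This is a minimal sketch with rook (4-neighbour) contiguity and binary weights; the exact window handling of the paper's method is not reproduced:

```python
import numpy as np

def gearys_c(window: np.ndarray) -> float:
    """Geary's C for a 2-D window with rook contiguity and binary weights.

    C near 1 indicates no spatial autocorrelation; C < 1 indicates positive
    autocorrelation (smooth neighbourhoods); C > 1 indicates negative
    autocorrelation (salt-and-pepper patterns).
    """
    x = window.astype(float)
    n = x.size
    num = 0.0
    w_sum = 0.0
    # squared differences over horizontal and vertical neighbour pairs
    # (each symmetric pair counted once; the factor cancels in the ratio)
    for a, b in [(x[:, :-1], x[:, 1:]), (x[:-1, :], x[1:, :])]:
        num += ((a - b) ** 2).sum()
        w_sum += a.size
    denom = ((x - x.mean()) ** 2).sum()
    return (n - 1) * num / (2 * w_sum * denom)

smooth = np.add.outer(np.arange(5), np.arange(5)).astype(float)  # gradient
checker = np.indices((5, 5)).sum(axis=0) % 2 * 1.0               # alternating
```

On the smooth gradient window the statistic falls well below 1, while on the checkerboard it exceeds 1, matching the interpretation above.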

  9. Circuit-level optimisation of a-Si:H TFT-based AMOLED pixel circuits for maximum hold current

    NASA Astrophysics Data System (ADS)

    Foroughi, Aidin; Mehrpoo, Mohammadreza; Ashtiani, Shahin J.

    2013-11-01

    The design of AMOLED pixel circuits involves manifold constraints and trade-offs, which provide incentive for circuit designers to seek optimal solutions for different objectives. In this article, we present a discussion on the viability of an optimal solution to achieve the maximum hold current. A compact formula for component sizing in a conventional 2T1C pixel is, therefore, derived. Compared to SPICE simulation results for several pixel sizes, our predicted optimum sizing yields maximum currents with errors of less than 0.4%.

  10. Modeling and analysis of hybrid pixel detector deficiencies for scientific applications

    NASA Astrophysics Data System (ADS)

    Fahim, Farah; Deptuch, Grzegorz W.; Hoff, James R.; Mohseni, Hooman

    2015-08-01

    Semiconductor hybrid pixel detectors often consist of a pixelated sensor layer bump-bonded to a matching pixelated readout integrated circuit (ROIC). The sensor can range from high-resistivity Si to III-V materials, whereas a Si CMOS process is typically used to manufacture the ROIC. Independent device-physics and electronic design automation (EDA) tools, with significantly different solvers, are used to determine sensor characteristics and to verify the functional performance of ROICs, respectively. Some physics solvers provide the capability of transferring data to the EDA tool. However, single-pixel transient simulations are either not feasible due to convergence difficulties or are prohibitively long. A simplified sensor model, which includes a current pulse in parallel with a detector equivalent capacitor, is often used; even then, SPICE-type top-level (entire array) simulations range from days to weeks. In order to analyze detector deficiencies for a particular scientific application, accurately defined transient behavioral models of all the functional blocks are required. Furthermore, various simulations of the entire array, such as transient, noise, Monte Carlo, and inter-pixel effects, need to be performed within a reasonable time frame without trading off accuracy. The sensor and the analog front-end can be modeled using a real-number modeling language as complex mathematical functions, or detailed data can be saved to text files, for further top-level digital simulations. Parasitically aware digital timing is extracted in standard delay format (SDF) from the pixel digital back-end layout as well as the periphery of the ROIC. For any given input, detector-level worst-case and best-case simulations are performed using a Verilog simulation environment to determine the output. Each top-level transient simulation takes no more than 10-15 minutes.
The impact of changing key parameters such as sensor Poissonian shot noise, analog front-end bandwidth, jitter due to clock distribution etc. can be accurately analyzed to determine ROIC architectural viability and bottlenecks. Hence the impact of the detector parameters on the scientific application can be studied.

  11. Measurements with MÖNCH, a 25 μm pixel pitch hybrid pixel detector

    NASA Astrophysics Data System (ADS)

    Ramilli, M.; Bergamaschi, A.; Andrae, M.; Brückner, M.; Cartier, S.; Dinapoli, R.; Fröjdh, E.; Greiffenberg, D.; Hutwelker, T.; Lopez-Cuenca, C.; Mezza, D.; Mozzanica, A.; Ruat, M.; Redford, S.; Schmitt, B.; Shi, X.; Tinti, G.; Zhang, J.

    2017-01-01

    MÖNCH is a hybrid silicon pixel detector based on charge integration and with analog readout, featuring a pixel size of 25×25 μm2. The latest working prototype consists of an array of 400×400 identical pixels for a total active area of 1×1 cm2. Its design is optimized for the single photon regime. An exhaustive characterization of this large-area prototype has been carried out in the past months, and it confirms an ENC on the order of 35 electrons RMS and a dynamic range of ~4×12 keV photons in high gain mode, which increases to ~100×12 keV photons with the lowest gain setting. The low noise levels of MÖNCH make it a suitable candidate for X-ray detection at energies around 1 keV and below. Imaging applications in particular can benefit significantly from the use of MÖNCH: due to its extremely small pixel pitch, the detector intrinsically offers excellent position resolution. Moreover, in low flux conditions, charge sharing between neighboring pixels allows the use of position interpolation algorithms which grant a resolution at the micrometer level. Its energy reconstruction and imaging capabilities have been tested for the first time at a low energy beamline at PSI, with photon energies between 1.75 keV and 3.5 keV, and results will be shown.
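
Position interpolation from charge sharing can be illustrated with a simple centre-of-gravity estimate over a 2×2 cluster. This is a toy stand-in, not MÖNCH's actual interpolation algorithm, and the charge values are invented:

```python
import numpy as np

def cog_position(cluster: np.ndarray, pitch_um: float = 25.0):
    """Centre-of-gravity hit position from a charge-sharing cluster.

    Returns (x, y) in micrometres relative to the centre of the cluster's
    top-left pixel, weighting each pixel position by its collected charge.
    """
    q = cluster.astype(float)
    total = q.sum()
    cols = np.arange(q.shape[1]) * pitch_um
    rows = np.arange(q.shape[0]) * pitch_um
    x = (q.sum(axis=0) * cols).sum() / total
    y = (q.sum(axis=1) * rows).sum() / total
    return x, y

# photon deposits 3/4 of its charge in the left pixel of the top row:
# the weighted estimate lands 1/4 of the pitch toward the right pixel
x, y = cog_position(np.array([[3.0, 1.0], [0.0, 0.0]]))
```

With charge split across pixels, the estimate resolves positions finer than the 25 μm pitch, which is the mechanism behind the micrometer-level resolution claimed above.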

  12. Lumbar lordosis restoration following single-level instrumented fusion comparing 4 commonly used techniques.

    PubMed

    Dimar, John R; Glassman, Steven D; Vemuri, Venu M; Esterberg, Justin L; Howard, Jennifer M; Carreon, Leah Y

    2011-11-09

    A major sequela of lumbar fusion is the acceleration of adjacent-level degeneration due to decreased lumbar lordosis. We evaluated the effectiveness of 4 common fusion techniques in restoring lordosis: instrumented posterolateral fusion, translumbar interbody fusion, anteroposterior fusion with posterior instrumentation, and anterior interbody fusion with lordotic threaded (LT) cages (Medtronic Sofamor Danek, Memphis, Tennessee). Radiographs were measured preoperatively, immediately postoperatively, and a minimum of 6 months postoperatively. Parameters measured included anterior and posterior disk space height, lumbar lordosis from L3 to S1, and surgical-level lordosis. No significant difference in demographics existed among the 4 groups. All preoperative parameters were similar among the 4 groups. Lumbar lordosis at final follow-up showed no difference between the anteroposterior fusion with posterior instrumentation, translumbar interbody fusion, and LT cage groups, although the posterolateral fusion group showed a significant loss of lordosis (-10°) (P<.001). Immediately postoperatively and at follow-up, the LT cage group had a significantly greater amount of lordosis and showed maintenance of anterior and posterior disk space height postoperatively compared with the other groups. Instrumented posterolateral fusion produces a greater loss of lordosis compared with anteroposterior fusion with posterior instrumentation, translumbar interbody fusion, and LT cages. Maintenance of lordosis and anterior and posterior disk space height is significantly better with anterior interbody fusion with LT cages. Copyright 2011, SLACK Incorporated.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shin, H; Yoon, D; Jung, J

    Purpose: The purpose of this study is to suggest a tumor monitoring technique using prompt gamma rays emitted during the reaction between an antiproton and a boron particle, and to verify the increase of the therapeutic effectiveness of antiproton boron fusion therapy using Monte Carlo simulation code. Methods: We acquired the percentage depth dose of the antiproton beam from a water phantom with and without three boron uptake regions (regions A, B, and C) using the F6 tally of MCNPX. The tomographic image was reconstructed using prompt gamma ray events from the reaction between the antiproton and boron during the treatment, from 32 projections (reconstruction algorithm: MLEM). The image reconstruction used an 80 × 80 pixel matrix with a pixel size of 5 mm, and the energy window was set to 10%. Results: The prompt gamma ray peak for imaging was observed at 719 keV in the energy spectrum using the F8 tally function (energy deposition tally) of the MCNPX code. The tomographic image shows that the boron uptake regions were successfully identified from the simulation results. In terms of the receiver operating characteristic curve analysis, the area under the curve values were 0.647 (region A), 0.679 (region B), and 0.632 (region C). The SNR values increased as the tumor diameter increased. The CNR indicates the relative signal intensity within different regions; the CNR values also increased as the difference in BUR diameters increased. Conclusion: We confirmed the feasibility of tumor monitoring during antiproton therapy as well as the superior therapeutic effect of antiproton boron fusion therapy. This result can be beneficial for the development of a more accurate particle therapy.
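
The MLEM reconstruction algorithm named in the Methods can be sketched in a few lines. This is the generic textbook MLEM update applied to a toy system, not the authors' implementation; the system matrix and measurements below are made up:

```python
import numpy as np

def mlem(A: np.ndarray, y: np.ndarray, n_iter: int = 200) -> np.ndarray:
    """Maximum-Likelihood Expectation-Maximization reconstruction.

    A: system matrix (projection bins x image pixels); y: measured counts.
    Multiplicative update: x <- x * A^T(y / Ax) / A^T 1.
    """
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)              # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x
        ratio = np.divide(y, proj, out=np.zeros_like(y, dtype=float),
                          where=proj > 0)
        x *= (A.T @ ratio) / sens
    return x

# toy 2-pixel "image" observed through 3 projection bins
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 5.0])
y = A @ x_true                        # noiseless measurements
x_hat = mlem(A, y)
```

On noiseless data with a full-rank system matrix the iteration converges to the true nonnegative solution; with Poisson-noisy counts it converges to the maximum-likelihood estimate instead.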

  14. Evaluating an image-fusion algorithm with synthetic-image-generation tools

    NASA Astrophysics Data System (ADS)

    Gross, Harry N.; Schott, John R.

    1996-06-01

    An algorithm that combines spectral mixing and nonlinear optimization is used to fuse multiresolution images. Image fusion merges images of different spatial and spectral resolutions to create a high spatial resolution multispectral combination. High spectral resolution allows identification of materials in the scene, while high spatial resolution locates those materials. In this algorithm, conventional spectral mixing estimates the percentage of each material (called endmembers) within each low resolution pixel. Three spectral mixing models are compared: unconstrained, partially constrained, and fully constrained. In the partially constrained application, the endmember fractions are required to sum to one. In the fully constrained application, all fractions are additionally required to lie between zero and one. While negative fractions seem inappropriate, they can arise from random spectral realizations of the materials. In the second part of the algorithm, the low resolution fractions are used as inputs to a constrained nonlinear optimization that calculates the endmember fractions for the high resolution pixels. The constraints mirror the low resolution constraints and maintain consistency with the low resolution fraction results. The algorithm can use one or more higher resolution sharpening images to locate the endmembers to high spatial accuracy. The algorithm was evaluated with synthetic image generation (SIG) tools. A SIG developed image can be used to control the various error sources that are likely to impair the algorithm performance. These error sources include atmospheric effects, mismodeled spectral endmembers, and variability in topography and illumination. By controlling the introduction of these errors, the robustness of the algorithm can be studied and improved upon. The motivation for this research is to take advantage of the next generation of multi/hyperspectral sensors.
Although the hyperspectral images will be of modest to low resolution, fusing them with high resolution sharpening images will produce a higher spatial resolution land cover or material map.
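
The unconstrained and partially constrained (sum-to-one) unmixing variants described above can be sketched as follows. The endmember spectra and fractions are invented for illustration, and the fully constrained case (fractions additionally bounded in [0, 1]) would require a quadratic programming solver, which is omitted here:

```python
import numpy as np

# endmember spectra as columns (4 bands x 3 materials) -- made-up numbers
E = np.array([[0.9, 0.1, 0.4],
              [0.8, 0.2, 0.5],
              [0.2, 0.7, 0.5],
              [0.1, 0.9, 0.6]])

f_true = np.array([0.5, 0.3, 0.2])        # true fractions, summing to one
pixel = E @ f_true                        # noiseless mixed-pixel spectrum

# unconstrained unmixing: ordinary least squares
f_ols, *_ = np.linalg.lstsq(E, pixel, rcond=None)

# partially constrained (sum-to-one) via the Lagrange-multiplier closed form:
# minimize ||E f - p||^2 subject to 1^T f = 1
G = np.linalg.inv(E.T @ E)
ones = np.ones(3)
lam = (ones @ G @ E.T @ pixel - 1.0) / (ones @ G @ ones)
f_sto = G @ (E.T @ pixel - lam * ones)
```

On exact data both solutions recover the true fractions; with noise, the sum-to-one constraint regularizes the estimate, and negative fractions can still appear, which is why the fully constrained model is compared as well.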

  15. A robust sub-pixel edge detection method of infrared image based on tremor-based retinal receptive field model

    NASA Astrophysics Data System (ADS)

    Gao, Kun; Yang, Hu; Chen, Xiaomei; Ni, Guoqiang

    2008-03-01

    Because of the complex thermal objects in an infrared image, the prevalent image edge detection operators are often suited only to a certain scene and sometimes extract overly wide edges. From a biological point of view, image edge detection operators work reliably when assuming a convolution-based receptive field architecture. A DoG (Difference-of-Gaussians) model filter based on the ON-center retinal ganglion cell receptive field architecture, with artificial eye tremors introduced, is proposed for image contour detection. To handle the blurred edges of an infrared image, orthogonal polynomial interpolation within the neighborhood of each rough edge pixel is then used to refine the detected edges to sub-pixel accuracy. Numerical simulations show that this method can locate the target edge accurately and robustly.
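
The DoG front end of such a contour detector can be sketched with two Gaussian blurs. The centre/surround sigmas below are illustrative choices; the artificial tremor mechanism and the sub-pixel interpolation stage of the paper are not reproduced:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_edge_response(image: np.ndarray,
                      sigma_c: float = 1.0,
                      sigma_s: float = 1.6) -> np.ndarray:
    """ON-centre DoG response: narrow centre minus wider surround Gaussian.

    The response is near zero in uniform regions and peaks near
    intensity transitions, which marks candidate contour pixels.
    """
    return gaussian_filter(image, sigma_c) - gaussian_filter(image, sigma_s)

# vertical step edge: response concentrates around the transition
img = np.zeros((32, 32))
img[:, 16:] = 1.0
resp = dog_edge_response(img)
```

The rough edge map produced this way would then be refined to sub-pixel positions, as the abstract describes.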

  16. A low-noise wide-dynamic-range event-driven detector using SOI pixel technology for high-energy particle imaging

    NASA Astrophysics Data System (ADS)

    Shrestha, Sumeet; Kamehama, Hiroki; Kawahito, Shoji; Yasutomi, Keita; Kagawa, Keiichiro; Takeda, Ayaki; Tsuru, Takeshi Go; Arai, Yasuo

    2015-08-01

    This paper presents a low-noise wide-dynamic-range pixel design for a high-energy particle detector in astronomical applications. A silicon-on-insulator (SOI) based detector is used for the detection of a wide energy range of high-energy particles (mainly X-rays). The sensor has a thin layer of SOI CMOS readout circuitry and a thick layer of high-resistivity detector vertically stacked in a single chip. Pixel circuits are divided into two parts: a signal sensing circuit and an event detection circuit. The event detection circuit, consisting of a comparator and logic circuits that detect the incidence of a high-energy particle, categorizes the incident photon into two energy groups using an appropriate energy threshold and generates a two-bit code for the event and energy level. The code for energy level is then used to select the gain of the in-pixel amplifier for the detected signal, providing a function of high-dynamic-range signal measurement. The two-bit code for the event and energy level is scanned in the event scanning block, and the signals from the hit pixels only are read out. The variable-gain in-pixel amplifier uses a continuous integrator and integration-time control for the variable gain. The proposed design allows small-signal detection and a wide dynamic range due to the adaptive gain technique and the capability of the correlated double sampling (CDS) technique of canceling the kTC noise of the charge detector.
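
The two-bit event/energy coding and the gain selection it drives can be mimicked in a toy model. The thresholds and gain values below are invented for illustration and are not taken from the sensor design:

```python
def encode_event(signal: float,
                 noise_floor: float = 0.05,
                 energy_threshold: float = 1.0):
    """Toy two-bit (event, energy) code and the amplifier gain it selects.

    Hypothetical scheme: an event bit fires above the noise floor; the
    energy bit selects a low in-pixel gain for large signals so the
    integrator does not saturate, and a high gain for small signals.
    """
    event = 1 if signal > noise_floor else 0
    high_energy = 1 if signal > energy_threshold else 0
    gain = 0.0 if not event else (1.0 if high_energy else 16.0)
    return event, high_energy, gain

# no hit, a small (high-gain) hit, and a large (low-gain) hit
codes = [encode_event(s) for s in (0.0, 0.5, 4.0)]
```

Only pixels with the event bit set would be read out, which is how the event-driven scanning described above reduces the readout load.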

  17. Linear dynamic range enhancement in a CMOS imager

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor)

    2008-01-01

    A CMOS imager with increased linear dynamic range but without degradation in noise, responsivity, linearity, fixed-pattern noise, or photometric calibration comprises a linear calibrated dual gain pixel in which the gain is reduced after a pre-defined threshold level by switching in an additional capacitance. The pixel may include a novel on-pixel latch circuit that is used to switch in the additional capacitance.

  18. Moving object detection using dynamic motion modelling from UAV aerial images.

    PubMed

    Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid

    2014-01-01

    Motion analysis based moving object detection from UAV aerial images is still an unsolved issue due to the lack of proper motion estimation. Existing moving object detection approaches from UAV aerial images have not dealt with motion-based pixel intensity measurement to detect moving objects robustly. Besides, current research on moving object detection from UAV aerial images mostly depends on either the frame difference or the segmentation approach separately. There are two main purposes for this research: firstly, to develop a new motion model called DMM (dynamic motion model), and secondly, to apply the proposed segmentation approach SUED (segmentation using edge based dilation) using frame difference embedded together with the DMM model. The proposed DMM model provides effective search windows based on the highest pixel intensity, so that SUED segments only the specific area of the moving object rather than searching the whole frame. At each stage of the proposed scheme, experimental fusion of DMM and SUED extracts moving objects faithfully. Experimental results reveal that the proposed DMM and SUED have successfully demonstrated the validity of the proposed methodology.
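
The frame-difference step, and the idea of restricting segmentation to a search window around the strongest motion response, can be sketched as follows. This is a toy stand-in; the actual DMM and SUED algorithms are considerably more involved:

```python
import numpy as np

def moving_object_mask(prev: np.ndarray, curr: np.ndarray,
                       thresh: float = 10.0) -> np.ndarray:
    """Binary motion mask by absolute frame differencing."""
    return np.abs(curr.astype(float) - prev.astype(float)) > thresh

def search_window(mask: np.ndarray):
    """Bounding box (y0, y1, x0, x1) of the motion response, so later
    segmentation can run on this window instead of the whole frame."""
    ys, xs = np.nonzero(mask)
    return int(ys.min()), int(ys.max()), int(xs.min()), int(xs.max())

prev = np.zeros((20, 20))
curr = prev.copy()
curr[5:8, 5:8] = 255.0           # a small bright object appears
mask = moving_object_mask(prev, curr)
window = search_window(mask)
```

Restricting the subsequent segmentation to `window` is the efficiency argument the abstract makes for combining a motion model with frame differencing.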

  19. Change detection using landsat time series: A review of frequencies, preprocessing, algorithms, and applications

    NASA Astrophysics Data System (ADS)

    Zhu, Zhe

    2017-08-01

    The free and open access to all archived Landsat images, granted in 2008, has completely changed the way Landsat data are used. Many novel change detection algorithms based on Landsat time series have been developed. We present a comprehensive review of four important aspects of change detection studies based on Landsat time series: frequencies, preprocessing, algorithms, and applications. We observed the trend that the more recent the study, the higher the frequency of the Landsat time series used. We reviewed a series of image preprocessing steps, including atmospheric correction, cloud and cloud shadow detection, and composite/fusion/metrics techniques. We divided all change detection algorithms into six categories: thresholding, differencing, segmentation, trajectory classification, statistical boundary, and regression. Within each category, six major characteristics of the different algorithms, such as frequency, change index, univariate/multivariate, online/offline, abrupt/gradual change, and sub-pixel/pixel/spatial, were analyzed. Moreover, some of the widely-used change detection algorithms were also discussed. Finally, we reviewed different change detection applications by dividing them into two categories: change target and change agent detection.
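
A minimal example of the thresholding family in this taxonomy: a univariate, online, abrupt-change detector that flags the first observation deviating strongly from the robust statistics of its own history. The NDVI values and the threshold are invented for illustration:

```python
import numpy as np

def detect_abrupt_change(series: np.ndarray, k: float = 3.0):
    """Return the index of the first abrupt change, or None.

    For each time step, compare the new observation against the median
    and MAD-based robust sigma of the history so far (univariate,
    online, abrupt-change detection by thresholding).
    """
    for t in range(3, len(series)):
        hist = series[:t]
        med = np.median(hist)
        sigma = 1.4826 * np.median(np.abs(hist - med)) + 1e-9
        if abs(series[t] - med) > k * sigma:
            return t
    return None

# stable forest NDVI, then a disturbance at index 5
ndvi = np.array([0.71, 0.69, 0.70, 0.72, 0.70, 0.35, 0.33])
t_change = detect_abrupt_change(ndvi)
```

The other categories in the review (trajectory classification, statistical boundary, regression) replace this per-observation threshold with a model of the whole time series.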

  20. Comparison of region-of-interest-averaged and pixel-averaged analysis of DCE-MRI data based on simulations and pre-clinical experiments

    NASA Astrophysics Data System (ADS)

    He, Dianning; Zamora, Marta; Oto, Aytekin; Karczmar, Gregory S.; Fan, Xiaobing

    2017-09-01

    Differences between region-of-interest (ROI) and pixel-by-pixel analysis of dynamic contrast enhanced (DCE) MRI data were investigated in this study with computer simulations and pre-clinical experiments. ROIs were simulated with 10, 50, 100, 200, 400, and 800 different pixels. For each pixel, a contrast agent concentration as a function of time, C(t), was calculated using the Tofts DCE-MRI model with randomly generated physiological parameters (Ktrans and ve) and the Parker population arterial input function. The average C(t) for each ROI was calculated and then Ktrans and ve for the ROI were extracted. The simulations were run 100 times for each ROI with new Ktrans and ve generated. In addition, white Gaussian noise was added to each C(t) with 3, 6, and 12 dB signal-to-noise ratios. For pre-clinical experiments, Copenhagen rats (n = 6) with prostate tumors implanted in the hind limb were used. The DCE-MRI data were acquired with a temporal resolution of ~5 s in a 4.7 T animal scanner, before, during, and after a bolus injection (<5 s) of Gd-DTPA, for a total imaging duration of ~10 min. Ktrans and ve were calculated in two ways: (i) by fitting C(t) for each pixel, and then averaging the pixel values over the entire ROI, and (ii) by averaging C(t) over the entire ROI, and then fitting the averaged C(t) to extract Ktrans and ve. The simulation results showed that in heterogeneous ROIs, the pixel-by-pixel averaged Ktrans was ~25% to ~50% larger (p < 0.01) than the ROI-averaged Ktrans. At higher noise levels, the pixel-averaged Ktrans was greater than the ‘true’ Ktrans, but the ROI-averaged Ktrans was lower than the ‘true’ Ktrans; the ROI-averaged Ktrans was closer to the true Ktrans than the pixel-averaged Ktrans at high noise levels. In pre-clinical experiments, the pixel-by-pixel averaged Ktrans was ~15% larger than the ROI-averaged Ktrans.
Overall, with the Tofts model, the extracted physiological parameters from the pixel-by-pixel averages were larger than the ROI averages. These differences were dependent on the heterogeneity of the ROI.
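
The core effect (fitting a nonlinear model to an averaged curve is not the same as averaging pixel-wise fits) can be demonstrated with a toy monoexponential washout in place of the full Tofts model; the rate constants below are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def washout(t, k):
    """Toy monoexponential tracer washout, a stand-in for the Tofts model,
    used only to show that averaging curves before fitting biases the
    fitted rate when the model is nonlinear in its parameter."""
    return np.exp(-k * t)

t = np.linspace(0.0, 20.0, 200)
k_pixels = np.array([0.1, 0.5])            # two heterogeneous "pixels"
roi_curve = (washout(t, k_pixels[0]) + washout(t, k_pixels[1])) / 2

# fit the ROI-averaged curve with a single washout rate
(k_roi,), _ = curve_fit(washout, t, roi_curve, p0=[0.3])
k_pixel_avg = k_pixels.mean()              # pixel-wise fits are exact here
```

Because the average of two exponentials lies above the exponential with the average rate, the fit to the ROI-averaged curve lands below the pixel-averaged rate, matching the direction of the difference reported above.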

  1. Interactive classification and content-based retrieval of tissue images

    NASA Astrophysics Data System (ADS)

    Aksoy, Selim; Marchisio, Giovanni B.; Tusk, Carsten; Koperski, Krzysztof

    2002-11-01

    We describe a system for interactive classification and retrieval of microscopic tissue images. Our system models tissues in pixel, region and image levels. Pixel level features are generated using unsupervised clustering of color and texture values. Region level features include shape information and statistics of pixel level feature values. Image level features include statistics and spatial relationships of regions. To reduce the gap between low-level features and high-level expert knowledge, we define the concept of prototype regions. The system learns the prototype regions in an image collection using model-based clustering and density estimation. Different tissue types are modeled using spatial relationships of these regions. Spatial relationships are represented by fuzzy membership functions. The system automatically selects significant relationships from training data and builds models which can also be updated using user relevance feedback. A Bayesian framework is used to classify tissues based on these models. Preliminary experiments show that the spatial relationship models we developed provide a flexible and powerful framework for classification and retrieval of tissue images.

  2. Estimating Forest Species Composition Using a Multi-Sensor Approach

    NASA Astrophysics Data System (ADS)

    Wolter, P. T.

    2009-12-01

    The magnitude, duration, and frequency of forest disturbance caused by the spruce budworm and forest tent caterpillar has increased over the last century due to a shift in forest species composition linked to historical fire suppression, forest management, and pesticide application that has fostered the increase in dominance of host tree species. Modeling approaches are currently being used to understand and forecast potential management effects in changing insect disturbance trends. However, detailed forest composition data needed for these efforts is often lacking. Here, we used partial least squares (PLS) regression to integrate satellite sensor data from Landsat, Radarsat-1, and PALSAR, as well as pixel-wise forest structure information derived from SPOT-5 sensor data (Wolter et al. 2009), to estimate species-level forest composition of 12 species required for modeling efforts. C-band Radarsat-1 data and L-band PALSAR data were frequently among the strongest predictors of forest composition. Pixel-level forest structure data were more important for estimating conifer rather than hardwood forest composition. The coefficients of determination for species relative basal area (RBA) ranged from 0.57 (white cedar) to 0.94 (maple) with RMSE of 8.88 to 6.44 % RBA, respectively. Receiver operating characteristic (ROC) curves were used to determine the effective lower limits of usefulness of species RBA estimates which ranged from 5.94 % (jack pine) to 39.41 % (black ash). These estimates were then used to produce a dominant forest species map for the study region with an overall accuracy of 78 %. Most notably, this approach facilitated discrimination of aspen from birch as well as spruce and fir from other conifer species which is crucial for the study of forest tent caterpillar and spruce budworm dynamics, respectively, in the Upper Midwest. 
Thus, use of PLS regression as a data fusion strategy has proven to be an effective tool for regional characterization of forest composition within spatially heterogeneous forests using large-format satellite sensor data.

  3. Design, optimization and evaluation of a "smart" pixel sensor array for low-dose digital radiography

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Liu, Xinghui; Ou, Hai; Chen, Jun

    2016-04-01

    Amorphous silicon (a-Si:H) thin-film transistors (TFTs) have been widely used to build flat-panel X-ray detectors for digital radiography (DR). As the demand for low-dose X-ray imaging grows, a detector with high signal-to-noise-ratio (SNR) pixel architecture emerges. "Smart" pixel is intended to use a dual-gate photosensitive TFT for sensing, storage, and switch. It differs from a conventional passive pixel sensor (PPS) and active pixel sensor (APS) in that all these three functions are combined into one device instead of three separate units in a pixel. Thus, it is expected to have high fill factor and high spatial resolution. In addition, it utilizes the amplification effect of the dual-gate photosensitive TFT to form a one-transistor APS that leads to a potentially high SNR. This paper addresses the design, optimization and evaluation of the smart pixel sensor and array for low-dose DR. We will design and optimize the smart pixel from the scintillator to TFT levels and validate it through optical and electrical simulation and experiments of a 4x4 sensor array.

  4. A testbed for architecture and fidelity trade studies in the Bayesian decision-level fusion of ATR products

    NASA Astrophysics Data System (ADS)

    Erickson, Kyle J.; Ross, Timothy D.

    2007-04-01

    Decision-level fusion is an appealing extension to automatic/assisted target recognition (ATR) as it is a low-bandwidth technique bolstered by a strong theoretical foundation that requires no modification of the source algorithms. Despite the relative simplicity of decision-level fusion, there are many options for fusion application and fusion algorithm specifications. This paper describes a tool that allows trade studies and optimizations across these many options, by feeding an actual fusion algorithm via models of the system environment. Models and fusion algorithms can be specified and then exercised many times, with accumulated results used to compute performance metrics such as probability of correct identification. Performance differences between the best of the contributing sources and the fused result constitute examples of "gain." The tool, constructed as part of the Fusion for Identifying Targets Experiment (FITE) within the Air Force Research Laboratory (AFRL) Sensors Directorate ATR Thrust, finds its main use in examining the relationships among conditions affecting the target, prior information, fusion algorithm complexity, and fusion gain. ATR as an unsolved problem provides the main challenges to fusion in its high cost and relative scarcity of training data, its variability in application, the inability to produce truly random samples, and its sensitivity to context. This paper summarizes the mathematics underlying decision-level fusion in the ATR domain and describes a MATLAB-based architecture for exploring the trade space thus defined. Specific dimensions within this trade space are delineated, providing the raw material necessary to define experiments suitable for multi-look and multi-sensor ATR systems.

  5. Local electrostatic interactions determine the diameter of fusion pores

    PubMed Central

    Guček, Alenka; Jorgačevski, Jernej; Górska, Urszula; Rituper, Boštjan; Kreft, Marko; Zorec, Robert

    2015-01-01

    In regulated exocytosis vesicular and plasma membranes merge to form a fusion pore in response to stimulation. The nonselective cation HCN channels are involved in the regulation of unitary exocytotic events by at least 2 mechanisms. They can affect SNARE-dependent exocytotic activity indirectly, via the modulation of free intracellular calcium; and/or directly, by altering local cation concentration, which affects fusion pore geometry, likely via electrostatic interactions. By monitoring membrane capacitance, we investigated how extracellular cation concentration affects fusion pore diameter in pituitary cells and astrocytes. At low extracellular divalent cation levels, predominantly transient fusion events with widely open fusion pores were detected. However, fusion events with predominantly narrow fusion pores were present at elevated levels of extracellular trivalent cations. These results show that electrostatic interactions likely help determine the stability of discrete fusion pore states by affecting fusion pore membrane composition. PMID:25835258

  6. Biomechanical effects of fusion levels on the risk of proximal junctional failure and kyphosis in lumbar spinal fusion surgery.

    PubMed

    Park, Won Man; Choi, Dae Kyung; Kim, Kyungsoo; Kim, Yongjung J; Kim, Yoon Hyuk

    2015-12-01

    Spinal fusion surgery is a widely used surgical procedure for sagittal realignment. Clinical studies have reported that spinal fusion may cause proximal junctional kyphosis and failure with disc failure, vertebral fracture, and/or failure at the implant-bone interface. However, the biomechanical injury mechanisms of proximal junctional kyphosis and failure remain unclear. A finite element model of the thoracolumbar spine was used. Nine fusion models with pedicle screw systems implanted at the L2-L3, L3-L4, L4-L5, L5-S1, L2-L4, L3-L5, L4-S1, L2-L5, and L3-S1 levels were developed based on the respective surgical protocols. The developed models simulated flexion-extension using hybrid testing protocol. When spinal fusion was performed at more distal levels, particularly at the L5-S1 level, the following biomechanical properties increased during flexion-extension: range of motion, stress on the annulus fibrosus fibers and vertebra at the adjacent motion segment, and the magnitude of axial forces on the pedicle screw at the uppermost instrumented vertebra. The results of this study demonstrate that more distal fusion levels, particularly in spinal fusion including the L5-S1 level, lead to greater increases in the risk of proximal junctional kyphosis and failure, as evidenced by larger ranges of motion, higher stresses on fibers of the annulus fibrosus and vertebra at the adjacent segment, and higher axial forces on the screw at the uppermost instrumented vertebra in flexion-extension. Therefore, fusion levels should be carefully selected to avoid proximal junctional kyphosis and failure. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Increasing Linear Dynamic Range of a CMOS Image Sensor

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata

    2007-01-01

    A generic design and a corresponding operating sequence have been developed for increasing the linear-response dynamic range of a complementary metal oxide/semiconductor (CMOS) image sensor. The design provides for linear calibrated dual-gain pixels that operate at high gain at a low signal level and at low gain at a signal level above a preset threshold. Unlike most prior designs for increasing dynamic range of an image sensor, this design does not entail any increase in noise (including fixed-pattern noise), decrease in responsivity or linearity, or degradation of photometric calibration. The figure is a simplified schematic diagram showing the circuit of one pixel and pertinent parts of its column readout circuitry. The conventional part of the pixel circuit includes a photodiode having a small capacitance, CD. The unconventional part includes an additional larger capacitance, CL, that can be connected to the photodiode via a transfer gate controlled in part by a latch. In the high-gain mode, the signal labeled TSR in the figure is held low through the latch, which also helps to adapt the gain on a pixel-by-pixel basis. Light must be coupled to the pixel through a microlens or by back illumination in order to obtain a high effective fill factor; this is necessary to ensure high quantum efficiency, a loss of which would minimize the efficacy of the dynamic-range-enhancement scheme. Once the level of illumination of the pixel exceeds the threshold, TSR is turned on, causing the transfer gate to conduct, thereby adding CL to the pixel capacitance. The added capacitance reduces the conversion gain, and increases the pixel electron-handling capacity, thereby providing an extension of the dynamic range. By use of an array of comparators also at the bottom of the column, photocharge voltages on sampling capacitors in each column are compared with a reference voltage to determine whether it is necessary to switch from the high-gain to the low-gain mode. 
Depending upon the built-in offset in each pixel and in each comparator, the point at which the gain change occurs will be different, adding gain-dependent fixed pattern noise in each pixel. The offset, and hence the fixed pattern noise, is eliminated by sampling the pixel readout charge four times by use of four capacitors (instead of two such capacitors as in conventional design) connected to the bottom of the column via electronic switches SHS1, SHR1, SHS2, and SHR2, respectively, corresponding to high and low values of the signals TSR and RST. The samples are combined in an appropriate fashion to cancel offset-induced errors, and provide spurious-free imaging with extended dynamic range.
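    The gain switch described above can be summarized numerically. The following sketch (with assumed capacitance values; the abstract does not give the actual design values) shows how connecting CL to the photodiode capacitance CD lowers the conversion gain and extends the linear range by the capacitance ratio:

```python
# Illustrative sketch of a dual-gain pixel response (assumed values, not the
# flight design). Conversion gain g = q / C; adding C_L lowers the gain and
# raises the electron-handling capacity by the ratio (C_D + C_L) / C_D.
Q_E = 1.602e-19  # electron charge (coulombs)

def conversion_gain_uV_per_e(c_farads):
    """Conversion gain in microvolts per electron for node capacitance C."""
    return Q_E / c_farads * 1e6

C_D = 5e-15   # assumed photodiode capacitance, 5 fF
C_L = 45e-15  # assumed added capacitance, 45 fF

g_high = conversion_gain_uV_per_e(C_D)        # high-gain mode (TSR low)
g_low  = conversion_gain_uV_per_e(C_D + C_L)  # low-gain mode (TSR on)

def linearize(v_signal_uV, high_gain_mode):
    """Map a pixel voltage back to electrons using the active mode's gain."""
    g = g_high if high_gain_mode else g_low
    return v_signal_uV / g
```

    With these assumed values the same photocharge reads out at a 10x smaller voltage in low-gain mode, which is exactly the linear-range extension the calibrated dual-gain scheme exploits.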

  8. Noise Reduction Techniques and Scaling Effects towards Photon Counting CMOS Image Sensors

    PubMed Central

    Boukhayma, Assim; Peizerat, Arnaud; Enz, Christian

    2016-01-01

    This paper presents an overview of the read noise in CMOS image sensors (CISs) based on four-transistor (4T) pixels, column-level amplification, and correlated multiple sampling. Starting from the analytical formula for input-referred noise, process-level optimizations, device choices, and circuit techniques at the pixel and column levels of the readout chain are derived and discussed. The noise-reduction techniques that can be implemented at the column and pixel levels are verified by transient noise simulations, measurements, and results from recently published low-noise CISs. We show how a recently reported process refinement, leading to the reduction of the sense-node capacitance, can be combined with an optimal in-pixel source-follower design to reach a sub-0.3 e- rms read noise at room temperature. This paper also discusses the impact of technology scaling on the CIS read noise. It shows how designers can take advantage of scaling and how the Metal-Oxide-Semiconductor (MOS) transistor gate leakage tunneling current appears as a challenging limitation. For this purpose, both simulation results of the gate leakage current and 1/f noise data reported from different foundries and technology nodes are used.
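    The sense-node-capacitance lever the paper cites can be illustrated with a minimal model (all numeric values below are assumed for illustration, not taken from the paper): noise at the output of the readout chain, referred back to the sense node in electrons, shrinks in proportion to the total gain ahead of it.

```python
# Minimal model of input-referred read noise in a 4T CIS readout chain:
# output-stage noise divided by the total gain (conversion gain x column
# amplifier gain) that precedes it. All parameter values are assumed.
def input_referred_noise_e(v_noise_out_uV, conv_gain_uV_per_e, col_gain):
    """Refer output-stage noise back to electrons at the sense node."""
    return v_noise_out_uV / (conv_gain_uV_per_e * col_gain)

# Halving the sense-node capacitance doubles the conversion gain, which
# halves the electrons-referred noise for the same output-stage noise.
n1 = input_referred_noise_e(200.0, 80.0, 8.0)   # baseline
n2 = input_referred_noise_e(200.0, 160.0, 8.0)  # doubled conversion gain
```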

  9. A fusion network for semantic segmentation using RGB-D data

    NASA Astrophysics Data System (ADS)

    Yuan, Jiahui; Zhang, Kun; Xia, Yifan; Qi, Lin; Dong, Junyu

    2018-04-01

    Semantic scene parsing is important in many intelligent fields, including perceptual robotics. For the past few years, pixel-wise prediction tasks like semantic segmentation with RGB images have been extensively studied and have reached remarkable parsing levels, thanks to convolutional neural networks (CNNs) and large scene datasets. With the development of stereo cameras and RGB-D sensors, it is expected that additional depth information will help improve accuracy. In this paper, we propose a semantic segmentation framework incorporating RGB and complementary depth information. Motivated by the success of fully convolutional networks (FCNs) in the semantic segmentation field, we design a fully convolutional network consisting of two branches, which extract features from RGB and depth data simultaneously and fuse them as the network goes deeper. Instead of aggregating multiple models, our goal is to utilize RGB data and depth data more effectively in a single model. We evaluate our approach on the NYU-Depth V2 dataset, which consists of 1449 cluttered indoor scenes, and achieve competitive results with state-of-the-art methods.
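    A two-branch fuse-by-summation design of the sort described can be sketched in a few lines of NumPy. The naive 3x3 convolution, feature sizes, and summation fusion below are illustrative stand-ins for the paper's actual CNN layers:

```python
import numpy as np

# Schematic two-branch fusion: each branch extracts features from its own
# modality, and the maps are fused (here by element-wise summation, one of
# several possible fusion choices) before deeper processing.
def conv3x3(x, w):
    """Naive 'valid' 3x3 convolution of a single-channel feature map."""
    h, wd = x.shape
    out = np.zeros((h - 2, wd - 2))
    for i in range(h - 2):
        for j in range(wd - 2):
            out[i, j] = np.sum(x[i:i+3, j:j+3] * w)
    return out

rng = np.random.default_rng(0)
rgb_feat = rng.standard_normal((8, 8))     # stand-in RGB feature map
depth_feat = rng.standard_normal((8, 8))   # stand-in depth feature map
w_rgb = rng.standard_normal((3, 3))        # per-branch filter weights
w_depth = rng.standard_normal((3, 3))

# Fusion: sum the per-branch responses into a single feature map.
fused = conv3x3(rgb_feat, w_rgb) + conv3x3(depth_feat, w_depth)
```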

  10. A Fusion Architecture for Tracking a Group of People Using a Distributed Sensor Network

    DTIC Science & Technology

    2013-07-01

    Determining the composition of the group is done using several classifiers. The fusion is done at the UGS level to fuse information from all the modalities to...to classification and counting of the targets. Section III also presents the algorithms for fusion of distributed sensor data at the UGS level and...ultrasonic sensors.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altunbas, Cem, E-mail: caltunbas@gmail.com; Lai, Chao-Jen; Zhong, Yuncheng

    Purpose: In using flat panel detectors (FPD) for cone beam computed tomography (CBCT), pixel gain variations may lead to structured nonuniformities in projections and ring artifacts in CBCT images. Such gain variations can be caused by changes in detector entrance exposure levels or beam hardening, and they are not accounted for by conventional flat field correction methods. In this work, the authors presented a method to identify isolated pixel clusters that exhibit gain variations and proposed a pixel gain correction (PGC) method to suppress both beam hardening and exposure level dependent gain variations. Methods: To modulate both beam spectrum and entrance exposure, flood field FPD projections were acquired using beam filters with varying thicknesses. “Ideal” pixel values were estimated by performing polynomial fits in both raw and flat field corrected projections. Residuals were calculated by taking the difference between measured and ideal pixel values to identify clustered image and FPD artifacts in flat field corrected and raw images, respectively. To correct clustered image artifacts, the ratios of ideal to measured pixel values in filtered images were utilized as pixel-specific gain correction factors, referred to as the PGC method, and they were tabulated as a function of pixel value in a look-up table. Results: 0.035% of detector pixels led to clustered image artifacts in flat field corrected projections, where 80% of these pixels were traced back and linked to artifacts in the FPD. The performance of the PGC method was tested in a variety of imaging conditions and phantoms. The PGC method reduced clustered image artifacts and fixed pattern noise in projections, and ring artifacts in CBCT images. Conclusions: Clustered projection image artifacts that lead to ring artifacts in CBCT can be better identified with our artifact detection approach. 
    When compared to the conventional flat field correction method, the proposed PGC method enables characterization of nonlinear pixel gain variations as a function of change in x-ray spectrum and intensity. Hence, it can better suppress image artifacts due to beam hardening as well as artifacts that arise from detector entrance exposure variation.
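    The PGC pipeline (polynomial fit to estimate ideal values, residual-based detection of anomalous pixels, and ideal/measured gain factors) can be sketched on synthetic 1D data as follows; the flood-field profile, threshold, and pixel indices below are invented for illustration:

```python
import numpy as np

# Illustrative 1D sketch of pixel gain correction (not the authors' code):
# fit a smooth polynomial to a flood-field profile, flag pixels with large
# residuals, and correct them with per-pixel gain factors ideal/measured.
rng = np.random.default_rng(1)
x = np.arange(256)
ideal_profile = 1000 + 2.0 * x           # assumed smooth flood-field trend
measured = ideal_profile * 1.0
bad = [40, 41, 200]                      # clustered pixels with a gain error
measured[bad] *= 0.9                     # simulate a 10% gain deficit

coeffs = np.polyfit(x, measured, deg=2)  # "ideal" values via polynomial fit
ideal_fit = np.polyval(coeffs, x)
residual = measured - ideal_fit

# Pixels with large residuals are candidates for pixel gain correction.
flagged = np.where(np.abs(residual) > 5 * np.median(np.abs(residual)))[0]

# Gain correction factors (tabulated in a look-up table in the paper).
gain = ideal_fit / measured
corrected = measured * gain
```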

  12. A new optimal seam method for seamless image stitching

    NASA Astrophysics Data System (ADS)

    Xue, Jiale; Chen, Shengyong; Cheng, Xu; Han, Ying; Zhao, Meng

    2017-07-01

    A novel optimal seam method, which aims to stitch images with overlapping areas more seamlessly, is proposed. Because the traditional gradient-domain optimal seam method measures color difference poorly and fusion algorithms take a long time, the input images are converted to HSV space and a new energy function is designed to seek the optimal stitching path. To smooth the optimal stitching path, a simplified pixel correction and a weighted average method are applied individually. The proposed method exhibits better performance in eliminating the stitching seam than the traditional gradient-domain optimal seam method, and higher efficiency than the multi-band blending algorithm.
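    An optimal stitching path of the kind the abstract describes is typically found by dynamic programming over an energy map. The sketch below uses a generic minimum-energy vertical seam search as a stand-in; the paper's HSV-based energy function is not specified here and is assumed away:

```python
import numpy as np

# Generic dynamic-programming search for a minimum-energy vertical seam,
# a standard stand-in for an "optimal stitching path" over an energy map.
def find_seam(energy):
    h, w = energy.shape
    cost = energy.astype(float)
    # Forward pass: accumulate the cheapest path cost reaching each pixel.
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            cost[i, j] += cost[i - 1, lo:hi].min()
    # Backtrack from the cheapest bottom-row pixel upward.
    seam = [int(np.argmin(cost[-1]))]
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        lo = max(j - 1, 0)
        seam.append(lo + int(np.argmin(cost[i, lo:min(j + 2, w)])))
    return seam[::-1]

energy = np.ones((4, 5))
energy[:, 2] = 0.0               # a zero-cost column in the overlap region
seam = find_seam(energy)         # the seam follows the zero-cost column
```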

  13. Design of 90×8 ROIC with pixel level digital TDI implementation for scanning type LWIR FPAs

    NASA Astrophysics Data System (ADS)

    Ceylan, Omer; Kayahan, Huseyin; Yazici, Melik; Gurbuz, Yasar

    2013-06-01

    Design of a 90×8 CMOS readout integrated circuit (ROIC) based on pixel-level digital time delay integration (TDI) for scanning-type LWIR focal plane arrays (FPAs) is presented. TDI is implemented over 8 pixels, which improves the SNR of the system by a factor of √8. An oversampling rate of 3 improves the spatial resolution of the system. TDI operation is realized with a novel under-pixel analog-to-digital converter, which improves the noise performance of the ROIC through lower quantization noise. Since the analog signal is converted to the digital domain in-pixel, nonuniformities and inaccuracies due to analog signal routing over a large chip area are eliminated. Contributions of each pixel for proper TDI operation are added in summation counters; no op-amps are used for summation, hence the power consumption of the ROIC is lower than that of its analog counterparts. Due to the lack of multiple capacitors and summation amplifiers, the ROIC occupies a smaller chip area than its analog counterparts. The ROIC is also superior to its digital counterparts in terms of power consumption, noise, and chip area, thanks to the novel digital TDI implementation. The ROIC supports bi-directional scan, multiple gain settings, bypass operation, automatic gain adjustment, pixel select/deselect, and is programmable through a serial or parallel interface. The input-referred noise of the ROIC is less than 750 rms electrons, while power consumption is less than 20 mW. The ROIC is designed to perform at both room and cryogenic temperatures.
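    The √8 SNR claim follows from coherent signal summation versus quadrature noise summation across TDI stages, which a few lines of Python make explicit:

```python
import math

# Back-of-envelope check of the sqrt(8) SNR claim for 8-stage TDI: the
# signal adds coherently over N stages while uncorrelated noise adds in
# quadrature, so the SNR improves by sqrt(N).
def tdi_snr_gain(n_stages):
    signal_gain = n_stages                 # coherent sum of N exposures
    noise_gain = math.sqrt(n_stages)       # quadrature sum of i.i.d. noise
    return signal_gain / noise_gain

gain_8 = tdi_snr_gain(8)                   # equals sqrt(8), about 2.83
```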

  14. Comparative charge analysis of one- and two-level lumbar total disc arthroplasty versus circumferential lumbar fusion.

    PubMed

    Levin, David A; Bendo, John A; Quirno, Martin; Errico, Thomas; Goldstein, Jeffrey; Spivak, Jeffrey

    2007-12-01

    This is a retrospective, independent study comparing 2 groups of patients treated surgically for discogenic low back pain associated with degenerative disc disease (DDD) in the lumbosacral spine. To compare the surgical and hospitalization charges associated with 1- and 2-level lumbar total disc replacement and circumferential lumbar fusion. Reported series of lumbar total disc replacement have been favorable. However, economic aspects of lumbar total disc replacement (TDR) have not been published or studied. This information is important considering the recent widespread utilization of new technologies. Recent studies have demonstrated comparable short-term clinical results between TDR and lumbar fusion recipients. Relative charges may be another important indicator of the most appropriate procedure. We report a hospital charge-analysis comparing ProDisc lumbar disc replacement with circumferential fusion for discogenic low back pain. In a cohort of 53 prospectively selected patients with severe, disabling back pain and lumbar disc degeneration, 36 received Synthes ProDisc TDR and 17 underwent circumferential fusion for 1- and 2-level degenerative disc disease between L3 and S1. Randomization was performed using a 2-to-1 ratio of ProDisc recipients to control spinal fusion recipients. Charge comparisons, including operating room charges, inpatient hospital charges, and implant charges, were made from hospital records using inflation-corrected 2006 U.S. dollars. Operating room times, estimated blood loss, and length of stay were obtained from hospital records as well. Surgeon and anesthesiologist fees were, for the purposes of comparison, based on Medicare reimbursement rates. Statistical analysis was performed using a 2-tailed Student t test. For patients with 1-level disease, significant differences were noted between the TDR and fusion control group. The mean total charge for the TDR group was $35,592 versus $46,280 for the fusion group (P = 0.0018). 
    Operating room charges were $12,000 and $18,950, respectively, for the TDR and fusion groups (P < 0.05). Implant charges averaged $13,990 for the fusion group, which is slightly higher than the $13,800 for the ProDisc (P = 0.9). Estimated blood loss averaged 794 mL in the fusion group versus 412 mL in the TDR group (P = 0.0058). Mean OR time averaged 344 minutes for the fusion group and 185 minutes for the TDR group (P < 0.05). Mean length of stay was 4.78 days for fusion versus 4.32 days for TDR (P = 0.394). For patients with 2-level disease, charges were similar between the TDR and fusion groups. The mean total charge for the 2-level TDR group was $55,524 versus $56,823 for the fusion group (P = 0.55). Operating room charges were $15,340 and $20,560, respectively, for the TDR and fusion groups (P = 0.0003). Surgeon fees and anesthesiologist charges based on Medicare reimbursement rates were $5857 and $525 for the fusion group, respectively, versus $2826 and $331 for the TDR group (P < 0.05 for each). Implant charges were significantly lower for the fusion group (mean, $18,460) than those for 2-level Synthes ProDisc ($27,600) (P < 0.05). Operative time averaged 387 minutes for fusion versus 242 minutes for TDR (P < 0.0001). EBL and length of stay were similar. Patients undergoing 1- and 2-level ProDisc total disc replacement spent significantly less time in the OR and had less EBL than controls. Charges were significantly lower for TDR compared with circumferential fusions in the 1-level patient group, while charges were similar in the 2-level group.

  15. Using Trained Pixel Classifiers to Select Images of Interest

    NASA Technical Reports Server (NTRS)

    Mazzoni, D.; Wagstaff, K.; Castano, R.

    2004-01-01

    We present a machine-learning-based approach to ranking images based on learned priorities. Unlike previous methods for image evaluation, which typically assess the value of each image based on the presence of predetermined specific features, this method involves using two levels of machine-learning classifiers: one level is used to classify each pixel as belonging to one of a group of rather generic classes, and another level is used to rank the images based on these pixel classifications, given some example rankings from a scientist as a guide. Initial results indicate that the technique works well, producing new rankings that match the scientist's rankings significantly better than would be expected by chance. The method is demonstrated for a set of images collected by a Mars field-test rover.
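    The two-level scheme can be sketched as follows. The class set, features, and weights below are invented for illustration; in the actual method the second-level ranker is learned from a scientist's example rankings rather than hand-set:

```python
import numpy as np

# Schematic two-level ranking: level 1 assigns each pixel a generic class
# label; level 2 scores each image from its per-image class-fraction
# features and sorts the images by that score.
N_CLASSES = 3

def class_fractions(label_map):
    """Fraction of pixels assigned to each generic class."""
    counts = np.bincount(label_map.ravel(), minlength=N_CLASSES)
    return counts / label_map.size

def rank_images(label_maps, weights):
    """Return image indices ordered from highest to lowest score."""
    scores = [class_fractions(m) @ weights for m in label_maps]
    return sorted(range(len(label_maps)), key=lambda i: -scores[i])

# Hypothetical preference: a scientist who values class-2-rich images.
weights = np.array([0.0, 0.1, 1.0])
imgs = [np.full((4, 4), 0), np.full((4, 4), 2), np.full((4, 4), 1)]
order = rank_images(imgs, weights)   # the all-class-2 image ranks first
```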

  16. Multi-energy x-ray detector calibration for Te and impurity density (nZ) measurements of MCF plasmas

    DOE PAGES

    Maddox, J.; Pablant, N.; Efthimion, P.; ...

    2016-09-07

    Here, soft x-ray detection with the new "multi-energy" PILATUS3 detector systems holds promise as a magnetically confined fusion (MCF) plasma diagnostic for ITER and beyond. The measured x-ray brightness can be used to determine impurity concentrations, electron temperatures, and n_e^2 Z_eff products, and to probe the electron energy distribution. However, in order to be effective, these detectors, which are really large arrays of detectors with photon-energy-gating capabilities, must be precisely calibrated for each pixel. The energy dependence of the detector response of the multi-energy PILATUS3 system with 100 K pixels has been measured at Dectris Laboratory. X-rays emitted from a tube under high voltage bombard various elements such that they emit x-ray lines from Zr-Lα to Ag-Kα between 1.8 and 22.16 keV. Each pixel on the PILATUS3 can be set to a minimum energy threshold in the range from 1.6 to 25 keV. This feature allows a single detector to be sensitive to a variety of x-ray energies, so that it is possible to sample the energy distribution of the x-ray continuum and line emission. The PILATUS3 can be configured for 1D or 2D imaging of MCF plasmas with typical spatial, energy, and temporal resolution of 1 cm, 0.6 keV, and 5 ms, respectively.

  17. Fusion: ultra-high-speed and IR image sensors

    NASA Astrophysics Data System (ADS)

    Etoh, T. Goji; Dao, V. T. S.; Nguyen, Quang A.; Kimata, M.

    2015-08-01

    Most targets of ultra-high-speed video cameras operating at more than 1 Mfps, such as combustion, crack propagation, collision, plasma, spark discharge, an air bag in a car accident, and a tire under sudden braking, generate sudden heat. Researchers in these fields require tools to measure the high-speed motion and heat simultaneously. Ultra-high frame rate imaging is achieved by an in-situ storage image sensor. Each pixel of the sensor is equipped with multiple memory elements to record a series of image signals simultaneously at all pixels. Image signals stored in each pixel are read out after an image capturing operation. In 2002, we developed an in-situ storage image sensor operating at 1 Mfps 1). However, the fill factor of the sensor was only 15% due to a light shield covering the wide in-situ storage area. Therefore, in 2011, we developed a backside-illuminated (BSI) in-situ storage image sensor to increase the sensitivity, with a 100% fill factor and a very high quantum efficiency 2). The sensor also achieved a much higher frame rate, 16.7 Mfps, thanks to the wiring on the front side with more freedom 3). The BSI structure has another advantage: it presents fewer difficulties in attaching an additional layer on the backside, such as scintillators. This paper proposes development of an ultra-high-speed IR image sensor combining advanced nanotechnologies for IR imaging with the in-situ storage technology for ultra-high-speed imaging, with discussion of integration issues.

  18. Biomechanics of Artificial Disc Replacements Adjacent to a 2-Level Fusion in 4-Level Hybrid Constructs: An In Vitro Investigation

    PubMed Central

    Liao, Zhenhua; Fogel, Guy R.; Wei, Na; Gu, Hongsheng; Liu, Weiqiang

    2015-01-01

    Background The ideal procedure for multilevel cervical degenerative disc diseases remains controversial. Recent studies on hybrid surgery combining anterior cervical discectomy and fusion (ACDF) and artificial cervical disc replacement (ACDR) for 2-level and 3-level constructs have been reported in the literature. The purpose of this study was to estimate the biomechanics of 3 kinds of 4-level hybrid constructs, which are more likely to be used clinically compared to 4-level arthrodesis. Material/Methods Eighteen human cadaveric spines (C2–T1) were evaluated in different testing conditions: intact, with 3 kinds of 4-level hybrid constructs (hybrid C3–4 ACDR+C4–6 ACDF+C6–7ACDR; hybrid C3–5ACDF+C5–6ACDR+C6–7ACDR; hybrid C3–4ACDR+C4–5ACDR+C5–7ACDF); and 4-level fusion. Results Four-level fusion resulted in significant decrease in the C3–C7 ROM compared with the intact spine. The 3 different 4-level hybrid treatment groups caused only slight change at the instrumented levels compared to intact except for flexion. At the adjacent levels, 4-level fusion resulted in significant increase of contribution of both upper and lower adjacent levels. However, for the 3 hybrid constructs, significant changes of motion increase far lower than 4P at adjacent levels were only noted in partial loading conditions. No destabilizing effect or hypermobility were observed in any 4-level hybrid construct. Conclusions Four-level fusion significantly eliminated motion within the construct and increased motion at the adjacent segments. For all 3 different 4-level hybrid constructs, ACDR normalized motion of the index segment and adjacent segments with no significant hypermobility. Compared with the 4-level ACDF condition, the artificial discs in 4-level hybrid constructs had biomechanical advantages compared to fusion in normalizing adjacent level motion. PMID:26694835

  19. Biomechanics of Artificial Disc Replacements Adjacent to a 2-Level Fusion in 4-Level Hybrid Constructs: An In Vitro Investigation.

    PubMed

    Liao, Zhenhua; Fogel, Guy R; Wei, Na; Gu, Hongsheng; Liu, Weiqiang

    2015-12-23

    BACKGROUND The ideal procedure for multilevel cervical degenerative disc diseases remains controversial. Recent studies on hybrid surgery combining anterior cervical discectomy and fusion (ACDF) and artificial cervical disc replacement (ACDR) for 2-level and 3-level constructs have been reported in the literature. The purpose of this study was to estimate the biomechanics of 3 kinds of 4-level hybrid constructs, which are more likely to be used clinically compared to 4-level arthrodesis. MATERIAL AND METHODS Eighteen human cadaveric spines (C2-T1) were evaluated in different testing conditions: intact, with 3 kinds of 4-level hybrid constructs (hybrid C3-4 ACDR+C4-6 ACDF+C6-7ACDR; hybrid C3-5ACDF+C5-6ACDR+C6-7ACDR; hybrid C3-4ACDR+C4-5ACDR+C5-7ACDF); and 4-level fusion. RESULTS Four-level fusion resulted in significant decrease in the C3-C7 ROM compared with the intact spine. The 3 different 4-level hybrid treatment groups caused only slight change at the instrumented levels compared to intact except for flexion. At the adjacent levels, 4-level fusion resulted in significant increase of contribution of both upper and lower adjacent levels. However, for the 3 hybrid constructs, significant changes of motion increase far lower than 4P at adjacent levels were only noted in partial loading conditions. No destabilizing effect or hypermobility were observed in any 4-level hybrid construct. CONCLUSIONS Four-level fusion significantly eliminated motion within the construct and increased motion at the adjacent segments. For all 3 different 4-level hybrid constructs, ACDR normalized motion of the index segment and adjacent segments with no significant hypermobility. Compared with the 4-level ACDF condition, the artificial discs in 4-level hybrid constructs had biomechanical advantages compared to fusion in normalizing adjacent level motion.

  20. Active pixel imagers incorporating pixel-level amplifiers based on polycrystalline-silicon thin-film transistors

    PubMed Central

    El-Mohri, Youcef; Antonuk, Larry E.; Koniczek, Martin; Zhao, Qihua; Li, Yixin; Street, Robert A.; Lu, Jeng-Ping

    2009-01-01

    Active matrix, flat-panel imagers (AMFPIs) employing a 2D matrix of a-Si addressing TFTs have become ubiquitous in many x-ray imaging applications due to their numerous advantages. However, under conditions of low exposures and/or high spatial resolution, their signal-to-noise performance is constrained by the modest system gain relative to the electronic additive noise. In this article, a strategy for overcoming this limitation through the incorporation of in-pixel amplification circuits, referred to as active pixel (AP) architectures, using polycrystalline-silicon (poly-Si) TFTs is reported. Compared to a-Si, poly-Si offers substantially higher mobilities, enabling higher TFT currents and the possibility of sophisticated AP designs based on both n- and p-channel TFTs. Three prototype indirect detection arrays employing poly-Si TFTs and a continuous a-Si photodiode structure were characterized. The prototypes consist of an array (PSI-1) that employs a pixel architecture with a single TFT, as well as two arrays (PSI-2 and PSI-3) that employ AP architectures based on three and five TFTs, respectively. While PSI-1 serves as a reference with a design similar to that of conventional AMFPI arrays, PSI-2 and PSI-3 incorporate additional in-pixel amplification circuitry. Compared to PSI-1, results of x-ray sensitivity demonstrate signal gains of ∼10.7 and 20.9 for PSI-2 and PSI-3, respectively. These values are in reasonable agreement with design expectations, demonstrating that poly-Si AP circuits can be tailored to provide a desired level of signal gain. PSI-2 exhibits the same high levels of charge trapping as those observed for PSI-1 and other conventional arrays employing a continuous photodiode structure. For PSI-3, charge trapping was found to be significantly lower and largely independent of the bias voltage applied across the photodiode. 
MTF results indicate that the use of a continuous photodiode structure in PSI-1, PSI-2, and PSI-3 results in optical fill factors that are close to unity. In addition, the greater complexity of PSI-2 and PSI-3 pixel circuits, compared to that of PSI-1, has no observable effect on spatial resolution. Both PSI-2 and PSI-3 exhibit high levels of additive noise, resulting in no net improvement in the signal-to-noise performance of these early prototypes compared to conventional AMFPIs. However, faster readout rates, coupled with implementation of multiple sampling protocols allowed by the nondestructive nature of pixel readout, resulted in a significantly lower noise level of ∼560 e (rms) for PSI-3. PMID:19673229

  1. Active pixel imagers incorporating pixel-level amplifiers based on polycrystalline-silicon thin-film transistors.

    PubMed

    El-Mohri, Youcef; Antonuk, Larry E; Koniczek, Martin; Zhao, Qihua; Li, Yixin; Street, Robert A; Lu, Jeng-Ping

    2009-07-01

    Active matrix, flat-panel imagers (AMFPIs) employing a 2D matrix of a-Si addressing TFTs have become ubiquitous in many x-ray imaging applications due to their numerous advantages. However, under conditions of low exposures and/or high spatial resolution, their signal-to-noise performance is constrained by the modest system gain relative to the electronic additive noise. In this article, a strategy for overcoming this limitation through the incorporation of in-pixel amplification circuits, referred to as active pixel (AP) architectures, using polycrystalline-silicon (poly-Si) TFTs is reported. Compared to a-Si, poly-Si offers substantially higher mobilities, enabling higher TFT currents and the possibility of sophisticated AP designs based on both n- and p-channel TFTs. Three prototype indirect detection arrays employing poly-Si TFTs and a continuous a-Si photodiode structure were characterized. The prototypes consist of an array (PSI-1) that employs a pixel architecture with a single TFT, as well as two arrays (PSI-2 and PSI-3) that employ AP architectures based on three and five TFTs, respectively. While PSI-1 serves as a reference with a design similar to that of conventional AMFPI arrays, PSI-2 and PSI-3 incorporate additional in-pixel amplification circuitry. Compared to PSI-1, results of x-ray sensitivity demonstrate signal gains of approximately 10.7 and 20.9 for PSI-2 and PSI-3, respectively. These values are in reasonable agreement with design expectations, demonstrating that poly-Si AP circuits can be tailored to provide a desired level of signal gain. PSI-2 exhibits the same high levels of charge trapping as those observed for PSI-1 and other conventional arrays employing a continuous photodiode structure. For PSI-3, charge trapping was found to be significantly lower and largely independent of the bias voltage applied across the photodiode. 
MTF results indicate that the use of a continuous photodiode structure in PSI-1, PSI-2, and PSI-3 results in optical fill factors that are close to unity. In addition, the greater complexity of PSI-2 and PSI-3 pixel circuits, compared to that of PSI-1, has no observable effect on spatial resolution. Both PSI-2 and PSI-3 exhibit high levels of additive noise, resulting in no net improvement in the signal-to-noise performance of these early prototypes compared to conventional AMFPIs. However, faster readout rates, coupled with implementation of multiple sampling protocols allowed by the nondestructive nature of pixel readout, resulted in a significantly lower noise level of approximately 560 e (rms) for PSI-3.

  2. Ureter Injury as a Complication of Oblique Lumbar Interbody Fusion.

    PubMed

    Lee, Hyeong-Jin; Kim, Jin-Sung; Ryu, Kyeong-Sik; Park, Choon Keun

    2017-06-01

    Oblique lumbar interbody fusion is a commonly used surgical method of achieving lumbar interbody fusion. There have been some reports about complications of oblique lumbar interbody fusion at the L2-L3 level. However, to our knowledge, there have been no reports about ureter injury during oblique lumbar interbody fusion. We report a case of ureter injury during oblique lumbar interbody fusion to share our experience. A 78-year-old male patient presented with a history of lower back pain and neurogenic intermittent claudication. He was diagnosed with spinal stenosis at the L2-L3 and L4-L5 levels and spondylolisthesis at the L4-L5 level. Symptoms were not improved after several months of medical treatment. Then, oblique lumbar interbody fusion was performed at the L2-L3 and L4-L5 levels. During the surgery, the anesthesiologist noticed hematuria. A retrourethrogram was performed immediately by a urologist, and a ureter injury was found. Ureteroureterostomy and double-J catheter insertion were performed. The patient was discharged 2 weeks after surgery without urologic or neurologic complications. At 2 months after surgery, an intravenous pyelogram was performed, which showed an intact ureter. Our study shows that a low threshold of suspicion of ureter injury and careful manipulation of retroperitoneal fat can be helpful to prevent ureter injury during oblique lumbar interbody fusion at the upper level. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Performance of spectral MSE diagnostic on C-Mod and ITER

    NASA Astrophysics Data System (ADS)

    Liao, Ken; Rowan, William; Mumgaard, Robert; Granetz, Robert; Scott, Steve; Marchuk, Oleksandr; Ralchenko, Yuri; Alcator C-Mod Team

    2015-11-01

    The magnetic field was measured on Alcator C-Mod by applying spectral Motional Stark Effect techniques based on line shift (MSE-LS) and line ratio (MSE-LR) to the H-alpha emission spectrum of the diagnostic neutral beam atoms. The high field of Alcator C-Mod allows measurements to be made close to ITER values of Stark splitting (~ Bv⊥) with background levels similar to those expected for ITER. Accurate modeling of the spectrum requires a non-statistical, collisional-radiative analysis of the excited beam population and quadratic and Zeeman corrections to the Stark shift. A detailed synthetic diagnostic was developed and used to estimate the performance of the diagnostic at C-Mod and ITER parameters. Our analysis includes the sensitivity to view and beam geometry, aperture and divergence broadening, magnetic field, pixel size, background noise, and signal levels. Analysis of preliminary experiments agrees with Kinetic+(polarization)MSE EFIT within ~2° in pitch angle, and simulations predict uncertainties of 20 mT in | B | and <2° in pitch angle. This material is based upon work supported by the U.S. Department of Energy Office of Science, Office of Fusion Energy Sciences under Award Number DE-FG03-96ER-54373 and DE-FC02-99ER54512.

  4. G-Channel Restoration for RWB CFA with Double-Exposed W Channel

    PubMed Central

    Park, Chulhee; Song, Ki Sun; Kang, Moon Gi

    2017-01-01

    In this paper, we propose a green (G)-channel restoration method for a red–white–blue (RWB) color filter array (CFA) image sensor using the dual sampling technique. By using white (W) pixels instead of G pixels, the RWB CFA provides high-sensitivity imaging and an improved signal-to-noise ratio compared to the Bayer CFA. However, owing to this high sensitivity, the W pixel values become rapidly over-saturated before the red–blue (RB) pixel values reach the appropriate levels. Because the missing G color information included in the W channel cannot be restored with a saturated W, multiple captures with dual sampling are necessary to solve this early W-pixel saturation problem. Each W pixel has a different exposure time when compared to those of the R and B pixels, because the W pixels are double-exposed. Therefore, an RWB-to-RGB color conversion method is required in order to restore the G color information, using a double-exposed W channel. The proposed G-channel restoration algorithm restores G color information from the W channel by considering the energy difference caused by the different exposure times. Using the proposed method, the RGB full-color image can be obtained while maintaining the high-sensitivity characteristic of the W pixels. PMID:28165425

  5. G-Channel Restoration for RWB CFA with Double-Exposed W Channel.

    PubMed

    Park, Chulhee; Song, Ki Sun; Kang, Moon Gi

    2017-02-05

    In this paper, we propose a green (G)-channel restoration method for a red-white-blue (RWB) color filter array (CFA) image sensor using the dual sampling technique. By using white (W) pixels instead of G pixels, the RWB CFA provides high-sensitivity imaging and an improved signal-to-noise ratio compared to the Bayer CFA. However, owing to this high sensitivity, the W pixel values become rapidly over-saturated before the red-blue (RB) pixel values reach the appropriate levels. Because the missing G color information included in the W channel cannot be restored with a saturated W, multiple captures with dual sampling are necessary to solve this early W-pixel saturation problem. Each W pixel has a different exposure time when compared to those of the R and B pixels, because the W pixels are double-exposed. Therefore, an RWB-to-RGB color conversion method is required in order to restore the G color information, using a double-exposed W channel. The proposed G-channel restoration algorithm restores G color information from the W channel by considering the energy difference caused by the different exposure times. Using the proposed method, the RGB full-color image can be obtained while maintaining the high-sensitivity characteristic of the W pixels.
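    The exposure-compensation idea behind such a conversion can be sketched in a few lines. This is a simplified illustration only, assuming the W response is approximately the sum of the R, G, and B responses scaled by the exposure ratio; the paper's actual RWB-to-RGB conversion models the energy difference in more detail.

```python
import numpy as np

def restore_g(w, r, b, exposure_ratio):
    """Estimate the G channel from a double-exposed W channel.

    Illustrative sketch: assumes W ~ exposure_ratio * (R + G + B),
    i.e. the W pixel integrates all visible light for 'exposure_ratio'
    times the RB exposure time.
    """
    w_equalized = w / exposure_ratio   # undo the longer W exposure
    g = w_equalized - r - b            # remaining energy attributed to G
    return np.clip(g, 0.0, None)       # radiance cannot be negative
```

Under the stated linear assumption, dividing W by the exposure ratio puts all channels on a common energy scale before the subtraction.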

  6. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter

    PubMed Central

    Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Gu, Chengfan

    2018-01-01

    This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as the local filter, improving the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate the globally optimal state estimation by fusion of local estimations. The proposed methodology effectively suppresses the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation. PMID:29415509

  7. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter.

    PubMed

    Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Zhong, Yongmin; Gu, Chengfan

    2018-02-06

    This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as the local filter, improving the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate the globally optimal state estimation by fusion of local estimations. The proposed methodology effectively suppresses the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation.
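    The top-level linear-minimum-variance fusion rule can be written directly; the sketch below neglects cross-covariances between local filters, a simplification relative to the full methodology described above.

```python
import numpy as np

def lmv_fuse(estimates, covariances):
    """Fuse N local estimates x_i with covariances P_i by linear
    minimum variance: P = (sum_i P_i^-1)^-1 and x = P * sum_i P_i^-1 x_i.
    Cross-covariances between local filters are neglected here."""
    infos = [np.linalg.inv(P) for P in covariances]          # information matrices
    P_fused = np.linalg.inv(sum(infos))                      # fused covariance
    x_fused = P_fused @ sum(I @ x for I, x in zip(infos, estimates))
    return x_fused, P_fused
```

With this precision-weighted form, a more certain local filter (smaller P_i) pulls the fused state toward its own estimate, and the fused covariance is never larger than the best local one.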

  8. Matched Comparison of Fusion Rates between Hydroxyapatite Demineralized Bone Matrix and Autograft in Lumbar Interbody Fusion.

    PubMed

    Kim, Dae Hwan; Lee, Nam; Shin, Dong Ah; Yi, Seong; Kim, Keung Nyun; Ha, Yoon

    2016-07-01

    To compare the fusion rate of a hydroxyapatite demineralized bone matrix (DBM) with post-laminectomy acquired autograft in lumbar interbody fusion surgery and to evaluate the correlation between fusion rate and clinical outcome. From January 2013 to April 2014, 98 patients underwent lumbar interbody fusion surgery with hydroxyapatite DBM (HA-DBM group) in our institute. Of those patients, 65 received complete CT scans for 12 months postoperatively in order to evaluate fusion status. For comparison with autograft, we selected another 65 patients who underwent lumbar interbody fusion surgery with post-laminectomy acquired autograft (Autograft group) during the same period. Both fusion material groups were matched in terms of age, sex, body mass index (BMI), and bone mineral density (BMD). To evaluate the clinical outcomes, we analyzed the results of visual analogue scale (VAS), Oswestry Disability Index (ODI), and Short Form Health Survey (SF-36). We reviewed the CT scans of 149 fusion levels in 130 patients (HA-DBM group, 75 levels/65 patients; Autograft group, 74 levels/65 patients). Age, sex, BMI, and BMD were not significantly different between the groups (p=0.528, p=0.848, p=0.527, and p=0.610, respectively). The HA-DBM group showed 39 of 75 fused levels (52%), and the Autograft group showed 46 of 74 fused levels (62.2%). This difference was not statistically significant (p=0.21). In the HA-DBM group, older age and low BMD were significantly associated with non-fusion (61.24 vs. 66.68, p=0.027; -1.63 vs. -2.29, p=0.015, respectively). VAS and ODI showed significant improvement after surgery when fusion was successfully achieved in both groups (p=0.004, p=0.002, HA-DBM group; p=0.012, p=0.03, Autograft group). The fusion rates of the hydroxyapatite DBM and Autograft groups were not significantly different. In addition, clinical outcomes were similar between the groups. 
However, older age and low BMD are risk factors that might induce non-union after surgery with hydroxyapatite DBM.
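    The reported p=0.21 for 39/75 vs. 46/74 fused levels can be re-checked with a standard two-proportion z-test; the paper does not state which test it used, so the choice of test here is an assumption.

```python
from math import sqrt, erfc

def two_proportion_p(k1, n1, k2, n2):
    """Two-sided two-proportion z-test (normal approximation).

    k1/n1 and k2/n2 are the successes and totals in each group.
    """
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)                       # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2)) # standard error
    z = (p2 - p1) / se
    return erfc(abs(z) / sqrt(2))                        # two-sided p-value

# HA-DBM: 39/75 fused levels; Autograft: 46/74 fused levels
p_value = two_proportion_p(39, 75, 46, 74)
```

The resulting p-value of roughly 0.21 is consistent with the non-significant difference reported in the abstract.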

  9. Matched Comparison of Fusion Rates between Hydroxyapatite Demineralized Bone Matrix and Autograft in Lumbar Interbody Fusion

    PubMed Central

    Kim, Dae Hwan; Lee, Nam; Shin, Dong Ah; Yi, Seong; Kim, Keung Nyun

    2016-01-01

    Objective To compare the fusion rate of a hydroxyapatite demineralized bone matrix (DBM) with post-laminectomy acquired autograft in lumbar interbody fusion surgery and to evaluate the correlation between fusion rate and clinical outcome. Methods From January 2013 to April 2014, 98 patients underwent lumbar interbody fusion surgery with hydroxyapatite DBM (HA-DBM group) in our institute. Of those patients, 65 received complete CT scans for 12 months postoperatively in order to evaluate fusion status. For comparison with autograft, we selected another 65 patients who underwent lumbar interbody fusion surgery with post-laminectomy acquired autograft (Autograft group) during the same period. Both fusion material groups were matched in terms of age, sex, body mass index (BMI), and bone mineral density (BMD). To evaluate the clinical outcomes, we analyzed the results of visual analogue scale (VAS), Oswestry Disability Index (ODI), and Short Form Health Survey (SF-36). Results We reviewed the CT scans of 149 fusion levels in 130 patients (HA-DBM group, 75 levels/65 patients; Autograft group, 74 levels/65 patients). Age, sex, BMI, and BMD were not significantly different between the groups (p=0.528, p=0.848, p=0.527, and p=0.610, respectively). The HA-DBM group showed 39 of 75 fused levels (52%), and the Autograft group showed 46 of 74 fused levels (62.2%). This difference was not statistically significant (p=0.21). In the HA-DBM group, older age and low BMD were significantly associated with non-fusion (61.24 vs. 66.68, p=0.027; -1.63 vs. -2.29, p=0.015, respectively). VAS and ODI showed significant improvement after surgery when fusion was successfully achieved in both groups (p=0.004, p=0.002, HA-DBM group; p=0.012, p=0.03, Autograft group). Conclusion The fusion rates of the hydroxyapatite DBM and Autograft groups were not significantly different. In addition, clinical outcomes were similar between the groups. 
However, older age and low BMD are risk factors that might induce non-union after surgery with hydroxyapatite DBM. PMID:27446517

  10. The MODIS Cloud Optical and Microphysical Products: Collection 6 Updates and Examples From Terra and Aqua

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; Meyer, Kerry G.; King, Michael D.; Wind, Galina; Amarasinghe, Nandana; Marchant, Benjamin G.; Arnold, G. Thomas; Zhang, Zhibo; Hubanks, Paul A.; Holz, Robert E.

    2016-01-01

    The MODIS Level-2 cloud product (Earth Science Data Set names MOD06 and MYD06 for Terra and Aqua MODIS, respectively) provides pixel-level retrievals of cloud-top properties (day and night pressure, temperature, and height) and cloud optical properties (optical thickness, effective particle radius, and water path for both liquid water and ice cloud thermodynamic phases, daytime only). Collection 6 (C6) reprocessing of the product was completed in May 2014 and March 2015 for MODIS Aqua and Terra, respectively. Here we provide an overview of major C6 optical property algorithm changes relative to the previous Collection 5 (C5) product. Notable C6 optical and microphysical algorithm changes include: (i) new ice cloud optical property models and a more extensive cloud radiative transfer code lookup table (LUT) approach, (ii) improvement in the skill of the shortwave-derived cloud thermodynamic phase, (iii) separate cloud effective radius retrieval datasets for each spectral combination used in previous collections, (iv) separate retrievals for partly cloudy pixels and those associated with cloud edges, (v) failure metrics that provide diagnostic information for pixels having observations that fall outside the LUT solution space, and (vi) enhanced pixel-level retrieval uncertainty calculations. The C6 algorithm changes collectively can result in significant changes relative to C5, though the magnitude depends on the dataset and the pixel's retrieval location in the cloud parameter space. Example Level-2 granule and Level-3 gridded dataset differences between the two collections are shown. While the emphasis is on the suite of cloud optical property datasets, other MODIS cloud datasets are discussed when relevant.

  11. The MODIS cloud optical and microphysical products: Collection 6 updates and examples from Terra and Aqua.

    PubMed

    Platnick, Steven; Meyer, Kerry G; King, Michael D; Wind, Galina; Amarasinghe, Nandana; Marchant, Benjamin; Arnold, G Thomas; Zhang, Zhibo; Hubanks, Paul A; Holz, Robert E; Yang, Ping; Ridgway, William L; Riedi, Jérôme

    2017-01-01

    The MODIS Level-2 cloud product (Earth Science Data Set names MOD06 and MYD06 for Terra and Aqua MODIS, respectively) provides pixel-level retrievals of cloud-top properties (day and night pressure, temperature, and height) and cloud optical properties (optical thickness, effective particle radius, and water path for both liquid water and ice cloud thermodynamic phases-daytime only). Collection 6 (C6) reprocessing of the product was completed in May 2014 and March 2015 for MODIS Aqua and Terra, respectively. Here we provide an overview of major C6 optical property algorithm changes relative to the previous Collection 5 (C5) product. Notable C6 optical and microphysical algorithm changes include: (i) new ice cloud optical property models and a more extensive cloud radiative transfer code lookup table (LUT) approach, (ii) improvement in the skill of the shortwave-derived cloud thermodynamic phase, (iii) separate cloud effective radius retrieval datasets for each spectral combination used in previous collections, (iv) separate retrievals for partly cloudy pixels and those associated with cloud edges, (v) failure metrics that provide diagnostic information for pixels having observations that fall outside the LUT solution space, and (vi) enhanced pixel-level retrieval uncertainty calculations. The C6 algorithm changes collectively can result in significant changes relative to C5, though the magnitude depends on the dataset and the pixel's retrieval location in the cloud parameter space. Example Level-2 granule and Level-3 gridded dataset differences between the two collections are shown. While the emphasis is on the suite of cloud optical property datasets, other MODIS cloud datasets are discussed when relevant.

  12. Spectral analysis of views interpolated by chroma subpixel downsampling for 3D autostereoscopic displays

    NASA Astrophysics Data System (ADS)

    Marson, Avishai; Stern, Adrian

    2015-05-01

    One of the main limitations of horizontal parallax autostereoscopic displays is the horizontal resolution loss due to the need to repartition the pixels of the display panel among the multiple views. Recently we have shown that this problem can be alleviated by applying a color sub-pixel rendering technique [1]. Interpolated views are generated by down-sampling the panel pixels at the sub-pixel level, thus increasing the number of views. The method takes advantage of the lower acuity of the human eye to chromatic resolution. Here we supply further support for the technique by analyzing the spectra of the subsampled images.

  13. A Proposed Extension to the Soil Moisture and Ocean Salinity Level 2 Algorithm for Mixed Forest and Moderate Vegetation Pixels

    NASA Technical Reports Server (NTRS)

    Panciera, Rocco; Walker, Jeffrey P.; Kalma, Jetse; Kim, Edward

    2011-01-01

    The Soil Moisture and Ocean Salinity (SMOS) mission, launched in November 2009, provides global maps of soil moisture and ocean salinity by measuring the L-band (1.4 GHz) emission of the Earth's surface with a spatial resolution of 40-50 km. Uncertainty in the retrieval of soil moisture over large heterogeneous areas such as SMOS pixels is expected, due to the non-linearity of the relationship between soil moisture and the microwave emission. The current baseline soil moisture retrieval algorithm adopted by SMOS and implemented in the SMOS Level 2 (SMOS L2) processor partially accounts for the sub-pixel heterogeneity of the land surface by modelling the individual contributions of different pixel fractions to the overall pixel emission. This retrieval approach is tested in this study using airborne L-band data over an area the size of a SMOS pixel characterised by a mix of Eucalypt forest and moderate vegetation types (grassland and crops), with the objective of assessing its ability to correct for the soil moisture retrieval error induced by the land surface heterogeneity. A preliminary analysis using a traditional uniform pixel retrieval approach shows that the sub-pixel heterogeneity of land cover type causes significant errors in soil moisture retrieval (7.7% v/v RMSE, 2% v/v bias) in pixels characterised by a significant amount of forest (40-60%). Although the retrieval approach adopted by SMOS partially reduces this error, it is affected by errors beyond the SMOS target accuracy, presenting in particular a strong dry bias when a fraction of the pixel is occupied by forest (4.1% v/v RMSE, -3.1% v/v bias). An extension to the SMOS approach is proposed that accounts for the heterogeneity of vegetation optical depth within the SMOS pixel. The proposed approach is shown to significantly reduce the error in retrieved soil moisture (2.8% v/v RMSE, -0.3% v/v bias) in pixels characterised by a critical amount of forest (40-60%), at the limited cost of only a crude estimate of the optical depth of the forested area (better than 35% uncertainty). This study makes use of an unprecedented data set of airborne L-band observations and ground supporting data from the National Airborne Field Experiment 2005 (NAFE'05), which allowed accurate characterisation of the land surface heterogeneity over an area equivalent in size to a SMOS pixel.
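    The core of the heterogeneous-pixel modelling is a linear mixing of cover-fraction contributions to the overall pixel emission. A minimal sketch with illustrative values, not the SMOS operational radiative transfer:

```python
def mixed_pixel_tb(fractions, tb_components):
    """Brightness temperature of a heterogeneous pixel as the
    area-weighted sum of its cover fractions' contributions."""
    if abs(sum(fractions) - 1.0) > 1e-9:
        raise ValueError("cover fractions must sum to 1")
    return sum(f * tb for f, tb in zip(fractions, tb_components))

# e.g. a pixel that is 50% forest, 30% grassland, 20% crop
# (hypothetical component brightness temperatures in kelvin)
tb = mixed_pixel_tb([0.5, 0.3, 0.2], [260.0, 240.0, 220.0])
```

Because the soil-moisture-to-emission relationship is non-linear, retrieving a single moisture value from this mixed signal as if the pixel were uniform introduces exactly the kind of bias the abstract quantifies.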

  14. Adaptive neuro-heuristic hybrid model for fruit peel defects detection.

    PubMed

    Woźniak, Marcin; Połap, Dawid

    2018-02-01

    Fusion of machine learning methods benefits decision support systems: a composition of approaches makes it possible to combine the most efficient features into one solution. In this article we present an adaptive method based on the fusion of a proposed novel neural architecture and heuristic search into one co-working solution. The developed neural network architecture adapts to the processed input while co-working with a heuristic method used to precisely detect areas of interest. Input images are first decomposed into segments. This makes processing easier, since in smaller images (decomposed segments) the developed Adaptive Artificial Neural Network (AANN) processes less information, which makes numerical calculations more precise. For each segment a descriptor vector is composed and presented to the proposed AANN architecture. Evaluation is run adaptively, where the developed AANN adapts to inputs and their features through its composed architecture. After evaluation, selected segments are forwarded to the heuristic search, which detects areas of interest. As a result the system returns the image with pixels located over peel damages. Experimental results on the developed solution are discussed and compared with other commonly used methods to validate the efficacy and the impact of the proposed fusion in the system structure and training process on classification results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Fusion-based multi-target tracking and localization for intelligent surveillance systems

    NASA Astrophysics Data System (ADS)

    Rababaah, Haroun; Shirkhodaie, Amir

    2008-04-01

    In this paper, we have presented two approaches addressing visual target tracking and localization in a complex urban environment: fusion-based multi-target visual tracking, and multi-target localization via camera calibration. For multi-target tracking, the data fusion concepts of hypothesis generation/evaluation/selection, target-to-target registration, and association are employed. An association matrix is implemented using RGB histograms for associated tracking of multiple targets of interest. Motion segmentation of targets of interest (TOI) from the background was achieved by a Gaussian Mixture Model. Foreground segmentation, on the other hand, was achieved by the Connected Components Analysis (CCA) technique. The tracking of individual targets was estimated by fusing two sources of information: the centroid with spatial gating, and the RGB histogram association matrix. The localization problem is addressed through an effective camera calibration technique using edge modeling for grid mapping (EMGM). A two-stage image-pixel-to-world-coordinates mapping technique is introduced that performs coarse and fine location estimation of moving TOIs. In coarse estimation, an approximate neighborhood of the target position is estimated based on a nearest 4-neighbor method, and in fine estimation, we use Euclidean interpolation to localize the position within the estimated four neighbors. Both techniques were tested and showed reliable results for tracking and localization of targets of interest in a complex urban environment.
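    The RGB-histogram association step can be sketched as follows. The similarity measure used here is the Bhattacharyya coefficient, one common choice; the abstract does not specify the exact metric, so treat that as an assumption.

```python
import numpy as np

def rgb_histogram(img, bins=8):
    """Concatenated per-channel histogram of an RGB image (uint8),
    normalized to sum to 1."""
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def bhattacharyya(h1, h2):
    """Similarity of two normalized histograms, in [0, 1]."""
    return float(np.sum(np.sqrt(h1 * h2)))

def association_matrix(tracks, detections):
    """Entry (i, j) scores how well detection j matches track i;
    assigning detections to tracks then reduces to picking high-score
    entries (e.g. greedily or with the Hungarian algorithm)."""
    return np.array([[bhattacharyya(rgb_histogram(t), rgb_histogram(d))
                      for d in detections] for t in tracks])
```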

  16. A summary of image segmentation techniques

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly

    1993-01-01

    Machine vision systems are often considered to be composed of two subsystems: low-level vision and high-level vision. Low-level vision consists primarily of image processing operations performed on the input image to produce another image with more favorable characteristics. These operations may yield images with reduced noise or cause certain features of the image to be emphasized (such as edges). High-level vision includes object recognition and, at the highest level, scene interpretation. The bridge between these two subsystems is the segmentation system. Through segmentation, the enhanced input image is mapped into a description involving regions with common features which can be used by the higher-level vision tasks. There is no unified theory of image segmentation. Instead, image segmentation techniques are basically ad hoc and differ mostly in the way they emphasize one or more of the desired properties of an ideal segmenter and in the way they balance and compromise one desired property against another. These techniques can be categorized into a number of different groups, including local vs. global, parallel vs. sequential, contextual vs. noncontextual, and interactive vs. automatic. In this paper, we categorize the schemes into three main groups: pixel-based, edge-based, and region-based. Pixel-based segmentation schemes classify pixels based solely on their gray levels. Edge-based schemes first detect local discontinuities (edges) and then use that information to separate the image into regions. Finally, region-based schemes start with a seed pixel (or group of pixels) and then grow or split the seed until the original image is composed of only homogeneous regions. Because there are a number of survey papers available, we will not discuss all segmentation schemes. Rather than a survey, we take the approach of a detailed overview.
We focus only on the more common approaches in order to give the reader a flavor for the variety of techniques available yet present enough details to facilitate implementation and experimentation.
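    A minimal example of the first category, pixel-based segmentation by gray-level thresholding (the thresholds are illustrative, not from the paper):

```python
import numpy as np

def pixel_based_segment(gray, thresholds):
    """Label each pixel purely by its gray level: pixels below the first
    threshold get label 0, those between the first and second get 1,
    and so on. No spatial context is used, which is both the defining
    property and the main weakness of pixel-based schemes."""
    labels = np.zeros(gray.shape, dtype=int)
    for label, t in enumerate(sorted(thresholds), start=1):
        labels[gray >= t] = label
    return labels
```

Edge-based and region-based schemes differ precisely in reintroducing the spatial context that this classifier ignores.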

  17. Thematic accuracy of the 1992 National Land-Cover Data for the eastern United States: Statistical methodology and regional results

    USGS Publications Warehouse

    Stehman, S.V.; Wickham, J.D.; Smith, J.H.; Yang, L.

    2003-01-01

    The accuracy of the 1992 National Land-Cover Data (NLCD) map is assessed via a probability sampling design incorporating three levels of stratification and two stages of selection. Agreement between the map and reference land-cover labels is defined as a match between the primary or alternate reference label determined for a sample pixel and a mode class of the mapped 3×3 block of pixels centered on the sample pixel. Results are reported for each of the four regions comprising the eastern United States for both Anderson Level I and II classifications. Overall accuracies for Levels I and II are 80% and 46% for New England, 82% and 62% for New York/New Jersey (NY/NJ), 70% and 43% for the Mid-Atlantic, and 83% and 66% for the Southeast.
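    The agreement definition above (primary or alternate reference label vs. the mode class of the mapped 3×3 block) translates directly into code; a sketch with hypothetical class labels:

```python
import numpy as np
from collections import Counter

def mode_class_3x3(mapped, r, c):
    """Most frequent class in the 3x3 block of mapped pixels centered
    on (r, c), clipped at the image border."""
    block = mapped[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
    return Counter(block.ravel().tolist()).most_common(1)[0][0]

def agrees(primary_ref, alternate_ref, mapped, r, c):
    """Map/reference agreement as defined in the NLCD assessment:
    a match between either reference label and the block's mode class."""
    return mode_class_3x3(mapped, r, c) in (primary_ref, alternate_ref)
```

Using the mode of a 3×3 block rather than the single sample pixel makes the agreement measure tolerant of one-pixel registration errors between the map and the reference data.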

  18. An RBF-based compression method for image-based relighting.

    PubMed

    Leung, Chi-Sing; Wong, Tien-Tsin; Lam, Ping-Man; Choy, Kwok-Hung

    2006-04-01

    In image-based relighting, a pixel is associated with a number of sampled radiance values. This paper presents a two-level compression method. In the first level, the plenoptic property of a pixel is approximated by a spherical radial basis function (SRBF) network. That means that the spherical plenoptic function of each pixel is represented by a number of SRBF weights. In the second level, we apply a wavelet-based method to compress these SRBF weights. To reduce the visual artifact due to quantization noise, we develop a constrained method for estimating the SRBF weights. Our proposed approach is superior to JPEG, JPEG2000, and MPEG. Compared with the spherical harmonics approach, our approach has a lower complexity, while the visual quality is comparable. The real-time rendering method for our SRBF representation is also discussed.
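    The first-level representation can be sketched as follows, evaluating f(w) = sum_i w_i * exp(lam * (dot(w, c_i) - 1)) over unit directions. This Gaussian-like kernel on the sphere is a common SRBF choice assumed here for illustration; the paper's exact kernel and the constrained weight estimation are not reproduced.

```python
import numpy as np

def srbf_eval(directions, centers, weights, lam):
    """Evaluate a spherical RBF network at unit direction vectors.

    directions: (M, 3) query directions, centers: (N, 3) kernel centers,
    weights: (N,) weights, lam: kernel sharpness. Each kernel
    exp(lam * (dot - 1)) peaks at 1 when a query aligns with its center.
    """
    dots = directions @ centers.T          # (M, N) cosine similarities
    return np.exp(lam * (dots - 1.0)) @ weights
```

Storing only the N weights per pixel (and then wavelet-compressing them) is what gives the two-level scheme its compression ratio over storing every sampled radiance value.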

  19. Design of a Low-Light-Level Image Sensor with On-Chip Sigma-Delta Analog-to- Digital Conversion

    NASA Technical Reports Server (NTRS)

    Mendis, Sunetra K.; Pain, Bedabrata; Nixon, Robert H.; Fossum, Eric R.

    1993-01-01

    The design and projected performance of a low-light-level active-pixel-sensor (APS) chip with semi-parallel analog-to-digital (A/D) conversion is presented. The individual elements have been fabricated and tested using MOSIS 2-micrometer CMOS technology, although the integrated system has not yet been fabricated. The imager consists of a 128 x 128 array of active pixels at a 50 micrometer pitch. Each column of pixels shares a 10-bit A/D converter based on first-order oversampled sigma-delta (ΣΔ) modulation. The 10-bit outputs of each converter are multiplexed and read out through a single set of outputs. A semi-parallel architecture is chosen to achieve 30 frames/second operation even at low light levels. The sensor is designed for less than 12 e^- rms noise performance.
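    First-order sigma-delta modulation is compact enough to simulate behaviorally; the sketch below (normalized input in [0, 1], one-bit feedback) illustrates the principle, not the chip's actual circuit.

```python
def sigma_delta_first_order(samples):
    """One-bit first-order sigma-delta modulator: integrate the error
    between the input and the fed-back output bit. The running mean of
    the output bitstream tracks the input level; a decimation filter
    then turns the oversampled bitstream into multi-bit samples."""
    integrator, feedback, bits = 0.0, 0.0, []
    for x in samples:
        integrator += x - feedback         # accumulate quantization error
        bit = 1 if integrator >= 0.5 else 0
        feedback = float(bit)              # one-bit DAC feedback
        bits.append(bit)
    return bits
```

For a constant input of 0.25, the bitstream settles into a pattern with one 1 per four samples, so its mean converges to the input value.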

  20. ALIF and total disc replacement versus 2-level circumferential fusion with TLIF: a prospective, randomized, clinical and radiological trial.

    PubMed

    Hoff, Eike K; Strube, Patrick; Pumberger, Matthias; Zahn, Robert K; Putzier, Michael

    2016-05-01

    Prospective, randomized trial. The treatment of degenerative disc disease (DDD) with two-level fusion has been associated with a reasonable rate of complications. The aim of the present study was to compare (Hybrid) stand-alone anterior lumbar interbody fusion (ALIF) at L5/S1 with total disc replacement at L4/5 (TDR) as an alternative surgical strategy to (Fusion) 2-level circumferential fusion employing transforaminal lumbar interbody fusion (TLIF) with transpedicular stabilization at L4-S1. A total of 62 patients with symptomatic DDD of segments L5/S1 (Modic ≥2°) and L4/5 (Modic ≤2°; positive discography) were enrolled; 31 were treated with Hybrid and 31 with Fusion. Preoperatively, at 0, 12, and a mean follow-up of 37 months, clinical (ODI, VAS) and radiological evaluations (plain/extension-flexion radiographs evaluated for implant failure, fusion, global and segmental lordosis, and ROM) were performed. In 26 of 31 Hybrid and 24 of 31 Fusion patients available at the final follow-up, we found a significant clinical improvement compared to preoperatively. Hybrid patients had significantly lower VAS scores immediately postoperatively and at follow-up compared to Fusion patients. The complication rates were low and similar between the groups. Lumbar lordosis increased in both groups. The increase was mainly located at L4-S1 in the Hybrid group and at L1-L4 in the Fusion group. Hybrid patients presented with increased ROM at L4/5 and L3/4, and Fusion patients presented with increased ROM at L3/4, with significantly greater ROM at L3/4 compared to Hybrid patients at follow-up. Hybrid surgery is a viable surgical alternative for the presented indication. Approach-related inferior trauma and the balanced restoration of lumbar lordosis resulted in superior clinical outcomes compared to two-level circumferential fusion with TLIF.

  1. Depth-color fusion strategy for 3-D scene modeling with Kinect.

    PubMed

    Camplani, Massimo; Mantecon, Tomas; Salgado, Luis

    2013-12-01

    Low-cost depth cameras, such as Microsoft Kinect, have completely changed the world of human-computer interaction through controller-free gaming applications. Depth data provided by the Kinect sensor presents several noise-related problems that have to be tackled to improve the accuracy of the depth data, thus obtaining more reliable game control platforms and broadening its applicability. In this paper, we present a depth-color fusion strategy for 3-D modeling of indoor scenes with Kinect. Accurate depth and color models of the background elements are iteratively built and used to detect moving objects in the scene. Kinect depth data is processed with an innovative adaptive joint-bilateral filter that efficiently combines depth and color by analyzing an edge-uncertainty map and the detected foreground regions. Results show that the proposed approach efficiently tackles the main Kinect data problems: distance-dependent depth maps, spatial noise, and temporal random fluctuations are dramatically reduced; objects' depth boundaries are refined, and non-measured depth pixels are interpolated. Moreover, a robust depth and color background model and accurate moving-object silhouettes are generated.

  2. Perceptual Real-Time 2D-to-3D Conversion Using Cue Fusion.

    PubMed

    Leimkuhler, Thomas; Kellnhofer, Petr; Ritschel, Tobias; Myszkowski, Karol; Seidel, Hans-Peter

    2018-06-01

    We propose a system to infer binocular disparity from a monocular video stream in real-time. Different from classic reconstruction of physical depth in computer vision, we compute perceptually plausible disparity that is numerically inaccurate but results in a very similar overall depth impression, with plausible overall layout, sharp edges, fine details, and agreement between luminance and disparity. We use several simple monocular cues to estimate disparity maps and confidence maps of low spatial and temporal resolution in real-time. These are complemented by spatially-varying, appearance-dependent and class-specific disparity prior maps, learned from example stereo images. Scene classification selects this prior at runtime. Fusion of prior and cues is done by means of robust MAP inference on a dense spatio-temporal conditional random field with high spatial and temporal resolution. Using normal distributions allows this in constant-time, parallel per-pixel work. We compare our approach to previous 2D-to-3D conversion systems in terms of different metrics, as well as a user study, and validate our notion of perceptually plausible disparity.
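    The constant-time per-pixel fusion enabled by normal distributions comes down to a product of Gaussians. The closed form is sketched below; the paper's full spatio-temporal CRF inference is more involved.

```python
def fuse_gaussians(mu_cue, var_cue, mu_prior, var_prior):
    """Posterior of two Gaussian beliefs about the same disparity value:
    precision-weighted mean and summed precisions. Because the result is
    closed-form, the work per pixel is constant and trivially parallel."""
    w_cue, w_prior = 1.0 / var_cue, 1.0 / var_prior
    mu = (w_cue * mu_cue + w_prior * mu_prior) / (w_cue + w_prior)
    return mu, 1.0 / (w_cue + w_prior)
```

A confident cue (small variance) dominates the fused disparity; where the cue is uncertain, the learned prior takes over, which matches the role the confidence maps play above.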

  3. Compact advanced extreme-ultraviolet imaging spectrometer for spatiotemporally varying tungsten spectra from fusion plasmas.

    PubMed

    Song, Inwoo; Seon, C R; Hong, Joohwan; An, Y H; Barnsley, R; Guirlet, R; Choe, Wonho

    2017-09-01

    A compact advanced extreme-ultraviolet (EUV) spectrometer operating in the EUV wavelength range of a few nanometers to measure spatially resolved line emissions from tungsten (W) was developed for studying W transport in fusion plasmas. This system consists of two perpendicularly crossed slits (an entrance aperture and a space-resolved slit) inside a chamber operating as a pinhole, which enables the system to obtain a spatial distribution of line emissions. Moreover, a so-called v-shaped slit was devised to manage the aperture size, which governs the spatial resolution of the system through the finite width of the pinhole. A back-illuminated charge-coupled device was used as a detector with 2048 × 512 active pixels, each with dimensions of 13.5 × 13.5 μm². After alignment and installation on the Korea Superconducting Tokamak Advanced Research (KSTAR) device, preliminary results were obtained during the 2016 campaign. Several well-known carbon atomic lines in the 2-7 nm range originating from intrinsic carbon impurities were observed and used for wavelength calibration. Further, the time behavior of their spatial distributions is presented.

  4. Improving Secondary Ion Mass Spectrometry Image Quality with Image Fusion

    NASA Astrophysics Data System (ADS)

    Tarolli, Jay G.; Jackson, Lauren M.; Winograd, Nicholas

    2014-12-01

    The spatial resolution of chemical images acquired with cluster secondary ion mass spectrometry (SIMS) is limited not only by the size of the probe utilized to create the images but also by detection sensitivity. As the probe size is reduced to below 1 μm, for example, a low signal in each pixel limits lateral resolution because of counting statistics considerations. Although it can be useful to implement numerical methods to mitigate this problem, here we investigate the use of image fusion to combine information from scanning electron microscope (SEM) data with chemically resolved SIMS images. The advantage of this approach is that the higher intensity and, hence, spatial resolution of the electron images can help to improve the quality of the SIMS images without sacrificing chemical specificity. Using a pan-sharpening algorithm, the method is illustrated with synthetic data, experimental data acquired from a metallic grid sample, and experimental data acquired from a lawn of algae cells. The results show that up to an order-of-magnitude increase in spatial resolution can be achieved. A cross-correlation metric is used to evaluate the reliability of the procedure.
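A minimal ratio-based pan-sharpening sketch of the idea, not the paper's exact algorithm: a low-resolution chemical channel is upsampled and modulated by the ratio of the high-resolution pan image (here standing in for SEM) to its own block mean, injecting spatial detail while preserving the channel's block-average intensity. The data are synthetic.

```python
import numpy as np

def pan_sharpen(sims, sem, scale):
    """sims: (h, w) low-res channel; sem: (h*scale, w*scale) pan image."""
    up = np.kron(sims, np.ones((scale, scale)))        # nearest upsample
    # block-mean version of the pan image at the SIMS resolution
    sem_low = sem.reshape(sims.shape[0], scale,
                          sims.shape[1], scale).mean(axis=(1, 3))
    sem_low_up = np.kron(sem_low, np.ones((scale, scale)))
    return up * sem / np.maximum(sem_low_up, 1e-12)    # inject detail

sims = np.array([[1.0, 4.0], [2.0, 8.0]])              # low-res "chemical" map
sem = np.arange(16, dtype=float).reshape(4, 4) + 1.0   # high-res "SEM" image
sharp = pan_sharpen(sims, sem, scale=2)
```

By construction each scale×scale block of the output keeps the original SIMS value as its mean, so chemical quantitation is preserved at the coarse scale while edges come from the pan image.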

  5. Radiographic and Clinical Outcome of Silicate-substituted Calcium Phosphate (Si-CaP) Ceramic Bone Graft in Spinal Fusion Procedures.

    PubMed

    Alimi, Marjan; Navarro-Ramirez, Rodrigo; Parikh, Karishma; Njoku, Innocent; Hofstetter, Christoph P; Tsiouris, Apostolos J; Härtl, Roger

    2017-07-01

    Retrospective cohort study. To evaluate the radiographic and clinical outcome of silicate-substituted calcium phosphate (Si-CaP), utilized as a graft substance in spinal fusion procedures. Specific properties of Si-CaP give the graft a negative surface charge, which can have a positive effect on osteoblast activity and neovascularization of the bone. This study included patients who underwent spinal fusion procedures between 2007 and 2011 in which Si-CaP was used as the only bone graft substance. Fusion was evaluated on follow-up CT scans. Clinical outcome was assessed using the Oswestry Disability Index, the Neck Disability Index, and the visual analogue scale (VAS) for back, leg, neck, and arm pain. A total of 234 patients (516 spinal fusion levels) were studied. Surgical procedures consisted of 57 transforaminal lumbar interbody fusions, 49 anterior cervical discectomies and fusions, 44 extreme lateral interbody fusions, 30 posterior cervical fusions, 19 thoracic fusion surgeries, 17 axial lumbar interbody fusions, 16 combined anterior and posterior cervical fusions, and 2 anterior lumbar interbody fusions. At a mean radiographic follow-up of 14.2±4.3 months, fusion was present in 82.9% of patients and 86.8% of levels. The highest fusion rate was observed in the cervical region. At the latest clinical follow-up of 21.7±14.2 months, all clinical outcome parameters showed significant improvement. The Oswestry Disability Index improved from 45.6 to 13.3 points, the Neck Disability Index from 40.6 to 29.3, VAS back from 6.1 to 3.5, VAS leg from 5.6 to 2.4, VAS neck from 4.7 to 2.7, and VAS arm from 4.1 to 1.7. Of 7 cases requiring a secondary surgical procedure at the index level, the indication for surgery was nonunion in 3 patients. Si-CaP is an effective bone graft substitute. At the latest follow-up, favorable radiographic and clinical outcomes were observed in the majority of patients. Level of evidence: III.

  6. 320 x 240 uncooled IRFPA with pixel wise thin film vacuum packaging

    NASA Astrophysics Data System (ADS)

    Yon, J.-J.; Dumont, G.; Rabaud, W.; Becker, S.; Carle, L.; Goudon, V.; Vialle, C.; Hamelin, A.; Arnaud, A.

    2012-10-01

    Silicon-based vacuum packaging is a key enabling technology for achieving affordable uncooled Infrared Focal Plane Arrays (IRFPA), as required by the promising mass market for very low cost IR applications such as automotive driving assistance, energy loss monitoring in buildings, and motion sensors. Among the various approaches studied worldwide, CEA-LETI is developing a unique technology in which each bolometer pixel is sealed under vacuum at the wafer level using an IR-transparent thin film deposition. This technology, referred to as PLP (Pixel Level Packaging), leads to an array of hermetic micro-caps, each containing a single microbolometer. Since the successful demonstration that the PLP technology, when applied to a single microbolometer pixel, can provide the required vacuum of < 10⁻³ mbar, the authors have pushed forward development of the technology on fully operational QVGA (320 x 240 pixel) CMOS readout-circuit base wafers. With this in view, the article reports on the electro-optical performance obtained from this preliminary PLP-based QVGA demonstrator. Apart from the response, noise, and NETD distributions, the paper also puts emphasis on additional key features such as thermal time constant, image quality, and ageing properties.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fahim, Farah; Deptuch, Grzegorz W.; Hoff, James R.

    Semiconductor hybrid pixel detectors often consist of a pixelated sensor layer bump-bonded to a matching pixelated readout integrated circuit (ROIC). The sensor can range from high-resistivity Si to III-V materials, whereas a Si CMOS process is typically used to manufacture the ROIC. Independent device-physics and electronic design automation (EDA) tools, with significantly different solvers, are used to determine sensor characteristics and to verify the functional performance of ROICs, respectively. Some physics solvers provide the capability of transferring data to the EDA tool. However, single-pixel transient simulations are either not feasible due to convergence difficulties or are prohibitively long. A simplified sensor model, which includes a current pulse in parallel with the detector equivalent capacitance, is often used; even then, SPICE-type top-level (entire array) simulations range from days to weeks. In order to analyze detector deficiencies for a particular scientific application, accurately defined transient behavioral models of all the functional blocks are required. Furthermore, various simulations of the entire array, such as transient, noise, Monte Carlo, and inter-pixel effects, need to be performed within a reasonable time frame without trading off accuracy. The sensor and the analog front-end can be modeled using a real-number modeling language, as complex mathematical functions, or detailed data can be saved to text files for further top-level digital simulations. Parasitically aware digital timing is extracted in standard delay format (SDF) from the pixel digital back-end layout as well as from the periphery of the ROIC. For any given input, detector-level worst-case and best-case simulations are performed in a Verilog simulation environment to determine the output. Each top-level transient simulation takes no more than 10-15 minutes. The impact of changing key parameters, such as sensor Poissonian shot noise, analog front-end bandwidth, and jitter due to clock distribution, can be accurately analyzed to determine ROIC architectural viability and bottlenecks. Hence the impact of the detector parameters on the scientific application can be studied.
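A toy behavioral model in the spirit described above: a detector current pulse feeding the sensor's equivalent capacitance, integrated with a simple front-end discharge. This is a Python stand-in for a real-number model, and every component value below is invented for illustration.

```python
import math

def pixel_response(q_fc=1.0, cap_ff=100.0, tau_pulse_ns=5.0,
                   tau_disc_ns=200.0, dt_ns=0.5, t_end_ns=1000.0):
    """Peak voltage (mV) at the preamp node for a charge q_fc (fC)
    deposited as an exponential current pulse on cap_ff (fF), with a
    slow exponential reset discharge of time constant tau_disc_ns."""
    n = int(t_end_ns / dt_ns)
    v = 0.0                                  # node voltage in mV
    peak = 0.0
    for i in range(n):
        t = i * dt_ns
        # exponential current pulse carrying total charge ~ q_fc
        i_det = (q_fc / tau_pulse_ns) * math.exp(-t / tau_pulse_ns)
        v += (i_det * dt_ns) / cap_ff * 1000.0   # fC on fF -> V, then mV
        v -= v * dt_ns / tau_disc_ns             # reset discharge
        peak = max(peak, v)
    return peak
```

The point of such models is speed: this entire transient takes microseconds to milliseconds in an event-driven simulator, against hours for a device-level solve, and the model is linear in the deposited charge.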

  8. Immense random colocalization, revealed by automated high content image cytometry, seriously questions FISH as gold standard for detecting EML4-ALK fusion.

    PubMed

    Smuk, Gábor; Tornóczky, Tamás; Pajor, László; Chudoba, Ilse; Kajtár, Béla; Sárosi, Veronika; Pajor, Gábor

    2018-05-19

    EML4-ALK gene fusion (inv(2)(p21p23)) in non-small cell lung cancer (NSCLC) predicts response to tyrosine kinase inhibitor treatment. One gold-standard diagnostic is the dual color (DC) break-apart (BA) FISH technique; however, the unusual closeness of the involved genes has been suggested to raise the likelihood of random colocalization (RCL) of signals. Although this is suspected to decrease sensitivity (often to as low as 40-70%), the exact level and effect of RCL had not been determined thus far. Signal distances were analyzed with 0.1 µm precision in more than 25,000 nuclei via automated high-content image cytometry. Negative and positive controls were created using conventional DC BA and inv(2)(p21p23)-mimicking probe sets, respectively. The average distance between red and green signals was 9.72 pixels (px) (±5.14 px) in positives and 3.28 px (±2.44 px) in negatives, with a 41% overlap between the distributions. Specificity and sensitivity of correctly determining ALK status were 97% and 29%, respectively. When investigating inv(2)(p21p23) with DC BA FISH, specificity is high, but seven out of ten aberrant nuclei are inevitably falsely classified as negative due to the extreme level of RCL. Together with genetic heterogeneity and the dilution effect of non-tumor cells in NSCLC, this immense analytical false negativity is the primary cause of the often-described low diagnostic sensitivity. These results convincingly suggest that if FISH is to remain a gold standard for detecting the therapy-relevant inv(2), either a modified evaluation protocol or a more reliable probe design should be considered instead of the current DC BA one. © 2018 International Society for Advancement of Cytometry.
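The random colocalization effect is easy to reproduce with a Monte Carlo sketch: drop one red and one green signal uniformly in a circular "nucleus" and count how often they fall within a fusion-call distance threshold purely by chance. The radius, threshold, and trial count below are illustrative, not the study's values.

```python
import math, random

def rcl_rate(radius_px=30.0, threshold_px=6.0, trials=20000, seed=1):
    """Fraction of trials in which two independent uniform points in a
    disc of radius radius_px land closer than threshold_px."""
    rng = random.Random(seed)

    def point():
        while True:                      # rejection-sample inside the disc
            x = rng.uniform(-radius_px, radius_px)
            y = rng.uniform(-radius_px, radius_px)
            if x * x + y * y <= radius_px * radius_px:
                return x, y

    hits = 0
    for _ in range(trials):
        (x1, y1), (x2, y2) = point(), point()
        if math.hypot(x1 - x2, y1 - y2) < threshold_px:
            hits += 1
    return hits / trials
```

For a small threshold t relative to the radius R, the chance rate scales roughly as (t/R)², which shows why signals from genes that sit close together produce so many spurious "fused" patterns.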

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart Zweben; Samuel Cohen; Hantao Ji

    Small "concept exploration" experiments have for many years been an important part of the fusion research program at the Princeton Plasma Physics Laboratory (PPPL). This paper describes some of the present and planned fusion concept exploration experiments at PPPL. These experiments are at a university-scale research level, in contrast with the larger fusion devices at PPPL, such as the National Spherical Torus Experiment (NSTX) and the Tokamak Fusion Test Reactor (TFTR), which are at the "proof-of-principle" and "proof-of-performance" levels, respectively.

  10. Mathematical Fundamentals of Probabilistic Semantics for High-Level Fusion

    DTIC Science & Technology

    2013-12-02

    understanding of the fundamental aspects of uncertainty representation and reasoning that a theory of hard and soft high-level fusion must encompass. Successful completion requires an unbiased, in-depth...and soft information is the lack of a fundamental HLIF theory, backed by a consistent mathematical framework and supporting algorithms. Although there

  11. Quantitative evaluation of the accuracy and variance of individual pixels in a scientific CMOS (sCMOS) camera for computational imaging

    NASA Astrophysics Data System (ADS)

    Watanabe, Shigeo; Takahashi, Teruo; Bennett, Keith

    2017-02-01

    The "scientific" CMOS (sCMOS) camera architecture fundamentally differs from CCD and EMCCD cameras. In digital CCD and EMCCD cameras, conversion from charge to the digital output generally passes through a single electronic chain, and the read noise and the conversion factor from photoelectrons to digital outputs are highly uniform across all pixels, although quantum efficiency may vary spatially. In CMOS cameras, the charge-to-voltage conversion is separate for each pixel, and each column has independent amplifiers and analog-to-digital converters, in addition to possible pixel-to-pixel variation in quantum efficiency. The "raw" output from the CMOS image sensor includes pixel-to-pixel variability in the read noise, electronic gain, offset, and dark current. Scientific camera manufacturers digitally compensate the raw signal from the CMOS image sensors to provide usable images. Statistical noise in images, unless properly modeled, can introduce errors in methods such as fluctuation correlation spectroscopy or computational imaging, for example, localization microscopy using maximum likelihood estimation. We measured the distributions and spatial maps of per-pixel offset, dark current, read noise, linearity, photoresponse non-uniformity, and variance for standard, off-the-shelf Hamamatsu ORCA-Flash4.0 V3 sCMOS cameras using highly uniform and controlled illumination conditions, from dark conditions through multiple low light levels between 20 and 1,000 photons/pixel per frame to higher light conditions. We further show that using pixel variance for flat-field correction leads to errors in cameras with good factory calibration.
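Per-pixel characterization of the kind described can be sketched from a stack of dark frames: each pixel's offset is the temporal mean and its read-noise variance the temporal variance. The "sensor" below is synthetic, with invented per-pixel offsets and noise levels standing in for a real sCMOS frame stack.

```python
import numpy as np

rng = np.random.default_rng(0)
h, w, n_frames = 8, 8, 2000

# Synthetic ground truth: spatially varying offset (DN) and read noise (DN rms)
offset_true = rng.uniform(90.0, 110.0, size=(h, w))
read_sigma_true = rng.uniform(1.0, 3.0, size=(h, w))

# Simulated dark-frame stack: offset plus per-pixel Gaussian read noise
dark = offset_true + read_sigma_true * rng.standard_normal((n_frames, h, w))

offset_map = dark.mean(axis=0)          # per-pixel offset estimate
var_map = dark.var(axis=0, ddof=1)      # per-pixel read-noise variance
```

With 2000 frames the offset estimate converges to within a small fraction of a DN, illustrating why per-pixel (rather than global) calibration maps are needed for an sCMOS sensor.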

  12. High dynamic range pixel architecture for advanced diagnostic medical x-ray imaging applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izadi, Mohammad Hadi; Karim, Karim S.

    2006-05-15

    The most widely used architecture in large-area amorphous silicon (a-Si) flat panel imagers is the passive pixel sensor (PPS), which consists of a detector and a readout switch. While the PPS has the advantage of being compact and amenable to high-resolution imaging, small PPS output signals are swamped by external column charge amplifier and data line thermal noise, which raise the minimum readable sensor input signal. In contrast to PPS circuits, on-pixel amplifiers in a-Si technology reduce readout noise to levels that can meet even the stringent requirements for low-noise digital x-ray fluoroscopy (<1000 noise electrons). However, larger voltages at the pixel input cause the output of the amplified pixel to become nonlinear, thus reducing the dynamic range. We reported a hybrid amplified pixel architecture based on a combination of PPS and amplified pixel designs that, in addition to low noise performance, also resulted in large-signal linearity and consequently higher dynamic range [K. S. Karim et al., Proc. SPIE 5368, 657 (2004)]. The additional benefit in large-signal linearity, however, came at the cost of an additional pixel transistor. We present an amplified pixel design that achieves the goals of low noise performance and large-signal linearity without the need for an additional pixel transistor. Theoretical calculations and simulation results for noise indicate the applicability of the amplified a-Si pixel architecture for high dynamic range medical x-ray imaging applications that require switching between low-exposure, real-time fluoroscopy and high-exposure radiography.

  13. A robust color image fusion for low light level and infrared images

    NASA Astrophysics Data System (ADS)

    Liu, Chao; Zhang, Xiao-hui; Hu, Qing-ping; Chen, Yong-kang

    2016-09-01

    The low light level and infrared color fusion technology has achieved great success in the field of night vision. The technology is designed to make hot targets in the fused image pop out in more intense colors, to render background details with a near-natural color appearance, and to improve target discovery, detection, and identification. Low light level images are very noisy under low illumination, and existing color fusion methods are easily degraded by noise in the low light level channel. Specifically, when the low light level image noise is large, the quality of the fused image decreases significantly, and targets in the infrared image may even be submerged by the noise. This paper proposes an adaptive color night vision technique in which noise evaluation parameters of the low light level image are introduced into the fusion process, improving the robustness of the color fusion. The color fusion results remain good in low-light situations, showing that this method can effectively improve the quality of low light level and infrared fused images under low illumination conditions.
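A sketch of a noise-adaptive fusion weight in the spirit of the method above: the low light level (LLL) channel's contribution shrinks as its estimated noise grows, letting the infrared (IR) channel dominate when the LLL image is unreliable. The noise estimator, the weighting law, and the constant k are all invented for illustration.

```python
import numpy as np

def estimate_noise_sigma(img):
    """Crude noise estimate: std of horizontal first differences
    (assumes the scene is locally smooth relative to the noise)."""
    return np.diff(img, axis=1).std() / np.sqrt(2.0)

def fuse(lll, ir, k=0.05):
    """Blend LLL and IR with an LLL weight that falls as noise rises."""
    sigma = estimate_noise_sigma(lll)
    w = 1.0 / (1.0 + k * sigma ** 2)
    return w * lll + (1.0 - w) * ir, w

# Synthetic example: a flat LLL frame with additive noise of sigma = 8
noisy_lll = 50.0 + 8.0 * np.random.default_rng(3).standard_normal((32, 32))
fused, w = fuse(noisy_lll, np.zeros((32, 32)))
```

A real system would estimate noise locally and map the weight through a tuned curve, but the mechanism, down-weighting the channel whose noise estimate is high, is the same.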

  14. A Multi-Modality CMOS Sensor Array for Cell-Based Assay and Drug Screening.

    PubMed

    Chi, Taiyun; Park, Jong Seok; Butts, Jessica C; Hookway, Tracy A; Su, Amy; Zhu, Chengjie; Styczynski, Mark P; McDevitt, Todd C; Wang, Hua

    2015-12-01

    In this paper, we present a fully integrated multi-modality CMOS cellular sensor array with four sensing modalities to characterize different cell physiological responses, including extracellular voltage recording, cellular impedance mapping, optical detection with shadow imaging and bioluminescence sensing, and thermal monitoring. The sensor array consists of nine parallel pixel groups and nine corresponding signal conditioning blocks. Each pixel group comprises one temperature sensor and 16 tri-modality sensor pixels, while each tri-modality sensor pixel can be independently configured for extracellular voltage recording, cellular impedance measurement (voltage excitation/current sensing), and optical detection. This sensor array supports multi-modality cellular sensing at the pixel level, which enables holistic cell characterization and joint-modality physiological monitoring on the same cellular sample with a pixel resolution of 80 μm × 100 μm. Comprehensive biological experiments with different living cell samples demonstrate the functionality and benefit of the proposed multi-modality sensing in cell-based assay and drug screening.

  15. Effects of autocorrelation upon LANDSAT classification accuracy. [Richmond, Virginia and Denver, Colorado

    NASA Technical Reports Server (NTRS)

    Craig, R. G. (Principal Investigator)

    1983-01-01

    Richmond, Virginia and Denver, Colorado were study sites in an effort to determine the effect of autocorrelation on the accuracy of a parallelepiped classifier of LANDSAT digital data. The autocorrelation was assumed to decay to insignificant levels when sampled at distances of at least ten pixels. Spectral themes were developed using both blocks of adjacent pixels and groups of pixels spaced at least 10 pixels apart. Effects of geometric distortions were minimized by using only pixels from the interiors of land cover sections. Accuracy was evaluated for three classes: agriculture, residential, and "all other"; both type 1 and type 2 errors were evaluated by means of overall classification accuracy. All classes give comparable results. Accuracy is approximately the same with both techniques; however, the variance in accuracy is significantly higher using the themes developed from autocorrelated data. The vectors of mean spectral response were nearly identical regardless of the sampling method used. The estimated variances were much larger when using autocorrelated pixels.
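A parallelepiped classifier assigns a pixel to a class when every band value falls inside that class's trained min/max box. A minimal sketch with invented training values (bounds set at mean ± 2 standard deviations per band):

```python
def train_box(samples, k=2.0):
    """samples: list of band vectors -> per-band (low, high) bounds."""
    n = len(samples)
    bands = len(samples[0])
    bounds = []
    for b in range(bands):
        vals = [s[b] for s in samples]
        mean = sum(vals) / n
        sd = (sum((v - mean) ** 2 for v in vals) / n) ** 0.5
        bounds.append((mean - k * sd, mean + k * sd))
    return bounds

def classify(pixel, boxes):
    """boxes: dict class -> bounds; returns first matching class or None."""
    for cls, bounds in boxes.items():
        if all(lo <= v <= hi for v, (lo, hi) in zip(pixel, bounds)):
            return cls
    return None

boxes = {
    "agriculture": train_box([[40, 80], [42, 78], [38, 82]]),
    "residential": train_box([[70, 50], [72, 48], [68, 52]]),
}
```

The study's point follows directly from the `train_box` step: if the training pixels are spatially autocorrelated, the per-band standard deviations (and hence the box sizes) are poorly estimated, which inflates the variance of the resulting classification accuracy.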

  16. Visual mining business service using pixel bar charts

    NASA Astrophysics Data System (ADS)

    Hao, Ming C.; Dayal, Umeshwar; Casati, Fabio

    2004-06-01

    Basic bar charts have been commonly available, but they only show highly aggregated data. Finding the valuable information hidden in the data is essential to the success of business. We describe a new visualization technique called pixel bar charts, which are derived from regular bar charts. The basic idea of a pixel bar chart is to present all data values directly instead of aggregating them into a few data values. Pixel bar charts provide data distribution and exceptions in addition to aggregated data. The approach is to represent each data item (e.g. a business transaction) by a single pixel in the bar chart. An attribute of each data item is encoded into the pixel color and can be accessed and drilled down to detail information as needed. Different color mappings are used to represent multiple attributes. This technique has been prototyped in three business service applications: Business Operation Analysis, Sales Analysis, and Service Level Agreement Analysis at Hewlett-Packard Laboratories. Our applications show the wide applicability and usefulness of this new idea.
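The core layout idea can be sketched in a few lines: instead of one aggregated height per category, each record becomes a single "pixel" stacked inside its category's bar, carrying its own value as the pixel color. The data and layout below are invented for illustration; a real implementation would also map values to an actual color scale and render the grids.

```python
def pixel_bar_chart(records, bar_width=4):
    """records: list of (category, value) items. Returns a dict mapping
    each category to a 2-D grid (list of rows) in which every cell holds
    one record's value, i.e. the 'color' of one pixel in that bar."""
    bars = {}
    for cat, value in records:
        bars.setdefault(cat, []).append(value)
    grids = {}
    for cat, values in bars.items():
        grids[cat] = [values[i:i + bar_width]
                      for i in range(0, len(values), bar_width)]
    return grids

records = [("east", 10), ("east", 35), ("west", 5),
           ("east", 20), ("west", 80), ("east", 60), ("east", 15)]
grids = pixel_bar_chart(records)
```

The number of rows in each grid plays the role of the bar's height, so the aggregate view of an ordinary bar chart is preserved while every individual transaction remains visible and drillable.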

  17. Terahertz Array Receivers with Integrated Antennas

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Goutam; Llombart, Nuria; Lee, Choonsup; Jung, Cecile; Lin, Robert; Cooper, Ken B.; Reck, Theodore; Siles, Jose; Schlecht, Erich; Peralta, Alessandro

    2011-01-01

    Highly sensitive terahertz heterodyne receivers have mostly been single-pixel. However, there is now a real need for multi-pixel array receivers at these frequencies, driven by science and instrument requirements. In this paper we explore various receiver front-end and antenna architectures for use in multi-pixel integrated arrays at terahertz frequencies. Development of wafer-level integrated terahertz receiver front-ends using advanced semiconductor fabrication technologies has progressed very well over the past few years. Novel stacking of micro-machined silicon wafers, which allows for the 3-dimensional integration of various terahertz receiver components in extremely small packages, has made it possible to design multi-pixel heterodyne arrays. One of the critical technologies needed to achieve a fully integrated system is antenna arrays compatible with the receiver array architecture. In this paper we explore different receiver and antenna architectures for multi-pixel heterodyne and direct detector arrays for various applications, such as multi-pixel high-resolution spectrometers and imaging radar at terahertz frequencies.

  18. The transition zone above a lumbosacral fusion.

    PubMed

    Hambly, M F; Wiltse, L L; Raghavan, N; Schneiderman, G; Koenig, C

    1998-08-15

    The clinical and radiographic effect of a lumbar or lumbosacral fusion was studied in 42 patients who had undergone a posterolateral fusion with an average follow-up of 22.6 years. To examine the long-term effects of posterolateral lumbar or lumbosacral fusion on the cephalad two motion segments (transition zone). It is commonly held that accelerated degeneration occurs in the motion segments adjacent to a fusion. Most studies are of short-term, anecdotal, uncontrolled reports that pay particular attention only to the first motion segment immediately cephalad to the fusion. Forty-two patients who had previously undergone a posterolateral lumbar or lumbosacral fusion underwent radiographic and clinical evaluation. Rate of fusion, range of motion, osteophytes, degenerative spondylolisthesis, retrolisthesis, facet arthrosis, disc ossification, dynamic instability, and disc space height were all studied and statistically compared with an age- and gender-matched control group. The patient's self-reported clinical outcome was also recorded. Degenerative changes occurred at the second level above the fused levels with a frequency equal to those occurring in the first level. There was no statistical difference between the study group and the cohort group in the presence of radiographic changes within the transition zone. In those patients undergoing fusion for degenerative processes, 75% reported a good to excellent outcome, whereas 84% of those undergoing fusion for spondylolysis or spondylolisthesis reported a good to excellent outcome. Radiographic changes occur within the transition zone cephalad to a lumbar or lumbosacral fusion. However, these changes are also seen in control subjects who have had no surgery.

  19. Reanalysis of RNA-Sequencing Data Reveals Several Additional Fusion Genes with Multiple Isoforms

    PubMed Central

    Kangaspeska, Sara; Hultsch, Susanne; Edgren, Henrik; Nicorici, Daniel; Murumägi, Astrid; Kallioniemi, Olli

    2012-01-01

    RNA-sequencing and tailored bioinformatic methodologies have paved the way for identification of expressed fusion genes from the chaotic genomes of solid tumors. We have recently successfully exploited RNA-sequencing for the discovery of 24 novel fusion genes in breast cancer. Here, we demonstrate the importance of continuous optimization of the bioinformatic methodology for this purpose, and report the discovery and experimental validation of 13 additional fusion genes from the same samples. Integration of copy number profiling with the RNA-sequencing results revealed that the majority of the gene fusions were promoter-donating events that occurred at copy number transition points or involved high-level DNA-amplifications. Sequencing of genomic fusion break points confirmed that DNA-level rearrangements underlie selected fusion transcripts. Furthermore, a significant portion (>60%) of the fusion genes were alternatively spliced. This illustrates the importance of reanalyzing sequencing data as gene definitions change and bioinformatic methods improve, and highlights the previously unforeseen isoform diversity among fusion transcripts. PMID:23119097

  1. [Rumination and cognitive fusion in dementia family caregivers].

    PubMed

    Romero-Moreno, Rosa; Márquez-González, María; Losada, Andrés; Fernández-Fernández, Virginia; Nogales-González, Celia

    2015-01-01

    Rumination has been described as a dysfunctional coping strategy related to emotional distress. Recently, the Acceptance and Commitment Therapy approach has highlighted the negative role that cognitive fusion (the extent to which we are psychologically entangled with and dominated by the form or content of our thoughts) plays in the explanation of distress. The aim of this study is to simultaneously analyze the role of rumination and cognitive fusion in the caregiving stress process. The sample of 176 dementia caregivers was divided into four groups according to their levels of rumination and cognitive fusion: HRHF=high rumination+high cognitive fusion; HRLF=high rumination+low cognitive fusion; LRHF=low rumination+high cognitive fusion; and LRLF=low rumination+low cognitive fusion. Caregiver stress factors, frequency of pleasant events, experiential avoidance, coherence of and satisfaction with personal values, depression, anxiety, and satisfaction with life were measured. The HRHF group showed higher levels of depression, anxiety, and experiential avoidance, and lower levels of satisfaction with life, frequency of pleasant events, and coherence of and satisfaction with personal values, than the other three groups. Considering rumination and cognitive fusion simultaneously may contribute to a better understanding of caregiver coping and distress. Copyright © 2014 SEGG. Published by Elsevier Espana. All rights reserved.

  2. Mapping Forest Height in Gabon Using UAVSAR Multi-Baseline Polarimetric SAR Interferometry and Lidar Fusion

    NASA Astrophysics Data System (ADS)

    Simard, M.; Denbina, M. W.

    2017-12-01

    Using data collected by NASA's Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) and Land, Vegetation, and Ice Sensor (LVIS) lidar, we have estimated forest canopy height for a number of study areas in the country of Gabon using a new machine learning data fusion approach. Using multi-baseline polarimetric synthetic aperture radar interferometry (PolInSAR) data collected by UAVSAR, forest heights can be estimated using the random volume over ground model. In the case of multi-baseline UAVSAR data consisting of many repeat passes with spatially separated flight tracks, we can estimate different forest height values for each different image pair, or baseline. In order to choose the best forest height estimate for each pixel, the baselines must be selected or ranked, taking care to avoid baselines with unsuitable spatial separation, or severe temporal decorrelation effects. The current baseline selection algorithms in the literature use basic quality metrics derived from the PolInSAR data which are not necessarily indicative of the true height accuracy in all cases. We have developed a new data fusion technique which treats PolInSAR baseline selection as a supervised classification problem, where the classifier is trained using a sparse sampling of lidar data within the PolInSAR coverage area. The classifier uses a large variety of PolInSAR-derived features as input, including radar backscatter as well as features based on the PolInSAR coherence region shape and the PolInSAR complex coherences. The resulting data fusion method produces forest height estimates which are more accurate than a purely radar-based approach, while having a larger coverage area than the input lidar training data, combining some of the strengths of each sensor. 
The technique demonstrates the strong potential for forest canopy height and above-ground biomass mapping using fusion of PolInSAR with data from future spaceborne lidar missions such as the upcoming Global Ecosystems Dynamics Investigation (GEDI) lidar.
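A toy version of baseline selection as supervised classification: pixels covered by lidar supply labels (the baseline whose height estimate best matches lidar), and a simple 1-nearest-neighbour rule on per-baseline features predicts the best baseline elsewhere. The features, data, and classifier below are stand-ins for the paper's much richer PolInSAR feature set and training scheme.

```python
def label_best_baseline(heights_per_baseline, lidar_height):
    """Index of the baseline whose height estimate is closest to lidar."""
    errs = [abs(h - lidar_height) for h in heights_per_baseline]
    return errs.index(min(errs))

def nn_predict(train_feats, train_labels, feat):
    """1-nearest-neighbour classification (squared Euclidean distance)."""
    d2 = [sum((a - b) ** 2 for a, b in zip(f, feat)) for f in train_feats]
    return train_labels[d2.index(min(d2))]

# Training pixels: (per-baseline coherence features, lidar height,
# per-baseline PolInSAR height estimates) -- invented values.
train = [
    ([0.9, 0.4], 30.0, [31.0, 22.0]),   # baseline 0 is better here
    ([0.3, 0.8], 25.0, [15.0, 24.0]),   # baseline 1 is better here
]
feats = [t[0] for t in train]
labels = [label_best_baseline(t[2], t[1]) for t in train]
```

Once trained on the sparse lidar swath, the classifier extends baseline selection to the full radar coverage, which is exactly the fusion benefit described above: lidar-level accuracy over radar-level coverage.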

  3. Complications with axial presacral lumbar interbody fusion: A 5-year postmarketing surveillance experience

    PubMed Central

    Gundanna, Mukund I.; Miller, Larry E.; Block, Jon E.

    2011-01-01

    Background Open and minimally invasive lumbar fusion procedures have inherent procedural risks, with posterior and transforaminal approaches resulting in significant soft-tissue injury and the anterior approach endangering organs and major blood vessels. An alternative lumbar fusion technique uses a small paracoccygeal incision and a presacral approach to the L5-S1 intervertebral space, which avoids critical structures and may result in a favorable safety profile versus open and other minimally invasive fusion techniques. The purpose of this study was to evaluate complications associated with axial interbody lumbar fusion procedures using the Axial Lumbar Interbody Fusion (AxiaLIF) System (TranS1, Wilmington, North Carolina) in the postmarketing period. Methods Between March 2005 and March 2010, 9,152 patients underwent interbody fusion with the AxiaLIF System through an axial presacral approach. A single-level L5-S1 fusion was performed in 8,034 patients (88%), and a 2-level (L4-S1) fusion was used in 1,118 (12%). A predefined database was designed to record device- or procedure-related complaints via spontaneous reporting. The complications that were recorded included bowel injury, superficial wound and systemic infections, transient intraoperative hypotension, migration, subsidence, presacral hematoma, sacral fracture, vascular injury, nerve injury, and ureter injury. Results Complications were reported in 120 of 9,152 patients (1.3%). The most commonly reported complications were bowel injury (n = 59, 0.6%) and transient intraoperative hypotension (n = 20, 0.2%). The overall complication rate was similar between single-level (n = 102, 1.3%) and 2-level (n = 18, 1.6%) fusion procedures, with no significant differences noted for any single complication. 
Conclusions: The 5-year postmarketing surveillance experience with the AxiaLIF System suggests that axial interbody lumbar fusion through the presacral approach is associated with a low incidence of complications. The overall complication rates observed in our evaluation compare favorably with those reported in trials of open and minimally invasive lumbar fusion surgery. PMID:25802673

  4. High-resolution land cover classification using low resolution global data

    NASA Astrophysics Data System (ADS)

    Carlotto, Mark J.

    2013-05-01

    A fusion approach is described that combines texture features from high-resolution panchromatic imagery with land cover statistics derived from co-registered low-resolution global databases to obtain high-resolution land cover maps. The method does not require training data or any human intervention. We use an M×N Gabor filter bank consisting of M = 16 oriented bandpass filters (0-180°) at N resolutions (3-24 meters/pixel). The size range of these spatial filters is consistent with the typical scale of man-made objects and patterns of cultural activity in imagery. Clustering reduces the complexity of the data by combining pixels that have similar texture into clusters (regions). Texture classification assigns a vector of class likelihoods to each cluster based on its textural properties. Classification is unsupervised and accomplished using a bank of texture anomaly detectors. Class likelihoods are modulated by land cover statistics derived from lower resolution global data over the scene. Preliminary results from a number of Quickbird scenes show our approach is able to classify general land cover features such as roads, built up area, forests, open areas, and bodies of water over a wide range of scenes.
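
    The oriented bandpass filtering step can be sketched with a small Gabor filter bank. This is an illustrative NumPy sketch; the kernel size and wavelength choices below are assumptions, not the parameters used in the paper:

```python
import numpy as np

def gabor_kernel(size, theta, wavelength, sigma=None):
    # Real part of a Gabor kernel: a sinusoid at orientation `theta`
    # modulated by an isotropic Gaussian envelope.
    if sigma is None:
        sigma = 0.5 * wavelength
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)        # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def gabor_bank(orientations=16, scales=(3, 6, 12, 24)):
    # M oriented filters spanning 0-180 degrees at N scales; tying the
    # kernel size to the scale is an illustrative choice.
    thetas = np.linspace(0, np.pi, orientations, endpoint=False)
    return [gabor_kernel(4 * s + 1, t, wavelength=s)
            for t in thetas for s in scales]

bank = gabor_bank()   # 16 orientations x 4 scales = 64 filters
```

    Convolving an image with each kernel and taking local energy of the responses yields the per-pixel texture feature vector that the clustering stage would consume.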

  5. A Comparative Study of Landsat TM and SPOT HRG Images for Vegetation Classification in the Brazilian Amazon.

    PubMed

    Lu, Dengsheng; Batistella, Mateus; de Miranda, Evaristo E; Moran, Emilio

    2008-01-01

    Complex forest structure and abundant tree species in the moist tropical regions often cause difficulties in classifying vegetation classes with remotely sensed data. This paper explores improvement in vegetation classification accuracies through a comparative study of different image combinations based on the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data, as well as the combination of spectral signatures and textures. A maximum likelihood classifier was used to classify the different image combinations into thematic maps. This research indicated that data fusion based on HRG multispectral and panchromatic data slightly improved vegetation classification accuracies: a 3.1 to 4.6 percent increase in the kappa coefficient compared with the classification results based on original HRG or TM multispectral images. A combination of HRG spectral signatures and two textural images improved the kappa coefficient by 6.3 percent compared with pure HRG multispectral images. The textural images based on entropy or second-moment texture measures with a window size of 9 pixels × 9 pixels played an important role in improving vegetation classification accuracy. Overall, optical remote-sensing data are still insufficient for accurate vegetation classifications in the Amazon basin.
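
    The entropy and second-moment texture measures mentioned above are derived from the gray-level co-occurrence matrix (GLCM). A minimal sketch, assuming a single pixel offset and an input window with values in [0, 1):

```python
import numpy as np

def glcm(window, levels=8, offset=(0, 1)):
    # Gray-level co-occurrence matrix of a window whose values lie in
    # [0, 1), quantized to `levels` gray levels, for one pixel offset.
    q = np.minimum((window * levels).astype(int), levels - 1)
    dy, dx = offset
    P = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            P[q[i, j], q[i + dy, j + dx]] += 1
    return P / P.sum()

def texture_measures(P):
    # Entropy and angular second moment (ASM) of a normalized GLCM.
    nz = P[P > 0]
    entropy = -np.sum(nz * np.log2(nz))
    asm = np.sum(P ** 2)
    return entropy, asm
```

    In the paper these measures are computed over a 9 pixel × 9 pixel moving window and appended to the spectral bands as extra image layers.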

  6. A Comparative Study of Landsat TM and SPOT HRG Images for Vegetation Classification in the Brazilian Amazon

    PubMed Central

    Lu, Dengsheng; Batistella, Mateus; de Miranda, Evaristo E.; Moran, Emilio

    2009-01-01

    Complex forest structure and abundant tree species in the moist tropical regions often cause difficulties in classifying vegetation classes with remotely sensed data. This paper explores improvement in vegetation classification accuracies through a comparative study of different image combinations based on the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data, as well as the combination of spectral signatures and textures. A maximum likelihood classifier was used to classify the different image combinations into thematic maps. This research indicated that data fusion based on HRG multispectral and panchromatic data slightly improved vegetation classification accuracies: a 3.1 to 4.6 percent increase in the kappa coefficient compared with the classification results based on original HRG or TM multispectral images. A combination of HRG spectral signatures and two textural images improved the kappa coefficient by 6.3 percent compared with pure HRG multispectral images. The textural images based on entropy or second-moment texture measures with a window size of 9 pixels × 9 pixels played an important role in improving vegetation classification accuracy. Overall, optical remote-sensing data are still insufficient for accurate vegetation classifications in the Amazon basin. PMID:19789716

  7. Automatic segmentation of the prostate on CT images using deep learning and multi-atlas fusion

    NASA Astrophysics Data System (ADS)

    Ma, Ling; Guo, Rongrong; Zhang, Guoyi; Tade, Funmilayo; Schuster, David M.; Nieh, Peter; Master, Viraj; Fei, Baowei

    2017-02-01

    Automatic segmentation of the prostate on CT images has many applications in prostate cancer diagnosis and therapy. However, prostate CT image segmentation is challenging because of the low contrast of soft tissue on CT images. In this paper, we propose an automatic segmentation method that combines deep learning with multi-atlas refinement. First, instead of segmenting the whole image, we extract the region of interest (ROI) to exclude irrelevant regions. Then, we use a convolutional neural network (CNN) to learn deep features that distinguish prostate pixels from non-prostate pixels and obtain the preliminary segmentation results. Unlike handcrafted features, the CNN automatically learns deep features adapted to the data. Finally, we select similar atlases to refine the initial segmentation results. The proposed method has been evaluated on a dataset of 92 prostate CT images. Experimental results show that our method achieved a Dice similarity coefficient of 86.80% compared with manual segmentation. The deep learning based method can provide a useful tool for automatic segmentation of the prostate on CT images and thus can have a variety of clinical applications.
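
    The reported evaluation metric, the Dice similarity coefficient, is straightforward to compute from binary masks; a minimal sketch:

```python
import numpy as np

def dice(seg, ref):
    # Dice similarity coefficient between two binary masks:
    # 2|A ∩ B| / (|A| + |B|).
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum())
```

    A value of 1.0 means perfect overlap with the manual segmentation; 0.8680 corresponds to the accuracy reported above.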

  8. Adaptive Electronic Camouflage Using Texture Synthesis

    DTIC Science & Technology

    2012-04-01

    The algorithm begins by computing the GLCMs, G_IN and G_OUT, of the input image (e.g., an image of the local environment) and the output image (randomly generated), respectively. The algorithm randomly selects a pixel from the output image and cycles its gray level through all values. For each value, G_OUT is updated. The value of the selected pixel is permanently changed to the gray-level value that minimizes the error between G_IN and G_OUT. Without selecting a ...

  9. Fusion-nonfusion hybrid construct versus anterior cervical hybrid decompression and fusion: a comparative study for 3-level cervical degenerative disc diseases.

    PubMed

    Ding, Fan; Jia, Zhiwei; Wu, Yaohong; Li, Chao; He, Qing; Ruan, Dike

    2014-11-01

    A retrospective analysis. This study aimed to compare the safety and efficacy between the fusion-nonfusion hybrid construct (HC: anterior cervical corpectomy and fusion plus artificial disc replacement, ACCF plus cADR) and anterior cervical hybrid decompression and fusion (ACHDF: anterior cervical corpectomy and fusion plus discectomy and fusion, ACCF plus ACDF) for 3-level cervical degenerative disc diseases (cDDD). The optimal anterior technique for 3-level cDDD remains uncertain. Long-segment fusion substantially induces biomechanical changes at adjacent levels, which may lead to symptomatic adjacent segment degeneration. Hybrid surgery consisting of ACDF and cADR has been reported with good results for 2-level cDDD. In this context, ACCF combined with cADR may be an alternative to ACHDF for 3-level cDDD. Between 2009 and 2012, 28 patients with 3-level cDDD who underwent HC (n = 13) or ACHDF (n = 15) were retrospectively reviewed. Clinical assessments were based on the Neck Disability Index, Japanese Orthopedic Association disability scale, visual analogue scale, Japanese Orthopedic Association recovery rate, and Odom criteria. Radiological analysis included range of motion of C2-C7 and adjacent segments and cervical lordosis. Perioperative parameters, radiological adjacent-level changes, and complications were also assessed. HC showed better Neck Disability Index improvement at 12 and 24 months, as well as Japanese Orthopedic Association and visual analogue scale improvement at 24 months postoperatively (P<0.05). HC had better outcomes according to the Odom criteria, though not significantly (P>0.05). The range of motion of C2-C7 and adjacent segments was less compromised in HC (P<0.05). Both groups showed significant lordosis recovery postoperatively (P<0.05), but no difference was found between groups (P>0.05). The incidence of adjacent-level degenerative changes and complications was higher in ACHDF, though not significantly (P>0.05).
HC may be an alternative to ACHDF for 3-level cDDD due to the equivalent or superior early clinical outcomes, less compromised C2-C7 range of motion, and less impact at adjacent levels. Level of Evidence: 3.

  10. A Bio-Inspired Herbal Tea Flavour Assessment Technique

    PubMed Central

    Zakaria, Nur Zawatil Isqi; Masnan, Maz Jamilah; Zakaria, Ammar; Shakaff, Ali Yeon Md

    2014-01-01

    Herbal-based products are becoming a widespread production trend among manufacturers for the domestic and international markets. As production increases to meet market demand, it is crucial for the manufacturer to ensure that their products meet specific criteria and fulfil the intended quality determined by the quality controller. One famous herbal-based product is herbal tea. This paper investigates bio-inspired flavour assessments in a data fusion framework involving an e-nose and e-tongue. The objectives are to attain good classification of different types and brands of herbal tea, classification of different flavour masking effects and finally classification of different concentrations of herbal tea. Two data fusion levels were employed in this research: low-level data fusion and intermediate-level data fusion. Four classification approaches (LDA, SVM, KNN and PNN) were examined in search of the best classifier to achieve the research objectives. In order to evaluate the classifiers' performance, error estimators based on k-fold cross-validation and leave-one-out were applied. Classification based on GC-MS TIC data was also included as a comparison to the classification performance using fusion approaches. Generally, KNN outperformed the other classification techniques for the three flavour assessments in both low-level and intermediate-level data fusion. However, the classification results based on GC-MS TIC data varied. PMID:25010697
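
    Low-level data fusion here amounts to concatenating the raw (per-sensor standardized) e-nose and e-tongue measurement vectors before classification. A hedged sketch of that step together with leave-one-out error estimation for a k-NN classifier; the array shapes and normalization choice are illustrative assumptions:

```python
import numpy as np

def low_level_fusion(e_nose, e_tongue):
    # Low-level fusion: per-sensor standardization, then simple
    # concatenation of the raw measurement vectors.
    def standardize(X):
        return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    return np.hstack([standardize(e_nose), standardize(e_tongue)])

def knn_loo_accuracy(X, y, k=3):
    # Leave-one-out error estimation for a k-nearest-neighbour
    # classifier with majority voting.
    correct = 0
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                      # leave sample i out
        neighbours = np.argsort(d)[:k]
        correct += np.bincount(y[neighbours]).argmax() == y[i]
    return correct / len(X)
```

    Intermediate-level fusion would instead extract features per sensor first and concatenate those; the classifier and error estimator are unchanged.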

  11. Medical image registration based on normalized multidimensional mutual information

    NASA Astrophysics Data System (ADS)

    Li, Qi; Ji, Hongbing; Tong, Ming

    2009-10-01

    Registration of medical images is an essential research topic in medical image processing and applications, and especially a preliminary and key step for multimodality image fusion. This paper offers a solution to medical image registration based on normalized multi-dimensional mutual information. Firstly, affine transformation with translational and rotational parameters is applied to the floating image. Then ordinal features are extracted by ordinal filters with different orientations to represent spatial information in medical images. Integrating ordinal features with pixel intensities, the normalized multi-dimensional mutual information is defined as similarity criterion to register multimodality images. Finally the immune algorithm is used to search registration parameters. The experimental results demonstrate the effectiveness of the proposed registration scheme.
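
    Mutual-information criteria of this kind are typically estimated from a joint intensity histogram. The sketch below uses one common normalized form, NMI = (H(A) + H(B)) / H(A, B); the paper's multidimensional variant additionally stacks ordinal-feature channels, which is not reproduced here:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    # NMI = (H(A) + H(B)) / H(A, B), estimated from the joint
    # intensity histogram; equals 2 for identical images and
    # approaches 1 for independent ones.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    pa, pb = p.sum(axis=1), p.sum(axis=0)
    def H(q):
        nz = q[q > 0]
        return -np.sum(nz * np.log(nz))
    return (H(pa) + H(pb)) / H(p)
```

    Registration then searches the affine parameters (here, via the immune algorithm) for the transform that maximizes this criterion.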

  12. A Survey of Plasmas and Their Applications

    NASA Technical Reports Server (NTRS)

    Eastman, Timothy E.; Grabbe, C. (Editor)

    2006-01-01

    Plasmas are everywhere and relevant to everyone. We bathe in a sea of photons, quanta of electromagnetic radiation, whose sources (natural and artificial) are dominantly plasma-based (stars, fluorescent lights, arc lamps, ...). Plasma surface modification and materials processing contribute increasingly to a wide array of modern artifacts; e.g., tiny plasma discharge elements constitute the pixel arrays of plasma televisions, and plasma processing provides roughly one-third of the steps to produce semiconductors, essential elements of our networking and computing infrastructure. Finally, plasmas are central to many cutting-edge technologies with high potential (compact high-energy particle accelerators; plasma-enhanced waste processors; high-tolerance surface preparation and multifuel preprocessors for transportation systems; fusion for energy production).

  13. A dynamic fuzzy genetic algorithm for natural image segmentation using adaptive mean shift

    NASA Astrophysics Data System (ADS)

    Arfan Jaffar, M.

    2017-01-01

    In this paper, a colour image segmentation approach based on hybridisation of adaptive mean shift (AMS), fuzzy c-means and genetic algorithms (GAs) is presented. Image segmentation is the perceptual grouping of pixels based on some likeness measure. A GA with fuzzy behaviour is adapted to maximise the fuzzy separation and minimise the global compactness among the clusters or segments in spatial fuzzy c-means (sFCM). It adds diversity to the search process to find the global optima. A simple fusion method has been used to combine the clusters to overcome the problem of over-segmentation. The results show that our technique outperforms state-of-the-art methods.
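
    The spatial fuzzy c-means component can be illustrated with the plain (non-spatial) fuzzy c-means iteration, alternating membership and centroid updates; the spatial term and the GA search are omitted in this sketch:

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=50, seed=0):
    # Plain fuzzy c-means: alternate centroid and membership updates
    # that minimize the fuzzy within-cluster compactness objective.
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1))).sum(axis=2)
    return U, centers
```

    In the paper's hybrid, the GA perturbs cluster configurations around this update to escape local optima, and a fusion step merges over-segmented clusters afterwards.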

  14. Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification.

    PubMed

    Rajagopal, Gayathri; Palaniswamy, Ramamoorthy

    2015-01-01

    This research proposes a multimodal multifeature biometric system for human recognition using two traits, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal and multifeature biometric system using feature-level fusion to achieve better performance. The main aim of the proposed system is to increase the recognition accuracy using feature-level fusion. The features used in feature-level fusion are raw biometric data, which contain rich information compared with decision- and matching-score-level fusion. Hence, information fused at the feature level is expected to yield improved recognition accuracy. However, information fused at the feature level suffers from the curse of dimensionality; here, PCA (principal component analysis) is used to reduce the dimensionality of the high-dimensional feature sets. The proposed multimodal results were compared with other multimodal and monomodal approaches. Among these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm is tested using a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database.
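
    Feature-level fusion followed by PCA reduction can be sketched as concatenation of the per-trait vectors and projection onto the top principal components. The feature dimensions below are placeholders, not those of the palmprint or iris features used in the study:

```python
import numpy as np

def pca_reduce(F, k):
    # Project centered feature vectors onto the top-k principal
    # components (right singular vectors of the centered data).
    Xc = F - F.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Feature-level fusion: concatenate per-trait vectors, then reduce.
rng = np.random.default_rng(0)
palm = rng.random((50, 40))   # hypothetical palmprint feature vectors
iris = rng.random((50, 60))   # hypothetical iris feature vectors
fused = np.hstack([palm, iris])
Z = pca_reduce(fused, k=5)
```

    The reduced vectors Z would then feed the KNN classifier in place of the raw concatenation.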

  15. Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification

    PubMed Central

    Rajagopal, Gayathri; Palaniswamy, Ramamoorthy

    2015-01-01

    This research proposes a multimodal multifeature biometric system for human recognition using two traits, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal and multifeature biometric system using feature-level fusion to achieve better performance. The main aim of the proposed system is to increase the recognition accuracy using feature-level fusion. The features used in feature-level fusion are raw biometric data, which contain rich information compared with decision- and matching-score-level fusion. Hence, information fused at the feature level is expected to yield improved recognition accuracy. However, information fused at the feature level suffers from the curse of dimensionality; here, PCA (principal component analysis) is used to reduce the dimensionality of the high-dimensional feature sets. The proposed multimodal results were compared with other multimodal and monomodal approaches. Among these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm is tested using a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database. PMID:26640813

  16. A Self-Adaptive Dynamic Recognition Model for Fatigue Driving Based on Multi-Source Information and Two Levels of Fusion

    PubMed Central

    Sun, Wei; Zhang, Xiaorui; Peeta, Srinivas; He, Xiaozheng; Li, Yongfu; Zhu, Senlai

    2015-01-01

    To improve the effectiveness and robustness of fatigue driving recognition, a self-adaptive dynamic recognition model is proposed that incorporates information from multiple sources and involves two sequential levels of fusion, constructed at the feature level and the decision level. Compared with existing models, the proposed model introduces a dynamic basic probability assignment (BPA) to the decision-level fusion such that the weight of each feature source can change dynamically with the real-time fatigue feature measurements. Further, the proposed model can combine the fatigue state at the previous time step in the decision-level fusion to improve the robustness of the fatigue driving recognition. An improved correction strategy of the BPA is also proposed to accommodate the decision conflict caused by external disturbances. Results from field experiments demonstrate that the effectiveness and robustness of the proposed model are better than those of models based on a single fatigue feature and/or single-source information fusion, especially when the most effective fatigue features are used in the proposed model. PMID:26393615
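
    Decision-level fusion with basic probability assignments is conventionally done with Dempster's rule of combination. The sketch below shows only that base combination step (the paper's dynamic BPA weighting and conflict-correction strategy are not reproduced), over a hypothetical fatigued/alert frame of discernment with made-up masses:

```python
def dempster_combine(m1, m2):
    # Dempster's rule of combination for two basic probability
    # assignments (BPAs) over subsets of a frame of discernment.
    # Mass assigned to non-intersecting focal elements is conflict,
    # and the remaining masses are renormalized by (1 - conflict).
    combined, conflict = {}, 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + a * b
            else:
                conflict += a * b
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical two-state frame {fatigued, alert}; masses are made up.
m_eye = {frozenset({'fatigued'}): 0.7,
         frozenset({'fatigued', 'alert'}): 0.3}
m_lane = {frozenset({'fatigued'}): 0.6,
          frozenset({'alert'}): 0.1,
          frozenset({'fatigued', 'alert'}): 0.3}
fused = dempster_combine(m_eye, m_lane)
```

    Making the input masses functions of the real-time feature measurements is what the paper's dynamic BPA adds on top of this rule.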

  17. Sensor fusion II: Human and machine strategies; Proceedings of the Meeting, Philadelphia, PA, Nov. 6-9, 1989

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1990-01-01

    Various papers on human and machine strategies in sensor fusion are presented. The general topics addressed include: active vision, measurement and analysis of visual motion, decision models for sensor fusion, implementation of sensor fusion algorithms, applying sensor fusion to image analysis, perceptual modules and their fusion, perceptual organization and object recognition, planning and the integration of high-level knowledge with perception, using prior knowledge and context in sensor fusion.

  18. Saliency-Guided Change Detection of Remotely Sensed Images Using Random Forest

    NASA Astrophysics Data System (ADS)

    Feng, W.; Sui, H.; Chen, X.

    2018-04-01

    Studies based on object-based image analysis (OBIA), representing a paradigm shift in change detection (CD), have achieved remarkable progress in the last decade, with the aim of developing more intelligent interpretation and analysis methods. The prediction accuracy and performance stability of random forest (RF), a relatively new machine learning algorithm, are better than those of many single predictors and integrated forecasting methods. In this paper, we present a novel CD approach for high-resolution remote sensing images, which incorporates visual saliency and RF. First, highly homogeneous and compact image super-pixels are generated using super-pixel segmentation, and the optimal segmentation result is obtained through image superimposition and principal component analysis (PCA). Second, saliency detection is used to guide the search of interest regions in the initial difference image obtained via the improved robust change vector analysis (RCVA) algorithm. The salient regions within the difference image that correspond to the binarized saliency map are extracted, and the regions are subject to fuzzy c-means (FCM) clustering to obtain the pixel-level pre-classification result, which can be used as a prerequisite for superpixel-based analysis. Third, on the basis of the optimal segmentation and pixel-level pre-classification results, the change possibility of each super-pixel is calculated. Furthermore, the changed and unchanged super-pixels that serve as the training samples are automatically selected. The spectral features and Gabor features of each super-pixel are extracted. Finally, superpixel-based CD is implemented by applying RF based on these samples. Experimental results on Ziyuan 3 (ZY3) multi-spectral images show that the proposed method outperforms the compared methods in the accuracy of CD, and also confirm the feasibility and effectiveness of the proposed approach.
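
    The robust change vector analysis (RCVA) step builds on the per-pixel change-vector magnitude. A simplified sketch in which robustness to mis-registration is approximated by taking the minimum magnitude over a small neighbourhood of shifts; the published RCVA is more elaborate:

```python
import numpy as np

def cva_magnitude(t1, t2):
    # Per-pixel change-vector magnitude across spectral bands.
    diff = t2.astype(float) - t1.astype(float)
    return np.sqrt((diff ** 2).sum(axis=-1))

def rcva_magnitude(t1, t2, r=1):
    # Simplified robust CVA: minimum magnitude over pixel shifts
    # within a (2r+1) x (2r+1) neighbourhood, so small
    # mis-registrations are not flagged as change.
    best = np.full(t1.shape[:2], np.inf)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            shifted = np.roll(t2, (dy, dx), axis=(0, 1))
            best = np.minimum(best, cva_magnitude(t1, shifted))
    return best
```

    Thresholding or FCM-clustering this magnitude image yields the pixel-level pre-classification that seeds the superpixel stage.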

  19. Hyperspectral target detection using heavy-tailed distributions

    NASA Astrophysics Data System (ADS)

    Willis, Chris J.

    2009-09-01

    One promising approach to target detection in hyperspectral imagery exploits a statistical mixture model to represent scene content at a pixel level. The process then goes on to look for pixels which are rare, when judged against the model, and marks them as anomalies. It is assumed that military targets will themselves be rare and therefore likely to be detected amongst these anomalies. For the typical assumption of multivariate Gaussianity for the mixture components, the presence of the anomalous pixels within the training data will have a deleterious effect on the quality of the model. In particular, the derivation process itself is adversely affected by the attempt to accommodate the anomalies within the mixture components. This will bias the statistics of at least some of the components away from their true values and towards the anomalies. In many cases this will result in a reduction in the detection performance and an increased false alarm rate. This paper considers the use of heavy-tailed statistical distributions within the mixture model. Such distributions are better able to account for anomalies in the training data within the tails of their distributions, and the balance of the pixels within their central masses. This means that an improved model of the majority of the pixels in the scene may be produced, ultimately leading to a better anomaly detection result. The anomaly detection techniques are examined using both synthetic data and hyperspectral imagery with injected anomalous pixels. A range of results is presented for the baseline Gaussian mixture model and for models accommodating heavy-tailed distributions, for different parameterizations of the algorithms. These include scene understanding results, anomalous pixel maps at given significance levels and Receiver Operating Characteristic curves.
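
    The motivation for heavy-tailed components can be seen by comparing log densities: a multivariate Student-t assigns far more probability to outlying pixels than a Gaussian with the same location and scale, so anomalies distort a fitted component less. A sketch (the choice ν = 3 is an arbitrary assumption):

```python
import numpy as np
from math import lgamma

def gauss_logpdf(x, mu, cov):
    # Log density of a multivariate Gaussian.
    d = len(mu)
    diff = x - mu
    maha = diff @ np.linalg.inv(cov) @ diff
    return -0.5 * (maha + d * np.log(2 * np.pi)
                   + np.log(np.linalg.det(cov)))

def student_t_logpdf(x, mu, cov, nu=3.0):
    # Log density of a multivariate Student-t: the log1p term grows
    # only logarithmically in the Mahalanobis distance, giving the
    # heavy tails that absorb anomalous pixels.
    d = len(mu)
    diff = x - mu
    maha = diff @ np.linalg.inv(cov) @ diff
    return (lgamma((nu + d) / 2) - lgamma(nu / 2)
            - 0.5 * d * np.log(nu * np.pi)
            - 0.5 * np.log(np.linalg.det(cov))
            - 0.5 * (nu + d) * np.log1p(maha / nu))
```

    In a mixture, components with this tail behaviour keep their statistics close to the bulk of the scene pixels, which is the effect the paper exploits.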

  20. Pseudo 2-transistor active pixel sensor using an n-well/gate-tied p-channel metal oxide semiconductor field effect transistor-type photodetector with built-in transfer gate

    NASA Astrophysics Data System (ADS)

    Seo, Sang-Ho; Seo, Min-Woong; Kong, Jae-Sung; Shin, Jang-Kyoo; Choi, Pyung

    2008-11-01

    In this paper, a pseudo 2-transistor active pixel sensor (APS) has been designed and fabricated by using an n-well/gate-tied p-channel metal oxide semiconductor field effect transistor (PMOSFET)-type photodetector with built-in transfer gate. The proposed sensor has been fabricated using a 0.35 μm 2-poly 4-metal standard complementary metal oxide semiconductor (CMOS) logic process. The pseudo 2-transistor APS consists of two NMOSFETs and one photodetector which can amplify the generated photocurrent. The area of the pseudo 2-transistor APS is 7.1 × 6.2 μm². The sensitivity of the proposed pixel is 49 lux/(V·s). By using this pixel, a smaller pixel area and a higher level of sensitivity can be realized when compared with a conventional 3-transistor APS which uses a pn junction photodiode.

  1. The FE-I4 Pixel Readout Chip and the IBL Module

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbero, Marlon; Arutinov, David; Backhaus, Malte

    2012-05-01

    FE-I4 is the new ATLAS pixel readout chip for the upgraded ATLAS pixel detector. Designed in a CMOS 130 nm feature size process, the IC is able to withstand higher radiation levels compared to the present generation of ATLAS pixel Front-End FE-I3, and can also cope with a higher hit rate. It is thus suitable for intermediate radii pixel detector layers in the High Luminosity LHC environment, but also for the inserted layer at 3.3 cm known as the 'Insertable B-Layer' project (IBL), at a shorter timescale. In this paper, an introduction to the FE-I4 will be given, focusing on test results from the first full size FE-I4A prototype, which has been available since fall 2010. The IBL project will be introduced, with particular emphasis on the FE-I4-based module concept.

  2. X-ray analog pixel array detector for single synchrotron bunch time-resolved imaging.

    PubMed

    Koerner, Lucas J; Gruner, Sol M

    2011-03-01

    Dynamic X-ray studies can reach temporal resolutions limited by only the X-ray pulse duration if the detector is fast enough to segregate synchrotron pulses. An analog integrating pixel array detector with in-pixel storage and temporal resolution of around 150 ns, sufficient to isolate pulses, is presented. Analog integration minimizes count-rate limitations and in-pixel storage captures successive pulses. Fundamental tests of noise and linearity as well as high-speed laser measurements are shown. The detector resolved individual bunch trains at the Cornell High Energy Synchrotron Source at levels of up to 3.7 × 10(3) X-rays per pixel per train. When applied to turn-by-turn X-ray beam characterization, single-shot intensity measurements were made with a repeatability of 0.4% and horizontal oscillations of the positron cloud were detected.

  3. X-ray analog pixel array detector for single synchrotron bunch time-resolved imaging

    PubMed Central

    Koerner, Lucas J.; Gruner, Sol M.

    2011-01-01

    Dynamic X-ray studies can reach temporal resolutions limited by only the X-ray pulse duration if the detector is fast enough to segregate synchrotron pulses. An analog integrating pixel array detector with in-pixel storage and temporal resolution of around 150 ns, sufficient to isolate pulses, is presented. Analog integration minimizes count-rate limitations and in-pixel storage captures successive pulses. Fundamental tests of noise and linearity as well as high-speed laser measurements are shown. The detector resolved individual bunch trains at the Cornell High Energy Synchrotron Source at levels of up to 3.7 × 103 X-rays per pixel per train. When applied to turn-by-turn X-ray beam characterization, single-shot intensity measurements were made with a repeatability of 0.4% and horizontal oscillations of the positron cloud were detected. PMID:21335901

  4. An Adaptive Multi-Sensor Data Fusion Method Based on Deep Convolutional Neural Networks for Fault Diagnosis of Planetary Gearbox

    PubMed Central

    Jing, Luyang; Wang, Taiyong; Zhao, Ming; Wang, Peng

    2017-01-01

    A fault diagnosis approach based on multi-sensor data fusion is a promising tool to deal with complicated damage detection problems of mechanical systems. Nevertheless, this approach suffers from two challenges, which are (1) the feature extraction from various types of sensory data and (2) the selection of a suitable fusion level. It is usually difficult to choose an optimal feature or fusion level for a specific fault diagnosis task, and extensive domain expertise and human labor are also highly required during these selections. To address these two challenges, we propose an adaptive multi-sensor data fusion method based on deep convolutional neural networks (DCNN) for fault diagnosis. The proposed method can learn features from raw data and optimize a combination of different fusion levels adaptively to satisfy the requirements of any fault diagnosis task. The proposed method is tested through a planetary gearbox test rig. Handcraft features, manual-selected fusion levels, single sensory data, and two traditional intelligent models, back-propagation neural networks (BPNN) and a support vector machine (SVM), are used as comparisons in the experiment. The results demonstrate that the proposed method is able to detect the conditions of the planetary gearbox effectively with the best diagnosis accuracy among all comparative methods in the experiment. PMID:28230767

  5. Small target detection using bilateral filter and temporal cross product in infrared images

    NASA Astrophysics Data System (ADS)

    Bae, Tae-Wuk

    2011-09-01

    We introduce a spatial and temporal target detection method using a spatial bilateral filter (BF) and the temporal cross product (TCP) of temporal pixels in infrared (IR) image sequences. First, the TCP is presented to extract the characteristics of temporal pixels by using the temporal profile at the spatial coordinates of each pixel. The TCP represents the cross product values formed by the gray-level distance vector of a current temporal pixel and the adjacent temporal pixel, as well as the horizontal distance vector of the current temporal pixel and a temporal pixel corresponding to the potential target center. The summation of TCP values of temporal pixels at each spatial coordinate yields the temporal target image (TTI), which represents the temporal target information at that coordinate. Then the proposed BF is used to extract the spatial target information. In order to predict the background without targets, the proposed BF uses standard deviations obtained by an exponential mapping of the TCP value corresponding to the coordinate of the pixel being processed spatially. The spatial target image (STI) is made by subtracting the predicted image from the original image. Thus, the spatial and temporal target image (STTI) is obtained by multiplying the STI and the TTI, and targets are finally detected in the STTI. In the experiments, receiver operating characteristic (ROC) curves were computed to compare objective performance. The results show that the proposed algorithm gives better discrimination of targets from clutter and lower false-alarm rates than existing target detection methods.
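
    The background-prediction role of the BF can be illustrated with a plain bilateral filter, which averages jointly over space and intensity and so smooths backgrounds while preserving edges; the paper's version additionally modulates the range standard deviation per pixel via the TCP, which this sketch omits:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=1.5, sigma_r=0.1):
    # Plain bilateral filter: each output pixel is a mean of its
    # neighbourhood weighted by Gaussian kernels in both spatial
    # distance and intensity difference.
    h, w = img.shape
    out = np.zeros((h, w))
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    pad = np.pad(img.astype(float), radius, mode='edge')
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            weights = spatial * rng_w
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out
```

    Subtracting this predicted background from the original frame gives the spatial target image described above.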

  6. Landcover classification in MRF context using Dempster-Shafer fusion for multisensor imagery.

    PubMed

    Sarkar, Anjan; Banerjee, Anjan; Banerjee, Nilanjan; Brahma, Siddhartha; Kartikeyan, B; Chakraborty, Manab; Majumder, K L

    2005-05-01

    This work deals with multisensor data fusion to obtain landcover classification. The role of feature-level fusion using the Dempster-Shafer rule and that of data-level fusion in the MRF context is studied in this paper to obtain an optimally segmented image. Subsequently, segments are validated and classification accuracy for the test data is evaluated. Two examples of data fusion of optical images and a synthetic aperture radar image are presented, each set having been acquired on different dates. Classification accuracies of the technique proposed are compared with those of some recent techniques in literature for the same image data.

  7. Return to Work and Multilevel Versus Single-Level Cervical Fusion for Radiculopathy in a Workers' Compensation Setting.

    PubMed

    Faour, Mhamad; Anderson, Joshua T; Haas, Arnold R; Percy, Rick; Woods, Stephen T; Ahn, Uri M; Ahn, Nicholas U

    2017-01-15

    Retrospective comparative cohort study. The objective was to examine the impact of multilevel fusion on return to work (RTW) status and to compare RTW status after multi- versus single-level cervical fusion for patients with work-related injury. Patients with work-related injuries in workers' compensation systems have less favorable surgical outcomes. Cervical fusion provides a greater than 90% likelihood of relieving radiculopathy and stabilizing or improving myelopathy. However, more levels fused at the index surgery are reportedly associated with poorer surgical outcomes than single-level fusion. Data were collected from the Ohio Bureau of Workers' Compensation (BWC) between 1993 and 2011. The study population included patients who underwent cervical fusion for radiculopathy. Two groups were constructed (multilevel fusion [MLF] vs. single-level fusion [SLF]). Outcome measures evaluated were: RTW criteria, RTW within 1 year, reoperation, surgical complication, disability, and legal litigation after surgery. After accounting for a number of independent variables in the regression model, multilevel fusion was a negative predictor of successful RTW status within 3-year follow-up after surgery (OR = 0.82, 95% CI: 0.70-0.95, P < 0.05). RTW criteria were met by 62.9% of the SLF group compared with 54.8% of the MLF group. The odds of having a stable RTW for MLF patients were 0.71 times those of SLF patients (95% CI: 0.61-0.83; P = 0.0001). At 1 year after surgery, the RTW rate was 53.1% for the SLF group compared with 43.7% for the MLF group. The odds of RTW within 1 year after surgery for the MLF group were 0.69 times those of SLF patients (95% CI: 0.59-0.80; P = 0.0001). A higher rate of disability after surgery was observed in the MLF group compared with the SLF group (P = 0.0001). In conclusion, multilevel cervical fusion for radiculopathy was associated with a poor return-to-work profile after surgery: lower RTW rates, a lower likelihood of achieving stable RTW, and a higher rate of disability. Level of evidence: 3.

  8. Multiscale infrared and visible image fusion using gradient domain guided image filtering

    NASA Astrophysics Data System (ADS)

    Zhu, Jin; Jin, Weiqi; Li, Li; Han, Zhenghao; Wang, Xia

    2018-03-01

    For better surveillance with infrared and visible imaging, a novel hybrid multiscale decomposition fusion method using gradient domain guided image filtering (HMSD-GDGF) is proposed in this study. In this method, hybrid multiscale decomposition with guided image filtering and gradient domain guided image filtering is first applied to the source images; the weight maps at each scale are then obtained using saliency detection and filtering, with three different fusion rules at different scales. The three fusion rules are for the small-scale detail level, the large-scale detail level, and the base level. As a result, the target becomes more salient and can be more easily detected in the fusion result, with the detail information of the scene fully displayed. Experimental comparisons with state-of-the-art fusion methods show that the HMSD-GDGF method has obvious advantages in fidelity of salient information (including structural similarity, brightness, and contrast), preservation of edge features, and human visual perception. Visual effects can therefore be improved by using the proposed HMSD-GDGF method.
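
    The base/detail decomposition with saliency-based weight maps can be illustrated with a deliberately simplified two-scale sketch. A plain mean filter stands in for the gradient domain guided image filter, and a binary detail-magnitude comparison stands in for the paper's three scale-dependent fusion rules:

```python
import numpy as np

def box_blur(img, r=3):
    """Mean filter, standing in here for the (gradient domain) guided
    image filter of the HMSD-GDGF method."""
    p = np.pad(img, r, mode='edge')
    k = 2 * r + 1
    return sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(k) for dx in range(k)) / (k * k)

def two_scale_fuse(ir, vis):
    """Decompose each source into base + detail, pick detail pixels by
    local saliency (absolute detail magnitude), and average the bases."""
    ir, vis = ir.astype(float), vis.astype(float)
    base_ir, base_vis = box_blur(ir), box_blur(vis)
    det_ir, det_vis = ir - base_ir, vis - base_vis
    # Weight map: 1 where the IR detail is stronger, 0 otherwise.
    w = (np.abs(det_ir) >= np.abs(det_vis)).astype(float)
    return 0.5 * (base_ir + base_vis) + w * det_ir + (1 - w) * det_vis
```

    With a flat visible image and a single hot spot in the IR image, the fused result keeps the hot spot as its brightest pixel, which is the qualitative behavior the paper aims for.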

  9. Minimally invasive versus open fusion for Grade I degenerative lumbar spondylolisthesis: analysis of the Quality Outcomes Database.

    PubMed

    Mummaneni, Praveen V; Bisson, Erica F; Kerezoudis, Panagiotis; Glassman, Steven; Foley, Kevin; Slotkin, Jonathan R; Potts, Eric; Shaffrey, Mark; Shaffrey, Christopher I; Coric, Domagoj; Knightly, John; Park, Paul; Fu, Kai-Ming; Devin, Clinton J; Chotai, Silky; Chan, Andrew K; Virk, Michael; Asher, Anthony L; Bydon, Mohamad

    2017-08-01

    OBJECTIVE Lumbar spondylolisthesis is a degenerative condition that can be surgically treated with either open or minimally invasive decompression and instrumented fusion. Minimally invasive surgery (MIS) approaches may shorten recovery, reduce blood loss, and minimize soft-tissue damage with resultant reduced postoperative pain and disability. METHODS The authors queried the national, multicenter Quality Outcomes Database (QOD) registry for patients undergoing posterior lumbar fusion between July 2014 and December 2015 for Grade I degenerative spondylolisthesis. The authors recorded baseline and 12-month patient-reported outcomes (PROs), including Oswestry Disability Index (ODI), EQ-5D, numeric rating scale (NRS)-back pain (NRS-BP), NRS-leg pain (NRS-LP), and satisfaction (North American Spine Society satisfaction questionnaire). Multivariable regression models were fitted for hospital length of stay (LOS), 12-month PROs, and 90-day return to work, after adjusting for an array of preoperative and surgical variables. RESULTS A total of 345 patients (open surgery, n = 254; MIS, n = 91) from 11 participating sites were identified in the QOD. The follow-up rate at 12 months was 84% (83.5% [open surgery]; 85% [MIS]). Overall, baseline patient demographics, comorbidities, and clinical characteristics were similarly distributed between the cohorts. Two hundred fifty-seven patients underwent 1-level fusion (open surgery, n = 181; MIS, n = 76), and 88 patients underwent 2-level fusion (open surgery, n = 73; MIS, n = 15). Patients in both groups reported significant improvement in all primary outcomes (all p < 0.001). MIS was associated with a significantly lower mean intraoperative estimated blood loss and slightly longer operative times in both 1- and 2-level fusion subgroups. Although the LOS was shorter for MIS 1-level cases, the difference was not statistically significant.
No difference was detected with regard to the 12-month PROs between the 1-level MIS versus the 1-level open surgical groups. However, change in functional outcome scores for patients undergoing 2-level fusion was notably larger in the MIS cohort for ODI (-27 vs -16, p = 0.1), EQ-5D (0.27 vs 0.15, p = 0.08), and NRS-BP (-3.5 vs -2.7, p = 0.41); statistical significance was shown only for changes in NRS-LP scores (-4.9 vs -2.8, p = 0.02). On risk-adjusted analysis for 1-level fusion, open versus minimally invasive approach was not significant for 12-month PROs, LOS, and 90-day return to work. CONCLUSIONS Significant improvement was found in terms of all functional outcomes in patients undergoing open or MIS fusion for lumbar spondylolisthesis. No difference was detected between the 2 techniques for 1-level fusion in terms of patient-reported outcomes, LOS, and 90-day return to work. However, patients undergoing 2-level MIS fusion reported significantly better improvement in NRS-LP at 12 months than patients undergoing 2-level open surgery. Longer follow-up is needed to provide further insight into the comparative effectiveness of the 2 procedures.

  10. Multi-intelligence critical rating assessment of fusion techniques (MiCRAFT)

    NASA Astrophysics Data System (ADS)

    Blasch, Erik

    2015-06-01

    Assessment of multi-intelligence fusion techniques includes the credibility of algorithm performance, the quality of results against mission needs, and usability in a work-domain context. Situation awareness (SAW) brings together low-level information fusion (tracking and identification), high-level information fusion (threat and scenario-based assessment), and information fusion level 5 user refinement (physical, cognitive, and information tasks). To measure SAW, we discuss the Situational Awareness Global Assessment Technique (SAGAT) for a multi-intelligence fusion (MIF) system assessment that focuses on the advantages of MIF over single intelligence sources. Building on the NASA TLX (Task Load Index), SAGAT probes, SART (Situational Awareness Rating Technique) questionnaires, and CDM (Critical Decision Method) decision points, we highlight these tools for use in a Multi-Intelligence Critical Rating Assessment of Fusion Techniques (MiCRAFT). The focus is to measure user refinement of a situation over the information fusion quality of service (QoS) metrics: timeliness, accuracy, confidence, workload (cost), and attention (throughput). A key component of any user analysis includes correlation, association, and summarization of data, so we also seek measures of product quality and QuEST of information. Building a notion of product quality from multi-intelligence tools is typically subjective, and these subjective ratings need to be aligned with objective machine metrics.

  11. Enhanced bactericidal potency of nanoliposomes by modification of the fusion activity between liposomes and bacterium.

    PubMed

    Ma, Yufan; Wang, Zhao; Zhao, Wen; Lu, Tingli; Wang, Rutao; Mei, Qibing; Chen, Tao

    2013-01-01

    Pseudomonas aeruginosa represents a good model of antibiotic resistance. These organisms have an outer membrane with a low level of permeability to drugs that is often combined with multidrug efflux pumps, enzymatic inactivation of the drug, or alteration of its molecular target. The acute and growing problem of antibiotic resistance of Pseudomonas to conventional antibiotics made it imperative to develop new liposome formulations to overcome these mechanisms, and to investigate the fusion between liposome and bacterium. The rigidity, stability and charge properties of phospholipid vesicles were modified by varying the concentrations in liposomes of cholesterol, 1,2-dioleoyl-sn-glycero-3-phosphatidylethanolamine (DOPE), and the negatively charged lipids 1,2-dimyristoyl-sn-glycero-3-phosphoglycerol sodium salt (DMPG), 1,2-dimyristoyl-sn-glycero-3-phospho-L-serine sodium salt (DMPS), 1,2-dimyristoyl-sn-glycero-3-phosphate monosodium salt (DMPA), natural phosphatidylserine sodium salt from brain, and natural phosphatidylinositol sodium salt from soybean. Liposomal fusion with intact bacteria was monitored using a lipid-mixing assay. It was discovered that fluid liposome-bacterium fusion does not depend on liposomal size or lamellarity. A similar degree of fusion was observed for liposomes with particle sizes from 100 to 800 nm. The fluidity of liposomes is an essential prerequisite for liposome fusion with bacteria. Fusion was almost completely inhibited by the incorporation of cholesterol into fluid liposomes. An increase in the amount of negative charge in fluid liposomes reduces liposome-bacteria fusion when tested without calcium cations, due to electric repulsion, but the addition of calcium cations restores the fusion level of fluid liposomes to similar or higher levels. Among the negative phospholipids examined, DMPA gave the highest degree of fusion, DMPS and DMPG had intermediate fusion levels, and PI resulted in the lowest degree of fusion. Furthermore, fluid liposome-encapsulated tobramycin was prepared, and the bactericidal effect occurred more quickly when bacteria were cultured with liposome-encapsulated tobramycin. The bactericidal potency of fluid liposomes is dramatically enhanced with respect to fusion ability when the fusogenic lipid DOPE is included. Regardless of changes in liposome composition, fluid liposome-bacterium fusion is universally enhanced by calcium ions. The information obtained in this study will increase our understanding of fluid liposomal action mechanisms, and help in optimizing the new generation of fluid liposomal formulations for the treatment of pulmonary bacterial infections.

  12. Enhanced bactericidal potency of nanoliposomes by modification of the fusion activity between liposomes and bacterium

    PubMed Central

    Ma, Yufan; Wang, Zhao; Zhao, Wen; Lu, Tingli; Wang, Rutao; Mei, Qibing; Chen, Tao

    2013-01-01

    Background Pseudomonas aeruginosa represents a good model of antibiotic resistance. These organisms have an outer membrane with a low level of permeability to drugs that is often combined with multidrug efflux pumps, enzymatic inactivation of the drug, or alteration of its molecular target. The acute and growing problem of antibiotic resistance of Pseudomonas to conventional antibiotics made it imperative to develop new liposome formulations to overcome these mechanisms, and to investigate the fusion between liposome and bacterium. Methods The rigidity, stability and charge properties of phospholipid vesicles were modified by varying the concentrations in liposomes of cholesterol, 1,2-dioleoyl-sn-glycero-3-phosphatidylethanolamine (DOPE), and the negatively charged lipids 1,2-dimyristoyl-sn-glycero-3-phosphoglycerol sodium salt (DMPG), 1,2-dimyristoyl-sn-glycero-3-phospho-L-serine sodium salt (DMPS), 1,2-dimyristoyl-sn-glycero-3-phosphate monosodium salt (DMPA), natural phosphatidylserine sodium salt from brain, and natural phosphatidylinositol sodium salt from soybean. Liposomal fusion with intact bacteria was monitored using a lipid-mixing assay. Results It was discovered that fluid liposome-bacterium fusion does not depend on liposomal size or lamellarity. A similar degree of fusion was observed for liposomes with particle sizes from 100 to 800 nm. The fluidity of liposomes is an essential prerequisite for liposome fusion with bacteria. Fusion was almost completely inhibited by the incorporation of cholesterol into fluid liposomes. An increase in the amount of negative charge in fluid liposomes reduces liposome-bacteria fusion when tested without calcium cations, due to electric repulsion, but the addition of calcium cations restores the fusion level of fluid liposomes to similar or higher levels. Among the negative phospholipids examined, DMPA gave the highest degree of fusion, DMPS and DMPG had intermediate fusion levels, and PI resulted in the lowest degree of fusion. Furthermore, fluid liposome-encapsulated tobramycin was prepared, and the bactericidal effect occurred more quickly when bacteria were cultured with liposome-encapsulated tobramycin. Conclusion The bactericidal potency of fluid liposomes is dramatically enhanced with respect to fusion ability when the fusogenic lipid DOPE is included. Regardless of changes in liposome composition, fluid liposome-bacterium fusion is universally enhanced by calcium ions. The information obtained in this study will increase our understanding of fluid liposomal action mechanisms, and help in optimizing the new generation of fluid liposomal formulations for the treatment of pulmonary bacterial infections. PMID:23847417

  13. Multi-Temporal Multi-Sensor Analysis of Urbanization and Environmental/Climate Impact in China for Sustainable Urban Development

    NASA Astrophysics Data System (ADS)

    Ban, Yifang; Gong, Peng; Gamba, Paolo; Taubenbock, Hannes; Du, Peijun

    2016-08-01

    The overall objective of this research is to investigate multi-temporal, multi-scale, multi-sensor satellite data for the analysis of urbanization and environmental/climate impact in China to support sustainable planning. Multi-temporal, multi-scale SAR and optical data have been evaluated for urban information extraction using innovative methods and algorithms, including the KTH-Pavia Urban Extractor, Pavia UEXT, an "exclusion-inclusion" framework for urban extent extraction, and KTH-SEG, a novel object-based classification method for detailed urban land cover mapping. Various pixel-based and object-based change detection algorithms were also developed to extract urban changes. Several Chinese cities including Beijing, Shanghai and Guangzhou were selected as study areas. Spatio-temporal urbanization patterns and environmental impact at the regional, metropolitan and city-core scales were evaluated through ecosystem services, landscape metrics, spatial indices, and/or their combinations. The relationship between land surface temperature and land-cover classes was also analyzed. The urban extraction results showed that urban areas and small towns could be well extracted from multitemporal SAR data with the KTH-Pavia Urban Extractor and UEXT. The fusion of SAR data at multiple scales from multiple sensors was proven to improve urban extraction. For urban land cover mapping, the results show that the fusion of multitemporal SAR and optical data could produce detailed land cover maps with higher accuracy than SAR or optical data alone. The pixel-based and object-based change detection algorithms developed within the project were effective in extracting urban changes. Comparing the urban land cover results from multitemporal multisensor data, the environmental impact analysis indicates major losses for food supply, noise reduction, runoff mitigation, waste treatment and global climate regulation services through landscape structural changes, in terms of decreases in service area, edge contamination and fragmentation. In terms of climate impact, the results indicate that land surface temperature can be related to land use/land cover classes.

  14. GDP Spatialization and Economic Differences in South China Based on NPP-VIIRS Nighttime Light Imagery

    NASA Astrophysics Data System (ADS)

    Zhao, M.

    2017-12-01

    Accurate data on gross domestic product (GDP) at the pixel level are needed to understand the dynamics of regional economies. GDP spatialization is the basis of quantitative analysis of the economic diversity of different administrative divisions and of areas with different natural or humanistic attributes. Data from the Visible Infrared Imaging Radiometer Suite (VIIRS), carried by the Suomi National Polar-orbiting Partnership (NPP) satellite, are capable of estimating GDP, but few studies have been conducted for mapping GDP at the pixel level and further analyzing patterns of economic differences in different regions using the VIIRS data. This paper produced a pixel-level (500 m × 500 m) GDP map for South China in 2014 and quantitatively analyzed economic differences among diverse geomorphological types. Based on a regression analysis, the total nighttime light (TNL) of corrected VIIRS data was found to exhibit R2 values of 0.8935 and 0.9243 for prefecture GDP and county GDP, respectively. This demonstrated that TNL showed a more significant capability in reflecting economic status (R2 > 0.88) than other nighttime light indices (R2 < 0.52), and showed quadratic polynomial relationships with GDP, rather than simple linear correlations, at both the prefecture and county levels. The corrected NPP-VIIRS data showed a better fit than the original data, and the estimation at the county level was better than that at the prefecture level. The pixel-level GDP map indicated that: (a) economic development in coastal areas was higher than that in inland areas; (b) low-altitude plains were the most developed areas, followed by low-altitude platforms and low-altitude hills; and (c) economic development in middle-altitude areas and in low-altitude hills and mountains still needed to be strengthened.
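
    The quadratic TNL-GDP regression and its R2 can be reproduced in outline with numpy. The TNL and GDP values below are made up for illustration and are not the paper's data:

```python
import numpy as np

# Hypothetical TNL and GDP samples (arbitrary units), roughly quadratic.
tnl = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
gdp = np.array([2.1, 4.3, 7.9, 13.2, 19.8, 28.5])

coeffs = np.polyfit(tnl, gdp, 2)        # quadratic polynomial fit
pred = np.polyval(coeffs, tnl)
ss_res = ((gdp - pred) ** 2).sum()      # residual sum of squares
ss_tot = ((gdp - gdp.mean()) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot              # coefficient of determination
```

    Applied to the actual prefecture- or county-level totals, the same few lines would reproduce the kind of R2 comparison reported in the paper.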

  15. A versatile photogrammetric camera automatic calibration suite for multispectral fusion and optical helmet tracking

    NASA Astrophysics Data System (ADS)

    de Villiers, Jason; Jermy, Robert; Nicolls, Fred

    2014-06-01

    This paper presents a system to determine the photogrammetric parameters of a camera. The lens distortion, focal length and camera six degree of freedom (DOF) position are calculated. The system caters for cameras of different sensitivity spectra and fields of view without any mechanical modifications. The distortion characterization, a variant of Brown's classic plumb-line method, allows many radial and tangential distortion coefficients to be determined and finds the optimal principal point. Typical values are 5 radial and 3 tangential coefficients. These parameters are determined stably and demonstrably produce superior results to low-order models, despite popular and prevalent misconceptions to the contrary. The system produces coefficients to model both the distorted-to-undistorted pixel coordinate transformation (e.g. for target designation) and the inverse transformation (e.g. for image stitching and fusion), allowing deterministic rates far exceeding real time. The focal length is determined so as to minimise the error in absolute photogrammetric positional measurement for both multi-camera and monocular (e.g. helmet tracker) systems. The system determines the 6 DOF position of the camera in a chosen coordinate system. It can also determine the 6 DOF offset of the camera relative to its mechanical mount. This allows faulty cameras to be replaced without requiring a recalibration of the entire system (such as an aircraft cockpit). Results from two simple applications of the calibration results are presented: stitching and fusion of the images from a dual-band visual/LWIR camera array, and a simple laboratory optical helmet tracker.
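
    One common form of the Brown distortion model underlying this kind of characterization maps undistorted normalized coordinates to distorted ones through radial and tangential (decentering) polynomial terms. A minimal sketch, with arbitrary coefficient values rather than calibration results:

```python
def brown_distort(x, y, k, p, cx=0.0, cy=0.0):
    """Apply Brown's distortion model at one point: k is a list of
    radial coefficients (k1, k2, ...), p = (p1, p2) the tangential
    coefficients, and (cx, cy) the principal point."""
    xc, yc = x - cx, y - cy
    r2 = xc * xc + yc * yc
    # Radial factor: 1 + k1*r^2 + k2*r^4 + ...
    radial = 1.0 + sum(ki * r2 ** (i + 1) for i, ki in enumerate(k))
    # Tangential (decentering) terms.
    dx = 2 * p[0] * xc * yc + p[1] * (r2 + 2 * xc * xc)
    dy = p[0] * (r2 + 2 * yc * yc) + 2 * p[1] * xc * yc
    return cx + xc * radial + dx, cy + yc * radial + dy
```

    With all coefficients zero the mapping is the identity; a positive k1 pushes points radially outward, the pincushion case.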

  16. Robust foreground detection: a fusion of masked grey world, probabilistic gradient information and extended conditional random field approach.

    PubMed

    Zulkifley, Mohd Asyraf; Moran, Bill; Rawlinson, David

    2012-01-01

    Foreground detection has been used extensively in many applications such as people counting, traffic monitoring and face recognition. However, most of the existing detectors can only work under limited conditions. This happens because of the inability of the detector to distinguish foreground and background pixels, especially in complex situations. Our aim is to improve the robustness of foreground detection under sudden and gradual illumination change, colour similarity issue, moving background and shadow noise. Since it is hard to achieve robustness using a single model, we have combined several methods into an integrated system. The masked grey world algorithm is introduced to handle sudden illumination change. Colour co-occurrence modelling is then fused with the probabilistic edge-based background modelling. Colour co-occurrence modelling is good in filtering moving background and robust to gradual illumination change, while an edge-based modelling is used for solving a colour similarity problem. Finally, an extended conditional random field approach is used to filter out shadow and afterimage noise. Simulation results show that our algorithm performs better compared to the existing methods, which makes it suitable for higher-level applications.
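
    The grey world step can be sketched as follows. This is the plain, unmasked variant (the paper's contribution is a masked version, which is not reproduced here), assuming 8-bit intensities:

```python
import numpy as np

def grey_world(img):
    """Grey-world colour constancy: scale each channel so its mean
    matches the global mean, reducing the effect of a sudden
    illumination change on the background model."""
    img = img.astype(float)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means  # per-channel gain
    return np.clip(img * gains, 0, 255)
```

    After correction, a colour cast (say a strong blue channel) is pulled back toward the neutral average, so the downstream background subtraction sees a more stable scene.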

  17. [Glossary of terms used by radiologists in image processing].

    PubMed

    Rolland, Y; Collorec, R; Bruno, A; Ramée, A; Morcet, N; Haigron, P

    1995-01-01

    We give the definition of 166 words used in image processing. Adaptivity, aliasing, analog-digital converter, analysis, approximation, arc, artifact, artificial intelligence, attribute, autocorrelation, bandwidth, boundary, brightness, calibration, class, classification, classify, centre, cluster, coding, color, compression, contrast, connectivity, convolution, correlation, data base, decision, decomposition, deconvolution, deduction, descriptor, detection, digitization, dilation, discontinuity, discretization, discrimination, disparity, display, distance, distortion, distribution, dynamic, edge, energy, enhancement, entropy, erosion, estimation, event, extrapolation, feature, file, filter, filter floaters, fitting, Fourier transform, frequency, fusion, fuzzy, Gaussian, gradient, graph, gray level, group, growing, histogram, Hough transform, Hounsfield, image, impulse response, inertia, intensity, interpolation, interpretation, invariance, isotropy, iterative, JPEG, knowledge base, label, Laplacian, learning, least squares, likelihood, matching, Markov field, mask, mathematical morphology, merge (to), MIP, median, minimization, model, moiré, moment, MPEG, neural network, neuron, node, noise, norm, normal, operator, optical system, optimization, orthogonal, parametric, pattern recognition, periodicity, photometry, pixel, polygon, polynomial, prediction, pulsation, pyramidal, quantization, raster, reconstruction, recursive, region, rendering, representation space, resolution, restoration, robustness, ROC, thinning, transform, sampling, saturation, scene analysis, segmentation, separable function, sequential, smoothing, spline, split (to), shape, threshold, tree, signal, speckle, spectrum, stationarity, statistical, stochastic, structuring element, support, syntactic, synthesis, texture, truncation, variance, vision, voxel, windowing.

  18. THE POSSIBLE MOON OF KEPLER-90g IS A FALSE POSITIVE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kipping, D. M.; Torres, G.; Buchhave, L. A.

    2015-01-20

    The discovery of an exomoon would provide deep insights into planet formation and the habitability of planetary systems, with transiting examples being particularly sought after. Of the hundreds of Kepler planets now discovered, the seven-planet system Kepler-90 is unusual for exhibiting an unidentified transit-like signal in close proximity to one of the transits of the long-period gas giant Kepler-90g, as noted by Cabrera et al. As part of the ''Hunt for Exomoons with Kepler'' project, we investigate this possible exomoon signal and find it passes all conventional photometric, dynamical, and centroid diagnostic tests. However, pixel-level light curves indicate that the moon-like signal occurs on nearly all of the target's pixels, which we confirm using a novel way of examining pixel-level data which we dub the ''transit centroid''. This test reveals that the possible exomoon to Kepler-90g is likely a false positive, perhaps due to a cosmic-ray-induced sudden pixel sensitivity dropout. This work highlights the extreme care required for seeking non-periodic low-amplitude transit signals, such as exomoons.
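
    The pixel-level sanity check, i.e. asking what fraction of pixels individually show the dip, can be sketched schematically. This is a simplified stand-in for the authors' ''transit centroid'' diagnostic, with a hypothetical flux cube and transit mask:

```python
import numpy as np

def dip_fraction(pixel_flux, in_transit, n_sigma=3.0):
    """Fraction of pixels whose mean in-transit flux drops more than
    n_sigma of their out-of-transit scatter below the out-of-transit
    level.  pixel_flux is a (T, H, W) cube of per-pixel light curves;
    in_transit is a boolean mask over the T cadences.  A dip present
    in nearly all pixels points to an instrumental artifact rather
    than a localized astrophysical source."""
    out = pixel_flux[~in_transit]
    base = out.mean(axis=0)
    scatter = out.std(axis=0) + 1e-12  # avoid division/compare by zero
    depth = base - pixel_flux[in_transit].mean(axis=0)
    return float(np.mean(depth > n_sigma * scatter))
```

    A genuine point source concentrates the dip in the few pixels under the stellar image (a small fraction), while an artifact such as a sensitivity dropout dips essentially every pixel (a fraction near 1).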

  19. URREF Reliability Versus Credibility in Information Fusion

    DTIC Science & Technology

    2013-07-01

    Fusion, Vol. 3, No. 2, December, 2008. [31] E. Blasch, J. Dezert, and P. Valin , “DSMT Applied to Seismic and Acoustic Sensor Fusion,” Proc. IEEE Nat...44] E. Blasch, P. Valin , E. Bossé, “Measures of Effectiveness for High- Level Fusion,” Int. Conference on Information Fusion, 2010. [45] X. Mei, H...and P. Valin , “Information Fusion Measures of Effectiveness (MOE) for Decision Support,” Proc. SPIE 8050, 2011. [49] Y. Zheng, W. Dong, and E

  20. Analysis of decision fusion algorithms in handling uncertainties for integrated health monitoring systems

    NASA Astrophysics Data System (ADS)

    Zein-Sabatto, Saleh; Mikhail, Maged; Bodruzzaman, Mohammad; DeSimio, Martin; Derriso, Mark; Behbahani, Alireza

    2012-06-01

    It has been widely accepted that data fusion and information fusion methods can improve the accuracy and robustness of decision-making in structural health monitoring systems. It is arguably true, nonetheless, that decision-level fusion is equally beneficial when applied to integrated health monitoring systems. Several decisions at low levels of abstraction may be produced by different decision-makers; however, decision-level fusion is required at the final stage of the process to provide an accurate assessment of the health of the monitored system as a whole. An example of such integrated systems with complex decision-making scenarios is the integrated health monitoring of aircraft. A thorough understanding of the characteristics of decision-fusion methodologies is a crucial step for the successful implementation of such decision-fusion systems. In this paper, we first present the major information fusion methodologies reported in the literature, i.e., probabilistic, evidential, and artificial-intelligence-based methods. The theoretical basis and characteristics of these methodologies are explained and their performances are analyzed. Second, candidate methods from the above fusion methodologies, i.e., Bayesian, Dempster-Shafer, and fuzzy logic algorithms, are selected and their application is extended to decision fusion. Finally, fusion algorithms are developed based on the selected fusion methods and their performance is tested on decisions generated from synthetic data and from experimental data. Also in this paper, a modeling methodology, i.e. the cloud model, for generating synthetic decisions is presented and used. Using the cloud model, both types of uncertainty involved in real decision-making, randomness and fuzziness, are modeled. Synthetic decisions are generated with an unbiased process and varying interaction complexities among decisions to provide a fair performance comparison of the selected decision-fusion algorithms.
For verification purposes, implementation results of the developed fusion algorithms on structural health monitoring data collected from experimental tests are reported in this paper.
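
    Of the candidate methods, Bayesian decision fusion is the simplest to sketch: assuming independent decision-makers and a uniform prior, class posteriors are multiplied and renormalized. The monitor outputs below are hypothetical:

```python
import numpy as np

def bayes_fuse(posteriors):
    """Naive-Bayes decision fusion: multiply the class posteriors from
    several independent decision-makers (uniform prior assumed) and
    renormalize so the fused vector sums to one."""
    fused = np.prod(np.asarray(posteriors, dtype=float), axis=0)
    return fused / fused.sum()

# Three hypothetical monitors scoring the classes {healthy, damaged}:
fused = bayes_fuse([[0.7, 0.3], [0.6, 0.4], [0.2, 0.8]])
```

    Even though two of the three monitors lean toward "healthy", the third monitor's strong evidence tips the fused decision toward "damaged", illustrating how product fusion weights confident dissent.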

  1. Is the behavior of disc replacement adjacent to fusion affected by the location of the fused level in hybrid surgery?

    PubMed

    Wu, Ting-Kui; Meng, Yang; Wang, Bei-Yu; Hong, Ying; Rong, Xin; Ding, Chen; Chen, Hua; Liu, Hao

    2018-04-27

    Hybrid surgery (HS), consisting of cervical disc arthroplasty (CDA) at the mobile level, along with anterior cervical discectomy and fusion at the spondylotic level, could be a promising treatment for patients with multilevel cervical degenerative disc disease (DDD). An advantage of this technique is that it uses an optimal procedure according to the status of each level. However, information is lacking regarding the influence of the relative location of the replacement and the fusion segment in vivo. We conducted the present study to investigate whether the location of the fusion affected the behavior of the disc replacement and adjacent segments in HS in vivo. This is an observational study. The numbers of patients in the arthroplasty-fusion (AF) and fusion-arthroplasty (FA) groups were 51 and 24, respectively. The Japanese Orthopedic Association (JOA), Neck Disability Index (NDI), and Visual Analog Scale (VAS) scores were evaluated. Global and segmental lordosis, the range of motion (ROM) of C2-C7, and the operated and adjacent segments were measured. Fusion rate and radiological changes at adjacent levels were observed. Between January 2010 and July 2016, 75 patients with cervical DDD at two contiguous levels undergoing a two-level HS were retrospectively reviewed. The patients were divided into AF and FA groups according to the locations of the disc replacement. Clinical outcomes were evaluated according to the JOA, NDI, and VAS scores. Radiological parameters, including global and segmental lordosis, the ROM of C2-C7, the operated and adjacent segments, and complications, were also evaluated. Although the JOA, NDI, and VAS scores were improved in both the AF and the FA groups, no significant differences were found between the two groups at any follow-up point. Both groups maintained cervical lordosis, but no difference was found between the groups. 
Segmental lordosis at the fusion segment was significantly improved postoperatively (p<.001), whereas it was maintained at the arthroplasty segment. The ROM of C2-C7 was significantly decreased in both groups postoperatively (AF p=.001, FA p=.014), but no difference was found between the groups. The FA group exhibited a non-significant improvement in ROM at the arthroplasty segment. The ROM adjacent to the arthroplasty segment was increased, although not significantly, whereas the ROM adjacent to the fusion segment was significantly improved after surgery in both groups (p<.001). Fusion was achieved in all patients. No significant difference in complications was found between the groups. In HS, cephalic or caudal fusion segments to the arthroplasty segment did not affect the clinical outcomes and the behavior of CDA. However, the ROM of adjacent segments was affected by the location of the fusion segment; segments adjacent to fusion segments had greater ROMs than segments adjacent to arthroplasty segments. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. The effect of lossy image compression on image classification

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.
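
The minimum-distance rule described above, and its sensitivity to lossy compression, can be sketched as follows. This is only an illustrative stand-in: uniform quantization of pixel values substitutes for actual JPEG coding, and the class means, noise level, and quantization steps are invented for the example.

```python
import numpy as np

def min_distance_classify(pixels, class_means):
    # assign each pixel vector to the nearest class mean (minimum-distance rule)
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return d.argmin(axis=1)

rng = np.random.default_rng(0)
means = np.array([[30.0], [120.0], [220.0]])            # three classes, one band
labels = rng.integers(0, 3, size=5000)
band = np.clip(means[labels, 0] + rng.normal(0, 10, 5000), 0, 255)

orig = min_distance_classify(band[:, None], means)

agree = {}
for step in (4, 16, 64):                                # coarser step ~ heavier loss
    lossy = (band // step) * step + step / 2            # quantization stand-in for JPEG
    agree[step] = (min_distance_classify(lossy[:, None], means) == orig).mean()
```

Because the class means are far apart relative to the quantization error, overall agreement stays high even at coarse quantization, mirroring the abstract's observation that the classification retains its overall appearance while pixel-to-pixel detail is lost.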

  3. Mitochondrial fusion increases the mitochondrial DNA copy number in budding yeast.

    PubMed

    Hori, Akiko; Yoshida, Minoru; Ling, Feng

    2011-05-01

    Mitochondrial fusion plays an important role in mitochondrial DNA (mtDNA) maintenance, although the underlying mechanisms are unclear. In budding yeast, certain levels of reactive oxygen species (ROS) can promote recombination-mediated mtDNA replication, and mtDNA maintenance depends on the homologous DNA pairing protein Mhr1. Here, we show that the fusion of isolated yeast mitochondria, which can be monitored by the bimolecular fluorescence complementation-derived green fluorescent protein (GFP) fluorescence, increases the mtDNA copy number in a manner dependent on Mhr1. The fusion event, accompanied by the degradation of dissociated electron transport chain complex IV and transient reductions in the complex IV subunits by the inner membrane AAA proteases such as Yme1, increases ROS levels. Analysis of the initial stage of mitochondrial fusion in early log-phase cells produced similar results. Moreover, higher ROS levels in mitochondrial fusion-deficient mutant cells increased the amount of newly synthesized mtDNA, resulting in increases in the mtDNA copy number. In contrast, reducing ROS levels in yme1 null mutant cells significantly decreased the mtDNA copy number, leading to an increase in cells lacking mtDNA. Our results indicate that mitochondrial fusion induces mtDNA synthesis by facilitating ROS-triggered, recombination-mediated replication and thereby prevents the generation of mitochondria lacking DNA. © 2011 The Authors. Journal compilation © 2011 by the Molecular Biology Society of Japan/Blackwell Publishing Ltd.

  4. Real-Time Symbol Extraction From Grey-Level Images

    NASA Astrophysics Data System (ADS)

    Massen, R.; Simnacher, M.; Rosch, J.; Herre, E.; Wuhrer, H. W.

    1988-04-01

    A VME-bus image pipeline processor for extracting vectorized contours from grey-level images in real-time is presented. This 3-giga-operations-per-second processor uses large-kernel convolvers and new non-linear neighbourhood processing algorithms to compute true 1-pixel-wide, noise-free contours without thresholding, even from grey-level images with widely varying edge sharpness. The local edge orientation is used as an additional cue to compute a list of vectors describing the closed and open contours in real-time and to dump a CAD-like symbolic image description into a symbol memory at pixel clock rate.
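
One standard way to obtain 1-pixel-wide contours from gradient magnitude and local edge orientation is non-maximum suppression. The sketch below is a generic textbook thinning step, not the processor's specific algorithm, and the test image is synthetic.

```python
import numpy as np

def thin_contours(mag, ori):
    """Non-maximum suppression along the gradient direction: a pixel survives
    only if it is the local magnitude maximum across the edge, which yields
    contours that are one pixel wide without any global threshold."""
    H, W = mag.shape
    out = np.zeros_like(mag)
    q = (np.round(ori / (np.pi / 4)) % 4).astype(int)   # quantize to 0/45/90/135 deg
    offs = {0: (0, 1), 1: (-1, 1), 2: (-1, 0), 3: (-1, -1)}
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            dy, dx = offs[q[y, x]]
            # strict '>' on one side breaks plateaus into single-pixel ridges
            if mag[y, x] > mag[y + dy, x + dx] and mag[y, x] >= mag[y - dy, x - dx]:
                out[y, x] = mag[y, x]
    return out

# a blurred vertical step edge: the thinned result is a 1-pixel-wide line
img = np.clip((np.arange(9)[None, :].repeat(9, axis=0) - 3) / 3.0, 0, 1)
gy, gx = np.gradient(img)
thin = thin_contours(np.hypot(gx, gy), np.arctan2(gy, gx))
```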

  5. Development of n-in-p pixel modules for the ATLAS upgrade at HL-LHC

    NASA Astrophysics Data System (ADS)

    Macchiolo, A.; Nisius, R.; Savic, N.; Terzo, S.

    2016-09-01

    Thin planar pixel modules are promising candidates to instrument the inner layers of the new ATLAS pixel detector for HL-LHC, thanks to their reduced contribution to the material budget and their high charge collection efficiency after irradiation. 100-200 μm thick sensors, interconnected to FE-I4 read-out chips, have been characterized with radioactive sources and beam tests at the CERN-SPS and DESY. The results of these measurements are reported for devices before and after irradiation up to a fluence of 14×10¹⁵ n_eq/cm². The charge collection and tracking efficiency of the different sensor thicknesses are compared. The outlook for future planar pixel sensor production is discussed, with a focus on sensor designs with the pixel pitches (50×50 and 25×100 μm²) foreseen for the RD53 Collaboration read-out chip in 65 nm CMOS technology. An optimization of the biasing structures in the pixel cells is required to avoid the hit-efficiency loss presently observed in the punch-through region after irradiation. For this purpose, the performance of different layouts has been compared in FE-I4 compatible sensors at various fluence levels using beam test data. Highly segmented sensors will represent a challenge for tracking in the forward region of the pixel system at HL-LHC. In order to reproduce the performance of 50×50 μm² pixels at high pseudo-rapidity values, FE-I4 compatible planar pixel sensors have been studied before and after irradiation in beam tests at a high incidence angle (80°) with respect to the short pixel direction. Results on cluster shapes, charge collection, and hit efficiency are shown.

  6. Use of high-granularity CdZnTe pixelated detectors to correct response non-uniformities caused by defects in crystals

    DOE PAGES

    Bolotnikov, A. E.; Camarda, G. S.; Cui, Y.; ...

    2015-09-06

    Following our successful demonstration of the position-sensitive virtual Frisch-grid detectors, we investigated the feasibility of using high-granularity position sensing to correct response non-uniformities caused by the crystal defects in CdZnTe (CZT) pixelated detectors. The development of high-granularity detectors able to correct response non-uniformities on a scale comparable to the size of electron clouds opens the opportunity of using unselected off-the-shelf CZT material, whilst still assuring high spectral resolution for the majority of the detectors fabricated from an ingot. Here, we present the results from testing 3D position-sensitive 15×15×10 mm³ pixelated detectors, fabricated with conventional pixel patterns with progressively smaller pixel sizes: 1.4, 0.8, and 0.5 mm. We employed the readout system based on the H3D front-end multi-channel ASIC developed by BNL's Instrumentation Division in collaboration with the University of Michigan. We use the sharing of electron clouds among several adjacent pixels to measure locations of interaction points with sub-pixel resolution. By using the detectors with small pixel sizes and a high probability of charge-sharing events, we were able to improve their spectral resolutions in comparison to the baseline levels, measured for the 1.4-mm pixel size detectors with small fractions of charge-sharing events. These results demonstrate that further enhancement of the performance of CZT pixelated detectors and reduction of costs are possible by using high spatial-resolution position information of interaction points to correct the small-scale response non-uniformities caused by crystal defects present in most devices.
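
Charge-weighted centroiding is one common way to turn charge shared among adjacent pixels into a sub-pixel interaction position. The sketch below assumes a 3×3 neighbourhood and a hypothetical 0.5 mm pitch; it is not necessarily the exact reconstruction used for these detectors.

```python
import numpy as np

def subpixel_position(charges, pitch=0.5):
    """Estimate the interaction point (in mm) from charge shared among a
    3x3 pixel neighbourhood, using the charge-weighted centroid."""
    charges = np.asarray(charges, dtype=float)
    ys, xs = np.mgrid[0:charges.shape[0], 0:charges.shape[1]]
    total = charges.sum()
    cy = (ys * charges).sum() / total * pitch
    cx = (xs * charges).sum() / total * pitch
    return cy, cx

# an electron cloud centred between two pixels shares its charge ~50/50,
# so the centroid lands midway between the pixel centres
shared = [[0, 0, 0],
          [0, 5, 5],
          [0, 0, 0]]
```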

  7. Reoperation After Cervical Disc Arthroplasty Versus Anterior Cervical Discectomy and Fusion: A Meta-analysis.

    PubMed

    Zhong, Zhao-Ming; Zhu, Shi-Yuan; Zhuang, Jing-Shen; Wu, Qian; Chen, Jian-Ting

    2016-05-01

    Anterior cervical discectomy and fusion is a standard surgical treatment for cervical radiculopathy and myelopathy, but reoperations sometimes are performed to treat complications of fusion such as pseudarthrosis and adjacent-segment degeneration. A cervical disc arthroplasty is designed to preserve motion and avoid the shortcomings of fusion. Available evidence suggests that a cervical disc arthroplasty can provide pain relief and functional improvements similar or superior to an anterior cervical discectomy and fusion. However, there is controversy regarding whether a cervical disc arthroplasty can reduce the frequency of reoperations. We performed a meta-analysis of randomized controlled trials (RCTs) to compare cervical disc arthroplasty with anterior cervical discectomy and fusion regarding (1) the overall frequency of reoperation at the index and adjacent levels; (2) the frequency of reoperation at the index level; and (3) the frequency of reoperation at the adjacent levels. PubMed, EMBASE, and the Cochrane Register of Controlled Trials databases were searched to identify RCTs comparing cervical disc arthroplasty with anterior cervical discectomy and fusion and reporting the frequency of reoperation. We also manually searched the reference lists of articles and reviews for possible relevant studies. Twelve RCTs with a total of 3234 randomized patients were included. Eight types of disc prostheses were used in the included studies. In the anterior cervical discectomy and fusion group, autograft was used in one study and allograft in 11 studies. Nine of 12 studies were industry sponsored. Pooled risk ratio (RR) and associated 95% CI were calculated for the frequency of reoperation using random-effects or fixed-effects models depending on the heterogeneity of the included studies. 
A funnel plot suggested the possible presence of publication bias in the available pool of studies; that is, the shape of the plot suggests that smaller negative or no-difference studies may have been performed but have not been published, and so were not identified and included in this meta-analysis. The overall frequency of reoperation at the index and adjacent levels was lower in the cervical disc arthroplasty group (6%; 108/1762) than in the anterior cervical discectomy and fusion group (12%; 171/1472) (RR, 0.54; 95% CI, 0.36-0.80; p = 0.002). Subgroup analyses were performed according to secondary surgical level. Compared with anterior cervical discectomy and fusion, cervical disc arthroplasty was associated with fewer reoperations at the index level (RR, 0.50; 95% CI, 0.37-0.68; p < 0.001) and adjacent levels (RR, 0.52; 95% CI, 0.37-0.74; p < 0.001). Cervical disc arthroplasty is associated with fewer reoperations than anterior cervical discectomy and fusion, indicating that it is a safe and effective alternative to fusion for cervical radiculopathy and myelopathy. However, because of some limitations, these findings should be interpreted with caution. Additional studies are needed. Level I, therapeutic study.
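
For intuition, the crude risk ratio and a Wald confidence interval for the aggregate counts quoted in the abstract can be computed as below. This simple calculation ignores the between-study weighting of a meta-analysis, so it only approximates the reported random-effects pooled estimate (RR 0.54, 95% CI 0.36-0.80).

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b, z=1.96):
    """Crude risk ratio of group A vs group B with a Wald 95% CI on log(RR)."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo, hi = (math.exp(math.log(rr) + s * z * se) for s in (-1, 1))
    return rr, lo, hi

# aggregate reoperation counts from the abstract: 108/1762 arthroplasty
# vs 171/1472 fusion (illustration only, not the meta-analytic pooling)
rr, lo, hi = risk_ratio(108, 1762, 171, 1472)
```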

  8. A Multi-modal, Discriminative and Spatially Invariant CNN for RGB-D Object Labeling.

    PubMed

    Asif, Umar; Bennamoun, Mohammed; Sohel, Ferdous

    2017-08-30

    While deep convolutional neural networks have shown remarkable success in image classification, the problems of inter-class similarities, intra-class variances, the effective combination of multimodal data, and the spatial variability in images of objects remain major challenges. To address these problems, this paper proposes a novel framework to learn a discriminative and spatially invariant classification model for object and indoor scene recognition using multimodal RGB-D imagery. This is achieved through three postulates: 1) spatial invariance - this is achieved by combining a spatial transformer network with a deep convolutional neural network to learn features which are invariant to spatial translations, rotations, and scale changes, 2) high discriminative capability - this is achieved by introducing Fisher encoding within the CNN architecture to learn features which have small inter-class similarities and large intra-class compactness, and 3) multimodal hierarchical fusion - this is achieved through the regularization of semantic segmentation to a multi-modal CNN architecture, where class probabilities are estimated at different hierarchical levels (i.e., image- and pixel-levels), and fused into a Conditional Random Field (CRF)-based inference hypothesis, the optimization of which produces consistent class labels in RGB-D images. Extensive experimental evaluations on RGB-D object and scene datasets, and live video streams (acquired from Kinect) show that our framework produces superior object and scene classification results compared to the state-of-the-art methods.
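
A much-simplified stand-in for the described fusion of image- and pixel-level probabilities is a log-linear combination of a per-pixel class posterior with an image-level class prior. The paper itself uses CRF-based inference; the weight `w` and the toy probabilities here are invented for illustration.

```python
import numpy as np

def fuse_probabilities(pixel_probs, image_probs, w=1.0):
    # log-linear fusion: per-pixel class posteriors re-weighted by an
    # image-level class prior, then renormalized over classes
    log_p = np.log(pixel_probs + 1e-12) + w * np.log(image_probs + 1e-12)
    fused = np.exp(log_p)
    return fused / fused.sum(axis=-1, keepdims=True)

# an ambiguous pixel (0.5/0.5) is resolved by a confident image-level prior
pixel = np.array([[[0.5, 0.5]]])          # shape (H, W, K)
fused = fuse_probabilities(pixel, np.array([0.9, 0.1]))
```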

  9. Affordable non-traditional source data mining for context assessment to improve distributed fusion system robustness

    NASA Astrophysics Data System (ADS)

    Bowman, Christopher; Haith, Gary; Steinberg, Alan; Morefield, Charles; Morefield, Michael

    2013-05-01

    This paper describes methods to affordably improve the robustness of distributed fusion systems by opportunistically leveraging non-traditional data sources. Adaptive methods help find relevant data, create models, and characterize the model quality. These methods can also measure the conformity of this non-traditional data with fusion system products, including situation modeling and mission impact prediction. Non-traditional data can improve the quantity, quality, availability, timeliness, and diversity of the baseline fusion system sources and therefore can improve prediction and estimation accuracy and robustness at all levels of fusion. Techniques are described that automatically learn to characterize and search non-traditional contextual data to enable operators to integrate the data with high-level fusion systems and ontologies. These techniques apply the extension of the Data Fusion & Resource Management Dual Node Network (DNN) technical architecture at Level 4. The DNN architecture supports effective assessment and management of the expanded portfolio of data sources, entities of interest, models, and algorithms, including data pattern discovery and context conformity. Affordable model-driven and data-driven data mining methods to discover unknown models from non-traditional and 'big data' sources are used to automatically learn entity behaviors and correlations with fusion products [14, 15]. This paper describes our context assessment software development and a demonstration in which context assessment of non-traditional data is compared to an intelligence, surveillance, and reconnaissance fusion product based upon an IED POI workflow.

  11. A Framework for Propagation of Uncertainties in the Kepler Data Analysis Pipeline

    NASA Technical Reports Server (NTRS)

    Clarke, Bruce D.; Allen, Christopher; Bryson, Stephen T.; Caldwell, Douglas A.; Chandrasekaran, Hema; Cote, Miles T.; Girouard, Forrest; Jenkins, Jon M.; Klaus, Todd C.; Li, Jie; hide

    2010-01-01

    The Kepler space telescope is designed to detect Earth-like planets around Sun-like stars using transit photometry by simultaneously observing 100,000 stellar targets nearly continuously over a three and a half year period. The 96-megapixel focal plane consists of 42 charge-coupled devices (CCDs), each containing two 1024 × 1100 pixel arrays. Cross-correlations between calibrated pixels are introduced by common calibrations performed on each CCD, requiring downstream data products to access the calibrated pixel covariance matrix to properly estimate uncertainties. The prohibitively large covariance matrices corresponding to the 75,000 calibrated pixels per CCD preclude calculating and storing the covariance in standard lock-step fashion. We present a novel framework used to implement standard propagation of uncertainties (POU) in the Kepler Science Operations Center (SOC) data processing pipeline. The POU framework captures the variance of the raw pixel data and the kernel of each subsequent calibration transformation, allowing the full covariance matrix of any subset of calibrated pixels to be recalled on-the-fly at any step in the calibration process. Singular value decomposition (SVD) is used to compress and low-pass filter the raw uncertainty data as well as any data-dependent kernels. The combination of the POU framework and SVD compression provides downstream consumers of the calibrated pixel data with access to the full covariance matrix of any subset of the calibrated pixels traceable to pixel-level measurement uncertainties, without having to store, retrieve, and operate on prohibitively large covariance matrices. We describe the POU framework and SVD compression scheme and its implementation in the Kepler SOC pipeline.
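
The core POU step, propagating a covariance through a linear calibration kernel while keeping only a low-rank SVD representation of the kernel, can be sketched with invented numbers. A Gaussian smoothing kernel stands in for a real calibration transform, and all sizes are illustrative.

```python
import numpy as np

n, sigma = 50, 3.0
i = np.arange(n)
K = np.exp(-(i[:, None] - i[None, :]) ** 2 / (2 * sigma ** 2))
K /= K.sum(axis=1, keepdims=True)            # row-normalized smoothing kernel

rng = np.random.default_rng(1)
C_raw = np.diag(rng.uniform(0.5, 2.0, n))    # independent raw-pixel variances

# standard propagation of uncertainties through a linear transform: C' = K C K^T
C_cal = K @ C_raw @ K.T

# SVD compression: store only the top-r modes of the kernel and rebuild
U, s, Vt = np.linalg.svd(K)
r = 10
K_r = (U[:, :r] * s[:r]) @ Vt[:r]
C_approx = K_r @ C_raw @ K_r.T

err = np.linalg.norm(C_cal - C_approx) / np.linalg.norm(C_cal)
```

Because the smoothing kernel's singular values decay rapidly, the rank-10 reconstruction recovers the full covariance almost exactly while storing a fraction of the kernel.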

  12. Kinematics of a selectively constrained radiolucent anterior lumbar disc: comparisons to hybrid and circumferential fusion.

    PubMed

    Daftari, Tapan K; Chinthakunta, Suresh R; Ingalhalikar, Aditya; Gudipally, Manasa; Hussain, Mir; Khalil, Saif

    2012-10-01

    Despite encouraging clinical outcomes of one-level total disc replacements reported in the literature, there is no compelling evidence regarding the stability following two-level disc replacement and hybrid constructs. The current study is aimed at evaluating the multidirectional kinematics of a two-level disc arthroplasty and hybrid construct with disc replacement adjacent to rigid circumferential fusion, compared to two-level fusion using a novel selectively constrained radiolucent anterior lumbar disc. Nine osteoligamentous lumbosacral spines (L1-S1) were tested in the following sequence: 1) Intact; 2) One-level disc replacement; 3) Hybrid; 4) Two-level disc replacement; and 5) Two-level fusion. Range of motion (at both the implanted and adjacent levels) and center of rotation in the sagittal plane were recorded and calculated. At the level of implantation, motion was restored when one-level disc replacement was used but tended to decrease with two-level disc arthroplasty. The findings also revealed that both one-level and two-level disc replacement and hybrid constructs did not significantly change adjacent-level kinematics compared to the intact condition, whereas the two-level fusion construct demonstrated a significant increase in flexibility at the adjacent level. The location of center of rotation in the sagittal plane at L4-L5 for the one-level disc replacement construct was similar to that of the intact condition. The one-level disc arthroplasty tended to mimic a motion profile similar to the intact spine. However, the two-level disc replacement construct tended to reduce motion, and the clinical stability of a two-level disc arthroplasty requires additional investigation. Hybrid constructs may be used as a surgical alternative for treating two-level lumbar degenerative disc disease. Published by Elsevier Ltd.

  13. Outcomes of Posterolateral Fusion with and without Instrumentation and of Interbody Fusion for Isthmic Spondylolisthesis: A Prospective Study.

    PubMed

    Endler, Peter; Ekman, Per; Möller, Hans; Gerdhem, Paul

    2017-05-03

    Various methods for the treatment of isthmic spondylolisthesis are available. The aim of this study was to compare outcomes after posterolateral fusion without instrumentation, posterolateral fusion with instrumentation, and interbody fusion. The Swedish Spine Register was used to identify 765 patients who had been operated on for isthmic spondylolisthesis and had at least preoperative and 2-year outcome data; 586 of them had longer follow-up (a mean of 6.9 years). The outcome measures were a global assessment of leg and back pain, the Oswestry Disability Index (ODI), the EuroQol-5 Dimensions (EQ-5D) Questionnaire, the Short Form-36 (SF-36), a visual analog scale (VAS) for back and leg pain, and satisfaction with treatment. Data on additional lumbar spine surgery was searched for in the register, with the mean duration of follow-up for this variable being 10.6 years after the index procedure. Statistical analyses were performed with analysis of covariance or competing-risks proportional hazards regression, adjusted for baseline differences in the studied variables, smoking, employment status, and level of fusion. Posterolateral fusion without instrumentation was performed in 102 patients; posterolateral fusion with instrumentation, in 452; and interbody fusion, in 211. At 1 year, improvement was reported in the global assessment for back pain by 54% of the patients who had posterolateral fusion without instrumentation, 68% of those treated with posterolateral fusion with instrumentation, and 70% of those treated with interbody fusion (p = 0.009). The VAS for back pain and reported satisfaction with treatment showed similar patterns (p = 0.003 and p = 0.017, respectively), whereas other outcomes did not differ among the treatment groups at 1 year. 
At 2 years, the global assessment for back pain indicated improvement in 57% of the patients who had undergone posterolateral fusion without instrumentation, 70% of those who had posterolateral fusion with instrumentation, and 71% of those treated with interbody fusion (p = 0.022). There were no significant outcome differences at the mean 6.9-year follow-up interval. There was an increased hazard ratio for additional lumbar spine surgery after interbody fusion (4.34; 95% confidence interval [CI] = 1.71 to 11.03) and posterolateral fusion with instrumentation (2.56; 95% CI = 1.02 to 6.42) compared with after posterolateral fusion without instrumentation (1.00; reference). Fusion with instrumentation, with or without interbody fusion, was associated with more improvement in back pain scores and higher satisfaction with treatment compared with fusion without instrumentation at 1 year, but the difference was attenuated with longer follow-up. Fusion with instrumentation was associated with a significantly higher risk of additional spine surgery. Therapeutic Level III. See Instructions for Authors for a complete description of levels of evidence.

  14. Continuous phase and amplitude holographic elements

    NASA Technical Reports Server (NTRS)

    Maker, Paul D. (Inventor); Muller, Richard E. (Inventor)

    1995-01-01

    A method for producing a phase hologram using e-beam lithography provides n-ary levels of phase and amplitude. An amplitude hologram is first produced on a transparent substrate by e-beam exposure of a resist over a film of metal, exposing n ≤ m × m spots of an array of spots for each pixel, where the spots are randomly selected in proportion to the amplitude assigned to each pixel. After developing and etching the metal film, a phase hologram is produced by e-beam lithography using a low-contrast resist, such as PMMA, with n-ary levels of low doses less than approximately 200 μC/cm² (preferably in the range of 20-200 μC/cm²), followed by aggressive development in pure acetone for an empirically determined time (about 6 s), controlled to within 1/10 s, to produce partial development of each pixel in proportion to the n-ary dose level assigned to it.
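
The amplitude-encoding step, randomly exposing n of the m × m sub-spots in each pixel with n proportional to the pixel's assigned amplitude, can be sketched as follows. The 4×4 spot array and the amplitudes are illustrative, not taken from the patent.

```python
import random

def choose_spots(amplitude, m=4, seed=0):
    """Randomly pick n of the m*m e-beam spots in a pixel, with n
    proportional to the pixel's assigned amplitude in [0, 1]."""
    n = round(amplitude * m * m)
    rng = random.Random(seed)
    return rng.sample([(r, c) for r in range(m) for c in range(m)], n)

spots = choose_spots(0.5)      # half amplitude: 8 of the 16 spots exposed
```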

  15. The biomechanical stability of a novel spacer with integrated plate in contiguous two-level and three-level ACDF models: an in vitro cadaveric study.

    PubMed

    Clavenna, Andrew L; Beutler, William J; Gudipally, Manasa; Moldavsky, Mark; Khalil, Saif

    2012-02-01

    Anterior cervical plating increases stability and hence improves fusion rates to treat cervical spine pathologies, which are often symptomatic at multiple levels. However, plating is not without complications, such as dysphagia, injury to neural elements, and plate breakage. The biomechanics of a spacer with integrated plate system combined with posterior instrumentation (PI), in two-level and three-level surgical models, has not yet been investigated. The purpose of the study was to biomechanically evaluate the multidirectional rigidity of a spacer with integrated plate (SIP) at multiple levels in comparison to traditional spacers and plating. An in vitro cervical cadaveric model. Eight fresh human cervical (C2-C7) cadaver spines were tested under pure moments of ±1.5 Nm on a spine simulator test frame. Each spine was tested in the intact condition, with anterior fixation only, and with both anterior and posterior instrumentation. Range of motion (ROM) was measured using the Optotrak Certus (NDI, Inc., Waterloo, Ontario, Canada) motion analysis system in flexion-extension (FE), lateral bending (LB), and axial rotation (AR) at the instrumented levels (C3-C6). Repeated-measures analysis of variance was used for statistical analysis. All the surgical constructs showed significant reduction in motion compared with the intact condition. In two-level fusion, the SIP (C4-C6) construct significantly reduced ROM by 66.5%, 65.4%, and 60.3% when compared with intact in FE, LB, and AR, respectively. In three-level fusion, the SIP (C3-C6) construct significantly reduced ROM by 65.8%, 66%, and 49.6% when compared with intact in FE, LB, and AR, respectively. Posterior instrumentation showed significant stability only in three-level fusion when compared with the respective anterior-only constructs. In both two-level and three-level fusion, SIP showed comparable stability to traditional spacer and plate constructs in all loading modes. 
The anatomically profiled spacer with integrated plate allows treatment of cervical disorders with fewer steps and less impact to cervical structures. In this biomechanical study, spacer with integrated plate construct showed comparable stability to traditional spacer and plate for two-level and three-level fusion. Posterior instrumentation showed significant effect only in three-level fusion. Clinical data are required for further validation of using spacer with integrated plate at multiple levels. Copyright © 2012 Elsevier Inc. All rights reserved.

  16. Results from levels 2/3 fusion implementations: issues, challenges, retrospectives, and perspectives for the future an annotated perspective

    NASA Astrophysics Data System (ADS)

    Kadar, Ivan; Bosse, Eloi; Salerno, John; Lambert, Dale A.; Das, Subrata; Ruspini, Enrique H.; Rhodes, Bradley J.; Biermann, Joachim

    2008-04-01

    Even though the definitions of the Joint Directors of Laboratories (JDL) "fusion levels" were established in 1987, published in 1991, and revised in 1999 and 2004, the meaning, effects, control, and optimization of interactions among the fusion levels have not yet been fully explored and understood. Specifically, this is apparent from the abstract JDL definitions of "Levels 2/3 Fusion" - situation and threat assessment (SA/TA), which involve deriving relations among entities, e.g., the aggregation of object states (i.e., classification and location) in SA, while TA uses SA products to estimate/predict the impact on situations of actions/interactions taken by the participant entities involved. Given all the existing knowledge in the information fusion and human factors literature (both prior to and after the introduction of "fusion levels" in 1987), there are still open questions regarding the implementation of knowledge representation and reasoning methods under uncertainty to afford SA/TA. Therefore, to promote exchange of ideas and to illuminate the historical, current, and future issues associated with Levels 2/3 implementations, leading experts were invited to present their respective views on various facets of this complex problem. This paper is a retrospective annotated view of the invited panel discussion organized by Ivan Kadar (first author), supported by John Salerno, in order to provide a historical perspective of the evolution of the state-of-the-art (SOA) in higher-level "Levels 2/3" information fusion implementations by looking back over the past ten or more years (before JDL), and, based upon the lessons learned, to forecast where focus should be placed to further enhance and advance the SOA by addressing key issues and challenges. In order to convey the panel discussion to audiences not present at the panel, annotated position papers summarizing the panel presentations are included.

  17. Application of low-noise CID imagers in scientific instrumentation cameras

    NASA Astrophysics Data System (ADS)

    Carbone, Joseph; Hutton, J.; Arnold, Frank S.; Zarnowski, Jeffrey J.; Vangorden, Steven; Pilon, Michael J.; Wadsworth, Mark V.

    1991-07-01

    CIDTEC has developed a PC-based instrumentation camera incorporating a preamplifier-per-row CID imager and a microprocessor/LCA camera controller. The camera takes advantage of CID X-Y addressability to randomly read individual pixels and potentially overlapping pixel subsets in true nondestructive (NDRO) as well as destructive readout modes. Using an oxynitride-fabricated CID and the NDRO readout technique, pixel full-well and noise levels of approximately 1×10⁶ and 40 electrons, respectively, were measured. Data taken from test structures indicates noise levels (which appear to be 1/f limited) can be reduced by a factor of two by eliminating the nitride under the preamplifier gate. Due to software programmability, versatile readout capabilities, wide dynamic range, and extended UV/IR capability, this camera appears to be ideally suited for use in spectroscopy and other scientific applications.

  18. Real-time sensor validation and fusion for distributed autonomous sensors

    NASA Astrophysics Data System (ADS)

    Yuan, Xiaojing; Li, Xiangshang; Buckles, Bill P.

    2004-04-01

    Multi-sensor data fusion has found widespread applications in industrial and research sectors. The purpose of real-time multi-sensor data fusion is to dynamically estimate an improved system model from a set of different data sources, i.e., sensors. This paper presents a systematic and unified real-time sensor validation and fusion framework (RTSVFF) based on distributed autonomous sensors. The RTSVFF is an open architecture consisting of four layers: the transaction layer, the process fusion layer, the control layer, and the planning layer. This paradigm facilitates distribution of intelligence to the sensor level and sharing of information among sensors, controllers, and other devices in the system. The openness of the architecture also provides a platform to test different sensor validation and fusion algorithms and thus facilitates the selection of near-optimal algorithms for specific sensor fusion applications. In the version of the model presented in this paper, confidence-weighted averaging is employed to address the dynamic system state issue noted above. The state is computed using an adaptive estimator and a dynamic validation curve for numeric data fusion, and a robust diagnostic map for decision-level qualitative fusion. The framework is then applied to automatic monitoring of a gas-turbine engine, including a performance comparison of the proposed real-time sensor fusion algorithms and a traditional numerical weighted average.
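
Confidence-weighted averaging of redundant sensors can be sketched as below. The weighting function here (an inverse-quadratic penalty on deviation from the median) is one simple validation gate, not necessarily the RTSVFF's adaptive estimator; the readings are invented.

```python
import statistics

def fuse(readings, scale=1.0):
    """Confidence-weighted averaging: down-weight sensors whose readings
    deviate from the median estimate (a simple validation gate)."""
    med = statistics.median(readings)
    weights = [1.0 / (1.0 + ((r - med) / scale) ** 2) for r in readings]
    return sum(w * r for w, r in zip(weights, readings)) / sum(weights)

# three agreeing temperature sensors and one faulty outlier: the outlier
# receives a near-zero weight, so the fused estimate stays near 100
est = fuse([99.8, 100.1, 100.0, 150.0])
```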

  19. Improving Secondary Ion Mass Spectrometry Image Quality with Image Fusion

    PubMed Central

    Tarolli, Jay G.; Jackson, Lauren M.; Winograd, Nicholas

    2014-01-01

    The spatial resolution of chemical images acquired with cluster secondary ion mass spectrometry (SIMS) is limited not only by the size of the probe utilized to create the images, but also by detection sensitivity. As the probe size is reduced to below 1 µm, for example, a low signal in each pixel limits lateral resolution due to counting statistics considerations. Although it can be useful to implement numerical methods to mitigate this problem, here we investigate the use of image fusion to combine information from scanning electron microscope (SEM) data with chemically resolved SIMS images. The advantage of this approach is that the higher intensity and, hence, spatial resolution of the electron images can help to improve the quality of the SIMS images without sacrificing chemical specificity. Using a pan-sharpening algorithm, the method is illustrated using synthetic data, experimental data acquired from a metallic grid sample, and experimental data acquired from a lawn of algae cells. The results show that an increase in spatial resolution of up to an order of magnitude is achievable. A cross-correlation metric is utilized for evaluating the reliability of the procedure. PMID:24912432
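    The pan-sharpening idea can be illustrated with a minimal ratio-based (Brovey-like) sketch, assuming a co-registered SEM image whose dimensions are an integer multiple of the SIMS image's; the function and variable names are illustrative, not the authors' implementation:

```python
import numpy as np

def pan_sharpen(sims, sem, eps=1e-6):
    """Sharpen a low-resolution chemical image with a co-registered
    high-resolution electron image by multiplicative detail injection.

    sims : (h, w) low-resolution SIMS channel
    sem  : (k*h, k*w) high-resolution SEM image, integer scale factor k
    """
    k = sem.shape[0] // sims.shape[0]
    # Nearest-neighbour upsample of the chemical image to the SEM grid.
    sims_up = np.kron(sims, np.ones((k, k)))
    # Low-pass version of the SEM image at the SIMS resolution.
    sem_low = sem.reshape(sims.shape[0], k, sims.shape[1], k).mean(axis=(1, 3))
    sem_low_up = np.kron(sem_low, np.ones((k, k)))
    # Inject the high-frequency SEM detail multiplicatively.
    return sims_up * sem / (sem_low_up + eps)
```

    Where the SEM image is locally flat, the ratio is ~1 and the chemical values pass through unchanged; SEM edges modulate the chemical map at the finer grid.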

  20. Enhanced technologies for unattended ground sensor systems

    NASA Astrophysics Data System (ADS)

    Hartup, David C.

    2010-04-01

    Progress in several technical areas is being leveraged to advantage in Unattended Ground Sensor (UGS) systems. This paper discusses advanced technologies that are appropriate for use in UGS systems. While some technologies provide evolutionary improvements, other technologies result in revolutionary performance advancements for UGS systems. Some specific technologies discussed include wireless cameras and viewers, commercial PDA-based system programmers and monitors, new materials and techniques for packaging improvements, low power cueing sensor radios, advanced long-haul terrestrial and SATCOM radios, and networked communications. Other technologies covered include advanced target detection algorithms, high pixel count cameras for license plate and facial recognition, small cameras that provide large stand-off distances, video transmissions of target activity instead of still images, sensor fusion algorithms, and control center hardware. The impact of each technology on the overall UGS system architecture is discussed, along with the advantages provided to UGS system users. Areas of analysis include required camera parameters as a function of stand-off distance for license plate and facial recognition applications, power consumption for wireless cameras and viewers, sensor fusion communication requirements, and requirements to practically implement video transmission through UGS systems. Examples of devices that have already been fielded using technology from several of these areas are given.

  1. Multiratio fusion change detection with adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Hytla, Patrick C.; Balster, Eric J.; Vasquez, Juan R.; Neuroth, Robert M.

    2017-04-01

    A ratio-based change detection method known as multiratio fusion (MRF) is proposed and tested. The MRF framework builds on other change detection components proposed in this work: dual ratio (DR) and multiratio (MR). The DR method involves two ratios coupled with adaptive thresholds to maximize detected changes and minimize false alarms. The use of two ratios is shown to outperform the single ratio case when the means of the image pairs are not equal. MR change detection builds on the DR method by including negative imagery to produce four total ratios with adaptive thresholds. Inclusion of negative imagery is shown to improve detection sensitivity and to boost detection performance in certain target and background cases. MRF further expands this concept by fusing together the ratio outputs using a routine in which detections must be verified by two or more ratios to be classified as a true changed pixel. The proposed method is tested with synthetically generated test imagery and real datasets with results compared to other methods found in the literature. DR is shown to significantly outperform the standard single ratio method. MRF produces excellent change detection results that exhibit up to a 22% performance improvement over other methods from the literature at low false-alarm rates.
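    A toy sketch of the dual-ratio idea is given below; the adaptive threshold here is a simple mean-plus-k-sigma stand-in, not the thresholding scheme from the paper:

```python
import numpy as np

def dual_ratio_change(img1, img2, k=3.0, eps=1e-6):
    """Toy dual-ratio change detector: a pixel is flagged when either
    of the two complementary ratio images deviates from its own mean
    by more than k standard deviations."""
    a = img1.astype(float) + eps
    b = img2.astype(float) + eps
    changes = np.zeros(a.shape, dtype=bool)
    for r in (a / b, b / a):             # the two complementary ratios
        thresh = r.mean() + k * r.std()  # adaptive, data-driven threshold
        changes |= r > thresh
    return changes
```

    Using both a/b and b/a makes the detector symmetric to brightening and darkening, which is the motivation the abstract gives for going beyond the single-ratio case.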

  2. A novel design for scintillator-based neutron and gamma imaging in inertial confinement fusion

    NASA Astrophysics Data System (ADS)

    Geppert-Kleinrath, Verena; Cutler, Theresa; Danly, Chris; Madden, Amanda; Merrill, Frank; Tybo, Josh; Volegov, Petr; Wilde, Carl

    2017-10-01

    The LANL Advanced Imaging team has been providing reliable 2D neutron imaging of the burning fusion fuel at NIF for years, revealing possible multi-dimensional asymmetries in the fuel shape and therefore calling for additional views. Adding a passive imaging system using image plate techniques along a new polar line of sight has recently demonstrated the merit of 3D neutron image reconstruction. Now, the team is in the process of designing a new active neutron imaging system for an additional equatorial view. The design will include a gamma imaging system as well, to allow for imaging of the carbon in the ablator of the NIF fuel capsules, constraining the burning fuel shape even further. The selection of ideal scintillator materials for a position-sensitive detector system is the key component of the new design. A comprehensive study of advanced scintillators has been carried out at the Los Alamos Neutron Science Center and the OMEGA Laser Facility in Rochester, NY. Neutron radiography using a fast-gated CCD camera system delivers measurements of resolution, light output, and noise characteristics. The measured performance parameters inform the novel design, and they support the choice of monolithic scintillators over their pixelated counterparts.

  3. A novel framework for command and control of networked sensor systems

    NASA Astrophysics Data System (ADS)

    Chen, Genshe; Tian, Zhi; Shen, Dan; Blasch, Erik; Pham, Khanh

    2007-04-01

    In this paper, we have proposed a highly innovative advanced command and control framework for sensor networks used for future Integrated Fire Control (IFC). The primary goal is to enable and enhance target detection, validation, and mitigation for future military operations through graphical game theory and advanced knowledge information fusion infrastructures. The problem is approached by representing distributed sensor and weapon systems as generic warfare resources which must be optimized in order to achieve the operational benefits afforded by enabling a system of systems. This paper addresses the importance of achieving a Network Centric Warfare (NCW) foundation of information superiority: shared, accurate, and timely situational awareness upon which advanced automated management aids for IFC can be built. The approach uses the Data Fusion Information Group (DFIG) fusion hierarchy of Level 0 through Level 4 to fuse the input data into assessments of the enemy target system threats in a battlespace to which military force is being applied. Compact graph models are employed across all levels of the fusion hierarchy to accomplish integrative data fusion and information flow control, as well as cross-layer sensor management. The functional block at each fusion level has a set of innovative algorithms that not only exploit the corresponding graph model in a computationally efficient manner, but also permit combined functional experiments across levels by virtue of the unifying graphical model approach.

  4. Facility Monitoring: A Qualitative Theory for Sensor Fusion

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando

    2001-01-01

    Data fusion and sensor management approaches have largely been implemented with centralized and hierarchical architectures. Numerical and statistical methods are the most common data fusion methods found in these systems. Given the proliferation and low cost of processing power, there is now an emphasis on designing distributed and decentralized systems. These systems use analytical/quantitative techniques or qualitative reasoning methods for data fusion. Based on other work by the author, a sensor may be treated as a highly autonomous (decentralized) unit. Each highly autonomous sensor (HAS) is capable of extracting qualitative behaviors from its data. For example, it detects spikes, disturbances, noise levels, off-limit excursions, step changes, drift, and other typical measured trends. In this context, this paper describes a distributed sensor fusion paradigm and theory in which each sensor in the system is a HAS. Given the rich qualitative information from each HAS, a paradigm and formal definitions are provided so that sensors and processes can reason and make decisions at the qualitative level. This approach to sensor fusion makes possible the implementation of intuitive and effective methods to monitor, diagnose, and compensate processes/systems and their sensors. The paradigm facilitates a balanced distribution of intelligence (code and/or hardware) to the sensor level, the process/system level, and a higher controller level. The primary application of interest is intelligent health management of rocket engine test stands.
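    A highly autonomous sensor's qualitative feature extraction might be sketched as below; the spike and drift detectors shown (a median/MAD outlier test and a linear-trend fit) are illustrative stand-ins, not the author's algorithms:

```python
import numpy as np

def qualitative_events(series, spike_k=5.0):
    """Extract simple qualitative behaviours from a sensor time series:
    spike locations (large deviations from the median, measured in units
    of the median absolute deviation) and overall drift (slope of the
    best-fit line). A toy stand-in for a HAS feature extractor."""
    x = np.asarray(series, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med)) + 1e-12   # avoid divide-by-zero
    spikes = np.flatnonzero(np.abs(x - med) > spike_k * mad)
    slope = np.polyfit(np.arange(len(x)), x, 1)[0]
    return {"spikes": spikes.tolist(), "drift": float(slope)}
```

    Downstream reasoning then operates on these symbolic events ("spike at t=10", "positive drift") rather than on raw samples, which is the qualitative-level decision making the abstract describes.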

  5. Dynamic Black-Level Correction and Artifact Flagging in the Kepler Data Pipeline

    NASA Technical Reports Server (NTRS)

    Clarke, B. D.; Kolodziejczak, J. J.; Caldwell, D. A.

    2013-01-01

    Instrument-induced artifacts in the raw Kepler pixel data include time-varying crosstalk from the fine guidance sensor (FGS) clock signals and manifestations of drifting moiré pattern as locally correlated nonstationary noise and rolling bands in the images, which find their way into the calibrated pixel time series and ultimately into the calibrated target flux time series. Using a combination of raw science pixel data, full-frame images, reverse-clocked pixel data, and ancillary temperature data, the Kepler pipeline models and removes the FGS crosstalk artifacts by dynamically adjusting the black-level correction. By examining the residuals of the model fits, the pipeline detects and flags spatial regions and time intervals of strong time-varying black level (rolling bands) on a per-row, per-cadence basis. These flags are made available to downstream users of the data, since the uncorrected rolling-band artifacts could complicate processing or lead to misinterpretation of instrument behavior as stellar. This model fitting and artifact flagging is performed within the new stand-alone pipeline module called Dynablack. We discuss the implementation of Dynablack in the Kepler data pipeline and present results on the improvement in calibrated pixels and the expected improvement in cotrending performance from including FGS corrections in the calibration. We also discuss the effectiveness of the rolling-band flagging for downstream users and illustrate with some affected light curves.

  6. Surface Density of the Hendra G Protein Modulates Hendra F Protein-Promoted Membrane Fusion: Role for Hendra G Protein Trafficking and Degradation

    PubMed Central

    Whitman, Shannon D.; Dutch, Rebecca Ellis

    2007-01-01

    Hendra virus, like most paramyxoviruses, requires both a fusion (F) and attachment (G) protein for promotion of cell-cell fusion. Recent studies determined that Hendra F is proteolytically processed by the cellular protease cathepsin L after endocytosis. This unique cathepsin L processing results in a small percentage of Hendra F on the cell surface. To determine how the surface densities of the two Hendra glycoproteins affect fusion promotion, we performed experiments that varied the levels of glycoproteins expressed in transfected cells. Using two different fusion assays, we found a marked increase in fusion when expression of the Hendra G protein was increased, with a 1:1 molar ratio of Hendra F:G on the cell surface resulting in optimal membrane fusion. Our results also showed that Hendra G protein levels are modulated by both more rapid protein turnover and slower protein trafficking than is seen for Hendra F. PMID:17328935

  7. Connecting Swath Satellite Data With Imagery in Mapping Applications

    NASA Astrophysics Data System (ADS)

    Thompson, C. K.; Hall, J. R.; Penteado, P. F.; Roberts, J. T.; Zhou, A. Y.

    2016-12-01

    Visualizations of gridded science data products (referred to as Level 3 or Level 4) typically provide a straightforward correlation between image pixels and the source science data. This direct relationship allows users to make initial inferences based on imagery values, facilitating additional operations on the underlying data values, such as data subsetting and analysis. However, that same pixel-to-data relationship for ungridded science data products (referred to as Level 2) is significantly more challenging. These products, also referred to as "swath products", are in orbital "instrument space" and raster visualization pixels do not directly correlate to science data values. Interpolation algorithms are often employed during the gridding or projection of a science dataset prior to image generation, introducing intermediary values that separate the image from the source data values. NASA's Global Imagery Browse Services (GIBS) is researching techniques for efficiently serving "image-ready" data allowing client-side dynamic visualization and analysis capabilities. This presentation will cover some GIBS prototyping work designed to maintain connectivity between Level 2 swath data and its corresponding raster visualizations. Specifically, we discuss the DAta-to-Image-SYstem (DAISY), an indexing approach for Level 2 swath data, and the mechanisms whereby a client may dynamically visualize the data in raster form.

  8. Multisensor data fusion for IED threat detection

    NASA Astrophysics Data System (ADS)

    Mees, Wim; Heremans, Roel

    2012-10-01

    In this paper we present the multi-sensor registration and fusion algorithms that were developed for a force-protection research project to detect threats against military patrol vehicles. The fusion is performed at the object level, using a hierarchical evidence aggregation approach. It first uses expert domain knowledge about the features that characterize the detected threats, implemented in the form of a fuzzy expert system. The next level fuses intra-sensor and inter-sensor information using an ordered weighted averaging operator. Object-level fusion between candidate threats that are detected asynchronously on a moving vehicle, by sensors with different imaging geometries, requires an accurate sensor-to-world coordinate transformation; this image registration is also discussed in the paper.
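    The ordered weighted averaging (OWA) operator used for intra- and inter-sensor aggregation can be sketched as follows; the scores and weights are illustrative, not values from the paper:

```python
import numpy as np

def owa(scores, weights):
    """Ordered weighted averaging: the weights attach to the *rank
    positions* of the sorted inputs, not to particular sensors."""
    scores = np.sort(np.asarray(scores, dtype=float))[::-1]  # descending
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0), "OWA weights must sum to 1"
    return float(np.dot(scores, weights))

# "At least one confident detector" behaviour: emphasize the top score.
fused = owa([0.2, 0.9, 0.4], [0.6, 0.3, 0.1])
```

    Shifting the weight mass toward the first position makes the operator behave more like a max (optimistic fusion); shifting it toward the last position makes it behave more like a min (conservative fusion).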

  9. Rule-driven defect detection in CT images of hardwood logs

    Treesearch

    Erol Sarigul; A. Lynn Abbott; Daniel L. Schmoldt

    2000-01-01

    This paper deals with automated detection and identification of internal defects in hardwood logs using computed tomography (CT) images. We have developed a system that employs artificial neural networks to perform tentative classification of logs on a pixel-by-pixel basis. This approach achieves a high level of classification accuracy for several hardwood species (...

  10. Pixel-based meshfree modelling of skeletal muscles.

    PubMed

    Chen, Jiun-Shyan; Basava, Ramya Rao; Zhang, Yantao; Csapo, Robert; Malis, Vadim; Sinha, Usha; Hodgson, John; Sinha, Shantanu

    2016-01-01

    This paper introduces the meshfree Reproducing Kernel Particle Method (RKPM) for 3D image-based modeling of skeletal muscles. This approach allows for the construction of a simulation model based on pixel data obtained from medical images. The material properties and muscle fiber direction obtained from Diffusion Tensor Imaging (DTI) are input at each pixel point. The reproducing kernel (RK) approximation allows a representation of material heterogeneity with smooth transitions. A multiphase, multichannel level-set based segmentation framework is adopted for individual muscle segmentation using Magnetic Resonance Images (MRI) and DTI. The application of the proposed methods to modeling the human lower leg is demonstrated.

  11. Optimum viewing distance for target acquisition

    NASA Astrophysics Data System (ADS)

    Holst, Gerald C.

    2015-05-01

    Human visual system (HVS) "resolution" (a.k.a. visual acuity) varies with illumination level, target characteristics, and target contrast. For signage, computer displays, cell phones, and TVs, a viewing distance and display size are selected. Then the number of display pixels is chosen such that each pixel subtends 1 arcmin. Resolution of low-contrast targets is quite different. It is best described by Barten's contrast sensitivity function. Target acquisition models predict maximum range when the display pixel subtends 3.3 arcmin. The optimum viewing distance is nearly independent of magnification. Noise increases the optimum viewing distance.
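    The arithmetic behind these subtense rules is straightforward; the sketch below computes the viewing distance at which one display pixel subtends a chosen angle (the pixel pitch and angle values are illustrative):

```python
import math

def viewing_distance(pixel_pitch_mm, subtense_arcmin):
    """Distance at which one display pixel of the given pitch subtends
    the given angle (small-angle geometry via the tangent)."""
    theta = math.radians(subtense_arcmin / 60.0)  # arcmin -> radians
    return pixel_pitch_mm / math.tan(theta)

# A 0.25 mm pixel viewed so that it subtends one minute of arc:
d_mm = viewing_distance(0.25, 1.0)  # roughly 860 mm
```

    Halving the target subtense doubles the required viewing distance, which is why the design subtense, not the display size alone, fixes where the observer should sit.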

  12. A kind of color image segmentation algorithm based on super-pixel and PCNN

    NASA Astrophysics Data System (ADS)

    Xu, GuangZhu; Wang, YaWen; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun

    2018-04-01

    Image segmentation is a very important step in low-level visual computing. Although image segmentation has been studied for many years, many problems remain. The Pulse Coupled Neural Network (PCNN) has a biological background, and when applied to image segmentation it can be viewed as a region-based method; however, due to the dynamic properties of the PCNN, many unconnected neurons pulse at the same time, so it is necessary to identify the different regions for further processing. The existing region-growing PCNN segmentation algorithm was designed for grayscale images and cannot be used directly for color image segmentation. In addition, super-pixels better preserve image edges and, at the same time, reduce the influence of individual pixel differences on the segmentation. Therefore, this paper improves the original region-growing PCNN algorithm on the basis of super-pixels. First, the color super-pixel image is transformed into a grayscale super-pixel image, which is used to seek seeds among the neurons that have not yet fired. Whether to stop growing is then determined by comparing the average of each color channel over all pixels in the corresponding regions of the color super-pixel image. Experimental results show that the proposed color image segmentation algorithm is fast, effective, and reasonably accurate.
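    The per-channel averaging over super-pixels that drives both the grayscale conversion and the stop-growing test might be sketched as below; the function name and data layout are assumptions, not the paper's code:

```python
import numpy as np

def superpixel_channel_means(image, labels):
    """Per-super-pixel mean of each colour channel.

    image  : (H, W, C) colour image
    labels : (H, W) integer super-pixel labels, 0..n-1
    Returns an (n, C) array of channel means, usable both to build a
    grayscale super-pixel image and to compare regions during growing.
    """
    n = labels.max() + 1
    means = np.zeros((n, image.shape[2]))
    for s in range(n):
        means[s] = image[labels == s].mean(axis=0)
    return means
```

    Region growing can then compare two regions by the distance between their mean-colour rows instead of touching individual pixels.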

  13. Improving Photometry and Stellar Signal Preservation with Pixel-Level Systematic Error Correction

    NASA Technical Reports Server (NTRS)

    Kolodzijczak, Jeffrey J.; Smith, Jeffrey C.; Jenkins, Jon M.

    2013-01-01

    The Kepler Mission has demonstrated that excellent stellar photometric performance can be achieved using apertures constructed from optimally selected CCD pixels. The clever methods used to correct for systematic errors, while very successful, still have some limitations in their ability to extract long-term trends in stellar flux. They also leave poorly correlated bias sources, such as drifting moiré pattern, uncorrected. We will illustrate several approaches where applying systematic error correction algorithms to the pixel time series, rather than the co-added raw flux time series, provides significant advantages. Examples include spatially localized determination of time-varying moiré pattern biases, greater sensitivity to radiation-induced sudden pixel sensitivity dropouts (SPSDs), improved precision of co-trending basis vectors (CBVs), and a means of distinguishing stellar variability from co-trending terms even when they are correlated. For the last item, the approach enables physical interpretation of appropriately scaled coefficients, derived in the fit of pixel time series to the CBVs, as linear combinations of various spatial derivatives of the pixel response function (PRF). We demonstrate that the residuals of a fit of the so-derived pixel coefficients to various PRF-related components can be deterministically interpreted in terms of physically meaningful quantities, such as the component of the stellar flux time series that is correlated with the CBVs, as well as relative pixel gain, proper motion, and parallax. The approach also enables us to parameterize and assess the limiting factors in the uncertainties of these quantities.
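    Fitting pixel time series to co-trending basis vectors is, at its core, a linear least-squares problem; the sketch below works under that assumption and is not the Kepler pipeline's actual implementation:

```python
import numpy as np

def fit_to_cbvs(pixel_series, cbvs):
    """Least-squares fit of pixel time series to co-trending basis
    vectors (CBVs); returns the per-pixel coefficients and the residual
    series left after removing the fitted systematic trends.

    pixel_series : (T, n) array, one column per pixel
    cbvs         : (T, k) array, one column per basis vector
    """
    coeffs, *_ = np.linalg.lstsq(cbvs, pixel_series, rcond=None)
    residuals = pixel_series - cbvs @ coeffs
    return coeffs, residuals
```

    The per-pixel coefficient maps are what the abstract proposes to interpret physically, as combinations of spatial derivatives of the pixel response function.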

  14. Comparison of posterolateral lumbar fusion rates of Grafton Putty and OP-1 Putty in an athymic rat model.

    PubMed

    Bomback, David A; Grauer, Jonathan N; Lugo, Roberto; Troiano, Nancy; Patel, Tushar Ch; Friedlaender, Gary E

    2004-08-01

    Posterolateral lumbar spine fusions in athymic rats. To compare spine fusion rates of two different osteoinductive products. Many osteoinductive bone graft alternatives are available. Grafton (a demineralized bone matrix [DBM]) and Osteogenic Protein-1 (OP-1, an individual recombinant bone morphogenetic protein) are two such alternatives. The relative efficacy of products from these two classes has not been previously studied. The athymic rat spine fusion model has been validated and demonstrated useful to minimize inflammatory responses to xenogeneic or differentially expressed proteins such as those presented by DBMs of human etiology. Single-level intertransverse process fusions were performed in 60 athymic nude rats with 2 cc/kg of Grafton or OP-1 Putty. Half of each study group was killed at 3 weeks and half at 6 weeks. Fusion masses were assessed by radiography, manual palpation, and histology. At 3 weeks, manual palpation revealed a 13% fusion rate with Grafton and a 100% fusion rate with OP-1 (P = 0.0001). At 6 weeks, manual palpation revealed a 39% fusion rate with Grafton and a 100% fusion rate with OP-1 (P = 0.0007). Similar fusion rates were found by histology at 3 and 6 weeks. Of note, one or two adjacent levels were fused in all of the OP-1 animals and none of the Grafton animals. Significant differences between the ability of Grafton and OP-1 to induce bone formation in an athymic rat posterolateral lumbar spine fusion model were found.

  15. Effect of Interbody Fusion on the Remaining Discs of the Lumbar Spine in Subjects with Disc Degeneration.

    PubMed

    Ryu, Robert; Techy, Fernando; Varadarajan, Ravikumar; Amirouche, Farid

    2016-02-01

    To study the effects (stress loads) of lumbar fusion on the remaining segments (adjacent or not) of the lumbar spine in the setting of degenerated adjacent discs. A lumbar spine finite element model was built and validated. The full model of the lumbar spine was a parametric finite element model of segments L1-L5. Numerous hypothetical combinations of one-level lumbar spine fusion and one-level disc degeneration were created. These models were subjected to 10 Nm flexion and extension moments, and the stresses on the endplates, and consequently on the intervertebral lumbar discs, were measured. These values were compared to the stresses on healthy lumbar spine discs under the same load and fusion scenarios. Increased stress at the endplates was observed only in the setting of L4-5 fusion and L3-4 disc degeneration (8% stress elevation at L2-3 in flexion or extension, and 25% elevation at L3-4 in flexion only). All other combinations showed less endplate stress than the control model. For fusion at L3-4 and degeneration at L4-5, the stresses in the endplates at the adjacent level inferior to the fused disc decreased for both loading modes and both disc height reductions. Stresses in flexion decreased after fusion by 29.5% and 25.8% for degeneration I and II, respectively; results for extension were similar. For fusion at L2-3 and degeneration at L4-5, stresses in the endplates decreased more markedly at the degenerated (30%) than at the fused level (14%) in the presence of 25% disc height reduction and 10 Nm flexion, whereas in extension stresses decreased more at the fused (24.3%) than at the degenerated level (5.86%). For fusion at L3-4 and degeneration at L2-3, there were no increases in endplate stress in any scenario. For fusion at L4-5 and degeneration at L3-4, progression of degeneration from I to II had a significant effect only in flexion: a dramatic increase in stress was noted in the endplates of the degenerated disc (L3-4) in flexion for degeneration II.
    Stresses are greater in flexion at the endplates of L3-4, and in flexion and extension at L2-3, in the presence of L3-4 disc disease and L4-5 fusion than in the control group. In all other combinations of fusion and disc disease, endplate stress was less at all levels tested than in the control model. © 2016 Chinese Orthopaedic Association and John Wiley & Sons Australia, Ltd.

  16. Multimodal biometric system using rank-level fusion approach.

    PubMed

    Monwar, Md Maruf; Gavrilova, Marina L

    2009-08-01

    In many real-world applications, unimodal biometric systems often face significant limitations due to sensitivity to noise, intraclass variability, data quality, nonuniversality, and other factors. Attempting to improve the performance of individual matchers in such situations may not prove to be highly effective. Multibiometric systems seek to alleviate some of these problems by providing multiple pieces of evidence of the same identity. These systems help achieve an increase in performance that may not be possible using a single biometric indicator. This paper presents an effective fusion scheme that combines information presented by multiple domain experts based on the rank-level fusion integration method. The developed multimodal biometric system possesses a number of unique qualities: it utilizes principal component analysis and Fisher's linear discriminant methods for identity authentication by the individual matchers (face, ear, and signature), and a novel rank-level fusion method to consolidate the results obtained from the different biometric matchers. The ranks of the individual matchers are combined using the highest rank, Borda count, and logistic regression approaches. The results indicate that fusion of individual modalities can improve the overall performance of the biometric system, even in the presence of low-quality data. Insights on multibiometric design using rank-level fusion and its performance on a variety of biometric databases are discussed in the concluding section.
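    Of the three rank-combination rules, the Borda count is the simplest to illustrate; below is a minimal sketch with hypothetical matcher outputs, not the paper's implementation:

```python
def borda_fusion(rankings):
    """Combine ranked candidate lists from several matchers by Borda count.

    rankings : list of lists, each ordered best-first over the same
               candidate identities
    Returns the candidates ordered by total Borda score (higher = better);
    position 0 earns n-1 points, the last position earns 0.
    """
    n = len(rankings[0])
    scores = {}
    for ranking in rankings:
        for position, candidate in enumerate(ranking):
            scores[candidate] = scores.get(candidate, 0) + (n - 1 - position)
    return sorted(scores, key=scores.get, reverse=True)

# Face, ear, and signature matchers each rank three enrolled identities:
consensus = borda_fusion([["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]])
```

    Unlike the highest-rank rule, the Borda count rewards consistent mid-list agreement across matchers, which is what makes it robust to one noisy modality.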

  17. FT-Raman and NIR spectroscopy data fusion strategy for multivariate qualitative analysis of food fraud.

    PubMed

    Márquez, Cristina; López, M Isabel; Ruisánchez, Itziar; Callao, M Pilar

    2016-12-01

    Two data fusion strategies (high- and mid-level) combined with a multivariate classification approach (Soft Independent Modelling of Class Analogy, SIMCA) have been applied to take advantage of the synergistic effect of the information obtained from two spectroscopic techniques: FT-Raman and NIR. Mid-level data fusion consists of merging some of the previously selected variables from the spectra obtained from each spectroscopic technique and then applying the classification technique. High-level data fusion combines the SIMCA classification results obtained individually from each spectroscopic technique. Of the possible ways to make the necessary combinations, we decided to use fuzzy aggregation connective operators. As a case study, we considered the possible adulteration of hazelnut paste with almond. Using the two-class SIMCA approach, class 1 consisted of unadulterated hazelnut samples and class 2 of samples adulterated with almond. The models' performance was also studied with samples adulterated with chickpea. The results show that data fusion is an effective strategy, since the performance parameters are better than those of the individual techniques: sensitivity and specificity values between 75% and 100% for the individual techniques, and between 96-100% and 88-100% for the mid- and high-level data fusion strategies, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
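    Mid-level fusion reduces to concatenating the selected variables from each spectral block before classification; below is a minimal sketch in which the variable-selection step is replaced by fixed, illustrative column indices (the function name and shapes are assumptions, not the authors' code):

```python
import numpy as np

def mid_level_fuse(spectra_raman, spectra_nir, keep_raman, keep_nir):
    """Mid-level fusion: concatenate the selected variables from each
    spectroscopic block into one matrix for the downstream classifier."""
    return np.hstack([spectra_raman[:, keep_raman], spectra_nir[:, keep_nir]])

# Five samples with toy spectra; keep two Raman and three NIR variables.
raman = np.arange(500.0).reshape(5, 100)   # 100 Raman variables
nir = np.arange(1000.0).reshape(5, 200)    # 200 NIR variables
fused = mid_level_fuse(raman, nir, [10, 40], [5, 50, 150])
```

    The fused matrix (here 5 samples x 5 variables) is what a classifier such as SIMCA would then be trained on; high-level fusion instead combines each technique's class decisions afterwards.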

  18. Local bone graft harvesting and volumes in posterolateral lumbar fusion: a technical report.

    PubMed

    Carragee, Eugene J; Comer, Garet C; Smith, Micah W

    2011-06-01

    In lumbar surgery, local bone graft is often harvested and used in posterolateral fusion procedures. The volume of local bone graft available for posterolateral fusion has not been determined in North American patients. Some authors have described this volume as minimal, but others have suggested it is sufficient to be used reliably as a stand-alone bone graft substitute for single-level fusion. To describe the technique used and determine the volume of local bone graft available in a cohort of patients undergoing single-level primary posterolateral fusion by the authors' harvesting technique. Technical description and cohort report. Consecutive patients undergoing lumbar posterolateral fusion with or without instrumentation for degenerative processes were studied. Local bone graft volume. Local bone graft was harvested by a standard method in each patient and the volume measured by a standard procedure. Twenty-five patients were studied, and of these 11 (44%) had undergone a previous decompression. The mean volume of local bone graft harvested was 25 cc (range, 12-36 cc). Local bone graft was augmented by iliac crest bone in six of 25 patients (24%) when the posterolateral fusion bed was not well packed with local bone alone. There was a trend toward greater local bone graft volumes in men and in patients without previous decompression. Large volumes of local bone can be harvested during posterolateral lumbar fusion surgery. Even in patients with previous decompression, the volume harvested is similar to that reported for harvest from the posterior iliac crest for single-level fusion. Copyright © 2011 Elsevier Inc. All rights reserved.

  19. Welcome Back: Responses of Female Bonobos (Pan paniscus) to Fusions.

    PubMed

    Moscovice, Liza R; Deschner, Tobias; Hohmann, Gottfried

    2015-01-01

    In species with a high degree of fission-fusion social dynamics, fusions may trigger social conflict and thus provide an opportunity to identify sources of social tension and mechanisms related to its alleviation. We characterized behavioral and endocrine responses of captive female bonobos (Pan paniscus) to fusions within a zoo facility designed to simulate naturalistic fission-fusion social dynamics. We compared urinary cortisol levels and frequencies of aggression, grooming and socio-sexual interactions between female bonobos while in stable sub-groups and when one "joiner" was reunited with the "residents" of another sub-group. We hypothesized that fusions would trigger increases in aggression and cortisol levels among reunited joiners and resident females. We further predicted that females who face more uncertainty in their social interactions following fusions may use grooming and/or socio-sexual behavior to reduce social tension and aggression. The only aggression on reunion days occurred between reunited females, but frequencies of aggression remained low across non-reunion and reunion days, and there was no effect of fusions on cortisol levels. Fusions did not influence patterns of grooming, but there were increases in socio-sexual solicitations and socio-sexual interactions between joiners and resident females. Joiners who had been separated from residents for longer received the most solicitations, but were also more selective in their acceptance of solicitations and preferred to have socio-sexual interactions with higher-ranking residents. Our results suggest that socio-sexual interactions play a role in reintegrating female bonobos into social groups following fusions. In addition, females who receive a high number of solicitations are able to gain more control over their socio-sexual interactions and may use socio-sexual interactions for other purposes, such as to enhance their social standing.

  20. Welcome Back: Responses of Female Bonobos (Pan paniscus) to Fusions

    PubMed Central

    Moscovice, Liza R.; Deschner, Tobias; Hohmann, Gottfried

    2015-01-01

    In species with a high degree of fission-fusion social dynamics, fusions may trigger social conflict and thus provide an opportunity to identify sources of social tension and mechanisms related to its alleviation. We characterized behavioral and endocrine responses of captive female bonobos (Pan paniscus) to fusions within a zoo facility designed to simulate naturalistic fission-fusion social dynamics. We compared urinary cortisol levels and frequencies of aggression, grooming and socio-sexual interactions between female bonobos while in stable sub-groups and when one “joiner” was reunited with the “residents” of another sub-group. We hypothesized that fusions would trigger increases in aggression and cortisol levels among reunited joiners and resident females. We further predicted that females who face more uncertainty in their social interactions following fusions may use grooming and/or socio-sexual behavior to reduce social tension and aggression. The only aggression on reunion days occurred between reunited females, but frequencies of aggression remained low across non-reunion and reunion days, and there was no effect of fusions on cortisol levels. Fusions did not influence patterns of grooming, but there were increases in socio-sexual solicitations and socio-sexual interactions between joiners and resident females. Joiners who had been separated from residents for longer received the most solicitations, but were also more selective in their acceptance of solicitations and preferred to have socio-sexual interactions with higher-ranking residents. Our results suggest that socio-sexual interactions play a role in reintegrating female bonobos into social groups following fusions. In addition, females who receive a high number of solicitations are able to gain more control over their socio-sexual interactions and may use socio-sexual interactions for other purposes, such as to enhance their social standing. PMID:25996476

  1. Membranes linked by trans-SNARE complexes require lipids prone to non-bilayer structure for progression to fusion.

    PubMed

    Zick, Michael; Stroupe, Christopher; Orr, Amy; Douville, Deborah; Wickner, William T

    2014-01-01

    Like other intracellular fusion events, the homotypic fusion of yeast vacuoles requires a Rab GTPase, a large Rab effector complex, SNARE proteins which can form a 4-helical bundle, and the SNARE disassembly chaperones Sec17p and Sec18p. In addition to these proteins, specific vacuole lipids are required for efficient fusion in vivo and with the purified organelle. Reconstitution of vacuole fusion with all purified components reveals that high SNARE levels can mask the requirement for a complex mixture of vacuole lipids. At lower, more physiological SNARE levels, neutral lipids with small headgroups that tend to form non-bilayer structures (phosphatidylethanolamine, diacylglycerol, and ergosterol) are essential. Membranes without these three lipids can dock and complete trans-SNARE pairing but cannot rearrange their lipids for fusion. DOI: http://dx.doi.org/10.7554/eLife.01879.001.

  2. Biomechanical comparison of a two-level Maverick disc replacement with a hybrid one-level disc replacement and one-level anterior lumbar interbody fusion.

    PubMed

    Erkan, Serkan; Rivera, Yamil; Wu, Chunhui; Mehbod, Amir A; Transfeldt, Ensor E

    2009-10-01

    Multilevel lumbar disc disease (MLDD) is a common finding in many patients. Surgical solutions for MLDD include fusion or disc replacement. The hybrid model, combining fusion and disc replacement, is a potential alternative for patients who require surgical intervention at both L5-S1 and L4-L5. The indications for this hybrid model could be posterior element insufficiency, severe facet pathology, calcified ligamentum flavum, and subarticular disease causing spinal stenosis at the L5-S1 level, or previous fusion surgery at L5-S1 and new symptomatic pathology at L4-L5. Biomechanical data of the hybrid model with the Maverick disc and anterior fusion are not available in the literature. To compare the biomechanical properties of a two-level Maverick disc replacement at L4-L5 and L5-S1 with a hybrid model consisting of an L4-L5 Maverick disc replacement and an L5-S1 anterior lumbar interbody fusion, using a multidirectional flexibility test. An in vitro human cadaveric biomechanical study. Six fresh human cadaveric lumbar specimens (L4-S1) were subjected to unconstrained load in axial torsion (AT), lateral bending (LB), flexion (F), extension (E), and flexion-extension (FE) using a multidirectional flexibility test. Four surgical treatments were tested in sequential order: intact, one-level Maverick at L5-S1, two-level Maverick between L4 and S1, and the hybrid model (anterior fusion at L5-S1 and Maverick at L4-L5). The range of motion of each treatment was calculated. The Maverick disc replacement slightly reduced intact motion in AT and LB at both levels. The total FE motion was similar to the intact motion. However, E motion was significantly increased (approximately 50% higher) and F motion significantly decreased (30%-50% lower). The anterior fusion using a cage and anterior plate significantly reduced spinal motion compared with the intact condition (p<.05). 
No significant differences were found between two-level Maverick disc prosthesis and the hybrid model in terms of all motion types at L4-L5 level (p>.05). The Maverick disc preserved total motion but altered the motion pattern of the intact condition. This result is similar to unconstrained devices such as Charité. The motion at L4-L5 of the hybrid model is similar to that of two-level Maverick disc replacement. The fusion procedure using an anterior plate significantly reduced intact motion. Clinical studies are recommended to validate the efficacy of the hybrid model.

  3. Selection of Fusion Levels Using the Fulcrum Bending Radiograph for the Management of Adolescent Idiopathic Scoliosis Patients with Alternate Level Pedicle Screw Strategy: Clinical Decision-making and Outcomes.

    PubMed

    Samartzis, Dino; Leung, Yee; Shigematsu, Hideki; Natarajan, Deepa; Stokes, Oliver; Mak, Kin-Cheung; Yao, Guanfeng; Luk, Keith D K; Cheung, Kenneth M C

    2015-01-01

    Selecting fusion levels based on the Luk et al criteria for operative management of thoracic adolescent idiopathic scoliosis (AIS) with hook and hybrid systems yields acceptable curve correction and balance parameters; however, it is unknown whether utilizing a purely pedicle screw strategy is effective. Utilizing the fulcrum bending radiograph (FBR) to assess curve flexibility to select fusion levels, the following study assessed the efficacy of pedicle screw fixation with an alternate level screw strategy (ALSS) for thoracic AIS. A retrospective study with prospective radiographic data collection/analyses (preoperative, postoperative 1-week and minimum 2-year follow-up) of 28 operative thoracic AIS patients undergoing ALSS was performed. Standing coronal/sagittal and FBR Cobb angles, FBR flexibility, fulcrum bending correction index (FBCI), truncal shift, radiographic shoulder height (RSH), and list were assessed on x-rays. Fusion level selection was based on the Luk et al criteria and compared to conventional techniques. In the primary curve, the mean preoperative and postoperative 1-week and last follow-up standing coronal Cobb angles were 59.9, 17.2 and 20.0 degrees, respectively. Eighteen patients (64.3%) had distal levels saved (mean: 1.6 levels) in comparison to conventional techniques. Mean immediate and last follow-up FBCIs were 122.6% and 115.0%, respectively. Sagittal alignment did not statistically differ between any assessment intervals (p>0.05). A decrease in truncal shift was noted from preoperative to last follow-up (p = 0.003). No statistically significant difference from preoperative to last follow-up was noted in RSH and list (p>0.05). No "add-on" of other vertebrae or decompensation was noted, and all patients achieved fusion. This is the first report to note that using the FBR for decision-making in selecting fusion levels in thoracic AIS patients undergoing management with pedicle screw constructs (e.g., ALSS) is a cost-effective strategy that can achieve clinically relevant deformity correction that is maintained, without compromising fusion levels.

  4. Influence of Alendronate and Endplate Degeneration to Single Level Posterior Lumbar Spinal Interbody Fusion

    PubMed Central

    Rhee, Wootack; Ha, Seongil; Lim, Jae Hyeon; Jang, Il Tae

    2014-01-01

    Objective Using alendronate after spinal fusion is a controversial issue due to the inhibition of osteoclast-mediated bone resorption. In addition, there is an increasing number of reports that endplate degeneration influences lumbar spinal fusion. The objective of this retrospective controlled study was to evaluate, through radiographic assessment, how endplate degeneration and bisphosphonate medication influence spinal fusion. Methods In this study, 44 patients who underwent single-level posterior lumbar interbody fusion (PLIF) using a cage were examined from April 2007 to March 2009. All patients had been diagnosed with osteoporosis and were candidates for alendronate medication. Endplate degeneration was categorized by Modic changes. Solid fusion was defined as bridging bone between the vertebral bodies, either within or external to the cage, on plain X-ray, with less than 5° of angular difference on dynamic X-ray. Results In the alendronate group, fusion was achieved in 66.7% compared to 73.9% in the control group (no medication). Alendronate did not influence the fusion rate of PLIF. However, there was a statistically significant difference in fusion rate between the endplate degeneration group and the group without endplate degeneration: 52.4% in the endplate degeneration group compared to 91.3% in the group without endplate degeneration. Endplate degeneration suppresses the fusion process of PLIF. Conclusion Alendronate does not influence the fusion process in osteoporotic patients. Endplate degeneration decreases the fusion rate. PMID:25620981

  5. SVM Pixel Classification on Colour Image Segmentation

    NASA Astrophysics Data System (ADS)

    Barui, Subhrajit; Latha, S.; Samiappan, Dhanalakshmi; Muthu, P.

    2018-04-01

    The aim of image segmentation is to simplify the representation of an image by clustering pixels into something meaningful to analyze. Segmentation is typically used to locate boundaries and curves in an image; more precisely, to label every pixel so that each pixel receives an independent identity. SVM pixel classification for colour image segmentation is the topic of this paper. It has useful applications in concept-based image retrieval, machine vision, medical imaging and object detection. The process is accomplished step by step. First, the colour and texture features used as input to the SVM classifier are extracted via a local spatial similarity measure model and a steerable (Gabor) filter. The classifier is then trained using FCM (Fuzzy C-Means) clustering. The pixel-level information of the image and the output of the SVM classifier are combined to form the final segmented image. The method produces a well-developed segmented image, with increased quality and faster processing compared with previously proposed segmentation methods. One recent application is the Light L16 camera.
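
    A minimal sketch of this kind of pixel-classification pipeline (not the authors' implementation): per-pixel colour values plus a crude local-variance texture cue feed an SVM. The toy image, window size, and feature choices are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC

def pixel_features(img, win=3):
    """Per-pixel features: RGB values plus local variance (a crude texture cue)."""
    h, w, _ = img.shape
    pad = win // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    feats = []
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + win, x:x + win]
            feats.append(np.concatenate([img[y, x], patch.var(axis=(0, 1))]))
    return np.array(feats)

# Synthetic two-region image: dark smooth half vs bright noisy half.
rng = np.random.default_rng(0)
img = np.zeros((16, 16, 3))
img[:, 8:] = 0.8 + 0.1 * rng.standard_normal((16, 8, 3))
labels = np.zeros((16, 16), dtype=int)
labels[:, 8:] = 1

X = pixel_features(img)
clf = SVC(kernel="rbf", gamma="scale").fit(X, labels.ravel())
pred = clf.predict(X).reshape(16, 16)   # near-perfect on this separable toy image
```

    In a real pipeline the texture cue would come from a filter bank (e.g. Gabor responses) rather than raw local variance, and the training labels from clustering rather than ground truth.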

  6. Estimation bias from using nonlinear Fourier plane correlators for sub-pixel image shift measurement and implications for the binary joint transform correlator

    NASA Astrophysics Data System (ADS)

    Grycewicz, Thomas J.; Florio, Christopher J.; Franz, Geoffrey A.; Robinson, Ross E.

    2007-09-01

    When using Fourier plane digital algorithms or an optical correlator to measure the correlation between digital images, interpolation by center-of-mass or quadratic estimation techniques can be used to estimate image displacement to the sub-pixel level. However, this can lead to a bias in the correlation measurement. This bias shifts the sub-pixel output measurement to be closer to the nearest pixel center than the actual location. The paper investigates the bias in the outputs of both digital and optical correlators, and proposes methods to minimize this effect. We use digital studies and optical implementations of the joint transform correlator to demonstrate optical registration with accuracies better than 0.1 pixels. We use both simulations of image shift and movies of a moving target as inputs. We demonstrate bias error for both center-of-mass and quadratic interpolation, and discuss the reasons that this bias is present. Finally, we suggest measures to reduce or eliminate the bias effects. We show that when sub-pixel bias is present, it can be eliminated by modifying the interpolation method. By removing the bias error, we improve registration accuracy by thirty percent.
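    The quadratic (three-point parabolic) interpolation discussed above can be sketched in a few lines. This is an illustrative digital reconstruction, not the paper's optical implementation; the Gaussian test signal and shift value are assumptions. Note how the estimate comes out slightly below the true shift, pulled toward the nearest pixel center, which is the bias the paper analyzes.

```python
import numpy as np

def parabolic_subpixel(c, k):
    """Quadratic 3-point interpolation around the integer correlation peak k."""
    y0, y1, y2 = c[k - 1], c[k], c[k + 1]
    return k + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)

def correlate_shift(a, b):
    """Estimate how far a is shifted relative to b via FFT cross-correlation."""
    c = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    c = np.fft.fftshift(c)
    k = int(np.argmax(c))
    return parabolic_subpixel(c, k) - len(a) // 2

# A band-limited test signal shifted by a known sub-pixel amount.
n = 128
x = np.arange(n)
true_shift = 0.3
sig = np.exp(-0.5 * ((x - n / 2) / 4.0) ** 2)
# Apply the shift exactly via the Fourier shift theorem.
shifted = np.fft.ifft(np.fft.fft(sig) *
                      np.exp(-2j * np.pi * np.fft.fftfreq(n) * true_shift)).real

est = correlate_shift(shifted, sig)   # close to 0.3, slightly biased toward 0
```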

  7. Fiber pixelated image database

    NASA Astrophysics Data System (ADS)

    Shinde, Anant; Perinchery, Sandeep Menon; Matham, Murukeshan Vadakke

    2016-08-01

    Imaging of physically inaccessible parts of the body, such as the colon, at micron-level resolution is highly important in diagnostic medical imaging. Though flexible endoscopes based on imaging fiber bundles are used for such diagnostic procedures, their inherent honeycomb-like structure creates fiber pixelation effects. This impedes the observer from perceiving the information in a captured image and hinders the direct use of image processing and machine intelligence techniques on the recorded signal. Significant efforts have been made by researchers in the recent past in the development and implementation of pixelation removal techniques. However, researchers have often used their own sets of images without making the source data available, which has limited their usage and adaptability. A database of pixelated images is a current requirement to meet the growing diagnostic needs in the healthcare arena. An innovative fiber pixelated image database is presented, which consists of pixelated images that are synthetically generated and experimentally acquired. The sample space encompasses test patterns of different scales, sizes, and shapes. It is envisaged that this proposed database will alleviate the current limitations associated with relevant research and development and will be of great help for researchers working on comb structure removal algorithms.
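
    A synthetic fiber-pixelation pattern of the kind such a database contains can be generated by sampling an image over discs centered on a hexagonal lattice of fiber cores; the pitch and core radius below are arbitrary illustrative values, not taken from the database.

```python
import numpy as np

def hex_centers(h, w, pitch):
    """Centers of a hexagonal lattice covering an h x w image."""
    centers = []
    dy = pitch * np.sqrt(3) / 2          # vertical spacing between lattice rows
    row, y = 0, 0.0
    while y < h:
        x = (pitch / 2) if row % 2 else 0.0   # offset every other row
        while x < w:
            centers.append((y, x))
            x += pitch
        y += dy
        row += 1
    return centers

def pixelate(img, pitch=6, core=2.5):
    """Replace each fiber-core disc with the mean intensity it samples;
    the gaps between cores stay dark, mimicking the honeycomb cladding."""
    out = np.zeros_like(img)
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    for cy, cx in hex_centers(*img.shape, pitch):
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= core ** 2
        if mask.any():
            out[mask] = img[mask].mean()
    return out

# A vertical gradient viewed through the simulated bundle.
img = np.tile(np.linspace(0, 1, 64), (64, 1)).T
pix = pixelate(img)
```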

  8. Automatic sub-pixel coastline extraction based on spectral mixture analysis using EO-1 Hyperion data

    NASA Astrophysics Data System (ADS)

    Hong, Zhonghua; Li, Xuesu; Han, Yanling; Zhang, Yun; Wang, Jing; Zhou, Ruyan; Hu, Kening

    2018-06-01

    Many megacities (such as Shanghai) are located in coastal areas; therefore, coastline monitoring is critical for urban security and urban development sustainability. A shoreline is defined as the intersection between coastal land and a water surface, and features seawater edge movements as tides rise and fall. Remote sensing techniques have increasingly been used for coastline extraction; however, traditional hard classification methods are performed only at the pixel level, and extracting sub-pixel accuracy using soft classification methods is both challenging and time consuming due to the complex features in coastal regions. This paper presents an automatic sub-pixel coastline extraction method (ASPCE) from hyperspectral satellite imagery that performs coastline extraction based on spectral mixture analysis and, thus, achieves higher accuracy. The ASPCE method consists of three main components: 1) a Water-Vegetation-Impervious-Soil (W-V-I-S) model is first presented to detect mixed W-V-I-S pixels and determine the endmember spectra in coastal regions; 2) the linear spectral unmixing technique based on Fully Constrained Least Squares (FCLS) is applied to the mixed W-V-I-S pixels to estimate seawater abundance; and 3) the spatial attraction model is used to extract the coastline. We tested this new method using EO-1 images from three coastal regions in China: the South China Sea, the East China Sea, and the Bohai Sea. The results showed that the method is accurate and robust. Root mean square error (RMSE) was utilized to evaluate the accuracy by calculating the distance differences between the extracted coastline and the digitized coastline. The classifier's performance was compared with that of the Multiple Endmember Spectral Mixture Analysis (MESMA), Mixture Tuned Matched Filtering (MTMF), Sequential Maximum Angle Convex Cone (SMACC), Constrained Energy Minimization (CEM), and one classical Normalized Difference Water Index (NDWI). 
The results from the three test sites indicated that the proposed ASPCE method extracted coastlines more efficiently than did the compared methods, and its extracted coastlines corresponded closely to the digitized coastline, with errors of 0.39, 0.40, and 0.35 pixels in the three test regions, showing that the ASPCE method achieves an accuracy below 12.0 m (0.40 pixels). Moreover, in the quantitative accuracy assessment for the three test sites, the ASPCE method showed the best performance in coastline extraction, achieving 0.35 pixels at the Bohai Sea test site. Therefore, the proposed ASPCE method can extract coastlines more accurately than hard classification methods or other spectral unmixing methods.
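
    The FCLS step at the core of such methods (abundances that are non-negative and sum to one) is often implemented with a standard trick: append a heavily weighted sum-to-one row to a non-negative least squares solve. A sketch follows; the endmember spectra and abundances are invented for illustration, not taken from the paper.

```python
import numpy as np
from scipy.optimize import nnls

def fcls(E, p, delta=1e3):
    """Fully constrained least squares unmixing: abundances a >= 0, sum(a) = 1.
    The sum-to-one constraint is enforced by a heavily weighted extra row."""
    bands, m = E.shape
    A = np.vstack([E, delta * np.ones((1, m))])
    b = np.concatenate([p, [delta]])
    a, _ = nnls(A, b)
    return a

# Hypothetical endmember spectra (columns): water, vegetation, soil over 4 bands.
E = np.array([[0.02, 0.05, 0.30],
              [0.01, 0.45, 0.35],
              [0.01, 0.30, 0.40],
              [0.00, 0.60, 0.45]])
truth = np.array([0.6, 0.1, 0.3])   # a pixel that is 60% seawater
pixel = E @ truth                   # noiseless linear mixture
a = fcls(E, pixel)                  # recovers the abundances; a[0] is seawater
```

    A sub-pixel coastline can then be located by interpolating where the seawater abundance crosses a threshold such as 0.5, which is the role of the spatial attraction model in the paper.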

  9. A hyperspectral image projector for hyperspectral imagers

    NASA Astrophysics Data System (ADS)

    Rice, Joseph P.; Brown, Steven W.; Neira, Jorge E.; Bousquet, Robert R.

    2007-04-01

    We have developed and demonstrated a Hyperspectral Image Projector (HIP) intended for system-level validation testing of hyperspectral imagers, including the instrument and any associated spectral unmixing algorithms. HIP, based on the same digital micromirror arrays used in commercial digital light processing (DLP) displays, is capable of projecting any combination of many different arbitrarily programmable basis spectra into each image pixel at up to video frame rates. We use a scheme whereby one micromirror array is used to produce light having the spectra of endmembers (i.e. vegetation, water, minerals, etc.), and a second micromirror array, optically in series with the first, projects any combination of these arbitrarily programmable spectra into the pixels of a 1024 x 768 element spatial image, thereby producing temporally-integrated images having spectrally mixed pixels. HIP goes beyond conventional DLP projectors in that each spatial pixel can have an arbitrary spectrum, not just arbitrary color. As such, the resulting spectral and spatial content of the projected image can simulate realistic scenes that a hyperspectral imager will measure during its use. Also, the spectral radiance of the projected scenes can be measured with a calibrated spectroradiometer, such that the spectral radiance projected into each pixel of the hyperspectral imager can be accurately known. Use of such projected scenes in a controlled laboratory setting would reduce the need for expensive field testing of instruments, allow better separation of environmental effects from instrument effects, and enable system-level performance testing and validation of hyperspectral imagers as used with analysis algorithms. For example, known mixtures of relevant endmember spectra could be projected into arbitrary spatial pixels in a hyperspectral imager, enabling tests of how well a full system, consisting of the instrument + calibration + analysis algorithm, performs in unmixing (i.e. de-convolving) the spectra in all pixels. We discuss here the performance of a visible prototype HIP. The technology is readily extendable to the ultraviolet and infrared spectral ranges, and the scenes can be static or dynamic.
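
    The spectral-mixing idea behind HIP, where each spatial pixel receives an abundance-weighted combination of programmable basis spectra, can be sketched numerically. The band grid and endmember shapes below are invented for illustration.

```python
import numpy as np

# Hypothetical endmember basis spectra over 31 bands (400-700 nm).
wl = np.linspace(400, 700, 31)
vegetation = np.exp(-0.5 * ((wl - 550) / 30) ** 2)   # green reflectance peak
water = np.clip(1 - (wl - 400) / 300, 0, 1) * 0.3    # blue-weighted, dim

# Abundance maps for a 32x32 scene: water on the left, vegetation on the right.
a_water = np.tile(np.linspace(1, 0, 32), (32, 1))
a_veg = 1 - a_water

# Each spatial pixel gets the abundance-weighted sum of basis spectra: the
# temporally-integrated, spectrally mixed image a two-mirror projector forms.
cube = a_water[..., None] * water + a_veg[..., None] * vegetation
```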

  10. Titanium vs. polyetheretherketone (PEEK) interbody fusion: Meta-analysis and review of the literature.

    PubMed

    Seaman, Scott; Kerezoudis, Panagiotis; Bydon, Mohamad; Torner, James C; Hitchon, Patrick W

    2017-10-01

    Spinal interbody fusion is a standard and accepted method for spinal fusion. Interbody fusion devices include titanium (Ti) and polyetheretherketone (PEEK) cages with distinct biomechanical properties. Titanium and PEEK cages have been evaluated in the cervical and lumbar spine, with conflicting results in bony fusion and subsidence. Using Preferred Reporting Items for Systematic reviews and Meta-analyses (PRISMA) guidelines, we reviewed the available literature evaluating Ti and PEEK cages to assess subsidence and fusion rates. Six studies were included in the analysis, 3 of which were class IV evidence, 2 were class III, and 1 was class II. A total of 410 patients (Ti-228, PEEK-182) and 587 levels (Ti-327, PEEK-260) were studied. Pooled mean age was 50.8 years in the Ti group, and 53.1 years in the PEEK group. Anterior cervical discectomy was performed in 4 studies (395 levels) and transforaminal interbody fusion in 2 studies (192 levels). No statistically significant difference was found between groups in fusion (OR 1.16, 95% C.I. 0.59-2.89, p=0.686, I²=49.7%), but there was a statistically significant difference in the rate of subsidence with titanium (OR 3.59, 95% C.I. 1.28-10.07, p=0.015, I²=56.9%) at last follow-up. Titanium and PEEK cages are associated with a similar rate of fusion, but there is an increased rate of subsidence with titanium cages. Future prospective randomized controlled trials are needed to further evaluate these cages using surgical and patient-reported outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.
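
    For readers unfamiliar with the pooling behind such odds ratios, a fixed-effect inverse-variance pooling of log odds ratios can be sketched as follows. The per-study counts are hypothetical, not the data of this review, and real meta-analyses typically also fit a random-effects model when I² is substantial.

```python
import numpy as np

def log_or(events_t, n_t, events_c, n_c):
    """Log odds ratio and its variance from a 2x2 table (no zero cells)."""
    a, b = events_t, n_t - events_t
    c, d = events_c, n_c - events_c
    lor = np.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d   # Woolf's variance estimate
    return lor, var

def pooled_or(tables):
    """Fixed-effect (inverse-variance weighted) pooled odds ratio."""
    lors, variances = zip(*(log_or(*t) for t in tables))
    w = 1 / np.array(variances)
    return np.exp(np.sum(w * np.array(lors)) / w.sum())

# Hypothetical per-study fusion counts: (fused_Ti, n_Ti, fused_PEEK, n_PEEK)
tables = [(40, 50, 38, 45), (60, 70, 55, 62), (30, 40, 33, 42)]
result = pooled_or(tables)
```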

  11. Development of a prototype infrared imaging bolometer for NSTX-U

    NASA Astrophysics Data System (ADS)

    van Eden, G. G.; Delgado-Aparicio, L. F.; Gray, T. K.; Jaworski, M. A.; Morgan, T. W.; Peterson, B. J.; Reinke, M. L.; Sano, R.; Mukai, K.; DIFFER/PPPL Collaboration; NIFS/PPPL Collaboration

    2015-11-01

    Measurements of the radiated power in fusion reactors are of high importance for studying detachment and the overall power balance. A prototype Infrared Video Bolometer (IRVB) is being developed for NSTX-U, complementing resistive bolometer and AXUV diode diagnostics. The IRVB has proven to be a powerful tool on LHD and JT-60U for its 2D imaging quality and reactor environment compatibility. For NSTX-U, a poloidal view of the lower center stack and lower divertor are envisaged for the 2016 run campaign. The IRVB concept images radiation from the plasma onto a 2.5 μm thick 9 × 7 cm^2 calibrated Pt foil and monitors its temperature evolution using an IR camera (SB focal plane, 2-12 μm, 128×128 pixels, 1.6 kHz). The power incident on the foil is calculated by solving the 2D+time heat diffusion equation. Benchtop characterization is presented, demonstrating a sensitivity of approximately 20 mK and a noise equivalent power density of 71.5 μW cm^-2 for 4×20 bolometer super-pixels and a 50 Hz time response. The hardware design, optimization of camera and detector settings, as well as first results of both synthetic and experimental origin are discussed.
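
    The foil power reconstruction described above amounts to inverting the 2D+time heat equation for the absorbed power density. A minimal finite-difference sketch follows, with approximate platinum properties, radiative losses neglected, and periodic boundaries for brevity; the grid and time step are assumptions, not the diagnostic's actual parameters.

```python
import numpy as np

# Approximate platinum properties and an assumed 2.5 um foil; illustrative only.
k_pt = 71.6              # W/(m K), thermal conductivity
rho_c = 21450.0 * 133.0  # J/(m^3 K), density times specific heat
t_f = 2.5e-6             # m, foil thickness

def incident_power_density(T_prev, T_next, dt, dx):
    """Invert the 2D+time heat equation for the absorbed power per unit area:
    P = rho*c*t_f * dT/dt - k*t_f * laplacian(T).
    Radiative losses are neglected; boundaries are treated as periodic."""
    T = 0.5 * (T_prev + T_next)           # temperature at the time midpoint
    dTdt = (T_next - T_prev) / dt
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx ** 2
    return rho_c * t_f * dTdt - k_pt * t_f * lap

# Sanity check: a uniform 20 mK rise over 20 ms has no spatial gradients,
# so P reduces to rho*c*t_f*dT/dt everywhere.
T0 = np.zeros((8, 8))
T1 = np.full((8, 8), 0.02)
P = incident_power_density(T0, T1, dt=0.02, dx=1e-3)
```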

  12. Prostate cancer detection: Fusion of cytological and textural features.

    PubMed

    Nguyen, Kien; Jain, Anil K; Sabata, Bikash

    2011-01-01

    A computer-assisted system for histological prostate cancer diagnosis can assist pathologists in two stages: (i) to locate cancer regions in a large digitized tissue biopsy, and (ii) to assign Gleason grades to the regions detected in stage 1. Most previous studies on this topic have primarily addressed the second stage by classifying the preselected tissue regions. In this paper, we address the first stage by presenting a cancer detection approach for the whole slide tissue image. We propose a novel method to extract a cytological feature, namely the presence of cancer nuclei (nuclei with prominent nucleoli) in the tissue, and apply this feature to detect the cancer regions. Additionally, conventional image texture features which have been widely used in the literature are also considered. The performance comparison among the proposed cytological-textural feature combination method, the texture-based method and the cytological feature-based method demonstrates the robustness of the extracted cytological feature. At a false positive rate of 6%, the proposed method is able to achieve a sensitivity of 78% on a dataset including six training images (each of which has approximately 4,000×7,000 pixels) and 11 whole-slide test images (each of which has approximately 5,000×23,000 pixels). All images are at 20X magnification.

  13. Prostate cancer detection: Fusion of cytological and textural features

    PubMed Central

    Nguyen, Kien; Jain, Anil K.; Sabata, Bikash

    2011-01-01

    A computer-assisted system for histological prostate cancer diagnosis can assist pathologists in two stages: (i) to locate cancer regions in a large digitized tissue biopsy, and (ii) to assign Gleason grades to the regions detected in stage 1. Most previous studies on this topic have primarily addressed the second stage by classifying the preselected tissue regions. In this paper, we address the first stage by presenting a cancer detection approach for the whole slide tissue image. We propose a novel method to extract a cytological feature, namely the presence of cancer nuclei (nuclei with prominent nucleoli) in the tissue, and apply this feature to detect the cancer regions. Additionally, conventional image texture features which have been widely used in the literature are also considered. The performance comparison among the proposed cytological-textural feature combination method, the texture-based method and the cytological feature-based method demonstrates the robustness of the extracted cytological feature. At a false positive rate of 6%, the proposed method is able to achieve a sensitivity of 78% on a dataset including six training images (each of which has approximately 4,000×7,000 pixels) and 11 whole-slide test images (each of which has approximately 5,000×23,000 pixels). All images are at 20X magnification. PMID:22811959

  14. Musculoskeletal imaging with a prototype photon-counting detector.

    PubMed

    Gruber, M; Homolka, P; Chmeissani, M; Uffmann, M; Pretterklieber, M; Kainberger, F

    2012-01-01

    To test a digital imaging X-ray device based on the direct capture of X-ray photons with pixel detectors coupled to photon-counting readout electronics. The chip consists of a matrix of 256 × 256 pixels with a pixel pitch of 55 μm. A monolithic image of 11.2 cm × 7 cm was obtained by a consecutive displacement approach. Images of embalmed anatomical specimens of eight human hands were obtained at four different dose levels (skin dose 2.4, 6, 12, 25 μGy) with the new detector, as well as with a flat-panel detector. With the photon-counting system, the overall rating scores for the evaluated anatomical regions were 5.23 at the lowest dose level, 6.32 at approximately 6 μGy, 6.70 at 12 μGy, and 6.99 at the highest dose level. The corresponding rating scores for the flat-panel detector were 3.84, 5.39, 6.64, and 7.34. When images obtained at the same dose were compared, the new system outperformed the conventional DR system at the two lowest dose levels. At the higher dose levels, there were no significant differences between the two systems. The photon-counting detector has great potential to obtain musculoskeletal images of excellent quality at very low dose levels.

  15. Effect of TheraCyte-encapsulated parathyroid cells on lumbar fusion in a rat model.

    PubMed

    Chen, Sung-Hsiung; Huang, Shun-Chen; Lui, Chun-Chung; Lin, Tzu-Ping; Chou, Fong-Fu; Ko, Jih-Yang

    2012-09-01

    Implantation of TheraCyte-encapsulated 4 × 10^6 live parathyroid cells can increase the bone mineral density of the spine of ovariectomized rats. There has been no published study examining the effect of such implantation on spinal fusion outcomes. The purpose of this study was to examine the effect of TheraCyte-encapsulated parathyroid cells on posterolateral lumbar fusions in a rat model. Forty Sprague-Dawley rats underwent single-level, intertransverse process spinal fusions using iliac crest autograft. The rats were randomly assigned to two groups: Group 1 rats received sham operations on their necks (control; N = 20); Group 2 rats were implanted with TheraCyte-encapsulated 4 × 10^6 live parathyroid cells into the subcutis of their necks (TheraCyte; N = 20). Six weeks after surgery the rats were killed. Fusion was assessed by inspection, manual palpation, radiography, and histology. Blood was drawn to measure the serum levels of calcium, phosphorus, and intact parathyroid hormone (iPTH). Based on manual palpation, the control group had a fusion rate of 33% (6/18) and the TheraCyte group had a fusion rate of 72% (13/18) (P = 0.044). Histology confirmed the manual palpation results. Serum iPTH levels were significantly higher in the TheraCyte group compared with the control group (P < 0.05); neither serum calcium nor phosphorus levels were significantly different between the two groups. This pilot animal study revealed that there were more fusions in rats that received TheraCyte-encapsulated 4 × 10^6 live parathyroid cells than in control rats, without significant change in serum calcium or phosphorus concentrations. As with any animal study, the results may not extrapolate to a higher species. Further studies are needed to determine if these effects are clinically significant.

  16. One and two level posterior lumbar interbody fusion (PLIF) using an expandable, stand-alone, interbody fusion device: a VariLift® case series

    PubMed Central

    Barrett-Tuck, Rebecca; Del Monaco, Diana

    2017-01-01

    Background Surgical interventions such as posterior lumbar interbody fusion (PLIF) with and without posterior instrumentation are often employed in patients with degenerative spinal conditions that fail to respond to conservative medical management. The VariLift® Interbody Fusion System was developed as a stand-alone solution to provide the benefits of an intervertebral fusion device without the requirement of supplemental pedicle screw fixation. Methods In this retrospective case series, 25 patients underwent PLIF with a stand-alone VariLift® expandable interbody fusion device without adjunctive pedicle screw fixation. There were 12 men and 13 women, with a mean age of 57.2 years (range, 33–83 years); single level in 18 patients, 2 levels in 7 patients. Back pain severity was reported as none, mild, moderate, severe and worst imaginable at baseline, 6 and 12 months. Preoperatively, 88% (22 of 25) of patients reported severe back pain. Results All patients experienced symptomatic improvement and, by 12 months postoperatively, 71% (15 of 21) of patients reported only mild residual pain. Overall, pain scores improved significantly from baseline to 12 months (P=0.0002). There were no revision surgeries, and fusion was achieved in 12 of 13 patients (92%) who returned for a 12-month radiographic follow-up. There were three cases of intractable postsurgical pain which required extended hospitalization or pain management, one wound infection and one case of surgical site dehiscence, both treated and resolved during inpatient hospitalization. Conclusions In this single-physician case series, the VariLift® device used in single or two-level PLIF provided effective symptom relief and produced a high fusion rate without the need for supplemental fixation. PMID:28435912

  17. One and two level posterior lumbar interbody fusion (PLIF) using an expandable, stand-alone, interbody fusion device: a VariLift® case series.

    PubMed

    Barrett-Tuck, Rebecca; Del Monaco, Diana; Block, Jon E

    2017-03-01

    Surgical interventions such as posterior lumbar interbody fusion (PLIF), with and without posterior instrumentation, are often employed in patients with degenerative spinal conditions that fail to respond to conservative medical management. The VariLift® Interbody Fusion System was developed as a stand-alone solution to provide the benefits of an intervertebral fusion device without the requirement of supplemental pedicle screw fixation. In this retrospective case series, 25 patients underwent PLIF with a stand-alone VariLift® expandable interbody fusion device without adjunctive pedicle screw fixation. There were 12 men and 13 women, with a mean age of 57.2 years (range, 33-83 years); fusion was performed at a single level in 18 patients and at 2 levels in 7 patients. Back pain severity was reported as none, mild, moderate, severe, or worst imaginable at baseline, 6 and 12 months. Preoperatively, 88% (22 of 25) of patients reported severe back pain. All patients experienced symptomatic improvement and, by 12 months postoperatively, 71% (15 of 21) of patients reported only mild residual pain. Overall, pain scores improved significantly from baseline to 12 months (P=0.0002). There were no revision surgeries, and fusion was achieved in 12 of 13 patients (92%) who returned for a 12-month radiographic follow-up. There were three cases of intractable postsurgical pain that required extended hospitalization or pain management, one wound infection, and one case of surgical site dehiscence; the latter two were treated and resolved during inpatient hospitalization. In this single-physician case series, the VariLift® device used in single- or two-level PLIF provided effective symptom relief and produced a high fusion rate without the need for supplemental fixation.

  18. Return to Golf After Lumbar Fusion

    PubMed Central

    Shifflett, Grant D.; Hellman, Michael D.; Louie, Philip K.; Mikhail, Christopher; Park, Kevin U.; Phillips, Frank M.

    2016-01-01

    Background: Spinal fusion surgery is being increasingly performed, yet few studies have focused on return to recreational sports after lumbar fusion and none have specifically analyzed return to golf. Hypothesis: Most golfers successfully return to sport after lumbar fusion surgery. Study Design: Case series. Level of Evidence: Level 4. Methods: All patients who underwent 1- or 2-level primary lumbar fusion surgery for degenerative pathologies performed by a single surgeon between January 2008 and October 2012 and had at least 1-year follow-up were included. Patients completed a specifically designed golf survey. Surveys were mailed, given during follow-up clinic, or answered during telephone contact. Results: A total of 353 patients met the inclusion and exclusion criteria, with 200 responses (57%) to the questionnaire producing 34 golfers. The average age of golfers was 57 years (range, 32-79 years). In 79% of golfers, preoperative back and/or leg pain significantly affected their ability to play golf. Within 1 year from surgery, 65% of patients returned to practice and 52% returned to course play. Only 29% of patients stated that continued back/leg pain limited their play. Twenty-five patients (77%) were able to play the same amount of golf or more than before fusion surgery. Of those providing handicaps, 12 (80%) reported the same or an improved handicap. Conclusion: More than 50% of golfers return to on-course play within 1 year of lumbar fusion surgery. The majority of golfers can return to preoperative levels in terms of performance (handicap) and frequency of play. Clinical Relevance: This investigation offers insight into when golfers return to sport after lumbar fusion surgery and provides surgeons with information to set realistic expectations postoperatively. PMID:27879299

  19. "They may be pixels, but they're MY pixels:" developing a metric of character attachment in role-playing video games.

    PubMed

    Lewis, Melissa L; Weber, René; Bowman, Nicholas David

    2008-08-01

    This paper proposes a new and reliable metric for measuring character attachment (CA), the connection felt by a video game player toward a video game character. Results of construct validity analyses indicate that the proposed CA scale has a significant relationship with self-esteem, addiction, game enjoyment, and time spent playing games; all of these relationships are predicted by theory. Additionally, CA levels for role-playing games differ significantly from CA levels of other character-driven games.

  20. Programmable architecture for pixel level processing tasks in lightweight strapdown IR seekers

    NASA Astrophysics Data System (ADS)

    Coates, James L.

    1993-06-01

    Typical processing tasks associated with missile IR seeker applications are described, and a straw man suite of algorithms is presented. A fully programmable multiprocessor architecture is realized on a multimedia video processor (MVP) developed by Texas Instruments. The MVP combines the elements of RISC, floating point, advanced DSPs, graphics processors, display and acquisition control, RAM, and external memory. Front end pixel level tasks typical of missile interceptor applications, operating on 256 x 256 sensor imagery, can be processed at frame rates exceeding 100 Hz in a single MVP chip.
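    The claimed frame rate implies a substantial sustained pixel-processing throughput. As a purely illustrative back-of-the-envelope check (not taken from the paper):

```python
# Hypothetical arithmetic check of the pixel throughput implied by the
# abstract: 256 x 256 sensor imagery processed at 100 frames per second.

def pixel_throughput(width, height, frame_rate_hz):
    """Return the required pixel-processing rate in pixels per second."""
    return width * height * frame_rate_hz

rate = pixel_throughput(256, 256, 100)
print(rate)   # 6553600 pixels/s, i.e. ~6.6 Mpixel/s sustained in one MVP chip
```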

  1. Vision Sensors and Cameras

    NASA Astrophysics Data System (ADS)

    Hoefflinger, Bernd

    Silicon charge-coupled-device (CCD) imagers have been and are a specialty market ruled by a few companies for decades. Based on CMOS technologies, active-pixel sensors (APS) began to appear in 1990 at the 1 μm technology node. These pixels allow random access and global shutters, and they are compatible with focal-plane imaging systems combining sensing and first-level image processing. The progress towards smaller features and towards ultra-low leakage currents has provided reduced dark currents and μm-size pixels. All chips offer megapixel resolution, and many have very high sensitivities equivalent to ASA 12,800. As a result, HDTV video cameras will become a commodity. Because charge-integration sensors suffer from a limited dynamic range, significant processing effort is spent on multiple exposure and piece-wise analog-digital conversion to reach ranges >10,000:1. The fundamental alternative is log-converting pixels with an eye-like response. This offers a range of almost a million to one, constant contrast sensitivity and constant colors, important features in professional, technical and medical applications. 3D retino-morphic stacking of sensing and processing on top of each other is being revisited with sub-100 nm CMOS circuits and with TSV technology. With sensor outputs directly on top of neurons, neural focal-plane processing will regain momentum, and new levels of intelligent vision will be achieved. The industry push towards thinned wafers and TSV enables backside-illuminated and other pixels with a 100% fill-factor. 3D vision, which relies on stereo or on time-of-flight, high-speed circuitry, will also benefit from scaled-down CMOS technologies, both because of their size as well as their higher speed.
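    The appeal of log-converting pixels can be illustrated numerically. A minimal sketch, assuming an idealized response V = V0 + s·log10(I/I0) with a hypothetical slope of 60 mV per decade (the article does not give these parameters):

```python
import math

# Illustrative sketch (not from the article): a logarithmic pixel maps
# photocurrent I to voltage V = V0 + s * log10(I / I0).  With a slope of
# ~60 mV/decade, a 10^6:1 intensity range fits in ~360 mV of output swing.

def log_pixel_response(current, i0=1e-12, v0=0.5, mv_per_decade=60.0):
    """Output voltage of an idealized log pixel (all parameters hypothetical)."""
    return v0 + (mv_per_decade / 1000.0) * math.log10(current / i0)

# A million-to-one intensity range compresses into a modest voltage swing:
swing = log_pixel_response(1e-6) - log_pixel_response(1e-12)
print(round(swing, 3))   # 0.36 (volts)
```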

  2. Rolling Band Artifact Flagging in the Kepler Data Pipeline

    NASA Astrophysics Data System (ADS)

    Clarke, Bruce; Kolodziejczak, Jeffery J; Caldwell, Douglas A.

    2014-06-01

    Instrument-induced artifacts in the raw Kepler pixel data include time-varying crosstalk from the fine guidance sensor (FGS) clock signals, manifestations of drifting moiré pattern as locally correlated nonstationary noise and rolling bands in the images. These systematics find their way into the calibrated pixel time series and ultimately into the target flux time series. The Kepler pipeline module Dynablack models the FGS crosstalk artifacts using a combination of raw science pixel data, full frame images, reverse-clocked pixel data and ancillary temperature data. The calibration module (CAL) uses the fitted Dynablack models to remove FGS crosstalk artifacts in the calibrated pixels by adjusting the black level correction per cadence. Dynablack also detects and flags spatial regions and time intervals of strong time-varying black-level. These rolling band artifact (RBA) flags are produced on a per row per cadence basis by searching for transit signatures in the Dynablack fit residuals. The Photometric Analysis module (PA) generates per target per cadence data quality flags based on the Dynablack RBA flags. Proposed future work includes using the target data quality flags as a basis for de-weighting in the Presearch Data Conditioning (PDC), Transiting Planet Search (TPS) and Data Validation (DV) pipeline modules. We discuss the effectiveness of RBA flagging for downstream users and illustrate with some affected light curves. We also discuss the implementation of Dynablack in the Kepler data pipeline and present results regarding the improvement in calibrated pixels and the expected improvement in cotrending performance as a result of including FGS corrections in the calibration. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.

  3. FITPix COMBO—Timepix detector with integrated analog signal spectrometric readout

    NASA Astrophysics Data System (ADS)

    Holik, M.; Kraus, V.; Georgiev, V.; Granja, C.

    2016-02-01

    The hybrid semiconductor pixel detector Timepix has proven to be a powerful tool in radiation detection and imaging. Energy loss and directional sensitivity, as well as particle-type resolving power, are possible through high-resolution particle tracking and per-pixel energy and quantum-counting capability. The spectrometric resolving power of the detector can be further enhanced by analyzing the analog signal of the detector's common sensor electrode (also called the back-side pulse). In this work we present a new compact readout interface, based on the FITPix readout architecture, extended with integrated analog electronics for the detector's common sensor signal. Integrating simultaneous operation of the digital per-pixel information with the common sensor (also called the back-side electrode) analog pulse processing circuitry into one device enhances the detector capabilities and opens new applications. Thanks to noise suppression and built-in electromagnetic interference shielding, the common hardware platform enables parallel analog signal spectroscopy on the back-side pulse signal with full operation and read-out of the pixelated digital part; the noise level is 600 keV and the spectrometric resolution is around 100 keV for 5.5 MeV alpha particles. Self-triggering is implemented with a delay of a few tens of ns, making use of the adjustable low-energy threshold of the particle analog signal amplitude. The digital pixelated full frame can thus be triggered and recorded together with the common sensor analog signal. The waveform, which is sampled at 100 MHz, can be recorded in an adjustable time window, including time prior to the trigger. An integrated software tool provides control, on-line display, and read-out of both analog and digital channels. Both the pixelated digital record and the analog waveform are synchronized and written out with a common time stamp.

  4. PIXELS: Using field-based learning to investigate students' concepts of pixels and sense of scale

    NASA Astrophysics Data System (ADS)

    Pope, A.; Tinigin, L.; Petcovic, H. L.; Ormand, C. J.; LaDue, N.

    2015-12-01

    Empirical work over the past decade supports the notion that a high level of spatial thinking skill is critical to success in the geosciences. Spatial thinking incorporates a host of sub-skills such as mentally rotating an object, imagining the inside of a 3D object based on outside patterns, unfolding a landscape, and disembedding critical patterns from background noise. In this study, we focus on sense of scale, which refers to how an individual quantifies space and is thought to develop through kinesthetic experiences. Remote sensing data are increasingly being used for wide-reaching and high impact research. A sense of scale is critical to many areas of the geosciences, including understanding and interpreting remotely sensed imagery. In this exploratory study, students (N=17) attending the Juneau Icefield Research Program participated in a 3-hour exercise designed to study how a field-based activity might impact their sense of scale and their conceptions of pixels in remotely sensed imagery. Prior to the activity, students had an introductory remote sensing lecture and completed the Sense of Scale inventory. Students walked and/or skied the perimeter of several pixel types, including a 1 m square (representing a WorldView sensor's pixel), a 30 m square (a Landsat pixel) and a 500 m square (a MODIS pixel). The group took reflectance measurements using a field radiometer as they physically traced out the pixel. The exercise was repeated in two different areas, one with homogeneous reflectance, and another with heterogeneous reflectance. After the exercise, students again completed the Sense of Scale instrument and a demographic survey. This presentation will share the effects and efficacy of the field-based intervention to teach remote sensing concepts and to investigate potential relationships between students' concepts of pixels and sense of scale.

  5. Vehicle logo recognition using multi-level fusion model

    NASA Astrophysics Data System (ADS)

    Ming, Wei; Xiao, Jianli

    2018-04-01

    Vehicle logo recognition plays an important role in manufacturer identification and vehicle recognition. This paper proposes a new vehicle logo recognition algorithm. It has a hierarchical framework, which consists of two fusion levels. At the first level, a feature fusion model is employed to map the original features to a higher dimension feature space. In this space, the vehicle logos become more recognizable. At the second level, a weighted voting strategy is proposed to promote the accuracy and the robustness of the recognition results. To evaluate the performance of the proposed algorithm, extensive experiments are performed, which demonstrate that the proposed algorithm can achieve high recognition accuracy and work robustly.
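    The second-level weighted voting strategy can be sketched as follows. The class labels and reliability weights below are invented for illustration; the paper does not publish its exact weighting scheme:

```python
from collections import defaultdict

# Hedged sketch of decision fusion by weighted voting: each base classifier
# casts a vote for a logo class, scaled by a reliability weight.

def weighted_vote(predictions, weights):
    """predictions: class labels from each classifier; weights: matching reliabilities."""
    scores = defaultdict(float)
    for label, w in zip(predictions, weights):
        scores[label] += w
    return max(scores, key=scores.get)   # class with the largest summed weight

# Three hypothetical classifiers vote on a logo class:
print(weighted_vote(["BMW", "Audi", "BMW"], [0.5, 0.3, 0.4]))  # BMW (0.9 vs 0.3)
```

A weighted vote tolerates a single unreliable classifier better than a plain majority vote, which is the robustness property the abstract claims.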

  6. A Nipkow disk integrated with Fresnel lenses for terahertz single pixel imaging.

    PubMed

    Li, Chong; Grant, James; Wang, Jue; Cumming, David R S

    2013-10-21

    We present a novel Nipkow disk design for terahertz (THz) single pixel imaging applications. A 100 mm high resistivity (ρ≈3k-10k Ω·cm) silicon wafer was used for the disk, on which a spiral array of twelve 16-level binary Fresnel lenses was fabricated using photolithography and a dry-etch process. The implementation of Fresnel lenses on the Nipkow disk increases the THz signal transmission compared to the conventional pinhole-based Nipkow disk by more than 12 times; thus, a THz source with lower power or a THz detector with lower detectivity can be used. Due to the focusing capability of the lenses, a pixel resolution better than 0.5 mm is in principle achievable. To demonstrate the concept, a single pixel imaging system operating at 2.52 THz is described.

  7. Subpixel target detection and enhancement in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Tiwari, K. C.; Arora, M.; Singh, D.

    2011-06-01

    Hyperspectral data, owing to the higher information content afforded by its finer spectral resolution, is increasingly being used for various remote sensing applications, including information extraction at the subpixel level. There is, however, usually a lack of matching fine spatial resolution data, particularly for target detection applications. Thus, there always exists a tradeoff between spectral and spatial resolution due to considerations of the type of application, its cost, and other associated analytical and computational complexities. Typically, whenever an object (manmade, natural, or any ground cover class; also called a target, endmember, component, or class) is spectrally resolved but not spatially resolved, mixed pixels result in the image. Numerous manmade and/or natural disparate substances may thus occur inside such mixed pixels, giving rise to mixed pixel classification or subpixel target detection problems. Various spectral unmixing models, such as Linear Mixture Modeling (LMM), are in vogue to recover the components of a mixed pixel. Spectral unmixing outputs both the endmember spectra and their corresponding abundance fractions inside the pixel. It does not, however, provide the spatial distribution of these abundance fractions within a pixel. This limits the applicability of hyperspectral data for subpixel target detection. In this paper, a new inverse-Euclidean-distance-based super-resolution mapping method is presented that achieves subpixel target detection in hyperspectral images by adjusting the spatial distribution of abundance fractions within a pixel. Results obtained at different resolutions indicate that super-resolution mapping may effectively aid subpixel target detection.
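    The linear mixture model mentioned above can be illustrated in its simplest two-endmember form, where the sum-to-one constraint gives a closed-form least-squares solution. The four-band spectra below are hypothetical; real unmixing typically solves a constrained least-squares problem over many endmembers:

```python
# Minimal LMM sketch for one mixed pixel x = a1*e1 + (1 - a1)*e2 + noise.
# Under the sum-to-one constraint, the least-squares abundance has the
# closed form a1 = <x - e2, e1 - e2> / ||e1 - e2||^2, clipped to [0, 1].

def unmix_two_endmembers(x, e1, e2):
    """Return the abundance a1 of endmember e1 in pixel spectrum x."""
    d = [b - c for b, c in zip(e1, e2)]                    # e1 - e2
    num = sum((xi - ci) * di for xi, ci, di in zip(x, e2, d))
    den = sum(di * di for di in d)
    a1 = num / den
    return min(1.0, max(0.0, a1))                          # physical range

e_veg  = [0.05, 0.08, 0.45, 0.50]    # hypothetical vegetation endmember
e_soil = [0.20, 0.25, 0.30, 0.35]    # hypothetical soil endmember
pixel  = [0.5 * a + 0.5 * b for a, b in zip(e_veg, e_soil)]  # 50/50 mixture
print(round(unmix_two_endmembers(pixel, e_veg, e_soil), 2))  # 0.5
```

As the abstract notes, this recovers only the abundance fraction, not where the 50% vegetation sits inside the pixel, which is the gap super-resolution mapping addresses.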

  8. Simulation study of pixel detector charge digitization

    NASA Astrophysics Data System (ADS)

    Wang, Fuyue; Nachman, Benjamin; Sciveres, Maurice; Lawrence Berkeley National Laboratory Team

    2017-01-01

    Reconstruction of tracks from nearly overlapping particles, called Tracking in Dense Environments (TIDE), is an increasingly important component of many physics analyses at the Large Hadron Collider as signatures involving highly boosted jets are investigated. TIDE makes use of the charge distribution inside a pixel cluster to resolve tracks that share one or more of their pixel detector hits. In practice, the pixel charge is discretized using the Time-over-Threshold (ToT) technique. More charge information is better for discrimination, but more challenging for designing and operating the detector. A model of the silicon pixels has been developed in order to study the impact of the precision of the digitized charge distribution on distinguishing multi-particle clusters. The output of the GEANT4-based simulation is used to train neural networks that predict the multiplicity and location of particles depositing energy inside one cluster of pixels. By studying the multi-particle cluster identification efficiency and position resolution, we quantify the trade-off between the number of ToT bits and low-level tracking inputs. As both ATLAS and CMS are designing upgraded detectors, this work provides guidance for pixel module designs to meet TIDE needs. Work funded by the China Scholarship Council and the Office of High Energy Physics of the U.S. Department of Energy under contract DE-AC02-05CH11231.
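    The ToT discretization being studied can be sketched as a simple linear quantizer. The threshold, full-scale charge, and bit depth below are hypothetical stand-ins, not values from any detector design:

```python
# Hedged sketch of Time-over-Threshold charge digitization: charge above a
# threshold is mapped linearly to an n-bit integer code; charge below the
# threshold registers no hit, and charge above full scale saturates.

def charge_to_tot(charge_e, threshold_e=1000, full_scale_e=30000, bits=4):
    """Quantize deposited charge (in electrons) into an n-bit ToT code."""
    if charge_e < threshold_e:
        return 0                                  # below threshold: no hit
    levels = (1 << bits) - 1                      # e.g. 15 codes for 4 bits
    frac = (charge_e - threshold_e) / (full_scale_e - threshold_e)
    return min(levels, max(1, round(frac * levels)))

print(charge_to_tot(500))     # 0  (no hit)
print(charge_to_tot(16000))   # 8  (mid-range code)
print(charge_to_tot(50000))   # 15 (saturated at the 4-bit full scale)
```

The trade-off the record quantifies is visible here: fewer bits coarsen the charge shape available to the cluster-splitting networks, while more bits cost readout bandwidth and front-end complexity.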

  9. Preliminary investigations of active pixel sensors in Nuclear Medicine imaging

    NASA Astrophysics Data System (ADS)

    Ott, Robert; Evans, Noel; Evans, Phil; Osmond, J.; Clark, A.; Turchetta, R.

    2009-06-01

    Three CMOS active pixel sensors have been investigated for their application to Nuclear Medicine imaging. Startracker with 525×525 25 μm square pixels has been coupled via a fibre optic stud to a 2 mm thick segmented CsI(Tl) crystal. Imaging tests were performed using 99mTc sources, which emit 140 keV gamma rays. The system was interfaced to a PC via FPGA-based DAQ and optical link enabling imaging rates of 10 frames/s. System noise was measured to be >100 e- and it was shown that the majority of this noise was fixed-pattern in nature. The intrinsic spatial resolution was measured to be ~80 μm and the system spatial resolution measured with a slit was ~450 μm. The second sensor, On Pixel Intelligent CMOS (OPIC), had 64×72 40 μm pixels and was used to evaluate noise characteristics and to develop a method of differentiation between fixed-pattern and statistical noise. The third sensor, Vanilla, had 520×520 25 μm pixels and a measured system noise of ~25 e-. This sensor was coupled directly to the segmented phosphor. Imaging results show that even at this lower level of noise the signal from 140 keV gamma rays is small, as the light from the phosphor is spread over a large number of pixels. Suggestions for the 'ideal' sensor are made.

  10. An Exploration of WFC3/IR Dark Current Variation

    NASA Astrophysics Data System (ADS)

    Sunnquist, B.; Baggett, S.; Long, K. S.

    2017-02-01

    We use a collection of darks spanning September 2009 to June 2016 to study variations in the dark current in the IR detector on WFC3. Although the darks possess a similar signal pattern across the detector, we find that their median dark rates vary by as much as 0.014 DN/s (0.032 e-/s). The distribution of these median values has a triangular shape with a mean and standard deviation of 0.021 ± 0.0029 DN/s (0.049 ± 0.0069 e-/s). We observe a long term time-dependence in the inboard vertical reference pixel and zeroth read signals; however, these differences do not noticeably affect the calibrated dark signals, and we conclude that the WFC3/IR dark current levels continue to remain stable since launch. The inboard reference pixel signals exhibit a unique, but consistent, pattern around the detector, but this pattern does not evolve noticeably with the median of the science pixels, and a quadrant or row-based reference pixel subtraction strategy does not reduce the spread between the median dark rates. We notice a slight drift in the inboard reference pixel signals up the dark ramps, and the intensity of this drift is related to the median dark current in the science pixels. This holds true using either the horizontal or vertical reference pixels and for darks with a variety of sample sequences.

  11. Regional SAR Image Segmentation Based on Fuzzy Clustering with Gamma Mixture Model

    NASA Astrophysics Data System (ADS)

    Li, X. L.; Zhao, Q. H.; Li, Y.

    2017-09-01

    Most stochastic fuzzy clustering algorithms are pixel-based and cannot effectively overcome the inherent speckle noise in SAR images. To deal with this problem, a regional SAR image segmentation algorithm based on fuzzy clustering with a Gamma mixture model is proposed in this paper. First, generating points are initialized randomly on the image and the image domain is divided into many sub-regions using the Voronoi tessellation technique. Each sub-region is regarded as a homogeneous area in which the pixels share the same cluster label. Then, the pixel intensity is assumed to follow a Gamma mixture model with the parameters of the cluster to which the pixel belongs. The negative logarithm of the probability represents the dissimilarity measure between the pixel and the cluster. The regional dissimilarity measure of a sub-region is defined as the sum of the measures of the pixels in the region. Furthermore, the Markov Random Field (MRF) model is extended from the pixel level to the Voronoi sub-regions, and the regional objective function is established within the framework of fuzzy clustering. The optimal segmentation results can be obtained by solving for the model parameters and generating points. Finally, the effectiveness of the proposed algorithm is demonstrated by qualitative and quantitative analysis of the segmentation results on simulated and real SAR images.
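    The pixel-to-cluster dissimilarity described above is the negative log-likelihood of the pixel intensity under a Gamma mixture. A minimal sketch, with mixture weights, shapes, and scales invented for illustration (the paper estimates these during clustering):

```python
import math

# Gamma density in shape/scale parameterization, evaluated via log-gamma
# for numerical stability: f(x; k, th) = x^(k-1) e^(-x/th) / (th^k Gamma(k)).

def gamma_pdf(x, shape, scale):
    return math.exp((shape - 1) * math.log(x) - x / scale
                    - shape * math.log(scale) - math.lgamma(shape))

def neg_log_mixture(x, components):
    """components: list of (weight, shape, scale); returns -log p(x), the dissimilarity."""
    p = sum(w * gamma_pdf(x, k, th) for w, k, th in components)
    return -math.log(p)

mix = [(0.6, 2.0, 1.5), (0.4, 5.0, 2.0)]   # two hypothetical cluster components
d_dark, d_bright = neg_log_mixture(1.0, mix), neg_log_mixture(9.0, mix)
print(d_dark < d_bright)   # True: intensity 1.0 fits this mixture better than 9.0
```

Summing this measure over all pixels of a Voronoi sub-region gives the regional dissimilarity used in the objective function.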

  12. Data fusion strategies for hazard detection and safe site selection for planetary and small body landings

    NASA Astrophysics Data System (ADS)

    Câmara, F.; Oliveira, J.; Hormigo, T.; Araújo, J.; Ribeiro, R.; Falcão, A.; Gomes, M.; Dubois-Matra, O.; Vijendran, S.

    2015-06-01

    This paper discusses the design and evaluation of data fusion strategies to perform tiered fusion of several heterogeneous sensors and a priori data. The aim is to increase robustness and performance of hazard detection and avoidance systems, while enabling safe planetary and small body landings anytime, anywhere. The focus is on Mars and asteroid landing mission scenarios and three distinct data fusion algorithms are introduced and compared. The first algorithm consists of a hybrid camera-LIDAR hazard detection and avoidance system, the H2DAS, in which data fusion is performed at both sensor-level data (reconstruction of the point cloud obtained with a scanning LIDAR using the navigation motion states and correcting the image for motion compensation using IMU data), feature-level data (concatenation of multiple digital elevation maps, obtained from consecutive LIDAR images, to achieve higher accuracy and resolution maps while enabling relative positioning) as well as decision-level data (fusing hazard maps from multiple sensors onto a single image space, with a single grid orientation and spacing). The second method presented is a hybrid reasoning fusion, the HRF, in which innovative algorithms replace the decision-level functions of the previous method, by combining three different reasoning engines—a fuzzy reasoning engine, a probabilistic reasoning engine and an evidential reasoning engine—to produce safety maps. Finally, the third method presented is called Intelligent Planetary Site Selection, the IPSIS, an innovative multi-criteria, dynamic decision-level data fusion algorithm that takes into account historical information for the selection of landing sites and a piloting function with a non-exhaustive landing site search capability, i.e., capable of finding local optima by searching a reduced set of global maps. All the discussed data fusion strategies and algorithms have been integrated, verified and validated in a closed-loop simulation environment. 
Monte Carlo simulation campaigns were performed for the algorithms performance assessment and benchmarking. The simulations results comprise the landing phases of Mars and Phobos landing mission scenarios.
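    The decision-level step of the first algorithm, fusing hazard maps from multiple sensors onto a single image space, can be sketched with a conservative cell-wise maximum. The grids and hazard values below are invented for illustration; the actual H2DAS fusion rules are not specified in this abstract:

```python
# Hedged sketch of decision-level hazard map fusion: maps already resampled
# onto a common grid are combined by keeping the worst (maximum) hazard per
# cell, so a site is only "safe" if every sensor agrees it is safe.
# Values: 0.0 = safe, 1.0 = certain hazard.

def fuse_hazard_maps(maps):
    """Cell-wise maximum over a list of equally sized 2D hazard maps."""
    rows, cols = len(maps[0]), len(maps[0][0])
    return [[max(m[r][c] for m in maps) for c in range(cols)]
            for r in range(rows)]

lidar_map  = [[0.1, 0.8], [0.0, 0.2]]
camera_map = [[0.3, 0.4], [0.0, 0.9]]
print(fuse_hazard_maps([lidar_map, camera_map]))  # [[0.3, 0.8], [0.0, 0.9]]
```

The cell-wise maximum is the simplest conservative rule; the HRF variant described above instead combines fuzzy, probabilistic, and evidential reasoning engines at this stage.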

  13. Revisiting the JDL Model for Information Exploitation

    DTIC Science & Technology

    2013-07-01

    High-Level Information Fusion Management and Systems Design, Artech House, Norwood, MA, 2012. [10] E. Blasch, D. A. Lambert, P. Valin, M. M. Kokar...Fusion – Fusion2012 Panel Discussion," Int. Conf. on Info Fusion, 2012. [29] E. P. Blasch, P. Valin, A-L. Jousselme, et al., "Top Ten Trends in High...P. Valin, E. Bosse, M. Nilsson, J. Van Laere, et al., "Implication of Culture: User Roles in Information Fusion for Enhanced Situational

  14. NASA/GEWEX shortwave surface radiation budget: Integrated data product with reprocessed radiance, cloud, and meteorology inputs, and new surface albedo treatment

    NASA Astrophysics Data System (ADS)

    Cox, Stephen J.; Stackhouse, Paul W.; Gupta, Shashi K.; Mikovitz, J. Colleen; Zhang, Taiping

    2017-02-01

    The NASA/GEWEX Surface Radiation Budget (SRB) project produces shortwave and longwave surface and top of atmosphere radiative fluxes for the 1983-near present time period. Spatial resolution is 1 degree. The current Release 3.0 (available at gewex-srb.larc.nasa.gov) uses the International Satellite Cloud Climatology Project (ISCCP) DX product for pixel level radiance and cloud information. This product is subsampled to 30 km. ISCCP is currently recalibrating and recomputing their entire data series, to be released as the H product, at 10km resolution. The ninefold increase in pixel number will allow SRB a higher resolution gridded product (e.g. 0.5 degree), as well as the production of pixel-level fluxes. Other key input improvements include a detailed aerosol history using the Max Planck Institute Aerosol Climatology (MAC), and temperature and moisture profiles from nnHIRS.

  15. Multisensor fusion for 3D target tracking using track-before-detect particle filter

    NASA Astrophysics Data System (ADS)

    Moshtagh, Nima; Romberg, Paul M.; Chan, Moses W.

    2015-05-01

    This work presents a novel fusion mechanism for estimating the three-dimensional trajectory of a moving target using images collected by multiple imaging sensors. The proposed projective particle filter avoids explicit target detection prior to fusion. In the projective particle filter, particles that represent the posterior density (of the target state in a high-dimensional space) are projected onto the lower-dimensional observation space. Measurements are generated directly in the observation space (image plane) and a marginal (sensor) likelihood is computed. The particle states and their weights are updated using the joint likelihood computed from all the sensors. The 3D state estimate of the target (system track) is then generated from the particle states. This approach is similar to track-before-detect particle filters that are known to perform well in tracking dim and stealthy targets in image collections. Our approach extends the track-before-detect approach to 3D tracking using the projective particle filter. The performance of this measurement-level fusion method is compared with that of a track-level fusion algorithm using the projective particle filter. In the track-level fusion algorithm, the 2D sensor tracks are generated separately and transmitted to a fusion center, where they are treated as measurements to the state estimator. The 2D sensor tracks are then fused to reconstruct the system track. A realistic synthetic scenario with a boosting target was generated and used to study the performance of the fusion mechanisms.
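    The core weight update of the measurement-level fusion can be sketched as follows: each 3D particle is projected into every sensor's image plane and its weight is multiplied by the joint likelihood. The pinhole camera models, Gaussian pixel noise, and all numbers are hypothetical stand-ins for illustration:

```python
import math

# Hedged sketch of the projective particle filter update: project each 3D
# particle into each camera, score it against that camera's 2D measurement,
# and update weights with the product (joint) likelihood over sensors.

def project(particle_xyz, cam):
    """Pinhole projection of a 3D point into a camera's 2D image plane."""
    x, y, z = (p - c for p, c in zip(particle_xyz, cam["position"]))
    return (cam["focal"] * x / z, cam["focal"] * y / z)

def gaussian_likelihood(u, v, meas, sigma=2.0):
    du, dv = u - meas[0], v - meas[1]
    return math.exp(-(du * du + dv * dv) / (2 * sigma * sigma))

def update_weights(particles, weights, cameras, measurements):
    """Multiply each weight by the joint likelihood over all sensors, then normalize."""
    new = []
    for p, w in zip(particles, weights):
        lik = 1.0
        for cam, z in zip(cameras, measurements):
            lik *= gaussian_likelihood(*project(p, cam), z)
        new.append(w * lik)
    total = sum(new)
    return [w / total for w in new]

cams = [{"position": (0, 0, 0), "focal": 100.0},
        {"position": (10, 0, 0), "focal": 100.0}]
particles = [(1.0, 1.0, 50.0), (3.0, 1.0, 50.0)]
true_meas = [project((1.0, 1.0, 50.0), c) for c in cams]  # target at particle 0
w = update_weights(particles, [0.5, 0.5], cams, true_meas)
print(w[0] > w[1])   # True: the particle matching both sensors gains weight
```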

  16. Salt-and-pepper noise removal using modified mean filter and total variation minimization

    NASA Astrophysics Data System (ADS)

    Aghajarian, Mickael; McInroy, John E.; Wright, Cameron H. G.

    2018-01-01

    The search for effective noise removal algorithms is still a real challenge in the field of image processing. An efficient image denoising method is proposed for images that are corrupted by salt-and-pepper noise. Salt-and-pepper noise takes either the minimum or the maximum intensity, so the proposed method restores the image by processing only the pixels whose values are either 0 or 255 (assuming an 8-bit/pixel image). For low levels of noise corruption (less than or equal to 50% noise density), the method employs the modified mean filter (MMF), while for heavy noise corruption, noisy pixel values are replaced by a weighted average of the MMF output and a total variation estimate of the corrupted pixels, obtained by convex optimization. Two fuzzy systems are used to determine the averaging weights. To evaluate the performance of the algorithm, several test images with different noise levels are restored, and the results are quantitatively measured by peak signal-to-noise ratio and mean absolute error. The results show that the proposed scheme gives considerable noise suppression up to a noise density of 90%, while almost completely maintaining edges and fine details of the original image.
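    The detection step described above, treating only pixels at exactly 0 or 255 as noise candidates, can be sketched with a simplified neighborhood mean replacement. This is a hypothetical stand-in for the paper's modified mean filter, whose exact form the abstract does not give:

```python
# Hedged sketch: in an 8-bit image, pixels at 0 or 255 are flagged as
# salt-and-pepper candidates and replaced by the mean of their uncorrupted
# 3x3 neighbors; all other pixels are left untouched.

def despeckle(img):
    """img: 2D list of 8-bit values; returns a filtered copy."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(h):
        for c in range(w):
            if img[r][c] not in (0, 255):
                continue                       # clean pixel: leave untouched
            good = [img[rr][cc]
                    for rr in range(max(0, r - 1), min(h, r + 2))
                    for cc in range(max(0, c - 1), min(w, c + 2))
                    if (rr, cc) != (r, c) and img[rr][cc] not in (0, 255)]
            if good:
                out[r][c] = round(sum(good) / len(good))
    return out

noisy = [[100, 255, 100],
         [100,   0, 100],
         [100, 100, 100]]
print(despeckle(noisy))   # both extreme pixels replaced by 100
```

At 90% noise density most neighbors are themselves corrupted, which is why the paper blends in a total variation term instead of relying on local means alone.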

  17. [Hybrid stabilization technique with spinal fusion and interlaminar device to reduce the length of fusion and to protect symptomatic adjacent segments : Clinical long-term follow-up].

    PubMed

    Fleege, C; Rickert, M; Werner, I; Rauschmann, M; Arabmotlagh, M

    2016-09-01

    Determination of the extent of spinal fusion for lumbar degenerative diseases is often difficult due to minor pathologies in the adjacent segment. Although surgical intervention is required, fusion seems to be an overtreatment. Decompression alone may not be enough, as this segment is affected by multiple factors such as destabilization, low-grade degeneration and an unfavorable biomechanical transition next to a rigid construct. An alternative surgical treatment is a hybrid construct, consisting of fusion and implantation of an interlaminar stabilization device at the adjacent level. The aim of this study was to compare long-term clinical outcome after lumbar fusion with a hybrid construct including an interlaminar stabilization device as "topping-off". A retrospective analysis of 25 lumbar spinal fusions from 2003 to 2010 with an additional interlaminar stabilization device was performed. Through a matched case-control procedure, 25 congruent patients who received lumbar spinal fusion in one or two levels were included as a control group. At an average follow-up of 43 months, pre- and postoperative pain, ODI, SF-36, as well as clinical parameters such as leg and back pain, walking distance and patient satisfaction, were recorded. Pain relief, ODI improvement and patient satisfaction were significantly higher in the hybrid group compared to the control group. SF-36 scores improved in both groups and were higher in the hybrid group, although the difference was not significant. Evaluation of walking distance showed no significant differences. Many outcome parameters showed significantly better long-term results in the hybrid group compared to sole spinal fusion. Therefore, in cases with a clear indication for lumbar spinal fusion with the need for decompression at the adjacent level due to spinal stenosis or moderate spondylarthrosis, support of this segment with an interlaminar stabilization device demonstrates a reasonable treatment option with good clinical outcome. 
Also, the length of the fusion construct can be reduced allowing for a softer and more harmonic transition.

  18. Paramyxovirus fusion: Real-time measurement of parainfluenza virus 5 virus-cell fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Connolly, Sarah A.; Lamb, Robert A.

    2006-11-25

    Although cell-cell fusion assays are useful surrogate methods for studying virus fusion, differences between cell-cell and virus-cell fusion exist. To examine paramyxovirus fusion in real time, we labeled viruses with fluorescent lipid probes and monitored virus-cell fusion by fluorimetry. Two parainfluenza virus 5 (PIV5) isolates (W3A and SER) and PIV5 containing mutations within the fusion protein (F) were studied. Fusion was specific and temperature-dependent. Compared to many low pH-dependent viruses, the kinetics of PIV5 fusion were slow, approaching completion within several minutes. As predicted from cell-cell fusion assays, virus containing an F protein with an extended cytoplasmic tail (rSV5 F551) had reduced fusion compared to wild-type virus (W3A). In contrast, virus-cell fusion for SER occurred at near wild-type levels, despite the fact that this isolate exhibits a severely reduced cell-cell fusion phenotype. These results support the notion that virus-cell and cell-cell fusion have significant differences.

  19. Fusion partners can increase the expression of recombinant interleukins via transient transfection in 2936E cells

    PubMed Central

    Carter, Jane; Zhang, Jue; Dang, Thien-Lan; Hasegawa, Haruki; Cheng, Janet D; Gianan, Irene; O'Neill, Jason W; Wolfson, Martin; Siu, Sophia; Qu, Sheldon; Meininger, David; Kim, Helen; Delaney, John; Mehlin, Christopher

    2010-01-01

    The expression levels of five secreted target interleukins (IL-11, 15, 17B, 32, and IL23 p19 subunit) were tested with three different fusion partners in 2936E cells. When fused to the N-terminus, human serum albumin (HSA) was found to enhance the expression of both IL-17B and IL-15, cytokines which did not express at measurable levels on their own. Although the crystallizable fragment of an antibody (Fc) was also an effective fusion partner for IL-17B, Fc did not increase expression of IL-15. Fc was superior to HSA for the expression of the p19 subunit of IL-23, but no partner led to measurable levels of IL-32γ secretion. Glutathione S-transferase (GST) did not enhance the expression of any target and suppressed the production of IL-11, a cytokine which expressed robustly both on its own and when fused to HSA or Fc. Cleavage of the fusion partner was not always possible. The use of HSA or Fc as N-terminal fusions can be an effective technique to express difficult proteins, especially for applications in which the fusion partner need not be removed. PMID:20014434

  20. The MODIS cloud optical and microphysical products: Collection 6 updates and examples from Terra and Aqua

    PubMed Central

    Platnick, Steven; Meyer, Kerry G.; King, Michael D.; Wind, Galina; Amarasinghe, Nandana; Marchant, Benjamin; Arnold, G. Thomas; Zhang, Zhibo; Hubanks, Paul A.; Holz, Robert E.; Yang, Ping; Ridgway, William L.; Riedi, Jérôme

    2018-01-01

    The MODIS Level-2 cloud product (Earth Science Data Set names MOD06 and MYD06 for Terra and Aqua MODIS, respectively) provides pixel-level retrievals of cloud-top properties (day and night pressure, temperature, and height) and cloud optical properties (optical thickness, effective particle radius, and water path for both liquid water and ice cloud thermodynamic phases–daytime only). Collection 6 (C6) reprocessing of the product was completed in May 2014 and March 2015 for MODIS Aqua and Terra, respectively. Here we provide an overview of major C6 optical property algorithm changes relative to the previous Collection 5 (C5) product. Notable C6 optical and microphysical algorithm changes include: (i) new ice cloud optical property models and a more extensive cloud radiative transfer code lookup table (LUT) approach, (ii) improvement in the skill of the shortwave-derived cloud thermodynamic phase, (iii) separate cloud effective radius retrieval datasets for each spectral combination used in previous collections, (iv) separate retrievals for partly cloudy pixels and those associated with cloud edges, (v) failure metrics that provide diagnostic information for pixels having observations that fall outside the LUT solution space, and (vi) enhanced pixel-level retrieval uncertainty calculations. The C6 algorithm changes collectively can result in significant changes relative to C5, though the magnitude depends on the dataset and the pixel’s retrieval location in the cloud parameter space. Example Level-2 granule and Level-3 gridded dataset differences between the two collections are shown. While the emphasis is on the suite of cloud optical property datasets, other MODIS cloud datasets are discussed when relevant. PMID:29657349

  1. Surface composition of Mars: A Viking multispectral view

    NASA Technical Reports Server (NTRS)

    Adams, John B.; Smith, Milton O.; Arvidson, Raymond E.; Dale-Bannister, Mary; Guinness, Edward A.; Singer, Robert

    1987-01-01

    A new method of analyzing multispectral images takes advantage of the spectral variation from pixel to pixel that is typical of natural planetary surfaces, and treats all pixels as potential mixtures of spectrally distinct materials. For Viking Lander images, mixtures of only three spectral end members (rock, soil, and shade) are sufficient to explain the observed spectral variation down to the level of instrumental noise. It was concluded that a large portion of the Martian surface consists of only two spectrally distinct materials, basalt and palagonitic soil. It is emphasized, however, that as viewed through the three broad bandpasses of the Viking Orbiter, other materials cannot be distinguished from these mixtures.
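
The three-end-member mixing model described above can be illustrated with a minimal least-squares unmixing sketch; the end-member spectra below are hypothetical placeholders, not the Viking values:

```python
import numpy as np

# Hypothetical three-band reflectance spectra for the three end members;
# columns are rock, soil, shade. Values are illustrative only.
endmembers = np.array([
    [0.30, 0.45, 0.02],
    [0.25, 0.35, 0.02],
    [0.20, 0.15, 0.02],
])  # shape (bands, end members)

def unmix(pixel):
    """Least-squares end-member fractions for one pixel spectrum."""
    fractions, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    return fractions

# A pixel that is an exact 50/30/20 mixture recovers those fractions.
mixed = endmembers @ np.array([0.5, 0.3, 0.2])
print(np.round(unmix(mixed), 3))  # -> [0.5 0.3 0.2]
```

With only three broad bands, three end members already make the system square, which is why spectrally similar materials cannot be separated from mixtures.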

  2. Resolution-independent surface rendering using programmable graphics hardware

    DOEpatents

    Loop, Charles T.; Blinn, James Frederick

    2008-12-16

    Surfaces defined by a Bezier tetrahedron, and in particular quadric surfaces, are rendered on programmable graphics hardware. Pixels are rendered through triangular sides of the tetrahedra and locations on the shapes, as well as surface normals for lighting evaluations, are computed using pixel shader computations. Additionally, vertex shaders are used to aid interpolation over a small number of values as input to the pixel shaders. Through this, rendering of the surfaces is performed independently of viewing resolution, allowing for advanced level-of-detail management. By individually rendering tetrahedrally-defined surfaces which together form complex shapes, the complex shapes can be rendered in their entirety.

  3. A unified tensor level set for image segmentation.

    PubMed

    Wang, Bin; Gao, Xinbo; Tao, Dacheng; Li, Xuelong

    2010-06-01

    This paper presents a new region-based unified tensor level set model for image segmentation. The model introduces a three-order tensor to comprehensively depict pixel features, e.g., the gray value and local geometrical features such as orientation and gradient; then, by defining a weighted distance, we generalize the representative region-based level set method from scalar to tensor. The proposed model has four main advantages over the traditional representative method. First, by involving a Gaussian filter bank, the model is robust against noise, particularly salt-and-pepper noise. Second, by considering local geometrical features, e.g., orientation and gradient, the model pays more attention to boundaries and makes the evolving curve stop more easily at boundary locations. Third, because the unified tensor representation comprehensively describes the pixels, the model segments images more accurately and naturally. Fourth, based on the weighted distance definition, the model can cope with data varying from scalar to vector to high-order tensor. We apply the proposed method to synthetic, medical, and natural images, and the results suggest that it is superior to the available representative region-based level set method.
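
A minimal sketch of building such a per-pixel feature stack (gray value, gradient magnitude, orientation, at a few Gaussian scales) might look as follows; the scales and feature choices are illustrative, not the paper's exact tensor construction:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pixel_feature_tensor(image, sigmas=(1.0, 2.0)):
    """Stack gray value, gradient magnitude and orientation per pixel at
    several Gaussian scales -- a simplified stand-in for a tensor-valued
    pixel representation."""
    features = []
    for sigma in sigmas:
        smoothed = gaussian_filter(image.astype(float), sigma)
        gy, gx = np.gradient(smoothed)
        features.append(smoothed)            # gray value at this scale
        features.append(np.hypot(gx, gy))    # gradient magnitude
        features.append(np.arctan2(gy, gx))  # gradient orientation
    return np.stack(features, axis=-1)       # shape (H, W, n_features)

img = np.zeros((16, 16))
img[:, 8:] = 1.0                             # vertical step edge
feats = pixel_feature_tensor(img)
print(feats.shape)  # -> (16, 16, 6)
```

A distance defined over this feature stack (rather than gray value alone) is what lets the evolving curve respond to boundaries and texture, not just intensity.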

  4. Backside illuminated CMOS-TDI line scan sensor for space applications

    NASA Astrophysics Data System (ADS)

    Cohen, Omer; Ofer, Oren; Abramovich, Gil; Ben-Ari, Nimrod; Gershon, Gal; Brumer, Maya; Shay, Adi; Shamay, Yaron

    2018-05-01

    A multi-spectral backside-illuminated Time Delayed Integration (TDI) radiation-hardened line scan sensor utilizing CMOS technology was designed for continuous-scanning Low Earth Orbit small satellite applications. The sensor comprises a single silicon chip with 4 independent arrays of pixels, where each array is arranged in 2600 columns with 64 TDI levels. A multispectral optical filter, whose spectral response per array is adjustable per system requirement, is assembled at the package level. A custom 4T pixel design provides the required readout speed, low noise, very low dark current, and high conversion gain. A 2-phase internally controlled exposure mechanism improves the sensor's dynamic MTF. The sensor's high level of integration includes on-chip 12-bit-per-pixel analog-to-digital converters, an on-chip controller, and CMOS-compatible voltage levels. Thus, the power consumption and weight of the supporting electronics are reduced, and a simple electrical interface is provided. An adjustable gain provides a full well capacity ranging from 150,000 up to 500,000 electrons per column and an overall readout noise of less than 120 electrons per column. The imager supports line rates from 50 to 10,000 lines/sec, with a power consumption of less than 0.5 W per array. The sensor is thus characterized by a high pixel rate, a high dynamic range and very low power. To meet a latch-up-free requirement, radiation-hardened (RadHard) architecture and design rules were utilized. In this paper, recent electrical and electro-optical measurements of the sensor's Flight Models are presented for the first time.
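
From the quoted full-well capacity and readout noise, the per-column dynamic range follows from the standard ratio DR = 20·log10(FWC/noise); a quick check of the quoted figures:

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Per-column dynamic range in dB from full-well capacity and read noise."""
    return 20 * math.log10(full_well_e / read_noise_e)

# Figures quoted in the abstract: 150,000-500,000 e- full well, <120 e- noise.
print(round(dynamic_range_db(150_000, 120), 1))  # -> 61.9
print(round(dynamic_range_db(500_000, 120), 1))  # -> 72.4
```

So the adjustable gain spans roughly 62-72 dB of dynamic range, consistent with the "high dynamic range" claim.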

  5. Interactive dual-volume rendering visualization with real-time fusion and transfer function enhancement

    NASA Astrophysics Data System (ADS)

    Macready, Hugh; Kim, Jinman; Feng, David; Cai, Weidong

    2006-03-01

    Dual-modality imaging scanners combining functional PET and anatomical CT pose a challenge for volumetric visualization, which can be limited by high computational demand and expense. This study aims at providing physicians with multi-dimensional visualization tools to navigate and manipulate the data on a consumer PC. We have maximized the utilization of the pixel-shader architecture of low-cost graphics hardware and texture-based volume rendering to provide visualization tools with a high degree of interactivity. All software was developed using OpenGL and Silicon Graphics Inc. Volumizer, and tested on a PC notebook with a Pentium mobile CPU and 64 MB of graphics memory. We render the individual modalities separately and perform real-time per-voxel fusion. We designed a novel "alpha-spike" transfer function to interactively identify structures of interest in volume renderings of PET/CT. It works by assigning a non-linear opacity to the voxels, allowing the physician to selectively eliminate or reveal information from the PET/CT volumes. As the PET and CT are rendered independently, manipulations can be applied to the individual volumes; for instance, a transfer function can be applied to the CT to reveal the lung boundary while the fusion ratio between CT and PET is adjusted to enhance the contrast of a tumour region, with the manipulated data sets fused together in real time as the adjustments are made. In addition to conventional navigation and manipulation tools, such as scaling, LUTs, and volume slicing, our strategy permits efficient visualization of PET/CT volume renderings, which can potentially aid interpretation and diagnosis.
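
The abstract does not give the exact form of the "alpha-spike" transfer function; one plausible reading (a narrow opacity spike around an intensity of interest on top of a near-transparent baseline) can be sketched as:

```python
import numpy as np

def alpha_spike(intensity, center, width, peak_alpha=1.0, base_alpha=0.02):
    """Hypothetical 'alpha-spike' opacity curve: near-transparent baseline
    plus a narrow Gaussian spike around the intensity of interest."""
    spike = np.exp(-0.5 * ((intensity - center) / width) ** 2)
    return base_alpha + (peak_alpha - base_alpha) * spike

voxels = np.linspace(0.0, 1.0, 5)  # normalized voxel intensities
# Peak opacity at the chosen intensity, near-transparent elsewhere.
print(np.round(alpha_spike(voxels, center=0.5, width=0.05), 3))
```

Moving the spike's center and width interactively is what lets the user reveal one tissue or uptake range at a time while the rest of the volume stays translucent.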

  6. The Quality and Readability of Information Available on the Internet Regarding Lumbar Fusion

    PubMed Central

    Zhang, Dafang; Schumacher, Charles; Harris, Mitchel B.; Bono, Christopher M.

    2015-01-01

    Study Design An Internet-based evaluation of Web sites regarding lumbar fusion. Objective The Internet has become a major resource for patients; however, the quality and readability of Internet information regarding lumbar fusion is unclear. The objective of this study is to evaluate the quality and readability of Internet information regarding lumbar fusion and to determine whether these measures changed with Web site modality, complexity of the search term, or Health on the Net Code of Conduct certification. Methods Using five search engines and three different search terms of varying complexity (“low back fusion,” “lumbar fusion,” and “lumbar arthrodesis”), we identified and reviewed 153 unique Web site hits for information quality and readability. Web sites were specifically analyzed by search term and Web site modality. Information quality was evaluated on a 5-point scale. Information readability was assessed using the Flesch-Kincaid score for reading grade level. Results The average quality score was low. The average reading grade level was nearly six grade levels above that recommended by National Work Group on Literacy and Health. The quality and readability of Internet information was significantly dependent on Web site modality. The use of more complex search terms yielded information of higher reading grade level but not higher quality. Conclusions Higher-quality information about lumbar fusion conveyed using language that is more readable by the general public is needed on the Internet. It is important for health care providers to be aware of the information accessible to patients, as it likely influences their decision making regarding care. PMID:26933614

  7. The Quality and Readability of Information Available on the Internet Regarding Lumbar Fusion.

    PubMed

    Zhang, Dafang; Schumacher, Charles; Harris, Mitchel B; Bono, Christopher M

    2016-03-01

    Study Design An Internet-based evaluation of Web sites regarding lumbar fusion. Objective The Internet has become a major resource for patients; however, the quality and readability of Internet information regarding lumbar fusion is unclear. The objective of this study is to evaluate the quality and readability of Internet information regarding lumbar fusion and to determine whether these measures changed with Web site modality, complexity of the search term, or Health on the Net Code of Conduct certification. Methods Using five search engines and three different search terms of varying complexity ("low back fusion," "lumbar fusion," and "lumbar arthrodesis"), we identified and reviewed 153 unique Web site hits for information quality and readability. Web sites were specifically analyzed by search term and Web site modality. Information quality was evaluated on a 5-point scale. Information readability was assessed using the Flesch-Kincaid score for reading grade level. Results The average quality score was low. The average reading grade level was nearly six grade levels above that recommended by National Work Group on Literacy and Health. The quality and readability of Internet information was significantly dependent on Web site modality. The use of more complex search terms yielded information of higher reading grade level but not higher quality. Conclusions Higher-quality information about lumbar fusion conveyed using language that is more readable by the general public is needed on the Internet. It is important for health care providers to be aware of the information accessible to patients, as it likely influences their decision making regarding care.

  8. Laser-induced fusion of human embryonic stem cells with optical tweezers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Shuxun; Wang Xiaolin; Sun Dong

    2013-07-15

    We report a study on the laser-induced fusion of human embryonic stem cells (hESCs) at the single-cell level. Cells were manipulated by optical tweezers and fused under irradiation with pulsed UV laser at 355 nm. Successful fusion was indicated by green fluorescence protein transfer. The influence of laser pulse energy on the fusion efficiency was investigated. The fused products were viable as gauged by live cell staining. Successful fusion of hESCs with somatic cells was also demonstrated. The reported fusion outcome may facilitate studies of cell differentiation, maturation, and reprogramming.

  9. Fusion/Astrophysics Teacher Research Academy

    NASA Astrophysics Data System (ADS)

    Correll, Donald

    2005-10-01

    In order to engage California high school science teachers in the area of plasma physics and fusion research, LLNL's Fusion Energy Program has partnered with the UC Davis Edward Teller Education Center, ETEC (http://etec.ucdavis.edu), the Stanford University Solar Center (http://solar-center.stanford.edu) and LLNL's Science / Technology Education Program, STEP (http://education.llnl.gov). A four-level "Fusion & Astrophysics Research Academy" has been designed to give teachers experience in conducting research using spectroscopy with their students. Spectroscopy, and its relationship to atomic physics and electromagnetism, provides an ideal plasma "bridge" to the CA Science Education Standards (http://www.cde.ca.gov/be/st/ss/scphysics.asp). Teachers attend multiple-day professional development workshops to explore new research activities for use in the high school science classroom. A Level I, 3-day program consists of two days where teachers learn how plasma researchers use spectrometers, followed by instruction in how to use a research-grade spectrometer for their own investigations. A 3rd day includes touring LLNL's SSPX (http://www.mfescience.org/sspx/) facility to see spectrometry being used to measure plasma properties. Spectrometry classroom kits are made available for loan to participating teachers. Level I workshop results (http://education.llnl.gov/fusion/astro/) will be presented along with plans being developed for Level II (one week advanced SKA's), Level III (pre-internship), and Level IV (summer internship) research academies.

  10. Pixel level optical-transfer-function design based on the surface-wave-interferometry aperture

    PubMed Central

    Zheng, Guoan; Wang, Yingmin; Yang, Changhuei

    2010-01-01

    The design of the optical transfer function (OTF) is of significant importance for optical information processing in various imaging and vision systems. Typically, OTF design relies on a sophisticated bulk optical arrangement in the light path of the optical system. In this letter, we demonstrate a surface-wave-interferometry aperture (SWIA) that can be directly incorporated onto optical sensors to accomplish OTF design at the pixel level. The aperture design is based on the bull's eye structure: it consists of a central hole (300 nm diameter) and a periodic groove (560 nm period) in a 340 nm thick gold layer. We show, with both simulation and experiment, that different types of optical transfer functions (notch, highpass and lowpass filters) can be achieved by manipulating the interference between the direct transmission through the central hole and the surface wave (SW) component induced by the periodic groove. Pixel-level OTF design provides a low-cost, ultra-robust, highly compact method for numerous applications such as optofluidic microscopy, wavefront detection, darkfield imaging, and computational photography. PMID:20721038

  11. A CMOS pixel sensor prototype for the outer layers of linear collider vertex detector

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Morel, F.; Hu-Guo, C.; Himmi, A.; Dorokhov, A.; Hu, Y.

    2015-01-01

    The International Linear Collider (ILC) imposes stringent requirements on high-precision vertex detectors (VXD). CMOS pixel sensors (CPS) have been considered as an option for the VXD of the International Large Detector (ILD), one of the detector concepts proposed for the ILC. MIMOSA-31, developed at IPHC-Strasbourg, is the first CPS integrated with 4-bit column-level ADCs for the outer layers of the VXD, adapted to an original concept minimizing the power consumption. It is composed of a matrix of 64 rows and 48 columns. The pixel concept combines in-pixel amplification with a correlated double sampling (CDS) operation in order to reduce the temporal noise and fixed pattern noise (FPN). At the bottom of the pixel array, each column is terminated with a self-triggered analog-to-digital converter (ADC). The ADC design was optimized for power saving at a sampling frequency of 6.25 MS/s. The prototype chip was fabricated in a 0.35 μm CMOS technology. This paper presents the details of the prototype chip and its test results.
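
The noise-cancellation idea behind CDS (subtracting a reset sample from a signal sample so per-pixel offsets and correlated reset noise drop out) can be illustrated numerically; the noise magnitudes below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 10_000

# Per-pixel offsets (fixed pattern noise) and the kTC reset noise appear in
# both samples; only the signal sample contains the signal. Units: electrons.
fpn    = rng.normal(0, 50, n_pixels)   # fixed pattern noise (static offsets)
reset  = rng.normal(0, 30, n_pixels)   # reset (kTC) noise of this frame
signal = 1000.0                        # true signal level

sample_reset  = fpn + reset            # sampled just after reset
sample_signal = fpn + reset + signal   # sampled after integration

cds = sample_signal - sample_reset     # correlated double sampling
print(round(cds.mean(), 1), round(cds.std(), 3))  # -> 1000.0 0.0
```

Both the FPN and the correlated reset noise cancel exactly in this idealized model; in a real pixel, only noise that is correlated between the two samples is removed, while uncorrelated temporal noise remains.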

  12. In-situ device integration of large-area patterned organic nanowire arrays for high-performance optical sensors

    PubMed Central

    Wu, Yiming; Zhang, Xiujuan; Pan, Huanhuan; Deng, Wei; Zhang, Xiaohong; Zhang, Xiwei; Jie, Jiansheng

    2013-01-01

    Single-crystalline organic nanowires (NWs) are important building blocks for future low-cost and efficient nano-optoelectronic devices due to their extraordinary properties. However, it remains a critical challenge to achieve large-scale organic NW array assembly and device integration. Herein, we demonstrate a feasible one-step method for large-area patterned growth of cross-aligned single-crystalline organic NW arrays and their in-situ device integration for optical image sensors. The integrated image sensor circuitry contained a 10 × 10 pixel array in an area of 1.3 × 1.3 mm2, showing high spatial resolution, excellent stability and reproducibility. More importantly, 100% of the pixels successfully operated at a high response speed and relatively small pixel-to-pixel variation. The high yield and high spatial resolution of the operational pixels, along with the high integration level of the device, clearly demonstrate the great potential of the one-step organic NW array growth and device construction approach for large-scale optoelectronic device integration. PMID:24287887

  13. Qualification and calibration tests of detector modules for the CMS Pixel Phase 1 upgrade

    NASA Astrophysics Data System (ADS)

    Zhu, D.; Backhaus, M.; Berger, P.; Meinhard, M.; Starodumov, A.; Tavolaro, V.

    2018-01-01

    In high energy particle physics, accelerator and detector upgrades always go hand in hand. The instantaneous luminosity of the Large Hadron Collider will increase to up to L = 2×10^34 cm^-2 s^-1 during Run 2 and until 2023. In order to cope with such luminosities, the pixel detector of the CMS experiment was replaced in early 2017. The so-called CMS Pixel Phase 1 upgrade detector consists of 1184 modules of a new design. An important production step is module qualification and calibration, ensuring proper functionality within the detector. This paper summarizes the qualification and calibration tests and the results for modules used in the innermost two detector layers, with a focus on methods using module-internal calibration signals. Extended characterizations at the pixel level, such as electronic noise and bump bond connectivity, optimization of operational parameters, sensor quality and thermal stress resistance, were performed using a customized setup with a controlled environment. It could be shown that the selected modules have on average 0.55‰ ± 0.01‰ defective pixels and that all performance parameters stay within their specifications.

  14. Pixel parallel localized driver design for a 128 x 256 pixel array 3D 1Gfps image sensor

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Dao, V. T. S.; Etoh, T. G.; Charbon, E.

    2017-02-01

    In this paper, a 3D 1 Gfps BSI image sensor is proposed, in which 128 × 256 pixels are located in the top-tier chip and a 32 × 32 localized driver array in the bottom-tier chip. Pixels are designed with Multiple Collection Gates (MCG), which collect photons selectively, with different collection gates active at intervals of 1 ns to achieve 1 Gfps. For the drivers, a global PLL is designed, consisting of a ring oscillator with 6-stage current-starved differential inverters and achieving a wide frequency tuning range from 40 MHz to 360 MHz (20 ps rms jitter). The drivers are replicas of the ring oscillator that operates within the PLL. Together with level shifters and XNOR gates, continuous 3.3 V pulses are generated with the desired pulse width, which is 1/12 of the PLL clock period. The driver array is activated by a START signal, which propagates through a highly balanced clock tree, to activate all the pixels at the same time with virtually negligible skew.

  15. Spatial, Temporal and Spectral Satellite Image Fusion via Sparse Representation

    NASA Astrophysics Data System (ADS)

    Song, Huihui

    Remote sensing provides good measurements for monitoring and further analyzing climate change, ecosystem dynamics, and human activities at global or regional scales. Over the past two decades, the number of launched satellite sensors has been increasing with the development of aerospace technologies and the growing requirements for remote sensing data in a vast number of application fields. However, a key technological challenge confronting these sensors is that they trade off spatial resolution against other properties, including temporal resolution, spectral resolution, swath width, etc., due to the limitations of hardware technology and budget constraints. To increase the spatial resolution of data while retaining other good properties, one possible cost-effective solution is to explore data integration methods that can fuse multi-resolution data from multiple sensors, thereby enhancing the application capabilities of available remote sensing data. In this thesis, we propose to fuse spatial resolution with temporal resolution and with spectral resolution, respectively, based on sparse representation theory. Taking the case of Landsat ETM+ (with a spatial resolution of 30 m and a temporal resolution of 16 days) and MODIS (with a spatial resolution of 250 m ~ 1 km and daily temporal resolution) reflectance, we propose two spatial-temporal fusion methods to combine the fine spatial information of the Landsat image and the daily temporal resolution of the MODIS image. Motivated by the fact that the images from these two sensors are comparable on corresponding bands, we propose to link their spatial information on an available Landsat-MODIS image pair (captured on a prior date) and then predict the Landsat image from its MODIS counterpart on the prediction date. To learn the spatial details well from the prior images, we use a redundant dictionary to extract the basic representation atoms for both Landsat and MODIS images based on sparse representation. Under the scenario of two prior Landsat-MODIS image pairs, we build the corresponding relationship between the difference images of MODIS and ETM+ by training a low- and high-resolution dictionary pair from the given prior image pairs. In the second scenario, i.e., with only one Landsat-MODIS image pair available, we directly correlate MODIS and ETM+ data through an image degradation model. The fusion stage is then achieved by super-resolving the MODIS image combined with high-pass modulation in a two-layer fusion framework. Remarkably, the proposed spatial-temporal fusion methods form a unified framework for blending remote sensing images with phenology change or land-cover-type change. Based on the proposed spatial-temporal fusion models, we propose to monitor land use/land cover changes in Shenzhen, China. As a fast-growing city, Shenzhen faces the problem of detecting rapid changes for both rational city planning and sustainable development. However, the cloudy and rainy weather in the region where Shenzhen is located makes the capture cycle of high-quality satellite images longer than the sensors' normal revisit periods. Spatial-temporal fusion methods can tackle this problem by improving the spatial resolution of images with coarse spatial resolution but frequent temporal coverage, thereby making the detection of rapid changes possible. On two Landsat-MODIS datasets with annual and monthly changes, respectively, we apply the proposed spatial-temporal fusion methods to the task of multiple change detection. Afterward, we propose a novel spatial and spectral fusion method for satellite multispectral and hyperspectral (or high-spectral) images based on dictionary-pair learning and sparse non-negative matrix factorization. By combining the spectral information from the hyperspectral image, which is characterized by low spatial resolution but high spectral resolution (abbreviated LSHS), and the spatial information from the multispectral image, which features high spatial resolution but low spectral resolution (abbreviated HSLS), this method aims to generate fused data with both high spatial and high spectral resolution. Motivated by the observation that each hyperspectral pixel can be represented by a linear combination of a few endmembers, the method first extracts the spectral bases of the LSHS and HSLS images by making full use of the rich spectral information in the LSHS data. The spectral bases of these two categories of data then form a dictionary pair, owing to their correspondence in representing each pixel spectrum of the LSHS and HSLS data, respectively. Subsequently, the LSHS image is spatially unmixed by representing the HSLS image with respect to the corresponding learned dictionary to derive its representation coefficients. Combining the spectral bases of the LSHS data and the representation coefficients of the HSLS data, we finally derive the fused data characterized by the spectral resolution of the LSHS data and the spatial resolution of the HSLS data.
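
The sparse representation step underlying these methods, i.e., coding a signal over a redundant dictionary with only a few active atoms, can be illustrated with a toy greedy matching pursuit; the dictionary here is random, not a learned Landsat/MODIS dictionary pair:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=2):
    """Greedy sparse coding: repeatedly pick the unit-norm atom most
    correlated with the residual. A toy stand-in for the sparse
    representation step."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        corr = dictionary.T @ residual
        k = int(np.argmax(np.abs(corr)))
        coeffs[k] += corr[k]                      # atoms are unit-norm
        residual = residual - corr[k] * dictionary[:, k]
    return coeffs

rng = np.random.default_rng(1)
D = rng.normal(size=(8, 20))
D /= np.linalg.norm(D, axis=0)                    # redundant unit-norm dictionary
true = np.zeros(20)
true[[3, 11]] = [2.0, -1.5]                       # a 2-sparse ground truth
y = D @ true
coeffs = matching_pursuit(y, D, n_atoms=4)
print(round(float(np.linalg.norm(y - D @ coeffs)), 3))
```

In the fusion methods above, the same idea is applied per patch, with the sparse coefficients found on one sensor's dictionary reused with the other sensor's dictionary to predict the fine-resolution image.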

  16. Deep learning decision fusion for the classification of urban remote sensing data

    NASA Astrophysics Data System (ADS)

    Abdi, Ghasem; Samadzadegan, Farhad; Reinartz, Peter

    2018-01-01

    Multisensor data fusion is one of the most common and popular topics in remote sensing data classification, as it provides a robust and complete description of the objects of interest. Furthermore, deep feature extraction has recently attracted significant interest and has become a hot research topic in the geoscience and remote sensing community. A deep learning decision fusion approach is presented to perform multisensor urban remote sensing data classification. After deep features are extracted by utilizing joint spectral-spatial information, a soft-decision classifier is applied to train high-level feature representations and to fine-tune the deep learning framework. Next, decision-level fusion classifies objects of interest through the joint use of the sensors. Finally, context-aware object-based postprocessing is used to enhance the classification results. A series of comparative experiments are conducted on the widely used dataset of the 2014 IEEE GRSS data fusion contest. The obtained results illustrate the considerable advantages of the proposed deep learning decision fusion over traditional classifiers.
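
The decision-level fusion step can be sketched as a simple soft-voting rule over per-sensor class probabilities; the sensor names and numbers below are purely illustrative, not the paper's classifier outputs:

```python
import numpy as np

def decision_fusion(prob_maps, weights=None):
    """Average per-sensor class-probability maps (optionally weighted) and
    take the argmax -- a simple soft decision-level fusion rule."""
    stacked = np.stack(prob_maps)                 # (n_sensors, n_pixels, n_classes)
    fused = np.average(stacked, axis=0, weights=weights)
    return fused.argmax(axis=-1)

# Two sensors, three pixels, three classes: the second sensor is more
# confident about pixel 1 and flips the fused decision there.
optical = np.array([[0.6, 0.3, 0.1],
                    [0.4, 0.5, 0.1],
                    [0.2, 0.2, 0.6]])
lidar   = np.array([[0.7, 0.2, 0.1],
                    [0.1, 0.1, 0.8],
                    [0.3, 0.1, 0.6]])
print(decision_fusion([optical, lidar]))  # -> [0 2 2]
```

Weighting the average by per-sensor reliability is a common refinement of this rule.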

  17. Multifocus watermarking approach based on discrete cosine transform.

    PubMed

    Waheed, Safa Riyadh; Alkawaz, Mohammed Hazim; Rehman, Amjad; Almazyad, Abdulaziz S; Saba, Tanzila

    2016-05-01

    The image fusion process consolidates data and information from multiple images of the same scene into a single image. Each source image may represent a partial view of the scene and contains both relevant and irrelevant information. In this study, a new image fusion method is proposed that utilizes the Discrete Cosine Transform (DCT) to combine the source images into a single compact image containing a more accurate depiction of the scene than any of the individual source images. In addition, the fused image retains the best possible quality, without distorted appearance or loss of data. The DCT algorithm is considered efficient for image fusion. The proposed scheme is performed in five steps: (1) the RGB colour input image is split into its three channels R, G, and B for each source image; (2) the DCT algorithm is applied to each channel; (3) variance values are computed for the corresponding 8 × 8 blocks of each channel; (4) corresponding blocks of the source images are compared on the basis of their variance values, and the block with the maximum variance is selected as the block in the new image, a process repeated for all channels of the source images; (5) the inverse DCT is applied to each fused channel to convert coefficient values back to pixel values, and all channels are combined to generate the fused image. The proposed technique can potentially avoid unwanted side effects, such as blurring or blocking artifacts, that reduce the quality of the fused image. The proposed approach is evaluated using three metrics: the average of Q(abf), the standard deviation, and the peak signal-to-noise ratio. The experimental results of the proposed technique are good compared with older techniques. © 2016 Wiley Periodicals, Inc.
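
Steps (2)-(5) above can be sketched for a single channel as follows; taking the variance of the AC DCT coefficients as the block-selection criterion is one reasonable reading of the description, not necessarily the paper's exact rule:

```python
import numpy as np
from scipy.fft import dctn, idctn

def fuse_dct_variance(sources, block=8):
    """Per-channel sketch of the variance rule: for each 8x8 block, keep the
    source block whose DCT AC coefficients have the largest variance, then
    invert the DCT."""
    h, w = sources[0].shape
    fused = np.zeros((h, w))
    for i in range(0, h, block):
        for j in range(0, w, block):
            best, best_var = None, -1.0
            for src in sources:
                coeffs = dctn(src[i:i+block, j:j+block], norm='ortho')
                var = coeffs.ravel()[1:].var()   # variance of AC coefficients
                if var > best_var:
                    best, best_var = coeffs, var
            fused[i:i+block, j:j+block] = idctn(best, norm='ortho')
    return fused

rng = np.random.default_rng(2)
a = np.zeros((16, 16)); a[:, :8] = rng.normal(size=(16, 8))  # sharp left half
b = np.zeros((16, 16)); b[:, 8:] = rng.normal(size=(16, 8))  # sharp right half
fused = fuse_dct_variance([a, b])
print(np.allclose(fused, a + b))  # -> True: the higher-variance block wins on each side
```

For a colour image, the same rule would be applied independently to the R, G and B channels before recombining them.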

  18. Shape and Albedo from Shading (SAfS) for Pixel-Level dem Generation from Monocular Images Constrained by Low-Resolution dem

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Chung Liu, Wai; Grumpe, Arne; Wöhler, Christian

    2016-06-01

    Lunar topographic information, e.g., a lunar DEM (Digital Elevation Model), is very important for lunar exploration missions and scientific research. Lunar DEMs are typically generated by photogrammetric image processing or laser altimetry, of which the photogrammetric methods require multiple stereo images of an area. DEMs generated by these methods usually rely on various interpolation techniques, which introduce interpolation artifacts into the resulting DEM. On the other hand, photometric shape reconstruction, e.g., SfS (Shape from Shading), extensively studied in the field of computer vision, has been introduced for pixel-level resolution DEM refinement. SfS methods can reconstruct the pixel-wise terrain details that explain a given image of the terrain. If the terrain and its corresponding pixel-wise albedo are estimated simultaneously, the problem becomes one of SAfS (Shape and Albedo from Shading) and is under-determined without additional information. Previous work shows strong statistical regularities in the albedo of natural objects, and this assumption is even more plausible for the lunar surface, whose albedo is less complex than the Earth's. In this paper we present a method that refines a lower-resolution DEM to pixel-level resolution given a monocular image of the area under known illumination, while simultaneously estimating the corresponding pixel-wise albedo map. We regularize the behaviour of albedo and shape such that the optimized terrain and albedo are likely solutions that explain the corresponding image. The parameters of the approach are optimized through a kernel-based relaxation framework to gain computational advantages. In this research we employ the Lunar-Lambertian model for reflectance modelling; the framework of the algorithm is expected to be independent of any specific reflectance model. Experiments are carried out using monocular images from the Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) (0.5 m spatial resolution), constrained by the SELENE and LRO Elevation Model (SLDEM 2015) at 60 m spatial resolution. The results indicate that local details are largely recovered by the algorithm, while low-frequency topographic consistency is affected by the low-resolution DEM.
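A generic SAfS objective of the kind described above can be written as follows (our notation; not necessarily the authors' exact formulation):

```latex
\min_{z,\,\rho}\;\sum_{p}\Big(I(p)-\rho(p)\,R\big(\mathbf{n}_{z}(p),\mathbf{s}\big)\Big)^{2}
\;+\;\lambda\,\big\|\mathcal{D}(z)-z_{\mathrm{DEM}}\big\|^{2}
\;+\;\mu\sum_{p}\big\|\nabla\rho(p)\big\|^{2}
```

Here $I$ is the observed image, $R$ the reflectance model (e.g., Lunar-Lambertian), $\mathbf{n}_{z}(p)$ the surface normal implied by the refined DEM $z$ at pixel $p$, $\mathbf{s}$ the known illumination direction, and $\mathcal{D}$ a downsampling operator onto the low-resolution DEM grid; the last term encodes the statistical regularity (smoothness) assumed for the albedo $\rho$.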

  19. Calibrating the pixel-level Kepler imaging data with a causal data-driven model

    NASA Astrophysics Data System (ADS)

    Wang, Dun; Foreman-Mackey, Daniel; Hogg, David W.; Schölkopf, Bernhard

    2015-01-01

    In general, astronomical observations are affected by several kinds of noise, each with its own causal source: photon noise, stochastic source variability, and residuals from imperfect calibration of the detector or telescope. In particular, the precision of NASA Kepler photometry for exoplanet science—the most precise photometric measurements of stars ever made—appears to be limited by unknown or untracked variations in spacecraft pointing and temperature, and by unmodeled stellar variability. Here we present the Causal Pixel Model (CPM) for Kepler data, a data-driven model intended to capture variability but preserve transit signals. The CPM works at the pixel level (not the photometric measurement level); it can capture more fine-grained information about the variation of the spacecraft than is available in the pixel-summed aperture photometry. The basic idea is that CPM predicts each target pixel value from a large number of pixels of other stars that share the instrument variabilities while containing no information about possible transits at the target star. In addition, we use the target star's future and past (auto-regression). By appropriately separating the data into training and test sets, we ensure that information about any transit is perfectly isolated from the fitting of the model. The method has four hyper-parameters (the number of predictor stars, the auto-regressive window size, and two L2-regularization amplitudes for model components), which we set by cross-validation. We determine a generic set of hyper-parameters that works well on most of the stars with 11≤V≤12 mag and apply the method to a corresponding set of target stars with known planet transits. We find that we can consistently outperform (for the purposes of exoplanet detection) the Kepler Pre-search Data Conditioning (PDC) method for exoplanet discovery, often improving the SNR by a factor of two. While we have not yet exhaustively tested the method at other magnitudes, we expect it to be generally applicable, with positive consequences for subsequent exoplanet detection or stellar variability studies (in which case the autoregressive part must be excluded to preserve intrinsic variability).
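The pixel-prediction idea can be illustrated with a toy ridge regression (numpy only). This sketch omits the autoregressive columns and uses a single L2 amplitude, both simplifications relative to the actual CPM; the names and the synthetic data are ours.

```python
import numpy as np

def cpm_fit_predict(target, predictors, train_mask, l2=1e-3):
    """Fit a ridge regression of one target pixel's light curve on other
    stars' pixel light curves using only 'train' cadences, then predict
    every cadence.  Residuals on held-out cadences retain any transit."""
    Xtr, ytr = predictors[train_mask], target[train_mask]
    n_features = predictors.shape[1]
    w = np.linalg.solve(Xtr.T @ Xtr + l2 * np.eye(n_features), Xtr.T @ ytr)
    return predictors @ w

# Synthetic demo: a shared systematic trend plus an injected transit.
rng = np.random.default_rng(1)
t = np.arange(200)
trend = np.sin(t / 20.0) + 0.5 * np.cos(t / 7.0)    # spacecraft systematics
predictors = trend[:, None] * rng.uniform(1, 3, 10) + 0.01 * rng.normal(size=(200, 10))
target = 3.0 * trend
target[100:110] -= 1.0                              # injected transit dip
train_mask = np.ones(200, dtype=bool)
train_mask[90:120] = False                          # keep the transit out of training
residual = target - cpm_fit_predict(target, predictors, train_mask)
```

Because the model only ever sees cadences outside the excluded window, the dip survives in the residual light curve while the shared systematics are removed.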

  20. Active Pixel Sensors: Are CCD's Dinosaurs?

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R.

    1993-01-01

    Charge-coupled devices (CCDs) are presently the technology of choice for most imaging applications. In the 23 years since their invention in 1970, they have evolved to a sophisticated level of performance. However, as with all technologies, we can be certain that they will be supplanted someday. In this paper, the Active Pixel Sensor (APS) technology is explored as a possible successor to the CCD. An active pixel is defined as a detector array technology that has at least one active transistor within the pixel unit cell. The APS eliminates the need for nearly perfect charge transfer -- the Achilles' heel of CCDs. It is this requirement for nearly perfect charge transfer that makes CCDs radiation 'soft,' difficult to use under low light conditions, difficult to manufacture in large array sizes, difficult to integrate with on-chip electronics, difficult to use at low temperatures, difficult to use at high frame rates, and difficult to manufacture in non-silicon materials that would extend wavelength response.

  1. A review of advances in pixel detectors for experiments with high rate and radiation

    NASA Astrophysics Data System (ADS)

    Garcia-Sciveres, Maurice; Wermes, Norbert

    2018-06-01

    The Large Hadron Collider (LHC) experiments ATLAS and CMS have established hybrid pixel detectors as the instrument of choice for particle tracking and vertexing in high-rate, high-radiation environments, as they operate close to the LHC interaction points. With the High Luminosity LHC (HL-LHC) upgrade now in sight, for which the tracking detectors will be completely replaced, new generations of pixel detectors are being devised. They have to address enormous challenges in terms of data throughput and radiation levels, both ionizing and non-ionizing, that harm the sensing and readout parts of pixel detectors alike. Advances in microelectronics and microprocessing technologies now enable large-scale detector designs with unprecedented performance in measurement precision (space and time), radiation-hard sensors and readout chips, hybridization techniques, lightweight supports, and fully monolithic approaches to meet these challenges. This paper reviews the world-wide effort on these developments.

  2. Wavelet imaging cleaning method for atmospheric Cherenkov telescopes

    NASA Astrophysics Data System (ADS)

    Lessard, R. W.; Cayón, L.; Sembroski, G. H.; Gaidos, J. A.

    2002-07-01

    We present a new method of image cleaning for imaging atmospheric Cherenkov telescopes. The method uses wavelets to identify noise pixels in images of gamma-ray- and hadron-induced air showers. It selects more pixels containing Cherenkov photon signal than traditional image-processing techniques, while remaining equally efficient at rejecting pixels containing noise alone. Including more signal pixels in an image of an air shower allows a more accurate reconstruction, especially at lower gamma-ray energies, which produce low levels of light. We present results of Monte Carlo simulations of gamma-ray and hadronic air showers that show improved angular resolution using this cleaning procedure. Data from the Whipple Observatory's 10-m telescope are used to show the efficacy of the method for extracting a gamma-ray signal from the background of hadron-generated images.
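The principle of keeping only pixels whose wavelet coefficients rise above the noise can be sketched with a single-level 2D Haar transform (numpy only; the actual analysis likely uses a different wavelet family and multiple decomposition scales, and the threshold rule here is our assumption):

```python
import numpy as np

def haar2(img):
    """Single-level orthonormal 2D Haar transform of an even-sized image."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return ((a + b + c + d) / 2,   # LL (approximation)
            (a - b + c - d) / 2,   # LH
            (a + b - c - d) / 2,   # HL
            (a - b - c + d) / 2)   # HH

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2 (the transform matrix is symmetric orthogonal)."""
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return out

def wavelet_clean(image, k=3.0):
    """Hard-threshold detail coefficients at k * sigma, where sigma is the
    noise level estimated from the HH band via the median absolute
    deviation, then reconstruct the cleaned image."""
    ll, lh, hl, hh = haar2(image)
    sigma = np.median(np.abs(hh)) / 0.6745      # MAD noise estimate
    thr = k * sigma
    lh, hl, hh = [np.where(np.abs(c) > thr, c, 0.0) for c in (lh, hl, hh)]
    return ihaar2(ll, lh, hl, hh)
```

Signal pixels can then be selected by thresholding the cleaned image rather than the raw one, which keeps compact shower structure while suppressing isolated noise fluctuations.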

  3. Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.

    PubMed

    Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua

    2017-05-01

    In this paper, we overcome the limited dynamic range of conventional digital cameras and propose a method of realizing high dynamic range imaging (HDRI) with a novel programmable imaging system, the digital micromirror device (DMD) camera. The unique feature of the proposed method is that the spatial and temporal distribution of incident light in the DMD camera can be flexibly modulated, so that DMD pixel-level modulation always keeps the camera pixels at a reasonable exposure level. More importantly, it allows different light-intensity control algorithms to be used in our programmable imaging system to achieve HDRI. We implement an optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light-intensity control algorithm that effectively modulates differing light intensities to recover high dynamic range images. Experiments on different objects demonstrate the effectiveness of our method.
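As an illustration of per-pixel coded exposure, the following sketch simulates an adaptive control loop in which each pixel's effective exposure (the DMD duty cycle) is tuned until its reading sits mid-range, after which dividing by the exposure map recovers a wide-dynamic-range radiance estimate. The sensor model and control law are our assumptions, not the paper's algorithm.

```python
import numpy as np

FULL_WELL = 255.0  # saturation level of the simulated sensor

def capture(radiance, exposure):
    # Hypothetical linear sensor model: readings clip at FULL_WELL.
    return np.minimum(radiance * exposure, FULL_WELL)

def adaptive_hdr(radiance, n_iters=4, target=128.0):
    """Iteratively adjust each pixel's exposure: halve it where the pixel
    saturates, otherwise nudge it toward a mid-range reading; finally
    divide out the exposure map to estimate per-pixel radiance."""
    exposure = np.ones_like(radiance)
    for _ in range(n_iters):
        reading = capture(radiance, exposure)
        gain = np.clip(target / np.maximum(reading, 1e-6), 0.5, 2.0)
        exposure = np.where(reading >= FULL_WELL, exposure * 0.5,
                            exposure * gain)
    return capture(radiance, exposure) / exposure
```

Pixels that end the loop unsaturated are recovered exactly in this noiseless model, regardless of how many decades of radiance the scene spans.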

  4. The realization of an SVGA OLED-on-silicon microdisplay driving circuit

    NASA Astrophysics Data System (ADS)

    Bohua, Zhao; Ran, Huang; Fei, Ma; Guohua, Xie; Zhensong, Zhang; Huan, Du; Jiajun, Luo; Yi, Zhao

    2012-03-01

    An 800 × 600 pixel organic light-emitting diode-on-silicon (OLEDoS) driving circuit is proposed. The pixel cell circuit utilizes a subthreshold-voltage-scaling structure which can modulate the pixel current between 170 pA and 11.4 nA. In order to keep the voltage of the column bus at a relatively high level, the sample-and-hold circuits adopt a ping-pong operation. The driving circuit is fabricated in a commercially available 0.35 μm two-poly four-metal 3.3 V mixed-signal CMOS process. The pixel cell area is 15 × 15 μm² and the total chip occupies 15.5 × 12.3 mm². Experimental results show that the chip can work properly at a frame frequency of 60 Hz and has a 64-grayscale (monochrome) display. The total power consumption of the chip is about 85 mW with a 3.3 V supply voltage.

  5. Infrared sensors for Earth observation missions

    NASA Astrophysics Data System (ADS)

    Ashcroft, P.; Thorne, P.; Weller, H.; Baker, I.

    2007-10-01

    SELEX S&AS is developing a family of infrared sensors for Earth observation missions. The spectral bands cover shortwave infrared (SWIR) channels from around 1 μm to long-wave infrared (LWIR) channels up to 15 μm. Our mercury cadmium telluride (MCT) technology has enabled a sensor array design that can satisfy the requirements of all of the SWIR and medium-wave infrared (MWIR) bands with near-identical arrays. This is made possible by the combination of a set of existing technologies that together enable a high degree of flexibility in the pixel geometry, sensitivity, and photocurrent integration capacity. The solution employs a photodiode array under the control of a readout integrated circuit (ROIC). The ROIC allows flexible geometries and in-pixel redundancy to maximise operability and reliability, by combining the photocurrent from a number of photodiodes into a single pixel. Defective or inoperable diodes (or "sub-pixels") can be deselected with tolerable impact on the overall pixel performance. The arrays will be fabricated using the "loophole" process in MCT grown by liquid-phase epitaxy (LPE). These arrays are inherently robust, offer high quantum efficiencies and have been used in previous space programs. The use of loophole arrays also offers access to SELEX's avalanche photodiode (APD) technology, allowing low-noise, highly uniform gain at the pixel level where photon flux is very low.
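The in-pixel redundancy described above amounts to summing the operable sub-pixel photocurrents and rescaling so pixel responsivity is preserved; a minimal sketch (the names and the rescaling choice are our assumptions):

```python
import numpy as np

def pixel_photocurrent(sub_currents, operable):
    """Combine a pixel's sub-pixel photodiode currents, ignoring
    deselected (defective) diodes and rescaling by the fraction of
    operable diodes so responsivity stays constant.
    sub_currents: (..., n_sub) array; operable: boolean mask, same shape."""
    n_sub = sub_currents.shape[-1]
    n_good = operable.sum(axis=-1)
    total = np.where(operable, sub_currents, 0.0).sum(axis=-1)
    return total * n_sub / np.maximum(n_good, 1)
```

A hot or dead diode is simply masked out, and the remaining diodes carry the pixel with a modest noise penalty.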

  6. ASIC Readout Circuit Architecture for Large Geiger Photodiode Arrays

    NASA Technical Reports Server (NTRS)

    Vasile, Stefan; Lipson, Jerold

    2012-01-01

    The objective of this work was to develop a new class of readout integrated circuit (ROIC) arrays to be operated with Geiger avalanche photodiode (GPD) arrays, by integrating multiple functions at the pixel level (smart-pixel or active pixel technology) in 250-nm CMOS (complementary metal oxide semiconductor) processes. In order to pack a maximum of functions within a minimum pixel size, the ROIC array is a full, custom application-specific integrated circuit (ASIC) design using a mixed-signal CMOS process with compact primitive layout cells. The ROIC array was processed to allow assembly in bump-bonding technology with photon-counting infrared detector arrays into 3-D imaging cameras (LADAR). The ROIC architecture was designed to work with either common- anode Si GPD arrays or common-cathode InGaAs GPD arrays. The current ROIC pixel design is hardwired prior to processing one of the two GPD array configurations, and it has the provision to allow soft reconfiguration to either array (to be implemented into the next ROIC array generation). The ROIC pixel architecture implements the Geiger avalanche quenching, bias, reset, and time to digital conversion (TDC) functions in full-digital design, and uses time domain over-sampling (vernier) to allow high temporal resolution at low clock rates, increased data yield, and improved utilization of the laser beam.

  7. Multilevel lumbar fusion and postoperative physiotherapy rehabilitation in a patient with persistent pain.

    PubMed

    Pons, Tracey; Shipton, Edward A

    2011-04-01

    There are no comparative randomised controlled trials of physiotherapy modalities for chronic low back and radicular pain associated with multilevel fusion. Physiotherapy-based rehabilitation to control pain and improve activation levels for persistent pain following multilevel fusion can be challenging. This is a case report of a 68-year-old man who was referred for physiotherapy intervention 10 months after a multilevel spinal fusion for spinal stenosis. He reported high levels of persistent postoperative pain with minimal activity as a consequence of his pain following the surgery. The physiotherapy interventions consisted of three phases of rehabilitation starting with pool exercise that progressed to land-based walking. These were all combined with transcutaneous electrical nerve stimulation (TENS) that was used consistently for up to 8 hours per day. As outcome measures, daily pain levels and walking distances were charted once the pool programme was completed (in the third phase). Phase progression was determined by shuttle test results. The pain level was correlated with the distance walked using linear regression over a 5-day average. Over a 5-day moving average, the pain level reduced and walking distance increased. The chart of recorded pain level and walking distance showed a trend toward decreased pain with the increased distance walked. In a patient undergoing multilevel lumbar fusion, the combined use of TENS and a progressive walking programme (from pool to land) reduced pain and increased walking distance. This improvement was despite poor medication compliance and a reported high level of postsurgical pain.

  8. TMPRSS2-ERG gene fusion status in minute (minimal) prostatic adenocarcinoma.

    PubMed

    Albadine, Roula; Latour, Mathieu; Toubaji, Antoun; Haffner, Michael; Isaacs, William B; A Platz, Elizabeth; Meeker, Alan K; Demarzo, Angelo M; Epstein, Jonathan I; Netto, George J

    2009-11-01

    Minute prostatic adenocarcinomas are generally considered to be of limited clinical significance. Given recent suggestions of TMPRSS2-ERG gene fusion association with aggressive prostatic adenocarcinoma, we evaluated the incidence of TMPRSS2-ERG fusion in minute prostatic adenocarcinomas. A total of 45 consecutive prostatectomies with minute adenocarcinoma were used for tissue microarray construction. A total of 63 consecutive non-minute, Gleason score 6 tumors, from a separate PSA-era prostatectomy tissue microarray, were used for comparison. FISH was carried out using ERG break-apart probes. Tumors were assessed for fusion by deletion (Edel) or split (Esplit), duplicated fusions and low-level copy number gain in normal ERG gene locus. Minute adenocarcinomas: Fusion was evaluable in 32/45 tumors (71%). Fifteen out of 32 (47%) tumors were positive for fusion. Six (19%) were of the Edel class and 7 (22%) were classified as combined Edel+Esplit. Non-minute adenocarcinomas (pT2): Fusion was identified in 20/30 tumors (67%). Four (13%) were of Edel class and 5 (17%) were combined Edel+Esplit. Duplicated fusions were encountered in 5 (16%) tumors. Non-minute adenocarcinomas (pT3): Fusion was identified in 19/33 (58%). Fusion was due to a deletion in 6 (18%) tumors. Seven tumors (21%) were classified as combined Edel+Esplit. One tumor showed Esplit alone. Duplicated fusions were encountered in 3 (9%) cases. The incidence of duplicated fusions was higher in non-minute adenocarcinomas (13 vs 0%; P=0.03). A trend for higher incidence of low-level copy number gain in normal ERG gene locus without fusion was noted in non-minute adenocarcinomas (10 vs 0%; P=0.07). We found a TMPRSS2-ERG fusion rate of 47% in minute adenocarcinomas. The latter is not significantly different from that of grade-matched non-minute adenocarcinomas. The incidence of duplicated fusion was higher in non-minute adenocarcinomas.
Our finding of comparable rate of TMPRSS2-ERG fusion in minute adenocarcinomas may argue against its value as a marker of aggressive prostate carcinoma phenotype.

  9. Return to Golf After Lumbar Fusion.

    PubMed

    Shifflett, Grant D; Hellman, Michael D; Louie, Philip K; Mikhail, Christopher; Park, Kevin U; Phillips, Frank M

    Spinal fusion surgery is being increasingly performed, yet few studies have focused on return to recreational sports after lumbar fusion and none have specifically analyzed return to golf. Most golfers successfully return to sport after lumbar fusion surgery. Case series. Level 4. All patients who underwent 1- or 2-level primary lumbar fusion surgery for degenerative pathologies performed by a single surgeon between January 2008 and October 2012 and had at least 1-year follow-up were included. Patients completed a specifically designed golf survey. Surveys were mailed, given during follow-up clinic, or answered during telephone contact. A total of 353 patients met the inclusion and exclusion criteria, with 200 responses (57%) to the questionnaire producing 34 golfers. The average age of golfers was 57 years (range, 32-79 years). In 79% of golfers, preoperative back and/or leg pain significantly affected their ability to play golf. Within 1 year from surgery, 65% of patients returned to practice and 52% returned to course play. Only 29% of patients stated that continued back/leg pain limited their play. Twenty-five patients (77%) were able to play the same amount of golf or more than before fusion surgery. Of those providing handicaps, 12 (80%) reported the same or an improved handicap. More than 50% of golfers return to on-course play within 1 year of lumbar fusion surgery. The majority of golfers can return to preoperative levels in terms of performance (handicap) and frequency of play. This investigation offers insight into when golfers return to sport after lumbar fusion surgery and provides surgeons with information to set realistic expectations postoperatively.

  10. The Formin Diaphanous Regulates Myoblast Fusion through Actin Polymerization and Arp2/3 Regulation

    PubMed Central

    Deng, Su; Bothe, Ingo; Baylies, Mary K.

    2015-01-01

    The formation of multinucleated muscle cells through cell-cell fusion is a conserved process from fruit flies to humans. Numerous studies have shown the importance of Arp2/3, its regulators, and branched actin for the formation of an actin structure, the F-actin focus, at the fusion site. This F-actin focus forms the core of an invasive podosome-like structure that is required for myoblast fusion. In this study, we find that the formin Diaphanous (Dia), which nucleates and facilitates the elongation of actin filaments, is essential for Drosophila myoblast fusion. Following cell recognition and adhesion, Dia is enriched at the myoblast fusion site, concomitant with, and having the same dynamics as, the F-actin focus. Through analysis of Dia loss-of-function conditions using mutant alleles but particularly a dominant negative Dia transgene, we demonstrate that reduction in Dia activity in myoblasts leads to a fusion block. Significantly, no actin focus is detected, and neither branched actin regulators, SCAR or WASp, accumulate at the fusion site when Dia levels are reduced. Expression of constitutively active Dia also causes a fusion block that is associated with an increase in highly dynamic filopodia, altered actin turnover rates and F-actin distribution, and mislocalization of SCAR and WASp at the fusion site. Together our data indicate that Dia plays two roles during invasive podosome formation at the fusion site: it dictates the level of linear F-actin polymerization, and it is required for appropriate branched actin polymerization via localization of SCAR and WASp. These studies provide new insight to the mechanisms of cell-cell fusion, the relationship between different regulators of actin polymerization, and invasive podosome formation that occurs in normal development and in disease. PMID:26295716

  11. Photovoltaic restoration of sight with high visual acuity

    PubMed Central

    Lorach, Henri; Goetz, Georges; Smith, Richard; Lei, Xin; Mandel, Yossi; Kamins, Theodore; Mathieson, Keith; Huie, Philip; Harris, James; Sher, Alexander; Palanker, Daniel

    2015-01-01

    Patients with retinal degeneration lose sight due to gradual demise of photoreceptors. Electrical stimulation of the surviving retinal neurons provides an alternative route for delivery of visual information. We demonstrate that subretinal arrays with 70 μm photovoltaic pixels provide highly localized stimulation, with electrical and visual receptive fields of comparable sizes in rat retinal ganglion cells. Similarly to normal vision, retinal response to prosthetic stimulation exhibits flicker fusion at high frequencies, adaptation to static images and non-linear spatial summation. In rats with retinal degeneration, these photovoltaic arrays provide spatial resolution of 64 ± 11 μm, corresponding to half of the normal visual acuity in pigmented rats. Ease of implantation of these wireless and modular arrays, combined with their high resolution opens the door to functional restoration of sight. PMID:25915832

  12. A spatially resolving x-ray crystal spectrometer for measurement of ion-temperature and rotation-velocity profiles on the Alcator C-Mod tokamak.

    PubMed

    Hill, K W; Bitter, M L; Scott, S D; Ince-Cushman, A; Reinke, M; Rice, J E; Beiersdorfer, P; Gu, M-F; Lee, S G; Broennimann, Ch; Eikenberry, E F

    2008-10-01

    A new spatially resolving x-ray crystal spectrometer capable of measuring continuous spatial profiles of high-resolution spectra (λ/Δλ > 6000) of He-like and H-like Ar Kα lines with good spatial (approximately 1 cm) and temporal (approximately 10 ms) resolution has been installed on the Alcator C-Mod tokamak. Two spherically bent crystals image the spectra onto four two-dimensional Pilatus II pixel detectors. Tomographic inversion enables inference of the local line emissivity, ion temperature (Ti), and toroidal plasma rotation velocity (vφ) from the line Doppler widths and shifts. The data analysis techniques, Ti and vφ profiles, analysis of fusion-neutron background, and predictions of performance on other tokamaks, including ITER, are presented.
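The inference from line shape to plasma parameters rests on the standard Doppler relations (our summary, assuming a Gaussian line profile with fitted centroid shift $\Delta\lambda_{\mathrm{shift}}$ and width $\sigma_{\lambda}$):

```latex
v_{\phi} \;=\; c\,\frac{\Delta\lambda_{\mathrm{shift}}}{\lambda_{0}},
\qquad
k_{B} T_{i} \;=\; m_{i} c^{2} \left(\frac{\sigma_{\lambda}}{\lambda_{0}}\right)^{2}
```

where $\lambda_{0}$ is the rest wavelength of the line and $m_{i}$ the ion mass; tomographic inversion supplies these quantities as functions of minor radius.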

  13. Pleiotropic Actions of Forskolin Result in Phosphatidylserine Exposure in Primary Trophoblasts

    PubMed Central

    Riddell, Meghan R.; Winkler-Lowen, Bonnie; Jiang, Yanyan; Davidge, Sandra T.; Guilbert, Larry J.

    2013-01-01

    Forskolin is an extract of the Coleus forskholii plant that is widely used in cell physiology to raise intracellular cAMP levels. In the field of trophoblast biology, forskolin is one of the primary treatments used to induce trophoblastic cellular fusion. The syncytiotrophoblast (ST) is a continuous multinucleated cell in the human placenta that separates maternal from fetal circulations and can only expand by fusion with its stem cell, the cytotrophoblast (CT). Functional investigation of any aspect of ST physiology requires in vitro differentiation of CT and de novo ST formation, thus selecting the most appropriate differentiation agent for the hypothesis being investigated is necessary as well as addressing potential off-target effects. Previous studies, using forskolin to induce fusion in trophoblastic cell lines, identified phosphatidylserine (PS) externalization to be essential for trophoblast fusion and showed that widespread PS externalization is present even after fusion has been achieved. PS is a membrane phospholipid that is primarily localized to the inner-membrane leaflet. Externalization of PS is a hallmark of early apoptosis and is involved in cellular fusion of myocytes and macrophages. We were interested to examine whether PS externalization was also involved in primary trophoblast fusion. We show widespread PS externalization occurs after 72 hours when fusion was stimulated with forskolin, but not when stimulated with the cell permeant cAMP analog Br-cAMP. Using a forskolin analog, 1,9-dideoxyforskolin, which stimulates membrane transporters but not adenylate cyclase, we found that widespread PS externalization required both increased intracellular cAMP levels and stimulation of membrane transporters. Treatment of primary trophoblasts with Br-cAMP alone did not result in widespread PS externalization despite high levels of cellular fusion. 
Thus, we concluded that widespread PS externalization is independent of trophoblast fusion and, importantly, provide evidence that the common differentiation agent forskolin has previously unappreciated pleiotropic effects on trophoblastic cells. PMID:24339915

  14. Pleiotropic actions of forskolin result in phosphatidylserine exposure in primary trophoblasts.

    PubMed

    Riddell, Meghan R; Winkler-Lowen, Bonnie; Jiang, Yanyan; Davidge, Sandra T; Guilbert, Larry J

    2013-01-01

    Forskolin is an extract of the Coleus forskholii plant that is widely used in cell physiology to raise intracellular cAMP levels. In the field of trophoblast biology, forskolin is one of the primary treatments used to induce trophoblastic cellular fusion. The syncytiotrophoblast (ST) is a continuous multinucleated cell in the human placenta that separates maternal from fetal circulations and can only expand by fusion with its stem cell, the cytotrophoblast (CT). Functional investigation of any aspect of ST physiology requires in vitro differentiation of CT and de novo ST formation, thus selecting the most appropriate differentiation agent for the hypothesis being investigated is necessary as well as addressing potential off-target effects. Previous studies, using forskolin to induce fusion in trophoblastic cell lines, identified phosphatidylserine (PS) externalization to be essential for trophoblast fusion and showed that widespread PS externalization is present even after fusion has been achieved. PS is a membrane phospholipid that is primarily localized to the inner-membrane leaflet. Externalization of PS is a hallmark of early apoptosis and is involved in cellular fusion of myocytes and macrophages. We were interested to examine whether PS externalization was also involved in primary trophoblast fusion. We show widespread PS externalization occurs after 72 hours when fusion was stimulated with forskolin, but not when stimulated with the cell permeant cAMP analog Br-cAMP. Using a forskolin analog, 1,9-dideoxyforskolin, which stimulates membrane transporters but not adenylate cyclase, we found that widespread PS externalization required both increased intracellular cAMP levels and stimulation of membrane transporters. Treatment of primary trophoblasts with Br-cAMP alone did not result in widespread PS externalization despite high levels of cellular fusion. 
Thus, we concluded that widespread PS externalization is independent of trophoblast fusion and, importantly, provide evidence that the common differentiation agent forskolin has previously unappreciated pleiotropic effects on trophoblastic cells.

  15. Effect of serum nicotine level on posterior spinal fusion in an in vivo rabbit model.

    PubMed

    Daffner, Scott D; Waugh, Stacey; Norman, Timothy L; Mukherjee, Nilay; France, John C

    2015-06-01

    Cigarette smoking has a deleterious effect on spinal fusion. Although some studies have implied that nicotine is primarily responsible for poor fusion outcomes, other studies suggest that nicotine may actually stimulate bone growth. Hence, there may be a dose-dependent effect of nicotine on posterior spinal fusion outcomes. The purpose of this study was to determine if such a relationship could be shown in an in vivo rabbit model. This is a prospective in vivo animal study. Twenty-four adult male New Zealand white rabbits were randomly divided into four groups. All groups received a single-level posterolateral, intertransverse process fusion at L5-L6 with autologous iliac crest bone. One group served as controls and only underwent the spine fusion surgery. Three groups received 5.25-, 10.5-, and 21-mg nicotine patches, respectively, for 5 weeks. Serum nicotine levels were recorded for each group. All animals were euthanized 5 weeks postoperatively, and spinal fusions were evaluated radiographically, by manual palpation, and biomechanically. Statistical analysis evaluated the dose response effect of outcomes variables and nicotine dosage. This study was supported by a portion of a $100,000 grant from the Orthopaedic Research and Education Foundation. Author financial disclosures were completed in accordance with the journal's guidelines; there were no conflicts of interests disclosed that would have led to bias in this work. The average serum levels of nicotine from the different patches were 7.8±1.9 ng/mL for the 5.25-mg patch group; 99.7±17.7 ng/mL for the 10.5-mg patch group; and 149.1±24.6 ng/mL for the 21-mg patch group. The doses positively correlated with serum concentrations of nicotine (correlation coefficient=0.8410, p<.001). The 5.25-mg group provided the best fusion rate, trabeculation, and stiffness. On the basis of the palpation tests, the fusion rates were control (50%), 5.25 mg (80%), 10.5 mg (50%), and 21 mg (42.8%). 
Radiographic assessment of trabeculation and bone incorporation and biomechanical analysis of bending stiffness ratio were also greatest in the 5.25-mg group. Radiographic evaluation showed a significant (p=.0446) quadratic effect of nicotine dose on spinal fusion. The effects of nicotine on spinal fusion are complex, may be dose dependent, and may not always be detrimental. The uniformly negative effects of smoking reported in patients undergoing spinal fusion may possibly be attributed to the other components of cigarette smoke. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Two-year outcomes of transforaminal lumbar interbody fusion.

    PubMed

    Poh, Seng Yew; Yue, Wai Mun; Chen, Li-Tat John; Guo, Chang-Ming; Yeo, William; Tan, Seang-Beng

    2011-08-01

    To evaluate the outcomes, fusion rates, complications, and adjacent segment degeneration associated with transforaminal lumbar interbody fusion (TLIF). 32 men and 80 women aged 15 to 85 (mean, 57) years underwent fusion at 141 levels (84 one-level, 27 two-level, and one three-level procedures) and were followed up for 24 to 76 (mean, 33) months. 92% of the patients had degenerative lumbar disease, 15 of whom had had previous lumbar surgery. Radiographic and clinical outcomes were assessed at 2 years. The Short Form 36 (SF-36) health survey, visual analogue scale (VAS) for pain, and the modified North American Spine Society (NASS) Low Back Pain Outcome Instrument were used. Of the 141 levels fused, 110 (78%) were fused with remodelling and trabeculae (grade I), and 31 (22%) had intact grafts but were not fully incorporated (grade II). No patient had pseudarthrosis (grade III or IV). For one-level fusions, poorer radiological fusion grades correlated with higher VAS scores for pain (p<0.01). All components of the SF-36, the VAS scores for pain, and the NASS scores improved significantly after TLIF (p<0.01), except for general health in the SF-36 (p=0.59). Improvement from postoperative 6 months to 2 years was not significant, except for physical function (p<0.01) and role function (physical) [p=0.01] in the SF-36. Two years after TLIF, 50% of the patients reported returning to full function, whereas 72% were satisfied. 26 (23%) of the patients had adjacent segment degeneration, but only 4 of them were symptomatic. TLIF is a safe and effective treatment for degenerative lumbar diseases.

  17. MT3825BA: a 384×288-25µm ROIC for uncooled microbolometer FPAs

    NASA Astrophysics Data System (ADS)

    Eminoglu, Selim; Gulden, M. Ali; Bayhan, Nusret; Incedere, O. Samet; Soyer, S. Tuncer; Ustundag, Cem M. B.; Isikhan, Murat; Kocak, Serhat; Turan, Ozge; Yalcin, Cem; Akin, Tayfun

    2014-06-01

    This paper reports the development of a new microbolometer Readout Integrated Circuit (ROIC) called MT3825BA, with a format of 384 × 288 and a pixel pitch of 25 μm. MT3825BA is Mikro-Tasarim's second microbolometer ROIC product, developed specifically for resistive surface-micromachined microbolometer detector arrays using high-TCR pixel materials such as VOx and a-Si. MT3825BA has a system-on-chip architecture, where all the timing, biasing, and pixel non-uniformity correction (NUC) operations are applied using on-chip circuitry, simplifying the use and system integration of this ROIC. The ROIC is designed to support pixel resistance values ranging from 30 kΩ to 100 kΩ. MT3825BA uses a conventional row-based readout method: pixels in the array are read out row by row, and the bias applied to each pixel in a given row is updated at the beginning of each line period according to the line-based NUC data. The NUC data are applied continuously on a row-by-row basis through the serial programming interface, which is also used to program user-configurable features of the ROIC, such as readout gain, integration time, and the number of analog video outputs. MT3825BA has a total of 4 analog video outputs and 2 analog reference outputs, placed at the top and bottom of the ROIC, which can be programmed to operate in 1-, 2-, and 4-output modes, supporting frame rates well above 60 fps at a 3 MHz pixel output rate. The pixels in the array are read out with respect to reference pixels implemented above and below the active array pixels. The bias voltage of the pixels can be programmed over a 1.0 V range to compensate for changes in the detector resistance values due to variations in the manufacturing process or changes in the operating temperature. 
The ROIC has an on-chip integrated temperature sensor with a sensitivity better than 5 mV/K, and the output of the temperature sensor can be read out as part of the analog video stream. MT3825BA can be used to build microbolometer FPAs with an NETD value below 100 mK using a microbolometer detector array fabrication technology with a detector resistance value up to 100 kΩ, a high TCR magnitude (above 2 %/K), and a sufficiently low pixel thermal conductance (Gth ≤ 20 nW/K). MT3825BA measures 13.0 mm × 13.5 mm and is fabricated on 200 mm CMOS wafers. The microbolometer ROIC wafers are engineered to have a flat surface finish to simplify wafer-level detector fabrication and wafer-level vacuum packaging (WLVP). The ROIC runs on 3.3 V analog and 1.8 V digital supplies and dissipates less than 85 mW in the 2-output mode at 30 fps. Mikro-Tasarim provides tested ROIC wafers and offers compact test electronics and software for its ROIC customers to shorten their FPA and camera development cycles.
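The quoted frame rates can be sanity-checked with simple arithmetic. This sketch ignores line blanking and other readout overheads, so the numbers are upper bounds.

```python
# Back-of-envelope frame-rate check for a 384x288 array read out
# through parallel analog outputs at 3 MHz per output. Overheads
# such as line blanking are ignored in this sketch, so these are
# upper bounds on the achievable frame rate.
ROWS, COLS = 288, 384
PIXEL_RATE_HZ = 3e6  # per analog output

def max_frame_rate(num_outputs: int) -> float:
    pixels_per_frame = ROWS * COLS  # 110,592 pixels
    return num_outputs * PIXEL_RATE_HZ / pixels_per_frame

for n in (1, 2, 4):
    print(f"{n}-output mode: ~{max_frame_rate(n):.0f} fps")
```

In the 4-output mode this gives roughly 108 fps, consistent with the claim of frame rates well above 60 fps.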

  18. SuBSENSE: a universal change detection method with local adaptive sensitivity.

    PubMed

    St-Charles, Pierre-Luc; Bilodeau, Guillaume-Alexandre; Bergevin, Robert

    2015-01-01

    Foreground/background segmentation via change detection in video sequences is often used as a stepping stone in high-level analytics and applications. Despite the wide variety of methods that have been proposed for this problem, none has been able to fully address the complex nature of dynamic scenes in real surveillance tasks. In this paper, we present a universal pixel-level segmentation method that relies on spatiotemporal binary features as well as color information to detect changes. This allows camouflaged foreground objects to be detected more easily while most illumination variations are ignored. Moreover, instead of using manually set, frame-wide constants to dictate model sensitivity and adaptation speed, we use pixel-level feedback loops to dynamically adjust our method's internal parameters without user intervention. These adjustments are based on continuous monitoring of model fidelity and local segmentation noise levels. This new approach enables us to outperform all 32 previously tested state-of-the-art methods on the 2012 and 2014 versions of the ChangeDetection.net dataset in terms of overall F-Measure. The use of local binary image descriptors for pixel-level modeling also facilitates high-speed parallel implementations: our own version, which uses no low-level or architecture-specific instructions, reaches real-time processing speed on a mid-level desktop CPU. A complete C++ implementation based on OpenCV is available online.
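The per-pixel feedback idea can be sketched as follows. This is a minimal illustration, not the actual SuBSENSE algorithm: the simple running-mean background model and the update constants are assumptions for demonstration only.

```python
import numpy as np

# Simplified per-pixel feedback sketch inspired by the idea above
# (not the SuBSENSE implementation): each pixel keeps its own
# decision threshold R, which grows where segmentation is noisy and
# shrinks where the model fits well.
def segment(frame, background, R, lr=0.05):
    dist = np.abs(frame.astype(float) - background)
    fg = dist > R  # per-pixel foreground decision
    # Feedback: raise R where the pixel was flagged (possible noise),
    # lower it (down to a floor) where the model agrees with the frame.
    R = np.where(fg, R * 1.05, np.maximum(R * 0.95, 10.0))
    # Conservative background update on background pixels only.
    background = np.where(fg, background, (1 - lr) * background + lr * frame)
    return fg, background, R

rng = np.random.default_rng(0)
bg = np.full((4, 4), 100.0)     # background intensity model
R = np.full((4, 4), 20.0)       # per-pixel thresholds
frame = bg + rng.normal(0, 2, bg.shape)
frame[0, 0] = 200.0             # a clearly "foreground" pixel
fg, bg, R = segment(frame, bg, R)
print(fg[0, 0], fg[1, 1])
```

Only the pixel that departed strongly from the model is flagged; the small-noise pixels stay in the background and slightly lower their thresholds.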

  19. Modeling Cyber Situational Awareness Through Data Fusion

    DTIC Science & Technology

    2013-03-01

    following table: Table 3.10: Example Vulnerable Hosts for Criticality Assessment Experiment Example Id OS Applications/Services Version 1 Mac OS X VLC ...linux.org/. [4] Blasch, E., I. Kadar, J. Salerno, M. Kokar, S. Das, G. Powell, D. Corkill, and E. Ruspini. “Issues and challenges of knowledge representation...Holsopple. “Issues and challenges in higher level fusion: Threat/impact assessment and intent modeling (a panel summary)”. Information Fusion (FUSION

  20. Computer Based Behavioral Biometric Authentication via Multi-Modal Fusion

    DTIC Science & Technology

    2013-03-01

    the decisions made by each individual modality. Fusion of features is the simple concatenation of feature vectors from multiple modalities to be...of Features BayesNet MDL 330 LibSVM PCA 80 J48 Wrapper Evaluator 11 3.5.3 Ensemble Based Decision Level Fusion. In ensemble learning multiple ...The high fusion percentages validate our hypothesis that by combining features from multiple modalities, classification accuracy can be improved. As

  1. Impact of monaural frequency compression on binaural fusion at the brainstem level.

    PubMed

    Klauke, Isabelle; Kohl, Manuel C; Hannemann, Ronny; Kornagel, Ulrich; Strauss, Daniel J; Corona-Strauss, Farah I

    2015-08-01

    A classical objective measure for binaural fusion at the brainstem level is the so-called β-wave of the binaural interaction component (BIC) in the auditory brainstem response (ABR). However, reliable detection of this component remains a challenge in some cases. In this study, we investigate the wavelet phase synchronization stability (WPSS) of ABR data for the analysis of binaural fusion and compare it to the BIC. In particular, we examine the impact of monaural nonlinear frequency compression on binaural fusion. As the auditory system is tonotopically organized, an interaural frequency mismatch caused by monaural frequency compression could negatively affect binaural fusion. In this study, only a few subjects showed a detectable β-wave, and in most cases only for low ITDs. However, we present a novel objective measure for binaural fusion that outperforms the current state-of-the-art technique (BIC): the WPSS analysis showed a significant difference between the phase stability of the sum of the monaurally evoked responses and the phase stability of the binaurally evoked ABR. This difference could be an indicator for binaural fusion in the brainstem. Furthermore, we observed that monaural frequency compression could indeed affect binaural fusion, as the WPSS results for this condition vary strongly from the results obtained without frequency compression.
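The phase-stability idea behind WPSS can be sketched as inter-trial phase coherence. The sketch below uses synthetic trial data and an FFT phase at a single frequency bin in place of the wavelet analysis the paper uses, so it illustrates the concept rather than the published method.

```python
import numpy as np

# Inter-trial phase stability at one stimulus frequency, in the
# spirit of the WPSS measure (FFT phase here instead of wavelets;
# all trial data are synthetic).
def phase_stability(trials, freq_bin):
    # trials: (n_trials, n_samples) array of evoked responses
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, freq_bin])
    # 1 = perfectly phase-locked across trials, ~0 = random phase.
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(1)
t = np.linspace(0, 0.01, 200, endpoint=False)  # 10 ms at 20 kHz
# 500 Hz falls exactly in rfft bin 5 (resolution = 100 Hz).
locked = np.array([np.sin(2 * np.pi * 500 * t + 0.2 * rng.normal())
                   for _ in range(50)])       # small phase jitter
unlocked = np.array([np.sin(2 * np.pi * 500 * t + rng.uniform(0, 2 * np.pi))
                     for _ in range(50)])     # random phase per trial
print(phase_stability(locked, 5), phase_stability(unlocked, 5))
```

Phase-locked trials yield a stability near 1, while trials with random phase average out to a value near 0; a significant gap between two such conditions is the kind of difference the WPSS analysis exploits.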

  2. Advanced uncooled sensor product development

    NASA Astrophysics Data System (ADS)

    Kennedy, A.; Masini, P.; Lamb, M.; Hamers, J.; Kocian, T.; Gordon, E.; Parrish, W.; Williams, R.; LeBeau, T.

    2015-06-01

    The partnership between RVS, Seek Thermal and Freescale Semiconductor continues on the path to bring the latest technology and innovation to both military and commercial customers. The partnership has matured the 17 μm pixel for volume production on the Thermal Weapon Sight (TWS) program, bringing advanced production capability to a low-cost, high-performance product. The partnership has also developed the 12 μm pixel and has demonstrated performance across a family of detector sizes ranging from formats as small as 206 x 156 to full high-definition formats. Detector pixel sensitivities have been achieved using the RVS double-level advanced pixel structure. Transition of microbolometer packaging from a traditional die-level package to a wafer-level package (WLP) in a high-volume commercial environment is complete. Innovations in wafer fabrication techniques have been incorporated into this product line to support the high yield required for volume production. The WLP seal yield is currently > 95%. Simulated package vacuum lifetimes >> 20 years have been demonstrated through accelerated life testing, in which the package showed no degradation after 2,500 hours at 150°C. Additionally, the rugged assembly has shown no degradation after mechanical shock, vibration, and thermal shock testing. The transition-to-production effort was successfully completed in 2014, and the WLP design has been integrated into multiple new production products, including the TWS and the innovative Seek Thermal commercial product that interfaces directly to an iPhone or Android device.

  3. Ensembles of satellite aerosol retrievals based on three AATSR algorithms within aerosol_cci

    NASA Astrophysics Data System (ADS)

    Kosmale, Miriam; Popp, Thomas

    2016-04-01

    Ensemble techniques are widely used in the modelling community, combining different modelling results in order to reduce uncertainties. This approach can also be adapted to satellite measurements. Aerosol_cci is an ESA-funded project in which most of the European aerosol retrieval groups work together. The different algorithms are homogenized as far as it makes sense, but remain essentially different. Datasets are compared with ground-based measurements and with each other. Within this project, three AATSR algorithms (the Swansea University aerosol retrieval, the ADV aerosol retrieval by FMI, and the Oxford aerosol retrieval ORAC) provide 17-year global aerosol records. Each of these algorithms also provides uncertainty information at pixel level. In the presented work, an ensemble of the three AATSR algorithms is constructed. The advantage over each single algorithm is the higher spatial coverage due to more measurement pixels per gridbox. Validation against ground-based AERONET measurements still shows good correlation for the ensemble compared with the single algorithms. Annual mean maps show the global aerosol distribution based on a combination of the three aerosol algorithms. In addition, the pixel-level uncertainties of each algorithm are used to weight the contributions, in order to reduce the uncertainty of the ensemble. Results of different versions of the ensemble for aerosol optical depth will be presented and discussed. The results are validated against ground-based AERONET measurements. Higher spatial coverage on a daily basis allows better results in annual mean maps. The benefit of using pixel-level uncertainties is analysed.
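An uncertainty-weighted pixel-level ensemble can be sketched as follows. Inverse-variance weighting is assumed here for illustration (the project's actual weighting scheme may differ), and the AOD values are illustrative, not real AATSR retrievals.

```python
import numpy as np

# Uncertainty-weighted ensemble of per-pixel AOD retrievals: each
# algorithm contributes with weight 1/sigma^2, and missing pixels
# (NaN) simply drop out, which also raises spatial coverage.
def ensemble_aod(aod, sigma):
    # aod, sigma: (n_algorithms, n_pixels) arrays; NaN = no retrieval
    w = 1.0 / sigma**2
    w = np.where(np.isnan(aod), 0.0, w)       # missing pixels get zero weight
    vals = np.where(np.isnan(aod), 0.0, aod)
    with np.errstate(invalid="ignore", divide="ignore"):
        return (w * vals).sum(axis=0) / w.sum(axis=0)

# Illustrative values: pixel 0 has three retrievals, pixel 1 only one.
aod = np.array([[0.20, np.nan],
                [0.30, 0.25],
                [0.25, np.nan]])
sigma = np.array([[0.05, 0.05],
                  [0.10, 0.04],
                  [0.05, 0.05]])
est = ensemble_aod(aod, sigma)
print(est)
```

Pixel 1 is covered by the ensemble even though two of the three algorithms retrieved nothing there, which is exactly the coverage gain described above.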

  4. Reconstruction of 2D PET data with Monte Carlo generated system matrix for generalized natural pixels

    NASA Astrophysics Data System (ADS)

    Vandenberghe, Stefaan; Staelens, Steven; Byrne, Charles L.; Soares, Edward J.; Lemahieu, Ignace; Glick, Stephen J.

    2006-06-01

    In discrete detector PET, natural pixels are image basis functions calculated from the response of detector pairs. By using reconstruction with natural pixel basis functions, the discretization of the object into a predefined grid can be avoided. Here, we propose to use generalized natural pixel reconstruction. In this approach, the basis functions are not the detector sensitivity functions, as in the natural pixel case, but uniform parallel strips; the backprojection of the strip coefficients yields the reconstructed image. This paper proposes an easy and efficient way to generate the system matrix M directly by Monte Carlo simulation. Elements of the generalized natural pixel system matrix are formed by calculating the intersection of a parallel strip with the detector sensitivity function. These generalized natural pixels are easier to use than conventional natural pixels because the final step from solution to a square pixel representation is done by simple backprojection. Due to rotational symmetry in the PET scanner, the matrix M is block circulant and only the first block row needs to be stored. Data were generated using a fast ray-tracing Monte Carlo simulator. The proposed method was compared to a list-mode MLEM algorithm that used ray tracing for forward and backprojection. Comparison of the algorithms on different phantoms showed that improved resolution can be obtained using generalized natural pixel reconstruction with accurate system modelling. In addition, for the same resolution, a lower noise level is present in this reconstruction. A numerical observer study showed that the proposed method exhibited increased performance compared to a standard list-mode EM algorithm. In another study, more realistic data were generated using the GATE Monte Carlo simulator. For these data, more uniform contrast recovery and better contrast-to-noise performance were observed. 
It was observed that major improvements in contrast recovery were obtained with MLEM when the correct system matrix was used instead of simple ray tracing. The correct modelling was the major cause of improved contrast at the same background noise. Less important factors were the choice of algorithm (MLEM performed better than ART) and the basis functions (generalized natural pixels gave better results than pixels).
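A minimal MLEM iteration with an explicit system matrix can be sketched as follows. The toy 3×3 matrix and noise-free data are assumptions for illustration; the paper's Monte Carlo generated, block-circulant M is not reproduced here.

```python
import numpy as np

# Minimal MLEM iteration with an explicit system matrix M mapping
# basis-function coefficients to projection data (toy numbers; in
# the paper M is Monte Carlo generated and block circulant, so only
# its first block row would need to be stored).
def mlem(M, data, n_iter=200):
    x = np.ones(M.shape[1])       # coefficient estimate, all ones start
    sens = M.sum(axis=0)          # sensitivity: backprojection of ones
    for _ in range(n_iter):
        proj = M @ x              # forward projection
        ratio = np.where(proj > 0, data / proj, 0.0)
        x *= (M.T @ ratio) / sens # multiplicative MLEM update
    return x

M = np.array([[1.0, 0.5, 0.0],
              [0.2, 1.0, 0.3],
              [0.0, 0.4, 1.0]])
truth = np.array([2.0, 1.0, 3.0])
data = M @ truth                  # noise-free toy data
recon = mlem(M, data)
print(np.round(recon, 3))
```

With consistent data and an accurate M, the iterates approach the true coefficients; the paper's point is that using the correct M in this update, rather than a simple ray-traced one, is what drives the contrast improvement.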

  5. Issues and challenges in resource management and its interaction with levels 2/3 fusion with applications to real-world problems: an annotated perspective

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Kadar, Ivan; Hintz, Kenneth; Biermann, Joachim; Chong, Chee-Yee; Salerno, John; Das, Subrata

    2007-04-01

    Resource management (or process refinement) is critical for information fusion operations in that users, sensors, and platforms need to be informed, based on mission needs, on how to collect, process, and exploit data. To address these growing concerns, a panel session was conducted at the International Society of Information Fusion Conference in 2006 to discuss the various issues surrounding the interaction of resource management with Level 2/3 situation and threat assessment. This paper briefly consolidates the discussion of the invited panelists. The common themes include: (1) addressing the user in system management, sensor control, and knowledge-based information collection; (2) determining a standard set of fusion metrics for optimization and evaluation based on the application; (3) allowing dynamic and adaptive updating to deliver timely information needs and information rates; (4) optimizing the joint objective functions at all information fusion levels based on decision-theoretic analysis; (5) providing constraints from distributed resource mission planning and scheduling; and (6) defining L2/3 situation entity definitions for knowledge discovery, modeling, and information projection.

  6. A Markov game theoretic data fusion approach for cyber situational awareness

    NASA Astrophysics Data System (ADS)

    Shen, Dan; Chen, Genshe; Cruz, Jose B., Jr.; Haynes, Leonard; Kruger, Martin; Blasch, Erik

    2007-04-01

    This paper proposes an innovative data-fusion/data-mining game theoretic situation awareness and impact assessment approach for cyber network defense. Alerts generated by Intrusion Detection Sensors (IDSs) or Intrusion Prevention Sensors (IPSs) are fed into the data refinement (Level 0) and object assessment (L1) data fusion components. High-level situation/threat assessment (L2/L3) data fusion, based on a Markov game model and Hierarchical Entity Aggregation (HEA), is proposed to refine the primitive prediction generated by adaptive feature/pattern recognition and to capture new unknown features. A Markov (stochastic) game method is used to estimate the belief of each possible cyber attack pattern. Game theory captures the nature of cyber conflicts: determination of the attacking-force strategies is tightly coupled to determination of the defense-force strategies, and vice versa. Markov game theory also deals with the uncertainty and incompleteness of available information. A software tool is developed to demonstrate the performance of high-level information fusion for cyber network defense, and a simulation example shows the enhanced understanding of cyber-network defense.

  7. On the Use of Sensor Fusion to Reduce the Impact of Rotational and Additive Noise in Human Activity Recognition

    PubMed Central

    Banos, Oresti; Damas, Miguel; Pomares, Hector; Rojas, Ignacio

    2012-01-01

    The main objective of fusion mechanisms is to increase the reliability of individual systems through the use of collective knowledge. Moreover, fusion models are also intended to guarantee a certain level of robustness. This is particularly required for problems such as human activity recognition, where runtime changes in the sensor setup seriously disturb the reliability of the initially deployed systems. For commonly used recognition systems based on inertial sensors, these changes are primarily characterized as sensor rotations, displacements, or faults related to the batteries or calibration. In this work we show the robustness capabilities of a sensor-weighted fusion model when dealing with such disturbances under different circumstances. Using the proposed method, up to 60% improvement is obtained when a minority of the sensors are artificially rotated or degraded, independent of the level of disturbance (noise) imposed. These robustness capabilities also apply for any number of sensors affected by a low to moderate noise level. The presented fusion mechanism compensates for the poor performance that would otherwise be obtained when just a single sensor is considered. PMID:22969386
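The sensor-weighted idea can be sketched as follows. The class scores and reliability weights below are illustrative and this is not the paper's exact fusion model: it simply shows how down-weighting a degraded sensor keeps the collective decision intact.

```python
import numpy as np

# Sensor-weighted decision fusion sketch: each sensor's class-score
# vector is weighted by an estimate of that sensor's reliability, so
# a rotated or degraded sensor is down-weighted rather than spoiling
# the joint decision.
def fuse(scores, reliability):
    # scores: (n_sensors, n_classes); reliability: (n_sensors,)
    w = reliability / reliability.sum()   # normalize weights
    return (w[:, None] * scores).sum(axis=0)

scores = np.array([[0.9, 0.1],   # healthy sensor favors class 0
                   [0.8, 0.2],   # healthy sensor favors class 0
                   [0.1, 0.9]])  # degraded/rotated sensor disagrees
reliability = np.array([1.0, 1.0, 0.2])  # low trust in the bad sensor
fused = fuse(scores, reliability)
print(fused, fused.argmax())
```

Despite one sensor voting strongly for the wrong class, the fused score still selects class 0, which is the robustness behavior the abstract describes.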

  8. On the use of sensor fusion to reduce the impact of rotational and additive noise in human activity recognition.

    PubMed

    Banos, Oresti; Damas, Miguel; Pomares, Hector; Rojas, Ignacio

    2012-01-01

    The main objective of fusion mechanisms is to increase the reliability of individual systems through the use of collective knowledge. Moreover, fusion models are also intended to guarantee a certain level of robustness. This is particularly required for problems such as human activity recognition, where runtime changes in the sensor setup seriously disturb the reliability of the initially deployed systems. For commonly used recognition systems based on inertial sensors, these changes are primarily characterized as sensor rotations, displacements, or faults related to the batteries or calibration. In this work we show the robustness capabilities of a sensor-weighted fusion model when dealing with such disturbances under different circumstances. Using the proposed method, up to 60% improvement is obtained when a minority of the sensors are artificially rotated or degraded, independent of the level of disturbance (noise) imposed. These robustness capabilities also apply for any number of sensors affected by a low to moderate noise level. The presented fusion mechanism compensates for the poor performance that would otherwise be obtained when just a single sensor is considered.

  9. Biomechanics of Hybrid Anterior Cervical Fusion and Artificial Disc Replacement in 3-Level Constructs: An In Vitro Investigation

    PubMed Central

    Liao, Zhenhua; Fogel, Guy R.; Pu, Ting; Gu, Hongsheng; Liu, Weiqiang

    2015-01-01

    Background The ideal surgical approach for cervical disk disease remains controversial, especially for multilevel cervical disease. The purpose of this study was to investigate the biomechanics of the cervical spine after 3-level hybrid surgery compared with 3-level anterior cervical discectomy and fusion (ACDF). Material/Methods Eighteen human cadaveric spines (C2-T1) were evaluated under a displacement-input protocol. After intact testing, a simulated hybrid or fusion construct was created from C3 to C6 and tested in the following 3 conditions: 3-level disc plate disc (3DPD), 3-level plate disc plate (3PDP), and 3-level plate (3P). Results Compared to intact, approximately 65-80% of motion was successfully restricted at the C3-C6 fusion levels (p<0.05). The 3DPD construct resulted in a slight motion increase at the 3 instrumented levels (p>0.05). The 3PDP construct resulted in a significant decrease of ROM at C3-C6, though smaller than that of 3P (p<0.05). Both 3DPD and 3PDP caused significant reduction of ROM at the arthrodesis levels and produced a motion increase at the arthroplasty levels. For adjacent levels, 3P resulted in markedly increased contribution of both the upper and lower adjacent levels (p<0.05). Significant motion increases, smaller than those with 3P, were noted only at some adjacent levels under some conditions for 3DPD and 3PDP (p<0.05). Conclusions ACDF eliminated motion within the construct and greatly increased adjacent motion. Artificial cervical disc replacement normalized motion of its segment and adjacent segments. While hybrid conditions failed to restore normal motion within the construct, they significantly normalized motion in adjacent segments compared with the 3-level ACDF condition. The artificial disc in 3-level constructs has biomechanical advantages over fusion in normalizing motion. PMID:26529430

  10. Biomechanics of Hybrid Anterior Cervical Fusion and Artificial Disc Replacement in 3-Level Constructs: An In Vitro Investigation.

    PubMed

    Liao, Zhenhua; Fogel, Guy R; Pu, Ting; Gu, Hongsheng; Liu, Weiqiang

    2015-11-03

    The ideal surgical approach for cervical disk disease remains controversial, especially for multilevel cervical disease. The purpose of this study was to investigate the biomechanics of the cervical spine after 3-level hybrid surgery compared with 3-level anterior cervical discectomy and fusion (ACDF). Eighteen human cadaveric spines (C2-T1) were evaluated under a displacement-input protocol. After intact testing, a simulated hybrid or fusion construct was created from C3 to C6 and tested in the following 3 conditions: 3-level disc plate disc (3DPD), 3-level plate disc plate (3PDP), and 3-level plate (3P). Compared to intact, approximately 65-80% of motion was successfully restricted at the C3-C6 fusion levels (p<0.05). The 3DPD construct resulted in a slight motion increase at the 3 instrumented levels (p>0.05). The 3PDP construct resulted in a significant decrease of ROM at C3-C6, though smaller than that of 3P (p<0.05). Both 3DPD and 3PDP caused significant reduction of ROM at the arthrodesis levels and produced a motion increase at the arthroplasty levels. For adjacent levels, 3P resulted in markedly increased contribution of both the upper and lower adjacent levels (p<0.05). Significant motion increases, smaller than those with 3P, were noted only at some adjacent levels under some conditions for 3DPD and 3PDP (p<0.05). ACDF eliminated motion within the construct and greatly increased adjacent motion. Artificial cervical disc replacement normalized motion of its segment and adjacent segments. While hybrid conditions failed to restore normal motion within the construct, they significantly normalized motion in adjacent segments compared with the 3-level ACDF condition. The artificial disc in 3-level constructs has biomechanical advantages over fusion in normalizing motion.

  11. Self-recognition in corals facilitates deep-sea habitat engineering

    USGS Publications Warehouse

    Hennige, Sebastian J; Morrison, Cheryl L.; Form, Armin U.; Buscher, Janina; Kamenos, Nicholas A.; Roberts, J. Murray

    2014-01-01

    The ability of coral reefs to engineer complex three-dimensional habitats is central to their success and the rich biodiversity they support. In tropical reefs, encrusting coralline algae bind together substrates and dead coral framework to make continuous reef structures, but beyond the photic zone, the cold-water coral Lophelia pertusa also forms large biogenic reefs, facilitated by skeletal fusion. Skeletal fusion in tropical corals can occur in closely related or juvenile individuals as a result of non-aggressive skeletal overgrowth or allogeneic tissue fusion, but contact reactions in many species result in mortality if there is no ‘self-recognition’ on a broad species level. This study reveals that areas of ‘flawless’ skeletal fusion in Lophelia pertusa, potentially facilitated by allogeneic tissue fusion, are characterized by small aragonitic crystals or low levels of crystal organisation, and strong molecular bonding. Regardless of the mechanism, the recognition of ‘self’ between adjacent L. pertusa colonies leads to no observable mortality, facilitates ecosystem engineering, and reduces aggression-related energetic expenditure in an environment where energy conservation is crucial. The potential for self-recognition at a species level, and subsequent skeletal fusion, in framework-forming cold-water corals is an important first step in understanding their significance as ecological engineers in deep seas worldwide.

  12. Sensitivity of Marine Warm Cloud Retrieval Statistics to Algorithm Choices: Examples from MODIS Collection 6

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; Wind, Galina; Zhang, Zhibo; Ackerman, Steven A.; Maddux, Brent

    2012-01-01

    The optical and microphysical structure of warm boundary layer marine clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS (Moderate Resolution Imaging Spectroradiometer) on the NASA EOS Terra and Aqua platforms, simultaneous global/daily 1 km retrievals of cloud optical thickness and effective particle size are provided, as well as the derived water path. In addition, the cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate effective radius results using the 1.6, 2.1, and 3.7 μm spectral channels. Cloud retrieval statistics are highly sensitive to how a pixel identified as being "not clear" by a cloud mask (e.g., the MOD35/MYD35 product) is determined to be useful for an optical retrieval based on a 1-D cloud model. The Collection 5 MODIS retrieval algorithm removed pixels associated with cloud edges as well as ocean pixels with partly cloudy elements in the 250 m MODIS cloud mask - part of the so-called Clear Sky Restoral (CSR) algorithm. Collection 6 attempts retrievals for those two pixel populations, but allows a user to isolate or filter out the populations via CSR pixel-level Quality Assessment (QA) assignments. In this paper, using the preliminary Collection 6 MOD06 product, we present global and regional statistical results of marine warm cloud retrieval sensitivities to the cloud edge and 250 m partly cloudy pixel populations. As expected, retrievals for these pixels are generally consistent with a breakdown of the 1-D cloud model. While optical thickness for these suspect pixel populations may have some utility for radiative studies, the retrievals should be used with extreme caution for process and microphysical studies.
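Filtering retrievals by a pixel-level QA flag, as discussed above, can be sketched like this. The flag values and optical-thickness numbers are illustrative, not the actual MOD06 CSR bit layout or data.

```python
import numpy as np

# Sketch of isolating confident retrievals via a pixel-level QA
# flag, in the spirit of the Collection 6 CSR assignments (flag
# values here are illustrative, not the MOD06 bit layout).
OK, CLOUD_EDGE, PARTLY_CLOUDY = 0, 1, 2

tau = np.array([12.0, 3.5, 1.2, 8.0])            # optical thickness per pixel
csr = np.array([OK, CLOUD_EDGE, PARTLY_CLOUDY, OK])

confident = csr == OK            # drop cloud-edge and partly cloudy pixels
mean_tau = tau[confident].mean()
print(mean_tau)
```

Keeping or dropping the suspect populations changes the statistics, which is exactly the sensitivity the paper quantifies.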

  13. Novel cooperative neural fusion algorithms for image restoration and image fusion.

    PubMed

    Xia, Youshen; Kamel, Mohamed S

    2007-02-01

    To deal with the problem of restoring degraded images with non-Gaussian noise, this paper proposes a novel cooperative neural fusion regularization (CNFR) algorithm for image restoration. Compared with conventional regularization algorithms for image restoration, the proposed CNFR algorithm relaxes the need to estimate an optimal regularization parameter. Furthermore, to enhance the quality of restored images, this paper presents a cooperative neural fusion (CNF) algorithm for image fusion. Compared with existing signal-level image fusion algorithms, the proposed CNF algorithm can greatly reduce the loss of contrast information under blind Gaussian noise environments. The performance analysis shows that the two proposed neural fusion algorithms converge globally to the robust and optimal image estimate. Simulation results confirm that, in different noise environments, the two proposed neural fusion algorithms obtain a better image estimate than several well-known image restoration and image fusion methods.

  14. Semantic segmentation of forest stands of pure species combining airborne lidar data and very high resolution multispectral imagery

    NASA Astrophysics Data System (ADS)

    Dechesne, Clément; Mallet, Clément; Le Bris, Arnaud; Gouet-Brunet, Valérie

    2017-04-01

    Forest stands are the basic units for forest inventory and mapping. Stands are defined as large forested areas (e.g., ⩾ 2 ha) of homogeneous tree species composition and age. Their accurate delineation is usually performed by human operators through visual analysis of very high resolution (VHR) infra-red images. This task is tedious, highly time consuming, and should be automated for scalability and efficient updating purposes. In this paper, a method based on the fusion of airborne lidar data and VHR multispectral images is proposed for the automatic delineation of forest stands containing one dominant species (purity superior to 75%). This is the key preliminary task for forest land-cover database updating. The multispectral images give information about the tree species, whereas the 3D lidar point clouds provide geometric information on the trees and allow their individual extraction. Multi-modal features are computed at both pixel and object levels, the objects being individual trees extracted from the lidar data. A supervised classification is then performed at the object level in order to coarsely discriminate the existing tree species in each area of interest. The classification results are further processed to obtain homogeneous areas with smooth borders within an energy minimization framework, where additional constraints are incorporated into the energy function. The experimental results show that the proposed method provides very satisfactory results both in terms of stand labeling and delineation (overall accuracy between 84% and 99%).

  15. Multiclassifier information fusion methods for microarray pattern recognition

    NASA Astrophysics Data System (ADS)

    Braun, Jerome J.; Glina, Yan; Judson, Nicholas; Herzig-Marx, Rachel

    2004-04-01

    This paper addresses automatic recognition of microarray patterns, a capability that could have major significance for medical diagnostics, enabling the development of diagnostic tools for automatic discrimination of specific diseases. The paper presents multiclassifier information fusion methods for microarray pattern recognition. An input space partitioning approach is investigated, based on fitness measures that constitute an a priori gauging of classification efficacy for each subspace. Methods for the generation of fitness measures, the generation of input subspaces, and their use in the multiclassifier fusion architecture are presented. In particular, a two-level quantification of fitness that accounts for the quality of each subspace as well as the quality of individual neighborhoods within the subspace is described. The individual-subspace classifiers are Support Vector Machine (SVM) based. The decision fusion stage fuses the information from multiple SVMs along with the multi-level fitness information. Final decision fusion stage techniques, including weighted fusion as well as Dempster-Shafer theory based fusion, are investigated. It should be noted that while the above methods are discussed in the context of microarray pattern recognition, they are applicable to a broader range of discrimination problems, in particular to problems involving a large number of information sources irreducible to a low-dimensional feature space.
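
    The weighted-fusion stage described above can be sketched as follows. Each subspace classifier contributes a score weighted by an a priori fitness measure for its subspace; the names, score range, and the 0.5 decision threshold are illustrative assumptions, since the abstract does not specify them.

```python
# Hedged sketch of weighted decision fusion across subspace classifiers:
# each SVM-like classifier outputs a score in [0, 1] for the positive
# class, and its vote is weighted by the fitness of its input subspace.

def weighted_fusion(scores, fitness):
    """Combine per-subspace classifier scores using fitness weights."""
    total = sum(fitness)
    return sum(s * f for s, f in zip(scores, fitness)) / total

def decide(scores, fitness, threshold=0.5):
    """Illustrative final decision rule: threshold the fused score."""
    return 1 if weighted_fusion(scores, fitness) >= threshold else 0

# Three subspace classifiers; the most trusted one (fitness 0.9) is
# confident in the positive class, so the fused decision follows it.
scores = [0.8, 0.4, 0.3]
fitness = [0.9, 0.3, 0.2]
label = decide(scores, fitness)
```

    A Dempster-Shafer variant would instead convert each score into a mass assignment and combine them with Dempster's rule, which additionally tracks conflict between classifiers.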

  16. Association between EML4-ALK fusion gene and thymidylate synthase mRNA expression in non-small cell lung cancer tissues

    PubMed Central

    XU, CHUN-WEI; WANG, GANG; WANG, WU-LONG; GAO, WEN-BIN; HAN, CHUAN-JUN; GAO, JING-SHAN; ZHANG, LI-YING; LI, YANG; WANG, LIN; ZHANG, YU-PING; TIAN, YU-WANG; QI, DONG-DONG

    2015-01-01

    This study aimed to investigate the association of the mRNA expression of the echinoderm microtubule-associated protein-like 4 (EML4)-anaplastic lymphoma kinase (ALK) fusion gene with that of thymidylate synthase (TYMS) in non-small cell lung cancer (NSCLC) tissues. Quantitative polymerase chain reaction was used to detect the expression of EML4-ALK fusion gene and TYMS mRNA in 257 cases of NSCLC. The positive rate of EML4-ALK fusion gene was 4.28% in the NSCLC tissues (11/257), and was higher in nonsmokers than in smokers (P<0.05); TYMS mRNA expression was detected in 63.42% (163/257) of cases. An association of the EML4-ALK fusion gene with TYMS expression was detected; a low expression level of TYMS mRNA was observed more frequently when the EML4-ALK fusion gene was present than when it was not detected (P<0.05). In conclusion, patients positive for the EML4-ALK fusion gene in NSCLC tissues are likely to have a low expression level of TYMS, and may benefit from the first-line chemotherapy drug pemetrexed. PMID:26136951

  17. Association between EML4-ALK fusion gene and thymidylate synthase mRNA expression in non-small cell lung cancer tissues.

    PubMed

    Xu, Chun-Wei; Wang, Gang; Wang, Wu-Long; Gao, Wen-Bin; Han, Chuan-Jun; Gao, Jing-Shan; Zhang, Li-Ying; Li, Yang; Wang, Lin; Zhang, Yu-Ping; Tian, Yu-Wang; Qi, Dong-Dong

    2015-06-01

    This study aimed to investigate the association of the mRNA expression of the echinoderm microtubule-associated protein-like 4 (EML4)-anaplastic lymphoma kinase (ALK) fusion gene with that of thymidylate synthase (TYMS) in non-small cell lung cancer (NSCLC) tissues. Quantitative polymerase chain reaction was used to detect the expression of EML4-ALK fusion gene and TYMS mRNA in 257 cases of NSCLC. The positive rate of EML4-ALK fusion gene was 4.28% in the NSCLC tissues (11/257), and was higher in nonsmokers than in smokers (P<0.05); TYMS mRNA expression was detected in 63.42% (163/257) of cases. An association of the EML4-ALK fusion gene with TYMS expression was detected; a low expression level of TYMS mRNA was observed more frequently when the EML4-ALK fusion gene was present than when it was not detected (P<0.05). In conclusion, patients positive for the EML4-ALK fusion gene in NSCLC tissues are likely to have a low expression level of TYMS, and may benefit from the first-line chemotherapy drug pemetrexed.

  18. Infrared and visible image fusion scheme based on NSCT and low-level visual features

    NASA Astrophysics Data System (ADS)

    Li, Huafeng; Qiu, Hongmei; Yu, Zhengtao; Zhang, Yafei

    2016-05-01

    Multi-scale transform (MST) is an efficient tool for image fusion. Recently, many fusion methods have been developed based on different MSTs, and they have shown potential for application in many fields. In this paper, we propose an effective infrared and visible image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain, in which the NSCT is first employed to decompose each of the source images into a series of high-frequency subbands and one low-frequency subband. To improve the fusion performance, we designed two new activity measures for the fusion of the lowpass and highpass subbands. These measures are developed based on the fact that the human visual system (HVS) perceives image quality mainly according to some of its low-level features. Then, the selection principles for the different subbands are presented based on the corresponding activity measures. Finally, the merged subbands are constructed according to the selection principles, and the final fused image is produced by applying the inverse NSCT to these merged subbands. Experimental results demonstrate the effectiveness and superiority of the proposed method over state-of-the-art fusion methods in terms of both visual effect and objective evaluation results.
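
    The paper's specific activity measures are not given in the abstract, so the sketch below shows only the generic baseline such MST schemes build on: average the low-frequency subbands and, in each high-frequency subband, keep the coefficient with the larger absolute value (a simple activity measure). Subband values are toy numbers.

```python
# Generic MST fusion rule sketch (an assumption standing in for the
# paper's NSCT activity measures): lowpass subbands are averaged,
# highpass coefficients are selected by maximum absolute value.

def fuse_lowpass(low_a, low_b):
    """Average the two low-frequency subbands coefficient-wise."""
    return [(a + b) / 2.0 for a, b in zip(low_a, low_b)]

def fuse_highpass(high_a, high_b):
    """Keep the coefficient with larger activity (absolute value)."""
    return [a if abs(a) >= abs(b) else b for a, b in zip(high_a, high_b)]

# Toy subbands from an infrared and a visible decomposition.
low_ir, low_vis = [100.0, 120.0], [90.0, 130.0]
high_ir, high_vis = [5.0, -2.0], [-3.0, 4.0]
merged_low = fuse_lowpass(low_ir, low_vis)      # averaged base layer
merged_high = fuse_highpass(high_ir, high_vis)  # strongest details kept
```

    Applying the inverse transform to `merged_low` and `merged_high` would then yield the fused image; the paper replaces both selection rules with HVS-motivated activity measures.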

  19. Image encryption using a synchronous permutation-diffusion technique

    NASA Astrophysics Data System (ADS)

    Enayatifar, Rasul; Abdullah, Abdul Hanan; Isnin, Ismail Fauzi; Altameem, Ayman; Lee, Malrey

    2017-03-01

    In the past decade, interest in digital image security has increased among scientists. A synchronous permutation and diffusion technique is designed to protect gray-level image content while it is sent over the internet. To implement the proposed method, the two-dimensional plain image is converted to one dimension. Afterward, in order to reduce the processing time, the permutation and diffusion steps for each pixel are performed at the same time. The permutation step uses a chaotic map and deoxyribonucleic acid (DNA) coding to permute a pixel, while diffusion employs a DNA sequence and DNA operators to encrypt the pixel. Experimental results and extensive security analyses demonstrate the feasibility and validity of the proposed image encryption method.
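
    A toy sketch of a synchronous permutation-diffusion pass is shown below. A logistic chaotic map drives both the pixel ordering (permutation) and the keystream byte (diffusion), and both are applied to each pixel in the same loop iteration. For simplicity the paper's DNA coding and DNA operators are replaced by plain XOR diffusion, so this is an assumption about the general scheme, not the published cipher.

```python
# Toy synchronous permutation-diffusion cipher: one chaotic sample per
# pixel supplies both its position in the output and its keystream byte.
# XOR stands in for the paper's DNA operators.

def logistic(x, r=3.99):
    """Logistic chaotic map; stays in (0, 1) for x0 in (0, 1)."""
    return r * x * (1.0 - x)

def encrypt(pixels, x0=0.4):
    n = len(pixels)
    out = [0] * n
    x = x0
    samples = []
    for _ in range(n):
        x = logistic(x)
        samples.append(x)
    # Sorting chaotic samples yields a key-dependent permutation.
    order = sorted(range(n), key=lambda i: samples[i])
    prev = 0
    for i, src in enumerate(order):
        # Permute and diffuse the same pixel in one step.
        key = int(samples[i] * 256) % 256
        out[i] = pixels[src] ^ key ^ prev
        prev = out[i]
    return out, order, samples

def decrypt(cipher, order, samples):
    n = len(cipher)
    plain = [0] * n
    prev = 0
    for i, src in enumerate(order):
        key = int(samples[i] * 256) % 256
        plain[src] = cipher[i] ^ key ^ prev
        prev = cipher[i]
    return plain

img = [12, 200, 45, 90, 7]            # gray-level pixels, 0..255
cipher, order, samples = encrypt(img)
restored = decrypt(cipher, order, samples)
```

    Because permutation and diffusion share one pass, the cipher avoids a second traversal of the image, which is the sending-time saving the abstract refers to.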

  20. Cost-utility analysis of posterior minimally invasive fusion compared with conventional open fusion for lumbar spondylolisthesis

    PubMed Central

    Rampersaud, Y. Raja; Gray, Randolph; Lewis, Steven J.; Massicotte, Eric M.; Fehlings, Michael G.

    2011-01-01

    Background The utility and cost of minimally invasive surgical (MIS) fusion remain controversial. The primary objective of this study was to compare the direct economic impact of 1- and 2-level fusion for grade I or II degenerative or isthmic spondylolisthesis via an MIS technique compared with conventional open posterior decompression and fusion. Methods A retrospective cohort study was performed by use of prospective data from 78 consecutive patients (37 with MIS technique by 1 surgeon and 41 with open technique by 3 surgeons). Independent review of demographic, intraoperative, and acute postoperative data was performed. Oswestry disability index (ODI) and Short Form 36 (SF-36) values were prospectively collected preoperatively and at 1 year postoperatively. Cost-utility analysis was performed by use of in-hospital micro-costing data (operating room, nursing, imaging, laboratories, pharmacy, and allied health cost) and change in health utility index (SF-6D) at 1 year. Results The groups were comparable in terms of age, sex, preoperative hemoglobin, comorbidities, and body mass index. Groups significantly differed (P < .01) regarding baseline ODI and SF-6D scores, as well as number of 2-level fusions (MIS, 12; open, 20) and number of interbody cages (MIS, 45; open, 14). Blood loss (200 mL vs 798 mL), transfusions (0% vs 17%), and length of stay (LOS) (6.1 days vs 8.4 days) were significantly (P < .01) lower in the MIS group. Complications were also fewer in the MIS group (4 vs 12, P < .02). The mean cost of an open fusion was 1.28 times greater than that of an MIS fusion (P = .001). Both groups had significant improvement in 1-year outcome. The changes in ODI and SF-6D scores were not statistically different between groups. Multivariate regression analysis showed that LOS and number of levels fused were independent predictors of cost. Age and MIS were the only predictors of LOS. Baseline outcomes and MIS were predictors of 1-year outcome. 
Conclusion MIS posterior fusion for spondylolisthesis does reduce blood loss, transfusion requirements, and LOS. Both techniques provided substantial clinical improvements at 1 year. The cost utility of the MIS technique was considered comparable to that of the open technique. Level of Evidence Level III. PMID:25802665

  1. Preventing Fusion Mass Shift Avoids Postoperative Distal Curve Adding-on in Adolescent Idiopathic Scoliosis.

    PubMed

    Shigematsu, Hideki; Cheung, Jason Pui Yin; Bruzzone, Mauro; Matsumori, Hiroaki; Mak, Kin-Cheung; Samartzis, Dino; Luk, Keith Dip Kei

    2017-05-01

    Surgery for adolescent idiopathic scoliosis (AIS) is only complete after achieving fusion to maintain the correction obtained intraoperatively. The instrumented or fused segments can be referred to as the "fusion mass". In patients with AIS, the ideal fusion mass strategy has been established based on fulcrum-bending radiographs for main thoracic curves. Ideally, the fusion mass should achieve parallel endplates of the upper and lower instrumented vertebrae and correct any "shift" for truncal balance. Distal adding-on is an important element to consider in AIS surgery. This phenomenon represents a progressive increase in the number of vertebrae included distally in the primary curvature, and it should be avoided as it is associated with unsatisfactory cosmesis and an increased risk of revision surgery. However, it remains unknown whether a fusion mass shift, that is, a shift in the fusion mass or instrumented segments, affects global spinal balance and distal adding-on after curve correction surgery in patients with AIS. The aims of this study were (1) to investigate the relationship among postoperative fusion mass shift, global balance, and the distal adding-on phenomenon in patients with AIS; and (2) to identify a cutoff value of fusion mass shift that will lead to distal adding-on. This was a retrospective study of patients with AIS from a single institution. Between 2006 and 2011 we performed 69 selective thoracic fusions for patients with main thoracic AIS. All patients were evaluated preoperatively and at 2 years postoperatively. The Cobb angle between the cranial and caudal endplates of the fusion mass and the coronal shift between them, defined as the "fusion mass shift", were measured. Patients with a fusion mass Cobb angle greater than 20° were excluded to specifically determine the effect of fusion mass shift on the distal adding-on phenomenon. The fusion mass shift threshold was empirically set at 20 mm for analysis.
Therefore, of the 69 patients who underwent selective thoracic fusion, only 52 with a fusion mass Cobb angle of 20° or less were recruited for study. We defined patients with a fusion mass shift of 20 mm or less as the balanced group and those with a fusion mass shift greater than 20 mm as the unbalanced group. A receiver operating characteristic (ROC) curve was used to determine the cutoff point of fusion mass shift for adding-on. Of the 52 patients studied, fusion mass shift (> 20 mm) was noted in 11 (21%), and six of those patients had distal adding-on at final follow-up. Although global spinal balance did not differ significantly between patients with or without fusion mass shift, the occurrence of the adding-on phenomenon was significantly higher in the unbalanced group (55% [six of 11 patients]; odds ratio [OR], 8.6; 95% CI, 2-39; p < 0.002) than in the balanced group (12% [five of 41 patients]). Based on the ROC curve analysis, a fusion mass shift of more than 18 mm was identified as the cutoff point for the distal adding-on phenomenon (area under the curve, 0.70; 95% CI, 0.5-0.9; likelihood ratio, 5.0; sensitivity, 0.64; specificity, 0.73; positive predictive value, 39% [seven of 18 patients]; negative predictive value, 88% [30 of 34 patients]; OR, 4.8; 95% CI, 1-20; p = 0.02). Our study illustrates the substantial utility of the fulcrum-bending radiograph in determining fusion levels that can avoid fusion mass shift, thereby underlining its importance in designing personalized surgical strategies for patients with scoliosis. Preoperatively, determining fusion levels by fulcrum-bending radiographs to avoid residual fusion mass shift is imperative. Intraoperatively, any fusion mass shift should be corrected to avoid distal adding-on, reoperation, and elevated healthcare costs. Level II, prognostic study.
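
    Deriving a cutoff from an ROC curve, as in the study above, is commonly done by maximizing the Youden index (sensitivity + specificity − 1) over candidate thresholds. The paper does not state which criterion it used, so the Youden index and the toy data below are illustrative assumptions.

```python
# Hedged sketch: pick the ROC cutoff that maximizes the Youden index
# J = sensitivity + specificity - 1 over all candidate thresholds.

def youden_cutoff(values, labels):
    """Return (cutoff, J) maximizing sensitivity + specificity - 1.

    values -- measured quantity per patient (e.g. shift in mm)
    labels -- 1 if the outcome (adding-on) occurred, else 0
    """
    pos = sum(labels)
    neg = len(labels) - pos
    best_cut, best_j = None, -1.0
    for cut in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v > cut and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v <= cut and y == 0)
        j = tp / pos + tn / neg - 1.0
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Toy data: larger shifts tend to co-occur with the adding-on outcome.
shift = [5, 8, 12, 15, 17, 19, 21, 25, 30, 35]
addon = [0, 0, 0, 0, 0, 1, 0, 1, 1, 1]
cut, j = youden_cutoff(shift, addon)   # cut = 17 on this toy data
```

    At the chosen cutoff every positive case is classified correctly while five of six negatives fall below it, which is why J peaks there on this data.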

  2. A robust object-based shadow detection method for cloud-free high resolution satellite images over urban areas and water bodies

    NASA Astrophysics Data System (ADS)

    Tatar, Nurollah; Saadatseresht, Mohammad; Arefi, Hossein; Hadavand, Ahmad

    2018-06-01

    Unwanted contrast in high resolution satellite images, such as shadow areas, directly affects the results of further processing in urban remote sensing images. Detecting and finding the precise position of shadows is critical in different remote sensing processing chains such as change detection, image classification, and digital elevation model generation from stereo images. The spectral similarity between shadow areas, water bodies, and some dark asphalt roads makes the development of robust shadow detection algorithms challenging. In addition, most of the existing methods work at the pixel level and neglect the contextual information contained in neighboring pixels. In this paper, a new object-based shadow detection framework is introduced. In the proposed method, a pixel-level shadow mask is built by extending established thresholding methods with a new C4 index, which resolves the ambiguity between shadows and water bodies. The pixel-based results are then further processed by an object-based majority analysis to detect the final shadow objects. Four different high resolution satellite images are used to validate the new approach. The results show the superiority of the proposed method over several state-of-the-art shadow detection methods, with an average F-measure of 96%.
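
    The object-based majority step can be sketched as below: a pixel-level mask (here produced by a simple intensity threshold standing in for the paper's C4-index thresholding, an assumption) is cleaned up by voting within each pre-segmented object, so isolated misclassified pixels are overruled by their object.

```python
# Hedged sketch of pixel-level thresholding followed by object-based
# majority analysis. The intensity threshold is a stand-in for the
# paper's index-based shadow test.

def pixel_shadow_mask(intensity, threshold):
    """Flag dark pixels as candidate shadow (1) at the pixel level."""
    return [1 if v < threshold else 0 for v in intensity]

def object_majority(mask, objects):
    """Relabel every pixel by the majority vote inside its object.

    objects -- dict mapping object id -> list of pixel indices
    """
    result = list(mask)
    for pix_ids in objects.values():
        votes = sum(mask[i] for i in pix_ids)
        label = 1 if votes * 2 > len(pix_ids) else 0
        for i in pix_ids:
            result[i] = label
    return result

intensity = [20, 25, 200, 30, 210, 22]          # toy pixel intensities
mask = pixel_shadow_mask(intensity, 50)         # noisy pixel-level mask
objects = {0: [0, 1, 2], 1: [3, 4, 5]}          # two segmented objects
clean = object_majority(mask, objects)          # object-level decision
```

    Here both objects are mostly dark, so the two bright outlier pixels are absorbed into shadow objects, which is the contextual correction the object level provides over a purely pixel-level mask.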

  3. Ventral cervical fusion at multiple levels using free vascularized double-islanded fibula - a technical report and review of the relevant literature.

    PubMed

    Krishnan, Kartik G; Müller, Adolf

    2002-04-01

    Reconstruction of the cervical spine using free vascularized bone flaps has been described in the literature. The reports involve either one level or, when multiple levels are involved, they describe en bloc resection and reconstruction. Stabilization of different levels with a preserved intermediate segment using a single vascularized flap has not been described. We report on the case of a 55-year-old man, who had been operated on several times using conventional techniques for cervical myelopathy and instability, and who presented to us with severe neck pain. Diagnostic procedures showed pseudarthrosis of C3/4 and stress overload of the C3/4 and C5/6 segments. The C4/5 fusion was adequately rigid, but avascular. We performed anterior cervical fusion at the C3/4 and C5/6 levels with a vascularized fibula flap modified as a double island. The rigidly fused C4/5 block was preserved and vascularized with the periosteum bridging the two fibular islands. The method and technique are described in detail. Fusion was adequate. Donor site morbidity was minimal and temporary. The patient is symptom free to date (25 months). The suggested method provides the possibility of vertebral fusion at different levels using a single vascularized flap. The indications for this procedure are (1) repeated failure of conventional methods, (2) established poor bone healing and bone non-union with avascular grafts, and (3) a well-fused or preserved intermediate segment. The relevant literature is reviewed.

  4. An instrument for in situ time-resolved X-ray imaging and diffraction of laser powder bed fusion additive manufacturing processes

    NASA Astrophysics Data System (ADS)

    Calta, Nicholas P.; Wang, Jenny; Kiss, Andrew M.; Martin, Aiden A.; Depond, Philip J.; Guss, Gabriel M.; Thampy, Vivek; Fong, Anthony Y.; Weker, Johanna Nelson; Stone, Kevin H.; Tassone, Christopher J.; Kramer, Matthew J.; Toney, Michael F.; Van Buuren, Anthony; Matthews, Manyalibo J.

    2018-05-01

    In situ X-ray-based measurements of the laser powder bed fusion (LPBF) additive manufacturing process produce unique data for model validation and improved process understanding. Synchrotron X-ray imaging and diffraction provide high resolution, bulk sensitive information with sufficient sampling rates to probe melt pool dynamics as well as phase and microstructure evolution. Here, we describe a laboratory-scale LPBF test bed designed to accommodate diffraction and imaging experiments at a synchrotron X-ray source during LPBF operation. We also present experimental results using Ti-6Al-4V, a widely used aerospace alloy, as a model system. Both imaging and diffraction experiments were carried out at the Stanford Synchrotron Radiation Lightsource. Melt pool dynamics were imaged at frame rates up to 4 kHz with a ˜1.1 μm effective pixel size and revealed the formation of keyhole pores along the melt track due to vapor recoil forces. Diffraction experiments at sampling rates of 1 kHz captured phase evolution and lattice contraction during the rapid cooling present in LPBF within a ˜50 × 100 μm area. We also discuss the utility of these measurements for model validation and process improvement.

  5. An instrument for in situ time-resolved X-ray imaging and diffraction of laser powder bed fusion additive manufacturing processes.

    PubMed

    Calta, Nicholas P; Wang, Jenny; Kiss, Andrew M; Martin, Aiden A; Depond, Philip J; Guss, Gabriel M; Thampy, Vivek; Fong, Anthony Y; Weker, Johanna Nelson; Stone, Kevin H; Tassone, Christopher J; Kramer, Matthew J; Toney, Michael F; Van Buuren, Anthony; Matthews, Manyalibo J

    2018-05-01

    In situ X-ray-based measurements of the laser powder bed fusion (LPBF) additive manufacturing process produce unique data for model validation and improved process understanding. Synchrotron X-ray imaging and diffraction provide high resolution, bulk sensitive information with sufficient sampling rates to probe melt pool dynamics as well as phase and microstructure evolution. Here, we describe a laboratory-scale LPBF test bed designed to accommodate diffraction and imaging experiments at a synchrotron X-ray source during LPBF operation. We also present experimental results using Ti-6Al-4V, a widely used aerospace alloy, as a model system. Both imaging and diffraction experiments were carried out at the Stanford Synchrotron Radiation Lightsource. Melt pool dynamics were imaged at frame rates up to 4 kHz with a ∼1.1 μm effective pixel size and revealed the formation of keyhole pores along the melt track due to vapor recoil forces. Diffraction experiments at sampling rates of 1 kHz captured phase evolution and lattice contraction during the rapid cooling present in LPBF within a ∼50 × 100 μm area. We also discuss the utility of these measurements for model validation and process improvement.

  6. An instrument for in situ time-resolved X-ray imaging and diffraction of laser powder bed fusion additive manufacturing processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calta, Nicholas P.; Wang, Jenny; Kiss, Andrew M.

    In situ X-ray-based measurements of the laser powder bed fusion (LPBF) additive manufacturing process produce unique data for model validation and improved process understanding. Synchrotron X-ray imaging and diffraction provide high resolution, bulk sensitive information with sufficient sampling rates to probe melt pool dynamics as well as phase and microstructure evolution. Here, we describe a laboratory-scale LPBF test bed designed to accommodate diffraction and imaging experiments at a synchrotron X-ray source during LPBF operation. We also present experimental results using Ti-6Al-4V, a widely used aerospace alloy, as a model system. Both imaging and diffraction experiments were carried out at the Stanford Synchrotron Radiation Lightsource. Melt pool dynamics were imaged at frame rates up to 4 kHz with a ~1.1 μm effective pixel size and revealed the formation of keyhole pores along the melt track due to vapor recoil forces. Diffraction experiments at sampling rates of 1 kHz captured phase evolution and lattice contraction during the rapid cooling present in LPBF within a ~50 × 100 μm area. In conclusion, we discuss the utility of these measurements for model validation and process improvement.

  7. Perceptual full-reference quality assessment of stereoscopic images by considering binocular visual characteristics.

    PubMed

    Shao, Feng; Lin, Weisi; Gu, Shanbo; Jiang, Gangyi; Srikanthan, Thambipillai

    2013-05-01

    Perceptual quality assessment is a challenging issue in 3D signal processing research. It is important to study the 3D signal directly, rather than simply extending 2D metrics to the 3D case as in some previous studies. In this paper, we propose a new perceptual full-reference quality assessment metric for stereoscopic images that considers binocular visual characteristics. The major technical contribution of this paper is that binocular perception and combination properties are considered in quality assessment. To be more specific, we first perform left-right consistency checks and compare the matching error between corresponding pixels in the binocular disparity calculation, and classify the stereoscopic images into non-corresponding, binocular fusion, and binocular suppression regions. Also, local phase and local amplitude maps are extracted from the original and distorted stereoscopic images as features in quality assessment. Then, each region is evaluated independently by considering its binocular perception property, and all evaluation results are integrated into an overall score. In addition, a binocular just-noticeable-difference model is used to reflect the visual sensitivity for the binocular fusion and suppression regions. Experimental results show that, compared with relevant existing metrics, the proposed metric achieves higher consistency with subjective assessment of stereoscopic images.
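
    The left-right consistency check mentioned above can be sketched as follows: a pixel's disparity is accepted as corresponding if mapping it into the right view and reading the right-view disparity there returns (nearly) the same value. The 1-D disparity arrays and the 1-pixel tolerance are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of a left-right disparity consistency check. Pixels
# failing the check would be treated as non-corresponding regions in
# the kind of classification the paper describes.

def lr_consistency(disp_left, disp_right, tol=1):
    """Return per-pixel flags: True where left/right disparities agree."""
    n = len(disp_left)
    flags = []
    for x in range(n):
        d = disp_left[x]
        xr = x - d                      # matching column in the right view
        if 0 <= xr < n and abs(disp_right[xr] - d) <= tol:
            flags.append(True)          # corresponding pixel
        else:
            flags.append(False)         # occlusion or mismatch
    return flags

# Toy 1-D disparity rows; the last left pixel has an inconsistent match.
disp_l = [0, 1, 1, 2, 4]
disp_r = [0, 1, 1, 1, 2]
flags = lr_consistency(disp_l, disp_r)
```

    Only consistent pixels would then proceed to the fusion/suppression region classification; the rest form the non-corresponding region.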

  8. Generating Daily Synthetic Landsat Imagery by Combining Landsat and MODIS Data

    PubMed Central

    Wu, Mingquan; Huang, Wenjiang; Niu, Zheng; Wang, Changyao

    2015-01-01

    Owing to low temporal resolution and cloud interference, there is a shortage of high spatial resolution remote sensing data. To address this problem, this study introduces a modified spatial and temporal data fusion approach (MSTDFA) to generate daily synthetic Landsat imagery. This algorithm was designed to avoid the limitations of the conditional spatial temporal data fusion approach (STDFA), including the constant disaggregation window and the sensor difference. An adaptive window size selection method is proposed to select the best window size and moving steps for the disaggregation of coarse pixels. The linear regression method is used to remove the influence of differences in sensor systems using the disaggregated mean coarse reflectance, with testing and validation in two study areas located in Xinjiang Province, China. The results show that the MSTDFA algorithm can generate daily synthetic Landsat imagery with a high correlation coefficient (R) ranging from 0.646 to 0.986 between the synthetic images and the actual observations. We further show that MSTDFA can be applied to 250 m 16-day MODIS MOD13Q1 products and Landsat Normalized Difference Vegetation Index (NDVI) data by generating a synthetic NDVI image highly similar to the actual Landsat NDVI observation, with a high R of 0.97. PMID:26393607

  9. Generating Daily Synthetic Landsat Imagery by Combining Landsat and MODIS Data.

    PubMed

    Wu, Mingquan; Huang, Wenjiang; Niu, Zheng; Wang, Changyao

    2015-09-18

    Owing to low temporal resolution and cloud interference, there is a shortage of high spatial resolution remote sensing data. To address this problem, this study introduces a modified spatial and temporal data fusion approach (MSTDFA) to generate daily synthetic Landsat imagery. This algorithm was designed to avoid the limitations of the conditional spatial temporal data fusion approach (STDFA), including the constant disaggregation window and the sensor difference. An adaptive window size selection method is proposed to select the best window size and moving steps for the disaggregation of coarse pixels. The linear regression method is used to remove the influence of differences in sensor systems using the disaggregated mean coarse reflectance, with testing and validation in two study areas located in Xinjiang Province, China. The results show that the MSTDFA algorithm can generate daily synthetic Landsat imagery with a high correlation coefficient (R) ranging from 0.646 to 0.986 between the synthetic images and the actual observations. We further show that MSTDFA can be applied to 250 m 16-day MODIS MOD13Q1 products and Landsat Normalized Difference Vegetation Index (NDVI) data by generating a synthetic NDVI image highly similar to the actual Landsat NDVI observation, with a high R of 0.97.
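
    The linear-regression sensor-adjustment step can be sketched as below: an ordinary least-squares line is fitted between disaggregated coarse reflectance and fine-sensor reflectance on dates where both exist, then applied to predict fine-scale values on coarse-only dates. Variable names and the toy reflectance values are illustrative assumptions.

```python
# Hedged sketch of removing a sensor difference with ordinary least
# squares: fit fine = a * coarse + b on paired dates, then apply the
# line to coarse-only dates to synthesize fine-scale reflectance.

def fit_line(xs, ys):
    """Ordinary least squares fit of y = a * x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Paired observations: disaggregated MODIS-like vs Landsat-like reflectance.
coarse = [0.10, 0.20, 0.30, 0.40]
fine = [0.12, 0.22, 0.32, 0.42]    # constant sensor offset of 0.02
a, b = fit_line(coarse, fine)
predicted = a * 0.25 + b           # synthetic fine value, coarse-only date
```

    With a perfectly linear sensor difference the fit recovers slope 1 and offset 0.02; real reflectance pairs would scatter around the line, and R between synthetic and observed imagery measures how well this holds.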

  10. Hyperspectral and LiDAR remote sensing of fire fuels in Hawaii Volcanoes National Park.

    PubMed

    Varga, Timothy A; Asner, Gregory P

    2008-04-01

    Alien invasive grasses threaten to transform Hawaiian ecosystems through the alteration of ecosystem dynamics, especially the creation or intensification of a fire cycle. Across sub-montane ecosystems of Hawaii Volcanoes National Park on Hawaii Island, we quantified fine fuels and the fire spread potential of invasive grasses using a combination of airborne hyperspectral and light detection and ranging (LiDAR) measurements. Across a gradient from forest to savanna to shrubland, automated mixture analysis of hyperspectral data provided spatially explicit fractional cover estimates of photosynthetic vegetation, non-photosynthetic vegetation, bare substrate, and shade. Small-footprint LiDAR provided measurements of vegetation height along this gradient of ecosystems. Through the fusion of hyperspectral and LiDAR data, a new fire fuel index (FFI) was developed to model the three-dimensional volume of grass fuels. Regionally, savanna ecosystems had the highest volumes of fire fuels, averaging 20% across the ecosystem and frequently filling all of the three-dimensional space represented by each image pixel. The forest and shrubland ecosystems had lower FFI values, averaging 4.4% and 8.4%, respectively. The results indicate that the fusion of hyperspectral and LiDAR remote sensing can provide unique information on the three-dimensional properties of ecosystems, their flammability, and the potential for fire spread.
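
    A fuel index of the kind described might combine the two data sources as sketched below: the hyperspectral grass-cover fraction is scaled by LiDAR-derived relative vegetation height to approximate the fraction of each pixel's 3-D volume filled by fuels. The exact FFI formula is not given in the abstract, so this product form is purely an assumption.

```python
# Hypothetical sketch of a per-pixel fire fuel index combining a
# hyperspectral cover fraction with LiDAR vegetation height. The
# product form is an assumption; the abstract only states that FFI
# models the 3-D volume of grass fuels from the two sensors.

def fire_fuel_index(grass_fraction, veg_height, max_height):
    """Per-pixel index in [0, 1]: cover fraction times relative height."""
    ffi = []
    for frac, h in zip(grass_fraction, veg_height):
        ffi.append(frac * min(h, max_height) / max_height)
    return ffi

# Toy pixels: a savanna-like pixel (high cover, moderate height),
# a sparse pixel, and a tall well-covered pixel.
cover = [0.9, 0.2, 0.5]       # photosynthetic grass fraction per pixel
height = [1.0, 0.5, 2.0]      # LiDAR canopy height in metres
ffi = fire_fuel_index(cover, height, max_height=2.0)
```

    Under this toy formula a pixel reaches FFI = 1 only when grass fully covers it up to the reference height, matching the abstract's notion of fuels "filling" a pixel's 3-D space.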

  11. An instrument for in situ time-resolved X-ray imaging and diffraction of laser powder bed fusion additive manufacturing processes

    DOE PAGES

    Calta, Nicholas P.; Wang, Jenny; Kiss, Andrew M.; ...

    2018-05-01

    In situ X-ray-based measurements of the laser powder bed fusion (LPBF) additive manufacturing process produce unique data for model validation and improved process understanding. Synchrotron X-ray imaging and diffraction provide high resolution, bulk sensitive information with sufficient sampling rates to probe melt pool dynamics as well as phase and microstructure evolution. Here, we describe a laboratory-scale LPBF test bed designed to accommodate diffraction and imaging experiments at a synchrotron X-ray source during LPBF operation. We also present experimental results using Ti-6Al-4V, a widely used aerospace alloy, as a model system. Both imaging and diffraction experiments were carried out at the Stanford Synchrotron Radiation Lightsource. Melt pool dynamics were imaged at frame rates up to 4 kHz with a ~1.1 μm effective pixel size and revealed the formation of keyhole pores along the melt track due to vapor recoil forces. Diffraction experiments at sampling rates of 1 kHz captured phase evolution and lattice contraction during the rapid cooling present in LPBF within a ~50 × 100 μm area. In conclusion, we discuss the utility of these measurements for model validation and process improvement.

  12. Sensitivity of Marine Warm Cloud Retrieval Statistics to Algorithm Choices: Examples from MODIS Collection 6

    NASA Astrophysics Data System (ADS)

    Platnick, S.; Wind, G.; Zhang, Z.; Ackerman, S. A.; Maddux, B. C.

    2012-12-01

    The optical and microphysical structure of warm boundary layer marine clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS (Moderate Resolution Imaging Spectroradiometer) on the NASA EOS Terra and Aqua platforms, simultaneous global/daily 1km retrievals of cloud optical thickness and effective particle size are provided, as well as the derived water path. In addition, the cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate effective radii results using the 1.6, 2.1, and 3.7 μm spectral channels. Cloud retrieval statistics are highly sensitive to how a pixel identified as being "not-clear" by a cloud mask (e.g., the MOD35/MYD35 product) is determined to be useful for an optical retrieval based on a 1-D cloud model. The Collection 5 MODIS retrieval algorithm removed pixels associated with cloud edges (defined by immediate adjacency to "clear" MOD/MYD35 pixels) as well as ocean pixels with partly cloudy elements in the 250m MODIS cloud mask - part of the so-called Clear Sky Restoral (CSR) algorithm. Collection 6 attempts retrievals for those two pixel populations, but allows a user to isolate or filter out the populations via CSR pixel-level Quality Assessment (QA) assignments. In this paper, using the preliminary Collection 6 MOD06 product, we present global and regional statistical results of marine warm cloud retrieval sensitivities to the cloud edge and 250m partly cloudy pixel populations. As expected, retrievals for these pixels are generally consistent with a breakdown of the 1D cloud model. While optical thickness for these suspect pixel populations may have some utility for radiative studies, the retrievals should be used with extreme caution for process and microphysical studies.

  13. Clinical evaluation of an allogeneic bone matrix containing viable osteogenic cells in patients undergoing one- and two-level posterolateral lumbar arthrodesis with decompressive laminectomy.

    PubMed

    Musante, David B; Firtha, Michael E; Atkinson, Brent L; Hahn, Rebekah; Ryaby, James T; Linovitz, Raymond J

    2016-05-27

    Trinity Evolution® cellular bone allograft (TE) possesses the osteogenic, osteoinductive, and osteoconductive elements essential for bone healing. The purpose of this study is to evaluate the radiographic and clinical outcomes when TE is used as a graft extender in combination with locally derived bone in one- and two-level instrumented lumbar posterolateral arthrodeses. In this retrospective evaluation, a consecutive series of subject charts that had posterolateral arthrodesis with TE and a 12-month radiographic follow-up were evaluated. All subjects were diagnosed with degenerative disc disease, radiculopathy, stenosis, and decreased disc height. At 2 weeks and at 3 and 12 months, plain radiographs were obtained and the subject's back and leg pain (VAS) was recorded. An evaluation of fusion status was performed at 12 months. The population consisted of 43 subjects and 47 arthrodeses. At 12 months, a fusion rate of 90.7% of subjects and 89.4% of surgical levels was observed. High-risk subjects (e.g., those with diabetes or tobacco use) had fusion rates comparable to those of normal-risk patients. Compared with the preoperative leg or back pain level, the postoperative pain levels were significantly (p < 0.0001) improved at every time point. There were no adverse events attributable to TE. Fusion rates using TE were higher than or comparable to fusion rates with autologous iliac crest bone graft that have been reported in the recent literature for posterolateral fusion procedures, and TE fusion rates were not adversely affected by several high-risk patient factors. The positive results provide confidence that TE can safely replace autologous iliac crest bone graft when used as a bone graft extender in combination with locally derived bone in the setting of posterolateral lumbar arthrodesis in patients with or without risk factors for compromised bone healing. Because of the retrospective nature of this study, the trial was not registered.

  14. Progress in understanding the neuronal SNARE function and its regulation.

    PubMed

    Yoon, T-Y; Shin, Y-K

    2009-02-01

    Vesicle budding and fusion underlie many essential biochemical deliveries in eukaryotic cells, and the core fusion machinery is thought to be built on one protein family named soluble N-ethylmaleimide-sensitive factor attachment protein receptor (SNARE). Recent technical advances based on site-directed fluorescence labelling and nano-scale detection down to the single-molecule level have rapidly unveiled the protein and lipid intermediates along the fusion pathway as well as the molecular actions of fusion effectors. Here we summarize these new exciting findings in the context of a new mechanistic model that reconciles two existing fusion models: the proteinaceous pore model and the hemifusion model. Further, we attempt to locate the points of action for the fusion effectors along the fusion pathway and to delineate the energetic interplay between the SNARE complexes and the fusion effectors.

  15. Remotely controlled fusion of selected vesicles and living cells: a key issue review

    NASA Astrophysics Data System (ADS)

    Bahadori, Azra; Moreno-Pescador, Guillermo; Oddershede, Lene B.; Bendix, Poul M.

    2018-03-01

    Remote control over fusion of single cells and vesicles has great potential in biological and chemical research, allowing both transfer of genetic material between cells and transfer of molecular content between vesicles. Membrane fusion is a critical process in biology that facilitates molecular transport and mixing of cellular cytoplasms with potential formation of hybrid cells. Cells precisely regulate internal membrane fusions with the aid of specialized fusion complexes that physically provide the energy necessary for mediating fusion. Physical factors like membrane curvature, tension and temperature affect biological membrane fusion by lowering the associated energy barrier. This has inspired the development of physical approaches to harness the fusion process at the single-cell level by using remotely controlled electromagnetic fields to trigger membrane fusion. Here, we critically review various approaches, based on lasers or electric pulses, to control fusion between individual cells or between individual lipid vesicles and discuss their potential and limitations for present and future applications within biochemistry, biology and soft matter.

  16. Lumbar interbody fusion: techniques, indications and comparison of interbody fusion options including PLIF, TLIF, MI-TLIF, OLIF/ATP, LLIF and ALIF

    PubMed Central

    Phan, Kevin; Malham, Greg; Seex, Kevin; Rao, Prashanth J.

    2015-01-01

    Degenerative disc and facet joint disease of the lumbar spine is common in the ageing population, and is one of the most frequent causes of disability. Lumbar spondylosis may result in mechanical back pain, radicular and claudicant symptoms, reduced mobility and poor quality of life. Surgical interbody fusion of degenerative levels is an effective treatment option to stabilize the painful motion segment, and may provide indirect decompression of the neural elements, restore lordosis and correct deformity. The surgical options for interbody fusion of the lumbar spine include: posterior lumbar interbody fusion (PLIF), transforaminal lumbar interbody fusion (TLIF), minimally invasive transforaminal lumbar interbody fusion (MI-TLIF), oblique lumbar interbody fusion/anterior to psoas (OLIF/ATP), lateral lumbar interbody fusion (LLIF) and anterior lumbar interbody fusion (ALIF). The indications may include: discogenic/facetogenic low back pain, neurogenic claudication, radiculopathy due to foraminal stenosis, lumbar degenerative spinal deformity including symptomatic spondylolisthesis and degenerative scoliosis. In general, traditional posterior approaches are frequently used with acceptable fusion rates and low complication rates, however they are limited by thecal sac and nerve root retraction, along with iatrogenic injury to the paraspinal musculature and disruption of the posterior tension band. Minimally invasive (MIS) posterior approaches have evolved in an attempt to reduce approach related complications. Anterior approaches avoid the spinal canal, cauda equina and nerve roots, however have issues with approach related abdominal and vascular complications. In addition, lateral and OLIF techniques have potential risks to the lumbar plexus and psoas muscle. The present study aims firstly to comprehensively review the available literature and evidence for different lumbar interbody fusion (LIF) techniques. 
Secondly, we propose a set of recommendations and guidelines for the indications for interbody fusion options. Thirdly, this article provides a description of each approach, and illustrates the potential benefits and disadvantages of each technique with reference to indication and spine level performed. PMID:27683674

  17. Minimally invasive transforaminal lumbar interbody fusion for spondylolisthesis and degenerative spondylosis: 5-year results.

    PubMed

    Park, Yung; Ha, Joong Won; Lee, Yun Tae; Sung, Na Young

    2014-06-01

    Multiple studies have reported favorable short-term results after treatment of spondylolisthesis and other degenerative lumbar diseases with minimally invasive transforaminal lumbar interbody fusion. However, to our knowledge, results at a minimum of 5 years have not been reported. We determined (1) changes to the Oswestry Disability Index, (2) frequency of radiographic fusion, (3) complications and reoperations, and (4) the learning curve associated with minimally invasive transforaminal lumbar interbody fusion at minimum 5-year followup. We reviewed our first 124 patients who underwent minimally invasive transforaminal lumbar interbody fusion to treat low-grade spondylolisthesis and degenerative lumbar diseases and did not need a major deformity correction. This represented 63% (124 of 198) of the transforaminal lumbar interbody fusion procedures we performed for those indications during the study period (2003-2007). Eighty-three (67%) patients had complete 5-year followup. Plain radiographs and CT scans were evaluated by two reviewers. Trends of surgical time, blood loss, and hospital stay over time were examined by logarithmic curve fit-regression analysis to evaluate the learning curve. At 5 years, mean Oswestry Disability Index improved from 60 points preoperatively to 24 points and 79 of 83 patients (95%) had improvement of greater than 10 points. At 5 years, 67 of 83 (81%) achieved radiographic fusion, including 64 of 72 patients (89%) who had single-level surgery. Perioperative complications occurred in 11 of 124 patients (9%), and another surgical procedure was performed in eight of 124 patients (6.5%) involving the index level and seven of 124 patients (5.6%) at adjacent levels. There were slowly decreasing trends of surgical time and hospital stay only in single-level surgery and almost no change in intraoperative blood loss over time, suggesting a challenging learning curve. 
Oswestry Disability Index scores improved for patients with spondylolisthesis and degenerative lumbar diseases treated with minimally invasive transforaminal lumbar interbody fusion at minimum 5-year followup. We suggest this procedure is reasonable for properly selected patients with these indications; however, traditional approaches should still be performed for patients with high-grade spondylolisthesis, patients with a severely collapsed disc space and no motion seen on the dynamic radiographs, patients who need multilevel decompression and arthrodesis, and patients with kyphoscoliosis needing correction. Level IV, therapeutic study. See the Instructions for Authors for a complete description of levels of evidence.

  18. Thematic accuracy of the 1992 National Land-Cover Data for the western United States

    USGS Publications Warehouse

    Wickham, J.D.; Stehman, S.V.; Smith, J.H.; Yang, L.

    2004-01-01

    The MultiResolution Land Characteristics (MRLC) consortium sponsored production of the National Land Cover Data (NLCD) for the conterminous United States, using Landsat imagery collected on a target year of 1992 (1992 NLCD). Here we report the thematic accuracy of the 1992 NLCD for the six western mapping regions. Reference data were collected in each region for a probability sample of pixels stratified by map land-cover class. Results are reported for each of the six mapping regions with agreement defined as a match between the primary or alternate reference land-cover label and a mode class of the mapped 3×3 block of pixels centered on the sample pixel. Overall accuracy at Anderson Level II was low and variable across the regions, ranging from 38% for the Midwest to 70% for the Southwest. Overall accuracy at Anderson Level I was higher and more consistent across the regions, ranging from 82% to 85% for five of the six regions, but only 74% for the South-central region.
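    The agreement rule described above (a hit when either the primary or alternate reference label matches the mode class of the mapped 3×3 block centred on the sample pixel) can be sketched as follows. This is an illustrative reconstruction, not the MRLC accuracy-assessment code; the array and function names are assumptions:

```python
import numpy as np
from collections import Counter

def block_mode_agreement(land_cover, samples, primary, alternate):
    """Percent agreement where a sampled pixel counts as a match if the
    mode class of the mapped 3x3 block centred on it equals either the
    primary or the alternate reference land-cover label."""
    hits = 0
    for (r, c), prim, alt in zip(samples, primary, alternate):
        block = land_cover[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
        mode_class = Counter(block.ravel().tolist()).most_common(1)[0][0]
        hits += mode_class in (prim, alt)
    return hits / len(samples)

# toy map: class 1 everywhere except a single class-2 pixel
lc = np.ones((5, 5), dtype=int)
lc[2, 2] = 2
print(block_mode_agreement(lc, [(2, 2), (0, 0)], [1, 3], [2, 1]))  # 1.0
```

    Using the block mode rather than the single sample pixel makes the assessment tolerant of small geo-registration errors between map and reference data.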

  19. Radiation hardness studies of AMS HV-CMOS 350 nm prototype chip HVStripV1

    DOE PAGES

    Kanisauskas, K.; Affolder, A.; Arndt, K.; ...

    2017-02-15

    CMOS active pixel sensors are being investigated for their potential use in the ATLAS inner tracker upgrade at the HL-LHC. The new inner tracker will have to handle a significant increase in luminosity while maintaining a sufficient signal-to-noise ratio and pulse shaping times. This paper focuses on the characterization of the prototype chip "HVStripV1" (manufactured in the AMS HV-CMOS 350 nm process) before and after irradiation up to the fluence levels expected for the strip region in the HL-LHC environment. The results indicate an increase of the depletion region after irradiation, for the same bias voltage, by a factor of ≈2.4 and ≈2.8 for two active pixels on the test chip. As a result, there was also a notable increase in noise levels from 85 e⁻ to 386 e⁻ and from 75 e⁻ to 277 e⁻ for the corresponding pixels.

  20. Radiation hardness studies of AMS HV-CMOS 350 nm prototype chip HVStripV1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanisauskas, K.; Affolder, A.; Arndt, K.

    CMOS active pixel sensors are being investigated for their potential use in the ATLAS inner tracker upgrade at the HL-LHC. The new inner tracker will have to handle a significant increase in luminosity while maintaining a sufficient signal-to-noise ratio and pulse shaping times. This paper focuses on the characterization of the prototype chip "HVStripV1" (manufactured in the AMS HV-CMOS 350 nm process) before and after irradiation up to the fluence levels expected for the strip region in the HL-LHC environment. The results indicate an increase of the depletion region after irradiation, for the same bias voltage, by a factor of ≈2.4 and ≈2.8 for two active pixels on the test chip. As a result, there was also a notable increase in noise levels from 85 e⁻ to 386 e⁻ and from 75 e⁻ to 277 e⁻ for the corresponding pixels.

  1. Clinical and radiographic assessment of transforaminal lumbar interbody fusion using HEALOS collagen-hydroxyapatite sponge with autologous bone marrow aspirate.

    PubMed

    Carter, Jason D; Swearingen, Alan B; Chaput, Christopher D; Rahm, Mark D

    2009-06-01

    Studies have suggested that the use of bone marrow aspirate (BMA) with HEALOS (DePuy Spine, Raynham, MA), a collagen-hydroxyapatite sponge (CHS), is an effective substitute for autologous iliac crest bone graft when used in fusion procedures of the lumbar spine. To assess clinical and radiographic outcomes after implantation of BMA/CHS in patients undergoing transforaminal lumbar interbody fusion (TLIF) with posterolateral fusion (PLF). Case series radiographic outcome study. Twenty patients. Radiographs/computed tomography (CT) scans. From September 2003 to October 2004, 20 patients (22 interbody levels) were implanted with BMA/CHS via TLIF/PLF with interbody cages and posterior pedicle screws. All patients were retrospectively identified and invited for a 2-year prospective follow-up. Plain radiographs with dynamic films and CT scans were taken, and fusion was assessed in a blinded manner. Follow-up averaged 27 months (range: 24-29). Primary diagnosis included spondylolisthesis (17 patients), scoliosis with asymmetric collapse (2 patients), and postdiscectomy foraminal stenosis (1 patient). The overall fusion rate was 95% (21/22 levels, 19/20 patients). Anteriorly bridging bone was observed in 91% of the anteriorly fused levels (20/22), of which 65% (13/20) occurred through and around the cage and 35% (7/20) around the cage only. Unilateral or bilateral bridging of the posterior fusion masses was observed in 91% (20/22), with 55% occurring bilaterally (12/22). In 4 (18%) cases, bridging only occurred either posteriorly (2 cases) or anteriorly (2 cases). Complications included one deep wound infection. At the 2-year follow-up, BMA/CHS showed acceptable fusion rates in patients undergoing TLIF/PLF, and can be considered as an alternative source of graft material.

  2. [Expression optimization and characterization of the Tenebrio molitor antimicrobial peptide TmAMP1m in Escherichia coli].

    PubMed

    Alimu, Reyihanguli; Mao, Xinfang; Liu, Zhongyuan

    2013-06-01

    To improve the expression level of the tmAMP1m gene from Tenebrio molitor in Escherichia coli, we studied how culture conditions, such as culture temperature, induction time, and the final concentration of the inducer isopropyl β-D-thiogalactopyranoside (IPTG), affect the expression level and activity of the fusion protein HIS-TmAMP1m. We identified the optimal expression conditions by Tricine-SDS-PAGE electrophoresis and detected antibacterial activity using the agarose cavity diffusion method. The results suggest that expression of the fusion protein HIS-TmAMP1m in Escherichia coli was highest when the recombinant strain was induced with a final IPTG concentration of 0.1 mmol/L at 37 °C for 4 h. Under these conditions, the fusion protein accounted for 40% of the total cell lysate and showed the best antibacterial activity. We purified the fusion protein HIS-TmAMP1m with nickel-nitrilotriacetic acid (Ni-NTA) metal-affinity chromatography matrices. Western blotting analysis indicates that the His monoclonal antibody specifically binds the fusion protein HIS-TmAMP1m. After induced expression, the fusion protein inhibited the growth of host cells transformed with pET30a-tmAMP1m. The fusion protein HIS-TmAMP1m was highly stable, retaining high antibacterial activity after incubation at 100 °C for 10 h, repeated freeze-thawing at -20 °C, dissolution in strong acid or alkali, or treatment with organic solvents and protease. Moreover, minimum inhibitory concentration results demonstrated that the fusion protein HIS-TmAMP1m has good antibacterial activity against Staphylococcus aureus, Staphylococcus sp., Corynebacterium glutamicum, Bacillus thuringiensis, and Corynebacterium sp. This study lays a foundation for the application of insect antimicrobial peptides and for further research.

  3. Formulating Spatially Varying Performance in the Statistical Fusion Framework

    PubMed Central

    Landman, Bennett A.

    2012-01-01

    To date, label fusion methods have primarily relied either on global (e.g. STAPLE, globally weighted vote) or voxelwise (e.g. locally weighted vote) performance models. Optimality of the statistical fusion framework hinges upon the validity of the stochastic model of how a rater errs (i.e., the labeling process model). Hitherto, approaches have tended to focus on the extremes of potential models. Herein, we propose an extension to the STAPLE approach that seamlessly accounts for spatially varying performance by extending the performance-level parameters to a smooth, voxelwise performance field that is unique to each rater. This approach, Spatial STAPLE, provides significant improvements over state-of-the-art label fusion algorithms in both simulated and empirical data sets. PMID:22438513
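    The core idea of a spatially varying performance model can be illustrated with a simple per-voxel weighted vote. Unlike Spatial STAPLE, which estimates each rater's smooth performance field within an EM framework, this sketch takes the weight fields as given inputs; the function and variable names are assumptions:

```python
import numpy as np

def spatially_weighted_vote(labelings, weight_fields, n_labels):
    """Fuse rater labelings, crediting each rater's vote at a voxel in
    proportion to that rater's local (voxelwise) performance weight."""
    votes = np.zeros(labelings[0].shape + (n_labels,))
    for labels, weights in zip(labelings, weight_fields):
        for k in range(n_labels):
            votes[..., k] += weights * (labels == k)
    return votes.argmax(axis=-1)  # label with the largest weighted vote

# two raters who are each reliable on a different half of the volume
rater_a = np.array([0, 0, 1, 1]); w_a = np.array([1.0, 1.0, 0.2, 0.2])
rater_b = np.array([0, 1, 1, 0]); w_b = np.array([0.5, 0.5, 1.0, 1.0])
print(spatially_weighted_vote([rater_a, rater_b], [w_a, w_b], 2))  # [0 0 1 0]
```

    The sketch shows why a voxelwise weight changes the fused label where raters disagree; Spatial STAPLE additionally smooths and iteratively re-estimates these performance fields rather than assuming them.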

  4. Digital classification of Landsat data for vegetation and land-cover mapping in the Blackfoot River watershed, southeastern Idaho

    USGS Publications Warehouse

    Pettinger, L.R.

    1982-01-01

    This paper documents the procedures, results, and final products of a digital analysis of Landsat data used to produce a vegetation and land-cover map of the Blackfoot River watershed in southeastern Idaho. Resource classes were identified at two levels of detail: generalized Level I classes (for example, forest land and wetland) and detailed Levels II and III classes (for example, conifer forest, aspen, wet meadow, and riparian hardwoods). Training set statistics were derived using a modified clustering approach. Environmental stratification that separated uplands from lowlands improved discrimination between resource classes having similar spectral signatures. Digital classification was performed using a maximum likelihood algorithm. Classification accuracy was determined on a single-pixel basis from a random sample of 25-pixel blocks. These blocks were transferred to small-scale color-infrared aerial photographs, and the image area corresponding to each pixel was interpreted. Classification accuracy, expressed as percent agreement of digital classification and photo-interpretation results, was 83.0 ± 2.1 percent (0.95 probability level) for generalized (Level I) classes and 52.2 ± 2.8 percent (0.95 probability level) for detailed (Levels II and III) classes. After the classified images were geometrically corrected, two types of maps were produced of Level I and Levels II and III resource classes: color-coded maps at a 1:250,000 scale, and flatbed-plotter overlays at a 1:24,000 scale. The overlays are more useful because of their larger scale, familiar format to users, and compatibility with other types of topographic and thematic maps of the same scale.
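    The maximum likelihood classification step can be illustrated with a small per-class Gaussian classifier. The class statistics and band values below are toy assumptions, not the Blackfoot River training-set statistics:

```python
import numpy as np

def ml_classify(pixels, class_stats):
    """Assign each pixel vector to the class with the highest Gaussian
    log-likelihood. class_stats maps name -> (mean vector, covariance)."""
    scores = []
    for mean, cov in class_stats.values():
        inv = np.linalg.inv(cov)
        logdet = np.linalg.slogdet(cov)[1]
        d = pixels - mean
        # log N(x; mean, cov) up to a constant shared by all classes
        scores.append(-0.5 * (logdet + np.einsum('ij,jk,ik->i', d, inv, d)))
    names = list(class_stats)
    return [names[i] for i in np.argmax(scores, axis=0)]

stats = {"forest": (np.zeros(2), np.eye(2)),
         "wetland": (np.full(2, 5.0), np.eye(2))}
print(ml_classify(np.array([[0.1, 0.2], [4.9, 5.1]]), stats))
# ['forest', 'wetland']
```

    In practice the means and covariances come from the clustered training sets, and each pixel's multispectral vector is scored against every class.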

  5. Detection of diluted contaminants on chicken carcasses using a two-dimensional scatter plot based on a two-dimensional hyperspectral correlation spectrum.

    PubMed

    Wu, Wei; Chen, Gui-Yun; Wu, Ming-Qing; Yu, Zhen-Wei; Chen, Kun-Jie

    2017-03-20

    A two-dimensional (2D) scatter plot method based on the 2D hyperspectral correlation spectrum is proposed to detect diluted blood, bile, and feces from the cecum and duodenum on chicken carcasses. First, from the collected hyperspectral data, a set of uncontaminated regions of interest (ROIs) and four sets of contaminated ROIs were selected, whose average spectra were treated as the original spectrum and influenced spectra, respectively. Then, the difference spectra were obtained and used to conduct correlation analysis, from which the 2D hyperspectral correlation spectrum was constructed using the analogy method of 2D IR correlation spectroscopy. Two maximum auto-peaks and a pair of cross peaks appeared at 656 and 474 nm. Therefore, 656 and 474 nm were selected as the characteristic bands because they were most sensitive to the spectral change induced by the contaminants. The 2D scatter plots of the contaminants, clean skin, and background in the 474- and 656-nm space were used to distinguish the contaminants from the clean skin and background. The threshold values of the 474- and 656-nm bands were determined by receiver operating characteristic (ROC) analysis. According to the ROC results, a pixel whose relative reflectance at 656 nm was greater than 0.5 and relative reflectance at 474 nm was lower than 0.3 was judged as a contaminated pixel. A region with more than 50 pixels identified was marked in the detection graph. This detection method achieved a recognition rate of up to 95.03% at the region level and 31.84% at the pixel level. The false-positive rate was only 0.82% at the pixel level. The results of this study confirm that the 2D scatter plot method based on the 2D hyperspectral correlation spectrum is an effective method for detecting diluted contaminants on chicken carcasses.
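    The per-pixel decision rule and region-size filter reported above can be sketched directly. The array names are illustrative, and a simple 4-connected flood fill stands in for whatever region-labelling implementation the authors used:

```python
import numpy as np

def contaminant_mask(r656, r474):
    """ROC-derived rule from the paper: relative reflectance > 0.5 at
    656 nm and < 0.3 at 474 nm marks a pixel as contaminated."""
    return (r656 > 0.5) & (r474 < 0.3)

def large_regions(mask, min_pixels=50):
    """4-connected components of the mask larger than min_pixels."""
    seen = np.zeros_like(mask, dtype=bool)
    regions = []
    for r0, c0 in zip(*np.nonzero(mask)):
        if seen[r0, c0]:
            continue
        stack, component = [(r0, c0)], []
        seen[r0, c0] = True
        while stack:
            r, c = stack.pop()
            component.append((r, c))
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not seen[nr, nc]):
                    seen[nr, nc] = True
                    stack.append((nr, nc))
        if len(component) > min_pixels:
            regions.append(component)
    return regions
```

    Only regions exceeding 50 pixels would be marked in the detection graph, which suppresses isolated false-positive pixels while keeping the region-level recognition rate high.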

  6. On-Orbit Solar Dynamics Observatory (SDO) Star Tracker Warm Pixel Analysis

    NASA Technical Reports Server (NTRS)

    Felikson, Denis; Ekinci, Matthew; Hashmall, Joseph A.; Vess, Melissa

    2011-01-01

    This paper describes the process of identification and analysis of warm pixels in two autonomous star trackers on the Solar Dynamics Observatory (SDO) mission. The mission orbit and attitude regimes are briefly described and pertinent star tracker hardware specifications are given. Warm pixels are defined and the Quality Index parameter is introduced, which can be explained qualitatively as a manifestation of a possible warm pixel event. A description of the algorithm used to identify warm pixel candidates is given. Finally, analysis of dumps of on-orbit star tracker charge-coupled device (CCD) images is presented and an operational plan going forward is discussed. SDO, launched on February 11, 2010, is operated from the NASA Goddard Space Flight Center (GSFC). SDO is in a geosynchronous orbit with a 28.5° inclination. The nominal mission attitude points the spacecraft X-axis at the Sun, with the spacecraft Z-axis roughly aligned with the Solar North Pole. The spacecraft Y-axis completes the triad. In attitude, SDO moves approximately 0.04° per hour, mostly about the spacecraft Z-axis. The SDO star trackers, manufactured by Galileo Avionica, project the images of stars in their 16.4° × 16.4° fields-of-view onto CCD detectors consisting of 512 × 512 pixels. The trackers autonomously identify the star patterns and provide an attitude estimate. Each unit is able to track up to 9 stars. Additionally, each tracker calculates a parameter called the Quality Index, which is a measure of the quality of the attitude solution. Each pixel in the CCD measures the intensity of light, and a warm pixel is defined as having a measurement consistently and significantly higher than the mean background intensity level. A warm pixel should also have lower intensity than a pixel containing a star image and will not move across the field of view as the attitude changes (as would a dim star image). 
It should be noted that the maximum error introduced in the star tracker attitude solution during suspected warm pixel corruptions is within the specified 3σ attitude error budget requirement of [35, 70, 70] arcseconds. Thus, the star trackers provided attitude accuracy within the specification for SDO. The star tracker images are intentionally defocused so each star image is detected in more than one CCD pixel. The position of each star is calculated as an intensity-weighted average of the illuminated pixels. The exact method of finding the positions is proprietary to the tracker manufacturer. When a warm pixel happens to be in the vicinity of a star, it can corrupt the calculation of the position of that particular star, thereby corrupting the estimate of the attitude.
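    A warm-pixel screen along the lines described (consistently and significantly above the mean background, dimmer than a star image, and stationary across successive CCD dumps) might look like the following sketch. The 5σ cut and the function names are assumptions, not the SDO operational algorithm:

```python
import numpy as np

def warm_pixel_candidates(dumps, n_sigma=5.0, star_level=None):
    """Flag pixels that sit well above the mean background intensity in
    every dump of the stack (n_dumps, rows, cols), optionally rejecting
    star-bright pixels."""
    per_dump = []
    for image in dumps:
        hot = image > image.mean() + n_sigma * image.std()
        if star_level is not None:
            hot &= image < star_level  # brighter than this looks like a star
        per_dump.append(hot)
    # a warm pixel is stationary, so it must be hot in every dump
    return np.logical_and.reduce(per_dump)
```

    Requiring the exceedance in every dump exploits the fact that a warm pixel stays put, whereas a dim star drifts across the detector as the attitude changes.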

  7. Pott disease in the thoracolumbar spine with marked kyphosis and progressive paraplegia necessitating posterior vertebral column resection and anterior reconstruction with a cage.

    PubMed

    Pappou, Ioannis P; Papadopoulos, Elias C; Swanson, Andrew N; Mermer, Matthew J; Fantini, Gary A; Urban, Michael K; Russell, Linda; Cammisa, Frank P; Girardi, Federico P

    2006-02-15

    Case report. To report on a patient with Pott disease, progressive neurologic deficit, and severe kyphotic deformity, who had medical treatment fail and required posterior/anterior decompression with instrumented fusion. Treatment options will be discussed. Tuberculous spondylitis is an increasingly common disease worldwide, with an estimated prevalence of 800,000 cases. Surgical treatment consisting of extensive posterior decompression/instrumented fusion and 3-level posterior vertebral column resection, followed by anterior debridement/fusion with cage reconstruction. Neurologic improvement at 6-month follow-up (Frankel B to Frankel D), with evidence of radiographic fusion. A 70-year-old patient with progressive Pott paraplegia and severe kyphotic deformity, for whom medical treatment failed is presented. A posterior vertebral column resection, multiple level posterior decompression, and instrumented fusion, followed by an anterior interbody fusion with cage was used to decompress the spinal cord, restore sagittal alignment, and debride the infection. At 6-month follow-up, the patient obtained excellent pain relief, correction of deformity, elimination of the tuberculous foci, and significant recovery of neurologic function.

  8. Improved Maturity and Ripeness Classifications of Magnifera Indica cv. Harumanis Mangoes through Sensor Fusion of an Electronic Nose and Acoustic Sensor

    PubMed Central

    Zakaria, Ammar; Shakaff, Ali Yeon Md; Masnan, Maz Jamilah; Saad, Fathinul Syahir Ahmad; Adom, Abdul Hamid; Ahmad, Mohd Noor; Jaafar, Mahmad Nor; Abdullah, Abu Hassan; Kamarudin, Latifah Munirah

    2012-01-01

    In recent years, there have been a number of reported studies on the use of non-destructive techniques to evaluate and determine mango maturity and ripeness levels. However, most of these reported works were conducted using single-modality sensing systems, either using an electronic nose, acoustics or other non-destructive measurements. This paper presents the work on the classification of mangoes (Magnifera Indica cv. Harumanis) maturity and ripeness levels using fusion of the data of an electronic nose and an acoustic sensor. Three groups of samples each from two different harvesting times (week 7 and week 8) were evaluated by the e-nose and then followed by the acoustic sensor. Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) were able to discriminate the mango harvested at week 7 and week 8 based solely on the aroma and volatile gases released from the mangoes. However, when six different groups of different maturity and ripeness levels were combined in one classification analysis, both PCA and LDA were unable to discriminate the age difference of the Harumanis mangoes. Instead of six different groups, only four were observed using the LDA, while PCA showed only two distinct groups. By applying a low level data fusion technique on the e-nose and acoustic data, the classification for maturity and ripeness levels using LDA was improved. However, no significant improvement was observed using PCA with data fusion technique. Further work using a hybrid LDA-Competitive Learning Neural Network was performed to validate the fusion technique and classify the samples. It was found that the LDA-CLNN was also improved significantly when data fusion was applied. PMID:22778629
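    Low-level data fusion here means combining the two sensors' raw feature matrices before any PCA/LDA step. A minimal sketch, with z-scoring added so that neither modality's scale dominates (the normalization choice is an assumption, not stated in the abstract):

```python
import numpy as np

def low_level_fuse(enose, acoustic):
    """Concatenate the e-nose and acoustic feature matrices sample by
    sample after standardizing each feature column."""
    def zscore(x):
        return (x - x.mean(axis=0)) / x.std(axis=0)
    return np.hstack([zscore(enose), zscore(acoustic)])
```

    The fused matrix (samples × combined features) is then fed to PCA, LDA, or the LDA-CLNN classifier exactly as a single-sensor matrix would be.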

  9. Improved maturity and ripeness classifications of Magnifera Indica cv. Harumanis mangoes through sensor fusion of an electronic nose and acoustic sensor.

    PubMed

    Zakaria, Ammar; Shakaff, Ali Yeon Md; Masnan, Maz Jamilah; Saad, Fathinul Syahir Ahmad; Adom, Abdul Hamid; Ahmad, Mohd Noor; Jaafar, Mahmad Nor; Abdullah, Abu Hassan; Kamarudin, Latifah Munirah

    2012-01-01

    In recent years, there have been a number of reported studies on the use of non-destructive techniques to evaluate and determine mango maturity and ripeness levels. However, most of these reported works were conducted using single-modality sensing systems, either using an electronic nose, acoustics or other non-destructive measurements. This paper presents the work on the classification of mangoes (Magnifera Indica cv. Harumanis) maturity and ripeness levels using fusion of the data of an electronic nose and an acoustic sensor. Three groups of samples each from two different harvesting times (week 7 and week 8) were evaluated by the e-nose and then followed by the acoustic sensor. Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) were able to discriminate the mango harvested at week 7 and week 8 based solely on the aroma and volatile gases released from the mangoes. However, when six different groups of different maturity and ripeness levels were combined in one classification analysis, both PCA and LDA were unable to discriminate the age difference of the Harumanis mangoes. Instead of six different groups, only four were observed using the LDA, while PCA showed only two distinct groups. By applying a low level data fusion technique on the e-nose and acoustic data, the classification for maturity and ripeness levels using LDA was improved. However, no significant improvement was observed using PCA with data fusion technique. Further work using a hybrid LDA-Competitive Learning Neural Network was performed to validate the fusion technique and classify the samples. It was found that the LDA-CLNN was also improved significantly when data fusion was applied.

  10. A Causal, Data-driven Approach to Modeling the Kepler Data

    NASA Astrophysics Data System (ADS)

    Wang, Dun; Hogg, David W.; Foreman-Mackey, Daniel; Schölkopf, Bernhard

    2016-09-01

    Astronomical observations are affected by several kinds of noise, each with its own causal source; there is photon noise, stochastic source variability, and residuals coming from imperfect calibration of the detector or telescope. The precision of NASA Kepler photometry for exoplanet science—the most precise photometric measurements of stars ever made—appears to be limited by unknown or untracked variations in spacecraft pointing and temperature, and unmodeled stellar variability. Here, we present the causal pixel model (CPM) for Kepler data, a data-driven model intended to capture variability but preserve transit signals. The CPM works at the pixel level so that it can capture very fine-grained information about the variation of the spacecraft. The CPM models the systematic effects in the time series of a pixel using the pixels of many other stars and the assumption that any shared signal in these causally disconnected light curves is caused by instrumental effects. In addition, we use the target star’s future and past (autoregression). By appropriately separating, for each data point, the data into training and test sets, we ensure that information about any transit will be perfectly isolated from the model. The method has four tuning parameters—the number of predictor stars or pixels, the autoregressive window size, and two L2-regularization amplitudes for model components, which we set by cross-validation. We determine values for the tuning parameters that work well for most of the stars and apply the method to a corresponding set of target stars. We find that CPM can consistently produce low-noise light curves. In this paper, we demonstrate that pixel-level de-trending is possible while retaining transit signals, and we think that methods like CPM are generally applicable and might be useful for K2, TESS, etc., where the data are not clean postage stamps like Kepler.
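    The regression at the heart of CPM can be sketched as a ridge (L2-regularized) fit of one pixel's time series onto pixel time series of other stars. This toy version omits the autoregressive terms and the per-point train/test splitting that protect the transit signal, and its names are illustrative:

```python
import numpy as np

def cpm_fit(target, predictors, l2=1.0):
    """L2-regularized linear fit of a target pixel light curve onto
    predictor pixels from causally disconnected stars; the fitted curve
    models shared (instrumental) systematics, so target - fit is the
    de-trended light curve."""
    A = np.hstack([predictors, np.ones((len(target), 1))])  # bias column
    w = np.linalg.solve(A.T @ A + l2 * np.eye(A.shape[1]), A.T @ target)
    return A @ w
```

    Because the predictors come from other stars, any signal the fit can reproduce is presumed instrumental, while a transit unique to the target survives in the residual.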

  11. 14C autoradiography with an energy-sensitive silicon pixel detector.

    PubMed

    Esposito, M; Mettivier, G; Russo, P

    2011-04-07

    The first performance tests are presented of a carbon-14 (¹⁴C) beta-particle digital autoradiography system with an energy-sensitive hybrid silicon pixel detector based on the Timepix readout circuit. Timepix was developed by the Medipix2 Collaboration and it is similar to the photon-counting Medipix2 circuit, except for an added time-based synchronization logic which allows derivation of energy information from the time-over-threshold signal. This feature permits direct energy measurements in each pixel of the detector array. Timepix is bump-bonded to a 300 µm thick silicon detector with 256 × 256 pixels of 55 µm pitch. Since an energetic beta-particle could release its kinetic energy in more than one detector pixel as it slows down in the semiconductor detector, an off-line image analysis procedure was adopted in which the single-particle cluster of hit pixels is recognized; its total energy is calculated and the position of interaction on the detector surface is attributed to the centre of the charge cluster. Measurements reported are detector sensitivity, (4.11 ± 0.03) × 10⁻³ cps mm⁻² kBq⁻¹ g, background level, (3.59 ± 0.01) × 10⁻⁵ cps mm⁻², and minimum detectable activity, 0.0077 Bq. The spatial resolution is 76.9 µm full-width at half-maximum. These figures are compared with several digital imaging detectors for ¹⁴C beta-particle digital autoradiography.
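    The off-line cluster analysis described above reduces to summing the per-pixel energies of one particle's hit cluster and placing the event at the energy-weighted centroid. A minimal sketch (the cluster-finding step that groups hit pixels is assumed already done, and the function name is illustrative):

```python
def cluster_hit(hits):
    """Total energy and energy-weighted centroid of one beta-particle
    cluster; hits is a list of (row, col, energy) for the hit pixels."""
    total = sum(e for _, _, e in hits)
    row = sum(r * e for r, _, e in hits) / total
    col = sum(c * e for _, c, e in hits) / total
    return total, (row, col)

# two-pixel cluster: 30 keV in pixel (10, 10), 10 keV in pixel (10, 11)
print(cluster_hit([(10, 10, 30.0), (10, 11, 10.0)]))  # (40.0, (10.0, 10.25))
```

    Weighting by deposited energy is what lets the reconstructed position land between pixels, giving sub-pitch spatial resolution.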

  12. Development and characterization of high-resolution neutron pixel detectors based on Timepix read-out chips

    NASA Astrophysics Data System (ADS)

    Krejci, F.; Zemlicka, J.; Jakubek, J.; Dudak, J.; Vavrik, D.; Köster, U.; Atkins, D.; Kaestner, A.; Soltes, J.; Viererbl, L.; Vacik, J.; Tomandl, I.

    2016-12-01

    Using a suitable converter isotope such as ⁶Li or ¹⁰B, semiconductor hybrid pixel detectors can be successfully adapted for position-sensitive detection of thermal and cold neutrons via conversion into energetic light ions. The adapted devices then typically provide spatial resolution at a level comparable to the pixel pitch (55 μm) and a sensitive area of a few cm². In this contribution, we describe further progress in neutron imaging performance based on the development of a large-area hybrid pixel detector providing a practically continuous neutron-sensitive area of 71 × 57 mm². Measurements characterising the detector performance at the cold neutron imaging instrument ICON at PSI and the high-flux imaging beamline Neutrograph at ILL are presented. At both facilities, high-resolution, high-contrast neutron radiography with the newly developed detector has been successfully applied to objects whose imaging was previously difficult with hybrid pixel technology (such as various composite materials, objects of cultural heritage, etc.). Further, a significant improvement in the spatial resolution of neutron radiography with a hybrid semiconductor pixel detector based on the fast read-out Timepix chip is presented. The system is equipped with a thin planar ⁶LiF converter operated effectively in event-by-event mode, enabling position-sensitive detection with spatial resolution better than 10 μm.

  13. Multiscale Medical Image Fusion in Wavelet Domain

    PubMed Central

    Khare, Ashish

    2013-01-01

    Wavelet transforms have emerged as a powerful tool in image fusion. However, the study and analysis of medical image fusion is still a challenging area of research. Therefore, in this paper, we propose a multiscale fusion of multimodal medical images in the wavelet domain. Fusion of medical images has been performed at multiple scales varying from the minimum to the maximum decomposition level using the maximum selection rule, which provides more flexibility and choice in selecting the relevant fused images. The experimental analysis of the proposed method has been performed with several sets of medical images. Fusion results have been evaluated subjectively and objectively against existing state-of-the-art fusion methods, which include several pyramid- and wavelet-transform-based fusion methods and the principal component analysis (PCA) fusion method. The comparative analysis of the fusion results has been performed with edge strength (Q), mutual information (MI), entropy (E), standard deviation (SD), blind structural similarity index metric (BSSIM), spatial frequency (SF), and average gradient (AG) metrics. The combined subjective and objective evaluations of the proposed fusion method at multiple scales demonstrated the effectiveness of the proposed approach. PMID:24453868
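    The maximum selection rule can be illustrated with a minimal one-level Haar transform (a simplification of the paper's multiscale method, implemented from scratch here so no wavelet library is assumed; even-sized images are assumed): detail coefficients are taken from whichever input has the larger magnitude, and approximation coefficients are averaged.

```python
import numpy as np

def haar2(x):
    """One-level 2D Haar analysis: approximation + 3 detail subbands."""
    a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4
    h = (x[0::2, 0::2] - x[0::2, 1::2] + x[1::2, 0::2] - x[1::2, 1::2]) / 4
    v = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 4
    d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 4
    return a, h, v, d

def ihaar2(a, h, v, d):
    """Exact inverse of haar2."""
    x = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    x[0::2, 0::2] = a + h + v + d
    x[0::2, 1::2] = a - h + v - d
    x[1::2, 0::2] = a + h - v - d
    x[1::2, 1::2] = a - h - v + d
    return x

def fuse(img1, img2):
    """Max-magnitude selection on details, mean on approximation."""
    a1, h1, v1, d1 = haar2(img1)
    a2, h2, v2, d2 = haar2(img2)
    pick = lambda c1, c2: np.where(np.abs(c1) >= np.abs(c2), c1, c2)
    return ihaar2((a1 + a2) / 2, pick(h1, h2), pick(v1, v2), pick(d1, d2))
```

    A quick sanity check: fusing an image with itself must return the image unchanged, since every subband selection is a tie.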

  14. An improved method for precise automatic co-registration of moderate and high-resolution spacecraft imagery

    NASA Technical Reports Server (NTRS)

    Bryant, Nevin A.; Logan, Thomas L.; Zobrist, Albert L.

    2006-01-01

    Improvements to the automated co-registration and change detection software package AFIDS (Automatic Fusion of Image Data System) have recently been developed for, and validated by, NGA/GIAT. The improvements involve the integration of the AFIDS ultra-fine gridding technique for horizontal displacement compensation with the recently evolved use of Rational Polynomial Functions/Coefficients (RPFs/RPCs) for indexing image raster pixel position to latitude/longitude. Mapping and orthorectification (correction for elevation effects) of satellite imagery defies exact projective solutions because the data are not obtained from a single point (like a camera), but as a continuous process along the orbital path. Standard image processing techniques can apply approximate solutions, but advances in the state of the art had to be made for precision change-detection and time-series applications where relief offsets become a controlling factor. The earlier AFIDS procedure required the availability of a camera model and knowledge of the satellite platform ephemerides. The recent design advances connect the spacecraft sensor Rational Polynomial Function, a deductively developed model, with the AFIDS ultra-fine grid, an inductively developed representation of the relationship of raster pixel position to latitude/longitude. As a result, RPCs can be updated by AFIDS, a step often necessary due to the accuracy limits of spacecraft navigation systems. An example of precision change detection from QuickBird imagery is presented.
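    A rational polynomial function maps image coordinates to ground coordinates as a ratio of two polynomials. The sketch below uses hypothetical first-order coefficients for brevity (production RPCs use 20-term cubic polynomials with offset/scale normalization of every coordinate):

```python
import numpy as np

# Illustrative rational-polynomial evaluation. The coefficient vectors
# and the truncation to terms [1, x, y, z] are assumptions for the sketch.
def rpc_eval(num, den, x, y, z):
    """Evaluate one rational polynomial num(x,y,z) / den(x,y,z)
    over the (hypothetical) term basis [1, x, y, z]."""
    terms = np.array([1.0, x, y, z])
    return np.dot(num, terms) / np.dot(den, terms)

# Hypothetical coefficients mapping normalized (sample, line, height)
# to a normalized longitude.
num = np.array([0.01, 1.00, 0.02, 0.001])
den = np.array([1.00, 0.00, 0.00, 0.0005])
lon_norm = rpc_eval(num, den, 0.5, -0.2, 0.1)
```

    Updating RPCs, as AFIDS does, amounts to re-estimating these coefficient vectors so the ratio better matches ground control.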

  15. Virtual reality 3D headset based on DMD light modulators

    NASA Astrophysics Data System (ADS)

    Bernacki, Bruce E.; Evans, Allan; Tang, Edward

    2014-06-01

    We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micromirror devices (DMDs). Current methods for presenting information for virtual reality are focused on either polarization-based modulators such as liquid crystal on silicon (LCoS) devices, or miniature LCD or LED displays, often using lenses to place the image at infinity. LCoS modulators are an area of active research and development, but reduce the amount of viewing light by 50% due to the use of polarization. Viewable LCD or LED screens may suffer low resolution, cause eye fatigue, and exhibit a "screen door" or pixelation effect due to the low pixel fill factor. Our approach leverages a mature technology based on silicon micromirrors delivering 720p-resolution displays in a small form factor with high fill factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high-definition resolution and low power consumption, and many of the design methods developed for DMD projector applications can be adapted to display use. Potential applications include night driving with natural depth perception, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics, and consumer gaming. Our design concept is described in which light from the DMD is imaged to infinity and the user's own eye lens forms a real image on the retina, resulting in a virtual retinal display.

  16. Imaging system for cardiac planar imaging using a dedicated dual-head gamma camera

    DOEpatents

    Majewski, Stanislaw [Morgantown, VA; Umeno, Marc M [Woodinville, WA

    2011-09-13

    A cardiac imaging system employing dual gamma imaging heads co-registered with one another to provide two dynamic simultaneous views of the heart sector of a patient torso. A first gamma imaging head is positioned in a first orientation with respect to the heart sector and a second gamma imaging head is positioned in a second orientation with respect to the heart sector. An adjustment arrangement is capable of adjusting the distance between the separate imaging heads and the angle between the heads. With the angle between the imaging heads set to 180 degrees and operating in a range of 140-159 keV and at a rate of up to 500 kHz, the imaging heads are co-registered to produce simultaneous dynamic recording of two stereotactic views of the heart. The use of co-registered imaging heads maximizes the uniformity of detection sensitivity of blood flow in and around the heart over the whole heart volume and minimizes radiation absorption effects. A normalization/image fusion technique is implemented pixel by corresponding pixel to increase signal for any cardiac region viewed in two images obtained from the two opposed detector heads for the same time bin. The imaging system is capable of producing enhanced first-pass studies, blood-pool studies including planar, gated, and non-gated EKG studies, planar EKG perfusion studies, and planar hot-spot imaging.
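    One plausible pixel-by-pixel combination rule (an illustration only, not the patented normalization/fusion technique) normalizes each head's image to its mean and combines corresponding pixels with a geometric mean, which reinforces structure seen in both opposed views while suppressing uncorrelated background:

```python
import numpy as np

# Sketch of pixel-by-pixel fusion of two co-registered detector views
# for the same time bin. The normalization and the geometric-mean
# combination are assumptions made for this illustration.
def fuse_heads(img_a, img_b):
    a = img_a / img_a.mean()        # normalize each head's image
    b = img_b / img_b.mean()
    return np.sqrt(a * b)           # combine corresponding pixels
```

    A region bright in both views stays bright in the fused image, while a feature present in only one view is attenuated by the square root.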

  17. Investigation of thin n-in-p planar pixel modules for the ATLAS upgrade

    NASA Astrophysics Data System (ADS)

    Savic, N.; Beyer, J.; La Rosa, A.; Macchiolo, A.; Nisius, R.

    2016-12-01

    In view of the High Luminosity upgrade of the Large Hadron Collider (HL-LHC), planned to start around 2023-2025, the ATLAS experiment will undergo a replacement of the Inner Detector. A higher luminosity will imply higher irradiation levels and hence will demand more radiation hardness, especially in the inner layers of the pixel system. The n-in-p silicon technology is a promising candidate to instrument this region, also thanks to its cost-effectiveness, because it requires only single-sided processing, in contrast to the n-in-n pixel technology presently employed in the LHC experiments. In addition, thin sensors were found to ensure radiation hardness at high fluences. An overview is given of recent results obtained with unirradiated and irradiated n-in-p planar pixel modules. The focus is on n-in-p planar pixel sensors with an active thickness of 100 and 150 μm recently produced at ADVACAM. To maximize the active area of the sensors, slim and active edges are implemented. The performance of these modules is investigated in beam tests, and results on edge efficiency are shown.

  18. An analysis of Landsat-4 Thematic Mapper geometric properties

    NASA Technical Reports Server (NTRS)

    Walker, R. E.; Zobrist, A. L.; Bryant, N. A.; Gohkman, B.; Friedman, S. Z.; Logan, T. L.

    1984-01-01

    Landsat-4 Thematic Mapper data of Washington, DC, Harrisburg, PA, and Salton Sea, CA were analyzed to determine geometric integrity and conformity of the data to known earth surface geometry. Several tests were performed. Intraband correlation and interband registration were investigated. No problems were observed in the intraband analysis, and aside from indications of slight misregistration between bands of the primary versus bands of the secondary focal planes, interband registration was well within the specified tolerances. A substantial number of ground control points were found and used to check the images' conformity to the Space Oblique Mercator (SOM) projection of their respective areas. The means of the residual offsets, which included nonprocessing-related measurement errors, were close to the one-pixel level in the two scenes examined. The Harrisburg scene residual mean was 28.38 m (0.95 pixels) with a standard deviation of 19.82 m (0.66 pixels), while the mean and standard deviation for the Salton Sea scene were 40.46 m (1.35 pixels) and 30.57 m (1.02 pixels), respectively. Overall, the data were judged to be of high geometric quality, with errors close to those targeted by the TM sensor design specifications.
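    The pixel-equivalent residuals above follow directly from the TM ground sample distance of 30 m, e.g. for the Harrisburg mean offset:

```python
# Converting a residual offset in metres to Thematic Mapper pixels.
PIXEL_SIZE_M = 30.0                 # TM reflective-band ground sample distance
mean_offset_m = 28.38               # Harrisburg scene residual mean
mean_offset_px = mean_offset_m / PIXEL_SIZE_M   # 0.946, i.e. ~0.95 pixels
```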

  19. Sub-pixel mineral mapping using EO-1 Hyperion hyperspectral data

    NASA Astrophysics Data System (ADS)

    Kumar, C.; Shetty, A.; Raval, S.; Champatiray, P. K.; Sharma, R.

    2014-11-01

    This study describes the utility of Earth Observation (EO)-1 Hyperion data for sub-pixel mineral investigation using the Mixture Tuned Target Constrained Interference Minimized Filter (MTTCIMF) algorithm in the hostile mountainous terrain of the Rajsamand district of Rajasthan, which hosts economic mineralization such as lead, zinc, and copper. The study encompasses pre-processing, data reduction, Pixel Purity Index (PPI) computation, and endmember extraction from the reflectance image for surface minerals such as illite, montmorillonite, phlogopite, dolomite, and chlorite. These endmembers were then assessed against the USGS mineral spectral library and lab spectra of rock samples collected from the field for spectral inspection. Subsequently, the MTTCIMF algorithm was implemented on the processed image to obtain a distribution map of each detected mineral. A virtual verification method, which uses image information directly to evaluate the result, was adopted to assess the classified image, confirming an overall accuracy of 68% and a kappa coefficient of 0.6. Sub-pixel mineral information with reasonable accuracy could be a valuable guide for the geological and exploration community before committing to expensive ground and/or laboratory experiments to discover economic deposits. Thus, the study demonstrates the feasibility of Hyperion data for sub-pixel mineral mapping using the MTTCIMF algorithm in a cost- and time-effective approach.
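    Sub-pixel abundance estimation can be illustrated with ordinary least-squares linear unmixing. This is a much simpler stand-in for MTTCIMF, shown only to convey the idea of decomposing one pixel's spectrum into endmember fractions; the spectra below are hypothetical:

```python
import numpy as np

# Linear-unmixing sketch: estimate per-pixel abundances of known
# endmember spectra by least squares (a simplification of target
# detection algorithms such as MTTCIMF).
def unmix(pixel_spectrum, endmembers):
    """endmembers: (bands, n_endmembers) matrix of library spectra."""
    abund, *_ = np.linalg.lstsq(endmembers, pixel_spectrum, rcond=None)
    return abund

# Hypothetical two-endmember example with a 60/40 mixed pixel.
e1 = np.array([0.2, 0.4, 0.6, 0.8])
e2 = np.array([0.9, 0.7, 0.5, 0.3])
E = np.column_stack([e1, e2])
mixed = 0.6 * e1 + 0.4 * e2
abundances = unmix(mixed, E)        # recovers [0.6, 0.4] exactly
```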

  20. Improving the quantification of contrast enhanced ultrasound using a Bayesian approach

    NASA Astrophysics Data System (ADS)

    Rizzo, Gaia; Tonietto, Matteo; Castellaro, Marco; Raffeiner, Bernd; Coran, Alessandro; Fiocco, Ugo; Stramare, Roberto; Grisan, Enrico

    2017-03-01

    Contrast Enhanced Ultrasound (CEUS) is a sensitive imaging technique for assessing tissue vascularity that can be useful in the quantification of different perfusion patterns. This can be particularly important in the early detection and staging of arthritis. In a recent study we have shown that a Gamma-variate model can accurately quantify synovial perfusion and is flexible enough to describe many heterogeneous patterns. Moreover, we have shown that the quantitative information gathered through a pixel-by-pixel analysis characterizes the perfusion more effectively. However, the low signal-to-noise ratio of the data and the nonlinearity of the model make parameter estimation difficult. Using the classical non-linear least-squares (NLLS) approach, the number of unreliable estimates (those with an asymptotic coefficient of variation greater than a user-defined threshold) is significant, thus affecting the overall description of the perfusion kinetics and of its heterogeneity. In this work we propose to solve the parameter estimation at the pixel level within a Bayesian framework using Variational Bayes (VB) with an automatic and data-driven prior initialization. When evaluating the pixels for which both VB and NLLS provided reliable estimates, we demonstrated that the parameter values provided by the two methods are well correlated (Pearson's correlation between 0.85 and 0.99). Moreover, the mean fraction of unreliable pixels drastically reduces from 54% (NLLS) to 26% (VB) without increasing the computational time (0.05 s/pixel for NLLS and 0.07 s/pixel for VB). When considering the efficiency of the algorithms as computational time per reliable estimate, VB outperforms NLLS (0.11 versus 0.25 s per reliable estimate, respectively).
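    A Gamma-variate curve C(t) = A t^alpha exp(-t/beta) (taking the onset time t0 = 0) becomes linear in its parameters after taking logarithms, so a noise-free sketch of the fit needs only linear least squares. This is a simplification to show the model shape; the paper compares full NLLS against Variational Bayes:

```python
import numpy as np

# Log-linearized Gamma-variate fit: with t0 fixed at 0,
#   log C(t) = log A + alpha*log(t) - t/beta
# is linear in (log A, alpha, 1/beta).
def fit_gamma_variate(t, c):
    X = np.column_stack([np.ones_like(t), np.log(t), -t])
    coef, *_ = np.linalg.lstsq(X, np.log(c), rcond=None)
    log_a, alpha, inv_beta = coef
    return np.exp(log_a), alpha, 1.0 / inv_beta

# Noise-free synthetic curve with A=5, alpha=2, beta=3.
t = np.linspace(0.5, 20, 50)
c = 5.0 * t**2.0 * np.exp(-t / 3.0)
A, alpha, beta = fit_gamma_variate(t, c)   # recovers (5, 2, 3)
```

    On real pixel time courses the log transform distorts the noise, which is one reason the paper's NLLS and VB approaches fit the nonlinear model directly.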
