Multimodal biometric approach for cancelable face template generation
NASA Astrophysics Data System (ADS)
Paul, Padma Polash; Gavrilova, Marina
2012-06-01
Due to the rapid growth of biometric technology, template protection has become crucial to secure the integrity of biometric security systems and prevent unauthorized access. Cancelable biometrics is emerging as one of the best solutions for securing biometric identification and verification systems. We present a novel, robust cancelable template generation algorithm that takes advantage of multimodal biometrics using feature level fusion. Feature level fusion of different facial features is applied to generate the cancelable template. The proposed algorithm is based on multi-fold random projection and a fuzzy communication scheme. One of the main difficulties in cancelable template generation is preserving the interclass variance of the features. We have found that interclass variations of the features that are lost during multi-fold random projection can be recovered by fusing different feature subsets and projecting into a new feature domain. By applying the multimodal technique at the feature level, we enhance the interclass variability and hence improve the performance of the system. We have tested the system with classifier fusion for different feature subsets and different cancelable template fusions. Experiments have shown that the cancelable template improves the performance of the biometric system compared with the original template.
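The abstract describes feature-level fusion of facial features followed by multi-fold random projection to obtain a revocable template. A minimal sketch of that idea is below, assuming a user-specific key seeds the projection matrices; the fuzzy communication step and the authors' exact projection scheme are not reproduced, and all dimensions and function names are illustrative.

```python
import numpy as np

def cancelable_template(feature_sets, key, n_folds=3, out_dim=64):
    """Illustrative sketch (not the authors' exact algorithm): fuse several
    facial feature vectors at the feature level and apply key-dependent
    multi-fold random projections to produce a revocable template."""
    fused = np.concatenate(feature_sets)          # feature-level fusion
    rng = np.random.default_rng(key)              # user/application-specific key
    template = fused
    for _ in range(n_folds):                      # repeated ("multi-fold") projection
        P = rng.standard_normal((out_dim, template.size)) / np.sqrt(out_dim)
        template = P @ template
    return template

# Changing `key` re-issues a new template from the same biometric features.
left_eye = np.random.rand(128)
mouth = np.random.rand(128)
t1 = cancelable_template([left_eye, mouth], key=42)
```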
Adaptive fusion of infrared and visible images in dynamic scene
NASA Astrophysics Data System (ADS)
Yang, Guang; Yin, Yafeng; Man, Hong; Desai, Sachi
2011-11-01
Multiple-modality sensor fusion has been widely employed in various surveillance and military applications. A variety of image fusion techniques, including PCA, wavelet, curvelet and HSV, have been proposed in recent years to improve human visual perception for object detection. One of the main challenges for visible and infrared image fusion is to automatically determine an optimal fusion strategy for different input scenes at an acceptable computational cost. In this paper, we propose a fast and adaptive feature-selection-based image fusion method to obtain a high-contrast image from visible and infrared sensors for target detection. First, fuzzy c-means clustering is applied to the infrared image to highlight possible hotspot regions, which are considered potential target locations. After that, the region surrounding the target area is segmented as the background region. Then image fusion is locally applied to the selected target and background regions by computing different linear combinations of color components from the registered visible and infrared images. After obtaining the different fused images, histogram distributions are computed on these local fusion images as the fusion feature set. A variance ratio based on Linear Discriminant Analysis (LDA) is employed to sort the feature set, and the most discriminative feature is selected for the whole-image fusion. As the feature selection is performed over time, the process dynamically determines the most suitable feature for image fusion in different scenes. Experiments are conducted on the OSU Color-Thermal database and the TNO Human Factors dataset. The fusion results indicate that our proposed method achieves competitive performance compared with other fusion algorithms at a relatively low computational cost.
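The scene-adaptive selection step ranks candidate local fusions with an LDA-style variance ratio between the hotspot (target) and background regions. A minimal sketch of that criterion is given below, assuming the candidate fused images and region masks are already available; function names and the exact normalization are illustrative, not the authors' implementation.

```python
import numpy as np

def variance_ratio(target_vals, background_vals):
    """LDA-style separability: between-class variance over within-class variance."""
    m_t, m_b = target_vals.mean(), background_vals.mean()
    m = np.concatenate([target_vals, background_vals]).mean()
    between = (m_t - m) ** 2 + (m_b - m) ** 2
    within = target_vals.var() + background_vals.var() + 1e-12
    return between / within

def select_fusion(candidates, target_mask, background_mask):
    """Pick the candidate fused gray-level image (one per linear IR/visible
    combination) that best separates target from background pixels."""
    scores = [variance_ratio(img[target_mask], img[background_mask])
              for img in candidates]
    return int(np.argmax(scores))
```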
Zhang, Xin; Cui, Jintian; Wang, Weisheng; Lin, Chao
2017-01-01
To address the problem of image texture feature extraction, a direction measure statistic based on the directionality of image texture is constructed, and a new method of texture feature extraction, based on the fusion of the direction measure and the gray level co-occurrence matrix (GLCM), is proposed in this paper. This method applies the GLCM to extract the texture feature values of an image and integrates the weight factor introduced by the direction measure to obtain the final texture features of the image. A set of classification experiments on high-resolution remote sensing images was performed using a support vector machine (SVM) classifier with the direction measure and gray level co-occurrence matrix fusion algorithm. Both qualitative and quantitative approaches were applied to assess the classification results. The experimental results demonstrated that texture feature extraction based on the fusion algorithm achieved better image recognition, and the classification accuracy based on this method was significantly improved. PMID:28640181
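A rough sketch of the GLCM part of the pipeline follows, using scikit-image's graycomatrix/graycoprops; the direction-measure weight factor is represented by a generic `weights` vector, since the paper's exact statistic is not reproduced here. The input patch is assumed to be an 8-bit gray-level image.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def directional_glcm_features(patch, weights=None):
    """Hedged sketch: GLCM texture statistics per direction, combined with a
    direction-dependent weight factor. `patch` is assumed to be a uint8 image."""
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    glcm = graycomatrix(patch, distances=[1], angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ['contrast', 'correlation', 'energy', 'homogeneity']
    # one row of texture statistics per direction
    per_dir = np.array([[graycoprops(glcm, p)[0, a] for p in props]
                        for a in range(len(angles))])
    if weights is None:
        weights = np.ones(len(angles)) / len(angles)   # stand-in for the direction measure
    return weights @ per_dir                           # weighted fusion across directions
```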
Deng, Changjian; Lv, Kun; Shi, Debo; Yang, Bo; Yu, Song; He, Zhiyi; Yan, Jia
2018-06-12
In this paper, a novel feature selection and fusion framework is proposed to enhance the discrimination ability of gas sensor arrays for odor identification. Firstly, we put forward an efficient feature selection method based on separability and dissimilarity to determine the feature selection order for each type of feature when increasing the dimension of the selected feature subsets. Secondly, the k-nearest neighbor (KNN) classifier is applied to determine the dimensions of the optimal feature subsets for the different types of features. Finally, in the process of establishing the feature fusion, we propose a classification-dominance feature fusion strategy that builds on an effective basic feature. Experimental results on two datasets show that the recognition rates on Database I and Database II reach 97.5% and 80.11%, respectively, when k = 1 for the KNN classifier and the distance metric is the correlation distance (COR), which demonstrates the superiority of the proposed feature selection and fusion framework in representing signal features. The novel feature selection method proposed in this paper can effectively select feature subsets that are conducive to classification, while the feature fusion framework can fuse various features that describe different characteristics of the sensor signals, enhancing the discrimination ability of the gas sensors and, to a certain extent, suppressing the drift effect.
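The dimension-selection step can be illustrated as below: given features already ordered by a separability criterion, a cross-validated 1-NN classifier picks the subset size. This is a hedged sketch rather than the authors' exact separability/dissimilarity measure; the starting dimension of 2 is chosen only because the correlation distance is undefined for one-dimensional vectors.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def optimal_subset_size(X_ordered, y, max_dim=None):
    """Sketch: X_ordered has columns already sorted by a separability criterion.
    Return the dimension whose 1-NN (correlation distance) CV accuracy is highest."""
    max_dim = max_dim or X_ordered.shape[1]
    dims, scores = [], []
    for d in range(2, max_dim + 1):          # start at 2: correlation needs >= 2 dims
        knn = KNeighborsClassifier(n_neighbors=1, metric='correlation',
                                   algorithm='brute')
        scores.append(cross_val_score(knn, X_ordered[:, :d], y, cv=5).mean())
        dims.append(d)
    return dims[int(np.argmax(scores))], scores
```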
A method based on IHS cylindrical transform model for quality assessment of image fusion
NASA Astrophysics Data System (ADS)
Zhu, Xiaokun; Jia, Yonghong
2005-10-01
Image fusion techniques have been widely applied to remote sensing image analysis and processing, and methods for quality assessment of image fusion in remote sensing have also become a research issue at home and abroad. Traditional assessment methods combine the calculation of quantitative indexes with visual interpretation to compare fused images quantitatively and qualitatively. However, the existing assessment methods have two defects: on one hand, most indexes lack the theoretical support needed to compare different fusion methods. On the other hand, there is no uniform preference among most of the quantitative assessment indexes when they are applied to estimate fusion effects. That is, the spatial resolution and spectral features cannot be analyzed synchronously by these indexes, and there is no general method that unifies spatial and spectral feature assessment. Therefore, in this paper, on the basis of an approximate general model of four traditional fusion methods, including Intensity Hue Saturation (IHS) triangle transform fusion, High Pass Filter (HPF) fusion, Principal Component Analysis (PCA) fusion, and Wavelet Transform (WT) fusion, a correlation coefficient assessment method based on the IHS cylindrical transform is proposed. Experiments show that this method can not only evaluate spatial and spectral features under a uniform preference, but can also compare fusion image sources with fused images and reveal differences among fusion methods. Compared with traditional assessment methods, the new method is more intuitive and accords better with subjective evaluation.
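A simplified illustration of a correlation-coefficient check in an IHS-like cylindrical space is given below: spatial quality is scored by correlating the fused intensity with the panchromatic band, and spectral quality by correlating chromatic components with the original multispectral image. The chromatic axes used here are the standard triangular/cylindrical IHS ones, and the whole routine is an assumption, not the paper's exact derivation.

```python
import numpy as np

def corrcoef(a, b):
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def ihs_quality(fused_rgb, ms_rgb, pan):
    """Sketch of a correlation-based fusion check in a simplified cylindrical
    IHS space: spatial quality from the intensity component vs. the pan band,
    spectral quality from the chromatic components vs. the original MS image."""
    def to_ihs(rgb):
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        i = (r + g + b) / 3.0
        v1 = (2 * b - r - g) / np.sqrt(6)     # chromatic axes of the
        v2 = (r - g) / np.sqrt(2)             # cylindrical IHS model
        return i, v1, v2
    i_f, v1_f, v2_f = to_ihs(fused_rgb)
    _, v1_m, v2_m = to_ihs(ms_rgb)
    spatial = corrcoef(i_f, pan)
    spectral = 0.5 * (corrcoef(v1_f, v1_m) + corrcoef(v2_f, v2_m))
    return spatial, spectral
```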
Integrated Multi-Aperture Sensor and Navigation Fusion
2010-02-01
[Report excerpt] Multi-aperture/INS data fusion is formulated in the feature domain using the complementary Kalman filter methodology [3]; Kalman filter vision/inertial measurement observables are formulated for other images without the need to know (or measure) their feature ranges. Cited reference: R. G. Brown and P. Y. C. Hwang, Introduction to Random Signals and Applied Kalman Filtering, Third Edition.
Feature and Score Fusion Based Multiple Classifier Selection for Iris Recognition
Islam, Md. Rabiul
2014-01-01
The aim of this work is to propose a new feature and score fusion based iris recognition approach where voting method on Multiple Classifier Selection technique has been applied. Four Discrete Hidden Markov Model classifiers output, that is, left iris based unimodal system, right iris based unimodal system, left-right iris feature fusion based multimodal system, and left-right iris likelihood ratio score fusion based multimodal system, is combined using voting method to achieve the final recognition result. CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, recognition accuracy of the proposed system has been compared with existing N hamming distance score fusion approach proposed by Ma et al., log-likelihood ratio score fusion approach proposed by Schmid et al., and single level feature fusion approach proposed by Hollingsworth et al. PMID:25114676
Facial Expression Recognition with Fusion Features Extracted from Salient Facial Areas.
Liu, Yanpeng; Li, Yibin; Ma, Xin; Song, Rui
2017-03-29
In the pattern recognition domain, deep architectures are currently widely used and have achieved fine results. However, these deep architectures make particular demands, especially in terms of their requirement for big datasets and GPUs. Aiming to gain better results without deep networks, we propose a simplified algorithm framework using fusion features extracted from the salient areas of faces. Furthermore, the proposed algorithm has achieved better results than some deep architectures. For extracting more effective features, this paper first defines the salient areas on the faces. The salient areas at the same location in different faces are normalized to the same size; therefore, more similar features can be extracted from different subjects. LBP and HOG features are extracted from the salient areas, the dimensions of the fusion features are reduced by Principal Component Analysis (PCA), and several classifiers are applied to classify the six basic expressions at once. This paper proposes a salient-area definition method that uses peak expression frames compared with neutral faces. It also proposes and applies the idea of normalizing the salient areas to align the specific areas which express the different expressions, so that the salient areas found in different subjects are the same size. In addition, the gamma correction method is applied to LBP features for the first time in our algorithm framework, which improves our recognition rates significantly. By applying this algorithm framework, our research has achieved state-of-the-art performance on the CK+ and JAFFE databases.
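The feature-extraction stage can be sketched as follows with scikit-image and scikit-learn: LBP and HOG descriptors from one normalized salient area are concatenated, then reduced by PCA before classification. As an assumption, gamma correction is applied to the gray-level patch before computing LBP (the abstract applies it to the LBP features); all parameters are placeholders.

```python
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

def salient_area_features(patch, gamma=0.5):
    """Sketch: LBP histogram (from a gamma-corrected patch, an assumption)
    concatenated with HOG, for one normalized salient facial area (uint8 input)."""
    corrected = (255 * (patch / 255.0) ** gamma).astype(np.uint8)   # gamma correction
    lbp = local_binary_pattern(corrected, P=8, R=1, method='uniform')
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    hog_vec = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))
    return np.concatenate([lbp_hist, hog_vec])                      # feature-level fusion

# PCA then a classifier, as in the framework (parameters are placeholders)
clf = make_pipeline(PCA(n_components=0.95), SVC(kernel='rbf'))
```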
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Shiju; Qian, Wei; Guan, Yubao
2016-06-15
Purpose: This study aims to investigate the potential to improve lung cancer recurrence risk prediction performance for stage I NSCLC patients by integrating oversampling, feature selection, and score fusion techniques and to develop an optimal prediction model. Methods: A dataset involving 94 early stage lung cancer patients was retrospectively assembled, which includes CT images, nine clinical and biological (CB) markers, and the outcome of 3-yr disease-free survival (DFS) after surgery. Among the 94 patients, 74 remained DFS and 20 had cancer recurrence. Applying a computer-aided detection scheme, tumors were segmented from the CT images and 35 quantitative image (QI) features were initially computed. Two normalized Gaussian radial basis function network (RBFN) based classifiers were built based on QI features and CB markers separately. To improve prediction performance, the authors applied a synthetic minority oversampling technique (SMOTE) and a BestFirst based feature selection method to optimize the classifiers, and also tested fusion methods to combine QI and CB based prediction results. Results: Using a leave-one-case-out cross-validation method, the computed areas under the receiver operating characteristic curve (AUCs) were 0.716 ± 0.071 and 0.642 ± 0.061 when using the QI and CB based classifiers, respectively. By fusing the scores generated by the two classifiers, the AUC significantly increased to 0.859 ± 0.052 (p < 0.05) with an overall prediction accuracy of 89.4%. Conclusions: This study demonstrated the feasibility of improving prediction performance by integrating SMOTE, feature selection, and score fusion techniques. Combining QI features and CB markers and performing SMOTE prior to feature selection in classifier training enabled the RBFN based classifier to yield improved prediction accuracy.
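One branch of the described pipeline might look like the sketch below: SMOTE balances the classes, a greedy feature search replaces the BestFirst selection, and logistic regression stands in for the RBF-network classifier (both substitutions are assumptions). Score-level fusion is then a weighted combination of the two branch probabilities.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

def train_branch(X, y):
    """One branch (QI features or CB markers): oversample the minority class,
    run a greedy forward feature search, then fit a probabilistic classifier.
    Logistic regression is a stand-in for the paper's RBFN classifier."""
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
    base = LogisticRegression(max_iter=1000)
    selector = SequentialFeatureSelector(base, n_features_to_select='auto',
                                         direction='forward')
    selector.fit(X_bal, y_bal)
    clf = base.fit(selector.transform(X_bal), y_bal)
    return selector, clf

def fuse_scores(p_qi, p_cb, w=0.5):
    """Simple weighted score-level fusion of the two branch probabilities."""
    return w * p_qi + (1 - w) * p_cb
```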
Deep learning decision fusion for the classification of urban remote sensing data
NASA Astrophysics Data System (ADS)
Abdi, Ghasem; Samadzadegan, Farhad; Reinartz, Peter
2018-01-01
Multisensor data fusion is one of the most common and popular topics in remote sensing data classification, as it provides a robust and complete description of the objects of interest. Furthermore, deep feature extraction has recently attracted significant interest and has become a hot research topic in the geoscience and remote sensing research community. A deep learning decision fusion approach is presented to perform multisensor urban remote sensing data classification. After deep features are extracted by utilizing joint spectral-spatial information, a soft-decision classifier is applied to train high-level feature representations and to fine-tune the deep learning framework. Next, decision-level fusion classifies objects of interest by the joint use of sensors. Finally, a context-aware object-based postprocessing is used to enhance the classification results. A series of comparative experiments are conducted on the widely used dataset of the 2014 IEEE GRSS data fusion contest. The obtained results illustrate the considerable advantages of the proposed deep learning decision fusion over traditional classifiers.
NASA Astrophysics Data System (ADS)
Ebrahimi Orimi, H.; Esmaeili, M.; Refahi Oskouei, A.; Mirhadizadehd, S. A.; Tse, P. W.
2017-10-01
Condition monitoring of rotary devices such as helical gears is an issue of great significance in industrial projects. This paper introduces a feature extraction method for gear fault diagnosis using the wavelet packet transform, owing to its higher frequency resolution. In this investigation, the mother wavelet Daubechies 10 (Db-10) was applied to calculate the coefficient entropy of each frequency band at the 5th level (32 frequency bands) as features. The peak value of the signal entropies was selected as the applicable feature in order to improve frequency band differentiation and reduce the dimension of the feature vectors. Feature extraction is followed by a fusion network in which four differently structured multi-layer perceptron networks are trained to classify the recorded signals (healthy/faulty). The robustness of the fusion network outputs is greater compared with the individual perceptron networks. The results provided by the fusion network indicate classification accuracies of 98.88% and 97.95% for the healthy and faulty classes, respectively.
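A minimal sketch of the wavelet-packet entropy features follows, using PyWavelets: the signal is decomposed to level 5 with Db-10 and the Shannon entropy of each of the 32 frequency-band coefficient sets is taken as a feature. The normalization of coefficients into a distribution is one plausible choice, not necessarily the authors'.

```python
import numpy as np
import pywt

def wpd_entropy_features(signal, wavelet='db10', level=5):
    """Sketch: Shannon entropy of the coefficients in each of the 2**level
    wavelet-packet frequency bands (32 bands at level 5) as fault features."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    feats = []
    for node in wp.get_level(level, order='freq'):
        c = np.abs(node.data)
        p = c / (c.sum() + 1e-12)              # normalize coefficients to a distribution
        feats.append(-np.sum(p * np.log2(p + 1e-12)))
    return np.array(feats)                     # 32-dimensional feature vector
```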
Facial expression recognition under partial occlusion based on fusion of global and local features
NASA Astrophysics Data System (ADS)
Wang, Xiaohua; Xia, Chen; Hu, Min; Ren, Fuji
2018-04-01
Facial expression recognition under partial occlusion is a challenging research topic. This paper proposes a novel framework for facial expression recognition under occlusion by fusing global and local features. In the global aspect, information entropy is first employed to locate the occluded region. Second, the Principal Component Analysis (PCA) method is adopted to reconstruct the occluded region of the image. After that, a replacement strategy is applied to reconstruct the image by replacing the occluded region with the corresponding region of the best matched image in the training set, and the Pyramid Weber Local Descriptor (PWLD) feature is then extracted. At last, the outputs of the SVM are fitted to the probabilities of the target class by using a sigmoid function. For the local aspect, an overlapping block-based method is adopted to extract WLD features, and each block is weighted adaptively by information entropy; Chi-square distance and similar-block summation methods are then applied to obtain the probabilities of the emotion each block belongs to. Finally, fusion at the decision level is employed for the data fusion of the global and local features based on the Dempster-Shafer theory of evidence. Experimental results on the Cohn-Kanade and JAFFE databases demonstrate the effectiveness and fault tolerance of this method.
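The decision-level combination relies on Dempster-Shafer evidence theory. The sketch below implements Dempster's rule for the simplified case where each branch assigns mass only to singleton classes (e.g. global-branch and local-branch expression probabilities); the full framework with compound hypotheses is not reproduced.

```python
import numpy as np

def dempster_combine(m1, m2):
    """Dempster's rule for two simple mass functions defined on singleton
    classes only (a simplification of full Dempster-Shafer evidence theory)."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    joint = np.outer(m1, m2)
    agreement = np.diag(joint).sum()           # mass assigned to identical classes
    if agreement <= 0.0:
        raise ValueError("total conflict between sources")
    return np.diag(joint) / agreement          # normalized combined masses

# e.g. global-branch and local-branch probabilities over three expressions
fused = dempster_combine([0.6, 0.2, 0.2], [0.5, 0.3, 0.2])
```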
NASA Astrophysics Data System (ADS)
Su, Zuqiang; Xiao, Hong; Zhang, Yi; Tang, Baoping; Jiang, Yonghua
2017-04-01
Extraction of sensitive features is a challenging but key task in data-driven machinery running state identification. Aimed at solving this problem, a method for machinery running state identification that applies discriminant semi-supervised local tangent space alignment (DSS-LTSA) for feature fusion and extraction is proposed. Firstly, in order to extract more distinct features, the vibration signals are decomposed by wavelet packet decomposition (WPD), and a mixed-domain feature set consisting of statistical features, autoregressive (AR) model coefficients, instantaneous amplitude Shannon entropy and the WPD energy spectrum is extracted to comprehensively characterize the properties of the machinery running states. Then, the mixed-domain feature set is input into DSS-LTSA for feature fusion and extraction to eliminate redundant information and interference noise. The proposed DSS-LTSA can extract intrinsic structure information of both labeled and unlabeled state samples, and as a result the over-fitting problem of supervised manifold learning and the blindness problem of unsupervised manifold learning are overcome. Simultaneously, class discrimination information is integrated within the dimension reduction process in a semi-supervised manner to improve the sensitivity of the extracted fusion features. Lastly, the extracted fusion features are input into a pattern recognition algorithm to achieve running state identification. The effectiveness of the proposed method is verified by a running state identification case in a gearbox, and the results confirm the improved accuracy of the running state identification.
NASA Astrophysics Data System (ADS)
Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Lee, Shin-Jye; He, Kangjian
2018-01-01
In order to improve the performance of infrared and visual image fusion and provide better visual effects, this paper proposes a hybrid fusion method for infrared and visual images based on the combination of discrete stationary wavelet transform (DSWT), discrete cosine transform (DCT) and local spatial frequency (LSF). The proposed method has three key processing steps. Firstly, DSWT is employed to decompose the important features of the source image into a series of sub-images with different levels and spatial frequencies. Secondly, DCT is used to separate the significant details of the sub-images according to the energy of the different frequencies. Thirdly, LSF is applied to enhance the regional features of the DCT coefficients, which is helpful for image feature extraction. Some frequently used image fusion methods and evaluation metrics are employed to evaluate the validity of the proposed method. The experiments indicate that the proposed method achieves a good fusion effect and is more efficient than other conventional image fusion methods.
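The local spatial frequency (LSF) measure can be sketched as below, together with a simple choose-max rule between corresponding blocks of the two sources. This is a stand-in illustration of the activity measure only; the complete DSWT/DCT decomposition and reconstruction are assumed to happen around it.

```python
import numpy as np

def local_spatial_frequency(block):
    """Local spatial frequency of an image block: root-mean-square of the
    row-wise and column-wise first differences (standard SF definition)."""
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

def fuse_blocks(block_a, block_b):
    """Choose-max rule on LSF: keep the block with higher local activity
    (a simplified stand-in for the paper's DSWT/DCT/LSF pipeline)."""
    if local_spatial_frequency(block_a) >= local_spatial_frequency(block_b):
        return block_a
    return block_b
```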
Feature-Based Methods for Landmine Detection with Ground Penetrating Radar
2012-09-27
[Report excerpt] Dempster-Shafer (DS) fusion was applied to handwriting recognition [67] and decision making [68], has been applied to landmine detection [80] and, in a different way, to handwriting recognition [46] and the fusion of social choices (voting). Cited reference: applications to handwriting recognition, IEEE Transactions on Systems, Man and Cybernetics 22 (3) (1992) 418-435; [68] M. Beynon, D. Cosker, A.D. Marshall.
NASA Astrophysics Data System (ADS)
Chen, Chen; Hao, Huiyan; Jafari, Roozbeh; Kehtarnavaz, Nasser
2017-05-01
This paper presents an extension to our previously developed fusion framework [10] involving a depth camera and an inertial sensor in order to improve its view invariance aspect for real-time human action recognition applications. A computationally efficient view estimation based on skeleton joints is considered in order to select the most relevant depth training data when recognizing test samples. Two collaborative representation classifiers, one for depth features and one for inertial features, are appropriately weighted to generate a decision making probability. The experimental results applied to a multi-view human action dataset show that this weighted extension improves the recognition performance by about 5% over equally weighted fusion deployed in our previous fusion framework.
Biometric identification based on feature fusion with PCA and SVM
NASA Astrophysics Data System (ADS)
Lefkovits, László; Lefkovits, Szidónia; Emerich, Simina
2018-04-01
Biometric identification is gaining ground compared to traditional identification methods. Many biometric measurements may be used for secure human identification. The most reliable among them is the iris pattern because of its uniqueness, stability, unforgeability and inalterability over time. The approach presented in this paper is a fusion of different feature descriptor methods such as HOG, LIOP, LBP, used for extracting iris texture information. The classifiers obtained through the SVM and PCA methods demonstrate the effectiveness of our system applied to one and both irises. The performances measured are highly accurate and foreshadow a fusion system with a rate of identification approaching 100% on the UPOL database.
A Hierarchical Convolutional Neural Network for vesicle fusion event classification.
Li, Haohan; Mao, Yunxiang; Yin, Zhaozheng; Xu, Yingke
2017-09-01
Quantitative analysis of vesicle exocytosis and classification of different modes of vesicle fusion from the fluorescence microscopy are of primary importance for biomedical researches. In this paper, we propose a novel Hierarchical Convolutional Neural Network (HCNN) method to automatically identify vesicle fusion events in time-lapse Total Internal Reflection Fluorescence Microscopy (TIRFM) image sequences. Firstly, a detection and tracking method is developed to extract image patch sequences containing potential fusion events. Then, a Gaussian Mixture Model (GMM) is applied on each image patch of the patch sequence with outliers rejected for robust Gaussian fitting. By utilizing the high-level time-series intensity change features introduced by GMM and the visual appearance features embedded in some key moments of the fusion process, the proposed HCNN architecture is able to classify each candidate patch sequence into three classes: full fusion event, partial fusion event and non-fusion event. Finally, we validate the performance of our method on 9 challenging datasets that have been annotated by cell biologists, and our method achieves better performances when comparing with three previous methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Liu, Chunhui; Zhang, Duona; Zhao, Xintao
2018-03-01
Saliency detection in synthetic aperture radar (SAR) images is a difficult problem. This paper proposes a multitask saliency detection (MSD) model for the saliency detection task in SAR images. We extract four features of the SAR image, namely the intensity, orientation, uniqueness, and global contrast, as the input of the MSD model. The saliency map is generated by multitask sparsity pursuit, which integrates the multiple features collaboratively. Detection of features at different scales is also taken into consideration. Subjective and objective evaluation of the MSD model verifies its effectiveness. Based on the saliency maps obtained by the MSD model, we apply the saliency map of the SAR image to SAR and color optical image fusion. The experimental results on real data show that the saliency map obtained by the MSD model helps to improve the fusion effect, and the salient areas in the SAR image can be highlighted in the fusion results.
Multiview fusion for activity recognition using deep neural networks
NASA Astrophysics Data System (ADS)
Kavi, Rahul; Kulathumani, Vinod; Rohit, Fnu; Kecojevic, Vlad
2016-07-01
Convolutional neural networks (ConvNets) coupled with long short term memory (LSTM) networks have been recently shown to be effective for video classification as they combine the automatic feature extraction capabilities of a neural network with additional memory in the temporal domain. This paper shows how multiview fusion can be applied to such a ConvNet LSTM architecture. Two different fusion techniques are presented. The system is first evaluated in the context of a driver activity recognition system using data collected in a multicamera driving simulator. These results show significant improvement in accuracy with multiview fusion and also show that deep learning performs better than a traditional approach using spatiotemporal features even without requiring any background subtraction. The system is also validated on another publicly available multiview action recognition dataset that has 12 action classes and 8 camera views.
NASA Astrophysics Data System (ADS)
Zargari Khuzani, Abolfazl; Danala, Gopichandh; Heidari, Morteza; Du, Yue; Mashhadi, Najmeh; Qiu, Yuchen; Zheng, Bin
2018-02-01
Higher recall rates are a major challenge in mammography screening. Thus, developing a computer-aided diagnosis (CAD) scheme to classify between malignant and benign breast lesions can play an important role in improving the efficacy of mammography screening. The objective of this study is to develop and test a unique image feature fusion framework to improve performance in classifying suspicious mass-like breast lesions depicted on mammograms. The image dataset consists of 302 suspicious masses detected on both craniocaudal and mediolateral-oblique view images. Among them, 151 were malignant and 151 were benign. The study consists of the following three image processing and feature analysis steps. First, an adaptive region growing segmentation algorithm was used to automatically segment mass regions. Second, a set of 70 image features related to the spatial and frequency characteristics of the mass regions was initially computed. Third, a generalized linear regression model (GLM) based machine learning classifier combined with a bat optimization algorithm was used to optimally fuse the selected image features based on a predefined assessment performance index. The area under the ROC curve (AUC) was used as the performance assessment index. Applying the CAD scheme to the testing dataset, the AUC was 0.75+/-0.04, which was significantly higher than using a single best feature (AUC=0.69+/-0.05) or the classifier with equally weighted features (AUC=0.73+/-0.05). This study demonstrated that, compared to the conventional equal-weighted approach, using an unequal-weighted feature fusion approach has the potential to significantly improve accuracy in classifying between malignant and benign breast masses.
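The unequal-weighted fusion idea can be illustrated by learning feature weights that maximize the AUC of a linear fusion score. In the sketch below, SciPy's differential evolution stands in for the bat optimization algorithm and a plain linear score stands in for the GLM classifier; both substitutions are assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.metrics import roc_auc_score

def optimize_feature_weights(X, y):
    """Sketch: learn unequal feature weights that maximize the AUC of a linear
    fusion score. X is assumed normalized; y holds binary labels (0 benign,
    1 malignant). Differential evolution replaces the bat algorithm here."""
    def neg_auc(w):
        return -roc_auc_score(y, X @ w)
    bounds = [(0.0, 1.0)] * X.shape[1]
    res = differential_evolution(neg_auc, bounds, seed=0, maxiter=50)
    return res.x

# Equal-weighted baseline for comparison: score = X.mean(axis=1)
```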
Fourier domain image fusion for differential X-ray phase-contrast breast imaging.
Coello, Eduardo; Sperl, Jonathan I; Bequé, Dirk; Benz, Tobias; Scherer, Kai; Herzen, Julia; Sztrókay-Gaul, Anikó; Hellerhoff, Karin; Pfeiffer, Franz; Cozzini, Cristina; Grandl, Susanne
2017-04-01
X-Ray Phase-Contrast (XPC) imaging is a novel technology with great potential for applications in clinical practice, with breast imaging being of special interest. This work introduces an intuitive methodology to combine and visualize relevant diagnostic features, present in the X-ray attenuation, phase shift and scattering information retrieved in XPC imaging, using a Fourier domain fusion algorithm. The method makes it possible to present complementary information from the three acquired signals in one single image, minimizing the noise component and maintaining visual similarity to a conventional X-ray image, but with noticeable enhancement in diagnostic features, details and resolution. Radiologists experienced in mammography applied the image fusion method to XPC measurements of mastectomy samples and evaluated the feature content of each input and the fused image. This assessment validated that the combination of all the relevant diagnostic features contained in the XPC images was present in the fused image as well. Copyright © 2017 Elsevier B.V. All rights reserved.
Zhang, Xinzheng; Rad, Ahmad B; Wong, Yiu-Kwong
2012-01-01
This paper presents a sensor fusion strategy applied to Simultaneous Localization and Mapping (SLAM) in dynamic environments. The designed approach consists of two features: (i) the first is a fusion module which synthesizes line segments obtained from the laser rangefinder and line features extracted from the monocular camera. This policy eliminates any pseudo-segments that appear from any momentary pause of dynamic objects in the laser data. (ii) The second characteristic is a modified multi-sensor point estimation fusion SLAM (MPEF-SLAM) that incorporates two individual Extended Kalman Filter (EKF) based SLAM algorithms: monocular and laser SLAM. The localization error of the fused SLAM is reduced compared with that of the individual SLAM algorithms. Additionally, a new data association technique based on the homography transformation matrix is developed for monocular SLAM. This data association method reduces redundant computation. The experimental results validate the performance of the proposed sensor fusion and data association method.
Goshvarpour, Ateke; Goshvarpour, Atefeh
2018-04-30
Heart rate variability (HRV) analysis has become a widely used tool for monitoring pathological and psychological states in medical applications. In a typical classification problem, information fusion is a process whereby the effective combination of the data can achieve a more accurate system. The purpose of this article was to provide an accurate algorithm for classifying HRV signals in various psychological states. Therefore, a novel feature level fusion approach was proposed. First, using information theory, two similarity indicators of the signal were extracted, including correntropy and Cauchy-Schwarz divergence. Applying a probabilistic neural network (PNN) and k-nearest neighbors (kNN), the performance of each index in the classification of meditators' and non-meditators' HRV signals was appraised. Then, three fusion rules, including division, product, and weighted sum rules, were used to combine the information of both similarity measures. For the first time, we propose an algorithm to define the weights of each feature based on statistical p-values. The performance of HRV classification using the combined features was compared with that of the non-combined features. Overall, an accuracy of 100% was obtained for discriminating all states. The results showed the strong ability and proficiency of the division and weighted sum rules in improving the classifier accuracies.
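The p-value-based weighting can be sketched as below: a two-sample test per feature, with larger weights assigned to more significant features, followed by the weighted-sum rule. The mapping from p-value to weight shown here is one plausible choice and is not claimed to be the authors' exact rule.

```python
import numpy as np
from scipy.stats import ttest_ind

def pvalue_weights(feature_matrix, labels):
    """Sketch of p-value-based weighting: features whose class-wise difference
    is more significant (smaller p) receive larger weights."""
    pvals = np.array([ttest_ind(feature_matrix[labels == 0, j],
                                feature_matrix[labels == 1, j]).pvalue
                      for j in range(feature_matrix.shape[1])])
    w = 1.0 - pvals                     # one plausible mapping from p-value to weight
    return w / w.sum()

def weighted_sum_fusion(features, weights):
    """Weighted-sum rule combining the similarity features of one sample."""
    return features @ weights
```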
Men, Hong; Shi, Yan; Fu, Songlin; Jiao, Yanan; Qiao, Yu; Liu, Jingjing
2017-01-01
Multi-sensor data fusion can provide more comprehensive and more accurate analysis results. However, it also brings some redundant information, which is an important issue with respect to finding a feature-mining method for intuitive and efficient analysis. This paper demonstrates a feature-mining method based on variable accumulation to find the best expression form and variables’ behavior affecting beer flavor. First, e-tongue and e-nose were used to gather the taste and olfactory information of beer, respectively. Second, principal component analysis (PCA), genetic algorithm-partial least squares (GA-PLS), and variable importance of projection (VIP) scores were applied to select feature variables of the original fusion set. Finally, the classification models based on support vector machine (SVM), random forests (RF), and extreme learning machine (ELM) were established to evaluate the efficiency of the feature-mining method. The result shows that the feature-mining method based on variable accumulation obtains the main feature affecting beer flavor information, and the best classification performance for the SVM, RF, and ELM models with 96.67%, 94.44%, and 98.33% prediction accuracy, respectively. PMID:28753917
NASA Astrophysics Data System (ADS)
Majeed, Raad H.; Oudah, Osamah N.
2018-05-01
Thermonuclear fusion reactions play an important role in the development and construction of any fusion power plant. Studying the physical behavior of the possible mechanisms governing the energies released by the fusion products is necessary for a precise understanding of the related kinematics. In this work, a theoretical formula governing generally applied thermonuclear fusion reactions is derived for calculating the fusion product energies from the physical properties of the reactants; from it, other parameters governing a given reaction can also be calculated. Using this formula, the energy spectrum of 4He produced from the T-3He fusion reaction has been plotted with respect to the reaction angle and incident energies ranging from 0.08 to 0.6 MeV.
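For reference, the standard non-relativistic two-body kinematics relation for a reaction a + A → b + B gives the product energy as a function of the incident energy and the laboratory emission angle; it is the kind of relation the abstract describes, though the paper's own formula may differ in form.

```latex
% Standard non-relativistic two-body reaction kinematics (illustrative, not
% necessarily the paper's exact expression). E_a: incident energy, Q: reaction
% Q-value, \theta: laboratory emission angle of product b.
\sqrt{E_b} \;=\;
\frac{\sqrt{m_a m_b E_a}\,\cos\theta
      \;\pm\;
      \sqrt{\,m_a m_b E_a \cos^2\theta + (m_B + m_b)\bigl[m_B Q + (m_B - m_a)E_a\bigr]}}
     {m_B + m_b}
```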
NASA Astrophysics Data System (ADS)
Guan, Wen; Li, Li; Jin, Weiqi; Qiu, Su; Zou, Yan
2015-10-01
Extreme-low-light CMOS has been widely applied in the field of night vision as a new type of solid-state image sensor. However, if the illumination in the scene changes drastically or is too strong, an extreme-low-light CMOS sensor cannot clearly present both the high-light and the low-light regions of the scene. To address this partial saturation problem in night vision, an HDR image fusion algorithm based on the Laplacian pyramid was investigated. The overall gray value and contrast of the low-light image are very low. For the top layer of the long-exposure and short-exposure images, which carries rich brightness and textural features, we choose a fusion strategy based on the regional average gradient. The remaining layers, which represent the edge feature information of the target, are fused with a strategy based on regional energy. In the process of reconstructing the source image from the Laplacian pyramid, we compare the fusion results obtained with four kinds of base images. The algorithm is tested using Matlab and compared with different fusion strategies. We use three objective evaluation parameters, information entropy, average gradient, and standard deviation, for further analysis of the fusion results. Experiments in different low-illumination environments show that the algorithm in this paper can rapidly achieve a wide dynamic range while keeping high entropy. The verification of the algorithm's features suggests a further application prospect for the optimized algorithm. Keywords: high dynamic range imaging, image fusion, multi-exposure image, weight coefficient, information fusion, Laplacian pyramid transform.
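The two fusion rules can be sketched as follows, assuming the Laplacian pyramids of the long- and short-exposure images are built elsewhere (e.g. with OpenCV): the coarsest level is fused by regional average gradient and the remaining levels by regional energy, with a choose-max decision per pixel. Window sizes and the exact activity definitions are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def regional_energy(layer, size=3):
    """Local energy in a size x size window."""
    return uniform_filter(layer ** 2, size)

def regional_avg_gradient(layer, size=3):
    """Local average gradient magnitude in a size x size window."""
    gy, gx = np.gradient(layer)
    return uniform_filter(np.sqrt(0.5 * (gx ** 2 + gy ** 2)), size)

def fuse_pyramids(lap_long, lap_short):
    """Sketch of the fusion rules: top (coarsest) level by regional average
    gradient, remaining detail levels by regional energy (choose-max per pixel)."""
    fused, top = [], len(lap_long) - 1
    for k, (a, b) in enumerate(zip(lap_long, lap_short)):
        if k == top:
            act_a, act_b = regional_avg_gradient(a), regional_avg_gradient(b)
        else:
            act_a, act_b = regional_energy(a), regional_energy(b)
        fused.append(np.where(act_a >= act_b, a, b))
    return fused
```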
A protein coevolution method uncovers critical features of the Hepatitis C Virus fusion mechanism
Douam, Florian; Mancip, Jimmy; Mailly, Laurent; Montserret, Roland; Ding, Qiang; Verhoeyen, Els; Baumert, Thomas F.; Ploss, Alexander; Carbone, Alessandra
2018-01-01
Amino-acid coevolution refers to compensatory mutational patterns that preserve the function of a protein. Viral envelope glycoproteins, which mediate entry of enveloped viruses into their host cells, are shaped by coevolution signals that confer to viruses the plasticity to evade neutralizing antibodies without altering viral entry mechanisms. The functions and structures of the two envelope glycoproteins of the Hepatitis C Virus (HCV), E1 and E2, are poorly described. In particular, how these two proteins mediate the HCV fusion process between the viral and the cell membrane remains elusive. Here, as a proof of concept, we aimed to take advantage of an original coevolution method recently developed to shed light on the HCV fusion mechanism. When first applied to the well-characterized Dengue Virus (DENV) envelope glycoproteins, coevolution analysis was able to predict important structural features and rearrangements of these viral protein complexes. When applied to HCV E1E2, computational coevolution analysis predicted that E1 and E2 refold interdependently during fusion through rearrangements of the E2 Back Layer (BL). Consistently, a soluble BL-derived polypeptide inhibited HCV infection of hepatoma cell lines, primary human hepatocytes and humanized liver mice. We showed that this polypeptide specifically inhibited HCV fusogenic rearrangements, hence supporting the critical role of this domain during HCV fusion. By combining coevolution analysis and in vitro assays, we also uncovered functionally significant coevolving signals between the E1 and E2 BL/Stem regions that govern HCV fusion, demonstrating the accuracy of our coevolution predictions. Altogether, our work sheds light on important structural features of the HCV fusion mechanism and contributes to advancing our functional understanding of this process. This study also provides an important proof of concept that coevolution can be employed to explore viral protein-mediated processes, and can guide the development of innovative translational strategies against challenging human-tropic viruses. PMID:29505618
Blob-level active-passive data fusion for Benthic classification
NASA Astrophysics Data System (ADS)
Park, Joong Yong; Kalluri, Hemanth; Mathur, Abhinav; Ramnath, Vinod; Kim, Minsu; Aitken, Jennifer; Tuell, Grady
2012-06-01
We extend data fusion from the pixel level to the more semantically meaningful blob level, using the mean-shift algorithm to form labeled blobs having high similarity in the feature domain and connectivity in the spatial domain. We have also developed Bhattacharyya Distance (BD) and rule-based classifiers, and have implemented these higher-level data fusion algorithms in the CZMIL Data Processing System. Applying these new algorithms to recent SHOALS and CASI data at Plymouth Harbor, Massachusetts, we achieved improved benthic classification accuracies over those produced with either single sensor or pixel-level fusion strategies. These results appear to validate the hypothesis that classification accuracy may be generally improved by adopting higher spatial and semantic levels of fusion.
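A blob can then be assigned to a class by comparing its feature histogram against class prototypes with the Bhattacharyya distance, roughly as below; the rule-based part of the classifier and the mean-shift blob formation are not shown.

```python
import numpy as np

def bhattacharyya_distance(h1, h2):
    """Bhattacharyya distance between two feature histograms (normalized here)."""
    h1 = h1 / (h1.sum() + 1e-12)
    h2 = h2 / (h2.sum() + 1e-12)
    bc = np.sum(np.sqrt(h1 * h2))          # Bhattacharyya coefficient
    return -np.log(bc + 1e-12)

def classify_blob(blob_hist, class_prototypes):
    """Assign the blob to the class whose prototype histogram is closest."""
    d = [bhattacharyya_distance(blob_hist, p) for p in class_prototypes]
    return int(np.argmin(d))
```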
Infrared and visible image fusion scheme based on NSCT and low-level visual features
NASA Astrophysics Data System (ADS)
Li, Huafeng; Qiu, Hongmei; Yu, Zhengtao; Zhang, Yafei
2016-05-01
Multi-scale transform (MST) is an efficient tool for image fusion. Recently, many fusion methods have been developed based on different MSTs, and they have shown potential application in many fields. In this paper, we propose an effective infrared and visible image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain, in which the NSCT is first employed to decompose each of the source images into a series of high frequency subbands and one low frequency subband. To improve the fusion performance, we design two new activity measures for fusion of the lowpass subbands and the highpass subbands. These measures are developed based on the fact that the human visual system (HVS) perceives image quality mainly according to some of its low-level features. Then, the selection principles for the different subbands are presented based on the corresponding activity measures. Finally, the merged subbands are constructed according to the selection principles, and the final fused image is produced by applying the inverse NSCT to these merged subbands. Experimental results demonstrate the effectiveness and superiority of the proposed method over state-of-the-art fusion methods in terms of both visual effect and objective evaluation results.
Cai, Suxian; Yang, Shanshan; Zheng, Fang; Lu, Meng; Wu, Yunfeng; Krishnan, Sridhar
2013-01-01
Analysis of knee joint vibration (VAG) signals can provide quantitative indices for detection of knee joint pathology at an early stage. In addition to the statistical features developed in the related previous studies, we extracted two separable features, that is, the number of atoms derived from the wavelet matching pursuit decomposition and the number of significant signal turns detected with the fixed threshold in the time domain. To perform a better classification over the data set of 89 VAG signals, we applied a novel classifier fusion system based on the dynamic weighted fusion (DWF) method to ameliorate the classification performance. For comparison, a single least-squares support vector machine (LS-SVM) and the Bagging ensemble were used for the classification task as well. The results in terms of overall accuracy in percentage and area under the receiver operating characteristic curve obtained with the DWF-based classifier fusion method reached 88.76% and 0.9515, respectively, which demonstrated the effectiveness and superiority of the DWF method with two distinct features for the VAG signal analysis. PMID:23573175
3D Object Classification Based on Thermal and Visible Imagery in Urban Area
NASA Astrophysics Data System (ADS)
Hasani, H.; Samadzadegan, F.
2015-12-01
The spatial distribution of land cover in urban areas, especially 3D objects (buildings and trees), is a fundamental dataset for urban planning, ecological research, disaster management, etc. Owing to recent advances in sensor technologies, several types of remotely sensed data are available for the same area. Data fusion has been widely investigated for integrating different sources of data in the classification of urban areas. Thermal infrared imagery (TIR) contains information on emitted radiation and has unique radiometric properties. However, due to the coarse spatial resolution of thermal data, its application has been restricted in urban areas. On the other hand, visible imagery (VIS) has high spatial resolution and information in the visible spectrum. Consequently, there is a complementary relation between thermal and visible imagery in the classification of urban areas. This paper evaluates the potential of fusing aerial thermal hyperspectral and visible imagery in the classification of urban areas. In the pre-processing step, the thermal imagery is resampled to the spatial resolution of the visible image. Then feature-level fusion is applied to construct a hybrid feature space that includes visible bands, thermal hyperspectral bands, and spatial and texture features; moreover, a Principal Component Analysis (PCA) transformation is applied to extract PCs. Due to the high dimensionality of the feature space, a dimension reduction method is performed. Finally, Support Vector Machines (SVMs) classify the reduced hybrid feature space. The obtained results show that using thermal imagery along with visible imagery improved the classification accuracy by up to 8% with respect to visible image classification.
Spatial-time-state fusion algorithm for defect detection through eddy current pulsed thermography
NASA Astrophysics Data System (ADS)
Xiao, Xiang; Gao, Bin; Woo, Wai Lok; Tian, Gui Yun; Xiao, Xiao Ting
2018-05-01
Eddy Current Pulsed Thermography (ECPT) has received extensive attention due to its high sensitivity in detecting surface and subsurface cracks. However, unsupervised detection, i.e., identifying defects without any prior knowledge, remains a difficult challenge. This paper presents a spatial-time-state feature fusion algorithm that obtains a full profile of the defects by directional scanning. The proposed method performs feature extraction using independent component analysis (ICA) and automatic feature selection embedding a genetic algorithm. Finally, the optimal feature of each step is fused to reconstruct the defects by applying the common orthogonal basis extraction (COBE) method. Experiments have been conducted to validate the study and verify the efficacy of the proposed method on blind defect detection.
Integrating image quality in 2nu-SVM biometric match score fusion.
Vatsa, Mayank; Singh, Richa; Noore, Afzel
2007-10-01
This paper proposes an intelligent 2nu-support vector machine based match score fusion algorithm to improve the performance of face and iris recognition by integrating the quality of images. The proposed algorithm applies redundant discrete wavelet transform to evaluate the underlying linear and non-linear features present in the image. A composite quality score is computed to determine the extent of smoothness, sharpness, noise, and other pertinent features present in each subband of the image. The match score and the corresponding quality score of an image are fused using 2nu-support vector machine to improve the verification performance. The proposed algorithm is experimentally validated using the FERET face database and the CASIA iris database. The verification performance and statistical evaluation show that the proposed algorithm outperforms existing fusion algorithms.
Appearance-based human gesture recognition using multimodal features for human computer interaction
NASA Astrophysics Data System (ADS)
Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun
2011-03-01
The use of gesture as a natural interface plays a vitally important role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso, to convey meaning. So far, in the field of gesture recognition, most previous works have focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework, which combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions, including neutral, negative, and positive meanings, from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level: weighted decisions from the single modalities are fused in a later stage. A condensation-based algorithm is adopted for classification. We collected a dataset with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results showed that facial analysis improves hand gesture recognition, and that decision-level fusion performs better than feature-level fusion.
NASA Astrophysics Data System (ADS)
Bigdeli, Behnaz; Pahlavani, Parham
2017-01-01
Interpretation of synthetic aperture radar (SAR) data is difficult because the geometry and spectral range of SAR are different from those of optical imagery. Consequently, SAR imaging can be complementary to multispectral (MS) optical remote sensing techniques because it does not depend on solar illumination and weather conditions. This study presents a multisensor fusion of SAR and MS data based on the use of classification and regression trees (CART) and support vector machines (SVM) through a decision fusion system. First, different feature extraction strategies were applied to the SAR and MS data to produce more spectral and textural information. To overcome the redundancy and correlation between features, an intrinsic dimension estimation method based on the noise-whitened Harsanyi-Farrand-Chang method determines the proper dimension of the features. Then, principal component analysis and independent component analysis were utilized on the stacked feature space of the two datasets. Afterward, SVM and CART classified each reduced feature space. Finally, a fusion strategy was utilized to fuse the classification results. To show the effectiveness of the proposed methodology, single classification on each dataset was compared to the obtained results. A coregistered Radarsat-2 and WorldView-2 dataset from San Francisco, USA, was available to examine the effectiveness of the proposed method. The results show that combinations of SAR data with the optical sensor based on the proposed methodology improve the classification results for most of the classes. The proposed fusion method provided approximately 93.24% and 95.44% accuracy for two different areas of the data.
Joint interpretation of geophysical data using Image Fusion techniques
NASA Astrophysics Data System (ADS)
Karamitrou, A.; Tsokas, G.; Petrou, M.
2013-12-01
Joint interpretation of geophysical data produced from different methods is a challenging area of research in a wide range of applications. In this work we apply several image fusion approaches to combine maps of electrical resistivity, electromagnetic conductivity, vertical gradient of the magnetic field, magnetic susceptibility, and ground penetrating radar reflections, in order to detect archaeological relics. We utilize data gathered from Arkansas University, with the support of the U.S. Department of Defense, through the Strategic Environmental Research and Development Program (SERDP-CS1263). The area of investigation is the Army City, situated in Riley County, Kansas, USA. The depth of the relics is estimated at about 30 cm from the surface, yet the surface indications of its existence are limited. We initially register the images from the different methods to correct for random offsets due to the use of hand-held devices during the measurement procedure. Next, we apply four different image fusion approaches to create combined images, using fusion with mean values, wavelet decomposition, the curvelet transform, and the curvelet transform enhancing the images along specific angles. We create seven combinations of pairs between the available geophysical datasets. The combinations are such that for every pair at least one high-resolution method (resistivity or magnetic gradiometry) is included. Our results indicate that in almost every case the method of mean values produces satisfactory fused images that incorporate the majority of the features of the initial images. However, the contrast of the final image is reduced, and in some cases the averaging process nearly eliminated features that are faint in the original images. Wavelet-based fusion also produces good results, providing additional control in selecting the feature wavelength. Curvelet-based fusion proves to be the most effective method in most cases. The ability of the curvelet domain to unfold the image in terms of space, wavenumber, and orientation provides important advantages compared with the rest of the methods, by allowing the incorporation of a priori information about the orientation of the potential targets.
Begum, Shahina; Barua, Shaibal; Ahmed, Mobyen Uddin
2014-07-03
Today, clinicians often diagnose and classify diseases based on information collected from several physiological sensor signals. However, sensor signals can easily be corrupted by noise or interference, and due to large individual variations, sensitivity to different physiological sensors can also vary. Therefore, multiple sensor signal fusion is valuable for providing a more robust and reliable decision. This paper demonstrates a physiological sensor signal classification approach using sensor signal fusion and case-based reasoning. The proposed approach has been evaluated to classify Stressed or Relaxed individuals using sensor data fusion. Physiological sensor signals, i.e., Heart Rate (HR), Finger Temperature (FT), Respiration Rate (RR), Carbon dioxide (CO2) and Oxygen Saturation (SpO2), are collected during the data collection phase. Here, sensor fusion has been done in two different ways: (i) decision-level fusion using features extracted through traditional approaches; and (ii) data-level fusion using features extracted by means of Multivariate Multiscale Entropy (MMSE). Case-Based Reasoning (CBR) is applied for the classification of the signals. The experimental results show that the proposed system could classify Stressed or Relaxed individuals with 87.5% accuracy compared to an expert in the domain. It thus shows promising results in the psychophysiological domain, and it could be possible to adapt this approach to other relevant healthcare systems.
Face-iris multimodal biometric scheme based on feature level fusion
NASA Astrophysics Data System (ADS)
Huo, Guang; Liu, Yuanning; Zhu, Xiaodong; Dong, Hongxing; He, Fei
2015-11-01
Unlike score level fusion, feature level fusion demands that all the features extracted from unimodal traits have high distinguishability, as well as homogeneity and compatibility, which is difficult to achieve. Therefore, most multimodal biometric research focuses on score level fusion, whereas few studies investigate feature level fusion. We propose a face-iris recognition method based on feature level fusion. We build a special two-dimensional Gabor filter bank to extract local texture features from face and iris images, and then transform them by histogram statistics into an energy-orientation variance histogram feature with lower dimensions and higher distinguishability. Finally, through a fusion-recognition strategy based on principal components analysis and support vector machine (FRSPS), feature level fusion and one-to-n identification are accomplished. The experimental results demonstrate that this method can not only effectively extract face and iris features but also provide higher recognition accuracy. Compared with some state-of-the-art fusion methods, the proposed method has a significant performance advantage.
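A rough sketch of an energy-orientation style descriptor is given below: the image is filtered with a small bank of 2-D Gabor kernels and the response energy per orientation is collected into a normalized histogram. Kernel parameters and the histogram form are assumptions; the paper's energy-orientation variance histogram is more elaborate.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel_2d(ksize, sigma, theta, lam):
    """Real part of a 2-D Gabor kernel (simplified; parameters are placeholders)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def energy_orientation_histogram(image, n_orient=8, ksize=15, sigma=3.0, lam=8.0):
    """Sketch: filter the image with a small Gabor bank and histogram the
    filter-response energy per orientation."""
    energies = []
    for k in range(n_orient):
        kern = gabor_kernel_2d(ksize, sigma, np.pi * k / n_orient, lam)
        resp = convolve(image.astype(float), kern, mode='reflect')
        energies.append(np.sum(resp ** 2))
    e = np.array(energies)
    return e / (e.sum() + 1e-12)
```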
Graph-based Data Modeling and Analysis for Data Fusion in Remote Sensing
NASA Astrophysics Data System (ADS)
Fan, Lei
Hyperspectral imaging provides the capability of increased sensitivity and discrimination over traditional imaging methods by combining standard digital imaging with spectroscopic methods. For each individual pixel in a hyperspectral image (HSI), a continuous spectrum is sampled as the spectral reflectance/radiance signature to facilitate identification of ground cover and surface material. The abundant spectrum knowledge allows all available information from the data to be mined. The superior qualities within hyperspectral imaging allow wide applications such as mineral exploration, agriculture monitoring, and ecological surveillance, etc. The processing of massive high-dimensional HSI datasets is a challenge since many data processing techniques have a computational complexity that grows exponentially with the dimension. Besides, an HSI dataset may contain a limited number of degrees of freedom due to the high correlations between data points and among the spectra. On the other hand, merely taking advantage of the sampled spectrum of an individual HSI data point may produce inaccurate results due to the mixed nature of raw HSI data, such as mixed pixels, optical interference, etc. Fusion strategies are widely adopted in data processing to achieve better performance, especially in the field of classification and clustering. There are mainly three types of fusion strategies, namely low-level data fusion, intermediate-level feature fusion, and high-level decision fusion. Low-level data fusion combines multi-source data that is expected to be complementary or cooperative. Intermediate-level feature fusion aims at selection and combination of features to remove redundant information. Decision level fusion exploits a set of classifiers to provide more accurate results. The fusion strategies have wide applications including HSI data processing. With the fast development of multiple remote sensing modalities, e.g. Very High Resolution (VHR) optical sensors, LiDAR, etc., fusion of multi-source data can in principle produce more detailed information than each single source. On the other hand, besides the abundant spectral information contained in HSI data, features such as texture and shape may be employed to represent data points from a spatial perspective. Furthermore, feature fusion also includes the strategy of removing redundant and noisy features in the dataset. One of the major problems in machine learning and pattern recognition is to develop appropriate representations for complex nonlinear data. In HSI processing, a particular data point is usually described as a vector with coordinates corresponding to the intensities measured in the spectral bands. This vector representation permits the application of linear and nonlinear transformations with linear algebra to find an alternative representation of the data. More generally, HSI is multi-dimensional in nature and the vector representation may lose the contextual correlations. Tensor representation provides a more sophisticated modeling technique and a higher-order generalization to linear subspace analysis. In graph theory, data points can be generalized as nodes with connectivities measured from the proximity of a local neighborhood. The graph-based framework efficiently characterizes the relationships among the data and allows for convenient mathematical manipulation in many applications, such as data clustering, feature extraction, feature selection and data alignment.
In this thesis, graph-based approaches to multi-source feature and data fusion in remote sensing are explored. We mainly investigate the fusion of spatial, spectral and LiDAR information with linear and multilinear algebra within a graph-based framework for data clustering and classification problems.
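The abstract above stays at the conceptual level. As a rough, non-authoritative illustration of the graph-based framework it describes, the sketch below builds a k-nearest-neighbour similarity graph over HSI pixel spectra; the function name, the Gaussian weighting and all parameters are our own assumptions, not details taken from the thesis.

```python
import numpy as np

def knn_similarity_graph(spectra, k=10, sigma=1.0):
    """Build a symmetric k-NN adjacency matrix over HSI pixel spectra.

    spectra : (n_pixels, n_bands) array, one spectral vector per pixel.
    Returns an (n_pixels, n_pixels) weight matrix with Gaussian-kernel
    weights on the k nearest neighbours of each node.
    """
    n = spectra.shape[0]
    # Pairwise squared Euclidean distances between spectral signatures.
    sq_norms = np.sum(spectra ** 2, axis=1)
    d2 = sq_norms[:, None] + sq_norms[None, :] - 2.0 * spectra @ spectra.T
    np.fill_diagonal(d2, np.inf)            # no self-loops
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[:k]        # k closest spectra
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (2.0 * sigma ** 2))
    return np.maximum(W, W.T)               # symmetrise the graph

# toy usage: 200 random "pixels" with 50 spectral bands
rng = np.random.default_rng(0)
W = knn_similarity_graph(rng.random((200, 50)), k=8)
```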
A biometric identification system based on eigenpalm and eigenfinger features.
Ribaric, Slobodan; Fratric, Ivan
2005-11-01
This paper presents a multimodal biometric identification system based on the features of the human hand. We describe a new biometric approach to personal identification using eigenfinger and eigenpalm features, with fusion applied at the matching-score level. The identification process can be divided into the following phases: capturing the image; preprocessing; extracting and normalizing the palm and strip-like finger subimages; extracting the eigenpalm and eigenfinger features based on the K-L transform; matching and fusion; and, finally, a decision based on the (k, l)-NN classifier and thresholding. The system was tested on a database of 237 people (1,820 hand images). The experimental results showed the effectiveness of the system in terms of the recognition rate (100 percent), the equal error rate (EER = 0.58 percent), and the total error rate (TER = 0.72 percent).
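For readers unfamiliar with the pipeline in this abstract, the snippet below illustrates, under our own simplifying assumptions, two of its central ingredients: a K-L transform (PCA) feature projection and a weighted-sum fusion of matching scores. The weights, component counts and distance measure are illustrative, not the values used by Ribaric and Fratric.

```python
import numpy as np

def kl_transform(train, n_components=20):
    """K-L transform (PCA): return the mean and top eigenvectors of the
    training images, each image flattened to a row vector."""
    mean = train.mean(axis=0)
    centered = train - mean
    # Eigen-decomposition of the covariance via SVD of the centred data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(x, mean, basis):
    """Project one vector or a stack of vectors onto the K-L basis."""
    return (x - mean) @ basis.T

def fused_score(palm_dist, finger_dists, w_palm=0.5):
    """Matching-score-level fusion: smaller distance = better match.
    Finger scores are averaged, then combined with the palm score."""
    return w_palm * palm_dist + (1.0 - w_palm) * np.mean(finger_dists)

# toy usage with random stand-ins for palm image vectors
rng = np.random.default_rng(1)
gallery = rng.random((50, 1024))
mean, basis = kl_transform(gallery, n_components=16)
probe = project(rng.random(1024), mean, basis)
dists = np.linalg.norm(project(gallery, mean, basis) - probe, axis=1)
print(fused_score(dists[0], rng.random(5)))
```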
Fusion of LBP and SWLD using spatio-spectral information for hyperspectral face recognition
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Jiang, Peng; Zhang, Shuai; Xiong, Jinquan
2018-01-01
Hyperspectral imaging, which records intrinsic spectral information of the skin across different spectral bands, has become an important modality for robust face recognition. However, the main challenges for hyperspectral face recognition are high data dimensionality, low signal-to-noise ratio and inter-band misalignment. In this paper, hyperspectral face recognition based on LBP (Local Binary Pattern) and SWLD (Simplified Weber Local Descriptor) is proposed to extract discriminative local features from spatio-spectral fusion information. Firstly, a spatio-spectral fusion strategy based on statistical information is used to obtain discriminative features of hyperspectral face images. Secondly, LBP is applied to extract the orientation of the edges in the fused face image. Thirdly, SWLD is used to encode the intensity information in the hyperspectral images. Finally, a symmetric Kullback-Leibler distance is adopted to compare the encoded face images. The hyperspectral face recognition is tested on the Hong Kong Polytechnic University Hyperspectral Face database (PolyUHSFD). Experimental results show that the proposed method achieves a higher recognition rate (92.8%) than state-of-the-art hyperspectral face recognition algorithms.
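A minimal sketch of two of the ingredients named in this abstract, a basic 8-neighbour LBP histogram and the symmetric Kullback-Leibler distance used for matching; the windowing, binning and epsilon smoothing are our assumptions, and the SWLD and spatio-spectral fusion steps are omitted.

```python
import numpy as np

def lbp_histogram(image, eps=1e-10):
    """Basic 8-neighbour LBP code map followed by a normalised 256-bin histogram."""
    img = image.astype(float)
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(center, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        nbr = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code += (nbr >= center).astype(int) << bit   # set one bit per neighbour
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / (hist.sum() + eps)

def symmetric_kl(p, q, eps=1e-10):
    """Symmetric Kullback-Leibler distance between two normalised histograms."""
    p, q = p + eps, q + eps
    return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

rng = np.random.default_rng(2)
h1 = lbp_histogram(rng.random((64, 64)))
h2 = lbp_histogram(rng.random((64, 64)))
print(symmetric_kl(h1, h2))
```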
Data fusion: principles and applications in air defense
NASA Astrophysics Data System (ADS)
Maltese, Dominique; Lucas, Andre
1998-07-01
Within a Surveillance and Reconnaissance System, the Fusion Process is an essential part of the software package, since it combines the measurements from the different sensors; each sensor sends its data to a fusion center whose task is to build the best possible tactical picture. In this paper, a practical data fusion algorithm applied in a military context is presented; the case studied here is a medium-range surveillance situation featuring a dual-sensor platform which combines a surveillance Radar and an IRST; both sensors are collocated. The presented performances were obtained on validation scenarios via simulations performed by SAGEM with the ESSOR ('Environnement de Simulation de Senseurs Optroniques et Radar') multisensor simulation test bench.
Hwang, Wonjun; Wang, Haitao; Kim, Hyunwoo; Kee, Seok-Cheol; Kim, Junmo
2011-04-01
The authors present a robust face recognition system for large-scale data sets taken under uncontrolled illumination variations. The proposed face recognition system consists of a novel illumination-insensitive preprocessing method, a hybrid Fourier-based facial feature extraction, and a score fusion scheme. First, in the preprocessing stage, a face image is transformed into an illumination-insensitive image, called an "integral normalized gradient image," by normalizing and integrating the smoothed gradients of a facial image. Then, for feature extraction of complementary classifiers, multiple face models based upon hybrid Fourier features are applied. The hybrid Fourier features are extracted from different Fourier domains in different frequency bandwidths, and each feature is then individually classified by linear discriminant analysis. In addition, multiple face models are generated from plural normalized face images that have different eye distances. Finally, to combine scores from the multiple complementary classifiers, a log-likelihood-ratio-based score fusion scheme is applied. The proposed system is evaluated using the Face Recognition Grand Challenge (FRGC) experimental protocols; FRGC is a large, publicly available data set. Experimental results on the FRGC version 2.0 data sets show that the proposed method achieves an average verification rate of 81.49% on 2-D face images under various environmental variations such as illumination changes, expression changes, and time lapse.
[An improved medical image fusion algorithm and quality evaluation].
Chen, Meiling; Tao, Ling; Qian, Zhiyu
2009-08-01
Medical image fusion is of great value for application in medical image analysis and diagnosis. In this paper, the conventional method of wavelet fusion is improved, and a new algorithm for medical image fusion is presented in which the high-frequency and low-frequency coefficients are treated separately. When high-frequency coefficients are chosen, the regional edge intensities of each sub-image are calculated to realize adaptive fusion. The choice of the low-frequency coefficients is based on the edges of the images, so that the fused image preserves all useful information and appears more distinct. We apply the conventional and the improved wavelet-transform-based fusion algorithms to fuse two images of the human body and evaluate the fusion results with a quality evaluation method. Experimental results show that this algorithm can effectively retain the detail information of the original images and enhance their edge and texture features. The new algorithm is better than the conventional fusion algorithm based on the wavelet transform.
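As a rough sketch of the kind of wavelet-domain fusion rule described above (assuming the PyWavelets and SciPy packages are available), the code below averages the low-frequency bands and, for the high-frequency bands, keeps at each position the coefficient with the larger regional edge activity. The activity measure and the averaging of the approximation band are simplifications of ours; the paper's actual low-frequency rule is edge-based.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def edge_activity(coeff, size=3):
    """Regional activity: local mean of squared coefficient magnitude."""
    return uniform_filter(coeff ** 2, size=size)

def fuse_wavelet(img_a, img_b, wavelet="db2"):
    """One-level wavelet fusion: average the low-frequency (approximation)
    bands; for each high-frequency band pick, pixel by pixel, the
    coefficient with the larger regional edge activity."""
    ca_a, details_a = pywt.dwt2(img_a.astype(float), wavelet)
    ca_b, details_b = pywt.dwt2(img_b.astype(float), wavelet)
    ca_f = 0.5 * (ca_a + ca_b)                       # low-frequency fusion
    fused_details = []
    for da, db in zip(details_a, details_b):
        pick_a = edge_activity(da) >= edge_activity(db)
        fused_details.append(np.where(pick_a, da, db))
    return pywt.idwt2((ca_f, tuple(fused_details)), wavelet)

rng = np.random.default_rng(3)
fused = fuse_wavelet(rng.random((128, 128)), rng.random((128, 128)))
```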
Infrared and visible fusion face recognition based on NSCT domain
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan
2018-01-01
Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near-infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and low signal-to-noise ratio (SNR). Therefore, near-infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In this paper, a novel fusion algorithm in the non-subsampled contourlet transform (NSCT) domain is proposed for infrared and visible fusion face recognition. Firstly, NSCT is used to process the infrared and visible face images respectively, which exploits the image information at multiple scales, orientations, and frequency bands. Then, to extract effective discriminant features and balance the power of the high- and low-frequency bands of the NSCT coefficients, the local Gabor binary pattern (LGBP) and local binary pattern (LBP) are applied respectively in the different frequency parts to obtain robust representations of the infrared and visible face images. Finally, score-level fusion is used to fuse all the features for the final classification. The visible and near-infrared face recognition is tested on the HITSZ Lab2 visible and near-infrared face database. Experimental results show that the proposed method extracts the complementary features of near-infrared and visible-light images and improves the robustness of unconstrained face recognition.
NASA Astrophysics Data System (ADS)
Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen
2018-01-01
Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by fusing valued information selectively from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we use several techniques to construct a data-driven model of industrial process parameters from mechanical vibration and acoustic frequency spectra, based on selective fusion of multi-condition samples and multi-source features. A multi-layer SEN (MLSEN) strategy is used to simulate the domain expert's cognitive process. A genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-model for each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine the outputs of the inside-layer SEN sub-models. Then, the outside-layer SEN is constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, thereby realizing a selective information fusion process based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.
Segmentation by fusion of histogram-based k-means clusters in different color spaces.
Mignotte, Max
2008-05-01
This paper presents a new, simple, and efficient segmentation approach, based on a fusion procedure which aims at combining several segmentation maps associated with simpler partition models in order to finally get a more reliable and accurate segmentation result. The different label fields to be fused in our application are given by the same simple (K-means based) clustering technique applied to an input image expressed in different color spaces. Our fusion strategy aims at combining these segmentation maps with a final clustering procedure using, as input features, the local histograms of the class labels previously estimated and associated with each site for all these initial partitions. This fusion framework remains simple to implement, fast, general enough to be applied to various computer vision applications (e.g., motion detection and segmentation), and has been successfully applied on the Berkeley image database. The experiments reported herein illustrate the potential of this approach compared to the state-of-the-art segmentation methods recently proposed in the literature.
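A compact, non-authoritative sketch of this fusion idea using scikit-learn: each colour-space image is segmented with K-means, local histograms of the resulting class labels are computed at every site, and a final K-means clustering of the concatenated histograms gives the fused segmentation. The window size, cluster counts and the random colour-space stand-ins are our own choices.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment(image, n_clusters=6, seed=0):
    """K-means segmentation of an (H, W, C) image into a label map."""
    h, w, c = image.shape
    km = KMeans(n_clusters=n_clusters, n_init=4, random_state=seed)
    return km.fit_predict(image.reshape(-1, c)).reshape(h, w)

def local_label_histograms(label_map, n_clusters, win=3):
    """For every pixel, histogram of class labels in a (2*win+1)^2 window."""
    h, w = label_map.shape
    padded = np.pad(label_map, win, mode="edge")
    feats = np.zeros((h * w, n_clusters))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * win + 1, j:j + 2 * win + 1]
            feats[i * w + j] = np.bincount(patch.ravel(), minlength=n_clusters)
    return feats / feats.sum(axis=1, keepdims=True)

def fuse_segmentations(images, n_clusters=6, win=3):
    """Fuse several segmentations (one per colour space) by clustering
    the concatenated local label histograms."""
    feats = np.hstack([local_label_histograms(segment(img, n_clusters),
                                              n_clusters, win)
                       for img in images])
    h, w, _ = images[0].shape
    km = KMeans(n_clusters=n_clusters, n_init=4, random_state=0)
    return km.fit_predict(feats).reshape(h, w)

rng = np.random.default_rng(4)
imgs = [rng.random((32, 32, 3)) for _ in range(3)]   # stand-ins for colour spaces
print(fuse_segmentations(imgs).shape)
```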
NASA Astrophysics Data System (ADS)
Liu, Jie; Hu, Youmin; Wang, Yan; Wu, Bo; Fan, Jikai; Hu, Zhongxu
2018-05-01
The diagnosis of complicated fault severity problems in rotating machinery systems is an important issue that affects the productivity and quality of manufacturing processes and industrial applications. However, it usually suffers from several deficiencies. (1) A considerable degree of prior knowledge and expertise is required, not only to extract and select specific features from raw sensor signals, but also to choose a suitable fusion strategy for sensor information. (2) Traditional artificial neural networks with shallow architectures are usually adopted, and they have a limited ability to learn complex and variable operating conditions. In multi-sensor-based diagnosis applications in particular, massive high-dimensional and high-volume raw sensor signals need to be processed. In this paper, an integrated multi-sensor fusion-based deep feature learning (IMSFDFL) approach is developed to identify the fault severity in rotating machinery processes. First, traditional statistics and energy spectrum features are extracted from multiple sensors with multiple channels and combined. Then, a fused feature vector is constructed from all of the acquisition channels. Further, deep feature learning with stacked auto-encoders is used to obtain the deep features. Finally, the traditional softmax model is applied to identify the fault severity. The effectiveness of the proposed IMSFDFL approach is verified on a one-stage gearbox experimental platform that uses several accelerometers under different operating conditions. This approach can identify fault severity more effectively than the traditional approaches.
NASA Astrophysics Data System (ADS)
S. Al-Kaltakchi, Musab T.; Woo, Wai L.; Dlay, Satnam; Chambers, Jonathon A.
2017-12-01
In this study, a speaker identification system is considered consisting of a feature extraction stage which utilizes both power normalized cepstral coefficients (PNCCs) and Mel frequency cepstral coefficients (MFCC). Normalization is applied by employing cepstral mean and variance normalization (CMVN) and feature warping (FW), together with acoustic modeling using a Gaussian mixture model-universal background model (GMM-UBM). The main contributions are comprehensive evaluations of the effect of both additive white Gaussian noise (AWGN) and non-stationary noise (NSN) (with and without a G.712 type handset) upon identification performance. In particular, three NSN types with varying signal to noise ratios (SNRs) were tested corresponding to street traffic, a bus interior, and a crowded talking environment. The performance evaluation also considered the effect of late fusion techniques based on score fusion, namely, mean, maximum, and linear weighted sum fusion. The databases employed were TIMIT, SITW, and NIST 2008; and 120 speakers were selected from each database to yield 3600 speech utterances. As recommendations from the study, mean fusion is found to yield overall best performance in terms of speaker identification accuracy (SIA) with noisy speech, whereas linear weighted sum fusion is overall best for original database recordings.
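The three late-fusion rules compared in this study are simple to state; the sketch below gives plain NumPy versions of mean, maximum and linear weighted-sum score fusion, with illustrative scores standing in for the MFCC- and PNCC-based subsystem outputs.

```python
import numpy as np

def mean_fusion(scores):
    """scores: (n_systems, n_trials) array of comparable (normalised) scores."""
    return np.mean(scores, axis=0)

def max_fusion(scores):
    """Take, per trial, the maximum score over all subsystems."""
    return np.max(scores, axis=0)

def weighted_sum_fusion(scores, weights):
    """Linear weighted-sum fusion; weights are normalised to sum to one."""
    weights = np.asarray(weights, dtype=float)
    return (weights / weights.sum()) @ scores

# toy example: two subsystems (e.g. MFCC- and PNCC-based) scoring 5 trials
mfcc_scores = np.array([0.2, 0.8, 0.5, 0.9, 0.1])
pncc_scores = np.array([0.3, 0.6, 0.7, 0.8, 0.2])
stacked = np.vstack([mfcc_scores, pncc_scores])
print(mean_fusion(stacked), max_fusion(stacked),
      weighted_sum_fusion(stacked, [0.6, 0.4]))
```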
Minimizing the semantic gap in biomedical content-based image retrieval
NASA Astrophysics Data System (ADS)
Guan, Haiying; Antani, Sameer; Long, L. Rodney; Thoma, George R.
2010-03-01
A major challenge in biomedical Content-Based Image Retrieval (CBIR) is to achieve meaningful mappings that minimize the semantic gap between the high-level biomedical semantic concepts and the low-level visual features in images. This paper presents a comprehensive learning-based scheme toward meeting this challenge and improving retrieval quality. The article presents two algorithms: a learning-based feature selection and fusion algorithm and the Ranking Support Vector Machine (Ranking SVM) algorithm. The feature selection algorithm aims to select 'good' features and fuse them using different similarity measurements to provide a better representation of the high-level concepts with the low-level image features. Ranking SVM is applied to learn the retrieval rank function and associate the selected low-level features with query concepts, given the ground-truth ranking of the training samples. The proposed scheme addresses four major issues in CBIR to improve the retrieval accuracy: image feature extraction, selection and fusion, similarity measurements, the association of the low-level features with high-level concepts, and the generation of the rank function to support high-level semantic image retrieval. It models the relationship between semantic concepts and image features, and enables retrieval at the semantic level. We apply it to the problem of vertebra shape retrieval from a digitized spine x-ray image set collected by the second National Health and Nutrition Examination Survey (NHANES II). The experimental results show an improvement of up to 41.92% in the mean average precision (MAP) over conventional image similarity computation methods.
NASA Astrophysics Data System (ADS)
Qin, Xinqiang; Hu, Gang; Hu, Kai
2018-01-01
The decomposition of multiple source images using bidimensional empirical mode decomposition (BEMD) often produces mismatched bidimensional intrinsic mode functions, either by their number or their frequency, making image fusion difficult. A solution to this problem is proposed using a fixed number of iterations and a union operation in the sifting process. By combining the local regional features of the images, an image fusion method has been developed. First, the source images are decomposed using the proposed BEMD to produce the first intrinsic mode function (IMF) and residue component. Second, for the IMF component, a selection and weighted average strategy based on local area energy is used to obtain a high-frequency fusion component. Third, for the residue component, a selection and weighted average strategy based on local average gray difference is used to obtain a low-frequency fusion component. Finally, the fused image is obtained by applying the inverse BEMD transform. Experimental results show that the proposed algorithm provides superior performance over methods based on wavelet transform, line and column-based EMD, and complex empirical mode decomposition, both in terms of visual quality and objective evaluation criteria.
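The BEMD decomposition itself is involved and omitted here, but the two fusion rules described above can be sketched directly: a selection/weighted-average rule driven by local area energy for the IMF component and by local average gray difference for the residue. The window sizes, the dominance threshold and the exact weighting are our assumptions, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_by_local_measure(comp_a, comp_b, measure_a, measure_b, thresh=0.7):
    """Selection / weighted-average rule: where one component clearly
    dominates (normalised measure share above `thresh`) select it,
    otherwise take the measure-weighted average."""
    total = measure_a + measure_b + 1e-12
    wa = measure_a / total
    fused = wa * comp_a + (1.0 - wa) * comp_b            # weighted average
    fused = np.where(wa > thresh, comp_a, fused)         # clear winner: A
    fused = np.where(wa < 1.0 - thresh, comp_b, fused)   # clear winner: B
    return fused

def fuse_imf(imf_a, imf_b, win=5):
    """High-frequency (IMF) rule driven by local area energy."""
    return fuse_by_local_measure(imf_a, imf_b,
                                 uniform_filter(imf_a ** 2, win),
                                 uniform_filter(imf_b ** 2, win))

def fuse_residue(res_a, res_b, win=5):
    """Low-frequency (residue) rule driven by local average gray difference."""
    diff_a = np.abs(res_a - uniform_filter(res_a, win))
    diff_b = np.abs(res_b - uniform_filter(res_b, win))
    return fuse_by_local_measure(res_a, res_b,
                                 uniform_filter(diff_a, win),
                                 uniform_filter(diff_b, win))

rng = np.random.default_rng(5)
a, b = rng.random((64, 64)), rng.random((64, 64))
# stand-ins for one IMF and one residue per image; BEMD itself is omitted
fused = fuse_imf(a, b) + fuse_residue(a, b)
```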
Schmit, P F; Knapp, P F; Hansen, S B; Gomez, M R; Hahn, K D; Sinars, D B; Peterson, K J; Slutz, S A; Sefkow, A B; Awe, T J; Harding, E; Jennings, C A; Chandler, G A; Cooper, G W; Cuneo, M E; Geissel, M; Harvey-Thompson, A J; Herrmann, M C; Hess, M H; Johns, O; Lamppa, D C; Martin, M R; McBride, R D; Porter, J L; Robertson, G K; Rochau, G A; Rovang, D C; Ruiz, C L; Savage, M E; Smith, I C; Stygar, W A; Vesey, R A
2014-10-10
Magnetizing the fuel in inertial confinement fusion relaxes ignition requirements by reducing thermal conductivity and changing the physics of burn product confinement. Diagnosing the level of fuel magnetization during burn is critical to understanding target performance in magneto-inertial fusion (MIF) implosions. In pure deuterium fusion plasma, 1.01 MeV tritons are emitted during deuterium-deuterium fusion and can undergo secondary deuterium-tritium reactions before exiting the fuel. Increasing the fuel magnetization elongates the path lengths through the fuel of some of the tritons, enhancing their probability of reaction. Based on this feature, a method to diagnose fuel magnetization using the ratio of overall deuterium-tritium to deuterium-deuterium neutron yields is developed. Analysis of anisotropies in the secondary neutron energy spectra further constrain the measurement. Secondary reactions also are shown to provide an upper bound for the volumetric fuel-pusher mix in MIF. The analysis is applied to recent MIF experiments [M. R. Gomez et al., Phys. Rev. Lett. 113, 155003 (2014)] on the Z Pulsed Power Facility, indicating that significant magnetic confinement of charged burn products was achieved and suggesting a relatively low-mix environment. Both of these are essential features of future ignition-scale MIF designs.
Center for Neural Engineering: applications of pulse-coupled neural networks
NASA Astrophysics Data System (ADS)
Malkani, Mohan; Bodruzzaman, Mohammad; Johnson, John L.; Davis, Joel
1999-03-01
Pulsed-Coupled Neural Network (PCNN) is an oscillatory model neural network where grouping of cells and grouping among the groups that form the output time series (number of cells that fires in each input presentation also called `icon'). This is based on the synchronicity of oscillations. Recent work by Johnson and others demonstrated the functional capabilities of networks containing such elements for invariant feature extraction using intensity maps. PCNN thus presents itself as a more biologically plausible model with solid functional potential. This paper will present the summary of several projects and their results where we successfully applied PCNN. In project one, the PCNN was applied for object recognition and classification through a robotic vision system. The features (icons) generated by the PCNN were then fed into a feedforward neural network for classification. In project two, we developed techniques for sensory data fusion. The PCNN algorithm was implemented and tested on a B14 mobile robot. The PCNN-based features were extracted from the images taken from the robot vision system and used in conjunction with the map generated by data fusion of the sonar and wheel encoder data for the navigation of the mobile robot. In our third project, we applied the PCNN for speaker recognition. The spectrogram image of speech signals are fed into the PCNN to produce invariant feature icons which are then fed into a feedforward neural network for speaker identification.
Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators
Bai, Xiangzhi
2015-01-01
The crucial problem of infrared and visual image fusion is how to effectively extract the image features, including the image regions and details and combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through the fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening and closing based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through the fuzzy measure-based weight strategy to form the final fusion features. Finally, the final fusion features are incorporated with the original infrared and visual images using the contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion. PMID:26184229
Classification of weld defect based on information fusion technology for radiographic testing system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Hongquan; Liang, Zeming, E-mail: heavenlzm@126.com; Gao, Jianmin
Improving the efficiency and accuracy of weld defect classification is an important technical problem in developing the radiographic testing system. This paper proposes a novel weld defect classification method based on information fusion technology, the Dempster-Shafer evidence theory. First, to characterize weld defects and improve the accuracy of their classification, 11 weld defect features were defined based on the sub-pixel level edges of radiographic images, four of which are presented for the first time in this paper. Second, we applied information fusion technology to combine different features for weld defect classification, including a mass function defined on the weld defect feature information and a quartile-method-based calculation of the standard weld defect classes, which addresses the problem of a limited number of training samples. A steam turbine weld defect classification case study is also presented herein to illustrate our technique. The results show that the proposed method can increase the correct classification rate with limited training samples and address the uncertainties associated with weld defect classification.
Jiang, Hongquan; Liang, Zeming; Gao, Jianmin; Dang, Changying
2016-03-01
Improving the efficiency and accuracy of weld defect classification is an important technical problem in developing the radiographic testing system. This paper proposes a novel weld defect classification method based on information fusion technology, the Dempster-Shafer evidence theory. First, to characterize weld defects and improve the accuracy of their classification, 11 weld defect features were defined based on the sub-pixel level edges of radiographic images, four of which are presented for the first time in this paper. Second, we applied information fusion technology to combine different features for weld defect classification, including a mass function defined on the weld defect feature information and a quartile-method-based calculation of the standard weld defect classes, which addresses the problem of a limited number of training samples. A steam turbine weld defect classification case study is also presented herein to illustrate our technique. The results show that the proposed method can increase the correct classification rate with limited training samples and address the uncertainties associated with weld defect classification.
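For reference, the evidence-combination step at the heart of this method is Dempster's rule; the sketch below is a generic implementation over mass functions represented as dictionaries of focal sets, with a toy two-class weld-defect example. The paper's specific mass-function construction and quartile-based standard classes are not reproduced.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2 : dicts mapping focal elements (frozensets of class labels)
    to masses. Returns the combined, conflict-normalised mass function.
    """
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                     # mass assigned to conflict
    if conflict >= 1.0:
        raise ValueError("total conflict, evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# toy example: two defect features giving evidence over {crack, porosity}
crack, porosity = frozenset({"crack"}), frozenset({"porosity"})
both = crack | porosity
m_feature1 = {crack: 0.6, porosity: 0.1, both: 0.3}
m_feature2 = {crack: 0.5, porosity: 0.3, both: 0.2}
print(dempster_combine(m_feature1, m_feature2))
```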
A Support Vector Machine-Based Gender Identification Using Speech Signal
NASA Astrophysics Data System (ADS)
Lee, Kye-Hwan; Kang, Sang-Ick; Kim, Deok-Hwan; Chang, Joon-Hyuk
We propose an effective voice-based gender identification method using a support vector machine (SVM). The SVM is a binary classification algorithm that separates two groups by finding an optimal nonlinear boundary in a feature space and is known to yield high classification performance. In the present work, we compare the identification performance of the SVM with that of a Gaussian mixture model (GMM)-based method using mel-frequency cepstral coefficients (MFCC). A novel approach incorporating a feature fusion scheme based on a combination of the MFCC and the fundamental frequency is proposed with the aim of improving the performance of gender identification. Experimental results demonstrate that the gender identification performance using the SVM is significantly better than that of the GMM-based scheme. Moreover, the performance is substantially improved when the proposed feature fusion technique is applied.
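A minimal sketch of the feature fusion idea described above, assuming MFCC and fundamental-frequency statistics have already been extracted elsewhere: the two feature sets are concatenated per utterance and fed to a scikit-learn SVM. The synthetic placeholder features, pitch thresholds and kernel settings are ours and only illustrate the mechanics.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fuse_features(mfcc, f0):
    """Feature-level fusion: concatenate per-utterance MFCC statistics
    with fundamental-frequency statistics into one vector per utterance."""
    return np.hstack([mfcc, f0])

# placeholder features: 200 utterances, 13 MFCC means + 1 mean F0 each
rng = np.random.default_rng(6)
mfcc = rng.normal(size=(200, 13))
f0 = np.where(rng.random((200, 1)) < 0.5,        # crude two-group pitch split
              rng.normal(120, 15, (200, 1)),     # lower mean F0
              rng.normal(210, 20, (200, 1)))     # higher mean F0
labels = (f0[:, 0] > 165).astype(int)            # synthetic gender labels
X = fuse_features(mfcc, f0)                      # (200, 14) fused vectors

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X[:150], labels[:150])
print("held-out accuracy:", clf.score(X[150:], labels[150:]))
```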
Automated target classification in high resolution dual frequency sonar imagery
NASA Astrophysics Data System (ADS)
Aridgides, Tom; Fernández, Manuel
2007-04-01
An improved computer-aided-detection / computer-aided-classification (CAD/CAC) processing string has been developed. The classified objects of 2 distinct strings are fused using the classification confidence values and their expansions as features, and using "summing" or log-likelihood-ratio-test (LLRT) based fusion rules. The utility of the overall processing strings and their fusion was demonstrated with new high-resolution dual frequency sonar imagery. Three significant fusion algorithm improvements were made. First, a nonlinear 2nd order (Volterra) feature LLRT fusion algorithm was developed. Second, a Box-Cox nonlinear feature LLRT fusion algorithm was developed. The Box-Cox transformation consists of raising the features to a to-be-determined power. Third, a repeated application of a subset feature selection / feature orthogonalization / Volterra feature LLRT fusion block was utilized. It was shown that cascaded Volterra feature LLRT fusion of the CAD/CAC processing strings outperforms summing, baseline single-stage Volterra and Box-Cox feature LLRT algorithms, yielding significant improvements over the best single CAD/CAC processing string results, and providing the capability to correctly call the majority of targets while maintaining a very low false alarm rate. Additionally, the robustness of cascaded Volterra feature fusion was demonstrated, by showing that the algorithm yields similar performance with the training and test sets.
Recognizing human activities using appearance metric feature and kinematics feature
NASA Astrophysics Data System (ADS)
Qian, Huimin; Zhou, Jun; Lu, Xinbiao; Wu, Xinye
2017-05-01
The problem of automatically recognizing human activities from videos through the fusion of the two most important cues, appearance metric feature and kinematics feature, is considered. And a system of two-dimensional (2-D) Poisson equations is introduced to extract the more discriminative appearance metric feature. Specifically, the moving human blobs are first detected out from the video by background subtraction technique to form a binary image sequence, from which the appearance feature designated as the motion accumulation image and the kinematics feature termed as centroid instantaneous velocity are extracted. Second, 2-D discrete Poisson equations are employed to reinterpret the motion accumulation image to produce a more differentiated Poisson silhouette image, from which the appearance feature vector is created through the dimension reduction technique called bidirectional 2-D principal component analysis, considering the balance between classification accuracy and time consumption. Finally, a cascaded classifier based on the nearest neighbor classifier and two directed acyclic graph support vector machine classifiers, integrated with the fusion of the appearance feature vector and centroid instantaneous velocity vector, is applied to recognize the human activities. Experimental results on the open databases and a homemade one confirm the recognition performance of the proposed algorithm.
NASA Astrophysics Data System (ADS)
Wan, Xiaoqing; Zhao, Chunhui; Wang, Yanchun; Liu, Wu
2017-11-01
This paper proposes a novel classification paradigm for hyperspectral image (HSI) using feature-level fusion and deep learning-based methodologies. Operation is carried out in three main steps. First, during a pre-processing stage, wave atoms are introduced into bilateral filter to smooth HSI, and this strategy can effectively attenuate noise and restore texture information. Meanwhile, high quality spectral-spatial features can be extracted from HSI by taking geometric closeness and photometric similarity among pixels into consideration simultaneously. Second, higher order statistics techniques are firstly introduced into hyperspectral data classification to characterize the phase correlations of spectral curves. Third, multifractal spectrum features are extracted to characterize the singularities and self-similarities of spectra shapes. To this end, a feature-level fusion is applied to the extracted spectral-spatial features along with higher order statistics and multifractal spectrum features. Finally, stacked sparse autoencoder is utilized to learn more abstract and invariant high-level features from the multiple feature sets, and then random forest classifier is employed to perform supervised fine-tuning and classification. Experimental results on two real hyperspectral data sets demonstrate that the proposed method outperforms some traditional alternatives.
Han, Lei; Wulie, Buzha; Yang, Yiling; Wang, Hongqing
2015-01-05
This study investigated a novel method of fusing visible (VIS) and infrared (IR) images with the major objective of obtaining higher-resolution IR images. Most existing image fusion methods focus only on visual performance and many fail to consider the thermal physical properties of the IR images, leading to spectral distortion in the fused image. In this study, we use the IR thermal physical property to correct the VIS image directly. Specifically, the Stefan-Boltzmann Law is used as a strong constraint to modulate the VIS image, such that the fused result shows a similar level of regional thermal energy as the original IR image, while preserving the high-resolution structural features from the VIS image. This method is an improvement over our previous study, which required VIS-IR multi-wavelet fusion before the same correction method was applied. The results of experiments show that applying this correction to the VIS image directly without multi-resolution analysis (MRA) processing achieves similar results, but is considerably more computationally efficient, thereby providing a new perspective on VIS and IR image fusion.
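One possible reading of the Stefan-Boltzmann constraint described above is sketched below: the IR intensities are treated as relative temperatures, their fourth power as regional radiant energy, and the VIS image is rescaled region by region so that its local energy tracks that of the IR image. The normalisation, window size and gain formulation are our own simplifications rather than the authors' formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sb_modulate(vis, ir, win=15, eps=1e-6):
    """Modulate a high-resolution VIS image so that its regional energy
    follows the regional thermal radiance implied by the IR image.

    Radiance is taken as proportional to T**4 (Stefan-Boltzmann), with the
    IR intensities used as a stand-in for relative temperature.
    """
    vis = vis.astype(float)
    ir = ir.astype(float)
    radiance = (ir / (ir.max() + eps)) ** 4        # relative T^4 energy
    local_vis = uniform_filter(vis, win) + eps     # regional VIS energy
    local_rad = uniform_filter(radiance, win)      # regional IR energy
    gain = local_rad / local_vis                   # per-region correction
    return vis * gain                              # keep VIS structure, IR energy

rng = np.random.default_rng(7)
fused = sb_modulate(rng.random((128, 128)), rng.random((128, 128)))
```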
Multiscale infrared and visible image fusion using gradient domain guided image filtering
NASA Astrophysics Data System (ADS)
Zhu, Jin; Jin, Weiqi; Li, Li; Han, Zhenghao; Wang, Xia
2018-03-01
For better surveillance with infrared and visible imaging, a novel hybrid multiscale decomposition fusion method using gradient domain guided image filtering (HMSD-GDGF) is proposed in this study. In this method, hybrid multiscale decomposition of the source images with guided image filtering and gradient domain guided image filtering is first applied; the weight maps at each scale are then obtained using a saliency detection technique and filtering, with three different fusion rules at different scales. The three types of fusion rules are for the small-scale detail level, the large-scale detail level, and the base level. As a result, the target becomes more salient and can be more easily detected in the fusion result, with the detail information of the scene being fully displayed. Experimental comparisons with state-of-the-art fusion methods show that the HMSD-GDGF method has clear advantages in fidelity of salient information (including structural similarity, brightness, and contrast), preservation of edge features, and human visual perception. Therefore, visual effects can be improved by using the proposed HMSD-GDGF method.
Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection
Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun
2016-01-01
Long-range ground targets are difficult to detect in a noisy cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate with a high false alarm rate to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework of a SAR and IR target detection method using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. One method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated by OKTAL-SE. PMID:27447635
Computer Based Behavioral Biometric Authentication via Multi-Modal Fusion
2013-03-01
Fusion of features is the simple concatenation of feature vectors from multiple modalities, whereas decision-level fusion combines the decisions made by each individual modality. In ensemble-based decision-level fusion, multiple learners are combined. The high fusion percentages validate our hypothesis that by combining features from multiple modalities, classification accuracy can be improved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aghaei, Faranak; Tan, Maxine; Liu, Hong
Purpose: To identify a new clinical marker based on quantitative kinetic image features analysis and assess its feasibility to predict tumor response to neoadjuvant chemotherapy. Methods: The authors assembled a dataset involving breast MR images acquired from 68 cancer patients before undergoing neoadjuvant chemotherapy. Among them, 25 patients had complete response (CR) and 43 had partial and nonresponse (NR) to chemotherapy based on the response evaluation criteria in solid tumors. The authors developed a computer-aided detection scheme to segment breast areas and tumors depicted on the breast MR images and computed a total of 39 kinetic image features from both tumor and background parenchymal enhancement regions. The authors then applied and tested two approaches to classify between CR and NR cases. The first one analyzed each individual feature and applied a simple feature fusion method that combines classification results from multiple features. The second approach tested an attribute selected classifier that integrates an artificial neural network (ANN) with a wrapper subset evaluator, which was optimized using a leave-one-case-out validation method. Results: In the pool of 39 features, 10 yielded relatively higher classification performance with the areas under receiver operating characteristic curves (AUCs) ranging from 0.61 to 0.78 to classify between CR and NR cases. Using a feature fusion method, the maximum AUC = 0.85 ± 0.05. Using the ANN-based classifier, AUC value significantly increased to 0.96 ± 0.03 (p < 0.01). Conclusions: This study demonstrated that quantitative analysis of kinetic image features computed from breast MR images acquired prechemotherapy has potential to generate a useful clinical marker in predicting tumor response to chemotherapy.
Gradient-based multiresolution image fusion.
Petrović, Valdimir S; Xydeas, Costas S
2004-02-01
A novel approach to multiresolution signal-level image fusion is presented for accurately transferring visual information from any number of input image signals, into a single fused image without loss of information or the introduction of distortion. The proposed system uses a "fuse-then-decompose" technique realized through a novel, fusion/decomposition system architecture. In particular, information fusion is performed on a multiresolution gradient map representation domain of image signal information. At each resolution, input images are represented as gradient maps and combined to produce new, fused gradient maps. Fused gradient map signals are processed, using gradient filters derived from high-pass quadrature mirror filters to yield a fused multiresolution pyramid representation. The fused output image is obtained by applying, on the fused pyramid, a reconstruction process that is analogous to that of conventional discrete wavelet transform. This new gradient fusion significantly reduces the amount of distortion artefacts and the loss of contrast information usually observed in fused images obtained from conventional multiresolution fusion schemes. This is because fusion in the gradient map domain significantly improves the reliability of the feature selection and information fusion processes. Fusion performance is evaluated through informal visual inspection and subjective psychometric preference tests, as well as objective fusion performance measurements. Results clearly demonstrate the superiority of this new approach when compared to conventional fusion systems.
NASA Astrophysics Data System (ADS)
Pournamdari, M.; Hashim, M.
2014-02-01
Chromite ore deposit occurrence is related to ophiolite complexes, which form part of the oceanic crust, and provides a good opportunity for lithological mapping using remote sensing data. The main contribution of this paper is a novel approach to discriminating the different rock units associated with an ophiolite complex using the Feature Level Fusion technique on ASTER and Landsat TM satellite data at regional scale. In addition, this study applied a spectral transform approach, the Spectral Angle Mapper (SAM), to distinguish high-potential areas of chromite concentration and to determine the boundaries between different rock units. Results indicated that both approaches give superior outputs compared to other methods and can produce a geological map of ophiolite complex rock units in arid and semi-arid regions. The novel technique, combining feature level fusion and the Spectral Angle Mapper (SAM), discriminated the ophiolitic rock units and produced detailed geological maps of the study area. As a case study, the Sikhoran ophiolite complex located in SE Iran was selected for the image processing techniques. In conclusion, a suitable approach for lithological mapping of ophiolite complexes is demonstrated; this technique contributes meaningfully to economic geology in terms of identifying new prospects.
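The Spectral Angle Mapper used in this study has a simple closed form: the angle between each pixel spectrum and a reference endmember spectrum. The sketch below (with made-up endmembers and an arbitrary angle threshold) shows that computation and a nearest-endmember classification; it is illustrative only, not the processing chain applied to the ASTER and Landsat TM data.

```python
import numpy as np

def spectral_angle(pixels, reference, eps=1e-12):
    """Spectral Angle Mapper: angle (radians) between each pixel spectrum
    and a reference (endmember) spectrum; smaller angle = better match.

    pixels    : (n_pixels, n_bands) array
    reference : (n_bands,) array
    """
    dot = pixels @ reference
    norms = np.linalg.norm(pixels, axis=1) * np.linalg.norm(reference)
    cos = np.clip(dot / (norms + eps), -1.0, 1.0)
    return np.arccos(cos)

def sam_classify(pixels, endmembers, max_angle=0.35):
    """Assign each pixel to the endmember with the smallest spectral angle,
    or -1 when no angle is below the (arbitrary) `max_angle` threshold."""
    angles = np.stack([spectral_angle(pixels, e) for e in endmembers], axis=1)
    best = angles.argmin(axis=1)
    best[angles.min(axis=1) > max_angle] = -1
    return best

rng = np.random.default_rng(8)
endmembers = rng.random((3, 6))          # stand-ins for rock-unit spectra
pixels = rng.random((1000, 6))
print(np.bincount(sam_classify(pixels, endmembers) + 1))
```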
Vatsa, Mayank; Singh, Richa; Noore, Afzel
2008-08-01
This paper proposes algorithms for iris segmentation, quality enhancement, match score fusion, and indexing to improve both the accuracy and the speed of iris recognition. A curve evolution approach is proposed to effectively segment a nonideal iris image using the modified Mumford-Shah functional. Different enhancement algorithms are concurrently applied on the segmented iris image to produce multiple enhanced versions of the iris image. A support-vector-machine-based learning algorithm selects locally enhanced regions from each globally enhanced image and combines these good-quality regions to create a single high-quality iris image. Two distinct features are extracted from the high-quality iris image. The global textural feature is extracted using the 1-D log polar Gabor transform, and the local topological feature is extracted using Euler numbers. An intelligent fusion algorithm combines the textural and topological matching scores to further improve the iris recognition performance and reduce the false rejection rate, whereas an indexing algorithm enables fast and accurate iris identification. The verification and identification performance of the proposed algorithms is validated and compared with other algorithms using the CASIA Version 3, ICE 2005, and UBIRIS iris databases.
[Research Progress of Multi-Model Medical Image Fusion at Feature Level].
Zhang, Junjie; Zhou, Tao; Lu, Huiling; Wang, Huiqun
2016-04-01
Medical image fusion realizes the advantage of integrating functional images and anatomical images. This article discusses the research progress of multi-model medical image fusion at the feature level. We first describe the principle of medical image fusion at the feature level. Then we analyze and summarize the applications of fuzzy sets, rough sets, D-S evidence theory, artificial neural networks, principal component analysis and other fusion methods in medical image fusion. Lastly, we indicate the present problems and the future research directions of multi-model medical image fusion.
Chavan, Satishkumar S; Mahajan, Abhishek; Talbar, Sanjay N; Desai, Subhash; Thakur, Meenakshi; D'cruz, Anil
2017-02-01
Neurocysticercosis (NCC) is a parasitic infection caused by the tapeworm Taenia solium in its larval stage, which affects the central nervous system of the human body (the definitive host). It results in the formation of multiple lesions in the brain at different locations during its various stages. During diagnosis of such symptomatic patients, these lesions can be better visualized using a feature based fusion of Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). This paper presents a novel approach to Multimodality Medical Image Fusion (MMIF) used for the analysis of the lesions for diagnostic purposes and post-treatment review of NCC. The MMIF presented here is a technique of combining CT and MRI data of the same patient into a new slice using a Nonsubsampled Rotated Complex Wavelet Transform (NSRCxWT). The forward NSRCxWT is applied on both source modalities separately to extract the complementary and the edge-related features. These features are then combined to form a composite spectral plane using average and maximum value selection fusion rules. The inverse transformation on this composite plane results in a new, visually better, and enriched fused image. The proposed technique is tested on pilot study data sets of patients infected with NCC. The quality of these fused images is measured using objective and subjective evaluation metrics. Objective evaluation is performed by estimating fusion parameters such as entropy, fusion factor, image quality index, edge quality measure, and mean structural similarity index measure. The fused images are also evaluated for their visual quality using subjective analysis with the help of three expert radiologists. The experimental results on 43 image data sets of 17 patients are promising and superior when compared with state-of-the-art wavelet based fusion algorithms. The proposed algorithm can be a part of a computer-aided detection and diagnosis (CADD) system which assists radiologists in clinical practice.
NASA Astrophysics Data System (ADS)
Yue, Haosong; Chen, Weihai; Wu, Xingming; Wang, Jianhua
2016-03-01
Three-dimensional (3-D) simultaneous localization and mapping (SLAM) is a crucial technique for intelligent robots to navigate autonomously and execute complex tasks. It can also be applied to shape measurement, reverse engineering, and many other scientific or engineering fields. A widespread SLAM algorithm, named KinectFusion, performs well in environments with complex shapes. However, it cannot handle translation uncertainties well in highly structured scenes. This paper improves the KinectFusion algorithm and makes it competent in both structured and unstructured environments. 3-D line features are first extracted according to both color and depth data captured by Kinect sensor. Then the lines in the current data frame are matched with the lines extracted from the entire constructed world model. Finally, we fuse the distance errors of these line-pairs into the standard KinectFusion framework and estimate sensor poses using an iterative closest point-based algorithm. Comparative experiments with the KinectFusion algorithm and one state-of-the-art method in a corridor scene have been done. The experimental results demonstrate that after our improvement, the KinectFusion algorithm can also be applied to structured environments and has higher accuracy. Experiments on two open access datasets further validated our improvements.
Image fusion using sparse overcomplete feature dictionaries
Brumby, Steven P.; Bettencourt, Luis; Kenyon, Garrett T.; Chartrand, Rick; Wohlberg, Brendt
2015-10-06
Approaches for deciding what individuals in a population of visual system "neurons" are looking for using sparse overcomplete feature dictionaries are provided. A sparse overcomplete feature dictionary may be learned for an image dataset and a local sparse representation of the image dataset may be built using the learned feature dictionary. A local maximum pooling operation may be applied on the local sparse representation to produce a translation-tolerant representation of the image dataset. An object may then be classified and/or clustered within the translation-tolerant representation of the image dataset using a supervised classification algorithm and/or an unsupervised clustering algorithm.
Feature level fusion of hand and face biometrics
NASA Astrophysics Data System (ADS)
Ross, Arun A.; Govindarajan, Rohin
2005-03-01
Multibiometric systems utilize the evidence presented by multiple biometric sources (e.g., face and fingerprint, multiple fingers of a user, multiple matchers, etc.) in order to determine or verify the identity of an individual. Information from multiple sources can be consolidated at several distinct levels, including the feature extraction level, the match score level and the decision level. While fusion at the match score and decision levels has been extensively studied in the literature, fusion at the feature level is a relatively understudied problem. In this paper we discuss fusion at the feature level in three different scenarios: (i) fusion of PCA and LDA coefficients of the face; (ii) fusion of LDA coefficients corresponding to the R, G, B channels of a face image; (iii) fusion of face and hand modalities. Preliminary results are encouraging and help in highlighting the pros and cons of performing fusion at this level. The primary motivation of this work is to demonstrate the viability of such a fusion and to underscore the importance of pursuing further research in this direction.
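Scenario (i) above, fusion of PCA and LDA coefficients at the feature level, essentially amounts to extracting both coefficient sets from the same face vectors, normalising them separately and concatenating them. The scikit-learn sketch below illustrates this under our own assumptions about normalisation and dimensionality; it is not the authors' exact procedure.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.preprocessing import MinMaxScaler

def feature_level_fusion(X, y, n_pca=20):
    """Extract PCA and LDA coefficients from the same face vectors,
    normalise each feature set separately, then concatenate them."""
    pca_feats = PCA(n_components=n_pca).fit_transform(X)
    lda_feats = LDA().fit_transform(X, y)          # at most n_classes - 1 dims
    pca_feats = MinMaxScaler().fit_transform(pca_feats)
    lda_feats = MinMaxScaler().fit_transform(lda_feats)
    return np.hstack([pca_feats, lda_feats])

rng = np.random.default_rng(9)
X = rng.normal(size=(120, 300))                    # 120 face vectors
y = np.repeat(np.arange(10), 12)                   # 10 identities
fused = feature_level_fusion(X, y)
print(fused.shape)                                 # (120, 20 + 9)
```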
Method of constructing a microwave antenna
NASA Technical Reports Server (NTRS)
Ngo, Phong (Inventor); Arndt, G. Dickey (Inventor); Carl, James (Inventor)
2003-01-01
A method, simulation, and apparatus are provided that are highly suitable for treatment of benign prostatic hyperplasia (BPH). A catheter is disclosed that includes a small diameter disk loaded monopole antenna surrounded by fusion material having a high heat of fusion and a melting point preferably at or near body temperature. Microwaves from the antenna heat prostatic tissue to promote necrosing of the prostatic tissue that relieves the pressure of the prostatic tissue against the urethra as the body reabsorbs the necrosed or dead tissue. The fusion material keeps the urethra cool by means of the heat of fusion of the fusion material. This prevents damage to the urethra while the prostatic tissue is necrosed. A computer simulation is provided that can be used to predict the resulting temperature profile produced in the prostatic tissue. By changing the various control features of the catheter and method of applying microwave energy a temperature profile can be predicted and produced that is similar to the temperature profile desired for the particular patient.
Method of Constructing a Microwave Antenna
NASA Technical Reports Server (NTRS)
Arndt, G. Dickey (Inventor); Carl, James (Inventor); Ngo, Phong (Inventor)
2003-01-01
A method, simulation, and apparatus are provided that are highly suitable for treatment of benign prostatic hyperplasia (BPH). A catheter is disclosed that includes a small diameter disk loaded monopole antenna surrounded by fusion material having a high heat of fusion and a melting point preferably at or near body temperature. Microwaves from the antenna heat prostatic tissue to promote necrosing of the prostatic tissue that relieves the pressure of the prostatic tissue against the urethra as the body reabsorbs the necrosed or dead tissue. The fusion material keeps the urethra cool by means of the heat of fusion of the fusion material. This prevents damage to the urethra while the prostatic tissue is necrosed. A computer simulation is provided that can be used to predict the resulting temperature profile produced in the prostatic tissue. By changing the various control features of the catheter and method of applying microwave energy a temperature profile can be predicted and produced that is similar to the temperature profile desired for the particular patient.
Method for selective thermal ablation
NASA Technical Reports Server (NTRS)
Ngo, Phong (Inventor); Arndt, G. Dickey (Inventor); Raffoul, George W. (Inventor); Carl, James (Inventor)
2003-01-01
A method, simulation, and apparatus are provided that are highly suitable for treatment of benign prostatic hyperplasia (BPH). A catheter is disclosed that includes a small diameter disk loaded monopole antenna surrounded by fusion material having a high heat of fusion and a melting point preferably at or near body temperature. Microwaves from the antenna heat prostatic tissue to promote necrosing of the prostatic tissue that relieves the pressure of the prostatic tissue against the urethra as the body reabsorbs the necrosed or dead tissue. The fusion material keeps the urethra cool by means of the heat of fusion of the fusion material. This prevents damage to the urethra while the prostatic tissue is necrosed. A computer simulation is provided that can be used to predict the resulting temperature profile produced in the prostatic tissue. By changing the various control features of the catheter and method of applying microwave energy a temperature profile can be predicted and produced that is similar to the temperature profile desired for the particular patient.
Method for Selective Thermal Ablation
NASA Technical Reports Server (NTRS)
Arndt, G. Dickey (Inventor); Carl, James (Inventor); Ngo, Phong (Inventor); Raffoul, George W. (Inventor)
2003-01-01
A method, simulation, and apparatus are provided that are highly suitable for treatment of benign prostatic hyperplasia (BPH). A catheter is disclosed that includes a small diameter disk loaded monopole antenna surrounded by fusion material having a high heat of fusion and a melting point preferably at or near body temperature. Microwaves from the antenna heat prostatic tissue to promote necrosing of the prostatic tissue that relieves the pressure of the prostatic tissue against the urethra as the body reabsorbs the necrosed or dead tissue. The fusion material keeps the urethra cool by means of the heat of fusion of the fusion material. This prevents damage to the urethra while the prostatic tissue is necrosed. A computer simulation is provided that can be used to predict the resulting temperature profile produced in the prostatic tissue. By changing the various control features of the catheter and method of applying microwave energy a temperature profile can be predicted and produced that is similar to the temperature profile desired for the particular patient.
Transcatheter Microwave Antenna
NASA Technical Reports Server (NTRS)
Arndt, Dickey G. (Inventor); Carl, James R. (Inventor); Ngo, Phong (Inventor); Raffoul, George W. (Inventor)
2001-01-01
A method, simulation, and apparatus are provided that are highly suitable for treatment of benign prostatic hyperplasia (BPH). A catheter is disclosed that includes a small diameter disk loaded monopole antenna surrounded by fusion material having a high heat of fusion and a melting point preferably at or near body temperature. Microwaves from the antenna heat prostatic tissue to promote necrosing of the prostatic tissue that relieves the pressure of the prostatic tissue against the urethra as the body reabsorbs the necrosed or dead tissue. The fusion material keeps the urethra cool by means of the heat of fusion of the fusion material. This prevents damage to the urethra while the prostatic tissue is necrosed. A computer simulation is provided that can be used to predict the resulting temperature profile produced in the prostatic tissue. By changing the various control features of the catheter and method of applying microwave energy a temperature profile can be predicted and produced that is similar to the temperature profile desired for the particular patient.
Heideklang, René; Shokouhi, Parisa
2016-01-01
This article focuses on the fusion of flaw indications from multi-sensor nondestructive materials testing. Because each testing method makes use of a different physical principle, a multi-method approach has the potential of effectively differentiating actual defect indications from the many false alarms, thus enhancing detection reliability. In this study, we propose a new technique for aggregating scattered two- or three-dimensional sensory data. Using a density-based approach, the proposed method explicitly addresses localization uncertainties such as registration errors. This feature marks one of the major advantages of this approach over pixel-based image fusion techniques. We provide guidelines on how to set all the key parameters and demonstrate the technique’s robustness. Finally, we apply our fusion approach to experimental data and demonstrate its capability to locate small defects by substantially reducing false alarms under conditions where no single-sensor method is adequate. PMID:26784200
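A minimal sketch of the density-based aggregation idea is given below. It assumes hypothetical 2-D indication coordinates already registered to a common frame; the kernel, bandwidth, threshold, and function name are illustrative assumptions, not the published algorithm or its parameter guidelines.

```python
# Hedged illustration: scattered 2-D flaw indications from two NDT methods are
# pooled, a kernel density estimate absorbs small registration offsets, and
# high-density regions are kept as fused detections.
import numpy as np
from scipy.stats import gaussian_kde

def fuse_indications(points_a, points_b, grid_step=1.0, threshold=0.5):
    """points_a, points_b: (N, 2) arrays of indication coordinates in one frame."""
    pooled = np.vstack([points_a, points_b]).T        # shape (2, N_total) for the KDE
    kde = gaussian_kde(pooled)                        # default (Scott) bandwidth
    xs = np.arange(pooled[0].min() - 5, pooled[0].max() + 5, grid_step)
    ys = np.arange(pooled[1].min() - 5, pooled[1].max() + 5, grid_step)
    gx, gy = np.meshgrid(xs, ys)
    density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
    density /= density.max()                          # normalize to [0, 1]
    return gx, gy, density > threshold                # boolean map of fused detections
```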
ALLFlight: detection of moving objects in IR and ladar images
NASA Astrophysics Data System (ADS)
Doehler, H.-U.; Peinecke, Niklas; Lueken, Thomas; Schmerwitz, Sven
2013-05-01
Supporting a helicopter pilot during landing and takeoff in a degraded visual environment (DVE) is one of the challenges within DLR's project ALLFlight (Assisted Low Level Flight and Landing on Unprepared Landing Sites). Different types of sensors (TV, Infrared, mmW radar and laser radar) are mounted onto DLR's research helicopter FHS (flying helicopter simulator) for gathering different sensor data of the surrounding world. A high-performance computer cluster architecture acquires and fuses all the information to get one single comprehensive description of the outside situation. While both TV and IR cameras deliver images with frame rates of 25 Hz or 30 Hz, Ladar and mmW radar provide georeferenced sensor data at only 2 Hz or even less. Therefore, it takes several seconds to detect or even track potential moving obstacle candidates in mmW or Ladar sequences. Especially if the helicopter is flying at higher speed, it is very important to minimize the detection time of obstacles in order to initiate re-planning of the helicopter's mission in a timely manner. Applying feature extraction algorithms to IR images, in combination with data fusion algorithms for the extracted features and Ladar data, can decrease the detection time appreciably. Based on real data from flight tests, the paper describes the applied feature extraction methods for moving object detection, as well as data fusion techniques for combining features from TV/IR and Ladar data.
NASA Astrophysics Data System (ADS)
Aghaei, Faranak; Tan, Maxine; Hollingsworth, Alan B.; Zheng, Bin; Cheng, Samuel
2016-03-01
Dynamic contrast-enhanced breast magnetic resonance imaging (DCE-MRI) has been used increasingly in breast cancer diagnosis and assessment of cancer treatment efficacy. In this study, we applied a computer-aided detection (CAD) scheme to automatically segment breast regions depicted on MR images and used the kinetic image features computed from the global breast MR images acquired before neoadjuvant chemotherapy to build a new quantitative model to predict the response of breast cancer patients to the chemotherapy. To assess the performance and robustness of this new prediction model, an image dataset involving breast MR images acquired from 151 cancer patients before undergoing neoadjuvant chemotherapy was retrospectively assembled and used. Among them, 63 patients had a "complete response" (CR) to chemotherapy, in which the enhanced contrast levels inside the tumor volume (pre-treatment) were reduced to the level of the normal enhanced background parenchymal tissues (post-treatment), while 88 patients had a "partial response" (PR), in which high contrast enhancement remained in the tumor regions after treatment. We performed studies to analyze the correlation among the 22 global kinetic image features and then selected a set of 4 optimal features. Applying an artificial neural network trained with the fusion of these 4 kinetic image features, the prediction model yielded an area under the ROC curve (AUC) of 0.83+/-0.04. This study demonstrated that, by avoiding tumor segmentation, which is often difficult and unreliable, fusion of kinetic image features computed from global breast MR images can still generate a useful clinical marker for predicting the efficacy of chemotherapy.
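As an illustrative sketch only, the following shows the general shape of training a small neural network on four kinetic features and scoring it with cross-validated ROC AUC. The arrays are synthetic placeholders, not the study's features, patients, or results.

```python
# Hedged sketch: a small ANN trained on four selected kinetic features to
# separate complete from partial responders, evaluated with ROC AUC.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(151, 4))        # 151 patients x 4 selected kinetic features (synthetic)
y = rng.integers(0, 2, size=151)     # 1 = complete response, 0 = partial response (synthetic)

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```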
NASA Astrophysics Data System (ADS)
Ma, Dan; Liu, Jun; Chen, Kai; Li, Huali; Liu, Ping; Chen, Huijuan; Qian, Jing
2016-04-01
In remote sensing fusion, the spatial details of a panchromatic (PAN) image and the spectrum information of multispectral (MS) images will be transferred into fused images according to the characteristics of the human visual system. Thus, a remote sensing image fusion quality assessment called feature-based fourth-order correlation coefficient (FFOCC) is proposed. FFOCC is based on the feature-based coefficient concept. Spatial features related to spatial details of the PAN image and spectral features related to the spectrum information of MS images are first extracted from the fused image. Then, the fourth-order correlation coefficient between the spatial and spectral features is calculated and treated as the assessment result. FFOCC was then compared with existing, widely used indices, such as the Erreur Relative Globale Adimensionnelle de Synthese and the quality-with-no-reference index. Results of the fusion and distortion experiments indicate that the FFOCC is consistent with subjective evaluation. FFOCC significantly outperforms the other indices in evaluating fusion images that are produced by different fusion methods and that are distorted in spatial and spectral features by blurring, adding noise, and changing intensity. All the findings indicate that the proposed method is an objective and effective quality assessment for remote sensing image fusion.
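The exact formulation of FFOCC is not reproduced above, so the following is only a hedged sketch of one common fourth-order statistic between a spatial feature map and a spectral feature map, normalised by second-order moments; the function name and normalisation are assumptions.

```python
# Hedged sketch of a fourth-order correlation coefficient between two feature
# maps; this is a generic fourth-order statistic, not necessarily the FFOCC
# definition used in the paper.
import numpy as np

def fourth_order_cc(spatial, spectral):
    a = spatial.astype(float) - spatial.mean()
    b = spectral.astype(float) - spectral.mean()
    num = np.mean((a ** 2) * (b ** 2))          # joint fourth-order moment
    den = np.mean(a ** 2) * np.mean(b ** 2)     # product of second-order moments
    return num / den if den > 0 else 0.0
```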
NASA Astrophysics Data System (ADS)
Guo, Tian; Xu, Zili
2018-03-01
Measurement noise is inevitable in practice; thus, it is difficult to identify defects, cracks or damage in a structure while suppressing noise simultaneously. In this work, a novel method is introduced to detect multiple damage in noisy environments. Based on multi-scale space analysis for discrete signals, a method for extracting damage characteristics from the measured displacement mode shape is illustrated. Moreover, the proposed method incorporates a data fusion algorithm to further eliminate measurement noise-based interference. The effectiveness of the method is verified by numerical and experimental methods applied to different structural types. The results demonstrate that there are two advantages to the proposed method. First, damage features are extracted by the difference of the multi-scale representation; this step is taken such that the interference of noise amplification can be avoided. Second, a data fusion technique applied to the proposed method provides a global decision, which retains the damage features while maximally eliminating the uncertainty. Monte Carlo simulations are utilized to validate that the proposed method has a higher accuracy in damage detection.
Wu, Guorong; Kim, Minjeong; Sanroma, Gerard; Wang, Qian; Munsell, Brent C.; Shen, Dinggang
2014-01-01
Multi-atlas patch-based label fusion methods have been successfully used to improve segmentation accuracy in many important medical image analysis applications. In general, to achieve label fusion a single target image is first registered to several atlas images; after registration, a label is assigned to each target point in the target image by determining the similarity between the underlying target image patch (centered at the target point) and the aligned image patch in each atlas image. To achieve the highest level of accuracy during the label fusion process, it is critical that the chosen patch similarity measurement accurately captures the tissue/shape appearance of the anatomical structure. One major limitation of existing state-of-the-art label fusion methods is that they often apply a fixed-size image patch throughout the entire label fusion procedure. Doing so may severely affect the fidelity of the patch similarity measurement, which in turn may not adequately capture complex tissue appearance patterns expressed by the anatomical structure. To address this limitation, we advance the state of the art by adding three new label fusion contributions: First, each image patch is now characterized by a multi-scale feature representation that encodes both local and semi-local image information. Doing so will increase the accuracy of the patch-based similarity measurement. Second, to limit the possibility of the patch-based similarity measurement being wrongly guided by the presence of multiple anatomical structures in the same image patch, each atlas image patch is further partitioned into a set of label-specific partial image patches according to the existing labels. Since image information has now been semantically divided into different patterns, these new label-specific atlas patches make the label fusion process more specific and flexible. Lastly, in order to correct target points that are mislabeled during label fusion, a hierarchical approach is used to improve the label fusion results. In particular, a coarse-to-fine iterative label fusion approach is used that gradually reduces the patch size. To evaluate the accuracy of our label fusion approach, the proposed method was used to segment the hippocampus in the ADNI dataset and 7.0 tesla MR images, sub-cortical regions in the LONI LBPA40 dataset, mid-brain regions in the SATA dataset from the MICCAI 2013 segmentation challenge, and a set of key internal gray matter structures in the IXI dataset. In all experiments, the segmentation results of the proposed hierarchical label fusion method with multi-scale feature representations and label-specific atlas patches are more accurate than several well-known state-of-the-art label fusion methods. PMID:25463474
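A single-voxel illustration of the underlying patch-based label fusion vote is sketched below; the multi-scale representation, label-specific partial patches, and coarse-to-fine iterations described above are omitted, and all names are illustrative.

```python
# Minimal patch-based label fusion for one target point: a similarity-weighted
# vote over aligned atlas patches, with a Gaussian kernel on patch distance.
import numpy as np

def fuse_label(target_patch, atlas_patches, atlas_labels, sigma=0.1):
    """atlas_patches: arrays shaped like target_patch; atlas_labels: label per atlas."""
    votes = {}
    for patch, label in zip(atlas_patches, atlas_labels):
        dist = np.mean((target_patch - patch) ** 2)       # mean squared patch difference
        weight = np.exp(-dist / (2 * sigma ** 2))         # Gaussian patch similarity
        votes[label] = votes.get(label, 0.0) + weight
    return max(votes, key=votes.get)                      # label with the highest weighted vote
```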
Age Estimation Based on Children's Voice: A Fuzzy-Based Decision Fusion Strategy
Ting, Hua-Nong
2014-01-01
Automatic estimation of a speaker's age is a challenging research topic in the area of speech analysis. In this paper, a novel approach to estimate a speaker's age is presented. The method features a “divide and conquer” strategy wherein the speech data are divided into six groups based on the vowel classes. There are two reasons behind this strategy. First, reduction in the complicated distribution of the processing data improves the classifier's learning performance. Second, different vowel classes contain complementary information for age estimation. Mel-frequency cepstral coefficients are computed for each group and single layer feed-forward neural networks based on self-adaptive extreme learning machine are applied to the features to make a primary decision. Subsequently, fuzzy data fusion is employed to provide an overall decision by aggregating the classifier's outputs. The results are then compared with a number of state-of-the-art age estimation methods. Experiments conducted based on six age groups including children aged between 7 and 12 years revealed that fuzzy fusion of the classifier's outputs resulted in considerable improvement of up to 53.33% in age estimation accuracy. Moreover, the fuzzy fusion of decisions aggregated the complementary information of a speaker's age from various speech sources. PMID:25006595
Determination of feature generation methods for PTZ camera object tracking
NASA Astrophysics Data System (ADS)
Doyle, Daniel D.; Black, Jonathan T.
2012-06-01
Object detection and tracking using computer vision (CV) techniques have been widely applied to sensor fusion applications. Many papers continue to be written that speed up performance and increase learning of artificially intelligent systems through improved algorithms, workload distribution, and information fusion. Military application of real-time tracking systems is becoming more and more complex, with an ever-increasing need for fusion and CV techniques to actively track and control dynamic systems. Examples include the use of metrology systems for tracking and measuring micro air vehicles (MAVs) and autonomous navigation systems for controlling MAVs. This paper seeks to contribute to the determination of select tracking algorithms that best track a moving object using a pan/tilt/zoom (PTZ) camera, applicable to both of the examples presented. The selected feature generation algorithms compared in this paper are the trained Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), the Mixture of Gaussians (MoG) background subtraction method, the Lucas-Kanade optical flow method (2000), and the Farneback optical flow method (2003). The matching algorithm used in this paper for the trained feature generation algorithms is the Fast Library for Approximate Nearest Neighbors (FLANN). The BSD-licensed OpenCV library is used extensively to demonstrate the viability of each algorithm and its performance. Initial testing is performed on a sequence of images using a stationary camera. Further testing is performed on a sequence of images such that the PTZ camera is moving in order to capture the moving object. Comparisons are made based upon accuracy, speed, and memory.
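A small sketch of the trained-feature path described above (SIFT keypoints matched with FLANN and filtered by Lowe's ratio test) is shown below, assuming OpenCV 4.4 or later where SIFT is available in the main module; image loading, tracking logic, and the PTZ control loop are omitted.

```python
# Hedged sketch: SIFT keypoints from an object template are matched into a
# frame with a FLANN KD-tree matcher, and distinctive matches are kept.
import cv2

def match_template(template_gray, frame_gray, ratio=0.7):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(template_gray, None)
    kp2, des2 = sift.detectAndCompute(frame_gray, None)
    flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
    matches = flann.knnMatch(des1, des2, k=2)
    good = []
    for pair in matches:
        # Lowe's ratio test keeps only distinctive matches.
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return [kp2[m.trainIdx].pt for m in good]      # matched locations in the frame
```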
PET and MRI image fusion based on combination of 2-D Hilbert transform and IHS method.
Haddadpour, Mozhdeh; Daneshvar, Sabalan; Seyedarabi, Hadi
2017-08-01
Medical image fusion combines two or more medical images, such as a magnetic resonance image (MRI) and a positron emission tomography (PET) image, and maps them to a single fused image. The purpose of our study is to assist physicians in diagnosing and treating disease in as little time as possible. We used an MRI and a PET image as inputs and fused them based on a combination of the two-dimensional Hilbert transform (2-D HT) and the intensity-hue-saturation (IHS) method. The evaluation metrics that we apply are the Discrepancy (Dk) for assessing spectral features, the Average Gradient (AGk) for evaluating spatial features, and the Overall Performance (O.P) for verifying the suitability of the proposed method. Simulation and numerical results demonstrate the desired performance of the proposed method. Since the main purpose of medical image fusion is preserving both the spatial and spectral features of the input images, the numerical results of these evaluation metrics, together with the simulation results, indicate that the proposed method preserves both the spatial and spectral features of the input images.
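Common forms of two of the metrics mentioned above are sketched here: average gradient as a spatial-detail measure and discrepancy as a mean absolute spectral deviation of a fused band from its source band. The paper's exact normalisations may differ.

```python
# Hedged sketch of two common fusion-quality metrics; definitions are generic,
# not necessarily identical to those used in the paper.
import numpy as np

def average_gradient(img):
    gx, gy = np.gradient(img.astype(float))
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))    # spatial detail measure

def discrepancy(fused_band, source_band):
    return np.mean(np.abs(fused_band.astype(float) - source_band.astype(float)))
```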
Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification.
Rajagopal, Gayathri; Palaniswamy, Ramamoorthy
2015-01-01
This research proposes a multimodal multifeature biometric system for human recognition using two traits, that is, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal and multifeature biometric system using feature level fusion to achieve better performance. The main aim of the proposed system is to increase the recognition accuracy using feature level fusion. The features at the feature level fusion are raw biometric data which contain rich information when compared to decision and matching score level fusion. Hence information fused at the feature level is expected to obtain improved recognition accuracy. However, information fused at the feature level suffers from the curse of dimensionality; here PCA (principal component analysis) is used to reduce the dimensionality of the feature sets as they are high dimensional. The proposed multimodal results were compared with other multimodal and monomodal approaches. Among these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm is tested using a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database.
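A compact sketch of the feature-level fusion pipeline (concatenation, PCA reduction, KNN classification) is given below with assumed placeholder arrays standing in for palmprint and iris feature vectors.

```python
# Hedged sketch: feature-level fusion by concatenation, PCA dimensionality
# reduction, and KNN classification. All data here are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
palm_features = rng.normal(size=(200, 128))     # synthetic palmprint feature vectors
iris_features = rng.normal(size=(200, 128))     # synthetic iris feature vectors
labels = np.repeat(np.arange(40), 5)            # 40 subjects x 5 samples each (synthetic)

fused = np.hstack([palm_features, iris_features])          # feature-level fusion
clf = make_pipeline(PCA(n_components=50), KNeighborsClassifier(n_neighbors=3))
print(cross_val_score(clf, fused, labels, cv=5).mean())    # recognition accuracy estimate
```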
Information recovery through image sequence fusion under wavelet transformation
NASA Astrophysics Data System (ADS)
He, Qiang
2010-04-01
Remote sensing is widely applied to provide information about areas with limited ground access, with applications such as assessing the destruction from natural disasters and planning relief and recovery operations. However, the collection of aerial digital images is constrained by bad weather, atmospheric conditions, and unstable cameras or camcorders. Therefore, how to recover information from low-quality remote sensing images and how to enhance image quality become very important for many visual understanding tasks, such as feature detection, object segmentation, and object recognition. The quality of remote sensing imagery can be improved through meaningful combination of images captured from different sensors or under different conditions through information fusion. Here we particularly address information fusion for remote sensing image sequences under multi-resolution analysis. Image fusion recovers complete information by integrating multiple images captured of the same scene. Through image fusion, a new image that has higher resolution or is more perceptible to humans and machines is created from a time series of low-quality images, based on image registration between different video frames.
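A generic wavelet-domain fusion of two registered frames is sketched below (average rule for approximation coefficients, maximum-absolute rule for detail coefficients); it assumes the PyWavelets package and is not the paper's exact scheme.

```python
# Hedged sketch: two registered frames are decomposed with a 2-D discrete
# wavelet transform, detail coefficients are fused by maximum absolute value,
# approximation coefficients by averaging, and the result is reconstructed.
import numpy as np
import pywt

def wavelet_fuse(img1, img2, wavelet="db2", level=2):
    c1 = pywt.wavedec2(img1.astype(float), wavelet, level=level)
    c2 = pywt.wavedec2(img2.astype(float), wavelet, level=level)
    fused = [(c1[0] + c2[0]) / 2.0]                       # average the approximations
    for d1, d2 in zip(c1[1:], c2[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(d1, d2)))      # max-abs rule for details
    return pywt.waverec2(fused, wavelet)
```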
E-Nose Vapor Identification Based on Dempster-Shafer Fusion of Multiple Classifiers
NASA Technical Reports Server (NTRS)
Li, Winston; Leung, Henry; Kwan, Chiman; Linnell, Bruce R.
2005-01-01
Electronic nose (e-nose) vapor identification is an efficient approach to monitor air contaminants in space stations and shuttles in order to ensure the health and safety of astronauts. Data preprocessing (measurement denoising and feature extraction) and pattern classification are important components of an e-nose system. In this paper, a wavelet-based denoising method is applied to filter the noisy sensor measurements. Transient-state features are then extracted from the denoised sensor measurements, and are used to train multiple classifiers such as multi-layer perceptrons (MLP), support vector machines (SVM), k-nearest neighbor (KNN), and Parzen classifiers. The Dempster-Shafer (DS) technique is used at the end to fuse the results of the multiple classifiers to get the final classification. Experimental analysis based on real vapor data shows that the wavelet denoising method can remove both random noise and outliers successfully, and the classification rate can be improved by using classifier fusion.
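A minimal sketch of Dempster's rule of combination for two classifiers is shown below, treating their normalized outputs as basic probability assignments over the same set of singleton vapor classes (compound hypotheses are omitted for simplicity).

```python
# Hedged sketch of Dempster's rule for two sources with singleton-only masses.
def dempster_combine(m1, m2):
    """m1, m2: dicts mapping class label -> mass, each summing to 1."""
    classes = set(m1) | set(m2)
    conflict = sum(m1.get(a, 0) * m2.get(b, 0)
                   for a in m1 for b in m2 if a != b)     # mass assigned to disagreement
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    return {c: m1.get(c, 0) * m2.get(c, 0) / (1.0 - conflict) for c in classes}

# Example: two classifiers disagree mildly on three vapor classes.
print(dempster_combine({"A": 0.7, "B": 0.2, "C": 0.1},
                       {"A": 0.6, "B": 0.3, "C": 0.1}))
```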
Liu, Xingbin; Mei, Wenbo; Du, Huiqian
2018-02-13
In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed, built on a multi-scale joint decomposition framework (MJDF) and a shearing filter (SF). The MJDF, constructed with a gradient minimization smoothing filter (GMSF) and a Gaussian low-pass filter (GLF), is used to decompose source images into low-pass layers, edge layers, and detail layers at multiple scales. In order to highlight the detail information in the fused image, the edge layer and the detail layer at each scale are combined with weights into a detail-enhanced layer. As a directional filter is effective in capturing salient information, SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. A visual-saliency-map-based fusion rule is designed for fusing the low-pass layers, and the sum of standard deviations is used as the activity-level measurement for directional coefficient fusion. The final fusion result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method, with shift-invariance, directional selectivity, and a detail-enhanced property, is efficient in preserving and enhancing the detail information of multimodality medical images.
A Feature Fusion Based Forecasting Model for Financial Time Series
Guo, Zhiqiang; Wang, Huaiqing; Liu, Quan; Yang, Jie
2014-01-01
Predicting the stock market has become an increasingly interesting research area for both researchers and investors, and many prediction models have been proposed. In these models, feature selection techniques are used to pre-process the raw data and remove noise. In this paper, a prediction model is constructed to forecast stock market behavior with the aid of independent component analysis, canonical correlation analysis, and a support vector machine. First, two types of features are extracted from the historical closing prices and 39 technical variables obtained by independent component analysis. Second, a canonical correlation analysis method is utilized to combine the two types of features and extract intrinsic features to improve the performance of the prediction model. Finally, a support vector machine is applied to forecast the next day's closing price. The proposed model is applied to the Shanghai stock market index and the Dow Jones index, and experimental results show that the proposed model performs better in prediction than the other two similar models. PMID:24971455
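The pipeline described above can be sketched as follows with assumed synthetic data: ICA features from price windows and technical variables, CCA to combine the two feature sets, and a support vector machine regressing the next day's closing price.

```python
# Hedged sketch of an ICA + CCA + SVM forecasting pipeline; arrays and sizes
# below are synthetic placeholders, not the paper's datasets or settings.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVR

rng = np.random.default_rng(2)
prices = rng.normal(size=(500, 30))      # 500 days x 30-day closing-price windows
technical = rng.normal(size=(500, 39))   # 500 days x 39 technical variables
next_close = rng.normal(size=500)        # next-day closing prices (synthetic)

f1 = FastICA(n_components=10, random_state=0).fit_transform(prices)
f2 = FastICA(n_components=10, random_state=0).fit_transform(technical)
z1, z2 = CCA(n_components=5).fit_transform(f1, f2)        # combined intrinsic features
model = SVR(kernel="rbf").fit(np.hstack([z1, z2]), next_close)
print(model.predict(np.hstack([z1, z2]))[:3])             # illustrative in-sample predictions
```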
NASA Astrophysics Data System (ADS)
Mu, Wei; Qi, Jin; Lu, Hong; Schabath, Matthew; Balagurunathan, Yoganand; Tunali, Ilke; Gillies, Robert James
2018-02-01
Purpose: To investigate the ability of using the complementary information provided by the fusion of PET/CT images to predict immunotherapy response in non-small cell lung cancer (NSCLC) patients. Materials and methods: We collected 64 patients diagnosed with primary NSCLC and treated with anti-PD-1 checkpoint blockade. Using the PET/CT images, fused images were created following multiple methodologies, resulting in up to 7 different images for the tumor region. Quantitative image features were extracted from the primary images (PET/CT) and the fused images, comprising 195 features from the primary images and 1235 features from the fusion images. Three clinical characteristics were also analyzed. We then used support vector machine (SVM) classification models to identify discriminant features that predict immunotherapy response at baseline. Results: An SVM built with 87 fusion features and 13 primary PET/CT features achieved an accuracy of 87.5% and an area under the ROC curve (AUROC) of 0.82 on the validation dataset, compared to 78.12% and 0.68 for a model built with 113 original PET/CT features. Conclusion: The fusion features show a better ability to predict immunotherapy response than the individual image features.
Multi-focus image fusion using a guided-filter-based difference image.
Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Yang, Tingwu
2016-03-20
The aim of multi-focus image fusion technology is to integrate different partially focused images into one all-focused image. To realize this goal, a new multi-focus image fusion method based on a guided filter is proposed and an efficient salient feature extraction method is presented in this paper; feature extraction is the main objective of the present work. Based on salient feature extraction, the guided filter is first used to acquire a smoothed image containing the sharpest regions. To obtain the initial fusion map, we compose a mixed focus measure by combining the variance of image intensities and the energy of the image gradient. Then, the initial fusion map is further processed by a morphological filter to obtain a refined fusion map. Lastly, the final fusion map is determined from the refined fusion map and is optimized by a guided filter. Experimental results demonstrate that the proposed method markedly improves fusion performance compared to previous fusion methods and can be competitive with or even outperform state-of-the-art fusion methods in terms of both subjective visual effects and objective quality metrics.
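A simplified sketch of the mixed focus measure idea (local intensity variance plus gradient energy, compared per pixel) is given below; the guided-filter smoothing, morphological refinement, and final map optimization of the full method are omitted.

```python
# Hedged sketch: per-pixel focus measure (local variance + gradient energy)
# selects the sharper source at each pixel to form an initial fused image.
import numpy as np
from scipy.ndimage import uniform_filter

def focus_measure(img, win=9):
    img = img.astype(float)
    local_mean = uniform_filter(img, win)
    variance = uniform_filter(img ** 2, win) - local_mean ** 2   # local intensity variance
    gx, gy = np.gradient(img)
    energy = uniform_filter(gx ** 2 + gy ** 2, win)              # local gradient energy
    return variance + energy

def fuse_multifocus(img_a, img_b):
    mask = focus_measure(img_a) >= focus_measure(img_b)          # initial fusion map
    return np.where(mask, img_a, img_b)
```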
NASA Astrophysics Data System (ADS)
Attallah, Bilal; Serir, Amina; Chahir, Youssef; Boudjelal, Abdelwahhab
2017-11-01
Palmprint recognition systems are dependent on feature extraction. A method of feature extraction using higher discrimination information was developed to characterize palmprint images. In this method, two individual feature extraction techniques are applied to a discrete wavelet transform of a palmprint image, and their outputs are fused. The two techniques used in the fusion are the histogram of gradient and the binarized statistical image features. They are then evaluated using an extreme learning machine classifier before feature selection based on principal component analysis. Three palmprint databases, the Hong Kong Polytechnic University (PolyU) Multispectral Palmprint Database, Hong Kong PolyU Palmprint Database II, and the Delhi Touchless (IIDT) Palmprint Database, are used in this study. The study shows that our method effectively identifies and verifies palmprints and outperforms other methods based on feature extraction.
Siegele, Bradford; Roberts, Jon; Black, Jennifer O; Rudzinski, Erin; Vargas, Sara O; Galambos, Csaba
2017-03-01
The histologic differential diagnosis of pediatric and adult round cell tumors is vast and includes the recently recognized entity CIC-DUX4 fusion-positive round cell tumor. The diagnosis of CIC-DUX4 tumor can be suggested by light microscopic and immunohistochemical features, but currently, definitive diagnosis requires ancillary genetic testing such as conventional karyotyping, fluorescence in situ hybridization, or molecular methods. We sought to determine whether DUX4 expression would serve as a fusion-specific immunohistochemical marker distinguishing CIC-DUX4 tumor from potential histologic mimics. A cohort of CIC-DUX4 fusion-positive round cell tumors harboring t(4;19)(q35;q13) and t(10;19)(q26;q13) translocations was designed, with additional inclusion of a case with a translocation confirmed to involve the CIC gene without delineation of the partner. Round cell tumors with potentially overlapping histologic features were also collected. Staining with a monoclonal antibody raised against the C-terminus of the DUX4 protein was applied to all cases. DUX4 immunohistochemistry exhibited diffuse, crisp, strong nuclear staining in all CIC-DUX4 fusion-positive round cell tumors (5/5, 100% sensitivity), and exhibited negative staining in nuclei of all of the other tested round cell tumors, including 20 Ewing sarcomas, 1 Ewing-like sarcoma, 11 alveolar rhabdomyosarcomas, 9 embryonal rhabdomyosarcomas, 12 synovial sarcomas, 7 desmoplastic small round cell tumors, 3 malignant rhabdoid tumors, 9 neuroblastomas, and 4 clear cell sarcomas (0/76, 100% specificity). Thus, in our experience, DUX4 immunostaining distinguishes CIC-DUX4 tumors from other round cell mimics. We recommend its use when CIC-DUX4 fusion-positive round cell tumor enters the histologic differential diagnosis.
Obstructive sleep apnea severity estimation: Fusion of speech-based systems.
Ben Or, D; Dafna, E; Tarasiuk, A; Zigel, Y
2016-08-01
Obstructive sleep apnea (OSA) is a common sleep-related breathing disorder. Previous studies associated OSA with anatomical abnormalities of the upper respiratory tract that may be reflected in the acoustic characteristics of speech. We tested the hypothesis that the speech signal carries essential information that can assist in early assessment of OSA severity by estimating apnea-hypopnea index (AHI). 198 men referred to routine polysomnography (PSG) were recorded shortly prior to sleep onset while reading a one-minute speech protocol. The different parts of the speech recordings, i.e., sustained vowels, short-time frames of fluent speech, and the speech recording as a whole, underwent separate analyses, using sustained vowels features, short-term features, and long-term features, respectively. Applying support vector regression and regression trees, these features were used in order to estimate AHI. The fusion of the outputs of the three subsystems resulted in a diagnostic agreement of 67.3% between the speech-estimated AHI and the PSG-determined AHI, and an absolute error rate of 10.8 events/hr. Speech signal analysis may assist in the estimation of AHI, thus allowing the development of a noninvasive tool for OSA screening.
A Novel Mittag-Leffler Kernel Based Hybrid Fault Diagnosis Method for Wheeled Robot Driving System.
Yuan, Xianfeng; Song, Mumin; Zhou, Fengyu; Chen, Zhumin; Li, Yan
2015-01-01
Wheeled robots have been successfully applied in many areas, such as industrial handling vehicles and wheeled service robots. To improve the safety and reliability of wheeled robots, this paper presents a novel hybrid fault diagnosis framework based on a Mittag-Leffler kernel (ML-kernel) support vector machine (SVM) and Dempster-Shafer (D-S) fusion. Using sensor data sampled under different running conditions, the proposed approach initially establishes multiple principal component analysis (PCA) models for fault feature extraction. The fault feature vectors are then applied to train the probabilistic SVM (PSVM) classifiers that arrive at a preliminary fault diagnosis. To improve the accuracy of the preliminary results, a novel ML-kernel based PSVM classifier is proposed in this paper, and the positive definiteness of the ML-kernel is proved as well. The basic probability assignments (BPAs) are defined based on the preliminary fault diagnosis results and their confidence values. Eventually, the final fault diagnosis result is achieved by the fusion of the BPAs. Experimental results show that the proposed framework not only is capable of detecting and identifying the faults in the robot driving system, but also has better performance in stability and diagnosis accuracy compared with the traditional methods.
A Novel Mittag-Leffler Kernel Based Hybrid Fault Diagnosis Method for Wheeled Robot Driving System
Yuan, Xianfeng; Song, Mumin; Chen, Zhumin; Li, Yan
2015-01-01
Wheeled robots have been successfully applied in many areas, such as industrial handling vehicles and wheeled service robots. To improve the safety and reliability of wheeled robots, this paper presents a novel hybrid fault diagnosis framework based on a Mittag-Leffler kernel (ML-kernel) support vector machine (SVM) and Dempster-Shafer (D-S) fusion. Using sensor data sampled under different running conditions, the proposed approach initially establishes multiple principal component analysis (PCA) models for fault feature extraction. The fault feature vectors are then applied to train the probabilistic SVM (PSVM) classifiers that arrive at a preliminary fault diagnosis. To improve the accuracy of the preliminary results, a novel ML-kernel based PSVM classifier is proposed in this paper, and the positive definiteness of the ML-kernel is proved as well. The basic probability assignments (BPAs) are defined based on the preliminary fault diagnosis results and their confidence values. Eventually, the final fault diagnosis result is achieved by the fusion of the BPAs. Experimental results show that the proposed framework not only is capable of detecting and identifying the faults in the robot driving system, but also has better performance in stability and diagnosis accuracy compared with the traditional methods. PMID:26229526
NASA Astrophysics Data System (ADS)
Sirmacek, B.; Lindenbergh, R. C.; Menenti, M.
2013-10-01
Fusion of 3D airborne laser (LIDAR) data and terrestrial optical imagery can be applied in 3D urban modeling and model updating. The most challenging aspect of the fusion procedure is registering the terrestrial optical images to the LIDAR point clouds. In this article, we propose an approach for registering these two types of data from different sensor sources. As input we use iPhone camera images, which are taken in front of the urban structure of interest by the application user, and high-resolution LIDAR point clouds acquired by an airborne laser sensor. After finding the photo capture position and orientation from the iPhone photograph metafile, we automatically select the area of interest in the point cloud and transform it into a range image whose grayscale intensity levels encode the distance from the image acquisition position. We use local features to register the iPhone image to the generated range image. In this article, we apply a registration process based on local feature extraction and graph matching. Finally, the registration result is used for facade texture mapping onto the 3D building surface mesh generated from the LIDAR point cloud. Our experimental results indicate the possible usage of the proposed algorithm framework for 3D urban map updating and enhancement purposes.
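A hypothetical sketch of the range-image step is shown below: LIDAR points are expressed relative to an assumed capture position, projected with a simple pinhole model, and shaded by distance so that local features can be matched against the photograph. The projection parameters and function name are illustrative assumptions.

```python
# Hedged sketch: project a LIDAR point cloud into a grayscale range image seen
# from an assumed camera position, keeping the nearest point per pixel.
import numpy as np

def point_cloud_to_range_image(points, cam_pos, width=640, height=480, f=500.0):
    """points: (N, 3) LIDAR coordinates; cam_pos: (3,) assumed capture position."""
    rel = points - cam_pos
    rel = rel[rel[:, 2] > 0]                              # keep points in front of the camera
    u = (f * rel[:, 0] / rel[:, 2] + width / 2).astype(int)
    v = (f * rel[:, 1] / rel[:, 2] + height / 2).astype(int)
    dist = np.linalg.norm(rel, axis=1)
    img = np.full((height, width), np.inf)
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    np.minimum.at(img, (v[valid], u[valid]), dist[valid]) # simple z-buffer per pixel
    img[np.isinf(img)] = dist.max()
    # Nearer points appear brighter in the resulting grayscale range image.
    return 255 * (1 - (img - img.min()) / (img.max() - img.min() + 1e-9))
```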
Predicting the Valence of a Scene from Observers’ Eye Movements
R.-Tavakoli, Hamed; Atyabi, Adham; Rantanen, Antti; Laukka, Seppo J.; Nefti-Meziani, Samia; Heikkilä, Janne
2015-01-01
Multimedia analysis benefits from understanding the emotional content of a scene in a variety of tasks such as video genre classification and content-based image retrieval. Recently, there has been an increasing interest in applying human bio-signals, particularly eye movements, to recognize the emotional gist of a scene such as its valence. In order to determine the emotional category of images using eye movements, the existing methods often learn a classifier using several features that are extracted from eye movements. Although it has been shown that eye movement is potentially useful for recognition of scene valence, the contribution of each feature is not well studied. To address the issue, we study the contribution of features extracted from eye movements in the classification of images into pleasant, neutral, and unpleasant categories. We assess ten features and their fusion. The features are histogram of saccade orientation, histogram of saccade slope, histogram of saccade length, histogram of saccade duration, histogram of saccade velocity, histogram of fixation duration, fixation histogram, top-ten salient coordinates, and saliency map. We utilize a machine learning approach to analyze the performance of the features by learning a support vector machine and exploiting various feature fusion schemes. The experiments reveal that ‘saliency map’, ‘fixation histogram’, ‘histogram of fixation duration’, and ‘histogram of saccade slope’ are the most contributing features. The selected features signify the influence of fixation information and the angular behavior of eye movements in the recognition of the valence of images. PMID:26407322
Wu, Lingfei; Wu, Kesheng; Sim, Alex; ...
2016-06-01
A novel algorithm and implementation for real-time identification and tracking of blob-filaments in fusion reactor data is presented. Similar spatio-temporal features are important in many other applications, for example, ignition kernels in combustion and tumor cells in medical images. This work presents an approach for extracting these features by dividing the overall task into three steps: local identification of feature cells, grouping feature cells into extended features, and tracking the movement of features through overlap in space. Through our extensive work on parallelization, we demonstrate that this approach can effectively make use of a large number of compute nodes to detect and track blob-filaments in real time in fusion plasma. Here, on a set of 30 GB of fusion simulation data, we observed linear speedup on 1024 processes and completed blob detection in less than three milliseconds using Edison, a Cray XC30 system at NERSC.
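A serial sketch of the three steps for one pair of frames is given below (thresholding to identify feature cells, connected-component labeling to group them into blobs, and overlap-based linking across frames); the parallel decomposition used on the Cray system is not shown, and the threshold is an assumed input.

```python
# Hedged, serial illustration of blob identification, grouping, and tracking.
import numpy as np
from scipy import ndimage

def detect_blobs(frame, threshold):
    cells = frame > threshold                      # step 1: identify feature cells
    labels, n_blobs = ndimage.label(cells)         # step 2: group cells into blobs
    return labels, n_blobs

def track_by_overlap(labels_prev, labels_curr):
    """Step 3: map each current blob to the previous blob it overlaps most."""
    links = {}
    for blob_id in range(1, labels_curr.max() + 1):
        overlap = labels_prev[labels_curr == blob_id]
        overlap = overlap[overlap > 0]
        links[blob_id] = np.bincount(overlap).argmax() if overlap.size else None
    return links
```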
Fusion of Geophysical Images in the Study of Archaeological Sites
NASA Astrophysics Data System (ADS)
Karamitrou, A. A.; Petrou, M.; Tsokas, G. N.
2011-12-01
This paper presents results from different fusion techniques between geophysical images from different modalities, in order to combine them into one image with higher information content than the two original images independently. The resultant image will be useful for the detection and mapping of buried archaeological relics. The examined archaeological area is situated at the Kampana site (NE Greece), near the ancient theater of the city of Maronia. Archaeological excavations revealed an ancient theater, an aristocratic house, and the temple of the ancient Greek god Dionysus. Numerous ceramic objects found in the broader area indicated the probability of the existence of buried urban structures. In order to accurately locate and map the latter, geophysical measurements were performed with the magnetic method (vertical gradient of the magnetic field) and the electrical method (apparent resistivity). We performed a semi-stochastic pixel-based registration method between the geophysical images in order to fine-register them by correcting their local spatial offsets produced by the use of hand-held devices. After this procedure, we applied three different fusion approaches to the registered images. Image fusion is a relatively new technique that not only allows integration of different information sources, but also takes advantage of the spatial and spectral resolution as well as the orientation characteristics of each image. We have used three different fusion techniques: fusion with mean values, with wavelets enhancing selected frequency bands, and with curvelets giving emphasis to specific bands and angles (according to the expected orientation of the relics). In all three cases the fused images gave significantly better results than either of the original geophysical images separately. The comparison of the results of the three approaches showed that fusion with curvelets, giving emphasis to the features' orientation, seems to give the best fused image. Clear linear and ellipsoidal features corresponding to potential archaeological relics appear in the resultant image.
Analytical method for thermal stress analysis of plasma facing materials
NASA Astrophysics Data System (ADS)
You, J. H.; Bolt, H.
2001-10-01
The thermo-mechanical response of plasma facing materials (PFMs) to heat loads from the fusion plasma is one of the crucial issues in fusion technology. In this work, a fully analytical description of the thermal stress distribution in armour tiles of plasma facing components is presented which is expected to occur under typical high heat flux (HHF) loads. The method of stress superposition is applied considering the temperature gradient and thermal expansion mismatch. Several combinations of PFMs and heat sink metals are analysed and compared. In the framework of the present theoretical model, plastic flow and the effect of residual stress can be quantitatively assessed. Possible failure features are discussed.
Distinguishing obsessive features and worries: the role of thought-action fusion.
Coles, M E; Mennin, D S; Heimberg, R G
2001-08-01
Obsessions are a key feature of obsessive-compulsive disorder (OCD), and chronic worry is the cardinal feature of generalized anxiety disorder (GAD). However, these two cognitive processes are conceptually very similar, and there is a need to determine how they differ. Recent studies have attempted to identify cognitive processes that may be differentially related to obsessive features and worry. In the current study we proposed that (1) obsessive features and worry could be differentiated and that (2) a measure of the cognitive process thought-action fusion would distinguish between obsessive features and worry, being strongly related to obsessive features after controlling for the effects of worry. These hypotheses were supported in a sample of 173 undergraduate students. Thought-action fusion may be a valuable construct in differentiating between obsessive features and worry.
Arthrodesis of the knee following failed arthroplasty.
Van Rensch, P J H; Van de Pol, G J; Goosen, J H M; Wymenga, A B; De Man, F H R
2014-08-01
Primary stability in arthrodesis of the knee can be achieved by external fixation, intramedullary nailing, or plate fixation. Each method has different features and results. We present a practical algorithm for arthrodesis of the knee following a failed (infected) arthroplasty, based on our own results and a literature review. Between 2004 and 2010, patients were included with an indication for arthrodesis after failed (revision) arthroplasty of the knee. Patients were analyzed with respect to indication, fusion method, and bone contact. The end-point was solid fusion. Twenty-six arthrodeses were performed. Eighteen patients were treated because of an infected arthroplasty. In total, ten external fixators, ten intramedullary nails, and six plate fixations were applied; solid fusion was achieved in 3/10, 8/10, and 3/6, respectively. There is no definite answer as to which method is superior for performing an arthrodesis of the knee. Intramedullary nailing achieved the best fusion rates, but was used mostly in cases without, or with cured, infection. Our data and the contemporary literature suggest that external fixation can be abandoned as a standard fusion method, but can be of use following persisting infection. The Ilizarov circular external fixator, however, seems to render high fusion rates. Good patient selection and appropriate individual treatment are the keys to a successful arthrodesis. Based upon these findings, a practical algorithm was developed.
NASA Astrophysics Data System (ADS)
Carpenter, Scott A.; Deveny, Marc E.; Schulze, Norman R.; Gatti, Raymond C.; Peters, Micheal B.
1994-07-01
In this paper, we strive to achieve three goals: (1) to describe a continuous-thrusting space-fusion-propulsion engine called the Mirror Fusion Propulsion System (MFPS), (2) to describe MFPS' ability to accomplish two candidate outer-solar-system (OSS) missions using various levels of advanced technology identified in the laboratory, and (3) to describe some interesting safety features of MFPS that include continuous mission-abort capability, magnetic-field-shielding against solar particle events (SPE), and performance of in-orbit characterization of the target body's natural resources (prior to human landings) using fusion-neutrons, x-rays, and possibly the neutralized thrust beam. The first OSS mission discussed is a mission to the Saturnian system, primarily exploration and resource-characterization driven, with emphasis on minimizing the Earth-to-Saturn and return-trip flight times. The other OSS mission discussed is an economically-driven mission to Uranus, stopping first to perform in-orbit resource characterization of the major moons of Uranus prior to human landing, and then returning to earth with a payload consisting of 3He (removed from the Uranian atmosphere or extracted from the Uranian moons) to be used in a future earth-based fusion-power industry.
Sun, Wei; Zhang, Xiaorui; Peeta, Srinivas; He, Xiaozheng; Li, Yongfu; Zhu, Senlai
2015-01-01
To improve the effectiveness and robustness of fatigue driving recognition, a self-adaptive dynamic recognition model is proposed that incorporates information from multiple sources and involves two sequential levels of fusion, constructed at the feature level and the decision level. Compared with existing models, the proposed model introduces a dynamic basic probability assignment (BPA) to the decision-level fusion such that the weight of each feature source can change dynamically with the real-time fatigue feature measurements. Further, the proposed model can combine the fatigue state at the previous time step in the decision-level fusion to improve the robustness of the fatigue driving recognition. An improved correction strategy of the BPA is also proposed to accommodate the decision conflict caused by external disturbances. Results from field experiments demonstrate that the effectiveness and robustness of the proposed model are better than those of models based on a single fatigue feature and/or single-source information fusion, especially when the most effective fatigue features are used in the proposed model. PMID:26393615
Intelligence, Surveillance, and Reconnaissance Fusion for Coalition Operations
2008-07-01
...classification of the targets of interest. The MMI features extracted in this manner have two properties that provide a sound justification for ... are generalizations of well-known feature extraction methods such as Principal Components Analysis (PCA) and Independent Component Analysis (ICA) ... augment (without degrading performance) a large class of generic fusion processes. Keywords: ontologies, classifications, feature extraction, feature analysis.
Jing, Luyang; Wang, Taiyong; Zhao, Ming; Wang, Peng
2017-01-01
A fault diagnosis approach based on multi-sensor data fusion is a promising tool to deal with complicated damage detection problems of mechanical systems. Nevertheless, this approach suffers from two challenges, which are (1) the feature extraction from various types of sensory data and (2) the selection of a suitable fusion level. It is usually difficult to choose an optimal feature or fusion level for a specific fault diagnosis task, and extensive domain expertise and human labor are also highly required during these selections. To address these two challenges, we propose an adaptive multi-sensor data fusion method based on deep convolutional neural networks (DCNN) for fault diagnosis. The proposed method can learn features from raw data and optimize a combination of different fusion levels adaptively to satisfy the requirements of any fault diagnosis task. The proposed method is tested through a planetary gearbox test rig. Handcrafted features, manually selected fusion levels, single sensory data, and two traditional intelligent models, back-propagation neural networks (BPNN) and a support vector machine (SVM), are used as comparisons in the experiment. The results demonstrate that the proposed method is able to detect the conditions of the planetary gearbox effectively, with the best diagnosis accuracy among all comparative methods in the experiment. PMID:28230767
Double-Sided Single-Pass Submerged Arc Welding for 2205 Duplex Stainless Steel
NASA Astrophysics Data System (ADS)
Luo, Jian; Yuan, Yi; Wang, Xiaoming; Yao, Zongxiang
2013-09-01
Duplex stainless steel (DSS), which combines the characteristics of ferritic and austenitic steels, is widely used. The submerged arc welding (SAW) method is usually applied to join thick plates of DSS. However, an effective welding procedure is needed in order to obtain ideal DSS welds with an appropriate proportion of ferrite (δ) and austenite (γ) in the weld zone, particularly in the melted zone and heat-affected zone. This study evaluated the effectiveness of a high-efficiency double-sided single-pass (DSSP) SAW joining method for thick DSS plates. The effectiveness of the converse welding procedure, the characterization of the weld zone, and the mechanical properties of the welded joint are analyzed. The results show an increasing appearance and continuous distribution of the σ phase in the fusion zone of the leading welded seam. The converse welding procedure promotes σ-phase precipitation in the fusion zone of the leading welded side. The microhardness increases significantly in the center of the leading welded side. A ductile fracture mode is observed in the weld zone. A mixed fracture feature, with a shear lip and tears, appears in the fusion zone near the fusion line. The ductility, plasticity, and microhardness of the joints have a significant relationship with the σ phase and the heat treatment effect influenced by the converse welding step. A practical heat-input control approach for the DSSP method is discussed for SAW of thick DSS plates.
Developing a multimodal biometric authentication system using soft computing methods.
Malcangi, Mario
2015-01-01
Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometrics offers several advantages, mainly in embedded system applications. Hard and soft multi-biometrics, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize its applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision.
Fusion of classifiers for REIS-based detection of suspicious breast lesions
NASA Astrophysics Data System (ADS)
Lederman, Dror; Wang, Xingwei; Zheng, Bin; Sumkin, Jules H.; Tublin, Mitchell; Gur, David
2011-03-01
After developing a multi-probe resonance-frequency electrical impedance spectroscopy (REIS) system aimed at detecting women with breast abnormalities that may indicate a developing breast cancer, we have been conducting a prospective clinical study to explore the feasibility of applying this REIS system to classify younger women (< 50 years old) into two groups of "higher-than-average risk" and "average risk" of having or developing breast cancer. The system comprises one central probe placed in contact with the nipple, and six additional probes uniformly distributed along an outside circle to be placed in contact with six points on the outer breast skin surface. In this preliminary study, we selected an initial set of 174 examinations of participants who had completed REIS examinations and had clinical status verification. Among these, 66 examinations were recommended for biopsy due to findings of a highly suspicious breast lesion ("positives"), and 108 were determined as negative during imaging-based procedures ("negatives"). A set of REIS-based features, extracted using a mirror-matched approach, was computed and fed into five machine learning classifiers. A genetic algorithm was used to select an optimal subset of features for each of the five classifiers. Three fusion rules, namely the sum rule, the weighted sum rule, and the weighted median rule, were used to combine the results of the classifiers. Performance evaluation was performed using a leave-one-case-out cross-validation method. The results indicated that REIS may provide a new technology to identify younger women with higher than average risk of having or developing breast cancer. Furthermore, it was shown that fusion rules, such as the weighted median fusion rule and the weighted sum fusion rule, may improve performance compared with the highest-performing single classifier.
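The three fusion rules named above can be sketched as follows for per-classifier probability scores; the weights are assumed to come from each classifier's validation performance and the values shown are illustrative.

```python
# Hedged sketch of sum, weighted sum, and weighted median fusion of classifier
# scores for the "positive" class; all values below are illustrative.
import numpy as np

def sum_rule(scores):
    return float(np.mean(scores))

def weighted_sum_rule(scores, weights):
    w = np.asarray(weights, dtype=float)
    return float(np.dot(scores, w) / w.sum())

def weighted_median_rule(scores, weights):
    order = np.argsort(scores)
    s, w = np.asarray(scores, dtype=float)[order], np.asarray(weights, dtype=float)[order]
    cum = np.cumsum(w) / w.sum()                        # cumulative normalized weight
    return float(s[np.searchsorted(cum, 0.5)])          # first score reaching half the weight

scores = [0.62, 0.55, 0.71, 0.48, 0.66]                 # five classifier outputs
weights = [1.0, 0.8, 1.2, 0.6, 1.0]
print(sum_rule(scores), weighted_sum_rule(scores, weights),
      weighted_median_rule(scores, weights))
```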
Weighted score-level feature fusion based on Dempster-Shafer evidence theory for action recognition
NASA Astrophysics Data System (ADS)
Zhang, Guoliang; Jia, Songmin; Li, Xiuzhi; Zhang, Xiangyin
2018-01-01
The majority of human action recognition methods use a multifeature fusion strategy to improve classification performance, but the contribution of different features to a specific action has not received enough attention. We present an extendible and universal weighted score-level feature fusion method using the Dempster-Shafer (DS) evidence theory, based on the pipeline of bag-of-visual-words. First, the partially distinctive samples in the training set are selected to construct the validation set. Then, local spatiotemporal features and pose features are extracted from these samples to obtain evidence information. The DS evidence theory and the proposed rule of survival of the fittest are employed to achieve evidence combination and to calculate optimal weight vectors for every feature type belonging to each action class. Finally, the recognition results are deduced via the weighted summation strategy. The performance of the established recognition framework is evaluated on the Penn Action dataset and a subset of the joint-annotated human motion database (sub-JHMDB). The experiment results demonstrate that the proposed feature fusion method can adequately exploit the complementarity among multiple features and improves upon most of the state-of-the-art algorithms on the Penn Action and sub-JHMDB datasets.
Feature-Motivated Simplified Adaptive PCNN-Based Medical Image Fusion Algorithm in NSST Domain.
Ganasala, Padma; Kumar, Vinod
2016-02-01
Multimodality medical image fusion plays a vital role in diagnosis, treatment planning, and follow-up studies of various diseases. It provides a composite image containing critical information of source images required for better localization and definition of different organs and lesions. In the state-of-the-art image fusion methods based on nonsubsampled shearlet transform (NSST) and pulse-coupled neural network (PCNN), authors have used normalized coefficient value to motivate the PCNN-processing both low-frequency (LF) and high-frequency (HF) sub-bands. This makes the fused image blurred and decreases its contrast. The main objective of this work is to design an image fusion method that gives the fused image with better contrast, more detail information, and suitable for clinical use. We propose a novel image fusion method utilizing feature-motivated adaptive PCNN in NSST domain for fusion of anatomical images. The basic PCNN model is simplified, and adaptive-linking strength is used. Different features are used to motivate the PCNN-processing LF and HF sub-bands. The proposed method is extended for fusion of functional image with an anatomical image in improved nonlinear intensity hue and saturation (INIHS) color model. Extensive fusion experiments have been performed on CT-MRI and SPECT-MRI datasets. Visual and quantitative analysis of experimental results proved that the proposed method provides satisfactory fusion outcome compared to other image fusion methods.
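For readers unfamiliar with PCNN-based coefficient selection, the sketch below implements a simplified pulse-coupled neural network whose firing counts act as an activity measure for choosing between two sub-bands. The linking strength, threshold decay, and iteration count are assumed values, and the NSST decomposition and feature-motivated inputs of the actual method are not reproduced here.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_firing_map(stimulus, beta=0.2, alpha_theta=0.2, v_theta=20.0, n_iter=100):
    """Simplified PCNN: returns how many times each neuron fires over n_iter steps,
    used below as the activity measure of a sub-band (parameters are assumptions)."""
    S = stimulus / (stimulus.max() + 1e-12)
    Y = np.zeros_like(S)                 # firing state
    theta = np.ones_like(S)              # dynamic threshold
    fire_count = np.zeros_like(S)
    kernel = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    for _ in range(n_iter):
        L = convolve(Y, kernel, mode='constant')   # linking input from neighbours
        U = S * (1.0 + beta * L)                   # internal activity
        Y = (U > theta).astype(float)
        theta = theta * np.exp(-alpha_theta) + v_theta * Y
        fire_count += Y
    return fire_count

def fuse_subbands(band_a, band_b):
    """Choose, coefficient-wise, the sub-band whose PCNN fires more often."""
    fa, fb = pcnn_firing_map(np.abs(band_a)), pcnn_firing_map(np.abs(band_b))
    return np.where(fa >= fb, band_a, band_b)
```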
Wang, Ying; Wang, Shumin; Xu, Shiguang; Qu, Jiaqi; Liu, Bo
2014-01-01
The frequencies of EML4-ALK fusion gene in non-small cell lung cancer (NSCLC) with different clinicopathologic features described by previous studies are inconsistent. The key demographic and pathologic features associated with EML4-ALK fusion gene have not been definitively established. This meta-analysis was conducted to compare the frequency of the EML4-ALK fusion gene in patients with different clinicopathologic features and to identify an enriched population of patients with NSCLC harboring EML4-ALK fusion gene. The Pubmed and Embase databases were searched up to July 2014 for all studies on the EML4-ALK fusion gene in NSCLC patients. A criteria list and exclusion criteria were established to screen the studies. The frequency of the EML4-ALK fusion gene and the clinicopathologic features, including smoking status, pathologic type, gender, and EGFR status were abstracted. Seventeen articles consisting of 4511 NSCLC cases were included in this meta-analysis. A significantly lower EML4-ALK fusion gene positive rate was associated with smokers (pooled OR = 0.40, 95% CI = 0.30-0.54, P<0.00001). A significantly higher EML4-ALK fusion gene positivity rate was associated with adenocarcinomas (pooled OR = 2.53, 95% CI = 1.66-3.86, P<0.0001) and female sex (pooled OR = 0.61, 95% CI = 0.41-0.90, P = 0.01). We found that a significantly lower EML4-ALK fusion gene positivity rate was associated with EGFR mutation (pooled OR = 0.07, 95% CI = 0.03-0.19, P<0.00001). No publication bias was observed in any meta-analysis (all P values of Egger's test >0.05); however, because of the small sample size, no publication-bias assessment was available for the meta-analysis regarding EGFR gene status. This meta-analysis revealed that the EML4-ALK fusion gene is highly correlated with a never/light smoking history, female sex and the pathologic type of adenocarcinoma, and is largely mutually exclusive of EGFR mutation.
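The pooled odds ratios reported above come from standard meta-analytic pooling. As a hedged illustration (not the authors' exact procedure, which may instead use a Mantel-Haenszel or random-effects model), the sketch below pools per-study 2x2 tables with inverse-variance fixed-effect weighting; the example tables are made up.

```python
import numpy as np
from scipy import stats

def pooled_or_fixed_effect(tables):
    """Inverse-variance fixed-effect pooling of log odds ratios.
    Each table is (a, b, c, d): exposed-positive, exposed-negative,
    unexposed-positive, unexposed-negative (0.5 added to all cells if any is zero)."""
    log_ors, weights = [], []
    for a, b, c, d in tables:
        if 0 in (a, b, c, d):
            a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
        log_ors.append(np.log((a * d) / (b * c)))
        weights.append(1.0 / (1/a + 1/b + 1/c + 1/d))
    log_ors, weights = np.array(log_ors), np.array(weights)
    pooled = np.sum(weights * log_ors) / weights.sum()
    se = np.sqrt(1.0 / weights.sum())
    ci = np.exp(pooled + np.array([-1.96, 1.96]) * se)
    p = 2 * (1 - stats.norm.cdf(abs(pooled / se)))
    return np.exp(pooled), ci, p

# made-up study tables: (smoker ALK+, smoker ALK-, never-smoker ALK+, never-smoker ALK-)
print(pooled_or_fixed_effect([(5, 95, 20, 80), (8, 192, 30, 170)]))
```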
Wang, Shumin; Xu, Shiguang; Qu, Jiaqi
2014-01-01
Background The frequencies of EML4-ALK fusion gene in non-small cell lung cancer (NSCLC) with different clinicopathologic features described by previous studies are inconsistent. The key demographic and pathologic features associated with EML4-ALK fusion gene have not been definitively established. This meta-analysis was conducted to compare the frequency of the EML4-ALK fusion gene in patients with different clinicopathologic features and to identify an enriched population of patients with NSCLC harboring EML4-ALK fusion gene. Methods The Pubmed and Embase databases were searched up to July 2014 for all studies on the EML4-ALK fusion gene in NSCLC patients. A criteria list and exclusion criteria were established to screen the studies. The frequency of the EML4-ALK fusion gene and the clinicopathologic features, including smoking status, pathologic type, gender, and EGFR status were abstracted. Results Seventeen articles consisting of 4511 NSCLC cases were included in this meta-analysis. A significantly lower EML4-ALK fusion gene positive rate was associated with smokers (pooled OR = 0.40, 95% CI = 0.30–0.54, P<0.00001). A significantly higher EML4-ALK fusion gene positivity rate was associated with adenocarcinomas (pooled OR = 2.53, 95% CI = 1.66–3.86, P<0.0001) and female sex (pooled OR = 0.61, 95% CI = 0.41–0.90, P = 0.01). We found that a significantly lower EML4-ALK fusion gene positivity rate was associated with EGFR mutation (pooled OR = 0.07, 95% CI = 0.03–0.19, P<0.00001). No publication bias was observed in any meta-analysis (all P values of Egger's test >0.05); however, because of the small sample size, no publication-bias assessment was available for the meta-analysis regarding EGFR gene status. Conclusion This meta-analysis revealed that the EML4-ALK fusion gene is highly correlated with a never/light smoking history, female sex and the pathologic type of adenocarcinoma, and is largely mutually exclusive of EGFR mutation. PMID:25360721
Multifocus image fusion using phase congruency
NASA Astrophysics Data System (ADS)
Zhan, Kun; Li, Qiaoqiao; Teng, Jicai; Wang, Mingying; Shi, Jinhui
2015-05-01
We address the problem of fusing multifocus images based on the phase congruency (PC). PC provides a sharpness feature of a natural image. The focus measure (FM) is identified as strong PC near a distinctive image feature evaluated by the complex Gabor wavelet. The PC is more robust against noise than other FMs. The fusion image is obtained by a new fusion rule (FR), and the focused region is selected by the FR from one of the input images. Experimental results show that the proposed fusion scheme achieves the fusion performance of the state-of-the-art methods in terms of visual quality and quantitative evaluations.
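A minimal sketch of the selection-based fusion idea follows; it uses local Laplacian energy as a simple stand-in focus measure, since a full phase-congruency computation with complex Gabor wavelets is considerably longer. The window size and the choice of focus measure are assumptions, not the paper's.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_measure(img, win=9):
    """Local energy of the Laplacian response, used here as a simple
    stand-in for the phase-congruency sharpness feature."""
    high = laplace(img.astype(float))
    return uniform_filter(high ** 2, size=win)

def fuse_multifocus(img_a, img_b, win=9):
    """Pixel-wise selection rule: keep the pixel from the locally sharper (in-focus) image."""
    fa, fb = focus_measure(img_a, win), focus_measure(img_b, win)
    return np.where(fa >= fb, img_a, img_b)
```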
NASA Astrophysics Data System (ADS)
Danala, Gopichandh; Wang, Yunzhi; Thai, Theresa; Gunderson, Camille; Moxley, Katherine; Moore, Kathleen; Mannel, Robert; Liu, Hong; Zheng, Bin; Qiu, Yuchen
2017-03-01
Predicting metastatic tumor response to chemotherapy at early stage is critically important for improving efficacy of clinical trials of testing new chemotherapy drugs. However, using current response evaluation criteria in solid tumors (RECIST) guidelines only yields a limited accuracy to predict tumor response. In order to address this clinical challenge, we applied a Radiomics approach to develop a new quantitative image analysis scheme, aiming to accurately assess the tumor response to new chemotherapy treatment, for advanced ovarian cancer patients. During the experiment, a retrospective dataset containing 57 patients was assembled, each of which has two sets of CT images: pre-therapy and 4-6 week follow up CT images. A Radiomics based image analysis scheme was then applied on these images, which is composed of three steps. First, the tumors depicted on the CT images were segmented by a hybrid tumor segmentation scheme. Then, a total of 115 features were computed from the segmented tumors, which can be grouped as 1) volume based features; 2) density based features; and 3) wavelet features. Finally, an optimal feature cluster was selected based on the single feature performance and an equal-weighted fusion rule was applied to generate the final predicting score. The results demonstrated that the single feature achieved an area under the receiver operating characteristic curve (AUC) of 0.838+/-0.053. This investigation demonstrates that the Radiomics approach may have the potential in the development of a high-accuracy predicting model for early stage prognostic assessment of ovarian cancer patients.
Novel BCOR-MAML3 and ZC3H7B-BCOR Gene Fusions in Undifferentiated Small Blue Round Cell Sarcomas.
Specht, Katja; Zhang, Lei; Sung, Yun-Shao; Nucci, Marisa; Dry, Sarah; Vaiyapuri, Sumathi; Richter, Gunther H S; Fletcher, Christopher D M; Antonescu, Cristina R
2016-04-01
Small blue round cell tumors (SBRCTs) are a heterogeneous group of tumors that are difficult to diagnose because of overlapping morphologic, immunohistochemical, and clinical features. About two-thirds of EWSR1-negative SBRCTs are associated with CIC-DUX4-related fusions, whereas another small subset shows BCOR-CCNB3 X-chromosomal paracentric inversion. Applying paired-end RNA sequencing to an SBRCT index case of a 44-year-old man, we identified a novel BCOR-MAML3 chimeric fusion, which was validated by reverse transcription polymerase chain reaction and fluorescence in situ hybridization techniques. We then screened a total of 75 SBRCTs lacking EWSR1, FUS, SYT, CIC, and BCOR-CCNB3 abnormalities for BCOR break-apart probes by fluorescence in situ hybridization to detect potential recurrent BCOR gene rearrangements outside the typical X-chromosomal inversion. Indeed, 8/75 (11%) SBRCTs showed distinct BCOR gene rearrangements, with 2 cases each showing either a BCOR-MAML3 or the alternative ZC3H7B-BCOR fusion, whereas no fusion partner was detected in the remaining 4 cases. Gene expression of the BCOR-MAML3-positive index case showed a distinct transcriptional profile with upregulation of HOX-gene signature, compared with classic Ewing's sarcoma or CIC-DUX4-positive SBRCTs. The clinicopathologic features of the SBRCTs with alternative BCOR rearrangements were also compared with a group of BCOR-CCNB3 inversion-positive cases, combining 11 from our files with a meta-analysis of 42 published cases. The BCOR-CCNB3-positive tumors occurred preferentially in children and in bone, in contrast to alternative BCOR-rearranged SBRCTs, which presented in young adults, with a variable anatomic distribution. Furthermore, BCOR-rearranged tumors often displayed spindle cell areas, either well defined in intersecting fascicles or blending with the round cell component, which appears distinct from most other fusion-positive SBRCTs and shares histologic overlap with poorly differentiated synovial sarcoma.
Novel BCOR-MAML3 and ZC3H7B-BCOR Gene Fusions in Undifferentiated Small Blue Round Cell Sarcomas
Specht, Katja; Zhang, Lei; Sung, Yun-Shao; Nucci, Marisa; Dry, Sarah; Vaiyapuri, Sumathi; Richter, Gunther HS; Fletcher, Christopher DM; Antonescu, Cristina R
2015-01-01
Small blue round cell tumors (SBRCTs) are a heterogeneous group of tumors that are difficult to diagnose due to overlapping morphologic, immunohistochemical and clinical features. About two-thirds of EWSR1-negative SBRCTs are associated with CIC-DUX4 related fusions, while another small subset shows BCOR-CCNB3 X-chromosomal paracentric inversion. Applying paired-end RNA sequencing to an SBRCT index case of a 44-year-old male, we identified a novel BCOR-MAML3 chimeric fusion, which was validated by RT-PCR and FISH techniques. We then screened a total of 75 SBRCTs lacking EWSR1, FUS, SYT, CIC and BCOR-CCNB3 abnormalities, for BCOR break-apart probes by FISH to detect potential recurrent BCOR gene rearrangements, outside the typical X-chromosomal inversion. Indeed, 8/75 (11%) SBRCTs showed distinct BCOR gene rearrangements, with 2 cases each showing either a BCOR-MAML3 or the alternative ZC3H7B-BCOR fusion, while no fusion partner was detected in the remaining 4 cases. Gene expression of the BCOR-MAML3 positive index case showed a distinct transcriptional profile with upregulation of HOX-gene signature, compared to classic Ewing sarcoma or CIC-DUX4-positive SBRCTs. The clinicopathologic features of the SBRCTs with alternative BCOR rearrangements were also compared with a group of BCOR-CCNB3 inversion positive cases, combining 11 from our files with a meta-analysis of 42 published cases. The BCOR-CCNB3-positive tumors occurred preferentially in children and in bone, in contrast to alternative BCOR-rearranged SBRCTs which presented in young adults, with a variable anatomic distribution. Furthermore, BCOR-rearranged tumors often displayed spindle cell areas, either well-defined in intersecting fascicles or blending with the round cell component, which appears distinct from most other fusion-positive SBRCTs and shares histologic overlap with poorly differentiated synovial sarcoma. PMID:26752546
Multiple feature fusion via covariance matrix for visual tracking
NASA Astrophysics Data System (ADS)
Jin, Zefenfen; Hou, Zhiqiang; Yu, Wangsheng; Wang, Xin; Sun, Hui
2018-04-01
Aiming at the problem of complicated dynamic scenes in visual target tracking, a multi-feature fusion tracking algorithm based on the covariance matrix is proposed to improve the robustness of the tracking algorithm. In the framework of the quantum genetic algorithm, this paper uses the region covariance descriptor to fuse the color, edge and texture features. It also uses a fast covariance intersection algorithm to update the model. The low dimension of the region covariance descriptor, the fast convergence speed and strong global optimization ability of the quantum genetic algorithm, and the fast computation of the fast covariance intersection algorithm are used to improve the computational efficiency of the fusion, matching, and updating process, so that the algorithm achieves fast and effective multi-feature fusion tracking. The experiments prove that the proposed algorithm can not only achieve fast and robust tracking but also effectively handle interference from occlusion, rotation, deformation, motion blur and so on.
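To make the descriptor concrete, the sketch below builds a region covariance matrix from color and gradient feature maps and compares two such matrices with the usual generalized-eigenvalue (Förstner) metric; the texture channels, quantum genetic optimization, and covariance-intersection update of the paper are omitted, and the feature choices are assumptions.

```python
import numpy as np
from scipy.linalg import eigvals
from scipy.ndimage import sobel

def region_covariance(patch_rgb):
    """Region covariance descriptor built from color and edge (gradient) features;
    texture features such as Gabor responses could be stacked the same way."""
    gray = patch_rgb.mean(axis=2)
    feats = [patch_rgb[..., 0], patch_rgb[..., 1], patch_rgb[..., 2],
             sobel(gray, axis=0), sobel(gray, axis=1)]
    F = np.stack([f.ravel() for f in feats], axis=1)   # (n_pixels, n_features)
    return np.cov(F, rowvar=False)

def covariance_distance(C1, C2):
    """Förstner metric: sqrt of the sum of squared log generalized eigenvalues."""
    lam = np.real(eigvals(C1, C2))
    lam = lam[lam > 1e-12]
    return np.sqrt(np.sum(np.log(lam) ** 2))
```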
A comparative study of multi-focus image fusion validation metrics
NASA Astrophysics Data System (ADS)
Giansiracusa, Michael; Lutz, Adam; Messer, Neal; Ezekiel, Soundararajan; Alford, Mark; Blasch, Erik; Bubalo, Adnan; Manno, Michael
2016-05-01
Fusion of visual information from multiple sources is relevant for security, transportation, and safety applications. One way that image fusion can be particularly useful is when fusing imagery data from multiple levels of focus. Different focus levels can create different visual qualities for different regions in the imagery, which can provide much more visual information to analysts when fused. Multi-focus image fusion would benefit a user through automation, which requires the evaluation of the fused images to determine whether they have properly fused the focused regions of each image. Many no-reference metrics, such as information theory based, image feature based and structural similarity based metrics, have been developed to accomplish comparisons. However, it is hard to scale an accurate assessment of visual quality, which requires the validation of these metrics for different types of applications. In order to do this, human perception based validation methods have been developed, particularly dealing with the use of receiver operating characteristics (ROC) curves and the area under them (AUC). Our study uses these to analyze the effectiveness of no-reference image fusion metrics applied to multi-resolution fusion methods in order to determine which should be used when dealing with multi-focus data. Preliminary results show that the Tsallis, SF, and spatial frequency metrics are consistent with the image quality and peak signal to noise ratio (PSNR).
Salient region detection by fusing bottom-up and top-down features extracted from a single image.
Tian, Huawei; Fang, Yuming; Zhao, Yao; Lin, Weisi; Ni, Rongrong; Zhu, Zhenfeng
2014-10-01
Recently, some global contrast-based salient region detection models have been proposed based on only the low-level feature of color. It is necessary to consider both color and orientation features to overcome their limitations, and thus improve the performance of salient region detection for images with low contrast in color and high contrast in orientation. In addition, the existing fusion methods for different feature maps, like the simple averaging method and the selective method, are not sufficiently effective. To overcome these limitations of existing salient region detection models, we propose a novel salient region model based on the bottom-up and top-down mechanisms: the color contrast and orientation contrast are adopted to calculate the bottom-up feature maps, while the top-down cue of depth-from-focus from the same single image is used to guide the generation of final salient regions, since depth-from-focus reflects the photographer's preference and knowledge of the task. A more general and effective fusion method is designed to combine the bottom-up feature maps. According to the degree-of-scattering and eccentricities of feature maps, the proposed fusion method can assign adaptive weights to different feature maps to reflect the confidence level of each feature map. The depth-from-focus of the image, as a significant top-down feature for visual attention in the image, is used to guide the salient regions during the fusion process; with its aid, the proposed fusion method can filter out the background and highlight salient regions for the image. Experimental results show that the proposed model outperforms the state-of-the-art models on three publicly available data sets.
Near infrared and visible face recognition based on decision fusion of LBP and DCT features
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan
2018-03-01
Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and signal-to-noise ratio (SNR). Therefore, near infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In order to extract the discriminative complementary features between near infrared and visible images, in this paper, we propose a novel near infrared and visible face fusion recognition algorithm based on DCT and LBP features. Firstly, the effective features in the near-infrared face image are extracted by the low frequency part of the DCT coefficients and the partition histograms of the LBP operator. Secondly, the LBP features of the visible-light face image are extracted to compensate for the lacking detail features of the near-infrared face image. Then, the LBP features of the visible-light face image and the DCT and LBP features of the near-infrared face image are sent to each classifier for labeling. Finally, a decision level fusion strategy is used to obtain the final recognition result. The visible and near infrared face recognition is tested on the HITSZ Lab2 visible and near infrared face database. The experiment results show that the proposed method extracts the complementary features of near-infrared and visible face images and improves the robustness of unconstrained face recognition. Especially for the circumstance of small training samples, the recognition rate of the proposed method can reach 96.13%, a significant improvement over the 92.75% of the method based on statistical feature fusion.
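The feature-extraction and decision-fusion steps can be sketched as follows, assuming grayscale face crops; the block grid, LBP parameters, and voting weights are placeholders, and the DCT branch and the classifiers themselves are not shown.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(face_gray, P=8, R=1, grid=(4, 4)):
    """Concatenated block-wise uniform-LBP histograms of a face image."""
    lbp = local_binary_pattern(face_gray, P, R, method='uniform')
    n_bins = P + 2
    h, w = lbp.shape
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = lbp[i*h//grid[0]:(i+1)*h//grid[0], j*w//grid[1]:(j+1)*w//grid[1]]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins), density=True)
            hists.append(hist)
    return np.concatenate(hists)

def decision_fusion(labels, weights=None):
    """Decision-level fusion: weighted majority vote over the labels produced by
    the individual feature classifiers (e.g. NIR-DCT, NIR-LBP, VIS-LBP)."""
    labels = np.asarray(labels)
    weights = np.ones(len(labels)) if weights is None else np.asarray(weights, float)
    candidates = np.unique(labels)
    votes = [weights[labels == c].sum() for c in candidates]
    return candidates[int(np.argmax(votes))]
```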
Wen, Miaomiao; Wang, Xuejiao; Sun, Ying; Xia, Jinghua; Fan, Liangbo; Xing, Hao; Zhang, Zhipei; Li, Xiaofei
2016-01-01
Echinoderm microtubule-associated protein-like 4-anaplastic lymphoma kinase (EML4-ALK) and epidermal growth factor receptor (EGFR) define specific molecular subsets of lung cancer with distinct clinical features. We aimed at revealing the clinical features of EML4-ALK fusion gene and EGFR mutation in non-small-cell lung cancer (NSCLC). We enrolled 694 Chinese patients with NSCLC for analysis. EML4-ALK fusion gene was analyzed by real-time polymerase chain reaction, and EGFR mutations were analyzed by amplified refractory mutation system. Among the 694 patients, 60 (8.65%) patients had EML4-ALK fusions. In continuity-corrected χ2 test analysis, EML4-ALK fusion gene was correlated with sex, age, smoking status, and histology, but no significant association was observed between EML4-ALK fusion gene and clinical stage. A total of 147 (21.18%) patients had EGFR mutations. In concordance with previous reports, EGFR mutation was correlated with age, smoking status, histology, and clinical stage, whereas patient age was not significantly associated with EGFR mutation. Meanwhile, to our surprise, six (0.86%) patients had coexisting EML4-ALK fusions and EGFR mutations. EML4-ALK fusion gene defines a new molecular subset in patients with NSCLC. Six patients who harbored both EML4-ALK fusion genes and EGFR mutations were identified in our study. The EGFR mutations and the EML4-ALK fusion genes are coexistent.
Wen, Miaomiao; Wang, Xuejiao; Sun, Ying; Xia, Jinghua; Fan, Liangbo; Xing, Hao; Zhang, Zhipei; Li, Xiaofei
2016-01-01
Purpose Echinoderm microtubule-associated protein-like 4–anaplastic lymphoma kinase (EML4-ALK) and epidermal growth factor receptor (EGFR) define specific molecular subsets of lung cancer with distinct clinical features. We aimed at revealing the clinical features of EML4-ALK fusion gene and EGFR mutation in non-small-cell lung cancer (NSCLC). Methods We enrolled 694 Chinese patients with NSCLC for analysis. EML4-ALK fusion gene was analyzed by real-time polymerase chain reaction, and EGFR mutations were analyzed by amplified refractory mutation system. Results Among the 694 patients, 60 (8.65%) patients had EML4-ALK fusions. In continuity correction χ2 test analysis, EML4-ALK fusion gene was correlated with sex, age, smoking status, and histology, but no significant association was observed between EML4-ALK fusion gene and clinical stage. A total of 147 (21.18%) patients had EGFR mutations. In concordance with previous reports, EGFR mutation was correlated with age, smoking status, histology, and clinical stage, whereas patient age was not significantly associated with EGFR mutation. Meanwhile, to our surprise, six (0.86%) patients had coexisting EML4-ALK fusions and EGFR mutations. Conclusion EML4-ALK fusion gene defines a new molecular subset in patients with NSCLC. Six patients who harbored both EML4-ALK fusion genes and EGFR mutations were identified in our study. The EGFR mutations and the EML4-ALK fusion genes are coexistent. PMID:27103824
Multisource data fusion for documenting archaeological sites
NASA Astrophysics Data System (ADS)
Knyaz, Vladimir; Chibunichev, Alexander; Zhuravlev, Denis
2017-10-01
The quality of archaeological site documentation is of great importance for cultural heritage preservation and investigation. The progress in developing new techniques and systems for data acquisition and processing creates an excellent basis for achieving a new quality of archaeological site documentation and visualization. Archaeological data have some specific features which have to be taken into account during acquisition, processing and management. First of all, it is necessary to gather as complete information about findings as possible, with no loss of information and no damage to artifacts. Remote sensing technologies are the most adequate and powerful means to satisfy this requirement. An approach to archaeological data acquisition and fusion based on remote sensing is proposed. It combines a set of photogrammetric techniques for obtaining geometrical and visual information at different scales and levels of detail with a pipeline for archaeological data documenting, structuring, fusion, and analysis. The proposed approach is applied to the documentation of the Bosporus archaeological expedition of the Russian State Historical Museum.
Classifying four-category visual objects using multiple ERP components in single-trial ERP.
Qin, Yu; Zhan, Yu; Wang, Changming; Zhang, Jiacai; Yao, Li; Guo, Xiaojuan; Wu, Xia; Hu, Bin
2016-08-01
Object categorization using single-trial electroencephalography (EEG) data measured while participants view images has been studied intensively. In previous studies, multiple event-related potential (ERP) components (e.g., P1, N1, P2, and P3) were used to improve the performance of object categorization of visual stimuli. In this study, we introduce a novel method that uses multiple-kernel support vector machine to fuse multiple ERP component features. We investigate whether fusing the potential complementary information of different ERP components (e.g., P1, N1, P2a, and P2b) can improve the performance of four-category visual object classification in single-trial EEGs. We also compare the classification accuracy of different ERP component fusion methods. Our experimental results indicate that the classification accuracy increases through multiple ERP fusion. Additional comparative analyses indicate that the multiple-kernel fusion method can achieve a mean classification accuracy higher than 72 %, which is substantially better than that achieved with any single ERP component feature (55.07 % for the best single ERP component, N1). We compare the classification results with those of other fusion methods and determine that the accuracy of the multiple-kernel fusion method is 5.47, 4.06, and 16.90 % higher than those of feature concatenation, feature extraction, and decision fusion, respectively. Our study shows that our multiple-kernel fusion method outperforms other fusion methods and thus provides a means to improve the classification performance of single-trial ERPs in brain-computer interface research.
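A simple way to realize a multiple-kernel fusion of ERP components is to precompute one kernel per component and feed their weighted sum to a kernel SVM. The sketch below assumes fixed, hand-chosen kernel weights and an RBF kernel per component, rather than the learned multiple-kernel weights of the study.

```python
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def combined_kernel(component_feats, weights, gamma=0.1):
    """Weighted sum of per-ERP-component RBF kernels over the training trials.
    component_feats: list of (n_trials, n_features) arrays, one per ERP component."""
    return sum(w * rbf_kernel(X, X, gamma=gamma) for w, X in zip(weights, component_feats))

def fit_mkl_svm(component_feats, y, weights):
    """Fit an SVM on the precomputed fused kernel; prediction on new trials would
    require the cross-kernel between test and training trials, built the same way."""
    K = combined_kernel(component_feats, weights)
    return SVC(kernel='precomputed').fit(K, y)
```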
Normalized distance aggregation of discriminative features for person reidentification
NASA Astrophysics Data System (ADS)
Hou, Li; Han, Kang; Wan, Wanggen; Hwang, Jenq-Neng; Yao, Haiyan
2018-03-01
We propose an effective person reidentification method based on normalized distance aggregation of discriminative features. Our framework is built on the integration of three high-performance discriminative feature extraction models, including local maximal occurrence (LOMO), feature fusion net (FFN), and a concatenation of LOMO and FFN called LOMO-FFN, through two fast and discriminant metric learning models, i.e., cross-view quadratic discriminant analysis (XQDA) and large-scale similarity learning (LSSL). More specifically, we first represent all the cross-view person images using LOMO, FFN, and LOMO-FFN, respectively, and then apply each extracted feature representation to train XQDA and LSSL, respectively, to obtain the optimized individual cross-view distance metric. Finally, the cross-view person matching is computed as the sum of the optimized individual cross-view distance metric through the min-max normalization. Experimental results have shown the effectiveness of the proposed algorithm on three challenging datasets (VIPeR, PRID450s, and CUHK01).
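The aggregation step itself is compact; a minimal sketch, assuming each feature/metric pipeline (e.g. LOMO-XQDA, FFN-LSSL) yields a probe-by-gallery distance matrix, follows.

```python
import numpy as np

def min_max_normalize(D):
    """Scale a probe-by-gallery distance matrix to [0, 1]."""
    return (D - D.min()) / (D.max() - D.min() + 1e-12)

def aggregate_distances(dist_matrices):
    """Sum of the min-max normalized distance matrices from the individual pipelines."""
    return sum(min_max_normalize(D) for D in dist_matrices)

def rank_gallery(D_fused):
    """For each probe, rank gallery identities by the aggregated distance."""
    return np.argsort(D_fused, axis=1)
```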
Finger-vein and fingerprint recognition based on a feature-level fusion method
NASA Astrophysics Data System (ADS)
Yang, Jinfeng; Hong, Bofeng
2013-07-01
Multimodal biometrics based on finger identification has been a hot topic in recent years. In this paper, a novel fingerprint-vein based biometric method is proposed to improve the reliability and accuracy of the finger recognition system. First, second order steerable filters are used to enhance and extract the minutiae features of the fingerprint (FP) and finger-vein (FV). Second, the texture features of the fingerprint and finger-vein are extracted by a bank of Gabor filters. Third, a new triangle-region fusion method is proposed to integrate all the fingerprint and finger-vein features at the feature level. Thus, the fusion features contain both the finger texture information and the minutiae triangular geometry structure. Finally, experimental results on the self-constructed finger-vein and fingerprint databases show that the proposed method is reliable and precise in personal identification.
NASA Astrophysics Data System (ADS)
Wang, G. H.; Wang, H. B.; Fan, W. F.; Liu, Y.; Chen, C.
2018-04-01
Traditional change detection algorithms depend mainly on the spectral information of image patches and fail to effectively mine and fuse the advantages of multiple image features. Borrowing ideas from object-oriented analysis, this article proposes a multi-feature fusion change detection algorithm for remote sensing images. First, the image is partitioned into objects by multi-scale segmentation; then the color histogram and linear gradient histogram of each object are computed. The Earth Mover's Distance (EMD) statistical operator is used to measure the color distance and the edge/line feature distance between corresponding objects from the two acquisition dates, and an adaptive weighting method combines the color feature distance and the edge/line feature distance to construct the object heterogeneity. Finally, the change detection results are obtained by analyzing the curvature of the heterogeneity histogram. The experimental results show that the method can fully fuse the color and edge/line features, thus improving the accuracy of the change detection.
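Under the reading above, the per-object heterogeneity can be sketched as a weighted combination of one-dimensional Earth Mover's Distances between the two dates' histograms; the weight below is a fixed placeholder, whereas the paper derives it adaptively.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def object_heterogeneity(color_hist_t1, color_hist_t2,
                         edge_hist_t1, edge_hist_t2, w_color=0.6):
    """Heterogeneity of one image object between two dates: weighted combination of
    the EMD over its color histogram and over its edge-direction histogram
    (w_color is a hypothetical weight; the paper computes it adaptively)."""
    bins_c = np.arange(len(color_hist_t1))
    d_color = wasserstein_distance(bins_c, bins_c, color_hist_t1, color_hist_t2)
    bins_e = np.arange(len(edge_hist_t1))
    d_edge = wasserstein_distance(bins_e, bins_e, edge_hist_t1, edge_hist_t2)
    return w_color * d_color + (1.0 - w_color) * d_edge
```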
Fusion of monocular cues to detect man-made structures in aerial imagery
NASA Technical Reports Server (NTRS)
Shufelt, Jefferey; Mckeown, David M.
1991-01-01
The extraction of buildings from aerial imagery is a complex problem for automated computer vision. It requires locating regions in a scene that possess properties distinguishing them as man-made objects as opposed to naturally occurring terrain features. It is reasonable to assume that no single detection method can correctly delineate or verify buildings in every scene. A cooperative-methods paradigm is useful in approaching the building extraction problem. Using this paradigm, each extraction technique provides information which can be added or assimilated into an overall interpretation of the scene. Thus, the main objective is to explore the development of computer vision system that integrates the results of various scene analysis techniques into an accurate and robust interpretation of the underlying three dimensional scene. The problem of building hypothesis fusion in aerial imagery is discussed. Building extraction techniques are briefly surveyed, including four building extraction, verification, and clustering systems. A method for fusing the symbolic data generated by these systems is described, and applied to monocular image and stereo image data sets. Evaluation methods for the fusion results are described, and the fusion results are analyzed using these methods.
Multilevel depth and image fusion for human activity detection.
Ni, Bingbing; Pei, Yong; Moulin, Pierre; Yan, Shuicheng
2013-10-01
Recognizing complex human activities usually requires the detection and modeling of individual visual features and the interactions between them. Current methods only rely on the visual features extracted from 2-D images, and therefore often lead to unreliable salient visual feature detection and inaccurate modeling of the interaction context between individual features. In this paper, we show that these problems can be addressed by combining data from a conventional camera and a depth sensor (e.g., Microsoft Kinect). We propose a novel complex activity recognition and localization framework that effectively fuses information from both grayscale and depth image channels at multiple levels of the video processing pipeline. In the individual visual feature detection level, depth-based filters are applied to the detected human/object rectangles to remove false detections. In the next level of interaction modeling, 3-D spatial and temporal contexts among human subjects or objects are extracted by integrating information from both grayscale and depth images. Depth information is also utilized to distinguish different types of indoor scenes. Finally, a latent structural model is developed to integrate the information from multiple levels of video processing for an activity detection. Extensive experiments on two activity recognition benchmarks (one with depth information) and a challenging grayscale + depth human activity database that contains complex interactions between human-human, human-object, and human-surroundings demonstrate the effectiveness of the proposed multilevel grayscale + depth fusion scheme. Higher recognition and localization accuracies are obtained relative to the previous methods.
NASA Astrophysics Data System (ADS)
He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Chen, Ying
2014-05-01
A multimodal biometric system has been considered a promising technique to overcome the defects of unimodal biometric systems. We have introduced a fusion scheme to gain a better understanding and fusion method for a face-iris-fingerprint multimodal biometric system. In our case, we use particle swarm optimization to train a set of adaptive Gabor filters in order to achieve the proper Gabor basic functions for each modality. For a closer analysis of texture information, two different local Gabor features for each modality are produced by the corresponding Gabor coefficients. Next, all matching scores of the two Gabor features for each modality are projected to a single scalar score via a trained support vector regression model for a final decision. A large-scale dataset is formed to validate the proposed scheme using the Facial Recognition Technology database-fafb and CASIA-V3-Interval together with FVC2004-DB2a datasets. The experimental results demonstrate that, in addition to achieving more powerful local Gabor features for each modality and better recognition performance through the fusion strategy, our architecture also outperforms some state-of-the-art individual methods and other fusion approaches for face-iris-fingerprint multimodal biometric systems.
Vehicle logo recognition using multi-level fusion model
NASA Astrophysics Data System (ADS)
Ming, Wei; Xiao, Jianli
2018-04-01
Vehicle logo recognition plays an important role in manufacturer identification and vehicle recognition. This paper proposes a new vehicle logo recognition algorithm. It has a hierarchical framework, which consists of two fusion levels. At the first level, a feature fusion model is employed to map the original features to a higher dimension feature space. In this space, the vehicle logos become more recognizable. At the second level, a weighted voting strategy is proposed to promote the accuracy and the robustness of the recognition results. To evaluate the performance of the proposed algorithm, extensive experiments are performed, which demonstrate that the proposed algorithm can achieve high recognition accuracy and work robustly.
Face Liveness Detection Using Defocus
Kim, Sooyeon; Ban, Yuseok; Lee, Sangyoun
2015-01-01
In order to develop security systems for identity authentication, face recognition (FR) technology has been applied. One of the main problems of applying FR technology is that the systems are especially vulnerable to attacks with spoofing faces (e.g., 2D pictures). To defend from these attacks and to enhance the reliability of FR systems, many anti-spoofing approaches have been recently developed. In this paper, we propose a method for face liveness detection using the effect of defocus. From two images sequentially taken at different focuses, three features, focus, power histogram and gradient location and orientation histogram (GLOH), are extracted. Afterwards, we detect forged faces through the feature-level fusion approach. For reliable performance verification, we develop two databases with a handheld digital camera and a webcam. The proposed method achieves a 3.29% half total error rate (HTER) at a given depth of field (DoF) and can be extended to camera-equipped devices, like smartphones. PMID:25594594
NASA Astrophysics Data System (ADS)
Emmerman, Philip J.
2005-05-01
Teams of robots or mixed teams of warfighters and robots on reconnaissance and other missions can benefit greatly from a local fusion station. A local fusion station is defined here as a small mobile processor with interfaces to enable the ingestion of multiple heterogeneous sensor data and information streams, including blue force tracking data. These data streams are fused and integrated with contextual information (terrain features, weather, maps, dynamic background features, etc.), and displayed or processed to provide real time situational awareness to the robot controller or to the robots themselves. These blue and red force fusion applications remove redundancies, lessen ambiguities, correlate, aggregate, and integrate sensor information with context such as high resolution terrain. Applications such as safety, team behavior, asset control, training, pattern analysis, etc. can be generated or enhanced by these fusion stations. This local fusion station should also enable the interaction between these local units and a global information world.
Fusion Materials Research at Oak Ridge National Laboratory in Fiscal Year 2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiffen, Frederick W.; Noe, Susan P.; Snead, Lance Lewis
2014-10-01
The realization of fusion energy is a formidable challenge with significant achievements resulting from close integration of the plasma physics and applied technology disciplines. Presently, the most significant technological challenge for the near-term experiments such as ITER, and next generation fusion power systems, is the inability of current materials and components to withstand the harsh fusion nuclear environment. The overarching goal of the ORNL fusion materials program is to provide the applied materials science support and understanding to underpin the ongoing DOE Office of Science fusion energy program while developing materials for fusion power systems. In doing so, the program continues to be integrated both with the larger U.S. and international fusion materials communities, and with the international fusion design and technology communities.
Medical image registration based on normalized multidimensional mutual information
NASA Astrophysics Data System (ADS)
Li, Qi; Ji, Hongbing; Tong, Ming
2009-10-01
Registration of medical images is an essential research topic in medical image processing and applications, and especially a preliminary and key step for multimodality image fusion. This paper offers a solution to medical image registration based on normalized multi-dimensional mutual information. Firstly, affine transformation with translational and rotational parameters is applied to the floating image. Then ordinal features are extracted by ordinal filters with different orientations to represent spatial information in medical images. Integrating ordinal features with pixel intensities, the normalized multi-dimensional mutual information is defined as similarity criterion to register multimodality images. Finally the immune algorithm is used to search registration parameters. The experimental results demonstrate the effectiveness of the proposed registration scheme.
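The similarity criterion reduces to mutual information estimated from a joint histogram. The sketch below computes a normalized mutual information from pixel intensities only; the paper's multidimensional variant additionally stacks ordinal-filter features, which is not reproduced here.

```python
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=64):
    """NMI = (H(A) + H(B)) / H(A, B), estimated from the joint intensity histogram."""
    hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```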
Performance Evaluation of Fusing Protected Fingerprint Minutiae Templates on the Decision Level
Yang, Bian; Busch, Christoph; de Groot, Koen; Xu, Haiyun; Veldhuis, Raymond N. J.
2012-01-01
In a biometric authentication system using protected templates, a pseudonymous identifier is the part of a protected template that can be directly compared. Each compared pair of pseudonymous identifiers results in a decision testing whether both identifiers are derived from the same biometric characteristic. Compared to an unprotected system, most existing biometric template protection methods cause a certain degree of degradation in biometric performance. Fusion is therefore a promising way to enhance the biometric performance in template-protected biometric systems. Compared to feature level fusion and score level fusion, decision level fusion has not only the least fusion complexity, but also the maximum interoperability across different biometric features, template protection and recognition algorithms, template formats, and comparison score rules. However, performance improvement via decision level fusion is not obvious. It is influenced by both the dependency and the performance gap among the conducted tests for fusion. We investigate in this paper several fusion scenarios (multi-sample, multi-instance, multi-sensor, multi-algorithm, and their combinations) on the binary decision level, and evaluate their biometric performance and fusion efficiency on a multi-sensor fingerprint database with 71,994 samples. PMID:22778583
Subbiah, Vivek; McMahon, Caitlin; Patel, Shreyaskumar; Zinner, Ralph; Silva, Elvio G; Elvin, Julia A; Subbiah, Ishwaria M; Ohaji, Chimela; Ganeshan, Dhakshina Moorthy; Anand, Deepa; Levenback, Charles F; Berry, Jenny; Brennan, Tim; Chmielecki, Juliann; Chalmers, Zachary R; Mayfield, John; Miller, Vincent A; Stephens, Philip J; Ross, Jeffrey S; Ali, Siraj M
2015-06-11
Recurrent, metastatic mesenchymal myxoid tumors of the gynecologic tract present a management challenge as there is minimal evidence to guide systemic therapy. Such tumors also present a diagnostic dilemma, as myxoid features are observed in leiomyosarcomas, inflammatory myofibroblastic tumors (IMT), and mesenchymal myxoid tumors. Comprehensive genomic profiling was performed in the course of clinical care on a case of a recurrent, metastatic myxoid uterine malignancy (initially diagnosed as smooth muscle tumor of uncertain malignant potential (STUMP)) to identify targeted therapeutic options. To our knowledge, this case represents the first report of clinical response to targeted therapy in a tumor harboring a DCTN1-ALK fusion protein. Hybridization capture of 315 cancer-related genes plus introns from 28 genes often rearranged or altered in cancer was applied to >50 ng of DNA extracted from this sample and sequenced to high, uniform coverage. Therapy was given in the context of a phase I clinical trial (ClinicalTrials.gov identifier: NCT01548144). Immunostains showed diffuse positivity for ALK1 expression and comprehensive genomic profiling identified an in-frame DCTN1-ALK gene fusion. The diagnosis of STUMP was revised to that of an IMT with myxoid features. The patient was enrolled in a clinical trial and treated with an anaplastic lymphoma kinase (ALK) inhibitor (crizotinib/Xalkori®) and a multikinase VEGF inhibitor (pazopanib/Votrient®). The patient experienced an ongoing partial response (6+ months) by response evaluation criteria in solid tumors (RECIST) 1.1 criteria. For myxoid tumors of the gynecologic tract, comprehensive genomic profiling can identify clinically relevant genomic alterations that both direct targeted therapy and help discriminate between similar diagnostic entities.
Bielle, Franck; Di Stefano, Anna-Luisa; Meyronet, David; Picca, Alberto; Villa, Chiara; Bernier, Michèle; Schmitt, Yohann; Giry, Marine; Rousseau, Audrey; Figarella-Branger, Dominique; Maurage, Claude-Alain; Uro-Coste, Emmanuelle; Lasorella, Anna; Iavarone, Antonio; Sanson, Marc; Mokhtari, Karima
2017-10-04
Adult glioblastomas, IDH-wildtype represent a heterogeneous group of diseases. They are resistant to conventional treatment by concomitant radiochemotherapy and carry a dismal prognosis. The discovery of oncogenic gene fusions in these tumors has led to prospective targeted treatments, but identification of these rare alterations in practice is challenging. Here, we report a series of 30 adult diffuse gliomas with an in frame FGFR3-TACC3 oncogenic fusion (n = 27 WHO grade IV and n = 3 WHO grade II) as well as their histological and molecular features. We observed recurrent morphological features (monomorphous ovoid nuclei, nuclear palisading and thin parallel cytoplasmic processes, endocrinoid network of thin capillaries) associated with frequent microcalcifications and desmoplasia. We report a constant immunoreactivity for FGFR3, which is a valuable method for screening for the FGFR3-TACC3 fusion with 100% sensitivity and 92% specificity. We confirmed the associated molecular features (typical genetic alterations of glioblastoma, except the absence of EGFR amplification, and an increased frequency of CDK4 and MDM2 amplifications). FGFR3 immunopositivity is a valuable tool to identify gliomas that are likely to harbor the FGFR3-TACC3 fusion for inclusion in targeted therapeutic trials. © 2017 International Society of Neuropathology.
A Foreign Object Damage Event Detector Data Fusion System for Turbofan Engines
NASA Technical Reports Server (NTRS)
Turso, James A.; Litt, Jonathan S.
2004-01-01
A Data Fusion System designed to provide a reliable assessment of the occurrence of Foreign Object Damage (FOD) in a turbofan engine is presented. The FOD-event feature level fusion scheme combines knowledge of shifts in engine gas path performance obtained using a Kalman filter, with bearing accelerometer signal features extracted via wavelet analysis, to positively identify a FOD event. A fuzzy inference system provides basic probability assignments (bpa) based on features extracted from the gas path analysis and bearing accelerometers to a fusion algorithm based on the Dempster-Shafer-Yager Theory of Evidence. Details are provided on the wavelet transforms used to extract the foreign object strike features from the noisy data and on the Kalman filter-based gas path analysis. The system is demonstrated using a turbofan engine combined-effects model (CEM), providing both gas path and rotor dynamic structural response, and is suitable for rapid-prototyping of control and diagnostic systems. The fusion of the disparate data can provide significantly more reliable detection of a FOD event than the use of either method alone. The use of fuzzy inference techniques combined with Dempster-Shafer-Yager Theory of Evidence provides a theoretical justification for drawing conclusions based on imprecise or incomplete data.
Heidari, Morteza; Khuzani, Abolfazl Zargari; Hollingsworth, Alan B; Danala, Gopichandh; Mirniaharikandehei, Seyedehnafiseh; Qiu, Yuchen; Liu, Hong; Zheng, Bin
2018-01-30
In order to automatically identify a set of effective mammographic image features and build an optimal breast cancer risk stratification model, this study aims to investigate advantages of applying a machine learning approach embedded with a locally preserving projection (LPP) based feature combination and regeneration algorithm to predict short-term breast cancer risk. A dataset involving negative mammograms acquired from 500 women was assembled. This dataset was divided into two age-matched classes of 250 high risk cases in which cancer was detected in the next subsequent mammography screening and 250 low risk cases, which remained negative. First, a computer-aided image processing scheme was applied to segment fibro-glandular tissue depicted on mammograms and initially compute 44 features related to the bilateral asymmetry of mammographic tissue density distribution between left and right breasts. Next, a multi-feature fusion based machine learning classifier was built to predict the risk of cancer detection in the next mammography screening. A leave-one-case-out (LOCO) cross-validation method was applied to train and test the machine learning classifier embedded with an LPP algorithm, which generated a new operational vector with 4 features using a maximal variance approach in each LOCO process. Results showed a 9.7% increase in risk prediction accuracy when using this LPP-embedded machine learning approach. An increased trend of adjusted odds ratios was also detected in which odds ratios increased from 1.0 to 11.2. This study demonstrated that applying the LPP algorithm effectively reduced feature dimensionality, and yielded higher and potentially more robust performance in predicting short-term breast cancer risk.
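LPP itself reduces to a small generalized eigenproblem. The following sketch, assuming a heat-kernel k-nearest-neighbor affinity graph and hypothetical parameter values, shows one common formulation; it is an illustration, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def lpp(X, n_components=4, n_neighbors=10, t=1.0):
    """Locality/locally preserving projection: returns a (n_features, n_components)
    projection matrix; project with Z = X @ P. Parameters here are assumptions."""
    W = kneighbors_graph(X, n_neighbors, mode='distance', include_self=False).toarray()
    W = np.where(W > 0, np.exp(-(W ** 2) / t), 0.0)   # heat-kernel weights
    W = np.maximum(W, W.T)                            # symmetrize the graph
    D = np.diag(W.sum(axis=1))
    L = D - W                                         # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])       # regularize for stability
    vals, vecs = eigh(A, B)                           # ascending eigenvalues
    return vecs[:, :n_components]

# usage sketch: P = lpp(X_train); Z_train = X_train @ P
```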
NASA Astrophysics Data System (ADS)
Heidari, Morteza; Zargari Khuzani, Abolfazl; Hollingsworth, Alan B.; Danala, Gopichandh; Mirniaharikandehei, Seyedehnafiseh; Qiu, Yuchen; Liu, Hong; Zheng, Bin
2018-02-01
In order to automatically identify a set of effective mammographic image features and build an optimal breast cancer risk stratification model, this study aims to investigate advantages of applying a machine learning approach embedded with a locally preserving projection (LPP) based feature combination and regeneration algorithm to predict short-term breast cancer risk. A dataset involving negative mammograms acquired from 500 women was assembled. This dataset was divided into two age-matched classes of 250 high risk cases in which cancer was detected in the next subsequent mammography screening and 250 low risk cases, which remained negative. First, a computer-aided image processing scheme was applied to segment fibro-glandular tissue depicted on mammograms and initially compute 44 features related to the bilateral asymmetry of mammographic tissue density distribution between left and right breasts. Next, a multi-feature fusion based machine learning classifier was built to predict the risk of cancer detection in the next mammography screening. A leave-one-case-out (LOCO) cross-validation method was applied to train and test the machine learning classifier embedded with an LPP algorithm, which generated a new operational vector with 4 features using a maximal variance approach in each LOCO process. Results showed a 9.7% increase in risk prediction accuracy when using this LPP-embedded machine learning approach. An increased trend of adjusted odds ratios was also detected in which odds ratios increased from 1.0 to 11.2. This study demonstrated that applying the LPP algorithm effectively reduced feature dimensionality, and yielded higher and potentially more robust performance in predicting short-term breast cancer risk.
Improvement of information fusion-based audio steganalysis
NASA Astrophysics Data System (ADS)
Kraetzer, Christian; Dittmann, Jana
2010-01-01
In the paper we extend an existing information fusion based audio steganalysis approach by three different kinds of evaluations: The first evaluation addresses the so far neglected evaluations on sensor level fusion. Our results show that this fusion removes content dependability while being capable of achieving similar classification rates (especially for the considered global features) if compared to single classifiers on the three exemplarily tested audio data hiding algorithms. The second evaluation enhances the observations on fusion from considering only segmental features to combinations of segmental and global features, with the result of a reduction of the required computational complexity for testing by about two orders of magnitude while maintaining the same degree of accuracy. The third evaluation tries to build a basis for estimating the plausibility of the introduced steganalysis approach by measuring the sensitivity of the models used in supervised classification of steganographic material against typical signal modification operations like de-noising or 128 kbit/s MP3 encoding. Our results show that for some of the tested classifiers the probability of false alarms rises dramatically after such modifications.
Mok, Yingting; Pang, Yin Huei; Sanjeev, Jain Sudhanshi; Kuick, Chik Hong; Chang, Kenneth Tou-En
2018-01-01
Low-grade fibromyxoid sarcoma (LGFMS) and sclerosing epithelioid fibrosarcoma (SEF) are rare tumors with distinct sets of morphological features, both characterized by MUC4 immunoreactivity. Tumors exhibiting features of both entities are considered hybrid LGFMS-SEF lesions. While the majority of LGFMS cases are characterized by FUS-CREB3L2 gene fusions, most cases of pure SEF show EWSR1 gene rearrangements. In the largest study of hybrid LGFMS-SEF tumors to date, all cases exhibited FUS rearrangements, a similar genetic profile to LGFMS. We herein describe the clinicopathological features and genetic findings of a case of primary renal hybrid LGFMS-SEF occurring in a 10-year-old child, with disseminated metastases. Fusion gene detection using a next-generation sequencing-based anchored multiplex PCR technique (Archer FusionPlex Sarcoma Panel) was performed on both the primary renal tumor that showed the morphology of a LGFMS, and a cervical metastasis that showed the morphology of SEF. An EWSR1-CREB3L1 gene fusion occurring between exon 11 of EWSR1 and exon 6 of CREB3L1 was present in both the LGFMS and SEF components. This unusual case provides evidence that a subset of hybrid LGFMS-SEF harbor EWSR1-CREB3L1 gene fusions. In this case, these features were associated with an aggressive clinical course, with disease-associated mortality occurring within 12 months of diagnosis.
NASA Astrophysics Data System (ADS)
Mesbah, Mostefa; Balakrishnan, Malarvili; Colditz, Paul B.; Boashash, Boualem
2012-12-01
This article proposes a new method for newborn seizure detection that uses information extracted from both multi-channel electroencephalogram (EEG) and a single channel electrocardiogram (ECG). The aim of the study is to assess whether additional information extracted from ECG can improve the performance of seizure detectors based solely on EEG. Two different approaches were used to combine this extracted information. The first approach, known as feature fusion, involves combining features extracted from EEG and heart rate variability (HRV) into a single feature vector prior to feeding it to a classifier. The second approach, called classifier or decision fusion, is achieved by combining the independent decisions of the EEG and the HRV-based classifiers. Tested on recordings obtained from eight newborns with identified EEG seizures, the proposed neonatal seizure detection algorithms achieved 95.20% sensitivity and 88.60% specificity for the feature fusion case and 95.20% sensitivity and 94.30% specificity for the classifier fusion case. These results are considerably better than those involving classifiers using EEG only (80.90%, 86.50%) or HRV only (85.70%, 84.60%).
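The two fusion strategies compared in the study can be summarized in a few lines; the sketch below uses an SVM as a stand-in classifier and equal weighting for the decision fusion, both of which are assumptions rather than the paper's exact choices.

```python
import numpy as np
from sklearn.svm import SVC

def feature_fusion_classifier(X_eeg, X_hrv, y):
    """Feature-level fusion: concatenate EEG and HRV feature vectors
    before training a single classifier."""
    X = np.hstack([X_eeg, X_hrv])
    return SVC(probability=True).fit(X, y)

def decision_fusion_predict(clf_eeg, clf_hrv, X_eeg, X_hrv, w_eeg=0.5):
    """Classifier-level fusion: weighted combination of the two classifiers'
    seizure probabilities (equal weights here, as an assumption)."""
    p = (w_eeg * clf_eeg.predict_proba(X_eeg)[:, 1]
         + (1 - w_eeg) * clf_hrv.predict_proba(X_hrv)[:, 1])
    return (p >= 0.5).astype(int)
```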
Spherical torus fusion reactor
Peng, Yueng-Kay M.
1989-04-04
A fusion reactor is provided having a near spherical-shaped plasma with a modest central opening through which straight segments of toroidal field coils extend that carry electrical current for generating toroidal magnetic plasma confinement fields. By retaining only the indispensable components inboard of the plasma torus, principally the cooled toroidal field conductors and in some cases a vacuum containment vessel wall, the fusion reactor features an exceptionally small aspect ratio (typically about 1.5), a naturally elongated plasma cross section without extensive field shaping, low strength magnetic containment fields, small size and high beta. These features combine to produce a spherical torus plasma in a unique physics regime which permits compact fusion at low field and modest cost.
NASA Astrophysics Data System (ADS)
Pal, S. K.; Majumdar, T. J.; Bhattacharya, Amit K.
Fusion of optical and synthetic aperture radar data has been attempted in the present study for mapping of various lithologic units over a part of the Singhbhum Shear Zone (SSZ) and its surroundings. ERS-2 SAR data over the study area has been enhanced using Fast Fourier Transformation (FFT) based filtering approach, and also using Frost filtering technique. Both the enhanced SAR imagery have been then separately fused with histogram equalized IRS-1C LISS III image using Principal Component Analysis (PCA) technique. Later, Feature-oriented Principal Components Selection (FPCS) technique has been applied to generate False Color Composite (FCC) images, from which corresponding geological maps have been prepared. Finally, GIS techniques have been successfully used for change detection analysis in the lithological interpretation between the published geological map and the fusion based geological maps. In general, there is good agreement between these maps over a large portion of the study area. Based on the change detection studies, few areas could be identified which need attention for further detailed ground-based geological studies.
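The PCA fusion step follows the familiar component-substitution pattern. A minimal sketch, assuming co-registered inputs and using mean/standard-deviation matching in place of full histogram equalization, is given below; it is an illustration rather than the exact processing chain of the study.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_fusion(ms_bands, sar_band):
    """PCA-based fusion: project the multispectral bands onto principal components,
    replace PC1 with the statistically matched SAR band, then invert the transform.
    ms_bands: (rows, cols, n_bands); sar_band: (rows, cols), co-registered."""
    r, c, n = ms_bands.shape
    X = ms_bands.reshape(-1, n).astype(float)
    pca = PCA(n_components=n)
    pcs = pca.fit_transform(X)
    # match the SAR band to PC1's mean and standard deviation before substitution
    sar = sar_band.astype(float).ravel()
    sar = (sar - sar.mean()) / (sar.std() + 1e-12) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = sar
    return pca.inverse_transform(pcs).reshape(r, c, n)
```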
Applying data fusion techniques for benthic habitat mapping and monitoring in a coral reef ecosystem
NASA Astrophysics Data System (ADS)
Zhang, Caiyun
2015-06-01
Accurate mapping and effective monitoring of benthic habitat in the Florida Keys are critical in developing management strategies for this valuable coral reef ecosystem. For this study, a framework was designed for automated benthic habitat mapping by combining multiple data sources (hyperspectral, aerial photography, and bathymetry data) and four contemporary imagery processing techniques (data fusion, Object-based Image Analysis (OBIA), machine learning, and ensemble analysis). In the framework, 1-m digital aerial photography was first merged with 17-m hyperspectral imagery and 10-m bathymetry data using a pixel/feature-level fusion strategy. The fused dataset was then preclassified by three machine learning algorithms (Random Forest, Support Vector Machines, and k-Nearest Neighbor). Final object-based habitat maps were produced through ensemble analysis of the outcomes from the three classifiers. The framework was tested for classifying group-level (3-class) and code-level (9-class) habitats in a portion of the Florida Keys. Informative and accurate habitat maps were achieved with overall accuracies of 88.5% and 83.5% for the group-level and code-level classifications, respectively.
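As a rough sketch of the ensemble-analysis step (not the study's actual pipeline), the three classifiers named above can be combined by majority voting over hypothetical fused features; the data shapes and class labels below are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 12))          # hypothetical fused spectral/bathymetry features
y = rng.integers(0, 3, size=500)        # hypothetical 3-class group-level habitat labels

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200)),
                ("svm", SVC()),
                ("knn", KNeighborsClassifier(n_neighbors=5))],
    voting="hard",                      # majority vote across the three classifiers
)
ensemble.fit(X, y)
habitat_map = ensemble.predict(X)
```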
Improving nondestructive characterization of dual phase steels using data fusion
NASA Astrophysics Data System (ADS)
Kahrobaee, Saeed; Haghighi, Mehdi Salkhordeh; Akhlaghi, Iman Ahadi
2018-07-01
The aim of this paper is to introduce a novel methodology for nondestructive determination of the microstructural and mechanical properties (arising from various heat treatments), as well as thickness variations (resulting from corrosion), of dual phase steels. The characterizations are based on variations in electromagnetic properties extracted from magnetic hysteresis loop and eddy current methods, which are coupled with a data fusion system. The study was conducted on six groups of samples (with different thicknesses, from 1 mm to 4 mm) subjected to various intercritical annealing processes to produce different fractions of martensite/ferrite phases and, consequently, changes in hardness, yield strength, and ultimate tensile strength (UTS). The study proposes a novel soft computing technique to increase the accuracy of nondestructive measurements and to resolve overlapping NDE outputs from the various samples. The empirical results indicate that applying the proposed data fusion technique to the two electromagnetic NDE data sets increases the accuracy and reliability of nondestructively determining material features including ferrite fraction, hardness, yield strength, UTS, and thickness variations.
Style-based classification of Chinese ink and wash paintings
NASA Astrophysics Data System (ADS)
Sheng, Jiachuan; Jiang, Jianmin
2013-09-01
Following the fact that a large collection of ink and wash paintings (IWP) is being digitized and made available on the Internet, their automated content description, analysis, and management are attracting attention across research communities. While existing research in relevant areas is primarily focused on image processing approaches, a style-based algorithm is proposed to classify IWPs automatically by their authors. As IWPs do not have colors or even tones, the proposed algorithm applies edge detection to locate local regions and detect painting strokes, enabling histogram-based feature extraction and the capture of important cues that reflect the styles of different artists. Such features are then used to drive a number of neural networks in parallel to complete the classification, and an information-entropy-balanced fusion is proposed to make an integrated decision from the multiple neural network outputs, in which the entropy guides the combination of global and local features. Evaluations via experiments support that the proposed algorithm achieves good performance, providing excellent potential for computerized analysis and management of IWPs.
Improved medical image fusion based on cascaded PCA and shift invariant wavelet transforms.
Reena Benjamin, J; Jayasree, T
2018-02-01
In the medical field, radiologists need more informative and high-quality medical images to diagnose diseases. Image fusion plays a vital role in the field of biomedical image analysis. It aims to integrate the complementary information from multimodal images, producing a new composite image which is expected to be more informative for visual perception than any of the individual input images. The main objective of this paper is to improve the information content, preserve the edges, and enhance the quality of the fused image using cascaded principal component analysis (PCA) and shift-invariant wavelet transforms. A novel image fusion technique based on cascaded PCA and shift-invariant wavelet transforms is proposed in this paper. PCA in the spatial domain extracts relevant information from the large dataset based on eigenvalue decomposition, and the wavelet transform operating in the complex domain with shift-invariant properties brings out more directional and phase details of the image. The maximum fusion rule applied in the dual-tree complex wavelet transform domain enhances the average information and morphological details. The input images of the human brain of two different modalities (MRI and CT) are collected from the whole brain atlas data distributed by Harvard University. Both MRI and CT images are fused using the cascaded PCA and shift-invariant wavelet transform method. The proposed method is evaluated based on three main key factors, namely structure preservation, edge preservation, and contrast preservation. The experimental results and comparison with other existing fusion methods show the superior performance of the proposed image fusion framework in terms of visual and quantitative evaluations. The experimental results also demonstrate that the proposed method enhances the directional features as well as fine edge details, while reducing redundant details, artifacts, and distortions.
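The PCA stage of such a cascade can be sketched as follows. This is a minimal illustration of plain PCA-weighted fusion of two registered grayscale slices, using random placeholder data; the paper's cascaded shift-invariant wavelet stage and specific fusion rules are omitted.

```python
import numpy as np

def pca_fuse(img_a, img_b):
    """Fuse two same-sized images with weights from the principal eigenvector."""
    data = np.stack([img_a.ravel(), img_b.ravel()]).astype(float)
    cov = np.cov(data)                       # 2 x 2 covariance of the two images
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = np.abs(eigvecs[:, np.argmax(eigvals)])
    w = v / v.sum()                          # normalized fusion weights
    return w[0] * img_a + w[1] * img_b

# Hypothetical registered MRI / CT slices of the same size:
mri = np.random.rand(256, 256)
ct = np.random.rand(256, 256)
fused = pca_fuse(mri, ct)
```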
deepNF: Deep network fusion for protein function prediction.
Gligorijevic, Vladimir; Barot, Meet; Bonneau, Richard
2018-06-01
The prevalence of high-throughput experimental methods has resulted in an abundance of large-scale molecular and functional interaction networks. The connectivity of these networks provides a rich source of information for inferring functional annotations for genes and proteins. An important challenge has been to develop methods for combining these heterogeneous networks to extract useful protein feature representations for function prediction. Most of the existing approaches for network integration use shallow models that encounter difficulty in capturing complex and highly-nonlinear network structures. Thus, we propose deepNF, a network fusion method based on Multimodal Deep Autoencoders to extract high-level features of proteins from multiple heterogeneous interaction networks. We apply this method to combine STRING networks to construct a common low-dimensional representation containing high-level protein features. We use separate layers for different network types in the early stages of the multimodal autoencoder, later connecting all the layers into a single bottleneck layer from which we extract features to predict protein function. We compare the cross-validation and temporal holdout predictive performance of our method with state-of-the-art methods, including the recently proposed method Mashup. Our results show that our method outperforms previous methods for both human and yeast STRING networks. We also show substantial improvement in the performance of our method in predicting GO terms of varying type and specificity. deepNF is freely available at: https://github.com/VGligorijevic/deepNF. vgligorijevic@flatironinstitute.org, rb133@nyu.edu. Supplementary data are available at Bioinformatics online.
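A minimal multimodal-autoencoder sketch in the spirit of this architecture (assumed layer sizes, only two input networks, written in PyTorch rather than the authors' code) is shown below: each network type has its own encoder branch, the branches meet in a shared bottleneck, and the bottleneck activations serve as the protein features.

```python
import torch
import torch.nn as nn

class MultimodalAE(nn.Module):
    def __init__(self, dims=(2000, 2000), hidden=600, bottleneck=256):
        super().__init__()
        # One early encoder branch per network type.
        self.enc = nn.ModuleList([nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for d in dims])
        self.bottleneck = nn.Linear(hidden * len(dims), bottleneck)
        self.dec_shared = nn.Sequential(nn.Linear(bottleneck, hidden * len(dims)), nn.ReLU())
        self.dec = nn.ModuleList([nn.Linear(hidden, d) for d in dims])
        self.hidden = hidden

    def forward(self, xs):
        # Concatenate branch outputs, compress to the shared bottleneck.
        z = self.bottleneck(torch.cat([e(x) for e, x in zip(self.enc, xs)], dim=1))
        h = self.dec_shared(z)
        chunks = torch.split(h, self.hidden, dim=1)
        return z, [dec(c) for dec, c in zip(self.dec, chunks)]

model = MultimodalAE()
features, reconstructions = model([torch.rand(8, 2000), torch.rand(8, 2000)])
```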
The continuum fusion theory of signal detection applied to a bi-modal fusion problem
NASA Astrophysics Data System (ADS)
Schaum, A.
2011-05-01
A new formalism has been developed that produces detection algorithms for model-based problems in which one or more parameter values are unknown. Continuum Fusion can be used to generate different flavors of algorithm for any composite hypothesis testing problem. The methodology is defined by a fusion logic that can be translated into max/min conditions. Here it is applied to a simple sensor fusion model, but one for which the generalized likelihood ratio test is intractable. By contrast, a fusion-based response to the same problem can be devised that is solvable in closed form and represents a good approximation to the GLR test.
Bhateja, Vikrant; Moin, Aisha; Srivastava, Anuja; Bao, Le Nguyen; Lay-Ekuakille, Aimé; Le, Dac-Nhuong
2016-07-01
Computer based diagnosis of Alzheimer's disease can be performed by dint of the analysis of the functional and structural changes in the brain. Multispectral image fusion deliberates upon fusion of the complementary information while discarding the surplus information to achieve a solitary image which encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer's disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT followed by dimensionality reduction using modified Principal Component Analysis algorithm on the low frequency coefficients. Further, the high frequency coefficients are enhanced using non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: Phase congruency is applied to low frequency coefficients and a combination of directive contrast and normalized Shannon entropy is applied to high frequency coefficients. The superiority of the fusion response is depicted by the comparisons made with the other state-of-the-art fusion approaches (in terms of various fusion metrics).
Investigations of image fusion
NASA Astrophysics Data System (ADS)
Zhang, Zhong
1999-12-01
The objective of image fusion is to combine information from multiple images of the same scene. The result of image fusion is a single image which is more suitable for the purpose of human visual perception or further image processing tasks. In this thesis, a region-based fusion algorithm using the wavelet transform is proposed. The identification of important features in each image, such as edges and regions of interest, is used to guide the fusion process. The idea of multiscale grouping is also introduced and a generic image fusion framework based on multiscale decomposition is studied. The framework includes all of the existing multiscale-decomposition-based fusion approaches we found in the literature which did not assume a statistical model for the source images. Comparisons indicate that our framework includes some new approaches which outperform the existing approaches for the cases we consider. Registration must precede our fusion algorithms, so we proposed a hybrid scheme which uses both feature-based and intensity-based methods. The idea of robust estimation of optical flow from time-varying images is employed with a coarse-to-fine multi-resolution approach and feature-based registration to overcome some of the limitations of the intensity-based schemes. Experiments show that this approach is robust and efficient. Assessing image fusion performance in a real application is a complicated issue. In this dissertation, a mixture probability density function model is used in conjunction with the Expectation-Maximization algorithm to model histograms of edge intensity. Some new techniques are proposed for estimating the quality of a noisy image of a natural scene. Such quality measures can be used to guide the fusion. Finally, we study fusion of images obtained from several copies of a new type of camera developed for video surveillance. Our techniques increase the capability and reliability of the surveillance system and provide an easy way to obtain 3-D information about objects in the space monitored by the system.
NASA Astrophysics Data System (ADS)
Li, Jun; Song, Minghui; Peng, Yuanxi
2018-03-01
Current infrared and visible image fusion methods do not achieve adequate information extraction, i.e., they cannot extract the target information from infrared images while retaining the background information from visible images. Moreover, most of them have high complexity and are time-consuming. This paper proposes an efficient image fusion framework for infrared and visible images on the basis of robust principal component analysis (RPCA) and compressed sensing (CS). The novel framework consists of three phases. First, RPCA decomposition is applied to the infrared and visible images to obtain their sparse and low-rank components, which represent the salient features and background information of the images, respectively. Second, the sparse and low-rank coefficients are fused by different strategies. On the one hand, the measurements of the sparse coefficients are obtained by the random Gaussian matrix, and they are then fused by the standard deviation (SD) based fusion rule. Next, the fused sparse component is obtained by reconstructing the result of the fused measurement using the fast continuous linearized augmented Lagrangian algorithm (FCLALM). On the other hand, the low-rank coefficients are fused using the max-absolute rule. Subsequently, the fused image is superposed by the fused sparse and low-rank components. For comparison, several popular fusion algorithms are tested experimentally. By comparing the fused results subjectively and objectively, we find that the proposed framework can extract the infrared targets while retaining the background information in the visible images. Thus, it exhibits state-of-the-art performance in terms of both fusion effects and timeliness.
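A heavily simplified sketch of the decompose-fuse-recombine idea follows. A truncated SVD stands in for the RPCA step and the compressed-sensing measurement/recovery stage is omitted entirely; the sparse parts are fused by a max-absolute rule and the low-rank parts by averaging, which only approximates the rules described above.

```python
import numpy as np

def low_rank_sparse(img, rank=5):
    """Split an image into a rank-limited background and a salient residual."""
    u, s, vt = np.linalg.svd(img, full_matrices=False)
    low_rank = (u[:, :rank] * s[:rank]) @ vt[:rank]
    return low_rank, img - low_rank

ir = np.random.rand(128, 128)                 # hypothetical infrared image
vis = np.random.rand(128, 128)                # hypothetical visible image

lr_ir, sp_ir = low_rank_sparse(ir)
lr_vis, sp_vis = low_rank_sparse(vis)

# Fuse salient (sparse-like) parts by max-absolute, background parts by averaging.
sparse_fused = np.where(np.abs(sp_ir) >= np.abs(sp_vis), sp_ir, sp_vis)
low_rank_fused = 0.5 * (lr_ir + lr_vis)
fused = low_rank_fused + sparse_fused
```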
A new multi-spectral feature level image fusion method for human interpretation
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-03-01
Various different methods to perform multi-spectral image fusion have been suggested, mostly on the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature level processing paradigm. To test our method, we compared human observer performance in a three-task experiment using MSSF against two established methods: averaging and principal components analysis (PCA), and against its two source bands, visible and infrared. The three tasks that we studied were: (1) simple target detection, (2) spatial orientation, and (3) camouflaged target detection. MSSF proved superior to the other fusion methods in all three tests; MSSF also outperformed the source images in the spatial orientation and camouflaged target detection tasks. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general and specific fusion methods in particular would be superior to using the original image sources can be further addressed.
SFM: A novel sequence-based fusion method for disease genes identification and prioritization.
Yousef, Abdulaziz; Moghadam Charkari, Nasrollah
2015-10-21
The identification of disease genes from the human genome is of great importance for improving the diagnosis and treatment of disease. Several machine learning methods have been introduced to identify disease genes. However, these methods mostly differ in the prior knowledge used to construct the feature vector for each instance (gene), the way negative data (non-disease genes) are selected, given that there is no experimental approach to identify them, and the classification methods used to make the final decision. In this work, a novel sequence-based fusion method (SFM) is proposed to identify disease genes. In this regard, unlike existing methods, instead of using noisy and incomplete prior knowledge, the amino acid sequences of the proteins, which are universally available data, are used to represent the genes (proteins) as four different feature vectors. To select more likely negative data from the candidate genes, the intersection of four negative sets generated using a distance-based approach is considered. Then, a decision tree (C4.5) is applied as a fusion method to combine the results of four independent state-of-the-art predictors based on the support vector machine (SVM) algorithm and to make the final decision. The experimental results of the proposed method have been evaluated using standard measures. The results indicate precision, recall, and F-measure of 82.6%, 85.6%, and 84%, respectively. These results confirm the efficiency and validity of the proposed method. Copyright © 2015 Elsevier Ltd. All rights reserved.
Technology transfer: the key to fusion commercialization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burnett, S.C.
1981-01-01
The paper brings to light some of the reasons why technology transfer is difficult in fusion, examines some of the impediments to the process, and finally looks at a successful example of technology transfer. The paper considers some subjective features of fusion - one might call them the sociology of fusion - that are none the less real and that serve as impediments to technology transfer.
Zhong, Shan; Zhang, Haiping; Bai, Dongyu; Gao, Dehong; Zheng, Jie; Ding, Yi
2015-09-01
To study the prevalence of ALK, ROS1 and RET fusion genes in non-small cell lung cancer (NSCLC), and their correlation with clinicopathologic features. Formalin-fixed and paraffin-embedded tissue sections from samples of 302 patients with NSCLC were screened for ALK, ROS1 and RET fusions by real-time polymerase chain reaction (PCR). All of the cases were validated by Sanger DNA sequencing. The relationship between ALK, ROS1 and RET fusion genes and clinicopathologic features was analyzed. In the cohort of 302 NSCLC samples, 3.97% (12/302) were found to contain ALK fusion genes, including 3 cases with E13; A20 gene fusion, 3 cases with E6; A20 gene fusion and 3 cases with E20; A20 gene fusion. There was no statistically significant difference with respect to patients' gender, age, smoking history or histologic type. Moreover, in the 302 NSCLC samples studied, 3.97% (12/302) were found to contain ROS1 fusion genes, with the CD74-ROS1 fusion identified in 9 cases. Again, there was no statistically significant difference with respect to patients' gender, age, smoking history or histologic type. One non-smoking elderly female patient with pulmonary adenocarcinoma had a RET gene fusion. None of the cases studied had concurrent ALK, ROS1 and RET mutations. The ALK, ROS1 and RET fusion gene mutation rates in NSCLC are low; they represent specific molecular subtypes of NSCLC. Genetic testing is of significant value in guiding clinical targeted therapy.
Joint Facial Action Unit Detection and Feature Fusion: A Multi-conditional Learning Approach.
Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja
2016-10-05
Automated analysis of facial expressions can benefit many domains, from marketing to clinical diagnosis of neurodevelopmental disorders. Facial expressions are typically encoded as a combination of facial muscle activations, i.e., action units. Depending on context, these action units co-occur in specific patterns, and rarely in isolation. Yet, most existing methods for automatic action unit detection fail to exploit dependencies among them, and the corresponding facial features. To address this, we propose a novel multi-conditional latent variable model for simultaneous fusion of facial features and joint action unit detection. Specifically, the proposed model performs feature fusion in a generative fashion via a low-dimensional shared subspace, while simultaneously performing action unit detection using a discriminative classification approach. We show that by combining the merits of both approaches, the proposed methodology outperforms existing purely discriminative/generative methods for the target task. To reduce the number of parameters, and avoid overfitting, a novel Bayesian learning approach based on Monte Carlo sampling is proposed, to integrate out the shared subspace. We validate the proposed method on posed and spontaneous data from three publicly available datasets (CK+, DISFA and Shoulder-pain), and show that both feature fusion and joint learning of action units leads to improved performance compared to the state-of-the-art methods for the task.
Real-Time Visual Tracking through Fusion Features
Ruan, Yang; Wei, Zhenzhong
2016-01-01
Due to their high speed, correlation filters for object tracking have begun to receive increasing attention. Traditional object trackers based on correlation filters typically use a single type of feature. In this paper, we attempt to integrate multiple feature types to improve the performance, and we propose a new DD-HOG fusion feature that consists of discriminative descriptors (DDs) and histograms of oriented gradients (HOG). However, fusion features as multi-vector descriptors cannot be directly used in prior correlation filters. To overcome this difficulty, we propose a multi-vector correlation filter (MVCF) that can directly convolve with a multi-vector descriptor to obtain a single-channel response that indicates the location of an object. Experiments on the CVPR2013 tracking benchmark with the evaluation of state-of-the-art trackers show the effectiveness and speed of the proposed method. Moreover, we show that our MVCF tracker, which uses the DD-HOG descriptor, outperforms the structure-preserving object tracker (SPOT) in multi-object tracking because of its high speed and ability to address heavy occlusion. PMID:27347951
Target detection method by airborne and spaceborne images fusion based on past images
NASA Astrophysics Data System (ADS)
Chen, Shanjing; Kang, Qing; Wang, Zhenggang; Shen, ZhiQiang; Pu, Huan; Han, Hao; Gu, Zhongzheng
2017-11-01
To address the problems that remote sensing target detection methods make little use of past remote sensing data of the target area and cannot recognize camouflaged targets accurately, a target detection method based on fusion of airborne and spaceborne images with past images is proposed in this paper. The past spaceborne remote sensing image of the target area is taken as the background. The airborne and spaceborne remote sensing data are fused and target features are extracted by means of airborne-spaceborne image registration, target change feature extraction, background noise suppression, and artificial target feature extraction based on the real-time aerial optical remote sensing image. Finally, a support vector machine is used to detect and recognize the target on the fused feature data. The experimental results establish that the proposed method combines the target-area change features of airborne and spaceborne remote sensing images with the target detection algorithm and obtains good detection and recognition performance on both camouflaged and non-camouflaged targets.
NASA Astrophysics Data System (ADS)
Liu, Jiamin; Chang, Kevin; Kim, Lauren; Turkbey, Evrim; Lu, Le; Yao, Jianhua; Summers, Ronald
2015-03-01
The thyroid gland plays an important role in clinical practice, especially for radiation therapy treatment planning. For patients with head and neck cancer, radiation therapy requires a precise delineation of the thyroid gland to be spared on the pre-treatment planning CT images to avoid thyroid dysfunction. In the current clinical workflow, the thyroid gland is normally manually delineated by radiologists or radiation oncologists, which is time consuming and error prone. Therefore, a system for automated segmentation of the thyroid is desirable. However, automated segmentation of the thyroid is challenging because the thyroid is inhomogeneous and surrounded by structures that have similar intensities. In this work, the thyroid gland segmentation is initially estimated by multi-atlas label fusion algorithm. The segmentation is refined by supervised statistical learning based voxel labeling with a random forest algorithm. Multiatlas label fusion (MALF) transfers expert-labeled thyroids from atlases to a target image using deformable registration. Errors produced by label transfer are reduced by label fusion that combines the results produced by all atlases into a consensus solution. Then, random forest (RF) employs an ensemble of decision trees that are trained on labeled thyroids to recognize features. The trained forest classifier is then applied to the thyroid estimated from the MALF by voxel scanning to assign the class-conditional probability. Voxels from the expert-labeled thyroids in CT volumes are treated as positive classes; background non-thyroid voxels as negatives. We applied this automated thyroid segmentation system to CT scans of 20 patients. The results showed that the MALF achieved an overall 0.75 Dice Similarity Coefficient (DSC) and the RF classification further improved the DSC to 0.81.
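The two stages can be sketched roughly as follows (placeholder data and feature names, not the authors' pipeline): atlas labels already warped to the target are fused by per-voxel majority vote, and a random forest trained on expert-labeled voxels then refines the MALF estimate.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical: 5 registered atlas label volumes (1 = thyroid, 0 = background).
atlas_labels = np.random.randint(0, 2, size=(5, 64, 64, 64))
malf_estimate = (atlas_labels.mean(axis=0) > 0.5).astype(int)    # per-voxel majority vote

# Hypothetical per-voxel features (e.g., intensity, smoothed intensity, coordinates).
features = np.random.rand(64 * 64 * 64, 4)
train_mask = np.random.rand(64 * 64 * 64) < 0.1                  # expert-labeled voxels
labels = np.random.randint(0, 2, size=64 * 64 * 64)              # hypothetical voxel labels

rf = RandomForestClassifier(n_estimators=100).fit(features[train_mask], labels[train_mask])
prob_thyroid = rf.predict_proba(features)[:, 1].reshape(64, 64, 64)

# Refine the MALF estimate with the forest's class-conditional probabilities.
refined = ((prob_thyroid > 0.5) & (malf_estimate == 1)).astype(int)
```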
Image fusion algorithm based on energy of Laplacian and PCNN
NASA Astrophysics Data System (ADS)
Li, Meili; Wang, Hongmei; Li, Yanjun; Zhang, Ke
2009-12-01
Owing to the global coupling and pulse synchronization characteristics of pulse coupled neural networks (PCNN), they have been shown to be suitable for image processing and have been successfully employed in image fusion. However, in almost all of the PCNN image-processing literature, the linking strength of each neuron is assigned the same value, chosen by experiment. This is not consistent with the human visual system, in which the responses to regions with notable features are stronger than those to regions without notable features. It is more reasonable to derive the linking strength of each neuron from notable features rather than to assign the same value to every neuron. In this paper, the energy of Laplacian (EOL) is used as the notable feature from which the linking strength in the PCNN is obtained. Experimental results demonstrate that the proposed algorithm outperforms Laplacian-based, wavelet-based, and PCNN-based fusion algorithms.
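A small sketch of the energy-of-Laplacian idea, assuming a 3x3 window and placeholder data: the squared Laplacian response summed over a local window gives a per-pixel value that could serve as the linking strength of the corresponding PCNN neuron (the PCNN itself is not shown).

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def energy_of_laplacian(img, window=3):
    """Windowed sum of squared Laplacian responses (EOL) for each pixel."""
    lap = laplace(img.astype(float))                    # second-derivative response
    return uniform_filter(lap ** 2, size=window) * window ** 2

img = np.random.rand(256, 256)          # hypothetical source image
beta = energy_of_laplacian(img)
beta = beta / (beta.max() + 1e-12)      # normalized, usable as per-neuron linking strength
```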
Barks, Sarah K.; Bauernfeind, Amy L.; Bonar, Christopher J.; Cranfield, Michael R.; de Sousa, Alexandra A.; Erwin, Joseph M.; Hopkins, William D.; Lewandowski, Albert H.; Mudakikwa, Antoine; Phillips, Kimberley A.; Raghanti, Mary Ann; Stimpson, Cheryl D.; Hof, Patrick R.; Zilles, Karl; Sherwood, Chet C.
2013-01-01
In this study, we describe an atypical neuroanatomical feature present in several primate species that involves a fusion between the temporal lobe (often including Heschl’s gyrus in great apes) and the posterior dorsal insula, such that a portion of insular cortex forms an isolated pocket medial to the Sylvian fissure. We assessed the frequency of this fusion in 56 primate species (including apes, Old World monkeys, New World monkeys, and strepsirrhines) using either magnetic resonance images or histological sections. A fusion between temporal cortex and posterior insula was present in 22 species (7 apes, 2 Old World monkeys, 4 New World monkeys, and 9 strepsirrhines). The temporo-insular fusion was observed in most eastern gorilla (Gorilla beringei beringei and G. b. graueri) specimens (62% and 100% of cases, respectively) but less frequently in other great apes and was never found in humans. We further explored the histology of this fusion in eastern gorillas by examining the cyto- and myeloarchitecture within this region, and observed that the degree to which deep cortical layers and white matter are incorporated into the fusion varies among individuals within a species. We suggest that fusion between temporal and insular cortex is an example of a relatively rare neuroanatomical feature that has become more common in eastern gorillas, possibly as the result of a population bottleneck effect. Characterizing the phylogenetic distribution of this morphology highlights a derived feature of these great apes. PMID:23939630
ERIC Educational Resources Information Center
Lo, Mun Ling; Chik, Pakey Pui Man
2016-01-01
In this paper, we aim to differentiate the internal and external horizons of "fusion." "Fusion" in the internal horizon relates to the structure and meaning of the object of learning as experienced by the learner. It clarifies the interrelationships among an object's critical features and aspects. It also illuminates the…
Designing Image Operators for MRI-PET Image Fusion of the Brain
NASA Astrophysics Data System (ADS)
Márquez, Jorge; Gastélum, Alfonso; Padilla, Miguel A.
2006-09-01
Our goal is to obtain images that combine, in a useful and precise way, the information from 3D volumes of medical imaging sets. We address two modalities combining anatomy (Magnetic Resonance Imaging or MRI) and functional information (Positron Emission Tomography or PET). Commercial imaging software offers image fusion tools based on fixed blending or color-channel combination of two modalities and color Look-Up Tables (LUTs), without considering the anatomical and functional character of the image features. We used a sensible approach for image fusion, taking advantage mainly of the HSL (Hue, Saturation and Luminosity) color space, in order to enhance the fusion results. We further tested operators for gradient and contour extraction to enhance anatomical details, plus other spatial-domain filters for functional features corresponding to wide point-spread-function responses in PET images. A set of image-fusion operators was formulated and tested on PET and MRI acquisitions.
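One simple way to realize an HSL-style fusion is sketched below (assumed mapping and parameters, not the paper's operators): the anatomical MRI value drives lightness, the PET value drives hue over a blue-to-red range, and saturation is fixed; colorsys.hls_to_rgb is applied per pixel.

```python
import colorsys
import numpy as np

mri = np.random.rand(128, 128)    # hypothetical normalized MRI slice (anatomy)
pet = np.random.rand(128, 128)    # hypothetical normalized, registered PET slice (function)

def hsl_fuse(anat, func, hue_range=(0.66, 0.0), saturation=0.8):
    """Map the functional value to hue (blue to red) and anatomy to lightness."""
    h0, h1 = hue_range
    fused = np.zeros(anat.shape + (3,))
    for i in range(anat.shape[0]):
        for j in range(anat.shape[1]):
            hue = h0 + (h1 - h0) * func[i, j]
            fused[i, j] = colorsys.hls_to_rgb(hue, anat[i, j], saturation)
    return fused

rgb = hsl_fuse(mri, pet)
```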
FuzzyFusion: an application architecture for multisource information fusion
NASA Astrophysics Data System (ADS)
Fox, Kevin L.; Henning, Ronda R.
2009-04-01
The correlation of information from disparate sources has long been an issue in data fusion research. Traditional data fusion addresses the correlation of information from sources as diverse as single-purpose sensors to all-source multi-media information. Information system vulnerability information is similar in its diversity of sources and content, and in the desire to draw a meaningful conclusion, namely, the security posture of the system under inspection. FuzzyFusion™, a data fusion model that is being applied to the computer network operations domain, is presented. This model has been successfully prototyped in an applied research environment and represents a next-generation assurance tool for system and network security.
News video story segmentation method using fusion of audio-visual features
NASA Astrophysics Data System (ADS)
Wen, Jun; Wu, Ling-da; Zeng, Pu; Luan, Xi-dao; Xie, Yu-xiang
2007-11-01
News story segmentation is an important aspect of news video analysis. This paper presents a method for news video story segmentation. Different from prior works, which are based on visual feature transforms, the proposed technique uses audio features as the baseline and fuses visual features with them to refine the results. At first, it selects silence clips as audio feature candidate points, and selects shot boundaries and anchor shots as two kinds of visual feature candidate points. Then the method uses the audio feature candidates as cues and develops different fusion rules, which effectively use the diverse types of visual candidates to refine the audio candidates, to obtain story boundaries. Experimental results show that this method has high efficiency and adaptability to different kinds of news video.
Three-dimensional fingerprint recognition by using convolution neural network
NASA Astrophysics Data System (ADS)
Tian, Qianyu; Gao, Nan; Zhang, Zonghua
2018-01-01
With the development of science and technology and the growth of social information, fingerprint recognition technology has become an active research direction and has been widely applied in many practical fields because of its feasibility and reliability. The traditional two-dimensional (2D) fingerprint recognition method relies on matching feature points. This method is not only time-consuming but also loses the three-dimensional (3D) information of the fingerprint, and its robustness declines seriously under fingerprint rotation, scaling, damage, and other issues. To solve these problems, 3D fingerprints have been used for human recognition. Because this is a new research field, there are still many challenging problems in 3D fingerprint recognition. This paper presents a new 3D fingerprint recognition method using a convolutional neural network (CNN). The 2D fingerprint image and the fingerprint depth map are each fed into a CNN, their features are fused by another CNN, and the fused features are classified to complete 3D fingerprint recognition. This method not only preserves the 3D information of fingerprints but also solves the problem of presenting them as CNN input. Moreover, the recognition process is simpler than traditional feature-point matching algorithms. The 3D fingerprint recognition rate obtained using the CNN is compared with that of other fingerprint recognition algorithms. The experimental results show that the proposed 3D fingerprint recognition method has a good recognition rate and robustness.
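A minimal two-branch sketch of this kind of network, with assumed layer sizes and written in PyTorch (not the paper's architecture): one branch processes the 2D fingerprint image, the other the depth map, and their feature vectors are concatenated and classified by a small fusion head.

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)                  # -> (batch, 32) feature vector

class FusionFingerprintNet(nn.Module):
    def __init__(self, num_subjects=100):   # hypothetical number of enrolled subjects
        super().__init__()
        self.img_branch = Branch()
        self.depth_branch = Branch()
        self.head = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, num_subjects))

    def forward(self, img, depth):
        feats = torch.cat([self.img_branch(img), self.depth_branch(depth)], dim=1)
        return self.head(feats)

model = FusionFingerprintNet()
logits = model(torch.randn(4, 1, 128, 128), torch.randn(4, 1, 128, 128))
```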
Seethala, Raja R; Stenman, Göran
2017-03-01
The salivary gland section in the 4th edition of the World Health Organization classification of head and neck tumors features the description and inclusion of several entities, the most significant of which is represented by (mammary analogue) secretory carcinoma. This entity was extracted mainly from acinic cell carcinoma based on recapitulation of breast secretory carcinoma and a shared ETV6-NTRK3 gene fusion. Also new is the subsection of "Other epithelial lesions," for which key entities include sclerosing polycystic adenosis and intercalated duct hyperplasia. Many entities have been compressed into their broader categories given clinical and morphologic similarities, or transitioned to a different grouping as was the case with low-grade cribriform cystadenocarcinoma reclassified as intraductal carcinoma (with the applied qualifier of low-grade). Specific grade has been removed from the names of the salivary gland entities such as polymorphous adenocarcinoma, providing pathologists flexibility in assigning grade and allowing for recognition of a broader spectrum within an entity. Cribriform adenocarcinoma of (minor) salivary gland origin continues to be divisive in terms of whether it should be recognized as a distinct category. This chapter also features new key concepts such as high-grade transformation. The new paradigm of translocations and gene fusions being common in salivary gland tumors is featured heavily in this chapter.
Angiogram, fundus, and oxygen saturation optic nerve head image fusion
NASA Astrophysics Data System (ADS)
Cao, Hua; Khoobehi, Bahram
2009-02-01
A novel multi-modality optic nerve head image fusion approach has been successfully designed. The new approach has been applied to three ophthalmologic modalities: angiogram, fundus, and oxygen saturation retinal optic nerve head images. It has achieved an excellent result, providing visualization of fundus or oxygen saturation images with a complete angiogram overlay. This study makes two contributions in terms of novelty, efficiency, and accuracy. The first contribution is the automated control point detection algorithm for multi-sensor images. The new method employs retinal vasculature and bifurcation features by identifying an initial good guess of the control points using the Adaptive Exploratory Algorithm. The second contribution is the heuristic optimization fusion algorithm. In order to maximize the objective function (Mutual-Pixel-Count), the iterative algorithm adjusts the initial guess of the control points at the sub-pixel level. A refinement of the parameter set is obtained at the end of each loop, and finally an optimal fused image is generated at the end of the iteration. This is the first time the Mutual-Pixel-Count concept has been introduced into the biomedical image fusion area. By locking the images in one place, the fused image allows ophthalmologists to match the same eye over time, get a sense of disease progression, and pinpoint surgical tools. The new algorithm can easily be extended to 3D eye, brain, or body image registration and fusion in humans or animals.
Nighttime images fusion based on Laplacian pyramid
NASA Astrophysics Data System (ADS)
Wu, Cong; Zhan, Jinhao; Jin, Jicheng
2018-02-01
This paper describes average weighted fusion, image pyramid fusion, and wavelet transform methods and applies them to the fusion of multiple-exposure nighttime images. By calculating the information entropy and cross entropy of the fused images, we evaluate the effect of the different fusion methods. Experiments showed that the Laplacian pyramid image fusion algorithm is well suited to nighttime image fusion; it can reduce halos while preserving image details.
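A compact sketch of Laplacian-pyramid fusion of two exposures is given below, assuming a max-absolute rule for the detail levels and averaging for the coarsest level; the paper's exact fusion rules and evaluation are not reproduced.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    # Detail levels = Gaussian level minus upsampled next level; keep coarsest Gaussian on top.
    lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=gp[i].shape[1::-1]) for i in range(levels)]
    lp.append(gp[-1])
    return lp

def fuse_pyramids(lp_a, lp_b):
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(lp_a[:-1], lp_b[:-1])]
    fused.append(0.5 * (lp_a[-1] + lp_b[-1]))       # average the coarsest level
    return fused

def reconstruct(lp):
    img = lp[-1]
    for level in reversed(lp[:-1]):
        img = cv2.pyrUp(img, dstsize=level.shape[1::-1]) + level
    return img

short_exp = np.random.rand(256, 256).astype(np.float32)   # hypothetical exposures
long_exp = np.random.rand(256, 256).astype(np.float32)
fused = reconstruct(fuse_pyramids(laplacian_pyramid(short_exp), laplacian_pyramid(long_exp)))
```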
Research on oral test modeling based on multi-feature fusion
NASA Astrophysics Data System (ADS)
Shi, Yuliang; Tao, Yiyue; Lei, Jun
2018-04-01
In this paper, the spectrum of the speech signal is taken as the input for feature extraction. The advantages of PCNN in image segmentation and related processing are used to process the speech spectrogram and extract features, and a new method combining speech signal processing and image processing is explored. In addition to the spectrogram features, MFCC-based spectral features are established and integrated with the spectrogram features to further improve the accuracy of spoken-language assessment. Considering that the input features are relatively complex and discriminative, a Support Vector Machine (SVM) is used to construct the classifier, and the extracted test voice features are then compared with standard voice features to detect whether the pronunciation is standard. Experiments show that the method of extracting features from spectrograms using PCNN is feasible, and that the fusion of image features and spectral features can improve the detection accuracy.
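The MFCC branch of such a system can be sketched as follows (hypothetical file names and labels; the PCNN-based spectrogram features described above would simply be concatenated to the same vectors before classification): MFCCs are extracted with librosa, averaged over time, and fed to an SVM.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_features(path, n_mfcc=13):
    """Mean MFCC vector over all frames of one recording."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Hypothetical file list and standard / non-standard pronunciation labels.
paths = ["sample_%02d.wav" % i for i in range(40)]
labels = np.array([i % 2 for i in range(40)])
X = np.stack([mfcc_features(p) for p in paths])
clf = SVC().fit(X, labels)
```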
A fast recognition method of warhead target in boost phase using kinematic features
NASA Astrophysics Data System (ADS)
Chen, Jian; Xu, Shiyou; Tian, Biao; Wu, Jianhua; Chen, Zengping
2015-12-01
The number of radar targets increases from one to several when a ballistic missile separates its lower-stage rocket or jettisons covers or other components. It is vital for radar tracking to identify the warhead quickly among these multiple targets. A fast recognition method for the warhead target is proposed to solve this problem using kinematic features, a fuzzy comprehensive method, and an information fusion method. In order to weaken the influence of radar measurement noise, an extended Kalman filter with a constant jerk model (CJEKF) is applied to obtain more accurate target motion information. The simulation shows the validity of the algorithm and the effect of radar measurement precision on the algorithm's performance.
NASA Astrophysics Data System (ADS)
Zhang, Lei; Yang, Fengbao; Ji, Linna; Lv, Sheng
2018-01-01
Diverse image fusion methods perform differently. Each method has advantages and disadvantages compared with the others. One notion is that the advantages of different image fusion methods can be effectively combined. A multiple-algorithm parallel fusion method based on algorithmic complementarity and synergy is proposed. First, in view of the characteristics of the different algorithms and the difference-features among images, an index-vector-based feature similarity is proposed to define the degree of complementarity and synergy. This proposed index vector is a reliable evidence indicator for algorithm selection. Second, the algorithms with a high degree of complementarity and synergy are selected. Then, the degrees of the various difference-features and the infrared intensity images are used as the initial weights for nonnegative matrix factorization (NMF). This avoids the randomness of NMF initialization. Finally, the fused images of the different algorithms are integrated using the NMF because of its excellent data-fusing performance on independent features. Experimental results demonstrate that the visual effect and objective evaluation indices of the fused images obtained using the proposed method are better than those obtained using traditional methods. The proposed method retains the advantages of the individual fusion algorithms.
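The custom-initialization step can be sketched with scikit-learn's NMF, which accepts user-supplied factor matrices; the shapes and the initial weights below are placeholders rather than the feature-derived weights described above.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
X = np.abs(rng.normal(size=(64 * 64, 3)))    # columns: fused images from 3 selected algorithms
W0 = np.abs(rng.normal(size=(64 * 64, 2)))   # stand-in for feature-derived initial weights
H0 = np.abs(rng.normal(size=(2, 3)))

model = NMF(n_components=2, init="custom", max_iter=500)
W = model.fit_transform(X, W=W0, H=H0)       # custom init avoids random initialization
fused = W[:, 0].reshape(64, 64)              # one basis image taken as the fused result
```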
Comparing ensemble learning methods based on decision tree classifiers for protein fold recognition.
Bardsiri, Mahshid Khatibi; Eftekhari, Mahdi
2014-01-01
In this paper, some methods for ensemble learning of protein fold recognition based on decision trees (DT) are compared and contrasted against each other over three datasets taken from the literature. According to previously reported studies, the features of the datasets are divided into several groups. Then, for each of these groups, three ensemble classifiers, namely random forest, rotation forest and AdaBoost.M1, are employed. Also, some fusion methods are introduced for combining the ensemble classifiers obtained in the previous step. After this step, three classifiers are produced based on the combination of classifiers of types random forest, rotation forest and AdaBoost.M1. Finally, the three different classifiers thus obtained are combined to make an overall classifier. Experimental results show that the overall classifier obtained by the genetic algorithm (GA) weighting fusion method is the best one in comparison to previously applied methods in terms of classification accuracy.
Isotope effect on blob-statistics in gyrofluid simulations of scrape-off layer turbulence
NASA Astrophysics Data System (ADS)
Meyer, O. H. H.; Kendl, A.
2017-12-01
In this contribution we apply a recently established stochastic model for scrape-off layer fluctuations to long time series obtained from gyrofluid simulations of fusion edge plasma turbulence. Characteristic parameters are estimated for different fusion relevant isotopic compositions (protium, deuterium, tritium and singly charged helium) by means of conditional averaging. It is shown that large amplitude fluctuations associated with radially propagating filaments in the scrape-off layer feature double-exponential wave-forms. We find increased pulse duration and longer waiting times between peaks for heavier ions, while the amplitudes are similar. The associated radial blob velocity is shown to be reduced for heavier ions. A parabolic relation between skewness and kurtosis of density fluctuations seems to be present. Improved particle confinement in terms of reduced mean value close to the outermost radial boundary and blob characteristics for heavier plasmas is presented.
Molecular mechanisms that underpin EML4-ALK driven cancers and their response to targeted drugs.
Bayliss, Richard; Choi, Jene; Fennell, Dean A; Fry, Andrew M; Richards, Mark W
2016-03-01
A fusion between the EML4 (echinoderm microtubule-associated protein-like) and ALK (anaplastic lymphoma kinase) genes was identified in non-small cell lung cancer (NSCLC) in 2007 and there has been rapid progress in applying this knowledge to the benefit of patients. However, we have a poor understanding of EML4 and ALK biology and there are many challenges to devising the optimal strategy for treating EML4-ALK NSCLC patients. In this review, we describe the biology of EML4 and ALK, explain the main features of EML4-ALK fusion proteins and outline the therapies that target EML4-ALK. In particular, we highlight the recent advances in our understanding of the structures of EML proteins, describe the molecular mechanisms of resistance to ALK inhibitors and assess current thinking about combinations of ALK drugs with inhibitors that target other kinases or Hsp90.
Study on the multi-sensors monitoring and information fusion technology of dangerous cargo container
NASA Astrophysics Data System (ADS)
Xu, Shibo; Zhang, Shuhui; Cao, Wensheng
2017-10-01
In this paper, a multi-sensor monitoring system for dangerous cargo containers is presented. To improve monitoring accuracy, multiple sensors are deployed inside the dangerous cargo container. A multi-sensor information fusion solution for monitoring dangerous cargo containers is put forward, and information pre-processing, a fusion algorithm for homogeneous sensors, and information fusion based on a BP neural network are illustrated; applying multiple sensors to container monitoring has some novelty.
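A minimal sketch of the BP-neural-network fusion stage, with an assumed sensor layout and synthetic readings (not the paper's network or data): readings from several in-container sensors are fused by a back-propagation network, here scikit-learn's MLPClassifier, to flag a hazardous condition.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
# Hypothetical fused inputs: temperature, humidity, gas concentration, vibration.
X = rng.normal(size=(1000, 4))
# Hypothetical alarm label driven mainly by temperature and gas concentration.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=1000) > 1.0).astype(int)

bp_net = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000).fit(X, y)
alarm = bp_net.predict(X[:5])
```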
Ahmed, Shaheen; Iftekharuddin, Khan M; Vossough, Arastoo
2011-03-01
Our previous works suggest that fractal texture feature is useful to detect pediatric brain tumor in multimodal MRI. In this study, we systematically investigate efficacy of using several different image features such as intensity, fractal texture, and level-set shape in segmentation of posterior-fossa (PF) tumor for pediatric patients. We explore effectiveness of using four different feature selection and three different segmentation techniques, respectively, to discriminate tumor regions from normal tissue in multimodal brain MRI. We further study the selective fusion of these features for improved PF tumor segmentation. Our result suggests that Kullback-Leibler divergence measure for feature ranking and selection and the expectation maximization algorithm for feature fusion and tumor segmentation offer the best results for the patient data in this study. We show that for T1 and fluid attenuation inversion recovery (FLAIR) MRI modalities, the best PF tumor segmentation is obtained using the texture feature such as multifractional Brownian motion (mBm) while that for T2 MRI is obtained by fusing level-set shape with intensity features. In multimodality fused MRI (T1, T2, and FLAIR), mBm feature offers the best PF tumor segmentation performance. We use different similarity metrics to evaluate quality and robustness of these selected features for PF tumor segmentation in MRI for ten pediatric patients.
Pulsed excitation terahertz tomography - multiparametric approach
NASA Astrophysics Data System (ADS)
Lopato, Przemyslaw
2018-04-01
This article deals with pulsed excitation terahertz computed tomography (THz CT). In contrast to x-ray CT, where just a single value (pixel) is obtained, in pulsed THz CT a time signal is acquired for each position. The recorded waveform can be parametrized: many features carrying various information about the examined structure can be calculated. Based on this, a multiparametric reconstruction algorithm is proposed: an inverse Radon transform based reconstruction is applied for each parameter, and the results are then fused. The performance of the proposed imaging scheme was experimentally verified using dielectric phantoms.
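The multiparametric reconstruction idea can be sketched as follows, with placeholder sinograms and a simple averaging fusion rule (the article's parametrization and fusion scheme may differ): each waveform feature yields its own sinogram, each sinogram is reconstructed with the inverse Radon transform, and the normalized reconstructions are fused.

```python
import numpy as np
from skimage.transform import iradon

angles = np.linspace(0.0, 180.0, 180, endpoint=False)

# Hypothetical sinograms (detector bins x angles) for two waveform features,
# e.g. peak amplitude and pulse delay, measured at each projection position.
sino_amplitude = np.random.rand(128, 180)
sino_delay = np.random.rand(128, 180)

rec_amp = iradon(sino_amplitude, theta=angles, filter_name="ramp")
rec_delay = iradon(sino_delay, theta=angles, filter_name="ramp")

def normalize(r):
    return (r - r.min()) / (r.max() - r.min() + 1e-12)

fused = 0.5 * (normalize(rec_amp) + normalize(rec_delay))   # simple averaging fusion
```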
Multi-source remotely sensed data fusion for improving land cover classification
NASA Astrophysics Data System (ADS)
Chen, Bin; Huang, Bo; Xu, Bing
2017-02-01
Although many advances have been made in past decades, land cover classification of fine-resolution remotely sensed (RS) data integrating multiple temporal, angular, and spectral features remains limited, and the contribution of different RS features to land cover classification accuracy remains uncertain. We proposed to improve land cover classification accuracy by integrating multi-source RS features through data fusion. We further investigated the effect of different RS features on classification performance. The results of fusing Landsat-8 Operational Land Imager (OLI) data with Moderate Resolution Imaging Spectroradiometer (MODIS), China Environment 1A series (HJ-1A), and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) digital elevation model (DEM) data showed that the fused data integrating temporal, spectral, angular, and topographic features achieved better land cover classification accuracy than the original RS data. Compared with the topographic feature, the temporal and angular features extracted from the fused data played more important roles in classification performance, especially the temporal features containing abundant vegetation growth information, which markedly increased the overall classification accuracy. In addition, the multispectral and hyperspectral fusion successfully discriminated detailed forest types. Our study provides a straightforward strategy for hierarchical land cover classification by making full use of available RS data. All of these methods and findings could be useful for land cover classification at both regional and global scales.
Some new classification methods for hyperspectral remote sensing
NASA Astrophysics Data System (ADS)
Du, Pei-jun; Chen, Yun-hao; Jones, Simon; Ferwerda, Jelle G.; Chen, Zhi-jun; Zhang, Hua-peng; Tan, Kun; Yin, Zuo-xia
2006-10-01
Hyperspectral Remote Sensing (HRS) is one of the most significant recent achievements of Earth Observation Technology. Classification is the most commonly employed processing methodology. In this paper three new hyperspectral RS image classification methods are analyzed: object-oriented HRS image classification, HRS image classification based on information fusion, and HRS image classification by Back Propagation Neural Network (BPNN). An OMIS HRS image is used as the example data. Object-oriented techniques have gained popularity for RS image classification in recent years. In such methods, image segmentation is first used to extract regions from the pixel information based on homogeneity criteria; spectral parameters such as the mean vector, texture, and NDVI, and spatial/shape parameters such as aspect ratio, convexity, solidity, roundness, and orientation are then calculated for each region; finally, the image is classified using the region feature vectors and suitable classifiers such as an artificial neural network (ANN). This shows that object-oriented methods can improve classification accuracy, since they utilize information and features from both the point and its neighborhood, and the processing unit is a polygon in which all pixels are homogeneous and belong to the same class. HRS image classification based on information fusion initially divides all bands of the image into different groups and extracts features from every group according to the properties of each group. Three levels of information fusion, namely data-level fusion, feature-level fusion, and decision-level fusion, are applied to HRS image classification. An Artificial Neural Network (ANN) can perform well in RS image classification. In order to promote the use of ANNs for HRS image classification, the Back Propagation Neural Network (BPNN), the most commonly used neural network, is applied to HRS image classification.
Game theory-based visual tracking approach focusing on color and texture features.
Jin, Zefenfen; Hou, Zhiqiang; Yu, Wangsheng; Chen, Chuanhua; Wang, Xin
2017-07-20
It is difficult for a single-feature tracking algorithm to achieve strong robustness in a complex environment. To solve this problem, we propose a multifeature fusion tracking algorithm based on game theory. Treating the color and texture features as two players, the algorithm accomplishes tracking by using a mean shift iterative formula to search for the Nash equilibrium of the game. The contributions of the different features are always kept in an optimal balance, so that the algorithm can take full advantage of feature fusion. According to the experimental results, the algorithm possesses good performance, especially under scene variation, target occlusion, and interference from similar objects.
Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.
Liu, Min; Wang, Xueping; Zhang, Hongzhong
2018-03-01
In the biomedical field, digital multi-focal images are very important for documentation and communication of specimen data, because the morphological information for a transparent specimen can be captured in the form of a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We propose a deep convolutional neural network (CNN) image fusion based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN based image fusion technique is used to combine relevant information of multi-focal images within a given image stack into a single image, which is more informative and complete than any single image in the given stack. Besides, multi-focal images within a stack are fused along 3 orthogonal directions, and multiple features extracted from the fused images along different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors (texture, shape, different instances within the same class, and different classes of objects), we embed the deep CNN based image fusion method within a multilinear framework to propose an image fusion based multilinear classifier. The experimental results on nematode multi-focal image stacks demonstrated that the deep CNN image fusion based multilinear classifier can reach a higher classification rate (95.7%) than the previous multilinear based approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed deep CNN image fusion based multilinear approach shows great potential for building an automated nematode taxonomy system for nematologists. It is effective in classifying multi-focal image stacks. Copyright © 2018 Elsevier B.V. All rights reserved.
Weirather, Jason L.; Afshar, Pegah Tootoonchi; Clark, Tyson A.; Tseng, Elizabeth; Powers, Linda S.; Underwood, Jason G.; Zabner, Joseph; Korlach, Jonas; Wong, Wing Hung; Au, Kin Fai
2015-01-01
We developed an innovative hybrid sequencing approach, IDP-fusion, to detect fusion genes, determine fusion sites and identify and quantify fusion isoforms. IDP-fusion is the first method to study gene fusion events by integrating Third Generation Sequencing long reads and Second Generation Sequencing short reads. We applied IDP-fusion to PacBio data and Illumina data from the MCF-7 breast cancer cells. Compared with the existing tools, IDP-fusion detects fusion genes at higher precision and a very low false positive rate. The results show that IDP-fusion will be useful for unraveling the complexity of multiple fusion splices and fusion isoforms within tumorigenesis-relevant fusion genes. PMID:26040699
Duane's retraction syndrome: its sensory features.
Tomaç, Suhan; Mutlu, Fatih Mehmet; Altinsoy, Halil Ibrahim
2007-11-01
To investigate binocularity in Duane's retraction syndrome (DRS) and to evaluate whether or not there is a relationship between the sensory and clinical features of the syndrome. Clinical and sensory findings of 29 patients with DRS were recorded. Binocularity was tested with the Bagolini glasses (BG), Worth four-dot (W4D), TNO and the stereo-fly plate of the Titmus test. Twenty-four (83%) patients showed fusion with the BG at near and 23 (79%) had fusion at distance. With the W4D, 23 (79%) patients had fusion at near and 19 (65%) had fusion at distance. Seven (24%) patients demonstrated normal stereoacuity, 15 (52%) had reduced stereoacuity and the remaining seven (24%) patients had no measurable stereoacuity. In patients without stereoacuity, amblyopia (p < 0.001), type 2 and 3 DRS (p = 0.031) and exotropia (p = 0.003) in primary position were more common than in those with reduced or with normal stereoacuity. Restriction of ocular ductions was also more severe in patients without stereoacuity than in those with reduced or normal stereoacuity (p = 0.019, p = 0.016). Patients with type 2 and 3 DRS were significantly more likely to have amblyopia (p = 0.037), large-angle heterotropia (p = 0.005) in primary position, upshoot or downshoot (p = 0.010) than those with type 1 DRS. Although approximately 75% of DRS patients had fusion and measurable stereoacuity, only 25% demonstrated normal binocularity. This report provides new data on the relationship of sensory features to most of the clinical findings of this syndrome. Sensory features, as well as most clinical features of the syndrome, are better in patients with type 1 DRS.
Khodabandeloo, Babak; Melvin, Dyan; Jo, Hongki
2017-01-01
Direct measurements of external forces acting on a structure are infeasible in many cases. The Augmented Kalman Filter (AKF) has several attractive features that can be utilized to solve the inverse problem of identifying applied forces, as it requires only the dynamic model and the responses measured at a few locations on the structure. However, the AKF intrinsically suffers from numerical instabilities when accelerations, which are the most common response measurements in structural dynamics, are the only measured responses. Although displacement measurements can be used to overcome the instability issue, absolute displacement measurements are challenging and expensive for full-scale dynamic structures. In this paper, a reliable model-based data fusion approach is investigated that reconstructs the dynamic forces applied to structures using heterogeneous structural measurements (i.e., strains and accelerations) in combination with the AKF. The incorporation of multi-sensor measurements into the AKF is formulated, and the formulation is then implemented and validated through numerical examples that consider possible uncertainties in the numerical modeling and sensor measurements. A planar truss example was chosen to explain the formulation clearly, although the method and formulation are applicable to other structures as well. PMID:29149088
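The augmented-state idea can be sketched on a single-degree-of-freedom system: the unknown force is appended to the state vector, modeled as a random walk, and estimated by a standard Kalman recursion from an acceleration channel fused with a displacement-type (strain-proportional) channel. All model values, noise levels, and the sinusoidal force below are illustrative assumptions, not the paper's planar truss example.

```python
import numpy as np

# Single-degree-of-freedom structure (illustrative values).
m, c, k = 1.0, 0.4, 100.0
dt, n_steps = 0.01, 2000

# Augmented state z = [displacement, velocity, force]; the unknown force is
# modeled as a random walk so the filter can estimate it.
A = np.array([[1.0, dt, 0.0],
              [-dt * k / m, 1.0 - dt * c / m, dt / m],
              [0.0, 0.0, 1.0]])
# Heterogeneous measurements: an accelerometer plus a strain-proportional
# (displacement-type) channel.
H = np.array([[-k / m, -c / m, 1.0 / m],
              [1.0, 0.0, 0.0]])
Q = np.diag([1e-10, 1e-10, 1e-3])   # process noise; force entry lets it drift
R = np.diag([1e-2, 1e-6])           # measurement noise covariance

def simulate_measurements(rng):
    """Simulate the true response under a sinusoidal force and noisy sensors."""
    x = np.zeros(2)                                   # [displacement, velocity]
    ys, fs = [], []
    noise_std = np.sqrt(np.diag(R))
    for i in range(n_steps):
        f = np.sin(2 * np.pi * 0.5 * i * dt)          # true (unknown) force
        acc = (-k * x[0] - c * x[1] + f) / m
        x = x + dt * np.array([x[1], acc])
        y = H @ np.array([x[0], x[1], f]) + rng.normal(0.0, noise_std)
        ys.append(y)
        fs.append(f)
    return np.array(ys), np.array(fs)

def augmented_kalman_filter(measurements):
    """Standard Kalman recursion on the augmented state; returns force estimates."""
    z, P = np.zeros(3), np.eye(3)
    force_est = []
    for y in measurements:
        z, P = A @ z, A @ P @ A.T + Q                 # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
        z = z + K @ (y - H @ z)                       # update with fused sensors
        P = (np.eye(3) - K @ H) @ P
        force_est.append(z[2])
    return np.array(force_est)

rng = np.random.default_rng(1)
ys, true_force = simulate_measurements(rng)
est_force = augmented_kalman_filter(ys)
print(np.corrcoef(true_force[500:], est_force[500:])[0, 1])
```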
A wavelet-based adaptive fusion algorithm of infrared polarization imaging
NASA Astrophysics Data System (ADS)
Yang, Wei; Gu, Guohua; Chen, Qian; Zeng, Haifang
2011-08-01
The purpose of infrared polarization imaging is to highlight man-made targets against a complex natural background. Because infrared polarization images can clearly distinguish targets from backgrounds with different features, this paper presents a wavelet-based infrared polarization image fusion algorithm. The method mainly processes the high-frequency portion of the signal; for the low-frequency portion, the usual weighted-average rule is applied. The high-frequency part is processed as follows: first, the high-frequency information of the source images is extracted by wavelet transform; then the signal strength within a 3x3 window is calculated, and the ratio of the regional signal intensities of the source images is used as the matching measure. The extraction method and decision mode for the details are determined by a decision-making module, and the fusion quality is closely related to the threshold set in this module. Instead of the commonly used empirical approach, a quadratic interpolation optimization algorithm is proposed to obtain the threshold: the endpoints and midpoint of the threshold search interval are taken as the initial interpolation nodes, the minimum of the quadratic interpolation function is computed, and the best threshold is obtained by comparing these minima. A series of image quality evaluations shows that the method improves the fusion effect, and that it is effective not only for individual images but also for large numbers of images.
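A minimal sketch of the general scheme, assuming two registered grayscale source images: the low-frequency bands are combined by weighted averaging, while each high-frequency band is selected by comparing windowed regional signal strength. The simple larger-energy selection rule below stands in for the paper's threshold-driven decision module and quadratic-interpolation threshold search.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def fuse_wavelet(img_a, img_b, wavelet="db2", level=3, win=3):
    """Fuse two registered grayscale images: weighted average of the
    low-frequency band, windowed-energy selection for high-frequency bands."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [0.5 * ca[0] + 0.5 * cb[0]]          # low frequency: weighted average
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        bands = []
        for sa, sb in zip((ha, va, da), (hb, vb, db)):
            ea = uniform_filter(sa ** 2, size=win)   # regional signal strength
            eb = uniform_filter(sb ** 2, size=win)
            bands.append(np.where(ea >= eb, sa, sb)) # keep the stronger detail
        fused.append(tuple(bands))
    return pywt.waverec2(fused, wavelet)

# Illustrative usage with random "intensity" and "polarization" images.
rng = np.random.default_rng(0)
a = rng.random((128, 128))
b = rng.random((128, 128))
print(fuse_wavelet(a, b).shape)
```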
Semantic Indexing of Multimedia Content Using Visual, Audio, and Text Cues
NASA Astrophysics Data System (ADS)
Adams, W. H.; Iyengar, Giridharan; Lin, Ching-Yung; Naphade, Milind Ramesh; Neti, Chalapathy; Nock, Harriet J.; Smith, John R.
2003-12-01
We present a learning-based approach to the semantic indexing of multimedia content using cues derived from audio, visual, and text features. We approach the problem by developing a set of statistical models for a predefined lexicon. Novel concepts are then mapped in terms of the concepts in the lexicon. To achieve robust detection of concepts, we exploit features from multiple modalities, namely, audio, video, and text. Concept representations are modeled using Gaussian mixture models (GMM), hidden Markov models (HMM), and support vector machines (SVM). Models such as Bayesian networks and SVMs are used in a late-fusion approach to model concepts that are not explicitly modeled in terms of features. Our experiments indicate promise in the proposed classification and fusion methodologies: our proposed fusion scheme achieves more than 10% relative improvement over the best unimodal concept detector.
Feature-fused SSD: fast detection for small objects
NASA Astrophysics Data System (ADS)
Cao, Guimei; Xie, Xuemei; Yang, Wenzhe; Liao, Quan; Shi, Guangming; Wu, Jinjian
2018-04-01
Small object detection is a challenging task in computer vision because of the limited resolution and information available for small objects. To address this problem, the majority of existing methods sacrifice speed for improvements in accuracy. In this paper, we aim to detect small objects at a fast speed, using the Single Shot MultiBox Detector (SSD), the object detector with the best accuracy-vs-speed trade-off, as the base architecture. We propose a multi-level feature fusion method that introduces contextual information into SSD in order to improve the accuracy for small objects. For the fusion operation, we design two feature fusion modules, a concatenation module and an element-sum module, which differ in the way contextual information is added. Experimental results show that these two fusion modules raise mAP on PASCAL VOC2007 above the baseline SSD by 1.6 and 1.7 points respectively, with improvements of 2-3 points on some small object categories. Their testing speeds are 43 and 40 FPS respectively, faster than the state-of-the-art Deconvolutional Single Shot Detector (DSSD) by 29.4 and 26.4 FPS.
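The two fusion modules can be sketched as small PyTorch blocks that merge a shallow, high-resolution feature map with an upsampled deeper map, either by element-wise summation or by channel concatenation. Channel counts, layer choices, and feature-map sizes below are assumptions loosely modeled on SSD's conv4_3 and conv7 outputs, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ElementSumFusion(nn.Module):
    """Fuse a shallow, high-resolution feature map with a deeper, contextual
    one by projecting both and adding them element-wise."""
    def __init__(self, shallow_ch, deep_ch, out_ch):
        super().__init__()
        self.proj_shallow = nn.Conv2d(shallow_ch, out_ch, kernel_size=3, padding=1)
        self.proj_deep = nn.Conv2d(deep_ch, out_ch, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, shallow, deep):
        deep_up = F.interpolate(deep, size=shallow.shape[-2:],
                                mode="bilinear", align_corners=False)
        return self.relu(self.proj_shallow(shallow) + self.proj_deep(deep_up))

class ConcatFusion(nn.Module):
    """Alternative: concatenate along channels, then mix with a 1x1 conv."""
    def __init__(self, shallow_ch, deep_ch, out_ch):
        super().__init__()
        self.mix = nn.Conv2d(shallow_ch + deep_ch, out_ch, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, shallow, deep):
        deep_up = F.interpolate(deep, size=shallow.shape[-2:],
                                mode="bilinear", align_corners=False)
        return self.relu(self.mix(torch.cat([shallow, deep_up], dim=1)))

# Illustrative shapes roughly matching SSD's conv4_3 and conv7 maps.
shallow = torch.randn(1, 512, 38, 38)
deep = torch.randn(1, 1024, 19, 19)
print(ElementSumFusion(512, 1024, 512)(shallow, deep).shape)
print(ConcatFusion(512, 1024, 512)(shallow, deep).shape)
```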
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1990-01-01
Various papers on human and machine strategies in sensor fusion are presented. The general topics addressed include: active vision, measurement and analysis of visual motion, decision models for sensor fusion, implementation of sensor fusion algorithms, applying sensor fusion to image analysis, perceptual modules and their fusion, perceptual organization and object recognition, planning and the integration of high-level knowledge with perception, using prior knowledge and context in sensor fusion.
Viswanathan, P; Krishna, P Venkata
2014-05-01
Teleradiology allows the transmission of medical images for clinical data interpretation to provide improved e-health care access, delivery, and standards. The remote transmission raises various ethical and legal issues such as image retention, fraud, privacy, and malpractice liability. A joint FED watermarking system, that is, a joint fingerprint/encryption/dual watermarking system, is proposed to address these issues. The system combines a region-based substitution dual watermarking algorithm using spatial fusion, a stream cipher algorithm using a symmetric key, and a fingerprint verification algorithm using invariants. The aim is to provide access to medical images with confidentiality, availability, integrity, and proof of origin. The watermarking, encryption, and fingerprint enrollment are conducted jointly in the protection stage, so that the extraction, decryption, and verification can be applied independently. The dual watermarking system, which introduces two different embedding schemes, one for patient data and the other for fingerprint features, reduces the difficulty of maintaining multiple documents such as authentication data, personnel and diagnosis data, and medical images. The spatial fusion algorithm, which determines the embedding region for the encrypted patient data using a threshold derived from the image, follows the exact rules of fusion and yields better quality than other fusion techniques. The four-step stream cipher algorithm using a symmetric key for encrypting the patient data, together with the fingerprint verification system using algebraic invariants, improves the robustness of the medical information. The proposed scheme is evaluated for security and quality on DICOM medical images and performs well in terms of resistance to attacks, quality index, and imperceptibility.
Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck
2008-04-10
One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps the active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in the stereo laser images with the help of intensity images; the second part utilizes an information fusion technique based on dynamic programming, in which image regions between laser patterns are matched pixel by pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms work in real applications, the sensor system is implemented on a robotic system and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.
A Summary of the NASA Fusion Propulsion Workshop 2000
NASA Technical Reports Server (NTRS)
Thio, Y. C. Francis; Turchi, Peter J.; Santarius, John F.; Schafer, Charles (Technical Monitor)
2001-01-01
A NASA Fusion Propulsion Workshop was held on Nov. 8 and 9, 2000 at Marshall Space Flight Center (MSFC) in Huntsville, Alabama. A total of 43 papers were presented at the Workshop orally or by posters, covering a broad spectrum of issues related to applying fusion to propulsion. The status of fusion research was reported at the Workshop showing the outstanding scientific research that has been accomplished worldwide in the fusion energy research program. The international fusion research community has demonstrated the scientific principles of fusion creating plasmas with conditions for fusion burn with a gain of order unity: 0.25 in Princeton TFTR, 0.65 in the Joint European Torus, and a Q-equivalent of 1.25 in Japan's JT-60. This research has developed an impressive range of physics and technological capabilities that may be applied effectively to the research of possibly new propulsion-oriented fusion schemes. The pertinent physics capabilities include the plasma computational tools, the experimental plasma facilities, the diagnostics techniques, and the theoretical understanding. The enabling technologies include the various plasma heating, acceleration, and the pulsed power technologies.
Gene Fusion Markup Language: a prototype for exchanging gene fusion data.
Kalyana-Sundaram, Shanker; Shanmugam, Achiraman; Chinnaiyan, Arul M
2012-10-16
An avalanche of next generation sequencing (NGS) studies has generated an unprecedented amount of genomic structural variation data. These studies have also identified many novel gene fusion candidates with more detailed resolution than previously achieved. However, in the excitement and necessity of publishing the observations from this recently developed cutting-edge technology, no community standardization approach has arisen to organize and represent the data with the essential attributes in an interchangeable manner. As transcriptome studies have been widely used for gene fusion discoveries, the current non-standard mode of data representation could potentially impede data accessibility, critical analyses, and further discoveries in the near future. Here we propose a prototype, Gene Fusion Markup Language (GFML) as an initiative to provide a standard format for organizing and representing the significant features of gene fusion data. GFML will offer the advantage of representing the data in a machine-readable format to enable data exchange, automated analysis interpretation, and independent verification. As this database-independent exchange initiative evolves it will further facilitate the formation of related databases, repositories, and analysis tools. The GFML prototype is made available at http://code.google.com/p/gfml-prototype/. The Gene Fusion Markup Language (GFML) presented here could facilitate the development of a standard format for organizing, integrating and representing the significant features of gene fusion data in an inter-operable and query-able fashion that will enable biologically intuitive access to gene fusion findings and expedite functional characterization. A similar model is envisaged for other NGS data analyses.
Arnold, Michael A; Anderson, James R; Gastier-Foster, Julie M; Barr, Frederic G; Skapek, Stephen X; Hawkins, Douglas S; Raney, R Beverly; Parham, David M; Teot, Lisa A; Rudzinski, Erin R; Walterhouse, David O
2016-04-01
Distinguishing alveolar rhabdomyosarcoma (ARMS) from embryonal rhabdomyosarcoma (ERMS) is of prognostic and therapeutic importance. Criteria for classifying these entities evolved significantly from 1995 to 2013. ARMS is associated with inferior outcome; therefore, patients with alveolar histology have generally been excluded from low-risk therapy. However, patients with ARMS and low-risk stage and group (Stage 1, Group I/II/orbit III; or Stage 2/3, Group I/II) were eligible for the Children's Oncology Group (COG) low-risk rhabdomyosarcoma (RMS) study D9602 from 1997 to 1999. The characteristics and outcomes of these patients have not been previously reported, and the histology of these cases has not been reviewed using current criteria. We re-reviewed cases that were classified as ARMS on D9602 using current histologic criteria, determined PAX3/PAX7-FOXO1 fusion status, and compared these data with outcome for this unique group of patients. Thirty-eight patients with ARMS were enrolled onto D9602. Only one-third of cases with slides available for re-review (11/33) remained classified as ARMS by current histologic criteria. Most cases were reclassified as ERMS (17/33, 51.5%). Cases that remained classified as ARMS were typically fusion-positive (8/11, 73%), therefore current classification results in a similar rate of fusion-positive ARMS for all clinical risk groups. In conjunction with data from COG intermediate-risk treatment protocol D9803, our data demonstrate excellent outcomes for fusion-negative ARMS with otherwise low-risk clinical features. Patients with fusion-positive RMS with low-risk clinical features should be classified and treated as intermediate risk, while patients with fusion-negative ARMS could be appropriately treated with reduced intensity therapy. © 2016 Wiley Periodicals, Inc.
Range and Panoramic Image Fusion Into a Textured Range Image for Culture Heritage Documentation
NASA Astrophysics Data System (ADS)
Bila, Z.; Reznicek, J.; Pavelka, K.
2013-07-01
This paper deals with the fusion of range and panoramic images, where the range image is acquired by a 3D laser scanner and the panoramic image is acquired with a digital still camera mounted on a panoramic head and tripod. The fused dataset, called a "textured range image", provides conservators and historians with more reliable information about the investigated object than using either dataset separately. A simple example of the fusion of range and panoramic images, both obtained in St. Francis Xavier Church in the town of Opařany, is given here. First, we describe the process of data acquisition, then the processing of both datasets into a format suitable for fusion, and finally the fusion itself. The fusion process can be divided into two main parts: transformation and remapping. In the transformation part, the two images are related by matching similar features detected in both images with a suitable detector, which yields a transformation matrix that maps the range image onto the panoramic image. The range data are then remapped from the range image space into the panoramic image space and stored as an additional "range" channel. The image fusion process is validated by comparing similar features extracted from both datasets.
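The transformation and remapping steps can be sketched with OpenCV, assuming both inputs are 8-bit grayscale renderings (e.g., an intensity-coded range image): features are matched, a transform is estimated, and the range data are warped into the panoramic image space as an extra channel. The ORB detector and the planar homography model are stand-ins; the paper does not specify the detector, and panoramic geometry may require a different mapping.

```python
import cv2
import numpy as np

def register_and_remap(range_img, pano_img):
    """Estimate a transform from matched features and remap the range image
    into the panoramic image space, stored as an extra 'range' channel.
    Both inputs are assumed to be 8-bit grayscale renderings."""
    orb = cv2.ORB_create(2000)
    kp_r, des_r = orb.detectAndCompute(range_img, None)
    kp_p, des_p = orb.detectAndCompute(pano_img, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_r, des_p), key=lambda m: m.distance)[:200]

    src = np.float32([kp_r[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_p[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    h, w = pano_img.shape[:2]
    range_channel = cv2.warpPerspective(range_img, H, (w, h))
    # Stack the remapped range data as an additional channel.
    return np.dstack([pano_img, range_channel])
```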
Applying design principles to fusion reactor configurations for propulsion in space
NASA Technical Reports Server (NTRS)
Carpenter, Scott A.; Deveny, Marc E.; Schulze, Norman R.
1993-01-01
The application of fusion power to space propulsion requires rethinking the engineering-design solution to controlled-fusion energy. Whereas the unit cost of electricity (COE) drives the engineering-design solution for utility-based fusion reactor configurations, initial mass to low earth orbit (IMLEO), specific jet power (kW(thrust)/kg(engine)), and reusability drive the engineering-design solution for successful application of fusion power to space propulsion. We applied three design principles (DP's) to adapt and optimize three candidate terrestrial-fusion-reactor configurations for propulsion in space. The three design principles are: provide maximum direct access to space for waste radiation, operate components as passive radiators to minimize cooling-system mass, and optimize the plasma fuel, fuel mix, and temperature for best specific jet power. The three candidate terrestrial fusion reactor configurations are: the thermal barrier tandem mirror (TBTM), field reversed mirror (FRM), and levitated dipole field (LDF). The resulting three candidate space fusion propulsion systems have their IMLEO minimized and their specific jet power and reusability maximized. We performed a preliminary rating of these configurations and concluded that the leading engineering-design solution to space fusion propulsion is a modified TBTM that we call the Mirror Fusion Propulsion System (MFPS).
Pansharpening via coupled triple factorization dictionary learning
Skau, Erik; Wohlberg, Brendt; Krim, Hamid; ...
2016-03-01
Data fusion is the operation of integrating data from different modalities to construct a single consistent representation. This paper proposes variations of coupled dictionary learning through an additional factorization. One variation of this model is applicable to the pansharpening data fusion problem. Real-world pansharpening data were used to train and test the proposed formulation. The results demonstrate that the data fusion model can successfully be applied to the pansharpening problem.
Multispectral Palmprint Recognition Using a Quaternion Matrix
Xu, Xingpeng; Guo, Zhenhua; Song, Changjiang; Li, Yafeng
2012-01-01
Palmprints have been widely studied for biometric recognition for many years. Traditionally, a white light source is used for illumination. Recently, multispectral imaging has drawn attention because of its high recognition accuracy. Multispectral palmprint systems can provide more discriminant information under different illuminations in a short time, thus they can achieve better recognition accuracy. Previously, multispectral palmprint images were taken as a kind of multi-modal biometrics, and the fusion scheme on the image level or matching score level was used. However, some spectral information will be lost during image level or matching score level fusion. In this study, we propose a new method for multispectral images based on a quaternion model which could fully utilize the multispectral information. Firstly, multispectral palmprint images captured under red, green, blue and near-infrared (NIR) illuminations were represented by a quaternion matrix, then principal component analysis (PCA) and discrete wavelet transform (DWT) were applied respectively on the matrix to extract palmprint features. After that, Euclidean distance was used to measure the dissimilarity between different features. Finally, the sum of two distances and the nearest neighborhood classifier were employed for recognition decision. Experimental results showed that using the quaternion matrix can achieve a higher recognition rate. Given 3000 test samples from 500 palms, the recognition rate can be as high as 98.83%. PMID:22666049
NASA Astrophysics Data System (ADS)
Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.
2016-01-01
Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
Delrue, Steven; Tabatabaeipour, Morteza; Hettler, Jan; Van Den Abeele, Koen
2016-05-01
Friction stir welding (FSW) is a promising technology for the joining of aluminum alloys and other metallic admixtures that are hard to weld by conventional fusion welding. Although FSW generally provides better fatigue properties than traditional fusion welding methods, fatigue properties are still significantly lower than for the base material. Apart from voids, kissing bonds for instance, in the form of closed cracks propagating along the interface of the stirred and heat affected zone, are inherent features of the weld and can be considered as one of the main causes of a reduced fatigue life of FSW in comparison to the base material. The main problem with kissing bond defects in FSW, is that they currently are very difficult to detect using existing NDT methods. Besides, in most cases, the defects are not directly accessible from the exposed surface. Therefore, new techniques capable of detecting small kissing bond flaws need to be introduced. In the present paper, a novel and practical approach is introduced based on a nonlinear, single-sided, ultrasonic technique. The proposed inspection technique uses two single element transducers, with the first transducer transmitting an ultrasonic signal that focuses the ultrasonic waves at the bottom side of the sample where cracks are most likely to occur. The large amount of energy at the focus activates the kissing bond, resulting in the generation of nonlinear features in the wave propagation. These nonlinear features are then captured by the second transducer operating in pitch-catch mode, and are analyzed, using pulse inversion, to reveal the presence of a defect. The performance of the proposed nonlinear, pitch-catch technique, is first illustrated using a numerical study of an aluminum sample containing simple, vertically oriented, incipient cracks. Later, the proposed technique is also applied experimentally on a real-life friction stir welded butt joint containing a kissing bond flaw. Copyright © 2016 Elsevier B.V. All rights reserved.
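The pulse-inversion analysis mentioned above can be illustrated with a toy numerical example: the responses to an excitation and to its phase-inverted copy are summed, so linear contributions cancel and the even-order (nonlinear) signature attributable to the contact nonlinearity remains. The quadratic response term and all signal parameters below are illustrative assumptions.

```python
import numpy as np

fs = 10e6                              # sampling rate (Hz), illustrative
t = np.arange(0, 200e-6, 1 / fs)
f0 = 200e3                             # excitation frequency, illustrative

def propagate(excitation, beta=0.05):
    """Toy material response: linear term plus a weak quadratic term
    standing in for contact-acoustic nonlinearity at a kissing bond."""
    return excitation + beta * excitation ** 2

pulse = np.sin(2 * np.pi * f0 * t) * np.hanning(t.size)
echo_pos = propagate(pulse)            # response to the pulse
echo_neg = propagate(-pulse)           # response to the phase-inverted pulse

# Linear contributions cancel in the sum; what remains is the even-order
# (nonlinear) signature used to reveal the presence of a defect.
residual = echo_pos + echo_neg
print(np.max(np.abs(residual)) / np.max(np.abs(echo_pos)))
```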
Remote sensing fusion based on guided image filtering
NASA Astrophysics Data System (ADS)
Zhao, Wenfei; Dai, Qinling; Wang, Leiguang
2015-12-01
In this paper, we propose a novel remote sensing fusion approach based on guided image filtering. The fused images preserve the spectral features of the original multispectral (MS) images well while enhancing the spatial detail information. Four quality assessment indexes are also introduced to evaluate the fusion effect in comparison with other fusion methods. Experiments were carried out on Gaofen-2, QuickBird, WorldView-2 and Landsat-8 images, and the results show the excellent performance of the proposed method.
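A generic sketch of how a guided filter can be used for detail injection in pansharpening-style fusion: self-guided filtering of the panchromatic band yields a base layer, and the residual detail is added to an upsampled multispectral band. The box-filter implementation and the detail-injection rule below are common simplifications, not the specific method of the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    """Edge-preserving smoothing of `src` guided by `guide` (box-filter form)."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
    var_g = uniform_filter(guide * guide, size) - mean_g * mean_g
    a = cov_gs / (var_g + eps)
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def sharpen_band(ms_band, pan):
    """Inject panchromatic spatial detail into one (already resampled) MS band:
    the guided filter gives a smooth base layer consistent with the PAN edges,
    and the remaining PAN detail is added back to the MS band."""
    base = guided_filter(pan, pan, radius=4, eps=1e-3)
    detail = pan - base
    return ms_band + detail

rng = np.random.default_rng(0)
pan = rng.random((256, 256))
ms_band = rng.random((256, 256))     # assumed already resampled to the PAN grid
print(sharpen_band(ms_band, pan).shape)
```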
Kinase fusions are frequent in Spitz tumors and spitzoid melanomas
Esteve-Puig, Rosaura; Botton, Thomas; Yeh, Iwei; Lipson, Doron; Otto, Geoff; Brennan, Kristina; Murali, Rajmohan; Garrido, Maria; Miller, Vincent A.; Ross, Jeffrey S; Berger, Michael F.; Sparatta, Alyssa; Palmedo, Gabriele; Cerroni, Lorenzo; Busam, Klaus J.; Kutzner, Heinz; Cronin, Maureen T; Stephens, Philip J; Bastian, Boris C.
2014-01-01
Spitzoid neoplasms are a group of melanocytic tumors with distinctive histopathologic features. They include benign tumors (Spitz nevi), malignant tumors (spitzoid melanomas), and tumors with borderline histopathologic features and uncertain clinical outcome (atypical Spitz tumors). Their genetic underpinnings are poorly understood, and alterations in common melanoma-associated oncogenes are typically absent. Here we show that spitzoid neoplasms harbor kinase fusions of ROS1 (17%), NTRK1 (16%), ALK (10%), BRAF (5%), and RET (3%) in a mutually exclusive pattern. The chimeric proteins are constitutively active, stimulate oncogenic signaling pathways, are tumorigenic, and are found in the entire biologic spectrum of spitzoid neoplasms, including 55% of Spitz nevi, 56% of atypical Spitz tumors, and 39% of spitzoid melanomas. Kinase inhibitors suppress the oncogenic signaling of the fusion proteins in vitro. In summary, kinase fusions account for the majority of oncogenic aberrations in spitzoid neoplasms, and may serve as therapeutic targets for metastatic spitzoid melanomas. PMID:24445538
Apostolou, N; Papazoglou, Th; Koutsouris, D
2006-01-01
Image fusion is a process of combining information from multiple sensors. It is a useful tool implemented in the treatment planning programme of Gamma Knife radiosurgery. In this paper we evaluate advanced image fusion algorithms for the Matlab platform and head images. We develop nine grayscale image fusion methods on the Matlab platform: average, principal component analysis (PCA), discrete wavelet transform (DWT), Laplacian, filter-subtract-decimate (FSD), contrast, gradient, and morphological pyramid methods, and a shift-invariant discrete wavelet transform (SIDWT) method. We test these methods qualitatively and quantitatively. The quantitative criteria we use are the Root Mean Square Error (RMSE), the Mutual Information (MI), the Standard Deviation (STD), the Entropy (H), the Difference Entropy (DH) and the Cross Entropy (CEN). The qualitative criteria are: natural appearance, brilliance contrast, presence of complementary features and enhancement of common features. Finally we make clinically useful suggestions.
Kinase fusions are frequent in Spitz tumours and spitzoid melanomas
NASA Astrophysics Data System (ADS)
Wiesner, Thomas; He, Jie; Yelensky, Roman; Esteve-Puig, Rosaura; Botton, Thomas; Yeh, Iwei; Lipson, Doron; Otto, Geoff; Brennan, Kristina; Murali, Rajmohan; Garrido, Maria; Miller, Vincent A.; Ross, Jeffrey S.; Berger, Michael F.; Sparatta, Alyssa; Palmedo, Gabriele; Cerroni, Lorenzo; Busam, Klaus J.; Kutzner, Heinz; Cronin, Maureen T.; Stephens, Philip J.; Bastian, Boris C.
2014-01-01
Spitzoid neoplasms are a group of melanocytic tumours with distinctive histopathological features. They include benign tumours (Spitz naevi), malignant tumours (spitzoid melanomas) and tumours with borderline histopathological features and uncertain clinical outcome (atypical Spitz tumours). Their genetic underpinnings are poorly understood, and alterations in common melanoma-associated oncogenes are typically absent. Here we show that spitzoid neoplasms harbour kinase fusions of ROS1 (17%), NTRK1 (16%), ALK (10%), BRAF (5%) and RET (3%) in a mutually exclusive pattern. The chimeric proteins are constitutively active, stimulate oncogenic signalling pathways, are tumourigenic and are found in the entire biologic spectrum of spitzoid neoplasms, including 55% of Spitz naevi, 56% of atypical Spitz tumours and 39% of spitzoid melanomas. Kinase inhibitors suppress the oncogenic signalling of the fusion proteins in vitro. In summary, kinase fusions account for the majority of oncogenic aberrations in spitzoid neoplasms and may serve as therapeutic targets for metastatic spitzoid melanomas.
Low-energy fusion dynamics of weakly bound nuclei: A time dependent perspective
NASA Astrophysics Data System (ADS)
Diaz-Torres, A.; Boselli, M.
2016-05-01
Recent dynamical fusion models for weakly bound nuclei at low incident energies, based on a time-dependent perspective, are briefly presented. The main features of both the PLATYPUS model and a new quantum approach are highlighted. In contrast to existing time-dependent quantum models, the present quantum approach separates complete and incomplete fusion from total fusion. Calculations performed within a toy model for 6Li + 209Bi at near-barrier energies show that converged excitation functions for total, complete and incomplete fusion can be determined with time-dependent wavepacket dynamics.
MDSplus quality improvement project
Fredian, Thomas W.; Stillerman, Joshua; Manduchi, Gabriele; ...
2016-05-31
MDSplus is a data acquisition and analysis system used worldwide, predominantly in the fusion research community. Development began 29 years ago on the OpenVMS operating system. Since that time many new features have been added and the code has been ported to many different operating systems. The fusion community has contributed to MDSplus development through feature suggestions, feature implementations, documentation and porting to different operating systems. The bulk of the development and support of MDSplus, however, has been provided by a relatively small core developer group of three or four members. Given the size of the development team and the large number of users, much more effort was focused on providing new features for the community than on keeping the underlying code and documentation up to date with evolving software development standards. To ensure that MDSplus will continue to meet the needs of the community in the future, the MDSplus development team, along with other members of the MDSplus user community, has commenced a major quality improvement project. The planned improvements include changes to the software build scripts to make better use of the GNU Autoconf and Automake tools, refactoring many of the source code modules using new language features available in modern compilers, using GNU MinGW-w64 to create MS Windows distributions, migrating to a more modern source code management system, improving the source documentation as well as the www.mdsplus.org web site documentation and layout, and adding more comprehensive test suites to apply to MDSplus code builds prior to releasing installation kits to the community. This project should lead to a much more robust product and establish a framework to maintain stability as more enhancements and features are added. This paper describes these efforts, which are either in progress or planned for the near future.
Argani, Pedram; Zhong, Minghao; Reuter, Victor E.; Fallon, John T.; Epstein, Jonathan I.; Netto, George J.; Antonescu, Cristina R.
2016-01-01
Xp11 translocation cancers include Xp11 translocation renal cell carcinoma (RCC), Xp11 translocation perivascular epithelioid cell tumor (PEComa), and melanotic Xp11 translocation renal cancer. In Xp11 translocation cancers, oncogenic activation of TFE3 is driven by the fusion of TFE3 with a number of different gene partners, however, the impact of individual fusion variant on specific clinicopathologic features of Xp11 translocation cancers has not been well defined. In this study, we analyze 60 Xp11 translocation cancers by fluorescence in situ hybridization (FISH) using custom BAC probes to establish their TFE3 fusion gene partner. In 5 cases RNA sequencing (RNA-seq) was also used to further characterize the fusion transcripts. The 60 Xp11 translocation cancers included 47 Xp11 translocation RCC, 8 Xp11 translocation PEComas, and 5 melanotic Xp11 translocation renal cancers. A fusion partner was identified in 53/60 (88%) cases, including 18 SFPQ (PSF), 16 PRCC, 12 ASPSCR1 (ASPL), 6 NONO, and 1 DVL2. We provide the first morphologic description of the NONO-TFE3 RCC, which frequently demonstrates sub-nuclear vacuoles leading to distinctive suprabasal nuclear palisading. Similar sub-nuclear vacuolization was also characteristic of SFPQ-TFE3 RCC, creating overlapping features with clear cell papillary RCC. We also describe the first RCC with a DVL2-TFE3 gene fusion, in addition to an extrarenal pigmented PEComa with a NONO-TFE3 gene fusion. Furthermore, among neoplasms with the SFPQ-TFE3, NONO-TFE3, DVL2-TFE3 and ASPL-TFE3 gene fusions, the RCC are almost always PAX8-positive, cathepsin K-negative by immunohistochemistry, whereas the mesenchymal counterparts (Xp11 translocation PEComas, melanotic Xp11 translocation renal cancers, and alveolar soft part sarcoma) are PAX8-negative, cathepsin K-positive. These findings support the concept that despite an identical gene fusion, the RCCs are distinct from the corresponding mesenchymal neoplasms, perhaps due to the cellular context in which the translocation occurs. We corroborate prior data showing that the PRCC-TFE3 RCC are the only known Xp11 translocation RCC molecular subtype which is consistently cathepsin K positive. In summary, our data expand further the clinicopathologic features of cancers with specific TFE3 gene fusions, and should allow for more meaningful clinicopathologic associations to be drawn. PMID:26975036
NASA Astrophysics Data System (ADS)
Jia, Lihui; Liang, Shuang; Sackett, Kelly; Xie, Li; Ghosh, Ujjayini; Weliky, David P.
2015-04-01
Rotational-echo double-resonance (REDOR) solid-state NMR is applied to probe the membrane locations of specific residues of membrane proteins. Couplings are measured between protein 13CO nuclei and membrane lipid or cholesterol 2H and 31P nuclei. Specific 13CO labeling is used to enable unambiguous assignment and 2H labeling covers a small region of the lipid or cholesterol molecule. The 13CO-31P and 13CO-2H REDOR respectively probe proximity to the membrane headgroup region and proximity to specific insertion depths within the membrane hydrocarbon core. One strength of the REDOR approach is use of chemically-native proteins and membrane components. The conventional REDOR pulse sequence with 100 kHz 2H π pulses is robust with respect to the 2H quadrupolar anisotropy. The 2H T1's are comparable to the longer dephasing times (τ's) and this leads to exponential rather than sigmoidal REDOR buildups. The 13CO-2H buildups are well-fitted to A × (1 − e^(−γτ)) where A and γ are fitting parameters that are correlated as the fraction of molecules (A) with effective 13CO-2H coupling d = 3γ/2. The REDOR approach is applied to probe the membrane locations of the "fusion peptide" regions of the HIV gp41 and influenza virus hemagglutinin proteins which both catalyze joining of the viral and host cell membranes during initial infection of the cell. The HIV fusion peptide forms an intermolecular antiparallel β sheet and the REDOR data support major deeply-inserted and minor shallowly-inserted molecular populations. A significant fraction of the influenza fusion peptide molecules form a tight hairpin with antiparallel N- and C-α helices and the REDOR data support a single peptide population with a deeply-inserted N-helix. The shared feature of deep insertion of the β and α fusion peptide structures may be relevant for fusion catalysis via the resultant local perturbation of the membrane bilayer. Future applications of the REDOR approach may include samples that contain cell membrane extracts and use of lower temperatures and dynamic nuclear polarization to reduce data acquisition times.
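Fitting the exponential buildup described above is a small least-squares problem; a sketch with SciPy follows, using synthetic dephasing data and the stated relation d = 3γ/2. The numerical values are placeholders, not experimental data.

```python
import numpy as np
from scipy.optimize import curve_fit

def redor_buildup(tau, A, gamma):
    """REDOR dephasing model: A * (1 - exp(-gamma * tau))."""
    return A * (1.0 - np.exp(-gamma * tau))

# Synthetic dephasing times (ms) and dephasing fractions, for illustration only.
tau = np.array([2, 8, 16, 24, 32, 40, 48], dtype=float)
rng = np.random.default_rng(0)
data = redor_buildup(tau, A=0.7, gamma=0.08) + rng.normal(0, 0.02, tau.size)

(A_fit, gamma_fit), _ = curve_fit(redor_buildup, tau, data, p0=(0.5, 0.05))
d_eff = 3 * gamma_fit / 2        # effective dipolar coupling, d = 3*gamma/2
print(round(A_fit, 2), round(d_eff, 3))
```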
Fusion of infrared polarization and intensity images based on improved toggle operator
NASA Astrophysics Data System (ADS)
Zhu, Pan; Ding, Lei; Ma, Xiaoqing; Huang, Zhanhua
2018-01-01
Integration of infrared polarization and intensity images has become a new topic in infrared image understanding and interpretation. The abundant infrared details and targets from the infrared image and the salient edge and shape information from the polarization image should be preserved or even enhanced in the fused result. In this paper, a new fusion method is proposed for infrared polarization and intensity images based on an improved multi-scale toggle operator with spatial scale, which can effectively extract the feature information of the source images and strongly reduce redundancy among the different scales. Firstly, the multi-scale image features of the infrared polarization and intensity images are extracted at different scale levels by the improved multi-scale toggle operator. Secondly, the redundancy of the features among different scales is reduced using the spatial scale. Thirdly, the final image features are combined by simply adding all scales of feature images together, and a base image is calculated by applying a mean-value weighted method to the smoothed source images. Finally, the fusion image is obtained by importing the combined image features into the base image with a suitable strategy. Both objective assessment and subjective visual inspection of the experimental results indicate that the proposed method performs better in preserving detail and edge information as well as in improving image contrast.
URREF Reliability Versus Credibility in Information Fusion
2013-07-01
Study on Hybrid Image Search Technology Based on Texts and Contents
NASA Astrophysics Data System (ADS)
Wang, H. T.; Ma, F. L.; Yan, C.; Pan, H.
2018-05-01
Text-based and content-based image search are first studied here separately. A text-based image feature extraction method is put forward that integrates statistical and topic features, in view of the limitation of extracting keywords by the statistical features of words alone. A search-by-image method based on multi-feature fusion is then put forward, in view of the imprecision of content-based image search using a single feature. Finally, a layered searching method that relies primarily on text-based image search and secondarily on content-based image search is put forward, in view of the differences between the text-based and content-based methods and the difficulty of fusing them directly. The feasibility and effectiveness of the hybrid search algorithm were experimentally verified.
Fusion Simulation Project Workshop Report
NASA Astrophysics Data System (ADS)
Kritz, Arnold; Keyes, David
2009-03-01
The mission of the Fusion Simulation Project is to develop a predictive capability for the integrated modeling of magnetically confined plasmas. This FSP report adds to the previous activities that defined an approach to integrated modeling in magnetic fusion. These previous activities included a Fusion Energy Sciences Advisory Committee panel that was charged to study integrated simulation in 2002. The report of that panel [Journal of Fusion Energy 20, 135 (2001)] recommended the prompt initiation of a Fusion Simulation Project. In 2003, the Office of Fusion Energy Sciences formed a steering committee that developed a project vision, roadmap, and governance concepts [Journal of Fusion Energy 23, 1 (2004)]. The current FSP planning effort involved 46 physicists, applied mathematicians and computer scientists, from 21 institutions, formed into four panels and a coordinating committee. These panels were constituted to consider: Status of Physics Components, Required Computational and Applied Mathematics Tools, Integration and Management of Code Components, and Project Structure and Management. The ideas, reported here, are the products of these panels, working together over several months and culminating in a 3-day workshop in May 2007.
An FPGA-based heterogeneous image fusion system design method
NASA Astrophysics Data System (ADS)
Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong
2011-08-01
Taking advantage of the FPGA's low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible-light CCD camera and an infrared thermal imager are used as the image-capturing devices in order to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as gray-scale weighted averaging, maximum selection and minimum selection are analyzed and compared. VHDL and a synchronous design method are utilized to produce a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Comparative experiments with the various fusion algorithms show that good heterogeneous image fusion quality can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
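The three tailor-made fusion rules reduce to per-pixel operations; a software (NumPy) equivalent is sketched below for two registered 8-bit frames, purely to illustrate the arithmetic that the FPGA modules implement in hardware.

```python
import numpy as np

def fuse(visible, infrared, mode="average", w=0.5):
    """Per-pixel fusion rules for two registered 8-bit grayscale frames."""
    vis = visible.astype(np.float32)
    ir = infrared.astype(np.float32)
    if mode == "average":          # gray-scale weighted averaging
        out = w * vis + (1.0 - w) * ir
    elif mode == "max":            # maximum selection
        out = np.maximum(vis, ir)
    elif mode == "min":            # minimum selection
        out = np.minimum(vis, ir)
    else:
        raise ValueError(mode)
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
vis = rng.integers(0, 256, (240, 320), dtype=np.uint8)
ir = rng.integers(0, 256, (240, 320), dtype=np.uint8)
print(fuse(vis, ir, "max").mean())
```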
Lesion classification using clinical and visual data fusion by multiple kernel learning
NASA Astrophysics Data System (ADS)
Kisilev, Pavel; Hashoul, Sharbell; Walach, Eugene; Tzadok, Asaf
2014-03-01
To overcome operator dependency and to increase diagnosis accuracy in breast ultrasound (US), a lot of effort has been devoted to developing computer-aided diagnosis (CAD) systems for breast cancer detection and classification. Unfortunately, the efficacy of such CAD systems is limited since they rely on correct automatic lesions detection and localization, and on robustness of features computed based on the detected areas. In this paper we propose a new approach to boost the performance of a Machine Learning based CAD system, by combining visual and clinical data from patient files. We compute a set of visual features from breast ultrasound images, and construct the textual descriptor of patients by extracting relevant keywords from patients' clinical data files. We then use the Multiple Kernel Learning (MKL) framework to train SVM based classifier to discriminate between benign and malignant cases. We investigate different types of data fusion methods, namely, early, late, and intermediate (MKL-based) fusion. Our database consists of 408 patient cases, each containing US images, textual description of complaints and symptoms filled by physicians, and confirmed diagnoses. We show experimentally that the proposed MKL-based approach is superior to other classification methods. Even though the clinical data is very sparse and noisy, its MKL-based fusion with visual features yields significant improvement of the classification accuracy, as compared to the image features only based classifier.
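Intermediate (kernel-level) fusion can be approximated by combining one kernel computed on visual features with another computed on textual features and training an SVM on the combined kernel. The sketch below uses fixed kernel weights and synthetic data; in true MKL the weights are learned jointly with the classifier, and the feature dimensions here are illustrative.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
X_visual = rng.normal(size=(n, 40))            # image-derived features
X_text = (rng.random((n, 300)) < 0.05) * 1.0   # sparse bag-of-keywords vector
y = rng.integers(0, 2, n)                      # benign / malignant labels

# One kernel per modality, combined with fixed weights; a full MKL solver
# would learn these weights together with the SVM.
K_visual = rbf_kernel(X_visual, gamma=0.02)
K_text = linear_kernel(X_text)
K = 0.6 * K_visual + 0.4 * K_text

clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))   # training-set score, for illustration only
```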
Jabeen, Safia; Mehmood, Zahid; Mahmood, Toqeer; Saba, Tanzila; Rehman, Amjad; Mahmood, Muhammad Tariq
2018-01-01
For the last three decades, content-based image retrieval (CBIR) has been an active research area, representing a viable solution for retrieving similar images from an image repository. In this article, we propose a novel CBIR technique based on the visual words fusion of speeded-up robust features (SURF) and fast retina keypoint (FREAK) feature descriptors. SURF is a sparse descriptor whereas FREAK is a dense descriptor. Moreover, SURF is a scale and rotation-invariant descriptor that performs better in the case of repeatability, distinctiveness, and robustness. It is robust to noise, detection errors, geometric, and photometric deformations. It also performs better at low illumination within an image as compared to the FREAK descriptor. In contrast, FREAK is a retina-inspired speedy descriptor that performs better for classification-based problems as compared to the SURF descriptor. Experimental results show that the proposed technique based on the visual words fusion of SURF-FREAK descriptors combines the features of both descriptors and resolves the aforementioned issues. The qualitative and quantitative analysis performed on three image collections, namely Corel-1000, Corel-1500, and Caltech-256, shows that proposed technique based on visual words fusion significantly improved the performance of the CBIR as compared to the feature fusion of both descriptors and state-of-the-art image retrieval techniques. PMID:29694429
Fusion of ECG and ABP signals based on wavelet transform for cardiac arrhythmias classification.
Arvanaghi, Roghayyeh; Daneshvar, Sabalan; Seyedarabi, Hadi; Goshvarpour, Atefeh
2017-11-01
Electrocardiogram (ECG) and Arterial Blood Pressure (ABP) signals each contain information about cardiac status. This information can be used for diagnosis and monitoring of diseases. The majority of previously proposed methods rely only on the ECG signal to classify heart rhythms. In this paper, ECG and ABP were used to classify five different types of heart rhythm. To this end, the two signals (ECG and ABP) were fused. These physiological signals were taken from the MIMIC PhysioNet database. The ECG and ABP signals were fused on the basis of the proposed Discrete Wavelet Transformation fusion technique. Then, frequency features were extracted from the fused signal. To classify the different types of cardiac arrhythmia, these features were given to a multi-layer perceptron neural network. The best results were obtained for the proposed fusion algorithm, with accuracy rates of 96.6%, 96.9%, 95.6% and 93.9% for two, three, four and five classes, respectively. In contrast, a maximum classification rate of 89% was obtained for two classes using ECG features alone. The higher accuracy rates were thus achieved by using the proposed fusion technique. The results confirm the importance of fusing features from different physiological signals to gain more accurate assessments. Copyright © 2017 Elsevier B.V. All rights reserved.
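A minimal sketch of wavelet-domain fusion of two synchronized 1-D signals: both are decomposed with a discrete wavelet transform, approximation coefficients are averaged, detail coefficients are taken from whichever signal has the larger magnitude, and the fused signal is reconstructed. The normalization, combination rule, wavelet, and toy waveforms are assumptions, not the paper's exact settings.

```python
import numpy as np
import pywt

def fuse_signals(ecg, abp, wavelet="db4", level=4):
    """Wavelet-domain fusion of two synchronized, equal-length 1-D signals."""
    # Normalize so the two modalities contribute on a comparable scale.
    ecg = (ecg - ecg.mean()) / ecg.std()
    abp = (abp - abp.mean()) / abp.std()
    c_ecg = pywt.wavedec(ecg, wavelet, level=level)
    c_abp = pywt.wavedec(abp, wavelet, level=level)
    fused = [0.5 * (c_ecg[0] + c_abp[0])]               # approximation: average
    for d_e, d_a in zip(c_ecg[1:], c_abp[1:]):
        # Detail coefficients: keep the larger-magnitude value per position.
        fused.append(np.where(np.abs(d_e) >= np.abs(d_a), d_e, d_a))
    return pywt.waverec(fused, wavelet)

fs = 125
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t)                        # toy ECG-like trace
abp = 80 + 20 * np.sin(2 * np.pi * 1.2 * t + 0.5)        # toy ABP-like trace
print(fuse_signals(ecg, abp).shape)
```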
Gene Fusion Markup Language: a prototype for exchanging gene fusion data
2012-01-01
Background An avalanche of next generation sequencing (NGS) studies has generated an unprecedented amount of genomic structural variation data. These studies have also identified many novel gene fusion candidates with more detailed resolution than previously achieved. However, in the excitement and necessity of publishing the observations from this recently developed cutting-edge technology, no community standardization approach has arisen to organize and represent the data with the essential attributes in an interchangeable manner. As transcriptome studies have been widely used for gene fusion discoveries, the current non-standard mode of data representation could potentially impede data accessibility, critical analyses, and further discoveries in the near future. Results Here we propose a prototype, Gene Fusion Markup Language (GFML) as an initiative to provide a standard format for organizing and representing the significant features of gene fusion data. GFML will offer the advantage of representing the data in a machine-readable format to enable data exchange, automated analysis interpretation, and independent verification. As this database-independent exchange initiative evolves it will further facilitate the formation of related databases, repositories, and analysis tools. The GFML prototype is made available at http://code.google.com/p/gfml-prototype/. Conclusion The Gene Fusion Markup Language (GFML) presented here could facilitate the development of a standard format for organizing, integrating and representing the significant features of gene fusion data in an inter-operable and query-able fashion that will enable biologically intuitive access to gene fusion findings and expedite functional characterization. A similar model is envisaged for other NGS data analyses. PMID:23072312
A Markov game theoretic data fusion approach for cyber situational awareness
NASA Astrophysics Data System (ADS)
Shen, Dan; Chen, Genshe; Cruz, Jose B., Jr.; Haynes, Leonard; Kruger, Martin; Blasch, Erik
2007-04-01
This paper proposes an innovative data-fusion/data-mining game-theoretic situation awareness and impact assessment approach for cyber network defense. Alerts generated by Intrusion Detection Sensors (IDSs) or Intrusion Prevention Sensors (IPSs) are fed into the data refinement (Level 0) and object assessment (L1) data fusion components. High-level situation/threat assessment (L2/L3) data fusion based on a Markov game model and Hierarchical Entity Aggregation (HEA) is proposed to refine the primitive predictions generated by adaptive feature/pattern recognition and to capture new, unknown features. A Markov (stochastic) game method is used to estimate the belief of each possible cyber attack pattern. Game theory captures the nature of cyber conflicts: determination of the attacking-force strategies is tightly coupled to determination of the defense-force strategies and vice versa. Markov game theory also deals with uncertainty and incompleteness of the available information. A software tool is developed to demonstrate the performance of the high-level information fusion for cyber network defense situations, and a simulation example shows the enhanced understanding of cyber-network defense.
Geoinformatics and Data Fusion in the Southwestern Utah Mineral Belt
NASA Astrophysics Data System (ADS)
Kiesel, T.; Enright, R.
2012-12-01
Data fusion is a technique in remote sensing that combines separate geophysical data sets from different platforms to extract the maximum information from each set. Data fusion was employed on multiple sources of data to investigate an area of the Utah Mineral Belt known as the San Francisco Mining District. In the past many mineral deposits were expressed in or on the immediate surface and were therefore relatively easy to locate. More modern methods of investigation look for evidence beyond the visible spectrum to find patterns that predict the presence of deeply buried mineral deposits. The methods used in this study employed measurements of reflectivity or emissivity features in the infrared portion of the electromagnetic spectrum for different materials, elevation data collected from the Shuttle Radar Topography Mission, and indirect measurements of the magnetic or mass properties of deposits. The measurements were collected by various spaceborne remote sensing instruments such as Landsat TM, ASTER and Hyperion, and by ground-based statewide geophysical surveys. ASTER's shortwave infrared bands, which have been calibrated to surface reflectance using the atmospheric correction tool FLAASH, can be used to identify products of hydrothermal alteration such as kaolinite, alunite, limonite and pyrophyllite using image spectroscopy. The thermal infrared bands, once calibrated to emissivity, can be used to differentiate between felsic, mafic and carbonate rock units for the purposes of lithologic mapping. To validate the results from the extracted spectral profiles, existing geological reports were used as ground truth. Measurements of electromagnetic spectra can only reveal the composition of surface features, so gravimetric and magnetic information were utilized to reveal subsurface features. Using Bouguer anomaly data provided by the USGS, an interpreted geological cross section can be created that indicates the shape of local igneous intrusions and the depth of sedimentary basins. By comparing the digital elevation model with a satellite photo of the area, a major high-angle fault system was identified that had not been clearly evaluated in previous geologic mapping. For the investigation of the Frisco Mining District, gravity and magnetic data were fused to help differentiate igneous and sedimentary rocks that might have the same density. Data fusion allows for a more thorough analysis than viewing each data set separately, improves the ability to understand the complex geology of an area, and can be applied to any remote sensing data set regardless of the type of instrument used.
Hypoglycemia alarm enhancement using data fusion.
Skladnev, Victor N; Tarnavskii, Stanislav; McGregor, Thomas; Ghevondian, Nejhdeh; Gourlay, Steve; Jones, Timothy W
2010-01-01
The acceptance of closed-loop blood glucose (BG) control using continuous glucose monitoring systems (CGMS) is likely to improve with enhanced performance of their integral hypoglycemia alarms. This article presents an in silico analysis (based on clinical data) of a modeled CGMS alarm system with trained thresholds on type 1 diabetes mellitus (T1DM) patients that is augmented by sensor fusion from a prototype hypoglycemia alarm system (HypoMon). This prototype alarm system is based on largely independent autonomic nervous system (ANS) response features. Alarm performance was modeled using overnight BG profiles recorded previously on 98 T1DM volunteers. These data included the corresponding ANS response features detected by HypoMon (AiMedics Pty. Ltd.) systems. CGMS data and alarms were simulated by applying a probabilistic model to these overnight BG profiles. The probabilistic model used a mean response delay of 7.1 minutes, measurement error offsets on each sample of +/- standard deviation (SD) = 4.5 mg/dl (0.25 mmol/liter), and vertical shifts (calibration offsets) of +/- SD = 19.8 mg/dl (1.1 mmol/liter). Modeling produced 90 to 100 simulated measurements per patient. Alarm systems for all analyses were optimized on a training set of 46 patients and evaluated on the test set of 56 patients. The split between the sets was based on enrollment dates. Optimization was based on detection accuracy but not time to detection for these analyses. The contribution of this form of data fusion to hypoglycemia alarm performance was evaluated by comparing the performance of the trained CGMS and fused data algorithms on the test set under the same evaluation conditions. The simulated addition of HypoMon data produced an improvement in CGMS hypoglycemia alarm performance of 10% at equal specificity. Sensitivity improved from 87% (CGMS as stand-alone measurement) to 97% for the enhanced alarm system. Specificity was maintained constant at 85%. Positive predictive values on the test set improved from 61% to 66%, with negative predictive values improving from 96% to 99%. These enhancements were stable within sensitivity analyses. Sensitivity analyses also suggested larger performance increases at lower CGMS alarm performance levels. Autonomic nervous system response features provide complementary information suitable for fusion with CGMS data to enhance nocturnal hypoglycemia alarms. 2010 Diabetes Technology Society.
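The probabilistic CGMS simulation described above (response delay, per-sample noise, and a per-night calibration offset) can be sketched as follows. The sampling period, the toy reference profile, and the function name are assumptions for illustration; only the delay and noise magnitudes are taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_cgms(bg_times_min, bg_values_mgdl, sample_period_min=5.0,
                  delay_min=7.1, noise_sd=4.5, calibration_sd=19.8):
    """Generate simulated CGMS readings from a reference overnight BG profile."""
    t_sensor = np.arange(bg_times_min[0], bg_times_min[-1], sample_period_min)
    # Sensor reads the BG value from `delay_min` minutes earlier (response delay).
    delayed = np.interp(t_sensor - delay_min, bg_times_min, bg_values_mgdl)
    calibration_offset = rng.normal(0.0, calibration_sd)        # one vertical shift per night
    noise = rng.normal(0.0, noise_sd, size=t_sensor.shape)      # per-sample measurement error
    return t_sensor, delayed + calibration_offset + noise

# Toy reference profile: 8 hours of overnight BG values (mg/dl).
t_ref = np.linspace(0, 480, 97)
bg_ref = 110 + 30 * np.sin(t_ref / 480 * np.pi)
t_cgms, cgms = simulate_cgms(t_ref, bg_ref)
print(len(cgms), cgms[:3].round(1))   # roughly 90-100 samples per simulated night
```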
NASA Astrophysics Data System (ADS)
He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Dong, Hongxing
2014-12-01
Gabor descriptors have been widely used in iris texture representations. However, fixed basic Gabor functions cannot match the changing nature of diverse iris datasets. Furthermore, a single form of iris feature cannot overcome difficulties in iris recognition, such as illumination variations, environmental conditions, and device variations. This paper provides multiple local feature representations and their fusion scheme based on a support vector regression (SVR) model for iris recognition using optimized Gabor filters. In our iris system, a particle swarm optimization (PSO)- and a Boolean particle swarm optimization (BPSO)-based algorithm is proposed to provide suitable Gabor filters for each involved test dataset without predefinition or manual modulation. Several comparative experiments on JLUBR-IRIS, CASIA-I, and CASIA-V4-Interval iris datasets are conducted, and the results show that our work can generate improved local Gabor features by using optimized Gabor filters for each dataset. In addition, our SVR fusion strategy may make full use of their discriminative ability to improve accuracy and reliability. Other comparative experiments show that our approach may outperform other popular iris systems.
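To make the role of the tunable Gabor parameters concrete, the sketch below builds a small bank of 2-D Gabor kernels and filters a normalized iris image. The kernel size, wavelengths, and orientations are illustrative stand-ins for the values a PSO/BPSO search would select per dataset, and the sign-based coding step is only a coarse simplification of the paper's SVR-fused local features.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5):
    """Real part of a 2-D Gabor filter; wavelength/theta/sigma are the
    parameters a PSO/BPSO search could tune per dataset."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + gamma**2 * y_t**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / wavelength)
    return envelope * carrier

iris_patch = np.random.rand(64, 512)          # stand-in for a normalized iris image
responses = [convolve2d(iris_patch, gabor_kernel(31, wl, th, sigma=4.0), mode="same")
             for wl in (8, 16) for th in np.linspace(0, np.pi, 4, endpoint=False)]
feature = np.concatenate([np.sign(r).ravel() for r in responses])  # coarse binary Gabor code
print(feature.shape)
```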
Yang, Zhiwei; Gou, Lu; Chen, Shuyu; Li, Na; Zhang, Shengli; Zhang, Lei
2017-01-01
Membrane fusion is one of the most fundamental physiological processes in eukaryotes, triggering the fusion of lipid and content as well as neurotransmission. However, the architectural features of the neurotransmitter release machinery and the interdependent mechanism of synaptic membrane fusion have not been extensively studied. This review article expounds the neuronal membrane fusion processes, discusses the fundamental steps in all fusion reactions (membrane aggregation, membrane association, lipid rearrangement and lipid and content mixing) and the probable mechanism coupling them to the delivery of neurotransmitters. Subsequently, this work summarizes the research on the fusion process in synaptic transmission, using electron microscopy (EM) and molecular simulation approaches. Finally, we propose the future outlook for more exciting applications of membrane fusion involved in synaptic transmission, with the aid of stochastic optical reconstruction microscopy (STORM), cryo-electron microscopy (cryo-EM), and molecular simulations. PMID:28638320
Soft computing-based terrain visual sensing and data fusion for unmanned ground robotic systems
NASA Astrophysics Data System (ADS)
Shirkhodaie, Amir
2006-05-01
In this paper, we have primarily discussed technical challenges and navigational skill requirements of mobile robots for traversability path planning in natural terrain environments similar to Mars surface terrains. We have described different methods for detection of salient terrain features based on imaging texture analysis techniques. We have also presented three competing techniques for terrain traversability assessment of mobile robots navigating in unstructured natural terrain environments. These three techniques include: a rule-based terrain classifier, a neural network-based terrain classifier, and a fuzzy-logic terrain classifier. Each proposed terrain classifier divides a region of natural terrain into finite sub-terrain regions and classifies terrain condition exclusively within each sub-terrain region based on terrain visual clues. The Kalman Filtering technique is applied for aggregative fusion of sub-terrain assessment results. The last two terrain classifiers are shown to have remarkable capability for terrain traversability assessment of natural terrains. We have conducted a comparative performance evaluation of all three terrain classifiers and presented the results in this paper.
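A scalar Kalman-style update can illustrate how per-sub-region assessments might be aggregated as described above. The sketch below fuses hypothetical traversability scores with per-classifier variances; the numbers and the reduction of each sub-terrain assessment to a single noisy scalar are assumptions for illustration, not the paper's exact formulation.

```python
def fuse_traversability(scores, variances, prior=0.5, prior_var=1.0):
    """Sequentially fuse per-sub-region traversability scores (0..1) with a
    scalar Kalman update; lower variance = more trusted classifier output."""
    estimate, var = prior, prior_var
    for z, r in zip(scores, variances):
        k = var / (var + r)                      # Kalman gain
        estimate = estimate + k * (z - estimate)
        var = (1.0 - k) * var
    return estimate, var

# Hypothetical outputs of the rule-based / neural / fuzzy classifiers on three sub-regions.
est, var = fuse_traversability([0.8, 0.6, 0.75], [0.05, 0.2, 0.1])
print(round(est, 3), round(var, 4))
```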
Learning target masks in infrared linescan imagery
NASA Astrophysics Data System (ADS)
Fechner, Thomas; Rockinger, Oliver; Vogler, Axel; Knappe, Peter
1997-04-01
In this paper we propose a neural network based method for the automatic detection of ground targets in airborne infrared linescan imagery. Instead of using a dedicated feature extraction stage followed by a classification procedure, we propose the following three-step scheme: In the first step of the recognition process, the input image is decomposed into its pyramid representation, thus obtaining a multiresolution signal representation. At the lowest three levels of the Laplacian pyramid, a neural network filter of moderate size is trained to indicate the target location. The last step consists of fusing the outputs of the several neural network filters to obtain the final result. To perform this fusion we use a belief network to combine the various filter outputs in a statistically meaningful way. In addition, the belief network allows the integration of further knowledge about the image domain. By applying this multiresolution recognition scheme, we obtain nearly scale- and rotation-invariant target recognition with a significantly decreased false alarm rate compared with a single-resolution target recognition scheme.
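The pyramid decomposition used in the first step can be sketched as follows; this is a generic Laplacian pyramid built with Gaussian smoothing and decimation, shown only to illustrate the multiresolution representation, with the level count and smoothing width chosen arbitrarily.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def laplacian_pyramid(image, levels=4, sigma=1.0):
    """Decompose an image into Laplacian band-pass levels plus a low-pass residual."""
    pyramid, current = [], image.astype(float)
    for _ in range(levels):
        blurred = gaussian_filter(current, sigma)
        down = blurred[::2, ::2]                                        # decimate
        up = zoom(down, 2, order=1)[:current.shape[0], :current.shape[1]]
        pyramid.append(current - up)                                    # band-pass detail level
        current = down
    pyramid.append(current)                                             # low-pass residual
    return pyramid

levels = laplacian_pyramid(np.random.rand(256, 256))
print([lv.shape for lv in levels])
```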
Atomic-Level Quality Assessment of Enzymes Encapsulated in Bioinspired Silica.
Martelli, Tommaso; Ravera, Enrico; Louka, Alexandra; Cerofolini, Linda; Hafner, Manuel; Fragai, Marco; Becker, Christian F W; Luchinat, Claudio
2016-01-04
Among protein immobilization strategies, encapsulation in bioinspired silica is increasingly popular. Encapsulation offers high yields and the solid support is created through a protein-catalyzed polycondensation reaction that occurs under mild conditions. An integrated strategy is reported for the characterization of both the protein and the bioinspired silica scaffold generated by the encapsulation of enzymes with an external silica-forming promoter or with the promoter expressed as a fusion to the enzyme. This strategy is applied to the catalytic domain of matrix metalloproteinase 12. Analysis reveals that the structure of the protein encapsulated by either method is not significantly altered with respect to the native form. The structural features of silica obtained by either strategy are also similar, but differ from those obtained by other approaches. In the case of the covalently linked R5-enzyme construct, immobilization yields are higher. Encapsulation through a fusion protein, therefore, appears to be the method of choice. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Chaa, Mourad; Boukezzoula, Naceur-Eddine; Attia, Abdelouahab
2017-01-01
Two types of scores, extracted from two-dimensional (2-D) and three-dimensional (3-D) palmprints, are merged for personal recognition systems, and a local image descriptor for 2-D palmprint-based recognition, named bank of binarized statistical image features (B-BSIF), is introduced. The main idea of B-BSIF is that the histograms extracted from the binarized statistical image features (BSIF) code images (the results of applying BSIF descriptors of different sizes, each of length 12) are concatenated into one to produce a large feature vector. The 3-D palmprint contains the depth information of the palm surface. The self-quotient image (SQI) algorithm is applied to reconstruct illumination-invariant 3-D palmprint images. To extract discriminative Gabor features from SQI images, Gabor wavelets are defined and used. Indeed, dimensionality reduction methods have shown their ability in biometric systems; given this, a principal component analysis (PCA)+linear discriminant analysis (LDA) technique is employed. For the matching process, the cosine Mahalanobis distance is applied. Extensive experiments were conducted on a 2-D and 3-D palmprint database with 10,400 range images from 260 individuals. A comparison was then made between the proposed algorithm and other existing methods in the literature. Results clearly show that the proposed framework provides a higher correct recognition rate. Furthermore, the best results were obtained by merging the score of the B-BSIF descriptor with the score of the SQI+Gabor wavelets+PCA+LDA method, yielding an equal error rate of 0.00% and a rank-1 recognition rate of 100.00%.
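The histogram-concatenation idea behind B-BSIF can be sketched as below. The random integer arrays stand in for the 12-bit BSIF code images produced by filters of different sizes (learning the BSIF filters themselves is not reproduced here), and the set of filter sizes is an assumption for illustration.

```python
import numpy as np

def code_histogram(code_image, n_bits=12):
    """Normalized histogram of a BSIF-style code image whose pixels are 12-bit integer codes."""
    hist, _ = np.histogram(code_image, bins=2**n_bits, range=(0, 2**n_bits))
    return hist / hist.sum()

# Stand-ins for the code images produced by BSIF filters of several sizes (3x3 ... 17x17).
code_images = {size: np.random.randint(0, 2**12, (128, 128))
               for size in (3, 5, 7, 9, 11, 13, 15, 17)}

# B-BSIF idea: concatenate the per-size histograms into one large feature vector.
b_bsif = np.concatenate([code_histogram(img) for img in code_images.values()])
print(b_bsif.shape)   # 8 filter sizes x 4096 bins
```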
NASA Astrophysics Data System (ADS)
Zhang, Cheng; Wenbo, Mei; Huiqian, Du; Zexian, Wang
2018-04-01
This paper proposes a new algorithm for medical image fusion, which combines a gradient minimization smoothing filter (GMSF) with a non-subsampled directional filter bank (NSDFB). In order to preserve more detail information, a multi-scale edge-preserving decomposition framework (MEDF) is used to decompose an image into a base image and a series of detail images. For the fusion of base images, a local Gaussian membership function is applied to construct the fusion weighting factor. For the fusion of detail images, the NSDFB is applied to decompose each detail image into multiple directional sub-images, which are then fused by a pulse-coupled neural network (PCNN). The experimental results demonstrate that the proposed algorithm is superior to the compared algorithms in both visual effect and objective assessment.
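The base/detail decomposition and the separate fusion rules can be caricatured with the sketch below. A Gaussian filter stands in for the GMSF, a simple average stands in for the Gaussian-membership weighting of the base layers, and a max-absolute rule replaces the NSDFB+PCNN fusion of the detail layers; all of these substitutions are illustrative assumptions, not the paper's method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(image, sigmas=(2, 4, 8)):
    """Multi-scale decomposition into one base image and several detail layers.
    Gaussian smoothing is used here as a simple stand-in for the GMSF."""
    details, current = [], image.astype(float)
    for s in sigmas:
        smooth = gaussian_filter(current, s)
        details.append(current - smooth)
        current = smooth
    return current, details            # base, [detail_1, ..., detail_n]

def fuse(img_a, img_b):
    base_a, det_a = decompose(img_a)
    base_b, det_b = decompose(img_b)
    fused_base = 0.5 * (base_a + base_b)                         # stand-in for membership weighting
    fused_details = [np.where(np.abs(da) >= np.abs(db), da, db)  # max-abs rule instead of NSDFB+PCNN
                     for da, db in zip(det_a, det_b)]
    return fused_base + sum(fused_details)

fused = fuse(np.random.rand(256, 256), np.random.rand(256, 256))
print(fused.shape)
```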
Joint sparsity based heterogeneous data-level fusion for target detection and estimation
NASA Astrophysics Data System (ADS)
Niu, Ruixin; Zulch, Peter; Distasio, Marcello; Blasch, Erik; Shen, Dan; Chen, Genshe
2017-05-01
Typical surveillance systems employ decision- or feature-level fusion approaches to integrate heterogeneous sensor data, which are sub-optimal and incur information loss. In this paper, we investigate data-level heterogeneous sensor fusion. Since the sensors monitor the common targets of interest, whose states can be determined by only a few parameters, it is reasonable to assume that the measurement domain has a low intrinsic dimensionality. For heterogeneous sensor data, we develop a joint-sparse data-level fusion (JSDLF) approach based on the emerging joint sparse signal recovery techniques by discretizing the target state space. This approach is applied to fuse signals from multiple distributed radio frequency (RF) signal sensors and a video camera for joint target detection and state estimation. The JSDLF approach is data-driven and requires minimum prior information, since there is no need to know the time-varying RF signal amplitudes, or the image intensity of the targets. It can handle non-linearity in the sensor data due to state space discretization and the use of frequency/pixel selection matrices. Furthermore, for a multi-target case with J targets, the JSDLF approach only requires discretization in a single-target state space, instead of discretization in a J-target state space, as in the case of the generalized likelihood ratio test (GLRT) or the maximum likelihood estimator (MLE). Numerical examples are provided to demonstrate that the proposed JSDLF approach achieves excellent performance with near real-time accurate target position and velocity estimates.
NASA-NIAC 2001 Phase I Research Grant on Aneutronic Fusion Spacecraft Architecture
NASA Technical Reports Server (NTRS)
Tarditi, Alfonso G. (Principal Investigator); Scott, John H.; Miley, George H.
2012-01-01
This study was developed because of the recognized need to define a new spacecraft architecture suitable for aneutronic fusion and featuring game-changing space travel capabilities. The core of this architecture is the definition of a new kind of fusion-based space propulsion system. This research is not about exploring a new fusion energy concept; it assumes the availability of an aneutronic fusion energy reactor. The focus is on providing the best (most efficient) utilization of fusion energy for propulsion purposes. The rationale is that without a proper architecture design, even the utilization of a fusion reactor as a prime energy source for spacecraft propulsion will not provide the performance required to achieve a substantial change in current space travel capabilities.
Liu, Yu-Tao; Shi, Yuan-Kai; Hao, Xue-Zhi; Wang, Lin; Li, Jun-Ling; Han, Xiao-Hong; Li, Dan; Zhou, Yu-Jie; Tang, Le
2014-01-01
Background The echinoderm microtubule-associated protein-like-4-anaplastic lymphoma kinase (EML4-ALK) fusion gene defines a novel molecular subset of non-small-cell lung cancer (NSCLC). However, the clinicopathological features of patients with the EML4-ALK fusion gene have not been defined completely. Methods Clinicopathological data of 200 Chinese patients with advanced NSCLC were analyzed retrospectively to explore their possible correlations with EML4-ALK fusions. Results The EML4-ALK fusion gene was detected in 56 (28.0%) of the 200 NSCLC patients, and undetected in 22 (11.0%) patients because of an insufficient amount of pathological tissue. The median age of the patients with positive and negative EML4-ALK was 48 and 55 years, respectively. Patients with the EML4-ALK fusion gene were significantly younger (P< 0.001). The detection rate of the EML4-ALK fusion gene in patients who received primary tumor or metastatic lymph node resection was significantly higher than in patients who received fine-needle biopsy (P= 0.003). The detection rate of the EML4-ALK fusion gene in patients with a time lag from obtainment of the pathological tissue to EML4-ALK fusion gene detection ≤48 months was significantly higher than in patients >48 months (P= 0.020). The occurrence of the EML4-ALK fusion gene in patients with wild-type epidermal growth factor receptor (EGFR) was significantly higher than in patients with mutant-type EGFR (42.5% [37/87] vs. 6.3% [1/16], P= 0.005). Conclusions Younger age and wild-type EGFR were identified as clinicopathological characteristics of patients with advanced NSCLC who harbored the EML4-ALK fusion gene. The optimal time lag from the obtainment of the pathological tissue to the time of EML4-ALK fusion gene detection is ≤48 months. PMID:26767009
Bigdeli, Amir Khosrow; Gazyakan, Emre; Schmidt, Volker Juergen; Hernekamp, Frederick Jochen; Harhaus, Leila; Henzler, Thomas; Kremer, Thomas; Kneser, Ulrich; Hirche, Christoph
2016-06-01
Near-infrared indocyanine green video angiography (ICG-NIR-VA) has been introduced for free-flap surgery and may support intraoperative flap design as well as postoperative monitoring. Nevertheless, the technique has not been established in clinical routine because of controversy over its benefits. The improved technical features of the novel Visionsense ICG-NIR-VA surgery system make it promising to revisit this field of application. It features a unique real-time fusion image of simultaneous NIR and white-light visualization, with highlighted perfusion, including a color-coded perfusion flow scale for optimized anatomical understanding. In a feasibility study, the Visionsense ICG-NIR-VA system was applied during 10 free-flap surgeries in 8 patients at our center. Indications included anterior lateral thigh (ALT) flap (n = 4), latissimus dorsi muscle flap (n = 1), tensor fascia latae flap (n = 1), and two bilateral deep inferior epigastric artery perforator flaps (n = 4). The system was used intraoperatively and postoperatively to investigate its impact on surgical decision making and to observe perfusion patterns correlated to clinical monitoring. Visionsense ICG-NIR-VA aided in assessing free-flap design and perfusion patterns in all cases and correlated with clinical observations. Additional interventions were performed in 2 cases (22%). One venous anastomosis was revised, and 1 flap was redesigned. Indicated by ICG-NIR-VA, 1 ALT flap developed partial flap necrosis (11%). The Visionsense ICG-NIR-VA system allowed a virtual view of flap perfusion anatomy by real-time fusion imaging. The system improved decision making for flap design and surgical decisions. Clinical and ICG-NIR-VA parameters correlated. Its future implementation may aid in improving outcomes for free-flap surgery, but additional experience is needed to define its final role. © The Author(s) 2015.
Development of emergent processing loops as a system of systems concept
NASA Astrophysics Data System (ADS)
Gainey, James C., Jr.; Blasch, Erik P.
1999-03-01
This paper describes an engineering approach toward implementing the current neuroscientific understanding of how the primate brain fuses, or integrates, 'information' in the decision-making process. We describe a System of Systems (SoS) design for improving the overall performance, capabilities, operational robustness, and user confidence of Identification (ID) systems and show how it could be applied to biometrics security. We use the Physio-associative temporal sensor integration algorithm (PATSIA), which is motivated by observed functions and interactions of the thalamus, hippocampus, and cortical structures in the brain. PATSIA utilizes signal theory mathematics to model how the human efficiently perceives and uses information from the environment. The hybrid architecture implements a possible SoS-level description of the US Joint Directors of Laboratories (JDL) Fusion Working Group's functional description involving 5 levels of fusion and their associated definitions. This SoS architecture proposes dynamic sensor and knowledge-source integration by implementing multiple Emergent Processing Loops for prediction, feature extraction, matching, and searching of both static and dynamic databases, like MSTAR's PEMS loops. Biologically, this effort demonstrates these objectives by modeling similar processes from the eyes, ears, and somatosensory channels, through the thalamus, and to the cortices as appropriate, while using the hippocampus for short-term memory search and storage as necessary. The particular approach demonstrated incorporates commercially available speaker verification and face recognition software and hardware to collect data and extract features for the PATSIA. The PATSIA maximizes the confidence levels for target identification or verification in dynamic situations using a belief filter. The proof of concept described here is easily adaptable and scalable to other military and nonmilitary sensor fusion applications.
Information Fusion - Methods and Aggregation Operators
NASA Astrophysics Data System (ADS)
Torra, Vicenç
Information fusion techniques are commonly applied in Data Mining and Knowledge Discovery. In this chapter, we will give an overview of such applications considering their three main uses. That is, we consider fusion methods for data preprocessing, model building and information extraction. Some aggregation operators (i.e. particular fusion methods) and their properties are briefly described as well.
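As a concrete example of the aggregation operators mentioned here, the sketch below implements the ordered weighted averaging (OWA) operator, one of the operators commonly covered in this literature; the score values and weight vectors are arbitrary illustrations.

```python
import numpy as np

def owa(values, weights):
    """Ordered Weighted Averaging operator: weights are applied to the values
    sorted in decreasing order, not to particular sources."""
    values = np.sort(np.asarray(values, dtype=float))[::-1]
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0)
    return float(values @ weights)

scores = [0.9, 0.4, 0.7, 0.6]                 # e.g., scores from four information sources
print(owa(scores, [0.4, 0.3, 0.2, 0.1]))      # optimistic weighting favors high scores
print(owa(scores, [0.1, 0.2, 0.3, 0.4]))      # pessimistic weighting favors low scores
```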
NASA Astrophysics Data System (ADS)
Singh, Dharmendra; Kumar, Harish
Earth observation satellites provide data that cover different portions of the electromagnetic spectrum at different spatial and spectral resolutions. The increasing availability of information products generated from satellite images is extending the ability to understand the patterns and dynamics of earth resource systems at all scales of inquiry. One of the most important applications is the generation of land cover classifications from satellite images for understanding the actual status of various land cover classes. The prospect for the use of satellite images in land cover classification is an extremely promising one. The quality of satellite images available for land-use mapping is improving rapidly through the development of advanced sensor technology. Particularly noteworthy in this regard is the improved spatial and spectral resolution of the images captured by new satellite sensors like MODIS, ASTER, Landsat 7, and SPOT 5. For the full exploitation of increasingly sophisticated multisource data, fusion techniques are being developed. Fused images may enhance interpretation capabilities. The images used for fusion have different temporal and spatial resolution; therefore, the fused image provides a more complete view of the observed objects. One of the main aims of image fusion is to integrate different data in order to obtain more information than can be derived from each of the single sensor data alone. A good example of this is the fusion of images acquired by different sensors having a different spatial resolution and a different spectral resolution. Researchers have been applying fusion techniques for three decades and have proposed various useful methods. The importance of high-quality synthesis of spectral information is well suited to, and has been implemented for, land cover classification. More recently, an underlying multiresolution analysis employing the discrete wavelet transform has been used in image fusion. It was found that multisensor image fusion is a tradeoff between the spectral information from a low-resolution multispectral image and the spatial information from a high-resolution multispectral image. With the wavelet transform based fusion method, it is easy to control this tradeoff. A new transform, the curvelet transform, was introduced in recent years by Starck. A ridgelet transform is applied to square blocks of detail frames of an undecimated wavelet decomposition, and consequently the curvelet transform is obtained. Since the ridgelet transform possesses basis functions matching directional straight lines, the curvelet transform is capable of representing piecewise linear contours on multiple scales through few significant coefficients. This property leads to a better separation between geometric details and background noise, which may be easily reduced by thresholding curvelet coefficients before they are used for fusion. The Terra and Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) instrument provides high radiometric sensitivity (12 bit) in 36 spectral bands ranging in wavelength from 0.4 μm to 14.4 μm, and its data are freely available. Two bands are imaged at a nominal resolution of 250 m at nadir, five bands at 500 m, and the remaining 29 bands at 1 km. In this paper, band 1 (250 m spatial resolution, 620-670 nm bandwidth) and band 2 (250 m spatial resolution, 842-876 nm bandwidth) are considered, as these bands have special features for identifying agriculture and other land covers.
In January 2006, the Advanced Land Observing Satellite (ALOS) was successfully launched by the Japan Aerospace Exploration Agency (JAXA). The Phased Array type L-band SAR (PALSAR) sensor onboard the satellite acquires SAR imagery at a wavelength of 23.5 cm (frequency 1.27 GHz) with capabilities of multimode and multipolarization observation. PALSAR can operate in several modes: the fine-beam single (FBS) polarization mode (HH), fine-beam dual (FBD) polarization mode (HH/HV or VV/VH), polarimetric (PLR) mode (HH/HV/VH/VV), and ScanSAR (WB) mode (HH/VV) [15]. This makes PALSAR imagery very attractive for a spatially and temporally consistent monitoring system. The principle of Principal Component Analysis is that most of the information within all the bands can be compressed into a much smaller number of bands with little loss of information. It allows us to extract the low-dimensional subspaces that capture the main linear correlations among the high-dimensional image data. This facilitates viewing the explained variance or signal in the available imagery, allowing both gross and more subtle features in the imagery to be seen. In this paper we have explored the fusion technique for enhancing the land cover classification of low-resolution satellite data, especially freely available satellite data. For this purpose, we have considered fusing the PALSAR principal component data with the MODIS principal component data. Initially, MODIS band 1 and band 2 are considered and their principal components are computed. Similarly, the PALSAR HH, HV and VV polarized data are considered, and their principal components are also computed. Consequently, the PALSAR principal component image is fused with the MODIS principal component image. The aim of this paper is to analyze the effect of fusing PALSAR data with MODIS data on the classification accuracy of major land cover types such as agriculture, water and urban bodies. The curvelet transformation has been applied for the fusion of these two satellite images, and the Minimum Distance classification technique has been applied to the resultant fused image. It is qualitatively and visually observed that the overall classification accuracy of the MODIS image after fusion is enhanced. This type of fusion technique may be quite helpful in the near future for using freely available satellite data to develop monitoring systems for different land cover classes on the earth.
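A minimal sketch of the principal-component step described above is given below: the first principal component is computed separately for a stack of MODIS bands and a stack of PALSAR polarizations, and the two component images are then combined. The random arrays stand in for co-registered, resampled imagery, and the simple weighted average is only a placeholder for the curvelet-domain fusion actually used in the paper.

```python
import numpy as np

def first_principal_component(bands):
    """First principal component of a stack of co-registered bands, returned as an image."""
    stack = np.stack([b.ravel() for b in bands], axis=1).astype(float)
    stack -= stack.mean(axis=0)
    _, _, vt = np.linalg.svd(stack, full_matrices=False)
    pc1 = stack @ vt[0]
    return pc1.reshape(bands[0].shape)

# Stand-ins for co-registered, resampled MODIS bands 1-2 and PALSAR HH/HV/VV images.
modis_pc = first_principal_component([np.random.rand(512, 512) for _ in range(2)])
palsar_pc = first_principal_component([np.random.rand(512, 512) for _ in range(3)])

# Simple weighted average as a placeholder for the curvelet-domain fusion used in the paper.
fused = 0.5 * modis_pc + 0.5 * palsar_pc
print(fused.shape)
```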
Visual affective classification by combining visual and text features.
Liu, Ningning; Wang, Kai; Jin, Xin; Gao, Boyang; Dellandréa, Emmanuel; Chen, Liming
2017-01-01
Affective analysis of images in social networks has drawn much attention, and the texts surrounding images have been shown to provide valuable semantic meanings about image content, which can hardly be represented by low-level visual features. In this paper, we propose a novel approach for the visual affective classification (VAC) task. This approach combines visual representations with novel text features through a fusion scheme based on Dempster-Shafer (D-S) Evidence Theory. Specifically, we not only investigate different types of visual features and fusion methods for VAC, but also propose textual features to effectively capture emotional semantics from the short text associated with images based on word similarity. Experiments are conducted on three publicly available databases: the International Affective Picture System (IAPS), the Artistic Photos and the MirFlickr Affect set. The results demonstrate that the proposed approach combining visual and textual features provides promising results for the VAC task.
Fusion of Deep Learning and Compressed Domain features for Content Based Image Retrieval.
Liu, Peizhong; Guo, Jing-Ming; Wu, Chi-Yi; Cai, Danlin
2017-08-29
This paper presents an effective image retrieval method by combining high-level features from a Convolutional Neural Network (CNN) model and low-level features from Dot-Diffused Block Truncation Coding (DDBTC). The low-level features, e.g., texture and color, are constructed by VQ-indexed histograms from the DDBTC bitmap, maximum, and minimum quantizers. In contrast, high-level features from the CNN can effectively capture human perception. With the fusion of the DDBTC and CNN features, the extended deep learning two-layer codebook features (DL-TLCF) are generated using the proposed two-layer codebook, dimension reduction, and similarity reweighting to improve the overall retrieval rate. Two metrics, average precision rate (APR) and average recall rate (ARR), are employed to examine various datasets. As documented in the experimental results, the proposed schemes can achieve superior performance compared to the state-of-the-art methods with either low- or high-level features in terms of the retrieval rate. Thus, it can be a strong candidate for various image retrieval related applications.
Chen, Xiancheng; Yang, Yang; Gan, Weidong; Xu, Linfeng; Ye, Qing; Guo, Hongqian
2015-01-01
The diagnosis of Xp11.2 translocation renal cell carcinoma (tRCC), which relies on morphology and immunohistochemistry (IHC), is often either missed in the diagnosis or misdiagnosed. To improve the accuracy of diagnosis of Xp11.2 tRCC and ASPL-TFE3 renal cell carcinoma (RCC), we investigated newly designed fluorescence in situ hybridization (FISH) probes (diagnostic accuracy study). Based on the genetic characteristics of Xp11.2 tRCC and the ASPL-TFE3 RCC, a new break-apart TFE3 FISH probe and an ASPL-TFE3 dual-fusion FISH probe were designed and applied to 65 patients with RCC who were <45 years old or showed suspicious microscopic features of Xp11.2 tRCC in our hospital. To test the accuracy of the probes, we further performed reverse transcriptase–polymerase chain reaction (PCR) on 8 cases for which frozen tissues were available. Among the 65 cases diagnosed with RCC, TFE3 IHC was positive in 24 cases. Twenty-two cases were confirmed as Xp11.2 tRCC by break-apart TFE3 FISH, and 6 of these cases were further diagnosed as ASPL-TFE3 RCC by ASPL-TFE3 dual-fusion FISH detection. Importantly, reverse transcriptase–PCR showed concordant results with the results of FISH assay in the 8 available frozen cases. The break-apart and ASPL-TFE3 dual-fusion FISH assay can accurately detect the translocation of the TFE3 gene and ASPL-TFE3 fusion gene and can thus serve as a valid complementary method for diagnosing Xp11.2 tRCC and ASPL-TFE3 RCC. PMID:25984679
Landcover classification in MRF context using Dempster-Shafer fusion for multisensor imagery.
Sarkar, Anjan; Banerjee, Anjan; Banerjee, Nilanjan; Brahma, Siddhartha; Kartikeyan, B; Chakraborty, Manab; Majumder, K L
2005-05-01
This work deals with multisensor data fusion to obtain landcover classification. The role of feature-level fusion using the Dempster-Shafer rule and that of data-level fusion in the MRF context is studied in this paper to obtain an optimally segmented image. Subsequently, segments are validated and classification accuracy for the test data is evaluated. Two examples of data fusion of optical images and a synthetic aperture radar image are presented, each set having been acquired on different dates. Classification accuracies of the technique proposed are compared with those of some recent techniques in literature for the same image data.
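To illustrate the Dempster-Shafer combination step named above, the sketch below combines two mass functions defined over singleton landcover classes plus total ignorance ('theta'); the class names and mass values are invented for illustration, and the paper's actual formulation operates on features within an MRF segmentation framework.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the same singleton hypotheses
    (landcover classes) with Dempster's rule; 'theta' is total ignorance."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            if a == b:
                combined[a] = combined.get(a, 0.0) + ma * mb      # identical focal elements
            elif a == "theta":
                combined[b] = combined.get(b, 0.0) + ma * mb      # ignorance refines to b
            elif b == "theta":
                combined[a] = combined.get(a, 0.0) + ma * mb      # ignorance refines to a
            else:
                conflict += ma * mb                               # disjoint singletons conflict
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

optical = {"water": 0.6, "forest": 0.2, "urban": 0.1, "theta": 0.1}
sar = {"water": 0.5, "forest": 0.1, "urban": 0.3, "theta": 0.1}
print(dempster_combine(optical, sar))
```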
Automotive System for Remote Surface Classification.
Bystrov, Aleksandr; Hoare, Edward; Tran, Thuy-Yung; Clarke, Nigel; Gashinova, Marina; Cherniakov, Mikhail
2017-04-01
In this paper we discuss a novel approach to road surface recognition, based on the analysis of backscattered microwave and ultrasonic signals. The novelty of our method lies in the fusion of sonar and polarimetric radar data, the extraction of features for separate swathes of illuminated surface (segmentation), and the use of a multi-stage artificial neural network for surface classification. The developed system consists of a 24 GHz radar and a 40 kHz ultrasonic sensor. The features are extracted from backscattered signals, and then the procedures of principal component analysis and supervised classification are applied to the feature data. Special attention is paid to the multi-stage artificial neural network, which allows an overall increase in classification accuracy. The proposed technique was tested for recognition of a large number of real surfaces in different weather conditions, with an average correct classification accuracy of 95%. The obtained results thereby demonstrate that the use of the proposed system architecture and statistical methods allows for reliable discrimination of various road surfaces in real conditions.
Multisensor Fusion for Change Detection
NASA Astrophysics Data System (ADS)
Schenk, T.; Csatho, B.
2005-12-01
Combining sensors that record different properties of a 3-D scene leads to complementary and redundant information. If fused properly, a more robust and complete scene description becomes available. Moreover, fusion facilitates automatic procedures for object reconstruction and modeling. For example, aerial imaging sensors, hyperspectral scanning systems, and airborne laser scanning systems generate complementary data. We describe how data from these sensors can be fused for such diverse applications as mapping surface erosion and landslides, reconstructing urban scenes, monitoring urban land use and urban sprawl, and deriving velocities and surface changes of glaciers and ice sheets. An absolute prerequisite for successful fusion is a rigorous co-registration of the sensors involved. We establish a common 3-D reference frame by using sensor-invariant features. Such features are caused by the same object space phenomena and are extracted in multiple steps from the individual sensors. After extracting, segmenting and grouping the features into more abstract entities, we discuss ways to automatically establish correspondences. This is followed by a brief description of rigorous mathematical models suitable for dealing with linear and areal features. In contrast to traditional, point-based registration methods, linear and areal features lend themselves to a more robust and more accurate registration. More importantly, the chances of automating the registration process increase significantly. The result of the co-registration of the sensors is a unique transformation between the individual sensors and the object space. This makes spatial reasoning over extracted information more versatile; reasoning can be performed in sensor space or in 3-D space, where domain knowledge about features and objects constrains reasoning processes, reduces the search space, and helps to make the problem well-posed. We demonstrate the feasibility of the proposed multisensor fusion approach by detecting surface elevation changes on the Byrd Glacier, Antarctica, with aerial imagery from the 1980s and ICESat laser altimetry data from 2003-05. Change detection from such disparate data sets is an intricate fusion problem, beginning with sensor alignment, and continuing with reasoning over spatial information as to where changes occurred and to what extent.
Han, Lei; Shi, Lu; Yang, Yiling; Song, Dalei
2014-01-01
Geostationary meteorological satellite infrared (IR) channel data contain important spectral information for meteorological research and applications, but their spatial resolution is relatively low. The objective of this study is to obtain higher-resolution IR images. One common method of increasing resolution fuses the IR data with high-resolution visible (VIS) channel data. However, most existing image fusion methods focus only on visual performance, and often fail to take into account the thermal physical properties of the IR images. As a result, spectral distortion occurs frequently. To tackle this problem, we propose a thermal physical properties-based correction method for fusing geostationary meteorological satellite IR and VIS images. In our two-step process, the high-resolution structural features of the VIS image are first extracted and incorporated into the IR image using regular multi-resolution fusion approach, such as the multiwavelet analysis. This step significantly increases the visual details in the IR image, but fake thermal information may be included. Next, the Stefan-Boltzmann Law is applied to correct the distortion, to retain or recover the thermal infrared nature of the fused image. The results of both the qualitative and quantitative evaluation demonstrate that the proposed physical correction method both improves the spatial resolution and preserves the infrared thermal properties. PMID:24919017
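The correction step described above can be sketched as a simple power-preserving rescaling: after the VIS structural detail has been injected, each coarse block of the fused brightness-temperature image is rescaled so that its mean emitted power (sigma * T^4, per the Stefan-Boltzmann law) matches that of the original IR block. The block size, the random stand-in imagery, and the exact rescaling rule are assumptions made for illustration rather than the paper's precise formulation.

```python
import numpy as np

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def radiance_correction(fused_temp_k, original_ir_temp_k, block=4):
    """Rescale a detail-enhanced IR image so the block-averaged emitted power
    matches the original IR observation (a simplified reading of the correction step)."""
    corrected = fused_temp_k.astype(float).copy()
    h, w = fused_temp_k.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            win = (slice(i, i + block), slice(j, j + block))
            target = SIGMA * np.mean(original_ir_temp_k[win] ** 4)   # power implied by the coarse IR block
            current = SIGMA * np.mean(corrected[win] ** 4)
            corrected[win] *= (target / current) ** 0.25             # preserve emitted power per block
    return corrected

ir = 280 + 20 * np.random.rand(64, 64)           # stand-in brightness temperatures (K)
fused = ir + 5 * np.random.randn(64, 64)         # after injecting VIS structural detail
print(radiance_correction(fused, ir).mean().round(2))
```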
In vivo engineering of oncogenic chromosomal rearrangements with the CRISPR/Cas9 system
Maddalo, Danilo; Manchado, Eusebio; Concepcion, Carla P.; Bonetti, Ciro; Vidigal, Joana A.; Han, Yoon-Chi; Ogrodowski, Paul; Crippa, Alessandra; Rekhtman, Natasha; de Stanchina, Elisa; Lowe, Scott W.; Ventura, Andrea
2014-01-01
Chromosomal rearrangements play a central role in the pathogenesis of human cancers and often result in the expression of therapeutically actionable gene fusions [1]. A recently discovered example is a fusion between the Echinoderm Microtubule-associated Protein-like 4 (EML4) and the Anaplastic Lymphoma Kinase (ALK) genes, generated by an inversion on the short arm of chromosome 2: inv(2)(p21p23). The EML4-ALK oncogene is detected in a subset of human non-small cell lung cancers (NSCLC) [2] and is clinically relevant because it confers sensitivity to ALK inhibitors [3]. Despite their importance, modeling such genetic events in mice has proven challenging and requires complex manipulation of the germline. Here we describe an efficient method to induce specific chromosomal rearrangements in vivo using viral-mediated delivery of the CRISPR/Cas9 system to somatic cells of adult animals. We apply it to generate a mouse model of Eml4-Alk-driven lung cancer. The resulting tumors invariably harbor the Eml4-Alk inversion, express the Eml4-Alk fusion gene, display histopathologic and molecular features typical of ALK+ human NSCLCs, and respond to treatment with ALK-inhibitors. The general strategy described here substantially expands our ability to model human cancers in mice and potentially in other organisms. PMID:25337876
Helium-3 blankets for tritium breeding in fusion reactors
NASA Technical Reports Server (NTRS)
Steiner, Don; Embrechts, Mark; Varsamis, Georgios; Vesey, Roger; Gierszewski, Paul
1988-01-01
It is concluded that He-3 blankets offer considerable promise for tritium breeding in fusion reactors: good breeding potential, low operational risk, and attractive safety features. The availability of He-3 resources is the key issue for this concept. There is sufficient He-3 from the decay of military stockpiles to meet the needs of the International Thermonuclear Experimental Reactor. Extraterrestrial sources of He-3 would be required for a fusion power economy.
Prostate cancer detection: Fusion of cytological and textural features.
Nguyen, Kien; Jain, Anil K; Sabata, Bikash
2011-01-01
A computer-assisted system for histological prostate cancer diagnosis can assist pathologists in two stages: (i) to locate cancer regions in a large digitized tissue biopsy, and (ii) to assign Gleason grades to the regions detected in stage 1. Most previous studies on this topic have primarily addressed the second stage by classifying the preselected tissue regions. In this paper, we address the first stage by presenting a cancer detection approach for the whole slide tissue image. We propose a novel method to extract a cytological feature, namely the presence of cancer nuclei (nuclei with prominent nucleoli) in the tissue, and apply this feature to detect the cancer regions. Additionally, conventional image texture features which have been widely used in the literature are also considered. The performance comparison among the proposed cytological textural feature combination method, the texture-based method and the cytological feature-based method demonstrates the robustness of the extracted cytological feature. At a false positive rate of 6%, the proposed method is able to achieve a sensitivity of 78% on a dataset including six training images (each of which has approximately 4,000×7,000 pixels) and 11 whole-slide test images (each of which has approximately 5,000×23,000 pixels). All images are at 20X magnification.
NASA Astrophysics Data System (ADS)
Gholoum, M.; Bruce, D.; Hazeam, S. Al
2012-07-01
A coral reef ecosystem, one of the most complex marine environmental systems on the planet, is biologically diverse and immense. It plays an important role in maintaining vast biological diversity for future generations and functions as an essential spawning, nursery, breeding and feeding ground for many kinds of marine species. In addition, coral reef ecosystems provide valuable benefits such as fisheries, ecological goods and services and recreational activities to many communities. However, this valuable resource is highly threatened by a number of environmental changes and anthropogenic impacts that can lead to reduced coral growth and production, mass coral mortality and loss of coral diversity. With the growth of these threats to coral reef ecosystems, there is a strong management need for mapping and monitoring of coral reef ecosystems. Remote sensing technology can be a valuable tool for mapping and monitoring these ecosystems. However, the diversity and complexity of coral reef ecosystems, the resolution capabilities of satellite sensors and the low reflectivity of shallow water increase the difficulty of identifying and classifying their features. This paper reviews the methods used in mapping and monitoring coral reef ecosystems. In addition, this paper proposes improved methods for mapping and monitoring coral reef ecosystems based on image fusion techniques. These image fusion techniques will be applied to satellite images exhibiting high spatial and low to medium spectral resolution together with images exhibiting low spatial and high spectral resolution. Furthermore, a new method will be developed to fuse hyperspectral imagery with multispectral imagery. The fused image will have a large number of spectral bands and will retain all pairs of corresponding spatial objects. This will potentially help to accurately classify the image data. Accuracy assessment using ground truth data will be performed for the selected methods to determine the quality of the information derived from image classification. The research will be applied to Kuwait's southern coral reefs: Kubbar and Um Al-Maradim.
Epileptic seizure onset detection based on EEG and ECG data fusion.
Qaraqe, Marwa; Ismail, Muhammad; Serpedin, Erchin; Zulfi, Haneef
2016-05-01
This paper presents a novel method for seizure onset detection using fused information extracted from multichannel electroencephalogram (EEG) and single-channel electrocardiogram (ECG). In existing seizure detectors, the analysis of the nonlinear and nonstationary ECG signal is limited to the time-domain or frequency-domain. In this work, heart rate variability (HRV) extracted from ECG is analyzed using a Matching-Pursuit (MP) and Wigner-Ville Distribution (WVD) algorithm in order to effectively extract meaningful HRV features representative of seizure and nonseizure states. The EEG analysis relies on a common spatial pattern (CSP) based feature enhancement stage that enables better discrimination between seizure and nonseizure features. The EEG-based detector uses logical operators to pool SVM seizure onset detections made independently across different EEG spectral bands. Two fusion systems are adopted. In the first system, EEG-based and ECG-based decisions are directly fused to obtain a final decision. The second fusion system adopts an override option that allows for the EEG-based decision to override the fusion-based decision in the event that the detector observes a string of EEG-based seizure decisions. The proposed detectors exhibit an improved performance, with respect to sensitivity and detection latency, compared with the state-of-the-art detectors. Experimental results demonstrate that the second detector achieves a sensitivity of 100%, detection latency of 2.6s, and a specificity of 99.91% for the MAJ fusion case. Copyright © 2016 Elsevier Inc. All rights reserved.
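The two fusion systems described above can be caricatured with a few lines of logic: a per-window combination of the EEG-based and ECG-based flags, plus an override that fires when a run of consecutive EEG detections is observed. The AND-style base rule, the run length of three, and the toy flag sequences are all assumptions for illustration; the paper's MAJ rule and SVM-based per-band detectors are not reproduced here.

```python
def fuse_decisions(eeg_flags, ecg_flags, override_run=3):
    """Window-by-window fusion of EEG- and ECG-based seizure flags.
    Base rule: both modalities must agree (a stand-in for the MAJ rule);
    override: a run of consecutive EEG detections forces an alarm."""
    fused, eeg_run = [], 0
    for eeg, ecg in zip(eeg_flags, ecg_flags):
        eeg_run = eeg_run + 1 if eeg else 0
        alarm = bool(eeg and ecg) or eeg_run >= override_run
        fused.append(alarm)
    return fused

eeg = [0, 1, 1, 1, 0, 1, 0]   # hypothetical per-window EEG detections
ecg = [0, 0, 1, 0, 0, 1, 0]   # hypothetical per-window ECG (HRV) detections
print(fuse_decisions(eeg, ecg))
```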
Wang, Xiao-Tong; Xia, Qiu-Yuan; Ni, Hao; Ye, Sheng-Bing; Li, Rui; Wang, Xuan; Shi, Shan-Shan; Zhou, Xiao-Jun; Rao, Qiu
2017-05-01
Xp11 translocation renal cell carcinoma (RCC) with SFPQ/PSF-TFE3 gene fusion is a rare epithelial tumor. Of note, the appearance of the gene fusion does not necessarily mean that it is renal cell carcinoma. The corresponding mesenchymal neoplasms, including Xp11 neoplasm with melanocytic differentiation, TFE3 rearrangement-associated perivascular epithelioid cell tumor (PEComa) and melanotic Xp11 translocation renal cancer, can also harbor the identical gene fusion. However, the differences between Xp11 translocation RCC and the corresponding mesenchymal neoplasm have only recently been described. Herein, we examined 5 additional cases of SFPQ-TFE3 RCCs using clinicopathologic, immunohistochemical, and molecular analyses. One tumor had the typical morphologic features of SFPQ-TFE3 RCC, whereas other 3 cases demonstrated the unusual morphologic features associated with pseudorosettes formation or clusters of smaller cells, mimicking TFEB RCC. The remaining one showed branching tubules and papillary structure composed of clear and eosinophilic tumor cells. Immunohistochemically, all 5 cases demonstrated moderate (2+) or strong (3+) positive staining for TFE3, PAX-8 and CD10, whereas no cases demonstrated TFEB, Cathepsin K, CA-IX, CK7, Melan-A, or HMB-45 expression. Genetically, the fusion transcripts were identified in 3 cases by reverse-transcription polymerase chain reaction (RT-PCR). On the basis of fluorescence in situ hybridization (FISH) analysis, all the cases were detected with SFPQ-TFE3 gene fusion. Clinical follow-up data were available for all the patients, and no one developed tumor recurrence, progression, or metastasis. We also review the differences between SFPQ-TFE3 RCC and the corresponding mesenchymal neoplasm despite the identical gene fusion. The presence of pseudorosettes also expands the known histological features of SFPQ-TFE3 RCC. Copyright © 2017 Elsevier Inc. All rights reserved.
Fc-fusion Proteins in Therapy: An Updated View.
Jafari, Reza; Zolbanin, Naime M; Rafatpanah, Houshang; Majidi, Jafar; Kazemi, Tohid
2017-01-01
Fc-fusion proteins are composed of the Fc region of an IgG antibody (hinge-CH2-CH3) and a desired linked protein. The Fc region of Fc-fusion proteins can bind to the neonatal Fc receptor (FcRn), thereby rescuing the fusion protein from degradation. The first therapeutic Fc-fusion protein was introduced for the treatment of AIDS. Molecular design is the first stage in the production of Fc-fusion proteins. The amino acid residues in the Fc region and the linked protein are very important for the bioactivity and affinity of the fusion proteins. Although therapeutic monoclonal antibodies are the top-selling biologics, the application of therapeutic Fc-fusion proteins in the clinic is progressing, and among these medications etanercept is the most effective in therapy. At present, eleven Fc-fusion proteins have been approved by the FDA. There are novel Fc-fusion proteins in pre-clinical and clinical development. In this article, we review the molecular and biological characteristics of Fc-fusion proteins and then further discuss the features of novel therapeutic Fc-fusion proteins. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Yoo, Jejoong; Jackson, Meyer B.; Cui, Qiang
2013-01-01
To establish the validity of continuum mechanics models quantitatively for the analysis of membrane remodeling processes, we compare the shape and energies of the membrane fusion pore predicted by coarse-grained (MARTINI) and continuum mechanics models. The results at these distinct levels of resolution give surprisingly consistent descriptions for the shape of the fusion pore, and the deviation between the continuum and coarse-grained models becomes notable only when the radius of curvature approaches the thickness of a monolayer. Although slow relaxation beyond microseconds is observed in different perturbative simulations, the key structural features (e.g., dimension and shape of the fusion pore near the pore center) are consistent among independent simulations. These observations provide solid support for the use of coarse-grained and continuum models in the analysis of membrane remodeling. The combined coarse-grained and continuum analysis confirms the recent prediction of continuum models that the fusion pore is a metastable structure and that its optimal shape is neither toroidal nor catenoidal. Moreover, our results help reveal a new, to our knowledge, bowing feature in which the bilayers close to the pore axis separate more from one another than those at greater distances from the pore axis; bowing helps reduce the curvature and therefore stabilizes the fusion pore structure. The spread of the bilayer deformations over distances of hundreds of nanometers and the substantial reduction in energy of fusion pore formation provided by this spread indicate that membrane fusion can be enhanced by allowing a larger area of membrane to participate and be deformed. PMID:23442963
A trainable decisions-in decision-out (DEI-DEO) fusion system
NASA Astrophysics Data System (ADS)
Dasarathy, Belur V.
1998-03-01
Most of the decision fusion systems proposed hitherto in the literature for multiple data source (sensor) environments operate on the basis of pre-defined fusion logic, be it crisp (deterministic), probabilistic, or fuzzy in nature, with no specific learning phase. The fusion systems that are trainable, i.e., ones that have a learning phase, mostly operate in the features-in-decision-out mode, which essentially reduces the fusion process functionally to a pattern classification task in the joint feature space. In this study, a trainable decisions-in-decision-out fusion system is described. It estimates a fuzzy membership distribution spread across the different decision choices based on the performance of the different decision processors (sensors) on each training sample (object), which is associated with a specific ground truth (true decision). Based on a multi-decision space histogram analysis of the performance of the different processors over the entire training data set, a look-up table associating each cell of the histogram with a specific true decision is generated, which forms the basis for the operational phase. In the operational phase, each set of decision inputs generates a pointer into the previously learned look-up table, from which a fused decision is derived. This methodology, although primarily designed for fusing crisp decisions from multiple decision sources, can be adapted for the fusion of fuzzy decisions as well if such are the inputs from these sources. Examples that illustrate the benefits and limitations of the crisp and fuzzy versions of the trainable fusion systems are also included.
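As a rough illustration of the decisions-in-decision-out scheme described above, the following Python sketch builds a multi-decision histogram look-up table from crisp training decisions and then uses it in an operational phase. The function names, the toy data, and the majority-label rule for each cell are hypothetical stand-ins for the histogram analysis outlined in the abstract, not the authors' implementation.

    from collections import Counter, defaultdict

    def train_dido_lut(decisions, truths):
        # Build a look-up table mapping each joint-decision cell of the
        # multi-decision histogram to its most frequent ground-truth label.
        cells = defaultdict(Counter)
        for joint, truth in zip(decisions, truths):
            cells[tuple(joint)][truth] += 1
        return {cell: counts.most_common(1)[0][0] for cell, counts in cells.items()}

    def fuse(lut, joint_decision, default=None):
        # Operational phase: pointer into the learned table; unseen cells
        # fall back to a caller-supplied default decision.
        return lut.get(tuple(joint_decision), default)

    # Toy run: two processors, three decision choices (0, 1, 2).
    train_dec = [(0, 0), (0, 1), (1, 1), (2, 1), (0, 1)]
    train_truth = [0, 1, 1, 2, 1]
    lut = train_dido_lut(train_dec, train_truth)
    print(fuse(lut, (0, 1)))   # prints 1

Cells never visited during training fall back to a default decision, mirroring the need for a fallback rule when the learned table is incomplete.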
Use of the Nanofitin Alternative Scaffold as a GFP-Ready Fusion Tag
Huet, Simon; Gorre, Harmony; Perrocheau, Anaëlle; Picot, Justine; Cinier, Mathieu
2015-01-01
With the continuous diversification of recombinant DNA technologies, the possibilities for tailor-made protein engineering have continued to expand. Among these strategies, the use of the green fluorescent protein (GFP) as a fusion domain has been widely adopted for cellular imaging and protein localization. Following the lead of the direct head-to-tail fusion of GFP, we proposed to provide additional features to recombinant proteins by genetic fusion of artificially derived binders. Thus, we reported a GFP-ready fusion tag, consisting of a small, robust and fusion-friendly anti-GFP Nanofitin binding domain, as a proof of concept. While limiting steric effects on the carrier, the GFP-ready tag allows the capture of GFP or its blue (BFP), cyan (CFP) and yellow (YFP) alternatives. Here, we describe the generation of the GFP-ready tag from the selection of a Nanofitin variant that binds GFP and its spectral variants with nanomolar affinity while displaying remarkable folding stability, as demonstrated by its full resistance to a thermal sterilization process and by the full chemical synthesis of Nanofitins. To illustrate the potential of the Nanofitin-based tag as a fusion partner, we compared the expression level in Escherichia coli and the activity profile of recombinant human tumor necrosis factor alpha (TNFα) constructs fused to a SUMO or GFP-ready tag. Very similar expression levels were found with the two fusion technologies. Both domains of the GFP-ready tagged TNFα were shown to be fully active in ELISA and interferometry binding assays, allowing simultaneous capture by an anti-TNFα antibody and binding to GFP and its spectral mutants. The GFP-ready tag was also shown to be inert in an L929 cell-based assay, demonstrating the potent TNFα-mediated apoptosis induction by the GFP-ready tagged TNFα. Finally, we propose the GFP-ready tag as a versatile capture and labeling system, in addition to the expected applications of anti-GFP Nanofitins (as illustrated with previously described state-of-the-art anti-GFP binders applied to living cells and in vitro applications). Through a single fusion domain, the GFP-ready tagged proteins benefit from subsequent customization within a wide range of fluorescence spectra upon indirect binding of a chosen GFP variant. PMID:26539718
Multi-focus image fusion based on window empirical mode decomposition
NASA Astrophysics Data System (ADS)
Qin, Xinqiang; Zheng, Jiaoyue; Hu, Gang; Wang, Jiao
2017-09-01
In order to improve multi-focus image fusion quality, a novel fusion algorithm based on window empirical mode decomposition (WEMD) is proposed. WEMD is an improved form of bidimensional empirical mode decomposition (BEMD) whose decomposition process uses an adding-window principle, effectively resolving the signal concealment problem. We used WEMD for multi-focus image fusion and formulated different fusion rules for the bidimensional intrinsic mode function (BIMF) components and the residue component. For fusion of the BIMF components, the concept of the sum-modified-Laplacian was used and a scheme based on visual feature contrast was adopted; when choosing the residue coefficients, a pixel value based on local visibility was selected. We carried out four groups of multi-focus image fusion experiments and compared objective evaluation criteria with three other fusion methods. The experimental results show that the proposed fusion approach is effective and performs better at fusing multi-focus images than some traditional methods.
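The abstract does not give the exact fusion rule, but a common sum-modified-Laplacian (SML) selection scheme between two decomposed components can be sketched as follows; the window size, the mean-filter-based windowing, and the function names are assumptions for illustration rather than the authors' implementation.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def sum_modified_laplacian(img, win=3):
        # Modified Laplacian per pixel, then summed over a local window
        # (a mean filter differs from the window sum only by a constant factor).
        ml = (np.abs(2.0 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0)) +
              np.abs(2.0 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1)))
        return uniform_filter(ml, size=win)

    def fuse_components(comp_a, comp_b, win=3):
        # Keep, per pixel, the decomposition coefficient whose local SML
        # (a focus measure) is larger.
        sml_a = sum_modified_laplacian(comp_a, win)
        sml_b = sum_modified_laplacian(comp_b, win)
        return np.where(sml_a >= sml_b, comp_a, comp_b)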
NASA Astrophysics Data System (ADS)
Nespoli, F.; Labit, B.; Furno, I.; Theiler, C.; Sheikh, U. A.; Tsui, C. K.; Boedo, J. A.; TCV Team
2018-05-01
In inboard-limited plasmas, foreseen to be used in future fusion reactor start-up and ramp down phases, the Scrape-Off Layer (SOL) exhibits two regions: the "near" and "far" SOL. The steep radial gradient of the parallel heat flux associated with the near SOL can result in excessive thermal loads onto the solid surfaces, damaging them and/or limiting the operational space of a fusion reactor. In this article, leveraging the results presented in the study by F. Nespoli et al. [Nucl. Fusion 57, 126029 (2017)], we propose a technique for the mitigation and suppression of the near SOL heat flux feature by impurity seeding. The first successful experimental results from the TCV tokamak are presented and discussed.
Clerico, Andrea; Tiwari, Abhishek; Gupta, Rishabh; Jayaraman, Srinivasan; Falk, Tiago H.
2018-01-01
The quantity of music content is rapidly increasing and automated affective tagging of music video clips can enable the development of intelligent retrieval, music recommendation, automatic playlist generators, and music browsing interfaces tuned to the users' current desires, preferences, or affective states. To achieve this goal, the field of affective computing has emerged, in particular the development of so-called affective brain-computer interfaces, which measure the user's affective state directly from brain waves using non-invasive tools, such as electroencephalography (EEG). Typically, conventional features extracted from the EEG signal have been used, such as frequency subband powers and/or inter-hemispheric power asymmetry indices. More recently, the coupling between EEG and peripheral physiological signals, such as the galvanic skin response (GSR), has also been proposed. Here, we show the importance of EEG amplitude modulations and propose several new features that measure the amplitude-amplitude cross-frequency coupling per EEG electrode, as well as linear and non-linear connections between multiple electrode pairs. When tested on a publicly available dataset of music video clips tagged with subjective affective ratings, support vector classifiers trained on the proposed features were shown to outperform those trained on conventional benchmark EEG features by as much as 6, 20, 8, and 7% for arousal, valence, dominance and liking, respectively. Moreover, fusion of the proposed features with EEG-GSR coupling features was shown to be particularly useful for arousal (feature-level fusion) and liking (decision-level fusion) prediction. Together, these findings show the importance of the proposed features to characterize human affective states during music clip watching. PMID:29367844
Cognitive Load Measurement in a Virtual Reality-based Driving System for Autism Intervention.
Zhang, Lian; Wade, Joshua; Bian, Dayi; Fan, Jing; Swanson, Amy; Weitlauf, Amy; Warren, Zachary; Sarkar, Nilanjan
2017-01-01
Autism Spectrum Disorder (ASD) is a highly prevalent neurodevelopmental disorder with enormous individual and social cost. In this paper, a novel virtual reality (VR)-based driving system was introduced to teach driving skills to adolescents with ASD. This driving system is capable of gathering eye gaze, electroencephalography, and peripheral physiology data in addition to driving performance data. The objective of this paper is to fuse multimodal information to measure cognitive load during driving such that driving tasks can be individualized for optimal skill learning. Individualization of ASD intervention is an important criterion due to the spectrum nature of the disorder. Twenty adolescents with ASD participated in our study and the data collected were used for systematic feature extraction and classification of cognitive loads based on five well-known machine learning methods. Subsequently, three information fusion schemes, namely feature-level fusion, decision-level fusion and hybrid-level fusion, were explored. Results indicate that multimodal information fusion can be used to measure cognitive load with high accuracy. Such a mechanism is essential since it will allow individualization of driving skill training based on cognitive load, which will facilitate acceptance of this driving system for clinical use and eventual commercialization.
TFG-MET fusion in an infantile spindle cell sarcoma with neural features.
Flucke, Uta; van Noesel, Max M; Wijnen, Marc; Zhang, Lei; Chen, Chun-Liang; Sung, Yun-Shao; Antonescu, Cristina R
2017-09-01
An increasing number of congenital and infantile sarcomas displaying a primitive, monomorphic spindle cell phenotype have been characterized to harbor recurrent gene fusions, including infantile fibrosarcoma and congenital spindle cell rhabdomyosarcoma. Here, we report an unusual spindle cell sarcoma presenting as a large and infiltrative pelvic soft tissue mass in a 4-month-old girl, which revealed a novel TFG-MET gene fusion by whole transcriptome RNA sequencing. The tumor resembled the morphology of an infantile fibrosarcoma, with both fascicular and patternless growth; however, it expressed strong S100 protein immunoreactivity while lacking SOX10 staining and retaining H3K27me3 expression. Although this immunoprofile suggested partial neural/neuroectodermal differentiation, the overall features were unusual and did not fit any known tumor type (cellular schwannoma, MPNST), raising the possibility of a novel pathologic entity. The TFG-MET gene fusion expands the genetic spectrum implicated in the pathogenesis of congenital spindle cell sarcomas, with yet another example of kinase oncogenic activation through chromosomal translocation. The discovery of this new fusion is significant since the resulting MET activation can potentially be inhibited by targeted therapy, as MET inhibitors are presently available in clinical trials. © 2017 Wiley Periodicals, Inc.
Heterogeneous Vision Data Fusion for Independently Moving Cameras
2010-03-01
The goal of the project is to investigate and evaluate existing image fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms to moving target detection, tracking, identification, and classification over a large terrain. Subject terms: image fusion, target detection, moving cameras, IR camera, EO camera.
A hybrid three-class brain-computer interface system utilizing SSSEPs and transient ERPs
NASA Astrophysics Data System (ADS)
Breitwieser, Christian; Pokorny, Christoph; Müller-Putz, Gernot R.
2016-12-01
Objective. This paper investigates the fusion of steady-state somatosensory evoked potentials (SSSEPs) and transient event-related potentials (tERPs), evoked through tactile stimulation of the left and right hand fingertips, in a three-class EEG-based hybrid brain-computer interface. It was hypothesized that fusing the input signals leads to higher classification rates than classifying tERP and SSSEP individually. Approach. Fourteen subjects participated in the studies, which consisted of a screening paradigm to determine person-dependent resonance-like frequencies and a subsequent online paradigm. The whole setup of the BCI system was based on open interfaces, following suggestions for a common implementation platform. During the online experiment, subjects were instructed to focus their attention on the stimulated fingertips as indicated by a visual cue. The recorded data were classified during runtime using a multi-class shrinkage LDA classifier, and the outputs were fused together by applying a posterior-probability-based fusion. The data were further analyzed offline, involving a combined classification of SSSEP and tERP features as a second fusion principle. The final results were tested for statistical significance using a repeated measures ANOVA. Main results. A significant increase in classification accuracy was achieved when fusing the results with a combined classification, compared to performing individual classifications. Furthermore, the SSSEP classifier was significantly better at detecting a non-control state, whereas the tERP classifier was significantly better at detecting control states. Subjects who had a higher relative band power increase during the screening session also achieved significantly higher classification results than subjects with a lower relative band power increase. Significance. It could be shown that utilizing SSSEP and tERP for hBCIs increases the classification accuracy, and also that tERP and SSSEP do not classify control and non-control states with the same level of accuracy.
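A minimal sketch of posterior-probability-based fusion of two classifier outputs, in the spirit described above; it assumes each classifier already returns normalized per-class posteriors, and the weighted product rule, the class labels and the numeric values are illustrative assumptions rather than details taken from the study.

    import numpy as np

    def fuse_posteriors(p_a, p_b, weights=(0.5, 0.5)):
        # Weighted product rule on per-class posteriors, renormalized; returns
        # the fused distribution and the index of the winning class.
        fused = (p_a ** weights[0]) * (p_b ** weights[1])
        fused = fused / fused.sum()
        return fused, int(np.argmax(fused))

    # Hypothetical three-class outputs (e.g. rest / left index / right index).
    p_sssep = np.array([0.2, 0.5, 0.3])
    p_terp = np.array([0.1, 0.3, 0.6])
    print(fuse_posteriors(p_sssep, p_terp))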
NASA Astrophysics Data System (ADS)
Xu, Z.; Guan, K.; Peng, B.; Casler, N. P.; Wang, S. W.
2017-12-01
Landscapes have complex three-dimensional features that are difficult to extract using conventional methods. Small-footprint LiDAR provides an ideal way to capture these features. Existing approaches, however, have been limited to raster or metric-based (two-dimensional) feature extraction from the upper or bottom layer, and thus are not suitable for resolving morphological and intensity features that could be important to fine-scale land cover mapping. Therefore, this research combines airborne LiDAR and multi-temporal Landsat imagery to classify land cover types of Williamson County, Illinois, which has diverse and mixed landscape features. Specifically, we applied a 3D convolutional neural network (CNN) method to extract features from LiDAR point clouds by (1) creating occupancy and intensity grids at 1-meter resolution, and then (2) normalizing and feeding the data into a 3D CNN feature extractor for many epochs of learning. The learned features (e.g., morphological features, intensity features, etc.) were combined with multi-temporal spectral data to enhance the performance of land cover classification based on a Support Vector Machine classifier. We used photo interpretation for training and testing data generation. The classification results show that our approach outperforms traditional methods using LiDAR-derived feature maps, and promises to serve as an effective methodology for creating high-quality land cover maps through fusion of complementary types of remote sensing data.
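The occupancy and intensity gridding step described above can be sketched in a few lines of numpy; the 1-meter resolution follows the abstract, while the function name, the binary occupancy convention and the mean-intensity aggregation are illustrative assumptions.

    import numpy as np

    def voxelize(points, intensity, res=1.0):
        # Bin LiDAR points (N x 3, metres) into a binary occupancy grid and a
        # mean-intensity grid at the given voxel resolution.
        ijk = np.floor((points - points.min(axis=0)) / res).astype(int)
        shape = tuple(ijk.max(axis=0) + 1)
        occ = np.zeros(shape, dtype=np.float32)
        inten_sum = np.zeros(shape, dtype=np.float64)
        counts = np.zeros(shape, dtype=np.int64)
        np.add.at(counts, tuple(ijk.T), 1)
        np.add.at(inten_sum, tuple(ijk.T), intensity)
        occ[counts > 0] = 1.0
        mean_inten = np.divide(inten_sum, counts,
                               out=np.zeros_like(inten_sum), where=counts > 0)
        return occ, mean_inten.astype(np.float32)

The two grids can then be stacked as channels and fed to the 3D CNN feature extractor after normalization.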
An Investigation for Ground State Features of Some Structural Fusion Materials
NASA Astrophysics Data System (ADS)
Aytekin, H.; Tel, E.; Baldik, R.; Aydin, A.
2011-02-01
Environmental concerns associated with fossil fuels are creating increased interest in alternative non-fossil energy sources. Nuclear fusion can be one of the most attractive sources of energy from the viewpoint of safety and minimal environmental impact. Among all energy systems, the performance requirements for structural materials in a fusion reactor first wall, blanket or divertor are arguably more demanding and difficult to meet than for any other energy system. The development of fusion materials and the understanding of their nuclear properties are important for the safety of fusion power systems. In this paper, ground state properties of some structural fusion materials, namely 27Al, 51V, 52Cr, 55Mn, and 56Fe, are investigated using the Skyrme-Hartree-Fock method. The obtained results are discussed and compared with the available experimental data.
Tensor functors between Morita duals of fusion categories
NASA Astrophysics Data System (ADS)
Galindo, César; Plavnik, Julia Yael
2017-03-01
Given a fusion category C and an indecomposable C-module category M, the fusion category C*_M of C-module endofunctors of M is called the (Morita) dual fusion category of C with respect to M. We describe tensor functors between two arbitrary duals C*_M and D*_N in terms of data associated to C and D. We apply the results to G-equivariantizations of fusion categories and group-theoretical fusion categories. We describe the orbits of the action of the Brauer-Picard group on the set of module categories and we propose a categorification of the Rosenberg-Zelinsky sequence for fusion categories.
Pires, Ivan Miguel; Garcia, Nuno M.; Pombo, Nuno; Flórez-Revuelta, Francisco
2016-01-01
This paper focuses on the research on the state of the art for sensor fusion techniques, applied to the sensors embedded in mobile devices, as a means to help identify the mobile device user’s daily activities. Sensor data fusion techniques are used to consolidate the data collected from several sensors, increasing the reliability of the algorithms for the identification of the different activities. However, mobile devices have several constraints, e.g., low memory, low battery life and low processing power, and some data fusion techniques are not suited to this scenario. The main purpose of this paper is to present an overview of the state of the art to identify examples of sensor data fusion techniques that can be applied to the sensors available in mobile devices aiming to identify activities of daily living (ADLs). PMID:26848664
Fusion of Enveloped Viruses in Endosomes
White, Judith M.; Whittaker, Gary R.
2016-01-01
Ari Helenius launched the field of enveloped virus fusion in endosomes with a seminal paper in the Journal of Cell Biology in 1980. In the intervening years a great deal has been learned about the structures and mechanisms of viral membrane fusion proteins as well as about the endosomes in which different enveloped viruses fuse and the endosomal cues that trigger fusion. We now recognize three classes of viral membrane fusion proteins based on structural criteria and four mechanisms of fusion triggering. After reviewing general features of viral membrane fusion proteins and viral fusion in endosomes, we delve into three characterized mechanisms for viral fusion triggering in endosomes: by low pH, by receptor binding plus low pH, and by receptor binding plus the action of a protease. We end with a discussion of viruses that may employ novel endosomal fusion triggering mechanisms. A key take home message is that enveloped viruses that enter cells by fusing in endosomes traverse the endocytic pathway until they reach an endosome that has all of the environmental conditions (pH, proteases, ions, intracellular receptors, and lipid composition) to (if needed) prime and (in all cases) trigger the fusion protein and to support membrane fusion. PMID:26935856
Li, Yun; Zhang, Jin-Yu; Wang, Yuan-Zhong
2018-01-01
Three data fusion strategies (low-level, mid-level, and high-level) combined with a multivariate classification algorithm (random forest, RF) were applied to authenticate the geographical origins of Panax notoginseng collected from five regions of Yunnan province in China. In low-level fusion, the original data from two spectra (Fourier transform mid-IR spectrum and near-IR spectrum) were directly concatenated into a new matrix, which was then applied for the classification. Mid-level fusion was the strategy that inputted variables extracted from the spectral data into an RF classification model. The extracted variables were processed by iterative variable selection of the RF model and principal component analysis. High-level fusion combined the decision making of each spectroscopic technique and resulted in an ensemble decision. The results showed that the mid-level and high-level data fusion took advantage of the information synergy from the two spectroscopic techniques and had better classification performance than independent decision making. High-level data fusion is the most effective strategy, since its classification results are better than those of the other fusion strategies: accuracy rates ranged between 93% and 96% for the low-level data fusion, between 95% and 98% for the mid-level data fusion, and between 98% and 100% for the high-level data fusion. In conclusion, the high-level data fusion strategy for Fourier transform mid-IR and near-IR spectra can be used as a reliable tool for correct geographical identification of P. notoginseng. Graphical abstract: The analytical steps of Fourier transform mid-IR and near-IR spectral data fusion for the geographical traceability of Panax notoginseng.
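The low-level strategy described above amounts to concatenating the two spectral matrices sample-wise before classification; a minimal scikit-learn sketch follows, in which the array names, tree count and cross-validation setup are assumptions for illustration only.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def low_level_fusion_rf(X_mir, X_nir, y, n_trees=500):
        # Low-level fusion: concatenate the mid-IR and near-IR spectra of each
        # sample into one matrix, then classify origins with a random forest.
        X_fused = np.hstack([X_mir, X_nir])
        rf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
        return cross_val_score(rf, X_fused, y, cv=5).mean()

Mid-level fusion would instead concatenate selected variables or principal component scores from each spectrum, and high-level fusion would combine the class decisions of two separately trained models.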
Fritsch, Michael K; Bridge, Julia A; Schuster, Amy E; Perlman, Elizabeth J; Argani, Pedram
2003-01-01
Pediatric small round cell tumors still pose tremendous diagnostic problems. In difficult cases, the ability to detect tumor-specific gene fusion transcripts for several of these neoplasms, including Ewing sarcoma/peripheral primitive neuroectodermal tumor (ES/PNET), synovial sarcoma (SS), alveolar rhabdomyosarcoma (ARMS), and desmoplastic small round cell tumor (DSRCT) using reverse transcriptase-polymerase chain reaction (RT-PCR), can be extremely helpful. Few studies to date, however, have systematically examined several different tumor types for the presence of multiple different fusion transcripts in order to determine the specificity and sensitivity of the RT-PCR method, and no study has addressed this issue for formalin-fixed material. The objectives of this study were to address the specificity, sensitivity, and practicality of such an assay applied strictly to formalin-fixed tissue blocks. Our results demonstrate that, for these tumors, the overall sensitivity for detecting each fusion transcript is similar to that reported in the literature for RT-PCR on fresh or formalin-fixed tissues. The specificity of the assay is very high, being essentially 100% for each primer pair when interpreting the results from visual inspection of agarose gels. However, when these same agarose gels were examined using Southern blotting, a small number of tumors also yielded reproducibly detectable weak signals for unexpected fusion products, in addition to a strong signal for the expected fusion product. Fluorescence in situ hybridization (FISH) studies in one such case indicated that a rearrangement that would account for the unexpected fusion was not present, while another case was equivocal. The overall specificity for each primer pair used in this assay ranged from 94 to 100%. Therefore, RT-PCR using formalin-fixed paraffin-embedded tissue sections can be used to detect chimeric transcripts as a reliable, highly sensitive, and highly specific diagnostic assay. However, we strongly suggest that the final interpretation of the results from this assay be viewed in light of the other features of the case, including clinical history, histology, and immunohistochemistry, by the diagnostic pathologist. Additional studies such as FISH may be useful in clarifying the nature of equivocal or unexpected results.
chimeraviz: a tool for visualizing chimeric RNA.
Lågstad, Stian; Zhao, Sen; Hoff, Andreas M; Johannessen, Bjarne; Lingjærde, Ole Christian; Skotheim, Rolf I
2017-09-15
Advances in high-throughput RNA sequencing have enabled more efficient detection of fusion transcripts, but the technology and associated software used for fusion detection from sequencing data often yield a high false discovery rate. Good prioritization of the results is important, and this can be helped by a visualization framework that automatically integrates RNA data with known genomic features. Here we present chimeraviz, a Bioconductor package that automates the creation of chimeric RNA visualizations. The package supports input from nine different fusion-finder tools: deFuse, EricScript, InFusion, JAFFA, FusionCatcher, FusionMap, PRADA, SOAPfuse and STAR-FUSION. chimeraviz is an R package available via Bioconductor (https://bioconductor.org/packages/release/bioc/html/chimeraviz.html) under Artistic-2.0. Source code and support are available at GitHub (https://github.com/stianlagstad/chimeraviz). rolf.i.skotheim@rr-research.no. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
Multispectral image fusion for target detection
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-09-01
Various methods to perform multi-spectral image fusion have been suggested, mostly at the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in an experiment using MSSF against two established methods, averaging and Principal Component Analysis (PCA), and against its two source bands, visible and infrared. The task that we studied was target detection in a cluttered environment. MSSF proved superior to the other fusion methods. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.
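The two established pixel-level baselines mentioned above, averaging and PCA fusion, can be sketched as follows; the weight normalization and variable names are illustrative assumptions, and the proposed feature-level MSSF method itself is not reproduced here.

    import numpy as np

    def average_fusion(vis, ir):
        # Pixel-wise mean of co-registered visible and infrared bands.
        return (vis.astype(np.float64) + ir.astype(np.float64)) / 2.0

    def pca_fusion(vis, ir):
        # Weight each band by the first principal component of the two-band
        # covariance matrix (weights rescaled to sum to one in magnitude).
        stack = np.stack([vis.ravel(), ir.ravel()]).astype(np.float64)
        stack -= stack.mean(axis=1, keepdims=True)
        w = np.linalg.eigh(np.cov(stack))[1][:, -1]   # largest-eigenvalue vector
        w = np.abs(w) / np.abs(w).sum()
        return w[0] * vis.astype(np.float64) + w[1] * ir.astype(np.float64)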
Automatic building extraction from LiDAR data fusion of point and grid-based features
NASA Astrophysics Data System (ADS)
Du, Shouji; Zhang, Yunsheng; Zou, Zhengrong; Xu, Shenghua; He, Xue; Chen, Siyang
2017-08-01
This paper proposes a method for extracting buildings from LiDAR point cloud data by combining point-based and grid-based features. To accurately discriminate buildings from vegetation, a point feature based on the variance of normal vectors is proposed. For robust building extraction, a graph cuts algorithm is employed to combine the used features and to consider neighborhood context information. As grid feature computation and the graph cuts algorithm are performed on a grid structure, a feature-retained DSM interpolation method is also proposed in this paper. The proposed method is validated on the benchmark ISPRS Test Project on Urban Classification and 3D Building Reconstruction and compared to state-of-the-art methods. The evaluation shows that the proposed method can obtain promising results both at the area level and at the object level. The method is further applied to the entire ISPRS dataset and to a real dataset of Wuhan City. The results show a completeness of 94.9% and a correctness of 92.2% at the per-area level for the former dataset, and a completeness of 94.4% and a correctness of 95.8% for the latter one. The proposed method has good potential for large-size LiDAR data.
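A rough sketch of a variance-of-normals point feature in the spirit of the one proposed above: per-point normals are estimated by PCA over k nearest neighbours, and the spread of neighbouring normals serves as a building-versus-vegetation cue. The neighbourhood size, the upward orientation step and the exact variance definition are assumptions, since the paper's precise formulation is not given in the abstract.

    import numpy as np
    from scipy.spatial import cKDTree

    def normal_variance(points, k=10):
        # Estimate a unit normal per point from its k nearest neighbours
        # (eigenvector of the smallest eigenvalue of the local covariance),
        # then return the spread of neighbouring normals: low on planar
        # roofs, high in vegetation.
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)
        normals = np.empty_like(points, dtype=np.float64)
        for i, nb in enumerate(idx):
            cov = np.cov(points[nb].T)
            normals[i] = np.linalg.eigh(cov)[1][:, 0]
        normals[normals[:, 2] < 0] *= -1.0            # orient normals upwards
        return np.array([np.var(normals[nb], axis=0).sum() for nb in idx])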
Palmprint and Face Multi-Modal Biometric Recognition Based on SDA-GSVD and Its Kernelization
Jing, Xiao-Yuan; Li, Sheng; Li, Wen-Qian; Yao, Yong-Fang; Lan, Chao; Lu, Jia-Sen; Yang, Jing-Yu
2012-01-01
When extracting discriminative features from multimodal data, current methods rarely concern themselves with the data distribution. In this paper, we present an assumption that is consistent with the viewpoint of discrimination, that is, a person's overall biometric data should be regarded as one class in the input space, and his different biometric data can form different Gaussian distributions, i.e., different subclasses. Hence, we propose a novel multimodal feature extraction and recognition approach based on subclass discriminant analysis (SDA). Specifically, one person's different bio-data are treated as different subclasses of one class, and a transformed space is calculated, where the difference among subclasses belonging to different persons is maximized, and the difference within each subclass is minimized. Then, the obtained multimodal features are used for classification. Two solutions are presented to overcome the singularity problem encountered in the calculation, namely using PCA preprocessing and employing the generalized singular value decomposition (GSVD) technique. Further, we provide nonlinear extensions of SDA-based multimodal feature extraction, that is, feature fusion based on KPCA-SDA and KSDA-GSVD. In KPCA-SDA, we first apply Kernel PCA to each single modality before performing SDA, while in KSDA-GSVD, we directly perform Kernel SDA to fuse multimodal data by applying GSVD to avoid the singularity problem. For simplicity, two typical types of biometric data are considered in this paper, i.e., palmprint data and face data. Compared with several representative multimodal biometric recognition methods, experimental results show that our approaches outperform related multimodal recognition methods and that KSDA-GSVD achieves the best recognition performance. PMID:22778600
Fusion Materials Research at Oak Ridge National Laboratory in Fiscal Year 2015
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiffen, F. W.; Katoh, Yutai; Melton, Stephanie G.
The realization of fusion energy is a formidable challenge with significant achievements resulting from close integration of the plasma physics and applied technology disciplines. Presently, the most significant technological challenge for the near-term experiments such as ITER, and next generation fusion power systems, is the inability of current materials and components to withstand the harsh fusion nuclear environment. The overarching goal of the Oak Ridge National Laboratory (ORNL) fusion materials program is to provide the applied materials science support and understanding to underpin the ongoing Department of Energy (DOE) Office of Science fusion energy program while developing materials for fusion power systems. In doing so the program continues to be integrated both with the larger United States (US) and international fusion materials communities, and with the international fusion design and technology communities. This document provides a summary of Fiscal Year (FY) 2015 activities supporting the Office of Science, Office of Fusion Energy Sciences Materials Research for Magnetic Fusion Energy (AT-60-20-10-0) carried out by ORNL. The organization of this report is mainly by material type, with sections on specific technical activities. Four projects selected in the Funding Opportunity Announcement (FOA) solicitation of late 2011 and funded in FY2012-FY2014 are identified by "FOA" in the titles. This report includes the final funded work of these projects, although ORNL plans to continue some of this work within the base program.
Fusion programs in applied plasma physics
NASA Astrophysics Data System (ADS)
1992-07-01
The Applied Plasma Physics (APP) program at General Atomics (GA) described here includes four major elements: (1) Applied Plasma Physics Theory Program, (2) Alpha Particle Diagnostic, (3) Edge and Current Density Diagnostic, and (4) Fusion User Service Center (USC). The objective of the APP theoretical plasma physics research at GA is to support the DIII-D and other tokamak experiments and to significantly advance our ability to design a commercially-attractive fusion reactor. We categorize our efforts in three areas: magnetohydrodynamic (MHD) equilibria and stability; plasma transport with emphasis on H-mode, divertor, and boundary physics; and radio frequency (RF). The objective of the APP alpha particle diagnostic is to develop diagnostics of fast confined alpha particles using the interactions with the ablation cloud surrounding injected pellets and to develop diagnostic systems for reacting and ignited plasmas. The objective of the APP edge and current density diagnostic is to first develop a lithium beam diagnostic system for edge fluctuation studies on the Texas Experimental Tokamak (TEXT). The objective of the Fusion USC is to continue to provide maintenance and programming support to computer users in the GA fusion community. The detailed progress of each separate program covered in this report period is described.
NASA Astrophysics Data System (ADS)
Ogawa, Yuichi
2016-05-01
A new strategic energy plan decided by the Japanese Cabinet in 2014 strongly supports the steady promotion of nuclear fusion development activities, including the ITER project and the Broader Approach activities, from a long-term viewpoint. The Atomic Energy Commission (AEC) in Japan formulated the Third Phase Basic Program to promote an experimental fusion reactor project. In 2005, the AEC reviewed this Program and discussed selection and concentration among the many projects of fusion reactor development. In addition to the promotion of the ITER project, advanced tokamak research with JT-60SA, the helical plasma experiment LHD, the FIREX project in laser fusion research, and fusion engineering with IFMIF were highly prioritized. Although the basic concepts of tokamak, helical, and laser fusion research are quite different, they share many common features, such as plasma physics in 3-D magnetic geometries and high power heat loads on plasma-facing components. Therefore, a synergetic scenario for fusion reactor development among the various plasma confinement concepts would be important.
Progressive Label Fusion Framework for Multi-atlas Segmentation by Dictionary Evolution
Song, Yantao; Wu, Guorong; Sun, Quansen; Bahrami, Khosro; Li, Chunming; Shen, Dinggang
2015-01-01
Accurate segmentation of anatomical structures in medical images is very important in neuroscience studies. Recently, multi-atlas patch-based label fusion methods have achieved many successes, which generally represent each target patch from an atlas patch dictionary in the image domain and then predict the latent label by directly applying the estimated representation coefficients in the label domain. However, due to the large gap between these two domains, the estimated representation coefficients in the image domain may not stay optimal for the label fusion. To overcome this dilemma, we propose a novel label fusion framework to make the weighting coefficients eventually to be optimal for the label fusion by progressively constructing a dynamic dictionary in a layer-by-layer manner, where a sequence of intermediate patch dictionaries gradually encode the transition from the patch representation coefficients in image domain to the optimal weights for label fusion. Our proposed framework is general to augment the label fusion performance of the current state-of-the-art methods. In our experiments, we apply our proposed method to hippocampus segmentation on ADNI dataset and achieve more accurate labeling results, compared to the counterpart methods with single-layer dictionary. PMID:26942233
Choi, Hyun Ho; Lee, Ju Hwan; Kim, Sung Min; Park, Sung Yun
2015-01-01
Here, the speckle noise in ultrasonic images is removed using an image fusion-based denoising method. To optimize the denoising performance, each discrete wavelet transform (DWT) and filtering technique was analyzed and compared. In addition, the performances were compared in order to derive the optimal input conditions. To evaluate the speckle noise removal performance, the image fusion algorithm was applied to the ultrasound images, and the results were compared with the original images processed without the algorithm. As a result, applying DWT and filtering techniques alone caused information loss and retained noise characteristics, and did not give the best noise reduction performance. Conversely, an image fusion method applying the SRAD-original conditions preserved the key information in the original image while removing the speckle noise. Based on these characteristics, the SRAD-original input conditions gave the best denoising performance for the ultrasound images. The best denoising technique proposed on the basis of these results was confirmed to have high potential for clinical application.
IMU-Based Gait Recognition Using Convolutional Neural Networks and Multi-Sensor Fusion.
Dehzangi, Omid; Taherisadr, Mojtaba; ChangalVala, Raghvendar
2017-11-27
The widespread use of wearable sensors, such as those in smart watches, has provided continuous access to valuable user-generated data, such as human motion, that can be used to identify an individual based on his or her motion patterns, such as gait. Several methods have been suggested to extract various heuristic and high-level features from gait motion data to identify discriminative gait signatures and distinguish the target individual from others. However, manual and hand-crafted feature extraction is error prone and subjective. Furthermore, the motion data collected from inertial sensors have a complex structure, and the detachment between the manual feature extraction module and the predictive learning models might limit the generalization capabilities. In this paper, we propose a novel approach for human gait identification using a time-frequency (TF) expansion of human gait cycles in order to capture joint two-dimensional (2D) spectral and temporal patterns of gait cycles. Then, we design a deep convolutional neural network (DCNN) to extract discriminative features from the 2D expanded gait cycles and jointly optimize the identification model and the spectro-temporal features in a discriminative fashion. We collected raw motion data synchronously from five inertial sensors placed at the chest, lower back, right wrist, right knee, and right ankle of each human subject in order to investigate the impact of sensor location on gait identification performance. We then present two methods for early (input-level) and late (decision-score-level) multi-sensor fusion to improve the gait identification generalization performance. We specifically propose the minimum error score fusion (MESF) method, which discriminatively learns the linear fusion weights of individual DCNN scores at the decision level by minimizing the error rate on the training data in an iterative manner. Ten subjects participated in this study; hence, the problem is a 10-class identification task. Based on our experimental results, 91% subject identification accuracy was achieved using the best individual IMU and 2DTF-DCNN. We then investigated our proposed early and late sensor fusion approaches, which improved the gait identification accuracy of the system to 93.36% and 97.06%, respectively.
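The decision-level fusion idea behind MESF, learning linear weights over per-sensor score matrices by iteratively reducing the training error rate, can be sketched as below. The random coordinate search, step size and simplex projection are assumptions made for illustration; the paper's actual optimization procedure may differ.

    import numpy as np

    def mesf_weights(score_list, y_true, iters=200, step=0.05, seed=0):
        # score_list: per-sensor class-score matrices (n_samples x n_classes).
        # Greedily perturb one weight at a time, renormalize to the simplex,
        # and keep the change only if the fused training error rate drops.
        rng = np.random.default_rng(seed)
        n = len(score_list)
        w = np.full(n, 1.0 / n)

        def error(weights):
            fused = sum(wi * s for wi, s in zip(weights, score_list))
            return np.mean(np.argmax(fused, axis=1) != y_true)

        best = error(w)
        for _ in range(iters):
            i = rng.integers(n)
            for delta in (step, -step):
                cand = w.copy()
                cand[i] = max(cand[i] + delta, 0.0)
                cand = cand / cand.sum()
                e = error(cand)
                if e < best:
                    best, w = e, cand
        return w, best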
Paramyxovirus F1 protein has two fusion peptides: implications for the mechanism of membrane fusion.
Peisajovich, S G; Samuel, O; Shai, Y
2000-03-10
Viral fusion proteins contain a highly hydrophobic segment, named the fusion peptide, which is thought to be responsible for the merging of the cellular and viral membranes. Paramyxoviruses are believed to contain a single fusion peptide at the N terminus of the F1 protein. However, here we identified an additional internal segment in the Sendai virus F1 protein (amino acids 214-226) highly homologous to the fusion peptides of HIV-1 and RSV. A synthetic peptide, which includes this region, was found to induce membrane fusion of large unilamellar vesicles, at concentrations where the known N-terminal fusion peptide is not effective. A scrambled peptide as well as several peptides from other regions of the F1 protein, which strongly bind to membranes, are not fusogenic. The functional and structural characterization of this active segment suggest that the F1 protein has an additional internal fusion peptide that could participate in the actual fusion event. The presence of homologous regions in other members of the same family suggests that the concerted action of two fusion peptides, one N-terminal and the other internal, is a general feature of paramyxoviruses. Copyright 2000 Academic Press.
NASA Astrophysics Data System (ADS)
Huang, Yadong; Gao, Kun; Gong, Chen; Han, Lu; Guo, Yue
2016-03-01
During traditional multi-resolution infrared and visible image fusion, a low-contrast target may be weakened and become inconspicuous because of opposite DN values in the source images. Therefore, a novel target pseudo-color enhanced image fusion algorithm based on a modified attention model and the fast discrete curvelet transform is proposed. The interesting target regions are extracted from the source images by introducing motion features obtained from the modified attention model, and gray-level fusion of the source images is performed in the curvelet domain using rules based on the physical characteristics of the sensors. The final fused image is obtained by mapping the extracted targets into the gray fusion result with appropriate pseudo-colors. The experiments show that the algorithm can highlight dim targets effectively and improve the SNR of the fused image.
Quality dependent fusion of intramodal and multimodal biometric experts
NASA Astrophysics Data System (ADS)
Kittler, J.; Poh, N.; Fatukasi, O.; Messer, K.; Kryszczuk, K.; Richiardi, J.; Drygajlo, A.
2007-04-01
We address the problem of score-level fusion of intramodal and multimodal experts in the context of biometric identity verification. We investigate the merits of confidence-based weighting of component experts. In contrast to the conventional approach, where confidence values are derived from scores, we instead use raw measures of biometric data quality to control the influence of each expert on the final fused score. We show that quality-based fusion gives better performance than quality-free fusion. The use of quality-weighted scores as features in the definition of the fusion functions leads to further improvements. We demonstrate that the achievable performance gain is also affected by the choice of fusion architecture. The evaluation of the proposed methodology involves six face experts and one speech verification expert. It is carried out on the XM2VTS database.
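A minimal sketch of the quality-controlled weighting idea described above: raw quality measures, rather than the scores themselves, set each expert's influence on the fused score. The normalization scheme and the numeric values are assumptions for illustration only.

    import numpy as np

    def quality_weighted_fusion(scores, qualities):
        # Fuse per-expert match scores with weights derived from raw biometric
        # quality measures; higher quality means more influence on the result.
        q = np.asarray(qualities, dtype=float)
        s = np.asarray(scores, dtype=float)
        w = q / q.sum() if q.sum() > 0 else np.full(len(q), 1.0 / len(q))
        return float(np.dot(w, s))

    # Hypothetical values for six face experts and one speech expert.
    scores = [0.71, 0.63, 0.80, 0.55, 0.69, 0.74, 0.40]
    quality = [0.90, 0.80, 0.95, 0.30, 0.70, 0.85, 0.50]
    print(quality_weighted_fusion(scores, quality))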
Reprogramming of Somatic Cells Towards Pluripotency by Cell Fusion.
Malinowski, Andrzej R; Fisher, Amanda G
2016-01-01
Pluripotent reprogramming can be dominantly induced in a somatic nucleus upon fusion with a pluripotent cell such as embryonic stem (ES) cell. Cell fusion between ES cells and somatic cells results in the formation of heterokaryons, in which the somatic nuclei begin to acquire features of the pluripotent partner. The generation of interspecies heterokaryons between mouse ES- and human somatic cells allows an experimenter to distinguish the nuclear events occurring specifically within the reprogrammed nucleus. Therefore, cell fusion provides a simple and rapid approach to look at the early nuclear events underlying pluripotent reprogramming. Here, we describe a polyethylene glycol (PEG)-mediated cell fusion protocol to generate interspecies heterokaryons and intraspecies hybrids between ES cells and B lymphocytes or fibroblasts.
Review of the magnetic fusion program by the 1986 ERAB Fusion Panel
NASA Astrophysics Data System (ADS)
Davidson, Ronald C.
1987-09-01
The 1986 ERAB Fusion Panel finds that fusion energy continues to be an attractive energy source with great potential for the future, and that the magnetic fusion program continues to make substantial technical progress. In addition, fusion research advances plasma physics, a sophisticated and useful branch of applied science, as well as technologies important to industry and defense. These factors fully justify the substantial expenditures by the Department of Energy in fusion research and development (R&D). The Panel endorses the overall program direction, strategy, and plans, and recognizes the importance and timeliness of proceeding with a burning plasma experiment, such as the proposed Compact Ignition Tokamak (CIT) experiment.
Research on Remote Sensing Image Classification Based on Feature Level Fusion
NASA Astrophysics Data System (ADS)
Yuan, L.; Zhu, G.
2018-04-01
Remote sensing image classification, as an important direction of remote sensing image processing and application, has been widely studied. However, existing classification algorithms still suffer from misclassification and omission errors, which keep the final classification accuracy low. In this paper, we selected Sentinel-1A and Landsat8 OLI images as data sources and propose a classification method based on feature-level fusion. We compare three feature-level fusion algorithms (i.e., Gram-Schmidt spectral sharpening, Principal Component Analysis transform and Brovey transform), and then select the best fused image for the classification experiments. In the classification process, we choose four image classification algorithms (i.e., Minimum Distance, Mahalanobis Distance, Support Vector Machine and ISODATA) for comparison experiments. We use overall classification precision and the Kappa coefficient as the classification accuracy evaluation criteria, and the four classification results of the fused image are analysed. The experimental results show that the fusion effect of Gram-Schmidt spectral sharpening is better than that of the other methods. Among the four classification algorithms, the fused image has the best applicability to Support Vector Machine classification, with an overall classification precision of 94.01% and a Kappa coefficient of 0.91. The image fused from Sentinel-1A and Landsat8 OLI not only contains more spatial information and spectral texture characteristics, but also enhances the distinguishing features of the images. The proposed method is beneficial for improving the accuracy and stability of remote sensing image classification.
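Of the three fusion algorithms compared above, the Brovey transform is the simplest to write down; a minimal numpy sketch follows, where the band layout, the epsilon guard and the use of the SAR image as the higher-resolution band are assumptions for illustration.

    import numpy as np

    def brovey_fusion(ms_bands, high_res_band, eps=1e-6):
        # Brovey transform: scale each multispectral band by the ratio of the
        # higher-resolution band to the sum of the multispectral bands
        # (all inputs co-registered and resampled to the same grid).
        ms = np.asarray(ms_bands, dtype=np.float64)        # (n_bands, rows, cols)
        ratio = np.asarray(high_res_band, dtype=np.float64) / (ms.sum(axis=0) + eps)
        return ms * ratio[None, :, :]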
Fusion peptide of influenza hemagglutinin requires a fixed angle boomerang structure for activity.
Lai, Alex L; Park, Heather; White, Judith M; Tamm, Lukas K
2006-03-03
The fusion peptide of influenza hemagglutinin is crucial for cell entry of this virus. Previous studies showed that this peptide adopts a boomerang-shaped structure in lipid model membranes at the pH of membrane fusion. To examine the role of the boomerang in fusion, we changed several residues proposed to stabilize the kink in this structure and measured fusion. Among these mutants, E11A and W14A expressed hemagglutinins with hemifusion or no fusion activity, while F9A and N12A had no effect on fusion. Binding enthalpies and free energies of the mutant peptides to model membranes, and their ability to perturb lipid bilayer structures, correlated well with the fusion activities of the parent full-length molecules. The structure of W14A, determined by NMR and site-directed spin labeling, features a flexible kink that points out of the membrane, in sharp contrast to the more ordered boomerang of the wild type, which points into the membrane. A specific fixed-angle boomerang structure is thus required to support membrane fusion.
Karim, Mahmoud Abdul; Samyn, Dieter Ronny; Mattie, Sevan; Brett, Christopher Leonard
2018-02-01
When marked for degradation, surface receptor and transporter proteins are internalized and delivered to endosomes where they are packaged into intralumenal vesicles (ILVs). Many rounds of ILV formation create multivesicular bodies (MVBs) that fuse with lysosomes exposing ILVs to hydrolases for catabolism. Despite being critical for protein degradation, the molecular underpinnings of MVB-lysosome fusion remain unclear, although machinery underlying other lysosome fusion events is implicated. But how then is specificity conferred? And how is MVB maturation and fusion coordinated for efficient protein degradation? To address these questions, we developed a cell-free MVB-lysosome fusion assay using Saccharomyces cerevisiae as a model. After confirming that the Rab7 ortholog Ypt7 and the multisubunit tethering complex HOPS (homotypic fusion and vacuole protein sorting complex) are required, we found that the Qa-SNARE Pep12 distinguishes this event from homotypic lysosome fusion. Mutations that impair MVB maturation block fusion by preventing Ypt7 activation, confirming that a Rab-cascade mechanism harmonizes MVB maturation with lysosome fusion. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Zhang, Hongsheng; Xu, Ru
2018-02-01
Integrating synthetic aperture radar (SAR) and optical data to improve urban land cover classification has been identified as a promising approach. However, which integration level is the most suitable remains unclear, yet is important to many researchers and engineers. This study aimed to compare different integration levels in order to provide a scientific reference for the wide range of studies using optical and SAR data. SAR data from TerraSAR-X and from ENVISAT ASAR in both WSM and IMP modes were combined with optical data at the pixel, feature and decision levels using four typical machine learning methods. The experimental results indicated that: 1) the feature level, using both the original images and extracted features, achieved a significant improvement of up to 10% compared to using optical data alone; 2) different fusion levels required different suitable methods, depending on the data distribution and data resolution; for instance, the support vector machine was the most stable at both the feature and decision levels, while random forest was suitable at the pixel level but not at the decision level; 3) by examining the distribution of SAR features, some features (e.g., homogeneity) exhibited a close-to-normal distribution, explaining the improvement from the maximum likelihood method at the feature and decision levels. This indicated the benefits of using texture features from SAR data when combining them with optical data for land cover classification. The research also showed that combining optical and SAR data does not guarantee an improvement compared with using a single data source for urban land cover classification; the outcome depends on the selection of appropriate fusion levels and fusion methods.
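A rough sketch of the feature-level idea above: per-pixel feature vectors are built by stacking the optical bands, the SAR backscatter and a SAR texture layer before training a supervised classifier. Here a local standard deviation stands in for the GLCM homogeneity feature mentioned in the abstract, and the window size, array names and SVM settings are assumptions.

    import numpy as np
    from scipy.ndimage import uniform_filter
    from sklearn.svm import SVC

    def local_std(img, win=7):
        # Simple texture proxy: local standard deviation of the SAR band
        # (a stand-in for the GLCM homogeneity feature used in the study).
        x = img.astype(np.float64)
        mean = uniform_filter(x, win)
        mean_sq = uniform_filter(x * x, win)
        return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

    def feature_level_stack(optical_bands, sar, win=7):
        # Per-pixel feature vectors: optical bands + SAR backscatter + texture.
        layers = list(optical_bands) + [sar, local_std(sar, win)]
        return np.stack([l.ravel() for l in layers], axis=1)

    # Hypothetical usage with labelled training pixels:
    # X = feature_level_stack(opt_bands, sar_band)
    # clf = SVC(kernel="rbf").fit(X[train_idx], train_labels)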
2013-10-01
Award Number: W81XWH-12-1-0597. Title: Parametric PET/MR Fusion Imaging to Differentiate Aggressive from Indolent Primary Prostate Cancer with Application for Image-Guided Prostate Cancer Biopsies. The study investigates whether fusion PET/MRI imaging with 18F-choline PET/CT and diffusion-weighted MRI can be successfully applied to target prostate …
Distributed service-based approach for sensor data fusion in IoT environments.
Rodríguez-Valenzuela, Sandra; Holgado-Terriza, Juan A; Gutiérrez-Guerrero, José M; Muros-Cobos, Jesús L
2014-10-15
The Internet of Things (IoT) enables communication among smart objects, promoting the pervasive presence around us of a variety of things or objects that are able to interact and cooperate to reach common goals. IoT objects can obtain data from their context, such as the home, office, industry or body. These data can be combined to obtain new and more complex information by applying data fusion processes. However, to apply data fusion algorithms in IoT environments, the full system must deal with distributed nodes and decentralized communication, and must support scalability and node dynamicity, among other constraints. In this paper, a novel method to manage data acquisition and fusion based on a distributed service composition model is presented, improving the treatment of data in IoT pervasive environments.
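As an illustration only, the service names and the tiny composition API below are hypothetical and are not the authors' middleware; the sketch shows the general idea of composing a fusion process from independent sensor services that can join and leave dynamically.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Callable, Dict, List

@dataclass
class SensorService:
    """A hypothetical IoT service exposing one context reading."""
    name: str
    read: Callable[[], float]

class FusionService:
    """Composes registered sensor services and fuses their readings."""
    def __init__(self, fuse: Callable[[List[float]], float] = mean):
        self.services: Dict[str, SensorService] = {}
        self.fuse = fuse

    def register(self, service: SensorService) -> None:
        self.services[service.name] = service      # nodes may join dynamically

    def unregister(self, name: str) -> None:
        self.services.pop(name, None)              # ...and leave again

    def value(self) -> float:
        return self.fuse([s.read() for s in self.services.values()])

# Usage: fuse temperature readings from two rooms of a smart home.
hub = FusionService()
hub.register(SensorService("living_room_temp", lambda: 21.5))
hub.register(SensorService("kitchen_temp", lambda: 23.1))
print("fused temperature:", hub.value())
```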
Inversion-mediated gene fusions involving NAB2-STAT6 in an unusual malignant meningioma.
Gao, F; Ling, C; Shi, L; Commins, D; Zada, G; Mack, W J; Wang, K
2013-08-20
Meningiomas are the most common primary intracranial tumours, with ∼3% meeting current histopathologic criteria for malignancy. In this study, we explored the transcriptome of meningiomas using RNA-Seq. Inversion-mediated fusions between two adjacent genes, NAB2 and STAT6, were detected in one malignant tumour, creating two novel in-frame transcripts that were validated by RT-PCR and Sanger sequencing. Gene fusions of NAB2-STAT6 were recently implicated in the pathogenesis of solitary fibrous tumours; our study suggested that similar fusions may also have a role in a malignant meningioma with unusual histopathologic features.
Structural basis of viral invasion: lessons from paramyxovirus F
Lamb, Robert A.; Jardetzky, Theodore S.
2007-01-01
The structures of glycoproteins that mediate enveloped virus entry into cells have revealed dramatic structural changes that accompany membrane fusion and provided mechanistic insights into this process. The group of class I viral fusion proteins includes the influenza hemagglutinin, paramyxovirus F, HIV env and other mechanistically related fusogens, but these proteins are unrelated in sequence and exhibit clearly distinct structural features. Recently determined crystal structures of the paramyxovirus F protein in two conformations, representing prefusion and postfusion states, reveal a novel protein architecture that undergoes large-scale, irreversible refolding during membrane fusion, extending our understanding of this diverse group of membrane fusion machines. PMID:17870467
A Smartphone-Based Driver Safety Monitoring System Using Data Fusion
Lee, Boon-Giin; Chung, Wan-Young
2012-01-01
This paper proposes a method for monitoring driver safety levels using a data fusion approach based on several discrete data types: eye features, bio-signal variation, in-vehicle temperature, and vehicle speed. The driver safety monitoring system was developed in practice in the form of an application for an Android-based smartphone device, where measuring safety-related data requires no extra monetary expenditure or equipment. Moreover, the system provides high resolution and flexibility. The safety monitoring process involves the fusion of attributes gathered from different sensors, including video, electrocardiography, photoplethysmography, temperature, and a three-axis accelerometer, that are assigned as input variables to an inference analysis framework. A Fuzzy Bayesian framework is designed to indicate the driver’s capability level and is updated continuously in real-time. The sensory data are transmitted via Bluetooth communication to the smartphone device. A fake incoming call warning service alerts the driver if his or her safety level is suspiciously compromised. Realistic testing of the system demonstrates the practical benefits of multiple features and their fusion in providing a more authentic and effective driver safety monitoring. PMID:23247416
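A toy sketch of how several heterogeneous driver cues might be fused into a single safety score. The membership functions, weights, prior and thresholds are invented for illustration and are a simplified stand-in for the paper's Fuzzy Bayesian framework, not a reproduction of it.

```python
import math

def risk_eye_closure(perclos):        # fraction of time the eyes are closed
    return min(max((perclos - 0.15) / 0.3, 0.0), 1.0)

def risk_heart_rate(bpm):             # unusually low or high heart rate is risky
    return min(abs(bpm - 70) / 40.0, 1.0)

def risk_cabin_temp(celsius):         # drowsiness risk rises with a warm cabin
    return min(max((celsius - 24.0) / 10.0, 0.0), 1.0)

def risk_speed(kmh):                  # higher speed amplifies the consequences
    return min(kmh / 160.0, 1.0)

def fused_safety_level(perclos, bpm, celsius, kmh,
                       weights=(2.0, 1.0, 0.5, 1.0), prior=0.1):
    """Combine fuzzy cue memberships in log-odds space (naive-Bayes-style fusion)."""
    cues = (risk_eye_closure(perclos), risk_heart_rate(bpm),
            risk_cabin_temp(celsius), risk_speed(kmh))
    log_odds = math.log(prior / (1 - prior))
    for w, c in zip(weights, cues):
        log_odds += w * (c - 0.5)     # cues above 0.5 push towards "unsafe"
    p_unsafe = 1 / (1 + math.exp(-log_odds))
    return 1.0 - p_unsafe             # higher value means a safer driver

print(f"safety level: {fused_safety_level(0.35, 58, 29.0, 110):.2f}")
```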
Application of the JDL data fusion process model for cyber security
NASA Astrophysics Data System (ADS)
Giacobe, Nicklaus A.
2010-04-01
A number of cyber security technologies have proposed the use of data fusion to enhance the defensive capabilities of the network and aid in the development of situational awareness for the security analyst. While there have been advances in fusion technologies and the application of fusion in intrusion detection systems (IDSs), in particular, additional progress can be made by gaining a better understanding of a variety of data fusion processes and applying them to the cyber security application domain. This research explores the underlying processes identified in the Joint Directors of Laboratories (JDL) data fusion process model and further describes them in a cyber security context.
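For reference, the JDL levels can be written down as a small data structure; the cyber-security interpretation attached to each level below is a plausible mapping in the spirit of the paper, not a quotation from it, and Level 5 is a later extension of the original model.

```python
# JDL data fusion process levels with an illustrative cyber-security reading.
JDL_LEVELS = {
    0: ("Sub-object data refinement", "normalise raw packets, flow records and sensor logs"),
    1: ("Object refinement",          "correlate alerts into host and attacker entities"),
    2: ("Situation refinement",       "relate entities into an attack picture for the analyst"),
    3: ("Impact/threat refinement",   "project the mission impact of observed activity"),
    4: ("Process refinement",         "re-task IDS sensors and tune collection"),
    5: ("User refinement",            "adapt presentation to the analyst's needs (later extension)"),
}

for level, (name, cyber_example) in JDL_LEVELS.items():
    print(f"Level {level}: {name} -- e.g. {cyber_example}")
```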
Airborne Infrared and Visible Image Fusion Combined with Region Segmentation
Zuo, Yujia; Liu, Jinghong; Bai, Guanbing; Wang, Xuan; Sun, Mingchao
2017-01-01
This paper proposes an infrared (IR) and visible image fusion method that introduces region segmentation into the dual-tree complex wavelet transform (DTCWT) domain. The method is intended to improve both the target indication and scene spectral features of fusion images, and the target identification and tracking reliability of the fusion system, on an airborne photoelectric platform. The method involves segmenting the IR image into regions by significance and identifying the target region and the background region; the low-frequency components in the DTCWT domain are then fused according to the region segmentation result. For the high-frequency components, region weights are assigned according to the information richness of region details to conduct fusion based on both weights and adaptive phases, and a shrinkage function is introduced to suppress noise; finally, the fused low-frequency and high-frequency components are reconstructed to obtain the fusion image. The experimental results show that the proposed method can fully extract complementary information from the source images to obtain a fusion image with good target indication and rich scene detail. It also gives fusion results superior to existing popular fusion methods under either subjective or objective evaluation. With good stability and high fusion accuracy, this method can meet the requirements of IR-visible image fusion systems. PMID:28505137
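A compressed sketch of region-aware low/high-frequency fusion. A single-level ordinary DWT from PyWavelets is used here as a stand-in for the paper's DTCWT, and the region mask, the max-abs rule for detail coefficients and the soft-threshold noise suppression are assumptions for illustration rather than the authors' exact scheme.

```python
import numpy as np
import pywt

def fuse_ir_visible(ir, vis, target_mask, wavelet="db2"):
    """Fuse registered IR/visible images with a region-aware wavelet rule."""
    ir_lo, ir_hi = pywt.dwt2(ir.astype(float), wavelet)
    vis_lo, vis_hi = pywt.dwt2(vis.astype(float), wavelet)

    # Down-sample the target mask to the coefficient grid.
    m = target_mask[::2, ::2].astype(bool)
    mask = np.zeros(ir_lo.shape, dtype=bool)
    mask[:m.shape[0], :m.shape[1]] = m[:ir_lo.shape[0], :ir_lo.shape[1]]

    # Low frequencies: IR dominates inside the target region, average elsewhere.
    lo = np.where(mask, ir_lo, 0.5 * (ir_lo + vis_lo))

    # High frequencies: keep whichever coefficient has the larger magnitude,
    # then shrink small coefficients to suppress noise (soft threshold).
    hi = []
    for a, b in zip(ir_hi, vis_hi):
        c = np.where(np.abs(a) >= np.abs(b), a, b)
        thr = 0.02 * np.abs(c).max()
        hi.append(np.sign(c) * np.maximum(np.abs(c) - thr, 0.0))

    return pywt.idwt2((lo, tuple(hi)), wavelet)

# Usage with synthetic 128x128 images and a central "hotspot" mask.
ir = np.random.rand(128, 128); vis = np.random.rand(128, 128)
mask = np.zeros((128, 128)); mask[48:80, 48:80] = 1
print(fuse_ir_visible(ir, vis, mask).shape)
```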
NASA Astrophysics Data System (ADS)
Bonfiglio, D.; Chacón, L.; Cappello, S.
2010-08-01
With the increasing impact of scientific discovery via advanced computation, there is presently a strong emphasis on ensuring the mathematical correctness of computational simulation tools. Such endeavor, termed verification, is now at the center of most serious code development efforts. In this study, we address a cross-benchmark nonlinear verification study between two three-dimensional magnetohydrodynamics (3D MHD) codes for fluid modeling of fusion plasmas, SPECYL [S. Cappello and D. Biskamp, Nucl. Fusion 36, 571 (1996)] and PIXIE3D [L. Chacón, Phys. Plasmas 15, 056103 (2008)], in their common limit of application: the simple viscoresistive cylindrical approximation. SPECYL is a serial code in cylindrical geometry that features a spectral formulation in space and a semi-implicit temporal advance, and has been used extensively to date for reversed-field pinch studies. PIXIE3D is a massively parallel code in arbitrary curvilinear geometry that features a conservative, solenoidal finite-volume discretization in space, and a fully implicit temporal advance. The present study is, in our view, a first mandatory step in assessing the potential of any numerical 3D MHD code for fluid modeling of fusion plasmas. Excellent agreement is demonstrated over a wide range of parameters for several fusion-relevant cases in both two- and three-dimensional geometries.
Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.
Ordóñez, Francisco Javier; Roggen, Daniel
2016-01-18
Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, outperforming some of the previously reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise the influence of key architectural hyperparameters on performance to provide insights into their optimisation.
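A skeletal PyTorch version of a convolutional-plus-LSTM architecture in the spirit of the framework described above; the layer sizes, window length and number of sensor channels are placeholders, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ConvLSTMHAR(nn.Module):
    """Conv layers extract local features per time step; the LSTM models their dynamics."""
    def __init__(self, n_channels=9, n_classes=6, hidden=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x):                     # x: (batch, channels, time)
        h = self.features(x)                  # (batch, 64, time)
        h, _ = self.lstm(h.transpose(1, 2))   # (batch, time, hidden)
        return self.classifier(h[:, -1])      # classify from the last time step

# One sliding window of 128 samples from 9 sensor channels (e.g. three IMUs).
model = ConvLSTMHAR()
logits = model(torch.randn(4, 9, 128))
print(logits.shape)                           # torch.Size([4, 6])
```

Because the convolution acts per time step and the recurrence acts across steps, adding or removing sensor channels only changes `n_channels`, which is what makes this style of network convenient for multimodal fusion.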
Log-Gabor Weber descriptor for face recognition
NASA Astrophysics Data System (ADS)
Li, Jing; Sang, Nong; Gao, Changxin
2015-09-01
The Log-Gabor transform, which is suitable for analyzing gradually changing data such as iris and face images, has been widely used in image processing, pattern recognition, and computer vision. In most cases, only the magnitude or the phase information of the Log-Gabor transform is considered. However, the complementary effect obtained by combining magnitude and phase information simultaneously for an image-feature extraction problem has not been systematically explored in existing works. We propose a local image descriptor for face recognition, called the Log-Gabor Weber descriptor (LGWD). The novelty of our LGWD is twofold: (1) to fully utilize the information from the magnitude and phase features of the multiscale, multi-orientation Log-Gabor transform, we apply the Weber local binary pattern operator to each transform response; (2) the encoded Log-Gabor magnitude and phase information are fused at the feature level using a kernel canonical correlation analysis strategy, considering that feature-level information fusion is effective when the modalities are correlated. Experimental results on the AR, Extended Yale B, and UMIST face databases, compared with those available from recent experiments reported in the literature, show that our descriptor yields better performance than state-of-the-art methods.
A Tensor-Based Structural Damage Identification and Severity Assessment
Anaissi, Ali; Makki Alamdari, Mehrisadat; Rakotoarivelo, Thierry; Khoa, Nguyen Lu Dang
2018-01-01
Early damage detection is critical for a large set of global ageing infrastructure. Structural Health Monitoring systems provide a sensor-based quantitative and objective approach to continuously monitor these structures, as opposed to traditional engineering visual inspection. Analysing these sensed data is one of the major Structural Health Monitoring (SHM) challenges. This paper presents a novel algorithm to detect and assess damage in structures such as bridges. This method applies tensor analysis for data fusion and feature extraction, and further uses one-class support vector machine on this feature to detect anomalies, i.e., structural damage. To evaluate this approach, we collected acceleration data from a sensor-based SHM system, which we deployed on a real bridge and on a laboratory specimen. The results show that our tensor method outperforms a state-of-the-art approach using the wavelet energy spectrum of the measured data. In the specimen case, our approach succeeded in detecting 92.5% of induced damage cases, as opposed to 61.1% for the wavelet-based approach. While our method was applied to bridges, its algorithm and computation can be used on other structures or sensor-data analysis problems, which involve large series of correlated data from multiple sensors. PMID:29301314
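A simplified sketch of the pipeline described above, assuming acceleration windows from several sensors are stacked into a three-way tensor. A truncated SVD of the mode-1 unfolding stands in for the paper's tensor decomposition, and the sensor counts, rank and synthetic "damage" shift are illustrative.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
n_events, n_sensors, n_freq = 200, 8, 64        # events x sensors x spectral bins
tensor = rng.normal(size=(n_events, n_sensors, n_freq))

# Mode-1 unfolding fuses all sensors/bins per event; a low-rank projection
# serves as the fused feature (a stand-in for a full tensor decomposition).
unfolded = tensor.reshape(n_events, n_sensors * n_freq)
svd = TruncatedSVD(n_components=5, random_state=0).fit(unfolded)
features = svd.transform(unfolded)

# Train a one-class SVM on healthy-state features, then score a new event.
detector = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(features)
new_event = rng.normal(loc=3.0, size=(1, n_sensors * n_freq))   # shifted = "damage"
label = detector.predict(svd.transform(new_event))[0]
print("anomaly" if label == -1 else "healthy")
```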
Single-trial EEG-informed fMRI analysis of emotional decision problems in hot executive function.
Guo, Qian; Zhou, Tiantong; Li, Wenjie; Dong, Li; Wang, Suhong; Zou, Ling
2017-07-01
Executive function refers to the conscious control of psychological processes related to thinking and action. Emotional decision making is part of hot executive function and contains both emotional and logical elements. As an important social adaptation ability, it has received more and more attention in recent years. Gambling tasks are well suited to the study of emotional decision making. Because fMRI studies of gambling tasks report not completely consistent brain activation regions, this study adopted EEG-fMRI fusion technology to reveal brain neural activity related to feedback stimuli. An EEG-informed fMRI analysis was applied to process simultaneous EEG-fMRI data. First, relative power-spectrum analysis and K-means clustering were performed separately to extract EEG-fMRI features. Then, general linear models (GLMs) were constructed from the fMRI data using different EEG features as regressors. The results showed that for the win versus loss stimuli, the activated regions almost covered the caudate, the ventral striatum (VS), the orbital frontal cortex (OFC), and the cingulate. Wider activation areas associated with reward and punishment were revealed by the EEG-fMRI integration analysis than by the conventional fMRI analysis, including the posterior cingulate and the OFC. The VS and the medial prefrontal cortex (mPFC) were found when EEG power features were used as GLM regressors, compared with results obtained by entering the amplitudes of the feedback-related negativity (FRN) as regressors. Furthermore, brain region activation was strongest when theta-band power was used as a regressor, compared with the other two fusion results. The EEG-informed fMRI analysis can thus depict the whole-brain activation map more accurately and help analyze emotional decision problems.
An effective method for cirrhosis recognition based on multi-feature fusion
NASA Astrophysics Data System (ADS)
Chen, Yameng; Sun, Gengxin; Lei, Yiming; Zhang, Jinpeng
2018-04-01
Liver disease is one of the main causes of human health problems. Cirrhosis is the critical phase in the development of liver lesions, especially hepatoma. Many clinical cases are still influenced to some degree by the subjectivity of physicians, and objective factors such as illumination, scale and edge blurring can affect clinicians' judgment. This subjectivity in turn affects the accuracy of diagnosis and the treatment of patients. To address this difficulty and improve the recognition rate of liver cirrhosis, we propose a multi-feature fusion method to obtain more robust representations of texture in ultrasound liver images; the texture features we extract include the local binary pattern (LBP), the gray level co-occurrence matrix (GLCM) and the histogram of oriented gradients (HOG). In this paper, we first fuse multiple features to recognize cirrhotic versus normal liver based on a parallel combination concept. The experimental results show that the classifier is effective for cirrhosis recognition, as evaluated by satisfactory classification rate, sensitivity and specificity of the receiver operating characteristic (ROC) analysis, and computation time. The proposed method should help improve the accuracy of cirrhosis diagnosis and prevent the progression of liver lesions towards hepatoma.
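A condensed sketch of the parallel feature combination described above, using scikit-image's LBP, GLCM and HOG implementations on a synthetic patch. The function names follow recent scikit-image releases (graycomatrix/graycoprops; older releases spell them greycomatrix/greycoprops), and the patch size and parameter choices are illustrative.

```python
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops, hog

def liver_texture_features(patch):
    """Concatenate an LBP histogram, GLCM statistics and HOG into one fused vector."""
    patch = patch.astype(np.uint8)

    lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = np.hstack([graycoprops(glcm, p).ravel()
                            for p in ("contrast", "homogeneity", "energy", "correlation")])

    hog_feats = hog(patch, orientations=9, pixels_per_cell=(16, 16),
                    cells_per_block=(2, 2), feature_vector=True)

    return np.hstack([lbp_hist, glcm_feats, hog_feats])   # parallel combination

# Illustrative 64x64 ultrasound-like patch.
patch = np.random.rand(64, 64) * 255
print(liver_texture_features(patch).shape)
```

The fused vectors from many patches can then be fed to any standard classifier (e.g. an SVM) to separate cirrhotic from normal liver texture.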
Momeni, Saba; Pourghassem, Hossein
2014-08-01
Recently, image fusion has taken a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most widely applied imaging modalities for diagnosing brain vascular diseases and for radiosurgery of the brain. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of vessel dispersion generated by the injected contrast material. Our proposed fusion scheme contains different fusion methods for the high- and low-frequency contents, based on the coefficient characteristics of the wrapping second-generation curvelet transform and a novel content selection strategy. The content selection strategy is defined based on the sample correlation of the curvelet transform coefficients. In our fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules for the high-frequency coefficients. For the low-frequency coefficients, the maximum selection rule based on a local energy criterion is applied for better visual perception. Our proposed fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of our proposed fusion algorithm in comparison with common and basic fusion algorithms.
Wang, Wen-Tao; Li, Yin; Ma, Jie; Chen, Xiao-Bing; Qin, Jian-Jun
2014-01-01
Epidermal growth factor receptor (EGFR) mutations and echinoderm microtubule associated protein like 4-anaplastic lymphoma kinase (EML4-ALK) define specific molecular subsets of lung adenocarcinomas with distinct clinical features. Our purpose was to analyze clinical features and prognostic value of EGFR gene mutations and the EML4-ALK fusion gene in lung adenocarcinoma. EGFR gene mutations and the EML4-ALK fusion gene were detected in 92 lung adenocarcinoma patients in China. Tumor marker levels before first treatment were measured by electrochemiluminescence immunoassay. EGFR mutations were found in 40.2% (37/92) of lung adenocarcinoma patients, being identified at high frequencies in never-smokers (48.3% vs. 26.5% in smokers; P=0.040) and in patients with abnormal serum carcinoembryonic antigen (CEA) levels before the initial treatment (58.3% vs. 28.6%, P=0.004). Multivariate analysis revealed that a higher serum CEA level before the initial treatment was independently associated with EGFR gene mutations (95%CI: 1.476~11.343, P=0.007). We also identified 8 patients who harbored the EML4-ALK fusion gene (8.7%, 8/92). In concordance with previous reports, younger age was a clinical feature for these (P=0.008). Seven of the positive cases were never smokers, and no coexistence with EGFR mutation was discovered. In addition, the frequency of the EML4-ALK fusion gene among patients with a serum CEA concentration below 5 ng/ml seemed to be higher than patients with a concentration over 5 ng/ml (P=0.021). No significant difference was observed for time to progression and overall survival between EML4-ALK-positive group and EML4-ALK-negative group or between patients with and without an EGFR mutation. The serum CEA level before the initial treatment may be helpful in screening population for EGFR mutations or EML4-ALK fusion gene presence in lung adenocarcinoma patients.
NASA Astrophysics Data System (ADS)
García-Rentería, M. A.; López-Morelos, V. H.; González-Sánchez, J.; García-Hernández, R.; Dzib-Pérez, L.; Curiel-López, F. F.
2017-02-01
The effect of electromagnetic interaction of low intensity (EMILI) applied during fusion welding of AISI 2205 duplex stainless steel on the resistance to localised corrosion in natural seawater was investigated. The heat affected zone (HAZ) of samples welded under EMILI showed a higher temperature for pitting initiation and lower dissolution under anodic polarisation in chloride containing solutions than samples welded without EMILI. The EMILI assisted welding process developed in the present work enhanced the resistance to localised corrosion due to a modification on the microstructural evolution in the HAZ and the fusion zone during the thermal cycle involved in fusion welding. The application of EMILI reduced the size of the HAZ, limited coarsening of the ferrite grains and promoted regeneration of austenite in this zone, inducing a homogeneous passive condition of the surface. EMILI can be applied during fusion welding of structural or functional components of diverse size manufactured with duplex stainless steel designed to withstand aggressive environments such as natural seawater or marine atmospheres.
Information Fusion for Situational Awareness
2003-01-01
Defense Technical Information Center compilation part ADP021704 (from compilation report ADP021634 through ADP021736). Title: Information Fusion for Situational Awareness, Dr. John … The paper concerns applying Situation Assessment (level 2) processing, the knowledge of objects and their …, to address Situational Awareness …
NASA Astrophysics Data System (ADS)
Ma, Chuang; Bao, Zhong-Kui; Zhang, Hai-Feng
2017-10-01
So far, many network-structure-based link prediction methods have been proposed. However, these methods each highlight only one or two structural features of networks and are then used to predict missing links in different networks. The performance of these existing methods is not always satisfactory in all cases, since each network has its own unique underlying structural features. In this paper, by analyzing different real networks, we find that the structural features of different networks are remarkably different; indeed, even within the same network, structural features can differ markedly. Therefore, more structural features should be considered. However, owing to these remarkably different structural features, the contributions of different features are hard to specify in advance. Inspired by these facts, an adaptive fusion model for link prediction is proposed to incorporate multiple structural features. In the model, a logistic function combining multiple structural features is defined, and the weight of each feature in the logistic function is adaptively determined by exploiting the known structure information. Finally, we use the "learnt" logistic function to predict the connection probabilities of missing links. According to our experimental results, the performance of our adaptive fusion model is better than that of many similarity indices.
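A small sketch of the idea of learning per-feature weights from the known structure: several classical similarity indices are computed with NetworkX for sampled node pairs, and a logistic model is fitted on them. The synthetic graph, the sampling scheme and the three-feature set are illustrative, not the paper's exact model.

```python
import random
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(G, u, v):
    cn = len(list(nx.common_neighbors(G, u, v)))          # common neighbours
    jc = next(nx.jaccard_coefficient(G, [(u, v)]))[2]      # Jaccard coefficient
    aa = next(nx.adamic_adar_index(G, [(u, v)]))[2]        # Adamic-Adar index
    return [cn, jc, aa]

random.seed(0)
G = nx.barabasi_albert_graph(300, 3, seed=0)

# Positives: existing edges temporarily removed; negatives: sampled non-edges.
pos = random.sample(list(G.edges()), 200)
G.remove_edges_from(pos)
neg = random.sample(list(nx.non_edges(G)), 200)

X = np.array([pair_features(G, u, v) for u, v in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))

model = LogisticRegression().fit(X, y)      # weights adapt to this network
print("learned feature weights:", model.coef_.round(3))

u, v = neg[0]
print("P(link):", model.predict_proba([pair_features(G, u, v)])[0, 1].round(3))
```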
Han, Guanghui; Liu, Xiabi; Zheng, Guangyuan; Wang, Murong; Huang, Shan
2018-06-06
Ground-glass opacity (GGO) is a common imaging sign on high-resolution CT, which means the lesion is more likely to be malignant compared with common solid lung nodules. The automatic recognition of GGO CT imaging signs is of great importance for early diagnosis and the possible cure of lung cancers. Present GGO recognition methods employ traditional low-level features, and system performance is improving slowly. Considering the high performance of CNN models in the computer vision field, we propose an automatic recognition method for 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuned CNN models. Our hybrid resampling is performed on multiple views and multiple receptive fields, which reduces the risk of missing small or large GGOs by adopting representative sampling panels and processing GGOs at multiple scales simultaneously. The layer-wise fine-tuning strategy is able to obtain the optimal fine-tuned model. The multi-CNN model fusion strategy obtains better performance than any single trained model. We evaluated our method on the GGO nodule samples in the publicly available LIDC-IDRI dataset of chest CT scans. The experimental results show that our method yields excellent results with 96.64% sensitivity, 71.43% specificity, and 0.83 F1 score. Our method is a promising approach for applying deep learning to computer-aided analysis of specific CT imaging signs with insufficient labeled images.
Fusion and quality analysis for remote sensing images using contourlet transform
NASA Astrophysics Data System (ADS)
Choi, Yoonsuk; Sharifahmadian, Ershad; Latifi, Shahram
2013-05-01
Recent developments in remote sensing technologies have provided various images with high spatial and spectral resolutions. However, multispectral images have low spatial resolution and panchromatic images have low spectral resolution. Therefore, image fusion techniques are necessary to improve the spatial resolution of spectral images by injecting spatial details of high-resolution panchromatic images. The objective of image fusion is to provide useful information by improving the spatial resolution and the spectral information of the original images. The fusion results can be utilized in various applications, such as military, medical imaging, and remote sensing. This paper addresses two issues in image fusion: i) image fusion method and ii) quality analysis of fusion results. First, a new contourlet-based image fusion method is presented, which is an improvement over the wavelet-based fusion. This fusion method is then applied to a case study to demonstrate its fusion performance. Fusion framework and scheme used in the study are discussed in detail. Second, quality analysis for the fusion results is discussed. We employed various quality metrics in order to analyze the fusion results both spatially and spectrally. Our results indicate that the proposed contourlet-based fusion method performs better than the conventional wavelet-based fusion methods.
Field-Reversed Configuration Power Plant Critical-Issue Scoping Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santarius, J. F.; Mogahed, E. A.; Emmert, G. A.
A team from the Universities of Wisconsin, Washington, and Illinois performed an engineering scoping study of critical issues for field-reversed configuration (FRC) power plants. The key tasks for this research were (1) systems analysis for deuterium-tritium (D-T) FRC fusion power plants, and (2) conceptual design of the blanket and shield module for an FRC fusion core. For the engineering conceptual design of the fusion core, the project team focused on intermediate-term technology. For example, one decision was to use steel structure. The FRC systems analysis led to a fusion power plant with attractive features including modest size, cylindrical symmetry, good thermal efficiency (52%), relatively easy maintenance, and a high ratio of electric power to fusion core mass, indicating that it would have favorable economics.
Dalin, Martin G; Katabi, Nora; Persson, Marta; Lee, Ken-Wing; Makarov, Vladimir; Desrichard, Alexis; Walsh, Logan A; West, Lyndsay; Nadeem, Zaineb; Ramaswami, Deepa; Havel, Jonathan J; Kuo, Fengshen; Chadalavada, Kalyani; Nanjangud, Gouri J; Ganly, Ian; Riaz, Nadeem; Ho, Alan L; Antonescu, Cristina R; Ghossein, Ronald; Stenman, Göran; Chan, Timothy A; Morris, Luc G T
2017-10-30
Myoepithelial carcinoma (MECA) is an aggressive salivary gland cancer with largely unknown genetic features. Here we comprehensively analyze molecular alterations in 40 MECAs using integrated genomic analyses. We identify a low mutational load, and high prevalence (70%) of oncogenic gene fusions. Most fusions involve the PLAG1 oncogene, which is associated with PLAG1 overexpression. We find FGFR1-PLAG1 in seven (18%) cases, and the novel TGFBR3-PLAG1 fusion in six (15%) cases. TGFBR3-PLAG1 promotes a tumorigenic phenotype in vitro, and is absent in 723 other salivary gland tumors. Other novel PLAG1 fusions include ND4-PLAG1; a fusion between mitochondrial and nuclear DNA. We also identify higher number of copy number alterations as a risk factor for recurrence, independent of tumor stage at diagnosis. Our findings indicate that MECA is a fusion-driven disease, nominate TGFBR3-PLAG1 as a hallmark of MECA, and provide a framework for future diagnostic and therapeutic research in this lethal cancer.
Multispectral image fusion based on fractal features
NASA Astrophysics Data System (ADS)
Tian, Jie; Chen, Jie; Zhang, Chunhua
2004-01-01
Imagery sensors have become an indispensable part of detection and recognition systems. They are widely used in the fields of surveillance, navigation, control and guidance. However, different imagery sensors rely on diverse imaging mechanisms, work within diverse ranges of the spectrum, perform diverse functions and have diverse operating requirements. It is therefore impractical to accomplish the task of detection or recognition with a single imagery sensor under different circumstances, backgrounds and targets. Fortunately, the multi-sensor image fusion technique has emerged as an important route to solve this problem, and image fusion has become one of the main technical routes used to detect and recognize objects from images. However, loss of information is unavoidable during the fusion process, so a central concern of image fusion is how to preserve the useful information to the utmost; that is, before designing a fusion scheme one should consider how to avoid the loss of useful information and how to preserve the features helpful for detection. In consideration of these issues, and of the fact that most detection problems actually amount to distinguishing man-made objects from natural background, a fractal-based multi-spectral fusion algorithm is proposed in this paper, aimed at the recognition of battlefield targets in complicated backgrounds. In this algorithm, the source images are first orthogonally decomposed according to wavelet transform theory, and fractal-based detection is then applied to each decomposed image. At this step, natural background and man-made targets are distinguished by use of fractal models, which imitate natural objects well. Special fusion operators are employed during the fusion of areas that contain man-made targets so that useful information is preserved and target features are emphasized. The final fused image is reconstructed from the composition of the source pyramid images, so this fusion scheme is a multi-resolution analysis. The wavelet decomposition of an image can in fact be considered a special pyramid decomposition. According to wavelet decomposition theory, the approximation of the image f(x, y) at resolution 2^(j+1) equals its orthogonal projection onto the corresponding approximation space (formula available in paper), where Ajf is the low-frequency approximation of the image f(x, y) at resolution 2^j and D1jf, D2jf, D3jf represent the vertical, horizontal and diagonal wavelet coefficients, respectively, at resolution 2^j. These coefficients describe the high-frequency information of the image in the vertical, horizontal and diagonal directions, respectively. Ajf, D1jf, D2jf and D3jf are independent and can be considered as images. In this paper J is set to 1, so the source image is decomposed to produce the son-images Af, D1f, D2f and D3f. To solve the problem of detecting artifacts, the concepts of vertical fractal dimension FD1, horizontal fractal dimension FD2 and diagonal fractal dimension FD3 are proposed in this paper. The vertical fractal dimension FD1 corresponds to the vertical wavelet coefficient image after the wavelet decomposition of the source image, the horizontal fractal dimension FD2 corresponds to the horizontal wavelet coefficients, and the diagonal fractal dimension FD3 to the diagonal ones. These definitions enrich the description of the source images and are therefore helpful in classifying the targets. The detection of artifacts in the decomposed images then becomes a pattern recognition problem in a 4-D feature space.
The combination of FD0, FD1, FD2 and FD3 makes a vector (FD0, FD1, FD2, FD3), which can be considered a unified feature vector of the studied image. All parts of the images are classified in the 4-D pattern space created by this vector so that the areas containing man-made objects can be detected. This detection can be considered a coarse recognition; the significant areas in each son-image are then marked so that they can be treated with special rules. Various fusion rules have been developed, each aiming at a particular problem. These rules have different performance, so it is very important to select an appropriate rule when designing an image fusion system. Recent research indicates that the rule should be adjustable so that it remains suitable for emphasizing target features and preserving pixels of useful information. In this paper, because the fractal dimension is one of the main features distinguishing man-made targets from natural objects, the fusion rule is defined as follows: if the studied region of the image contains a man-made target, the pixels of the source image whose fractal dimension is minimal are kept as the pixels of the fused image; otherwise, a weighted average operator is adopted to avoid loss of information. The main idea of this rule is to keep the pixels with low fractal dimensions, so it can be named the Minimal Fractal Dimension (MFD) fusion rule. This fractal-based algorithm is compared with a common weighted-average fusion algorithm, and an objective assessment of the two fusion results is performed. The criteria of Entropy, Cross-Entropy, Peak Signal-to-Noise Ratio (PSNR) and Standard Gray Scale Difference are defined in this paper. In contrast to the idea of constructing an ideal image as the assessment reference, the source images are selected as the reference in this paper; the assessment thus measures how much the image quality has been enhanced and the quantity of information increased when the fused image is compared with the source images. The experimental results imply that the fractal-based multi-spectral fusion algorithm can effectively preserve the information of man-made objects with high contrast. The algorithm preserves the features of military targets well because battlefield targets are mostly man-made objects and their images generally differ markedly from fractal models. Furthermore, the fractal features are not sensitive to imaging conditions or to target movement, so this fractal-based algorithm may be very practical.
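A minimal numpy sketch of the no-reference style assessment described above (entropy, cross-entropy against a source image, and PSNR), assuming 8-bit grayscale arrays; the exact definitions and normalisations used in the paper may differ.

```python
import numpy as np

def histogram(img, bins=256):
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

def entropy(img):
    p = histogram(img)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def cross_entropy(source, fused, eps=1e-12):
    p, q = histogram(source), histogram(fused)
    return float(-(p * np.log2(q + eps)).sum())

def psnr(source, fused):
    mse = np.mean((source.astype(float) - fused.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

src = (np.random.rand(128, 128) * 255).astype(np.uint8)
fus = np.clip(src + np.random.normal(0, 5, src.shape), 0, 255).astype(np.uint8)
print(f"entropy={entropy(fus):.2f}  cross-entropy={cross_entropy(src, fus):.2f}  "
      f"psnr={psnr(src, fus):.1f} dB")
```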
An image mosaic method based on corner
NASA Astrophysics Data System (ADS)
Jiang, Zetao; Nie, Heting
2015-08-01
In view of the shortcomings of traditional image mosaicking, this paper describes a new algorithm for image mosaicking based on the Harris corner. First, a Harris operator combined with a constructed low-pass smoothing filter based on spline functions and a circular search window is applied to detect image corners, which gives better localisation performance and effectively avoids corner clustering. Second, correlation-based feature matching is used to find registration pairs, and false registrations are removed using random sample consensus (RANSAC). Finally, a weighted trigonometric method combined with an interpolation function is used for image fusion. The experiments show that this method can effectively remove splicing ghosting and improve the accuracy of image mosaicking.
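A compressed OpenCV sketch of a corner-based mosaicking pipeline of this kind: Harris corners, patch-correlation matching, RANSAC homography, and weighted blending. The plain cornerHarris detector, normalised cross-correlation matching and half-and-half blend are simplifications standing in for the paper's spline-filtered detector and trigonometric weighting.

```python
import cv2
import numpy as np

def harris_corners(gray, max_pts=300):
    resp = cv2.cornerHarris(np.float32(gray), blockSize=3, ksize=3, k=0.04)
    ys, xs = np.where(resp > 0.01 * resp.max())
    order = np.argsort(resp[ys, xs])[::-1][:max_pts]
    return np.stack([xs[order], ys[order]], axis=1)          # (x, y) points

def match_by_ncc(g1, g2, pts1, pts2, win=7, thresh=0.9):
    pairs = []
    for x1, y1 in pts1:
        p1 = g1[y1 - win:y1 + win + 1, x1 - win:x1 + win + 1]
        if p1.shape != (2 * win + 1, 2 * win + 1):
            continue
        best, best_pt = -1.0, None
        for x2, y2 in pts2:
            p2 = g2[y2 - win:y2 + win + 1, x2 - win:x2 + win + 1]
            if p2.shape != p1.shape:
                continue
            ncc = cv2.matchTemplate(np.float32(p2), np.float32(p1),
                                    cv2.TM_CCOEFF_NORMED)[0, 0]
            if ncc > best:
                best, best_pt = ncc, (x2, y2)
        if best > thresh:
            pairs.append(((x1, y1), best_pt))
    return pairs

def mosaic(img1, img2):
    g1, g2 = (cv2.cvtColor(i, cv2.COLOR_BGR2GRAY) for i in (img1, img2))
    pairs = match_by_ncc(g1, g2, harris_corners(g1), harris_corners(g2))
    src = np.float32([p2 for _, p2 in pairs])
    dst = np.float32([p1 for p1, _ in pairs])
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)      # drop false matches
    h, w = img1.shape[:2]
    warped = cv2.warpPerspective(img2, H, (w * 2, h))
    canvas = np.zeros_like(warped); canvas[:h, :w] = img1
    overlap = (canvas.sum(2) > 0) & (warped.sum(2) > 0)
    out = np.where(canvas > 0, canvas, warped)
    out[overlap] = (0.5 * canvas[overlap] + 0.5 * warped[overlap]).astype(img1.dtype)
    return out

# Usage (with two overlapping colour images on disk):
# cv2.imwrite("mosaic.png", mosaic(cv2.imread("left.jpg"), cv2.imread("right.jpg")))
```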
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prasad, M.K.; Kershaw, D.S.; Shaw, M.J.
The authors present detailed features of the ICF3D hydrodynamics code used for inertial fusion simulations. This code is intended to be a state-of-the-art upgrade of the well-known fluid code, LASNEX. ICF3D employs discontinuous finite elements on a discrete unstructured mesh consisting of a variety of 3D polyhedra including tetrahedra, prisms, and hexahedra. The authors discuss details of how the Roe-averaged second-order convection is applied on the discrete elements, and how the C++ coding interface has helped to simplify implementing the many physics and numerics modules within the code package. The authors emphasize the virtues of object-oriented design in large-scale projects such as ICF3D.
NASA Astrophysics Data System (ADS)
Perkins, L. J.; Ho, D. D.-M.; Logan, B. G.; Zimmerman, G. B.; Rhodes, M. A.; Strozzi, D. J.; Blackfield, D. T.; Hawkins, S. A.
2017-06-01
We examine the potential that imposed magnetic fields of tens of Tesla that increase to greater than 10 kT (100 MGauss) under implosion compression may relax the conditions required for ignition and propagating burn in indirect-drive inertial confinement fusion (ICF) targets. This may allow the attainment of ignition, or at least significant fusion energy yields, in presently performing ICF targets on the National Ignition Facility (NIF) that today are sub-marginal for thermonuclear burn through adverse hydrodynamic conditions at stagnation [Doeppner et al., Phys. Rev. Lett. 115, 055001 (2015)]. Results of detailed two-dimensional radiation-hydrodynamic-burn simulations applied to NIF capsule implosions with low-mode shape perturbations and residual kinetic energy loss indicate that such compressed fields may increase the probability for ignition through range reduction of fusion alpha particles, suppression of electron heat conduction, and potential stabilization of higher-mode Rayleigh-Taylor instabilities. Optimum initial applied fields are found to be around 50 T. Given that the full plasma structure at capsule stagnation may be governed by three-dimensional resistive magneto-hydrodynamics, the formation of closed magnetic field lines might further augment ignition prospects. Experiments are now required to further assess the potential of applied magnetic fields to ICF ignition and burn on NIF.
NASA Astrophysics Data System (ADS)
2017-05-01
Entrepreneur Richard Dinan - a former star of the UK reality-TV programme Made in Chelsea - founded the firm Applied Fusion Systems in 2014. The company has now released its first blueprint for a spherical fusion tokamak.
Hierarchical feature representation and multimodal fusion with deep learning for AD/MCI diagnosis.
Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang
2014-11-01
For the last decade, it has been shown that neuroimaging can be a potential tool for the diagnosis of Alzheimer's Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), and also fusion of different modalities can further provide the complementary information to enhance diagnostic accuracy. Here, we focus on the problems of both feature representation and fusion of multimodal information from Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). To our best knowledge, the previous methods in the literature mostly used hand-crafted features such as cortical thickness, gray matter densities from MRI, or voxel intensities from PET, and then combined these multimodal features by simply concatenating into a long vector or transforming into a higher-dimensional kernel space. In this paper, we propose a novel method for a high-level latent and shared feature representation from neuroimaging modalities via deep learning. Specifically, we use Deep Boltzmann Machine (DBM)(2), a deep network with a restricted Boltzmann machine as a building block, to find a latent hierarchical feature representation from a 3D patch, and then devise a systematic method for a joint feature representation from the paired patches of MRI and PET with a multimodal DBM. To validate the effectiveness of the proposed method, we performed experiments on ADNI dataset and compared with the state-of-the-art methods. In three binary classification problems of AD vs. healthy Normal Control (NC), MCI vs. NC, and MCI converter vs. MCI non-converter, we obtained the maximal accuracies of 95.35%, 85.67%, and 74.58%, respectively, outperforming the competing methods. By visual inspection of the trained model, we observed that the proposed method could hierarchically discover the complex latent patterns inherent in both MRI and PET. Copyright © 2014 Elsevier Inc. All rights reserved.
A ℓ2, 1 norm regularized multi-kernel learning for false positive reduction in Lung nodule CAD.
Cao, Peng; Liu, Xiaoli; Zhang, Jian; Li, Wei; Zhao, Dazhe; Huang, Min; Zaiane, Osmar
2017-03-01
The aim of this paper is to describe a novel algorithm for false positive reduction in lung nodule computer-aided detection (CAD). We describe a new CT lung CAD method that aims to detect solid nodules. Specifically, we propose a multi-kernel classifier with an ℓ 2, 1 norm regularizer for heterogeneous feature fusion and selection at the feature subset level, and design two efficient strategies to optimize the kernel weight parameters in the non-smooth ℓ 2, 1 regularized multiple kernel learning algorithm. The first optimization algorithm adapts a proximal gradient method for solving the ℓ 2, 1 norm of the kernel weights and uses an accelerated method based on FISTA; the second employs an iterative scheme based on an approximate gradient descent method. The results demonstrate that the FISTA-style accelerated proximal descent method is efficient for the ℓ 2, 1 norm formulation of multiple kernel learning, with a theoretical guarantee on the convergence rate. Moreover, the experimental results demonstrate the effectiveness of the proposed methods in terms of geometric mean (G-mean) and area under the ROC curve (AUC), significantly outperforming the competing methods. The proposed approach exhibits remarkable advantages both in the fusion of heterogeneous feature subsets and in the classification phase. Compared with feature-level and decision-level fusion strategies, the proposed ℓ 2, 1 norm multi-kernel learning algorithm is able to accurately fuse the complementary and heterogeneous feature sets and automatically prune the irrelevant and redundant feature subsets to form a more discriminative feature set, leading to promising classification performance. Moreover, the proposed algorithm consistently outperforms comparable classification approaches in the literature. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
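The key building block of proximal solvers for this kind of regularizer is the proximal operator of the ℓ2,1 norm, which soft-thresholds each kernel's weight group as a whole; a small numpy sketch follows. The group layout (one row per kernel), step size and penalty value are illustrative, and the FISTA bookkeeping around the operator is omitted.

```python
import numpy as np

def prox_l21(W, lam):
    """Proximal operator of lam * sum_g ||W[g]||_2 (row-wise group soft-thresholding).

    Each row of W is treated as one group (e.g. the weights attached to one
    kernel / feature subset); rows with small norm are zeroed, pruning that subset.
    """
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return scale * W

# Example: 5 kernels x 3 coefficients; one gradient step followed by the prox.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 3))
grad = rng.normal(size=(5, 3))          # gradient of the smooth loss term
step, lam = 0.1, 0.5
W_new = prox_l21(W - step * grad, step * lam)
print("groups kept:", np.flatnonzero(np.linalg.norm(W_new, axis=1) > 0))
```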
Molecular and cellular aspects of rhabdovirus entry.
Albertini, Aurélie A V; Baquero, Eduard; Ferlin, Anna; Gaudin, Yves
2012-01-01
Rhabdoviruses enter the cell via the endocytic pathway and subsequently fuse with a cellular membrane within the acidic environment of the endosome. Both receptor recognition and membrane fusion are mediated by a single transmembrane viral glycoprotein (G). Fusion is triggered via a low-pH induced structural rearrangement. G is an atypical fusion protein as there is a pH-dependent equilibrium between its pre- and post-fusion conformations. The elucidation of the atomic structures of these two conformations for the vesicular stomatitis virus (VSV) G has revealed that it is different from the previously characterized class I and class II fusion proteins. In this review, the pre- and post-fusion VSV G structures are presented in detail demonstrating that G combines the features of the class I and class II fusion proteins. In addition to these similarities, these G structures also reveal some particularities that expand our understanding of the working of fusion machineries. Combined with data from recent studies that revealed the cellular aspects of the initial stages of rhabdovirus infection, all these data give an integrated view of the entry pathway of rhabdoviruses into their host cell.
Feature Selection and Pedestrian Detection Based on Sparse Representation.
Yao, Shihong; Wang, Tao; Shen, Weiming; Pan, Shaoming; Chong, Yanwen; Ding, Fei
2015-01-01
Pedestrian detection research has currently been devoted to the extraction of effective pedestrian features, which has become one of the obstacles to pedestrian detection applications because of the variety of pedestrian features and their large dimensionality. Based on a theoretical analysis of six frequently used features, SIFT, SURF, Haar, HOG, LBP and LSS, and their comparison with experimental results, this paper screens out sparse feature subsets via sparse representation to investigate whether the sparse subsets have the same description abilities and which are the most stable features. When any two of the six features are fused, the fusion feature is sparsely represented to obtain its important components. Sparse subsets of the fusion features can be generated rapidly by avoiding calculation of the corresponding index of dimension numbers of these feature descriptors; thus, the calculation speed of the feature dimension reduction is improved and the pedestrian detection time is reduced. Experimental results show that sparse feature subsets are capable of keeping the important components of these six feature descriptors. The sparse features of HOG and LSS possess the same description ability and consume less time compared with their full features. The ratios of the sparse feature subsets of HOG and LSS to their full sets are the highest among the six, so these two features can be used to best describe the characteristics of the pedestrian, and the sparse feature subsets of the combined HOG-LSS feature show better distinguishing ability and parsimony.
Ensemble Classifier Strategy Based on Transient Feature Fusion in Electronic Nose
NASA Astrophysics Data System (ADS)
Bagheri, Mohammad Ali; Montazer, Gholam Ali
2011-09-01
In this paper, we test the performance of several ensembles of classifiers and each base learner has been trained on different types of extracted features. Experimental results show the potential benefits introduced by the usage of simple ensemble classification systems for the integration of different types of transient features.
A Two-Stream Deep Fusion Framework for High-Resolution Aerial Scene Classification.
Yu, Yunlong; Liu, Fuxian
2018-01-01
One of the challenging problems in understanding high-resolution remote sensing images is aerial scene classification. A well-designed feature representation method and classifier can improve classification accuracy. In this paper, we construct a new two-stream deep architecture for aerial scene classification. First, we use two pretrained convolutional neural networks (CNNs) as feature extractor to learn deep features from the original aerial image and the processed aerial image through saliency detection, respectively. Second, two feature fusion strategies are adopted to fuse the two different types of deep convolutional features extracted by the original RGB stream and the saliency stream. Finally, we use the extreme learning machine (ELM) classifier for final classification with the fused features. The effectiveness of the proposed architecture is tested on four challenging datasets: UC-Merced dataset with 21 scene categories, WHU-RS dataset with 19 scene categories, AID dataset with 30 scene categories, and NWPU-RESISC45 dataset with 45 challenging scene categories. The experimental results demonstrate that our architecture gets a significant classification accuracy improvement over all state-of-the-art references.
Deep features for efficient multi-biometric recognition with face and ear images
NASA Astrophysics Data System (ADS)
Omara, Ibrahim; Xiao, Gang; Amrani, Moussa; Yan, Zifei; Zuo, Wangmeng
2017-07-01
Recently, multimodal biometric systems have received considerable research interest in many applications, especially in the field of security. Multimodal systems can increase resistance to spoof attacks, provide more detail and flexibility, and lead to better performance and lower error rates. In this paper, we present a multimodal biometric system based on face and ear, and propose how to exploit deep features extracted by Convolutional Neural Networks (CNNs) from face and ear images to obtain more powerful discriminative features and a more robust representation. First, the deep features for face and ear images are extracted based on VGG-M Net. Second, the extracted deep features are fused using traditional concatenation and a Discriminant Correlation Analysis (DCA) algorithm. Third, a multiclass support vector machine is adopted for matching and classification. The experimental results show that the proposed multimodal system based on deep features is efficient and achieves a promising recognition rate of up to 100% using face and ear. In addition, the results indicate that fusion based on DCA is superior to traditional fusion.
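A rough sketch of the concatenation-fusion baseline mentioned above, assuming hypothetical deep-feature matrices face_feats and ear_feats (e.g., taken from a pretrained CNN); the paper's preferred DCA fusion is not reproduced here, only the simple serial fusion followed by a multiclass linear SVM.

```python
# Concatenation (serial) fusion of face and ear deep features + multiclass linear SVM.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
face_feats = rng.normal(size=(300, 512))     # placeholder deep features from face images
ear_feats = rng.normal(size=(300, 512))      # placeholder deep features from ear images
subjects = np.repeat(np.arange(30), 10)      # 30 identities, 10 samples each (illustrative)

fused = np.hstack([face_feats, ear_feats])   # simple concatenation fusion

clf = LinearSVC(C=1.0, max_iter=5000)        # multiclass linear SVM (one-vs-rest)
scores = cross_val_score(clf, fused, subjects, cv=5)
print("cross-validated recognition rate:", scores.mean())
```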
Multispectral Image Enhancement Through Adaptive Wavelet Fusion
2016-09-14
This research developed a multiresolution image fusion scheme based on guided filtering. Guided filtering can ... effectively reduce noise while preserving detail boundaries. When applied in an iterative mode, guided filtering selectively eliminates small scale ... details while restoring larger scale edges. The proposed multi-scale image fusion scheme achieves spatial consistency by using guided filtering both at
Fast multi-scale feature fusion for ECG heartbeat classification
NASA Astrophysics Data System (ADS)
Ai, Danni; Yang, Jian; Wang, Zeyu; Fan, Jingfan; Ai, Changbin; Wang, Yongtian
2015-12-01
Electrocardiogram (ECG) monitoring records the electrical activity of the heart as signals of small amplitude and short duration; as a result, hidden information present in ECG data is difficult to determine, yet this concealed information can be used to detect abnormalities. In our study, a fast feature-fusion method for ECG heartbeat classification based on multilinear subspace learning is proposed. The method consists of four stages. First, baseline drift and high frequencies are removed to segment heartbeats. Second, wavelet-packet decomposition, an extension of the wavelet transform, is conducted to extract features; it provides good time and frequency resolution simultaneously. Third, the decomposed coefficients are arranged as a two-way tensor, in which feature fusion is directly implemented with generalized N-dimensional ICA (GND-ICA). This approach considers the co-relationship among different data information, avoids the disadvantages of high dimensionality, and reduces computation compared with linear subspace-learning methods such as PCA. Finally, a support vector machine (SVM) is used as the classifier for heartbeat classification. In this study, ECG records are obtained from the MIT-BIH arrhythmia database, and four main heartbeat classes are used to examine the proposed algorithm. Based on five measurements, sensitivity, positive predictivity, accuracy, average accuracy, and the t-test, we conclude that the GND-ICA-based strategy provides enhanced ECG heartbeat classification. Furthermore, largely redundant features are eliminated and classification time is reduced.
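A minimal sketch of the wavelet-packet feature-extraction step described above, using PyWavelets on a hypothetical single heartbeat segment; the GND-ICA fusion and SVM stages of the paper are not reproduced here.

```python
# Wavelet-packet decomposition of one ECG heartbeat and arrangement of the
# sub-band coefficients as a 2-D array (the "two-way tensor" fed to fusion).
import numpy as np
import pywt

beat = np.random.randn(256)                       # placeholder ECG heartbeat segment

wp = pywt.WaveletPacket(data=beat, wavelet="db4", mode="symmetric", maxlevel=4)
nodes = wp.get_level(4, order="natural")          # terminal nodes of the decomposition tree

# Stack the sub-band coefficients row by row: (n_subbands, n_coeffs_per_subband).
feature_tensor = np.vstack([node.data for node in nodes])
print(feature_tensor.shape)
```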
Binaural fusion and the representation of virtual pitch in the human auditory cortex.
Pantev, C; Elbert, T; Ross, B; Eulitz, C; Terhardt, E
1996-10-01
The auditory system derives the pitch of complex tones from the tone's harmonics. Research in psychoacoustics predicted that binaural fusion was an important feature of pitch processing. Based on neuromagnetic human data, the first neurophysiological confirmation of binaural fusion in hearing is presented. The centre of activation within the cortical tonotopic map corresponds to the location of the perceived pitch and not to the locations that are activated when the single frequency constituents are presented. This is also true when the different harmonics of a complex tone are presented dichotically. We conclude that the pitch processor includes binaural fusion to determine the particular pitch location which is activated in the auditory cortex.
Comparing fusion techniques for the ImageCLEF 2013 medical case retrieval task.
G Seco de Herrera, Alba; Schaer, Roger; Markonis, Dimitrios; Müller, Henning
2015-01-01
Retrieval systems can supply similar cases with a proven diagnosis to a new example case under observation to help clinicians during their work. The ImageCLEFmed evaluation campaign proposes a framework where research groups can compare case-based retrieval approaches. This paper focuses on the case-based task and adds results of the compound figure separation and modality classification tasks. Several fusion approaches are compared to identify the approaches best adapted to the heterogeneous data of the task. Fusion of visual and textual features is analyzed, demonstrating that the selection of the fusion strategy can improve the best performance on the case-based retrieval task. Copyright © 2014 Elsevier Ltd. All rights reserved.
Cytogenetic analyses of four solid tumours in dogs.
Mayr, B; Reifinger, M; Weissenböck, H; Schleger, W; Eisenmenger, E
1994-07-01
Four solid tumours (one haemangiopericytoma, one haemangioendothelioma, one spindle-cell sarcoma and one mammary carcinoma) in dogs were analysed cytogenetically. In the haemangiopericytoma, an additional small chromosomal segment was present. Very complex changes including centric fusions and symmetric meta-centrics 1, 6, 10 and 12 were conspicuous in the highly unbalanced karyotype of the haemangioendothelioma. Complex changes, particularly many centric fusions and a tandem translocation 4/14, were features of the spindle-cell sarcoma. One centric fusion and a symmetric metacentric 13 were present in the mammary carcinoma.
Ellmauthaler, Andreas; Pagliari, Carla L; da Silva, Eduardo A B
2013-03-01
Multiscale transforms are among the most popular techniques in the field of pixel-level image fusion. However, the fusion performance of these methods often deteriorates for images derived from different sensor modalities. In this paper, we demonstrate that for such images, results can be improved using a novel undecimated wavelet transform (UWT)-based fusion scheme, which splits the image decomposition process into two successive filtering operations using spectral factorization of the analysis filters. The actual fusion takes place after convolution with the first filter pair. Its significantly smaller support size minimizes the unwanted spreading of coefficient values around overlapping image singularities, which usually complicates the feature selection process and may introduce reconstruction errors into the fused image. Moreover, we show that the nonsubsampled nature of the UWT allows the design of nonorthogonal filter banks, which are more robust to artifacts introduced during fusion, further improving the obtained results. The combination of these techniques leads to a fusion framework that provides clear advantages over traditional multiscale fusion approaches, independent of the underlying fusion rule, and reduces unwanted side effects such as ringing artifacts in the fused reconstruction.
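The following is a generic undecimated (stationary) wavelet fusion sketch using PyWavelets, with a simple max-absolute coefficient selection rule for the detail sub-bands; it illustrates the general UWT fusion idea only, not the spectral-factorization scheme of the paper, and the two input images are hypothetical placeholders.

```python
# Generic UWT (stationary wavelet) fusion of two registered source images.
import numpy as np
import pywt

rng = np.random.default_rng(0)
img_a = rng.random((128, 128))            # placeholder registered source image A
img_b = rng.random((128, 128))            # placeholder registered source image B (other modality)

level = 2                                  # image sides must be divisible by 2**level
coeffs_a = pywt.swt2(img_a, "db2", level=level)
coeffs_b = pywt.swt2(img_b, "db2", level=level)

fused_coeffs = []
for (ca_a, details_a), (ca_b, details_b) in zip(coeffs_a, coeffs_b):
    ca = 0.5 * (ca_a + ca_b)                                   # average the approximations
    details = tuple(np.where(np.abs(da) >= np.abs(db), da, db) # max-abs rule for details
                    for da, db in zip(details_a, details_b))
    fused_coeffs.append((ca, details))

fused = pywt.iswt2(fused_coeffs, "db2")
print(fused.shape)
```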
Discrimination of Oil Slicks and Lookalikes in Polarimetric SAR Images Using CNN.
Guo, Hao; Wu, Danni; An, Jubai
2017-08-09
Oil slicks and lookalikes (e.g., plant oil and oil emulsion) all appear as dark areas in polarimetric Synthetic Aperture Radar (SAR) images and are highly heterogeneous, so it is very difficult to use a single feature to classify dark objects in polarimetric SAR images as oil slicks or lookalikes. We established multi-feature fusion to support the discrimination of oil slicks and lookalikes. In this paper, simple discrimination analysis is used to rationalize a preferred feature subset. The features analyzed include entropy, alpha, and the Single-bounce Eigenvalue Relative Difference (SERD) in the C-band polarimetric mode. We also propose a novel SAR image discrimination method for oil slicks and lookalikes based on a Convolutional Neural Network (CNN). Regions of interest are selected as the training and testing samples for the CNN on the three kinds of polarimetric feature images. The proposed method is applied to a training data set of 5400 samples, including 1800 crude oil, 1800 plant oil, and 1800 oil emulsion samples. Finally, the effectiveness of the method is demonstrated through the analysis of experimental results. The classification accuracy obtained using 900 test samples is 91.33%. We observe that the proposed method not only accurately identifies the dark spots on SAR images but also verifies the ability of the proposed algorithm to classify unstructured features. PMID:28792477
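A small CNN sketch in PyTorch for classifying dark-spot patches built from the three polarimetric feature images (entropy, alpha, SERD) described above; the architecture, patch size and class count are illustrative assumptions, not the paper's exact network.

```python
# Toy 2-conv-layer CNN over 3-channel polarimetric feature patches.
import torch
import torch.nn as nn

class DarkSpotCNN(nn.Module):
    def __init__(self, n_classes=3):        # crude oil, plant oil, oil emulsion
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):                    # x: (batch, 3, 32, 32) feature-image patches
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = DarkSpotCNN()
dummy = torch.randn(4, 3, 32, 32)            # placeholder patches stacked as 3 channels
print(model(dummy).shape)                    # torch.Size([4, 3])
```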
NASA Technical Reports Server (NTRS)
Rahman, Zia-ur
2005-01-01
The purpose of this research was to develop enhancement and multi-sensor fusion algorithms and techniques to make it safer for the pilot to fly in what would normally be considered Instrument Flight Rules (IFR) conditions, where pilot visibility is severely restricted due to fog, haze or other weather phenomena. We proposed to use the non-linear Multiscale Retinex (MSR) as the basic driver for developing an integrated enhancement and fusion engine. When we started this research, the MSR was being applied primarily to grayscale imagery such as medical images, or to three-band color imagery such as that produced in consumer photography; it was not, however, being applied to other imagery such as that produced by infrared image sources. However, we felt that, by using the MSR algorithm in conjunction with multiple imaging modalities such as long-wave infrared (LWIR), short-wave infrared (SWIR), and the visible spectrum (VIS), we could substantially improve on the then state-of-the-art enhancement algorithms, especially in poor visibility conditions. We proposed the following tasks: 1) Investigate the effects of applying the MSR to LWIR and SWIR images, which consisted of optimizing the algorithm in terms of surround scales and weights for these spectral bands; 2) Fuse the LWIR and SWIR images with the VIS images using the MSR framework to determine the best possible representation of the desired features; 3) Evaluate different mixes of LWIR, SWIR and VIS bands for maximum fog and haze reduction, and low light level compensation; 4) Modify the existing algorithms to work with video sequences. Over the course of the 3-year research period, we were able to accomplish these tasks and report on them at various internal presentations at NASA Langley Research Center, and in presentations and publications elsewhere. A description of the work performed under the tasks is provided in Section 2. The complete list of relevant publications during the research period is provided in Section 5. This research also resulted in the generation of intellectual property.
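A minimal multiscale retinex (MSR) sketch for a single-band image, assuming a hypothetical grayscale array img; the surround scales and weights are illustrative defaults, not the tuned per-band values developed in the research described above.

```python
# Multiscale Retinex: weighted sum of log(image) - log(Gaussian surround) over scales.
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(img, sigmas=(15, 80, 250), weights=None):
    """MSR(x, y) = sum_i w_i * [log I(x, y) - log (G_sigma_i * I)(x, y)]."""
    img = img.astype(np.float64) + 1.0            # avoid log(0)
    weights = weights or [1.0 / len(sigmas)] * len(sigmas)
    out = np.zeros_like(img)
    for w, sigma in zip(weights, sigmas):
        surround = gaussian_filter(img, sigma)    # center/surround via Gaussian blur
        out += w * (np.log(img) - np.log(surround))
    return out

img = np.random.rand(64, 64) * 255.0              # placeholder image
enhanced = multiscale_retinex(img)
print(enhanced.min(), enhanced.max())
```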
Efficient sensor network vehicle classification using peak harmonics of acoustic emissions
NASA Astrophysics Data System (ADS)
William, Peter E.; Hoffman, Michael W.
2008-04-01
An application is proposed for the detection and classification of battlefield ground vehicles using the emitted acoustic signal captured at individual sensor nodes of an ad hoc Wireless Sensor Network (WSN). We make use of the harmonic characteristics of the acoustic emissions of battlefield vehicles to reduce both the computations carried out on the sensor node and the data transmitted to the fusion center, for reliable and efficient classification of targets. Previous approaches focus on the lower frequency band of the acoustic emissions, up to 500 Hz; however, we show in the proposed application how efficient discrimination between battlefield vehicles is performed using features extracted from higher frequency bands (50-1500 Hz). The application shows that selective time-domain acoustic features surpass equivalent spectral features. Collaborative signal processing is utilized, such that estimation of certain signal model parameters is carried out by the sensor node, in order to reduce the communication between the sensor node and the fusion center, while the remaining model parameters are estimated at the fusion center. The data transmitted from the sensor node to the fusion center amount to 1-5% of the acoustic signal sampled at the node. A variety of classification schemes were examined, including maximum likelihood, vector quantization and artificial neural networks. Evaluation of the proposed application, through processing of an acoustic data set and comparison with previous results, shows improvement not only in the number of computations but also in the detection and false alarm rates.
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1992-01-01
Various papers on control paradigms and data structures in sensor fusion are presented. The general topics addressed include: decision models and computational methods, sensor modeling and data representation, active sensing strategies, geometric planning and visualization, task-driven sensing, motion analysis, models motivated by biology and psychology, decentralized detection and distributed decision, data fusion architectures, robust estimation of shapes and features, and application and implementation. Some of the individual subjects considered are: the Firefly experiment on neural networks for distributed sensor data fusion, manifold traversing as a model for learning control of autonomous robots, the choice of coordinate systems for multiple sensor fusion, continuous motion using task-directed stereo vision, interactive and cooperative sensing and control for advanced teleoperation, knowledge-based imaging for terrain analysis, and physical and digital simulations for IVA robotics.
Probabilistic combination of static and dynamic gait features for verification
NASA Astrophysics Data System (ADS)
Bazin, Alex I.; Nixon, Mark S.
2005-03-01
This paper describes a novel probabilistic framework for biometric identification and data fusion. Based on intra- and inter-class variation extracted from training data, posterior probabilities describing the similarity between two feature vectors may be calculated directly from the data using the logistic function and Bayes' rule. Using a large publicly available database, we show that two imbalanced gait modalities may be fused within this framework. All fusion methods tested provide an improvement over the best single modality, with the weighted sum rule giving the best performance, showing that highly imbalanced classifiers may be fused in a probabilistic setting, improving not only the performance but also the generalized application capability.
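A rough sketch of the fusion idea above: map each modality's match score to a posterior with a logistic function and combine the posteriors with a weighted sum rule. The scores, logistic parameters and weights below are illustrative assumptions, not values fitted from the paper's training data.

```python
# Logistic posterior mapping per modality followed by weighted-sum score fusion.
import numpy as np

def logistic_posterior(score, a, b):
    """Posterior P(same identity | score) via a fitted logistic function."""
    return 1.0 / (1.0 + np.exp(-(a * score + b)))

# Placeholder match scores from two gait modalities (static and dynamic features).
static_score, dynamic_score = 0.62, 0.35
p_static = logistic_posterior(static_score, a=8.0, b=-4.0)
p_dynamic = logistic_posterior(dynamic_score, a=5.0, b=-2.0)

# Weighted-sum rule: weights could reflect each modality's verification accuracy.
w_static, w_dynamic = 0.7, 0.3
fused_posterior = w_static * p_static + w_dynamic * p_dynamic
print("fused posterior:", fused_posterior, "accept:", fused_posterior > 0.5)
```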
Facial recognition using multisensor images based on localized kernel eigen spaces.
Gundimada, Satyanadh; Asari, Vijayan K
2009-06-01
A feature selection technique along with an information fusion procedure for improving the recognition accuracy of a visual and thermal image-based facial recognition system is presented in this paper. A novel modular kernel eigenspaces approach is developed and implemented on the phase congruency feature maps extracted from the visual and thermal images individually. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome the bottlenecks of illumination variations, partial occlusions, expression variations and variations due to temperature changes that affect the visual and thermal face recognition techniques. AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure has greatly improved the recognition accuracy for both the visual and thermal images when compared to conventional techniques. Also, a decision level fusion methodology is presented which along with the feature selection procedure has outperformed various other face recognition techniques in terms of recognition accuracy.
Registration and Fusion of Multiple Source Remotely Sensed Image Data
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline
2004-01-01
Earth and Space Science often involve the comparison, fusion, and integration of multiple types of remotely sensed data at various temporal, radiometric, and spatial resolutions. Results of this integration may be utilized for global change analysis, global coverage of an area at multiple resolutions, map updating or validation of new instruments, as well as integration of data provided by multiple instruments carried on multiple platforms, e.g. in spacecraft constellations or fleets of planetary rovers. Our focus is on developing methods to perform fast, accurate and automatic image registration and fusion. General methods for automatic image registration are being reviewed and evaluated. Various choices for feature extraction, feature matching and similarity measurements are being compared, including wavelet-based algorithms, mutual information and statistically robust techniques. Our work also involves studies related to image fusion and investigates dimension reduction and co-kriging for application-dependent fusion. All methods are being tested using several multi-sensor datasets, acquired at EOS Core Sites, and including multiple sensors such as IKONOS, Landsat-7/ETM+, EO1/ALI and Hyperion, MODIS, and SeaWIFS instruments. Issues related to the coregistration of data from the same platform (i.e., AIRS and MODIS from Aqua) or from several platforms of the A-train (i.e., MLS, HIRDLS, OMI from Aura with AIRS and MODIS from Terra and Aqua) will also be considered.
Visual saliency in MPEG-4 AVC video stream
NASA Astrophysics Data System (ADS)
Ammar, M.; Mitrea, M.; Hasnaoui, M.; Le Callet, P.
2015-03-01
Visual saliency maps have already proved their efficiency in a large variety of image/video communication applications, ranging from selective compression and channel coding to watermarking. Such saliency maps are generally based on different visual characteristics (such as color, intensity, orientation, and motion) computed from the pixel representation of the visual content. This paper summarizes and extends our previous work devoted to the definition of a saliency map extracted solely from MPEG-4 AVC stream syntax elements. The MPEG-4 AVC saliency map thus defined is a fusion of a static and a dynamic map. The static saliency map is in turn a combination of intensity, color and orientation feature maps. Regardless of the particular way in which these elementary maps are computed, the fusion techniques allowing their combination play a critical role in the final result and are the object of the present study. A total of 48 fusion formulas (6 for combining the static features and, for each of them, 8 for combining static with dynamic features) are investigated. The performances of the obtained maps are evaluated on a public database organized at IRCCyN, by computing two objective metrics: the Kullback-Leibler divergence and the area under the curve.
A case of PSF-TFE3 gene fusion in Xp11.2 renal cell carcinoma with melanotic features.
Zhan, He-Qin; Chen, Hong; Wang, Chao-Fu; Zhu, Xiong-Zeng
2015-03-01
Xp11.2 translocation renal cell carcinoma (Xp11.2 RCC) with PSF-TFE3 gene fusion is a rare neoplasm; only 22 cases of Xp11.2 RCC with PSF-TFE3 have been reported to date. We describe an additional case of Xp11.2 RCC with PSF-TFE3 showing melanotic features. Microscopically, the histologic features mimic clear cell renal cell carcinoma; however, dark-brown pigments were identified and shown to be melanin. Immunohistochemically, the tumor cells were widely positive for CD10, human melanoma black 45, and TFE3 but negative for cytokeratins, vimentin, Melan-A, microphthalmia-associated transcription factor, smooth muscle actin, and S-100 protein. Genetically, we demonstrated a PSF-TFE3 fusion between exon 9 of PSF and exon 5 of TFE3. The patient was free of disease at 50 months of follow-up. Assessment of the prognosis of this tumor type will require more cases and longer follow-up. Xp11.2 RCC with PSF-TFE3 must be differentiated from other kidney neoplasms, and immunohistochemical and molecular genetic analyses are essential for accurate diagnosis. Copyright © 2015 Elsevier Inc. All rights reserved.
Hybrid Histogram Descriptor: A Fusion Feature Representation for Image Retrieval.
Feng, Qinghe; Hao, Qiaohong; Chen, Yuqi; Yi, Yugen; Wei, Ying; Dai, Jiangyan
2018-06-15
Currently, visual sensors are becoming increasingly affordable and widespread, accelerating the growth of image data. Image retrieval has attracted increasing interest due to space exploration, industrial, and biomedical applications. Nevertheless, designing an effective feature representation is acknowledged as a hard yet fundamental issue. This paper presents a fusion feature representation called the hybrid histogram descriptor (HHD) for image retrieval. The proposed descriptor jointly comprises two histograms: a perceptually uniform histogram, extracted by exploiting color and edge-orientation information in perceptually uniform regions, and a motif co-occurrence histogram, acquired by calculating the probability of a pair of motif patterns. To evaluate its performance, we benchmarked the proposed descriptor on the RSSCN7, AID, Outex-00013, Outex-00014 and ETHZ-53 datasets. Experimental results suggest that the proposed descriptor is more effective and robust than ten recent fusion-based descriptors under a content-based image retrieval framework. The computational complexity was also analyzed to give an in-depth evaluation. Furthermore, compared with state-of-the-art convolutional neural network (CNN)-based descriptors, the proposed descriptor achieves comparable performance but does not require any training process.
Feature-based fusion of medical imaging data.
Calhoun, Vince D; Adali, Tülay
2009-09-01
The acquisition of multiple brain imaging types for a given study is a very common practice. There have been a number of approaches proposed for combining or fusing multitask or multimodal information. These can be roughly divided into those that attempt to study convergence of multimodal imaging, for example, how function and structure are related in the same region of the brain, and those that attempt to study the complementary nature of modalities, for example, utilizing temporal EEG information and spatial functional magnetic resonance imaging information. Within each of these categories, one can attempt data integration (the use of one imaging modality to improve the results of another) or true data fusion (in which multiple modalities are utilized to inform one another). We review both approaches and present a recent computational approach that first preprocesses the data to compute features of interest. The features are then analyzed in a multivariate manner using independent component analysis. We describe the approach in detail and provide examples of how it has been used for different fusion tasks. We also propose a method for selecting which combination of modalities provides the greatest value in discriminating groups. Finally, we summarize and describe future research topics.
Line-Tension Controlled Mechanism for Influenza Fusion
Risselada, Herre Jelger; Smirnova, Yuliya G.; Grubmüller, Helmut; Marrink, Siewert Jan; Müller, Marcus
2012-01-01
Our molecular simulations reveal that wild-type influenza fusion peptides are able to stabilize a highly fusogenic pre-fusion structure, i.e. a peptide bundle formed by four or more trans-membrane arranged fusion peptides. We rationalize that the lipid rim around such bundle has a non-vanishing rim energy (line-tension), which is essential to (i) stabilize the initial contact point between the fusing bilayers, i.e. the stalk, and (ii) drive its subsequent evolution. Such line-tension controlled fusion event does not proceed along the hypothesized standard stalk-hemifusion pathway. In modeled influenza fusion, single point mutations in the influenza fusion peptide either completely inhibit fusion (mutants G1V and W14A) or, intriguingly, specifically arrest fusion at a hemifusion state (mutant G1S). Our simulations demonstrate that, within a line-tension controlled fusion mechanism, these known point mutations either completely inhibit fusion by impairing the peptide’s ability to stabilize the required peptide bundle (G1V and W14A) or stabilize a persistent bundle that leads to a kinetically trapped hemifusion state (G1S). In addition, our results further suggest that the recently discovered leaky fusion mutant G13A, which is known to facilitate a pronounced leakage of the target membrane prior to lipid mixing, reduces the membrane integrity by forming a ‘super’ bundle. Our simulations offer a new interpretation for a number of experimentally observed features of the fusion reaction mediated by the prototypical fusion protein, influenza hemagglutinin, and might bring new insights into mechanisms of other viral fusion reactions. PMID:22761674
Faulkner, Claire; Ellis, Hayley Patricia; Shaw, Abigail; Penman, Catherine; Palmer, Abigail; Wragg, Christopher; Greenslade, Mark; Haynes, Harry Russell; Williams, Hannah; Lowis, Stephen; White, Paul; Williams, Maggie; Capper, David; Kurian, Kathreena Mary
2015-01-01
Pilocytic astrocytomas (PAs) are increasingly tested for KIAA1549-BRAF fusions. We used reverse transcription polymerase chain reaction for the 3 most common KIAA1549-BRAF fusions, together with BRAF V600E and histone H3.3 K27M analyses, to identify relationships between these molecular characteristics and clinical features in a cohort of 32 PA patients. In this group, KIAA1549-BRAF fusions were detected in 24 of the 32 patients (75%). Ten (42%) of the 24 had the 16-9 fusion, 8 (33%) had only the 15-9 fusion, and 1 (4%) had only the 16-11 fusion. Of the PAs with only the 15-9 fusion, 1 was in the cerebellum and 7 were centered in the midline outside of the cerebellum, that is, in the hypothalamus (n = 4), optic pathways (n = 2), and brainstem (n = 1). Tumors within the cerebellum were negatively associated with the 15-9 fusion. Seven (22%) of the 32 patients had tumor-related deaths, and 25 (78%) were alive between 2 and 14 years after initial biopsy. Age, sex, tumor location, 16-9 fusion, and 15-9 fusion were not associated with overall survival. Thus, in this small cohort, the 15-9 KIAA1549-BRAF fusion was associated with midline PAs located outside of the cerebellum; these tumors, which are generally difficult to resect, are prone to recurrence. PMID:26222501
Encell, Lance P; Friedman Ohana, Rachel; Zimmerman, Kris; Otto, Paul; Vidugiris, Gediminas; Wood, Monika G; Los, Georgyi V; McDougall, Mark G; Zimprich, Chad; Karassina, Natasha; Learish, Randall D; Hurst, Robin; Hartnett, James; Wheeler, Sarah; Stecha, Pete; English, Jami; Zhao, Kate; Mendez, Jacqui; Benink, Hélène A; Murphy, Nancy; Daniels, Danette L; Slater, Michael R; Urh, Marjeta; Darzins, Aldis; Klaubert, Dieter H; Bulleit, Robert F; Wood, Keith V
2012-01-01
Our fundamental understanding of proteins and their biological significance has been enhanced by genetic fusion tags, as they provide a convenient method for introducing unique properties to proteins so that they can be examined in isolation. Commonly used tags satisfy many of the requirements for applications relating to the detection and isolation of proteins from complex samples. However, their utility at low concentration becomes compromised if the binding affinity for a detection or capture reagent is not adequate to produce a stable interaction. Here, we describe HaloTag® (HT7), a genetic fusion tag based on a modified haloalkane dehalogenase designed and engineered to overcome the limitation of affinity tags by forming a high affinity, covalent attachment to a binding ligand. HT7 and its ligand have additional desirable features. The tag is relatively small, monomeric, and structurally compatible with fusion partners, while the ligand is specific, chemically simple, and amenable to modular synthetic design. Taken together, the design features and molecular evolution of HT7 have resulted in a superior alternative to common tags for the overexpression, detection, and isolation of target proteins. PMID:23248739
Cognitive Load Measurement in a Virtual Reality-based Driving System for Autism Intervention
Zhang, Lian; Wade, Joshua; Bian, Dayi; Fan, Jing; Swanson, Amy; Weitlauf, Amy; Warren, Zachary; Sarkar, Nilanjan
2016-01-01
Autism Spectrum Disorder (ASD) is a highly prevalent neurodevelopmental disorder with enormous individual and social cost. In this paper, a novel virtual reality (VR)-based driving system was introduced to teach driving skills to adolescents with ASD. This driving system is capable of gathering eye gaze, electroencephalography, and peripheral physiology data in addition to driving performance data. The objective of this paper is to fuse multimodal information to measure cognitive load during driving such that driving tasks can be individualized for optimal skill learning. Individualization of ASD intervention is an important criterion due to the spectrum nature of the disorder. Twenty adolescents with ASD participated in our study and the data collected were used for systematic feature extraction and classification of cognitive loads based on five well-known machine learning methods. Subsequently, three information fusion schemes—feature level fusion, decision level fusion and hybrid level fusion—were explored. Results indicate that multimodal information fusion can be used to measure cognitive load with high accuracy. Such a mechanism is essential since it will allow individualization of driving skill training based on cognitive load, which will facilitate acceptance of this driving system for clinical use and eventual commercialization. PMID:28966730
Guo, Lu; Wang, Ping; Sun, Ranran; Yang, Chengwen; Zhang, Ning; Guo, Yu; Feng, Yuanming
2018-02-19
Diffusion and perfusion magnetic resonance (MR) images can provide functional information about a tumour and enable more sensitive detection of the tumour extent. We aimed to develop a fuzzy feature fusion method for auto-segmentation of gliomas in radiotherapy planning using multi-parametric functional MR images, including the apparent diffusion coefficient (ADC), fractional anisotropy (FA) and relative cerebral blood volume (rCBV). For each functional modality, one histogram-based fuzzy model was created to transform the image volume into a fuzzy feature space. Based on the fuzzy fusion of the three fuzzy feature spaces, regions with a high possibility of belonging to tumour were generated automatically. The auto-segmentations of tumour in structural MR images were added to the final auto-segmented gross tumour volume (GTV). For evaluation, one radiation oncologist delineated GTVs for nine patients with all modalities. Comparisons between manually delineated and auto-segmented GTVs showed that the mean volume difference was 8.69% (±5.62%), the mean Dice similarity coefficient (DSC) was 0.88 (±0.02), and the mean sensitivity and specificity of the auto-segmentation were 0.87 (±0.04) and 0.98 (±0.01), respectively. High accuracy and efficiency can be achieved with the new method, which shows the potential of utilizing functional multi-parametric MR images for target definition in precision radiation treatment planning for patients with gliomas.
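A simplified sketch of histogram-based fuzzy fusion in the spirit of the method above, assuming hypothetical co-registered ADC, FA and rCBV volumes; the fuzzy models here are plain min-max normalisations and a min (fuzzy AND) fusion operator, not the paper's fitted models, and the intensity assumptions are illustrative only.

```python
# Fuzzy membership per modality, fused with a conservative min operator.
import numpy as np

rng = np.random.default_rng(0)
adc = rng.random((32, 32, 16))    # placeholder functional MR volumes
fa = rng.random((32, 32, 16))
rcbv = rng.random((32, 32, 16))

def fuzzy_membership(vol, high_is_tumour=True):
    """Map voxel intensities to a [0, 1] 'tumour-likeness' membership."""
    m = (vol - vol.min()) / (vol.max() - vol.min() + 1e-9)
    return m if high_is_tumour else 1.0 - m

# Illustrative assumption: tumour shows high rCBV and ADC but reduced FA.
memberships = [fuzzy_membership(rcbv), fuzzy_membership(adc), fuzzy_membership(fa, False)]

fused = np.minimum.reduce(memberships)     # fuzzy AND across the three feature spaces
auto_gtv = fused > 0.6                     # high joint membership -> candidate GTV voxels
print("candidate GTV voxels:", int(auto_gtv.sum()))
```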
Fusion genes with ALK as recurrent partner in ependymoma-like gliomas: a new brain tumor entity?
Olsen, Thale Kristin; Panagopoulos, Ioannis; Meling, Torstein R.; Micci, Francesca; Gorunova, Ludmila; Thorsen, Jim; Due-Tønnessen, Bernt; Scheie, David; Lund-Iversen, Marius; Krossnes, Bård; Saxhaug, Cathrine; Heim, Sverre; Brandal, Petter
2015-01-01
Background We have previously characterized 19 ependymal tumors using Giemsa banding and high-resolution comparative genomic hybridization. The aim of this study was to analyze these tumors searching for fusion genes. Methods RNA sequencing was performed in 12 samples. Potential fusion transcripts were assessed by seed count and structural chromosomal aberrations. Transcripts of interest were validated using fluorescence in situ hybridization and PCR followed by direct sequencing. Results RNA sequencing identified rearrangements of the anaplastic lymphoma kinase gene (ALK) in 2 samples. Both tumors harbored structural aberrations involving the ALK locus 2p23. Tumor 1 had an unbalanced t(2;14)(p23;q22) translocation which led to the fusion gene KTN1-ALK. Tumor 2 had an interstitial del(2)(p16p23) deletion causing the fusion of CCDC88A and ALK. In both samples, the breakpoint of ALK was located between exons 19 and 20. Both patients were infants and both tumors were supratentorial. The tumors were well demarcated from surrounding tissue and had both ependymal and astrocytic features but were diagnosed and treated as ependymomas. Conclusions By combining karyotyping and RNA sequencing, we identified the 2 first ever reported ALK rearrangements in CNS tumors. Such rearrangements may represent the hallmark of a new entity of pediatric glioma characterized by both ependymal and astrocytic features. Our findings are of particular importance because crizotinib, a selective ALK inhibitor, has demonstrated effect in patients with lung cancer harboring ALK rearrangements. Thus, ALK emerges as an interesting therapeutic target in patients with ependymal tumors carrying ALK fusions. PMID:25795305
NASA Astrophysics Data System (ADS)
McCullough, Claire L.; Novobilski, Andrew J.; Fesmire, Francis M.
2006-04-01
Faculty from the University of Tennessee at Chattanooga and the University of Tennessee College of Medicine, Chattanooga Unit, have used data mining techniques and neural networks to examine a set of fourteen features, data items, and HUMINT assessments for 2,148 emergency room patients with symptoms possibly indicative of Acute Coronary Syndrome. Specifically, the authors have generated Bayesian networks describing linkages and causality in the data, and have compared them with neural networks. The data includes objective information routinely collected during triage and the physician's initial case assessment, a HUMINT appraisal. Both the neural network and the Bayesian network were used to fuse the disparate types of information with the goal of forecasting thirty-day adverse patient outcome. This paper presents details of the methods of data fusion including both the data mining techniques and the neural network. Results are compared using Receiver Operating Characteristic curves describing the outcomes of both methods, both using only objective features and including the subjective physician's assessment. While preliminary, the results of this continuing study are significant both from the perspective of potential use of the intelligent fusion of biomedical informatics to aid the physician in prescribing treatment necessary to prevent serious adverse outcome from ACS and as a model of fusion of objective data with subjective HUMINT assessment. Possible future work includes extension of successfully demonstrated intelligent fusion methods to other medical applications, and use of decision level fusion to combine results from data mining and neural net approaches for even more accurate outcome prediction.
Kozlov, M M; Chernomordik, L V
1998-01-01
Although membrane fusion mediated by influenza virus hemagglutinin (HA) is the best characterized example of ubiquitous protein-mediated fusion, it is still not known how the low-pH-induced refolding of HA trimers causes fusion. This refolding involves 1) repositioning of the hydrophobic N-terminal sequence of the HA2 subunit of HA ("fusion peptide"), and 2) the recruitment of additional residues to the alpha-helical coiled coil of a rigid central rod of the trimer. We propose here a mechanism by which these conformational changes can cause local bending of the viral membrane, priming it for fusion. In this model fusion is triggered by incorporation of fusion peptides into viral membrane. Refolding of a central rod exerts forces that pull the fusion peptides, tending to bend the membrane around HA trimer into a saddle-like shape. Elastic energy drives self-assembly of these HA-containing membrane elements in the plane of the membrane into a ring-like cluster. Bulging of the viral membrane within such cluster yields a dimple growing toward the bound target membrane. Bending stresses in the lipidic top of the dimple facilitate membrane fusion. We analyze the energetics of this proposed sequence of membrane rearrangements, and demonstrate that this simple mechanism may explain some of the known phenomenological features of fusion. PMID:9726939
A novel feature extraction approach for microarray data based on multi-algorithm fusion
Jiang, Zhu; Xu, Rong
2015-01-01
Feature extraction is one of the most important and effective methods for reducing dimensionality in data mining, particularly with the emergence of high-dimensional data such as microarray gene expression data. Feature extraction for gene selection mainly serves two purposes: one is to identify certain disease-related genes; the other is to find a compact set of discriminative genes for building a pattern classifier with reduced complexity and improved generalization capability. Depending on the purpose of gene selection, two types of feature extraction algorithms, ranking-based feature extraction and set-based feature extraction, are employed in microarray gene expression data analysis. In ranking-based feature extraction, features are generally evaluated on an individual basis, without considering the inter-relationships between features, while set-based feature extraction evaluates features based on their role in a feature set, taking into account dependencies between features. Like learning methods, feature extraction faces a problem of generalization ability, namely robustness; however, the issue of robustness is often overlooked in feature extraction. In order to improve the accuracy and robustness of feature extraction for microarray data, a novel approach based on multi-algorithm fusion is proposed. By fusing different types of feature extraction algorithms to select features from the sample set, the proposed approach is able to improve feature extraction performance. The new approach is tested on gene expression datasets including Colon cancer, CNS, DLBCL, and Leukemia data. The testing results show that the performance of this algorithm is better than existing solutions. PMID:25780277
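A minimal sketch of the multi-algorithm fusion idea for gene selection: combine the rankings produced by two different feature-scoring methods and keep the genes that rank highly under both. The synthetic dataset and the rank-averaging fusion rule are illustrative assumptions, not the authors' exact procedure.

```python
# Fuse two ranking-based feature-selection algorithms by averaging their ranks.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif, mutual_info_classif

X, y = make_classification(n_samples=60, n_features=500, n_informative=20, random_state=0)

# Per-gene scores from two different algorithms.
f_scores, _ = f_classif(X, y)
mi_scores = mutual_info_classif(X, y, random_state=0)

# Convert scores to ranks (0 = best) and fuse by averaging the ranks.
f_rank = np.argsort(np.argsort(-f_scores))
mi_rank = np.argsort(np.argsort(-mi_scores))
fused_rank = (f_rank + mi_rank) / 2.0

selected = np.argsort(fused_rank)[:20]      # fused top-20 gene subset
print("selected gene indices:", selected)
```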
Decision Fusion with Channel Errors in Distributed Decode-Then-Fuse Sensor Networks
Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Zhong, Xionghu
2015-01-01
Decision fusion for distributed detection in sensor networks under non-ideal channels is investigated in this paper. Usually, the local decisions are transmitted to the fusion center (FC) and decoded, and a fusion rule is then applied to achieve a global decision. We propose an optimal likelihood ratio test (LRT)-based fusion rule to take the uncertainty of the decoded binary data due to modulation, reception mode and communication channel into account. The average bit error rate (BER) is employed to characterize such an uncertainty. Further, the detection performance is analyzed under both non-identical and identical local detection performance indices. In addition, the performance of the proposed method is compared with the existing optimal and suboptimal LRT fusion rules. The results show that the proposed fusion rule is more robust compared to these existing ones. PMID:26251908
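A rough numerical sketch of a Chair-Varshney-style LRT fusion rule that folds the channel bit error rate into the local decision statistics, in the spirit of the setting above; the local detection/false-alarm probabilities, the BER and the received bits are illustrative assumptions, not the paper's values.

```python
# LRT decision fusion of noisy one-bit sensor decisions received over binary channels.
import numpy as np

pd = np.array([0.85, 0.80, 0.90])     # local detection probabilities
pf = np.array([0.10, 0.15, 0.05])     # local false-alarm probabilities
ber = 0.05                            # average bit error rate of each channel
r = np.array([1, 0, 1])               # decoded bits received at the fusion center

# Probability that the received bit is 1 under each hypothesis, after the noisy channel.
p1_h1 = pd * (1 - ber) + (1 - pd) * ber
p1_h0 = pf * (1 - ber) + (1 - pf) * ber

# Log-likelihood ratio of the received vector; decide H1 if it exceeds a threshold.
llr = np.sum(np.where(r == 1, np.log(p1_h1 / p1_h0), np.log((1 - p1_h1) / (1 - p1_h0))))
print("fused LLR:", llr, "global decision:", int(llr > 0.0))
```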
Ma, Xu; Cheng, Yongmei; Hao, Shuai
2016-12-10
Automatic classification of terrain surfaces from an aerial image is essential for an autonomous unmanned aerial vehicle (UAV) landing at an unprepared site using vision. Diverse terrain surfaces may show similar spectral properties due to illumination and noise, which can easily cause poor classification performance. To address this issue, a multi-stage classification algorithm based on low-rank recovery and multi-feature fusion sparse representation is proposed. First, color moments and Gabor texture features are extracted from the training data and stacked as column vectors of a dictionary. Then, low-rank matrix recovery is performed on the dictionary using augmented Lagrange multipliers, and a multi-stage terrain classifier is constructed. Experimental results on an aerial map database that we prepared verify the classification accuracy and robustness of the proposed method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allain, Jean Paul
2014-08-08
This project consisted of fundamental and applied research of advanced in-situ particle-beam interactions with surfaces/interfaces to discover novel materials able to tolerate intense conditions at the plasma-material interface (PMI) in future fusion burning plasma devices. The project established a novel facility that is capable of not only characterizing new fusion nanomaterials but, more importantly probing and manipulating materials at the nanoscale while performing subsequent single-effect in-situ testing of their performance under simulated environments in fusion PMI.
A color fusion method of infrared and low-light-level images based on visual perception
NASA Astrophysics Data System (ADS)
Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa
2014-11-01
Color fusion images can be obtained through the fusion of infrared and low-light-level images, and they contain the information of both sources. Fusion images can help observers to understand multichannel images comprehensively. However, simple fusion may lose target information because targets are inconspicuous in long-distance infrared and low-light-level images, while blindly applying target extraction can seriously affect the perception of scene information. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are applied to traditional color fusion methods. The infrared and low-light-level color fusion images are achieved based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method. The fusion images achieved by our algorithm not only improve the detection rate of targets, but also retain rich natural information of the scenes.
Proposal for a novel type of small scale aneutronic fusion reactor
NASA Astrophysics Data System (ADS)
Gruenwald, J.
2017-02-01
The aim of this work is to propose a novel scheme for a small-scale aneutronic fusion reactor. This new reactor type makes use of the advantages of combining laser-driven plasma acceleration and electrostatic confinement fusion. An intense laser beam is used to create a high-density lithium-proton plasma, which is then collimated and focused into the centre of the fusion reaction chamber. The basic concept presented here is based on the 7Li-proton fusion reaction; however, the physical and technological fundamentals may equally be applied to 11B-proton fusion. The former fusion reaction path offers higher energy yields, while the latter has larger fusion cross sections. Within this paper a technological realisation of such a fusion device is presented, which allows steady-state operation with a highly energetic, well-collimated ion beam. It will be demonstrated that energy break-even can be reached with this device by using a combination of already existing technologies.
NASA Astrophysics Data System (ADS)
Yang, G.; Lin, Y.; Bhattacharya, P.
2007-12-01
To achieve effective and safe operation of systems in which the human and the machine interact, the machine needs to understand the human state, especially the cognitive state, when the operator's task demands intensive cognitive activity. Because human cognitive states and behaviors, as well as expressions and cues, are highly uncertain, the recent trend in inferring the human state is to consider multimodal features of the human operator. In this paper, we present a method for multimodal inference of human cognitive states by integrating neuro-fuzzy network and information fusion techniques. To demonstrate the effectiveness of this method, we take driver fatigue detection as an example. The proposed method has, in particular, the following new features. First, human expressions are classified into four categories: (i) casual or contextual features, (ii) contact features, (iii) contactless features, and (iv) performance features. Second, the fuzzy neural network technique, in particular the Takagi-Sugeno-Kang (TSK) model, is employed to cope with uncertain behaviors. Third, the sensor fusion technique, in particular ordered weighted aggregation (OWA), is integrated with the TSK model in such a way that cues are taken as inputs to the TSK model, and the outputs of the TSK model are then fused by the OWA operator, which gives outputs corresponding to the particular cognitive states of interest (e.g., fatigue). We call this method TSK-OWA. Validation of the TSK-OWA, performed in the Northeastern University vehicle driving simulator, has shown that the proposed method is promising as a general tool for inferring human cognitive states and as a specific tool for driver fatigue detection.
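A toy sketch of the TSK-OWA idea described above: two fuzzy TSK rules map a single cue (e.g., an eyelid-closure ratio) to a fatigue score, and an OWA operator aggregates the scores from several cues. All memberships, rule consequents and weights are illustrative assumptions, not the trained model from the paper.

```python
# Zero-order TSK inference per cue, then ordered weighted averaging across cues.
import numpy as np

def tsk_fatigue(cue):
    """Two rules: 'cue is low' -> 0.1, 'cue is high' -> 0.9, weighted by memberships."""
    mu_low = np.clip(1.0 - cue, 0.0, 1.0)          # simple memberships on [0, 1]
    mu_high = np.clip(cue, 0.0, 1.0)
    return (mu_low * 0.1 + mu_high * 0.9) / (mu_low + mu_high + 1e-9)

def owa(values, weights):
    """Ordered weighted averaging: sort inputs in descending order, then weight."""
    return float(np.dot(np.sort(values)[::-1], weights))

cues = [0.8, 0.4, 0.6]                  # e.g., contactless, contact and performance cues
scores = [tsk_fatigue(c) for c in cues]
fatigue = owa(scores, weights=[0.5, 0.3, 0.2])     # emphasise the strongest evidence
print("per-cue scores:", scores, "fused fatigue level:", fatigue)
```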
Surface apposition and multiple cell contacts promote myoblast fusion in Drosophila flight muscles
Dhanyasi, Nagaraju; Segal, Dagan; Shimoni, Eyal; Shinder, Vera
2015-01-01
Fusion of individual myoblasts to form multinucleated myofibers constitutes a widely conserved program for growth of the somatic musculature. We have used electron microscopy methods to study this key form of cell–cell fusion during development of the indirect flight muscles (IFMs) of Drosophila melanogaster. We find that IFM myoblast–myotube fusion proceeds in a stepwise fashion and is governed by apparent cross talk between transmembrane and cytoskeletal elements. Our analysis suggests that cell adhesion is necessary for bringing myoblasts to within a minimal distance from the myotubes. The branched actin polymerization machinery acts subsequently to promote tight apposition between the surfaces of the two cell types and formation of multiple sites of cell–cell contact, giving rise to nascent fusion pores whose expansion establishes full cytoplasmic continuity. Given the conserved features of IFM myogenesis, this sequence of cell interactions and membrane events and the mechanistic significance of cell adhesion elements and the actin-based cytoskeleton are likely to represent general principles of the myoblast fusion process. PMID:26459604
NASA Astrophysics Data System (ADS)
Parkar, V. V.; Sharma, Sushil K.; Palit, R.; Upadhyaya, S.; Shrivastava, A.; Pandit, S. K.; Mahata, K.; Jha, V.; Santra, S.; Ramachandran, K.; Nag, T. N.; Rath, P. K.; Kanagalekar, Bhushan; Trivedi, T.
2018-01-01
The complete and incomplete fusion cross sections for the 7Li+124Sn reaction were measured using online and offline characteristic γ-ray detection techniques. The complete fusion (CF) cross sections at energies above the Coulomb barrier were found to be suppressed by ~26% compared to coupled channel calculations. This suppression of the complete fusion cross sections is found to be commensurate with the measured total incomplete fusion (ICF) cross sections. A distinct feature is observed in the ICF cross sections: t capture is dominant compared to α capture at all measured energies. A simultaneous explanation of the complete, incomplete, and total fusion (TF) data was also obtained from calculations based on the continuum discretized coupled channel method with short-range imaginary potentials. The cross-section ratios CF/TF and ICF/TF obtained from the data as well as the calculations show the dominance of ICF at below-barrier energies and of CF at above-barrier energies.
NASA Technical Reports Server (NTRS)
Foyle, David C.
1993-01-01
Based on existing integration models in the psychological literature, an evaluation framework is developed to assess sensor fusion displays as might be implemented in an enhanced/synthetic vision system. The proposed evaluation framework for evaluating the operator's ability to use such systems is a normative approach: The pilot's performance with the sensor fusion image is compared to models' predictions based on the pilot's performance when viewing the original component sensor images prior to fusion. This allows for the determination as to when a sensor fusion system leads to: poorer performance than one of the original sensor displays, clearly an undesirable system in which the fused sensor system causes some distortion or interference; better performance than with either single sensor system alone, but at a sub-optimal level compared to model predictions; optimal performance compared to model predictions; or, super-optimal performance, which may occur if the operator were able to use some highly diagnostic 'emergent features' in the sensor fusion display, which were unavailable in the original sensor displays.
Detection of buried objects by fusing dual-band infrared images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, G.A.; Sengupta, S.K.; Sherwood, R.J.
1993-11-01
We have conducted experiments to demonstrate the enhanced detectability of buried land mines using sensor fusion techniques. Multiple sensors, including visible imagery, infrared imagery, and ground penetrating radar (GPR), have been used to acquire data on a number of buried mines and mine surrogates. Because the visible-wavelength and GPR data are currently incomplete, this paper focuses on the fusion of two-band infrared images. We use feature-level fusion and supervised learning with a probabilistic neural network (PNN) to evaluate detection performance. The novelty of the work lies in the application of advanced target recognition algorithms, the fusion of dual-band infrared images, and the evaluation of the techniques using two real data sets.
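A compact probabilistic neural network (Parzen-window) sketch for the supervised, feature-level-fusion classification step mentioned above, assuming hypothetical dual-band IR feature vectors; the kernel width and the features themselves are illustrative, not those of the reported experiments.

```python
# Parzen-window PNN: the class with the largest average Gaussian-kernel density wins.
import numpy as np

rng = np.random.default_rng(0)
# Placeholder fused features (e.g., statistics from both IR bands stacked per region).
X_train = np.vstack([rng.normal(0, 1, (40, 6)), rng.normal(2, 1, (40, 6))])
y_train = np.array([0] * 40 + [1] * 40)           # 0 = background, 1 = buried object

def pnn_predict(x, X_train, y_train, sigma=0.5):
    d2 = np.sum((X_train - x) ** 2, axis=1)
    k = np.exp(-d2 / (2.0 * sigma ** 2))
    return int(np.argmax([k[y_train == c].mean() for c in np.unique(y_train)]))

x_test = rng.normal(2, 1, 6)
print("predicted class:", pnn_predict(x_test, X_train, y_train))
```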
Multisensor data fusion for IED threat detection
NASA Astrophysics Data System (ADS)
Mees, Wim; Heremans, Roel
2012-10-01
In this paper we present the multi-sensor registration and fusion algorithms that were developed for a force protection research project in order to detect threats against military patrol vehicles. The fusion is performed at the object level, using a hierarchical evidence aggregation approach. It first uses expert domain knowledge about the features that characterize the detected threats, implemented in the form of a fuzzy expert system. The next level consists of fusing intra-sensor and inter-sensor information, for which an ordered weighted averaging operator is used. Object-level fusion between candidate threats that are detected asynchronously on a moving vehicle, by sensors with different imaging geometries, requires an accurate sensor-to-world coordinate transformation. This image registration is also discussed in this paper.
NASA Astrophysics Data System (ADS)
Yan, Dan; Bai, Lianfa; Zhang, Yi; Han, Jing
2018-02-01
To address the problems of missing details and limited performance in colorization based on sparse representation, we propose a conceptual model framework for colorizing gray-scale images, and on this basis a multi-sparse-dictionary colorization algorithm based on feature classification and detail enhancement (CEMDC). The algorithm achieves a natural colorized effect for a gray-scale image that is consistent with human vision. First, the algorithm establishes a multi-sparse-dictionary classification colorization model. Then, to improve the classification accuracy, a corresponding local constraint algorithm is proposed. Finally, we propose a detail enhancement based on the Laplacian pyramid, which is effective in solving the problem of missing details and in improving the speed of image colorization. In addition, the algorithm not only realizes the colorization of visual gray-scale images, but can also be applied to other areas, such as color transfer between color images, colorizing gray fusion images, and infrared images.
A Fusion-Inhibiting Peptide against Rift Valley Fever Virus Inhibits Multiple, Diverse Viruses
Koehler, Jeffrey W.; Smith, Jeffrey M.; Ripoll, Daniel R.; Spik, Kristin W.; Taylor, Shannon L.; Badger, Catherine V.; Grant, Rebecca J.; Ogg, Monica M.; Wallqvist, Anders; Guttieri, Mary C.; Garry, Robert F.; Schmaljohn, Connie S.
2013-01-01
For enveloped viruses, fusion of the viral envelope with a cellular membrane is critical for a productive infection to occur. This fusion process is mediated by at least three classes of fusion proteins (Class I, II, and III) based on the protein sequence and structure. For Rift Valley fever virus (RVFV), the glycoprotein Gc (Class II fusion protein) mediates this fusion event following entry into the endocytic pathway, allowing the viral genome access to the cell cytoplasm. Here, we show that peptides analogous to the RVFV Gc stem region inhibited RVFV infectivity in cell culture by inhibiting the fusion process. Further, we show that infectivity can be inhibited for diverse, unrelated RNA viruses that have Class I (Ebola virus), Class II (Andes virus), or Class III (vesicular stomatitis virus) fusion proteins using this single peptide. Our findings are consistent with an inhibition mechanism similar to that proposed for stem peptide fusion inhibitors of dengue virus in which the RVFV inhibitory peptide first binds to both the virion and cell membranes, allowing it to traffic with the virus into the endocytic pathway. Upon acidification and rearrangement of Gc, the peptide is then able to specifically bind to Gc and prevent fusion of the viral and endocytic membranes, thus inhibiting viral infection. These results could provide novel insights into conserved features among the three classes of viral fusion proteins and offer direction for the future development of broadly active fusion inhibitors. PMID:24069485
State-of-the-Art Fusion-Finder Algorithms Sensitivity and Specificity
Carrara, Matteo; Beccuti, Marco; Lazzarato, Fulvio; Cavallo, Federica; Cordero, Francesca; Donatelli, Susanna; Calogero, Raffaele A.
2013-01-01
Background. Gene fusions arising from chromosomal translocations have been implicated in cancer. RNA-seq has the potential to discover such rearrangements generating functional proteins (chimera/fusion). Recently, many methods for chimera detection have been published. However, the specificity and sensitivity of those tools were not extensively investigated in a comparative way. Results. We tested eight fusion-detection tools (FusionHunter, FusionMap, FusionFinder, MapSplice, deFuse, Bellerophontes, ChimeraScan, and TopHat-fusion) to detect fusion events using synthetic and real datasets encompassing chimeras. A comparison run only on synthetic data could generate misleading results, since we found no counterpart in the real dataset. Furthermore, most tools report a very high number of false positive chimeras. In particular, the most sensitive tool, ChimeraScan, reports a large number of false positives that we were able to reduce significantly by devising and applying two filters to remove fusions not supported by fusion junction-spanning reads or encompassing large intronic regions. Conclusions. The discordant results obtained using synthetic and real datasets suggest that synthetic datasets encompassing fusion events may not fully capture the complexity of an RNA-seq experiment. Moreover, fusion detection tools are still limited in sensitivity or specificity; thus, there is room for further improvement in fusion-finder algorithms. PMID:23555082
Zafar, Raheel; Dass, Sarat C; Malik, Aamir Saeed
2017-01-01
Electroencephalogram (EEG)-based decoding of human brain activity is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain-computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for feature extraction, a t-test is used for the selection of significant features, and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time series, an approach also known as multivariate pattern analysis. Comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method, the most widely used feature extraction and prediction method, showed an accuracy of 65.7%, whereas the proposed method predicts novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction methods.
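The last two stages of the pipeline above (t-test feature selection followed by likelihood-ratio score fusion) can be sketched compactly. The Gaussian class-conditional model, the 0.05 threshold and the synthetic data below are assumptions for illustration, not details from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic "CNN features": 60 trials x 100 features, two classes of images.
X = rng.normal(size=(60, 100))
y = np.repeat([0, 1], 30)
X[y == 1, :10] += 0.8            # first 10 features carry class information

# 1) t-test feature selection: keep features with p < 0.05.
t, p = stats.ttest_ind(X[y == 0], X[y == 1], axis=0)
Xs = X[:, p < 0.05]

# 2) Likelihood-ratio score fusion: model each selected feature as Gaussian
#    per class and sum the per-feature log-likelihood ratios.
mu0, sd0 = Xs[y == 0].mean(0), Xs[y == 0].std(0) + 1e-6
mu1, sd1 = Xs[y == 1].mean(0), Xs[y == 1].std(0) + 1e-6
llr = (stats.norm.logpdf(Xs, mu1, sd1) - stats.norm.logpdf(Xs, mu0, sd0)).sum(axis=1)
pred = (llr > 0).astype(int)     # decide class 1 when the fused ratio is positive
print("training accuracy:", (pred == y).mean())
```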
Efficient Iris Recognition Based on Optimal Subfeature Selection and Weighted Subregion Fusion
Deng, Ning
2014-01-01
In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. First, we introduce the process of feature extraction and representation based on the scale invariant feature transform (SIFT) in detail. Second, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compound strategy combining OPDF and MPDF to further select an optimal subfeature set. Third, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to efficiently learn the different subregions' weights, and the weighted subregion matching scores are then combined to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computational complexity. PMID:24683317
Efficient iris recognition based on optimal subfeature selection and weighted subregion fusion.
Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; He, Fei; Wang, Hongye; Deng, Ning
2014-01-01
In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. First, we introduce the process of feature extraction and representation based on the scale invariant feature transform (SIFT) in detail. Second, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compound strategy combining OPDF and MPDF to further select an optimal subfeature set. Third, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to efficiently learn the different subregions' weights, and the weighted subregion matching scores are then combined to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computational complexity.
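The weighted subregion matching fusion described in both records amounts to combining per-subregion matching scores with learned weights. A minimal sketch is below; the scores, weights and acceptance threshold are illustrative, and the particle swarm optimization step is replaced here by fixed weights for brevity.

```python
import numpy as np

def fuse_subregion_scores(scores, weights, threshold=0.5):
    """Weighted fusion of per-subregion iris matching scores.
    `scores[i]` is the matching score of subregion i (higher = more similar);
    `weights` are non-negative and sum to one (e.g. tuned offline by PSO)."""
    scores = np.asarray(scores, dtype=float)
    weights = np.asarray(weights, dtype=float)
    fused = float(np.dot(scores, weights))
    return fused, fused >= threshold   # final score and accept/reject decision

# Illustrative example with four subregions; in practice the weights would be
# learned on a training set (the papers use particle swarm optimization).
scores = [0.82, 0.40, 0.73, 0.65]
weights = [0.35, 0.15, 0.30, 0.20]
print(fuse_subregion_scores(scores, weights))
```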
NASA Astrophysics Data System (ADS)
Huang, Zhijiong; Hu, Yongtao; Zheng, Junyu; Zhai, Xinxin; Huang, Ran
2018-05-01
Lateral boundary conditions (LBCs) are essential for chemical transport models to simulate regional transport; however, they often contain large uncertainties. This study proposes an optimized data fusion approach to reduce the bias of LBCs by fusing gridded model outputs, from which the daughter domain's LBCs are derived, with ground-level measurements. The optimized data fusion approach follows the framework of a previous interpolation-based fusion method but improves it by using a bias kriging method to correct the spatial bias in gridded model outputs. Cross-validation shows that the optimized approach better estimates fused fields in areas with a large number of observations compared with the previous interpolation-based method. The optimized approach was applied to correct LBCs of PM2.5 concentrations for simulations in the Pearl River Delta (PRD) region as a case study. Evaluations show that the LBCs corrected by data fusion improve in-domain PM2.5 simulations in terms of magnitude and temporal variance: correlation increases by 0.13-0.18 and fractional bias (FB) decreases by approximately 3%-15%. This study demonstrates the feasibility of applying data fusion to improve regional air quality modeling.
Márquez, Cristina; López, M Isabel; Ruisánchez, Itziar; Callao, M Pilar
2016-12-01
Two data fusion strategies (high- and mid-level) combined with a multivariate classification approach (Soft Independent Modelling of Class Analogy, SIMCA) have been applied to take advantage of the synergistic effect of the information obtained from two spectroscopic techniques: FT-Raman and NIR. Mid-level data fusion consists of merging selected variables from the spectra obtained with each spectroscopic technique and then applying the classification technique. High-level data fusion combines the SIMCA classification results obtained individually from each spectroscopic technique; of the possible ways to make the necessary combinations, we decided to use fuzzy aggregation connective operators. As a case study, we considered the possible adulteration of hazelnut paste with almond. Using the two-class SIMCA approach, class 1 consisted of unadulterated hazelnut samples and class 2 of samples adulterated with almond. Model performance was also studied with samples adulterated with chickpea. The results show that data fusion is an effective strategy, since the performance parameters are better than the individual ones: sensitivity and specificity values between 75% and 100% for the individual techniques and between 96-100% and 88-100% for the mid- and high-level data fusion strategies, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
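As a rough illustration of the mid-level strategy (select variables from each spectral block, concatenate them, then classify), the sketch below substitutes a generic classifier for SIMCA; the synthetic spectra, variable-selection rule and classifier are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 80
y = rng.integers(0, 2, size=n)                          # 0 = unadulterated, 1 = adulterated
raman = rng.normal(size=(n, 200)) + y[:, None] * 0.3    # synthetic FT-Raman block
nir = rng.normal(size=(n, 150)) + y[:, None] * 0.2      # synthetic NIR block

def select_variables(X, y, k=20):
    """Keep the k variables with the largest standardized between-class mean gap.
    (For a sketch the selection is done on all data; a real workflow would
    select inside the cross-validation loop to avoid leakage.)"""
    gap = np.abs(X[y == 0].mean(0) - X[y == 1].mean(0)) / (X.std(0) + 1e-9)
    return X[:, np.argsort(gap)[-k:]]

# Mid-level fusion: concatenate the selected variables from each technique,
# then train a single classifier on the fused block.
X_fused = np.hstack([select_variables(raman, y), select_variables(nir, y)])
clf = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(clf, X_fused, y, cv=5).mean())
```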
Machinery Diagnostic Feature Extraction and Fusion Techniques Using Diverse Sources
2001-04-05
[Abstract not recoverable: the extracted text contains only contact details and a fragment indicating that the report describes diagnostic feature extraction and fusion techniques of the JAHUMS system, developed by AMTEC Corporation and Wyle Laboratories, Incorporated, for helicopters.]
Aperture tolerances for neutron-imaging systems in inertial confinement fusion.
Ghilea, M C; Sangster, T C; Meyerhofer, D D; Lerche, R A; Disdier, L
2008-02-01
Neutron-imaging systems are being considered as an ignition diagnostic for the National Ignition Facility (NIF) [Hogan et al., Nucl. Fusion 41, 567 (2001)]. Given the importance of these systems, a neutron-imaging design tool is being used to quantify the effects of aperture fabrication and alignment tolerances on reconstructed neutron images for inertial confinement fusion. The simulations indicate that alignment tolerances of more than 1 mrad would introduce measurable features in a reconstructed image for both pinhole and penumbral aperture systems. These simulations further show that penumbral apertures are several times less sensitive to fabrication errors than pinhole apertures.
Tyrosine kinase gene rearrangements in epithelial malignancies
Shaw, Alice T.; Hsu, Peggy P.; Awad, Mark M.; Engelman, Jeffrey A.
2014-01-01
Chromosomal rearrangements that lead to oncogenic kinase activation are observed in many epithelial cancers. These cancers express activated fusion kinases that drive the initiation and progression of malignancy, and often have a considerable response to small-molecule kinase inhibitors, which validates these fusion kinases as ‘druggable’ targets. In this Review, we examine the aetiologic, pathogenic and clinical features that are associated with cancers harbouring oncogenic fusion kinases, including anaplastic lymphoma kinase (ALK), ROS1 and RET. We discuss the clinical outcomes with targeted therapies and explore strategies to discover additional kinases that are activated by chromosomal rearrangements in solid tumours. PMID:24132104
Method for vacuum fusion bonding
Ackler, Harold D.; Swierkowski, Stefan P.; Tarte, Lisa A.; Hicks, Randall K.
2001-01-01
An improved vacuum fusion bonding structure and process for aligned bonding of large area glass plates, patterned with microchannels and access holes and slots, for elevated glass fusion temperatures. Vacuum pumpout of all components is through the bottom platform which yields an untouched, defect free top surface which greatly improves optical access through this smooth surface. Also, a completely non-adherent interlayer, such as graphite, with alignment and location features is located between the main steel platform and the glass plate pair, which makes large improvements in quality, yield, and ease of use, and enables aligned bonding of very large glass structures.
Fusion bonding and alignment fixture
Ackler, Harold D.; Swierkowski, Stefan P.; Tarte, Lisa A.; Hicks, Randall K.
2000-01-01
An improved vacuum fusion bonding structure and process for aligned bonding of large area glass plates, patterned with microchannels and access holes and slots, for elevated glass fusion temperatures. Vacuum pumpout of all the components is through the bottom platform which yields an untouched, defect free top surface which greatly improves optical access through this smooth surface. Also, a completely non-adherent interlayer, such as graphite, with alignment and location features is located between the main steel platform and the glass plate pair, which makes large improvements in quality, yield, and ease of use, and enables aligned bonding of very large glass structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wieser, Patti; Hopkins, David
The DOE Princeton Plasma Physics Laboratory (PPPL) collaborates to develop fusion as a safe, clean and abundant energy source for the future. This video discusses PPPL's research and development on plasma, the fourth state of matter. In this simulation of plasma turbulence inside PPPL's National Spherical Torus Experiment, the colorful strings represent higher and lower electron density in turbulent plasma as it circles around a donut-shaped fusion reactor; red and orange are higher density. This image is among those featured in the slide show, "Plasmas are Hot and Fusion is Cool," a production of PPPL and the Princeton University Broadcast Center.
Stalk model of membrane fusion: solution of energy crisis.
Kozlovsky, Yonathan; Kozlov, Michael M
2002-01-01
Membrane fusion proceeds via formation of intermediate nonbilayer structures. The stalk model of fusion intermediate is commonly recognized to account for the major phenomenology of the fusion process. However, in its current form, the stalk model poses a challenge. On one hand, it is able to describe qualitatively the modulation of the fusion reaction by the lipid composition of the membranes. On the other, it predicts very large values of the stalk energy, so that the related energy barrier for fusion cannot be overcome by membranes within a biologically reasonable span of time. We suggest a new structure for the fusion stalk, which resolves the energy crisis of the model. Our approach is based on a combined deformation of the stalk membrane including bending of the membrane surface and tilt of the hydrocarbon chains of lipid molecules. We demonstrate that the energy of the fusion stalk is a few times smaller than those predicted previously and the stalks are feasible in real systems. We account quantitatively for the experimental results on dependence of the fusion reaction on the lipid composition of different membrane monolayers. We analyze the dependence of the stalk energy on the distance between the fusing membranes and provide the experimentally testable predictions for the structural features of the stalk intermediates. PMID:11806930
Cell fusion and nuclear fusion in plants.
Maruyama, Daisuke; Ohtsu, Mina; Higashiyama, Tetsuya
2016-12-01
Eukaryotic cells are surrounded by a plasma membrane and have a large nucleus containing the genomic DNA, which is enclosed by a nuclear envelope consisting of the outer and inner nuclear membranes. Although these membranes maintain the identity of cells, they sometimes fuse to each other, such as to produce a zygote during sexual reproduction or to give rise to other characteristically polyploid tissues. Recent studies have demonstrated that the mechanisms of plasma membrane or nuclear membrane fusion in plants are shared to some extent with those of yeasts and animals, despite the unique features of plant cells including thick cell walls and intercellular connections. Here, we summarize the key factors in the fusion of these membranes during plant reproduction, and also focus on "non-gametic cell fusion," which was thought to be rare in plant tissue, in which each cell is separated by a cell wall. Copyright © 2016 Elsevier Ltd. All rights reserved.
Weber-aware weighted mutual information evaluation for infrared-visible image fusion
NASA Astrophysics Data System (ADS)
Luo, Xiaoyan; Wang, Shining; Yuan, Ding
2016-10-01
A performance metric for infrared and visible image fusion is proposed based on Weber's law. To indicate the stimulus of the source images, two Weber components are used: differential excitation, which reflects the spectral signal of the visible and infrared images, and orientation, which captures the scene structure. By comparing the corresponding Weber components in the infrared and visible images, source pixels can be marked with different dominant properties in intensity or structure. Pixels with the same dominant-property label are grouped to calculate the mutual information (MI) between the corresponding Weber components of the dominant source and fused images. The final fusion metric is then obtained by weighting the group-wise MI values according to the number of pixels in each group. Experimental results demonstrate that the proposed metric performs well on popular image fusion cases and outperforms other image fusion metrics.
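The final weighting step, combining group-wise mutual information (MI) values by the number of pixels in each group, can be sketched as follows. The histogram-based MI estimator, random data and grouping labels are simplified assumptions, not the exact formulation in the paper.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based MI estimate between two equally sized arrays."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def weighted_group_mi(dominant_src, fused, labels):
    """Weight per-group MI values by the fraction of pixels in each group.
    `labels` marks, for every pixel, which dominance group it belongs to."""
    total, score = labels.size, 0.0
    for g in np.unique(labels):
        mask = labels == g
        score += mask.sum() / total * mutual_information(dominant_src[mask], fused[mask])
    return score

# Illustrative example with random "images" and two dominance groups.
rng = np.random.default_rng(2)
src, fused = rng.random((64, 64)), rng.random((64, 64))
labels = (rng.random((64, 64)) > 0.5).astype(int)
print(weighted_group_mi(src, fused, labels))
```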
Structure of the uncleaved ectodomain of the paramyxovirus (hPIV3) fusion protein
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yin, Hsien-Sheng; Paterson, Reay G.; Wen, Xiaolin
2010-03-08
Class I viral fusion proteins share common mechanistic and structural features but little sequence similarity. Structural insights into the protein conformational changes associated with membrane fusion are based largely on studies of the influenza virus hemagglutinin in pre- and postfusion conformations. Here, we present the crystal structure of the secreted, uncleaved ectodomain of the paramyxovirus, human parainfluenza virus 3 fusion (F) protein, a member of the class I viral fusion protein group. The secreted human parainfluenza virus 3 F forms a trimer with distinct head, neck, and stalk regions. Unexpectedly, the structure reveals a six-helix bundle associated with the postfusion form of F, suggesting that the anchor-minus ectodomain adopts a conformation largely similar to the postfusion state. The transmembrane anchor domains of F may therefore profoundly influence the folding energetics that establish and maintain a metastable, prefusion state.
Yang, Jie; Yin, Yingying; Zhang, Zuping; Long, Jun; Dong, Jian; Zhang, Yuqun; Xu, Zhi; Li, Lei; Liu, Jie; Yuan, Yonggui
2018-02-05
Major depressive disorder (MDD) is characterized by dysregulation of distributed structural and functional networks. It is now recognized that structural and functional networks are related at multiple temporal scales. The recent emergence of multimodal fusion methods has made it possible to comprehensively and systematically investigate brain networks and thereby provide essential information for influencing disease diagnosis and prognosis. However, such investigations are hampered by the inconsistent dimensionality features between structural and functional networks. Thus, a semi-multimodal fusion hierarchical feature reduction framework is proposed. Feature reduction is a vital procedure in classification that can be used to eliminate irrelevant and redundant information and thereby improve the accuracy of disease diagnosis. Our proposed framework primarily consists of two steps. The first step considers the connection distances in both structural and functional networks between MDD and healthy control (HC) groups. By adding a constraint based on sparsity regularization, the second step fully utilizes the inter-relationship between the two modalities. However, in contrast to conventional multi-modality multi-task methods, the structural networks were considered to play only a subsidiary role in feature reduction and were not included in the following classification. The proposed method achieved a classification accuracy, specificity, sensitivity, and area under the curve of 84.91%, 88.6%, 81.29%, and 0.91, respectively. Moreover, the frontal-limbic system contributed the most to disease diagnosis. Importantly, by taking full advantage of the complementary information from multimodal neuroimaging data, the selected consensus connections may be highly reliable biomarkers of MDD. Copyright © 2017 Elsevier B.V. All rights reserved.
TMS combined with EEG in genetic generalized epilepsy: A phase II diagnostic accuracy study.
Kimiskidis, Vasilios K; Tsimpiris, Alkiviadis; Ryvlin, Philippe; Kalviainen, Reetta; Koutroumanidis, Michalis; Valentin, Antonio; Laskaris, Nikolaos; Kugiumtzis, Dimitris
2017-02-01
(A) To develop a TMS-EEG stimulation and data analysis protocol in genetic generalized epilepsy (GGE). (B) To investigate the diagnostic accuracy of TMS-EEG in GGE. Pilot experiments resulted in the development and optimization of a paired-pulse TMS-EEG protocol at rest, during hyperventilation (HV), and post-HV combined with multi-level data analysis. This protocol was applied in 11 controls (C) and 25 GGE patients (P), further dichotomized into responders to antiepileptic drugs (R, n=13) and non-responders (n-R, n=12). Features (n=57) extracted from TMS-EEG responses after multi-level analysis were given to a feature selection scheme and a Bayesian classifier, and the accuracy of assigning participants into the classes P-C and R-nR was computed. On the basis of the optimal feature subset, the cross-validated accuracy of TMS-EEG for the classification P-C was 0.86 at rest, 0.81 during HV and 0.92 at post-HV, whereas for R-nR the corresponding figures are 0.80, 0.78 and 0.65, respectively. Applying a fusion approach on all conditions resulted in an accuracy of 0.84 for the classification P-C and 0.76 for the classification R-nR. TMS-EEG can be used for diagnostic purposes and for assessing the response to antiepileptic drugs. TMS-EEG holds significant diagnostic potential in GGE. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
Intertumoral Heterogeneity within Medulloblastoma Subgroups.
Cavalli, Florence M G; Remke, Marc; Rampasek, Ladislav; Peacock, John; Shih, David J H; Luu, Betty; Garzia, Livia; Torchia, Jonathon; Nor, Carolina; Morrissy, A Sorana; Agnihotri, Sameer; Thompson, Yuan Yao; Kuzan-Fischer, Claudia M; Farooq, Hamza; Isaev, Keren; Daniels, Craig; Cho, Byung-Kyu; Kim, Seung-Ki; Wang, Kyu-Chang; Lee, Ji Yeoun; Grajkowska, Wieslawa A; Perek-Polnik, Marta; Vasiljevic, Alexandre; Faure-Conter, Cecile; Jouvet, Anne; Giannini, Caterina; Nageswara Rao, Amulya A; Li, Kay Ka Wai; Ng, Ho-Keung; Eberhart, Charles G; Pollack, Ian F; Hamilton, Ronald L; Gillespie, G Yancey; Olson, James M; Leary, Sarah; Weiss, William A; Lach, Boleslaw; Chambless, Lola B; Thompson, Reid C; Cooper, Michael K; Vibhakar, Rajeev; Hauser, Peter; van Veelen, Marie-Lise C; Kros, Johan M; French, Pim J; Ra, Young Shin; Kumabe, Toshihiro; López-Aguilar, Enrique; Zitterbart, Karel; Sterba, Jaroslav; Finocchiaro, Gaetano; Massimino, Maura; Van Meir, Erwin G; Osuka, Satoru; Shofuda, Tomoko; Klekner, Almos; Zollo, Massimo; Leonard, Jeffrey R; Rubin, Joshua B; Jabado, Nada; Albrecht, Steffen; Mora, Jaume; Van Meter, Timothy E; Jung, Shin; Moore, Andrew S; Hallahan, Andrew R; Chan, Jennifer A; Tirapelli, Daniela P C; Carlotti, Carlos G; Fouladi, Maryam; Pimentel, José; Faria, Claudia C; Saad, Ali G; Massimi, Luca; Liau, Linda M; Wheeler, Helen; Nakamura, Hideo; Elbabaa, Samer K; Perezpeña-Diazconti, Mario; Chico Ponce de León, Fernando; Robinson, Shenandoah; Zapotocky, Michal; Lassaletta, Alvaro; Huang, Annie; Hawkins, Cynthia E; Tabori, Uri; Bouffet, Eric; Bartels, Ute; Dirks, Peter B; Rutka, James T; Bader, Gary D; Reimand, Jüri; Goldenberg, Anna; Ramaswamy, Vijay; Taylor, Michael D
2017-06-12
While molecular subgrouping has revolutionized medulloblastoma classification, the extent of heterogeneity within subgroups is unknown. Similarity network fusion (SNF) applied to genome-wide DNA methylation and gene expression data across 763 primary samples identifies very homogeneous clusters of patients, supporting the presence of medulloblastoma subtypes. After integration of somatic copy-number alterations, and clinical features specific to each cluster, we identify 12 different subtypes of medulloblastoma. Integrative analysis using SNF further delineates group 3 from group 4 medulloblastoma, which is not as readily apparent through analyses of individual data types. Two clear subtypes of infants with Sonic Hedgehog medulloblastoma with disparate outcomes and biology are identified. Medulloblastoma subtypes identified through integrative clustering have important implications for stratification of future clinical trials. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Bruder, Daniel
2010-11-01
The DC Glow Discharge Exhibit is intended to demonstrate the effects a magnetic field produces on a plasma in a vacuum chamber. The display, which will be featured as a part of The Liberty Science Center's ``Energy Quest Exhibition,'' consists of a DC glow discharge tube and information panels to educate the general public on plasma and its relation to fusion energy. Wall posters and an information booklet will offer brief descriptions of fusion-based science and technology, and will portray plasma's role in the development of fusion as a viable source of energy. The display features a horse-shoe magnet on a movable track, allowing viewers to witness the effects of a magnetic field upon a plasma. The plasma is created from air within a vacuum averaging between 100-200 mTorr. Signage within the casing describes the hardware components. The display is pending delivery to The Liberty Science Center, and will replace a similar, older exhibit presently at the museum.
Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition
Ordóñez, Francisco Javier; Roggen, Daniel
2016-01-01
Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, outperforming some of the previously reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation. PMID:26797612
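A minimal PyTorch sketch of a convolutional-plus-LSTM architecture of the kind described above is shown below; the layer sizes, channel count and number of classes are illustrative assumptions, not the exact DeepConvLSTM configuration from the paper.

```python
import torch
import torch.nn as nn

class ConvLSTMHAR(nn.Module):
    """1-D convolutions over the time axis followed by an LSTM and a
    per-window classifier; input shape is (batch, time, sensor_channels)."""
    def __init__(self, n_channels=113, n_classes=18, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, hidden, num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                      # x: (B, T, C)
        x = self.conv(x.transpose(1, 2))       # (B, 64, T)
        out, _ = self.lstm(x.transpose(1, 2))  # (B, T, hidden)
        return self.fc(out[:, -1])             # classify from the last time step

model = ConvLSTMHAR()
dummy = torch.randn(8, 24, 113)                # 8 windows of 24 samples, 113 channels
print(model(dummy).shape)                      # torch.Size([8, 18])
```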
NASA Astrophysics Data System (ADS)
Li, Jing; Xie, Weixin; Pei, Jihong
2018-03-01
Sea-land segmentation is one of the key technologies of sea target detection in remote sensing images. At present, existing algorithms suffer from low accuracy, low universality and poor automation. This paper puts forward a sea-land segmentation algorithm based on multi-feature fusion for large-field remote sensing images that also removes islands. Firstly, the coastline data are extracted and all of the land area is labeled using the geographic information in the large-field remote sensing image. Secondly, three features (local entropy, local texture and local gradient mean) are extracted in the sea-land border area and combined into a 3D feature vector. A multi-Gaussian model is then adopted to describe the 3D feature vectors of the sea background near the coastline. Based on this multi-Gaussian sea background model, the sea and land pixels near the coastline are classified more precisely. Finally, the coarse segmentation result and the fine segmentation result are fused to obtain the accurate sea-land segmentation. Subjective visual comparison and analysis of the experimental results show that the proposed method has high segmentation accuracy, wide applicability and strong anti-disturbance ability.
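A rough sketch of building the per-pixel 3D feature vector (local entropy, local texture, local gradient mean) follows; the window size, the texture proxy (local standard deviation) and the random input are illustrative assumptions rather than the paper's exact definitions.

```python
import numpy as np
from scipy import ndimage

def local_features(img, size=9):
    """Return an (H, W, 3) array of [local entropy, local texture, local
    gradient mean] computed over a size x size neighbourhood."""
    img = img.astype(float)

    def entropy(window):
        hist, _ = np.histogram(window, bins=16, range=(0.0, 1.0))
        p = hist / hist.sum()
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    ent = ndimage.generic_filter(img, entropy, size=size)
    texture = ndimage.generic_filter(img, np.std, size=size)   # simple texture proxy
    grad = np.hypot(ndimage.sobel(img, 0), ndimage.sobel(img, 1))
    grad_mean = ndimage.uniform_filter(grad, size=size)
    return np.dstack([ent, texture, grad_mean])

feats = local_features(np.random.rand(64, 64))
print(feats.shape)   # (64, 64, 3)
```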
[MRI/TRUS fusion-guided prostate biopsy : Value in the context of focal therapy].
Franz, T; von Hardenberg, J; Blana, A; Cash, H; Baumunk, D; Salomon, G; Hadaschik, B; Henkel, T; Herrmann, J; Kahmann, F; Köhrmann, K-U; Köllermann, J; Kruck, S; Liehr, U-B; Machtens, S; Peters, I; Radtke, J P; Roosen, A; Schlemmer, H-P; Sentker, L; Wendler, J J; Witzsch, U; Stolzenburg, J-U; Schostak, M; Ganzer, R
2017-02-01
Several systems for MRI/TRUS fusion-guided biopsy of the prostate are commercially available. Many studies have shown superiority of fusion systems for tumor detection and diagnostic quality compared to random biopsy. The benefit of fusion systems in focal therapy of prostate cancer (PC) is less clear. Critical considerations of fusion systems for planning and monitoring of focal therapy of PC were investigated. A systematic literature review of available fusion systems for the period 2013-5/2016 was performed. A checklist of technical details, suitability for special anatomic situations and suitability for focal therapy was established by the German working group for focal therapy (Arbeitskreis fokale und Mikrotherapie). Eight fusion systems were considered (Artemis™, BioJet, BiopSee®, iSR´obot™ Mona Lisa, Hitachi HI-RVS, UroNav and Urostation®). Differences were found for biopsy mode (transrectal, perineal, both), fusion mode (elastic or rigid), navigation (image-based, electromagnetic sensor-based or mechanical sensor-based) and space requirements. Several consensus groups recommend fusion systems for focal therapy. Useful features are "needle tracking" and compatibility between fusion system and treatment device (available for Artemis™, BiopSee® and Urostation® with Focal One®; BiopSee®, Hitachi HI-RVS with NanoKnife®; BioJet, BiopSee® with cryoablation, brachytherapy). There are a few studies for treatment planning. However, studies on treatment monitoring after focal therapy are missing.
Structural health monitoring for ship structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrar, Charles; Park, Gyuhae; Angel, Marian
2009-01-01
Currently the Office of Naval Research is supporting the development of structural health monitoring (SHM) technology for U.S. Navy ship structures. This application is particularly challenging because of the physical size of these structures, the widely varying and often extreme operational and environmental conditions associated with these ships' missions, the lack of data from known damage conditions, limited sensing that was not designed specifically for SHM, and the management of the vast amounts of data that can be collected during a mission. This paper will first define a statistical pattern recognition paradigm for SHM by describing the four steps of (1) Operational Evaluation, (2) Data Acquisition, (3) Feature Extraction, and (4) Statistical Classification of Features as they apply to ship structures. Note that inherent in the last three steps of this process are the additional tasks of data cleansing, compression, normalization and fusion. The presentation will discuss ship-structure SHM challenges in the context of applying various SHM approaches to sea-trials data measured on an aluminum multi-hull high-speed ship, the HSV-2 Swift. To conclude, the paper will discuss several outstanding issues that need to be addressed before SHM can make the transition from a research topic to actual field applications on ship structures and suggest approaches for addressing these issues.
Qi, Luming; Liu, Honggao; Li, Jieqing; Li, Tao; Wang, Yuanzhong
2018-01-15
Origin traceability is an important step to control the nutritional and pharmacological quality of food products. Boletus edulis mushroom is a well-known food resource in the world. Its nutritional and medicinal properties vary drastically depending on geographical origin. In this study, three sensor systems (inductively coupled plasma atomic emission spectrophotometry (ICP-AES), ultraviolet-visible (UV-Vis) and Fourier transform mid-infrared (FT-MIR) spectroscopy) were applied for the origin traceability of 192 mushroom samples (caps and stipes) in combination with chemometrics. The difference between cap and stipe was clearly illustrated based on each single-sensor technique. Feature variables from the three instruments were used for origin traceability. Two supervised classification methods, partial least squares discriminant analysis (PLS-DA) and grid search support vector machine (GS-SVM), were applied to develop mathematical models. Two steps (internal cross-validation and external prediction for unknown samples) were used to evaluate the performance of a classification model. The result is satisfactory, with high accuracies ranging from 90.625% to 100%. These models also have an excellent generalization ability with the optimal parameters. Based on the combination of the three sensor systems, our study provides a multi-sensor and comprehensive origin traceability of B. edulis mushrooms.
Qi, Luming; Liu, Honggao; Li, Jieqing; Li, Tao
2018-01-01
Origin traceability is an important step to control the nutritional and pharmacological quality of food products. Boletus edulis mushroom is a well-known food resource in the world. Its nutritional and medicinal properties vary drastically depending on geographical origin. In this study, three sensor systems (inductively coupled plasma atomic emission spectrophotometry (ICP-AES), ultraviolet-visible (UV-Vis) and Fourier transform mid-infrared (FT-MIR) spectroscopy) were applied for the origin traceability of 184 mushroom samples (caps and stipes) in combination with chemometrics. The difference between cap and stipe was clearly illustrated based on each single-sensor technique. Feature variables from the three instruments were used for origin traceability. Two supervised classification methods, partial least squares discriminant analysis (PLS-DA) and grid search support vector machine (GS-SVM), were applied to develop mathematical models. Two steps (internal cross-validation and external prediction for unknown samples) were used to evaluate the performance of a classification model. The result is satisfactory, with high accuracies ranging from 90.625% to 100%. These models also have an excellent generalization ability with the optimal parameters. Based on the combination of the three sensor systems, our study provides a multi-sensor and comprehensive origin traceability of B. edulis mushrooms. PMID:29342969
Versatile fusion source integrator AFSI for fast ion and neutron studies in fusion devices
NASA Astrophysics Data System (ADS)
Sirén, Paula; Varje, Jari; Äkäslompolo, Simppa; Asunta, Otto; Giroud, Carine; Kurki-Suonio, Taina; Weisen, Henri; JET Contributors, The
2018-01-01
ASCOT Fusion Source Integrator AFSI, an efficient tool for calculating fusion reaction rates and characterizing the fusion products, based on arbitrary reactant distributions, has been developed and is reported in this paper. Calculation of reactor-relevant D-D, D-T and D-3He fusion reactions has been implemented based on the Bosch-Hale fusion cross sections. The reactions can be calculated between arbitrary particle populations, including Maxwellian thermal particles and minority energetic particles. Reaction rate profiles, energy spectra and full 4D phase space distributions can be calculated for the non-isotropic reaction products. The code is especially suitable for integrated modelling in self-consistent plasma physics simulations as well as in the Serpent neutronics calculation chain. Validation of the model has been performed for neutron measurements at the JET tokamak and the code has been applied to predictive simulations in ITER.
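The core of such a source integrator is evaluating the fusion reaction rate between two arbitrary reactant velocity distributions. The Monte Carlo sketch below samples two Maxwellian populations and averages sigma(E)*v_rel; the placeholder cross-section function is a non-physical assumption for illustration and would be replaced by a real parameterization (e.g. Bosch-Hale) in practice.

```python
import numpy as np

def reaction_rate_mc(sample_v1, sample_v2, sigma_of_energy, mu, n_samples=200_000):
    """Monte Carlo estimate of the rate coefficient <sigma*v> between two
    reactant populations. sample_v1/sample_v2 draw (n, 3) velocity samples
    [m/s]; sigma_of_energy maps centre-of-mass energy [J] to a cross section
    [m^2]; mu is the reduced mass [kg]. Returns <sigma*v> in m^3/s."""
    v1, v2 = sample_v1(n_samples), sample_v2(n_samples)
    v_rel = np.linalg.norm(v1 - v2, axis=1)
    e_cm = 0.5 * mu * v_rel**2
    return float(np.mean(sigma_of_energy(e_cm) * v_rel))

def maxwellian_sampler(T_joule, mass, seed=None):
    """Return a function drawing isotropic Maxwellian velocity samples."""
    sd = np.sqrt(T_joule / mass)
    rng = np.random.default_rng(seed)
    return lambda n: rng.normal(scale=sd, size=(n, 3))

# Illustrative (non-physical) placeholder cross section; a real study would
# substitute a Bosch-Hale style parameterization here.
sigma_demo = lambda e_cm: 1e-28 * np.exp(-1.0e-8 / np.sqrt(e_cm + 1e-30))

m_d, m_t = 3.344e-27, 5.008e-27          # deuteron and triton masses [kg]
T = 10e3 * 1.602e-19                     # 10 keV expressed in joules
rate = reaction_rate_mc(maxwellian_sampler(T, m_d, seed=1),
                        maxwellian_sampler(T, m_t, seed=2),
                        sigma_demo, mu=m_d * m_t / (m_d + m_t))
print(f"<sigma*v> ~ {rate:.3e} m^3/s  (demo cross section only)")
```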
Multisensor Parallel Largest Ellipsoid Distributed Data Fusion with Unknown Cross-Covariances
Liu, Baoyu; Zhan, Xingqun; Zhu, Zheng H.
2017-01-01
As the largest ellipsoid (LE) data fusion algorithm can only be applied to a two-sensor system, in this contribution a parallel fusion structure is proposed to introduce the LE algorithm into a multisensor system with unknown cross-covariances, and three parallel fusion structures based on different estimate pairing methods are presented and analyzed. In order to assess the influence of the fusion structure on fusion performance, two fusion performance assessment parameters are defined: Fusion Distance and Fusion Index. Moreover, the formula for calculating the upper bounds of the actual fused error covariances of the presented multisensor LE fusers is also provided. As demonstrated with simulation examples, the Fusion Index indicates a fuser's actual fused accuracy and its sensitivity to the sensor order, as well as its robustness to the accuracy of newly added sensors. Compared with the LE fuser with a sequential structure, the LE fusers with the proposed parallel structures not only significantly improve their properties in these aspects, but also achieve better performance in consistency and computational efficiency. The presented multisensor LE fusers generally have better accuracy than the covariance intersection (CI) fusion algorithm and are consistent when the local estimates are weakly correlated. PMID:28661442
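For reference, the covariance intersection (CI) rule used as a comparator above fuses two estimates with unknown cross-covariance by forming a convex combination of their information matrices. A minimal sketch follows; the grid search over the weight omega (minimizing the trace of the fused covariance) is one common choice, not necessarily the one used in the paper.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """Fuse two estimates with unknown cross-covariance using CI:
    P^-1 = w*P1^-1 + (1-w)*P2^-1, with w chosen to minimise trace(P)."""
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        info = w * np.linalg.inv(P1) + (1.0 - w) * np.linalg.inv(P2)
        P = np.linalg.inv(info)
        x = P @ (w * np.linalg.inv(P1) @ x1 + (1.0 - w) * np.linalg.inv(P2) @ x2)
        if best is None or np.trace(P) < np.trace(best[1]):
            best = (x, P)
    return best

x1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([1.2, 0.3]), np.diag([3.0, 1.0])
x_ci, P_ci = covariance_intersection(x1, P1, x2, P2)
print(x_ci, np.trace(P_ci))
```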
Intratumor heterogeneity of DCE-MRI reveals Ki-67 proliferation status in breast cancer
NASA Astrophysics Data System (ADS)
Cheng, Hu; Fan, Ming; Zhang, Peng; Liu, Bin; Shao, Guoliang; Li, Lihua
2018-03-01
Breast cancer is a highly heterogeneous disease both biologically and clinically, and certain pathologic parameters, such as Ki-67 expression, are useful in predicting patient prognosis. The aim of this study is to characterize intratumor heterogeneity of breast cancer for predicting Ki-67 proliferation status in estrogen receptor (ER)-positive breast cancer patients. A dataset of 77 patients who underwent dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) examination was collected; 51 of these patients showed high Ki-67 expression and 26 showed low Ki-67 expression. We partitioned each breast tumor into subregions using two methods, based on the values of time to peak (TTP) and peak enhancement rate (PER). Within each tumor subregion, statistical and morphological image features were extracted from DCE-MRI. Classification models were applied to each region separately to assess whether classifiers based on features extracted from the various subregions differ in predictive performance. The area under the receiver operating characteristic curve (AUC) was computed using leave-one-out cross-validation (LOOCV). The classifier using features from the region with moderate time to peak achieved the best performance, with an AUC of 0.826, compared with classifiers based on the other regions. Using a multi-classifier fusion method, the AUC was significantly (P=0.03) increased to 0.858+/-0.032, compared with an AUC of 0.778 for the classifier using features from the entire tumor. These results demonstrate that features reflecting heterogeneity across intratumoral subregions can improve classifier performance for predicting Ki-67 proliferation status relative to using features from the entire tumor alone.
Unsupervised Metric Fusion Over Multiview Data by Graph Random Walk-Based Cross-View Diffusion.
Wang, Yang; Zhang, Wenjie; Wu, Lin; Lin, Xuemin; Zhao, Xiang
2017-01-01
Learning an ideal metric is crucial to many tasks in computer vision. Diverse feature representations can address this problem from different aspects: visual data objects described by multiple features can be decomposed into multiple views, which often provide complementary information. In this paper, we propose a cross-view fusion algorithm that leads to a similarity metric for multiview data by systematically fusing multiple similarity measures. Unlike existing paradigms, we focus on learning a distance measure by exploiting a graph structure over the data samples, where an input similarity matrix can be improved through the propagation of a graph random walk. In particular, we construct multiple graphs, each corresponding to an individual view, and a cross-view fusion approach based on graph random walk is presented to derive an optimal distance measure by fusing the multiple metrics. Our method is scalable to large amounts of data by enforcing sparsity through an anchor graph representation. To adaptively control the effects of different views, we dynamically learn view-specific coefficients, which are leveraged in the graph random walk to balance the multiple views. However, such a strategy may lead to an over-smooth similarity metric in which affinities between dissimilar samples are enlarged by excessively conducting cross-view fusion. Thus, we devise a heuristic approach for controlling the number of iterations in the fusion process in order to avoid over-smoothing. Extensive experiments conducted on real-world data sets validate the effectiveness and efficiency of our approach.
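A simplified sketch of the cross-view diffusion idea, in which each view's similarity matrix is propagated through the other view's random-walk (transition) matrix, is given below. The row-normalisation, the fixed iteration count and the toy data are assumptions for illustration, not the paper's exact update (which also involves anchor graphs and learned view coefficients).

```python
import numpy as np

def row_normalize(W):
    return W / W.sum(axis=1, keepdims=True)

def cross_view_diffusion(W1, W2, n_iter=5):
    """Iteratively diffuse each view's similarity matrix through the other
    view's random-walk matrix, then average the two results."""
    P1, P2 = row_normalize(W1), row_normalize(W2)
    S1, S2 = P1.copy(), P2.copy()
    for _ in range(n_iter):             # too many iterations over-smooths the metric
        S1, S2 = P1 @ S2 @ P1.T, P2 @ S1 @ P2.T
    return (S1 + S2) / 2.0

rng = np.random.default_rng(3)
X1, X2 = rng.random((20, 5)), rng.random((20, 8))      # two views of 20 samples
W1 = np.exp(-np.square(X1[:, None] - X1[None]).sum(-1))
W2 = np.exp(-np.square(X2[:, None] - X2[None]).sum(-1))
fused = cross_view_diffusion(W1, W2)
print(fused.shape)   # (20, 20) fused similarity matrix
```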
Loose fusion based on SLAM and IMU for indoor environment
NASA Astrophysics Data System (ADS)
Zhu, Haijiang; Wang, Zhicheng; Zhou, Jinglin; Wang, Xuejing
2018-04-01
The simultaneous localization and mapping (SLAM) method based on RGB-D sensors has been widely researched in recent years. However, the accuracy of RGB-D SLAM relies heavily on corresponding feature points, and the position can be lost in scenes with sparse texture. Therefore, many fusion methods using RGB-D information and inertial measurement unit (IMU) data have been investigated to improve the accuracy of SLAM systems. However, these fusion methods usually do not take into account the number of matched feature points: the pose estimated from RGB-D information may not be accurate when the number of correct matches is too small. Thus, considering the impact of matches on the SLAM system and the problem of lost position in scenes with few textures, a loose fusion method combining RGB-D with IMU is proposed in this paper. In the proposed method, we design a loose fusion strategy based on the RGB-D camera information and IMU data, in which the IMU data are used for position estimation when the corresponding point matches are too few, while the RGB-D information is used to estimate position when there are enough matches. The final pose is optimized within the General Graph Optimization (g2o) framework to reduce error. The experimental results show that the proposed method outperforms the RGB-D-only method and can work stably in indoor environments with sparse textures.
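A bare-bones sketch of the switching logic described above: use the visual (RGB-D) pose estimate when enough feature matches are available, otherwise propagate the pose from IMU data. The match threshold, the simplified [x, y, yaw] pose and the crude dead-reckoning step are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

MIN_MATCHES = 30   # assumed threshold below which visual pose estimation is unreliable

def fuse_pose(prev_pose, visual_pose, n_matches, imu_accel, imu_omega, dt):
    """Loose fusion: trust the visual (RGB-D) pose when matches are plentiful,
    otherwise dead-reckon from the previous pose using IMU measurements.
    Poses are simplified to [x, y, yaw]; imu_accel is body-frame acceleration."""
    if n_matches >= MIN_MATCHES:
        return visual_pose                         # RGB-D estimate is used directly
    x, y, yaw = prev_pose
    yaw += imu_omega * dt                          # integrate the angular rate
    x += 0.5 * imu_accel[0] * dt**2 * np.cos(yaw)  # crude double integration of acceleration
    y += 0.5 * imu_accel[0] * dt**2 * np.sin(yaw)
    return np.array([x, y, yaw])

pose = np.zeros(3)
pose = fuse_pose(pose, np.array([0.10, 0.02, 0.05]), n_matches=120,
                 imu_accel=[0.2, 0.0], imu_omega=0.01, dt=0.033)
pose = fuse_pose(pose, None, n_matches=8,          # texture-poor frame: fall back to IMU
                 imu_accel=[0.2, 0.0], imu_omega=0.01, dt=0.033)
print(pose)
```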
Zwolak, Pawel; Farei-Campagna, Jan; Jentzsch, Thorsten; von Rechenberg, Brigitte; Werner, Clément M
2018-01-01
Posterolateral spinal fusion is a common orthopaedic surgery performed to treat degenerative and traumatic deformities of the spinal column. In posterolateral spinal fusion, different osteoinductive demineralized bone matrix products have been previously investigated. We evaluated the effect of locally applied zoledronic acid in combination with commercially available demineralized bone matrix putty on new bone formation in posterolateral spinal fusion in a murine in vivo model. A posterolateral sacral spine fusion murine model was used to evaluate new bone formation, modeling the clinical situation in which a bone graft or demineralized bone matrix is applied after dorsal instrumentation of the spine. In our study, group 1 received decortication only (n = 10), group 2 received decortication and an absorbable collagen sponge carrier, group 3 received decortication and an absorbable collagen sponge carrier with zoledronic acid at a dose of 10 µg, group 4 received demineralized bone matrix putty (DBM putty) plus decortication (n = 10), and group 5 received DBM putty, decortication and locally applied zoledronic acid at a dose of 10 µg. Imaging was performed using micro-CT to assess new bone formation, and murine spines were harvested for histopathological analysis 10 weeks after surgery. The surgery, performed through a midline posterior approach, was reproducible. In the group with decortication alone there was no new bone formation. Application of demineralized bone matrix putty alone produced new bone formation that bridged the S1-S4 laminae. Local application of zoledronic acid with demineralized bone matrix putty resulted in a significant increase in new bone formation compared with demineralized bone matrix putty alone. A single local application of zoledronic acid with DBM putty during posterolateral fusion in this sacral murine spine model significantly increased new bone formation in situ. Therefore, our results justify further investigations to potentially use local application of zoledronic acid in future clinical studies.
A sensitive HIV-1 envelope induced fusion assay identifies fusion enhancement of thrombin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, De-Chun; Zhong, Guo-Cai; Su, Ju-Xiang
2010-01-22
To evaluate the interaction between HIV-1 envelope glycoprotein (Env) and target cell receptors, various cell-cell-fusion assays have been developed. In the present study, we established a novel fusion system. In this system, the expression of the sensitive reporter gene, firefly luciferase (FL) gene, in the target cells was used to evaluate cell fusion event. Simultaneously, constitutively expressed Renilla luciferase (RL) gene was used to monitor effector cell number and viability. FL gave a wider dynamic range than other known reporters and the introduction of RL made the assay accurate and reproducible. This system is especially beneficial for investigation of potential entry-influencing agents, for its power of ruling out the false inhibition or enhancement caused by the artificial cell-number variation. As a case study, we applied this fusion system to observe the effect of a serine protease, thrombin, on HIV Env-mediated cell-cell fusion and have found the fusion enhancement activity of thrombin over two R5-tropic HIV strains.
Yang Li; Wei Liang; Yinlong Zhang; Haibo An; Jindong Tan
2016-08-01
Automatic and accurate lumbar vertebra detection is an essential step of image-guided minimally invasive spine surgery (IG-MISS). However, traditional methods still require human intervention due to the similarity of vertebrae, abnormal pathological conditions and uncertain imaging angles. In this paper, we present a novel convolutional neural network (CNN) model to automatically detect lumbar vertebrae in C-arm X-ray images. Training data are augmented by DRR, and automatic segmentation of the ROI reduces the computational complexity. Furthermore, a feature fusion deep learning (FFDL) model is introduced to combine two types of features of lumbar vertebra X-ray images, using a Sobel kernel and a Gabor kernel to obtain the contour and texture of the lumbar vertebrae, respectively. Comprehensive qualitative and quantitative experiments demonstrate that our proposed model performs more accurately in abnormal cases with pathologies and surgical implants in multi-angle views.
Processing LiDAR Data to Predict Natural Hazards
NASA Technical Reports Server (NTRS)
Fairweather, Ian; Crabtree, Robert; Hager, Stacey
2008-01-01
ELF-Base and ELF-Hazards (wherein 'ELF' signifies 'Extract LiDAR Features' and 'LiDAR' signifies 'light detection and ranging') are developmental software modules for processing remote-sensing LiDAR data to identify past natural hazards (principally, landslides) and predict future ones. ELF-Base processes raw LiDAR data, including LiDAR intensity data that are often ignored in other software, to create digital terrain models (DTMs) and digital feature models (DFMs) with sub-meter accuracy. ELF-Hazards fuses raw LiDAR data, data from multispectral and hyperspectral optical images, and DTMs and DFMs generated by ELF-Base to generate hazard risk maps. Advanced algorithms in these software modules include line-enhancement and edge-detection algorithms, surface-characterization algorithms, and algorithms that implement innovative data-fusion techniques. The line-extraction and edge-detection algorithms enable users to locate such features as faults and landslide headwall scarps. Also implemented in this software are improved methodologies for identification and mapping of past landslide events by use of (1) accurate, ELF-derived surface characterizations and (2) three LiDAR/optical-data-fusion techniques: post-classification data fusion, maximum-likelihood estimation modeling, and hierarchical within-class discrimination. This software is expected to enable faster, more accurate forecasting of natural hazards than has previously been possible.
Estimating workload using EEG spectral power and ERPs in the n-back task
NASA Astrophysics Data System (ADS)
Brouwer, Anne-Marie; Hogervorst, Maarten A.; van Erp, Jan B. F.; Heffelaar, Tobias; Zimmerman, Patrick H.; Oostenveld, Robert
2012-08-01
Previous studies indicate that both electroencephalogram (EEG) spectral power (in particular the alpha and theta band) and event-related potentials (ERPs) (in particular the P300) can be used as a measure of mental work or memory load. We compare their ability to estimate workload level in a well-controlled task. In addition, we combine both types of measures in a single classification model to examine whether this results in higher classification accuracy than either one alone. Participants watched a sequence of visually presented letters and indicated whether or not the current letter was the same as the one (n instances) before. Workload was varied by varying n. We developed different classification models using ERP features, frequency power features or a combination (fusion). Training and testing of the models simulated an online workload estimation situation. All our ERP, power and fusion models provide classification accuracies between 80% and 90% when distinguishing between the highest and the lowest workload condition after 2 min. For 32 out of 35 participants, classification was significantly higher than chance level after 2.5 s (or one letter) as estimated by the fusion model. Differences between the models are rather small, though the fusion model performs better than the other models when only short data segments are available for estimating workload.
Scholkmann, Felix; Revol, Vincent; Kaufmann, Rolf; Baronowski, Heidrun; Kottler, Christian
2014-03-21
This paper introduces a new image denoising, fusion and enhancement framework for combining and optimally visualizing x-ray attenuation contrast (AC), differential phase contrast (DPC) and dark-field contrast (DFC) images retrieved from x-ray Talbot-Lau grating interferometry. The new image fusion framework comprises three steps: (i) denoising each input image (AC, DPC and DFC) through adaptive Wiener filtering, (ii) performing a two-step image fusion process based on the shift-invariant wavelet transform, i.e. first fusing the AC with the DPC image and then fusing the resulting image with the DFC image, and finally (iii) enhancing the fused image using adaptive histogram equalization, adaptive sharpening and contrast optimization. Application examples are presented for two biological objects (a human tooth and a cherry) and the proposed method is compared with two recently published AC/DPC/DFC image processing techniques. In conclusion, the new framework for processing AC, DPC and DFC images allows the most relevant features of all three images to be combined in one image while reducing noise and adaptively enhancing the relevant image features. The newly developed framework may be used in technical and medical applications.
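A compressed sketch of the three-stage pipeline (Wiener denoising, wavelet-domain fusion, adaptive histogram equalization) is shown below. For brevity it uses an ordinary discrete wavelet transform rather than the shift-invariant transform from the paper, fuses only two images, and relies on SciPy, PyWavelets and scikit-image; all parameter choices and the random stand-in images are illustrative assumptions.

```python
import numpy as np
import pywt
from scipy.signal import wiener
from skimage import exposure

def fuse_two(im_a, im_b, wavelet="db2", level=2):
    """Wavelet-domain fusion: average the approximation coefficients and keep
    the larger-magnitude detail coefficients of two same-sized images."""
    ca = pywt.wavedec2(im_a, wavelet, level=level)
    cb = pywt.wavedec2(im_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)

rng = np.random.default_rng(4)
ac, dpc = rng.random((128, 128)), rng.random((128, 128))   # stand-ins for AC and DPC images

ac_d, dpc_d = wiener(ac, mysize=5), wiener(dpc, mysize=5)   # (i) denoise each input
fused = fuse_two(ac_d, dpc_d)                               # (ii) fuse in the wavelet domain
fused = (fused - fused.min()) / (np.ptp(fused) + 1e-12)     # rescale to [0, 1]
enhanced = exposure.equalize_adapthist(fused)               # (iii) adaptive histogram equalization
print(enhanced.shape)
```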
Ignition threshold for non-Maxwellian plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hay, Michael J., E-mail: hay@princeton.edu; Fisch, Nathaniel J.; Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543
2015-11-15
An optically thin p-{sup 11}B plasma loses more energy to bremsstrahlung than it gains from fusion reactions, unless the ion temperature can be elevated above the electron temperature. In thermal plasmas, the temperature differences required are possible in small Coulomb logarithm regimes, characterized by high density and low temperature. Ignition could be reached more easily if the fusion reactivity can be improved with nonthermal ion distributions. To establish an upper bound for the potential utility of a nonthermal distribution, we consider a monoenergetic beam with particle energy selected to maximize the beam-thermal reactivity. Comparing deuterium-tritium (DT) and p-{sup 11}B, the minimum Lawson criteria and minimum ρR required for inertial confinement fusion (ICF) volume ignition are calculated with and without the nonthermal feature. It turns out that channeling fusion alpha energy to maintain such a beam facilitates ignition at lower densities and ρR, improves reactivity at constant pressure, and could be used to remove helium ash. On the other hand, while the reactivity gains that could be realized in DT plasmas are significant, the excess electron density in p-{sup 11}B plasmas increases the recirculated power cost to maintain a nonthermal feature and thereby constrains its utility to ash removal.
Saavoss, Josh D; Koenig, Lane; Cher, Daniel J
2016-01-01
Sacroiliac joint (SIJ) dysfunction is associated with a marked decrease in quality of life. Increasing evidence supports minimally invasive SIJ fusion as a safe and effective procedure for the treatment of chronic SIJ dysfunction. The impact of SIJ fusion on worker productivity is not known. Regression modeling using data from the National Health Interview Survey was applied to determine the relationship between responses to selected interview questions related to function and economic outcomes. Regression coefficients were then applied to prospectively collected, individual patient data in a randomized trial of SIJ fusion (INSITE, NCT01681004) to estimate expected differences in economic outcomes across treatments. Patients who receive SIJ fusion using the iFuse Implant System® have an expected increase in the probability of working of 16% (95% confidence interval [CI] 11%-21%) relative to nonsurgical patients. The expected change in earnings across groups was US $3,128 (not statistically significant). Combining the two metrics, the annual increase in worker productivity given surgical vs nonsurgical care was $6,924 (95% CI $1,890-$11,945). For employees with chronic, severe SIJ dysfunction, minimally invasive SIJ fusion may improve worker productivity compared to nonsurgical treatment.
A Comparative Analysis of Spatiotemporal Data Fusion Models for Landsat and MODIS Data
NASA Astrophysics Data System (ADS)
Hazaymeh, K.; Almagbile, A.
2018-04-01
In this study, three documented spatiotemporal data fusion models were applied to Landsat-7 and MODIS surface reflectance and NDVI. The algorithms included the spatial and temporal adaptive reflectance fusion model (STARFM), the sparse-representation-based spatiotemporal reflectance fusion model (SPSTFM), and the spatiotemporal image-fusion model (STI-FM). The objectives of this study were to (i) compare the performance of these three fusion models using one Landsat-MODIS spectral reflectance image pair and time-series datasets from the Coleambally irrigation area in Australia, and (ii) quantitatively evaluate the accuracy of the synthetic images generated from each fusion model using statistical measurements. Results showed that the three fusion models predicted the synthetic Landsat-7 image with adequate agreement. The STI-FM produced more accurate reconstructions of both the Landsat-7 spectral bands and NDVI. Furthermore, it produced surface reflectance images having the highest correlation with the actual Landsat-7 images. This study indicates that STI-FM would be more suitable for spatiotemporal data fusion applications such as vegetation monitoring, drought monitoring, and evapotranspiration.
Systematic identification and analysis of frequent gene fusion events in metabolic pathways
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henry, Christopher S.; Lerma-Ortiz, Claudia; Gerdes, Svetlana Y.
2016-06-24
Here, gene fusions are the most powerful type of in silico-derived functional associations. However, many fusion compilations were made when <100 genomes were available, and algorithms for identifying fusions need updating to handle the current avalanche of sequenced genomes. The availability of a large fusion dataset would help probe functional associations and enable systematic analysis of where and why fusion events occur. As a result, here we present a systematic analysis of fusions in prokaryotes. We manually generated two training sets: (i) 121 fusions in the model organism Escherichia coli; (ii) 131 fusions found in B vitamin metabolism. These sets were used to develop a fusion prediction algorithm that captured the training set fusions with only 7% false negatives and 50% false positives, a substantial improvement over existing approaches. This algorithm was then applied to identify 3.8 million potential fusions across 11,473 genomes. The results of the analysis are available in a searchable database. A functional analysis identified 3,000 reactions associated with frequent fusion events and revealed areas of metabolism where fusions are particularly prevalent. In conclusion, customary definitions of fusions were shown to be ambiguous, and a stricter one was proposed. Exploring the genes participating in fusion events showed that they most commonly encode transporters, regulators, and metabolic enzymes. The major rationales for fusions between metabolic genes appear to be overcoming pathway bottlenecks, avoiding toxicity, controlling competing pathways, and facilitating expression and assembly of protein complexes. Finally, our fusion dataset provides powerful clues to decipher the biological activities of domains of unknown function.
Adjoint affine fusion and tadpoles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urichuk, Andrew, E-mail: andrew.urichuk@uleth.ca; Walton, Mark A., E-mail: walton@uleth.ca; International School for Advanced Studies
2016-06-15
We study affine fusion with the adjoint representation. For simple Lie algebras, elementary and universal formulas determine the decomposition of a tensor product of an integrable highest-weight representation with the adjoint representation. Using the (refined) affine depth rule, we prove that equally striking results apply to adjoint affine fusion. For diagonal fusion, a coefficient equals the number of nonzero Dynkin labels of the relevant affine highest weight, minus 1. A nice lattice-polytope interpretation follows and allows the straightforward calculation of the genus-1 1-point adjoint Verlinde dimension, the adjoint affine fusion tadpole. Explicit formulas, (piecewise) polynomial in the level, are written for the adjoint tadpoles of all classical Lie algebras. We show that off-diagonal adjoint affine fusion is obtained from the corresponding tensor product by simply dropping non-dominant representations.
Pairwise diversity ranking of polychotomous features for ensemble physiological signal classifiers.
Gupta, Lalit; Kota, Srinivas; Molfese, Dennis L; Vaidyanathan, Ravi
2013-06-01
It is well known that fusion classifiers for physiological signal classification with diverse components (classifiers or data sets) outperform those with less diverse components. Determining component diversity, therefore, is of the utmost importance in the design of fusion classifiers that are often employed in clinical diagnostic and numerous other pattern recognition problems. In this article, a new pairwise diversity-based ranking strategy is introduced to select a subset of ensemble components, which when combined will be more diverse than any other component subset of the same size. The strategy is unified in the sense that the components can be classifiers or data sets. Moreover, the classifiers and data sets can be polychotomous. Classifier-fusion and data-fusion systems are formulated based on the diversity-based selection strategy, and the application of the two fusion strategies is demonstrated through the classification of multichannel event-related potentials. It is observed that for both classifier and data fusion, the classification accuracy tends to increase/decrease when the diversity of the component ensemble increases/decreases. For the four sets of 14-channel event-related potentials considered, it is shown that data fusion outperforms classifier fusion. Furthermore, it is demonstrated that the combination of data components that yields the best performance, in a relative sense, can be determined through the diversity-based selection strategy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Shijia, E-mail: wangsg@mail.ustc.edu.cn; Wang, Shaojie
2015-04-15
The evolution of the plasma temperature and density in an international thermonuclear experimental reactor (ITER)-like fusion device has been studied by numerically solving the energy transport equation coupled with the particle transport equation. The effect of particle pinch, which depends on the magnetic curvature and the safety factor, has been taken into account. The plasma is primarily heated by the alpha particles which are produced by the deuterium-tritium fusion reactions. A semi-empirical method, which adopts the ITERH-98P(y,2) scaling law, has been used to evaluate the transport coefficients. The fusion performances (the fusion energy gain factor, Q) similar to the ITER inductive scenario and non-inductive scenario (with reversed magnetic shear) are obtained. It is shown that the particle pinch has significant effects on the fusion performance and profiles of a fusion reactor. When the volume-averaged density is fixed, particle pinch can lower the pedestal density by ∼30%, with the Q value and the central pressure almost unchanged. When the particle source or the pedestal density is fixed, the particle pinch can significantly enhance the Q value by 60%, with the central pressure also significantly raised.
Physiological and molecular triggers for SARS-CoV membrane fusion and entry into host cells.
Millet, Jean Kaoru; Whittaker, Gary R
2018-04-01
During viral entry, enveloped viruses require the fusion of their lipid envelope with host cell membranes. For coronaviruses, this critical step is governed by the virally-encoded spike (S) protein, a class I viral fusion protein that has several unique features. Coronavirus entry is unusual in that it is often biphasic in nature, and can occur at or near the cell surface or in late endosomes. Recent advances in the structural, biochemical and molecular biology of the coronavirus S protein have shed light on the intricacies of coronavirus entry, in particular the molecular triggers of coronavirus S-mediated membrane fusion. Furthermore, characterization of the coronavirus fusion peptide (FP), the segment of the fusion protein that inserts into a target lipid bilayer during membrane fusion, has revealed its particular attributes, which impart some of the unusual properties of the S protein, such as Ca2+-dependency. These unusual characteristics can explain at least in part the biphasic nature of coronavirus entry. In this review, using severe acute respiratory syndrome coronavirus (SARS-CoV) as a model virus, we give an overview of advances in research on the coronavirus fusion peptide with an emphasis on its role and properties within the biological context of host cell entry. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline; Laporte, Nadine; Netanyahuy, Nathan S.; Zukor, Dorothy (Technical Monitor)
2001-01-01
The characterization and mapping of land cover/land use of forest areas, such as the Central African rainforest, is a very complex task. This complexity is mainly due to the extent of such areas and, as a consequence, to the lack of full and continuous cloud-free coverage of those large regions by one single remote sensing instrument. In order to provide improved vegetation maps of Central Africa and to develop forest monitoring techniques for applications at the local and regional scales, we propose to utilize multi-sensor remote sensing observations coupled with in-situ data. Fusion and clustering of multi-sensor data are the first steps towards the development of such a forest monitoring system. In this paper, we describe some preliminary experiments involving the fusion of SAR and Landsat image data of the Lope Reserve in Gabon. Similarly to previous fusion studies, our fusion method is wavelet-based. The fusion provides a new image data set which contains more detailed texture features and preserves the large homogeneous regions that are observed by the Thematic Mapper sensor. The fusion step is followed by unsupervised clustering and provides a vegetation map of the area.
Multiclassifier information fusion methods for microarray pattern recognition
NASA Astrophysics Data System (ADS)
Braun, Jerome J.; Glina, Yan; Judson, Nicholas; Herzig-Marx, Rachel
2004-04-01
This paper addresses automatic recognition of microarray patterns, a capability that could have a major significance for medical diagnostics, enabling development of diagnostic tools for automatic discrimination of specific diseases. The paper presents multiclassifier information fusion methods for microarray pattern recognition. The input space partitioning approach, based on fitness measures that constitute an a-priori gauging of classification efficacy for each subspace, is investigated. Methods for generation of fitness measures, generation of input subspaces and their use in the multiclassifier fusion architecture are presented. In particular, two-level quantification of fitness that accounts for the quality of each subspace as well as the quality of individual neighborhoods within the subspace is described. Individual-subspace classifiers are Support Vector Machine based. The decision fusion stage fuses the information from multiple SVMs along with the multi-level fitness information. Final decision fusion stage techniques, including weighted fusion as well as Dempster-Shafer theory based fusion, are investigated. It should be noted that while the above methods are discussed in the context of microarray pattern recognition, they are applicable to a broader range of discrimination problems, in particular to problems involving a large number of information sources irreducible to a low-dimensional feature space.
Study on polarization image methods in turbid medium
NASA Astrophysics Data System (ADS)
Fu, Qiang; Mo, Chunhe; Liu, Boyu; Duan, Jin; Zhang, Su; Zhu, Yong
2014-11-01
Polarization imaging provides multi-dimensional polarization information in addition to conventional intensity imagery, and thus improves the probability of target detection and recognition. Applying image fusion to polarization images of targets in turbid media helps to obtain high quality images. Using laser polarization imaging at visible wavelengths, linear polarization intensities were acquired by rotating the angle of a polarizer, and the polarization parameters of targets immersed in turbid media with concentrations ranging from 5% to 10% were obtained. Different polarization image fusion methods were then applied to the acquired images; the methods offering superior performance in turbid media are discussed, and the processing results and data tables are given. Pixel-level, feature-level, and decision-level fusion algorithms were used to fuse the degree of linear polarization (DOLP) images. The results show that as the polarization angle increases, the polarization images become increasingly blurred and their quality degrades, whereas the contrast of the fused image is clearly improved over that of a single image; the reasons for the increase in image contrast are analyzed.
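To make the quantities above concrete, the sketch below computes Stokes parameters and the degree of linear polarization (DOLP) from intensity images taken at four polarizer angles, then performs a simple weighted pixel-level fusion of intensity and DOLP. This is a generic illustration rather than the paper's exact processing chain; the weighting factor, function names, and synthetic input arrays are assumptions.

```python
# Sketch: Stokes parameters and DOLP from polarizer-angle intensity images, followed by
# a simple pixel-level weighted fusion. Illustrative only; not the paper's exact method.
import numpy as np

def dolp_from_angles(i0, i45, i90, i135):
    """Degree of linear polarization from intensities at 0/45/90/135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs vertical preference
    s2 = i45 - i135                      # +45 vs -45 preference
    return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9), s0

def pixel_level_fusion(intensity, dolp, alpha=0.6):
    """Weighted pixel-level fusion of normalized intensity and DOLP images."""
    intensity = intensity / max(intensity.max(), 1e-9)
    dolp = dolp / max(dolp.max(), 1e-9)
    return alpha * intensity + (1.0 - alpha) * dolp

# Synthetic data standing in for images captured through a rotated polarizer.
rng = np.random.default_rng(0)
imgs = [rng.random((64, 64)) for _ in range(4)]
dolp, s0 = dolp_from_angles(*imgs)
fused = pixel_level_fusion(s0, dolp)
```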
Comprehensive characterization of RSPO fusions in colorectal traditional serrated adenomas.
Sekine, Shigeki; Ogawa, Reiko; Hashimoto, Taiki; Motohiro, Kojima; Yoshida, Hiroshi; Taniguchi, Hirokazu; Saito, Yutaka; Yasuhiro, Ohno; Ochiai, Atsushi; Hiraoka, Nobuyoshi
2017-10-01
Traditional serrated adenoma (TSA) is a rare but distinct type of colorectal polyp. Our previous study showed that PTPRK-RSPO3 fusions are frequent and characteristic genetic alterations in TSAs. This study aimed to characterize comprehensively the prevalence and variability of RSPO fusions in colorectal TSAs. We examined RSPO expression and explored novel RSPO fusions in 129 TSAs, including 66 lesions analysed previously for WNT pathway gene mutations. Quantitative polymerase chain reaction (qPCR) analyses identified three and 43 TSAs overexpressing RSPO2 and RSPO3, respectively, whereas the expression of RSPO1 and RSPO4 was marginal or undetectable in all cases. RSPO overexpression was always mutually exclusive with other WNT pathway gene mutations. Known PTPRK-RSPO3 fusions were detected in 37 TSAs, all but one of which overexpressed RSPO3. In addition, rapid amplification of cDNA ends revealed three novel RSPO fusion transcripts, an NRIP1-RSPO2 fusion and two PTPRK-RSPO3 fusion isoforms, in six TSAs. Overall, 43 TSAs had RSPO fusions (33%), whereas four TSAs (3%) overexpressed RSPO in the absence of RSPO fusions. TSAs with RSPO fusions showed several clinicopathological features, including distal localization (P = 0.0063), larger size (P = 0.0055), prominent ectopic crypt foci (P = 8.4 × 10^-4), association of a high-grade component (P = 1.1 × 10^-4), and the presence of KRAS mutations (P = 4.5 × 10^-5). The present study identified RSPO fusion transcripts, including three novel transcripts, in one-third of colorectal TSAs and showed that PTPRK-RSPO3 fusions were the predominant cause of RSPO overexpression in colorectal TSA. © 2017 John Wiley & Sons Ltd.
Liu, Yanjie; Pei, Jimin; Grishin, Nick; Snell, William J
2015-03-01
Cell-cell fusion between gametes is a defining step during development of eukaryotes, yet we know little about the cellular and molecular mechanisms of the gamete membrane fusion reaction. HAP2 is the sole gamete-specific protein in any system that is broadly conserved and shown by gene disruption to be essential for gamete fusion. The wide evolutionary distribution of HAP2 (also known as GCS1) indicates it was present in the last eukaryotic common ancestor and, therefore, dissecting its molecular properties should provide new insights into fundamental features of fertilization. HAP2 acts at a step after membrane adhesion, presumably directly in the merger of the lipid bilayers. Here, we use the unicellular alga Chlamydomonas to characterize contributions of key regions of HAP2 to protein location and function. We report that mutation of three strongly conserved residues in the ectodomain has no effect on targeting or fusion, although short deletions that include those residues block surface expression and fusion. Furthermore, HAP2 lacking a 237-residue segment of the cytoplasmic region is expressed at the cell surface, but fails to localize at the apical membrane patch specialized for fusion and fails to rescue fusion. Finally, we provide evidence that the ancient HAP2 contained a juxta-membrane, multi-cysteine motif in its cytoplasmic region, and that mutation of a cysteine dyad in this motif preserves protein localization, but substantially impairs HAP2 fusion activity. Thus, the ectodomain of HAP2 is essential for its surface expression, and the cytoplasmic region targets HAP2 to the site of fusion and regulates the fusion reaction. © 2015. Published by The Company of Biologists Ltd.
Progressive multi-atlas label fusion by dictionary evolution.
Song, Yantao; Wu, Guorong; Bahrami, Khosro; Sun, Quansen; Shen, Dinggang
2017-02-01
Accurate segmentation of anatomical structures in medical images is important in recent imaging based studies. In the past years, multi-atlas patch-based label fusion methods have achieved great success in medical image segmentation. In these methods, the appearance of each input image patch is first represented by an atlas patch dictionary (in the image domain), and then the latent label of the input image patch is predicted by applying the estimated representation coefficients to the corresponding anatomical labels of the atlas patches in the atlas label dictionary (in the label domain). However, due to the generally large gap between the patch appearance in the image domain and the patch structure in the label domain, the estimated (patch) representation coefficients from the image domain may not be optimal for the final label fusion, thus reducing the labeling accuracy. To address this issue, we propose a novel label fusion framework that seeks suitable label fusion weights by progressively constructing a dynamic dictionary in a layer-by-layer manner, where the intermediate dictionaries act as a sequence of guidance to steer the transition of (patch) representation coefficients from the image domain to the label domain. Our proposed multi-layer label fusion framework is flexible enough to be applied to the existing labeling methods for improving their label fusion performance, i.e., by extending their single-layer static dictionary to the multi-layer dynamic dictionary. The experimental results show that our proposed progressive label fusion method achieves more accurate hippocampal segmentation results for the ADNI dataset, compared to the counterpart methods using only the single-layer static dictionary. Copyright © 2016 Elsevier B.V. All rights reserved.
Multispectral multisensor image fusion using wavelet transforms
Lemeshewsky, George P.
1999-01-01
Fusion techniques can be applied to multispectral and higher spatial resolution panchromatic images to create a composite image that is easier to interpret than the individual images. Wavelet transform-based multisensor, multiresolution fusion (a type of band sharpening) was applied to Landsat thematic mapper (TM) multispectral and coregistered higher resolution SPOT panchromatic images. The objective was to obtain increased spatial resolution, false color composite products to support the interpretation of land cover types wherein the spectral characteristics of the imagery are preserved to provide the spectral clues needed for interpretation. Since the fusion process should not introduce artifacts, a shift invariant implementation of the discrete wavelet transform (SIDWT) was used. These results were compared with those using the shift variant, discrete wavelet transform (DWT). Overall, the process includes a hue, saturation, and value color space transform to minimize color changes, and a reported point-wise maximum selection rule to combine transform coefficients. The performance of fusion based on the SIDWT and DWT was evaluated with a simulated TM 30-m spatial resolution test image and a higher resolution reference. Simulated imagery was made by blurring higher resolution color-infrared photography with the TM sensors' point spread function. The SIDWT-based technique produced imagery with fewer artifacts and lower error between fused images and the full resolution reference. Image examples with TM and SPOT 10-m panchromatic data illustrate the reduction in artifacts due to the SIDWT-based fusion.
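A minimal sketch of wavelet fusion with a point-wise maximum selection rule is shown below, assuming two co-registered, same-size grayscale arrays and using the PyWavelets package. It follows the general DWT approach described above; the paper's shift-invariant variant (SIDWT) and the HSV color-space step are omitted, and the wavelet and level count are illustrative choices.

```python
# Sketch of DWT-based fusion: average the approximation band, keep the detail
# coefficient with the larger magnitude at each position, then reconstruct.
import numpy as np
import pywt

def dwt_max_fusion(img_a, img_b, wavelet="db2", levels=3):
    ca = pywt.wavedec2(img_a, wavelet, level=levels)
    cb = pywt.wavedec2(img_b, wavelet, level=levels)
    fused = [(ca[0] + cb[0]) / 2.0]                      # average the approximation band
    for da, db in zip(ca[1:], cb[1:]):                   # detail bands: max-magnitude rule
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

# Usage with synthetic stand-ins for a multispectral band and a sharper panchromatic image.
rng = np.random.default_rng(1)
ms_band, pan = rng.random((128, 128)), rng.random((128, 128))
sharpened = dwt_max_fusion(ms_band, pan)
```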
Tomographic data fusion with CFD simulations associated with a planar sensor
NASA Astrophysics Data System (ADS)
Liu, J.; Liu, S.; Sun, S.; Zhou, W.; Schlaberg, I. H. I.; Wang, M.; Yan, Y.
2017-04-01
Tomographic techniques have great ability to interrogate combustion processes, especially when they are combined with physical models of the combustion itself. In this study, a data fusion algorithm is developed to investigate the flame distribution of a swirl-induced environmental (EV) burner, a new type of burner for low NOx combustion. An electrical capacitance tomography (ECT) system is used to acquire 3D flame images and computational fluid dynamics (CFD) is applied to calculate an initial distribution of the temperature profile for the EV burner. Experiments were also carried out to visualize flames at a series of locations above the burner. While the ECT images essentially agree with the CFD temperature distribution, discrepancies exist at a certain height. When data fusion is applied, the discrepancy is visibly reduced and the ECT images are improved. The methods used in this study can lead to a new route where combustion visualization can be much improved and applied to clean energy conversion and new burner development.
[Design and research progress of zero profile cervical Interbody cage].
Zhu, Jia; Wang, Song; Liao, Zhenhua; Liu, Weiqiang
2017-02-01
The zero profile cervical interbody cage is an improvement on traditional fusion products and a necessary supplement to emerging artificial intervertebral disc products. When applied in Anterior Cervical Decompression and Fusion (ACDF), the zero profile cervical interbody cage can preserve the advantages of traditional fusion and reduce the incidence of postoperative complications. Moreover, the zero profile cervical interbody cage can be applied in cases where Artificial Cervical Disc Replacement (ACDR) is contraindicated. This article summarizes zero profile interbody cage products that are commonly recognized and widely used in clinical practice in recent years, and reviews the progress of structure design and material research of zero profile cervical interbody cage products. Based on the latest clinical demands and research progress, this paper also discusses the future development directions of zero profile interbody cages.
NASA Astrophysics Data System (ADS)
Price, Stanton R.; Murray, Bryce; Hu, Lequn; Anderson, Derek T.; Havens, Timothy C.; Luke, Robert H.; Keller, James M.
2016-05-01
A serious threat to civilians and soldiers is buried and above ground explosive hazards. The automatic detection of such threats is highly desired. Many methods exist for explosive hazard detection, e.g., hand-held based sensors, downward and forward looking vehicle mounted platforms, etc. In addition, multiple sensors are used to tackle this extreme problem, such as radar and infrared (IR) imagery. In this article, we explore the utility of feature and decision level fusion of learned features for forward looking explosive hazard detection in IR imagery. Specifically, we investigate different ways to fuse learned iECO features pre and post multiple kernel (MK) support vector machine (SVM) based classification. Three MK strategies are explored: fixed rule, heuristics and optimization-based. Performance is assessed in the context of receiver operating characteristic (ROC) curves on data from a U.S. Army test site that contains multiple target and clutter types, burial depths and times of day. The results reveal two interesting things. First, the different MK strategies appear to indicate that the different iECO individuals are all more-or-less important and there is not a dominant feature. This reinforces our hypothesis that iECO provides different ways to approach target detection. Last, we observe that while optimization-based MK is mathematically appealing, i.e., it connects the learning of the fusion to the underlying classification problem we are trying to solve, it appears to be highly susceptible to overfitting; simpler approaches, e.g., fixed rule and heuristics, help us realize more generalizable iECO solutions.
Induction of Cell-Cell Fusion by Ebola Virus Glycoprotein: Low pH Is Not a Trigger.
Markosyan, Ruben M; Miao, Chunhui; Zheng, Yi-Min; Melikyan, Gregory B; Liu, Shan-Lu; Cohen, Fredric S
2016-01-01
Ebola virus (EBOV) is a highly pathogenic filovirus that causes hemorrhagic fever in humans and animals. Currently, how EBOV fuses its envelope membrane within an endosomal membrane to cause infection is poorly understood. We successfully measure cell-cell fusion mediated by the EBOV fusion protein, GP, assayed by the transfer of both cytoplasmic and membrane dyes. A small molecule fusion inhibitor, a neutralizing antibody, as well as mutations in EBOV GP known to reduce viral infection, all greatly reduce fusion. By monitoring redistribution of small aqueous dyes between cells and by electrical capacitance measurements, we discovered that EBOV GP-mediated fusion pores do not readily enlarge, a marked difference from the behavior of other viral fusion proteins. EBOV GP must be cleaved by late endosome-resident cathepsins B or L in order to become fusion-competent. Cleavage of cell surface-expressed GP appears to occur in endosomes, as evidenced by the fusion block imposed by cathepsin inhibitors, agents that raise endosomal pH, or an inhibitor of anterograde trafficking. Treating effector cells with a recombinant soluble cathepsin B or thermolysin, which cleaves GP into an active form, increases the extent of fusion, suggesting that a fraction of surface-expressed GP is not cleaved. Whereas the rate of fusion is increased by a brief exposure to acidic pH, fusion does occur at neutral pH. Importantly, the extent of fusion is independent of external pH in experiments in which cathepsin activity is blocked and EBOV GP is cleaved by thermolysin. These results imply that low pH promotes fusion through the well-known pH-dependent activity of cathepsins; fusion induced by cleaved EBOV GP is a process that is fundamentally independent of pH. The cell-cell fusion system has revealed some previously unappreciated features of EBOV entry, which could not be readily elucidated in the context of endosomal entry.
Multimodal biometric method that combines veins, prints, and shape of a finger
NASA Astrophysics Data System (ADS)
Kang, Byung Jun; Park, Kang Ryoung; Yoo, Jang-Hee; Kim, Jeong Nyeo
2011-01-01
Multimodal biometrics provides high recognition accuracy and population coverage by using various biometric features. A single finger contains finger veins, fingerprints, and finger geometry features; by using multimodal biometrics, information on these multiple features can be simultaneously obtained in a short time and their fusion can outperform the use of a single feature. This paper proposes a new finger recognition method based on the score-level fusion of finger veins, fingerprints, and finger geometry features. This research is novel in the following four ways. First, the performances of the finger-vein and fingerprint recognition are improved by using a method based on a local derivative pattern. Second, the accuracy of the finger geometry recognition is greatly increased by combining a Fourier descriptor with principal component analysis. Third, a fuzzy score normalization method is introduced; its performance is better than the conventional Z-score normalization method. Fourth, finger-vein, fingerprint, and finger geometry recognitions are combined by using three support vector machines and a weighted SUM rule. Experimental results showed that the equal error rate of the proposed method was 0.254%, which was lower than those of the other methods.
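The following sketch illustrates score-level fusion with a weighted SUM rule over three modality score sets. It uses conventional Z-score normalization rather than the paper's fuzzy normalization, and the weights and score arrays are hypothetical placeholders.

```python
# Minimal sketch of score-level fusion with a weighted SUM rule after Z-score
# normalization of each modality's matching scores. Values are hypothetical.
import numpy as np

def z_normalize(scores):
    return (scores - scores.mean()) / (scores.std() + 1e-9)

def weighted_sum_fusion(score_sets, weights):
    normalized = [z_normalize(np.asarray(s, dtype=float)) for s in score_sets]
    return sum(w * s for w, s in zip(weights, normalized))

# Hypothetical matching scores for one probe against three gallery identities, per modality.
vein_scores  = np.array([0.82, 0.31, 0.44])
print_scores = np.array([0.77, 0.25, 0.51])
shape_scores = np.array([0.60, 0.40, 0.35])
fused = weighted_sum_fusion([vein_scores, print_scores, shape_scores], weights=[0.4, 0.4, 0.2])
```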
2017-01-01
Electroencephalogram (EEG)-based decoding of human brain activity is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain–computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time-series, which is also known as multivariate pattern analysis. Comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with current recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method is the most popular currently used feature extraction and prediction method. This method showed an accuracy of 65.7%. However, the proposed method predicts the novel data with improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction method. PMID:28558002
Detecting Parkinson's disease from sustained phonation and speech signals.
Vaiciukynas, Evaldas; Verikas, Antanas; Gelzinis, Adas; Bacauskiene, Marija
2017-01-01
This study investigates signals from sustained phonation and text-dependent speech modalities for Parkinson's disease screening. Phonation corresponds to the vowel /a/ voicing task and speech to the pronunciation of a short sentence in the Lithuanian language. Signals were recorded through two channels simultaneously, namely, acoustic cardioid (AC) and smart phone (SP) microphones. Additional modalities were obtained by splitting the speech recording into voiced and unvoiced parts. Information in each modality is summarized by 18 well-known audio feature sets. Random forest (RF) is used as a machine learning algorithm, both for individual feature sets and for decision-level fusion. Detection performance is measured by the out-of-bag equal error rate (EER) and the cost of the log-likelihood ratio. The Essentia audio feature set performed best for the AC speech modality and the YAAFE audio feature set performed best for the SP unvoiced modality, achieving EERs of 20.30% and 25.57%, respectively. Fusion of all feature sets and modalities resulted in an EER of 19.27% for the AC and 23.00% for the SP channel. Non-linear projection of an RF-based proximity matrix into the 2D space enriched medical decision support by visualization.
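As a rough illustration of decision-level fusion of this kind, the sketch below averages random forest probabilities trained on separate feature sets and derives an equal error rate from the fused scores. The feature matrices and labels are synthetic placeholders, not the study's audio features, and the simple train/test split stands in for the out-of-bag estimate.

```python
# Sketch: decision-level fusion by averaging per-feature-set random forest probabilities,
# then computing an equal error rate (EER) from the fused scores. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=200)
feature_sets = [rng.random((200, 20)) + 0.3 * y[:, None] for _ in range(3)]  # 3 "modalities"

train, test = np.arange(0, 150), np.arange(150, 200)
fused_prob = np.zeros(len(test))
for X in feature_sets:
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[train], y[train])
    fused_prob += clf.predict_proba(X[test])[:, 1] / len(feature_sets)   # soft-vote average

fpr, tpr, _ = roc_curve(y[test], fused_prob)
eer = fpr[np.argmin(np.abs(fpr - (1 - tpr)))]   # operating point where FPR ~= FNR
```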
Assessment of Data and Knowledge Fusion Strategies for Diagnostics and Prognostics
2001-04-05
... prognostic technologies has proven effective in reducing false alarm rates, increasing confidence levels in early fault detection, and predicting time... or better than the sum of the parts. Specific to health management, this means reduced uncertainty in current condition assessment... time synchronous averaged vibration features. [Figure 1 - Fusion Application Areas]
Riniker, Sereina; Fechner, Nikolas; Landrum, Gregory A
2013-11-25
The concept of data fusion - the combination of information from different sources describing the same object with the expectation of generating a more accurate representation - has found application in a very broad range of disciplines. In the context of ligand-based virtual screening (VS), data fusion has been applied to combine knowledge from either different active molecules or different fingerprints to improve similarity search performance. Machine-learning (ML) methods based on fusion of multiple homogeneous classifiers, in particular random forests, have also been widely applied in the ML literature. The heterogeneous version of classifier fusion - fusing the predictions from different model types - has been less explored. Here, we investigate heterogeneous classifier fusion for ligand-based VS using three different ML methods, RF, naïve Bayes (NB), and logistic regression (LR), with four 2D fingerprints: atom pairs, topological torsions, the RDKit fingerprint, and a circular fingerprint. The methods are compared using a previously developed benchmarking platform for 2D fingerprints, which is extended to ML methods in this article. The original data sets are filtered for difficulty, and a new set of challenging data sets from ChEMBL is added. Data sets were also generated for a second use case: starting from a small set of related actives instead of diverse actives. The final fused model consistently outperforms the other approaches across the broad variety of targets studied, indicating that heterogeneous classifier fusion is a very promising approach for ligand-based VS. The new data sets together with the adapted source code for ML methods are provided in the Supporting Information.
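A minimal sketch of heterogeneous classifier fusion in this spirit is given below: RF, NB, and LR models are trained on the same fingerprints and their predicted probabilities are averaged as one simple fusion rule. The fingerprints here are random bit vectors standing in for RDKit-generated ones, and the averaging rule is an assumption rather than the paper's exact fusion scheme.

```python
# Sketch of heterogeneous classifier fusion for virtual screening: three model types
# trained on the same fingerprints, fused by averaging predicted probabilities.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X_train = rng.integers(0, 2, size=(300, 1024))   # binary fingerprints (placeholders)
y_train = rng.integers(0, 2, size=300)           # active / inactive labels
X_screen = rng.integers(0, 2, size=(50, 1024))   # library to be virtually screened

models = [
    RandomForestClassifier(n_estimators=100, random_state=0),
    BernoulliNB(),
    LogisticRegression(max_iter=1000),
]
probs = [m.fit(X_train, y_train).predict_proba(X_screen)[:, 1] for m in models]
fused_score = np.mean(probs, axis=0)             # heterogeneous fusion by simple averaging
ranking = np.argsort(-fused_score)               # rank the library by fused score
```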
Goreczny, Sebastian; Dryzek, Pawel; Morgan, Gareth J; Lukaszewski, Maciej; Moll, Jadwiga A; Moszura, Tomasz
2017-08-01
We report initial experience with novel three-dimensional (3D) image fusion software for guidance of transcatheter interventions in congenital heart disease. Developments in fusion imaging have facilitated the integration of 3D roadmaps from computed tomography or magnetic resonance imaging datasets. The latest software allows live fusion of two-dimensional (2D) fluoroscopy with pre-registered 3D roadmaps. We reviewed all cardiac catheterizations guided with this software (Philips VesselNavigator). Pre-catheterization imaging and catheterization data were collected focusing on fusion of the 3D roadmap, intervention guidance, and contrast and radiation exposure. From 09/2015 until 06/2016, VesselNavigator was applied in 34 patients for guidance (n = 28) or planning (n = 6) of cardiac catheterization. In all 28 patients successful 2D-3D registration was performed. Bony structures combined with the cardiovascular silhouette were used for fusion in 26 patients (93%), calcifications in 9 (32%), previously implanted devices in 8 (29%) and low-volume contrast injection in 7 patients (25%). Accurate initial 3D roadmap alignment was achieved in 25 patients (89%). Six patients (22%) required realignment during the procedure due to distortion of the anatomy after introduction of stiff equipment. Overall, VesselNavigator was applied successfully in 27 patients (96%) without any complications related to 3D image overlay. VesselNavigator was useful in guiding nearly all cardiac catheterizations. The combination of anatomical markers and low-volume contrast injections allowed reliable 2D-3D registration in the vast majority of patients.
A Remote Sensing Image Fusion Method based on adaptive dictionary learning
NASA Astrophysics Data System (ADS)
He, Tongdi; Che, Zongxi
2018-01-01
This paper discusses a remote sensing fusion method based on 'adaptive sparse representation (ASP)' to provide improved spectral information, reduce data redundancy and decrease system complexity. First, the training sample set is formed by taking random blocks from the images to be fused; the dictionary is then constructed using the training samples, and the remaining terms are clustered to obtain the complete dictionary by iterated processing at each step. Second, the self-adaptive weighted coefficient rule of regional energy is used to select the feature fusion coefficients and complete the reconstruction of the image blocks. Finally, the reconstructed image blocks are rearranged and averaged to obtain the final fused images. Experimental results show that the proposed method is superior to other traditional remote sensing image fusion methods in both spectral information preservation and spatial resolution.
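The sketch below gives a rough, simplified version of sparse-representation fusion along these lines: a joint dictionary is learned from patches of both images, each image is sparse-coded, the code with the larger energy is kept per patch (a stand-in for the adaptive regional-energy rule), and the fused image is rebuilt by overlap-averaging the patches. Patch size, atom count, and sparsity level are illustrative assumptions.

```python
# Simplified sparse-representation fusion sketch: joint dictionary, per-patch sparse codes,
# energy-based code selection, and overlap-averaged reconstruction. Parameters illustrative.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def sparse_fusion(img_a, img_b, patch=(8, 8), n_atoms=64):
    pa = extract_patches_2d(img_a, patch).reshape(-1, patch[0] * patch[1])
    pb = extract_patches_2d(img_b, patch).reshape(-1, patch[0] * patch[1])
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, transform_algorithm="omp",
                                       transform_n_nonzero_coefs=5, random_state=0)
    dico.fit(np.vstack([pa, pb]))                     # joint dictionary from both sources
    ca, cb = dico.transform(pa), dico.transform(pb)   # sparse codes per patch
    keep_a = (ca**2).sum(axis=1) >= (cb**2).sum(axis=1)
    fused_codes = np.where(keep_a[:, None], ca, cb)   # keep the higher-energy code
    fused_patches = (fused_codes @ dico.components_).reshape(-1, *patch)
    return reconstruct_from_patches_2d(fused_patches, img_a.shape)

rng = np.random.default_rng(7)
fused = sparse_fusion(rng.random((32, 32)), rng.random((32, 32)))
```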
Effective Multifocus Image Fusion Based on HVS and BP Neural Network
Yang, Yong
2014-01-01
The aim of multifocus image fusion is to fuse the images taken from the same scene with different focuses to obtain a resultant image with all objects in focus. In this paper, a novel multifocus image fusion method based on human visual system (HVS) and back propagation (BP) neural network is presented. Three features which reflect the clarity of a pixel are firstly extracted and used to train a BP neural network to determine which pixel is clearer. The clearer pixels are then used to construct the initial fused image. Thirdly, the focused regions are detected by measuring the similarity between the source images and the initial fused image followed by morphological opening and closing operations. Finally, the final fused image is obtained by a fusion rule for those focused regions. Experimental results show that the proposed method can provide better performance and outperform several existing popular fusion methods in terms of both objective and subjective evaluations. PMID:24683327
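A much-simplified sketch of the per-pixel "which source is clearer" decision is shown below, using a single focus measure (local Laplacian energy) in place of the paper's three clarity features and BP neural network; the morphological refinement of focused regions is omitted. Window size and input arrays are hypothetical.

```python
# Simplified multifocus fusion sketch: pick, per pixel, the source with the higher
# local Laplacian energy (a common clarity/focus measure).
import numpy as np
from scipy import ndimage

def focus_measure(img, size=7):
    lap = ndimage.laplace(img.astype(float))
    return ndimage.uniform_filter(lap**2, size=size)      # local energy of the Laplacian

def multifocus_fusion(img_a, img_b):
    mask = focus_measure(img_a) >= focus_measure(img_b)   # True where A looks sharper
    return np.where(mask, img_a, img_b)

rng = np.random.default_rng(4)
near_focus, far_focus = rng.random((64, 64)), rng.random((64, 64))
fused = multifocus_fusion(near_focus, far_focus)
```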
The next large helical devices
NASA Astrophysics Data System (ADS)
Iiyoshi, Atsuo; Yamazaki, Kozo
1995-06-01
Helical systems have the strong advantage of inherent steady-state operation for fusion reactors. Two large helical devices with fully superconducting coil systems are presently under design and construction. One is the LHD (Large Helical Device) [Fusion Technol. 17, 169 (1990)] with major radius=3.9 m and magnetic field=3-4 T, that is under construction during 1990-1997 at NIFS (National Institute for Fusion Science), Nagoya/Toki, Japan; it features continuous helical coils and a clean helical divertor focusing on edge configuration optimization. The other one is the W7-X (Wendelstein 7-X) [in Plasma Physics and Controlled Fusion Nuclear Research, 1990, (International Atomic Energy Agency, Vienna, 1991), Vol. 3, p. 525] with major radius=5.5 m and magnetic field=3 T, that is under review at IPP (Max-Planck Institute for Plasma Physics), Garching, Germany; it has adopted a modular coil system after elaborate optimization studies. These two programs are complementary in promoting world helical fusion research and in extending the understanding of toroidal plasmas through comparisons with large tokamaks.
Multi-sensor information fusion method for vibration fault diagnosis of rolling bearing
NASA Astrophysics Data System (ADS)
Jiao, Jing; Yue, Jianhai; Pei, Di
2017-10-01
The bearing is a key element in high-speed electric multiple units (EMU), and any defect in it can cause a major malfunction of the EMU at high operating speed. This paper presents a new method for bearing fault diagnosis based on least square support vector machine (LS-SVM) for feature-level fusion and Dempster-Shafer (D-S) evidence theory for decision-level fusion, which were used to address the low detection accuracy, the difficulty of extracting sensitive characteristics, and the instability of single-sensor diagnosis systems in rolling bearing fault diagnosis. A wavelet de-noising technique was used for removing the signal noises. LS-SVM was used for pattern recognition of the bearing vibration signal, and then the fusion process was carried out according to D-S evidence theory, so as to realize recognition of the bearing fault. The results indicated that the data fusion method improved the performance of the intelligent approach in rolling bearing fault detection significantly. Moreover, the results showed that this method can efficiently improve the accuracy of fault diagnosis.
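For the decision-level stage, the sketch below shows Dempster's rule of combination applied to two sensors' basic probability assignments over singleton fault hypotheses. The LS-SVM feature-level stage is not shown, and the class labels and mass values are hypothetical.

```python
# Minimal sketch of Dempster's rule of combination for decision-level fusion of two
# sensors' basic probability assignments (BPAs) over singleton fault hypotheses.
def dempster_combine(m1, m2):
    """Combine two BPAs defined on the same singleton hypotheses (dict: label -> mass)."""
    labels = set(m1) | set(m2)
    conflict = sum(m1.get(a, 0) * m2.get(b, 0) for a in labels for b in labels if a != b)
    k = 1.0 - conflict                              # normalization factor
    return {h: m1.get(h, 0) * m2.get(h, 0) / k for h in labels}

sensor1 = {"normal": 0.1, "inner_race_fault": 0.7, "outer_race_fault": 0.2}
sensor2 = {"normal": 0.2, "inner_race_fault": 0.6, "outer_race_fault": 0.2}
fused = dempster_combine(sensor1, sensor2)          # mass concentrates on the agreeing class
```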
Raghavendra, U; Rajendra Acharya, U; Gudigar, Anjan; Hong Tan, Jen; Fujita, Hamido; Hagiwara, Yuki; Molinari, Filippo; Kongmebhol, Pailin; Hoong Ng, Kwan
2017-05-01
The thyroid is a small gland situated at the anterior side of the neck and one of the largest glands of the endocrine system. Abrupt cell growth or malignancy in the thyroid gland may cause thyroid cancer. Ultrasound images distinctly represent benign and malignant lesions, but accuracy may be poor due to subjective interpretation. Computer Aided Diagnosis (CAD) can minimize the errors created due to subjective interpretation and assists in making a fast, accurate diagnosis. In this work, fusion of Spatial Gray Level Dependence Features (SGLDF) and fractal textures is used to decipher the intrinsic structure of benign and malignant thyroid lesions. These features are subjected to graph based Marginal Fisher Analysis (MFA) to reduce the number of features. The reduced features are subjected to various ranking methods and classifiers. We have achieved an average accuracy, sensitivity and specificity of 97.52%, 90.32% and 98.57% respectively using the Support Vector Machine (SVM) classifier. The achieved maximum Area Under Curve (AUC) is 0.9445. Finally, the Thyroid Clinical Risk Index (TCRI), a single number, is developed using two MFA features to discriminate the two classes. This prototype system is ready to be tested with a large, diverse database. Copyright © 2017 Elsevier B.V. All rights reserved.
Data Field Modeling and Spectral-Spatial Feature Fusion for Hyperspectral Data Classification.
Liu, Da; Li, Jianxun
2016-12-16
Classification is a significant subject in hyperspectral remote sensing image processing. This study proposes a spectral-spatial feature fusion algorithm for the classification of hyperspectral images (HSI). Unlike existing spectral-spatial classification methods, the influences and interactions of the surroundings on each measured pixel were taken into consideration in this paper. Data field theory was employed as the mathematical realization of the field theory concept in physics, and both the spectral and spatial domains of HSI were considered as data fields. Therefore, the inherent dependency of interacting pixels was modeled. Using data field modeling, spatial and spectral features were transformed into a unified radiation form and further fused into a new feature by using a linear model. In contrast to the current spectral-spatial classification methods, which usually simply stack spectral and spatial features together, the proposed method builds the inner connection between the spectral and spatial features, and explores the hidden information that contributed to classification. Therefore, new information is included for classification. The final classification result was obtained using a random forest (RF) classifier. The proposed method was tested with the University of Pavia and Indian Pines, two well-known standard hyperspectral datasets. The experimental results demonstrate that the proposed method has higher classification accuracies than those obtained by the traditional approaches.
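An illustrative sketch of a linear spectral-spatial fusion followed by random forest classification is given below. The spatial feature is a simple per-band neighborhood mean, a stand-in for the paper's data-field modeling, and the fusion weight, toy data cube, and labels are assumptions.

```python
# Sketch: linear fusion of spectral values with a simple spatial (neighborhood-mean)
# feature per band, followed by random forest classification of the fused pixels.
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def fuse_spectral_spatial(cube, alpha=0.7, size=5):
    """cube: (rows, cols, bands) hyperspectral array -> fused per-pixel feature matrix."""
    spatial = np.stack([ndimage.uniform_filter(cube[..., b], size=size)
                        for b in range(cube.shape[-1])], axis=-1)
    fused = alpha * cube + (1.0 - alpha) * spatial        # linear fusion of the two domains
    return fused.reshape(-1, cube.shape[-1])

rng = np.random.default_rng(5)
cube = rng.random((30, 30, 50))                           # toy hyperspectral cube
labels = rng.integers(0, 4, size=30 * 30)                 # toy ground-truth map, flattened
X = fuse_spectral_spatial(cube)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
```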
Case retrieval in medical databases by fusing heterogeneous information.
Quellec, Gwénolé; Lamard, Mathieu; Cazuguel, Guy; Roux, Christian; Cochener, Béatrice
2011-01-01
A novel content-based heterogeneous information retrieval framework, particularly well suited to browse medical databases and support new generation computer aided diagnosis (CADx) systems, is presented in this paper. It was designed to retrieve possibly incomplete documents, consisting of several images and semantic information, from a database; more complex data types such as videos can also be included in the framework. The proposed retrieval method relies on image processing, in order to characterize each individual image in a document by their digital content, and information fusion. Once the available images in a query document are characterized, a degree of match, between the query document and each reference document stored in the database, is defined for each attribute (an image feature or a metadata). A Bayesian network is used to recover missing information if need be. Finally, two novel information fusion methods are proposed to combine these degrees of match, in order to rank the reference documents by decreasing relevance for the query. In the first method, the degrees of match are fused by the Bayesian network itself. In the second method, they are fused by the Dezert-Smarandache theory: the second approach lets us model our confidence in each source of information (i.e., each attribute) and take it into account in the fusion process for a better retrieval performance. The proposed methods were applied to two heterogeneous medical databases, a diabetic retinopathy database and a mammography screening database, for computer aided diagnosis. Precisions at five of 0.809 ± 0.158 and 0.821 ± 0.177, respectively, were obtained for these two databases, which is very promising.
Preliminary Comparison of Radioactive Waste Disposal Cost for Fusion and Fission Reactors
NASA Astrophysics Data System (ADS)
Seki, Yasushi; Aoki, Isao; Yamano, Naoki; Tabara, Takashi
1997-09-01
The environmental and economic impact of radioactive waste (radwaste) generated from fusion power reactors using five types of structural materials and from a fission reactor has been evaluated and compared. A possible disposal scenario for fusion radwaste in Japan is considered. The exposure doses were evaluated for the skyshine of gamma-rays during the disposal operation, a groundwater migration scenario during the institutional control period of 300 years, and a future site use scenario after the institutional period. The radwaste generated from a typical light water fission reactor was evaluated using the same methodology as for the fusion reactors. It is found that radwaste from the fusion reactors using F82H and SiC/SiC composites without impurities could be disposed of by the shallow land disposal presently applied to low-level waste in Japan. The disposal costs of radwaste from five fusion power reactors and a typical light water reactor were roughly evaluated and compared.
Visualization of multi-INT fusion data using Java Viewer (JVIEW)
NASA Astrophysics Data System (ADS)
Blasch, Erik; Aved, Alex; Nagy, James; Scott, Stephen
2014-05-01
Visualization is important for multi-intelligence fusion and we demonstrate issues for presenting physics-derived (i.e., hard) and human-derived (i.e., soft) fusion results. Physics-derived solutions (e.g., imagery) typically involve sensor measurements that are objective, while human-derived (e.g., text) typically involve language processing. Both results can be geographically displayed for user-machine fusion. Attributes of an effective and efficient display are not well understood, so we demonstrate issues and results for filtering, correlation, and association of data for users - be they operators or analysts. Operators require near-real time solutions while analysts have the opportunities of non-real time solutions for forensic analysis. In a use case, we demonstrate examples using the JVIEW concept that has been applied to piloting, space situation awareness, and cyber analysis. Using the open-source JVIEW software, we showcase a big data solution for multi-intelligence fusion application for context-enhanced information fusion.
Complexin and Ca2+ stimulate SNARE-mediated membrane fusion
Yoon, Tae-Young; Lu, Xiaobind; Diao, Jiajie; Lee, Soo-Min; Ha, Taekjip; Shin, Yeon-Kyun
2008-01-01
Ca2+-triggered, synchronized synaptic vesicle fusion underlies interneuronal communication. Complexin is a major binding partner of the SNARE complex, the core fusion machinery at the presynapse. The physiological data on complexin, however, have been at odds with each other, making delineation of its molecular function difficult. Here we report direct observation of two-faceted functions of complexin using the single-vesicle fluorescence fusion assay and EPR. We show that complexin I has two opposing effects on trans-SNARE assembly: inhibition of SNARE complex formation and stabilization of assembled SNARE complexes. Of note, SNARE-mediated fusion is markedly stimulated by complexin, and it is further accelerated by two orders of magnitude in response to an externally applied Ca2+ wave. We suggest that SNARE complexes, complexins and phospholipids collectively form a complex substrate for Ca2+ and Ca2+-sensing fusion effectors in neurotransmitter release. PMID:18552825
Jiang, Quansheng; Shen, Yehu; Li, Hua; Xu, Fengyu
2018-01-24
Feature recognition and fault diagnosis play an important role in equipment safety and stable operation of rotating machinery. In order to cope with the complexity of the vibration signal of rotating machinery, a feature fusion model based on information entropy and a probabilistic neural network is proposed in this paper. The new method first uses information entropy theory to extract three kinds of characteristic entropy from vibration signals, namely, singular spectrum entropy, power spectrum entropy, and approximate entropy. Then the feature fusion model is constructed to classify and diagnose the fault signals. The proposed approach can combine comprehensive information from different aspects and is more sensitive to the fault features. The experimental results on simulated fault signals verified the better performance of our proposed approach. On real two-span rotor data, the fault detection accuracy of the new method is more than 10% higher compared with the methods using the three kinds of information entropy separately. The new approach is proved to be an effective fault recognition method for rotating machinery.
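The sketch below computes two of the three entropy features named above, power spectrum entropy and singular spectrum entropy, for a vibration signal; approximate entropy is omitted for brevity. The embedding dimension and the synthetic test signal are assumptions, and feature vectors like these would then feed the fusion and classification stage.

```python
# Sketch of two entropy features for a vibration signal: power spectrum entropy and
# singular spectrum entropy (approximate entropy omitted for brevity).
import numpy as np

def power_spectrum_entropy(x):
    psd = np.abs(np.fft.rfft(x))**2
    p = psd / psd.sum()
    return -np.sum(p * np.log(p + 1e-12))

def singular_spectrum_entropy(x, embed_dim=20):
    # Build a trajectory (Hankel-style) matrix and use its normalized singular values.
    n = len(x) - embed_dim + 1
    traj = np.stack([x[i:i + embed_dim] for i in range(n)])
    s = np.linalg.svd(traj, compute_uv=False)
    p = s / s.sum()
    return -np.sum(p * np.log(p + 1e-12))

rng = np.random.default_rng(6)
signal = np.sin(np.linspace(0, 40 * np.pi, 2048)) + 0.3 * rng.standard_normal(2048)
features = [power_spectrum_entropy(signal), singular_spectrum_entropy(signal)]
```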
Tyrosine kinase fusion genes in pediatric BCR-ABL1-like acute lymphoblastic leukemia
Boer, Judith M.; Steeghs, Elisabeth M.P.; Marchante, João R.M.; Boeree, Aurélie; Beaudoin, James J.; Berna Beverloo, H.; Kuiper, Roland P.; Escherich, Gabriele; van der Velden, Vincent H.J.; van der Schoot, C. Ellen; de Groot-Kruseman, Hester A.; Pieters, Rob; den Boer, Monique L.
2017-01-01
Approximately 15% of pediatric B cell precursor acute lymphoblastic leukemia (BCP-ALL) is characterized by gene expression similar to that of BCR-ABL1-positive disease and unfavorable prognosis. This BCR-ABL1-like subtype shows a high frequency of B-cell development gene aberrations and tyrosine kinase-activating lesions. To evaluate the clinical significance of tyrosine kinase gene fusions in children with BCP-ALL, we studied the frequency of recently identified tyrosine kinase fusions, associated genetic features, and prognosis in a representative Dutch/German cohort. We identified 14 tyrosine kinase fusions among 77 BCR-ABL1-like cases (18%) and none among 76 non-BCR-ABL1-like B-other cases. Novel exon fusions were identified for RCSD1-ABL2 and TERF2-JAK2. JAK2 mutation was mutually exclusive with tyrosine kinase fusions and only occurred in cases with high CRLF2 expression. The non/late response rate and levels of minimal residual disease in the fusion-positive BCR-ABL1-like group were higher than in the non-BCR-ABL1-like B-others (p<0.01), and also higher, albeit not statistically significant, compared with the fusion-negative BCR-ABL1-like group. The 8-year cumulative incidence of relapse in the fusion-positive BCR-ABL1-like group (35%) was comparable with that in the fusion-negative BCR-ABL1-like group (35%), and worse than in the non-BCR-ABL1-like B-other group (17%, p=0.07). IKZF1 deletions, predominantly other than the dominant-negative isoform and full deletion, co-occurred with tyrosine kinase fusions. This study shows that tyrosine kinase fusion-positive cases are a high-risk subtype of BCP-ALL, which warrants further studies with specific kinase inhibitors to improve outcome. PMID:27894077
Recurrent hyperactive ESR1 fusion proteins in endocrine therapy-resistant breast cancer.
Hartmaier, R J; Trabucco, S E; Priedigkeit, N; Chung, J H; Parachoniak, C A; Vanden Borre, P; Morley, S; Rosenzweig, M; Gay, L M; Goldberg, M E; Suh, J; Ali, S M; Ross, J; Leyland-Jones, B; Young, B; Williams, C; Park, B; Tsai, M; Haley, B; Peguero, J; Callahan, R D; Sachelarie, I; Cho, J; Atkinson, J M; Bahreini, A; Nagle, A M; Puhalla, S L; Watters, R J; Erdogan-Yildirim, Z; Cao, L; Oesterreich, S; Mathew, A; Lucas, P C; Davidson, N E; Brufsky, A M; Frampton, G M; Stephens, P J; Chmielecki, J; Lee, A V
2018-04-01
Estrogen receptor-positive (ER-positive) metastatic breast cancer is often intractable due to endocrine therapy resistance. Although ESR1 promoter switching events have been associated with endocrine-therapy resistance, recurrent ESR1 fusion proteins have yet to be identified in advanced breast cancer. To identify genomic structural rearrangements (REs) including gene fusions in acquired resistance, we undertook a multimodal sequencing effort in three breast cancer patient cohorts: (i) mate-pair and/or RNAseq in 6 patient-matched primary-metastatic tumors and 51 metastases, (ii) high coverage (>500×) comprehensive genomic profiling of 287-395 cancer-related genes across 9542 solid tumors (5216 from metastatic disease), and (iii) ultra-high coverage (>5000×) genomic profiling of 62 cancer-related genes in 254 ctDNA samples. In addition to traditional gene fusion detection methods (i.e. discordant reads, split reads), ESR1 REs were detected from targeted sequencing data by applying a novel algorithm (copyshift) that identifies major copy number shifts at rearrangement hotspots. We identify 88 ESR1 REs across 83 unique patients with direct confirmation of 9 ESR1 fusion proteins (including 2 via immunoblot). ESR1 REs are highly enriched in ER-positive, metastatic disease and co-occur with known ESR1 missense alterations, suggestive of polyclonal resistance. Importantly, all fusions result from a breakpoint in or near ESR1 intron 6 and therefore lack an intact ligand binding domain (LBD). In vitro characterization of three fusions reveals ligand-independence and hyperactivity dependent upon the 3' partner gene. Our lower-bound estimate of ESR1 fusions is at least 1% of metastatic solid breast cancers, and the prevalence in ctDNA is enriched at least 10-fold. We postulate this enrichment may represent secondary resistance to more aggressive endocrine therapies applied to patients with ESR1 LBD missense alterations. Collectively, these data indicate that N-terminal ESR1 fusions involving exons 6-7 are a recurrent driver of endocrine therapy resistance and are impervious to ER-targeted therapies.
Finn, Michael A; Samuelson, Mical M; Bishop, Frank; Bachus, Kent N; Brodke, Darrel S
2011-03-15
Biomechanical study. To determine biomechanical forces exerted on intermediate and adjacent segments after two- or three-level fusion for treatment of noncontiguous levels. Increased motion adjacent to fused spinal segments is postulated to be a driving force in adjacent segment degeneration. Occasionally, a patient requires treatment of noncontiguous levels on either side of a normal level. The biomechanical forces exerted on the intermediate and adjacent levels are unknown. Seven intact human cadaveric cervical spines (C3-T1) were mounted in a custom seven-axis spine simulator equipped with a follower load apparatus and an OptoTRAK three-dimensional tracking system. Each intact specimen underwent five cycles each of flexion/extension, lateral bending, and axial rotation under a ±1.5 Nm moment and a 100-N axial follower load. Applied torque and motion data in each axis of motion and level were recorded. Testing was repeated under the same parameters after C4-C5 and C6-C7 diskectomies were performed and fused with rigid cervical plates and interbody spacers, and again after a three-level fusion from C4 to C7. Range of motion was modestly increased (35%) in the intermediate and adjacent levels in the skip fusion construct. A significant or nearly significant difference was reached in seven of nine moments. With the three-level fusion construct, motion at the infra- and supra-adjacent levels was significantly or nearly significantly increased in all applied moments over the intact and the two-level noncontiguous constructs. The magnitude of this change was substantial (72%). Infra- and supra-adjacent levels experienced a marked increase in strain in all moments with a three-level fusion, whereas the intermediate, supra-, and infra-adjacent segments of a two-level fusion experienced only modest increases in strain relative to the intact condition. It would be appropriate to consider noncontiguous fusions instead of a three-level fusion when confronted with nonadjacent disease.
Radiological Determination of Postoperative Cervical Fusion: A Systematic Review.
Rhee, John M; Chapman, Jens R; Norvell, Daniel C; Smith, Justin; Sherry, Ned A; Riew, K Daniel
2015-07-01
Systematic review. To determine the best criteria for radiological determination of postoperative subaxial cervical fusion, to be applied to current clinical practice and to ongoing and future research assessing fusion, in order to standardize assessment and improve comparability. Despite the availability of multiple imaging modalities and criteria, there remains no method of determining cervical fusion with absolute certainty, nor clear consensus on the specific criteria to be applied. A systematic search was performed in MEDLINE and the Cochrane Collaboration Library (through March 2014). Included studies assessed C2 to C7 via an anterior or posterior approach, at 12 weeks or more postoperative, with any graft or implant. The overall body of evidence with respect to 6 posited key questions was determined using Grading of Recommendations Assessment, Development and Evaluation and Agency for Healthcare Research and Quality precepts. Of the plain radiographical modalities, there is moderate evidence that the interspinous process motion method (<1 mm) is more accurate than the Cobb angle method for assessing anterior cervical fusion. Of the advanced imaging modalities, there is moderate evidence that computed tomography (CT) is more accurate and reliable than magnetic resonance imaging in assessing anterior cervical fusion. There is insufficient evidence regarding the optimal modality and criteria for assessing posterior cervical fusions, and insufficient evidence to support a single time point after surgery as being optimal for determining fusion, although some evidence suggests that the reliability of radiography and CT improves with increasing time postoperatively. We recommend the interspinous process motion criterion (less than 1 mm of motion on dynamic radiographs) as the initial method for determining anterior cervical arthrodesis for both clinical and research applications. If further imaging is needed because of an indeterminate radiographical evaluation, we recommend CT, which has relatively high accuracy and reliability but, owing to greater radiation exposure and cost, is not routinely suggested. We recommend that plain radiographs also be the initial method of determining posterior cervical fusion, but suggest a lower threshold for obtaining CT scans because dynamic radiographs may not be as useful if spinous processes have been removed by laminectomy. Level of evidence: 1.
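As a minimal illustration of the recommended criterion (assuming calibrated measurements in millimetres from paired flexion and extension radiographs), the snippet below applies the less-than-1-mm interspinous process motion cutoff; the function name and default threshold are illustrative only.

```python
def fused_by_interspinous_motion(dist_flexion_mm, dist_extension_mm, threshold_mm=1.0):
    """Apply the <1 mm interspinous process motion criterion at one level.

    dist_flexion_mm / dist_extension_mm: interspinous distances measured on
    flexion and extension radiographs (same level, same magnification).
    Returns True if motion is below the threshold, i.e. consistent with fusion.
    """
    motion = abs(dist_flexion_mm - dist_extension_mm)
    return motion < threshold_mm

print(fused_by_interspinous_motion(11.3, 10.6))  # True  (0.7 mm of motion)
print(fused_by_interspinous_motion(12.8, 10.9))  # False (1.9 mm of motion)
```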
Haller, Florian; Knopf, Jasmin; Ackermann, Anne; Bieg, Matthias; Kleinheinz, Kortine; Schlesner, Matthias; Moskalev, Evgeny A; Will, Rainer; Satir, Ali Abdel; Abdelmagid, Ibtihalat E; Giedl, Johannes; Carbon, Roman; Rompel, Oliver; Hartmann, Arndt; Wiemann, Stefan; Metzler, Markus; Agaimy, Abbas
2016-04-01
Neoplasms with a myopericytomatous pattern represent a morphological spectrum of lesions encompassing myopericytoma of the skin and soft tissue, angioleiomyoma, myofibromatosis/infantile haemangiopericytoma and putative neoplasms reported as malignant myopericytoma. The lack of reproducible phenotypic and genetic features of malignant myopericytic neoplasms has prevented the establishment of myopericytic sarcoma as an acceptable diagnostic category. Following detection of an LMNA-NTRK1 gene fusion in an index case of paediatric haemangiopericytoma-like sarcoma by combined whole-genome and RNA sequencing, we identified three additional sarcomas harbouring NTRK1 gene fusions, termed 'spindle cell sarcoma, NOS with myo/haemangiopericytic growth pattern'. The patients were two children aged 11 months and 2 years and two adults aged 51 and 80 years. While the tumours of the adults were strikingly myopericytoma-like, but with clear-cut atypical features, the paediatric cases were more akin to infantile myofibromatosis/haemangiopericytoma. All cases contained numerous thick-walled dysplastic-like vessels with segmental or diffuse nodular myxohyaline myo-intimal proliferations of smooth muscle actin-positive cells, occasionally associated with thrombosis. Immunohistochemistry showed variable expression of smooth muscle actin and CD34, but other mesenchymal markers, including STAT6, were negative. This study shows a novel variant of myo/haemangiopericytic sarcoma with recurrent NTRK1 gene fusions. Given the recent introduction of a novel therapeutic approach targeting NTRK fusion-positive neoplasms, recognition of this rare but likely under-reported sarcoma variant is strongly encouraged. Copyright © 2016 Pathological Society of Great Britain and Ireland. Published by John Wiley & Sons, Ltd.
Jun, Yong Woong; Wang, Taejun; Hwang, Sekyu; Kim, Dokyoung; Ma, Donghee; Kim, Ki Hean; Kim, Sungjee; Jung, Junyang; Ahn, Kyo Han
2018-06-05
Vesicles exchange their contents through two membrane fusion processes: kiss-and-run and full-collapse fusion. Indirect observation of these fusion processes using artificial vesicles has enhanced our understanding of the molecular mechanisms involved. Direct observation of the fusion processes in a real biological system, however, remains a challenge owing to many technical obstacles. We disclose a ratiometric two-photon probe offering, for the first time, real-time tracking of lysosomal ATP with quantitative information. By applying the probe in two-photon live-cell imaging, the lysosomal membrane fusion process in cells has been directly observed along with the concentration of its content, lysosomal ATP. Results show that the kiss-and-run process between lysosomes proceeds through repeated transient interactions with gradual content mixing, whereas the full-fusion process occurs at once. Furthermore, it is confirmed that both fusion processes proceed with conservation of the content. Such a small-molecule probe exerts minimal disturbance and hence has potential for studying various biological processes associated with lysosomal ATP. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Xia, Youshen; Kamel, Mohamed S
2007-06-01
Identification of a general nonlinear noisy system, viewed as estimation of a predictor function, is studied in this article. A measurement fusion method for the predictor function estimate is proposed. In the proposed scheme, observed data are first fused using an optimal fusion technique, and then the optimally fused data are incorporated into a nonlinear function estimator based on a robust least squares support vector machine (LS-SVM). A cooperative learning algorithm is proposed to implement the proposed measurement fusion method. Compared with related identification methods, the proposed method can minimize both the approximation error and the noise error. The performance analysis shows that the proposed optimal measurement fusion function estimate has a smaller mean square error than the LS-SVM function estimate. Moreover, the proposed cooperative learning algorithm converges globally to the optimal measurement fusion function estimate. Finally, the proposed measurement fusion method is applied to ARMA signal and spatiotemporal signal modeling. Experimental results show that the proposed measurement fusion method can provide a more accurate model.
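The exact optimal fusion rule and cooperative learning algorithm are not spelled out in the abstract, so the following is only a rough sketch of the general idea: repeated noisy measurements are fused by inverse-noise-variance weighting, and kernel ridge regression stands in for the robust LS-SVM estimator. The toy signal, noise levels, and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Two noisy sensors observe y = sin(x) with different noise variances.
x = np.linspace(0, 2 * np.pi, 200)
sigma = np.array([0.3, 0.1])                      # per-sensor noise std (assumed known)
obs = np.vstack([np.sin(x) + rng.normal(0, s, x.size) for s in sigma])

# Optimal linear fusion: weight each sensor by its inverse noise variance.
w = (1 / sigma**2) / np.sum(1 / sigma**2)
fused = w @ obs

# Kernel ridge regression as a stand-in for the robust LS-SVM predictor.
model = KernelRidge(kernel="rbf", gamma=2.0, alpha=1e-2).fit(x[:, None], fused)
print("fused-data MSE   :", np.mean((model.predict(x[:, None]) - np.sin(x)) ** 2))

# For comparison, fit on a single (noisier) sensor.
model1 = KernelRidge(kernel="rbf", gamma=2.0, alpha=1e-2).fit(x[:, None], obs[0])
print("single-sensor MSE:", np.mean((model1.predict(x[:, None]) - np.sin(x)) ** 2))
```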
Multiplier, moderator, and reflector materials for lithium-vanadium fusion blankets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gohar, Y.; Smith, D. L.
1999-10-07
The self-cooled lithium-vanadium fusion blanket concept has several attractive operational and environmental features. In this concept, liquid lithium works as the tritium breeder and coolant to alleviate issues of coolant breeder compatibility and reactivity. Vanadium alloy (V-4Cr-4Ti) is used as the structural material because of its superior performance relative to other alloys for this application. However, this concept has poor attenuation characteristics and energy multiplication for the DT neutrons. An advanced self-cooled lithium-vanadium fusion blanket concept has been developed to eliminate these drawbacks while maintaining all the attractive features of the conventional concept. An electrical insulator coating for the coolant channels, a spectral shifter (multiplier and moderator), and a reflector were utilized in the blanket design to enhance the blanket performance. In addition, the blanket was designed to have the capability to operate at high loading conditions of 2 MW/m² surface heat flux and 10 MW/m² neutron wall loading. This paper assesses the spectral shifter and the reflector materials and defines the technological requirements of this advanced blanket concept.
Multiplier, moderator, and reflector materials for advanced lithium?vanadium fusion blankets
NASA Astrophysics Data System (ADS)
Gohar, Y.; Smith, D. L.
2000-12-01
The self-cooled lithium-vanadium fusion blanket concept has several attractive operational and environmental features. In this concept, liquid lithium works as the tritium breeder and coolant to alleviate issues of coolant breeder compatibility and reactivity. Vanadium alloy (V-4Cr-4Ti) is used as the structural material because of its superior performance relative to other alloys for this application. However, this concept has poor attenuation characteristics and energy multiplication for the DT neutrons. An advanced self-cooled lithium-vanadium fusion blanket concept has been developed to eliminate these drawbacks while maintaining all the attractive features of the conventional concept. An electrical insulator coating for the coolant channels, a spectral shifter (multiplier and moderator), and a reflector were utilized in the blanket design to enhance the blanket performance. In addition, the blanket was designed to have the capability to operate at average loading conditions of 2 MW/m² surface heat flux and 10 MW/m² neutron wall loading. This paper assesses the spectral shifter and the reflector materials and defines the technological requirements of this advanced blanket concept.
Driver fatigue detection through multiple entropy fusion analysis in an EEG-based system.
Min, Jianliang; Wang, Ping; Hu, Jianfeng
2017-01-01
Driver fatigue is an important contributor to road accidents, and fatigue detection has major implications for transportation safety. The aim of this research is to analyze the multiple entropy fusion method and evaluate several channel regions to effectively detect a driver's fatigue state based on electroencephalogram (EEG) records. First, we fused multiple entropies, namely spectral entropy, approximate entropy, sample entropy, and fuzzy entropy, as features and compared them with autoregressive (AR) modeling using four classifiers. Second, we identified four significant channel regions according to electrode weights via a simplified channel selection method. Finally, the evaluation model for detecting driver fatigue was established with four classifiers based on the EEG data from the four channel regions. Twelve healthy subjects performed continuous simulated driving for 1-2 hours with EEG monitoring on a static simulator. The leave-one-out cross-validation approach obtained an accuracy of 98.3%, a sensitivity of 98.3%, and a specificity of 98.2%. The experimental results verified the effectiveness of the proposed method, indicating that the multiple entropy fusion features are significant factors for inferring the fatigue state of a driver.
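A minimal sketch of the entropy-fusion idea is given below, assuming synthetic EEG epochs: spectral entropy and a naive sample entropy are computed per channel, concatenated into a fused feature vector, and classified with an SVM. Approximate entropy, fuzzy entropy, the channel-selection step, and the AR baseline are omitted, and all data and parameters are stand-ins.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def spectral_entropy(x):
    """Normalized Shannon entropy of the power spectrum."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / psd.sum()
    return float(-np.sum(p * np.log(p + 1e-12)) / np.log(len(p)))

def sample_entropy(x, m=2, r_factor=0.2):
    """Naive O(N^2) sample entropy, adequate for short epochs."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        return (np.sum(d <= r) - len(t)) / 2.0        # pairs, excluding self-matches
    a, b = matches(m + 1), matches(m)
    return float(-np.log(a / b)) if a > 0 and b > 0 else 10.0   # cap when no matches

def entropy_features(epoch):
    """Fuse entropy measures across channels of one EEG epoch by concatenation."""
    return np.array([f(ch) for ch in epoch for f in (spectral_entropy, sample_entropy)])

# Synthetic stand-in data: 40 epochs, 4 channels, 256 samples each.
rng = np.random.default_rng(1)
alert = rng.normal(size=(20, 4, 256))                                           # broadband
fatigue = np.sin(np.linspace(0, 30, 256)) + 0.2 * rng.normal(size=(20, 4, 256)) # rhythmic
X = np.array([entropy_features(e) for e in np.concatenate([alert, fatigue])])
y = np.array([0] * 20 + [1] * 20)
print(cross_val_score(SVC(kernel="rbf", gamma="scale"), X, y, cv=5).mean())
```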
Olayan, Rawan S; Ashoor, Haitham; Bajic, Vladimir B
2018-04-01
Computationally predicting drug-target interactions (DTIs) is a convenient strategy to identify new DTIs at low cost with reasonable accuracy. However, current DTI prediction methods suffer from high false-positive rates. We developed DDR, a novel method that improves DTI prediction accuracy. DDR is based on the use of a heterogeneous graph that contains known DTIs together with multiple similarities between drugs and multiple similarities between target proteins. DDR applies a non-linear similarity fusion method to combine the different similarities. Before fusion, DDR performs a pre-processing step in which a subset of similarities is selected in a heuristic process to obtain an optimized combination of similarities. Then, DDR applies a random forest model using different graph-based features extracted from the DTI heterogeneous graph. Using 5 repeats of 10-fold cross-validation, three testing setups, and the weighted average of area under the precision-recall curve (AUPR) scores, we show that DDR significantly reduces the AUPR score error relative to the next best state-of-the-art method for predicting DTIs by 34% when the drugs are new, by 23% when the targets are new, and by 34% when the drugs and the targets are known but not all DTIs between them are known. Using independent sources of evidence, we verify as correct 22 out of the top 25 DDR novel predictions. This suggests that DDR can be used as an efficient method to identify correct DTIs. The data and code are provided at https://bitbucket.org/RSO24/ddr/. vladimir.bajic@kaust.edu.sa. Supplementary data are available at Bioinformatics online.
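DDR's heuristic similarity selection and non-linear fusion are not reproduced here; the sketch below only illustrates the overall pipeline under simplifying assumptions, averaging several similarity matrices, deriving simple guilt-by-association graph features for each drug-target pair (masking the pair being scored), and training a random forest. All data are random stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n_drugs, n_targets = 30, 40

def random_similarities(n, k=3):
    """Stand-ins for k similarity matrices (chemical, side-effect, sequence, ...)."""
    mats = []
    for _ in range(k):
        a = rng.random((n, n))
        s = (a + a.T) / 2
        np.fill_diagonal(s, 1.0)
        mats.append(s)
    return mats

# Simple averaging as a stand-in for DDR's heuristic selection and non-linear fusion.
S_d = np.mean(random_similarities(n_drugs), axis=0)
S_t = np.mean(random_similarities(n_targets), axis=0)
dti = (rng.random((n_drugs, n_targets)) < 0.1).astype(float)   # known DTI matrix

def pair_features(d, t, A):
    """Guilt-by-association graph features for one (drug, target) pair."""
    A = A.copy()
    A[d, t] = 0.0                      # mask the pair being scored to avoid leakage
    return np.array([
        S_d[d] @ A[:, t],              # similar drugs known to hit this target
        A[d] @ S_t[t],                 # this drug known to hit similar targets
        S_d[d] @ A @ S_t[t],           # two-hop drug-target paths
    ])

pairs = [(d, t) for d in range(n_drugs) for t in range(n_targets)]
X = np.array([pair_features(d, t, dti) for d, t in pairs])
y = np.array([dti[d, t] for d, t in pairs])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("training accuracy (illustrative only):", clf.score(X, y))
```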
High-Energy Space Propulsion Based on Magnetized Target Fusion
NASA Technical Reports Server (NTRS)
Thio, Y. C. F.; Freeze, B.; Kirkpatrick, R. C.; Landrum, B.; Gerrish, H.; Schmidt, G. R.
1999-01-01
A conceptual study is made to explore the feasibility of applying magnetized target fusion (MTF) to space propulsion for omniplanetary travel. Plasma-jet driven MTF not only is highly amenable to space propulsion, but also has a number of very attractive features for this application: 1) The pulsed fusion scheme provides in situ a very dense hydrogenous liner capable of moderating the neutrons, converting more than 97% of the neutron energy into charged particle energy of the fusion plasma available for propulsion. 2) The fusion yield per pulse can be maintained at an attractively low level (< 1 GJ) despite a respectable gain in excess of 70. A compact, low-weight engine is the result. An engine with a jet power of 25 GW, a thrust of 66 kN, and a specific impulse of 77,000 s, can be achieved with an overall engine mass of about 41 metric tons, with a specific power density of 605 kW/kg, and a specific thrust density of 1.6 N/kg. The engine is rep-rated at 40 Hz to provide this power and thrust level. At a practical rep-rate limit of 200 Hz, the engine can deliver 128 GW jet power and 340 kN of thrust, at specific power and thrust density of 1,141 kW/kg and 3 N/kg respectively. 3) It is possible to operate the magnetic nozzle as a magnetic flux compression generator in this scheme, while attaining a high nozzle efficiency of 80% in converting the spherically radial momentum of the fusion plasma to an axial impulse. 4) A small fraction of the electrical energy generated from the flux compression is used directly to recharge the capacitor bank and other energy storage equipment, without the use of a high-voltage DC power supply. A separate electrical generator is not necessary. 5) Due to the simplicity of the electrical circuit and the components, involving mainly inductors, capacitors, and plasma guns, which are connected directly to each other without any intermediate equipment, a high rep-rate (with a maximum of 200 Hz) appears practicable. 6) All fusion-related components are within the current state of the art for pulsed power technology. Experimental facilities with the required pulsed power capabilities already exist. 7) The scheme does not require prefabricated fuel target and liner hardware in any esoteric form or state. All necessary fuel and liner material are introduced into the engine in the form of ordinary matter in gaseous state at room temperature, greatly simplifying their handling on board. They are delivered into the fusion reaction chamber in a completely standoff manner.
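As a quick consistency check of the quoted engine figures, using the standard relations v_e = Isp * g0 and P_jet = F * v_e / 2 (the small differences from the quoted 605 kW/kg reflect rounding in the abstract):

```python
g0 = 9.80665                 # m/s^2
isp = 77_000.0               # s, quoted specific impulse
thrust = 66_000.0            # N, quoted thrust at 40 Hz
engine_mass = 41_000.0       # kg, quoted engine mass

v_e = isp * g0               # effective exhaust velocity, ~7.55e5 m/s
p_jet = 0.5 * thrust * v_e   # jet power = F * v_e / 2
print(f"jet power      ~ {p_jet / 1e9:.0f} GW")                   # ~25 GW, matches
print(f"specific power ~ {p_jet / engine_mass / 1e3:.0f} kW/kg")  # ~608 kW/kg vs quoted 605
print(f"thrust density ~ {thrust / engine_mass:.1f} N/kg")        # ~1.6 N/kg, matches
```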
Lasche, George P.
1988-01-01
A high-power-density laser or charged-particle-beam fusion reactor system maximizes the directed kinetic energy imparted to a large mass of liquid lithium by a centrally located fusion target. A fusion target is embedded in a large mass of lithium, of sufficient radius to act as a tritium breeding blanket, and provided with ports for the access of beam energy to implode the target. The directed kinetic energy is converted directly to electricity with high efficiency by work done against a pulsed magnetic field applied exterior to the lithium. Because the system maximizes the blanket thickness per unit volume of lithium, neutron-induced radioactivities in the reaction chamber wall are several orders of magnitude less than is typical of other fusion reactor systems.
Lasche, G.P.
1987-02-20
A high-power-density laser or charged-particle-beam fusion reactor system maximizes the directed kinetic energy imparted to a large mass of liquid lithium by a centrally located fusion target. A fusion target is embedded in a large mass of lithium, of sufficient radius to act as a tritium breeding blanket, and provided with ports for the access of beam energy to implode the target. The directed kinetic energy is converted directly to electricity with high efficiency by work done against a pulsed magnetic field applied exterior to the lithium. Because the system maximizes the blanket thickness per unit volume of lithium, neutron-induced radioactivities in the reaction chamber wall are several orders of magnitude less than is typical of other fusion reactor systems. 25 figs.
Interaction of intense ultrashort pulse lasers with clusters.
NASA Astrophysics Data System (ADS)
Petrov, George
2007-11-01
The last ten years have witnessed an explosion of activity involving the interaction of clusters with intense ultrashort pulse lasers. Atomic or molecular clusters are targets with unique properties, as they are halfway between solids and gases. The intense laser radiation creates hot dense plasma, which can provide a compact source of x-rays and energetic particles. The focus of this investigation is to understand the salient features of energy absorption and Coulomb explosion by clusters. The evolution of clusters is modeled with a relativistic time-dependent 3D Molecular Dynamics (MD) model [1]. The Coulomb interaction between particles is handled by a fast tree algorithm, which allows a large number of particles to be used in simulations [2]. The time histories of all particles in a cluster are followed in time and space. The model accounts for ionization-ignition effects (enhancement of the laser field in the vicinity of ions) and a variety of elementary processes for free electrons and charged ions, such as optical field and collisional ionization, outer ionization and electron recapture. The MD model was applied to study small clusters (1-20 nm) irradiated by a high-intensity (10^16-10^20 W/cm^2) sub-picosecond laser pulse. We studied fundamental cluster features such as energy absorption, x-ray emission, particle distribution, average charge per atom, and cluster explosion as a function of initial cluster radius, laser peak intensity and wavelength. Simulations of novel applications, such as table-top nuclear fusion from exploding deuterium clusters [3] and high power synchrotron radiation for biological applications and imaging [4], have been performed. The application for nuclear fusion was motivated by the efficient absorption of laser energy (˜100%) and its high conversion efficiency into ion kinetic energy (˜50%), resulting in a neutron yield of 10^6 neutrons/Joule of laser energy. Contributors: J. Davis and A. L. Velikovich. [1] G. M. Petrov, et al Phys. Plasmas 12 063103 (2005); 13 033106 (2006) [2] G. M. Petrov, J. Davis, European Phys. J. D 41 629 (2007) [3] G. M. Petrov, J. Davis, A. L. Velikovich, Plasma Phys. Contr. Fusion 48 1721 (2006) [4] G. M. Petrov, J. Davis, A. L. Velikovich, J. Phys. B 39 4617 (2006)
Fusion for Space Propulsion and Plasma Liner Driven MTF
NASA Technical Reports Server (NTRS)
Thio, Y.C. Francis; Rodgers, Stephen L. (Technical Monitor)
2001-01-01
The need for fusion propulsion for interplanetary flights is discussed. For a propulsion system, there are three important system attributes: (1) the absolute amount of energy available, (2) the propellant exhaust velocity, and (3) the jet power per unit mass of the propulsion system (specific power). For human exploration and development of the solar system, propellant exhaust velocity in excess of 100 km/s and specific power in excess of 10 kW/kg are required. Chemical combustion cannot meet the requirement in propellant exhaust velocity. Nuclear fission processes typically produce energy in the form of heat that must be manipulated at temperatures limited by materials to about 2,800 K. Using the energy to heat a low-atomic-weight propellant cannot overcome the problem. Alternatively, the energy can be converted into electricity, which is then used to accelerate particles to high exhaust velocity. The necessary power conversion and conditioning equipment, however, increases the mass of the propulsion system for the same jet power by more than two orders of magnitude over a chemical system, thus greatly limiting the attainable thrust-to-weight ratio. If fusion can be developed, it appears to offer the best of all worlds in terms of propulsion: it can provide the absolute amount of energy, the propellant exhaust velocity, and the high specific jet power. An intermediate step towards pure fusion propulsion is a bimodal system in which a fission reactor is used to provide some of the energy to drive a fusion propulsion unit. The technical issues related to fusion for space propulsion are discussed. There are similarities as well as differences at the system level between applying fusion to propulsion and to terrestrial electrical power generation. The differences potentially provide a wider window of opportunities for applying fusion to propulsion. For example, pulsed approaches to fusion may be attractive for the propulsion application. This is particularly so in light of the significant development of the enabling pulsed power component technologies that has occurred in the last two decades because of defense and other energy requirements. The extreme states of matter required to produce fusion reactions may be more readily realizable in pulsed states, with less system mass, than in steady states. Significant savings in system mass may result in pulsed fusion systems using plasmas in the appropriate density regimes. Magnetized target fusion, which attempts to combine the favorable attributes of magnetic confinement and inertial compression-containment into one single integrated fusion scheme, appears to have benefits that are worth exploring for propulsion applications.
Levaot, Noam; Ottolenghi, Aner; Mann, Mati; Guterman-Ram, Gali; Kam, Zvi; Geiger, Benjamin
2015-10-01
Osteoclasts are multinucleated, bone-resorbing cells formed via fusion of monocyte progenitors, a process triggered by prolonged stimulation with RANKL, the osteoclast master regulator cytokine. Monocyte fusion into osteoclasts has been shown to play a key role in bone remodeling and homeostasis; therefore, aberrant fusion may be involved in a variety of bone diseases. Indeed, research in the last decade has led to the discovery of genes regulating osteoclast fusion; yet the basic cellular regulatory mechanism underlying the fusion process is poorly understood. Here, we applied a novel approach for tracking the fusion processes, using live-cell imaging of RANKL-stimulated and non-stimulated progenitor monocytes differentially expressing dsRED or GFP, respectively. We show that osteoclast fusion is initiated by a small (~2.4%) subset of precursors, termed "fusion founders", capable of fusing either with other founders or with non-stimulated progenitors (fusion followers), which alone are unable to initiate fusion. Careful examination indicates that the fusion between a founder and a follower cell consists of two distinct phases: an initial pairing of the two cells, typically lasting 5-35 min, during which the cells nevertheless maintain their initial morphology; and the fusion event itself. Interestingly, during the initial pre-fusion phase, a transfer of the fluorescent reporter proteins from nucleus to nucleus was noticed, suggesting crosstalk between the founder and follower progenitors via the cytoplasm that might directly affect the fusion process, as well as overall transcriptional regulation in the developing heterokaryon. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wen, Hongwei; Liu, Yue; Wang, Shengpei; Li, Zuoyong; Zhang, Jishui; Peng, Yun; He, Huiguang
2017-03-01
Tourette syndrome (TS) is a childhood-onset neurobehavioral disorder. To date, TS is still misdiagnosed due to its varied presentation and the lack of obvious clinical symptoms. Therefore, studies of objective imaging biomarkers are of great importance for early TS diagnosis. Tic generation has been linked to disturbed structural networks, and many recent efforts have investigated brain functional or structural networks using machine learning methods for the purpose of disease diagnosis. However, few such studies have addressed TS, and those that have suffer from several drawbacks. Therefore, we propose a novel classification framework integrating a multi-threshold strategy and a network fusion scheme to address these drawbacks. Here we used diffusion MRI probabilistic tractography to construct the structural networks of 44 TS children and 48 healthy children. We adapted the similarity network fusion algorithm specifically to fuse the multi-threshold structural networks. Graph theoretical analysis was then implemented, and nodal degree, nodal efficiency and nodal betweenness centrality were selected as features. Finally, the support vector machine recursive feature elimination (SVM-RFE) algorithm was used for feature selection, and the optimal features were then fed into an SVM to automatically discriminate TS children from controls. We achieved a high accuracy of 89.13% evaluated by nested cross-validation, demonstrating the superior performance of our framework over comparison methods. The discriminative regions involved in classification were primarily located in the basal ganglia and frontal cortico-cortical networks, all highly related to the pathology of TS. Together, our study may provide potential neuroimaging biomarkers for early-stage TS diagnosis.
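The network construction and similarity network fusion steps are specific to the study, but the classification stage can be sketched generically: SVM-RFE feature selection wrapped in a nested cross-validation, as below. The feature counts, parameter grid, and synthetic data are assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV, cross_val_score

rng = np.random.default_rng(3)
# Stand-in data: 92 subjects x 270 nodal network features (e.g. degree, efficiency,
# betweenness for 90 regions); the first 5 features carry a group signal.
X = rng.normal(size=(92, 270))
y = np.array([0] * 48 + [1] * 44)
X[y == 1, :5] += 1.0

pipe = Pipeline([
    ("rfe", RFE(SVC(kernel="linear", C=1.0), step=0.1)),   # SVM-RFE feature selection
    ("svm", SVC(kernel="linear", C=1.0)),                   # final classifier
])
inner = GridSearchCV(pipe, {"rfe__n_features_to_select": [10, 20, 40]}, cv=5)
outer_scores = cross_val_score(inner, X, y, cv=10)          # nested cross-validation
print("nested CV accuracy:", outer_scores.mean())
```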
Sensor fusion to enable next generation low cost Night Vision systems
NASA Astrophysics Data System (ADS)
Schweiger, R.; Franz, S.; Löhlein, O.; Ritter, W.; Källhammer, J.-E.; Franks, J.; Krekels, T.
2010-04-01
The next generation of automotive Night Vision Enhancement systems offers automatic pedestrian recognition with a performance beyond current Night Vision systems at a lower cost. This will allow high market penetration, covering the luxury as well as compact car segments. Improved performance can be achieved by fusing a Far Infrared (FIR) sensor with a Near Infrared (NIR) sensor. However, fusing with today's FIR systems would be too costly to achieve high market penetration. The main cost drivers of the FIR system are its resolution and its sensitivity. Sensor cost is largely determined by sensor die size. Fewer and smaller pixels reduce die size but also resolution and sensitivity. Sensitivity limits are mainly determined by inclement weather performance. Sensitivity requirements should be matched to the possibilities of low-cost FIR optics, especially the implications of molding highly complex optical surfaces. As a FIR sensor specified for fusion can have lower resolution as well as lower sensitivity, fusing FIR and NIR can solve both the performance and the cost problems. To compensate for the effect of FIR-sensor degradation on pedestrian detection capabilities, a fusion approach called MultiSensorBoosting is presented that produces a classifier holding highly discriminative sub-pixel features from both sensors at once. The algorithm is applied to data with different resolutions and to data obtained from cameras with varying optics to incorporate various sensor sensitivities. As it is not feasible to record representative data with all different sensor configurations, transformation routines on existing high-resolution data recorded with high-sensitivity cameras are investigated in order to determine the effects of lower resolution and lower sensitivity on the overall detection performance. This paper also gives an overview of first results showing that a reduction in FIR sensor resolution can be compensated using fusion techniques, as can a reduction in sensitivity.
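Details of MultiSensorBoosting are not given in the abstract; as a loose stand-in, the sketch below pools per-window features from both sensors into one feature set and lets a standard boosting classifier pick discriminative features from either modality. The features, injected signal, and classifier choice are all illustrative.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 400
# Per-candidate-window features from each sensor (e.g. intensity statistics,
# gradient histograms); here random stand-ins with signal in both modalities.
fir = rng.normal(size=(n, 8))
nir = rng.normal(size=(n, 16))
y = (rng.random(n) < 0.5).astype(int)
fir[y == 1, 0] += 1.5          # warm-body cue in FIR
nir[y == 1, 3] += 1.0          # shape/texture cue in NIR

X = np.hstack([fir, nir])      # one joint feature pool across both sensors
clf = AdaBoostClassifier(n_estimators=200, random_state=0)
print("fused   :", cross_val_score(clf, X, y, cv=5).mean())
print("FIR only:", cross_val_score(clf, fir, y, cv=5).mean())
print("NIR only:", cross_val_score(clf, nir, y, cv=5).mean())
```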
Seismic data fusion anomaly detection
NASA Astrophysics Data System (ADS)
Harrity, Kyle; Blasch, Erik; Alford, Mark; Ezekiel, Soundararajan; Ferris, David
2014-06-01
Detecting anomalies in non-stationary signals has valuable applications in many fields, including medicine and meteorology, such as identifying possible heart conditions from electrocardiography (ECG) signals or predicting earthquakes from seismographic data. Given the many available anomaly detection algorithms, it is important to compare candidate methods. In this paper, we examine and compare two approaches to anomaly detection and investigate how data fusion methods may improve performance. The first approach uses an artificial neural network (ANN) to detect anomalies in a wavelet de-noised signal. The second method uses a perspective neural network (PNN) to analyze an arbitrary number of "perspectives", or transformations, of the observed signal for anomalies. Possible perspectives include wavelet de-noising, the Fourier transform, peak filtering, etc. In order to evaluate these techniques via signal fusion metrics, we apply signal preprocessing techniques such as de-noising to the original signal and then use a neural network to find anomalies in the generated signal. From this secondary result it is possible to use data fusion techniques that can be evaluated via existing data fusion metrics for single and multiple perspectives. The results show which anomaly detection method, according to the metrics, is better suited overall for anomaly detection applications. The method used in this study could be applied to compare other signal processing algorithms.
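A rough sketch of the first approach (wavelet de-noising followed by an ANN residual detector) is shown below, assuming a synthetic signal with an injected burst; the wavelet, threshold, window length, and network size are arbitrary choices, and the PNN multi-perspective variant is not reproduced.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
t = np.linspace(0, 20, 2000)
signal = np.sin(2 * np.pi * t) + 0.3 * rng.normal(size=t.size)
signal[1200:1220] += 3.0                      # injected anomaly (spike burst)

# Perspective 1: wavelet de-noising by soft thresholding of detail coefficients.
coeffs = pywt.wavedec(signal, "db4", level=5)
thr = 0.3 * np.sqrt(2 * np.log(signal.size))  # crude universal-threshold estimate
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: signal.size]

# ANN detector: predict the next sample from a short history; large residuals
# on the de-noised signal mark candidate anomalies.
w = 20
Xw = np.array([denoised[i:i + w] for i in range(denoised.size - w)])
yw = denoised[w:]
ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(Xw, yw)
resid = np.abs(ann.predict(Xw) - yw)
score = (resid - resid.mean()) / resid.std()
print("flagged indices:", np.where(score > 4)[0][:5] + w)   # should fall near the burst
```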
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Choong-Seock; Greenwald, Martin; Riley, Katherine
The additional computing power offered by the planned exascale facilities could be transformational across the spectrum of plasma and fusion research, provided that the new architectures can be efficiently applied to our problem space. The collaboration that will be required to succeed should be viewed as an opportunity to identify and exploit cross-disciplinary synergies. To assess the opportunities and requirements as part of the development of an overall strategy for computing in the exascale era, the Exascale Requirements Review meeting of the Fusion Energy Sciences (FES) community was convened January 27-29, 2016, with participation from a broad range of fusion and plasma scientists, specialists in applied mathematics and computer science, and representatives from the U.S. Department of Energy (DOE) and its major computing facilities. This report is a summary of that meeting and the preparatory activities for it and includes a wealth of detail to support the findings. Technical opportunities, requirements, and challenges are detailed in this report (and in the recent report on the Workshop on Integrated Simulation). Science applications are described, along with mathematical and computational enabling technologies. Also see http://exascaleage.org/fes/ for more information.
The effects of local insulin application to lumbar spinal fusions in a rat model.
Koerner, John D; Yalamanchili, Praveen; Munoz, William; Uko, Linda; Chaudhary, Saad B; Lin, Sheldon S; Vives, Michael J
2013-01-01
The rates of pseudoarthrosis after a single-level spinal fusion have been reported to be up to 35%, and agents that increase the rate of fusion have an important role in decreasing pseudoarthrosis after spinal fusion. Previous studies have analyzed the effects of local insulin application to an autograft in a rat segmental defect model. Defects treated with a time-released insulin implant had significantly more new bone formation and greater quality of bone compared with controls based on histology and histomorphometry. A time-released insulin implant may have similar effects when applied in a lumbar spinal fusion model. This study analyzes the effects of a local time-released insulin implant applied to the fusion bed in a rat posterolateral lumbar spinal fusion model. Our hypothesis was twofold: first, that a time-released insulin implant applied to the autograft bed in a rat posterolateral lumbar fusion would increase the rate of successful fusion; and second, that it would alter the local environment of the fusion site by increasing the levels of local growth factors. Animal model (Institutional Animal Care and Use Committee approved) using 40 adult male Sprague-Dawley rats. Forty skeletally mature Sprague-Dawley rats weighing approximately 500 g each underwent posterolateral intertransverse lumbar fusions with iliac crest autograft from L4 to L5 using a Wiltse-type approach. After exposure of the transverse processes and high-speed burr decortication, a Linplant (Linshin Canada, Inc., ON, Canada) consisting of 95% microrecrystallized palmitic acid and 5% bovine insulin (experimental group) or a sham implant consisting of only palmitic acid (control group) was implanted on the fusion bed with iliac crest autograft. As per the manufacturer, the Linplant has a release rate of 2 U/day for a minimum of 40 days. The transverse processes and autograft beds of 10 animals from the experimental group and 10 from the control group were harvested at Day 4 and analyzed for growth factors. The remaining 20 spines were harvested at 8 weeks and underwent radiographic examination, manual palpation, and microcomputed tomographic (micro-CT) examination. One of the 8-week control animals died on postoperative Day 1, likely due to anesthesia. In the groups sacrificed at Day 4, there was a significant increase in insulin-like growth factor-I (IGF-I) in the insulin treatment group compared with the controls (0.185 vs. 0.129; p=.001). No significant differences were demonstrated in the levels of transforming growth factor beta-1, platelet-derived growth factor-AB, and vascular endothelial growth factor between the groups (p=.461, .452, and .767, respectively). Based on the radiographs, 1 of 9 controls had a solid bilateral fusion mass, 2 of 9 had a unilateral fusion mass, 3 of 9 had a small bilateral fusion mass, and 3 of 9 had graft resorption. The treatment group had a solid bilateral fusion mass in 6 of 10 and a unilateral fusion mass in 4 of 10, whereas a small bilateral fusion mass and graft resorption were not observed. The difference between the groups was significant (p=.0067). Based on manual palpation, only 1 of 9 controls was considered fused, 4 of 9 were partially fused, and 4 of 9 were not fused. In the treatment group, there were 6 of 10 fusions, 3 of 10 partial fusions, and 1 of 10 was not fused. The difference between the groups was significant (p=.0084). Based on the micro-CT, the mean bone volume was 126.7 mm³ in the control group and 203.8 mm³ in the insulin treatment group. 
The difference between the groups was significant (p=.0007). This study demonstrates the potential role of a time-released insulin implant as a bone graft enhancer using a rat posterolateral intertransverse lumbar fusion model. The insulin-treatment group had significantly higher fusion rates based on the radiographs and manual palpation and had significantly higher levels of IGF-I and significantly more bone volume on micro-CT. Copyright © 2013 Elsevier Inc. All rights reserved.
Fusion Energy Division progress report, 1 January 1990--31 December 1991
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheffield, J.; Baker, C.C.; Saltmarsh, M.J.
1994-03-01
The Fusion Program of the Oak Ridge National Laboratory (ORNL), a major part of the national fusion program, encompasses nearly all areas of magnetic fusion research. The program is directed toward the development of fusion as an economical and environmentally attractive energy source for the future. The program involves staff from ORNL, Martin Marietta Energy Systems, Inc., private industry, the academic community, and other fusion laboratories, in the US and abroad. Achievements resulting from this collaboration are documented in this report, which is issued as the progress report of the ORNL Fusion Energy Division; it also contains information from components of the Fusion Program that are external to the division (about 15% of the program effort). The areas addressed by the Fusion Program include the following: experimental and theoretical research on magnetic confinement concepts; engineering and physics of existing and planned devices, including remote handling; development and testing of diagnostic tools and techniques in support of experiments; assembly and distribution to the fusion community of databases on atomic physics and radiation effects; development and testing of technologies for heating and fueling fusion plasmas; development and testing of superconducting magnets for containing fusion plasmas; development and testing of materials for fusion devices; and exploration of opportunities to apply the unique skills, technology, and techniques developed in the course of this work to other areas (about 15% of the Division's activities). Highlights from program activities during 1990 and 1991 are presented.
Real-time sensor validation and fusion for distributed autonomous sensors
NASA Astrophysics Data System (ADS)
Yuan, Xiaojing; Li, Xiangshang; Buckles, Bill P.
2004-04-01
Multi-sensor data fusion has found widespread applications in industrial and research sectors. The purpose of real time multi-sensor data fusion is to dynamically estimate an improved system model from a set of different data sources, i.e., sensors. This paper presented a systematic and unified real time sensor validation and fusion framework (RTSVFF) based on distributed autonomous sensors. The RTSVFF is an open architecture which consists of four layers - the transaction layer, the process fusion layer, the control layer, and the planning layer. This paradigm facilitates distribution of intelligence to the sensor level and sharing of information among sensors, controllers, and other devices in the system. The openness of the architecture also provides a platform to test different sensor validation and fusion algorithms and thus facilitates the selection of near optimal algorithms for specific sensor fusion application. In the version of the model presented in this paper, confidence weighted averaging is employed to address the dynamic system state issue noted above. The state is computed using an adaptive estimator and dynamic validation curve for numeric data fusion and a robust diagnostic map for decision level qualitative fusion. The framework is then applied to automatic monitoring of a gas-turbine engine, including a performance comparison of the proposed real-time sensor fusion algorithms and a traditional numerical weighted average.
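The confidence-weighted averaging step can be sketched as follows, assuming each sensor's confidence is taken as the inverse of an adaptively estimated residual variance; the function signature and the toy residual histories are illustrative, not the RTSVFF implementation.

```python
import numpy as np

def confidence_weighted_fusion(readings, residual_history, eps=1e-6):
    """Fuse one synchronized set of sensor readings.

    readings         : array (n_sensors,) of current measurements
    residual_history : array (n_sensors, k) of each sensor's recent residuals
                       against the previous fused estimate (used for validation)
    Returns the fused estimate and the per-sensor confidence weights.
    """
    variances = np.var(residual_history, axis=1) + eps    # adaptive noise estimate
    confidence = 1.0 / variances
    weights = confidence / confidence.sum()
    return float(weights @ readings), weights

# Three temperature sensors; sensor 2 has drifted and shows large residuals.
readings = np.array([501.0, 502.5, 540.0])
history = np.array([[0.5, -0.4, 0.3], [1.0, -0.8, 0.9], [20.0, 25.0, 18.0]])
fused, w = confidence_weighted_fusion(readings, history)
print(round(fused, 1), np.round(w, 3))   # fused estimate near ~502; sensor 2 down-weighted
```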
[New features in the 2014 WHO classification of uterine neoplasms].
Lax, S F
2016-11-01
The 2014 World Health Organization (WHO) classification of uterine tumors brought a simplification of the classification through the fusion of several entities and the introduction of novel entities. Among the many alterations, the following are notable: a simplified classification for precursor lesions of endometrial carcinoma now distinguishes between hyperplasia without atypia and atypical hyperplasia, the latter also known as endometrioid intraepithelial neoplasia (EIN). For endometrial carcinoma, a distinction is made between type 1 (endometrioid carcinoma with variants and mucinous carcinoma) and type 2 (serous and clear cell carcinoma). Besides a papillary architecture, serous carcinomas may show solid and glandular features, and TP53 immunohistochemistry with an "all or null" pattern assists in the diagnosis of serous carcinoma with ambiguous features. Neuroendocrine neoplasms are categorized in a similar way to those of the gastrointestinal tract, into well-differentiated neuroendocrine tumors and poorly differentiated neuroendocrine carcinomas (small cell and large cell types). Leiomyosarcomas of the uterus are typically high grade and characterized by marked nuclear atypia and brisk mitotic activity. Low-grade stromal neoplasms frequently show gene fusions, such as JAZF1/SUZ12. High-grade endometrial stromal sarcoma is newly defined by cyclin D1 overexpression and the presence of the fusion gene YWHAE/FAM22 and must be distinguished from undifferentiated uterine sarcoma. Carcinosarcomas (malignant mixed Mullerian tumors, MMMT) show biological and molecular similarities to high-grade carcinomas.
Fusion of fuzzy statistical distributions for classification of thyroid ultrasound patterns.
Iakovidis, Dimitris K; Keramidas, Eystratios G; Maroulis, Dimitris
2010-09-01
This paper proposes a novel approach for thyroid ultrasound pattern representation. Considering that texture and echogenicity are correlated with thyroid malignancy, the proposed approach encodes these sonographic features via a noise-resistant representation. This representation is suitable for the discrimination of nodules of high malignancy risk from normal thyroid parenchyma. The material used in this study includes a total of 250 thyroid ultrasound patterns obtained from 75 patients in Greece. The patterns are represented by fused vectors of fuzzy features. Ultrasound texture is represented by fuzzy local binary patterns, whereas echogenicity is represented by fuzzy intensity histograms. The encoded thyroid ultrasound patterns are discriminated by support vector classifiers. The proposed approach was comprehensively evaluated using receiver operating characteristic (ROC) analysis. The results show that the proposed fusion scheme outperforms previous thyroid ultrasound pattern representation methods proposed in the literature. The best classification accuracy was obtained with a polynomial kernel support vector machine and reached 97.5%, as estimated by the area under the ROC curve. The fusion of fuzzy local binary patterns and fuzzy grey-level histogram features is more effective than state-of-the-art approaches for the representation of thyroid ultrasound patterns and can be effectively utilized for the detection of nodules of high malignancy risk in the context of an intelligent medical system. Copyright (c) 2010 Elsevier B.V. All rights reserved.
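As a simplified stand-in for the proposed representation (crisp rather than fuzzy local binary patterns and intensity histograms, concatenated and fed to a polynomial-kernel SVM), the sketch below uses synthetic patches; all parameters and data are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def pattern_features(patch, p=8, r=1.0, n_bins=16):
    """Texture (LBP histogram) and echogenicity (intensity histogram) features
    from one ultrasound patch, fused by concatenation."""
    img = (patch * 255).astype(np.uint8)
    lbp = local_binary_pattern(img, p, r, method="uniform")
    h_tex, _ = np.histogram(lbp, bins=p + 2, range=(0, p + 2), density=True)
    h_int, _ = np.histogram(patch, bins=n_bins, range=(0.0, 1.0), density=True)
    return np.concatenate([h_tex, h_int])

# Synthetic stand-ins: "normal parenchyma" (smooth, brighter) vs "nodular" (coarser, darker).
rng = np.random.default_rng(6)
normal = [np.clip(0.6 + 0.05 * rng.normal(size=(32, 32)), 0, 1) for _ in range(30)]
nodule = [np.clip(0.4 + 0.15 * rng.normal(size=(32, 32)), 0, 1) for _ in range(30)]
X = np.array([pattern_features(p) for p in normal + nodule])
y = np.array([0] * 30 + [1] * 30)
print(cross_val_score(SVC(kernel="poly", degree=2), X, y, cv=5).mean())
```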
Forecasting Chronic Diseases Using Data Fusion.
Acar, Evrim; Gürdeniz, Gözde; Savorani, Francesco; Hansen, Louise; Olsen, Anja; Tjønneland, Anne; Dragsted, Lars Ove; Bro, Rasmus
2017-07-07
Data fusion, that is, extracting information through the fusion of complementary data sets, is a topic of great interest in metabolomics because analytical platforms such as liquid chromatography-mass spectrometry (LC-MS) and nuclear magnetic resonance (NMR) spectroscopy commonly used for chemical profiling of biofluids provide complementary information. In this study, with a goal of forecasting acute coronary syndrome (ACS), breast cancer, and colon cancer, we jointly analyzed LC-MS, NMR measurements of plasma samples, and the metadata corresponding to the lifestyle of participants. We used supervised data fusion based on multiple kernel learning and exploited the linearity of the models to identify significant metabolites/features for the separation of healthy referents and the cases developing a disease. We demonstrated that (i) fusing LC-MS, NMR, and metadata provided better separation of ACS cases and referents compared with individual data sets, (ii) NMR data performed the best in terms of forecasting breast cancer, while fusion degraded the performance, and (iii) neither the individual data sets nor their fusion performed well for colon cancer. Furthermore, we showed the strengths and limitations of the fusion models by discussing their performance in terms of capturing known biomarkers for smoking and coffee. While fusion may improve performance in terms of separating certain conditions by jointly analyzing metabolomics and metadata sets, it is not necessarily always the best approach as in the case of breast cancer.
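A bare-bones sketch of kernel-level fusion in the spirit of multiple kernel learning is given below: one RBF kernel per data set, combined with fixed weights and fed to a precomputed-kernel SVM. In actual multiple kernel learning the weights are learned rather than fixed, and the synthetic blocks here merely stand in for the LC-MS, NMR, and metadata sets.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 120
y = np.array([0] * 60 + [1] * 60)
# Three complementary "platforms" for the same subjects, each with a weak signal.
blocks = []
for shift in (0.6, 0.4, 0.3):
    b = rng.normal(size=(n, 50))
    b[y == 1, :3] += shift
    blocks.append(b)

# One kernel per data set, combined with fixed weights.
kernels = [rbf_kernel(b, gamma=1.0 / b.shape[1]) for b in blocks]
weights = np.array([0.4, 0.4, 0.2])
K = sum(w * k for w, k in zip(weights, kernels))

svm = SVC(kernel="precomputed")
print("fused kernels :", cross_val_score(svm, K, y, cv=5).mean())
print("platform 1 only:", cross_val_score(svm, kernels[0], y, cv=5).mean())
```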
Ishibashi, Kenichiro; Ito, Yohei; Masaki, Ayako; Fujii, Kana; Beppu, Shintaro; Sakakibara, Takeo; Takino, Hisashi; Takase, Hiroshi; Ijichi, Kei; Shimozato, Kazuo; Inagaki, Hiroshi
2015-11-01
There has been some debate as to whether a subset of metaplastic Warthin tumors (mWTs) harbor the mucoepidermoid carcinoma (MEC)-associated CRTC1-MAML2 fusion. We analyzed 15 tumors originally diagnosed as mWT (mWT-like tumors), 2 of which had concurrent MECs. We looked for the CRTC1/3-MAML2 fusion transcripts and performed immunohistochemistry for p63 and fluorescence in situ hybridization (FISH) for the MAML2 split. To localize MAML2 split-positive cells at the cellular level, whole tumor tissue sections were digitized (whole-slide imaging [WSI]). The CRTC1-MAML2 fusion, but not CRTC3-MAML2, was detected in 5/15 mWT-like tumors. FISH-WSI results showed that all epithelial cells harbored the MAML2 split in fusion-positive mWT-like tumors and were totally negative in fusion-negative mWT-like tumors. A review of the hematoxylin and eosin-stained slides showed that the morphology of the "metaplastic" epithelium was virtually indistinguishable between fusion-positive and fusion-negative tumors. However, oncocytic bilayered tumor epithelium, characteristic of typical WT, was always found somewhere in the fusion-negative tumors but not in the fusion-positive tumors. This distinguishing histologic finding enabled 5 pathologists to easily differentiate the 2 tumor groups with 100% accuracy. The age and sex distribution of fusion-positive mWT-like tumor cases was similar to that of fusion-positive MEC cases and significantly different from those of fusion-negative mWT-like tumor and typical WT cases. In addition, only fusion-positive mWT-like tumors possessed concurrent low-grade MECs. In conclusion, a subset of mWT-like tumors were positive for the CRTC1-MAML2 fusion and had many features that are more in accord with MEC than with WT. The term Warthin-like MEC should be considered for fusion-positive mWT-like tumors.
Dynamic Creation of Social Networks for Syndromic Surveillance Using Information Fusion
NASA Astrophysics Data System (ADS)
Holsopple, Jared; Yang, Shanchieh; Sudit, Moises; Stotz, Adam
To enhance the effectiveness of health care, many medical institutions have started transitioning to electronic health and medical records and sharing these records between institutions. The large amount of complex and diverse data makes it difficult to identify and track relationships and trends, such as disease outbreaks, from the data points. INFERD (Information Fusion Engine for Real-Time Decision-Making) is an information fusion tool that dynamically correlates and tracks event progressions. This paper presents a methodology that utilizes the efficient and flexible structure of INFERD to create social networks representing progressions of disease outbreaks. Individual symptoms are treated as features, allowing multiple hypotheses to be tracked and analyzed for effective and comprehensive syndromic surveillance.
Kagan, Grigory; Svyatskiy, D.; Rinderknecht, H. G.; ...
2015-09-03
The distribution function of suprathermal ions is found to be self-similar under conditions relevant to inertial confinement fusion hot spots. By utilizing this feature, interference between the hydrodynamic instabilities and kinetic effects is for the first time assessed quantitatively to find that the instabilities substantially aggravate the fusion reactivity reduction. The ion tail depletion is also shown to lower the experimentally inferred ion temperature, a novel kinetic effect that may explain the discrepancy between the exploding pusher experiments and rad-hydro simulations and contribute to the observation that temperature inferred from DD reaction products is lower than from DT at the National Ignition Facility.
NASA Astrophysics Data System (ADS)
Kagan, Grigory; Svyatskiy, D.; Rinderknecht, H. G.; Rosenberg, M. J.; Zylstra, A. B.; Huang, C.-K.; McDevitt, C. J.
2015-09-01
The distribution function of suprathermal ions is found to be self-similar under conditions relevant to inertial confinement fusion hot spots. By utilizing this feature, interference between the hydrodynamic instabilities and kinetic effects is for the first time assessed quantitatively to find that the instabilities substantially aggravate the fusion reactivity reduction. The ion tail depletion is also shown to lower the experimentally inferred ion temperature, a novel kinetic effect that may explain the discrepancy between the exploding pusher experiments and rad-hydro simulations and contribute to the observation that temperature inferred from DD reaction products is lower than from DT at the National Ignition Facility.
A Fusion of Horizons: Students' Encounters with "Will and Wave"
ERIC Educational Resources Information Center
Myers, James L.
2006-01-01
In a case study, I applied philosophical hermeneutic principles in an advanced level EFL writing class in Taiwan. A "fusion of horizons" occurs at the junction of two intertwined interpretations: one from our socio-historical tradition and the other from our experience of novel phenomena. I explored students' hermeneutic horizons in…