Wang, Shun-Yi; Chen, Xian-Xia; Li, Yi; Zhang, Yu-Ying
2016-12-20
The arrival of the precision medicine plan brings new opportunities and challenges for the precise diagnosis and treatment of malignant tumors. With the development of medical imaging, information from different imaging modalities can be integrated and comprehensively analyzed by imaging fusion systems. This review aimed to update the application of multimodality imaging fusion technology in the precise diagnosis and treatment of malignant tumors under the precision medicine plan. We introduced several multimodality imaging fusion technologies and their application to the diagnosis and treatment of malignant tumors in clinical practice. The data cited in this review were obtained mainly from the PubMed database from 1996 to 2016, using the keywords "precision medicine", "fusion imaging", "multimodality", and "tumor diagnosis and treatment". Original articles, clinical practice reports, reviews, and other relevant literature published in English were reviewed. Papers focusing on precision medicine, fusion imaging, multimodality, and tumor diagnosis and treatment were selected; duplicated papers were excluded. Multimodality imaging fusion technology plays an important role in tumor diagnosis and treatment under the precision medicine plan, supporting accurate localization, qualitative diagnosis, tumor staging, treatment plan design, and real-time intraoperative monitoring. Multimodality imaging fusion systems can provide more imaging information about tumors from different dimensions and angles, thereby offering strong technical support for the implementation of precision oncology. Under the precision medicine plan, personalized treatment of tumors is a distinct possibility, and we believe that multimodality imaging fusion technology will find increasingly wide application in clinical practice.
Multimodality Image Fusion-Guided Procedures: Technique, Accuracy, and Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abi-Jaoudeh, Nadine, E-mail: naj@mail.nih.gov; Kruecker, Jochen, E-mail: jochen.kruecker@philips.com; Kadoury, Samuel, E-mail: samuel.kadoury@polymtl.ca
2012-10-15
Personalized therapies play an increasingly critical role in cancer care: Image guidance with multimodality image fusion facilitates the targeting of specific tissue for tissue characterization and plays a role in drug discovery and optimization of tailored therapies. Positron-emission tomography (PET), magnetic resonance imaging (MRI), and contrast-enhanced computed tomography (CT) may offer additional information not otherwise available to the operator during minimally invasive image-guided procedures, such as biopsy and ablation. With use of multimodality image fusion for image-guided interventions, navigation with advanced modalities does not require the physical presence of the PET, MRI, or CT imaging system. Several commercially available methods of image fusion and device navigation are reviewed along with an explanation of common tracking hardware and software. An overview of current clinical applications for multimodality navigation is provided.
Calhoun, Vince D; Sui, Jing
2016-01-01
It is becoming increasingly clear that combining multi-modal brain imaging data is able to provide more information for individual subjects by exploiting the rich multimodal information that exists. However, the number of studies that do true multimodal fusion (i.e. capitalizing on joint information among modalities) is still remarkably small given the known benefits. In part, this is because multi-modal studies require broader expertise in collecting, analyzing, and interpreting the results than do unimodal studies. In this paper, we start by introducing the basic reasons why multimodal data fusion is important and what it can do, and importantly how it can help us avoid wrong conclusions and help compensate for imperfect brain imaging studies. We also discuss the challenges that need to be confronted for such approaches to be more widely applied by the community. We then provide a review of the diverse studies that have used multimodal data fusion (primarily focused on psychosis) as well as provide an introduction to some of the existing analytic approaches. Finally, we discuss some up-and-coming approaches to multi-modal fusion including deep learning and multimodal classification which show considerable promise. Our conclusion is that multimodal data fusion is rapidly growing, but it is still underutilized. The complexity of the human brain coupled with the incomplete measurement provided by existing imaging technology makes multimodal fusion essential in order to mitigate against misdirection and hopefully provide a key to finding the missing link(s) in complex mental illness. PMID:27347565
Log-Gabor Energy Based Multimodal Medical Image Fusion in NSCT Domain
Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan
2014-01-01
Multimodal medical image fusion is a powerful tool in clinical applications such as noninvasive diagnosis, image-guided radiotherapy, and treatment planning. In this paper, a novel nonsubsampled contourlet transform (NSCT) based method for multimodal medical image fusion is presented, which is approximately shift invariant and can effectively suppress pseudo-Gibbs phenomena. The source medical images are initially transformed by the NSCT, and the low- and high-frequency components are then fused. Phase congruency, which provides a contrast- and brightness-invariant representation, is applied to fuse the low-frequency coefficients, whereas the Log-Gabor energy, which can efficiently identify coefficients belonging to the clear and detailed parts of the image, is employed to fuse the high-frequency coefficients. The proposed fusion method has been compared with image fusion methods based on the discrete wavelet transform (DWT), the fast discrete curvelet transform (FDCT), and the dual-tree complex wavelet transform (DTCWT), as well as with other NSCT-based methods. Visual and quantitative experimental results indicate that the proposed method obtains more effective and accurate fusion results for multimodal medical images than the other algorithms. Further, the applicability of the proposed method has been demonstrated in a clinical example involving images of a woman with a recurrent tumor. PMID:25214889
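NSCT implementations are not part of the standard Python scientific stack, so the sketch below illustrates only the Log-Gabor energy measure used above for high-frequency fusion, built from its usual frequency-domain definition; the parameter values and function names are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def log_gabor_energy(img, f0=0.1, sigma_ratio=0.55):
    """Local energy of a radial log-Gabor filter response.

    The filter is defined in the frequency domain as
    G(f) = exp(-log(f/f0)^2 / (2*log(sigma_ratio)^2)),
    where f0 is the centre frequency in cycles per pixel.
    """
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                      # avoid log(0) at the DC bin
    gabor = np.exp(-np.log(radius / f0) ** 2 / (2 * np.log(sigma_ratio) ** 2))
    gabor[0, 0] = 0.0                       # suppress the DC response
    response = np.fft.ifft2(np.fft.fft2(img) * gabor)
    return np.abs(response) ** 2

def fuse_highpass(coeff_a, coeff_b):
    """Per-pixel max-energy selection: keep the high-frequency coefficient
    whose source has the larger log-Gabor energy."""
    mask = log_gabor_energy(coeff_a) >= log_gabor_energy(coeff_b)
    return np.where(mask, coeff_a, coeff_b)
```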
Enhanced EDX images by fusion of multimodal SEM images using pansharpening techniques.
Franchi, G; Angulo, J; Moreaud, M; Sorbier, L
2018-01-01
The goal of this paper is to explore the potential interest of image fusion in the context of multimodal scanning electron microscope (SEM) imaging. In particular, we aim at merging backscattered electron images, which usually have high spatial resolution but do not provide enough discriminative information to physically classify the nature of the sample, with energy-dispersive X-ray spectroscopy (EDX) images, which carry discriminative information but have lower spatial resolution. The produced images are named enhanced EDX. To achieve this goal, we have compared the results obtained with classical pansharpening techniques for image fusion against an original approach tailored for multimodal SEM fusion of information. Quantitative assessment is obtained by means of two SEM images and a simulated dataset produced by software based on PENELOPE.
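As a concrete illustration of the classical baseline such work compares against, here is a minimal Brovey-style intensity-substitution pansharpening sketch; the array shapes and the bilinear upsampling choice are assumptions, and the authors' tailored method is not reproduced.

```python
import numpy as np
from scipy.ndimage import zoom

def brovey_pansharpen(edx, bse):
    """Brovey-style pansharpening: low-resolution EDX element maps with
    shape (n_elements, h, w) are upsampled to the BSE grid (H, W) and
    rescaled so that their sum matches the high-resolution BSE intensity."""
    n, h, w = edx.shape
    H, W = bse.shape
    up = np.stack([zoom(ch, (H / h, W / w), order=1) for ch in edx])
    intensity = up.sum(axis=0) + 1e-8       # avoid division by zero
    return up * (bse / intensity)           # broadcast over the element axis
```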
Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei
2016-01-01
Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation. PMID:27649190
Liu, Xingbin; Mei, Wenbo; Du, Huiqian
2018-02-13
In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed using a multi-scale joint decomposition framework (MJDF) and a shearing filter (SF). The MJDF, constructed with a gradient minimization smoothing filter (GMSF) and a Gaussian low-pass filter (GLF), is used to decompose the source images into low-pass layers, edge layers, and detail layers at multiple scales. To highlight the detail information in the fused image, the edge layer and the detail layer at each scale are combined by weighting into a detail-enhanced layer. Because directional filtering is effective in capturing salient information, the SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. A visual saliency map-based fusion rule is designed for fusing the low-pass layers, and the sum of standard deviations is used as the activity-level measurement for fusing the directional coefficients. The final fusion result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method, with its shift invariance, directional selectivity, and detail-enhancing property, is efficient in preserving and enhancing the detail information of multimodality medical images.
Application of Virtual Navigation with Multimodality Image Fusion in Foramen Ovale Cannulation.
Qiu, Xixiong; Liu, Weizong; Zhang, Mingdong; Lin, Hengzhou; Zhou, Shoujun; Lei, Yi; Xia, Jun
2017-11-01
Idiopathic trigeminal neuralgia (ITN) can be effectively treated with radiofrequency thermocoagulation. However, this procedure requires cannulation of the foramen ovale, and conventional cannulation methods are associated with high failure rates. Multimodality imaging can improve the accuracy of cannulation because each imaging method can compensate for the drawbacks of the other. We aim to determine the feasibility and accuracy of percutaneous foramen ovale cannulation under the guidance of virtual navigation with multimodality image fusion in a self-designed anatomical model of human cadaveric heads. Five cadaveric head specimens were investigated in this study. Spiral computed tomography (CT) scanning clearly displayed the foramen ovale in all five specimens (10 foramina), which could not be visualized using two-dimensional ultrasound alone. The ultrasound and spiral CT images were fused, and percutaneous cannulation of the foramen ovale was performed under virtual navigation. After this, spiral CT scanning was immediately repeated to confirm the accuracy of the cannulation. Postprocedural spiral CT confirmed that the ultrasound and CT images had been successfully fused for all 10 foramina, which were accurately and successfully cannulated. The success rates of both image fusion and cannulation were 100%. Virtual navigation with multimodality image fusion can substantially facilitate foramen ovale cannulation and is worthy of clinical application.
Introduction to clinical and laboratory (small-animal) image registration and fusion.
Zanzonico, Pat B; Nehmeh, Sadek A
2006-01-01
Imaging has long been a vital component of clinical medicine and, increasingly, of biomedical research in small animals. Clinical and laboratory imaging modalities can be divided into two general categories, structural (or anatomical) and functional (or physiological). The latter, in particular, has spawned what has come to be known as "molecular imaging". Image registration and fusion have rapidly emerged as invaluable components of both clinical and small-animal imaging and have led to the development and marketing of a variety of multi-modality devices, e.g. PET-CT, which provide registered and fused three-dimensional image sets. This paper briefly reviews the basics of image registration and fusion and the available clinical and small-animal multi-modality instrumentation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gambhir, Sanjiv; Pritha, Ray
2015-07-14
Novel double and triple fusion reporter gene constructs harboring distinct imagable reporter genes are provided, as well as applications for the use of such double and triple fusion constructs in living cells and in living animals using distinct imaging technologies.
Qi, Shile; Calhoun, Vince D.; van Erp, Theo G. M.; Bustillo, Juan; Damaraju, Eswar; Turner, Jessica A.; Du, Yuhui; Chen, Jiayu; Yu, Qingbao; Mathalon, Daniel H.; Ford, Judith M.; Voyvodic, James; Mueller, Bryon A.; Belger, Aysenil; McEwen, Sarah; Potkin, Steven G.; Preda, Adrian; Jiang, Tianzi
2017-01-01
Multimodal fusion is an effective approach to take advantage of cross-information among multiple imaging data to better understand brain diseases. However, most current fusion approaches are blind, without adopting any prior information. To date, there is increasing interest in uncovering the neurocognitive mapping of specific behavioral measurements on enriched brain imaging data; hence, a supervised, goal-directed model that uses a priori information as a reference to guide multimodal data fusion is needed and a natural option. Here we proposed a fusion-with-reference model, called “multi-site canonical correlation analysis with reference plus joint independent component analysis” (MCCAR+jICA), which can precisely identify co-varying multimodal imaging patterns closely related to reference information, such as cognitive scores. In a 3-way fusion simulation, the proposed method was compared with its alternatives on estimation accuracy of both target component decomposition and modality linkage detection; MCCAR+jICA outperformed the others with higher precision. In human imaging data, working memory performance was utilized as a reference to investigate the covarying functional and structural brain patterns among 3 modalities and how they are impaired in schizophrenia. Two independent cohorts (294 and 83 subjects, respectively) were used. Interestingly, similar brain maps were identified between the two cohorts, with substantial overlap in the executive control networks in fMRI, the salience network in sMRI, and major white matter tracts in dMRI. These regions have been linked with working memory deficits in schizophrenia in multiple reports, and MCCAR+jICA further verified them in a repeatable, joint manner, demonstrating the potential of such results to identify neuromarkers for mental disorders. PMID:28708547
Hamm, Klaus D; Surber, Gunnar; Schmücking, Michael; Wurm, Reinhard E; Aschenbach, Rene; Kleinert, Gabriele; Niesen, A; Baum, Richard P
2004-11-01
Innovative new software solutions may enable image fusion to produce the desired data superposition for precise target definition and follow-up studies in radiosurgery/stereotactic radiotherapy in patients with intracranial lesions. The aim is to integrate the anatomical and functional information completely into the radiation treatment planning and to achieve an exact comparison for follow-up examinations. Special conditions and advantages of BrainLAB's fully automatic image fusion system are evaluated and described for this purpose. In 458 patients, the radiation treatment planning and some follow-up studies were performed using an automatic image fusion technique involving the use of different imaging modalities. Each fusion was visually checked and corrected as necessary. The computerized tomography (CT) scans for radiation treatment planning (slice thickness 1.25 mm), as well as stereotactic angiography for arteriovenous malformations, were acquired using head fixation with stereotactic arc or, in the case of stereotactic radiotherapy, with a relocatable stereotactic mask. Different magnetic resonance (MR) imaging sequences (T1, T2, and fluid-attenuated inversion-recovery images) and positron emission tomography (PET) scans were obtained without head fixation. Fusion results and the effects on radiation treatment planning and follow-up studies were analyzed. The precision level of the results of the automatic fusion depended primarily on the image quality, especially the slice thickness and the field homogeneity when using MR images, as well as on patient movement during data acquisition. Fully automated image fusion of different MR, CT, and PET studies was performed for each patient. Only in a few cases was it necessary to correct the fusion manually after visual evaluation. These corrections were minor and did not materially affect treatment planning. High-quality fusion of thin slices of a region of interest with a complete head data set could be performed easily. The target volume for radiation treatment planning could be accurately delineated using multimodal information provided by CT, MR, angiography, and PET studies. The fusion of follow-up image data sets yielded results that could be successfully compared and quantitatively evaluated. Depending on the quality of the originally acquired image, automated image fusion can be a very valuable tool, allowing for fast (approximately 1-2 minutes) and precise fusion of all relevant data sets. Fused multimodality imaging improves the target volume definition for radiation treatment planning. High-quality follow-up image data sets should be acquired for image fusion to provide exactly comparable slices and volumetric results that will contribute to quality control.
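For readers who want to experiment with the kind of automatic multimodality fusion evaluated here, the sketch below runs a rigid mutual-information registration with the open-source SimpleITK library; it is a generic stand-in, not the BrainLAB system, and all parameter values are illustrative.

```python
import SimpleITK as sitk

def register_mr_to_ct(ct_path, mr_path):
    """Rigid mutual-information registration of an MR series to a planning
    CT, followed by resampling the MR onto the CT grid for fused display."""
    fixed = sitk.ReadImage(ct_path, sitk.sitkFloat32)
    moving = sitk.ReadImage(mr_path, sitk.sitkFloat32)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.1)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetOptimizerScalesFromPhysicalShift()

    # Initialise with a centre-of-geometry alignment and a rigid 3D transform.
    init = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(init, inPlace=False)

    transform = reg.Execute(fixed, moving)
    return sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```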
Ray, Pritha
2011-04-01
Development and marketing of new drugs require stringent validation that are expensive and time consuming. Non-invasive multimodality molecular imaging using reporter genes holds great potential to expedite these processes at reduced cost. New generations of smarter molecular imaging strategies such as Split reporter, Bioluminescence resonance energy transfer, Multimodality fusion reporter technologies will further assist to streamline and shorten the drug discovery and developmental process. This review illustrates the importance and potential of molecular imaging using multimodality reporter genes in drug development at preclinical phases.
Drug-related webpages classification based on multi-modal local decision fusion
NASA Astrophysics Data System (ADS)
Hu, Ruiguang; Su, Xiaojing; Liu, Yanxin
2018-03-01
In this paper, multi-modal local decision fusion is used for drug-related webpage classification. First, meaningful text is extracted through HTML parsing, and effective images are chosen by the FOCARSS algorithm. Second, six SVM classifiers are trained for six kinds of drug-taking instruments, represented by PHOG features, and one SVM classifier is trained for cannabis, represented by the mid-level features of a BOW model. For each instance in a webpage, the seven SVMs give seven labels for its image, and another seven labels are obtained by searching for the names of the drug-taking instruments and cannabis in the related text. Concatenating the seven image labels with the seven text labels yields the representation of the instances in a webpage. Last, multi-instance learning is used to classify the drug-related webpages. Experimental results demonstrate that the classification accuracy of multi-instance learning with multi-modal local decision fusion is much higher than that of single-modal classification.
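A minimal sketch of the decision-level fusion step, assuming the per-modality features and the seven per-class SVMs already exist (all names and shapes here are hypothetical); the final multi-instance learning stage is replaced by a plain SVM over the fused label vectors.

```python
import numpy as np
from sklearn.svm import SVC

def local_decision_labels(img_feats, txt_feats, img_clfs, txt_clfs):
    """Late (decision-level) fusion: each per-class SVM votes on its own
    modality, and the binary votes are concatenated into one descriptor."""
    img_votes = np.column_stack([c.predict(img_feats) for c in img_clfs])
    txt_votes = np.column_stack([c.predict(txt_feats) for c in txt_clfs])
    return np.hstack([img_votes, txt_votes])   # (n_instances, 14) here

def train_fusion_classifier(fused_votes, page_labels):
    """Stand-in for the paper's multi-instance learning stage: a linear
    SVM trained on the fused vote vectors."""
    clf = SVC(kernel="linear")
    clf.fit(fused_votes, page_labels)
    return clf
```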
Multimodal Medical Image Fusion by Adaptive Manifold Filter.
Geng, Peng; Liu, Shuaiqi; Zhuang, Shanna
2015-01-01
Medical image fusion plays an important role in the diagnosis and treatment of diseases, for example in image-guided radiotherapy and surgery. Modified local contrast information is proposed to fuse multimodal medical images. Firstly, the adaptive manifold filter is introduced to filter the source images, giving the low-frequency part of the modified local contrast. Secondly, the modified spatial frequency of the source images is adopted as the high-frequency part of the modified local contrast. Finally, the pixel with the larger modified local contrast is selected into the fused image. The presented scheme outperforms the guided filter method in the spatial domain, the dual-tree complex wavelet transform-based method, the nonsubsampled contourlet transform-based method, and four classic fusion methods in terms of visual quality. Furthermore, the mutual information values of the presented method are on average 55%, 41%, and 62% higher than those of the three methods, and its edge-based similarity measure values are on average 13%, 33%, and 14% higher than those of the three methods, for the six pairs of source images.
Moche, M; Busse, H; Dannenberg, C; Schulz, T; Schmitgen, A; Trantakis, C; Winkler, D; Schmidt, F; Kahn, T
2001-11-01
The aim of this work was to realize and clinically evaluate an image fusion platform for the integration of preoperative MRI and fMRI data into the intraoperative images of an interventional MRI system, with a focus on neurosurgical procedures. A vertically open 0.5 T MRI scanner was equipped with a dedicated navigation system enabling the registration of additional imaging modalities (MRI, fMRI, CT) with the intraoperatively acquired data sets. These merged image data served as the basis for interventional planning and multimodal navigation. So far, the system has been used in 70 neurosurgical interventions, 13 of which involved image data fusion requiring 15 minutes of extra time. The augmented navigation system is characterized by a higher frame rate and higher image quality compared with the system-integrated navigation based on continuously acquired (near) real-time images. Patient movement and tissue shifts can be immediately detected by monitoring the morphological differences between the two navigation scenes. Multimodal image fusion allowed refined navigation planning, especially for the resection of deeply seated brain lesions or pathologies close to eloquent areas. Augmented intraoperative orientation and instrument guidance improve the safety and accuracy of neurosurgical interventions.
Distributed multimodal data fusion for large scale wireless sensor networks
NASA Astrophysics Data System (ADS)
Ertin, Emre
2006-05-01
Sensor network technology has enabled new surveillance systems in which sensor nodes equipped with processing and communication capabilities can collaboratively detect, classify, and track targets of interest over a large surveillance area. In this paper we study distributed fusion of multimodal sensor data for extracting target information from a large-scale sensor network. Optimal tracking, classification, and reporting of threat events require joint consideration of multiple sensor modalities, which improve tracking by reducing the uncertainty in the track estimates as well as by resolving track-sensor data association problems. Our approach to solving the fusion problem with a large number of multimodal sensors is the construction of likelihood maps. The likelihood maps provide summary data for the solution of the detection, tracking, and classification problem, present the sensory information in a format that is easy for decision makers to interpret, and are suitable for fusion with spatial prior information such as maps and data from stand-off imaging sensors. We follow a statistical approach to combine sensor data at different levels of uncertainty and resolution: each sensor data stream is transformed into a spatio-temporal likelihood map ideally suited for fusion with imaging sensor outputs and prior geographic information about the scene. We also discuss distributed computation of the likelihood map using a gossip-based algorithm and present simulation results.
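A toy sketch of the fusion step on a common grid, assuming each node has already converted its observations into a spatial log-likelihood map and that sensor observations are conditionally independent; the gossip-based distributed computation is not shown.

```python
import numpy as np

def fuse_likelihood_maps(sensor_loglik):
    """Fuse per-sensor spatial log-likelihood maps over a shared grid.
    Under conditional independence, the joint log-likelihood of a target
    at each cell is simply the sum of the per-sensor maps."""
    joint = np.sum(sensor_loglik, axis=0)          # (n_sensors, H, W) -> (H, W)
    joint -= joint.max()                           # stabilise the exponent
    posterior = np.exp(joint)
    return posterior / posterior.sum()             # normalised target map

# Example: three sensors observing a 100 x 100 surveillance grid.
rng = np.random.default_rng(0)
maps = rng.normal(size=(3, 100, 100))
target_map = fuse_likelihood_maps(maps)
print(np.unravel_index(target_map.argmax(), target_map.shape))
```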
NASA Astrophysics Data System (ADS)
Zhang, Cheng; Mei, Wenbo; Du, Huiqian; Wang, Zexian
2018-04-01
A new algorithm is proposed in this paper for medical image fusion, combining a gradient minimization smoothing filter (GMSF) with a nonsubsampled directional filter bank (NSDFB). To preserve more detail information, a multi-scale edge-preserving decomposition framework (MEDF) is used to decompose an image into a base image and a series of detail images. For the fusion of the base images, a local Gaussian membership function is applied to construct the fusion weighting factor. For the fusion of the detail images, the NSDFB is applied to decompose each detail image into multiple directional sub-images, which are fused by a pulse-coupled neural network (PCNN). The experimental results demonstrate that the proposed algorithm is superior to the compared algorithms in both visual effect and objective assessment.
Deep features for efficient multi-biometric recognition with face and ear images
NASA Astrophysics Data System (ADS)
Omara, Ibrahim; Xiao, Gang; Amrani, Moussa; Yan, Zifei; Zuo, Wangmeng
2017-07-01
Recently, multimodal biometric systems have received considerable research interest in many applications, especially in the field of security. Multimodal systems can increase resistance to spoof attacks, provide more detail and flexibility, and lead to better performance and lower error rates. In this paper, we present a multimodal biometric system based on face and ear, and propose how to exploit deep features extracted by convolutional neural networks (CNNs) from face and ear images to provide more powerful discriminative features and robust representation ability. First, the deep features for the face and ear images are extracted with VGG-M Net. Second, the extracted deep features are fused using traditional concatenation and a discriminant correlation analysis (DCA) algorithm. Third, a multiclass support vector machine is adopted for matching and classification. The experimental results show that the proposed multimodal system based on deep features is efficient and achieves a promising recognition rate of up to 100% using face and ear. In addition, the results indicate that fusion based on DCA is superior to traditional fusion.
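A minimal sketch of the concatenation baseline, assuming the CNN descriptors are already extracted; the feature extraction itself and the DCA variant are omitted, and all names are illustrative.

```python
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.svm import SVC

def fuse_and_classify(face_feats, ear_feats, labels):
    """Serial (concatenation) fusion of precomputed per-subject CNN
    descriptors for face and ear, followed by a linear multiclass SVM."""
    fused = np.hstack([normalize(face_feats), normalize(ear_feats)])
    clf = SVC(kernel="linear", decision_function_shape="ovr")
    clf.fit(fused, labels)
    return clf
```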
Multimodal medical information retrieval with unsupervised rank fusion.
Mourão, André; Martins, Flávio; Magalhães, João
2015-01-01
Modern medical information retrieval systems are paramount for managing the insurmountable quantities of clinical data. These systems empower health care experts in the diagnosis of patients and play an important role in the clinical decision process. However, the ever-growing heterogeneous information generated in medical environments poses several challenges for retrieval systems. We propose a medical information retrieval system with support for multimodal medical case-based retrieval. The system supports medical information discovery by providing multimodal search, through a novel data fusion algorithm, and term suggestions from a medical thesaurus. Our search system compared favorably to other systems in the 2013 ImageCLEFMedical evaluation.
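To make the idea of unsupervised rank fusion concrete, here is a reciprocal rank fusion (RRF) sketch; RRF is a well-known generic technique used as a stand-in here, not the paper's own fusion algorithm, and the document ids are made up.

```python
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k=60):
    """Unsupervised late fusion of ranked retrieval results. Each input
    is a list of document ids ordered by one modality (text, image, ...);
    each document gains 1/(k + rank) from every list it appears in."""
    scores = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Example: fuse a text run and a visual run for one medical case query.
print(reciprocal_rank_fusion([["d3", "d1", "d7"], ["d1", "d7", "d2"]]))
```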
Multimodal Deep Autoencoder for Human Pose Recovery.
Hong, Chaoqun; Yu, Jun; Wan, Jian; Tao, Dacheng; Wang, Meng
2015-12-01
Video-based human pose recovery is usually conducted by retrieving relevant poses using image features. In the retrieving process, the mapping between 2D images and 3D poses is assumed to be linear in most of the traditional methods. However, their relationships are inherently non-linear, which limits recovery performance of these methods. In this paper, we propose a novel pose recovery method using non-linear mapping with multi-layered deep neural network. It is based on feature extraction with multimodal fusion and back-propagation deep learning. In multimodal fusion, we construct hypergraph Laplacian with low-rank representation. In this way, we obtain a unified feature description by standard eigen-decomposition of the hypergraph Laplacian matrix. In back-propagation deep learning, we learn a non-linear mapping from 2D images to 3D poses with parameter fine-tuning. The experimental results on three data sets show that the recovery error has been reduced by 20%-25%, which demonstrates the effectiveness of the proposed method.
Multiscale Medical Image Fusion in Wavelet Domain
Khare, Ashish
2013-01-01
Wavelet transforms have emerged as a powerful tool in image fusion. However, the study and analysis of medical image fusion is still a challenging area of research. Therefore, in this paper, we propose a multiscale fusion of multimodal medical images in the wavelet domain. Fusion of the medical images is performed at multiple scales, varying from the minimum to the maximum decomposition level, using the maximum selection rule, which provides more flexibility in choosing the relevant fused images. The experimental analysis of the proposed method has been performed on several sets of medical images. The fusion results have been evaluated subjectively and objectively against existing state-of-the-art fusion methods, including several pyramid- and wavelet-transform-based fusion methods and the principal component analysis (PCA) fusion method. The comparative analysis of the fusion results was performed with the edge strength (Q), mutual information (MI), entropy (E), standard deviation (SD), blind structural similarity index metric (BSSIM), spatial frequency (SF), and average gradient (AG) metrics. The combined subjective and objective evaluations of the proposed fusion method at multiple scales showed the effectiveness of the proposed approach. PMID:24453868
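A compact sketch of the core idea, multiscale wavelet decomposition with a maximum-selection rule, using PyWavelets as a stand-in for the authors' implementation; the wavelet name and level are assumptions, and the inputs must be single-channel images of equal size.

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", level=3):
    """Multiscale fusion with the maximum-selection rule: at every scale
    and orientation, keep the coefficient with the larger magnitude."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    # Approximation band, then (cH, cV, cD) detail tuples per level.
    fused = [np.where(np.abs(ca[0]) >= np.abs(cb[0]), ca[0], cb[0])]
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(
            np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```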
Multi-modality image fusion based on enhanced fuzzy radial basis function neural networks.
Chao, Zhen; Kim, Dohyeon; Kim, Hee-Joung
2018-04-01
In clinical applications, single-modality images do not provide sufficient diagnostic information. Therefore, it is necessary to combine the advantages or complementarities of different image modalities. Recently, neural network techniques have been applied to medical image fusion by many researchers, but many deficiencies remain. In this study, we propose a novel fusion method to combine multi-modality medical images based on an enhanced fuzzy radial basis function neural network (Fuzzy-RBFNN), which includes five layers: input, fuzzy partition, front combination, inference, and output. Moreover, we propose a hybrid of the gravitational search algorithm (GSA) and the error back-propagation algorithm (EBPA), termed EBPGSA, to train the network and update its parameters. Two different patterns of images are used as inputs to the neural network, and the output is the fused image. A comparison with conventional fusion methods and another neural network method, through subjective observation and objective evaluation indexes, reveals that the proposed method effectively synthesizes the information of the input images and achieves better results. Meanwhile, we also trained the network using the EBPA and the GSA individually. The results reveal that EBPGSA not only outperformed both EBPA and GSA but also trained the neural network more accurately under the same evaluation indexes.
Face-iris multimodal biometric scheme based on feature level fusion
NASA Astrophysics Data System (ADS)
Huo, Guang; Liu, Yuanning; Zhu, Xiaodong; Dong, Hongxing; He, Fei
2015-11-01
Unlike score-level fusion, feature-level fusion demands that the features extracted from unimodal traits have high distinguishability, as well as homogeneity and compatibility, which is difficult to achieve. Therefore, most multimodal biometric research focuses on score-level fusion, whereas few studies investigate feature-level fusion. We propose a face-iris recognition method based on feature-level fusion. We build a special two-dimensional Gabor filter bank to extract local texture features from face and iris images, and then transform them by histogram statistics into an energy-orientation variance histogram feature with lower dimensionality and higher distinguishability. Finally, through a fusion-recognition strategy based on principal component analysis and support vector machines (FRSPS), feature-level fusion and one-to-n identification are accomplished. The experimental results demonstrate that this method can not only effectively extract face and iris features but also provide higher recognition accuracy. Compared with some state-of-the-art fusion methods, the proposed method has a significant performance advantage.
Adaptive multiple super fast simulated annealing for stochastic microstructure reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryu, Seun; Lin, Guang; Sun, Xin
2013-01-01
Fast image reconstruction from statistical information is critical for image fusion from multimodality chemical imaging instrumentation, where the goal is to create a high-resolution image over a large domain. Stochastic methods have been used widely for image reconstruction from the two-point correlation function, and the main challenge is to increase the efficiency of the reconstruction. A novel simulated annealing method is proposed for fast image reconstruction. Combining the advantages of very fast cooling schedules, dynamic adaptation, and parallelization, the new simulated annealing algorithm increases efficiency by several orders of magnitude, making large-domain image fusion feasible.
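For orientation, here is a single-chain toy version of annealing reconstruction from a row-wise two-point correlation, in the Yeong-Torquato style that such methods build on; the paper's very fast cooling, adaptation, and parallelization are not reproduced, and all parameters are illustrative.

```python
import numpy as np

def s2_rows(img):
    """Row-direction two-point probability S2(r) via FFT autocorrelation."""
    f = np.fft.fft(img, axis=1)
    ac = np.real(np.fft.ifft(f * np.conj(f), axis=1))
    return ac.mean(axis=0) / img.shape[1]

def anneal_reconstruct(target_s2, shape, steps=20000, t0=1e-4, cooling=0.9995, seed=0):
    """Swap one black and one white pixel per step (preserving the volume
    fraction), accept with the Metropolis rule, and cool geometrically."""
    rng = np.random.default_rng(seed)
    img = (rng.random(shape) < target_s2[0]).astype(float)  # S2(0) = volume fraction
    err = np.sum((s2_rows(img) - target_s2) ** 2)
    temp = t0
    for _ in range(steps):
        blacks, whites = np.argwhere(img == 1.0), np.argwhere(img == 0.0)
        b = tuple(blacks[rng.integers(len(blacks))])
        w = tuple(whites[rng.integers(len(whites))])
        img[b], img[w] = 0.0, 1.0                      # trial phase swap
        new_err = np.sum((s2_rows(img) - target_s2) ** 2)
        if new_err <= err or rng.random() < np.exp((err - new_err) / temp):
            err = new_err                              # accept the swap
        else:
            img[b], img[w] = 1.0, 0.0                  # revert
        temp *= cooling
    return img

# Example: recover a structure matching the statistics of a random reference.
ref = (np.random.default_rng(1).random((64, 64)) < 0.3).astype(float)
recon = anneal_reconstruct(s2_rows(ref), ref.shape, steps=2000)
```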
Rational Design of a Triple Reporter Gene for Multimodality Molecular Imaging
Hsieh, Ya-Ju; Ke, Chien-Chih; Yeh, Skye Hsin-Hsien; Lin, Chien-Feng; Chen, Fu-Du; Lin, Kang-Ping; Chen, Ran-Chou; Liu, Ren-Shyan
2014-01-01
Multimodality imaging using noncytotoxic triple fusion (TF) reporter genes is an important application for cell-based tracking, drug screening, and therapy. The firefly luciferase (fl), monomeric red fluorescence protein (mrfp), and truncated herpes simplex virus type 1 thymidine kinase SR39 mutant (ttksr39) were fused together to create TF reporter gene constructs with different order. The enzymatic activities of TF protein in vitro and in vivo were determined by luciferase reporter assay, H-FEAU cellular uptake experiment, bioluminescence imaging, and micropositron emission tomography (microPET). The TF construct expressed in H1299 cells possesses luciferase activity and red fluorescence. The tTKSR39 activity is preserved in TF protein and mediates high levels of H-FEAU accumulation and significant cell death from ganciclovir (GCV) prodrug activation. In living animals, the luciferase and tTKSR39 activities of TF protein have also been successfully validated by multimodality imaging systems. The red fluorescence signal is relatively weak for in vivo imaging but may expedite FACS-based selection of TF reporter expressing cells. We have developed an optimized triple fusion reporter construct DsRedm-fl-ttksr39 for more effective and sensitive in vivo animal imaging using fluorescence, bioluminescence, and PET imaging modalities, which may facilitate different fields of biomedical research and applications. PMID:24809057
Volume curtaining: a focus+context effect for multimodal volume visualization
NASA Astrophysics Data System (ADS)
Fairfield, Adam J.; Plasencia, Jonathan; Jang, Yun; Theodore, Nicholas; Crawford, Neil R.; Frakes, David H.; Maciejewski, Ross
2014-03-01
In surgical preparation, physicians will often utilize multimodal imaging scans to capture complementary information to improve diagnosis and to drive patient-specific treatment. These imaging scans may consist of data from magnetic resonance imaging (MR), computed tomography (CT), or other various sources. The challenge in using these different modalities is that the physician must mentally map the two modalities together during the diagnosis and planning phase. Furthermore, the different imaging modalities will be generated at various resolutions as well as slightly different orientations due to patient placement during scans. In this work, we present an interactive system for multimodal data fusion, analysis and visualization. Developed with partners from neurological clinics, this work discusses initial system requirements and physician feedback at the various stages of component development. Finally, we present a novel focus+context technique for the interactive exploration of coregistered multi-modal data.
Image Fusion During Vascular and Nonvascular Image-Guided Procedures
Abi-Jaoudeh, Nadine; Kobeiter, Hicham; Xu, Sheng; Wood, Bradford J.
2013-01-01
Image fusion may be useful in any procedure where previous imaging, such as positron emission tomography (PET), magnetic resonance imaging, or contrast-enhanced computed tomography (CT), provides information that is referenced to the procedural imaging, to the needle or catheter, or to an ultrasound transducer. Fusion of prior and intraoperative imaging provides real-time feedback on tumor location or margin, metabolic activity, device location, or vessel location. Multimodality image fusion in interventional radiology was initially introduced for biopsies and ablations, especially for lesions seen only on arterial-phase CT, magnetic resonance imaging, or PET/CT, but has more recently been applied to other vascular and nonvascular procedures. Two different types of platforms are commonly used for image fusion and navigation: (1) electromagnetic tracking and (2) cone-beam CT. Both technologies are reviewed, along with their strengths and weaknesses, indications, when to use one versus the other, tips and guidance to streamline use, and early evidence defining the clinical benefits of these rapidly evolving, commercially available and emerging techniques. PMID:23993079
A Review of Multivariate Methods for Multimodal Fusion of Brain Imaging Data
Adali, Tülay; Yu, Qingbao; Calhoun, Vince D.
2011-01-01
The development of various neuroimaging techniques is rapidly improving the measurements of brain function/structure. However, despite improvements in individual modalities, it is becoming increasingly clear that the most effective research approaches will utilize multi-modal fusion, which takes advantage of the fact that each modality provides a limited view of the brain. The goal of multimodal fusion is to capitalize on the strength of each modality in a joint analysis, rather than a separate analysis of each. This is a more complicated endeavor that must be approached more carefully and efficient methods should be developed to draw generalized and valid conclusions from high dimensional data with a limited number of subjects. Numerous research efforts have been reported in the field based on various statistical approaches, e.g. independent component analysis (ICA), canonical correlation analysis (CCA) and partial least squares (PLS). In this review paper, we survey a number of multivariate methods appearing in previous reports, which are performed with or without prior information and may have utility for identifying potential brain illness biomarkers. We also discuss the possible strengths and limitations of each method, and review their applications to brain imaging data. PMID:22108139
Framework for 2D-3D image fusion of infrared thermography with preoperative MRI.
Hoffmann, Nico; Weidner, Florian; Urban, Peter; Meyer, Tobias; Schnabel, Christian; Radev, Yordan; Schackert, Gabriele; Petersohn, Uwe; Koch, Edmund; Gumhold, Stefan; Steiner, Gerald; Kirsch, Matthias
2017-11-27
Multimodal medical image fusion combines information from one or more images in order to improve diagnostic value. While previous applications mainly focus on merging images from computed tomography, magnetic resonance imaging (MRI), ultrasound, and single-photon emission computed tomography, we propose a novel approach for the registration and fusion of preoperative 3D MRI with intraoperative 2D infrared thermography. Image-guided neurosurgeries are based on neuronavigation systems, which further allow us to track the position and orientation of arbitrary cameras. Hereby, we are able to relate the 2D coordinate system of the infrared camera to the 3D MRI coordinate system. The registered image data are then combined by calibration-based image fusion in order to map our intraoperative 2D thermographic images onto the respective brain surface recovered from preoperative MRI. In extensive accuracy measurements, we found that the proposed framework achieves a mean accuracy of 2.46 mm.
Feature-based fusion of medical imaging data.
Calhoun, Vince D; Adali, Tülay
2009-09-01
The acquisition of multiple brain imaging types for a given study is a very common practice. There have been a number of approaches proposed for combining or fusing multitask or multimodal information. These can be roughly divided into those that attempt to study convergence of multimodal imaging, for example, how function and structure are related in the same region of the brain, and those that attempt to study the complementary nature of modalities, for example, utilizing temporal EEG information and spatial functional magnetic resonance imaging information. Within each of these categories, one can attempt data integration (the use of one imaging modality to improve the results of another) or true data fusion (in which multiple modalities are utilized to inform one another). We review both approaches and present a recent computational approach that first preprocesses the data to compute features of interest. The features are then analyzed in a multivariate manner using independent component analysis. We describe the approach in detail and provide examples of how it has been used for different fusion tasks. We also propose a method for selecting which combination of modalities provides the greatest value in discriminating groups. Finally, we summarize and describe future research topics.
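A minimal sketch of the feature-level joint ICA idea described above, using scikit-learn's FastICA as a stand-in for dedicated fusion toolboxes; the input matrices are assumed to be per-subject feature vectors (subjects x features) from two modalities, and the component count is illustrative.

```python
import numpy as np
from sklearn.decomposition import FastICA

def joint_ica(fmri_feats, smri_feats, n_components=8):
    """Concatenate the two modalities along the feature axis and run ICA
    so that independence is enforced across the joint feature dimension.
    Returns per-modality component maps plus one shared subject-loading
    matrix, so each component links an fMRI part and an sMRI part."""
    joint = np.hstack([fmri_feats, smri_feats])       # (subjects, f1 + f2)
    ica = FastICA(n_components=n_components, random_state=0)
    maps = ica.fit_transform(joint.T)                 # (f1 + f2, components)
    loadings = ica.mixing_                            # (subjects, components)
    split = fmri_feats.shape[1]
    return maps[:split].T, maps[split:].T, loadings
```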
Maza, Sofiane; Taupitz, Mathias; Taymoorian, Kasra; Winzer, Klaus Jürgen; Rückert, Jens; Paschen, Christian; Räber, Gert; Schneider, Sylke; Trefzer, Uwe; Munz, Dieter L
2007-03-01
There are situations where exact identification and localisation of sentinel lymph nodes (SLNs) are very difficult using lymphoscintigraphy, a hand-held gamma probe and vital dye, either a priori or a posteriori. We developed a new method using a simultaneous injection of two lymphotropic agents for exact topographical tomographic localisation and biopsy of draining SLNs. The purpose of this prospective pilot study was to investigate the feasibility and efficacy of this combined method. Fourteen patients with different tumour entities were enrolled. A mixture of (99m)Tc-nanocolloid and a dissolved superparamagnetic iron oxide was injected interstitially. Dynamic and sequential static lymphoscintigraphy and SPECT served as pathfinders. MR imaging was performed 2 h after injection. SPECT, contrast MRI and, if necessary, CT scan data sets were fused and evaluated with special regard to the topographical location of the SLNs. The day after injection, nine patients underwent SLN biopsy and, in the presence of SLN metastasis, an elective lymph node dissection. Twenty-five SLNs were localised in the 14 patients examined. A 100% fusion correlation was achieved in all patients. The anatomical sites of the SLNs detected during surgery showed 100% agreement with those localised on the multimodal fusion images. SLNs could be excised in 11/14 patients, six of whom had nodal metastasis. Our novel approach of multimodal fusion imaging for targeted SLN management in primary tumours with lymphatic drainage to anatomically difficult regions enables SLN biopsy even in patients with drainage to obscure regions. Currently, we are testing its validity in larger patient groups and other tumour entities.
PET-CT image fusion using random forest and à-trous wavelet transform.
Seal, Ayan; Bhattacharjee, Debotosh; Nasipuri, Mita; Rodríguez-Esparragón, Dionisio; Menasalvas, Ernestina; Gonzalo-Martin, Consuelo
2018-03-01
New image fusion rules for multimodal medical images are proposed in this work. The fusion rules are defined by a random forest learning algorithm and a translation-invariant à trous wavelet transform (AWT). The proposed method is threefold. First, the source images are decomposed into approximation and detail coefficients using the AWT. Second, a random forest is used to choose pixels from the approximation and detail coefficients to form the approximation and detail coefficients of the fused image. Lastly, the inverse AWT is applied to reconstruct the fused image. All experiments were performed on 198 slices of computed tomography and positron emission tomography images of a patient. A traditional fusion method based on the Mallat wavelet transform was also implemented on these slices. A new image fusion performance measure, along with four existing measures, is presented, which helps to compare the performance of two pixel-level fusion methods. The experimental results clearly indicate that the proposed method outperforms the traditional method in terms of visual and quantitative quality, and that the new measure is meaningful.
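The shift-invariant decomposition step can be sketched with PyWavelets' stationary wavelet transform, an à trous style decomposition; the paper learns its selection rule with a random forest, so the simple max-absolute rule below is an explicit stand-in, and the image sides must be divisible by 2**level.

```python
import numpy as np
import pywt

def atrous_fuse(img_a, img_b, wavelet="haar", level=2):
    """Shift-invariant fusion: decompose both images with the stationary
    wavelet transform, pick the larger-magnitude coefficient per band and
    pixel, then invert the transform."""
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
    fused = []
    for (a_lo, a_hi), (b_lo, b_hi) in zip(
            pywt.swt2(img_a, wavelet, level=level),
            pywt.swt2(img_b, wavelet, level=level)):
        fused.append((pick(a_lo, b_lo),
                      tuple(pick(a, b) for a, b in zip(a_hi, b_hi))))
    return pywt.iswt2(fused, wavelet)
```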
Image Fusion of CT and MR with Sparse Representation in NSST Domain
Qiu, Chenhui; Wang, Yuanyuan; Zhang, Huan; Xia, Shunren
2017-01-01
Multimodal image fusion techniques can integrate the information from different medical images to obtain an informative image that is more suitable for joint diagnosis, preoperative planning, intraoperative guidance, and interventional treatment. The fusion of CT images with different MR modalities is studied in this paper. Firstly, the CT and MR images are both transformed into the nonsubsampled shearlet transform (NSST) domain, yielding low-frequency and high-frequency components. The high-frequency components are then merged using the absolute-maximum rule, while the low-frequency components are merged by a sparse representation (SR) based approach, for which a dynamic group sparsity recovery (DGSR) algorithm is proposed to improve performance. Finally, the fused image is obtained by performing the inverse NSST on the merged components. The proposed fusion method is tested on a number of clinical CT and MR images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed method provides better fusion results in terms of subjective quality and objective evaluation.
Sethi, A; Rusu, I; Surucu, M; Halama, J
2012-06-01
To evaluate the accuracy of multi-modality image registration in the radiotherapy planning process. A water-filled anthropomorphic head phantom containing eight donut-shaped fiducial markers (3 internal + 5 external) was selected for this study. Seven image sets (3 CTs, 3 MRs, and a PET) of the phantom were acquired and fused in a commercial treatment planning system. First, a narrow-slice (0.75 mm) baseline CT scan was acquired (CT1). Subsequently, the phantom was re-scanned with a coarse slice width of 1.5 mm (CT2) and after subjecting the phantom to rotation/displacement (CT3). Next, the phantom was scanned in a 1.5 Tesla MR scanner, and three MR image sets (axial T1, axial T2, coronal T1) were acquired at 2 mm slice width. Finally, the phantom and the centers of the fiducials were doped with 18F, and a PET scan was performed with 2 mm cubic voxels. All image sets (CT/MR/PET) were fused to the baseline (CT1) data using an automated mutual-information-based fusion algorithm. The difference between the centroids of the fiducial markers in the various image modalities was used to assess image registration accuracy. CT/CT image registration was superior to CT/MR and CT/PET: the average CT/CT fusion error was found to be 0.64 ± 0.14 mm. The corresponding values for CT/MR and CT/PET fusion were 1.33 ± 0.71 mm and 1.11 ± 0.37 mm. Internal markers near the center of the phantom fused better than external markers placed on the phantom surface, particularly for CT/MR and CT/PET. The inferior quality of the external marker fusion indicates possible distortion effects toward the edges of the MR image. Peripheral targets in the PET scan may be subject to parallax error caused by the depth of interaction of photons in the detectors. The current widespread use of multimodality imaging in radiotherapy planning calls for periodic quality assurance of the image registration process. Such studies may help improve safety and accuracy in treatment planning.
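The accuracy metric used here, the distance between corresponding fiducial centroids, is simple to compute; the sketch below shows it with made-up coordinates standing in for the phantom's eight markers.

```python
import numpy as np

def fusion_error(fixed_centroids, fused_centroids):
    """Registration accuracy as the distance between corresponding fiducial
    centroids in the baseline CT and a fused data set. Inputs are
    (n_markers, 3) arrays of coordinates in mm."""
    d = np.linalg.norm(np.asarray(fixed_centroids) - np.asarray(fused_centroids), axis=1)
    return d.mean(), d.std()

# Example with eight markers; the coordinates here are fabricated.
rng = np.random.default_rng(1)
ct1 = rng.uniform(-80, 80, size=(8, 3))
mr = ct1 + rng.normal(scale=0.8, size=(8, 3))   # residual fusion error
print("mean ± SD (mm): %.2f ± %.2f" % fusion_error(ct1, mr))
```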
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Z; Gong, G
2014-06-01
Purpose: To design an external marking body (EMB) that is visible on computed tomography (CT), magnetic resonance (MR), positron emission tomography (PET), and single-photon emission computed tomography (SPECT) images, and to investigate the use of the EMB for the registration and fusion of multiple medical images in the clinic. Methods: We generated a solution containing paramagnetic metal ions and iodide ions (a CT/MR dual-visible solution) that can be viewed on CT and MR images, and a multi-mode image visible solution (MIVS) obtained by adding radioactive nuclear material. A globular plastic theca (diameter: 3–6 mm) filled with the MIVS served as the EMB. The EMBs were fixed on the patient's surface, and CT, MR, PET, and SPECT scans were obtained. The feasibility of clinical application and the display and registration errors of the EMB among different image modalities were investigated. Results: The dual-visible solution was highly dense on CT images (HU>700). A high signal was also found in all MR scanning (T1, T2, STIR, and FLAIR) images, higher than that of subcutaneous fat. The EMB with radioactive nuclear material produced a radionuclide concentration area on PET and SPECT images, with a signal similar to or higher than tumor signals. The theca with MIVS was clearly visible on all the images without artifact, and its shape was round or oval with a sharp edge. The maximum diameter display error was 0.3 ± 0.2 mm on CT and MRI images and 1.0 ± 0.3 mm on PET and SPECT images. In addition, the registration accuracy of the theca center among multi-mode images was less than 1 mm. Conclusion: The application of an EMB with MIVS improves the registration and fusion accuracy of multi-mode medical images. Furthermore, it has the potential to ameliorate disease diagnosis and treatment outcome.
Feature-Motivated Simplified Adaptive PCNN-Based Medical Image Fusion Algorithm in NSST Domain.
Ganasala, Padma; Kumar, Vinod
2016-02-01
Multimodality medical image fusion plays a vital role in diagnosis, treatment planning, and follow-up studies of various diseases. It provides a composite image containing critical information from the source images that is required for better localization and definition of different organs and lesions. In state-of-the-art image fusion methods based on the nonsubsampled shearlet transform (NSST) and pulse-coupled neural networks (PCNN), authors have used the normalized coefficient value to motivate the PCNN processing of both the low-frequency (LF) and high-frequency (HF) sub-bands. This makes the fused image blurred and decreases its contrast. The main objective of this work is to design an image fusion method that gives a fused image with better contrast and more detail information, suitable for clinical use. We propose a novel image fusion method utilizing a feature-motivated adaptive PCNN in the NSST domain for the fusion of anatomical images. The basic PCNN model is simplified, and an adaptive linking strength is used. Different features are used to motivate the PCNN processing of the LF and HF sub-bands. The proposed method is extended to the fusion of a functional image with an anatomical image in the improved nonlinear intensity hue and saturation (INIHS) color model. Extensive fusion experiments have been performed on CT-MRI and SPECT-MRI datasets. Visual and quantitative analysis of the experimental results proved that the proposed method provides satisfactory fusion outcomes compared to other image fusion methods.
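For readers unfamiliar with PCNN-based fusion rules, the sketch below implements a generic simplified PCNN and returns each pixel's first firing time, a feature map commonly used to drive coefficient selection; it is not the authors' adaptive model, and all constants are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_firing_times(stim, n_iter=40, beta=0.2, alpha_theta=0.2, v_theta=20.0):
    """Minimal simplified PCNN: the feeding input is the normalised
    stimulus, linking comes from the 3x3 neighbourhood of the previous
    iteration's pulses, and a neuron fires when its internal activity
    exceeds a decaying threshold. Brighter stimuli fire earlier."""
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    f = stim / (stim.max() + 1e-12)
    y = np.zeros_like(f)
    theta = np.ones_like(f)
    first_fire = np.full(f.shape, np.inf)
    for t in range(1, n_iter + 1):
        link = convolve(y, kernel, mode="constant")
        u = f * (1.0 + beta * link)          # modulated internal activity
        y = (u > theta).astype(float)        # pulse output
        theta = theta * np.exp(-alpha_theta) + v_theta * y  # threshold update
        first_fire = np.where((first_fire == np.inf) & (y == 1), t, first_fire)
    return first_fire
```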
Medical image registration based on normalized multidimensional mutual information
NASA Astrophysics Data System (ADS)
Li, Qi; Ji, Hongbing; Tong, Ming
2009-10-01
Registration of medical images is an essential research topic in medical image processing and applications, and in particular a preliminary and key step for multimodality image fusion. This paper offers a solution to medical image registration based on normalized multi-dimensional mutual information. Firstly, an affine transformation with translational and rotational parameters is applied to the floating image. Then ordinal features are extracted by ordinal filters with different orientations to represent spatial information in the medical images. Integrating the ordinal features with pixel intensities, the normalized multi-dimensional mutual information is defined as the similarity criterion for registering multimodality images. Finally, the immune algorithm is used to search for the registration parameters. The experimental results demonstrate the effectiveness of the proposed registration scheme.
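As a concrete reference for the similarity criterion, the sketch below computes the classical intensity-only normalized mutual information from a joint histogram; the paper's multi-dimensional variant additionally stacks the ordinal features into the histogram dimensions. Function and parameter names are illustrative.

```python
import numpy as np

def normalized_mutual_information(a, b, bins=64):
    """NMI(A, B) = (H(A) + H(B)) / H(A, B), from a joint intensity histogram.

    a, b : images of equal shape (the floating image already resampled into
    the reference frame by the current affine parameter guess).
    """
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()              # joint probability estimate
    px = pxy.sum(axis=1)                   # marginal of A
    py = pxy.sum(axis=0)                   # marginal of B
    eps = 1e-12                            # avoid log(0) on empty bins
    hx = -np.sum(px * np.log(px + eps))
    hy = -np.sum(py * np.log(py + eps))
    hxy = -np.sum(pxy * np.log(pxy + eps))
    return (hx + hy) / hxy                 # maximized over transform parameters
```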
NASA Astrophysics Data System (ADS)
Li, Zhenwei; Sun, Jianyong; Zhang, Jianguo
2012-02-01
As more and more CT/MR studies are acquired with larger data sets, more and more radiologists and clinicians would like to use PACS workstations (WS) to display and manipulate these large image data sets with 3D rendering features. In this paper, we propose a design method and implementation strategy for a 3D image display component that provides not only standard 3D display functions but also multi-modal medical image fusion as well as computer-assisted diagnosis of coronary heart disease. The 3D component has been integrated into the PACS display workstation of Shanghai Huadong Hospital, and clinical practice showed that it is easy for radiologists and physicians to use 3D functions such as multi-modality (e.g., CT, MRI, PET, SPECT) visualization, registration and fusion, and quantitative lesion measurements. The users were satisfied with the rendering speed and the quality of the 3D reconstruction. The advantages of the component include low computer hardware requirements, easy integration, reliable performance, and a comfortable user experience. With this system, radiologists and clinicians can manipulate 3D images easily and use the advanced visualization tools to facilitate their work on a PACS display workstation at any time.
Depth and thermal sensor fusion to enhance 3D thermographic reconstruction.
Cao, Yanpeng; Xu, Baobei; Ye, Zhangyu; Yang, Jiangxin; Cao, Yanlong; Tisse, Christel-Loic; Li, Xin
2018-04-02
Three-dimensional geometrical models with incorporated surface temperature data provide important information for various applications such as medical imaging, energy auditing, and intelligent robots. In this paper we present a robust method for mobile, real-time 3D thermographic reconstruction through depth and thermal sensor fusion. A multimodal imaging device consisting of a thermal camera and an RGB-D sensor is calibrated geometrically and used for data capture. Based on the underlying principle that temperature information remains robust against illumination and viewpoint changes, we present a Thermal-guided Iterative Closest Point (T-ICP) methodology to facilitate reliable 3D thermal scanning applications. The pose of the sensing device is initially estimated using correspondences found by maximizing the thermal consistency between consecutive infrared images. The coarse pose estimate is further refined by finding the motion parameters that minimize a combined geometric and thermographic loss function. Experimental results demonstrate that the complementary information captured by multimodal sensors can be utilized to improve the performance of 3D thermographic reconstruction. Through effective fusion of thermal and depth data, the proposed approach generates more accurate 3D thermal models using significantly less scanning data.
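A minimal sketch of the combined objective minimized in the refinement step, assuming known point correspondences, a rigid pose (R, t), and an assumed weight lam balancing the geometric and thermographic terms; the authors' actual weighting and robust-loss details may differ.

```python
import numpy as np

def combined_icp_loss(src_pts, src_temps, dst_pts, dst_temps, R, t, lam=0.1):
    """Combined loss in the spirit of T-ICP: geometric point-to-point error
    plus a thermographic consistency term.

    src_pts, dst_pts   : (N, 3) corresponding 3D points
    src_temps, dst_temps : (N,) temperatures at those points
    R, t               : current rigid motion estimate
    lam                : assumed trade-off weight (not from the paper)
    """
    moved = src_pts @ R.T + t                      # apply current pose to source points
    geom = np.sum((moved - dst_pts) ** 2, axis=1)  # geometric residuals
    therm = (src_temps - dst_temps) ** 2           # thermographic residuals
    return np.mean(geom + lam * therm)             # minimized over (R, t)
```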
Hierarchical patch-based co-registration of differently stained histopathology slides
NASA Astrophysics Data System (ADS)
Yigitsoy, Mehmet; Schmidt, Günter
2017-03-01
Over the past decades, digital pathology has emerged as an alternative way of looking at tissue at the subcellular level. It enables multiplexed analysis of different cell types at the micron level. Information about cell types can be extracted by staining sections of a tissue block with different markers. However, robust fusion of structural and functional information from different stains is necessary for reproducible multiplexed analysis. Such a fusion can be obtained via image co-registration by establishing spatial correspondences between tissue sections. The spatial correspondences can then be used to transfer various statistics about cell types between sections. However, the multi-modal nature of the images and the sparse distribution of the cell types of interest pose several challenges for the registration of differently stained tissue sections. In this work, we propose a co-registration framework that efficiently addresses these challenges. We present a hierarchical patch-based registration of intensity-normalized tissue sections. Preliminary experiments demonstrate the potential of the proposed technique for the fusion of multi-modal information from differently stained digital histopathology sections.
F-18 Labeled Diabody-Luciferase Fusion Proteins for Optical-ImmunoPET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Anna M.
2013-01-18
The goal of the proposed work is to develop novel dual-labeled molecular imaging probes for multimodality imaging. Based on small, engineered antibodies called diabodies, these probes will be radioactively tagged with Fluorine-18 for PET imaging and fused to luciferases for optical (bioluminescence) detection. Performance will be evaluated and validated using a prototype integrated optical-PET imaging system, OPET. Multimodality probes for optical-PET imaging will be based on diabodies that are dually labeled with 18F for PET detection and fused to luciferases for optical imaging. 1) Two sets of fusion proteins will be built, targeting the cell surface markers CEA or HER2. Coelenterazine-based luciferases and variant forms will be evaluated in combination with the native substrate and analogs, in order to obtain two distinct probes recognizing different targets with different spectral signatures. 2) Diabody-luciferase fusion proteins will be labeled with 18F using amine-reactive [18F]-SFB produced using a novel microwave-assisted, one-pot method. 3) Site-specific, chemoselective radiolabeling methods will be devised to reduce the chance that radiolabeling will inactivate either the target-binding properties or the bioluminescence properties of the diabody-luciferase fusion proteins. 4) Combined optical and PET imaging of these dual-modality probes will be evaluated and validated in vitro and in vivo using a prototype integrated optical-PET imaging system, OPET. Each imaging modality has its strengths and weaknesses. Development and use of dual-modality probes allows optical imaging to benefit from the localization and quantitation offered by the PET mode, and enhances PET imaging by enabling simultaneous detection of more than one probe.
Cross-modal face recognition using multi-matcher face scores
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Blasch, Erik
2015-05-01
The performance of face recognition can be improved using information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face may be a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face to the visible gallery faces requires cross-modal matching approaches. A few such studies have been implemented in facial feature space with medium recognition performance. In this paper, we propose a cross-modal recognition approach, where multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at the image, feature, and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed with the three cross-matched face scores from the aforementioned algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logistic regression [BLR]) is trained and then tested with the score vectors using 10-fold cross-validation. The proposed approach was validated with a multispectral stereo face dataset from 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84% and FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces to the fused visible faces using three face scores and the BLR classifier.
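A hedged sketch of the final classification stage: three cross-matched face scores per comparison form a score vector, which a binomial logistic regression classifier separates into genuine and impostor classes under 10-fold cross-validation. The data here are synthetic stand-ins, not the paper's dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical score vectors: one row per probe-gallery comparison; the three
# columns stand in for the CGF, FPB, and LDA cross-matched scores.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.10, size=(200, 3))    # label 1 = same subject
impostor = rng.normal(0.4, 0.15, size=(200, 3))   # label 0 = different subjects
X = np.vstack([genuine, impostor])
y = np.array([1] * 200 + [0] * 200)

clf = LogisticRegression()                         # binomial logistic regression (BLR)
acc = cross_val_score(clf, X, y, cv=10).mean()     # 10-fold cross-validation
print(f"10-fold accuracy: {acc:.3f}")
```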
Quantitative multi-modal NDT data analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heideklang, René; Shokouhi, Parisa
2014-02-18
A single NDT technique is often not adequate to provide assessments of the integrity of test objects with the required coverage or accuracy. In such situations, one often resorts to multi-modal testing, where complementary and overlapping information from different NDT techniques is combined for a more comprehensive evaluation. Multi-modal material and defect characterization is an interesting task which involves several diverse fields of research, including signal and image processing, statistics, and data mining. The fusion of different modalities may improve quantitative nondestructive evaluation by effectively exploiting the augmented set of multi-sensor information about the material. It is the redundant information in particular whose quantification is expected to lead to increased reliability and robustness of the inspection results. There are different systematic approaches to data fusion, each with its specific advantages and drawbacks. In our contribution, these are discussed in the context of nondestructive materials testing. A practical study adopting a high-level scheme for the fusion of Eddy Current, GMR, and Thermography measurements on a reference metallic specimen with built-in grooves is presented. Results show that fusion is able to outperform the best single sensor regarding detection specificity, while retaining the same level of sensitivity.
NASA Astrophysics Data System (ADS)
Vajdic, Stevan M.; Katz, Henry E.; Downing, Andrew R.; Brooks, Michael J.
1994-09-01
A 3D relational image matching/fusion algorithm is introduced. It is implemented in the domain of medical imaging and is based on Artificial Intelligence paradigms, in particular knowledge-base representation and tree search. The 2D reference and target images are selected from 3D sets and segmented into non-touching and non-overlapping regions, using iterative thresholding and/or knowledge about the anatomical shapes of human organs. Selected image region attributes are calculated. Region matches are obtained using a tree search, and the error is minimized by evaluating a 'goodness of matching' function based on similarities of region attributes. Once the matched regions are found and a spline geometric transform is applied to the regional centers of gravity, the images are ready for fusion and visualization into a single 3D image of higher clarity.
Walter, Uwe; Niendorf, Thoralf; Graessl, Andreas; Rieger, Jan; Krüger, Paul-Christian; Langner, Sönke; Guthoff, Rudolf F; Stachs, Oliver
2014-05-01
A combination of magnetic resonance images with real-time high-resolution ultrasound, known as fusion imaging, may improve ophthalmologic examination. This study was undertaken to evaluate the feasibility of orbital high-field magnetic resonance and real-time colour Doppler ultrasound image fusion and navigation. This case study, performed between April and June 2013, included one healthy man (age, 47 years) and two patients (one woman, 57 years; one man, 67 years) with choroidal melanomas. All cases underwent 7.0-T magnetic resonance imaging using a custom-made ocular imaging surface coil. The Digital Imaging and Communications in Medicine volume data set was then loaded into the ultrasound system for manual registration of the live ultrasound image and fusion imaging examination. Data registration, matching, and volume navigation were feasible in all cases. Fusion imaging provided real-time imaging capabilities and high tissue contrast of the choroidal tumour and optic nerve. It also allowed overlaying a real-time colour Doppler signal on the magnetic resonance images for assessment of the vasculature of the tumour and retrobulbar structures. The combination of orbital high-field magnetic resonance and colour Doppler ultrasound image fusion and navigation is feasible. Multimodal fusion imaging promises to foster assessment and monitoring of choroidal melanoma and optic nerve disorders. • Orbital magnetic resonance and colour Doppler ultrasound real-time fusion imaging is feasible • Fusion imaging combines the spatial and temporal resolution advantages of each modality • Magnetic resonance and ultrasound fusion imaging improves assessment of choroidal melanoma vascularisation.
Multimodal system for the planning and guidance of bronchoscopy
NASA Astrophysics Data System (ADS)
Higgins, William E.; Cheirsilp, Ronnarit; Zang, Xiaonan; Byrnes, Patrick
2015-03-01
Many technical innovations in multimodal radiologic imaging and bronchoscopy have emerged recently in the effort against lung cancer. Modern X-ray computed-tomography (CT) scanners provide three-dimensional (3D) high-resolution chest images, positron emission tomography (PET) scanners give complementary molecular imaging data, and new integrated PET/CT scanners combine the strengths of both modalities. State-of-the-art bronchoscopes permit minimally invasive tissue sampling, with vivid endobronchial video enabling navigation deep into the airway-tree periphery, while complementary endobronchial ultrasound (EBUS) reveals local views of anatomical structures outside the airways. In addition, image-guided intervention (IGI) systems have proven their utility for CT-based planning and guidance of bronchoscopy. Unfortunately, no IGI system exists that integrates all sources effectively through the complete lung-cancer staging work flow. This paper presents a prototype of a computer-based multimodal IGI system that strives to fill this need. The system combines a wide range of automatic and semi-automatic image-processing tools for multimodal data fusion and procedure planning. It also provides a flexible graphical user interface for follow-on guidance of bronchoscopy/EBUS. Human-study results demonstrate the system's potential.
Paprottka, P M; Zengel, P; Cyran, C C; Ingrisch, M; Nikolaou, K; Reiser, M F; Clevert, D A
2014-01-01
To evaluate ultrasound tissue elasticity imaging, by comparison with multimodality image fusion with magnetic resonance imaging (MRI) and conventional grey-scale imaging with additional elasticity ultrasound, in an experimental small-animal squamous-cell carcinoma model for the assessment of tissue morphology. Human hypopharynx carcinoma cells were subcutaneously injected into the left flank of 12 female athymic nude rats. After 10 days (SD ± 2) of subcutaneous tumor growth, sonographic grey-scale and elasticity imaging and MRI measurements were performed using a high-end ultrasound system and a 3T MR scanner. For image fusion, the contrast-enhanced MRI DICOM data set was uploaded to the ultrasound device (GE LOGIQ E9), which has a magnetic field generator, a linear array transducer (6-15 MHz), and a dedicated software package that can detect transducers by means of a positioning system. Conventional grey-scale and elasticity imaging were integrated in the image fusion examination. After successful registration and image fusion, the registered MR images were shown simultaneously with the respective ultrasound sectional plane. Data evaluation was performed on the digitally stored video sequence data sets by two experienced radiologists using a modified Tsukuba elasticity score. The colors red and green are assigned to areas of soft tissue; blue indicates hard tissue. In all cases, successful image fusion and plane registration with MRI and ultrasound imaging, including grey-scale and elasticity imaging, was possible. The mean tumor volume based on caliper measurements in three dimensions was ~323 mm³. 4/12 rats were evaluated with Score I, 5/12 rats with Score II, and 3/12 rats with Score III. There was a close correlation in the fused MRI with existing small necroses in the tumor. None of the Score II or III lesions was visible by conventional grey-scale imaging. In our small study group, ultrasound tissue elasticity imaging enabled a secure differentiation between different tumor tissue areas in comparison with image fusion with MRI. Therefore, ultrasound tissue elasticity imaging might be used for fast detection of tumor response in the future, whereas conventional grey-scale imaging alone could not provide this additional information. By using standard contrast-enhanced MRI images for reliable and reproducible slice positioning, the strongly user-dependent limitation of ultrasound tissue elasticity imaging may be overcome, especially for comparisons between baseline and follow-up measurements.
NASA Astrophysics Data System (ADS)
Mu, Wei; Qi, Jin; Lu, Hong; Schabath, Matthew; Balagurunathan, Yoganand; Tunali, Ilke; Gillies, Robert James
2018-02-01
Purpose: To investigate the ability of complementary information provided by the fusion of PET/CT images to predict immunotherapy response in non-small cell lung cancer (NSCLC) patients. Materials and methods: We collected 64 patients diagnosed with primary NSCLC treated with anti-PD-1 checkpoint blockade. Using the PET/CT images, fused images were created following multiple methodologies, resulting in up to 7 different images for the tumor region. Quantitative image features were extracted from the primary images (PET and CT) and the fused images: 195 features from the primary images and 1235 features from the fusion images. Three clinical characteristics were also analyzed. We then used support vector machine (SVM) classification models to identify discriminant features that predict immunotherapy response at baseline. Results: On the validation dataset, an SVM built with 87 fusion features and 13 primary PET/CT features had an accuracy of 87.5% and an area under the ROC curve (AUROC) of 0.82, compared with 78.12% and 0.68 for a model built with 113 original PET/CT features. Conclusion: The fusion features show better ability to predict immunotherapy response than individual image features.
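To make the modeling step concrete, here is a minimal, hypothetical sketch of the pipeline the abstract describes (feature selection followed by an SVM evaluated by AUROC). The random matrix stands in for the real radiomics features, and the particular selection method is an assumption, not the authors' choice.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical radiomics matrix: rows = patients, columns = 195 primary + 1235 fusion features.
rng = np.random.default_rng(1)
X = rng.normal(size=(64, 1430))
y = rng.integers(0, 2, size=64)                    # 1 = immunotherapy responder (stand-in labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=100),          # keep ~100 discriminant features
                      SVC(kernel='linear', probability=True)) # linear SVM classifier
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"validation AUROC: {auc:.2f}")
```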
Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification.
Rajagopal, Gayathri; Palaniswamy, Ramamoorthy
2015-01-01
This research proposes a multimodal multifeature biometric system for human recognition using two traits, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal, multifeature biometric system using feature-level fusion to achieve better performance. The main aim of the proposed system is to increase recognition accuracy using feature-level fusion. The features at the feature level are raw biometric data, which contain richer information than decision-level and matching-score-level fusion; hence, information fused at the feature level is expected to yield improved recognition accuracy. However, information fused at the feature level suffers from the curse of dimensionality; here, PCA (principal component analysis) is used to reduce the dimensionality of the high-dimensional feature sets. The proposed multimodal results were compared with other multimodal and monomodal approaches. Among these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm is tested on a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database.
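A minimal sketch of the described pipeline on synthetic stand-in features: feature-level fusion by concatenation, PCA to counter the curse of dimensionality, then a KNN classifier (per the title). All sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Hypothetical extracted feature vectors: 50 subjects, 6 samples each.
rng = np.random.default_rng(2)
n = 300
palmprint_feats = rng.normal(size=(n, 512))
iris_feats = rng.normal(size=(n, 512))
labels = np.repeat(np.arange(50), 6)

fused = np.hstack([palmprint_feats, iris_feats])   # feature-level fusion by concatenation
fused = PCA(n_components=80).fit_transform(fused)  # dimensionality reduction via PCA

X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, stratify=labels, random_state=2)
knn = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
print(f"recognition accuracy: {knn.score(X_te, y_te):.3f}")
```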
Visual tracking for multi-modality computer-assisted image guidance
NASA Astrophysics Data System (ADS)
Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp
2017-03-01
With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support the localization of anatomical targets, the placement of imaging probes and instruments, and fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, urology, and others, many of which see increasing pressure to utilize medical imaging, especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex vivo and in vivo animals, and patients.
Image fusion and navigation platforms for percutaneous image-guided interventions.
Rajagopal, Manoj; Venkatesan, Aradhana M
2016-04-01
Image-guided interventional procedures, particularly image-guided biopsy and ablation, serve an important role in the care of the oncology patient. The need for tumor genomic and proteomic profiling, early tumor response assessment, and confirmation of early recurrence are common scenarios that may necessitate successful biopsies of targets, including those that are small, anatomically unfavorable, or inconspicuous. As image-guided ablation is increasingly incorporated into interventional oncology practice, similar obstacles are posed for the ablation of technically challenging tumor targets. Navigation tools, including image fusion and device tracking, can enable abdominal interventionalists to target challenging biopsy and ablation targets more accurately. Image fusion technologies enable multimodality fusion and real-time co-display of US, CT, MRI, and PET/CT data, with navigational technologies including electromagnetic tracking and robotic, cone-beam CT, optical, and laser guidance of interventional devices. Image fusion and navigational platform technology is reviewed in this article, including the results of studies implementing their use for interventional procedures. Pre-clinical and clinical experiences to date suggest these technologies have the potential to reduce procedure risk, time, and radiation dose to both the patient and the operator, with a valuable role to play in complex image-guided interventions.
Design and implementation of a contactless multiple hand feature acquisition system
NASA Astrophysics Data System (ADS)
Zhao, Qiushi; Bu, Wei; Wu, Xiangqian; Zhang, David
2012-06-01
In this work, an integrated contactless multiple hand feature acquisition system is designed. The system can capture palmprint, palm vein, and palm dorsal vein images simultaneously. Moreover, the images are captured in a contactless manner; that is, users need not touch any part of the device during capture. Palmprint is imaged under visible illumination, while palm vein and palm dorsal vein are imaged under near-infrared (NIR) illumination. Capture is controlled by computer, and the whole process takes less than 1 second, which is sufficient for online biometric systems. Based on this device, this paper also implements a contactless hand-based multimodal biometric system. Palmprint, palm vein, palm dorsal vein, finger vein, and hand geometry features are extracted from the captured images. After similarity measurement, the matching scores are fused using a weighted-sum fusion rule. Experimental results show that although the verification accuracy of each single modality is not as high as the state of the art, the fusion result is superior to most existing hand-based biometric systems. This result indicates that the proposed device is competent for contactless multimodal hand-based biometrics.
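The score-level fusion step lends itself to a short sketch: min-max normalization of each modality's matching scores followed by the weighted-sum rule. The weights and scores below are made-up placeholders, not values from the paper.

```python
import numpy as np

def weighted_sum_fusion(scores, weights):
    """Min-max normalize each modality's matching scores, then fuse with a
    weighted sum.

    scores  : dict of modality name -> 1D array of matching scores
    weights : dict of modality name -> weight (should sum to 1)
    """
    fused = None
    for name, s in scores.items():
        s = np.asarray(s, dtype=float)
        s_norm = (s - s.min()) / (s.max() - s.min() + 1e-12)  # min-max normalization
        term = weights[name] * s_norm
        fused = term if fused is None else fused + term
    return fused

# Illustrative call with made-up scores for three of the five hand traits:
fused = weighted_sum_fusion(
    {'palmprint': [0.9, 0.2], 'palm_vein': [0.7, 0.4], 'finger_vein': [0.8, 0.1]},
    {'palmprint': 0.4, 'palm_vein': 0.3, 'finger_vein': 0.3})
print(fused)  # higher fused score = more likely genuine match
```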
Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery
Fuerst, Bernhard; Tateno, Keisuke; Johnson, Alex; Fotouhi, Javad; Osgood, Greg; Tombari, Federico; Navab, Nassir
2017-01-01
Orthopaedic surgeons are still following the decades-old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. the pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking for creating a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via the iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surfaces and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows for tracking of surgical tools occluded by the hand. The proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion. PMID:29184659
Thaden, Jeremy J; Sanon, Saurabh; Geske, Jeffrey B; Eleid, Mackram F; Nijhof, Niels; Malouf, Joseph F; Rihal, Charanjit S; Bruce, Charles J
2016-06-01
There has been significant growth in the volume and complexity of percutaneous structural heart procedures in the past decade. Increasing procedural complexity and accompanying reliance on multimodality imaging have fueled the development of fusion imaging to facilitate procedural guidance. The first clinically available system capable of echocardiographic and fluoroscopic fusion for real-time guidance of structural heart procedures was approved by the US Food and Drug Administration in 2012. Echocardiographic-fluoroscopic fusion imaging combines the precise catheter and device visualization of fluoroscopy with the soft tissue anatomy and color flow Doppler information afforded by echocardiography in a single image. This allows the interventionalist to perform precise catheter manipulations under fluoroscopy guidance while visualizing critical tissue anatomy provided by echocardiography. However, there are few data available addressing this technology's strengths and limitations in routine clinical practice. The authors provide a critical review of currently available echocardiographic-fluoroscopic fusion imaging for guidance of structural heart interventions to highlight its strengths, limitations, and potential clinical applications and to guide further research into value of this emerging technology. Copyright © 2016 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Evtushenko, Alexander S.; Faskhutdinov, Lenar M.; Kafarova, Anastasia M.; Kazakov, Vadim S.; Kuznetzov, Artem A.; Minaeva, Alina Yu.; Sevruk, Nikita L.; Nureev, Ilnur I.; Vasilets, Alexander A.; Andreev, Vladimir A.; Morozov, Oleg G.; Burdin, Vladimir A.; Bourdine, Anton V.
2017-04-01
This work presents a method for producing precision macro-structure defects ("tapers" and "up-tapers") in conventional silica telecommunication multimode optical fibers using a commercially available field fusion splicer with modified software settings, followed by writing fiber Bragg gratings over or near them. We developed a technique for estimating macro-defect geometry parameters via analysis of the photo-image taken after defect writing and displayed on the fusion splicer screen. Results are presented on the dependence of defect geometry on the fusion current and fusion time values set in the splicer program, which made it possible to choose their best combination. We also performed experimental statistical studies of "taper" and "up-taper" diameter stability, as well as of their insertion loss, during writing under fixed corrected splicer program parameters. In addition, we developed a technique for FBG writing over or near the macro-structure defect. Spectral response measurements for short-length samples of multimode optical fiber with fiber Bragg gratings written over and near macro-defects prepared using the proposed technique are presented.
NASA Astrophysics Data System (ADS)
Pan, Feng; Deng, Yating; Ma, Xichao; Xiao, Wen
2017-11-01
Digital holographic microtomography is improved and applied to the measurement of three-dimensional refractive index distributions of fusion-spliced optical fibers. Tomographic images are reconstructed from full-angle phase projection images obtained with a setup-rotation approach, in which the laser source, the optical system, and the image sensor are arranged on an optical breadboard and synchronously rotated around the fixed object. For retrieving high-quality tomographic images, a numerical method is proposed to compensate for the unwanted movements of the object in the lateral, axial, and vertical directions during rotation. The compensation is implemented on the two-dimensional phase images instead of the sinogram. The experimental results distinctly exhibit the internal structures of fusion splices between a single-mode fiber and other fibers, including a multi-mode fiber, a panda polarization-maintaining fiber, a bow-tie polarization-maintaining fiber, and a photonic crystal fiber. In particular, the internal structure distortion in the fusion areas can be intuitively observed, such as the expansion of the stress zones of polarization-maintaining fibers, the collapse of the air holes of photonic crystal fibers, etc.
Towards Omni-Tomography—Grand Fusion of Multiple Modalities for Simultaneous Interior Tomography
Wang, Ge; Zhang, Jie; Gao, Hao; Weir, Victor; Yu, Hengyong; Cong, Wenxiang; Xu, Xiaochen; Shen, Haiou; Bennett, James; Furth, Mark; Wang, Yue; Vannier, Michael
2012-01-01
We recently elevated interior tomography from its origin in computed tomography (CT) to a general tomographic principle, and proved its validity for other tomographic modalities including SPECT, MRI, and others. Here we propose “omni-tomography”, a novel concept for the grand fusion of multiple tomographic modalities for simultaneous data acquisition in a region of interest (ROI). Omni-tomography can be instrumental when physiological processes under investigation are multi-dimensional, multi-scale, multi-temporal and multi-parametric. Both preclinical and clinical studies now depend on in vivo tomography, often requiring separate evaluations by different imaging modalities. Over the past decade, two approaches have been used for multimodality fusion: Software based image registration and hybrid scanners such as PET-CT, PET-MRI, and SPECT-CT among others. While there are intrinsic limitations with both approaches, the main obstacle to the seamless fusion of multiple imaging modalities has been the bulkiness of each individual imager and the conflict of their physical (especially spatial) requirements. To address this challenge, omni-tomography is now unveiled as an emerging direction for biomedical imaging and systems biomedicine. PMID:22768108
Gamma-Ray imaging for nuclear security and safety: Towards 3-D gamma-ray vision
NASA Astrophysics Data System (ADS)
Vetter, Kai; Barnowksi, Ross; Haefner, Andrew; Joshi, Tenzing H. Y.; Pavlovsky, Ryan; Quiter, Brian J.
2018-01-01
The development of portable gamma-ray imaging instruments, in combination with recent advances in sensor and related computer vision technologies, enables unprecedented capabilities in the detection, localization, and mapping of radiological and nuclear materials in complex environments relevant to nuclear security and safety. Though multi-modal imaging has been established in medicine and biomedical imaging for some time, the potential of multi-modal data fusion for radiological localization and mapping problems in complex indoor and outdoor environments remains to be explored in detail. In contrast to the well-defined settings in medical or biological imaging, with a small field of view and a well-constrained extent of the radiation field, in many radiological search and mapping scenarios the radiation fields are not constrained, and objects and sources are not necessarily known prior to the measurement. The ability to fuse radiological with contextual or scene data in three dimensions, analogous to the fusion of functional and anatomical imaging in medicine, provides new capabilities enhancing image clarity, context, quantitative estimates, and visualization of the data products. We have developed new means to register and fuse gamma-ray imaging with contextual data from portable or moving platforms. These developments enhance detection and mapping capabilities and provide unprecedented visualization of complex radiation fields, moving us one step closer to the realization of gamma-ray vision in three dimensions.
Hofstad, Erlend Fagertun; Amundsen, Tore; Langø, Thomas; Bakeng, Janne Beate Lervik; Leira, Håkon Olav
2017-01-01
Background Endobronchial ultrasound transbronchial needle aspiration (EBUS-TBNA) is the endoscopic method of choice for confirming lung cancer metastasis to mediastinal lymph nodes. Precision is crucial for correct staging and clinical decision-making. Navigation and multimodal imaging can potentially improve EBUS-TBNA efficiency. Aims To demonstrate the feasibility of a multimodal image guiding system using electromagnetic navigation for ultrasound bronchoscopy in humans. Methods Four patients referred for lung cancer diagnosis and staging with EBUS-TBNA were enrolled in the study. Target lymph nodes were predefined from the preoperative computed tomography (CT) images. A prototype convex-probe ultrasound bronchoscope with an attached sensor for position tracking was used for EBUS-TBNA. Electromagnetic tracking of the ultrasound bronchoscope and ultrasound images allowed fusion of preoperative CT and intraoperative ultrasound in the navigation software. Navigated EBUS-TBNA was used to guide target lymph node localization and sampling. Navigation system accuracy was calculated, measured as the deviation between lymph node position in ultrasound and CT in three planes. Procedure time, diagnostic yield, and adverse events were recorded. Results Preoperative CT and real-time ultrasound images were successfully fused and displayed in the navigation software during the procedures. Overall navigation accuracy (11 measurements) was 10.0 ± 3.8 mm, maximum 17.6 mm, minimum 4.5 mm. An adequate sample was obtained in 6/6 (100%) of targeted lymph nodes. No adverse events were registered. Conclusions Electromagnetic navigated EBUS-TBNA was feasible, safe, and easy in this human pilot study. The clinical usefulness was clearly demonstrated. Fusion of real-time ultrasound, preoperative CT, and electromagnetic navigational bronchoscopy provided controlled guidance to the level of the target, an intraoperative overview, and procedure documentation. PMID:28182758
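The accuracy figures above reduce to simple statistics over per-node deviations. A small sketch of that computation, assuming one 3D position per measurement in each modality; whether the paper summarizes per-plane or Euclidean deviations is our reading, so treat this as illustrative.

```python
import numpy as np

def navigation_accuracy(us_positions, ct_positions):
    """Summary statistics of the deviation between lymph node positions seen
    in ultrasound and in CT (one 3D point per measurement)."""
    us = np.asarray(us_positions, dtype=float)
    ct = np.asarray(ct_positions, dtype=float)
    deviations = np.linalg.norm(us - ct, axis=1)   # error per measurement (mm)
    return deviations.mean(), deviations.std(), deviations.max(), deviations.min()
```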
Improved medical image fusion based on cascaded PCA and shift invariant wavelet transforms.
Reena Benjamin, J; Jayasree, T
2018-02-01
In the medical field, radiologists need more informative and high-quality medical images to diagnose diseases. Image fusion plays a vital role in the field of biomedical image analysis. It aims to integrate complementary information from multimodal images, producing a new composite image which is expected to be more informative for visual perception than any of the individual input images. The main objective of this paper is to improve the information content, preserve the edges, and enhance the quality of the fused image using cascaded principal component analysis (PCA) and shift-invariant wavelet transforms. A novel image fusion technique based on cascaded PCA and shift-invariant wavelet transforms is proposed in this paper. PCA in the spatial domain extracts relevant information from the large dataset based on eigenvalue decomposition, and the wavelet transform, operating in the complex domain with shift-invariant properties, brings out more directional and phase details of the image. The maximum fusion rule applied in the dual-tree complex wavelet transform domain enhances the average information and morphological details. The input images of the human brain in two different modalities (MRI and CT) are collected from the whole brain atlas data distributed by Harvard University. Both MRI and CT images are fused using the cascaded PCA and shift-invariant wavelet transform method. The proposed method is evaluated based on three main key factors, namely structure preservation, edge preservation, and contrast preservation. The experimental results and comparison with other existing fusion methods show the superior performance of the proposed image fusion framework in terms of visual and quantitative evaluations. The experimental results demonstrate that the proposed method enhances the directional features as well as fine edge details, and reduces redundant details, artifacts, and distortions.
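The PCA stage of such a cascade is compact enough to sketch: the leading eigenvector of the 2x2 covariance between the two source images yields the fusion weights. This is the classic PCA fusion rule, shown here as one plausible reading of the cascaded scheme; variable names are illustrative.

```python
import numpy as np

def pca_fusion_weights(img_a, img_b):
    """Classic PCA image fusion step: the principal eigenvector of the 2x2
    covariance of the two source images gives the per-image fusion weights."""
    data = np.vstack([img_a.ravel(), img_b.ravel()])
    cov = np.cov(data)                       # 2x2 covariance of the two images
    vals, vecs = np.linalg.eigh(cov)         # eigh: covariance is symmetric
    v = np.abs(vecs[:, np.argmax(vals)])     # leading eigenvector
    w = v / v.sum()                          # normalize to weights summing to 1
    return w[0], w[1]

# fused = w_a * mri + w_b * ct, followed by the shift-invariant wavelet stage.
```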
NASA Astrophysics Data System (ADS)
Bismuth, Vincent; Vancamberg, Laurence; Gorges, Sébastien
2009-02-01
During interventional radiology procedures, guide-wires are usually inserted into the patient's vascular tree for diagnostic or therapeutic purposes. These procedures are monitored with an X-ray interventional system providing images of the interventional devices navigating through the patient's body. The automatic detection of such tools by image processing means has gained maturity over the past years and enables applications ranging from image enhancement to multimodal image fusion. Sophisticated detection methods are emerging, which rely on a variety of device enhancement techniques. In this article, we review and classify these techniques into three families. We chose a state-of-the-art approach in each of them and built a rigorous framework to compare their detection capability and their computational complexity. Through simulations and the intensive use of ROC curves, we demonstrate that the Hessian-based methods are the most robust to strong curvature of the devices and that the family of rotated-filter techniques is the most suited for detecting low-CNR, low-curvature devices. The steerable filter approach demonstrated less interesting detection capabilities and appears to be the most expensive to compute. Finally, we demonstrate the interest of automatic guide-wire detection on a clinical topic: the compensation of respiratory motion in multimodal image fusion.
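For reference, a minimal 2D Hessian-based device-enhancement filter in the spirit of the methods reviewed above: a Frangi-style response at a single scale, tuned for dark curvilinear devices on a brighter background. The parameter values and the single-scale simplification are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_line_response(img, sigma=2.0, beta=0.5, c=15.0):
    """Single-scale Hessian-based curvilinear-structure enhancement (2D)."""
    img = img.astype(float)
    # Scale-normalized second-order Gaussian derivatives (Hessian entries)
    hxx = gaussian_filter(img, sigma, order=(0, 2)) * sigma**2
    hyy = gaussian_filter(img, sigma, order=(2, 0)) * sigma**2
    hxy = gaussian_filter(img, sigma, order=(1, 1)) * sigma**2
    # Eigenvalues of the 2x2 Hessian at every pixel
    tmp = np.sqrt((hxx - hyy) ** 2 + 4 * hxy ** 2)
    lam1 = 0.5 * (hxx + hyy + tmp)
    lam2 = 0.5 * (hxx + hyy - tmp)
    # Order eigenvalues by magnitude: |l1| <= |l2|
    small_first = np.abs(lam1) <= np.abs(lam2)
    l1 = np.where(small_first, lam1, lam2)
    l2 = np.where(small_first, lam2, lam1)
    rb = np.abs(l1) / (np.abs(l2) + 1e-12)          # blobness ratio (low for lines)
    s2 = l1 ** 2 + l2 ** 2                          # second-order structure strength
    resp = np.exp(-rb**2 / (2 * beta**2)) * (1 - np.exp(-s2 / (2 * c**2)))
    resp[l2 < 0] = 0  # guide-wires appear dark on X-ray: keep positive curvature only
    return resp
```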
Yang, Minglei; Ding, Hui; Zhu, Lei; Wang, Guangzhi
2016-12-01
Ultrasound fusion imaging is an emerging tool that benefits a variety of clinical applications, such as image-guided diagnosis and treatment of hepatocellular carcinoma and unresectable liver metastases. However, respiratory liver motion-induced misalignment of the multimodal images (i.e., fusion error) compromises the effectiveness and practicability of this method. The purpose of this paper is to develop a subject-specific liver motion model and an automatic registration-based method to correct the fusion error. An online-built subject-specific motion model and an automatic image registration method for 2D ultrasound-3D magnetic resonance (MR) images were combined to compensate for the respiratory liver motion. The key steps included: 1) building a subject-specific liver motion model for the current subject online and performing the initial registration of pre-acquired 3D MR and intra-operative ultrasound images; 2) during fusion imaging, compensating for liver motion first using the motion model, and then using an automatic registration method to further correct the respiratory fusion error. Evaluation experiments were conducted on a liver phantom and five subjects. In the phantom study, the fusion error (superior-inferior axis) was reduced from 13.90 ± 2.38 mm to 4.26 ± 0.78 mm by using the motion model only. The fusion error further decreased to 0.63 ± 0.53 mm by using the registration method. The registration method also decreased the rotation error from 7.06 ± 0.21° to 1.18 ± 0.66°. In the clinical study, the fusion error was reduced from 12.90 ± 9.58 mm to 6.12 ± 2.90 mm by using the motion model alone. Moreover, the fusion error decreased to 1.96 ± 0.33 mm by using the registration method. The proposed method can effectively correct the respiration-induced fusion error to improve the fusion image quality. It can also reduce the dependency of error correction on the initial registration of ultrasound and MR images. Overall, the proposed method can improve the clinical practicability of ultrasound fusion imaging. Copyright © 2016 Elsevier Ltd. All rights reserved.
Neural network fusion: a novel CT-MR aortic aneurysm image segmentation method
NASA Astrophysics Data System (ADS)
Wang, Duo; Zhang, Rui; Zhu, Jin; Teng, Zhongzhao; Huang, Yuan; Spiga, Filippo; Du, Michael Hong-Fei; Gillard, Jonathan H.; Lu, Qingsheng; Liò, Pietro
2018-03-01
Medical imaging examination of patients usually involves more than one imaging modality, such as Computed Tomography (CT), Magnetic Resonance (MR), and Positron Emission Tomography (PET) imaging. Multimodal imaging allows examiners to benefit from the advantages of each modality. For example, for abdominal aortic aneurysm, CT imaging shows calcium deposits in the aorta clearly, while MR imaging distinguishes thrombus and soft tissues better. Analysing and segmenting both CT and MR images and combining the results will greatly help radiologists and doctors to treat the disease. In this work, we present methods for using deep neural network models to perform such multi-modal medical image segmentation. As CT and MR images of the abdominal area cannot be well registered due to non-affine deformations, a naive approach is to train CT and MR segmentation networks separately. However, such an approach is time-consuming and resource-inefficient. We propose a new approach that fuses the high-level parts of the CT and MR networks, hypothesizing that neurons recognizing the high-level concepts of aortic aneurysm can be shared across modalities. Such a network can be trained end-to-end with non-registered CT and MR images in a shorter training time. Moreover, network fusion allows a shared representation of the aorta in both CT and MR images to be learnt. Through experiments, we discovered that for parts of the aorta showing similar aneurysm conditions, the representations in the neural network are separated by shorter distances. Such feature-level distances are helpful for registering CT and MR images.
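A hedged sketch of the fused-network idea, assuming toy layer sizes rather than the authors' architecture: modality-specific low-level encoders feed a shared high-level block, so CT and MR batches can alternate during end-to-end training without pixel-wise registration.

```python
import torch
import torch.nn as nn

class FusedSegmenter(nn.Module):
    """Illustrative network fusion: separate low-level encoders for CT and MR,
    a shared high-level block, and a common segmentation head."""
    def __init__(self):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.ct_enc = encoder()              # modality-specific low-level features
        self.mr_enc = encoder()
        self.shared = nn.Sequential(         # fused high-level representation
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(32, 2, 1)      # background / aneurysm logits

    def forward(self, x, modality):
        feat = self.ct_enc(x) if modality == 'ct' else self.mr_enc(x)
        return self.head(self.shared(feat))  # shared weights see both modalities

# Training would alternate CT and MR batches; no pixel correspondence is needed
# because the two modalities never enter the network in the same forward pass.
```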
Wang, Hongzhi; Yushkevich, Paul A.
2013-01-01
Label fusion based multi-atlas segmentation has proven to be one of the most competitive techniques for medical image segmentation. This technique transfers segmentations from expert-labeled images, called atlases, to a novel image using deformable image registration. Errors produced by label transfer are further reduced by label fusion, which combines the results produced by all atlases into a consensus solution. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity is a simple and highly effective technique. However, one limitation of most weighted voting methods is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this problem, we recently developed the joint label fusion technique and the corrective learning technique, which won first place in the 2012 MICCAI Multi-Atlas Labeling Challenge and was one of the top performers in the 2013 MICCAI Segmentation: Algorithms, Theory and Applications (SATA) challenge. To make our techniques more accessible to the scientific research community, we describe an Insight Toolkit-based open source implementation of our label fusion methods. Our implementation extends our methods to work with multi-modality imaging data and is more suitable for segmentation problems with multiple labels. We demonstrate the usage of our tools by applying them to the 2012 MICCAI Multi-Atlas Labeling Challenge brain image dataset and the 2013 SATA challenge canine leg image dataset. We report the best results on these two datasets so far. PMID:24319427
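For orientation, a sketch of the simpler independent-weight baseline that joint label fusion improves on: spatially varying weighted voting with weights derived from local atlas-target intensity similarity. The Gaussian similarity kernel and sigma are assumptions, not the toolkit's exact formulation.

```python
import numpy as np

def weighted_voting(target, atlas_imgs, atlas_segs, sigma=0.1):
    """Spatially varying weighted voting over registered atlases.

    target     : target image array
    atlas_imgs : list of atlas intensity images warped to the target
    atlas_segs : list of corresponding warped label images
    """
    weights = []
    for img in atlas_imgs:
        d2 = (img - target) ** 2                  # per-voxel intensity discrepancy
        weights.append(np.exp(-d2 / (2 * sigma ** 2)))
    weights = np.stack(weights)                   # (n_atlas, ...) voxelwise weights
    segs = np.stack(atlas_segs)
    labels = np.unique(segs)
    votes = [np.sum(weights * (segs == l), axis=0) for l in labels]
    return labels[np.argmax(np.stack(votes), axis=0)]  # consensus label per voxel
```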
Angiogram, fundus, and oxygen saturation optic nerve head image fusion
NASA Astrophysics Data System (ADS)
Cao, Hua; Khoobehi, Bahram
2009-02-01
A novel multi-modality optic nerve head image fusion approach has been designed and applied to three ophthalmologic modalities: angiogram, fundus, and oxygen saturation retinal optic nerve head images. It achieves an excellent result, visualizing fundus or oxygen saturation images with a complete angiogram overlay. This study makes two contributions in terms of novelty, efficiency, and accuracy. The first is an automated control point detection algorithm for multi-sensor images. The new method employs retinal vasculature and bifurcation features, identifying an initial guess of the control points using the Adaptive Exploratory Algorithm. The second is a heuristic optimization fusion algorithm. To maximize the objective function (Mutual-Pixel-Count), the iterative algorithm adjusts the initial guess of the control points at the sub-pixel level. A refinement of the parameter set is obtained at the end of each loop, and an optimal fused image is generated at the end of the iteration. This is the first time the Mutual-Pixel-Count concept has been introduced into the biomedical image fusion area. By locking the images in one place, the fused image allows ophthalmologists to match the same eye over time, assess disease progress, and pinpoint surgical tools. The new algorithm can easily be extended to registration and fusion of human or animal 3D eye, brain, or body images.
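The Mutual-Pixel-Count objective itself is simple to state; a sketch, assuming binarized vasculature maps from the two modalities after the current control-point transform has been applied.

```python
import numpy as np

def mutual_pixel_count(vessels_a, vessels_b):
    """Mutual-Pixel-Count: number of pixels where the binarized vasculature
    maps of both modalities overlap under the current transform. The fusion
    loop would perturb the control points and keep changes that raise it."""
    return int(np.sum((vessels_a > 0) & (vessels_b > 0)))
```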
NASA Astrophysics Data System (ADS)
Ilehag, R.; Schenk, A.; Hinz, S.
2017-08-01
This paper presents a concept for the classification of facade elements based on the material and geometry of the elements, in addition to the thermal radiation of the facade, using a multimodal Unmanned Aerial Vehicle (UAV) system. Once the concept is finalized and functional, the workflow can be used for building energy demand estimation by exploiting existing methods for estimating the heat transfer coefficient and the transmitted heat loss. The multimodal system consists of a thermal, a hyperspectral, and an optical sensor, and can be operated from a UAV. We present the challenges faced when dealing with sensors that operate in different spectral ranges and have different technical specifications, such as radiometric and geometric resolution. We address different approaches to data fusion, such as image registration, generation of 3D models by image matching, and means of classification based on either object geometry or pixel values. As a first step towards realizing the concept, the result of a geometric calibration with a purpose-designed multimodal calibration pattern is presented.
NASA Astrophysics Data System (ADS)
He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Chen, Ying
2014-05-01
A multimodal biometric system has been considered a promising technique to overcome the defects of unimodal biometric systems. We introduce a fusion scheme to gain a better understanding of, and a fusion method for, a face-iris-fingerprint multimodal biometric system. In our case, we use particle swarm optimization to train a set of adaptive Gabor filters in order to achieve the proper Gabor basis functions for each modality. For a closer analysis of texture information, two different local Gabor features for each modality are produced from the corresponding Gabor coefficients. Next, all matching scores of the two Gabor features for each modality are projected to a single scalar score via a trained support vector regression model for a final decision. A large-scale dataset is formed to validate the proposed scheme using the Facial Recognition Technology database (fafb) and CASIA-V3-Interval together with the FVC2004-DB2a dataset. The experimental results demonstrate that, as well as achieving more powerful local Gabor features for the individual modalities and obtaining better recognition performance through their fusion strategy, our architecture outperforms some state-of-the-art individual methods and other fusion approaches for face-iris-fingerprint multimodal biometric systems.
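As an illustration of the local Gabor feature stage, a sketch using a fixed filter bank; the paper tunes the filter parameters with particle swarm optimization, so the frequencies, orientation count, and pooling here are assumptions.

```python
import numpy as np
from skimage.filters import gabor

def gabor_feature_vector(img, frequencies=(0.1, 0.2), n_theta=4):
    """Local Gabor features for one modality: mean response magnitudes over a
    small bank of frequencies and orientations (fixed grid, not PSO-tuned)."""
    feats = []
    for f in frequencies:
        for k in range(n_theta):
            real, imag = gabor(img, frequency=f, theta=k * np.pi / n_theta)
            feats.append(np.sqrt(real ** 2 + imag ** 2).mean())  # pooled magnitude
    return np.array(feats)
```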
A Multi-modal, Discriminative and Spatially Invariant CNN for RGB-D Object Labeling.
Asif, Umar; Bennamoun, Mohammed; Sohel, Ferdous
2017-08-30
While deep convolutional neural networks have shown remarkable success in image classification, the problems of inter-class similarities, intra-class variances, the effective combination of multimodal data, and the spatial variability in images of objects remain major challenges. To address these problems, this paper proposes a novel framework to learn a discriminative and spatially invariant classification model for object and indoor scene recognition using multimodal RGB-D imagery. This is achieved through three postulates: 1) spatial invariance - achieved by combining a spatial transformer network with a deep convolutional neural network to learn features which are invariant to spatial translations, rotations, and scale changes; 2) high discriminative capability - achieved by introducing Fisher encoding within the CNN architecture to learn features which have small inter-class similarities and large intra-class compactness; and 3) multimodal hierarchical fusion - achieved through the regularization of semantic segmentation to a multi-modal CNN architecture, where class probabilities are estimated at different hierarchical levels (i.e., image- and pixel-levels) and fused into a Conditional Random Field (CRF)-based inference hypothesis, the optimization of which produces consistent class labels in RGB-D images. Extensive experimental evaluations on RGB-D object and scene datasets, and live video streams (acquired from a Kinect), show that our framework produces superior object and scene classification results compared to the state-of-the-art methods.
Data fusion algorithm for rapid multi-mode dust concentration measurement system based on MEMS
NASA Astrophysics Data System (ADS)
Liao, Maohao; Lou, Wenzhong; Wang, Jinkui; Zhang, Yan
2018-03-01
As a single measurement method cannot fully meet the technical requirements of dust concentration measurement, a multi-mode detection method is put forward, along with the new requirements it places on data processing. This paper presents a new dust concentration measurement system that contains a MEMS ultrasonic sensor and a MEMS capacitance sensor, and presents a new data fusion algorithm for this multi-mode measurement system. After analyzing the relation between the data from the composite measurement methods, a data fusion algorithm based on Kalman filtering is established, which effectively improves the measurement accuracy and ultimately forms a rapid data fusion model for dust concentration measurement. Test results show that the data fusion algorithm is able to realize rapid and exact concentration detection.
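A minimal sketch of the Kalman-filtering fusion idea under a random-walk concentration model, sequentially updating with the ultrasonic and capacitive readings; all noise variances are assumed values, not the paper's.

```python
import numpy as np

def kalman_fuse(z_ultrasonic, z_capacitive, r_u=4.0, r_c=1.0, q=0.01):
    """Scalar Kalman filter fusing two MEMS sensor streams into one dust
    concentration estimate.

    r_u, r_c : assumed measurement noise variances of the two sensors
    q        : assumed process noise of the random-walk concentration model
    """
    x, p = z_ultrasonic[0], 1.0                  # initial state and covariance
    out = []
    for zu, zc in zip(z_ultrasonic, z_capacitive):
        p = p + q                                # predict step (random walk)
        for z, r in ((zu, r_u), (zc, r_c)):      # sequential update, one per sensor
            k = p / (p + r)                      # Kalman gain
            x = x + k * (z - x)                  # fuse measurement into estimate
            p = (1 - k) * p
        out.append(x)
    return np.array(out)
```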
Ahmed, Shaheen; Iftekharuddin, Khan M; Vossough, Arastoo
2011-03-01
Our previous work suggests that the fractal texture feature is useful for detecting pediatric brain tumors in multimodal MRI. In this study, we systematically investigate the efficacy of several different image features, such as intensity, fractal texture, and level-set shape, in the segmentation of posterior-fossa (PF) tumors in pediatric patients. We explore the effectiveness of four different feature selection techniques and three different segmentation techniques, respectively, for discriminating tumor regions from normal tissue in multimodal brain MRI. We further study the selective fusion of these features for improved PF tumor segmentation. Our results suggest that the Kullback-Leibler divergence measure for feature ranking and selection, and the expectation-maximization algorithm for feature fusion and tumor segmentation, offer the best results for the patient data in this study. We show that for the T1 and fluid attenuation inversion recovery (FLAIR) MRI modalities, the best PF tumor segmentation is obtained using a texture feature such as multifractional Brownian motion (mBm), while that for T2 MRI is obtained by fusing level-set shape with intensity features. In multimodality fused MRI (T1, T2, and FLAIR), the mBm feature offers the best PF tumor segmentation performance. We use different similarity metrics to evaluate the quality and robustness of these selected features for PF tumor segmentation in MRI for ten pediatric patients.
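The KL-divergence feature-ranking step can be sketched directly: estimate the feature's distribution inside tumor and normal regions and score their divergence. Binning and smoothing choices here are assumptions.

```python
import numpy as np

def kl_feature_score(feat_tumor, feat_normal, bins=32):
    """Rank a feature by the KL divergence between its values inside tumor
    regions and inside normal tissue; larger means more discriminative.

    feat_tumor, feat_normal : 1D arrays of the feature sampled in each region.
    """
    lo = min(feat_tumor.min(), feat_normal.min())
    hi = max(feat_tumor.max(), feat_normal.max())
    p, _ = np.histogram(feat_tumor, bins=bins, range=(lo, hi))
    q, _ = np.histogram(feat_normal, bins=bins, range=(lo, hi))
    p = p / p.sum() + 1e-12                      # normalize and smooth empty bins
    q = q / q.sum() + 1e-12
    return float(np.sum(p * np.log(p / q)))      # D_KL(tumor || normal)
```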
Spinal fusion-hardware construct: Basic concepts and imaging review
Nouh, Mohamed Ragab
2012-01-01
The interpretation of spinal images fixed with metallic hardware forms an increasing bulk of daily practice in a busy imaging department. Radiologists are required to be familiar with the instrumentation and operative options used in spinal fixation and fusion procedures, especially in his or her institute. This is critical in evaluating the position of implants and potential complications associated with the operative approaches and spinal fixation devices used. Thus, the radiologist can play an important role in patient care and outcome. This review outlines the advantages and disadvantages of commonly used imaging methods and reports on the best yield for each modality and how to overcome the problematic issues associated with the presence of metallic hardware during imaging. Baseline radiographs are essential as they are the baseline point for evaluation of future studies should patients develop symptoms suggesting possible complications. They may justify further imaging workup with computed tomography, magnetic resonance and/or nuclear medicine studies as the evaluation of a patient with a spinal implant involves a multi-modality approach. This review describes imaging features of potential complications associated with spinal fusion surgery as well as the instrumentation used. This basic knowledge aims to help radiologists approach everyday practice in clinical imaging. PMID:22761979
Parashurama, Natesh; Ahn, Byeong-Cheol; Ziv, Keren; Ito, Ken; Paulmurugan, Ramasamy; Willmann, Jürgen K.; Chung, Jaehoon; Ikeno, Fumiaki; Swanson, Julia C.; Merk, Denis R.; Lyons, Jennifer K.; Yerushalmi, David; Teramoto, Tomohiko; Kosuge, Hisanori; Dao, Catherine N.; Ray, Pritha; Patel, Manishkumar; Chang, Ya-fang; Mahmoudi, Morteza; Cohen, Jeff Eric; Goldstone, Andrew Brooks; Habte, Frezghi; Bhaumik, Srabani; Yaghoubi, Shahriar; Robbins, Robert C.; Dash, Rajesh; Yang, Phillip C.; Brinton, Todd J.; Yock, Paul G.; McConnell, Michael V.
2016-01-01
Purpose To use multimodality reporter-gene imaging to assess the serial survival of marrow stromal cells (MSC) after therapy for myocardial infarction (MI) and to determine if the requisite preclinical imaging end point was met prior to a follow-up large-animal MSC imaging study. Materials and Methods Animal studies were approved by the Institutional Administrative Panel on Laboratory Animal Care. Mice (n = 19) that had experienced MI were injected with bone marrow–derived MSC that expressed a multimodality triple fusion (TF) reporter gene. The TF reporter gene (fluc2-egfp-sr39tk) consisted of the human ubiquitin promoter driving firefly luciferase 2 (fluc2), enhanced green fluorescent protein (egfp), and the sr39tk positron emission tomography reporter gene. Serial bioluminescence imaging of MSC-TF and ex vivo luciferase assays were performed. Correlations were analyzed with the Pearson product-moment correlation, and serial imaging results were analyzed with a mixed-effects regression model. Results Analysis of the MSC-TF after cardiac cell therapy showed significantly lower signal on days 8 and 14 than on day 2 (P = .011 and P = .001, respectively). MSC-TF with MI demonstrated significantly higher signal than MSC-TF without MI at days 4, 8, and 14 (P = .016). Ex vivo luciferase activity assay confirmed the presence of MSC-TF on days 8 and 14 after MI. Conclusion Multimodality reporter-gene imaging was successfully used to assess serial MSC survival after therapy for MI, and it was determined that the requisite preclinical imaging end point, 14 days of MSC survival, was met prior to a follow-up large-animal MSC study. © RSNA, 2016 Online supplemental material is available for this article. PMID:27308957
Hierarchical feature representation and multimodal fusion with deep learning for AD/MCI diagnosis.
Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang
2014-11-01
For the last decade, it has been shown that neuroimaging can be a potential tool for the diagnosis of Alzheimer's Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), and also that fusion of different modalities can provide complementary information to enhance diagnostic accuracy. Here, we focus on the problems of both feature representation and fusion of multimodal information from Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). To the best of our knowledge, previous methods in the literature mostly used hand-crafted features such as cortical thickness and gray matter densities from MRI, or voxel intensities from PET, and then combined these multimodal features by simply concatenating them into a long vector or transforming them into a higher-dimensional kernel space. In this paper, we propose a novel method for a high-level latent and shared feature representation from neuroimaging modalities via deep learning. Specifically, we use a Deep Boltzmann Machine (DBM), a deep network with a restricted Boltzmann machine as a building block, to find a latent hierarchical feature representation from a 3D patch, and then devise a systematic method for a joint feature representation from the paired patches of MRI and PET with a multimodal DBM. To validate the effectiveness of the proposed method, we performed experiments on the ADNI dataset and compared with state-of-the-art methods. In three binary classification problems of AD vs. healthy Normal Control (NC), MCI vs. NC, and MCI converter vs. MCI non-converter, we obtained maximal accuracies of 95.35%, 85.67%, and 74.58%, respectively, outperforming the competing methods. By visual inspection of the trained model, we observed that the proposed method could hierarchically discover the complex latent patterns inherent in both MRI and PET. Copyright © 2014 Elsevier Inc. All rights reserved.
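A full multimodal DBM is beyond a short sketch, but the building block the authors name, a restricted Boltzmann machine, is available off the shelf. Below is a hedged simplification: one RBM per modality, then a joint RBM on the concatenated hidden activations standing in for the shared top layer; patch sizes, layer widths, and hyperparameters are all assumptions.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
mri = rng.random((500, 125))   # stand-in: 500 paired 5x5x5 MRI patches, scaled to [0, 1]
pet = rng.random((500, 125))   # stand-in: the matching PET patches

# One RBM per modality learns a latent representation of its own patches.
rbm_mri = BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20,
                       random_state=0).fit(mri)
rbm_pet = BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20,
                       random_state=0).fit(pet)

# A joint RBM over the concatenated hidden activations plays the role of the
# shared top layer of a multimodal DBM (a simplification of the paper's model).
h = np.hstack([rbm_mri.transform(mri), rbm_pet.transform(pet)])
rbm_joint = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20,
                         random_state=0).fit(h)
shared = rbm_joint.transform(h)   # joint feature representation for a classifier
```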
2013-01-01
Background In the present study, we used multimodal imaging to investigate biodistribution in rats after intravenous administration of a new 99mTc-labeled delivery system consisting of polymer-shelled microbubbles (MBs) functionalized with diethylenetriaminepentaacetic acid (DTPA), thiolated poly(methacrylic acid) (PMAA), chitosan, 1,4,7-triacyclononane-1,4,7-triacetic acid (NOTA), NOTA-super paramagnetic iron oxide nanoparticles (SPION), or DTPA-SPION. Methods Examinations utilizing planar dynamic scintigraphy and hybrid imaging were performed using a commercially available single-photon emission computed tomography (SPECT)/computed tomography (CT) system. For SPION containing MBs, the biodistribution pattern of 99mTc-labeled NOTA-SPION and DTPA-SPION MBs was investigated and co-registered using fusion SPECT/CT and magnetic resonance imaging (MRI). Moreover, to evaluate the biodistribution, organs were removed and radioactivity was measured and calculated as percentage of injected dose. Results SPECT/CT and MRI showed that the distribution of 99mTc-labeled ligand-functionalized MBs varied with the type of ligand as well as with the presence of SPION. The highest uptake was observed in the lungs 1 h post injection of 99mTc-labeled DTPA and chitosan MBs, while a similar distribution to the lungs and the liver was seen after the administration of PMAA MBs. The highest counts of 99mTc-labeled NOTA-SPION and DTPA-SPION MBs were observed in the lungs, liver, and kidneys 1 h post injection. The highest counts were observed in the liver, spleen, and kidneys as confirmed by MRI 24 h post injection. Furthermore, the results obtained from organ measurements were in good agreement with those obtained from SPECT/CT. Conclusions In conclusion, microbubbles functionalized by different ligands can be labeled with radiotracers and utilized for SPECT/CT imaging, while the incorporation of SPION in MB shells enables imaging using MR. Our investigation revealed that biodistribution may be modified using different ligands. Furthermore, using a single contrast agent with fusion SPECT/CT/MR multimodal imaging enables visualization of functional and anatomical information in one image, thus improving the diagnostic benefit for patients. PMID:23442550
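Biodistribution in such studies is conventionally normalized as percentage of injected dose (%ID) per organ. A minimal sketch of that calculation follows, with decay correction back to the injection time; the 6.01 h half-life of 99mTc is a known constant, while the count values are purely illustrative.

```python
import math

def percent_injected_dose(organ_counts, injected_counts, t_hours,
                          half_life_h=6.01):
    """%ID for one organ, decay-correcting the organ measurement back to
    the time of injection (99mTc half-life ~6.01 h)."""
    decay_correction = math.exp(math.log(2) * t_hours / half_life_h)
    return 100.0 * organ_counts * decay_correction / injected_counts

# Illustrative numbers only: lung counts measured 1 h post injection.
print(percent_injected_dose(organ_counts=3.2e5, injected_counts=4.0e6,
                            t_hours=1.0))
```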
Linked 4-Way Multimodal Brain Differences in Schizophrenia in a Large Chinese Han Population.
Liu, Shengfeng; Wang, Haiying; Song, Ming; Lv, Luxian; Cui, Yue; Liu, Yong; Fan, Lingzhong; Zuo, Nianming; Xu, Kaibin; Du, Yuhui; Yu, Qingbao; Luo, Na; Qi, Shile; Yang, Jian; Xie, Sangma; Li, Jian; Chen, Jun; Chen, Yunchun; Wang, Huaning; Guo, Hua; Wan, Ping; Yang, Yongfeng; Li, Peng; Lu, Lin; Yan, Hao; Yan, Jun; Wang, Huiling; Zhang, Hongxing; Zhang, Dai; Calhoun, Vince D; Jiang, Tianzi; Sui, Jing
2018-04-20
Multimodal fusion has been regarded as a promising tool to discover covarying patterns across the multiple imaging types impaired in brain diseases such as schizophrenia (SZ). In this article, we aim to investigate the covarying abnormalities underlying SZ in a large Chinese Han population (307 SZs, 298 healthy controls [HCs]). Four types of magnetic resonance imaging (MRI) features, including regional homogeneity (ReHo) from resting-state functional MRI, gray matter volume (GM) from structural MRI, fractional anisotropy (FA) from diffusion MRI, and functional network connectivity (FNC) resulting from group independent component analysis, were jointly analyzed by a data-driven multivariate fusion method. Results suggest that a widely distributed network disruption appears in SZ patients, with synchronous changes in both functional and structural regions, especially the basal ganglia network, salience network (SAN), and the frontoparietal network. Such a multimodal co-alteration was also replicated in another independent Chinese sample (40 SZs, 66 HCs). Our results on auditory verbal hallucination (AVH) also provide evidence for the hypothesis that prefrontal hypoactivation and temporal hyperactivation in SZ may lead to failure of executive control and inhibition, which is relevant to AVH. In addition, impaired working memory performance was found to be associated with GM reduction and FA decrease in SZ in prefrontal and superior temporal areas, in both the discovery and replication datasets. In summary, by leveraging multiple imaging and clinical measures in one framework to observe the brain from multiple views, we can integrate multiple inferences about SZ from a large-scale population and offer unique perspectives regarding the missing links between brain function and structure that may not be achieved by separate unimodal analyses.
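The abstract does not spell out the multivariate method, so as a hedged illustration of the general idea of data-driven fusion, here is a joint-ICA-style sketch: z-score each modality's features per subject, stack them along the feature axis, and extract components whose subject loadings covary across modalities. All matrices are stand-ins.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_subj, n_feat = 200, 300
reho, gm, fa = (rng.random((n_subj, n_feat)) for _ in range(3))  # stand-ins

# Joint-ICA-style fusion: z-score each modality, stack along features, then
# decompose so each component has one loading per subject and one joint map
# spanning all modalities.
z = lambda m: (m - m.mean(0)) / m.std(0)
stacked = np.hstack([z(reho), z(gm), z(fa)])          # subjects x (3 * n_feat)
ica = FastICA(n_components=10, random_state=0, max_iter=1000)
loadings = ica.fit_transform(stacked)                 # subjects x components
joint_maps = ica.mixing_.T                            # components x stacked features

# Group differences (e.g., SZ vs. HC) would then be tested on `loadings`.
```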
Brock, Kristy K; Mutic, Sasa; McNutt, Todd R; Li, Hua; Kessler, Marc L
2017-07-01
Image registration and fusion algorithms exist in almost every software system that creates or uses images in radiotherapy. Most treatment planning systems support some form of image registration and fusion to allow the use of multimodality and time-series image data, and even anatomical atlases, to assist in target volume and normal tissue delineation. Treatment delivery systems perform registration and fusion between the planning images and the in-room images acquired during treatment to assist patient positioning. Advanced applications are beginning to support daily dose assessment and enable adaptive radiotherapy, using image registration and fusion to propagate contours and accumulate dose between image data taken over the course of therapy to provide up-to-date estimates of anatomical changes and delivered dose. This information aids in the detection of anatomical and functional changes that might elicit changes in the treatment plan or prescription. As the output of the image registration process is always used as the input of another process for planning or delivery, it is important to understand and communicate the uncertainty associated with the software in general and with the result of a specific registration. Unfortunately, there is no standard mathematical formalism to perform this for real-world situations where noise, distortion, and complex anatomical variations can occur. Validation of a software system's performance is also complicated by the lack of documentation available from commercial systems, leading to use of these systems in an undesirable 'black-box' fashion. In view of this situation and the central role that image registration and fusion play in treatment planning and delivery, the Therapy Physics Committee of the American Association of Physicists in Medicine commissioned Task Group 132 to review current approaches and solutions for image registration (both rigid and deformable) in radiotherapy and to provide recommendations for quality assurance and quality control of these clinical processes. © 2017 American Association of Physicists in Medicine.
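One common way to quantify and communicate the uncertainty of a specific registration is the target registration error at anatomical landmarks. A minimal sketch follows; the landmark coordinates and the rigid transform are illustrative assumptions, not TG-132 data.

```python
import numpy as np

def target_registration_error(fixed_pts, moving_pts, transform):
    """Mean distance (mm) between landmarks on the fixed image and the
    corresponding moving-image landmarks mapped through the registration."""
    mapped = transform(moving_pts)
    return float(np.linalg.norm(fixed_pts - mapped, axis=1).mean())

# Illustrative rigid transform (rotation R, translation t) from a registration.
theta = np.deg2rad(2.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.5, -0.8, 0.3])
rigid = lambda p: p @ R.T + t

fixed = np.array([[10.0, 22.0, 35.0], [48.0, 12.0, 30.0], [25.0, 40.0, 18.0]])
moving = (fixed - t) @ R        # perfectly corresponding landmark set
print(target_registration_error(fixed, moving, rigid))   # ~0 mm
```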
Kapfhammer, A; Winkens, T; Lesser, T; Reissig, A; Steinert, M; Freesmeyer, M
2015-01-01
To retrospectively evaluate the feasibility and value of CT-CT image fusion for assessing the shift of peripheral lung cancers with or without chest wall infiltration, comparing computed tomography acquisitions in shallow breathing (SB-CT) and deep-inspiration breath-hold (DIBH-CT) in patients undergoing FDG-PET/CT for lung cancer staging. Image fusion of SB-CT and DIBH-CT was performed with a multimodal workstation used for nuclear medicine fusion imaging. The distance of intrathoracic landmarks and the positional shift of tumours were measured using a semi-transparent overlay of both CT series. Statistical analyses were adjusted for confounders of tumour infiltration. Cutoff levels were calculated for the prediction of non-infiltration and infiltration. The lateral pleural recessus and diaphragm showed the largest respiratory excursions. Infiltrating lung cancers showed more restricted respiratory shifts than non-infiltrating tumours. A large respiratory tumour shift accurately predicted non-infiltration; however, the tumour shifts were small and variable, which limited the accuracy of prediction. This pilot fusion study proved feasible and allowed a simple analysis of the respiratory shifts of peripheral lung tumours using CT-CT image fusion in a PET/CT setting. The calculated cutoffs were useful in excluding chest wall infiltration but did not accurately predict tumour infiltration. This method can provide additional qualitative information in patients undergoing PET/CT whose lung cancers are in contact with the chest wall but show unclear CT evidence of infiltration, without the need for additional investigations. Considering the small sample size investigated, further studies are necessary to verify these results.
[Medical imaging in tumor precision medicine: opportunities and challenges].
Xu, Jingjing; Tan, Yanbin; Zhang, Minming
2017-05-25
Tumor precision medicine is an emerging approach for tumor diagnosis, treatment, and prevention that takes into account individual variability in environment, lifestyle, and genetic information. Tumor precision medicine builds on the medical imaging innovations developed during the past decades, including new hardware, new imaging agents, standardized protocols, image analysis, and multimodal imaging fusion technology. The development of automated and reproducible analysis algorithms has also made it possible to extract large amounts of information from image-based features. With the continuous development and mining of tumor clinical and imaging databases, radiogenomics, radiomics, and artificial intelligence have flourished. These new technological advances therefore bring new opportunities and challenges to the application of imaging in tumor precision medicine.
Chavan, Satishkumar S; Mahajan, Abhishek; Talbar, Sanjay N; Desai, Subhash; Thakur, Meenakshi; D'cruz, Anil
2017-02-01
Neurocysticercosis (NCC) is a parasitic infection caused by the tapeworm Taenia solium in its larval stage, which affects the central nervous system of the human body (a definitive host). It results in the formation of multiple lesions in the brain at different locations during its various stages. During the diagnosis of symptomatic patients, these lesions can be better visualized using a feature-based fusion of Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). This paper presents a novel approach to Multimodality Medical Image Fusion (MMIF) used for the analysis of lesions for diagnostic purposes and post-treatment review of NCC. The MMIF presented here is a technique for combining CT and MRI data of the same patient into a new slice using a Nonsubsampled Rotated Complex Wavelet Transform (NSRCxWT). The forward NSRCxWT is applied to both source modalities separately to extract the complementary and edge-related features. These features are then combined to form a composite spectral plane using average and maximum-value-selection fusion rules. The inverse transformation of this composite plane results in a new, visually better, and enriched fused image. The proposed technique is tested on pilot-study data sets of patients infected with NCC. The quality of these fused images is measured using objective and subjective evaluation metrics. Objective evaluation is performed by estimating fusion parameters such as entropy, fusion factor, image quality index, edge quality measure, and mean structural similarity index measure. The fused images are also evaluated for their visual quality using subjective analysis with the help of three expert radiologists. The experimental results on 43 image data sets from 17 patients are promising and superior when compared with state-of-the-art wavelet-based fusion algorithms. The proposed algorithm can be a part of a computer-aided detection and diagnosis (CADD) system that assists radiologists in clinical practice. Copyright © 2016 Elsevier Ltd. All rights reserved.
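The paper's transform is the NSRCxWT, which is not in common libraries; as a hedged stand-in, the same average/maximum fusion rules can be illustrated with an ordinary discrete wavelet transform via PyWavelets.

```python
import numpy as np
import pywt

def fuse_dwt(ct, mri, wavelet="db2"):
    """Fuse two coregistered slices: average the approximation coefficients,
    keep the per-pixel maximum-magnitude detail coefficients.
    (Stand-in for the paper's NSRCxWT, using the same fusion rules.)"""
    cA1, details1 = pywt.dwt2(ct.astype(float), wavelet)
    cA2, details2 = pywt.dwt2(mri.astype(float), wavelet)
    cA = (cA1 + cA2) / 2.0                           # average rule (low-pass)
    fused_details = tuple(
        np.where(np.abs(d1) >= np.abs(d2), d1, d2)   # max-abs rule (edges)
        for d1, d2 in zip(details1, details2)
    )
    return pywt.idwt2((cA, fused_details), wavelet)

ct = np.random.rand(128, 128)     # stand-ins for coregistered CT and MRI slices
mri = np.random.rand(128, 128)
fused = fuse_dwt(ct, mri)
```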
Fall Risk Assessment and Early-Warning for Toddler Behaviors at Home
Yang, Mau-Tsuen; Chuang, Min-Wen
2013-01-01
Accidental falls are the major cause of serious injuries in toddlers, with most of these falls happening at home. Instead of providing immediate fall detection based on short-term observations, this paper proposes an early-warning childcare system to monitor fall-prone behaviors of toddlers at home. Using 3D human skeleton tracking and floor plane detection based on depth images captured by a Kinect system, eight fall-prone behavioral modules for toddlers are developed and organized according to four essential criteria: posture, motion, balance, and altitude. The final fall risk assessment is generated by multi-modal fusion using either weighted-mean thresholding or support vector machine (SVM) classification. Optimizations are performed to determine the local parameters of each module and the global parameters of the multi-modal fusion. Experimental results show that the proposed system can assess fall risks and trigger alarms with an accuracy rate of 92% at a speed of 20 frames per second. PMID:24335727
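A minimal sketch of the two fusion options the abstract names, weighted-mean thresholding and SVM classification over the eight module outputs; the weights, threshold, and training data are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
scores = rng.random((300, 8))    # stand-in: 8 module scores per observation
labels = (scores.mean(axis=1) > 0.55).astype(int)   # stand-in risk labels

# Option 1: weighted-mean thresholding (uniform weights and threshold assumed).
weights = np.full(8, 1 / 8)
risk = scores @ weights
alarm_weighted = risk > 0.55

# Option 2: SVM classification over the same eight module scores.
svm = SVC(kernel="rbf").fit(scores, labels)
alarm_svm = svm.predict(scores).astype(bool)
```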
Multimodal Neuroimaging: Basic Concepts and Classification of Neuropsychiatric Diseases.
Tulay, Emine Elif; Metin, Barış; Tarhan, Nevzat; Arıkan, Mehmet Kemal
2018-06-01
Neuroimaging techniques are widely used in neuroscience to visualize neural activity, to improve our understanding of brain mechanisms, and to identify biomarkers, especially for psychiatric diseases; however, each neuroimaging technique has several limitations. These limitations led to the development of multimodal neuroimaging (MN), which combines data obtained from multiple neuroimaging techniques, such as electroencephalography and functional magnetic resonance imaging, and yields more detailed information about brain dynamics. There are several types of MN, including visual inspection, data integration, and data fusion. This literature review aimed to provide a brief summary and basic information about MN techniques (data fusion approaches in particular) and classification approaches. Data fusion approaches are generally categorized as asymmetric and symmetric. The present review focused exclusively on studies based on symmetric data fusion methods (data-driven methods), such as independent component analysis and principal component analysis. Machine learning techniques have recently been introduced for identifying diseases and biomarkers of disease. The machine learning technique most widely used by neuroscientists is classification, especially support vector machine classification. Several studies have differentiated patients with psychiatric diseases from healthy controls using combined datasets. The common conclusion among these studies is that the prediction of disease improves when combining data via MN techniques; however, a few challenges remain with MN, such as sample size. Perhaps in the future, N-way fusion can be used to combine multiple neuroimaging techniques or non-imaging predictors (e.g., cognitive ability) to overcome the limitations of MN.
Bridges, Robert L; Wiley, Chris R; Christian, John C; Strohm, Adam P
2007-06-01
Na(18)F, an early bone scintigraphy agent, is poised to reenter mainstream clinical imaging with the present generations of stand-alone PET and PET/CT hybrid scanners. (18)F PET scans promise improved imaging quality for both benign and malignant bone disease, with significantly improved sensitivity and specificity over conventional planar and SPECT bone scans. In this article, basic acquisition information will be presented along with examples of studies related to oncology, sports medicine, and general orthopedics. The use of image fusion of PET bone scans with CT and MRI will be demonstrated. The objectives of this article are to provide the reader with an understanding of the history of early bone scintigraphy in relation to Na(18)F scanning, a familiarity with basic imaging techniques for PET bone scanning, an appreciation of the extent of disease processes that can be imaged with PET bone scanning, an appreciation for the added value of multimodality image fusion with bone disease, and a recognition of the potential role PET bone scanning may play in clinical imaging.
Spectral embedding-based registration (SERg) for multimodal fusion of prostate histology and MRI
NASA Astrophysics Data System (ADS)
Hwuang, Eileen; Rusu, Mirabela; Karthigeyan, Sudha; Agner, Shannon C.; Sparks, Rachel; Shih, Natalie; Tomaszewski, John E.; Rosen, Mark; Feldman, Michael; Madabhushi, Anant
2014-03-01
Multi-modal image registration is needed to align medical images collected from different protocols or imaging sources, thereby allowing the mapping of complementary information between images. One challenge of multimodal image registration is that typical similarity measures rely on statistical correlations between image intensities to determine anatomical alignment. The use of alternate image representations could allow intensities to be mapped into a space or representation in which the multimodal images appear more similar, thus facilitating their co-registration. In this work, we present a spectral embedding based registration (SERg) method that uses non-linearly embedded representations, obtained from independent components of statistical texture maps of the original images, to facilitate multimodal image registration. Our methodology comprises the following main steps: 1) image-derived textural representation of the original images, 2) dimensionality reduction using independent component analysis (ICA), 3) spectral embedding to generate the alternate representations, and 4) image registration. The rationale behind our approach is that SERg yields embedded representations that can allow very different-looking images to appear more similar, thereby facilitating improved co-registration. Statistical texture features are derived from the image intensities and then reduced to a smaller set by using independent component analysis to remove redundant information. Spectral embedding generates a new representation by eigendecomposition, from which only the most important eigenvectors are selected. This helps to accentuate areas of salience based on modality-invariant structural information and therefore better identifies corresponding regions in both the template and target images. The spirit behind SERg is that image registration driven by these areas of salience and correspondence should improve alignment accuracy. In this work, SERg is implemented using Demons to allow the algorithm to more effectively register multimodal images. SERg is also tested within the free-form deformation framework driven by mutual information. Nine pairs of synthetic T1-weighted and T2-weighted brain MRI volumes were registered under the following conditions: five levels of noise (0%, 1%, 3%, 5%, and 7%) and two levels of bias field (20% and 40%), each with and without noise. We demonstrate that across all of these conditions, SERg yields a mean squared error that is 81.51% lower than that of Demons driven by MRI intensity alone. We also spatially align twenty-six ex vivo histology sections and in vivo prostate MRI in order to map the spatial extent of prostate cancer onto corresponding radiologic imaging. SERg performs better than intensity registration by decreasing the root mean squared distance of annotated landmarks in the prostate gland via both the Demons algorithm and mutual information-driven free-form deformation. In both synthetic and clinical experiments, the observed improvement in alignment of the template and target images suggests the utility of parametric eigenvector representations, and hence SERg, for multimodal image registration.
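Step 3 of the pipeline, spectral embedding, amounts to an eigendecomposition of a similarity graph built over the per-pixel feature vectors. A minimal sketch using a normalized graph Laplacian follows; the feature matrix, kernel width, and embedding dimension are assumptions.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import pdist, squareform

def spectral_embed(features, n_dims=3, sigma=1.0):
    """Embed per-pixel feature vectors via the leading non-trivial
    eigenvectors of the normalized graph Laplacian of a Gaussian affinity."""
    w = np.exp(-squareform(pdist(features, "sqeuclidean")) / (2 * sigma**2))
    d = w.sum(axis=1)
    lap = np.eye(len(w)) - w / np.sqrt(np.outer(d, d))  # normalized Laplacian
    vals, vecs = eigh(lap)                              # ascending eigenvalues
    return vecs[:, 1:n_dims + 1]    # skip the trivial constant eigenvector

# Stand-in: 400 pixels, each with a 6-dim texture/ICA feature vector.
feats = np.random.default_rng(0).random((400, 6))
embedding = spectral_embed(feats)   # alternate representation per pixel
```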
Chen, T N; Yin, X T; Li, X G; Zhao, J; Wang, L; Mu, N; Ma, K; Huo, K; Liu, D; Gao, B Y; Feng, H; Li, F
2018-05-08
Objective: To explore the clinical and teaching value of virtual reality technology in the preoperative planning and intraoperative guidance of gliomas located in the central sulcus region. Method: Ten patients with gliomas in the central sulcus region were scheduled for surgical treatment. The neuroimaging data, including CT, CTA, DSA, MRI, and fMRI, were imported into the 3dgo sczhry workstation for image fusion and 3D reconstruction. Spatial relationships between the lesions and the surrounding structures were obtained from the virtual reality images. These images were applied to operative approach design, operative process simulation, intraoperative auxiliary decision making, and the training of specialist physicians. Results: The intraoperative findings in the 10 patients were highly consistent with the preoperative virtual reality simulations. Preoperative 3D-reconstructed virtual reality images improved the feasibility of operative planning and operative accuracy. This technology not only showed advantages for protecting neurological function and resecting the lesion during surgery, but also improved the efficiency and effectiveness of specialist training by turning abstract comprehension into virtual reality. Conclusion: Virtual reality technology based on image fusion and 3D reconstruction is helpful in glioma resection for formulating the operative plan, improving operative safety, increasing the total resection rate, and facilitating the teaching and training of specialist physicians.
Vitali, Paolo; Nobili, Flavio; Raiteri, Umberto; Canfora, Michela; Rosa, Marco; Calvini, Piero; Girtler, Nicola; Regesta, Giovanni; Rodriguez, Guido
2004-01-15
This article describes the unusual case of a 60-year-old woman suffering from pure progressive aphemia. The fusion of multimodal neuroimaging (MRI, perfusion SPECT) implicated the right frontal lobe, especially the inferior frontal gyrus. This area also showed the greatest functional MRI activation during the performance of a covert phonemic fluency task. Results are discussed in terms of bihemispheric language representation. The fusion of three sets of neuroimages has aided in the interpretation of the patient's cognitive brain dysfunction.
NASA Astrophysics Data System (ADS)
Rababaah, Haroun; Shirkhodaie, Amir
2009-04-01
Rapidly advancing hardware technology, smart sensors, and sensor networks are advancing environment sensing. One major application of this technology is Large-Scale Surveillance Systems (LS3), especially for homeland security, battlefield intelligence, facility guarding, and other civilian applications. The efficient and effective deployment of LS3 requires addressing a number of aspects impacting the scalability of such systems. The scalability factors relate to: computation and memory utilization efficiency; communication bandwidth utilization; network topology (e.g., centralized, ad-hoc, hierarchical, or hybrid); network communication protocols and data-routing schemes; and local and global data/information fusion schemes for situational awareness. Although many models have been proposed to address one aspect or another of these issues, few have addressed the need for a multi-modality, multi-agent data/information fusion scheme with characteristics satisfying the requirements of current and future intelligent sensors and sensor networks. In this paper, we present a novel scalable fusion engine for multi-modality, multi-agent information fusion for LS3. The new fusion engine is based on a concept we call Energy Logic. Experimental results of this work, compared against a fuzzy-logic model, strongly supported the validity of the new model and suggest future directions for different levels of fusion and different applications.
Prada, Francesco; Del Bene, Massimiliano; Moiraghi, Alessandro; Casali, Cecilia; Legnani, Federico Giuseppe; Saladino, Andrea; Perin, Alessandro; Vetrano, Ignazio Gaspare; Mattei, Luca; Richetta, Carla; Saini, Marco; DiMeco, Francesco
2015-01-01
The main goal in meningioma surgery is to achieve complete tumor removal, when possible, while improving or preserving patient neurological function. Intraoperative imaging guidance is one fundamental tool for this achievement. In this regard, intraoperative ultrasound (ioUS) is a reliable solution for obtaining real-time information during surgery, and it has been applied to many different aspects of neurosurgery. In recent years, different ioUS modalities have been described: B-mode, fusion imaging with preoperatively acquired MRI, Doppler, contrast-enhanced ultrasound (CEUS), and elastosonography. In this paper, we present our US-based multimodal approach to meningioma surgery. We describe the most relevant ioUS modalities and their intraoperative application for obtaining precise and specific information regarding the lesion, enabling a tailored approach to meningioma surgery. For each modality, we review the literature, accompanied by a pictorial essay based on our routine use of ioUS for meningioma resection.
Sun, Yang; Stephens, Douglas N.; Park, Jesung; Sun, Yinghua; Marcu, Laura; Cannata, Jonathan M.; Shung, K. Kirk
2010-01-01
We report the development and validation of a multi-modal tissue diagnostic technology that combines three complementary techniques into one system: ultrasound backscatter microscopy (UBM), photoacoustic imaging (PAI), and time-resolved laser-induced fluorescence spectroscopy (TR-LIFS). UBM enables the reconstruction of the tissue microanatomy. PAI maps the optical absorption heterogeneity of the tissue associated with structural information and has the potential to provide functional imaging of the tissue. Examination of the UBM and PAI images allows for localization of regions of interest for TR-LIFS evaluation of the tissue composition. The hybrid probe consists of a single-element ring transducer with concentric fiber optics for multi-modal data acquisition. Validation and characterization of the multi-modal system, and coregistration of the ultrasonic, photoacoustic, and spectroscopic data, were conducted in a physical phantom with ultrasound scattering, optical absorption, and fluorescence properties. The UBM system with the 41 MHz ring transducer achieves axial and lateral resolutions of 30 and 65 μm, respectively. The PAI system, with 532 nm excitation light from an Nd:YAG laser, shows excellent contrast for the distribution of optical absorbers. The TR-LIFS system records the fluorescence decay with a time resolution of ~300 ps and a high sensitivity in the nM concentration range. A biological phantom constructed with different types of tissue (tendon and fat) was used to demonstrate the complementary information provided by the three modalities. Fluorescence spectra and lifetimes were compared to differentiate the chemical composition of tissues at the regions of interest determined by the coregistered high-resolution UBM and PAI images. Current results demonstrate that the fusion of these techniques enables sequential detection of functional, morphological, and compositional features of biological tissue, suggesting potential applications in the diagnosis of tumors and atherosclerotic plaques. PMID:21894259
Heideklang, René; Shokouhi, Parisa
2016-01-01
This article focuses on the fusion of flaw indications from multi-sensor nondestructive materials testing. Because each testing method makes use of a different physical principle, a multi-method approach has the potential to effectively differentiate actual defect indications from the many false alarms, thus enhancing detection reliability. In this study, we propose a new technique for aggregating scattered two- or three-dimensional sensory data. Using a density-based approach, the proposed method explicitly addresses localization uncertainties such as registration errors. This feature marks one of the major advantages of this approach over pixel-based image fusion techniques. We provide guidelines on how to set all the key parameters and demonstrate the technique's robustness. Finally, we apply our fusion approach to experimental data and demonstrate its capability to locate small defects by substantially reducing false alarms under conditions where no single-sensor method is adequate. PMID:26784200
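A hedged sketch of the density-based idea: pool the scattered indications from all methods and estimate their spatial density, so defects corroborated by several sensors show up as peaks while isolated false alarms do not. Kernel density estimation stands in for the authors' model, with the bandwidth playing the role of the localization uncertainty.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Stand-in 2D flaw indications (x, y): three NDT methods each report ten
# indications near a true defect at (5, 5), plus thirty scattered false alarms.
defect_hits = rng.normal(5.0, 0.15, (30, 2))
false_alarms = rng.uniform(0.0, 10.0, (30, 2))
points = np.vstack([defect_hits, false_alarms])

# Estimate the spatial density of all pooled indications; corroborated
# defects appear as high-density peaks.
kde = gaussian_kde(points.T, bw_method=0.15)  # bandwidth ~ localization uncertainty
xx, yy = np.mgrid[0:10:100j, 0:10:100j]
density = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)
peak = np.unravel_index(density.argmax(), density.shape)
print(xx[peak], yy[peak])                     # near (5, 5)
```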
A multimodal 3D framework for fire characteristics estimation
NASA Astrophysics Data System (ADS)
Toulouse, T.; Rossi, L.; Akhloufi, M. A.; Pieri, A.; Maldague, X.
2018-02-01
In the last decade we have witnessed an increasing interest in using computer vision and image processing in forest fire research. Image processing techniques have been successfully used in different fire analysis areas such as early detection, monitoring, modeling, and fire front characteristics estimation. While the majority of this work deals with the use of 2D visible-spectrum images, recent work has introduced the use of 3D vision in this field. This work proposes a new multimodal vision framework permitting the extraction of the three-dimensional geometrical characteristics of fires captured by multiple 3D vision systems. The 3D system is a multispectral stereo system operating in both the visible and near-infrared (NIR) spectral bands. The framework supports the use of multiple stereo pairs positioned so as to capture complementary views of the fire front during its propagation. Multimodal registration is conducted using the captured views in order to build a complete 3D model of the fire front. The registration process is achieved using multisensory fusion based on visual data (2D and NIR images), GPS positions, and IMU inertial data. Experiments were conducted outdoors in order to show the performance of the proposed framework. The obtained results are promising and show the potential of using the proposed framework in operational scenarios for wildland fire research and as a decision-management system in firefighting.
Multimodal microscopy and the stepwise multi-photon activation fluorescence of melanin
NASA Astrophysics Data System (ADS)
Lai, Zhenhua
The author's work covers three topics, each introduced in turn: multimodal microscopy, stepwise multi-photon activation fluorescence (SMPAF) of melanin, and customized-profile lenses (CPL) for on-axis laser scanners. A multimodal microscope provides the ability to image samples with multiple modalities on the same stage, which incorporates the benefits of all modalities. The multimodal microscopes developed in this dissertation are the Keck 3D fusion multimodal microscope 2.0 (3DFM 2.0), upgraded from the old 3DFM with improved performance and flexibility, and the multimodal microscope for targeting small particles (the "Target" system). The control systems developed for both microscopes are low-cost and easy to build, with all components off the shelf. They have not only significantly decreased the complexity and size of the microscopes but also increased pixel resolution and flexibility. The SMPAF of melanin, activated by a continuous-wave (CW) near-infrared (NIR) laser, has potential applications as a low-cost and reliable method of detecting melanin. The photophysics of melanin SMPAF has been studied by theoretical analysis of the excitation process and investigation of the spectra, activation threshold, and photon-number absorption of melanin SMPAF. SMPAF images of melanin in mouse hair and skin, mouse melanoma, and human black and white hairs are compared with images taken by conventional multi-photon fluorescence microscopy (MPFM) and confocal reflectance microscopy (CRM). SMPAF images significantly increase specificity and demonstrate the potential to increase sensitivity for melanin detection compared to MPFM and CRM images. Employing melanin SMPAF imaging to detect melanin inside human skin in vivo has been demonstrated, proving the effectiveness of SMPAF-based melanin detection for medical purposes. Selective melanin ablation with micrometer resolution has been presented using the Target system. Compared to traditional selective photothermolysis, this method demonstrates higher precision, higher specificity, and deeper penetration. Therefore, SMPAF-guided selective ablation of melanin is a promising tool for removing melanin for both medical and cosmetic purposes. Three CPLs have been designed: for low-cost linear-motion scanners, low-cost fast-spinning scanners, and high-precision fast-spinning scanners. Each design has been tailored to industrial manufacturing capabilities and market demands.
Hatt, Charles R.; Jain, Ameet K.; Parthasarathy, Vijay; Lang, Andrew; Raval, Amish N.
2014-01-01
Myocardial infarction (MI) is one of the leading causes of death in the world. Small animal studies have shown that stem-cell therapy offers dramatic functional improvement post-MI. An endomyocardial catheter injection approach to therapeutic agent delivery has been proposed to improve efficacy through increased cell retention. Accurate targeting is critical for reaching areas of greatest therapeutic potential while avoiding a life-threatening myocardial perforation. Multimodal image fusion has been proposed as a way to improve these procedures by augmenting traditional intra-operative imaging modalities with high resolution pre-procedural images. Previous approaches have suffered from a lack of real-time tissue imaging and dependence on X-ray imaging to track devices, leading to increased ionizing radiation dose. In this paper, we present a new image fusion system for catheter-based targeted delivery of therapeutic agents. The system registers real-time 3D echocardiography, magnetic resonance, X-ray, and electromagnetic sensor tracking within a single flexible framework. All system calibrations and registrations were validated and found to have target registration errors less than 5 mm in the worst case. Injection accuracy was validated in a motion enabled cardiac injection phantom, where targeting accuracy ranged from 0.57 to 3.81 mm. Clinical feasibility was demonstrated with in-vivo swine experiments, where injections were successfully made into targeted regions of the heart. PMID:23561056
NASA Astrophysics Data System (ADS)
Poinsot, Audrey; Yang, Fan; Brost, Vincent
2011-02-01
Including multiple sources of information in personal identity recognition and verification offers the opportunity to greatly improve performance. We propose a contactless biometric system that combines two modalities: palmprint and face. Hardware implementations are proposed on Texas Instruments Digital Signal Processor and Xilinx Field-Programmable Gate Array (FPGA) platforms. The algorithmic chain consists of preprocessing (which includes palm extraction from hand images), Gabor feature extraction, comparison by Hamming distance, and score fusion. Fusion possibilities are discussed and tested first using a bimodal database of 130 subjects that we designed (the uB database), and then two common public biometric databases (AR for face and PolyU for palmprint). High performance has been obtained for both recognition and verification purposes: a recognition rate of 97.49% with the AR-PolyU database and an equal error rate of 1.10% on the uB database using only two training samples per subject. Hardware results demonstrate that preprocessing can easily be performed during the acquisition phase, and multimodal biometric recognition can be processed almost instantly (0.4 ms on FPGA). We show the feasibility of a robust and efficient multimodal hardware biometric system that offers several advantages, such as user-friendliness and flexibility.
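A minimal sketch of the matching and fusion stages the abstract describes: Hamming distance between binarized Gabor codes per modality, then a weighted-sum score fusion. The bit length, error rates, and fusion weight are assumptions, not the paper's settings.

```python
import numpy as np

def hamming_score(code_a, code_b):
    """Dissimilarity of two binary feature codes (fraction of differing bits)."""
    return np.count_nonzero(code_a != code_b) / code_a.size

def fused_score(palm_probe, palm_ref, face_probe, face_ref, w_palm=0.5):
    """Weighted-sum score fusion of the two modality distances
    (the weight is an assumption; the paper tunes its own rule)."""
    s_palm = hamming_score(palm_probe, palm_ref)
    s_face = hamming_score(face_probe, face_ref)
    return w_palm * s_palm + (1.0 - w_palm) * s_face

rng = np.random.default_rng(0)
# Stand-ins for binarized Gabor responses of palmprint and face images.
palm_ref = rng.integers(0, 2, 1024, dtype=np.uint8)
face_ref = rng.integers(0, 2, 1024, dtype=np.uint8)
palm_probe = palm_ref ^ (rng.random(1024) < 0.05)   # genuine: few bits flipped
face_probe = face_ref ^ (rng.random(1024) < 0.05)
print(fused_score(palm_probe, palm_ref, face_probe, face_ref))  # low = accept
```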
A biometric identification system based on eigenpalm and eigenfinger features.
Ribaric, Slobodan; Fratric, Ivan
2005-11-01
This paper presents a multimodal biometric identification system based on the features of the human hand. We describe a new biometric approach to personal identification using eigenfinger and eigenpalm features, with fusion applied at the matching-score level. The identification process can be divided into the following phases: capturing the image; preprocessing; extracting and normalizing the palm and strip-like finger subimages; extracting the eigenpalm and eigenfinger features based on the K-L transform; matching and fusion; and, finally, a decision based on the (k, l)-NN classifier and thresholding. The system was tested on a database of 237 people (1,820 hand images). The experimental results showed the effectiveness of the system in terms of the recognition rate (100 percent), the equal error rate (EER = 0.58 percent), and the total error rate (TER = 0.72 percent).
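The K-L transform underlying the eigenpalm/eigenfinger features is equivalent to PCA. A minimal sketch of extracting eigenpalm features and matching at the score level follows; image sizes, component count, and data are stand-ins.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
palms = rng.random((237, 64 * 64))   # stand-in: one palm subimage per person

# "Eigenpalms" are the principal components of the training palm images;
# each palm is represented by its projection onto them (K-L transform).
pca = PCA(n_components=40).fit(palms)
gallery = pca.transform(palms)

def match_scores(probe_img, gallery_feats):
    """Matching scores = distances in eigenpalm space (lower = better match);
    a (k, l)-NN decision would then be applied to the nearest entries."""
    probe = pca.transform(probe_img.reshape(1, -1))
    return np.linalg.norm(gallery_feats - probe, axis=1)

scores = match_scores(palms[17] + rng.normal(0, 0.01, 64 * 64), gallery)
print(scores.argmin())   # 17: the nearest identity in eigenpalm space
```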
Probabilistic sparse matching for robust 3D/3D fusion in minimally invasive surgery.
Neumann, Dominik; Grbic, Sasa; John, Matthias; Navab, Nassir; Hornegger, Joachim; Ionasec, Razvan
2015-01-01
Classical surgery is being overtaken by minimally invasive and transcatheter procedures. As there is no direct view or access to the affected anatomy, advanced imaging techniques such as 3D C-arm computed tomography (CT) and C-arm fluoroscopy are routinely used in clinical practice for intraoperative guidance. However, due to constraints regarding acquisition time and device configuration, intraoperative modalities have limited soft tissue image quality and reliable assessment of the cardiac anatomy typically requires contrast agent, which is harmful to the patient and requires complex acquisition protocols. We propose a probabilistic sparse matching approach to fuse high-quality preoperative CT images and nongated, noncontrast intraoperative C-arm CT images by utilizing robust machine learning and numerical optimization techniques. Thus, high-quality patient-specific models can be extracted from the preoperative CT and mapped to the intraoperative imaging environment to guide minimally invasive procedures. Extensive quantitative experiments on 95 clinical datasets demonstrate that our model-based fusion approach has an average execution time of 1.56 s, while the accuracy of 5.48 mm between the anchor anatomy in both images lies within expert user confidence intervals. In direct comparison with image-to-image registration based on an open-source state-of-the-art medical imaging library and a recently proposed quasi-global, knowledge-driven multi-modal fusion approach for thoracic-abdominal images, our model-based method exhibits superior performance in terms of registration accuracy and robustness with respect to both target anatomy and anchor anatomy alignment errors.
Quality dependent fusion of intramodal and multimodal biometric experts
NASA Astrophysics Data System (ADS)
Kittler, J.; Poh, N.; Fatukasi, O.; Messer, K.; Kryszczuk, K.; Richiardi, J.; Drygajlo, A.
2007-04-01
We address the problem of score-level fusion of intramodal and multimodal experts in the context of biometric identity verification. We investigate the merits of confidence-based weighting of component experts. In contrast to the conventional approach, where confidence values are derived from scores, we instead use raw measures of biometric data quality to control the influence of each expert on the final fused score. We show that quality-based fusion gives better performance than quality-free fusion. The use of quality-weighted scores as features in the definition of the fusion functions leads to further improvements. We demonstrate that the achievable performance gain is also affected by the choice of fusion architecture. The evaluation of the proposed methodology involves six face experts and one speech verification expert. It is carried out on the XM2VTS database.
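A hedged sketch of the core idea: weight each expert's score by a raw quality measure of its biometric sample rather than by a score-derived confidence. The normalization and weighting rule here are assumptions; the paper learns its fusion functions.

```python
import numpy as np

def quality_weighted_fusion(scores, qualities):
    """Fuse expert scores with weights proportional to per-sample quality.
    scores: one similarity score per expert; qualities: raw quality measures
    (e.g., face image sharpness, speech SNR), higher = more reliable."""
    w = np.asarray(qualities, dtype=float)
    w /= w.sum()
    return float(np.dot(w, scores))

# Seven experts (six face, one speech) scoring one claimed identity.
scores = np.array([0.82, 0.78, 0.91, 0.60, 0.85, 0.74, 0.40])
qualities = np.array([0.9, 0.8, 0.95, 0.3, 0.85, 0.7, 0.2])  # poor samples down-weighted
print(quality_weighted_fusion(scores, qualities))
```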
Beyond RGB: Very high resolution urban remote sensing with multimodal deep networks
NASA Astrophysics Data System (ADS)
Audebert, Nicolas; Le Saux, Bertrand; Lefèvre, Sébastien
2018-06-01
In this work, we investigate various methods to deal with semantic labeling of very high resolution multi-modal remote sensing data. In particular, we study how deep fully convolutional networks can be adapted to deal with multi-modal and multi-scale remote sensing data for semantic labeling. Our contributions are threefold: (a) we present an efficient multi-scale approach to leverage both a large spatial context and the high resolution data, (b) we investigate early and late fusion of Lidar and multispectral data, (c) we validate our methods on two public datasets with state-of-the-art results. Our results indicate that late fusion makes it possible to recover errors stemming from ambiguous data, while early fusion allows for better joint feature learning, but at the cost of higher sensitivity to missing data.
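A minimal numpy sketch of the two fusion strategies the paper compares, with a toy per-pixel classifier standing in for the deep network; all shapes and the classifier itself are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 64, 64
optical = rng.random((h, w, 4))   # stand-in multispectral bands
lidar = rng.random((h, w, 1))     # stand-in Lidar-derived height channel

def toy_classifier(x, n_classes=3, seed=1):
    """Stand-in for a deep network: fixed random linear map + softmax per pixel."""
    proj = np.random.default_rng(seed).normal(size=(x.shape[-1], n_classes))
    logits = x @ proj
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Early fusion: concatenate modalities channel-wise so joint features are
# learned from the stacked input.
early = toy_classifier(np.concatenate([optical, lidar], axis=-1))

# Late fusion: one classifier per modality, predictions averaged afterwards,
# letting one stream recover errors stemming from ambiguity in the other.
late = (toy_classifier(optical) + toy_classifier(lidar, seed=2)) / 2.0
labels = late.argmax(axis=-1)     # per-pixel semantic labels (toy)
```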
Guo, Lu; Wang, Ping; Sun, Ranran; Yang, Chengwen; Zhang, Ning; Guo, Yu; Feng, Yuanming
2018-02-19
Diffusion and perfusion magnetic resonance (MR) images can provide functional information about tumours and enable more sensitive detection of tumour extent. We aimed to develop a fuzzy feature fusion method for auto-segmentation of gliomas in radiotherapy planning using multi-parametric functional MR images, including apparent diffusion coefficient (ADC), fractional anisotropy (FA), and relative cerebral blood volume (rCBV). For each functional modality, one histogram-based fuzzy model was created to transform the image volume into a fuzzy feature space. Based on the fuzzy fusion of the three fuzzy feature spaces, regions with a high possibility of belonging to tumour were generated automatically. The auto-segmentations of tumour in structural MR images were added to the final auto-segmented gross tumour volume (GTV). For evaluation, one radiation oncologist delineated GTVs for nine patients with all modalities. Comparisons between manually delineated and auto-segmented GTVs showed that the mean volume difference was 8.69% (±5.62%); the mean Dice's similarity coefficient (DSC) was 0.88 (±0.02); and the mean sensitivity and specificity of auto-segmentation were 0.87 (±0.04) and 0.98 (±0.01), respectively. High accuracy and efficiency can be achieved with the new method, which shows the potential of utilizing functional multi-parametric MR images for target definition in precision radiation treatment planning for patients with gliomas.
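A hedged sketch of the two computational pieces the abstract names: a histogram-based fuzzy membership per modality fused across the ADC/FA/rCBV maps, and the Dice similarity coefficient used for evaluation. The membership model, the minimum-as-fuzzy-AND rule, and the direction of each modality's contribution are assumptions.

```python
import numpy as np

def fuzzy_membership(vol, n_bins=64):
    """Map voxel intensities to a [0, 1] 'tumour-likeness' via the normalized
    cumulative histogram (a simple stand-in for the paper's fuzzy model)."""
    hist, edges = np.histogram(vol, bins=n_bins)
    cdf = np.cumsum(hist) / hist.sum()
    idx = np.clip(np.digitize(vol, edges[1:-1]), 0, n_bins - 1)
    return cdf[idx]

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

rng = np.random.default_rng(0)
adc, fa, rcbv = (rng.random((32, 32, 32)) for _ in range(3))  # stand-in maps

# Fuse the three fuzzy feature spaces (minimum = fuzzy AND, an assumption)
# and threshold to obtain the candidate tumour region.
fused = np.minimum.reduce([fuzzy_membership(adc),
                           1 - fuzzy_membership(fa),   # low FA ~ tumour (assumption)
                           fuzzy_membership(rcbv)])
auto_gtv = fused > 0.5
manual_gtv = rng.random((32, 32, 32)) > 0.5             # stand-in manual GTV
print(dice(auto_gtv, manual_gtv))
```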
Multimodal biometric approach for cancelable face template generation
NASA Astrophysics Data System (ADS)
Paul, Padma Polash; Gavrilova, Marina
2012-06-01
Due to the rapid growth of biometric technology, template protection has become crucial to secure the integrity of biometric security systems and prevent unauthorized access. Cancelable biometrics is emerging as one of the best solutions for securing biometric identification and verification systems. We present a novel, robust cancelable template generation technique that takes advantage of multimodal biometrics using feature-level fusion. Feature-level fusion of different facial features is applied to generate the cancelable template. An algorithm based on multi-fold random projection and a fuzzy communication scheme is used for this purpose. In cancelable template generation, one of the main difficulties is preserving the interclass variance of the features. We have found that interclass variations of the features that are lost during multi-fold random projection can be recovered by fusing different feature subsets and projecting into a new feature domain. Applying the multimodal technique at the feature level, we enhance interclass variability and hence improve the performance of the system. We have tested the system with classifier fusion over different feature subsets and different cancelable template fusions. Experiments have shown that the cancelable template improves the performance of the biometric system compared with the original template.
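A hedged sketch of the revocable part of such a scheme: project the fused feature vector through a secret, user-specific random matrix, so a compromised template can be cancelled by issuing a new key. The matrix size and the Gaussian random projection are assumptions standing in for the paper's multi-fold scheme.

```python
import numpy as np

def cancelable_template(fused_features, user_key, out_dim=128):
    """Project the fused feature vector with a user-specific random matrix.
    Revocation = issuing a new user_key, which yields an unlinkable template."""
    rng = np.random.default_rng(user_key)      # secret per-user seed
    proj = rng.normal(0.0, 1.0 / np.sqrt(out_dim),
                      (out_dim, fused_features.size))
    return proj @ fused_features

rng = np.random.default_rng(0)
face_a = rng.random(256)                   # stand-in feature subsets of one face
face_b = rng.random(256)
fused = np.concatenate([face_a, face_b])   # feature-level fusion

t1 = cancelable_template(fused, user_key=42)
t2 = cancelable_template(fused, user_key=43)   # re-issued after compromise
print(np.corrcoef(t1, t2)[0, 1])               # near 0: unlinkable templates
```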
Feature and Score Fusion Based Multiple Classifier Selection for Iris Recognition
Islam, Md. Rabiul
2014-01-01
The aim of this work is to propose a new feature and score fusion based iris recognition approach where voting method on Multiple Classifier Selection technique has been applied. Four Discrete Hidden Markov Model classifiers output, that is, left iris based unimodal system, right iris based unimodal system, left-right iris feature fusion based multimodal system, and left-right iris likelihood ratio score fusion based multimodal system, is combined using voting method to achieve the final recognition result. CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, recognition accuracy of the proposed system has been compared with existing N hamming distance score fusion approach proposed by Ma et al., log-likelihood ratio score fusion approach proposed by Schmid et al., and single level feature fusion approach proposed by Hollingsworth et al. PMID:25114676
Dual tracer imaging of SPECT and PET probes in living mice using a sequential protocol
Chapman, Sarah E; Diener, Justin M; Sasser, Todd A; Correcher, Carlos; González, Antonio J; Avermaete, Tony Van; Leevy, W Matthew
2012-01-01
Over the past 20 years, multimodal imaging strategies have motivated the fusion of Positron Emission Tomography (PET) or Single Photon Emission Computed Tomography (SPECT) scans with an X-ray computed tomography (CT) image to provide anatomical information, as well as a framework with which molecular and functional images may be co-registered. Recently, pre-clinical nuclear imaging technology has evolved to capture multiple SPECT or multiple PET tracers to further enhance the information content gathered within an imaging experiment. However, the use of SPECT and PET probes together, in the same animal, has remained a challenge. Here we describe a straightforward method using an integrated trimodal imaging system and a sequential dosing/acquisition protocol to achieve dual-tracer imaging with 99mTc and 18F isotopes, along with anatomical CT, in an individual specimen. Dosing and imaging are completed so that minimal animal manipulations are required, full trimodal fusion is conserved, and tracer crosstalk, including down-scatter of the PET tracer in SPECT mode, is avoided. This technique will enhance the ability of preclinical researchers to detect multiple disease targets and perform functional, molecular, and anatomical imaging of individual specimens, increasing the information content gathered within longitudinal in vivo studies. PMID:23145357
Appearance-based human gesture recognition using multimodal features for human computer interaction
NASA Astrophysics Data System (ADS)
Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun
2011-03-01
The use of gesture as a natural interface plays a vitally important role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso movement, to convey meaning. So far, most previous work in the field of gesture recognition has focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework, which combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions conveying neutral, negative, and positive meanings, drawn from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting the different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level: weighted decisions from the single modalities are fused in a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results showed that facial analysis improves hand gesture recognition and that decision-level fusion performs better than feature-level fusion.
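The two fusion strategies contrasted above reduce to a few lines of code; in this sketch the weights, feature shapes, and score values are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def early_fusion(face_feats, hand_feats, w_face=0.5, w_hand=0.5):
    """Feature-level fusion: weight each feature group, then concatenate.
    A discriminative projection (e.g. LDA) would be applied to the result."""
    return np.concatenate([w_face * face_feats, w_hand * hand_feats], axis=-1)

def late_fusion(face_scores, hand_scores, w_face=0.4, w_hand=0.6):
    """Decision-level fusion: weighted sum of per-class scores, then argmax."""
    return int(np.argmax(w_face * face_scores + w_hand * hand_scores))

# Hypothetical per-class scores from the two modalities for one frame
print(late_fusion(np.array([0.2, 0.5, 0.3]), np.array([0.1, 0.3, 0.6])))  # -> 2
```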
Meyer, C R; Boes, J L; Kim, B; Bland, P H; Zasadny, K R; Kison, P V; Koral, K; Frey, K A; Wahl, R L
1997-04-01
This paper applies and evaluates an automatic mutual information-based registration algorithm across a broad spectrum of multimodal volume data sets. The algorithm requires little or no pre-processing, minimal user input, and easily implements either affine (i.e., linear) or thin-plate spline (TPS) warped registrations. We have evaluated the algorithm in phantom studies, as well as in selected cases where few other algorithms could perform as well, if at all, to demonstrate the value of this new method. Pairs of multimodal gray-scale volume data sets were registered by iteratively changing registration parameters to maximize mutual information. Quantitative registration errors were assessed in registrations of a thorax phantom using PET/CT and in the National Library of Medicine's Visible Male using MRI T2-/T1-weighted acquisitions. Registrations of diverse clinical data sets were demonstrated, including rotate-translate mapping of PET/MRI brain scans with significant missing data, full affine mapping of thoracic PET/CT, and rotate-translate mapping of abdominal SPECT/CT. A five-point TPS-warped registration of thoracic PET/CT is also demonstrated. The registration algorithm converged in times ranging between 3.5 and 31 min for affine clinical registrations and 57 min for TPS warping. Mean error vector lengths for rotate-translate registrations were measured to be subvoxel in phantoms. More importantly, the rotate-translate algorithm performs well even with missing data. The demonstrated clinical fusions are qualitatively excellent at all levels. We conclude that such automatic, rapid, robust algorithms significantly increase the likelihood that multimodality registrations will be routinely used to aid clinical diagnosis and post-therapeutic assessment in the near future.
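The criterion maximized here, mutual information estimated from a joint intensity histogram, can be written compactly; a minimal sketch (the bin count is arbitrary, and the optimization loop over transform parameters is omitted):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) between two co-registered gray-scale
    volumes, estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)      # marginal of a
    py = pxy.sum(axis=0, keepdims=True)      # marginal of b
    nz = pxy > 0                             # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

A registration loop would repeatedly resample one volume under candidate transform parameters and keep the parameters that maximize this value.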
Levin-Schwartz, Yuri; Song, Yang; Schreier, Peter J.; Calhoun, Vince D.; Adalı, Tülay
2016-01-01
Due to their data-driven nature, multivariate methods such as canonical correlation analysis (CCA) have proven very useful for fusion of multimodal neurological data. However, being able to determine the degree of similarity between datasets and appropriate order selection are crucial to the success of such techniques. The standard methods for calculating the order of multimodal data focus only on sources with the greatest individual energy and ignore relations across datasets. Additionally, these techniques as well as the most widely-used methods for determining the degree of similarity between datasets assume sufficient sample support and are not effective in the sample-poor regime. In this paper, we propose to jointly estimate the degree of similarity between datasets and their order when few samples are present using principal component analysis and canonical correlation analysis (PCA-CCA). By considering these two problems simultaneously, we are able to minimize the assumptions placed on the data and achieve superior performance in the sample-poor regime compared to traditional techniques. We apply PCA-CCA to the pairwise combinations of functional magnetic resonance imaging (fMRI), structural magnetic resonance imaging (sMRI), and electroencephalogram (EEG) data drawn from patients with schizophrenia and healthy controls while performing an auditory oddball task. The PCA-CCA results indicate that the fMRI and sMRI datasets are the most similar, whereas the sMRI and EEG datasets share the least similarity. We also demonstrate that the degree of similarity obtained by PCA-CCA is highly predictive of the degree of significance found for components generated using CCA. PMID:27039696
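A rough sketch of the PCA-then-CCA idea: reduce each modality to a small number of principal components when samples are scarce, then read the degree of similarity off the canonical correlations. Choosing r is exactly the order-selection problem the paper addresses jointly; this sketch simply takes it as an input.

```python
import numpy as np

def pca_reduce(X, r):
    """Project data X (samples x variables) onto its top-r principal components."""
    X = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r] * s[:r]

def canonical_correlations(X, Y):
    """Canonical correlations between two datasets via QR orthonormalization:
    the singular values of Qx.T @ Qy are the canonical correlations."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)

# e.g. rho = canonical_correlations(pca_reduce(fmri_feats, 5), pca_reduce(smri_feats, 5))
```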
NASA Astrophysics Data System (ADS)
Le, Minh Hung; Chen, Jingyu; Wang, Liang; Wang, Zhiwei; Liu, Wenyu; Cheng, Kwang-Ting (Tim); Yang, Xin
2017-08-01
Automated methods for prostate cancer (PCa) diagnosis in multi-parametric magnetic resonance imaging (MP-MRI) are critical for alleviating requirements for interpretation of radiographs while helping to improve diagnostic accuracy (Artan et al 2010 IEEE Trans. Image Process. 19 2444-55, Litjens et al 2014 IEEE Trans. Med. Imaging 33 1083-92, Liu et al 2013 SPIE Medical Imaging (International Society for Optics and Photonics) p 86701G, Moradi et al 2012 J. Magn. Reson. Imaging 35 1403-13, Niaf et al 2014 IEEE Trans. Image Process. 23 979-91, Niaf et al 2012 Phys. Med. Biol. 57 3833, Peng et al 2013a SPIE Medical Imaging (International Society for Optics and Photonics) p 86701H, Peng et al 2013b Radiology 267 787-96, Wang et al 2014 BioMed. Res. Int. 2014). This paper presents an automated method based on multimodal convolutional neural networks (CNNs) for two PCa diagnostic tasks: (1) distinguishing between cancerous and noncancerous tissues and (2) distinguishing between clinically significant (CS) and indolent PCa. Specifically, our multimodal CNNs effectively fuse apparent diffusion coefficients (ADCs) and T2-weighted MP-MRI images (T2WIs). To effectively fuse ADCs and T2WIs we design a new similarity loss function to enforce consistent features being extracted from both ADCs and T2WIs. The similarity loss is combined with the conventional classification loss functions and integrated into the back-propagation procedure of CNN training. The similarity loss enables better fusion results than existing methods as the feature learning processes of both modalities are mutually guided, jointly facilitating the CNNs to ‘see’ the true visual patterns of PCa. The classification results of the multimodal CNNs are further combined with the results based on handcrafted features using a support vector machine classifier. To achieve a satisfactory accuracy for clinical use, we comprehensively investigate three critical factors which could greatly affect the performance of our multimodal CNNs but have not been carefully studied previously. (1) Given limited training data, how can these be augmented in sufficient numbers and variety for fine-tuning deep CNN networks for PCa diagnosis? (2) How can multimodal MP-MRI information be effectively combined in CNNs? (3) What is the impact of different CNN architectures on the accuracy of PCa diagnosis? Experimental results on extensive clinical data from 364 patients with a total of 463 PCa lesions and 450 identified noncancerous image patches demonstrate that our system can achieve a sensitivity of 89.85% and a specificity of 95.83% for distinguishing cancer from noncancerous tissues and a sensitivity of 100% and a specificity of 76.92% for distinguishing indolent PCa from CS PCa. This result is significantly superior to the state-of-the-art method relying on handcrafted features.
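The cross-modal similarity loss described above can be sketched as an L2 penalty between the two branches' embeddings; the plain mean-squared form and the weighting lam are assumptions for illustration, since the abstract does not give the exact formula.

```python
import numpy as np

def similarity_loss(f_adc, f_t2w):
    """Mean squared distance between the ADC-branch and T2WI-branch embeddings;
    driving this toward zero encourages consistent cross-modal features."""
    return float(np.mean((f_adc - f_t2w) ** 2))

def total_loss(cls_loss_adc, cls_loss_t2w, f_adc, f_t2w, lam=0.1):
    # lam (assumed) trades off classification accuracy against consistency
    return cls_loss_adc + cls_loss_t2w + lam * similarity_loss(f_adc, f_t2w)
```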
Evaluation of GMI and PMI diffeomorphic-based demons algorithms for aligning PET and CT images
Yang, Juan; Wang, Hongjun; Zhang, You; Yin, Yong
2015-01-01
Fusion of anatomic information in computed tomography (CT) and functional information in 18F-FDG positron emission tomography (PET) is crucial for accurate differentiation of tumor from benign masses, designing radiotherapy treatment plans, and staging of cancer. Although current PET and CT images can be acquired from a combined 18F-FDG PET/CT scanner, the two acquisitions are scanned separately and take a long time, which may induce potential global and local positional errors caused by respiratory motion or organ peristalsis. Thus, registration (alignment) of whole-body PET and CT images is a prerequisite for their meaningful fusion. The purpose of this study was to assess the performance of two multimodal registration algorithms for aligning PET and CT images. The proposed gradient of mutual information (GMI)-based demons algorithm, which incorporated the GMI between two images as an external force to facilitate the alignment, was compared with the point-wise mutual information (PMI) diffeomorphic-based demons algorithm, whose external force was modified by replacing the image intensity difference in the diffeomorphic demons algorithm with the PMI to make it appropriate for multimodal image registration. Eight patients with esophageal cancer were enrolled in this IRB-approved study. Whole-body PET and CT images were acquired from a combined 18F-FDG PET/CT scanner for each patient. The modified Hausdorff distance (dMH) was used to evaluate the registration accuracy of the two algorithms. Across all patients, the mean values and standard deviations (SDs) of dMH were 6.65 (± 1.90) voxels and 6.01 (± 1.90) voxels after the GMI-based demons and the PMI diffeomorphic-based demons registration algorithms, respectively. Preliminary results on oncological patients showed that the respiratory motion and organ peristalsis in PET/CT esophageal images could not be neglected, even though a combined 18F-FDG PET/CT scanner was used for image acquisition. The PMI diffeomorphic-based demons algorithm was more accurate than the GMI-based demons algorithm in registering PET/CT esophageal images. PACS numbers: 87.57.nj, 87.57.Q-, 87.57.uk PMID:26218993
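The evaluation measure used in this study, the modified Hausdorff distance of Dubuisson and Jain, is easy to state in code; a minimal sketch over two point sets extracted from the registered surfaces:

```python
import numpy as np

def modified_hausdorff(A, B):
    """Modified Hausdorff distance between two point sets, given as
    (n_points x dim) arrays: the larger of the two mean directed distances."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).mean(),   # mean distance from each point of A to B
               d.min(axis=0).mean())   # mean distance from each point of B to A
```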
Medical Image Fusion Based on Feature Extraction and Sparse Representation
Wei, Gao; Zongxi, Song
2017-01-01
As a novel multiscale geometric analysis tool, sparse representation has shown many advantages over conventional image representation methods. However, standard sparse representation does not take intrinsic structure and time complexity into consideration. In this paper, a new fusion mechanism for multimodal medical images based on sparse representation and decision maps is proposed to deal with these problems simultaneously. Three decision maps are designed, including a structure information map (SM) and an energy information map (EM), as well as a combined structure and energy map (SEM), to make the results preserve more energy and edge information. SM contains the local structure feature captured by the Laplacian of Gaussian (LOG), and EM contains the energy and energy-distribution feature detected by the mean square deviation. The decision map is added to the normal sparse representation based method to improve the speed of the algorithm. The proposed approach also improves the quality of the fused results by enhancing the contrast and preserving more structure and energy information from the source images. The experimental results on 36 groups of CT/MR, MR-T1/MR-T2, and CT/PET images demonstrate that the method based on SR and SEM outperforms five state-of-the-art methods. PMID:28321246
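A minimal sketch of the two decision maps named above; the window size, sigma, and the simple sum used to combine them are illustrative assumptions, and the sparse-representation stage they steer is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, uniform_filter

def structure_map(img, sigma=1.0):
    """Local structure strength via the Laplacian of Gaussian (LOG)."""
    return np.abs(gaussian_laplace(img.astype(float), sigma))

def energy_map(img, size=7):
    """Local energy via mean square deviation in a sliding window."""
    img = img.astype(float)
    mean = uniform_filter(img, size)
    return uniform_filter(img ** 2, size) - mean ** 2   # E[x^2] - E[x]^2

def decision_fuse(a, b, sigma=1.0, size=7):
    """Pick, per pixel, the source image with the stronger combined response."""
    score_a = structure_map(a, sigma) + energy_map(a, size)
    score_b = structure_map(b, sigma) + energy_map(b, size)
    return np.where(score_a >= score_b, a, b)
```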
NASA Astrophysics Data System (ADS)
Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting
2018-02-01
Image segmentation plays an important role in medical science. One application is multimodality imaging, especially the fusion of structural imaging with functional imaging, which includes CT, MRI, and newer technologies such as optical imaging. The fusion process requires precisely extracted structural information in order to register the functional image to it. Here we used image enhancement and morphometry methods to extract accurate contours of different tissues, such as skull, cerebrospinal fluid (CSF), grey matter (GM), and white matter (WM), on 5 fMRI head image datasets. We then utilized a convolutional neural network to realize automatic segmentation of the images in a deep-learning fashion. This approach greatly reduced processing time compared with manual and semi-automatic segmentation and is of great importance for improving speed and accuracy as more and more samples are learned. The contours of the borders of different tissues on all images were accurately extracted and visualized in 3D. This can be used in low-level light therapy and in optical simulation software such as MCVM. We obtained a precise three-dimensional distribution of the brain, which offers doctors and researchers quantitative volume data and detailed morphological characterization for personalized precision medicine of cerebral atrophy/expansion. We hope this technique can bring convenience to medical visualization and personalized medicine.
Deformable Medical Image Registration: A Survey
Sotiras, Aristeidis; Davatzikos, Christos; Paragios, Nikos
2013-01-01
Deformable image registration is a fundamental task in medical image processing. Among its most important applications, one may cite: i) multi-modality fusion, where information acquired by different imaging devices or protocols is fused to facilitate diagnosis and treatment planning; ii) longitudinal studies, where temporal structural or anatomical changes are investigated; and iii) population modeling and statistical atlases used to study normal anatomical variability. In this paper, we attempt to give an overview of deformable registration methods, putting emphasis on the most recent advances in the domain. Additional emphasis has been given to techniques applied to medical images. In order to study image registration methods in depth, their main components are identified and studied independently. The most recent techniques are presented in a systematic fashion. The contribution of this paper is to provide an extensive account of registration techniques in a systematic manner. PMID:23739795
Tangible interactive system for document browsing and visualisation of multimedia data
NASA Astrophysics Data System (ADS)
Rytsar, Yuriy; Voloshynovskiy, Sviatoslav; Koval, Oleksiy; Deguillaume, Frederic; Topak, Emre; Startchik, Sergei; Pun, Thierry
2006-01-01
In this paper we introduce and develop a framework for interactive document navigation in multimodal databases. First, we analyze the main open issues of existing multimodal interfaces and then discuss two applications that involve interaction with documents in several human environments, i.e., the so-called smart rooms. Second, we propose a system set-up dedicated to efficient navigation in printed documents. This set-up is based on the fusion of data from several modalities, including images and text. Both modalities can be used as cover data for hidden indexes using data-hiding technologies, as well as source data for robust visual hashing. The particularities of the proposed robust visual hashing are described in the paper. Finally, we address two practical applications of smart rooms for tourism and education and demonstrate the advantages of the proposed solution.
Serag, Ahmed; Blesa, Manuel; Moore, Emma J; Pataky, Rozalia; Sparrow, Sarah A; Wilkinson, A G; Macnaught, Gillian; Semple, Scott I; Boardman, James P
2016-03-24
Accurate whole-brain segmentation, or brain extraction, of magnetic resonance imaging (MRI) is a critical first step in most neuroimage analysis pipelines. The majority of brain extraction algorithms have been developed and evaluated for adult data and their validity for neonatal brain extraction, which presents age-specific challenges for this task, has not been established. We developed a novel method for brain extraction of multi-modal neonatal brain MR images, named ALFA (Accurate Learning with Few Atlases). The method uses a new sparsity-based atlas selection strategy that requires a very limited number of atlases 'uniformly' distributed in the low-dimensional data space, combined with a machine learning based label fusion technique. The performance of the method for brain extraction from multi-modal data of 50 newborns is evaluated and compared with results obtained using eleven publicly available brain extraction methods. ALFA outperformed the eleven compared methods providing robust and accurate brain extraction results across different modalities. As ALFA can learn from partially labelled datasets, it can be used to segment large-scale datasets efficiently. ALFA could also be applied to other imaging modalities and other stages across the life course.
Multispectral Palmprint Recognition Using a Quaternion Matrix
Xu, Xingpeng; Guo, Zhenhua; Song, Changjiang; Li, Yafeng
2012-01-01
Palmprints have been widely studied for biometric recognition for many years. Traditionally, a white light source is used for illumination. Recently, multispectral imaging has drawn attention because of its high recognition accuracy. Multispectral palmprint systems can provide more discriminant information under different illuminations in a short time; thus they can achieve better recognition accuracy. Previously, multispectral palmprint images were taken as a kind of multi-modal biometrics, and a fusion scheme at the image level or matching score level was used. However, some spectral information is lost during image level or matching score level fusion. In this study, we propose a new method for multispectral images based on a quaternion model which can fully utilize the multispectral information. Firstly, multispectral palmprint images captured under red, green, blue, and near-infrared (NIR) illuminations were represented by a quaternion matrix; then principal component analysis (PCA) and discrete wavelet transform (DWT) were applied respectively on the matrix to extract palmprint features. After that, Euclidean distance was used to measure the dissimilarity between different features. Finally, the sum of the two distances and the nearest-neighbor classifier were employed for the recognition decision. Experimental results showed that using the quaternion matrix can achieve a higher recognition rate. Given 3000 test samples from 500 palms, the recognition rate can be as high as 98.83%. PMID:22666049
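The quaternion representation at the heart of this method is simple to set up; the sketch below only builds the per-pixel quaternion array and one holistic feature, since quaternion-valued PCA and DWT require quaternion algebra beyond this illustration.

```python
import numpy as np

def to_quaternion(red, green, blue, nir):
    """Stack the four spectral bands as quaternion components
    q = red + green*i + blue*j + nir*k, one quaternion per pixel."""
    return np.stack([red, green, blue, nir], axis=-1).astype(float)

def quaternion_modulus(q):
    """Per-pixel modulus |q|: a simple holistic feature over all four bands."""
    return np.sqrt((q ** 2).sum(axis=-1))
```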
Wong, K K; Chondrogiannis, S; Bowles, H; Fuster, D; Sánchez, N; Rampin, L; Rubello, D
Nuclear medicine traditionally employs planar and single photon emission computed tomography (SPECT) imaging techniques to depict the biodistribution of radiotracers for the diagnostic investigation of a range of disorders of endocrine gland function. The usefulness of combining functional information with anatomy derived from computed tomography (CT), magnetic resonance imaging (MRI), and high-resolution ultrasound (US) has long been appreciated, either using visual side-by-side correlation or software-based co-registration. The emergence of hybrid SPECT/CT camera technology now allows the simultaneous acquisition of combined multi-modality imaging, with seamless fusion of 3D volume datasets. Thus, it is not surprising that there is a growing literature describing the many advantages that contemporary SPECT/CT technology brings to radionuclide investigation of endocrine disorders, showing potential advantages for pre-operative localization of parathyroid adenomas ahead of a minimally invasive surgical approach, especially in the presence of ectopic glands and in multiglandular disease. In conclusion, hybrid SPECT/CT imaging has become an essential tool for the most accurate diagnosis in the management of patients with hyperparathyroidism.
Direct estimation of evoked hemoglobin changes by multimodality fusion imaging
Huppert, Theodore J.; Diamond, Solomon G.; Boas, David A.
2009-01-01
In the last two decades, both diffuse optical tomography (DOT) and blood oxygen level dependent (BOLD)-based functional magnetic resonance imaging (fMRI) methods have been developed as noninvasive tools for imaging evoked cerebral hemodynamic changes in studies of brain activity. Although these two technologies measure functional contrast from similar physiological sources, i.e., changes in hemoglobin levels, these two modalities are based on distinct physical and biophysical principles leading to both limitations and strengths to each method. In this work, we describe a unified linear model to combine the complementary spatial, temporal, and spectroscopic resolutions of concurrently measured optical tomography and fMRI signals. Using numerical simulations, we demonstrate that concurrent optical and BOLD measurements can be used to create cross-calibrated estimates of absolute micromolar deoxyhemoglobin changes. We apply this new analysis tool to experimental data acquired simultaneously with both DOT and BOLD imaging during a motor task, demonstrate the ability to more robustly estimate hemoglobin changes in comparison to DOT alone, and show how this approach can provide cross-calibrated estimates of hemoglobin changes. Using this multimodal method, we estimate the calibration of the 3 tesla BOLD signal to be −0.55% ± 0.40% signal change per micromolar change of deoxyhemoglobin. PMID:19021411
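The unified linear model amounts to stacking the optical and BOLD forward models into one system and solving for a common hemodynamic change; a bare-bones sketch (the paper's version adds noise models and temporal priors, which are omitted here):

```python
import numpy as np

def joint_estimate(y_dot, y_bold, A_dot, A_bold):
    """Stack both forward models into one linear system y = A x and solve
    for the shared hemoglobin-change vector x by least squares.

    y_dot, y_bold: measurement vectors from DOT and BOLD.
    A_dot, A_bold: the corresponding forward (sensitivity) matrices.
    """
    y = np.concatenate([y_dot, y_bold])
    A = np.vstack([A_dot, A_bold])
    x, *_ = np.linalg.lstsq(A, y, rcond=None)
    return x
```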
Adali, Tülay; Levin-Schwartz, Yuri; Calhoun, Vince D.
2015-01-01
Fusion of information from multiple sets of data in order to extract a set of features that are most useful and relevant for the given task is inherent to many problems we deal with today. Since, usually, very little is known about the actual interaction among the datasets, it is highly desirable to minimize the underlying assumptions. This has been the main reason for the growing importance of data-driven methods, and in particular of independent component analysis (ICA), as it provides useful decompositions with a simple generative model and using only the assumption of statistical independence. A recent extension of ICA, independent vector analysis (IVA), generalizes ICA to multiple datasets by exploiting the statistical dependence across the datasets, and hence, as we discuss in this paper, provides an attractive solution to fusion of data from multiple datasets along with ICA. In this paper, we focus on two multivariate solutions for multi-modal data fusion that let multiple modalities fully interact for the estimation of underlying features that jointly report on all modalities. One solution is the Joint ICA model that has found wide application in medical imaging, and the second one is the Transposed IVA model introduced here as a generalization of an approach based on multi-set canonical correlation analysis. In the discussion, we emphasize the role of diversity in the decompositions achieved by these two models, and present their properties and implementation details to enable the user to make informed decisions on the selection of a model along with its associated parameters. Discussions are supported by simulation results to help highlight the main issues in the implementation of these methods. PMID:26525830
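A compact sketch of the Joint ICA idea using scikit-learn's FastICA; the matrix shapes and component count are arbitrary assumptions, and passing the transposed, concatenated feature matrix makes the estimated sources play the role of joint spatial maps with one shared loading per subject.

```python
import numpy as np
from sklearn.decomposition import FastICA

def joint_ica(feat_mod1, feat_mod2, n_components=8):
    """Joint ICA sketch: stack the two modalities' (subjects x features)
    matrices side by side, then decompose so that each independent component
    spans both modalities simultaneously."""
    X = np.hstack([feat_mod1, feat_mod2])        # (subjects, feat1 + feat2)
    ica = FastICA(n_components=n_components, random_state=0)
    joint_maps = ica.fit_transform(X.T)          # (feat1 + feat2, components)
    subject_loadings = ica.mixing_               # (subjects, components)
    return joint_maps, subject_loadings
```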
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zheng; Ukida, H.; Ramuhalli, Pradeep
2010-06-05
Imaging- and vision-based techniques play an important role in industrial inspection. The sophistication of the techniques assures high-quality performance of the manufacturing process through precise positioning, online monitoring, and real-time classification. Advanced systems incorporating multiple imaging and/or vision modalities provide robust solutions to complex situations and problems in industrial applications. A diverse range of industries, including aerospace, automotive, electronics, pharmaceutical, biomedical, semiconductor, and food/beverage, have benefited from recent advances in multi-modal imaging, data fusion, and computer vision technologies. Many of the open problems in this context are in the general area of image analysis methodologies (preferably in an automated fashion). This editorial article introduces a special issue of this journal highlighting recent advances and demonstrating the successful applications of integrated imaging and vision technologies in industrial inspection.
Context-Aware Fusion of RGB and Thermal Imagery for Traffic Monitoring
Alldieck, Thiemo; Bahnsen, Chris H.; Moeslund, Thomas B.
2016-01-01
In order to enable a robust 24-h monitoring of traffic under changing environmental conditions, it is beneficial to observe the traffic scene using several sensors, preferably from different modalities. To fully benefit from multi-modal sensor output, however, one must fuse the data. This paper introduces a new approach for fusing color RGB and thermal video streams by using not only the information from the videos themselves, but also the available contextual information of a scene. The contextual information is used to judge the quality of a particular modality and guides the fusion of two parallel segmentation pipelines of the RGB and thermal video streams. The potential of the proposed context-aware fusion is demonstrated by extensive tests of quantitative and qualitative characteristics on existing and novel video datasets and benchmarked against competing approaches to multi-modal fusion. PMID:27869730
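The central idea above, letting scene context decide how much to trust each modality, reduces to a per-pixel weighted blend in its simplest form; the quality score and threshold below are illustrative stand-ins for the paper's richer context model.

```python
import numpy as np

def fuse_masks(p_rgb, p_thermal, rgb_quality):
    """Blend per-pixel foreground probabilities from the RGB and thermal
    segmentation pipelines; rgb_quality in [0, 1] comes from contextual
    cues (e.g. nighttime drives it toward 0, clear daylight toward 1)."""
    p = rgb_quality * p_rgb + (1.0 - rgb_quality) * p_thermal
    return p > 0.5

# At night the thermal pipeline dominates the fused decision
night = fuse_masks(np.array([0.9, 0.2]), np.array([0.1, 0.8]), rgb_quality=0.1)
print(night)  # -> [False  True]
```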
Radiolabeled Nanoparticles for Multimodality Tumor Imaging
Xing, Yan; Zhao, Jinhua; Conti, Peter S.; Chen, Kai
2014-01-01
Each imaging modality has its own unique strengths. Multimodality imaging, taking advantage of the strengths of two or more imaging modalities, can provide overall structural, functional, and molecular information, offering the prospect of improved diagnostic and therapeutic monitoring abilities. Multimodal, multifunctional molecular imaging devices are of great value for cancer diagnosis and treatment and have greatly accelerated the development of radionuclide-based multimodal molecular imaging. Radiolabeled nanoparticles bearing intrinsic properties have gained great interest in multimodality tumor imaging over the past decade. Significant breakthroughs have been made toward the development of various radiolabeled nanoparticles, which can be used as novel cancer diagnostic tools in multimodality imaging systems. It is expected that quantitative multimodality imaging with multifunctional radiolabeled nanoparticles will afford accurate and precise assessment of biological signatures in cancer in real time and thus pave the way toward personalized cancer medicine. This review addresses advantages and challenges in developing multimodality imaging probes using different types of nanoparticles, and summarizes recent advances in the applications of radiolabeled nanoparticles for multimodal imaging of tumors. The key issues involved in the translation of radiolabeled nanoparticles to the clinic are also discussed. PMID:24505237
Pseudo-circulator implemented as a multimode fiber coupler
NASA Astrophysics Data System (ADS)
Bulota, F.; Bélanger, P.; Leduc, M.; Boudoux, C.; Godbout, N.
2016-03-01
We present a linear all-fiber device exhibiting the functionality of a circulator, albeit for multimode fibers. We define a pseudo-circulator as a linear three-port component that transfers most of a multimode light signal from Port 1 to Port 2, and from Port 2 to Port 3. Unlike a traditional circulator, which depends on a nonlinear phenomenon to achieve non-reciprocal behavior, our device is a linear component that seemingly breaks the principle of reciprocity by exploiting the variations of etendue of the multimode fibers in the coupler. The pseudo-circulator is implemented as a 2x2 asymmetric multimode fiber coupler, fabricated using the fusion-tapering technique. The coupler is asymmetric in its transverse fused section. The two multimode fibers differ in area, thus favoring the transfer of light from the smaller to the bigger fiber. The desired difference of area is obtained by tapering one of the fibers before the fusion process. Using this technique, we have successfully fabricated a pseudo-circulator surpassing a 50/50 beam-splitter in efficiency. Across the visible and near-IR spectrum, the transmission ratio exceeds 77% from Port 1 to Port 2, and 80% from Port 2 to Port 3. The excess loss is less than 0.5 dB, regardless of the entry port.
NASA Astrophysics Data System (ADS)
Hanson, Jeffrey A.; McLaughlin, Keith L.; Sereno, Thomas J.
2011-06-01
We have developed a flexible, target-driven, multi-modal, physics-based fusion architecture that efficiently searches sensor detections for targets and rejects clutter while controlling the combinatoric problems that commonly arise in data-driven fusion systems. The informational constraints imposed by long lifetime requirements make systems vulnerable to false alarms. We demonstrate that our data fusion system significantly reduces false alarms while maintaining high sensitivity to threats. In addition, mission goals can vary substantially in terms of targets-of-interest, required characterization, acceptable latency, and false alarm rates. Our fusion architecture provides the flexibility to match these trade-offs with mission requirements, unlike many conventional systems that require significant modifications for each new mission. We illustrate our data fusion performance with case studies that span many of the potential mission scenarios, including border surveillance, base security, and infrastructure protection. In these studies, we deployed multi-modal sensor nodes - including geophones, magnetometers, accelerometers and PIR sensors - with low-power processing algorithms and low-bandwidth wireless mesh networking to create networks capable of multi-year operation. The results show our data fusion architecture maintains high sensitivity while suppressing most false alarms for a variety of environments and targets.
Computer Based Behavioral Biometric Authentication via Multi-Modal Fusion
2013-03-01
…the decisions made by each individual modality. Fusion of features is the simple concatenation of feature vectors from multiple modalities. [Table residue, recoverable pairings of classifier and feature-selection method with feature counts: BayesNet with MDL, 330 features; LibSVM with PCA, 80 features; J48 with a wrapper evaluator, 11 features. Section heading: 3.5.3 Ensemble Based Decision Level Fusion.] In ensemble learning, multiple … The high fusion percentages validate our hypothesis that by combining features from multiple modalities, classification accuracy can be improved. As …
Yang, Jie; Yin, Yingying; Zhang, Zuping; Long, Jun; Dong, Jian; Zhang, Yuqun; Xu, Zhi; Li, Lei; Liu, Jie; Yuan, Yonggui
2018-02-05
Major depressive disorder (MDD) is characterized by dysregulation of distributed structural and functional networks. It is now recognized that structural and functional networks are related at multiple temporal scales. The recent emergence of multimodal fusion methods has made it possible to comprehensively and systematically investigate brain networks and thereby provide essential information for influencing disease diagnosis and prognosis. However, such investigations are hampered by the inconsistent dimensionality features between structural and functional networks. Thus, a semi-multimodal fusion hierarchical feature reduction framework is proposed. Feature reduction is a vital procedure in classification that can be used to eliminate irrelevant and redundant information and thereby improve the accuracy of disease diagnosis. Our proposed framework primarily consists of two steps. The first step considers the connection distances in both structural and functional networks between MDD and healthy control (HC) groups. By adding a constraint based on sparsity regularization, the second step fully utilizes the inter-relationship between the two modalities. However, in contrast to conventional multi-modality multi-task methods, the structural networks were considered to play only a subsidiary role in feature reduction and were not included in the following classification. The proposed method achieved a classification accuracy, specificity, sensitivity, and area under the curve of 84.91%, 88.6%, 81.29%, and 0.91, respectively. Moreover, the frontal-limbic system contributed the most to disease diagnosis. Importantly, by taking full advantage of the complementary information from multimodal neuroimaging data, the selected consensus connections may be highly reliable biomarkers of MDD.
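The sparsity-regularized selection step can be illustrated with an off-the-shelf L1-penalized classifier; this single-modality sketch stands in for the paper's joint structural-functional regularizer, and the feature-matrix layout and regularization strength are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sparse_select(conn_features, labels, c=0.1):
    """L1-regularized screening of connectivity features: the nonzero
    coefficients mark connections retained for a subsequent MDD-vs-HC
    classifier.

    conn_features: (n_subjects, n_connections) matrix of network features.
    labels: 0/1 array (HC vs MDD). c: inverse regularization strength.
    """
    model = LogisticRegression(penalty="l1", solver="liblinear", C=c)
    model.fit(conn_features, labels)
    return np.flatnonzero(model.coef_.ravel())   # indices of kept connections
```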
Correa, Nicolle M; Li, Yi-Ou; Adalı, Tülay; Calhoun, Vince D
2008-12-01
Typically data acquired through imaging techniques such as functional magnetic resonance imaging (fMRI), structural MRI (sMRI), and electroencephalography (EEG) are analyzed separately. However, fusing information from such complementary modalities promises to provide additional insight into connectivity across brain networks and changes due to disease. We propose a data fusion scheme at the feature level using canonical correlation analysis (CCA) to determine inter-subject covariations across modalities. As we show both with simulation results and application to real data, multimodal CCA (mCCA) proves to be a flexible and powerful method for discovering associations among various data types. We demonstrate the versatility of the method with application to two datasets, an fMRI and EEG, and an fMRI and sMRI dataset, both collected from patients diagnosed with schizophrenia and healthy controls. CCA results for fMRI and EEG data collected for an auditory oddball task reveal associations of the temporal and motor areas with the N2 and P3 peaks. For the application to fMRI and sMRI data collected for an auditory sensorimotor task, CCA results show an interesting joint relationship between fMRI and gray matter, with patients with schizophrenia showing more functional activity in motor areas and less activity in temporal areas associated with less gray matter as compared to healthy controls. Additionally, we compare our scheme with an independent component analysis based fusion method, joint-ICA that has proven useful for such a study and note that the two methods provide complementary perspectives on data fusion.
Evaluation of image registration in PET/CT of the liver and recommendations for optimized imaging.
Vogel, Wouter V; van Dalen, Jorn A; Wiering, Bas; Huisman, Henkjan; Corstens, Frans H M; Ruers, Theo J M; Oyen, Wim J G
2007-06-01
Multimodality PET/CT of the liver can be performed with an integrated (hybrid) PET/CT scanner or with software fusion of dedicated PET and CT. Accurate anatomic correlation and good image quality of both modalities are important prerequisites, regardless of the applied method. Registration accuracy is influenced by breathing motion differences on PET and CT, which may also have impact on (attenuation correction-related) artifacts, especially in the upper abdomen. The impact of these issues was evaluated for both hybrid PET/CT and software fusion, focused on imaging of the liver. Thirty patients underwent hybrid PET/CT, 20 with CT during expiration breath-hold (EB) and 10 with CT during free breathing (FB). Ten additional patients underwent software fusion of dedicated PET and dedicated expiration breath-hold CT (SF). The image registration accuracy was evaluated at the location of liver borders on CT and uncorrected PET images and at the location of liver lesions. Attenuation-correction artifacts were evaluated by comparison of liver borders on uncorrected and attenuation-corrected PET images. CT images were evaluated for the presence of breathing artifacts. In EB, 40% of patients had an absolute registration error of the diaphragm in the craniocaudal direction of >1 cm (range, -16 to 44 mm), and 45% of lesions were mispositioned >1 cm. In 50% of cases, attenuation-correction artifacts caused a deformation of the liver dome on PET of >1 cm. Poor compliance to breath-hold instructions caused CT artifacts in 55% of cases. In FB, 30% had registration errors of >1 cm (range, -4 to 16 mm) and PET artifacts were less extensive, but all CT images had breathing artifacts. As SF allows independent alignment of PET and CT, no registration errors or artifacts of >1 cm of the diaphragm occurred. Hybrid PET/CT of the liver may have significant registration errors and artifacts related to breathing motion. The extent of these issues depends on the selected breathing protocol and the speed of the CT scanner. No protocol or scanner can guarantee perfect image fusion. On the basis of these findings, recommendations were formulated with regard to scanner requirements, breathing protocols, and reporting.
Hardware implementation of hierarchical volume subdivision-based elastic registration.
Dandekar, Omkar; Walimbe, Vivek; Shekhar, Raj
2006-01-01
Real-time, elastic, and fully automated 3D image registration is critical to the efficiency and effectiveness of many image-guided diagnostic and treatment procedures relying on multimodality image fusion or serial image comparison. True real-time performance will make many 3D image registration-based techniques clinically viable. Hierarchical volume subdivision-based image registration techniques are inherently faster than most elastic registration techniques, e.g., free-form deformation (FFD)-based techniques, and are more amenable to achieving real-time performance through hardware acceleration. Our group has previously reported an FPGA-based architecture for accelerating FFD-based image registration. In this article we show how our existing architecture can be adapted to support hierarchical volume subdivision-based image registration. A proof-of-concept implementation of the architecture achieved speedups of 100× for elastic registration against an optimized software implementation on a 3.2 GHz Pentium III Xeon workstation. Due to the inherently parallel nature of hierarchical volume subdivision-based image registration techniques, further speedup can be achieved by using several computing modules in parallel.
Kadoury, Samuel; Abi-Jaoudeh, Nadine; Levy, Elliot B.; Maass-Moreno, Roberto; Krücker, Jochen; Dalal, Sandeep; Xu, Sheng; Glossop, Neil; Wood, Bradford J.
2011-01-01
Purpose: To assess the feasibility of combined electromagnetic device tracking and computed tomography (CT)/ultrasonography (US)/fluorine 18 fluorodeoxyglucose (FDG) positron emission tomography (PET) fusion for real-time feedback during percutaneous and intraoperative biopsies and hepatic radiofrequency (RF) ablation. Materials and Methods: In this HIPAA-compliant, institutional review board–approved prospective study with written informed consent, 25 patients (17 men, eight women) underwent 33 percutaneous and three intraoperative biopsies of 36 FDG-avid targets between November 2007 and August 2010. One patient underwent biopsy and RF ablation of an FDG-avid hepatic focus. Targets demonstrated heterogeneous FDG uptake or were not well seen or were totally inapparent at conventional imaging. Preprocedural FDG PET scans were rigidly registered through a semiautomatic method to intraprocedural CT scans. Coaxial biopsy needle introducer tips and RF ablation electrode guider needle tips containing electromagnetic sensor coils were spatially tracked through an electromagnetic field generator. Real-time US scans were registered through a fiducial-based method, allowing US scans to be fused with intraprocedural CT and preacquired FDG PET scans. A visual display of US/CT image fusion with overlaid coregistered FDG PET targets was used for guidance; navigation software enabled real-time biopsy needle and needle electrode navigation and feedback. Results: Successful fusion of real-time US to coregistered CT and FDG PET scans was achieved in all patients. Thirty-one of 36 biopsies were diagnostic (malignancy in 18 cases, benign processes in 13 cases). RF ablation resulted in resolution of targeted FDG avidity, with no local treatment failure during short follow-up (56 days). Conclusion: Combined electromagnetic device tracking and image fusion with real-time feedback may facilitate biopsies and ablations of focal FDG PET abnormalities that would be challenging with conventional image guidance. © RSNA, 2011 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.11101985/-/DC1 PMID:21734159
Multimodality Imaging of Gene Transfer with a Receptor-Based Reporter Gene
Chen, Ron; Parry, Jesse J.; Akers, Walter J.; Berezin, Mikhail Y.; El Naqa, Issam M.; Achilefu, Samuel; Edwards, W. Barry; Rogers, Buck E.
2010-01-01
Gene therapy trials have traditionally used tumor and tissue biopsies for assessing the efficacy of gene transfer. Non-invasive imaging techniques offer a distinct advantage over tissue biopsies in that the magnitude and duration of gene transfer can be monitored repeatedly. Human somatostatin receptor subtype 2 (SSTR2) has been used for the nuclear imaging of gene transfer. To extend this concept, we have developed a somatostatin receptor–enhanced green fluorescent protein fusion construct (SSTR2-EGFP) for nuclear and fluorescent multimodality imaging. Methods: An adenovirus containing SSTR2-EGFP (AdSSTR2-EGFP) was constructed and evaluated in vitro and in vivo. SCC-9 human squamous cell carcinoma cells were infected with AdEGFP, AdSSTR2, or AdSSTR2-EGFP for in vitro evaluation by saturation binding, internalization, and fluorescence spectroscopy assays. In vivo biodistribution and nano-SPECT imaging studies were conducted with mice bearing SCC-9 tumor xenografts directly injected with AdSSTR2-EGFP or AdSSTR2 to determine the tumor localization of 111In-diethylenetriaminepentaacetic acid (DTPA)-Tyr3-octreotate. Fluorescence imaging was conducted in vivo with mice receiving intratumoral injections of AdSSTR2, AdSSTR2-EGFP, or AdEGFP, as well as ex vivo with tissues extracted from mice. Results: The similarity between AdSSTR2-EGFP and wild-type AdSSTR2 was demonstrated in vitro by the saturation binding and internalization assays, and the fluorescence emission spectra of cells infected with AdSSTR2-EGFP were almost identical to the spectra of cells infected with wild-type AdEGFP. Biodistribution studies demonstrated that the tumor uptake of 111In-DTPA-Tyr3-octreotate was not significantly different (P > 0.05) when tumors (n = 5) were injected with AdSSTR2 or AdSSTR2-EGFP but was significantly greater than the uptake in control tumors. Fluorescence was observed in tumors injected with AdSSTR2-EGFP and AdEGFP in vivo and ex vivo but not in tumors injected with AdSSTR2. Although fluorescence was observed, there were discrepancies between in vivo imaging and ex vivo imaging as well as between nuclear imaging and fluorescent imaging. Conclusion: These studies showed that the SSTR2-EGFP fusion construct can be used for in vivo nuclear and optical imaging of gene transfer. PMID:20720053
An atlas-based multimodal registration method for 2D images with discrepancy structures.
Lv, Wenchao; Chen, Houjin; Peng, Yahui; Li, Yanfeng; Li, Jupeng
2018-06-04
An atlas-based multimodal registration method for 2-dimensional images with discrepancy structures was proposed in this paper. An atlas was utilized to complement the discrepancy structure information in multimodal medical images. The scheme includes three steps: floating image to atlas registration, atlas to reference image registration, and field-based deformation. To evaluate the performance, a frame model, a brain model, and clinical images were employed in registration experiments. We measured registration performance by the squared sum of intensity differences. Results indicate that this method is robust and performs better than direct registration for multimodal images with discrepancy structures. We conclude that the proposed method is suitable for multimodal images with discrepancy structures. Graphical Abstract: schematic diagram of the atlas-based multimodal registration method.
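The evaluation measure named here, the squared sum of intensity differences, is a one-liner; a minimal sketch for two images already resampled onto the same grid:

```python
import numpy as np

def ssd(reference, warped):
    """Squared sum of intensity differences between the reference image and
    the warped floating image; lower values indicate better alignment."""
    diff = reference.astype(float) - warped.astype(float)
    return float((diff ** 2).sum())
```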
A review of multivariate methods in brain imaging data fusion
NASA Astrophysics Data System (ADS)
Sui, Jing; Adali, Tülay; Li, Yi-Ou; Yang, Honghui; Calhoun, Vince D.
2010-03-01
On joint analysis of multi-task brain imaging data sets, a variety of multivariate methods have shown their strengths and been applied to achieve different purposes based on their respective assumptions. In this paper, we provide a comprehensive review on optimization assumptions of six data fusion models, including 1) four blind methods: joint independent component analysis (jICA), multimodal canonical correlation analysis (mCCA), CCA on blind source separation (sCCA) and partial least squares (PLS); 2) two semi-blind methods: parallel ICA and coefficient-constrained ICA (CC-ICA). We also propose a novel model for joint blind source separation (BSS) of two datasets using a combination of sCCA and jICA, i.e., 'CCA+ICA', which, compared with other joint BSS methods, can achieve higher decomposition accuracy as well as the correct automatic source link. Applications of the proposed model to real multitask fMRI data are compared to joint ICA and mCCA; CCA+ICA further shows its advantages in capturing both shared and distinct information, differentiating groups, and interpreting duration of illness in schizophrenia patients, hence promising applicability to a wide variety of medical imaging problems.
Contactless and pose invariant biometric identification using hand surface.
Kanhangad, Vivek; Kumar, Ajay; Zhang, David
2011-05-01
This paper presents a novel approach for hand matching that achieves significantly improved performance even in the presence of large hand pose variations. The proposed method utilizes a 3-D digitizer to simultaneously acquire intensity and range images of the user's hand presented to the system in an arbitrary pose. The approach involves determination of the orientation of the hand in 3-D space, followed by pose normalization of the acquired 3-D and 2-D hand images. Multimodal (2-D as well as 3-D) palmprint and hand geometry features, which are simultaneously extracted from the user's pose-normalized textured 3-D hand, are used for matching. Individual matching scores are then combined using a new dynamic fusion strategy. Our experiments on a database of 114 subjects with significant pose variations yielded encouraging results. The consistent performance improvement achieved with pose correction, across the various hand features considered, demonstrates the usefulness of the proposed approach for hand-based biometric systems with unconstrained and contact-free imaging. The experimental results also suggest that the dynamic fusion approach employed in this work helps to achieve a performance improvement of 60% (in terms of EER) over the case when matching scores are combined using the weighted sum rule.
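The dynamic fusion strategy is only named, not specified, in this abstract; as one plausible illustration, the weight on the 2-D score can be made a function of the estimated pose angle, an invented rule for sketch purposes only.

```python
import numpy as np

def dynamic_fusion(score_2d, score_3d, pose_angle, max_angle=60.0):
    """Down-weight the 2-D palmprint score as the estimated hand pose angle
    (degrees) grows, trusting the pose-corrected 3-D features more at large
    angles. The linear weighting rule is a hypothetical stand-in."""
    w3 = np.clip(pose_angle / max_angle, 0.0, 1.0)
    return (1.0 - w3) * score_2d + w3 * score_3d
```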
Adiabatically tapered splice for selective excitation of the fundamental mode in a multimode fiber.
Jung, Yongmin; Jeong, Yoonchan; Brambilla, Gilberto; Richardson, David J
2009-08-01
We propose a simple and effective method to selectively excite the fundamental mode of a multimode fiber by adiabatically tapering a fusion splice to a single-mode fiber. We experimentally demonstrate the method with an adiabatically tapered splice (taper waist = 15 μm, uniform length = 40 mm) between a single-mode and a multimode fiber and show that it provides successful mode conversion/connection and allows for almost perfect fundamental-mode excitation in the multimode fiber. Excellent beam quality (M² ≈ 1.08) was achieved with low loss and high environmental stability.
Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems.
Snelick, Robert; Uludag, Umut; Mink, Alan; Indovina, Michael; Jain, Anil
2005-03-01
We examine the performance of multimodal biometric authentication systems using state-of-the-art Commercial Off-the-Shelf (COTS) fingerprint and face biometric systems on a population approaching 1,000 individuals. The majority of prior studies of multimodal biometrics have been limited to relatively low accuracy non-COTS systems and populations of a few hundred users. Our work is the first to demonstrate that multimodal fingerprint and face biometric systems can achieve significant accuracy gains over either biometric alone, even when using highly accurate COTS systems on a relatively large-scale population. In addition to examining well-known multimodal methods, we introduce new methods of normalization and fusion that further improve the accuracy.
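Score normalization is the crux of COTS multimodal fusion, since each vendor's matcher reports scores on its own scale; a minimal min-max sketch followed by the sum rule, one of the standard baselines examined in studies like this one:

```python
import numpy as np

def min_max_normalize(scores):
    """Map raw matcher scores onto [0, 1] so modalities become comparable."""
    s = np.asarray(scores, dtype=float)
    span = s.max() - s.min()
    return (s - s.min()) / span if span > 0 else np.zeros_like(s)

def sum_rule(face_scores, finger_scores):
    """Classic sum-rule fusion of two normalized score sets, one score per
    comparison; higher fused scores indicate more likely genuine matches."""
    return min_max_normalize(face_scores) + min_max_normalize(finger_scores)
```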
Fusion Teaching: Utilizing Course Management Technology to Deliver an Effective Multimodal Pedagogy
ERIC Educational Resources Information Center
Childs, Bradley D.; Cochran, Howard H.; Velikova, Marieta
2013-01-01
Fusion teaching merges several pedagogies into a coherent whole. Course management technology allows for the digitization and delivery of pedagogies in an effective and exciting manner. Online course management options more easily enable outcome assessment and monitoring for continuous improvement.
Gérard, Maxime; Michaud, François; Bigot, Alexandre; Tang, An; Soulez, Gilles; Kadoury, Samuel
2017-06-01
Modulating the chemotherapy injection rate with regard to blood flow velocities in the tumor-feeding arteries during intra-arterial therapies may help improve liver tumor targeting while decreasing systemic exposure. These velocities can be obtained noninvasively using Doppler ultrasound (US). However, small vessels situated in the liver are difficult to identify and follow in US. We propose a multimodal fusion approach that non-rigidly registers a 3D geometric mesh model of the hepatic arteries obtained from preoperative MR angiography (MRA) acquisitions with intra-operative 3D US imaging. The proposed fusion tool integrates 3 imaging modalities: an arterial MRA, a portal phase MRA and an intra-operative 3D US. Preoperatively, the arterial phase MRA is used to generate a 3D model of the hepatic arteries, which is then non-rigidly co-registered with the portal phase MRA. Once the intra-operative 3D US is acquired, we register it with the portal MRA using a vessel-based rigid initialization followed by a non-rigid registration using an image-based metric based on linear correlation of linear combination. Using the combined non-rigid transformation matrices, the 3D mesh model is fused with the 3D US. 3D US and multi-phase MRA images acquired from 10 porcine models were used to test the performance of the proposed fusion tool. Unimodal registration of the MRA phases yielded a target registration error (TRE) of [Formula: see text] mm. Initial rigid alignment of the portal MRA and 3D US yielded a mean TRE of [Formula: see text] mm, which was significantly reduced to [Formula: see text] mm ([Formula: see text]) after affine image-based registration. The following deformable registration step allowed for further decrease of the mean TRE to [Formula: see text] mm. The proposed tool could facilitate visualization and localization of these vessels when using 3D US intra-operatively for either intravascular or percutaneous interventions to avoid vessel perforation.
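The accuracy figures in this abstract are target registration errors; with landmark coordinates in hand, TRE is a one-liner, sketched here for paired arrays of corresponding points.

```python
import numpy as np

def target_registration_error(registered_pts, reference_pts):
    """Mean Euclidean distance between corresponding landmarks after
    registration; both arrays are (n_landmarks, 3), e.g. in millimeters."""
    return float(np.linalg.norm(registered_pts - reference_pts, axis=1).mean())
```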
Fusing DTI and FMRI Data: A Survey of Methods and Applications
Zhu, Dajiang; Zhang, Tuo; Jiang, Xi; Hu, Xintao; Chen, Hanbo; Yang, Ning; Lv, Jinglei; Han, Junwei; Guo, Lei; Liu, Tianming
2014-01-01
The relationship between brain structure and function has been one of the centers of research in neuroimaging for decades. In recent years, diffusion tensor imaging (DTI) and functional magnetic resonance imaging (fMRI) techniques have been widely available and popular in cognitive and clinical neurosciences for examining the brain’s white matter (WM) micro-structures and gray matter (GM) functions, respectively. Given the intrinsic integration of WM/GM and the complementary information embedded in DTI/fMRI data, it is natural and well-justified to combine these two neuroimaging modalities together to investigate brain structure and function and their relationships simultaneously. In the past decade, there have been remarkable achievements of DTI/fMRI fusion methods and applications in neuroimaging and human brain mapping community. This survey paper aims to review recent advancements on methodologies and applications in incorporating multimodal DTI and fMRI data, and offer our perspectives on future research directions. We envision that effective fusion of DTI/fMRI techniques will play increasingly important roles in neuroimaging and brain sciences in the years to come. PMID:24103849
Pedestrian detection from thermal images: A sparse representation based approach
NASA Astrophysics Data System (ADS)
Qi, Bin; John, Vijay; Liu, Zheng; Mita, Seiichi
2016-05-01
Pedestrian detection, a key technology in computer vision, plays a paramount role in applications of advanced driver assistance systems (ADASs) and autonomous vehicles. The objective of pedestrian detection is to identify and locate people in a dynamic environment so that accidents can be avoided. With significant variations introduced by illumination, occlusion, articulated pose, and complex backgrounds, pedestrian detection is a challenging task for visual perception. Unlike visible images, thermal images are captured and presented as intensity maps based on objects' emissivity, and thus have an enhanced spectral range that makes human beings perceptible against the cooler background. In this study, a sparse representation based approach is proposed for pedestrian detection from thermal images. We first adopt the histogram of sparse codes to represent image features and then detect pedestrians with the extracted features in a unimodal and a multimodal framework, respectively. In the unimodal framework, two types of dictionaries, i.e. a joint dictionary and individual dictionaries, are built by learning from prepared training samples. In the multimodal framework, a weighted fusion scheme is proposed to further highlight the contributions from features with higher separability. To validate the proposed approach, experiments were conducted to compare it with three widely used features: Haar wavelets (HWs), histogram of oriented gradients (HOG), and histogram of phase congruency (HPC), as well as two classification methods, i.e. AdaBoost and support vector machine (SVM). Experimental results on a publicly available dataset demonstrate the superiority of the proposed approach.
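As a rough illustration of the histogram-of-sparse-codes representation named above, the following scikit-learn sketch learns a patch dictionary and pools absolute sparse coefficients into a per-image feature vector. Dictionary size, patch size, and sparsity level are assumptions for illustration; the random arrays stand in for real thermal-image patches.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.feature_extraction.image import extract_patches_2d

def sparse_code_histogram(image, dico, patch_size=(8, 8), k=5):
    """Encode image patches against a learned dictionary and pool the
    absolute sparse coefficients into one histogram-like feature vector."""
    patches = extract_patches_2d(image, patch_size)
    X = patches.reshape(len(patches), -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)          # per-patch DC removal
    codes = sparse_encode(X, dico.components_, algorithm="omp",
                          n_nonzero_coefs=k)
    return np.abs(codes).sum(axis=0)            # one bin per dictionary atom

rng = np.random.default_rng(0)
train_patches = rng.random((2000, 64))          # stand-in for thermal patches
dico = MiniBatchDictionaryLearning(n_components=64).fit(train_patches)
feature = sparse_code_histogram(rng.random((64, 48)), dico)
```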
The biometric recognition on contactless multi-spectrum finger images
NASA Astrophysics Data System (ADS)
Kang, Wenxiong; Chen, Xiaopeng; Wu, Qiuxia
2015-01-01
This paper presents a novel multimodal biometric system based on contactless multi-spectrum finger images, which aims to address the limitations of unimodal biometrics. The chief merits of the system are the richness of the available textures and the ease of data access. We constructed a multi-spectrum instrument to simultaneously acquire three different types of biometrics from a finger: contactless fingerprint, finger vein, and knuckleprint. From samples with these characteristics, a moderately sized database was built for evaluating our system. Considering the real-time requirements and the respective characteristics of the three biometrics, the block local binary patterns algorithm was used to extract and match features for the fingerprints and finger veins, while the Oriented FAST and Rotated BRIEF algorithm was applied to knuckleprints. Finally, score-level fusion was performed on the matching results from the three types of biometrics. The experiments showed that our proposed multimodal biometric recognition system achieves an equal error rate of 0.109%, which is 88.9%, 94.6%, and 89.7% lower than the individual fingerprint, knuckleprint, and finger vein recognitions, respectively. Moreover, the proposed system satisfies the real-time requirements of the applications.
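The block local binary patterns matcher used for the fingerprint and finger-vein modalities can be sketched as per-block uniform-LBP histograms compared with a chi-square distance, as below; the block grid and LBP parameters are illustrative assumptions rather than the authors' settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def block_lbp_features(img, blocks=(4, 4), P=8, R=1.0):
    """Per-block uniform-LBP histograms, concatenated; img is 8-bit grayscale."""
    lbp = local_binary_pattern(img, P, R, method="uniform")
    n_bins = P + 2                        # P+1 uniform patterns + 1 catch-all
    h, w = lbp.shape
    feats = []
    for i in range(blocks[0]):
        for j in range(blocks[1]):
            cell = lbp[i * h // blocks[0]:(i + 1) * h // blocks[0],
                       j * w // blocks[1]:(j + 1) * w // blocks[1]]
            hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins),
                                   density=True)
            feats.append(hist)
    return np.concatenate(feats)

def chi_square_distance(a, b, eps=1e-10):
    return 0.5 * np.sum((a - b) ** 2 / (a + b + eps))   # lower = closer match
```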
ERIC Educational Resources Information Center
Li, Ming
2013-01-01
The goal of this work is to enhance the robustness and efficiency of the multimodal human states recognition task. Human states recognition can be considered a joint term for identifying/verifying various kinds of human-related states, such as biometric identity, language spoken, age, gender, emotion, intoxication level, physical activity, vocal…
Dauwe, Dieter Frans; Nuyens, Dieter; De Buck, Stijn; Claus, Piet; Gheysens, Olivier; Koole, Michel; Coudyzer, Walter; Vanden Driessche, Nina; Janssens, Laurens; Ector, Joris; Dymarkowski, Steven; Bogaert, Jan; Heidbuchel, Hein; Janssens, Stefan
2014-08-01
Biological therapies for ischaemic heart disease require efficient, safe, and affordable intramyocardial delivery. Integration of multiple imaging modalities within the fluoroscopy framework can provide valuable information to guide these procedures. We compared an anatomo-electric method (LARCA) with a non-fluoroscopic electromechanical mapping system (NOGA(®)). LARCA integrates selective three-dimensional rotational angiograms with biplane fluoroscopy. To identify the infarct region, we studied LARCA-fusion with pre-procedural magnetic resonance imaging (MRI), dedicated CT, or (18)F-FDG-PET/CT. We induced myocardial infarction in 20 pigs by 90-min LAD occlusion. Six weeks later, we compared peri-infarct delivery accuracy of coloured fluospheres using sequential NOGA(®)- and LARCA-MRI-guided vs. LARCA-CT- and LARCA-(18)F-FDG-PET/CT-guided intramyocardial injections. MRI after 6 weeks revealed significant left ventricular (LV) functional impairment and remodelling (LVEF 31 ± 3%, LVEDV 178 ± 15 mL, infarct size 17 ± 2% of LV mass). During NOGA(®) procedures, three of five animals required DC shock for major ventricular arrhythmias vs. one of ten during LARCA procedures. Online procedure time was shorter for LARCA than NOGA(®) (77 ± 6 vs. 130 ± 3 min, P < 0.0001). Absolute distance of injection spots to the infarct border was similar for LARCA-MRI (4.8 ± 0.5 mm) and NOGA(®) (5.4 ± 0.5 mm). LARCA-CT integration allowed closer approximation of the targeted border zone than LARCA-PET (4.0 ± 0.5 mm vs. 6.2 ± 0.6 mm, P < 0.05). Three-dimensional rotational angiography fused with multimodal imaging offers a new, cost-effective, and safe strategy to guide intramyocardial injections. Endoventricular procedure times and arrhythmias compare favourably to NOGA(®), without compromising injection accuracy. LARCA-based fusion imaging is a promising enabling technology for cardiac biological therapies. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2014. For permissions please email: journals.permissions@oup.com.
Stendahl, John C; Sinusas, Albert J
2015-10-01
Imaging agents made from nanoparticles are functionally versatile and have unique properties that may translate to clinical utility in several key cardiovascular imaging niches. Nanoparticles exhibit size-based circulation, biodistribution, and elimination properties different from those of small molecules and microparticles. In addition, nanoparticles provide versatile platforms that can be engineered to create both multimodal and multifunctional imaging agents with tunable properties. With these features, nanoparticulate imaging agents can facilitate fusion of high-sensitivity and high-resolution imaging modalities and selectively bind tissues for targeted molecular imaging and therapeutic delivery. Despite their intriguing attributes, nanoparticulate imaging agents have thus far achieved only limited clinical use. The reasons for this restricted advancement include an evolving scope of applications, the simplicity and effectiveness of existing small-molecule agents, pharmacokinetic limitations, safety concerns, and a complex regulatory environment. This review describes general features of nanoparticulate imaging agents and therapeutics and discusses challenges associated with clinical translation. A second, related review to appear in a subsequent issue of JNM highlights nuclear-based nanoparticulate probes in preclinical cardiovascular imaging. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Multimodal biometric system using rank-level fusion approach.
Monwar, Md Maruf; Gavrilova, Marina L
2009-08-01
In many real-world applications, unimodal biometric systems often face significant limitations due to sensitivity to noise, intraclass variability, data quality, nonuniversality, and other factors. Attempting to improve the performance of individual matchers in such situations may not prove highly effective. Multibiometric systems seek to alleviate some of these problems by providing multiple pieces of evidence of the same identity. These systems help achieve an increase in performance that may not be possible using a single-biometric indicator. This paper presents an effective fusion scheme that combines information presented by multiple domain experts based on the rank-level fusion integration method. The developed multimodal biometric system has several distinctive features: it uses principal component analysis and Fisher's linear discriminant methods for identity authentication by the individual matchers (face, ear, and signature), and it uses a novel rank-level fusion method to consolidate the results obtained from the different biometric matchers. The ranks of individual matchers are combined using the highest rank, Borda count, and logistic regression approaches. The results indicate that fusion of individual modalities can improve the overall performance of the biometric system, even in the presence of low-quality data. Insights on multibiometric design using rank-level fusion and its performance on a variety of biometric databases are discussed in the concluding section.
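A minimal sketch of two of the rank-level rules discussed above, Borda count and highest rank, follows; the rank matrix is illustrative.

```python
import numpy as np

def borda_count(rank_matrix):
    """rank_matrix[m, i] = rank of identity i from matcher m (1 = best).
    Returns identities ordered by consensus, best first."""
    n = rank_matrix.shape[1]
    points = (n - rank_matrix).sum(axis=0)   # more points = better consensus
    return np.argsort(-points)

def highest_rank(rank_matrix):
    """Order identities by the best rank any single matcher assigned."""
    return np.argsort(rank_matrix.min(axis=0))

# Three matchers (e.g. face, ear, signature) ranking four identities:
ranks = np.array([[1, 3, 2, 4],
                  [2, 1, 3, 4],
                  [1, 2, 4, 3]])
print(borda_count(ranks), highest_rank(ranks))
```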
Cognitive Load Measurement in a Virtual Reality-based Driving System for Autism Intervention.
Zhang, Lian; Wade, Joshua; Bian, Dayi; Fan, Jing; Swanson, Amy; Weitlauf, Amy; Warren, Zachary; Sarkar, Nilanjan
2017-01-01
Autism Spectrum Disorder (ASD) is a highly prevalent neurodevelopmental disorder with enormous individual and social cost. In this paper, a novel virtual reality (VR)-based driving system was introduced to teach driving skills to adolescents with ASD. This driving system is capable of gathering eye gaze, electroencephalography, and peripheral physiology data in addition to driving performance data. The objective of this paper is to fuse multimodal information to measure cognitive load during driving so that driving tasks can be individualized for optimal skill learning. Individualization of ASD intervention is an important criterion due to the spectrum nature of the disorder. Twenty adolescents with ASD participated in our study, and the collected data were used for systematic feature extraction and classification of cognitive load based on five well-known machine learning methods. Subsequently, three information fusion schemes (feature-level fusion, decision-level fusion, and hybrid-level fusion) were explored. Results indicate that multimodal information fusion can be used to measure cognitive load with high accuracy. Such a mechanism is essential, since it will allow individualization of driving skill training based on cognitive load, which will facilitate acceptance of this driving system for clinical use and eventual commercialization.
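The contrast between feature-level and decision-level fusion can be sketched with scikit-learn as below; the modalities, feature dimensions, and classifiers are stand-ins for illustration, not the study's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, FunctionTransformer
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_gaze = rng.normal(size=(20, 10))   # eye-gaze features (illustrative)
X_eeg = rng.normal(size=(20, 32))    # EEG features (illustrative)
X_phys = rng.normal(size=(20, 8))    # peripheral physiology (illustrative)
y = rng.integers(0, 2, size=20)      # low vs. high cognitive load
X_all = np.hstack([X_gaze, X_eeg, X_phys])

# Feature-level fusion: one classifier on the concatenated feature vector.
feat_fusion = make_pipeline(StandardScaler(), SVC()).fit(X_all, y)

# Decision-level fusion: per-modality classifiers combined by majority vote.
def slice_cols(a, b):
    return FunctionTransformer(lambda X: X[:, a:b])

dec_fusion = VotingClassifier([
    ("gaze", make_pipeline(slice_cols(0, 10), StandardScaler(), SVC())),
    ("eeg", make_pipeline(slice_cols(10, 42), StandardScaler(), SVC())),
    ("phys", make_pipeline(slice_cols(42, 50), StandardScaler(), SVC())),
], voting="hard").fit(X_all, y)
```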
Sato, Mitsuru; Tateishi, Kensuke; Murata, Hidetoshi; Kin, Taichi; Suenaga, Jun; Takase, Hajime; Yoneyama, Tomohiro; Nishii, Toshiaki; Tateishi, Ukihide; Yamamoto, Tetsuya; Saito, Nobuhito; Inoue, Tomio; Kawahara, Nobutaka
2018-06-26
The utility of surgical simulation with three-dimensional multimodality fusion imaging (3D-MFI) has been demonstrated; however, its potential in deep-seated brain lesions remains unknown. The aim of this study was to investigate the impact of 3D-MFI on operations for deep-seated meningiomas. Fourteen patients with deeply located meningiomas were included in this study. We constructed 3D-MFIs by fusing high-resolution magnetic resonance (MR) and computed tomography (CT) images with a rotational digital subtraction angiogram (DSA) in all patients. The surgical procedure was simulated with 3D-MFI prior to the operation. To assess the impact on neurosurgical education, objective ratings of surgical simulation with 3D-MFI/virtual reality (VR) video were evaluated. To validate the quality of 3D-MFIs, intraoperative findings were compared. The identification rate (IR) and positive predictive value (PPV) for the tumor-feeding arteries and involved perforating arteries and veins were also assessed for quality assessment of 3D-MFI. After surgical simulation with 3D-MFI, near-total resection was achieved in 13 of 14 (92.9%) patients without neurological complications. 3D-MFI significantly contributed to understanding the surgical anatomy and optimal surgical view (p < .0001) and to learning how to preserve critical vessels (p < .0001) and resect tumors safely and extensively (p < .0001) among neurosurgical residents/fellows. The IR of 3D-MFI for tumor-feeding arteries and for perforating arteries and veins was 100% and 92.9%, respectively; the corresponding PPV was 98.8% and 76.5%. 3D-MFI contributed to learning skull base meningioma surgery and provided high quality for identifying critical anatomical structures within or adjacent to deep-seated meningiomas. Thus, 3D-MFI is a promising educational and surgical planning tool for meningiomas in deep-seated regions.
Multi-Sensory, Multi-Modal Concepts for Information Understanding
2004-04-01
Briefing charts, September 2002, by David L. Hall, Ph.D., School of Information Sciences and Technology. Outline: the modern dilemma of knowledge acquisition; a vision for information access and understanding; emerging concepts for multi-sensory, multi-modal information understanding. Historically, information displays for the display and understanding of data fusion products have focused on the use of vision.
Molecular brain imaging in the multimodality era
Price, Julie C
2012-01-01
Multimodality molecular brain imaging encompasses in vivo visualization, evaluation, and measurement of cellular/molecular processes. Instrumentation and software developments over the past 30 years have fueled advancements in multimodality imaging platforms that enable acquisition of multiple complementary imaging outcomes by either combined sequential or simultaneous acquisition. This article provides a general overview of multimodality neuroimaging in the context of positron emission tomography as a molecular imaging tool and magnetic resonance imaging as a structural and functional imaging tool. Several image examples are provided and general challenges are discussed to exemplify complementary features of the modalities, as well as important strengths and weaknesses of combined assessments. Alzheimer's disease is highlighted, as this clinical area has been strongly impacted by multimodality neuroimaging findings that have improved understanding of the natural history of disease progression, early disease detection, and informed therapy evaluation. PMID:22434068
Comparative study of multimodal biometric recognition by fusion of iris and fingerprint.
Benaliouche, Houda; Touahria, Mohamed
2014-01-01
This research investigates the comparative performance of three different approaches for multimodal recognition of combined iris and fingerprints: the classical sum rule, the weighted sum rule, and a fuzzy logic method. The scores from the different biometric traits of iris and fingerprint are fused at the matching score and decision levels. Score combination is performed after normalizing both scores using the min-max rule. Our experimental results suggest that the fuzzy logic method for combining matching scores at the decision level is the best, followed by the classical weighted sum rule and the classical sum rule, in that order. The performance of each method is reported in terms of matching time, error rates, and accuracy after exhaustive tests on the public CASIA-Iris databases V1 and V2 and the FVC 2004 fingerprint database. Experimental results prior to fusion and after fusion are presented, followed by comparison with related works in the current literature. Fusion by fuzzy logic decision mimics human reasoning in a soft and simple way and gives enhanced results.
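A minimal sketch of the score-combination step, min-max normalization followed by the classical sum and weighted sum rules, is given below; the scores and weights are illustrative, and the fuzzy-logic variant would replace the weighted sum with rule-based membership functions.

```python
import numpy as np

def min_max(scores):
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

# Match scores of one probe against four enrolled candidates (illustrative):
iris = min_max([0.62, 0.71, 0.40, 0.93])
finger = min_max([0.55, 0.80, 0.35, 0.88])

sum_rule = iris + finger                    # classical sum rule
weighted = 0.6 * iris + 0.4 * finger        # weighted sum rule (weights assumed)
best_candidate = int(np.argmax(weighted))   # decision: accept top candidate
```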
NASA Astrophysics Data System (ADS)
Wasserman, Richard Marc
The radiation therapy treatment planning (RTTP) process may be subdivided into three planning stages: gross tumor delineation, clinical target delineation, and modality dependent target definition. The research presented will focus on the first two planning tasks. A gross tumor target delineation methodology is proposed which focuses on the integration of MRI, CT, and PET imaging data towards the generation of a mathematically optimal tumor boundary. The solution to this problem is formulated within a framework integrating concepts from the fields of deformable modelling, region growing, fuzzy logic, and data fusion. The resulting fuzzy fusion algorithm can integrate both edge and region information from multiple medical modalities to delineate optimal regions of pathological tissue content. The subclinical boundaries of an infiltrating neoplasm cannot be determined explicitly via traditional imaging methods and are often defined to extend a fixed distance from the gross tumor boundary. In order to improve the clinical target definition process an estimation technique is proposed via which tumor growth may be modelled and subclinical growth predicted. An in vivo, macroscopic primary brain tumor growth model is presented, which may be fit to each patient undergoing treatment, allowing for the prediction of future growth and consequently the ability to estimate subclinical local invasion. Additionally, the patient specific in vivo tumor model will be of significant utility in multiple diagnostic clinical applications.
Robust QRS peak detection by multimodal information fusion of ECG and blood pressure signals.
Ding, Quan; Bai, Yong; Erol, Yusuf Bugra; Salas-Boni, Rebeca; Zhang, Xiaorong; Hu, Xiao
2016-11-01
QRS peak detection is a challenging problem when the ECG signal is corrupted. However, additional physiological signals may also provide information about the QRS position. In this study, we focus on a unique benchmark provided by the PhysioNet/Computing in Cardiology Challenge 2014 and the Physiological Measurement focus issue on robust detection of heart beats in multimodal data, which aimed to explore robust methods for QRS detection in multimodal physiological signals. A dataset of 200 training and 210 testing records is used, where the testing records are hidden and used for performance evaluation only. An information fusion framework for robust QRS detection is proposed by leveraging existing ECG and ABP analysis tools and combining heart beats derived from different sources. Results show that our approach achieves an overall accuracy of 90.94% and 88.66% on the training and testing datasets, respectively. Furthermore, we observe the expected performance at each step of the proposed approach, as evidence of its effectiveness. A discussion of the limitations of our approach is also provided.
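One plausible reading of such a fusion framework, not the authors' exact algorithm, is a quality-based switch between ECG-derived and ABP-derived beats, sketched below with SciPy peak detection; the sampling rate, thresholds, and agreement rule are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 250  # sampling rate in Hz (assumption)

def ecg_beats(ecg):
    peaks, _ = find_peaks(ecg, distance=int(0.3 * FS), prominence=np.std(ecg))
    return peaks

def abp_beats(abp):
    peaks, _ = find_peaks(abp, distance=int(0.3 * FS))
    return peaks  # systolic peaks as surrogate beat markers

def fuse_beats(ecg, abp, tol=int(0.15 * FS), min_agreement=0.8):
    e, a = ecg_beats(ecg), abp_beats(abp)
    if len(e) == 0 or len(a) == 0:
        return e if len(e) else a
    # Fraction of ECG beats confirmed by a nearby ABP pulse.
    agreement = np.mean([np.min(np.abs(a - p)) < tol for p in e])
    return e if agreement >= min_agreement else a  # fall back on ABP beats
```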
Sensor and information fusion for improved hostile fire situational awareness
NASA Astrophysics Data System (ADS)
Scanlon, Michael V.; Ludwig, William D.
2010-04-01
A research-oriented Army Technology Objective (ATO) named Sensor and Information Fusion for Improved Hostile Fire Situational Awareness uniquely focuses on the underpinning technologies to detect and defeat any hostile threat before, during, and after its occurrence. This is a joint effort led by the Army Research Laboratory, with the Armaments and the Communications and Electronics Research, Development, and Engineering Centers (ARDEC and CERDEC) as partners. It addresses distributed sensor fusion and collaborative situational awareness enhancements, focusing on the underpinning technologies to detect/identify potential hostile shooters prior to firing a shot and to detect/classify/locate the firing point of hostile small arms, mortars, rockets, RPGs, and missiles after the first shot. A field experiment addressed not only diverse-modality sensor performance and sensor-fusion benefits, but also gathered useful data to develop and demonstrate the ad hoc networking and dissemination of relevant data and actionable intelligence. Represented at this field experiment were various sensor platforms such as UGS, soldier-worn sensors, manned ground vehicles, UGVs, UAVs, and helicopters. This ATO continues to evaluate applicable technologies, including retro-reflection, UV, IR, visible, glint, LADAR, radar, acoustic, seismic, E-field, narrow-band emission, and image-processing techniques, to detect the threats with very high confidence. Networked fusion of multi-modal data will reduce false alarms and improve actionable intelligence by distributing grid coordinates, detection report features, and imagery of threats.
Meng, Xing; Jiang, Rongtao; Lin, Dongdong; Bustillo, Juan; Jones, Thomas; Chen, Jiayu; Yu, Qingbao; Du, Yuhui; Zhang, Yu; Jiang, Tianzi; Sui, Jing; Calhoun, Vince D.
2016-01-01
Neuroimaging techniques have greatly enhanced the understanding of neurodiversity (human brain variation across individuals) in both health and disease. The ultimate goal of using brain imaging biomarkers is to perform individualized predictions. Here we propose a generalized framework that can predict explicit values of the targeted measures by taking advantage of joint information from multiple modalities. This framework also enables whole-brain voxel-wise searching by combining multivariate techniques such as ReliefF, clustering, correlation-based feature selection, and multiple regression models, which is more flexible and can achieve better prediction performance than alternative atlas-based methods. For 50 healthy controls and 47 schizophrenia patients, three kinds of features derived from resting-state fMRI (fALFF), sMRI (gray matter), and DTI (fractional anisotropy) were extracted and fed into a regression model, achieving high prediction accuracy for both cognitive scores (MCCB composite r = 0.7033, MCCB social cognition r = 0.7084) and symptomatic scores (positive and negative syndrome scale [PANSS] positive r = 0.7785, PANSS negative r = 0.7804). Moreover, the brain areas likely responsible for cognitive deficits in schizophrenia, including middle temporal gyrus, dorsolateral prefrontal cortex, striatum, cuneus, and cerebellum, were located with different weights, as were regions predicting PANSS symptoms, including thalamus, striatum, and inferior parietal lobule, pinpointing the potential neuromarkers. Finally, compared to a single modality, the multimodal combination achieves higher prediction accuracy and enables individualized prediction of multiple clinical measures. There is more work to be done, but the current results highlight the potential utility of multimodal brain imaging biomarkers to eventually inform clinical decision-making. PMID:27177764
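The core of such a predictive pipeline, concatenated multimodal features, a filter-style feature selector, and a regression model evaluated by cross-validated correlation, can be sketched as follows. scikit-learn's mutual-information filter stands in for ReliefF here, and all shapes and data are illustrative.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_selection import SelectKBest, mutual_info_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# Concatenated fALFF, gray-matter, and FA features (shapes illustrative).
X = np.hstack([rng.normal(size=(97, 200)) for _ in range(3)])
y = rng.normal(size=97)                          # e.g. a PANSS or MCCB score

model = make_pipeline(
    SelectKBest(mutual_info_regression, k=50),   # ReliefF stand-in
    LinearRegression())
y_hat = cross_val_predict(model, X, y, cv=5)
r, _ = pearsonr(y, y_hat)    # the paper reports prediction performance as r
```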
Multimodal Discourse Analysis of the Movie "Argo"
ERIC Educational Resources Information Center
Bo, Xu
2018-01-01
Based on multimodal discourse theory, this paper makes a multimodal discourse analysis of some shots in the movie "Argo" from the perspective of context of culture, context of situation and meaning of image. Results show that this movie constructs multimodal discourse through particular context, language and image, and successfully…
Automatic tissue image segmentation based on image processing and deep learning
NASA Astrophysics Data System (ADS)
Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting
2018-02-01
Image segmentation plays an important role in multimodality imaging, especially in fusing structural images offered by CT and MRI with functional images collected by optical or other novel imaging technologies. Image segmentation also provides a detailed structural description for quantitative visualization of treatment light distribution in the human body when incorporated with a 3D light transport simulation method. Here we used image enhancement, morphological operators, and morphometry methods to extract accurate contours of different tissues, such as skull, cerebrospinal fluid (CSF), gray matter (GM), and white matter (WM), on five fMRI head image datasets. We then utilized a convolutional neural network to realize automatic segmentation of the images in a deep learning way, and we also introduced parallel computing. Such approaches greatly reduced the processing time compared to manual and semi-automatic segmentation, which is of great importance for improving speed and accuracy as more and more samples are learned. Our results can be used as criteria when diagnosing diseases such as cerebral atrophy, which is caused by pathological changes in gray matter or white matter. We demonstrated the great potential of such combined image processing and deep learning for automatic tissue image segmentation in personalized medicine, especially in monitoring and treatment.
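A classical front end of the kind described, contrast enhancement, multi-level thresholding, and morphological clean-up before CNN refinement, might look like the scikit-image sketch below; the number of classes and the morphological parameters are assumptions.

```python
import numpy as np
from skimage import exposure, filters, morphology

def coarse_tissue_masks(slice2d, n_classes=4):
    """slice2d: grayscale slice scaled to [0, 1]. Returns one mask per class."""
    img = exposure.equalize_adapthist(slice2d)          # contrast enhancement
    thresholds = filters.threshold_multiotsu(img, classes=n_classes)
    labels = np.digitize(img, bins=thresholds)          # 0 .. n_classes-1
    masks = []
    for c in range(n_classes):                          # e.g. CSF, GM, WM, skull
        m = morphology.remove_small_objects(labels == c, min_size=64)
        masks.append(morphology.binary_closing(m, morphology.disk(2)))
    return masks
```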
Construction of a multimodal CT-video chest model
NASA Astrophysics Data System (ADS)
Byrnes, Patrick D.; Higgins, William E.
2014-03-01
Bronchoscopy enables a number of minimally invasive chest procedures for diseases such as lung cancer and asthma. For example, using the bronchoscope's continuous video stream as a guide, a physician can navigate through the lung airways to examine general airway health, collect tissue samples, or administer a disease treatment. In addition, physicians can now use new image-guided intervention (IGI) systems, which draw upon both three-dimensional (3D) multi-detector computed tomography (MDCT) chest scans and bronchoscopic video, to assist with bronchoscope navigation. Unfortunately, little use is made of the acquired video stream, a potentially invaluable source of information. In addition, little effort has been made to link the bronchoscopic video stream to the detailed anatomical information given by a patient's 3D MDCT chest scan. We propose a method for constructing a multimodal CT-video model of the chest. After automatically computing a patient's 3D MDCT-based airway-tree model, the method next parses the available video data to generate a positional linkage between a sparse set of key video frames and airway path locations. Next, a fusion/mapping of the video's color mucosal information and MDCT-based endoluminal surfaces is performed. This results in the final multimodal CT-video chest model. The data structure constituting the model provides a history of those airway locations visited during bronchoscopy. It also provides for quick visual access to relevant sections of the airway wall by condensing large portions of endoscopic video into representative frames containing important structural and textural information. When examined with a set of interactive visualization tools, the resulting fused data structure provides a rich multimodal data source. We demonstrate the potential of the multimodal model with both phantom and human data.
Multimodal Imaging of Human Brain Activity: Rational, Biophysical Aspects and Modes of Integration
Blinowska, Katarzyna; Müller-Putz, Gernot; Kaiser, Vera; Astolfi, Laura; Vanderperren, Katrien; Van Huffel, Sabine; Lemieux, Louis
2009-01-01
Until relatively recently the vast majority of imaging and electrophysiological studies of human brain activity have relied on single-modality measurements usually correlated with readily observable or experimentally modified behavioural or brain state patterns. Multi-modal imaging is the concept of bringing together observations or measurements from different instruments. We discuss the aims of multi-modal imaging and the ways in which it can be accomplished using representative applications. Given the importance of haemodynamic and electrophysiological signals in current multi-modal imaging applications, we also review some of the basic physiology relevant to understanding their relationship. PMID:19547657
Nanoparticles in Higher-Order Multimodal Imaging
NASA Astrophysics Data System (ADS)
Rieffel, James Ki
Imaging procedures are a cornerstone of our current medical infrastructure. In everything from screening to diagnostics to treatment, medical imaging is perhaps our greatest tool for evaluating individual health. Recently, there has been a tremendous increase in the development of multimodal systems that combine the strengths of complementary imaging technologies to overcome their individual weaknesses. Clinically, this has manifested in the virtually universal manufacture of combined PET-CT scanners. With this push toward more integrated imaging, new contrast agents with multimodal functionality are needed. Nanoparticle-based systems are ideal candidates based on their unique size, properties, and diversity. In chapter 1, an extensive background on recent multimodal imaging agents capable of enhancing signal or contrast in three or more modalities is presented. Chapter 2 discusses the development and characterization of a nanoparticulate probe with hexamodal imaging functionality. It is my hope that the information contained in this thesis will demonstrate the many benefits of nanoparticles in multimodal imaging and provide insight into the potential of fully integrated imaging.
Imaging and machine learning techniques for diagnosis of Alzheimer's disease.
Mirzaei, Golrokh; Adeli, Anahita; Adeli, Hojjat
2016-12-01
Alzheimer's disease (AD) is a common health problem in elderly people, and there has been considerable research toward its diagnosis and early detection in the past decade. The sensitivity of biomarkers and the accuracy of detection techniques are key to an accurate diagnosis. This paper presents a state-of-the-art review of research on the diagnosis of AD based on imaging and machine learning techniques. Different segmentation and machine learning techniques used for the diagnosis of AD are reviewed, including thresholding, supervised and unsupervised learning, probabilistic techniques, atlas-based approaches, and fusion of different image modalities. More recent and powerful classification techniques, such as the enhanced probabilistic neural network of Ahmadlou and Adeli, should be investigated with the goal of improving diagnostic accuracy. A combination of different image modalities can help improve diagnostic accuracy, and research is needed on the combination of modalities to discover multi-modal biomarkers.
2014-10-01
Award Number W81XWH-13-1-0494, "Tinnitus Multimodal Imaging", annual report covering 30 Sept 2013 to 29 Oct 2014. Approved for public release; distribution unlimited. Tinnitus is a common auditory…
NASA Astrophysics Data System (ADS)
Rusu, Mirabela; Wang, Haibo; Golden, Thea; Gow, Andrew; Madabhushi, Anant
2013-03-01
Mouse lung models facilitate the investigation of conditions such as chronic inflammation, which are associated with common lung diseases. The multi-scale manifestation of lung inflammation prompted us to use multi-scale imaging, both in vivo and ex vivo MRI along with ex vivo histology, to study it in a new quantitative way. Some imaging modalities, such as MRI, are non-invasive and capture macroscopic features of the pathology, while others, e.g. ex vivo histology, depict detailed structures. Registering such multi-modal data to the same spatial coordinates allows the construction of a comprehensive 3D model for the multi-scale study of diseases. Moreover, it may facilitate the identification and definition of quantitative in vivo imaging signatures for diseases and pathologic processes. We introduce a quantitative image-analytic framework to integrate in vivo MR images of the entire mouse with ex vivo histology of the lung alone, using ex vivo MRI of the lung as a conduit to facilitate their co-registration. In our framework, we first align the MR images by registering the in vivo and ex vivo MRI of the lung using an interactive rigid registration approach. Then we reconstruct the 3D volume of the ex vivo histological specimen by efficient group-wise registration of the 2D slices. The resulting 3D histologic volume is subsequently registered to the MRI volumes by interactive rigid registration, directly to the ex vivo MRI and implicitly to the in vivo MRI. Qualitative evaluation of the registration framework was performed by comparing airway tree structures in ex vivo MRI and ex vivo histology, where airways are visible and may be annotated. We present a use case evaluating our co-registration framework in the context of studying chronic inflammation in a diseased mouse.
Cross contrast multi-channel image registration using image synthesis for MR brain images.
Chen, Min; Carass, Aaron; Jog, Amod; Lee, Junghoon; Roy, Snehashis; Prince, Jerry L
2017-02-01
Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images with different modalities using modality independent features or information theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem using a two-channel registration algorithm capable of using mono-modal similarity measures such as sum of squared differences or cross-correlation. To make it possible to use these same-modality measures, image synthesis is used to create proxy images for the opposite modality as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the multi-channel deformable registration algorithm that is used. With a single exception, all results demonstrated improvements when compared against single channel registrations using the same algorithm with mutual information. Copyright © 2016 Elsevier B.V. All rights reserved.
Multi-Modality Phantom Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huber, Jennifer S.; Peng, Qiyu; Moses, William W.
2009-03-20
Multi-modality imaging has an increasing role in the diagnosis and treatment of a large number of diseases, particularly if both functional and anatomical information are acquired and accurately co-registered. Hence, there is a resulting need for multi-modality phantoms in order to validate image co-registration and calibrate the imaging systems. We present our PET-ultrasound phantom development, including PET and ultrasound images of a simple prostate phantom. We use agar and gelatin mixed with a radioactive solution. We also present our development of custom multi-modality phantoms that are compatible with PET, transrectal ultrasound (TRUS), MRI, and CT imaging. We describe both our selection of tissue-mimicking materials and our phantom construction procedures. These custom PET-TRUS-CT-MRI prostate phantoms use agar-gelatin radioactive mixtures with additional contrast agents and preservatives. We show multi-modality images of these custom prostate phantoms and discuss phantom construction alternatives. Although we are currently focused on prostate imaging, this phantom development is applicable to many multi-modality imaging applications.
Photoacoustic-Based Multimodal Nanoprobes: from Constructing to Biological Applications.
Gao, Duyang; Yuan, Zhen
2017-01-01
Multimodal nanoprobes have attracted intensive attention since they can integrate various imaging modalities to combine the complementary merits of each single modality. Meanwhile, interest in laser-induced photoacoustic imaging is growing rapidly due to its unique advantages in visualizing tissue structure and function with high spatial resolution and satisfactory imaging depth. In this review, we summarize multimodal nanoprobes involving photoacoustic imaging, focusing in particular on methods for constructing them. We divide the synthetic methods into two types. The first we call the "one for all" concept, which exploits the intrinsic properties of the elements in a single particle. The second is the "all in one" concept, which integrates different functional blocks in one particle. We then briefly introduce the applications of these multifunctional nanoprobes for in vivo imaging and imaging-guided tumor therapy. Finally, we discuss the advantages and disadvantages of the present methods for constructing multimodal nanoprobes and share our viewpoints on this area.
Long-range dismount activity classification: LODAC
NASA Astrophysics Data System (ADS)
Garagic, Denis; Peskoe, Jacob; Liu, Fang; Cuevas, Manuel; Freeman, Andrew M.; Rhodes, Bradley J.
2014-06-01
Continuous classification of dismount types (including gender, age, ethnicity) and their activities (such as walking, running) evolving over space and time is challenging. Limited sensor resolution (often exacerbated as a function of platform standoff distance) and clutter from shadows in dense target environments, unfavorable environmental conditions, and the normal properties of real data all contribute to the challenge. The unique and innovative aspect of our approach is a synthesis of multimodal signal processing with incremental non-parametric, hierarchical Bayesian machine learning methods to create a new kind of target classification architecture. This architecture is designed from the ground up to optimally exploit correlations among the multiple sensing modalities (multimodal data fusion) and rapidly and continuously learns (online self-tuning) patterns of distinct classes of dismounts given little a priori information. This increases classification performance in the presence of challenges posed by anti-access/area denial (A2/AD) sensing. To fuse multimodal features, Long-range Dismount Activity Classification (LODAC) develops a novel statistical information theoretic approach for multimodal data fusion that jointly models multimodal data (i.e., a probabilistic model for cross-modal signal generation) and discovers the critical cross-modal correlations by identifying components (features) with maximal mutual information (MI) which is efficiently estimated using non-parametric entropy models. LODAC develops a generic probabilistic pattern learning and classification framework based on a new class of hierarchical Bayesian learning algorithms for efficiently discovering recurring patterns (classes of dismounts) in multiple simultaneous time series (sensor modalities) at multiple levels of feature granularity.
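The histogram-based mutual-information estimate used to rank cross-modal feature pairs can be sketched in a few lines of NumPy, as below; the bin count is an assumption, and the ranking line at the end is purely illustrative.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Histogram-based MI (in nats) between two 1-D feature streams."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                                       # avoid log(0)
    return float(np.sum(pxy[nz] *
                        np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

# Rank candidate cross-modal feature pairs by estimated MI (illustrative):
# pairs.sort(key=lambda f: mutual_information(f.modality_a, f.modality_b),
#            reverse=True)
```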
Verma, Gyanendra K; Tiwary, Uma Shanker
2014-11-15
The purpose of this paper is twofold: (i) to investigate emotion representation models and explore the possibility of a model with a minimum number of continuous dimensions, and (ii) to recognize and predict emotion from measured physiological signals using a multiresolution approach. The multimodal physiological signals are electroencephalogram (EEG) (32 channels) and peripheral signals (8 channels: galvanic skin response (GSR), blood volume pressure, respiration pattern, skin temperature, electromyogram (EMG), and electrooculogram (EOG)) as given in the DEAP database. We discuss theories of emotion modeling based on (i) basic emotions, (ii) the cognitive appraisal and physiological response approach, and (iii) the dimensional approach, and propose a three-continuous-dimensional representation model for emotions. A clustering experiment on the given valence, arousal, and dominance values of various emotions was performed to validate the proposed model. A novel approach for multimodal fusion of information from a large number of channels to classify and predict emotions is also proposed. The Discrete Wavelet Transform, a classical transform for multiresolution analysis of signals, is used in this study. Experiments were performed to classify different emotions with four classifiers. The average accuracies are 81.45%, 74.37%, 57.74%, and 75.94% for the SVM, MLP, KNN, and MMC classifiers, respectively. The best accuracy is for 'Depressing', at 85.46% using SVM. The 32 EEG channels are treated as independent modes, and features from each channel are given equal importance. Some of the channel data may be correlated, but they may also contain supplementary information. In comparison with results reported by others, the high accuracy of 85% with 13 emotions and 32 subjects clearly demonstrates the potential of our multimodal fusion approach. Copyright © 2013 Elsevier Inc. All rights reserved.
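A minimal sketch of the multiresolution feature step, a discrete wavelet decomposition per channel with sub-band energies fed to an SVM, follows; the wavelet family, decomposition level, energy feature, and random data are assumptions rather than the paper's exact setup.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_energy_features(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])   # energy per sub-band

def trial_features(trial):                # trial: (n_channels, n_samples)
    return np.concatenate([dwt_energy_features(ch) for ch in trial])

rng = np.random.default_rng(2)
X = np.array([trial_features(rng.normal(size=(40, 1024))) for _ in range(30)])
y = rng.integers(0, 2, size=30)           # e.g. high vs. low valence labels
clf = SVC().fit(X, y)                     # one of the four classifiers compared
```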
Multimodal Diffuse Optical Imaging
NASA Astrophysics Data System (ADS)
Intes, Xavier; Venugopal, Vivek; Chen, Jin; Azar, Fred S.
Diffuse optical imaging, particularly diffuse optical tomography (DOT), is an emerging clinical modality capable of providing unique functional information, at a relatively low cost, and with nonionizing radiation. Multimodal diffuse optical imaging has enabled a synergistic combination of functional and anatomical information: the quality of DOT reconstructions has been significantly improved by incorporating the structural information derived by the combined anatomical modality. In this chapter, we will review the basic principles of diffuse optical imaging, including instrumentation and reconstruction algorithm design. We will also discuss the approaches for multimodal imaging strategies that integrate DOI with clinically established modalities. The merit of the multimodal imaging approaches is demonstrated in the context of optical mammography, but the techniques described herein can be translated to other clinical scenarios such as brain functional imaging or muscle functional imaging.
Quang, Timothy; Tran, Emily Q; Schwarz, Richard A; Williams, Michelle D; Vigneswaran, Nadarajah; Gillenwater, Ann M; Richards-Kortum, Rebecca
2017-10-01
The 5-year survival rate for patients with oral cancer remains low, in part because diagnosis often occurs at a late stage. Early and accurate identification of oral high-grade dysplasia and cancer can help improve patient outcomes. Multimodal optical imaging is an adjunctive diagnostic technique in which autofluorescence imaging is used to identify high-risk regions within the oral cavity, followed by high-resolution microendoscopy to confirm or rule out the presence of neoplasia. Multimodal optical images were obtained from 206 sites in 100 patients. Histologic diagnosis, either from a punch biopsy or an excised surgical specimen, was used as the gold standard for all sites. Histopathologic diagnoses of moderate dysplasia or worse were considered neoplastic. Images from 92 sites in the first 30 patients were used as a training set to develop automated image analysis methods for identification of neoplasia. Diagnostic performance was evaluated prospectively using images from 114 sites in the remaining 70 patients as a test set. In the training set, multimodal optical imaging with automated image analysis correctly classified 95% of nonneoplastic sites and 94% of neoplastic sites. Among the 56 sites in the test set that were biopsied, multimodal optical imaging correctly classified 100% of nonneoplastic sites and 85% of neoplastic sites. Among the 58 sites in the test set that corresponded to a surgical specimen, multimodal imaging correctly classified 100% of nonneoplastic sites and 61% of neoplastic sites. These findings support the potential of multimodal optical imaging to aid in the early detection of oral cancer. Cancer Prev Res; 10(10); 563-70. ©2017 American Association for Cancer Research.
Fusion and Sense Making of Heterogeneous Sensor Network and Other Sources
2017-03-16
The report describes a multimodal fusion framework that uses both training data and web resources for scene classification; experimental results on benchmark datasets show that the proposed text-aided scene classification framework can significantly improve classification performance. Further experiments model fusion on the human perceiver, whose adaptability is achieved by reliability-dependent weighting of different sensory modalities.
Multimodal quantitative phase and fluorescence imaging of cell apoptosis
NASA Astrophysics Data System (ADS)
Fu, Xinye; Zuo, Chao; Yan, Hao
2017-06-01
Fluorescence microscopy, which relies on fluorescence labeling, can observe intercellular changes that transmitted- and reflected-light microscopy techniques cannot resolve. However, regions without fluorescence labeling are not imaged, so processes occurring simultaneously in those regions cannot be revealed. Moreover, fluorescence imaging is 2D, so information along the depth dimension is missing and even the information within labeled regions is incomplete. Quantitative phase imaging, on the other hand, can image cells in 3D in real time through phase calculation, but its resolution is limited by optical diffraction and it cannot observe intercellular changes below 200 nanometers. In this work, fluorescence imaging and quantitative phase imaging are combined into a multimodal imaging system. Such a system can simultaneously observe detailed intercellular phenomena and 3D cell morphology. In this study, the proposed multimodal imaging system is used to observe cell behavior during apoptosis. The aim is to highlight the limitations of fluorescence microscopy and to point out the advantages of combined quantitative phase and fluorescence imaging. The proposed multimodal quantitative phase imaging could be further applied in cell-related biomedical research, such as tumor studies.
Multimodal imaging of ischemic wounds
NASA Astrophysics Data System (ADS)
Zhang, Shiwu; Gnyawali, Surya; Huang, Jiwei; Liu, Peng; Gordillo, Gayle; Sen, Chandan K.; Xu, Ronald
2012-12-01
The wound healing process involves the reparative phases of inflammation, proliferation, and remodeling. Interrupting any of these phases may result in chronically unhealed wounds, amputation, or even patient death. Quantitative assessment of wound tissue ischemia, perfusion, and inflammation provides critical information for appropriate detection, staging, and treatment of chronic wounds. However, no method has been available for noninvasive, simultaneous, and quantitative imaging of these tissue parameters. We integrated hyperspectral, laser speckle, and thermographic imaging modalities into a single setup for multimodal assessment of tissue oxygenation, perfusion, and inflammation characteristics. Advanced algorithms were developed for accurate reconstruction of wound oxygenation and appropriate co-registration between the different imaging modalities. The multimodal wound imaging system is being validated in an ongoing clinical trial approved by the OSU IRB, in which a wound 3 mm in diameter was introduced on a healthy subject's lower extremity and the healing process was serially monitored by the multimodal imaging setup. Our experiments demonstrated the clinical usability of multimodal wound imaging.
In vivo multimodal nonlinear optical imaging of mucosal tissue
NASA Astrophysics Data System (ADS)
Sun, Ju; Shilagard, Tuya; Bell, Brent; Motamedi, Massoud; Vargas, Gracie
2004-05-01
We present a multimodal nonlinear imaging approach to elucidate microstructures and spectroscopic features of oral mucosa and submucosa in vivo. The hamster buccal pouch was imaged using 3-D high resolution multiphoton and second harmonic generation microscopy. The multimodal imaging approach enables colocalization and differentiation of prominent known spectroscopic and structural features such as keratin, epithelial cells, and submucosal collagen at various depths in tissue. Visualization of cellular morphology and epithelial thickness are in excellent agreement with histological observations. These results suggest that multimodal nonlinear optical microscopy can be an effective tool for studying the physiology and pathology of mucosal tissue.
Markel, D; Naqa, I El; Freeman, C; Vallières, M
2012-06-01
To present a novel joint segmentation/registration framework for multimodality image-guided and adaptive radiotherapy. A major challenge to this framework is the sensitivity of many segmentation and registration algorithms to noise. Presented is a level set active contour based on the Jensen-Renyi (JR) divergence to achieve improved noise robustness in a multi-modality imaging space. It was found that the JR divergence, when used for segmentation, has improved robustness to noise compared with mutual information and other entropy-based metrics; the MI metric failed at around 2/3 the noise power of the JR divergence. The JR divergence metric is useful for the task of joint segmentation/registration of multimodality images and shows improved results compared with entropy-based metrics. The algorithm can be easily modified to incorporate non-intensity-based images, which would allow applications in multimodality and texture analysis. © 2012 American Association of Physicists in Medicine.
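The Jensen-Renyi divergence at the heart of the metric can be written down directly: the Renyi entropy of a weighted mixture of distributions minus the weighted sum of their individual Renyi entropies. The sketch below assumes histogram inputs; alpha and the equal weights are illustrative.

```python
import numpy as np

def renyi_entropy(p, alpha=2.0, eps=1e-12):
    p = np.asarray(p, dtype=float) + eps
    p /= p.sum()
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def jensen_renyi(dists, weights=None, alpha=2.0):
    """dists: array of histograms, one per region/modality."""
    d = np.asarray(dists, dtype=float)
    d = d / d.sum(axis=1, keepdims=True)
    w = np.full(len(d), 1.0 / len(d)) if weights is None else np.asarray(weights)
    mixture = (w[:, None] * d).sum(axis=0)
    return renyi_entropy(mixture, alpha) - sum(
        wi * renyi_entropy(di, alpha) for wi, di in zip(w, d))
```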
Ridge-branch-based blood vessel detection algorithm for multimodal retinal images
NASA Astrophysics Data System (ADS)
Li, Y.; Hutchings, N.; Knighton, R. W.; Gregori, G.; Lujan, B. J.; Flanagan, J. G.
2009-02-01
Automatic detection of retinal blood vessels is important for medical diagnosis and imaging. With the development of imaging technologies, various modalities of retinal images are available, yet few currently published algorithms apply to multimodal retinal images, and performance in the presence of pathologies leaves room for improvement. The purpose of this paper is to propose an automatic Ridge-Branch-Based (RBB) algorithm for detecting blood vessel centerlines and blood vessels in multimodal retinal images (for example, color fundus photographs, fluorescein angiograms, fundus autofluorescence images, SLO fundus images, and OCT fundus images). Ridges, which can be considered centerlines of vessel-like patterns, are first extracted. The method uses the connective branching information of image ridges: if ridge pixels are connected, they are more likely to belong to the same class, vessel ridge pixels or non-vessel ridge pixels. Thanks to the good discriminating ability of the designed "Segment-Based Ridge Features", the classifier and its parameters can be easily adapted to multimodal retinal images without ground-truth training. We present thorough experimental results on SLO images, a color fundus photograph database, and other multimodal retinal images, as well as comparisons with other published algorithms. Results show that the RBB algorithm achieved good performance.
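A Hessian-based vesselness filter is one standard way to realize the ridge-extraction front end described above; the scikit-image sketch below uses the Frangi filter with an illustrative threshold, after which the paper's segment-based ridge features and classifier would operate on the resulting centerline candidates.

```python
import numpy as np
from skimage.filters import frangi
from skimage.morphology import skeletonize

def ridge_centerlines(fundus_gray, threshold=0.15):
    """fundus_gray: 2-D grayscale retinal image. Returns a binary ridge map."""
    vesselness = frangi(fundus_gray)            # high response on tubular shapes
    vesselness /= vesselness.max() + 1e-12
    return skeletonize(vesselness > threshold)  # 1-pixel centerline candidates
```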
NaGdF4:Nd3+/Yb3+ Nanoparticles as Multimodal Imaging Agents
NASA Astrophysics Data System (ADS)
Pedraza, Francisco; Rightsell, Chris; Kumar, Ga; Giuliani, Jason; Monton, Car; Sardar, Dhiraj
Medical imaging is a fundamental tool for the diagnosis of numerous ailments. Each imaging modality has unique advantages; however, each also possesses intrinsic limitations, such as low spatial resolution, low sensitivity, shallow penetration depth, or radiation damage. To circumvent these problems, the combination of imaging modalities, or multimodal imaging, has been proposed, such as near-infrared fluorescence (NIRF) imaging combined with magnetic resonance imaging (MRI). By combining their individual advantages, the specificity and selectivity of NIRF with the deep penetration and high spatial resolution of MRI, it is possible to circumvent their shortcomings and obtain a more robust imaging technique. In addition, both imaging modalities are very safe and minimally invasive. Fluorescent nanoparticles, such as NaGdF4:Nd3+/Yb3+, are excellent candidates for NIRF/MRI multimodal imaging. The dopants, Nd and Yb, absorb and emit within the biological window, where near-infrared light is less attenuated by soft tissue; this results in less tissue damage and deeper tissue penetration, making them viable candidates for biological imaging. In addition, the inclusion of Gd results in paramagnetic properties, allowing their use as contrast agents in multimodal imaging. The work presented includes crystallographic results as well as full optical and magnetic characterization to determine the nanoparticles' viability in multimodal imaging.
Alignment of multimodality, 2D and 3D breast images
NASA Astrophysics Data System (ADS)
Grevera, George J.; Udupa, Jayaram K.
2003-05-01
In a larger effort, we are studying methods to improve the specificity of the diagnosis of breast cancer by combining the complementary information available from multiple imaging modalities. Merging information is important for a number of reasons; for example, contrast uptake curves are an indication of malignancy, and determining anatomical locations in corresponding images from various modalities is necessary to ascertain the extent of regions of tissue. To facilitate this fusion, registration becomes necessary. We describe in this paper a framework in which 2D and 3D breast images from MRI, PET, ultrasound, and digital mammography can be registered to facilitate this goal. Briefly, prior to image acquisition, an alignment grid is drawn on the breast skin, and modality-specific markers are placed at the indicated grid points. Images are then acquired by a specific modality with the modality-specific external markers in place, causing the markers to appear in the images. This is the first study that we are aware of to undertake the difficult task of registering 2D and 3D images of such a highly deformable organ (the breast) across such a wide variety of modalities. This paper reports some very preliminary results from this project.
Developing single-laser sources for multimodal coherent anti-Stokes Raman scattering microscopy
NASA Astrophysics Data System (ADS)
Pegoraro, Adrian Frank
Coherent anti-Stokes Raman scattering (CARS) microscopy has developed rapidly and is opening the door to new types of experiments. This work describes the development of new laser sources for CARS microscopy and their use for different applications. It is specifically focused on multimodal nonlinear optical microscopy—the simultaneous combination of different imaging techniques. This allows us to address a diverse range of applications, such as the study of biomaterials, fluid inclusions, atherosclerosis, hepatitis C infection in cells, and ice formation in cells. For these applications new laser sources are developed that allow for practical multimodal imaging. For example, it is shown that using a single Ti:sapphire oscillator with a photonic crystal fiber, it is possible to develop a versatile multimodal imaging system using optimally chirped laser pulses. This system can perform simultaneous two photon excited fluorescence, second harmonic generation, and CARS microscopy. The versatility of the system is further demonstrated by showing that it is possible to probe different Raman modes using CARS microscopy simply by changing a time delay between the excitation beams. Using optimally chirped pulses also enables further simplification of the laser system required by using a single fiber laser combined with nonlinear optical fibers to perform effective multimodal imaging. While these sources are useful for practical multimodal imaging, it is believed that for further improvements in CARS microscopy sensitivity, new excitation schemes are necessary. This has led to the design of a new, high power, extended cavity oscillator that should be capable of implementing new excitation schemes for CARS microscopy as well as other techniques. Our interest in multimodal imaging has led us to other areas of research as well. For example, a fiber-coupling scheme for signal collection in the forward direction is demonstrated that allows for fluorescence lifetime imaging without significant temporal distortion. Also highlighted is an imaging artifact that is unique to CARS microscopy that can alter image interpretation, especially when using multimodal imaging. By combining expertise in nonlinear optics, laser development, fiber optics, and microscopy, we have developed systems and techniques that will be of benefit for multimodal CARS microscopy.
Mobile, Multi-modal, Label-Free Imaging Probe Analysis of Choroidal Oximetry and Retinal Hypoxia
2015-10-01
Award No. W81XWH-14-1-0537. Task excerpts: image choroidal vessels/capillaries using CARS intravital microscopy; Subtask 3: measure oxy-hemoglobin levels in PBI test and control eyes.
CT and Ultrasound Guided Stereotactic High Intensity Focused Ultrasound (HIFU)
NASA Astrophysics Data System (ADS)
Wood, Bradford J.; Yanof, J.; Frenkel, V.; Viswanathan, A.; Dromi, S.; Oh, K.; Kruecker, J.; Bauer, C.; Seip, R.; Kam, A.; Li, K. C. P.
2006-05-01
To demonstrate the feasibility of CT- and B-mode ultrasound (US)-targeted HIFU, a prototype coaxial focused ultrasound transducer was registered to and integrated with a CT scanner. CT and diagnostic ultrasound were used for HIFU targeting and monitoring, with the goals of both thermal ablation and non-thermal enhanced drug delivery. A 1-megahertz coaxial ultrasound transducer was custom fabricated and attached to a passive position-sensing arm and an active six-degree-of-freedom robotic arm via a CT stereotactic frame. The outer therapeutic transducer, with a 10-cm fixed focal zone, was coaxially mounted to an inner diagnostic US transducer (2-4 megahertz, Philips Medical Systems). This coaxial US transducer was connected to a modified commercial focused ultrasound generator (Focus Surgery, Indianapolis, IN) with a maximum total acoustic power of 100 watts. This pre-clinical paradigm was tested for its ability to heat tissue in phantoms with monitoring and navigation from CT and live US. The feasibility of navigation via image fusion of CT with other modalities such as PET and MRI was demonstrated. Heated water phantoms were tested for correlation between CT numbers and temperature (for ablation monitoring). The prototype transducer and integrated CT/US imaging system enabled simultaneous multimodality imaging and therapy. Pre-clinical phantom models validated the treatment paradigm and demonstrated integrated multimodality guidance and treatment monitoring. Temperature changes during phantom cooling corresponded to CT number changes, so contrast-enhanced or non-enhanced CT numbers may potentially be used to monitor thermal ablation with HIFU. Integrated CT, diagnostic US, and therapeutic focused ultrasound bridge a gap between diagnosis and therapy. Preliminary results show that the multimodality system may represent a relatively inexpensive, accessible, and simple method of both targeting and monitoring HIFU effects. Small-animal pre-clinical models may be translated to large animals and humans for HIFU-induced ablation and drug delivery. Integrated CT-guided focused ultrasound holds promise for tissue ablation, enhanced local drug delivery, and CT thermometry for monitoring ablation in near real-time.
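The CT-thermometry idea rests on the roughly linear decrease of water CT numbers with temperature (on the order of -0.4 to -0.5 HU per degree Celsius in the literature). A hedged calibration sketch, with made-up phantom readings rather than the study's data:

```python
# Hedged sketch: linear CT-number-vs-temperature calibration for thermometry.
# The HU values below are illustrative placeholders, not the study's data.
import numpy as np

temps_c = np.array([20.0, 30.0, 40.0, 50.0, 60.0])   # probe temperatures (degC)
hu      = np.array([0.2, -4.1, -8.6, -13.0, -17.3])  # mean ROI CT numbers (HU)

slope, intercept = np.polyfit(temps_c, hu, 1)        # HU per degC, offset

def estimate_temperature(hu_value):
    """Invert the calibration to read temperature from a CT-number ROI."""
    return (hu_value - intercept) / slope

print(f"calibration: {slope:.2f} HU/degC")
print(f"ROI at -10.8 HU -> {estimate_temperature(-10.8):.1f} degC")
```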
Radioactive Nanomaterials for Multimodality Imaging
Chen, Daiqin; Dougherty, Casey A.; Yang, Dongzhi; Wu, Hongwei; Hong, Hao
2016-01-01
Nuclear imaging techniques, primarily positron emission tomography (PET) and single-photon emission computed tomography (SPECT), can provide quantitative information about a biological event in vivo with ultra-high sensitivity; however, their comparatively low spatial resolution is a major limitation in clinical application. By converging nuclear imaging with other imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and optical imaging, hybrid imaging platforms can overcome the limitations of each individual imaging technique. Possessing versatile chemical linking ability and good cargo-loading capacity, radioactive nanomaterials can serve as ideal imaging contrast agents. In this review, we provide a brief overview of current state-of-the-art applications of radioactive nanomaterials in the context of multimodality imaging. We present strategies for incorporating radioisotope(s) into nanomaterials along with applications of radioactive nanomaterials in multimodal imaging. Advantages and limitations of radioactive nanomaterials for multimodal imaging applications are discussed. Finally, a perspective on possible future uses of radioactive nanomaterials is presented for improving diagnosis and patient management in a variety of diseases. PMID:27227167
NASA Astrophysics Data System (ADS)
Khan, Faisal M.; Kulikowski, Casimir A.
2016-03-01
A major focus area for precision medicine is managing the treatment of newly diagnosed prostate cancer patients. For patients with a positive biopsy, clinicians aim to develop an individualized treatment plan based on a mechanistic understanding of the disease factors unique to each patient. Recently, there has been a movement towards a multimodal view of the cancer through the fusion of quantitative information from multiple sources, imaging and otherwise. Simultaneously, there have been significant advances in machine learning methods for medical prognostics which integrate a multitude of predictive factors to develop an individualized risk assessment and prognosis for patients. An emerging area of research is semi-supervised approaches which transduce the appropriate survival time for censored patients. In this work, we apply a novel semi-supervised approach for support vector regression to predict the prognosis of newly diagnosed prostate cancer patients. We integrate clinical characteristics of a patient's disease with imaging-derived metrics for biomarker expression as well as glandular and nuclear morphology. In particular, our goal was to explore the performance of nuclear and glandular architecture within the transduction algorithm and assess their predictive power when compared with the Gleason score manually assigned by a pathologist. Our analysis of a multi-institutional cohort of 1027 patients indicates that glandular and morphometric characteristics not only improve the predictive power of the semi-supervised transduction algorithm, they perform better when the pathological Gleason score is absent. This work represents one of the first assessments of quantitative prostate biopsy architecture versus the Gleason grade in the context of a data fusion paradigm that leverages a semi-supervised approach for risk prognosis.
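One common way to "transduce" targets for censored patients, sketched below under loose assumptions (this is not the authors' algorithm): treat each censoring time as a lower bound on survival, then iteratively replace censored targets with the larger of the model prediction and the observed follow-up time.

```python
# Hedged sketch of semi-supervised transduction for censored survival targets.
# Illustrative only; the paper's semi-supervised SVR is more sophisticated.
import numpy as np
from sklearn.svm import SVR

def transductive_svr(X, followup, event, n_iter=5):
    """X: patient features; followup: event/censoring times; event: 1 if observed."""
    y = followup.astype(float).copy()
    observed = event == 1
    model = SVR(kernel="rbf", C=10.0)
    for _ in range(n_iter):
        model.fit(X, y)
        pred = model.predict(X)
        # Censored survival must be at least the follow-up time: clamp upward.
        y[~observed] = np.maximum(pred[~observed], followup[~observed])
    return model

# X could concatenate clinical variables with imaging-derived biomarker,
# glandular, and nuclear-morphometry features, as in the study.
```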
Mobile robots traversability awareness based on terrain visual sensory data fusion
NASA Astrophysics Data System (ADS)
Shirkhodaie, Amir
2007-04-01
In this paper, we present methods that significantly improve a robot's awareness of its terrain traversability conditions. Traversability awareness is achieved by associating terrain image appearances from different poses and fusing information extracted from multimodal imaging and range-sensor data to localize and cluster environment landmarks. We first describe methods for extracting salient terrain features for landmark registration from two or more images taken at different via points along the robot's trajectory. Image registration is applied to overlay two or more views of the same terrain scene taken from different viewpoints; the registration geometrically aligns the salient landmarks of two images (the reference and sensed images), and a similarity-matching technique is proposed for matching the terrain's salient landmarks. Second, we present three terrain classifiers, based on rules, a supervised neural network, and fuzzy logic, for classifying terrain conditions under uncertainty and mapping the robot's terrain perception to apt traversability measures; each classifier divides a region of natural terrain into finite sub-terrain regions and classifies the terrain condition within each sub-region based on spatial and textural cues detected with image-texture analysis techniques. The paper addresses the technical challenges and navigational skill requirements of mobile robots planning traversable paths in unstructured natural terrain environments similar to Mars surface terrains.
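Of the three classifier variants, the rule-based one is the easiest to sketch. The toy example below maps per-region texture and slope cues to a traversability score; the cue names and thresholds are invented for illustration and are not the paper's values.

```python
# Hedged sketch of a rule-based terrain traversability classifier.
# Cue names and thresholds are illustrative assumptions, not the paper's values.
import numpy as np

def traversability(roughness, slope_deg, vegetation):
    """roughness, vegetation in [0, 1]; slope in degrees; returns score in [0, 1]."""
    if slope_deg > 30.0 or roughness > 0.8:
        return 0.0                                   # hard rule: untraversable
    score = 1.0 - 0.5 * roughness - slope_deg / 60.0 - 0.3 * vegetation
    return float(np.clip(score, 0.0, 1.0))

# Each sub-terrain region is scored independently, mirroring the paper's
# region-wise classification; fuzzy or neural variants would replace the rules.
print(traversability(roughness=0.2, slope_deg=10.0, vegetation=0.1))
```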
Robust Multimodal Dictionary Learning
Cao, Tian; Jojic, Vladimir; Modla, Shannon; Powell, Debbie; Czymmek, Kirk; Niethammer, Marc
2014-01-01
We propose a robust multimodal dictionary learning method for multimodal images. Joint dictionary learning for both modalities may be impaired by a lack of correspondence between image modalities in the training data, for example due to areas of low quality in one of the modalities. Dictionaries learned with such non-corresponding data will induce uncertainty about the image representation. In this paper, we propose a probabilistic model that accounts for image areas that are poorly corresponding between the image modalities. We cast the problem of learning a dictionary in the presence of problematic image patches as a likelihood maximization problem and solve it with a variant of the EM algorithm. Our algorithm iterates between identification of poorly corresponding patches and refinement of the dictionary. We tested our method on synthetic and real data and show improvements in image prediction quality and alignment accuracy when using the method for multimodal image registration. PMID:24505674
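The alternation the abstract describes (flag poorly corresponding patches, then refit the dictionary on the rest) can be approximated with off-the-shelf sparse coding; the sketch below is an editorial stand-in that uses reconstruction error as the correspondence score, not the authors' probabilistic EM model.

```python
# Hedged sketch: alternate between fitting a joint dictionary on inlier patch
# pairs and flagging poorly corresponding pairs by reconstruction error.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def robust_joint_dictionary(patches, n_atoms=64, n_rounds=3, keep=0.9):
    """patches: (N, D) rows stacking corresponding patches from both modalities."""
    inliers = np.ones(len(patches), dtype=bool)
    dico = None
    for _ in range(n_rounds):
        dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                           transform_algorithm="omp",
                                           transform_n_nonzero_coefs=5)
        codes = dico.fit(patches[inliers]).transform(patches)
        err = ((patches - codes @ dico.components_) ** 2).sum(axis=1)
        inliers = err <= np.quantile(err, keep)   # keep best-explained pairs
    return dico, inliers
```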
Fluorescence Imaging Topography Scanning System for intraoperative multimodal imaging
Quang, Tri T.; Kim, Hye-Yeong; Bao, Forrest Sheng; Papay, Francis A.; Edwards, W. Barry; Liu, Yang
2017-01-01
Fluorescence imaging is a powerful technique with diverse applications in intraoperative settings. Visualization of three-dimensional (3D) structures and depth assessment of lesions, however, are often limited in planar fluorescence imaging systems. In this study, a novel Fluorescence Imaging Topography Scanning (FITS) system has been developed which offers color reflectance imaging, fluorescence imaging, and surface topography scanning capabilities. The system is compact and portable, and thus suitable for deployment in the operating room without disturbing the surgical flow. To characterize system performance, parameters including the near-infrared fluorescence detection limit, contrast transfer functions, and topography depth resolution were measured. The developed system was tested ex vivo in chicken tissues with simulated tumors for intraoperative imaging. We subsequently conducted in vivo multimodal imaging of sentinel lymph nodes in mice using FITS and PET/CT. The PET/CT/optical multimodal images were co-registered and conveniently presented to users to guide surgeries. Our results show that the developed system can facilitate multimodal intraoperative imaging. PMID:28437441
Microscopy with multimode fibers
NASA Astrophysics Data System (ADS)
Moser, Christophe; Papadopoulos, Ioannis; Farahi, Salma; Psaltis, Demetri
2013-04-01
Microscopes are usually thought of as comprising imaging elements such as objectives and eyepiece lenses. A different type of microscope, used for endoscopy, consists of waveguiding elements such as fiber bundles, where each fiber in the bundle transports the light corresponding to one pixel in the image. Recently, a new type of microscope has emerged that exploits the large number of propagating modes in a single multimode fiber. We have successfully produced fluorescence images of neural cells with sub-micrometer resolution through a 200-micrometer-core multimode fiber. The imaging method uses digital phase conjugation to reproduce a focal spot at the tip of the multimode fiber; the image is formed by scanning the focal spot digitally and collecting the fluorescence point by point.
Landmark Image Retrieval by Jointing Feature Refinement and Multimodal Classifier Learning.
Zhang, Xiaoming; Wang, Senzhang; Li, Zhoujun; Ma, Shuai
2018-06-01
Landmark retrieval is the task of returning a set of images whose landmarks are similar to those of the query images. Existing studies on landmark retrieval focus on exploiting the geometries of landmarks for visual similarity matching. However, the visual content of social images varies widely within a landmark, and some images share common patterns across different landmarks. On the other hand, it has been observed that social images usually contain multimodal contents, i.e., visual content and text tags, and each landmark has unique characteristics in both. Approaches based purely on similarity matching may therefore not be effective in this environment. In this paper, we investigate whether the geographical correlation between the visual content and the text content can be exploited for landmark retrieval. In particular, we propose an effective multimodal landmark classification paradigm that leverages the multimodal contents of social images for landmark retrieval, integrating feature refinement and the landmark classifier with multimodal contents in a joint model. The geo-tagged images are automatically labeled for classifier learning. Visual features are refined based on low-rank matrix recovery, and a multimodal classifier with group sparsity is learned from the automatically labeled images. Finally, candidate images are ranked by combining the classification result with a measure of semantic consistency between the visual and text content. Experiments on real-world datasets demonstrate the superiority of the proposed approach compared to existing methods.
Soltaninejad, Mohammadreza; Yang, Guang; Lambrou, Tryphon; Allinson, Nigel; Jones, Timothy L; Barrick, Thomas R; Howe, Franklyn A; Ye, Xujiong
2018-04-01
Accurate segmentation of brain tumour in magnetic resonance images (MRI) is a difficult task due to the variety of tumour types. Using information and features from multimodal MRI, including structural MRI and the isotropic (p) and anisotropic (q) components derived from diffusion tensor imaging (DTI), may yield a more accurate analysis of brain images. We propose a novel 3D supervoxel-based learning method for segmentation of tumour in multimodal MRI brain images (conventional MRI and DTI). Supervoxels are generated using information across the multimodal MRI dataset. For each supervoxel, a variety of features are extracted, including texton-descriptor histograms calculated using a set of Gabor filters with different sizes and orientations, together with first-order intensity statistical features. These features are fed into a random forest (RF) classifier that labels each supervoxel as tumour core, oedema, or healthy brain tissue. The method is evaluated on two datasets: 1) our clinical dataset of 11 multimodal patient images and 2) the BRATS 2013 clinical dataset of 30 multimodal images. For our clinical dataset, the average detection sensitivity for tumour (including tumour core and oedema) using multimodal MRI is 86% with a balanced error rate (BER) of 7%, and the Dice score for automatic tumour segmentation against ground truth is 0.84. The corresponding results on the BRATS 2013 dataset are 96%, 2%, and 0.89, respectively. The method demonstrates promising results in the segmentation of brain tumour: adding features from multimodal MRI substantially increases the segmentation accuracy, and the method provides a close match to expert delineation across all tumour grades, leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management.
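A stripped-down version of the supervoxel-plus-random-forest stage is sketched below (assuming recent scikit-image and scikit-learn); per-supervoxel mean channel intensities stand in for the paper's much richer texton and Gabor features.

```python
# Hedged sketch of supervoxel classification (assumes scikit-image >= 0.19 for
# channel_axis). Mean intensities replace the paper's texton/Gabor features;
# n_segments and compactness are placeholder choices.
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

def supervoxel_features(volume):
    """volume: (Z, Y, X, C) multimodal stack (e.g. MRI channels plus DTI p, q)."""
    sv = slic(volume, n_segments=2000, compactness=0.1, channel_axis=-1)
    ids = np.unique(sv)
    feats = np.array([volume[sv == i].mean(axis=0) for i in ids])  # (n_sv, C)
    return sv, ids, feats

# Training on labeled volumes (one label per supervoxel: core/oedema/healthy):
# rf = RandomForestClassifier(n_estimators=200).fit(train_feats, train_labels)
# pred = rf.predict(test_feats)   # one tissue label per supervoxel
```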
MO-DE-202-04: Multimodality Image-Guided Surgery and Intervention: For the Rest of Us
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shekhar, R.
At least three major trends in surgical intervention have emerged over the last decade: a move toward more minimally invasive (or non-invasive) approaches to the surgical target; the development of high-precision treatment delivery techniques; and the increasing role of multi-modality intraoperative imaging in support of such procedures. This symposium includes invited presentations on recent advances in each of these areas and the emerging role for medical physics research in the development and translation of high-precision interventional techniques. The four speakers are: Keyvan Farahani, “Image-guided focused ultrasound surgery and therapy”; Jeffrey H. Siewerdsen, “Advances in image registration and reconstruction for image-guided neurosurgery”; Tina Kapur, “Image-guided surgery and interventions in the advanced multimodality image-guided operating (AMIGO) suite”; Raj Shekhar, “Multimodality image-guided interventions: Multimodality for the rest of us”. Learning Objectives: Understand the principles and applications of HIFU in surgical ablation. Learn about recent advances in 3D–2D and 3D deformable image registration in support of surgical safety and precision. Learn about recent advances in model-based 3D image reconstruction in application to intraoperative 3D imaging. Understand the multi-modality imaging technologies and clinical applications investigated in the AMIGO suite. Understand the emerging need and techniques to implement multi-modality image guidance in surgical applications such as neurosurgery, orthopaedic surgery, vascular surgery, and interventional radiology. Research supported by the NIH and Siemens Healthcare. J. Siewerdsen: Grant Support - National Institutes of Health; Grant Support - Siemens Healthcare; Grant Support - Carestream Health; Advisory Board - Carestream Health; Licensing Agreement - Carestream Health; Licensing Agreement - Elekta Oncology. T. Kapur: P41EB015898. R. Shekhar: Funding R42CA137886 and R41CA192504; Disclosure and CoI: IGI Technologies, small-business partner on the grants.
A multimodal biometric authentication system based on 2D and 3D palmprint features
NASA Astrophysics Data System (ADS)
Aggithaya, Vivek K.; Zhang, David; Luo, Nan
2008-03-01
This paper presents a new personal authentication system that simultaneously exploits 2D and 3D palmprint features. We aim to improve the accuracy and robustness of existing palmprint authentication systems using 3D palmprint features. The proposed system uses an active stereo technique, structured light, to capture a 3D image (range data) of the palm and a registered intensity image simultaneously. A surface-curvature-based method is employed to extract features from the 3D palmprint, and a Gabor-feature-based competitive coding scheme is used for the 2D representation. We analyze these representations individually and combine them with a score-level fusion technique. Our experiments on a database of 108 subjects achieve a significant improvement in performance (equal error rate) when 3D features are integrated, compared to the case when 2D palmprint features alone are employed.
2015-10-01
Tinnitus Multimodal Imaging. Award No. W81XWH-13-1-0494; Principal Investigator: Steven Wan Cheung. Abstract excerpt: Tinnitus is a common auditory perceptual disorder whose neural substrates are under intense debate.
Multimodal Imaging of the Normal Eye.
Kawali, Ankush; Pichi, Francesco; Avadhani, Kavitha; Invernizzi, Alessandro; Hashimoto, Yuki; Mahendradas, Padmamalini
2017-10-01
Multimodal imaging is the concept of "bundling" images obtained from various imaging modalities, viz., fundus photography, fundus autofluorescence imaging, infrared (IR) imaging, simultaneous fluorescein and indocyanine green angiography, optical coherence tomography (OCT), and, more recently, OCT angiography. Each modality has its pros and cons as well as its limitations. Combining multiple imaging techniques overcomes their individual weaknesses and gives a comprehensive picture, an approach that helps in accurately localizing a lesion and understanding pathology in the posterior segment. It is important to know the imaging appearance of the normal eye before evaluating pathology. This article describes these multimodal imaging modalities in detail and discusses the features of the healthy eye as seen on each of them.
Theory-based transport simulations of TFTR L-mode temperature profiles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bateman, G.
1992-03-01
The temperature profiles from a selection of Tokamak Fusion Test Reactor (TFTR) L-mode discharges (17th European Conference on Controlled Fusion and Plasma Heating, Amsterdam, 1990 (EPS, Petit-Lancy, Switzerland, 1990), p. 114) are simulated with the 1½-D BALDUR transport code (Comput. Phys. Commun. 49, 275 (1988)) using a combination of theoretically derived transport models, called the Multi-Mode Model (Comments Plasma Phys. Controlled Fusion 11, 165 (1988)). The present version of the Multi-Mode Model consists of effective thermal diffusivities resulting from trapped electron modes and ion temperature gradient (ηi) modes, which dominate in the core of the plasma, together with resistive ballooning modes, which dominate in the periphery. Within the context of this transport model and the TFTR simulations reported here, the scaling of confinement with heating power comes from the temperature dependence of the ηi and trapped electron modes, while the scaling with current comes mostly from resistive ballooning modes.
NASA Astrophysics Data System (ADS)
Kirby, Richard; Whitaker, Ross
2016-09-01
In recent years, multi-modal camera rigs consisting of an RGB sensor and an infrared (IR) sensor have become increasingly popular for surveillance and robotics applications. Their advantages include improved foreground/background segmentation, a wider range of workable lighting conditions, and richer information (e.g., visible light and heat signature) for target identification. However, the traditional computer vision approach of mapping image pairs using pixel intensities or image features is often not possible with an RGB/IR image pair. We introduce a novel method that overcomes the lack of common features in RGB/IR image pairs by using a variational optimization algorithm to map the optical flow fields computed from the different-wavelength images. This aligns the flow fields, which in turn produces correspondences similar to those found in a stereo RGB/RGB camera rig using pixel intensities or image features. In addition to aligning the different-wavelength images, these correspondences are used to generate dense disparity and depth maps. We obtain accuracies similar to other multi-modal image alignment methodologies as long as the scene contains sufficient depth variation, although a direct comparison is not possible because of the lack of standard image sets from moving multi-modal camera rigs. We test our method on synthetic optical flow fields and on real image sequences that we created with a multi-modal binocular stereo RGB/IR camera rig, and determine its accuracy by comparing against ground truth.
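The core trick (align flow fields, not intensities) can be caricatured in a few lines: compute dense flow independently in the RGB and IR streams, then search for the disparity at which the two flow fields agree best. The paper uses a variational optimizer; the brute-force horizontal search below is only an editorial stand-in, and it assumes 8-bit grayscale input frames.

```python
# Hedged sketch: align RGB/IR by matching optical flow fields, not intensities.
# Brute-force horizontal search stands in for the paper's variational method.
import cv2
import numpy as np

def dense_flow(prev_gray, next_gray):
    # prev_gray/next_gray: 8-bit single-channel frames from one stream.
    return cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

def best_disparity(flow_rgb, flow_ir, max_shift=20):
    w = flow_rgb.shape[1]
    errs = {}
    for dx in range(-max_shift, max_shift + 1):
        shifted = np.roll(flow_ir, dx, axis=1)
        crop = slice(max_shift, w - max_shift)      # ignore wrap-around columns
        errs[dx] = np.mean((flow_rgb[:, crop] - shifted[:, crop]) ** 2)
    return min(errs, key=errs.get)   # shift where the flow fields agree most
```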
NASA Astrophysics Data System (ADS)
Jermyn, Michael; Ghadyani, Hamid; Mastanduno, Michael A.; Turner, Wes; Davis, Scott C.; Dehghani, Hamid; Pogue, Brian W.
2013-08-01
Multimodal approaches that combine near-infrared (NIR) and conventional imaging modalities have been shown to improve optical parameter estimation dramatically and thus represent a prevailing trend in NIR imaging. These approaches typically apply anatomical templates from magnetic resonance imaging, computed tomography, or ultrasound images to guide the recovery of optical parameters. However, merging these data sets using current technology requires multiple software packages, substantial expertise, and a significant time commitment, and often results in unacceptably poor mesh quality for optical image reconstruction, a reality that represents a significant roadblock for translational research in multimodal NIR imaging. This work addresses these challenges directly by introducing automated DICOM (Digital Imaging and Communications in Medicine) image stack segmentation and a new one-click three-dimensional mesh generator optimized for multimodal NIR imaging, and by combining these capabilities into a single software package (available for free download) with a streamlined workflow. Image processing time and mesh quality benchmarks were examined for four common multimodal NIR use cases (breast, brain, pancreas, and small animal) and compared to a commercial image processing package. Applying these tools resulted in a fivefold decrease in image processing time and a 62% improvement in minimum mesh quality, in the absence of extra mesh postprocessing. These capabilities represent a significant step toward enabling translational multimodal NIR research for both expert and nonexpert users in an open-source platform.
Appearance-based multimodal human tracking and identification for healthcare in the digital home.
Yang, Mau-Tsuen; Huang, Shen-Yen
2014-08-05
There is an urgent need for intelligent home surveillance systems that provide home security, monitor health conditions, and detect emergencies of family members. A fundamental problem in realizing these intelligent services is how to detect, track, and identify people at home. Compared to RFID tags, which need to be worn at all times, vision-based sensors provide a natural and nonintrusive solution. Observing that body appearance and body build, as well as the face, provide valuable cues for human identification, we model and record multi-view faces and the full-body colors and shapes of family members in an appearance database using two Kinects located at the home's entrance. The Kinects and another set of color cameras installed in other parts of the house then detect, track, and identify people by matching captured color images against the registered templates in the appearance database. People are detected and tracked by multisensor fusion (Kinects and color cameras) using a Kalman filter that can handle duplicate or partial measurements. People are identified by multimodal fusion (face, body appearance, and silhouette) using track-based majority voting. Moreover, the appearance-based human detection, tracking, and identification modules cooperate seamlessly and benefit from each other. Experimental results show the effectiveness of human tracking across multiple sensors and of human identification using multi-view faces, full-body clothes, and silhouettes. The proposed home surveillance system can be applied to domestic applications in digital home security and intelligent healthcare.
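The fusion step can be pictured with a constant-velocity Kalman filter that applies one measurement update per sensor report, so duplicate detections tighten the estimate and missing ones are simply skipped; the sketch below is a generic illustration, not the system's exact filter.

```python
# Hedged sketch: constant-velocity Kalman track fusing 0..N position
# measurements per frame (Kinects and color cameras report independently).
import numpy as np

class Track:
    def __init__(self, xy):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])            # [x, y, vx, vy]
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = 1.0  # dt = 1 frame
        self.H = np.eye(2, 4)                                  # observe x, y
        self.Q = 0.01 * np.eye(4)                              # process noise

    def step(self, measurements, meas_var=0.5):
        self.x = self.F @ self.x                               # predict
        self.P = self.F @ self.P @ self.F.T + self.Q
        for z in measurements:                                 # 0, 1, or many
            S = self.H @ self.P @ self.H.T + meas_var * np.eye(2)
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ (np.asarray(z, float) - self.H @ self.x)
            self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                                      # fused position
```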
Joint sparse representation for robust multimodal biometrics recognition.
Shekhar, Sumit; Patel, Vishal M; Nasrabadi, Nasser M; Chellappa, Rama
2014-01-01
Traditional biometric recognition systems rely on a single biometric signature for authentication. While the advantage of using multiple sources of information for establishing identity has been widely recognized, computational models for multimodal biometric recognition have only recently received attention. We propose a multimodal sparse representation method which represents the test data by a sparse linear combination of training data, while constraining the observations from different modalities of the test subject to share their sparse representations. We thus simultaneously take into account correlations as well as coupling information among biometric modalities. A multimodal quality measure is also proposed to weight each modality as it is fused, and we kernelize the algorithm to handle nonlinearity in the data. The optimization problem is solved using an efficient alternating direction method. Various experiments show that the proposed method compares favorably with competing fusion-based methods.
Multimodal Image Registration through Simultaneous Segmentation.
Aganj, Iman; Fischl, Bruce
2017-11-01
Multimodal image registration facilitates the combination of complementary information from images acquired with different modalities. Most existing methods require computation of the joint histogram of the images, while some perform joint segmentation and registration in alternate iterations. In this work, we introduce a new non-information-theoretical method for pairwise multimodal image registration, in which the error of a segmentation computed from both images is used as the registration cost function. We empirically evaluate our method via rigid registration of multi-contrast brain magnetic resonance images, and demonstrate an often higher registration accuracy in the results produced by the proposed technique compared to those of several existing methods.
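The paper's cost (segmentation error computed from both images) can be mimicked crudely with k-means inertia over paired intensities; the 1-D translation search below is an editor's toy stand-in for the actual optimization, with all parameter choices assumed.

```python
# Hedged sketch: score candidate alignments by how well a joint segmentation
# (k-means on paired intensities) explains the overlap; lower inertia = better.
import numpy as np
from sklearn.cluster import KMeans

def segmentation_cost(a, b, k=3):
    pairs = np.stack([a.ravel(), b.ravel()], axis=1)   # joint intensity pairs
    return KMeans(n_clusters=k, n_init=4).fit(pairs).inertia_

def register_shift(fixed, moving, max_shift=10):
    crop = slice(max_shift, -max_shift)                # avoid wrap-around edges
    costs = {s: segmentation_cost(fixed[:, crop],
                                  np.roll(moving, s, axis=1)[:, crop])
             for s in range(-max_shift, max_shift + 1)}
    return min(costs, key=costs.get)   # shift minimizing segmentation error
```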
Multimodal hard x-ray imaging with resolution approaching 10 nm for studies in material science
NASA Astrophysics Data System (ADS)
Yan, Hanfei; Bouet, Nathalie; Zhou, Juan; Huang, Xiaojing; Nazaretski, Evgeny; Xu, Weihe; Cocco, Alex P.; Chiu, Wilson K. S.; Brinkman, Kyle S.; Chu, Yong S.
2018-03-01
We report multimodal scanning hard x-ray imaging with spatial resolution approaching 10 nm and its application to contemporary studies in the field of materials science. The high spatial resolution is achieved by focusing hard x-rays with two crossed multilayer Laue lenses and raster-scanning a sample with respect to the nanofocusing optics. Various techniques are used to characterize and verify the achieved focus size and imaging resolution. The multimodal imaging is realized by simultaneously utilizing absorption-, phase-, and fluorescence-contrast mechanisms. The combination of high spatial resolution and multimodal imaging enables a comprehensive study of a sample on a very fine length scale. In this work, the unique multimodal imaging capability was used to investigate a mixed ionic-electronic conducting ceramic-based membrane material employed in solid oxide fuel cells and membrane separations (a compound of Ce0.8Gd0.2O2-x and CoFe2O4), which revealed the existence of an emergent material phase and quantified the chemical complexity at the nanoscale.
Towards a compact fiber laser for multimodal imaging
NASA Astrophysics Data System (ADS)
Nie, Bai; Saytashev, Ilyas; Dantus, Marcos
2014-03-01
We report on multimodal depth-resolved imaging of unstained living Drosophila melanogaster larvae using sub-50-fs pulses centered at 1060 nm. Both second-harmonic and third-harmonic generation imaging modalities are demonstrated.
Handheld real-time volumetric 3-D gamma-ray imaging
NASA Astrophysics Data System (ADS)
Haefner, Andrew; Barnowski, Ross; Luke, Paul; Amman, Mark; Vetter, Kai
2017-06-01
This paper presents the concept of real-time fusion of gamma-ray imaging and visual scene data for a hand-held mobile Compton imaging system in 3-D. The ability to obtain and integrate both gamma-ray and scene data from a mobile platform enables improved localization and mapping of radioactive materials. This not only enhances the ability to localize these materials, but also provides important contextual information about the scene which, once acquired, can be reviewed and further analyzed later. To demonstrate these concepts, the high-efficiency multimode imager (HEMI) is used in a hand-portable implementation in combination with a Microsoft Kinect sensor. This sensor, in conjunction with open-source software, provides the ability to create a 3-D model of the scene and to track the position and orientation of HEMI in real time. By combining the gamma-ray data and visual data, accurate 3-D maps of gamma-ray sources are produced in real time. This approach is extended to map the location of radioactive materials within objects of unknown geometry.
A wireless modular multi-modal multi-node patch platform for robust biosignal monitoring.
Pantelopoulos, Alexandros; Saldivar, Enrique; Roham, Masoud
2011-01-01
In this paper, a wireless, modular, multi-modal, multi-node patch platform is described. The platform comprises a low-cost, semi-disposable patch designed for unobtrusive ambulatory monitoring of multiple physiological parameters. Owing to its modular design, it can be interfaced with various low-power RF communication and data storage technologies, while the fusion of multi-modal and multi-node data enables measurement of several biosignals from multiple on-body locations for robust feature extraction. Preliminary results illustrate the platform's capability to extract respiration rate from three independent metrics which, combined, give a more robust estimate of the actual respiratory rate.
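Combining the three respiration metrics can be as simple as an inverse-variance weighted average, so the noisiest channel counts least; a minimal sketch with invented numbers follows (the channel names are hypothetical, not the paper's).

```python
# Hedged sketch: fuse independent respiration-rate estimates by
# inverse-variance weighting. Numbers below are invented for illustration.
import numpy as np

def fuse_rates(rates, variances):
    rates = np.asarray(rates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    return float((weights * rates).sum() / weights.sum())

# e.g. rates (breaths/min) from three hypothetical channels and their variances:
print(fuse_rates([14.8, 15.5, 14.2], [0.4, 1.5, 0.8]))
```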
A multimodal image sensor system for identifying water stress in grapevines
NASA Astrophysics Data System (ADS)
Zhao, Yong; Zhang, Qin; Li, Minzan; Shao, Yongni; Zhou, Jianfeng; Sun, Hong
2012-11-01
Water stress is one of the most common limitations on fruit growth, and water is the most limiting resource for crop growth. In grapevines, as in other fruit crops, fruit quality benefits from a certain level of water deficit, which helps balance vegetative and reproductive growth and the flow of carbohydrates to reproductive structures. In this paper, a multi-modal sensor system is presented that measures the reflectance signature of grape plant surfaces and identifies different water stress levels. The system is equipped with one 3CCD camera (three channels: R, G, and IR) and captures and analyzes the grape canopy through its reflectance features to identify water stress levels. The core technology of this multi-modal sensor system could further serve as a decision support system that combines multi-modal sensory data to improve plant stress detection and identify the causes of stress. Images were taken by the multi-modal sensor, which outputs images in near-infrared, green, and red spectral bands. From the acquired images, color features based on color space and reflectance features derived by image processing were calculated. The results showed that these parameters have potential as water-stress indicators; more experiments and analysis are needed to validate this conclusion.
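With red and near-infrared channels available from the 3CCD camera, one natural reflectance feature is an NDVI-style index averaged over the canopy; the sketch below is a plausible editorial illustration, and any stress thresholds would need per-crop field calibration.

```python
# Hedged sketch: NDVI-style index from the 3CCD camera's red and NIR channels
# as one candidate water-stress indicator (thresholds need field calibration).
import numpy as np

def ndvi(red, nir, eps=1e-6):
    red, nir = red.astype(float), nir.astype(float)
    return (nir - red) / (nir + red + eps)

def canopy_mean_ndvi(red, nir, canopy_mask):
    """canopy_mask: boolean array selecting grape-canopy pixels."""
    return float(ndvi(red, nir)[canopy_mask].mean())  # lower may indicate stress
```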
Multimodal biometric method that combines veins, prints, and shape of a finger
NASA Astrophysics Data System (ADS)
Kang, Byung Jun; Park, Kang Ryoung; Yoo, Jang-Hee; Kim, Jeong Nyeo
2011-01-01
Multimodal biometrics provides high recognition accuracy and population coverage by using various biometric features. A single finger contains finger veins, fingerprints, and finger geometry features; by using multimodal biometrics, information on these multiple features can be simultaneously obtained in a short time and their fusion can outperform the use of a single feature. This paper proposes a new finger recognition method based on the score-level fusion of finger veins, fingerprints, and finger geometry features. This research is novel in the following four ways. First, the performances of the finger-vein and fingerprint recognition are improved by using a method based on a local derivative pattern. Second, the accuracy of the finger geometry recognition is greatly increased by combining a Fourier descriptor with principal component analysis. Third, a fuzzy score normalization method is introduced; its performance is better than the conventional Z-score normalization method. Fourth, finger-vein, fingerprint, and finger geometry recognitions are combined by using three support vector machines and a weighted SUM rule. Experimental results showed that the equal error rate of the proposed method was 0.254%, which was lower than those of the other methods.
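As a baseline picture of the fusion step, the sketch below applies conventional z-score normalization (the method the paper improves on with fuzzy normalization) followed by a weighted SUM rule over the three matcher scores; the weights and per-matcher statistics are placeholder assumptions.

```python
# Hedged sketch: score-level fusion with z-score normalization and a weighted
# SUM rule. Weights and per-matcher statistics are placeholder assumptions.
import numpy as np

def fused_score(vein, fprint, geometry,
                weights=(0.4, 0.4, 0.2),
                stats=((0.50, 0.10), (0.60, 0.15), (0.40, 0.20))):
    """stats: per-matcher (mean, std) estimated on a development set."""
    scores = (vein, fprint, geometry)
    normalized = [(s - m) / sd for s, (m, sd) in zip(scores, stats)]
    return float(np.dot(weights, normalized))

# Accept the claim if the fused score clears a threshold tuned for the
# desired operating point (e.g. the reported ~0.25% equal error rate).
```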
A hybrid multimodal non-rigid registration of MR images based on diffeomorphic demons.
Lu, Huanxiang; Cattin, Philippe C; Reyes, Mauricio
2010-01-01
In this paper we present a novel hybrid approach for multimodal medical image registration based on diffeomorphic demons. Diffeomorphic demons have proven to be a robust and efficient approach to intensity-based image registration, and a very recent extension even allows mutual information (MI) to be used as a similarity measure for registering multimodal images. However, due to the intensity correspondence uncertainty in some anatomical regions, it is difficult for a purely intensity-based algorithm to solve the registration problem. We therefore propose to combine the resulting transformations from both intensity-based and landmark-based methods for multimodal non-rigid registration based on diffeomorphic demons. Several experiments on different types of MR images were conducted, for which we show that a better anatomical correspondence between the images can be obtained using the hybrid approach than using either intensity information or landmarks alone.
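For flavor, classical (mono-modal) diffeomorphic demons is available off the shelf in SimpleITK; the sketch below registers two MR volumes after histogram matching. The file names are hypothetical, and the paper's MI-based multimodal and landmark-augmented extensions are not part of this stock filter.

```python
# Hedged sketch: stock diffeomorphic demons in SimpleITK (mono-modal MR only;
# the paper's MI-based and landmark-hybrid variants are not included here).
import SimpleITK as sitk

fixed = sitk.ReadImage("fixed_mr.nii", sitk.sitkFloat32)    # hypothetical files
moving = sitk.ReadImage("moving_mr.nii", sitk.sitkFloat32)

moving = sitk.HistogramMatching(moving, fixed)              # harmonize intensities

demons = sitk.DiffeomorphicDemonsRegistrationFilter()
demons.SetNumberOfIterations(100)
demons.SetStandardDeviations(1.5)                           # field smoothing
displacement = demons.Execute(fixed, moving)

warped = sitk.Resample(moving, fixed,
                       sitk.DisplacementFieldTransform(displacement),
                       sitk.sitkLinear, 0.0)
```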
Multimodal imaging of cutaneous wound tissue
NASA Astrophysics Data System (ADS)
Zhang, Shiwu; Gnyawali, Surya; Huang, Jiwei; Ren, Wenqi; Gordillo, Gayle; Sen, Chandan K.; Xu, Ronald
2015-01-01
Quantitative assessment of wound-tissue ischemia, perfusion, and inflammation provides critical information for appropriate detection, staging, and treatment of chronic wounds, yet few methods are available for simultaneous assessment of these tissue parameters in a noninvasive and quantitative fashion. We integrated hyperspectral, laser speckle, and thermographic imaging modalities in a single experimental setup for multimodal assessment of tissue oxygenation, perfusion, and inflammation characteristics. Algorithms were developed for appropriate coregistration of wound images acquired by different imaging modalities at different times. The multimodal wound imaging system was validated in an occlusion experiment, where oxygenation and perfusion maps of a healthy subject's upper extremity were continuously monitored during a postocclusive reactive hyperemia procedure and compared with standard measurements. The system was also tested in a clinical trial where a wound three millimeters in diameter was introduced on a healthy subject's lower extremity and the healing process was continuously monitored. Our in vivo experiments demonstrated the clinical feasibility of multimodal cutaneous wound imaging.
MULTIMODAL IMAGING OF SYPHILITIC MULTIFOCAL RETINITIS.
Curi, Andre L; Sarraf, David; Cunningham, Emmett T
2015-01-01
To describe multimodal imaging of syphilitic multifocal retinitis. Observational case series. Two patients developed multifocal retinitis after treatment of unrecognized syphilitic uveitis with systemic corticosteroids in the absence of appropriate antibiotic therapy. Multimodal imaging localized the foci of retinitis within the retina in contrast to superficial retinal precipitates that accumulate on the surface of the retina in eyes with untreated syphilitic uveitis. Although the retinitis resolved after treatment with systemic penicillin in both cases, vision remained poor in the patient with multifocal retinitis involving the macula. Treatment of unrecognized syphilitic uveitis with corticosteroids in the absence of antitreponemal treatment can lead to the development of multifocal retinitis. Multimodal imaging, and optical coherence tomography in particular, can be used to distinguish multifocal retinitis from superficial retinal precipitates or accumulations.
Deep Multimodal Distance Metric Learning Using Click Constraints for Image Ranking.
Yu, Jun; Yang, Xiaokang; Gao, Fei; Tao, Dacheng
2017-12-01
How do we retrieve images accurately, and how do we rank a group of images precisely and efficiently for specific queries? These problems are critical for researchers and engineers building novel image search engines. First, it is important to obtain an appropriate description that effectively represents the images. In this paper, multimodal features are considered for describing images. The images' unique properties are reflected by visual features, which are correlated to each other, but semantic gaps always exist between images' visual features and their semantics; we therefore utilize click features to reduce the semantic gap. The second key issue is learning an appropriate distance metric to combine these multimodal features. This paper develops a novel deep multimodal distance metric learning (Deep-MDML) method. A structured ranking model is adopted to utilize both visual and click features in distance metric learning (DML). Specifically, images and their related ranking results are first collected to form the training set, with multimodal features, including click and visual features, collected for these images. A group of autoencoders is applied to obtain initial distance metrics in the different visual spaces, and an MDML method is used to assign optimal weights to the different modalities. Alternating optimization is then used to train the ranking model, which serves to rank new queries with click features. Compared with existing image ranking methods, the proposed method adopts a new ranking model that uses multimodal features, including click features and visual features, in DML. We conducted experiments analyzing the proposed Deep-MDML on two benchmark data sets, and the results validate the effectiveness of the method.
Multimodality cardiac imaging at IRCCS Policlinico San Donato: a new interdisciplinary vision.
Lombardi, Massimo; Secchi, Francesco; Pluchinotta, Francesca R; Castelvecchio, Serenella; Montericcio, Vincenzo; Camporeale, Antonia; Bandera, Francesco
2016-04-28
Multimodality imaging is the efficient integration of various methods of cardiovascular imaging to improve the ability to diagnose, guide therapy, or predict outcome. This approach implies both the availability of different technologies in a single unit and the presence of dedicated staff with cardiologic and radiologic backgrounds and certified competence in more than one imaging technique. Interaction with clinical practice and the existence of research programmes and educational activities are pivotal for the success of this model. The aim of this paper is to describe the multimodality cardiac imaging programme recently started at San Donato Hospital.
Jiang, Lu; Greenwood, Tiffany R.; Amstalden van Hove, Erika R.; Chughtai, Kamila; Raman, Venu; Winnard, Paul T.; Heeren, Ron; Artemov, Dmitri; Glunde, Kristine
2014-01-01
Applications of molecular imaging in cancer and other diseases frequently require combining in vivo imaging modalities, such as magnetic resonance and optical imaging, with ex vivo optical, fluorescence, histology, and immunohistochemical (IHC) imaging to investigate and relate molecular and biological processes to imaging parameters within the same region of interest. We have developed a multimodal image reconstruction and fusion framework that accurately combines in vivo magnetic resonance imaging (MRI) and magnetic resonance spectroscopic imaging (MRSI), ex vivo brightfield and fluorescence microscopic imaging, and ex vivo histology imaging. Ex vivo brightfield microscopic imaging was used as an intermediate modality to facilitate the ultimate link between ex vivo histology and in vivo MRI/MRSI. Because the tissue sectioning necessary for optical and histology imaging yields only 2D data, a three-dimensional (3D) reconstruction module for the 2D ex vivo optical and histology imaging data was required; we developed an external-fiducial-marker-based 3D reconstruction method able to fuse optical brightfield and fluorescence with histology imaging data. Registration of 3D tumor shape was pursued to combine in vivo MRI/MRSI and ex vivo optical brightfield and fluorescence imaging data. This registration strategy was applied to in vivo MRI/MRSI, ex vivo optical brightfield/fluorescence, and histology imaging data sets obtained from human breast tumor models. 3D human breast tumor data sets were successfully reconstructed and fused with this platform. PMID:22945331
A multimodal parallel architecture: A cognitive framework for multimodal interactions.
Cohn, Neil
2016-01-01
Human communication is naturally multimodal, and substantial focus has examined the semantic correspondences in speech-gesture and text-image relationships. However, visual narratives, like those in comics, provide an interesting challenge to multimodal communication because the words and/or images can guide the overall meaning, and both modalities can appear in complicated "grammatical" sequences: sentences use a syntactic structure and sequential images use a narrative structure. These dual structures create complexity beyond those typically addressed by theories of multimodality where only a single form uses combinatorial structure, and also pose challenges for models of the linguistic system that focus on single modalities. This paper outlines a broad theoretical framework for multimodal interactions by expanding on Jackendoff's (2002) parallel architecture for language. Multimodal interactions are characterized in terms of their component cognitive structures: whether a particular modality (verbal, bodily, visual) is present, whether it uses a grammatical structure (syntax, narrative), and whether it "dominates" the semantics of the overall expression. Altogether, this approach integrates multimodal interactions into an existing framework of language and cognition, and characterizes interactions between varying complexity in the verbal, bodily, and graphic domains. The resulting theoretical model presents an expanded consideration of the boundaries of the "linguistic" system and its involvement in multimodal interactions, with a framework that can benefit research on corpus analyses, experimentation, and the educational benefits of multimodality.
Hybrid optical acoustic seafloor mapping
NASA Astrophysics Data System (ADS)
Inglis, Gabrielle
The oceanographic research and industrial communities have a persistent demand for detailed three-dimensional seafloor maps that convey both shape and texture. Such data products are used for archeology, geology, ship inspection, biology, and habitat classification. A variety of sensing modalities and processing techniques are available to produce these maps, each with its own potential benefits and related challenges. Multibeam sonar and stereo vision are two such sensors with complementary strengths, making them ideally suited for data fusion. Data fusion approaches, however, have seen only limited application to underwater mapping, and there are no established methods for creating hybrid 3D reconstructions from two underwater sensing modalities. This thesis develops a processing pipeline to synthesize hybrid maps from multi-modal survey data. It is helpful to think of this pipeline as having two distinct phases: navigation refinement and map construction. The thesis extends existing work in underwater navigation refinement by incorporating methods that increase measurement consistency between both multibeam and camera. The result is a self-consistent 3D point cloud comprising camera and multibeam measurements. In the map construction phase, a subset of the multi-modal point cloud retaining the best characteristics of each sensor is selected for the final map. To quantify the desired traits of a map, several characteristics of a useful map are distilled into specific criteria; the different ways hybrid maps can address these criteria justify producing them as an alternative to current methodologies. The processing pipeline implements multi-modal data fusion and outlier rejection with emphasis on different aspects of map fidelity. The resulting point cloud is evaluated in terms of how well it addresses the map criteria. The final hybrid maps retain the strengths of both sensors and show significant improvement over single-modality maps and naively assembled multi-modal maps.
NASA Astrophysics Data System (ADS)
Chun, Wanhee; Do, Dukho; Gweon, Dae-Gab
2013-01-01
We developed a multimodal microscope based on an optical scanning system in order to obtain diverse optical information from the same area of a sample. Multimodal imaging research has mostly depended on commercial microscope platforms, which are easy to use but restrictive when extending imaging modalities. In this work, the beam-scanning optics, notably including a relay lens, was customized to deliver broadband (400-1000 nm) light to a sample without optical error or loss. The customized scanning optics guarantees the best performance of imaging techniques that use light within the design wavelength range. Confocal reflection, confocal fluorescence, and two-photon excitation fluorescence images were obtained through the respective implemented imaging channels, demonstrating imaging feasibility for near-UV, visible, and near-IR continuous light as well as pulsed light in the scanning optics. The imaging performance in terms of spatial resolution and image contrast was verified experimentally; the results were satisfactory in comparison with theory. The advantages of customization, including low cost, outstanding ability to combine modalities, and diverse applications, will help vitalize multimodal imaging research.
Snuderl, Matija; Wirth, Dennis; Sheth, Sameer A; Bourne, Sarah K; Kwon, Churl-Su; Ancukiewicz, Marek; Curry, William T; Frosch, Matthew P; Yaroslavsky, Anna N
2013-01-01
Intraoperative diagnosis plays an important role in accurate sampling of brain tumors, limiting the number of biopsies required and improving the distinction between brain and tumor. The goal of this study was to evaluate dye-enhanced multimodal confocal imaging for discriminating gliomas from nonglial brain tumors and from normal brain tissue for diagnostic use. We investigated a total of 37 samples including glioma (13), meningioma (7), metastatic tumors (9), and normal brain removed for nontumoral indications (8). Tissue was stained in a 0.05 mg/mL aqueous solution of methylene blue (MB) for 2-5 minutes, and multimodal confocal images were acquired using a custom-built microscope. After imaging, tissue was formalin fixed and paraffin embedded for standard neuropathologic evaluation. Thirteen pathologists provided diagnoses based on the multimodal confocal images. The investigated tumor types exhibited distinctive and complementary characteristics in both the reflectance and fluorescence responses. Images showed distinct morphological features similar to standard histology. Pathologists were able to distinguish gliomas from normal brain tissue and nonglial brain tumors, and to render diagnoses from the images in a manner comparable to haematoxylin and eosin (H&E) slides. These results confirm the feasibility of multimodal confocal imaging for intravital intraoperative diagnosis. © 2012 The Authors; Brain Pathology © 2012 International Society of Neuropathology.
Image-guided plasma therapy of cutaneous wound
NASA Astrophysics Data System (ADS)
Zhang, Zhiwu; Ren, Wenqi; Yu, Zelin; Zhang, Shiwu; Yue, Ting; Xu, Ronald
2014-02-01
The wound healing process involves the reparative phases of inflammation, proliferation, and remodeling. Interrupting any of these phases may result in chronically unhealed wounds, amputation, or even patient death. Despite the clinical significance of chronic wound management, no effective methods have been developed for quantitative image-guided treatment. We integrated a multimodal imaging system with a cold atmospheric plasma probe for image-guided treatment of chronic wounds. The multimodal imaging system offers non-invasive, painless, simultaneous, and quantitative assessment of cutaneous wound healing. Cold atmospheric plasma accelerates the wound healing process through many mechanisms, including decontamination, coagulation, and stimulation of wound healing. The therapeutic effect of cold atmospheric plasma is studied in vivo under the guidance of the multimodal imaging system. Cutaneous wounds are created on the dorsal skin of nude mice. During the healing process, the sample wound is treated by cold atmospheric plasma at different controlled dosages, while the control wound heals naturally. The multimodal imaging system, integrating a multispectral imaging module and a laser speckle imaging module, is used to collect information on cutaneous tissue oxygenation (i.e., oxygen saturation, StO2) and blood perfusion simultaneously to assess and guide the plasma therapy. Our preliminary tests show that cold atmospheric plasma in combination with multimodal imaging guidance has the potential to facilitate the healing of chronic wounds.
Multi-Source Learning for Joint Analysis of Incomplete Multi-Modality Neuroimaging Data
Yuan, Lei; Wang, Yalin; Thompson, Paul M.; Narayan, Vaibhav A.; Ye, Jieping
2013-01-01
Incomplete data present serious problems when integrating large-scale brain imaging data sets from different imaging modalities. In the Alzheimer's Disease Neuroimaging Initiative (ADNI), for example, over half of the subjects lack cerebrospinal fluid (CSF) measurements; an independent half of the subjects do not have fluorodeoxyglucose positron emission tomography (FDG-PET) scans; many lack proteomics measurements. Traditionally, subjects with missing measures are discarded, resulting in a severe loss of available information. We address this problem by proposing two novel learning methods in which all samples (with at least one available data source) can be used. In the first method, we divide our samples according to the availability of data sources and learn shared sets of features with state-of-the-art sparse learning methods. Our second method learns a base classifier for each data source independently, based on which we represent each source using a single column of prediction scores; we then estimate the missing prediction scores, which, combined with the existing prediction scores, are used to build a multi-source fusion model. To illustrate the proposed approaches, we classify patients from the ADNI study into groups with Alzheimer's disease (AD), mild cognitive impairment (MCI), and normal controls, based on the multi-modality data. At baseline, ADNI's 780 participants (172 AD, 397 MCI, 211 normal) have at least one of four data types: magnetic resonance imaging (MRI), FDG-PET, CSF, and proteomics. These data are used to test our algorithms. Comprehensive experiments show that our proposed methods yield stable and promising results. PMID:24014189
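The second method admits a compact schematic sketch in Python with scikit-learn; the choice of logistic regression base classifiers and mean imputation of the missing prediction scores are simplifying assumptions for illustration, not the paper's exact estimators:

    import numpy as np
    from sklearn.impute import SimpleImputer
    from sklearn.linear_model import LogisticRegression

    def fuse_prediction_scores(train_sources, y, test_sources):
        """train_sources/test_sources: one (n_subjects x n_features) array per
        modality; subjects missing a modality have all-NaN rows there."""
        tr_cols, te_cols = [], []
        for Xtr, Xte in zip(train_sources, test_sources):
            have = ~np.isnan(Xtr).any(axis=1)
            # Base classifier trained only on subjects with this modality.
            clf = LogisticRegression(max_iter=1000).fit(Xtr[have], y[have])
            def score(X, clf=clf):
                s = np.full(len(X), np.nan)
                ok = ~np.isnan(X).any(axis=1)
                s[ok] = clf.predict_proba(X[ok])[:, 1]
                return s
            tr_cols.append(score(Xtr))
            te_cols.append(score(Xte))
        S_tr, S_te = np.column_stack(tr_cols), np.column_stack(te_cols)
        # Estimate the missing prediction scores, then fit the fusion model.
        imp = SimpleImputer(strategy="mean").fit(S_tr)
        fusion = LogisticRegression().fit(imp.transform(S_tr), y)
        return fusion.predict(imp.transform(S_te))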
NASA Astrophysics Data System (ADS)
Singh, Dharmendra; Kumar, Harish
Earth observation satellites provide data covering different portions of the electromagnetic spectrum at different spatial and spectral resolutions. The increasing availability of information products generated from satellite images is extending our ability to understand the patterns and dynamics of earth resource systems at all scales of inquiry. One of the most important applications is the generation of land cover classifications from satellite images to understand the actual status of various land cover classes. The prospect for the use of satellite images in land cover classification is extremely promising. The quality of satellite images available for land-use mapping is improving rapidly through the development of advanced sensor technology. Particularly noteworthy in this regard is the improved spatial and spectral resolution of the images captured by newer satellite sensors such as MODIS, ASTER, Landsat 7, and SPOT 5. For the full exploitation of increasingly sophisticated multisource data, fusion techniques are being developed. Fused images may enhance interpretation capabilities. The images used for fusion have different temporal and spatial resolutions; therefore, the fused image provides a more complete view of the observed objects. One of the main aims of image fusion is to integrate different data in order to obtain more information than can be derived from each of the single-sensor data alone. A good example of this is the fusion of images acquired by different sensors having different spatial and spectral resolutions. Researchers have been applying fusion techniques for three decades and have proposed various useful methods. High-quality synthesis of spectral information is well suited to, and has been implemented for, land cover classification. More recently, an underlying multiresolution analysis employing the discrete wavelet transform has been used in image fusion. It was found that multisensor image fusion is a tradeoff between the spectral information from a low-resolution multispectral image and the spatial information from a high-resolution image. With the wavelet-transform-based fusion method, it is easy to control this tradeoff. A newer transform, the curvelet transform, was introduced in recent years by Starck. A ridgelet transform is applied to square blocks of detail frames of an undecimated wavelet decomposition, and the curvelet transform is thereby obtained. Since the ridgelet transform possesses basis functions matching directional straight lines, the curvelet transform is capable of representing piecewise linear contours on multiple scales through few significant coefficients. This property leads to a better separation between geometric details and background noise, which may be easily reduced by thresholding curvelet coefficients before they are used for fusion. The Terra and Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) instrument provides high radiometric sensitivity (12 bit) in 36 spectral bands ranging in wavelength from 0.4 µm to 14.4 µm, and its data are freely available. Two bands are imaged at a nominal resolution of 250 m at nadir, five bands at 500 m, and the remaining 29 bands at 1 km. In this paper, band 1 (250 m spatial resolution, 620-670 nm bandwidth) and band 2 (250 m spatial resolution, 842-876 nm bandwidth) are considered, as these bands have special features for identifying agriculture and other land covers.
In January 2006, the Advanced Land Observing Satellite (ALOS) was successfully launched by the Japan Aerospace Exploration Agency (JAXA). The Phased Array type L-band SAR (PALSAR) sensor onboard the satellite acquires SAR imagery at a wavelength of 23.5 cm (frequency 1.27 GHz) with capabilities of multimode and multipolarization observation. PALSAR can operate in several modes: the fine-beam single (FBS) polarization mode (HH), fine-beam dual (FBD) polarization mode (HH/HV or VV/VH), polarimetric (PLR) mode (HH/HV/VH/VV), and ScanSAR (WB) mode (HH/VV) [15]. These capabilities make PALSAR imagery very attractive for a spatially and temporally consistent monitoring system. The principle of Principal Component Analysis is that most of the information within all the bands can be compressed into a much smaller number of bands with little loss of information. It allows us to extract the low-dimensional subspaces that capture the main linear correlations among the high-dimensional image data. This facilitates viewing the explained variance, or signal, in the available imagery, allowing both gross and more subtle features in the imagery to be seen. In this paper we explore fusion techniques for enhancing the land cover classification of low-resolution satellite data, especially freely available satellite data. For this purpose, we fuse the PALSAR principal-component data with the MODIS principal-component data. Initially, MODIS bands 1 and 2 are considered and their principal components are computed. Similarly, the PALSAR HH-, HV-, and VV-polarized data are considered, and their principal components are also computed. The PALSAR principal-component image is then fused with the MODIS principal-component image. The aim of this paper is to analyze the effect of fusing PALSAR data with MODIS data on classification accuracy for major land cover types such as agriculture, water, and urban areas. The curvelet transform has been applied for fusion of these two satellite images, and the minimum-distance classification technique has been applied to the resulting fused image. It is qualitatively and visually observed that the overall classification accuracy of the MODIS image after fusion is enhanced. This type of fusion technique may be quite helpful in the near future for using freely available satellite data to develop monitoring systems for different land cover classes on the earth.
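The principal-component step described above can be computed directly from the band covariance matrix of a set of co-registered bands. A minimal Python sketch of generic first-component extraction, not tied to MODIS or PALSAR specifics:

    import numpy as np

    def first_principal_component(bands):
        """bands: list of co-registered 2-D arrays (one per band/polarization).
        Returns the first principal-component image."""
        X = np.stack([b.ravel() for b in bands], axis=1).astype(float)
        X -= X.mean(axis=0)                   # center each band
        cov = np.cov(X, rowvar=False)         # band-by-band covariance matrix
        vals, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
        pc1 = X @ vecs[:, -1]                 # project onto the top eigenvector
        return pc1.reshape(bands[0].shape)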
Recognition of emotions using multimodal physiological signals and an ensemble deep learning model.
Yin, Zhong; Zhao, Mengyuan; Wang, Yongxiong; Yang, Jingdong; Zhang, Jianhua
2017-03-01
Using deep-learning methodologies to analyze multimodal physiological signals is becoming increasingly attractive for recognizing human emotions. However, conventional deep emotion classifiers may suffer from two drawbacks: the lack of expertise required for determining model structure and the oversimplification of combining multimodal feature abstractions. In this study, a multiple-fusion-layer based ensemble classifier of stacked autoencoders (MESAE) is proposed for recognizing emotions, in which the deep structure is identified based on a physiological-data-driven approach. Each SAE consists of three hidden layers to filter the unwanted noise in the physiological features and derive stable feature representations. An additional deep model is used to achieve the SAE ensembles. The physiological features are split into several subsets according to different feature extraction approaches, with each subset separately encoded by an SAE. The derived SAE abstractions are combined according to the physiological modality to create six sets of encodings, which are then fed to a three-layer, adjacent-graph-based network for feature fusion. The fused features are used to recognize binary arousal or valence states. The DEAP multimodal database was employed to validate the performance of the MESAE. Compared with the best existing emotion classifier, the mean classification rate and F-score improve by 5.26%. The superiority of the MESAE over state-of-the-art shallow and deep emotion classifiers has been demonstrated under different sizes of the available physiological instances. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
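For orientation only, the core building block of such an ensemble, a stacked autoencoder trained greedily layer by layer, can be sketched in a few lines of NumPy. This toy version (tanh units, squared-error loss, full-batch gradient descent) is an illustrative assumption, not the MESAE architecture itself:

    import numpy as np

    def train_autoencoder(X, hidden, epochs=200, lr=0.1, seed=0):
        """One tanh autoencoder layer trained with full-batch gradient descent."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
        W2 = rng.normal(0, 0.1, (hidden, d)); b2 = np.zeros(d)
        for _ in range(epochs):
            H = np.tanh(X @ W1 + b1)          # encode
            E = (H @ W2 + b2 - X) / n         # reconstruction-error gradient
            G = (E @ W2.T) * (1 - H ** 2)     # backprop through tanh
            W2 -= lr * (H.T @ E); b2 -= lr * E.sum(0)
            W1 -= lr * (X.T @ G); b1 -= lr * G.sum(0)
        return W1, b1

    def stacked_encoding(X, layer_sizes=(64, 32, 16)):
        """Greedy layer-wise pretraining: each layer encodes the previous output."""
        codes = X
        for h in layer_sizes:
            W, b = train_autoencoder(codes, h)
            codes = np.tanh(codes @ W + b)
        return codes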
MO-DE-202-02: Advances in Image Registration and Reconstruction for Image-Guided Neurosurgery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siewerdsen, J.
At least three major trends in surgical intervention have emerged over the last decade: a move toward more minimally invasive (or non-invasive) approaches to the surgical target; the development of high-precision treatment delivery techniques; and the increasing role of multi-modality intraoperative imaging in support of such procedures. This symposium includes invited presentations on recent advances in each of these areas and the emerging role for medical physics research in the development and translation of high-precision interventional techniques. The four speakers are: Keyvan Farahani, “Image-guided focused ultrasound surgery and therapy”; Jeffrey H. Siewerdsen, “Advances in image registration and reconstruction for image-guided neurosurgery”; Tina Kapur, “Image-guided surgery and interventions in the advanced multimodality image-guided operating (AMIGO) suite”; Raj Shekhar, “Multimodality image-guided interventions: Multimodality for the rest of us”. Learning Objectives: Understand the principles and applications of HIFU in surgical ablation. Learn about recent advances in 3D–2D and 3D deformable image registration in support of surgical safety and precision. Learn about recent advances in model-based 3D image reconstruction in application to intraoperative 3D imaging. Understand the multi-modality imaging technologies and clinical applications investigated in the AMIGO suite. Understand the emerging need and techniques to implement multi-modality image guidance in surgical applications such as neurosurgery, orthopaedic surgery, vascular surgery, and interventional radiology. Research supported by the NIH and Siemens Healthcare. J. Siewerdsen: Grant Support - National Institutes of Health; Grant Support - Siemens Healthcare; Grant Support - Carestream Health; Advisory Board - Carestream Health; Licensing Agreement - Carestream Health; Licensing Agreement - Elekta Oncology. T. Kapur: P41EB015898. R. Shekhar: Funding - R42CA137886 and R41CA192504; Disclosure and CoI: IGI Technologies, small-business partner on the grants.
Fusion and quality analysis for remote sensing images using contourlet transform
NASA Astrophysics Data System (ADS)
Choi, Yoonsuk; Sharifahmadian, Ershad; Latifi, Shahram
2013-05-01
Recent developments in remote sensing technologies have provided various images with high spatial and spectral resolutions. However, multispectral images have low spatial resolution and panchromatic images have low spectral resolution. Therefore, image fusion techniques are necessary to improve the spatial resolution of spectral images by injecting the spatial details of high-resolution panchromatic images. The objective of image fusion is to provide useful information by improving the spatial resolution and the spectral information of the original images. The fusion results can be utilized in various military, medical imaging, and remote sensing applications. This paper addresses two issues in image fusion: i) the image fusion method and ii) quality analysis of the fusion results. First, a new contourlet-based image fusion method is presented, which is an improvement over wavelet-based fusion. This fusion method is then applied to a case study to demonstrate its fusion performance. The fusion framework and scheme used in the study are discussed in detail. Second, quality analysis of the fusion results is discussed. We employed various quality metrics in order to analyze the fusion results both spatially and spectrally. Our results indicate that the proposed contourlet-based fusion method performs better than conventional wavelet-based fusion methods.
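The wavelet-based baseline that the contourlet method is compared against can be sketched with PyWavelets: the fused image keeps the multispectral approximation band and injects the panchromatic detail bands. This is the generic baseline, shown under the assumption that the multispectral band has already been resampled to the panchromatic grid; it is not the authors' contourlet scheme:

    import pywt

    def wavelet_pansharpen(ms_band, pan, wavelet="db2", levels=2):
        """Keep the multispectral approximation, inject panchromatic details."""
        cms = pywt.wavedec2(ms_band, wavelet, level=levels)
        cpan = pywt.wavedec2(pan, wavelet, level=levels)
        fused = [cms[0]] + cpan[1:]     # MS approximation + PAN detail subbands
        return pywt.waverec2(fused, wavelet)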
A simultaneous multimodal imaging system for tissue functional parameters
NASA Astrophysics Data System (ADS)
Ren, Wenqi; Zhang, Zhiwu; Wu, Qiang; Zhang, Shiwu; Xu, Ronald
2014-02-01
Simultaneous and quantitative assessment of skin functional characteristics in different modalities will facilitate diagnosis and therapy in many clinical applications, such as wound healing. However, many existing clinical practices and multimodal imaging systems are subjective and qualitative, collect multimodal data sequentially, and need co-registration between the different modalities. To overcome these limitations, we developed a multimodal imaging system for quantitative, non-invasive, and simultaneous imaging of cutaneous tissue oxygenation and blood perfusion parameters. The imaging system integrated multispectral and laser speckle imaging technologies into one experimental setup. A LabVIEW interface was developed for equipment control, synchronization, and image acquisition. Advanced algorithms based on wide-gap second-derivative reflectometry and laser speckle contrast analysis (LASCA) were developed for accurate reconstruction of tissue oxygenation and blood perfusion, respectively. Quantitative calibration experiments and a new style of skin-simulating phantom were designed to verify the accuracy and reliability of the imaging system. The experimental results were compared with a Moor tissue oxygenation and perfusion monitor. For in vivo testing, a post-occlusion reactive hyperemia (PORH) procedure in a human subject and an ongoing wound healing monitoring experiment using dorsal skinfold chamber models were conducted to validate the usability of our system for dynamic detection of oxygenation and perfusion parameters. In this study, we have not only set up an advanced multimodal imaging system for cutaneous tissue oxygenation and perfusion parameters but also elucidated its potential for wound healing assessment in clinical practice.
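Of the two reconstruction steps mentioned, laser speckle contrast analysis is the more self-contained: the local speckle contrast K = sigma/mean is computed over a sliding window of the raw speckle image, with lower K indicating higher perfusion. A minimal Python sketch of spatial LASCA (the window size is an arbitrary choice, not the system's setting):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def speckle_contrast(raw, win=7):
        """Spatial LASCA: K = local std / local mean over a win x win window."""
        raw = raw.astype(float)
        mean = uniform_filter(raw, win)
        mean_sq = uniform_filter(raw ** 2, win)
        var = np.maximum(mean_sq - mean ** 2, 0.0)  # clamp numerical negatives
        return np.sqrt(var) / (mean + 1e-12)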
A Multistage Approach for Image Registration.
Bowen, Francis; Hu, Jianghai; Du, Eliza Yingzi
2016-09-01
Successful image registration is an important step for object recognition, target detection, remote sensing, multimodal content fusion, scene blending, and disaster assessment and management. Geometric and photometric variations between images adversely affect an algorithm's ability to estimate the transformation parameters that relate the two images. Local deformations, lighting conditions, object obstructions, and perspective differences all contribute to the challenges faced by traditional registration techniques. In this paper, a novel multistage registration approach is proposed that is resilient to viewpoint differences, image content variations, and lighting conditions. Robust registration is realized through a novel region descriptor that couples the spatial and texture characteristics of invariant feature points. The proposed region descriptor is exploited in a multistage approach, which allows the graph-based descriptor to be utilized in many scenarios and thus allows the algorithm to be applied to a broader set of images. Each successive stage of the registration technique is evaluated through an effective similarity metric, which determines the subsequent action. The registration of aerial and street-view images from pre- and post-disaster scenes provides strong evidence that the proposed method estimates more accurate global transformation parameters than traditional feature-based methods. Experimental results show the robustness and accuracy of the proposed multistage image registration methodology.
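For comparison, the traditional feature-based pipeline that such methods are evaluated against can be sketched in OpenCV: detect invariant feature points, match descriptors, and estimate a global homography with RANSAC. This is the conventional baseline, not the proposed graph-based region descriptor:

    import cv2
    import numpy as np

    def baseline_register(moving, fixed, n_features=2000):
        """Classic ORB + RANSAC homography registration (grayscale inputs)."""
        orb = cv2.ORB_create(n_features)
        k1, d1 = orb.detectAndCompute(moving, None)
        k2, d2 = orb.detectAndCompute(fixed, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:500]
        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        warped = cv2.warpPerspective(moving, H, fixed.shape[1::-1])
        return warped, H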
Wong, Ka-Kit; Gandhi, Arpit; Viglianti, Benjamin L; Fig, Lorraine M; Rubello, Domenico; Gross, Milton D
2016-01-01
AIM: To review the benefits of single photon emission computed tomography (SPECT)/computed tomography (CT) hybrid imaging for the diagnosis of various endocrine disorders. METHODS: We performed MEDLINE and PubMed searches using the terms: “SPECT/CT”; “functional anatomic mapping”; “transmission emission tomography”; “parathyroid adenoma”; “thyroid cancer”; “neuroendocrine tumor”; “adrenal”; “pheochromocytoma”; “paraganglioma”; in order to identify relevant articles published in English during the years 2003 to 2015. Reference lists from the articles were reviewed to identify additional pertinent articles. Retrieved manuscripts (case reports, reviews, meta-analyses and abstracts) concerning the application of SPECT/CT to endocrine imaging were analyzed to provide a descriptive synthesis of the utility of this technology. RESULTS: The emergence of hybrid SPECT/CT camera technology now allows simultaneous acquisition of combined multi-modality imaging, with seamless fusion of three-dimensional volume datasets. Combining functional information, which depicts the bio-distribution of radiotracers that map cellular processes of the endocrine system and tumors of endocrine origin, with anatomy derived from CT has improved the diagnostic capability of scintigraphy for a range of disorders of endocrine gland function. The literature describes benefits of SPECT/CT for 99mTc-sestamibi parathyroid scintigraphy and 99mTc-pertechnetate thyroid scintigraphy, 123I- or 131I-radioiodine for staging of differentiated thyroid carcinoma, 111In- and 99mTc-labeled somatostatin receptor analogues for detection of neuroendocrine tumors, 131I-norcholesterol (NP-59) scans for assessment of adrenal cortical hyperfunction, and 123I- or 131I-metaiodobenzylguanidine imaging for evaluation of pheochromocytoma and paraganglioma. CONCLUSION: SPECT/CT exploits the synergism between the functional information from radiopharmaceutical imaging and the anatomy from CT, translating to improved diagnostic accuracy and a meaningful impact on patient care. PMID:27358692
Nighttime images fusion based on Laplacian pyramid
NASA Astrophysics Data System (ADS)
Wu, Cong; Zhan, Jinhao; Jin, Jicheng
2018-02-01
This paper expounds the methods of weighted-average fusion, image pyramid fusion, and wavelet-transform fusion, and applies these methods to the fusion of multiple-exposure nighttime images. By calculating the information entropy and cross entropy of the fused images, we can evaluate the effect of the different fusion methods. Experiments showed that the Laplacian pyramid image fusion algorithm is well suited to nighttime image fusion: it can reduce halo artifacts while preserving image details.
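A minimal Python sketch of Laplacian pyramid fusion for multiple exposures, together with the information-entropy metric used for evaluation; the max-absolute-detail selection rule is one common choice, assumed here for illustration rather than taken from the paper, and inputs are assumed to be 8-bit grayscale images:

    import cv2
    import numpy as np

    def laplacian_pyramid(img, levels):
        g = [img.astype(np.float32)]
        for _ in range(levels):
            g.append(cv2.pyrDown(g[-1]))
        lap = [g[i] - cv2.pyrUp(g[i + 1], dstsize=g[i].shape[1::-1])
               for i in range(levels)]
        return lap + [g[-1]]                  # detail bands + coarse base band

    def fuse_exposures(images, levels=4):
        pyrs = [laplacian_pyramid(im, levels) for im in images]
        fused = []
        for i, band in enumerate(zip(*pyrs)):
            stack = np.stack(band)
            if i < levels:                    # keep the strongest detail per pixel
                idx = np.abs(stack).argmax(axis=0)
                fused.append(np.take_along_axis(stack, idx[None], axis=0)[0])
            else:                             # average the coarse base band
                fused.append(stack.mean(axis=0))
        out = fused[-1]
        for lap in reversed(fused[:-1]):      # collapse the fused pyramid
            out = cv2.pyrUp(out, dstsize=lap.shape[1::-1]) + lap
        return np.clip(out, 0, 255).astype(np.uint8)

    def entropy(img):
        """Information entropy of an 8-bit image, used to score fusion results."""
        p = np.bincount(img.ravel(), minlength=256) / img.size
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())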
NASA Astrophysics Data System (ADS)
Jung, M.; Lee, J.; Song, W.; Lee, Y. L.; Lee, J. H.; Shin, W.
2016-05-01
We propose a multimode interference (MMI) fiber-based saturable absorber using bismuth telluride in the ∼2 μm region. Our MMI-based saturable absorber was fabricated by fusion splicing single-mode fiber with null-core fiber. The MMI functioned as both a wavelength-fixed filter and a saturable absorber. The 3 dB bandwidth and insertion loss of the MMI were 42 nm and 3.4 dB, respectively, at a wavelength of 1958 nm. We also report a passively mode-locked thulium-doped fiber laser operating at a wavelength of 1958 nm using the multimode interference. A pulse width of ∼46 ps was experimentally obtained at a repetition rate of 8.58 MHz.
MO-DE-202-01: Image-Guided Focused Ultrasound Surgery and Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farahani, K.
At least three major trends in surgical intervention have emerged over the last decade: a move toward more minimally invasive (or non-invasive) approaches to the surgical target; the development of high-precision treatment delivery techniques; and the increasing role of multi-modality intraoperative imaging in support of such procedures. This symposium includes invited presentations on recent advances in each of these areas and the emerging role for medical physics research in the development and translation of high-precision interventional techniques. The four speakers are: Keyvan Farahani, “Image-guided focused ultrasound surgery and therapy”; Jeffrey H. Siewerdsen, “Advances in image registration and reconstruction for image-guided neurosurgery”; Tina Kapur, “Image-guided surgery and interventions in the advanced multimodality image-guided operating (AMIGO) suite”; Raj Shekhar, “Multimodality image-guided interventions: Multimodality for the rest of us”. Learning Objectives: Understand the principles and applications of HIFU in surgical ablation. Learn about recent advances in 3D–2D and 3D deformable image registration in support of surgical safety and precision. Learn about recent advances in model-based 3D image reconstruction in application to intraoperative 3D imaging. Understand the multi-modality imaging technologies and clinical applications investigated in the AMIGO suite. Understand the emerging need and techniques to implement multi-modality image guidance in surgical applications such as neurosurgery, orthopaedic surgery, vascular surgery, and interventional radiology. Research supported by the NIH and Siemens Healthcare. J. Siewerdsen: Grant Support - National Institutes of Health; Grant Support - Siemens Healthcare; Grant Support - Carestream Health; Advisory Board - Carestream Health; Licensing Agreement - Carestream Health; Licensing Agreement - Elekta Oncology. T. Kapur: P41EB015898. R. Shekhar: Funding - R42CA137886 and R41CA192504; Disclosure and CoI: IGI Technologies, small-business partner on the grants.
Optical/MRI Multimodality Molecular Imaging
NASA Astrophysics Data System (ADS)
Ma, Lixin; Smith, Charles; Yu, Ping
2007-03-01
Multimodality molecular imaging that combines anatomical and functional information has shown promise in the development of tumor-targeted pharmaceuticals for cancer detection and therapy. We present a new multimodality imaging technique that combines fluorescence molecular tomography (FMT) and magnetic resonance imaging (MRI) for in vivo molecular imaging of preclinical tumor models. Unlike other optical/MRI systems, the new molecular imaging system uses parallel phase acquisition based on the heterodyne principle. The system has a higher accuracy of phase measurement, reduced noise bandwidth, and efficient modulation of the fluorescence diffuse density waves. Fluorescent bombesin probes were developed for targeting breast cancer cells and prostate cancer cells. Tissue phantom and small animal experiments were performed for calibration of the imaging system and validation of the targeting probes.
Fusion of magnetometer and gradiometer sensors of MEG in the presence of multiplicative error.
Mohseni, Hamid R; Woolrich, Mark W; Kringelbach, Morten L; Luckhoo, Henry; Smith, Penny Probert; Aziz, Tipu Z
2012-07-01
Novel neuroimaging techniques have provided unprecedented information on the structure and function of the living human brain. Multimodal fusion of data from different sensors promises to radically improve this understanding, yet optimal methods have not been developed. Here, we demonstrate a novel method for combining multichannel signals. We show how this method can be used to fuse signals from the magnetometer and gradiometer sensors used in magnetoencephalography (MEG), and through extensive experiments using simulation, head phantom, and real MEG data, show that it is both robust and accurate. This new approach works by assuming that the lead fields have multiplicative error. The criterion to estimate the error is given within a spatial filter framework such that the estimated power is minimized in the worst-case scenario. The method is compared to, and found better than, existing approaches. The closed-form solution and the conditions under which the multiplicative error can be optimally estimated are provided. This novel approach can also be employed for multimodal fusion of other multichannel signals such as MEG and EEG. Although the multiplicative error is estimated based on beamforming, other methods for source analysis can equally be used after the lead-field modification.
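The spatial-filter framework referred to here builds on the standard linearly constrained minimum-variance (LCMV) beamformer, whose unit-gain weights have a closed form. A minimal Python sketch of that standard filter follows; the paper's worst-case estimation of the multiplicative lead-field error is not reproduced, and the diagonal regularization is an assumed numerical safeguard:

    import numpy as np

    def lcmv_weights(C, l, reg=1e-9):
        """Unit-gain minimum-variance weights: w = C^-1 l / (l^T C^-1 l).
        C: sensor covariance (n x n); l: lead-field column for one source."""
        n = len(C)
        Ci = np.linalg.inv(C + reg * np.trace(C) / n * np.eye(n))
        return Ci @ l / (l @ Ci @ l)

    def source_power(C, l):
        """Estimated power at the source location targeted by lead field l."""
        w = lcmv_weights(C, l)
        return float(w @ C @ w)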
Medical Image Retrieval: A Multimodal Approach
Cao, Yu; Steffey, Shawn; He, Jianbiao; Xiao, Degui; Tao, Cui; Chen, Ping; Müller, Henning
2014-01-01
Medical imaging is becoming a vital component of the war on cancer. Tremendous amounts of medical image data are captured and recorded in digital format during cancer care and cancer research. Facing such an unprecedented volume of image data with heterogeneous image modalities, it is necessary to develop effective and efficient content-based medical image retrieval systems for cancer clinical practice and research. While substantial progress has been made in different areas of content-based image retrieval (CBIR) research, direct application of existing CBIR techniques to medical images has produced unsatisfactory results because of the unique characteristics of medical images. In this paper, we develop a new multimodal medical image retrieval approach based on recent advances in statistical graphical models and deep learning. Specifically, we first investigate a new extended probabilistic Latent Semantic Analysis model to integrate the visual and textual information from medical images to bridge the semantic gap. We then develop a new deep Boltzmann machine-based multimodal learning model to learn the joint density model from multimodal information in order to derive the missing modality. Experimental results with a large volume of real-world medical images have shown that our new approach is a promising solution for the next generation of medical image indexing and retrieval systems. PMID:26309389
Kin, Taichi; Nakatomi, Hirofumi; Shojima, Masaaki; Tanaka, Minoru; Ino, Kenji; Mori, Harushi; Kunimatsu, Akira; Oyama, Hiroshi; Saito, Nobuhito
2012-07-01
In this study, the authors used preoperative simulation employing 3D computer graphics (interactive computer graphics) to fuse all imaging data for brainstem cavernous malformations. The authors evaluated whether interactive computer graphics or 2D imaging correlated better with the actual operative field, particularly in identifying a developmental venous anomaly (DVA). The study population consisted of 10 patients scheduled for surgical treatment of brainstem cavernous malformations. Data from preoperative imaging (MRI, CT, and 3D rotational angiography) were automatically fused using a normalized mutual information method and then reconstructed by a hybrid method combining surface rendering and volume rendering. With surface rendering, multimodality and multithreshold techniques for a single tissue were applied. The completed interactive computer graphics were used for simulation of surgical approaches and assumed surgical fields. Preoperative diagnostic rates for a DVA associated with brainstem cavernous malformation were compared between conventional 2D imaging and interactive computer graphics employing receiver operating characteristic (ROC) analysis. The time required for reconstruction of 3D images was 3-6 hours for interactive computer graphics. Observation in interactive mode required approximately 15 minutes. Detailed anatomical information for operative procedures, from the craniotomy to microsurgical operations, could be visualized and simulated three-dimensionally as a single computer graphic using interactive computer graphics. Virtual surgical views were consistent with actual operative views. This technique was very useful for examining various surgical approaches. The mean (±SEM) area under the ROC curve for the rate of DVA diagnosis was significantly better for interactive computer graphics (1.000±0.000) than for 2D imaging (0.766±0.091; p<0.001, Mann-Whitney U-test). The authors report a new method for automatic registration of preoperative imaging data from CT, MRI, and 3D rotational angiography for reconstruction into a single computer graphic. The diagnostic rate of DVA associated with brainstem cavernous malformation was significantly better using interactive computer graphics than with 2D images. Interactive computer graphics were also useful in helping to plan the surgical access corridor.
2016-12-01
Tinnitus Multimodal Imaging. Award Number: W81XWH-13-1-0494. Principal Investigator: Steven Wan Cheung. Contracting Organization: [...]. U.S. Army Medical Research and Materiel Command, Fort Detrick, Maryland 21702-5012. Distribution Statement: Approved for Public Release; Distribution Unlimited. From the report body: images were segmented into gray and white matter images and spatially normalized to the MNI template (3 mm isotropic voxels) using the DARTEL toolbox.
Melanoma detection using smartphone and multimode hyperspectral imaging
NASA Astrophysics Data System (ADS)
MacKinnon, Nicholas; Vasefi, Fartash; Booth, Nicholas; Farkas, Daniel L.
2016-04-01
This project's goal is to determine how to effectively implement a technology continuum, from a low-cost, remotely deployable imaging device to a more sophisticated multimode imaging system, within standard clinical practice. In this work a smartphone is used in conjunction with an optical attachment to capture cross-polarized and collinear color images of a nevus, which are analyzed to quantify chromophore distribution. The nevus is also imaged by a multimode hyperspectral system, our proprietary SkinSpect™ device. The relative accuracy and biological plausibility of the two systems' algorithms are compared to assess the feasibility of in-home or primary-care-practitioner smartphone screening prior to rigorous clinical analysis via the SkinSpect.
Malone, Joseph D.; El-Haddad, Mohamed T.; Bozic, Ivan; Tye, Logan A.; Majeau, Lucas; Godbout, Nicolas; Rollins, Andrew M.; Boudoux, Caroline; Joos, Karen M.; Patel, Shriji N.; Tao, Yuankai K.
2016-01-01
Scanning laser ophthalmoscopy (SLO) benefits diagnostic imaging and therapeutic guidance by allowing for high-speed en face imaging of retinal structures. When combined with optical coherence tomography (OCT), SLO enables real-time aiming and retinal tracking and provides complementary information for post-acquisition volumetric co-registration, bulk motion compensation, and averaging. However, multimodality SLO-OCT systems generally require dedicated light sources, scanners, relay optics, detectors, and additional digitization and synchronization electronics, which increase system complexity. Here, we present a multimodal ophthalmic imaging system using swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography (SS-SESLO-OCT) for in vivo human retinal imaging. SESLO reduces the complexity of en face imaging systems by multiplexing spatial positions as a function of wavelength. SESLO image quality benefited from single-mode illumination and multimode collection through a prototype double-clad fiber coupler, which optimized scattered-light throughput and reduced speckle contrast while maintaining lateral resolution. Using a shared 1060 nm swept source, shared scanner and imaging optics, and a shared dual-channel high-speed digitizer, we acquired inherently co-registered en face retinal images and OCT cross-sections simultaneously at 200 frames per second. PMID:28101411
NASA Astrophysics Data System (ADS)
Xu, Ronald X.; Xu, Jeff S.; Huang, Jiwei; Tweedle, Michael F.; Schmidt, Carl; Povoski, Stephen P.; Martin, Edward W.
2010-02-01
Background: Accurate assessment of tumor boundaries and intraoperative detection of therapeutic margins are important oncologic principles for minimizing recurrence rates and improving long-term outcomes. However, many existing cancer imaging tools are based on preoperative image acquisition and do not provide real-time intraoperative information that supports critical decision-making in the operating room. Method: Poly(lactic-co-glycolic acid) (PLGA) microbubbles (MBs) and nanobubbles (NBs) were synthesized by a modified double-emulsion method. The MB and NB surfaces were conjugated with CC49 antibody to target TAG-72 antigen, a human glycoprotein complex expressed in many epithelial-derived cancers. Multiple imaging agents were encapsulated in MBs and NBs for multimodal imaging. Both one-step and multi-step cancer-targeting strategies were explored. Active MBs/NBs were also fabricated for therapeutic margin assessment in cancer ablation therapies. Results: The multimodal contrast agents and the cancer-targeting strategies were tested on tissue-simulating phantoms, LS174 colon cancer cell cultures, and cancer xenograft nude mice. Concurrent multimodal imaging was demonstrated using fluorescence and ultrasound imaging modalities. The technical feasibility of using active MBs and portable imaging tools such as ultrasound for intraoperative therapeutic margin assessment was demonstrated in a biological tissue model. Conclusion: The cancer-specific multimodal contrast agents described in this paper have the potential for intraoperative detection of tumor boundaries and therapeutic margins.
Carbon Tube Electrodes for Electrocardiography-Gated Cardiac Multimodality Imaging in Mice
Choquet, Philippe; Goetz, Christian; Aubertin, Gaelle; Hubele, Fabrice; Sannié, Sébastien; Constantinesco, André
2011-01-01
This report describes a simple design of noninvasive carbon tube electrodes that facilitates electrocardiography (ECG) in mice during cardiac multimodality preclinical imaging. Both forepaws and the left hindpaw, covered by conductive gel, of mice were placed into the openings of small carbon tubes. Cardiac ECG-gated single-photon emission CT, X-ray CT, and MRI were tested (n = 60) in 20 mice. For all applications, electrodes were used in a warmed multimodality imaging cell. A heart rate of 563 ± 48 bpm was recorded from anesthetized mice regardless of the imaging technique used, with acquisition times ranging from 1 to 2 h. PMID:21333165
Quantitative multimodality imaging in cancer research and therapy.
Yankeelov, Thomas E; Abramson, Richard G; Quarles, C Chad
2014-11-01
Advances in hardware and software have enabled the realization of clinically feasible, quantitative multimodality imaging of tissue pathophysiology. Earlier efforts relating to multimodality imaging of cancer have focused on the integration of anatomical and functional characteristics, such as PET-CT and single-photon emission CT (SPECT-CT), whereas more-recent advances and applications have involved the integration of multiple quantitative, functional measurements (for example, multiple PET tracers, varied MRI contrast mechanisms, and PET-MRI), thereby providing a more-comprehensive characterization of the tumour phenotype. The enormous amount of complementary quantitative data generated by such studies is beginning to offer unique insights into opportunities to optimize care for individual patients. Although important technical optimization and improved biological interpretation of multimodality imaging findings are needed, this approach can already be applied informatively in clinical trials of cancer therapeutics using existing tools. These concepts are discussed herein.
Multimode intravascular RF coil for MRI-guided interventions.
Kurpad, Krishna N; Unal, Orhan
2011-04-01
To demonstrate the feasibility of using a single intravascular radiofrequency (RF) probe, connected to the external magnetic resonance imaging (MRI) system via a single coaxial cable, to perform active tip tracking, catheter visualization, and high signal-to-noise ratio (SNR) intravascular imaging. A multimode intravascular RF coil was constructed on a 6F balloon catheter and interfaced to a 1.5T MRI scanner via a decoupling circuit. Bench measurements of coil impedances were followed by imaging experiments in saline and phantoms. The multimode coil behaves as an inductively coupled transmit coil. A forward-looking capability of 6 mm was measured. A greater than 3-fold increase in SNR compared to conventional imaging using an optimized external coil was demonstrated. Simultaneous active tip tracking and catheter visualization were demonstrated. It is feasible to perform 1) active tip tracking, 2) catheter visualization, and 3) high-SNR imaging using a single multimode intravascular RF coil connected to the external system via a single coaxial cable. Copyright © 2011 Wiley-Liss, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, X; Jani, A; Rossi, P
Purpose: MRI has shown promise in identifying prostate tumors with high sensitivity and specificity for the detection of prostate cancer. Accurate segmentation of the prostate plays a key role in various tasks: to accurately localize prostate boundaries for biopsy needle placement and radiotherapy, to initialize multi-modal registration algorithms, or to obtain the region of interest for computer-aided detection of prostate cancer. However, manual segmentation during biopsy or radiation therapy can be time consuming and subject to inter- and intra-observer variation. This study's purpose is to develop an automated method to address this technical challenge. Methods: We present an automated multi-atlas method for MR prostate segmentation using patch-based label fusion. After initial preprocessing of all images, all the atlases are non-rigidly registered to a target image. The resulting transformation is then used to propagate the anatomical structure labels of the atlas into the space of the target image. The top L most similar atlases are further chosen by measuring intensity and structure differences in a region of interest around the prostate. Finally, using voxel weighting based on a patch-based anatomical signature, the label that the majority of all warped labels predict for each voxel is used for the final segmentation of the target image. Results: This segmentation technique was validated in a clinical study of 13 patients. The accuracy of our approach was assessed against manual segmentation (the gold standard). The mean volume Dice overlap coefficient between our segmentation and the manual segmentation was 89.5±2.9%, which indicates that the automatic segmentation method works well and could be used for 3D MRI-guided prostate intervention. Conclusion: We have developed a new prostate segmentation approach based on an optimal feature learning label fusion framework, demonstrated its clinical feasibility, and validated its accuracy. This segmentation technique could be a useful tool in image-guided interventions for prostate-cancer diagnosis and treatment.
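The final voting step has a compact form: after all atlas labels are warped to the target, each voxel takes the label predicted by the (similarity-weighted) majority of atlases. A minimal Python sketch for a binary structure; the per-atlas weights stand in for the patch-based anatomical-signature weighting, whose details are not reproduced here:

    import numpy as np

    def weighted_majority_vote(warped_labels, weights=None):
        """warped_labels: list of binary label volumes registered to the target.
        weights: one similarity weight per atlas (uniform -> plain majority vote)."""
        L = np.stack(warped_labels).astype(float)
        w = np.ones(len(L)) if weights is None else np.asarray(weights, float)
        w = w.reshape(-1, *([1] * (L.ndim - 1)))   # broadcast over voxels
        return (w * L).sum(axis=0) > 0.5 * w.sum()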
Novel cooperative neural fusion algorithms for image restoration and image fusion.
Xia, Youshen; Kamel, Mohamed S
2007-02-01
To deal with the problem of restoring degraded images with non-Gaussian noise, this paper proposes a novel cooperative neural fusion regularization (CNFR) algorithm for image restoration. Compared with conventional regularization algorithms for image restoration, the proposed CNFR algorithm relaxes the need to estimate the optimal regularization parameter. Furthermore, to enhance the quality of restored images, this paper presents a cooperative neural fusion (CNF) algorithm for image fusion. Compared with existing signal-level image fusion algorithms, the proposed CNF algorithm can greatly reduce the loss of contrast information under blind Gaussian noise environments. The performance analysis shows that the two proposed neural fusion algorithms converge globally to the robust and optimal image estimate. Simulation results confirm that, in different noise environments, the two proposed neural fusion algorithms obtain a better image estimate than several well-known image restoration and image fusion methods.
NASA Astrophysics Data System (ADS)
Yang, G.; Lin, Y.; Bhattacharya, P.
2007-12-01
To achieve effective and safe operation of systems in which the human and the machine interact, the machine needs to understand the human's state, especially the cognitive state, when the operation task demands intensive cognitive activity. Because human cognitive states and behaviors, as well as expressions and cues, are highly uncertain, the recent trend in inferring the human state is to consider multimodal features of the human operator. In this paper, we present a method for multimodal inference of human cognitive states by integrating neuro-fuzzy network and information fusion techniques. To demonstrate the effectiveness of this method, we take driver fatigue detection as an example. The proposed method has, in particular, the following new features. First, human expressions are classified into four categories: (i) casual or contextual features, (ii) contact features, (iii) contactless features, and (iv) performance features. Second, the fuzzy neural network technique, in particular the Takagi-Sugeno-Kang (TSK) model, is employed to cope with uncertain behaviors. Third, the sensor fusion technique, in particular ordered weighted aggregation (OWA), is integrated with the TSK model in such a way that cues are taken as inputs to the TSK model, and the outputs of the TSK model are then fused by the OWA operator, which gives outputs corresponding to the particular cognitive states of interest (e.g., fatigue). We call this method TSK-OWA. Validation of the TSK-OWA, performed in the Northeastern University vehicle drive simulator, has shown that the proposed method is promising as a general tool for inferring human cognitive states and a specific tool for driver fatigue detection.
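The OWA stage admits a compact definition: the inputs are sorted in descending order and combined with position-dependent weights, so the weights attach to ranks rather than to particular cues. A minimal Python sketch; the weights shown are arbitrary examples, and the TSK rule base feeding the operator is not reproduced:

    import numpy as np

    def owa(values, weights):
        """Ordered weighted averaging: weight the sorted (descending) inputs."""
        v = np.sort(np.asarray(values, float))[::-1]
        w = np.asarray(weights, float)
        return float(v @ (w / w.sum()))

    # Example: three hypothetical cue-level fatigue scores from TSK sub-models;
    # front-loaded weights emphasize the strongest cues.
    print(owa([0.9, 0.2, 0.6], [0.5, 0.3, 0.2]))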
On the Multi-Modal Object Tracking and Image Fusion Using Unsupervised Deep Learning Methodologies
NASA Astrophysics Data System (ADS)
LaHaye, N.; Ott, J.; Garay, M. J.; El-Askary, H. M.; Linstead, E.
2017-12-01
The number of different modalities of remote sensors has been on the rise, resulting in large datasets with different complexity levels. Such complex datasets can provide valuable information separately, yet there is greater value in a comprehensive view of them combined. As such, hidden information can be deduced by applying data mining techniques to the fused data. The curse of dimensionality of such fused data, due to the potentially vast dimension space, hinders our ability to understand them deeply, because each dataset requires a user to have instrument-specific and dataset-specific knowledge for optimal and meaningful usage. Once a user decides to use multiple datasets together, a deeper understanding of how to translate and combine these datasets in a correct and effective manner is needed. Although data-centric techniques exist, generic automated methodologies that could solve this problem completely do not. Here we are developing a system that aims to gain a detailed understanding of different data modalities. Such a system will provide an analysis environment that gives the user useful feedback and can aid in research tasks. In our current work, we show the initial outputs of our system implementation, which leverages unsupervised deep learning techniques so as not to burden the user with the task of labeling input data, while still allowing for a detailed machine understanding of the data. Our goal is to be able to track objects, like cloud systems or aerosols, across different image-like data modalities. The proposed system is flexible, scalable, and robust in understanding complex likenesses within multi-modal data in a similar spatio-temporal range, and it can co-register and fuse these images when needed.
Multi-mode of Four and Six Wave Parametric Amplified Process
NASA Astrophysics Data System (ADS)
Zhu, Dayu; Yang, Yiheng; Zhang, Da; Liu, Ruizhou; Ma, Danmeng; Li, Changbiao; Zhang, Yanpeng
2017-03-01
Multiple quantum modes in correlated fields are essential for future quantum information processing and quantum computing. Here we report the generation of multi-mode phenomena through parametrically amplified four- and six-wave mixing processes in a rubidium atomic ensemble. The multi-mode properties in both the frequency and spatial domains are studied. On one hand, the multi-mode behavior in the frequency domain is dominantly controlled by the intensity of the external dressing effect, or by the nonlinear phase shift through the internal dressing effect; on the other hand, the multi-mode behavior in the spatial domain is demonstrated visually, directly from the images of the biphoton fields. In addition, the correlation of the two output fields is demonstrated in both domains. Our approach supports efficient applications for scalable quantum correlated imaging.
Multimodal optoacoustic and multiphoton fluorescence microscopy
NASA Astrophysics Data System (ADS)
Sela, Gali; Razansky, Daniel; Shoham, Shy
2013-03-01
Multiphoton microscopy is a powerful imaging modality that enables structural and functional imaging with cellular and sub-cellular resolution, deep within biological tissues. Yet its main contrast mechanism relies on extrinsically administered fluorescent indicators. Here we developed a system for simultaneous multimodal optoacoustic and multiphoton fluorescence 3D imaging, which attains both absorption- and fluorescence-based contrast by integrating an ultrasonic transducer into a two-photon laser scanning microscope. The system is shown to enable acquisition of multimodal microscopic images of fluorescently labeled targets and cell cultures as well as intrinsic absorption-based images of pigmented biological tissue. During initial experiments, it was further observed that the detected optoacoustically induced response contains low-frequency signal variations, presumably due to cavitation-mediated signal generation by the high-repetition-rate (80 MHz) near-IR femtosecond laser. The multimodal system may provide structural and functional information complementary to that of the fluorescently labeled tissue by superimposing optoacoustic images of intrinsic tissue chromophores, such as melanin deposits, pigmentation, and hemoglobin, or of other extrinsic particle- or dye-based markers highly absorptive in the NIR spectrum.
Recommendations on nuclear and multimodality imaging in IE and CIED infections.
Erba, Paola Anna; Lancellotti, Patrizio; Vilacosta, Isidre; Gaemperli, Oliver; Rouzet, Francois; Hacker, Marcus; Signore, Alberto; Slart, Riemer H J A; Habib, Gilbert
2018-05-24
In the latest update of the European Society of Cardiology (ESC) guidelines for the management of infective endocarditis (IE), imaging is positioned at the centre of the diagnostic work-up so that an early and accurate diagnosis can be reached. Besides echocardiography, contrast-enhanced CT (ce-CT), radiolabelled leucocyte (white blood cell, WBC) SPECT/CT and [18F]FDG PET/CT are included as diagnostic tools in the diagnostic flow chart for IE. Following the clinical guidelines that provided a straightforward message on the role of multimodality imaging, we believe that it is highly relevant to produce specific recommendations on nuclear multimodality imaging in IE and cardiac implantable electronic device infections. In these procedural recommendations we therefore describe in detail the technical and practical aspects of WBC SPECT/CT and [18F]FDG PET/CT, including ce-CT acquisition protocols. We also discuss the advantages and limitations of each procedure, specific pitfalls when interpreting images, and the most important results from the literature, and also provide recommendations on the appropriate use of multimodality imaging.
NASA Astrophysics Data System (ADS)
El-Haddad, Mohamed T.; Joos, Karen M.; Patel, Shriji N.; Tao, Yuankai K.
2017-02-01
Multimodal imaging systems that combine scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT) have demonstrated the utility of concurrent en face and volumetric imaging for aiming, eye tracking, bulk motion compensation, mosaicking, and contrast enhancement. However, this additional functionality trades off with increased system complexity and cost because both SLO and OCT generally require dedicated light sources, galvanometer scanners, relay and imaging optics, detectors, and control and digitization electronics. We previously demonstrated multimodal ophthalmic imaging using swept-source spectrally encoded SLO and OCT (SS-SESLO-OCT). Here, we present system enhancements and a new optical design that increase our SS-SESLO-OCT data throughput by >7x and field-of-view (FOV) by >4x. A 200 kHz 1060 nm Axsun swept-source was optically buffered to 400 kHz sweep-rate, and SESLO and OCT were simultaneously digitized on dual input channels of a 4 GS/s digitizer at 1.2 GS/s per channel using a custom k-clock. We show in vivo human imaging of the anterior segment out to the limbus and retinal fundus over a >40° FOV. In addition, nine overlapping volumetric SS-SESLO-OCT volumes were acquired under video-rate SESLO preview and guidance. In post-processing, all nine SESLO images and en face projections of the corresponding OCT volumes were mosaicked to show widefield multimodal fundus imaging with a >80° FOV. Concurrent multimodal SS-SESLO-OCT may have applications in clinical diagnostic imaging by enabling aiming, image registration, and multi-field mosaicking and benefit intraoperative imaging by allowing for real-time surgical feedback, instrument tracking, and overlays of computationally extracted image-based surrogate biomarkers of disease.
Cross-modal learning to rank via latent joint representation.
Wu, Fei; Jiang, Xinyang; Li, Xi; Tang, Siliang; Lu, Weiming; Zhang, Zhongfei; Zhuang, Yueting
2015-05-01
Cross-modal ranking is a research topic that is imperative to many applications involving multimodal data. Discovering a joint representation for multimodal data and learning a ranking function are essential to boosting cross-media retrieval (i.e., image-query-text or text-query-image). In this paper, we propose an approach to discover the latent joint representation of pairs of multimodal data (e.g., pairs of an image query and a text document) via a conditional random field and structural learning in a listwise ranking manner. We call this approach cross-modal learning to rank via latent joint representation (CML²R). In CML²R, the correlations between multimodal data are captured in terms of their shared hidden variables (e.g., topics), and a hidden-topic-driven discriminative ranking function is learned in a listwise manner. Experiments show that the proposed approach achieves good performance in cross-media retrieval while also learning discriminative representations of multimodal data.
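As a simplified illustration of the listwise ranking idea (not the authors' CRF-based CML²R model), a ListNet-style top-1 loss can be computed over scores produced from a shared latent-topic representation; all names and values in this Python sketch are invented for the example.

```python
import numpy as np

def listwise_loss(scores, relevance):
    """ListNet-style top-1 listwise loss: cross-entropy between the
    permutation-probability distributions induced by predicted scores
    and by ground-truth relevance labels."""
    p_true = np.exp(relevance) / np.sum(np.exp(relevance))
    p_pred = np.exp(scores) / np.sum(np.exp(scores))
    return -np.sum(p_true * np.log(p_pred + 1e-12))

# Toy example: score text documents against an image query whose joint
# representation is a shared topic vector (names are illustrative).
rng = np.random.default_rng(0)
topics_query = rng.random(10)          # latent topics of the image query
topics_docs = rng.random((5, 10))      # latent topics of 5 text documents
scores = topics_docs @ topics_query    # ranking function: dot product
relevance = np.array([2.0, 0.0, 1.0, 0.0, 0.0])
print(listwise_loss(scores, relevance))
```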
Wavefront Processing Through Integrated Fiber Optics.
NASA Astrophysics Data System (ADS)
Khan, Romel Rabiul
This thesis is devoted to the development of a new technology of integrated fiber optics. Through the use of fusion splicing and etching, several dissimilar optical fibers can be integrated into a single fiber, providing wave-front processing capabilities not previously possible. Optical fibers have been utilized for their unique capabilities, such as remote beam delivery and immunity from electromagnetic noise. In this thesis, the understanding of integrated fiber optics through fusion splicing is furthered both theoretically and experimentally. Most common optical components, such as lenses, apertures, and modulators, can be implemented through the use of fiber optics and then integrated together through fusion splicing, resulting in an alignment-free, rugged and miniaturized system. For example, a short length of multimode graded-index fiber can be used as either a lens or a window to relay an image. A step-index multimode fiber provides a spacer or an aperture. Other special arrangements can be exploited to perform in-line modulation in both amplitude and phase. The power of this technique is demonstrated by focusing on a few applications where significant advantages are obtained through this technology. In laser light scattering fiber-optic systems, integrated fiber optics is used for delivering and receiving light from small scattering volumes in a spatially constrained environment. When applied to the detection of cataracts in the human eye lens, laser light scattering probes with integrated fiber optics could obtain a map of the eye lens and provide invaluable data for further understanding of cataractogenesis. Use of integrated fiber optics in the high-resolution structural analysis of aircraft propeller blades is also presented. Coupling of a laser diode to a monomode fiber through integrated fiber optics is analyzed. The generation of nondiffracting Bessel-Gauss beams using integrated fiber optics is described. The significance of the Bessel-Gauss beam lies in the fact that it has a sharply defined main lobe whose width can be designed to be as narrow as desired, while maintaining a long propagation-invariant range. Different methods of generation and properties of this beam are reviewed. Effects of misalignments in the input plane and discretization of the source are derived and evaluated.
NASA Technical Reports Server (NTRS)
Chirayath, Ved
2018-01-01
We present preliminary results from NASA NeMO-Net, the first neural multi-modal observation and training network for global coral reef assessment. NeMO-Net is an open-source deep convolutional neural network (CNN) and interactive active-learning training software in development that will assess the present and past dynamics of coral reef ecosystems. NeMO-Net exploits active learning and data fusion of mm-scale remotely sensed 3D images of coral reefs captured using fluid lensing with the NASA FluidCam instrument, presently the highest-resolution remote sensing benthic imaging technology capable of removing ocean wave distortion, as well as hyperspectral airborne remote sensing data from the ongoing NASA CORAL mission and lower-resolution satellite data, to determine coral reef ecosystem makeup globally at unprecedented spatial and temporal scales. Aquatic ecosystems, particularly coral reefs, remain quantitatively misrepresented by low-resolution remote sensing as a result of refractive distortion from ocean waves, optical attenuation, and remoteness. Machine learning classification of coral reefs using FluidCam mm-scale 3D data shows that present satellite and airborne remote sensing techniques poorly characterize coral reef percent living cover, morphology type, and species breakdown at the mm, cm, and meter scales. Indeed, current global assessments of coral reef cover and morphology classification based on km-scale satellite data alone can suffer from segmentation errors greater than 40% and are capable of change detection only on yearly temporal scales and decameter spatial scales, significantly hindering our understanding of patterns and processes in marine biodiversity at a time when these ecosystems are experiencing unprecedented anthropogenic pressures, ocean acidification, and sea surface temperature rise. NeMO-Net leverages our augmented machine learning algorithm, which demonstrates data fusion of regional FluidCam (mm-, cm-scale) airborne remote sensing with global low-resolution (m-, km-scale) airborne and spaceborne imagery to reduce classification errors up to 80% over regional scales. Such technologies can substantially enhance our ability to assess coral reef ecosystem dynamics.
Joint modality fusion and temporal context exploitation for semantic video analysis
NASA Astrophysics Data System (ADS)
Papadopoulos, Georgios Th; Mezaris, Vasileios; Kompatsiaris, Ioannis; Strintzis, Michael G.
2011-12-01
In this paper, a multi-modal context-aware approach to semantic video analysis is presented. Overall, the examined video sequence is initially segmented into shots and for every resulting shot appropriate color, motion and audio features are extracted. Then, Hidden Markov Models (HMMs) are employed for performing an initial association of each shot with the semantic classes that are of interest separately for each modality. Subsequently, a graphical modeling-based approach is proposed for jointly performing modality fusion and temporal context exploitation. Novelties of this work include the combined use of contextual information and multi-modal fusion, and the development of a new representation for providing motion distribution information to HMMs. Specifically, an integrated Bayesian Network is introduced for simultaneously performing information fusion of the individual modality analysis results and exploitation of temporal context, contrary to the usual practice of performing each task separately. Contextual information is in the form of temporal relations among the supported classes. Additionally, a new computationally efficient method for providing motion energy distribution-related information to HMMs, which supports the incorporation of motion characteristics from previous frames to the currently examined one, is presented. The final outcome of this overall video analysis framework is the association of a semantic class with every shot. Experimental results as well as comparative evaluation from the application of the proposed approach to four datasets belonging to the domains of tennis, news and volleyball broadcast video are presented.
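The late-fusion step can be illustrated in miniature: assuming one trained HMM per (class, modality) pair, per-modality class log-likelihoods for a shot can be combined by a weighted sum, a much simpler stand-in for the paper's integrated Bayesian network; every number below is a placeholder.

```python
import numpy as np

# Hedged sketch: late fusion of per-modality HMM class log-likelihoods
# for one shot. A real implementation would train one HMM per
# (class, modality) pair; here the log-likelihoods are made up.
classes = ["rally", "serve", "replay", "break"]
# rows: modality (color, motion, audio); cols: log P(shot | class, modality)
loglik = np.array([
    [-12.1, -15.3, -14.0, -16.2],   # color HMMs
    [-10.4, -11.9, -13.5, -12.8],   # motion HMMs
    [-14.2, -13.1, -12.7, -15.0],   # audio HMMs
])
weights = np.array([0.4, 0.4, 0.2])  # per-modality confidence (assumed)
fused = weights @ loglik             # weighted sum of log-likelihoods
print(classes[int(np.argmax(fused))])
```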
Multi-Modality Cascaded Convolutional Neural Networks for Alzheimer's Disease Diagnosis.
Liu, Manhua; Cheng, Danni; Wang, Kundong; Wang, Yaping
2018-03-23
Accurate and early diagnosis of Alzheimer's disease (AD) plays an important role in patient care and the development of future treatments. Structural and functional neuroimages, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), provide powerful imaging modalities to help understand the anatomical and functional neural changes related to AD. In recent years, machine learning methods have been widely studied for the analysis of multi-modality neuroimages for quantitative evaluation and computer-aided diagnosis (CAD) of AD. Most existing methods extract hand-crafted imaging features after image preprocessing such as registration and segmentation, and then train a classifier to distinguish AD subjects from other groups. This paper proposes to construct cascaded convolutional neural networks (CNNs) to learn the multi-level and multimodal features of MRI and PET brain images for AD classification. First, multiple deep 3D-CNNs are constructed on different local image patches to transform the local brain image into more compact high-level features. Then, an upper high-level 2D-CNN followed by a softmax layer is cascaded to combine the high-level features learned from the multiple modalities and generate the latent multimodal correlation features of the corresponding image patches for the classification task. Finally, these learned features are combined by a fully connected layer followed by a softmax layer for AD classification. The proposed method can automatically learn generic multi-level and multimodal features from multiple imaging modalities for classification, which are robust to scale and rotation variations to some extent. No image segmentation or rigid registration is required in pre-processing the brain images. Our method is evaluated on the baseline MRI and PET images of 397 subjects, including 93 AD patients, 204 with mild cognitive impairment (MCI, 76 pMCI + 128 sMCI) and 100 normal controls (NC), from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Experimental results show that the proposed method achieves an accuracy of 93.26% for classification of AD vs. NC and 82.95% for classification of pMCI vs. NC, demonstrating promising classification performance.
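The two-stage cascade can be sketched as follows; this is a minimal PyTorch illustration under assumed patch sizes and feature dimensions (all layer shapes are invented for the example, not taken from the paper).

```python
import torch
import torch.nn as nn

class PatchCNN3D(nn.Module):
    """Stage 1: a small 3D CNN mapping one local brain-image patch
    to a compact feature vector (architecture is illustrative)."""
    def __init__(self, out_dim=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.fc = nn.Linear(16 * 4 * 4 * 4, out_dim)

    def forward(self, x):              # x: (B, 1, 16, 16, 16)
        return self.fc(self.features(x).flatten(1))

patch_net = PatchCNN3D()
n_patches, feat = 27, 32
# One feature vector per patch and per modality (MRI, PET).
mri_feats = torch.stack([patch_net(torch.randn(1, 1, 16, 16, 16))
                         for _ in range(n_patches)], dim=1)
pet_feats = torch.stack([patch_net(torch.randn(1, 1, 16, 16, 16))
                         for _ in range(n_patches)], dim=1)

# Stage 2: treat the patch-by-feature maps as a 2-channel "image"
# and fuse them with a 2D CNN topped by a classifier head.
fused = torch.stack([mri_feats, pet_feats], dim=1)   # (1, 2, 27, 32)
fusion_head = nn.Sequential(
    nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(), nn.Flatten(),
    nn.Linear(8 * n_patches * feat, 3),              # AD / MCI / NC
)
print(fusion_head(fused).shape)                      # torch.Size([1, 3])
```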
He, Shuqing; Song, Jun; Qu, Junle; ...
2018-01-01
Recent advances in the chemical design and synthesis of fluorophores in the second near-infrared biological window (NIR-II) for multimodal imaging and theranostics are summarized and highlighted in this review article.
Cox, Benjamin L; Mackie, Thomas R; Eliceiri, Kevin W
2015-01-01
Multi-modal imaging approaches to tumor metabolism that provide improved specificity, physiological relevance and spatial resolution would improve the diagnosis of tumors and the evaluation of tumor progression. Currently, the molecular probe FDG, glucose fluorinated with 18F at the 2-carbon, is the primary metabolic approach for clinical diagnostics with PET imaging. However, PET lacks the resolution necessary to yield intratumoral distributions of deoxyglucose at the cellular level. Multi-modal imaging could address this problem, but requires the development of new glucose analogs that are better suited for other imaging modalities. Several such analogs have been created and are reviewed here. Also reviewed are several multi-modal imaging studies that attempt to shed light on the cellular distribution of glucose analogs within tumors. Some of these studies are performed in vitro, while others are performed in vivo, in an animal model. The results from these studies reveal a visualization gap between the in vitro and in vivo studies that, if closed, could enable the early detection of tumors, high-resolution monitoring of tumors during treatment, and greater accuracy in the assessment of different imaging agents. PMID:25625022
Towards an ultra-thin medical endoscope: multimode fibre as a wide-field image transferring medium
NASA Astrophysics Data System (ADS)
Duriš, Miroslav; Bradu, Adrian; Podoleanu, Adrian; Hughes, Michael
2018-03-01
Multimode optical fibres are attractive for biomedical and industrial applications such as endoscopes because of the small cross section and imaging resolution they can provide in comparison to widely used fibre bundles. However, an image is randomly scrambled by propagation through a multimode fibre. Even though the scrambling is unpredictable, it is deterministic, and therefore it can be reversed. To unscramble the image, we treat the multimode fibre as a linear, disordered scattering medium. To calibrate, we scan a focused beam of coherent light over thousands of different beam positions at the distal end and record the complex fields at the proximal end of the fibre. In this way, the input-output response of the system is determined, which then allows computational reconstruction of reflection-mode images. However, there remains the problem of illuminating the tissue via the fibre while avoiding back reflections from the proximal face. To avoid this drawback, we provide here the first preliminary confirmation that an image can be transferred through a 2x2 fibre coupler, with the sample at its distal port interrogated in reflection. Light is injected into one port for illumination and then collected from a second port for imaging.
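The calibration-and-inversion idea can be sketched numerically: if the fibre acts as a linear operator, the complex input-output response measured during calibration forms a transmission matrix whose pseudoinverse unscrambles subsequent measurements. A minimal NumPy illustration under idealized, noise-free assumptions (all dimensions invented):

```python
import numpy as np

# Hedged sketch of transmission-matrix imaging through a multimode
# fibre: calibrate with known inputs, then invert to unscramble an
# unknown image (dimensions are illustrative, noise is ignored).
rng = np.random.default_rng(0)
n_pixels = 64                       # input image size (8x8, flattened)
n_modes = 256                       # measured output speckle size

T = (rng.standard_normal((n_modes, n_pixels))
     + 1j * rng.standard_normal((n_modes, n_pixels)))  # fibre response

image = rng.random(n_pixels)        # unknown object (intensity)
speckle = T @ image                 # scrambled field at proximal end

T_pinv = np.linalg.pinv(T)          # inverse built from calibration data
recovered = np.real(T_pinv @ speckle)
print(np.allclose(recovered, image, atol=1e-8))
```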
Developing a multimodal biometric authentication system using soft computing methods.
Malcangi, Mario
2015-01-01
Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometrics offers several advantages, mainly in embedded system applications. Hard and soft multi-biometrics, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize its applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision making.
The new frontiers of multimodality and multi-isotope imaging
NASA Astrophysics Data System (ADS)
Behnam Azad, Babak; Nimmagadda, Sridhar
2014-06-01
Technological advances in imaging systems and the development of target-specific imaging tracers have grown rapidly over the past two decades. Recent progress in "all-in-one" imaging systems that allow for automated image coregistration has significantly added to the growth of this field. These developments include ultra-high-resolution PET and SPECT scanners that can be integrated with CT or MR, resulting in PET/CT, SPECT/CT, SPECT/PET and PET/MRI scanners for simultaneous high-resolution, high-sensitivity anatomical and functional imaging. These technological developments have also resulted in drastic enhancements in image quality and acquisition time while eliminating cross-compatibility issues between modalities. Furthermore, the most cutting-edge technology, though mostly preclinical, also allows for simultaneous multimodality multi-isotope image acquisition and image reconstruction based on radioisotope decay characteristics. These scientific advances, in conjunction with the explosion in the development of highly specific multimodality molecular imaging agents, may aid in realizing simultaneous imaging of multiple biological processes and pave the way towards more efficient diagnosis and improved patient care.
Multimodal imaging of lung cancer and its microenvironment (Conference Presentation)
NASA Astrophysics Data System (ADS)
Hariri, Lida P.; Niederst, Matthew J.; Mulvey, Hillary; Adams, David C.; Hu, Haichuan; Chico Calero, Isabel; Szabari, Margit V.; Vakoc, Benjamin J.; Hasan, Tayyaba; Bouma, Brett E.; Engelman, Jeffrey A.; Suter, Melissa J.
2016-03-01
Despite significant advances in targeted therapies for lung cancer, nearly all patients develop drug resistance within 6-12 months and prognosis remains poor. Developing drug resistance is a progressive process that involves tumor cells and their microenvironment. We hypothesize that microenvironment factors alter tumor growth and response to targeted therapy. We conducted in vitro studies in human EGFR-mutant lung carcinoma cells and demonstrated that factors secreted from lung fibroblasts result in increased tumor cell survival during targeted therapy with the EGFR inhibitor gefitinib. We also demonstrated that increased environmental stiffness results in increased tumor survival during gefitinib therapy. To test our hypothesis in vivo, we developed a multimodal optical imaging protocol for preclinical intravital imaging in mouse models to assess the tumor and its microenvironment over time. We successfully conducted multimodal imaging of dorsal skinfold chamber (DSC) window mice implanted with GFP-labeled human EGFR-mutant lung carcinoma cells and visualized changes in tumor development and microenvironment facets over time. Multimodal imaging included structural OCT to assess tumor viability and necrosis, polarization-sensitive OCT to measure tissue birefringence for collagen/fibroblast detection, and Doppler OCT to assess tumor vasculature. Confocal imaging was also performed for high-resolution visualization of the GFP-labeled EGFR-mutant lung cancer cells and was coregistered with OCT. Our results demonstrated that stromal support and vascular growth are essential to tumor progression. Multimodal imaging is a useful tool to assess the tumor and its microenvironment over time.
Multimodal flexible cystoscopy for creating co-registered panoramas of the bladder urothelium
NASA Astrophysics Data System (ADS)
Seibel, Eric J.; Soper, Timothy D.; Burkhardt, Matthew R.; Porter, Michael P.; Yoon, W. Jong
2012-02-01
Bladder cancer is the most expensive cancer to treat due to its high rate of recurrence. Though white-light cystoscopy is the gold standard for bladder cancer surveillance, the advent of fluorescence biomarkers provides an opportunity to improve sensitivity for early detection and to reduce recurrence through more accurate excision. Ideally, fluorescence information could be combined with standard reflectance images to provide multimodal views of the bladder wall. The 1.2-mm-diameter scanning fiber endoscope (SFE) is able to acquire wide-field multimodal video from a bladder phantom with fluorescent cancer "hot spots". The SFE generates images by scanning red, green, and blue (RGB) laser light and detects the backscattered signal for reflectance video of 500-line resolution at 30 frames per second. We imaged a bladder phantom with painted vessels and mimicked fluorescent lesions by applying green fluorescent microspheres to the surface. By eliminating the green laser illumination, simultaneous reflectance and fluorescence images can be acquired at the same field of view, resolution, and frame rate. Moreover, the multimodal SFE is combined with a robotic steering mechanism and image-stitching software as part of a fully automated bladder surveillance system. Using this system, the SFE can be reliably articulated over the entire 360° bladder surface. Acquired images can then be stitched into a multimodal 3D panorama of the bladder using software developed in our laboratory. In each panorama, the fluorescence images are exactly co-registered with the RGB reflectance.
Noncontact Sleep Study by Multi-Modal Sensor Fusion.
Chung, Ku-Young; Song, Kwangsub; Shin, Kangsoo; Sohn, Jinho; Cho, Seok Hyun; Chang, Joon-Hyuk
2017-07-21
Polysomnography (PSG) is considered the gold standard for determining sleep stages, but due to the obtrusiveness of its sensor attachments, sleep stage classification algorithms using noninvasive sensors have been developed over the years. However, the previous studies have not yet been proven reliable. In addition, most of the products are designed for healthy consumers rather than for patients with sleep disorders. We present a novel approach to classify sleep stages via low-cost and noncontact multi-modal sensor fusion, which extracts sleep-related vital signals from radar signals and a sound-based context-awareness technique. This work is uniquely designed based on the PSG data of sleep disorder patients, which were acquired and certified by professionals at Hanyang University Hospital. The proposed algorithm further incorporates medical/statistical knowledge to determine personalized thresholds and devise post-processing. The efficiency of the proposed algorithm is highlighted by contrasting sleep stage classification performance between single-sensor and sensor-fusion algorithms. To validate the possibility of commercializing this work, the classification results of this algorithm were compared with those of the commercial sleep monitoring device ResMed S+. The proposed algorithm was evaluated on random patients following PSG examination, and the results show a promising novel approach for determining sleep stages in a low-cost and unobtrusive manner.
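A minimal sketch of the rule-based flavor of such a pipeline, assuming radar-derived respiration and movement features per epoch; the thresholds, features, and majority-vote post-processing below are illustrative inventions, not the authors' algorithm.

```python
import numpy as np

# Hedged sketch: rule-based stage assignment from radar-derived vital
# signs with personalized thresholds and a post-processing smoother
# (all feature names and numbers are made up for illustration).
def classify_epochs(resp_rate, movement, resp_baseline):
    stages = []
    for rr, mv in zip(resp_rate, movement):
        if mv > 0.5:                      # large body movement -> wake
            stages.append("wake")
        elif rr < 0.9 * resp_baseline:    # slow, regular breathing
            stages.append("deep")
        else:
            stages.append("light")
    return stages

def smooth(stages, k=2):
    """Majority filter over a (2k+1)-epoch window."""
    out = []
    for i in range(len(stages)):
        win = stages[max(0, i - k):i + k + 1]
        out.append(max(set(win), key=win.count))
    return out

rr = np.array([14, 13, 11, 11, 12, 15, 16], dtype=float)
mv = np.array([0.1, 0.2, 0.1, 0.0, 0.1, 0.8, 0.9])
print(smooth(classify_epochs(rr, mv, resp_baseline=14.0)))
```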
New Technologies, New Possibilities for the Arts and Multimodality in English Language Arts
ERIC Educational Resources Information Center
Williams, Wendy R.
2014-01-01
This article discusses the arts, multimodality, and new technologies in English language arts. It then turns to the example of the illuminated text--a multimodal book report consisting of animated text, music, and images--to consider how art, multimodality, and technology can work together to support students' reading of literature and inspire…
Dabo-Niang, S; Zoueu, J T
2012-09-01
In this communication, we demonstrate how kriging, combined with multispectral and multimodal microscopy, can enhance the resolution of images of malaria-infected cells and provide more details on their composition, for analysis and diagnosis. The results of this interpolation, applied to the two principal components of the multispectral and multimodal images, show that examination of the content of Plasmodium falciparum-infected human erythrocytes is improved.
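As a concrete illustration of the interpolation step, ordinary kriging of one principal-component band onto a denser grid can be done with the PyKrige library; the sketch below uses synthetic data and an assumed spherical variogram (the paper's actual variogram model and data are not reproduced).

```python
import numpy as np
from pykrige.ok import OrdinaryKriging  # pip install pykrige

# Hedged sketch: kriging interpolation of one principal-component
# image onto a finer grid, analogous to upsampling a PCA band of a
# multispectral stack (data here are random placeholders).
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 50)           # sampled pixel coordinates
y = rng.uniform(0, 10, 50)
pc1 = np.sin(x) + np.cos(y) + 0.05 * rng.standard_normal(50)

gridx = np.linspace(0, 10, 100)      # denser target grid
gridy = np.linspace(0, 10, 100)
ok = OrdinaryKriging(x, y, pc1, variogram_model="spherical")
pc1_fine, variance = ok.execute("grid", gridx, gridy)
print(pc1_fine.shape)                # (100, 100)
```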
Multifocus confocal Raman microspectroscopy for fast multimode vibrational imaging of living cells.
Okuno, Masanari; Hamaguchi, Hiro-o
2010-12-15
We have developed a multifocus confocal Raman microspectroscopic system for the fast multimode vibrational imaging of living cells. It consists of an inverted microscope equipped with a microlens array, a pinhole array, a fiber bundle, and a multichannel Raman spectrometer. Forty-eight Raman spectra from 48 foci under the microscope are simultaneously obtained by using multifocus excitation and image-compression techniques. The multifocus confocal configuration suppresses the background generated from the cover glass and the cell culturing medium so that high-contrast images are obtainable with a short accumulation time. The system enables us to obtain multimode (10 different vibrational modes) vibrational images of living cells in tens of seconds with only 1 mW laser power at one focal point. This image acquisition time is more than 10 times faster than that in conventional single-focus Raman microspectroscopy.
[Research Progress of Multi-Modal Medical Image Fusion at the Feature Level].
Zhang, Junjie; Zhou, Tao; Lu, Huiling; Wang, Huiqun
2016-04-01
Medical image fusion realizes the integration of the complementary advantages of functional and anatomical images. This article discusses the research progress of multi-modal medical image fusion at the feature level. We first describe the principle of medical image fusion at the feature level. We then analyze and summarize the applications of fuzzy sets, rough sets, D-S evidence theory, artificial neural networks, principal component analysis and other fusion methods in medical image fusion. Finally, we indicate the present problems and future research directions of multi-modal medical image fusion.
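Of the methods surveyed, principal component analysis is the simplest to sketch: the leading eigenvector of the joint pixel covariance supplies the fusion weights. A minimal NumPy illustration of generic PCA-weighted fusion (not any specific paper's variant):

```python
import numpy as np

def pca_fusion(img_a, img_b):
    """Classic PCA-weighted pixel fusion of two co-registered images:
    the first principal component of the joint pixel distribution
    supplies the fusion weights."""
    stack = np.stack([img_a.ravel(), img_b.ravel()])
    cov = np.cov(stack)
    eigvals, eigvecs = np.linalg.eigh(cov)
    w = eigvecs[:, np.argmax(eigvals)]   # leading eigenvector
    w = np.abs(w) / np.sum(np.abs(w))    # normalize to weights
    return w[0] * img_a + w[1] * img_b

ct = np.random.rand(64, 64)   # placeholder anatomical image
pet = np.random.rand(64, 64)  # placeholder functional image
print(pca_fusion(ct, pet).shape)
```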
Enhanced image capture through fusion
NASA Technical Reports Server (NTRS)
Burt, Peter J.; Hanna, Keith; Kolczynski, Raymond J.
1993-01-01
Image fusion may be used to combine images from different sensors, such as IR and visible cameras, to obtain a single composite with extended information content. Fusion may also be used to combine multiple images from a given sensor to form a composite image in which information of interest is enhanced. We present a general method for performing image fusion and show that this method is effective for diverse fusion applications. We suggest that fusion may provide a powerful tool for enhanced image capture with broad utility in image processing and computer vision.
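Burt-style fusion is typically implemented with multiresolution pyramids; the hedged sketch below shows a common Laplacian-pyramid select-max scheme in that spirit (OpenCV for the pyramids; the exact selection rules of the paper's method are not reproduced).

```python
import numpy as np
import cv2  # pip install opencv-python

# Hedged sketch of pyramid-based fusion: build Laplacian pyramids of
# two co-registered images, keep the coefficient with larger magnitude
# at each level, then reconstruct.
def laplacian_pyramid(img, levels=4):
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=gp[i].shape[::-1])
          for i in range(levels)]
    return lp + [gp[-1]]

def fuse(a, b, levels=4):
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    fused = [np.where(np.abs(la) >= np.abs(lb), la, lb)
             for la, lb in zip(pa, pb)]
    out = fused[-1]
    for lap in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=lap.shape[::-1]) + lap
    return out

ir = np.random.rand(256, 256).astype(np.float32)   # e.g. IR frame
vis = np.random.rand(256, 256).astype(np.float32)  # e.g. visible frame
print(fuse(ir, vis).shape)
```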
NASA Astrophysics Data System (ADS)
Tian, Chao; Zhang, Wei; Nguyen, Van Phuc; Huang, Ziyi; Wang, Xueding; Paulus, Yannis M.
2018-02-01
Currently available clinical retinal imaging techniques have limitations, including limited depth of penetration or the requirement for invasive injection of exogenous contrast agents. Here, we developed a novel multimodal imaging system for high-speed, high-resolution retinal imaging of larger animals, such as rabbits. The system integrates three state-of-the-art imaging modalities: photoacoustic microscopy (PAM), optical coherence tomography (OCT), and fluorescence microscopy (FM). In vivo experimental results on rabbit eyes show that the PAM is able to visualize laser-induced retinal burns and distinguish individual eye blood vessels using a laser exposure dose of 80 nJ, which is well below the American National Standards Institute (ANSI) safety limit of 160 nJ. The OCT can discern different retinal layers and visualize laser burns and choroidal detachments. The novel multimodal imaging platform holds great promise for ophthalmic imaging.
Multi-Modal Nano-Probes for Radionuclide and 5-color Near Infrared Optical Lymphatic Imaging
Kobayashi, Hisataka; Koyama, Yoshinori; Barrett, Tristan; Hama, Yukihiro; Regino, Celeste A. S.; Shin, In Soo; Jang, Beom-Su; Le, Nhat; Paik, Chang H.; Choyke, Peter L.; Urano, Yasuteru
2008-01-01
Current contrast agents generally have one function and can only be imaged in monochrome, therefore, the majority of imaging methods can only impart uniparametric information. A single nano-particle has the potential to be loaded with multiple payloads. Such multi-modality probes have the ability to be imaged by more than one imaging technique, which could compensate for the weakness or even combine the advantages of each individual modality. Furthermore, optical imaging using different optical probes enables us to achieve multi-color in vivo imaging, wherein multiple parameters can be read from a single image. To allow differentiation of multiple optical signals in vivo, each probe should have a close but different near infrared emission. To this end, we synthesized nano-probes with multi-modal and multi-color potential, which employed a polyamidoamine dendrimer platform linked to both radionuclides and optical probes, permitting dual-modality scintigraphic and 5-color near infrared optical lymphatic imaging using a multiple excitation spectrally-resolved fluorescence imaging technique. PMID:19079788
Multimodal digital color imaging system for facial skin lesion analysis
NASA Astrophysics Data System (ADS)
Bae, Youngwoo; Lee, Youn-Heum; Jung, Byungjo
2008-02-01
In dermatology, various digital imaging modalities have been used as important tools to quantitatively evaluate the treatment effect on skin lesions. Cross-polarization color images have been used to evaluate skin chromophore (melanin and hemoglobin) information, and parallel-polarization images to evaluate skin texture information. In addition, UV-A-induced fluorescent images have been widely used to evaluate various skin conditions such as sebum, keratosis, sun damage, and vitiligo. In order to maximize the evaluation efficacy for various skin lesions, it is necessary to integrate the various imaging modalities into one imaging system. In this study, we propose a multimodal digital color imaging system that provides four different digital color images: a standard color image, parallel- and cross-polarization color images, and a UV-A-induced fluorescent color image. Herein, we describe the imaging system and present examples of image analysis. By analyzing the color information and morphological features of facial skin lesions, we are able to evaluate various skin lesions simultaneously and comparably. In conclusion, we believe that the multimodal color imaging system can be utilized as an important assistive tool in dermatology.
Internal Mirror Optical Fiber Couplers
NASA Astrophysics Data System (ADS)
Shin, Jong-Dug
A fusion splicing technique has been used to produce angled dielectric mirrors in multimode and single-mode silica fibers. These mirrored fiber couplers serve as compact directional couplers with low excess optical loss (~0.2 dB for multimode and 0.5 dB for single mode at 1.3 μm) and excellent mechanical properties. The reflectance is found to be wavelength dependent and strongly polarization dependent, as expected. Far-field scans of the reflected output power measured with a white-light source show a pattern which is almost circularly symmetric. The splitting ratio in a multimode coupler measured with a laser source is much less dependent on input coupling conditions than in conventional fused biconical-taper couplers. Spectral properties of multilayer fiber mirrors have been investigated experimentally, and a matrix analysis has been used to explain the results.
Hybrid Image Fusion for Sharpness Enhancement of Multi-Spectral Lunar Images
NASA Astrophysics Data System (ADS)
Awumah, Anna; Mahanti, Prasun; Robinson, Mark
2016-10-01
Image fusion enhances the sharpness of a multi-spectral (MS) image by incorporating spatial details from a higher-resolution panchromatic (Pan) image [1,2]. Known applications of image fusion for planetary images are rare, although image fusion is well known for its applications to Earth-based remote sensing. In a recent work [3], six different image fusion algorithms were implemented and their performances were verified with images from the Lunar Reconnaissance Orbiter (LRO) Camera. The image fusion procedure obtained a high-resolution multi-spectral (HRMS) product from LRO Narrow Angle Camera (used as Pan) and LRO Wide Angle Camera (used as MS) images. The results showed that the Intensity-Hue-Saturation (IHS) algorithm yields a product of high spatial quality, while the wavelet-based image fusion algorithm best preserves spectral quality among all the algorithms. In this work we show the results of a hybrid IHS-wavelet image fusion algorithm applied to LROC MS images. The hybrid method provides the best HRMS product, both in terms of spatial resolution and preservation of spectral details. Results from hybrid image fusion can enable new science and increase the science return from existing LROC images. [1] Pohl, C., and J. L. Van Genderen. "Review article: Multisensor image fusion in remote sensing: concepts, methods and applications." International Journal of Remote Sensing 19.5 (1998): 823-854. [2] Zhang, Y. "Understanding image fusion." Photogrammetric Engineering & Remote Sensing 70.6 (2004): 657-661. [3] Mahanti, P., et al. "Enhancement of spatial resolution of the LROC Wide Angle Camera images." XXIII ISPRS Congress Archives (2016).
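The IHS substitution at the core of such fusion schemes is compact enough to sketch; the additive "fast IHS" variant below is a generic illustration (not the exact algorithm of [3]), with random arrays standing in for co-registered MS and Pan data.

```python
import numpy as np

def ihs_pansharpen(ms, pan):
    """Basic IHS pan-sharpening: replace the intensity of an upsampled
    multispectral image with the high-resolution panchromatic band.
    `ms` is (H, W, 3) already resampled to the pan grid; histogram
    matching of `pan` to the intensity is omitted for brevity."""
    intensity = ms.mean(axis=2, keepdims=True)
    return np.clip(ms + (pan[..., None] - intensity), 0.0, 1.0)

ms = np.random.rand(128, 128, 3)   # placeholder WAC-like color bands
pan = np.random.rand(128, 128)     # placeholder NAC-like pan band
print(ihs_pansharpen(ms, pan).shape)
```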
Chowdhury, Shwetadwip; Eldridge, Will J.; Wax, Adam; Izatt, Joseph A.
2017-01-01
Sub-diffraction resolution imaging has played a pivotal role in biological research by visualizing key, but previously unresolvable, sub-cellular structures. Unfortunately, applications of far-field sub-diffraction resolution are currently divided between fluorescent and coherent-diffraction regimes, and a multimodal sub-diffraction technique that bridges this gap has not yet been demonstrated. Here we report that structured illumination (SI) allows multimodal sub-diffraction imaging of both coherent quantitative-phase (QP) and fluorescence. Due to SI’s conventionally fluorescent applications, we first demonstrate the principle of SI-enabled three-dimensional (3D) QP sub-diffraction imaging with calibration microspheres. Image analysis confirmed enhanced lateral and axial resolutions over diffraction-limited QP imaging, and established striking parallels between coherent SI and conventional optical diffraction tomography. We next introduce an optical system utilizing SI to achieve 3D sub-diffraction, multimodal QP/fluorescent visualization of A549 biological cells fluorescently tagged for F-actin. Our results suggest that SI has a unique utility in studying biological phenomena with significant molecular, biophysical, and biochemical components. PMID:28663887
Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators
Bai, Xiangzhi
2015-01-01
The crucial problem of infrared and visual image fusion is how to effectively extract the image features, including the image regions and details, and combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through the fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening and closing based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through the fuzzy measure-based weight strategy to form the final fusion features. Finally, the final fusion features are incorporated with the original infrared and visual images using the contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion. PMID:26184229
A Multimodal Search Engine for Medical Imaging Studies.
Pinho, Eduardo; Godinho, Tiago; Valente, Frederico; Costa, Carlos
2017-02-01
The use of digital medical imaging systems in healthcare institutions has increased significantly, and the large amounts of data in these systems have led to the conception of powerful support tools: recent studies on content-based image retrieval (CBIR) and multimodal information retrieval in the field hold great potential for decision support, as well as for addressing multiple challenges in healthcare systems, such as computer-aided diagnosis (CAD). However, the subject is still under heavy research, and very few solutions have become part of Picture Archiving and Communication Systems (PACS) in hospitals and clinics. This paper proposes an extensible platform for multimodal medical image retrieval, integrated into an open-source PACS with profile-based CBIR capabilities. We detail a technical approach to the problem by describing the main architecture and each sub-component, as well as the available web interfaces and the multimodal query techniques applied. Finally, we assess our implementation of the engine with computational performance benchmarks.
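One common way such engines combine modalities at query time is a weighted late fusion of per-modality relevance scores; the toy sketch below is a generic illustration, not the paper's actual scoring scheme (all names and weights are invented).

```python
# Hedged sketch: late fusion of a text relevance score and a CBIR
# visual-similarity score for ranking studies in a multimodal query.
def fused_score(text_score, visual_score, alpha=0.6):
    return alpha * text_score + (1.0 - alpha) * visual_score

studies = [
    {"id": "S1", "text": 0.91, "visual": 0.42},
    {"id": "S2", "text": 0.55, "visual": 0.88},
    {"id": "S3", "text": 0.73, "visual": 0.70},
]
ranked = sorted(studies,
                key=lambda s: fused_score(s["text"], s["visual"]),
                reverse=True)
print([s["id"] for s in ranked])
```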
Multi-mode Intravascular RF Coil for MRI-guided Interventions
Kurpad, Krishna N.; Unal, Orhan
2011-01-01
Purpose: To demonstrate the feasibility of using a single intravascular RF probe connected to the external MRI system via a single coaxial cable to perform active tip tracking, catheter visualization, and high-SNR intravascular imaging. Materials and Methods: A multi-mode intravascular RF coil was constructed on a 6F balloon catheter and interfaced to a 1.5T MRI scanner via a decoupling circuit. Bench measurements of coil impedances were followed by imaging experiments in saline and phantoms. Results: The multi-mode coil behaves as an inductively coupled transmit coil. A forward-looking capability of 6 mm is measured. A greater than 3-fold increase in SNR compared to conventional imaging using an optimized external coil is demonstrated. Simultaneous active tip tracking and catheter visualization are demonstrated. Conclusions: It is feasible to perform 1) active tip tracking, 2) catheter visualization, and 3) high-SNR imaging using a single multi-mode intravascular RF coil connected to the external system via a single coaxial cable. PMID:21448969
Hao, Xiaoke; Yao, Xiaohui; Yan, Jingwen; Risacher, Shannon L.; Saykin, Andrew J.; Zhang, Daoqiang; Shen, Li
2016-01-01
Neuroimaging genetics has attracted growing attention and interest, and is thought to be a powerful strategy for examining the influence of genetic variants (i.e., single nucleotide polymorphisms (SNPs)) on structures or functions of the human brain. In recent studies, univariate or multivariate regression analysis methods are typically used to capture the effective associations between genetic variants and quantitative traits (QTs) such as brain imaging phenotypes. The identified imaging QTs, although associated with certain genetic markers, may not all be disease specific. A useful, but underexplored, scenario could be to discover only those QTs associated with both genetic markers and disease status, revealing the chain from genotype to phenotype to symptom. In addition, multimodal brain imaging phenotypes are extracted from different perspectives, and imaging markers consistently showing up in multiple modalities may provide more insights for mechanistic understanding of diseases (i.e., Alzheimer's disease (AD)). In this work, we propose a general framework to exploit multi-modal brain imaging phenotypes as intermediate traits that bridge genetic risk factors and multi-class disease status. We applied our proposed method to explore the relation between the well-known AD risk SNP APOE rs429358 and three baseline brain imaging modalities (i.e., structural magnetic resonance imaging (MRI), fluorodeoxyglucose positron emission tomography (FDG-PET) and F-18 florbetapir PET amyloid imaging (AV45)) from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The empirical results demonstrate that our proposed method not only helps improve the performance of imaging genetic associations, but also discovers robust and consistent regions of interest (ROIs) across multiple modalities to guide the disease-induced interpretation. PMID:27277494
Electron Shock Ignition of Inertial Fusion Targets
Shang, W. L.; Betti, R.; Hu, S. X.; ...
2017-11-07
Here, it is shown that inertial fusion targets designed with low implosion velocities can be shock-ignited using laser-plasma interaction generated hot electrons (hot-e) to obtain high energy gains. These designs are robust to multimode asymmetries and are predicted to ignite even for significantly distorted implosions. Electron shock ignition requires tens of kilojoules of hot-e, which can only be produced at a large laser facility like the National Ignition Facility, with the laser to hot-e conversion efficiency greater than 10% at laser intensities ~10^16 W/cm^2.
Bourantas, Christos V; Jaffer, Farouc A; Gijsen, Frank J; van Soest, Gijs; Madden, Sean P; Courtney, Brian K; Fard, Ali M; Tenekecioglu, Erhan; Zeng, Yaping; van der Steen, Antonius F W; Emelianov, Stanislav; Muller, James; Stone, Peter H; Marcu, Laura; Tearney, Guillermo J; Serruys, Patrick W
2017-02-07
Cumulative evidence from histology-based studies demonstrates that the currently available intravascular imaging techniques have fundamental limitations that do not allow complete and detailed evaluation of plaque morphology and pathobiology, limiting the ability to accurately identify high-risk plaques. To overcome these drawbacks, new efforts are under way to develop data fusion methodologies and to design hybrid, dual-probe catheters that enable accurate assessment of plaque characteristics and reliable identification of high-risk lesions. Today several dual-probe catheters have been introduced, including combined near-infrared spectroscopy-intravascular ultrasound (NIRS-IVUS), which is already commercially available; IVUS-optical coherence tomography (OCT); OCT-NIRS; OCT-near-infrared fluorescence (NIRF) molecular imaging; IVUS-NIRF; IVUS-intravascular photoacoustic imaging; and combined fluorescence lifetime-IVUS imaging. These multimodal approaches appear able to overcome the limitations of standalone imaging and provide comprehensive visualization of plaque composition and plaque biology. The aim of this review article is to summarize the advances in hybrid intravascular imaging, discuss the technical challenges that should be addressed before these approaches can be used in the clinical arena, and present the evidence from their first applications, aiming to highlight their potential value in the study of atherosclerosis.
Multimodality bronchoscopic imaging of tracheopathia osteochondroplastica
NASA Astrophysics Data System (ADS)
Colt, Henri; Murgu, Septimiu D.; Ahn, Yeh-Chan; Brenner, Matt
2009-05-01
Results of a commercial optical coherence tomography system used as part of a multimodality diagnostic bronchoscopy platform are presented for a 61-year-old patient with central airway obstruction from tracheopathia osteochondroplastica. Comparison with results of white-light bronchoscopy, histology, and endobronchial ultrasound examination is accompanied by a discussion of the resolution, penetration depth, contrast, and field of view of these imaging modalities. White-light bronchoscopy revealed irregularly shaped, firm submucosal nodules along cartilaginous structures of the anterior and lateral walls of the trachea, sparing the muscular posterior membrane. Endobronchial ultrasound showed a hyperechoic density of 0.4 cm thickness. Optical coherence tomography (OCT) was performed using a commercially available, compact time-domain OCT system (Niris System, Imalux Corp., Cleveland, Ohio) with a magnetically actuated probe (two-dimensional, front imaging, and inside actuation). Images showed epithelium, upper submucosa, and osseous submucosal nodule layers corresponding with histopathology. To our knowledge, this is the first time these commercially available systems have been used as part of a multimodality bronchoscopy platform to study diagnostic imaging of a benign disease causing central airway obstruction. Further studies are needed to optimize these systems for pulmonary applications and to determine how new-generation imaging modalities will be integrated into a multimodality bronchoscopy platform.
[Research on non-rigid registration of multi-modal medical images based on the Demons algorithm].
Hao, Peibo; Chen, Zhen; Jiang, Shaofeng; Wang, Yang
2014-02-01
Non-rigid medical image registration is a popular subject in medical image research and has important clinical value. In this paper, we put forward an improved Demons algorithm that combines a gray-level conservation model with a local structure tensor conservation model to construct a new energy function for the multi-modal registration problem. We then applied the L-BFGS algorithm to optimize the energy function and solve the complex three-dimensional data optimization problem. Finally, we used a multi-scale hierarchical refinement scheme to handle large-deformation registration. The experimental results showed that the proposed algorithm performs well for large-deformation and multi-modal three-dimensional medical image registration.
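For orientation, a classic single-modality (Thirion-style) Demons iteration looks roughly as follows; the paper's multi-modal energy with the structure-tensor term and L-BFGS optimization is not reproduced, and all parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

# Hedged 2D sketch of a classic Demons update with Gaussian
# regularization of the displacement increments.
def demons_step(fixed, moving, u, v, sigma=1.5, eps=1e-6):
    gy, gx = np.gradient(fixed)
    yy, xx = np.meshgrid(np.arange(fixed.shape[0]),
                         np.arange(fixed.shape[1]), indexing="ij")
    warped = map_coordinates(moving, [yy + v, xx + u], order=1)
    diff = warped - fixed
    denom = gx**2 + gy**2 + diff**2 + eps
    u -= gaussian_filter(diff * gx / denom, sigma)  # smoothed update
    v -= gaussian_filter(diff * gy / denom, sigma)
    return u, v

fixed = gaussian_filter(np.random.rand(64, 64), 3)  # smooth test image
moving = np.roll(fixed, 2, axis=1)                  # small synthetic shift
u = np.zeros_like(fixed); v = np.zeros_like(fixed)
for _ in range(20):
    u, v = demons_step(fixed, moving, u, v)
print(float(np.abs(u).mean()))                      # recovered motion
```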
Kamran, Mudassar; Fowler, Kathryn J; Mellnick, Vincent M; Sicard, Gregorio A; Narra, Vamsi R
2016-06-01
Primary aortic neoplasms are rare. Aortic sarcoma arising after endovascular aneurysm repair (EVAR) is a scarce subset of primary aortic malignancies, reports of which are infrequent in the published literature. The diagnosis of aortic sarcoma is challenging due to its non-specific clinical presentation, and the prognosis is poor due to delayed diagnosis, rapid proliferation, and propensity for metastasis. Post-EVAR, aortic sarcomas may mimic other, more common aortic processes on surveillance imaging. Radiologists are often unfamiliar with this rare entity, for which multimodality imaging and awareness are invaluable in early diagnosis. A series of three pathologically confirmed cases is presented to display the multimodality imaging features and clinical presentations of aortic sarcoma arising after EVAR.
NASA Astrophysics Data System (ADS)
Dadkhah, Arash; Zhou, Jun; Yeasmin, Nusrat; Jiao, Shuliang
2018-02-01
Various optical imaging modalities with different optical contrast mechanisms have been developed over the past years. Although most of these imaging techniques are used in many biomedical applications and research, integration of these techniques will allow researchers to reach the full potential of these technologies. Nevertheless, combining different imaging techniques is always challenging due to differences in the optical and hardware requirements of different imaging systems. Here, we developed a multimodal optical imaging system with the capability of providing comprehensive structural, functional and molecular information on living tissue at the micrometer scale. This imaging system integrates photoacoustic microscopy (PAM), optical coherence tomography (OCT), optical Doppler tomography (ODT) and fluorescence microscopy in one platform. Optical-resolution PAM (OR-PAM) provides absorption-based imaging of biological tissues. Spectral-domain OCT is able to provide structural information based on the scattering properties of a biological sample with no need for exogenous contrast agents. In addition, ODT is a functional extension of OCT with the capability of measuring and visualizing blood flow based on the Doppler effect. Fluorescence microscopy reveals molecular information of biological tissue using autofluorescence or exogenous fluorophores. In vivo as well as ex vivo imaging studies demonstrated the capability of our multimodal imaging system to provide comprehensive microscopic information on biological tissues. Integrating all the aforementioned imaging modalities for simultaneous multimodal imaging has promising potential for preclinical research and clinical practice in the near future.
Semiotic foundation for multisensor-multilook fusion
NASA Astrophysics Data System (ADS)
Myler, Harley R.
1998-07-01
This paper explores the application of semiotic principles to the design of a multisensor-multilook fusion system. Semiotics is an approach to analysis that attempts to process media in a unified way using qualitative methods as opposed to quantitative ones. The term semiotic refers to signs, or signatory data that encapsulate information. Semiotic analysis involves the extraction of signs from information sources and the subsequent processing of the signs into meaningful interpretations of the information content of the source. The multisensor fusion problem predicated on a semiotic system structure and incorporating semiotic analysis techniques is examined, and the design of a multisensor system as an information fusion system is explored. Semiotic analysis opens the possibility of using non-traditional sensor sources and modalities in the fusion process, such as verbal and textual intelligence derived from human observers. Examples of how multisensor/multimodality data might be analyzed semiotically are shown, and how a semiotic system for multisensor fusion could be realized is discussed. The architecture of a semiotic multisensor fusion processor that can accept situational awareness data is described, although an implementation has not as yet been constructed.
Multimodal Sensor Fusion for Personnel Detection
2011-07-01
Efficacy of unattended ground sensor (UGS) systems is often limited by high false alarm rates, because the onboard data processing algorithms may not be able to correctly distinguish between targets of interest (e.g., humans) and animals (e.g., donkeys, mules, and horses). In the field tests, the humans walked alone and in groups, with and without backpacks; the animals were led by their handlers.
Multimodal targeted high relaxivity thermosensitive liposome for in vivo imaging
NASA Astrophysics Data System (ADS)
Kuijten, Maayke M. P.; Hannah Degeling, M.; Chen, John W.; Wojtkiewicz, Gregory; Waterman, Peter; Weissleder, Ralph; Azzi, Jamil; Nicolay, Klaas; Tannous, Bakhos A.
2015-11-01
Liposomes are spherical, self-closed structures formed by lipid bilayers that can encapsulate drugs and/or imaging agents in their hydrophilic core or within their membrane moiety, making them suitable delivery vehicles. We have synthesized a new liposome with a gadolinium-DOTA-bearing lipid bilayer as a targeted multimodal molecular imaging agent for magnetic resonance and optical imaging. We showed that this liposome has much higher molar relaxivities r1 and r2 than a more conventional liposome containing a gadolinium-DTPA-BSA lipid. By incorporating both gadolinium and rhodamine in the lipid bilayer, as well as biotin on its surface, we used this agent for multimodal imaging and targeting of tumors through the strong biotin-streptavidin interaction. Since this new liposome is thermosensitive, it can be used for ultrasound-mediated drug delivery at specific sites, such as tumors, and can be guided by magnetic resonance imaging.
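For reference, molar relaxivity is conventionally defined through the linear dependence of the measured relaxation rates on agent concentration; this standard relation (general MR physics, not specific to this paper) is:

```latex
% Longitudinal (i = 1) and transverse (i = 2) relaxation rates of a
% solution of a Gd-based agent at concentration [Gd]:
\[
  \frac{1}{T_i} = \frac{1}{T_{i,0}} + r_i\,[\mathrm{Gd}], \qquad i = 1, 2
\]
% T_{i,0}: relaxation time of the solution without the agent;
% r_i (mM^{-1} s^{-1}): molar relaxivity, the slope of 1/T_i vs [Gd].
```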
Image recovery from defocused 2D fluorescent images in multimodal digital holographic microscopy.
Quan, Xiangyu; Matoba, Osamu; Awatsuji, Yasuhiro
2017-05-01
A technique for three-dimensional (3D) intensity retrieval from defocused, two-dimensional (2D) fluorescent images in multimodal digital holographic microscopy (DHM) is proposed. In multimodal DHM, 3D phase and 2D fluorescence distributions are obtained simultaneously by an integrated system comprising an off-axis DHM and a conventional epifluorescence microscope. This gives us more information about the target; however, defocused fluorescent images are observed due to the short depth of field. In this Letter, we propose a method to recover the defocused images based on phase compensation and backpropagation from the defocused plane to the focused plane, using distance information obtained from the 3D phase distribution. By applying Zernike polynomial phase correction, we brought the fluorescence intensity back to the focused imaging planes. An experimental demonstration using fluorescent beads is presented, and expected applications are suggested.
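The backpropagation step can be sketched with the angular spectrum method; note this coherent-propagation sketch is a simplification (fluorescence is incoherent, and the paper's Zernike-based phase compensation is omitted), with all parameter values invented.

```python
import numpy as np

def angular_spectrum_propagate(field, z, wavelength, dx):
    """Numerically refocus a complex field by free-space propagation
    over distance z using the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)   # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Defocused fluorescence intensity treated as a real field amplitude,
# propagated back by the defocus distance recovered from the phase data.
defocused = np.random.rand(256, 256)
refocused = angular_spectrum_propagate(np.sqrt(defocused),
                                       z=-50e-6, wavelength=532e-9,
                                       dx=0.2e-6)
print(np.abs(refocused).max())
```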
Multi-focus image fusion using a guided-filter-based difference image.
Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Yang, Tingwu
2016-03-20
The aim of multi-focus image fusion technology is to integrate different partially focused images into one all-focused image. To realize this goal, a new multi-focus image fusion method based on a guided filter is proposed, and an efficient salient-feature extraction method is presented in this paper; feature extraction is the primary objective of the present work. The guided filter is first used to acquire a smoothed image containing the sharpest regions. To obtain the initial fusion map, we compose a mixed focus measure by combining the variance of image intensities with the energy of the image gradient. The initial fusion map is then further processed by a morphological filter to obtain a refined fusion map. Lastly, the final fusion map is determined from the refined fusion map and is optimized by a guided filter. Experimental results demonstrate that the proposed method markedly improves fusion performance compared to previous fusion methods and can be competitive with, or even outperform, state-of-the-art fusion methods in terms of both subjective visual effects and objective quality metrics.
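A minimal sketch of the overall recipe follows, with a mixed focus measure (local variance plus energy of gradient) and a Gaussian stand-in for the guided-filter refinement so the example stays self-contained; window sizes and constants are illustrative, not the paper's.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

# Hedged sketch: a mixed focus measure builds an initial decision map,
# which is then smoothed; a real guided filter (e.g. from
# opencv-contrib) would refine the final fusion map.
def focus_measure(img, size=7):
    mean = uniform_filter(img, size)
    var = uniform_filter(img**2, size) - mean**2   # local variance
    gy, gx = np.gradient(img)
    eog = uniform_filter(gx**2 + gy**2, size)      # energy of gradient
    return var + eog

def fuse(a, b):
    mask = (focus_measure(a) > focus_measure(b)).astype(float)
    mask = gaussian_filter(mask, 2.0)              # stand-in refinement
    return mask * a + (1.0 - mask) * b

near = np.random.rand(128, 128)  # focused on foreground (placeholder)
far = np.random.rand(128, 128)   # focused on background (placeholder)
print(fuse(near, far).shape)
```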
Real-time dynamic display of registered 4D cardiac MR and ultrasound images using a GPU
NASA Astrophysics Data System (ADS)
Zhang, Q.; Huang, X.; Eagleson, R.; Guiraudon, G.; Peters, T. M.
2007-03-01
In minimally invasive image-guided surgical interventions, different imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), and real-time three-dimensional (3D) ultrasound (US), can provide complementary, multi-spectral image information. Multimodality dynamic image registration is a well-established approach that permits real-time diagnostic information to be enhanced by placing lower-quality real-time images within a high quality anatomical context. For the guidance of cardiac procedures, it would be valuable to register dynamic MRI or CT with intraoperative US. However, in practice, either the high computational cost prohibits such real-time visualization of volumetric multimodal images in a real-world medical environment, or else the resulting image quality is not satisfactory for accurate guidance during the intervention. Modern graphics processing units (GPUs) provide the programmability, parallelism and increased computational precision to begin to address this problem. In this work, we first outline our research on dynamic 3D cardiac MR and US image acquisition, real-time dual-modality registration and US tracking. Then we describe image processing and optimization techniques for 4D (3D + time) cardiac image real-time rendering. We also present our multimodality 4D medical image visualization engine, which directly runs on a GPU in real-time by exploiting the advantages of the graphics hardware. In addition, techniques such as multiple transfer functions for different imaging modalities, dynamic texture binding, advanced texture sampling and multimodality image compositing are employed to facilitate the real-time display and manipulation of the registered dual-modality dynamic 3D MR and US cardiac datasets.
Parallel Information Processing (Image Transmission via Fiber Bundle and Multimode Fiber)
NASA Technical Reports Server (NTRS)
Kukhtarev, Nicholai
2003-01-01
Growing demand for visual, user-friendly representation of information inspires the search for new methods of image transmission. Currently used serial (sequential) methods of information processing are inherently slow and are designed mainly for transmission of one- or two-dimensional arrays of data. Conventional transmission of data by fibers requires many fibers with an array of laser diodes and photodetectors. In practice, fiber bundles are also used for transmission of images. An image is formed on the entrance surface of the fiber-optic bundle, and each fiber transmits the incident image to the exit surface. Since the fibers do not preserve phase, only a 2D intensity distribution can be transmitted in this way. Each single-mode fiber transmits only one pixel of an image. Multimode fibers may also be used, so that each mode represents a different pixel element. Direct transmission of an image through a multimode fiber is hindered by mode scrambling and phase randomization. To overcome these obstacles, wavelength- and time-division multiplexing have been used, with each pixel transmitted on a separate wavelength or time interval. Phase-conjugate techniques have also been tested, but only in an impractical scheme in which the reconstructed image returns to the fiber input end. Another method of three-dimensional imaging over single-mode fibers was demonstrated using laser light of reduced spatial coherence. The coherence encoding needed for transmission of images by this method was realized with a grating interferometer or with the help of an acousto-optic deflector. We suggest a simple, practical holographic method of image transmission over a single multimode fiber or over a fiber bundle with coherent light, using filtering by holographic optical elements. Originally this method was successfully tested for a single multimode fiber. In this research we have modified the holographic method for transmission of laser-illuminated images over a commercially available fiber bundle (fiber endoscope, or fiberscope).
Sarikaya, Duygu; Corso, Jason J; Guru, Khurshid A
2017-07-01
Video understanding of robot-assisted surgery (RAS) videos is an active research area. Modeling the gestures and skill level of surgeons presents an interesting problem. The insights drawn may be applied in effective skill acquisition, objective skill assessment, real-time feedback, and human-robot collaborative surgeries. We propose a solution to the open problem of tool detection and localization in RAS video understanding, using a strictly computer vision approach and recent advances in deep learning. We propose an architecture using multimodal convolutional neural networks for fast detection and localization of tools in RAS videos. To the best of our knowledge, this approach is the first to incorporate deep neural networks for tool detection and localization in RAS videos. Our architecture applies a region proposal network (RPN) and a multimodal two-stream convolutional network for object detection to jointly predict objectness and localization on a fusion of image and temporal motion cues. Our results, with an average precision of 91% and a mean computation time of 0.1 s per test frame, indicate that our approach is superior to methods conventionally used for medical imaging, while also emphasizing the benefits of using an RPN for precision and efficiency. We also introduce a new data set, ATLAS Dione, for RAS video understanding. Our data set provides video data of ten surgeons from Roswell Park Cancer Institute, Buffalo, NY, USA, performing six different surgical tasks on the daVinci Surgical System (dVSS) with annotations of robotic tools per frame.
Palmprint and Face Multi-Modal Biometric Recognition Based on SDA-GSVD and Its Kernelization
Jing, Xiao-Yuan; Li, Sheng; Li, Wen-Qian; Yao, Yong-Fang; Lan, Chao; Lu, Jia-Sen; Yang, Jing-Yu
2012-01-01
When extracting discriminative features from multimodal data, current methods rarely concern themselves with the data distribution. In this paper, we present an assumption that is consistent with the viewpoint of discrimination: a person's overall biometric data should be regarded as one class in the input space, and his different biometric data can form different Gaussian distributions, i.e., different subclasses. Hence, we propose a novel multimodal feature extraction and recognition approach based on subclass discriminant analysis (SDA). Specifically, one person's different biometric data are treated as different subclasses of one class, and a transformed space is calculated in which the difference among subclasses belonging to different persons is maximized and the difference within each subclass is minimized. The obtained multimodal features are then used for classification. Two solutions are presented to overcome the singularity problem encountered in this calculation: PCA preprocessing, and the generalized singular value decomposition (GSVD) technique. Further, we provide nonlinear extensions of SDA-based multimodal feature extraction, namely feature fusion based on KPCA-SDA and KSDA-GSVD. In KPCA-SDA, we first apply kernel PCA to each single modality before performing SDA. In KSDA-GSVD, we directly perform kernel SDA to fuse multimodal data, applying GSVD to avoid the singularity problem. For simplicity, two typical types of biometric data are considered in this paper, i.e., palmprint data and face data. Experimental results show that our approaches outperform several representative multimodal recognition methods, and KSDA-GSVD achieves the best recognition performance. PMID:22778600
NASA Astrophysics Data System (ADS)
Ye, Y.
2017-09-01
This paper presents a fast and robust method for the registration of multimodal remote sensing data (e.g., optical, LiDAR, SAR, and map). The proposed method is based on the hypothesis that structural similarity between images is preserved across different modalities. We first develop a pixel-wise feature descriptor named Dense Orientated Gradient Histogram (DOGH), which can be computed efficiently at every pixel and is robust to non-linear intensity differences between images. A fast similarity metric based on DOGH is then built in the frequency domain using the Fast Fourier Transform (FFT), and finally a template matching scheme is applied to detect tie points between images. Experimental results on different types of multimodal remote sensing images show that the proposed similarity metric offers superior matching performance and computational efficiency compared with state-of-the-art methods. Moreover, based on the proposed similarity metric, we also design a fast and robust automatic registration system for multimodal images. This system has been evaluated using a pair of very large SAR and optical images (more than 20000 × 20000 pixels). Experimental results show that our system outperforms two popular commercial software systems (i.e., ENVI and ERDAS) in both registration accuracy and computational efficiency.
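The frequency-domain matching step rests on the correlation theorem: spatial correlation of a template with a search window becomes a pointwise product of FFTs. A minimal numpy sketch follows; it assumes dense descriptor maps (such as DOGH) are already computed and stacked as (channels, H, W) arrays, which is an assumption about data layout, not the paper's implementation.

```python
import numpy as np

def fft_correlate(search, template):
    """Cross-correlation of one descriptor channel via the FFT."""
    H, W = search.shape
    F = np.fft.fft2(search)
    G = np.fft.fft2(template, s=(H, W))  # zero-padded to the search size
    return np.real(np.fft.ifft2(F * np.conj(G)))

def match_template(search_desc, tmpl_desc):
    """Sum channel-wise correlations and return the best-matching offset."""
    score = sum(fft_correlate(s, t) for s, t in zip(search_desc, tmpl_desc))
    return np.unravel_index(np.argmax(score), score.shape)
```

The peak of the summed correlation surface gives the tie-point offset; evaluating it via FFTs is what makes dense template matching tractable on very large images.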
Zhou, Jing; Zhu, Xingjun; Chen, Min; Sun, Yun; Li, Fuyou
2012-09-01
Multimodal imaging is rapidly becoming an important tool for biomedical applications because it can compensate for the deficiencies of individual imaging modalities. Herein, multifunctional NaLuF(4)-based upconversion nanoparticles (Lu-UCNPs) were synthesized through a facile one-step microemulsion method under ambient conditions. The doping of lanthanide ions (Gd(3+), Yb(3+) and Er(3+)/Tm(3+)) endows the Lu-UCNPs with high T(1)-enhancement, bright upconversion luminescence (UCL) emission, and an excellent X-ray absorption coefficient. Moreover, the as-prepared Lu-UCNPs are stable in water for more than six months, owing to the protection of the sodium glutamate and diethylene triamine pentacetate acid (DTPA) coordinating ligands on the surface. Lu-UCNPs have been successfully applied to trimodal CT/MR/UCL lymphatic imaging in small-animal models. Notably, Lu-UCNPs could still be used for imaging after storage for over six months. In vitro transmission electron microscopy (TEM), methyl thiazolyl tetrazolium (MTT) assays and histological analysis demonstrated that Lu-UCNPs exhibit low toxicity in living systems. Therefore, Lu-UCNPs can serve as multimodal agents for CT/MR/UCL imaging, and this concept can serve as a platform technology for the next generation of multimodal imaging probes.
A deep learning model integrating FCNNs and CRFs for brain tumor segmentation.
Zhao, Xiaomei; Wu, Yihong; Song, Guidong; Li, Zhenye; Zhang, Yazhuo; Fan, Yong
2018-01-01
Accurate and reliable brain tumor segmentation is a critical component in cancer diagnosis, treatment planning, and treatment outcome evaluation. Building upon successful deep learning techniques, a novel brain tumor segmentation method is developed by integrating fully convolutional neural networks (FCNNs) and Conditional Random Fields (CRFs) in a unified framework to obtain segmentation results with appearance and spatial consistency. We train a deep learning based segmentation model using 2D image patches and image slices in the following steps: 1) training FCNNs using image patches; 2) training CRFs as Recurrent Neural Networks (CRF-RNN) using image slices with the parameters of the FCNNs fixed; and 3) fine-tuning the FCNNs and the CRF-RNN jointly using image slices. In particular, we train three segmentation models using 2D image patches and slices obtained in axial, coronal, and sagittal views, respectively, and combine them to segment brain tumors using a voting-based fusion strategy. Our method can segment brain images slice-by-slice, much faster than methods based on image patches. We have evaluated our method on imaging data provided by the Multimodal Brain Tumor Image Segmentation Challenge (BRATS) 2013, BRATS 2015, and BRATS 2016. The experimental results demonstrate that our method can build a segmentation model with Flair, T1c, and T2 scans and achieve performance competitive with models built with Flair, T1, T1c, and T2 scans.
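The abstract does not spell out the voting rule beyond calling it voting-based, so the following is a minimal sketch of one plausible majority-vote fusion of the three per-view models, assuming each model's label volume has been resampled to a common voxel grid.

```python
import numpy as np
from scipy import stats

def fuse_views_by_voting(axial, coronal, sagittal):
    """Majority vote over three integer label volumes of identical shape.

    Ties (all three views disagree) fall back to the axial prediction.
    Requires scipy >= 1.9 for the keepdims keyword."""
    labels = np.stack([axial, coronal, sagittal])        # (3, D, H, W)
    vote, count = stats.mode(labels, axis=0, keepdims=False)
    return np.where(count >= 2, vote, labels[0])
```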
A dual-channel fusion system of visual and infrared images based on color transfer
NASA Astrophysics Data System (ADS)
Pei, Chuang; Jiang, Xiao-yu; Zhang, Peng-wei; Liang, Hao-cong
2013-09-01
The increasing availability and deployment of imaging sensors operating in multiple spectra has led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, most of these algorithms produce gray or false-color fusion results that are not adapted to human vision. Transferring color from a daytime reference image to obtain a naturally colored fusion result is an effective way to solve this problem, but the computational cost of color transfer is high and cannot meet the requirements of real-time image processing. We developed a dual-channel infrared and visual image fusion system based on the TMS320DM642 digital signal processing chip. The system is divided into an image acquisition and registration unit, an image fusion processing unit, a system control unit, and an image fusion output unit. Registration of the dual-channel images is realized by combining hardware and software methods. A false-color image fusion algorithm in RGB color space is used to obtain an R-G fused image, and the system then chooses a reference image from which to transfer color to the fusion result. A color lookup table based on the statistical properties of the images is proposed to solve the computational complexity problem of color transfer. The mapping between the standard lookup table and the improved color lookup table is simple and needs to be computed only once for a fixed scene. Real-time fusion and natural colorization of infrared and visual images are realized by this system. The experimental results show that the color-transferred images have a natural color perception to human eyes and can highlight targets effectively with clear background details. Human observers using this system will be able to interpret the image better and faster, thereby improving situational awareness and reducing target detection time.
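Statistics-based color transfer reduces to an affine map per channel, which is why it can be baked into a lookup table. The sketch below is a minimal version under two stated assumptions: it matches means and standard deviations in RGB rather than the lαβ space of classic color transfer, and the function names are illustrative.

```python
import numpy as np

def build_color_transfer_luts(source, reference):
    """Per-channel 256-entry LUTs matching source statistics to a reference."""
    luts = []
    x = np.arange(256, dtype=np.float64)
    for c in range(3):
        s = source[..., c].astype(np.float64)
        r = reference[..., c].astype(np.float64)
        lut = (x - s.mean()) / (s.std() + 1e-6) * r.std() + r.mean()
        luts.append(np.clip(lut, 0, 255).astype(np.uint8))
    return luts

def apply_luts(image, luts):
    """Colorize a fused uint8 image with one table lookup per pixel."""
    return np.stack([luts[c][image[..., c]] for c in range(3)], axis=-1)
```

Because the per-channel map is fixed for a given scene, the LUT can be computed once and reused frame after frame, which is what makes real-time colorization on a DSP feasible.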
NASA Astrophysics Data System (ADS)
Liu, Xiaonan; Chen, Kewei; Wu, Teresa; Weidman, David; Lure, Fleming; Li, Jing
2018-02-01
Alzheimer's Disease (AD) is the most common cause of dementia and currently has no cure. Treatments targeting early stages of AD, such as Mild Cognitive Impairment (MCI), may be most effective at decelerating AD and are thus attracting increasing attention. However, MCI has substantial heterogeneity in that it can be caused by various underlying conditions, not only AD. To detect MCI due to AD, the NIA-AA published updated consensus criteria in 2011, in which the use of multi-modality images was highlighted as one of the most promising methods. It is of great interest to develop a CAD system based on automatic, quantitative analysis of multi-modality images and machine learning algorithms to help physicians more adequately diagnose MCI due to AD. The challenge, however, is that multi-modality images are not universally available for many patients due to cost, access, safety, and lack of consent. We developed a novel Missing Modality Transfer Learning (MMTL) algorithm capable of utilizing whatever imaging modalities are available for an MCI patient to diagnose the patient's likelihood of MCI due to AD. Furthermore, we integrated MMTL with radiomics steps including image processing, feature extraction, and feature screening, and post-processing for uncertainty quantification (UQ), and developed a CAD system called "ADMultiImg" to assist clinical diagnosis of MCI due to AD using multi-modality images together with patient demographic and genetic information. Tested on ADNI data, our system can generate a diagnosis with high accuracy even for patients with only partially available image modalities (AUC = 0.94), and therefore may have broad clinical utility.
Tiwari, Pallavi; Kurhanewicz, John; Viswanath, Satish; Sridhar, Akshay; Madabhushi, Anant
2011-01-01
Rationale and Objectives: To develop a computerized data integration framework (MaWERiC) for quantitatively combining structural and metabolic information from different magnetic resonance (MR) imaging modalities. Materials and Methods: In this paper, we present a novel computerized support system called Multimodal Wavelet Embedding Representation for data Combination (MaWERiC), which (1) employs wavelet theory and dimensionality reduction to provide a common, uniform representation of the different imaging (T2-w) and non-imaging (spectroscopy) MRI channels, and (2) leverages a random forest classifier for automated prostate cancer detection on a per-voxel basis from combined 1.5 Tesla in vivo MRI and MRS. Results: A total of 36 1.5 T endorectal in vivo T2-w MRI and MRS patient studies were evaluated on a per-voxel basis via MaWERiC, using a three-fold cross-validation scheme across 25 iterations. Ground truth for evaluation was obtained via ex vivo whole-mount histology sections, which served as the gold standard for expert radiologist annotations of prostate cancer on a per-voxel basis. The results suggest that the MaWERiC-based MRS-T2-w meta-classifier (mean AUC, μ = 0.89 ± 0.02) significantly outperformed (i) a T2-w MRI classifier employing wavelet texture features (μ = 0.55 ± 0.02), (ii) an MRS classifier employing metabolite ratios (μ = 0.77 ± 0.03), (iii) a decision-fusion classifier obtained by combining the individual T2-w MRI and MRS classifier outputs (μ = 0.85 ± 0.03), and (iv) a data combination scheme combining metabolic MRS and MR signal intensity features (μ = 0.66 ± 0.02). Conclusion: A novel data integration framework, MaWERiC, for combining imaging and non-imaging MRI channels was presented. Application to prostate cancer detection via combination of T2-w MRI and MRS data demonstrated significantly higher AUC and accuracy values compared to the individual T2-w MRI and MRS modalities and other data integration strategies. PMID:21960175
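The evaluation protocol (random forest, three-fold cross-validation repeated 25 times, per-voxel AUC) is easy to mirror with scikit-learn. The sketch below assumes a precomputed matrix of per-voxel fused features and binary cancer labels; variable names and hyperparameters are illustrative, not those of the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def evaluate_fusion(features, labels, n_iter=25, n_splits=3, seed=0):
    """Mean/stddev AUC of a random forest over repeated 3-fold CV."""
    rng = np.random.RandomState(seed)
    aucs = []
    for _ in range(n_iter):
        cv = StratifiedKFold(n_splits=n_splits, shuffle=True,
                             random_state=rng.randint(1 << 30))
        for train, test in cv.split(features, labels):
            clf = RandomForestClassifier(n_estimators=100)
            clf.fit(features[train], labels[train])
            scores = clf.predict_proba(features[test])[:, 1]
            aucs.append(roc_auc_score(labels[test], scores))
    return np.mean(aucs), np.std(aucs)
```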
Hosseini, Seyyed Abed; Khalilzadeh, Mohammad Ali; Naghibi-Sistani, Mohammad Bagher; Homam, Seyyed Mehran
2015-01-01
Background: This paper proposes a new emotional stress assessment system using multi-modal bio-signals. Electroencephalography (EEG) reflects brain activity and is widely used in clinical diagnosis and biomedical research. Methods: We designed an efficient acquisition protocol to acquire EEG signals in five channels (FP1, FP2, T3, T4 and Pz) and peripheral signals such as blood volume pulse, skin conductance (SC) and respiration, under image induction (calm-neutral and negatively excited) for the participants. The visual stimulus images were selected from a subset of the International Affective Picture System database. Qualitative and quantitative evaluation of the peripheral signals is used to select suitable segments of the EEG signals, improving the accuracy of signal labeling according to emotional stress states. After pre-processing, wavelet coefficients, fractal dimension, and Lempel-Ziv complexity are used to extract features from the EEG signals. The large number of features leads to a dimensionality problem, which is solved using a genetic algorithm for feature selection. Results: The results show that the average classification accuracy is 89.6% for two categories of emotional stress states using a support vector machine (SVM). Conclusion: This is a notable improvement over similar studies: we achieve an 11.3% gain in accuracy with the SVM classifier compared to previous work. Therefore, the new fusion of EEG and peripheral signals is more robust than the separate signals. PMID:26622979
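Of the EEG features listed, Lempel-Ziv complexity is the most self-contained to illustrate. Below is a minimal Python sketch of the commonly used simplified LZ76 parsing applied to a median-binarized signal; this is a generic implementation, not the authors' code, and the binarization rule is an assumption.

```python
import numpy as np

def lempel_ziv_complexity(sequence):
    """Number of phrases in a simplified LZ76 parsing of a binary string."""
    i, c, n = 0, 0, len(sequence)
    while i < n:
        l = 1
        # extend the current phrase while it already appeared in the history
        while i + l <= n and sequence[i:i + l] in sequence[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

def lz_of_signal(x):
    """Binarize an EEG epoch around its median, then count LZ phrases."""
    bits = ''.join('1' if v > np.median(x) else '0' for v in x)
    return lempel_ziv_complexity(bits)
```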
A color fusion method of infrared and low-light-level images based on visual perception
NASA Astrophysics Data System (ADS)
Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa
2014-11-01
Color fusion images can be obtained through the fusion of infrared and low-light-level images and contain the information of both. Fusion images help observers understand multichannel images comprehensively. However, simple fusion may lose target information when targets are inconspicuous in long-distance infrared and low-light-level images; and if target extraction is applied blindly, the perception of scene information is seriously affected. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are applied to traditional color fusion methods. Infrared and low-light-level color fusion images are achieved based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method. The fusion images achieved by our algorithm not only improve the detection rate of targets, but also retain rich natural information of the scenes.
Novel multimodality segmentation using level sets and Jensen-Rényi divergence.
Markel, Daniel; Zaidi, Habib; El Naqa, Issam
2013-12-01
Positron emission tomography (PET) is playing an increasing role in radiotherapy treatment planning. However, despite progress, robust algorithms for PET and multimodal image segmentation are still lacking, especially if the algorithm is to be extended to image-guided and adaptive radiotherapy (IGART). This work presents a novel multimodality segmentation algorithm using the Jensen-Rényi divergence (JRD) to evolve the geometric level set contour. The algorithm offers improved noise tolerance, which is particularly applicable to segmentation of regions found in PET and cone-beam computed tomography. A steepest gradient ascent optimization method is used in conjunction with the JRD and a level set active contour to iteratively evolve a contour to partition an image based on statistical divergence of the intensity histograms. The algorithm is evaluated using PET scans of pharyngolaryngeal squamous cell carcinoma with the corresponding histological reference. The multimodality extension of the algorithm is evaluated using 22 PET/CT scans of patients with lung carcinoma and a physical phantom scanned under varying image quality conditions. The average concordance index (CI) of the JRD segmentation of the PET images was 0.56 with an average classification error of 65%. The segmentation of the lung carcinoma images had a maximum diameter relative error of 63%, 19.5%, and 14.8% when using CT, PET, and combined PET/CT images, respectively. The estimated maximal diameters of the gross tumor volume (GTV) showed a high correlation with the macroscopically determined maximal diameters, with R² values of 0.85 and 0.88 using the PET and PET/CT images, respectively. Results from the physical phantom show that the JRD is more robust to image noise compared to mutual information and region growing. The JRD has shown improved noise tolerance compared to mutual information for the purpose of PET image segmentation. Presented is a flexible framework for multimodal image segmentation that can incorporate a large number of inputs efficiently for IGART.
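The divergence itself is simple to compute from intensity histograms: it is the Rényi entropy of the weighted mixture minus the weighted mean of the individual entropies. A minimal numpy sketch follows; the entropy order α = 2 and equal weights are assumptions, not values taken from the paper.

```python
import numpy as np

def renyi_entropy(p, alpha=2.0, eps=1e-12):
    """Rényi entropy of order alpha for a (possibly unnormalized) histogram."""
    p = p / (p.sum() + eps)
    return np.log((p ** alpha).sum() + eps) / (1.0 - alpha)

def jensen_renyi_divergence(hists, weights=None, alpha=2.0):
    """JRD: entropy of the weighted mixture minus weighted mean entropy."""
    hists = [h / h.sum() for h in hists]
    if weights is None:
        weights = np.full(len(hists), 1.0 / len(hists))
    mixture = sum(w * h for w, h in zip(weights, hists))
    return renyi_entropy(mixture, alpha) - sum(
        w * renyi_entropy(h, alpha) for w, h in zip(weights, hists))
```

In the segmentation setting, the contour is evolved by gradient ascent so as to maximize this divergence between the intensity histograms inside and outside the level set.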
NASA Astrophysics Data System (ADS)
Chernavskaia, Olga; Heuke, Sandro; Vieth, Michael; Friedrich, Oliver; Schürmann, Sebastian; Atreya, Raja; Stallmach, Andreas; Neurath, Markus F.; Waldner, Maximilian; Petersen, Iver; Schmitt, Michael; Bocklitz, Thomas; Popp, Jürgen
2016-07-01
Assessing disease activity is a prerequisite for adequate treatment of inflammatory bowel diseases (IBD) such as Crohn's disease and ulcerative colitis. In addition to endoscopic mucosal healing, histologic remission poses a promising end-point of IBD therapy. However, evaluating histological remission harbors the risk of complications due to the acquisition of biopsies and delays diagnosis because of tissue processing procedures. In this regard, non-linear multimodal imaging techniques might serve as an unparalleled approach allowing real-time evaluation of microscopic IBD activity in the endoscopy unit. In this study, tissue sections were investigated using the non-linear multimodal microscopy combination of coherent anti-Stokes Raman scattering (CARS), two-photon excited autofluorescence (TPEF) and second-harmonic generation (SHG). After the measurements, a gold-standard assessment of histological indexes was carried out based on a conventional H&E stain. Subsequently, various geometry- and intensity-related features were extracted from the multimodal images. An optimized feature set was utilized to predict histological index levels with a linear classifier, and the automated prediction shortens the time to diagnosis. Therefore, non-linear multimodal imaging may provide a real-time diagnosis of IBD activity suited to assist clinical decision making within the endoscopy unit.
Reconstructing multi-mode networks from multivariate time series
NASA Astrophysics Data System (ADS)
Gao, Zhong-Ke; Yang, Yu-Xuan; Dang, Wei-Dong; Cai, Qing; Wang, Zhen; Marwan, Norbert; Boccaletti, Stefano; Kurths, Jürgen
2017-09-01
Unveiling the dynamics hidden in multivariate time series is a task of the utmost importance in a broad variety of areas in physics. We here propose a method that leads to the construction of a novel functional network, a multi-mode weighted graph combined with an empirical mode decomposition, and to the realization of multi-information fusion of multivariate time series. The method is illustrated in a couple of successful applications (a multi-phase flow and an epileptic electro-encephalogram), which demonstrate its power in revealing the dynamical behaviors underlying the transitions of different flow patterns and in differentiating brain states of seizure and non-seizure.
Adaptive fusion of infrared and visible images in dynamic scene
NASA Astrophysics Data System (ADS)
Yang, Guang; Yin, Yafeng; Man, Hong; Desai, Sachi
2011-11-01
Multi-modality sensor fusion has been widely employed in various surveillance and military applications. A variety of image fusion techniques, including PCA, wavelet, curvelet and HSV, have been proposed in recent years to improve human visual perception for object detection. One of the main challenges for visible and infrared image fusion is to automatically determine an optimal fusion strategy for different input scenes at an acceptable computational cost. In this paper, we propose a fast, adaptive, feature-selection-based image fusion method to obtain a high-contrast image from visible and infrared sensors for target detection. First, fuzzy c-means clustering is applied to the infrared image to highlight possible hotspot regions, which are considered potential target locations. The region surrounding the target area is then segmented as the background region. Image fusion is applied locally on the selected target and background regions by computing different linear combinations of color components from the registered visible and infrared images. After obtaining the different fused images, histogram distributions are computed on these local fusion images as the fusion feature set. The variance ratio, a measure based on Linear Discriminant Analysis (LDA), is employed to rank the feature set, and the most discriminative feature is selected for fusing the whole image. As the feature selection is performed over time, the process dynamically determines the most suitable feature for image fusion in different scenes. Experiments were conducted on the OSU Color-Thermal database and the TNO Human Factors dataset. The fusion results indicate that our proposed method achieves competitive performance compared with other fusion algorithms at a relatively low computational cost.
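The hotspot-detection step uses standard fuzzy c-means, sketched below on raw infrared intensities. This is a generic textbook implementation, assuming two clusters and a fuzzifier m = 2; the hotspot cluster is taken as the one with the brighter center.

```python
import numpy as np

def fuzzy_cmeans_1d(x, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means on a flattened array of pixel intensities."""
    rng = np.random.RandomState(seed)
    u = rng.dirichlet(np.ones(n_clusters), size=x.size)   # membership matrix
    for _ in range(n_iter):
        um = u ** m
        centers = um.T @ x / um.sum(axis=0)               # weighted cluster means
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9  # distances to centers
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)                 # normalize memberships
    return u, centers

def hotspot_mask(ir_image):
    """Pixels whose membership in the brightest cluster dominates."""
    u, centers = fuzzy_cmeans_1d(ir_image.ravel().astype(np.float64))
    hot = np.argmax(centers)
    return (u[:, hot] > 0.5).reshape(ir_image.shape)
```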
Multimodality imaging of adult gastric emergencies: A pictorial review
Sunnapwar, Abhijit; Ojili, Vijayanadh; Katre, Rashmi; Shah, Hardik; Nagar, Arpit
2017-01-01
Acute gastric emergencies require urgent surgical or nonsurgical intervention because they are associated with high morbidity and mortality. Imaging plays an important role in diagnosis, since the clinical symptoms are often nonspecific and the radiologist may be the first to suggest a diagnosis, as the imaging findings are often characteristic. The purpose of this article is to provide a comprehensive review of multimodality imaging (plain radiography, fluoroscopy, and computed tomography) of various life-threatening gastric emergencies. PMID:28515579
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chittenden, J. P., E-mail: j.chittenden@imperial.ac.uk; Appelbe, B. D.; Manke, F.
2016-05-15
We present the results of 3D simulations of indirect drive inertial confinement fusion capsules driven by the “high-foot” radiation pulse on the National Ignition Facility. The results are post-processed using a semi-deterministic ray tracing model to generate synthetic deuterium-tritium (DT) and deuterium-deuterium (DD) neutron spectra as well as primary and down-scattered neutron images. Results with low-mode asymmetries are used to estimate the magnitude of anisotropy in the neutron spectra shift, width, and shape. Comparisons of primary and down-scattered images highlight the lack of alignment between the neutron sources, scatter sites, and detector plane, which limits the ability to infer the ρr of the fuel from a down-scattered ratio. Further calculations use high bandwidth multi-mode perturbations to induce multiple short scale length flows in the hotspot. The results indicate that the effect of fluid velocity is to produce a DT neutron spectrum with an apparently higher temperature than that inferred from the DD spectrum, which is also higher than the temperature implied by the DT to DD yield ratio.
Huang, Yawen; Shao, Ling; Frangi, Alejandro F
2018-03-01
Multi-modality medical imaging is increasingly used for comprehensive assessment of complex diseases, in either diagnostic examinations or as part of medical research trials. Different imaging modalities provide complementary information about living tissues. However, multi-modal examinations are not always possible due to adverse factors such as patient discomfort, increased cost, prolonged scanning time, and scanner unavailability. In addition, in large imaging studies, incomplete records are not uncommon owing to image artifacts, data corruption or data loss, which compromise the potential of multi-modal acquisitions. In this paper, we propose a weakly coupled and geometry co-regularized joint dictionary learning method to address the problem of cross-modality synthesis, while considering the fact that collecting large amounts of training data is often impractical. Our learning stage requires only a few registered multi-modality image pairs as training data. To employ both paired images and a large set of unpaired data, a cross-modality image matching criterion is proposed. We then propose a unified model by integrating this criterion into the joint dictionary learning and the observed common feature space for associating cross-modality data for the purpose of synthesis. Furthermore, two regularization terms are added to construct robust sparse representations. Our experimental results demonstrate superior performance of the proposed model over state-of-the-art methods.
Present status and trends of image fusion
NASA Astrophysics Data System (ADS)
Xiang, Dachao; Fu, Sheng; Cai, Yiheng
2009-10-01
Image fusion integrates information extracted from multiple images, yielding results that are more accurate and reliable than those from a single image. Since different images capture different aspects of the measured scene, comprehensive information can be obtained by integrating them. Image fusion is a main branch of applied data fusion technology and is widely used in computer vision, remote sensing, robot vision, medical image processing and the military field. This paper presents the contents and research methods of image fusion and its current status in China and abroad, and analyzes development trends.
Digital diagnosis of medical images
NASA Astrophysics Data System (ADS)
Heinonen, Tomi; Kuismin, Raimo; Jormalainen, Raimo; Dastidar, Prasun; Frey, Harry; Eskola, Hannu
2001-08-01
The popularity of digital imaging devices and PACS installations has increased in recent years; still, images are analyzed and diagnosed using conventional techniques. Our research group began to study the requirements for digital image diagnostic methods to be applied together with PACS systems. The research focused on various image analysis procedures (e.g., segmentation, volumetry, 3D visualization, image fusion, anatomic atlas, etc.) that could be useful in medical diagnosis. We have developed Image Analysis software (www.medimag.net) to enable several image-processing applications in medical diagnosis, such as volumetry, multimodal visualization, and 3D visualization. We have also developed a commercial scalable image archive system (ActaServer, supports DICOM) based on component technology (www.acta.fi), and several telemedicine applications. All the software and systems operate in the NT environment and are in clinical use in several hospitals. The analysis software has been applied in clinical work and utilized in numerous patient cases (500 patients). This method has been used in the diagnosis, therapy and follow-up of various diseases of the central nervous system (CNS), respiratory system (RS) and human reproductive system (HRS). In many of these diseases, e.g. Systemic Lupus Erythematosus (CNS), nasal airway diseases (RS) and ovarian tumors (HRS), these methods have been used for the first time in clinical work. According to our results, digital diagnosis improves diagnostic capabilities, and together with PACS installations it will become a standard tool during the next decade by enabling more accurate diagnosis and patient follow-up.
Djan, Igor; Petrović, Borislava; Erak, Marko; Nikolić, Ivan; Lucić, Silvija
2013-08-01
The development of imaging techniques, computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), has had a great impact on radiotherapy treatment planning by improving the localization of target volumes. Improved localization allows better local control of tumor volumes and also minimizes geographic misses. Complementary information from the modalities is combined by registration and fusion of the images, achieved manually or automatically. The aim of this study was to validate the CT-MRI image fusion method and compare delineation obtained on CT alone versus CT-MRI image fusion. The image fusion software (XIO CMS 4.50.0) was applied to delineate 16 patients. The patients were scanned on CT and MRI in the treatment position within an immobilization device before the initial treatment. The gross tumor volume (GTV) and clinical target volume (CTV) were delineated on CT alone and on the fused CT-MRI images consecutively. The comparison showed that a CTV delineated on a CT image study set alone is mainly inadequate for treatment planning compared with a CTV delineated on a fused CT-MRI image study set. Fusion of different modalities enables the most accurate target volume delineation. This study shows that registration and image fusion allow precise target localization in terms of GTV and CTV, and thereby local disease control.
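The automatic registration underlying CT-MRI fusion is commonly driven by maximizing mutual information between the two images; the study above relied on the XIO planning software rather than custom code, so the following numpy sketch is purely illustrative of that similarity measure.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Mutual information of two aligned images from their joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                    # joint probability
    px = pxy.sum(axis=1, keepdims=True)        # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                               # avoid log(0)
    return (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()
```

A registration routine would search over transform parameters (translation, rotation, etc.) for the pose that maximizes this score between the resampled CT and MRI.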
[Possibilities of sonographic image fusion: Current developments].
Jung, E M; Clevert, D-A
2015-11-01
For diagnostic and interventional procedures, ultrasound (US) image fusion can be used as a complementary imaging technique. Image fusion has the advantage of real-time imaging and can be combined with other cross-sectional imaging techniques. With the introduction of US contrast agents, sonography and image fusion have gained more importance in the detection and characterization of liver lesions. Fusion of US images with computed tomography (CT) or magnetic resonance imaging (MRI) facilitates diagnosis and post-interventional therapy control. In addition to the primary application of image fusion in the diagnosis and treatment of liver lesions, there are further useful indications for contrast-enhanced US (CEUS) in routine clinical diagnostic procedures, such as intraoperative US (IOUS), vascular imaging, and diagnostics of other organs, such as the kidneys and prostate gland.
XML-based scripting of multimodality image presentations in multidisciplinary clinical conferences
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Allada, Vivekanand; Dahlbom, Magdalena; Marcus, Phillip; Fine, Ian; Lapstra, Lorelle
2002-05-01
We developed multi-modality image presentation software for the display and analysis of images and related data from different imaging modalities. The software is part of a cardiac image review and presentation platform that supports integration of digital images and data from digital and analog media such as videotapes, analog x-ray films and 35 mm cine films. The software supports standard DICOM image files as well as AVI and PDF data formats. The system is integrated in a digital conferencing room that includes projection of digital and analog sources, remote videoconferencing capabilities, and an electronic whiteboard. The goals of this pilot project are to: 1) develop a new paradigm for image and data management for presentation in a clinically meaningful sequence adapted to case-specific scenarios, 2) design and implement a multi-modality review and conferencing workstation using component technology and a customizable 'plug-in' architecture to support complex review and diagnostic tasks applicable to all cardiac imaging modalities, and 3) develop an XML-based scripting model of image and data presentation for clinical review and decision making during routine clinical tasks and multidisciplinary clinical conferences.
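The abstract does not give the script schema, so the example below invents a minimal one to show what XML-scripted presentation sequencing might look like; every element and attribute name here is hypothetical, and the parsing is done with Python's standard library.

```python
import xml.etree.ElementTree as ET

# hypothetical presentation script; element/attribute names are illustrative only
script = """
<conference case="cardiac-01">
  <step order="1" modality="DICOM" source="echo_4ch.dcm" layout="left"/>
  <step order="2" modality="AVI" source="cine_angio.avi" layout="right"/>
  <step order="3" modality="PDF" source="cath_report.pdf" layout="full"/>
</conference>
"""

# replay the presentation steps in their scripted order
for step in sorted(ET.fromstring(script).findall("step"),
                   key=lambda s: int(s.get("order"))):
    print(step.get("order"), step.get("modality"), step.get("source"))
```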
Airborne Infrared and Visible Image Fusion Combined with Region Segmentation
Zuo, Yujia; Liu, Jinghong; Bai, Guanbing; Wang, Xuan; Sun, Mingchao
2017-01-01
This paper proposes an infrared (IR) and visible image fusion method introducing region segmentation into the dual-tree complex wavelet transform (DTCWT) domain. The method is designed to improve both the target indication and scene spectral features of fusion images, and the target identification and tracking reliability of the fusion system, on an airborne photoelectric platform. The method involves segmenting the IR image into regions by significance, identifying the target region and the background region, and then fusing the low-frequency components in the DTCWT domain according to the region segmentation result. For the high-frequency components, region weights are assigned by the information richness of region details, fusion is conducted based on both the weights and adaptive phases, and a shrinkage function is introduced to suppress noise. Finally, the fused low-frequency and high-frequency components are reconstructed to obtain the fusion image. The experimental results show that the proposed method can fully extract complementary information from the source images to obtain a fusion image with good target indication and rich scene detail. It also gives fusion results superior to existing popular fusion methods under both subjective and objective evaluation. With good stability and high fusion accuracy, this method can meet the requirements of IR-visible image fusion systems. PMID:28505137
Álvarez-Tamayo, Ricardo I.; Durán-Sánchez, Manuel; Prieto-Cortés, Patricia; Salceda-Delgado, Guillermo; Castillo-Guzmán, Arturo A.; Selvas-Aguilar, Romeo; Ibarra-Escamilla, Baldemar; Kuzin, Evgeny A.
2017-01-01
An all-fiber curvature laser sensor using a novel modal-interference in-fiber structure is proposed and experimentally demonstrated. The in-fiber device, fabricated by fusion splicing of multimode fiber and double-clad fiber segments, is used as a wavelength filter as well as the sensing element. By including a multimode fiber in an ordinary modal interference structure based on a double-clad fiber, the fringe visibility of the filter transmission spectrum is significantly increased. By using the modal interferometer as a curvature-sensitive wavelength filter within a ring-cavity erbium-doped fiber laser, the spectral quality factor Q is considerably increased. The results demonstrate the reliability of the proposed curvature laser sensor, with the advantages of robustness, ease of fabrication, low cost, repeatability of the fabrication process and simple operation. PMID:29182527
Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan
2014-01-01
This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale fusion methods can often obtain fused images with good visual effect. However, because of defects in the fusion rules, it is almost impossible to completely avoid the loss of useful information in the fused images thus obtained. The proposed fusion scheme is divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). Sum-Modified-Laplacian (SML)-based visual contrast and SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composite image is obtained. In the final fusion process, an image block residual technique and consistency verification are used to detect the focused areas, yielding a decision map. The map is used to guide how to achieve the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including non-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperforms various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs. PMID:25587878
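The SML focus measure used here for coefficient selection is easy to state: a modified Laplacian that sums absolute horizontal and vertical second differences, accumulated over a small window. The sketch below is a generic implementation; the step size and window size are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sum_modified_laplacian(img, step=1, window=3):
    """SML focus measure: |2I - left - right| + |2I - up - down|, box-summed."""
    s = step
    p = np.pad(img.astype(np.float64), s, mode='reflect')
    center = p[s:-s, s:-s]
    ml = (np.abs(2 * center - p[s:-s, :-2*s] - p[s:-s, 2*s:]) +
          np.abs(2 * center - p[:-2*s, s:-s] - p[2*s:, s:-s]))
    return uniform_filter(ml, window) * window * window  # box sum of |ML|
```

A per-coefficient selection rule then keeps, at each position, the coefficients from the source image whose SML is larger.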
Weng, Sheng; Chen, Xu; Xu, Xiaoyun; Wong, Kelvin K.; Wong, Stephen T. C.
2016-01-01
In coherent anti-Stokes Raman scattering (CARS) and second harmonic generation (SHG) imaging, backward- and forward-generated photons exhibit different image patterns and thus capture salient intrinsic information about tissues from different perspectives. However, they are often mixed in collection with traditional image acquisition methods and are thus hard to interpret. We developed a multimodal scheme using a single central fiber and a multimode fiber bundle to simultaneously collect and differentiate images formed by these two types of photons, and evaluated the scheme in an endomicroscopy prototype. The ratio of the photons collected was calculated to characterize tissue regions with strong or weak epi-photon generation, while different image patterns of these photons at different tissue depths were revealed. This scheme provides a new approach to extract and integrate the information captured by backward- and forward-generated photons in dual CARS/SHG imaging synergistically for biomedical applications. PMID:27375938
[An improved medical image fusion algorithm and quality evaluation].
Chen, Meiling; Tao, Ling; Qian, Zhiyu
2009-08-01
Medical image fusion is of great value for application in medical image analysis and diagnosis. In this paper, the conventional wavelet fusion method is improved, a new medical image fusion algorithm is presented, and the high-frequency and low-frequency coefficients are treated separately. When high-frequency coefficients are chosen, the regional edge intensities of each sub-image are calculated to realize adaptive fusion. The choice of the low-frequency coefficients is based on the edges of the images, so that the fused image preserves all useful information and appears more distinct. We applied the conventional and the improved wavelet-based fusion algorithms to fuse two images of the human body and evaluated the fusion results through a quality evaluation method. Experimental results show that this algorithm effectively retains the detail information of the original images and enhances their edge and texture features. The new algorithm is better than the conventional fusion algorithm based on wavelet transform.
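For reference, the conventional wavelet fusion that the paper improves upon can be sketched in a few lines with PyWavelets: average the low-frequency (approximation) coefficients and keep the larger-magnitude high-frequency (detail) coefficients. The wavelet and decomposition level below are arbitrary choices, and this is the baseline rule, not the paper's improved edge-based rule.

```python
import numpy as np
import pywt

def wavelet_fuse(a, b, wavelet='db2', level=3):
    """Fuse two registered same-size images: average approximations, max-abs details."""
    ca = pywt.wavedec2(a.astype(np.float64), wavelet, level=level)
    cb = pywt.wavedec2(b.astype(np.float64), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]          # low-frequency: simple average
    for da, db in zip(ca[1:], cb[1:]):       # each level: (cH, cV, cD) subbands
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```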
Ramírez-Nava, Gerardo J; Santos-Cuevas, Clara L; Chairez, Isaac; Aranda-Lara, Liliana
2017-12-01
The aim of this study was to characterize the in vivo volumetric distribution of three folate-based biosensors by different imaging modalities (X-ray, fluorescence, Cerenkov luminescence, and radioisotopic imaging) through the development of a tridimensional image reconstruction algorithm. The preclinical multimodal Xtreme imaging system, with a Multimodal Animal Rotation System (MARS), was used to acquire bidimensional images, which were processed to obtain the tridimensional reconstruction. Images of mice at different times (biosensor distribution) were obtained simultaneously from the four imaging modalities. Filtered back projection and the inverse Radon transform were used as the main image-processing techniques. The algorithm, developed in Matlab, was able to calculate the volumetric profiles of 99mTc-Folate-Bombesin (radioisotopic image), 177Lu-Folate-Bombesin (Cerenkov image), and FolateRSense™ 680 (fluorescence image) in tumors and kidneys of mice, and no significant differences were detected in the volumetric quantifications among measurement techniques. The tridimensional reconstruction algorithm can be easily extrapolated to different 2D acquisition-type images. This flexibility is a remarkable advantage of the algorithm developed in this study in comparison to similar reconstruction methods.
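The reconstruction pipeline (projections acquired under rotation, then filtered back projection via the inverse Radon transform) was implemented by the authors in Matlab; an equivalent toy example in Python with scikit-image is sketched below, using a synthetic phantom instead of real MARS projections.

```python
import numpy as np
from skimage.transform import radon, iradon

# toy 2D slice standing in for one plane of the animal volume
phantom = np.zeros((128, 128))
phantom[40:80, 50:90] = 1.0

# forward projections at 1-degree steps, mimicking rotation of the subject
theta = np.arange(0.0, 180.0, 1.0)
sinogram = radon(phantom, theta=theta)

# filtered back projection (inverse Radon transform with a ramp filter);
# 'filter_name' is the keyword in recent scikit-image releases
recon = iradon(sinogram, theta=theta, filter_name='ramp')
```

Stacking such reconstructed slices plane by plane yields the volumetric profile from which organ-level quantification is done.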
A survey of infrared and visual image fusion methods
NASA Astrophysics Data System (ADS)
Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Hai, Jinjin; He, Kangjian
2017-09-01
Infrared (IR) and visual (VI) image fusion is designed to fuse multiple source images into a comprehensive image that boosts imaging quality and reduces redundant information, and it is widely used in various imaging equipment to improve the visual ability of humans and robots. The accurate, reliable and complementary descriptions of the scene in fused images have made these techniques widely used in many fields. In recent years, a large number of fusion methods for IR and VI images have been proposed owing to ever-growing demands and the progress of image representation methods; however, no integrated survey of this field has been published in the last several years. Therefore, we survey the algorithmic developments in IR and VI image fusion. In this paper, we first characterize IR and VI image fusion applications to give an overview of the research status. Then we present a synthesized survey of the state of the art. Third, the frequently used image fusion quality measures are introduced. Fourth, we perform experiments with typical methods and provide corresponding analysis. Finally, we summarize the tendencies and challenges in IR and VI image fusion. This survey concludes that although various IR and VI image fusion methods have been proposed, there remain further improvements and potential research directions in different applications of IR and VI image fusion.
You, Daekeun; Kim, Michelle M; Aryal, Madhava P; Parmar, Hemant; Piert, Morand; Lawrence, Theodore S; Cao, Yue
2018-01-01
To create tumor "habitats" from the "signatures" discovered from multimodality metabolic and physiological images, we developed a framework of a processing pipeline. The processing pipeline consists of six major steps: (1) creating superpixels as a spatial unit in a tumor volume; (2) forming a data matrix [Formula: see text] containing all multimodality image parameters at superpixels; (3) forming and clustering a covariance or correlation matrix [Formula: see text] of the image parameters to discover major image "signatures;" (4) clustering the superpixels and organizing the parameter order of the [Formula: see text] matrix according to the one found in step 3; (5) creating "habitats" in the image space from the superpixels associated with the "signatures;" and (6) pooling and clustering a matrix consisting of correlation coefficients of each pair of image parameters from all patients to discover subgroup patterns of the tumors. The pipeline was applied to a dataset of multimodality images in glioblastoma (GBM) first, which consisted of 10 image parameters. Three major image "signatures" were identified. The three major "habitats" plus their overlaps were created. To test generalizability of the processing pipeline, a second image dataset from GBM, acquired on the scanners different from the first one, was processed. Also, to demonstrate the clinical association of image-defined "signatures" and "habitats," the patterns of recurrence of the patients were analyzed together with image parameters acquired prechemoradiation therapy. An association of the recurrence patterns with image-defined "signatures" and "habitats" was revealed. These image-defined "signatures" and "habitats" can be used to guide stereotactic tissue biopsy for genetic and mutation status analysis and to analyze for prediction of treatment outcomes, e.g., patterns of failure.
Objective quality assessment for multiexposure multifocus image fusion.
Hassen, Rania; Wang, Zhou; Salama, Magdy M A
2015-09-01
There has been a growing interest in image fusion technologies, but how to objectively evaluate the quality of fused images has not been fully understood. Here, we propose a method for objective quality assessment of multiexposure multifocus image fusion based on the evaluation of three key factors of fused image quality: 1) contrast preservation; 2) sharpness; and 3) structure preservation. Subjective experiments are conducted to create an image fusion database, based on which performance evaluation shows that the proposed fusion quality index correlates well with subjective scores and gives a significant improvement over existing fusion quality measures.
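As an illustration only, the sketch below computes toy versions of the three factors under assumed definitions (a global standard-deviation ratio for contrast, mean gradient magnitude for sharpness, SSIM against each source for structure); the paper's actual formulations are more elaborate.

```python
# Toy stand-ins for the three fused-image quality factors named above.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def sharpness(img):
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy).mean()                 # mean gradient magnitude

def fusion_quality(fused, sources):
    rng_val = fused.max() - fused.min()
    structure = np.mean([ssim(fused, s, data_range=rng_val) for s in sources])
    contrast = fused.std() / max(np.mean([s.std() for s in sources]), 1e-9)
    return {"contrast": contrast, "sharpness": sharpness(fused),
            "structure": structure}

a, b = np.random.rand(2, 128, 128)
print(fusion_quality((a + b) / 2, [a, b]))
```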
Biological Parametric Mapping: A Statistical Toolbox for Multi-Modality Brain Image Analysis
Casanova, Ramon; Ryali, Srikanth; Baer, Aaron; Laurienti, Paul J.; Burdette, Jonathan H.; Hayasaka, Satoru; Flowers, Lynn; Wood, Frank; Maldjian, Joseph A.
2006-01-01
In recent years multiple brain MR imaging modalities have emerged; however, analysis methodologies have mainly remained modality specific. In addition, when comparing across imaging modalities, most researchers have been forced to rely on simple region-of-interest type analyses, which do not allow the voxel-by-voxel comparisons necessary to answer more sophisticated neuroscience questions. To overcome these limitations, we developed a toolbox for multimodal image analysis called biological parametric mapping (BPM), based on a voxel-wise use of the general linear model. The BPM toolbox incorporates information obtained from other modalities as regressors in a voxel-wise analysis, thereby permitting investigation of more sophisticated hypotheses. The BPM toolbox has been developed in MATLAB with a user-friendly interface for performing analyses, including voxel-wise multimodal correlation, ANCOVA, and multiple regression. It has a high degree of integration with the SPM (statistical parametric mapping) software, relying on it for visualization and statistical inference. Furthermore, statistical inference for a correlation field, rather than the widely used T-field, has been implemented in the correlation analysis for more accurate results. An example with in vivo data is presented, demonstrating the potential of the BPM methodology as a tool for multimodal image analysis. PMID:17070709
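Outside MATLAB/SPM, the core BPM idea reduces to a voxel-wise GLM whose design matrix includes the co-registered value from a second modality as a regressor; the sketch below illustrates this with random data and assumed dimensions, not the toolbox's implementation.

```python
# Voxel-wise GLM with a second-modality covariate, the essence of BPM.
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_vox = 20, 1000
fmri = rng.standard_normal((n_subj, n_vox))       # primary modality values
anat = rng.standard_normal((n_subj, n_vox))       # voxel-wise covariate image
group = rng.integers(0, 2, n_subj).astype(float)  # subject-level regressor

betas = np.empty((n_vox, 3))
for v in range(n_vox):                            # y = Xb + e at each voxel
    X = np.column_stack([np.ones(n_subj), group, anat[:, v]])
    betas[v], *_ = np.linalg.lstsq(X, fmri[:, v], rcond=None)
print("mean covariate beta:", betas[:, 2].mean())
```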
Busse, Harald; Schmitgen, Arno; Trantakis, Christos; Schober, Ralf; Kahn, Thomas; Moche, Michael
2006-07-01
To present an advanced approach for intraoperative image guidance in an open 0.5 T MRI and to evaluate its effectiveness for neurosurgical interventions by comparison with a dynamic scan-guided localization technique. The built-in scan guidance mode relied on successive interactive MRI scans. The additional advanced mode provided real-time navigation based on reformatted high-quality, intraoperatively acquired MR reference data, allowed multimodal image fusion, and used the successive scans of the built-in mode for quick verification of the position only. Analysis involved tumor resections and biopsies in either scan guidance (N = 36) or advanced mode (N = 59) by the same three neurosurgeons. Technical, surgical, and workflow aspects were compared. The image quality and hand-eye coordination of the advanced approach were improved. While the average extent of resection, neurologic outcome after functional MRI (fMRI) integration, and diagnostic yield appeared to be slightly better under advanced guidance, particularly for the main surgeon, statistical analysis revealed no significant differences. Resection times were comparable, while biopsies took around 30 minutes longer. The presented approach is safe and provides more detailed images and higher navigation speed at the expense of actuality. The surgical outcome achieved with advanced guidance is (at least) as good as that obtained with dynamic scan guidance. (c) 2006 Wiley-Liss, Inc.
Safe Genetic Modification of Cardiac Stem Cells Using a Site-Specific Integration Technique
Lan, Feng; Liu, Junwei; Narsinh, Kazim H.; Hu, Shijun; Han, Leng; Lee, Andrew S.; Karow, Marisa; Nguyen, Patricia K.; Nag, Divya; Calos, Michele P.; Robbins, Robert C.; Wu, Joseph C.
2012-01-01
Background Human cardiac progenitor cells (hCPCs) are a promising cell source for regenerative repair after myocardial infarction. Exploitation of their full therapeutic potential may require stable genetic modification of the cells ex vivo. Safe genetic engineering of stem cells, using facile methods for site-specific integration of transgenes into known genomic contexts, would significantly enhance the overall safety and efficacy of cellular therapy in a variety of clinical contexts. Methods and Results We employed the phiC31 site-specific recombinase to achieve targeted integration of a triple fusion reporter gene into a known chromosomal context in hCPCs and human endothelial cells (hECs). Stable expression of the reporter gene from its unique chromosomal integration site resulted in no discernible genomic instability or adverse changes in cell phenotype. Namely, phiC31-modified hCPCs were unchanged in their differentiation propensity, cellular proliferative rate, and global gene expression profile when compared to unaltered control hCPCs. Expression of the triple fusion reporter gene enabled multimodal assessment of cell fate in vitro and in vivo using fluorescence microscopy, bioluminescence imaging (BLI), and positron emission tomography (PET). Intramyocardial transplantation of genetically modified hCPCs resulted in significant improvement in myocardial function two weeks after cell delivery, as assessed by echocardiography (P = 0.002) and magnetic resonance imaging (P = 0.001). We also demonstrated the feasibility and therapeutic efficacy of genetically modifying differentiated hECs, which enhanced hindlimb perfusion (P<0.05 at day 7 and 14 after transplantation) on laser Doppler imaging. Conclusions The phiC31 integrase genomic modification system is a safe, efficient tool to enable site-specific integration of reporter transgenes in progenitor and differentiated cell types. PMID:22965984
Franco, Alexandre R; Ling, Josef; Caprihan, Arvind; Calhoun, Vince D; Jung, Rex E; Heileman, Gregory L; Mayer, Andrew R
2008-12-01
The human brain functions as an efficient system where signals arising from gray matter are transported via white matter tracts to other regions of the brain to facilitate human behavior. However, with a few exceptions, functional and structural neuroimaging data are typically optimized to maximize the quantification of signals arising from a single source. For example, functional magnetic resonance imaging (FMRI) is typically used as an index of gray matter functioning whereas diffusion tensor imaging (DTI) is typically used to determine white matter properties. While it is likely that these signals arising from different tissue sources contain complementary information, the signal processing algorithms necessary for the fusion of neuroimaging data across imaging modalities are still in a nascent stage. In the current paper we present a data-driven method for combining measures of functional connectivity arising from gray matter sources (FMRI resting state data) with different measures of white matter connectivity (DTI). Specifically, a joint independent component analysis (J-ICA) was used to combine these measures of functional connectivity following intensive signal processing and feature extraction within each of the individual modalities. Our results indicate that one of the most predominantly used measures of functional connectivity (activity in the default mode network) is highly dependent on the integrity of white matter connections between the two hemispheres (corpus callosum) and within the cingulate bundles. Importantly, the discovery of this complex relationship of connectivity was entirely facilitated by the signal processing and fusion techniques presented herein and could not have been revealed through separate analyses of both data types as is typically performed in the majority of neuroimaging experiments. We conclude by discussing future applications of this technique to other areas of neuroimaging and examining potential limitations of the methods.
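The essence of joint ICA is to concatenate per-subject feature vectors from both modalities and decompose them together, so that each independent component has linked fMRI and DTI parts. A minimal sketch with scikit-learn's FastICA and toy feature matrices (all dimensions assumed) follows.

```python
# Joint ICA (jICA) in miniature: one decomposition over stacked modalities.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
n_subj, n_fmri, n_dti = 30, 500, 400
fmri_feat = rng.standard_normal((n_subj, n_fmri))  # e.g., connectivity maps
dti_feat = rng.standard_normal((n_subj, n_dti))    # e.g., FA values

joint = np.hstack([fmri_feat, dti_feat])           # subjects x joint features
ica = FastICA(n_components=5, random_state=0)
loadings = ica.fit_transform(joint)                # per-subject loadings
sources = ica.components_                          # joint spatial components
fmri_part, dti_part = sources[:, :n_fmri], sources[:, n_fmri:]
print(fmri_part.shape, dti_part.shape)             # linked modality maps
```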
Recent Advances in Molecular, Multimodal and Theranostic Ultrasound Imaging
Kiessling, Fabian; Fokong, Stanley; Bzyl, Jessica; Lederle, Wiltrud; Palmowski, Moritz; Lammers, Twan
2014-01-01
Ultrasound (US) imaging is an exquisite tool for the non-invasive and real-time diagnosis of many different diseases. In this context, US contrast agents can improve lesion delineation, characterization and therapy response evaluation. US contrast agents are usually micrometer-sized gas bubbles, stabilized with soft or hard shells. By conjugating antibodies to the microbubble (MB) surface, and by incorporating diagnostic agents, drugs or nucleic acids into or onto the MB shell, molecular, multimodal and theranostic MB can be generated. We here summarize recent advances in molecular, multimodal and theranostic US imaging, and introduce concepts how such advanced MB can be generated, applied and imaged. Examples are given for their use to image and treat oncological, cardiovascular and neurological diseases. Furthermore, we discuss for which therapeutic entities incorporation into (or conjugation to) MB is meaningful, and how US-mediated MB destruction can increase their extravasation, penetration, internalization and efficacy. PMID:24316070
A New Approach to Image Fusion Based on Cokriging
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; LeMoigne, Jacqueline; Mount, David M.; Morisette, Jeffrey T.
2005-01-01
We consider the image fusion problem for remotely sensed data and introduce cokriging as a method for performing fusion. We investigate the advantages of fusing Hyperion with ALI data. The evaluation is performed by comparing the classification of the fused data with that of the input images and by calculating well-chosen quantitative fusion quality metrics. We use the Invasive Species Forecasting System (ISFS) project as our fusion application. The fusion of ALI with Hyperion data is first studied using PCA and wavelet-based fusion; we then propose a geostatistical interpolation method, cokriging, as a new approach to image fusion.
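Of the comparison methods named above, the PCA approach is the simplest to sketch: project the multispectral bands onto principal components, substitute a statistics-matched panchromatic band for the first component, and invert the transform. The sketch below uses toy data and assumed band counts; it does not implement cokriging.

```python
# PCA pan-sharpening baseline (not the proposed cokriging method).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
h, w, bands = 64, 64, 4
ms = rng.random((h, w, bands))        # multispectral image (pre-upsampled)
pan = rng.random((h, w))              # co-registered panchromatic image

X = ms.reshape(-1, bands)
pca = PCA(n_components=bands)
pcs = pca.fit_transform(X)

p = pan.ravel()                       # match PAN to PC1 mean/std before swap
p = (p - p.mean()) / p.std() * pcs[:, 0].std() + pcs[:, 0].mean()
pcs[:, 0] = p                         # inject PAN detail into PC1
fused = pca.inverse_transform(pcs).reshape(h, w, bands)
```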
McLoughlin, L C; Inder, S; Moran, D; O'Rourke, C; Manecksha, R P; Lynch, T H
2018-02-01
The diagnostic evaluation of a PSA recurrence after RP in the Irish hospital setting involves multimodality imaging with MRI, CT, and bone scanning, despite the low diagnostic yield of imaging at low PSA levels. We aimed to investigate the value of multimodality imaging in PC patients with a PSA recurrence after RP. Forty-eight patients with a PSA recurrence after RP who underwent multimodality imaging were evaluated. Demographic data, postoperative PSA levels, and the imaging studies performed at those levels were reviewed. Eight (21%) MRIs, 6 (33%) CTs, and 4 (9%) bone scans had PCa-specific findings. Three (12%) patients had a positive MRI at a PSA <1.0 ng/ml, while 5 (56%) were positive at PSA ≥1.1 ng/ml (p = 0.05). No patient had a positive CT TAP at a PSA level <1.0 ng/ml, while 5 (56%) were positive at levels ≥1.1 ng/ml (p = 0.03). No patient had a positive bone scan at PSA levels <1.0 ng/ml, while 4 (27%) were positive at levels ≥1.1 ng/ml (p = 0.01). The diagnostic yield of multimodality imaging, and of isotope bone scanning in particular, at PSA levels <1.0 ng/ml is low. There is a statistically significant increase in the frequency of positive findings on CT and bone scanning at PSA levels ≥1.1 ng/ml. Only MRI is of investigative value at PSA <1.0 ng/ml. The indication for CT, MRI, or isotope bone scanning should be carefully correlated with the clinical question and how it will affect further management.
Dense depth maps from correspondences derived from perceived motion
NASA Astrophysics Data System (ADS)
Kirby, Richard; Whitaker, Ross
2017-01-01
Many computer vision applications require finding corresponding points between images and using the corresponding points to estimate disparity. Today's correspondence finding algorithms primarily use image features or pixel intensities common between image pairs. Some 3-D computer vision applications, however, do not produce the desired results using correspondences derived from image features or pixel intensities. Two examples are the multimodal camera rig and the center region of a coaxial camera rig. We present an image correspondence finding technique that aligns pairs of image sequences using optical flow fields. The optical flow fields provide information about the structure and motion of the scene, which are not available in still images but can be used in image alignment. We apply the technique to a dual focal length stereo camera rig consisting of a visible light-infrared camera pair and to a coaxial camera rig. We test our method on real image sequences and compare our results with the state-of-the-art multimodal and structure from motion (SfM) algorithms. Our method produces more accurate depth and scene velocity reconstruction estimates than the state-of-the-art multimodal and SfM algorithms.
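The following is a hedged sketch of the core idea, matching optical-flow vectors instead of pixel intensities between two rigidly mounted cameras: Farneback flow is computed per camera from consecutive frames, and a simple 1-D disparity search compares the flow fields. The search range, flow parameters, and toy frames are assumptions, not the authors' method.

```python
# Correspondence from perceived motion: compare flow fields, not intensities.
import numpy as np
import cv2

def flow(prev, curr):
    return cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

def disparity_from_flow(flow_a, flow_b, max_d=16):
    h, w, _ = flow_a.shape
    best = np.zeros((h, w), np.int32)
    best_cost = np.full((h, w), np.inf)
    for d in range(max_d):                        # candidate disparity d
        shifted = np.roll(flow_b, d, axis=1)
        cost = np.linalg.norm(flow_a - shifted, axis=2)
        better = cost < best_cost
        best[better], best_cost[better] = d, cost[better]
    return best

# Toy 8-bit grayscale sequences: two frames per camera.
a0, a1 = (np.random.rand(2, 120, 160) * 255).astype(np.uint8)
b0, b1 = (np.random.rand(2, 120, 160) * 255).astype(np.uint8)
disp = disparity_from_flow(flow(a0, a1), flow(b0, b1))
```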
NASA Astrophysics Data System (ADS)
Low, Kerwin; Elhadidi, Basman; Glauser, Mark
2009-11-01
Understanding the different noise production mechanisms caused by the free shear flows in a turbulent jet provides insight for improving "intelligent" feedback mechanisms to control the noise. Toward this end, a control scheme was based on feedback of azimuthal pressure measurements in the near field of the jet at two streamwise locations. Previous studies suggested that noise reduction can be achieved by azimuthal actuators perturbing the shear layer at the jet lip. The closed-loop actuation is based on a low-dimensional Fourier representation of the hydrodynamic pressure measurements. Preliminary results show that control authority and a reduction in the overall sound pressure level were possible. These results provide motivation to move forward with the overall vision of developing innovative multi-mode sensing methods to improve state estimation and derive dynamical systems. It is envisioned that, by estimating velocity-field and dynamic pressure information at various locations in both the local and far-field regions, sensor fusion techniques can be utilized to attain greater overall control authority.
Fast and robust multimodal image registration using a local derivative pattern.
Jiang, Dongsheng; Shi, Yonghong; Chen, Xinrong; Wang, Manning; Song, Zhijian
2017-02-01
Deformable multimodal image registration, which can benefit radiotherapy and image-guided surgery by providing complementary information, remains a challenging task in medical image analysis due to the difficulty of defining a proper similarity measure. This article presents a novel, robust, and fast binary descriptor, the discriminative local derivative pattern (dLDP), which encodes images of different modalities into similar image representations. dLDP calculates a binary string for each voxel according to the pattern of intensity derivatives in its neighborhood. Descriptor similarity is evaluated using the Hamming distance, which can be computed efficiently, instead of conventional L1 or L2 norms. For the first time, we validated the effectiveness and feasibility of the local derivative pattern for multimodal deformable image registration in several multi-modal registration applications. dLDP was compared with three state-of-the-art methods on artificial images and in clinical settings. In experiments on deformable registration between different magnetic resonance imaging (MRI) modalities from BrainWeb, between computed tomography and MRI images from patient data, and between MRI and ultrasound images from the BITE database, our method outperforms localized mutual information and entropy images in terms of both accuracy and time efficiency. We further validated dLDP for the deformable registration of preoperative MRI and three-dimensional intraoperative ultrasound images; dLDP reduces the average mean target registration error from 4.12 mm to 2.30 mm. This accuracy is statistically equivalent to that of the state-of-the-art methods in the study; in terms of computational complexity, however, our method significantly outperforms the others and is even comparable to the sum of absolute differences. The results reveal that dLDP achieves superior performance in both accuracy and time efficiency in general multimodal image registration, and it shows potential for clinical ultrasound-guided intervention. © 2016 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
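As a flavor of how such a descriptor works, the sketch below encodes the signs of a few local intensity derivatives into a per-pixel bit string and compares descriptors with the Hamming distance; the neighborhood and derivative set are assumptions and do not reproduce the published dLDP definition.

```python
# Illustrative binary derivative-pattern descriptor with Hamming comparison.
import numpy as np

def derivative_pattern(img):
    img = img.astype(float)
    # First-order differences along x, y, and the two diagonals.
    d = [np.roll(img, 1, axis=0) - img,
         np.roll(img, 1, axis=1) - img,
         np.roll(np.roll(img, 1, 0), 1, 1) - img,
         np.roll(np.roll(img, 1, 0), -1, 1) - img]
    bits = [(g > 0).astype(np.uint8) for g in d]   # sign of each derivative
    return np.stack(bits, axis=-1)                 # H x W x 4 binary code

def hamming(codes_a, codes_b):
    return (codes_a != codes_b).sum(axis=-1)       # per-pixel Hamming distance

a, b = np.random.rand(64, 64), np.random.rand(64, 64)
dist = hamming(derivative_pattern(a), derivative_pattern(b))
```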
Coughlin, Andrew J.; Ananta, Jeyarama S.; Deng, Nanfu; Larina, Irina V.; Decuzzi, Paolo
2014-01-01
Multimodal imaging offers the potential to improve diagnosis and enhance the specificity of photothermal cancer therapy. Toward this goal, we have engineered gadolinium-conjugated gold nanoshells and demonstrated that they enhance contrast for magnetic resonance imaging, X-Ray, optical coherence tomography, reflectance confocal microscopy, and two-photon luminescence. Additionally, these particles effectively convert near-infrared light to heat, which can be used to ablate cancer cells. Ultimately, these studies demonstrate the potential of gadolinium-nanoshells for image-guided photothermal ablation. PMID:24115690
Gradient-based multiresolution image fusion.
Petrović, Valdimir S; Xydeas, Costas S
2004-02-01
A novel approach to multiresolution signal-level image fusion is presented for accurately transferring visual information from any number of input image signals, into a single fused image without loss of information or the introduction of distortion. The proposed system uses a "fuse-then-decompose" technique realized through a novel, fusion/decomposition system architecture. In particular, information fusion is performed on a multiresolution gradient map representation domain of image signal information. At each resolution, input images are represented as gradient maps and combined to produce new, fused gradient maps. Fused gradient map signals are processed, using gradient filters derived from high-pass quadrature mirror filters to yield a fused multiresolution pyramid representation. The fused output image is obtained by applying, on the fused pyramid, a reconstruction process that is analogous to that of conventional discrete wavelet transform. This new gradient fusion significantly reduces the amount of distortion artefacts and the loss of contrast information usually observed in fused images obtained from conventional multiresolution fusion schemes. This is because fusion in the gradient map domain significantly improves the reliability of the feature selection and information fusion processes. Fusion performance is evaluated through informal visual inspection and subjective psychometric preference tests, as well as objective fusion performance measurements. Results clearly demonstrate the superiority of this new approach when compared to conventional fusion systems.
Study on polarization image methods in turbid medium
NASA Astrophysics Data System (ADS)
Fu, Qiang; Mo, Chunhe; Liu, Boyu; Duan, Jin; Zhang, Su; Zhu, Yong
2014-11-01
Polarization imaging provides multi-dimensional polarization information in addition to the traditional intensity information, improving the probability of target detection and recognition. Applying image fusion to polarization images of targets in turbid media helps to obtain high-quality images. Based on laser polarization imaging at visible wavelengths, linearly polarized intensity images were acquired by rotating the angle of a polarizer, and the polarization parameters of targets in turbid media with concentrations ranging from 5% to 10% were obtained. Image fusion processing was then introduced: the main work was to process the acquired polarization images with different polarization image fusion methods, discuss several fusion methods with superior performance for turbid media, and give the processing results and data tables. Pixel-level, feature-level, and decision-level fusion algorithms were used to fuse the DOLP (degree of linear polarization) images. The results show that as the polarization angle increases, the polarization images become increasingly blurred and their quality degrades, while the contrast of the fused image is clearly improved over that of any single image; the reasons for the increase in image contrast are analyzed.
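The DOLP images fused above come from a standard Stokes-parameter computation: four intensity images at polarizer angles 0°, 45°, 90°, and 135° give S0, S1, S2 and hence the degree of linear polarization. A minimal sketch with toy inputs:

```python
# Degree of linear polarization (DOLP) from four polarizer-angle images.
import numpy as np

i0, i45, i90, i135 = np.random.rand(4, 128, 128)  # intensities per angle
s0 = (i0 + i45 + i90 + i135) / 2.0                # total intensity
s1 = i0 - i90                                     # horizontal/vertical balance
s2 = i45 - i135                                   # diagonal balance
dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)
```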
Chi, Chongwei; Du, Yang; Ye, Jinzuo; Kou, Deqiang; Qiu, Jingdan; Wang, Jiandong; Tian, Jie; Chen, Xiaoyuan
2014-01-01
Cancer is a major threat to human health. Diagnosis and treatment using precision medicine is expected to be an effective method for preventing the initiation and progression of cancer. Although anatomical and functional imaging techniques such as radiography, computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET) have played an important role for accurate preoperative diagnostics, for the most part these techniques cannot be applied intraoperatively. Optical molecular imaging is a promising technique that provides a high degree of sensitivity and specificity in tumor margin detection. Furthermore, existing clinical applications have proven that optical molecular imaging is a powerful intraoperative tool for guiding surgeons performing precision procedures, thus enabling radical resection and improved survival rates. However, detection depth limitation exists in optical molecular imaging methods and further breakthroughs from optical to multi-modality intraoperative imaging methods are needed to develop more extensive and comprehensive intraoperative applications. Here, we review the current intraoperative optical molecular imaging technologies, focusing on contrast agents and surgical navigation systems, and then discuss the future prospects of multi-modality imaging technology for intraoperative imaging-guided cancer surgery. PMID:25250092
Grating-based X-ray Dark-field Computed Tomography of Living Mice
Velroyen, A.; Yaroshenko, A.; Hahn, D.; Fehringer, A.; Tapfer, A.; Müller, M.; Noël, P.B.; Pauwels, B.; Sasov, A.; Yildirim, A.Ö.; Eickelberg, O.; Hellbach, K.; Auweter, S.D.; Meinel, F.G.; Reiser, M.F.; Bech, M.; Pfeiffer, F.
2015-01-01
Changes in x-ray attenuating tissue caused by lung disorders like emphysema or fibrosis are subtle and thus only resolved by high-resolution computed tomography (CT). The structural reorganization, however, is of strong influence for lung function. Dark-field CT (DFCT), based on small-angle scattering of x-rays, reveals such structural changes even at resolutions coarser than the pulmonary network and thus provides access to their anatomical distribution. In this proof-of-concept study we present x-ray in vivo DFCTs of lungs of a healthy, an emphysematous and a fibrotic mouse. The tomographies show excellent depiction of the distribution of structural – and thus indirectly functional – changes in lung parenchyma, on single-modality slices in dark field as well as on multimodal fusion images. Therefore, we anticipate numerous applications of DFCT in diagnostic lung imaging. We introduce a scatter-based Hounsfield Unit (sHU) scale to facilitate comparability of scans. In this newly defined sHU scale, the pathophysiological changes by emphysema and fibrosis cause a shift towards lower numbers, compared to healthy lung tissue. PMID:26629545
Mert, Aygül; Kiesel, Barbara; Wöhrer, Adelheid; Martínez-Moreno, Mauricio; Minchev, Georgi; Furtner, Julia; Knosp, Engelbert; Wolfsberger, Stefan; Widhalm, Georg
2015-01-01
OBJECT Surgery of suspected low-grade gliomas (LGGs) poses a special challenge for neurosurgeons due to their diffusely infiltrative growth and histopathological heterogeneity. Consequently, neuronavigation with multimodality imaging data, such as structural and metabolic data, fiber tracking, and 3D brain visualization, has been proposed to optimize surgery. However, currently no standardized protocol has been established for multimodality imaging data in modern glioma surgery. The aim of this study was therefore to define a specific protocol for multimodality imaging and navigation for suspected LGG. METHODS Fifty-one patients who underwent surgery for a diffusely infiltrating glioma with nonsignificant contrast enhancement on MRI and available multimodality imaging data were included. In the first 40 patients with glioma, the authors retrospectively reviewed the imaging data, including structural MRI (contrast-enhanced T1-weighted, T2-weighted, and FLAIR sequences), metabolic images derived from PET, or MR spectroscopy chemical shift imaging, fiber tracking, and 3D brain surface/vessel visualization, to define standardized image settings and specific indications for each imaging modality. The feasibility and surgical relevance of this new protocol was subsequently prospectively investigated during surgery with the assistance of an advanced electromagnetic navigation system in the remaining 11 patients. Furthermore, specific surgical outcome parameters, including the extent of resection, histological analysis of the metabolic hotspot, presence of a new postoperative neurological deficit, and intraoperative accuracy of 3D brain visualization models, were assessed in each of these patients. RESULTS After reviewing these first 40 cases of glioma, the authors defined a specific protocol with standardized image settings and specific indications that allows for optimal and simultaneous visualization of structural and metabolic data, fiber tracking, and 3D brain visualization. This new protocol was feasible and was estimated to be surgically relevant during navigation-guided surgery in all 11 patients. According to the authors' predefined surgical outcome parameters, they observed a complete resection in all resectable gliomas (n = 5) by using contour visualization with T2-weighted or FLAIR images. Additionally, tumor tissue derived from the metabolic hotspot showed the presence of malignant tissue in all WHO Grade III or IV gliomas (n = 5). Moreover, no permanent postoperative neurological deficits occurred in any of these patients, and fiber tracking and/or intraoperative monitoring were applied during surgery in the vast majority of cases (n = 10). Furthermore, the authors found a significant intraoperative topographical correlation of 3D brain surface and vessel models with gyral anatomy and superficial vessels. Finally, real-time navigation with multimodality imaging data using the advanced electromagnetic navigation system was found to be useful for precise guidance to surgical targets, such as the tumor margin or the metabolic hotspot. CONCLUSIONS In this study, the authors defined a specific protocol for multimodality imaging data in suspected LGGs, and they propose the application of this new protocol for advanced navigation-guided procedures optimally in conjunction with continuous electromagnetic instrument tracking to optimize glioma surgery.
Nanoengineered multimodal contrast agent for medical image guidance
NASA Astrophysics Data System (ADS)
Perkins, Gregory J.; Zheng, Jinzi; Brock, Kristy; Allen, Christine; Jaffray, David A.
2005-04-01
Multimodality imaging has gained momentum in radiation therapy planning and image-guided treatment delivery. Specifically, computed tomography (CT) and magnetic resonance (MR) imaging are two complementary imaging modalities often utilized in radiation therapy for visualization of anatomical structures for tumour delineation and accurate registration of image data sets for volumetric dose calculation. The development of a multimodal contrast agent for CT and MR with prolonged in vivo residence time would provide long-lasting spatial and temporal correspondence of the anatomical features of interest, and therefore facilitate multimodal image registration, treatment planning and delivery. The multimodal contrast agent investigated consists of nano-sized stealth liposomes encapsulating conventional iodine and gadolinium-based contrast agents. The average loading achieved was 33.5 +/- 7.1 mg/mL of iodine for iohexol and 9.8 +/- 2.0 mg/mL of gadolinium for gadoteridol. The average liposome diameter was 46.2 +/- 13.5 nm. The system was found to be stable in physiological buffer over a 15-day period, releasing 11.9 +/- 1.1% and 11.2 +/- 0.9% of the total amounts of iohexol and gadoteridol loaded, respectively. 200 minutes following in vivo administration, the contrast agent maintained a relative contrast enhancement of 81.4 +/- 13.05 differential Hounsfield units (ΔHU) in CT (40% decrease from the peak signal value achieved 3 minutes post-injection) and 731.9 +/- 144.2 differential signal intensity (ΔSI) in MR (46% decrease from the peak signal value achieved 3 minutes post-injection) in the blood (aorta), a relative contrast enhancement of 38.0 +/- 5.1 ΔHU (42% decrease from the peak signal value achieved 3 minutes post-injection) and 178.6 +/- 41.4 ΔSI (62% decrease from the peak signal value achieved 3 minutes post-injection) in the liver (parenchyma), a relative contrast enhancement of 9.1 +/- 1.7 ΔHU (94% decrease from the peak signal value achieved 3 minutes post-injection) and 461.7 +/- 78.1 ΔSI (60% decrease from the peak signal value achieved 5 minutes post-injection) in the kidney (cortex) of a New Zealand white rabbit. This multimodal contrast agent, with prolonged in vivo residence time and imaging efficacy, has the potential to bring about improvements in the fields of medical imaging and radiation therapy, particularly for image registration and guidance.
Fusion of multi-spectral and panchromatic images based on 2D-PWVD and SSIM
NASA Astrophysics Data System (ADS)
Tan, Dongjie; Liu, Yi; Hou, Ruonan; Xue, Bindang
2016-03-01
A combined method using the 2D pseudo Wigner-Ville distribution (2D-PWVD) and the structural similarity (SSIM) index is proposed for the fusion of a low-resolution multi-spectral (MS) image and a high-resolution panchromatic (PAN) image. First, the intensity component of the multi-spectral image is extracted with the generalized IHS (GIHS) transform. Then, the spectrum diagrams of the intensity component and of the panchromatic image are obtained with the 2D-PWVD. Different fusion rules are designed for the different frequency bands of the spectrum diagrams; the SSIM index is used to evaluate the high-frequency information of the spectrum diagrams and to assign the fusion weights adaptively. After the new spectrum diagram is formed according to the fusion rules, the final fused image is obtained by the inverse 2D-PWVD and the inverse GIHS transform. Experimental results show that the proposed method obtains high-quality fused images.
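A hedged sketch of the SSIM-driven weighting idea follows, with plain Gaussian high-pass detail standing in for the paper's 2D-PWVD spectra: each source's local SSIM map against the mean image becomes its adaptive fusion weight. Filter widths and toy inputs are assumptions.

```python
# SSIM-weighted detail fusion (simplified stand-in for the 2D-PWVD pipeline).
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import structural_similarity as ssim

def ssim_weighted_fusion(a, b):
    mean_img = (a + b) / 2.0
    _, wa = ssim(a, mean_img, data_range=1.0, full=True)  # local SSIM maps
    _, wb = ssim(b, mean_img, data_range=1.0, full=True)
    wa, wb = np.clip(wa, 0, 1), np.clip(wb, 0, 1)
    w = wa / np.maximum(wa + wb, 1e-9)                    # adaptive weights
    low = gaussian_filter(mean_img, 2.0)                  # shared low band
    return (low + w * (a - gaussian_filter(a, 2.0))
                + (1 - w) * (b - gaussian_filter(b, 2.0)))

pan, intensity = np.random.rand(2, 128, 128)
fused = ssim_weighted_fusion(pan, intensity)
```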
Vollnhals, Florian; Audinot, Jean-Nicolas; Wirtz, Tom; Mercier-Bonin, Muriel; Fourquaux, Isabelle; Schroeppel, Birgit; Kraushaar, Udo; Lev-Ram, Varda; Ellisman, Mark H; Eswara, Santhana
2017-10-17
Correlative microscopy combining various imaging modalities offers powerful insights into obtaining a comprehensive understanding of physical, chemical, and biological phenomena. In this article, we investigate two approaches for image fusion in the context of combining the inherently lower-resolution chemical images obtained using secondary ion mass spectrometry (SIMS) with the high-resolution ultrastructural images obtained using electron microscopy (EM). We evaluate the image fusion methods with three different case studies selected to broadly represent the typical samples in life science research: (i) histology (unlabeled tissue), (ii) nanotoxicology, and (iii) metabolism (isotopically labeled tissue). We show that the intensity-hue-saturation fusion method often applied for EM-sharpening can result in serious image artifacts, especially in cases where different contrast mechanisms interplay. Here, we introduce and demonstrate Laplacian pyramid fusion as a powerful and more robust alternative method for image fusion. Both physical and technical aspects of correlative image overlay and image fusion specific to SIMS-based correlative microscopy are discussed in detail alongside the advantages, limitations, and the potential artifacts. Quantitative metrics to evaluate the results of image fusion are also discussed.
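Laplacian pyramid fusion, the alternative advocated above, can be sketched in a few lines with OpenCV: build Laplacian pyramids of the two co-registered inputs, keep the stronger detail coefficient at each level, average the coarsest level, and collapse the pyramid. The level count, max-abs selection rule, and toy inputs are assumptions.

```python
# Minimal Laplacian-pyramid fusion (inputs co-registered, same even size).
import numpy as np
import cv2

def lap_pyramid(img, levels):
    g = [img.astype(np.float32)]
    for _ in range(levels):
        g.append(cv2.pyrDown(g[-1]))
    lap = [g[i] - cv2.pyrUp(g[i + 1], dstsize=g[i].shape[1::-1])
           for i in range(levels)]
    return lap + [g[-1]]                          # detail levels + base

def fuse(a, b, levels=4):
    pa, pb = lap_pyramid(a, levels), lap_pyramid(b, levels)
    fused = [np.where(np.abs(la) >= np.abs(lb), la, lb)   # max-abs detail
             for la, lb in zip(pa[:-1], pb[:-1])]
    fused.append((pa[-1] + pb[-1]) / 2.0)                 # averaged base
    out = fused[-1]
    for lev in reversed(fused[:-1]):                      # collapse pyramid
        out = cv2.pyrUp(out, dstsize=lev.shape[1::-1]) + lev
    return out

sims = (np.random.rand(256, 256) * 255).astype(np.float32)  # e.g., SIMS map
em = (np.random.rand(256, 256) * 255).astype(np.float32)    # e.g., EM image
result = fuse(sims, em)
```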
Yao, Shujing; Zhang, Jiashu; Zhao, Yining; Hou, Yuanzheng; Xu, Xinghua; Zhang, Zhizhong; Kikinis, Ron; Chen, Xiaolei
2018-05-01
To address the feasibility and predictive value of multimodal image-based virtual reality in detecting and assessing features of neurovascular conflict (NVC), particularly the detection of offending vessels and the degree of compression exerted on the nerve root, in patients who underwent microvascular decompression for nonlesional trigeminal neuralgia and hemifacial spasm (HFS). This prospective study includes 42 consecutive patients who underwent microvascular decompression for classic primary trigeminal neuralgia or HFS. All patients underwent preoperative 1.5-T magnetic resonance imaging (MRI) with T2-weighted three-dimensional (3D) sampling perfection with application-optimized contrasts using different flip angle evolutions, 3D time-of-flight magnetic resonance angiography, and 3D T1-weighted gadolinium-enhanced sequences in combination, and 2 patients underwent additional experimental preoperative 7.0-T MRI scans with the same imaging protocol. The multimodal MRIs were then coregistered with the open-source software 3D Slicer, followed by 3D image reconstruction to generate virtual reality (VR) images for detection of possible NVC in the cerebellopontine angle. Evaluations were performed by 2 reviewers and compared with the intraoperative findings. For detection of NVC, multimodal image-based VR sensitivity was 97.6% (40/41) and specificity was 100% (1/1). Compared with the intraoperative findings, the κ coefficients for predicting the offending vessel and the degree of compression were >0.75 (P < 0.001). The 7.0-T scans give a clearer view of the vessels in the cerebellopontine angle, which may have a significant impact on the detection of small-caliber offending vessels with relatively slow flow in cases of HFS. Multimodal image-based VR using 3D sampling perfection with application-optimized contrasts using different flip angle evolutions in combination with 3D time-of-flight magnetic resonance angiography sequences proved to be reliable in detecting NVC and in predicting the degree of root compression. The VR image-based simulation correlated well with the real surgical view. Copyright © 2018 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Tan, Sabine; O'Halloran, Kay L.; Wignell, Peter
2016-01-01
Multimodality, the study of the interaction of language with other semiotic resources such as images and sound resources, has significant implications for computer assisted language learning (CALL) with regards to understanding the impact of digital environments on language teaching and learning. In this paper, we explore recent manifestations of…
Feature-based Alignment of Volumetric Multi-modal Images
Toews, Matthew; Zöllei, Lilla; Wells, William M.
2014-01-01
This paper proposes a method for aligning image volumes acquired from different imaging modalities (e.g. MR, CT) based on 3D scale-invariant image features. A novel method for encoding invariant feature geometry and appearance is developed, based on the assumption of locally linear intensity relationships, providing a solution to the poor repeatability of feature detection across image modalities. The encoding method is incorporated into a probabilistic feature-based model for multi-modal image alignment. The model parameters are estimated via a group-wise alignment algorithm that iteratively alternates between estimating a feature-based model from the feature data and realigning the feature data to the model, converging to a stable alignment solution with few pre-processing or pre-alignment requirements. The resulting model can be used to align multi-modal image data with the benefits of invariant feature correspondence: globally optimal solutions, high efficiency, and low memory usage. The method is tested on the difficult RIRE data set of CT, T1, T2, PD, and MP-RAGE brain images of subjects exhibiting significant inter-subject variability due to pathology. PMID:24683955
Multimodal Spectral Imaging of Cells Using a Transmission Diffraction Grating on a Light Microscope
Isailovic, Dragan; Xu, Yang; Copus, Tyler; Saraswat, Suraj; Nauli, Surya M.
2011-01-01
A multimodal methodology for spectral imaging of cells is presented. The spectral imaging setup uses a transmission diffraction grating on a light microscope to concurrently record spectral images of cells and cellular organelles by fluorescence, darkfield, brightfield, and differential interference contrast (DIC) spectral microscopy. Initially, the setup was applied for fluorescence spectral imaging of yeast and mammalian cells labeled with multiple fluorophores. Fluorescence signals originating from fluorescently labeled biomolecules in cells were collected through triple or single filter cubes, separated by the grating, and imaged using a charge-coupled device (CCD) camera. Cellular components such as nuclei, cytoskeleton, and mitochondria were spatially separated by the fluorescence spectra of the fluorophores present in them, providing detailed multi-colored spectral images of cells. Additionally, the grating-based spectral microscope enabled measurement of scattering and absorption spectra of unlabeled cells and stained tissue sections using darkfield and brightfield or DIC spectral microscopy, respectively. The presented spectral imaging methodology provides a readily affordable approach for multimodal spectral characterization of biological cells and other specimens. PMID:21639978
Makino, Yuki; Imai, Yasuharu; Igura, Takumi; Hori, Masatoshi; Fukuda, Kazuto; Sawai, Yoshiyuki; Kogita, Sachiyo; Fujita, Norihiko; Takehara, Tetsuo; Murakami, Takamichi
2015-01-01
To assess the feasibility of fusion of pre- and post-ablation gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid-enhanced magnetic resonance imaging (Gd-EOB-DTPA-MRI) for evaluating the effects of radiofrequency ablation (RFA) of hepatocellular carcinoma (HCC), compared with similarly fused CT images. This retrospective study included 67 patients with 92 HCCs treated with RFA. Fusion images of pre- and post-RFA dynamic CT, and of pre- and post-RFA Gd-EOB-DTPA-MRI, were created using a rigid registration method. The minimal ablative margin measured on fusion imaging was categorized into three groups: (1) tumor protruding outside the ablation zone boundary, (2) ablative margin 0-<5.0 mm beyond the tumor boundary, and (3) ablative margin ≥5.0 mm beyond the tumor boundary. The categorization of minimal ablative margins was compared between CT and MR fusion images. In 57 (62.0%) HCCs, treatment evaluation was possible on both CT and MR fusion images, and the overall agreement between them for the categorization of the minimal ablative margin was good (κ coefficient = 0.676, P < 0.01). MR fusion imaging enabled treatment evaluation in a significantly larger number of HCCs than CT fusion imaging (86/92 [93.5%] vs. 62/92 [67.4%], P < 0.05). Fusion of pre- and post-ablation Gd-EOB-DTPA-MRI is feasible for treatment evaluation after RFA and may enable accurate treatment evaluation in cases where CT fusion imaging is not helpful.
[Research progress of multi-model medical image fusion and recognition].
Zhou, Tao; Lu, Huiling; Chen, Zhiqiang; Ma, Jingxian
2013-10-01
Medical image fusion and recognition has a wide range of applications, such as lesion localization, cancer staging, and treatment effect assessment. Multi-model medical image fusion and recognition are analyzed and summarized in this paper. Firstly, the problem of multi-model medical image fusion and recognition is introduced, along with its advantages and key steps. Secondly, three fusion strategies are reviewed from the algorithmic point of view, and four fusion-recognition structures are discussed. Thirdly, the difficulties, challenges, and possible future research directions are discussed.
Rapid Screening of Cancer Margins in Tissue with Multimodal Confocal Microscopy
Gareau, Daniel S.; Jeon, Hana; Nehal, Kishwer S.; Rajadhyaksha, Milind
2012-01-01
Background Complete and accurate excision of cancer is guided by the examination of histopathology. However, preparation of histopathology is labor intensive and slow, leading to insufficient sampling of tissue and incomplete and/or inaccurate excision of margins. We demonstrate the potential utility of multimodal confocal mosaicing microscopy for rapid screening of cancer margins, directly in fresh surgical excisions, without the need for conventional embedding, sectioning, or processing. Materials/Methods A multimodal confocal mosaicing microscope was developed to image basal cell carcinoma margins in surgical skin excisions, with resolution that shows nuclear detail. Multimodal contrast is provided by fluorescence for imaging nuclei and by reflectance for cellular cytoplasm and dermal collagen. Thirty-five excisions of basal cell carcinomas from Mohs surgery were imaged, and the mosaics were analyzed by comparison to the corresponding frozen pathology. Results Confocal mosaics are produced in about 9 minutes, displaying tissue in fields-of-view of 12 mm at 2X magnification. A digital staining algorithm transforms black-and-white contrast to purple and pink, which simulates the appearance of standard histopathology. Mosaicing enables rapid digital screening, which mimics the examination of histopathology. Conclusions Multimodal confocal mosaicing microscopy offers a technology platform to potentially enable real-time pathology at the bedside. The imaging may serve as an adjunct to conventional histopathology, to expedite screening of margins and guide surgery toward more complete and accurate excision of cancer. PMID:22721570
NASA Astrophysics Data System (ADS)
Rouffiac, Valérie; Ser-Leroux, Karine; Dugon, Emilie; Leguerney, Ingrid; Polrot, Mélanie; Robin, Sandra; Salomé-Desnoulez, Sophie; Ginefri, Jean-Christophe; Sebrié, Catherine; Laplace-Builhé, Corinne
2015-03-01
In vivo high-resolution imaging of tumor development is possible through a dorsal skinfold chamber implanted on a mouse model. However, current intravital imaging systems are poorly tolerated by mice over time and do not allow multimodality imaging. Our project aims to develop a new chamber for: (1) long-term micro/macroscopic visualization of the tumor (vascular and cellular compartments) and the tissue microenvironment; and (2) multimodality imaging (photonic, MRI, and sonography). Our new experimental device was patented in March 2014 and was first assessed on 75 mice engrafted with the 4T1-Luc tumor cell line, and validated in confocal and multiphoton imaging after staining the mouse vasculature with Dextran 155 kDa-TRITC or Dextran 2000 kDa-FITC. In parallel, a universal stage was designed for optimal removal of respiratory and cardiac artifacts during microscopy assays. Experimental results from optical, ultrasound (B-mode and pulse subtraction mode), and MRI (anatomic sequences) imaging showed that our patented design, unlike commercial devices, improves longitudinal monitoring over several weeks (35 days on average versus 12 for the commercial chamber) and allows better characterization of the early and late tissue alterations due to tumor development. We also demonstrated compatibility with multimodality imaging, and mouse survival increased by a factor of 2.9 with our new skinfold chamber. Current developments include: (1) defining new procedures for multi-labelling of cells and tissue (screening of fluorescent molecules and imaging protocols); (2) developing ultrasound and MRI procedures with specific probes; and (3) correlating optical/ultrasound/MRI data for a complete mapping of tumor development and the microenvironment.
Deep Convolutional Neural Networks for Multi-Modality Isointense Infant Brain Image Segmentation
Zhang, Wenlu; Li, Rongjian; Deng, Houtao; Wang, Li; Lin, Weili; Ji, Shuiwang; Shen, Dinggang
2015-01-01
The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6–8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage, and they use only a single T1 or T2 image, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense-stage brain tissues using multi-modality MR images. CNNs are a type of deep model in which trainable filters and local neighborhood pooling operations are applied alternately to the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multimodality information from T1, T2, and fractional anisotropy (FA) images as inputs and generated the segmentation maps as outputs. The multiple intermediate layers apply convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of commonly used segmentation methods on a set of manually segmented isointense-stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that the integration of multi-modality images led to a significant performance improvement. PMID:25562829
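A toy PyTorch sketch of the multi-modality input idea follows: T1, T2, and FA patches enter as three channels of a small patch-classification CNN that scores WM/GM/CSF for the patch center. The architecture, patch size, and data are assumptions, not the paper's exact network.

```python
# Three modalities as three input channels of a small patch classifier.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 32, 5), nn.ReLU(),   # 3 channels: T1, T2, FA
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(64 * 2 * 2, 3),         # 13x13 patch -> 2x2 maps; 3 classes
)

patches = torch.randn(8, 3, 13, 13)   # batch of multi-modal patches
logits = model(patches)               # -> (8, 3) WM/GM/CSF scores
print(logits.shape)
```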
Kim, Su Wan; Song, Heesung
2017-12-01
We report the case of a 19-year-old man who presented with a 12-year history of progressive fatigue, feeling hot, excessive sweating, and numbness in the left arm. He underwent multimodal imaging and was diagnosed as having Klippel-Trénaunay-Weber syndrome (KTWS), a rare congenital disease defined by combinations of nevus flammeus, venous and lymphatic malformations, and hypertrophy of the affected limbs. The lower extremities are most often affected. Conventional modalities for evaluating KTWS are ultrasonography, CT, MRI, lymphoscintigraphy, and angiography. There are few reports on multimodal imaging of the upper extremities of KTWS patients, and this is the first report of infrared thermography in KTWS.
High-resolution multimodal clinical multiphoton tomography of skin
NASA Astrophysics Data System (ADS)
König, Karsten
2011-03-01
This review focuses on multimodal multiphoton tomography based on near-infrared femtosecond lasers. Clinical multiphoton tomographs for 3D high-resolution in vivo imaging were placed on the market several years ago. The second generation of this Prism-Award-winning high-tech skin imaging tool (MPTflex) was introduced in 2010. The same year, the world's first clinical CARS studies were performed with a hybrid multimodal multiphoton tomograph. In particular, non-fluorescent lipids and water, mitochondrial fluorescent NAD(P)H, fluorescent elastin, keratin, and melanin, as well as SHG-active collagen, have been imaged with submicron resolution in patients suffering from psoriasis. Further multimodal approaches include the combination of multiphoton tomographs with low-resolution wide-field systems such as ultrasound, optoacoustic, OCT, and dermoscopy systems. Multiphoton tomographs are currently employed in Australia, Japan, the US, and several European countries for early diagnosis of skin cancer, optimization of treatment strategies, and cosmetic research, including long-term testing of sunscreen nanoparticles and anti-aging products.
Dong, Kai; Ju, Enguo; Liu, Jianhua; Han, Xueli; Ren, Jinsong; Qu, Xiaogang
2014-10-21
Multimodal molecular imaging has recently attracted much attention in disease diagnostics by taking advantage of the strengths of individual imaging modalities. Herein, we demonstrate a new paradigm for multimodal bioimaging based on amino acid-anchored ultrasmall lanthanide-doped GdVO4 nanoprobes. Owing to their special metal-cation complexation and abundant functional groups, these amino acid-anchored nanoprobes show high colloidal stability and excellent dispersibility. Additionally, due to their typical paramagnetic behaviour, high X-ray mass absorption coefficient, and strong fluorescence, these nanoprobes provide a unique opportunity to develop multifunctional probes for MRI, CT, and luminescence imaging. More importantly, the small size and biomolecular coatings endow the nanoprobes with effective metabolisability and high biocompatibility. With their superior stability, high biocompatibility, effective metabolisability, and excellent contrast performance, amino acid-capped GdVO4:Eu(3+) nanoprobes are a promising candidate as multimodal contrast agents and could bring more opportunities for biological and medical applications with further modification.
Shirvani, Atefeh; Jabbari, Keyvan; Amouheidari, Alireza
2017-01-01
In radiation therapy, computed tomography (CT) simulation is used in treatment planning to define the location of the tumor. Magnetic resonance imaging (MRI)-CT image fusion leads to more efficient tumor contouring. This work aimed to identify practical issues in combining CT and MRI images in real clinical cases and to evaluate the effect of various factors on image fusion quality. The data of thirty patients with brain tumors were used for image fusion. The effect of several parameters on the feasibility and quality of image fusion was evaluated, including the angle of the patient's head on the bed, slice thickness, slice gap, and the height of the patient's head. According to the results, the dominant factor affecting image fusion quality was the difference in slice gap between the CT and MRI images (cor = 0.86, P < 0.005); the second was the angle between the CT and MRI slices in the sagittal plane (cor = 0.75, P < 0.005). In 20% of patients, this angle was more than 28° and image fusion was not efficient. In 17% of patients, the difference in slice gap between CT and MRI was >4 cm and the image fusion quality was <25%. The most important problem in image fusion is that MRI images are taken without regard to their later use in treatment planning. In general, the parameters related to patient position during MRI imaging should be chosen to be consistent with the patient's CT images in terms of location and angle.
Extended depth of field integral imaging using multi-focus fusion
NASA Astrophysics Data System (ADS)
Piao, Yongri; Zhang, Miao; Wang, Xiaohui; Li, Peihua
2018-03-01
In this paper, we propose a new method for extending the depth of field in integral imaging by applying image fusion to multi-focus elemental images. In the proposed method, a camera is translated on a 2D grid to capture multi-focus elemental images while sweeping the focal plane across the scene. Simply applying an image fusion method to elemental images holding rich parallax information does not work effectively, because registration accuracy is a prerequisite for image fusion. To solve this problem, an elemental image generalization method is proposed. The aim of this generalization is to geometrically align the objects in all elemental images so that the correct regions of the multi-focus elemental images can be extracted. The all-in-focus elemental images are then generated by fusing the generalized elemental images with a block-based fusion method. The experimental results demonstrate that the depth of field of the synthetic aperture integral imaging system is extended by combining the generalization method with image fusion of the multi-focus elemental images.
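The block-based fusion step can be sketched simply once the elemental images are aligned: copy each output block from whichever source is sharpest there, using local variance as the focus measure. Block size and toy inputs below are assumptions.

```python
# Block-based multi-focus fusion: pick the sharpest source per block.
import numpy as np

def block_fuse(stack, block=8):
    h, w = stack.shape[1:]
    out = np.zeros((h, w), float)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tiles = stack[:, y:y + block, x:x + block]
            sharp = tiles.var(axis=(1, 2))            # focus measure per source
            out[y:y + block, x:x + block] = tiles[np.argmax(sharp)]
    return out

aligned = np.random.rand(5, 128, 128)   # 5 generalized (aligned) elemental images
all_in_focus = block_fuse(aligned)
```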
Robust detection of heartbeats using association models from blood pressure and EEG signals.
Jeon, Taegyun; Yu, Jongmin; Pedrycz, Witold; Jeon, Moongu; Lee, Boreom; Lee, Byeongcheol
2016-01-15
The heartbeat is a fundamental cardiac activity that can be detected straightforwardly with a variety of measurement techniques for analyzing physiological signals. Unfortunately, unexpected noise or contamination can distort or cut out electrocardiogram (ECG) signals in practice, misleading heartbeat detectors into reporting a false heart rate or, in the worst case, suspending themselves for a considerable length of time. To address the problem of unreliable heartbeat detection, PhysioNet/CinC organized a challenge in 2014 for developing robust heartbeat detectors using multimodal signals. This article proposes a multimodal data association method that supplements ECG as a primary input signal with blood pressure (BP) and electroencephalogram (EEG) as complementary input signals when the input signals are unreliable. If the current signal quality index (SQI) qualifies ECG as a reliable input signal, our method applies QRS detection to ECG and reports heartbeats. Otherwise, the method selects the best supplementary input signal between BP and EEG after evaluating the current SQI of BP. When BP is chosen as a supplementary input signal, our association model between ECG and BP enables us to compute their regular intervals, detect characteristic BP signals, and estimate the locations of the heartbeat. When neither ECG nor BP is qualified, our fusion method resorts to the association model between ECG and EEG, which allows us to apply an adaptive filter to ECG and EEG, extract the QRS candidates, and report heartbeats. The proposed method achieved an overall score of 86.26% on the test data when the input signals were unreliable. Our method outperformed the traditional approach, which achieved 79.28% using the QRS detector and BP detector from PhysioNet. Our multimodal signal processing method outperforms the conventional unimodal method of taking ECG signals alone on both the training and test data sets. To detect the heartbeat robustly, we have proposed a novel multimodal data association method that supplements ECG with a variety of physiological signals and accounts for the patient-specific lag between different pulsatile signals and ECG. Multimodal signal detectors and data-fusion approaches such as those proposed in this article can reduce false alarms and improve patient monitoring.
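The fallback cascade described in this abstract can be summarized in a few lines. The sketch below, in Python, captures only the selection logic (ECG, then BP, then EEG); the signal quality index shown is a hypothetical stand-in, and the string labels name association models that the paper implements but that are not reproduced here.

    import numpy as np

    def sqi(signal: np.ndarray) -> float:
        # Hypothetical signal-quality index: fraction of samples within 3 standard
        # deviations of the mean (a real SQI is considerably more elaborate).
        z = (signal - signal.mean()) / (signal.std() + 1e-9)
        return float(np.mean(np.abs(z) < 3))

    def choose_detector(ecg, bp, eeg, threshold: float = 0.8) -> str:
        """Return which detector the cascade would invoke for the current window."""
        if sqi(ecg) >= threshold:
            return "qrs_on_ecg"              # reliable ECG: ordinary QRS detection
        if sqi(bp) >= threshold:
            return "ecg_bp_association"      # fall back to the ECG-BP association model
        return "ecg_eeg_association"         # last resort: adaptive-filtered ECG-EEG model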
Approach for scene reconstruction from the analysis of a triplet of still images
NASA Astrophysics Data System (ADS)
Lechat, Patrick; Le Mestre, Gwenaelle; Pele, Danielle
1997-03-01
Three-dimensional modeling of a scene from the automatic analysis of 2D image sequences is a major challenge for future interactive audiovisual services based on 3D content manipulation, such as virtual visits, 3D teleconferencing, and interactive television. We propose a scheme that computes 3D object models from stereo analysis of image triplets shot by calibrated cameras. After matching the different views with a correlation-based algorithm, a depth map referring to a given view is built using a fusion criterion that takes into account depth coherency, visibility constraints, and correlation scores. Because luminance segmentation helps to compute accurate object borders and to detect and improve unreliable depth values, a two-step segmentation algorithm using both the depth map and the gray-level image is applied to extract the object masks. First, an edge detection step segments the luminance image into regions, and a multimodal thresholding method selects depth classes from the depth map. Then the regions are merged and labelled with the different depth class numbers using a coherence test on depth values, according to the rate of reliable and dominant depth values and the size of the regions. The structures of the segmented objects are obtained with a constrained Delaunay triangulation followed by a refining stage. Finally, texture mapping is performed using Open Inventor or VRML 1.0 tools.
Cha, Dong Ik; Lee, Min Woo; Kim, Ah Yeong; Kang, Tae Wook; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga
2017-11-01
Background A major drawback of conventional manual image fusion is that the process may be complex, especially for less-experienced operators. Recently, two automatic image fusion techniques called Positioning and Sweeping auto-registration have been developed. Purpose To compare the accuracy and required time for image fusion of real-time ultrasonography (US) and computed tomography (CT) images between Positioning and Sweeping auto-registration. Material and Methods Eighteen consecutive patients referred for planning US for radiofrequency ablation or biopsy for focal hepatic lesions were enrolled. Image fusion using both auto-registration methods was performed for each patient. Registration error, time required for image fusion, and number of point locks used were compared using the Wilcoxon signed rank test. Results Image fusion was successful in all patients. Positioning auto-registration was significantly faster than Sweeping auto-registration for both initial (median, 11 s [range, 3-16 s] vs. 32 s [range, 21-38 s]; P < 0.001) and complete (median, 34.0 s [range, 26-66 s] vs. 47.5 s [range, 32-90 s]; P = 0.001) image fusion. Registration error of Positioning auto-registration was significantly higher for initial image fusion (median, 38.8 mm [range, 16.0-84.6 mm] vs. 18.2 mm [range, 6.7-73.4 mm]; P = 0.029), but not for complete image fusion (median, 4.75 mm [range, 1.7-9.9 mm] vs. 5.8 mm [range, 2.0-13.0 mm]; P = 0.338). The number of point locks required to refine the initially fused images was significantly higher with Positioning auto-registration (median, 2 [range, 2-3] vs. 1 [range, 1-2]; P = 0.012). Conclusion Positioning auto-registration offers faster image fusion between real-time US and pre-procedural CT images than Sweeping auto-registration. The final registration error is similar between the two methods.
Baek, Jihye; Huh, Jangyoung; Kim, Myungsoo; Hyun An, So; Oh, Yoonjin; Kim, DongYoung; Chung, Kwangzoo; Cho, Sungho; Lee, Rena
2013-02-01
To evaluate the accuracy of volume measurement using three-dimensional ultrasound (3D US), and to verify the feasibility of replacing CT-MR fusion images with CT-3D US in radiotherapy treatment planning. Phantoms consisting of water, contrast agent, and agarose were manufactured. The volume was measured using 3D US, CT, and MR devices. CT-3D US and MR-3D US image fusion software was developed using the Insight Toolkit library in order to acquire three-dimensional fusion images. The quality of the image fusion was evaluated using a metric value and the fusion images. Volume measurement using 3D US showed a 2.8 ± 1.5% error, versus a 4.4 ± 3.0% error for CT and a 3.1 ± 2.0% error for MR. These results imply that volume measurement using 3D US devices has an accuracy similar to that of CT and MR. Three-dimensional image fusion of CT-3D US and MR-3D US was successfully performed using phantom images. Moreover, MR-3D US image fusion was performed using human bladder images. 3D US could be used in the volume measurement of human bladders and prostates. CT-3D US image fusion could be used to monitor the target position in each fraction of external beam radiation therapy. Moreover, the feasibility of replacing CT-MR image fusion with CT-3D US in radiotherapy treatment planning was verified.
Multispectral image fusion for target detection
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-09-01
Various methods to perform multi-spectral image fusion have been suggested, mostly at the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in an experiment using MSSF against two established methods, averaging and Principal Components Analysis (PCA), and against its two source bands, visible and infrared. The task studied was target detection in a cluttered environment. MSSF proved superior to the other fusion methods. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.
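Of the two baselines named above, PCA fusion is simple enough to state exactly: the fusion weights come from the principal eigenvector of the 2 × 2 covariance of the two source images. A minimal NumPy sketch, assuming registered, same-size grayscale float images:

    import numpy as np

    def pca_fusion(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
        data = np.stack([img_a.ravel(), img_b.ravel()])
        cov = np.cov(data)                       # 2x2 covariance of the two bands
        _, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
        w = np.abs(eigvecs[:, -1])               # principal component
        w = w / w.sum()                          # normalize to fusion weights
        return w[0] * img_a + w[1] * img_b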
Applicability of common measures in multifocus image fusion comparison
NASA Astrophysics Data System (ADS)
Vajgl, Marek
2017-11-01
Image fusion is an image processing area aimed at fusing multiple input images to achieve an output image that is in some respect better than each of the input ones. In the case of multifocus fusion, the input images capture the same scene but differ in focus distance, and the aim is to obtain an image that is sharp in all its areas. There are several different approaches and methods used to solve this problem, but which one is best remains an open question. This work describes research covering the field of common measures, asking whether some of them can be used as a quality measure for evaluating fusion results.
NASA Astrophysics Data System (ADS)
Huang, Yadong; Gao, Kun; Gong, Chen; Han, Lu; Guo, Yue
2016-03-01
During traditional multi-resolution infrared and visible image fusion, low-contrast targets may be weakened and become inconspicuous because of opposite DN values in the source images. We therefore propose a novel target pseudo-color enhanced image fusion algorithm based on a modified attention model and the fast discrete curvelet transform. The interesting target regions are extracted from the source images by introducing motion features obtained from the modified attention model, and the source images are fused in gray scale in the curvelet domain via rules based on the physical characteristics of the sensors. The final fused image is obtained by mapping the extracted targets into the gray result with a proper pseudo-color. Experiments show that the algorithm can highlight dim targets effectively and improve the SNR of the fused image.
2013-10-01
Award Number: W81XWH-12-1-0597. Title: Parametric PET/MR Fusion Imaging to Differentiate Aggressive from Indolent Primary Prostate Cancer with Application for Image-Guided Prostate Cancer Biopsies. The study investigates whether fusion PET/MRI imaging with 18F-choline PET/CT and diffusion-weighted MRI can be successfully applied to target prostate…
Taqueti, Viviany R.; Di Carli, Marcelo F.
2018-01-01
Over the last several decades, radionuclide myocardial perfusion imaging (MPI) with single-photon emission computed tomography and positron emission tomography has been a mainstay for the evaluation of patients with known or suspected coronary artery disease (CAD). More recently, technical advances in separate and complementary imaging modalities including coronary computed tomography angiography, computed tomography perfusion, cardiac magnetic resonance imaging, and contrast stress echocardiography have expanded the toolbox of diagnostic testing for cardiac patients. While the growth of available technologies has heralded an exciting era of multimodality cardiovascular imaging, coordinated and dispassionate utilization of these techniques is needed to implement the right test for the right patient at the right time, a promise of “precision medicine.” In this article, we review the maturing role of MPI in the current era of multimodality cardiovascular imaging, particularly in the context of recent advances in myocardial blood flow quantitation, and as applied to the evaluation of patients with known or suspected CAD. PMID:25770849
NASA Astrophysics Data System (ADS)
Issaei, Ali; Szczygiel, Lukasz; Hossein-Javaheri, Nima; Young, Mei; Molday, L. L.; Molday, R. S.; Sarunic, M. V.
2011-03-01
Scanning Laser Ophthalmoscopy (SLO) and Optical Coherence Tomography (OCT) are complementary retinal imaging modalities. Integration of SLO and OCT allows both fluorescence detection and depth-resolved structural imaging of the retinal cell layers to be performed in vivo. System customization is required to image the rodents used in medical research by vision scientists. We are investigating multimodal SLO/OCT imaging of a rodent model of Stargardt's Macular Dystrophy, which is characterized by retinal degeneration and accumulation of toxic autofluorescent lipofuscin deposits. Our new findings demonstrate the ability to track fundus autofluorescence and retinal degeneration concurrently.
Multimodal Image Alignment via Linear Mapping between Feature Modalities.
Jiang, Yanyun; Zheng, Yuanjie; Hou, Sujuan; Chang, Yuchou; Gee, James
2017-01-01
We propose a novel landmark matching based method for aligning multimodal images, which is accomplished uniquely by resolving a linear mapping between different feature modalities. This linear mapping results in a new measurement on similarity of images captured from different modalities. In addition, our method simultaneously solves this linear mapping and the landmark correspondences by minimizing a convex quadratic function. Our method can estimate complex image relationship between different modalities and nonlinear nonrigid spatial transformations even in the presence of heavy noise, as shown in our experiments carried out by using a variety of image modalities.
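In its simplest form, the linear mapping at the heart of this approach reduces to a least-squares problem once landmark correspondences are fixed. The sketch below illustrates that reduced problem only; the paper's convex formulation solves for the mapping and the correspondences jointly, which is omitted here, and the feature matrices are hypothetical.

    import numpy as np

    def fit_modality_map(feat_src: np.ndarray, feat_dst: np.ndarray) -> np.ndarray:
        """Least-squares W minimizing ||feat_src @ W - feat_dst||_F^2 over
        corresponding landmark features (one landmark per row)."""
        W, *_ = np.linalg.lstsq(feat_src, feat_dst, rcond=None)
        return W

    def cross_modal_distance(f_a: np.ndarray, f_b: np.ndarray, W: np.ndarray) -> float:
        """Dissimilarity of a source-modality feature to a target-modality feature
        after mapping -- the learned measurement on cross-modal similarity."""
        return float(np.linalg.norm(f_a @ W - f_b))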
Liu, Xiaonan; Chen, Kewei; Wu, Teresa; Weidman, David; Lure, Fleming; Li, Jing
2018-04-01
Alzheimer's disease (AD) is a major neurodegenerative disease and the most common cause of dementia. Currently, no treatment exists to slow down or stop the progression of AD. There is converging belief that disease-modifying treatments should focus on early stages of the disease, that is, the mild cognitive impairment (MCI) and preclinical stages. Making a diagnosis of AD and offering a prognosis (likelihood of converting to AD) at these early stages are challenging tasks, but possible with the help of multimodality imaging, such as magnetic resonance imaging (MRI), fluorodeoxyglucose (FDG)-positron emission tomography (PET), amyloid-PET, and the recently introduced tau-PET, which provide different but complementary information. This article is a focused review of existing research in the recent decade that used statistical machine learning and artificial intelligence methods to perform quantitative analysis of multimodality image data for diagnosis and prognosis of AD at the MCI or preclinical stages. We review the existing work in 3 subareas: diagnosis, prognosis, and methods for handling modality-wise missing data, a commonly encountered problem when using multimodality imaging for prediction or classification. Factors contributing to missing data include lack of imaging equipment, cost, difficulty of obtaining patient consent, and patient drop-out (in longitudinal studies). Finally, we summarize our major findings and provide some recommendations for potential future research directions. Copyright © 2018 Elsevier Inc. All rights reserved.
Direct percutaneous transaortic approach for treatment of aortic pseudoaneurysms.
Pirelli, Luigi; Kliger, Chad; Fontana, Gregory P; Ruiz, Carlos E
2015-05-01
Aortic pseudoaneurysms (APAs) can develop months or years after aortic and cardiac surgery. If not treated appropriately, APAs can lead to fatal complications and ultimately death. We describe a case of a 61-year-old patient with a diagnosed large pseudoaneurysm 5 years after his aortic valve surgery, who was treated with a novel transcatheter direct transaortic approach. The patient had dilated cardiomyopathy with an APA adjacent to the lower sternal plate. An Amplatzer septal occlusion device followed by coils was delivered transcutaneously through the APA to close its neck and fill the false aneurysm, respectively. Triple fusion multimodality imaging was used to guide the placement of the occlusion devices. The merging of computed tomography (CT) and echocardiography with real-time fluoroscopy was fundamental in procedural planning and guidance. Post-procedural transoesophageal echocardiogram (TOE) and CT angiography showed complete exclusion of the APA. A direct transaortic approach is a valid option for closure of an APA if the surgical risk is prohibitive, and the use of triple fusion technology is an essential tool in the hands of interventionalists and surgeons for preoperative planning and conduction of these procedures. © The Author 2015. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
NASA Astrophysics Data System (ADS)
Bachmann, M.; Besse, P. A.; Melchior, H.
1995-10-01
Overlapping-image multimode interference (MMI) couplers, a new class of devices, permit uniform and nonuniform power splitting. A theoretical description directly relates coupler geometry to image intensities, positions, and phases. Among many possibilities for nonuniform power splitting, examples of 1 × 2 couplers with ratios of 15:85 and 28:72 are given. An analysis of uniform power splitters includes the well-known 2 × N and 1 × N MMI couplers. Applications of MMI couplers include mode filters, mode splitters-combiners, and mode converters.
Shirvani, Atefeh; Jabbari, Keyvan; Amouheidari, Alireza
2017-01-01
Background: In radiation therapy, computed tomography (CT) simulation is used in treatment planning to define the location of the tumor. Magnetic resonance imaging (MRI)-CT image fusion leads to more efficient tumor contouring. This work aimed to identify the practical issues in combining CT and MRI images in real clinical cases, and to evaluate the effect of various factors on image fusion quality. Materials and Methods: In this study, the data of thirty patients with brain tumors were used for image fusion. The effect of several parameters on the feasibility and quality of image fusion was evaluated. These parameters included the angle of the patient's head on the bed, slice thickness, slice gap, and height of the patient's head. Results: According to the results, the dominant factor affecting the quality of image fusion was the difference in slice gap between the CT and MRI images (cor = 0.86, P < 0.005); the second factor was the angle between the CT and MRI slices in the sagittal plane (cor = 0.75, P < 0.005). In 20% of patients, this angle was more than 28° and image fusion was not efficient. In 17% of patients, the difference in slice gap between CT and MRI was >4 cm and image fusion quality was <25%. Conclusion: The most important problem in image fusion is that MRI images are taken without regard to their use in treatment planning. In general, parameters related to the patient's position during MRI imaging should be chosen to be consistent with the patient's CT images in terms of location and angle. PMID:29387672
Infrared and visible image fusion with spectral graph wavelet transform.
Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Zong, Jing-guo
2015-09-01
Infrared and visible image fusion is a popular topic in image analysis because it can integrate complementary information and obtain a reliable and accurate description of scenes. Multiscale transform theory, as a signal representation method, is widely used in image fusion. In this paper, a novel infrared and visible image fusion method is proposed based on the spectral graph wavelet transform (SGWT) and the bilateral filter. The main novelty of this study is that SGWT is used for image fusion. On the one hand, source images are decomposed by SGWT in its transform domain. The proposed approach not only effectively preserves the details of the different source images, but also excellently represents their irregular areas. On the other hand, a novel weighted average method based on the bilateral filter is proposed to fuse the low- and high-frequency subbands by taking advantage of the spatial consistency of natural images. Experimental results demonstrate that the proposed method outperforms seven recently proposed image fusion methods in terms of both visual effect and objective evaluation metrics.
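The decompose-fuse-reconstruct pattern used here is common to all multiscale fusion methods. The sketch below illustrates that pattern with an ordinary separable wavelet transform (PyWavelets has no spectral graph wavelet), and with the common average/max-abs rules rather than the paper's bilateral-filter weighting; it assumes two registered grayscale float images.

    import numpy as np
    import pywt

    def dwt_fusion(img_a, img_b, wavelet="db2", level=2):
        ca = pywt.wavedec2(img_a, wavelet, level=level)
        cb = pywt.wavedec2(img_b, wavelet, level=level)
        fused = [(ca[0] + cb[0]) / 2.0]                  # approximation band: average
        for subs_a, subs_b in zip(ca[1:], cb[1:]):       # detail bands: max-abs rule
            fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                               for x, y in zip(subs_a, subs_b)))
        return pywt.waverec2(fused, wavelet)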
Song, Weixiang; Luo, Yindeng; Zhao, Yajing; Liu, Xinjie; Zhao, Jiannong; Luo, Jie; Zhang, Qunxia; Ran, Haitao; Wang, Zhigang; Guo, Dajing
2017-05-01
The aim of this study was to improve tumor-targeted therapy for breast cancer by designing magnetic nanobubbles with the potential for targeted drug delivery and multimodal imaging. Herceptin-decorated, ultrasmall superparamagnetic iron oxide (USPIO)/paclitaxel (PTX)-embedded nanobubbles (PTX-USPIO-HER-NBs) were manufactured by combining a modified double-emulsion evaporation process with the carbodiimide technique. PTX-USPIO-HER-NBs were examined for characterization, specific cell-targeting ability, and multimodal imaging. PTX-USPIO-HER-NBs exhibited excellent entrapment efficiency of Herceptin/PTX/USPIO and showed greater cytotoxic effects than the other delivery platforms. Low-frequency ultrasound triggered accelerated PTX release. Moreover, the magnetic nanobubbles were able to enhance ultrasound, magnetic resonance, and photoacoustic trimodal imaging. These results suggest that PTX-USPIO-HER-NBs have potential as a multimodal contrast agent and as a system for ultrasound-triggered drug release in breast cancer.
Exogenous Molecular Probes for Targeted Imaging in Cancer: Focus on Multi-modal Imaging
Joshi, Bishnu P.; Wang, Thomas D.
2010-01-01
Cancer is one of the major causes of mortality and morbidity in our healthcare system. Molecular imaging is an emerging methodology for the early detection of cancer, guidance of therapy, and monitoring of response. The development of new instruments and exogenous molecular probes that can be labeled for multi-modality imaging is critical to this process. Today, molecular imaging is at a crossroad, and new targeted imaging agents are expected to broadly expand our ability to detect and manage cancer. This integrated imaging strategy will permit clinicians to not only localize lesions within the body but also to manage their therapy by visualizing the expression and activity of specific molecules. This information is expected to have a major impact on drug development and understanding of basic cancer biology. At this time, a number of molecular probes have been developed by conjugating various labels to affinity ligands for targeting in different imaging modalities. This review will describe the current status of exogenous molecular probes for optical, scintigraphic, MRI and ultrasound imaging platforms. Furthermore, we will also shed light on how these techniques can be used synergistically in multi-modal platforms and how these techniques are being employed in current research. PMID:22180839
Multimodal Task-Driven Dictionary Learning for Image Classification
2015-12-18
Soheil Bahrampour, Student Member, IEEE; Nasser M. Nasrabadi, Fellow, IEEE; Asok Ray, Fellow, IEEE; and W. Kenneth Jenkins, Life Fellow, IEEE. Abstract: Dictionary learning algorithms have been successfully used for both reconstructive and discriminative tasks, where an input signal is represented with a sparse linear combination of dictionary atoms. While these methods are…
Learning of Multimodal Representations With Random Walks on the Click Graph.
Wu, Fei; Lu, Xinyan; Song, Jun; Yan, Shuicheng; Zhang, Zhongfei Mark; Rui, Yong; Zhuang, Yueting
2016-02-01
In multimedia information retrieval, most classic approaches tend to represent different modalities of media in the same feature space. With the click data collected from the users' searching behavior, existing approaches take either one-to-one paired data (text-image pairs) or ranking examples (text-query-image and/or image-query-text ranking lists) as training examples, which do not make full use of the click data, particularly the implicit connections among the data objects. In this paper, we treat the click data as a large click graph, in which vertices are images/text queries and edges indicate the clicks between an image and a query. We consider learning a multimodal representation from the perspective of encoding the explicit/implicit relevance relationship between the vertices in the click graph. By minimizing both the truncated random walk loss as well as the distance between the learned representation of vertices and their corresponding deep neural network output, the proposed model which is named multimodal random walk neural network (MRW-NN) can be applied to not only learn robust representation of the existing multimodal data in the click graph, but also deal with the unseen queries and images to support cross-modal retrieval. We evaluate the latent representation learned by MRW-NN on a public large-scale click log data set Clickture and further show that MRW-NN achieves much better cross-modal retrieval performance on the unseen queries/images than the other state-of-the-art methods.
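The sampling step underlying the model, truncated random walks on the click graph, is easy to make concrete. A toy sketch in Python, with an illustrative bipartite graph and walk parameters:

    import random

    def truncated_walks(graph: dict, walk_len: int = 4, walks_per_node: int = 10):
        """graph maps each vertex (query or image) to the vertices it was clicked with."""
        walks = []
        for start in graph:
            for _ in range(walks_per_node):
                walk, node = [start], start
                for _ in range(walk_len - 1):
                    if not graph[node]:
                        break
                    node = random.choice(graph[node])   # hop across a click edge
                    walk.append(node)
                walks.append(walk)
        return walks

    # Toy bipartite click graph: queries ("q:...") and images ("i:...")
    clicks = {"q:dog": ["i:1", "i:2"], "q:puppy": ["i:2"],
              "i:1": ["q:dog"], "i:2": ["q:dog", "q:puppy"]}
    print(truncated_walks(clicks, walk_len=3, walks_per_node=2))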
Ma, Xibo; Jin, Yushen; Wang, Yi; Zhang, Shuai; Peng, Dong; Yang, Xin; Wei, Shoushui; Chai, Wei; Li, Xuejun; Tian, Jie
2018-01-01
Complete extinction of tumor cells is a crucial measure in evaluating antitumor efficacy. Difficulties in defining tumor margins and finding satellite metastases are the main reasons for tumor recurrence. A synergistic method based on multimodality molecular imaging needs to be developed to achieve complete extinction of the tumor cells. In this study, graphene oxide conjugated with gold nanostars and chelated with Gd through 1,4,7,10-tetraazacyclododecane-N,N',N,N'-tetraacetic acid (DOTA) (GO-AuNS-DOTA-Gd) was prepared to target HCC-LM3-fLuc cells and used for therapy. For the subcutaneous tumor, multimodality molecular imaging, including photoacoustic imaging (PAI) and magnetic resonance imaging (MRI) and the related processing techniques, was used to monitor the pharmacokinetics of GO-AuNS-DOTA-Gd in order to determine the optimal time for treatment. For the orthotopic tumor, MRI was used to delineate the tumor location and margin in vivo before treatment. A handheld photoacoustic imaging system was then used to determine the tumor location during surgery and to guide the photothermal therapy. The experimental results on the orthotopic tumor demonstrated that this synergistic method could effectively reduce tumor residual and satellite metastases by 85.71% compared with the routine photothermal method without handheld PAI guidance. These results indicate that this multimodality molecular imaging-guided photothermal therapy method is promising, with good prospects for clinical application.
Li, Shihong; Goins, Beth; Zhang, Lujun; Bao, Ande
2012-06-20
Liposomes are effective lipid nanoparticle drug delivery systems, which can also be functionalized with noninvasive multimodality imaging agents, with each modality providing distinct information and offering synergistic advantages in diagnosis, monitoring of disease treatment, and evaluation of liposomal drug pharmacokinetics. We designed and constructed a multifunctional theranostic liposomal drug delivery system, which integrated multimodality magnetic resonance (MR), near-infrared (NIR) fluorescent, and nuclear imaging of liposomal drug delivery with therapy monitoring and prediction. The premanufactured liposomes were composed of DSPC/cholesterol/Gd-DOTA-DSPE/DOTA-DSPE at a molar ratio of 39:35:25:1 with an ammonium sulfate/pH gradient. A lipidized NIR fluorescent tracer, IRDye-DSPE, was effectively post-inserted into the premanufactured liposomes. Doxorubicin could be effectively post-loaded into the multifunctional liposomes. The multifunctional doxorubicin liposomes could also be stably radiolabeled with (99m)Tc or (64)Cu for single-photon emission computed tomography (SPECT) or positron emission tomography (PET) imaging, respectively. MR images displayed the high-resolution intratumoral microdistribution of the liposomes in squamous cell carcinoma of head and neck (SCCHN) tumor xenografts in nude rats after intratumoral injection. NIR fluorescent, SPECT, and PET images also clearly showed either the high intratumoral retention or the distribution of the multifunctional liposomes. This multifunctional drug-carrying liposome system is promising for disease theranostics, allowing noninvasive multimodality NIR fluorescent, MR, SPECT, and PET imaging of in vivo behavior while capitalizing on the inherent advantages of each modality.
Range and Panoramic Image Fusion Into a Textured Range Image for Culture Heritage Documentation
NASA Astrophysics Data System (ADS)
Bila, Z.; Reznicek, J.; Pavelka, K.
2013-07-01
This paper deals with the fusion of range and panoramic images, where the range image is acquired by a 3D laser scanner and the panoramic image is acquired with a digital still camera mounted on a panoramic head and tripod. The fused dataset, called a "textured range image", provides conservators and historians with more reliable information about the investigated object than using both datasets separately. A simple example of the fusion of range and panoramic images, both obtained in St. Francis Xavier Church in the town of Opařany, is given here. First, we describe the process of data acquisition, then the processing of both datasets into a proper format for the subsequent fusion, and finally the fusion process itself. The process of fusion can be divided into two main parts: transformation and remapping. In the first (transformation) part, the two images are related by matching similar features detected in both images with a proper detector, which results in a transformation matrix enabling the range image to be transformed onto the panoramic image. Then, the range data are remapped from the range image space into the panoramic image space and stored as an additional "range" channel. The image fusion process is validated by comparing similar features extracted from both datasets.
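One plausible realization of the transformation step, feature matching followed by a robust model fit, is sketched below with OpenCV. ORB here stands in for whatever detector the authors used, and a homography is only meaningful under suitable geometric assumptions about the panoramic projection.

    import cv2
    import numpy as np

    def estimate_transform(range_img: np.ndarray, pano_img: np.ndarray) -> np.ndarray:
        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(range_img, None)
        kp2, des2 = orb.detectAndCompute(pano_img, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # robust to mismatches
        return H   # maps range-image coordinates into panoramic-image space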
Investigations of image fusion
NASA Astrophysics Data System (ADS)
Zhang, Zhong
1999-12-01
The objective of image fusion is to combine information from multiple images of the same scene. The result of image fusion is a single image which is more suitable for the purpose of human visual perception or further image processing tasks. In this thesis, a region-based fusion algorithm using the wavelet transform is proposed. The identification of important features in each image, such as edges and regions of interest, is used to guide the fusion process. The idea of multiscale grouping is also introduced and a generic image fusion framework based on multiscale decomposition is studied. The framework includes all of the existing multiscale-decomposition-based fusion approaches we found in the literature which did not assume a statistical model for the source images. Comparisons indicate that our framework includes some new approaches which outperform the existing approaches for the cases we consider. Registration must precede our fusion algorithms, so we propose a hybrid scheme which uses both feature-based and intensity-based methods. The idea of robust estimation of optical flow from time-varying images is employed with a coarse-to-fine multi-resolution approach and feature-based registration to overcome some of the limitations of the intensity-based schemes. Experiments show that this approach is robust and efficient. Assessing image fusion performance in a real application is a complicated issue. In this dissertation, a mixture probability density function model is used in conjunction with the Expectation-Maximization algorithm to model histograms of edge intensity. Some new techniques are proposed for estimating the quality of a noisy image of a natural scene. Such quality measures can be used to guide the fusion. Finally, we study fusion of images obtained from several copies of a new type of camera developed for video surveillance. Our techniques increase the capability and reliability of the surveillance system and provide an easy way to obtain 3-D information of objects in the space monitored by the system.
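The quality-assessment idea in this thesis, fitting a mixture density to edge-intensity histograms with Expectation-Maximization, can be sketched directly with scikit-learn's EM-based mixture estimator. The two-component choice below is illustrative, not the thesis's configuration.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def edge_mixture_model(image: np.ndarray, n_components: int = 2) -> GaussianMixture:
        gy, gx = np.gradient(image.astype(float))
        edges = np.hypot(gx, gy).reshape(-1, 1)   # edge-intensity samples
        return GaussianMixture(n_components=n_components).fit(edges)  # fitted by EM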
Sensor fusion for synthetic vision
NASA Technical Reports Server (NTRS)
Pavel, M.; Larimer, J.; Ahumada, A.
1991-01-01
Display methodologies are explored for fusing images gathered by millimeter-wave sensors with images rendered from an on-board terrain database to facilitate visually guided flight and ground operations in low-visibility conditions. An approach to fusion based on multiresolution image representation and processing is described, which facilitates the fusion of images differing in resolution both within and between images. To investigate possible fusion methods, a workstation-based simulation environment is being developed.
Schulte, Tilman; Oberlin, Brandon G; Kareken, David A; Marinkovic, Ksenija; Müller-Oehring, Eva M; Meyerhoff, Dieter J; Tapert, Susan
2012-12-01
Multimodal imaging combining 2 or more techniques is becoming increasingly important because no single imaging approach has the capacity to elucidate all clinically relevant characteristics of a network. This review highlights recent advances in multimodal neuroimaging (i.e., combined use and interpretation of data collected through magnetic resonance imaging [MRI], functional MRI, diffusion tensor imaging, positron emission tomography, magnetoencephalography, MR perfusion, and MR spectroscopy methods) that leads to a more comprehensive understanding of how acute and chronic alcohol consumption affect neural networks underlying cognition, emotion, reward processing, and drinking behavior. Several innovative investigators have started utilizing multiple imaging approaches within the same individual to better understand how alcohol influences brain systems, both during intoxication and after years of chronic heavy use. Their findings can help identify mechanism-based therapeutic and pharmacological treatment options, and they may increase the efficacy and cost effectiveness of such treatments by predicting those at greatest risk for relapse. Copyright © 2012 by the Research Society on Alcoholism.
Dye-Enhanced Multimodal Confocal Imaging of Brain Cancers
NASA Astrophysics Data System (ADS)
Wirth, Dennis; Snuderl, Matija; Sheth, Sameer; Curry, William; Yaroslavsky, Anna
2011-04-01
Background and Significance: Accurate, high-resolution intraoperative detection of brain tumors may result in improved patient survival and better quality of life. The goal of this study was to evaluate dye-enhanced multimodal confocal imaging for discriminating normal and cancerous brain tissue. Materials and Methods: Fresh thick brain specimens were obtained from surgeries. Normal and cancerous tissues were investigated. Samples were stained in methylene blue and imaged. Reflectance and fluorescence signals were excited at 658 nm. Fluorescence emission and polarization were registered from 670 nm to 710 nm. The system provided a lateral resolution of 0.6 μm and an axial resolution of 7 μm. Normal and cancer specimens exhibited distinctively different characteristics. H&E histopathology was processed for each imaged sample. Results and Conclusions: The analysis of normal and cancerous tissues indicated clear differences in appearance in both the reflectance and fluorescence responses. These results confirm the feasibility of multimodal confocal imaging for intraoperative detection of small cancer nests and cells.
Ellmauthaler, Andreas; Pagliari, Carla L; da Silva, Eduardo A B
2013-03-01
Multiscale transforms are among the most popular techniques in the field of pixel-level image fusion. However, the fusion performance of these methods often deteriorates for images derived from different sensor modalities. In this paper, we demonstrate that for such images, results can be improved using a novel undecimated wavelet transform (UWT)-based fusion scheme, which splits the image decomposition process into two successive filtering operations using spectral factorization of the analysis filters. The actual fusion takes place after convolution with the first filter pair. Its significantly smaller support size leads to the minimization of the unwanted spreading of coefficient values around overlapping image singularities. This usually complicates the feature selection process and may lead to the introduction of reconstruction errors in the fused image. Moreover, we will show that the nonsubsampled nature of the UWT allows the design of nonorthogonal filter banks, which are more robust to artifacts introduced during fusion, additionally improving the obtained results. The combination of these techniques leads to a fusion framework, which provides clear advantages over traditional multiscale fusion approaches, independent of the underlying fusion rule, and reduces unwanted side effects such as ringing artifacts in the fused reconstruction.
A Comparative Analysis of Spatiotemporal Data Fusion Models for Landsat and MODIS Data
NASA Astrophysics Data System (ADS)
Hazaymeh, K.; Almagbile, A.
2018-04-01
In this study, three documented spatiotemporal data fusion models were applied to Landsat-7 and MODIS surface reflectance and NDVI. The algorithms included the spatial and temporal adaptive reflectance fusion model (STARFM), the sparse-representation-based spatiotemporal reflectance fusion model (SPSTFM), and the spatiotemporal image-fusion model (STI-FM). The objectives of this study were to (i) compare the performance of these three fusion models using one Landsat-MODIS spectral reflectance image pair and time-series datasets from the Coleambally irrigation area in Australia, and (ii) quantitatively evaluate the accuracy of the synthetic images generated by each fusion model using statistical measurements. Results showed that the three fusion models predicted the synthetic Landsat-7 image with adequate agreement. STI-FM produced more accurate reconstructions of both the Landsat-7 spectral bands and NDVI. Furthermore, it produced surface reflectance images having the highest correlation with the actual Landsat-7 images. This study indicates that STI-FM would be more suitable for spatiotemporal data fusion applications such as vegetation monitoring, drought monitoring, and evapotranspiration estimation.
Liu, Mengyang; Chen, Zhe; Zabihian, Behrooz; Sinz, Christoph; Zhang, Edward; Beard, Paul C.; Ginner, Laurin; Hoover, Erich; Minneman, Micheal P.; Leitgeb, Rainer A.; Kittler, Harald; Drexler, Wolfgang
2016-01-01
Cutaneous blood flow accounts for approximately 5% of cardiac output in humans and plays a key role in a number of physiological and pathological processes. We show for the first time a multi-modal photoacoustic tomography (PAT), optical coherence tomography (OCT), and OCT angiography system with an articulated probe to extract human cutaneous vasculature in vivo in various skin regions. OCT angiography supplements the microvasculature that PAT alone is unable to provide. The co-registered vessel-network volumes are further embedded in the morphologic image provided by OCT. This multi-modal system is therefore demonstrated as a valuable tool for comprehensive non-invasive imaging of human skin vasculature and morphology in vivo. PMID:27699106
Multimodal Nonlinear Optical Microscopy
Yue, Shuhua; Slipchenko, Mikhail N.; Cheng, Ji-Xin
2013-01-01
Because each nonlinear optical (NLO) imaging modality is sensitive to specific molecules or structures, multimodal NLO imaging capitalizes on the potential of NLO microscopy for studies of complex biological tissues. The coupling of multiphoton fluorescence, second harmonic generation, and coherent anti-Stokes Raman scattering (CARS) has allowed investigation of a broad range of biological questions concerning lipid metabolism, cancer development, cardiovascular disease, and skin biology. Moreover, recent research shows the great potential of using the CARS microscope as a platform to develop more advanced NLO modalities such as electronic-resonance-enhanced four-wave mixing, stimulated Raman scattering, and pump-probe microscopy. This article reviews the various approaches developed for the realization of multimodal NLO imaging as well as the development of new NLO modalities on a CARS microscope. Applications to various aspects of biological and biomedical research are discussed. PMID:24353747
An FPGA-based heterogeneous image fusion system design method
NASA Astrophysics Data System (ADS)
Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong
2011-08-01
Taking advantage of FPGAs' low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible-light CCD camera and an infrared thermal imager are used as the image-capturing devices in order to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as gray-scale weighted averaging, maximum selection, and minimum selection are analyzed and compared. VHDL and a synchronous design method are utilized to produce a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Comparative experiments with the various fusion algorithms show that favorable heterogeneous image fusion quality can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
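The three tailor-made fusion rules named above are fixed-point RTL on the FPGA; the floating-point sketch below only spells out the per-pixel arithmetic being compared, assuming registered visible and infrared frames as float arrays.

    import numpy as np

    def fuse(visible: np.ndarray, infrared: np.ndarray, rule: str, alpha: float = 0.5):
        if rule == "weighted_average":
            return alpha * visible + (1 - alpha) * infrared
        if rule == "maximum":
            return np.maximum(visible, infrared)   # per-pixel maximum selection
        if rule == "minimum":
            return np.minimum(visible, infrared)   # per-pixel minimum selection
        raise ValueError(f"unknown rule: {rule}")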
Image fusion based on millimeter-wave for concealed weapon detection
NASA Astrophysics Data System (ADS)
Zhu, Weiwen; Zhao, Yuejin; Deng, Chao; Zhang, Cunlin; Zhang, Yalin; Zhang, Jingshui
2010-11-01
This paper describes a novel multi-sensor image fusion technology for concealed weapon detection (CWD). Because clothing is largely transparent in the millimeter-wave band, a millimeter-wave radiometer can be used to image and distinguish contraband concealed beneath clothes, such as guns, knives, and detonators. We therefore adopt passive millimeter-wave (PMMW) imaging technology for airport security. However, owing to the wavelength of millimeter waves and the single-channel mechanical scanning, the millimeter-wave image has low spatial resolution, which cannot meet the needs of practical applications. Therefore, the visible image (VI), which has higher resolution, is fused with the millimeter-wave image to enhance readability. Before image fusion, a novel image pre-processing step specific to the fusion of millimeter-wave and visible images is adopted. In the image fusion process, multi-resolution analysis (MRA) based on the wavelet transform (WT) is employed. The experimental results show that this method has advantages for concealed weapon detection and practical significance.
A method based on IHS cylindrical transform model for quality assessment of image fusion
NASA Astrophysics Data System (ADS)
Zhu, Xiaokun; Jia, Yonghong
2005-10-01
Image fusion techniques have been widely applied to remote sensing image analysis and processing, and methods for assessing the quality of image fusion in remote sensing have become a research issue both at home and abroad. Traditional assessment methods combine the calculation of quantitative indexes with visual interpretation to compare fused images quantitatively and qualitatively. However, existing assessment methods have two defects: on the one hand, most indexes lack the theoretical support needed to compare different fusion methods; on the other hand, there is no uniform preference among most quantitative assessment indexes when they are applied to estimate fusion effects. That is, spatial resolution and spectral features cannot be analyzed synchronously by these indexes, and there is no general method that unifies spatial and spectral feature assessment. In this paper, on the basis of an approximate general model of four traditional fusion methods, including Intensity-Hue-Saturation (IHS) triangle transform fusion, High-Pass Filter (HPF) fusion, Principal Component Analysis (PCA) fusion, and Wavelet Transform (WT) fusion, a correlation coefficient assessment method based on the IHS cylindrical transform is proposed. Experiments show that this method can not only evaluate spatial and spectral features under a uniform preference, but can also compare fusion image sources with fused images and reveal differences among fusion methods. Compared with traditional assessment methods, the new method is more intuitive and more in accord with subjective evaluation.
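Two of the paper's ingredients, an intensity-substitution fusion in the IHS family and the correlation coefficient used as an assessment index, can be sketched compactly. The fusion below uses a simplified band-mean intensity rather than the full IHS cylindrical transform, so it is a stand-in for the general model, not the proposed method itself.

    import numpy as np

    def intensity_substitution_fusion(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
        """ms: HxWx3 multispectral, pan: HxW panchromatic, both float."""
        intensity = ms.mean(axis=2)                 # simplified I component
        gain = pan / (intensity + 1e-9)             # substitute pan for intensity
        return ms * gain[..., None]

    def correlation(a: np.ndarray, b: np.ndarray) -> float:
        """Correlation coefficient between two bands -- the assessment index."""
        return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])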
NASA Astrophysics Data System (ADS)
Bai, Z. M.; Zhang, Z. Z.; Wang, C. Y.; Klemperer, S. L.
2012-04-01
A weakened lithosphere around the eastern syntaxis of the Tibetan plateau has been revealed by average Pn and Sn velocities, 3D upper-mantle P-wave and S-wave velocity variations, and the imaging results of magnetotelluric data. The Tengchong volcanic area neighbors the core of the eastern syntaxis and is famous for its springs, volcanic-geothermal activity, and remarkable seismicity within mainland China. To probe the deep environment of the Tengchong volcanic-geothermal activity, a deep seismic sounding (DSS) project was carried out across this area in 1999. In this paper, the seismic signature of crustal magma and fluid is explored from the DSS data with a seismic attribute fusion (SAF) technique; four possible positions of magma generation, together with some locations of porous and fractured fluid-bearing rock beneath the Tengchong volcanic area, were disclosed from the final fusion image of multiple seismic attributes. The adopted attributes include the Vp, Vs, and Vp/Vs results derived from a new inversion method based on the No-Ray-Tomography technique, and the migrated instantaneous attributes of central frequency, bandwidth, and high-frequency energy of the pressure wave. Moreover, back-projected attributes, mainly consisting of the attenuation factor Qp, the delay time of shear-wave splitting, and the amplitude ratio between the S wave and the P wave + S wave, were also considered in the fusion process. Our fusion image indicates the following mechanism for the surface springs: a large amount of heat and fluid released by the crystallization of magma is transmitted upward into fluid-filled rock, and the fluid upwells along conduits owing to the high pressure at depth, thereby developing the widespread springs of the Tengchong volcanic area. Moreover, the fusion image, the regional volcanic and geothermal activity, and the seismicity suggest that the main risk of volcanic eruption is concentrated to the south of Tengchong city, especially around the shot point (SP) Tuantian. There are typical tectonic and deep-origin mechanisms for the moderate-strong earthquakes near SP Tuantian, and precautions should be taken in this area against potential earthquakes. Our fusion image also clearly reveals two remarkable positions on the Moho discontinuity through which heat from the upper mantle is transmitted upward; this is attributed to the widely distributed hot material within the crust and upper mantle. We acknowledge the financial support of the Ministry of Land and Resources of China (SinoProbe-02-02) and the National Nature Science Foundation of China (No. 41074033 and No. 40830315).
Abi-Jaoudeh, Nadine; Mielekamp, Peter; Noordhoek, Niels; Venkatesan, Aradhana M; Millo, Corina; Radaelli, Alessandro; Carelsen, Bart; Wood, Bradford J
2012-06-01
To describe a novel technique for multimodality positron emission tomography (PET) fusion-guided interventions that combines cone-beam computed tomography (CT) with PET/CT before the procedure. Subjects were selected among patients scheduled for a biopsy or ablation procedure. The lesions were not visible with conventional imaging methods or did not have uniform uptake on PET. Clinical success was defined by adequate histopathologic specimens for molecular profiling or diagnosis and by lack of enhancement on follow-up imaging for ablation procedures. Time to target (time elapsed between the completion of the initial cone-beam CT scan and first tissue sample or treatment), total procedure time (time from the moment the patient was on the table until the patient was off the table), and number of times the needle was repositioned were recorded. Seven patients underwent eight procedures (two ablations and six biopsies). Registration and procedures were completed successfully in all cases. Clinical success was achieved in all biopsy procedures and in one of the two ablation procedures. The needle was repositioned once in one biopsy procedure only. On average, the time to target was 38 minutes (range 13-54 min). Total procedure time was 95 minutes (range 51-240 min, which includes composite ablation). On average, fluoroscopy time was 2.5 minutes (range 1.3-6.2 min). An integrated cone-beam CT software platform can enable PET-guided biopsies and ablation procedures without the need for additional specialized hardware. Copyright © 2012 SIR. Published by Elsevier Inc. All rights reserved.
SD-OCT stages of progression of type 2 macular telangiectasia in a patient followed for 3 years.
Coscas, Gabriel; Coscas, Florence; Zucchiatti, Ilaria; Bandello, Francesco; Soubrane, Gisele; Souied, Eric
2013-01-01
To describe the natural course of type 2 idiopathic macular telangiectasia (MT) using spectral-domain optical coherence tomography (SD-OCT). Analysis of the different stages of progression of type 2 MT over a period of 3 years using multimodal imaging, including SD-OCT correlated with angiographic and autofluorescence images. The analysis of the different steps was obtained initially from the first eye, then successively from the fellow eye as progressive changes appeared. The earliest visible alteration on SD-OCT was the interruption of the inner segment/ellipsoid (IS/EL) interface (stage 1). The second stage was characterized by the complete interruption of both the IS/EL interface and the external limiting membrane (stage 2). At the next step, a wide disruption of the outer nuclear layer was noted (stage 3). The fourth stage showed a complete disorganization of the inner layers, with an appearance of fusion of the inner retinal layers associated with progressive atrophy of the outer layers (stage 4). Hyper-reflective deposits were found in both the internal and external retinal layers (stage 5). Small intraretinal cystoid spaces appeared in the different retinal layers (stage 6). This last feature was an early manifestation of the typical intraretinal cysts that are the well-known OCT appearance of type 2 MT. We describe the 6 steps of progression, from the earliest SD-OCT findings through the complete disorganization and fusion of the inner layers (probably due to changes in the Müller cells) to the typical intraretinal cysts.
NASA Astrophysics Data System (ADS)
Adi Aizudin Bin Radin Nasirudin, Radin; Meier, Reinhard; Ahari, Carmen; Sievert, Matti; Fiebich, Martin; Rummeny, Ernst J.; Noël, Peter B.
2011-03-01
Optical imaging (OI) is a relatively new method for detecting active inflammation of the hand joints of patients suffering from rheumatoid arthritis (RA). With the high number of people affected by this disease, especially in Western countries, the availability of OI as an early diagnostic imaging method is clinically highly relevant. In this paper, we present a newly developed in-house OI analysis tool and a clinical evaluation study. Our analysis tool extends the capabilities of existing OI tools. It includes many features, such as region-based image analysis, hyperperfusion curve analysis, and multi-modality image fusion, to aid clinicians in localizing and determining the intensity of inflammation in joints. Additionally, image data management options, such as full PACS/RIS integration, are included. In our clinical study we demonstrate how OI facilitates the detection of active inflammation in rheumatoid arthritis. The preliminary clinical results indicate a sensitivity of 43.5%, a specificity of 80.3%, an accuracy of 65.7%, a positive predictive value of 76.6%, and a negative predictive value of 64.9% relative to clinical results from MRI. The accuracy of inflammation detection serves as evidence of the potential of OI as a useful imaging modality for the early detection of active inflammation in patients with rheumatoid arthritis. With our in-house developed tool we extend the usefulness of OI in the clinical arena. Overall, we show that OI is a fast, inexpensive, non-invasive, and nonionizing yet highly sensitive and accurate imaging modality.
Advances in multi-sensor data fusion: algorithms and applications.
Dong, Jiang; Zhuang, Dafang; Huang, Yaohuan; Fu, Jingying
2009-01-01
With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. The paper presents an overview of recent advances in multi-sensor satellite image fusion. Firstly, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in main applications fields in remote sensing, including object identification, classification, change detection and maneuvering targets tracking, are described. Both advantages and limitations of those applications are then discussed. Recommendations are addressed, including: (1) Improvements of fusion algorithms; (2) Development of "algorithm fusion" methods; (3) Establishment of an automatic quality assessment scheme.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hakime, Antoine, E-mail: thakime@yahoo.com; Yevich, Steven; Tselikas, Lambros
Purpose: To assess whether fusion imaging-guided percutaneous microwave ablation (MWA) can improve the visibility and targeting of liver metastases deemed inconspicuous on ultrasound (US). Materials and Methods: MWA of liver metastases not judged conspicuous enough on US was performed under CT/US fusion imaging guidance. Conspicuity before and after fusion imaging was graded on a five-point scale, and significance was assessed by the Wilcoxon test. Technical success, procedure time, and procedure-related complications were evaluated. Results: A total of 35 patients with 40 liver metastases (mean size 1.3 ± 0.4 cm) were enrolled. Image fusion improved conspicuity sufficiently to allow fusion-targeted MWA in 33 patients. The time required for image fusion processing and tumor identification averaged 10 ± 2.1 min (range 5-14). Initial conspicuity on US by inclusion criteria was 1.2 ± 0.4 (range 0-2), while conspicuity after localization on fusion imaging was 3.5 ± 1 (range 1-5, P < 0.001). The technical success rate was 83% (33/40) in intention-to-treat analysis and 100% in the analysis of treated tumors. There were no major procedure-related complications. Conclusions: Fusion imaging broadens the scope of US-guided MWA to metastases lacking adequate conspicuity on conventional US. Fusion imaging is an effective tool to increase the conspicuity of liver metastases initially deemed not visualizable on conventional US imaging.
Image fusion via nonlocal sparse K-SVD dictionary learning.
Li, Ying; Li, Fangyi; Bai, Bendu; Shen, Qiang
2016-03-01
Image fusion aims to merge two or more images of the same scene captured by various sensors to construct a more informative image by integrating their details. Generally, such integration is achieved through the manipulation of the representations of the images concerned. Sparse representation plays an important role in the effective description of images, offering great potential in a variety of image processing tasks, including image fusion. Supported by sparse representation, an approach for image fusion using a novel dictionary learning scheme is proposed in this paper. The nonlocal self-similarity property of the images is exploited not only at the stage of learning the underlying description dictionary but also during the process of image fusion. In particular, the property of nonlocal self-similarity is combined with the traditional sparse dictionary. This results in an improved learned dictionary, hereafter referred to as the nonlocal sparse K-SVD dictionary (where K-SVD stands for the K-times singular value decomposition commonly used in the literature), abbreviated to NL_SK_SVD. The NL_SK_SVD dictionary is then applied to image fusion using simultaneous orthogonal matching pursuit. The proposed approach is evaluated with different types of images and compared with a number of alternative image fusion techniques. The superior fused images produced by the present approach demonstrate the efficacy of the NL_SK_SVD dictionary in sparse image representation.
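At patch level, sparse-representation fusion follows a fixed recipe: sparse-code each source patch over a dictionary, keep the code with the greater activity, and reconstruct. The sketch below uses a fixed 2D-DCT dictionary and scikit-learn's OMP in place of the learned NL_SK_SVD dictionary, and the max-L1 activity rule is a common convention rather than necessarily the paper's.

    import numpy as np
    from scipy.fftpack import dct
    from sklearn.linear_model import orthogonal_mp

    P = 8                                     # patch side
    D1 = dct(np.eye(P), norm="ortho")         # 1-D DCT basis
    D = np.kron(D1, D1).T                     # 64x64 separable 2-D DCT dictionary

    def fuse_patch(pa: np.ndarray, pb: np.ndarray, k: int = 8) -> np.ndarray:
        """Fuse two vectorized 8x8 patches by keeping the more active sparse code."""
        xa = orthogonal_mp(D, pa, n_nonzero_coefs=k)
        xb = orthogonal_mp(D, pb, n_nonzero_coefs=k)
        x = xa if np.abs(xa).sum() >= np.abs(xb).sum() else xb   # max-L1 rule
        return D @ x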
Multifocus image fusion using phase congruency
NASA Astrophysics Data System (ADS)
Zhan, Kun; Li, Qiaoqiao; Teng, Jicai; Wang, Mingying; Shi, Jinhui
2015-05-01
We address the problem of fusing multifocus images based on phase congruency (PC). PC provides a sharpness feature of a natural image; the focus measure (FM) is identified as strong PC near a distinctive image feature, evaluated by the complex Gabor wavelet. PC is more robust against noise than other FMs. The fused image is obtained by a new fusion rule (FR), by which the focused region is selected from one of the input images. Experimental results show that the proposed fusion scheme achieves the fusion performance of state-of-the-art methods in terms of visual quality and quantitative evaluations.
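A per-pixel decision-map fusion of this kind can be sketched in a few lines. Note that a locally averaged Laplacian-energy focus measure is used below as a simple stand-in for the paper's complex-Gabor phase-congruency measure; the window size is an illustrative choice.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def fuse_multifocus(a, b, win=9):
    # Focus measure: windowed Laplacian energy (stand-in for phase congruency).
    fm_a = uniform_filter(laplace(a.astype(float)) ** 2, win)
    fm_b = uniform_filter(laplace(b.astype(float)) ** 2, win)
    # Fusion rule: take each pixel from the source that is locally sharper.
    return np.where(fm_a >= fm_b, a, b)
```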
NASA Astrophysics Data System (ADS)
Guan, Wen; Li, Li; Jin, Weiqi; Qiu, Su; Zou, Yan
2015-10-01
Extreme-low-light CMOS sensors have been widely applied in the field of night vision as a new type of solid-state image sensor. However, when the scene illumination changes drastically or is too strong, a single exposure cannot clearly present both the highlight and low-light regions. To address this partial-saturation problem in night vision, an HDR image fusion algorithm based on the Laplacian pyramid was investigated. Because the overall gray level and contrast of the low-light image are very low, a fusion strategy based on the regional average gradient is chosen for the top layer of the long-exposure and short-exposure images, which carries rich brightness and textural features; the remaining layers, which represent the edge feature information of the target, are fused using a strategy based on regional energy. During reconstruction of the source image from the Laplacian pyramid, the fusion results with four kinds of base images are compared. The algorithm is tested in Matlab and compared with different fusion strategies, using information entropy, average gradient, and standard deviation as objective evaluation parameters for further analysis of the fusion results. Experiments in different low-illumination environments show that the proposed algorithm rapidly attains a wide dynamic range while keeping high entropy, indicating further application prospects for the optimized algorithm. Keywords: high dynamic range imaging, image fusion, multi-exposure image, weight coefficient, information fusion, Laplacian pyramid transform.
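The layer-wise strategy described (average-gradient weighting at the coarsest pyramid level, regional-energy selection in the detail levels) can be sketched as follows; OpenCV pyramid routines are used, the window size and level count are illustrative choices, and a global average gradient stands in for the paper's regional version at the top layer.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=gp[i].shape[1::-1])
          for i in range(levels)]
    lp.append(gp[-1])                       # coarsest (top) level
    return lp

def fuse_exposures(long_exp, short_exp, levels=4, win=5):
    la = laplacian_pyramid(long_exp, levels)
    lb = laplacian_pyramid(short_exp, levels)
    fused = []
    for i in range(levels):                 # detail layers: regional energy
        ea = cv2.boxFilter(la[i] * la[i], -1, (win, win))
        eb = cv2.boxFilter(lb[i] * lb[i], -1, (win, win))
        fused.append(np.where(ea >= eb, la[i], lb[i]))
    def avg_grad(x):                        # top layer: average-gradient weights
        gy, gx = np.gradient(x)
        return np.sqrt(gx ** 2 + gy ** 2).mean() + 1e-12
    wa, wb = avg_grad(la[-1]), avg_grad(lb[-1])
    fused.append((wa * la[-1] + wb * lb[-1]) / (wa + wb))
    out = fused[-1]                         # reconstruct from the top down
    for lap in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=lap.shape[1::-1]) + lap
    return np.clip(out, 0, 255).astype(np.uint8)
```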
Zhou, Jing; Yu, Mengxiao; Sun, Yun; Zhang, Xianzhong; Zhu, Xingjun; Wu, Zhanhong; Wu, Dongmei; Li, Fuyou
2011-02-01
Molecular imaging modalities provide a wealth of information that is highly complementary and rarely redundant. To combine the advantages of molecular imaging techniques, (18)F-labeled Gd(3+)/Yb(3+)/Er(3+) co-doped NaYF(4) nanophosphors (NPs) simultaneously possessing radioactive, magnetic, and upconversion luminescent properties have been fabricated for multimodality positron emission tomography (PET), magnetic resonance imaging (MRI), and laser scanning upconversion luminescence (UCL) imaging. Hydrophilic citrate-capped NaY(0.2)Gd(0.6)Yb(0.18)Er(0.02)F(4) nanophosphors (cit-NPs) were obtained from hydrophobic oleic acid (OA)-coated nanoparticles (OA-NPs) through ligand exchange of OA with citrate and were found to be monodisperse with an average size of 22 × 19 nm. The obtained hexagonal cit-NPs show intense UCL emission in the visible region and paramagnetic longitudinal relaxivity (r(1) = 0.405 s(-1)·(mM)(-1)). Through a facile inorganic reaction based on the strong binding between Y(3+) and F(-), (18)F-labeled NPs have been fabricated in high yield. The use of cit-NPs as a multimodal probe has been further explored for T(1)-weighted MR and PET imaging in vivo and UCL imaging of living cells and tissue slides. The results indicate that (18)F-labeled NaY(0.2)Gd(0.6)Yb(0.18)Er(0.02)F(4) is a potential candidate as a multimodal nanoprobe for ultra-sensitive molecular imaging from the cellular scale to whole-body evaluation. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Schmitt, Michael; Heuke, Sandro; Meyer, Tobias; Chernavskaia, Olga; Bocklitz, Thomas W.; Popp, Juergen
2016-03-01
The realization of label-free, molecule-specific imaging of the morphology and chemical composition of tissue at subcellular spatial resolution in real time is crucial for many envisioned applications in medicine, e.g., precise surgical guidance and non-invasive histopathologic examination of tissue. Thus, new approaches for fast and reliable in vivo and near in vivo (ex corpore in vivo) tissue characterization to supplement routine pathological diagnostics are needed. Spectroscopic imaging approaches are particularly important, since they have the potential to provide a pathologist with adequate support in the form of clinically relevant information under both ex vivo and in vivo conditions. In this contribution it is demonstrated that multimodal nonlinear microscopy combining coherent anti-Stokes Raman scattering (CARS), two-photon excited fluorescence (TPEF), and second harmonic generation (SHG) enables the detection of characteristic structures and the accompanying molecular changes of widespread diseases, particularly cancer and atherosclerosis. The detailed images enable an objective evaluation of the tissue samples for an early diagnosis of the disease status. Increasing the spectral resolution and analyzing CARS images at multiple Raman resonances improves the chemical specificity. To facilitate handling and interpretation of the image data, characteristic properties can be automatically extracted by advanced image processing algorithms, e.g., for tissue classification. Overall, the presented examples show the great potential of multimodal imaging to augment standard intraoperative clinical assessment with functional multimodal CARS/SHG/TPEF images that highlight functional activity and tumor boundaries, ensuring fast, label-free, and non-invasive intraoperative tissue classification and paving the way towards in vivo optical pathology.
NASA Astrophysics Data System (ADS)
Wang, Guannan; Gao, Wei; Zhang, Xuanjun; Mei, Xifan
2016-06-01
Diagnostic approaches based on multimodal clinical noninvasive imaging (e.g., MRI/CT scanners) have been highly developed in recent years for accurate selection of therapeutic regimens in critical diseases. There is therefore high demand for appropriate all-in-one multimodal contrast agents (MCAs) for MRI/CT multimodal imaging. Here, novel MCAs (F-AuNC@Fe3O4) were engineered by assembling Au nanocages (Au NC) and ultra-small iron oxide nanoparticles (Fe3O4) for simultaneous T1-T2 dual MRI and CT contrast imaging. In this system, the Au nanocages offer facile thiol modification and a strong X-ray attenuation property for CT imaging. The ultra-small Fe3O4 nanoparticles, as excellent contrast agents, provide greatly enhanced T1- and T2-weighted MRI signal (r1 = 6.263 mM-1 s-1, r2 = 28.117 mM-1 s-1) owing to their ultra-refined size. After functionalization, the present MCA nanoparticles exhibited small average size, low aggregation, and excellent biocompatibility. In vitro and in vivo studies revealed that the MCAs show long-term circulation, renal clearance properties, and an outstanding capability for selective accumulation in tumor tissues for simultaneous CT imaging and T1- and T2-weighted MRI. Taken together, these results show that the as-prepared MCAs are excellent candidates as MRI/CT multimodal imaging contrast agents.
Multi-focus image fusion based on window empirical mode decomposition
NASA Astrophysics Data System (ADS)
Qin, Xinqiang; Zheng, Jiaoyue; Hu, Gang; Wang, Jiao
2017-09-01
In order to improve multi-focus image fusion quality, a novel fusion algorithm based on window empirical mode decomposition (WEMD) is proposed. WEMD is an improved form of bidimensional empirical mode decomposition (BEMD): its decomposition process uses an added-window principle, effectively resolving the signal-concealment problem. We used WEMD for multi-focus image fusion and formulated different fusion rules for the bidimensional intrinsic mode function (BIMF) components and the residue component. For fusion of the BIMF components, the concept of the Sum-modified-Laplacian was used and a scheme based on visual feature contrast was adopted; the residue coefficients were chosen as pixel values based on local visibility. We carried out four groups of multi-focus image fusion experiments and compared objective evaluation criteria with three other fusion methods. The experimental results show that the proposed approach is effective and fuses multi-focus images better than some traditional methods.
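The Sum-modified-Laplacian (SML) clarity measure invoked for the BIMF components is compact enough to state directly; the step and window parameters below are illustrative, and the function name is hypothetical.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sum_modified_laplacian(img, step=1, win=5):
    f = img.astype(float)
    # Modified Laplacian: |2f - f(x-s) - f(x+s)| + |2f - f(y-s) - f(y+s)|
    ml = (np.abs(2 * f - np.roll(f, step, axis=0) - np.roll(f, -step, axis=0)) +
          np.abs(2 * f - np.roll(f, step, axis=1) - np.roll(f, -step, axis=1)))
    return uniform_filter(ml, win) * win * win    # windowed sum = SML map
```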
NASA Astrophysics Data System (ADS)
Kang, Jeeun; Chang, Jin Ho; Wilson, Brian C.; Veilleux, Israel; Bai, Yanhui; DaCosta, Ralph; Kim, Kang; Ha, Seunghan; Lee, Jong Gun; Kim, Jeong Seok; Lee, Sang-Goo; Kim, Sun Mi; Lee, Hak Jong; Ahn, Young Bok; Han, Seunghee; Yoo, Yangmo; Song, Tai-Kyong
2015-03-01
Multi-modality imaging is beneficial for both preclinical and clinical applications, as it enables complementary information from each modality to be obtained in a single procedure. In this paper, we report the design, fabrication, and testing of a novel tri-modal in vivo imaging system that exploits molecular/functional information from fluorescence (FL) and photoacoustic (PA) imaging, as well as anatomical information from ultrasound (US) imaging. The same ultrasound transducer was used for both US and PA imaging, with the pulsed laser light brought into a compact probe by fiberoptic bundles. The FL subsystem is independent of the acoustic components, but the front end that delivers and collects the light is physically integrated into the same probe. The tri-modal imaging system was implemented to provide each modality image in real time as well as co-registration of the images. The performance of the system was evaluated through phantom and in vivo animal experiments. The results demonstrate that combining the modalities does not significantly compromise the performance of each of the separate US, PA, and FL imaging techniques, while enabling multi-modality registration. The potential applications of this novel approach to multi-modality imaging range from preclinical research to clinical diagnosis, especially in the detection/localization and surgical guidance of accessible solid tumors.
ERIC Educational Resources Information Center
Wang, Kelu
2013-01-01
Multimodal texts that combine words and images produce meaning in a different way from monomodal texts that rely on words. They differ not only in representing the subject matter, but also constructing relationships between text producers and text receivers. This article uses two multimodal texts and one monomodal written text as samples, which…
Compressive Coded-Aperture Multimodal Imaging Systems
NASA Astrophysics Data System (ADS)
Rueda-Chacon, Hoover F.
Multimodal imaging refers to the framework of capturing images that span different physical domains, such as space, spectrum, depth, time, and polarization. For instance, spectral images are modeled as 3D cubes with two spatial coordinates and one spectral coordinate. Three-dimensional cubes spanning just the space domain are referred to as depth volumes, and imaging cubes varying in time, spectrum, or depth are referred to as 4D images. Nature itself spans different physical domains; thus, imaging our real world demands capturing information in at least six different domains simultaneously, giving rise to 3D-spatial+spectral+polarized dynamic sequences. Conventional imaging devices, however, can capture dynamic sequences with up to 3 spectral channels in real time through the use of color sensors; capturing more spectral channels requires time-consuming scanning methodologies. In general, multimodal imaging to date requires a sequence of different imaging sensors, placed in tandem, to simultaneously capture the different physical properties of a scene, after which different fusion techniques are employed to merge the individual information into a single image. Therefore, new ways to efficiently capture more than 3 spectral channels of 3D time-varying spatial information, with a single sensor or a few sensors, are of high interest. Compressive spectral imaging (CSI) is an imaging framework that seeks to optimally capture spectral imagery (tens of spectral channels of 2D spatial information) using fewer measurements than required by traditional sensing procedures that follow Shannon-Nyquist sampling. Instead of capturing direct one-to-one representations of natural scenes, CSI systems acquire linear random projections of the scene and then solve an optimization problem to estimate the 3D spatio-spectral data cube by exploiting the theory of compressive sensing (CS). To date, the coding procedure in CSI has been realized through "block-unblock" coded apertures, commonly implemented as chrome-on-quartz photomasks. These apertures block or transmit the entire spectrum of the scene at given spatial locations, thus modulating the spatial characteristics of the scene. In its first part, this thesis aims to expand the framework of CSI by replacing the traditional block-unblock coded apertures with patterned optical filter arrays, referred to as "color" coded apertures. These apertures are formed by tiny pixelated optical filters, which allow the input image to be modulated not only spatially but also spectrally, entailing more powerful coding strategies. The proposed colored coded apertures are either synthesized through linear combinations of low-pass, high-pass, and band-pass filters, paired with binary pattern ensembles realized by a digital micromirror device (DMD), or experimentally realized through thin-film color-patterned filter arrays. The optical forward model of the proposed CSI architectures is presented along with the design and proof-of-concept implementations, which achieve noticeable improvements in reconstruction quality compared with conventional block-unblock coded-aperture CSI architectures.
On another front, due to the rich information contained in the infrared spectrum as well as the depth domain, this thesis also explores multimodal imaging by extending the range sensitivity of current CSI systems to a dual-band visible+near-infrared spectral domain, and it proposes, for the first time, a new imaging device that simultaneously captures 4D data cubes (2D spatial + 1D spectral + depth) with as few as a single snapshot. Owing to the snapshot advantage of this camera, video sequences are possible, enabling the joint capture of 5D imagery. The aim is to create super-human sensing that enables the perception of our world in new ways, advancing the state of the art in compressive sensing systems that extract depth while accurately capturing spatial and spectral material properties. The applications of such a sensor are self-evident in fields such as computer/robotic vision, because it would allow an artificial intelligence to make informed decisions not only about the location of objects within a scene but also about their material properties.
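As a toy illustration of CSI-style sensing with a colored coded aperture, the snapshot below applies a per-pixel, per-band binary code and integrates over wavelength. The dispersive shear present in CASSI-type instruments is omitted, and the array sizes and function name are illustrative simplifications.

```python
import numpy as np

def csi_forward(cube, code):
    # y[i, j] = sum_l code[i, j, l] * cube[i, j, l]: each detector pixel sees
    # its own spectrally coded mixture of the scene's bands.
    return (code * cube).sum(axis=-1)

rng = np.random.default_rng(0)
cube = rng.random((64, 64, 16))                             # toy 16-band cube
code = rng.integers(0, 2, size=(64, 64, 16)).astype(float)  # colored aperture
snapshot = csi_forward(cube, code)                          # one 2D measurement
```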
Zu, Chen; Jie, Biao; Liu, Mingxia; Chen, Songcan
2015-01-01
Multimodal classification methods using different modalities of imaging and non-imaging data have recently shown great advantages over traditional single-modality-based ones for diagnosis and prognosis of Alzheimer's disease (AD), as well as its prodromal stage, i.e., mild cognitive impairment (MCI). However, to the best of our knowledge, most existing methods focus on mining the relationship across multiple modalities of the same subjects, while ignoring the potentially useful relationship across different subjects. Accordingly, in this paper, we propose a novel learning method for multimodal classification of AD/MCI by fully exploring the relationships across both modalities and subjects. Specifically, our proposed method comprises two sequential components, i.e., label-aligned multi-task feature selection and multimodal classification. In the first step, feature selection from multiple modalities is treated as a set of learning tasks, and a group sparsity regularizer is imposed to jointly select a subset of relevant features. Furthermore, to utilize the discriminative information among labeled subjects, a new label-aligned regularization term is added to the objective function of standard multi-task feature selection, where label alignment means that all multi-modality subjects with the same class labels should be closer in the new feature-reduced space. In the second step, a multi-kernel support vector machine (SVM) is adopted to fuse the selected features from the multi-modality data for final classification. To validate our method, we perform experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database using baseline MRI and FDG-PET imaging data. The experimental results demonstrate that our proposed method achieves better classification performance than several state-of-the-art methods for multimodal classification of AD/MCI. PMID:26572145
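The second stage, a multi-kernel SVM over modality-specific feature sets, can be sketched with scikit-learn's precomputed-kernel interface. The RBF kernels and the hand-fixed weights `betas` are illustrative assumptions (in practice such weights would be tuned, e.g., by cross-validation), and the function name is hypothetical.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def multikernel_svm_predict(Xs_train, Xs_test, y_train, betas):
    # One kernel per modality (e.g., MRI features, FDG-PET features),
    # combined linearly with non-negative weights betas.
    K_train = sum(b * rbf_kernel(X, X) for b, X in zip(betas, Xs_train))
    K_test = sum(b * rbf_kernel(Xt, X)
                 for b, Xt, X in zip(betas, Xs_test, Xs_train))
    clf = SVC(kernel="precomputed").fit(K_train, y_train)
    return clf.predict(K_test)
```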
Wang, Sheng; Chen, Xuanze; Chang, Lei; Ding, Miao; Xue, Ruiying; Duan, Haifeng; Sun, Yujie
2018-06-05
Fluorescent probes with multimodal and multilevel imaging capabilities are highly valuable, as imaging with such probes not only can obtain new layers of information but also enables cross-validation of results under different experimental conditions. In recent years, the development of genetically encoded reversibly photoswitchable fluorescent proteins (RSFPs) has greatly promoted the application of various live-cell nanoscopy approaches, including reversible saturable optical fluorescence transitions (RESOLFT) and stochastic optical fluctuation imaging (SOFI). However, these two classes of live-cell nanoscopy approaches require different optical characteristics of specific RSFPs. In this work, we developed GMars-T, a monomeric bright green RSFP that can satisfy both RESOLFT and photochromic SOFI (pcSOFI) imaging in live cells. We further generated a biosensor based on bimolecular fluorescence complementation (BiFC) of GMars-T, which offers high specificity and sensitivity in detecting and visualizing various protein-protein interactions (PPIs) in different subcellular compartments under physiological conditions (e.g., 37 °C) in live mammalian cells. Thus, the newly developed GMars-T can serve as both a structural imaging probe with multimodal super-resolution imaging capability and a functional imaging probe for reporting PPIs with high specificity and sensitivity through its derived biosensor.
Fusion of infrared and visible images based on BEMD and NSDFB
NASA Astrophysics Data System (ADS)
Zhu, Pan; Huang, Zhanhua; Lei, Hai
2016-07-01
This paper presents a new fusion method for visible-infrared images based on the adaptive multi-scale decomposition of bidimensional empirical mode decomposition (BEMD) and the flexible directional expansion of nonsubsampled directional filter banks (NSDFB). Compared with conventional multi-scale fusion methods, BEMD is non-parametric and completely data-driven, which makes it relatively more suitable for the decomposition and fusion of non-linear signals. NSDFB provides directional filtering on the decomposition levels to effectively capture more of the geometrical structure of the source images. In our fusion framework, the entropies of the two source images are first calculated, and the residue of the image with the larger entropy is extracted to make it highly correlated with the other source image. Then, the residue and the other source image are decomposed into low-frequency sub-bands and a sequence of high-frequency directional sub-bands at different scales using BEMD and NSDFB. In this fusion scheme, two dedicated fusion rules are used for the low-frequency sub-bands and the high-frequency directional sub-bands, respectively. Finally, the fused image is obtained by applying the corresponding inverse transform. Experimental results indicate that the proposed fusion algorithm achieves state-of-the-art performance for visible-infrared image fusion in terms of both objective assessment and subjective visual quality, even for source images obtained under different conditions. Furthermore, the fused results have high contrast, remarkable target information, and rich detail information, making them more suitable for human visual characteristics or machine perception.
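The first decision in this pipeline, choosing which source's residue to extract by comparing Shannon entropies, reduces to a histogram computation. A minimal sketch follows; the 8-bit intensity range and the toy inputs are illustrative assumptions.

```python
import numpy as np

def shannon_entropy(img, bins=256):
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
ir = rng.random((128, 128)) * 255    # toy infrared image
vis = rng.random((128, 128)) * 255   # toy visible image
# Extract the residue from whichever source image has the larger entropy.
decompose = ir if shannon_entropy(ir) > shannon_entropy(vis) else vis
```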
[Contrast-enhanced ultrasound (CEUS) and image fusion for procedures of liver interventions].
Jung, E M; Clevert, D A
2018-06-01
Contrast-enhanced ultrasound (CEUS) is becoming increasingly important for the detection and characterization of malignant liver lesions and allows percutaneous treatment when surgery is not possible. Contrast-enhanced ultrasound image fusion with computed tomography (CT) and magnetic resonance imaging (MRI) opens up further options for the targeted investigation of a modified tumor treatment. Ultrasound image fusion offers the potential for real-time imaging and can be combined with other cross-sectional imaging techniques as well as CEUS. With the implementation of ultrasound contrast agents and image fusion, ultrasound has improved in the detection and characterization of liver lesions in comparison to other cross-sectional imaging techniques. In addition, this method can also be used for interventional procedures. The success rate of fusion-guided biopsies or CEUS-guided tumor ablation lies between 80 and 100% in the literature. Ultrasound-guided image fusion using CT or MRI data, in combination with CEUS, can facilitate diagnosis and therapy follow-up after liver interventions. Beyond the primary applications of image fusion in the diagnosis and treatment of liver lesions, further useful indications can be integrated into daily work, including intraoperative and vascular applications as well as applications in other organ systems.
Qiu, Chenhui; Wang, Yuanyuan; Guo, Yanen; Xia, Shunren
2018-03-14
Image fusion techniques can integrate the information from different imaging modalities to produce a composite image that is more suitable for human visual perception and further image processing tasks. Fusing green fluorescent protein (GFP) and phase contrast images is very important for subcellular localization, functional analysis of proteins, and genome expression. A fusion method for GFP and phase contrast images based on the complex shearlet transform (CST) is proposed in this paper. Firstly, the GFP image is converted to the IHS model and its intensity component is obtained. Secondly, the CST is performed on the intensity component and the phase contrast image to acquire the low-frequency and high-frequency subbands. Then, the high-frequency subbands are merged by the absolute-maximum rule, while the low-frequency subbands are merged by the proposed Haar wavelet-based energy (HWE) rule. Finally, the fused image is obtained by performing the inverse CST on the merged subbands and conducting IHS-to-RGB conversion. The proposed fusion method is tested on a number of GFP and phase contrast images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed method provides better fusion results in terms of subjective quality and objective evaluation. © 2018 Wiley Periodicals, Inc.
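A runnable approximation of this pipeline is sketched below with PyWavelets: HSV stands in for the IHS model, a plain Haar wavelet decomposition stands in for the shearlet transform, and the low-frequency merge uses a local-energy comparison as a simplified surrogate for the HWE rule. The function name and parameters are illustrative.

```python
import numpy as np
import pywt
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb
from scipy.ndimage import uniform_filter

def fuse_gfp_phase(gfp_rgb, phase, level=2):
    # Both inputs must share the same spatial size; gfp_rgb is uint8 RGB.
    hsv = rgb_to_hsv(gfp_rgb.astype(float) / 255.0)
    inten = hsv[..., 2]
    ca = pywt.wavedec2(inten, "haar", level=level)
    cb = pywt.wavedec2(phase.astype(float) / 255.0, "haar", level=level)
    # Low-frequency subbands: keep coefficients with larger local energy.
    ea, eb = uniform_filter(ca[0] ** 2, 3), uniform_filter(cb[0] ** 2, 3)
    fused = [np.where(ea >= eb, ca[0], cb[0])]
    # High-frequency subbands: absolute-maximum rule.
    for sa, sb in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(sa, sb)))
    rec = pywt.waverec2(fused, "haar")[:inten.shape[0], :inten.shape[1]]
    hsv[..., 2] = np.clip(rec, 0.0, 1.0)
    return (hsv_to_rgb(hsv) * 255).astype(np.uint8)
```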
Dual wavelength imaging allows analysis of membrane fusion of influenza virus inside cells.
Sakai, Tatsuya; Ohuchi, Masanobu; Imai, Masaki; Mizuno, Takafumi; Kawasaki, Kazunori; Kuroda, Kazumichi; Yamashina, Shohei
2006-02-01
Influenza virus hemagglutinin (HA) is a determinant of virus infectivity. Therefore, it is important to determine whether the HA of a new influenza virus, which can potentially cause pandemics, is functional against human cells. The novel imaging technique reported here allows rapid analysis of HA function by visualizing viral fusion inside cells; fusion is detected as a change in the spectrum of the fluorescence-labeled virus. Using this imaging, we detected fusion between a virus and a very small endosome that could not be detected previously, indicating that the technique allows highly sensitive detection of viral fusion.
V S, Unni; Mishra, Deepak; Subrahmanyam, G R K S
2016-12-01
The need for image fusion in current image processing systems is increasing, mainly due to the growing number and variety of image acquisition techniques. Image fusion is the process of combining substantial information from several sensors using mathematical techniques in order to create a single composite image that is more comprehensive and thus more useful for a human operator or other computer vision tasks. This paper presents a new approach to multifocus image fusion based on sparse signal representation. Block-based compressive sensing, integrated with a projection-driven compressive sensing (CS) recovery that encourages sparsity in the wavelet domain, is used to obtain the focused image from a set of out-of-focus images. Compression is achieved during the image acquisition process using a block compressive sensing method. An adaptive thresholding technique within the smoothed projected Landweber recovery process reconstructs high-resolution focused images from low-dimensional CS measurements of out-of-focus images. The discrete wavelet transform and the dual-tree complex wavelet transform are used as the sparsifying bases for the proposed fusion. The main finding is that sparsification enables a better selection of the fusion coefficients and hence better fusion. A Laplacian mixture model is fitted in the wavelet domain, and estimation of the probability density function (pdf) parameters by expectation maximization leads to the proper selection of the coefficients of the fused image. Compared with a fusion scheme that does not employ the projected Landweber (PL) recovery and with other existing CS-based fusion approaches, the proposed method outperforms them even with fewer samples.
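The recovery loop at the heart of such a scheme, a projected-Landweber iteration with a sparsity projection in the wavelet domain, can be sketched as follows. A fixed soft threshold replaces the paper's adaptive, Laplacian-mixture-driven thresholding, and the wavelet, level, and parameter values are illustrative assumptions.

```python
import numpy as np
import pywt

def pl_recover(y, phi, shape, lam=0.05, iters=100):
    # y: CS measurements, phi: (m, n) sensing matrix, shape: image shape.
    x = phi.T @ y                                    # initial back-projection
    step = 1.0 / np.linalg.norm(phi, 2) ** 2         # step ensuring convergence
    for _ in range(iters):
        x = x + step * (phi.T @ (y - phi @ x))       # Landweber gradient step
        coeffs = pywt.wavedec2(x.reshape(shape), "db4", level=3)
        arr, slices = pywt.coeffs_to_array(coeffs)
        arr = np.sign(arr) * np.maximum(np.abs(arr) - lam, 0.0)  # soft threshold
        rec = pywt.waverec2(
            pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), "db4")
        x = rec[:shape[0], :shape[1]].ravel()
    return x.reshape(shape)
```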
A new multi-spectral feature level image fusion method for human interpretation
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-03-01
Various methods to perform multi-spectral image fusion have been suggested, mostly at the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in a three-task experiment using MSSF against two established methods, averaging and principal component analysis (PCA), and against its two source bands, visible and infrared. The three tasks were: (1) simple target detection, (2) spatial orientation, and (3) camouflaged target detection. MSSF proved superior to the other fusion methods in all three tests; MSSF also outperformed the source images in the spatial orientation and camouflaged target detection tasks. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.
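For reference, the PCA baseline used in this comparison weighs the two source bands by the principal eigenvector of their joint covariance; a minimal pixel-level version is sketched below (the function name is hypothetical).

```python
import numpy as np

def pca_fusion(a, b):
    data = np.stack([a.ravel(), b.ravel()]).astype(float)
    cov = np.cov(data)                       # 2 x 2 covariance of the bands
    w = np.linalg.eigh(cov)[1][:, -1]        # principal eigenvector
    w = np.abs(w) / np.abs(w).sum()          # normalized, non-negative weights
    return w[0] * a.astype(float) + w[1] * b.astype(float)
```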
Designing Image Operators for MRI-PET Image Fusion of the Brain
NASA Astrophysics Data System (ADS)
Márquez, Jorge; Gastélum, Alfonso; Padilla, Miguel A.
2006-09-01
Our goal is to obtain images that combine, in a useful and precise way, the information from 3D volumes of medical imaging sets. We address two modalities combining anatomy (Magnetic Resonance Imaging, or MRI) and functional information (Positron Emission Tomography, or PET). Commercial imaging software offers image fusion tools based on fixed blending or color-channel combination of two modalities and color Look-Up Tables (LUTs), without considering the anatomical and functional character of the image features. We used an approach for image fusion that takes advantage mainly of the HSL (Hue, Saturation, and Luminosity) color space in order to enhance the fusion results. We further tested operators for gradient and contour extraction to enhance anatomical details, plus other spatial-domain filters for functional features corresponding to wide point-spread-function responses in PET images. A set of image-fusion operators was formulated and tested on PET and MRI acquisitions.
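A minimal sketch of this color-space assignment follows, with HSV standing in for HSL and an illustrative blue-to-red hue ramp; the function name and mapping constants are assumptions, not the paper's operators.

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def fuse_mri_pet(mri, pet):
    # MRI drives luminosity (anatomy); PET drives hue/saturation (function).
    mri_n = (mri - mri.min()) / (np.ptp(mri) + 1e-12)
    pet_n = (pet - pet.min()) / (np.ptp(pet) + 1e-12)
    h = (1.0 - pet_n) * 0.7          # high uptake -> red, low uptake -> blue
    s = pet_n                        # saturate only where PET signal exists
    v = mri_n                        # anatomical brightness from MRI
    return hsv_to_rgb(np.dstack([h, s, v]))   # RGB image in [0, 1]
```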
Multi-Modal Curriculum Learning for Semi-Supervised Image Classification.
Gong, Chen; Tao, Dacheng; Maybank, Stephen J; Liu, Wei; Kang, Guoliang; Yang, Jie
2016-07-01
Semi-supervised image classification aims to classify a large quantity of unlabeled images by typically harnessing scarce labeled images. Existing semi-supervised methods often suffer from inadequate classification accuracy when encountering difficult yet critical images, such as outliers, because they treat all unlabeled images equally and conduct classifications in an imperfectly ordered sequence. In this paper, we employ the curriculum learning methodology by investigating the difficulty of classifying every unlabeled image. The reliability and the discriminability of these unlabeled images are particularly investigated for evaluating their difficulty. As a result, an optimized image sequence is generated during the iterative propagations, and the unlabeled images are logically classified from simple to difficult. Furthermore, since images are usually characterized by multiple visual feature descriptors, we associate each kind of features with a teacher, and design a multi-modal curriculum learning (MMCL) strategy to integrate the information from different feature modalities. In each propagation, each teacher analyzes the difficulties of the currently unlabeled images from its own modality viewpoint. A consensus is subsequently reached among all the teachers, determining the currently simplest images (i.e., a curriculum), which are to be reliably classified by the multi-modal learner. This well-organized propagation process leveraging multiple teachers and one learner enables our MMCL to outperform five state-of-the-art methods on eight popular image data sets.
Infrared and visible image fusion method based on saliency detection in sparse domain
NASA Astrophysics Data System (ADS)
Liu, C. H.; Qi, Y.; Ding, W. R.
2017-06-01
Infrared and visible image fusion is a key problem in the field of multi-sensor image fusion. To better preserve the significant information of the infrared and visible images in the final fused image, the saliency maps of the source images are introduced into the fusion procedure. Firstly, under the framework of the joint sparse representation (JSR) model, the global and local saliency maps of the source images are obtained from the sparse coefficients. Then, a saliency detection model is proposed that combines the global and local saliency maps to generate an integrated saliency map. Finally, a weighted fusion algorithm based on the integrated saliency map is developed to carry out the fusion process. The experimental results show that our method is superior to state-of-the-art methods in terms of several universal quality evaluation indexes as well as visual quality.
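The final weighted-fusion step reduces to a normalized convex combination driven by the integrated saliency maps. In the sketch below, a simple local-contrast saliency stands in for the JSR-derived maps, and the window size is an illustrative choice.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def saliency_weighted_fusion(ir, vis, win=11):
    def saliency(x):
        x = x.astype(float)
        return np.abs(x - uniform_filter(x, win)) + 1e-6  # local contrast
    s_ir, s_vis = saliency(ir), saliency(vis)
    w = s_ir / (s_ir + s_vis)                # integrated weight map in [0, 1]
    return w * ir + (1.0 - w) * vis
```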
Yaseen, Mohammad A.; Srinivasan, Vivek J.; Gorczynska, Iwona; Fujimoto, James G.; Boas, David A.; Sakadžić, Sava
2015-01-01
Improving our understanding of brain function requires novel tools to observe multiple physiological parameters with high resolution in vivo. We have developed a multimodal imaging system for investigating multiple facets of cerebral blood flow and metabolism in small animals. The system was custom designed and features multiple optical imaging capabilities, including 2-photon and confocal lifetime microscopy, optical coherence tomography, laser speckle imaging, and optical intrinsic signal imaging. Here, we provide details of the system’s design and present in vivo observations of multiple metrics of cerebral oxygen delivery and energy metabolism, including oxygen partial pressure, microvascular blood flow, and NADH autofluorescence. PMID:26713212
Computational method for multi-modal microscopy based on transport of intensity equation
NASA Astrophysics Data System (ADS)
Li, Jiaji; Chen, Qian; Sun, Jiasong; Zhang, Jialin; Zuo, Chao
2017-02-01
In this paper, we develop the requisite theory to describe a hybrid virtual-physical multi-modal imaging system that yields quantitative phase, Zernike phase contrast, differential interference contrast (DIC), and light field moment imaging simultaneously, based on the transport of intensity equation (TIE). We then give an experimental demonstration of these ideas by time-lapse imaging of live HeLa cell mitosis. Experimental results verify that a tunable-lens-based TIE system, combined with the appropriate post-processing algorithm, can achieve a variety of promising imaging modalities in parallel with quantitative phase images for the dynamic study of cellular processes.
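For the quantitative-phase modality, the standard FFT-based Poisson solution of the TIE for a pure-phase sample under near-uniform intensity can be sketched as follows; the sign conventions, regularization constant, and function name are illustrative assumptions.

```python
import numpy as np

def tie_phase(i_under, i_over, dz, wavelength, pixel=1.0, eps=1e-9):
    # Solves lap(phi) = -(k / I0) * dI/dz via an FFT inverse Laplacian.
    k = 2.0 * np.pi / wavelength
    i0 = (i_under + i_over).mean() / 2.0          # mean (near-uniform) intensity
    didz = (i_over - i_under) / (2.0 * dz)        # central difference along z
    ny, nx = didz.shape
    fy = np.fft.fftfreq(ny, d=pixel)[:, None]
    fx = np.fft.fftfreq(nx, d=pixel)[None, :]
    lap = -4.0 * np.pi ** 2 * (fx ** 2 + fy ** 2)  # Fourier symbol of Laplacian
    phi_hat = np.fft.fft2(-(k / i0) * didz) / (lap - eps)
    phi_hat[0, 0] = 0.0                            # fix the free phase offset
    return np.real(np.fft.ifft2(phi_hat))
```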
Drusen Characterization with Multimodal Imaging
Spaide, Richard F.; Curcio, Christine A.
2010-01-01
Summary: Multimodal imaging findings and histological demonstration of soft drusen, cuticular drusen, and subretinal drusenoid deposits provided information used to develop a model explaining their imaging characteristics. Purpose: To characterize the known appearance of cuticular drusen, subretinal drusenoid deposits (reticular pseudodrusen), and soft drusen as revealed by multimodal fundus imaging; to create an explanatory model that accounts for these observations. Methods: Reported color, fluorescein angiographic, autofluorescence, and spectral domain optical coherence tomography (SD-OCT) images of patients with cuticular drusen, soft drusen, and subretinal drusenoid deposits were reviewed, as were actual images from affected eyes. Representative histological sections were examined. The geometry, location, and imaging characteristics of these lesions were evaluated. A hypothesis based on the Beer-Lambert law of light absorption was generated to fit these observations. Results: Cuticular drusen appear as numerous uniform round yellow-white punctate accumulations under the retinal pigment epithelium (RPE). Soft drusen are larger yellow-white dome-shaped mounds of deposit under the RPE. Subretinal drusenoid deposits are polymorphous light-grey interconnected accumulations above the RPE. Based on the model, both cuticular and soft drusen appear yellow due to the removal of shorter-wavelength light by a double pass through the RPE. Subretinal drusenoid deposits, which are located on the RPE, are not subjected to short-wavelength attenuation and therefore are more prominent when viewed with blue light. The location and morphology of extracellular material in relationship to the RPE, and associated changes to RPE morphology and pigmentation, appeared to be the primary determinants of druse appearance in different imaging modalities. Conclusion: Although cuticular drusen, subretinal drusenoid deposits, and soft drusen are composed of common components, they are distinguishable by multimodal imaging due to differences in location, morphology, and optical filtering effects by drusenoid material and the RPE. PMID:20924263
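The Beer-Lambert reasoning can be made concrete with a toy calculation: sub-RPE drusen are seen through the RPE twice, whereas deposits above it see no RPE filtering. The single-pass optical depths below are purely illustrative values, not measurements.

```python
import numpy as np

od = {"blue": 0.8, "green": 0.3, "red": 0.1}  # assumed single-pass optical depths
for color, tau in od.items():
    above_rpe = 1.0                 # subretinal deposit: no RPE filtering
    sub_rpe = np.exp(-2 * tau)      # druse under the RPE: double-pass filtering
    print(f"{color:5s}  above RPE: {above_rpe:.2f}  sub-RPE: {sub_rpe:.2f}")
# Blue light is attenuated most by the double pass, so sub-RPE drusen lose
# blue and appear yellow, while deposits above the RPE remain grey.
```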
NASA Astrophysics Data System (ADS)
Zajicek, J.; Burian, M.; Soukup, P.; Novak, V.; Macko, M.; Jakubek, J.
2017-01-01
Multimodal medical imaging based on magnetic resonance is mainly combined with a scintigraphic method such as PET or SPECT. These methods provide functional information, whereas magnetic resonance imaging provides high-spatial-resolution anatomical information or complementary functional information. Fusion of imaging modalities allows researchers to obtain complementary information in a single measurement. The combination of MRI with SPECT is still relatively new and challenging in many ways. The main complication of using SPECT in MRI systems is the presence of a high magnetic field, so (ferro)magnetic materials have to be eliminated. Furthermore, the application of radiofrequency fields within the MR gantry does not allow the use of conductive structures such as the common heavy-metal collimators. This work presents the design and construction of an experimental MRI-SPECT insert system and its initial tests. This unique insert system consists of an MR-compatible SPECT setup with CdTe pixelated Timepix sensors, tungsten collimators, and a radiofrequency coil. Measurements were performed on a gelatine and tissue phantom with an embedded radioisotope source (57Co, 122 keV γ rays) inside the RF coil, using the Bruker BioSpec 47/20 (4.7 T) MR animal scanner. The project was performed in the framework of the Medipix Collaboration.
Segment fusion of ToF-SIMS images.
Milillo, Tammy M; Miller, Mary E; Fischione, Remo; Montes, Angelina; Gardella, Joseph A
2016-06-08
The imaging capabilities of time-of-flight secondary ion mass spectrometry (ToF-SIMS) have not been used to their full potential in the analysis of polymer and biological samples. Imaging has been limited by the size of the dataset and the chemical complexity of the sample being imaged. Pixel- and segment-based image fusion algorithms commonly used in remote sensing, ecology, geography, and geology provide a way to improve the spatial resolution and classification of biological images. In this study, a sample of Arabidopsis thaliana was treated with silver nanoparticles and imaged with ToF-SIMS. These images provide insight into the uptake mechanism of the silver nanoparticles into the plant tissue, giving new understanding of the mechanism of uptake of heavy metals in the environment. The Munechika algorithm was programmed in-house and applied to achieve pixel-based fusion, which improved the spatial resolution of the image obtained. Multispectral and quadtree segment- or region-based fusion algorithms were performed using the eCognition software, a commercially available remote sensing suite, and used to classify the images. The Munechika fusion improved the spatial resolution for the images containing silver nanoparticles, while the segment fusion allowed classification and fusion based on the tissue types in the sample, suggesting potential pathways for the uptake of the silver nanoparticles.
Aoki, Yasuko; Endo, Hidenori; Niizuma, Kuniyasu; Inoue, Takashi; Shimizu, Hiroaki; Tominaga, Teiji
2013-12-01
We report two cases of internal carotid artery (ICA) aneurysms in which fusion imaging effectively indicated anatomical variations of the anterior choroidal artery (AchoA). Fusion images were obtained using fusion application software (Integrated Registration, Advantage Workstation VS4, GE Healthcare). When an artery passed through the choroidal fissure, it was diagnosed as the AchoA. Case 1 had an aneurysm of the left ICA. Left internal carotid angiography (ICAG) showed that an artery arising from the aneurysmal neck supplied the medial occipital lobe. The fusion image showed that this artery had a branch passing through the choroidal fissure, which was diagnosed as a hyperplastic AchoA. Case 2 had an aneurysm of the supraclinoid segment of the right ICA. Neither the AchoA nor the posterior communicating artery (PcomA) was detected by right ICAG. A fusion image obtained from 3D vertebral angiography (VAG) and MRI showed that the right AchoA arose from the right PcomA, and a fusion image obtained from the right ICAG and the left VAG suggested that the aneurysm was located on the ICA where the PcomA had regressed. Fusion imaging is an effective tool for assessing anatomical variations of the AchoA; the present method is simple and quick, allowing fusion images to be used in a real-time clinical setting.
NASA Astrophysics Data System (ADS)
Zdora, M.-C.; Thibault, P.; Deyhle, H.; Vila-Comamala, J.; Rau, C.; Zanette, I.
2018-05-01
X-ray phase-contrast and dark-field imaging provides valuable, complementary information about the specimen under study. Among the multimodal X-ray imaging methods, X-ray grating interferometry and speckle-based imaging have drawn particular attention, which, however, in their common implementations incur certain limitations that can restrict their range of applications. Recently, the unified modulated pattern analysis (UMPA) approach was proposed to overcome these limitations and combine grating- and speckle-based imaging in a single approach. Here, we demonstrate the multimodal imaging capabilities of UMPA and highlight its tunable character regarding spatial resolution, signal sensitivity and scan time by using different reconstruction parameters.
Multimodality imaging of ovarian cystic lesions: Review with an imaging based algorithmic approach
Wasnik, Ashish P; Menias, Christine O; Platt, Joel F; Lalchandani, Usha R; Bedi, Deepak G; Elsayes, Khaled M
2013-01-01
Ovarian cystic masses include a spectrum of benign, borderline and high grade malignant neoplasms. Imaging plays a crucial role in characterization and pretreatment planning of incidentally detected or suspected adnexal masses, as diagnosis of ovarian malignancy at an early stage is correlated with a better prognosis. Knowledge of differential diagnosis, imaging features, management trends and an algorithmic approach of such lesions is important for optimal clinical management. This article illustrates a multi-modality approach in the diagnosis of a spectrum of ovarian cystic masses and also proposes an algorithmic approach for the diagnosis of these lesions. PMID:23671748
Rapid multi-modality preregistration based on SIFT descriptor.
Chen, Jian; Tian, Jie
2006-01-01
This paper describes a scale invariant feature transform (SIFT) method for rapid preregistration of medical images. The technique originates from Lowe's method, wherein preregistration is achieved by matching corresponding keypoints between two images. Applying the SIFT preregistration method before refined registration reduces the overall computational cost of the registration. SIFT features are highly distinctive, invariant to image scaling and rotation, and partially invariant to changes in illumination and contrast, making the method robust and repeatable for coarsely matching two images. We also altered the descriptor so that our method can handle multimodality preregistration.
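A minimal OpenCV sketch of such keypoint-based preregistration (SIFT matching with Lowe's ratio test, followed by a RANSAC similarity fit) is shown below. The ratio threshold and the choice of a partial-affine model are illustrative, and the paper's descriptor alteration for multimodality is not reproduced here.

```python
import cv2
import numpy as np

def sift_preregister(fixed, moving):
    # Inputs: 8-bit grayscale images. Returns 'moving' warped onto 'fixed'.
    sift = cv2.SIFT_create()
    kp_f, des_f = sift.detectAndCompute(fixed, None)
    kp_m, des_m = sift.detectAndCompute(moving, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_m, des_f, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
    src = np.float32([kp_m[m.queryIdx].pt for m in good])
    dst = np.float32([kp_f[m.trainIdx].pt for m in good])
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return cv2.warpAffine(moving, M, fixed.shape[1::-1])
```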
A systematic, multimodality approach to emergency elbow imaging.
Singer, Adam D; Hanna, Tarek; Jose, Jean; Datir, Abhijit
2016-01-01
The elbow is a complex synovial hinge joint that is frequently involved in both athletic and nonathletic injuries. A thorough understanding of the normal anatomy and various injury patterns is essential when utilizing diagnostic imaging to identify damaged structures and to assist in surgical planning. In this review, the elbow anatomy will be scrutinized in a systematic approach. This will be followed by a comprehensive presentation of elbow injuries that are commonly seen in the emergency department accompanied by multimodality imaging findings. A short discussion regarding pitfalls in elbow imaging is also included. Copyright © 2015 Elsevier Inc. All rights reserved.
Multimode-Optical-Fiber Imaging Probe
NASA Technical Reports Server (NTRS)
Jackson, Deborah
2000-01-01
Currently, endoscopic surgery uses single-mode fiber-bundles to obtain in vivo image information inside orifices of the body. This limits their use to the larger natural bodily orifices and to surgical procedures where there is plenty of room for manipulation. The knee joint, for example, can be easily viewed with a fiber optic viewer, but joints in the finger cannot. However, there are a host of smaller orifices where fiber endoscopy would play an important role if a cost-effective fiber probe were developed with small enough dimensions (< 250 microns). Examples of beneficiaries of micro-endoscopes are the treatment of the Eustachian tube of the middle ear, the breast ducts, tear ducts, coronary arteries, fallopian tubes, as well as the treatment of salivary duct parotid disease, and the neuroendoscopy of the ventricles and spinal canal. To solve this problem, this work describes an approach for recovering images from tightly confined spaces using multimode fibers and analytically demonstrates that the concept is sound. The proof of concept draws upon earlier works that concentrated on image recovery after two-way transmission through a multimode fiber as well as work that demonstrated the recovery of images after one-way transmission through a multimode fiber. Both relied on generating a phase-conjugated wavefront which was predistorted with the characteristics of the fiber. The described approach also relies on generating a phase-conjugated wavefront, but utilizes two fibers to capture the image at some intermediate point (accessible by the fibers, but which is otherwise visually inaccessible).
Multimode-Optical-Fiber Imaging Probe
NASA Technical Reports Server (NTRS)
Jackson, Deborah
1999-01-01
Currently, endoscopic surgery uses single-mode fiber-bundles to obtain in vivo image information inside the orifices of the body. This limits their use to the larger natural orifices and to surgical procedures where there is plenty of room for manipulation. The knee joint, for example, can be easily viewed with a fiber optic viewer, but joints in the finger cannot. However, there are a host of smaller orifices where fiber endoscopy would play an important role if a cost-effective fiber probe were developed with small enough dimensions (less than or equal to 250 microns). Examples of beneficiaries of micro-endoscopes are the treatment of the Eustachian tube of the middle ear, the breast ducts, tear ducts, coronary arteries, fallopian tubes, as well as the treatment of salivary duct parotid disease, and the neuroendoscopy of the ventricles and spinal canal. This work describes an approach for recovering images from tightly confined spaces using multimode fibers. The concept draws upon earlier works that concentrated on image recovery after two-way transmission through a multimode fiber as well as work that demonstrated the recovery of images after one-way transmission through a multimode fiber. Both relied on generating a phase-conjugated wavefront, which was predistorted with the characteristics of the fiber. The approach described here also relies on generating a phase-conjugated wavefront, but utilizes two fibers to capture the image at some intermediate point (accessible by the fibers, but which is otherwise visually inaccessible).
NASA Astrophysics Data System (ADS)
Goh, Sheng-Yang M.; Irimia, Andrei; Vespa, Paul M.; Van Horn, John D.
2016-03-01
In traumatic brain injury (TBI) and intracerebral hemorrhage (ICH), the heterogeneity of lesion sizes and types necessitates a variety of imaging modalities to acquire a comprehensive perspective on injury extent. Although it is advantageous to combine imaging modalities and to leverage their complementary benefits, there are difficulties in integrating information across imaging types. Thus, it is important that efforts be dedicated to the creation and sustained refinement of resources for multimodal data integration. Here, we propose a novel approach to the integration of neuroimaging data acquired from human patients with TBI/ICH using various modalities; we also demonstrate the integrated use of multimodal magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI) data for TBI analysis based on both visual observations and quantitative metrics. 3D models of healthy-appearing tissues and TBI-related pathology are generated, both of which are derived from multimodal imaging data. MRI volumes acquired using FLAIR, SWI, and T2 GRE are used to segment pathology. Healthy tissues are segmented using user-supervised tools, and results are visualized using a novel graphical approach called a 'connectogram', where brain connectivity information is depicted within a circle of radially aligned elements. Inter-region connectivity and its strength are represented by links of variable opacities drawn between regions, where opacity reflects the percentage longitudinal change in brain connectivity density. Our method for integrating, analyzing, and visualizing structural brain changes due to TBI and ICH can promote knowledge extraction and enhance the understanding of mechanisms underlying recovery.
Bhateja, Vikrant; Moin, Aisha; Srivastava, Anuja; Bao, Le Nguyen; Lay-Ekuakille, Aimé; Le, Dac-Nhuong
2016-07-01
Computer-based diagnosis of Alzheimer's disease can be performed through analysis of the functional and structural changes in the brain. Multispectral image fusion combines complementary information while discarding surplus information to achieve a single image that encloses both spatial and spectral details. This paper presents a Non-Subsampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer's disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using the NSCT, followed by dimensionality reduction using a modified Principal Component Analysis algorithm on the low-frequency coefficients. Further, the high-frequency coefficients are enhanced using a non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: phase congruency is applied to the low-frequency coefficients, and a combination of directive contrast and normalized Shannon entropy is applied to the high-frequency coefficients. The superiority of the fusion response is demonstrated by comparisons with other state-of-the-art fusion approaches (in terms of various fusion metrics).
MMX-I: A data-processing software for multi-modal X-ray imaging and tomography
NASA Astrophysics Data System (ADS)
Bergamaschi, A.; Medjoubi, K.; Messaoudi, C.; Marco, S.; Somogyi, A.
2017-06-01
Scanning hard X-ray imaging allows simultaneous acquisition of multimodal information, including X-ray fluorescence, absorption, phase, and dark-field contrasts, providing structural and chemical details of the samples. Combining these scanning techniques with the infrastructure developed for fast data acquisition at Synchrotron Soleil makes it possible to perform multimodal imaging and tomography during routine user experiments at the Nanoscopium beamline. A main challenge of such imaging techniques is the online processing and analysis of the very large (several hundred gigabyte) multimodal datasets generated. This is especially important for the wide user community foreseen at the user-oriented Nanoscopium beamline (e.g., from the fields of biology, life sciences, geology, and geobiology), many of whom have no experience in such data handling. MMX-I is a new multi-platform open-source freeware for the processing and reconstruction of scanning multi-technique X-ray imaging and tomographic datasets. The MMX-I project aims to offer both expert users and beginners the possibility of processing and analysing raw data, either on-site or off-site. Therefore, we have developed a multi-platform (Mac, Windows, and Linux 64-bit) data processing tool that is easy to install, comprehensive, intuitive, extendable, and user-friendly. MMX-I is now routinely used by the Nanoscopium user community and has demonstrated its performance in treating big data.
Image-guided thoracic surgery in the hybrid operation room.
Ujiie, Hideki; Effat, Andrew; Yasufuku, Kazuhiro
2017-01-01
There has been an increase in the use of image-guided technology to facilitate minimally invasive therapy. The next generation of minimally invasive therapy is focused on the advancement and translation of novel image-guided technologies in therapeutic interventions, including surgery, interventional pulmonology, radiation therapy, and interventional laser therapy. To establish the efficacy of different minimally invasive therapies, we have developed a hybrid operating room, known as the guided therapeutics operating room (GTx OR), at the Toronto General Hospital. The GTx OR is equipped with multi-modality image-guidance systems, featuring a dual-source dual-energy computed tomography (CT) scanner, robotic cone-beam CT (CBCT)/fluoroscopy, a high-performance endobronchial ultrasound system, an endoscopic surgery system, a near-infrared (NIR) fluorescence imaging system, and navigation tracking systems. These novel multimodality image-guidance systems allow physicians to image patients quickly and accurately while they are on the operating table. This yields improved outcomes, since physicians are able to use image guidance during their procedures and carry out innovative multi-modality therapeutics. Multiple preclinical translational studies of innovative minimally invasive technology are being conducted in our guided therapeutics laboratory (GTx Lab). The GTx Lab is equipped with technology and multimodality image-guidance systems similar to those of the GTx OR and acts as an appropriate platform for translating research into human clinical trials. Through the GTx Lab, we are able to perform basic research, such as the development of image-guided technologies, preclinical model testing, and preclinical imaging, and then translate that research into the GTx OR. This OR allows the utilization of new technologies in cancer therapy, including molecular imaging and other innovative imaging modalities, and therefore enables a better quality of life for patients both during and after the procedure. In this article, we describe the capabilities of the GTx systems and discuss the first-in-human technologies used and evaluated in the GTx OR.
WE-H-206-02: Recent Advances in Multi-Modality Molecular Imaging of Small Animals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsui, B.
Lihong V. Wang: Photoacoustic tomography (PAT), combining non-ionizing optical and ultrasonic waves via the photoacoustic effect, provides in vivo multiscale functional, metabolic, and molecular imaging. Broad applications include imaging of the breast, brain, skin, esophagus, colon, vascular system, and lymphatic system in humans or animals. Light offers rich contrast but does not penetrate biological tissue in straight paths as x-rays do. Consequently, high-resolution pure optical imaging (e.g., confocal microscopy, two-photon microscopy, and optical coherence tomography) is limited to penetration within the optical diffusion limit (∼1 mm in the skin). Ultrasonic imaging, on the contrary, provides fine spatial resolution but suffers from both poor contrast in early-stage tumors and strong speckle artifacts. In PAT, pulsed laser light penetrates tissue and generates a small but rapid temperature rise, which induces emission of ultrasonic waves due to thermoelastic expansion. The ultrasonic waves, orders of magnitude less scattered than optical waves, are then detected to form high-resolution images of optical absorption at depths up to 7 cm, conquering the optical diffusion limit. PAT is the only modality capable of imaging across the length scales of organelles, cells, tissues, and organs (up to whole-body small animals) with consistent contrast. This rapidly growing technology promises to enable multiscale biological research and accelerate translation from microscopic laboratory discoveries to macroscopic clinical practice. PAT may also hold the key to label-free early detection of cancer by in vivo quantification of hypermetabolism, the quintessential hallmark of malignancy. Learning Objectives: To understand the contrast mechanism of PAT. To understand the multiscale applications of PAT. Benjamin M. W. Tsui: Multi-modality molecular imaging instrumentation and techniques have been major developments in small animal imaging that have contributed significantly to biomedical research during the past decade. The initial development was an extension of clinical PET/CT and SPECT/CT from humans to small animals, combining the unique functional information obtained from PET and SPECT with the anatomical information provided by CT in registered multi-modality images. The requirements for imaging a mouse, whose size is an order of magnitude smaller than that of a human, have spurred advances in new radiation detector technologies, novel imaging system designs, and special image reconstruction and processing techniques. Examples are new detector materials and designs with high intrinsic resolution, multi-pinhole (MPH) collimator designs with much improved resolution and detection efficiency compared to conventional collimator designs in SPECT, 3D high-resolution artifact-free MPH and sparse-view image reconstruction techniques, and iterative image reconstruction methods with system response modeling for resolution recovery and image noise reduction, yielding much improved image quality. The spatial resolution of PET and SPECT has improved from ∼6–12 mm to ∼1 mm a few years ago to sub-millimeter today. A recent commercial small animal SPECT system has achieved a resolution of ∼0.25 mm, which surpasses that of a state-of-the-art PET system whose resolution is limited by the positron range. More recently, multimodality SA PET/MRI and SPECT/MRI systems have been developed in research laboratories.
Also, multi-modality SA imaging systems that include other imaging modalities, such as optical and ultrasound, are being actively pursued. In this presentation, we will provide a review of the development, recent advances, and future outlook of multi-modality molecular imaging of small animals. Learning Objectives: To learn about the two major multi-modality molecular imaging techniques for small animals. To learn about the spatial resolution achievable by today's small animal molecular imaging systems. To learn about the new multi-modality imaging instrumentation and techniques that are being developed. Sang Hyun Cho: X-ray fluorescence (XRF) imaging, such as x-ray fluorescence computed tomography (XFCT), offers unique capabilities for accurate identification and quantification of metals within imaged objects. As a result, it has emerged as a promising quantitative imaging modality in recent years, especially in conjunction with metal-based imaging probes. This talk will familiarize the audience with the basic principles of XRF/XFCT imaging. It will also cover the latest development of benchtop XFCT technology. Additionally, the use of metallic nanoparticles, such as gold nanoparticles, in conjunction with benchtop XFCT will be discussed within the context of preclinical multimodal multiplexed molecular imaging. Learning Objectives: To learn the basic principles of XRF/XFCT imaging. To learn the latest advances in benchtop XFCT development for preclinical imaging. Disclosures: Funding support received from NIH and DOD; funding support received from GE Healthcare; funding support received from Siemens AX; patent royalties received from GE Healthcare. L. Wang, funding support: NIH; COI: Microphotoacoustics. S. Cho, funding support: NIH/NCI grant R01CA155446 and DOD/PCRP grant W81XWH-12-1-0198.
Characterizing virus-induced gene silencing at the cellular level with in situ multimodal imaging
Burkhow, Sadie J.; Stephens, Nicole M.; Mei, Yu; ...
2018-05-25
Reverse genetic strategies, such as virus-induced gene silencing, are powerful techniques to study gene function. Currently, there are few tools to study the spatial dependence of the consequences of gene silencing at the cellular level. Here, we report the use of multimodal Raman and mass spectrometry imaging to study the cellular-level biochemical changes that occur from silencing the phytoene desaturase (pds) gene using a Foxtail mosaic virus (FoMV) vector in maize leaves. The multimodal imaging method allows the localized carotenoid distribution to be measured and reveals differences lost in the spatial average when analyzing a carotenoid extraction of the whole leaf. The Raman and mass spectrometry signals are complementary in nature: silencing pds reduces the downstream carotenoid Raman signal and increases the phytoene mass spectrometry signal.
Calibration for single multi-mode fiber digital scanning microscopy imaging system
NASA Astrophysics Data System (ADS)
Yin, Zhe; Liu, Guodong; Liu, Bingguo; Gan, Yu; Zhuang, Zhitao; Chen, Fengdong
2015-11-01
The single multimode fiber (MMF) digital scanning imaging system represents a development trend in modern endoscopy. We concentrate on the calibration method for this imaging system. The calibration method comprises two processes: forming scanning focused spots and calibrating the couple factors, which vary with position. An adaptive parallel coordinate (APC) algorithm is adopted to form the focused spots at the MMF output. Compared with other algorithms, APC has several merits: rapid speed, a small amount of calculation, and no iterations. The ratio of the optical power captured by the MMF to the intensity of the focused spot is called the couple factor. We set up a calibration experimental system to form the scanning focused spots and calculate the couple factors for different object positions. The experimental results show that the couple factor is higher at the center than at the edge.
Enhancing hyperspectral spatial resolution using multispectral image fusion: A wavelet approach
NASA Astrophysics Data System (ADS)
Jazaeri, Amin
High spectral and spatial resolution images have a significant impact in remote sensing applications. Because the spatial and spectral resolutions of spaceborne sensors are fixed by design and cannot be increased after launch, techniques such as image fusion must be applied to achieve such goals. This dissertation introduces the concept of wavelet fusion between hyperspectral and multispectral sensors in order to enhance the spectral and spatial resolution of a hyperspectral image. To test the robustness of this concept, images from Hyperion (hyperspectral sensor) and Advanced Land Imager (multispectral sensor) were first co-registered and then fused using different wavelet algorithms. A regression-based fusion algorithm was also implemented for comparison purposes. The results show that the fused images using a combined bi-linear wavelet-regression algorithm have less error than other methods when compared to the ground truth. In addition, a combined regression-wavelet algorithm shows more immunity to misalignment of the pixels due to the lack of proper registration. The quantitative measures of average mean square error show that the performance of wavelet-based methods degrades when the spatial resolution of hyperspectral images becomes eight times less than that of the corresponding multispectral image. Regardless of the fusion method utilized, the main challenge in image fusion is image registration, which is also a very time-intensive process. Because the combined regression-wavelet technique is computationally expensive, a hybrid technique based on regression and wavelet methods was also implemented to decrease computational overhead. However, the gain in faster computation was offset by the introduction of more error in the outcome. The secondary objective of this dissertation is to examine the feasibility and sensor requirements for image fusion for future NASA missions in order to be able to perform onboard image fusion. In this process, the main challenge of image registration was resolved by registering the input images using transformation matrices of previously acquired data. The composite image resulting from the fusion process remarkably matched the ground truth, indicating the possibility of real-time onboard fusion processing.
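As a rough illustration of the detail-injection idea underlying this kind of wavelet fusion, the following is a minimal Python sketch, assuming two co-registered, equal-size bands with the hyperspectral band already resampled to the multispectral grid; it is an illustrative reconstruction, not the dissertation's exact algorithm, and the function and variable names are assumptions.

```python
# Minimal wavelet detail-injection sketch (illustrative, not the
# dissertation's algorithm). Both inputs are equal-size 2-D arrays.
import numpy as np
import pywt

def wavelet_fuse(hyper_band, multi_band, wavelet="bior1.1", levels=2):
    """Inject high-frequency detail from multi_band into hyper_band."""
    c_hyper = pywt.wavedec2(hyper_band, wavelet, level=levels)
    c_multi = pywt.wavedec2(multi_band, wavelet, level=levels)
    # Keep the hyperspectral approximation (spectral content) and take
    # the detail sub-bands from the spatially sharper image.
    fused = [c_hyper[0]] + list(c_multi[1:])
    return pywt.waverec2(fused, wavelet)

# Toy usage with random arrays standing in for co-registered bands.
hyper = np.random.rand(128, 128)
multi = np.random.rand(128, 128)
sharpened = wavelet_fuse(hyper, multi)
```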
NASA Astrophysics Data System (ADS)
Cheng, Boyang; Jin, Longxu; Li, Guoning
2018-06-01
Fusion of visible light and infrared images has been a significant subject in imaging science. As a new contribution to this field, a novel fusion framework for visible light and infrared images based on adaptive dual-channel unit-linking pulse coupled neural networks with singular value decomposition (ADS-PCNN) in the non-subsampled shearlet transform (NSST) domain is presented in this paper. First, the source images are decomposed into multi-direction and multi-scale sub-images by NSST. Furthermore, an improved novel sum modified-Laplacian (INSML) of the low-pass sub-image and an improved average gradient (IAVG) of the high-pass sub-images are input to stimulate the ADS-PCNN, respectively. To address the large spectral difference between infrared and visible light and the occurrence of black artifacts in fused images, a local structure information operator (LSI), derived from local-area singular value decomposition of each source image, is used as the adaptive linking strength, which enhances fusion accuracy. Compared with PCNN models in other studies, the proposed method simplifies certain peripheral parameters, and a time matrix is utilized to decide the iteration number adaptively. A series of images from diverse scenes is used for fusion experiments, and the fusion results are evaluated subjectively and objectively. The results of the subjective and objective evaluation show that our algorithm exhibits superior fusion performance and is more effective than existing typical fusion techniques.
Satellite image fusion based on principal component analysis and high-pass filtering.
Metwalli, Mohamed R; Nasr, Ayman H; Allah, Osama S Farag; El-Rabaie, S; Abd El-Samie, Fathi E
2010-06-01
This paper presents an integrated method for the fusion of satellite images. Several commercial earth observation satellites carry dual-resolution sensors, which provide high spatial resolution or simply high-resolution (HR) panchromatic (pan) images and low-resolution (LR) multi-spectral (MS) images. Image fusion methods are therefore required to integrate a high-spectral-resolution MS image with a high-spatial-resolution pan image to produce a pan-sharpened image with high spectral and spatial resolutions. Some image fusion methods such as the intensity, hue, and saturation (IHS) method, the principal component analysis (PCA) method, and the Brovey transform (BT) method provide HR MS images, but with low spectral quality. Another family of image fusion methods, such as the high-pass-filtering (HPF) method, operates on the basis of the injection of high frequency components from the HR pan image into the MS image. This family of methods provides less spectral distortion. In this paper, we propose the integration of the PCA method and the HPF method to provide a pan-sharpened MS image with superior spatial resolution and less spectral distortion. The experimental results show that the proposed fusion method retains the spectral characteristics of the MS image and, at the same time, improves the spatial resolution of the pan-sharpened image.
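For readers unfamiliar with the two building blocks combined here, the sketch below illustrates PCA pan-sharpening (the first principal component is replaced by the statistically matched pan image) and high-pass-filter detail injection; array shapes, parameter values, and function names are assumptions for illustration, not the authors' implementation.

```python
# Illustrative PCA substitution and HPF injection for pan-sharpening.
# ms: (H, W, B) multispectral image upsampled to the pan grid;
# pan: (H, W) panchromatic image. Both float arrays.
import numpy as np
from scipy.ndimage import uniform_filter

def pca_sharpen(ms, pan):
    H, W, B = ms.shape
    X = ms.reshape(-1, B).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    vecs = vecs[:, ::-1]                     # eigenvectors, descending variance
    pcs = Xc @ vecs
    p, pc1 = pan.ravel().astype(float), pcs[:, 0]
    # Match pan statistics to PC1, then substitute it.
    pcs[:, 0] = (p - p.mean()) / p.std() * pc1.std() + pc1.mean()
    return (pcs @ vecs.T + mean).reshape(H, W, B)

def hpf_inject(ms, pan, size=5, gain=0.5):
    detail = pan.astype(float) - uniform_filter(pan.astype(float), size)
    return ms + gain * detail[..., None]     # add detail to every band
```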
Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets.
Scharfe, Michael; Pielot, Rainer; Schreiber, Falk
2010-01-11
Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. We evaluate the CBE-driven PlayStation 3 as a high-performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. The results demonstrate that the CBE processor in a PlayStation 3 accelerates computationally intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low-cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics.
Li, Shihong; Goins, Beth; Zhang, Lujun; Bao, Ande
2012-01-01
Liposomes are effective lipid nanoparticle drug delivery systems, which can also be functionalized with non-invasive multimodality imaging agents, with each modality providing distinct information and having synergistic advantages in diagnosis, monitoring of disease treatment, and evaluation of liposomal drug pharmacokinetics. We designed and constructed a multifunctional theranostic liposomal drug delivery system, which integrated multimodality magnetic resonance (MR), near-infrared (NIR) fluorescent, and nuclear imaging of liposomal drug delivery with therapy monitoring and prediction. The pre-manufactured liposomes were composed of DSPC/cholesterol/Gd-DOTA-DSPE/DOTA-DSPE at a molar ratio of 39:35:25:1, with an ammonium sulfate/pH gradient. A lipidized NIR fluorescent tracer, IRDye-DSPE, was effectively post-inserted into the pre-manufactured liposomes. Doxorubicin could be effectively post-loaded into the multifunctional liposomes. The multifunctional doxorubicin liposomes could also be stably radiolabeled with 99mTc or 64Cu for single photon emission computed tomography (SPECT) or positron emission tomography (PET) imaging, respectively. MR images displayed the high-resolution micro-intratumoral distribution of the liposomes in squamous cell carcinoma of head and neck (SCCHN) tumor xenografts in nude rats after intratumoral injection. NIR fluorescent, SPECT, and PET images also clearly showed either the high intratumoral retention or the distribution of the multifunctional liposomes. This multifunctional drug-carrying liposome system is promising for disease theranostics, allowing non-invasive multimodality NIR fluorescent, MR, SPECT, and PET imaging of its in vivo behavior and capitalizing on the inherent advantages of each modality. PMID:22577859
Remote sensing fusion based on guided image filtering
NASA Astrophysics Data System (ADS)
Zhao, Wenfei; Dai, Qinling; Wang, Leiguang
2015-12-01
In this paper, we propose a novel remote sensing fusion approach based on guided image filtering. The fused images preserve the spectral features of the original multispectral (MS) images well while enhancing spatial detail information. Four quality assessment indexes are also introduced to evaluate the fusion effect in comparison with other fusion methods. Experiments were carried out on Gaofen-2, QuickBird, WorldView-2, and Landsat-8 images, and the results show excellent performance of the proposed method.
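For context, a guided filter can be written in a few lines with box filters; the sketch below shows the filter itself (after He et al.) and one naive way to use it for base/detail fusion. This is a simplified assumption of how such a method might look, not the authors' algorithm, and the function names and parameters are illustrative.

```python
# Minimal guided image filter plus a naive base/detail fusion step.
# All inputs are 2-D float arrays of the same size.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    guide, src = guide.astype(float), src.astype(float)
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    cov_Ip = uniform_filter(guide * src, size) - mean_I * mean_p
    var_I = uniform_filter(guide * guide, size) - mean_I ** 2
    a = cov_Ip / (var_I + eps)                # local linear coefficients
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def fuse_band(pan, ms_band):
    base = guided_filter(pan, ms_band)        # spectral base layer
    detail = pan - guided_filter(pan, pan)    # spatial detail layer
    return base + detail
```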
Wu, Guorong; Kim, Minjeong; Sanroma, Gerard; Wang, Qian; Munsell, Brent C.; Shen, Dinggang
2014-01-01
Multi-atlas patch-based label fusion methods have been successfully used to improve segmentation accuracy in many important medical image analysis applications. In general, to achieve label fusion a single target image is first registered to several atlas images; after registration, a label is assigned to each target point in the target image by determining the similarity between the underlying target image patch (centered at the target point) and the aligned image patch in each atlas image. To achieve the highest level of accuracy during the label fusion process, it is critical that the chosen patch similarity measurement accurately capture the tissue/shape appearance of the anatomical structure. One major limitation of existing state-of-the-art label fusion methods is that they often apply a fixed-size image patch throughout the entire label fusion procedure. Doing so may severely affect the fidelity of the patch similarity measurement, which in turn may not adequately capture complex tissue appearance patterns expressed by the anatomical structure. To address this limitation, we advance the state-of-the-art with three new label fusion contributions. First, each image patch is now characterized by a multi-scale feature representation that encodes both local and semi-local image information. Doing so increases the accuracy of the patch-based similarity measurement. Second, to limit the possibility of the patch-based similarity measurement being wrongly guided by the presence of multiple anatomical structures in the same image patch, each atlas image patch is further partitioned into a set of label-specific partial image patches according to the existing labels. Since image information has now been semantically divided into different patterns, these new label-specific atlas patches make the label fusion process more specific and flexible. Lastly, in order to correct target points that are mislabeled during label fusion, a hierarchical approach is used to improve the label fusion results. In particular, a coarse-to-fine iterative label fusion approach is used that gradually reduces the patch size. To evaluate the accuracy of our label fusion approach, the proposed method was used to segment the hippocampus in the ADNI dataset and 7.0 tesla MR images, sub-cortical regions in the LONI LPBA40 dataset, mid-brain regions in the SATA dataset from the MICCAI 2013 segmentation challenge, and a set of key internal gray matter structures in the IXI dataset. In all experiments, the segmentation results of the proposed hierarchical label fusion method with multi-scale feature representations and label-specific atlas patches are more accurate than several well-known state-of-the-art label fusion methods. PMID:25463474
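The classical starting point that this paper generalizes is non-local patch-based weighted voting; a toy single-voxel version is sketched below (Gaussian similarity weights, fixed patch size), with all names and the bandwidth parameter being illustrative assumptions rather than the paper's multi-scale, label-specific method.

```python
# Toy patch-based label fusion at one target point: weighted voting
# with Gaussian patch-similarity weights.
import numpy as np

def fuse_label_at_point(target_patch, atlas_patches, atlas_labels, h=0.1):
    """target_patch: (P,) flattened patch around the target point;
    atlas_patches: (N, P) aligned atlas patches; atlas_labels: (N,)."""
    d2 = ((atlas_patches - target_patch) ** 2).mean(axis=1)
    weights = np.exp(-d2 / h ** 2)            # similarity weights
    votes = {}
    for label, w in zip(atlas_labels, weights):
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)          # highest weighted vote
```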
LINKS: learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images.
Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H; Lin, Weili; Shen, Dinggang
2015-03-01
Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effects, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matter of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8 months of age, when white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. Most previous studies used a multi-atlas label fusion strategy, which has the limitation of treating the different available image modalities equally and is often computationally expensive. To cope with these limitations, in this paper we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images for tissue segmentation. Here, the multi-source images initially include only the multi-modality (T1, T2, and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge, where the proposed method was ranked top among all competing methods. Moreover, to alleviate possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach for further improving the segmentation accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.
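A stripped-down reading of the LINKS idea, i.e., classify voxels with a random forest and feed the resulting probability maps back as additional features, might look like the sketch below; feature extraction is reduced to raw intensities for brevity, and all array names are assumptions.

```python
# Hedged sketch of iterative random-forest tissue segmentation from
# multi-source features (raw intensities only, for brevity).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def iterative_segmentation(t1, t2, fa, labels, train_mask, n_iter=3):
    """t1, t2, fa: co-registered 3-D arrays; labels: tissue labels;
    train_mask: boolean array marking training voxels."""
    feats = np.stack([t1, t2, fa], axis=-1).reshape(-1, 3).astype(float)
    tr = train_mask.ravel()
    y = labels.ravel()[tr]
    prob = None
    for _ in range(n_iter):
        X = feats if prob is None else np.hstack([feats, prob])
        clf = RandomForestClassifier(n_estimators=100).fit(X[tr], y)
        prob = clf.predict_proba(X)           # refined probability maps
    return clf.classes_[prob.argmax(axis=1)].reshape(t1.shape)
```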
NASA Astrophysics Data System (ADS)
Ma, Dan; Liu, Jun; Chen, Kai; Li, Huali; Liu, Ping; Chen, Huijuan; Qian, Jing
2016-04-01
In remote sensing fusion, the spatial details of a panchromatic (PAN) image and the spectral information of multispectral (MS) images are transferred into fused images according to the characteristics of the human visual system. Thus, a remote sensing image fusion quality assessment called the feature-based fourth-order correlation coefficient (FFOCC) is proposed. FFOCC is based on the feature-based coefficient concept. Spatial features related to the spatial details of the PAN image and spectral features related to the spectral information of the MS images are first extracted from the fused image. Then, the fourth-order correlation coefficient between the spatial and spectral features is calculated and treated as the assessment result. FFOCC was then compared with existing, widely used indices such as the Erreur Relative Globale Adimensionnelle de Synthèse and the quality-with-no-reference index. Results of the fusion and distortion experiments indicate that FFOCC is consistent with subjective evaluation. FFOCC significantly outperforms the other indices in evaluating fusion images that are produced by different fusion methods and that are distorted in spatial and spectral features by blurring, adding noise, and changing intensity. All the findings indicate that the proposed method is an objective and effective quality assessment for remote sensing image fusion.
A fast fusion scheme for infrared and visible light images in NSCT domain
NASA Astrophysics Data System (ADS)
Zhao, Chunhui; Guo, Yunting; Wang, Yulei
2015-09-01
Fusion of infrared and visible light images is an effective way to obtain a simultaneous visualization of the background details provided by the visible light image and the hidden-target information provided by the infrared image, which is more suitable for browsing and further processing. Two crucial goals for infrared and visible light image fusion are improving fusion performance and reducing computational burden. In this paper, a novel fusion algorithm named pixel information estimation is proposed, which determines the fusion weights by evaluating the information content of each pixel and is well suited to visible light and infrared image fusion, with better fusion quality and lower time consumption. In addition, a fast realization of the non-subsampled contourlet transform is proposed to improve computational efficiency. To verify the advantage of the proposed method, this paper compares it with several popular methods on six evaluation metrics over four different image groups. Experimental results show that the proposed algorithm achieves more effective results with much less time consumed and performs well in both subjective evaluation and objective indicators.
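The abstract does not define its pixel information estimator, so the sketch below is only a guess at the flavor of such a scheme: local variance serves as a stand-in information measure that weights each source pixel. Everything here, from the estimator to the window size, is an assumption, and the paper's fast NSCT realization is not reproduced.

```python
# Speculative weighted fusion with local variance as a stand-in
# "pixel information" estimate (not the paper's estimator).
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, size=7):
    img = img.astype(float)
    m = uniform_filter(img, size)
    return uniform_filter(img * img, size) - m ** 2

def info_weighted_fuse(ir, vis, size=7, eps=1e-6):
    w_ir, w_vis = local_variance(ir, size), local_variance(vis, size)
    return (w_ir * ir + w_vis * vis) / (w_ir + w_vis + eps)
```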
Lu, Yisu; Jiang, Jun; Yang, Wei; Feng, Qianjin; Chen, Wufan
2014-01-01
Brain-tumor segmentation is an important clinical requirement for brain-tumor diagnosis and radiotherapy planning. It is well known that the number of clusters is one of the most important parameters for automatic segmentation, yet it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without initializing the number of clusters. Because classical MDP segmentation cannot be applied for real-time diagnosis, a new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smoothness constraint is proposed in this study. Besides the segmentation of single-modality brain-tumor images, we extended the algorithm to segment multimodal brain-tumor images using multimodal magnetic resonance (MR) features, obtaining the active tumor and edema at the same time. The proposed algorithm is evaluated using 32 multimodal MR glioma image sequences, and the segmentation results are compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance, and the algorithm has great potential for practical real-time clinical use.
Li, Fang-Ye; Chen, Xiao-Lei; Xu, Bai-Nan
2016-09-01
To determine the beneficial effects of intraoperative high-field magnetic resonance imaging (MRI), multimodal neuronavigation, and intraoperative electrophysiological monitoring-guided surgery for treating supratentorial cavernomas, twelve patients with 13 supratentorial cavernomas were prospectively enrolled and operated on using 1.5 T intraoperative MRI, multimodal neuronavigation, and intraoperative electrophysiological monitoring. All cavernomas were deeply located in subcortical areas or involved critical areas. Intraoperative high-field MRIs were obtained for intraoperative "visualization" of surrounding eloquent structures, "brain shift" corrections, and navigational plan updates. All cavernomas were successfully resected with guidance from intraoperative MRI, multimodal neuronavigation, and intraoperative electrophysiological monitoring. In 5 cases, intraoperative "brain shift" severely hindered localization of the lesions; however, intraoperative MRI facilitated precise localization. During long-term (>3 months) follow-up, some or all presenting signs and symptoms improved or resolved in 4 cases, but were unchanged in 7 patients. Intraoperative high-field MRI, multimodal neuronavigation, and intraoperative electrophysiological monitoring are helpful in surgeries for the treatment of small, deeply seated subcortical cavernomas.
Different source image fusion based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Xiao; Piao, Yan
2016-03-01
Video image fusion technology makes video streams obtained by different image sensors complement each other, so as to obtain video that is rich in information and suited to the human visual system. Infrared cameras have strong penetrating power in harsh environments such as smoke, fog, and low light, but they capture image detail poorly, and their output does not suit the human visual system. Visible light imaging alone can produce detailed, high-resolution images well matched to the visual system, but visible images are easily affected by the external environment. Fusing infrared and visible video involves fusion algorithms of high complexity and computational load that occupy considerable memory resources and demand high clock rates; such algorithms are mostly implemented in software (e.g., C++ or C) and rarely on hardware platforms. In this paper, based on the imaging characteristics of infrared and visible light images, software and hardware are combined: the registration parameters are obtained in MATLAB, and gray-level weighted-average fusion is implemented on an FPGA hardware platform. The fused image effectively increases the amount of information acquired from the scene.
[Perceptual sharpness metric for visible and infrared color fusion images].
Gao, Shao-Shu; Jin, Wei-Qi; Wang, Xia; Wang, Ling-Xue; Luo, Yuan
2012-12-01
For visible and infrared color fusion images, an objective sharpness assessment model is proposed to measure the clarity of detail and edge definition of the fusion image. Firstly, the contrast sensitivity function (CSF) of the human visual system is used to reduce insensitive frequency components under certain viewing conditions. Secondly, a perceptual contrast model, which takes the human luminance masking effect into account, is proposed based on the local band-limited contrast model. Finally, the perceptual contrast is calculated in the region of interest (containing image details and edges) in the fusion image to evaluate image perceptual sharpness. Experimental results show that the proposed perceptual sharpness metric provides better predictions, more closely matched to human perceptual evaluations, than five existing sharpness (blur) metrics for color images. The proposed metric can effectively evaluate the perceptual sharpness of color fusion images.
Tools and Methods for the Registration and Fusion of Remotely Sensed Data
NASA Technical Reports Server (NTRS)
Goshtasby, Arthur Ardeshir; LeMoigne, Jacqueline
2010-01-01
Tools and methods for image registration were reviewed. Methods for the registration of remotely sensed data at NASA were discussed. Image fusion techniques were reviewed. Challenges in registration of remotely sensed data were discussed. Examples of image registration and image fusion were given.
Fusion Imaging for Procedural Guidance.
Wiley, Brandon M; Eleid, Mackram F; Thaden, Jeremy J
2018-05-01
The field of percutaneous structural heart interventions has grown tremendously in recent years. This growth has fueled the development of new imaging protocols and technologies in parallel to help facilitate these minimally invasive procedures. Fusion imaging is an exciting new technology that combines the strengths of two imaging modalities and has the potential to improve procedural planning and the safety of many commonly performed transcatheter procedures. In this review we discuss the basic concepts of fusion imaging along with the relative strengths and weaknesses of static vs dynamic fusion imaging modalities. This review focuses primarily on echocardiographic-fluoroscopic fusion imaging and its application in commonly performed transcatheter structural heart procedures. Copyright © 2017 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
NASA Astrophysics Data System (ADS)
Witharana, Chandi; LaRue, Michelle A.; Lynch, Heather J.
2016-03-01
Remote sensing is a rapidly developing tool for mapping the abundance and distribution of Antarctic wildlife. While both panchromatic and multispectral imagery have been used in this context, image fusion techniques have received little attention. We tasked seven widely used fusion algorithms (Ehlers fusion, hyperspherical color space fusion, high-pass fusion, Gram-Schmidt fusion, principal component analysis (PCA) fusion, University of New Brunswick fusion, and wavelet-PCA fusion) with resolution-enhancing a series of single-date QuickBird-2 and WorldView-2 image scenes comprising penguin guano, seals, and vegetation. Fused images were assessed for spectral and spatial fidelity using a variety of quantitative quality indicators and visual inspection methods. Our visual evaluation selected the high-pass fusion algorithm and the University of New Brunswick fusion algorithm as best for manual wildlife detection, while the quantitative assessment suggested the Gram-Schmidt fusion algorithm and the University of New Brunswick fusion algorithm as best for automated classification. The hyperspherical color space fusion algorithm exhibited mediocre results in terms of spectral and spatial fidelity. The PCA fusion algorithm showed spatial superiority at the expense of spectral inconsistencies. The Ehlers fusion algorithm and the wavelet-PCA algorithm showed the weakest performances. As remote sensing becomes a more routine method of surveying Antarctic wildlife, these benchmarks will provide guidance for image fusion and pave the way for more standardized products for specific types of wildlife surveys.
Feng, Peng; Wang, Jing; Wei, Biao; Mi, Deling
2013-01-01
A hybrid multiscale and multilevel image fusion algorithm for green fluorescent protein (GFP) images and phase contrast images of Arabidopsis cells is proposed in this paper. Combining the intensity-hue-saturation (IHS) transform and the sharp frequency localization contourlet transform (SFL-CT), this algorithm uses different fusion strategies for different detail subbands, including a neighborhood consistency measurement (NCM) that can adaptively find a balance between the color background and gray structure. Two kinds of neighborhood classes based on an empirical model are also taken into consideration. Visual information fidelity (VIF) is introduced as an objective criterion to evaluate the fused image. Experimental results on 117 groups of Arabidopsis cell images from the John Innes Center show that the new algorithm not only preserves the details of the original images well but also improves the visibility of the fused image, demonstrating the superiority of the novel method over traditional ones. PMID:23476716
Development of a fusion approach selection tool
NASA Astrophysics Data System (ADS)
Pohl, C.; Zeng, Y.
2015-06-01
During the last decades, the number and quality of available remote sensing satellite sensors for Earth observation have grown significantly. The amount of available multi-sensor imagery, along with its increased spatial and spectral resolution, presents new challenges to Earth scientists. With a Fusion Approach Selection Tool (FAST), the remote sensing community would obtain access to an optimized and improved image processing technology. Remote sensing image fusion is a means of producing images containing information that is not inherent in any single image alone. The user currently has access to sophisticated commercial image fusion techniques, plus the option to tune the parameters of each individual technique to match the anticipated application. This leaves the operator with an uncountable number of options for combining remote sensing images, not to mention the selection of the appropriate images, resolution, and bands. Image fusion can be a machine- and time-consuming endeavour. In addition, it requires knowledge about remote sensing, image fusion, digital image processing, and the application. FAST shall provide the user with a quick overview of processing flows to choose from to reach the target. FAST will ask for available images, application parameters, and desired information, and will process this input to produce a workflow for quickly obtaining the best results. It will optimize data and image fusion techniques. It provides an overview of the possible results, from which the user can choose the best. FAST will enable even inexperienced users to use advanced processing methods to maximize the benefit of multi-sensor image exploitation.
Patel, Meenal J; Andreescu, Carmen; Price, Julie C; Edelman, Kathryn L; Reynolds, Charles F; Aizenstein, Howard J
2015-10-01
Currently, depression diagnosis relies primarily on behavioral symptoms and signs, and treatment is guided by trial and error instead of an evaluation of the associated underlying brain characteristics. Unlike past studies, we attempted to estimate accurate prediction models for late-life depression diagnosis and treatment response using multiple machine learning methods, with inputs of multi-modal imaging and non-imaging whole-brain and network-based features. Late-life depression patients (medicated post-recruitment) (n = 33) and older non-depressed individuals (n = 35) were recruited. Their demographics and cognitive ability scores were recorded, and brain characteristics were acquired using multi-modal magnetic resonance imaging pretreatment. Linear and nonlinear learning methods were tested for estimating accurate prediction models. A learning method called alternating decision trees estimated the most accurate prediction models for late-life depression diagnosis (87.27% accuracy) and treatment response (89.47% accuracy). The diagnosis model included measures of age, Mini-Mental State Examination score, and structural imaging (e.g., whole-brain atrophy and global white matter hyperintensity burden). The treatment response model included measures of structural and functional connectivity. Combinations of multi-modal imaging and/or non-imaging measures may help better predict late-life depression diagnosis and treatment response. As a preliminary observation, we speculate that the results may also suggest that different underlying brain characteristics defined by multi-modal imaging measures, rather than region-based differences, are associated with depression versus depression recovery, because to our knowledge this is the first depression study to accurately predict both using the same approach. These findings may help better understand late-life depression and identify preliminary steps toward personalized late-life depression treatment. Copyright © 2015 John Wiley & Sons, Ltd.
A novel framework of tissue membrane systems for image fusion.
Zhang, Zulin; Yi, Xinzhong; Peng, Hong
2014-01-01
This paper proposes a tissue membrane system-based framework to deal with the optimal image fusion problem. A spatial domain fusion algorithm is given, and a tissue membrane system of multiple cells is used as its computing framework. Based on the multicellular structure and inherent communication mechanism of the tissue membrane system, an improved velocity-position model is developed. The performance of the fusion framework is studied with comparison of several traditional fusion methods as well as genetic algorithm (GA)-based and differential evolution (DE)-based spatial domain fusion methods. Experimental results show that the proposed fusion framework is superior or comparable to the other methods and can be efficiently used for image fusion.
Sedai, Suman; Garnavi, Rahil; Roy, Pallab; Xi Liang
2015-08-01
Multi-atlas segmentation first registers each atlas image to the target image and transfers the labels of the atlas image to the coordinate system of the target image. The transferred labels are then combined using a label fusion algorithm. In this paper, we propose a novel label fusion method that aggregates discriminative learning and generative modeling for segmentation of cardiac MR images. First, a probabilistic random forest classifier is trained as a discriminative model to obtain the prior probability of a label at a given voxel of the target image. Then, a probability distribution of image patches is modeled using a Gaussian mixture model for each label, providing the likelihood of the voxel belonging to that label. The final label posterior is obtained by combining the classification score and the likelihood score under Bayes' rule. A comparative study performed on the MICCAI 2013 SATA Segmentation Challenge demonstrates that our proposed hybrid label fusion algorithm is more accurate than five other state-of-the-art label fusion methods. The proposed method obtains Dice similarity coefficients of 0.94 and 0.92 in segmenting the epicardium and endocardium, respectively. Moreover, our label fusion method achieves more accurate segmentation results compared to four other label fusion methods.
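In outline, the paper's discriminative/generative combination can be mocked up as follows: a random forest supplies per-voxel label priors, per-label Gaussian mixtures over patches supply likelihoods, and the two are multiplied under Bayes' rule. The sketch assumes pre-extracted patch features and illustrative parameter choices; it is not the authors' implementation.

```python
# Sketch of Bayesian fusion of a random-forest prior with per-label
# Gaussian-mixture likelihoods (illustrative parameters and shapes).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.mixture import GaussianMixture

def fit_models(patches, labels, n_components=3):
    """patches: (N, P) training patch features; labels: (N,)."""
    rf = RandomForestClassifier(n_estimators=100).fit(patches, labels)
    gmms = {c: GaussianMixture(n_components).fit(patches[labels == c])
            for c in np.unique(labels)}
    return rf, gmms

def posterior_labels(rf, gmms, patches):
    prior = rf.predict_proba(patches)                   # (N, C)
    loglik = np.column_stack([gmms[c].score_samples(patches)
                              for c in rf.classes_])    # (N, C)
    # Per-row rescaling keeps the product numerically stable without
    # changing the argmax of prior * likelihood.
    post = prior * np.exp(loglik - loglik.max(axis=1, keepdims=True))
    return rf.classes_[post.argmax(axis=1)]
```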
[A study on medical image fusion].
Zhang, Er-hu; Bian, Zheng-zhong
2002-09-01
Five algorithms for medical image fusion are analyzed, along with their advantages and disadvantages. Four kinds of quantitative evaluation criteria for the quality of image fusion algorithms are proposed, which will give some guidance for future research.
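Although the abstract does not list its four criteria, two metrics that commonly appear in such evaluations are the entropy of the fused image and the mutual information between a source image and the fusion; both are easy to compute from histograms, as sketched below (examples only, not necessarily the criteria proposed in the paper).

```python
# Two common fusion-quality measures computed from histograms.
import numpy as np

def entropy(img, bins=256):
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()            # bits per pixel

def mutual_information(a, b, bins=64):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return (pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum()
```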
NASA Astrophysics Data System (ADS)
Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Lee, Shin-Jye; He, Kangjian
2018-01-01
In order to improve the performance of infrared and visual image fusion and provide better visual effects, this paper proposes a hybrid fusion method for infrared and visual images that combines the discrete stationary wavelet transform (DSWT), the discrete cosine transform (DCT), and local spatial frequency (LSF). The proposed method has three key processing steps. Firstly, DSWT is employed to decompose the important features of the source image into a series of sub-images with different levels and spatial frequencies. Secondly, DCT is used to separate the significant details of the sub-images according to the energy of different frequencies. Thirdly, LSF is applied to enhance the regional features of the DCT coefficients, which is helpful for image feature extraction. Several frequently used image fusion methods and evaluation metrics are employed to evaluate the validity of the proposed method. The experiments indicate that the proposed method achieves a good fusion effect and is more efficient than other conventional image fusion methods.
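Of the three stages, the DSWT stage is the most generic; a minimal stationary-wavelet fusion with an averaging rule for approximations and a max-absolute rule for details is sketched below. The DCT and LSF stages are omitted, and the wavelet, level, and fusion rules are assumptions for illustration.

```python
# Stationary-wavelet fusion sketch (DSWT stage only; the paper's DCT
# and local-spatial-frequency steps are not reproduced here).
import numpy as np
import pywt

def dswt_fuse(a, b, wavelet="db2", level=2):
    ca = pywt.swt2(a.astype(float), wavelet, level=level)
    cb = pywt.swt2(b.astype(float), wavelet, level=level)
    fused = []
    for (aA, aD), (bA, bD) in zip(ca, cb):
        fA = 0.5 * (aA + bA)                          # average approximations
        fD = tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                   for x, y in zip(aD, bD))           # keep stronger detail
        fused.append((fA, fD))
    return pywt.iswt2(fused, wavelet)

# Image sides must be divisible by 2**level for swt2.
out = dswt_fuse(np.random.rand(128, 128), np.random.rand(128, 128))
```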
Multimode optical dermoscopy (SkinSpect) analysis for skin with melanocytic nevus
NASA Astrophysics Data System (ADS)
Vasefi, Fartash; MacKinnon, Nicholas; Saager, Rolf; Kelly, Kristen M.; Maly, Tyler; Chave, Robert; Booth, Nicholas; Durkin, Anthony J.; Farkas, Daniel L.
2016-04-01
We have developed a multimode dermoscope (SkinSpect™) capable of illuminating human skin samples in vivo with spectrally-programmable, linearly-polarized light at 33 wavelengths between 468 nm and 857 nm. Diffusely reflected photons are separated into collinear and cross-polarized image paths, and images are captured for each illumination wavelength. In vivo human skin nevi (N = 20) were evaluated with the multimode dermoscope, and melanin and hemoglobin concentrations were compared with Spatially Modulated Quantitative Spectroscopy (SMoQS) measurements. Both systems show low correlation between their melanin and hemoglobin concentrations, demonstrating the ability of the SkinSpect™ to separate these molecular signatures and thus act as a biologically plausible device capable of early-onset melanoma detection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lorenz, Matthias; Ovchinnikova, Olga S; Kertesz, Vilmos
2013-01-01
This paper describes the coupling of ambient laser ablation surface sampling, accomplished using a laser capture microdissection system, with atmospheric pressure chemical ionization mass spectrometry for high-spatial-resolution multimodal imaging. A commercial laser capture microdissection system was placed in close proximity to a modified ion source of a mass spectrometer designed to allow sampling of laser-ablated material via a transfer tube directly into the ionization region. Rhodamine 6G dye of red Sharpie ink in a laser-etched pattern, as well as cholesterol and phosphatidylcholine in a mouse cerebellum thin tissue section, were identified and imaged from full-scan mass spectra. A minimal spot diameter of 8 µm was achieved using the 10X microscope cutting objective, with a lateral oversampling pixel resolution of about 3.7 µm. Distinguishing between features approximately 13 µm apart in a mouse cerebellum thin tissue section was demonstrated in a multimodal fashion, including co-registered optical and mass spectral chemical images.
Design and applications of a multimodality image data warehouse framework.
Wong, Stephen T C; Hoo, Kent Soo; Knowlton, Robert C; Laxer, Kenneth D; Cao, Xinhau; Hawkins, Randall A; Dillon, William P; Arenson, Ronald L
2002-01-01
A comprehensive data warehouse framework is needed, which encompasses imaging and non-imaging information in supporting disease management and research. The authors propose such a framework, describe general design principles and system architecture, and illustrate a multimodality neuroimaging data warehouse system implemented for clinical epilepsy research. The data warehouse system is built on top of a picture archiving and communication system (PACS) environment and applies an iterative object-oriented analysis and design (OOAD) approach and recognized data interface and design standards. The implementation is based on a Java CORBA (Common Object Request Broker Architecture) and Web-based architecture that separates the graphical user interface presentation, data warehouse business services, data staging area, and backend source systems into distinct software layers. To illustrate the practicality of the data warehouse system, the authors describe two distinct biomedical applications--namely, clinical diagnostic workup of multimodality neuroimaging cases and research data analysis and decision threshold on seizure foci lateralization. The image data warehouse framework can be modified and generalized for new application domains.
A Multimodal Approach to Counselor Supervision.
ERIC Educational Resources Information Center
Ponterotto, Joseph G.; Zander, Toni A.
1984-01-01
Represents an initial effort to apply Lazarus's multimodal approach to a model of counselor supervision. Includes continuously monitoring the trainee's behavior, affect, sensations, images, cognitions, interpersonal functioning, and when appropriate, biological functioning (diet and drugs) in the supervisory process. (LLL)
Enhanced image fusion using directional contrast rules in fuzzy transform domain.
Nandal, Amita; Rosales, Hamurabi Gamboa
2016-01-01
In this paper a novel image fusion algorithm based on directional contrast in fuzzy transform (FTR) domain is proposed. Input images to be fused are first divided into several non-overlapping blocks. The components of these sub-blocks are fused using directional contrast based fuzzy fusion rule in FTR domain. The fused sub-blocks are then transformed into original size blocks using inverse-FTR. Further, these inverse transformed blocks are fused according to select maximum based fusion rule for reconstructing the final fused image. The proposed fusion algorithm is both visually and quantitatively compared with other standard and recent fusion algorithms. Experimental results demonstrate that the proposed method generates better results than the other methods.
Novakovic, Dunja; Saarinen, Jukka; Rojalin, Tatu; Antikainen, Osmo; Fraser-Miller, Sara J; Laaksonen, Timo; Peltonen, Leena; Isomäki, Antti; Strachan, Clare J
2017-11-07
Two nonlinear imaging modalities, coherent anti-Stokes Raman scattering (CARS) and sum-frequency generation (SFG), were successfully combined for sensitive multimodal imaging of multiple solid-state forms and their changes on drug tablet surfaces. Two imaging approaches were used and compared: (i) hyperspectral CARS combined with principal component analysis (PCA) and SFG imaging and (ii) simultaneous narrowband CARS and SFG imaging. Three different solid-state forms of indomethacin-the crystalline gamma and alpha forms, as well as the amorphous form-were clearly distinguished using both approaches. Simultaneous narrowband CARS and SFG imaging was faster, but hyperspectral CARS and SFG imaging has the potential to be applied to a wider variety of more complex samples. These methodologies were further used to follow crystallization of indomethacin on tablet surfaces under two storage conditions: 30 °C/23% RH and 30 °C/75% RH. Imaging with (sub)micron resolution showed that the approach allowed detection of very early stage surface crystallization. The surfaces progressively crystallized to predominantly (but not exclusively) the gamma form at lower humidity and the alpha form at higher humidity. Overall, this study suggests that multimodal nonlinear imaging is a highly sensitive, solid-state (and chemically) specific, rapid, and versatile imaging technique for understanding and hence controlling (surface) solid-state forms and their complex changes in pharmaceuticals.
Luo, Y.; Xia, J.; Miller, R.D.; Liu, J.; Xu, Y.; Liu, Q.
2008-01-01
Multichannel analysis of surface waves (MASW) is an efficient tool to obtain the vertical shear-wave velocity profile. One of the key steps in the MASW method is to generate an image of dispersive energy in the frequency-velocity domain, so dispersion curves can be determined by picking the peaks of dispersion energy. In this paper, we image Rayleigh-wave dispersive energy and separate multiple modes from a multichannel record by high-resolution linear Radon transform (LRT). We first introduce Rayleigh-wave dispersive energy imaging by high-resolution LRT. We then show the process of Rayleigh-wave mode separation. Results of synthetic and real-world examples demonstrate that (1) compared with the slant-stacking algorithm, high-resolution LRT can improve the resolution of dispersion-energy images by more than 50%; (2) high-resolution LRT can successfully separate multimode dispersive energy of Rayleigh waves with high resolution; and (3) multimode separation and reconstruction expand the frequency ranges of higher-mode dispersive energy, which not only increases the investigation depth but also provides a means to accurately determine cut-off frequencies.
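For orientation, the conventional slant-stack baseline that high-resolution LRT improves upon can be written compactly in the frequency domain: for each trial phase velocity, each trace's normalized spectrum is phase-shifted according to its offset and the traces are summed. The sketch below follows that textbook scheme; names and normalization are illustrative, and it is not the paper's LRT.

```python
# Conventional frequency-domain slant-stack dispersion image (the
# baseline that high-resolution LRT sharpens; not the LRT itself).
import numpy as np

def dispersion_image(record, offsets, dt, velocities):
    """record: (n_traces, n_samples) shot gather; offsets: (n_traces,)
    receiver offsets in meters; dt: sample interval in seconds."""
    spec = np.fft.rfft(record, axis=1)
    spec = spec / (np.abs(spec) + 1e-12)      # normalize each trace
    freqs = np.fft.rfftfreq(record.shape[1], dt)
    energy = np.zeros((len(freqs), len(velocities)))
    for j, v in enumerate(velocities):
        # Phase shift that aligns a wave of phase velocity v.
        shift = np.exp(2j * np.pi * freqs[None, :] * offsets[:, None] / v)
        energy[:, j] = np.abs((spec * shift).sum(axis=0))
    return freqs, energy
```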
NASA Astrophysics Data System (ADS)
Cheung, Carling L.; Looi, Thomas; Drake, James; Kim, Peter C. W.
2012-02-01
The development of image-guided robotic and mechatronic platforms for medical applications requires a phantom model for initial testing. Finding an appropriate phantom becomes challenging when the targeted patient population is pediatric, particularly infants, neonates, or fetuses. Our group is currently developing a pediatric-sized surgical robot that operates under fused MRI and laparoscopic video guidance. To support this work, we describe a method for designing and manufacturing silicone rubber organ phantoms for the purpose of testing the robotics and the image fusion system. A surface model of the organ is obtained and converted into a mold that is then rapid-prototyped using a 3D printer. The mold is filled with a solution containing a particular ratio of silicone rubber to slacker additive to achieve a specific set of tactile and imaging characteristics in the phantom. The expected MRI relaxation times of different ratios of silicone rubber to slacker additive are experimentally quantified so that the imaging properties of the phantom can be matched to those of the organ it represents. Samples of silicone rubber and slacker additive mixed in ratios ranging from 1:0 to 1:1.5 were prepared and scanned using inversion recovery and spin echo sequences with varying TI and TE, respectively, in order to fit curves for calculating the expected T1 and T2 relaxation times of each ratio. A set of infant-sized abdominal organs was prepared, which were successfully sutured by the robot and imaged using different modalities.
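The T1 fitting step described here follows the standard inversion recovery magnitude model; a hedged sketch with synthetic values (the TI list and T1 are illustrative, not the study's data) is shown below.

```python
# Fitting T1 from an inversion recovery series using the magnitude
# model |S(TI)| = |A * (1 - 2 * exp(-TI / T1))|. Synthetic numbers.
import numpy as np
from scipy.optimize import curve_fit

def ir_model(ti, a, t1):
    return np.abs(a * (1.0 - 2.0 * np.exp(-ti / t1)))

ti = np.array([50., 150., 400., 800., 1600., 3200.])  # TI values in ms (assumed)
signal = ir_model(ti, 1.0, 900.0)                      # synthetic sample data
(a_fit, t1_fit), _ = curve_fit(ir_model, ti, signal, p0=[1.0, 500.0])
print(f"estimated T1 = {t1_fit:.0f} ms")               # ~900 ms here
```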
Multiscale infrared and visible image fusion using gradient domain guided image filtering
NASA Astrophysics Data System (ADS)
Zhu, Jin; Jin, Weiqi; Li, Li; Han, Zhenghao; Wang, Xia
2018-03-01
For better surveillance with infrared and visible imaging, a novel hybrid multiscale decomposition fusion method using gradient domain guided image filtering (HMSD-GDGF) is proposed in this study. In this method, hybrid multiscale decomposition of the source images with guided image filtering and gradient domain guided image filtering is first applied, before the weight maps at each scale are obtained using a saliency detection technique and filtering, with three different fusion rules at different scales. The three types of fusion rules apply to the small-scale detail level, the large-scale detail level, and the base level. Finally, the target becomes more salient and can be more easily detected in the fusion result, with the detail information of the scene being fully displayed. Experimental comparisons with state-of-the-art fusion methods show that the HMSD-GDGF method has obvious advantages in fidelity of salient information (including structural similarity, brightness, and contrast), preservation of edge features, and human visual perception. Therefore, visual effects can be improved by using the proposed HMSD-GDGF method.
PIRATE: pediatric imaging response assessment and targeting environment
NASA Astrophysics Data System (ADS)
Glenn, Russell; Zhang, Yong; Krasin, Matthew; Hua, Chiaho
2010-02-01
By combining the strengths of various imaging modalities, the multimodality imaging approach has the potential to improve tumor staging, delineation of tumor boundaries, chemo-radiotherapy regimen design, and treatment response assessment in cancer management. To address the urgent need for efficient tools to analyze large-scale clinical trial data, we have developed an integrated multimodality, functional, and anatomical imaging analysis software package for target definition and therapy response assessment in pediatric radiotherapy (RT) patients. Our software provides quantitative tools for automated image segmentation, region-of-interest (ROI) histogram analysis, spatial volume-of-interest (VOI) analysis, and voxel-wise correlation across modalities. To demonstrate the clinical applicability of this software, histogram analyses were performed on baseline and follow-up 18F-fluorodeoxyglucose (18F-FDG) PET images of nine patients with rhabdomyosarcoma enrolled in an institutional clinical trial at St. Jude Children's Research Hospital. In addition, we combined 18F-FDG PET, dynamic contrast-enhanced (DCE) MR, and anatomical MR data to visualize the heterogeneity in tumor pathophysiology, with the ultimate goal of adaptive targeting of regions with high tumor burden. Our software is able to simultaneously analyze multimodality images across multiple time points, which could greatly speed up the analysis of large-scale clinical trial data and the validation of potential imaging biomarkers.
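As a flavor of the ROI histogram analysis such a package automates, the toy sketch below summarizes PET uptake inside a segmented mask; the arrays, statistics, and threshold are illustrative assumptions, not the package's interface.

```python
# Toy ROI histogram analysis: summarize uptake within a tumor mask.
import numpy as np

def roi_histogram_stats(pet, mask, bins=64):
    vals = pet[mask > 0]                       # voxels inside the ROI
    hist, edges = np.histogram(vals, bins=bins)
    return {"mean": vals.mean(), "max": vals.max(),
            "p90": np.percentile(vals, 90),
            "hist": hist, "edges": edges}

# Random stand-ins for a PET volume and a binary tumor mask.
stats = roi_histogram_stats(np.random.rand(64, 64, 32),
                            np.random.rand(64, 64, 32) > 0.8)
```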
Wang, Hao; Gardecki, Joseph A.; Ughi, Giovanni J.; Jacques, Paulino Vacas; Hamidi, Ehsan; Tearney, Guillermo J.
2015-01-01
While optical coherence tomography (OCT) has been shown to be capable of imaging coronary plaque microstructure, additional chemical/molecular information may be needed in order to determine which lesions are at risk of causing an acute coronary event. In this study, we used a recently developed imaging system and double-clad fiber (DCF) catheter capable of simultaneously acquiring both OCT and red-excited near-infrared autofluorescence (NIRAF) images (excitation: 633 nm; emission: 680 nm to 900 nm). We found that NIRAF is elevated in lesions that contain necrotic core – a feature that is critical for vulnerable plaque diagnosis and that is not readily discriminated by OCT alone. We first utilized a DCF ball lens probe and a benchtop setup to acquire en face NIRAF images of aortic plaques ex vivo (n = 20). In addition, we used the OCT-NIRAF system and fully assembled catheters to acquire multimodality images from human coronary arteries (n = 15) prosected from human cadaver hearts (n = 5). Comparison of these images with corresponding histology demonstrated that necrotic core plaques exhibited significantly higher NIRAF intensity than other plaque types. These results suggest that multimodality intracoronary OCT-NIRAF imaging technology may be used in the future to provide improved characterization of coronary artery disease in human patients. PMID:25909020