Wavelet Fusion for Concealed Object Detection Using Passive Millimeter Wave Sequence Images
NASA Astrophysics Data System (ADS)
Chen, Y.; Pang, L.; Liu, H.; Xu, X.
2018-04-01
A PMMW imaging system can create interpretable imagery of objects concealed under clothing, which gives it a great advantage in security check systems. This paper addresses wavelet fusion to detect concealed objects using passive millimeter wave (PMMW) sequence images. Firstly, based on the image characteristics and storage methods of a real-time PMMW imager, the sum of squared differences (SSD) is used as an image-correlation parameter to screen the sequence images. Secondly, the selected images are fused using a wavelet fusion algorithm. Finally, the concealed objects are detected by mean filtering, threshold segmentation and edge detection. The experimental results show that this method improves the detection of concealed objects by selecting the most correlated images from the PMMW sequence and using wavelet fusion to enhance the information of the concealed objects. The method can be effectively applied to the detection of objects concealed on the human body in millimeter wave video.
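As a rough illustration of the SSD screening step, a minimal Python sketch (the frame data, reference-frame choice, and number of retained frames are assumptions, not the authors' settings):

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two equally sized frames."""
    d = a.astype(np.float64) - b.astype(np.float64)
    return np.sum(d * d)

def screen_frames(frames, reference, k=5):
    """Return the k frames most correlated (lowest SSD) with the reference."""
    scores = [ssd(f, reference) for f in frames]
    order = np.argsort(scores)          # ascending: most similar first
    return [frames[i] for i in order[:k]]

# toy usage: ten random 64x64 "PMMW frames", the first taken as reference
frames = [np.random.rand(64, 64) for _ in range(10)]
selected = screen_frames(frames, frames[0], k=5)
```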
A Hierarchical Convolutional Neural Network for vesicle fusion event classification.
Li, Haohan; Mao, Yunxiang; Yin, Zhaozheng; Xu, Yingke
2017-09-01
Quantitative analysis of vesicle exocytosis and classification of different modes of vesicle fusion from fluorescence microscopy are of primary importance for biomedical research. In this paper, we propose a novel Hierarchical Convolutional Neural Network (HCNN) method to automatically identify vesicle fusion events in time-lapse Total Internal Reflection Fluorescence Microscopy (TIRFM) image sequences. Firstly, a detection and tracking method is developed to extract image patch sequences containing potential fusion events. Then, a Gaussian Mixture Model (GMM) is applied on each image patch of the patch sequence with outliers rejected for robust Gaussian fitting. By utilizing the high-level time-series intensity change features introduced by GMM and the visual appearance features embedded in some key moments of the fusion process, the proposed HCNN architecture is able to classify each candidate patch sequence into three classes: full fusion event, partial fusion event and non-fusion event. Finally, we validate the performance of our method on 9 challenging datasets that have been annotated by cell biologists, and our method achieves better performance compared with three previous methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
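For illustration of the robust Gaussian-fitting idea, a single-Gaussian simplification of the paper's GMM step might look like the following sketch (patch size, iteration count, and clipping threshold are assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, amp, x0, y0, sigma, offset):
    x, y = xy
    return amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2)) + offset

def robust_gauss_fit(patch, n_iter=3, clip=2.5):
    """Fit a single 2-D Gaussian, iteratively rejecting outlier pixels."""
    h, w = patch.shape
    y, x = np.mgrid[0:h, 0:w]
    x, y, z = x.ravel(), y.ravel(), patch.ravel().astype(np.float64)
    keep = np.ones(z.size, dtype=bool)
    p = [z.max() - z.min(), w / 2.0, h / 2.0, 2.0, z.min()]
    for _ in range(n_iter):
        p, _ = curve_fit(gauss2d, (x[keep], y[keep]), z[keep], p0=p)
        resid = z - gauss2d((x, y), *p)
        keep = np.abs(resid) < clip * resid.std()   # drop outlier pixels
    return p  # amp, x0, y0, sigma, offset

# synthetic fluorescent spot with one hot outlier pixel
yy, xx = np.mgrid[0:15, 0:15]
spot = 50 * np.exp(-((xx - 7) ** 2 + (yy - 7) ** 2) / 8.0) + 5
spot[2, 2] = 500.0
print(robust_gauss_fit(spot))
```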
Image fusion pitfalls for cranial radiosurgery.
Jonker, Benjamin P
2013-01-01
Stereotactic radiosurgery requires imaging to define both the stereotactic space in which the treatment is delivered and the target itself. Image fusion is the process of using rotation and translation to bring a second image set into alignment with the first image set. This allows the potential concurrent use of multiple image sets to define the target and stereotactic space. While a single magnetic resonance imaging (MRI) sequence alone can be used for delineation of the target and fiducials, there may be significant advantages to using additional imaging sets including other MRI sequences, computed tomography (CT) scans, and advanced imaging sets such as catheter-based angiography, diffusion tensor imaging-based fiber tracking and positron emission tomography in order to more accurately define the target and surrounding critical structures. Stereotactic space is usually defined by detection of fiducials on the stereotactic head frame or mask system. Unfortunately MRI sequences are susceptible to geometric distortion, whereas CT scans do not face this problem (although they have poorer resolution of the target in most cases). Thus image fusion can allow the definition of stereotactic space to proceed from the geometrically accurate CT images at the same time as using MRI to define the target. The use of image fusion is associated with risk of error introduced by inaccuracies of the fusion process, as well as workflow changes that if not properly accounted for can mislead the treating clinician. The purpose of this review is to describe the uses of image fusion in stereotactic radiosurgery as well as its potential pitfalls. PMID:23682338
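To make the rotation-plus-translation idea concrete, a minimal sketch of applying a known rigid transform to one image set; estimating that transform (the actual registration problem) is not shown, and the angle and shift below are purely illustrative:

```python
import numpy as np
from scipy.ndimage import affine_transform

def rigid_align(moving, angle_deg, shift):
    """Apply a rotation + translation to bring `moving` into the fixed frame."""
    th = np.deg2rad(angle_deg)
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    # affine_transform maps output coords to input coords: x_in = R @ x_out + offset
    return affine_transform(moving, R, offset=shift, order=1)

# toy usage: rotate a random "MRI slice" by 3 degrees and shift it by 2 pixels
slice_ = np.random.rand(128, 128)
aligned = rigid_align(slice_, angle_deg=3.0, shift=(2.0, 0.0))
```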
Information recovery through image sequence fusion under wavelet transformation
NASA Astrophysics Data System (ADS)
He, Qiang
2010-04-01
Remote sensing is widely applied to provide information about areas with limited ground access, with applications such as assessing the destruction from natural disasters and planning relief and recovery operations. However, the collection of aerial digital images is constrained by bad weather, atmospheric conditions, and an unstable camera or camcorder. Therefore, how to recover information from low-quality remote sensing images and how to enhance image quality become very important for many visual understanding tasks, such as feature detection, object segmentation, and object recognition. The quality of remote sensing imagery can be improved through meaningful combination, via information fusion, of images captured from different sensors or under different conditions. Here we particularly address information fusion for remote sensing images under multi-resolution analysis of the employed image sequences. Image fusion recovers complete information by integrating multiple images captured of the same scene. Through image fusion, a new image with higher resolution, more perceptually informative for humans and machines, is created from a time series of low-quality images based on image registration between different video frames.
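A minimal sketch of wavelet-domain fusion of two registered frames, using generic rules (average the approximation band, keep the larger-magnitude detail coefficients); the wavelet choice and single decomposition level are assumptions, not the paper's configuration:

```python
import numpy as np
import pywt

def wavelet_fuse(a, b, wavelet="db2"):
    """Fuse two registered images: average approximations, keep max-energy details."""
    ca, (ch_a, cv_a, cd_a) = pywt.dwt2(a, wavelet)
    cb, (ch_b, cv_b, cd_b) = pywt.dwt2(b, wavelet)
    fused_approx = (ca + cb) / 2.0
    pick = lambda u, v: np.where(np.abs(u) >= np.abs(v), u, v)  # max-abs rule
    details = (pick(ch_a, ch_b), pick(cv_a, cv_b), pick(cd_a, cd_b))
    return pywt.idwt2((fused_approx, details), wavelet)

# toy usage on two noisy views of the same scene
a = np.random.rand(128, 128)
b = np.random.rand(128, 128)
fused = wavelet_fuse(a, b)
```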
Short memory fuzzy fusion image recognition schema employing spatial and Fourier descriptors
NASA Astrophysics Data System (ADS)
Raptis, Sotiris N.; Tzafestas, Spyros G.
2001-03-01
Single images quite often do not bear enough information for precise interpretation, for a variety of reasons. Multiple image fusion and adequate integration have recently become the state of the art in the pattern recognition field. In the paper presented here, an enhanced multiple-observation schema is discussed, investigating improvements to the baseline fuzzy-probabilistic image fusion methodology. The first innovation introduced consists in considering only a limited but seemingly more effective part of the uncertainty information obtained up to a certain time, restricting older uncertainty dependencies and alleviating the computational burden, which is now needed only for a short sequence (stored in memory) of samples. The second innovation essentially consists in grouping the accumulated observations into feature-blind object hypotheses. Experimental settings include a sequence of independent views obtained by a camera moved around the investigated object.
Region-based multifocus image fusion for the precise acquisition of Pap smear images.
Tello-Mijares, Santiago; Bescós, Jesús
2018-05-01
A multifocus image fusion method to obtain a single focused image from a sequence of microscopic high-magnification Papanicolau source (Pap smear) images is presented. These images, captured each in a different position of the microscope lens, frequently show partially focused cells or parts of cells, which makes them impractical for the direct application of image analysis techniques. The proposed method obtains a focused image with a high preservation of original pixel information while achieving a negligible visibility of the fusion artifacts. The method starts by identifying the best-focused image of the sequence; then, it performs a mean-shift segmentation over this image; the focus level of the segmented regions is evaluated in all the images of the sequence, and best-focused regions are merged in a single combined image; finally, this image is processed with an adaptive artifact removal process. The combination of a region-oriented approach, instead of block-based approaches, and a minimum modification of the value of focused pixels in the original images achieve a highly contrasted image with no visible artifacts, which makes this method especially convenient for the medical imaging domain. The proposed method is compared with several state-of-the-art alternatives over a representative dataset. The experimental results show that our proposal obtains the best and most stable quality indicators. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
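As a hedged illustration of the first step, identifying the best-focused image of the sequence, one common focus measure is the variance of the Laplacian; the metric choice is an assumption, and the paper's region-wise evaluation after mean-shift segmentation is not reproduced here:

```python
import numpy as np
from scipy.ndimage import laplace

def focus_measure(img):
    """Variance of the Laplacian: higher means sharper (better focused)."""
    return laplace(img.astype(np.float64)).var()

def best_focused(stack):
    """Index of the sharpest image in a focal stack."""
    return int(np.argmax([focus_measure(im) for im in stack]))

# toy usage on a 5-image focal stack
stack = [np.random.rand(256, 256) for _ in range(5)]
print("best-focused frame:", best_focused(stack))
```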
Visualization and Sequencing of Membrane Remodeling Leading to Influenza Virus Fusion
Gui, Long; Ebner, Jamie L.; Mileant, Alexander; Williams, James A.
2016-01-01
Protein-mediated membrane fusion is an essential step in many fundamental biological events, including enveloped virus infection. The nature of protein and membrane intermediates and the sequence of membrane remodeling during these essential processes remain poorly understood. Here we used cryo-electron tomography (cryo-ET) to image the interplay between influenza virus and vesicles with a range of lipid compositions. By following the population kinetics of membrane fusion intermediates imaged by cryo-ET, we found that membrane remodeling commenced with the hemagglutinin fusion protein spikes grappling onto the target membrane, followed by localized target membrane dimpling as local clusters of hemagglutinin started to undergo conformational refolding. The local dimples then transitioned to extended, tightly apposed contact zones where the two proximal membrane leaflets were in most cases indistinguishable from each other, suggesting significant dehydration and possible intermingling of the lipid head groups. Increasing the content of fusion-enhancing cholesterol or bis-monoacylglycerophosphate in the target membrane led to an increase in extended contact zone formation. Interestingly, hemifused intermediates were found to be extremely rare in the influenza virus fusion system studied here, most likely reflecting the instability of this state and its rapid conversion to postfusion complexes, which increased in population over time. By tracking the populations of fusion complexes over time, the architecture and sequence of membrane reorganization leading to efficient enveloped virus fusion were thus resolved. IMPORTANCE Enveloped viruses employ specialized surface proteins to mediate fusion of cellular and viral membranes that results in the formation of pores through which the viral genetic material is delivered to the cell. For influenza virus, the trimeric hemagglutinin (HA) glycoprotein spike mediates host cell attachment and membrane fusion. While structures of a subset of conformations and parts of the fusion machinery have been characterized, the nature and sequence of membrane deformations during fusion have largely eluded characterization. Building upon studies that focused on early stages of HA-mediated membrane remodeling, here cryo-electron tomography (cryo-ET) was used to image the three-dimensional organization of intact influenza virions at different stages of fusion with liposomes, leading all the way to completion of the fusion reaction. By monitoring the evolution of fusion intermediate populations over the course of acid-induced fusion, we identified the progression of membrane reorganization that leads to efficient fusion by an enveloped virus. PMID:27226364
Basset, Antoine; Bouthemy, Patrick; Boulanger, Jérôme; Waharte, François; Salamero, Jean; Kervrann, Charles
2017-07-24
Characterizing membrane dynamics is a key issue to understand cell exchanges with the extra-cellular medium. Total internal reflection fluorescence microscopy (TIRFM) is well suited to focus on the late steps of exocytosis at the plasma membrane. However, it is still a challenging task to quantify (lateral) diffusion and estimate local dynamics of proteins. A new model was introduced to represent the behavior of cargo transmembrane proteins during the vesicle fusion to the plasma membrane at the end of the exocytosis process. Two biophysical parameters, the diffusion coefficient and the release rate parameter, are automatically estimated from TIRFM image sequences, to account for both the lateral diffusion of molecules at the membrane and the continuous release of the proteins from the vesicle to the plasma membrane. Quantitative evaluation on 300 realistic computer-generated image sequences demonstrated the efficiency and accuracy of the method. The application of our method on 16 real TIRFM image sequences additionally revealed differences in the dynamic behavior of Transferrin Receptor (TfR) and Langerin proteins. An automated method has been designed to simultaneously estimate the diffusion coefficient and the release rate for each individual vesicle fusion event at the plasma membrane in TIRFM image sequences. It can be exploited for further deciphering cell membrane dynamics.
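A toy sketch of how the two biophysical parameters might be fit from per-event measurements, under the simplifying assumption that the spot width grows as σ²(t) = σ₀² + 4Dt while total fluorescence decays exponentially with the release rate; this is not the authors' estimator, and all data below are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def width_model(t, sigma0_sq, D):
    return sigma0_sq + 4.0 * D * t          # lateral diffusion broadens the spot

def decay_model(t, I0, k):
    return I0 * np.exp(-k * t)               # release empties the vesicle

t = np.linspace(0, 5, 50)                                        # seconds
sigma_sq = 1.0 + 4 * 0.3 * t + 0.05 * np.random.randn(50)        # synthetic widths
intensity = 100 * np.exp(-0.8 * t) + np.random.randn(50)         # synthetic totals

(w0, D), _ = curve_fit(width_model, t, sigma_sq, p0=[1.0, 0.1])
(I0, k), _ = curve_fit(decay_model, t, intensity, p0=[100.0, 1.0])
print(f"D ~ {D:.2f} um^2/s, release rate k ~ {k:.2f} 1/s")
```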
NASA Astrophysics Data System (ADS)
Wu, Jiangling; Huang, Yu; Bian, Xintong; Li, DanDan; Cheng, Quan; Ding, Shijia
2016-10-01
In this work, a custom-made intensity-interrogation surface plasmon resonance imaging (SPRi) system has been developed to directly detect a specific sequence of the BCR/ABL fusion gene in chronic myelogenous leukemia (CML). The variation in the reflected light intensity detected from the sensor chip, composed of a gold island array, is proportional to the change of refractive index due to the selective hybridization of surface-bound DNA probes with target ssDNA. SPRi measurements were performed with different concentrations of a synthetic target DNA sequence. The calibration curve of the synthetic target sequence shows a good relationship between the concentration of the synthetic target and the change of reflected light intensity. The detection limit of this SPRi measurement could approach 10.29 nM. By comparing SPRi images, the target ssDNA and a non-complementary DNA sequence can be distinguished. This SPRi system has been applied to the assay of the BCR/ABL fusion gene extracted from real samples. This nucleic acid-based SPRi biosensor therefore offers an alternative highly effective, high-throughput, label-free tool for DNA detection in biomedical research and molecular diagnosis.
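A hedged sketch of building such a calibration curve and a 3σ/slope detection limit (a common convention, not necessarily the paper's procedure; all numbers below are invented for illustration):

```python
import numpy as np

# Synthetic calibration data (assumed values for illustration only)
conc = np.array([5, 10, 25, 50, 100.0])          # target DNA concentration, nM
delta_I = np.array([0.8, 1.6, 4.1, 8.2, 16.5])   # reflected-intensity change, a.u.
blank_sd = 0.25                                   # std. dev. of blank signal

slope, intercept = np.polyfit(conc, delta_I, 1)   # linear calibration curve
lod = 3.0 * blank_sd / slope                      # common 3-sigma criterion
print(f"sensitivity {slope:.3f} a.u./nM, LOD ~ {lod:.1f} nM")
```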
Fusion of infrared and visible images based on BEMD and NSDFB
NASA Astrophysics Data System (ADS)
Zhu, Pan; Huang, Zhanhua; Lei, Hai
2016-07-01
This paper presents a new fusion method based on the adaptive multi-scale decomposition of bidimensional empirical mode decomposition (BEMD) and the flexible directional expansion of nonsubsampled directional filter banks (NSDFB) for visible-infrared images. Compared with conventional multi-scale fusion methods, BEMD is non-parametric and completely data-driven, which makes it relatively more suitable for non-linear signal decomposition and fusion. NSDFB can provide directional filtering on the decomposition levels to capture more of the geometrical structure of the source images effectively. In our fusion framework, the entropies of the two source images are first calculated, and the residue of the image whose entropy is larger is extracted to make it highly relevant to the other source image. Then, the residue and the other source image are decomposed into low-frequency sub-bands and a sequence of high-frequency directional sub-bands at different scales by using BEMD and NSDFB. In this fusion scheme, two relevant fusion rules are used in the low-frequency sub-bands and the high-frequency directional sub-bands, respectively. Finally, the fused image is obtained by applying the corresponding inverse transform. Experimental results indicate that the proposed fusion algorithm can obtain state-of-the-art performance for visible-infrared image fusion in both objective assessment and subjective visual quality, even for source images obtained under different conditions. Furthermore, the fused results have high contrast, remarkable target information and rich detail information that are more suitable for human visual characteristics or machine perception.
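A minimal sketch of the entropy comparison used to decide which source image supplies the residue (the bin count and toy images are assumptions):

```python
import numpy as np

def shannon_entropy(img, bins=256):
    """Shannon entropy of an image's gray-level histogram (bits)."""
    hist, _ = np.histogram(img, bins=bins, range=(img.min(), img.max()))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# the residue extraction is applied to whichever source has the larger entropy
vis = np.random.rand(128, 128)
ir = np.random.rand(128, 128) ** 2
richer = "visible" if shannon_entropy(vis) > shannon_entropy(ir) else "infrared"
print("extract residue from:", richer)
```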
Translational regulation of sigma 32 synthesis: requirement for an internal control element.
Kamath-Loeb, A S; Gross, C A
1991-01-01
We have investigated the sequence requirements for the translational regulation of sigma 32 by examining the behavior of a new rpoH-lacZ protein fusion containing a short N-terminal fragment of sigma 32 fused to beta-galactosidase. Although the fusion retains rpoH translational initiation signals, it lacks translational regulation, implicating coding sequences within rpoH in this regulatory process. PMID:2050641
[Time consumption and quality of an automated fusion tool for SPECT and MRI images of the brain].
Fiedler, E; Platsch, G; Schwarz, A; Schmiedehausen, K; Tomandl, B; Huk, W; Rupprecht, Th; Rahn, N; Kuwert, T
2003-10-01
Although the fusion of images from different modalities may improve diagnostic accuracy, it is rarely used in clinical routine work due to logistic problems. Therefore we evaluated the performance and time needed for fusing MRI and SPECT images using semiautomated dedicated software. In 32 patients, regional cerebral blood flow was measured using (99m)Tc ethyl cysteinate dimer (ECD) and the three-headed SPECT camera MultiSPECT 3. MRI scans of the brain were performed using either a 0.2 T Open or a 1.5 T Sonata. Twelve of the MRI data sets were acquired using a 3D-T1w MPRAGE sequence, 20 with a 2D acquisition technique and different echo sequences. Image fusion was performed on a Syngo workstation using an entropy-minimizing algorithm by an experienced user of the software. The fusion results were classified. We measured the time needed for the automated fusion procedure and, where needed, that for manual realignment after automated but insufficient fusion. The mean time of the automated fusion procedure was 123 s. It was significantly shorter for the 2D than for the 3D MRI data sets. For four of the 2D data sets and two of the 3D data sets an optimal fit was reached using the automated approach. The remaining 26 data sets required manual correction. The sum of the time required for automated fusion and that needed for manual correction averaged 320 s (50-886 s). The fusion of 3D MRI data sets lasted significantly longer than that of the 2D MRI data. The automated fusion tool delivered an optimal fit in 20% of cases; in 80%, manual correction was necessary. Nevertheless, each of the 32 SPECT data sets could be merged in less than 15 min with the corresponding MRI data, which seems acceptable for clinical routine use.
Neumann, Jan-Oliver; Giese, Henrik; Biller, Armin; Nagel, Armin M; Kiening, Karl
2015-01-01
Magnetic resonance imaging (MRI) is replacing computed tomography (CT) as the main imaging modality for stereotactic transformations. MRI is prone to spatial distortion artifacts, which can lead to inaccuracy in stereotactic procedures. Modern MRI systems provide distortion correction algorithms that may ameliorate this problem. This study investigates the different options of distortion correction using standard 1.5-, 3- and 7-tesla MRI scanners. A phantom was mounted on a stereotactic frame. One CT scan and three MRI scans were performed. At all three field strengths, two 3-dimensional sequences, volumetric interpolated breath-hold examination (VIBE) and magnetization-prepared rapid acquisition with gradient echo, were acquired, and automatic distortion correction was performed. Global stereotactic transformation of all 13 datasets was performed and two stereotactic planning workflows (MRI only vs. CT/MR image fusion) were subsequently analysed. Distortion correction on the 1.5- and 3-tesla scanners caused a considerable reduction in positional error. The effect was more pronounced when using the VIBE sequences. By using co-registration (CT/MR image fusion), even a lower positional error could be obtained. In ultra-high-field (7 T) MR imaging, distortion correction introduced even higher errors. However, the accuracy of non-corrected 7-tesla sequences was comparable to CT/MR image fusion 3-tesla imaging. MRI distortion correction algorithms can reduce positional errors by up to 60%. For stereotactic applications of utmost precision, we recommend a co-registration to an additional CT dataset. © 2015 S. Karger AG, Basel.
Evaluation of MRI-US Fusion Technology in Sports-Related Musculoskeletal Injuries.
Wong-On, Manuel; Til-Pérez, Lluís; Balius, Ramón
2015-06-01
A combination of magnetic resonance imaging (MRI) with real-time high-resolution ultrasound (US) known as fusion imaging may improve visualization of musculoskeletal (MSK) sports medicine injuries. The aim of this study was to evaluate the applicability of MRI-US fusion technology in MSK sports medicine. This study was conducted by the medical services of the FC Barcelona. The participants included volunteers and referred athletes with symptomatic and asymptomatic MSK injuries. All cases underwent MRI which was loaded into the US system for manual registration on the live US image and fusion imaging examination. After every test, an evaluation form was completed in terms of advantages, disadvantages, and anatomic fusion landmarks. From November 2014 to March 2015, we evaluated 20 subjects who underwent fusion imaging, 5 non-injured volunteers and 15 injured athletes, 11 symptomatic and 4 asymptomatic, age range 16-50 years, mean 22. We describe some of the anatomic landmarks used to guide fusion in different regions. This technology allowed us to examine muscle and tendon injuries simultaneously in US and MRI, and the correlation of both techniques, especially low-grade muscular injuries. This has also helped compensate for the limited field of view with US. It improves spatial orientation of cartilage, labrum and meniscal injuries. However, a high-quality MRI image is essential in achieving an adequate fusion image, and 3D sequences need to be added in MRI protocols to improve navigation. The combination of real-time MRI and US image fusion and navigation is relatively easy to perform and is helping to improve understanding of MSK injuries. However, it requires specific skills in MSK imaging and still needs further research in sports-related injuries. Toshiba Medical Systems Corporation.
Progressive Label Fusion Framework for Multi-atlas Segmentation by Dictionary Evolution
Song, Yantao; Wu, Guorong; Sun, Quansen; Bahrami, Khosro; Li, Chunming; Shen, Dinggang
2015-01-01
Accurate segmentation of anatomical structures in medical images is very important in neuroscience studies. Recently, multi-atlas patch-based label fusion methods have achieved many successes, which generally represent each target patch from an atlas patch dictionary in the image domain and then predict the latent label by directly applying the estimated representation coefficients in the label domain. However, due to the large gap between these two domains, the estimated representation coefficients in the image domain may not stay optimal for the label fusion. To overcome this dilemma, we propose a novel label fusion framework to make the weighting coefficients eventually to be optimal for the label fusion by progressively constructing a dynamic dictionary in a layer-by-layer manner, where a sequence of intermediate patch dictionaries gradually encode the transition from the patch representation coefficients in image domain to the optimal weights for label fusion. Our proposed framework is general to augment the label fusion performance of the current state-of-the-art methods. In our experiments, we apply our proposed method to hippocampus segmentation on ADNI dataset and achieve more accurate labeling results, compared to the counterpart methods with single-layer dictionary. PMID:26942233
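For context, a minimal sketch of the conventional single-layer scheme this framework improves upon: similarity weights estimated in the image domain are applied directly to the atlas labels (patch size, kernel bandwidth, and binary labels are assumptions):

```python
import numpy as np

def patch_label_fusion(target_patch, atlas_patches, atlas_labels, h=0.1):
    """Non-local weighted voting: similarity weights computed in the image
    domain are applied to the corresponding labels in the label domain."""
    w = np.array([np.exp(-np.sum((target_patch - p) ** 2) / (h ** 2))
                  for p in atlas_patches])
    w /= w.sum()
    return float(np.dot(w, atlas_labels))   # probability of the foreground label

# toy usage: twenty 5x5 atlas patches with binary center labels
tp = np.random.rand(5, 5)
aps = [np.random.rand(5, 5) for _ in range(20)]
labels = np.random.randint(0, 2, 20)
print("foreground probability:", patch_label_fusion(tp, aps, labels))
```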
A new method of Quickbird own image fusion
NASA Astrophysics Data System (ADS)
Han, Ying; Jiang, Hong; Zhang, Xiuying
2009-10-01
With the rapid development of remote sensing technology, the means of accessing remote sensing data have become increasingly abundant; the same area can thus yield a large number of multi-temporal image sequences at different resolutions. At present, the main fusion methods are HPF, the IHS transform, PCA, Brovey, the Mallat algorithm and the wavelet transform. The IHS transform introduces serious spectral distortion, while the Mallat algorithm omits the low-frequency information of the high-spatial-resolution images, so its fusion results show obvious blocking effects. Wavelet multi-scale decomposition across scales and directions preserves details and edges and can achieve very good results, but different fusion rules and algorithms yield different effects. This article takes the fusion of Quickbird imagery with itself as an example, comparing integration based on the wavelet transform combined with HVS against the wavelet transform combined with IHS. The results show that the former performs better. The paper uses the correlation coefficient, the relative average spectral error index and other common indices to evaluate image quality.
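A minimal sketch of the correlation-coefficient quality index mentioned for fusion evaluation (toy data; the paper's other indices are not reproduced):

```python
import numpy as np

def correlation_coefficient(a, b):
    """Pearson correlation between a fused image and a reference band."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

fused = np.random.rand(256, 256)
reference = np.random.rand(256, 256)
print("CC:", correlation_coefficient(fused, reference))
```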
Progressive multi-atlas label fusion by dictionary evolution.
Song, Yantao; Wu, Guorong; Bahrami, Khosro; Sun, Quansen; Shen, Dinggang
2017-02-01
Accurate segmentation of anatomical structures in medical images is important in recent imaging based studies. In the past years, multi-atlas patch-based label fusion methods have achieved a great success in medical image segmentation. In these methods, the appearance of each input image patch is first represented by an atlas patch dictionary (in the image domain), and then the latent label of the input image patch is predicted by applying the estimated representation coefficients to the corresponding anatomical labels of the atlas patches in the atlas label dictionary (in the label domain). However, due to the generally large gap between the patch appearance in the image domain and the patch structure in the label domain, the estimated (patch) representation coefficients from the image domain may not be optimal for the final label fusion, thus reducing the labeling accuracy. To address this issue, we propose a novel label fusion framework to seek for the suitable label fusion weights by progressively constructing a dynamic dictionary in a layer-by-layer manner, where the intermediate dictionaries act as a sequence of guidance to steer the transition of (patch) representation coefficients from the image domain to the label domain. Our proposed multi-layer label fusion framework is flexible enough to be applied to the existing labeling methods for improving their label fusion performance, i.e., by extending their single-layer static dictionary to the multi-layer dynamic dictionary. The experimental results show that our proposed progressive label fusion method achieves more accurate hippocampal segmentation results for the ADNI dataset, compared to the counterpart methods using only the single-layer static dictionary. Copyright © 2016 Elsevier B.V. All rights reserved.
Underwater video enhancement using multi-camera super-resolution
NASA Astrophysics Data System (ADS)
Quevedo, E.; Delory, E.; Callicó, G. M.; Tobajas, F.; Sarmiento, R.
2017-12-01
Image spatial resolution is critical in several fields, such as medicine, communications, and satellite and underwater applications. While a large variety of techniques for image restoration and enhancement has been proposed in the literature, this paper focuses on a novel Super-Resolution fusion algorithm based on a Multi-Camera environment that makes it possible to enhance the quality of underwater video sequences without significantly increasing computation. In order to compare the quality enhancement, two objective quality metrics have been used: PSNR (Peak Signal-to-Noise Ratio) and the SSIM (Structural SIMilarity) index. Results have shown that the proposed method enhances the objective quality of several underwater sequences, avoiding the appearance of undesirable artifacts, with respect to basic fusion Super-Resolution algorithms.
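For reference, the two objective metrics can be computed as in this sketch (PSNR implemented directly; SSIM via scikit-image, an assumed tooling choice):

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(ref, test, data_range=255.0):
    """Peak Signal-to-Noise Ratio in dB between a reference and a test frame."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.random.randint(0, 256, (240, 320)).astype(np.uint8)
test = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print("PSNR:", psnr(ref, test), "SSIM:", structural_similarity(ref, test))
```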
Hamm, Klaus D; Surber, Gunnar; Schmücking, Michael; Wurm, Reinhard E; Aschenbach, Rene; Kleinert, Gabriele; Niesen, A; Baum, Richard P
2004-11-01
Innovative new software solutions may enable image fusion to produce the desired data superposition for precise target definition and follow-up studies in radiosurgery/stereotactic radiotherapy in patients with intracranial lesions. The aim is to integrate the anatomical and functional information completely into the radiation treatment planning and to achieve an exact comparison for follow-up examinations. Special conditions and advantages of BrainLAB's fully automatic image fusion system are evaluated and described for this purpose. In 458 patients, the radiation treatment planning and some follow-up studies were performed using an automatic image fusion technique involving the use of different imaging modalities. Each fusion was visually checked and corrected as necessary. The computerized tomography (CT) scans for radiation treatment planning (slice thickness 1.25 mm), as well as stereotactic angiography for arteriovenous malformations, were acquired using head fixation with stereotactic arc or, in the case of stereotactic radiotherapy, with a relocatable stereotactic mask. Different magnetic resonance (MR) imaging sequences (T1, T2, and fluid-attenuated inversion-recovery images) and positron emission tomography (PET) scans were obtained without head fixation. Fusion results and the effects on radiation treatment planning and follow-up studies were analyzed. The precision level of the results of the automatic fusion depended primarily on the image quality, especially the slice thickness and the field homogeneity when using MR images, as well as on patient movement during data acquisition. Fully automated image fusion of different MR, CT, and PET studies was performed for each patient. Only in a few cases was it necessary to correct the fusion manually after visual evaluation. These corrections were minor and did not materially affect treatment planning. High-quality fusion of thin slices of a region of interest with a complete head data set could be performed easily. The target volume for radiation treatment planning could be accurately delineated using multimodal information provided by CT, MR, angiography, and PET studies. The fusion of follow-up image data sets yielded results that could be successfully compared and quantitatively evaluated. Depending on the quality of the originally acquired image, automated image fusion can be a very valuable tool, allowing for fast (approximately 1-2 minutes) and precise fusion of all relevant data sets. Fused multimodality imaging improves the target volume definition for radiation treatment planning. High-quality follow-up image data sets should be acquired for image fusion to provide exactly comparable slices and volumetric results that will contribute to quality control.
Bickelhaupt, Sebastian; Tesdorff, Jana; Laun, Frederik Bernd; Kuder, Tristan Anselm; Lederer, Wolfgang; Teiner, Susanne; Maier-Hein, Klaus; Daniel, Heidi; Stieber, Anne; Delorme, Stefan; Schlemmer, Heinz-Peter
2017-02-01
The aim of this study was to evaluate the accuracy and applicability of solitarily reading fused image series of T2-weighted and high-b-value diffusion-weighted sequences for lesion characterization as compared to sequential or combined image analysis of these unenhanced sequences and to contrast-enhanced breast MRI. This IRB-approved study included 50 female participants with suspicious breast lesions detected in screening X-ray mammograms, all of whom provided written informed consent. Prior to biopsy, all women underwent MRI including diffusion-weighted imaging (DWIBS, b = 1500 s/mm²). Images were analyzed as follows: prospective image fusion of DWIBS and T2-weighted images (FU), side-by-side analysis of DWIBS and T2-weighted series (CO), combination of the first two methods (CO+FU), and full contrast-enhanced diagnostic protocol (FDP). Diagnostic indices, confidence, and image quality of the protocols were compared by two blinded readers. Reading the CO+FU (accuracy 0.92; NPV 96.1 %; PPV 87.6 %) and the CO series (0.90; 96.1 %; 83.7 %) provided a diagnostic performance similar to the FDP (0.95; 96.1 %; 91.3 %; p > 0.05). FU reading alone significantly reduced the diagnostic accuracy (0.82; 93.3 %; 73.4 %; p = 0.023). MR evaluation of suspicious BI-RADS 4 and 5 lesions detected on mammography by using a non-contrast-enhanced T2-weighted and DWIBS sequence protocol is most accurate if MR images are read using the CO+FU protocol. • Unenhanced breast MRI with additional DWIBS/T2w-image fusion allows reliable lesion characterization. • Abbreviated reading of fused DWIBS/T2w-images alone decreases diagnostic confidence and accuracy. • Reading fused DWIBS/T2w-images as the sole diagnostic method should be avoided.
Sensor fusion V; Proceedings of the Meeting, Boston, MA, Nov. 15-17, 1992
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1992-01-01
Topics addressed include 3D object perception, human-machine interface in multisensor systems, sensor fusion architecture, fusion of multiple and distributed sensors, interface and decision models for sensor fusion, computational networks, simple sensing for complex action, multisensor-based control, and metrology and calibration of multisensor systems. Particular attention is given to controlling 3D objects by sketching 2D views, the graphical simulation and animation environment for flexible structure robots, designing robotic systems from sensorimotor modules, cylindrical object reconstruction from a sequence of images, an accurate estimation of surface properties by integrating information using Bayesian networks, an adaptive fusion model for a distributed detection system, multiple concurrent object descriptions in support of autonomous navigation, robot control with multiple sensors and heuristic knowledge, and optical array detectors for image sensor calibration. (No individual items are abstracted in this volume)
A fast and automatic fusion algorithm for unregistered multi-exposure image sequence
NASA Astrophysics Data System (ADS)
Liu, Yan; Yu, Feihong
2014-09-01
The human visual system (HVS) can perceive all the brightness levels of a scene through visual adaptation. However, the dynamic range of most commercial digital cameras and display devices is smaller than that of the human eye, which implies that low dynamic range (LDR) images captured by a normal digital camera may lose image details. We propose an efficient approach to high dynamic range (HDR) image fusion that copes with image displacement and image blur degradation in a computationally efficient manner, making it suitable for implementation on mobile devices. The various image registration algorithms proposed in the previous literature are unable to meet the efficiency and performance requirements of mobile devices. In this paper, we select the ORB (Oriented FAST and Rotated BRIEF) detector to extract local image structures. The descriptor used in a multi-exposure image fusion algorithm has to be fast and robust to illumination variations and geometric deformations, and the ORB descriptor is the best candidate for our algorithm. Further, we apply an improved RANdom SAmple Consensus (RANSAC) algorithm to reject incorrect matches. For the fusion of the images, a new approach based on the Stationary Wavelet Transform (SWT) is used. The experimental results demonstrate that the proposed algorithm generates high-quality images at low computational cost. Comparisons with a number of other feature matching methods show that our method achieves better performance.
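A hedged sketch of the ORB-plus-RANSAC registration stage using OpenCV (plain cv2.RANSAC rather than the paper's improved variant; the feature count, match limit, reprojection threshold, and file names are assumptions):

```python
import cv2
import numpy as np

def register_pair(src, dst):
    """Estimate a homography from src to dst with ORB features + RANSAC."""
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(src, None)
    k2, d2 = orb.detectAndCompute(dst, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:500]
    p1 = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    p2 = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(p1, p2, cv2.RANSAC, 5.0)  # reject bad matches
    return H

# usage: warp a short exposure onto the reference before fusion
# src, dst = cv2.imread("short.png", 0), cv2.imread("ref.png", 0)  # grayscale
# warped = cv2.warpPerspective(src, register_pair(src, dst), dst.shape[::-1])
```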
Highlander, S K; Wickersham, E A; Garza, O; Weinstock, G M
1993-01-01
Multicopy and single-copy chromosomal fusions between the Pasteurella haemolytica leukotoxin regulatory region and the Escherichia coli beta-galactosidase gene have been constructed. These fusions were used as reporters to identify and isolate regulators of leukotoxin expression from a P. haemolytica cosmid library. A cosmid clone, which inhibited leukotoxin expression from multicopy and single-copy protein fusions, was isolated and found to contain the complete leukotoxin gene cluster plus additional upstream sequences. The locus responsible for inhibition of expression from leukotoxin-beta-galactosidase fusions was mapped within these upstream sequences, by transposon mutagenesis with Tn5, and its DNA sequence was determined. The inhibitory activity was found to be associated with a predicted 440-amino-acid reading frame (lapA) that lies within a four-gene arginine transport locus. LapA is predicted to be the nucleotide-binding component of this transport system and shares homology with the Clp family of proteases. PMID:8359916
An object tracking method based on guided filter for night fusion image
NASA Astrophysics Data System (ADS)
Qian, Xiaoyan; Wang, Yuedong; Han, Lei
2016-01-01
Online object tracking is a challenging problem as it entails learning an effective model to account for appearance change caused by intrinsic and extrinsic factors. In this paper, we propose a novel online object tracking method with a guided image filter for accurate and robust tracking in night fusion images. Firstly, frame differencing is applied to produce the coarse target, which helps to generate the observation models. Under the constraints of these models and the local source image, the guided filter generates a sufficient and accurate foreground target. Accurate boundaries of the target can then be extracted from the detection results. Finally, timely updating of the observation models helps to avoid tracking drift. Both qualitative and quantitative evaluations on challenging image sequences demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods.
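The guided filter itself is a published algorithm (He et al.); below is a generic gray-scale sketch, not the paper's implementation, with assumed radius and regularization values:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Edge-preserving smoothing of `src` steered by the structure of `guide`."""
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size)
    I, p = guide.astype(np.float64), src.astype(np.float64)
    mI, mp = mean(I), mean(p)
    cov_Ip = mean(I * p) - mI * mp
    var_I = mean(I * I) - mI * mI
    a = cov_Ip / (var_I + eps)        # local linear coefficients: q ~ a*I + b
    b = mp - a * mI
    return mean(a) * I + mean(b)

# toy usage: refine a coarse frame-difference mask with the fused frame as guide
fused_frame = np.random.rand(120, 160)
coarse_target = (np.random.rand(120, 160) > 0.9).astype(float)
refined = guided_filter(fused_frame, coarse_target)
```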
NASA Astrophysics Data System (ADS)
Awwal, Abdul A. S.; Bliss, Erlan S.; Miller Kamm, Victoria; Leach, Richard R.; Roberts, Randy; Rushford, Michael C.; Lowe-Webb, Roger; Wilhelmsen, Karl
2015-09-01
Four of the 192 beams of the National Ignition Facility (NIF) are currently being diverted into the Advanced Radiographic Capability (ARC) system to generate a sequence of short (1-50 picoseconds) 1053 nm laser pulses. When focused onto high Z wires in vacuum, these pulses create high energy x-ray pulses capable of penetrating the dense, imploding fusion fuel plasma during ignition scale experiments. The transmitted x-rays imaged with x-ray diagnostics can create movie radiographs that are expected to provide unprecedented insight into the implosion dynamics. The resulting images will serve as a diagnostic for tuning the experimental parameters towards successful fusion reactions. Beam delays introduced into the ARC pulses via independent, free-space optical trombones create the desired x-ray image sequence, or movie. However, these beam delays cause optical distortion of various alignment fiducials viewed by alignment sensors in the NIF and ARC beamlines. This work describes how the position of circular alignment fiducials is estimated in the presence of distortion.
NASA Astrophysics Data System (ADS)
Debon, Renaud; Le Guillou, Clara; Cauvin, Jean-Michel; Solaiman, Basel; Roux, Christian
2001-08-01
The medical domain makes intensive use of information fusion. In particular, gastroenterology is a discipline where physicians can choose between several imaging modalities that offer complementary advantages. Among all existing systems, videoendoscopy (based on a CCD sensor) and echoendoscopy (based on an ultrasound sensor) are the most effective, and the use of each system corresponds to a given step in the physician's diagnostic process. Nowadays, several works aim to achieve automatic interpretation of videoendoscopic sequences. These systems can quantify the color and superficial textures of the digestive tube; unfortunately, the relief information, which is important for the diagnosis, is very difficult to retrieve. On the other hand, some studies have shown that 3D information can be easily quantified using echoendoscopy image sequences. Combining this information, acquired from two very different points of view, can therefore be considered a real challenge for medical image fusion. In this paper, after a review of current work on the numerical exploitation of videoendoscopy and echoendoscopy, the following question is discussed: how can the complementary aspects of the different systems ease the automatic exploitation of videoendoscopy? Secondly, we evaluate the feasibility of a realistic 3D reconstruction based on information from both echoendoscopy (relief) and videoendoscopy (texture). An enumeration of potential applications of such a fusion system then follows. Further discussion and perspectives conclude this first study.
Ghost detection and removal based on super-pixel grouping in exposure fusion
NASA Astrophysics Data System (ADS)
Jiang, Shenyu; Xu, Zhihai; Li, Qi; Chen, Yueting; Feng, Huajun
2014-09-01
A novel multi-exposure image fusion method for dynamic scenes is proposed. The commonly used techniques for high dynamic range (HDR) imaging are based on the combination of multiple differently exposed images of the same scene. The drawback of these methods is that ghosting artifacts will be introduced into the final HDR image if the scene is not static. In this paper, a super-pixel grouping based method is proposed to detect ghosts in the image sequences. We introduce the zero mean normalized cross correlation (ZNCC) as a measure of similarity between a given exposure image and the reference. The calculation of ZNCC is implemented at the super-pixel level, and the super-pixels which have low correlation with the reference are excluded by adjusting the weight maps for fusion. Without any prior information on the camera response function or exposure settings, the proposed method generates low dynamic range (LDR) images which can be shown on conventional display devices directly, with details preserved and ghost effects reduced. Experimental results show that the proposed method generates high quality images which have fewer ghost artifacts and provide a better visual quality than previous approaches.
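A minimal sketch of the ZNCC similarity measure computed over the pixels of one super-pixel in two exposures (the exclusion threshold mentioned in the comment is an assumed value):

```python
import numpy as np

def zncc(a, b, eps=1e-12):
    """Zero-mean normalized cross-correlation between two pixel groups,
    e.g. the pixels of one super-pixel in two differently exposed frames."""
    a = a.astype(np.float64).ravel() - a.mean()
    b = b.astype(np.float64).ravel() - b.mean()
    return float(np.sum(a * b) / (np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)) + eps))

# super-pixels whose ZNCC with the reference falls below a threshold
# (e.g. 0.8, an assumed value) would be down-weighted as ghost candidates
ref_sp = np.random.rand(200)
test_sp = ref_sp * 1.3 + 0.05      # same content, different exposure
print("ZNCC:", zncc(ref_sp, test_sp))
```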
Neurovascular Study of the Trigeminal Nerve at 3 T MRI
Gonzalez, Nadia; Muñoz, Alexandra; Bravo, Fernando; Sarroca, Daniel; Morales, Carlos
2015-01-01
This study aimed to show a novel visualization method to investigate neurovascular compression of the trigeminal nerve (TN) using a volume-rendering fusion imaging technique of 3D fast imaging employing steady-state acquisition (3D FIESTA) and coregistered 3D time of flight MR angiography (3D TOF MRA) sequences, which we called “neurovascular study of the trigeminal nerve”. We prospectively studied 30 patients with unilateral trigeminal neuralgia (TN) and 50 subjects without symptoms of TN (control group), on a 3 Tesla scanner. All patients were assessed using 3D FIESTA and 3D TOF MRA sequences centered on the pons, as well as a standard brain protocol including axial T1, T2, FLAIR and GRE sequences to exclude other pathologies that could cause TN. Post-contrast T1-weighted sequences were also performed. All cases showing arterial imprinting on the trigeminal nerve (n = 11) were identified on the ipsilateral side of the pain. No significant relationship was found between the presence of an artery in contact with the trigeminal nerve and TN. Eight cases were found showing arterial contact on the ipsilateral side of the pain and five cases of arterial contact on the contralateral side. The fusion imaging technique of 3D FIESTA and 3D TOF MRA sequences, combining the high anatomical detail provided by the 3D FIESTA sequence with the 3D TOF MRA sequence and its capacity to depict arterial structures, results in a tool that enables quick and efficient visualization and assessment of the relationship between the trigeminal nerve and the neighboring vascular structures. PMID:25924169
Weirather, Jason L.; Afshar, Pegah Tootoonchi; Clark, Tyson A.; Tseng, Elizabeth; Powers, Linda S.; Underwood, Jason G.; Zabner, Joseph; Korlach, Jonas; Wong, Wing Hung; Au, Kin Fai
2015-01-01
We developed an innovative hybrid sequencing approach, IDP-fusion, to detect fusion genes, determine fusion sites and identify and quantify fusion isoforms. IDP-fusion is the first method to study gene fusion events by integrating Third Generation Sequencing long reads and Second Generation Sequencing short reads. We applied IDP-fusion to PacBio data and Illumina data from the MCF-7 breast cancer cells. Compared with the existing tools, IDP-fusion detects fusion genes at higher precision and a very low false positive rate. The results show that IDP-fusion will be useful for unraveling the complexity of multiple fusion splices and fusion isoforms within tumorigenesis-relevant fusion genes. PMID:26040699
Paprottka, P M; Zengel, P; Cyran, C C; Ingrisch, M; Nikolaou, K; Reiser, M F; Clevert, D A
2014-01-01
To evaluate ultrasound tissue elasticity imaging by comparison to multimodality imaging using image fusion with Magnetic Resonance Imaging (MRI) and conventional grey scale imaging with additional elasticity ultrasound in an experimental small-animal squamous-cell carcinoma model for the assessment of tissue morphology. Human hypopharynx carcinoma cells were subcutaneously injected into the left flank of 12 female athymic nude rats. After 10 days (SD ± 2) of subcutaneous tumor growth, sonographic grey scale including elasticity imaging and MRI measurements were performed using a high-end ultrasound system and a 3 T MR scanner. For image fusion the contrast-enhanced MRI DICOM data set was uploaded in the ultrasound device, which has a magnetic field generator, a linear array transducer (6-15 MHz) and a dedicated software package (GE Logiq E9) that can detect transducers by means of a positioning system. Conventional grey scale and elasticity imaging were integrated in the image fusion examination. After successful registration and image fusion, the registered MR images were shown simultaneously with the respective ultrasound sectional plane. Data evaluation was performed on the digitally stored video sequence data sets by two experienced radiologists using a modified Tsukuba Elasticity score. The colors "red and green" are assigned to an area of soft tissue; "blue" indicates hard tissue. In all cases a successful image fusion and plane registration with MRI and ultrasound imaging including grey scale and elasticity imaging was possible. The mean tumor volume based on caliper measurements in 3 dimensions was ~323 mm³. 4/12 rats were evaluated with Score I, 5/12 rats with Score II, and 3/12 rats with Score III. The fused MRI showed a close correlation with small necroses present in the tumor. None of the Score II or III lesions was visible by conventional grey scale. The comparison with ultrasound tissue elasticity imaging enables reliable differentiation between different tumor tissue areas relative to image fusion with MRI in our small study group. Therefore ultrasound tissue elasticity imaging might be used for fast detection of tumor response in the future, whereas conventional grey scale imaging alone could not provide the additional information. By using standard, contrast-enhanced MRI images for reliable and reproducible slice positioning, the strongly user-dependent limitation of ultrasound tissue elasticity imaging may be overcome, especially for a comparison between baseline and follow-up measurements.
R&D 100, 2016: Ultrafast X-ray Imager
Porter, John; Claus, Liam; Sanchez, Marcos; Robertson, Gideon; Riley, Nathan; Rochau, Greg
2018-06-13
The Ultrafast X-ray Imager is a solid-state camera capable of capturing a sequence of images with user-selectable exposure times as short as 2 billionths of a second. Using 3D semiconductor integration techniques to form a hybrid chip, this camera was developed to enable scientists to study the heating and compression of fusion targets in the quest to harness the energy process that powers the stars.
NASA Astrophysics Data System (ADS)
Tene, Yair; Tene, Noam; Tene, G.
1993-08-01
An interactive data fusion methodology of video, audio, and nonlinear structural dynamic analysis for potential application in forensic engineering is presented. The methodology was developed and successfully demonstrated in the analysis of the collapse of a heavy transportable bridge during preparation for testing. Multiple bridge element failures were identified after the collapse, including fracture, cracks and rupture of high performance structural materials. Videotape recording by a hand-held camcorder was the only source of information about the collapse sequence. The interactive data fusion methodology resulted in extracting relevant information from the videotape and from dynamic nonlinear structural analysis, leading to a full account of the sequence of events during the bridge collapse.
Development of a robust MRI fiducial system for automated fusion of MR-US abdominal images.
Favazza, Christopher P; Gorny, Krzysztof R; Callstrom, Matthew R; Kurup, Anil N; Washburn, Michael; Trester, Pamela S; Fowler, Charles L; Hangiandreou, Nicholas J
2018-05-21
We present the development of a two-component magnetic resonance (MR) fiducial system, that is, a fiducial marker device combined with an auto-segmentation algorithm, designed to be paired with existing ultrasound probe tracking and image fusion technology to automatically fuse MR and ultrasound (US) images. The fiducial device consisted of four ~6.4 mL cylindrical wells filled with 1 g/L copper sulfate solution. The algorithm was designed to automatically segment the device in clinical abdominal MR images. The algorithm's detection rate and repeatability were investigated through a phantom study and in human volunteers. The detection rate was 100% in all phantom and human images. The center-of-mass of the fiducial device was robustly identified with maximum variations of 2.9 mm in position and 0.9° in angular orientation. In volunteer images, average differences between algorithm-measured inter-marker spacings and actual separation distances were 0.53 ± 0.36 mm. "Proof-of-concept" automatic MR-US fusions were conducted with sets of images from both a phantom and volunteer using a commercial prototype system, which was built based on the above findings. Image fusion accuracy was measured to be within 5 mm for breath-hold scanning. These results demonstrate the capability of this approach to automatically fuse US and MR images acquired across a wide range of clinical abdominal pulse sequences. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Non-rigid registration for fusion of carotid vascular ultrasound and MRI volumetric datasets
NASA Astrophysics Data System (ADS)
Chan, R. C.; Sokka, S.; Hinton, D.; Houser, S.; Manzke, R.; Hanekamp, A.; Reddy, V. Y.; Kaazempur-Mofrad, M. R.; Rasche, V.
2006-03-01
In carotid plaque imaging, MRI provides exquisite soft-tissue characterization, but lacks the temporal resolution for tissue strain imaging that real-time 3D ultrasound (3DUS) can provide. On the other hand, real-time 3DUS currently lacks the spatial resolution of carotid MRI. Non-rigid alignment of ultrasound and MRI data is essential for integrating complementary morphology and biomechanical information for carotid vascular assessment. We assessed non-rigid registration for fusion of 3DUS and MRI carotid data based on deformable models which are warped to maximize voxel similarity. We performed validation in vitro using isolated carotid artery imaging. These samples were subjected to soft-tissue deformations during 3DUS and were imaged in a static configuration with standard MR carotid pulse sequences. Registration of the source ultrasound sequences to the target MR volume was performed and the mean absolute distance between fiducials within the ultrasound and MR datasets was measured to determine inter-modality alignment quality. Our results indicate that registration errors on the order of 1mm are possible in vitro despite the low-resolution of current generation 3DUS transducers. Registration performance should be further improved with the use of higher frequency 3DUS prototypes and efforts are underway to test those probes for in vivo 3DUS carotid imaging.
Schwein, Adeline; Lu, Tony; Chinnadurai, Ponraj; Kitkungvan, Danai; Shah, Dipan J; Chakfe, Nabil; Lumsden, Alan B; Bismuth, Jean
2017-01-01
Endovascular recanalization is considered first-line therapy for chronic central venous occlusion (CVO). Unlike arteries, in which landmarks such as wall calcifications provide indirect guidance for endovascular navigation, sclerotic veins without known vascular branching patterns impose significant challenges. Therefore, safe wire access through such chronic lesions mostly relies on intuition and experience. Studies have shown that magnetic resonance venography (MRV) can be performed safely in these patients, and the boundaries of occluded veins may be visualized on specific MRV sequences. Intraoperative image fusion techniques have become more common to guide complex arterial endovascular procedures. The aim of this study was to assess the feasibility and utility of MRV and intraoperative cone-beam computed tomography (CBCT) image fusion technique during endovascular CVO recanalization. During the study period, patients with symptomatic CVO and failed standard endovascular recanalization underwent further recanalization attempts with use of intraoperative MRV image fusion guidance. After preoperative MRV and intraoperative CBCT image coregistration, a virtual centerline path of the occluded segment was electronically marked in MRV and overlaid on real-time two-dimensional fluoroscopy images. Technical success, fluoroscopy times, radiation doses, number of venograms before recanalization, and accuracy of the virtual centerline overlay were evaluated. Four patients underwent endovascular CVO recanalization with use of intraoperative MRV image fusion guidance. Mean (± standard deviation) time for image fusion was 6:36 ± 00:51 mm:ss. The lesion was successfully crossed in all patients without complications. Mean fluoroscopy time for lesion crossing was 12.5 ± 3.4 minutes. Mean total fluoroscopy time was 28.8 ± 6.5 minutes. Mean total radiation dose was 15,185 ± 7747 μGy/m², and mean radiation dose from CBCT acquisition was 2788 ± 458 μGy/m² (18% of mean total radiation dose). Mean number of venograms before recanalization was 1.6 ± 0.9, whereas two lesions were crossed without any prior venography. On qualitative analysis, virtual centerlines from MRV were aligned with actual guidewire trajectory on fluoroscopy in all four cases. MRV image fusion is feasible and may improve success, safety, and the surgeon's confidence during CVO recanalization. Similar to arterial interventions, three-dimensional MRV imaging and image fusion techniques could foster innovative solutions for such complex venous interventions and have the potential to affect a great number of patients. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Nechifor-Boilă, I A; Bancu, S; Buruian, M; Charlot, M; Decaussin-Petrucci, M; Krauth, J-S; Nechifor-Boilă, A C; Borda, A
2013-01-01
Dynamic Contrast-Enhanced Magnetic Resonance Mammography (DCE-MRM) represents the most sensitive examination for breast cancer (BC) diagnosis. However, literature data report very inhomogeneous specificity. The aim of our study was to evaluate the clinical efficiency of a new MRM technique, diffusion-weighted imaging with background body signal suppression (DWIBS)-T2 image fusion, in BC diagnosis, compared to DCE-MRM. We retrospectively analyzed 50 consecutive DCE-MRM examinations with a DWIBS sequence from the archives of the Department of Radiology, Lyon Sud Hospital (02.2010-02.2011), summing up to 64 breast lesions. Fusions were created using the OsiriX software from the DWIBS images (b = 1000 s/mm²) and their T2 correspondents. Interpretation was performed using an adapted BI-RADS system. The final histopathological examination or a minimum 6-month follow-up served as the gold standard. Of the 64 examined breast lesions, 35 (54.7%) were classified as malignant by DCE-MRM and 24 (37.5%) by DWIBS-T2, respectively. The DWIBS-T2 fusion had a sensitivity of 62.5% (95% CI: 35.4-84.8) and a specificity of 70.8% (95% CI: 55.9-83.3), while DCE-MRM had a higher sensitivity, 87.5% (95% CI: 61.6-98.4), but a lower specificity, 56.2% (95% CI: 41.1-70.5). DWIBS-T2 fusion is an innovative MRM technique with a specificity superior to that of DCE-MRM, showing large potential for improving the clinical efficiency of classical MRM.
A hybrid image fusion system for endovascular interventions of peripheral artery disease.
Lalys, Florent; Favre, Ketty; Villena, Alexandre; Durrmann, Vincent; Colleaux, Mathieu; Lucas, Antoine; Kaladji, Adrien
2018-07-01
Interventional endovascular treatment has become the first line of management in the treatment of peripheral artery disease (PAD). However, contrast and radiation exposure continue to limit the feasibility of these procedures. This paper presents a novel hybrid image fusion system for endovascular intervention of PAD. We present two different roadmapping methods from intra- and pre-interventional imaging that can be used either simultaneously or independently, together constituting the navigation system. The navigation system is decomposed into several steps that can be entirely integrated within the procedure workflow, without modifying it, so that the procedure benefits from the roadmapping. First, a 2D panorama of the entire peripheral artery system is automatically created from a sequence of stepping fluoroscopic images acquired during the intra-interventional diagnosis phase. During the interventional phase, the live image can be synchronized on the panorama to form the basis of the image fusion system. Two types of augmented information are then integrated. First, an angiography panorama is proposed to avoid contrast media re-injection. Information exploiting the pre-interventional computed tomography angiography (CTA) is also brought to the surgeon by means of semiautomatic 3D/2D registration on the 2D panorama. Each step of the workflow was independently validated. Experiments for both the 2D panorama creation and the synchronization processes showed very accurate results (errors of 1.24 and [Formula: see text] mm, respectively), similarly to the registration on the 3D CTA (errors of [Formula: see text] mm), with minimal user interaction and very low computation time. First results of an ongoing clinical study highlighted its major clinical added value on intraoperative parameters. No image fusion system has yet been proposed for endovascular procedures of PAD in the lower extremities. More globally, such a navigation system, combining image fusion from different 2D and 3D image sources, is novel in the field of endovascular procedures.
Reanalysis of RNA-sequencing data reveals several additional fusion genes with multiple isoforms.
Kangaspeska, Sara; Hultsch, Susanne; Edgren, Henrik; Nicorici, Daniel; Murumägi, Astrid; Kallioniemi, Olli
2012-01-01
RNA-sequencing and tailored bioinformatic methodologies have paved the way for identification of expressed fusion genes from the chaotic genomes of solid tumors. We have recently successfully exploited RNA-sequencing for the discovery of 24 novel fusion genes in breast cancer. Here, we demonstrate the importance of continuous optimization of the bioinformatic methodology for this purpose, and report the discovery and experimental validation of 13 additional fusion genes from the same samples. Integration of copy number profiling with the RNA-sequencing results revealed that the majority of the gene fusions were promoter-donating events that occurred at copy number transition points or involved high-level DNA-amplifications. Sequencing of genomic fusion break points confirmed that DNA-level rearrangements underlie selected fusion transcripts. Furthermore, a significant portion (>60%) of the fusion genes were alternatively spliced. This illustrates the importance of reanalyzing sequencing data as gene definitions change and bioinformatic methods improve, and highlights the previously unforeseen isoform diversity among fusion transcripts.
Zhang, Zhuang; Zhao, Rujin; Liu, Enhai; Yan, Kun; Ma, Yuebo
2018-06-15
This article presents a new sensor fusion method for visual simultaneous localization and mapping (SLAM) through the integration of a monocular camera and a 1D laser range finder. Such a fusion method provides scale estimation and drift correction; it is not limited by volume, unlike a stereo camera, which is constrained by its baseline, and it overcomes the limited depth range associated with SLAM for RGB-D cameras. We first present the analytical feasibility of estimating the absolute scale through the fusion of 1D distance information and image information. Next, the analytical derivation of the laser-vision fusion is described in detail based on local dense reconstruction of the image sequences. We also correct the scale drift of the monocular SLAM using the laser distance information, which is independent of the drift error. Finally, the application of this approach to both indoor and outdoor scenes is verified on the Technical University of Munich RGB-D dataset and on self-collected data. We compare the scale estimation and drift correction of the proposed method with monocular SLAM and RGB-D SLAM.
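The scale-recovery step described above reduces to a one-parameter least-squares fit between metric laser ranges and the SLAM map's up-to-scale depths. The sketch below is an illustration of that idea, not the authors' implementation; the function name and the sample data are invented.

```python
import numpy as np

def estimate_scale(laser_ranges, slam_depths):
    """Least-squares absolute scale of a monocular SLAM map.

    laser_ranges: metric distances from the 1D laser range finder (meters).
    slam_depths:  depths at the laser spot in the map's arbitrary units.
    Minimizes ||laser - s * slam||^2 over the scalar s.
    """
    laser = np.asarray(laser_ranges, dtype=float)
    slam = np.asarray(slam_depths, dtype=float)
    return laser.dot(slam) / slam.dot(slam)

# Invented sample data; the true scale here is about 2.5.
slam = [1.02, 0.81, 1.95, 1.40, 0.63]
laser = [2.55, 2.03, 4.86, 3.51, 1.58]
print(f"estimated scale: {estimate_scale(laser, slam):.3f}")
```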
High dynamic range image acquisition based on multiplex cameras
NASA Astrophysics Data System (ADS)
Zeng, Hairui; Sun, Huayan; Zhang, Tinghua
2018-03-01
High dynamic range imaging is an important technology for photoelectric information acquisition, providing higher dynamic range and more image detail, and better reflecting the real environment's light and color information. Currently, methods of high dynamic range image synthesis based on differently exposed image sequences cannot adapt to dynamic scenes; they fail to overcome the effects of moving targets, resulting in ghosting. Therefore, a new high dynamic range image acquisition method based on a multiplex camera system was proposed. Firstly, differently exposed image sequences were captured with the camera array, the deviation between images was obtained using a derivative optical flow method based on color gradients, and the images were aligned. Then, a high dynamic range image fusion weighting function was established by combining the inverse camera response function with the deviation between images, and was applied to generate a high dynamic range image. The experiments show that the proposed method can effectively obtain high dynamic range images in dynamic scenes and achieves good results.
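The weighting-and-merge step lends itself to a compact illustration. The following sketch assumes a linear camera response and pre-aligned exposures (the paper instead estimates the inverse response function and a flow-based deviation term); the hat-shaped weight stands in for the paper's weighting function.

```python
import numpy as np

def fuse_exposures(images, exposure_times):
    """Merge pre-aligned exposures into a relative radiance map.

    images: list of float arrays scaled to [0, 1], one per exposure.
    exposure_times: shutter times in seconds.
    Assumes a linear camera response; a hat-shaped weight down-weights
    pixels near saturation or the noise floor.
    """
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # trust mid-tones most
        num += w * img / t                  # back-project to radiance
        den += w
    return num / np.maximum(den, 1e-8)
```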
Toet, Alexander; Hogervorst, Maarten A; Pinkus, Alan R
2016-01-01
The fusion and enhancement of multiband nighttime imagery for surveillance and navigation has been the subject of extensive research for over two decades. Despite the ongoing efforts in this area there is still only a small number of static multiband test images available for the development and evaluation of new image fusion and enhancement methods. Moreover, dynamic multiband imagery is also currently lacking. To fill this gap we present the TRICLOBS dynamic multi-band image data set containing sixteen registered visual (0.4-0.7μm), near-infrared (NIR, 0.7-1.0μm) and long-wave infrared (LWIR, 8-14μm) motion sequences. They represent different military and civilian surveillance scenarios registered in three different scenes. Scenes include (military and civilian) people that are stationary, walking or running, or carrying various objects. Vehicles, foliage, and buildings or other man-made structures are also included in the scenes. This data set is primarily intended for the development and evaluation of image fusion, enhancement and color mapping algorithms for short-range surveillance applications. The imagery was collected during several field trials with our newly developed TRICLOBS (TRI-band Color Low-light OBServation) all-day all-weather surveillance system. This system registers a scene in the Visual, NIR and LWIR part of the electromagnetic spectrum using three optically aligned sensors (two digital image intensifiers and an uncooled long-wave infrared microbolometer). The three sensor signals are mapped to three individual RGB color channels, digitized, and stored as uncompressed RGB (false) color frames. The TRICLOBS data set enables the development and evaluation of (both static and dynamic) image fusion, enhancement and color mapping algorithms. To allow the development of realistic color remapping procedures, the data set also contains color photographs of each of the three scenes. The color statistics derived from these photographs can be used to define color mappings that give the multi-band imagery a realistic color appearance.
Fusion and quality analysis for remote sensing images using contourlet transform
NASA Astrophysics Data System (ADS)
Choi, Yoonsuk; Sharifahmadian, Ershad; Latifi, Shahram
2013-05-01
Recent developments in remote sensing technologies have provided various images with high spatial and spectral resolutions. However, multispectral images have low spatial resolution and panchromatic images have low spectral resolution. Therefore, image fusion techniques are necessary to improve the spatial resolution of spectral images by injecting spatial details of high-resolution panchromatic images. The objective of image fusion is to provide useful information by improving the spatial resolution and the spectral information of the original images. The fusion results can be utilized in various applications, such as military, medical imaging, and remote sensing. This paper addresses two issues in image fusion: i) image fusion method and ii) quality analysis of fusion results. First, a new contourlet-based image fusion method is presented, which is an improvement over the wavelet-based fusion. This fusion method is then applied to a case study to demonstrate its fusion performance. Fusion framework and scheme used in the study are discussed in detail. Second, quality analysis for the fusion results is discussed. We employed various quality metrics in order to analyze the fusion results both spatially and spectrally. Our results indicate that the proposed contourlet-based fusion method performs better than the conventional wavelet-based fusion methods.
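Since the paper evaluates fusion results with spatial and spectral quality metrics, a minimal sketch of two widely used metrics, entropy and mutual information, may be helpful. These are generic examples under our own assumptions; the paper's exact metric set is not reproduced here.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy of an 8-bit image (information content of the fusion)."""
    p, _ = np.histogram(img, bins=bins, range=(0, 256), density=True)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(source, fused, bins=64):
    """MI between a source band and the fused image: how much source
    information survived the fusion."""
    joint, _, _ = np.histogram2d(source.ravel(), fused.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz]))
```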
Nighttime images fusion based on Laplacian pyramid
NASA Astrophysics Data System (ADS)
Wu, Cong; Zhan, Jinhao; Jin, Jicheng
2018-02-01
This paper describes average weighted fusion, image pyramid fusion, and wavelet transform methods, and applies them to the fusion of multiply exposed nighttime images. By calculating the information entropy and cross entropy of the fused images, we can evaluate the effect of the different fusion methods. Experiments showed that the Laplacian pyramid image fusion algorithm is well suited to nighttime image fusion: it reduces halos while preserving image details.
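A minimal Laplacian-pyramid fusion of two exposures can be sketched as follows. The max-magnitude rule on detail levels and averaging on the base level are common choices, not necessarily the paper's exact weights; inputs are assumed to be pre-aligned grayscale arrays of equal size.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    """Detail levels plus the coarsest Gaussian level as the last element."""
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)
        cur = down
    pyr.append(cur)
    return pyr

def fuse_laplacian(img_a, img_b, levels=4):
    """Max-magnitude rule on detail levels, average on the base level."""
    pa = laplacian_pyramid(img_a, levels)
    pb = laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    out = fused[-1]
    for lap in reversed(fused[:-1]):   # collapse the pyramid
        out = cv2.pyrUp(out, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return np.clip(out, 0, 255).astype(np.uint8)
```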
Prendeville, Susan; Gertner, Mark; Maganti, Manjula; Pintilie, Melania; Perlis, Nathan; Toi, Ants; Evans, Andrew J; Finelli, Antonio; van der Kwast, Theodorus H; Ghai, Sangeet
2018-07-01
The aim of this study was to compare biopsy detection of intraductal and cribriform pattern invasive prostate carcinoma in multiparametric magnetic resonance imaging positive and negative regions of the prostate. We queried a prospectively maintained, single institution database to identify patients who underwent multiparametric magnetic resonance imaging/ultrasound fusion targeted biopsy and concurrent systematic sextant biopsy of magnetic resonance imaging negative regions between January 2013 and May 2016. All multiparametric magnetic resonance imaging targets were reviewed retrospectively by 2 readers for the PI-RADS™ (Prostate Imaging-Reporting and Data System), version 2 score, the maximum dimension, the apparent diffusion coefficient parameter and whether they were positive or negative on the dynamic contrast enhancement sequence. Biopsy slides were reviewed by 2 urological pathologists for Gleason score/Grade Group and the presence or absence of an intraductal/cribriform pattern. A total of 154 patients were included in the study. Multiparametric magnetic resonance imaging/ultrasound fusion targeted biopsy and systematic sextant biopsy of magnetic resonance imaging negative regions were negative for prostate carcinoma in 51 patients, leaving 103 available for correlation of multiparametric magnetic resonance imaging with the intraductal/cribriform pattern. Prostate carcinoma was identified by multiparametric magnetic resonance imaging/ultrasound fusion targeted biopsy in 93 cases and by systematic sextant biopsy of magnetic resonance imaging negative regions in 76 (p = 0.008). Intraductal/cribriform positive tumor was detected in 23 cases, including at the multiparametric magnetic resonance imaging/ultrasound fusion targeted biopsy site in 22 and at the systematic sextant biopsy of magnetic resonance imaging negative region site in 3 (p <0.001). The intraductal/cribriform pattern was significantly associated with a PI-RADS score of 5 and a decreasing apparent diffusion coefficient value (p = 0.008 and 0.005, respectively). Of the 23 cases with the intraductal/cribriform pattern, 19 had undergone prior 12-core standard systematic biopsy, which was negative in 8 and showed Grade Group 1 disease in 11. Multiparametric magnetic resonance imaging/ultrasound fusion targeted biopsy was associated with significantly increased detection of intraductal/cribriform positive prostate carcinoma compared to systematic sextant biopsy of multiparametric magnetic resonance imaging negative regions. This supports the role of magnetic resonance imaging in enhancing the detection of clinically aggressive intraductal/cribriform positive prostate carcinoma. Copyright © 2018 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Turkbey, Baris; Xu, Sheng; Kruecker, Jochen; Locklin, Julia; Pang, Yuxi; Shah, Vijay; Bernardo, Marcelino; Baccala, Angelo; Rastinehad, Ardeshir; Benjamin, Compton; Merino, Maria J; Wood, Bradford J; Choyke, Peter L; Pinto, Peter A
2011-03-29
During transrectal ultrasound (TRUS)-guided prostate biopsies, the actual location of the biopsy site is rarely documented. Here, we demonstrate the capability of TRUS-magnetic resonance imaging (MRI) image fusion to document the biopsy site and correlate biopsy results with multi-parametric MRI findings. Fifty consecutive patients (median age 61 years) with a median prostate-specific antigen (PSA) level of 5.8 ng/ml underwent 12-core TRUS-guided biopsy of the prostate. Pre-procedural T2-weighted magnetic resonance images were fused to TRUS. A disposable needle guide with miniature tracking sensors was attached to the TRUS probe to enable fusion with MRI. Real-time TRUS images during biopsy and the corresponding tracking information were recorded. Each biopsy site was superimposed onto the MRI. Each biopsy site was classified as positive or negative for cancer based on the results of each MRI sequence. Sensitivity, specificity, and receiver operating characteristic (ROC) area under the curve (AUC) values were calculated for multi-parametric MRI. Gleason scores for each multi-parametric MRI pattern were also evaluated. In total, 605 systematic biopsy cores were analyzed in 50 patients, of whom 20 had 56 positive cores. MRI identified 34 of the 56 positive cores. Overall sensitivity, specificity, and ROC area values for multi-parametric MRI were 0.607, 0.727, and 0.667, respectively. TRUS-MRI fusion after biopsy can be used to document the location of each biopsy site, which can then be correlated with MRI findings. Based on correlation with tracked biopsies, T2-weighted MRI and apparent diffusion coefficient maps derived from diffusion-weighted MRI are the most sensitive sequences, whereas the addition of delayed contrast enhancement MRI and three-dimensional magnetic resonance spectroscopy demonstrated higher specificity, consistent with results obtained using radical prostatectomy specimens.
Panagopoulos, Ioannis; Gorunova, Ludmila; Bjerkehagen, Bodil; Heim, Sverre
2014-01-01
Whole transcriptome sequencing was used to study a small round cell tumor in which a t(4;19)(q35;q13) was part of the complex karyotype but where the initial reverse transcriptase PCR (RT-PCR) examination did not detect a CIC-DUX4 fusion transcript, previously described as the crucial gene-level outcome of this specific translocation. The RNA sequencing data were analysed using the FusionMap, FusionFinder, and ChimeraScan programs, which are specifically designed to identify fusion genes. FusionMap, FusionFinder, and ChimeraScan identified 1017, 102, and 101 fusion transcripts, respectively, but CIC-DUX4 was not among them. Since the RNA sequencing data are in the fastq text-based format, we searched the files using the "grep" command-line utility. The "grep" command searches text for specific expressions and displays, by default, the lines where matches occur. The "specific expression" was a sequence of 20 nucleotides from the coding part of the last exon 20 of CIC (Reference Sequence: NM_015125.3), chosen because all the CIC breakpoints reported so far have occurred there. Fifteen chimeric CIC-DUX4 cDNA sequences were captured and the fusion between the CIC and DUX4 genes was mapped precisely. New primer combinations were constructed based on these findings and were used, together with a polymerase suitable for amplification of GC-rich DNA templates, to amplify CIC-DUX4 cDNA fragments with the same fusion point found with "grep". In conclusion, FusionMap, FusionFinder, and ChimeraScan generated a plethora of fusion transcripts but did not detect the biologically important CIC-DUX4 chimeric transcript; they are generally useful but evidently suffer from imperfect sensitivity and specificity. The "grep" command is an excellent tool for capturing chimeric transcripts from RNA sequencing data when the pathological and/or cytogenetic information strongly indicates the presence of a specific fusion gene.
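The "grep" step translates directly into a few lines of Python for readers who prefer a scriptable equivalent. The anchor sequence and file name below are placeholders, not the actual 20-nucleotide CIC fragment used in the study.

```python
ANCHOR = "ACGTACGTACGTACGTACGT"  # placeholder 20-nt fragment, not the real CIC exon 20 sequence

def find_chimeric_reads(fastq_path, anchor=ANCHOR):
    """Return FASTQ reads containing the anchor, like `grep ANCHOR file.fastq`."""
    hits = []
    with open(fastq_path) as fq:
        for i, line in enumerate(fq):
            if i % 4 == 1 and anchor in line:  # sequence is line 2 of each 4-line record
                hits.append(line.strip())
    return hits

for read in find_chimeric_reads("sample.fastq"):  # placeholder file name
    print(read)
```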
Self-regulation of 70-kilodalton heat shock proteins in Saccharomyces cerevisiae.
Stone, D E; Craig, E A
1990-01-01
To determine whether the 70-kilodalton heat shock proteins of Saccharomyces cerevisiae play a role in regulating their own synthesis, we studied the effect of overexpressing the SSA1 protein on the activity of the SSA1 5'-regulatory region. The constitutive level of Ssa1p was increased by fusing the SSA1 structural gene to the GAL1 promoter. A reporter vector consisting of an SSA1-lacZ translational fusion was used to assess SSA1 promoter activity. In a strain producing approximately 10-fold the normal heat shock level of Ssa1p, induction of beta-galactosidase activity by heat shock was almost entirely blocked. Expression of a transcriptional fusion vector in which the CYC1 upstream activating sequence of a CYC1-lacZ chimera was replaced by a sequence containing a heat shock upstream activating sequence (heat shock element 2) from the 5'-regulatory region of SSA1 was inhibited by excess Ssa1p. The repression of an SSA1 upstream activating sequence by the SSA1 protein indicates that SSA1 self-regulation is at least partially mediated at the transcriptional level. The expression of another transcriptional fusion vector, containing heat shock element 2 and a lesser amount of flanking sequence, is not inhibited when Ssa1p is overexpressed. This suggests the existence of an element, proximal to or overlapping heat shock element 2, that confers sensitivity to the SSA1 protein.
Novel cooperative neural fusion algorithms for image restoration and image fusion.
Xia, Youshen; Kamel, Mohamed S
2007-02-01
To deal with the problem of restoring degraded images with non-Gaussian noise, this paper proposes a novel cooperative neural fusion regularization (CNFR) algorithm for image restoration. Compared with conventional regularization algorithms, the proposed CNFR algorithm relaxes the need to estimate the optimal regularization parameter. Furthermore, to enhance the quality of restored images, this paper presents a cooperative neural fusion (CNF) algorithm for image fusion. Compared with existing signal-level image fusion algorithms, the proposed CNF algorithm can greatly reduce the loss of contrast information under blind Gaussian noise environments. The performance analysis shows that the two proposed neural fusion algorithms converge globally to the robust and optimal image estimate. Simulation results confirm that, in different noise environments, the two proposed neural fusion algorithms obtain a better image estimate than several well-known image restoration and image fusion methods.
Constancy and diversity in the flavivirus fusion peptide.
Seligman, Stephen J
2008-02-14
Flaviviruses include the mosquito-borne dengue, Japanese encephalitis, yellow fever and West Nile viruses and the tick-borne encephalitis viruses. They are responsible for considerable world-wide morbidity and mortality. Viral entry is mediated by a conserved fusion peptide of 16 amino acids located in domain II of the envelope protein E. Highly orchestrated conformational changes initiated by exposure to acidic pH accompany the fusion process and are important factors limiting the amino acid changes in the fusion peptide that still permit fusion with host cell membranes in both arthropod and vertebrate hosts. The cell-fusing related agents, which grow only in mosquitoes or insect cell lines, possess a different homologous peptide. Analysis of 46 named flaviviruses deposited in the Entrez Nucleotides database extended the constancy of the canonical fusion peptide sequences of mosquito-borne viruses, tick-borne viruses and viruses with no known vector to more recently sequenced viruses. The mosquito-borne signature amino acid, G104, was also found in flaviviruses with no known vector and in the cell-fusing related viruses. Despite the constancy of the canonical sequences in pathogenic flaviviruses, mutations were surprisingly frequent, with a 27% prevalence of nonsynonymous mutations in yellow fever virus fusion peptide sequences and a 0 to 7.4% prevalence in the others. Six of seven yellow fever patients whose virus had fusion peptide mutations died. For the cell-fusing related agents, not enough sequences have been deposited to estimate reliably the prevalence of fusion peptide mutations. However, the canonical sequences homologous to the fusion peptide and the pattern of disulfide linkages in protein E differed significantly from the other flaviviruses. The constancy of the canonical fusion peptide sequences in the arthropod-borne flaviviruses contrasts with the high prevalence of mutations in most individual viruses. The discrepancy may be the result of a survival advantage accompanying sequence diversity (quasispecies) involving the fusion peptide. Limited clinical data with yellow fever virus suggest that the presence of fusion peptide mutants is not associated with a decreased case fatality rate. The cell-fusing related agents may have substantial differences from other flaviviruses in their mechanism of viral entry into the host cell.
Malghem, Jacques; Lecouvet, Frédéric E; François, Robert; Vande Berg, Bruno C; Duprez, Thierry; Cosnard, Guy; Maldague, Baudouin E
2005-02-01
To explain a cause of high signal intensity on T1-weighted MR images in calcified intervertebral disks associated with spinal fusion, magnetic resonance and radiological examinations of 13 patients were reviewed, each presenting one or several intervertebral disks with high signal intensity on T1-weighted MR images, associated both with the presence of calcifications in the disks and with peripheral fusion of the corresponding spinal segments. Fusion was due to ligament ossifications (n=8), ankylosing spondylitis (n=4), or posterior arthrodesis (n=1). Imaging files included X-rays and T1-weighted MR images in all cases, T2-weighted MR images in 12 cases, MR images with fat signal suppression in 7 cases, and a CT scan in 1 case. A calcified disk from an anatomical specimen of a lumbar spine ankylosed by ankylosing spondylitis was studied histologically. The signal intensity of the disks was similar to that of bone marrow or perivertebral fat both on T1-weighted MR images and on all sequences, including those with fat signal suppression. In one of these disks, a strongly negative absorption coefficient was focally measured by CT scan, suggesting a fatty content. The histological examination of the ankylosed calcified disk revealed the presence of well-differentiated bone tissue and fatty marrow within the disk. The high signal intensity of some calcified intervertebral disks on T1-weighted MR images can result from the presence of fatty marrow, probably related to a disk ossification process in ankylosed spines.
Integration of retinal image sequences
NASA Astrophysics Data System (ADS)
Ballerini, Lucia
1998-10-01
In this paper a method for noise reduction and image integration in ocular fundus image sequences is described. The eye is the only part of the human body where the capillary network, along with the arterial and venous circulation, can be observed with a non-invasive technique. The study of the retinal vessels is very important both for the study of local pathology (retinal disease) and for the large amount of information it offers on systemic haemodynamics, such as hypertension, arteriosclerosis, and diabetes. The procedure can be divided into two steps: registration and fusion. First we describe an automatic alignment algorithm for the registration of ocular fundus images. In order to enhance vessel structures, we used a spatially oriented bank of filters designed to match the properties of the objects of interest. To evaluate interframe misalignment we adopted a fast cross-correlation algorithm. The performance of the alignment method has been estimated by simulating shifts between image pairs and by using a cross-validation approach. We then propose a temporal integration technique for image sequences to compute enhanced pictures of the overall capillary network. Image registration is combined with image enhancement by fusing subsequent frames of the same region. To evaluate the attainable results, the signal-to-noise ratio was estimated before and after integration. Experimental results on synthetic images of vessel-like structures with different kinds of additive Gaussian noise, as well as on real fundus images, are reported.
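The registration-then-integration pipeline can be illustrated with a translation-only model: FFT-based cross-correlation to estimate interframe shifts, followed by averaging of the aligned frames. This sketch omits the oriented filter bank used for vessel enhancement and assumes purely translational misalignment.

```python
import numpy as np

def shift_by_xcorr(ref, frame):
    """Integer (dy, dx) aligning frame to ref via FFT cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]   # wrap large positive peaks to negative shifts
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def integrate_sequence(frames):
    """Register every frame to the first one and average to suppress noise."""
    ref = frames[0].astype(float)
    acc = ref.copy()
    for f in frames[1:]:
        dy, dx = shift_by_xcorr(ref, f.astype(float))
        acc += np.roll(f.astype(float), (dy, dx), axis=(0, 1))
    return acc / len(frames)
```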
MicroRNA Detection by DNA-Mediated Liposome Fusion.
Jumeaux, Coline; Wahlsten, Olov; Block, Stephan; Kim, Eunjung; Chandrawati, Rona; Howes, Philip D; Höök, Fredrik; Stevens, Molly M
2018-03-02
Membrane fusion is a process of fundamental importance in biological systems that involves highly selective recognition mechanisms for the trafficking of molecular and ionic cargos. Mimicking natural membrane fusion mechanisms for the purpose of biosensor development holds great potential for amplified detection, because relatively few highly discriminating targets lead to fusion and an accompanying engagement of a large payload of signal-generating molecules. In this work, sequence-specific DNA-mediated liposome fusion is used for the highly selective detection of microRNA. The detection of miR-29a, a known flu biomarker, is demonstrated down to 18 nM within 30 min with high specificity using a standard laboratory microplate reader. Furthermore, an order-of-magnitude improvement in the limit of detection is demonstrated by using a novel imaging technique combined with an intensity fluctuation analysis, coined two-color fluorescence correlation microscopy. © 2018 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
Antwi, Prince; Grant, Ryan; Kuzmik, Gregory; Abbed, Khalid
2018-05-01
"White cord syndrome" is a very rare condition thought to be due to acute reperfusion of chronically ischemic areas of the spinal cord. Its hallmark is the presence of intramedullary hyperintense signal on T2-weighted magnetic resonance imaging sequences in a patient with unexplained neurologic deficits following spinal cord decompression surgery. The syndrome is rare and has been reported previously in 2 patients following anterior cervical decompression and fusion. We report an additional case of this complication. A 68-year-old man developed acute left-sided hemiparesis after posterior cervical decompression and fusion for cervical spondylotic myelopathy. The patient improved with high-dose steroid therapy. The rare white cord syndrome following either anterior cervical decompression and fusion or posterior cervical decompression and fusion may be due to ischemic-reperfusion injury sustained by chronically compressed parts of the spinal cord. In previous reports, patients have improved following steroid therapy and acute rehabilitation. Copyright © 2018 Elsevier Inc. All rights reserved.
Winters, Jennifer L; Davila, Jaime I; McDonald, Amber M; Nair, Asha A; Fadra, Numrah; Wehrs, Rebecca N; Thomas, Brittany C; Balcom, Jessica R; Jin, Long; Wu, Xianglin; Voss, Jesse S; Klee, Eric W; Oliver, Gavin R; Graham, Rondell P; Neff, Jadee L; Rumilla, Kandelaria M; Aypar, Umut; Kipp, Benjamin R; Jenkins, Robert B; Jen, Jin; Halling, Kevin C
2018-06-13
We assessed the performance characteristics of an RNA sequencing (RNA-Seq) assay designed to detect gene fusions in 571 genes to help manage patients with cancer. Polyadenylated RNA was converted to cDNA, which was then used to prepare next-generation sequencing libraries that were sequenced on an Illumina HiSeq 2500 instrument and analyzed with an in-house developed bioinformatic pipeline. The assay identified 38 of 41 gene fusions detected by another method, such as fluorescence in situ hybridization or RT-PCR, for a sensitivity of 93%. No false-positive gene fusions were identified in 15 normal tissue specimens and 10 tumor specimens that were negative for fusions by RNA sequencing or Mate Pair NGS (100% specificity). The assay also identified 22 fusions in 17 tumor specimens that had not been detected by other methods. Eighteen of the 22 fusions had not previously been described. Good intra-assay and interassay reproducibility was observed with complete concordance for the presence or absence of gene fusions in replicates. The analytical sensitivity of the assay was tested by diluting RNA isolated from gene fusion-positive cases with fusion-negative RNA. Gene fusions were generally detectable down to 12.5% dilutions for most fusions and as little as 3% for some fusions. This assay can help identify fusions in patients with cancer; these patients may in turn benefit from both US Food and Drug Administration-approved and investigational targeted therapies. Copyright © 2018 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
[Research Progress of Multi-Modal Medical Image Fusion at the Feature Level].
Zhang, Junjie; Zhou, Tao; Lu, Huiling; Wang, Huiqun
2016-04-01
Medical image fusion integrates the advantages of functional images and anatomical images. This article discusses the research progress of multi-modal medical image fusion at the feature level. We first describe the principle of medical image fusion at the feature level. We then analyze and summarize the applications of fuzzy sets, rough sets, D-S evidence theory, artificial neural networks, principal component analysis and other fusion methods in medical image fusion. Lastly, we indicate present problems and future research directions for multi-modal medical image fusion.
Enhanced image capture through fusion
NASA Technical Reports Server (NTRS)
Burt, Peter J.; Hanna, Keith; Kolczynski, Raymond J.
1993-01-01
Image fusion may be used to combine images from different sensors, such as IR and visible cameras, to obtain a single composite with extended information content. Fusion may also be used to combine multiple images from a given sensor to form a composite image in which information of interest is enhanced. We present a general method for performing image fusion and show that this method is effective for diverse fusion applications. We suggest that fusion may provide a powerful tool for enhanced image capture with broad utility in image processing and computer vision.
Aldossari, M; Alfalou, A; Brosseau, C
2014-09-22
This study presents and validates an optimized method of simultaneous compression and encryption designed to process images with close spectra. This approach is well adapted to the compression and encryption of images of a time-varying scene, but also to static polarimetric images. We use the recently developed spectral fusion method [Opt. Lett. 35, 1914-1916 (2010)] to deal with the close resemblance of the images. The spectral plane (containing the information to send and/or store) is decomposed into several independent areas which are assigned in a specific way. In addition, each spectrum is shifted in order to minimize their overlap. The dual purpose of these operations is to optimize the spectral plane, allowing us to keep the low- and high-frequency information (compression) and to introduce additional noise for reconstructing the images (encryption). Our results show not only that control of the spectral plane can increase the number of spectra to be merged, but also that a compromise between the compression rate and the quality of the reconstructed images can be tuned. We use a root-mean-square (RMS) criterion to optimize compression. Image encryption is realized at different security levels. Firstly, we add a specific encryption level related to the different areas of the spectral plane; then, we make use of several random phase keys. An in-depth analysis of the spectral fusion methodology is performed in order to find a good trade-off between the compression rate and the quality of the reconstructed images. Our newly proposed spectral shift allows us to minimize image overlap. We further analyze the influence of the spectral shift on the reconstructed image quality and compression rate. The performance of the multiple-image optical compression and encryption method is verified by analyzing several video sequences and polarimetric images.
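The idea of assigning independent areas of a shared spectral plane can be sketched as follows: each image's centered spectrum is cropped to its low-frequency band and the crops are packed side by side. This is a simplification for illustration only; it omits the spectral shifting and the random phase keys that provide the encryption.

```python
import numpy as np

def merge_spectra(img_a, img_b, keep=0.5):
    """Pack the low-frequency bands of two equal-size images into one plane."""
    def low_band(img):
        S = np.fft.fftshift(np.fft.fft2(img))
        h, w = S.shape
        ch, cw = int(h * keep), int(w * keep)
        y0, x0 = (h - ch) // 2, (w - cw) // 2
        return S[y0:y0 + ch, x0:x0 + cw]
    return np.hstack([low_band(img_a), low_band(img_b)])

def restore(plane, out_shape):
    """Split the plane, zero-pad each band back, and invert the FFT."""
    h, w = out_shape
    outs = []
    for band in np.hsplit(plane, 2):
        S = np.zeros((h, w), dtype=complex)
        y0, x0 = (h - band.shape[0]) // 2, (w - band.shape[1]) // 2
        S[y0:y0 + band.shape[0], x0:x0 + band.shape[1]] = band
        outs.append(np.fft.ifft2(np.fft.ifftshift(S)).real)
    return outs

a, b = np.random.rand(64, 64), np.random.rand(64, 64)
rec_a, rec_b = restore(merge_spectra(a, b), a.shape)  # low-pass approximations
```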
Hybrid Image Fusion for Sharpness Enhancement of Multi-Spectral Lunar Images
NASA Astrophysics Data System (ADS)
Awumah, Anna; Mahanti, Prasun; Robinson, Mark
2016-10-01
Image fusion enhances the sharpness of a multi-spectral (MS) image by incorporating spatial details from a higher-resolution panchromatic (Pan) image [1,2]. Known applications of image fusion for planetary images are rare, although image fusion is well-known for its applications to Earth-based remote sensing. In a recent work [3], six different image fusion algorithms were implemented and their performances were verified with images from the Lunar Reconnaissance Orbiter (LRO) Camera. The image fusion procedure obtained a high-resolution multi-spectral (HRMS) product from the LRO Narrow Angle Camera (used as Pan) and LRO Wide Angle Camera (used as MS) images. The results showed that the Intensity-Hue-Saturation (IHS) algorithm results in a high-spatial-quality product while the Wavelet-based image fusion algorithm best preserves spectral quality among all the algorithms. In this work we show the results of a hybrid IHS-Wavelet image fusion algorithm when applied to LROC MS images. The hybrid method provides the best HRMS product, both in terms of spatial resolution and preservation of spectral details. Results from hybrid image fusion can enable new science and increase the science return from existing LROC images.
[1] Pohl, C., and J. L. Van Genderen. "Review article: Multisensor image fusion in remote sensing: concepts, methods and applications." International Journal of Remote Sensing 19.5 (1998): 823-854.
[2] Zhang, Yun. "Understanding image fusion." Photogrammetric Engineering & Remote Sensing 70.6 (2004): 657-661.
[3] Mahanti, Prasun, et al. "Enhancement of spatial resolution of the LROC Wide Angle Camera images." XXIII ISPRS Congress Archives (2016).
Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators.
Bai, Xiangzhi
2015-07-15
The crucial problem of infrared and visual image fusion is how to effectively extract the image features, including the image regions and details and combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through the fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening and closing based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through the fuzzy measure-based weight strategy to form the final fusion features. Finally, the final fusion features are incorporated with the original infrared and visual images using the contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion.
Hong, Yoonki; Kim, Woo Jin; Bang, Chi Young; Lee, Jae Cheol; Oh, Yeon-Mok
2016-04-01
Lung cancer is the most common cause of cancer-related death. Alterations in gene sequence, structure, and expression have an important role in the pathogenesis of lung cancer. Fusion genes and alternative splicing of cancer-related genes have the potential to be oncogenic. In the current study, we performed RNA sequencing (RNA-seq) to investigate potential fusion genes and alternative splicing in non-small cell lung cancer. RNA was isolated from lung tissues obtained from 86 subjects with lung cancer. The RNA samples from lung cancer and normal tissues were processed with RNA-seq using the HiSeq 2000 system. Fusion genes were evaluated using DeFuse and ChimeraScan. Candidate fusion transcripts were validated by Sanger sequencing. Alternative splicing was analyzed using multivariate analysis of transcript splicing (MATS) and validated using quantitative real-time polymerase chain reaction. RNA-seq data identified the oncogenic fusion genes EML4-ALK and SLC34A2-ROS1 in three of 86 normal-cancer paired samples. Nine distinct fusion transcripts were selected using DeFuse and ChimeraScan, of which four were validated by Sanger sequencing. In 33 squamous cell carcinomas, 29 tumor-specific skipped exon events and six mutually exclusive exon events were identified. ITGB4 and PYCR1 were the top genes showing significant tumor-specific splice variants. In conclusion, RNA-seq data identified novel potential fusion transcripts and splice variants. Further evaluation of their functional significance in the pathogenesis of lung cancer is required.
Engineering workstation: Sensor modeling
NASA Technical Reports Server (NTRS)
Pavel, M.; Sweet, B.
1993-01-01
The purpose of the engineering workstation is to provide an environment for rapid prototyping and evaluation of fusion and image processing algorithms. Ideally, the algorithms are designed to optimize the extraction of information that is useful to a pilot for all phases of flight operations. Successful design of effective fusion algorithms depends on the ability to characterize both the information available from the sensors and the information useful to a pilot. The workstation is comprised of subsystems for simulation of sensor-generated images, image processing, image enhancement, and fusion algorithms. As such, the workstation can be used to implement and evaluate both short-term solutions and long-term solutions. The short-term solutions are being developed to enhance a pilot's situational awareness by providing information in addition to his direct vision. The long term solutions are aimed at the development of complete synthetic vision systems. One of the important functions of the engineering workstation is to simulate the images that would be generated by the sensors. The simulation system is designed to use the graphics modeling and rendering capabilities of various workstations manufactured by Silicon Graphics Inc. The workstation simulates various aspects of the sensor-generated images arising from phenomenology of the sensors. In addition, the workstation can be used to simulate a variety of impairments due to mechanical limitations of the sensor placement and due to the motion of the airplane. Although the simulation is currently not performed in real-time, sequences of individual frames can be processed, stored, and recorded in a video format. In that way, it is possible to examine the appearance of different dynamic sensor-generated and fused images.
Multi-focus image fusion using a guided-filter-based difference image.
Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Yang, Tingwu
2016-03-20
The aim of multi-focus image fusion technology is to integrate different partially focused images into one all-focused image. To realize this goal, a new multi-focus image fusion method based on a guided filter is proposed, and an efficient salient-feature extraction method is presented in this paper; feature extraction is the primary focus of the present work. Based on salient feature extraction, the guided filter is first used to acquire a smoothed image containing the sharpest regions. To obtain the initial fusion map, we compose a mixed focus measure by combining the variance of image intensities with the energy of the image gradient. The initial fusion map is then further processed by a morphological filter to obtain a well-reprocessed fusion map. Lastly, the final fusion map is determined from the reprocessed fusion map and optimized by a guided filter. Experimental results demonstrate that the proposed method markedly improves fusion performance compared to previous fusion methods and can be competitive with, or even outperform, state-of-the-art fusion methods in terms of both subjective visual effects and objective quality metrics.
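A condensed sketch of this kind of pipeline, combining a textbook guided filter (He et al.) with the variance-plus-gradient-energy focus measure described above, is given below. Window sizes and the epsilon parameter are arbitrary choices, and the morphological reprocessing step is folded into a single guided-filter refinement, so this is an illustration rather than the authors' implementation.

```python
import cv2
import numpy as np

def guided_filter(I, p, r=8, eps=1e-3):
    """Textbook gray-scale guided filter (He et al.): I guides the smoothing of p."""
    mean = lambda x: cv2.boxFilter(x, -1, (2 * r + 1, 2 * r + 1))
    mI, mp = mean(I), mean(p)
    a = (mean(I * p) - mI * mp) / (mean(I * I) - mI * mI + eps)
    b = mp - a * mI
    return mean(a) * I + mean(b)

def fuse_multifocus(img_a, img_b):
    """Per-pixel choice of the sharper source, refined with the guided filter."""
    A = img_a.astype(np.float32) / 255.0
    B = img_b.astype(np.float32) / 255.0
    def focus(x):   # local variance plus gradient energy
        m = cv2.boxFilter(x, -1, (9, 9))
        var = cv2.boxFilter(x * x, -1, (9, 9)) - m * m
        gx = cv2.Sobel(x, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(x, cv2.CV_32F, 0, 1)
        return var + cv2.boxFilter(gx * gx + gy * gy, -1, (9, 9))
    m = (focus(A) > focus(B)).astype(np.float32)   # initial fusion map
    m = np.clip(guided_filter(A, m), 0, 1)         # edge-aware refinement
    return (255.0 * (m * A + (1.0 - m) * B)).astype(np.uint8)
```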
A dual-channel fusion system of visual and infrared images based on color transfer
NASA Astrophysics Data System (ADS)
Pei, Chuang; Jiang, Xiao-yu; Zhang, Peng-wei; Liang, Hao-cong
2013-09-01
The increasing availability and deployment of imaging sensors operating in multiple spectra has led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, most of these algorithms produce gray or false-color fusion results that are not adapted to human vision. Transferring color from a daytime reference image to obtain a naturally colored fusion result is an effective way to solve this problem, but the computational cost of color transfer is high and cannot meet the requirements of real-time image processing. We developed a dual-channel infrared and visual image fusion system based on a TMS320DM642 digital signal processing chip. The system is divided into an image acquisition and registration unit, an image fusion processing unit, a system control unit, and an output unit for the fusion result. Registration of the dual-channel images is realized by combining hardware and software methods. A false-color image fusion algorithm in RGB color space is used to obtain an R-G fused image; the system then chooses a reference image to transfer color to the fusion result. A color lookup table based on the statistical properties of the images is proposed to solve the computational complexity problem in color transfer. The mapping calculation between the standard lookup table and the improved color lookup table is simple and is performed only once for a fixed scene. Real-time fusion and natural colorization of infrared and visual images are realized by this system. The experimental results show that the color-transferred images have a natural color appearance to human eyes and can highlight targets effectively with clear background details. Human observers with this system will be able to interpret the image better and faster, thereby improving situational awareness and reducing target detection time.
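The statistics-based color lookup table can be illustrated with per-channel mean/std matching in RGB; the actual system may operate in a different color space, and the statistics below are placeholders that would be measured once per scene.

```python
import numpy as np

def build_transfer_luts(src_stats, ref_stats):
    """One 256-entry LUT per RGB channel mapping the false-color image's
    (mean, std) onto the daytime reference's; computed once per scene."""
    luts = []
    for (ms, ss), (mr, sr) in zip(src_stats, ref_stats):
        x = np.arange(256, dtype=np.float32)
        y = (x - ms) * (sr / max(ss, 1e-6)) + mr
        luts.append(np.clip(y, 0, 255).astype(np.uint8))
    return luts

def apply_luts(img, luts):
    """Per-frame color transfer is then just three table lookups."""
    out = np.empty_like(img)
    for c in range(3):
        out[..., c] = luts[c][img[..., c]]
    return out

# Placeholder per-channel (mean, std) pairs, measured once per fixed scene.
src_stats = [(90.0, 35.0), (110.0, 40.0), (70.0, 30.0)]
ref_stats = [(120.0, 50.0), (115.0, 45.0), (100.0, 40.0)]
luts = build_transfer_luts(src_stats, ref_stats)
```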
Antonica, Filippo; Asabella, Artor Niccoli; Ferrari, Cristina; Rubini, Domenico; Notaristefano, Antonio; Nicoletti, Adriano; Altini, Corinna; Merenda, Nunzio; Mossa, Emilio; Guarini, Attilio; Rubini, Giuseppe
2014-01-01
In the last decade numerous attempts have been made to co-register and integrate different imaging data. Like PET/CT, the integration of PET with MR has attracted great interest. PET/MR scanners have recently been tested on various regional or systemic pathologies. Unfortunately, PET/MR scanners are expensive and their diagnostic protocols are still under study and investigation. Nuclear medicine imaging highlights functional and biometabolic information but has poor anatomic detail. The aim of this study was to integrate MR and PET data to produce regional or whole-body fused images acquired from different scanners, even on different days. We propose an offline method to fuse PET with MR data using open-source software that is inexpensive, reproducible, and capable of exchanging data over the network. We also evaluated the global quality, alignment quality, and diagnostic confidence of the fused PET-MR images. We selected PET/CT studies performed in our nuclear medicine unit and MR studies provided by patients on DICOM CD media or received over the network. We used the OsiriX 5.7 open-source version. We aligned the CT slices with the first MR slice, pointed and marked for co-registration using the MR T1 sequence and CT as references, and fused with PET to produce a PET-MR image. A total of 100 PET/CT studies were fused with the following MR studies: 20 head, 15 thorax, 24 abdomen, 31 pelvis, and 10 whole body. An interval of no more than 15 days between PET and MR was the inclusion criterion. PET/CT, MR, and fused studies were evaluated by two experienced radiologists and two experienced nuclear medicine physicians. Each filled in a five-point evaluation scoring scheme based on image quality, image artifacts, segmentation errors, fusion misalignment, and diagnostic confidence. Our fusion method showed the best results for the head, thorax, and pelvic districts in terms of global quality, alignment quality, and diagnostic confidence, while for the abdomen and pelvis the alignment quality and global quality were poor due to variations in internal organ filling and the time interval between examinations. PET/CT images with time-of-flight reconstruction and real attenuation correction were combined with anatomically detailed MRI images. We used OsiriX, an open-source image processing software dedicated to DICOM images. No additional costs to buy and upgrade proprietary software are required for combining data. No high-technology, very expensive PET/MR scanner, which requires dedicated shielded room space and personnel to be employed or trained, is needed. Our method allows patient PET/MR fused data to be shared with different medical staff over dedicated networks. The proposed method may be applied to every MR sequence (MR-DWI and MR-STIR, magnet-enhanced sequences) to characterize soft-tissue alterations and improve disease discrimination. It can be applied not only to PET with MR but virtually to every DICOM study.
A color fusion method of infrared and low-light-level images based on visual perception
NASA Astrophysics Data System (ADS)
Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa
2014-11-01
Color fusion images can be obtained through the fusion of infrared and low-light-level images and contain the information of both. Fusion images can help observers understand multichannel images comprehensively. However, simple fusion may lose target information because targets are inconspicuous in long-distance infrared and low-light-level images; and if target extraction is applied blindly, the perception of scene information is seriously affected. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are applied to traditional color fusion methods. Infrared and low-light-level color fusion images are achieved based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method. The fusion images achieved by our algorithm not only improve the detection rate of targets but also retain rich natural information of the scenes.
Feature-Motivated Simplified Adaptive PCNN-Based Medical Image Fusion Algorithm in NSST Domain.
Ganasala, Padma; Kumar, Vinod
2016-02-01
Multimodality medical image fusion plays a vital role in the diagnosis, treatment planning, and follow-up of various diseases. It provides a composite image containing the critical information of the source images required for better localization and definition of different organs and lesions. In state-of-the-art image fusion methods based on the nonsubsampled shearlet transform (NSST) and pulse-coupled neural network (PCNN), authors have used the normalized coefficient value to motivate the PCNN processing of both the low-frequency (LF) and high-frequency (HF) sub-bands. This blurs the fused image and decreases its contrast. The main objective of this work is to design an image fusion method that yields a fused image with better contrast and more detail, suitable for clinical use. We propose a novel image fusion method utilizing a feature-motivated adaptive PCNN in the NSST domain for fusion of anatomical images. The basic PCNN model is simplified, and an adaptive linking strength is used. Different features are used to motivate the PCNN processing of the LF and HF sub-bands. The proposed method is extended to the fusion of a functional image with an anatomical image in the improved nonlinear intensity hue and saturation (INIHS) color model. Extensive fusion experiments have been performed on CT-MRI and SPECT-MRI datasets. Visual and quantitative analysis of the experimental results proved that the proposed method provides a satisfactory fusion outcome compared with other image fusion methods.
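The paper's exact feature-motivated adaptive PCNN is not reproduced here, but the following sketch shows the simplified PCNN mechanics such methods build on: each sub-band drives a pulse-coupled network, and the coefficient whose neuron fires more often is kept. The parameter values and the firing-count fusion rule are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of a simplified PCNN used as a fusion activity measure.
import numpy as np
from scipy.ndimage import convolve

def pcnn_fire_count(stim, iters=30, beta=0.2, alpha_t=0.1, v_t=20.0):
    """Run a simplified PCNN on a normalized stimulus and count firings."""
    s = stim / (stim.max() + 1e-12)                 # feeding input F = S
    w = np.array([[0.5, 1.0, 0.5],                  # linking kernel
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    y = np.zeros_like(s)                            # pulse output
    theta = np.ones_like(s)                         # dynamic threshold
    fires = np.zeros_like(s)
    for _ in range(iters):
        l = convolve(y, w, mode="nearest")          # linking from neighbours
        u = s * (1.0 + beta * l)                    # internal activity
        y = (u > theta).astype(float)               # fire where U exceeds theta
        theta = theta * np.exp(-alpha_t) + v_t * y  # threshold decay + recharge
        fires += y
    return fires

def fuse_subbands(c1, c2):
    """Pick, per coefficient, the source whose PCNN fired more often."""
    f1, f2 = pcnn_fire_count(np.abs(c1)), pcnn_fire_count(np.abs(c2))
    return np.where(f1 >= f2, c1, c2)
```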
Multiscale Medical Image Fusion in Wavelet Domain
Khare, Ashish
2013-01-01
Wavelet transforms have emerged as a powerful tool in image fusion. However, the study and analysis of medical image fusion is still a challenging area of research. Therefore, in this paper, we propose a multiscale fusion of multimodal medical images in the wavelet domain. Fusion of medical images has been performed at multiple scales, varying from the minimum to the maximum level, using the maximum selection rule, which provides more flexibility and choice in selecting the relevant fused images. The experimental analysis of the proposed method has been performed with several sets of medical images. Fusion results have been evaluated subjectively and objectively against existing state-of-the-art fusion methods, including several pyramid- and wavelet-transform-based fusion methods and the principal component analysis (PCA) fusion method. The comparative analysis of the fusion results has been performed with the edge strength (Q), mutual information (MI), entropy (E), standard deviation (SD), blind structural similarity index metric (BSSIM), spatial frequency (SF), and average gradient (AG) metrics. The combined subjective and objective evaluations of the proposed fusion method at multiple scales showed the effectiveness of the proposed approach. PMID:24453868
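A minimal sketch of the fusion rule described above, using PyWavelets: decompose both images, apply the maximum-selection rule at every scale, and reconstruct. The inputs are assumed registered, equal-sized and grayscale; the wavelet and decomposition level are illustrative choices, not the paper's settings.

```python
# Minimal sketch: multiscale wavelet fusion with the maximum-selection rule.
import numpy as np
import pywt

def wavelet_fuse_max(img_a, img_b, wavelet="db4", level=3):
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
    # Approximation band: keep the larger-magnitude coefficient.
    fused = [np.where(np.abs(ca[0]) >= np.abs(cb[0]), ca[0], cb[0])]
    # Detail triplets (horizontal, vertical, diagonal) at each scale.
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(
            np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```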
Adaptive fusion of infrared and visible images in dynamic scene
NASA Astrophysics Data System (ADS)
Yang, Guang; Yin, Yafeng; Man, Hong; Desai, Sachi
2011-11-01
Multiple-modality sensor fusion has been widely employed in various surveillance and military applications. A variety of image fusion techniques, including PCA, wavelet, curvelet and HSV, have been proposed in recent years to improve human visual perception for object detection. One of the main challenges in visible and infrared image fusion is to automatically determine an optimal fusion strategy for different input scenes at an acceptable computational cost. In this paper, we propose a fast, adaptive, feature-selection-based image fusion method to obtain a high-contrast image from visible and infrared sensors for target detection. First, fuzzy c-means clustering is applied to the infrared image to highlight possible hotspot regions, which are considered potential target locations. After that, the region surrounding the target area is segmented as the background region. Image fusion is then applied locally to the selected target and background regions by computing different linear combinations of color components from the registered visible and infrared images. After obtaining the different fused images, histogram distributions are computed on these local fusion images as the fusion feature set. A variance-ratio measure based on Linear Discriminant Analysis (LDA) is employed to sort the feature set, and the most discriminative feature is selected for the whole-image fusion. As feature selection is performed over time, the process dynamically determines the most suitable feature for image fusion in different scenes. Experiments are conducted on the OSU Color-Thermal database and the TNO Human Factors dataset. The fusion results indicate that our proposed method achieves competitive performance compared with other fusion algorithms at a relatively low computational cost.
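A minimal sketch of the hotspot-extraction step, with fuzzy c-means implemented directly in NumPy on infrared intensities; the cluster count and other parameters are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch: fuzzy c-means on IR intensities; the cluster with the
# hottest centroid is kept as the potential target region.
import numpy as np

def fcm_1d(values, n_clusters=3, m=2.0, iters=50, seed=0):
    """Fuzzy c-means on a 1-D intensity array; returns memberships, centers."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, values.size))
    u /= u.sum(axis=0)                              # memberships sum to 1
    for _ in range(iters):
        um = u ** m
        centers = (um @ values) / um.sum(axis=1)    # fuzzy-weighted centroids
        d = np.abs(values[None, :] - centers[:, None]) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))          # inverse-distance update
        u /= u.sum(axis=0)
    return u, centers

def hotspot_mask(ir_image, n_clusters=3):
    u, centers = fcm_1d(ir_image.astype(float).ravel(), n_clusters)
    hot = np.argmax(centers)                        # hottest cluster index
    labels = np.argmax(u, axis=0).reshape(ir_image.shape)
    return labels == hot
```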
A Multiplexed Amplicon Approach for Detecting Gene Fusions by Next-Generation Sequencing.
Beadling, Carol; Wald, Abigail I; Warrick, Andrea; Neff, Tanaya L; Zhong, Shan; Nikiforov, Yuri E; Corless, Christopher L; Nikiforova, Marina N
2016-03-01
Chromosomal rearrangements that result in oncogenic gene fusions are clinically important drivers of many cancer types. Rapid and sensitive methods are therefore needed to detect a broad range of gene fusions in clinical specimens that are often of limited quantity and quality. We describe a next-generation sequencing approach that uses a multiplex PCR-based amplicon panel to interrogate fusion transcripts involving 19 driver genes and 94 partners implicated in solid tumors. The panel also includes control assays that evaluate the 3'/5' expression ratios of 12 oncogenic kinases, which might be used to infer gene fusion events when the partner is unknown or not included on the panel. There was good concordance between the solid tumor fusion gene panel and other methods, including fluorescence in situ hybridization, real-time PCR, Sanger sequencing, and other next-generation sequencing panels, with all 40 specimens known to harbor gene fusions correctly identified. No specific fusion reads were observed in 59 fusion-negative specimens. The 3'/5' expression ratio was informative for fusions involving ALK, RET, and NTRK1, but not for BRAF or ROS1 fusions. However, among 37 ALK or RET fusion-negative specimens, four exhibited elevated 3'/5' expression ratios, indicating that fusions predicted solely by 3'/5' read ratios require confirmatory testing. Copyright © 2016 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
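The 3'/5' imbalance control works by simple arithmetic on amplicon read counts: a kinase activated by a fusion over-expresses its 3' (kinase-domain) exons relative to its 5' exons. A toy sketch, with hypothetical counts and cutoff:

```python
# Toy sketch of the 3'/5' imbalance idea; counts and cutoff are illustrative.
def three_prime_five_prime_ratio(reads_3p, reads_5p, pseudocount=1.0):
    """Pseudocount avoids division by zero for low-coverage amplicons."""
    return (reads_3p + pseudocount) / (reads_5p + pseudocount)

ratio = three_prime_five_prime_ratio(reads_3p=1800, reads_5p=120)
if ratio > 10.0:  # illustrative cutoff; elevated ratios need confirmation
    print(f"3'/5' ratio {ratio:.1f}: possible fusion, confirmatory test required")
```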
Present status and trends of image fusion
NASA Astrophysics Data System (ADS)
Xiang, Dachao; Fu, Sheng; Cai, Yiheng
2009-10-01
Image fusion extracts information from multiple images that is more accurate and reliable than information from any single image. Because different images capture different aspects of the measured scene, comprehensive information can be obtained by integrating them. Image fusion is a main branch of applied data fusion technology and is now widely used in computer vision, remote sensing, robot vision, medical image processing and the military field. This paper presents the contents and research methods of image fusion and its current status in China and abroad, and analyzes its development trends.
Djan, Igor; Petrović, Borislava; Erak, Marko; Nikolić, Ivan; Lucić, Silvija
2013-08-01
The development of imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET) has had a great impact on radiotherapy treatment planning by improving the localization of target volumes. Improved localization allows better local control of tumor volumes and minimizes geographical misses. Mutual information is exploited through registration and fusion of images, achieved manually or automatically. The aim of this study was to validate the CT-MRI image fusion method and to compare delineation obtained on CT alone versus CT-MRI fused images. Image fusion software (XIO CMS 4.50.0) was used to delineate 16 patients. The patients were scanned on CT and MRI in the treatment position within an immobilization device before the initial treatment. The gross tumor volume (GTV) and clinical target volume (CTV) were delineated consecutively on CT alone and on fused CT+MRI images. The comparison showed that a CTV delineated on a CT image set alone is mainly inadequate for treatment planning compared with a CTV delineated on a CT-MRI fused image set. Fusion of different modalities enables the most accurate target volume delineation. This study shows that registration and image fusion allow precise target localization in terms of GTV and CTV, and thereby local disease control.
[Possibilities of sonographic image fusion: Current developments].
Jung, E M; Clevert, D-A
2015-11-01
For diagnostic and interventional procedures, ultrasound (US) image fusion can be used as a complementary imaging technique. Image fusion has the advantage of real-time imaging and can be combined with other cross-sectional imaging techniques. With the introduction of US contrast agents, sonography and image fusion have gained importance in the detection and characterization of liver lesions. Fusion of US images with computed tomography (CT) or magnetic resonance imaging (MRI) facilitates diagnosis and post-interventional therapy control. In addition to the primary application of image fusion in the diagnosis and treatment of liver lesions, there are further useful indications for contrast-enhanced US (CEUS) in routine clinical diagnostics, such as intraoperative US (IOUS), vascular imaging and the diagnosis of other organs such as the kidneys and prostate gland.
Felsenstein, K M; Goff, S P
1992-01-01
The gag-pol polyprotein of the murine and feline leukemia viruses is expressed by translational readthrough of a UAG terminator codon at the 3' end of the gag gene. To explore the cis-acting sequence requirements for the readthrough event in vivo, we generated a library of mutants of the Moloney murine leukemia virus with point mutations near the terminator codon and tested the mutant viral DNAs for the ability to direct synthesis of the gag-pol fusion protein and formation of infectious virus. The analysis showed that sequences 3' to the terminator are necessary and sufficient for the process. The results do not support a role for one proposed stem-loop structure that includes the terminator but are consistent with the involvement of another stem-loop 3' to the terminator. One mutant, containing two compensatory changes in this stem structure, was temperature sensitive for replication and for formation of the gag-pol protein. The results suggest that RNA sequence and structure are critical determinants of translational readthrough in vivo. PMID:1404606
Airborne Infrared and Visible Image Fusion Combined with Region Segmentation
Zuo, Yujia; Liu, Jinghong; Bai, Guanbing; Wang, Xuan; Sun, Mingchao
2017-01-01
This paper proposes an infrared (IR) and visible image fusion method that introduces region segmentation into the dual-tree complex wavelet transform (DTCWT) domain. The method is intended to improve both the target indication and scene spectral features of fusion images, and the target identification and tracking reliability of the fusion system, on an airborne photoelectric platform. It segments regions in the IR image by significance, identifying the target region and the background region, and then fuses the low-frequency components in the DTCWT domain according to the segmentation result. For the high-frequency components, region weights are assigned according to the information richness of region details, fusion is conducted based on both weights and adaptive phases, and a shrinkage function is introduced to suppress noise. Finally, the fused low-frequency and high-frequency components are reconstructed to obtain the fusion image. The experimental results show that the proposed method fully extracts complementary information from the source images to obtain a fusion image with good target indication and rich scene detail. It also gives fusion results superior to existing popular fusion methods under both subjective and objective evaluation. With good stability and high fusion accuracy, the method can meet the requirements of IR-visible image fusion systems. PMID:28505137
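A minimal sketch of the region-driven DTCWT fusion idea, written against the open-source `dtcwt` package (its `Transform2d`/`Pyramid` API is assumed here): the low-frequency band follows a crude IR saliency mask standing in for the paper's significance-based segmentation, and the high-frequency coefficients simply take the larger magnitude rather than the paper's weighted, phase-adaptive rule.

```python
# Rough sketch of region-driven fusion in the DTCWT domain.
import numpy as np
import cv2
import dtcwt

def fuse_ir_visible(ir, vis, nlevels=4):
    t = dtcwt.Transform2d()
    p_ir = t.forward(ir.astype(float), nlevels=nlevels)
    p_vis = t.forward(vis.astype(float), nlevels=nlevels)

    # Crude stand-in for significance-based segmentation: bright IR pixels.
    mask = (ir > ir.mean() + 2 * ir.std()).astype(np.float32)

    # Low-frequency rule: take IR inside the target region, visible elsewhere.
    lo_mask = cv2.resize(mask, p_ir.lowpass.shape[1::-1])
    lowpass = lo_mask * p_ir.lowpass + (1.0 - lo_mask) * p_vis.lowpass

    # High-frequency rule (simplified): keep the larger-magnitude coefficient.
    highpasses = tuple(
        np.where(np.abs(h_ir) >= np.abs(h_vis), h_ir, h_vis)
        for h_ir, h_vis in zip(p_ir.highpasses, p_vis.highpasses))

    return t.inverse(dtcwt.Pyramid(lowpass, highpasses))
```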
Anatomic vascular phantom for the verification of MRA and XRA visualization and fusion
NASA Astrophysics Data System (ADS)
Mankovich, Nicholas J.; Lambert, Timothy; Zrimec, Tatjana; Hiller, John B.
1995-05-01
A project is underway to develop automated methods of fusing cerebral magnetic resonance angiography (MRA) and x-ray angiography (XRA) to create accurate visualizations for planning treatment of vascular disease. We have developed a vascular phantom suitable for testing segmentation and fusion algorithms with either derived images (pseudo-MRA/pseudo-XRA) or actual MRA or XRA image sequences. The initial unilateral arterial phantom design, based on normal human anatomy, contains 48 tapering vascular segments with lumen diameters from 2.5 mm to 0.25 mm. The initial phantom was built with rapid prototyping technology (stereolithography), with a 0.9 mm vessel wall fabricated in an ultraviolet-cured plastic. Fabrication yielded a hollow vessel model comprising the internal carotid artery, the ophthalmic artery, and the proximal segments of the anterior, middle and posterior cerebral arteries. The complete model was fabricated, but the lumen could not be cleared for vessels of less than 1 mm diameter. Measurements of selected vascular outer diameters judged against the CAD specification showed an accuracy of 0.14 mm and a precision (standard deviation) of 0.15 mm. The plastic vascular model provides a fixed geometric framework for evaluating imaging protocols and developing algorithms for both segmentation and fusion.
Walter, Uwe; Niendorf, Thoralf; Graessl, Andreas; Rieger, Jan; Krüger, Paul-Christian; Langner, Sönke; Guthoff, Rudolf F; Stachs, Oliver
2014-05-01
A combination of magnetic resonance images with real-time high-resolution ultrasound known as fusion imaging may improve ophthalmologic examination. This study was undertaken to evaluate the feasibility of orbital high-field magnetic resonance and real-time colour Doppler ultrasound image fusion and navigation. This case study, performed between April and June 2013, included one healthy man (age, 47 years) and two patients (one woman, 57 years; one man, 67 years) with choroidal melanomas. All cases underwent 7.0-T magnetic resonance imaging using a custom-made ocular imaging surface coil. The Digital Imaging and Communications in Medicine volume data set was then loaded into the ultrasound system for manual registration of the live ultrasound image and fusion imaging examination. Data registration, matching and then volume navigation were feasible in all cases. Fusion imaging provided real-time imaging capabilities and high tissue contrast of choroidal tumour and optic nerve. It also allowed adding a real-time colour Doppler signal on magnetic resonance images for assessment of vasculature of tumour and retrobulbar structures. The combination of orbital high-field magnetic resonance and colour Doppler ultrasound image fusion and navigation is feasible. Multimodal fusion imaging promises to foster assessment and monitoring of choroidal melanoma and optic nerve disorders. • Orbital magnetic resonance and colour Doppler ultrasound real-time fusion imaging is feasible • Fusion imaging combines the spatial and temporal resolution advantages of each modality • Magnetic resonance and ultrasound fusion imaging improves assessment of choroidal melanoma vascularisation.
Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan
2014-01-01
This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale fusion methods can often obtain fused images with good visual effect; however, because of defects in the fusion rules, it is almost impossible to completely avoid the loss of useful information in the fused images so obtained. The proposed fusion scheme is divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT): Sum-Modified-Laplacian (SML)-based visual contrast and SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composite image is obtained. In the final fusion process, an image block residual technique and consistency verification are used to detect the focused areas, yielding a decision map, and the map guides the construction of the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including non-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperforms various state-of-the-art fusion methods in terms of both subjective and objective evaluation, and is more suitable for VSNs. PMID:25587878
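A minimal sketch of the Sum-Modified-Laplacian focus measure used in both fusion rules above; the window size and border handling are simplified illustrative choices.

```python
# Minimal sketch: SML focus measure and a per-pixel focus decision map.
import numpy as np
from scipy.ndimage import uniform_filter

def sml(img, window=5):
    """Sum-Modified-Laplacian: local energy of the modified Laplacian.
    Border handling is simplified via wrap-around (np.roll)."""
    f = img.astype(float)
    ml = (np.abs(2 * f - np.roll(f, 1, axis=0) - np.roll(f, -1, axis=0)) +
          np.abs(2 * f - np.roll(f, 1, axis=1) - np.roll(f, -1, axis=1)))
    return uniform_filter(ml, size=window)  # windowed sum, up to a constant

def decision_map(img_a, img_b, window=5):
    """True where img_a is judged more in focus than img_b."""
    return sml(img_a, window) >= sml(img_b, window)
```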
[An improved medical image fusion algorithm and quality evaluation].
Chen, Meiling; Tao, Ling; Qian, Zhiyu
2009-08-01
Medical image fusion is of great value in medical image analysis and diagnosis. In this paper, the conventional wavelet fusion method is improved and a new medical image fusion algorithm is presented, in which the high-frequency and low-frequency coefficients are treated separately. When the high-frequency coefficients are chosen, the regional edge intensities of each sub-image are calculated to realize adaptive fusion. The choice of the low-frequency coefficients is based on image edges, so that the fused image preserves all useful information and appears more distinct. We apply both the conventional and the improved wavelet-based fusion algorithms to fuse two images of the human body and evaluate the fusion results with a quality evaluation method. Experimental results show that the algorithm effectively retains the detail information of the original images and enhances their edge and texture features, and that the new algorithm outperforms the conventional wavelet-based fusion algorithm.
A survey of infrared and visual image fusion methods
NASA Astrophysics Data System (ADS)
Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Hai, Jinjin; He, Kangjian
2017-09-01
Infrared (IR) and visual (VI) image fusion combines multiple source images into a comprehensive image to boost imaging quality and reduce redundancy, and is widely used in imaging equipment to improve the visual ability of humans and robots. The accurate, reliable and complementary descriptions of the scene in fused images have led these techniques to be applied in many fields. In recent years, a large number of fusion methods for IR and VI images have been proposed, driven by ever-growing demands and progress in image representation; however, no integrated survey of the field has been published in the last several years. We therefore survey the algorithmic developments in IR and VI image fusion. In this paper, we first characterize applications of IR and VI image fusion to give an overview of the research status. Then we present a synthesized survey of the state of the art. Thirdly, the frequently used image fusion quality measures are introduced. Fourthly, we perform experiments on typical methods and analyze the results. Finally, we summarize the corresponding tendencies and challenges in IR and VI image fusion. This survey concludes that although various IR and VI image fusion methods have been proposed, there remain further improvements and potential research directions in the different applications of IR and VI image fusion.
Objective quality assessment for multiexposure multifocus image fusion.
Hassen, Rania; Wang, Zhou; Salama, Magdy M A
2015-09-01
There has been a growing interest in image fusion technologies, but how to objectively evaluate the quality of fused images has not been fully understood. Here, we propose a method for objective quality assessment of multiexposure multifocus image fusion based on the evaluation of three key factors of fused image quality: 1) contrast preservation; 2) sharpness; and 3) structure preservation. Subjective experiments are conducted to create an image fusion database, based on which, performance evaluation shows that the proposed fusion quality index correlates well with subjective scores, and gives a significant improvement over the existing fusion quality measures.
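The proposed index itself is not reproduced here, but the flavor of objective fusion assessment can be illustrated with two simpler no-reference indicators that recur in this literature: spatial frequency (a sharpness proxy) and entropy (an information proxy). Both operate on the fused image alone.

```python
# Minimal sketches of two common no-reference fusion quality indicators.
import numpy as np

def spatial_frequency(img):
    """Root-mean-square of row and column first differences."""
    f = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(f, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(f, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def entropy(img, bins=256):
    """Shannon entropy of the gray-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```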
NASA Astrophysics Data System (ADS)
Hoegner, L.; Tuttas, S.; Xu, Y.; Eder, K.; Stilla, U.
2016-06-01
This paper discusses the automatic coregistration and fusion of 3D point clouds generated from aerial image sequences and corresponding thermal infrared (TIR) images. Both RGB and TIR images were taken from an RPAS platform on a predefined flight path, where every RGB image has a corresponding TIR image taken from the same position and with the same orientation, within the accuracy of the RPAS system and its inertial measurement unit. To remove remaining differences in the exterior orientation, different strategies for coregistering RGB and TIR images are discussed: (i) coregistration based on 2D line segments for every single TIR image and the corresponding RGB image, which assumes a mainly planar scene to avoid mismatches; (ii) coregistration of the dense 3D point clouds from RGB and TIR images by coregistering 2D image projections of both point clouds; (iii) coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted to the segmented dense 3D point cloud; (iv) coregistration of the dense 3D point clouds from RGB and TIR images using both ICP and an adapted version based on corresponding segmented planes; (v) coregistration of both image sets based on point features. The quality is measured by comparing the differences of the back-projection of homologous points in the corrected RGB and TIR images.
A New Approach to Image Fusion Based on Cokriging
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; LeMoigne, Jacqueline; Mount, David M.; Morisette, Jeffrey T.
2005-01-01
We consider the image fusion problem involving remotely sensed data and introduce cokriging as a method to perform fusion. We investigate the advantages of fusing Hyperion with ALI. The evaluation is performed by comparing the classification of the fused data with that of the input images and by calculating well-chosen quantitative fusion quality metrics. We consider the Invasive Species Forecasting System (ISFS) project as our fusion application. The fusion of ALI with Hyperion data is first studied using PCA and wavelet-based fusion; we then propose a geostatistical interpolation method called cokriging as a new approach for image fusion.
Gradient-based multiresolution image fusion.
Petrović, Vladimir S; Xydeas, Costas S
2004-02-01
A novel approach to multiresolution signal-level image fusion is presented for accurately transferring visual information from any number of input image signals, into a single fused image without loss of information or the introduction of distortion. The proposed system uses a "fuse-then-decompose" technique realized through a novel, fusion/decomposition system architecture. In particular, information fusion is performed on a multiresolution gradient map representation domain of image signal information. At each resolution, input images are represented as gradient maps and combined to produce new, fused gradient maps. Fused gradient map signals are processed, using gradient filters derived from high-pass quadrature mirror filters to yield a fused multiresolution pyramid representation. The fused output image is obtained by applying, on the fused pyramid, a reconstruction process that is analogous to that of conventional discrete wavelet transform. This new gradient fusion significantly reduces the amount of distortion artefacts and the loss of contrast information usually observed in fused images obtained from conventional multiresolution fusion schemes. This is because fusion in the gradient map domain significantly improves the reliability of the feature selection and information fusion processes. Fusion performance is evaluated through informal visual inspection and subjective psychometric preference tests, as well as objective fusion performance measurements. Results clearly demonstrate the superiority of this new approach when compared to conventional fusion systems.
Study on polarization image methods in turbid medium
NASA Astrophysics Data System (ADS)
Fu, Qiang; Mo, Chunhe; Liu, Boyu; Duan, Jin; Zhang, Su; Zhu, Yong
2014-11-01
Polarization imaging detection technology can acquire multi-dimensional polarization information in addition to traditional intensity imagery, improving the probability of target detection and recognition. Research on image fusion of polarization images of targets in turbid media helps to obtain high-quality images. Based on laser polarization imaging at visible wavelengths, linearly polarized intensity images were acquired by rotating the angle of a polarizer, and the polarization parameters of targets in turbid media with concentrations ranging from 5% to 10% were obtained. Image fusion techniques were then introduced into the processing: we apply different polarization image fusion methods to the acquired polarization images, discuss several fusion methods with superior performance for turbid media, and give the processing results together with tables of analyzed data. Pixel-level, feature-level and decision-level fusion algorithms were used to fuse the DOLP (degree of linear polarization) images. The results show that as the polarization angle increases, the polarization images become increasingly blurred and their quality degrades, while the fused images show clearly improved contrast over any single image; the reasons for the contrast improvement of the polarized-light images are analyzed.
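A minimal sketch of the degree-of-linear-polarization computation underlying the DOLP images fused above, from intensity images at four polarizer angles; the inputs are cast to float first to guard against unsigned-integer overflow.

```python
# Minimal sketch: DoLP from polarizer angles 0°, 45°, 90°, 135° via Stokes
# parameters S0, S1, S2.
import numpy as np

def dolp(i0, i45, i90, i135):
    i0, i45, i90, i135 = (np.asarray(x, dtype=float)
                          for x in (i0, i45, i90, i135))
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical
    s2 = i45 - i135                      # +45° vs. -45°
    return np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + 1e-12)
```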
Novel kinase fusion transcripts found in endometrial cancer
Tamura, Ryo; Yoshihara, Kosuke; Yamawaki, Kaoru; Suda, Kazuaki; Ishiguro, Tatsuya; Adachi, Sosuke; Okuda, Shujiro; Inoue, Ituro; Verhaak, Roel G. W.; Enomoto, Takayuki
2015-01-01
Recent advances in RNA-sequencing technology have enabled the discovery of gene fusion transcripts in the transcriptome of cancer cells. However, it remains difficult to differentiate therapeutically targetable fusions from passenger events. We analyzed RNA-sequencing data and DNA copy number data from 25 endometrial cancer cell lines to identify potentially targetable fusion transcripts, and identified 124 high-confidence fusion transcripts, of which 69% are associated with gene amplifications. As targetable candidates, we focused on three in-frame kinase fusion transcripts that retain a kinase domain (CPQ-PRKDC, CAPZA2-MET, and VGLL4-PRKG1). We detected only the CPQ-PRKDC fusion transcript, in three of 122 primary endometrial cancer tissues. Cell proliferation of the fusion-positive cell line was inhibited by knocking down the expression of wild-type PRKDC but not by blocking expression of the CPQ-PRKDC fusion transcript. Quantitative real-time RT-PCR demonstrated that expression of the CPQ-PRKDC fusion transcript was significantly lower than that of wild-type PRKDC, corresponding to the low transcript allele fraction of this fusion based on RNA-sequencing read counts. In endometrial cancers, the CPQ-PRKDC fusion transcript may be a passenger aberration related to gene amplification. Our findings suggest that transcript allele fraction is a useful predictor for finding bona fide therapeutically targetable fusion transcripts. PMID:26689674
FusionAnalyser: a new graphical, event-driven tool for fusion rearrangements discovery
Piazza, Rocco; Pirola, Alessandra; Spinelli, Roberta; Valletta, Simona; Redaelli, Sara; Magistroni, Vera; Gambacorti-Passerini, Carlo
2012-01-01
Gene fusions are common driver events in leukaemias and solid tumours; here we present FusionAnalyser, a tool dedicated to the identification of driver fusion rearrangements in human cancer through the analysis of paired-end high-throughput transcriptome sequencing data. We initially tested FusionAnalyser using a set of in silico randomly generated sequencing data from 20 known human translocations occurring in cancer, and subsequently using transcriptome data from three chronic and three acute myeloid leukaemia samples. In all cases our tool was able to detect the presence of the correct driver fusion event(s) with high specificity. In one of the acute myeloid leukaemia samples, FusionAnalyser identified a novel, cryptic, in-frame ETS2–ERG fusion. A fully event-driven graphical interface and a flexible filtering system allow complex analyses to be run without any a priori programming or scripting knowledge. We therefore propose FusionAnalyser as an efficient and robust graphical tool for the identification of functional rearrangements in high-throughput transcriptome sequencing data. PMID:22570408
The interobserver-validated relevance of intervertebral spacer materials in MRI artifacting
Heidrich, G.; Bruening, T.; Krefft, S.; Buchhorn, G.; Klinger, H.M.
2006-01-01
Intervertebral spacers for anterior spine fusion are made of different materials, such as titanium, carbon or cobalt-chrome, which can affect post-fusion MRI scans. Implant-related susceptibility artifacts can decrease the quality of MRI scans and thwart proper evaluation. This cadaver study aimed to demonstrate the extent to which implant-related MRI artifacting affects the post-fusion evaluation of intervertebral spacers. In a cadaveric porcine spine, we evaluated post-implantation MRI scans of three intervertebral spacers that differed in shape, material, surface qualities and implantation technique. A spacer made of human cortical bone was used as a control. The median sagittal MRI slice was divided into 12 regions of interest (ROI). No significant differences were found across 15 different MRI sequences read independently by an interobserver-validated team of specialists (P>0.05). Artifact-affected image quality was rated on a 0-1-2 scale, for a maximum possible score of 24 points (100%). Turbo spin echo sequences produced the best scores for all spacers and the control. Only the control achieved a score of 100%; the carbon, titanium and cobalt-chrome spacers scored 83.3%, 62.5% and 50%, respectively. Our scoring system allowed us to create an implant-related ranking of MRI scan quality against the control that was independent of artifact dimensions. The carbon spacer had the lowest level of susceptibility artifacts. Even with turbo spin echo sequences, the susceptibility artifacts produced by the metallic spacers showed a high degree of variability. Despite optimal sequencing, implant design and material are relevant factors in MRI artifacting. PMID:16463200
Fusion of multi-spectral and panchromatic images based on 2D-PWVD and SSIM
NASA Astrophysics Data System (ADS)
Tan, Dongjie; Liu, Yi; Hou, Ruonan; Xue, Bindang
2016-03-01
A combined method using the 2D pseudo Wigner-Ville distribution (2D-PWVD) and the structural similarity (SSIM) index is proposed for fusion of a low-resolution multi-spectral (MS) image and a high-resolution panchromatic (PAN) image. First, the intensity component of the multi-spectral image is extracted with the generalized IHS transform. Then, spectrum diagrams of the intensity component of the multi-spectral image and of the panchromatic image are obtained with the 2D-PWVD. Different fusion rules are designed for the different frequency bands of the spectrum diagrams: the SSIM index is used to evaluate the high-frequency information of the spectrum diagrams so as to assign the fusion weights adaptively. After the new spectrum diagram is formed according to the fusion rules, the final fused image is obtained by the inverse 2D-PWVD and inverse GIHS transform. Experimental results show that the proposed method obtains high-quality fusion images.
Vollnhals, Florian; Audinot, Jean-Nicolas; Wirtz, Tom; Mercier-Bonin, Muriel; Fourquaux, Isabelle; Schroeppel, Birgit; Kraushaar, Udo; Lev-Ram, Varda; Ellisman, Mark H; Eswara, Santhana
2017-10-17
Correlative microscopy combining various imaging modalities offers powerful insights into obtaining a comprehensive understanding of physical, chemical, and biological phenomena. In this article, we investigate two approaches for image fusion in the context of combining the inherently lower-resolution chemical images obtained using secondary ion mass spectrometry (SIMS) with the high-resolution ultrastructural images obtained using electron microscopy (EM). We evaluate the image fusion methods with three different case studies selected to broadly represent the typical samples in life science research: (i) histology (unlabeled tissue), (ii) nanotoxicology, and (iii) metabolism (isotopically labeled tissue). We show that the intensity-hue-saturation fusion method often applied for EM-sharpening can result in serious image artifacts, especially in cases where different contrast mechanisms interplay. Here, we introduce and demonstrate Laplacian pyramid fusion as a powerful and more robust alternative method for image fusion. Both physical and technical aspects of correlative image overlay and image fusion specific to SIMS-based correlative microscopy are discussed in detail alongside the advantages, limitations, and the potential artifacts. Quantitative metrics to evaluate the results of image fusion are also discussed.
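A minimal sketch of Laplacian pyramid fusion as named above, assuming co-registered, equal-sized, single-channel inputs: the detail bands take the locally stronger modality, while the coarse base is averaged. The level count is an illustrative choice.

```python
# Minimal sketch: Laplacian pyramid fusion of two co-registered images.
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = [gp[-1]]                                     # coarsest Gaussian level
    for i in range(levels, 0, -1):
        up = cv2.pyrUp(gp[i], dstsize=gp[i - 1].shape[1::-1])
        lp.append(gp[i - 1] - up)                     # band-pass residual
    return lp                                         # [base, coarse..fine detail]

def fuse_laplacian(a, b, levels=4):
    la, lb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    fused = [(la[0] + lb[0]) / 2]                     # average the base band
    fused += [np.where(np.abs(x) >= np.abs(y), x, y)  # stronger detail wins
              for x, y in zip(la[1:], lb[1:])]
    out = fused[0]
    for detail in fused[1:]:                          # collapse the pyramid
        out = cv2.pyrUp(out, dstsize=detail.shape[1::-1]) + detail
    return out
```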
The plant virus microscope image registration method based on mismatches removing.
Wei, Lifang; Zhou, Shucheng; Dong, Heng; Mao, Qianzhuo; Lin, Jiaxiang; Chen, Riqing
2016-01-01
Electron microscopy is one of the major means of observing viruses. The field of view of virus microscope images is limited by specimen preparation and by the size of the camera's field of view. To address this, the virus sample is prepared as multiple slices, and information fusion and image registration techniques are applied to obtain a large field of view and whole sections. Image registration techniques have been developed over the past decades to increase the camera's effective field of view. Nevertheless, these approaches typically work in batch mode and rely on motorized microscopes, or are designed only to provide visually pleasing registration for image sequences with high overlap ratios. This work presents a method for registering virus microscope images with detailed visual information at subpixel accuracy, even when the overlap ratio of the image sequence is 10% or less. The proposed method focuses on the correspondence set and the inter-image transformation. A mismatch-removal strategy based on spatial consistency and keypoint components is proposed to enrich the correspondence set, and the translation model parameters as well as tonal inhomogeneities are corrected by hierarchical estimation and model selection. In the experiments performed, we tested different registration approaches and virus images, confirming that the translation model is not always stationary, despite the images of the sample coming from the same sequence. The mismatch-removal strategy makes subpixel registration of virus microscope images easier, and the hierarchical estimation and model selection strategies make the proposed method precise and reliable for image sequences with low overlap ratios. Copyright © 2015 Elsevier Ltd. All rights reserved.
Dynamics of Laser-Driven Shock Waves in Solid Targets
NASA Astrophysics Data System (ADS)
Aglitskiy, Y.; Karasik, M.; Velikovich, A. L.; Serlin, V.; Weaver, J.; Schmitt, A. J.; Obenschain, S. P.; Grun, J.; Metzler, N.; Zalesak, S. T.; Gardner, J. H.; Oh, J.; Harding, E. C.
2009-11-01
Accurate shock timing is a key issue in both indirect- and direct-drive laser fusion. The experiments on the Nike laser at NRL presented here were made possible by improvements in the imaging capability of our monochromatic x-ray diagnostics, based on Bragg reflection from spherically curved crystals. Side-on imaging implemented on Nike makes it possible to observe the dynamics of the shock wave and ablation front in laser-driven solid targets. We can choose to observe either a sequence of 2D images or the continuous time evolution of an image resolved in one spatial dimension. A sequence of 300 ps snapshots taken with a vanadium backlighter at 5.2 keV reveals the propagation of a shock wave in a solid plastic target; the shape of the shock wave reflects the intensity distribution in the Nike beam. The streak records with continuous time resolution show the x-t trajectory of a laser-driven shock wave in a 10% solid density DVB foam.
Makino, Yuki; Imai, Yasuharu; Igura, Takumi; Hori, Masatoshi; Fukuda, Kazuto; Sawai, Yoshiyuki; Kogita, Sachiyo; Fujita, Norihiko; Takehara, Tetsuo; Murakami, Takamichi
2015-01-01
To assess the feasibility of fusion of pre- and post-ablation gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid-enhanced magnetic resonance imaging (Gd-EOB-DTPA-MRI) for evaluating the effects of radiofrequency ablation (RFA) of hepatocellular carcinoma (HCC), compared with similarly fused CT images. This retrospective study included 67 patients with 92 HCCs treated with RFA. Fusion images of pre- and post-RFA dynamic CT, and of pre- and post-RFA Gd-EOB-DTPA-MRI, were created using a rigid registration method. The minimal ablative margin measured on fusion imaging was categorized into three groups: (1) tumor protruding outside the ablation zone boundary, (2) ablative margin 0-<5.0 mm beyond the tumor boundary, and (3) ablative margin ≥5.0 mm beyond the tumor boundary. The categorization of minimal ablative margins was compared between CT and MR fusion images. In 57 (62.0%) HCCs, treatment evaluation was possible on both CT and MR fusion images, and the overall agreement between them for the categorization of the minimal ablative margin was good (κ coefficient = 0.676, P < 0.01). MR fusion imaging enabled treatment evaluation in a significantly larger number of HCCs than CT fusion imaging (86/92 [93.5%] vs. 62/92 [67.4%], P < 0.05). Fusion of pre- and post-ablation Gd-EOB-DTPA-MRI is feasible for treatment evaluation after RFA and may enable accurate treatment evaluation in cases where CT fusion imaging is not helpful.
[Research progress of multi-modal medical image fusion and recognition].
Zhou, Tao; Lu, Huiling; Chen, Zhiqiang; Ma, Jingxian
2013-10-01
Medical image fusion and recognition has a wide range of applications, such as focal lesion localization, cancer staging and assessment of treatment effect. Multi-modal medical image fusion and recognition is analyzed and summarized in this paper. Firstly, the problem of multi-modal medical image fusion and recognition is introduced, along with its advantages and key steps. Secondly, three fusion strategies are reviewed from the algorithmic point of view, and four fusion-recognition structures are discussed. Thirdly, the difficulties, challenges and possible future research directions are discussed.
Shirvani, Atefeh; Jabbari, Keyvan; Amouheidari, Alireza
2017-01-01
In radiation therapy, computed tomography (CT) simulation is used in treatment planning to define the location of the tumor. Magnetic resonance imaging (MRI)-CT image fusion leads to more efficient tumor contouring. This work identifies practical issues in combining CT and MRI images in real clinical cases and evaluates the effect of various factors on image fusion quality. In this study, data from thirty patients with brain tumors were used for image fusion. The effect of several parameters on the possibility and quality of image fusion was evaluated, including the angle of the patient's head on the bed, slice thickness, slice gap, and the height of the patient's head. According to the results, the dominant factor affecting image fusion quality was the difference in slice gap between the CT and MRI images (cor = 0.86, P < 0.005); the second factor was the angle between the CT and MRI slices in the sagittal plane (cor = 0.75, P < 0.005). In 20% of patients this angle was more than 28° and image fusion was not efficient. In 17% of patients the difference in slice gap between CT and MRI was >4 cm and the image fusion quality was <25%. The most important problem in image fusion is that MRI images are acquired without regard to their later use in treatment planning. In general, the parameters of patient positioning during MRI imaging should be chosen to be consistent with the patient's CT images in terms of location and angle.
Extended depth of field integral imaging using multi-focus fusion
NASA Astrophysics Data System (ADS)
Piao, Yongri; Zhang, Miao; Wang, Xiaohui; Li, Peihua
2018-03-01
In this paper, we propose a new method for extending the depth of field in integral imaging by applying image fusion to multi-focus elemental images. In the proposed method, a camera is translated on a 2D grid to capture multi-focus elemental images while sweeping the focal plane across the scene. Simply applying an image fusion method to elemental images that hold rich parallax information does not work effectively, because registration accuracy is a prerequisite for image fusion. To solve this problem, an elemental image generalization method is proposed. The aim of this generalization process is to geometrically align the objects in all elemental images so that the correct regions of the multi-focus elemental images can be extracted. All-in-focus elemental images are then generated by fusing the generalized elemental images with a block-based fusion method. The experimental results demonstrate that the depth of field of the synthetic aperture integral imaging system is extended by combining the generalization method with image fusion on multi-focus elemental images.
Luzio, J P; Brake, B; Banting, G; Howell, K E; Braghetta, P; Stanley, K K
1990-01-01
Organelle-specific integral membrane proteins were identified by a novel strategy which gives rise to monospecific antibodies against these proteins as well as to the cDNA clones encoding them. A cDNA expression library was screened with a polyclonal antiserum raised against Triton X-114-extracted organelle proteins, and clones were then grouped using antibodies affinity-purified on individual fusion proteins. The identification, molecular cloning and sequencing of a type 1 membrane protein (TGN38), located specifically in the trans-Golgi network, are described. PMID:2204342
Cha, Dong Ik; Lee, Min Woo; Kim, Ah Yeong; Kang, Tae Wook; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga
2017-11-01
Background: A major drawback of conventional manual image fusion is that the process may be complex, especially for less-experienced operators. Recently, two automatic image fusion techniques, called Positioning and Sweeping auto-registration, have been developed. Purpose: To compare the accuracy and required time for image fusion of real-time ultrasonography (US) and computed tomography (CT) images between Positioning and Sweeping auto-registration. Material and Methods: Eighteen consecutive patients referred for planning US for radiofrequency ablation or biopsy of focal hepatic lesions were enrolled. Image fusion using both auto-registration methods was performed for each patient. Registration error, time required for image fusion, and number of point locks used were compared using the Wilcoxon signed rank test. Results: Image fusion was successful in all patients. Positioning auto-registration was significantly faster than Sweeping auto-registration for both initial (median, 11 s [range, 3-16 s] vs. 32 s [range, 21-38 s]; P < 0.001) and complete (median, 34.0 s [range, 26-66 s] vs. 47.5 s [range, 32-90 s]; P = 0.001) image fusion. The registration error of Positioning auto-registration was significantly higher for initial image fusion (median, 38.8 mm [range, 16.0-84.6 mm] vs. 18.2 mm [range, 6.7-73.4 mm]; P = 0.029), but not for complete image fusion (median, 4.75 mm [range, 1.7-9.9 mm] vs. 5.8 mm [range, 2.0-13.0 mm]; P = 0.338). The number of point locks required to refine the initially fused images was significantly higher with Positioning auto-registration (median, 2 [range, 2-3] vs. 1 [range, 1-2]; P = 0.012). Conclusion: Positioning auto-registration offers faster image fusion between real-time US and pre-procedural CT images than Sweeping auto-registration. The final registration error is similar between the two methods.
Baek, Jihye; Huh, Jangyoung; Kim, Myungsoo; Hyun An, So; Oh, Yoonjin; Kim, DongYoung; Chung, Kwangzoo; Cho, Sungho; Lee, Rena
2013-02-01
To evaluate the accuracy of volume measurement using three-dimensional ultrasound (3D US), and to verify the feasibility of replacing CT-MR fusion images with CT-3D US in radiotherapy treatment planning. Phantoms consisting of water, contrast agent, and agarose were manufactured, and their volumes were measured using 3D US, CT, and MR devices. CT-3D US and MR-3D US image fusion software was developed using the Insight Toolkit library in order to acquire three-dimensional fusion images. The quality of the image fusion was evaluated using metric values and the fusion images themselves. Volume measurement using 3D US showed a 2.8 ± 1.5% error, versus 4.4 ± 3.0% for CT and 3.1 ± 2.0% for MR, implying that volume measurement using 3D US devices has an accuracy similar to that of CT and MR. Three-dimensional image fusion of CT-3D US and MR-3D US was successfully performed using phantom images, and MR-3D US image fusion was also performed using human bladder images. 3D US could be used for volume measurement of the human bladder and prostate. CT-3D US image fusion could be used to monitor the target position in each fraction of external beam radiation therapy, and the feasibility of replacing CT-MR image fusion with CT-3D US in radiotherapy treatment planning was verified.
Determination of feature generation methods for PTZ camera object tracking
NASA Astrophysics Data System (ADS)
Doyle, Daniel D.; Black, Jonathan T.
2012-06-01
Object detection and tracking using computer vision (CV) techniques have been widely applied to sensor fusion applications. Many papers continue to be written that speed up performance and increase the learning of artificially intelligent systems through improved algorithms, workload distribution, and information fusion. Military applications of real-time tracking systems are becoming more and more complex, with an ever-increasing need for fusion and CV techniques to actively track and control dynamic systems. Examples include the use of metrology systems for tracking and measuring micro air vehicles (MAVs) and autonomous navigation systems for controlling MAVs. This paper seeks to contribute to the determination of the tracking algorithms that best track a moving object using a pan/tilt/zoom (PTZ) camera, applicable to both of the examples presented. The feature generation algorithms compared in this paper are the trained Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF), the Mixture of Gaussians (MoG) background subtraction method, the Lucas-Kanade optical flow method (2000), and the Farneback optical flow method (2003). The matching algorithm used for the trained feature generation algorithms is the Fast Library for Approximate Nearest Neighbors (FLANN). The BSD-licensed OpenCV library is used extensively to demonstrate the viability of each algorithm and its performance. Initial testing is performed on a sequence of images from a stationary camera; further testing is performed on sequences in which the PTZ camera moves to capture the moving object. Comparisons are made based upon accuracy, speed, and memory.
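One of the pairings evaluated above, SIFT keypoints matched with FLANN plus Lowe's ratio test, can be sketched with OpenCV as follows; the ratio threshold and FLANN parameters are common defaults, not the paper's tuned values.

```python
# Minimal sketch: SIFT keypoints matched with FLANN and a ratio test,
# as in a frame-to-frame tracker feeding a PTZ control loop.
import cv2

def match_sift_flann(frame_prev, frame_curr, ratio=0.75):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(frame_prev, None)
    kp2, des2 = sift.detectAndCompute(frame_curr, None)

    index_params = dict(algorithm=1, trees=5)        # FLANN_INDEX_KDTREE
    flann = cv2.FlannBasedMatcher(index_params, dict(checks=50))
    matches = flann.knnMatch(des1, des2, k=2)

    # Lowe's ratio test rejects ambiguous matches.
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    return kp1, kp2, good                            # feed into pose/PTZ update
```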
Multispectral image fusion for target detection
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-09-01
Various methods to perform multi-spectral image fusion have been suggested, mostly at the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in an experiment using MSSF against two established methods, averaging and Principal Component Analysis (PCA), and against its two source bands, visible and infrared. The task studied was target detection in a cluttered environment. MSSF proved superior to the other fusion methods. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.
Left Ventricular Endocardium Tracking by Fusion of Biomechanical and Deformable Models
Gu, Jason
2014-01-01
This paper presents a framework for tracking the left ventricular (LV) endocardium through a 2D echocardiography image sequence. The framework is based on fusion of a biomechanical (BM) model of the heart with a parametric deformable model. The BM model constitutive equation consists of passive and active strain energy functions. The deformations of the LV are obtained by solving the constitutive equations with the ABAQUS FEM solver for each frame in the cardiac cycle; the strain energy functions are defined in two user subroutines for the active and passive phases. An average fusion technique is used to fuse the BM and deformable model contours. Experiments are conducted to verify the detected contours, and the results are evaluated by comparing them to a created gold standard. The results and evaluation show that the framework has great potential to track and segment the LV through the whole cardiac cycle. PMID:24587814
Applicability of common measures in multifocus image fusion comparison
NASA Astrophysics Data System (ADS)
Vajgl, Marek
2017-11-01
Image fusion is an image processing area aimed at the fusion of multiple input images to achieve an output image that is in some respect better than each of the input ones. In the case of "multifocus fusion", the input images capture the same scene but differ in focus distance. The aim is to obtain an image which is sharp in all its areas. There are several different approaches and methods used to solve this problem; however, it is a common question which one is best. This work describes research covering the field of common measures, asking whether some of them can be used as a quality measure for evaluating fusion results.
NASA Astrophysics Data System (ADS)
Huang, Yadong; Gao, Kun; Gong, Chen; Han, Lu; Guo, Yue
2016-03-01
During traditional multi-resolution infrared and visible image fusion processing, a low-contrast target may be weakened and become inconspicuous because of the opposite DN values in the source images. So a novel target pseudo-color enhanced image fusion algorithm based on a modified attention model and the fast discrete curvelet transform is proposed. The interesting target regions are extracted from the source images by introducing the motion features gained from the modified attention model, and gray-level fusion of the source images is performed in the curvelet domain via rules based on the physical characteristics of the sensors. The final fusion image is obtained by mapping the extracted targets into the gray result with proper pseudo-colors. The experiments show that the algorithm can highlight dim targets effectively and improve the SNR of the fused image.
Wang, Shun-Yi; Chen, Xian-Xia; Li, Yi; Zhang, Yu-Ying
2016-12-20
The arrival of the precision medicine plan brings new opportunities and challenges for patients undergoing precision diagnosis and treatment of malignant tumors. With the development of medical imaging, information from different imaging modalities can be integrated and comprehensively analyzed by imaging fusion systems. This review aimed to update the application of multimodality imaging fusion technology in the precise diagnosis and treatment of malignant tumors under the precision medicine plan. We introduce several multimodality imaging fusion technologies and their application to the diagnosis and treatment of malignant tumors in clinical practice. The data cited in this review were obtained mainly from the PubMed database from 1996 to 2016, using the keywords "precision medicine", "fusion imaging", "multimodality", and "tumor diagnosis and treatment". Original articles, clinical practice reports, reviews, and other relevant literature published in English were reviewed; papers focusing on precision medicine, fusion imaging, multimodality, and tumor diagnosis and treatment were selected, and duplicated papers were excluded. Multimodality imaging fusion technology plays an important role in tumor diagnosis and treatment under the precision medicine plan, such as accurate localization, qualitative diagnosis, tumor staging, treatment plan design, and real-time intraoperative monitoring. Multimodality imaging fusion systems can provide more imaging information on tumors from different dimensions and angles, thereby offering strong technical support for the implementation of precision oncology. Under the precision medicine plan, personalized treatment of tumors is a distinct possibility. We believe that multimodality imaging fusion technology will find an increasingly wide application in clinical practice.
2013-10-01
Award Number: W81XWH-12-1-0597. TITLE: Parametric PET/MR Fusion Imaging to Differentiate Aggressive from Indolent Primary Prostate Cancer with Application for Image-Guided Prostate Cancer Biopsies. The study investigates whether fusion PET/MRI imaging with 18F-choline PET/CT and diffusion-weighted MRI can be successfully applied to target prostate…
Shirvani, Atefeh; Jabbari, Keyvan; Amouheidari, Alireza
2017-01-01
Background: In radiation therapy, computed tomography (CT) simulation is used for treatment planning to define the location of the tumor. Magnetic resonance imaging (MRI)-CT image fusion leads to more efficient tumor contouring. This work tried to identify the practical issues for the combination of CT and MRI images in real clinical cases. The effect of various factors on image fusion quality is evaluated. Materials and Methods: In this study, the data of thirty patients with brain tumors were used for image fusion. The effect of several parameters on the possibility and quality of image fusion was evaluated. These parameters include the angle of the patient's head on the bed, slice thickness, slice gap, and height of the patient's head. Results: According to the results, the dominant factor affecting the quality of image fusion was the difference in slice gap between CT and MRI images (cor = 0.86, P < 0.005), and the second factor was the angle between CT and MRI slices in the sagittal plane (cor = 0.75, P < 0.005). In 20% of patients, this angle was more than 28° and image fusion was not efficient. In 17% of patients, the difference in slice gap between CT and MRI was >4 cm and image fusion quality was <25%. Conclusion: The most important problem in image fusion is that MRI images are taken without regard to their use in treatment planning. In general, parameters related to the patient position during MRI imaging should be chosen to be consistent with the CT images of the patient in terms of location and angle. PMID:29387672
Dynamic Denoising of Tracking Sequences
Michailovich, Oleg; Tannenbaum, Allen
2009-01-01
In this paper, we describe an approach to the problem of simultaneously enhancing image sequences and tracking the objects of interest represented by the latter. The enhancement part of the algorithm is based on Bayesian wavelet denoising, which has been chosen due to its exceptional ability to incorporate diverse a priori information into the process of image recovery. In particular, we demonstrate that, in dynamic settings, useful statistical priors can come both from some reasonable assumptions on the properties of the image to be enhanced as well as from the images that have already been observed before the current scene. Using such priors forms the main contribution of the present paper which is the proposal of the dynamic denoising as a tool for simultaneously enhancing and tracking image sequences. Within the proposed framework, the previous observations of a dynamic scene are employed to enhance its present observation. The mechanism that allows the fusion of the information within successive image frames is Bayesian estimation, while transferring the useful information between the images is governed by a Kalman filter that is used for both prediction and estimation of the dynamics of tracked objects. Therefore, in this methodology, the processes of target tracking and image enhancement “collaborate” in an interlacing manner, rather than being applied separately. The dynamic denoising is demonstrated on several examples of SAR imagery. The results demonstrated in this paper indicate a number of advantages of the proposed dynamic denoising over “static” approaches, in which the tracking images are enhanced independently of each other. PMID:18482881
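The prediction/estimation role of the Kalman filter described above can be illustrated with a minimal constant-velocity filter. This is a generic sketch, not the authors' SAR formulation; the state layout and the noise covariances are assumptions.

```python
# Illustrative constant-velocity Kalman filter of the kind described
# above for predicting/estimating object dynamics between frames.
# State x = [px, py, vx, vy]; all noise levels are placeholder values.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)      # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)      # we observe position only
Q = 0.01 * np.eye(4)                     # process noise covariance
R = 1.0 * np.eye(2)                      # measurement noise covariance

def kalman_step(x, P, z):
    # Predict: propagate state and covariance to the next frame.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct with the measured object position z = [px, py].
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = kalman_step(np.zeros(4), np.eye(4), np.array([5.0, 3.0]))
```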
Infrared and visible image fusion with spectral graph wavelet transform.
Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Zong, Jing-guo
2015-09-01
Infrared and visible image fusion is a popular topic in image analysis because it can integrate complementary information and obtain a reliable and accurate description of scenes. Multiscale transform theory, as a signal representation method, is widely used in image fusion. In this paper, a novel infrared and visible image fusion method is proposed based on the spectral graph wavelet transform (SGWT) and the bilateral filter. The main novelty of this study is that SGWT is used for image fusion. On the one hand, source images are decomposed by SGWT in its transform domain. The proposed approach not only effectively preserves the details of the different source images, but also excellently represents their irregular areas. On the other hand, a novel weighted average method based on the bilateral filter is proposed to fuse the low- and high-frequency subbands by taking advantage of the spatial consistency of natural images. Experimental results demonstrate that the proposed method outperforms seven recently proposed image fusion methods in terms of both visual effect and objective evaluation metrics.
Varga, A R; Kaplan, S
1989-01-01
We demonstrated the utility of Escherichia coli alkaline phosphatase, encoded by phoA, as a reporter molecule for genetic fusions in Rhodobacter sphaeroides. A portion of the R. sphaeroides cycA gene was fused to phoA, yielding a fusion protein comprising the putative signal sequence and first 10 amino acids of the cytochrome c2 apoprotein joined to the sixth amino acid of alkaline phosphatase. The fusion protein was efficiently transported to the periplasm of R. sphaeroides as determined by enzyme activity, Western immunoblot analysis, and immunogold electron microscopy. We also documented the ability of an R. sphaeroides mutant, RS104, with gross defects in photosynthetic membrane morphology to efficiently recognize and translocate the fusion protein to the periplasmic compartment. The inclusion of 500 base pairs of R. sphaeroides DNA in cis to the cycA structural gene resulted in a 2.5-fold increase in alkaline phosphatase activity in photosynthetically grown cells compared with the activity in aerobically grown cells, demonstrating that the fusion protein is regulated in a manner similar to that of cytochrome c2 regulation. We also constructed two pUC19-based plasmids suitable for the construction of translational fusions to phoA. In these plasmids, translational fusions of phoA to the gene under consideration can be made in all three reading frames, thus facilitating construction and expression of fusion protein systems utilizing phoA. PMID:2553661
Range and Panoramic Image Fusion Into a Textured Range Image for Culture Heritage Documentation
NASA Astrophysics Data System (ADS)
Bila, Z.; Reznicek, J.; Pavelka, K.
2013-07-01
This paper deals with the fusion of range and panoramic images, where the range image is acquired by a 3D laser scanner and the panoramic image is acquired with a digital still camera mounted on a panoramic head and tripod. The fused resulting dataset, called a "textured range image", provides conservators and historians with more reliable information about the investigated object than using both datasets separately. A simple example of the fusion of range and panoramic images, both obtained in St. Francis Xavier Church in the town of Opařany, is given here. Firstly, we describe the process of data acquisition, then the processing of both datasets into a proper format for the following fusion, and finally the process of fusion itself. The process of fusion can be divided into two main parts: transformation and remapping. In the first (transformation) part, the two images are related by matching similar features detected on both images with a proper detector, which results in a transformation matrix enabling transformation of the range image onto the panoramic image. Then, the range data are remapped from the range image space into the panoramic image space and stored as an additional "range" channel. The process of image fusion is validated by comparing similar features extracted from both datasets.
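The transformation-and-remapping pipeline can be sketched as follows, assuming a planar homography as a simplified stand-in for the paper's panoramic geometry; the ORB detector, RANSAC threshold, and file names are illustrative assumptions.

```python
# Sketch of the transform-and-remap idea above: match features between
# the range-intensity image and the panorama, estimate a homography,
# and store the remapped range data as an extra channel.
import cv2
import numpy as np

rng_img = cv2.imread("range_intensity.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
pano = cv2.imread("panorama.png", cv2.IMREAD_GRAYSCALE)            # placeholder file

orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(rng_img, None)
kp2, des2 = orb.detectAndCompute(pano, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
Hmat, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Remap range values into panorama space and attach them as a channel.
range_chan = cv2.warpPerspective(rng_img, Hmat, pano.shape[::-1])
textured = np.dstack([pano, range_chan])
```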
Investigations of image fusion
NASA Astrophysics Data System (ADS)
Zhang, Zhong
1999-12-01
The objective of image fusion is to combine information from multiple images of the same scene. The result of image fusion is a single image which is more suitable for the purpose of human visual perception or further image processing tasks. In this thesis, a region-based fusion algorithm using the wavelet transform is proposed. The identification of important features in each image, such as edges and regions of interest, is used to guide the fusion process. The idea of multiscale grouping is also introduced and a generic image fusion framework based on multiscale decomposition is studied. The framework includes all of the existing multiscale-decomposition-based fusion approaches we found in the literature which did not assume a statistical model for the source images. Comparisons indicate that our framework includes some new approaches which outperform the existing approaches for the cases we consider. Registration must precede our fusion algorithms, so we propose a hybrid scheme which uses both feature-based and intensity-based methods. The idea of robust estimation of optical flow from time-varying images is employed with a coarse-to-fine multi-resolution approach and feature-based registration to overcome some of the limitations of the intensity-based schemes. Experiments show that this approach is robust and efficient. Assessing image fusion performance in a real application is a complicated issue. In this dissertation, a mixture probability density function model is used in conjunction with the Expectation-Maximization algorithm to model histograms of edge intensity. Some new techniques are proposed for estimating the quality of a noisy image of a natural scene. Such quality measures can be used to guide the fusion. Finally, we study fusion of images obtained from several copies of a new type of camera developed for video surveillance. Our techniques increase the capability and reliability of the surveillance system and provide an easy way to obtain 3-D information about objects in the space monitored by the system.
Sensor fusion for synthetic vision
NASA Technical Reports Server (NTRS)
Pavel, M.; Larimer, J.; Ahumada, A.
1991-01-01
Display methodologies are explored for fusing images gathered by millimeter wave sensors with images rendered from an on-board terrain database to facilitate visually guided flight and ground operations in low visibility conditions. An approach to fusion based on multiresolution image representation and processing is described, which facilitates the fusion of images that differ in resolution, both within and between images. To investigate possible fusion methods, a workstation-based simulation environment is being developed.
Yang, Minglei; Ding, Hui; Zhu, Lei; Wang, Guangzhi
2016-12-01
Ultrasound fusion imaging is an emerging tool that benefits a variety of clinical applications, such as image-guided diagnosis and treatment of hepatocellular carcinoma and unresectable liver metastases. However, respiratory liver motion-induced misalignment of multimodal images (i.e., fusion error) compromises the effectiveness and practicability of this method. The purpose of this paper is to develop a subject-specific liver motion model and an automatic registration-based method to correct the fusion error. An online-built subject-specific motion model and an automatic image registration method for 2D ultrasound-3D magnetic resonance (MR) images were combined to compensate for respiratory liver motion. The key steps included: 1) building a subject-specific liver motion model for the current subject online and performing the initial registration of pre-acquired 3D MR and intra-operative ultrasound images; 2) during fusion imaging, compensating for liver motion first using the motion model, and then using an automatic registration method to further correct the respiratory fusion error. Evaluation experiments were conducted on a liver phantom and five subjects. In the phantom study, the fusion error (superior-inferior axis) was reduced from 13.90±2.38mm to 4.26±0.78mm by using the motion model only. The fusion error further decreased to 0.63±0.53mm by using the registration method. The registration method also decreased the rotation error from 7.06±0.21° to 1.18±0.66°. In the clinical study, the fusion error was reduced from 12.90±9.58mm to 6.12±2.90mm by using the motion model alone. Moreover, the fusion error decreased to 1.96±0.33mm by using the registration method. The proposed method can effectively correct the respiration-induced fusion error to improve the fusion image quality. This method can also reduce the dependency of error correction on the initial registration of ultrasound and MR images. Overall, the proposed method can improve the clinical practicability of ultrasound fusion imaging. Copyright © 2016 Elsevier Ltd. All rights reserved.
Recognizing human activities using appearance metric feature and kinematics feature
NASA Astrophysics Data System (ADS)
Qian, Huimin; Zhou, Jun; Lu, Xinbiao; Wu, Xinye
2017-05-01
The problem of automatically recognizing human activities from videos through the fusion of the two most important cues, the appearance metric feature and the kinematics feature, is considered, and a system of two-dimensional (2-D) Poisson equations is introduced to extract a more discriminative appearance metric feature. Specifically, the moving human blobs are first detected in the video by a background subtraction technique to form a binary image sequence, from which the appearance feature, designated the motion accumulation image, and the kinematics feature, termed the centroid instantaneous velocity, are extracted. Second, 2-D discrete Poisson equations are employed to reinterpret the motion accumulation image and produce a more differentiated Poisson silhouette image, from which the appearance feature vector is created through a dimension reduction technique called bidirectional 2-D principal component analysis, considering the balance between classification accuracy and time consumption. Finally, a cascaded classifier based on the nearest neighbor classifier and two directed acyclic graph support vector machine classifiers, integrated with the fusion of the appearance feature vector and the centroid instantaneous velocity vector, is applied to recognize the human activities. Experimental results on open databases and a homemade one confirm the recognition performance of the proposed algorithm.
Ellmauthaler, Andreas; Pagliari, Carla L; da Silva, Eduardo A B
2013-03-01
Multiscale transforms are among the most popular techniques in the field of pixel-level image fusion. However, the fusion performance of these methods often deteriorates for images derived from different sensor modalities. In this paper, we demonstrate that for such images, results can be improved using a novel undecimated wavelet transform (UWT)-based fusion scheme, which splits the image decomposition process into two successive filtering operations using spectral factorization of the analysis filters. The actual fusion takes place after convolution with the first filter pair. Its significantly smaller support size minimizes the unwanted spreading of coefficient values around overlapping image singularities; such spreading usually complicates the feature selection process and may lead to the introduction of reconstruction errors in the fused image. Moreover, we show that the nonsubsampled nature of the UWT allows the design of nonorthogonal filter banks, which are more robust to artifacts introduced during fusion, additionally improving the obtained results. The combination of these techniques leads to a fusion framework which provides clear advantages over traditional multiscale fusion approaches, independent of the underlying fusion rule, and reduces unwanted side effects such as ringing artifacts in the fused reconstruction.
a Comparative Analysis of Spatiotemporal Data Fusion Models for Landsat and Modis Data
NASA Astrophysics Data System (ADS)
Hazaymeh, K.; Almagbile, A.
2018-04-01
In this study, three documented spatiotemporal data fusion models were applied to Landsat-7 and MODIS surface reflectance and NDVI. The algorithms included the spatial and temporal adaptive reflectance fusion model (STARFM), the sparse representation based spatiotemporal reflectance fusion model (SPSTFM), and the spatiotemporal image-fusion model (STI-FM). The objectives of this study were to (i) compare the performance of these three fusion models using one Landsat-MODIS spectral reflectance image pair from time-series datasets of the Coleambally irrigation area in Australia, and (ii) quantitatively evaluate the accuracy of the synthetic images generated by each fusion model using statistical measurements. Results showed that the three fusion models predicted the synthetic Landsat-7 image with adequate agreement. The STI-FM produced more accurate reconstructions of both the Landsat-7 spectral bands and NDVI. Furthermore, it produced surface reflectance images having the highest correlation with the actual Landsat-7 images. This study indicates that STI-FM would be more suitable for spatiotemporal data fusion applications such as vegetation monitoring, drought monitoring, and evapotranspiration estimation.
An FPGA-based heterogeneous image fusion system design method
NASA Astrophysics Data System (ADS)
Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong
2011-08-01
Taking advantage of the FPGA's low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible light CCD camera and an infrared thermal imager are used as the image-capturing devices in order to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as gray-scale weighted averaging, maximum selection and minimum selection methods are analyzed and compared. VHDL and the synchronous design method are utilized to produce a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Contrast experiments with the various fusion algorithms show that preferable image quality of heterogeneous image fusion can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
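For reference, the three fusion rules analyzed on the platform can be expressed in a few lines of NumPy; this is a software sketch of the pixel-level arithmetic, not the VHDL implementation, and the equal weights are placeholders.

```python
# The three pixel-level rules compared on the FPGA platform, sketched
# in NumPy; `vis` and `ir` are assumed to be co-registered uint8 frames.
import numpy as np

def weighted_average(vis, ir, w=0.5):
    # gray-scale weighted averaging; w is a placeholder weight
    out = w * vis.astype(np.float32) + (1 - w) * ir.astype(np.float32)
    return out.astype(np.uint8)

def maximum_selection(vis, ir):
    return np.maximum(vis, ir)   # per-pixel maximum

def minimum_selection(vis, ir):
    return np.minimum(vis, ir)   # per-pixel minimum
```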
Image fusion based on millimeter-wave for concealed weapon detection
NASA Astrophysics Data System (ADS)
Zhu, Weiwen; Zhao, Yuejin; Deng, Chao; Zhang, Cunlin; Zhang, Yalin; Zhang, Jingshui
2010-11-01
This paper describes a novel multi-sensor image fusion technology presented for concealed weapon detection (CWD). Because of the good transparency of clothing at the millimeter wave band, a millimeter wave radiometer can be used to image and distinguish concealed contraband beneath clothes, for example guns, knives, detonators and so on. As a result, we adopt passive millimeter wave (PMMW) imaging technology for airport security. However, owing to the wavelength of millimeter waves and the single-channel mechanical scanning, the millimeter wave image has low optical resolution, which cannot meet the needs of practical application. Therefore, a visible image (VI), which has higher resolution, is proposed for fusion with the millimeter wave image to enhance readability. Before the image fusion, a novel image pre-processing step specific to the fusion of millimeter wave and visible images is adopted. In the image fusion process, multi-resolution analysis (MRA) based on the Wavelet Transform (WT) is adopted. The experimental results show that this method has advantages for concealed weapon detection and practical significance.
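A minimal sketch of WT-based MRA fusion of the kind described, assuming pre-registered floating-point inputs and using PyWavelets; the wavelet, level count, and coefficient rules (max-absolute for details, averaging for the approximation) are common defaults, not necessarily those of the paper.

```python
# Wavelet-domain MRA fusion sketch: decompose both inputs, keep the
# stronger detail coefficients, average the coarse approximation.
import numpy as np
import pywt

def wavelet_fuse(a, b, wavelet="db2", levels=3):
    ca = pywt.wavedec2(a, wavelet, level=levels)
    cb = pywt.wavedec2(b, wavelet, level=levels)
    fused = [(ca[0] + cb[0]) / 2.0]                      # average approximations
    for da, db in zip(ca[1:], cb[1:]):
        # pick per pixel the detail coefficient with larger magnitude
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```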
A method based on IHS cylindrical transform model for quality assessment of image fusion
NASA Astrophysics Data System (ADS)
Zhu, Xiaokun; Jia, Yonghong
2005-10-01
Image fusion techniques have been widely applied to remote sensing image analysis and processing, and methods for quality assessment of image fusion in remote sensing have also become a research issue at home and abroad. Traditional assessment methods combine the calculation of quantitative indexes with visual interpretation to compare fused images quantitatively and qualitatively. However, the existing assessment methods have two defects: on the one hand, most indexes lack the theoretical support needed to compare different fusion methods. On the other hand, there is no uniform preference among most of the quantitative assessment indexes when they are applied to estimate fusion effects. That is, the spatial resolution and spectral features cannot be analyzed simultaneously by these indexes, and there is no general method to unify spatial and spectral feature assessment. So in this paper, on the basis of an approximate general model of four traditional fusion methods, including Intensity Hue Saturation (IHS) triangle transform fusion, High Pass Filter (HPF) fusion, Principal Component Analysis (PCA) fusion, and Wavelet Transform (WT) fusion, a correlation coefficient assessment method based on the IHS cylindrical transform is proposed. By experiment, this method can not only produce evaluation results for spatial and spectral features on the basis of a uniform preference, but can also compare fusion image sources with fused images and reveal differences among fusion methods. Compared with the traditional assessment methods, the new method is more intuitive and in accord with subjective estimation.
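The correlation-coefficient style of index at the heart of such assessment can be sketched as below; this computes only the generic per-band correlation against a reference image, not the paper's full IHS-cylindrical-transform procedure.

```python
# Per-band Pearson correlation between a fused image and a reference;
# `fused` and `reference` are assumed to be (H, W, B) arrays.
import numpy as np

def band_correlation(fused, reference):
    return [np.corrcoef(fused[..., b].ravel(),
                        reference[..., b].ravel())[0, 1]
            for b in range(fused.shape[-1])]
```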
Advances in multi-sensor data fusion: algorithms and applications.
Dong, Jiang; Zhuang, Dafang; Huang, Yaohuan; Fu, Jingying
2009-01-01
With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. This paper presents an overview of recent advances in multi-sensor satellite image fusion. Firstly, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection and maneuvering target tracking, are described. Both the advantages and limitations of those applications are then discussed. Recommendations are addressed, including: (1) improvement of fusion algorithms; (2) development of "algorithm fusion" methods; (3) establishment of an automatic quality assessment scheme.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hakime, Antoine, E-mail: thakime@yahoo.com; Yevich, Steven; Tselikas, Lambros
Purpose: To assess whether fusion imaging-guided percutaneous microwave ablation (MWA) can improve visibility and targeting of liver metastases that were deemed inconspicuous on ultrasound (US). Materials and Methods: MWA of liver metastases not judged conspicuous enough on US was performed under CT/US fusion imaging guidance. The conspicuity before and after the fusion imaging was graded on a five-point scale, and significance was assessed by the Wilcoxon test. Technical success, procedure time, and procedure-related complications were evaluated. Results: A total of 35 patients with 40 liver metastases (mean size 1.3 ± 0.4 cm) were enrolled. Image fusion improved conspicuity sufficiently to allow fusion-targeted MWA in 33 patients. The time required for image fusion processing and tumor identification averaged 10 ± 2.1 min (range 5-14). Initial conspicuity on US by inclusion criteria was 1.2 ± 0.4 (range 0-2), while conspicuity after localization on fusion imaging was 3.5 ± 1 (range 1-5, p < 0.001). The technical success rate was 83% (33/40) in intention-to-treat analysis and 100% in analysis of treated tumors. There were no major procedure-related complications. Conclusions: Fusion imaging broadens the scope of US-guided MWA to metastases lacking adequate conspicuity on conventional US. Fusion imaging is an effective tool to increase the conspicuity of liver metastases that were initially deemed non-visualizable on conventional US imaging.
Image fusion via nonlocal sparse K-SVD dictionary learning.
Li, Ying; Li, Fangyi; Bai, Bendu; Shen, Qiang
2016-03-01
Image fusion aims to merge two or more images of the same scene, captured via various sensors, to construct a more informative image by integrating their details. Generally, such integration is achieved through the manipulation of the representations of the images concerned. Sparse representation plays an important role in the effective description of images, offering great potential in a variety of image processing tasks, including image fusion. Supported by sparse representation, in this paper, an approach for image fusion by the use of a novel dictionary learning scheme is proposed. The nonlocal self-similarity property of the images is exploited, not only at the stage of learning the underlying description dictionary but also during the process of image fusion. In particular, the property of nonlocal self-similarity is combined with the traditional sparse dictionary. This results in an improved learned dictionary, hereafter referred to as the nonlocal sparse K-SVD dictionary (where K-SVD stands for the K times singular value decomposition that is commonly used in the literature), abbreviated to NL_SK_SVD. The NL_SK_SVD dictionary is then applied for image fusion using simultaneous orthogonal matching pursuit. The proposed approach is evaluated with different types of images and compared with a number of alternative image fusion techniques. The superior fused images resulting from the present approach demonstrate the efficacy of the NL_SK_SVD dictionary in sparse image representation.
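A rough sketch of sparse-representation fusion in this spirit, using scikit-learn, is shown below. It omits the nonlocal grouping and K-SVD specifics of the paper and simply keeps, per patch, the sparse code with the larger activity level; the patch size, atom count, and sparsity level are illustrative assumptions.

```python
# Sparse-representation fusion sketch: learn a shared patch dictionary,
# sparse-code patches of both inputs with OMP, keep the more active code.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import orthogonal_mp

def extract_patches(img, p=8):
    # non-overlapping p x p patches, flattened to rows
    H, W = img.shape
    return np.array([img[i:i + p, j:j + p].ravel()
                     for i in range(0, H - p + 1, p)
                     for j in range(0, W - p + 1, p)])

def fuse_patches(img_a, img_b, n_atoms=64, k=5):
    pa, pb = extract_patches(img_a), extract_patches(img_b)
    # learn one dictionary over patches from both sources
    dico = MiniBatchDictionaryLearning(n_components=n_atoms).fit(np.vstack([pa, pb]))
    D = dico.components_.T                              # columns are atoms
    ca = orthogonal_mp(D, pa.T, n_nonzero_coefs=k)      # codes, shape (atoms, patches)
    cb = orthogonal_mp(D, pb.T, n_nonzero_coefs=k)
    # per-patch activity level = l1 norm of the sparse code
    keep_a = np.abs(ca).sum(axis=0) >= np.abs(cb).sum(axis=0)
    fused_codes = np.where(keep_a, ca, cb)
    return (D @ fused_codes).T                          # fused patches (reassembly omitted)
```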
Multifocus image fusion using phase congruency
NASA Astrophysics Data System (ADS)
Zhan, Kun; Li, Qiaoqiao; Teng, Jicai; Wang, Mingying; Shi, Jinhui
2015-05-01
We address the problem of fusing multifocus images based on phase congruency (PC). PC provides a sharpness feature of a natural image. The focus measure (FM) is identified as strong PC near a distinctive image feature evaluated by the complex Gabor wavelet, and is more robust against noise than other FMs. The fused image is obtained by a new fusion rule (FR), with the focused region selected by the FR from one of the input images. Experimental results show that the proposed fusion scheme matches the fusion performance of state-of-the-art methods in terms of visual quality and quantitative evaluations.
NASA Astrophysics Data System (ADS)
Guan, Wen; Li, Li; Jin, Weiqi; Qiu, Su; Zou, Yan
2015-10-01
Extreme-low-light CMOS, a new type of solid-state image sensor, has been widely applied in the field of night vision. But if the illumination in the scene changes drastically or is too strong, an extreme-low-light CMOS sensor cannot clearly present both the highlight scene and the low-light region. To address this partial saturation problem in night vision, an HDR image fusion algorithm based on the Laplacian pyramid is investigated. The overall gray value and the contrast of the low-light image are very low. For the top layer of the long-exposure and short-exposure images, which carries rich brightness and textural features, we choose a fusion strategy based on the regional average gradient. The remaining layers, which represent the edge feature information of the target, are fused with a strategy based on regional energy. In the source image reconstruction with the Laplacian pyramid, we compare the fusion results for four kinds of base images. The algorithm is tested using Matlab and compared with the different fusion strategies. We use three objective evaluation parameters, information entropy, average gradient, and standard deviation, for further analysis of the fusion results. Experiments in different low-illumination environments show that the algorithm in this paper can rapidly achieve a wide dynamic range while keeping high entropy. The verification of the algorithm's features suggests a further application prospect for the optimized algorithm. Keywords: high dynamic range imaging, image fusion, multi-exposure image, weight coefficient, information fusion, Laplacian pyramid transform.
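The pyramid mechanics described above can be sketched with OpenCV as follows, assuming grayscale inputs; a simple max-absolute rule stands in for the paper's regional-average-gradient and regional-energy strategies, which are more involved.

```python
# Laplacian-pyramid exposure fusion sketch: decompose both exposures,
# blend the coarse top layer, keep the stronger detail coefficients,
# then collapse the fused pyramid back to an image.
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = [gp[-1]]                                     # coarsest level first
    for i in range(levels, 0, -1):
        up = cv2.pyrUp(gp[i], dstsize=gp[i - 1].shape[1::-1])
        lp.append(gp[i - 1] - up)                     # band-pass detail
    return lp

def fuse_exposures(long_exp, short_exp, levels=4):
    la = laplacian_pyramid(long_exp, levels)
    lb = laplacian_pyramid(short_exp, levels)
    fused = [(la[0] + lb[0]) / 2.0]                   # blend the top layer
    fused += [np.where(np.abs(a) >= np.abs(b), a, b)  # keep stronger detail
              for a, b in zip(la[1:], lb[1:])]
    out = fused[0]
    for lap in fused[1:]:
        out = cv2.pyrUp(out, dstsize=lap.shape[1::-1]) + lap
    return np.clip(out, 0, 255).astype(np.uint8)
```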
Fujimoto, Satoru; Sugano, Shigeo S.; Kuwata, Keiko; Osakabe, Keishi; Matsunaga, Sachihiro
2016-01-01
Live imaging of the dynamics of nuclear organization provides the opportunity to uncover the mechanisms responsible for four-dimensional genome architecture. Here, we describe the use of fluorescent protein (FP) fusions of transcription activator-like effectors (TALEs) to visualize endogenous genomic sequences in Arabidopsis thaliana. The ability to engineer sequence-specific TALEs permits the investigation of precise genomic sequences. We could detect TALE-FP signals associated with centromeric, telomeric, and rDNA repeats and the signal distribution was consistent with that observed by fluorescent in situ hybridization. TALE-FPs are advantageous because they permit the observation of intact tissues. We used our TALE-FP method to investigate the nuclei of several multicellular plant tissues including roots, hypocotyls, leaves, and flowers. Because TALE-FPs permit live-cell imaging, we successfully observed the temporal dynamics of centromeres and telomeres in plant organs. Fusing TALEs to multimeric FPs enhanced the signal intensity when observing telomeres. We found that the mobility of telomeres was different in sub-nuclear regions. Transgenic plants stably expressing TALE-FPs will provide new insights into chromatin organization and dynamics in multicellular organisms. PMID:27811079
Multi-focus image fusion based on window empirical mode decomposition
NASA Astrophysics Data System (ADS)
Qin, Xinqiang; Zheng, Jiaoyue; Hu, Gang; Wang, Jiao
2017-09-01
In order to improve multi-focus image fusion quality, a novel fusion algorithm based on window empirical mode decomposition (WEMD) is proposed. WEMD is an improved form of bidimensional empirical mode decomposition (BEMD) whose decomposition process uses an adding-window principle, effectively resolving the signal concealment problem. We used WEMD for multi-focus image fusion and formulated different fusion rules for the bidimensional intrinsic mode function (BIMF) components and the residue component. For fusion of the BIMF components, the concept of the sum-modified-Laplacian was used and a scheme based on visual feature contrast was adopted; when choosing the residue coefficients, a pixel value based on local visibility was selected. We carried out four groups of multi-focus image fusion experiments and compared objective evaluation criteria with three other fusion methods. The experimental results show that the proposed fusion approach is effective and performs better at fusing multi-focus images than some traditional methods.
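The sum-modified-Laplacian focus measure used in the BIMF fusion rule can be sketched in NumPy as below; the step and window size are the usual tunable parameters, and the vectorized formulation is an assumption of convenience rather than the paper's code.

```python
# Sum-modified-Laplacian (SML) focus measure for a grayscale image:
# ML(x,y) = |2f(x,y)-f(x-s,y)-f(x+s,y)| + |2f(x,y)-f(x,y-s)-f(x,y+s)|,
# then summed over a local window.
import numpy as np

def sum_modified_laplacian(img, step=1, win=3):
    f = img.astype(np.float32)
    s = step
    ml = np.zeros_like(f)
    ml[s:-s, s:-s] = (np.abs(2 * f[s:-s, s:-s] - f[:-2 * s, s:-s] - f[2 * s:, s:-s]) +
                      np.abs(2 * f[s:-s, s:-s] - f[s:-s, :-2 * s] - f[s:-s, 2 * s:]))
    # box-sum over a win x win neighborhood via shifted, zero-padded copies
    k = win // 2
    pad = np.pad(ml, k)
    return sum(pad[i:i + ml.shape[0], j:j + ml.shape[1]]
               for i in range(win) for j in range(win))
```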
NASA Astrophysics Data System (ADS)
Zhang, Cheng; Wenbo, Mei; Huiqian, Du; Zexian, Wang
2018-04-01
A new algorithm for medical image fusion is proposed in this paper, which combines a gradient minimization smoothing filter (GMSF) with a nonsubsampled directional filter bank (NSDFB). In order to preserve more detail information, a multi-scale edge-preserving decomposition framework (MEDF) is used to decompose an image into a base image and a series of detail images. For the fusion of base images, a local Gaussian membership function is applied to construct the fusion weighting factor. For the fusion of detail images, the NSDFB is applied to decompose each detail image into multiple directional sub-images, each of which is fused by a pulse-coupled neural network (PCNN). The experimental results demonstrate that the proposed algorithm is superior to the compared algorithms in both visual effect and objective assessment.
[Contrast-enhanced ultrasound (CEUS) and image fusion for procedures of liver interventions].
Jung, E M; Clevert, D A
2018-06-01
Contrast-enhanced ultrasound (CEUS) is becoming increasingly important for the detection and characterization of malignant liver lesions and allows percutaneous treatment when surgery is not possible. Contrast-enhanced ultrasound image fusion with computed tomography (CT) and magnetic resonance imaging (MRI) opens up further options for the targeted investigation and modified treatment of tumors. Ultrasound image fusion offers the potential for real-time imaging and can be combined with other cross-sectional imaging techniques as well as with CEUS. With the implementation of ultrasound contrast agents and image fusion, ultrasound has improved in the detection and characterization of liver lesions in comparison to other cross-sectional imaging techniques. In addition, this method can also be used for interventional procedures. The success rate of fusion-guided biopsies or CEUS-guided tumor ablation lies between 80 and 100% in the literature. Ultrasound-guided image fusion using CT or MRI data, in combination with CEUS, can facilitate diagnosis and therapy follow-up after liver interventions. In addition to the primary applications of image fusion in the diagnosis and treatment of liver lesions, further useful indications can be integrated into daily work. These include, for example, intraoperative and vascular applications as well as applications in other organ systems.
Qiu, Chenhui; Wang, Yuanyuan; Guo, Yanen; Xia, Shunren
2018-03-14
Image fusion techniques can integrate the information from different imaging modalities to get a composite image which is more suitable for human visual perception and further image processing tasks. Fusing green fluorescent protein (GFP) and phase contrast images is very important for subcellular localization, functional analysis of proteins, and genome expression. A fusion method for GFP and phase contrast images based on the complex shearlet transform (CST) is proposed in this paper. Firstly, the GFP image is converted to the IHS model and its intensity component is obtained. Secondly, the CST is performed on the intensity component and the phase contrast image to acquire the low-frequency and high-frequency subbands. Then the high-frequency subbands are merged by the absolute-maximum rule, while the low-frequency subbands are merged by the proposed Haar wavelet-based energy (HWE) rule. Finally, the fused image is obtained by performing the inverse CST on the merged subbands and conducting IHS-to-RGB conversion. The proposed fusion method is tested on a number of GFP and phase contrast images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method provides better fusion results in terms of subjective quality and objective evaluation. © 2018 Wiley Periodicals, Inc.
Dual wavelength imaging allows analysis of membrane fusion of influenza virus inside cells.
Sakai, Tatsuya; Ohuchi, Masanobu; Imai, Masaki; Mizuno, Takafumi; Kawasaki, Kazunori; Kuroda, Kazumichi; Yamashina, Shohei
2006-02-01
Influenza virus hemagglutinin (HA) is a determinant of virus infectivity. Therefore, it is important to determine whether HA of a new influenza virus, which can potentially cause pandemics, is functional against human cells. The novel imaging technique reported here allows rapid analysis of HA function by visualizing viral fusion inside cells. This imaging was designed to detect fusion changing the spectrum of the fluorescence-labeled virus. Using this imaging, we detected the fusion between a virus and a very small endosome that could not be detected previously, indicating that the imaging allows highly sensitive detection of viral fusion.
V S, Unni; Mishra, Deepak; Subrahmanyam, G R K S
2016-12-01
The need for image fusion in current image processing systems is increasing, mainly due to the increased number and variety of image acquisition techniques. Image fusion is the process of combining substantial information from several sensors using mathematical techniques in order to create a single composite image that is more comprehensive and thus more useful for a human operator or other computer vision tasks. This paper presents a new approach to multifocus image fusion based on sparse signal representation. Block-based compressive sensing, integrated with a projection-driven compressive sensing (CS) recovery that encourages sparsity in the wavelet domain, is used to obtain the focused image from a set of out-of-focus images. Compression is achieved during the image acquisition process using a block compressive sensing method. An adaptive thresholding technique within the smoothed projected Landweber recovery process reconstructs high-resolution focused images from low-dimensional CS measurements of out-of-focus images. The discrete wavelet transform and the dual-tree complex wavelet transform are used as the sparsifying bases for the proposed fusion. The main finding is that sparsification enables a better selection of the fusion coefficients and hence better fusion. A Laplacian mixture model is fitted in the wavelet domain, and estimation of the probability density function (pdf) parameters by expectation maximization leads to the proper selection of the coefficients of the fused image. Compared with the fusion scheme that does not employ the projected Landweber (PL) step and with other existing CS-based fusion approaches, the proposed method is observed to outperform them even with fewer samples.
A new multi-spectral feature level image fusion method for human interpretation
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-03-01
Various different methods to perform multi-spectral image fusion have been suggested, mostly on the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature level processing paradigm. To test our method, we compared human observer performance in a three-task experiment using MSSF against two established methods: averaging and principal component analysis (PCA), and against its two source bands, visible and infrared. The three tasks that we studied were: (1) simple target detection, (2) spatial orientation, and (3) camouflaged target detection. MSSF proved superior to the other fusion methods in all three tests; MSSF also outperformed the source images in the spatial orientation and camouflaged target detection tasks. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general and specific fusion methods in particular would be superior to using the original image sources can be further addressed.
Apfelbeck, M; Clevert, D-A; Ricke, J; Stief, C; Schlenker, B
2018-01-01
Reduced acceptance of radical prostatectomy in patients with low-risk or intermediate-risk prostate cancer has significantly changed treatment strategies in prostate cancer (PCa) during the last years. Focal therapy of the prostate with high intensity focused ultrasound (HIFU) is an organ-preserving treatment for prostate cancer with less impairment of health-related quality of life. Follow-up after HIFU therapy by imaging modalities remains a major problem, as, for example, MRI performs poorly. Contrast-enhanced ultrasound (CEUS) allows the vascular architecture of organs to be monitored non-invasively. However, only limited data are available using CEUS to define successful and complete HIFU treatment of the prostate. In this study, we aimed to evaluate short-term image findings using CEUS and image fusion before and after HIFU treatment. This was a prospective single-arm study in patients with uni- or bilateral, low- or intermediate-risk prostate cancer or recurrent cancer after radiotherapy, treated with HIFU at our institution between October 2016 and November 2017. HIFU hemiablation or whole-gland treatment was performed using the Focal One® device. PCa was diagnosed either by multiparametric magnetic resonance imaging (mpMRI) followed by MRI fusion based targeted biopsy combined with 12-core transrectal ultrasound (TRUS) guided biopsy, or by 12-core random biopsy only. Monitoring of the target region before, immediately after, and 24 hours after the ablation was done by CEUS in combination with image fusion using an axial T2-weighted MRI sequence. Six consecutive patients with Gleason score (GS) 6 prostate cancer, 5 patients with GS 7a prostate cancer, and one patient with biochemical recurrence after radiotherapy were included in the study. Three patients underwent whole-gland treatment due to histologically proven bilateral PCa or recurrent PCa after radiotherapy. Hemiablation was performed in 9 patients with a unilateral tumor and no PIRADS 4 or 5 lesion in the contralateral lobe. Median patient age was 69.8 years and median PSA (prostate-specific antigen) level was 8.4 ng/ml. CEUS showed markedly reduced microbubbles in the ablated area, while the prostate capsule still showed signs of perfusion. The study is limited by the short follow-up and small number of patients. CEUS examination showed a reduction of microcirculation in the treated area immediately after the treatment and 24 hours later. The combination of CEUS and image fusion seems helpful for detecting the PCa target lesion and monitoring the success of HIFU ablation treatment. Evidence on image findings after HIFU therapy is rare; further studies on this topic are needed.
Zabeau, M; Stanley, K K
1982-01-01
Hybrid plasmids carrying cro-lacZ gene fusions have been constructed by joining DNA segments carrying the PR promoter and the start of the cro gene of bacteriophage lambda to the lacZ gene fragment carried by plasmid pLG400. Plasmids in which the translational reading frames of the cro and lacZ genes are joined in-register (type I) direct the synthesis of elevated levels of cro-beta-galactosidase fusion protein amounting to 30% of the total cellular protein, while plasmids in which the genes are fused out-of-register (type II) produce a low level of beta-galactosidase protein. Sequence rearrangements downstream of the cro initiator AUG were found to influence the efficiency of translation, and have been correlated with alterations in the RNA secondary structure of the ribosome-binding site. Plasmids which direct the synthesis of high levels of beta-galactosidase are conditionally lethal and can only be propagated when the PR promoter is repressed. Deletion of sequences downstream of the lacZ gene restored viability, indicating that this region of the plasmid encodes a function which inhibits the growth of the cells. The different applications of these plasmids for expression of cloned genes are discussed. PMID:6327257
Designing Image Operators for MRI-PET Image Fusion of the Brain
NASA Astrophysics Data System (ADS)
Márquez, Jorge; Gastélum, Alfonso; Padilla, Miguel A.
2006-09-01
Our goal is to obtain images that combine, in a useful and precise way, the information from 3D volumes of medical imaging sets. We address two modalities combining anatomy (Magnetic Resonance Imaging, or MRI) and functional information (Positron Emission Tomography, or PET). Commercial imaging software offers image fusion tools based on fixed blending or color-channel combination of two modalities and color Look-Up Tables (LUTs), without considering the anatomical and functional character of the image features. We used a sensible approach to image fusion taking advantage mainly of the HSL (Hue, Saturation and Luminosity) color space in order to enhance the fusion results. We further tested operators for gradient and contour extraction to enhance anatomical details, plus other spatial-domain filters for functional features corresponding to wide point-spread-function responses in PET images. A set of image-fusion operators was formulated and tested on PET and MRI acquisitions.
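A toy version of the hue-based fusion idea, using HSV as a close stand-in for HSL: anatomy (MRI) drives the luminosity/value channel while function (PET) drives hue and saturation. The color mapping and normalization are illustrative assumptions, not the authors' operator set.

```python
# HSV-space MRI-PET fusion sketch: MRI supplies anatomical luminosity,
# PET supplies a cold-to-hot hue and the saturation. Inputs are assumed
# to be co-registered 2-D slices normalized to [0, 1].
import numpy as np
from matplotlib.colors import hsv_to_rgb

def fuse_mri_pet(mri, pet):
    hsv = np.zeros(mri.shape + (3,), np.float32)
    hsv[..., 0] = (1.0 - pet) * 0.67   # hue: blue (low uptake) -> red (high)
    hsv[..., 1] = pet                  # saturation: stronger where PET is high
    hsv[..., 2] = mri                  # value/luminosity: anatomical detail
    return hsv_to_rgb(hsv)             # RGB array for display
```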
Infrared and visible image fusion method based on saliency detection in sparse domain
NASA Astrophysics Data System (ADS)
Liu, C. H.; Qi, Y.; Ding, W. R.
2017-06-01
Infrared and visible image fusion is a key problem in the field of multi-sensor image fusion. To better preserve the significant information of the infrared and visible images in the final fused image, the saliency maps of the source images are introduced into the fusion procedure. Firstly, under the framework of the joint sparse representation (JSR) model, the global and local saliency maps of the source images are obtained based on sparse coefficients. Then, a saliency detection model is proposed, which combines the global and local saliency maps to generate an integrated saliency map. Finally, a weighted fusion algorithm based on the integrated saliency map is developed to complete the fusion process. The experimental results show that our method is superior to the state-of-the-art methods in terms of several universal quality evaluation indexes, as well as in visual quality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Kai; Roberts, Gareth A.; Stephanou, Augoustinos S.
2010-07-23
Research highlights: Successful fusion of GFP to the M.EcoKI DNA methyltransferase. GFP located at the C-terminus of the sequence-specificity subunit does not alter enzyme activity. FRET confirms the structural model of M.EcoKI bound to DNA. -- Abstract: We describe the fusion of enhanced green fluorescent protein to the C-terminus of the HsdS DNA sequence-specificity subunit of the Type I DNA modification methyltransferase M.EcoKI. The fusion expresses well in vivo and assembles with the two HsdM modification subunits. The fusion protein functions as a sequence-specific DNA methyltransferase protecting DNA against digestion by the EcoKI restriction endonuclease. The purified enzyme shows Förster resonance energy transfer to fluorescently-labelled DNA duplexes containing the target sequence and to fluorescently-labelled ocr protein, a DNA mimic that binds to the M.EcoKI enzyme. Distances determined from the energy transfer experiments corroborate the structural model of M.EcoKI.
Segment fusion of ToF-SIMS images.
Milillo, Tammy M; Miller, Mary E; Fischione, Remo; Montes, Angelina; Gardella, Joseph A
2016-06-08
The imaging capabilities of time-of-flight secondary ion mass spectrometry (ToF-SIMS) have not been used to their full potential in the analysis of polymer and biological samples. Imaging has been limited by the size of the dataset and the chemical complexity of the sample being imaged. Pixel- and segment-based image fusion algorithms commonly used in remote sensing, ecology, geography, and geology provide a way to improve the spatial resolution and classification of biological images. In this study, a sample of Arabidopsis thaliana was treated with silver nanoparticles and imaged with ToF-SIMS. These images provide insight into the uptake mechanism of the silver nanoparticles into the plant tissue, giving new understanding of the mechanism of uptake of heavy metals in the environment. The Munechika algorithm was programmed in-house and applied to achieve pixel-based fusion, which improved the spatial resolution of the image obtained. Multispectral and quadtree segment- or region-based fusion algorithms were performed using eCognition software, a commercially available remote sensing software suite, and used to classify the images. The Munechika fusion improved the spatial resolution for the images containing silver nanoparticles, while the segment fusion allowed classification and fusion based on the tissue types in the sample, suggesting potential pathways for the uptake of the silver nanoparticles.
Aoki, Yasuko; Endo, Hidenori; Niizuma, Kuniyasu; Inoue, Takashi; Shimizu, Hiroaki; Tominaga, Teiji
2013-12-01
We report two cases with internal carotid artery (ICA) aneurysms in which fusion imaging effectively indicated anatomical variations of the anterior choroidal artery (AchoA). Fusion images were obtained using fusion application software (Integrated Registration, Advantage Workstation VS4, GE Healthcare). When an artery passed through the choroidal fissure, it was diagnosed as the AchoA. Case 1 had an aneurysm of the left ICA. Left internal carotid angiography (ICAG) showed that an artery arising from the aneurysmal neck supplied the medial occipital lobe. The fusion image showed that this artery had a branch passing through the choroidal fissure, which was diagnosed as a hyperplastic AchoA. Case 2 had an aneurysm of the supraclinoid segment of the right ICA. Neither the AchoA nor the posterior communicating artery (PcomA) was detected by right ICAG. A fusion image obtained from 3D vertebral angiography (VAG) and MRI showed that the right AchoA arose from the right PcomA. A fusion image obtained from the right ICAG and the left VAG suggested that the aneurysm was located on the ICA where the PcomA regressed. Fusion imaging is an effective tool for assessing anatomical variations of the AchoA. The present method is simple and quick for obtaining a fusion image that can be used in a real-time clinical setting.
Bhateja, Vikrant; Moin, Aisha; Srivastava, Anuja; Bao, Le Nguyen; Lay-Ekuakille, Aimé; Le, Dac-Nhuong
2016-07-01
Computer based diagnosis of Alzheimer's disease can be performed by dint of the analysis of the functional and structural changes in the brain. Multispectral image fusion deliberates upon fusion of the complementary information while discarding the surplus information to achieve a solitary image which encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer's disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT followed by dimensionality reduction using modified Principal Component Analysis algorithm on the low frequency coefficients. Further, the high frequency coefficients are enhanced using non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: Phase congruency is applied to low frequency coefficients and a combination of directive contrast and normalized Shannon entropy is applied to high frequency coefficients. The superiority of the fusion response is depicted by the comparisons made with the other state-of-the-art fusion approaches (in terms of various fusion metrics).
Enhancing hyperspectral spatial resolution using multispectral image fusion: A wavelet approach
NASA Astrophysics Data System (ADS)
Jazaeri, Amin
High spectral and spatial resolution images have a significant impact in remote sensing applications. Because both the spatial and spectral resolutions of spaceborne sensors are fixed by design and cannot be further increased, techniques such as image fusion must be applied to achieve such goals. This dissertation introduces the concept of wavelet fusion between hyperspectral and multispectral sensors in order to enhance the spectral and spatial resolution of a hyperspectral image. To test the robustness of this concept, images from Hyperion (a hyperspectral sensor) and the Advanced Land Imager (a multispectral sensor) were first co-registered and then fused using different wavelet algorithms. A regression-based fusion algorithm was also implemented for comparison purposes. The results show that the fused images using a combined bi-linear wavelet-regression algorithm have less error than other methods when compared to the ground truth. In addition, a combined regression-wavelet algorithm shows more immunity to misalignment of the pixels due to the lack of proper registration. The quantitative measures of average mean square error show that the performance of wavelet-based methods degrades when the spatial resolution of the hyperspectral image becomes eight times less than that of its corresponding multispectral image. Regardless of which method of fusion is utilized, the main challenge in image fusion is image registration, which is also a very time-intensive process. Because the combined regression-wavelet technique is computationally expensive, a hybrid technique based on regression and wavelet methods was also implemented to decrease computational overhead. However, the gain in faster computation was offset by the introduction of more error in the outcome. The secondary objective of this dissertation is to examine the feasibility and sensor requirements for image fusion for future NASA missions in order to be able to perform onboard image fusion. In this process, the main challenge of image registration was resolved by registering the input images using transformation matrices of previously acquired data. The composite image resulting from the fusion process remarkably matched the ground truth, indicating the possibility of real-time onboard fusion processing.
NASA Astrophysics Data System (ADS)
Cheng, Boyang; Jin, Longxu; Li, Guoning
2018-06-01
Visible light and infrared image fusion has been a significant subject in imaging science. As a new contribution to this field, a novel fusion framework for visible light and infrared images based on adaptive dual-channel unit-linking pulse coupled neural networks with singular value decomposition (ADS-PCNN) in the non-subsampled shearlet transform (NSST) domain is presented in this paper. First, the source images are decomposed into multi-direction and multi-scale sub-images by NSST. Then, an improved novel sum-modified-Laplacian (INSML) of the low-pass sub-image and an improved average gradient (IAVG) of the high-pass sub-images are used to stimulate the ADS-PCNN, respectively. To address the large spectral difference between infrared and visible light and the occurrence of black artifacts in fused images, a local structure information operator (LSI), derived from local-area singular value decomposition in each source image, is used as the adaptive linking strength, which enhances fusion accuracy. Compared with PCNN models in other studies, the proposed method simplifies certain peripheral parameters, and the time matrix is utilized to decide the iteration number adaptively. A series of images from diverse scenes are used for fusion experiments, and the fusion results are evaluated subjectively and objectively. The results of the subjective and objective evaluation show that our algorithm exhibits superior fusion performance and is more effective than existing typical fusion techniques.
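For reference, the classical sum-modified-Laplacian that INSML builds on can be computed as below; the "improved" variant is not specified in the abstract, so this is only the textbook form under that assumption.

```python
# Classical sum-modified-Laplacian (SML) focus/activity measure.
import numpy as np
from scipy.ndimage import uniform_filter

def sml(img, win=3):
    # Modified Laplacian: absolute second differences along rows and columns.
    # np.roll wraps at the borders, which is acceptable for a sketch.
    ml = np.abs(2 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0)) \
       + np.abs(2 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
    # Windowed sum over a win x win neighborhood.
    return uniform_filter(ml, win) * win * win
```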
Satellite image fusion based on principal component analysis and high-pass filtering.
Metwalli, Mohamed R; Nasr, Ayman H; Allah, Osama S Farag; El-Rabaie, S; Abd El-Samie, Fathi E
2010-06-01
This paper presents an integrated method for the fusion of satellite images. Several commercial earth observation satellites carry dual-resolution sensors, which provide high spatial resolution or simply high-resolution (HR) panchromatic (pan) images and low-resolution (LR) multi-spectral (MS) images. Image fusion methods are therefore required to integrate a high-spectral-resolution MS image with a high-spatial-resolution pan image to produce a pan-sharpened image with high spectral and spatial resolutions. Some image fusion methods such as the intensity, hue, and saturation (IHS) method, the principal component analysis (PCA) method, and the Brovey transform (BT) method provide HR MS images, but with low spectral quality. Another family of image fusion methods, such as the high-pass-filtering (HPF) method, operates on the basis of the injection of high frequency components from the HR pan image into the MS image. This family of methods provides less spectral distortion. In this paper, we propose the integration of the PCA method and the HPF method to provide a pan-sharpened MS image with superior spatial resolution and less spectral distortion. The experimental results show that the proposed fusion method retains the spectral characteristics of the MS image and, at the same time, improves the spatial resolution of the pan-sharpened image.
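As a rough illustration of the HPF family described above, the sketch below injects the high-frequency component of the pan image into each upsampled MS band; the box filter, window size, and unit gain are illustrative choices, and the paper's PCA integration step is omitted.

```python
# Hedged sketch: high-pass-filtering (HPF) detail injection for pan-sharpening.
import numpy as np
from scipy.ndimage import uniform_filter

def hpf_pansharpen(ms_up, pan, win=9, gain=1.0):
    # ms_up: (bands, H, W) multispectral image already resampled to the pan grid.
    detail = pan - uniform_filter(pan, win)  # high-pass of the pan image
    return np.stack([band + gain * detail for band in ms_up])
```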
Remote sensing fusion based on guided image filtering
NASA Astrophysics Data System (ADS)
Zhao, Wenfei; Dai, Qinling; Wang, Leiguang
2015-12-01
In this paper, we propose a novel remote sensing fusion approach based on guided image filtering. The fused images preserve the spectral features of the original multispectral (MS) images well while enhancing the spatial detail information. Four quality assessment indexes are also introduced to evaluate the fusion effect in comparison with other fusion methods. Experiments were carried out on Gaofen-2, QuickBird, WorldView-2 and Landsat-8 images, and the results show excellent performance of the proposed method.
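The building block this abstract relies on is the guided image filter of He et al.; a compact version is sketched below, with box filtering via uniform_filter. The radius r and regularization eps are illustrative defaults, not the authors' settings.

```python
# Compact guided image filter: output q = mean(a)*I + mean(b),
# with a, b fitted per local window (He et al.).
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=1e-4):
    size = 2 * r + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    corr_Ip = uniform_filter(I * p, size)
    corr_II = uniform_filter(I * I, size)
    var_I = corr_II - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)        # linear coefficient per window
    b = mean_p - a * mean_I           # offset per window
    return uniform_filter(a, size) * I + uniform_filter(b, size)
```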
Wu, Guorong; Kim, Minjeong; Sanroma, Gerard; Wang, Qian; Munsell, Brent C.; Shen, Dinggang
2014-01-01
Multi-atlas patch-based label fusion methods have been successfully used to improve segmentation accuracy in many important medical image analysis applications. In general, to achieve label fusion a single target image is first registered to several atlas images; after registration, a label is assigned to each point in the target image by determining the similarity between the underlying target image patch (centered at the target point) and the aligned image patch in each atlas image. To achieve the highest level of accuracy during the label fusion process, it is critical that the chosen patch similarity measurement accurately captures the tissue/shape appearance of the anatomical structure. One major limitation of existing state-of-the-art label fusion methods is that they often apply a fixed-size image patch throughout the entire label fusion procedure. Doing so may severely affect the fidelity of the patch similarity measurement, which in turn may not adequately capture complex tissue appearance patterns expressed by the anatomical structure. To address this limitation, we advance the state of the art with three new label fusion contributions. First, each image patch is now characterized by a multi-scale feature representation that encodes both local and semi-local image information, increasing the accuracy of the patch-based similarity measurement. Second, to limit the possibility of the patch-based similarity measurement being wrongly guided by the presence of multiple anatomical structures in the same image patch, each atlas image patch is further partitioned into a set of label-specific partial image patches according to the existing labels. Since image information has now been semantically divided into different patterns, these new label-specific atlas patches make the label fusion process more specific and flexible. Lastly, in order to correct target points that are mislabeled during label fusion, a hierarchical approach is used to improve the label fusion results; in particular, a coarse-to-fine iterative label fusion approach is used that gradually reduces the patch size. To evaluate the accuracy of our label fusion approach, the proposed method was used to segment the hippocampus in the ADNI dataset and 7.0 tesla MR images, sub-cortical regions in the LONI LPBA40 dataset, mid-brain regions in the SATA dataset from the MICCAI 2013 segmentation challenge, and a set of key internal gray matter structures in the IXI dataset. In all experiments, the segmentation results of the proposed hierarchical label fusion method with multi-scale feature representations and label-specific atlas patches are more accurate than several well-known state-of-the-art label fusion methods. PMID:25463474
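For orientation, a bare-bones version of the patch-based weighted voting that such label fusion methods build on is sketched below; the multi-scale features, label-specific partial patches, and coarse-to-fine iterations contributed by the paper are all omitted, and the Gaussian weighting with bandwidth sigma is an illustrative choice.

```python
# Bare-bones patch-based label fusion for one target voxel:
# Gaussian weights from patch distances, then weighted voting.
import numpy as np

def fuse_voxel(target_patch, atlas_patches, atlas_labels, sigma=0.5):
    votes = {}
    for patch, label in zip(atlas_patches, atlas_labels):
        d = np.sum((target_patch - patch) ** 2)   # patch dissimilarity
        w = np.exp(-d / (2 * sigma ** 2))         # similarity weight
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)              # label with highest vote
```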
NASA Astrophysics Data System (ADS)
Ma, Dan; Liu, Jun; Chen, Kai; Li, Huali; Liu, Ping; Chen, Huijuan; Qian, Jing
2016-04-01
In remote sensing fusion, the spatial details of a panchromatic (PAN) image and the spectral information of multispectral (MS) images are transferred into the fused images according to the characteristics of the human visual system. Thus, a remote sensing image fusion quality assessment called the feature-based fourth-order correlation coefficient (FFOCC) is proposed. FFOCC is based on the feature-based coefficient concept. Spatial features related to the spatial details of the PAN image and spectral features related to the spectral information of the MS images are first extracted from the fused image. Then, the fourth-order correlation coefficient between the spatial and spectral features is calculated and treated as the assessment result. FFOCC was then compared with existing widely used indices, such as the Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS) and the quality-with-no-reference index. Results of the fusion and distortion experiments indicate that FFOCC is consistent with subjective evaluation. FFOCC significantly outperforms the other indices in evaluating fusion images that are produced by different fusion methods and that are distorted in spatial and spectral features by blurring, adding noise, and changing intensity. All the findings indicate that the proposed method is an objective and effective quality assessment for remote sensing image fusion.
A fast fusion scheme for infrared and visible light images in NSCT domain
NASA Astrophysics Data System (ADS)
Zhao, Chunhui; Guo, Yunting; Wang, Yulei
2015-09-01
Fusion of infrared and visible light images is an effective way to obtain a simultaneous visualization of the background details provided by the visible light image and the hidden target information provided by the infrared image, which is more suitable for browsing and further processing. Two crucial components of infrared and visible light image fusion are improving fusion performance and reducing computational burden. In this paper, a novel fusion algorithm named pixel information estimation is proposed, which determines the weights by evaluating the information of each pixel and is well suited to visible light and infrared image fusion, with better fusion quality and lower time consumption. In addition, a fast realization of the non-subsampled contourlet transform is also proposed to improve computational efficiency. To verify the advantage of the proposed method, this paper compares it with several popular ones on six evaluation metrics over four different image groups. Experimental results show that the proposed algorithm achieves a more effective result with much less time consumed and performs well in both subjective evaluation and objective indicators.
Different source image fusion based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Xiao; Piao, Yan
2016-03-01
Video image fusion combines the outputs of different image sensors so that they complement each other, yielding video that is rich in information and suited to the human visual system. Infrared cameras have strong penetrating power in harsh environments such as smoke, fog, and low light, but their ability to capture image detail is poor and does not match the human visual system. Visible-light imaging alone can provide detailed, high-resolution images suited to the visual system, but visible images are easily affected by the external environment. Fusion algorithms for infrared and visible video are complex and computationally demanding, occupy considerable memory, and require high clock rates; most implementations are in software (e.g., C or C++), with few on hardware platforms. In this paper, based on the imaging characteristics of infrared and visible-light images, software and hardware are combined: the registration parameters are obtained in software using MATLAB, and a gray-level weighted-average fusion is implemented on an FPGA hardware platform. The resulting fused image effectively improves information acquisition, increasing the amount of information in the image.
[Perceptual sharpness metric for visible and infrared color fusion images].
Gao, Shao-Shu; Jin, Wei-Qi; Wang, Xia; Wang, Ling-Xue; Luo, Yuan
2012-12-01
For visible and infrared color fusion images, an objective sharpness assessment model is proposed to measure the clarity of detail and edge definition of the fusion image. Firstly, the contrast sensitivity function (CSF) of the human visual system is used to reduce insensitive frequency components under certain viewing conditions. Secondly, a perceptual contrast model, which takes the human luminance masking effect into account, is proposed based on the local band-limited contrast model. Finally, the perceptual contrast is calculated in the regions of interest (containing image details and edges) in the fusion image to evaluate image perceptual sharpness. Experimental results show that the proposed perceptual sharpness metric provides predictions that are more closely matched to human perceptual evaluations than five existing sharpness (blur) metrics for color images. The proposed metric can evaluate the perceptual sharpness of color fusion images effectively.
Tools and Methods for the Registration and Fusion of Remotely Sensed Data
NASA Technical Reports Server (NTRS)
Goshtasby, Arthur Ardeshir; LeMoigne, Jacqueline
2010-01-01
Tools and methods for image registration were reviewed. Methods for the registration of remotely sensed data at NASA were discussed. Image fusion techniques were reviewed. Challenges in registration of remotely sensed data were discussed. Examples of image registration and image fusion were given.
Fusion Imaging for Procedural Guidance.
Wiley, Brandon M; Eleid, Mackram F; Thaden, Jeremy J
2018-05-01
The field of percutaneous structural heart interventions has grown tremendously in recent years. This growth has fueled the development of new imaging protocols and technologies in parallel to help facilitate these minimally-invasive procedures. Fusion imaging is an exciting new technology that combines the strength of 2 imaging modalities and has the potential to improve procedural planning and the safety of many commonly performed transcatheter procedures. In this review we discuss the basic concepts of fusion imaging along with the relative strengths and weaknesses of static vs dynamic fusion imaging modalities. This review will focus primarily on echocardiographic-fluoroscopic fusion imaging and its application in commonly performed transcatheter structural heart procedures. Copyright © 2017 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
NASA Astrophysics Data System (ADS)
Witharana, Chandi; LaRue, Michelle A.; Lynch, Heather J.
2016-03-01
Remote sensing is a rapidly developing tool for mapping the abundance and distribution of Antarctic wildlife. While both panchromatic and multispectral imagery have been used in this context, image fusion techniques have received little attention. We tasked seven widely used fusion algorithms: Ehlers fusion, hyperspherical color space fusion, high-pass fusion, principal component analysis (PCA) fusion, Gram-Schmidt fusion, University of New Brunswick fusion, and wavelet-PCA fusion to resolution-enhance a series of single-date QuickBird-2 and WorldView-2 image scenes comprising penguin guano, seals, and vegetation. Fused images were assessed for spectral and spatial fidelity using a variety of quantitative quality indicators and visual inspection methods. Our visual evaluation elected the high-pass fusion algorithm and the University of New Brunswick fusion algorithm as best for manual wildlife detection, while the quantitative assessment suggested the Gram-Schmidt fusion algorithm and the University of New Brunswick fusion algorithm as best for automated classification. The hyperspherical color space fusion algorithm exhibited mediocre results in terms of spectral and spatial fidelity. The PCA fusion algorithm showed spatial superiority at the expense of spectral inconsistencies. The Ehlers fusion algorithm and the wavelet-PCA algorithm showed the weakest performances. As remote sensing becomes a more routine method of surveying Antarctic wildlife, these benchmarks will provide guidance for image fusion and pave the way for more standardized products for specific types of wildlife surveys.
Feng, Peng; Wang, Jing; Wei, Biao; Mi, Deling
2013-01-01
A hybrid multiscale and multilevel image fusion algorithm for green fluorescent protein (GFP) images and phase contrast images of Arabidopsis cells is proposed in this paper. Combining the intensity-hue-saturation (IHS) transform and the sharp frequency localization Contourlet transform (SFL-CT), this algorithm uses different fusion strategies for different detail subbands, including a neighborhood consistency measurement (NCM) that can adaptively find a balance between the color background and the gray structure. Two kinds of neighborhood classes based on an empirical model are also taken into consideration. Visual information fidelity (VIF) is introduced as an objective criterion to evaluate the fusion image. The experimental results on 117 groups of Arabidopsis cell images from the John Innes Center show that the new algorithm not only preserves the details of the original images well but also improves the visibility of the fusion image, which shows the superiority of the novel method over traditional ones. PMID:23476716
Development of a fusion approach selection tool
NASA Astrophysics Data System (ADS)
Pohl, C.; Zeng, Y.
2015-06-01
During the last decades the number and quality of available remote sensing satellite sensors for Earth observation has grown significantly. The amount of available multi-sensor images, along with their increased spatial and spectral resolution, provides new challenges to Earth scientists. With a Fusion Approach Selection Tool (FAST) the remote sensing community would obtain access to an optimized and improved image processing technology. Remote sensing image fusion is a means to produce images containing information that is not inherent in any single image alone. In the meantime the user has access to sophisticated commercial image fusion techniques, plus the option to tune the parameters of each individual technique to match the anticipated application. This leaves the operator with an uncountable number of options for combining remote sensing images, not to mention the selection of the appropriate images, resolutions and bands. Image fusion can be a machine- and time-consuming endeavour. In addition it requires knowledge about remote sensing, image fusion, digital image processing and the application. FAST shall provide the user with a quick overview of processing flows to choose from to reach the target. FAST will ask for available images, application parameters and desired information, and process this input to produce a workflow that quickly obtains the best results. It will optimize data and image fusion techniques. It provides an overview of the possible results from which the user can choose the best. FAST will enable even inexperienced users to use advanced processing methods to maximize the benefit of multi-sensor image exploitation.
A novel framework of tissue membrane systems for image fusion.
Zhang, Zulin; Yi, Xinzhong; Peng, Hong
2014-01-01
This paper proposes a tissue membrane system-based framework to deal with the optimal image fusion problem. A spatial domain fusion algorithm is given, and a tissue membrane system of multiple cells is used as its computing framework. Based on the multicellular structure and inherent communication mechanism of the tissue membrane system, an improved velocity-position model is developed. The performance of the fusion framework is studied in comparison with several traditional fusion methods as well as genetic algorithm (GA)-based and differential evolution (DE)-based spatial domain fusion methods. Experimental results show that the proposed fusion framework is superior or comparable to the other methods and can be efficiently used for image fusion.
Sedai, Suman; Garnavi, Rahil; Roy, Pallab; Xi Liang
2015-08-01
Multi-atlas segmentation first registers each atlas image to the target image and transfers the label of the atlas image to the coordinate system of the target image. The transferred labels are then combined using a label fusion algorithm. In this paper, we propose a novel label fusion method which aggregates discriminative learning and generative modeling for segmentation of cardiac MR images. First, a probabilistic Random Forest classifier is trained as a discriminative model to obtain the prior probability of a label at a given voxel of the target image. Then, a probability distribution of image patches is modeled using a Gaussian Mixture Model for each label, providing the likelihood of the voxel belonging to the label. The final label posterior is obtained by combining the classification score and the likelihood score under Bayes' rule. A comparative study performed on the MICCAI 2013 SATA Segmentation Challenge demonstrates that our proposed hybrid label fusion algorithm is more accurate than five other state-of-the-art label fusion methods. The proposed method obtains dice similarity coefficients of 0.94 and 0.92 in segmenting the epicardium and endocardium, respectively. Moreover, our label fusion method achieves more accurate segmentation results compared to four other label fusion methods.
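A minimal sketch of the Bayesian combination the abstract describes, assuming per-voxel features and patches have already been extracted; sklearn's RandomForestClassifier and GaussianMixture stand in for the paper's exact models, and all shapes and hyperparameters are illustrative.

```python
# Hedged sketch: posterior = RF prior x per-label GMM patch likelihood.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.mixture import GaussianMixture

def fit_models(train_feats, train_labels, train_patches):
    # train_feats: (n, d) voxel features; train_patches: (n, p) flattened patches.
    rf = RandomForestClassifier(n_estimators=100).fit(train_feats, train_labels)
    gmms = {lab: GaussianMixture(n_components=3).fit(train_patches[train_labels == lab])
            for lab in np.unique(train_labels)}
    return rf, gmms

def label_posterior(rf, gmms, feat, patch):
    prior = rf.predict_proba(feat[None, :])[0]              # P(label | voxel features)
    like = np.array([np.exp(gmms[lab].score_samples(patch[None, :])[0])
                     for lab in rf.classes_])               # p(patch | label)
    post = prior * like
    return post / post.sum()                                # normalized posterior
```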
Log-Gabor Energy Based Multimodal Medical Image Fusion in NSCT Domain
Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan
2014-01-01
Multimodal medical image fusion is a powerful tool in clinical applications such as noninvasive diagnosis, image-guided radiotherapy, and treatment planning. In this paper, a novel nonsubsampled Contourlet transform (NSCT) based method for multimodal medical image fusion is presented, which is approximately shift invariant and can effectively suppress the pseudo-Gibbs phenomena. The source medical images are initially transformed by NSCT, followed by fusing the low- and high-frequency components. Phase congruency, which can provide a contrast- and brightness-invariant representation, is applied to fuse the low-frequency coefficients, whereas the Log-Gabor energy, which can efficiently determine the frequency coefficients from the clear and detailed parts, is employed to fuse the high-frequency coefficients. The proposed fusion method has been compared with image fusion methods based on the discrete wavelet transform (DWT), the fast discrete curvelet transform (FDCT), and the dual-tree complex wavelet transform (DTCWT), as well as other NSCT-based methods. Visual and quantitative experimental results indicate that the proposed fusion method can obtain more effective and accurate fusion results for multimodal medical images than the other algorithms. Further, the applicability of the proposed method has been demonstrated by a clinical example on a woman affected by a recurrent tumor. PMID:25214889
Ernstberger, Thorsten; Heidrich, Gabert; Schultz, Wolfgang; Grabbe, Eckhardt
2007-02-01
Intervertebral spacers for anterior spine fusion are made of different materials, such as titanium and cobalt-chromium alloys and carbon fiber-reinforced polymers. Implant-related susceptibility artifacts can decrease the quality of MRI scans. The aim of this cadaveric study was to demonstrate the extent to which implant-related MRI artifacts affect the postfusion differentiation of determined regions of interest (ROIs). In six cadaveric porcine spines, we evaluated the postimplantation MRI scans of a titanium, a cobalt-chromium and a carbon spacer that differed in shape and surface qualities. A spacer made of human cortical bone was used as a control. A defined evaluation unit was divided into ROIs to characterize the spinal canal as well as the intervertebral disc space. Considering 15 different MRI sequences read independently by an interobserver-validated team of specialists, the artifact-affected image quality of the median MRI slice was rated on a score of 0-3. A maximum score of 18 points (100%) for the determined ROIs was possible. Turbo spin echo sequences produced the best scores for all spacers and the control. Only the control achieved a score of 100%. For the determined ROIs, maximum scores for the cobalt-chromium, titanium and carbon spacers were 24%, 32% and 84%, respectively. Using the favored T1 TSE sequences, the carbon spacer showed a clear advantage in postfusion spinal imaging. Independent of artifact dimensions, the scoring system used allowed us to create an implant-related ranking of MRI scan quality in reference to the bone control.
[A study on medical image fusion].
Zhang, Er-hu; Bian, Zheng-zhong
2002-09-01
Five algorithms for medical image fusion are analyzed, with their respective advantages and disadvantages. Four kinds of quantitative evaluation criteria for the quality of image fusion algorithms are proposed; these will provide guidance for future research.
NASA Astrophysics Data System (ADS)
Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Lee, Shin-Jye; He, Kangjian
2018-01-01
In order to promote the performance of infrared and visual image fusion and provide better visual effects, this paper proposes a hybrid fusion method for infrared and visual images combining the discrete stationary wavelet transform (DSWT), the discrete cosine transform (DCT) and local spatial frequency (LSF). The proposed method has three key processing steps. Firstly, DSWT is employed to decompose the important features of the source image into a series of sub-images with different levels and spatial frequencies. Secondly, DCT is used to separate the significant details of the sub-images according to the energy of different frequencies. Thirdly, LSF is applied to enhance the regional features of the DCT coefficients, which is helpful for image feature extraction. Some frequently used image fusion methods and evaluation metrics are employed to evaluate the validity of the proposed method. The experiments indicate that the proposed method achieves a good fusion effect and is more efficient than other conventional image fusion methods.
PET-CT image fusion using random forest and à-trous wavelet transform.
Seal, Ayan; Bhattacharjee, Debotosh; Nasipuri, Mita; Rodríguez-Esparragón, Dionisio; Menasalvas, Ernestina; Gonzalo-Martin, Consuelo
2018-03-01
New image fusion rules for multimodal medical images are proposed in this work. The image fusion rules are defined by a random forest learning algorithm and a translation-invariant à-trous wavelet transform (AWT). The proposed method is threefold. First, the source images are decomposed into approximation and detail coefficients using the AWT. Second, a random forest is used to choose pixels from the approximation and detail coefficients to form the approximation and detail coefficients of the fused image. Lastly, the inverse AWT is applied to reconstruct the fused image. All experiments have been performed on 198 slices of both computed tomography and positron emission tomography images of a patient. A traditional fusion method based on the Mallat wavelet transform has also been implemented on these slices. A new image fusion performance measure, along with four existing measures, has been presented, which helps to compare the performance of the two pixel-level fusion methods. The experimental results clearly indicate that the proposed method outperforms the traditional method in terms of visual and quantitative quality and that the new measure is meaningful. Copyright © 2017 John Wiley & Sons, Ltd.
Image Fusion of CT and MR with Sparse Representation in NSST Domain.
Qiu, Chenhui; Wang, Yuanyuan; Zhang, Huan; Xia, Shunren
2017-01-01
Multimodal image fusion techniques can integrate the information from different medical images to get an informative image that is more suitable for joint diagnosis, preoperative planning, intraoperative guidance, and interventional treatment. Fusing images of CT and different MR modalities are studied in this paper. Firstly, the CT and MR images are both transformed to nonsubsampled shearlet transform (NSST) domain. So the low-frequency components and high-frequency components are obtained. Then the high-frequency components are merged using the absolute-maximum rule, while the low-frequency components are merged by a sparse representation- (SR-) based approach. And the dynamic group sparsity recovery (DGSR) algorithm is proposed to improve the performance of the SR-based approach. Finally, the fused image is obtained by performing the inverse NSST on the merged components. The proposed fusion method is tested on a number of clinical CT and MR images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method can provide better fusion results in terms of subjective quality and objective evaluation.
Enhanced image fusion using directional contrast rules in fuzzy transform domain.
Nandal, Amita; Rosales, Hamurabi Gamboa
2016-01-01
In this paper, a novel image fusion algorithm based on directional contrast in the fuzzy transform (FTR) domain is proposed. The input images to be fused are first divided into several non-overlapping blocks. The components of these sub-blocks are fused using a directional-contrast-based fuzzy fusion rule in the FTR domain. The fused sub-blocks are then transformed into original-size blocks using the inverse FTR. Further, these inverse-transformed blocks are fused according to a select-maximum fusion rule to reconstruct the final fused image. The proposed fusion algorithm is both visually and quantitatively compared with other standard and recent fusion algorithms. Experimental results demonstrate that the proposed method generates better results than the other methods.
Bai, D Y; Zhang, H P; Zhong, S; Suo, W H; Gao, D H; Ding, Y; Tu, J H
2016-12-23
Objective: To investigate the clinical application value of combined detection of the ALK fusion gene and the c-ros oncogene 1 receptor tyrosine kinase (ROS1) fusion gene in non-small cell lung cancer (NSCLC) using real-time fluorescent PCR. Methods: A kit for combined detection of the ALK and ROS1 fusion genes based on fluorescent PCR was used to simultaneously detect the two fusion genes in 302 NSCLC specimens. The results were validated through Sanger sequencing, and the consistency of the two detection methods was analyzed. Results: All 302 NSCLC specimens were successfully analyzed through fluorescent PCR (302/302). Twelve cases (4.0%) were found to contain the ALK fusion gene, including 3 cases with ALK-M1, 3 with ALK-M2, 3 with ALK-M3, 1 with ALK-M4, and 2 with ALK-M6. Twelve cases (4.0%) were found to contain the ROS1 fusion gene, including 1 case with ROS1-M7, 8 with ROS1-M8, 1 with ROS1-M12, 1 with ROS1-M14, and 1 with double-positive ROS1-M3 and ROS1-M8 fusion genes. The total detection rate of the ALK and ROS1 fusion genes was 7.9% (24/302), and 278 cases were negative for both fusion genes. The success rate of Sanger DNA sequencing was also 100%. The positive, negative and total coincidence rates between real-time fluorescent PCR and Sanger DNA sequencing were all 100%. Conclusions: The results of Sanger DNA sequencing demonstrate that the real-time fluorescent PCR assay is equally effective in detecting ALK and ROS1 fusion genes in NSCLC tissues. Furthermore, the real-time fluorescent PCR assay can detect trace amounts of ALK and ROS1 fusion genes simultaneously in tiny samples, saving time and avoiding repeated sampling. It is worthy of recommendation as a rapid and reliable detection technique.
Mitani, Yoshitsugu; Rao, Pulivarthi H; Futreal, P Andrew; Roberts, Dianna B; Stephens, Philip J; Zhao, Yi-Jue; Zhang, Li; Mitani, Mutsumi; Weber, Randal S; Lippman, Scott M; Caulin, Carlos; El-Naggar, Adel K
2011-11-15
To investigate the molecular genetic heterogeneity associated with the t(6;9) in adenoid cystic carcinoma (ACC) and correlate the findings with patient clinical outcome. Multimolecular and genetic techniques complemented with massive pair-ended sequencing and single-nucleotide polymorphism array analyses were used on tumor specimens from 30 new and 52 previously analyzed fusion transcript-negative ACCs by reverse transcriptase PCR (RT-PCR). MYB mRNA expression level was determined by quantitative RT-PCR. The results of 102 tumors (30 new and 72 previously reported cases) were correlated with the clinicopathologic factors and patients' survival. The FISH analysis showed 34 of 82 (41.5%) fusion-positive tumors, and molecular techniques identified fusion transcripts in 21 of the 82 (25.6%) tumors. Detailed FISH analysis of 11 of the 15 tumors with gene fusion without transcript formation showed translocation of NFIB sequences to sites proximal or distal to the MYB gene. Massive pair-end sequencing of a subset of tumors confirmed the proximal translocation to an NFIB sequence and led to the identification of a new fusion gene (NFIB-AIG1) in one of the tumors. Overall, the MYB-NFIB gene fusion rate by FISH was 52.9%, whereas the incidence of fusion transcript formation was 38.2%. A significant statistical association between 5' MYB transcript expression and patient survival was found. We conclude that: (i) t(6;9) results in complex genetic and molecular alterations in ACC, (ii) MYB-NFIB gene fusion may not always be associated with chimeric transcript formation, (iii) noncanonical MYB-NFIB gene fusions occur in a subset of tumors, and (iv) high MYB expression correlates with worse patient survival.
Denis, F; Archambault, D
2001-01-01
Interleukin-1beta (IL-1beta) and tumor necrosis factor-alpha (TNF-alpha) are cytokines produced primarily by monocytes and macrophages, with regulatory effects in inflammation and multiple aspects of the immune response. As yet, no molecular data have been reported for IL-1beta and TNF-alpha of the beluga whale. In this study, we cloned and determined the entire cDNA sequences encoding beluga whale IL-1beta and TNF-alpha. The genetic relationship of the cytokine sequences was then analyzed against those from several mammalian species, including the human and the pig. The homology of beluga whale IL-1beta nucleic acid and deduced amino acid sequences with those from these mammalian species ranged from 74.6 to 86.0% and 62.7 to 77.1%, respectively, whereas that of TNF-alpha varied from 79.3 to 90.8% and 75.3 to 87.7%, respectively. Phylogenetic analyses based on deduced amino acid sequences showed that the beluga whale IL-1beta and TNF-alpha were most closely related to those of the ruminant species (cattle, sheep, and deer). The beluga whale IL-1beta- and TNF-alpha-encoding sequences were thereafter successfully expressed in Escherichia coli as fusion proteins by using procaryotic expression vectors. The fusion proteins were used to produce beluga whale IL-1beta- and TNF-alpha-specific rabbit antisera. PMID:11768130
Multiscale infrared and visible image fusion using gradient domain guided image filtering
NASA Astrophysics Data System (ADS)
Zhu, Jin; Jin, Weiqi; Li, Li; Han, Zhenghao; Wang, Xia
2018-03-01
For better surveillance with infrared and visible imaging, a novel hybrid multiscale decomposition fusion method using gradient domain guided image filtering (HMSD-GDGF) is proposed in this study. In this method, hybrid multiscale decomposition of the source images with guided image filtering and gradient domain guided image filtering is first applied, before the weight maps at each scale are obtained using a saliency detection technique and filtering, with three different fusion rules at different scales. The three fusion rules are for the small-scale detail level, the large-scale detail level, and the base level. As a result, the target becomes more salient and can be more easily detected in the fusion result, with the detail information of the scene fully displayed. Experimental comparisons with state-of-the-art fusion methods show that the HMSD-GDGF method has obvious advantages in fidelity of salient information (including structural similarity, brightness, and contrast), preservation of edge features, and human visual perception. Therefore, visual effects can be improved by using the proposed HMSD-GDGF method.
A new evaluation method research for fusion quality of infrared and visible images
NASA Astrophysics Data System (ADS)
Ge, Xingguo; Ji, Yiguo; Tao, Zhongxiang; Tian, Chunyan; Ning, Chengda
2017-03-01
In order to objectively evaluate the fusion effect of infrared and visible images, a fusion evaluation method based on energy-weighted average structural similarity and an edge information retention value is proposed to address the drawbacks of existing evaluation methods. The evaluation index of this method is given, and evaluation experiments on infrared and visible image fusion results under different algorithms and environments are conducted on the basis of this index. The experimental results show that the objective evaluation index is consistent with the subjective evaluation results, which indicates that the method is a practical and effective fusion image quality evaluation method.
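A hedged reconstruction of an energy-weighted SSIM index in the spirit of this abstract: local SSIM maps between each source image and the fused image are averaged with weights derived from the local energy of the sources. The exact weighting and the edge-retention term of the paper are not reproduced, and the window size is an illustrative choice.

```python
# Energy-weighted average SSIM between two source images and a fused image.
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.metrics import structural_similarity

def energy_weighted_ssim(ir, vis, fused, win=11):
    dr = fused.max() - fused.min()
    _, s_ir = structural_similarity(ir, fused, data_range=dr, full=True)
    _, s_vis = structural_similarity(vis, fused, data_range=dr, full=True)
    # Local energy of each source decides its per-pixel weight.
    e_ir = uniform_filter(ir.astype(float) ** 2, win)
    e_vis = uniform_filter(vis.astype(float) ** 2, win)
    w = e_ir / (e_ir + e_vis + 1e-12)
    return float(np.mean(w * s_ir + (1 - w) * s_vis))
```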
The optimal algorithm for Multi-source RS image fusion.
Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan
2016-01-01
In order to solve the issue that fusion rules cannot be self-adaptively adjusted by available fusion methods according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), integrating the merits of the genetic algorithm with the advantages of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observed operator. The algorithm then designs the objective function as a weighted sum of evaluation indices and optimizes it with GSDA so as to obtain a higher-resolution RS image. The main points of the text are summarized as follows:
• The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
• This article presents the GSDA algorithm for self-adaptive adjustment of the fusion rules.
• This text proposes the model operator and the observed operator as the fusion scheme for RS images based on GSDA.
The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.
Nagamachi, Shigeki; Nishii, Ryuichi; Wakamatsu, Hideyuki; Mizutani, Youichi; Kiyohara, Shogo; Fujita, Seigo; Futami, Shigemi; Sakae, Tatefumi; Furukoji, Eiji; Tamura, Shozo; Arita, Hideo; Chijiiwa, Kazuo; Kawai, Keiichi
2013-07-01
This study aimed at demonstrating the feasibility of retrospectively fused (18)F FDG-PET and MRI (PET/MRI fusion image) in diagnosing pancreatic tumor, in particular differentiating malignant tumor from benign lesions. In addition, we evaluated additional findings characterizing pancreatic lesions by FDG-PET/MRI fusion image. We analyzed retrospectively 119 patients: 96 cancers and 23 benign lesions. FDG-PET/MRI fusion images (PET/T1 WI or PET/T2WI) were made by dedicated software using 1.5 Tesla (T) MRI image and FDG-PET images. These images were interpreted by two well-trained radiologists without knowledge of clinical information and compared with FDG-PET/CT images. We compared the differential diagnostic capability between PET/CT and FDG-PET/MRI fusion image. In addition, we evaluated additional findings such as tumor structure and tumor invasion. FDG-PET/MRI fusion image significantly improved accuracy compared with that of PET/CT (96.6 vs. 86.6 %). As additional finding, dilatation of main pancreatic duct was noted in 65.9 % of solid types and in 22.6 % of cystic types, on PET/MRI-T2 fusion image. Similarly, encasement of adjacent vessels was noted in 43.1 % of solid types and in 6.5 % of cystic types. Particularly in cystic types, intra-tumor structures such as mural nodule (35.4 %) or intra-cystic septum (74.2 %) were detected additionally. Besides, PET/MRI-T2 fusion image could detect extra benign cystic lesions (9.1 % in solid type and 9.7 % in cystic type) that were not noted by PET/CT. In diagnosing pancreatic lesions, FDG-PET/MRI fusion image was useful in differentiating pancreatic cancer from benign lesions. Furthermore, it was helpful in evaluating relationship between lesions and surrounding tissues as well as in detecting extra benign cysts.
Weber-aware weighted mutual information evaluation for infrared-visible image fusion
NASA Astrophysics Data System (ADS)
Luo, Xiaoyan; Wang, Shining; Yuan, Ding
2016-10-01
A performance metric for infrared and visible image fusion is proposed based on Weber's law. To indicate the stimulus of source images, two Weber components are provided. One is differential excitation to reflect the spectral signal of visible and infrared images, and the other is orientation to capture the scene structure feature. By comparing the corresponding Weber component in infrared and visible images, the source pixels can be marked with different dominant properties in intensity or structure. If the pixels have the same dominant property label, the pixels are grouped to calculate the mutual information (MI) on the corresponding Weber components between dominant source and fused images. Then, the final fusion metric is obtained via weighting the group-wise MI values according to the number of pixels in different groups. Experimental results demonstrate that the proposed metric performs well on popular image fusion cases and outperforms other image fusion metrics.
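The core quantity being weighted in this metric is mutual information between a source component and the fused image; a plain histogram-based MI is sketched below. The Weber components (differential excitation and orientation) and the group-wise weighting of the paper are not reproduced, and the bin count is an illustrative choice.

```python
# Histogram-based mutual information between two images.
import numpy as np

def mutual_information(a, b, bins=64):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                    # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginal of a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of b
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```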
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chugh, Brige Paul; Krishnan, Kalpagam; Liu, Jeff
2014-08-15
Integration of biological conductivity information provided by Electrical Impedance Tomography (EIT) with anatomical information provided by Computed Tomography (CT) imaging could improve the ability to characterize tissues in clinical applications. In this paper, we report results of our study which compared the fusion of EIT with CT using three different image fusion algorithms, namely: weighted averaging, wavelet fusion, and ROI indexing. The ROI indexing method of fusion involves segmenting the regions of interest from the CT image and replacing the pixels with the pixels of the EIT image. The three algorithms were applied to a CT and EIT image of an anthropomorphic phantom, constructed out of five acrylic contrast targets with varying diameter embedded in a base of gelatin bolus. The imaging performance was assessed using Detectability and the Structural Similarity Index Measure (SSIM). Wavelet fusion and ROI indexing resulted in lower Detectability (by 35% and 47%, respectively) yet higher SSIM (by 66% and 73%, respectively) than weighted averaging. Our results suggest that wavelet fusion and ROI indexing yielded more consistent and optimal fusion performance than weighted averaging.
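Of the three algorithms compared, ROI indexing is the simplest to make concrete; a minimal sketch, assuming the ROI segmentation and CT/EIT registration have already been done:

```python
# Minimal ROI-indexing fusion: CT pixels inside the mask are replaced
# by co-registered EIT pixels.
import numpy as np

def roi_index_fusion(ct, eit, roi_mask):
    fused = ct.astype(float).copy()
    fused[roi_mask] = eit[roi_mask]   # conductivity values inside the ROI
    return fused
```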
Radar image and data fusion for natural hazards characterisation
Lu, Zhong; Dzurisin, Daniel; Jung, Hyung-Sup; Zhang, Jixian; Zhang, Yonghong
2010-01-01
Fusion of synthetic aperture radar (SAR) images through interferometric, polarimetric and tomographic processing provides an all-weather imaging capability to characterise and monitor various natural hazards. This article outlines interferometric synthetic aperture radar (InSAR) processing and products and their utility for natural hazards characterisation, provides an overview of the techniques and applications related to fusion of SAR/InSAR images with optical and other images and highlights the emerging SAR fusion technologies. In addition to providing precise land-surface digital elevation maps, SAR-derived imaging products can map millimetre-scale elevation changes driven by volcanic, seismic and hydrogeologic processes, by landslides and wildfires and other natural hazards. With products derived from the fusion of SAR and other images, scientists can monitor the progress of flooding, estimate water storage changes in wetlands for improved hydrological modelling predictions and assessments of future flood impacts and map vegetation structure on a global scale and monitor its changes due to such processes as fire, volcanic eruption and deforestation. With the availability of SAR images in near real-time from multiple satellites in the near future, the fusion of SAR images with other images and data is playing an increasingly important role in understanding and forecasting natural hazards.
Rouabah, K; Varoquaux, A; Caporossi, J M; Louis, G; Jacquier, A; Bartoli, J M; Moulin, G; Vidal, V
2016-11-01
The purpose of this study was to assess the feasibility and utility of image fusion (Easy-TIPS) obtained from pre-procedure CT angiography and per-procedure real-time fluoroscopy for portal vein puncture during transjugular intrahepatic portosystemic shunt (TIPS) placement. Eighteen patients (15 men, 3 women) with a mean age of 63 years (range: 48-81 years; median age: 65 years) were included in the study. All patients underwent TIPS placement by two groups of radiologists (one group with radiologists with <3 years of experience and one with ≥3 years) using fusion imaging obtained from three-dimensional computed tomography angiography of the portal vein and real-time fluoroscopic images of the portal vein. Image fusion was used to guide the portal vein puncture during TIPS placement. At the end of the procedure, the interventional radiologists evaluated the utility of fusion imaging for portal vein puncture during TIPS placement. Mismatch between the three-dimensional computed tomography angiography and real-time fluoroscopic images of the portal vein on image fusion was quantitatively analyzed. Post-treatment CT time, number of puncture attempts, total radiation exposure and radiation from the retrograde portography were also recorded. Image fusion was considered useful for portal vein puncture in 13/18 TIPS procedures (72%). The mean post-treatment time to obtain fusion images was 16.4 minutes. The 3D volume-rendered CT angiography images were strictly superimposed on direct portography in 10/18 procedures (56%). The mean mismatch value was 0.69 cm in height and 0.28 cm laterally. A mean of 4.6 portal vein puncture attempts was made; eight patients required fewer than three attempts. The mean radiation dose from retrograde portography was 421.2 dGy·cm², corresponding to a mean additional exposure of 19%. Fusion imaging from pre-procedural CT angiography is feasible and safe and makes portal puncture easier during TIPS placement. Copyright © 2016 Editions françaises de radiologie. Published by Elsevier Masson SAS. All rights reserved.
Image Fusion During Vascular and Nonvascular Image-Guided Procedures
Abi-Jaoudeh, Nadine; Kobeiter, Hicham; Xu, Sheng; Wood, Bradford J.
2013-01-01
Image fusion may be useful in any procedure where previous imaging such as positron emission tomography, magnetic resonance imaging, or contrast-enhanced computed tomography (CT) defines information that is referenced to the procedural imaging, to the needle or catheter, or to an ultrasound transducer. Fusion of prior and intraoperative imaging provides real-time feedback on tumor location or margin, metabolic activity, device location, or vessel location. Multimodality image fusion in interventional radiology was initially introduced for biopsies and ablations, especially for lesions only seen on arterial phase CT, magnetic resonance imaging, or positron emission tomography/CT, but has more recently been applied to other vascular and nonvascular procedures. Two different types of platforms are commonly used for image fusion and navigation: (1) electromagnetic tracking and (2) cone-beam CT. Both technologies are reviewed, as well as their strengths and weaknesses, indications, when to use one vs the other, tips and guidance to streamline use, and early evidence defining the clinical benefits of these rapidly evolving, commercially available and emerging techniques. PMID:23993079
An adaptive block-based fusion method with LUE-SSIM for multi-focus images
NASA Astrophysics Data System (ADS)
Zheng, Jianing; Guo, Yongcai; Huang, Yukun
2016-09-01
Because of a lens's limited depth of field, digital cameras are incapable of acquiring an all-in-focus image of objects at varying distances in a scene. Multi-focus image fusion techniques can effectively solve this problem. Block-based multi-focus image fusion methods, however, often suffer from blocking artifacts. An adaptive block-based fusion method based on lifting undistorted-edge structural similarity (LUE-SSIM) is therefore put forward. In this method, the image quality metric LUE-SSIM is first proposed, which utilizes characteristics of the human visual system (HVS) and structural similarity (SSIM) to make the metric consistent with human visual perception. A particle swarm optimization (PSO) algorithm with LUE-SSIM as the objective function is then used to optimize the block size for constructing the fused image. Experimental results on the LIVE image database show that LUE-SSIM outperforms SSIM on Gaussian defocus blur image quality assessment. A multi-focus image fusion experiment is also carried out to verify the proposed image fusion method in terms of visual and quantitative evaluation. The results show that the proposed method performs better than some other block-based methods, especially in reducing the blocking artifacts of the fused image, and that it effectively preserves the undistorted edge details in the focused regions of the source images.
Chen, Yuanbo; Li, Hulin; Wu, Dingtao; Bi, Keming; Liu, Chunxiao
2014-12-01
Construction of a three-dimensional (3D) model of the renal tumor facilitated surgical planning and imaging guidance for manual image fusion in laparoscopic partial nephrectomy (LPN) for intrarenal tumors. Fifteen patients with intrarenal tumors underwent LPN between January and December 2012. Computed tomography-based reconstruction of the 3D models of the renal tumors was performed using Mimics 12.1 software. Surgical planning was performed through morphometry and multi-angle visual views of the tumor model. Two-step manual image fusion superimposed the 3D model images onto 2D laparoscopic images. The image fusion was verified by intraoperative ultrasound. Imaging-guided laparoscopic hilar clamping and tumor excision were performed. Manual fusion time, patient demographics, surgical details, and postoperative treatment parameters were analyzed. The reconstructed 3D tumor models accurately represented the patients' physiological anatomical landmarks. The surgical planning markers were marked successfully. Manual image fusion was flexible and feasible, with a fusion time of 6 min (5-7 min). All surgeries were completed laparoscopically. The median tumor excision time was 5.4 min (3.5-10 min), whereas the median warm ischemia time was 25.5 min (16-32 min). Twelve patients (80%) demonstrated renal cell carcinoma on final pathology, and all surgical margins were negative. No tumor recurrence was detected after a median follow-up of 1 year (3-15 months). The surgical planning and two-step manual image fusion based on a 3D model of the renal tumor facilitated visible-imaging-guided tumor resection with negative margins in LPN for intrarenal tumors. It is promising and moves us one step closer to imaging-guided surgery.
CTA with fluoroscopy image fusion guidance in endovascular complex aortic aneurysm repair.
Sailer, A M; de Haan, M W; Peppelenbosch, A G; Jacobs, M J; Wildberger, J E; Schurink, G W H
2014-04-01
To evaluate the effect of intraoperative guidance by means of live fluoroscopy image fusion with computed tomography angiography (CTA) on iodinated contrast material volume, procedure time, and fluoroscopy time in endovascular thoraco-abdominal aortic repair. CTA with fluoroscopy image fusion road-mapping was prospectively evaluated in patients with complex aortic aneurysms who underwent fenestrated and/or branched endovascular repair (FEVAR/BEVAR). Total iodinated contrast material volume, overall procedure time, and fluoroscopy time were compared between the fusion group (n = 31) and case controls (n = 31). Reasons for potential fusion image inaccuracy were analyzed. Fusion imaging was feasible in all patients. Fusion image road-mapping was used for navigation and positioning of the devices and catheter guidance during access to target vessels. Iodinated contrast material volume and procedure time were significantly lower in the fusion group than in case controls (159 mL [95% CI 132-186 mL] vs. 199 mL [95% CI 170-229 mL], p = .037, and 5.2 hours [95% CI 4.5-5.9 hours] vs. 6.3 hours [95% CI 5.4-7.2 hours], p = .022). No significant differences in fluoroscopy time were observed (p = .38). Respiration-related vessel displacement, vessel elongation, and displacement by stiff devices, as well as patient movement, were identified as reasons for fusion image inaccuracy. Image fusion guidance provides added value in complex endovascular interventions. The technology significantly reduces iodinated contrast material dose and procedure time. Copyright © 2014 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Dong, Min; Dong, Chenghui; Guo, Miao; Wang, Zhe; Mu, Xiaomin
2018-04-01
Multiresolution-based methods, such as wavelet and Contourlet transforms, are usually used for image fusion. This work presents a new image fusion framework that utilizes an area-based standard deviation in the dual-tree Contourlet transform domain. Firstly, the pre-registered source images are decomposed with the dual-tree Contourlet transform, yielding low-pass and high-pass coefficients. Then, the low-pass bands are fused with a weighted average based on the area standard deviation rather than the simple "averaging" rule, while the high-pass bands are merged with the "max-absolute" fusion rule. Finally, the modified low-pass and high-pass coefficients are used to reconstruct the final fused image. The major advantage of the proposed fusion method over conventional fusion is the approximate shift invariance and multidirectional selectivity of the dual-tree Contourlet transform. The proposed method is compared with wavelet- and Contourlet-based methods and other state-of-the-art methods on commonly used multi-focus images. Experiments demonstrate that the proposed fusion framework is feasible and effective, and that it performs better in both subjective and objective evaluation.
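A hedged sketch of the low-pass rule described above, computing weights from the area-based (local) standard deviation; the dual-tree Contourlet decomposition itself is assumed to have been performed separately, and the window size is an illustrative choice.

```python
# Area-standard-deviation weighted average for two low-pass bands.
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(x, win=7):
    m = uniform_filter(x, win)
    return np.sqrt(np.maximum(uniform_filter(x * x, win) - m * m, 0.0))

def fuse_lowpass(a, b, win=7, eps=1e-12):
    sa, sb = local_std(a, win), local_std(b, win)
    wa = sa / (sa + sb + eps)       # more weight where a has more local activity
    return wa * a + (1.0 - wa) * b
```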
Multi-focus image fusion with the all convolutional neural network
NASA Astrophysics Data System (ADS)
Du, Chao-ben; Gao, She-sheng
2018-01-01
A decision map contains complete and clear information about the images to be fused, which is crucial to various image fusion tasks, especially multi-focus image fusion. However, obtaining a satisfactory decision map is necessary and usually difficult. In this letter, we address this problem with a convolutional neural network (CNN), aiming to obtain a state-of-the-art decision map. The main idea is that the max-pooling of the CNN is replaced by a convolution layer, the residuals are propagated backwards by gradient descent, and the training parameters of the individual layers of the CNN are updated layer by layer. Based on this, we propose a new all-CNN (ACNN)-based multi-focus image fusion method in the spatial domain. We demonstrate that the decision map obtained from the ACNN is reliable and can lead to high-quality fusion results. Experimental results clearly validate that the proposed algorithm obtains state-of-the-art fusion performance in terms of both qualitative and quantitative evaluations.
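As a rough illustration of the all-convolutional idea, the sketch below replaces each pooling stage of a small classifier with a stride-2 convolution. The layer widths, the two-channel input (a stacked pair of source patches) and the per-pixel score head are illustrative assumptions, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class ACNN(nn.Module):
    """All-convolutional focus classifier: stride-2 convs replace max-pooling."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2, padding=1), nn.ReLU(),  # replaces a max-pool
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),  # replaces a max-pool
        )
        self.head = nn.Conv2d(64, 2, 1)  # two-class (focused / defocused) score map

    def forward(self, x):  # x: (batch, 2, H, W) stacked pair of source patches
        return self.head(self.features(x))

scores = ACNN()(torch.randn(1, 2, 64, 64))  # -> (1, 2, 16, 16) downsampled score map
```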
Energy-resolved neutron imaging for inertial confinement fusion
NASA Astrophysics Data System (ADS)
Moran, M. J.; Haan, S. W.; Hatchett, S. P.; Izumi, N.; Koch, J. A.; Lerche, R. A.; Phillips, T. W.
2003-03-01
The success of the National Ignition Facility program will depend on diagnostic measurements which study the performance of inertial confinement fusion (ICF) experiments. Neutron yield, fusion-burn time history, and images are examples of important diagnostics. Neutron and x-ray images will record the geometries of compressed targets during the fusion-burn process. Such images provide a critical test of the accuracy of numerical modeling of ICF experiments. They also can provide valuable information in cases where experiments produce unexpected results. Although x-ray and neutron images provide similar data, they do have significant differences. X-ray images represent the distribution of high-temperature regions where fusion occurs, while neutron images directly reveal the spatial distribution of fusion-neutron emission. X-ray imaging has the advantage of a relatively straightforward path to the imaging system design. Neutron imaging, by using energy-resolved detection, offers the intriguing advantage of being able to provide independent images of burning and nonburning regions of the nuclear fuel. The usefulness of energy-resolved neutron imaging depends on both the information content of the data and on the quality of the data that can be recorded. The information content will relate to the characteristic neutron spectra that are associated with emission from different regions of the source. Numerical modeling of ICF fusion burn will be required to interpret the corresponding energy-dependent images. The exercise will be useful only if the images can be recorded with sufficient definition to reveal the spatial and energy-dependent features of interest. Several options are being evaluated with respect to the feasibility of providing the desired simultaneous spatial and energy resolution.
Sjöberg, C; Ahnesjö, A
2013-06-01
Label fusion multi-atlas approaches for image segmentation can give better segmentation results than single atlas methods. We present a multi-atlas label fusion strategy based on probabilistic weighting of distance maps. Relationships between image similarities and segmentation similarities are estimated in a learning phase and used to derive fusion weights that are proportional to the probability for each atlas to improve the segmentation result. The method was tested using a leave-one-out strategy on a database of 21 pre-segmented prostate patients for different image registrations combined with different image similarity scorings. The probabilistic weighting yields results that are equal or better compared to both fusion with equal weights and results using the STAPLE algorithm. Results from the experiments demonstrate that label fusion by weighted distance maps is feasible, and that probabilistic weighted fusion improves segmentation quality more the stronger the individual atlas segmentation quality depends on the corresponding registered image similarity. The regions used for evaluation of the image similarity measures were found to be more important than the choice of similarity measure. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
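A minimal sketch of the fusion step, assuming the per-atlas probabilistic weights have already been learned from the image-similarity/segmentation-similarity relationships described above: each registered atlas label map is converted to a signed distance map, the maps are combined with the learned weights, and the fused segmentation is the region where the weighted sum is negative (inside).

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    # Negative inside the structure, positive outside.
    return distance_transform_edt(~mask) - distance_transform_edt(mask)

def fuse_labels(atlas_masks, weights):
    # Weighted sum of signed distance maps; the fused label is the set of
    # voxels the weighted atlases agree lie inside the structure.
    total = sum(w * signed_distance(m) for m, w in zip(atlas_masks, weights))
    return total < 0
```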
Ji, Hongwei; He, Jiangping; Yang, Xin; Deklerck, Rudi; Cornelis, Jan
2013-05-01
In this paper, we present an autocontext model (ACM)-based automatic liver segmentation algorithm, which combines ACM, multiatlases, and mean-shift techniques to segment the liver from 3-D CT images. Our algorithm is a learning-based method and can be divided into two stages. At the first stage, i.e., the training stage, ACM is performed to learn a sequence of classifiers in each atlas space (based on each atlas and other aligned atlases). With the use of multiple atlases, multiple sequences of ACM-based classifiers are obtained. At the second stage, i.e., the segmentation stage, the test image is segmented in each atlas space by applying each sequence of ACM-based classifiers. The final segmentation result is obtained by fusing segmentation results from all atlas spaces via a multiclassifier fusion technique. Specifically, in order to speed up segmentation, given a test image, we first use an improved mean-shift algorithm to perform over-segmentation and then implement region-based image labeling instead of the original inefficient pixel-based image labeling. The proposed method is evaluated on the datasets of the MICCAI 2007 liver segmentation challenge. The experimental results show that the average volume overlap error and the average surface distance achieved by our method are 8.3% and 1.5 mm, respectively, which are comparable to the results reported in the existing state-of-the-art work on liver segmentation.
NASA Astrophysics Data System (ADS)
Liu, F.; Chen, T.; He, J.; Wen, Q.; Yu, F.; Gu, X.; Wang, Z.
2018-04-01
In recent years, the rapid upgrading and improvement of SAR sensors provide beneficial complements to traditional optical remote sensing in terms of theory, technology and data. In this paper, Sentinel-1A SAR data and GF-1 optical data were selected for image fusion, with emphasis on dryland crop classification under a complex crop planting structure, taking corn and cotton as the research objects. Considering the differences among various data fusion methods, the principal component analysis (PCA), Gram-Schmidt (GS), Brovey and wavelet transform (WT) methods were compared with each other, and the GS and Brovey methods were proved to be more applicable in the study area. Then, the classification was conducted based on the object-oriented technique process. For the GS and Brovey fusion images and the GF-1 optical image, the nearest neighbour algorithm was adopted to realize supervised classification with the same training samples. Based on the sample plots in the study area, the accuracy assessment was conducted subsequently. The values of overall accuracy and kappa coefficient of the fusion images were all higher than those of the GF-1 optical image, and the GS method performed better than the Brovey method. In particular, the overall accuracy of the GS fusion image was 79.8%, and the kappa coefficient was 0.644. Thus, the results showed that GS and Brovey fusion images were superior to optical images for dryland crop classification. This study suggests that the fusion of SAR and optical images is reliable for dryland crop classification under a complex crop planting structure.
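Of the methods compared, the Brovey transform is compact enough to state directly: each upsampled MS band is modulated by the ratio of the high-resolution image to the mean MS intensity. A minimal sketch, where hires stands for whichever image supplies the spatial detail; the array shapes are our assumption.

```python
import numpy as np

def brovey(ms, hires, eps=1e-12):
    """ms: (bands, H, W) upsampled multispectral; hires: (H, W) detail image."""
    intensity = ms.mean(axis=0)              # per-pixel MS intensity
    return ms * (hires / (intensity + eps))  # ratio modulates every band
```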
Bo, Xiao-Wan; Xu, Hui-Xiong; Wang, Dan; Guo, Le-Hang; Sun, Li-Ping; Li, Xiao-Long; Zhao, Chong-Ke; He, Ya-Ping; Liu, Bo-Ji; Li, Dan-Dan; Zhang, Kun
2016-11-01
To investigate the usefulness of fusion imaging of contrast-enhanced ultrasound (CEUS) and CECT/CEMRI before percutaneous ultrasound-guided radiofrequency ablation (RFA) for liver cancers. 45 consecutive patients with 70 liver lesions were included between March 2013 and October 2015, and all the lesions were identified on CEMRI/CECT prior to inclusion in the study. Planning ultrasound for percutaneous RFA was performed using conventional ultrasound, ultrasound-CECT/CEMRI fusion imaging and CEUS and CECT/CEMRI fusion imaging during the same session. The numbers of conspicuous lesions on ultrasound and fusion imaging were recorded. RFA was performed according to the results of fusion imaging. The complete response (CR) rate was calculated and complications were recorded. On conventional ultrasound, 25 (35.7%) of the 70 lesions were conspicuous, whereas 45 (64.3%) were inconspicuous. Ultrasound-CECT/CEMRI fusion imaging detected an additional 24 lesions, increasing the number of conspicuous lesions to 49 (70.0%) (70.0% vs 35.7%; p < 0.001 in comparison with conventional ultrasound). With the use of CEUS and CECT/CEMRI fusion imaging, the number of conspicuous lesions further increased to 67 (95.7%, 67/70) (95.7% vs 70.0%, 95.7% vs 35.7%; both p < 0.001 in comparison with ultrasound and ultrasound-CECT/CEMRI fusion imaging, respectively). With the assistance of CEUS and CECT/CEMRI fusion imaging, the confidence level of the operator for performing RFA improved significantly with regard to visualization of the target lesions (p = 0.001). The CR rate for RFA was 97.0% (64/66) according to the CECT/CEMRI results 1 month later. No procedure-related deaths or major complications occurred during or after RFA. Fusion of CEUS and CECT/CEMRI improves the visualization of lesions that are inconspicuous on conventional ultrasound. It also improves the RFA operators' confidence and the CR rate of RFA. Advances in knowledge: CEUS and CECT/CEMRI fusion imaging is better than both conventional ultrasound and ultrasound-CECT/CEMRI fusion imaging for lesion visualization and improves operator confidence; thus it is recommended for routine use in ultrasound-guided percutaneous RFA procedures for liver cancer.
Schumacher, Kirstin; Matz, Magnus; Brüning, Dennis; Baumann, Knut; Rustenbeck, Ingo
2015-05-01
The pre-exocytotic behavior of insulin granules was studied against the background of the entirety of submembrane granules in MIN6 cells, and the characteristics were compared with the macroscopic secretion pattern and the cytosolic Ca(2+) concentration of MIN6 pseudo-islets at 22°C, 32°C and 37°C. The mobility of granules labeled by insulin-EGFP and the fusion events were assessed by TIRF microscopy utilizing an observer-independent algorithm. In the z-dimension, 40 mM K(+) or 30 mM glucose increased the granule turnover. The effect of high K(+) was quickly reversible. The increase by glucose was more sustained and modified the efficacy of a subsequent K(+) stimulus. The effect size of glucose increased with physiological temperature whereas that of high K(+) did not. The mobility in the x/y-dimension and the fusion rates were little affected by the stimuli, in contrast to secretion. Fusion and secretion, however, had the same temperature dependence. Granules that appeared and fused within one image sequence had significantly larger caging diameters than pre-existent granules that underwent fusion. These in turn had a different mobility than residence-matched non-fusing granules. In conclusion, delivery to the membrane, tethering and fusion of granules are differently affected by insulinotropic stimuli. Fusion rates and secretion do not appear to be tightly coupled. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Heterogeneous Vision Data Fusion for Independently Moving Cameras
2010-03-01
The project addresses target detection, tracking, and identification over a large terrain. Its goal is to investigate and evaluate existing image fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms to moving target detection, tracking, and classification.
Cha, Dong Ik; Lee, Min Woo; Kang, Tae Wook; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Kim, Kyunga
2017-10-01
To identify the more accurate reference data sets for fusion imaging-guided radiofrequency ablation or biopsy of hepatic lesions between computed tomography (CT) and magnetic resonance (MR) images. This study was approved by the institutional review board, and written informed consent was received from all patients. Twelve consecutive patients who were referred to assess the feasibility of radiofrequency ablation or biopsy were enrolled. Automatic registration using CT and MR images was performed in each patient. Registration errors during optimal and opposite respiratory phases, time required for image fusion and number of point locks used were compared using the Wilcoxon signed-rank test. The registration errors during optimal respiratory phase were not significantly different between image fusion using CT and MR images as reference data sets (p = 0.969). During opposite respiratory phase, the registration error was smaller with MR images than CT (p = 0.028). The time and the number of points locks needed for complete image fusion were not significantly different between CT and MR images (p = 0.328 and p = 0.317, respectively). MR images would be more suitable as the reference data set for fusion imaging-guided procedures of focal hepatic lesions than CT images.
Momeni, Saba; Pourghassem, Hossein
2014-08-01
Recently, image fusion has taken a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most widely used imaging modalities for diagnosing brain vascular diseases and for brain radiosurgery. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of vessel dispersion generated by the injected contrast material. The proposed fusion scheme contains different fusion methods for high and low frequency contents, based on the coefficient characteristics of the wrapping second-generation curvelet transform and a novel content selection strategy. The proposed content selection strategy is defined based on the sample correlation of the curvelet transform coefficients. In the proposed fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules for the high frequency coefficients. For low frequency coefficients, the maximum selection rule based on a local energy criterion is applied for better visual perception. The proposed fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of the proposed fusion algorithm in comparison with common and basic fusion algorithms.
Effective Multifocus Image Fusion Based on HVS and BP Neural Network
Yang, Yong
2014-01-01
The aim of multifocus image fusion is to fuse the images taken from the same scene with different focuses to obtain a resultant image with all objects in focus. In this paper, a novel multifocus image fusion method based on human visual system (HVS) and back propagation (BP) neural network is presented. Three features which reflect the clarity of a pixel are firstly extracted and used to train a BP neural network to determine which pixel is clearer. The clearer pixels are then used to construct the initial fused image. Thirdly, the focused regions are detected by measuring the similarity between the source images and the initial fused image followed by morphological opening and closing operations. Finally, the final fused image is obtained by a fusion rule for those focused regions. Experimental results show that the proposed method can provide better performance and outperform several existing popular fusion methods in terms of both objective and subjective evaluations. PMID:24683327
Quantitative image fusion in infrared radiometry
NASA Astrophysics Data System (ADS)
Romm, Iliya; Cukurel, Beni
2018-05-01
Towards high-accuracy infrared radiance estimates, measurement practices and processing techniques aimed to achieve quantitative image fusion using a set of multi-exposure images of a static scene are reviewed. The conventional non-uniformity correction technique is extended, as the original is incompatible with quantitative fusion. Recognizing the inherent limitations of even the extended non-uniformity correction, an alternative measurement methodology, which relies on estimates of the detector bias using self-calibration, is developed. Combining data from multi-exposure images, two novel image fusion techniques that ultimately provide high tonal fidelity of a photoquantity are considered: ‘subtract-then-fuse’, which conducts image subtraction in the camera output domain and partially negates the bias frame contribution common to both the dark and scene frames; and ‘fuse-then-subtract’, which reconstructs the bias frame explicitly and conducts image fusion independently for the dark and the scene frames, followed by subtraction in the photoquantity domain. The performances of the different techniques are evaluated for various synthetic and experimental data, identifying the factors contributing to potential degradation of the image quality. The findings reflect the superiority of the ‘fuse-then-subtract’ approach, conducting image fusion via per-pixel nonlinear weighted least squares optimization.
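To make the 'fuse-then-subtract' path concrete: once the bias frame has been reconstructed, each exposure yields a per-pixel estimate of the photoquantity, and the estimates are combined by weighted least squares. With fixed weights the per-pixel solution reduces to a weighted mean; the exposure-proportional weights below are an illustrative assumption, not the paper's nonlinear weighting scheme.

```python
import numpy as np

def fuse_then_subtract(scene_frames, bias, exposure_times):
    """scene_frames: (N, H, W) multi-exposure stack; bias: (H, W) reconstructed
    bias frame; exposure_times: (N,). Returns the fused photoquantity map."""
    t = np.asarray(exposure_times)[:, None, None]
    estimates = (scene_frames - bias) / t             # per-frame photoquantity
    weights = np.broadcast_to(t, scene_frames.shape)  # longer exposure -> lower relative noise
    # Saturated pixels would normally be masked out of the weights; omitted here.
    return (weights * estimates).sum(axis=0) / weights.sum(axis=0)
```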
Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei
2016-01-01
Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation. PMID:27649190
Landcover classification in MRF context using Dempster-Shafer fusion for multisensor imagery.
Sarkar, Anjan; Banerjee, Anjan; Banerjee, Nilanjan; Brahma, Siddhartha; Kartikeyan, B; Chakraborty, Manab; Majumder, K L
2005-05-01
This work deals with multisensor data fusion to obtain landcover classification. The role of feature-level fusion using the Dempster-Shafer rule and that of data-level fusion in the MRF context is studied in this paper to obtain an optimally segmented image. Subsequently, segments are validated and classification accuracy for the test data is evaluated. Two examples of data fusion of optical images and a synthetic aperture radar image are presented, each set having been acquired on different dates. Classification accuracies of the technique proposed are compared with those of some recent techniques in literature for the same image data.
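Feature-level fusion here rests on Dempster's rule of combination, which is worth a compact sketch: the masses of two belief functions over subsets of the class frame are multiplied, mass assigned to disjoint subsets is treated as conflict, and the remainder is renormalized. The two-sensor, two-class example is hypothetical.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions; keys are frozensets of class labels."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass falling on the empty set
    k = 1.0 - conflict           # renormalization (assumes sensors not fully conflicting)
    return {s: v / k for s, v in combined.items()}

# Hypothetical example: optical vs. SAR evidence over {water, urban}.
m_opt = {frozenset({"water"}): 0.6, frozenset({"water", "urban"}): 0.4}
m_sar = {frozenset({"urban"}): 0.5, frozenset({"water", "urban"}): 0.5}
print(dempster_combine(m_opt, m_sar))
```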
NASA Astrophysics Data System (ADS)
Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.
2016-01-01
Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.
Liu, Min; Wang, Xueping; Zhang, Hongzhong
2018-03-01
In the biomedical field, digital multi-focal images are very important for documentation and communication of specimen data, because the morphological information for a transparent specimen can be captured in the form of a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We present a deep convolutional neural network (CNN) image fusion based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN based image fusion technique is used to combine the relevant information of the multi-focal images within a given stack into a single image, which is more informative and complete than any single image in the given stack. In addition, multi-focal images within a stack are fused along 3 orthogonal directions, and the multiple features extracted from the fused images along different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors - texture, shape, different instances within the same class and different classes of objects - we embed the deep CNN based image fusion method within a multilinear framework to propose an image fusion based multilinear classifier. The experimental results on nematode multi-focal image stacks demonstrate that the deep CNN image fusion based multilinear classifier can reach a higher classification rate (95.7%) than the previous multilinear based approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed deep CNN image fusion based multilinear approach shows great potential in building an automated nematode taxonomy system for nematologists and is effective for classifying multi-focal image stacks. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Pavel, M.
1993-01-01
The topics covered include the following: a system overview of the basic components of a system designed to improve the ability of a pilot to fly through low-visibility conditions such as fog; the role of visual sciences; fusion issues; sensor characterization; sources of information; image processing; and image fusion.
Image fusion based on Bandelet and sparse representation
NASA Astrophysics Data System (ADS)
Zhang, Jiuxing; Zhang, Wei; Li, Xuzhi
2018-04-01
The Bandelet transform can capture geometric regular directions and geometric flow, and sparse representation can represent signals with as few atoms as possible over an over-complete dictionary; both can be used for image fusion. Therefore, a new fusion method based on the Bandelet transform and sparse representation is proposed, which fuses the Bandelet coefficients of multi-source images to obtain high-quality fusion results. Tests are performed on remote sensing images and simulated multi-focus images; experimental results show that the performance of the new method is better than that of the tested methods according to objective evaluation indexes and subjective visual effects.
A Standard Mammography Unit - Standard 3D Ultrasound Probe Fusion Prototype: First Results.
Schulz-Wendtland, Rüdiger; Jud, Sebastian M; Fasching, Peter A; Hartmann, Arndt; Radicke, Marcus; Rauh, Claudia; Uder, Michael; Wunderle, Marius; Gass, Paul; Langemann, Hanna; Beckmann, Matthias W; Emons, Julius
2017-06-01
The combination of different imaging modalities through the use of fusion devices promises significant diagnostic improvement for breast pathology. The aim of this study was to evaluate image quality and clinical feasibility of a prototype fusion device (fusion prototype) constructed from a standard tomosynthesis mammography unit and a standard 3D ultrasound probe using a new method of breast compression. Imaging was performed on 5 mastectomy specimens from patients with confirmed DCIS or invasive carcinoma (BI-RADS™ 6). For the preclinical fusion prototype an ABVS system ultrasound probe from an Acuson S2000 was integrated into a MAMMOMAT Inspiration (both Siemens Healthcare Ltd) and, with the aid of a newly developed compression plate, digital mammogram and automated 3D ultrasound images were obtained. The quality of digital mammogram images produced by the fusion prototype was comparable to those produced using conventional compression. The newly developed compression plate did not influence the applied x-ray dose. The method was not more labour intensive or time-consuming than conventional mammography. From the technical perspective, fusion of the two modalities was achievable. In this study, using only a few mastectomy specimens, the fusion of an automated 3D ultrasound machine with a standard mammography unit delivered images of comparable quality to conventional mammography. The device allows simultaneous ultrasound - the second important imaging modality in complementary breast diagnostics - without increasing examination time or requiring additional staff.
Fusion method of SAR and optical images for urban object extraction
NASA Astrophysics Data System (ADS)
Jia, Yonghong; Blum, Rick S.; Li, Fangfang
2007-11-01
A new image fusion method for SAR, panchromatic (Pan) and multispectral (MS) data is proposed. First of all, SAR texture is extracted by ratioing the despeckled SAR image to its low-pass approximation, and is used to modulate the high-pass details extracted from the available Pan image by means of the à trous wavelet decomposition. Then, the high-pass details modulated with the texture are applied to obtain the fusion product by the HPFM (high-pass filter-based modulation) fusion method. A set of image data including co-registered Landsat TM, ENVISAT SAR and SPOT Pan is used for the experiment. The results demonstrate accurate spectral preservation on vegetated regions, bare soil, and also on textured areas (buildings and road network) where SAR texture information enhances the fusion product, and the proposed approach is effective for image interpretation and classification.
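A rough sketch of this pipeline: texture is the ratio of the despeckled SAR image to its low-pass approximation, and it modulates the Pan high-pass detail before an HPFM-style injection into each MS band. The Gaussian low-pass standing in for the a trous approximation, the median-filter despeckling, and the exact injection formula are all our assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def hpfm_fuse(ms, pan, sar, sigma=2.0, eps=1e-12):
    """ms: (bands, H, W) co-registered MS; pan, sar: (H, W)."""
    sar_d = median_filter(sar, size=3)                       # crude despeckling stand-in
    texture = sar_d / (gaussian_filter(sar_d, sigma) + eps)  # SAR / low-pass(SAR)
    pan_low = gaussian_filter(pan, sigma)                    # stands in for a trous low-pass
    detail = (pan - pan_low) * texture                       # texture-modulated detail
    # HPFM-style injection: each band receives detail scaled by its ratio to Pan low-pass.
    return np.stack([band + (band / (pan_low + eps)) * detail for band in ms])
```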
Improved medical image fusion based on cascaded PCA and shift invariant wavelet transforms.
Reena Benjamin, J; Jayasree, T
2018-02-01
In the medical field, radiologists need more informative and high-quality medical images to diagnose diseases. Image fusion plays a vital role in the field of biomedical image analysis. It aims to integrate the complementary information from multimodal images, producing a new composite image which is expected to be more informative for visual perception than any of the individual input images. The main objective of this paper is to improve the information, to preserve the edges and to enhance the quality of the fused image using cascaded principal component analysis (PCA) and shift invariant wavelet transforms. A novel image fusion technique based on cascaded PCA and shift invariant wavelet transforms is proposed in this paper. PCA in spatial domain extracts relevant information from the large dataset based on eigenvalue decomposition, and the wavelet transform operating in the complex domain with shift invariant properties brings out more directional and phase details of the image. The significance of maximum fusion rule applied in dual-tree complex wavelet transform domain enhances the average information and morphological details. The input images of the human brain of two different modalities (MRI and CT) are collected from whole brain atlas data distributed by Harvard University. Both MRI and CT images are fused using cascaded PCA and shift invariant wavelet transform method. The proposed method is evaluated based on three main key factors, namely structure preservation, edge preservation, contrast preservation. The experimental results and comparison with other existing fusion methods show the superior performance of the proposed image fusion framework in terms of visual and quantitative evaluations. In this paper, a complex wavelet-based image fusion has been discussed. The experimental results demonstrate that the proposed method enhances the directional features as well as fine edge details. Also, it reduces the redundant details, artifacts, distortions.
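The PCA stage of the cascade admits a compact sketch: global fusion weights are taken from the dominant eigenvector of the two source images' joint covariance (the subsequent shift-invariant wavelet stage with the maximum rule is omitted here).

```python
import numpy as np

def pca_fuse(a, b):
    """PCA weighting: weights from the dominant eigenvector of the joint covariance."""
    cov = np.cov(np.stack([a.ravel(), b.ravel()]))
    vals, vecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    v = np.abs(vecs[:, -1])           # dominant component
    wa, wb = v / v.sum()
    return wa * a + wb * b
```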
Schwein, Adeline; Chinnadurai, Ponraj; Shah, Dipan J; Lumsden, Alan B; Bechara, Carlos F; Bismuth, Jean
2017-05-01
Three-dimensional image fusion of preoperative computed tomography (CT) angiography with fluoroscopy using intraoperative noncontrast cone-beam CT (CBCT) has been shown to improve endovascular procedures by reducing procedure length, radiation dose, and contrast media volume. However, patients with a contraindication to CT angiography (renal insufficiency, iodinated contrast allergy) may not benefit from this image fusion technique. The primary objective of this study was to evaluate the feasibility of magnetic resonance angiography (MRA) and fluoroscopy image fusion using noncontrast CBCT as a guidance tool during complex endovascular aortic procedures, especially in patients with renal insufficiency. All endovascular aortic procedures done under MRA image fusion guidance at a single-center were retrospectively reviewed. The patients had moderate to severe renal insufficiency and underwent diagnostic contrast-enhanced magnetic resonance imaging after gadolinium or ferumoxytol injection. Relevant vascular landmarks electronically marked in MRA images were overlaid on real-time two-dimensional fluoroscopy for image guidance, after image fusion with noncontrast intraoperative CBCT. Technical success, time for image registration, procedure time, fluoroscopy time, number of digital subtraction angiography (DSA) acquisitions before stent deployment or vessel catheterization, and renal function before and after the procedure were recorded. The image fusion accuracy was qualitatively evaluated on a binary scale by three physicians after review of image data showing virtual landmarks from MRA on fluoroscopy. Between November 2012 and March 2016, 10 patients underwent endovascular procedures for aortoiliac aneurysmal disease or aortic dissection using MRA image fusion guidance. All procedures were technically successful. A paired t-test analysis showed no difference between preimaging and postoperative renal function (P = .6). The mean time required for MRA-CBCT image fusion was 4:09 ± 01:31 min:sec. Total fluoroscopy time was 20.1 ± 6.9 minutes. Five of 10 patients (50%) underwent stent graft deployment without any predeployment DSA acquisition. Three of six vessels (50%) were cannulated under image fusion guidance without any precannulation DSA runs, and the remaining vessels were cannulated after one planning DSA acquisition. Qualitative evaluation showed 14 of 22 virtual landmarks (63.6%) from MRA overlaid on fluoroscopy were completely accurate, without the need for adjustment. Five of eight incorrect virtual landmarks (iliac and visceral arteries) resulted from vessel deformation caused by endovascular devices. Ferumoxytol or gadolinium-enhanced MRA imaging and image fusion with fluoroscopy using noncontrast CBCT is feasible and allows patients with renal insufficiency to benefit from optimal guidance during complex endovascular aortic procedures, while preserving their residual renal function. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Improved detection probability of low level light and infrared image fusion system
NASA Astrophysics Data System (ADS)
Luo, Yuxiang; Fu, Rongguo; Zhang, Junju; Wang, Wencong; Chang, Benkang
2018-02-01
Low-level-light (LLL) images contain rich information on environmental details but are easily affected by the weather; in the case of smoke, rain, cloud or fog, much target information is lost. Infrared imaging, which relies on the radiation emitted by the object itself, can "actively" obtain target information in the scene. However, its contrast and resolution are poor, its ability to capture target details is limited, and the imaging mode does not conform to human visual habits. The fusion of LLL and infrared images can compensate for the deficiencies of each sensor and exploit the advantages of both. First, we present the hardware design of the fusion circuit. Then, through recognition probability calculations for a target (one person) and the background (trees), we find that the detection probability for trees is higher in the LLL image than in the infrared image, and the detection probability for the person is clearly higher in the infrared image than in the LLL image. The detection probability of the fusion image for both the person and the trees is higher than that of either single detector. Therefore, image fusion can significantly increase recognition probability and improve detection efficiency.
Export of FepA::PhoA fusion proteins to the outer membrane of Escherichia coli K-12.
Murphy, C K; Klebba, P E
1989-11-01
A library of fepA::phoA gene fusions was generated in order to study the structure and secretion of the Escherichia coli K-12 ferric enterobactin receptor, FepA. All of the fusion proteins contained various lengths of the amino-terminal portion of FepA fused in frame to the catalytic portion of bacterial alkaline phosphatase. Localization of FepA::PhoA fusion proteins in the cell envelope was dependent on the number of residues of mature FepA present at the amino terminus. Hybrids containing up to one-third of the amino-terminal portion of FepA fractionated with their periplasm, while those containing longer sequences of mature FepA were exported to the outer membrane. Outer membrane-localized fusion proteins expressed FepA sequences on the external face of the outer membrane and alkaline phosphatase moieties in the periplasmic space. From sequence determinations of the fepA::phoA fusion joints, residues within FepA which may be exposed on the periplasmic side of the outer membrane were identified.
Li, Kai; Su, Zhong-Zhen; Xu, Er-Jiao; Ju, Jin-Xiu; Meng, Xiao-Chun; Zheng, Rong-Qin
2016-04-18
To assess whether intraoperative use of contrast-enhanced ultrasound (CEUS)-CT/MR image fusion can accurately evaluate the ablative margin (AM) and guide supplementary ablation to improve the AM after hepatocellular carcinoma (HCC) ablation. Ninety-eight patients with 126 HCCs designated to undergo thermal ablation treatment were enrolled in this prospective study. CEUS-CT/MR image fusion was performed intraoperatively to evaluate whether a 5-mm AM was covered by the ablative area. Where possible, supplementary ablation was applied at sites of inadequate AM. The CEUS image quality, the time used for CEUS-CT/MR image fusion and the success rate of image fusion were recorded. Local tumor progression (LTP) was observed during follow-up. Clinical factors including AM were examined to identify risk factors for LTP. The success rate of image fusion was 96.2% (126/131), and the duration required for image fusion was 4.9 ± 2.0 (3-13) min. The CEUS image quality was good in 36.1% (53/147) and medium in 63.9% (94/147) of the cases. By supplementary ablation, 21.8% (12/55) of lesions with inadequate AMs became adequate AMs. During follow-up, there were 5 LTPs in lesions with inadequate AMs and 1 LTP in lesions with adequate AMs. Multivariate analysis showed that AM was the only independent risk factor for LTP (hazard ratio, 9.167; 95% confidence interval, 1.070-78.571; p = 0.043). CEUS-CT/MR image fusion is feasible for intraoperative use and serves as an accurate method to evaluate AMs and to guide supplementary ablation where the AM is inadequate.
NASA Astrophysics Data System (ADS)
Moonon, Altan-Ulzii; Hu, Jianwen; Li, Shutao
2015-12-01
Remote sensing image fusion is an important preprocessing technique in remote sensing image processing. In this paper, a remote sensing image fusion method based on the nonsubsampled shearlet transform (NSST) with sparse representation (SR) is proposed. Firstly, the low resolution multispectral (MS) image is upsampled and the color space is transformed from Red-Green-Blue (RGB) to Intensity-Hue-Saturation (IHS). Then, the high resolution panchromatic (PAN) image and the intensity component of the MS image are decomposed by NSST into high and low frequency coefficients. The low frequency coefficients of the PAN image and the intensity component are fused by SR with a learned dictionary. The high frequency coefficients of the intensity component and the PAN image are fused by a local energy based fusion rule. Finally, the fused result is obtained by performing the inverse NSST and the inverse IHS transform. The experimental results on IKONOS and QuickBird satellite images demonstrate that the proposed method provides better spectral quality and superior spatial information in the fused image than other remote sensing image fusion methods, both in visual effect and in objective evaluation.
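The IHS framing is easy to sketch; below, a plain intensity substitution stands in for the paper's NSST-plus-SR fusion of the intensity and PAN components. The linear IHS matrix is the variant commonly used in pansharpening, an assumption rather than the authors' exact transform.

```python
import numpy as np

# Linear IHS transform commonly used in pansharpening (an assumption here).
RGB2IHS = np.array([[1/3, 1/3, 1/3],
                    [-np.sqrt(2)/6, -np.sqrt(2)/6, np.sqrt(2)/3],
                    [1/np.sqrt(2), -1/np.sqrt(2), 0.0]])

def ihs_pansharpen(rgb, pan):
    """rgb: (H, W, 3) upsampled MS image; pan: (H, W) panchromatic image."""
    ihs = rgb @ RGB2IHS.T
    ihs[..., 0] = pan                      # substitute for the NSST + SR fusion step
    return ihs @ np.linalg.inv(RGB2IHS).T  # back to RGB
```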
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gambhir, Sanjiv; Pritha, Ray
2015-07-14
Novel double and triple fusion reporter gene constructs harboring distinct imagable reporter genes are provided, as well as applications for the use of such double and triple fusion constructs in living cells and in living animals using distinct imaging technologies.
Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D
2015-05-08
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
Infrared and visible image fusion scheme based on NSCT and low-level visual features
NASA Astrophysics Data System (ADS)
Li, Huafeng; Qiu, Hongmei; Yu, Zhengtao; Zhang, Yafei
2016-05-01
Multi-scale transform (MST) is an efficient tool for image fusion. Recently, many fusion methods have been developed based on different MSTs and have shown potential in many fields. In this paper, we propose an effective infrared and visible image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain, in which the NSCT is first employed to decompose each of the source images into a series of high frequency subbands and one low frequency subband. To improve fusion performance, we design two new activity measures for fusion of the lowpass and highpass subbands. These measures are developed based on the fact that the human visual system (HVS) perceives image quality mainly according to some of its low-level features. Then, the selection principles for the different subbands are presented based on the corresponding activity measures. Finally, the merged subbands are constructed according to the selection principles, and the final fused image is produced by applying the inverse NSCT to these merged subbands. Experimental results demonstrate the effectiveness and superiority of the proposed method over state-of-the-art fusion methods in terms of both visual effect and objective evaluation results.
Analysis of Multiallelic CNVs by Emulsion Haplotype Fusion PCR.
Tyson, Jess; Armour, John A L
2017-01-01
Emulsion-fusion PCR recovers long-range sequence information by combining products in cis from individual genomic DNA molecules. Emulsion droplets act as very numerous small reaction chambers in which different PCR products from a single genomic DNA molecule are condensed into short joint products, to unite sequences in cis from widely separated genomic sites. These products can therefore provide information about the arrangement of sequences and variants at a larger scale than established long-read sequencing methods. The method has been useful in defining the phase of variants in haplotypes, the typing of inversions, and determining the configuration of sequence variants in multiallelic CNVs. In this description we outline the rationale for the application of emulsion-fusion PCR methods to the analysis of multiallelic CNVs, and give practical details for our own implementation of the method in that context.
Nölte, Ingo S; Gerigk, Lars; Al-Zghloul, Mansour; Groden, Christoph; Kerl, Hans U
2012-03-01
Deep-brain stimulation (DBS) of the internal globus pallidus (GPi) has shown remarkable therapeutic benefits for treatment-resistant neurological disorders including dystonia and Parkinson's disease (PD). The success of the DBS is critically dependent on the reliable visualization of the GPi. The aim of the study was to evaluate promising 3.0 Tesla magnetic resonance imaging (MRI) methods for pre-stereotactic visualization of the GPi using a standard installation protocol. MRI at 3.0 T of nine healthy individuals and of one patient with PD was acquired (FLAIR, T1-MPRAGE, T2-SPACE, T2*-FLASH2D, susceptibility-weighted imaging mapping (SWI)). Image quality and visualization of the GPi for each sequence were assessed by two neuroradiologists independently using a 6-point scale. Axial, coronal, and sagittal planes of the T2*-FLASH2D images were compared. Inter-rater reliability, contrast-to-noise ratios (CNR) and signal-to-noise ratios (SNR) for the GPi were determined. For illustration, axial T2*-FLASH2D images were fused with a section schema of the Schaltenbrand-Wahren stereotactic atlas. The GPi was best and reliably visualized in axial and to a lesser degree on coronal T2*-FLASH2D images. No major artifacts in the GPi were observed in any of the sequences. SWI offered a significantly higher CNR for the GPi compared to standard T2-weighted imaging using the standard parameters. The fusion of the axial T2*-FLASH2D images and the atlas projected the GPi clearly in the boundaries of the section schema. Using a standard installation protocol at 3.0 T T2*-FLASH2D imaging (particularly axial view) provides optimal and reliable delineation of the GPi.
Li, Kai; Su, Zhongzhen; Xu, Erjiao; Huang, Qiannan; Zeng, Qingjing; Zheng, Rongqin
2017-01-19
To assess the accuracy of contrast-enhanced ultrasound (CEUS)-CT/MR image fusion in evaluating the radiofrequency ablative margin (AM) of hepatocellular carcinoma (HCC) based on a custom-made phantom model and in HCC patients. Twenty-four phantoms were randomly divided into a complete ablation group (n = 6) and an incomplete ablation group (n = 18). After radiofrequency ablation (RFA), the AM was evaluated using ultrasound (US)-CT image fusion, and the results were compared with the AM results that were directly measured in a gross specimen. CEUS-CT/MR image fusion and CT-CT/MR-MR image fusion were used to evaluate the AM in 37 tumors from 33 HCC patients who underwent RFA. The sensitivity, specificity, and accuracy of US-CT image fusion for evaluating AM in the phantom model were 93.8, 85.7 and 91.3%, respectively. The maximal thicknesses of the residual AM were 3.5 ± 2.0 mm and 3.2 ± 2.0 mm in the US-CT image fusion and gross specimen, respectively. No significant difference was observed between the US-CT image fusion and direct measurements of the AM of HCC. In the clinical study, the success rate of the AM evaluation was 100% for both CEUS-CT/MR and CT-CT/MR-MR, and the duration was 8.5 ± 2.8 min (range: 4-12 min) and 13.5 ± 4.5 min (range: 8-16 min) for CEUS-CT/MR and CT-CT/MR-MR, respectively. The sensitivity, specificity, and accuracy of CEUS-CT/MR imaging for evaluating the AM were 100.0, 80.0, and 90.0%, respectively. A phantom model composed of carrageenan gel and additives was suitable for the evaluation of HCC AM. CEUS-CT/MR image fusion can be used to evaluate HCC AM with high accuracy.
Fully Convolutional Network-Based Multifocus Image Fusion.
Guo, Xiaopeng; Nie, Rencan; Cao, Jinde; Zhou, Dongming; Qian, Wenhua
2018-07-01
As the optical lenses of cameras have limited depth of field, captured images of the same scene are not all in focus. Multifocus image fusion is an efficient technology that can synthesize an all-in-focus image from several partially focused images. Previous methods have accomplished the fusion task in spatial or transform domains; however, fusion rules remain a problem in most methods. In this letter, from the aspect of focus region detection, we propose a novel multifocus image fusion method based on a fully convolutional network (FCN) learned from synthesized multifocus images. The primary novelty of this method is that the pixel-wise focus regions are detected through a learned FCN, and the entire image, not just image patches, is exploited to train the FCN. First, we synthesize 4500 pairs of multifocus images by repeatedly applying a Gaussian filter to each image from PASCAL VOC 2012 to train the FCN. After that, a pair of source images is fed into the trained FCN, and two score maps indicating the focus property are generated. Next, an inversed score map is averaged with the other score map to produce an aggregative score map, which takes full advantage of the focus probabilities in both score maps. We apply a fully connected conditional random field (CRF) to the aggregative score map to produce and refine a binary decision map for the fusion task. Finally, we exploit a weighted strategy based on the refined decision map to produce the fused image. To demonstrate the performance of the proposed method, we compare its fused results with several state-of-the-art methods on both a gray data set and a color data set. Experimental results show that the proposed method achieves superior fusion performance in both human visual quality and objective assessment.
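A sketch of the final steps, starting from the two per-pixel score maps an FCN would output (the FCN itself and the CRF refinement are omitted; a plain threshold stands in for the refined binary decision map).

```python
import numpy as np

def fuse_from_scores(img_a, img_b, score_a, score_b):
    """score_a, score_b: per-pixel focus probabilities for the two sources."""
    agg = 0.5 * (score_a + (1.0 - score_b))     # aggregative score map
    decision = (agg > 0.5).astype(img_a.dtype)  # threshold stands in for CRF refinement
    return decision * img_a + (1.0 - decision) * img_b
```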
BP fusion model for the detection of oil spills on the sea by remote sensing
NASA Astrophysics Data System (ADS)
Chen, Weiwei; An, Jubai; Zhang, Hande; Lin, Bin
2003-06-01
Oil spills are a very serious form of marine pollution in many countries. To detect and identify oil spills on the sea by remote sensing, researchers must analyze remote sensing images. For the detection of oil spills on the sea, edge detection is an important technology in image processing, and many edge detection algorithms have been developed, each with its own advantages and disadvantages. Based on the primary requirements for edge detection in oil spill images - computation time and detection accuracy - we developed a fusion model. The model employs a BP neural network to fuse the detection results of simple operators. We selected a BP neural network as the fusion technology because the relation between the edge gray levels produced by simple operators and the image's true edge gray level is nonlinear, and BP neural networks are good at solving nonlinear identification problems. We therefore trained a BP neural network on several oil spill images, then applied the BP fusion model to edge detection on other oil spill images and obtained good results. In this paper, the detection results of several gradient operators and the Laplacian operator are also compared with the result of the BP fusion model to analyze the fusion effect. Finally, the paper points out that the fusion model achieves higher accuracy and higher speed in oil spill image edge detection.
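A hedged sketch of the fusion model: a few simple operators each produce a per-pixel edge response, and a small back-propagation network learns the mapping from the response vector to the true edge strength. The choice of operators, the network size and the scikit-learn stand-in for a hand-rolled BP net are our assumptions.

```python
import numpy as np
from scipy.ndimage import laplace, prewitt, sobel
from sklearn.neural_network import MLPRegressor

def operator_responses(img):
    # Per-pixel responses of three simple edge operators -> (pixels, 3).
    feats = [np.hypot(sobel(img, 0), sobel(img, 1)),
             np.hypot(prewitt(img, 0), prewitt(img, 1)),
             np.abs(laplace(img))]
    return np.stack([f.ravel() for f in feats], axis=1)

def train_fusion(train_imgs, true_edge_maps):
    # Fit a small BP network mapping operator responses to true edge strength.
    X = np.vstack([operator_responses(im) for im in train_imgs])
    y = np.concatenate([e.ravel() for e in true_edge_maps])
    return MLPRegressor(hidden_layer_sizes=(8,), max_iter=500).fit(X, y)

def fuse_edges(model, img):
    return model.predict(operator_responses(img)).reshape(img.shape)
```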
Multisource image fusion method using support value transform.
Zheng, Sheng; Shi, Wen-Zhong; Liu, Jian; Zhu, Guang-Xi; Tian, Jin-Wen
2007-07-01
With the development of numerous imaging sensors, many images can be simultaneously captured by various sensors. However, there are many scenarios where no single sensor gives the complete picture. Image fusion is an important approach to solving this problem and produces a single image which preserves all relevant information from a set of different sensors. In this paper, we propose a new image fusion method using the support value transform, which uses support values to represent the salient features of an image. This is based on the fact that, in support vector machines (SVMs), data with larger support values have a physical meaning in the sense that they reveal the relatively greater importance of those data points for contributing to the SVM model. The mapped least squares SVM (mapped LS-SVM) is used to efficiently compute the support values of an image. The support value analysis is developed by using a series of multiscale support value filters, which are obtained by filling zeros into the basic support value filter deduced from the mapped LS-SVM to match the resolution of the desired level. Compared with widely used image fusion methods, such as the Laplacian pyramid and discrete wavelet transform methods, the proposed method is an undecimated transform-based approach. Fusion experiments are undertaken on multisource images. The results demonstrate that the proposed approach is effective and superior to conventional image fusion methods in terms of quantitative fusion evaluation indexes, such as the quality of visual information (Q(AB/F)), mutual information, etc.
Application of Sensor Fusion to Improve Uav Image Classification
NASA Astrophysics Data System (ADS)
Jabari, S.; Fathollahi, F.; Zhang, Y.
2017-08-01
Image classification is one of the most important tasks of remote sensing projects, including those based on UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality, which in turn increases the accuracy of image classification. Here, we tested two sensor fusion configurations using a panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to those acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies than the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on board UAV missions and performing image fusion can help achieve higher quality images and, accordingly, higher accuracy classification results.
Change detection in synthetic aperture radar images based on image fusion and fuzzy clustering.
Gong, Maoguo; Zhou, Zhiqiang; Ma, Jingjing
2012-04-01
This paper presents an unsupervised distribution-free change detection approach for synthetic aperture radar (SAR) images based on an image fusion strategy and a novel fuzzy clustering algorithm. The image fusion technique is introduced to generate a difference image by using complementary information from a mean-ratio image and a log-ratio image. In order to restrain the background information and enhance the information of changed regions in the fused difference image, wavelet fusion rules based on an average operator and minimum local area energy are chosen to fuse the wavelet coefficients for the low-frequency band and the high-frequency bands, respectively. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. It incorporates spatial context information in a novel fuzzy way, to enhance the changed information and reduce the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio operator and the mean-ratio operator and gains a better performance. The change detection results obtained by the improved fuzzy clustering algorithm exhibit lower error than its predecessors.
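A minimal sketch of the two difference-image operators the method fuses, assuming co-registered SAR intensity images img1 and img2; the wavelet fusion of the two maps and the fuzzy clustering stage are not reproduced here, and the mean-ratio form below is the standard one from the change-detection literature rather than a quote from the paper:

import numpy as np
from scipy.ndimage import uniform_filter

def log_ratio(img1, img2, eps=1e-6):
    # Log-ratio difference image; eps guards against division by zero.
    return np.abs(np.log((img2 + eps) / (img1 + eps)))

def mean_ratio(img1, img2, size=3, eps=1e-6):
    # Local means over a small window, then the classical mean-ratio
    # detector 1 - min(m1/m2, m2/m1), large where the local means differ.
    m1 = uniform_filter(img1.astype(float), size) + eps
    m2 = uniform_filter(img2.astype(float), size) + eps
    return 1.0 - np.minimum(m1 / m2, m2 / m1)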
chimeraviz: a tool for visualizing chimeric RNA.
Lågstad, Stian; Zhao, Sen; Hoff, Andreas M; Johannessen, Bjarne; Lingjærde, Ole Christian; Skotheim, Rolf I
2017-09-15
Advances in high-throughput RNA sequencing have enabled more efficient detection of fusion transcripts, but the technology and associated software used for fusion detection from sequencing data often yield a high false discovery rate. Good prioritization of the results is important, and this can be helped by a visualization framework that automatically integrates RNA data with known genomic features. Here we present chimeraviz, a Bioconductor package that automates the creation of chimeric RNA visualizations. The package supports input from nine different fusion-finder tools: deFuse, EricScript, InFusion, JAFFA, FusionCatcher, FusionMap, PRADA, SOAPfuse and STAR-FUSION. chimeraviz is an R package available via Bioconductor (https://bioconductor.org/packages/release/bioc/html/chimeraviz.html) under Artistic-2.0. Source code and support are available at GitHub (https://github.com/stianlagstad/chimeraviz). rolf.i.skotheim@rr-research.no. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
Multimodality Image Fusion-Guided Procedures: Technique, Accuracy, and Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abi-Jaoudeh, Nadine, E-mail: naj@mail.nih.gov; Kruecker, Jochen, E-mail: jochen.kruecker@philips.com; Kadoury, Samuel, E-mail: samuel.kadoury@polymtl.ca
2012-10-15
Personalized therapies play an increasingly critical role in cancer care: Image guidance with multimodality image fusion facilitates the targeting of specific tissue for tissue characterization and plays a role in drug discovery and optimization of tailored therapies. Positron-emission tomography (PET), magnetic resonance imaging (MRI), and contrast-enhanced computed tomography (CT) may offer additional information not otherwise available to the operator during minimally invasive image-guided procedures, such as biopsy and ablation. With use of multimodality image fusion for image-guided interventions, navigation with advanced modalities does not require the physical presence of the PET, MRI, or CT imaging system. Several commercially available methods of image fusion and device navigation are reviewed along with an explanation of common tracking hardware and software. An overview of current clinical applications for multimodality navigation is provided.
Adaptive polarization image fusion based on regional energy dynamic weighted average
NASA Astrophysics Data System (ADS)
Zhao, Yong-Qiang; Pan, Quan; Zhang, Hong-Cai
2005-11-01
According to the principle of polarization imaging and the relation between the Stokes parameters and the degree of linear polarization, polarized images contain much redundant and complementary information. Since man-made and natural objects can be easily distinguished in images of the degree of linear polarization, and images of the Stokes parameters contain rich detailed information about the scene, clutter in the images can be removed efficiently while the detailed information is maintained by combining these images. An adaptive polarization image fusion algorithm based on regional-energy dynamic weighted averaging is proposed in this paper to combine these images. Through an experiment and simulations, most clutter is removed by this algorithm. The fusion method is applied under different light conditions in simulation, and the influence of lighting conditions on the fusion results is analyzed.
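A sketch of the quantities this method combines, assuming intensity images taken through a linear polarizer at 0, 45 and 90 degrees; the regional-energy weighting rule below is a plausible reading of the title, and the paper's exact rule may differ:

import numpy as np
from scipy.ndimage import uniform_filter

def stokes_dolp(i0, i45, i90):
    s0 = i0 + i90                    # total intensity
    s1 = i0 - i90                    # linear polarization, 0/90 component
    s2 = 2.0 * i45 - s0              # linear polarization, 45/135 component
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-6)
    return s0, dolp

def regional_energy_fuse(img_a, img_b, size=5):
    # Dynamic weights from local (regional) energy in a small window.
    ea = uniform_filter(img_a.astype(float)**2, size)
    eb = uniform_filter(img_b.astype(float)**2, size)
    w = ea / np.maximum(ea + eb, 1e-6)
    return w * img_a + (1.0 - w) * img_b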
Data fusion of Landsat TM and IRS images in forest classification
Guangxing Wang; Markus Holopainen; Eero Lukkarinen
2000-01-01
Data fusion of Landsat TM images and Indian Remote Sensing satellite panchromatic images (IRS-1C PAN) was studied and compared to the use of TM or IRS images alone. The aim was to combine the high spatial resolution of IRS-1C PAN with the high spectral resolution of Landsat TM images using a data fusion algorithm. The ground truth of the study was based on a sample of 1,020...
Ravì, Daniele; Szczotka, Agnieszka Barbara; Shakir, Dzhoshkun Ismail; Pereira, Stephen P; Vercauteren, Tom
2018-06-01
Probe-based confocal laser endomicroscopy (pCLE) is a recent imaging modality that allows performing in vivo optical biopsies. The design of pCLE hardware, and its reliance on an optical fibre bundle, fundamentally limits the image quality: a few tens of thousands of fibres, each acting as the equivalent of a single-pixel detector, are assembled into a single fibre bundle. Video registration techniques can be used to estimate high-resolution (HR) images by exploiting the temporal information contained in a sequence of low-resolution (LR) images. However, the alignment of LR frames, required for the fusion, is computationally demanding and prone to artefacts. In this work, we propose a novel synthetic data generation approach to train exemplar-based Deep Neural Networks (DNNs). HR pCLE images with enhanced quality are recovered by models trained on pairs of estimated HR images (generated by the video registration algorithm) and realistic synthetic LR images. The performance of three different state-of-the-art DNN techniques was analysed on a Smart Atlas database of 8806 images from 238 pCLE video sequences. The results were validated through an extensive image quality assessment that takes into account different quality scores, including a Mean Opinion Score (MOS). Results indicate that the proposed solution produces an effective improvement in the quality of the reconstructed image. The proposed training strategy and associated DNNs allow us to perform convincing super-resolution of pCLE images.
Soyama, Takeshi; Sakuhara, Yusuke; Kudo, Kohsuke; Abo, Daisuke; Wang, Jeff; Ito, Yoichi M; Hasegawa, Yu; Shirato, Hiroki
2016-07-01
This preliminary study compared ultrasonography-computed tomography (US-CT) fusion imaging and conventional ultrasonography (US) for accuracy and time required for target identification using a combination of real phantoms and sets of digitally modified computed tomography (CT) images (digital/real hybrid phantoms). In this randomized prospective study, 27 spheres visible on B-mode US were placed at depths of 3.5, 8.5, and 13.5 cm (nine spheres each). All 27 spheres were digitally erased from the CT images, and a radiopaque sphere was digitally placed at each of the 27 locations to create 27 different sets of CT images. Twenty clinicians were instructed to identify the sphere target using US alone and fusion imaging. The accuracy of target identification of the two methods was compared using McNemar's test. The mean time required for target identification and error distances were compared using paired t tests. At all three depths, target identification was more accurate and the mean time required for target identification was significantly less with US-CT fusion imaging than with US alone, and the mean error distances were also shorter with US-CT fusion imaging. US-CT fusion imaging was superior to US alone in terms of accurate and rapid identification of target lesions.
A robust color image fusion for low light level and infrared images
NASA Astrophysics Data System (ADS)
Liu, Chao; Zhang, Xiao-hui; Hu, Qing-ping; Chen, Yong-kang
2016-09-01
Low light level and infrared color fusion technology has achieved great success in the field of night vision. The technology is designed to make hot targets in the fused image pop out in more intense colors, to represent background details with a color appearance as close to nature as possible, and to improve the ability to discover, detect and identify targets. Low light level images have large noise under low illumination, and existing color fusion methods are easily influenced by noise in the low light level channel. To be explicit, when the low light level image noise is very large, the quality of the fused image decreases significantly, and even targets in the infrared image can be submerged by the noise. This paper proposes an adaptive color night vision technology in which noise evaluation parameters of the low light level image are introduced into the fusion process, improving the robustness of the color fusion. The color fusion results remain very good in low-light situations, which shows that this method can effectively improve the quality of the low light level and infrared fused image under low illumination conditions.
Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera.
Qu, Yufu; Huang, Jianyu; Zhang, Xuan
2018-01-14
In order to reconstruct three-dimensional (3D) structures from an image sequence captured by unmanned aerial vehicles' camera (UAVs) and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable.
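A sketch of the key-image selection idea: the 2D feature points of each image are summarized by their centroid plus two points along the principal axes, and images are then compared through these compact three-point summaries. This is one plausible reading of "three principal component points"; the paper's exact construction and matching criteria may differ.

import numpy as np

def principal_component_points(points):
    # points: (N, 2) array of feature point coordinates in one image.
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    # Two points along the principal axes, scaled by the spread along each
    # axis, plus the centroid: three representative points per image.
    p_major = centroid + np.sqrt(eigvals[1]) * eigvecs[:, 1]
    p_minor = centroid + np.sqrt(eigvals[0]) * eigvecs[:, 0]
    return np.vstack([centroid, p_major, p_minor])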
NASA Astrophysics Data System (ADS)
Mu, Wei; Qi, Jin; Lu, Hong; Schabath, Matthew; Balagurunathan, Yoganand; Tunali, Ilke; Gillies, Robert James
2018-02-01
Purpose: To investigate the ability of complementary information provided by the fusion of PET/CT images to predict immunotherapy response in non-small cell lung cancer (NSCLC) patients. Materials and methods: We collected 64 patients diagnosed with primary NSCLC treated with anti-PD-1 checkpoint blockade. Using PET/CT images, fused images were created following multiple methodologies, resulting in up to 7 different images for the tumor region. Quantitative image features were extracted from the primary images (PET/CT) and the fused images: 195 features from the primary images and 1235 features from the fusion images. Three clinical characteristics were also analyzed. We then used support vector machine (SVM) classification models to identify discriminant features that predict immunotherapy response at baseline. Results: An SVM built with 87 fusion features and 13 primary PET/CT features had an accuracy and area under the ROC curve (AUROC) on the validation dataset of 87.5% and 0.82, respectively, compared to 78.12% and 0.68 for a model built with 113 original PET/CT features. Conclusion: The fusion features show better ability to predict immunotherapy response than individual image features.
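A minimal scikit-learn sketch of the classification stage: radiomic features from the primary and fused images feed a linear SVM that predicts responder vs. non-responder. Feature extraction itself is outside this snippet; X and y below are random placeholders, not the study's data.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 100))       # placeholder feature matrix
y = rng.integers(0, 2, size=64)      # placeholder response labels

# Standardize features, then fit a linear SVM; score with cross-validated AUROC.
model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print(cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())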
Robbins, Alexa; Zarate, Yuri A; Hartzell, Larry D
2018-01-01
This report describes the presentation of a newborn male with circumferential tongue-palate fusion associated with cleft palate and alveolar bands. After lysis of the intraoral adhesions, the patient was diagnosed with Pierre Robin sequence. A family history of cleft lip and palate was noted, and interferon regulatory factor 6 (IRF6) sequencing revealed a heterozygous variant, confirming the diagnosis of van der Woude syndrome. The disruption of IRF6 resulted in abnormal orofacial development, including micrognathia, intraoral adhesions and tongue-palate fusion, in turn resulting in glossoptosis with airway obstruction and cleft palate.
Minami, Yasunori; Minami, Tomohiro; Hagiwara, Satoru; Ida, Hiroshi; Ueshima, Kazuomi; Nishida, Naoshi; Murakami, Takamichi; Kudo, Masatoshi
2018-05-01
To assess the clinical feasibility of US-US image overlay fusion, with evaluation of the ablative margin, in radiofrequency ablation (RFA) for hepatocellular carcinoma (HCC). Fifty-three patients with 68 HCCs measuring 0.9-4.0 cm who underwent RFA guided by US-US image overlay fusion were included in this retrospective study. By overlaying pre- and postoperative US, the tumor image could be projected onto the ablative hyperechoic zone, so the ablative margin could be shown three-dimensionally during the RFA procedure. US-US image overlay was compared to dynamic CT a few days after RFA for assessment of early treatment response. The accuracy of the graded response was calculated, and the performance of US-US image overlay fusion was compared with that of CT using a kappa agreement test. Technically effective ablation was achieved in a single session, and a 5-mm margin was obtained for 59 HCCs (86.8%) on CT. The response with US-US image overlay correctly predicted early CT evaluation with an accuracy of 92.6% (63/68) (κ = 0.67; 95% CI: 0.39-0.95). US-US image overlay fusion can be proposed as a feasible guidance method in RFA with a safety margin, and it predicts the early treatment response with high accuracy. • US-US image overlay fusion visualizes the ablative margin during the RFA procedure. • Visualizing the margin during the procedure can prompt immediate complementary treatment. • US image fusion correlates with the results of early evaluation CT.
[Experience of Fusion image guided system in endonasal endoscopic surgery].
Wen, Jingying; Zhen, Hongtao; Shi, Lili; Cao, Pingping; Cui, Yonghua
2015-08-01
To review endonasal endoscopic surgeries aided by the Fusion image guided system, and to explore the application value of the Fusion image guided system in endonasal endoscopic surgeries. Retrospective research. Sixty cases of endonasal endoscopic surgery aided by the Fusion image guided system were analysed, including chronic rhinosinusitis with polyps (n = 10), fungal sinusitis (n = 5), endoscopic optic nerve decompression (n = 16), inverted papilloma of the paranasal sinus (n = 9), ossifying fibroma of the sphenoid bone (n = 1), malignancy of the paranasal sinus (n = 9), cerebrospinal fluid leak (n = 5), hemangioma of the orbital apex (n = 2) and orbital reconstruction (n = 3). All sixty endonasal endoscopic surgeries were completed successfully without any complications. The Fusion image guided system can help to identify the ostium of a paranasal sinus, the lamina papyracea and the skull base. Fused CT-CTA images or fused MR-MRA images can help to localize the optic nerve or internal carotid artery. Fused CT-MR images can help to delineate the extent of a tumor. Preoperative preparation for the image guided system took (7.13 ± 1.358) minutes, and the surgical navigation accuracy reached less than 1 mm once operators were proficient. No device localization problems occurred due to signal blockage or loosening of the head set. The Fusion image guided system makes endonasal endoscopic surgery truly minimally invasive and precise. It requires little preoperative preparation time, has high surgical navigation accuracy, improves surgical safety and reduces surgical complications.
Dong, Yi; Wang, Wen-Ping; Mao, Feng; Ji, Zheng-Biao; Huang, Bei-Jian
2016-04-01
The aim of this study is to explore the value of volume navigation image fusion-assisted contrast-enhanced ultrasound (CEUS) in the detection and radiofrequency ablation guidance of hepatocellular carcinomas (HCCs) that were undetectable on conventional ultrasound. From May 2012 to May 2014, 41 patients with 49 HCCs were included in this study. All lesions were detected by dynamic magnetic resonance imaging (MRI) and planned for radiofrequency ablation but were undetectable on conventional ultrasound. After a bolus injection of 2.4 ml SonoVue® (Bracco, Italy), a LOGIQ E9 ultrasound system with volume navigation (version R1.0.5, GE Healthcare, Milwaukee, WI, USA) was used to fuse CEUS and MRI images. The fusion time, fusion success rate, lesion enhancement pattern, and detection rate were analyzed. Image fusion was conducted successfully in all 49 HCCs; the technical success rate was 100%. The average fusion time was (9.2 ± 2.1) min (6-12 min). The mean diameter of the HCCs was 25.2 ± 5.3 mm (mean ± SD), and the mean depth was 41.8 ± 17.2 mm. The detection rate of HCCs using CEUS/MRI image fusion (95.9%, 47/49) was significantly higher than with CEUS alone (42.9%, 21/49) (P < 0.05). For small HCCs (diameter, 1-2 cm), the detection rate using image fusion (96.9%, 32/33) was also significantly higher than with CEUS alone (18.2%, 6/33) (P < 0.01). All HCCs displayed a rapid wash-in pattern in the arterial phase of CEUS. Image fusion combining CEUS and MRI is a promising technique to improve the detection, precise localization, and accurate diagnosis of HCCs undetectable on conventional ultrasound, especially small and atypical HCCs. © 2015 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.
Dynamic image fusion and general observer preference
NASA Astrophysics Data System (ADS)
Burks, Stephen D.; Doe, Joshua M.
2010-04-01
Recent developments in image fusion give the user community many options for ways of presenting the imagery to an end-user. Individuals at the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate have developed an electronic system that allows users to quickly and efficiently determine optimal image fusion algorithms and color parameters based upon collected imagery and videos from environments that are typical to observers in a military environment. After performing multiple multi-band data collections in a variety of military-like scenarios, different waveband, fusion algorithm, image post-processing, and color choices are presented to observers as an output of the fusion system. The observer preferences can give guidelines as to how specific scenarios should affect the presentation of fused imagery.
Suckau, Detlev; Resemann, Anja
2009-12-01
The ability to match Top-Down protein sequencing (TDS) results obtained by MALDI-TOF to protein sequences by classical protein database searching was evaluated in this work. These analyses yielded the protein identity, the simultaneous assignment of the N- and C-termini, and protein sequences of up to 70 residues from either terminus. In combination with de novo sequencing using the MALDI-TDS data, even fusion proteins were assigned, and the detailed sequence around the fusion site was elucidated. MALDI-TDS allowed protein sequences to be matched quickly and efficiently and recombinant protein structures, in particular protein termini, to be validated on the level of undigested proteins.
Enhanced EDX images by fusion of multimodal SEM images using pansharpening techniques.
Franchi, G; Angulo, J; Moreaud, M; Sorbier, L
2018-01-01
The goal of this paper is to explore the potential interest of image fusion in the context of multimodal scanning electron microscope (SEM) imaging. In particular, we aim at merging backscattered electron images, which usually have a high spatial resolution but do not provide enough discriminative information to physically classify the nature of the sample, with energy-dispersive X-ray spectroscopy (EDX) images, which have discriminative information but a lower spatial resolution. The produced images are named enhanced EDX. To achieve this goal, we have compared the results obtained with classical pansharpening techniques for image fusion with an original approach tailored for multimodal SEM fusion of information. Quantitative assessment is obtained by means of two SEM images and a simulated dataset produced by software based on PENELOPE. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
Framework for 2D-3D image fusion of infrared thermography with preoperative MRI.
Hoffmann, Nico; Weidner, Florian; Urban, Peter; Meyer, Tobias; Schnabel, Christian; Radev, Yordan; Schackert, Gabriele; Petersohn, Uwe; Koch, Edmund; Gumhold, Stefan; Steiner, Gerald; Kirsch, Matthias
2017-11-27
Multimodal medical image fusion combines information from one or more images in order to improve diagnostic value. While previous applications mainly focus on merging images from computed tomography, magnetic resonance imaging (MRI), ultrasound and single-photon emission computed tomography, we propose a novel approach for the registration and fusion of preoperative 3D MRI with intraoperative 2D infrared thermography. Image-guided neurosurgeries are based on neuronavigation systems, which further allow us to track the position and orientation of arbitrary cameras. Hereby, we are able to relate the 2D coordinate system of the infrared camera to the 3D MRI coordinate system. The registered image data are then combined by calibration-based image fusion in order to map our intraoperative 2D thermographic images onto the respective brain surface recovered from preoperative MRI. In extensive accuracy measurements, we found that the proposed framework achieves a mean accuracy of 2.46 mm.
Walter, Uwe; Müller, Jan-Uwe; Rösche, Johannes; Kirsch, Michael; Grossmann, Annette; Benecke, Reiner; Wittstock, Matthias; Wolters, Alexander
2016-03-01
A combination of preoperative magnetic resonance imaging (MRI) with real-time transcranial ultrasound, known as fusion imaging, may improve postoperative control of deep brain stimulation (DBS) electrode location. Fusion imaging, however, employs a weak magnetic field for tracking the position of the ultrasound transducer and the patient's head. Here we assessed its feasibility, safety, and clinical relevance in patients with DBS. Eighteen imaging sessions were conducted in 15 patients (7 women; aged 52.4 ± 14.4 y) with DBS of the subthalamic nucleus (n = 6), globus pallidus interna (n = 5), ventro-intermediate (n = 3), or anterior (n = 1) thalamic nucleus and clinically suspected lead displacement. The minimum distance between the DBS generator and the magnetic field transmitter was kept at 65 cm. The pre-implantation MRI dataset was loaded into the ultrasound system for the fusion imaging examination. The DBS lead position was rated using validated criteria. Generator DBS parameters and the neurological state of patients were monitored. Magnetic resonance-ultrasound fusion imaging and volume navigation were feasible in all cases and provided real-time imaging of the DBS lead and its location within the superimposed magnetic resonance images. Of 35 assessed lead locations, 30 were rated optimal, three suboptimal, and two displaced. In two cases, electrodes were re-implanted after confirming their inappropriate location on computed tomography (CT) scan. No influence of fusion imaging on the clinical state of patients, or on DBS implantable pulse generator function, was found. Magnetic resonance-ultrasound real-time fusion imaging of DBS electrodes is safe with distinct precautions and improves assessment of electrode location. It may lower the need for repeated CT or MRI scans in DBS patients. © 2015 International Parkinson and Movement Disorder Society.
Schulz-Wendtland, Rüdiger; Jud, Sebastian M.; Fasching, Peter A.; Hartmann, Arndt; Radicke, Marcus; Rauh, Claudia; Uder, Michael; Wunderle, Marius; Gass, Paul; Langemann, Hanna; Beckmann, Matthias W.; Emons, Julius
2017-01-01
Aim: The combination of different imaging modalities through the use of fusion devices promises significant diagnostic improvement for breast pathology. The aim of this study was to evaluate the image quality and clinical feasibility of a prototype fusion device (fusion prototype) constructed from a standard tomosynthesis mammography unit and a standard 3D ultrasound probe using a new method of breast compression. Materials and Methods: Imaging was performed on 5 mastectomy specimens from patients with confirmed DCIS or invasive carcinoma (BI-RADS™ 6). For the preclinical fusion prototype, an ABVS system ultrasound probe from an Acuson S2000 was integrated into a MAMMOMAT Inspiration (both Siemens Healthcare Ltd) and, with the aid of a newly developed compression plate, digital mammogram and automated 3D ultrasound images were obtained. Results: The quality of the digital mammogram images produced by the fusion prototype was comparable to those produced using conventional compression. The newly developed compression plate did not influence the applied x-ray dose. The method was not more labour intensive or time-consuming than conventional mammography. From the technical perspective, fusion of the two modalities was achievable. Conclusion: In this study, using only a few mastectomy specimens, the fusion of an automated 3D ultrasound machine with a standard mammography unit delivered images of comparable quality to conventional mammography. The device allows simultaneous ultrasound, the second important imaging modality in complementary breast diagnostics, without increasing examination time or requiring additional staff. PMID:28713173
NASA Technical Reports Server (NTRS)
Rahman, Zia-ur
2005-01-01
The purpose of this research was to develop enhancement and multi-sensor fusion algorithms and techniques to make it safer for the pilot to fly in what would normally be considered Instrument Flight Rules (IFR) conditions, where pilot visibility is severely restricted due to fog, haze or other weather phenomena. We proposed to use the non-linear Multiscale Retinex (MSR) as the basic driver for developing an integrated enhancement and fusion engine. When we started this research, the MSR was being applied primarily to grayscale imagery such as medical images, or to three-band color imagery such as that produced in consumer photography; it was not, however, being applied to other imagery such as that produced by infrared image sources. We felt that by using the MSR algorithm in conjunction with multiple imaging modalities such as long-wave infrared (LWIR), short-wave infrared (SWIR), and visible spectrum (VIS) imagery, we could substantially improve over the then state-of-the-art enhancement algorithms, especially in poor visibility conditions. We proposed the following tasks: 1) investigate the effects of applying the MSR to LWIR and SWIR images, which consisted of optimizing the algorithm in terms of surround scales and weights for these spectral bands; 2) fuse the LWIR and SWIR images with the VIS images using the MSR framework to determine the best possible representation of the desired features; 3) evaluate different mixes of LWIR, SWIR and VIS bands for maximum fog and haze reduction, and low light level compensation; 4) modify the existing algorithms to work with video sequences. Over the course of the 3-year research period, we were able to accomplish these tasks and report on them at various internal presentations at NASA Langley Research Center, and in presentations and publications elsewhere. A description of the work performed under the tasks is provided in Section 2. The complete list of relevant publications during the research period is provided in Section 5. This research also resulted in the generation of intellectual property.
A sequence-based survey of the complex structural organization of tumor genomes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, Colin; Raphael, Benjamin J.; Volik, Stanislav
2008-04-03
The genomes of many epithelial tumors exhibit extensive chromosomal rearrangements. All classes of genome rearrangements can be identified using End Sequencing Profiling (ESP), which relies on paired-end sequencing of cloned tumor genomes. In this study, brain, breast, ovary and prostate tumors along with three breast cancer cell lines were surveyed with ESP, yielding the largest available collection of sequence-ready tumor genome breakpoints and providing evidence that some rearrangements may be recurrent. Sequencing and fluorescence in situ hybridization (FISH) confirmed translocations and complex tumor genome structures that include coamplification and packaging of disparate genomic loci with associated molecular heterogeneity. Comparison of the tumor genomes suggests recurrent rearrangements. Some are likely to be novel structural polymorphisms, whereas others may be bona fide somatic rearrangements. A recurrent fusion transcript in breast tumors and a constitutional fusion transcript resulting from a segmental duplication were identified. Analysis of end sequences for single nucleotide polymorphisms (SNPs) revealed candidate somatic mutations and an elevated rate of novel SNPs in an ovarian tumor. These results suggest that the genomes of many epithelial tumors may be far more dynamic and complex than previously appreciated and that genomic fusions, including fusion transcripts and proteins, may be common, possibly yielding tumor-specific biomarkers and therapeutic targets.
Analysis of a MULE-cyanide hydratase gene fusion in Verticillium dahliae
USDA-ARS?s Scientific Manuscript database
The genome of the phytopathogenic fungus Verticillium dahliae encodes numerous Class II “cut-and-paste” transposable elements, including those of a small group of MULE transposons. We have previously identified a fusion event between a MULE transposon sequence and sequence encoding a cyanide hydrata...
Fusion imaging: a novel staging modality in testis cancer.
Sterbis, Joseph R; Rice, Kevin R; Javitt, Marcia C; Schenkman, Noah S; Brassell, Stephen A
2010-11-05
Computed tomography and chest radiographs provide the standard imaging for staging, treatment, and surveillance of testicular germ cell neoplasms. Positron emission tomography has recently been utilized for staging, but is somewhat limited in its ability to provide anatomic localization. Fusion imaging combines the metabolic information provided by positron emission tomography with the anatomic precision of computed tomography. To the best of our knowledge, this represents the first study of the effectiveness of fusion imaging in the evaluation of patients with testis cancer. A prospective study of 49 patients presenting to Walter Reed Army Medical Center with testicular cancer from 2003 to 2009 was performed. Fusion imaging was compared with conventional imaging, tumor markers, pathologic results, and clinical follow-up. There were 14 true positives, 33 true negatives, 1 false positive, and 1 false negative. Sensitivity, specificity, positive predictive value, and negative predictive value were 93.3, 97.0, 93.3, and 97.0%, respectively. In 11 patient scenarios, fusion imaging differed from conventional imaging. Utility was found in superior lesion detection compared to helical computed tomography due to anatomical/functional image co-registration, detection of micrometastasis in lymph nodes (pathologic nodes < 1 cm), surveillance for recurrence post-chemotherapy, differentiating fibrosis from active disease in nodes < 2.5 cm, and acting as a quality assurance measure to computed tomography alone. In addition to demonstrating a sensitivity and specificity comparable or superior to conventional imaging, fusion imaging shows promise in providing additive data that may assist in clinical decision-making.
Kang, Tae Wook; Lee, Min Woo; Song, Kyoung Doo; Kim, Mimi; Kim, Seung Soo; Kim, Seong Hyun; Ha, Sang Yun
2017-01-01
To assess whether contrast-enhanced ultrasonography (CEUS) with Sonazoid can improve lesion conspicuity and the feasibility of percutaneous biopsies for focal hepatic lesions invisible on fusion imaging of real-time ultrasonography (US) with computed tomography/magnetic resonance images, and to evaluate its impact on clinical decision making. The Institutional Review Board approved this retrospective study. Between June 2013 and January 2015, 711 US-guided percutaneous biopsies were performed for focal hepatic lesions. Biopsies were performed using CEUS for guidance if lesions were invisible on fusion imaging. We retrospectively evaluated the number of target lesions initially invisible on fusion imaging that became visible after applying CEUS, using a 4-point scale. Technical success rates of biopsies were evaluated based on histopathological results. In addition, the occurrence of changes in clinical decision making was assessed. Among 711 patients, 16 patients (2.3%) were included in the study. The median size of the target lesions was 1.1 cm (range, 0.5-1.9 cm) on pre-procedural imaging. After CEUS, 15 of 16 (93.8%) focal hepatic lesions were visualized. The conspicuity score was significantly increased after adding CEUS, as compared to that on fusion imaging (p < 0.001). The technical success rate of biopsy was 87.5% (14/16). After biopsy, there were changes in clinical decision making for 11 of 16 patients (68.8%). The addition of CEUS improved the conspicuity of focal hepatic lesions invisible on fusion imaging. This dual guidance using CEUS and fusion imaging may affect patient management via changes in clinical decision making.
Wobser, Hella; Wiest, Reiner; Salzberger, Bernd; Wohlgemuth, Walter Alexander; Stroszczynski, Christian; Jung, Ernst-Michael
2014-01-01
To evaluate the treatment response of hepatocellular carcinoma (HCC) after transarterial chemoembolization (TACE) with a new real-time image fusion technique combining contrast-enhanced ultrasound (CEUS) with multi-slice computed tomography (CT), in comparison to conventional post-interventional follow-up. 40 patients with HCC (26 male, ages 46-81 years) were evaluated 24 hours after TACE using CEUS with ultrasound volume navigation and image fusion with CT, compared to non-enhanced CT and follow-up contrast-enhanced CT after 6-8 weeks. Reduction of tumor vascularization to less than 25% was regarded as "successful" treatment, whereas reduction to levels >25% was considered a "partial" treatment response. Homogeneous lipiodol retention was regarded as successful treatment on non-enhanced CT. Post-interventional image fusion of CEUS with CT was feasible in all 40 patients. In 24 patients (24/40), post-interventional image fusion with CEUS revealed residual tumor vascularity, which was confirmed by contrast-enhanced CT 6-8 weeks later in 24/24 patients. In 16 patients (16/40), post-interventional image fusion with CEUS demonstrated successful treatment, but follow-up CT detected residual viable tumor in 6 of these 16. Non-enhanced CT did not identify any case of treatment failure. Image fusion with CEUS assessed treatment efficacy with a specificity of 100%, a sensitivity of 80% and a positive predictive value of 1 (negative predictive value 0.63). Image fusion of CEUS with CT allows a reliable, highly specific post-interventional evaluation of embolization response with good sensitivity and without any further radiation exposure. It can detect residual viable tumor at an early stage, prompting close patient monitoring or re-treatment.
Eliciting an antibody response against a recombinant TSH containing fusion protein.
Mard-Soltani, Maysam; Rasaee, Mohamad Javad; Sheikhi, AbdolKarim; Hedayati, Mehdi
2017-01-01
Designing novel antigens to raise specific antibodies for Thyroid Stimulating Hormone (TSH) detection is of great significance. A novel fusion protein consisting of the C-terminal sequence of the TSH beta subunit and a fusion sequence was designed and produced for rabbit immunization. Thereafter, the produced antibodies were purified and characterized for TSH detection. Our results indicate that the produced antibody is capable of sensitive and specific detection of TSH with low cross-reactivity. This study underscores the applicability of the designed fusion protein for specific and sensitive polyclonal antibody production and the importance of selecting an amenable region of TSH for immunization.
Autofocus and fusion using nonlinear correlation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cabazos-Marín, Alma Rocío; Álvarez-Borrego, Josué, E-mail: josue@cicese.mx; Coronel-Beltrán, Ángel
2014-10-06
In this work a new algorithm is proposed for autofocus and fusion of images captured by a microscope's CCD. The proposed autofocus algorithm implements a spiral scan of each image in the stack f(x, y)_w to define the vector V_w. The spectrum FV_w of this vector is calculated by fast Fourier transform. The best in-focus image is determined by a focus measure obtained from the nonlinear correlation of FV_1, the vector of the reference image, with each of the other vectors FV_w in the stack. In addition, fusion is performed with a subset of selected images f(x, y)_SBF, namely the images with the best focus measurements. Fusion creates a new improved image f(x, y)_F by selecting the pixels of higher intensity.
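A hedged sketch of the autofocus measure: each image in the stack is unwound into a 1D vector by a spiral scan, its spectrum is taken with an FFT, and a simple nonlinear (k-law) correlation against the reference image's spectrum scores how well focused it is. The exact nonlinearity used in the paper may differ.

import numpy as np

def spiral_scan(img):
    # Unwind a 2D array into spiral order: take the top row, then rotate
    # what remains and repeat.
    rows, out = img.tolist(), []
    while rows:
        out.extend(rows.pop(0))
        rows = [list(r) for r in zip(*rows)][::-1]
    return np.asarray(out, dtype=float)

def focus_measure(reference, candidate, k=0.3):
    f1 = np.fft.fft(spiral_scan(reference))
    f2 = np.fft.fft(spiral_scan(candidate))
    prod = f1 * np.conj(f2)
    # k-law nonlinearity applied to the cross-spectrum magnitude.
    corr = np.fft.ifft(np.abs(prod) ** k * np.exp(1j * np.angle(prod)))
    return np.abs(corr).max()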
Research on fusion algorithm of polarization image in tetrolet domain
NASA Astrophysics Data System (ADS)
Zhang, Dexiang; Yuan, BaoHong; Zhang, Jingjing
2015-12-01
Tetrolets are Haar-type wavelets whose supports are tetrominoes, which are shapes made by connecting four equal-sized squares. A fusion method for polarization images based on the tetrolet transform is proposed. First, the magnitude-of-polarization image and the angle-of-polarization image are decomposed into low-frequency and high-frequency coefficients at multiple scales and in multiple directions using the tetrolet transform. For the low-frequency coefficients, the average fusion method is used. According to edge distribution differences in the high-frequency sub-band images, the directional high-frequency coefficients are fused by selecting the better coefficients with a region spectrum entropy algorithm. Finally, the fused image is obtained by applying the inverse transform to the fused tetrolet coefficients. Experimental results show that the proposed method can detect image features more effectively and that the fused image has a better subjective visual effect.
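A sketch of the fusion rules described above, using an ordinary Haar wavelet (via PyWavelets) as a stand-in for the tetrolet transform, for which no widely available implementation exists. Low-frequency coefficients are averaged; for each high-frequency subband the coefficient of larger magnitude is kept, a simple proxy for the region-spectrum-entropy selection rule.

import numpy as np
import pywt

def fuse(img_a, img_b):
    ca, (ha, va, da) = pywt.dwt2(img_a.astype(float), "haar")
    cb, (hb, vb, db) = pywt.dwt2(img_b.astype(float), "haar")
    # Average the low-frequency (approximation) coefficients.
    low = (ca + cb) / 2.0
    # Pick the larger-magnitude coefficient in each high-frequency subband.
    highs = tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                  for x, y in ((ha, hb), (va, vb), (da, db)))
    return pywt.idwt2((low, highs), "haar")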
Osman, Christof; Noriega, Thomas R.; Okreglak, Voytek; Fung, Jennifer C.; Walter, Peter
2015-01-01
Mitochondrial DNA (mtDNA) is essential for mitochondrial and cellular function. In Saccharomyces cerevisiae, mtDNA is organized in nucleoprotein structures termed nucleoids, which are distributed throughout the mitochondrial network and are faithfully inherited during the cell cycle. How the cell distributes and inherits mtDNA is incompletely understood although an involvement of mitochondrial fission and fusion has been suggested. We developed a LacO-LacI system to noninvasively image mtDNA dynamics in living cells. Using this system, we found that nucleoids are nonrandomly spaced within the mitochondrial network and observed the spatiotemporal events involved in mtDNA inheritance. Surprisingly, cells deficient in mitochondrial fusion and fission distributed and inherited mtDNA normally, pointing to alternative pathways involved in these processes. We identified such a mechanism, where we observed fission-independent, but F-actin–dependent, tip generation that was linked to the positioning of mtDNA to the newly generated tip. Although mitochondrial fusion and fission were dispensable for mtDNA distribution and inheritance, we show through a combination of genetics and next-generation sequencing that their absence leads to an accumulation of mitochondrial genomes harboring deleterious structural variations that cluster at the origins of mtDNA replication, thus revealing crucial roles for mitochondrial fusion and fission in maintaining the integrity of the mitochondrial genome. PMID:25730886
Baños-Capilla, M C; García, M A; Bea, J; Pla, C; Larrea, L; López, E
2007-06-01
The quality of dosimetry in radiotherapy treatment requires accurate delimitation of the gross tumor volume. This can be achieved by complementing the anatomical detail provided by CT images through fusion with other imaging modalities that provide additional metabolic and physiological information. Therefore, the use of multiple imaging modalities for radiotherapy treatment planning requires an accurate image registration method. This work describes tests carried out on a Discovery LS positron emission/computed tomography (PET/CT) system by General Electric Medical Systems (GEMS), for its later use to obtain images to delimit the target in radiotherapy treatment. Several phantoms were used to verify image correlation, in combination with fiducial markers, which served as a system of external landmarks. We analyzed the geometrical accuracy of two different fusion methods with the images obtained with these phantoms. We first studied the fusion method used by the GEMS PET/CT system (hardware fusion), on the basis that there is satisfactory coincidence between the reconstruction centers of the CT and PET systems; and secondly fiducial fusion, a registration method based on a least-squares fitting algorithm over a system of landmark points. The study concluded with verification of the centroid position of some phantom components in both imaging modalities. Centroids were estimated through a calculation similar to center-of-mass, weighted by the CT number and the uptake intensity in PET. The mean deviations found for the hardware fusion method were |Δx| ± σ = 3.3 mm ± 1.0 mm and |Δx| ± σ = 3.6 mm ± 1.0 mm. These values were substantially improved upon applying fiducial fusion based on external landmark points: |Δx| ± σ = 0.7 mm ± 0.8 mm and |Δx| ± σ = 0.3 mm ± 1.7 mm. We also noted that the differences found for each of the fusion methods were similar for both the axial and helical CT image acquisition protocols.
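A minimal sketch of the centroid check described above: a center-of-mass style centroid, weighted by voxel intensity (CT number on one modality, uptake intensity on the other), computed for a segmented phantom component.

import numpy as np

def weighted_centroid(volume):
    # volume: 3D array of intensities for a segmented phantom component.
    w = volume.astype(float)
    idx = np.indices(volume.shape)
    total = w.sum()
    # Intensity-weighted mean coordinate along each axis.
    return np.array([(idx[d] * w).sum() / total for d in range(volume.ndim)])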
Infrared and visible image fusion based on total variation and augmented Lagrangian.
Guo, Hanqi; Ma, Yong; Mei, Xiaoguang; Ma, Jiayi
2017-11-01
This paper proposes a new algorithm for infrared and visible image fusion based on gradient transfer, which achieves fusion by preserving the intensity of the infrared image and then transferring the gradients of the corresponding visible image to the result. Gradient transfer suffers from the problems of low dynamic range and detail loss because it ignores the intensity of the visible image. The new algorithm solves these problems by providing additive intensity from the visible image to balance the intensity between the infrared image and the visible one. It formulates the fusion task as an l1-l1-TV minimization problem and then employs variable splitting and an augmented Lagrangian to convert the unconstrained problem into a constrained one that can be solved within the framework of the alternating direction method of multipliers. Experiments demonstrate that the new algorithm achieves better fusion results, with high computational efficiency, in both qualitative and quantitative tests than gradient transfer and most state-of-the-art methods.
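As a hedged sketch of the kind of objective described here (the notation is assumed, not taken from the paper): with u the infrared image, v the visible image, x the fused result, and λ a balance weight, a gradient-transfer fusion can be posed as

\min_x \; \|x - u\|_1 \; + \; \lambda \, \|\nabla x - \nabla v\|_1

The first l1 term keeps the fused intensities close to the infrared image, while the second, total-variation-style term transfers the visible gradients. Variable splitting with an augmented Lagrangian then turns this into subproblems that the alternating direction method of multipliers solves iteratively. The paper's actual formulation additionally injects visible-image intensity, which this sketch omits.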
Dim target detection method based on salient graph fusion
NASA Astrophysics Data System (ADS)
Hu, Ruo-lan; Shen, Yi-yan; Jiang, Jun
2018-02-01
Dim target detection is a key problem in the digital image processing field. With the development of multi-spectrum imaging sensors, it has become a trend to improve dim target detection performance by fusing information from different spectral images. In this paper, a dim target detection method based on salient graph fusion is proposed. In the method, multi-direction Gabor filters and multi-scale contrast filters are combined to construct a salient graph from each digital image. Then, a maximum-salience fusion strategy is designed to fuse the salient graphs from the different spectral images. A top-hat filter is used to detect dim targets from the fused salient graph. Experimental results show that the proposed method improves the probability of target detection and reduces the probability of false alarm on cluttered background images.
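A sketch of the pipeline stages named above: per-image salient maps are fused by a pixel-wise maximum, and a white top-hat filter then isolates small bright, dim-target-like structures. The Gabor/contrast salient graph construction is simplified here to a placeholder gradient magnitude, so this is an assumption-laden stand-in rather than the paper's construction.

import numpy as np
from scipy import ndimage

def salient_graph(image):
    # Placeholder for the multi-direction Gabor + multi-scale contrast map.
    gx = ndimage.sobel(image.astype(float), axis=0)
    gy = ndimage.sobel(image.astype(float), axis=1)
    return np.hypot(gx, gy)

def detect_dim_targets(images, size=5, thresh=0.5):
    # Maximum-salience fusion across the spectral images.
    fused = np.maximum.reduce([salient_graph(im) for im in images])
    # White top-hat keeps small bright structures, suppressing background.
    tophat = ndimage.white_tophat(fused, size=size)
    return tophat > thresh * tophat.max()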
Image fusion and navigation platforms for percutaneous image-guided interventions.
Rajagopal, Manoj; Venkatesan, Aradhana M
2016-04-01
Image-guided interventional procedures, particularly image-guided biopsy and ablation, serve an important role in the care of the oncology patient. Tumor genomic and proteomic profiling, early tumor response assessment, and confirmation of early recurrence are common scenarios that may necessitate successful biopsies of targets, including those that are small, anatomically unfavorable, or inconspicuous. As image-guided ablation is increasingly incorporated into interventional oncology practice, similar obstacles are posed for the ablation of technically challenging tumor targets. Navigation tools, including image fusion and device tracking, can enable abdominal interventionalists to more accurately target challenging biopsy and ablation targets. Image fusion technologies enable multimodality fusion and real-time co-display of US, CT, MRI, and PET/CT data, with navigational technologies including electromagnetic tracking and robotic, cone beam CT, optical, and laser guidance of interventional devices. Image fusion and navigational platform technology is reviewed in this article, including the results of studies implementing their use for interventional procedures. Pre-clinical and clinical experiences to date suggest these technologies have the potential to reduce procedure risk, time, and radiation dose to both the patient and the operator, with a valuable role to play in complex image-guided interventions.
Compressive hyperspectral and multispectral imaging fusion
NASA Astrophysics Data System (ADS)
Espitia, Óscar; Castillo, Sergio; Arguello, Henry
2016-05-01
Image fusion is a valuable framework that combines two or more images of the same scene from one or multiple sensors, improving the resolution of the images and increasing the interpretable content. In remote sensing, a common fusion problem consists of merging hyperspectral (HS) and multispectral (MS) images, which involve a large amount of redundant data and ignore the highly correlated structure of the datacube along the spatial and spectral dimensions. Compressive HS and MS systems compress the spectral data in the acquisition step, reducing the data redundancy by using different sampling patterns. This work presents a compressive HS and MS image fusion approach that uses a high-dimensional joint sparse model. The joint sparse model is formulated by combining the HS and MS compressive acquisition models. The high spectral and spatial resolution image is reconstructed using sparse optimization algorithms. Different fusion spectral image scenarios are used to explore the performance of the proposed scheme. Several simulations with synthetic and real datacubes show promising results, as a reliable reconstruction of a high spectral and spatial resolution image can be achieved using as little as 50% of the datacube.
Comparison and evaluation on image fusion methods for GaoFen-1 imagery
NASA Astrophysics Data System (ADS)
Zhang, Ningyu; Zhao, Junqing; Zhang, Ling
2016-10-01
Currently, many research works focus on finding the best fusion method for satellite images from SPOT, QuickBird, Landsat and so on, but only a few of them discuss the application to GaoFen-1 satellite images. This paper compares four fusion methods, principal component analysis transform, Brovey transform, hue-saturation-value transform, and Gram-Schmidt transform, from the perspective of preserving the original image's spectral information. The experimental results showed that the images produced by all four fusion methods not only retain the high spatial resolution of the panchromatic band but also carry abundant spectral information. Through comparison and evaluation, the Brovey transform integrates the two sources well, but its color fidelity is not the best. The brightness and color distortion of the hue-saturation-value transformed image is the largest. The principal component analysis transform does a good job in color fidelity, but its clarity still needs improvement. The Gram-Schmidt transform works best in color fidelity, renders vegetation edges most distinctly, and produces fused images sharper than those of the principal component analysis transform; among the four, it appears the most appropriate for GaoFen-1 satellite images of vegetation and non-vegetation areas. In brief, different fusion methods have different advantages in image quality and class extraction, and should be chosen according to the actual application and the image fusion algorithm.
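A minimal sketch of the Brovey transform compared above: each multispectral band is scaled by the ratio of the panchromatic band to the band sum, so the fused product inherits the pan image's spatial detail. The multispectral cube is assumed to be already resampled to the pan grid.

import numpy as np

def brovey(ms, pan, eps=1e-6):
    # ms: (bands, H, W) multispectral image; pan: (H, W) panchromatic image.
    total = ms.sum(axis=0) + eps
    # Scale every band by pan / (sum of bands) at each pixel.
    return ms * (pan / total)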
Günzel, Karsten; Cash, Hannes; Buckendahl, John; Königbauer, Maximilian; Asbach, Patrick; Haas, Matthias; Neymeyer, Jörg; Hinz, Stefan; Miller, Kurt; Kempkensteffen, Carsten
2017-01-13
To explore the diagnostic benefit of an additional sagittal-plane image fusion on top of the standard axial image fusion, using a sensor-based MRI/US fusion platform. Between July 2013 and September 2015, 251 patients with at least one suspicious lesion on mpMRI (rated by PI-RADS) were included in the analysis. All patients underwent MRI/US targeted biopsy (TB) in combination with a 10-core systematic prostate biopsy (SB). All biopsies were performed on a sensor-based fusion system. Group A included 162 men who received TB with axial MRI/US image fusion. Group B comprised 89 men in whom the TB was performed with an additional sagittal image fusion. The median age in group A was 67 years (IQR 61-72) and in group B 68 years (IQR 60-71). The median PSA level in group A was 8.10 ng/ml (IQR 6.05-14) and in group B 8.59 ng/ml (IQR 5.65-12.32). In group A, the proportion of patients with a suspicious digital rectal examination (DRE) (14 vs. 29%, p = 0.007) and the proportion of primary biopsies (33 vs. 46%, p = 0.046) were significantly lower. PI-RADS 3 lesions were overrepresented in group A compared to group B (19 vs. 9%; p = 0.044). Classified according to PI-RADS 3, 4 and 5, the detection rates of TB were 42, 48 and 75% in group A and 25, 74 and 90% in group B. The rate of PCa with a Gleason score ≥7 missed by TB was 33% (18 cases) in group A and 9% (5 cases) in group B (p = 0.072). An explorative multivariate binary logistic regression analysis revealed that PI-RADS score, a suspicious DRE and performing an additional sagittal image fusion were significant predictors of PCa detection by TB. Nine PCa were detected only by TB with sagittal fusion (sTB), and sTB identified 10 additional clinically significant PCa (Gleason ≥7). Performing an additional sagittal image fusion besides the standard axial fusion appears to improve the accuracy of the sensor-based MRI/US fusion platform.
Assessment of Spatiotemporal Fusion Algorithms for Planet and Worldview Images.
Kwan, Chiman; Zhu, Xiaolin; Gao, Feng; Chou, Bryan; Perez, Daniel; Li, Jiang; Shen, Yuzhong; Koperski, Krzysztof; Marchisio, Giovanni
2018-03-31
Although Worldview-2 (WV) images (non-pansharpened) have 2-m resolution, the re-visit times for the same areas may be seven days or more. In contrast, Planet images are collected using small satellites that can cover the whole Earth almost daily. However, the resolution of Planet images is 3.125 m. It would be ideal to fuse these two satellites images to generate high spatial resolution (2 m) and high temporal resolution (1 or 2 days) images for applications such as damage assessment, border monitoring, etc. that require quick decisions. In this paper, we evaluate three approaches to fusing Worldview (WV) and Planet images. These approaches are known as Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), Flexible Spatiotemporal Data Fusion (FSDAF), and Hybrid Color Mapping (HCM), which have been applied to the fusion of MODIS and Landsat images in recent years. Experimental results using actual Planet and Worldview images demonstrated that the three aforementioned approaches have comparable performance and can all generate high quality prediction images.
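The core intuition shared by STARFM-style predictors can be illustrated by the degenerate single-pixel case below. This is a hedged toy sketch, not STARFM, FSDAF or HCM themselves, which add spectrally similar neighbour weighting, unmixing, and color-mapping machinery on top of this temporal-difference idea.

```python
import numpy as np

def temporal_difference_fusion(fine_t0: np.ndarray,
                               coarse_t0: np.ndarray,
                               coarse_t1: np.ndarray) -> np.ndarray:
    """Predict a fine-resolution image at time t1 from a fine/coarse pair
    at t0 and a coarse image at t1 (all co-registered, on the same grid).

    Degenerate single-pixel case of STARFM-style fusion: the change
    observed at coarse resolution is added to the fine base image.
    """
    return fine_t0 + (coarse_t1 - coarse_t0)
```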
Choi, Hyun Ho; Lee, Ju Hwan; Kim, Sung Min; Park, Sung Yun
2015-01-01
Here, speckle noise in ultrasonic images is removed using an image fusion-based denoising method. To optimize the denoising performance, the discrete wavelet transform (DWT) and filtering techniques were analyzed and compared, and their performances were compared in order to derive the optimal input conditions. To evaluate speckle noise removal, the image fusion algorithm was applied to ultrasound images and the results were comparatively analyzed against the original images without the algorithm. Applying DWT and filtering techniques alone caused information loss, retained noise characteristics, and did not yield the most significant noise reduction. Conversely, the image fusion method applied to the SRAD-original input pair (speckle-reducing anisotropic diffusion output fused with the original) preserved the key information of the original image while removing the speckle noise. Based on these characteristics, the SRAD-original input condition gave the best denoising performance on ultrasound images. The best denoising technique proposed on the basis of these results was confirmed to have high potential for clinical application.
NASA Astrophysics Data System (ADS)
Li, Xiaosong; Li, Huafeng; Yu, Zhengtao; Kong, Yingchun
2015-07-01
An efficient multifocus image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain is proposed. Based on the properties of optical imaging and the theory of defocused images, we present a selection principle for lowpass frequency coefficients and also investigate the connection between the low-frequency image and the defocused image. Generally, the NSCT decomposition distributes the detail information of an image across different scales and directions in the bandpass subband coefficients. In order to correctly pick out the prefused bandpass directional coefficients, we introduce multiscale curvature, which not only inherits the advantages of windows of different sizes but also correctly recognizes the focused pixels in the source images, and we then develop a new fusion scheme for the bandpass subband coefficients. The fused image is obtained by inverse NSCT of the fused coefficients. Several multifocus image fusion methods are compared with the proposed scheme. The experimental results clearly indicate the validity and superiority of the proposed scheme in terms of both visual quality and quantitative evaluation.
Image Fusion Algorithms Using Human Visual System in Transform Domain
NASA Astrophysics Data System (ADS)
Vadhi, Radhika; Swamy Kilari, Veera; Samayamantula, Srinivas Kumar
2017-08-01
The aim of digital image fusion is to combine the important visual parts of various sources to improve the visual quality of the image; the fused image has higher visual quality than any of the source images. In this paper, Human Visual System (HVS) weights are used in the transform domain to select appropriate information from the various source images and then to obtain a fused image. Two main steps are involved. First, the DWT is applied to the registered source images. Then, qualitative sub-bands are identified using the HVS weights. Hence, qualitative sub-bands are selected from the different sources to form a high-quality HVS-based fused image. The quality of the HVS-based fused image is evaluated with general fusion metrics. The results show its superiority among the state-of-the-art Multi-Resolution Transforms (MRT), namely the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Contourlet Transform (CT), and Non-Subsampled Contourlet Transform (NSCT), using the maximum-selection fusion rule.
Fusion of Geophysical Images in the Study of Archaeological Sites
NASA Astrophysics Data System (ADS)
Karamitrou, A. A.; Petrou, M.; Tsokas, G. N.
2011-12-01
This paper presents results from different fusion techniques applied to geophysical images of different modalities, combining them into one image with higher information content than either of the two originals independently. The resulting image is useful for the detection and mapping of buried archaeological relics. The examined archaeological area is situated at the Kampana site (NE Greece) near the ancient theater of Maronia city. Archaeological excavations have revealed an ancient theater, an aristocratic house and the temple of the ancient Greek god Dionysus, and numerous ceramic objects found in the broader area indicated the probable existence of buried urban structures. In order to accurately locate and map these, geophysical measurements were performed with the magnetic method (vertical gradient of the magnetic field) and the electrical method (apparent resistivity). We performed a semi-stochastic, pixel-based registration between the geophysical images in order to fine-register them, correcting the local spatial offsets produced by the use of hand-held devices. After this procedure we applied three different fusion approaches to the registered images. Image fusion is a relatively new technique that not only allows the integration of different information sources, but also takes advantage of the spatial and spectral resolution as well as the orientation characteristics of each image. We used fusion with mean values, with wavelets (enhancing selected frequency bands), and with curvelets (giving emphasis to specific bands and angles, according to the expected orientation of the relics). In all three cases the fused images gave significantly better results than either of the original geophysical images separately. Comparison of the three approaches showed that curvelet fusion, emphasizing the features' orientation, gives the best fused image, in which clear linear and ellipsoidal features corresponding to potential archaeological relics appear.
Multispectral Image Enhancement Through Adaptive Wavelet Fusion
2016-09-14
This research developed a multiresolution image fusion scheme based on guided filtering. Guided filtering can effectively reduce noise while preserving detail boundaries. When applied in an iterative mode, guided filtering selectively eliminates small-scale details while restoring larger-scale edges. The proposed multi-scale image fusion scheme achieves spatial consistency by using guided filtering both at ...
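Since the scheme rests on guided filtering, a compact numpy/scipy version of the standard single-channel guided filter (He et al.) may help; this is a generic implementation rather than the report's code, and the radius and eps defaults are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide: np.ndarray, src: np.ndarray,
                  radius: int = 4, eps: float = 1e-3) -> np.ndarray:
    """Single-channel guided filter (He et al. 2010), sketch.

    Edge-preserving smoothing of `src` steered by `guide`; with
    guide == src it acts as an edge-aware denoiser, which is the
    property the fusion scheme above relies on.
    """
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    corr_gs = uniform_filter(guide * src, size)
    corr_gg = uniform_filter(guide * guide, size)

    var_g = corr_gg - mean_g * mean_g
    cov_gs = corr_gs - mean_g * mean_s

    a = cov_gs / (var_g + eps)       # local linear coefficients
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)
```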
Muscatello, Christopher M.; Domier, Calvin W.; Hu, Xing; ...
2014-08-13
Here, quasi-optical imaging at sub-THz frequencies has had a major impact on fusion plasma diagnostics. Mm-wave imaging reflectometry uses microwaves to actively probe fusion plasmas, inferring the local properties of electron density fluctuations. Electron cyclotron emission imaging is a multichannel radiometer that passively measures the spontaneous microwave emission from the plasma to infer the local properties of electron temperature fluctuations. These imaging diagnostics work together to characterize the turbulence: important quantities such as the amplitude and wavenumber of coherent fluctuations, the correlation lengths and decorrelation times of the turbulence, and the poloidal flow velocity of the plasma are readily inferred.
Liu, Xingbin; Mei, Wenbo; Du, Huiqian
2018-02-13
In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed using a multi-scale joint decomposition framework (MJDF) and a shearing filter (SF). The MJDF, constructed from a gradient-minimization smoothing filter (GMSF) and a Gaussian low-pass filter (GLF), decomposes the source images into low-pass layers, edge layers, and detail layers at multiple scales. In order to highlight detail information in the fused image, the edge layer and the detail layer at each scale are combined by weighting into a detail-enhanced layer. As a directional filter is effective in capturing salient information, the SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. A visual-saliency-map-based fusion rule is designed for fusing the low-pass layers, and the sum of standard deviations is used as the activity-level measurement for directional coefficient fusion. The final result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method, with shift invariance, directional selectivity, and a detail-enhancing property, is efficient in preserving and enhancing the detail information of multimodality medical images.
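A drastically simplified, single-scale illustration of the base/detail idea follows: a plain Gaussian low-pass stands in for the paper's GMSF/GLF pair, and the separate edge layer, shearing filter and saliency rules are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_scale_fusion(img_a: np.ndarray, img_b: np.ndarray,
                     sigma: float = 2.0) -> np.ndarray:
    """Toy base/detail fusion: base layers by Gaussian low-pass, detail
    layers by subtraction. Bases are averaged; details are fused by
    absolute-maximum selection so the stronger detail survives."""
    base_a = gaussian_filter(img_a, sigma)
    base_b = gaussian_filter(img_b, sigma)
    det_a, det_b = img_a - base_a, img_b - base_b
    detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return 0.5 * (base_a + base_b) + detail
```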
Adaptive structured dictionary learning for image fusion based on group-sparse-representation
NASA Astrophysics Data System (ADS)
Yang, Jiajie; Sun, Bin; Luo, Chengwei; Wu, Yuzhong; Xu, Limei
2018-04-01
Dictionary learning is the key process of sparse representation, one of the most widely used image representation theories in image fusion. Existing dictionary learning methods do not make good use of group structure information or of the sparse coefficients. In this paper, we propose a new adaptive structured dictionary learning algorithm and an l1-norm maximum fusion rule that innovatively uses grouped sparse coefficients to merge the images. The dictionary learning algorithm requires no prior knowledge about any group structure of the dictionary: by exploiting how the dictionary expresses the signal, it automatically finds the desired potential structure information hidden in the dictionary. The fusion rule draws on the physical meaning of the group-structured dictionary and makes activity-level judgements on the structure information while the images are being merged. Therefore, the fused image retains more significant information. Comparisons have been made with several state-of-the-art dictionary learning methods and fusion rules. The experimental results demonstrate that both the dictionary learning algorithm and the fusion rule outperform the others in terms of several objective evaluation metrics.
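The l1-norm maximum rule itself is easy to sketch. The toy below works per patch on sparse codes under a shared dictionary and ignores the learned group structure, so it only illustrates the activity-level selection step, not the paper's full method.

```python
import numpy as np

def l1_max_fusion(coeffs_a: np.ndarray, coeffs_b: np.ndarray) -> np.ndarray:
    """l1-norm maximum rule over per-patch sparse coefficient vectors.

    coeffs_* -- shape (n_patches, n_atoms): sparse codes of corresponding
    patches from the two source images under a shared dictionary.
    For each patch the code with the larger l1 norm (higher activity
    level) is kept; fused patches are then dictionary @ fused_code.
    """
    act_a = np.abs(coeffs_a).sum(axis=1)
    act_b = np.abs(coeffs_b).sum(axis=1)
    choose_a = (act_a >= act_b)[:, None]
    return np.where(choose_a, coeffs_a, coeffs_b)
```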
Assessment of SPOT-6 optical remote sensing data against GF-1 using NNDiffuse image fusion algorithm
NASA Astrophysics Data System (ADS)
Zhao, Jinling; Guo, Junjie; Cheng, Wenjie; Xu, Chao; Huang, Linsheng
2017-07-01
A cross-comparison method was used to assess SPOT-6 optical satellite imagery against Chinese GF-1 imagery using three types of indicators: spectral and color quality, fusion effect, and identification potential. Specifically, spectral response function (SRF) curves were used to compare the two sensors, showing that the SRF curve shape of SPOT-6 is closer to a rectangle than that of GF-1 in the blue, green, red, and near-infrared bands. The NNDiffuse image fusion algorithm was used to evaluate the capability of information conservation in comparison with the wavelet transform (WT) and principal component (PC) algorithms. The results show that the NNDiffuse fused image has an entropy value extremely similar to the original image (1.849 versus 1.852) and better color quality. In addition, the object-oriented classification toolset (ENVI EX) was used to identify green land in order to compare the self-fusion of SPOT-6 against the inter-fusion of SPOT-6 and GF-1, both based on the NNDiffuse algorithm. The overall accuracies are 97.27% and 76.88%, respectively, showing that the self-fused SPOT-6 image has better identification capability.
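The entropy figure quoted above (1.849 versus 1.852) is the standard gray-level histogram entropy, which can be computed as follows; the bin count is an assumption.

```python
import numpy as np

def image_entropy(img: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (bits) of an image's gray-level histogram, the
    information-conservation indicator used in the assessment above."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist.astype(float) / hist.sum()
    p = p[p > 0]                       # drop empty bins before log
    return float(-(p * np.log2(p)).sum())
```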
Some new classification methods for hyperspectral remote sensing
NASA Astrophysics Data System (ADS)
Du, Pei-jun; Chen, Yun-hao; Jones, Simon; Ferwerda, Jelle G.; Chen, Zhi-jun; Zhang, Hua-peng; Tan, Kun; Yin, Zuo-xia
2006-10-01
Hyperspectral Remote Sensing (HRS) is one of the most significant recent achievements of Earth observation technology, and classification is its most commonly employed processing methodology. In this paper three new hyperspectral RS image classification methods are analyzed: object-oriented HRS image classification, HRS image classification based on information fusion, and HRS image classification by Back Propagation Neural Network (BPNN). An OMIS HRS image is used as the example data. Object-oriented techniques have gained popularity for RS image classification in recent years. In such methods, image segmentation first extracts regions from the pixel information based on homogeneity criteria; spectral parameters such as the mean vector, texture and NDVI, and spatial/shape parameters such as aspect ratio, convexity, solidity, roundness and orientation are then calculated for each region; finally the image is classified using the region feature vectors with suitable classifiers such as an artificial neural network (ANN). This demonstrates that object-oriented methods can improve classification accuracy, since they use information and features from both the point and its neighborhood, and the processing unit is a polygon in which all pixels are homogeneous and belong to the same class. HRS image classification based on information fusion first divides all bands of the image into groups and then extracts features from every group according to its properties; three levels of information fusion (data-level, feature-level and decision-level) are applied to HRS image classification. An Artificial Neural Network (ANN) can perform well in RS image classification; to advance the use of ANNs for HRS image classification, the Back Propagation Neural Network (BPNN), the most commonly used neural network, is applied.
The effect of multispectral image fusion enhancement on human efficiency.
Bittner, Jennifer L; Schill, M Trent; Mohd-Zaid, Fairul; Blaha, Leslie M
2017-01-01
The visual system can be highly influenced by changes to visual presentation. Thus, numerous techniques have been developed to augment imagery in an attempt to improve human perception. The current paper examines the potential impact of one such enhancement, multispectral image fusion, where imagery captured in varying spectral bands (e.g., visible, thermal, night vision) is algorithmically combined to produce an output to strengthen visual perception. We employ ideal observer analysis over a series of experimental conditions to (1) establish a framework for testing the impact of image fusion over the varying aspects surrounding its implementation (e.g., stimulus content, task) and (2) examine the effectiveness of fusion on human information processing efficiency in a basic application. We used a set of rotated Landolt C images captured with a number of individual sensor cameras and combined across seven traditional fusion algorithms (e.g., Laplacian pyramid, principal component analysis, averaging) in a 1-of-8 orientation task. We found that, contrary to the idea of fused imagery always producing a greater impact on perception, single-band imagery can be just as influential. Additionally, efficiency data were shown to fluctuate based on sensor combination instead of fusion algorithm, suggesting the need for examining multiple factors to determine the success of image fusion. Our use of ideal observer analysis, a popular technique from the vision sciences, provides not only a standard for testing fusion in direct relation to the visual system but also allows for comparable examination of fusion across its associated problem space of application.
Benchmarking image fusion system design parameters
NASA Astrophysics Data System (ADS)
Howell, Christopher L.
2013-06-01
A clear and absolute method for discriminating between image fusion algorithm performances is presented. This method can effectively be used to assist in the design and modeling of image fusion systems. Specifically, it is postulated that quantifying human task performance using image fusion should be benchmarked to whether the fusion algorithm, at a minimum, retained the performance benefit achievable by each independent spectral band being fused. The established benchmark would then clearly represent the threshold that a fusion system should surpass to be considered beneficial to a particular task. A genetic algorithm is employed to characterize the fused system parameters using a Matlab® implementation of NVThermIP as the objective function. By setting the problem up as a mixed-integer constraint optimization problem, one can effectively look backwards through the image acquisition process: optimizing fused system parameters by minimizing the difference between modeled task difficulty measure and the benchmark task difficulty measure. The results of an identification perception experiment are presented, where human observers were asked to identify a standard set of military targets, and used to demonstrate the effectiveness of the benchmarking process.
Pixel-based image fusion with false color mapping
NASA Astrophysics Data System (ADS)
Zhao, Wei; Mao, Shiyi
2003-06-01
In this paper, we propose a pixel-based image fusion algorithm that combines gray-level image fusion with false color mapping. The algorithm integrates two gray-level images from different sensor modalities or frequencies and produces a fused false-color image with higher information content than either of the originals, in which objects are easy to recognize. The algorithm has three steps: first, obtain the fused gray-level image of the two original images; second, compute generalized high-boost filtered images between the fused gray-level image and each of the two source images; third, generate the fused false-color image. We use a hybrid averaging-and-selection fusion method to obtain the fused gray-level image, which provides better detail than the two original images while reducing noise. However, the fused gray-level image cannot contain all the detail information of the two source images, and details in a gray-level image cannot be discerned as easily as in a color image, so a color fused image is necessary. In order to create color variation and enhance details in the final fused image, we produce three generalized high-boost filtered images and display them through the red, green, and blue channels respectively, producing the final fused color image. This method was used to fuse two SAR images acquired over the San Francisco area (California, USA). The result shows that the fused false-color image enhances the visibility of certain details. The resolution of the final false-color image is the same as that of the input images.
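A hedged sketch of the three-step pipeline is given below. The exact form of the "generalized high-boost" filter and the channel assignment are assumptions, and the gray-level fusion step is reduced to plain averaging rather than the paper's hybrid averaging-and-selection rule; inputs are assumed to be floats in [0, 1].

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def false_color_fusion(src1: np.ndarray, src2: np.ndarray,
                       k: float = 1.5, sigma: float = 1.0) -> np.ndarray:
    """Toy three-step false-color fusion of two co-registered gray images."""
    fused = 0.5 * (src1 + src2)                    # step 1: gray-level fusion

    def high_boost(base: np.ndarray, img: np.ndarray) -> np.ndarray:
        # assumed form of a "generalized high-boost" between two images
        return np.clip(k * base - gaussian_filter(img, sigma), 0.0, 1.0)

    r = high_boost(fused, src2)                    # step 2: high-boost images
    g = fused
    b = high_boost(fused, src1)
    return np.dstack([r, g, b])                    # step 3: RGB false color
```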
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mankovich, N.J.; Lambert, T.; Zrimec, T.
A project is underway to develop automated methods of fusing cerebral magnetic resonance angiography (MRA) and x-ray angiography (XRA) for creating accurate visualizations used in planning treatment of vascular disease. The authors have developed a vascular phantom suitable for testing segmentation and fusion algorithms with either derived images (pseudo-MRA/pseudo-XRA) or actual MRA or XRA image sequences. The initial unilateral arterial phantom design, based on normal human anatomy, contains 48 tapering vascular segments with lumen diameters from 2.5 millimeters to 0.25 millimeters. The initial phantom used rapid prototyping technology (stereolithography) with a 0.9 millimeter vessel wall fabricated in an ultraviolet-cured plastic. The fabrication resulted in a hollow vessel model comprising the internal carotid artery, the ophthalmic artery, and the proximal segments of the anterior, middle, and posterior cerebral arteries. The complete model was fabricated, but the model's lumen could not be cleared for vessels of less than 1 millimeter diameter. Measurements of selected vascular outer diameters against the CAD specification showed an accuracy of 0.14 mm and a precision (standard deviation) of 0.15 mm. The plastic vascular model provides a fixed geometric framework for the evaluation of imaging protocols and the development of algorithms for both segmentation and fusion.
de Graaf, M; Boven, E; Oosterhoff, D; van der Meulen-Muileman, I H; Huls, G A; Gerritsen, W R; Haisma, H J; Pinedo, H M
2002-03-04
Monoclonal antibodies against tumour-associated antigens could be useful to deliver enzymes selectively to the site of a tumour for activation of a non-toxic prodrug. A completely human fusion protein may be advantageous for repeated administration, as host immune responses may be avoided. We have constructed a fusion protein consisting of a human single chain Fv antibody, C28, against the epithelial cell adhesion molecule and the human enzyme beta-glucuronidase. The sequences encoding C28 and human enzyme beta-glucuronidase were joined by a sequence encoding a flexible linker, and were preceded by the IgGkappa signal sequence for secretion of the fusion protein. A CHO cell line was engineered to secrete C28-beta-glucuronidase fusion protein. Antibody specificity and enzyme activity were retained in the secreted fusion protein that had an apparent molecular mass of 100 kDa under denaturing conditions. The fusion protein was able to convert a non-toxic prodrug of doxorubicin, N-[4-doxorubicin-N-carbonyl(oxymethyl)phenyl]-O-beta-glucuronyl carbamate to doxorubicin, resulting in cytotoxicity. A bystander effect was demonstrated, as doxorubicin was detected in all cells after N-[4-doxorubicin-N-carbonyl(oxymethyl)phenyl]-O-beta-glucuronyl carbamate administration when only 10% of the cells expressed the fusion protein. This is the first fully human and functional fusion protein consisting of an scFv against epithelial cell adhesion molecule and human enzyme beta-glucuronidase for future use in tumour-specific activation of a non-toxic glucuronide prodrug. Copyright 2002 Cancer Research UK
Multispectral multisensor image fusion using wavelet transforms
Lemeshewsky, George P.
1999-01-01
Fusion techniques can be applied to multispectral and higher spatial resolution panchromatic images to create a composite image that is easier to interpret than the individual images. Wavelet transform-based multisensor, multiresolution fusion (a type of band sharpening) was applied to Landsat thematic mapper (TM) multispectral and coregistered higher resolution SPOT panchromatic images. The objective was to obtain increased spatial resolution, false color composite products to support the interpretation of land cover types wherein the spectral characteristics of the imagery are preserved to provide the spectral clues needed for interpretation. Since the fusion process should not introduce artifacts, a shift invariant implementation of the discrete wavelet transform (SIDWT) was used. These results were compared with those using the shift variant, discrete wavelet transform (DWT). Overall, the process includes a hue, saturation, and value color space transform to minimize color changes, and a reported point-wise maximum selection rule to combine transform coefficients. The performance of fusion based on the SIDWT and DWT was evaluated with a simulated TM 30-m spatial resolution test image and a higher resolution reference. Simulated imagery was made by blurring higher resolution color-infrared photography with the TM sensors' point spread function. The SIDWT based technique produced imagery with fewer artifacts and lower error between fused images and the full resolution reference. Image examples with TM and SPOT 10-m panchromatic illustrate the reduction in artifacts due to the SIDWT based fusion.
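The SIDWT fusion described here can be approximated with pywt's stationary wavelet transform. The sketch below applies the point-wise maximum-magnitude rule to detail coefficients and averages the approximations, omitting the HSV color-space step the paper uses; wavelet and level are assumptions, and image sides must be divisible by 2**level.

```python
import numpy as np
import pywt

def swt_max_fusion(img_a: np.ndarray, img_b: np.ndarray,
                   wavelet: str = "haar", level: int = 2) -> np.ndarray:
    """Shift-invariant (stationary) wavelet fusion of two gray images:
    averaged approximation subbands, point-wise maximum-magnitude
    selection on detail subbands, then inverse SWT."""
    ca = pywt.swt2(img_a, wavelet, level=level)
    cb = pywt.swt2(img_b, wavelet, level=level)
    fused = []
    for (a_approx, a_details), (b_approx, b_details) in zip(ca, cb):
        f_approx = 0.5 * (a_approx + b_approx)
        f_details = tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                          for x, y in zip(a_details, b_details))
        fused.append((f_approx, f_details))
    return pywt.iswt2(fused, wavelet)
```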
A comparative study of multi-focus image fusion validation metrics
NASA Astrophysics Data System (ADS)
Giansiracusa, Michael; Lutz, Adam; Messer, Neal; Ezekiel, Soundararajan; Alford, Mark; Blasch, Erik; Bubalo, Adnan; Manno, Michael
2016-05-01
Fusion of visual information from multiple sources is relevant for security, transportation, and safety applications. Image fusion can be particularly useful when fusing imagery captured at multiple levels of focus: different focus levels create different visual qualities in different regions of the imagery, which can provide much more visual information to analysts when fused. Multi-focus image fusion would benefit users through automation, which requires evaluating the fused images to determine whether the focused regions of each image have been properly fused. Many no-reference metrics, such as information-theory-based, image-feature-based, and structural-similarity-based metrics, have been developed for such comparisons. However, it is hard to scale an accurate assessment of visual quality, which requires validating these metrics for different types of applications. To do this, human-perception-based validation methods have been developed, particularly using receiver operating characteristic (ROC) curves and the area under them (AUC). Our study uses these to analyze the effectiveness of no-reference image fusion metrics applied to multi-resolution fusion methods, in order to determine which should be used when dealing with multi-focus data. Preliminary results show that the Tsallis, SF, and spatial frequency metrics are consistent with the image quality and peak signal-to-noise ratio (PSNR).
Wilson, R L; Stauffer, G V
1994-01-01
The gene encoding GcvA, the trans-acting regulatory protein for the Escherichia coli glycine cleavage enzyme system, has been sequenced. The gcvA locus contains an open reading frame of 930 nucleotides that could encode a protein with a molecular mass of 34.4 kDa, consistent with the results of minicell analysis indicating that GcvA is a polypeptide of approximately 33 kDa. The deduced amino acid sequence of GcvA revealed that this protein shares similarity with the LysR family of activator proteins. The transcription start site was found to be 72 bp upstream of the presumed translation start site. A chromosomal deletion of gcvA resulted in the inability of cells to activate the expression of a gcvT-lacZ gene fusion when grown in the presence of glycine and an inability to repress gcvT-lacZ expression when grown in the presence of inosine. The regulation of gcvA was examined by constructing a gcvA-lacZ gene fusion in which beta-galactosidase synthesis is under the control of the gcvA regulatory region. Although gcvA expression appears to be autogenously regulated over a two- to threefold range, it is neither induced by glycine nor repressed by inosine. PMID:8188587
Generation and validation of homozygous fluorescent knock-in cells using CRISPR-Cas9 genome editing.
Koch, Birgit; Nijmeijer, Bianca; Kueblbeck, Moritz; Cai, Yin; Walther, Nike; Ellenberg, Jan
2018-06-01
Gene tagging with fluorescent proteins is essential for investigations of the dynamic properties of cellular proteins. CRISPR-Cas9 technology is a powerful tool for inserting fluorescent markers into all alleles of the gene of interest (GOI) and allows functionality and physiological expression of the fusion protein. It is essential to evaluate such genome-edited cell lines carefully in order to preclude off-target effects caused by (i) incorrect insertion of the fluorescent protein, (ii) perturbation of the fusion protein by the fluorescent proteins or (iii) nonspecific genomic DNA damage by CRISPR-Cas9. In this protocol, we provide a step-by-step description of our systematic pipeline to generate and validate homozygous fluorescent knock-in cell lines. We have used the paired Cas9D10A nickase approach to efficiently insert tags into specific genomic loci via homology-directed repair (HDR) with minimal off-target effects. It is time-consuming and costly to perform whole-genome sequencing of each cell clone to check for spontaneous genetic variations occurring in mammalian cell lines. Therefore, we have developed an efficient validation pipeline for the generated cell lines consisting of junction PCR, Southern blotting analysis, Sanger sequencing, microscopy, western blotting analysis and live-cell imaging for cell-cycle dynamics. This protocol takes between 6 and 9 weeks. With this protocol, up to 70% of the targeted genes can be tagged homozygously with fluorescent proteins, thus resulting in physiological levels and phenotypically functional expression of the fusion proteins.
Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.
Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F
2013-09-01
The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in Stereolithography file format, and the 3dMD model was exported in Virtual Reality Modeling Language file format. Image registration and fusion were performed in Mimics software; a genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where errors exceed 1.5 mm. However, the image fusion of the whole point cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allows the 3D virtual head to be an accurate, realistic, and widespread tool, of great benefit for virtual face modeling.
A new hyperspectral image compression paradigm based on fusion
NASA Astrophysics Data System (ADS)
Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto
2016-10-01
The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression must be performed on the satellite carrying the hyperspectral sensor, and hence by space-qualified hardware, with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade it to obtain a high-resolution multispectral image. These two degraded images are then sent to the Earth's surface, where they are fused using a fusion algorithm for hyperspectral and multispectral images in order to recover the original hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on board, becomes very simple, the fusion process used to reconstruct the image being the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image, and their results corroborate the benefits of the proposed methodology.
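The on-board half of the scheme amounts to two degradations, which a few lines of numpy can illustrate. The block-average spatial degradation and the spectral response matrix `srf` below are assumptions standing in for the paper's unspecified operators.

```python
import numpy as np

def compress_on_board(hsi: np.ndarray, srf: np.ndarray, factor: int = 4):
    """On-board step of the fusion-based compression idea (sketch).

    hsi -- hyperspectral cube, shape (H, W, B); H and W assumed
           divisible by `factor`
    srf -- assumed (B, M) spectral response matrix mapping B
           hyperspectral bands to M multispectral bands
    Returns the two products that would be downlinked and later fused
    on the ground to reconstruct `hsi`.
    """
    h, w, b = hsi.shape
    # spatial degradation: block-average to a low-resolution HSI
    lr_hsi = hsi.reshape(h // factor, factor,
                         w // factor, factor, b).mean(axis=(1, 3))
    # spectral degradation: project bands through the response matrix
    hr_msi = hsi @ srf
    return lr_hsi, hr_msi
```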
The three lives of viral fusion peptides
Apellániz, Beatriz; Huarte, Nerea; Largo, Eneko; Nieva, José L.
2014-01-01
Fusion peptides comprise conserved hydrophobic domains absolutely required for the fusogenic activity of glycoproteins from divergent virus families. After 30 years of intensive research efforts, the structures and functions underlying their high degree of sequence conservation are not fully elucidated. The long-hydrophobic viral fusion peptide (VFP) sequences are structurally constrained to access three successive states after biogenesis. Firstly, the VFP sequence must fulfill the set of native interactions required for (meta) stable folding within the globular ectodomains of glycoprotein complexes. Secondly, at the onset of the fusion process, they get transferred into the target cell membrane and adopt specific conformations therein. According to commonly accepted mechanistic models, membrane-bound states of the VFP might promote the lipid bilayer remodeling required for virus-cell membrane merger. Finally, at least in some instances, several VFPs co-assemble with transmembrane anchors into membrane integral helical bundles, following a locking movement hypothetically coupled to fusion-pore expansion. Here we review different aspects of the three major states of the VFPs, including the functional assistance by other membrane-transferring glycoprotein regions, and discuss briefly their potential as targets for clinical intervention. PMID:24704587
Kim, Shin-Hee; Xiao, Sa; Collins, Peter L; Samal, Siba K
2016-06-01
The cleavage site sequence of the fusion (F) protein contributes to a wide range of virulence of Newcastle disease virus (NDV). In this study, we identified other important amino acid sequences of the F protein that affect cleavage and modulation of fusion. We generated chimeric Beaudette C (BC) viruses containing the cleavage site sequence of avirulent strain LaSota (Las-Fc) together with various regions of the F protein of another virulent strain AKO. We found that the F1 subunit is important for cleavage inhibition. Further dissection of the F1 subunit showed that replacement of four amino acids in the BC/Las-Fc protein with their AKO counterparts (T341S, M384I, T385A and I386L) resulted in an increase in fusion and replication in vitro. In contrast, the mutation N403D greatly reduced cleavage and viral replication, and affected protein conformation. These findings will be useful in developing improved live NDV vaccines and vaccine vectors.
Biometric image enhancement using decision rule based image fusion techniques
NASA Astrophysics Data System (ADS)
Sagayee, G. Mary Amirtha; Arumugam, S.
2010-02-01
Introducing biometrics into information systems may yield considerable benefits. Most researchers confirm that the fingerprint is more widely used than the iris or face, and moreover it is the primary choice for most privacy-concerned applications. For fingerprint applications, choosing a proper sensor is critical. The proposed work addresses how image quality can be improved by introducing image fusion at the sensor level. The resulting images, after applying the decision-rule-based image fusion technique, are evaluated and analyzed via their entropy levels and root mean square error.
An infrared-visible image fusion scheme based on NSCT and compressed sensing
NASA Astrophysics Data System (ADS)
Zhang, Qiong; Maldague, Xavier
2015-05-01
Image fusion, currently a research hotspot in the field of infrared computer vision, has been developed using a variety of methods. Traditional image fusion algorithms tend to bring problems such as data storage shortages and increased computational complexity. Compressed sensing (CS) uses sparse sampling without prior knowledge and still reconstructs the image well, which reduces the cost and complexity of image processing. In this paper, an advanced compressed sensing image fusion algorithm based on the non-subsampled contourlet transform (NSCT) is proposed. NSCT provides better sparsity than the wavelet transform in image representation. Through the NSCT decomposition, the low-frequency and high-frequency coefficients are obtained respectively. For the fusion of the low-frequency coefficients of the infrared and visible images, an adaptive regional energy weighting rule is used; only the high-frequency coefficients are compressively measured. Here we use sparse representation and random projection to obtain the required values of the high-frequency coefficients; afterwards, the coefficients of each image block are fused via the absolute-maximum selection rule and/or the regional standard deviation rule. In the reconstruction of the compressive sampling results, a gradient-based iterative algorithm and the total variation (TV) method are employed to recover the high-frequency coefficients. Finally, the fused image is recovered by inverse NSCT. Both the visual effects and the numerical results of the experiments indicate that the presented approach achieves much higher fusion quality, accelerates the calculations, enhances various targets and extracts more useful information.
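The adaptive regional energy weighting used for the low-frequency coefficients can be sketched as follows; the window size and the exact energy definition are assumptions, and the NSCT/CS machinery around this rule is omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def regional_energy_fusion(low_a: np.ndarray, low_b: np.ndarray,
                           size: int = 7) -> np.ndarray:
    """Regional-energy-weighted fusion of two low-pass subbands: each
    pixel is a convex combination of the two coefficients, weighted by
    local energy over a size x size window."""
    e_a = uniform_filter(low_a ** 2, size)
    e_b = uniform_filter(low_b ** 2, size)
    w = e_a / (e_a + e_b + 1e-12)      # weight toward the more energetic source
    return w * low_a + (1.0 - w) * low_b
```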
Integrative Multi-Spectral Sensor Device for Far-Infrared and Visible Light Fusion
NASA Astrophysics Data System (ADS)
Qiao, Tiezhu; Chen, Lulu; Pang, Yusong; Yan, Gaowei
2018-06-01
Infrared and visible light image fusion has been a hot spot of multi-sensor fusion research in recent years. Existing infrared and visible light fusion technologies require registration before fusion because two separate cameras are used, and the performance of registration techniques still needs improvement. Hence, a novel integrative multi-spectral sensor device is proposed for infrared and visible light fusion: using a beam-splitter prism, the coaxial light entering a single lens is projected onto an infrared charge-coupled device (CCD) and a visible light CCD, respectively. In this paper, the imaging mechanism of the proposed sensor device is studied along with the signal acquisition and fusion process. A simulation experiment covering the entire chain of the optical system, signal acquisition, and signal fusion is constructed based on an imaging-effect model, and a quality evaluation index is adopted to analyze the simulation results. The experimental results demonstrate that the proposed sensor device is effective and feasible.
Kang, Tae Wook; Song, Kyoung Doo; Kim, Mimi; Kim, Seung Soo; Kim, Seong Hyun; Ha, Sang Yun
2017-01-01
Objective: To assess whether contrast-enhanced ultrasonography (CEUS) with Sonazoid can improve the lesion conspicuity and feasibility of percutaneous biopsies for focal hepatic lesions invisible on fusion imaging of real-time ultrasonography (US) with computed tomography/magnetic resonance images, and to evaluate its impact on clinical decision making. Materials and Methods: The Institutional Review Board approved this retrospective study. Between June 2013 and January 2015, 711 US-guided percutaneous biopsies were performed for focal hepatic lesions. Biopsies were performed using CEUS for guidance if lesions were invisible on fusion imaging. We retrospectively evaluated the number of target lesions initially invisible on fusion imaging that became visible after applying CEUS, using a 4-point scale. Technical success rates of biopsies were evaluated based on histopathological results. In addition, the occurrence of changes in clinical decision making was assessed. Results: Among 711 patients, 16 patients (2.3%) were included in the study. The median size of target lesions was 1.1 cm (range, 0.5–1.9 cm) on pre-procedural imaging. After CEUS, 15 of 16 (93.8%) focal hepatic lesions were visualized. The conspicuity score was significantly increased after adding CEUS, as compared to that on fusion imaging (p < 0.001). The technical success rate of biopsy was 87.5% (14/16). After biopsy, there were changes in clinical decision making for 11 of 16 patients (68.8%). Conclusion: The addition of CEUS could improve the conspicuity of focal hepatic lesions invisible on fusion imaging. This dual guidance using CEUS and fusion imaging may affect patient management via changes in clinical decision making. PMID:28096725
A transversal approach for patch-based label fusion via matrix completion
Sanroma, Gerard; Wu, Guorong; Gao, Yaozong; Thung, Kim-Han; Guo, Yanrong; Shen, Dinggang
2015-01-01
Recently, multi-atlas patch-based label fusion has received an increasing interest in the medical image segmentation field. After warping the anatomical labels from the atlas images to the target image by registration, label fusion is the key step to determine the latent label for each target image point. Two popular types of patch-based label fusion approaches are (1) reconstruction-based approaches that compute the target labels as a weighted average of atlas labels, where the weights are derived by reconstructing the target image patch using the atlas image patches; and (2) classification-based approaches that determine the target label as a mapping of the target image patch, where the mapping function is often learned using the atlas image patches and their corresponding labels. Both approaches have their advantages and limitations. In this paper, we propose a novel patch-based label fusion method to combine the above two types of approaches via matrix completion (and hence, we call it transversal). As we will show, our method overcomes the individual limitations of both reconstruction-based and classification-based approaches. Since the labeling confidences may vary across the target image points, we further propose a sequential labeling framework that first labels the highly confident points and then gradually labels more challenging points in an iterative manner, guided by the label information determined in the previous iterations. We demonstrate the performance of our novel label fusion method in segmenting the hippocampus in the ADNI dataset, subcortical and limbic structures in the LONI dataset, and mid-brain structures in the SATA dataset. We achieve more accurate segmentation results than both reconstruction-based and classification-based approaches. Our label fusion method is also ranked 1st in the online SATA Multi-Atlas Segmentation Challenge. PMID:26160394
Han, Lei; Wulie, Buzha; Yang, Yiling; Wang, Hongqing
2015-01-01
This study investigated a novel method of fusing visible (VIS) and infrared (IR) images with the major objective of obtaining higher-resolution IR images. Most existing image fusion methods focus only on visual performance and many fail to consider the thermal physical properties of the IR images, leading to spectral distortion in the fused image. In this study, we use the IR thermal physical property to correct the VIS image directly. Specifically, the Stefan-Boltzmann Law is used as a strong constraint to modulate the VIS image, such that the fused result shows a similar level of regional thermal energy as the original IR image, while preserving the high-resolution structural features from the VIS image. This method is an improvement over our previous study, which required VIS-IR multi-wavelet fusion before the same correction method was applied. The results of experiments show that applying this correction to the VIS image directly without multi-resolution analysis (MRA) processing achieves similar results, but is considerably more computationally efficient, thereby providing a new perspective on VIS and IR image fusion. PMID:25569749
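One plausible reading of the correction, sketched below under loud assumptions: the IR image is treated as proportional to radiant emittance (proportional to T^4 by the Stefan-Boltzmann law), and the VIS image is rescaled block-wise so its regional energy matches the IR's while its high-resolution structure is kept. The window size and the energy proxy are guesses, not the authors' formulas.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def stefan_boltzmann_modulation(vis: np.ndarray, ir: np.ndarray,
                                size: int = 15, eps: float = 1e-6) -> np.ndarray:
    """Hypothetical VIS correction: rescale the visible image so its
    regional mean matches the IR image's regional mean (taken as a
    proxy for radiant emittance ~ T^4), preserving local VIS detail."""
    mean_vis = uniform_filter(vis, size)
    mean_ir = uniform_filter(ir, size)
    return vis * (mean_ir / (mean_vis + eps))
```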
An efficient method for the fusion of light field refocused images
NASA Astrophysics Data System (ADS)
Wang, Yingqian; Yang, Jungang; Xiao, Chao; An, Wei
2018-04-01
Light field cameras have drawn much attention due to the advantage of post-capture adjustments such as refocusing after exposure. The depth of field in refocused images is always shallow because of the large equivalent aperture; as a result, a large number of multi-focus images are obtained and an all-in-focus image is demanded. Considering that most multi-focus image fusion algorithms are not designed for large numbers of source images, and that the traditional DWT-based fusion approach has serious problems when dealing with many multi-focus images, causing color distortion and ringing effects, this paper proposes an efficient multi-focus image fusion method based on the stationary wavelet transform (SWT) that can handle a large quantity of multi-focus images with shallow depths of field. We compare the SWT-based approach with the DWT-based approach in various situations, and the results demonstrate that the proposed method performs much better both visually and quantitatively.
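For contrast with the SWT method proposed here, a common baseline for all-in-focus compositing of a large refocused stack is per-pixel focus-measure selection. The sketch below uses local variance as the focus measure; it is not the paper's algorithm, and the window size is an assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def focus_stack_fusion(stack: np.ndarray, size: int = 9) -> np.ndarray:
    """Baseline all-in-focus composite for a refocused stack of shape
    (N, H, W): per pixel, pick the slice with the highest local variance
    (a standard focus measure)."""
    means = np.stack([uniform_filter(s, size) for s in stack])
    sq_means = np.stack([uniform_filter(s * s, size) for s in stack])
    focus = sq_means - means ** 2              # local variance per slice
    best = focus.argmax(axis=0)                # sharpest slice per pixel
    h, w = best.shape
    return stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]
```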
Human visual system consistent quality assessment for remote sensing image fusion
NASA Astrophysics Data System (ADS)
Liu, Jun; Huang, Junyi; Liu, Shuguang; Li, Huali; Zhou, Qiming; Liu, Junchen
2015-07-01
Quality assessment of image fusion is essential for remote sensing applications. Generally used indices require a high-spatial-resolution multispectral (MS) image for reference, which is not always available, and the assessments made with these indices may not be consistent with the Human Visual System (HVS). As an attempt to overcome this requirement and inconsistency, this paper proposes an HVS-consistent image fusion quality assessment index that works at the highest resolution without a reference MS image, using Gaussian Scale Space (GSS) technology to simulate the HVS. The spatial details and spectral information of the original and fused images are first separated in GSS, and their qualities are evaluated using the proposed spatial and spectral quality indices respectively. The overall quality is then determined without a reference MS image by combining the two proposed indices. Experimental results on various remote sensing images indicate that the proposed index is more consistent with HVS evaluation than other widely used indices that may or may not require reference images.
Isolation and characterization of target sequences of the chicken CdxA homeobox gene.
Margalit, Y; Yarus, S; Shapira, E; Gruenbaum, Y; Fainsod, A
1993-01-01
The DNA binding specificity of the chicken homeodomain protein CDXA was studied. Using a CDXA-glutathione-S-transferase fusion protein, DNA fragments containing the binding site for this protein were isolated. The sources of DNA were oligonucleotides with random sequence and chicken genomic DNA. The DNA fragments isolated were sequenced and tested in DNA binding assays. Sequencing revealed that most DNA fragments are AT rich, which is a common feature of homeodomain binding sites. By electrophoretic mobility shift assays it was shown that the different target sequences isolated bind to the CDXA protein with different affinities. The specific sequences bound by the CDXA protein in the genomic fragments isolated were determined by DNase I footprinting. From the footprinted sequences, the CDXA consensus binding site was determined: the CDXA protein binds the consensus sequence A, A/T, T, A/T, A, T, A/G. The CAUDAL binding site in the ftz promoter is also included in this consensus sequence. When tested, some of the genomic target sequences were capable of enhancing the transcriptional activity of reporter plasmids when introduced into CDXA-expressing cells. This study determined the DNA sequence specificity of the CDXA protein and also shows that this protein can further activate transcription in cells in culture. PMID:7909943
A Remote Sensing Image Fusion Method based on adaptive dictionary learning
NASA Astrophysics Data System (ADS)
He, Tongdi; Che, Zongxi
2018-01-01
This paper discusses a remote sensing fusion method based on adaptive sparse representation (ASP) that provides improved spectral information, reduces data redundancy and decreases system complexity. First, the training sample set is formed by taking random blocks from the images to be fused, the dictionary is constructed from the training samples, and the remaining terms are clustered by iterated processing at each step to obtain the complete dictionary. Second, a self-adaptive weighted-coefficient rule based on regional energy is used to select the feature fusion coefficients and complete the reconstruction of the image blocks. Finally, the reconstructed image blocks are rearranged and averaged to obtain the final fused images. Experimental results show that the proposed method is superior to other traditional remote sensing image fusion methods in both spectral information preservation and spatial resolution.
Li, You; Heavican, Tayla B.; Vellichirammal, Neetha N.; Iqbal, Javeed
2017-01-01
The RNA-Seq technology has revolutionized transcriptome characterization not only by accurately quantifying gene expression, but also by enabling the identification of novel transcripts such as chimeric fusion transcripts. These 'fusion' or 'chimeric' transcripts have improved the diagnosis and prognosis of several tumors, and have led to the development of novel therapeutic regimens. Fusion transcript detection is currently accomplished by several software packages, relying primarily on sequence alignment algorithms. The alignment of sequencing reads from fusion transcript loci in cancer genomes can be highly challenging due to incorrect mapping induced by genomic alterations, thereby limiting the performance of alignment-based fusion transcript detection methods. Here, we developed a novel alignment-free method, ChimeRScope, that accurately predicts fusion transcripts based on the gene fingerprint (as k-mers) profiles of the RNA-Seq paired-end reads. Results on published datasets and in-house cancer cell line datasets, followed by experimental validations, demonstrate that ChimeRScope consistently outperforms other popular methods irrespective of read lengths and sequencing depth. More importantly, results on our in-house datasets show that ChimeRScope is a better tool, capable of identifying novel fusion transcripts with potential oncogenic functions. ChimeRScope is accessible as a standalone software at (https://github.com/ChimeRScope/ChimeRScope/wiki) or via the Galaxy web-interface at (https://galaxy.unmc.edu/). PMID:28472320
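The alignment-free fingerprint idea can be illustrated with a toy k-mer lookup. This is a sketch in the spirit of ChimeRScope, not its actual implementation; the k value and the set-based matching are assumptions.

```python
def kmer_fingerprint(read: str, k: int = 18) -> set:
    """Decompose a read into its k-mers (the 'fingerprint' of the toy)."""
    return {read[i:i + k] for i in range(len(read) - k + 1)}

def is_fusion_candidate(read: str, fp_gene_a: set, fp_gene_b: set,
                        k: int = 18) -> bool:
    """A read whose k-mers hit the fingerprint sets of two different
    genes is flagged as a fusion-transcript candidate."""
    kmers = kmer_fingerprint(read, k)
    return bool(kmers & fp_gene_a) and bool(kmers & fp_gene_b)
```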
Rennert, J; Georgieva, M; Schreyer, A G; Jung, W; Ross, C; Stroszczynski, C; Jung, E M
2011-01-01
To evaluate whether image fusion of contrast-enhanced ultrasound (CEUS) with CT or MRI affects the diagnosis and characterization of liver lesions, or the therapeutic strategy of surgical or interventional procedures, compared with the preliminary diagnosis. In a retrospective study, the fused scans of CEUS with contrast-enhanced CT or MRI of 100 patients (71 male, mean age 59 years, range 0.3-85 years) with benign or malignant liver lesions were evaluated. Fundamental B-scan, color Doppler imaging and CEUS were performed in all patients by an experienced examiner using a multifrequency convex transducer (1-5 MHz, LOGIQ 9/GE) and volume navigation (VNav). After bolus injections of up to 2.4 ml SonoVue® (BRACCO, Italy), digital raw data were stored as cine loops of up to 5 min. In 74 patients, CEUS was fused with a pre-existing ceCT; in 26 patients a ceMRI was used. In all 100 patients (100%) the image quality in all modalities (ceCT, ceMRI and CEUS) was excellent or had only minor diagnostic limitations. Regarding the number of lesions revealed by image fusion of CEUS/ceCT/ceMRI versus the preceding diagnostic method, concordant results were found in 84 patients. In 12 patients, additional lesions were found using fusion imaging, subsequently changing the therapeutic strategy. In 15 of 21 patients with either concordant or discordant results regarding the number of lesions, image fusion allowed a definite diagnosis thanks to continuous documentation of the tumor's microcirculation and contrast enhancement. A significant association (p < 0.05) was found between image fusion of CEUS with either ceCT or ceMRI and a subsequent change of therapeutic strategy. Image fusion with volume navigation (VNav) of CEUS with ceCT or ceMRI frequently allows a definite localization and diagnosis of hepatic lesions in patients with primary hepatic carcinoma or metastatic disease, and this may change the therapeutic strategy in many patients with hepatic lesions.
A Multi-Objective Decision Making Approach for Solving the Image Segmentation Fusion Problem.
Khelifi, Lazhar; Mignotte, Max
2017-08-01
Image segmentation fusion is defined as the set of methods which aim at merging several image segmentations in a manner that takes full advantage of the complementarity of each one. Previous research in this field has been impeded by the difficulty of identifying a single appropriate segmentation fusion criterion providing the best possible, i.e., the most informative, fusion result. In this paper, we propose a new model of image segmentation fusion based on multi-objective optimization which can mitigate this problem and obtain a final improved segmentation. Our fusion framework incorporates the dominance concept in order to efficiently combine and optimize two complementary segmentation criteria, namely the global consistency error and the F-measure (precision-recall) criterion. To this end, we present a hierarchical and efficient way to optimize the multi-objective consensus energy function related to this fusion model, exploiting a simple and deterministic iterative relaxation strategy that combines the different image segments. This step is followed by a decision-making task based on the so-called "technique for order preference by similarity to ideal solution" (TOPSIS). Results obtained on two publicly available databases with manual ground-truth segmentations clearly show that our multi-objective energy-based model gives better results than the classical mono-objective one.
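The dominance concept at the heart of the multi-objective formulation is compact enough to sketch. Here lower objective values are assumed better, with the two criteria standing in for the global consistency error and an F-measure-derived cost; this is a generic Pareto test, not the paper's full optimizer.

```python
import numpy as np

def dominates(f1: np.ndarray, f2: np.ndarray) -> bool:
    """Solution 1 dominates solution 2 if it is no worse on every
    criterion and strictly better on at least one (lower is better)."""
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

def pareto_front(objectives: np.ndarray) -> np.ndarray:
    """Indices of the non-dominated candidates; `objectives` has shape
    (n_candidates, n_criteria)."""
    n = objectives.shape[0]
    keep = [i for i in range(n)
            if not any(dominates(objectives[j], objectives[i])
                       for j in range(n) if j != i)]
    return np.array(keep)
```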
NASA Astrophysics Data System (ADS)
Paramanandham, Nirmala; Rajendiran, Kishore
2018-01-01
A novel image fusion technique is presented for integrating infrared and visible images. Integration of images from the same or different sensing modalities can deliver information that cannot be obtained by viewing the sensor outputs individually and consecutively. In this paper, a swarm intelligence based image fusion technique in the discrete cosine transform (DCT) domain is proposed for surveillance applications; it integrates the infrared image with the visible image to generate a single informative fused image. Particle swarm optimization (PSO) is used in the fusion process to obtain the optimized weighting factors. These optimized weighting factors are used for fusing the DCT coefficients of the visible and infrared images. The inverse DCT is applied to obtain the initial fused image. An enhanced fused image is obtained through adaptive histogram equalization for better visual understanding and target detection. The proposed framework is evaluated using quantitative metrics such as standard deviation, spatial frequency, entropy and mean gradient. The experimental results demonstrate that the proposed algorithm outperforms many other state-of-the-art techniques reported in the literature.
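As a rough sketch of the weighting scheme described above (an assumed simplification, not the authors' implementation), the fragment below blends the DCT coefficients of a visible and an infrared image with a scalar weight w and tunes w with a toy particle swarm that maximizes the entropy of the fused image; a realistic setup would use more particles, more iterations, and the paper's full metric set.

```python
import numpy as np
from scipy.fft import dctn, idctn

def fuse(vis, ir, w):
    """Blend DCT coefficients with weight w and invert the transform."""
    c = w * dctn(vis, norm='ortho') + (1 - w) * dctn(ir, norm='ortho')
    return idctn(c, norm='ortho')

def entropy(img):
    hist, _ = np.histogram(img, bins=256)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def pso_weight(vis, ir, n_particles=8, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 1, n_particles)   # candidate weights
    v = np.zeros(n_particles)            # velocities
    pbest = x.copy()
    pbest_f = np.array([entropy(fuse(vis, ir, w)) for w in x])
    gbest = pbest[np.argmax(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, 0.0, 1.0)
        f = np.array([entropy(fuse(vis, ir, w)) for w in x])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmax(pbest_f)]
    return gbest
```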
Broder, C C; Berger, E A
1993-01-01
The third complementarity-determining region (CDR3) within domain 1 of the human CD4 molecule has been suggested to play a critical role in membrane fusion mediated by the interaction of CD4 with the human immunodeficiency virus type 1 (HIV-1) envelope glycoprotein. To analyze in detail the role of CDR3 and adjacent regions in the fusion process, we used cassette mutagenesis to construct a panel of 30 site-directed mutations between residues 79 and 96 of the full-length CD4 molecule. The mutant proteins were transiently expressed by using recombinant vaccinia virus vectors and were analyzed for cell surface expression, recombinant gp120-binding activity, and overall structural integrity as assessed by reactivity with a battery of anti-CD4 monoclonal antibodies. Cells expressing the CD4 mutants were assayed for their ability to form syncytia when mixed with cells expressing the HIV-1 envelope glycoprotein. Surprisingly in view of published data from others, most of the mutations had little effect on syncytium-forming activity. Normal fusion was observed in 21 mutants, including substitution of human residues 85 to 95 with the corresponding sequences from either chimpanzee, rhesus, or mouse CD4; a panel of Ser-Arg double insertions after each residue from 86 to 91; and a number of other charge, hydrophobic, and proline substitutions and insertions within this region. The nine mutants that showed impaired fusion all displayed defective gp120 binding and disruption of overall structural integrity. In further contrast with results of other workers, we observed that transformant human cell lines expressing native chimpanzee or rhesus CD4 efficiently formed syncytia when mixed with cells expressing the HIV-1 envelope glycoprotein. These data refute the conclusion that certain mutations in the CDR3 region of CD4 abolish cell fusion activity, and they suggest that a wide variety of sequences can be functionally tolerated in this region, including those from highly divergent mammalian species. Syncytium formation mediated by several of the CDR3 mutants was partially or completely resistant to inhibition by the CDR3-directed monoclonal antibody L71, suggesting that the corresponding epitope is not directly involved in the fusion process. (ABSTRACT TRUNCATED AT 400 WORDS) PMID:8419649
Avian sarcoma virus 17 carries the jun oncogene.
Maki, Y; Bos, T J; Davis, C; Starbuck, M; Vogt, P K
1987-01-01
Biologically active molecular clones of avian sarcoma virus 17 (ASV 17) contain a replication-defective proviral genome of 3.5 kilobases (kb). The genome retains partial gag and env sequences, which flank a cell-derived putative oncogene of 0.93 kb, termed jun. The jun gene lacks preserved coding domains of tyrosine-specific protein kinases. It also shows no significant nucleic acid homology with other known oncogenes. The probable transformation-specific protein in ASV 17-transformed cells is a 55-kDa gag-jun fusion product. PMID:3033666
Application of polymer sensitive MRI sequence to localization of EEG electrodes.
Butler, Russell; Gilbert, Guillaume; Descoteaux, Maxime; Bernier, Pierre-Michel; Whittingstall, Kevin
2017-02-15
The growing popularity of simultaneous electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) opens up the possibility of imaging EEG electrodes while the subject is in the scanner. Such information could be useful for improving the fusion of EEG-fMRI datasets. Here, we report for the first time how an ultra-short echo time (UTE) MR sequence can image the materials of an MR-compatible EEG cap, finding that electrodes and some parts of the wiring are visible in a high-resolution UTE image. Using these images, we developed a segmentation procedure to obtain electrode coordinates based on voxel intensity from the raw UTE, using hand-labeled coordinates as the starting point. We were able to visualize and segment 95% of EEG electrodes using a short (3.5 min) UTE sequence. We provide scripts and template images so this approach can now be easily implemented to obtain precise, subject-specific EEG electrode positions while adding minimal acquisition time to the simultaneous EEG-fMRI protocol. T1 gel artifacts are not robust enough to localize all electrodes across subjects; the polymers composing Brainvision cap electrodes are not visible on a T1; and adding T1-visible materials to the EEG cap is not always possible. We therefore consider our method superior to existing methods for obtaining electrode positions in the scanner, as it is hardware-free and should work on a wide range of materials (caps). EEG electrode positions are obtained with high precision and no additional hardware. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Jun; Song, Minghui; Peng, Yuanxi
2018-03-01
Current infrared and visible image fusion methods do not achieve adequate information extraction, i.e., they cannot extract the target information from infrared images while retaining the background information from visible images. Moreover, most of them have high complexity and are time-consuming. This paper proposes an efficient image fusion framework for infrared and visible images on the basis of robust principal component analysis (RPCA) and compressed sensing (CS). The novel framework consists of three phases. First, RPCA decomposition is applied to the infrared and visible images to obtain their sparse and low-rank components, which represent the salient features and background information of the images, respectively. Second, the sparse and low-rank coefficients are fused by different strategies. On the one hand, the measurements of the sparse coefficients are obtained by the random Gaussian matrix, and they are then fused by the standard deviation (SD) based fusion rule. Next, the fused sparse component is obtained by reconstructing the result of the fused measurement using the fast continuous linearized augmented Lagrangian algorithm (FCLALM). On the other hand, the low-rank coefficients are fused using the max-absolute rule. Subsequently, the fused image is obtained by superposing the fused sparse and low-rank components. For comparison, several popular fusion algorithms are tested experimentally. By comparing the fused results subjectively and objectively, we find that the proposed framework can extract the infrared targets while retaining the background information in the visible images. Thus, it exhibits state-of-the-art performance in terms of both fusion effects and timeliness.
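A minimal sketch of this decomposition-and-fusion split is given below, with two simplifications that should be flagged: the RPCA step is a bare-bones principal-component-pursuit loop rather than a tuned solver, and the sparse components are fused directly by the max-absolute rule instead of the paper's compressed-sensing measurements with an SD-based rule and FCLALM reconstruction.

```python
import numpy as np

def rpca(M, iters=50):
    """Crude low-rank + sparse split by alternating shrinkage (sketch only)."""
    lam = 1.0 / np.sqrt(max(M.shape))
    mu = 0.25 * M.size / (np.abs(M).sum() + 1e-12)
    S = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0)) @ Vt                    # singular-value shrinkage
        S = np.sign(M - L) * np.maximum(np.abs(M - L) - lam / mu, 0)  # soft threshold
    return L, S

def fuse_ir_vis(ir, vis):
    L_ir, S_ir = rpca(ir)
    L_vis, S_vis = rpca(vis)
    S = np.where(np.abs(S_ir) >= np.abs(S_vis), S_ir, S_vis)  # salient targets (simplified)
    L = np.where(np.abs(L_ir) >= np.abs(L_vis), L_ir, L_vis)  # max-absolute rule on backgrounds
    return L + S
```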
Segmentation Fusion Techniques with Application to Plenoptic Images: A Survey.
NASA Astrophysics Data System (ADS)
Evin, D.; Hadad, A.; Solano, A.; Drozdowicz, B.
2016-04-01
The segmentation of anatomical and pathological structures plays a key role in the characterization of clinically relevant evidence from digital images. Recently, plenoptic imaging has emerged as a new promise to enrich the diagnostic potential of conventional photography. Since a plenoptic image comprises a set of slightly different versions of the target scene, we propose to make use of those images to improve segmentation quality relative to the single-image segmentation scenario. The problem of finding a segmentation solution from multiple images of a single scene is called segmentation fusion. This paper reviews the issue of segmentation fusion in order to find solutions that can be applied to plenoptic images, particularly images from the ophthalmological domain.
[An improved low spectral distortion PCA fusion method].
Peng, Shi; Zhang, Ai-Wu; Li, Han-Lun; Hu, Shao-Xing; Meng, Xian-Gang; Sun, Wei-Dong
2013-10-01
Aiming at the spectral distortion produced in the PCA fusion process, the present paper proposes an improved low-spectral-distortion PCA fusion method. This method uses the NCUT (normalized cut) image segmentation algorithm to partition a complex hyperspectral remote sensing image into multiple sub-images in order to increase the separability of samples, which weakens the spectral distortion of traditional PCA fusion. Pixel-similarity weighting matrices and masks are produced using graph theory and clustering theory. These masks are used to cut the hyperspectral image and the high-resolution image into corresponding sub-region objects. All corresponding sub-region objects of the hyperspectral image and the high-resolution image are fused using the PCA method, and all sub-regional fusion results are spliced together to produce a new image. In the experiment, Hyperion hyperspectral data and RapidEye data were used. The experimental results show that the proposed method has the same ability to enhance spatial resolution as traditional PCA fusion and a greater ability to preserve spectral fidelity.
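For orientation, the per-region PCA substitution at the core of such methods can be sketched as follows; the NCUT partitioning and mask generation described above are omitted, so this is a generic whole-image PCA pan-sharpening step, not the authors' code.

```python
import numpy as np

def pca_pansharpen(ms, pan):
    """ms: (H, W, B) multispectral; pan: (H, W) panchromatic, co-registered."""
    H, W, B = ms.shape
    X = ms.reshape(-1, B).astype(float)
    mean = X.mean(0)
    Xc = X - mean
    cov = np.cov(Xc, rowvar=False)
    _, vecs = np.linalg.eigh(cov)
    vecs = vecs[:, ::-1]                  # principal components, variance descending
    pcs = Xc @ vecs
    p = pan.reshape(-1).astype(float)
    # moment-match pan to the first principal component, then substitute it
    pcs[:, 0] = (p - p.mean()) / p.std() * pcs[:, 0].std() + pcs[:, 0].mean()
    fused = pcs @ vecs.T + mean           # inverse transform (vecs is orthonormal)
    return fused.reshape(H, W, B)
```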
Kashimura, Hiroshi; Ogasawara, Kuniaki; Arai, Hiroshi; Beppu, Takaaki; Inoue, Takashi; Takahashi, Tsutomu; Matsuda, Koichi; Takahashi, Yujiro; Fujiwara, Shunrou; Ogawa, Akira
2008-09-01
A fusion technique for magnetic resonance (MR) angiography and MR imaging was developed to help assess the peritumoral angioarchitecture during surgical planning for meningioma. Three-dimensional time-of-flight (3D-TOF) and 3D-spoiled gradient recalled (SPGR) datasets were obtained from 10 patients with intracranial meningioma, and fused using newly developed volume registration and visualization software. Maximum intensity projection (MIP) images from 3D-TOF MR angiography and axial SPGR MR imaging were displayed at the same time on the monitor. Selecting a vessel on the real-time MIP image indicated the corresponding points on the axial image automatically. Fusion images showed displacement of the anterior cerebral or middle cerebral artery in 7 patients and encasement of the anterior cerebral arteries in 1 patient, with no relationship between the main arterial trunk and tumor in 2 patients. Fusion of MR angiography and MR imaging can clarify relationships between the intracranial vasculature and meningioma, and may be helpful for surgical planning for meningioma.
NASA Astrophysics Data System (ADS)
Liu, Chunhui; Zhang, Duona; Zhao, Xintao
2018-03-01
Saliency detection in synthetic aperture radar (SAR) images is a difficult problem. This paper proposes a multitask saliency detection (MSD) model for SAR images. We extract four features of the SAR image (intensity, orientation, uniqueness, and global contrast) as the input of the MSD model. The saliency map is generated by multitask sparsity pursuit, which integrates the multiple features collaboratively. Detection of features at different scales is also taken into consideration. Subjective and objective evaluations of the MSD model verify its effectiveness. Based on the saliency maps obtained by the MSD model, we apply the saliency map of the SAR image to SAR and color optical image fusion. The experimental results on real data show that the saliency map obtained by the MSD model helps to improve the fusion effect, and the salient areas in the SAR image can be highlighted in the fusion results.
NASA Astrophysics Data System (ADS)
Zhou, Xiran; Liu, Jun; Liu, Shuguang; Cao, Lei; Zhou, Qiming; Huang, Huawen
2014-02-01
High spatial resolution and spectral fidelity are basic standards for evaluating an image fusion algorithm. Numerous fusion methods for remote sensing images have been developed. Some of these methods are based on the intensity-hue-saturation (IHS) transform and the generalized IHS (GIHS), which may cause serious spectral distortion. Spectral distortion in the GIHS is proven to result from changes in saturation during fusion. Therefore, reducing such changes can achieve high spectral fidelity. A GIHS-based spectral preservation fusion method that can theoretically reduce spectral distortion is proposed in this study. The proposed algorithm consists of two steps. The first step is spectral modulation (SM), which uses the Gaussian function to extract spatial details and conduct SM of multispectral (MS) images. This method yields a desirable visual effect without requiring histogram matching between the panchromatic image and the intensity of the MS image. The second step uses the Gaussian convolution function to restore lost edge details during SM. The proposed method is proven effective and shown to provide better results compared with other GIHS-based methods.
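For reference, the GIHS baseline that the proposed method modulates can be written in a few lines: every multispectral band receives the same injected detail P - I, where I is the per-pixel mean of the bands (a generic sketch, not the authors' spectral-modulation code).

```python
import numpy as np

def gihs(ms, pan):
    """ms: (H, W, B) multispectral; pan: (H, W). Returns detail-injected bands."""
    I = ms.mean(axis=2)                   # intensity component
    return ms + (pan - I)[:, :, None]     # same spatial detail added to each band
```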
Kohoutek, Tobias K.; Mautz, Rainer; Wegner, Jan D.
2013-01-01
We present a novel approach for autonomous location estimation and navigation in indoor environments using range images and prior scene knowledge from a GIS database (CityGML). What makes this task challenging is the arbitrary relative spatial relation between the GIS and the Time-of-Flight (ToF) range camera, further complicated by a markerless configuration. We propose to estimate the camera's pose solely based on matching of GIS objects and their detected locations in image sequences. We develop a coarse-to-fine matching strategy that is able to match point clouds without any initial parameters. Experiments with a state-of-the-art ToF point cloud show that our proposed method delivers an absolute camera position with decimeter accuracy, which is sufficient for many real-world applications (e.g., collision avoidance). PMID:23435055
Canova-Davis, E; Eng, M; Mukku, V; Reifsnyder, D H; Olson, C V; Ling, V T
1992-01-01
Recombinant DNA techniques were used to biosynthesize human insulin-like growth factor I (hIGF-I) as a fusion protein wherein the fusion polypeptide is an IgG-binding moiety derived from staphylococcal protein A. This fusion protein is produced in Escherichia coli and secreted into the fermentation broth. In order to release mature recombinant-derived hIGF-I (rhIGF-I), the fusion protein is treated with hydroxylamine, which cleaves a susceptible Asn-Gly bond that has been engineered into the fusion protein gene. Reversed-phase h.p.l.c. was used to estimate the purity of the rhIGF-I preparations, especially for the quantification of the methionine sulphoxide-containing variant. It was determined that hydroxylamine cleavage of the fusion protein produced, as a side reaction, hydroxamates of the asparagine and glutamine residues in rhIGF-I. Although isoelectric focusing was effective in detecting the hydroxamate variants, and reversed-phase h.p.l.c. in producing enriched fractions of them, ion-exchange chromatography was a more definitive procedure, as it allowed quantification and facile removal of these variants. The identity of the variants as hydroxamates was established by Staphylococcus aureus V8 proteinase digestion followed by m.s., as the modification was transparent to amino acid and N-terminal sequence analyses. The biological activity of rhIGF-I was established by its ability to incorporate [3H]thymidine into the DNA of BALB/c 3T3 cells and by a radioreceptor assay utilizing human placental membranes. Both assays demonstrate that the native, recombinant and methionine sulphoxide and hydroxamate IGF-I variants are essentially equipotent. PMID:1637301
Sugita, Shintaro; Arai, Yasuhito; Tonooka, Akiko; Hama, Natsuko; Totoki, Yasushi; Fujii, Tomoki; Aoyama, Tomoyuki; Asanuma, Hiroko; Tsukahara, Tomohide; Kaya, Mitsunori; Shibata, Tatsuhiro; Hasegawa, Tadashi
2014-11-01
Differential diagnosis of small round cell sarcomas (SRCSs) grouped under the Ewing sarcoma family of tumors (ESFT) can be a challenging situation for pathologists. Recent studies have revealed that some groups of Ewing-like sarcoma show typical ESFT morphology but lack any EWSR1-ETS gene fusions. Here we identified a novel gene fusion, CIC-FOXO4, in a case of Ewing-like sarcoma with a t(X;19)(q13;q13.3) translocation. The patient was a 63-year-old man who had an asymptomatic, 30-mm, well-demarcated, intramuscular mass in his right posterior neck, and imaging findings suggested a diagnosis of high-grade sarcoma. He was treated with complete resection and subsequent radiotherapy and chemotherapy. He was alive without local recurrence or distant metastasis 6 months after the operation. Histologic examination revealed SRCS with abundant desmoplastic fibrous stroma suggesting a desmoplastic small round cell tumor. Immunohistochemical analysis showed weak to moderate and partial staining for MIC2 (CD99) and WT1, respectively. High-throughput transcriptome sequencing revealed a gene fusion, and the genomic rearrangement between the CIC and FOXO4 genes was identified by fluorescence in situ hybridization. Aside from the desmoplastic stroma, the CIC-FOXO4 fusion sarcoma showed morphologic and immunohistochemical similarity to ESFT and Ewing-like sarcomas, including the recently described CIC-DUX4 fusion sarcoma. Although clinicopathologic analysis with additional cases is necessary, we conclude that CIC-FOXO4 fusion sarcoma is a new type of Ewing-like sarcoma that has a specific genetic signature. These findings have important implications for the differential diagnosis of SRCS.
Lee, Myunggyo; Lee, Kyubum; Yu, Namhee; Jang, Insu; Choi, Ikjung; Kim, Pora; Jang, Ye Eun; Kim, Byounggun; Kim, Sunkyu; Lee, Byungwook; Kang, Jaewoo; Lee, Sanghyuk
2017-01-04
Fusion gene is an important class of therapeutic targets and prognostic markers in cancer. ChimerDB is a comprehensive database of fusion genes encompassing analysis of deep sequencing data and manual curations. In this update, the database coverage was enhanced considerably by adding two new modules of The Cancer Genome Atlas (TCGA) RNA-Seq analysis and PubMed abstract mining. ChimerDB 3.0 is composed of three modules of ChimerKB, ChimerPub and ChimerSeq. ChimerKB represents a knowledgebase including 1066 fusion genes with manual curation that were compiled from public resources of fusion genes with experimental evidences. ChimerPub includes 2767 fusion genes obtained from text mining of PubMed abstracts. ChimerSeq module is designed to archive the fusion candidates from deep sequencing data. Importantly, we have analyzed RNA-Seq data of the TCGA project covering 4569 patients in 23 cancer types using two reliable programs of FusionScan and TopHat-Fusion. The new user interface supports diverse search options and graphic representation of fusion gene structure. ChimerDB 3.0 is available at http://ercsb.ewha.ac.kr/fusiongene/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Fusion of laser and image sensory data for 3-D modeling of the free navigation space
NASA Technical Reports Server (NTRS)
Mass, M.; Moghaddamzadeh, A.; Bourbakis, N.
1994-01-01
A fusion technique which combines two different types of sensory data for 3-D modeling of a navigation space is presented. The sensory data are generated by a vision camera and a laser scanner. The problem of different resolutions between these sensory data was solved by reducing the image resolution, fusing the different data, and using a fuzzy image segmentation technique.
Henninger, B.; Putzer, D.; Kendler, D.; Uprimny, C.; Virgolini, I.; Gunsilius, E.; Bale, R.
2012-01-01
Aim. The purpose of this study was to evaluate the accuracy of 2-deoxy-2-[fluorine-18]fluoro-D-glucose (FDG) positron emission tomography (PET), computed tomography (CT), and software-based image fusion of both modalities in the imaging of non-Hodgkin's lymphoma (NHL) and Hodgkin's disease (HD). Methods. 77 patients with NHL (n = 58) or HD (n = 19) underwent an FDG PET scan, a contrast-enhanced CT, and a subsequent digital image fusion during initial staging or follow-up. 109 examinations of each modality were evaluated and compared to each other. Conventional staging procedures, other imaging techniques, laboratory screening, and follow-up data constituted the reference standard for comparison with image fusion. Sensitivity and specificity were calculated for CT and PET separately. Results. Sensitivity and specificity for detecting malignant lymphoma were 90% and 76% for CT and 94% and 91% for PET, respectively. A lymph node region-based analysis (comprising 14 defined anatomical regions) revealed a sensitivity of 81% and a specificity of 97% for CT and 96% and 99% for FDG PET, respectively. Only three of 109 image fusion findings needed further evaluation (false positives). Conclusion. Digital fusion of PET and CT improves the accuracy of staging, restaging, and therapy monitoring in patients with malignant lymphoma and may reduce the need for invasive diagnostic procedures. PMID:22654631
Molecular imaging of malignant tumor metabolism: whole-body image fusion of DWI/CT vs. PET/CT.
Reiner, Caecilia S; Fischer, Michael A; Hany, Thomas; Stolzmann, Paul; Nanz, Daniel; Donati, Olivio F; Weishaupt, Dominik; von Schulthess, Gustav K; Scheffel, Hans
2011-08-01
To prospectively investigate the technical feasibility and performance of image fusion of whole-body diffusion-weighted imaging (wbDWI) and computed tomography (CT) to detect metastases, using hybrid positron emission tomography/computed tomography (PET/CT) as the reference standard. Fifty-two patients (60 ± 14 years; 18 women) with different malignant tumor diseases, examined by PET/CT for clinical reasons, consented to undergo additional wbDWI at 1.5 Tesla. WbDWI was performed using diffusion-weighted single-shot echo-planar imaging during free breathing. Images at b = 0 s/mm² and b = 700 s/mm² were acquired, and apparent diffusion coefficient (ADC) maps were generated. Image fusion of wbDWI and CT (from the PET/CT scan) was performed, yielding fused wbDWI/CT image data. One radiologist rated the success of image fusion and diagnostic image quality. The presence or absence of metastases on wbDWI/CT fused images was evaluated together with the separate wbDWI and CT images by two different, independent radiologists blinded to the results from PET/CT. Detection rate and positive predictive value for diagnosing metastases were calculated. PET/CT examinations were used as the reference standard. PET/CT identified 305 malignant lesions in 39 of 52 (75%) patients. WbDWI/CT image fusion was technically successful and yielded diagnostic image quality in 73% and 92% of patients, respectively. Interobserver agreement for the evaluation of wbDWI/CT images was κ = 0.78. WbDWI/CT identified 270 metastases in 43 of 52 (83%) patients. The overall detection rate and positive predictive value of wbDWI/CT were 89% (95% CI, 0.85-0.92) and 94% (95% CI, 0.92-0.97), respectively. WbDWI/CT image fusion is technically feasible in a clinical setting and allows the diagnostic assessment of metastatic tumor disease, detecting nine of 10 lesions as compared with PET/CT. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.
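For readers unfamiliar with the two-point ADC estimate implied by the acquisition above, the map is conventionally computed from the signal pair as follows (standard formula; the abstract itself does not spell it out):

```latex
\[
  \mathrm{ADC} \;=\; \frac{1}{b}\,\ln\frac{S_0}{S_b},
  \qquad b = 700~\mathrm{s/mm^2},
\]
```

where \(S_0\) and \(S_b\) are the voxel signal intensities at \(b = 0\) and \(b = 700~\mathrm{s/mm^2}\), respectively.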
Li, Yi; Sun, Hong-chen; Guo, Xue-jun; Feng, Shu-zhang
2005-02-01
To clone the recombinant fusion gene of the Escherichia coli heat-labile enterotoxin B subunit (Ltb) and the Actinobacillus actinomycetemcomitans fimbria-associated protein (Fap). Two pairs of primers were designed for PCR according to the known sequences of ltb and fap. The ltb and fap genes were obtained by PCR amplification from plasmid EWD299 of Escherichia coli and from Actinobacillus actinomycetemcomitans 310 DNA, respectively, and were fused by PCR. The fusion gene ltb-fap was cloned into plasmid pET28a(+). The recombinant plasmid pET28a ltb-fap was transformed into Escherichia coli DH5alpha. The recombinant was screened and identified by restriction enzyme digestion and PCR, and the cloned gene was sequenced. The ltb-fap fusion gene, approximately 531 bp in size, was obtained successfully and identified by PCR, restriction enzyme digestion and sequence analysis. The vector pET28a ltb-fap was obtained.
Conceptual design study of Fusion Experimental Reactor (FY86 FER): Safety
NASA Astrophysics Data System (ADS)
Seki, Yasushi; Iida, Hiromasa; Honda, Tsutomu
1987-08-01
This report describes the study on safety for the FER (Fusion Experimental Reactor), which has been designed as a next-step machine to the JT-60. Though the final purpose of this study is to form an image of the design basis accidents and the maximum credible accident of the FER plant system and to assess their risks and probabilities, the emphasis of this year's study is placed on the fuel-gas circulation system, where the tritium inventory is largest. The report consists of two chapters. The first chapter summarizes the FER system and describes an FMEA (Failure Mode and Effect Analysis) and the related accident progression sequences for the FER plant system as a whole. The second chapter is focused on the fuel-gas circulation system, including purification, isotope separation and storage. The probability of risk is assessed by the probabilistic risk analysis (PRA) procedure based on FMEA, ETA and FTA.
Improving Echo-Guided Procedures Using an Ultrasound-CT Image Fusion System.
Diana, Michele; Halvax, Peter; Mertz, Damien; Legner, Andras; Brulé, Jean-Marcel; Robinet, Eric; Mutter, Didier; Pessaux, Patrick; Marescaux, Jacques
2015-06-01
Image fusion between ultrasound (US) and computed tomography (CT) or magnetic resonance imaging can increase operator accuracy in targeting liver lesions, particularly when those are undetectable with US alone. We have developed a modular gel to simulate hepatic solid lesions for educational purposes in imaging and minimally invasive ablation techniques. We aimed to assess the impact of image fusion in targeting artificial hepatic lesions during the hands-on part of 2 courses (basic and advanced) in hepatobiliary surgery. Under US guidance, 10 fake tumors of various sizes were created in the livers of 2 pigs by percutaneous injection of a biocompatible gel engineered to be hyperdense on CT scanning and barely detectable on US. A CT scan was obtained and a CT-US image fusion was performed using the ACUSON S3000 US system (Siemens Healthcare, Germany). A total of 12 blinded course attendees were asked in turn to perform a 10-minute liver scan with US alone followed by a 10-minute scan using image fusion. Using US alone, the expert managed to identify all lesions successfully. The true positive rate for course attendees with US alone was 14/36 and 2/24 in the advanced and basic courses, respectively. The total number of false positives identified was 26. With image fusion, the rate of true positives significantly increased to 31/36 (P < .001) in the advanced group and 16/24 in the basic group (P < .001). The total number of false positives, considering all participants, decreased to 4 (P < .001). Image fusion significantly increases accuracy in targeting hepatic lesions and might improve echo-guided procedures. © The Author(s) 2015.
ERIC Educational Resources Information Center
Kim, Deok-Hwan; Chung, Chin-Wan
2003-01-01
Discusses the collection fusion problem of image databases, concerned with retrieving relevant images by content based retrieval from image databases distributed on the Web. Focuses on a metaserver which selects image databases supporting similarity measures and proposes a new algorithm which exploits a probabilistic technique using Bayesian…
Wang, Hongzhi; Yushkevich, Paul A.
2013-01-01
Label fusion based multi-atlas segmentation has proven to be one of the most competitive techniques for medical image segmentation. This technique transfers segmentations from expert-labeled images, called atlases, to a novel image using deformable image registration. Errors produced by label transfer are further reduced by label fusion, which combines the results produced by all atlases into a consensus solution. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity is a simple and highly effective label fusion technique. However, one limitation of most weighted voting methods is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this problem, we recently developed the joint label fusion technique and the corrective learning technique, which won first place in the 2012 MICCAI Multi-Atlas Labeling Challenge and was one of the top performers in the 2013 MICCAI Segmentation: Algorithms, Theory and Applications (SATA) challenge. To make our techniques more accessible to the scientific research community, we describe an Insight Toolkit-based open source implementation of our label fusion methods. Our implementation extends our methods to work with multi-modality imaging data and is more suitable for segmentation problems with multiple labels. We demonstrate the usage of our tools by applying them to the 2012 MICCAI Multi-Atlas Labeling Challenge brain image dataset and the 2013 SATA challenge canine leg image dataset. We report the best results on these two datasets so far. PMID:24319427
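The spatially varying weighted-voting baseline discussed above can be sketched compactly. This is an illustrative stand-in, not the ITK implementation: real joint label fusion uses patch-based similarities and explicitly models correlated atlas errors, which this toy per-voxel version does not.

```python
import numpy as np

def weighted_vote(target, atlas_imgs, atlas_segs, n_labels, beta=1.0):
    """target: (H, W); atlas_imgs/atlas_segs: lists of (H, W) arrays already
    registered to the target. Returns the consensus segmentation."""
    votes = np.zeros(target.shape + (n_labels,))
    for img, seg in zip(atlas_imgs, atlas_segs):
        w = np.exp(-beta * (img - target) ** 2)   # per-voxel intensity-similarity weight
        for label in range(n_labels):
            votes[..., label] += w * (seg == label)
    return votes.argmax(-1)                        # majority of weighted votes
```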
Goreczny, Sebastian; Dryzek, Pawel; Morgan, Gareth J; Lukaszewski, Maciej; Moll, Jadwiga A; Moszura, Tomasz
2017-08-01
We report initial experience with novel three-dimensional (3D) image fusion software for the guidance of transcatheter interventions in congenital heart disease. Developments in fusion imaging have facilitated the integration of 3D roadmaps from computed tomography or magnetic resonance imaging datasets. The latest software allows live fusion of two-dimensional (2D) fluoroscopy with pre-registered 3D roadmaps. We reviewed all cardiac catheterizations guided with this software (Philips VesselNavigator). Pre-catheterization imaging and catheterization data were collected, focusing on fusion of the 3D roadmap, intervention guidance, and contrast and radiation exposure. From 09/2015 until 06/2016, VesselNavigator was applied in 34 patients for guidance (n = 28) or planning (n = 6) of cardiac catheterization. In all 28 patients, successful 2D-3D registration was performed. Bony structures combined with the cardiovascular silhouette were used for fusion in 26 patients (93%), calcifications in 9 (32%), previously implanted devices in 8 (29%) and low-volume contrast injection in 7 patients (25%). Accurate initial 3D roadmap alignment was achieved in 25 patients (89%). Six patients (22%) required realignment during the procedure due to distortion of the anatomy after introduction of stiff equipment. Overall, VesselNavigator was applied successfully in 27 patients (96%) without any complications related to 3D image overlay. VesselNavigator was useful in the guidance of nearly all cardiac catheterizations. The combination of anatomical markers and low-volume contrast injections allowed reliable 2D-3D registration in the vast majority of patients.
NASA Astrophysics Data System (ADS)
Qin, Xinqiang; Hu, Gang; Hu, Kai
2018-01-01
The decomposition of multiple source images using bidimensional empirical mode decomposition (BEMD) often produces mismatched bidimensional intrinsic mode functions, either by their number or their frequency, making image fusion difficult. A solution to this problem is proposed using a fixed number of iterations and a union operation in the sifting process. By combining the local regional features of the images, an image fusion method has been developed. First, the source images are decomposed using the proposed BEMD to produce the first intrinsic mode function (IMF) and residue component. Second, for the IMF component, a selection and weighted average strategy based on local area energy is used to obtain a high-frequency fusion component. Third, for the residue component, a selection and weighted average strategy based on local average gray difference is used to obtain a low-frequency fusion component. Finally, the fused image is obtained by applying the inverse BEMD transform. Experimental results show that the proposed algorithm provides superior performance over methods based on wavelet transform, line and column-based EMD, and complex empirical mode decomposition, both in terms of visual quality and objective evaluation criteria.
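A small sketch of the local-area-energy rule for the high-frequency (IMF) component is given below. This is an assumed reading of the strategy described above: select the higher-energy source where the two local energies differ clearly, and weight-average where they are comparable; the window size and threshold are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_imf(imf_a, imf_b, win=7, thresh=0.9):
    ea = uniform_filter(imf_a ** 2, win)               # local region energy, source A
    eb = uniform_filter(imf_b ** 2, win)               # local region energy, source B
    match = np.minimum(ea, eb) / (np.maximum(ea, eb) + 1e-12)
    selected = np.where(ea >= eb, imf_a, imf_b)        # selection branch
    wa = ea / (ea + eb + 1e-12)
    averaged = wa * imf_a + (1 - wa) * imf_b           # weighted-average branch
    return np.where(match < thresh, selected, averaged)
```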
Nakajo, Kazuya; Tatsumi, Mitsuaki; Inoue, Atsuo; Isohashi, Kayako; Higuchi, Ichiro; Kato, Hiroki; Imaizumi, Masao; Enomoto, Takayuki; Shimosegawa, Eku; Kimura, Tadashi; Hatazawa, Jun
2010-02-01
We compared the diagnostic accuracy of fluorodeoxyglucose positron emission tomography/computed tomography (FDG PET/CT) and PET/magnetic resonance imaging (MRI) fusion images for gynecological malignancies. A total of 31 patients with gynecological malignancies were enrolled. FDG-PET images were fused to CT, T1- and T2-weighted images (T1WI, T2WI). PET-MRI fusion was performed semiautomatically. We performed three types of evaluation to demonstrate the usefulness of PET/MRI fusion images in comparison with that of inline PET/CT as follows: depiction of the uterus and the ovarian lesions on CT or MRI mapping images (first evaluation); additional information for lesion localization with PET and mapping images (second evaluation); and the image quality of fusion on interpretation (third evaluation). For the first evaluation, the score for T2WI (4.68 +/- 0.65) was significantly higher than that for CT (3.54 +/- 1.02) or T1WI (3.71 +/- 0.97) (P < 0.01). For the second evaluation, the scores for the localization of FDG accumulation showing that T2WI (2.74 +/- 0.57) provided significantly more additional information for the identification of anatomical sites of FDG accumulation than did CT (2.06 +/- 0.68) or T1WI (2.23 +/- 0.61) (P < 0.01). For the third evaluation, the three-point rating scale for the patient group as a whole demonstrated that PET/T2WI (2.72 +/- 0.54) localized the lesion significantly more convincingly than PET/CT (2.23 +/- 0.50) or PET/T1WI (2.29 +/- 0.53) (P < 0.01). PET/T2WI fusion images are superior for the detection and localization of gynecological malignancies.
Goudeketting, Seline R; Heinen, Stefan G; van den Heuvel, Daniel A; van Strijen, Marco J; de Haan, Michiel W; Slump, Cornelis H; de Vries, Jean-Paul P
2018-02-01
The effect of the insertion of guidewires and catheters on the fusion accuracy of the three-dimensional (3D) image fusion technique during iliac percutaneous transluminal angioplasty (PTA) procedures has not yet been investigated. Technical validation of the 3D fusion technique was evaluated in 11 patients with common and/or external iliac artery lesions. A preprocedural contrast-enhanced magnetic resonance angiogram (CE-MRA) was segmented and manually registered to a cone-beam computed tomography image created at the beginning of the procedure for each patient. The treating physician visually scored the fusion accuracy (i.e., accurate [<2 mm], mismatch [2-5 mm], or inaccurate [>5 mm]) of the entire vasculature of the overlay with respect to the digital subtraction angiography (DSA) directly after the first obtained DSA. Contours of the vasculature of the fusion images and DSAs were drawn after the procedure. The cranial-caudal, lateral-medial, and absolute displacements were calculated between the vessel centerlines. To determine the influence of the catheters, the displacement of the catheterized iliac trajectories was compared with that of the noncatheterized trajectories. Electronic databases were systematically searched for available literature published between January 2010 and August 2017. The mean registration error for all iliac trajectories (N.=20) was small (4.0±2.5 mm). No significant difference in fusion displacement was observed between catheterized (N.=11) and noncatheterized (N.=9) iliac arteries. The systematic literature search yielded 2 manuscripts with a total of 22 patients. The methodological quality of these studies was poor (≤11 MINORS Score), mainly due to the lack of a control group. Accurate image fusion based on preprocedural CE-MRA is possible and could potentially be of help in iliac PTA procedures. The flexible guidewires and angiographic catheters routinely used during endovascular procedures of iliac arteries did not cause significant displacement that influenced the image fusion. The current literature on 3D image fusion in iliac PTA procedures is of limited methodological quality.
NASA Astrophysics Data System (ADS)
Preibisch, Stephan; Rohlfing, Torsten; Hasak, Michael P.; Tomancak, Pavel
2008-03-01
Single Plane Illumination Microscopy (SPIM; Huisken et al., Science 305(5686):1007-1009, 2004) is an emerging microscopic technique that enables live imaging of large biological specimens in their entirety. By imaging the living biological sample from multiple angles, SPIM has the potential to achieve isotropic resolution throughout even relatively large biological specimens. For every angle, however, only a relatively shallow section of the specimen is imaged with high resolution, whereas deeper regions appear increasingly blurred. In order to produce a single, uniformly high resolution image, we propose here an image mosaicing algorithm that combines state-of-the-art groupwise image registration for alignment with content-based image fusion to prevent degradation of the fused image due to regional blurring of the input images. For the registration stage, we introduce an application-specific groupwise transformation model that incorporates per-image as well as groupwise transformation parameters. We also propose a new fusion algorithm based on Gaussian filters, which is substantially faster than fusion based on local image entropy. We demonstrate the performance of our mosaicing method on data acquired from living embryos of the fruit fly, Drosophila, using four- and eight-angle acquisitions.
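The content-based weighting idea can be sketched as follows. This is a simplified stand-in for the Gaussian-filter fusion above: each registered view is weighted by its locally smoothed high-pass energy, so in-focus regions dominate the mosaic and blurred regions are down-weighted; the parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def content_fuse(views, sigma_detail=2.0, sigma_weight=20.0):
    """views: list of (H, W) registered acquisitions of the same slice."""
    weights = []
    for v in views:
        highpass = v - gaussian_filter(v, sigma_detail)    # local detail response
        weights.append(gaussian_filter(highpass ** 2, sigma_weight))  # smoothed saliency
    W = np.stack(weights) + 1e-12
    V = np.stack(views)
    return (W * V).sum(0) / W.sum(0)                       # weighted blend of views
```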
Operational data fusion framework for building frequent Landsat-like imagery in a cloudy region
USDA-ARS?s Scientific Manuscript database
An operational data fusion framework is built to generate dense time-series Landsat-like images for a cloudy region by fusing Moderate Resolution Imaging Spectroradiometer (MODIS) data products and Landsat imagery. The Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) is integrated in ...
Aperture tolerances for neutron-imaging systems in inertial confinement fusion.
Ghilea, M C; Sangster, T C; Meyerhofer, D D; Lerche, R A; Disdier, L
2008-02-01
Neutron-imaging systems are being considered as an ignition diagnostic for the National Ignition Facility (NIF) [Hogan et al., Nucl. Fusion 41, 567 (2001)]. Given the importance of these systems, a neutron-imaging design tool is being used to quantify the effects of aperture fabrication and alignment tolerances on reconstructed neutron images for inertial confinement fusion. The simulations indicate that alignment tolerances of more than 1 mrad would introduce measurable features in a reconstructed image for both pinholes and penumbral aperture systems. These simulations further show that penumbral apertures are several times less sensitive to fabrication errors than pinhole apertures.
An object-oriented framework for medical image registration, fusion, and visualization.
Zhu, Yang-Ming; Cochoff, Steven M
2006-06-01
An object-oriented framework for image registration, fusion, and visualization was developed based on the classic model-view-controller paradigm. The framework employs many design patterns to facilitate legacy code reuse, manage software complexity, and enhance the maintainability and portability of the framework. Three sample applications built atop this framework are illustrated to show its effectiveness: the first is for volume image grouping and re-sampling, the second is for 2D registration and fusion, and the last is for visualization of single images as well as registered volume images.
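A toy sketch of the model-view-controller split underlying such a framework is shown below; the names and structure are illustrative, not the framework's actual API.

```python
class VolumeModel:
    """Holds the registered volumes and notifies views of changes."""
    def __init__(self):
        self.volumes, self.observers = [], []
    def add_volume(self, vol):
        self.volumes.append(vol)
        for obs in self.observers:
            obs.refresh(self)              # observer pattern: push updates to views

class FusionView:
    def refresh(self, model):
        print(f"rendering fused view of {len(model.volumes)} volume(s)")

class Controller:
    def __init__(self, model):
        self.model = model
    def load(self, vol):
        self.model.add_volume(vol)         # user action routed to the model

model = VolumeModel()
model.observers.append(FusionView())
Controller(model).load("CT_volume")
```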
Mortezavi, Ashkan; Märzendorfer, Olivia; Donati, Olivio F; Rizzi, Gianluca; Rupp, Niels J; Wettstein, Marian S; Gross, Oliver; Sulser, Tullio; Hermanns, Thomas; Eberli, Daniel
2018-02-21
We evaluated the diagnostic accuracy of multiparametric magnetic resonance imaging and multiparametric magnetic resonance imaging/transrectal ultrasound fusion guided targeted biopsy against that of transperineal template saturation prostate biopsy to detect prostate cancer. We retrospectively analyzed the records of 415 men who consecutively presented for prostate biopsy between November 2014 and September 2016 at our tertiary care center. Multiparametric magnetic resonance imaging was performed using a 3 Tesla device without an endorectal coil, followed by transperineal template saturation prostate biopsy with the BiopSee® fusion system. Additional fusion guided targeted biopsy was done in men with a suspicious lesion on multiparametric magnetic resonance imaging, defined as Likert score 3 to 5. Any Gleason pattern 4 or greater was defined as clinically significant prostate cancer. The detection rates of multiparametric magnetic resonance imaging and fusion guided targeted biopsy were compared with the detection rate of transperineal template saturation prostate biopsy using the McNemar test. We obtained a median of 40 (range 30 to 55) and 3 (range 2 to 4) transperineal template saturation prostate biopsy and fusion guided targeted biopsy cores, respectively. Of the 124 patients (29.9%) without a suspicious lesion on multiparametric magnetic resonance imaging, 32 (25.8%) were found to have clinically significant prostate cancer on transperineal template saturation prostate biopsy. Of the 291 patients (70.1%) with a Likert score of 3 to 5, clinically significant prostate cancer was detected in 129 (44.3%) by multiparametric magnetic resonance imaging fusion guided targeted biopsy, in 176 (60.5%) by transperineal template saturation prostate biopsy and in 187 (64.3%) by the combined approach. Overall, 58 cases (19.9%) of clinically significant prostate cancer would have been missed if fusion guided targeted biopsy had been performed exclusively. The sensitivity of multiparametric magnetic resonance imaging and fusion guided targeted biopsy for clinically significant prostate cancer was 84.6% and 56.7% with a negative likelihood ratio of 0.35 and 0.46, respectively. Multiparametric magnetic resonance imaging alone should not be performed as a triage test due to a substantial number of false-negative cases with clinically significant prostate cancer. Systematic biopsy outperformed fusion guided targeted biopsy. Therefore, it will remain crucial in the diagnostic pathway of prostate cancer. Copyright © 2018 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
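For context, the negative likelihood ratios quoted above follow the standard definition

```latex
\[
  \mathrm{LR}^{-} \;=\; \frac{1 - \text{sensitivity}}{\text{specificity}},
\]
```

so, for example, a sensitivity of 84.6% with \(\mathrm{LR}^{-} = 0.35\) implies a specificity of roughly \((1 - 0.846)/0.35 \approx 44\%\) (a back-calculation for illustration; the abstract does not report specificity directly).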
de Graaf, M; Boven, E; Oosterhoff, D; van der Meulen-Muileman, I H; Huls, G A; Gerritsen, W R; Haisma, H J; Pinedo, H M
2002-01-01
Monoclonal antibodies against tumour-associated antigens could be useful to deliver enzymes selectively to the site of a tumour for activation of a non-toxic prodrug. A completely human fusion protein may be advantageous for repeated administration, as host immune responses may be avoided. We have constructed a fusion protein consisting of a human single chain Fv antibody, C28, against the epithelial cell adhesion molecule and the human enzyme β-glucuronidase. The sequences encoding C28 and human enzyme β-glucuronidase were joined by a sequence encoding a flexible linker, and were preceded by the IgGκ signal sequence for secretion of the fusion protein. A CHO cell line was engineered to secrete C28-β-glucuronidase fusion protein. Antibody specificity and enzyme activity were retained in the secreted fusion protein that had an apparent molecular mass of 100 kDa under denaturing conditions. The fusion protein was able to convert a non-toxic prodrug of doxorubicin, N-[4-doxorubicin-N-carbonyl(oxymethyl)phenyl]-O-β-glucuronyl carbamate to doxorubicin, resulting in cytotoxicity. A bystander effect was demonstrated, as doxorubicin was detected in all cells after N-[4-doxorubicin-N-carbonyl(oxymethyl)phenyl]-O-β-glucuronyl carbamate administration when only 10% of the cells expressed the fusion protein. This is the first fully human and functional fusion protein consisting of an scFv against epithelial cell adhesion molecule and human enzyme β-glucuronidase for future use in tumour-specific activation of a non-toxic glucuronide prodrug. British Journal of Cancer (2002) 86, 811–818. DOI: 10.1038/sj/bjc/6600143. PMID:11875747
Kim, Pora; Jia, Peilin; Zhao, Zhongming
2018-01-01
Assessing the impact of kinase in gene fusion is essential for both identifying driver fusion genes (FGs) and developing molecular targeted therapies. Kinase domain retention is a crucial factor in kinase fusion genes (KFGs), but such a systematic investigation has not been done yet. To this end, we analyzed kinase domain retention (KDR) status in chimeric protein sequences of 914 KFGs covering 312 kinases across 13 major cancer types. Based on 171 kinase domain-retained KFGs including 101 kinases, we studied their recurrence, kinase groups, fusion partners, exon-based expression depth, short DNA motifs around the break points and networks. Our results, such as more KDR than 5′-kinase fusion genes, combinatorial effects between 3′-KDR kinases and their 5′-partners and a signal transduction-specific DNA sequence motif in the break point intronic sequences, supported positive selection on 3′-kinase fusion genes in cancer. We introduced a degree-of-frequency (DoF) score to measure the possible number of KFGs of a kinase. Interestingly, kinases with high DoF scores tended to undergo strong gene expression alteration at the break points. Furthermore, our KDR gene fusion network analysis revealed six of the seven kinases with the highest DoF scores (ALK, BRAF, MET, NTRK1, NTRK3 and RET) were all observed in thyroid carcinoma. Finally, we summarized common features of ‘effective’ (highly recurrent) kinases in gene fusions such as expression alteration at break point, redundant usage in multiple cancer types and 3′-location tendency. Collectively, our findings are useful for prioritizing driver kinases and FGs and provided insights into KFGs’ clinical implications. PMID:28013235
Xia, Jun; He, Pin; Cai, Xiaodong; Zhang, Doudou; Xie, Ni
2017-10-15
Electrode position after deep brain stimulation (DBS) for Parkinson's disease (PD) needs to be confirmed, but there are concerns about the risk of postoperative magnetic resonance imaging (MRI) after DBS. These issues could be avoided by fusing images obtained from preoperative MRI and postoperative computed tomography (CT). This study aimed to investigate image fusion technology for displaying the position of the electrodes compared with postoperative MRI. This was a retrospective study of 32 patients with PD treated with bilateral subthalamic nucleus (STN) DBS between April 2015 and March 2016. The postoperative (same-day) CT and preoperative MRI were fused using the Elekta Leksell 10.1 planning workstation (Elekta Instruments, Stockholm, Sweden). The position of the electrodes was compared between the fusion images and postoperative 1-2-week MRI. The position of the electrodes was highly correlated between the fusion and postoperative MRI (all r between 0.865 and 0.996; all P<0.001). The position of the left electrode differed significantly between the two methods in the lateral and vertical planes (by 0.30 and 0.24 mm, respectively; both P<0.05), but there were no significant differences for the other electrode and planes (all P>0.05). The position of the electrodes was highly correlated between the fusion and postoperative MRI. CT-MRI fusion images could be used to avoid the potential risks of MRI after DBS in patients with PD. Copyright © 2017. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Zhang, Rui; Xin, Binjie
2016-08-01
Yarn density is generally considered the fundamental structural parameter used for the quality evaluation of woven fabrics. The conventional yarn density measurement method is based on one-side analysis. In this paper, a novel density measurement method is developed for yarn-dyed woven fabrics based on a dual-side fusion technique. Firstly, a laboratory dual-side imaging system is established to acquire both face-side and back-side images of the woven fabric, and the affine transform is used for the alignment and fusion of the dual-side images. Then, the color images of the woven fabrics are transferred from the RGB to the CIE-Lab color space, and the intensity information of the image extracted from the L component is used for texture fusion and analysis. Subsequently, three image fusion methods are developed and utilized to merge the dual-side images: the weighted average method, the wavelet transform method and the Laplacian pyramid blending method. The fusion efficacy of each method is evaluated by three evaluation indicators, and the best of them is selected to reconstruct the complete fabric texture. Finally, the yarn density of the fused image is measured based on the fast Fourier transform, and the yarn alignment image can be reconstructed using the inverse fast Fourier transform. Our experimental results show that the density measurement accuracy of the proposed method reaches 99.44% agreement with the traditional method, and its robustness is better than that of conventional analysis methods.
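The final FFT step can be illustrated with a short sketch under stated assumptions: project the fused grayscale image along one yarn direction, take the dominant spatial frequency of the projection, and convert it to yarns per centimeter. The dpi value is a placeholder, not a figure from the paper.

```python
import numpy as np

def yarn_density(img, dpi=600):
    """img: (H, W) grayscale fabric image; returns weft density (yarns/cm)."""
    profile = img.mean(axis=1)                     # average intensity per row
    profile = profile - profile.mean()             # remove the DC offset
    spec = np.abs(np.fft.rfft(profile))
    freqs = np.fft.rfftfreq(profile.size, d=1.0)   # cycles per pixel
    k = np.argmax(spec[1:]) + 1                    # dominant bin, skipping DC
    yarns_per_pixel = freqs[k]
    return yarns_per_pixel * dpi / 2.54            # pixels -> inches -> centimeters
```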
Fusion genes with ALK as recurrent partner in ependymoma-like gliomas: a new brain tumor entity?
Olsen, Thale Kristin; Panagopoulos, Ioannis; Meling, Torstein R.; Micci, Francesca; Gorunova, Ludmila; Thorsen, Jim; Due-Tønnessen, Bernt; Scheie, David; Lund-Iversen, Marius; Krossnes, Bård; Saxhaug, Cathrine; Heim, Sverre; Brandal, Petter
2015-01-01
Background We have previously characterized 19 ependymal tumors using Giemsa banding and high-resolution comparative genomic hybridization. The aim of this study was to analyze these tumors searching for fusion genes. Methods RNA sequencing was performed in 12 samples. Potential fusion transcripts were assessed by seed count and structural chromosomal aberrations. Transcripts of interest were validated using fluorescence in situ hybridization and PCR followed by direct sequencing. Results RNA sequencing identified rearrangements of the anaplastic lymphoma kinase gene (ALK) in 2 samples. Both tumors harbored structural aberrations involving the ALK locus 2p23. Tumor 1 had an unbalanced t(2;14)(p23;q22) translocation which led to the fusion gene KTN1-ALK. Tumor 2 had an interstitial del(2)(p16p23) deletion causing the fusion of CCDC88A and ALK. In both samples, the breakpoint of ALK was located between exons 19 and 20. Both patients were infants and both tumors were supratentorial. The tumors were well demarcated from surrounding tissue and had both ependymal and astrocytic features but were diagnosed and treated as ependymomas. Conclusions By combining karyotyping and RNA sequencing, we identified the first 2 ALK rearrangements ever reported in CNS tumors. Such rearrangements may represent the hallmark of a new entity of pediatric glioma characterized by both ependymal and astrocytic features. Our findings are of particular importance because crizotinib, a selective ALK inhibitor, has demonstrated effect in patients with lung cancer harboring ALK rearrangements. Thus, ALK emerges as an interesting therapeutic target in patients with ependymal tumors carrying ALK fusions. PMID:25795305
Introduction to clinical and laboratory (small-animal) image registration and fusion.
Zanzonico, Pat B; Nehmeh, Sadek A
2006-01-01
Imaging has long been a vital component of clinical medicine and, increasingly, of biomedical research in small animals. Clinical and laboratory imaging modalities can be divided into two general categories, structural (or anatomical) and functional (or physiological). The latter, in particular, has spawned what has come to be known as "molecular imaging". Image registration and fusion have rapidly emerged as invaluable components of both clinical and small-animal imaging and have led to the development and marketing of a variety of multi-modality devices, e.g. PET-CT, which provide registered and fused three-dimensional image sets. This paper briefly reviews the basics of image registration and fusion and available clinical and small-animal multi-modality instrumentation.
Kozlov, M M; Chernomordik, L V
1998-01-01
Although membrane fusion mediated by influenza virus hemagglutinin (HA) is the best characterized example of ubiquitous protein-mediated fusion, it is still not known how the low-pH-induced refolding of HA trimers causes fusion. This refolding involves 1) repositioning of the hydrophobic N-terminal sequence of the HA2 subunit of HA (the "fusion peptide"), and 2) the recruitment of additional residues to the alpha-helical coiled coil of the rigid central rod of the trimer. We propose here a mechanism by which these conformational changes can cause local bending of the viral membrane, priming it for fusion. In this model, fusion is triggered by incorporation of fusion peptides into the viral membrane. Refolding of the central rod exerts forces that pull the fusion peptides, tending to bend the membrane around the HA trimer into a saddle-like shape. Elastic energy drives self-assembly of these HA-containing membrane elements in the plane of the membrane into a ring-like cluster. Bulging of the viral membrane within such a cluster yields a dimple growing toward the bound target membrane. Bending stresses in the lipidic top of the dimple facilitate membrane fusion. We analyze the energetics of this proposed sequence of membrane rearrangements, and demonstrate that this simple mechanism may explain some of the known phenomenological features of fusion. PMID:9726939
Church, Alanna J; Calicchio, Monica L; Nardi, Valentina; Skalova, Alena; Pinto, Andre; Dillon, Deborah A; Gomez-Fernandez, Carmen R; Manoj, Namitha; Haimes, Josh D; Stahl, Joshua A; Dela Cruz, Filemon S; Tannenbaum-Dvir, Sarah; Glade-Bender, Julia L; Kung, Andrew L; DuBois, Steven G; Kozakewich, Harry P; Janeway, Katherine A; Perez-Atayde, Antonio R; Harris, Marian H
2018-03-01
Infantile fibrosarcoma and congenital mesoblastic nephroma are tumors of infancy traditionally associated with the ETV6-NTRK3 gene fusion. However, a number of case reports have identified variant fusions in these tumors. In order to assess the frequency of variant NTRK3 fusions, and in particular whether the recently identified EML4-NTRK3 fusion is recurrent, 63 archival cases of infantile fibrosarcoma, congenital mesoblastic nephroma, mammary analog secretory carcinoma and secretory breast carcinoma (tumor types that are known to carry recurrent ETV6-NTRK3 fusions) were tested with NTRK3 break-apart FISH, EML4-NTRK3 dual fusion FISH, and targeted RNA sequencing. The EML4-NTRK3 fusion was identified in two cases of infantile fibrosarcoma (one of which was previously described), and in one case of congenital mesoblastic nephroma, demonstrating that the EML4-NTRK3 fusion is a recurrent genetic event in these related tumors. The growing spectrum of gene fusions associated with infantile fibrosarcoma and congenital mesoblastic nephroma along with the recent availability of targeted therapies directed toward inhibition of NTRK signaling argue for alternate testing strategies beyond ETV6 break-apart FISH. The use of either NTRK3 FISH or next-generation sequencing will expand the number of cases in which an oncogenic fusion is identified and facilitate optimal diagnosis and treatment for patients.
Pfister, Karin; Schierling, Wilma; Jung, Ernst Michael; Apfelbeck, Hanna; Hennersperger, Christoph; Kasprzak, Piotr M
2016-01-01
To compare standardised 2D ultrasound (US) with the novel ultrasonographic imaging techniques 3D/4D US and image fusion (combined real-time display of B mode and CT scan) for routine measurement of the aortic diameter in follow-up after endovascular aortic aneurysm repair (EVAR). 300 measurements were performed on 20 patients after EVAR by one experienced sonographer (3rd degree of the German Society of Ultrasound (DEGUM)) with a high-end ultrasound machine and a convex probe (1-5 MHz). An internally standardized scanning protocol for the aortic aneurysm diameter in B mode used a so-called leading-edge method. In summary, five different US methods (2D, 3D free-hand, magnetic field tracked 3D - Curefab™, 4D volume sweep, image fusion), each including contrast-enhanced ultrasound (CEUS), were used for measurement of the maximum aortic aneurysm diameter. Standardized 2D sonography was the defined reference standard for statistical analysis. CEUS was used for endoleak detection. Technical success was 100%. In augmented transverse imaging, the mean aortic anteroposterior (AP) diameter was 4.0±1.3 cm for 2D US, 4.0±1.2 cm for 3D Curefab™, 3.9±1.3 cm for 4D US, and 4.0±1.2 cm for image fusion. The mean differences were below 1 mm (0.2-0.9 mm). Concerning estimation of aneurysm growth, agreement was found between 2D, 3D and 4D US in 19 of the 20 patients (95%). A definitive decision could always be made by image fusion. CEUS was combined with all methods and detected a type II endoleak in two of the 20 patients (10%). In one case, the endoleak feeding arteries remained unclear with 2D CEUS but could be clearly localized by 3D CEUS and image fusion. Standardized 2D US allows adequate routine follow-up of the maximum aortic aneurysm diameter after EVAR. Image fusion enables a definitive statement about aneurysm growth without the need for new CT imaging by combining the postoperative CT scan with real-time B mode in a dual image display. 3D/4D CEUS and image fusion can improve endoleak characterization in selected cases but are not mandatory for routine practice.
NASA Astrophysics Data System (ADS)
Poobalasubramanian, Mangalraj; Agrawal, Anupam
2016-10-01
The presented work proposes the fusion of panchromatic and multispectral images in the shearlet domain. The proposed fusion rules rely on regional considerations, which makes the system efficient in terms of spatial enhancement. A luminance-hue-saturation-based color conversion is utilized to avoid spectral distortions. The proposed fusion method is tested on Worldview2 and Ikonos datasets and compared against other methodologies, performing well in terms of both subjective and objective evaluations.
Enhancement of low light level images using color-plus-mono dual camera.
Jung, Yong Ju
2017-05-15
In digital photography, improving imaging quality in low-light shooting is a key user need. Unfortunately, conventional smartphone cameras, which use a single small image sensor, cannot provide satisfactory quality in low-light images. A color-plus-mono dual camera, consisting of two horizontally separated image sensors that simultaneously capture a color and a mono image pair of the same scene, can be useful for improving the quality of low-light images. However, incorrect image fusion between the color and mono image pair can also have negative effects, such as introducing severe visual artifacts into the fused images. This paper proposes a selective image fusion technique that applies adaptive guided-filter-based denoising and selective detail transfer only to those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. By constructing an experimental color-plus-mono camera system, we demonstrate that BJND-aware denoising and selective detail transfer improve image quality in low-light shooting.
Intensity-hue-saturation-based image fusion using iterative linear regression
NASA Astrophysics Data System (ADS)
Cetin, Mufit; Tepecik, Abdulkadir
2016-10-01
The image fusion process basically produces a high-resolution image by combining the superior features of a low-spatial-resolution multispectral image and a high-resolution panchromatic image. Despite its common usage due to its fast computing capability and high sharpening ability, the intensity-hue-saturation (IHS) fusion method may cause color distortions, especially when large gray-value differences exist between the images to be combined. This paper proposes a spatially adaptive IHS (SA-IHS) technique to avoid these distortions by automatically adjusting the exact spatial information to be injected into the multispectral image during the fusion process. The SA-IHS method essentially suppresses the effects of those pixels that cause spectral distortions by assigning weaker weights to them, avoiding substantial redundancy in the fused image. The experimental database consists of IKONOS images, and the experimental results both visually and statistically demonstrate the improvement of the proposed algorithm over several other IHS-like methods such as IHS, generalized IHS, fast IHS, and generalized adaptive IHS.
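For readers unfamiliar with the baseline, the following is a minimal NumPy sketch of classical IHS component substitution, the starting point that SA-IHS-style methods modify; the simple mean-intensity model and the mean/std histogram matching are generic assumptions, not the paper's exact formulation.

```python
import numpy as np

def ihs_pansharpen(ms, pan):
    """Classical IHS component substitution.

    ms  : float array (H, W, 3), multispectral image upsampled to pan size
    pan : float array (H, W), high-resolution panchromatic image
    """
    intensity = ms.mean(axis=2)  # simple linear intensity (I) component
    # Match pan's mean/std to the intensity so the injected detail
    # does not shift the overall radiometry.
    pan_matched = (pan - pan.mean()) / (pan.std() + 1e-12) * intensity.std() + intensity.mean()
    detail = pan_matched - intensity       # spatial detail to inject
    return ms + detail[..., None]          # same detail added to each band

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fused = ihs_pansharpen(rng.random((64, 64, 3)), rng.random((64, 64)))
    print(fused.shape)  # (64, 64, 3)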
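A spatially adaptive variant in the spirit of the paper would replace the uniform injection with per-pixel weights on `detail`, down-weighting pixels where the pan-to-intensity discrepancy indicates likely spectral distortion.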
Implementing and validating of pan-sharpening algorithms in open-source software
NASA Astrophysics Data System (ADS)
Pesántez-Cobos, Paúl; Cánovas-García, Fulgencio; Alonso-Sarría, Francisco
2017-10-01
Several approaches have been used in remote sensing to integrate images with different spectral and spatial resolutions in order to obtain fused, enhanced images. The objective of this research is three-fold: to implement three image fusion techniques (High-Pass Filter, Principal Component Analysis and Gram-Schmidt) in R; to apply these techniques to merge multispectral and panchromatic images from five images with different spatial resolutions; and finally, to evaluate the results using the universal image quality index (Q index) and the ERGAS index. As regards qualitative analysis, Landsat-7 and Landsat-8 show greater colour distortion with the three pansharpening methods, although the results for the other images were better. The Q index revealed that HPF fusion performs better for the QuickBird, IKONOS and Landsat-7 images, followed by GS fusion; whereas in the case of the Landsat-8 and Natmur-08 images, the results were more even. Regarding the ERGAS spatial index, the PCA algorithm performed better for the QuickBird, IKONOS, Landsat-7 and Natmur-08 images, followed closely by the GS algorithm. Only for the Landsat-8 image did the GS fusion present the best result. In the evaluation of the spectral components, HPF results tended to be better and PCA results worse; the opposite was the case for the spatial components. Better quantitative results are obtained for the Landsat-7 and Landsat-8 images with the three fusion methods than for the QuickBird, IKONOS and Natmur-08 images. This contrasts with the qualitative evaluation, reflecting the importance of separating the two evaluation approaches (qualitative and quantitative). Significant disagreement may arise when different methodologies are used to assess the quality of an image fusion. Moreover, it is not possible to designate a given algorithm as the best a priori, not only because of the different characteristics of the sensors, but also because of differing atmospheric conditions or peculiarities of the different study areas, among other reasons.
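As a reference for the spectral evaluation, here is a minimal NumPy sketch of the ERGAS index as it is commonly defined, ERGAS = 100 (h/l) sqrt((1/N) Σ_k (RMSE_k / μ_k)²); this is a generic implementation, not the authors' R code.

```python
import numpy as np

def ergas(fused, reference, ratio):
    """ERGAS (Erreur Relative Globale Adimensionnelle de Synthese).

    fused, reference : float arrays (H, W, N) with N spectral bands
    ratio            : pixel-size ratio h/l (e.g. 0.25 for 1:4 pansharpening)
    Lower values indicate better spectral quality.
    """
    n_bands = reference.shape[2]
    acc = 0.0
    for k in range(n_bands):
        rmse = np.sqrt(np.mean((fused[..., k] - reference[..., k]) ** 2))
        acc += (rmse / reference[..., k].mean()) ** 2
    return 100.0 * ratio * np.sqrt(acc / n_bands)

rng = np.random.default_rng(1)
ref = rng.random((32, 32, 4)) + 1.0
print(ergas(ref + 0.01 * rng.standard_normal(ref.shape), ref, ratio=0.25))
```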
Perception-oriented fusion of multi-sensor imagery: visible, IR, and SAR
NASA Astrophysics Data System (ADS)
Sidorchuk, D.; Volkov, V.; Gladilin, S.
2018-04-01
This paper addresses the problem of fusing optical (visible and thermal domain) data and radar data for the purpose of visualization. These types of images typically contain a lot of complementary information, and their joint visualization can be more useful and convenient for a human user than a set of individual images. To solve the image fusion problem we propose a novel algorithm that exploits peculiarities of human color perception and is based on grey-scale structural visualization. The benefits of the presented algorithm are exemplified with satellite imagery.
Yang, Jilong; Annala, Matti; Ji, Ping; Wang, Guowen; Zheng, Hong; Codgell, David; Du, Xiaoling; Fang, Zhiwei; Sun, Baocun; Nykter, Matti; Chen, Kexin; Zhang, Wei
2014-10-10
The identification of fusion genes such as SYT-SSX1/SSX2, PAX3-FOXO1, TPM3/TPM4-ALK and EWS-FLI1 in human sarcomas has provided important insight into the diagnosis and targeted therapy of sarcomas. No recurrent fusion has been reported in human osteosarcoma. Transcriptome sequencing was used to characterize the gene fusions and mutations in 11 human osteosarcomas. Nine of 11 samples were found to harbor genetic inactivating alterations in the TP53 pathway. Two recurrent fusion genes associated with the 12q locus, LRP1-SNRNP25 and KCNMB4-CCND3, were identified and validated by RT-PCR, Sanger sequencing and fluorescence in situ hybridization, and were found to be osteosarcoma specific in a validation cohort of 240 other sarcomas. Expression of LRP1-SNRNP25 fusion gene promoted SAOS-2 osteosarcoma cell migration and invasion. Expression of KCNMB4-CCND3 fusion gene promoted SAOS-2 cell migration. Our study represents the first whole transcriptome analysis of untreated human osteosarcoma. Our discovery of two osteosarcoma specific fusion genes associated with osteosarcoma cellular motility highlights the heterogeneity of osteosarcoma and provides opportunities for new treatment modalities.
Domain fusion analysis by applying relational algebra to protein sequence and domain databases
Truong, Kevin; Ikura, Mitsuhiko
2003-01-01
Background Domain fusion analysis is a useful method to predict functionally linked proteins that may be involved in direct protein-protein interactions or in the same metabolic or signaling pathway. As separate domain databases like BLOCKS, PROSITE, Pfam, SMART, PRINTS-S, ProDom, TIGRFAMs, and amalgamated domain databases like InterPro continue to grow in size and quality, a computational method to perform domain fusion analysis that leverages these efforts will become increasingly powerful. Results This paper proposes a computational method employing relational algebra to find domain fusions in protein sequence databases. The feasibility of this method was illustrated on the SWISS-PROT+TrEMBL sequence database using domain predictions from the Pfam HMM (hidden Markov model) database. We identified 235 and 189 putative functionally linked protein partners in H. sapiens and S. cerevisiae, respectively. From the scientific literature, we were able to confirm many of these functional linkages, while the remainder offer testable experimental hypotheses. Results can be viewed at . Conclusion As the analysis can be computed quickly on any relational database that supports standard SQL (structured query language), it can be dynamically updated along with the sequence and domain databases, thereby improving the quality of predictions over time. PMID:12734020
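A toy sketch of the relational idea on SQLite follows; the schema and data are hypothetical placeholders, and the paper's actual relations over SWISS-PROT+TrEMBL and Pfam are richer. The query finds protein pairs in one organism whose domains co-occur on a single "Rosetta stone" protein elsewhere.

```python
import sqlite3

# Hypothetical schema: one row per (protein, organism, domain) assignment.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE protein_domains (protein TEXT, organism TEXT, domain TEXT);
INSERT INTO protein_domains VALUES
  ('fusedAB', 'ecoli', 'domA'), ('fusedAB', 'ecoli', 'domB'),  -- Rosetta stone protein
  ('p1',      'yeast', 'domA'), ('p2',      'yeast', 'domB');  -- separate partners
""")

query = """
SELECT DISTINCT a.protein AS partner1, b.protein AS partner2
FROM protein_domains a
JOIN protein_domains b
  ON a.organism = b.organism AND a.protein < b.protein
JOIN protein_domains fa ON fa.domain = a.domain
JOIN protein_domains fb ON fb.domain = b.domain AND fb.protein = fa.protein
WHERE fa.protein <> a.protein AND fa.protein <> b.protein;
"""
for row in conn.execute(query):
    print(row)   # ('p1', 'p2')
```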
Image Fusion for Radiosurgery, Neurosurgery and Hypofractionated Radiotherapy.
Inoue, Hiroshi K; Nakajima, Atsushi; Sato, Hiro; Noda, Shin-Ei; Saitoh, Jun-Ichi; Suzuki, Yoshiyuki
2015-03-01
Precise target detection is essential for radiosurgery, neurosurgery and hypofractionated radiotherapy because treatment results and complication rates are related to the accuracy of the target definition. In skull base tumors and tumors around the optic pathways, exact anatomical evaluation of the cranial nerves is important to avoid adverse effects on these structures close to the lesions. Three-dimensional analyses of structures obtained with heavily T2-weighted MR images, fused with thin-sliced CT sections, are desirable for evaluating fine structures during radiosurgery and microsurgery. In vascular lesions, angiography is most important for evaluating the whole structure, from feeders to drainers, shunt, blood flow and risk factors for bleeding. However, the exact site and the surrounding brain structures are not shown on angiography. True image fusion of angiography, MR images and CT on axial planes is ideal for precise target definition. In malignant tumors, especially recurrent head and neck tumors, the biologically active areas of recurrence are the main targets of radiosurgery. PET is useful for quantitative evaluation of recurrences but is not always available at the time of radiosurgery. Image fusion of MR diffusion images with CT is always available during radiosurgery and is useful for detecting recurrent lesions. All images are fused and registered on thin-sliced CT sections, and exactly demarcated targets are planned for treatment. Follow-up images can also be registered to this CT, so that exact assessment of target changes, including volume, is possible within this fusion system. The purpose of this review is to describe the usefulness of image fusion for 1) skull base, 2) vascular and 3) recurrent target detection, and 4) follow-up analyses in radiosurgery, neurosurgery and hypofractionated radiotherapy.
Scholkmann, Felix; Revol, Vincent; Kaufmann, Rolf; Baronowski, Heidrun; Kottler, Christian
2014-03-21
This paper introduces a new image denoising, fusion and enhancement framework for the combination and optimal visualization of x-ray attenuation contrast (AC), differential phase contrast (DPC) and dark-field contrast (DFC) images retrieved from x-ray Talbot-Lau grating interferometry. The new image fusion framework comprises three steps: (i) denoising each input image (AC, DPC and DFC) through adaptive Wiener filtering; (ii) performing a two-step image fusion process based on the shift-invariant wavelet transform, first fusing the AC with the DPC image and then fusing the result with the DFC image; and (iii) enhancing the fused image using adaptive histogram equalization, adaptive sharpening and contrast optimization. Application examples are presented for two biological objects (a human tooth and a cherry), and the proposed method is compared to two recently published AC/DPC/DFC image processing techniques. In conclusion, the new framework allows the most relevant features of all three images to be combined into one image while reducing noise and adaptively enhancing the relevant image features. The newly developed framework may be used in technical and medical applications.
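Step (ii) can be sketched with PyWavelets' stationary (shift-invariant) wavelet transform. This is a generic fusion rule, average the approximation band, keep the larger-magnitude detail coefficient, rather than the authors' exact scheme, and it assumes registered grayscale inputs whose sides are divisible by 2**level.

```python
import numpy as np
import pywt

def swt_fuse(img_a, img_b, wavelet="db2", level=2):
    """Shift-invariant wavelet fusion of two registered grayscale images."""
    ca = pywt.swt2(img_a, wavelet, level=level)
    cb = pywt.swt2(img_b, wavelet, level=level)
    fused = []
    for (aA, (aH, aV, aD)), (bA, (bH, bV, bD)) in zip(ca, cb):
        fA = 0.5 * (aA + bA)                               # average low-pass bands
        details = [np.where(np.abs(x) >= np.abs(y), x, y)  # max-abs rule per subband
                   for x, y in ((aH, bH), (aV, bV), (aD, bD))]
        fused.append((fA, tuple(details)))
    return pywt.iswt2(fused, wavelet)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    a, b = rng.random((128, 128)), rng.random((128, 128))
    print(swt_fuse(a, b).shape)  # (128, 128)
```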
The role of syncytins in human reproduction and reproductive organ cancers.
Soygur, Bikem; Sati, Leyla
2016-11-01
Human life begins with sperm and oocyte fusion. After fertilization, various fusion events occur during human embryogenesis and morphogenesis. For example, the fusion of trophoblastic cells constitutes a key process for normal placental development. Fusion in the placenta is facilitated by syncytin 1 and syncytin 2. These syncytins arose from retroviral sequences that entered the primate genome 25 million and more than 40 million years ago, respectively. About 8% of the human genome consists of similar human endogenous retroviral (HERV) sequences. Many are inactive because of mutations or deletions; however, the role of the few that remain transcriptionally active has not been fully elucidated. Syncytin proteins maintain cell-cell fusogenic activity based on ENV gene-mediated viral cell entry. In this review, we summarize how syncytins and their receptors are involved in fusion events during human reproduction. The significance of syncytins in tumorigenesis is also discussed. © 2016 Society for Reproduction and Fertility.
Bazl, M Rajabi; Rasaee, M J; Foruzandeh, M; Rahimpour, A; Kiani, J; Rahbarizadeh, F; Alirezapour, B; Mohammadi, M
2007-02-01
There is an increasing interest in the application of nanobodies such as VHH in the fields of therapy and imaging. In the present study, a stable genetically engineered cell line of Chinese hamster ovary (CHO) origin, transfected with two sets of expression vectors, was constructed to permit the cytoplasmic and extracellular expression of a single domain antibody along with green fluorescent protein (GFP) as a reporter gene. The quality of the constructs was examined both by restriction mapping and by sequence analysis. Gene transfection and protein expression were further examined by reverse transcription-polymerase chain reaction (RT-PCR). The transfected cells were grown in media containing 200 μg/mL hygromycin, and the stable cell line obtained showed fluorescent activity for more than 180 days. Production of the fusion protein was also detected by fluorescence microscopy and fluorescence spectroscopy as well as by enzyme-linked immunosorbent assay (ELISA) analysis. This strategy allows rapid production of recombinant fluobodies involving VHH, which can be used in various experiments, such as imaging and detection, in which a primary labeled antibody is required.
System integration and DICOM image creation for PET-MR fusion.
Hsiao, Chia-Hung; Kao, Tsair; Fang, Yu-Hua; Wang, Jiunn-Kuen; Guo, Wan-Yuo; Chao, Liang-Hsiao; Yen, Sang-Hue
2005-03-01
This article demonstrates a gateway system for converting image fusion results to digital imaging and communications in medicine (DICOM) objects. For the purpose of standardization and integration, we followed the guidelines of the Integrating the Healthcare Enterprise technical framework and developed a DICOM gateway. The gateway system combines data from the hospital information system, the image fusion results, and information generated by the gateway itself to constitute new DICOM objects. All the mandatory tags defined for standard DICOM objects are generated in the gateway system. The gateway generates two series of SOP (Service-Object Pair) instances for each PET-MR fusion result: one for the reconstructed magnetic resonance (MR) images and the other for the positron emission tomography (PET) images. The size, resolution, spatial coordinates, and number of frames are the same in both series, and each newly generated MR image exactly fits one of the reconstructed PET images. These DICOM images are stored to the picture archiving and communication system (PACS) server by means of standard DICOM protocols. When the images are retrieved and viewed with standard DICOM viewing systems, both can be viewed at the same anatomical location. This system is useful for precise diagnosis and therapy.
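A minimal sketch of creating one such DICOM object with pydicom is shown below (the library itself, the placeholder UIDs and demographics, and the use of the Secondary Capture SOP class are assumptions for illustration, not the gateway's actual choices; a real gateway would populate many more tags from the hospital information system).

```python
import numpy as np
from pydicom.dataset import Dataset, FileMetaDataset
from pydicom.uid import ExplicitVRLittleEndian, generate_uid

SECONDARY_CAPTURE = "1.2.840.10008.5.1.4.1.1.7"  # Secondary Capture Image Storage

def make_fused_slice(pixels, study_uid, series_uid, instance_number):
    """Wrap one fused 8-bit grayscale frame in a minimal DICOM dataset."""
    meta = FileMetaDataset()
    meta.MediaStorageSOPClassUID = SECONDARY_CAPTURE
    meta.MediaStorageSOPInstanceUID = generate_uid()
    meta.TransferSyntaxUID = ExplicitVRLittleEndian

    ds = Dataset()
    ds.file_meta = meta
    ds.preamble = b"\x00" * 128
    ds.SOPClassUID = SECONDARY_CAPTURE
    ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
    ds.StudyInstanceUID, ds.SeriesInstanceUID = study_uid, series_uid
    ds.PatientName, ds.PatientID = "FUSION^DEMO", "0000"  # placeholder demographics
    ds.Modality, ds.InstanceNumber = "OT", instance_number
    ds.Rows, ds.Columns = pixels.shape
    ds.SamplesPerPixel, ds.PhotometricInterpretation = 1, "MONOCHROME2"
    ds.BitsAllocated = ds.BitsStored = 8
    ds.HighBit, ds.PixelRepresentation = 7, 0
    ds.PixelData = pixels.astype(np.uint8).tobytes()
    return ds

study_uid, series_uid = generate_uid(), generate_uid()
ds = make_fused_slice(np.zeros((256, 256), np.uint8), study_uid, series_uid, 1)
ds.save_as("fused_001.dcm", write_like_original=False)  # writes preamble + file meta
```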
Schwein, Adeline; Chinnadurai, Ponraj; Behler, Greg; Lumsden, Alan B; Bismuth, Jean; Bechara, Carlos F
2018-07-01
Fenestrated endovascular aneurysm repair (FEVAR) is an evolving technique to treat juxtarenal abdominal aortic aneurysms (AAAs). Catheterization of visceral and renal vessels after the deployment of the fenestrated main body device is often challenging, usually requiring additional fluoroscopy and multiple digital subtraction angiograms. The aim of this study was to assess the clinical utility and accuracy of a computed tomography angiography (CTA)-fluoroscopy image fusion technique in guiding visceral vessel cannulation during FEVAR. Between August 2014 and September 2016, all consecutive patients who underwent FEVAR at our institution using image fusion guidance were included. Preoperative CTA images were fused with intraoperative fluoroscopy after coregistration with non-contrast-enhanced cone beam computed tomography (syngo 3D3D image fusion; Siemens Healthcare, Forchheim, Germany). The ostia of the visceral vessels were electronically marked on CTA images (syngo iGuide Toolbox) and overlaid on live fluoroscopy to guide vessel cannulation after fenestrated device deployment. Clinical utility of image fusion was evaluated by assessing the number of dedicated angiograms required for each visceral or renal vessel cannulation and the use of optimized C-arm angulation. Accuracy of image fusion was evaluated from video recordings by three raters using a binary qualitative assessment scale. A total of 26 patients (17 men; mean age, 73.8 years) underwent FEVAR during the study period for juxtarenal AAA (17), pararenal AAA (6), and thoracoabdominal aortic aneurysm (3). Video recordings of fluoroscopy from 19 cases were available for review and assessment. A total of 46 vessels were cannulated; 38 of 46 (83%) of these vessels were cannulated without angiography, based only on image fusion guidance: 9 of 11 superior mesenteric artery cannulations and 29 of 35 renal artery cannulations. Binary qualitative assessment showed that 90% (36/40) of the virtual ostia overlaid on live fluoroscopy were accurate. Optimized C-arm angulations were achieved in 35% of vessel cannulations (0/9 for superior mesenteric artery cannulation, 12/25 for renal arteries). Preoperative CTA-fluoroscopy image fusion guidance during FEVAR is a valuable and accurate tool that allows visceral and renal vessel cannulation without the need for dedicated angiograms, thus avoiding additional injection of contrast material and radiation exposure. Further refinements, such as accounting for device-induced aortic deformation and automating the image fusion workflow, will bolster this technology toward optimal routine clinical use. Copyright © 2017 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Multispectral image fusion based on fractal features
NASA Astrophysics Data System (ADS)
Tian, Jie; Chen, Jie; Zhang, Chunhua
2004-01-01
Imagery sensors have become an indispensable part of detection and recognition systems. They are widely used in surveillance, navigation, control and guidance, etc. However, different imagery sensors depend on diverse imaging mechanisms, work within diverse ranges of the spectrum, perform diverse functions and have diverse environmental requirements. It is therefore impractical to accomplish the task of detection or recognition with a single imagery sensor under different circumstances, backgrounds and targets. Fortunately, the multi-sensor image fusion technique has emerged as an important route to solve this problem, and image fusion has become one of the main technical routines used to detect and recognize objects from images. Loss of information is unavoidable during the fusion process, so a central question of image fusion is how to preserve useful information to the utmost; that is, before designing a fusion scheme one should consider how to avoid the loss of useful information and how to preserve the features helpful to detection. In consideration of these issues, and of the fact that most detection problems amount to distinguishing man-made objects from natural background, a fractal-based multispectral fusion algorithm is proposed in this paper, aiming at the recognition of battlefield targets in complicated backgrounds. In this algorithm, the source images are first orthogonally decomposed according to wavelet transform theory, and fractal-based detection is then applied to each decomposed image. At this step, natural background and man-made targets are distinguished by means of fractal models, which imitate natural objects well. Special fusion operators are employed in areas containing man-made targets so that useful information is preserved and target features are emphasized. The final fused image is reconstructed from the composition of the source pyramid images, so this fusion scheme is a multi-resolution analysis; the wavelet decomposition of an image can be considered a special pyramid decomposition. According to wavelet decomposition theory, the approximation of the image at resolution $2^{j+1}$ equals its orthogonal projection onto the space $V_{2^{j+1}}$, that is, $A_{2^{j+1}}f = A_{2^j}f + D^1_{2^j}f + D^2_{2^j}f + D^3_{2^j}f$, where $A_{2^j}f$ is the low-frequency approximation of the image $f(x, y)$ at resolution $2^j$ and $D^1_{2^j}f$, $D^2_{2^j}f$, $D^3_{2^j}f$ represent the vertical, horizontal and diagonal wavelet coefficients at resolution $2^j$. These coefficients describe the high-frequency information of the image in the vertical, horizontal and diagonal directions respectively; the approximation and the three detail components are independent and can each be treated as images. In this paper J is set to 1, so the source image is decomposed into the sub-images $Af$, $D^1f$, $D^2f$ and $D^3f$. To address the problem of detecting artifacts (man-made objects), the concepts of a vertical fractal dimension FD1, a horizontal fractal dimension FD2 and a diagonal fractal dimension FD3 are proposed in this paper: FD1 corresponds to the vertical wavelet-coefficient image obtained from the decomposition of the source image, FD2 to the horizontal coefficients and FD3 to the diagonal ones. These definitions enrich the description of the source images and are therefore helpful for classifying targets. The detection of artifacts in the decomposed images then becomes a pattern recognition problem in a 4-D space.
The combination of FD0, FD1, FD2 and FD3 (FD0 being the fractal dimension computed from the approximation sub-image) makes a vector (FD0, FD1, FD2, FD3), which can be considered a united feature vector of the studied image. All parts of the images are classified in the 4-D pattern space created by this vector so that areas containing man-made objects can be detected. This detection can be considered a coarse recognition; the significant areas in each sub-image are then marked so that they can be treated with special rules. Various fusion rules have been developed, each aimed at a specific problem. These rules perform differently, so it is very important to select an appropriate rule when designing an image fusion system. Recent research indicates that the rule should be adjustable so that it remains suitable for emphasizing target features and preserving pixels carrying useful information. In this paper, since fractal dimension is one of the main features distinguishing man-made targets from natural objects, the fusion rule is defined as follows: if the studied region of the image contains a man-made target, the pixels of the source image whose fractal dimension is minimal are kept as the pixels of the fused image; otherwise, a weighted-average operator is adopted to avoid loss of information. Because the main idea of this rule is to keep the pixels with low fractal dimension, it is named the Minimal Fractal Dimension (MFD) fusion rule. This fractal-based algorithm is compared with a common weighted-average fusion algorithm, and an objective assessment is applied to both fusion results. The criteria of Entropy, Cross-Entropy, Peak Signal-to-Noise Ratio (PSNR) and Standard Gray-Scale Difference are defined in this paper. Instead of constructing an ideal image as the assessment reference, the source images themselves are used as the reference; the assessment thus calculates how much the image quality has been enhanced, and how much information has been added, when the fused image is compared with the source images. The experimental results imply that the fractal-based multispectral fusion algorithm can effectively preserve the information of man-made objects with high contrast. The algorithm preserves the features of military targets well because battlefield targets are mostly man-made objects whose images differ markedly from fractal models. Furthermore, the fractal features are not sensitive to imaging conditions or to target motion, so this fractal-based algorithm may be very practical.
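A toy NumPy sketch of the MFD idea follows: it estimates a local fractal dimension by differential box counting and keeps the lower-FD source per block. The real algorithm applies the rule only inside detected man-made regions of each wavelet sub-image; the block size and box scales here are arbitrary choices.

```python
import numpy as np

def dbc_fd(block):
    """Differential box-counting estimate of a block's fractal dimension
    (gray values assumed in [0, 255])."""
    m = block.shape[0]
    sizes = [s for s in (2, 4, 8) if s < m]
    counts = []
    for s in sizes:
        h = 256.0 * s / m                        # box height for grid size s
        n = 0
        for i in range(0, m, s):
            for j in range(0, m, s):
                cell = block[i:i + s, j:j + s]
                n += int(np.ceil(cell.max() / h) - np.floor(cell.min() / h)) + 1
        counts.append(n)
    # FD is the slope of log(count) versus log(1 / box size)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

def mfd_fuse(img_a, img_b, block=16):
    """Per block, keep the source with the smaller local fractal dimension
    (the MFD rule); this toy falls back to a plain average elsewhere."""
    out = 0.5 * (img_a.astype(float) + img_b.astype(float))
    h, w = img_a.shape
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            a = img_a[i:i + block, j:j + block]
            b = img_b[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = a if dbc_fd(a) <= dbc_fd(b) else b
    return out

rng = np.random.default_rng(4)
a = rng.integers(0, 256, (64, 64)).astype(float)
b = rng.integers(0, 256, (64, 64)).astype(float)
print(mfd_fuse(a, b).shape)  # (64, 64)
```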
Live imaging of mouse secondary palate fusion
Kim, Seungil; Prochazka, Jan; Bush, Jeffrey O.
2017-01-01
The fusion of the secondary palatal shelves to form the intact secondary palate is a key process in mammalian development, and its disruption can lead to cleft secondary palate, a common congenital anomaly in humans. Secondary palate fusion has been extensively studied, leading to several proposed cellular mechanisms that may mediate this process. However, these studies have mostly been performed on fixed embryonic tissues at progressive timepoints during development or in fixed explant cultures analyzed at static timepoints. Static analysis is limited for the study of dynamic morphogenetic processes such as palate fusion, and what types of dynamic cellular behaviors mediate palatal fusion is incompletely understood. Here we describe a protocol for live imaging of ex vivo secondary palate fusion in mouse embryos. To examine cellular behaviors of palate fusion, epithelial-specific Keratin14-cre was used to label palate epithelial cells in ROSA26-mTmGflox reporter embryos. To visualize filamentous actin, Lifeact-mRFPruby reporter mice were used. Live imaging of secondary palate fusion was performed by dissecting recently adhered secondary palatal shelves of embryonic day (E) 14.5 stage embryos and culturing them in agarose-containing media on a glass-bottom dish to enable imaging with an inverted confocal microscope. Using this method, we have detected a variety of novel cellular behaviors during secondary palate fusion. An appreciation of how distinct cell behaviors are coordinated in space and time greatly contributes to our understanding of this dynamic morphogenetic process. This protocol can be applied to mutant mouse lines, or to cultures treated with pharmacological inhibitors, to further advance understanding of how secondary palate fusion is controlled. PMID:28784960
Lee, Junkyo; Lee, Min Woo; Choi, Dongil; Cha, Dong Ik; Lee, Sunyoung; Kang, Tae Wook; Yang, Jehoon; Jo, Jaemoon; Bang, Won-Chul; Kim, Jongsik; Shin, Dongkuk
2017-12-21
The purpose of this study was to evaluate the accuracy of an active contour model for estimating the posterior ablative margin in images obtained by the fusion of real-time ultrasonography (US) with 3-dimensional (3D) US or magnetic resonance (MR) images of an experimental tumor model for radiofrequency ablation. Chickpeas (n=12) and bovine rump meat (n=12) were used as experimental tumor models. Grayscale 3D US and T1-weighted MR images were pre-acquired for use as reference datasets. US and MR/3D US fusion was performed in one group (n=4), and US and 3D US fusion only in the other group (n=8). Half of the models in each group were completely ablated, while the other half were incompletely ablated. Hyperechoic ablation areas were extracted from the real-time US images using an active contour model, and the posterior margin of the ablation zone was estimated from the anterior margin. After the experiments, the ablated pieces of bovine rump meat were cut along the electrode path and the cut planes were photographed. The US images with the estimated posterior margin were compared with the photographs and with post-ablation MR images. The extracted contours of the ablation zones from the 12 US fusion videos and the post-ablation MR images were also matched. In the four models fused under real-time US with MR/3D US, compression from the transducer and the insertion of an electrode resulted in misregistration between the real-time US and MR images, making the estimation of the ablation zones less accurate than was achieved through fusion between real-time US and 3D US. Eight of the 12 post-ablation 3D US images were graded as good when compared with the sectioned specimens, and 10 of the 12 were graded as good in comparison with nicotinamide adenine dinucleotide staining and histopathologic results. Estimating the posterior ablative margin using an active contour model is a feasible way of predicting the ablation area, and US/3D US fusion was more accurate than US/MR fusion.
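A generic sketch of fitting an active contour to a bright (hyperechoic) zone with scikit-image is given below; this is a textbook snake, not the authors' model, the library and its (row, col) snake convention are assumptions, and the smoothing and snake parameters are placeholders.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def ablation_contour(us_image, center, radius, n_points=120):
    """Fit a closed snake around a bright zone in a 2-D frame,
    starting from a circular initialization in (row, col) coordinates."""
    theta = np.linspace(0, 2 * np.pi, n_points)
    init = np.column_stack([center[0] + radius * np.sin(theta),
                            center[1] + radius * np.cos(theta)])
    smoothed = gaussian(us_image, sigma=3, preserve_range=True)
    return active_contour(smoothed, init, alpha=0.015, beta=10, gamma=0.001)

# Toy usage: a bright disk on a dark background stands in for the ablation zone.
img = np.zeros((200, 200))
rr, cc = np.mgrid[:200, :200]
img[(rr - 100) ** 2 + (cc - 100) ** 2 < 40 ** 2] = 1.0
snake = ablation_contour(img, center=(100, 100), radius=60)
print(snake.shape)  # (120, 2)
```

From such a contour, the anterior (transducer-side) half can be kept and the posterior margin extrapolated from it, mirroring the estimation strategy described in the abstract.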
Hause, Anne M; Henke, David M; Avadhanula, Vasanthi; Shaw, Chad A; Tapia, Lorena I; Piedra, Pedro A
2017-01-01
The fusion (F) protein of RSV is the major vaccine target. This protein undergoes a conformational change from pre-fusion to post-fusion. Both conformations share antigenic sites II and IV. Pre-fusion F has unique antigenic sites p27, ø, α2α3β3β4, and MPE8; whereas, post-fusion F has unique antigenic site I. Our objective was to determine the antigenic variability for RSV/A and RSV/B isolates from contemporary and historical genotypes compared to a historical RSV/A strain. The F sequences of isolates from GenBank, Houston, and Chile (N = 1,090) were used for this analysis. Sequences were compared pair-wise to a reference sequence, a historical RSV/A Long strain. Variability (calculated as %) was defined as changes at each amino acid (aa) position when compared to the reference sequence. Only aa at antigenic sites with variability ≥5% were reported. A total of 1,090 sequences (822 RSV/A and 268 RSV/B) were analyzed. When compared to the reference F, those domains with the greatest number of non-synonymous changes included the signal peptide, p27, heptad repeat domain 2, antigenic site ø, and the transmembrane domain. RSV/A subgroup had 7 aa changes in the antigenic sites: site I (N = 1), II (N = 1), p27 (N = 4), α2α3β3β4(AM14) (N = 1), ranging in frequency from 7-91%. In comparison, RSV/B had 19 aa changes in antigenic sites: I (N = 3), II (N = 1), p27 (N = 9), ø (N = 4), α2α3β3β4(AM14) (N = 1), and MPE8 (N = 1), ranging in frequency from 79-100%. Although antigenic sites of RSV F are generally well conserved, differences are observed when comparing the two subgroups to the reference RSV/A Long strain. Further, these discrepancies are accented in the antigenic sites in pre-fusion F of RSV/B isolates, often occurring with a frequency of 100%. This could be of importance if a monovalent F protein from the historical GA1 genotype of RSV/A is used for vaccine development.
Neutron imaging with bubble chambers for inertial confinement fusion
NASA Astrophysics Data System (ADS)
Ghilea, Marian C.
One of the main methods to obtain energy from controlled thermonuclear fusion is inertial confinement fusion (ICF), a process in which nuclear fusion reactions are initiated by heating and compressing a fuel target, typically a pellet containing deuterium and tritium, relying on the inertia of the fuel mass to provide confinement. In inertial confinement fusion experiments, it is important to distinguish failure mechanisms of the imploding capsule and to unambiguously diagnose compression and hot spot formation in the fuel. Neutron imaging provides such a technique, and bubble chambers are capable of generating higher resolution images than other types of neutron detectors. This thesis explores the use of a liquid bubble chamber to record high-yield 14.1 MeV neutrons resulting from deuterium-tritium fusion reactions in ICF experiments. A design tool to deconvolve and reconstruct penumbral and pinhole neutron images was created, using an original ray tracing concept to simulate the neutron images. The design tool proved that misalignment and aperture fabrication errors can significantly decrease the resolution of the reconstructed neutron image. A theoretical model to describe the mechanism of bubble formation was developed. A bubble chamber for neutron imaging with Freon 115 as the active medium was designed and implemented for the OMEGA laser system. High neutron yields resulting from deuterium-tritium capsule implosions were recorded. The bubble density was too low for neutron imaging on OMEGA but agreed with the model of bubble formation. The research presented here shows that bubble detectors are a promising technology for the higher neutron yields expected at the National Ignition Facility (NIF).
Multispectral image fusion for illumination-invariant palmprint recognition.
Lu, Longbin; Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng
2017-01-01
Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is developed based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion constructs the fusion coefficients at the decomposition level so that the images are separated correctly in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfactory recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. For testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% under unsatisfactory lighting conditions.
Thaden, Jeremy J; Sanon, Saurabh; Geske, Jeffrey B; Eleid, Mackram F; Nijhof, Niels; Malouf, Joseph F; Rihal, Charanjit S; Bruce, Charles J
2016-06-01
There has been significant growth in the volume and complexity of percutaneous structural heart procedures in the past decade. Increasing procedural complexity and the accompanying reliance on multimodality imaging have fueled the development of fusion imaging to facilitate procedural guidance. The first clinically available system capable of echocardiographic and fluoroscopic fusion for real-time guidance of structural heart procedures was approved by the US Food and Drug Administration in 2012. Echocardiographic-fluoroscopic fusion imaging combines the precise catheter and device visualization of fluoroscopy with the soft tissue anatomy and color flow Doppler information afforded by echocardiography in a single image. This allows the interventionalist to perform precise catheter manipulations under fluoroscopy guidance while visualizing the critical tissue anatomy provided by echocardiography. However, there are few data available addressing this technology's strengths and limitations in routine clinical practice. The authors provide a critical review of currently available echocardiographic-fluoroscopic fusion imaging for guidance of structural heart interventions to highlight its strengths, limitations, and potential clinical applications and to guide further research into the value of this emerging technology. Copyright © 2016 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.
Knowledge guided information fusion for segmentation of multiple sclerosis lesions in MRI images
NASA Astrophysics Data System (ADS)
Zhu, Chaozhe; Jiang, Tianzi
2003-05-01
In this work, T1-, T2- and PD-weighted MR images of multiple sclerosis (MS) patients, which provide information on tissue properties from different aspects, are treated as three independent information sources for the detection and segmentation of MS lesions. Based on information fusion theory, a knowledge-guided information fusion framework is proposed to accomplish 3-D segmentation of MS lesions. This framework consists of three parts: (1) information extraction, (2) information fusion, and (3) decision. The information provided by the different spectral images is extracted and modeled separately for each spectrum using fuzzy sets, aiming at managing the uncertainty and ambiguity in the images due to noise and partial-volume effects. In the second part, a possible fuzzy map of MS lesions in each spectral image is constructed from the extracted information under the guidance of experts' knowledge, and the final fuzzy map of MS lesions is then constructed by fusing the fuzzy maps obtained from the different spectra. Finally, the 3-D segmentation of MS lesions is derived from the final fuzzy map. Experimental results show that this method is fast and accurate.
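A toy sketch of this fusion idea with sigmoid membership functions and a min t-norm follows; the intensity centers and widths, and the specific fusion and decision operators, are illustrative placeholders, whereas the paper constructs its fuzzy maps under expert-knowledge guidance.

```python
import numpy as np

def membership(img, center, width):
    """Sigmoid fuzzy membership: degree to which an intensity supports 'lesion'."""
    return 1.0 / (1.0 + np.exp(-(img - center) / width))

def fuse_lesion_maps(t1, t2, pd):
    # Illustrative rule: lesions modeled as bright on T2 and PD, not bright on T1.
    m_t2 = membership(t2, center=0.7, width=0.05)
    m_pd = membership(pd, center=0.6, width=0.05)
    m_t1 = 1.0 - membership(t1, center=0.8, width=0.05)
    fused = np.minimum(np.minimum(m_t2, m_pd), m_t1)  # conservative min t-norm
    return fused > 0.5                                # defuzzify to a crisp 3-D mask

rng = np.random.default_rng(2)
t1, t2, pd = (rng.random((32, 64, 64)) for _ in range(3))
print(fuse_lesion_maps(t1, t2, pd).sum())
```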
Live cell imaging of in vitro human trophoblast syncytialization.
Wang, Rui; Dang, Yan-Li; Zheng, Ru; Li, Yue; Li, Weiwei; Lu, Xiaoyin; Wang, Li-Juan; Zhu, Cheng; Lin, Hai-Yan; Wang, Hongmei
2014-06-01
Human trophoblast syncytialization, a process of cell-cell fusion, is one of the most important yet least understood events during placental development. Investigating the fusion process in the placenta in vivo is very challenging given the complexity of this process. The application of primary cultured cytotrophoblast cells isolated from term placentas, together with BeWo cells derived from human choriocarcinoma, provides a biphasic strategy for probing the mechanism of trophoblast cell fusion, as the former spontaneously fuse to form the multinucleated syncytium and the latter are capable of fusing upon treatment with forskolin (FSK). Live-cell imaging is a powerful tool widely used to investigate many physiological and pathological processes in various animal models and in humans; however, to our knowledge, the mechanism of trophoblast cell fusion has not previously been investigated by live-cell imaging. In this study, a live-cell imaging system was used to delineate the fusion process of primary term cytotrophoblast cells and BeWo cells. By using live staining with Hoechst 33342 or cytoplasmic dyes, or by stably transfecting enhanced green fluorescent protein (EGFP) and DsRed2-Nuc reporter plasmids, we observed finger-like protrusions on the cell membranes of fusion partners before fusion and the exchange of cytoplasmic contents during fusion. In summary, this study provides the first video recording of the process of trophoblast syncytialization. Furthermore, the various live-cell imaging systems used in this study will help to yield molecular insights into the syncytialization process during placental development. © 2014 by the Society for the Study of Reproduction, Inc.
Shah, Nameeta; Lankerovich, Michael; Lee, Hwahyung; Yoon, Jae-Geun; Schroeder, Brett; Foltz, Greg
2013-11-22
RNA-seq has spurred important gene fusion discoveries in a number of different cancers, including lung, prostate, breast, brain, thyroid and bladder carcinomas. Gene fusion discovery can potentially lead to the development of novel treatments that target the underlying genetic abnormalities. In this study, we provide a comprehensive view of the gene fusion landscape in 185 glioblastoma multiforme (GBM) patients from two independent cohorts. Fusions occur in approximately 30-50% of GBM patient samples. In the Ivy Center cohort of 24 patients, 33% of samples harbored fusions that were validated by qPCR and Sanger sequencing. We were able to identify high-confidence gene fusions from RNA-seq data in 53% of the samples in a TCGA cohort of 161 patients. We identified 13 cases (8%) with fusions retaining a tyrosine kinase domain in the TCGA cohort and one case in the Ivy Center cohort. Ours is the first study to describe recurrent fusions involving non-coding genes. Genomic locations 7p11 and 12q14-15 harbor the majority of the fusions: fusions on 7p11 are formed in the focally amplified EGFR locus, whereas 12q14-15 fusions are formed by complex genomic rearrangements. All the fusions detected in this study can be further visualized and analyzed using our website: http://ivygap.swedish.org/fusions. Our study highlights the prevalence of gene fusions as one of the major genomic abnormalities in GBM. The majority of the fusions are private fusions, and a minority recur with low frequency. A small subset of patients with fusions of receptor tyrosine kinases can benefit from existing FDA-approved drugs and from drugs available in various clinical trials. Due to the low frequency and rarity of clinically relevant fusions, RNA-seq of GBM patient samples will be a vital tool for the identification of patient-specific fusions that can drive personalized therapy.
SIRF: Simultaneous Satellite Image Registration and Fusion in a Unified Framework.
Chen, Chen; Li, Yeqing; Liu, Wei; Huang, Junzhou
2015-11-01
In this paper, we propose a novel method for image fusion with a high-resolution panchromatic image and a low-resolution multispectral (Ms) image at the same geographical location. The fusion is formulated as a convex optimization problem that minimizes a linear combination of a least-squares fitting term and a dynamic gradient sparsity regularizer. The former preserves accurate spectral information of the Ms image, while the latter keeps the sharp edges of the high-resolution panchromatic image. We further propose to register the two images simultaneously during the fusion process, which is achieved naturally by virtue of the dynamic gradient sparsity property. An efficient algorithm is then devised to solve the optimization problem, achieving linear computational complexity in the size of the output image per iteration. We compare our method against six state-of-the-art image fusion methods on Ms image data sets from four satellites. Extensive experimental results demonstrate that the proposed method substantially outperforms the others in terms of both spatial and spectral qualities. We also show that our method can provide high-quality products from coarsely registered real-world IKONOS data sets. Finally, a MATLAB implementation is provided to facilitate future research.
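A hedged formalization of the objective described in the abstract might read as follows; the notation is assumed rather than copied from the paper, with X the fused high-resolution Ms image (one column per band), M the observed Ms image, D a downsampling operator, P the panchromatic image, and T_theta the registration transform being optimized.

```latex
\min_{X,\,\theta}\;\; \tfrac{1}{2}\,\bigl\lVert D X - M \bigr\rVert_F^2
  \;+\; \lambda \sum_{i} \Bigl\lVert \bigl[\nabla X - \nabla\bigl(T_\theta(P)\,\mathbf{1}^{\top}\bigr)\bigr]_i \Bigr\rVert_2
```

The first term is the least-squares fit to the Ms observation; the second is a group (l2,1-type) penalty over pixels i that couples the gradients of all bands to the gradient of the registered panchromatic image, which is one plausible reading of "dynamic gradient sparsity" and of why registration comes for free in the same objective.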
Apostolou, N; Papazoglou, Th; Koutsouris, D
2006-01-01
Image fusion is a process of combining information from multiple sensors. It is a useful tool implemented in the treatment planning programme of Gamma Knife radiosurgery. In this paper we evaluate advanced image fusion algorithms on the Matlab platform with head images. We implement nine grayscale image fusion methods in Matlab: average, principal component analysis (PCA), discrete wavelet transform (DWT), and Laplacian, filter-subtract-decimate (FSD), contrast, gradient and morphological pyramids, as well as a shift-invariant discrete wavelet transform (SIDWT) method. We test these methods qualitatively and quantitatively. The quantitative criteria we use are the Root Mean Square Error (RMSE), Mutual Information (MI), Standard Deviation (STD), Entropy (H), Difference Entropy (DH) and Cross Entropy (CEN). The qualitative criteria are natural appearance, brilliance contrast, presence of complementary features and enhancement of common features. Finally we make clinically useful suggestions.
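Minimal NumPy sketches of three of the listed quantitative measures are given below (not the authors' Matlab code; the bin counts and the 8-bit intensity range are arbitrary assumptions).

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of the gray-level histogram of an 8-bit image."""
    p, _ = np.histogram(img, bins=bins, range=(0, 256), density=True)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def rmse(fused, ref):
    """Root mean square error between the fused and a reference image."""
    return float(np.sqrt(np.mean((fused.astype(float) - ref.astype(float)) ** 2)))

def mutual_information(a, b, bins=64):
    """MI between a source image and the fused image via a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(5)
a = rng.integers(0, 256, (128, 128))
print(entropy(a), rmse(a, a), mutual_information(a, a))
```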
NASA Astrophysics Data System (ADS)
Liu, Zhanwen; Feng, Yan; Chen, Hang; Jiao, Licheng
2017-10-01
A novel and effective image fusion method is proposed for creating a highly informative and smooth fused image by merging visible and infrared images. Firstly, a two-scale non-subsampled shearlet transform (NSST) is employed to decompose the visible and infrared images into detail layers and one base layer. Then, phase congruency is adopted to extract saliency maps from the detail layers, and guided filtering is applied to compute the filtering output of the base layer and the saliency maps. Next, a novel weighted-average technique makes full use of scene consistency for fusion, yielding a coefficient map. Finally, the fused image is obtained by taking the inverse NSST of the fused coefficient map. Experiments show that the proposed approach achieves better performance than other methods in terms of subjective visual effect and objective assessment.
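The guided filtering step can be illustrated with a minimal gray-scale guided filter in the style of He et al.; this is a generic stand-in rather than the paper's proposed variant, and the radius and eps values are placeholders.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Edge-preserving smoothing of `src`, steered by the edges of `guide`."""
    win = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size=win)   # box-filter local means
    mean_i, mean_p = mean(guide), mean(src)
    corr_ip, corr_ii = mean(guide * src), mean(guide * guide)
    var_i = corr_ii - mean_i ** 2
    cov_ip = corr_ip - mean_i * mean_p
    a = cov_ip / (var_i + eps)                     # per-window linear coefficients
    b = mean_p - a * mean_i
    return mean(a) * guide + mean(b)               # q = mean(a) * I + mean(b)

rng = np.random.default_rng(6)
base = rng.random((128, 128))
print(guided_filter(base, base).shape)  # (128, 128)
```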
Angiogram, fundus, and oxygen saturation optic nerve head image fusion
NASA Astrophysics Data System (ADS)
Cao, Hua; Khoobehi, Bahram
2009-02-01
A novel multi-modality optic nerve head image fusion approach has been designed and applied to three ophthalmologic modalities: angiogram, fundus, and oxygen saturation retinal optic nerve head images. It achieves an excellent result, visualizing fundus or oxygen saturation images with a complete angiogram overlay. This study makes two contributions in terms of novelty, efficiency, and accuracy. The first is an automated control-point detection algorithm for multi-sensor images: the new method employs retinal vasculature and bifurcation features, identifying an initial good guess of the control points using an Adaptive Exploratory Algorithm. The second is a heuristic optimization fusion algorithm: to maximize the objective function (Mutual-Pixel-Count), the iterative algorithm adjusts the initial guess of the control points at the sub-pixel level; a refined parameter set is obtained at the end of each loop, and an optimal fused image is generated at the end of the iteration. This is the first time the Mutual-Pixel-Count concept has been introduced into the biomedical image fusion area. By locking the images in one place, the fused image allows ophthalmologists to match the same eye over time, assess disease progression, and pinpoint surgical tools. The new algorithm can be readily extended to 3D registration and fusion of human or animal eye, brain, or body images.
Brock, Kristy K; Mutic, Sasa; McNutt, Todd R; Li, Hua; Kessler, Marc L
2017-07-01
Image registration and fusion algorithms exist in almost every software system that creates or uses images in radiotherapy. Most treatment planning systems support some form of image registration and fusion to allow the use of multimodality and time-series image data, and even anatomical atlases, to assist in target volume and normal tissue delineation. Treatment delivery systems perform registration and fusion between the planning images and the in-room images acquired during treatment to assist patient positioning. Advanced applications are beginning to support daily dose assessment and to enable adaptive radiotherapy, using image registration and fusion to propagate contours and accumulate dose between image data taken over the course of therapy, providing up-to-date estimates of anatomical changes and delivered dose. This information aids in the detection of anatomical and functional changes that might elicit changes in the treatment plan or prescription. As the output of the image registration process is always used as the input to another process for planning or delivery, it is important to understand and communicate the uncertainty associated with the software in general and with the result of a specific registration. Unfortunately, there is no standard mathematical formalism for doing this in real-world situations where noise, distortion, and complex anatomical variations can occur. Validation of a software system's performance is also complicated by the lack of documentation available from commercial systems, leading to the use of these systems in an undesirable 'black-box' fashion. In view of this situation and the central role that image registration and fusion play in treatment planning and delivery, the Therapy Physics Committee of the American Association of Physicists in Medicine commissioned Task Group 132 to review current approaches and solutions for image registration (both rigid and deformable) in radiotherapy and to provide recommendations for quality assurance and quality control of these clinical processes. © 2017 American Association of Physicists in Medicine.
ALLFlight: detection of moving objects in IR and ladar images
NASA Astrophysics Data System (ADS)
Doehler, H.-U.; Peinecke, Niklas; Lueken, Thomas; Schmerwitz, Sven
2013-05-01
Supporting a helicopter pilot during landing and takeoff in a degraded visual environment (DVE) is one of the challenges within DLR's project ALLFlight (Assisted Low Level Flight and Landing on Unprepared Landing Sites). Different types of sensors (TV, infrared, mmW radar and laser radar) are mounted on DLR's research helicopter FHS (flying helicopter simulator) to gather different sensor data of the surrounding world. A high-performance computer cluster architecture acquires and fuses all the information into one single comprehensive description of the outside situation. While the TV and IR cameras deliver images at frame rates of 25 Hz or 30 Hz, ladar and mmW radar provide georeferenced sensor data at only 2 Hz or even less. It therefore takes several seconds to detect or even track potential moving obstacle candidates in mmW or ladar sequences. Especially when the helicopter is flying at higher speed, it is very important to minimize the detection time of obstacles in order to initiate re-planning of the helicopter's mission in time. Applying feature extraction algorithms to IR images, combined with fusion of the extracted features and ladar data, can decrease the detection time appreciably. Based on real data from flight tests, the paper describes the applied feature extraction methods for moving object detection, as well as the data fusion techniques for combining features from TV/IR and ladar data.
Zeitoun, Rania; Hussein, Manar
2017-11-01
To reach a practical approach for interpreting MDCT findings in post-operative spine cases and to counter the false belief that CT fails in the setting of instrumentation owing to related artefacts. We performed an observational retrospective analysis of premier, early and late MDCT scans in 68 post-operative spine patients, with emphasis on instrument-related complications and osseous fusion status. We used a grading system for assessment of osseous fusion in 35 patients and further analysed the findings in failure of fusion, grade (D). We observed a variety of instrument-related complications (mostly screws medially penetrating the pedicle) and assessed osseous fusion status in late scans. We graded 11 interbody and 14 posterolateral levels as osseous fusion failure, showing additional instrument-related complications, end-plate erosive changes, adjacent-segment spondylosis and malalignment. Modern MDCT scanners provide high-quality images and are strongly recommended for assessment of the instrumentation and the status of osseous fusion. In post-operative imaging of the spine, it is essential to be aware of what you are looking for, in relation to the date of surgery. Advances in knowledge: Modern MDCT scanners allow assessment of instrument position and integrity and of osseous fusion status in the post-operative spine. We propose a helpful algorithm to simplify interpretation of post-operative spine imaging.
Research on segmentation based on multi-atlas in brain MR image
NASA Astrophysics Data System (ADS)
Qian, Yuejing
2018-03-01
Accurate segmentation of specific tissues in brain MR images can be achieved effectively with multi-atlas-based segmentation, whose accuracy depends mainly on the image registration accuracy and the fusion scheme. This paper proposes an automatic multi-atlas-based segmentation method for brain MR images. Firstly, to improve the registration accuracy in the area to be segmented, we employ a target-oriented image registration method for refinement. Then, in the label fusion, we propose a new algorithm that detects abnormal sparse patches and simultaneously abandons the corresponding abnormal sparse coefficients; the labels are then estimated from the remaining sparse coefficients combined with a multipoint label estimator strategy. The performance of the proposed method was compared with those of the nonlocal patch-based label fusion method (Nonlocal-PBM), the sparse patch-based label fusion method (Sparse-PBM) and the majority voting method (MV). Based on our experimental results, the proposed method is efficient in brain MR image segmentation compared with the MV, Nonlocal-PBM, and Sparse-PBM methods.
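For readers who want to see the MV baseline concretely, below is a minimal sketch of majority-voting label fusion in Python, assuming the atlas label maps have already been warped to the target image; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def majority_voting(atlas_labels):
    """Fuse registered atlas label maps by majority voting (the MV baseline).

    atlas_labels: list of integer label arrays, all already warped to the
    target image and sharing the same shape.
    """
    stack = np.stack(atlas_labels, axis=0)   # (n_atlases, *image_shape)
    n_labels = int(stack.max()) + 1
    # Count the votes for each label at every voxel, then take the arg-max.
    votes = np.stack([(stack == k).sum(axis=0) for k in range(n_labels)], axis=0)
    return votes.argmax(axis=0)

# Toy usage with three 2x2 "atlas" segmentations:
a = np.array([[0, 1], [1, 1]])
b = np.array([[0, 1], [0, 1]])
c = np.array([[1, 1], [0, 1]])
print(majority_voting([a, b, c]))   # [[0 1]
                                    #  [0 1]]
```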
Xia, Qiu-Yuan; Wang, Xiao-Tong; Ye, Sheng-Bing; Wang, Xuan; Li, Rui; Shi, Shan-Shan; Fang, Ru; Zhang, Ru-Song; Ma, Heng-Hui; Lu, Zhen-Feng; Shen, Qin; Bao, Wei; Zhou, Xiao-Jun; Rao, Qiu
2018-04-01
MITF, TFE3, TFEB and TFEC belong to the same microphthalmia-associated transcription factor family (MiT). Two transcription factors in this family have been identified in two unusual types of renal cell carcinoma (RCC): Xp11 translocation RCC harbouring TFE3 gene fusions and t(6;11) RCC harbouring a MALAT1-TFEB gene fusion. The 2016 World Health Organisation classification of renal neoplasia grouped these two neoplasms together under the category of MiT family translocation RCC. RCCs associated with the other two MiT family members, MITF and TFEC, have rarely been reported. Herein, we identify a case of MITF translocation RCC with the novel PRCC-MITF gene fusion by RNA sequencing. Histological examination of the present tumour showed typical features of MiT family translocation RCCs, overlapping with Xp11 translocation RCC and t(6;11) RCC. However, this tumour showed negative results in TFE3 and TFEB immunochemistry and split fluorescence in-situ hybridisation (FISH) assays. The other MiT family members, MITF and TFEC, were tested further immunochemically and also showed negative results. RNA sequencing and reverse transcription-polymerase chain reaction confirmed the presence of a PRCC-MITF gene fusion: a fusion of PRCC exon 5 to MITF exon 4. We then developed FISH assays covering MITF break-apart probes and PRCC-MITF fusion probes to detect the MITF gene rearrangement. This study both proves the recurring existence of MITF translocation RCC and expands the genotype spectrum of MiT family translocation RCCs. © 2017 John Wiley & Sons Ltd.
von Spiczak, Jochen; Mannil, Manoj; Kozerke, Sebastian; Alkadhi, Hatem; Manka, Robert
2018-03-30
Since patients with myocardial hypoperfusion due to coronary artery disease (CAD) with preserved viability are known to benefit from revascularization, accurate differentiation of hypoperfusion from scar is desirable. To develop a framework for 3D fusion of whole-heart dynamic cardiac MR perfusion and late gadolinium enhancement (LGE) to delineate stress-induced myocardial hypoperfusion and scar. Prospective feasibility study. Sixteen patients (61 ± 14 years, two females) with known/suspected CAD. 1.5T (nine patients); 3.0T (seven patients); whole-heart dynamic 3D cardiac MR perfusion (3D-PERF, under adenosine stress); 3D LGE inversion recovery sequences (3D-SCAR). A software framework was developed for 3D fusion of 3D-PERF and 3D-SCAR. Computation steps included: 1) segmentation of the left ventricle in 3D-PERF and 3D-SCAR; 2) semiautomatic thresholding of perfusion/scar data; 3) automatic calculation of ischemic/scar burden (ie, pathologic relative to total myocardium); 4) projection of perfusion/scar values onto an artificial template of the left ventricle; 5) semiautomatic coregistration to an exemplary heart contour easing 3D orientation; and 6) 3D rendering of the combined datasets using automatically defined color tables. All tasks were performed by two independent, blinded readers (J.S. and R.M.). Intraclass correlation coefficients (ICC) were used to determine interreader agreement. Image acquisition, postprocessing, and 3D fusion were feasible in all cases. In all, 10/16 patients showed stress-induced hypoperfusion in 3D-PERF; 8/16 patients showed LGE in 3D-SCAR. For 3D-PERF, semiautomatic thresholding was possible in all patients. For 3D-SCAR, automatic thresholding was feasible where applicable. Average ischemic burden was 11 ± 7% (J.S.) and 12 ± 7% (R.M.). Average scar burden was 8 ± 5% (J.S.) and 7 ± 4% (R.M.). Interreader agreement was excellent (ICC for 3D-PERF = 0.993, for 3D-SCAR = 0.99). 3D fusion of 3D-PERF and 3D-SCAR facilitates intuitive delineation of stress-induced myocardial hypoperfusion and scar. Level of Evidence: 2. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018. © 2018 International Society for Magnetic Resonance in Medicine.
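Step 3 of the pipeline reduces to a ratio of voxel counts. A minimal sketch under that reading, assuming binary masks produced by the segmentation and thresholding steps (the mask names are hypothetical):

```python
import numpy as np

def burden_percent(myocardium_mask, pathology_mask):
    """Ischemic or scar burden: pathologic myocardium relative to total
    myocardium, in percent. Both inputs are boolean arrays on the same
    voxel grid."""
    total = int(myocardium_mask.sum())
    pathologic = int((pathology_mask & myocardium_mask).sum())
    return 100.0 * pathologic / total if total else 0.0

# e.g. ischemic_burden = burden_percent(lv_myocardium_mask, hypoperfusion_mask)
```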
Joint image registration and fusion method with a gradient strength regularization
NASA Astrophysics Data System (ADS)
Lidong, Huang; Wei, Zhao; Jun, Wang
2015-05-01
Image registration is an essential process for image fusion, and fusion performance can be used to evaluate registration accuracy. We propose a maximum likelihood (ML) approach to joint image registration and fusion instead of treating them as two independent processes in the conventional way. To improve the visual quality of a fused image, a gradient strength (GS) regularization is introduced into the cost function of the ML approach. The GS of the fused image is controllable by setting the target GS value in the regularization term. This is useful because a larger target GS gives a clearer fused image, while a smaller target GS makes the fused image smoother and thus restrains noise. Hence, the subjective quality of the fused image can be improved whether or not the source images are polluted by noise. We obtain the fused image and the registration parameters successively by minimizing the cost function with an iterative optimization method. Experimental results show that our method is effective for translation, rotation, and scale parameters in the ranges of [-2.0, 2.0] pixels, [-1.1 deg, 1.1 deg], and [0.95, 1.05], respectively, and for noise variances smaller than 300. It is also demonstrated that our method yields a more visually pleasing fused image and higher registration accuracy compared with a state-of-the-art algorithm.
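The abstract does not spell out the cost function; one plausible general form of such a regularized ML objective is sketched below in LaTeX, where every symbol is an assumption made for illustration rather than the authors' notation:

```latex
% F: fused image, I_k: source images, T_{\theta_k}: spatial transform with
% parameters \theta_k, G(\cdot): gradient strength, G_t: target GS value,
% \lambda: regularization weight.
J\big(F, \{\theta_k\}\big) \;=\; \sum_{k} \big\lVert F - T_{\theta_k}(I_k) \big\rVert_2^2
\;+\; \lambda \big( G(F) - G_t \big)^2
```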
Adaptive Intuitionistic Fuzzy Enhancement of Brain Tumor MR Images
NASA Astrophysics Data System (ADS)
Deng, He; Deng, Wankai; Sun, Xianping; Ye, Chaohui; Zhou, Xin
2016-10-01
Image enhancement techniques are able to improve the contrast and visual quality of magnetic resonance (MR) images. However, conventional methods cannot compensate for some deficiencies of the individual brain tumor MR imaging modes. In this paper, we propose an adaptive intuitionistic fuzzy sets-based scheme, called AIFE, which takes information provided by different MR acquisitions and tries to enhance the normal and abnormal structural regions of the brain while displaying the enhanced results as a single image. The AIFE scheme first separates an input image into several sub-images, then divides each sub-image into object and background areas. After that, different novel fuzzification, hyperbolization and defuzzification operations are implemented on each object/background area, and finally an enhanced result is achieved via nonlinear fusion operators. The fuzzy implementations can be processed in parallel. Real-data experiments demonstrate that the AIFE scheme not only effectively fuses information from images acquired with different MR sequences into a single image, but also achieves better enhancement performance than conventional baseline algorithms. This indicates that the proposed AIFE scheme has potential for improving the detection and diagnosis of brain tumors.
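The AIFE operators themselves are not given in the abstract. As background only, here is a generic fuzzy contrast-enhancement sketch using the classical fuzzification / INT-intensification / defuzzification chain; it illustrates the family of methods being adapted, not the AIFE algorithm:

```python
import numpy as np

def fuzzy_intensify(img, n_passes=1):
    """Generic fuzzy contrast enhancement (not AIFE itself): fuzzify the
    gray levels to [0, 1], apply the classical INT intensification operator,
    then defuzzify back to the original gray range."""
    g = img.astype(float)
    gmin, gmax = g.min(), g.max()
    mu = (g - gmin) / (gmax - gmin + 1e-12)              # fuzzification
    for _ in range(n_passes):                            # INT operator
        mu = np.where(mu <= 0.5, 2 * mu**2, 1 - 2 * (1 - mu)**2)
    return mu * (gmax - gmin) + gmin                     # defuzzification
```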
NASA Astrophysics Data System (ADS)
Todorov, Evgueni; Boulware, Paul; Gaah, Kingsley
2018-03-01
Nondestructive evaluation (NDE) at various fabrication stages is required to assure the quality of feedstock and solid builds. Industry efforts are shifting towards solutions that can provide real-time monitoring of the additive manufacturing (AM) fabrication process layer by layer while the component is being built, to reduce or eliminate dependence on post-process inspection. An array eddy current (AEC) electromagnetic NDE technique was developed and implemented to directly scan the component, without physical contact with the powder and fused layer surfaces, at elevated temperatures inside a laser powder bed fusion (L-PBF) chamber. The technique can detect discontinuities, surface irregularities, and undesirable metallurgical phase transformations in the magnetic and nonmagnetic conductive materials used for laser fusion. The AEC hardware and software were integrated with the L-PBF test bed. Two layer-by-layer tests of Inconel 625 coupons with AM-built discontinuities and lack of fusion were conducted inside the L-PBF chamber. The AEC technology demonstrated excellent sensitivity to seeded, natural surface, and near-surface-embedded discontinuities, while also detecting surface topography. The data were acquired and imaged in a layer-by-layer sequence, demonstrating the real-time monitoring capabilities of this new technology.
Yoakim, M; Hou, W; Liu, Y; Carpenter, C L; Kapeller, R; Schaffhausen, B S
1992-01-01
The binding of phosphatidylinositol-3-kinase to the polyomavirus middle T antigen is facilitated by tyrosine phosphorylation of middle T on residue 315. The pp85 subunit of phosphatidylinositol-3-kinase contains two SH2 domains, one in the middle of the molecule and one at the C terminus. When assayed by blotting with phosphorylated middle T, the more N-terminal SH2 domain is responsible for binding to middle T. When assayed in solution with glutathione S transferase fusions, both SH2s are capable of binding phosphorylated middle T. While both SH2 fusions can compete with intact pp85 for binding to middle T, the C-terminal SH2 is the more efficient of the two. Interaction between pp85 or its SH2 domains and middle T can be blocked by a synthetic peptide comprising the tyrosine phosphorylation sequence around middle T residue 315. Despite the fact that middle T can interact with both SH2s, these domains are not equivalent. Only the C-terminal SH2-middle T interaction was blocked by anti-SH2 antibody; the two SH2 fusions also interact with different cellular proteins. PMID:1380095
Layer-Based Approach for Image Pair Fusion.
Son, Chang-Hwan; Zhang, Xiao-Ping
2016-04-20
Recently, image pairs, such as noisy and blurred images or infrared and noisy images, have been considered as a solution to provide high-quality photographs under low-lighting conditions. In this paper, a new method for decomposing the image pairs into two layers, i.e., the base layer and the detail layer, is proposed for image pair fusion. In the case of infrared and noisy images, simple naive fusion leads to unsatisfactory results due to the discrepancies in brightness and image structures between the image pair. To address this problem, a local contrast-preserving conversion method is first proposed to create a new base layer of the infrared image, which has a visual appearance similar to the other base layer, i.e., that of the denoised noisy image. Then, a new way of designing three types of detail layers from the given noisy and infrared images is presented. To estimate the noise-free and unknown detail layer from the three designed detail layers, the optimization framework is modeled with residual-based sparsity and patch-redundancy priors. To better suppress the noise, an iterative approach that updates the detail layer of the noisy image is adopted via a feedback loop. The proposed layer-based method can also be applied to fuse noisy and blurred image pairs. The experimental results show that the proposed method is effective for solving the image pair fusion problem.
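As a minimal illustration of the base/detail split that the method builds on, the sketch below uses a Gaussian low-pass as the base layer; the paper's actual conversion method and sparsity/redundancy priors are considerably more elaborate:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_layers(img, sigma=3.0):
    """Decompose an image into a smooth base layer plus a residual detail layer."""
    base = gaussian_filter(img.astype(float), sigma)
    return base, img - base

def naive_fuse(base_a, detail_b):
    """Recombine one image's base layer with the other image's detail layer."""
    return base_a + detail_b
```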
Color image fusion for concealed weapon detection
NASA Astrophysics Data System (ADS)
Toet, Alexander
2003-09-01
Recent advances in passive and active imaging sensor technology offer the potential to detect weapons that are concealed underneath a person's clothing or carried along in bags. Although the concealed weapons can sometimes easily be detected, it can be difficult to perceive their context, due to the non-literal nature of these images. Especially for dynamic crowd surveillance purposes, it may be impossible to rapidly assess with certainty which individual in the crowd is the one carrying the observed weapon. Sensor fusion is an enabling technology that may be used to solve this problem. Through fusion, the signal of the sensor that depicts the weapon can be displayed in the context provided by a sensor of a different modality. We propose an image fusion scheme in which non-literal imagery can be fused with standard color images such that the result clearly displays the observed weapons in the context of the original color image. The procedure is such that the relevant contrast details from the non-literal image are transferred to the color image without altering the original color distribution of this image. The result is a natural-looking color image that fluently combines all details from both input sources. When an observer who performs a dynamic crowd surveillance task detects a weapon in the scene, he will also be able to quickly determine which person in the crowd is actually carrying the observed weapon (e.g., "the man with the red T-shirt and blue jeans"). The method is illustrated by the fusion of thermal 8-12 μm imagery with standard RGB color images.
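A hedged sketch of the general idea, not Toet's exact scheme: high-frequency detail from the non-literal image modulates the luminance of the color image, and all three channels are rescaled by the same factor so that hue and saturation, and hence the original color distribution, are preserved:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def transfer_details(rgb, nonliteral, gain=0.5, sigma=3.0):
    """rgb: float array in [0, 1] of shape (H, W, 3); nonliteral: (H, W)
    image from e.g. a MMW or IR sensor, roughly normalized to [0, 1]."""
    lum = rgb @ np.array([0.299, 0.587, 0.114])               # luminance
    detail = nonliteral - gaussian_filter(nonliteral.astype(float), sigma)
    new_lum = np.clip(lum + gain * detail, 1e-3, 1.0)
    scale = new_lum / np.maximum(lum, 1e-3)                   # same factor per channel
    return np.clip(rgb * scale[..., None], 0.0, 1.0)
```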
Standardizing Quality Assessment of Fused Remotely Sensed Images
NASA Astrophysics Data System (ADS)
Pohl, C.; Moellmann, J.; Fries, K.
2017-09-01
The multitude of available operational remote sensing satellites has led to the development of many image fusion techniques to provide high spatial, spectral and temporal resolution images. The comparison of different techniques is necessary to obtain an optimized image for the different applications of remote sensing. There are two approaches to assessing image quality: 1. qualitatively, by visual interpretation, and 2. quantitatively, using image quality indices. However, an objective comparison is difficult because a visual assessment is always subjective and quantitative assessments use different criteria. Depending on the criteria and indices, the result varies. Therefore it is necessary to standardize both processes (qualitative and quantitative assessment) in order to allow an objective evaluation of image fusion quality. Various studies have been conducted at the University of Osnabrueck (UOS) to establish a standardized process to objectively compare fused image quality. First, established image fusion quality assessment protocols, i.e. Quality with No Reference (QNR) and Khan's protocol, were compared in various fusion experiments. Second, the process of visual quality assessment was structured and standardized with the aim of providing an evaluation protocol. This manuscript reports on the results of the comparison and provides recommendations for future research.
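Several quantitative indices used in such comparisons, including the building blocks of QNR, derive from the Wang-Bovik universal image quality index; a global-version sketch follows (in practice the index is computed over sliding windows and averaged):

```python
import numpy as np

def uiqi(x, y):
    """Wang-Bovik universal image quality index between two images,
    global version; 1.0 means x and y are identical up to the model."""
    x, y = x.ravel().astype(float), y.ravel().astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))
```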
Multi-focus image fusion and robust encryption algorithm based on compressive sensing
NASA Astrophysics Data System (ADS)
Xiao, Di; Wang, Lan; Xiang, Tao; Wang, Yong
2017-06-01
Multi-focus image fusion schemes have been studied in recent years. However, little work has been done on the security of multi-focus image transmission. This paper proposes a scheme that can reduce the data transmission volume and resist various attacks. First, multi-focus image fusion based on wavelet decomposition can generate complete scene images and optimize the perception of the human eye. The fused images are sparsely represented with the DCT and sampled with a structurally random matrix (SRM), which reduces the data volume and realizes the initial encryption. Then the obtained measurements are further encrypted to resist noise and cropping attacks by combining permutation and diffusion stages. At the receiver, the cipher images can be jointly decrypted and reconstructed. Simulation results demonstrate the security and robustness of the proposed scheme.
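A sketch of the wavelet-decomposition fusion stage alone (the DCT/SRM sampling and encryption stages are omitted), assuming PyWavelets; the average/maximum-absolute fusion rules shown are the common choice and may differ in detail from the paper's:

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", level=2):
    """Multi-focus fusion: average the coarse approximation band and, at
    each position of every detail band, keep the coefficient with the
    larger magnitude (presumed to come from the in-focus image)."""
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                       # approximation band
    for da, db in zip(ca[1:], cb[1:]):                    # (cH, cV, cD) per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```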
Chavan, Satishkumar S; Mahajan, Abhishek; Talbar, Sanjay N; Desai, Subhash; Thakur, Meenakshi; D'cruz, Anil
2017-02-01
Neurocysticercosis (NCC) is a parasitic infection caused by the tapeworm Taenia solium in its larval stage, which affects the central nervous system of the human body (a definitive host). It results in the formation of multiple lesions in the brain at different locations during its various stages. During diagnosis of such symptomatic patients, these lesions can be better visualized using a feature-based fusion of Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). This paper presents a novel approach to Multimodality Medical Image Fusion (MMIF) used for the analysis of the lesions for diagnostic purposes and post-treatment review of NCC. The MMIF presented here is a technique of combining CT and MRI data of the same patient into a new slice using a Nonsubsampled Rotated Complex Wavelet Transform (NSRCxWT). The forward NSRCxWT is applied to both source modalities separately to extract the complementary and edge-related features. These features are then combined to form a composite spectral plane using average and maximum value selection fusion rules. The inverse transformation of this composite plane results in a new, visually better, and enriched fused image. The proposed technique is tested on pilot study data sets of patients infected with NCC. The quality of these fused images is measured using objective and subjective evaluation metrics. Objective evaluation is performed by estimating fusion parameters like entropy, fusion factor, image quality index, edge quality measure, mean structural similarity index measure, etc. The fused images are also evaluated for their visual quality using subjective analysis with the help of three expert radiologists. The experimental results on 43 image data sets of 17 patients are promising and superior when compared with state-of-the-art wavelet-based fusion algorithms. The proposed algorithm can be a part of a computer-aided detection and diagnosis (CADD) system which assists radiologists in clinical practice. Copyright © 2016 Elsevier Ltd. All rights reserved.
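Two of the objective metrics listed are easy to state precisely; the sketch below computes entropy and the fusion factor, taking the common definition of fusion factor as the sum of the mutual information between the fused image and each source image:

```python
import numpy as np

def entropy(img, bins=256):
    counts, _ = np.histogram(img, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

def mutual_information(a, b, bins=64):
    pab, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pab /= pab.sum()
    pa, pb = pab.sum(axis=1), pab.sum(axis=0)
    nz = pab > 0
    return (pab[nz] * np.log2(pab[nz] / np.outer(pa, pb)[nz])).sum()

def fusion_factor(src_a, src_b, fused, bins=64):
    """Information that the fused image shares with both source images."""
    return (mutual_information(src_a, fused, bins)
            + mutual_information(src_b, fused, bins))
```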
Live-cell imaging of conidial anastomosis tube fusion during colony initiation in Fusarium oxysporum
Kurian, Smija M.; Di Pietro, Antonio; Read, Nick D.
2018-01-01
Fusarium oxysporum exhibits conidial anastomosis tube (CAT) fusion during colony initiation to form networks of conidial germlings. Here we determined the optimal culture conditions for this fungus to undergo CAT fusion between microconidia in liquid medium. Extensive high resolution, confocal live-cell imaging was performed to characterise the different stages of CAT fusion, using genetically encoded fluorescent labelling and vital fluorescent organelle stains. CAT homing and fusion were found to be dependent on adhesion to the surface, in contrast to germ tube development which occurs in the absence of adhesion. Staining with fluorescently labelled concanavalin A indicated that the cell wall composition of CATs differs from that of microconidia and germ tubes. The movement of nuclei, mitochondria, vacuoles and lipid droplets through fused germlings was observed by live-cell imaging. PMID:29734342
Information Fusion and Visualisation in Anti Asymmetric Warfare
2006-12-01
thermal detectors (e.g. bolometers). They used the MWIR and LWIR part of the IR spectrum. Quantum detectors realize an image rate of over 100Hz while... panorama image by image fusion of several sensor components. EO designators are distinguished through their agility and increased resolution
Joint interpretation of geophysical data using Image Fusion techniques
NASA Astrophysics Data System (ADS)
Karamitrou, A.; Tsokas, G.; Petrou, M.
2013-12-01
Joint interpretation of geophysical data produced by different methods is a challenging area of research in a wide range of applications. In this work we apply several image fusion approaches to combine maps of electrical resistivity, electromagnetic conductivity, vertical gradient of the magnetic field, magnetic susceptibility, and ground penetrating radar reflections, in order to detect archaeological relics. We utilize data gathered by Arkansas University, with the support of the U.S. Department of Defense, through the Strategic Environmental Research and Development Program (SERDP-CS1263). The area of investigation is Army City, situated in Riley County, Kansas, USA. The depth of the relics is estimated at about 30 cm from the surface, yet the surface indications of their existence are limited. We initially register the images from the different methods to correct for random offsets due to the use of hand-held devices during the measurement procedure. Next, we apply four different image fusion approaches to create combined images: fusion with mean values, wavelet decomposition, curvelet transform, and curvelet transform enhancing the images along specific angles. We create seven combinations of pairs between the available geophysical datasets. The combinations are such that for every pair at least one high-resolution method (resistivity or magnetic gradiometry) is included. Our results indicate that in almost every case the method of mean values produces satisfactory fused images that incorporate the majority of the features of the initial images. However, the contrast of the final image is reduced, and in some cases the averaging process nearly eliminates features that are faint in the original images. Wavelet-based fusion also yields good results, providing additional control in selecting the feature wavelength. Curvelet-based fusion proves to be the most effective method in most cases. The ability of the curvelet domain to unfold the image in terms of space, wavenumber, and orientation provides important advantages compared with the rest of the methods by allowing the incorporation of a-priori information about the orientation of the potential targets.
Multi-sensor fusion of Landsat 8 thermal infrared (TIR) and panchromatic (PAN) images.
Jung, Hyung-Sup; Park, Sung-Whan
2014-12-18
Data fusion is defined as the combination of data from multiple sensors such that the resulting information is better than would be possible if the sensors were used individually. The multi-sensor fusion of panchromatic (PAN) and thermal infrared (TIR) images is a good example of this data fusion. While a PAN image has higher spatial resolution, a TIR one has lower spatial resolution. In this study, we have proposed an efficient method to fuse Landsat 8 PAN and TIR images using an optimal scaling factor in order to control the trade-off between the spatial details and the thermal information. We have compared the fused images created from different scaling factors and then tested the performance of the proposed method at urban and rural test areas. The test results show that the proposed method merges the spatial resolution of the PAN image and the temperature information of the TIR image efficiently. The proposed method may be applied to detect lava flows of volcanic activity, radioactive exposure of nuclear power plants, and surface temperature change with respect to land-use change.
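A hedged sketch of scaling-factor-controlled fusion in the spirit described (not necessarily the authors' exact formulation): the TIR band is upsampled to the PAN grid and scaled PAN high-pass detail is injected, so the scaling factor trades spatial detail against fidelity of the thermal values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def fuse_pan_tir(pan, tir, scale_factor=0.3, sigma=2.0):
    """scale_factor = 0 keeps pure (upsampled) TIR temperatures; larger
    values inject more PAN spatial detail at the cost of thermal fidelity."""
    factors = np.array(pan.shape) / np.array(tir.shape)
    tir_up = zoom(tir.astype(float), factors, order=1)     # TIR on the PAN grid
    detail = pan.astype(float) - gaussian_filter(pan.astype(float), sigma)
    return tir_up + scale_factor * detail
```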
Zhang, Xin; Cui, Jintian; Wang, Weisheng; Lin, Chao
2017-01-01
To address the problem of image texture feature extraction, a direction measure statistic based on the directionality of image texture is constructed, and a new method of texture feature extraction, based on the fusion of the direction measure and the gray level co-occurrence matrix (GLCM), is proposed in this paper. This method applies the GLCM to extract the texture feature values of an image and integrates the weight factor introduced by the direction measure to obtain the final texture features of the image. A set of classification experiments on high-resolution remote sensing images was performed using a support vector machine (SVM) classifier with the direction measure and GLCM fusion algorithm. Both qualitative and quantitative approaches were applied to assess the classification results. The experimental results demonstrated that texture feature extraction based on the fusion algorithm achieved better image recognition, and the accuracy of classification based on this method was significantly improved. PMID:28640181
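The GLCM stage can be reproduced with scikit-image (graycomatrix in recent releases, greycomatrix in older ones); the direction-measure weighting itself is specific to the paper and is only indicated in a comment:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch, levels=32):
    """Directional GLCM texture features for one gray-scale image patch."""
    q = (patch.astype(float) / patch.max() * (levels - 1)).astype(np.uint8)
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]      # 0, 45, 90, 135 deg
    glcm = graycomatrix(q, distances=[1], angles=angles,
                        levels=levels, symmetric=True, normed=True)
    # One value per angle; the paper then weights these by its direction measure.
    return {prop: graycoprops(glcm, prop)[0]
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```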
Aggarwal, A; Adam, R D; Nash, T E
1989-01-01
The amino acid sequence of a 29.4-kilodalton [corrected] structural protein located in the ventral disk and axostyle of Giardia lamblia was determined. Clone lambda M16 from a mung bean expression library in lambda gt11 expressed a fusion protein recognized by three different isolate-specific antisera and sera from G. lamblia-infected gerbils. One of the three EcoRI fragments (M16; 1.26 kilobases) encoded the recognized protein. Sequence analysis revealed a single open reading frame of 813 base pairs. Two areas showed conservation of the positions of some amino acids. The abundance of arginine, glutamic acid, and threonine was increased. Two potential alpha-helical regions were deduced in the regions of repeats. Antisera to the M16 fusion protein reacted specifically with internal components of the ventral disk and axostyle, as well as Giardia fractions enriched for ventral disk structural proteins. An identical protein was recognized in different isolates by anti-M16, and a single identical band was recognized in Southern blots using the M16 1.26-kilobase fragment as a probe. Therefore, the 29.4-kilodalton [corrected] protein appears to be highly conserved compared with variant surface proteins. PMID:2925253
Klessen, C; Schmidt, K H; Gumpert, J; Grosse, H H; Malke, H
1989-01-01
To circumvent problems encountered in the synthesis of active chymosin in a number of bacteria and fungi, a recombinant DNA L-form expression system that directed the complete secretion of fully activable prochymosin into the extracellular culture medium was developed. The expression plasmid constructions involved the in-frame fusion of prochymosin cDNA minus codons 1 to 4 to streptococcal pyrogenic exotoxin type A gene (speA') sequences, including the speA promoter, ribosomal binding site, and signal sequence and five codons of mature SpeA. Secretion of fusion prochymosin enzymatically and immunologically indistinguishable from bovine prochymosin was achieved after transformation of two stable protoplast type L-form strains derived from Proteus mirabilis. The secreted proenzyme was converted by autocatalytic processing to chymosin showing milk-clotting activity. In controlled laboratory fermentation processes, a maximum specific rate of activable prochymosin synthesis of 0.57 x 10(-3)/h was determined from the time courses of biomass dry weight and product formation. Yields as high as 40 +/- 10 micrograms/ml were obtained in the cell-free culture fluid of strain L99 carrying a naturally altered expression plasmid of increased segregational stability. The expression-secretion system described may be generally useful for production of recombinant mammalian proteins synthesized intracellularly as aberrantly folded insoluble aggregates. PMID:2499253
Studies on the expression of an H-2K/human growth hormone fusion gene in giant transgenic mice.
Morello, D; Moore, G; Salmon, A M; Yaniv, M; Babinet, C
1986-01-01
Transgenic mice carrying the H-2K/human growth hormone (hGH) fusion gene were produced by microinjecting into the pronucleus of fertilized eggs DNA molecules containing 2 kb of the 5' flanking sequences (including promoter) of the class I H-2Kb gene joined to the coding sequences of the hGH gene. Thirteen transgenic mice were obtained which all contained detectable levels of hGH hormone in their blood. Nine grew larger than their control litter-mates. Endogenous H-2Kb and exogenous hGH mRNA levels were analysed by S1 nuclease digestion experiments. hGH transcripts were found in all the tissues examined and the pattern of expression paralleled that of endogenous H-2K gene expression, being high in liver and lymphoid organs and low in muscle and brain. Thus 2 kb of the 5' promoter/regulatory region of the H-2K gene are sufficient to ensure regulated expression of hGH in transgenic mice. This promoter may therefore be of use to target the expression of different exogenous genes in most tissues of transgenic mice and to study the biological role of the corresponding proteins in different cellular environments. PMID:3019667
Elastin-like Polypeptide (ELP) Charge Influences Self-Assembly of ELP-mCherry Fusion Proteins.
Mills, Carolyn E; Michaud, Zachary; Olsen, Bradley D
2018-05-23
Self-assembly of protein-polymer bioconjugates presents an elegant strategy for controlling the nanostructure and orientation of globular proteins in functional materials. Recent work has shown that genetic fusion of the globular protein mCherry to an elastin-like polypeptide (ELP) yields self-assembly behavior similar to that of these protein-polymer bioconjugates. In the context of studying protein-polymer bioconjugate self-assembly, the mutability of the ELP sequence allows several different properties of the ELP block to be tuned orthogonally while maintaining consistent polypeptide backbone chemistry. This work uses this ELP sequence tunability, in combination with the precise control offered by genetic engineering of an amino acid sequence, to generate a library of four novel ELP sequences that are used to study the combined effect of charge and hydrophobicity on ELP-mCherry fusion protein self-assembly. Concentrated-solution self-assembly is studied by small-angle X-ray scattering (SAXS) and depolarized light scattering (DPLS). These experiments show that fusions containing a negatively charged ELP block do not assemble at all, and fusions with a charge-balanced ELP block exhibit a weak propensity for assembly. By comparison, the fusion containing an uncharged ELP block starts to order at 40 wt % in solution and, at all concentrations measured, has sharper, more intense SAXS peaks than the other fusion proteins. These experiments show that the charge character of the ELP block is a stronger predictor of self-assembly behavior than the hydrophobicity of the ELP block. Dilute-solution small-angle neutron scattering (SANS) on the ELPs alone suggests that all ELPs used in this study (including the uncharged ELP) adopt dilute-solution conformations similar to those of traditional polymers, including polyampholytes and polyelectrolytes. Finally, dynamic light scattering studies on ELP-mCherry blends show that there is no significant complexation between the charged ELPs and mCherry. Therefore, it is proposed that the superior self-assembly of fusion proteins containing an uncharged ELP block is due to effective repulsions between charged and uncharged blocks arising from local charge correlation effects and, in the case of anionic ELPs, repulsion between like charges within the ELP block.
Domain fusion analysis by applying relational algebra to protein sequence and domain databases.
Truong, Kevin; Ikura, Mitsuhiko
2003-05-06
Domain fusion analysis is a useful method to predict functionally linked proteins that may be involved in direct protein-protein interactions or in the same metabolic or signaling pathway. As separate domain databases like BLOCKS, PROSITE, Pfam, SMART, PRINTS-S, ProDom, TIGRFAMs, and amalgamated domain databases like InterPro continue to grow in size and quality, a computational method to perform domain fusion analysis that leverages these efforts will become increasingly powerful. This paper proposes a computational method employing relational algebra to find domain fusions in protein sequence databases. The feasibility of this method was illustrated on the SWISS-PROT+TrEMBL sequence database using domain predictions from the Pfam HMM (hidden Markov model) database. We identified 235 and 189 putative functionally linked protein partners in H. sapiens and S. cerevisiae, respectively. From the scientific literature, we were able to confirm many of these functional linkages, while the remainder offer testable experimental hypotheses. Results can be viewed at http://calcium.uhnres.utoronto.ca/pi. As the analysis can be computed quickly on any relational database that supports standard SQL (structured query language), it can be dynamically updated along with the sequence and domain databases, thereby improving the quality of predictions over time.
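The relational-algebra queries can be mimicked with plain Python sets; a toy sketch of the Rosetta-stone logic, with entirely hypothetical protein and domain identifiers:

```python
def domain_fusions(proteome_a, proteome_b):
    """Predict functionally linked pairs in proteome_b: two proteins whose
    domains co-occur on a single 'composite' protein in proteome_a but sit
    on separate proteins in proteome_b. Each proteome maps a protein name
    to a set of domain identifiers (e.g. Pfam accessions)."""
    links = set()
    for composite, doms in proteome_a.items():
        for p1, d1 in proteome_b.items():
            for p2, d2 in proteome_b.items():
                if p1 >= p2:
                    continue  # consider each unordered pair once
                # p1 and p2 must each share a different domain with the composite
                if (d1 & doms) and (d2 & doms) and not (d1 & d2 & doms):
                    links.add((p1, p2, composite))
    return links

human = {"HsFusion": {"PF_A", "PF_B"}}           # hypothetical composite protein
yeast = {"Yp1": {"PF_A"}, "Yp2": {"PF_B"}}       # domains on separate proteins
print(domain_fusions(human, yeast))              # {('Yp1', 'Yp2', 'HsFusion')}
```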
RET fusion as a novel driver of medullary thyroid carcinoma.
Grubbs, Elizabeth G; Ng, Patrick Kwok-Shing; Bui, Jacquelin; Busaidy, Naifa L; Chen, Ken; Lee, Jeffrey E; Lu, Xinyan; Lu, Hengyu; Meric-Bernstam, Funda; Mills, Gordon B; Palmer, Gary; Perrier, Nancy D; Scott, Kenneth L; Shaw, Kenna R; Waguespack, Steven G; Williams, Michelle D; Yelensky, Roman; Cote, Gilbert J
2015-03-01
Oncogenic RET tyrosine kinase gene fusions and activating mutations have recently been identified in lung cancers, prompting initiation of targeted therapy trials in this disease. Although RET point mutation has been identified as a driver of tumorigenesis in medullary thyroid carcinoma (MTC), no fusions have been described to date. We evaluated the role of RET fusion as an oncogenic driver in MTC. We describe a patient who died from aggressive sporadic MTC < 10 months after diagnosis. Her tumor was evaluated by means of next-generation sequencing, including an intronic capture strategy. A reciprocal translocation involving RET intron 12 was identified. The fusion was validated using a targeted break apart fluorescence in situ hybridization probe, and RNA sequencing confirmed the existence of an in-frame fusion transcript joining MYH13 exon 35 with RET exon 12. Ectopic expression of fusion product in a murine Ba/F3 cell reporter model established strong oncogenicity. Three tyrosine kinase inhibitors currently used to treat MTC in clinical practice blocked tumorigenic cell growth. This finding represents the report of a novel RET fusion, the first of its kind described in MTC. The finding of this potential novel oncogenic mechanism has clear implications for sporadic MTC, which in the majority of cases has no driver mutation identified. The presence of a RET fusion also provides a plausible target for RET tyrosine kinase inhibitor therapies.
Value of Image Fusion in Coronary Angiography for the Detection of Coronary Artery Bypass Grafts.
Plessis, Julien; Warin Fresse, Karine; Cahouch, Zachary; Manigold, Thibaut; Letocart, Vincent; Le Gloan, Laurianne; Guyomarch, Béatrice; Guerin, Patrice
2016-06-10
Coronary angiography is more complex in patients with coronary artery bypass grafts (CABG). Image fusion is a new technology that allows the overlay of a three-dimensional (3D) computed tomography (CT) model onto fluoroscopic images in real time. This single-center prospective study included 66 patients with previous CABG undergoing coronary and bypass graft angiography. Image fusion coronary angiographies (fusion group, 20 patients) were compared to conventional coronary angiographies (control group, 46 patients). The fusion group included patients for whom a previous chest CT scan with contrast was available. For patients in this group, the aorta and CABG were reconstructed in 3D from CT acquisitions and merged in real time with the fluoroscopic images. The following parameters were compared: time needed to localize the CABG; procedure duration; air kerma (AK); dose area product (DAP); and volume of contrast media injected. Results are expressed as medians. There were no significant differences between the 2 groups in patient demographics and procedure characteristics (access site, number of bypasses to be found, and interventional cardiologist's experience). The time to localize CABG was significantly shorter in the fusion group (7.3 versus 12.4 minutes; P=0.002), as were the procedure duration (20.6 versus 25.6 minutes; P=0.002), AK (610 versus 814 mGy; P=0.02), DAP (4390 versus 5922.5 cGy·cm(2); P=0.02), and volume of iodinated contrast media (85 versus 116 cc; P=0.002). 3D image fusion improves CABG detection in coronary angiography and reduces the time necessary to localize CABG, total procedure duration, radiation exposure, and volume of contrast media. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
Multisensor fusion for 3D target tracking using track-before-detect particle filter
NASA Astrophysics Data System (ADS)
Moshtagh, Nima; Romberg, Paul M.; Chan, Moses W.
2015-05-01
This work presents a novel fusion mechanism for estimating the three-dimensional trajectory of a moving target using images collected by multiple imaging sensors. The proposed projective particle filter avoids explicit target detection prior to fusion. In the projective particle filter, particles that represent the posterior density (of the target state in a high-dimensional space) are projected onto the lower-dimensional observation space. Measurements are generated directly in the observation space (image plane) and a marginal (sensor) likelihood is computed. The particle states and their weights are updated using the joint likelihood computed from all the sensors. The 3D state estimate of the target (system track) is then generated from the states of the particles. This approach is similar to track-before-detect particle filters, which are known to perform well in tracking dim and stealthy targets in image collections. Our approach extends the track-before-detect approach to 3D tracking using the projective particle filter. The performance of this measurement-level fusion method is compared with that of a track-level fusion algorithm using the projective particle filter. In the track-level fusion algorithm, the 2D sensor tracks are generated separately and transmitted to a fusion center, where they are treated as measurements to the state estimator. The 2D sensor tracks are then fused to reconstruct the system track. A realistic synthetic scenario with a boosting target was generated and used to study the performance of the fusion mechanisms.
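A minimal sketch of the measurement-level fusion step, assuming per-sensor projection functions and a placeholder likelihood; the motion model, projection geometry, and likelihood actually used are not specified in the abstract:

```python
import numpy as np

def update_weights(particles, weights, images, projections, likelihood):
    """One projective-particle-filter measurement update with multi-sensor
    fusion: every 3D particle is projected into each sensor's image plane,
    the per-sensor likelihoods are multiplied into a joint likelihood, and
    the weights are renormalized.

    particles: (N, d) array of 3D target states; images: list of frames;
    projections: list of functions mapping a state to pixel coordinates;
    likelihood: placeholder (image, uv) -> scalar sensor likelihood."""
    joint = np.ones(len(particles))
    for img, project in zip(images, projections):
        for i, x in enumerate(particles):
            joint[i] *= likelihood(img, project(x))
    weights = weights * joint
    return weights / weights.sum()

# The system track can then be taken as e.g. the weighted mean state:
# estimate = (weights[:, None] * particles).sum(axis=0)
```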
Identifying transposon insertions and their effects from RNA-sequencing data.
de Ruiter, Julian R; Kas, Sjors M; Schut, Eva; Adams, David J; Koudijs, Marco J; Wessels, Lodewyk F A; Jonkers, Jos
2017-07-07
Insertional mutagenesis using engineered transposons is a potent forward genetic screening technique used to identify cancer genes in mouse model systems. In the analysis of these screens, transposon insertion sites are typically identified by targeted DNA-sequencing and subsequently assigned to predicted target genes using heuristics. As such, these approaches provide no direct evidence that insertions actually affect their predicted targets or how transcripts of these genes are affected. To address this, we developed IM-Fusion, an approach that identifies insertion sites from gene-transposon fusions in standard single- and paired-end RNA-sequencing data. We demonstrate IM-Fusion on two separate transposon screens of 123 mammary tumors and 20 B-cell acute lymphoblastic leukemias, respectively. We show that IM-Fusion accurately identifies transposon insertions and their true target genes. Furthermore, by combining the identified insertion sites with expression quantification, we show that we can determine the effect of a transposon insertion on its target gene(s) and prioritize insertions that have a significant effect on expression. We expect that IM-Fusion will significantly enhance the accuracy of cancer gene discovery in forward genetic screens and provide initial insight into the biological effects of insertions on candidate cancer genes. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Tensor functors between Morita duals of fusion categories
NASA Astrophysics Data System (ADS)
Galindo, César; Plavnik, Julia Yael
2017-03-01
Given a fusion category C and an indecomposable C-module category M, the fusion category C*_M of C-module endofunctors of M is called the (Morita) dual fusion category of C with respect to M. We describe tensor functors between two arbitrary duals C*_M and D*_N in terms of data associated to C and D. We apply the results to G-equivariantizations of fusion categories and group-theoretical fusion categories. We describe the orbits of the action of the Brauer-Picard group on the set of module categories and we propose a categorification of the Rosenberg-Zelinsky sequence for fusion categories.
NASA Astrophysics Data System (ADS)
Wang, Y.; Tobias, B.; Chang, Y.-T.; Yu, J.-H.; Li, M.; Hu, F.; Chen, M.; Mamidanna, M.; Phan, T.; Pham, A.-V.; Gu, J.; Liu, X.; Zhu, Y.; Domier, C. W.; Shi, L.; Valeo, E.; Kramer, G. J.; Kuwahara, D.; Nagayama, Y.; Mase, A.; Luhmann, N. C., Jr.
2017-07-01
Electron cyclotron emission (ECE) imaging is a passive radiometric technique that measures electron temperature fluctuations; and microwave imaging reflectometry (MIR) is an active radar imaging technique that measures electron density fluctuations. Microwave imaging diagnostic instruments employing these techniques have made important contributions to fusion science and have been adopted at major fusion facilities worldwide including DIII-D, EAST, ASDEX Upgrade, HL-2A, KSTAR, LHD, and J-TEXT. In this paper, we describe the development status of three major technological advancements: custom mm-wave integrated circuits (ICs), digital beamforming (DBF), and synthetic diagnostic modeling (SDM). These have the potential to greatly advance microwave fusion plasma imaging, enabling compact and low-noise transceiver systems with real-time, fast tracking ability to address critical fusion physics issues, including ELM suppression and disruptions in the ITER baseline scenario, naturally ELM-free states such as QH-mode, and energetic particle confinement (i.e. Alfvén eigenmode stability) in high-performance regimes that include steady-state and advanced tokamak scenarios. Furthermore, these systems are fully compatible with today’s most challenging non-inductive heating and current drive systems and capable of operating in harsh environments, making them the ideal approach for diagnosing long-pulse and steady-state tokamaks.
Research on polarization imaging information parsing method
NASA Astrophysics Data System (ADS)
Yuan, Hongwu; Zhou, Pucheng; Wang, Xiaolong
2016-11-01
Polarization information parsing plays an important role in polarization imaging detection. This paper focuses on the polarization information parsing method. Firstly, the general process of polarization information parsing is given, mainly including polarization image preprocessing, calculation of multiple polarization parameters, polarization image fusion and polarization image tracking. Then the research achievements on the polarization information parsing method are presented. In terms of polarization image preprocessing, a polarization image registration method based on maximum mutual information is designed; experiments show that this method improves registration precision and satisfies the needs of polarization information parsing. In terms of the calculation of multiple polarization parameters, an omnidirectional polarization inversion model is built, from which a variety of polarization parameter images are obtained with clearly improved inversion precision. In terms of polarization image fusion, an adaptive optimal fusion method for multiple polarization parameters based on fuzzy integrals and sparse representation is given, and target detection in complex scenes is accomplished using a clustering image segmentation algorithm based on fractal characteristics. In terms of polarization image tracking, a fusion tracking algorithm combining mean-shift and polarization image features with auxiliary particle filtering is put forward to achieve smooth tracking of moving targets. Finally, the polarization information parsing method is applied to the polarization imaging detection of typical targets such as camouflaged targets, fog, and latent fingerprints.
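The "multiple polarization parameters calculation" step conventionally derives the Stokes parameters, the degree of linear polarization (DoLP) and the angle of polarization (AoP) from intensity images taken at four analyzer orientations; a standard sketch follows (the paper's own parameter set and inversion model may differ):

```python
import numpy as np

def polarization_parameters(i0, i45, i90, i135):
    """Stokes parameters, DoLP and AoP from analyzer images at
    0, 45, 90 and 135 degrees (a common four-channel scheme)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)          # total intensity
    s1 = i0 - i90                               # horizontal vs vertical
    s2 = i45 - i135                             # diagonal components
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)
    aop = 0.5 * np.arctan2(s2, s1)
    return s0, s1, s2, dolp, aop
```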
Wang, Zhixiong; Cheng, Yulan; Abraham, John M; Yan, Rong; Liu, Xi; Chen, Wei; Ibrahim, Sariat; Schroth, Gary P; Ke, Xiquan; He, Yulong; Meltzer, Stephen J
2017-10-15
Studies of chromosomal rearrangements and fusion transcripts have elucidated mechanisms of tumorigenesis and led to targeted cancer therapies. This study was aimed at identifying novel fusion transcripts in esophageal adenocarcinoma (EAC). To identify new fusion transcripts associated with EAC, targeted RNA sequencing and polymerase chain reaction (PCR) verification were performed in 40 EACs and matched nonmalignant specimens from the same patients. Genomic PCR and Sanger sequencing were performed to find the breakpoint of fusion genes. Five novel in-frame fusion transcripts were identified and verified in 40 EACs and in a validation cohort of 15 additional EACs (55 patients in all): fibroblast growth factor receptor 2 (FGFR2)-GRB2-associated binding protein 2 (GAB2) in 2 of 55 or 3.6%, Niemann-Pick C1 (NPC1)-maternal embryonic leucine zipper kinase (MELK) in 2 of 55 or 3.6%, ubiquitin-specific peptidase 54 (USP54)-calcium/calmodulin dependent protein kinase II γ (CAMK2G) in 2 of 55 or 3.6%, megakaryoblastic leukemia (translocation) 1 (MKL1)-fibulin 1 (FBLN1) in 1 of 55 or 1.8%, and CCR4-NOT transcription complex subunit 2 (CNOT2)-chromosome 12 open reading frame 49 (C12orf49) in 1 of 55 or 1.8%. A genomic analysis indicated that NPC1-MELK arose from a complex interchromosomal translocation event involving chromosomes 18, 3, and 9 with 3 rearrangement points, and this was consistent with chromoplexy. These data indicate that fusion transcripts occur at a stable frequency in EAC. Furthermore, our results indicate that chromoplexy is an underlying mechanism that generates fusion transcripts in EAC. These and other fusion transcripts merit further study as diagnostic markers and potential therapeutic targets in EAC. Cancer 2017;123:3916-24. © 2017 American Cancer Society.
Moche, M; Busse, H; Dannenberg, C; Schulz, T; Schmitgen, A; Trantakis, C; Winkler, D; Schmidt, F; Kahn, T
2001-11-01
The aim of this work was to realize and clinically evaluate an image fusion platform for the integration of preoperative MRI and fMRI data into the intraoperative images of an interventional MRI system, with a focus on neurosurgical procedures. A vertically open 0.5 T MRI scanner was equipped with a dedicated navigation system enabling the registration of additional imaging modalities (MRI, fMRI, CT) with the intraoperatively acquired data sets. These merged image data served as the basis for interventional planning and multimodal navigation. So far, the system has been used in 70 neurosurgical interventions (13 of which involved image data fusion, requiring 15 minutes of extra time). The augmented navigation system is characterized by a higher frame rate and a higher image quality as compared to the system-integrated navigation based on continuously acquired (near) real-time images. Patient movement and tissue shifts can be immediately detected by monitoring the morphological differences between the two navigation scenes. The multimodal image fusion allowed refined navigation planning, especially for the resection of deeply seated brain lesions or pathologies close to eloquent areas. Augmented intraoperative orientation and instrument guidance improve the safety and accuracy of neurosurgical interventions.
A novel false color mapping model-based fusion method of visual and infrared images
NASA Astrophysics Data System (ADS)
Qi, Bin; Kun, Gao; Tian, Yue-xin; Zhu, Zhen-yu
2013-12-01
A fast and efficient image fusion method is presented to generate near-natural colors from panchromatic visual and thermal imaging sensors. Firstly, a set of daytime color reference images is analyzed and a false color mapping principle is proposed according to human visual and emotional habits. That is, object colors should remain invariant after the color mapping operations, differences between infrared and visual images should be enhanced, and the background color should be consistent with the main scene content. Then a novel nonlinear color mapping model is given by introducing the geometric average of the gray values of the input visual and infrared images and a weighted average algorithm. To determine the control parameters in the mapping model, boundary conditions are listed according to the mapping principle above. Fusion experiments show that the new fusion method achieves a near-natural appearance of the fused image, enhancing color contrast and highlighting bright infrared objects when compared with the traditional TNO algorithm. Moreover, it has low complexity and is easy to implement for real-time processing, so it is quite suitable for nighttime imaging apparatus.
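A purely illustrative mapping in the spirit described, combining the geometric average and a weighted average of the visual and IR gray values; the paper's actual model and its boundary-condition-derived parameters are not reproduced here:

```python
import numpy as np

def false_color_fuse(vis, ir, w=0.6):
    """Illustrative near-natural false-color mapping (not the paper's exact
    model): the geometric mean carries the common scene content, while the
    signed visual/IR difference drives the color contrast of IR-bright objects."""
    vis = vis.astype(float) / 255.0
    ir = ir.astype(float) / 255.0
    common = np.sqrt(vis * ir)                    # geometric average term
    r = np.clip(w * vis + (1 - w) * ir, 0, 1)     # weighted average term
    g = common
    b = np.clip(common + 0.5 * (vis - ir), 0, 1)  # difference enhancement
    return np.stack([r, g, b], axis=-1)
```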
A tri-modality image fusion method for target delineation of brain tumors in radiotherapy.
Guo, Lu; Shen, Shuming; Harris, Eleanor; Wang, Zheng; Jiang, Wei; Guo, Yu; Feng, Yuanming
2014-01-01
To develop a tri-modality image fusion method for better target delineation in image-guided radiotherapy for patients with brain tumors. A new tri-modality image fusion method was developed that can fuse and display all image sets in one panel and one operation, and a feasibility study of gross tumor volume (GTV) delineation was conducted using data from three patients with brain tumors, including simulation CT, MRI, and 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) examinations before radiotherapy. Tri-modality image fusion was implemented after image registrations of CT+PET and CT+MRI, and the transparency weight of each modality could be adjusted and set by users. Three radiation oncologists delineated GTVs for all patients using dual-modality (MRI/CT) and tri-modality (MRI/CT/PET) image fusion, respectively. Inter-observer variation was assessed by the coefficient of variation (COV), the average distance between surface and centroid (ADSC), and the local standard deviation (SDlocal). Analysis of COV was also performed to evaluate intra-observer volume variation. The inter-observer variation analysis showed that the mean COV was 0.14 (± 0.09) and 0.07 (± 0.01) for dual-modality and tri-modality, respectively; the standard deviation of ADSC was significantly reduced (p<0.05) with tri-modality; and SDlocal averaged over the median GTV surface was reduced in patient 2 (from 0.57 cm to 0.39 cm) and patient 3 (from 0.42 cm to 0.36 cm) with the new method. The intra-observer volume variation was also significantly reduced (p = 0.00) with the tri-modality method as compared with the dual-modality method. With the new tri-modality image fusion method, smaller inter- and intra-observer variation in GTV definition for brain tumors can be achieved, which improves the consistency and accuracy of target delineation in individualized radiotherapy.
Nakamura, Makiko; Mie, Masayasu; Mihara, Hisakazu; Nakamura, Makoto; Kobatake, Eiry
2009-10-01
An artificial fusion protein, designed to have strong cell-adhesive activity and an active functional unit that enhances neuronal differentiation of mouse N1E-115 neuroblast cells, was developed. In this study, a laminin-1-derived IKVAV sequence, which stimulates neurite outgrowth under conditions of serum deprivation, was engineered and incorporated into an elastin-derived structural unit. The designed fusion protein also carried a cell-adhesive RGD sequence derived from fibronectin. The resultant fusion protein could adsorb efficiently onto hydrophobic culture surfaces and showed cell adhesion activity similar to laminin. N1E-115 cells grown on the fusion protein exhibited more cells with neurites than cells grown on laminin-1. These results indicate that the constructed protein retains the properties of the incorporated functional peptides and can provide effective signal transport. The strategy of designing multi-functional fusion proteins has the potential to support current tissue engineering techniques. © 2009 Wiley Periodicals, Inc.
Zhong, Shan; Zhang, Hai-ping; Zheng, Jie; Bai, Dong-yu; Fu, Li; Chen, Pei-qiong
2013-04-01
To investigate the frequency of the EML4-ALK fusion gene in non-small-cell lung cancer (NSCLC) patients and its correlation with clinicopathologic features, real-time PCR was used to detect the presence of the EML4-ALK fusion gene in 268 cases of NSCLC using paraffin-embedded tissue samples (among which 164 samples were re-validated by Sanger sequencing), and the related clinicopathological correlations were analyzed. The EML4-ALK fusion gene was found in 4.1% (11/268) of the cases. One hundred and sixty-four samples were verified by Sanger sequencing, and the overall concordance of the two methods (Sanger sequencing and real-time PCR) was 100%. Female patients (5.9%, 5/85), patients ≤60 years of age (4.3%, 6/140), non-smokers (6.8%, 8/118) and adenocarcinomas (7.6%, 10/132) had a higher mutation rate than male patients (3.3%, 6/183), patients >60 years of age (4.0%, 5/124), smokers (1.6%, 2/132) and squamous cell carcinomas (1.3%, 1/79), although the differences were not statistically significant for age (P = 0.918), gender (P = 0.503), smoking history (P = 0.092) or histological type (P = 0.094). Chinese NSCLC patients have a 4.1% detection rate of the EML4-ALK fusion gene in tumor tissues. Female gender, non-smoking status and adenocarcinoma histological subtype tend to be associated with a higher rate of EML4-ALK gene fusion.
Design of an Image Fusion Phantom for a Small Animal microPET/CT Scanner Prototype
NASA Astrophysics Data System (ADS)
Nava-García, Dante; Alva-Sánchez, Héctor; Murrieta-Rodríguez, Tirso; Martínez-Dávalos, Arnulfo; Rodríguez-Villafuerte, Mercedes
2010-12-01
Two separate microtomography systems recently developed at Instituto de Física, UNAM, produce anatomical (microCT) and physiological (microPET) images of small animals. In this work, the development and initial tests of an image fusion method based on fiducial markers for image registration between the two modalities are presented. A modular Helix/Line-Sources phantom was designed and constructed; this phantom contains fiducial markers that can be visualized in both imaging systems. The registration was carried out by solving the rigid-body Procrustes alignment problem to obtain the rotation and translation matrices required to align the two sets of images. The microCT/microPET image fusion of the Helix/Line-Sources phantom shows excellent visual coincidence between different structures, with a calculated target registration error of 0.32 mm.
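The rigid-body Procrustes problem mentioned here has a standard closed-form SVD solution (often called the Kabsch algorithm). A minimal sketch for paired fiducial coordinates follows, assuming the marker correspondences between the two modalities are already known.

```python
import numpy as np

def rigid_procrustes(src, dst):
    """Find rotation R and translation t minimizing ||(src @ R.T + t) - dst||
    over paired fiducial coordinates (N x 3 arrays), via the SVD (Kabsch)
    solution of the rigid-body Procrustes problem."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = (U @ D @ Vt).T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Applying `src @ R.T + t` maps the fiducials of one modality onto the other; the residual distances at independent check points give the target registration error reported above.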
NASA Astrophysics Data System (ADS)
Vajdic, Stevan M.; Katz, Henry E.; Downing, Andrew R.; Brooks, Michael J.
1994-09-01
A 3D relational image matching/fusion algorithm is introduced. It is implemented in the domain of medical imaging and is based on Artificial Intelligence paradigms--in particular, knowledge base representation and tree search. The 2D reference and target images are selected from 3D sets and segmented into non-touching and non-overlapping regions, using iterative thresholding and/or knowledge about the anatomical shapes of human organs. Selected image region attributes are calculated. Region matches are obtained using a tree search, and the error is minimized by evaluating a 'goodness-of-matching' function based on similarities of region attributes. Once the matched regions are found and the spline geometric transform is applied to the regional centers of gravity, the images are ready for fusion and visualization into a single 3D image of higher clarity.
Yu, Yao; Zhang, Wen-Bo; Liu, Xiao-Jing; Guo, Chuan-Bin; Yu, Guang-Yan; Peng, Xin
2017-06-01
The purpose of this study was to describe new technology assisted by 3-dimensional (3D) image fusion of 18 F-fluorodeoxyglucose (FDG)-positron emission tomography (PET)/computed tomography (CT) and contrast-enhanced CT (CECT) for computer planning of a maxillectomy of recurrent maxillary squamous cell carcinoma and defect reconstruction. Treatment of recurrent maxillary squamous cell carcinoma usually includes tumor resection and free flap reconstruction. FDG-PET/CT provided images of regions of abnormal glucose uptake and thus showed metabolic tumor volume to guide tumor resection. CECT data were used to create 3D reconstructed images of vessels to show the vascular diameters and locations, so that the most suitable vein and artery could be selected during anastomosis of the free flap. The data from preoperative maxillofacial CECT scans and FDG-PET/CT imaging were imported into the navigation system (iPlan 3.0; Brainlab, Feldkirchen, Germany). Three-dimensional image fusion between FDG-PET/CT and CECT was accomplished using Brainlab software according to the position of the 2 skulls simulated in the CECT image and PET/CT image, respectively. After verification of the image fusion accuracy, the 3D reconstruction images of the metabolic tumor, vessels, and other critical structures could be visualized within the same coordinate system. These sagittal, coronal, axial, and 3D reconstruction images were used to determine the virtual osteotomy sites and reconstruction plan, which was provided to the surgeon and used for surgical navigation. The average shift of the 3D image fusion between FDG-PET/CT and CECT was less than 1 mm. This technique, by clearly showing the metabolic tumor volume and the most suitable vessels for anastomosis, facilitated resection and reconstruction of recurrent maxillary squamous cell carcinoma. We used 3D image fusion of FDG-PET/CT and CECT to successfully accomplish resection and reconstruction of recurrent maxillary squamous cell carcinoma. This method has the potential to improve the clinical outcomes of these challenging procedures. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Oneil, William F.
1993-01-01
The fusion of radar and electro-optic (E-O) sensor images presents unique challenges. The two sensors measure different properties of the real three-dimensional (3-D) world. Forming the sensor outputs into a common format does not mask these differences. In this paper, the conditions under which fusion of the two sensor signals is possible are explored. The program currently planned to investigate this problem is briefly discussed.
Morris, C J; Lidstrom, M E
1992-01-01
In Methylobacterium extorquens AM1, the genes encoding the methanol dehydrogenase polypeptides are transcriptionally regulated in response to C1 compounds, including methanol (M. E. Lidstrom and D. I. Stirling, Annu. Rev. Microbiol. 44:27-57, 1990). In order to study this regulation, a transcriptional fusion was constructed between a beta-galactosidase reporter gene and a 1.55-kb XhoI-SalI fragment of M. extorquens AM1rif DNA encoding the N terminus of the methanol dehydrogenase large subunit (moxF) and 1,289 bp of upstream DNA. The fusion exhibited orientation-specific promoter activity in M. extorquens AM1rif but was expressed constitutively when the transcriptional fusion was located on the plasmid. However, correct regulation was restored when the construct was inserted in the M. extorquens AM1rif chromosome. This DNA fragment was shown to contain both the moxFJGI promoter and the sequences necessary in cis for its transcriptional regulation by methanol. Transcription from this promoter was studied in the M. extorquens AM1rif moxB mutant strains UV4rif and UV25rif, which have a pleiotropic phenotype with regard to the components of methanol oxidation. In these mutants, beta-galactosidase activity from the fusion was reduced to a level equal to that of the vector background when the fusion was present in both plasmid and chromosomal locations. Since both constitutive and methanol-inducible promoter activities were lost in the mutants, moxB appears to be required for transcription of the genes encoding the methanol dehydrogenase polypeptides. PMID:1624436
Multi exposure image fusion algorithm based on YCbCr space
NASA Astrophysics Data System (ADS)
Yang, T. T.; Fang, P. Y.
2018-05-01
To solve the problem that scene details and visual effects are difficult to optimize jointly in high-dynamic-range image synthesis, we propose a multi-exposure image fusion algorithm that processes low-dynamic-range images in YCbCr space and applies weighted blending to the luminance and chrominance components separately. The experimental results show that the method retains the color of the fused image while balancing the details of the bright and dark regions of the high-dynamic-range scene.
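A minimal sketch of this idea is shown below, assuming a stack of registered low-dynamic-range exposures as float RGB arrays in [0, 1]; the Gaussian well-exposedness weighting stands in for the paper's (unspecified) blending weights.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """ITU-R BT.601 full-range RGB -> YCbCr for float images in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 0.5
    return np.stack([y, cb, cr], axis=-1)

def fuse_exposures_ycbcr(exposures, sigma=0.2):
    """Weight each pixel of each exposure by how close its luminance is to
    mid-gray (well-exposedness), then average the luminance and chrominance
    components separately; the Gaussian weighting is an assumption."""
    ycc = np.stack([rgb_to_ycbcr(e) for e in exposures])    # (K, H, W, 3)
    w = np.exp(-((ycc[..., 0] - 0.5) ** 2) / (2 * sigma ** 2))
    w = w / (w.sum(axis=0) + 1e-12)                         # normalize over exposures
    return (ycc * w[..., None]).sum(axis=0)                 # fused YCbCr image
```

The fused result is in YCbCr and would be converted back to RGB for display.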
Huang, Yan; Bi, Duyan; Wu, Dongpeng
2018-04-11
Fusing infrared and visible images typically involves many hand-tuned parameters. To overcome the loss of detail in the fused image caused by artifacts, a novel fusion algorithm for infrared and visible images based on different constraints in the non-subsampled shearlet transform (NSST) domain is proposed. The NSST decomposes the images into high-frequency and low-frequency bands. After analyzing the characteristics of the bands, the high-frequency bands are fused under a gradient constraint, so that the fused image retains more detail, and the low-frequency bands are fused under a saliency constraint, so that the targets are more salient. Before the inverse NSST, a Nash equilibrium is used to update the coefficients. The fused images and the quantitative results demonstrate that our method is more effective in preserving details and highlighting targets when compared with other state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Che, Chang; Yu, Xiaoyang; Sun, Xiaoming; Yu, Boyang
2017-12-01
In recent years, Scalable Vocabulary Tree (SVT) has been shown to be effective in image retrieval. However, for general images where the foreground is the object to be recognized while the background is cluttered, the performance of the current SVT framework is restricted. In this paper, a new image retrieval framework that incorporates a robust distance metric and information fusion is proposed, which improves the retrieval performance relative to the baseline SVT approach. First, the visual words that represent the background are diminished by using a robust Hausdorff distance between different images. Second, image matching results based on three image signature representations are fused, which enhances the retrieval precision. We conducted intensive experiments on small-scale to large-scale image datasets: Corel-9, Corel-48, and PKU-198, where the proposed Hausdorff metric and information fusion outperforms the state-of-the-art methods by about 13, 15, and 15%, respectively.
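The robust distance between descriptor sets can be sketched compactly. The quantile-based partial Hausdorff variant below is one common robust formulation that tolerates outlier visual words; it is an assumption about, not a reproduction of, the paper's exact metric.

```python
import numpy as np

def robust_hausdorff(A, B, frac=0.9):
    """Robust (partial) directed Hausdorff distance between descriptor sets
    A (N x d) and B (M x d): instead of the maximum over nearest-neighbor
    distances, take the frac-quantile, which discounts outliers such as
    cluttered-background visual words."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    nn = np.sqrt(d2.min(axis=1))                          # NN distance for each row of A
    return np.quantile(nn, frac)

def symmetric_robust_hausdorff(A, B, frac=0.9):
    return max(robust_hausdorff(A, B, frac), robust_hausdorff(B, A, frac))
```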
Fourier domain image fusion for differential X-ray phase-contrast breast imaging.
Coello, Eduardo; Sperl, Jonathan I; Bequé, Dirk; Benz, Tobias; Scherer, Kai; Herzen, Julia; Sztrókay-Gaul, Anikó; Hellerhoff, Karin; Pfeiffer, Franz; Cozzini, Cristina; Grandl, Susanne
2017-04-01
X-ray phase-contrast (XPC) imaging is a novel technology with great potential for applications in clinical practice, with breast imaging being of special interest. This work introduces an intuitive methodology to combine and visualize the relevant diagnostic features present in the X-ray attenuation, phase-shift and scattering information retrieved in XPC imaging, using a Fourier-domain fusion algorithm. The method makes it possible to present complementary information from the three acquired signals in one single image, minimizing the noise component while maintaining visual similarity to a conventional X-ray image, but with noticeable enhancement in diagnostic features, details and resolution. Radiologists experienced in mammography applied the image fusion method to XPC measurements of mastectomy samples and evaluated the feature content of each input and the fused image. This assessment validated that all the relevant diagnostic features contained in the XPC images were present in the fused image as well. Copyright © 2017 Elsevier B.V. All rights reserved.
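One simple way to realize a Fourier-domain fusion of this kind is to keep the low spatial frequencies of the attenuation image (so the result stays visually close to a conventional X-ray) and draw high-frequency content from the phase and dark-field signals. The Gaussian frequency split and the equal high-band weights below are assumptions, not the published recipe.

```python
import numpy as np

def fourier_fusion(attenuation, phase, dark, sigma=0.1):
    """Illustrative Fourier-domain fusion of the three XPC signals (equal-
    sized 2D float arrays): attenuation supplies the low frequencies, phase
    and dark-field supply the high frequencies."""
    H, W = attenuation.shape
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    lowpass = np.exp(-(fx ** 2 + fy ** 2) / (2 * sigma ** 2))   # Gaussian split
    F_att, F_ph, F_dk = (np.fft.fft2(x) for x in (attenuation, phase, dark))
    fused = F_att * lowpass + 0.5 * (F_ph + F_dk) * (1 - lowpass)
    return np.real(np.fft.ifft2(fused))
```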
Spatial Statistical Data Fusion for Remote Sensing Applications
NASA Technical Reports Server (NTRS)
Nguyen, Hai
2010-01-01
Data fusion is the process of combining information from heterogeneous sources into a single composite picture of the relevant process, such that the composite picture is generally more accurate and complete than that derived from any single source alone. Data collection is often incomplete, sparse, and yields incompatible information. Fusion techniques can make optimal use of such data. When investment in data collection is high, fusion gives the best return. Our study uses data from two satellites: (1) Multiangle Imaging SpectroRadiometer (MISR), (2) Moderate Resolution Imaging Spectroradiometer (MODIS).
An imaging method of wavefront coding system based on phase plate rotation
NASA Astrophysics Data System (ADS)
Yi, Rigui; Chen, Xi; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua
2018-01-01
Wave-front coding has great prospects for extending the depth of field of an optical imaging system and reducing optical aberrations, but image quality and noise performance are inevitably reduced. Starting from a theoretical analysis of the wave-front coding system and the phase function expression of the cubic phase plate, this paper exploits the fact that the phase function expression is invariant in the new coordinate system when the phase plate is rotated around the z-axis, and we propose a method based on rotation of the phase plate and image fusion. First, when the phase plate is rotated by a certain angle around the z-axis, the shape and distribution of the PSF obtained on the image plane remain unchanged; the rotation angle and direction of the PSF are consistent with those of the phase plate. Then, the intermediate blurred image is filtered with the rotation-adjusted point spread function. Finally, the reconstructed images are fused by the Laplacian pyramid image fusion method and by the Fourier transform spectrum fusion method, and the results are evaluated subjectively and objectively. We used Matlab to simulate the images. With Laplacian pyramid fusion, the signal-to-noise ratio of the image is increased by 19%-27%, the clarity is increased by 11%-15%, and the average gradient is increased by 4%-9%. With Fourier transform spectrum fusion, the signal-to-noise ratio is increased by 14%-23%, the clarity is increased by 6%-11%, and the average gradient is improved by 2%-6%. The experimental results show that processing images with the above method improves the quality and clarity of the restored image and effectively preserves the image information.
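For reference, Laplacian pyramid fusion of two restored images can be sketched in a few lines. The undecimated (band-pass stack) form below, with a max-absolute-coefficient rule for the detail layers, is a minimal stand-in for the pyramid fusion the paper uses; the level count and smoothing scale are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def laplacian_pyramid(img, levels=4, sigma=2.0):
    """Build a simple undecimated Laplacian pyramid (band-pass stack)."""
    pyr, current = [], img.astype(float)
    for _ in range(levels):
        smooth = gaussian_filter(current, sigma)
        pyr.append(current - smooth)        # band-pass detail layer
        current = smooth
    pyr.append(current)                     # residual low-pass layer
    return pyr

def fuse_laplacian(img_a, img_b, levels=4):
    """Per pixel, keep the detail coefficient with larger magnitude and
    average the low-pass residual, then collapse the stack by summation."""
    pa, pb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    return np.sum(fused, axis=0)
```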
Detection of buried objects by fusing dual-band infrared images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, G.A.; Sengupta, S.K.; Sherwood, R.J.
1993-11-01
We have conducted experiments to demonstrate the enhanced detectability of buried land mines using sensor fusion techniques. Multiple sensors, including visible imagery, infrared imagery, and ground penetrating radar (GPR), have been used to acquire data on a number of buried mines and mine surrogates. Because the visible wavelength and GPR data are currently incomplete, this paper focuses on the fusion of two-band infrared images. We use feature-level fusion and supervised learning with the probabilistic neural network (PNN) to evaluate detection performance. The novelty of the work lies in the application of advanced target recognition algorithms, the fusion of dual-band infrared images and the evaluation of the techniques using two real data sets.
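A probabilistic neural network is essentially a Parzen-window classifier. A minimal sketch on pre-extracted feature vectors follows; the dual-band feature extraction itself is assumed to happen upstream, and the kernel width is a placeholder.

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=1.0):
    """Probabilistic neural network (Parzen-window) classifier: each class
    score is the mean Gaussian kernel response of the test feature vector x
    to that class's training features; return the highest-scoring class."""
    scores = {}
    for label in np.unique(train_y):
        diffs = train_X[train_y == label] - x
        k = np.exp(-(diffs ** 2).sum(axis=1) / (2 * sigma ** 2))
        scores[label] = k.mean()
    return max(scores, key=scores.get)
```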
Guler, N; Volegov, P; Danly, C R; Grim, G P; Merrill, F E; Wilde, C H
2012-10-01
Inertial confinement fusion experiments at the National Ignition Facility are designed to understand the basic principles of creating self-sustaining fusion reactions by laser driven compression of deuterium-tritium (DT) filled cryogenic plastic capsules. The neutron imaging diagnostic provides information on the distribution of the central fusion reaction region and the surrounding DT fuel by observing neutron images in two different energy bands for primary (13-17 MeV) and down-scattered (6-12 MeV) neutrons. From this, the final shape and size of the compressed capsule can be estimated and the symmetry of the compression can be inferred. These experiments provide small sources with high yield neutron flux. An aperture design that includes an array of pinholes and penumbral apertures has provided the opportunity to image the same source with two different techniques. This allows for an evaluation of these different aperture designs and reconstruction algorithms.
NASA Astrophysics Data System (ADS)
Guler, Nevzat; Aragonez, Robert J.; Archuleta, Thomas N.; Batha, Steven H.; Clark, David D.; Clark, Deborah J.; Danly, Chris R.; Day, Robert D.; Fatherley, Valerie E.; Finch, Joshua P.; Gallegos, Robert A.; Garcia, Felix P.; Grim, Gary; Hsu, Albert H.; Jaramillo, Steven A.; Loomis, Eric N.; Mares, Danielle; Martinson, Drew D.; Merrill, Frank E.; Morgan, George L.; Munson, Carter; Murphy, Thomas J.; Oertel, John A.; Polk, Paul J.; Schmidt, Derek W.; Tregillis, Ian L.; Valdez, Adelaida C.; Volegov, Petr L.; Wang, Tai-Sen F.; Wilde, Carl H.; Wilke, Mark D.; Wilson, Douglas C.; Atkinson, Dennis P.; Bower, Dan E.; Drury, Owen B.; Dzenitis, John M.; Felker, Brian; Fittinghoff, David N.; Frank, Matthias; Liddick, Sean N.; Moran, Michael J.; Roberson, George P.; Weiss, Paul; Buckles, Robert A.; Cradick, Jerry R.; Kaufman, Morris I.; Lutz, Steve S.; Malone, Robert M.; Traille, Albert
2013-11-01
Inertial Confinement Fusion experiments at the National Ignition Facility (NIF) are designed to understand and test the basic principles of self-sustaining fusion reactions by laser driven compression of deuterium-tritium (DT) filled cryogenic plastic (CH) capsules. The experimental campaign is ongoing to tune the implosions and characterize the burning plasma conditions. Nuclear diagnostics play an important role in measuring the characteristics of these burning plasmas, providing feedback to improve the implosion dynamics. The Neutron Imaging (NI) diagnostic provides information on the distribution of the central fusion reaction region and the surrounding DT fuel by collecting images at two different energy bands for primary (13-15 MeV) and downscattered (10-12 MeV) neutrons. From these distributions, the final shape and size of the compressed capsule can be estimated and the symmetry of the compression can be inferred. The first downscattered neutron images from imploding ICF capsules are shown in this paper.
Yang, Guocheng; Li, Meiling; Chen, Leiting; Yu, Jie
2015-01-01
We propose a novel medical image fusion scheme based on the statistical dependencies between coefficients in the nonsubsampled contourlet transform (NSCT) domain, in which the probability density function of the NSCT coefficients is concisely fitted using a generalized Gaussian density (GGD), and the similarity of two subbands is accurately computed as the Jensen-Shannon divergence of the two GGDs. To preserve more useful information from the source images, new fusion rules are developed to combine the subbands of different frequencies: the low-frequency subbands are fused using two activity measures based on the regional standard deviation and Shannon entropy, and the high-frequency subbands are merged via weight maps determined by the saliency values of pixels. The experimental results demonstrate that the proposed method significantly outperforms conventional NSCT-based medical image fusion approaches in both visual perception and evaluation indices. PMID:26557871
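For reference, the GGD and a Jensen-Shannon divergence between two fitted GGDs can be written compactly. The quadrature below is a sketch; closed forms exist for the KL divergence between GGDs, but numerical integration keeps the example short.

```python
import numpy as np
from scipy.special import gamma

def ggd_pdf(x, alpha, beta):
    """Zero-mean generalized Gaussian density with scale alpha, shape beta."""
    return beta / (2 * alpha * gamma(1.0 / beta)) * np.exp(-(np.abs(x) / alpha) ** beta)

def js_divergence_ggd(a1, b1, a2, b2, lim=50.0, n=20001):
    """Jensen-Shannon divergence between two fitted GGDs via quadrature."""
    x = np.linspace(-lim, lim, n)
    p, q = ggd_pdf(x, a1, b1), ggd_pdf(x, a2, b2)
    m = 0.5 * (p + q)
    kl = lambda u, v: np.trapz(u * np.log((u + 1e-300) / (v + 1e-300)), x)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```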
High resolution isotopic analysis of U-bearing particles via fusion of SIMS and EDS images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarolli, Jay G.; Naes, Benjamin E.; Garcia, Benjamin J.
Image fusion of secondary ion mass spectrometry (SIMS) images and X-ray elemental maps from energy-dispersive spectroscopy (EDS) was performed to facilitate the isolation and re-analysis of isotopically unique U-bearing particles where the highest precision SIMS measurements are required. Image registration, image fusion and particle micromanipulation were performed on a subset of SIMS images obtained from a large-area pre-screen of a particle distribution from a sample containing several certified reference materials (CRM) U129A, U015, U150, U500 and U850, as well as a standard reference material (SRM) 8704 (Buffalo River Sediment), to simulate particles collected on swipes during routine inspections of declared uranium enrichment facilities by the International Atomic Energy Agency (IAEA). In total, fourteen particles, ranging in size from 5-15 µm, were isolated and re-analyzed by SIMS in multi-collector mode, identifying nine particles of CRM U129A, one of U150, one of U500 and three of U850. These identifications were within a few percent of the National Institute of Standards and Technology (NIST) certified atom percent values for 234U, 235U and 238U for the corresponding CRMs. This work represents the first use of image fusion to enhance the accuracy and precision of isotope ratio measurements for isotopically unique U-bearing particles for nuclear safeguards applications. Implementation of image fusion is essential for the identification of particles of interest that fall below the spatial resolution of the SIMS images.
NASA Astrophysics Data System (ADS)
Zhou, Yi; Li, Qi
2017-01-01
A dual-axis reflective continuous-wave terahertz (THz) confocal scanning polarization imaging system was adopted. THz polarization imaging experiments on gaps on film and on the metallic letters "BeLLE" were carried out. The imaging results indicate that THz polarization imaging is sensitive to tilted gaps and wide flat gaps, suggesting that it is able to detect edges and stains. An image fusion method based on digital image processing was proposed to ameliorate the imaging quality of the metallic letters "BeLLE." Both objective and subjective evaluations show that this method improves the imaging quality.
The fusion of satellite and UAV data: simulation of high spatial resolution band
NASA Astrophysics Data System (ADS)
Jenerowicz, Agnieszka; Siok, Katarzyna; Woroszkiewicz, Malgorzata; Orych, Agata
2017-10-01
Remote sensing techniques for precision agriculture and farming that apply imagery data obtained with sensors mounted on UAV platforms have become more popular in the last few years due to the availability of low-cost UAV platforms and low-cost sensors. Data obtained from low altitudes with low-cost sensors can be characterized by high spatial and radiometric resolution but quite low spectral resolution; therefore, the application of imagery data obtained with such technology is quite limited and can be used only for basic land cover classification. To enrich the spectral resolution of imagery data acquired with low-cost sensors from low altitudes, the authors propose the fusion of RGB data obtained with a UAV platform with multispectral satellite imagery. The fusion is based on the pansharpening process, which aims to integrate the spatial details of the high-resolution panchromatic image with the spectral information of lower-resolution multispectral or hyperspectral imagery to obtain multispectral or hyperspectral images with high spatial resolution. The key to pansharpening is to properly estimate the missing spatial details of the multispectral images while preserving their spectral properties. In this research, the authors present the fusion of RGB images (with high spatial resolution) obtained with sensors mounted on low-cost UAV platforms and multispectral imagery from satellite sensors, i.e., Landsat 8 OLI. To perform the fusion of UAV data with satellite imagery, panchromatic bands were first simulated from the RGB data as a linear combination of the spectral channels. Next, the Gram-Schmidt pansharpening method was applied to the simulated bands and the multispectral satellite images. As a result of the fusion, the authors obtained several multispectral images with very high spatial resolution and then analyzed the spatial and spectral accuracies of the processed images.
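The pan-band simulation step reduces to a per-pixel linear combination of the RGB channels. A sketch follows; the weights are placeholders that would in practice be tuned to the spectral response of the target satellite sensor (e.g., the Landsat 8 OLI panchromatic band).

```python
import numpy as np

def simulate_panchromatic(rgb, weights=(0.25, 0.45, 0.30)):
    """Simulate a high-spatial-resolution panchromatic band from UAV RGB data
    (H x W x 3 float array) as a linear combination of the spectral channels.
    The weights here are assumptions, not calibrated sensor coefficients."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # keep the radiometry normalized
    return rgb[..., 0] * w[0] + rgb[..., 1] * w[1] + rgb[..., 2] * w[2]
```

The simulated band then plays the role of the panchromatic input in the Gram-Schmidt pansharpening step.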
Strong FGFR3 staining is a marker for FGFR3 fusions in diffuse gliomas
Annala, Matti; Lehtinen, Birgitta; Kesseli, Juha; Haapasalo, Joonas; Ruusuvuori, Pekka; Yli-Harja, Olli; Visakorpi, Tapio; Haapasalo, Hannu; Nykter, Matti; Zhang, Wei
2017-01-01
Abstract Background Inhibitors of fibroblast growth factor receptors (FGFRs) have recently arisen as a promising treatment option for patients with FGFR alterations. Gene fusions involving FGFR3 and transforming acidic coiled-coil protein 3 (TACC3) have been detected in diffuse gliomas and other malignancies, and fusion-positive cases have responded well to FGFR inhibition. As high FGFR3 expression has been detected in fusion-positive tumors, we sought to determine the clinical significance of FGFR3 protein expression level as well as its potential for indicating FGFR3 fusions. Methods We performed FGFR3 immunohistochemistry on tissue microarrays containing 676 grades II–IV astrocytomas and 116 grades II–III oligodendroglial tumor specimens. Fifty-one cases were further analyzed using targeted sequencing. Results Moderate to strong FGFR3 staining was detected in gliomas of all grades, was more common in females, and was associated with poor survival in diffuse astrocytomas. Targeted sequencing identified FGFR3-TACC3 fusions and an FGFR3-CAMK2A fusion in 10 of 15 strongly stained cases, whereas no fusions were found in 36 negatively to moderately stained cases. Fusion-positive cases were predominantly female and negative for IDH and EGFR/PDGFRA/MET alterations. These fusion-positive and moderately stained cases showed a lower MIB-1 proliferation index than negatively to weakly stained cases. Furthermore, stronger FGFR3 expression was commonly observed in malignant tissue regions of lower cellularity in fusion-negative cases. Importantly, subregional negative FGFR3 staining was also observed in a few fusion-positive cases. Conclusions Strong FGFR3 protein expression is indicative of FGFR3 fusions and may serve as a clinically applicable predictive marker for treatment regimens based on FGFR inhibitors. PMID:28379477
Kamogawa, Junji; Kato, Osamu; Morizane, Tatsunori; Hato, Taizo
2015-01-01
There have been several imaging studies of cervical radiculopathy, but no three-dimensional (3D) images have shown the path, position, and pathological changes of the cervical nerve roots and spinal root ganglion relative to the cervical bony structure. The objective of this study was to introduce a technique that enables the virtual pathology of the nerve root to be assessed using 3D magnetic resonance (MR)/computed tomography (CT) fusion images, which show compression of the proximal portion of the cervical nerve root by both the herniated disc and the preforaminal or foraminal bony spur in patients with cervical radiculopathy. MR and CT images were obtained from three patients with cervical radiculopathy. 3D MR images were placed onto 3D CT images using a computer workstation. The entire nerve root could be visualized in 3D with or without the vertebrae. The most important characteristic evident on the images was flattening of the nerve root by a bony spur. The affected root was constricted at a pre-ganglion site. In cases of severe deformity, the flattened portion of the root seemed to change the angle of its path, resulting in a twisted condition. The 3D MR/CT fusion imaging technique enhances visualization of the pathoanatomy in the hidden area of the cervical spine composed of the nerve root and the intervertebral foramen. This technique provides two distinct advantages for the diagnosis of cervical radiculopathy. First, the isolation of individual vertebrae clarifies deformities of the whole root groove, including both the uncinate process and the superior articular process of the cervical spine. Second, the tortuous or twisted condition of a compressed root can be visualized. The surgeon can identify the narrowest face of the root by viewing the MR/CT fusion image from the posterolateral-inferior direction. Surgeons can use MR/CT fusion images as a pre-operative map and for intraoperative navigation. The MR/CT fusion images can also be used as educational materials for all hospital staff and for patients and patients' families who provide informed consent for treatments.
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline; Laporte, Nadine; Netanyahuy, Nathan S.; Zukor, Dorothy (Technical Monitor)
2001-01-01
The characterization and mapping of land cover/land use of forest areas, such as the Central African rainforest, is a very complex task. This complexity is mainly due to the extent of such areas and, as a consequence, to the lack of full and continuous cloud-free coverage of those large regions by any single remote sensing instrument. In order to provide improved vegetation maps of Central Africa and to develop forest monitoring techniques for applications at the local and regional scales, we propose to utilize multi-sensor remote sensing observations coupled with in-situ data. Fusion and clustering of multi-sensor data are the first steps towards the development of such a forest monitoring system. In this paper, we describe preliminary experiments involving the fusion of SAR and Landsat image data of the Lope Reserve in Gabon. As in previous fusion studies, our fusion method is wavelet-based. The fusion provides a new image data set which contains more detailed texture features and preserves the large homogeneous regions that are observed by the Thematic Mapper sensor. The fusion step is followed by unsupervised clustering and provides a vegetation map of the area.
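A minimal wavelet fusion in the spirit described here, using PyWavelets, might look as follows; keeping the Thematic Mapper approximation coefficients preserves the large homogeneous regions, while the max-absolute detail rule injects SAR texture. That detail rule is a common choice, not necessarily this paper's.

```python
import numpy as np
import pywt

def wavelet_fuse(sar, tm_band, wavelet='db2', level=3):
    """Fuse a SAR image and one Landsat TM band (equal-sized 2D float
    arrays): keep the TM approximation layer, take the per-coefficient
    maximum-magnitude detail coefficients from either source."""
    ca = pywt.wavedec2(sar, wavelet, level=level)
    cb = pywt.wavedec2(tm_band, wavelet, level=level)
    fused = [cb[0]]                                # TM approximation layer
    for da, db in zip(ca[1:], cb[1:]):             # (cH, cV, cD) per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```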
A wavelet-based adaptive fusion algorithm of infrared polarization imaging
NASA Astrophysics Data System (ADS)
Yang, Wei; Gu, Guohua; Chen, Qian; Zeng, Haifang
2011-08-01
The purpose of infrared polarization imaging is to highlight man-made targets against a complex natural background. Because infrared polarization images can significantly distinguish targets from background through their different features, this paper presents a wavelet-based infrared polarization image fusion algorithm. The method mainly processes the high-frequency part of the signal; for the low-frequency part, the usual weighted average method is applied. The high-frequency part is processed as follows: first, the high-frequency information of the source images is extracted by wavelet transform; then the signal strength over a 3×3 window is calculated, and the ratio of regional signal intensities between the source images is used as a matching measure. The extraction method and decision mode for the details are determined by a decision-making module, and the fusion effect is closely related to the threshold set in that module. Instead of the commonly used trial-and-error approach, a quadratic interpolation optimization algorithm is proposed in this paper to obtain the threshold: the endpoints and midpoint of the threshold search interval are set as the initial interpolation nodes, and the minimum of the quadratic interpolation function is computed. The best threshold is obtained by comparing the minima of the quadratic interpolation functions. A series of image quality evaluations shows that this method improves the fusion effect; moreover, it is effective not only for individual images but also for large numbers of images.
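The quadratic (parabolic) interpolation step for the threshold search can be sketched in a few lines; `f` is assumed to be a unimodal fusion-quality cost as a function of the threshold, and repeated applications with a shrinking interval would refine the estimate.

```python
def parabolic_min(f, a, c):
    """One step of quadratic-interpolation minimization: fit a parabola
    through the endpoints and midpoint of the search interval [a, c] and
    return the parabola's minimizer as the next threshold estimate."""
    b = 0.5 * (a + c)
    fa, fb, fc = f(a), f(b), f(c)
    num = (b - a) ** 2 * (fb - fc) - (b - c) ** 2 * (fb - fa)
    den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
    if den == 0:
        return b                  # degenerate (flat) fit: keep the midpoint
    return b - 0.5 * num / den
```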
Putzer, Daniel; Henninger, Benjamin; Kovacs, Peter; Uprimny, Christian; Kendler, Dorota; Jaschke, Werner; Bale, Reto J
2016-06-01
Even as PET/CT provides valuable diagnostic information in a great number of clinical indications, the availability of hybrid PET/CT scanners is mainly limited to clinical centers. Software-based image fusion would facilitate combined reading of CT and PET data sets if hardware image fusion is not available. To analyze the relevance of retrospective image fusion of separately acquired PET and CT data sets, we studied the accuracy, practicability and reproducibility of three different image registration techniques. We evaluated whole-body 18F-FDG-PET and CT data sets of 71 oncologic patients. Images were fused retrospectively using the Stealth Station System, Treon (Medtronic Inc., Louisville, CO, USA) equipped with Cranial4 software. External markers fixed to a vacuum mattress were used as a reference for exact repositioning. Registration was repeated using internal anatomic landmarks and Automerge software, and accuracy was assessed for all three methods by measuring distances of the liver representation in CT and PET with reference to a common coordinate system. On the first measurement of image fusions with external markers, 53 were successful, 16 feasible and 2 not successful. Using anatomic landmarks, 42 were successful, 26 feasible and 3 not successful. Using Automerge software, only 13 were successful. The mean distance between center points in PET and CT was 7.69±4.96 mm on the first and 7.65±4.2 mm on the second measurement. Results with external markers correlate very well, and inaccuracies are significantly lower (P<0.001) than with anatomical landmarks (10.38±6.13 mm and 10.83±6.23 mm). Analysis revealed significantly faster alignment using external markers (P<0.001). External fiducials in combination with immobilization devices and breathing protocols allow for highly accurate image fusion cost-effectively and in significantly less time, making this an attractive alternative for PET/CT interpretation when a hybrid scanner is not available.
Robust multi-atlas label propagation by deep sparse representation
Zu, Chen; Wang, Zhengxia; Zhang, Daoqiang; Liang, Peipeng; Shi, Yonghong; Shen, Dinggang; Wu, Guorong
2016-01-01
Recently, multi-atlas patch-based label fusion has achieved many successes in the medical imaging area. The basic assumption in the current state-of-the-art approaches is that the image patch at the target image point can be represented by a patch dictionary consisting of atlas patches from registered atlas images. Therefore, the label at the target image point can be determined by fusing labels of atlas image patches with similar anatomical structures. However, such an assumption on image patch representation does not always hold in label fusion since (1) the image content within the patch may be corrupted due to noise and artifact; and (2) the distribution of morphometric patterns among atlas patches might be unbalanced, such that the majority patterns can dominate the label fusion result over other minority patterns. The violation of the above basic assumptions could significantly undermine the label fusion accuracy. To overcome these issues, we first consider forming a label-specific group for the atlas patches with the same label. Then, we alter the conventional flat and shallow dictionary to a deep multi-layer structure, where the top layer (label-specific dictionaries) consists of groups of representative atlas patches and the subsequent layers (residual dictionaries) hierarchically encode the patchwise residual information at different scales. Thus, the label fusion follows the representation consensus across representative dictionaries. However, the representation of the target patch in each group is iteratively optimized by using the representative atlas patches in each label-specific dictionary exclusively to match the principal patterns, and also using all residual patterns across groups collaboratively to overcome the issue that some groups might lack certain variation patterns present in the target image patch. Promising segmentation results have been achieved in labeling the hippocampus on the ADNI dataset, as well as basal ganglia and brainstem structures, compared to other counterpart label fusion methods. PMID:27942077
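For context, the conventional baseline that this method improves on is nonlocal patch-based label fusion by weighted voting, sketched below; the paper's deep sparse representation replaces these similarity weights with hierarchical sparse codes. The kernel bandwidth is a placeholder.

```python
import numpy as np

def patch_label_fusion(target_patch, atlas_patches, atlas_labels, h=0.1):
    """Baseline patch-based label fusion: each atlas patch (rows of the
    N x d array atlas_patches, with labels atlas_labels) votes for its label
    with a weight that decays with its distance to the flattened target
    patch; return the label with the largest total vote."""
    d2 = ((atlas_patches - target_patch) ** 2).sum(axis=1)   # patch distances
    w = np.exp(-d2 / (h ** 2))
    votes = {}
    for label in np.unique(atlas_labels):
        votes[label] = w[atlas_labels == label].sum()
    return max(votes, key=votes.get)
```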
2011-01-01
Background The advent of genomics-based technologies has revolutionized many fields of biological enquiry. However, chromosome walking or flanking sequence cloning is still a necessary and important procedure for determining gene structure. Such methods are used to identify T-DNA insertion sites and so are especially relevant for organisms where large T-DNA insertion libraries have been created, such as rice and Arabidopsis. The currently available methods for flanking sequence cloning, including the popular TAIL-PCR technique, are relatively laborious and slow. Results Here, we report a simple and effective fusion primer and nested integrated PCR method (FPNI-PCR) for the identification and cloning of unknown genomic regions flanking known sequences. In brief, a set of universal primers was designed that consisted of various 15-16 base arbitrary degenerate oligonucleotides. These arbitrary degenerate primers were fused to the 3' end of an adaptor oligonucleotide which provided a known sequence without degenerate nucleotides, thereby forming the fusion primers (FPs). These fusion primers are employed in the first step of an integrated nested PCR strategy which defines the overall FPNI-PCR protocol. In order to demonstrate the efficacy of this novel strategy, we have successfully used it to isolate multiple genomic sequences, namely 21 orthologs of genes in various species of Rosaceae, 4 MYB genes of Rosa rugosa, 3 promoters of transcription factors of Petunia hybrida, 4 flanking sequences of T-DNA insertion sites in transgenic tobacco lines, and 6 specific genes from the sequenced genomes of rice and Arabidopsis. Conclusions The successful amplification of target products through FPNI-PCR verified that this novel strategy is an effective, low-cost and simple procedure. Furthermore, FPNI-PCR represents a more sensitive, rapid and accurate technique than the established TAIL-PCR and hiTAIL-PCR procedures. PMID:22093809
Effects of spatial resolution ratio in image fusion
Ling, Y.; Ehlers, M.; Usery, E.L.; Madden, M.
2008-01-01
In image fusion, the spatial resolution ratio can be defined as the ratio between the spatial resolution of the high-resolution panchromatic image and that of the low-resolution multispectral image. This paper attempts to assess the effects of the spatial resolution ratio of the input images on the quality of the fused image. Experimental results indicate that a spatial resolution ratio of 1:10 or higher is desired for optimal multisensor image fusion provided the input panchromatic image is not downsampled to a coarser resolution. Due to the synthetic pixels generated from resampling, the quality of the fused image decreases as the spatial resolution ratio decreases (e.g. from 1:10 to 1:30). However, even with a spatial resolution ratio as small as 1:30, the quality of the fused image is still better than the original multispectral image alone for feature interpretation. In cases where the spatial resolution ratio is too small (e.g. 1:30), to obtain better spectral integrity of the fused image, one may downsample the input high-resolution panchromatic image to a slightly lower resolution before fusing it with the multispectral image.
NASA Astrophysics Data System (ADS)
Zhang, Lei; Yang, Fengbao; Ji, Linna; Lv, Sheng
2018-01-01
Diverse image fusion methods perform differently, and each has advantages and disadvantages compared with the others. One notion is that the advantages of different fusion methods can be effectively combined. A multiple-algorithm parallel fusion method based on algorithmic complementarity and synergy is proposed. First, in view of the characteristics of the different algorithms and the difference-features among images, an index vector-based feature similarity is proposed to define the degree of complementarity and synergy. This proposed index vector is a reliable indicator for algorithm selection. Second, the algorithms with a high degree of complementarity and synergy are selected. Then, the degrees of the various difference-features and the infrared intensity images are used as the initial weights for nonnegative matrix factorization (NMF); this avoids the randomness of the NMF initialization parameters. Finally, the fused images of the different algorithms are integrated using NMF because of its excellent data-fusing performance on independent features. Experimental results demonstrate that the visual effect and objective evaluation indices of the fused images obtained using the proposed method are better than those obtained using traditional methods. The proposed method retains the advantages of the individual fusion algorithms.
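One way to realize the NMF integration step is sketched below with scikit-learn, using a deterministic, feature-derived initialization rather than a random one, as the abstract suggests. The rank-1 factorization and the exact initialization rule are assumptions for this sketch, not the authors' formulation.

```python
import numpy as np
from sklearn.decomposition import NMF

def nmf_fuse(candidates, feature_weights):
    """Integrate the fused images produced by the selected algorithms with a
    rank-1 NMF. Each candidate fused image becomes one column of the data
    matrix; the shared nonnegative component is returned as the integrated
    fused image (up to scale)."""
    h, w = candidates[0].shape
    X = np.stack([np.clip(img, 0, None).ravel() for img in candidates], axis=1)
    H0 = np.asarray(feature_weights, dtype=float).reshape(1, X.shape[1])
    W0 = np.maximum(X @ H0.T, 1e-6)       # deterministic init: weighted image average
    model = NMF(n_components=1, init='custom', max_iter=500)
    W = model.fit_transform(X, W=W0, H=H0)
    return W[:, 0].reshape(h, w)
```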
Ogata, Yuji; Nakahara, Tadaki; Ode, Kenichi; Matsusaka, Yohji; Katagiri, Mari; Iwabuchi, Yu; Itoh, Kazunari; Ichimura, Akira; Jinzaki, Masahiro
2017-05-01
We developed a method of projecting bone SPECT image data into 3D volume-rendered CT images for 3D SPECT/CT fusion. The aims of our study were to evaluate its feasibility and clinical usefulness. Whole-body bone scintigraphy (WB) and SPECT/CT scans were performed in 318 cancer patients using a dedicated SPECT/CT system. Volume data of bone SPECT and CT were fused to obtain 2D SPECT/CT images. To generate our 3D SPECT/CT images, colored voxel data of bone SPECT were projected onto the corresponding location of the volume-rendered CT data after a semi-automatic bone extraction. Then, the resultant 3D images were blended with conventional volume-rendered CT images, making it possible to grasp the three-dimensional relationship between bone metabolism and anatomy. WB and SPECT (WB + SPECT), 2D SPECT/CT fusion, and 3D SPECT/CT fusion were evaluated by two independent reviewers in the diagnosis of bone metastasis. The inter-observer variability and diagnostic accuracy of these three image sets were investigated using a four-point diagnostic scale. Increased bone metabolism was found in 744 metastatic sites and 1002 benign changes. On a per-lesion basis, inter-observer agreements in the diagnosis of bone metastasis were 0.72 for WB + SPECT, 0.90 for 2D SPECT/CT, and 0.89 for 3D SPECT/CT. Receiver operating characteristic analyses for the diagnostic accuracy of bone metastasis showed that WB + SPECT, 2D SPECT/CT, and 3D SPECT/CT had an area under the curve of 0.800, 0.983, and 0.983 for reader 1, and 0.865, 0.992, and 0.993 for reader 2, respectively (WB + SPECT vs. 2D or 3D SPECT/CT, p < 0.001; 2D vs. 3D SPECT/CT, n.s.). The durations of interpretation of WB + SPECT, 2D SPECT/CT, and 3D SPECT/CT images were 241 ± 75, 225 ± 73, and 182 ± 71 s for reader 1, and 207 ± 72, 190 ± 73, and 179 ± 73 s for reader 2, respectively. As a result, it took less time to read 3D SPECT/CT images than 2D SPECT/CT (p < 0.0001) or WB + SPECT images (p < 0.0001). 3D SPECT/CT fusion offers diagnostic accuracy comparable to 2D SPECT/CT fusion, and its visual effect reduces reading time compared to 2D SPECT/CT fusion.
Cooper, J A; Kashishian, A
1993-01-01
We have used a transient expression system and mutant platelet-derived growth factor (PDGF) receptors to study the binding specificities of the Src homology 2 (SH2) regions of the Ras GTPase-activator protein (GAP) and the p85 alpha subunit of phosphatidylinositol 3-kinase (PI3 kinase). A number of fusion proteins, each tagged with an epitope allowing recognition by a monoclonal antibody, were expressed at levels comparable to those of endogenous GAP. Fusion proteins containing the central SH2-SH3-SH2 region of GAP or the C-terminal region of p85 alpha, which includes two SH2 domains, bound to PDGF receptors in response to PDGF stimulation. Both fusion proteins showed the same requirements for tyrosine phosphorylation sites in the PDGF receptor as the full-length proteins from which they were derived, i.e., binding of the GAP fusion protein was reduced by mutation of Tyr-771, and binding of the p85 fusion protein was reduced by mutation of Tyr-740, Tyr-751, or both residues. Fusion proteins containing single SH2 domains from either GAP or p85 alpha did not bind detectably to PDGF receptors in this system, suggesting that two SH2 domains in a single polypeptide cooperate to raise the affinity of binding. The sequence specificities of individual SH2 domains were deduced from the binding properties of fusion proteins containing one SH2 domain from GAP and another from p85. The results suggest that the C-terminal GAP SH2 domain specifies binding to Tyr-771, the C-terminal p85 alpha SH2 domain binds to either Tyr-740 or Tyr-751, and each protein's N-terminal SH2 domain binds to unidentified phosphorylation sites.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:8382774
Yu, Ai-Ping; Shi, Bing-Xing; Dong, Chun-Na; Jiang, Zhong-Hua; Wu, Zu-Ze
2005-07-01
To combine fibrinolytic and anticoagulant activities for the therapy of thrombotic diseases, a fusion protein made of tissue-type plasminogen activator (t-PA) and hirudin was constructed and expressed in Pichia pastoris. To improve the thrombolytic properties of t-PA and reduce the bleeding side effect of hirudin, an FXa-recognition sequence was introduced between the t-PA and hirudin molecules. The anticoagulant activity of hirudin can be target-released through cleavage by FXa at the thrombus site. The t-PA gene and the hirudin gene with the FXa-recognition sequence at its 5'-terminus were obtained by RT-PCR and PCR, respectively. The fusion protein gene was cloned into plasmid pIC9K and electroporated into the genome of Pichia pastoris GS115. Expression of the fusion protein was induced by methanol in shaking flasks, and the protein was secreted into the culture medium. Two forms of the fusion protein, single-chain and double-chain linked by a disulfide bond (due to cleavage of t-PA at Arg275-Ile276), were obtained. The intact fusion protein retained fibrinolytic activity but lacked any anticoagulant activity. After cleavage by FXa, the fusion protein liberated intact free hirudin to exert its anticoagulant activity. The fusion protein is thus a bifunctional molecule with good prospects of development into a new targeted therapeutic agent with reduced bleeding side effects for thrombotic diseases.
Prostate seed implant quality assessment using MR and CT image fusion.
Amdur, R J; Gladstone, D; Leopold, K A; Harris, R D
1999-01-01
After a seed implant of the prostate, computerized tomography (CT) is ideal for determining seed distribution but soft tissue anatomy is frequently not well visualized. Magnetic resonance (MR) images soft tissue anatomy well but seed visualization is problematic. We describe a method of fusing CT and MR images to exploit the advantages of both of these modalities when assessing the quality of a prostate seed implant. Eleven consecutive prostate seed implant patients were imaged with axial MR and CT scans. MR and CT images were fused in three dimensions using the Pinnacle 3.0 version of the ADAC treatment planning system. The urethra and bladder base were used to "line up" MR and CT image sets during image fusion. Alignment was accomplished using translation and rotation in the three ortho-normal planes. Accuracy of image fusion was evaluated by calculating the maximum deviation in millimeters between the center of the urethra on axial MR versus CT images. Implant quality was determined by comparing dosimetric results to previously set parameters. Image fusion was performed with a high degree of accuracy. When lining up the urethra and base of bladder, the maximum difference in axial position of the urethra between MR and CT averaged 2.5 mm (range 1.3-4.0 mm, SD 0.9 mm). By projecting CT-derived dose distributions over MR images of soft tissue structures, qualitative and quantitative evaluation of implant quality is straightforward. The image-fusion process we describe provides a sophisticated way of assessing the quality of a prostate seed implant. Commercial software makes the process time-efficient and available to any clinical practice with a high-quality treatment planning system. While we use MR to image soft tissue structures, the process could be used with any imaging modality that is able to visualize the prostatic urethra (e.g., ultrasound).
Automated image-based assay for evaluation of HIV neutralization and cell-to-cell fusion inhibition.
Sheik-Khalil, Enas; Bray, Mark-Anthony; Özkaya Şahin, Gülsen; Scarlatti, Gabriella; Jansson, Marianne; Carpenter, Anne E; Fenyö, Eva Maria
2014-08-30
Standardized techniques to detect HIV-neutralizing antibody responses are of great importance in the search for an HIV vaccine. Here, we present a high-throughput, high-content automated plaque reduction (APR) assay based on automated microscopy and image analysis that allows evaluation of neutralization and inhibition of cell-cell fusion within the same assay. Neutralization of virus particles is measured as a reduction in the number of fluorescent plaques, and inhibition of cell-cell fusion as a reduction in plaque area. We found neutralization strength to be a significant factor in the ability of virus to form syncytia. Further, we introduce the inhibitory concentration of plaque area reduction (ICpar) as an additional measure of antiviral activity, i.e. fusion inhibition. We present an automated image based high-throughput, high-content HIV plaque reduction assay. This allows, for the first time, simultaneous evaluation of neutralization and inhibition of cell-cell fusion within the same assay, by quantifying the reduction in number of plaques and mean plaque area, respectively. Inhibition of cell-to-cell fusion requires higher quantities of inhibitory reagent than inhibition of virus neutralization.
NASA Astrophysics Data System (ADS)
Ma, Jinlei; Zhou, Zhiqiang; Wang, Bo; Zong, Hua
2017-05-01
The goal of infrared (IR) and visible image fusion is to produce a more informative image for human observation or some other computer vision tasks. In this paper, we propose a novel multi-scale fusion method based on visual saliency map (VSM) and weighted least square (WLS) optimization, aiming to overcome some common deficiencies of conventional methods. Firstly, we introduce a multi-scale decomposition (MSD) using the rolling guidance filter (RGF) and Gaussian filter to decompose input images into base and detail layers. Compared with conventional MSDs, this MSD can achieve the unique property of preserving the information of specific scales and reducing halos near edges. Secondly, we argue that the base layers obtained by most MSDs would contain a certain amount of residual low-frequency information, which is important for controlling the contrast and overall visual appearance of the fused image, and the conventional "averaging" fusion scheme is unable to achieve desired effects. To address this problem, an improved VSM-based technique is proposed to fuse the base layers. Lastly, a novel WLS optimization scheme is proposed to fuse the detail layers. This optimization aims to transfer more visual details and less irrelevant IR details or noise into the fused image. As a result, the fused image details would appear more naturally and be suitable for human visual perception. Experimental results demonstrate that our method can achieve a superior performance compared with other fusion methods in both subjective and objective assessments.
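The base-layer fusion idea can be sketched compactly. The crude local-contrast saliency below stands in for the paper's VSM, and the detail-layer WLS optimization is omitted entirely; only the shape of the weighting rule is illustrated.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def visual_saliency_map(img, sigma=5.0):
    """Crude per-pixel saliency: distance of each intensity from the local
    mean; a stand-in for the paper's visual saliency map."""
    return np.abs(img - gaussian_filter(img, sigma))

def fuse_base_layers(base_ir, base_vis):
    """Fuse the low-frequency base layers with saliency-shifted weights
    instead of plain averaging, so the layer carrying more perceptually
    important low-frequency content dominates locally."""
    s_ir, s_vis = visual_saliency_map(base_ir), visual_saliency_map(base_vis)
    w = np.clip(0.5 + 0.5 * (s_ir - s_vis), 0.0, 1.0)
    return w * base_ir + (1.0 - w) * base_vis
```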
Adaptive multiple super fast simulated annealing for stochastic microstructure reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryu, Seun; Lin, Guang; Sun, Xin
2013-01-01
Fast image reconstruction from statistical information is critical in image fusion from multimodality chemical imaging instrumentation to create high-resolution images over large domains. Stochastic methods have been used widely in image reconstruction from two-point correlation functions. The main challenge is to increase the efficiency of reconstruction. A novel simulated annealing method is proposed for fast image reconstruction. Combining the advantages of very fast cooling schedules, dynamic adaptation and parallelization, the new simulated annealing algorithm increases efficiency by several orders of magnitude, making large-domain image fusion feasible.
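A minimal Yeong-Torquato-style annealing loop illustrates the underlying idea: pixel-pair swaps accepted by a Metropolis rule on the two-point-correlation mismatch, with a geometric ("very fast") cooling schedule. The adaptive and parallel refinements from the paper are omitted, and the row-wise correlation and parameter values are simplifying assumptions.

```python
import numpy as np

def s2(img):
    """Two-point correlation along rows via FFT autocorrelation (periodic)."""
    f = np.fft.fft(img, axis=1)
    return np.real(np.fft.ifft(f * np.conj(f), axis=1)).mean(axis=0) / img.shape[1]

def anneal_reconstruct(target_s2, shape, fill, steps=20000, t0=1e-3, cool=0.9995):
    """Reconstruct a binary microstructure matching target_s2, starting from
    a random image with the target volume fraction 'fill'."""
    rng = np.random.default_rng(0)
    img = (rng.random(shape) < fill).astype(float)
    energy = ((s2(img) - target_s2) ** 2).sum()
    T = t0
    for _ in range(steps):
        ones, zeros = np.argwhere(img == 1), np.argwhere(img == 0)
        i = tuple(ones[rng.integers(len(ones))])      # random occupied pixel
        j = tuple(zeros[rng.integers(len(zeros))])    # random empty pixel
        img[i], img[j] = 0.0, 1.0                     # trial swap
        new_energy = ((s2(img) - target_s2) ** 2).sum()
        if new_energy > energy and rng.random() > np.exp((energy - new_energy) / T):
            img[i], img[j] = 1.0, 0.0                 # reject: undo the swap
        else:
            energy = new_energy
        T *= cool                                     # geometric cooling
    return img
```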
NASA Astrophysics Data System (ADS)
Sukawattanavijit, Chanika; Srestasathiern, Panu
2017-10-01
Land use and land cover (LULC) information is significant for observing and evaluating environmental change. LULC classification applying remotely sensed data is a technique popularly employed at global and local scales, particularly in urban areas, which have diverse land cover types; these are essential components of the urban terrain and ecosystem. At present, object-based image analysis (OBIA) is becoming widely popular for land cover classification using high-resolution images. COSMO-SkyMed SAR data were fused with THAICHOTE (namely, THEOS: Thailand Earth Observation Satellite) optical data for object-based land cover classification. This paper presents a comparison between object-based and pixel-based approaches to image fusion. The per-pixel method, support vector machines (SVM), was applied to the fused image based on principal component analysis (PCA). For the object-based classification, a nearest neighbor (NN) classifier was applied to the fused images to separate the land cover classes. Finally, accuracy was assessed by comparing the land cover maps generated from the fused image dataset and from the THAICHOTE image alone. The object-based classification of the fused COSMO-SkyMed and THAICHOTE images demonstrated the best classification accuracies, well over 85%. As a result, object-based data fusion provides higher land cover classification accuracy than per-pixel data fusion.
Duarte, Cristiana; Pinto-Gouveia, José
2017-12-01
This study examined the phenomenology of shame experiences from childhood and adolescence in a sample of women with Binge Eating Disorder. Moreover, a path analysis tested whether the association between shame-related memories that are traumatic and central to identity and the severity of binge eating symptoms is mediated by current external shame, body image shame and body image cognitive fusion. Participants were 114 patients, who were assessed through the Eating Disorder Examination and the Shame Experiences Interview, and through self-report measures of external shame, body image shame, body image cognitive fusion and binge eating symptoms. Shame experiences in which physical appearance was negatively commented on or criticized by others were the most frequently recalled. A path analysis showed a good fit between the hypothesised mediational model and the data. The traumatic and centrality qualities of shame-related memories predicted current external shame, especially body image shame. Current shame feelings were associated with body image cognitive fusion, which, in turn, predicted levels of binge eating symptomatology. Findings support the relevance of addressing early shame-related memories and negative affective and self-evaluative experiences, namely those related to body image, in the understanding and management of binge eating. Copyright © 2017 Elsevier B.V. All rights reserved.
Statistical image quantification toward optimal scan fusion and change quantification
NASA Astrophysics Data System (ADS)
Potesil, Vaclav; Zhou, Xiang Sean
2007-03-01
Recent advances in imaging technology have brought new challenges and opportunities for automatic and quantitative analysis of medical images. With broader accessibility of more imaging modalities for more patients, fusion of modalities/scans from one time point and longitudinal analysis of changes across time points have become the two most critical differentiators to support more informed, more reliable and more reproducible diagnosis and therapy decisions. Unfortunately, scan fusion and longitudinal analysis are both inherently plagued with increased levels of statistical error. A lack of comprehensive analysis by imaging scientists and a lack of full awareness by physicians pose potential risks in clinical practice. In this paper, we discuss several key error factors affecting imaging quantification, study their interactions, and introduce a simulation strategy to establish general error bounds for change quantification across time. We quantitatively show that image resolution, voxel anisotropy, lesion size, eccentricity, and orientation are all contributing factors to quantification error, and that there is an intricate relationship between voxel anisotropy and lesion shape in affecting quantification error. Specifically, when two or more scans are to be fused at the feature level, optimal linear fusion analysis reveals that scans with voxel anisotropy aligned with lesion elongation should receive a higher weight than other scans. As a result of such optimal linear fusion, we achieve a lower variance than naïve averaging. Simulated experiments are used to validate the theoretical predictions. Future work based on the proposed simulation methods may lead to general guidelines and error lower bounds for quantitative image analysis and change detection.
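The inverse-variance weighting that underlies the "higher weight for anisotropy-aligned scans" conclusion can be shown in a few lines: weighting each scan by the reciprocal of its error variance minimizes the variance of the fused estimate, always beating the naive average. The numbers in the usage comment are hypothetical.

```python
import numpy as np

def fuse_measurements(x, var):
    # x: per-scan measurements of the same quantity; var: per-scan error variances.
    w = 1.0 / np.asarray(var, dtype=float)   # inverse-variance weights
    est = np.sum(w * np.asarray(x, dtype=float)) / np.sum(w)
    est_var = 1.0 / np.sum(w)                # minimal variance of any linear fusion
    return est, est_var

# e.g. two scans of the same lesion volume, the second more anisotropy-degraded:
# fuse_measurements([10.2, 9.5], [0.4, 1.6]) -> estimate 10.06, variance 0.32,
# versus variance (0.4 + 1.6) / 4 = 0.5 for naive averaging.
```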
NASA Astrophysics Data System (ADS)
Merkel, Ronny; Gruhn, Stefan; Dittmann, Jana; Vielhauer, Claus; Bräutigam, Anja
2012-03-01
Determining the age of latent fingerprint traces found at crime scenes has been an unresolved research issue for decades. Solving this issue could provide criminal investigators with the specific time a fingerprint trace was left on a surface, and would therefore enable them to link potential suspects to the time a crime took place, as well as to reconstruct the sequence of events or eliminate irrelevant fingerprints to ensure privacy constraints. Transferring imaging techniques from different application areas, such as 3D image acquisition, surface measurement and chemical analysis, to the domain of lifting latent biometric fingerprint traces is an upcoming trend in forensics. Such non-destructive sensor devices might help to solve the challenge of determining the age of a latent fingerprint trace, since they provide the opportunity to create time series and process them using pattern recognition techniques and statistical methods on digitized 2D, 3D and chemical data, rather than classical, contact-based capturing techniques, which alter the fingerprint trace and therefore make continuous scans impossible. In prior work, we suggested using a feature called binary pixel, a novel approach in the field of fingerprint age determination. The feature uses a Chromatic White Light (CWL) image sensor to continuously scan a fingerprint trace over time and retrieves a characteristic logarithmic aging tendency for 2D-intensity as well as 3D-topographic images from the sensor. In this paper, we propose to combine these two characteristic aging features with other 2D and 3D features from the domains of surface measurement, microscopy, photography and spectroscopy, to achieve an increase in accuracy and reliability of a potential future age determination scheme. Discussing the feasibility of such a variety of sensor devices and possible aging features, we propose a general fusion approach, which might combine promising features into a joint age determination scheme in the future. We furthermore demonstrate the feasibility of the introduced approach by fusing, as an example, the binary pixel features based on 2D-intensity and 3D-topographic images of the mentioned CWL sensor. We conclude that a formula-based age determination approach requires very precise image data, which cannot be achieved at the moment, whereas a machine learning based classification approach seems to be feasible if an adequate set of features can be provided.
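The "characteristic logarithmic aging tendency" lends itself to a simple least-squares fit of f(t) = a·ln(t) + b per feature series. A sketch follows; the feature values are made up for illustration, and the naive averaging of the 2D and 3D slopes is only a stand-in for the proposed feature fusion.

```python
import numpy as np

def fit_log_aging(t, feature):
    # Least-squares fit of feature ≈ a * ln(t) + b; returns (a, b).
    A = np.column_stack([np.log(t), np.ones_like(t)])
    (a, b), *_ = np.linalg.lstsq(A, feature, rcond=None)
    return a, b

t = np.array([1.0, 2.0, 4.0, 8.0, 16.0])               # hours since deposition (hypothetical)
intensity_2d = np.array([0.90, 0.82, 0.75, 0.69, 0.61])  # binary-pixel feature, 2D intensity
topo_3d = np.array([0.88, 0.80, 0.74, 0.66, 0.60])       # binary-pixel feature, 3D topography
a2, _ = fit_log_aging(t, intensity_2d)
a3, _ = fit_log_aging(t, topo_3d)
fused_slope = 0.5 * (a2 + a3)   # naive feature-level fusion of the two aging slopes
```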
One pedigree we all may have come from - did Adam and Eve have the chromosome 2 fusion?
Stankiewicz, Paweł
2016-01-01
In contrast to Great Apes, who have 48 chromosomes, modern humans and likely Neandertals and Denisovans have and had, respectively, 46 chromosomes. The reduction in chromosome number was caused by the head-to-head fusion of two ancestral chromosomes to form human chromosome 2 (HSA2) and may have contributed to the reproductive barrier with Great Apes. Next generation sequencing and molecular clock analyses estimated that this fusion arose prior to our last common ancestor with Neandertal and Denisovan hominins ~ 0.74 - 4.5 million years ago. I propose that, unlike recurrent Robertsonian translocations in humans, the HSA2 fusion was a single nonrecurrent event that spread through a small polygamous clan population bottleneck. Its heterozygous to homozygous conversion, fixation, and accumulation in the succeeding populations was likely facilitated by an evolutionary advantage through the genomic loss rather than deregulation of expression of the gene(s) flanking the HSA2 fusion site at 2q13. The origin of HSA2 might have been a critical evolutionary event influencing higher cognitive functions in various early subspecies of hominins. Next generation sequencing of Homo heidelbergensis and Homo erectus genomes and complete reconstruction of DNA sequence of the orthologous subtelomeric chromosomes in Great Apes should enable more precise timing of HSA2 formation and better understanding of its evolutionary consequences.
Covariance descriptor fusion for target detection
NASA Astrophysics Data System (ADS)
Cukur, Huseyin; Binol, Hamidullah; Bal, Abdullah; Yavuz, Fatih
2016-05-01
Target detection is one of the most important topics for military and civilian applications. To address such detection tasks, hyperspectral imaging sensors provide useful image data containing both spatial and spectral information. Target detection faces various challenging scenarios for hyperspectral images. To overcome these challenges, the covariance descriptor presents many advantages. The detection capability of the conventional covariance descriptor technique can be improved by fusion methods. In this paper, hyperspectral bands are clustered according to inter-band correlation. Target detection is then realized by fusing covariance descriptor results based on the band clusters. The proposed combination technique is denoted Covariance Descriptor Fusion (CDF). The efficiency of the CDF is evaluated by applying it to hyperspectral imagery to detect man-made objects. The obtained results show that the CDF presents better performance than the conventional covariance descriptor.
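A minimal sketch of region covariance descriptors with score-level fusion over band clusters is shown below. The log-Euclidean distance is a stand-in for whatever matrix metric the paper uses, and correlation-based band clustering is reduced to a precomputed grouping.

```python
import numpy as np
from scipy.linalg import logm

def covariance_descriptor(region):
    # region: (H, W, B) hyperspectral patch -> (B, B) covariance matrix.
    X = region.reshape(-1, region.shape[-1]).astype(np.float64)
    return np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized

def log_euclidean_dist(c1, c2):
    # Distance between SPD matrices in the log-Euclidean metric.
    return np.linalg.norm((logm(c1) - logm(c2)).real, 'fro')

def fused_score(target_patch, test_patch, band_clusters):
    # Average the per-cluster descriptor distances (lower = more target-like).
    scores = [log_euclidean_dist(covariance_descriptor(target_patch[..., bands]),
                                 covariance_descriptor(test_patch[..., bands]))
              for bands in band_clusters]
    return np.mean(scores)
```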
Telomere dynamics in an immortal human cell line.
Murnane, J P; Sabatier, L; Marder, B A; Morgan, W F
1994-01-01
The integration of transfected plasmid DNA at the telomere of chromosome 13 in an immortalized simian virus 40-transformed human cell line provided the first opportunity to study polymorphism in the number of telomeric repeat sequences on the end of a single chromosome. Three subclones of this cell line were selected for analysis: one with a long telomere on chromosome 13, one with a short telomere, and one with such extreme polymorphism that no distinct band was discernible. Further subcloning demonstrated that telomere polymorphism resulted from both gradual changes and rapid changes that sometimes involved many kilobases. The gradual changes were due to the shortening of telomeres at a rate similar to that reported for telomeres of somatic cells without telomerase, eventually resulting in the loss of nearly all of the telomere. However, telomeres were not generally lost completely, as shown by the absence of polymorphism in the subtelomeric plasmid sequences. Instead, telomeres that were less than a few hundred base pairs in length showed a rapid, highly heterogeneous increase in size. Rapid changes in telomere length also occurred on longer telomeres. The frequency of this type of change in telomere length varied among the subclones and correlated with chromosome fusion. Therefore, the rapid changes in telomere length appeared occasionally to result in the complete loss of telomeric repeat sequences. Rapid changes in telomere length have been associated with telomere loss and chromosome instability in yeast and could be responsible for the high rate of chromosome fusion observed in many human tumor cell lines. PMID:7957062
Integrating image quality in 2nu-SVM biometric match score fusion.
Vatsa, Mayank; Singh, Richa; Noore, Afzel
2007-10-01
This paper proposes an intelligent 2nu-support vector machine based match score fusion algorithm to improve the performance of face and iris recognition by integrating image quality. The proposed algorithm applies the redundant discrete wavelet transform to evaluate the underlying linear and non-linear features present in the image. A composite quality score is computed to determine the extent of smoothness, sharpness, noise, and other pertinent features present in each subband of the image. The match score and the corresponding quality score of an image are fused using a 2nu-support vector machine to improve verification performance. The proposed algorithm is experimentally validated using the FERET face database and the CASIA iris database. The verification performance and statistical evaluation show that the proposed algorithm outperforms existing fusion algorithms.
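As a hedged stand-in for the 2nu-SVM (scikit-learn does not expose that exact formulation), the sketch below trains a NuSVC with balanced class weights on four-dimensional vectors of match scores plus composite quality scores; the synthetic score distributions are purely illustrative.

```python
import numpy as np
from sklearn.svm import NuSVC

rng = np.random.default_rng(0)
# Columns: [face match score, iris match score, face quality, iris quality].
genuine = np.column_stack([rng.normal(0.8, 0.1, (200, 2)),
                           rng.normal(0.7, 0.1, (200, 2))])
impostor = np.column_stack([rng.normal(0.3, 0.1, (200, 2)),
                            rng.normal(0.7, 0.1, (200, 2))])
X = np.vstack([genuine, impostor])
y = np.array([1] * 200 + [0] * 200)

# class_weight='balanced' is the closest off-the-shelf knob to the 2nu idea
# of handling class imbalance within the nu formulation.
clf = NuSVC(nu=0.1, kernel='rbf', class_weight='balanced').fit(X, y)
decision = clf.predict([[0.75, 0.82, 0.68, 0.71]])  # fused accept/reject
```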
Goudeketting, Seline R; Heinen, Stefan G H; Ünlü, Çağdaş; van den Heuvel, Daniel A F; de Vries, Jean-Paul P M; van Strijen, Marco J; Sailer, Anna M
2017-08-01
To systematically review and meta-analyze the added value of 3-dimensional (3D) image fusion technology in endovascular aortic repair, in terms of its potential to reduce contrast media volume, radiation dose, procedure time, and fluoroscopy time. Electronic databases were systematically searched for studies published between January 2010 and March 2016 that included a control group and described 3D fusion imaging in endovascular aortic procedures. Two independent reviewers assessed the methodological quality of the included studies and extracted data on iodinated contrast volume, radiation dose, procedure time, and fluoroscopy time. Contrast use for standard and complex endovascular aortic repairs (fenestrated, branched, and chimney) was pooled using a random-effects model; outcomes are reported as the mean difference with 95% confidence intervals (CIs). Seven studies, 5 retrospective and 2 prospective, involving 921 patients were selected for analysis. The methodological quality of the studies was moderate (median 17, range 15-18). The use of fusion imaging led to an estimated mean reduction in iodinated contrast of 40.1 mL (95% CI 16.4 to 63.7, p=0.002) for standard procedures and 70.7 mL (95% CI 44.8 to 96.6, p<0.001) for complex repairs. Secondary outcome measures were not pooled because of potential bias in nonrandomized data, but radiation doses, procedure times, and fluoroscopy times were lower, although not always significantly, in the fusion group in 6 of the 7 studies. Compared with the control group, 3D fusion imaging is associated with a significant reduction in the volume of contrast employed for standard and complex endovascular aortic procedures, which can be particularly important in patients with renal failure. Radiation doses, procedure times, and fluoroscopy times were reduced when 3D fusion was used.
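Random-effects pooling of mean differences, as used for the contrast-volume outcome above, is conventionally computed with the DerSimonian-Laird estimator. A compact sketch follows, with placeholder study effects rather than the seven studies' actual data.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    # effects: per-study mean differences; variances: their squared standard errors.
    e, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v                                   # fixed-effect weights
    fixed = np.sum(w * e) / np.sum(w)
    q = np.sum(w * (e - fixed) ** 2)              # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(e) - 1)) / c)       # between-study variance estimate
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_re * e) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)  # normal-approx 95% CI

# e.g. dersimonian_laird([35.0, 52.0, 28.0], [120.0, 200.0, 90.0])
```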
NASA Astrophysics Data System (ADS)
Ning, Nannan; Tian, Jie; Liu, Xia; Deng, Kexin; Wu, Ping; Wang, Bo; Wang, Kun; Ma, Xibo
2014-02-01
In mathematics, optical molecular imaging modalities including bioluminescence tomography (BLT), fluorescence tomography (FMT) and Cerenkov luminescence tomography (CLT) are concerned with a similar inverse source problem. They all involve reconstructing the 3D location of single or multiple internal luminescent/fluorescent sources from the 3D surface flux distribution. To achieve this, accurate fusion between 2D luminescent/fluorescent images and 3D structural images, which may be acquired from micro-CT, MRI or beam scanning, is extremely critical. However, the absence of a universal method that can effectively convert 2D optical information into 3D makes accurate fusion challenging. In this study, to improve fusion accuracy, a new fusion method for dual-modality tomography (luminescence/fluorescence and micro-CT) based on natural light surface reconstruction (NLSR) and the iterated closest point (ICP) algorithm is presented. It consists of an Octree structure, an exact visual hull from marching cubes, and ICP. Unlike conventional limited-projection methods, it performs 360° free-space registration and utilizes more luminescence/fluorescence distribution information from unlimited multi-orientation 2D optical images. A mouse-mimicking phantom (one XPM-2 Phantom Light Source, XENOGEN Corporation) and an in-vivo BALB/C mouse with one implanted luminescent light source were used to evaluate the performance of the new fusion method. Compared with conventional fusion methods, the average error of preset markers was improved by 0.3 and 0.2 pixels with the new method, respectively. After running the same 3D internal light source reconstruction algorithm on the BALB/C mouse, the distance error between the actual and reconstructed internal source was decreased by 0.19 mm.
Chinnadurai, Ponraj; Duran, Cassidy; Al-Jabbari, Odeaa; Abu Saleh, Walid K; Lumsden, Alan; Bismuth, Jean
2016-01-01
To report our initial experience and highlight the value of using intraoperative C-arm cone beam computed tomography (CT; DynaCT(®)) image fusion guidance along with steerable robotic endovascular catheter navigation to optimize vessel cannulation. Between May 2013 and January 2015, all patients who underwent endovascular procedures using the DynaCT image fusion technique along with the Hansen Magellan vascular robotic catheter were included in this study. As a part of preoperative planning, relevant vessel landmarks were electronically marked in contrast-enhanced multi-slice computed tomography images and stored. At the beginning of the procedure, an intraoperative noncontrast C-arm cone beam CT (syngo DynaCT(®), Siemens Medical Solutions USA Inc.) was acquired in the hybrid suite. Preoperative images were then coregistered to intraoperative DynaCT images using aortic wall calcifications and bone landmarks. Stored landmarks were then overlaid on 2-dimensional (2D) live fluoroscopic images as virtual markers that are updated in real time with C-arm movement, table movement and image zoom. Vascular access and the robotic catheter (Magellan(®), Hansen Medical) were set up per standard practice. Vessel cannulation was performed with the robotic catheter based on the electronic virtual markers on live fluoroscopy. The impact of 3-dimensional (3D) image fusion guidance on robotic vessel cannulation was evaluated retrospectively, by assessing quantitative parameters such as the number of angiograms acquired before vessel cannulation and qualitative parameters such as the accuracy of vessel ostium and centerline markers. All 17 vessels attempted in 14 patients were cannulated successfully using the robotic catheter and image fusion guidance. Median vessel diameter at origin was 5.4 mm (range, 2.3-13 mm), whereas 12 of 17 (70.6%) vessels had either a calcified and/or stenosed origin from the parent vessel. Nine of 17 vessels (52.9%) were cannulated without any contrast injection. The median number of angiograms required before cannulation was 0 (range, 0-2). On qualitative assessment, 14 of 15 vessels (93.3%) had grade = 1 accuracy (guidewire inside the virtual ostial marker). Fourteen of 14 vessels had grade = 1 accuracy (virtual centerlines that matched the actual vessel trajectory during cannulation). In this small series, the experience of using DynaCT image fusion guidance together with a steerable endovascular robotic catheter indicates that such image fusion strategies can enhance intraoperative 2D fluoroscopy by bringing in preoperative 3D information about vascular stenosis and/or calcification, angulation, and takeoff from the main vessel, thereby facilitating vessel cannulation. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1990-01-01
Various papers on human and machine strategies in sensor fusion are presented. The general topics addressed include: active vision, measurement and analysis of visual motion, decision models for sensor fusion, implementation of sensor fusion algorithms, applying sensor fusion to image analysis, perceptual modules and their fusion, perceptual organization and object recognition, planning and the integration of high-level knowledge with perception, using prior knowledge and context in sensor fusion.
A new fusion protein platform for quantitatively measuring activity of multiple proteases
2014-01-01
Background: Recombinant proteins fused with specific cleavage sequences are widely used as substrates for quantitatively analyzing the activity of proteases. Here we propose a new fusion platform for multiple proteases, using diaminopropionate ammonia-lyase (DAL) as the fusion protein. It is based on the finding that a fused His6-tag can significantly decrease the activities of DAL from E. coli (eDAL) and Salmonella typhimurium (sDAL). Previously, we showed that His6GST-tagged eDAL could be used to determine the activity of tobacco etch virus protease (TEVp) at different temperatures or in denaturant at different concentrations. In this report, we assay different tags and cleavage sequences on DAL for expression yield in E. coli, stability of the fused proteins, and performance as substrates of other common proteases. Results: We tested seven different protease cleavage sequences (rhinovirus 3C, TEV protease, factor Xa, Ssp DnaB intein, Sce VMA1 intein, thrombin and enterokinase), four different tags (His6, GST, CBD and MBP) and two different DALs (eDAL and sDAL) for their performance as substrates of the seven corresponding proteases. Among them, we found four active DAL-fusion substrates suitable for TEVp, factor Xa, thrombin and DnaB intein. Enterokinase cleaved eDAL at undesired positions and did not process sDAL. Substitution of GST with MBP increased the expression level of the fused eDAL, and this fusion protein was suitable as a substrate for analyzing the activity of rhinovirus 3C. We demonstrated that SUMO protease Ulp1 with an N-terminal His6-tag or MBP tag displayed different activity using the designed His6SUMO-eDAL as substrate. Finally, owing to the high expression level of the DAL-fusion proteins in E. coli, these protein substrates can also be detected directly in crude extract. Conclusion: The results show that our designed DAL-fusion proteins can be used to quantify the activities of both sequence- and conformation-specific proteases, with sufficient substrate specificity. PMID:24649897
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watanabe, Tetsuya; Hara, Hirohisa; Murakami, Izumi
2017-06-10
Line intensities emerging from the Ne-sequence iron ion (Fe xvii) are measured in the laboratory by the Large Helical Device at the National Institute for Fusion Science, and in the solar corona by the EUV Imaging Spectrometer (EIS) on board the Hinode mission. The intensity ratios of Fe xvii λ 204.6/λ 254.8 are derived in the laboratory by unblending the contributions of the Fe xiii and Fe xii line intensities. They are consistent with theoretical predictions and solar observations, the latter of which endorses the in-flight radiometric calibration of the EIS instrument. The still-remaining temperature-dependent behavior of the line ratio suggests contamination by lower-temperature iron lines that are blended with the λ 204.6 line.
Magnetic resonance imaging-ultrasound fusion biopsy for prediction of final prostate pathology.
Le, Jesse D; Stephenson, Samuel; Brugger, Michelle; Lu, David Y; Lieu, Patricia; Sonn, Geoffrey A; Natarajan, Shyam; Dorey, Frederick J; Huang, Jiaoti; Margolis, Daniel J A; Reiter, Robert E; Marks, Leonard S
2014-11-01
We explored the impact of magnetic resonance imaging-ultrasound fusion prostate biopsy on the prediction of final surgical pathology. A total of 54 consecutive men undergoing radical prostatectomy at UCLA after fusion biopsy were included in this prospective, institutional review board approved pilot study. Using magnetic resonance imaging-ultrasound fusion, tissue was obtained from a 12-point systematic grid (mapping biopsy) and from regions of interest detected by multiparametric magnetic resonance imaging (targeted biopsy). A single radiologist read all magnetic resonance imaging, and a single pathologist independently rereviewed all biopsy and whole mount pathology, blinded to prior interpretation and matched specimen. Gleason score concordance between biopsy and prostatectomy was the primary end point. Mean patient age was 62 years and median prostate specific antigen was 6.2 ng/ml. Final Gleason score at prostatectomy was 6 (13%), 7 (70%) and 8-9 (17%). A tertiary pattern was detected in 17 (31%) men. Of 45 high suspicion (image grade 4-5) magnetic resonance imaging targets 32 (71%) contained prostate cancer. The per core cancer detection rate was 20% by systematic mapping biopsy and 42% by targeted biopsy. The highest Gleason pattern at prostatectomy was detected by systematic mapping biopsy in 54%, targeted biopsy in 54% and a combination in 81% of cases. Overall 17% of cases were upgraded from fusion biopsy to final pathology and 1 (2%) was downgraded. The combination of targeted biopsy and systematic mapping biopsy was needed to obtain the best predictive accuracy. In this pilot study magnetic resonance imaging-ultrasound fusion biopsy allowed for the prediction of final prostate pathology with greater accuracy than that reported previously using conventional methods (81% vs 40% to 65%). If confirmed, these results will have important clinical implications. Copyright © 2014 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Natarajan, Shyam; Jones, Tonye A; Priester, Alan M; Geoghegan, Rory; Lieu, Patricia; Delfin, Merdie; Felker, Ely; Margolis, Daniel J A; Sisk, Anthony; Pantuck, Allan; Grundfest, Warren; Marks, Leonard S
2017-10-01
Focal laser ablation is a potential treatment in some men with prostate cancer. Currently focal laser ablation is performed by radiologists in a magnetic resonance imaging unit (in bore). We evaluated the safety and feasibility of performing focal laser ablation in a urology clinic (out of bore) using magnetic resonance imaging-ultrasound fusion for guidance. A total of 11 men with intermediate risk prostate cancer were enrolled in this prospective, institutional review board approved pilot study. Magnetic resonance imaging-ultrasound fusion was used to guide laser fibers transrectally into regions of interest harboring intermediate risk prostate cancer. Thermal probes were inserted for real-time monitoring of intraprostatic temperatures during laser activation. Multiparametric magnetic resonance imaging (3 Tesla) was done immediately after treatment and at 6 months along with comprehensive fusion biopsy. Ten of 11 patients were successfully treated while under local anesthesia. Mean procedure time was 95 minutes (range 71 to 105). Posttreatment magnetic resonance imaging revealed a confined zone of nonperfusion in all 10 men. Mean zone volume was 4.3 cc (range 2.1 to 6.0). No CTCAE grade 3 or greater adverse events developed and no changes were observed in urinary or sexual function. At 6 months magnetic resonance imaging-ultrasound fusion biopsy of the treatment site showed no cancer in 3 patients, microfocal Gleason 3 + 3 in another 3 and persistent intermediate risk prostate cancer in 4. Focal laser ablation of prostate cancer appears safe and feasible with the patient under local anesthesia in a urology clinic using magnetic resonance imaging-ultrasound fusion for guidance and thermal probes for monitoring. Further development is necessary to refine out of bore focal laser ablation and additional studies are needed to determine appropriate treatment margins and oncologic efficacy. Copyright © 2017 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Ashamed and Fused with Body Image and Eating: Binge Eating as an Avoidance Strategy.
Duarte, Cristiana; Pinto-Gouveia, José; Ferreira, Cláudia
2017-01-01
Binge Eating Disorder (BED) is currently recognized as a severe disorder associated with relevant psychiatric and physical comorbidity and marked emotional distress. Shame is a specific negative emotion that has been highlighted as central in eating disorders. However, the effect of shame, and of its underlying mechanisms, on the severity of binge eating symptomatology remained unclear. This study examines the role of shame, depressive symptoms, weight and shape concerns, eating concerns, and body image-related cognitive fusion in the severity of binge eating symptomatology. Seventy-three patients with a diagnosis of BED, established through a clinical interview (Eating Disorder Examination 17.0D), completed measures of external shame, body image-related cognitive fusion, depressive symptoms and binge eating symptomatology. Results revealed positive associations between binge eating severity and depressive symptoms, shame, weight and shape concerns, eating concerns and body image-related cognitive fusion. A path analysis showed that, when controlling for the effect of depressive symptoms, external shame has a direct effect on binge eating severity, and an indirect effect mediated by increased eating concern and higher levels of body image-related cognitive fusion. Results confirmed the plausibility of the model, which explained 43% of the severity of binge eating symptoms. The proposed model suggests that, in BED patients, perceiving that others see the self negatively may be associated with an entanglement with body image-related thoughts and concerns about eating, which may, in turn, fuel binge eating symptoms. Findings have important clinical implications, supporting the relevance of addressing shame and associated processes in binge eating. Shame is a significant predictor of symptomatology severity in BED patients. Shame significantly impacts binge eating, even controlling for depressive symptoms. Shame significantly predicts body image-related cognitive fusion and eating concern. Body image-fusion and eating concern mediate the link between shame and binge eating. Binge eating may be seen as an avoidance strategy from negative self-evaluations. Copyright © 2015 John Wiley & Sons, Ltd.
18F-FDG PET/MRI fusion in characterizing pancreatic tumors: comparison to PET/CT.
Tatsumi, Mitsuaki; Isohashi, Kayako; Onishi, Hiromitsu; Hori, Masatoshi; Kim, Tonsok; Higuchi, Ichiro; Inoue, Atsuo; Shimosegawa, Eku; Takeda, Yutaka; Hatazawa, Jun
2011-08-01
To demonstrate that positron emission tomography (PET)/magnetic resonance imaging (MRI) fusion is feasible in characterizing pancreatic tumors (PTs), comparing MRI and computed tomography (CT) as mapping images for fusion with PET, as well as fused PET/MRI and PET/CT. We retrospectively reviewed 47 sets of (18)F-fluorodeoxyglucose ((18)F-FDG) PET/CT and MRI examinations performed to evaluate suspected or known pancreatic cancer. To assess the ability of mapping images for fusion with PET, CT (of PET/CT) and T1- and T2-weighted (w) MR images (all non-contrast) were graded regarding the visibility of the PT (5-point confidence scale). Fused PET/CT, PET/T1-w and PET/T2-w MR images of the upper abdomen were evaluated to determine whether the mapping images provided additional diagnostic information to PET alone (3-point scale). The overall quality of PET/CT or PET/MRI sets in diagnosis was also assessed (3-point scale). These PET/MRI-related scores were compared to PET/CT-related scores, and the accuracy in characterizing PTs was compared. Forty-three PTs were visualized on CT or MRI, including 30 with abnormal FDG uptake and 13 without. The confidence score for the visibility of the PT was significantly higher on T1-w MRI than CT. The scores for additional diagnostic information to PET and overall quality of each image set in diagnosis were significantly higher for the PET/T1-w MRI set than the PET/CT set. The diagnostic accuracy was higher on PET/T1-w or PET/T2-w MRI (93.0 and 90.7%, respectively) than PET/CT (88.4%), but statistical significance was not obtained. PET/MRI fusion, especially PET with T1-w MRI, was demonstrated to be superior to PET/CT in characterizing PTs, offering better mapping and fusion image quality.
NASA Astrophysics Data System (ADS)
Singh, Dharmendra; Kumar, Harish
Earth observation satellites provide data covering different portions of the electromagnetic spectrum at different spatial and spectral resolutions. The increasing availability of information products generated from satellite images is extending our ability to understand the patterns and dynamics of earth resource systems at all scales of inquiry. One of the most important applications is the generation of land cover classifications from satellite images for understanding the actual status of various land cover classes. The prospect for the use of satellite images in land cover classification is extremely promising, and the quality of satellite images available for land-use mapping is improving rapidly with the development of advanced sensor technology. Particularly noteworthy in this regard is the improved spatial and spectral resolution of the images captured by newer satellite sensors like MODIS, ASTER, Landsat 7, and SPOT 5. For the full exploitation of increasingly sophisticated multisource data, fusion techniques are being developed. Fused images may enhance interpretation capabilities. The images used for fusion have different temporal and spatial resolutions; therefore, the fused image provides a more complete view of the observed objects. One of the main aims of image fusion is to integrate different data in order to obtain more information than can be derived from each of the single-sensor data alone. A good example is the fusion of images acquired by different sensors with different spatial and spectral resolutions. Researchers have applied fusion techniques for three decades and have proposed various useful methods. The importance of high-quality synthesis of spectral information is well suited to, and has been implemented for, land cover classification. More recently, an underlying multiresolution analysis employing the discrete wavelet transform has been used in image fusion. It was found that multisensor image fusion is a tradeoff between the spectral information from low-resolution multispectral images and the spatial information from high-resolution images; with wavelet-transform-based fusion methods, it is easy to control this tradeoff. A newer transform, the curvelet transform, was used in recent years by Starck. A ridgelet transform is applied to square blocks of the detail frames of an undecimated wavelet decomposition, thereby obtaining the curvelet transform. Since the ridgelet transform possesses basis functions matching directional straight lines, the curvelet transform is capable of representing piecewise linear contours on multiple scales through few significant coefficients. This property leads to a better separation between geometric details and background noise, which may easily be reduced by thresholding curvelet coefficients before they are used for fusion. The Terra and Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) instrument provides high radiometric sensitivity (12 bit) in 36 spectral bands ranging in wavelength from 0.4 μm to 14.4 μm, and its data are freely available. Two bands are imaged at a nominal resolution of 250 m at nadir, five bands at 500 m, and the remaining 29 bands at 1 km. In this paper, band 1 (250 m spatial resolution, 620-670 nm bandwidth) and band 2 (250 m spatial resolution, 842-876 nm bandwidth) are considered, as these bands have features well suited to identifying agriculture and other land covers.
In January 2006, the Advanced Land Observing Satellite (ALOS) was successfully launched by the Japan Aerospace Exploration Agency (JAXA). The Phased Array-type L-band SAR (PALSAR) sensor onboard the satellite acquires SAR imagery at a wavelength of 23.5 cm (frequency 1.27 GHz) with multimode and multipolarization observation capabilities. PALSAR can operate in several modes: the fine-beam single (FBS) polarization mode (HH), fine-beam dual (FBD) polarization mode (HH/HV or VV/VH), polarimetric (PLR) mode (HH/HV/VH/VV), and ScanSAR (WB) mode (HH/VV) [15]. This makes PALSAR imagery very attractive for a spatially and temporally consistent monitoring system. The principle of Principal Component Analysis is that most of the information within all the bands can be compressed into a much smaller number of bands with little loss of information. It allows us to extract the low-dimensional subspaces that capture the main linear correlations among the high-dimensional image data. This facilitates viewing the explained variance or signal in the available imagery, allowing both gross and more subtle features in the imagery to be seen. In this paper we explore fusion techniques for enhancing the land cover classification of low-resolution, especially freely available, satellite data. For this purpose, we fuse the PALSAR principal component data with the MODIS principal component data. First, MODIS bands 1 and 2 are considered and their principal components computed. Similarly, the PALSAR HH, HV and VV polarized data are considered and their principal components computed. The PALSAR principal component image is then fused with the MODIS principal component image. The aim of this paper is to analyze the effect on classification accuracy for major land cover types like agriculture, water and urban areas of fusing PALSAR data with MODIS data. The curvelet transform has been applied for the fusion of the two satellite images, and minimum distance classification has been applied to the resultant fused image. It is qualitatively and visually observed that the overall classification accuracy of the MODIS image after fusion is enhanced. This type of fusion technique may be quite helpful in the near future for using freely available satellite data to develop monitoring systems for different land cover classes.
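The final minimum-distance classification step can be sketched independently of the curvelet fusion itself: assign each pixel of the fused principal-component image to the nearest class mean in feature space. The class means are assumed to come from labeled training samples; names and shapes are illustrative.

```python
import numpy as np

def minimum_distance_classify(fused, class_means):
    # fused: (H, W, B) fused image; class_means: (K, B) mean feature vector per class
    # (e.g. agriculture, water, urban), estimated from training samples.
    h, w, b = fused.shape
    X = fused.reshape(-1, b)[:, None, :]                   # (N, 1, B)
    d = np.linalg.norm(X - class_means[None], axis=2)      # (N, K) Euclidean distances
    return d.argmin(axis=1).reshape(h, w)                  # per-pixel class index map
```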
Fragile sites, dysfunctional telomere and chromosome fusions: What is 5S rDNA role?
Barros, Alain Victor; Wolski, Michele Andressa Vier; Nogaroto, Viviane; Almeida, Mara Cristina; Moreira-Filho, Orlando; Vicari, Marcelo Ricardo
2017-04-15
Repetitive DNA regions are known as fragile chromosomal sites, which present high flexibility and low stability. Our focus was to characterize fragile sites in 5S rDNA regions. The species Ancistrus sp. shows a diploid number of 50 and an indicative Robertsonian fusion in chromosomal pair 1. Two sequences of 5S rDNA were identified: 5S.1 rDNA and 5S.2 rDNA. The first sequence contains the structures necessary for gene expression and shows a functional secondary structure prediction. In contrast, the 5S.2 rDNA sequence does not contain the upstream sequences required for expression; furthermore, its structure prediction reveals a nonfunctional ribosomal RNA. Chromosomal mapping revealed several 5S.1 and 5S.2 rDNA clusters. In addition, the 5S.2 rDNA clusters were found in the proximal regions of acrocentric and metacentric chromosomes. The 5S.2 rDNA cluster of pair 1 is co-located with interstitial telomeric sites (ITS). Our results indicate that these clusters are hotspots for chromosomal breaks. During the meiotic prophase bouquet arrangement, double strand breaks (DSBs) at the proximal 5S.2 rDNA of acrocentric chromosomes could lead to homologous and non-homologous repair mechanisms such as Robertsonian fusions. Moreover, ITS sites provide chromosomal instability, resulting in telomeric recombination via the TRF2 shelterin protein and a series of breakage-fusion-bridge cycles. Our proposal is that 5S rDNA-derived sequences act as chromosomal fragile sites in association with some chromosomal rearrangements of Loricariidae. Copyright © 2017 Elsevier B.V. All rights reserved.
Sun, Lu; Xie, Shuping; Qi, Jing; Liu, Ergang; Liu, Di; Liu, Quan; Chen, Sunhui; He, Huining; Yang, Victor C
2017-11-15
Matrix metalloproteinase (MMP)-activatable imaging probes have been explored for tumor detection. However, activation of such probes occurs mainly in the extracellular space, without the intracellular uptake that would offer greater sensitivity. Although cell-penetrating peptides (CPPs) have been demonstrated to enable intracellular delivery of imaging probes, they encounter off-target delivery of their cargos to normal tissues. Herein, we have developed a dual MMP-2-activatable and tumor cell-permeable magnetic nanoprobe to simultaneously achieve selective and intracellular tumor imaging. This novel imaging probe was constructed by self-assembling a hexahistidine-tagged (His-tagged) fluorescent fusion protein chimera and nickel ferrite nanoparticles via a chelation mechanism. The His-tagged fluorescent protein chimera consisted of the red fluorescent protein mCherry as the fluorophore, the low-molecular-weight protamine peptide as the CPP, and the MMP-2 cleavage sequence fused with the hexahistidine tag, whereas the nickel ferrite nanoparticles functioned as the His-tagged protein binder and also the fluorescence quencher. Both in vitro and in vivo results revealed that this imaging probe not only remains nonpermeable to normal tissues, thereby offsetting nonselective cellular uptake, but is also suppressed in fluorescent signal during magnetic tumor-targeting in the circulation, primarily because of the masking of the CPP activity and quenching of the fluorophore by the associated NiFe 2 O 4 nanoparticles. However, these properties were recovered or "turned on" by the action of tumor-associated MMP-2 stimuli, leading to cell penetration of the nanoprobes as well as fluorescence restoration and visualization within the tumor cells. In this regard, the presented tumor-activatable and cell-permeable system appears to be an appealing platform for achieving selective tumor imaging and intracellular protein delivery. Its impact is therefore significant, far-reaching, and wide-spread.
Evaluation of Night Vision Devices for Image Fusion Studies
2004-12-01
Lee, Minsu; Shin, Su-Jin; Oh, Young Taik; Jung, Dae Chul; Cho, Nam Hoon; Choi, Young Deuk; Park, Sung Yoon
2017-09-01
To investigate the utility of fused high b value diffusion-weighted imaging (DWI) and T2-weighted imaging (T2WI) for evaluating depth of invasion in bladder cancer. We included 62 patients with magnetic resonance imaging (MRI) and surgically confirmed urothelial carcinoma in the urinary bladder. An experienced genitourinary radiologist analysed the depth of invasion (T stage <2 or ≥2) using T2WI, DWI, T2WI plus DWI, and fused DWI and T2WI (fusion MRI). Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and accuracy were investigated. Area under the curve (AUC) was analysed to identify T stage ≥2. The rate of patients with surgically confirmed T stage ≥2 was 41.9% (26/62). Sensitivity, specificity, PPV, NPV and accuracy were 50.0%, 55.6%, 44.8%, 60.6% and 53.2%, respectively, with T2WI; 57.7%, 77.8%, 65.2%, 71.8% and 69.4%, respectively, with DWI; 65.4%, 80.6%, 70.8%, 76.3% and 74.2%, respectively, with T2WI plus DWI and 80.8%, 77.8%, 72.4%, 84.9% and 79.0%, respectively, with fusion MRI. AUC was 0.528 with T2WI, 0.677 with DWI, 0.730 with T2WI plus DWI and 0.793 with fusion MRI for T stage ≥2. Fused high b value DWI and T2WI may be a promising non-contrast MRI technique for assessing depth of invasion in bladder cancer. • Accuracy of fusion MRI was 79.0% for T stage ≥2 in bladder cancer. • AUC of fusion MRI was 0.793 for T stage ≥2 in bladder cancer. • Diagnostic performance of fusion MRI was comparable with T2WI plus DWI. • As a non-contrast MRI technique, fusion MRI is useful for bladder cancer.
Electron cyclotron emission imaging and applications in magnetic fusion energy
NASA Astrophysics Data System (ADS)
Tobias, Benjamin John
Energy production through the burning of fossil fuels is an unsustainable practice. Exponentially increasing energy consumption and dwindling natural resources ensure that coal and gas fueled power plants will someday be a thing of the past. However, even before fuel reserves are depleted, our planet may well succumb to disastrous side effects, namely the build up of carbon emissions in the environment triggering world-wide climate change and the countless industrial spills of pollutants that continue to this day. Many alternatives are currently being developed, but none has so much promise as fusion nuclear energy, the energy of the sun. The confinement of hot plasma at temperatures in excess of 100 million Kelvin by a carefully arranged magnetic field for the realization of a self-sustaining fusion power plant requires new technologies and improved understanding of fundamental physical phenomena. Imaging of electron cyclotron radiation lends insight into the spatial and temporal behavior of electron temperature fluctuations and instabilities, providing a powerful diagnostic for investigations into basic plasma physics and nuclear fusion reactor operation. This dissertation presents the design and implementation of a new generation of Electron Cyclotron Emission Imaging (ECEI) diagnostics on toroidal magnetic fusion confinement devices, or tokamaks, around the world. The underlying physics of cyclotron radiation in fusion plasmas is reviewed, and a thorough discussion of millimeter wave imaging techniques and heterodyne radiometry in ECEI follows. The imaging of turbulence and fluid flows has evolved over half a millennium since Leonardo da Vinci's first sketches of cascading water, and applications for ECEI in fusion research are broad ranging. Two areas of physical investigation are discussed in this dissertation: the identification of poloidal shearing in Alfven eigenmode structures predicted by hybrid gyrofluid-magnetohydrodynamic (gyrofluid-MHD) modeling, and magnetic field line displacement during precursor oscillations associated with the sawtooth crash, a disruptive instability observed both in tokamak plasmas with high core current and in the magnetized plasmas of solar flares and other interstellar plasmas. Understanding both of these phenomena is essential for the future of magnetic fusion energy, and important new observations described herein underscore the advantages of imaging techniques in experimental physics.
2014-10-01
The study investigates whether fusion PET/MRI imaging with 18F-choline PET/CT and diffusion-weighted MRI can be successfully applied to target prostate cancer using image-guided prostate... Completed task. The 18F-choline synthesis was implemented and optimized for routine radiotracer production. RDRC committee approval as part of the IRB
NASA Astrophysics Data System (ADS)
Pan, Feng; Deng, Yating; Ma, Xichao; Xiao, Wen
2017-11-01
Digital holographic microtomography is improved and applied to the measurement of three-dimensional refractive index distributions of fusion-spliced optical fibers. Tomographic images are reconstructed from full-angle phase projection images obtained with a setup-rotation approach, in which the laser source, the optical system and the image sensor are arranged on an optical breadboard and synchronously rotated around the fixed object. For retrieving high-quality tomographic images, a numerical method is proposed to compensate for the unwanted movements of the object in the lateral, axial and vertical directions during rotation. The compensation is implemented on the two-dimensional phase images instead of the sinogram. The experimental results distinctly exhibit the internal structures of fusion splices between a single-mode fiber and other fibers, including a multi-mode fiber, a panda polarization maintaining fiber, a bow-tie polarization maintaining fiber and a photonic crystal fiber. In particular, the internal structure distortion in the fusion areas can be intuitively observed, such as the expansion of the stress zones of polarization maintaining fibers, the collapse of the air holes of photonic crystal fibers, etc.
Towards Omni-Tomography—Grand Fusion of Multiple Modalities for Simultaneous Interior Tomography
Wang, Ge; Zhang, Jie; Gao, Hao; Weir, Victor; Yu, Hengyong; Cong, Wenxiang; Xu, Xiaochen; Shen, Haiou; Bennett, James; Furth, Mark; Wang, Yue; Vannier, Michael
2012-01-01
We recently elevated interior tomography from its origin in computed tomography (CT) to a general tomographic principle, and proved its validity for other tomographic modalities including SPECT, MRI, and others. Here we propose "omni-tomography", a novel concept for the grand fusion of multiple tomographic modalities for simultaneous data acquisition in a region of interest (ROI). Omni-tomography can be instrumental when the physiological processes under investigation are multi-dimensional, multi-scale, multi-temporal and multi-parametric. Both preclinical and clinical studies now depend on in vivo tomography, often requiring separate evaluations by different imaging modalities. Over the past decade, two approaches have been used for multimodality fusion: software-based image registration and hybrid scanners such as PET-CT, PET-MRI, and SPECT-CT among others. While there are intrinsic limitations with both approaches, the main obstacle to the seamless fusion of multiple imaging modalities has been the bulkiness of each individual imager and the conflict of their physical (especially spatial) requirements. To address this challenge, omni-tomography is now unveiled as an emerging direction for biomedical imaging and systems biomedicine. PMID:22768108
Research on multi-source image fusion technology in haze environment
NASA Astrophysics Data System (ADS)
Ma, GuoDong; Piao, Yan; Li, Bing
2017-11-01
In a haze environment, a visible image collected by a single sensor can express the details of the target's shape, color and texture very well, but because of the haze its sharpness is low and parts of the target are lost. An infrared image collected by a single sensor, which captures thermal radiation and has strong penetration ability, can clearly express the target subject but loses detail information. Therefore, a multi-source image fusion method is proposed to exploit their respective advantages. Firstly, an improved Dark Channel Prior algorithm is used to preprocess the hazy visible image. Secondly, an improved SURF algorithm is used to register the infrared image and the dehazed visible image. Finally, a weighted fusion algorithm based on information complementarity is used to fuse the images. Experiments show that the proposed method can improve the clarity of the visible target and highlight occluded infrared targets for target recognition.
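The dark-channel-prior preprocessing step can be sketched as follows, using He et al.'s standard formulation rather than the paper's improved variant; window size, omega, and the transmission floor are the usual defaults, and the SURF registration and weighted fusion stages are not reproduced.

```python
import numpy as np
import cv2

def dehaze(img, patch=15, omega=0.95, t0=0.1):
    I = img.astype(np.float64) / 255.0
    kernel = np.ones((patch, patch), np.uint8)
    dark = cv2.erode(I.min(axis=2), kernel)                 # dark channel
    # Atmospheric light A: mean color of the brightest 0.1% dark-channel pixels.
    n = max(1, dark.size // 1000)
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = I[idx].mean(axis=0)
    # Transmission estimate, floored at t0 to avoid over-amplification.
    t = 1.0 - omega * cv2.erode((I / A).min(axis=2), kernel)
    J = (I - A) / np.maximum(t, t0)[..., None] + A          # recovered scene radiance
    return np.clip(J * 255, 0, 255).astype(np.uint8)
```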
A research on radiation calibration of high dynamic range based on the dual channel CMOS
NASA Astrophysics Data System (ADS)
Ma, Kai; Shi, Zhan; Pan, Xiaodong; Wang, Yongsheng; Wang, Jianghua
2017-10-01
A dual-channel complementary metal-oxide semiconductor (CMOS) sensor can produce a high dynamic range (HDR) image by extending the gray-level range through fusion of the high-gain and low-gain channel images of the same frame. In the dual-channel fusion process, the radiometric response coefficients of each pixel in both channels of a frame are used to calculate the gray level of that pixel in the HDR image. Because these radiometric response coefficients play a crucial role in image fusion, an effective method for acquiring them is needed. This article investigates radiometric calibration for high dynamic range imaging based on the dual-channel CMOS, and designs an experiment to calibrate the radiometric response coefficients of the sensor used. Finally, the calibrated response parameters are applied to the dual-channel CMOS, verifying the correctness and feasibility of the proposed method.
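Under the simple assumption of a linear radiometric response per channel, the calibration and the dual-channel merge look roughly like the sketch below; the 12-bit range, saturation threshold, and channel-selection rule are illustrative, not the article's exact procedure.

```python
import numpy as np

def calibrate(gray_levels, radiances):
    # Least-squares fit of gray = gain * radiance + offset -> (gain, offset),
    # from measurements of known radiance levels for one channel.
    A = np.column_stack([radiances, np.ones_like(radiances)])
    (gain, offset), *_ = np.linalg.lstsq(A, gray_levels, rcond=None)
    return gain, offset

def merge_hdr(high, low, cal_high, cal_low, sat=0.95 * 4095):
    # Map each 12-bit channel to radiance with its own calibrated response,
    # then prefer the low-noise high-gain channel wherever it is not saturated.
    rad_high = (high.astype(float) - cal_high[1]) / cal_high[0]
    rad_low = (low.astype(float) - cal_low[1]) / cal_low[0]
    return np.where(high < sat, rad_high, rad_low)
```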
Target detection method by airborne and spaceborne images fusion based on past images
NASA Astrophysics Data System (ADS)
Chen, Shanjing; Kang, Qing; Wang, Zhenggang; Shen, ZhiQiang; Pu, Huan; Han, Hao; Gu, Zhongzheng
2017-11-01
To address the low utilization of past remote sensing data of a target area by remote sensing target detection methods, and their inability to recognize camouflaged targets accurately, a target detection method based on fusing airborne and spaceborne images with past images is proposed in this paper. A past spaceborne remote sensing image of the target area is taken as the background. The airborne and spaceborne remote sensing data are fused and target features are extracted by means of airborne-spaceborne image registration, target change feature extraction, background noise suppression and artificial target feature extraction based on real-time aerial optical remote sensing images. Finally, a support vector machine is used to detect and recognize targets on the fused feature data. The experimental results establish that the proposed method combines the change features of the target area from airborne and spaceborne remote sensing images with the target detection algorithm, and achieves good detection and recognition of both camouflaged and non-camouflaged targets.
Multimodal Medical Image Fusion by Adaptive Manifold Filter.
Geng, Peng; Liu, Shuaiqi; Zhuang, Shanna
2015-01-01
Medical image fusion plays an important role in the diagnosis and treatment of diseases, for example in image-guided radiotherapy and surgery. Modified local contrast information is proposed to fuse multimodal medical images. Firstly, the adaptive manifold filter is introduced to filter the source images, providing the low-frequency part of the modified local contrast. Secondly, the modified spatial frequency of the source images is adopted as the high-frequency part of the modified local contrast. Finally, the pixel with the larger modified local contrast is selected for the fused image. The presented scheme outperforms the guided filter method in the spatial domain, the dual-tree complex wavelet transform-based method, the nonsubsampled contourlet transform-based method, and four classic fusion methods in terms of visual quality. Furthermore, the mutual information values of the presented method are on average 55%, 41%, and 62% higher than those of the three transform-based methods, and its edge-based similarity measures are on average 13%, 33%, and 14% higher, for the six pairs of source images.
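The selection rule described above reduces to "copy the pixel with the larger local contrast." In the sketch below, a bilateral filter stands in for the adaptive manifold filter and a Sobel gradient magnitude for the modified spatial frequency; parameters are illustrative.

```python
import numpy as np
import cv2

def local_contrast(img):
    # Low-frequency base (bilateral filter as a stand-in for the adaptive
    # manifold filter) and high-frequency measure (gradient magnitude as a
    # stand-in for the modified spatial frequency).
    base = cv2.bilateralFilter(img.astype(np.float32), 9, 75, 75)
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
    detail = np.sqrt(gx ** 2 + gy ** 2)
    return detail / (base + 1.0)          # modified-local-contrast proxy

def fuse_pair(a, b):
    ca, cb = local_contrast(a), local_contrast(b)
    return np.where(ca >= cb, a, b)       # keep the higher-contrast pixel
```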
Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study
Sappa, Angel D.; Carvajal, Juan A.; Aguilera, Cristhian A.; Oliveira, Miguel; Romero, Dennis; Vintimilla, Boris X.
2016-01-01
This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state of the art approaches together with adaptations proposed in the current work. The options evaluated in the current work result from the combination of different setups in the wavelet image decomposition stage together with different fusion strategies for the final merging stage that generates the resulting representation. Most of the approaches evaluate results according to the application for which they are intended for. Sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and performance of obtained results; these correlations can be used to define a criteria for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR). PMID:27294938
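One representative strategy from the compared family, single-level DWT fusion with averaged approximation coefficients and max-absolute detail selection, can be written with PyWavelets as follows; the wavelet choice ('db2') is arbitrary here, and the inputs are assumed to be registered and equally sized.

```python
import numpy as np
import pywt

def dwt_fuse(vis, ir, wavelet='db2'):
    aV, dV = pywt.dwt2(vis.astype(float), wavelet)   # (approx, (H, V, D) details)
    aI, dI = pywt.dwt2(ir.astype(float), wavelet)
    fused_a = 0.5 * (aV + aI)                        # average the low-pass band
    fused_d = tuple(np.where(np.abs(v) >= np.abs(i), v, i)   # max-abs detail rule
                    for v, i in zip(dV, dI))
    return pywt.idwt2((fused_a, fused_d), wavelet)   # reconstruct the fused image
```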
Single-Scale Fusion: An Effective Approach to Merging Images.
Ancuti, Codruta O; Ancuti, Cosmin; De Vleeschouwer, Christophe; Bovik, Alan C
2017-01-01
Due to its robustness and effectiveness, multi-scale fusion (MSF) based on the Laplacian pyramid decomposition has emerged as a popular technique that has shown utility in many applications. Guided by several intuitive measures (weight maps), the MSF process is versatile and straightforward to implement. However, the number of pyramid levels increases with the image size, which implies sophisticated data management and memory accesses, as well as additional computations. Here, we introduce a simplified formulation that reduces MSF to a single-level process. Starting from the MSF decomposition, we explain both mathematically and intuitively (visually) a way to simplify the classical MSF approach with minimal loss of information. The resulting single-scale fusion (SSF) solution is a close approximation of the MSF process that eliminates important redundant computations. It also provides insights regarding why MSF is so effective. While our simplified expression is derived in the context of high dynamic range imaging, we show its generality on several well-known fusion-based applications, such as image compositing, extended depth of field, medical imaging, and blending thermal (infrared) images with visible light. Besides visual validation, quantitative evaluations demonstrate that our SSF strategy yields results that are highly competitive with traditional MSF approaches.
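The building block that SSF collapses to a single level is per-pixel weighted blending with normalized weight maps. In this sketch, a Gaussian smoothing of the normalized weights stands in for SSF's derived filtering step; the sigma is illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_level_fusion(images, weights, sigma=5.0, eps=1e-12):
    # images, weights: lists of equally sized 2D float arrays.
    W = np.stack([w.astype(float) for w in weights])
    W /= W.sum(axis=0, keepdims=True) + eps              # per-pixel normalization
    W = np.stack([gaussian_filter(w, sigma) for w in W]) # smooth maps (SSF stand-in)
    return np.sum(W * np.stack(images), axis=0)          # single-level blend
```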
NASA Astrophysics Data System (ADS)
Wang, G. H.; Wang, H. B.; Fan, W. F.; Liu, Y.; Chen, C.
2018-04-01
Traditional change detection algorithms depend mainly on the spectral information of image patches and fail to effectively mine and fuse multiple image features. Borrowing ideas from object-oriented analysis, this article proposes a multi-feature fusion change detection algorithm for remote sensing images. First, image objects are obtained by multi-scale segmentation; then a color histogram and a linear gradient (edge) histogram are calculated for each object. The Earth Mover's Distance (EMD) statistical operator measures the color distance and the edge-line feature distance between corresponding objects from different periods, and an adaptive weighting method combines the two distances to construct the object heterogeneity. Finally, curvature histogram analysis of the image patches yields the change detection results. The experimental results show that the method can fully fuse color and edge-line features, thus improving the accuracy of the change detection.
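The object-heterogeneity construction can be sketched as follows, assuming 1-D EMD via SciPy's wasserstein_distance. The histogram bin counts and the adaptive-weight rule are illustrative assumptions, not the article's exact parameters.

```python
# Sketch of EMD-based object heterogeneity: EMD between color histograms and
# between edge-direction histograms of one object at two dates, combined with
# an adaptive weight. Bin counts and the weighting rule are assumptions.
import numpy as np
from scipy.stats import wasserstein_distance  # 1-D EMD

def histogram(values, bins=32, value_range=(0, 256)):
    h, _ = np.histogram(values, bins=bins, range=value_range, density=True)
    return h

def object_heterogeneity(pix_t1, pix_t2, grad_dir_t1, grad_dir_t2):
    """pix_*: gray values of the object's pixels; grad_dir_*: edge directions
    in degrees. Each object is assumed to contain at least one pixel."""
    gray_bins = np.arange(32) + 0.5
    d_color = wasserstein_distance(gray_bins, gray_bins,
                                   histogram(pix_t1), histogram(pix_t2))
    dir_bins = np.arange(18) + 0.5
    d_edge = wasserstein_distance(
        dir_bins, dir_bins,
        histogram(grad_dir_t1, bins=18, value_range=(0, 180)),
        histogram(grad_dir_t2, bins=18, value_range=(0, 180)))
    # Adaptive weighting: give more weight to the more discriminative cue.
    w = d_color / (d_color + d_edge + 1e-12)
    return w * d_color + (1 - w) * d_edge
```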
Li, Xinxin; Wu, Zhihao; Zhang, Chuanfu; Jia, Leili; Song, Hongbin; Xu, Yuanyong
2014-01-01
To construct a eukaryotic expression vector containing human complement receptor 2 (CR2)-Fc and express the CR2-Fc fusion protein in Chinese hamster ovary (CHO) cells, the extracellular domain of human CR2 and IgG1 Fc were separately amplified, ligated and inserted into the eukaryotic expression vector PCI-neo. After verification by restriction enzyme digestion and sequencing, the recombinant plasmid was transfected into CHO K1 cells. Clones stably expressing the fusion protein were obtained by G418 selection. The expression of the CR2-Fc fusion protein was detected and confirmed by SDS-PAGE and Western blotting. Restriction enzyme digestion and sequencing demonstrated that the recombinant plasmid was valid. SDS-PAGE showed that the relative molecular mass (Mr) of the purified product was consistent with the expected value, and Western blotting further confirmed a single band at the expected position. The eukaryotic expression vector CR2-Fc/PCI-neo was thus constructed successfully. The obtained fusion protein was active and can be used for further study of its role in HIV control.
A genetically encoded fluorescent tRNA is active in live-cell protein synthesis
Masuda, Isao; Igarashi, Takao; Sakaguchi, Reiko; Nitharwal, Ram G.; Takase, Ryuichi; Han, Kyu Young; Leslie, Benjamin J.; Liu, Cuiping; Gamper, Howard; Ha, Taekjip; Sanyal, Suparna
2017-01-01
Transfer RNAs (tRNAs) perform essential tasks for all living cells. They are major components of the ribosomal machinery for protein synthesis and they also serve in non-ribosomal pathways for regulation and signaling of metabolism. We describe the development of a genetically encoded fluorescent tRNA fusion with the potential for imaging in live Escherichia coli cells. This tRNA fusion carries a Spinach aptamer that becomes fluorescent upon binding of a cell-permeable and non-toxic fluorophore. We show that, despite having a structural framework significantly larger than any natural tRNA species, this fusion is a viable probe for monitoring tRNA stability in a cellular quality control mechanism that degrades structurally damaged tRNA. Importantly, this fusion is active in E. coli live-cell protein synthesis, allowing peptidyl transfer at a rate sufficient to support cell growth, indicating that it is accommodated by translating ribosomes. Imaging analysis shows that this fusion and ribosomes are both excluded from the nucleoid, indicating that the fusion and ribosomes are in the cytosol together, possibly engaged in protein synthesis. This fusion methodology has the potential for developing new tools for live-cell imaging of tRNA, with the unique advantages of stoichiometric labeling and broad applicability to all cells amenable to genetic engineering. PMID:27956502
Fusion of imaging and nonimaging data for surveillance aircraft
NASA Astrophysics Data System (ADS)
Shahbazian, Elisa; Gagnon, Langis; Duquet, Jean Remi; Macieszczak, Maciej; Valin, Pierre
1997-06-01
This paper describes a phased, incremental integration approach for applying image analysis and data fusion technologies to provide automated intelligent target tracking and identification for airborne surveillance on board an Aurora Maritime Patrol Aircraft. The sensor suite of the Aurora consists of a radar, an identification friend or foe (IFF) system, an electronic support measures (ESM) system, a spotlight synthetic aperture radar (SSAR), a forward-looking infrared (FLIR) sensor and a Link-11 tactical datalink system. Lockheed Martin Canada (LMCan) is developing a testbed that will be used to analyze and evaluate approaches for combining the data provided by the existing sensors, which were not initially designed to feed a fusion system. Three concurrent proof-of-concept research activities feed techniques, algorithms and methodology into three sequential phases of integration of this testbed. These activities are: (1) analysis of the fusion architecture (track/contact/hybrid) most appropriate for the type of data available, (2) extraction and fusion of simple features from the imaging data into the fusion system performing automatic target identification, and (3) development of a unique software architecture that permits integration and independent evolution, enhancement and optimization of various decision aid capabilities, such as multi-sensor data fusion (MSDF), situation and threat assessment (STA) and resource management (RM).
Simulated disparity and peripheral blur interact during binocular fusion.
Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J
2014-07-17
We have developed a low-cost, practical gaze-contingent display in which natural images are presented to the observer with dioptric blur and stereoscopic disparity that are dependent on the three-dimensional structure of natural scenes. Our system simulates a distribution of retinal blur and depth similar to that experienced in real-world viewing conditions by emmetropic observers. We implemented the system using light-field photographs taken with a plenoptic camera which supports digital refocusing anywhere in the images. We coupled this capability with an eye-tracking system and stereoscopic rendering. With this display, we examine how the time course of binocular fusion depends on depth cues from blur and stereoscopic disparity in naturalistic images. Our results show that disparity and peripheral blur interact to modify eye-movement behavior and facilitate binocular fusion, and the greatest benefit was gained by observers who struggled most to achieve fusion. Even though plenoptic images do not replicate an individual's aberrations, the results demonstrate that a naturalistic distribution of depth-dependent blur may improve 3-D virtual reality, and that interruptions of this pattern (e.g., with intraocular lenses) which flatten the distribution of retinal blur may adversely affect binocular fusion. © 2014 ARVO. PMID:25034260
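The depth-dependent blur component of such a display can be sketched schematically as below. This is not the authors' plenoptic-refocusing pipeline; it simply quantizes the scene into depth planes that are blurred in proportion to their dioptric distance from the fixated depth. Plane count and the blur-versus-defocus scaling are assumptions.

```python
# Schematic sketch of gaze-contingent, depth-dependent blur: blur each depth
# plane by its dioptric distance from fixation, then composite.
import cv2
import numpy as np

def render_depth_blur(image, depth_m, fixation_depth_m, planes=8, blur_gain=2.0):
    """image: float32 HxWx3; depth_m: per-pixel depth in meters (HxW)."""
    diopters = 1.0 / np.clip(depth_m, 0.1, 100.0)
    fix_d = 1.0 / max(fixation_depth_m, 0.1)
    edges = np.linspace(diopters.min(), diopters.max(), planes + 1)
    idx = np.clip(np.digitize(diopters, edges) - 1, 0, planes - 1)
    out = np.zeros_like(image)
    for k in range(planes):
        mask = idx == k
        if not mask.any():
            continue
        # Defocus blur grows with the plane's dioptric distance from fixation.
        center = 0.5 * (edges[k] + edges[k + 1])
        sigma = blur_gain * abs(center - fix_d)
        plane = cv2.GaussianBlur(image, (0, 0), sigmaX=max(sigma, 1e-3))
        out[mask] = plane[mask]
    return out
```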
Fusion of MultiSpectral and Panchromatic Images Based on Morphological Operators.
Restaino, Rocco; Vivone, Gemine; Dalla Mura, Mauro; Chanussot, Jocelyn
2016-04-20
Nonlinear decomposition schemes constitute an alternative to classical approaches for addressing the problem of data fusion. In this paper we discuss the application of this methodology to a popular remote sensing application called pansharpening, which consists of fusing a low-resolution multispectral image with a high-resolution panchromatic image. We design a complete pansharpening scheme based on morphological half-gradient operators and demonstrate the suitability of this algorithm through comparison with state-of-the-art approaches. Four datasets acquired by the Pleiades, Worldview-2, Ikonos and Geoeye-1 satellites are employed for the performance assessment, attesting to the effectiveness of the proposed approach in producing top-class images with a setting independent of the specific sensor.
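A loose sketch of the detail-injection idea using morphological half-gradients follows. The structuring element, the high-pass construction and the injection gain are illustrative assumptions and do not reproduce the paper's exact scheme.

```python
# Sketch of morphological detail-injection pansharpening: estimate PAN detail
# from the two half-gradients and inject it into the upsampled MS bands.
import cv2
import numpy as np

def morpho_pansharpen(ms_up, pan, se_size=5):
    """ms_up: HxWxB multispectral image upsampled to the PAN grid; pan: HxW.
    Both float32."""
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (se_size, se_size))
    half_plus = cv2.dilate(pan, se) - pan    # half-gradient by dilation
    half_minus = pan - cv2.erode(pan, se)    # half-gradient by erosion
    # Morphological high-pass: PAN minus the dilation/erosion midrange,
    # built from the two half-gradients above.
    detail = 0.5 * (half_minus - half_plus)
    sharpened = np.empty_like(ms_up)
    for b in range(ms_up.shape[2]):
        band = ms_up[..., b]
        # High-pass-modulation style gain roughly preserves spectral ratios.
        gain = band / (cv2.GaussianBlur(pan, (0, 0), 2.0) + 1e-6)
        sharpened[..., b] = band + gain * detail
    return sharpened
```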
Nazarena Pizzi, M; Aguadé Bruix, S; Cuéllar Calabria, H; Aliaga, V; Candell Riera, J
2010-01-01
A 77-year-old patient was admitted for acute coronary syndrome without ST elevation. His risk was stratified using myocardial perfusion gated SPECT, and mild inferior ischemia was observed. Medical therapy was therefore optimized and the patient was discharged. As he continued to have exertional dyspnea, a coronary CT angiography was performed, revealing severe lesions in the proximal RCA. SPECT-CT fusion images correlated the myocardial perfusion defect with a posterior descending artery arising from the RCA, in a co-dominant coronary area. Subsequently, cardiac catheterization was indicated for his treatment. The current use of image fusion studies is limited to patients in whom it is difficult to attribute a perfusion defect to a specific coronary artery. In our patient, the fusion images helped to distinguish between the RCA and the circumflex artery as the culprit artery of the ischemia. Copyright © 2010 Elsevier España, S.L. y SEMNIM. All rights reserved.
Fusion of LBP and SWLD using spatio-spectral information for hyperspectral face recognition
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Jiang, Peng; Zhang, Shuai; Xiong, Jinquan
2018-01-01
Hyperspectral imaging, which records intrinsic spectral information of the skin across different spectral bands, has become an important approach to robust face recognition. The main challenges in hyperspectral face recognition, however, are high data dimensionality, low signal-to-noise ratio and inter-band misalignment. In this paper, hyperspectral face recognition based on LBP (Local Binary Pattern) and SWLD (Simplified Weber Local Descriptor) is proposed to extract discriminative local features from spatio-spectral fusion information. Firstly, a spatio-spectral fusion strategy based on statistical information is used to obtain discriminative features of hyperspectral face images. Secondly, LBP is applied to extract the orientation of edges in the fused face image. Thirdly, SWLD is proposed to encode the intensity information in hyperspectral images. Finally, we adopt a symmetric Kullback-Leibler distance to compare the encoded face images. The method is tested on the Hong Kong Polytechnic University Hyperspectral Face database (PolyUHSFD). Experimental results show that the proposed method achieves a higher recognition rate (92.8%) than state-of-the-art hyperspectral face recognition algorithms.
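The descriptor-and-matching stage can be sketched as follows, with LBP from scikit-image, a simplified WLD-style differential-excitation histogram, and a symmetric Kullback-Leibler distance. Bin counts and LBP parameters are assumptions, not the paper's settings.

```python
# Sketch of an LBP + simplified-WLD descriptor with symmetric KL matching.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_swld_descriptor(img, lbp_bins=10, wld_bins=16):
    """img: 2-D grayscale (fused) face image."""
    img = img.astype(np.float64)
    # Uniform LBP captures local edge-orientation structure (values 0..9).
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    h_lbp, _ = np.histogram(lbp, bins=lbp_bins, range=(0, lbp_bins), density=True)
    # Simplified WLD differential excitation: arctan of the ratio between the
    # summed 8-neighbor differences and the center intensity.
    pad = np.pad(img, 1, mode="edge")
    diff_sum = sum(pad[1 + dy:pad.shape[0] - 1 + dy,
                       1 + dx:pad.shape[1] - 1 + dx] - img
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    excitation = np.arctan(diff_sum / (img + 1e-6))
    h_wld, _ = np.histogram(excitation, bins=wld_bins,
                            range=(-np.pi / 2, np.pi / 2), density=True)
    return np.concatenate([h_lbp, h_wld])

def symmetric_kl(p, q, eps=1e-10):
    """Symmetric Kullback-Leibler distance between two descriptors."""
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```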