NASA Technical Reports Server (NTRS)
Oneil, William F.
1993-01-01
The fusion of radar and electro-optic (E-O) sensor images presents unique challenges. The two sensors measure different properties of the real three-dimensional (3-D) world. Forming the sensor outputs into a common format does not mask these differences. In this paper, the conditions under which fusion of the two sensor signals is possible are explored. The program currently planned to investigate this problem is briefly discussed.
Goreczny, Sebastian; Dryzek, Pawel; Morgan, Gareth J; Lukaszewski, Maciej; Moll, Jadwiga A; Moszura, Tomasz
2017-08-01
We report initial experience with novel three-dimensional (3D) image fusion software for guidance of transcatheter interventions in congenital heart disease. Developments in fusion imaging have facilitated the integration of 3D roadmaps from computed tomography or magnetic resonance imaging datasets. The latest software allows live fusion of two-dimensional (2D) fluoroscopy with pre-registered 3D roadmaps. We reviewed all cardiac catheterizations guided with this software (Philips VesselNavigator). Pre-catheterization imaging and catheterization data were collected, focusing on fusion of the 3D roadmap, intervention guidance, and contrast and radiation exposure. From 09/2015 until 06/2016, VesselNavigator was applied in 34 patients for guidance (n = 28) or planning (n = 6) of cardiac catheterization. Successful 2D-3D registration was performed in all 28 patients. Bony structures combined with the cardiovascular silhouette were used for fusion in 26 patients (93%), calcifications in 9 (32%), previously implanted devices in 8 (29%), and low-volume contrast injection in 7 patients (25%). Accurate initial 3D roadmap alignment was achieved in 25 patients (89%). Six patients (22%) required realignment during the procedure due to distortion of the anatomy after introduction of stiff equipment. Overall, VesselNavigator was applied successfully in 27 patients (96%) without any complications related to 3D image overlay. VesselNavigator was useful for guidance in nearly all cardiac catheterizations. The combination of anatomical markers and low-volume contrast injections allowed reliable 2D-3D registration in the vast majority of patients.
Panuccio, Giuseppe; Torsello, Giovanni Federico; Pfister, Markus; Bisdas, Theodosios; Bosiers, Michel J; Torsello, Giovanni; Austermann, Martin
2016-12-01
To assess the usability of a fully automated fusion imaging engine prototype, matching preinterventional computed tomography with intraoperative fluoroscopic angiography during endovascular aortic repair. From June 2014 to February 2015, all patients treated electively for abdominal and thoracoabdominal aneurysms were enrolled prospectively. Before each procedure, preoperative planning was performed with a fully automated fusion engine prototype based on computed tomography angiography, creating a mesh model of the aorta. In a second step, this three-dimensional dataset was registered with the two-dimensional intraoperative fluoroscopy. The main outcome measure was the applicability of the fully automated fusion engine. Secondary outcomes were freedom from failure of automatic segmentation or of automatic registration, as well as accuracy of the mesh model, measured as deviation from intraoperative angiography in millimeters, if applicable. Twenty-five patients were enrolled in this study. The fusion imaging engine could be used successfully in 92% of the cases (n = 23). Freedom from failure of automatic segmentation was 44% (n = 11). Freedom from failure of automatic registration was 76% (n = 19); the median error of the automatic registration process was 0 mm (interquartile range, 0-5 mm). The fully automated fusion imaging engine was found to be applicable in most cases, although in several cases fully automated data processing was not possible and manual intervention was required. The accuracy of the automatic registration yielded excellent results and promises a useful and simple-to-use technology. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Advanced Scintillator Detectors for Neutron Imaging in Inertial Confinement Fusion
NASA Astrophysics Data System (ADS)
Geppert-Kleinrath, Verena; Danly, Christopher; Merrill, Frank; Simpson, Raspberry; Volegov, Petr; Wilde, Carl
2016-10-01
The neutron imaging team at Los Alamos National Laboratory (LANL) has been providing two-dimensional neutron imaging of the inertial confinement fusion process at the National Ignition Facility (NIF) for over five years. Neutron imaging is a powerful tool in which position-sensitive detectors register neutrons emitted in the fusion reactions, producing a picture of the burning fuel. Recent images have revealed possible multi-dimensional asymmetries, calling for additional views to facilitate three-dimensional imaging. These will be along shorter lines of sight to stay within the existing facility at NIF. In order to field imaging capabilities equivalent to the existing system, several technological challenges have to be met: high spatial resolution, high light output, and fast scintillator response to capture lower-energy neutrons that have scattered from non-burning regions of fuel. Deuterated scintillators are a promising candidate to achieve the timing and resolution required; a systematic study of deuterated and non-deuterated polystyrene and liquid samples is currently ongoing. A test stand has been implemented to measure the response function, and preliminary data on resolution and light output have been obtained at the LANL Weapons Neutron Research facility.
The evolution of image-guided lumbosacral spine surgery.
Bourgeois, Austin C; Faulkner, Austin R; Pasciak, Alexander S; Bradley, Yong C
2015-04-01
Techniques and approaches of spinal fusion have evolved considerably since their first description in the early 1900s. The incorporation of pedicle screw constructs into lumbosacral spine surgery is among the most significant advances in the field, offering immediate stability and decreased rates of pseudarthrosis compared with previously described methods. However, early studies describing pedicle screw fixation, and numerous studies thereafter, have demonstrated clinically significant sequelae of inaccurate placement of surgical fusion hardware. A number of image guidance systems have been developed to reduce morbidity from hardware malposition in increasingly complex spine surgeries. Advanced image guidance systems such as intraoperative stereotaxis improve the accuracy of pedicle screw placement using a variety of surgical approaches; however, their clinical indications and clinical impact remain debated. Beginning with intraoperative fluoroscopy, this article describes the evolution of image-guided lumbosacral spinal fusion, emphasizing two-dimensional (2D) and three-dimensional (3D) navigational methods.
NASA Astrophysics Data System (ADS)
Pan, Feng; Deng, Yating; Ma, Xichao; Xiao, Wen
2017-11-01
Digital holographic microtomography is improved and applied to the measurement of three-dimensional refractive index distributions of fusion-spliced optical fibers. Tomographic images are reconstructed from full-angle phase projection images obtained with a setup-rotation approach, in which the laser source, the optical system, and the image sensor are arranged on an optical breadboard and synchronously rotated around the fixed object. To retrieve high-quality tomographic images, a numerical method is proposed to compensate for the unwanted movements of the object in the lateral, axial, and vertical directions during rotation. The compensation is implemented on the two-dimensional phase images instead of the sinogram. The experimental results distinctly reveal the internal structures of fusion splices between a single-mode fiber and other fibers, including a multi-mode fiber, a panda polarization-maintaining fiber, a bow-tie polarization-maintaining fiber, and a photonic crystal fiber. In particular, the internal structure distortion in the fusion areas can be intuitively observed, such as the expansion of the stress zones of polarization-maintaining fibers and the collapse of the air holes of photonic crystal fibers.
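The movement compensation above operates on the 2D phase projection images before tomographic reconstruction. As a hedged illustration (not the authors' algorithm, which also handles axial motion), the lateral and vertical drift between successive projections can be estimated by phase correlation; the synthetic images below stand in for measured phase maps.

```python
# Sketch of one plausible compensation step: estimate the integer shift
# between consecutive projection images by phase correlation, then
# re-center each projection before reconstruction. Synthetic data only.
import numpy as np

def phase_corr_shift(a, b):
    """Integer (dy, dx) such that b ~= np.roll(a, (dy, dx), axis=(0, 1))."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    corr = np.fft.ifft2(F / np.maximum(np.abs(F), 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    dy = dy - a.shape[0] if dy > a.shape[0] // 2 else dy
    dx = dx - a.shape[1] if dx > a.shape[1] // 2 else dx
    return dy, dx

rng = np.random.default_rng(9)
ref = rng.random((128, 128))                            # stand-in phase map
moved = np.roll(ref, (3, -5), axis=(0, 1))              # simulated drift
print(phase_corr_shift(ref, moved))                     # (3, -5)
```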
Baek, Jihye; Huh, Jangyoung; Kim, Myungsoo; Hyun An, So; Oh, Yoonjin; Kim, DongYoung; Chung, Kwangzoo; Cho, Sungho; Lee, Rena
2013-02-01
To evaluate the accuracy of volume measurement using three-dimensional ultrasound (3D US), and to verify the feasibility of replacing CT-MR fusion images with CT-3D US in radiotherapy treatment planning. Phantoms, consisting of water, contrast agent, and agarose, were manufactured. The volume was measured using 3D US, CT, and MR devices. CT-3D US and MR-3D US image fusion software was developed using the Insight Toolkit library in order to acquire three-dimensional fusion images. The quality of the image fusion was evaluated using metric values and fusion images. Volume measurement using 3D US showed a 2.8 ± 1.5% error, versus a 4.4 ± 3.0% error for CT and a 3.1 ± 2.0% error for MR. The results imply that volume measurement using 3D US devices has an accuracy similar to that of CT and MR. Three-dimensional image fusion of CT-3D US and MR-3D US was successfully performed using phantom images. Moreover, MR-3D US image fusion was performed using human bladder images. 3D US could be used in the volume measurement of human bladders and prostates. CT-3D US image fusion could be used to monitor the target position in each fraction of external beam radiation therapy. Moreover, the feasibility of replacing CT-MR image fusion with CT-3D US in radiotherapy treatment planning was verified.
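The fusion software above was built on the Insight Toolkit (ITK). As a hedged illustration only (not the authors' code), a minimal rigid-registration sketch in SimpleITK, ITK's Python wrapper; the synthetic blob volumes and every parameter choice are assumptions.

```python
# Hedged sketch of rigid volume-to-volume registration with SimpleITK,
# the Python wrapper around the Insight Toolkit. Synthetic Gaussian-blob
# volumes stand in for the CT and 3D-US phantom scans; all settings are
# illustrative, not the authors' implementation.
import numpy as np
import SimpleITK as sitk

zyx = np.mgrid[0:64, 0:64, 0:64].astype(np.float32)
blob = np.exp(-(((zyx - 32.0) ** 2).sum(axis=0)) / (2 * 8.0 ** 2))
fixed = sitk.GetImageFromArray(blob)              # stand-in "CT" volume

shift = sitk.Euler3DTransform()
shift.SetTranslation((4.0, -3.0, 2.0))            # known ground-truth offset
moving = sitk.Resample(fixed, shift)              # stand-in "3D US" volume

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInitialTransform(sitk.Euler3DTransform(), inPlace=False)
reg.SetInterpolator(sitk.sitkLinear)

tx = reg.Execute(fixed, moving)
# Last three parameters: recovered translation, the inverse of the
# applied offset, so approximately (-4, 3, -2).
print(tx.GetParameters())
```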
Introduction to clinical and laboratory (small-animal) image registration and fusion.
Zanzonico, Pat B; Nehmeh, Sadek A
2006-01-01
Imaging has long been a vital component of clinical medicine and, increasingly, of biomedical research in small animals. Clinical and laboratory imaging modalities can be divided into two general categories, structural (or anatomical) and functional (or physiological). The latter, in particular, has spawned what has come to be known as "molecular imaging". Image registration and fusion have rapidly emerged as invaluable components of both clinical and small-animal imaging and have led to the development and marketing of a variety of multi-modality devices, e.g. PET-CT, which provide registered and fused three-dimensional image sets. This paper briefly reviews the basics of image registration and fusion and available clinical and small-animal multi-modality instrumentation.
Schwein, Adeline; Lu, Tony; Chinnadurai, Ponraj; Kitkungvan, Danai; Shah, Dipan J; Chakfe, Nabil; Lumsden, Alan B; Bismuth, Jean
2017-01-01
Endovascular recanalization is considered first-line therapy for chronic central venous occlusion (CVO). Unlike arteries, in which landmarks such as wall calcifications provide indirect guidance for endovascular navigation, sclerotic veins without known vascular branching patterns impose significant challenges. Therefore, safe wire access through such chronic lesions mostly relies on intuition and experience. Studies have shown that magnetic resonance venography (MRV) can be performed safely in these patients, and the boundaries of occluded veins may be visualized on specific MRV sequences. Intraoperative image fusion techniques have become more common to guide complex arterial endovascular procedures. The aim of this study was to assess the feasibility and utility of MRV and intraoperative cone-beam computed tomography (CBCT) image fusion during endovascular CVO recanalization. During the study period, patients with symptomatic CVO and failed standard endovascular recanalization underwent further recanalization attempts with intraoperative MRV image fusion guidance. After preoperative MRV and intraoperative CBCT image coregistration, a virtual centerline path of the occluded segment was electronically marked in MRV and overlaid on real-time two-dimensional fluoroscopy images. Technical success, fluoroscopy times, radiation doses, number of venograms before recanalization, and accuracy of the virtual centerline overlay were evaluated. Four patients underwent endovascular CVO recanalization with intraoperative MRV image fusion guidance. Mean (± standard deviation) time for image fusion was 6:36 ± 0:51 min:sec. The lesion was successfully crossed in all patients without complications. Mean fluoroscopy time for lesion crossing was 12.5 ± 3.4 minutes. Mean total fluoroscopy time was 28.8 ± 6.5 minutes. Mean total radiation dose was 15,185 ± 7747 μGy·m², and the mean radiation dose from CBCT acquisition was 2788 ± 458 μGy·m² (18% of the mean total radiation dose). The mean number of venograms before recanalization was 1.6 ± 0.9, and two lesions were crossed without any prior venography. On qualitative analysis, virtual centerlines from MRV were aligned with the actual guidewire trajectory on fluoroscopy in all four cases. MRV image fusion is feasible and may improve success, safety, and the surgeon's confidence during CVO recanalization. As with arterial interventions, three-dimensional MRV imaging and image fusion techniques could foster innovative solutions for such complex venous interventions and have the potential to benefit a great number of patients. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Schwein, Adeline; Chinnadurai, Ponraj; Shah, Dipan J; Lumsden, Alan B; Bechara, Carlos F; Bismuth, Jean
2017-05-01
Three-dimensional image fusion of preoperative computed tomography (CT) angiography with fluoroscopy using intraoperative noncontrast cone-beam CT (CBCT) has been shown to improve endovascular procedures by reducing procedure length, radiation dose, and contrast media volume. However, patients with a contraindication to CT angiography (renal insufficiency, iodinated contrast allergy) may not benefit from this image fusion technique. The primary objective of this study was to evaluate the feasibility of magnetic resonance angiography (MRA) and fluoroscopy image fusion using noncontrast CBCT as a guidance tool during complex endovascular aortic procedures, especially in patients with renal insufficiency. All endovascular aortic procedures done under MRA image fusion guidance at a single center were retrospectively reviewed. The patients had moderate to severe renal insufficiency and underwent diagnostic contrast-enhanced magnetic resonance imaging after gadolinium or ferumoxytol injection. Relevant vascular landmarks electronically marked in MRA images were overlaid on real-time two-dimensional fluoroscopy for image guidance, after image fusion with noncontrast intraoperative CBCT. Technical success, time for image registration, procedure time, fluoroscopy time, number of digital subtraction angiography (DSA) acquisitions before stent deployment or vessel catheterization, and renal function before and after the procedure were recorded. The image fusion accuracy was qualitatively evaluated on a binary scale by three physicians after review of image data showing virtual landmarks from MRA on fluoroscopy. Between November 2012 and March 2016, 10 patients underwent endovascular procedures for aortoiliac aneurysmal disease or aortic dissection using MRA image fusion guidance. All procedures were technically successful. A paired t-test analysis showed no difference between pre-imaging and postoperative renal function (P = .6). The mean time required for MRA-CBCT image fusion was 4:09 ± 01:31 min:sec. Total fluoroscopy time was 20.1 ± 6.9 minutes. Five of 10 patients (50%) underwent stent graft deployment without any predeployment DSA acquisition. Three of six vessels (50%) were cannulated under image fusion guidance without any precannulation DSA runs, and the remaining vessels were cannulated after one planning DSA acquisition. Qualitative evaluation showed 14 of 22 virtual landmarks (63.6%) from MRA overlaid on fluoroscopy were completely accurate, without the need for adjustment. Five of eight incorrect virtual landmarks (iliac and visceral arteries) resulted from vessel deformation caused by endovascular devices. Ferumoxytol- or gadolinium-enhanced MRA imaging and image fusion with fluoroscopy using noncontrast CBCT is feasible and allows patients with renal insufficiency to benefit from optimal guidance during complex endovascular aortic procedures, while preserving their residual renal function. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Bhateja, Vikrant; Moin, Aisha; Srivastava, Anuja; Bao, Le Nguyen; Lay-Ekuakille, Aimé; Le, Dac-Nhuong
2016-07-01
Computer-based diagnosis of Alzheimer's disease can be performed through analysis of the functional and structural changes in the brain. Multispectral image fusion combines complementary information while discarding redundant information to achieve a single image that encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer's disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT, followed by dimensionality reduction using a modified Principal Component Analysis algorithm on the low-frequency coefficients. Further, the high-frequency coefficients are enhanced using a non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: phase congruency is applied to the low-frequency coefficients, and a combination of directive contrast and normalized Shannon entropy is applied to the high-frequency coefficients. The superiority of the fusion response is demonstrated by comparisons with other state-of-the-art fusion approaches (in terms of various fusion metrics).
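As a rough illustration of this kind of multiscale fusion pipeline, the sketch below substitutes PyWavelets' 2-D stationary wavelet transform for the NSCT and simple average / max-absolute rules for the paper's phase-congruency and directive-contrast rules; it is a stand-in under those stated substitutions, not the authors' method.

```python
# Multiscale fusion sketch with stated substitutions: a stationary wavelet
# transform replaces NSCT, an average rule replaces phase congruency on
# the low-pass band, and a max-absolute rule replaces directive contrast
# on the detail bands. Inputs are assumed to be the Y channel after an
# RGB -> YIQ conversion.
import numpy as np
import pywt

def fuse_pair(img_a, img_b, wavelet="db2", levels=3):
    ca = pywt.swt2(img_a, wavelet, level=levels)
    cb = pywt.swt2(img_b, wavelet, level=levels)
    fused = []
    for (aA, (hA, vA, dA)), (aB, (hB, vB, dB)) in zip(ca, cb):
        approx = 0.5 * (aA + aB)                       # low-pass rule
        details = tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                        for x, y in ((hA, hB), (vA, vB), (dA, dB)))
        fused.append((approx, details))
    return pywt.iswt2(fused, wavelet)

a = np.random.rand(128, 128)    # e.g. MRI-derived Y channel (stand-in)
b = np.random.rand(128, 128)    # e.g. PET-derived Y channel (stand-in)
print(fuse_pair(a, b).shape)    # (128, 128)
```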
Rouabah, K; Varoquaux, A; Caporossi, J M; Louis, G; Jacquier, A; Bartoli, J M; Moulin, G; Vidal, V
2016-11-01
The purpose of this study was to assess the feasibility and utility of image fusion (Easy-TIPS) obtained from pre-procedure CT angiography and per-procedure real-time fluoroscopy for portal vein puncture during transjugular intrahepatic portosystemic shunt (TIPS) placement. Eighteen patients (15 men, 3 women) with a mean age of 63 years (range: 48-81 years; median age, 65 years) were included in the study. All patients underwent TIPS placement by two groups of radiologists (one group with <3 years of experience and one with ≥3 years) using fusion imaging obtained from three-dimensional computed tomography angiography of the portal vein and real-time fluoroscopic images of the portal vein. Image fusion was used to guide the portal vein puncture during TIPS placement. At the end of the procedure, the interventional radiologists evaluated the utility of fusion imaging for portal vein puncture during TIPS placement. Mismatch between the three-dimensional computed tomography angiography and real-time fluoroscopic images of the portal vein on image fusion was quantitatively analyzed. CT post-processing time, number of puncture attempts, total radiation exposure, and radiation from the retrograde portography were also recorded. Image fusion was considered useful for portal vein puncture in 13/18 TIPS procedures (72%). The mean post-processing time to obtain fusion images was 16.4 minutes. 3D volume-rendered CT angiography images were strictly superimposed on direct portography in 10/18 procedures (56%). The mean mismatch was 0.69 cm in height and 0.28 cm laterally. A mean of 4.6 portal vein puncture attempts was made; eight patients required fewer than three attempts. The mean radiation dose from retrograde portography was 421.2 dGy·cm², corresponding to a mean additional exposure of 19%. Fusion imaging from pre-procedural CT angiography is feasible and safe and makes portal puncture easier during TIPS placement. Copyright © 2016 Editions françaises de radiologie. Published by Elsevier Masson SAS. All rights reserved.
Image Fusion and 3D Roadmapping in Endovascular Surgery.
Jones, Douglas W; Stangenberg, Lars; Swerdlow, Nicholas J; Alef, Matthew; Lo, Ruby; Shuja, Fahad; Schermerhorn, Marc L
2018-05-21
Practitioners of endovascular surgery have historically utilized two-dimensional (2D) intraoperative fluoroscopic imaging, with intra-vascular contrast opacification, to treat complex three-dimensional (3D) pathology. Recently, major technical developments in intraoperative imaging have made image fusion techniques possible: the creation of a 3D patient-specific vascular roadmap based on preoperative imaging which aligns with intraoperative fluoroscopy, with many potential benefits. First, a 3D model is segmented from preoperative imaging, typically a CT scan. The model is then used to plan for the procedure, with placement of specific markers and storing of C-arm angles that will be used for intra-operative guidance. At the time of the procedure, an intraoperative cone-beam CT is performed and the 3D model is registered to the patient's on-table anatomy. Finally, the system is used for live guidance where the 3D model is codisplayed overlying fluoroscopic images. Copyright © 2018. Published by Elsevier Inc.
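The final co-display step amounts to projecting the registered 3D model into the fluoroscopic image. A minimal sketch under an idealized pinhole C-arm model follows; the intrinsics, pose, and mesh below are illustrative assumptions, not a vendor calibration.

```python
# Minimal sketch of the overlay step: project registered 3-D model
# vertices through an idealized pinhole C-arm onto fluoroscopy pixel
# coordinates. All geometry values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
K = np.array([[1200.0, 0.0, 512.0],      # focal length / principal point (px)
              [0.0, 1200.0, 512.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                             # C-arm rotation from registration
t = np.array([0.0, 0.0, 800.0])           # table-to-detector offset (mm)

def project(points_3d):
    """Map Nx3 model points (mm) to Nx2 fluoroscopy pixel coordinates."""
    cam = points_3d @ R.T + t             # world -> C-arm coordinates
    uvw = cam @ K.T                       # pinhole projection
    return uvw[:, :2] / uvw[:, 2:3]       # perspective divide

aorta_vertices = rng.normal(size=(1000, 3)) * 20.0   # stand-in mesh vertices
pixels = project(aorta_vertices)
print(pixels.min(axis=0), pixels.max(axis=0))        # overlay extent (px)
```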
Chen, Yuanbo; Li, Hulin; Wu, Dingtao; Bi, Keming; Liu, Chunxiao
2014-12-01
Construction of a three-dimensional (3D) model of the renal tumor facilitated surgical planning and imaging guidance via manual image fusion in laparoscopic partial nephrectomy (LPN) for intrarenal tumors. Fifteen patients with intrarenal tumors underwent LPN between January and December 2012. Computed tomography-based reconstruction of the 3D models of renal tumors was performed using Mimics 12.1 software. Surgical planning was performed through morphometry and multi-angle visual views of the tumor model. Two-step manual image fusion superimposed 3D model images onto 2D laparoscopic images. The image fusion was verified by intraoperative ultrasound. Imaging-guided laparoscopic hilar clamping and tumor excision were performed. Manual fusion time, patient demographics, surgical details, and postoperative treatment parameters were analyzed. The reconstructed 3D tumor models accurately represented the patients' physiological anatomical landmarks. The surgical planning markers were marked successfully. Manual image fusion was flexible and feasible, with a fusion time of 6 min (5-7 min). All surgeries were completed laparoscopically. The median tumor excision time was 5.4 min (3.5-10 min), whereas the median warm ischemia time was 25.5 min (16-32 min). Twelve patients (80%) demonstrated renal cell carcinoma on final pathology, and all surgical margins were negative. No tumor recurrence was detected after a median follow-up of 1 year (3-15 months). The surgical planning and two-step manual image fusion based on a 3D model of the renal tumor facilitated visible-imaging-guided tumor resection with negative margins in LPN for intrarenal tumors. It is promising and moves us one step closer to imaging-guided surgery.
Compressive hyperspectral and multispectral imaging fusion
NASA Astrophysics Data System (ADS)
Espitia, Óscar; Castillo, Sergio; Arguello, Henry
2016-05-01
Image fusion is a valuable framework that combines two or more images of the same scene from one or multiple sensors, improving the resolution of the images and increasing their interpretable content. In remote sensing, a common fusion problem consists of merging hyperspectral (HS) and multispectral (MS) images; traditional approaches involve large amounts of redundant data and ignore the highly correlated structure of the datacube along the spatial and spectral dimensions. Compressive HS and MS systems compress the spectral data in the acquisition step, reducing data redundancy through different sampling patterns. This work presents a compressed HS and MS image fusion approach that uses a high-dimensional joint sparse model. The joint sparse model is formulated by combining the HS and MS compressive acquisition models. The high spectral and spatial resolution image is reconstructed using sparse optimization algorithms. Different fusion spectral image scenarios are used to explore the performance of the proposed scheme. Several simulations with synthetic and real datacubes show promising results, as a reliable reconstruction of a high spectral and spatial resolution image can be achieved using as little as 50% of the datacube.
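A toy one-dimensional sketch of the joint sparse model: compressive HS and MS measurements are stacked into one linear system and a sparse code is recovered with ISTA. Operator sizes, the sparsity level, and the regularization weight are illustrative assumptions.

```python
# Toy sketch of the joint sparse fusion model: stack compressive HS and
# MS measurements y = [A_h; A_m] x and recover the sparse code x with
# ISTA (soft-thresholded gradient steps). Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 256                                           # fused-signal dimension
x_true = np.zeros(n)
x_true[rng.choice(n, 10, replace=False)] = rng.normal(size=10)

A_h = rng.normal(size=(64, n)) / np.sqrt(64)      # HS sampling operator
A_m = rng.normal(size=(96, n)) / np.sqrt(96)      # MS sampling operator
A = np.vstack([A_h, A_m])
y = A @ x_true                                    # joint measurement vector

def ista(A, y, lam=0.01, iters=500):
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L           # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

x_hat = ista(A, y)
print("relative error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```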
Development of position measurement unit for flying inertial fusion energy target
NASA Astrophysics Data System (ADS)
Tsuji, R.; Endo, T.; Yoshida, H.; Norimatsu, T.
2016-03-01
We report the present status of the development of a position measurement unit (PMU) for a flying inertial fusion energy (IFE) target. The PMU, which exploits the Arago spot phenomenon, is designed to have a measurement accuracy better than 1 μm. By employing divergent, pulsed orthogonal laser beam illumination, we can measure the time of the pulsed illumination and the target position at that instant. The two-dimensional Arago spot image is compressed into a one-dimensional image by a cylindrical lens for real-time processing. PMUs are set along the injection path of the flying target. The local positions of the target in each PMU are transferred to the controller and analysed to calculate the target trajectory. Two methods are presented to calculate the arrival time and the arrival position of the target at the reactor centre.
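A sketch of the position estimate implied by this design: the 2-D spot image is collapsed to a 1-D profile (optically, this is what the cylindrical lens does) and the spot is located by an intensity centroid. The synthetic bright spot stands in for a recorded Arago pattern, and the pixel pitch is an assumed value.

```python
# Sketch of the real-time position estimate: collapse the 2-D spot image
# to a 1-D profile and locate the bright spot by an intensity centroid.
# The Gaussian spot and the pixel pitch are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
pix = 0.5                                    # assumed pixel pitch, micrometres
yy, xx = np.mgrid[0:512, 0:512]
image = np.exp(-((xx - 261.3)**2 + (yy - 250.0)**2) / (2 * 4.0**2))
image += 0.05 * rng.random((512, 512))       # detector noise floor

profile = image.sum(axis=0)                  # the 2-D -> 1-D compression
w = profile - np.median(profile)             # remove background pedestal
w[w < 0.2 * w.max()] = 0.0                   # keep only the bright spot
x_est = (np.arange(w.size) * w).sum() / w.sum()
print(f"target x = {x_est * pix:.2f} um (ground truth 130.65 um)")
```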
Anand, Rishi; Gorev, Maxim V; Poghosyan, Hermine; Pothier, Lindsay; Matkins, John; Kotler, Gregory; Moroz, Sarah; Armstrong, James; Nemtsov, Sergei V; Orlov, Michael V
2016-08-01
To compare the efficacy and accuracy of rotational angiography with three-dimensional reconstruction (3DATG) image merged with electro-anatomical mapping (EAM) vs. CT-EAM. A prospective, randomized, parallel, two-center study conducted in 36 patients (25 men, age 65 ± 10 years) undergoing atrial fibrillation (AF) ablation (33% paroxysmal, 67% persistent) guided by 3DATG (group 1) vs. CT (group 2) image fusion with EAM. 3DATG was performed on the Philips Allura Xper FD 10 system. Procedural characteristics including time, radiation exposure, outcome, and navigation accuracy were compared between the two groups. There was no significant difference between the groups in total procedure duration or in time spent on the various procedural steps. Minor differences in procedural characteristics were present between the two centers. Segmentation and fusion time for 3DATG-EAM or CT-EAM was short and similar at both centers. Navigation accuracy with either method was high and did not depend on left atrial size. Maintenance of sinus rhythm did not differ between the two groups up to 24 months of follow-up. This study did not find 3DATG-EAM image merging superior to CT-EAM fusion for guiding AF ablation. Both merging techniques result in similar navigation accuracy.
Multisensor fusion for 3D target tracking using track-before-detect particle filter
NASA Astrophysics Data System (ADS)
Moshtagh, Nima; Romberg, Paul M.; Chan, Moses W.
2015-05-01
This work presents a novel fusion mechanism for estimating the three-dimensional trajectory of a moving target using images collected by multiple imaging sensors. The proposed projective particle filter avoids explicit target detection prior to fusion. In the projective particle filter, particles that represent the posterior density (of the target state in a high-dimensional space) are projected onto the lower-dimensional observation space. Measurements are generated directly in the observation space (image plane) and a marginal (sensor) likelihood is computed. The particle states and their weights are updated using the joint likelihood computed from all the sensors. The 3D state estimate of the target (system track) is then generated from the states of the particles. This approach is similar to track-before-detect particle filters, which are known to perform well in tracking dim and stealthy targets in image collections. Our approach extends the track-before-detect approach to 3D tracking using the projective particle filter. The performance of this measurement-level fusion method is compared with that of a track-level fusion algorithm using the projective particle filter. In the track-level fusion algorithm, the 2D sensor tracks are generated separately and transmitted to a fusion center, where they are treated as measurements to the state estimator. The 2D sensor tracks are then fused to reconstruct the system track. A realistic synthetic scenario with a boosting target was generated and used to study the performance of the fusion mechanisms.
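A toy sketch of the measurement-level fusion step described above: each 3-D particle is projected into every sensor's image plane, a per-sensor likelihood is read at the projected pixel, and the joint (product) likelihood re-weights the particle. The orthographic cameras and intensity-based likelihood are illustrative assumptions, not the paper's sensor models.

```python
# Toy projective-particle-filter update: project 3-D particles into two
# sensor images, read intensity-based likelihoods at the projected
# pixels, and fuse them multiplicatively. All models are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n_part = 5000
particles = rng.normal(0.0, 5.0, size=(n_part, 3))   # 3-D position states
target = np.array([1.5, -2.0, 1.0])

def project(points, drop_axis):
    """Orthographic toy camera: drop one world axis, scale to pixels."""
    keep = [i for i in range(3) if i != drop_axis]
    return points[..., keep] * 2.0 + 32.0

yy, xx = np.mgrid[0:64, 0:64]
views, images = [0, 1], []
for axis in views:                                    # render noisy frames
    u, v = project(target, axis)
    img = 0.05 * rng.random((64, 64))                 # clutter floor
    img += 0.6 * np.exp(-((xx - u)**2 + (yy - v)**2) / (2 * 2.0**2))
    images.append(img)

weights = np.ones(n_part)
for axis, img in zip(views, images):                  # measurement fusion
    uv = np.clip(project(particles, axis).astype(int), 0, 63)
    weights *= img[uv[:, 1], uv[:, 0]]                # per-sensor likelihood
weights /= weights.sum()
# Weighted mean lands near the true target [1.5, -2.0, 1.0]; residual
# clutter pulls it slightly toward the prior mean.
print("fused 3-D estimate:", weights @ particles)
```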
Multispectral image fusion for illumination-invariant palmprint recognition.
Lu, Longbin; Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng
2017-01-01
Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is built based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion constructs the fusion coefficients at the decomposition level so that the images are correctly separated in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has a fast learning speed and satisfactory recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. For testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% under unsatisfactory lighting conditions.
The neutron imaging diagnostic at NIF (invited).
Merrill, F E; Bower, D; Buckles, R; Clark, D D; Danly, C R; Drury, O B; Dzenitis, J M; Fatherley, V E; Fittinghoff, D N; Gallegos, R; Grim, G P; Guler, N; Loomis, E N; Lutz, S; Malone, R M; Martinson, D D; Mares, D; Morley, D J; Morgan, G L; Oertel, J A; Tregillis, I L; Volegov, P L; Weiss, P B; Wilde, C H; Wilson, D C
2012-10-01
A neutron imaging diagnostic has recently been commissioned at the National Ignition Facility (NIF). This new system is an important diagnostic tool for inertial fusion studies at the NIF for measuring the size and shape of the burning DT plasma during the ignition stage of Inertial Confinement Fusion (ICF) implosions. The imaging technique utilizes a pinhole neutron aperture placed between the neutron source and a neutron detector. The detection system measures the two-dimensional distribution of neutrons passing through the pinhole. This diagnostic has been designed to collect two images at two times. The long flight path for this diagnostic, 28 m, results in a chromatic separation of the neutrons, allowing the independently timed images to measure the source distribution for two neutron energies. Typically, the first image measures the distribution of the 14 MeV neutrons and the second that of the 6-12 MeV neutrons. The combination of these two images has provided data on the size and shape of the burning plasma within the compressed capsule, as well as a measure of the quantity and spatial distribution of the cold fuel surrounding this core.
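The chromatic separation exploited here is plain relativistic kinematics; a short worked example for the quoted 28 m flight path:

```python
# Worked example of the chromatic separation used by the diagnostic:
# relativistic time of flight over the quoted 28 m path for 14 MeV
# (primary) and 6-12 MeV (downscattered) neutrons.
import math

C = 2.998e8          # speed of light, m/s
MN = 939.565         # neutron rest mass, MeV
L = 28.0             # flight path, m

def tof(E_mev, path=L):
    gamma = 1.0 + E_mev / MN
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return path / (beta * C)

for E in (14.0, 12.0, 6.0):
    print(f"{E:5.1f} MeV -> {tof(E) * 1e9:6.1f} ns")
# 14 MeV neutrons arrive ~547 ns after emission; 6 MeV ~830 ns. Gating
# the detector a few hundred ns later thus selects the downscattered image.
```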
NASA Astrophysics Data System (ADS)
Volegov, P. L.; Danly, C. R.; Fittinghoff, D.; Geppert-Kleinrath, V.; Grim, G.; Merrill, F. E.; Wilde, C. H.
2017-11-01
Neutron, gamma-ray, and x-ray imaging are important diagnostic tools at the National Ignition Facility (NIF) for measuring the two-dimensional (2D) size and shape of the neutron-producing region, for probing the remaining ablator, and for measuring the extent of the DT plasma during the stagnation phase of Inertial Confinement Fusion implosions. Due to the difficulty and expense of building these imagers, at most only a few two-dimensional projection images will be available to reconstruct the three-dimensional (3D) sources. In this paper, we present a technique that has been developed for the 3D reconstruction of neutron, gamma-ray, and x-ray sources from a minimal number of 2D projections using spherical harmonics decomposition. We present the detailed algorithms used for this characterization and the results of reconstructed sources from experimental neutron and x-ray data collected at OMEGA and NIF.
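In our notation (a sketch of the idea, not necessarily the authors' exact formulation), the decomposition can be written as:

```latex
% Expand the 3-D emissivity in spherical harmonics and fit the low-order
% radial coefficients to the few available 2-D projections.
\[
  S(r,\theta,\phi) \;=\; \sum_{\ell=0}^{\ell_{\max}} \sum_{m=-\ell}^{\ell}
  s_{\ell m}(r)\, Y_{\ell m}(\theta,\phi),
\]
\[
  P_k(u,v) \;=\; \int_{\mathrm{LOS}_k} S\big(\mathbf{x}(u,v,t)\big)\, dt,
  \qquad k = 1,\dots,K.
\]
% Each projection P_k is linear in the coefficients s_{lm}(r), so a small
% number K of views constrains the expansion up to a cutoff l_max set by K.
```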
Biwasaka, Hitoshi; Saigusa, Kiyoshi; Aoki, Yasuhiro
2005-03-01
In this study, the applicability of holography to the 3-dimensional recording of forensic objects such as skulls and mandibulae, and the accuracy of the reconstructed 3-D images, were examined. The virtual holographic image, which records the 3-dimensional data of the original object, is visually observed on the other side of the holographic plate and reproduces the 3-dimensional shape of the object well. Another type of holographic image, the real image, is focused on a frosted glass screen, and cross-sectional images of the object can be observed. When measuring the distances between anatomical reference points using image-processing software, the average deviations in the holographic images as compared with the actual objects were less than 0.1 mm. Therefore, holography could be useful as a 3-dimensional recording method for forensic objects. Two superimposition systems using holographic images were examined. In the 2D-3D system, the transparent virtual holographic image of an object is directly superimposed onto the digitized photograph of the same object on the LCD monitor. In the video system, on the other hand, the holographic image captured by the CCD camera is superimposed onto the digitized photographic image using a personal computer. We found that the discrepancy between the outlines of the superimposed holographic and photographic dental images was smaller with the video system than with the 2D-3D system. Holography seemed to perform comparably to the computer graphics system; however, fusion with digital techniques would expand the utility of holography in superimposition.
Hyperspectral Super-Resolution of Locally Low Rank Images From Complementary Multisource Data.
Veganzones, Miguel A; Simoes, Miguel; Licciardi, Giorgio; Yokoya, Naoto; Bioucas-Dias, Jose M; Chanussot, Jocelyn
2016-01-01
Remote sensing hyperspectral images (HSIs) are quite often low rank, in the sense that the data belong to a low-dimensional subspace/manifold. This has recently been exploited for the fusion of low spatial resolution HSI with high spatial resolution multispectral images in order to obtain super-resolution HSI. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods decreases, mainly because the underlying sparse regression problem is severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real-world HSIs are locally low rank, that is, pixels acquired from a given spatial neighborhood span a very low-dimensional subspace/manifold, i.e., of dimension lower than or equal to the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough that the problem is no longer ill-posed. We propose two alternative approaches to define the hyperspectral super-resolution through local dictionary learning using endmember induction algorithms. We also explore two alternatives to define the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semi-real data.
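A minimal numerical sketch of the locally-low-rank premise: each spatial patch of a synthetic hyperspectral cube is reshaped to pixels × bands and truncated at the number of MS bands, and the tiny residual illustrates why the per-patch regression stays well-posed. Real fusion would regress the MS data onto these local subspaces rather than merely truncating.

```python
# Sketch of the locally-low-rank assumption: build a cube whose patches
# each mix only 3 of 5 endmember spectra, then show that a rank-4
# (= number of MS bands) truncation of every patch loses almost nothing.
import numpy as np

rng = np.random.default_rng(3)
H, W, B, n_ms, patch = 64, 64, 100, 4, 16
endmembers = rng.random((5, B))                       # 5 spectral signatures

cube = np.empty((H, W, B))
for i in range(0, H, patch):
    for j in range(0, W, patch):
        idx = rng.choice(5, size=3, replace=False)    # 3 materials per patch
        a = rng.dirichlet(np.ones(3), size=(patch, patch))
        cube[i:i+patch, j:j+patch] = a @ endmembers[idx]
cube += 0.01 * rng.normal(size=cube.shape)            # sensor noise

errs = []
for i in range(0, H, patch):
    for j in range(0, W, patch):
        X = cube[i:i+patch, j:j+patch].reshape(-1, B)
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        Xr = (U[:, :n_ms] * s[:n_ms]) @ Vt[:n_ms]     # rank-n_ms projection
        errs.append(np.linalg.norm(X - Xr) / np.linalg.norm(X))
print(f"mean per-patch rank-{n_ms} relative error: {np.mean(errs):.4f}")
```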
Towards Omni-Tomography—Grand Fusion of Multiple Modalities for Simultaneous Interior Tomography
Wang, Ge; Zhang, Jie; Gao, Hao; Weir, Victor; Yu, Hengyong; Cong, Wenxiang; Xu, Xiaochen; Shen, Haiou; Bennett, James; Furth, Mark; Wang, Yue; Vannier, Michael
2012-01-01
We recently elevated interior tomography from its origin in computed tomography (CT) to a general tomographic principle, and proved its validity for other tomographic modalities including SPECT, MRI, and others. Here we propose "omni-tomography", a novel concept for the grand fusion of multiple tomographic modalities for simultaneous data acquisition in a region of interest (ROI). Omni-tomography can be instrumental when the physiological processes under investigation are multi-dimensional, multi-scale, multi-temporal, and multi-parametric. Both preclinical and clinical studies now depend on in vivo tomography, often requiring separate evaluations by different imaging modalities. Over the past decade, two approaches have been used for multimodality fusion: software-based image registration and hybrid scanners such as PET-CT, PET-MRI, and SPECT-CT among others. While there are intrinsic limitations with both approaches, the main obstacle to the seamless fusion of multiple imaging modalities has been the bulkiness of each individual imager and the conflict of their physical (especially spatial) requirements. To address this challenge, omni-tomography is now unveiled as an emerging direction for biomedical imaging and systems biomedicine.
Volegov, P. L.; Danly, C. R.; Merrill, F. E.; ...
2015-11-24
The neutron imaging system at the National Ignition Facility is an important diagnostic tool for measuring the two-dimensional size and shape of the source of neutrons produced in the burning deuterium-tritium plasma during the stagnation phase of inertial confinement fusion implosions. Few two-dimensional projections of neutron images are available to reconstruct the three-dimensional neutron source. In our paper, we present a technique that has been developed for the 3D reconstruction of neutron and x-ray sources from a minimal number of 2D projections. Here, we present the detailed algorithms used for this characterization and the results of reconstructed sources from experimental data collected at Omega.
An efficient classification method based on principal component and sparse representation.
Zhai, Lin; Fu, Shujun; Zhang, Caiming; Liu, Yunxian; Wang, Lu; Liu, Guohua; Yang, Mingqiang
2016-01-01
As an important application in optical imaging, palmprint recognition is affected by many unfavorable factors. An effective fusion of blockwise bi-directional two-dimensional principal component analysis and grouping sparse classification is presented. Dimension reduction and normalization are implemented by the blockwise bi-directional two-dimensional principal component analysis on palmprint images to extract feature matrices, which are assembled into an overcomplete dictionary for sparse classification. A subspace orthogonal matching pursuit algorithm is designed to solve the grouping sparse representation. Finally, the classification result is obtained by comparing the residuals between the testing and reconstructed images. Experiments are carried out on a palmprint database, and the results show that this method is more robust against position and illumination changes in palmprint images and achieves a higher palmprint recognition rate.
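A sketch of the two stages under stated simplifications: bidirectional 2DPCA compresses each image to a small feature matrix, and a nearest-residual rule over training features stands in for the paper's grouping-sparse orthogonal matching pursuit.

```python
# Sketch with stated simplifications: bidirectional 2DPCA reduces each
# image to a k x k feature matrix; a test image is assigned to the class
# whose training features reconstruct it with the least residual (a
# simplification of the paper's grouping-sparse OMP classifier).
import numpy as np

rng = np.random.default_rng(4)

def bd2dpca(images, k=8):
    """Row/column projection bases from the 2-D image covariances."""
    mean = images_mean = sum(images) / len(images)
    col_cov = sum((im - mean).T @ (im - mean) for im in images)
    row_cov = sum((im - mean) @ (im - mean).T for im in images)
    W = np.linalg.eigh(col_cov)[1][:, -k:]        # right (column) basis
    V = np.linalg.eigh(row_cov)[1][:, -k:]        # left (row) basis
    return V, W

def sample(cls, n=20):
    """Toy 'palmprints': two classes with different texture frequencies."""
    yy, xx = np.mgrid[0:32, 0:32]
    base = np.sin(xx / (2.0 + cls)) + np.cos(yy / (3.0 - cls))
    return [base + 0.3 * rng.normal(size=(32, 32)) for _ in range(n)]

train = {c: sample(c) for c in (0, 1)}
V, W = bd2dpca([im for ims in train.values() for im in ims])
feats = {c: [V.T @ im @ W for im in ims] for c, ims in train.items()}

test = sample(1, n=1)[0]
f = V.T @ test @ W
pred = min(feats, key=lambda c: min(np.linalg.norm(f - g) for g in feats[c]))
print("predicted class:", pred)                   # expected: 1
```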
Rolls, A E; Maurel, B; Davis, M; Constantinou, J; Hamilton, G; Mastracci, T M
2016-09-01
Fusion of three-dimensional (3D) computed tomography and intraoperative two-dimensional imaging in endovascular surgery relies on manual rigid co-registration of bony landmarks and tracking of hardware to provide a 3D overlay (hardware-based tracking, HWT). An alternative technique (image-based tracking, IMT) uses image recognition to register and place the fusion mask. We present preliminary experience with an agnostic fusion technology that uses IMT, with the aim of comparing its accuracy of overlay with that of HWT. Data were collected prospectively for 12 patients. All devices were deployed using both IMT and HWT fusion assistance concurrently. Postoperative analysis of both systems was performed by three blinded expert observers, from selected time-points during the procedures, using the displacement of fusion rings, the overlay of vascular markings, and the true ostia of the renal arteries. The mean overlay error and the deviation from the mean error were derived using image analysis software. The mean overlay error was compared between IMT and HWT. The validity of the point-picking technique was assessed. IMT was successful in all of the first 12 cases, whereas technical learning-curve challenges thwarted HWT in four cases. When independent operators assessed the degree of accuracy of the overlay, the median error for IMT was 3.9 mm (IQR 2.89-6.24, max 9.5) versus 8.64 mm (IQR 6.1-16.8, max 24.5) for HWT (p = .001). Variance per observer was 0.69 mm² and the 95% limit of agreement ±1.63. In this preliminary study, the magnitude of displacement error from the "true anatomy" during image overlay was smaller for IMT than for HWT. This confirms that ongoing manual re-registration, as recommended by the manufacturer, should be performed for HWT systems to maintain accuracy. The error in position of the fusion markers for IMT was consistent, and thus may be considered predictable. Copyright © 2016 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
Study on polarization image methods in turbid medium
NASA Astrophysics Data System (ADS)
Fu, Qiang; Mo, Chunhe; Liu, Boyu; Duan, Jin; Zhang, Su; Zhu, Yong
2014-11-01
In addition to conventional intensity information, polarization imaging detection technology provides multi-dimensional polarization information, improving the probability of target detection and recognition. For targets imaged in turbid media, fusion of polarization images helps to obtain high-quality images. Using laser polarization imaging at visible wavelengths, linear polarization intensities were obtained by rotating the angle of a polaroid, and the polarization parameters of targets were acquired in turbid media with concentrations ranging from 5% to 10%. Image fusion processing was then introduced: the acquired polarization images were fused with several different methods, the fusion methods offering superior performance for turbid media were identified, and the processing results were presented together with tables of analysis data. Pixel-level, feature-level, and decision-level fusion algorithms were applied to the degree-of-linear-polarization (DOLP) images. The results show that as the polarization angle increases, the polarization images become increasingly blurred and their quality degrades, whereas the fused images show clearly improved contrast over any single image; finally, the reasons for the contrast improvement of the fused polarized-light images are analyzed.
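The DOLP images fused above are computed from polarizer-angle intensity frames via the standard linear Stokes relations; a minimal sketch follows, with synthetic frames standing in for the turbid-medium measurements.

```python
# Sketch of computing a DOLP image from intensity frames captured at
# polarizer angles 0/45/90/135 degrees (standard Stokes relations).
# Synthetic frames stand in for the turbid-medium measurements.
import numpy as np

rng = np.random.default_rng(5)
shape = (128, 128)
I0, I45, I90, I135 = (np.clip(0.5 + 0.2 * rng.normal(size=shape), 0, None)
                      for _ in range(4))

S0 = 0.5 * (I0 + I45 + I90 + I135)       # total intensity
S1 = I0 - I90                             # horizontal vs vertical component
S2 = I45 - I135                           # +45 vs -45 degree component
dolp = np.sqrt(S1**2 + S2**2) / np.maximum(S0, 1e-9)
print("mean DOLP:", dolp.mean())
```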
Yu, Yao; Zhang, Wen-Bo; Liu, Xiao-Jing; Guo, Chuan-Bin; Yu, Guang-Yan; Peng, Xin
2017-06-01
The purpose of this study was to describe new technology assisted by 3-dimensional (3D) image fusion of 18F-fluorodeoxyglucose (FDG)-positron emission tomography (PET)/computed tomography (CT) and contrast-enhanced CT (CECT) for computer planning of a maxillectomy of recurrent maxillary squamous cell carcinoma and defect reconstruction. Treatment of recurrent maxillary squamous cell carcinoma usually includes tumor resection and free flap reconstruction. FDG-PET/CT provided images of regions of abnormal glucose uptake and thus showed metabolic tumor volume to guide tumor resection. CECT data were used to create 3D reconstructed images of vessels to show the vascular diameters and locations, so that the most suitable vein and artery could be selected during anastomosis of the free flap. The data from preoperative maxillofacial CECT scans and FDG-PET/CT imaging were imported into the navigation system (iPlan 3.0; Brainlab, Feldkirchen, Germany). Three-dimensional image fusion between FDG-PET/CT and CECT was accomplished using Brainlab software by aligning the positions of the skull in the CECT and PET/CT images. After verification of the image fusion accuracy, the 3D reconstruction images of the metabolic tumor, vessels, and other critical structures could be visualized within the same coordinate system. These sagittal, coronal, axial, and 3D reconstruction images were used to determine the virtual osteotomy sites and the reconstruction plan, which was provided to the surgeon and used for surgical navigation. The average shift of the 3D image fusion between FDG-PET/CT and CECT was less than 1 mm. This technique, by clearly showing the metabolic tumor volume and the most suitable vessels for anastomosis, facilitated resection and reconstruction of recurrent maxillary squamous cell carcinoma. We used 3D image fusion of FDG-PET/CT and CECT to successfully accomplish resection and reconstruction of recurrent maxillary squamous cell carcinoma. This method has the potential to improve the clinical outcomes of these challenging procedures. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Shi, Chaoyang; Tercero, Carlos; Ikeda, Seiichi; Ooe, Katsutoshi; Fukuda, Toshio; Komori, Kimihiro; Yamamoto, Kiyohito
2012-09-01
It is desirable to reduce aortic stent graft installation time and the amount of contrast media used in this process. Guidance with augmented reality can achieve this by facilitating alignment of the stent graft with the renal and mesenteric arteries. For this purpose, a sensor fusion is proposed between intravascular ultrasound (IVUS) and magnetic trackers to construct three-dimensional virtual reality models of the blood vessels, together with improvements to the gradient vector flow snake for boundary detection in ultrasound images. In vitro vasculature imaging experiments were done with a hybrid probe and silicone models of the vasculature. The dispersion of samples for the magnetic tracker in the hybrid probe increased by less than 1 mm when the IVUS was activated. Three-dimensional models of the descending thoracic aorta, with an average cross-section radius error of 0.94 mm, were built from the data fusion. The development of this technology will enable reduction in the amount of contrast media required for in vivo and real-time three-dimensional blood vessel imaging. Copyright © 2012 John Wiley & Sons, Ltd.
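The core of the proposed IVUS-tracker fusion is lifting each frame's 2-D lumen contour into 3-D with the tracked probe pose; a sketch with synthetic poses and contours (all values illustrative, not the authors' data):

```python
# Sketch of the IVUS + magnetic-tracker fusion step: lumen contour points
# from each 2-D IVUS frame are lifted into world coordinates with that
# frame's tracked pose (rotation R, translation t); stacked frames form a
# 3-D vessel point cloud. Poses and contours below are synthetic.
import numpy as np

rng = np.random.default_rng(6)
angles = np.linspace(0, 2 * np.pi, 60, endpoint=False)

cloud = []
for k in range(50):                                  # 50 frames of pullback
    radius = 12.0 + 0.5 * rng.normal()               # lumen radius, mm
    contour = np.stack([radius * np.cos(angles),
                        radius * np.sin(angles),
                        np.zeros_like(angles)], axis=1)  # in-plane points
    theta = 0.02 * k                                 # gentle vessel curvature
    R = np.array([[np.cos(theta), 0, np.sin(theta)],
                  [0, 1, 0],
                  [-np.sin(theta), 0, np.cos(theta)]])   # tracker rotation
    t = np.array([0.0, 0.0, 1.0 * k])                # 1 mm pullback step
    cloud.append(contour @ R.T + t)                  # frame -> world
cloud = np.concatenate(cloud)
print("vessel point cloud:", cloud.shape)            # (3000, 3)
```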
Muscatello, Christopher M.; Domier, Calvin W.; Hu, Xing; ...
2014-08-13
Here, quasi-optical imaging at sub-THz frequencies has had a major impact on fusion plasma diagnostics. Mm-wave imaging reflectometry utilizes microwaves to actively probe fusion plasmas, inferring the local properties of electron density fluctuations. Electron cyclotron emission imaging is a multichannel radiometer that passively measures the spontaneous emission of microwaves from the plasma to infer local properties of electron temperature fluctuations. These imaging diagnostics work together to diagnose the characteristics of turbulence. Important quantities such as the amplitude and wavenumber of coherent fluctuations, correlation lengths and decorrelation times of turbulence, and the poloidal flow velocity of the plasma are readily inferred.
Inertial confinement fusion quarterly report, October--December 1992. Volume 3, No. 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dixit, S.N.
1992-12-31
This report contains papers on the following topics: The Beamlet Front End: Prototype of a new pulse generation system; imaging biological objects with x-ray lasers; coherent XUV generation via high-order harmonic generation in rare gases; theory of high-order harmonic generation; two-dimensional computer simulations of ultra-intense, short-pulse laser-plasma interactions; neutron detectors for measuring the fusion burn history of ICF targets; the recirculator; and LASNEX evolves to exploit computer industry advances.
Focus measure method based on the modulus of the gradient of the color planes for digital microscopy
NASA Astrophysics Data System (ADS)
Hurtado-Pérez, Román; Toxqui-Quitl, Carina; Padilla-Vivanco, Alfonso; Aguilar-Valdez, J. Félix; Ortega-Mendoza, Gabriel
2018-02-01
The modulus of the gradient of the color planes (MGC) is implemented to transform multichannel information into a grayscale image. This digital technique is used in two applications: (a) focus measurement during the autofocusing (AF) process and (b) extending the depth of field (EDoF) by means of multifocus image fusion. In the first case, the MGC procedure is based on an edge detection technique and is implemented in over 15 focus metrics that are typically used in digital microscopy. The MGC approach is tested on color images of histological sections for the selection of in-focus images. An appealing attribute of all the AF metrics working in the MGC space is their monotonic behavior even up to a magnification of 100×. An advantage of the MGC method is its computational simplicity and inherent parallelism. In the second application, a multifocus image fusion algorithm based on the MGC approach has been implemented on graphics processing units (GPUs). The resulting fused images are evaluated using a no-reference image quality metric. The proposed fusion method yields a high-quality image independent of faulty illumination during image acquisition. Finally, the three-dimensional visualization of the in-focus image is shown.
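A minimal sketch of the MGC map and its use as a focus measure follows; combining channels by the per-pixel maximum is one plausible reading of the operator, and the box-blurred random image stands in for a defocused frame of a focal stack.

```python
# Sketch of an MGC-style focus measure: combine per-channel gradient
# magnitudes into one grayscale map and use its mean as the focus score.
# The max-across-channels combination and the blur stand-in are assumptions.
import numpy as np

def mgc(rgb):
    """Modulus-of-gradient map of an HxWx3 float image."""
    grads = []
    for c in range(rgb.shape[2]):
        gy, gx = np.gradient(rgb[:, :, c])
        grads.append(np.sqrt(gx**2 + gy**2))
    return np.max(grads, axis=0)          # strongest edge response per pixel

def focus_measure(rgb):
    return mgc(rgb).mean()

rng = np.random.default_rng(7)
sharp = rng.random((64, 64, 3))
blurred = sharp.copy()
for _ in range(5):                        # crude box blur as defocus stand-in
    blurred = 0.25 * (np.roll(blurred, 1, 0) + np.roll(blurred, -1, 0)
                      + np.roll(blurred, 1, 1) + np.roll(blurred, -1, 1))
print(focus_measure(sharp) > focus_measure(blurred))   # True: sharper wins
```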
Baco, Eduard; Ukimura, Osamu; Rud, Erik; Vlatkovic, Ljiljana; Svindland, Aud; Aron, Manju; Palmer, Suzanne; Matsugasumi, Toru; Marien, Arnaud; Bernhard, Jean-Christophe; Rewcastle, John C; Eggesbø, Heidi B; Gill, Inderbir S
2015-04-01
Prostate biopsies targeted by elastic fusion of magnetic resonance (MR) and three-dimensional (3D) transrectal ultrasound (TRUS) images may allow accurate identification of the index tumor (IT), defined as the lesion with the highest Gleason score or the largest volume or extraprostatic extension. To determine the accuracy of MR-TRUS image-fusion biopsy in characterizing ITs, as confirmed by correlation with step-sectioned radical prostatectomy (RP) specimens. Retrospective analysis of 135 consecutive patients who sequentially underwent pre-biopsy MR, MR-TRUS image-fusion biopsy, and robotic RP at two centers between January 2010 and September 2013. Image-guided biopsies of MR-suspected IT lesions were performed with tracking via real-time 3D TRUS. The largest geographically distinct cancer focus (IT lesion) was independently registered on step-sectioned RP specimens. A validated schema comprising 27 regions of interest was used to identify the IT center location on MR images and in RP specimens, as well as the location of the midpoint of the biopsy trajectory, and variables were correlated. The concordance between IT location on biopsy and RP specimens was 95% (128/135). The coefficient for correlation between IT volume on MRI and histology was r=0.663 (p<0.001). The maximum cancer core length on biopsy was weakly correlated with RP tumor volume (r=0.466, p<0.001). The concordance of primary Gleason pattern between targeted biopsy and RP specimens was 90% (115/128; κ=0.76). The study limitations include retrospective evaluation of a selected patient population, which limits the generalizability of the results. Use of MR-TRUS image fusion to guide prostate biopsies reliably identified the location and primary Gleason pattern of the IT lesion in >90% of patients, but showed limited ability to predict cancer volume, as confirmed by step-sectioned RP specimens. Biopsies targeted using magnetic resonance images combined with real-time three-dimensional transrectal ultrasound allowed us to reliably identify the spatial location of the most important tumor in prostate cancer and characterize its aggressiveness. Copyright © 2014 European Association of Urology. Published by Elsevier B.V. All rights reserved.
Application of Virtual Navigation with Multimodality Image Fusion in Foramen Ovale Cannulation.
Qiu, Xixiong; Liu, Weizong; Zhang, Mingdong; Lin, Hengzhou; Zhou, Shoujun; Lei, Yi; Xia, Jun
2017-11-01
Idiopathic trigeminal neuralgia (ITN) can be effectively treated with radiofrequency thermocoagulation. However, this procedure requires cannulation of the foramen ovale, and conventional cannulation methods are associated with high failure rates. Multimodality imaging can improve the accuracy of cannulation because each imaging method can compensate for the drawbacks of the other. We aim to determine the feasibility and accuracy of percutaneous foramen ovale cannulation under the guidance of virtual navigation with multimodality image fusion in a self-designed anatomical model of human cadaveric heads. Five cadaveric head specimens were investigated in this study. Spiral computed tomography (CT) scanning clearly displayed the foramen ovale in all five specimens (10 foramina), which could not be visualized using two-dimensional ultrasound alone. The ultrasound and spiral CT images were fused, and percutaneous cannulation of the foramen ovale was performed under virtual navigation. After this, spiral CT scanning was immediately repeated to confirm the accuracy of the cannulation. Postprocedural spiral CT confirmed that the ultrasound and CT images had been successfully fused for all 10 foramina, which were accurately and successfully cannulated. The success rates of both image fusion and cannulation were 100%. Virtual navigation with multimodality image fusion can substantially facilitate foramen ovale cannulation and is worthy of clinical application. © 2017 American Academy of Pain Medicine. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Face-iris multimodal biometric scheme based on feature level fusion
NASA Astrophysics Data System (ADS)
Huo, Guang; Liu, Yuanning; Zhu, Xiaodong; Dong, Hongxing; He, Fei
2015-11-01
Unlike score level fusion, feature level fusion demands that the features extracted from the unimodal traits have high distinguishability as well as homogeneity and compatibility, which is difficult to achieve. Therefore, most multimodal biometric research focuses on score level fusion, whereas few studies investigate feature level fusion. We propose a face-iris recognition method based on feature level fusion. We build a special two-dimensional Gabor filter bank to extract local texture features from face and iris images, and then transform them by histogram statistics into an energy-orientation variance histogram feature with lower dimensionality and higher distinguishability. Finally, through a fusion-recognition strategy based on principal component analysis and support vector machines (FRSPS), feature level fusion and one-to-n identification are accomplished. The experimental results demonstrate that this method can not only effectively extract face and iris features but also provide higher recognition accuracy. Compared with some state-of-the-art fusion methods, the proposed method has a significant performance advantage.
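As an illustration of the texture-extraction stage described above, the sketch below builds a small bank of 2-D Gabor kernels, filters an image, and summarizes the per-orientation response energies into a normalized feature vector. All parameters (kernel size, sigma, frequency, number of orientations) are illustrative assumptions, not values from the paper, and the histogram-statistics step is reduced to a single energy per orientation.

```python
# Hedged sketch: 2-D Gabor bank -> per-orientation energies as a feature
# vector. Parameter values are assumptions for illustration only.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, sigma=3.0, freq=0.25, size=15):
    """Real part of a 2-D Gabor kernel at orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * freq * xr)

def energy_orientation_feature(img, n_orient=8):
    """Response energies over n_orient Gabor orientations, normalized."""
    feats = []
    for k in range(n_orient):
        resp = convolve2d(img, gabor_kernel(k * np.pi / n_orient), mode='same')
        feats.append(np.mean(resp**2))            # energy at this orientation
    v = np.asarray(feats)
    return v / (np.linalg.norm(v) + 1e-12)        # normalized for fusion

face_patch = np.random.rand(64, 64)               # stand-in for a face image
iris_patch = np.random.rand(64, 64)               # stand-in for an iris image
fused = np.concatenate([energy_orientation_feature(face_patch),
                        energy_orientation_feature(iris_patch)])
print(fused.shape)                                # (16,): feature-level fusion
```

In the paper this feature vector would then feed the PCA + SVM stage (FRSPS); here the plain concatenation of the two normalized vectors stands in for the feature-level fusion step.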
Gkioulekas, Eleftherios
2016-09-01
Using the fusion-rules hypothesis for three-dimensional and two-dimensional Navier-Stokes turbulence, we generalize a previous nonperturbative locality proof to multiple applications of the nonlinear interactions operator on generalized structure functions of velocity differences. We call this generalization of nonperturbative locality to multiple applications of the nonlinear interactions operator "multilocality." The resulting cross terms pose a new challenge requiring a new argument and the introduction of a new fusion rule that takes advantage of rotational symmetry. Our main result is that the fusion-rules hypothesis implies both locality and multilocality in both the IR and UV limits for the downscale energy cascade of three-dimensional Navier-Stokes turbulence and the downscale enstrophy cascade and inverse energy cascade of two-dimensional Navier-Stokes turbulence. We stress that these claims relate to nonperturbative locality of generalized structure functions on all orders and not the term-by-term perturbative locality of diagrammatic theories or closure models that involve only two-point correlation and response functions.
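For reference, the generalized structure functions referred to above are conventionally defined as multi-point correlations of velocity differences; the formula below uses our own notation and suppresses the tensorial (component) indices. The fusion-rules hypothesis then prescribes how such correlations scale when subsets of the point pairs are "fused", i.e., brought much closer together than the outer separations.

```latex
% Generalized structure function of order n (notation ours; component
% indices suppressed), with velocity differences
% w(x, x', t) = u(x, t) - u(x', t):
\[
  F_n\big(\{\mathbf{x}_k, \mathbf{x}'_k\}_{k=1}^{n}, t\big)
  \;=\;
  \Big\langle \prod_{k=1}^{n} w(\mathbf{x}_k, \mathbf{x}'_k, t) \Big\rangle .
\]
```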
NASA Astrophysics Data System (ADS)
Poggio, Andrew J.
1988-10-01
This issue of Energy and Technology Review contains: Neutron Penumbral Imaging of Laser-Fusion Targets--using our new penumbral-imaging diagnostic, we have obtained the first images that can be used to measure directly the deuterium-tritium burn region in laser-driven fusion targets; Computed Tomography for Nondestructive Evaluation--various computed tomography systems and computational techniques are used in nondestructive evaluation; Three-Dimensional Image Analysis for Studying Nuclear Chromatin Structure--we have developed an optic-electronic system for acquiring cross-sectional views of cell nuclei, and computer codes to analyze these images and reconstruct the three-dimensional structures they represent; Imaging in the Nuclear Test Program--advanced techniques produce images of unprecedented detail and resolution from Nevada Test Site data; and Computational X-Ray Holography--visible-light experiments and numerically simulated holograms test our ideas about an X-ray microscope for biological research.
PET and MRI image fusion based on combination of 2-D Hilbert transform and IHS method.
Haddadpour, Mozhdeh; Daneshvar, Sabalan; Seyedarabi, Hadi
2017-08-01
The process of medical image fusion combines two or more medical images, such as a magnetic resonance image (MRI) and a positron emission tomography (PET) image, and maps them to a single fused image. The purpose of our study is to assist physicians in diagnosing and treating disease in the least possible time. We used MRI and PET scans as input images and fused them based on a combination of the two-dimensional Hilbert transform (2-D HT) and the intensity-hue-saturation (IHS) method. Three common evaluation metrics were applied: discrepancy (Dk) to assess spectral features, average gradient (AGk) to assess spatial features, and overall performance (O.P) to verify the suitability of the proposed method. Since the main purpose of medical image fusion is to preserve both the spatial and spectral features of the input images, the numerical results for these metrics, together with the simulation results, indicate that the proposed method preserves both. Copyright © 2017 Chang Gung University. Published by Elsevier B.V. All rights reserved.
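As a sketch of the intensity-substitution core of IHS fusion, the code below uses one common linear IHS transform, replaces the intensity channel of a pseudo-colored PET slice with the registered MRI slice, and inverts the transform. The paper additionally employs a 2-D Hilbert transform stage, which is omitted here, and the arrays are random stand-ins for co-registered data.

```python
# Minimal IHS intensity-substitution fusion sketch (one common linear IHS
# variant; the paper's 2-D Hilbert transform stage is omitted).
import numpy as np

M = np.array([[1/3, 1/3, 1/3],
              [-np.sqrt(2)/6, -np.sqrt(2)/6, np.sqrt(2)/3],
              [1/np.sqrt(2), -1/np.sqrt(2), 0.0]])   # RGB -> IHS matrix

def ihs_fuse(pet_rgb, mri_gray):
    """Replace the intensity channel of the PET color image with MRI."""
    ihs = pet_rgb.reshape(-1, 3) @ M.T
    ihs[:, 0] = mri_gray.ravel()                     # substitute intensity
    return (ihs @ np.linalg.inv(M).T).reshape(pet_rgb.shape)

pet = np.random.rand(128, 128, 3)   # stand-in for a colorized PET slice
mri = np.random.rand(128, 128)      # stand-in for the registered MRI slice
fused = ihs_fuse(pet, mri)
print(fused.shape)                  # (128, 128, 3)
```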
NASA Astrophysics Data System (ADS)
Chaa, Mourad; Boukezzoula, Naceur-Eddine; Attia, Abdelouahab
2017-01-01
Two types of scores extracted from two-dimensional (2-D) and three-dimensional (3-D) palmprints for personal recognition systems are merged, and a local image descriptor for 2-D palmprint-based recognition, named bank of binarized statistical image features (B-BSIF), is introduced. The main idea of B-BSIF is that the histograms extracted from the binarized statistical image features (BSIF) code images (the results of applying BSIF descriptors of different sizes with length 12) are concatenated into one to produce a large feature vector. The 3-D palmprint contains the depth information of the palm surface. The self-quotient image (SQI) algorithm is applied to reconstruct illumination-invariant 3-D palmprint images. To extract discriminative Gabor features from the SQI images, Gabor wavelets are defined and used. Dimensionality reduction methods have shown their value in biometric systems; accordingly, a principal component analysis (PCA) + linear discriminant analysis (LDA) technique is employed. For the matching process, the cosine Mahalanobis distance is applied. Extensive experiments were conducted on a 2-D and 3-D palmprint database with 10,400 range images from 260 individuals. A comparison was then made between the proposed algorithm and other existing methods in the literature. The results clearly show that the proposed framework provides a higher correct recognition rate. Furthermore, the best results were obtained by merging the score of the B-BSIF descriptor with the score of the SQI + Gabor wavelets + PCA + LDA method, yielding an equal error rate of 0.00% and a rank-1 recognition rate of 100.00%.
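The code-then-histogram idea behind B-BSIF can be sketched as follows. Genuine BSIF filters are learned from natural image patches by independent component analysis; the random filters below are explicit stand-ins, and the filter sizes and count are assumptions rather than the paper's settings.

```python
# Sketch of B-BSIF: binarize filter-bank responses into a per-pixel code,
# histogram the codes, and concatenate histograms across filter sizes.
# Random filters stand in for the ICA-learned BSIF filters.
import numpy as np
from scipy.signal import convolve2d

def bsif_histogram(img, filters):
    """Code image with one bit per filter, summarized as a histogram."""
    code = np.zeros(img.shape, dtype=np.int32)
    for bit, f in enumerate(filters):
        resp = convolve2d(img, f, mode='same')
        code += (resp > 0).astype(np.int32) << bit
    n_codes = 2 ** len(filters)
    hist, _ = np.histogram(code, bins=n_codes, range=(0, n_codes))
    return hist / hist.sum()

def b_bsif_feature(img, sizes=(3, 5, 7), n_filters=8, seed=0):
    rng = np.random.default_rng(seed)
    parts = []
    for s in sizes:                                  # one bank per size
        bank = [rng.standard_normal((s, s)) for _ in range(n_filters)]
        parts.append(bsif_histogram(img, bank))
    return np.concatenate(parts)                     # the large fused vector

palm = np.random.rand(64, 64)                        # stand-in 2-D palmprint
print(b_bsif_feature(palm).shape)                    # (768,) = 3 x 256 bins
```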
Makino, Yuki; Imai, Yasuharu; Igura, Takumi; Hori, Masatoshi; Fukuda, Kazuto; Sawai, Yoshiyuki; Kogita, Sachiyo; Fujita, Norihiko; Takehara, Tetsuo; Murakami, Takamichi
2015-01-01
To assess the feasibility of fusion of pre- and post-ablation gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid-enhanced magnetic resonance imaging (Gd-EOB-DTPA-MRI) to evaluate the effects of radiofrequency ablation (RFA) of hepatocellular carcinoma (HCC), compared with similarly fused CT images. This retrospective study included 67 patients with 92 HCCs treated with RFA. Fusion images of pre- and post-RFA dynamic CT and of pre- and post-RFA Gd-EOB-DTPA-MRI were created using a rigid registration method. The minimal ablative margin measured on fusion imaging was categorized into three groups: (1) tumor protruding outside the ablation zone boundary, (2) ablative margin 0-<5.0 mm beyond the tumor boundary, and (3) ablative margin ≥5.0 mm beyond the tumor boundary. The categorization of minimal ablative margins was compared between CT and MR fusion images. In 57 (62.0%) HCCs, treatment evaluation was possible on both CT and MR fusion images, and the overall agreement between them for the categorization of minimal ablative margin was good (κ coefficient = 0.676, P < 0.01). MR fusion imaging enabled treatment evaluation in a significantly larger number of HCCs than CT fusion imaging (86/92 [93.5%] vs. 62/92 [67.4%], P < 0.05). Fusion of pre- and post-ablation Gd-EOB-DTPA-MRI is feasible for treatment evaluation after RFA. It may enable accurate treatment evaluation in cases where CT fusion imaging is not helpful.
Zhang, Dongxia; Gan, Yangzhou; Xiong, Jing; Xia, Zeyang
2017-02-01
A complete three-dimensional (3D) tooth model provides essential information to assist orthodontists in diagnosis and treatment planning. Currently, 3D tooth models are mainly obtained by segmentation and reconstruction from dental computed tomography (CT) images. However, the accuracy of a 3D tooth model reconstructed from dental CT images is low and not applicable for Invisalign design. Another serious problem is that repeated dental CT scans during different intervals of orthodontic treatment expose the patients to radiation. Hence, this paper proposes a method to reconstruct the tooth model based on fusion of dental CT images and laser-scanned images. A complete 3D tooth model was reconstructed by registration and fusion of the root, reconstructed from dental CT images, and the crown, reconstructed from laser-scanned images. The crown of the complete 3D tooth model reconstructed with the proposed method has higher accuracy. Moreover, in order to reconstruct a complete 3D tooth model for each orthodontic treatment interval, only one pre-treatment CT scan is needed; during the orthodontic treatment process only laser scanning is required. Therefore, radiation to the patients can be reduced significantly.
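At the heart of the crown-root registration is a rigid transform between the laser-scanned crown and its CT counterpart. The paper does not detail its solver, so the following is a generic least-squares (Kabsch/SVD) fit on corresponding points, the building block that iterative schemes such as ICP repeat with re-estimated correspondences; all point data here are synthetic.

```python
# Hedged sketch: closed-form rigid registration (Kabsch/SVD) between two
# corresponding 3-D point sets, as one building block of crown-root fusion.
import numpy as np

def rigid_fit(src, dst):
    """Least-squares R, t such that dst ~ src @ R.T + t."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, dc - R @ sc

rng = np.random.default_rng(0)
crown_ct = rng.random((30, 3))                       # toy CT crown points
a = np.deg2rad(20.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
crown_scan = crown_ct @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_fit(crown_ct, crown_scan)
print(np.allclose(crown_ct @ R.T + t, crown_scan))   # True
```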
Goudeketting, Seline R; Heinen, Stefan G H; Ünlü, Çağdaş; van den Heuvel, Daniel A F; de Vries, Jean-Paul P M; van Strijen, Marco J; Sailer, Anna M
2017-08-01
To systematically review and meta-analyze the added value of 3-dimensional (3D) image fusion technology in endovascular aortic repair for its potential to reduce contrast media volume, radiation dose, procedure time, and fluoroscopy time. Electronic databases were systematically searched for studies published between January 2010 and March 2016 that included a control group describing 3D fusion imaging in endovascular aortic procedures. Two independent reviewers assessed the methodological quality of the included studies and extracted data on iodinated contrast volume, radiation dose, procedure time, and fluoroscopy time. The contrast use for standard and complex endovascular aortic repairs (fenestrated, branched, and chimney) was pooled using a random-effects model; outcomes are reported as the mean difference with 95% confidence intervals (CIs). Seven studies, 5 retrospective and 2 prospective, involving 921 patients were selected for analysis. The methodological quality of the studies was moderate (median 17, range 15-18). The use of fusion imaging led to an estimated mean reduction in iodinated contrast of 40.1 mL (95% CI 16.4 to 63.7, p=0.002) for standard procedures and a mean 70.7 mL (95% CI 44.8 to 96.6, p<0.001) for complex repairs. Secondary outcome measures were not pooled because of potential bias in nonrandomized data, but radiation doses, procedure times, and fluoroscopy times were lower, although not always significantly, in the fusion group in 6 of the 7 studies. Compared with the control group, 3D fusion imaging is associated with a significant reduction in the volume of contrast employed for standard and complex endovascular aortic procedures, which can be particularly important in patients with renal failure. Radiation doses, procedure times, and fluoroscopy times were reduced when 3D fusion was used.
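To make the pooling step concrete, the sketch below implements DerSimonian-Laird random-effects pooling of per-study mean differences. The review does not state which random-effects estimator was used, so DerSimonian-Laird is an assumption, and the study values are invented for illustration, not taken from the meta-analysis.

```python
# Illustrative DerSimonian-Laird random-effects pooling of mean differences;
# the estimator choice is an assumption and the inputs are made-up numbers.
import numpy as np

def random_effects_pool(means, ses):
    """Pooled mean difference and 95% CI under a random-effects model."""
    w = 1.0 / np.square(ses)                  # fixed-effect weights
    fixed = np.sum(w * means) / np.sum(w)
    q = np.sum(w * (means - fixed) ** 2)      # Cochran's Q heterogeneity
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(means) - 1)) / c)   # between-study variance
    w_star = 1.0 / (np.square(ses) + tau2)
    pooled = np.sum(w_star * means) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

means = np.array([35.0, 52.0, 28.0, 61.0])    # contrast saved per study (mL)
ses = np.array([9.0, 14.0, 8.0, 18.0])        # standard errors (invented)
print(random_effects_pool(means, ses))
```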
Novel cooperative neural fusion algorithms for image restoration and image fusion.
Xia, Youshen; Kamel, Mohamed S
2007-02-01
To deal with the problem of restoring degraded images with non-Gaussian noise, this paper proposes a novel cooperative neural fusion regularization (CNFR) algorithm for image restoration. Compared with conventional regularization algorithms for image restoration, the proposed CNFR algorithm can relax the need to estimate the optimal regularization parameter. Furthermore, to enhance the quality of restored images, this paper presents a cooperative neural fusion (CNF) algorithm for image fusion. Compared with existing signal-level image fusion algorithms, the proposed CNF algorithm can greatly reduce the loss of contrast information under blind Gaussian noise environments. The performance analysis shows that the two proposed neural fusion algorithms converge globally to the robust and optimal image estimate. Simulation results confirm that, in different noise environments, the two proposed neural fusion algorithms can obtain a better image estimate than several well-known image restoration and image fusion methods.
Kashimura, Hiroshi; Ogasawara, Kuniaki; Arai, Hiroshi; Beppu, Takaaki; Inoue, Takashi; Takahashi, Tsutomu; Matsuda, Koichi; Takahashi, Yujiro; Fujiwara, Shunrou; Ogawa, Akira
2008-09-01
A fusion technique for magnetic resonance (MR) angiography and MR imaging was developed to help assess the peritumoral angioarchitecture during surgical planning for meningioma. Three-dimensional time-of-flight (3D-TOF) and 3D-spoiled gradient recalled (SPGR) datasets were obtained from 10 patients with intracranial meningioma, and fused using newly developed volume registration and visualization software. Maximum intensity projection (MIP) images from 3D-TOF MR angiography and axial SPGR MR imaging were displayed at the same time on the monitor. Selecting a vessel on the real-time MIP image indicated the corresponding points on the axial image automatically. Fusion images showed displacement of the anterior cerebral or middle cerebral artery in 7 patients and encasement of the anterior cerebral arteries in 1 patient, with no relationship between the main arterial trunk and tumor in 2 patients. Fusion of MR angiography and MR imaging can clarify relationships between the intracranial vasculature and meningioma, and may be helpful for surgical planning for meningioma.
Neumann, Jan-Oliver; Giese, Henrik; Biller, Armin; Nagel, Armin M; Kiening, Karl
2015-01-01
Magnetic resonance imaging (MRI) is replacing computed tomography (CT) as the main imaging modality for stereotactic transformations. MRI is prone to spatial distortion artifacts, which can lead to inaccuracy in stereotactic procedures. Modern MRI systems provide distortion correction algorithms that may ameliorate this problem. This study investigates the different options of distortion correction using standard 1.5-, 3- and 7-tesla MRI scanners. A phantom was mounted on a stereotactic frame. One CT scan and three MRI scans were performed. At all three field strengths, two 3-dimensional sequences, volumetric interpolated breath-hold examination (VIBE) and magnetization-prepared rapid acquisition with gradient echo, were acquired, and automatic distortion correction was performed. Global stereotactic transformation of all 13 datasets was performed and two stereotactic planning workflows (MRI only vs. CT/MR image fusion) were subsequently analysed. Distortion correction on the 1.5- and 3-tesla scanners caused a considerable reduction in positional error. The effect was more pronounced when using the VIBE sequences. By using co-registration (CT/MR image fusion), even a lower positional error could be obtained. In ultra-high-field (7 T) MR imaging, distortion correction introduced even higher errors. However, the accuracy of non-corrected 7-tesla sequences was comparable to CT/MR image fusion 3-tesla imaging. MRI distortion correction algorithms can reduce positional errors by up to 60%. For stereotactic applications of utmost precision, we recommend a co-registration to an additional CT dataset. © 2015 S. Karger AG, Basel.
V S, Unni; Mishra, Deepak; Subrahmanyam, G R K S
2016-12-01
The need for image fusion in current image processing systems is increasing mainly due to the increased number and variety of image acquisition techniques. Image fusion is the process of combining substantial information from several sensors using mathematical techniques in order to create a single composite image that will be more comprehensive and thus more useful for a human operator or other computer vision tasks. This paper presents a new approach to multifocus image fusion based on sparse signal representation. Block-based compressive sensing integrated with a projection-driven compressive sensing (CS) recovery that encourages sparsity in the wavelet domain is used as a method to get the focused image from a set of out-of-focus images. Compression is achieved during the image acquisition process using a block compressive sensing method. An adaptive thresholding technique within the smoothed projected Landweber recovery process reconstructs high-resolution focused images from low-dimensional CS measurements of out-of-focus images. The discrete wavelet transform and the dual-tree complex wavelet transform are used as the sparsifying bases for the proposed fusion. The main finding lies in the fact that sparsification enables a better selection of the fusion coefficients and hence better fusion. A Laplacian mixture model is fitted in the wavelet domain, and estimation of the probability density function (pdf) parameters by expectation maximization leads to the proper selection of the coefficients of the fused image. Compared with the same fusion scheme without the projected Landweber (PL) step and with other existing CS-based fusion approaches, the proposed method performs better even with fewer samples.
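The recovery loop can be sketched as a Landweber gradient step on the measurement fit followed by soft-thresholding in a sparsifying transform. An orthonormal DCT stands in for the paper's discrete and dual-tree complex wavelets, and the step size, threshold, and toy data are all assumptions.

```python
# Toy smoothed-projected-Landweber-style loop: gradient step + transform-
# domain soft-threshold. DCT stands in for the paper's wavelet bases.
import numpy as np
from scipy.fft import dctn, idctn

def spl_recover(y, A, shape, n_iter=100, tau=0.02, step=0.1):
    """Recover an image x (flattened) from CS measurements y = A @ x."""
    x = A.T @ y                                    # back-projected start
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)           # Landweber step
        c = dctn(x.reshape(shape), norm='ortho')   # sparsifying transform
        c = np.sign(c) * np.maximum(np.abs(c) - tau, 0.0)   # soft threshold
        x = idctn(c, norm='ortho').ravel()
    return x.reshape(shape)

rng = np.random.default_rng(0)
shape = (16, 16)
x_true = np.zeros(shape); x_true[4:8, 4:8] = 1.0   # simple toy image
A = rng.standard_normal((128, 256)) / np.sqrt(128) # random sensing matrix
y = A @ x_true.ravel()                             # 128 measurements of 256
x_hat = spl_recover(y, A, shape)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```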
Spectral edge: gradient-preserving spectral mapping for image fusion.
Connah, David; Drew, Mark S; Finlayson, Graham D
2015-12-01
This paper describes a novel approach to image fusion for color display. Our goal is to generate an output image whose gradient matches that of the input as closely as possible. We achieve this using a constrained contrast mapping paradigm in the gradient domain, where the structure tensor of a high-dimensional gradient representation is mapped exactly to that of a low-dimensional gradient field which is then reintegrated to form an output. Constraints on output colors are provided by an initial RGB rendering. Initially, we motivate our solution with a simple "ansatz" (educated guess) for projecting higher-D contrast onto color gradients, which we expand to a more rigorous theorem to incorporate color constraints. The solution to these constrained optimizations is closed-form, allowing for simple and hence fast and efficient algorithms. The approach can map any N-D image data to any M-D output and can be used in a variety of applications using the same basic algorithm. In this paper, we focus on the problem of mapping N-D inputs to 3D color outputs. We present results in five applications: hyperspectral remote sensing, fusion of color and near-infrared or clear-filter images, multilighting imaging, dark flash, and color visualization of magnetic resonance imaging diffusion-tensor imaging.
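The central constraint can be restated compactly; the notation below is ours, not necessarily the paper's. At each pixel, write the N-band input's gradients as the rows of a Jacobian J_H and the M-band output's gradients as J_O; the mapping is chosen so that the 2x2 structure tensors agree exactly before the gradient field is reintegrated into an image.

```latex
% Gradient-matching constraint (our notation): J_H in R^{N x 2} and J_O in
% R^{M x 2} are the per-pixel Jacobians of the input and output images.
\[
  J_O^{\top} J_O \;=\; J_H^{\top} J_H \;\in\; \mathbb{R}^{2 \times 2},
\]
% so the output reproduces the input's first-order contrast in every
% direction, subject to the color constraints from the initial RGB rendering.
```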
NASA Astrophysics Data System (ADS)
Czarski, T.; Chernyshova, M.; Pozniak, K. T.; Kasprowicz, G.; Byszuk, A.; Juszczyk, B.; Wojenski, A.; Zabolotny, W.; Zienkiewicz, P.
2015-12-01
A measurement system based on a gas electron multiplier (GEM) detector has been developed for X-ray diagnostics of magnetic confinement fusion plasmas. The triple gas electron multiplier (T-GEM) is presented as a soft X-ray (SXR) energy- and position-sensitive detector. The paper focuses on the measurement subject and describes the fundamental data processing needed to obtain reliable characteristics (histograms) useful for physicists; it thus covers the software layer of the project, between the electronic hardware and the physics applications. The project is original and was developed by the authors. The multi-channel measurement system and the essential data processing for X-ray energy and position recognition are considered. Several modes of data acquisition, determined by hardware and software processing, are introduced. Typical measurement issues are discussed with a view to enhancing data quality. The primary version, based on a 1-D GEM detector, was applied to the high-resolution X-ray crystal spectrometer KX1 in the JET tokamak. The current version considers 2-D detector structures, initially for investigation purposes. Two detector structures, with single-pixel sensors and multi-pixel (directional) sensors, are considered for two-dimensional X-ray imaging. Fundamental output characteristics are presented for one- and two-dimensional detector structures. Representative results for a reference source and tokamak plasma are demonstrated.
Fujioka, Shinsuke; Fujiwara, Takashi; Tanabe, Minoru; Nishimura, Hiroaki; Nagatomo, Hideo; Ohira, Shinji; Inubushi, Yuichi; Shiraga, Hiroyuki; Azechi, Hiroshi
2010-10-01
Ultrafast, two-dimensional x-ray imaging is an important diagnostic for inertial fusion energy research, especially for investigating implosion dynamics at the final stage of fuel compression. Although x-ray radiography has been applied to observing implosion dynamics, intense x-rays emitted from the high-temperature, dense fuel core itself are often superimposed on the radiograph. This problem can be solved by coupling x-ray radiography with a monochromatic x-ray imaging technique. In the experiment, 2.8 or 5.2 keV backlight x-rays emitted from laser-irradiated polyvinyl chloride or vanadium foils were selectively imaged by spherically bent quartz crystals, discriminating against the out-of-band emission from the fuel core. This x-ray radiography system achieved spatial and temporal resolutions of 24 μm and 100 ps, respectively.
Recognizing human activities using appearance metric feature and kinematics feature
NASA Astrophysics Data System (ADS)
Qian, Huimin; Zhou, Jun; Lu, Xinbiao; Wu, Xinye
2017-05-01
The problem of automatically recognizing human activities from videos through the fusion of the two most important cues, an appearance metric feature and a kinematics feature, is considered, and a system of two-dimensional (2-D) Poisson equations is introduced to extract a more discriminative appearance metric feature. Specifically, the moving human blobs are first detected in the video by a background subtraction technique to form a binary image sequence, from which the appearance feature, designated the motion accumulation image, and the kinematics feature, termed the centroid instantaneous velocity, are extracted. Second, 2-D discrete Poisson equations are employed to reinterpret the motion accumulation image and produce a more differentiated Poisson silhouette image, from which the appearance feature vector is created through the dimension reduction technique called bidirectional 2-D principal component analysis, considering the balance between classification accuracy and time consumption. Finally, a cascaded classifier based on the nearest neighbor classifier and two directed acyclic graph support vector machine classifiers, integrated with the fusion of the appearance feature vector and the centroid instantaneous velocity vector, is applied to recognize the human activities. Experimental results on open databases and a homemade one confirm the recognition performance of the proposed algorithm.
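The Poisson reinterpretation step can be illustrated with a small iterative solver: inside the silhouette one solves a 2-D Poisson equation with a unit source and zero boundary values, so interior pixels far from the boundary receive large values. The Jacobi iteration, grid, and blob below are illustrative choices, not the paper's implementation.

```python
# Sketch: solve laplacian(u) = -1 inside a binary silhouette, u = 0 outside,
# by Jacobi iteration; u is the Poisson-reinterpreted silhouette image.
import numpy as np

def poisson_silhouette(mask, n_iter=500):
    """Poisson 'depth' image for a binary mask (Dirichlet zero boundary)."""
    u = np.zeros(mask.shape, dtype=float)
    inside = mask.astype(bool)
    for _ in range(n_iter):
        nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
              np.roll(u, 1, 1) + np.roll(u, -1, 1))  # 4-neighbor sum
        u = np.where(inside, (nb + 1.0) / 4.0, 0.0)  # Jacobi update
    return u

blob = np.zeros((40, 40))
blob[10:30, 15:25] = 1.0            # toy stand-in for a moving-human blob
depth = poisson_silhouette(blob)    # np.roll wrap is harmless: blob is
print(round(depth.max(), 2))        # away from borders, which stay zero
```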
Gui, Long; Jurgens, Eric M.; Ebner, Jamie L.
2015-01-01
ABSTRACT In order to deliver their genetic material to host cells during infection, enveloped viruses use specialized proteins on their surfaces that bind cellular receptors and induce fusion of the viral and host membranes. In paramyxoviruses, a diverse family of single-stranded RNA (ssRNA) viruses, including several important respiratory pathogens, such as parainfluenza viruses, the attachment and fusion machinery is composed of two separate proteins: a receptor binding protein (hemagglutinin-neuraminidase [HN]) and a fusion (F) protein that interact to effect membrane fusion. Here we used negative-stain and cryo-electron tomography to image the 3-dimensional ultrastructure of human parainfluenza virus 3 (HPIV3) virions in the absence of receptor engagement. We observed that HN exists in at least two organizations. The first were arrays of tetrameric HN that lacked closely associated F proteins: in these purely HN arrays, HN adopted a “heads-down” configuration. In addition, we observed regions of complex surface density that contained HN in an apparently extended “heads-up” form, colocalized with prefusion F trimers. This colocalization with prefusion F prior to receptor engagement supports a model for fusion in which HN in its heads-up state and F may interact prior to receptor engagement without activating F, and that interaction with HN in this configuration is not sufficient to activate F. Only upon receptor engagement by HN’s globular head does HN transmit its activating signal to F. PMID:25691596
Yeates, Todd O.; Padilla, Jennifer; Colovos, Chris
2004-06-29
Novel fusion proteins capable of self-assembling into regular structures, as well as nucleic acids encoding the same, are provided. The subject fusion proteins comprise at least two oligomerization domains rigidly linked together, e.g. through an alpha helical linking group. Also provided are regular structures comprising a plurality of self-assembled fusion proteins of the subject invention, and methods for producing the same. The subject fusion proteins find use in the preparation of a variety of nanostructures, where such structures include: cages, shells, double-layer rings, two-dimensional layers, three-dimensional crystals, filaments, and tubes.
Kamogawa, Junji; Kato, Osamu; Morizane, Tatsunori; Hato, Taizo
2015-01-01
There have been several imaging studies of cervical radiculopathy, but no three-dimensional (3D) images have shown the path, position, and pathological changes of the cervical nerve roots and spinal root ganglion relative to the cervical bony structure. The objective of this study was to introduce a technique that enables the virtual pathology of the nerve root to be assessed using 3D magnetic resonance (MR)/computed tomography (CT) fusion images that show the compression of the proximal portion of the cervical nerve root by both the herniated disc and the preforaminal or foraminal bony spur in patients with cervical radiculopathy. MR and CT images were obtained from three patients with cervical radiculopathy. 3D MR images were placed onto 3D CT images using a computer workstation. The entire nerve root could be visualized in 3D with or without the vertebrae. The most important characteristic evident on the images was flattening of the nerve root by a bony spur. The affected root was constricted at a pre-ganglion site. In cases of severe deformity, the flattened portion of the root seemed to change the angle of its path, resulting in a twisted condition. The 3D MR/CT fusion imaging technique enhances visualization of the pathoanatomy in the hidden area of the cervical spine composed of the root and the intervertebral foramen. This technique provides two distinct advantages for the diagnosis of cervical radiculopathy. First, the isolation of individual vertebrae clarifies the deformities of the whole root groove, including both the uncinate process and the superior articular process of the cervical spine. Second, the tortuous or twisted condition of a compressed root can be visualized. The surgeon can identify the narrowest face of the root by viewing the MR/CT fusion image from the posterolateral-inferior direction. Surgeons can use MR/CT fusion images as a pre-operative map and for intraoperative navigation. The MR/CT fusion images can also be used as educational materials for all hospital staff and for patients and patients' families who provide informed consent for treatments.
Brahme, Anders; Nyman, Peter; Skatt, Björn
2008-05-01
A four-dimensional (4D) laser camera (LC) has been developed for accurate patient imaging in diagnostic and therapeutic radiology. A complementary metal-oxide semiconductor camera images the intersection of a scanned fan-shaped laser beam with the surface of the patient and allows real-time recording of movements in a three-dimensional (3D) or four-dimensional (4D) format (3D + time). The LC system was first designed as an accurate patient setup tool for diagnostic and therapeutic applications but was found to be of much wider applicability as a general 4D photon "tag" for the surface of the patient in different clinical procedures. It is presently used as a 3D or 4D optical benchmark or tag for accurate delineation of the patient surface, as demonstrated for patient auto setup and breathing and heart motion detection. Furthermore, its future potential applications in gating, adaptive therapy, 3D or 4D image fusion between most imaging modalities, and image processing are discussed. It is shown that the LC system has a geometrical resolution of about 0.1 mm and that the rigid-body repositioning accuracy is about 0.5 mm below 20 mm displacements, 1 mm below 40 mm, and better than 2 mm at 70 mm. This indicates a slight need for repeated repositioning when the initial error is larger than about 50 mm. The positioning accuracy with standard patient setup procedures for prostate cancer at Karolinska was found to be about 5-6 mm when independently measured using the LC system. The system was found valuable for positron emission tomography-computed tomography (PET-CT) in vivo tumor and dose delivery imaging, where it potentially may allow effective correction for breathing artifacts in 4D PET-CT and image fusion with lymph node atlases for accurate target volume definition in oncology. With an LC system in all imaging and radiation therapy rooms, auto setup during repeated diagnostic and therapeutic procedures may save around 5 min per session, increase accuracy, and allow efficient image fusion between all imaging modalities employed.
Heideklang, René; Shokouhi, Parisa
2016-01-01
This article focuses on the fusion of flaw indications from multi-sensor nondestructive materials testing. Because each testing method makes use of a different physical principle, a multi-method approach has the potential of effectively differentiating actual defect indications from the many false alarms, thus enhancing detection reliability. In this study, we propose a new technique for aggregating scattered two- or three-dimensional sensory data. Using a density-based approach, the proposed method explicitly addresses localization uncertainties such as registration errors. This feature marks one of the major advantages of this approach over pixel-based image fusion techniques. We provide guidelines on how to set all the key parameters and demonstrate the technique's robustness. Finally, we apply our fusion approach to experimental data and demonstrate its capability to locate small defects by substantially reducing false alarms under conditions where no single-sensor method is adequate. PMID:26784200
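The density-based aggregation idea can be sketched as follows: each sensor's scattered detections are smoothed with a kernel density whose bandwidth absorbs registration uncertainty, and the per-sensor densities are then combined on a common grid. A simple product is used below as one conjunctive combination rule; the paper's exact aggregation and parameter guidelines differ, and all detection data here are synthetic.

```python
# Toy sketch of density-based fusion of scattered 2-D flaw indications.
# Bandwidth, grid, and the product combination rule are assumptions.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
true_flaw = np.array([5.0, 5.0])
sensors = []
for offset in ([0.2, -0.1], [-0.15, 0.25]):        # per-sensor registration error
    hits = true_flaw + offset + 0.05 * rng.standard_normal((5, 2))
    false_alarms = 10.0 * rng.random((20, 2))       # scattered false alarms
    sensors.append(np.vstack([hits, false_alarms]))

xx, yy = np.mgrid[0:10:100j, 0:10:100j]
grid = np.vstack([xx.ravel(), yy.ravel()])
fused = np.ones(grid.shape[1])
for pts in sensors:
    kde = gaussian_kde(pts.T, bw_method=0.15)       # bandwidth ~ uncertainty
    fused *= kde(grid)                              # conjunctive combination
print(grid[:, np.argmax(fused)])                    # likely near (5, 5)
```

Only locations where both sensors place density survive the combination, which is the sense in which cross-sensor agreement suppresses single-sensor false alarms.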
Technical overview of the millimeter-wave imaging reflectometer on the DIII-D tokamak (invited)
Muscatello, Christopher M.; Domier, Calvin W.; Hu, Xing; ...
2014-07-22
The two-dimensional mm-wave imaging reflectometer (MIR) on DIII-D is a multi-faceted device for diagnosing electron density fluctuations in fusion plasmas. Its multi-channel, multi-frequency capabilities and high sensitivity permit visualization and quantitative diagnosis of density perturbations, including correlation length, wavenumber, mode propagation velocity, and dispersion. The two-dimensional capabilities of MIR are made possible with twelve vertically separated sightlines and four-frequency operation (corresponding to four radial channels). The 48-channel DIII-D MIR system has a tunable source that can be stepped in 500 µs increments over a range of 56 to 74 GHz. An innovative optical design keeps both on-axis and off-axis channels focused at the cutoff surface, permitting imaging over an extended poloidal region. The integrity of the MIR optical design is confirmed by comparing Gaussian beam calculations to laboratory measurements of the transmitter beam pattern and receiver antenna patterns.
Minami, Yasunori; Minami, Tomohiro; Hagiwara, Satoru; Ida, Hiroshi; Ueshima, Kazuomi; Nishida, Naoshi; Murakami, Takamichi; Kudo, Masatoshi
2018-05-01
To assess the clinical feasibility of US-US image overlay fusion with evaluation of the ablative margin in radiofrequency ablation (RFA) for hepatocellular carcinoma (HCC). Fifty-three patients with 68 HCCs measuring 0.9-4.0 cm who underwent RFA guided by US-US image overlay fusion were included in this retrospective study. By overlaying the pre- and postoperative US, the tumor image could be projected onto the ablative hyperechoic zone, so the ablative margin could be shown three-dimensionally during the RFA procedure. US-US image overlay was compared to dynamic CT a few days after RFA for assessment of early treatment response. The accuracy of the graded response was calculated, and the performance of US-US image overlay fusion was compared with that of CT using a kappa agreement test. Technically effective ablation was achieved in a single session, and a 5-mm margin was obtained on CT in 59 HCCs (86.8%). The response with US-US image overlay correctly predicted the early CT evaluation with an accuracy of 92.6% (63/68) (κ = 0.67; 95% CI: 0.39-0.95). US-US image overlay fusion can be proposed as feasible guidance for RFA with a safety margin, and it predicts the early treatment response with high accuracy. • US-US image overlay fusion visualizes the ablative margin during the RFA procedure. • Visualizing the margin during the procedure can prompt immediate complementary treatment. • US image fusion correlates with the results of early evaluation CT.
Hyperspectral face recognition with spatiospectral information fusion and PLS regression.
Uzair, Muhammad; Mahmood, Arif; Mian, Ajmal
2015-03-01
Hyperspectral imaging offers new opportunities for face recognition via improved discrimination along the spectral dimension. However, it poses new challenges, including low signal-to-noise ratio, interband misalignment, and high data dimensionality. Due to these challenges, the literature on hyperspectral face recognition is not only sparse but is limited to ad hoc dimensionality reduction techniques and lacks comprehensive evaluation. We propose a hyperspectral face recognition algorithm using a spatiospectral covariance for band fusion and partial least squares regression for classification. Moreover, we extend 13 existing face recognition techniques, for the first time, to perform hyperspectral face recognition. We formulate hyperspectral face recognition as an image-set classification problem and evaluate the performance of seven state-of-the-art image-set classification techniques. We also test six state-of-the-art grayscale and RGB (color) face recognition algorithms after applying fusion techniques on hyperspectral images. Comparison with the 13 extended and five existing hyperspectral face recognition techniques on three standard data sets shows that the proposed algorithm outperforms all by a significant margin. Finally, we perform band selection experiments to find the most discriminative bands in the visible and near-infrared response spectrum.
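The classification stage can be sketched with scikit-learn's PLS regression against one-hot class labels; a plain mean over bands stands in for the paper's spatiospectral covariance descriptor, and the data, sizes, and component count are invented for illustration.

```python
# Sketch of band fusion + PLS classification on synthetic hyperspectral
# cubes. The mean-over-bands fusion is a stand-in, not the paper's descriptor.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n, h, w, bands, n_classes = 40, 16, 16, 33, 4
cubes = rng.random((n, h, w, bands))           # toy hyperspectral face cubes
labels = rng.integers(0, n_classes, n)

X = cubes.mean(axis=3).reshape(n, -1)          # naive band-fusion feature
Y = np.eye(n_classes)[labels]                  # one-hot regression targets

pls = PLSRegression(n_components=8).fit(X, Y)
pred = pls.predict(X).argmax(axis=1)           # class with largest response
print((pred == labels).mean())                 # training accuracy (toy data)
```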
Collaborative classification of hyperspectral and visible images with convolutional neural network
NASA Astrophysics Data System (ADS)
Zhang, Mengmeng; Li, Wei; Du, Qian
2017-10-01
Recent advances in remote sensing technology have made multisensor data available for the same area, and it is well known that remote sensing data processing and analysis often benefit from multisource data fusion. Specifically, the low spatial resolution of hyperspectral imagery (HSI) degrades the quality of the subsequent classification task, while using visible (VIS) images with high spatial resolution enables high-fidelity spatial analysis. A collaborative classification framework is proposed to fuse HSI and VIS images for finer classification. First, a convolutional neural network model is employed to extract deep spectral features for HSI classification. Second, effective binarized statistical image features are learned as contextual basis vectors for the high-resolution VIS image, followed by a classifier. The proposed approach employs diversified data in a decision fusion, leading to an integration of the rich spectral information, spatial information, and statistical representation information. In particular, the proposed approach eliminates the potential problems of the curse of dimensionality and excessive computation time. Experiments evaluated on two standard data sets demonstrate the better classification performance offered by this framework.
Fusion and quality analysis for remote sensing images using contourlet transform
NASA Astrophysics Data System (ADS)
Choi, Yoonsuk; Sharifahmadian, Ershad; Latifi, Shahram
2013-05-01
Recent developments in remote sensing technologies have provided various images with high spatial and spectral resolutions. However, multispectral images have low spatial resolution and panchromatic images have low spectral resolution. Therefore, image fusion techniques are necessary to improve the spatial resolution of spectral images by injecting spatial details of high-resolution panchromatic images. The objective of image fusion is to provide useful information by improving the spatial resolution and the spectral information of the original images. The fusion results can be utilized in various applications, such as military, medical imaging, and remote sensing. This paper addresses two issues in image fusion: i) image fusion method and ii) quality analysis of fusion results. First, a new contourlet-based image fusion method is presented, which is an improvement over the wavelet-based fusion. This fusion method is then applied to a case study to demonstrate its fusion performance. Fusion framework and scheme used in the study are discussed in detail. Second, quality analysis for the fusion results is discussed. We employed various quality metrics in order to analyze the fusion results both spatially and spectrally. Our results indicate that the proposed contourlet-based fusion method performs better than the conventional wavelet-based fusion methods.
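For orientation, the wavelet-based baseline that contourlet fusion improves on typically averages the approximation bands and keeps the larger-magnitude detail coefficients. The sketch below shows that baseline with PyWavelets; the wavelet, level, fusion rule, and random inputs are illustrative assumptions.

```python
# Generic wavelet-domain fusion baseline: average the low-pass band, take
# the max-magnitude detail coefficients. The contourlet variant follows the
# same pattern with a different transform.
import numpy as np
import pywt

def wavelet_fuse(a, b, wavelet='db2', level=2):
    ca = pywt.wavedec2(a, wavelet, level=level)
    cb = pywt.wavedec2(b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]               # approximation: average
    for da, db in zip(ca[1:], cb[1:]):            # (H, V, D) per level
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

pan = np.random.rand(64, 64)   # stand-in for a panchromatic image
ms = np.random.rand(64, 64)    # stand-in for one multispectral band
print(wavelet_fuse(pan, ms).shape)
```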
Chinnadurai, Ponraj; Duran, Cassidy; Al-Jabbari, Odeaa; Abu Saleh, Walid K; Lumsden, Alan; Bismuth, Jean
2016-01-01
To report our initial experience and highlight the value of using intraoperative C-arm cone beam computed tomography (CT; DynaCT(®)) image fusion guidance along with steerable robotic endovascular catheter navigation to optimize vessel cannulation. Between May 2013 and January 2015, all patients who underwent endovascular procedures using the DynaCT image fusion technique along with the Hansen Magellan vascular robotic catheter were included in this study. As a part of preoperative planning, relevant vessel landmarks were electronically marked in contrast-enhanced multi-slice computed tomography images and stored. At the beginning of the procedure, an intraoperative noncontrast C-arm cone beam CT (syngo DynaCT(®), Siemens Medical Solutions USA Inc.) was acquired in the hybrid suite. Preoperative images were then coregistered to the intraoperative DynaCT images using aortic wall calcifications and bone landmarks. The stored landmarks were then overlaid on 2-dimensional (2D) live fluoroscopic images as virtual markers that are updated in real time with C-arm and table movements and image zoom. Vascular access and the robotic catheter (Magellan(®), Hansen Medical) were set up per standard. Vessel cannulation was performed based on the electronic virtual markers on live fluoroscopy using the robotic catheter. The impact of 3-dimensional (3D) image fusion guidance on robotic vessel cannulation was evaluated retrospectively by assessing quantitative parameters, such as the number of angiograms acquired before vessel cannulation, and qualitative parameters, such as the accuracy of vessel ostium and centerline markers. All 17 vessels attempted in 14 patients were cannulated successfully using the robotic catheter and image fusion guidance. The median vessel diameter at origin was 5.4 mm (range, 2.3-13 mm), whereas 12 of 17 (70.6%) vessels had either a calcified and/or stenosed origin from the parent vessel. Nine of 17 vessels (52.9%) were cannulated without any contrast injection. The median number of angiograms required before cannulation was 0 (range, 0-2). On qualitative assessment, 14 of 15 vessels (93.3%) had grade = 1 accuracy (guidewire inside the virtual ostial marker). Fourteen of 14 vessels had grade = 1 accuracy (virtual centerlines that matched the actual vessel trajectory during cannulation). In this small series, the experience of using DynaCT image fusion guidance together with a steerable endovascular robotic catheter indicates that such image fusion strategies can enhance intraoperative 2D fluoroscopy by bringing preoperative 3D information about vascular stenosis and/or calcification, angulation, and takeoff from the main vessel, thereby facilitating vessel cannulation. Copyright © 2016 Elsevier Inc. All rights reserved.
Multispectral image fusion for target detection
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-09-01
Various methods to perform multi-spectral image fusion have been suggested, mostly on the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in an experiment using MSSF against two established methods, averaging and principal component analysis (PCA), and against its two source bands, visible and infrared. The task studied was target detection in a cluttered environment. MSSF proved superior to the other fusion methods. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.
Fusion of LBP and SWLD using spatio-spectral information for hyperspectral face recognition
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Jiang, Peng; Zhang, Shuai; Xiong, Jinquan
2018-01-01
Hyperspectral imaging, which records intrinsic spectral information of the skin across different spectral bands, has become an important approach for robust face recognition. However, the main challenges for hyperspectral face recognition are high data dimensionality, low signal-to-noise ratio, and inter-band misalignment. In this paper, hyperspectral face recognition based on LBP (local binary patterns) and SWLD (simplified Weber local descriptor) is proposed to extract discriminative local features from spatio-spectral fusion information. First, a spatio-spectral fusion strategy based on statistical information is used to obtain discriminative features of hyperspectral face images. Second, LBP is applied to extract the orientation of the fused face edges. Third, SWLD is proposed to encode the intensity information in hyperspectral images. Finally, we adopt a symmetric Kullback-Leibler distance to compare the encoded face images. The hyperspectral face recognition method is tested on the Hong Kong Polytechnic University Hyperspectral Face database (PolyUHSFD). Experimental results show that the proposed method has a higher recognition rate (92.8%) than state-of-the-art hyperspectral face recognition algorithms.
Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.
Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F
2013-09-01
The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and the 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in stereolithography file format, and the 3dMD model was exported in Virtual Reality Modeling Language file format. Image registration and fusion were performed in Mimics software. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where the errors are more than 1.5 mm. However, the image fusion of the whole point cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allows the 3D virtual head to be an accurate, realistic, and widespread tool, and is of great benefit to virtual face modeling.
Sadagopan, Shankar N; Veldtman, Gruschen R; Sivaprakasam, Muthukumaran C; Keeton, Barry R; Gnanapragasam, James P; Salmon, Anthony P; Haw, Marcus P; Vettukattil, Joseph J
2006-10-01
To define the anatomic characteristics of the congenitally malformed and severely stenotic aortic valve using trans-thoracic real-time three-dimensional echocardiography, and to compare and contrast this with the valvar morphology as seen at surgery. Prospective cross-sectional observational study. Tertiary centre for paediatric cardiology. All patients requiring aortic valvotomy between December 2003 and July 2004 were evaluated prior to surgery with three-dimensional echocardiography. Full-volume loop images were acquired using the Philips Sonos 7500 system. A single observer analysed the images using QLAB 4.1 software. The details were then compared with operative findings. We identified 8 consecutive patients, with a median age of 16 weeks, ranging from 1 day to 11 years, and a median weight of 7.22 kilograms, ranging from 2.78 to 22 kilograms. The measured diameter of the valvar orifice, and the number of leaflets identified, corresponded closely with the surgical assessment. The sites of fusion of the leaflets were correctly identified by the echocardiographic imaging in all cases. Fusion between the right and non-coronary leaflets was identified in half the patients. Dysplasia was observed in 3 patients, with 1 patient having nodules and 2 shown to have excrescences. At surgery, nodules were excised, and excrescences were trimmed. The dysplastic changes correlated well with operative findings, though the correlation was not statistically significant. We recommend trans-thoracic real-time three-dimensional echocardiography for the assessment of the congenitally malformed aortic valve, particularly to identify sites of fusion between leaflets and to measure the orificial diameter. The definition of nodularity, and the prognosis of nodules based on the mode of intervention, will need a comparative study of patients submitted to balloon dilation as well as those undergoing surgical valvotomy.
Cha, Dong Ik; Lee, Min Woo; Kim, Ah Yeong; Kang, Tae Wook; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga
2017-11-01
Background A major drawback of conventional manual image fusion is that the process may be complex, especially for less-experienced operators. Recently, two automatic image fusion techniques called Positioning and Sweeping auto-registration have been developed. Purpose To compare the accuracy and required time for image fusion of real-time ultrasonography (US) and computed tomography (CT) images between Positioning and Sweeping auto-registration. Material and Methods Eighteen consecutive patients referred for planning US for radiofrequency ablation or biopsy for focal hepatic lesions were enrolled. Image fusion using both auto-registration methods was performed for each patient. Registration error, time required for image fusion, and number of point locks used were compared using the Wilcoxon signed rank test. Results Image fusion was successful in all patients. Positioning auto-registration was significantly faster than Sweeping auto-registration for both initial (median, 11 s [range, 3-16 s] vs. 32 s [range, 21-38 s]; P < 0.001) and complete (median, 34.0 s [range, 26-66 s] vs. 47.5 s [range, 32-90 s]; P = 0.001) image fusion. The registration error of Positioning auto-registration was significantly higher for initial image fusion (median, 38.8 mm [range, 16.0-84.6 mm] vs. 18.2 mm [range, 6.7-73.4 mm]; P = 0.029), but not for complete image fusion (median, 4.75 mm [range, 1.7-9.9 mm] vs. 5.8 mm [range, 2.0-13.0 mm]; P = 0.338). The number of point locks required to refine the initially fused images was significantly higher with Positioning auto-registration (median, 2 [range, 2-3] vs. 1 [range, 1-2]; P = 0.012). Conclusion Positioning auto-registration offers faster image fusion between real-time US and pre-procedural CT images than Sweeping auto-registration. The final registration error is similar between the two methods.
Optimal wavelength band clustering for multispectral iris recognition.
Gong, Yazhuo; Zhang, David; Shi, Pengfei; Yan, Jingqi
2012-07-01
This work explores the possibility of clustering spectral wavelengths based on the maximum dissimilarity of iris textures. The eventual goal is to determine how many bands of spectral wavelengths will be enough for iris multispectral fusion and to find the bands that will provide higher performance in iris multispectral recognition. A multispectral acquisition system was first designed for imaging the iris at narrow spectral bands in the range of 420 to 940 nm. Next, a set of 60 human iris images corresponding to the right and left eyes of 30 different subjects was acquired for analysis. Finally, we determined that 3 clusters were enough to represent the 10 feature bands of spectral wavelengths, using agglomerative clustering based on two-dimensional principal component analysis. The experimental results suggest (1) the number, center, and composition of the clusters of spectral wavelengths and (2) the higher performance of iris multispectral recognition based on a three-wavelength-band fusion.
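The band-clustering step can be illustrated with SciPy's agglomerative hierarchy: represent each band by a feature vector and cut the dendrogram at three clusters. Raw pixel vectors stand in for the paper's two-dimensional PCA features, and the synthetic bands are built so that three latent groups exist.

```python
# Sketch: agglomerative clustering of spectral bands by texture similarity.
# Raw pixels stand in for the paper's 2D-PCA features; data are synthetic.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_bands, h, w = 10, 32, 32
base = [rng.random((h, w)) for _ in range(3)]      # three latent "looks"
bands = np.stack([base[k % 3] + 0.05 * rng.standard_normal((h, w))
                  for k in range(n_bands)])

feats = bands.reshape(n_bands, -1)                 # per-band feature vector
Z = linkage(feats, method='average', metric='correlation')
labels = fcluster(Z, t=3, criterion='maxclust')    # cut at 3 clusters
print(labels)                                      # similar bands share labels
```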
CBCT-based 3D MRA and angiographic image fusion and MRA image navigation for neuro interventions.
Zhang, Qiang; Zhang, Zhiqiang; Yang, Jiakang; Sun, Qi; Luo, Yongchun; Shan, Tonghui; Zhang, Hao; Han, Jingfeng; Liang, Chunyang; Pan, Wenlong; Gu, Chuanqi; Mao, Gengsheng; Xu, Ruxiang
2016-08-01
Digital subtracted angiography (DSA) remains the gold standard for diagnosis of cerebral vascular diseases and provides intraprocedural guidance. This practice involves extensive usage of x-ray and iodinated contrast medium, which can induce side effects. In this study, we examined the accuracy of 3-dimensional (3D) registration of magnetic resonance angiography (MRA) and DSA imaging for cerebral vessels, and tested the feasibility of using preprocedural MRA for real-time guidance during endovascular procedures. Twenty-three patients with suspected intracranial arterial lesions were enrolled. The contrast medium-enhanced 3D DSA of target vessels was acquired in 19 patients during endovascular procedures, and the images were registered with preprocedural MRA for fusion accuracy evaluation. Low-dose noncontrasted 3D angiography of the skull was performed in the other 4 patients and registered with the MRA. The MRA was overlaid afterwards with 2D live fluoroscopy to guide endovascular procedures. The 3D registration of the MRA and angiography demonstrated a high accuracy for vessel lesion visualization in all 19 patients examined. Moreover, MRA of the intracranial vessels, registered to the noncontrasted 3D angiography in the 4 patients, provided a real-time 3D roadmap to successfully guide the endovascular procedures. Radiation dose to patients and contrast medium usage were shown to be significantly reduced. Three-dimensional MRA and angiography fusion can accurately generate cerebral vasculature images to guide endovascular procedures. The use of the fusion technology could enhance clinical workflow while minimizing contrast medium usage and radiation dose, hence lowering procedure risks and increasing treatment safety.
Medical image registration based on normalized multidimensional mutual information
NASA Astrophysics Data System (ADS)
Li, Qi; Ji, Hongbing; Tong, Ming
2009-10-01
Registration of medical images is an essential research topic in medical image processing and applications, and in particular a preliminary and key step for multimodality image fusion. This paper offers a solution to medical image registration based on normalized multi-dimensional mutual information. First, an affine transformation with translational and rotational parameters is applied to the floating image. Then, ordinal features are extracted by ordinal filters with different orientations to represent spatial information in the medical images. Integrating the ordinal features with pixel intensities, the normalized multi-dimensional mutual information is defined as the similarity criterion to register multimodality images. Finally, an immune algorithm is used to search for the registration parameters. The experimental results demonstrate the effectiveness of the proposed registration scheme.
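A minimal joint-histogram estimate of the normalized mutual information used as the similarity backbone is sketched below, with NMI defined as (H(A) + H(B)) / H(A, B); the multi-dimensional extension with ordinal features that the paper proposes is omitted, and the bin count is an arbitrary choice.

```python
# Joint-histogram estimate of normalized mutual information between two
# images; the paper's ordinal-feature dimensions are omitted in this sketch.
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))
    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

fixed = np.random.rand(128, 128)
moving = 0.7 * fixed + 0.3 * np.random.rand(128, 128)   # partly correlated
print(normalized_mutual_information(fixed, moving))      # > 1 when aligned
```

In a full registration loop, a search strategy (the paper uses an immune algorithm) would adjust the affine parameters of the floating image to maximize this score.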
Visualization and Sequencing of Membrane Remodeling Leading to Influenza Virus Fusion
Gui, Long; Ebner, Jamie L.; Mileant, Alexander; Williams, James A.
2016-01-01
ABSTRACT Protein-mediated membrane fusion is an essential step in many fundamental biological events, including enveloped virus infection. The nature of protein and membrane intermediates and the sequence of membrane remodeling during these essential processes remain poorly understood. Here we used cryo-electron tomography (cryo-ET) to image the interplay between influenza virus and vesicles with a range of lipid compositions. By following the population kinetics of membrane fusion intermediates imaged by cryo-ET, we found that membrane remodeling commenced with the hemagglutinin fusion protein spikes grappling onto the target membrane, followed by localized target membrane dimpling as local clusters of hemagglutinin started to undergo conformational refolding. The local dimples then transitioned to extended, tightly apposed contact zones where the two proximal membrane leaflets were in most cases indistinguishable from each other, suggesting significant dehydration and possible intermingling of the lipid head groups. Increasing the content of fusion-enhancing cholesterol or bis-monoacylglycerophosphate in the target membrane led to an increase in extended contact zone formation. Interestingly, hemifused intermediates were found to be extremely rare in the influenza virus fusion system studied here, most likely reflecting the instability of this state and its rapid conversion to postfusion complexes, which increased in population over time. By tracking the populations of fusion complexes over time, the architecture and sequence of membrane reorganization leading to efficient enveloped virus fusion were thus resolved. IMPORTANCE Enveloped viruses employ specialized surface proteins to mediate fusion of cellular and viral membranes that results in the formation of pores through which the viral genetic material is delivered to the cell. For influenza virus, the trimeric hemagglutinin (HA) glycoprotein spike mediates host cell attachment and membrane fusion. While structures of a subset of conformations and parts of the fusion machinery have been characterized, the nature and sequence of membrane deformations during fusion have largely eluded characterization. Building upon studies that focused on early stages of HA-mediated membrane remodeling, here cryo-electron tomography (cryo-ET) was used to image the three-dimensional organization of intact influenza virions at different stages of fusion with liposomes, leading all the way to completion of the fusion reaction. By monitoring the evolution of fusion intermediate populations over the course of acid-induced fusion, we identified the progression of membrane reorganization that leads to efficient fusion by an enveloped virus. PMID:27226364
Self characterization of a coded aperture array for neutron source imaging
NASA Astrophysics Data System (ADS)
Volegov, P. L.; Danly, C. R.; Fittinghoff, D. N.; Guler, N.; Merrill, F. E.; Wilde, C. H.
2014-12-01
The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning deuterium-tritium plasma during the stagnation stage of inertial confinement fusion implosions. Since the neutron source is small (~100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.
Designing Image Operators for MRI-PET Image Fusion of the Brain
NASA Astrophysics Data System (ADS)
Márquez, Jorge; Gastélum, Alfonso; Padilla, Miguel A.
2006-09-01
Our goal is to obtain images that combine, in a useful and precise way, the information from 3D volumes of medical imaging sets. We address two modalities, combining anatomy (Magnetic Resonance Imaging, or MRI) and functional information (Positron Emission Tomography, or PET). Commercial imaging software offers image fusion tools based on fixed blending or color-channel combination of two modalities and color Look-Up Tables (LUTs), without considering the anatomical and functional character of the image features. We use an approach to image fusion that takes advantage mainly of the HSL (Hue, Saturation and Luminosity) color space in order to enhance the fusion results. We further tested operators for gradient and contour extraction to enhance anatomical details, plus other spatial-domain filters for functional features corresponding to wide point-spread-function responses in PET images. A set of image-fusion operators was formulated and tested on PET and MRI acquisitions.
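A minimal sketch of the color-space idea described above, assuming registered 2-D slices normalized to [0, 1]: the MRI drives the luminance channel and the PET activity drives hue and saturation. HSV stands in for HSL here, and the hue ramp endpoints are assumptions.

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def fuse_mri_pet_hsv(mri_slice, pet_slice, hue_lo=0.66, hue_hi=0.0):
    """Map PET intensity to a blue-to-red hue ramp; keep MRI as luminance."""
    hue = hue_lo + (hue_hi - hue_lo) * pet_slice      # functional -> hue
    sat = np.clip(pet_slice * 1.5, 0.0, 1.0)          # strong uptake saturates
    val = mri_slice                                   # anatomy -> luminance
    return hsv_to_rgb(np.stack([hue, sat, val], axis=-1))
```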
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sailer, Anna M., E-mail: anni.sailer@mumc.nl; Haan, Michiel W. de, E-mail: m.de.haan@mumc.nl; Graaf, Rick de, E-mail: r.de.graaf@mumc.nl
Purpose: This study was designed to evaluate the feasibility of endovascular guidance by means of live fluoroscopy fusion with magnetic resonance angiography (MRA) and computed tomography angiography (CTA). Methods: Fusion guidance was evaluated in 20 endovascular peripheral artery interventions in 17 patients. Fifteen patients had received preinterventional diagnostic MRA and two patients had undergone CTA. Time for fluoroscopy with MRA/CTA coregistration was recorded. Feasibility of fusion guidance was evaluated according to the following criteria: for every procedure the executing interventional radiologists recorded whether 3D road-mapping provided added value (yes vs. no) and whether PTA and/or stenting could be performed relying on the fusion road-map without need for diagnostic contrast-enhanced angiogram series (CEAS) (yes vs. no). Precision of the fusion road-map was evaluated by recording maximum differences between the position of the vasculature on the virtual CTA/MRA images and conventional angiography. Results: Average time needed for image coregistration was 5 ± 2 min. Three-dimensional road-mapping provided added value in 15 procedures in 12 patients. In half of the patients (8/17), the intervention was performed relying on the fusion road-map only, without diagnostic CEAS. In two patients, the MRA road-map showed a false-positive lesion. Excluding three patients with inordinate movements, the mean difference in position of the vasculature on angiography and the MRA/CTA road-map was 1.86 ± 0.95 mm, implying that approximately 95% of differences were between 0 and 3.72 mm (mean ± 1.96 standard deviations). Conclusions: Fluoroscopy with MRA/CTA fusion guidance for peripheral artery interventions is feasible. By reducing the number of CEAS, this technology may contribute to enhanced procedural safety.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Lei
Magnetic confinement fusion is one of the most promising approaches to achieving fusion energy. With the rapid increase of computational power over the past decades, numerical simulations have become an important tool for studying fusion plasmas. Eventually, numerical models will be used to predict the performance of future devices, such as the International Thermonuclear Experimental Reactor (ITER) or DEMO. However, the reliability of these models needs to be carefully validated against experiments before the results can be trusted. Validation between simulations and measurements is particularly hard because the quantities directly available from the two sides are different. While simulations calculate the plasma quantities explicitly, measurements usually come in the form of diagnostic signals. The traditional way of making the comparison relies on diagnosticians to interpret the measured signals as plasma quantities. This interpretation is in general very complicated and sometimes not even unique. In contrast, given the plasma quantities from simulations, we can unambiguously calculate the generation and propagation of the diagnostic signals. Such calculations are called synthetic diagnostics, and they enable an alternative way to compare simulation results with measurements. In this dissertation, we present a platform for developing and applying synthetic diagnostic codes. Three diagnostics on the platform are introduced: the reflectometry and beam emission spectroscopy diagnostics measure the electron density, and the electron cyclotron emission (ECE) diagnostic measures the electron temperature. The theoretical derivation and numerical implementation of a new two-dimensional Electron Cyclotron Emission Imaging code is discussed in detail. This new code has shown the potential to address many challenging aspects of present ECE measurements, such as runaway electron effects and detection of the cross phase between electron temperature and density fluctuations.
Joda, Tim; Brägger, Urs; Gallucci, German
2015-01-01
Digital developments have led to the opportunity to compose simulated patient models based on three-dimensional (3D) skeletal, facial, and dental imaging. The aim of this systematic review is to provide an update on the current knowledge, to report on the technical progress in the field of 3D virtual patient science, and to identify further research needs to accomplish clinical translation. Searches were performed electronically (MEDLINE and OVID) and manually up to March 2014 for studies of 3D fusion imaging to create a virtual dental patient. Inclusion criteria were limited to human studies reporting on the technical protocol for superimposition of at least two different 3D data sets and medical field of interest. Of the 403 titles originally retrieved, 51 abstracts and, subsequently, 21 full texts were selected for review. Of the 21 full texts, 18 studies were included in the systematic review. Most of the investigations were designed as feasibility studies. Three different types of 3D data were identified for simulation: facial skeleton, extraoral soft tissue, and dentition. A total of 112 patients were investigated in the development of 3D virtual models. Superimposition of data on the facial skeleton, soft tissue, and/or dentition is a feasible technique to create a virtual patient under static conditions. Three-dimensional image fusion is of interest and importance in all fields of dental medicine. Future research should focus on the real-time replication of a human head, including dynamic movements, capturing data in a single step.
Ogata, Yuji; Nakahara, Tadaki; Ode, Kenichi; Matsusaka, Yohji; Katagiri, Mari; Iwabuchi, Yu; Itoh, Kazunari; Ichimura, Akira; Jinzaki, Masahiro
2017-05-01
We developed a method of projecting bone SPECT image data into 3D volume-rendered CT images for 3D SPECT/CT fusion. The aims of our study were to evaluate its feasibility and clinical usefulness. Whole-body bone scintigraphy (WB) and SPECT/CT scans were performed in 318 cancer patients using a dedicated SPECT/CT system. Volume data of bone SPECT and CT were fused to obtain 2D SPECT/CT images. To generate our 3D SPECT/CT images, colored voxel data of bone SPECT were projected onto the corresponding locations of the volume-rendered CT data after a semi-automatic bone extraction. The resultant 3D images were then blended with conventional volume-rendered CT images, making it possible to grasp the three-dimensional relationship between bone metabolism and anatomy. WB and SPECT (WB + SPECT), 2D SPECT/CT fusion, and 3D SPECT/CT fusion were evaluated by two independent reviewers in the diagnosis of bone metastasis. The inter-observer variability and diagnostic accuracy of these three image sets were investigated using a four-point diagnostic scale. Increased bone metabolism was found in 744 metastatic sites and 1002 benign changes. On a per-lesion basis, inter-observer agreements in the diagnosis of bone metastasis were 0.72 for WB + SPECT, 0.90 for 2D SPECT/CT, and 0.89 for 3D SPECT/CT. Receiver operating characteristic analyses for the diagnostic accuracy of bone metastasis showed that WB + SPECT, 2D SPECT/CT, and 3D SPECT/CT had areas under the curve of 0.800, 0.983, and 0.983 for reader 1 and 0.865, 0.992, and 0.993 for reader 2, respectively (WB + SPECT vs. 2D or 3D SPECT/CT, p < 0.001; 2D vs. 3D SPECT/CT, n.s.). The durations of interpretation of WB + SPECT, 2D SPECT/CT, and 3D SPECT/CT images were 241 ± 75, 225 ± 73, and 182 ± 71 s for reader 1 and 207 ± 72, 190 ± 73, and 179 ± 73 s for reader 2, respectively; reading times were thus shorter for 3D SPECT/CT images than for 2D SPECT/CT (p < 0.0001) or WB + SPECT images (p < 0.0001). 3D SPECT/CT fusion offers diagnostic accuracy comparable to 2D SPECT/CT fusion, and its visual effect reduces reading time compared with 2D SPECT/CT fusion.
Three-dimensional ocular kinematics underlying binocular single vision
Misslisch, H.
2016-01-01
We have analyzed the binocular coordination of the eyes during far-to-near refixation saccades based on the evaluation of distance ratios and angular directions of the projected target images relative to the eyes' rotation centers. By defining the geometric point of binocular single vision, called Helmholtz point, we found that disparities during fixations of targets at near distances were limited in the subject's three-dimensional visual field to the vertical and forward directions. These disparities collapsed to simple vertical disparities in the projective binocular image plane. Subjects were able to perfectly fuse the vertically disparate target images with respect to the projected Helmholtz point of single binocular vision, independent of the particular location relative to the horizontal plane of regard. Target image fusion was achieved by binocular torsion combined with corrective modulations of the differential half-vergence angles of the eyes in the horizontal plane. Our findings support the notion that oculomotor control combines vergence in the horizontal plane of regard with active torsion in the frontal plane to achieve fusion of the dichoptic binocular target images. PMID:27655969
Simulated disparity and peripheral blur interact during binocular fusion.
Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J
2014-07-17
We have developed a low-cost, practical gaze-contingent display in which natural images are presented to the observer with dioptric blur and stereoscopic disparity that are dependent on the three-dimensional structure of natural scenes. Our system simulates a distribution of retinal blur and depth similar to that experienced in real-world viewing conditions by emmetropic observers. We implemented the system using light-field photographs taken with a plenoptic camera which supports digital refocusing anywhere in the images. We coupled this capability with an eye-tracking system and stereoscopic rendering. With this display, we examine how the time course of binocular fusion depends on depth cues from blur and stereoscopic disparity in naturalistic images. Our results show that disparity and peripheral blur interact to modify eye-movement behavior and facilitate binocular fusion, and the greatest benefit was gained by observers who struggled most to achieve fusion. Even though plenoptic images do not replicate an individual’s aberrations, the results demonstrate that a naturalistic distribution of depth-dependent blur may improve 3-D virtual reality, and that interruptions of this pattern (e.g., with intraocular lenses) which flatten the distribution of retinal blur may adversely affect binocular fusion. © 2014 ARVO.
NASA Astrophysics Data System (ADS)
Li, Chenguang; Yang, Xianjun
2016-10-01
The Magnetized Plasma Fusion Reactor concept is proposed as a magneto-inertial fusion approach based on a target plasma created through the collisional merging of two oppositely translating field-reversed-configuration plasmas, which is then compressed by an imploding liner driven by a pulsed-power driver. The target creation process is described by a two-dimensional magnetohydrodynamics model, yielding typical target parameters. The implosion process and the fusion reaction are modeled by a simple zero-dimensional model that takes into account alpha-particle heating and the bremsstrahlung radiation loss. The compression of the target can be 2D (cylindrical) or 2.4D, with the additional axial contraction taken into account. The dynamics of the liner compression and fusion burn are simulated, and the optimum fusion gain and the associated target parameters are predicted. Scientific breakeven could be achieved under the optimized conditions.
3D fluorescence anisotropy imaging using selective plane illumination microscopy.
Hedde, Per Niklas; Ranjit, Suman; Gratton, Enrico
2015-08-24
Fluorescence anisotropy imaging is a popular method to visualize changes in the organization and conformation of biomolecules within cells and tissues. In such an experiment, depolarization effects resulting from differences in the orientation, proximity, and rotational mobility of fluorescently labeled molecules are probed with high spatial resolution. Fluorescence anisotropy is typically imaged using laser scanning and epifluorescence-based approaches. Unfortunately, those techniques are limited in axial resolution or image acquisition speed, or by photobleaching. In the last decade, however, selective plane illumination microscopy has emerged as the preferred choice for three-dimensional time-lapse imaging, combining axial sectioning capability with fast, camera-based image acquisition and minimal light exposure. We demonstrate how selective plane illumination microscopy can be utilized for three-dimensional fluorescence anisotropy imaging of live cells. We further examined the formation of focal adhesions by three-dimensional time-lapse anisotropy imaging of CHO-K1 cells expressing an EGFP-paxillin fusion protein.
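For reference, the standard per-pixel anisotropy computation that underlies such imaging can be sketched as follows, assuming registered parallel- and perpendicular-polarization images and a known instrument G factor.

```python
import numpy as np

def anisotropy_map(i_par, i_perp, g=1.0, eps=1e-9):
    """r = (I_par - G*I_perp) / (I_par + 2*G*I_perp), computed pixelwise."""
    i_par = i_par.astype(float)
    i_perp = g * i_perp.astype(float)
    return (i_par - i_perp) / (i_par + 2.0 * i_perp + eps)
```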
A new multi-spectral feature level image fusion method for human interpretation
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-03-01
Various different methods to perform multi-spectral image fusion have been suggested, mostly on the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature level processing paradigm. To test our method, we compared human observer performance in a three-task experiment using MSSF against two established methods: averaging and principal components analysis (PCA), and against its two source bands, visible and infrared. The three tasks that we studied were: (1) simple target detection, (2) spatial orientation, and (3) camouflaged target detection. MSSF proved superior to the other fusion methods in all three tests; MSSF also outperformed the source images in the spatial orientation and camouflaged target detection tasks. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general and specific fusion methods in particular would be superior to using the original image sources can be further addressed.
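For context, the two established pixel-level baselines that MSSF was compared against can be sketched as follows; this is a minimal illustration for registered single-band images in [0, 1], not the authors' implementation.

```python
import numpy as np

def average_fusion(a, b):
    """Plain pixelwise averaging of two registered source images."""
    return 0.5 * (a + b)

def pca_fusion(a, b):
    """Weight the two sources by the leading eigenvector of their covariance."""
    data = np.stack([a.ravel(), b.ravel()])
    eigvals, eigvecs = np.linalg.eigh(np.cov(data))
    w = np.abs(eigvecs[:, np.argmax(eigvals)])
    w = w / w.sum()
    return w[0] * a + w[1] * b
```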
NASA Astrophysics Data System (ADS)
Singh, Dharmendra; Kumar, Harish
Earth observation satellites provide data covering different portions of the electromagnetic spectrum at different spatial and spectral resolutions. The increasing availability of information products generated from satellite images is extending our ability to understand the patterns and dynamics of earth resource systems at all scales of inquiry. One of the most important applications is the generation of land cover classifications from satellite images to establish the actual status of various land cover classes. The prospects for the use of satellite images in land cover classification are extremely promising, and the quality of satellite images available for land-use mapping is improving rapidly through the development of advanced sensor technology. Particularly noteworthy in this regard is the improved spatial and spectral resolution of the images captured by newer satellite sensors such as MODIS, ASTER, Landsat 7, and SPOT 5. For the full exploitation of increasingly sophisticated multisource data, fusion techniques are being developed; fused images may enhance interpretation capabilities. The images used for fusion have different temporal and spatial resolutions, so the fused image provides a more complete view of the observed objects. One of the main aims of image fusion is to integrate different data in order to obtain more information than can be derived from each of the single-sensor datasets alone; a good example is the fusion of images acquired by different sensors with different spatial and spectral resolutions. Researchers have been applying fusion techniques for three decades and have proposed various useful methods. The importance of high-quality synthesis of spectral information is well established for land cover classification. More recently, an underlying multiresolution analysis employing the discrete wavelet transform has been used in image fusion. It was found that multisensor image fusion is a tradeoff between the spectral information from a low-resolution multi-spectral image and the spatial information from a high-resolution image; with wavelet-transform-based fusion, it is easy to control this tradeoff. A newer transform, the curvelet transform, was introduced in recent years by Starck: a ridgelet transform is applied to square blocks of the detail frames of an undecimated wavelet decomposition, yielding the curvelet transform. Since the ridgelet transform possesses basis functions matching directional straight lines, the curvelet transform is capable of representing piecewise linear contours on multiple scales through few significant coefficients. This property leads to a better separation between geometric details and background noise, and the noise may easily be reduced by thresholding curvelet coefficients before they are used for fusion. The Terra and Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) instrument provides high radiometric sensitivity (12 bit) in 36 spectral bands ranging in wavelength from 0.4 μm to 14.4 μm, and its data are freely available. Two bands are imaged at a nominal resolution of 250 m at nadir, five bands at 500 m, and the remaining 29 bands at 1 km. In this paper, band 1 (spatial resolution 250 m, bandwidth 620-670 nm) and band 2 (spatial resolution 250 m, bandwidth 842-876 nm) are considered, as these bands have features well suited to identifying agriculture and other land covers.
In January 2006, the Advanced Land Observing Satellite (ALOS) was successfully launched by the Japan Aerospace Exploration Agency (JAXA). The Phased Array type L-band SAR (PALSAR) sensor onboard the satellite acquires SAR imagery at a wavelength of 23.5 cm (frequency 1.27 GHz) with multimode and multipolarization observation capabilities. PALSAR can operate in several modes: fine-beam single (FBS) polarization (HH), fine-beam dual (FBD) polarization (HH/HV or VV/VH), polarimetric (PLR) mode (HH/HV/VH/VV), and ScanSAR (WB) mode (HH/VV) [15]. These capabilities make PALSAR imagery very attractive for a spatially and temporally consistent monitoring system. The essence of principal component analysis is that most of the information within all the bands can be compressed into a much smaller number of bands with little loss of information; it extracts the low-dimensional subspaces that capture the main linear correlations among the high-dimensional image data. This facilitates viewing the explained variance or signal in the available imagery, allowing both gross and more subtle features to be seen. In this paper we explore fusion techniques for enhancing the land cover classification of low-resolution, and especially freely available, satellite data. For this purpose, we fuse the PALSAR principal component data with the MODIS principal component data. First, MODIS bands 1 and 2 are considered and their principal components are computed. Similarly, the principal components of the PALSAR HH-, HV-, and VV-polarized data are computed. The PALSAR principal component image is then fused with the MODIS principal component image. The aim of this paper is to analyze the effect of fusing PALSAR data with MODIS data on the classification accuracy of major land cover types such as agriculture, water, and urban areas. The curvelet transform has been applied for fusion of these two satellite images, and minimum distance classification has been applied to the resulting fused image. It is qualitatively and visually observed that the overall classification accuracy of the MODIS image is enhanced after fusion. This type of fusion technique may prove helpful in the near future for using freely available satellite data to develop monitoring systems for different land cover classes on the earth.
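A minimal sketch of the principal-component step described above, for a co-registered stack of bands (e.g., MODIS bands 1-2 or PALSAR HH/HV/VV); the curvelet fusion and minimum distance classification stages are not reproduced.

```python
import numpy as np

def first_principal_component(bands):
    """bands: array (n_bands, rows, cols) -> PC1 image of shape (rows, cols)."""
    n, r, c = bands.shape
    flat = bands.reshape(n, -1).astype(float)
    flat -= flat.mean(axis=1, keepdims=True)      # center each band
    eigvals, eigvecs = np.linalg.eigh(np.cov(flat))
    pc1 = eigvecs[:, np.argmax(eigvals)] @ flat   # project onto leading axis
    return pc1.reshape(r, c)
```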
Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators
Bai, Xiangzhi
2015-01-01
The crucial problem of infrared and visual image fusion is how to effectively extract the image features, including the image regions and details, and combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through the fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening- and closing-based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through the fuzzy measure-based weight strategy to form the final fusion features. Finally, the final fusion features are incorporated with the original infrared and visual images using the contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion. PMID:26184229
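One plausible reading of an opening/closing-based toggle operator, and the multi-scale feature extraction it supports, is sketched below; the structuring-element sizes are assumptions, and the fuzzy-measure weighting stage is not reproduced.

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def toggle_opening_closing(img, size):
    """Replace each pixel by its opening or closing, whichever is closer."""
    o = grey_opening(img, size=size)
    c = grey_closing(img, size=size)
    return np.where(img - o < c - img, o, c)

def multiscale_features(img, sizes=(3, 5, 7)):
    """Residues between the image and its toggle at increasing scales."""
    img = img.astype(float)
    return [np.abs(img - toggle_opening_closing(img, s)) for s in sizes]
```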
Pixel-based image fusion with false color mapping
NASA Astrophysics Data System (ADS)
Zhao, Wei; Mao, Shiyi
2003-06-01
In this paper, we propose a pixel-based image fusion algorithm that combines gray-level image fusion with false color mapping. The algorithm integrates two gray-level images from different sensor modalities or frequencies and produces a fused false-color image with higher information content than either original, in which objects are easier to recognize. The algorithm has three steps: first, obtain the fused gray-level image of the two original images; second, compute the generalized high-boost filtered images between the fused gray-level image and each source image; third, generate the fused false-color image. We use a hybrid averaging-and-selection fusion method to obtain the fused gray-level image, which provides better detail than the two original images and reduces noise at the same time. However, the fused gray-level image cannot contain all the detail information of the two source images, and details in a gray-level image cannot be discerned as easily as in a color image, so a color fused image is necessary. In order to create color variation and enhance details in the final fusion image, we produce three generalized high-boost filtered images and display them through the red, green, and blue channels respectively, producing a fused color image. This method is used to fuse two SAR images acquired over the San Francisco area (California, USA). The result shows that the fused false-color image enhances the visibility of certain details. The resolution of the final false-color image is the same as that of the input images.
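A minimal sketch of the three-step scheme, with the hybrid averaging-and-selection rule simplified to plain averaging and an illustrative boost factor; the channel assignments are assumptions.

```python
import numpy as np

def high_boost(base, detail_source, k=1.5):
    """Generalized high-boost: base plus an amplified difference to a source."""
    return np.clip(base + k * (base - detail_source), 0.0, 1.0)

def false_color_fusion(src1, src2):
    fused_gray = 0.5 * (src1 + src2)       # step 1 (simplified fusion rule)
    r = high_boost(fused_gray, src2)       # step 2: emphasize src1 details
    b = high_boost(fused_gray, src1)       #         emphasize src2 details
    g = fused_gray                         # step 3: assemble RGB composite
    return np.stack([r, g, b], axis=-1)
```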
Shinkawa, Norihiro; Hirai, Toshinori; Nishii, Ryuichi; Yukawa, Nobuhiro
2017-06-01
To determine the feasibility of human identification through the two-dimensional (2D) fusion of postmortem computed tomography (PMCT) and antemortem chest radiography. The study population consisted of 15 subjects who had undergone chest radiography studies more than 12 months before death. Fused images in which a chest radiograph was fused with a PMCT image were obtained for those subjects using a workstation, and the minimum distance gaps between corresponding anatomical landmarks (located at soft tissue and bone sites) in the images obtained with the two modalities were calculated. For each fused image, the mean of all these minimum distance gaps was recorded as the mean distance gap (MDG). For each subject, the MDG obtained for the same-subject fused image (i.e., where both of the images that were fused derived from that subject) was compared with the MDGs for different-subject fused images (i.e., where only one of the images that were fused derived from that subject; the other image derived from a different subject) in order to determine whether same-subject fused images can be reliably distinguished from different-subject fused images. The MDGs of the same-subject fused images were found to be significantly smaller than the MDGs of the different-subject fused images (p < 0.01). When bone landmarks were used, the same-subject fused image was found to be the fused image with the lowest MDG for 33.3% of the subjects, the fused image with the lowest or second-lowest MDG for 73.3% of the subjects, and the fused image with the lowest, second-lowest, or third-lowest MDG for 86.7% of the subjects. The application of bone landmarks rather than soft-tissue landmarks made it significantly more likely that, for each subject, the same-subject fused image would have the lowest MDG (or one of the lowest MDGs) of all the fused images compared (p < 0.05). The 2D fusion of antemortem chest radiography and postmortem CT images may assist in human identification.
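The mean distance gap reduces to a simple average of landmark-pair distances; a minimal sketch, assuming matched (x, y) landmark coordinates in a common frame, follows.

```python
import numpy as np

def mean_distance_gap(landmarks_xray, landmarks_pmct):
    """Mean Euclidean gap over matched landmark pairs (pixels or mm)."""
    a = np.asarray(landmarks_xray, dtype=float)
    b = np.asarray(landmarks_pmct, dtype=float)
    return float(np.linalg.norm(a - b, axis=1).mean())

# Usage: the same-subject pair should yield the smallest MDG among candidates.
# mdg = mean_distance_gap([(102, 340), (255, 118)], [(104, 338), (251, 120)])
```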
Wang, Qian; Song, Enmin; Jin, Renchao; Han, Ping; Wang, Xiaotong; Zhou, Yanying; Zeng, Jianchao
2009-06-01
The aim of this study was to develop a novel algorithm for segmenting lung nodules on three-dimensional (3D) computed tomographic images to improve the performance of computer-aided diagnosis (CAD) systems. The database used in this study consists of two data sets obtained from the Lung Imaging Database Consortium. The first data set, containing 23 nodules (22% irregular nodules, 13% nonsolid nodules, 17% nodules attached to other structures), was used for training. The second data set, containing 64 nodules (37% irregular nodules, 40% nonsolid nodules, 62% nodules attached to other structures), was used for testing. Two key techniques were developed in the segmentation algorithm: (1) a 3D extended dynamic programming model, with a newly defined internal cost function based on the information between adjacent slices, allowing parameters to be adapted to each slice, and (2) a multidirection fusion technique, which makes use of the complementary relationships among different directions to improve the final segmentation accuracy. The performance of this approach was evaluated by the overlap criterion, complemented by the true-positive fraction and the false-positive fraction criteria. The mean values of the overlap, true-positive fraction, and false-positive fraction for the first data set achieved using the segmentation scheme were 66%, 75%, and 15%, respectively, and the corresponding values for the second data set were 58%, 71%, and 22%, respectively. The experimental results indicate that this segmentation scheme can achieve better performance for nodule segmentation than two existing algorithms reported in the literature. The proposed 3D extended dynamic programming model is an effective way to segment sequential images of lung nodules. The proposed multidirection fusion technique is capable of reducing segmentation errors especially for no-nodule and near-end slices, thus resulting in better overall performance.
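For reference, the three evaluation criteria can be sketched from binary masks as below; the false-positive fraction is normalized by the segmented area here, which is one common convention and an assumption about the paper's exact definition.

```python
import numpy as np

def segmentation_metrics(seg, truth):
    """Overlap (Jaccard-style), true-positive and false-positive fractions."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    inter = np.logical_and(seg, truth).sum()
    union = np.logical_or(seg, truth).sum()
    overlap = inter / union
    tpf = inter / truth.sum()
    fpf = np.logical_and(seg, ~truth).sum() / seg.sum()
    return overlap, tpf, fpf
```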
Image Fusion for Radiosurgery, Neurosurgery and Hypofractionated Radiotherapy.
Inoue, Hiroshi K; Nakajima, Atsushi; Sato, Hiro; Noda, Shin-Ei; Saitoh, Jun-Ichi; Suzuki, Yoshiyuki
2015-03-01
Precise target detection is essential for radiosurgery, neurosurgery and hypofractionated radiotherapy because treatment results and complication rates are related to the accuracy of the target definition. In skull base tumors and tumors around the optic pathways, exact anatomical evaluation of the cranial nerves is important to avoid adverse effects on these structures close to the lesions. Three-dimensional analyses of structures obtained with heavily T2-weighted MR images, and image fusion with thin-sliced CT sections, are desirable for evaluating fine structures during radiosurgery and microsurgery. In vascular lesions, angiography is most important for evaluation of the whole structure, from feeders to drainers, the shunt, blood flow, and risk factors for bleeding. However, the exact sites and surrounding structures in the brain are not shown on angiography. True image fusion of angiography, MR images and CT on axial planes is ideal for precise target definition. In malignant tumors, especially recurrent head and neck tumors, the biologically active areas of recurrent tumors are the main targets of radiosurgery. PET scanning is useful for quantitative evaluation of recurrences, but the examination is not always available at the time of radiosurgery. Image fusion of MR diffusion images with CT is always available during radiosurgery and is useful for the detection of recurrent lesions. All images are fused and registered on thin-sliced CT sections, and exactly demarcated targets are planned for treatment. Follow-up images can also be registered on this CT, so exact assessment of target changes, including volume, is possible in this fusion system. The purpose of this review is to describe the usefulness of image fusion for 1) skull base, 2) vascular, and 3) recurrent target detection, and 4) follow-up analyses in radiosurgery, neurosurgery and hypofractionated radiotherapy.
Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery
Fuerst, Bernhard; Tateno, Keisuke; Johnson, Alex; Fotouhi, Javad; Osgood, Greg; Tombari, Federico; Navab, Nassir
2017-01-01
Orthopaedic surgeons are still following the decades old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking for creating a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surface and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows for tracking of the surgical tools occluded by hand. This proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion. PMID:29184659
NASA Astrophysics Data System (ADS)
Rousson, Johanna; Haar, Jérémy; Santal, Sarah; Kumcu, Asli; Platiša, Ljiljana; Piepers, Bastian; Kimpe, Tom; Philips, Wilfried
2016-03-01
While three-dimensional (3-D) imaging systems are entering hospitals, no study to date has explored the luminance calibration needs of 3-D stereoscopic diagnostic displays and if they differ from two-dimensional (2-D) displays. Since medical display calibration incorporates the human contrast sensitivity function (CSF), we first assessed the 2-D CSF for benchmarking and then examined the impact of two image parameters on the 3-D stereoscopic CSF: (1) five depth plane (DP) positions (between DP: -171 and DP: 2853 mm), and (2) three 3-D inclinations (0 deg, 45 deg, and 60 deg around the horizontal axis of a DP). Stimuli were stereoscopic images of a vertically oriented 2-D Gabor patch at one of seven frequencies ranging from 0.4 to 10 cycles/deg. CSFs were measured for seven to nine human observers with a staircase procedure. The results indicate that the 2-D CSF model remains valid for a 3-D stereoscopic display regardless of the amount of disparity between the stereo images. We also found that the 3-D CSF at DP≠0 does not differ from the 3-D CSF at DP=0 for DPs and disparities which allow effortless binocular fusion. Therefore, the existing 2-D medical luminance calibration algorithm remains an appropriate tool for calibrating polarized stereoscopic medical displays.
NASA Astrophysics Data System (ADS)
Hu, Jianqiang; Liu, Ahdi; Zhou, Chu; Zhang, Xiaohui; Wang, Mingyuan; Zhang, Jin; Feng, Xi; Li, Hong; Xie, Jinlin; Liu, Wandong; Yu, Changxuan
2017-08-01
A new integrated technique for fast and accurate measurement of quasi-optics, especially for the microwave/millimeter-wave diagnostic systems of fusion plasmas, has been developed. Using a LabVIEW-based comprehensive scanning system, we can realize not only automatic but also fast and accurate measurement, which helps to eliminate the effects of temperature drift and standing waves/multiple reflections. With a Matlab-based asymmetric two-dimensional Gaussian fitting method, all the desired parameters of the microwave beam can be obtained. This technique can be used in the design and testing of microwave diagnostic systems such as reflectometers and the electron cyclotron emission imaging diagnostics of the Experimental Advanced Superconducting Tokamak.
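A minimal sketch of an asymmetric (elliptical, rotated) two-dimensional Gaussian fit to a measured beam profile, standing in for the Matlab-based fitting mentioned above; the initial guesses are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy, theta, offset):
    """Elliptical Gaussian with independent widths sx, sy and rotation theta."""
    x, y = coords
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    return (amp * np.exp(-(xr**2 / (2 * sx**2) + yr**2 / (2 * sy**2)))
            + offset).ravel()

def fit_beam(profile):
    """profile: 2-D intensity map on a regular grid -> fitted parameters."""
    ny, nx = profile.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    p0 = (profile.max(), nx / 2, ny / 2, nx / 8, ny / 8, 0.0, profile.min())
    popt, _ = curve_fit(gauss2d, (x, y), profile.ravel(), p0=p0)
    return popt  # amp, x0, y0, sigma_x, sigma_y, theta, offset
```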
A color fusion method of infrared and low-light-level images based on visual perception
NASA Astrophysics Data System (ADS)
Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa
2014-11-01
Color fusion images can be obtained through the fusion of infrared and low-light-level images, and contain the information of both. Such fusion images can help observers understand multichannel imagery comprehensively. However, simple fusion may lose target information because targets are inconspicuous in long-distance infrared and low-light-level images, while blindly applying target extraction can seriously degrade the perception of scene information. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are incorporated into traditional color fusion methods, and the infrared and low-light-level color fusion images are achieved based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method: the fusion images achieved by our algorithm not only improve the detection rate of targets but also retain rich natural information of the scenes.
Fusion of infrared and visible images based on BEMD and NSDFB
NASA Astrophysics Data System (ADS)
Zhu, Pan; Huang, Zhanhua; Lei, Hai
2016-07-01
This paper presents a new fusion method for visible-infrared images based on the adaptive multi-scale decomposition of bidimensional empirical mode decomposition (BEMD) and the flexible directional expansion of nonsubsampled directional filter banks (NSDFB). Compared with conventional multi-scale fusion methods, BEMD is non-parametric and completely data-driven, which makes it relatively more suitable for the decomposition and fusion of non-linear signals. NSDFB provides directional filtering at each decomposition level to capture more of the geometrical structure of the source images effectively. In our fusion framework, the entropies of the two source images are first calculated, and the residue of the image with the larger entropy is extracted to make it highly relevant to the other source image. Then, the residue and the other source image are decomposed into low-frequency sub-bands and a sequence of high-frequency directional sub-bands at different scales using BEMD and NSDFB. In this fusion scheme, two corresponding fusion rules are used for the low-frequency sub-bands and the high-frequency directional sub-bands, respectively. Finally, the fused image is obtained by applying the corresponding inverse transform. Experimental results indicate that the proposed fusion algorithm obtains state-of-the-art performance for visible-infrared image fusion in terms of both objective assessment and subjective visual quality, even for source images obtained under different conditions. Furthermore, the fused results have high contrast, remarkable target information and rich detail information, making them more suitable for human visual characteristics or machine perception.
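The entropy comparison that opens this framework is straightforward to sketch; BEMD and NSDFB themselves are not reproduced here, and the histogram bin count is an assumption.

```python
import numpy as np

def shannon_entropy(img, bins=256):
    """Shannon entropy (bits) of the image gray-level histogram."""
    hist, _ = np.histogram(img.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def choose_decomposition_source(img_a, img_b):
    """Return the image with larger entropy, whose residue is then extracted."""
    return img_a if shannon_entropy(img_a) >= shannon_entropy(img_b) else img_b
```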
[An improved medical image fusion algorithm and quality evaluation].
Chen, Meiling; Tao, Ling; Qian, Zhiyu
2009-08-01
Medical image fusion is of great value in medical image analysis and diagnosis. In this paper, the conventional wavelet fusion method is improved and a new medical image fusion algorithm is presented, in which the high-frequency and low-frequency coefficients are treated separately. When high-frequency coefficients are chosen, the regional edge intensities of each sub-image are calculated to realize adaptive fusion. The choice of low-frequency coefficients is based on the edges of the images, so that the fused image preserves all useful information and appears more distinct. We apply the conventional and the improved wavelet-based fusion algorithms to fuse two images of the human body and evaluate the fusion results with a quality evaluation method. Experimental results show that the new algorithm effectively retains the detail information of the original images and enhances their edge and texture features, outperforming the conventional fusion algorithm based on the wavelet transform.
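A minimal sketch of a wavelet fusion rule in this spirit, using PyWavelets: high-frequency coefficients are selected by a regional energy measure (standing in for the paper's regional edge intensity), and low-frequency coefficients are averaged rather than edge-selected. The window size, wavelet, and decomposition level are assumptions.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def regional_energy(coef, size=3):
    """Local energy of a coefficient map over a small window."""
    return uniform_filter(coef * coef, size=size)

def wavelet_fusion(img_a, img_b, wavelet="db4", level=2):
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
    fused = [0.5 * (ca[0] + cb[0])]              # low-frequency: average
    for da, db in zip(ca[1:], cb[1:]):           # detail triples (H, V, D)
        fused.append(tuple(
            np.where(regional_energy(a) >= regional_energy(b), a, b)
            for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```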
Goudeketting, Seline R; Heinen, Stefan G; van den Heuvel, Daniel A; van Strijen, Marco J; de Haan, Michiel W; Slump, Cornelis H; de Vries, Jean-Paul P
2018-02-01
The effect of the insertion of guidewires and catheters on the fusion accuracy of the three-dimensional (3D) image fusion technique during iliac percutaneous transluminal angioplasty (PTA) procedures has not yet been investigated. Technical validation of the 3D fusion technique was evaluated in 11 patients with common and/or external iliac artery lesions. A preprocedural contrast-enhanced magnetic resonance angiogram (CE-MRA) was segmented and manually registered to a cone-beam computed tomography image created at the beginning of the procedure for each patient. Directly after the first digital subtraction angiography (DSA) acquisition, the treating physician visually scored the fusion accuracy of the entire vasculature of the overlay with respect to the DSA (accurate [<2 mm], mismatch [2-5 mm], or inaccurate [>5 mm]). Contours of the vasculature on the fusion images and DSAs were drawn after the procedure, and the cranial-caudal, lateral-medial, and absolute displacements between the vessel centerlines were calculated. To determine the influence of the catheters, displacements of the catheterized iliac trajectories were compared with those of the noncatheterized trajectories. Electronic databases were systematically searched for available literature published between January 2010 and August 2017. The mean registration error for all iliac trajectories (N.=20) was small (4.0±2.5 mm). No significant difference in fusion displacement was observed between catheterized (N.=11) and noncatheterized (N.=9) iliac arteries. The systematic literature search yielded two manuscripts with a total of 22 patients; the methodological quality of these studies was poor (≤11 MINORS score), mainly owing to the lack of a control group. Accurate image fusion based on preprocedural CE-MRA is possible and could potentially be of help in iliac PTA procedures. The flexible guidewires and angiographic catheters routinely used during endovascular procedures of the iliac arteries did not cause significant displacement that influenced the image fusion. The current literature on 3D image fusion in iliac PTA procedures is of limited methodological quality.
Single-Side Two-Location Spotlight Imaging for Building Based on MIMO Through-Wall-Radar.
Jia, Yong; Zhong, Xiaoling; Liu, Jiangang; Guo, Yong
2016-09-07
Through-wall-radar imaging is of interest for mapping the wall layout of buildings and for the detection of stationary targets within buildings. In this paper, we present a simple single-side two-location spotlight imaging method for both wall layout mapping and stationary target detection using multiple-input multiple-output (MIMO) through-wall-radar. Rather than imaging the building walls directly, images of all building corners are generated, from which the wall layout is inferred indirectly, by successively deploying the MIMO through-wall-radar at two appropriate locations on only one side of the building and then carrying out spotlight imaging with two different squint views. In addition to its ease of implementation, the single-side two-location squint-view detection has two other advantages for stationary target imaging: fewer multipath ghosts, and a smaller region of side-lobe interference from the corner images in comparison with the wall images. Based on Computer Simulation Technology (CST) electromagnetic simulation software, we provide multiple sets of validation results in which binary panorama images with clear images of all corners and stationary targets are obtained by combining two single-location images using incoherent additive fusion and two-dimensional cell-averaging constant-false-alarm-rate (2D CA-CFAR) detection.
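A minimal sketch of the 2D CA-CFAR detector named above: the noise level at each pixel is estimated from a training ring around a guard window, and a detection is declared when the pixel exceeds that estimate by a scale factor. Window sizes and the scale factor are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar_2d(power, guard=2, train=6, scale=4.0):
    """power: 2-D intensity image; returns a boolean detection map."""
    big = 2 * (guard + train) + 1
    small = 2 * guard + 1
    sum_big = uniform_filter(power, size=big) * big**2
    sum_small = uniform_filter(power, size=small) * small**2
    n_train = big**2 - small**2
    noise = (sum_big - sum_small) / n_train   # mean over the training ring
    return power > scale * noise
```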
Probing the mechanism of fusion in a two-dimensional computer simulation.
Chanturiya, Alexandr; Scaria, Puthurapamil; Kuksenok, Oleksandr; Woodle, Martin C
2002-01-01
A two-dimensional (2D) model of lipid bilayers was developed and used to investigate a possible role of membrane lateral tension in membrane fusion. We found that an increase of lateral tension in contacting monolayers of 2D analogs of liposomes and planar membranes could cause not only hemifusion, but also complete fusion when internal pressure is introduced in the model. With a certain set of model parameters it was possible to induce hemifusion-like structural changes by a tension increase in only one of the two contacting bilayers. The effect of lysolipids was modeled as an insertion of a small number of extra molecules into the cis or trans side of the interacting bilayers at different stages of simulation. It was found that cis insertion arrests fusion and trans insertion has no inhibitory effect on fusion. The possibility of protein participation in tension-driven fusion was tested in simulation, with one of two model liposomes containing a number of structures capable of reducing the area occupied by them in the outer monolayer. It was found that condensation of these structures was sufficient to produce membrane reorganization similar to that observed in simulations with "protein-free" bilayers. These data support the hypothesis that changes in membrane lateral tension may be responsible for fusion in both model phospholipid membranes and in biological protein-mediated fusion. PMID:12023230
NASA Astrophysics Data System (ADS)
Guan, Yihong; Luo, Yatao; Yang, Tao; Qiu, Lei; Li, Junchang
2012-01-01
The spatial information captured by a Markov random field model can be used in image segmentation to effectively remove noise and obtain more accurate segmentation results. Based on the fuzziness and clustering of pixel gray-level information, we find the clustering centers of the different tissues and the background in a medical image through the fuzzy c-means clustering method. We then find the threshold points for multi-threshold segmentation through a two-dimensional histogram method and segment the image. Multivariate information is fused on the basis of Dempster-Shafer evidence theory to obtain the fused segmentation. This paper combines these three theories to propose a new human brain image segmentation method. Experimental results show that the segmentation result is more in line with human vision and is of vital significance for the accurate analysis of tissues.
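A minimal sketch of the fuzzy c-means stage on pixel gray levels; the two-dimensional histogram thresholding and Dempster-Shafer fusion stages are not reproduced, and the cluster count and iteration budget are assumptions.

```python
import numpy as np

def fuzzy_c_means(gray_values, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Cluster 1-D gray levels; m is the usual fuzziness exponent."""
    x = np.asarray(gray_values, dtype=float).ravel()
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)                          # random fuzzy memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)     # weighted cluster centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=0)                      # renormalize memberships
    return centers, u
```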
Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan
2014-01-01
This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale based fusion methods can often obtain fused images with good visual effect. However, because of the defects of the fusion rules, it is almost impossible to completely avoid the loss of useful information in the thus obtained fused images. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composited image is obtained. In the final fusion process, the image block residuals technique and consistency verification are used to detect the focusing areas and then a decision map is obtained. The map is used to guide how to achieve the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including no-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs. PMID:25587878
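A minimal sketch of the Sum-Modified-Laplacian focus measure used in the high-frequency rule above, averaged over a small window; the window size is an assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sum_modified_laplacian(img, window=3):
    """Modified Laplacian |2f - f_left - f_right| + |2f - f_up - f_down|,
    accumulated (here: averaged) over a local window."""
    f = img.astype(float)
    ml = (np.abs(2 * f - np.roll(f, 1, axis=1) - np.roll(f, -1, axis=1)) +
          np.abs(2 * f - np.roll(f, 1, axis=0) - np.roll(f, -1, axis=0)))
    return uniform_filter(ml, size=window)

# Usage: at each position keep the coefficient whose source has the larger SML.
```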
Mitochondrial Dynamics Tracking with Two-Photon Phosphorescent Terpyridyl Iridium(III) Complexes
NASA Astrophysics Data System (ADS)
Huang, Huaiyi; Zhang, Pingyu; Qiu, Kangqiang; Huang, Juanjuan; Chen, Yu; Ji, Liangnian; Chao, Hui
2016-02-01
Mitochondrial dynamics, including fission and fusion, control the morphology and function of mitochondria, and disruption of mitochondrial dynamics leads to Parkinson's disease, Alzheimer's disease, metabolic diseases, and cancers. Currently, many types of commercial mitochondrial probes are available, but high excitation energy and low photostability render them unsuitable for tracking mitochondrial dynamics in living cells. Therefore, mitochondrial targeting agents that exhibit superior resistance to photobleaching, deep tissue penetration and intrinsically high three-dimensional resolution are urgently needed. Two-photon-excited compounds that use low-energy near-infrared excitation lasers have emerged as non-invasive tools for cell imaging. In this work, terpyridyl cyclometalated Ir(III) complexes (Ir1-Ir3) are demonstrated as one- and two-photon phosphorescent probes for real-time imaging and tracking of mitochondrial morphology changes in living cells.
Vollnhals, Florian; Audinot, Jean-Nicolas; Wirtz, Tom; Mercier-Bonin, Muriel; Fourquaux, Isabelle; Schroeppel, Birgit; Kraushaar, Udo; Lev-Ram, Varda; Ellisman, Mark H; Eswara, Santhana
2017-10-17
Correlative microscopy combining various imaging modalities offers powerful insights into obtaining a comprehensive understanding of physical, chemical, and biological phenomena. In this article, we investigate two approaches for image fusion in the context of combining the inherently lower-resolution chemical images obtained using secondary ion mass spectrometry (SIMS) with the high-resolution ultrastructural images obtained using electron microscopy (EM). We evaluate the image fusion methods with three different case studies selected to broadly represent the typical samples in life science research: (i) histology (unlabeled tissue), (ii) nanotoxicology, and (iii) metabolism (isotopically labeled tissue). We show that the intensity-hue-saturation fusion method often applied for EM-sharpening can result in serious image artifacts, especially in cases where different contrast mechanisms interplay. Here, we introduce and demonstrate Laplacian pyramid fusion as a powerful and more robust alternative method for image fusion. Both physical and technical aspects of correlative image overlay and image fusion specific to SIMS-based correlative microscopy are discussed in detail alongside the advantages, limitations, and the potential artifacts. Quantitative metrics to evaluate the results of image fusion are also discussed.
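A minimal sketch of Laplacian pyramid fusion for a registered image pair such as a SIMS/EM overlay: band-pass levels are merged by maximum absolute value and the coarse level by averaging. The pyramid depth and merge rules are illustrative, not the authors' exact settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def build_laplacian_pyramid(img, levels=4):
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        down = gaussian_filter(cur, sigma=1.0)[::2, ::2]
        up = zoom(down, 2, order=1)[:cur.shape[0], :cur.shape[1]]
        pyr.append(cur - up)                 # band-pass residual
        cur = down
    pyr.append(cur)                          # coarse level
    return pyr

def fuse_pyramids(pa, pb):
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))    # average the coarse level
    return fused

def reconstruct(pyr):
    cur = pyr[-1]
    for band in reversed(pyr[:-1]):
        cur = zoom(cur, 2, order=1)[:band.shape[0], :band.shape[1]] + band
    return cur

# fused = reconstruct(fuse_pyramids(build_laplacian_pyramid(sims_img),
#                                   build_laplacian_pyramid(em_img)))
```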
Autofocus algorithm using one-dimensional Fourier transform and Pearson correlation
NASA Astrophysics Data System (ADS)
Bueno Mario, A.; Alvarez-Borrego, Josue; Acho, L.
2004-10-01
A new autofocus algorithm for a motorized-Z microscope, based on the one-dimensional Fourier transform and the Pearson correlation, is proposed. Our goal is to determine the best-focused plane quickly and accurately. Several bright- and dark-field images of a biological sample are captured at different Z distances. The algorithm applies the one-dimensional Fourier transform to a previously defined pattern of vectors to obtain their frequency content; by comparing the Pearson correlation of these frequency vectors with the frequency vector of the reference image (the most out-of-focus image), the best-focused plane is found. Experimental results showed that the algorithm determines the best focal plane from the captured images quickly and accurately. Its fast response time, accuracy, and robustness make it suitable for implementation in real-time systems. The algorithm can be used to obtain focused images in both bright and dark field, and it can be extended with fusion techniques to construct multifocus final images, which is beyond the scope of this paper.
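A minimal sketch of the underlying idea (our interpretation: a simple row-mean profile stands in for the paper's vector pattern, and we assume that low correlation with the most defocused image indicates sharpness):

```python
import numpy as np

def focus_score(image, reference_spectrum):
    """Score the sharpness of one Z-plane against a defocused reference."""
    profile = image.mean(axis=0)             # 1-D vector extracted from the image
    spectrum = np.abs(np.fft.rfft(profile))  # 1-D Fourier magnitude
    r = np.corrcoef(spectrum, reference_spectrum)[0, 1]  # Pearson correlation
    return 1.0 - r                           # lower similarity = sharper plane

# best_plane = max(z_stack, key=lambda im: focus_score(im, reference_spectrum))
```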
Godlewski, Guilhem; Gaubert, Jacques; Cristol-Gaubert, Renée; Radi, Maïada; Baecker, Volker; Travo, Pierre; Prudhomme, Michel; Prat-Pradal, Dominique
2011-10-01
The purpose of the present study was to illustrate the rotation of the ventral and dorsal pancreatic buds by three-dimensional (3D) reconstruction in rat embryos during Carnegie stages 13-17. Serial sections of thirty rat embryos at stages 13-17 were observed. The embryos were fixed in Bouin's solution, dehydrated, and paraffin embedded. The sections, 7 μm thick, were cut in longitudinal or transverse planes and were stained alternately with hematoxylin-eosin or Heidenhain's azan. The images were digitized with a Canon EOS 350D camera. The 3D reconstruction was performed by computer using Cell Image Analyser software. The two pancreatic buds, ventral and dorsal, were clearly identified at stage 13, in anterior and posterior position, respectively, in relation to the duodenum. At stage 15, the duodenum started its 90° clockwise rotation, and the ventral bud moved 90° from the midline to the right. At stage 16, the ventral pancreas continued its rotation to 180°, coming to lie posterior to the duodenum. At stage 17, the two pancreatic buds lay close to the ventral aspect of the portal vein and began to merge. The anterior face of the head of the pancreas arose from the dorsal pancreatic bud; the rest of the head, including the omental tuberosity and the uncinate process, emanated from the ventral pancreatic bud. The use of 3D reconstruction of the pancreas of rat embryos illustrates how the two pancreatic buds rotate and fuse, and explains the final position of the pancreas.
Direct observation of intermediate states in model membrane fusion
Keidel, Andrea; Bartsch, Tobias F.; Florin, Ernst-Ludwig
2016-01-01
We introduce a novel assay for membrane fusion of solid supported membranes on silica beads and on coverslips. Fusion of the lipid bilayers is induced by bringing an optically trapped bead in contact with the coverslip surface while observing the bead’s thermal motion with microsecond temporal and nanometer spatial resolution using a three-dimensional position detector. The probability of fusion is controlled by the membrane tension on the particle. We show that the progression of fusion can be monitored by changes in the three-dimensional position histograms of the bead and in its rate of diffusion. We were able to observe all fusion intermediates including transient fusion, formation of a stalk, hemifusion and the completion of a fusion pore. Fusion intermediates are characterized by axial but not lateral confinement of the motion of the bead and independently by the change of its rate of diffusion due to the additional drag from the stalk-like connection between the two membranes. The detailed information provided by this assay makes it ideally suited for studies of early events in pure lipid bilayer fusion or fusion assisted by fusogenic molecules. PMID:27029285
Walter, Uwe; Niendorf, Thoralf; Graessl, Andreas; Rieger, Jan; Krüger, Paul-Christian; Langner, Sönke; Guthoff, Rudolf F; Stachs, Oliver
2014-05-01
A combination of magnetic resonance images with real-time high-resolution ultrasound known as fusion imaging may improve ophthalmologic examination. This study was undertaken to evaluate the feasibility of orbital high-field magnetic resonance and real-time colour Doppler ultrasound image fusion and navigation. This case study, performed between April and June 2013, included one healthy man (age, 47 years) and two patients (one woman, 57 years; one man, 67 years) with choroidal melanomas. All cases underwent 7.0-T magnetic resonance imaging using a custom-made ocular imaging surface coil. The Digital Imaging and Communications in Medicine volume data set was then loaded into the ultrasound system for manual registration of the live ultrasound image and fusion imaging examination. Data registration, matching and then volume navigation were feasible in all cases. Fusion imaging provided real-time imaging capabilities and high tissue contrast of the choroidal tumour and optic nerve. It also allowed a real-time colour Doppler signal to be overlaid on the magnetic resonance images for assessment of the vasculature of the tumour and retrobulbar structures. The combination of orbital high-field magnetic resonance and colour Doppler ultrasound image fusion and navigation is feasible. Multimodal fusion imaging promises to foster assessment and monitoring of choroidal melanoma and optic nerve disorders. • Orbital magnetic resonance and colour Doppler ultrasound real-time fusion imaging is feasible • Fusion imaging combines the spatial and temporal resolution advantages of each modality • Magnetic resonance and ultrasound fusion imaging improves assessment of choroidal melanoma vascularisation.
Multiview echocardiography fusion using an electromagnetic tracking system.
Punithakumar, Kumaradevan; Hareendranathan, Abhilash R; Paakkanen, Riitta; Khan, Nehan; Noga, Michelle; Boulanger, Pierre; Becher, Harald
2016-08-01
Three-dimensional ultrasound is an emerging modality for the assessment of complex cardiac anatomy and function. The advantages of this modality include lack of ionizing radiation, portability, low cost, and high temporal resolution. Major limitations include a limited field-of-view, reliance on frequently limited acoustic windows, and a poor signal-to-noise ratio. This study proposes a novel approach to combine multiple views into a single image using an electromagnetic tracking system in order to improve the field-of-view. The novel method has several advantages: 1) it does not rely on image information for alignment, and therefore does not require image overlap; 2) the alignment accuracy of the proposed approach is not affected by poor image quality, as in the case of image registration based approaches; 3) in contrast to previous optical tracking based systems, the proposed approach does not suffer from line-of-sight limitations; and 4) it does not require any initial calibration. In this pilot project, using a heart phantom, we were able to show that our method can fuse multiple echocardiographic images and improve the field-of-view. Quantitative evaluations showed that the proposed method yielded a nearly optimal alignment of the image data sets in three-dimensional space. The proposed method demonstrates that the electromagnetic system can be used for the fusion of multiple echocardiography images with a seamless integration of sensors into the transducer.
Lee, Junkyo; Lee, Min Woo; Choi, Dongil; Cha, Dong Ik; Lee, Sunyoung; Kang, Tae Wook; Yang, Jehoon; Jo, Jaemoon; Bang, Won-Chul; Kim, Jongsik; Shin, Dongkuk
2017-12-21
The purpose of this study was to evaluate the accuracy of an active contour model for estimating the posterior ablative margin in images obtained by the fusion of real-time ultrasonography (US) and 3-dimensional (3D) US or magnetic resonance (MR) images of an experimental tumor model for radiofrequency ablation. Chickpeas (n=12) and bovine rump meat (n=12) were used as an experimental tumor model. Grayscale 3D US and T1-weighted MR images were pre-acquired for use as reference datasets. US and MR/3D US fusion was performed for one group (n=4), and US and 3D US fusion only (n=8) was performed for the other group. Half of the models in each group were completely ablated, while the other half were incompletely ablated. Hyperechoic ablation areas were extracted using an active contour model from real-time US images, and the posterior margin of the ablation zone was estimated from the anterior margin. After the experiments, the ablated pieces of bovine rump meat were cut along the electrode path and the cut planes were photographed. The US images with the estimated posterior margin were compared with the photographs and post-ablation MR images. The extracted contours of the ablation zones from 12 US fusion videos and post-ablation MR images were also matched. In the four models fused under real-time US with MR/3D US, compression from the transducer and the insertion of an electrode resulted in misregistration between the real-time US and MR images, making the estimation of the ablation zones less accurate than was achieved through fusion between real-time US and 3D US. Eight of the 12 post-ablation 3D US images were graded as good when compared with the sectioned specimens, and 10 of the 12 were graded as good in a comparison with nicotinamide adenine dinucleotide staining and histopathologic results. Estimating the posterior ablative margin using an active contour model is a feasible way of predicting the ablation area, and US/3D US fusion was more accurate than US/MR fusion.
Gender recognition from unconstrained and articulated human body.
Wu, Qin; Guo, Guodong
2014-01-01
Gender recognition has many useful applications, ranging from business intelligence to image search and social activity analysis. Traditional research on gender recognition focuses on face images in a constrained environment. This paper proposes a method for gender recognition in articulated human body images acquired from an unconstrained environment in the real world. A systematic study of some critical issues in body-based gender recognition, such as which body parts are informative, how many body parts need to be combined, and what representations are good for articulated body-based gender recognition, is also presented. This paper also pursues data fusion schemes and efficient feature dimensionality reduction based on partial least squares estimation. Extensive experiments are performed on two unconstrained databases which have not been explored before for gender recognition.
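A hedged sketch of the dimensionality-reduction step, using scikit-learn's PLS as a stand-in (shapes, labels, and the downstream classifier are invented for illustration):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import LinearSVC

X = np.random.rand(200, 5000)         # stacked body-part descriptors (toy data)
y = np.random.randint(0, 2, 200)      # 0 = female, 1 = male (toy labels)

pls = PLSRegression(n_components=50)  # supervised projection to 50 dimensions
X_reduced, _ = pls.fit_transform(X, y)
clf = LinearSVC().fit(X_reduced, y)   # any classifier can follow the reduction
```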
Landcover classification in MRF context using Dempster-Shafer fusion for multisensor imagery.
Sarkar, Anjan; Banerjee, Anjan; Banerjee, Nilanjan; Brahma, Siddhartha; Kartikeyan, B; Chakraborty, Manab; Majumder, K L
2005-05-01
This work deals with multisensor data fusion to obtain landcover classification. The role of feature-level fusion using the Dempster-Shafer rule and that of data-level fusion in the MRF context is studied in this paper to obtain an optimally segmented image. Subsequently, segments are validated and classification accuracy for the test data is evaluated. Two examples of data fusion of optical images and a synthetic aperture radar image are presented, each set having been acquired on different dates. Classification accuracies of the technique proposed are compared with those of some recent techniques in literature for the same image data.
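For reference, Dempster's rule of combination for two mass functions can be sketched in a few lines (the landcover classes and masses below are invented):

```python
def dempster_combine(m1, m2):
    """Combine two mass functions given as {frozenset(labels): mass} dicts."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:                      # compatible evidence
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:                          # conflicting evidence
                conflict += ma * mb
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

m_optical = {frozenset({'water'}): 0.6, frozenset({'water', 'urban'}): 0.4}
m_sar = {frozenset({'water'}): 0.5, frozenset({'urban'}): 0.3,
         frozenset({'water', 'urban'}): 0.2}
print(dempster_combine(m_optical, m_sar))
```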
Scholkmann, Felix; Revol, Vincent; Kaufmann, Rolf; Baronowski, Heidrun; Kottler, Christian
2014-03-21
This paper introduces a new image denoising, fusion and enhancement framework for the combination and optimal visualization of x-ray attenuation contrast (AC), differential phase contrast (DPC) and dark-field contrast (DFC) images retrieved from x-ray Talbot-Lau grating interferometry. The new image fusion framework comprises three steps: (i) denoising each input image (AC, DPC and DFC) through adaptive Wiener filtering, (ii) performing a two-step image fusion process based on the shift-invariant wavelet transform, i.e. first fusing the AC with the DPC image and then fusing the resulting image with the DFC image, and finally (iii) enhancing the fused image to obtain a final image using adaptive histogram equalization, adaptive sharpening and contrast optimization. Application examples are presented for two biological objects (a human tooth and a cherry) and the proposed method is compared to two recently published AC/DPC/DFC image processing techniques. In conclusion, the new framework for the processing of AC, DPC and DFC images allows the most relevant features of all three images to be combined in one image while reducing the noise and adaptively enhancing the relevant image features. The newly developed framework may be used in technical and medical applications.
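A rough sketch of the three-step pipeline using off-the-shelf building blocks (PyWavelets' stationary wavelet transform in place of the authors' shift-invariant transform; the wavelet, level, and fusion rule are assumptions, and image sides must be divisible by 2**level):

```python
import numpy as np
import pywt
from scipy.signal import wiener
from skimage import exposure

def fuse_pair(a, b, wavelet='db2', level=2):
    a, b = wiener(a), wiener(b)                      # step (i): adaptive denoising
    ca = pywt.swt2(a, wavelet, level=level)          # shift-invariant decomposition
    cb = pywt.swt2(b, wavelet, level=level)
    fused = []
    for (la, da), (lb, db) in zip(ca, cb):
        approx = 0.5 * (la + lb)                     # average approximations
        details = tuple(np.where(np.abs(x) >= np.abs(y), x, y)  # max-abs details
                        for x, y in zip(da, db))
        fused.append((approx, details))
    out = pywt.iswt2(fused, wavelet)                 # step (ii): reconstruction
    out = (out - out.min()) / (np.ptp(out) + 1e-12)  # rescale to [0, 1]
    return exposure.equalize_adapthist(out)          # step (iii): adaptive equalization

# Two-step fusion: ac_dpc = fuse_pair(ac, dpc); final = fuse_pair(ac_dpc, dfc)
```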
Ellmauthaler, Andreas; Pagliari, Carla L; da Silva, Eduardo A B
2013-03-01
Multiscale transforms are among the most popular techniques in the field of pixel-level image fusion. However, the fusion performance of these methods often deteriorates for images derived from different sensor modalities. In this paper, we demonstrate that for such images, results can be improved using a novel undecimated wavelet transform (UWT)-based fusion scheme, which splits the image decomposition process into two successive filtering operations using spectral factorization of the analysis filters. The actual fusion takes place after convolution with the first filter pair. Its significantly smaller support size leads to the minimization of the unwanted spreading of coefficient values around overlapping image singularities. This usually complicates the feature selection process and may lead to the introduction of reconstruction errors in the fused image. Moreover, we will show that the nonsubsampled nature of the UWT allows the design of nonorthogonal filter banks, which are more robust to artifacts introduced during fusion, additionally improving the obtained results. The combination of these techniques leads to a fusion framework, which provides clear advantages over traditional multiscale fusion approaches, independent of the underlying fusion rule, and reduces unwanted side effects such as ringing artifacts in the fused reconstruction.
Design of an Image Fusion Phantom for a Small Animal microPET/CT Scanner Prototype
NASA Astrophysics Data System (ADS)
Nava-García, Dante; Alva-Sánchez, Héctor; Murrieta-Rodríguez, Tirso; Martínez-Dávalos, Arnulfo; Rodríguez-Villafuerte, Mercedes
2010-12-01
Two separate microtomography systems recently developed at Instituto de Física, UNAM, produce anatomical (microCT) and physiological (microPET) images of small animals. In this work, the development and initial tests of an image fusion method based on fiducial markers for image registration between the two modalities are presented. A modular Helix/Line-Sources phantom was designed and constructed; this phantom contains fiducial markers that can be visualized in both imaging systems. The registration was carried out by solving the rigid-body alignment problem of Procrustes to obtain the rotation and translation matrices required to align the two sets of images. The microCT/microPET image fusion of the Helix/Line-Sources phantom shows excellent visual coincidence between the different structures, with a calculated target registration error of 0.32 mm.
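The rigid Procrustes alignment can be solved in closed form with the Kabsch/SVD method; a minimal sketch (marker coordinates are assumed already matched between the two modalities):

```python
import numpy as np

def rigid_register(src, dst):
    """Find rotation R and translation t minimizing ||R @ src_i + t - dst_i||.

    src, dst: (N, 3) arrays of matched fiducial-marker centroids.
    """
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Map microPET coordinates into microCT space:
# aligned = (R @ pet_points.T).T + t
```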
Feng, Peng; Wang, Jing; Wei, Biao; Mi, Deling
2013-01-01
A hybrid multiscale and multilevel image fusion algorithm for green fluorescent protein (GFP) images and phase contrast images of Arabidopsis cells is proposed in this paper. Combining the intensity-hue-saturation (IHS) transform and the sharp frequency localization contourlet transform (SFL-CT), this algorithm uses different fusion strategies for different detail subbands, including a neighborhood consistency measurement (NCM) that can adaptively balance the color background against the gray structure. Two kinds of neighborhood classes based on an empirical model are also taken into consideration. Visual information fidelity (VIF) is introduced as an objective criterion to evaluate the fused image. Experimental results on 117 groups of Arabidopsis cell images from the John Innes Center show that the new algorithm not only preserves the details of the original images well but also improves the visibility of the fused image, demonstrating the superiority of the novel method over traditional ones. PMID:23476716
NASA Astrophysics Data System (ADS)
Bonfiglio, D.; Chacón, L.; Cappello, S.
2010-08-01
With the increasing impact of scientific discovery via advanced computation, there is presently a strong emphasis on ensuring the mathematical correctness of computational simulation tools. Such endeavor, termed verification, is now at the center of most serious code development efforts. In this study, we address a cross-benchmark nonlinear verification study between two three-dimensional magnetohydrodynamics (3D MHD) codes for fluid modeling of fusion plasmas, SPECYL [S. Cappello and D. Biskamp, Nucl. Fusion 36, 571 (1996)] and PIXIE3D [L. Chacón, Phys. Plasmas 15, 056103 (2008)], in their common limit of application: the simple viscoresistive cylindrical approximation. SPECYL is a serial code in cylindrical geometry that features a spectral formulation in space and a semi-implicit temporal advance, and has been used extensively to date for reversed-field pinch studies. PIXIE3D is a massively parallel code in arbitrary curvilinear geometry that features a conservative, solenoidal finite-volume discretization in space, and a fully implicit temporal advance. The present study is, in our view, a first mandatory step in assessing the potential of any numerical 3D MHD code for fluid modeling of fusion plasmas. Excellent agreement is demonstrated over a wide range of parameters for several fusion-relevant cases in both two- and three-dimensional geometries.
Range and Panoramic Image Fusion Into a Textured Range Image for Culture Heritage Documentation
NASA Astrophysics Data System (ADS)
Bila, Z.; Reznicek, J.; Pavelka, K.
2013-07-01
This paper deals with the fusion of range and panoramic images, where the range image is acquired by a 3D laser scanner and the panoramic image is acquired with a digital still camera mounted on a panoramic head and tripod. The fused dataset, called a "textured range image", provides conservators and historians with more reliable information about the investigated object than using either dataset separately. A simple example of the fusion of range and panoramic images, both obtained in St. Francis Xavier Church in the town of Opařany, is given here. Firstly, we describe the process of data acquisition, then the processing of both datasets into a format suitable for fusion, and finally the fusion itself. The process of fusion can be divided into two main parts: transformation and remapping. In the first (transformation) part, the two images are related by matching similar features detected in both images with a suitable detector, which yields a transformation matrix that maps the range image onto the panoramic image. The range data are then remapped from the range image space into the panoramic image space and stored as an additional "range" channel. The image fusion process is validated by comparing similar features extracted from both datasets.
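A hedged sketch of the two parts with OpenCV (ORB is our stand-in for "a suitable detector", a homography stands in for the transform model, and the file names are placeholders; none of these are specified by the paper):

```python
import cv2
import numpy as np

# Placeholders: range image rendered as 8-bit intensity, grayscale panorama.
range_img = cv2.imread('range_intensity.png', cv2.IMREAD_GRAYSCALE)
pano_img = cv2.imread('panorama.png', cv2.IMREAD_GRAYSCALE)

# Transformation part: match features and estimate a mapping.
orb = cv2.ORB_create(4000)
kp_r, des_r = orb.detectAndCompute(range_img, None)
kp_p, des_p = orb.detectAndCompute(pano_img, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_r, des_p)
src = np.float32([kp_r[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_p[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

# Remapping part: warp the range data into panorama space and keep it
# as an additional "range" channel next to the colour texture.
range_in_pano = cv2.warpPerspective(range_img, H,
                                    (pano_img.shape[1], pano_img.shape[0]))
```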
Fully Convolutional Network-Based Multifocus Image Fusion.
Guo, Xiaopeng; Nie, Rencan; Cao, Jinde; Zhou, Dongming; Qian, Wenhua
2018-07-01
As the optical lenses of cameras always have a limited depth of field, captured images of the same scene are not all in focus. Multifocus image fusion is an efficient technology that can synthesize an all-in-focus image from several partially focused images. Previous methods have accomplished the fusion task in spatial or transform domains; however, the fusion rules are always a problem in most methods. In this letter, from the aspect of focus region detection, we propose a novel multifocus image fusion method based on a fully convolutional network (FCN) learned from synthesized multifocus images. The primary novelty of this method is that the pixel-wise focus regions are detected through a learned FCN, and the entire image, not just image patches, is exploited to train the FCN. First, we synthesize 4500 pairs of multifocus images, by repeatedly applying a Gaussian filter to each image from PASCAL VOC 2012, to train the FCN. After that, a pair of source images is fed into the trained FCN, and two score maps indicating the focus property are generated. Next, an inverted score map is averaged with the other score map to produce an aggregative score map, which takes full advantage of the focus probabilities in the two score maps. We apply a fully connected conditional random field (CRF) to the aggregative score map to produce and refine a binary decision map for the fusion task. Finally, we exploit a weighted strategy based on the refined decision map to produce the fused image. To demonstrate the performance of the proposed method, we compare its fused results with several state-of-the-art methods on both a gray data set and a color data set. Experimental results show that the proposed method can achieve superior fusion performance in both human visual quality and objective assessment.
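Setting the FCN itself aside, the aggregation and decision steps reduce to a few lines; a sketch under assumptions (the score maps are taken to be FCN outputs in [0, 1], the 0.5 threshold is our choice, and the CRF refinement is omitted):

```python
import numpy as np

def fuse_with_scores(img_a, img_b, score_a, score_b):
    """score_a, score_b: per-pixel focus-probability maps for the two sources."""
    aggregative = 0.5 * (score_a + (1.0 - score_b))    # invert one map, then average
    decision = (aggregative > 0.5).astype(np.float64)  # binary decision map
    return decision * img_a + (1.0 - decision) * img_b # weighted composition
```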
NASA Technical Reports Server (NTRS)
Beyer, J.; Jacobus, C.; Mitchell, B.
1987-01-01
Range imagery from a laser scanner can be used to provide sufficient information for docking and obstacle avoidance procedures to be performed automatically. Three-dimensional model-based computer vision algorithms in development can perform these tasks even with targets which may not be cooperative (that is, objects without special targets or markers to provide unambiguous location points). Roll, pitch and yaw of the vehicle can be taken into account as image scanning takes place, so that these can be corrected when the image is converted from egocentric to world coordinates. Other attributes of the sensor, such as the registered reflectance and texture channels, provide additional data sources for algorithm robustness. Temporal fusion of sensor images can take place in the world coordinate domain, allowing complex maps to be built in three-dimensional space.
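The roll/pitch/yaw correction amounts to applying a rotation matrix per scan point; a minimal sketch (the Z-Y-X axis convention is our assumption, not stated in the report):

```python
import numpy as np

def rpy_matrix(roll, pitch, yaw):
    """Rotation matrix from roll/pitch/yaw angles in radians (Z-Y-X order)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

# Egocentric-to-world conversion of one scan point:
# p_world = rpy_matrix(r, p, y) @ p_ego + sensor_position
```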
Attenberger, Ulrike I; Rathmann, Nils; Sertdemir, Metin; Riffel, Philipp; Weidner, Anja; Kannengiesser, Stefan; Morelli, John N; Schoenberg, Stefan O; Hausmann, Daniel
2016-06-01
Spatially-tailored radiofrequency (RF) excitation pulses in echo-planar imaging (EPI), combined with a decreased FOV in the phase-encoding direction, enable a reduction of k-space acquisition lines, which shortens the echo train length (ETL) and reduces susceptibility artifacts. The purpose of this study was to evaluate the image quality of a zoomed EPI (z-EPI) sequence in diffusion-weighted imaging (DWI) of the prostate in comparison to a conventional single-shot EPI using single-channel (c-EPI1) and multi-channel (c-EPI2) RF excitation, with and without use of an endorectal coil. 33 consecutive patients (mean age: 61 ± 9 years; mean PSA: 8.67 ± 6.23 ng/ml) with examinations between 10/2012 and 02/2014 were analyzed in this retrospective study. In 26 of 33 patients the initial multiparametric (mp)-MRI was performed on a whole-body 3T scanner (Magnetom Trio, Siemens, Erlangen, Germany) using an endorectal coil (conventional, c-EPI1). Zoomed EPI (z-EPI) examinations of these patients and a complete mp-MRI protocol including c-EPI2 of 7 additional patients were carried out on another whole-body 3T MR scanner with two-channel dynamic parallel transmit capability (Magnetom Skyra with TimTX TrueShape, Siemens). For z-EPI, the one-dimensional spatially selective RF excitation pulse was replaced by a two-dimensional RF pulse. Degree of image blur and susceptibility artifacts (0 = not present to 3 = non-diagnostic), maximum image distortion (mm), apparent diffusion coefficient (ADC) values, as well as overall scan preference were evaluated. SNR maps were generated to compare c-EPI2 and z-EPI. Overall image quality of z-EPI was preferred by both readers in all examinations with a single exception. Susceptibility artifacts were rated significantly lower on z-EPI compared to both other methods (z-EPI vs c-EPI1: p<0.01; z-EPI vs c-EPI2: p<0.01), as was image blur (z-EPI vs c-EPI1: p<0.01; z-EPI vs c-EPI2: p<0.01). Image distortion was not statistically significantly reduced with z-EPI (z-EPI vs c-EPI1: p=0.12; z-EPI vs c-EPI2: p=0.42). Interobserver agreement for ratings of susceptibility artifacts, image blur and overall scan preference was good. SNR was higher for z-EPI than for c-EPI1 (n=1). Z-EPI leads to significant improvements in image quality, artifact reduction and image blur reduction, improving prostate DWI and enabling accurate fusion with conventional sequences. The improved fusion could offer advantages for MRI-guided biopsy of suspicious lesions and for locally ablative procedures for prostate cancer. Copyright © 2015. Published by Elsevier GmbH.
Yang, Guocheng; Li, Meiling; Chen, Leiting; Yu, Jie
2015-01-01
We propose a novel medical image fusion scheme based on the statistical dependencies between coefficients in the nonsubsampled contourlet transform (NSCT) domain, in which the probability density function of the NSCT coefficients is concisely fitted using a generalized Gaussian density (GGD), and the similarity of two subbands is accurately measured by the Jensen-Shannon divergence of the two GGDs. To preserve more useful information from the source images, new fusion rules are developed to combine subbands of different frequencies: the low-frequency subbands are fused using two activity measures based on the regional standard deviation and Shannon entropy, while the high-frequency subbands are merged via weight maps determined by the saliency values of pixels. The experimental results demonstrate that the proposed method significantly outperforms conventional NSCT-based medical image fusion approaches in both visual perception and evaluation indices. PMID:26557871
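The subband-similarity computation reduces to a Jensen-Shannon divergence; a minimal sketch over discretized densities (in the paper these are GGD fits rather than the raw histograms used here):

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two densities on a common support."""
    p, q = p / (p.sum() + eps), q / (q.sum() + eps)
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log(a[mask] / (b[mask] + eps)))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# A small divergence between the fitted densities of two corresponding
# subbands indicates similar information content in that subband.
```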
Lipid-bilayer-assisted two-dimensional self-assembly of DNA origami nanostructures
NASA Astrophysics Data System (ADS)
Suzuki, Yuki; Endo, Masayuki; Sugiyama, Hiroshi
2015-08-01
Self-assembly is a ubiquitous approach to the design and fabrication of novel supermolecular architectures. Here we report a strategy termed `lipid-bilayer-assisted self-assembly' that is used to assemble DNA origami nanostructures into two-dimensional lattices. DNA origami structures are electrostatically adsorbed onto a mica-supported zwitterionic lipid bilayer in the presence of divalent cations. We demonstrate that the bilayer-adsorbed origami units are mobile on the surface and self-assembled into large micrometre-sized lattices in their lateral dimensions. Using high-speed atomic force microscopy imaging, a variety of dynamic processes involved in the formation of the lattice, such as fusion, reorganization and defect filling, are successfully visualized. The surface modifiability of the assembled lattice is also demonstrated by in situ decoration with streptavidin molecules. Our approach provides a new strategy for preparing versatile scaffolds for nanofabrication and paves the way for organizing functional nanodevices in a micrometer space.
Weber-aware weighted mutual information evaluation for infrared-visible image fusion
NASA Astrophysics Data System (ADS)
Luo, Xiaoyan; Wang, Shining; Yuan, Ding
2016-10-01
A performance metric for infrared and visible image fusion is proposed based on Weber's law. To indicate the stimulus of source images, two Weber components are provided. One is differential excitation to reflect the spectral signal of visible and infrared images, and the other is orientation to capture the scene structure feature. By comparing the corresponding Weber component in infrared and visible images, the source pixels can be marked with different dominant properties in intensity or structure. If the pixels have the same dominant property label, the pixels are grouped to calculate the mutual information (MI) on the corresponding Weber components between dominant source and fused images. Then, the final fusion metric is obtained via weighting the group-wise MI values according to the number of pixels in different groups. Experimental results demonstrate that the proposed metric performs well on popular image fusion cases and outperforms other image fusion metrics.
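The differential-excitation component comes from the Weber local descriptor; a minimal sketch (the boundary handling and the epsilon guard are our choices):

```python
import numpy as np
from scipy.ndimage import convolve

def differential_excitation(img, eps=1e-6):
    """Weber differential excitation: arctan(sum_i (x_i - x_c) / x_c) over
    the 8-neighbourhood of each pixel x_c."""
    img = img.astype(np.float64) + eps
    k = np.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]], dtype=float)
    return np.arctan(convolve(img, k, mode='nearest') / img)
```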
Graph-based Data Modeling and Analysis for Data Fusion in Remote Sensing
NASA Astrophysics Data System (ADS)
Fan, Lei
Hyperspectral imaging provides the capability of increased sensitivity and discrimination over traditional imaging methods by combining standard digital imaging with spectroscopic methods. For each individual pixel in a hyperspectral image (HSI), a continuous spectrum is sampled as the spectral reflectance/radiance signature to facilitate identification of ground cover and surface material. The abundant spectrum knowledge allows all available information from the data to be mined. The superior qualities within hyperspectral imaging allow wide applications such as mineral exploration, agriculture monitoring, and ecological surveillance. The processing of massive high-dimensional HSI datasets is a challenge since many data processing techniques have a computational complexity that grows exponentially with the dimension. Besides, an HSI dataset may contain a limited number of degrees of freedom due to the high correlations between data points and among the spectra. On the other hand, merely taking advantage of the sampled spectrum of individual HSI data points may produce inaccurate results due to the mixed nature of raw HSI data, such as mixed pixels and optical interference. Fusion strategies are widely adopted in data processing to achieve better performance, especially in the field of classification and clustering. There are mainly three types of fusion strategies, namely low-level data fusion, intermediate-level feature fusion, and high-level decision fusion. Low-level data fusion combines multi-source data that is expected to be complementary or cooperative. Intermediate-level feature fusion aims at selection and combination of features to remove redundant information. Decision-level fusion exploits a set of classifiers to provide more accurate results. These fusion strategies have wide applications, including HSI data processing. With the fast development of multiple remote sensing modalities, e.g., Very High Resolution (VHR) optical sensors, LiDAR, etc., fusion of multi-source data can in principle produce more detailed information than each single source. On the other hand, besides the abundant spectral information contained in HSI data, features such as texture and shape may be employed to represent data points from a spatial perspective. Furthermore, feature fusion also includes the strategy of removing redundant and noisy features in the dataset. One of the major problems in machine learning and pattern recognition is to develop appropriate representations for complex nonlinear data. In HSI processing, a particular data point is usually described as a vector with coordinates corresponding to the intensities measured in the spectral bands. This vector representation permits the application of linear and nonlinear transformations with linear algebra to find an alternative representation of the data. More generally, HSI is multi-dimensional in nature and the vector representation may lose the contextual correlations. Tensor representation provides a more sophisticated modeling technique and a higher-order generalization to linear subspace analysis. In graph theory, data points can be generalized as nodes with connectivities measured from the proximity of a local neighborhood. The graph-based framework efficiently characterizes the relationships among the data and allows for convenient mathematical manipulation in many applications, such as data clustering, feature extraction, feature selection and data alignment.
In this thesis, graph-based approaches to multi-source feature and data fusion in remote sensing are explored. We mainly investigate the fusion of spatial, spectral and LiDAR information with linear and multilinear algebra under a graph-based framework for data clustering and classification problems.
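As a toy illustration of the graph model underlying these approaches (k, the kernel, and the data are invented; scikit-learn's kneighbors_graph builds the sparse adjacency):

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

X = np.random.rand(1000, 200)     # 1000 pixels x 200 spectral bands (toy HSI)
W = kneighbors_graph(X, n_neighbors=10, mode='distance', include_self=False)
sigma = W.data.mean()             # bandwidth from the mean neighbour distance
W.data = np.exp(-(W.data ** 2) / (2 * sigma ** 2))  # Gaussian affinity weights
```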
Wang, Jie-sheng; Han, Shuang; Shen, Na-na
2014-01-01
For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, an echo state network (ESN) based fusion soft-sensor model optimized by an improved glowworm swarm optimization (GSO) algorithm is proposed. Firstly, color features (saturation and brightness) and texture features (angular second moment, sum entropy, inertia moment, etc.) based on the grey-level co-occurrence matrix (GLCM) are adopted to describe the visual characteristics of the flotation froth image. Then the kernel principal component analysis (KPCA) method is used to reduce the dimensionality of the high-dimensional input vector composed of the flotation froth image characteristics and process data, extracting the nonlinear principal components in order to reduce the ESN input dimension and network complexity. The ESN soft-sensor model of the flotation process is optimized by the GSO algorithm with a congestion factor. Simulation results show that the model has good generalization and prediction accuracy, meeting the online soft-sensor requirements of real-time control in the flotation process. PMID:24982935
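A hedged sketch of the feature pipeline with off-the-shelf tools (scikit-image's GLCM properties overlap with, but do not exactly match, the paper's texture list; all parameter values are invented):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import KernelPCA

def froth_features(gray_img):
    """GLCM texture descriptors of a froth image (gray_img: 2-D uint8 array)."""
    glcm = graycomatrix(gray_img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, normed=True)
    props = ['ASM', 'contrast', 'correlation', 'energy', 'homogeneity']
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# X = np.vstack([froth_features(im) for im in froth_images])
# X_nl = KernelPCA(n_components=5, kernel='rbf').fit_transform(X)  # nonlinear PCs
```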
A fast fusion scheme for infrared and visible light images in NSCT domain
NASA Astrophysics Data System (ADS)
Zhao, Chunhui; Guo, Yunting; Wang, Yulei
2015-09-01
Fusion of infrared and visible light images is an effective way to obtain a simultaneous visualization of the background detail provided by the visible light image and the hidden target information provided by the infrared image, making the result more suitable for browsing and further processing. Two crucial goals for infrared and visible light image fusion are improving fusion performance and reducing computational burden. In this paper, a novel fusion algorithm named pixel information estimation is proposed, which determines the weights by evaluating the information of each pixel and is well suited to visible light and infrared image fusion, with better fusion quality and lower time consumption. Besides, a fast realization of the non-subsampled contourlet transform is also proposed in this paper to improve computational efficiency. To verify the advantage of the proposed method, this paper compares it with several popular methods on six evaluation metrics over four different image groups. Experimental results show that the proposed algorithm achieves more effective results with much less time consumption and performs well in both subjective evaluation and objective indicators.
Image fusion via nonlocal sparse K-SVD dictionary learning.
Li, Ying; Li, Fangyi; Bai, Bendu; Shen, Qiang
2016-03-01
Image fusion aims to merge two or more images of the same scene, captured via various sensors, to construct a more informative image by integrating their details. Generally, such integration is achieved through the manipulation of the representations of the images concerned. Sparse representation plays an important role in the effective description of images, offering great potential in a variety of image processing tasks, including image fusion. Supported by sparse representation, in this paper, an approach for image fusion by the use of a novel dictionary learning scheme is proposed. The nonlocal self-similarity property of the images is exploited, not only at the stage of learning the underlying description dictionary but also during the process of image fusion. In particular, the property of nonlocal self-similarity is combined with the traditional sparse dictionary. This results in an improved learned dictionary, hereafter referred to as the nonlocal sparse K-SVD dictionary (where K-SVD stands for the K times singular value decomposition that is commonly used in the literature), and abbreviated to NL_SK_SVD. The performance of the NL_SK_SVD dictionary is applied for image fusion using simultaneous orthogonal matching pursuit. The proposed approach is evaluated with different types of images and compared with a number of alternative image fusion techniques. The superior fused images produced by the present approach demonstrate the efficacy of the NL_SK_SVD dictionary in sparse image representation.
Boulanger, Pierre; Flores-Mir, Carlos; Ramirez, Juan F; Mesa, Elizabeth; Branch, John W
2009-01-01
The measurements from registered images obtained from Cone Beam Computed Tomography (CBCT) and a photogrammetric sensor are used to track three-dimensional shape variations of orthodontic patients before and after their treatments. The methodology consists of five main steps: (1) the patient's bone and skin shapes are measured in 3D using the fusion of images from a CBCT and a photogrammetric sensor. (2) The bone shape is extracted from the CBCT data using a standard marching cube algorithm. (3) The bone and skin shape measurements are registered using titanium targets located on the head of the patient. (4) Using a manual segmentation technique the head and lower jaw geometry are extracted separately to deal with jaw motion at the different record visits. (5) Using natural features of the upper head the two datasets are then registered with each other and then compared to evaluate bone, teeth, and skin displacements before and after treatments. This procedure is now used at the University of Alberta orthodontic clinic.
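A minimal sketch of the bone-extraction step (2) with scikit-image's standard marching-cubes implementation (the toy volume, isovalue and voxel spacing below are placeholders, not taken from the paper):

```python
import numpy as np
from skimage.measure import marching_cubes

volume = np.random.rand(64, 64, 64)   # stands in for the CBCT intensity volume
verts, faces, normals, values = marching_cubes(volume, level=0.5,
                                               spacing=(0.3, 0.3, 0.3))
# The resulting triangle mesh (verts, faces) can then be registered to the
# photogrammetric skin mesh via the titanium-target correspondences (steps 3-5).
```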
Solid polystyrene and deuterated polystyrene light output response to fast neutrons
NASA Astrophysics Data System (ADS)
Simpson, R.; Danly, C.; Glebov, V. Yu.; Hurlbut, C.; Merrill, F. E.; Volegov, P. L.; Wilde, C.
2016-04-01
The Neutron Imaging System has proven to be an important diagnostic in studying DT implosion characteristics at the National Ignition Facility. The current system depends on a polystyrene scintillating fiber array, which detects fusion neutrons born in the DT hotspot as well as neutrons that have scattered to lower energies in the surrounding cold fuel. Increasing neutron yields at NIF, as well as a desire to resolve three-dimensional information about the fuel assembly, have provided the impetus to build and install two additional next-generation neutron imaging systems. We are currently investigating a novel neutron imaging system that will utilize a deuterated polystyrene (CD) fiber array instead of standard hydrogen-based polystyrene (CH). Studies of deuterated xylene or deuterated benzene liquid scintillator show an improvement in imaging resolution by a factor of two [L. Disdier et al., Rev. Sci. Instrum. 75, 2134 (2004)], but also a reduction in light output [V. Bildstein et al., Nucl. Instrum. Methods Phys. Res., Sect. A 729, 188 (2013); M. I. Ojaruega, Ph.D. thesis, University of Michigan, 2009; M. T. Febbraro, Ph.D. thesis, University of Michigan, 2014] as compared to standard plastic. Tests of the relative light output of deuterated polystyrene and standard polystyrene were completed using 14 MeV fusion neutrons generated through implosions of deuterium-tritium filled capsules at the OMEGA laser facility. In addition, we collected data of the relative response of these two scintillators to a wide energy range of neutrons (1-800 MeV) at the Weapons Neutrons Research Facility. Results of these measurements are presented.
Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification.
Rajagopal, Gayathri; Palaniswamy, Ramamoorthy
2015-01-01
This research proposes a multimodal multifeature biometric system for human recognition using two traits, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal and multifeature biometric system using feature level fusion to achieve better performance. The main aim of the proposed system is to increase the recognition accuracy using feature level fusion. The features at the feature level are raw biometric data, which contain richer information than those at the decision and matching score levels; hence information fused at the feature level is expected to yield improved recognition accuracy. However, information fused at the feature level suffers from the curse of dimensionality; here PCA (principal component analysis) is used to reduce the dimensionality of the feature sets, as they are high dimensional. The proposed multimodal results were compared with other multimodal and monomodal approaches. Among these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm is tested on a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database.
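A minimal sketch of feature-level fusion followed by PCA and a KNN classifier (shapes, component counts, and labels are invented for illustration, not taken from the paper):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

palm = np.random.rand(100, 512)      # palmprint feature vectors (toy data)
iris = np.random.rand(100, 512)      # iris feature vectors (toy data)
labels = np.repeat(np.arange(10), 10)

fused = np.hstack([palm, iris])                     # feature-level fusion
reduced = PCA(n_components=40).fit_transform(fused) # tame the dimensionality
knn = KNeighborsClassifier(n_neighbors=3).fit(reduced, labels)
```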
Optical design and development of a snapshot light-field laryngoscope
NASA Astrophysics Data System (ADS)
Zhu, Shuaishuai; Jin, Peng; Liang, Rongguang; Gao, Liang
2018-02-01
The convergence of recent advances in optical fabrication and digital processing has yielded a new generation of imaging technology: light-field (LF) cameras, which bridge the realms of applied mathematics, optics, and high-performance computing. Herein, for the first time, we introduce the paradigm of LF imaging into laryngoscopy. The resultant probe can image the three-dimensional shape of the vocal folds within a single camera exposure. Furthermore, to improve the spatial resolution, we developed an image fusion algorithm, providing a simple solution to a long-standing problem in LF imaging.
A method based on IHS cylindrical transform model for quality assessment of image fusion
NASA Astrophysics Data System (ADS)
Zhu, Xiaokun; Jia, Yonghong
2005-10-01
Image fusion techniques have been widely applied to remote sensing image analysis and processing, and methods for the quality assessment of image fusion in remote sensing have become a research issue both at home and abroad. Traditional assessment methods combine the calculation of quantitative indexes with visual interpretation to compare fused images quantitatively and qualitatively. However, the existing assessment methods have two defects: on one hand, most indexes lack the theoretical support needed to compare different fusion methods; on the other hand, there is no uniform preference among the quantitative assessment indexes when they are applied to estimate fusion effects. That is, spatial resolution and spectral features cannot be analyzed synchronously by these indexes, and there is no general method to unify the assessment of spatial and spectral features. In this paper, on the basis of an approximate general model of four traditional fusion methods, including Intensity Hue Saturation (IHS) triangle transform fusion, High Pass Filter (HPF) fusion, Principal Component Analysis (PCA) fusion and Wavelet Transform (WT) fusion, a correlation coefficient assessment method based on the IHS cylindrical transform is proposed. Experiments show that this method can not only evaluate spatial and spectral features under a uniform preference, but can also compare fusion image sources with fused images and reveal differences among fusion methods. Compared with traditional assessment methods, the new method is more intuitive and accords better with subjective evaluation.
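As an illustration of the assessment statistic (our reading; the component pairings below are assumptions), the correlation coefficient between corresponding components after the IHS cylindrical transform might be computed as:

```python
import numpy as np

def band_correlation(a, b):
    """Correlation coefficient between two single-band images of equal size."""
    return np.corrcoef(a.ravel().astype(np.float64),
                       b.ravel().astype(np.float64))[0, 1]

# Spectral preservation: band_correlation(H_fused, H_multispectral), etc.
# Spatial preservation:  band_correlation(I_fused, I_panchromatic)
```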
Image Fusion During Vascular and Nonvascular Image-Guided Procedures
Abi-Jaoudeh, Nadine; Kobeiter, Hicham; Xu, Sheng; Wood, Bradford J.
2013-01-01
Image fusion may be useful in any procedure where previous imaging such as positron emission tomography, magnetic resonance imaging, or contrast-enhanced computed tomography (CT) defines information that is referenced to the procedural imaging, to the needle or catheter, or to an ultrasound transducer. Fusion of prior and intraoperative imaging provides real-time feedback on tumor location or margin, metabolic activity, device location, or vessel location. Multimodality image fusion in interventional radiology was initially introduced for biopsies and ablations, especially for lesions only seen on arterial phase CT, magnetic resonance imaging, or positron emission tomography/CT but has more recently been applied to other vascular and nonvascular procedures. Two different types of platforms are commonly used for image fusion and navigation: (1) electromagnetic tracking and (2) cone-beam CT. Both technologies would be reviewed as well as their strengths and weaknesses, indications, when to use one vs the other, tips and guidance to streamline use, and early evidence defining clinical benefits of these rapidly evolving, commercially available and emerging techniques. PMID:23993079
Walter, Uwe; Müller, Jan-Uwe; Rösche, Johannes; Kirsch, Michael; Grossmann, Annette; Benecke, Reiner; Wittstock, Matthias; Wolters, Alexander
2016-03-01
A combination of preoperative magnetic resonance imaging (MRI) with real-time transcranial ultrasound, known as fusion imaging, may improve postoperative control of deep brain stimulation (DBS) electrode location. Fusion imaging, however, employs a weak magnetic field for tracking the position of the ultrasound transducer and the patient's head. Here we assessed its feasibility, safety, and clinical relevance in patients with DBS. Eighteen imaging sessions were conducted in 15 patients (7 women; aged 52.4 ± 14.4 y) with DBS of the subthalamic nucleus (n = 6), globus pallidus interna (n = 5), ventro-intermediate (n = 3), or anterior (n = 1) thalamic nucleus and clinically suspected lead displacement. The minimum distance between the DBS generator and the magnetic field transmitter was kept at 65 cm. The pre-implantation MRI dataset was loaded into the ultrasound system for the fusion imaging examination. The DBS lead position was rated using validated criteria. Generator DBS parameters and the neurological state of the patients were monitored. Magnetic resonance-ultrasound fusion imaging and volume navigation were feasible in all cases and provided real-time imaging of the DBS lead and its location within the superimposed magnetic resonance images. Of 35 assessed lead locations, 30 were rated optimal, three suboptimal, and two displaced. In two cases, electrodes were re-implanted after confirming their inappropriate location on computed tomography (CT) scan. No influence of fusion imaging on the clinical state of patients, or on DBS implantable pulse generator function, was found. Magnetic resonance-ultrasound real-time fusion imaging of DBS electrodes is safe when distinct precautions are taken and improves assessment of electrode location. It may lower the need for repeated CT or MRI scans in DBS patients. © 2015 International Parkinson and Movement Disorder Society.
A Multi-Objective Decision Making Approach for Solving the Image Segmentation Fusion Problem.
Khelifi, Lazhar; Mignotte, Max
2017-08-01
Image segmentation fusion is defined as the set of methods which aim at merging several image segmentations in a manner that takes full advantage of the complementarity of each one. Previous research in this field has been impeded by the difficulty of identifying a single, appropriate segmentation fusion criterion that provides the best possible, i.e., the most informative, fusion result. In this paper, we propose a new model of image segmentation fusion based on multi-objective optimization which can mitigate this problem and obtain a final improved segmentation result. Our fusion framework incorporates the dominance concept in order to efficiently combine and optimize two complementary segmentation criteria, namely the global consistency error and the F-measure (precision-recall) criterion. To this end, we present a hierarchical and efficient way to optimize the multi-objective consensus energy function related to this fusion model, which exploits a simple and deterministic iterative relaxation strategy combining the different image segments. This step is followed by a decision-making task based on the so-called technique for order preference by similarity to ideal solution (TOPSIS). Results obtained on two publicly available databases with manual ground truth segmentations clearly show that our multi-objective energy-based model gives better results than the classical mono-objective one.
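As a concrete illustration of the dominance check and the TOPSIS decision step named in this abstract, here is a minimal NumPy sketch; the criterion values, weights, and function names are illustrative stand-ins, not the paper's actual energy terms:

```python
import numpy as np

def dominates(a, b):
    """True if solution a Pareto-dominates b (both criteria are costs)."""
    return np.all(a <= b) and np.any(a < b)

def topsis_rank(scores, weights=None):
    """Rank candidate segmentations by closeness to the ideal solution.

    scores: (n_candidates, n_criteria) array of cost values (lower is better).
    Returns candidate indices sorted from best to worst.
    """
    s = np.asarray(scores, dtype=float)
    if weights is None:
        weights = np.ones(s.shape[1]) / s.shape[1]
    v = weights * s / np.linalg.norm(s, axis=0)   # normalize, then weight
    ideal, anti = v.min(axis=0), v.max(axis=0)    # costs: ideal is the minimum
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)
    return np.argsort(-closeness)

# Toy example: three candidates scored by (GCE, 1 - F-measure).
scores = np.array([[0.12, 0.30], [0.10, 0.35], [0.20, 0.20]])
print(dominates(scores[0], scores[1]))  # False: neither dominates the other
print(topsis_rank(scores))              # best candidate first
```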
Oshiro, Yukio; Ohkohchi, Nobuhiro
2017-06-01
To perform accurate hepatectomy without injury, it is necessary to understand the anatomical relationship among the branches of Glisson's sheath, the hepatic veins, and the tumor. In Japan, three-dimensional (3D) preoperative simulation for liver surgery is becoming increasingly common, and liver 3D modeling and 3D hepatectomy simulation by 3D analysis software for liver surgery have been covered by universal healthcare insurance since 2012. Herein, we review the history of virtual hepatectomy using computer-assisted surgery (CAS) and our research to date, and we discuss the future prospects of CAS. We have used the SYNAPSE VINCENT medical imaging system (Fujifilm Medical, Tokyo, Japan) for 3D visualization and virtual resection of the liver since 2010. We developed a novel fusion imaging technique combining 3D computed tomography (CT) with magnetic resonance imaging (MRI). The fusion image enables us to easily visualize the anatomic relationships among the hepatic arteries, portal veins, bile duct, and tumor in the hepatic hilum. In 2013, we developed original software, called Liversim, which enables real-time deformation of the liver using physical simulation, and a randomized controlled trial has recently been conducted to evaluate the use of Liversim and SYNAPSE VINCENT for preoperative simulation and planning. Furthermore, we developed a novel hollow 3D-printed liver model whose surface is covered with frames. This model is useful for safe liver resection, has better visibility, and its production cost is reduced to one-third that of the previous model. Preoperative simulation and navigation with CAS in liver resection are expected to assist in planning and conducting surgery as well as in surgical education. Thus, a novel CAS system will contribute not only to the performance of reliable hepatectomy but also to surgical education.
Xie, Mei-Ming; Xia, Kang; Zhang, Hong-Xin; Cao, Hong-Hui; Yang, Zhi-Jin; Cui, Hai-Feng; Gao, Shang; Tang, Kang-Lai
2017-01-23
Screw fixation is a typical technique for isolated talonavicular arthrodesis (TNA); however, no consensus has been reached on how to select the most suitable insertion position and direction. This study aimed to present a new fixation technique and to evaluate the clinical outcome of individualized headless compression screws (HCSs) applied to isolated TNA with three-dimensional (3D) image processing technology. From 2007 to 2014, 69 patients underwent isolated TNA using double Acutrak HCSs. The preoperative 3D insertion model of the double HCSs was designed with Mimics, Catia, and SolidWorks reconstruction software. One HCS was oriented antegradely from the edge of the dorsal navicular tail, where it intersects the interspace between the first and second cuneiforms, into the talus body along the talus axis; the other, parallel to the first, was oriented from the dorsal-medial navicular where it intersects the medial plane of the first cuneiform. Anteroposterior and lateral X-ray examinations confirmed that the double HCSs were placed along the longitudinal axis of the talus. Postoperative assessment included the American Orthopaedic Foot & Ankle Society hindfoot (AOFAS) score, the visual analogue scale (VAS) score, satisfaction score, imaging assessments, and complications. At the mean 44-month follow-up, all patients exhibited good articular congruity and solid bone fusion at an average of 11.26 ± 0.85 weeks (range, 10-13 weeks) without screw loosening, shifting, or breakage. The overall fusion rate was 100%. The average AOFAS score increased from 46.62 ± 4.6 (range, 37-56) preoperatively to 74.77 ± 5.4 (range, 64-88) at the final follow-up (95% CI: -30.86 to -27.34; p < 0.001). The mean VAS score decreased from 7.01 ± 1.2 (range, 4-9) to 1.93 ± 1.3 (range, 0-4) (95% CI: 4.69-5.48; p < 0.001). One case (1.45%) experienced wound infection and three cases (4.35%) adjacent arthritis. The postoperative satisfaction scores, covering pain relief, activities of daily living, and return to recreational activities, were good to excellent in 62 (89.9%) cases. An individualized 3D model of HCS insertion can be designed with 3D image processing technology in TNA. The technique is a safe, effective, and reliable approach to isolated TNA, with a high bone fusion rate and a low incidence of complications.
A new hyperspectral image compression paradigm based on fusion
NASA Astrophysics Data System (ADS)
Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto
2016-10-01
The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed in the satellite which carries the hyperspectral sensor. Hence, this process must be performed by space-qualified hardware, with area, power, and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the ground, where they must be fused using a fusion algorithm for hyperspectral and multispectral images in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on-board, becomes very simple, with the fusion process used to reconstruct the image being the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image. The results obtained in these simulations corroborate the benefits of the proposed methodology.
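The two on-board degradation steps described here are simple enough to sketch. The following NumPy fragment is a hedged illustration, assuming block-averaging for the spatial step and a known spectral response matrix for the spectral step (both are assumptions; the paper evaluates several degradation methodologies):

```python
import numpy as np

def spatial_degrade(hsi, factor=4):
    """Low-resolution HSI: average spatially over factor x factor blocks."""
    h, w, b = hsi.shape
    h, w = h - h % factor, w - w % factor
    blocks = hsi[:h, :w].reshape(h // factor, factor, w // factor, factor, b)
    return blocks.mean(axis=(1, 3))

def spectral_degrade(hsi, srf):
    """High-resolution MSI: project bands through a spectral response matrix.

    srf: (n_ms_bands, n_hs_bands) matrix with rows summing to 1.
    """
    return np.tensordot(hsi, srf.T, axes=1)

# Toy cube: 64 x 64 pixels, 100 spectral bands.
hsi = np.random.rand(64, 64, 100)
srf = np.ones((4, 100)) / 100            # placeholder response functions
lr_hsi = spatial_degrade(hsi, factor=4)  # downlinked product 1
hr_msi = spectral_degrade(hsi, srf)      # downlinked product 2
print(lr_hsi.shape, hr_msi.shape)        # (16, 16, 100) (64, 64, 4)
```

On the ground, a hyperspectral/multispectral fusion algorithm would combine `lr_hsi` and `hr_msi` to reconstruct an estimate of the original cube.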
Aoki, Yasuko; Endo, Hidenori; Niizuma, Kuniyasu; Inoue, Takashi; Shimizu, Hiroaki; Tominaga, Teiji
2013-12-01
We report two cases of internal carotid artery (ICA) aneurysms in which fusion images effectively indicated anatomical variations of the anterior choroidal artery (AchoA). Fusion images were obtained using fusion application software (Integrated Registration, Advantage Workstation VS4, GE Healthcare). When an artery passed through the choroidal fissure, it was diagnosed as the AchoA. Case 1 had an aneurysm at the left ICA. Left internal carotid angiography (ICAG) showed that an artery arising from the aneurysmal neck supplied the medial occipital lobe. The fusion image showed that this artery had a branch passing through the choroidal fissure, which was diagnosed as a hyperplastic AchoA. Case 2 had an aneurysm at the supraclinoid segment of the right ICA. Neither the AchoA nor the posterior communicating artery (PcomA) was detected by right ICAG. The fusion image obtained from 3D vertebral angiography (VAG) and MRI showed that the right AchoA arose from the right PcomA. The fusion image obtained from the right ICAG and the left VAG suggested that the aneurysm was located on the ICA where the PcomA had regressed. Fusion imaging is an effective tool for assessing anatomical variations of the AchoA. The present method is simple and quick for obtaining a fusion image that can be used in a real-time clinical setting.
Fusion of laser and image sensory data for 3-D modeling of the free navigation space
NASA Technical Reports Server (NTRS)
Mass, M.; Moghaddamzadeh, A.; Bourbakis, N.
1994-01-01
A fusion technique which combines two different types of sensory data for 3-D modeling of a navigation space is presented. The sensory data are generated by a vision camera and a laser scanner. The problem of the different resolutions of these sensory data was solved by reducing the image resolution, fusing the different data, and using a fuzzy image segmentation technique.
Three-dimensional Image Fusion Guidance for Transjugular Intrahepatic Portosystemic Shunt Placement.
Tacher, Vania; Petit, Arthur; Derbel, Haytham; Novelli, Luigi; Vitellius, Manuel; Ridouani, Fourat; Luciani, Alain; Rahmouni, Alain; Duvoux, Christophe; Salloum, Chady; Chiaradia, Mélanie; Kobeiter, Hicham
2017-11-01
To assess the safety, feasibility and effectiveness of image fusion guidance combining pre-procedural portal phase computed tomography with intraprocedural fluoroscopy for transjugular intrahepatic portosystemic shunt (TIPS) placement. All consecutive cirrhotic patients presenting at our interventional unit for TIPS creation from January 2015 to January 2016 were prospectively enrolled. Procedures were performed under general anesthesia in an interventional suite equipped with a flat panel detector, cone-beam computed tomography (CBCT) and image fusion technique. All TIPSs were placed under image fusion guidance. After hepatic vein catheterization, an unenhanced CBCT acquisition was performed and co-registered with the pre-procedural portal phase CT images. A virtual path between the hepatic vein and a portal branch was made using the virtual needle path trajectory software. Subsequently, the 3D virtual path was overlaid on 2D fluoroscopy for guidance during portal branch cannulation. Safety, feasibility, effectiveness and per-procedural data were evaluated. Sixteen patients (12 males; median age 56 years) were included. Procedures were technically feasible in 15 of the 16 patients (94%). One procedure was aborted due to hepatic vein catheterization failure related to severe liver distortion. No periprocedural complications occurred within 48 h of the procedure. The median dose-area product was 91 Gy·cm², fluoroscopy time 15 min, procedure time 40 min and contrast media consumption 65 mL. Clinical benefit of the TIPS placement was observed in nine patients (56%). This study suggests that 3D image fusion guidance for TIPS is feasible, safe and effective. By identifying a virtual needle path, CBCT enables real-time multiplanar guidance and may facilitate TIPS placement.
NASA Astrophysics Data System (ADS)
Luo, Yuan; Wang, Bo-yu; Zhang, Yi; Zhao, Li-ming
2018-03-01
In this paper, addressing the fact that, under different illuminations and random noises, the local texture features of a face image cannot be completely described because the threshold of the local ternary pattern (LTP) cannot be calculated adaptively, a local three-value model of improved adaptive local ternary pattern (IALTP) is proposed. Firstly, a difference function between the center pixel and the neighborhood pixel weights is established to obtain the statistical characteristics of the central pixel and the neighborhood pixels. Secondly, an adaptive gradient-descent iterative function is established to calculate the difference coefficient, which is defined to be the threshold of the IALTP operator. Finally, the mean and standard deviation of the pixel weights of the local region are used as the coding mode of IALTP. In order to reflect the overall properties of the face and reduce the dimension of the features, two-directional two-dimensional PCA ((2D)2PCA) is adopted. The IALTP is used to extract local texture features of the eye and mouth areas. After combining the global features and local features, the fusion features (IALTP+) are obtained. The experimental results on the Extended Yale B and AR standard face databases indicate that under different illuminations and random noises, the algorithm proposed in this paper is more robust than others, and the feature dimension is smaller. The shortest running time reaches 0.3296 s, and the highest recognition rate reaches 97.39%.
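To make the LTP coding step concrete, here is a minimal NumPy sketch of a standard 3x3 local ternary pattern split into its upper and lower binary codes; the paper's gradient-descent-derived adaptive threshold is not reproduced, so a simple image-statistic threshold stands in for the learned difference coefficient:

```python
import numpy as np

def ltp_codes(img, t):
    """3x3 local ternary pattern split into 'upper' and 'lower' binary codes.

    Neighbors >= center + t set a bit in 'upper'; neighbors <= center - t
    set a bit in 'lower'; values within the band contribute to neither.
    """
    H, W = img.shape
    c = img[1:-1, 1:-1].astype(float)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros(c.shape, dtype=np.uint8)
    lower = np.zeros(c.shape, dtype=np.uint8)
    for k, (dy, dx) in enumerate(offsets):
        p = img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx].astype(float)
        upper |= (p >= c + t).astype(np.uint8) << k
        lower |= (p <= c - t).astype(np.uint8) << k
    return upper, lower

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
t = 0.2 * img.std()   # stand-in for the paper's learned difference coefficient
upper, lower = ltp_codes(img, t)
```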
Data processing and analysis for 2D imaging GEM detector system
NASA Astrophysics Data System (ADS)
Czarski, T.; Chernyshova, M.; Pozniak, K. T.; Kasprowicz, G.; Byszuk, A.; Juszczyk, B.; Kolasinski, P.; Linczuk, M.; Wojenski, A.; Zabolotny, W.; Zienkiewicz, P.
2014-11-01
The Triple Gas Electron Multiplier (T-GEM) is presented as a soft X-ray (SXR) energy- and position-sensitive detector for high-resolution X-ray diagnostics of magnetic confinement fusion plasmas [1]. A multi-channel measurement system and the essential data processing for X-ray energy and position recognition are considered. Several modes of data acquisition are introduced, depending on the division of processing between hardware and software components. Typical measurement issues are discussed with a view to enhancing data quality. Fundamental output characteristics are presented for one- and two-dimensional detector structures. Representative results for a reference X-ray source and tokamak plasma are demonstrated.
Zhao, Y J; Liu, Y; Sun, Y C; Wang, Y
2017-08-18
To explore a three-dimensional (3D) data fusion and integration method for optically scanned tooth crowns and cone beam CT (CBCT)-reconstructed tooth roots that achieves a natural transition between them in the 3D profile. One case of mild dental crowding with full dentition was chosen from the orthodontics clinic. The CBCT data were acquired to reconstruct the dental model with tooth roots in Mimics 17.0 medical imaging software, and an optical impression was taken to obtain the dentition model with a high-precision physiological contour of the crowns using a Smart Optics dental scanner. The two models were registered in 3D based on the shared shape of the crowns in Geomagic Studio 2012 reverse engineering software. The model coordinate system was established by defining the occlusal plane. The crown-gingiva boundary was extracted from the optical scanning model manually; the crown-root boundary was then generated by offsetting and projecting the crown-gingiva boundary onto the root model. After trimming the crown and root models, the 3D fusion model with a physiological crown contour and natural root was finally formed by a curvature-continuous filling algorithm. In this study, 10 patients with mild dental crowding from the oral clinic were followed up with this method to obtain 3D crown and root fusion models, and 10 highly qualified doctors were invited to evaluate these fusion models subjectively. This study, based on a commercial software platform, preliminarily realized a 3D data fusion and integration method for optically scanned tooth crowns and CBCT tooth roots with a curvature-continuous shape transition. The 10 patients' 3D crown and root fusion models were constructed successfully by this method, and the average score of the doctors' subjective evaluation for these 10 models was 8.6 points (0-10 points), which means that all the fusion models could basically meet the needs of the oral clinic and shows that the method in our study is feasible and efficient for orthodontic research and clinics. The method of this study for 3D crown and root data fusion can produce an integrated tooth or dental model closer to the natural shape. CBCT model calibration may improve the precision of the fusion model. The adaptation of this method to severe dental crowding and micromaxillary deformity needs further research.
Fan, Zhencheng; Weng, Yitong; Chen, Guowen; Liao, Hongen
2017-07-01
Three-dimensional (3D) visualization of preoperative and intraoperative medical information is becoming more and more important in minimally invasive surgery. We develop a 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display for surgeons to observe the surgical target intuitively. The spatial information of the regions of interest (ROIs) is captured by a mobile device and transferred to a server for further image processing. Triangular patches of intraoperative data with texture are calculated with a dimension-reduced triangulation algorithm and a projection-weighted mapping algorithm. A point cloud selection-based warm-start iterative closest point (ICP) algorithm is also developed for fusion of the reconstructed 3D intraoperative image and the preoperative image. The fusion images are rendered for 3D autostereoscopic display using integral videography (IV) technology. Moreover, the 3D visualization of the medical image corresponding to the observer's viewing direction is updated automatically using a mutual information registration method. Experimental results show that the spatial position error between the IV-based 3D autostereoscopic fusion image and the actual object was 0.38 ± 0.92 mm (n = 5). The system can be utilized in telemedicine, surgical training, surgical planning, navigation, etc., to acquire spatial information conveniently and display surgical information intuitively. Copyright © 2017 Elsevier Inc. All rights reserved.
Detection of buried objects by fusing dual-band infrared images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, G.A.; Sengupta, S.K.; Sherwood, R.J.
1993-11-01
We have conducted experiments to demonstrate the enhanced detectability of buried land mines using sensor fusion techniques. Multiple sensors, including visible imagery, infrared imagery, and ground penetrating radar (GPR), have been used to acquire data on a number of buried mines and mine surrogates. Because the visible wavelength and GPR data are currently incomplete, this paper focuses on the fusion of two-band infrared images. We use feature-level fusion and supervised learning with the probabilistic neural network (PNN) to evaluate detection performance. The novelty of the work lies in the application of advanced target recognition algorithms, the fusion of dual-band infrared images, and the evaluation of the techniques using two real data sets.
Improved disparity map analysis through the fusion of monocular image segmentations
NASA Technical Reports Server (NTRS)
Perlant, Frederic P.; Mckeown, David M.
1991-01-01
The focus is to examine how estimates of three-dimensional scene structure, as encoded in a scene disparity map, can be improved by analysis of the original monocular imagery. Surface illumination information is exploited by segmenting the monocular image into fine surface patches of nearly homogeneous intensity, which are used to remove mismatches generated during stereo matching. These patches guide a statistical analysis of the disparity map based on the assumption that such patches correspond closely to physical surfaces in the scene. The technique is quite independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Stereo analysis results are presented for a complex urban scene containing various man-made and natural features. This scene poses a variety of problems, including low building height with respect to the stereo baseline, buildings and roads in complex terrain, and highly textured buildings and terrain. Improvements due to monocular fusion are demonstrated with a set of different region-based image segmentations. The generality of this approach to stereo analysis and its utility in the development of general three-dimensional scene interpretation systems are also discussed.
In situ calibration of an infrared imaging video bolometer in the Large Helical Device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukai, K., E-mail: mukai.kiyofumi@LHD.nifs.ac.jp; Peterson, B. J.; Pandya, S. N.
The InfraRed imaging Video Bolometer (IRVB) is a powerful diagnostic to measure multi-dimensional radiation profiles in plasma fusion devices. In the Large Helical Device (LHD), four IRVBs have been installed with different fields of view to reconstruct three-dimensional profiles using a tomography technique. For the application of the measurement to plasma experiments using deuterium gas in LHD in the near future, the long-term effect of the neutron irradiation on the heat characteristics of an IRVB foil should be taken into account by regular in situ calibration measurements. Therefore, in this study, an in situ calibration system was designed.
A transversal approach for patch-based label fusion via matrix completion
Sanroma, Gerard; Wu, Guorong; Gao, Yaozong; Thung, Kim-Han; Guo, Yanrong; Shen, Dinggang
2015-01-01
Recently, multi-atlas patch-based label fusion has received an increasing interest in the medical image segmentation field. After warping the anatomical labels from the atlas images to the target image by registration, label fusion is the key step to determine the latent label for each target image point. Two popular types of patch-based label fusion approaches are (1) reconstruction-based approaches that compute the target labels as a weighted average of atlas labels, where the weights are derived by reconstructing the target image patch using the atlas image patches; and (2) classification-based approaches that determine the target label as a mapping of the target image patch, where the mapping function is often learned using the atlas image patches and their corresponding labels. Both approaches have their advantages and limitations. In this paper, we propose a novel patch-based label fusion method to combine the above two types of approaches via matrix completion (and hence, we call it transversal). As we will show, our method overcomes the individual limitations of both reconstruction-based and classification-based approaches. Since the labeling confidences may vary across the target image points, we further propose a sequential labeling framework that first labels the highly confident points and then gradually labels more challenging points in an iterative manner, guided by the label information determined in the previous iterations. We demonstrate the performance of our novel label fusion method in segmenting the hippocampus in the ADNI dataset, subcortical and limbic structures in the LONI dataset, and mid-brain structures in the SATA dataset. We achieve more accurate segmentation results than both reconstruction-based and classification-based approaches. Our label fusion method is also ranked 1st in the online SATA Multi-Atlas Segmentation Challenge. PMID:26160394
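A minimal sketch of the reconstruction-based variant described above, assuming ridge-regularized least squares for the patch-reconstruction weights (the paper's matrix-completion formulation is more elaborate); all names and the toy data are illustrative:

```python
import numpy as np

def reconstruction_label_fusion(target_patch, atlas_patches, atlas_labels, lam=0.01):
    """Weight atlas labels by how well their patches reconstruct the target.

    target_patch: (d,) flattened intensity patch at the point to label.
    atlas_patches: (n, d) flattened atlas patches warped to the target space.
    atlas_labels: (n,) binary labels at the corresponding atlas points.
    """
    A = atlas_patches.T                        # (d, n) dictionary of patches
    w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ target_patch)
    w = np.clip(w, 0, None)                    # keep weights non-negative
    if w.sum() == 0:
        w = np.ones_like(w)
    w /= w.sum()
    return float(w @ atlas_labels)             # fused label probability

rng = np.random.default_rng(0)
patches = rng.random((10, 27))                 # 10 atlases, 3x3x3 patches
target = patches[:3].mean(axis=0)              # target resembles first atlases
labels = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
print(reconstruction_label_fusion(target, patches, labels))  # close to 1
```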
Quantitative image fusion in infrared radiometry
NASA Astrophysics Data System (ADS)
Romm, Iliya; Cukurel, Beni
2018-05-01
Towards high-accuracy infrared radiance estimates, measurement practices and processing techniques aimed at achieving quantitative image fusion using a set of multi-exposure images of a static scene are reviewed. The conventional non-uniformity correction technique is extended, as the original is incompatible with quantitative fusion. Recognizing the inherent limitations of even the extended non-uniformity correction, an alternative measurement methodology, which relies on estimates of the detector bias using self-calibration, is developed. Combining data from multi-exposure images, two novel image fusion techniques that ultimately provide high tonal fidelity of a photoquantity are considered: 'subtract-then-fuse', which conducts image subtraction in the camera output domain and partially negates the bias frame contribution common to both the dark and scene frames; and 'fuse-then-subtract', which reconstructs the bias frame explicitly and conducts image fusion independently for the dark and the scene frames, followed by subtraction in the photoquantity domain. The performance of the different techniques is evaluated on various synthetic and experimental data, identifying the factors contributing to potential degradation of image quality. The findings reflect the superiority of the 'fuse-then-subtract' approach, which conducts image fusion via per-pixel nonlinear weighted least squares optimization.
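The 'subtract-then-fuse' pathway admits a compact sketch. The fragment below assumes a linear sensor model (scene minus dark roughly equals exposure times photoquantity), an assumed 14-bit saturation level, and a simple validity weighting; the paper's actual weighting and its 'fuse-then-subtract' estimator, which reconstructs the bias frame explicitly, are more involved:

```python
import numpy as np

def subtract_then_fuse(scene_frames, dark_frames, exposures,
                       saturation=0.95 * 2**14):
    """Per-pixel weighted least-squares estimate of photoquantity q.

    Model (linear sensor assumption): scene - dark ~= exposure * q.
    scene_frames, dark_frames: (n, H, W) stacks at matching exposures.
    exposures: (n,) integration times.
    """
    diff = scene_frames.astype(float) - dark_frames.astype(float)
    # Down-weight saturated or non-positive samples (heuristic weights).
    w = ((scene_frames < saturation) & (diff > 0)).astype(float)
    t = exposures[:, None, None]
    num = (w * t * diff).sum(axis=0)     # WLS solution of min sum w(diff - t q)^2
    den = (w * t * t).sum(axis=0)
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)
```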
Enhanced EDX images by fusion of multimodal SEM images using pansharpening techniques.
Franchi, G; Angulo, J; Moreaud, M; Sorbier, L
2018-01-01
The goal of this paper is to explore the potential interest of image fusion in the context of multimodal scanning electron microscope (SEM) imaging. In particular, we aim at merging backscattered electron images, which usually have high spatial resolution but do not provide enough discriminative information to physically classify the nature of the sample, with energy-dispersive X-ray spectroscopy (EDX) images, which have discriminative information but lower spatial resolution. The produced images are named enhanced EDX. To achieve this goal, we have compared the results obtained with classical pansharpening techniques for image fusion with an original approach tailored for multimodal SEM fusion of information. Quantitative assessment is obtained by means of two SEM images and a simulated dataset produced by software based on PENELOPE. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
Guler, N; Volegov, P; Danly, C R; Grim, G P; Merrill, F E; Wilde, C H
2012-10-01
Inertial confinement fusion experiments at the National Ignition Facility are designed to understand the basic principles of creating self-sustaining fusion reactions by laser driven compression of deuterium-tritium (DT) filled cryogenic plastic capsules. The neutron imaging diagnostic provides information on the distribution of the central fusion reaction region and the surrounding DT fuel by observing neutron images in two different energy bands for primary (13-17 MeV) and down-scattered (6-12 MeV) neutrons. From this, the final shape and size of the compressed capsule can be estimated and the symmetry of the compression can be inferred. These experiments provide small sources with high yield neutron flux. An aperture design that includes an array of pinholes and penumbral apertures has provided the opportunity to image the same source with two different techniques. This allows for an evaluation of these different aperture designs and reconstruction algorithms.
Depth and thermal sensor fusion to enhance 3D thermographic reconstruction.
Cao, Yanpeng; Xu, Baobei; Ye, Zhangyu; Yang, Jiangxin; Cao, Yanlong; Tisse, Christel-Loic; Li, Xin
2018-04-02
Three-dimensional geometrical models with incorporated surface temperature data provide important information for various applications such as medical imaging, energy auditing, and intelligent robots. In this paper, we present a robust method for mobile and real-time 3D thermographic reconstruction through depth and thermal sensor fusion. A multimodal imaging device consisting of a thermal camera and an RGB-D sensor is calibrated geometrically and used for data capturing. Based on the underlying principle that temperature information remains robust against illumination and viewpoint changes, we present a Thermal-guided Iterative Closest Point (T-ICP) methodology to facilitate reliable 3D thermal scanning applications. The pose of the sensing device is initially estimated using correspondences found by maximizing the thermal consistency between consecutive infrared images. The coarse pose estimate is further refined by finding the motion parameters that minimize a combined geometric and thermographic loss function. Experimental results demonstrate that complementary information captured by multimodal sensors can be utilized to improve the performance of 3D thermographic reconstruction. Through effective fusion of thermal and depth data, the proposed approach generates more accurate 3D thermal models using significantly less scanning data.
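The combined geometric and thermographic loss can be sketched as follows, assuming nearest-neighbour correspondences and an arbitrary balance weight `lam` (not the paper's formulation or values); an ICP-style loop would re-estimate the pose (R, t) to decrease this loss at each iteration:

```python
import numpy as np
from scipy.spatial import cKDTree

def combined_loss(src_pts, src_temp, dst_pts, dst_temp, R, t, lam=0.1):
    """Combined geometric + thermographic loss for a candidate pose (R, t).

    src_pts, dst_pts: (n, 3) and (m, 3) point clouds from consecutive scans.
    src_temp, dst_temp: per-point surface temperatures.
    lam: assumed weight balancing the two terms.
    """
    moved = src_pts @ R.T + t                    # apply candidate rigid motion
    dist, idx = cKDTree(dst_pts).query(moved)    # nearest-neighbour matches
    geometric = np.mean(dist ** 2)
    thermographic = np.mean((src_temp - dst_temp[idx]) ** 2)
    return geometric + lam * thermographic
```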
NASA Astrophysics Data System (ADS)
Liu, Yaqing; Wen, Xiaoyong
2018-05-01
In this paper, a generalized (3+1)-dimensional B-type Kadomtsev-Petviashvili (gBKP) equation is investigated by using the Hirota’s bilinear method. With the aid of symbolic computation, some new lump, mixed lump kink and periodic lump solutions are derived. Based on the derived solutions, some novel interaction phenomena like the fission and fusion interactions between one lump soliton and one kink soliton, the fission and fusion interactions between one lump soliton and a pair of kink solitons and the interactions between two periodic lump solitons are discussed graphically. Results might be helpful for understanding the propagation of the shallow water wave.
Infrared and visible image fusion scheme based on NSCT and low-level visual features
NASA Astrophysics Data System (ADS)
Li, Huafeng; Qiu, Hongmei; Yu, Zhengtao; Zhang, Yafei
2016-05-01
Multi-scale transform (MST) is an efficient tool for image fusion. Recently, many fusion methods have been developed based on different MSTs, and they have shown potential for application in many fields. In this paper, we propose an effective infrared and visible image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain, in which the NSCT is first employed to decompose each of the source images into a series of high-frequency subbands and one low-frequency subband. To improve fusion performance, we design two new activity measures for fusion of the lowpass subbands and the highpass subbands. These measures are developed based on the fact that the human visual system (HVS) perceives image quality mainly according to some of its low-level features. Then, the selection principles for the different subbands are presented based on the corresponding activity measures. Finally, the merged subbands are constructed according to the selection principles, and the final fused image is produced by applying the inverse NSCT to these merged subbands. Experimental results demonstrate the effectiveness and superiority of the proposed method over state-of-the-art fusion methods in terms of both visual effect and objective evaluation results.
Desai, Atman; Pendharkar, Arjun V; Swienckowski, Jessica G; Ball, Perry A; Lollis, Scott; Simmons, Nathan E
2015-11-23
Construct failure is an uncommon but well-recognized complication following anterior cervical corpectomy and fusion (ACCF). In order to screen for these complications, many centers routinely image patients at outpatient visits following surgery. There remain, however, little data on the utility of such imaging. The electronic medical records of all patients undergoing anterior cervical corpectomy and fusion at Dartmouth-Hitchcock Medical Center between 2004 and 2009 were reviewed. All patients had routine cervical spine radiographs performed perioperatively. Follow-up visits up to two years postoperatively were analyzed. Sixty-five patients (mean age 52.2) underwent surgery during the time period. Eighteen patients were female. Forty patients had surgery performed for spondylosis, 20 for trauma, three for tumor, and two for infection. Forty-three patients underwent one-level corpectomy, 20 underwent two-level corpectomy, and two underwent three-level corpectomy, using an allograft, autograft, or both. Sixty-two of the fusions were instrumented using a plate and 13 had posterior augmentation. Fifty-seven patients had follow-up with imaging at four to 12 weeks following surgery, 54 with plain radiographs, two with CT scans, and one with an MRI scan. Unexpected findings were noted in six cases. One of those patients, found to have asymptomatic recurrent kyphosis following a two-level corpectomy, had repeat surgery because of those findings. Only one further patient was found to have abnormal imaging up to two years, and this patient required no further intervention. Routine imaging after ACCF can demonstrate asymptomatic occurrences of clinically significant instrument failure. In the 43 consecutive single-level ACCFs, however, routine imaging did not change management, even when an abnormality was discovered. This may suggest that routine imaging after ACCF has a role limited to longer constructs involving multiple levels.
Application of Sensor Fusion to Improve UAV Image Classification
NASA Astrophysics Data System (ADS)
Jabari, S.; Fathollahi, F.; Zhang, Y.
2017-08-01
Image classification is one of the most important tasks of remote sensing projects, including those based on UAV images. Improving the quality of UAV images directly affects the classification results and can save a great deal of time and effort in this area. In this study, we show that sensor fusion can improve image quality, which in turn increases the accuracy of image classification. Here, we tested two sensor fusion configurations, using a panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to those acquired by a high-resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies than the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on board UAV missions and performing image fusion can help achieve higher quality images and, accordingly, higher accuracy classification results.
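For readers who want a concrete picture of Pan/colour fusion, here is a minimal Brovey-transform pansharpening sketch; this is one common scheme, not necessarily the algorithm used in the study, and it assumes the MS image has already been upsampled and co-registered to the Pan grid:

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """Brovey-transform pansharpening.

    ms: (H, W, B) multispectral image resampled to the Pan grid.
    pan: (H, W) panchromatic image, co-registered with ms.
    Each band is scaled by the ratio of Pan to the MS intensity, injecting
    the Pan camera's spatial detail while preserving band ratios.
    """
    intensity = ms.mean(axis=2)
    ratio = pan / (intensity + eps)
    return ms * ratio[:, :, None]
```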
A Standard Mammography Unit - Standard 3D Ultrasound Probe Fusion Prototype: First Results.
Schulz-Wendtland, Rüdiger; Jud, Sebastian M; Fasching, Peter A; Hartmann, Arndt; Radicke, Marcus; Rauh, Claudia; Uder, Michael; Wunderle, Marius; Gass, Paul; Langemann, Hanna; Beckmann, Matthias W; Emons, Julius
2017-06-01
The combination of different imaging modalities through the use of fusion devices promises significant diagnostic improvement for breast pathology. The aim of this study was to evaluate the image quality and clinical feasibility of a prototype fusion device (fusion prototype) constructed from a standard tomosynthesis mammography unit and a standard 3D ultrasound probe using a new method of breast compression. Imaging was performed on 5 mastectomy specimens from patients with confirmed DCIS or invasive carcinoma (BI-RADS™ 6). For the preclinical fusion prototype, an ABVS system ultrasound probe from an Acuson S2000 was integrated into a MAMMOMAT Inspiration (both Siemens Healthcare Ltd) and, with the aid of a newly developed compression plate, digital mammogram and automated 3D ultrasound images were obtained. The quality of the digital mammogram images produced by the fusion prototype was comparable to those produced using conventional compression. The newly developed compression plate did not influence the applied x-ray dose. The method was not more labour-intensive or time-consuming than conventional mammography. From the technical perspective, fusion of the two modalities was achievable. In this study, using only a few mastectomy specimens, the fusion of an automated 3D ultrasound machine with a standard mammography unit delivered images of comparable quality to conventional mammography. The device allows simultaneous ultrasound - the second important imaging modality in complementary breast diagnostics - without increasing examination time or requiring additional staff.
Stewart, Richard S; Kiss, Ilona M; Wilkinson, Robert S
2014-04-16
Four-dimensional (4D) light imaging has been used to study the behavior of small structures within motor nerve terminals of the thin transversus abdominis muscle of the garter snake. The raw data comprise time-lapse sequences of 3D z-stacks. Each stack contains 4-20 images acquired with epifluorescence optics at focal planes separated by 400-1,500 nm. Steps in the acquisition of image stacks, such as adjustment of focus, switching of excitation wavelengths, and operation of the digital camera, are automated as much as possible to maximize the image rate and minimize tissue damage from light exposure. After acquisition, a set of image stacks is deconvolved to improve spatial resolution, converted to the desired 3D format, and used to create a 4D "movie" that is suitable for a variety of computer-based analyses, depending upon the experimental data sought. One application is the study of the dynamic behavior of two classes of endosomes found in nerve terminals-macroendosomes (MEs) and acidic endosomes (AEs)-whose sizes (200-800 nm for both types) are at or near the diffraction limit. Access to 3D information at each time point provides several advantages over conventional time-lapse imaging. In particular, the size and velocity of movement of structures can be quantified over time without loss of sharp focus. Examples of data from 4D imaging reveal that MEs approach the plasma membrane and disappear, suggesting that they are exocytosed rather than simply moving vertically away from a single plane of focus. Also revealed is putative fusion of MEs and AEs, visualized as overlap between the two dye-containing structures in each of three orthogonal projections.
Molecular imaging of malignant tumor metabolism: whole-body image fusion of DWI/CT vs. PET/CT.
Reiner, Caecilia S; Fischer, Michael A; Hany, Thomas; Stolzmann, Paul; Nanz, Daniel; Donati, Olivio F; Weishaupt, Dominik; von Schulthess, Gustav K; Scheffel, Hans
2011-08-01
To prospectively investigate the technical feasibility and performance of image fusion of whole-body diffusion-weighted imaging (wbDWI) and computed tomography (CT) to detect metastases, using hybrid positron emission tomography/computed tomography (PET/CT) as the reference standard. Fifty-two patients (60 ± 14 years; 18 women) with various malignant tumor diseases examined by PET/CT for clinical reasons consented to undergo additional wbDWI at 1.5 Tesla. WbDWI was performed using diffusion-weighted single-shot echo-planar imaging during free breathing. Images at b = 0 s/mm(2) and b = 700 s/mm(2) were acquired and apparent diffusion coefficient (ADC) maps were generated. Image fusion of wbDWI and CT (from the PET/CT scan) was performed, yielding wbDWI/CT fused image data. One radiologist rated the success of image fusion and diagnostic image quality. The presence or absence of metastases on the wbDWI/CT fused images was evaluated together with the separate wbDWI and CT images by two independent radiologists blinded to the results from PET/CT. The detection rate and positive predictive value for diagnosing metastases were calculated. PET/CT examinations were used as the reference standard. PET/CT identified 305 malignant lesions in 39 of 52 (75%) patients. WbDWI/CT image fusion was technically successful and yielded diagnostic image quality in 73% and 92% of patients, respectively. Interobserver agreement for the evaluation of wbDWI/CT images was κ = 0.78. WbDWI/CT identified 270 metastases in 43 of 52 (83%) patients. The overall detection rate and positive predictive value of wbDWI/CT were 89% (95% CI, 0.85-0.92) and 94% (95% CI, 0.92-0.97), respectively. WbDWI/CT image fusion is technically feasible in a clinical setting and allows the diagnostic assessment of metastatic tumor disease, detecting nine of 10 lesions as compared with PET/CT. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.
Soyama, Takeshi; Sakuhara, Yusuke; Kudo, Kohsuke; Abo, Daisuke; Wang, Jeff; Ito, Yoichi M; Hasegawa, Yu; Shirato, Hiroki
2016-07-01
This preliminary study compared ultrasonography-computed tomography (US-CT) fusion imaging and conventional ultrasonography (US) for accuracy and time required for target identification using a combination of real phantoms and sets of digitally modified computed tomography (CT) images (digital/real hybrid phantoms). In this randomized prospective study, 27 spheres visible on B-mode US were placed at depths of 3.5, 8.5, and 13.5 cm (nine spheres each). All 27 spheres were digitally erased from the CT images, and a radiopaque sphere was digitally placed at each of the 27 locations to create 27 different sets of CT images. Twenty clinicians were instructed to identify the sphere target using US alone and fusion imaging. The accuracy of target identification of the two methods was compared using McNemar's test. The mean time required for target identification and error distances were compared using paired t tests. At all three depths, target identification was more accurate and the mean time required for target identification was significantly less with US-CT fusion imaging than with US alone, and the mean error distances were also shorter with US-CT fusion imaging. US-CT fusion imaging was superior to US alone in terms of accurate and rapid identification of target lesions.
Assessment of Spatiotemporal Fusion Algorithms for Planet and Worldview Images.
Kwan, Chiman; Zhu, Xiaolin; Gao, Feng; Chou, Bryan; Perez, Daniel; Li, Jiang; Shen, Yuzhong; Koperski, Krzysztof; Marchisio, Giovanni
2018-03-31
Although Worldview-2 (WV) images (non-pansharpened) have 2-m resolution, the re-visit times for the same areas may be seven days or more. In contrast, Planet images are collected using small satellites that can cover the whole Earth almost daily. However, the resolution of Planet images is 3.125 m. It would be ideal to fuse these two satellites images to generate high spatial resolution (2 m) and high temporal resolution (1 or 2 days) images for applications such as damage assessment, border monitoring, etc. that require quick decisions. In this paper, we evaluate three approaches to fusing Worldview (WV) and Planet images. These approaches are known as Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), Flexible Spatiotemporal Data Fusion (FSDAF), and Hybrid Color Mapping (HCM), which have been applied to the fusion of MODIS and Landsat images in recent years. Experimental results using actual Planet and Worldview images demonstrated that the three aforementioned approaches have comparable performance and can all generate high quality prediction images.
Image Fusion Algorithms Using Human Visual System in Transform Domain
NASA Astrophysics Data System (ADS)
Vadhi, Radhika; Swamy Kilari, Veera; Samayamantula, Srinivas Kumar
2017-08-01
Digital image fusion endeavors to combine the important visual parts from various sources to enhance the visual quality of the image. The fused image has higher visual quality than any of the source images. In this paper, Human Visual System (HVS) weights are used in the transform domain to select appropriate information from the various source images and thus obtain a fused image. This process mainly involves two steps. First, the DWT is applied to the registered source images. Then, qualitative sub-bands are identified using HVS weights. The qualitative sub-bands selected from the different sources form a high-quality HVS-based fused image. The quality of the HVS-based fused image is evaluated with general fusion metrics. The results show its superiority over state-of-the-art multi-resolution transforms (MRT) such as the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Contourlet Transform (CT), and Non-Subsampled Contourlet Transform (NSCT) using the maximum-selection fusion rule.
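A minimal single-level version of transform-domain fusion can be sketched with PyWavelets, using the maximum-selection rule on the detail sub-bands; the plain magnitude comparison here is a stand-in for the paper's HVS-weighted sub-band selection:

```python
import numpy as np
import pywt

def dwt_max_fusion(img_a, img_b, wavelet='db2'):
    """Single-level DWT fusion: average the approximation sub-bands and
    keep the larger-magnitude coefficient in each detail sub-band."""
    cA_a, det_a = pywt.dwt2(img_a.astype(float), wavelet)
    cA_b, det_b = pywt.dwt2(img_b.astype(float), wavelet)
    cA = 0.5 * (cA_a + cA_b)
    det = tuple(np.where(np.abs(da) >= np.abs(db), da, db)
                for da, db in zip(det_a, det_b))
    return pywt.idwt2((cA, det), wavelet)
```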
Cha, Dong Ik; Lee, Min Woo; Song, Kyoung Doo; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga
2017-06-01
To compare the accuracy and required time for image fusion of real-time ultrasound (US) with pre-procedural magnetic resonance (MR) images between positioning auto-registration and manual registration for percutaneous radiofrequency ablation or biopsy of hepatic lesions. This prospective study was approved by the institutional review board, and all patients gave written informed consent. Twenty-two patients (male/female, n = 18/n = 4; age, 61.0 ± 7.7 years) who were referred for planning US to assess the feasibility of radiofrequency ablation (n = 21) or biopsy (n = 1) for focal hepatic lesions were included. One experienced radiologist performed the two types of image fusion methods in each patient. The performance of auto-registration and manual registration was evaluated. The accuracy of the two methods, based on measuring registration error, and the time required for image fusion for both methods were recorded using in-house software and respectively compared using the Wilcoxon signed rank test. Image fusion was successful in all patients. The registration error was not significantly different between the two methods (auto-registration: median, 3.75 mm; range, 1.0-15.8 mm vs. manual registration: median, 2.95 mm; range, 1.2-12.5 mm, p = 0.242). The time required for image fusion was significantly shorter with auto-registration than with manual registration (median, 28.5 s; range, 18-47 s, vs. median, 36.5 s; range, 14-105 s, p = 0.026). Positioning auto-registration showed promising results compared with manual registration, with similar accuracy and even shorter registration time.
Forest Attributes from Radar Interferometric Structure and its Fusion with Optical Remote Sensing
NASA Technical Reports Server (NTRS)
Treuhaft, Robert N.; Law, Beverly E.; Asner, Gregory P.
2004-01-01
The possibility of global, three-dimensional remote sensing of forest structure with interferometric synthetic aperture radar (InSAR) bears on important forest ecological processes, particularly the carbon cycle. InSAR supplements two-dimensional remote sensing with information in the vertical dimension. Its strengths in potential for global coverage complement those of lidar (light detecting and ranging), which has the potential for high-accuracy vertical profiles over small areas. InSAR derives its sensitivity to forest vertical structure from the differences in signals received by two, spatially separate radar receivers. Estimation of parameters describing vertical structure requires multiple-polarization, multiple-frequency, or multiple-baseline InSAR. Combining InSAR with complementary remote sensing techniques, such as hyperspectral optical imaging and lidar, can enhance vertical-structure estimates and consequent biophysical quantities of importance to ecologists, such as biomass. Future InSAR experiments will supplement recent airborne and spaceborne demonstrations, and together with inputs from ecologists regarding structure, they will suggest designs for future spaceborne strategies for measuring global vegetation structure.
Nagamachi, Shigeki; Nishii, Ryuichi; Wakamatsu, Hideyuki; Mizutani, Youichi; Kiyohara, Shogo; Fujita, Seigo; Futami, Shigemi; Sakae, Tatefumi; Furukoji, Eiji; Tamura, Shozo; Arita, Hideo; Chijiiwa, Kazuo; Kawai, Keiichi
2013-07-01
This study aimed to demonstrate the feasibility of retrospectively fused (18)F FDG-PET and MRI (PET/MRI fusion images) in diagnosing pancreatic tumors, in particular differentiating malignant tumors from benign lesions. In addition, we evaluated additional findings characterizing pancreatic lesions on the FDG-PET/MRI fusion images. We retrospectively analyzed 119 patients: 96 cancers and 23 benign lesions. FDG-PET/MRI fusion images (PET/T1WI or PET/T2WI) were created by dedicated software using 1.5 Tesla (T) MRI images and FDG-PET images. These images were interpreted by two well-trained radiologists without knowledge of clinical information and compared with FDG-PET/CT images. We compared the differential diagnostic capability of PET/CT and the FDG-PET/MRI fusion images. In addition, we evaluated findings such as tumor structure and tumor invasion. The FDG-PET/MRI fusion images significantly improved accuracy compared with PET/CT (96.6 vs. 86.6 %). As an additional finding, dilatation of the main pancreatic duct was noted in 65.9 % of solid types and in 22.6 % of cystic types on the PET/MRI-T2 fusion images. Similarly, encasement of adjacent vessels was noted in 43.1 % of solid types and in 6.5 % of cystic types. Particularly in cystic types, intra-tumor structures such as mural nodules (35.4 %) or intra-cystic septa (74.2 %) were detected additionally. Moreover, the PET/MRI-T2 fusion images could detect additional benign cystic lesions (9.1 % in solid types and 9.7 % in cystic types) that were not noted on PET/CT. In diagnosing pancreatic lesions, FDG-PET/MRI fusion imaging was useful in differentiating pancreatic cancer from benign lesions. Furthermore, it was helpful in evaluating the relationship between lesions and surrounding tissues, as well as in detecting additional benign cysts.
Integrative Multi-Spectral Sensor Device for Far-Infrared and Visible Light Fusion
NASA Astrophysics Data System (ADS)
Qiao, Tiezhu; Chen, Lulu; Pang, Yusong; Yan, Gaowei
2018-06-01
Infrared and visible light image fusion technology has been a hot spot in multi-sensor fusion research in recent years. Existing infrared and visible light fusion systems require image registration before fusion because they use two separate cameras; however, the performance of current registration techniques still needs improvement. Hence, a novel integrative multi-spectral sensor device is proposed for infrared and visible light fusion: by means of a beam splitter prism, the coaxial light entering a single lens is projected onto an infrared charge coupled device (CCD) and a visible light CCD, respectively. In this paper, the imaging mechanism of the proposed sensor device is studied along with the process of signal acquisition and fusion. A simulation experiment, which involves the entire process of the optical system, signal acquisition, and signal fusion, is constructed based on an imaging effect model. Additionally, quality evaluation indexes are adopted to analyze the simulation result. The experimental results demonstrate that the proposed sensor device is effective and feasible.
Human visual system consistent quality assessment for remote sensing image fusion
NASA Astrophysics Data System (ADS)
Liu, Jun; Huang, Junyi; Liu, Shuguang; Li, Huali; Zhou, Qiming; Liu, Junchen
2015-07-01
Quality assessment for image fusion is essential for remote sensing applications. Generally used indices require a high spatial resolution multispectral (MS) image for reference, which is not always readily available. Meanwhile, fusion quality assessments using these indices may not be consistent with the Human Visual System (HVS). As an attempt to overcome this requirement and inconsistency, this paper proposes an HVS-consistent image fusion quality assessment index that works at the highest resolution without a reference MS image, using Gaussian Scale Space (GSS) technology to simulate the HVS. The spatial details and spectral information of the original and fused images are first separated in GSS, and their qualities are evaluated using the proposed spatial and spectral quality indices, respectively. The overall quality is determined without a reference MS image by a combination of the two proposed indices. Experimental results on various remote sensing images indicate that the proposed index is more consistent with HVS evaluation than other widely used indices, whether or not they require reference images.
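The separation of spatial detail from spectral (low-frequency) content in a Gaussian scale space can be sketched as follows; a single Gaussian level and a plain correlation score are simplifying assumptions, not the paper's full GSS decomposition:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_scales(img, sigma=2.0):
    """Split an image into a coarse (spectral) layer and a detail (spatial)
    layer at one Gaussian scale-space level."""
    f = img.astype(float)
    base = gaussian_filter(f, sigma)
    return base, f - base

def corr(a, b):
    """Plain correlation coefficient used as a toy quality score."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Spectral quality: correlate the coarse layers of a fused band and the
# corresponding original MS band. Spatial quality: correlate the detail
# layers of the fused band and the high-resolution panchromatic image.
```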
Automated Recognition of 3D Features in GPIR Images
NASA Technical Reports Server (NTRS)
Park, Han; Stough, Timothy; Fijany, Amir
2007-01-01
A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature-extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract specific features in images, such as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a directed-graph data structure. Relative to past approaches, this multiaxis approach offers the advantages of more reliable detections, better discrimination of objects, and provision of redundant information, which can be helpful in filling gaps in feature recognition by one of the component algorithms. The image-processing class also includes postprocessing algorithms that enhance identified features to prepare them for further scrutiny by human analysts. Enhancement of images as a postprocessing step is a significant departure from traditional practice, in which enhancement of images is a preprocessing step.
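The object-linking step lends itself to a short sketch: features in adjacent slices are connected whenever they fall within a threshold radius, producing the directed-graph structure described above. The centroid representation and toy data below are illustrative assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def link_slices(slice_features, radius):
    """Link 2D features across adjacent slices into a directed graph.

    slice_features: list of (n_i, 2) arrays of feature centroids per slice.
    Features within `radius` of a feature in the next slice are linked.
    Returns edges as ((slice, index) -> (slice + 1, index)) pairs.
    """
    edges = []
    for z in range(len(slice_features) - 1):
        cur, nxt = slice_features[z], slice_features[z + 1]
        if len(cur) == 0 or len(nxt) == 0:
            continue
        tree = cKDTree(nxt)
        for i, pt in enumerate(cur):
            for j in tree.query_ball_point(pt, radius):
                edges.append(((z, i), (z + 1, j)))
    return edges

# Toy pipe running diagonally through three slices:
slices = [np.array([[10.0, 10.0]]), np.array([[11.0, 10.5]]),
          np.array([[12.0, 11.0]])]
print(link_slices(slices, radius=2.0))
```

Connected chains of such edges trace a candidate 3D object (e.g., a pipe) through the image volume.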
A novel design for scintillator-based neutron and gamma imaging in inertial confinement fusion
NASA Astrophysics Data System (ADS)
Geppert-Kleinrath, Verena; Cutler, Theresa; Danly, Chris; Madden, Amanda; Merrill, Frank; Tybo, Josh; Volegov, Petr; Wilde, Carl
2017-10-01
The LANL Advanced Imaging team has been providing reliable 2D neutron imaging of the burning fusion fuel at NIF for years, revealing possible multi-dimensional asymmetries in the fuel shape, and therefore calling for additional views. Adding a passive imaging system using image plate techniques along a new polar line of sight has recently demonstrated the merit of 3D neutron image reconstruction. Now, the team is in the process of designing a new active neutron imaging system for an additional equatorial view. The design will include a gamma imaging system as well, to allow for the imaging of carbon in the ablator of the NIF fuel capsules, constraining the burning fuel shape even further. The selection of ideal scintillator materials for a position-sensitive detector system is the key component for the new design. A comprehensive study of advanced scintillators has been carried out at the Los Alamos Neutron Science Center and the OMEGA Laser Facility in Rochester, NY. Neutron radiography using a fast-gated CCD camera system delivers measurements of resolution, light output and noise characteristics. The measured performance parameters inform the novel design, for which we conclude the feasibility of monolithic scintillators over pixelated counterparts.
Assessment of SPOT-6 optical remote sensing data against GF-1 using NNDiffuse image fusion algorithm
NASA Astrophysics Data System (ADS)
Zhao, Jinling; Guo, Junjie; Cheng, Wenjie; Xu, Chao; Huang, Linsheng
2017-07-01
A cross-comparison method was used to assess SPOT-6 optical satellite imagery against Chinese GF-1 imagery using three types of indicators: spectral and color quality, fusion effect and identification potential. More specifically, spectral response function (SRF) curves were used to compare the two types of imagery, showing that the SRF curve shape of SPOT-6 is closer to a rectangle than that of GF-1 in the blue, green, red and near-infrared bands. The NNDiffuse image fusion algorithm was used to evaluate the capability of information conservation in comparison with the wavelet transform (WT) and principal component (PC) algorithms. The results show that the NNDiffuse fused image has an entropy value extremely similar to that of the original image (1.849 versus 1.852) and better color quality. In addition, the object-oriented classification toolset (ENVI EX) was used to identify greenlands, comparing the self-fused SPOT-6 image against the SPOT-6/GF-1 inter-fused image, both based on the NNDiffuse algorithm. The overall accuracies are 97.27% and 76.88%, respectively, showing that the self-fused SPOT-6 image has better identification capability.
Three-dimensional super-resolved live cell imaging through polarized multi-angle TIRF.
Zheng, Cheng; Zhao, Guangyuan; Liu, Wenjie; Chen, Youhua; Zhang, Zhimin; Jin, Luhong; Xu, Yingke; Kuang, Cuifang; Liu, Xu
2018-04-01
Measuring three-dimensional nanoscale cellular structures is challenging, especially when the structure is dynamic. Because total internal reflection fluorescence (TIRF) imaging under varied illumination angles is highly informative, multi-angle (MA) TIRF has been shown to offer nanoscale axial and subsecond temporal resolution. However, conventional MA-TIRF still performs poorly in lateral resolution and fails to characterize depth in densely distributed regions. Here, we improve the lateral resolution of MA-TIRF simply by introducing polarization modulation into the illumination procedure. Equipped with a sparsity-promoting, accelerated proximal algorithm, we recover more precise 3D sample structure than previous methods, enabling live cell imaging with a temporal resolution of 2 s and resolving mitochondrial fission and fusion processes at high resolution. We have also shared the recovery program, which is, to the best of our knowledge, the first open-source recovery code for MA-TIRF.
A Bayesian trans-dimensional approach for the fusion of multiple geophysical datasets
NASA Astrophysics Data System (ADS)
JafarGandomi, Arash; Binley, Andrew
2013-09-01
We propose a Bayesian fusion approach to integrate multiple geophysical datasets with different coverage and sensitivity. The fusion strategy is based on the capability of various geophysical methods to provide enough resolution to identify either subsurface material parameters or subsurface structure, or both. We focus on electrical resistivity as the target material parameter and electrical resistivity tomography (ERT), electromagnetic induction (EMI), and ground penetrating radar (GPR) as the set of geophysical methods. However, extending the approach to different sets of geophysical parameters and methods is straightforward. Different geophysical datasets are entered into a trans-dimensional Markov chain Monte Carlo (McMC) search-based joint inversion algorithm. The trans-dimensional property of the McMC algorithm allows dynamic parameterisation of the model space, which in turn helps to avoid bias of the post-inversion results towards a particular model. Given that we are attempting to develop an approach that has practical potential, we discretize the subsurface into an array of one-dimensional earth models. Accordingly, the ERT data, which are collected using a two-dimensional acquisition geometry, are recast as a set of equivalent vertical electric soundings. Different data are inverted either individually or jointly to estimate one-dimensional subsurface models at discrete locations. We use Shannon's information measure to quantify the information obtained from the inversion of different combinations of geophysical datasets. Information from multiple methods is brought together by introducing a joint likelihood function and/or constraining the prior information. A Bayesian maximum entropy approach is used for spatial fusion of the spatially dispersed estimated one-dimensional models and for mapping of the target parameter. We illustrate the approach with a synthetic dataset and then apply it to a field dataset. We show that the proposed fusion strategy is successful not only in enhancing the subsurface information but also as a survey design tool to identify the appropriate combination of geophysical tools and to show whether application of an individual method for further investigation of a specific site is beneficial.
Improved medical image fusion based on cascaded PCA and shift invariant wavelet transforms.
Reena Benjamin, J; Jayasree, T
2018-02-01
In the medical field, radiologists need more informative and high-quality medical images to diagnose diseases. Image fusion plays a vital role in the field of biomedical image analysis. It aims to integrate the complementary information from multimodal images, producing a new composite image which is expected to be more informative for visual perception than any of the individual input images. The main objective of this paper is to improve the information content, to preserve the edges and to enhance the quality of the fused image using cascaded principal component analysis (PCA) and shift invariant wavelet transforms. A novel image fusion technique based on cascaded PCA and shift invariant wavelet transforms is proposed in this paper. PCA in the spatial domain extracts relevant information from the large dataset based on eigenvalue decomposition, and the wavelet transform operating in the complex domain with shift invariant properties brings out more directional and phase details of the image. The maximum fusion rule applied in the dual-tree complex wavelet transform domain enhances the average information and morphological details. The input images of the human brain of two different modalities (MRI and CT) are collected from the whole brain atlas data distributed by Harvard University. Both MRI and CT images are fused using the cascaded PCA and shift invariant wavelet transform method. The proposed method is evaluated based on three key factors, namely structure preservation, edge preservation, and contrast preservation. The experimental results and comparison with other existing fusion methods show the superior performance of the proposed image fusion framework in terms of visual and quantitative evaluations. In this paper, a complex wavelet-based image fusion has been discussed. The experimental results demonstrate that the proposed method enhances the directional features as well as fine edge details. It also reduces redundant details, artifacts, and distortions.
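The cascaded pipeline is specific to the paper, but the classical PCA fusion rule it builds on is easy to sketch. The following Python fragment (an illustrative sketch with stand-in inputs, not the authors' implementation) derives fusion weights from the leading eigenvector of the source images' covariance matrix:

```python
import numpy as np

def pca_fusion(img_a, img_b):
    """Fuse two registered grayscale images with the classical PCA rule:
    weights come from the leading eigenvector of the 2x2 covariance
    matrix of the (flattened) source images."""
    data = np.stack([img_a.ravel(), img_b.ravel()]).astype(float)
    cov = np.cov(data)                   # 2x2 covariance matrix
    vals, vecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    w = np.abs(vecs[:, -1])              # leading eigenvector
    w = w / w.sum()                      # normalize to fusion weights
    return w[0] * img_a + w[1] * img_b

# Hypothetical use with pre-registered MRI and CT slices of equal size:
mri = np.random.rand(256, 256)   # stand-in for a registered MRI slice
ct = np.random.rand(256, 256)    # stand-in for a registered CT slice
fused = pca_fusion(mri, ct)
```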
Schulz-Wendtland, Rüdiger; Jud, Sebastian M.; Fasching, Peter A.; Hartmann, Arndt; Radicke, Marcus; Rauh, Claudia; Uder, Michael; Wunderle, Marius; Gass, Paul; Langemann, Hanna; Beckmann, Matthias W.; Emons, Julius
2017-01-01
Aim The combination of different imaging modalities through the use of fusion devices promises significant diagnostic improvement for breast pathology. The aim of this study was to evaluate image quality and clinical feasibility of a prototype fusion device (fusion prototype) constructed from a standard tomosynthesis mammography unit and a standard 3D ultrasound probe using a new method of breast compression. Materials and Methods Imaging was performed on 5 mastectomy specimens from patients with confirmed DCIS or invasive carcinoma (BI-RADS™ 6). For the preclinical fusion prototype an ABVS system ultrasound probe from an Acuson S2000 was integrated into a MAMMOMAT Inspiration (both Siemens Healthcare Ltd) and, with the aid of a newly developed compression plate, digital mammogram and automated 3D ultrasound images were obtained. Results The quality of digital mammogram images produced by the fusion prototype was comparable to those produced using conventional compression. The newly developed compression plate did not influence the applied x-ray dose. The method was not more labour intensive or time-consuming than conventional mammography. From the technical perspective, fusion of the two modalities was achievable. Conclusion In this study, using only a few mastectomy specimens, the fusion of an automated 3D ultrasound machine with a standard mammography unit delivered images of comparable quality to conventional mammography. The device allows simultaneous ultrasound – the second important imaging modality in complementary breast diagnostics – without increasing examination time or requiring additional staff. PMID:28713173
Spatial Statistical Data Fusion for Remote Sensing Applications
NASA Technical Reports Server (NTRS)
Nguyen, Hai
2010-01-01
Data fusion is the process of combining information from heterogeneous sources into a single composite picture of the relevant process, such that the composite picture is generally more accurate and complete than that derived from any single source alone. Data collection is often incomplete, sparse, and yields incompatible information. Fusion techniques can make optimal use of such data. When investment in data collection is high, fusion gives the best return. Our study uses data from two satellites: (1) Multiangle Imaging SpectroRadiometer (MISR), (2) Moderate Resolution Imaging Spectroradiometer (MODIS).
Fusion of monocular cues to detect man-made structures in aerial imagery
NASA Technical Reports Server (NTRS)
Shufelt, Jefferey; Mckeown, David M.
1991-01-01
The extraction of buildings from aerial imagery is a complex problem for automated computer vision. It requires locating regions in a scene that possess properties distinguishing them as man-made objects as opposed to naturally occurring terrain features. It is reasonable to assume that no single detection method can correctly delineate or verify buildings in every scene. A cooperative-methods paradigm is useful in approaching the building extraction problem. Using this paradigm, each extraction technique provides information which can be added or assimilated into an overall interpretation of the scene. Thus, the main objective is to explore the development of a computer vision system that integrates the results of various scene analysis techniques into an accurate and robust interpretation of the underlying three-dimensional scene. The problem of building hypothesis fusion in aerial imagery is discussed. Building extraction techniques are briefly surveyed, including four building extraction, verification, and clustering systems. A method for fusing the symbolic data generated by these systems is described, and applied to monocular image and stereo image data sets. Evaluation methods for the fusion results are described, and the fusion results are analyzed using these methods.
Huang, Haifeng; Wang, Wei; Lin, Tingsheng; Zhang, Qing; Zhao, Xiaozhi; Lian, Huibo; Guo, Hongqian
2016-11-17
To compare the complications of traditional transrectal (TR) prostate biopsy and image fusion guided transperineal (TP) prostate biopsy in our center. Two hundred and forty-two patients who underwent prostate biopsy from August 2014 to January 2015 were reviewed. Among them, 144 patients underwent systematic 12-core transrectal ultrasonography (TRUS) guided prostate biopsy (TR approach) while 98 patients underwent free-hand transperineal targeted biopsy with TRUS and multi-parametric magnetic resonance imaging (mpMRI) fusion images (TP approach). The complications of the two groups were recorded and a simple statistical analysis was performed to compare the two groups. The cohort of our study included 242 patients: 144 underwent TR biopsies and 98 underwent TP biopsies. There was no significant difference in major complications, including sepsis, bleeding and other complications requiring admission, between the two groups (p > 0.05). The incidence rates of infection and rectal bleeding in TR were much higher than in TP (p < 0.05), but the incidence rate of perineal swelling in TP was much higher than in TR (p < 0.05). There were no significant differences in minor complications, including hematuria, lower urinary tract symptoms (LUTS), dysuria, and acute urinary retention, between the two groups (p > 0.05). The present study supports the safety of both techniques. Free-hand TP targeted prostate biopsy with real-time fusion imaging of mpMRI and TR ultrasound is a good approach for prostate biopsy.
NASA Astrophysics Data System (ADS)
Prasad, S.; Bruce, L. M.
2007-04-01
There is a growing interest in using multiple sources for automatic target recognition (ATR) applications. One approach is to take multiple, independent observations of a phenomenon and perform a feature level or a decision level fusion for ATR. This paper proposes a method to utilize these types of multi-source fusion techniques to exploit hyperspectral data when only a small number of training pixels are available. Conventional hyperspectral image based ATR techniques project the high dimensional reflectance signature onto a lower dimensional subspace using techniques such as Principal Components Analysis (PCA), Fisher's linear discriminant analysis (LDA), subspace LDA and stepwise LDA. While some of these techniques attempt to solve the curse of dimensionality, or small sample size problem, these are not necessarily optimal projections. In this paper, we present a divide and conquer approach to address the small sample size problem. The hyperspectral space is partitioned into contiguous subspaces such that the discriminative information within each subspace is maximized, and the statistical dependence between subspaces is minimized. We then treat each subspace as a separate source in a multi-source multi-classifier setup and test various decision fusion schemes to determine their efficacy. Unlike previous approaches which use correlation between variables for band grouping, we study the efficacy of higher order statistical information (using average mutual information) for a bottom up band grouping. We also propose a confidence measure based decision fusion technique, where the weights associated with various classifiers are based on their confidence in recognizing the training data. To this end, training accuracies of all classifiers are used for weight assignment in the fusion process of test pixels. The proposed methods are tested using hyperspectral data with known ground truth, such that the efficacy can be quantitatively measured in terms of target recognition accuracies.
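The confidence-weighted decision fusion described above reduces to a short computation. This is an illustrative Python fragment, not the authors' implementation: it weights each classifier's class-posterior estimates by its training accuracy and takes the arg-max of the fused posterior.

```python
import numpy as np

def confidence_weighted_fusion(class_probs, train_accuracies):
    """Fuse per-classifier posterior estimates with weights proportional
    to each classifier's training accuracy.

    class_probs: array of shape (n_classifiers, n_pixels, n_classes).
    train_accuracies: array of shape (n_classifiers,).
    Returns fused class labels of shape (n_pixels,)."""
    w = np.asarray(train_accuracies, dtype=float)
    w = w / w.sum()
    fused = np.tensordot(w, class_probs, axes=1)  # (n_pixels, n_classes)
    return fused.argmax(axis=1)

# Three hypothetical subspace classifiers voting on 5 pixels, 2 classes:
probs = np.random.dirichlet([1, 1], size=(3, 5))
labels = confidence_weighted_fusion(probs, [0.92, 0.85, 0.78])
```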
NASA Astrophysics Data System (ADS)
Zhou, Xiran; Liu, Jun; Liu, Shuguang; Cao, Lei; Zhou, Qiming; Huang, Huawen
2014-02-01
High spatial resolution and spectral fidelity are basic standards for evaluating an image fusion algorithm. Numerous fusion methods for remote sensing images have been developed. Some of these methods are based on the intensity-hue-saturation (IHS) transform and the generalized IHS (GIHS), which may cause serious spectral distortion. Spectral distortion in the GIHS is proven to result from changes in saturation during fusion. Therefore, reducing such changes can achieve high spectral fidelity. A GIHS-based spectral preservation fusion method that can theoretically reduce spectral distortion is proposed in this study. The proposed algorithm consists of two steps. The first step is spectral modulation (SM), which uses the Gaussian function to extract spatial details and conduct SM of multispectral (MS) images. This method yields a desirable visual effect without requiring histogram matching between the panchromatic image and the intensity of the MS image. The second step uses the Gaussian convolution function to restore lost edge details during SM. The proposed method is proven effective and shown to provide better results compared with other GIHS-based methods.
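As a rough sketch of the GIHS-style detail injection described above (not the authors' exact spectral-modulation step), the following Python fragment extracts spatial detail as the difference between the panchromatic band and its Gaussian blur, then injects it into every multispectral band:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gihs_gaussian_fusion(ms, pan, sigma=2.0):
    """Minimal GIHS-style pansharpening sketch: spatial detail is the
    panchromatic band minus its Gaussian blur, added to each band.

    ms: (H, W, B) upsampled multispectral image; pan: (H, W) panchromatic."""
    detail = pan - gaussian_filter(pan, sigma)   # Gaussian-extracted detail
    return ms + detail[..., None]                # inject into every band

ms = np.random.rand(128, 128, 4)   # stand-in for upsampled MS bands
pan = np.random.rand(128, 128)     # stand-in for the panchromatic band
fused = gihs_gaussian_fusion(ms, pan)
```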
Nighttime images fusion based on Laplacian pyramid
NASA Astrophysics Data System (ADS)
Wu, Cong; Zhan, Jinhao; Jin, Jicheng
2018-02-01
This paper describes the average weighted fusion method, image pyramid fusion, and the wavelet transform, and applies these methods to the fusion of multiple-exposure nighttime images. By calculating the information entropy and cross entropy of the fused images, we can evaluate the effect of the different fusion methods. Experiments showed that the Laplacian pyramid image fusion algorithm is well suited to nighttime image fusion: it reduces halo while preserving image details.
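Laplacian pyramid fusion is a standard technique, so a compact reference sketch may help. The following Python/OpenCV fragment is an illustrative implementation, not the paper's code: detail levels keep the coefficient with larger magnitude, and the coarsest level is averaged.

```python
import cv2
import numpy as np

def laplacian_pyramid_fusion(img_a, img_b, levels=4):
    """Fuse two registered exposures with a Laplacian pyramid:
    max-absolute rule on detail levels, average on the residual."""
    def lap_pyramid(img):
        g = img.astype(np.float32)
        pyr = []
        for _ in range(levels):
            down = cv2.pyrDown(g)
            up = cv2.pyrUp(down, dstsize=(g.shape[1], g.shape[0]))
            pyr.append(g - up)      # band-pass (Laplacian) level
            g = down
        pyr.append(g)               # low-pass residual
        return pyr

    pa, pb = lap_pyramid(img_a), lap_pyramid(img_b)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    out = fused[-1]
    for lap in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return np.clip(out, 0, 255).astype(np.uint8)

# Hypothetical use with two grayscale exposures of the same night scene:
# fused = laplacian_pyramid_fusion(cv2.imread("exp1.png", 0),
#                                  cv2.imread("exp2.png", 0))
```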
Multiscale Morphological Filtering for Analysis of Noisy and Complex Images
NASA Technical Reports Server (NTRS)
Kher, A.; Mitra, S.
1993-01-01
Images acquired with passive sensing techniques suffer from illumination variations and poor local contrasts that create major difficulties in interpretation and identification tasks. On the other hand, images acquired with active sensing techniques based on monochromatic illumination are degraded with speckle noise. Mathematical morphology offers elegant techniques to handle a wide range of image degradation problems. Unlike linear filters, morphological filters do not blur the edges and hence maintain higher image resolution. Their rich mathematical framework facilitates the design and analysis of these filters as well as their hardware implementation. Morphological filters are easier to implement and are more cost effective and efficient than several conventional linear filters. Morphological filters to remove speckle noise while maintaining high resolution and preserving thin image regions that are particularly vulnerable to speckle noise were developed and applied to SAR imagery. These filters used combination of linear (one-dimensional) structuring elements in different (typically four) orientations. Although this approach preserves more details than the simple morphological filters using two-dimensional structuring elements, the limited orientations of one-dimensional elements approximate the fine details of the region boundaries. A more robust filter designed recently overcomes the limitation of the fixed orientations. This filter uses a combination of concave and convex structuring elements. Morphological operators are also useful in extracting features from visible and infrared imagery. A multiresolution image pyramid obtained with successive filtering and a subsampling process aids in the removal of the illumination variations and enhances local contrasts. A morphology-based interpolation scheme was also introduced to reduce intensity discontinuities created in any morphological filtering task. The generality of morphological filtering techniques in extracting information from a wide variety of images obtained with active and passive sensing techniques is discussed. Such techniques are particularly useful in obtaining more information from fusion of complex images by different sensors such as SAR, visible, and infrared.
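The oriented-structuring-element speckle filter described above can be approximated in a few lines. This Python fragment is a sketch of the idea, not the authors' implementation: grey-scale openings with one-dimensional elements in four orientations, combined by a pointwise maximum.

```python
import numpy as np
from scipy.ndimage import grey_opening

def oriented_opening_filter(img, length=7):
    """Speckle-suppressing filter sketch: openings with 1D structuring
    elements at 0, 45, 90 and 135 degrees; the pointwise maximum keeps
    thin linear structures in any orientation while removing small
    speckle."""
    footprints = [
        np.ones((1, length), bool),              # 0 degrees
        np.eye(length, dtype=bool),              # 45 degrees
        np.ones((length, 1), bool),              # 90 degrees
        np.fliplr(np.eye(length, dtype=bool)),   # 135 degrees
    ]
    opened = [grey_opening(img, footprint=fp) for fp in footprints]
    return np.max(opened, axis=0)

# Hypothetical use on a speckled SAR amplitude image (float array):
sar = np.abs(np.random.randn(256, 256))
filtered = oriented_opening_filter(sar)
```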
NASA Astrophysics Data System (ADS)
Liu, Zhanwen; Feng, Yan; Chen, Hang; Jiao, Licheng
2017-10-01
A novel and effective image fusion method is proposed for creating a highly informative and smooth fused image by merging visible and infrared images. First, a two-scale non-subsampled shearlet transform (NSST) is employed to decompose the visible and infrared images into detail layers and one base layer. Then, phase congruency is adopted to extract saliency maps from the detail layers, and guided filtering is used to compute the filtered output of the base layer and saliency maps. Next, a novel weighted average technique makes full use of scene consistency for fusion, yielding a coefficient map. Finally, the fused image is obtained by taking the inverse NSST of the fused coefficient map. Experiments show that the proposed approach achieves better performance than other methods in terms of both subjective visual effect and objective assessment.
Abe, Masanori; Fukazawa, Ryuji; Ogawa, Shunichi; Watanabe, Makoto; Fukushima, Yoshimitsu; Kiriyama, Tomonari; Hayashi, Hiromitsu; Itoh, Yasuhiko
2016-01-01
The coronary arterial lesions of Kawasaki disease are mainly dilative lesions, aneurysms, and stenotic lesions formed before, after, and between aneurysms; these lesions develop in multiple branches, resulting in complex coronary hemodynamics. Diagnosis of myocardial ischemia and infarction and evaluation of the culprit coronary arteries and regions is critical to evaluating the treatment and prognosis of patients. This study used hybrid imaging, in which multidetector computed tomographic (CT) images for coronary CT angiography (CCTA) and stress myocardial perfusion single-photon emission CT (SPECT) images were fused. We investigated the diagnosis of blood vessels and regions responsible for myocardial ischemia and infarction in patients with complex coronary arterial lesions; in addition, we evaluated myocardial lesions that developed directly under giant coronary artery aneurysms. The subjects were 17 patients with Kawasaki disease with multiple coronary arterial lesions (median age, 18.0 years; 16 male). Both CCTA using 64-row CT and adenosine-loading myocardial SPECT were performed. Three branches, the right coronary artery (RCA), left anterior descending branch (LAD), and left circumflex branch, were evaluated with the conventional side-by-side interpretation, in which the images were lined up for diagnosis, and hybrid imaging, in which the CCTA and SPECT images were fused with computer processing. In addition, the myocardial lesions directly under giant coronary artery aneurysms were investigated with fusion imaging. Images sufficient for evaluation were acquired in all 17 patients. In the RCA, coronary arterial lesions were detected with CCTA in 16 patients. The evaluations were consistent between the side-by-side and fusion interpretation in 14 patients, and the blood vessel responsible for the myocardial ischemic region was identified in 2 patients. In the left circumflex branch, coronary arterial lesions were confirmed with 3-dimensional CT in 5 patients, and the culprit coronary arteries for myocardial ischemia/infarction were confirmed with the fusion interpretation but not with the side-by-side interpretation. In the LAD, coronary arterial lesions were present in all patients, and the diagnosis was made with the fusion interpretation in 10 patients. In the LAD, small-range infarct lesions were detected directly under the giant coronary artery aneurysm in 8 patients, but were not confirmed with the side-by-side interpretation. Fusion imaging was capable of accurately evaluating myocardial ischemia/infarction as cardiovascular sequelae of Kawasaki disease and confirming the culprit coronary arteries. In addition, analysis of fusion images confirmed that small-range infarct lesions were concomitantly present directly under giant coronary artery aneurysms in the anterior descending coronary artery.
Adaptive multiple super fast simulated annealing for stochastic microstructure reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryu, Seun; Lin, Guang; Sun, Xin
2013-01-01
Fast image reconstruction from statistical information is critical in image fusion from multimodality chemical imaging instrumentation, where the goal is to create high-resolution images over large domains. Stochastic methods have been widely used to reconstruct images from two-point correlation functions. The main challenge is to increase the efficiency of reconstruction. A novel simulated annealing method is proposed for fast image reconstruction. Combining the advantages of very fast cooling schedules, dynamic adaptation and parallelization, the new simulated annealing algorithm increases efficiency by several orders of magnitude, making large-domain image fusion feasible.
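A toy version of annealing-based reconstruction from a two-point correlation function may clarify the idea. The Python sketch below is illustrative only: it matches only the row-direction correlation and uses a simple geometric cooling schedule rather than the paper's adaptive, parallel scheme.

```python
import numpy as np

def anneal_reconstruction(target_s2, shape, vol_frac, t0=1.0,
                          cooling=0.995, steps=20000, seed=0):
    """Toy two-point-correlation annealing: swap one solid and one void
    pixel, accept by the Metropolis rule, cool geometrically."""
    rng = np.random.default_rng(seed)
    img = (rng.random(shape) < vol_frac).astype(float)

    def s2(im):
        # Row-direction two-point correlation via circular autocorrelation.
        f = np.fft.rfft(im, axis=1)
        return np.fft.irfft(f * np.conj(f), n=im.shape[1],
                            axis=1).mean(axis=0) / im.shape[1]

    energy = np.sum((s2(img) - target_s2) ** 2)
    t = t0
    for _ in range(steps):
        solid = np.argwhere(img == 1.0)
        void = np.argwhere(img == 0.0)
        p = tuple(solid[rng.integers(len(solid))])
        q = tuple(void[rng.integers(len(void))])
        img[p], img[q] = 0.0, 1.0                       # trial swap
        e_new = np.sum((s2(img) - target_s2) ** 2)
        if e_new <= energy or rng.random() < np.exp((energy - e_new) / t):
            energy = e_new                              # accept
        else:
            img[p], img[q] = 1.0, 0.0                   # reject: undo
        t *= cooling
    return img

# Match the row correlation of a synthetic reference microstructure:
ref = (np.random.default_rng(1).random((32, 32)) < 0.3).astype(float)
target = np.fft.irfft(np.abs(np.fft.rfft(ref, axis=1)) ** 2,
                      n=32, axis=1).mean(axis=0) / 32
recon = anneal_reconstruction(target, (32, 32), vol_frac=0.3, steps=2000)
```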
Guffei, Amanda; Sarkar, Rahul; Klewes, Ludger; Righolt, Christiaan; Knecht, Hans; Mai, Sabine
2010-12-01
Hodgkin's lymphoma is characterized by the presence of mono-nucleated Hodgkin cells and bi- to multi-nucleated Reed-Sternberg cells. We have recently shown telomere dysfunction and aberrant synchronous/asynchronous cell divisions during the transition of Hodgkin cells to Reed-Sternberg cells.1 To determine whether overall changes in nuclear architecture affect genomic instability during the transition of Hodgkin cells to Reed-Sternberg cells, we investigated the nuclear organization of chromosomes in these cells. Three-dimensional fluorescent in situ hybridization revealed irregular nuclear positioning of individual chromosomes in Hodgkin cells and, more so, in Reed-Sternberg cells. We characterized an increasingly unequal distribution of chromosomes as mono-nucleated cells became multi-nucleated cells, some of which also contained chromosome-poor 'ghost' cell nuclei. Measurements of nuclear chromosome positions suggested chromosome overlaps in both types of cells. Spectral karyotyping then revealed both aneuploidy and complex chromosomal rearrangements: multiple breakage-bridge-fusion cycles were at the origin of the multiple rearranged chromosomes. This conclusion was challenged by super resolution three-dimensional structured illumination imaging of Hodgkin and Reed-Sternberg nuclei. Three-dimensional super resolution microscopy data documented inter-nuclear DNA bridges in multi-nucleated cells but not in mono-nucleated cells. These bridges consisted of chromatids and chromosomes shared by two Reed-Sternberg nuclei. The complexity of chromosomal rearrangements increased as Hodgkin cells developed into multi-nucleated cells, thus indicating tumor progression and evolution in Hodgkin's lymphoma, with Reed-Sternberg cells representing the highest complexity in chromosomal rearrangements in this disease. This is the first study to demonstrate nuclear remodeling and associated genomic instability leading to the generation of Reed-Sternberg cells of Hodgkin's lymphoma. We defined nuclear remodeling as a key feature of Hodgkin's lymphoma, highlighting the relevance of nuclear architecture in cancer.
Statistical image quantification toward optimal scan fusion and change quantification
NASA Astrophysics Data System (ADS)
Potesil, Vaclav; Zhou, Xiang Sean
2007-03-01
Recent advance of imaging technology has brought new challenges and opportunities for automatic and quantitative analysis of medical images. With broader accessibility of more imaging modalities for more patients, fusion of modalities/scans from one time point and longitudinal analysis of changes across time points have become the two most critical differentiators to support more informed, more reliable and more reproducible diagnosis and therapy decisions. Unfortunately, scan fusion and longitudinal analysis are both inherently plagued with increased levels of statistical errors. A lack of comprehensive analysis by imaging scientists and a lack of full awareness by physicians pose potential risks in clinical practice. In this paper, we discuss several key error factors affecting imaging quantification, studying their interactions, and introducing a simulation strategy to establish general error bounds for change quantification across time. We quantitatively show that image resolution, voxel anisotropy, lesion size, eccentricity, and orientation are all contributing factors to quantification error; and there is an intricate relationship between voxel anisotropy and lesion shape in affecting quantification error. Specifically, when two or more scans are to be fused at feature level, optimal linear fusion analysis reveals that scans with voxel anisotropy aligned with lesion elongation should receive a higher weight than other scans. As a result of such optimal linear fusion, we will achieve a lower variance than naïve averaging. Simulated experiments are used to validate theoretical predictions. Future work based on the proposed simulation methods may lead to general guidelines and error lower bounds for quantitative image analysis and change detection.
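In its simplest form, the optimal linear fusion argument reduces to inverse-variance weighting of unbiased measurements. The sketch below uses illustrative numbers, not data from the paper, and shows the fused variance falling below that of naive averaging, echoing the point that scans whose voxel anisotropy aligns with lesion elongation (lower measurement variance) should receive higher weight.

```python
import numpy as np

def optimal_linear_fusion(measurements, variances):
    """Minimum-variance linear fusion of unbiased measurements:
    weights proportional to inverse variance."""
    v = np.asarray(variances, dtype=float)
    w = (1.0 / v) / np.sum(1.0 / v)
    est = np.dot(w, measurements)
    fused_var = 1.0 / np.sum(1.0 / v)
    return est, fused_var

# Two scans measuring the same lesion diameter (mm):
# fused variance 0.2 beats naive averaging's (0.25 + 1.0) / 4 = 0.3125.
print(optimal_linear_fusion([12.1, 11.5], [0.25, 1.0]))
```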
Progressive Label Fusion Framework for Multi-atlas Segmentation by Dictionary Evolution
Song, Yantao; Wu, Guorong; Sun, Quansen; Bahrami, Khosro; Li, Chunming; Shen, Dinggang
2015-01-01
Accurate segmentation of anatomical structures in medical images is very important in neuroscience studies. Recently, multi-atlas patch-based label fusion methods have achieved many successes, which generally represent each target patch from an atlas patch dictionary in the image domain and then predict the latent label by directly applying the estimated representation coefficients in the label domain. However, due to the large gap between these two domains, the estimated representation coefficients in the image domain may not stay optimal for the label fusion. To overcome this dilemma, we propose a novel label fusion framework to make the weighting coefficients eventually to be optimal for the label fusion by progressively constructing a dynamic dictionary in a layer-by-layer manner, where a sequence of intermediate patch dictionaries gradually encode the transition from the patch representation coefficients in image domain to the optimal weights for label fusion. Our proposed framework is general to augment the label fusion performance of the current state-of-the-art methods. In our experiments, we apply our proposed method to hippocampus segmentation on ADNI dataset and achieve more accurate labeling results, compared to the counterpart methods with single-layer dictionary. PMID:26942233
A model for explaining fusion suppression using classical trajectory method
NASA Astrophysics Data System (ADS)
Phookan, C. K.; Kalita, K.
2015-01-01
We adopt a semi-classical approach to explain projectile breakup and above-barrier fusion suppression for the reactions 6Li+152Sm and 6Li+144Sm. The cut-off impact parameter for fusion is determined by employing quantum mechanical ideas. Within this cut-off impact parameter for fusion, the fraction of projectiles undergoing breakup is determined using the classical trajectory method in two dimensions. For obtaining the initial conditions of the equations of motion, a simplified model of the 6Li nucleus has been proposed. We introduce a simple formula for explaining fusion suppression. We find excellent agreement between the experimental and calculated fusion cross sections. A slight modification of the above formula for fusion suppression is also proposed for a three-dimensional model.
Panoramic 3D Reconstruction by Fusing Color Intensity and Laser Range Data
NASA Astrophysics Data System (ADS)
Jiang, Wei; Lu, Jian
Technology for capturing panoramic (360-degree) three-dimensional information in a real environment has many applications in fields such as virtual and complex reality, security, and robot navigation. In this study, we examine an acquisition device constructed from a regular CCD camera and a 2D laser range scanner, along with a technique for panoramic 3D reconstruction using a data fusion algorithm based on an energy minimization framework. The acquisition device can capture two types of data of a panoramic scene without occlusion between the two sensors: a dense spatio-temporal volume from the camera and distance information from the laser scanner. We resample the dense spatio-temporal volume to generate a dense multi-perspective panorama that has spatial resolution equal to that of the original images acquired with the regular camera, and also estimate a dense panoramic depth-map corresponding to the generated reference panorama by extracting trajectories from the dense spatio-temporal volume with a selecting camera. Moreover, to determine distance information robustly, we propose a data fusion algorithm embedded in an energy minimization framework that incorporates active depth measurements from the 2D laser range scanner and passive geometry reconstruction from the image sequence obtained with the CCD camera. Thereby, measurement precision and robustness can be improved beyond those available from conventional methods using either passive geometry reconstruction (stereo vision) or a laser range scanner alone. Experimental results using both synthetic and actual images show that our approach can produce high-quality panoramas and perform accurate 3D reconstruction in a panoramic environment.
Hill, K W; Bitter, M L; Scott, S D; Ince-Cushman, A; Reinke, M; Rice, J E; Beiersdorfer, P; Gu, M-F; Lee, S G; Broennimann, Ch; Eikenberry, E F
2008-10-01
A new spatially resolving x-ray crystal spectrometer capable of measuring continuous spatial profiles of high resolution spectra (λ/Δλ > 6000) of He-like and H-like Ar Kα lines with good spatial (approximately 1 cm) and temporal (approximately 10 ms) resolutions has been installed on the Alcator C-Mod tokamak. Two spherically bent crystals image the spectra onto four two-dimensional Pilatus II pixel detectors. Tomographic inversion enables inference of the local line emissivity, ion temperature (Ti), and toroidal plasma rotation velocity (vφ) from the line Doppler widths and shifts. The data analysis techniques, Ti and vφ profiles, analysis of the fusion-neutron background, and predictions of performance on other tokamaks, including ITER, are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lapuyade-Lahorgue, J; Ruan, S; Li, H
Purpose: Multi-tracer PET imaging is getting more attention in radiotherapy by providing additional tumor volume information such as glucose and oxygenation. However, automatic PET-based tumor segmentation is still a very challenging problem. We propose a statistical fusion approach to jointly segment the sub-areas of tumors from the two tracers FDG and FMISO PET images. Methods: Non-standardized Gamma distributions are convenient for modeling intensity distributions in PET. As a serious correlation exists in multi-tracer PET images, we propose a new fusion method based on copulas, which are capable of representing the dependency between different tracers. The Hidden Markov Field (HMF) model is used to represent the spatial relationship between PET image voxels and the statistical dynamics of intensities for each modality. Real PET images of five patients with FDG and FMISO are used to evaluate our method quantitatively and qualitatively. A comparison between individual and multi-tracer segmentations was conducted to show the advantages of the proposed fusion method. Results: The segmentation results show that fusion with a Gaussian copula achieves a high Dice coefficient of 0.84, compared with 0.54 and 0.3 for monomodal segmentation based on individual segmentation of the FDG and FMISO PET images, respectively. In addition, the high correlation coefficients of the Gaussian copula (0.75 to 0.91) for all five test patients indicate the dependency between tumor regions in the multi-tracer PET images. Conclusion: This study shows that using multi-tracer PET imaging can efficiently improve the segmentation of the tumor region where hypoxia and glucidic consumption are present at the same time. The introduction of copulas for modeling the dependency between two tracers can simultaneously take into account information from both tracers and deal with the two pathological phenomena. Future work will consider other families of copulas, such as spherical and Archimedean copulas, and will address partial volume effects by considering the dependency between neighboring voxels.
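The Gaussian copula construction with Gamma marginals can be sketched directly. The following Python fragment illustrates the statistical model only (not the authors' segmentation code, and with made-up parameters): a bivariate normal supplies the dependence, and Gamma marginals model each tracer's intensities.

```python
import numpy as np
from scipy import stats

def sample_gamma_gaussian_copula(n, rho, shape_fdg, scale_fdg,
                                 shape_fmiso, scale_fmiso, seed=0):
    """Draw n correlated (FDG, FMISO) intensity pairs: a bivariate
    normal with correlation rho gives the Gaussian copula, and the
    uniforms are mapped through Gamma inverse CDFs."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u = stats.norm.cdf(z)                  # uniforms with copula dependence
    x_fdg = stats.gamma.ppf(u[:, 0], a=shape_fdg, scale=scale_fdg)
    x_fmiso = stats.gamma.ppf(u[:, 1], a=shape_fmiso, scale=scale_fmiso)
    return x_fdg, x_fmiso

fdg, fmiso = sample_gamma_gaussian_copula(
    1000, rho=0.8, shape_fdg=3, scale_fdg=2, shape_fmiso=2, scale_fmiso=1.5)
```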
Nishikawa, Shuh-ichi; Hirata, Aiko; Endo, Toshiya
2008-11-01
During mating of the budding yeast Saccharomyces cerevisiae, two haploid nuclei fuse to produce a diploid nucleus. The process of nuclear fusion requires two J proteins: Jem1p in the endoplasmic reticulum (ER) lumen and Sec63p, which forms a complex with Sec71p and Sec72p, in the ER membrane. Zygotes of mutants defective in the functions of Jem1p or Sec63p contain two haploid nuclei that are closely apposed but fail to fuse. Here we analyzed the ultrastructure of nuclei in jem1Δ and sec71Δ mutant zygotes using electron microscopy with the freeze-substitution fixation method. Three-dimensional reconstruction of nuclear structures from serial electron microscope sections revealed that Jem1p facilitates nuclear inner-membrane fusion and spindle pole body (SPB) fusion while Sec71p facilitates nuclear outer-membrane fusion. Two haploid SPBs that failed to fuse could duplicate, and mitotic nuclear division of the unfused haploid nuclei started in jem1Δ and sec71Δ mutant zygotes. This observation suggests that nuclear inner-membrane fusion is required for SPB fusion, but not for SPB duplication, in the first mitotic cell division.
System integration and DICOM image creation for PET-MR fusion.
Hsiao, Chia-Hung; Kao, Tsair; Fang, Yu-Hua; Wang, Jiunn-Kuen; Guo, Wan-Yuo; Chao, Liang-Hsiao; Yen, Sang-Hue
2005-03-01
This article demonstrates a gateway system for converting image fusion results to digital imaging and communications in medicine (DICOM) objects. For the purpose of standardization and integration, we have followed the guidelines of the Integrating the Healthcare Enterprise technical framework and developed a DICOM gateway. The gateway system combines data from the hospital information system, image fusion results, and information it generates itself to constitute new DICOM objects. All the mandatory tags defined for a standard DICOM object are generated in the gateway system. The gateway generates two series of SOP (Service-Object Pair) instances for each PET-MR fusion result: one for the reconstructed magnetic resonance (MR) images and the other for the positron emission tomography (PET) images. The size, resolution, spatial coordinates, and number of frames are the same in both series of SOP instances. Each newly generated MR image exactly matches one of the reconstructed PET images. The DICOM images are stored in the picture archiving and communication system (PACS) server by means of standard DICOM protocols. When the images are retrieved and viewed with standard DICOM viewing systems, both can be viewed at the same anatomical location. This system is useful for precise diagnosis and therapy.
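A minimal sketch of creating such a DICOM object is shown below, assuming the pydicom 2.x API; the patient identifiers, UIDs and file names are illustrative placeholders, not the gateway's actual mapping from the hospital information system.

```python
import datetime
import numpy as np
from pydicom.dataset import FileDataset, FileMetaDataset
from pydicom.uid import generate_uid, ExplicitVRLittleEndian

def fused_slice_to_dicom(pixels, filename, series_uid, instance_number):
    """Wrap one fused-image slice (16-bit grayscale) as a Secondary
    Capture DICOM object with the mandatory tags filled in."""
    meta = FileMetaDataset()
    meta.MediaStorageSOPClassUID = "1.2.840.10008.5.1.4.1.1.7"  # Secondary Capture
    meta.MediaStorageSOPInstanceUID = generate_uid()
    meta.TransferSyntaxUID = ExplicitVRLittleEndian

    ds = FileDataset(filename, {}, file_meta=meta, preamble=b"\x00" * 128)
    ds.is_little_endian, ds.is_implicit_VR = True, False
    ds.SOPClassUID = meta.MediaStorageSOPClassUID
    ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
    ds.StudyInstanceUID = generate_uid()
    ds.SeriesInstanceUID = series_uid
    ds.PatientName, ds.PatientID = "FUSION^TEST", "000000"  # placeholders
    ds.Modality = "OT"                                      # "other"
    ds.InstanceNumber = instance_number
    ds.ContentDate = datetime.date.today().strftime("%Y%m%d")
    ds.Rows, ds.Columns = pixels.shape
    ds.SamplesPerPixel = 1
    ds.PhotometricInterpretation = "MONOCHROME2"
    ds.BitsAllocated = ds.BitsStored = 16
    ds.HighBit, ds.PixelRepresentation = 15, 0
    ds.PixelData = pixels.astype(np.uint16).tobytes()
    ds.save_as(filename)

fused_slice_to_dicom(np.zeros((128, 128), np.uint16), "fused_001.dcm",
                     generate_uid(), instance_number=1)
```

In a real gateway, matching PET and MR series would also share geometry tags (pixel spacing, image position and orientation) so that viewers align them.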
Antunes, Jacob; Viswanath, Satish; Brady, Justin T; Crawshaw, Benjamin; Ros, Pablo; Steele, Scott; Delaney, Conor P; Paspulati, Raj; Willis, Joseph; Madabhushi, Anant
2018-07-01
The objective of this study was to develop and quantitatively evaluate a radiology-pathology fusion method for spatially mapping tissue regions corresponding to different chemoradiation therapy-related effects from surgically excised whole-mount rectal cancer histopathology onto preoperative magnetic resonance imaging (MRI). This study included six subjects with rectal cancer treated with chemoradiation therapy who were then imaged with a 3-T T2-weighted MRI sequence, before undergoing mesorectal excision surgery. Excised rectal specimens were sectioned, stained, and digitized as two-dimensional (2D) whole-mount slides. Annotations of residual disease, ulceration, fibrosis, muscularis propria, mucosa, fat, inflammation, and pools of mucin were made by an expert pathologist on digitized slide images. An expert radiologist and pathologist jointly established corresponding 2D sections between MRI and pathology images, as well as identified a total of 10 corresponding landmarks per case (based on visually similar structures) on both modalities (five for driving registration and five for evaluating alignment). We spatially fused the in vivo MRI and ex vivo pathology images using landmark-based registration. This allowed us to spatially map detailed annotations from 2D pathology slides onto corresponding 2D MRI sections. Quantitative assessment of coregistered pathology and MRI sections revealed excellent structural alignment, with an overall deviation of 1.50 ± 0.63 mm across five expert-selected anatomic landmarks (in-plane misalignment of two to three pixels at 0.67- to 1.00-mm spatial resolution). Moreover, the T2-weighted intensity distributions were distinctly different when comparing fibrotic tissue to perirectal fat (as expected), but showed a marked overlap when comparing fibrotic tissue and residual rectal cancer. Our fusion methodology enabled successful and accurate localization of post-treatment effects on in vivo MRI. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
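The landmark-based registration step corresponds to a classical least-squares (Kabsch/Procrustes) fit. The following Python sketch, with synthetic landmarks standing in for the expert-selected pairs, recovers a 2D rotation and translation from point correspondences:

```python
import numpy as np

def landmark_rigid_fit(src, dst):
    """Least-squares rigid (rotation + translation) alignment of paired
    2D landmarks. src, dst: (N, 2) arrays of corresponding points,
    e.g. picked on pathology and MRI sections."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    h = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))     # guard against reflection
    r = vt.T @ np.diag([1.0, d]) @ u.T
    t = mu_d - r @ mu_s
    return r, t

def apply_transform(points, r, t):
    return points @ r.T + t

# Synthetic landmark pairs (mm): a 12-degree rotation plus translation.
src = np.random.rand(5, 2) * 100
theta = np.deg2rad(12)
r_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta), np.cos(theta)]])
dst = src @ r_true.T + [3.0, -2.0]
r, t = landmark_rigid_fit(src, dst)
print(np.linalg.norm(apply_transform(src, r, t) - dst, axis=1))  # ~0 residuals
```

In the study's setting, held-out landmark pairs play the role of the residual check in the last line, yielding the reported 1.50 ± 0.63 mm deviation.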
Enhancement of low light level images using color-plus-mono dual camera.
Jung, Yong Ju
2017-05-15
In digital photography, the improvement of imaging quality in low light shooting is one of the users' needs. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low light level images. A color-plus-mono dual camera that consists of two horizontally separate image sensors, which simultaneously captures both a color and mono image pair of the same scene, could be useful for improving the quality of low light level images. However, an incorrect image fusion between the color and mono image pair could also have negative effects, such as the introduction of severe visual artifacts in the fused images. This paper proposes a selective image fusion technique that applies an adaptive guided filter-based denoising and selective detail transfer to only those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. By constructing an experimental system of color-plus-mono camera, we demonstrate that the BJND-aware denoising and selective detail transfer is helpful in improving the image quality during low light shooting.
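The guided filtering at the heart of the proposed denoising is well documented (He et al.), so a reference sketch is easy to give. This Python fragment is a basic single-channel guided filter, not the paper's full BJND-aware pipeline:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Basic guided filter: edge-preserving smoothing of src steered by
    guide. Here the mono image would guide denoising of a noisy color
    channel; radius sets the box window, eps the edge threshold."""
    size = 2 * radius + 1
    mean_i = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_ip = uniform_filter(guide * src, size)
    corr_ii = uniform_filter(guide * guide, size)
    var_i = corr_ii - mean_i * mean_i
    cov_ip = corr_ip - mean_i * mean_p
    a = cov_ip / (var_i + eps)
    b = mean_p - a * mean_i
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

# Hypothetical use: denoise one color channel guided by the mono image.
# denoised_r = guided_filter(mono.astype(float), noisy_r.astype(float))
```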
Direct heating of a laser-imploded core using ultraintense laser LFEX
NASA Astrophysics Data System (ADS)
Kitagawa, Y.; Mori, Y.; Ishii, K.; Hanayama, R.; Nishimura, Y.; Okihara, S.; Nakayama, S.; Sekine, T.; Takagi, M.; Watari, T.; Satoh, N.; Kawashima, T.; Komeda, O.; Hioki, T.; Motohiro, T.; Azuma, H.; Sunahara, A.; Sentoku, Y.; Arikawa, Y.; Abe, Y.; Miura, E.; Ozaki, T.
2017-07-01
A CD shell was preimploded by two counter-propagating green beams from the GEKKO laser system GXII (based at the Institute of Laser Engineering, Osaka University), forming a dense core. The core was predominantly heated by energetic ions driven by LFEX, an extremely energetic ultrashort-pulse laser for fast-ignition fusion experiments, illuminated perpendicularly to the GXII axis. Consequently, we observed D(d,n)3He-reaction neutrons (DD beam-fusion neutrons) at a yield of 5 × 10^8 n/4π sr. The beam-fusion neutrons verified that the ions directly collided with the core plasma. Whereas the hot electrons heated the whole core volume, the energetic ions deposited their energies locally in the core. As evidenced in the spectrum, the process simultaneously excited thermal neutrons with a yield of 6 × 10^7 n/4π sr, raising the local core temperature from 0.8 to 1.8 keV. The shell-implosion dynamics (including the beam fusion and thermal fusion initiated by fast deuterons and carbon ions) can be explained by the one-dimensional hydrocode STAR 1D. Meanwhile, the core heating due to resistive processes driven by hot electrons, and also the generation of fast ions, were well predicted by the two-dimensional collisional particle-in-cell code. Together with hot electrons, the ion contribution to fast ignition is indispensable for realizing high-gain fusion. By virtue of its core heating and ignition, the proposed scheme can potentially achieve high-gain fusion.
Baños-Capilla, M C; García, M A; Bea, J; Pla, C; Larrea, L; López, E
2007-06-01
The quality of dosimetry in radiotherapy treatment requires accurate delimitation of the gross tumor volume. This can be achieved by complementing the anatomical detail provided by CT images through fusion with other imaging modalities that provide additional metabolic and physiological information. Therefore, use of multiple imaging modalities for radiotherapy treatment planning requires an accurate image registration method. This work describes tests carried out on a Discovery LS positron emission/computed tomography (PET/CT) system by General Electric Medical Systems (GEMS), for its later use to obtain images to delimit the target in radiotherapy treatment. Several phantoms were used to verify image correlation, in combination with fiducial markers, which were used as a system of external landmarks. We analyzed the geometrical accuracy of two different fusion methods with the images obtained with these phantoms. We first studied the fusion method used by the GEMS PET/CT system (hardware fusion), which assumes satisfactory coincidence between the reconstruction centers of the CT and PET systems; and second, fiducial fusion, a registration method based on a least-squares fitting algorithm over a system of landmark points. The study concluded with verification of the centroid positions of some phantom components in both imaging modalities. Centroids were estimated through a calculation similar to center-of-mass, weighted by the CT number and the uptake intensity in PET. The mean deviations found for the hardware fusion method were |Δx| ± σ = 3.3 mm ± 1.0 mm and |Δx| ± σ = 3.6 mm ± 1.0 mm. These values were substantially improved upon applying fiducial fusion based on external landmark points: |Δx| ± σ = 0.7 mm ± 0.8 mm and |Δx| ± σ = 0.3 mm ± 1.7 mm. We also noted that the differences found for each of the fusion methods were similar for both the axial and helical CT image acquisition protocols.
The fusion of large scale classified side-scan sonar image mosaics.
Reed, Scott; Tena, Ruiz Ioseba; Capus, Chris; Petillot, Yvan
2006-07-01
This paper presents a unified framework for the creation of classified maps of the seafloor from sonar imagery. Significant challenges in photometric correction, classification, navigation and registration, and image fusion are addressed. The techniques described are directly applicable to a range of remote sensing problems. Recent advances in side-scan data correction are incorporated to compensate for the sonar beam pattern and motion of the acquisition platform. The corrected images are segmented using pixel-based textural features and standard classifiers. In parallel, the navigation of the sonar device is processed using Kalman filtering techniques. A simultaneous localization and mapping framework is adopted to improve the navigation accuracy and produce georeferenced mosaics of the segmented side-scan data. These are fused within a Markovian framework and two fusion models are presented. The first uses a voting scheme regularized by an isotropic Markov random field and is applicable when the reliability of each information source is unknown. The Markov model is also used to inpaint regions where no final classification decision can be reached using pixel level fusion. The second model formally introduces the reliability of each information source into a probabilistic model. Evaluation of the two models using both synthetic images and real data from a large scale survey shows significant quantitative and qualitative improvement using the fusion approach.
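The first fusion model, voting regularized by an isotropic Markov random field, can be approximated with a short iterated-conditional-modes sketch. The Python below is illustrative only (a Potts prior optimized by ICM, with hypothetical label maps), not the authors' implementation:

```python
import numpy as np

def mrf_vote_fusion(votes, beta=0.8, iters=5):
    """Fuse per-source label maps with a vote likelihood regularized by
    an isotropic Potts MRF, optimized by iterated conditional modes.

    votes: (n_sources, H, W) integer label maps (-1 marks 'no data').
    Pixels with no votes are inpainted from the neighbourhood term."""
    n_labels = votes.max() + 1
    h, w = votes.shape[1:]
    counts = np.stack([(votes == k).sum(axis=0)
                       for k in range(n_labels)], axis=-1)
    labels = counts.argmax(axis=-1)
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                best, best_e = labels[y, x], np.inf
                for k in range(n_labels):
                    e = -counts[y, x, k]          # vote likelihood term
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy < h and 0 <= xx < w and labels[yy, xx] != k:
                            e += beta             # Potts smoothness penalty
                    if e < best_e:
                        best, best_e = k, e
                labels[y, x] = best
    return labels

# Three hypothetical segmented mosaics of a 64x64 patch, two classes:
votes = np.random.randint(-1, 2, size=(3, 64, 64))
fused = mrf_vote_fusion(votes)
```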
Fusion of Geophysical Images in the Study of Archaeological Sites
NASA Astrophysics Data System (ADS)
Karamitrou, A. A.; Petrou, M.; Tsokas, G. N.
2011-12-01
This paper presents results from different fusion techniques applied to geophysical images from different modalities, combining them into one image with higher information content than either of the two original images independently. The resultant image is useful for the detection and mapping of buried archaeological relics. The examined archaeological area is situated in the Kampana site (NE Greece) near the ancient theater of Maronia city. Archaeological excavations there revealed an ancient theater, an aristocratic house and the temple of the ancient Greek god Dionysus. Numerous ceramic objects found in the broader area indicated the probability of a buried urban structure. In order to accurately locate and map the latter, geophysical measurements were performed with the magnetic method (vertical gradient of the magnetic field) and the electrical method (apparent resistivity). We performed a semi-stochastic pixel-based registration method between the geophysical images in order to finely register them, correcting the local spatial offsets produced by the use of hand-held devices. After this procedure we applied three different fusion approaches to the registered images. Image fusion is a relatively new technique that not only allows integration of different information sources, but also takes advantage of the spatial and spectral resolution as well as the orientation characteristics of each image. We used three different fusion techniques: fusion with mean values, with wavelets (enhancing selected frequency bands) and with curvelets (giving emphasis to specific bands and angles, according to the expected orientation of the relics). In all three cases the fused images gave significantly better results than each of the original geophysical images separately. Comparison of the results of the three approaches showed that fusion with curvelets, giving emphasis to the features' orientation, seems to give the best fused image. Clear linear and ellipsoidal features corresponding to potential archaeological relics appear in the resultant image.
SIRF: Simultaneous Satellite Image Registration and Fusion in a Unified Framework.
Chen, Chen; Li, Yeqing; Liu, Wei; Huang, Junzhou
2015-11-01
In this paper, we propose a novel method for image fusion with a high-resolution panchromatic image and a low-resolution multispectral (Ms) image at the same geographical location. The fusion is formulated as a convex optimization problem which minimizes a linear combination of a least-squares fitting term and a dynamic gradient sparsity regularizer. The former is to preserve accurate spectral information of the Ms image, while the latter is to keep sharp edges of the high-resolution panchromatic image. We further propose to simultaneously register the two images during the fusing process, which is naturally achieved by virtue of the dynamic gradient sparsity property. An efficient algorithm is then devised to solve the optimization problem, accomplishing a linear computational complexity in the size of the output image in each iteration. We compare our method against six state-of-the-art image fusion methods on Ms image data sets from four satellites. Extensive experimental results demonstrate that the proposed method substantially outperforms the others in terms of both spatial and spectral qualities. We also show that our method can provide high-quality products from coarsely registered real-world IKONOS data sets. Finally, a MATLAB implementation is provided to facilitate future research.
NASA Astrophysics Data System (ADS)
Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting
2018-02-01
Image segmentation plays an important role in medical science. One application is multimodality imaging, especially the fusion of structural imaging with functional imaging, which includes CT, MRI and new types of imaging technology such as optical imaging to obtain functional images. The fusion process requires precisely extracted structural information in order to register the image to it. Here we used image enhancement and morphometry methods to extract accurate contours of different tissues such as skull, cerebrospinal fluid (CSF), grey matter (GM) and white matter (WM) on 5 fMRI head image datasets. Then we utilized a convolutional neural network to perform automatic segmentation in a deep-learning manner. This approach greatly reduced the processing time compared to manual and semi-automatic segmentation and is of great importance for improving speed and accuracy as more and more samples are learned. The contours of the borders of different tissues on all images were accurately extracted and visualized in 3D. This can be used in low-level light therapy and in optical simulation software such as MCVM. We obtained a precise three-dimensional distribution of the brain, which offers doctors and researchers quantitative volume data and detailed morphological characterization for personalized precision medicine of cerebral atrophy/expansion. We hope this technique can bring convenience to medical visualization and personalized medicine.
Spatial, Temporal and Spectral Satellite Image Fusion via Sparse Representation
NASA Astrophysics Data System (ADS)
Song, Huihui
Remote sensing provides good measurements for monitoring and further analyzing climate change, ecosystem dynamics, and human activities at global or regional scales. Over the past two decades, the number of launched satellite sensors has been increasing with the development of aerospace technologies and the growing requirements for remote sensing data in a vast number of application fields. However, a key technological challenge confronting these sensors is that they trade off spatial resolution against other properties, including temporal resolution, spectral resolution, swath width, etc., due to the limitations of hardware technology and budget constraints. To increase the spatial resolution of data while retaining other good properties, one cost-effective solution is to explore data integration methods that can fuse multi-resolution data from multiple sensors, thereby enhancing the application capabilities of available remote sensing data. In this thesis, we propose to fuse spatial resolution with temporal resolution and spectral resolution, respectively, based on sparse representation theory. Taking the study case of Landsat ETM+ (with spatial resolution of 30 m and temporal resolution of 16 days) and MODIS (with spatial resolution of 250 m ~ 1 km and daily temporal resolution) reflectance, we propose two spatial-temporal fusion methods to combine the fine spatial information of the Landsat image and the daily temporal resolution of the MODIS image. Motivated by the fact that images from these two sensors are comparable on corresponding bands, we propose to link their spatial information on an available Landsat-MODIS image pair (captured on a prior date) and then predict the Landsat image from its MODIS counterpart on the prediction date. To learn the spatial details from the prior images well, we use a redundant dictionary to extract the basic representation atoms for both Landsat and MODIS images based on sparse representation. Under the scenario of two prior Landsat-MODIS image pairs, we build the corresponding relationship between the difference images of MODIS and ETM+ by training a low- and high-resolution dictionary pair from the given prior image pairs. In the second scenario, i.e., only one Landsat-MODIS image pair being available, we directly correlate MODIS and ETM+ data through an image degradation model. The fusion stage is then achieved by super-resolving the MODIS image combined with high-pass modulation in a two-layer fusion framework. Remarkably, the proposed spatial-temporal fusion methods form a unified framework for blending remote sensing images with phenology change or land-cover-type change. Based on the proposed spatial-temporal fusion models, we propose to monitor land use/land cover changes in Shenzhen, China. As a fast-growing city, Shenzhen faces the problem of detecting rapid changes for both rational city planning and sustainable development. However, the cloudy and rainy weather in the region where Shenzhen is located makes the capture cycle of high-quality satellite images longer than the sensors' normal revisit periods. Spatial-temporal fusion methods are capable of tackling this problem by improving the spatial resolution of images with coarse spatial resolution but frequent temporal coverage, thereby making the detection of rapid changes possible. On two Landsat-MODIS datasets with annual and monthly changes, respectively, we apply the proposed spatial-temporal fusion methods to the task of multiple change detection.
Afterward, we propose a novel spatial and spectral fusion method for satellite multispectral and hyperspectral (or high-spectral) images based on dictionary-pair learning and sparse non-negative matrix factorization. By combining the spectral information from the hyperspectral image, which is characterized by low spatial resolution but high spectral resolution (abbreviated as LSHS), and the spatial information from the multispectral image, which features high spatial resolution but low spectral resolution (abbreviated as HSLS), this method aims to generate fused data with both high spatial and high spectral resolution. Motivated by the observation that each hyperspectral pixel can be represented by a linear combination of a few endmembers, the method first extracts the spectral bases of the LSHS and HSLS images by making full use of the rich spectral information in the LSHS data. The spectral bases of these two categories of data then form a dictionary pair, owing to their correspondence in representing each pixel spectrum of the LSHS and HSLS data, respectively. Subsequently, the LSHS image is spatially unmixed by representing the HSLS image with respect to the corresponding learned dictionary to derive its representation coefficients. Combining the spectral bases of the LSHS data and the representation coefficients of the HSLS data, we finally derive fused data characterized by the spectral resolution of the LSHS data and the spatial resolution of the HSLS data.
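An illustrative sketch of the dictionary-pair idea: learn spectral bases (endmembers) from the LSHS image by non-negative factorization, map them into the multispectral domain through an assumed spectral response matrix R, estimate per-pixel abundances from the HSLS image by non-negative least squares, and recombine. R, all array sizes, and the use of sklearn's NMF in place of the paper's sparse NMF are assumptions made for the example.

```python
# Dictionary-pair fusion toy: E (hyperspectral bases) and E_ms = R @ E form
# the pair; abundances solved per HSLS pixel give the fused high/high product.
import numpy as np
from sklearn.decomposition import NMF
from scipy.optimize import nnls

B_hs, B_ms, K, N_lo, N_hi = 60, 4, 8, 100, 1600
lshs = np.abs(np.random.rand(B_hs, N_lo))       # hyperspectral pixels (columns)
R = np.abs(np.random.rand(B_ms, B_hs))          # assumed spectral response
R /= R.sum(axis=1, keepdims=True)
hsls = R @ np.abs(np.random.rand(B_hs, N_hi))   # simulated multispectral image

E = NMF(n_components=K, init="nndsvda", max_iter=500).fit(lshs.T).components_.T
E_ms = R @ E                                    # dictionary pair: (E, E_ms)
A = np.column_stack([nnls(E_ms, hsls[:, i])[0] for i in range(N_hi)])
fused = E @ A                                   # high spatial + high spectral
print(fused.shape)                              # (60, 1600)
```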
Elsayed, Mahmoud; Hsiung, Ming C; Meggo-Quiroz, L David; Elguindy, Mostafa; Uygur, Begum; Tandon, Rohit; Guvenc, Tolga; Keser, Nurgul; Vural, Mustafa G; Bulur, Serkan; Chahwala, Jugal R; Abtahi, Firoozeh; Nanda, Navin C
2015-12-01
An atrial septal pouch (ASP) results from partial fusion of the septum primum and the septum secundum, and depending on the site of fusion, the pouch can be left-sided (LASP) or right-sided (RASP). LASPs have been described in association with thrombi found in patients admitted with acute strokes, raising awareness of their potential cardioembolic role, especially in those with no other clearly identifiable embolic source. We retrospectively studied 39 patients in whom the presence of an ASP had been identified by three-dimensional transesophageal echocardiography (3DTEE) and who had a two-dimensional transesophageal echocardiogram (2DTEE) performed during the same clinical encounter. The incremental value provided by 3DTEE over 2DTEE included the detection of six ASPs not found by 2DTEE; the detection of two ASPs in the same subject (in four patients) not identified by 2DTEE; larger ASP measurements of length and height in over 80% of the cases; and measurement of the ASP width (elevational axis) for the calculation of the area of the ASP opening, thanks to its unique capability to view the pouch en face. In addition, the volume of the ASP and of the echogenic masses contained in the ASP (four of 39 patients) could be calculated by 3DTEE, a superior parameter of size characterization compared with individual dimensions. One of these patients, who presented with ischemic stroke diagnosed by magnetic resonance imaging, had a large (>2 cm) mass in a LASP, with echolucencies similar to those seen in thrombi and associated with clot lysis and resolution. This mass completely disappeared on anticoagulant therapy, lending credence to the impression that it was most likely a thrombus. There was no history of stroke or any other type of embolic event in the other three patients with masses in ASPs. In conclusion, this retrospective study highlights the incremental value of 3DTEE over 2DTEE in the comprehensive assessment and characterization of ASPs, which can aid in the clarification of their role in cryptogenic stroke patients. © 2015, Wiley Periodicals, Inc.
Xia, Jun; He, Pin; Cai, Xiaodong; Zhang, Doudou; Xie, Ni
2017-10-15
Electrode position after deep brain stimulation (DBS) for Parkinson's disease (PD) needs to be confirmed, but there are concerns about the risk of postoperative magnetic resonance imaging (MRI) after DBS. These issues could be avoided by fusing images obtained from preoperative MRI and postoperative computed tomography (CT). This study aimed to investigate image fusion technology for displaying the position of the electrodes compared with postoperative MRI. This was a retrospective study of 32 patients with PD treated with bilateral subthalamic nucleus (STN) DBS between April 2015 and March 2016. The postoperative (same-day) CT and preoperative MRI were fused using the Elekta Leksell 10.1 planning workstation (Elekta Instruments, Stockholm, Sweden). The position of the electrodes was compared between the fusion images and postoperative 1-2-week MRI. The position of the electrodes was highly correlated between the fusion images and postoperative MRI (all r between 0.865 and 0.996; all P < 0.001). The differences in the left electrode position in the lateral and vertical planes were significant between the two methods (0.30 and 0.24 mm, respectively; both P < 0.05), but there were no significant differences for the other electrode and planes (all P > 0.05). The position of the electrodes was highly correlated between the fusion images and postoperative MRI. CT-MRI fusion images could be used to avoid the potential risks of MRI after DBS in patients with PD. Copyright © 2017. Published by Elsevier B.V.
Label fusion based brain MR image segmentation via a latent selective model
NASA Astrophysics Data System (ADS)
Liu, Gang; Guo, Xiantang; Zhu, Kai; Liao, Hengxu
2018-04-01
Multi-atlas segmentation is an effective and increasingly popular approach for automatically labeling objects of interest in medical images. Recently, segmentation methods based on generative models and patch-based techniques have become the two principal branches of label fusion. However, these generative models and patch-based techniques are only loosely related, and the demand for higher accuracy, faster segmentation, and robustness remains a great challenge. In this paper, we propose a novel algorithm that combines the two branches using a global weighted fusion strategy based on a patch latent selective model to segment specific anatomical structures in human brain magnetic resonance (MR) images. In establishing this probabilistic model of label fusion between the target patch and the patch dictionary, we explored the Kronecker delta function in the label prior, which is more suitable than other models, and designed a latent selective model as a membership prior to determine from which training patch the intensity and label of the target patch are generated at each spatial location. Because the image background is an equally important factor for segmentation, it is analyzed in the label fusion procedure and regarded as an isolated label, giving the background the same standing as the regions of interest. During label fusion with the global weighted fusion scheme, we use Bayesian inference and the expectation-maximization algorithm to estimate the labels of the target scan and produce the segmentation map. Experimental results indicate that the proposed algorithm is more accurate and robust than the other segmentation methods.
NASA Astrophysics Data System (ADS)
Guler, Nevzat; Aragonez, Robert J.; Archuleta, Thomas N.; Batha, Steven H.; Clark, David D.; Clark, Deborah J.; Danly, Chris R.; Day, Robert D.; Fatherley, Valerie E.; Finch, Joshua P.; Gallegos, Robert A.; Garcia, Felix P.; Grim, Gary; Hsu, Albert H.; Jaramillo, Steven A.; Loomis, Eric N.; Mares, Danielle; Martinson, Drew D.; Merrill, Frank E.; Morgan, George L.; Munson, Carter; Murphy, Thomas J.; Oertel, John A.; Polk, Paul J.; Schmidt, Derek W.; Tregillis, Ian L.; Valdez, Adelaida C.; Volegov, Petr L.; Wang, Tai-Sen F.; Wilde, Carl H.; Wilke, Mark D.; Wilson, Douglas C.; Atkinson, Dennis P.; Bower, Dan E.; Drury, Owen B.; Dzenitis, John M.; Felker, Brian; Fittinghoff, David N.; Frank, Matthias; Liddick, Sean N.; Moran, Michael J.; Roberson, George P.; Weiss, Paul; Buckles, Robert A.; Cradick, Jerry R.; Kaufman, Morris I.; Lutz, Steve S.; Malone, Robert M.; Traille, Albert
2013-11-01
Inertial Confinement Fusion experiments at the National Ignition Facility (NIF) are designed to understand and test the basic principles of self-sustaining fusion reactions by laser driven compression of deuterium-tritium (DT) filled cryogenic plastic (CH) capsules. The experimental campaign is ongoing to tune the implosions and characterize the burning plasma conditions. Nuclear diagnostics play an important role in measuring the characteristics of these burning plasmas, providing feedback to improve the implosion dynamics. The Neutron Imaging (NI) diagnostic provides information on the distribution of the central fusion reaction region and the surrounding DT fuel by collecting images at two different energy bands for primary (13-15 MeV) and downscattered (10-12 MeV) neutrons. From these distributions, the final shape and size of the compressed capsule can be estimated and the symmetry of the compression can be inferred. The first downscattered neutron images from imploding ICF capsules are shown in this paper.
Analyzing the spatial positioning of nuclei in polynuclear giant cells
NASA Astrophysics Data System (ADS)
Stange, Maike; Hintsche, Marius; Sachse, Kirsten; Gerhardt, Matthias; Valleriani, Angelo; Beta, Carsten
2017-11-01
How cells establish and maintain a well-defined size is a fundamental question of cell biology. Here we investigated to what extent the microtubule cytoskeleton can set a predefined cell size, independent of an enclosing cell membrane. We used electropulse-induced cell fusion to form giant multinuclear cells of the social amoeba Dictyostelium discoideum. Based on dual-color confocal imaging of cells that expressed fluorescent markers for the cell nucleus and the microtubules, we determined the subcellular distributions of nuclei and centrosomes in the giant cells. Our two- and three-dimensional imaging results showed that the positions of nuclei in giant cells do not fall onto a regular lattice. However, a comparison with model predictions for random positioning showed that the subcellular arrangement of nuclei maintains a low but still detectable degree of ordering. This can be explained by the steric requirements of the microtubule cytoskeleton, as confirmed by the effect of a microtubule degrading drug.
Kawaguchi, Yoshiharu; Nakano, Masato; Yasuda, Taketoshi; Seki, Shoji; Hori, Takeshi; Kimura, Tomoatsu
2012-11-01
We developed a new technique for cervical pedicle screw and Magerl screw insertion using a 3-dimensional image guide. In posterior cervical spinal fusion surgery, instrumentation with screws is virtually routine; however, malpositioning of screws is not rare. To avoid complications during cervical pedicle screw and Magerl screw insertion, the authors developed a new technique based on a mold shaped to fit the lamina. Cervical pedicle screw fixation and Magerl screw fixation provide good correction of cervical alignment, rigid fixation, and a high fusion rate. However, malpositioning of screws is not a rare occurrence, and thus the insertion of screws carries a potential risk of neurovascular injury. It is necessary to establish a safe insertion procedure for these screws. Preoperative computed tomographic (CT) scans of 1-mm slice thickness were obtained of the whole surgical area. The CT data were imported into a computer navigation system. We developed a 3-dimensional full-scale model of the patient's spine using a rapid prototyping technique from the CT data. Molds of the left and right sides of each vertebra were also constructed. One hole (2.0 mm in diameter and 2.0 cm in length) was made in each mold for the insertion of a screw guide. We performed a simulated surgery using the bone model and the mold before the operation in all patients. The mold was firmly attached to the surface of the lamina and the guide wire was inserted using the intraoperative lateral image of the vertebra. The proper insertion point, direction, and length of the guide were also confirmed both on the bone model and with the image intensifier in the operative field. Then, drilling using a cannulated drill and tapping using a cannulated tapping device were carried out. Eleven consecutive patients who underwent posterior spinal fusion surgery using this technique since 2009 were included. The screw positions in the sagittal and axial planes were evaluated by postoperative CT scan to check for malpositioning. The screw insertion proceeded in the same manner as the simulated surgery. With the aid of this guide, the pedicle screws and Magerl screws could be easily inserted even at levels where the pedicle appeared very thin and sclerotic on the CT scan. Postoperative CT showed no critical breaches of the screws. This method, employing a device based on a 3-dimensional image guide, appears easy and safe to use. The technique may improve the safety of pedicle screw and Magerl screw insertion even in difficult cases with narrow sclerotic pedicles.
Wang, Hongzhi; Yushkevich, Paul A.
2013-01-01
Label fusion based multi-atlas segmentation has proven to be one of the most competitive techniques for medical image segmentation. This technique transfers segmentations from expert-labeled images, called atlases, to a novel image using deformable image registration. Errors produced by label transfer are further reduced by label fusion that combines the results produced by all atlases into a consensus solution. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity is a simple and highly effective label fusion technique. However, one limitation of most weighted voting methods is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this problem, we recently developed the joint label fusion technique and the corrective learning technique, which won the first place of the 2012 MICCAI Multi-Atlas Labeling Challenge and was one of the top performers in 2013 MICCAI Segmentation: Algorithms, Theory and Applications (SATA) challenge. To make our techniques more accessible to the scientific research community, we describe an Insight-Toolkit based open source implementation of our label fusion methods. Our implementation extends our methods to work with multi-modality imaging data and is more suitable for segmentation problems with multiple labels. We demonstrate the usage of our tools through applying them to the 2012 MICCAI Multi-Atlas Labeling Challenge brain image dataset and the 2013 SATA challenge canine leg image dataset. We report the best results on these two datasets so far. PMID:24319427
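A compact sketch of the core weighting step in joint label fusion, after Wang et al.: atlas weights minimize the expected joint error using a pairwise dependency matrix built from patch intensity differences, w = (M⁻¹1) / (1ᵀM⁻¹1). Patch extraction, the exponent on the difference images, and the neighborhood search of the full method are omitted, and the data below are invented.

```python
# Joint-label-fusion-style weighted vote at a single voxel.
import numpy as np

def joint_fusion_weights(target_patch, atlas_patches, eps=1e-6):
    # absolute intensity difference of each atlas patch from the target
    D = np.abs(atlas_patches - target_patch)          # (n_atlas, patch_len)
    M = D @ D.T + eps * np.eye(len(atlas_patches))    # pairwise dependency
    w = np.linalg.solve(M, np.ones(len(atlas_patches)))
    return w / w.sum()

rng = np.random.default_rng(0)
target = rng.random(27)                         # flattened 3x3x3 patch
atlases = target + 0.1 * rng.random((5, 27))    # 5 atlas patches
labels = np.array([1, 1, 0, 1, 0])              # candidate labels at the voxel
w = joint_fusion_weights(target, atlases)
consensus = int(round(float(w @ labels)))       # weighted vote
```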
Conversion of NIMROD simulation results for graphical analysis using VisIt
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero-Talamas, C A
Software routines developed to prepare NIMROD [C. R. Sovinec et al., J. Comp. Phys. 195, 355 (2004)] results for three-dimensional visualization from simulations of the Sustained Spheromak Physics Experiment (SSPX) [E. B. Hooper et al., Nucl. Fusion 39, 863 (1999)] are presented here. The visualization is done by first converting the NIMROD output to a format known as legacy VTK and then loading it into VisIt, a graphical analysis tool that includes three-dimensional rendering and various mathematical operations for large data sets. Sample images obtained from the processing of NIMROD data with VisIt are included.
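A minimal sketch of the conversion step described above: writing a scalar field to an ASCII legacy-VTK structured-points file that VisIt can open directly. The field name, grid size, and spacing are placeholders, not NIMROD output conventions.

```python
# Write a 3-D numpy scalar field as an ASCII legacy-VTK file for VisIt.
import numpy as np

def write_vtk_scalar(path, field, spacing=(1.0, 1.0, 1.0)):
    nx, ny, nz = field.shape
    with open(path, "w") as f:
        f.write("# vtk DataFile Version 3.0\n")
        f.write("simulation scalar field\nASCII\n")
        f.write("DATASET STRUCTURED_POINTS\n")
        f.write(f"DIMENSIONS {nx} {ny} {nz}\n")
        f.write("ORIGIN 0 0 0\n")
        f.write(f"SPACING {spacing[0]} {spacing[1]} {spacing[2]}\n")
        f.write(f"POINT_DATA {nx * ny * nz}\n")
        f.write("SCALARS pressure float 1\nLOOKUP_TABLE default\n")
        # legacy VTK expects x varying fastest, hence Fortran-order raveling
        np.savetxt(f, field.ravel(order="F"), fmt="%.6e")

write_vtk_scalar("sample.vtk", np.random.rand(16, 16, 8))
```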
Standardizing Quality Assessment of Fused Remotely Sensed Images
NASA Astrophysics Data System (ADS)
Pohl, C.; Moellmann, J.; Fries, K.
2017-09-01
The multitude of available operational remote sensing satellites has led to the development of many image fusion techniques providing high spatial, spectral and temporal resolution images. A comparison of the different techniques is necessary to obtain an image optimized for the various applications of remote sensing. There are two approaches to assessing image quality: 1. qualitatively, by visual interpretation, and 2. quantitatively, using image quality indices. However, an objective comparison is difficult because a visual assessment is always subjective and a quantitative assessment depends on the criteria chosen; depending on the criteria and indices, the result varies. It is therefore necessary to standardize both processes (qualitative and quantitative assessment) in order to allow an objective evaluation of image fusion quality. Various studies have been conducted at the University of Osnabrueck (UOS) to establish a standardized process for objectively comparing fused image quality. First, established image fusion quality assessment protocols, i.e., Quality with No Reference (QNR) and Khan's protocol, were compared on various fusion experiments. Second, the process of visual quality assessment was structured and standardized with the aim of providing an evaluation protocol. This manuscript reports the results of the comparison and provides recommendations for future research.
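Both protocols named above are built from pairwise quality indices. As a concrete anchor, here is the universal image quality index Q of Wang and Bovik, from which the QNR spectral and spatial distortion terms are assembled; the whole-image form below is a simplification, since Q is normally computed over sliding windows and averaged.

```python
# Universal image quality index Q = 4*cov*mx*my / ((vx+vy)*(mx^2+my^2)).
import numpy as np

def uiqi(x, y):
    x, y = x.astype(float).ravel(), y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))

a = np.random.rand(64, 64)
print(uiqi(a, a))          # identical images give Q = 1
print(uiqi(a, 0.5 * a))    # distortion lowers Q
```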
NASA Astrophysics Data System (ADS)
Rosenberg, M. J.; Séguin, F. H.; Amendt, P. A.; Atzeni, S.; Rinderknecht, H. G.; Hoffman, N. M.; Zylstra, A. B.; Li, C. K.; Sio, H.; Gatu Johnson, M.; Frenje, J. A.; Petrasso, R. D.; Glebov, V. Yu.; Stoeckl, C.; Seka, W.; Marshall, F. J.; Delettrez, J. A.; Sangster, T. C.; Betti, R.; Wilks, S. C.; Pino, J.; Kagan, G.; Molvig, K.; Nikroo, A.
2015-06-01
The significance and nature of ion kinetic effects in D3He-filled, shock-driven inertial confinement fusion implosions are assessed through measurements of fusion burn profiles. Over this series of experiments, the ratio of ion-ion mean free path to minimum shell radius (the Knudsen number, NK) was varied from 0.3 to 9 in order to probe hydrodynamic-like to strongly kinetic plasma conditions; as the Knudsen number increased, hydrodynamic models increasingly failed to match measured yields, while an empirically-tuned, first-step model of ion kinetic effects better captured the observed yield trends [Rosenberg et al., Phys. Rev. Lett. 112, 185001 (2014)]. Here, spatially resolved measurements of the fusion burn are used to examine kinetic ion transport effects in greater detail, adding an additional dimension of understanding that goes beyond zero-dimensional integrated quantities to one-dimensional profiles. In agreement with the previous findings, a comparison of measured and simulated burn profiles shows that models including ion transport effects are able to better match the experimental results. In implosions characterized by large Knudsen numbers (NK ˜ 3), the fusion burn profiles predicted by hydrodynamics simulations that exclude ion mean free path effects are peaked far from the origin, in stark disagreement with the experimentally observed profiles, which are centrally peaked. In contrast, a hydrodynamics simulation that includes a model of ion diffusion is able to qualitatively match the measured profile shapes. Therefore, ion diffusion or diffusion-like processes are identified as a plausible explanation of the observed trends, though further refinement of the models is needed for a more complete and quantitative understanding of ion kinetic effects.
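For reference, the Knudsen number used throughout this record is exactly the ratio the abstract defines; the scaling of the ion-ion mean free path below is the familiar Coulomb-collision form with all constants omitted, added as a hedged reminder rather than taken from the paper.

```latex
\[
  N_K = \frac{\lambda_{ii}}{R_{\mathrm{shell}}},
  \qquad
  \lambda_{ii} \propto \frac{T_i^{2}}{n_i \, Z^{4} \, \ln\Lambda}.
\]
```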
MR angiography fusion technique for treatment planning of intracranial arteriovenous malformations.
McGee, Kiaran P; Ivanovic, Vladimir; Felmlee, Joel P; Meyer, Fredrick B; Pollock, Bruce E; Huston, John
2006-03-01
To develop an image fusion technique using elliptical centric contrast-enhanced (CE) MR angiography (MRA) and three-dimensional (3D) time-of-flight (TOF) acquisitions for radiosurgery treatment planning of arteriovenous malformations (AVMs). CE and 3D-TOF MR angiograms with disparate in-plane fields of view (FOVs) were acquired, followed by k-space reformatting to provide equal voxel dimensions. Spatial domain addition was performed to provide a third, fused data volume. Spatial distortion was evaluated on an MRA phantom and provided slice-dependent and global distortion along the three physical dimensions of the MR scanner. In vivo validation was performed on 10 patients with intracranial AVMs prior to their conventional angiogram on the day of gamma knife radiosurgery. Spatial distortion in the phantom within a volume of 14 × 14 × 3.2 cm³ was less than ±1 mm (±1 standard deviation (SD)) for the CE and 3D-TOF data sets. Fused data volumes were successfully generated for all 10 patients. Image fusion can be used to obtain high-resolution CE-MRA images of intracranial AVMs while keeping the fiducial markers needed for gamma knife radiosurgery planning. The spatial fidelity of these data is within the tolerance acceptable for daily quality control (QC) purposes and gamma knife treatment planning. © 2006 Wiley-Liss, Inc.
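A sketch of the fusion step just described under simplifying assumptions: one volume is resampled so that both have equal voxel dimensions, then the two are added in the spatial domain. The paper performs the resampling by k-space reformatting; spatial linear interpolation stands in for that step here, and the voxel sizes are invented.

```python
# Resample one MRA volume to matching voxel dimensions, then fuse by addition.
import numpy as np
from scipy.ndimage import zoom

ce_mra = np.random.rand(64, 256, 256)   # CE-MRA, voxels 1.0 x 1.0 x 1.0 mm
tof = np.random.rand(64, 128, 128)      # 3D-TOF, voxels 1.0 x 2.0 x 2.0 mm

tof_matched = zoom(tof, (1.0, 2.0, 2.0), order=1)  # equalize voxel dimensions
fused = ce_mra + tof_matched                       # spatial-domain addition
print(fused.shape)                                 # (64, 256, 256)
```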
The API 120: A portable neutron generator for the associated particle technique
NASA Astrophysics Data System (ADS)
Chichester, D. L.; Lemchak, M.; Simpson, J. D.
2005-12-01
The API 120 is a lightweight, portable neutron generator for active neutron interrogation (ANI) field work exploiting the associated particle technique. It incorporates a small sealed-tube accelerator, an all digital control system with smart on-board diagnostics, a simple platform-independent control interface and a comprehensive safety interlock philosophy with provisions for wireless control. The generator operates in a continuous output mode using either the D-D or D-T fusion reactions. To register the helium ion associated with fusion, the system incorporates a high resolution fiber optic imaging plate that may be coated with one of several different phosphors. The ion beam on the target measures less than 2 mm in diameter, thus making the system suitable for multi-dimensional imaging. The system is rated at 1E7 n/s for over 1000 h although higher yields are possible. The overall weight is 12 kg; power consumption is less than 50 W.
High resolution electron microscopy study of crystal growth mechanisms in chicken bone composites
NASA Astrophysics Data System (ADS)
Cuisinier, F. J. G.; Steuer, P.; Brisson, A.; Voegel, J. C.
1995-12-01
The present study describes the early stages of chicken bone crystal growth, as followed by high-resolution electron microscopy (HREM). We developed an original analysis procedure to determine the crystal structure. Images were first digitized and selected areas were fast Fourier transformed. Numerical masks were selected around the most intense spots and the filtered signal was transformed back to real space. The filtered images were then compared to computer-calculated images to identify the inorganic mineral phase. Nanometer-sized particles were observed in amorphous areas. These particles have a structure loosely related to hydroxyapatite (HA) and a specific orientation. In a more advanced situation, the nanoparticles appeared to grow in two dimensions and to form plate-like crystals. These crystals seem, in a last growth step, to fuse by their (100) faces. These experimental observations allowed us to propose a four-step model for the development and growth of chicken bone crystals. The two initial stages are ionic adsorption onto the organic substrate followed by the nucleation of nanometer-sized particles. The two following steps, i.e. two-dimensional growth of the nanoparticles leading to the formation of needle-like crystals, and the lateral fusion of these crystals by their (100) faces, are controlled only by spatial constraints inside the extracellular organic matrix.
Joint image registration and fusion method with a gradient strength regularization
NASA Astrophysics Data System (ADS)
Lidong, Huang; Wei, Zhao; Jun, Wang
2015-05-01
Image registration is an essential step in image fusion, and fusion performance can be used to evaluate registration accuracy. We propose a maximum likelihood (ML) approach to joint image registration and fusion, instead of treating them as two independent processes in the conventional way. To improve the visual quality of the fused image, a gradient strength (GS) regularization is introduced into the ML cost function. The GS of the fused image is controllable by setting a target GS value in the regularization term. This is useful because a larger target GS yields a clearer fused image, while a smaller target GS makes the fused image smoother and thus restrains noise. Hence, the subjective quality of the fused image can be improved whether or not the source images are polluted by noise. We obtain the fused image and the registration parameters successively by minimizing the cost function with an iterative optimization method. Experimental results show that our method is effective for translation, rotation, and scale parameters in the ranges of [-2.0, 2.0] pixels, [-1.1 deg, 1.1 deg], and [0.95, 1.05], respectively, and for noise variances smaller than 300. It is also demonstrated that our method yields a more visually pleasing fused image and higher registration accuracy compared with a state-of-the-art algorithm.
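The ML formulation itself is not spelled out in the abstract, so this toy keeps only the controllable-GS idea: fuse by averaging, then scale a high-pass detail layer until the mean gradient strength reaches a chosen target. The bisection assumes GS grows with the detail gain, and the Gaussian high-pass, search bounds, and target value are all illustrative choices, not the authors' method.

```python
# Controllable gradient-strength fusion toy.
import numpy as np
from scipy.ndimage import gaussian_filter

def gs(img):
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy).mean()   # mean gradient strength

def fuse_with_target_gs(i1, i2, g_target):
    base = 0.5 * (i1 + i2)                      # simple fusion stand-in
    detail = base - gaussian_filter(base, 2.0)  # high-pass layer
    lo, hi = -1.0, 10.0                         # detail-gain search interval
    for _ in range(40):                         # bisection on the gain
        mid = 0.5 * (lo + hi)
        if gs(base + mid * detail) < g_target:
            lo = mid
        else:
            hi = mid
    return base + lo * detail

a, b = np.random.rand(64, 64), np.random.rand(64, 64)
crisper = fuse_with_target_gs(a, b, 1.2 * gs(0.5 * (a + b)))  # larger target
```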
Ultra-fast framing camera tube
Kalibjian, Ralph
1981-01-01
An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.
Facial recognition using multisensor images based on localized kernel eigen spaces.
Gundimada, Satyanadh; Asari, Vijayan K
2009-06-01
A feature selection technique along with an information fusion procedure for improving the recognition accuracy of a visual and thermal image-based facial recognition system is presented in this paper. A novel modular kernel eigenspaces approach is developed and implemented on the phase congruency feature maps extracted from the visual and thermal images individually. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome the bottlenecks of illumination variations, partial occlusions, expression variations and variations due to temperature changes that affect the visual and thermal face recognition techniques. AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure has greatly improved the recognition accuracy for both the visual and thermal images when compared to conventional techniques. Also, a decision level fusion methodology is presented which along with the feature selection procedure has outperformed various other face recognition techniques in terms of recognition accuracy.
Quantification of spheno-occipital synchondrosis fusion in a contemporary Malaysian population.
Hisham, Salina; Flavel, Ambika; Abdullah, Nurliza; Noor, Mohamad Helmee Mohamad; Franklin, Daniel
2018-03-01
Timing of fusion of the spheno-occipital synchondrosis (SOS) is correlated with age. Previous research, however, has demonstrated variation in the timing of closure among different global populations. The present study aims to quantify the timing of SOS fusion in Malaysian individuals as visualised in multi-detector computed tomography (CT) scans and thereafter to formulate age estimation models based on fusion status. Anonymised cranial CT scans of 336 males and 164 females, aged 5-25 years, were acquired from the National Institute of Forensic Medicine, Hospital Kuala Lumpur and the Department of Diagnostic Imaging, Hospital Sultanah Aminah. The scans were received in DICOM format and reconstructed into three-dimensional images using OsiriX. The SOS was scored as open, fusing endocranially, fusing ectocranially or completely fused. Statistical analyses were performed using IBM SPSS Statistics version 24. Transition analysis (Nphases2) was then utilised to calculate age ranges for each stage. To assess reliability, intra- and inter-observer agreement was quantified using Fleiss' kappa and found to be excellent (κ = 0.785-0.907 and 0.812). The mean (SD) age for complete fusion was 20.84 (2.84) years in males and 19.78 (3.35) years in females. Transition ages between Stages 0 and 1, 1 and 2, and 2 and 3 in males were 12.52, 13.98 and 15.52 years, respectively (SD 1.37); in females, the corresponding ages were 10.47, 12.26 and 13.80 years (SD 1.72). Complete fusion of the SOS was observed in all individuals above the age of 18 years. SOS fusion status thus provides upper and lower age boundaries for forensic age estimation in the Malaysian sample. Copyright © 2018 Elsevier B.V. All rights reserved.
Angiogram, fundus, and oxygen saturation optic nerve head image fusion
NASA Astrophysics Data System (ADS)
Cao, Hua; Khoobehi, Bahram
2009-02-01
A novel multi-modality optic nerve head image fusion approach has been designed. The approach has been applied to three ophthalmologic modalities: angiogram, fundus, and oxygen saturation retinal optic nerve head images. It achieves excellent results, visualizing fundus or oxygen saturation images with a complete angiogram overlay. This study makes two contributions in terms of novelty, efficiency, and accuracy. The first is an automated control point detection algorithm for multi-sensor images. The new method exploits retinal vasculature and bifurcation features, identifying an initial good guess of the control points using the Adaptive Exploratory Algorithm. The second is a heuristic optimization fusion algorithm. In order to maximize the objective function (Mutual-Pixel-Count), the iterative algorithm adjusts the initial guess of the control points at the sub-pixel level. A refinement of the parameter set is obtained at the end of each loop, and an optimal fused image is generated at the end of the iteration. This is the first time the Mutual-Pixel-Count concept has been introduced into the biomedical image fusion area. By locking the images in one place, the fused image allows ophthalmologists to match the same eye over time, track disease progression, and pinpoint surgical tools. The algorithm can be readily extended to human or animal 3D eye, brain, or body image registration and fusion.
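A toy rendering of a Mutual-Pixel-Count style objective (the name is the paper's; the vessel masks, threshold, and brute-force sub-pixel search grid below are invented for the example): count the pixels where vessel evidence from the two modalities coincides, and pick the shift that maximizes the count.

```python
# Maximize an MPC-style overlap count over a grid of sub-pixel shifts.
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def mutual_pixel_count(mask_a, img_b, dx, dy, thr=0.5):
    moved = subpixel_shift(img_b, (dy, dx), order=1)
    return int(np.logical_and(mask_a, moved > thr).sum())

fundus_vessels = np.random.rand(128, 128) > 0.9           # stand-in mask
angio = np.roll(fundus_vessels.astype(float), 2, axis=1)  # offset angiogram

best = max(((mutual_pixel_count(fundus_vessels, angio, dx, dy), dx, dy)
            for dx in np.arange(-3, 3.25, 0.25)
            for dy in np.arange(-3, 3.25, 0.25)))
print(best)   # the highest MPC should occur near dx = -2, dy = 0
```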
Wei, Hongjiang; Viallon, Magalie; Delattre, Benedicte M A; Moulin, Kevin; Yang, Feng; Croisille, Pierre; Zhu, Yuemin
2015-01-01
Free-breathing cardiac diffusion tensor imaging (DTI) is a promising but challenging technique for the study of fiber structures of the human heart in vivo. This work proposes a clinically compatible and robust technique to provide three-dimensional (3-D) fiber architecture properties of the human heart. To this end, 10 short-axis slices were acquired across the entire heart using a multiple shifted trigger delay (TD) strategy under free-breathing conditions. Interscan motion was first corrected automatically using a nonrigid registration method. Then, two post-processing schemes were optimized and compared: an algorithm based on principal component analysis (PCA) filtering and temporal maximum intensity projection (TMIP), and an algorithm that uses the wavelet-based image fusion (WIF) method. The two methods were applied to the registered diffusion-weighted (DW) images to cope with intrascan motion-induced signal loss. The tensor fields were finally calculated, from which fractional anisotropy (FA), mean diffusivity (MD), and 3-D fiber tracts were derived and compared. The comparison of the FA values (FA(PCATMIP) = 0.45 ± 0.10, FA(WIF) = 0.42 ± 0.05, P = 0.06) showed no significant difference, while the MD values (MD(PCATMIP) = 0.83 ± 0.12 × 10⁻³ mm²/s, MD(WIF) = 0.74 ± 0.05 × 10⁻³ mm²/s, P = 0.028) were significantly different. Improved helix angle variation through the myocardial wall, reflecting the rotational character of cardiac fibers, was observed with WIF. This study demonstrates that the combination of multiple shifted TD acquisitions and dedicated post-processing makes it feasible to retrieve in vivo cardiac tractographies from free-breathing DTI acquisitions. Substantial improvements were observed using the WIF method instead of the previously published PCATMIP technique.
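A compact sketch of the PCATMIP idea referenced above: PCA-filter a stack of repeated diffusion-weighted acquisitions by truncated SVD across the repetition dimension, then take the temporal maximum intensity projection to recover signal lost to motion. The rank and stack size are guesses, and the published method's details (weighting, windowing) are omitted.

```python
# PCA filtering (truncated SVD over repetitions) followed by temporal MIP.
import numpy as np

def pca_tmip(stack, rank=2):
    t, h, w = stack.shape
    flat = stack.reshape(t, h * w)
    u, s, vt = np.linalg.svd(flat, full_matrices=False)
    s[rank:] = 0.0                         # keep only the leading modes
    denoised = (u * s) @ vt
    return denoised.reshape(t, h, w).max(axis=0)   # temporal MIP

stack = np.random.rand(8, 64, 64)          # 8 shifted-TD repetitions
dw_image = pca_tmip(stack)
```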
Assessment of radiofrequency ablation margin by MRI-MRI image fusion in hepatocellular carcinoma.
Wang, Xiao-Li; Li, Kai; Su, Zhong-Zhen; Huang, Ze-Ping; Wang, Ping; Zheng, Rong-Qin
2015-05-07
To investigate the feasibility and clinical value of magnetic resonance imaging (MRI)-MRI image fusion in assessing the ablative margin (AM) for hepatocellular carcinoma (HCC). A newly developed ultrasound workstation for MRI-MRI image fusion was used to evaluate the AM of 62 tumors in 52 HCC patients after radiofrequency ablation (RFA). The lesions were divided into two groups: group A, in which the tumor was completely ablated and 5 mm AM was achieved (n = 32); and group B, in which the tumor was completely ablated but 5 mm AM was not achieved (n = 29). To detect local tumor progression (LTP), all patients were followed every two months by contrast-enhanced ultrasound, contrast-enhanced MRI or computed tomography (CT) in the first year after RFA. Then, the follow-up interval was prolonged to every three months after the first year. Of the 62 tumors, MRI-MRI image fusion was successful in 61 (98.4%); the remaining case had significant deformation of the liver and massive ascites after RFA. The time required for creating image fusion and AM evaluation was 15.5 ± 5.5 min (range: 8-22 min) and 9.6 ± 3.2 min (range: 6-14 min), respectively. The follow-up period ranged from 1-23 mo (14.2 ± 5.4 mo). In group A, no LTP was detected in 32 lesions, whereas in group B, LTP was detected in 4 of 29 tumors, which occurred at 2, 7, 9, and 15 mo after RFA. The frequency of LTP in group B (13.8%; 4/29) was significantly higher than that in group A (0/32, P = 0.046). All of the LTPs occurred in the area in which the 5 mm AM was not achieved. The MRI-MRI image fusion using an ultrasound workstation is feasible and useful for evaluating the AM after RFA for HCC.
Matovic, Milovan; Jankovic, Milica; Barjaktarovic, Marko; Jeremic, Marija
2017-01-01
After radioiodine therapy of differentiated thyroid cancer (DTC) patients, whole-body scintigraphy (WBS) is a standard procedure before releasing the patient from the hospital. A common problem is the precise localization of regions where iodine-avid tissue is located; sometimes precise topographic localization of such regions is practically impossible. To address this problem, we developed a low-cost Vision-Fusion system for web-camera image acquisition simultaneous with routine scintigraphic whole-body acquisition, including an algorithm for fusing the images from both cameras. For image acquisition in the gamma part of the spectrum we used an e.cam dual-head gamma camera (Siemens, Erlangen, Germany) in WBS modality, with a matrix size of 256 × 1024 pixels and a bed speed of 6 cm/min, equipped with a high-energy collimator. For optical image acquisition in the visible part of the spectrum we used a web camera model C905 (Logitech, USA) with Carl Zeiss optics, a native resolution of 1600 × 1200 pixels, a 34° field of view and a weight of 30 g, with the autofocus option turned off and auto white balance turned on. The web camera is connected to the upper head of the gamma camera (GC) by a holder made of a lightweight aluminum rod and a Plexiglas adapter. Our Vision-Fusion software for image acquisition and coregistration was developed in the NI LabVIEW 2015 programming environment (National Instruments, Texas, USA) with two additional LabVIEW modules: NI Vision Acquisition Software (VAS) and NI Vision Development Module (VDM). The Vision Acquisition Software enables communication and control between the laptop computer and the web camera. The Vision Development Module is an image processing library used for image preprocessing and fusion. The software starts web-camera image acquisition before image acquisition starts on the GC and stops it when the GC completes its acquisition. The web camera runs in continuous acquisition mode with a frame rate f that depends on the speed v of patient bed movement (f = v/Δ, where Δ is a displacement step that can be changed in the Settings option of the Vision-Fusion software; by default, Δ is set to 1 cm, corresponding to Δp = 15 pixels). All images captured while the patient's bed is moving are processed. Movement of the patient's bed is checked using cross-correlation of two successive images. After each image capture, the algorithm extracts the central region of interest (ROI) of the image, with the same width as the captured image (1600 pixels) and a height equal to the displacement Δp in pixels. All extracted central ROIs are placed next to each other in the overall whole-body image; stacking of narrow central ROIs introduces negligible distortion in this image. The first step in fusing the scintigram and the optical image was determination of the spatial transformation between them. We performed an experiment with two markers (point radioactivity sources of 99mTc pertechnetate, 1 MBq) visible in both the WBS and optical images to find the coordinate transformation between them. The distance between the point markers is used for spatial coregistration of the gamma and optical images. At the end of the coregistration process, the gamma image is rescaled in the spatial domain and added to the optical image (green or red channel, with amplification changeable from the user interface). We tested our system on 10 patients with DTC who received radioiodine therapy (8 women and 2 men, average age 50.10 ± 12.26 years). Five patients received 5.55 GBq, three 3.70 GBq and two 1.85 GBq.
Whole-body scintigraphy and optical image acquisition were performed 72 hours after administration of the radioiodine therapy. Based on our first results during clinical testing, we conclude that the system can improve the diagnostic capability of whole-body scintigraphy to detect thyroid remnant tissue in patients with DTC after radioiodine therapy.
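A sketch of two of the steps described above: checking bed motion between successive webcam frames by FFT cross-correlation, and stacking each frame's central strip into a growing whole-body optical image. The 15-pixel strip height mirrors the text; the frame sizes and data are simplified stand-ins.

```python
# Bed-motion check by cross-correlation, plus central-ROI strip stacking.
import numpy as np

def displacement(frame_a, frame_b):
    fa, fb = np.fft.fft2(frame_a), np.fft.fft2(frame_b)
    corr = np.fft.ifft2(fa * np.conj(fb)).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    size = np.array(corr.shape)
    return (np.array([dy, dx]) + size // 2) % size - size // 2  # signed shift

def stack_central_rois(frames, step_px=15):
    rows = []
    for f in frames:
        mid = f.shape[0] // 2
        rows.append(f[mid - step_px // 2: mid - step_px // 2 + step_px, :])
    return np.vstack(rows)   # overall whole-body optical image

frames = [np.random.rand(120, 160) for _ in range(10)]
print(displacement(frames[0], frames[1]), stack_central_rois(frames).shape)
```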
[Research Progress of Multi-Model Medical Image Fusion at Feature Level].
Zhang, Junjie; Zhou, Tao; Lu, Huiling; Wang, Huiqun
2016-04-01
Medical image fusion realizes the integration of the advantages of functional images and anatomical images. This article discusses the research progress of multi-model medical image fusion at the feature level. We first describe the principle of medical image fusion at the feature level. Then we analyze and summarize the applications of fuzzy sets, rough sets, D-S evidence theory, artificial neural networks, principal component analysis and other fusion methods in medical image fusion. Lastly, we indicate the present problems and future research directions of multi-model medical image fusion.
Enhanced image capture through fusion
NASA Technical Reports Server (NTRS)
Burt, Peter J.; Hanna, Keith; Kolczynski, Raymond J.
1993-01-01
Image fusion may be used to combine images from different sensors, such as IR and visible cameras, to obtain a single composite with extended information content. Fusion may also be used to combine multiple images from a given sensor to form a composite image in which information of interest is enhanced. We present a general method for performing image fusion and show that this method is effective for diverse fusion applications. We suggest that fusion may provide a powerful tool for enhanced image capture with broad utility in image processing and computer vision.
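The abstract above does not spell out the fusion machinery itself, so the sketch below is a generic Laplacian-pyramid-style fusion in the same spirit: build a band-pass stack for each source, keep the stronger detail coefficient at each level, average the base level, and collapse by summation. The level count, blur width, and same-size (undecimated) pyramid are all simplifications.

```python
# Pyramid-style fusion: per-level "choose max magnitude" on detail bands.
import numpy as np
from scipy.ndimage import gaussian_filter

def bandpass_stack(img, levels=4, sigma=2.0):
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        low = gaussian_filter(cur, sigma)
        pyr.append(cur - low)     # band-pass detail
        cur = low
    pyr.append(cur)               # residual low-pass
    return pyr

def fuse(img_a, img_b):
    pa, pb = bandpass_stack(img_a), bandpass_stack(img_b)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))   # average the base level
    return np.sum(fused, axis=0)            # telescoping sum reconstructs

ir, visible = np.random.rand(128, 128), np.random.rand(128, 128)
composite = fuse(ir, visible)
```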
Layer-Based Approach for Image Pair Fusion.
Son, Chang-Hwan; Zhang, Xiao-Ping
2016-04-20
Recently, image pairs, such as noisy and blurred images or infrared and noisy images, have been considered as a solution to provide high-quality photographs under low lighting conditions. In this paper, a new method for decomposing the image pairs into two layers, i.e., the base layer and the detail layer, is proposed for image pair fusion. In the case of infrared and noisy images, simple naive fusion leads to unsatisfactory results due to the discrepancies in brightness and image structures between the image pair. To address this problem, a local contrast-preserving conversion method is first proposed to create a new base layer of the infrared image, which can have visual appearance similar to another base layer such as the denoised noisy image. Then, a new way of designing three types of detail layers from the given noisy and infrared images is presented. To estimate the noise-free and unknown detail layer from the three designed detail layers, the optimization framework is modeled with residual-based sparsity and patch redundancy priors. To better suppress the noise, an iterative approach that updates the detail layer of the noisy image is adopted via a feedback loop. This proposed layer-based method can also be applied to fuse another noisy and blurred image pair. The experimental results show that the proposed method is effective for solving the image pair fusion problem.
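A minimal two-layer decomposition of the kind described above: a smoothed base layer plus a residual detail layer for each source, with the base taken from the (crudely denoised) noisy image and the detail from the infrared image. The Gaussian filters and the simple recombination rule are stand-ins for the paper's contrast-preserving conversion and sparsity-based detail estimation.

```python
# Base/detail layer split and recombination for an infrared + noisy pair.
import numpy as np
from scipy.ndimage import gaussian_filter

def split_layers(img, sigma=3.0):
    base = gaussian_filter(img, sigma)
    return base, img - base          # base layer, detail layer

noisy = np.random.rand(128, 128)
infrared = np.random.rand(128, 128)

base_n, _ = split_layers(gaussian_filter(noisy, 1.0))   # crude denoising
_, detail_ir = split_layers(infrared)
fused = base_n + detail_ir           # recombine the layers
```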
Design of two-DMD based zoom MW and LW dual-band IRSP using pixel fusion
NASA Astrophysics Data System (ADS)
Pan, Yue; Xu, Xiping; Qiao, Yang
2018-06-01
In order to test the anti-jamming ability of mid-wave infrared (MWIR) and long-wave infrared (LWIR) dual-band imaging systems, a zoom mid-wave (MW) and long-wave (LW) dual-band infrared scene projector (IRSP) based on two digital micro-mirror devices (DMDs) was designed using a pixel-fusion projection method. Two illumination systems, each illuminating a DMD directly with a Köhler telecentric beam, were combined with the projection system in a spatial layout. The distances of the projection entrance pupil and the illumination exit pupil were also analyzed separately. MWIR and LWIR virtual scenes were generated by the two DMDs and fused by a dichroic beam combiner (DBC), resulting in two radiation distributions in the projected image. The optical performance of each component was evaluated by ray-tracing simulations. Apparent temperature and image contrast were demonstrated by imaging experiments. On the basis of the test and simulation results, the aberrations of the optical system were well corrected, and the quality of the projected image meets the test requirements.
Ierardi, Anna Maria; Petrillo, Mario; Xhepa, Genti; Laganà, Domenico; Piacentino, Filippo; Floridi, Chiara; Duka, Ejona; Fugazzola, Carlo; Carrafiello, Gianpaolo
2016-02-01
Recently, different software packages with the ability to plan ablation volumes have been developed in order to minimize the number of electrode positioning attempts and to improve safe overall tumor coverage. To assess the feasibility of three-dimensional cone beam computed tomography (3D CBCT) fusion imaging with "virtual probe" positioning to predict ablation volume in lung tumors treated percutaneously. Pre-procedural contrast-enhanced computed tomography scans (CECT) were merged with a CBCT volume obtained to plan the ablation. Offline tumor segmentation was performed to determine the number of antennae and their positioning within the tumor. The volume of ablation obtained, evaluated on CECT performed after 1 month, was compared with the pre-procedural prediction. Feasibility was assessed on the basis of accuracy evaluation (visual evaluation [VE] and quantitative evaluation [QE]), technical success (TS), and technical effectiveness (TE). Seven patients with lung tumors treated by percutaneous thermal ablation were selected and treated on the basis of the 3D CBCT fusion imaging. In all cases the predicted ablation volume was in accordance with that obtained. The difference between predicted ablation volumes and those obtained on CECT at 1 month was 1.8 cm³ (SD ± 2, min. 0.4, max. 0.9) for MW and 0.9 cm³ (SD ± 1.1, min. 0.1, max. 0.7) for RF. Use of pre-procedural 3D CBCT fusion imaging could be useful to define expected ablation volumes. However, more patients are needed to provide stronger evidence. © The Foundation Acta Radiologica 2015.
Hybrid Image Fusion for Sharpness Enhancement of Multi-Spectral Lunar Images
NASA Astrophysics Data System (ADS)
Awumah, Anna; Mahanti, Prasun; Robinson, Mark
2016-10-01
Image fusion enhances the sharpness of a multi-spectral (MS) image by incorporating spatial details from a higher-resolution panchromatic (Pan) image [1,2]. Known applications of image fusion for planetary images are rare, although image fusion is well-known for its applications to Earth-based remote sensing. In a recent work [3], six different image fusion algorithms were implemented and their performances were verified with images from the Lunar Reconnaissance Orbiter (LRO) Camera. The image fusion procedure obtained a high-resolution multi-spectral (HRMS) product from the LRO Narrow Angle Camera (used as Pan) and LRO Wide Angle Camera (used as MS) images. The results showed that the Intensity-Hue-Saturation (IHS) algorithm produces a product of high spatial quality while the wavelet-based image fusion algorithm best preserves spectral quality among all the algorithms. In this work we show the results of a hybrid IHS-Wavelet image fusion algorithm applied to LROC MS images. The hybrid method provides the best HRMS product, both in terms of spatial resolution and preservation of spectral details. Results from hybrid image fusion can enable new science and increase the science return from existing LROC images.
[1] Pohl, C., and J. L. Van Genderen. "Review article: Multisensor image fusion in remote sensing: concepts, methods and applications." International Journal of Remote Sensing 19.5 (1998): 823-854.
[2] Zhang, Yun. "Understanding image fusion." Photogrammetric Engineering & Remote Sensing 70.6 (2004): 657-661.
[3] Mahanti, Prasun, et al. "Enhancement of spatial resolution of the LROC Wide Angle Camera images." XXIII ISPRS Congress Archives (2016).
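A compact sketch of plain IHS-style pansharpening, the spatial half of the hybrid scheme discussed above; a genuinely hybrid variant would take the injected detail from a wavelet decomposition of the Pan image instead of the raw difference. The array shapes and data are stand-ins for LROC NAC/WAC products.

```python
# Additive IHS pansharpening: inject (pan - intensity) into every MS band.
import numpy as np

def ihs_pansharpen(ms_up, pan):
    """ms_up: (bands, H, W) MS image upsampled to pan size; pan: (H, W)."""
    intensity = ms_up.mean(axis=0)
    return ms_up + (pan - intensity)[None, :, :]

pan = np.random.rand(256, 256)          # stand-in high-resolution Pan image
ms_up = np.random.rand(3, 256, 256)     # stand-in upsampled MS bands
hrms = ihs_pansharpen(ms_up, pan)
```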
Burgmans, Mark Christiaan; den Harder, J Michiel; Meershoek, Philippa; van den Berg, Nynke S; Chan, Shaun Xavier Ju Min; van Leeuwen, Fijs W B; van Erkel, Arian R
2017-06-01
To determine the accuracy of automatic and manual co-registration methods for image fusion of three-dimensional computed tomography (CT) with real-time ultrasonography (US) for image-guided liver interventions. CT images of a skills phantom with liver lesions were acquired and co-registered to US using GE Logiq E9 navigation software. Manual co-registration was compared to automatic and semiautomatic co-registration using an active tracker. Also, manual point registration was compared to plane registration with and without an additional translation point. Finally, comparison was made between manual and automatic selection of reference points. In each experiment, accuracy of the co-registration method was determined by measurement of the residual displacement in phantom lesions by two independent observers. Mean displacements for a superficial and deep liver lesion were comparable after manual and semiautomatic co-registration: 2.4 and 2.0 mm versus 2.0 and 2.5 mm, respectively. Both methods were significantly better than automatic co-registration: 5.9 and 5.2 mm residual displacement (p < 0.001; p < 0.01). The accuracy of manual point registration was higher than that of plane registration, the latter being heavily dependent on accurate matching of axial CT and US images by the operator. Automatic reference point selection resulted in significantly lower registration accuracy compared to manual point selection despite lower root-mean-square deviation (RMSD) values. The accuracy of manual and semiautomatic co-registration is better than that of automatic co-registration. For manual co-registration using a plane, choosing the correct plane orientation is an essential first step in the registration process. Automatic reference point selection based on RMSD values is error-prone.
Hsieh, K S; Lin, C C; Liu, W S; Chen, F L
1996-01-01
Two-dimensional echocardiography has long been a standard diagnostic modality for congenital heart disease. Attempts at three-dimensional reconstruction using two-dimensional echocardiographic images to visualize the stereotypic structure of cardiac lesions have been successful only recently. So far only very few studies have been done to display the three-dimensional anatomy of the heart through two-dimensional image acquisition because of the complex procedures involved. This study introduced a recently developed image acquisition and processing system for dynamic three-dimensional visualization of various congenital cardiac lesions. From December 1994 to April 1995, 35 cases were selected in our echo laboratory from about 3000 completed echo examinations. Each image was acquired on-line with a specially designed high-resolution image grabber using EKG and respiratory gating. Off-line image processing, using a window-architecture interactive software package, included conversion of 2-D echocardiographic pixels to 3-D "voxels" with conversion from an orthogonal to a rotatory axial system, interpolation, extraction of the region of interest, segmentation, shading and, finally, 3D rendering. The three-dimensional anatomy of various congenital cardiac defects was shown, including four cases with ventricular septal defects, two cases with atrial septal defects, and two cases with aortic stenosis. Dynamic reconstruction of a "beating heart" was recorded on videotape through a video interface. The potential application of 3D display of reconstructions from 2D echocardiographic images for the diagnosis of various congenital heart defects has been shown. The 3D display was able to improve the diagnostic ability of echocardiography, and a clear-cut display of the various congenital cardiac defects and valvular stenoses could be demonstrated. Refinement of the current techniques will expand future application of 3D display of conventional 2D images.
Image intensifier-based volume tomographic angiography imaging system: system evaluation
NASA Astrophysics Data System (ADS)
Ning, Ruola; Wang, Xiaohui; Shen, Jianjun; Conover, David L.
1995-05-01
An image intensifier-based rotational volume tomographic angiography imaging system has been constructed. The system consists of an x-ray tube and an image intensifier that are separately mounted on a gantry. The system uses an image intensifier coupled to a TV camera as a two-dimensional detector so that a set of two-dimensional projections can be acquired for a direct three-dimensional (3-D) reconstruction. The system was evaluated with two phantoms: a vascular phantom and a monkey head cadaver. One hundred eighty projections of each phantom were acquired with the system. A set of three-dimensional images was directly reconstructed from the projection data. The experimental results indicate that good image quality can be obtained with this system.
Fusion energy with lasers, direct drive targets, and dry wall chambers
NASA Astrophysics Data System (ADS)
Sethian, J. D.; Friedman, M.; Lehmberg, R. H.; Myers, M.; Obenschain, S. P.; Giuliani, J.; Kepple, P.; Schmitt, A. J.; Colombant, D.; Gardner, J.; Hegeler, F.; Wolford, M.; Swanekamp, S. B.; Weidenheimer, D.; Welch, D.; Rose, D.; Payne, S.; Bibeau, C.; Baraymian, A.; Beach, R.; Schaffers, K.; Freitas, B.; Skulina, K.; Meier, W.; Latkowski, J.; Perkins, L. J.; Goodin, D.; Petzoldt, R.; Stephens, E.; Najmabadi, F.; Tillack, M.; Raffray, R.; Dragojlovic, Z.; Haynes, D.; Peterson, R.; Kulcinski, G.; Hoffer, J.; Geller, D.; Schroen, D.; Streit, J.; Olson, C.; Tanaka, T.; Renk, T.; Rochau, G.; Snead, L.; Ghoneim, N.; Lucas, G.
2003-12-01
A coordinated, focused effort is underway to develop Laser Inertial Fusion Energy. The key components are developed in concert with one another and the science and engineering issues are addressed concurrently. Recent advances include target designs that show it could be possible to achieve the high gains (>100) needed for a practical fusion system. These designs feature a low-density CH foam that is wicked with solid DT and over-coated with a thin high-Z layer. These results have been verified with three independent one-dimensional codes, and are now being evaluated with two- and three-dimensional codes. Two types of lasers are under development: Krypton Fluoride (KrF) gas lasers and Diode Pumped Solid State Lasers (DPSSL). Both have recently achieved repetitive 'first light', and both have made progress in meeting the fusion energy requirements for durability, efficiency, and cost. This paper also presents the advances in development of chamber operating windows (target survival plus no wall erosion), final optics (aluminium at grazing incidence has high reflectivity and exceeds the required laser damage threshold), target fabrication (demonstration of smooth DT ice layers grown over foams, batch production of foam shells, and appropriate high-Z overcoats), and target injection (a new facility for target injection and tracking studies).
Direct three-dimensional ultrasound-to-video registration using photoacoustic markers
NASA Astrophysics Data System (ADS)
Cheng, Alexis; Kang, Jin U.; Taylor, Russell H.; Boctor, Emad M.
2013-06-01
Modern surgical procedures often rely on the fusion of video with other imaging modalities to provide the surgeon with information support. This requires interventional guidance equipment and surgical navigation systems to register different tools and devices together, such as stereoscopic endoscopes and ultrasound (US) transducers. In this work, the focus is specifically on the registration between these two devices. Electromagnetic and optical trackers are typically used to acquire this registration, but they have various drawbacks, typically leading to target registration errors (TRE) of approximately 3 mm. We introduce photoacoustic markers for direct three-dimensional (3-D) US-to-video registration. The feasibility of this method was demonstrated on synthetic and ex vivo porcine liver, kidney, and fat phantoms with an air-coupled laser and a motorized 3-D US probe. The resulting TRE for each experiment ranged from 380 to 850 μm with standard deviations ranging from 150 to 450 μm. We also discuss a roadmap to bring this system into the surgical setting and possible challenges along the way.
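Marker-based registration and TRE evaluation of this kind can be sketched with the standard least-squares rigid alignment of corresponding 3-D points (Kabsch/Umeyama); this is a generic illustration under those assumptions, not the authors' exact pipeline:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (rotation R, translation t, no scaling)
    mapping src -> dst, both (N, 3) arrays of corresponding marker positions."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

def tre(R, t, targets_src, targets_dst):
    """Target registration error: distances between mapped and true targets
    (points held out from the registration itself)."""
    mapped = targets_src @ R.T + t
    return np.linalg.norm(mapped - targets_dst, axis=1)
```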
Wang, Shunfang; Liu, Shuhui
2015-12-19
An effective representation of a protein sequence plays a crucial role in protein sub-nuclear localization. Existing representations, such as dipeptide composition (DipC), pseudo-amino acid composition (PseAAC) and the position-specific scoring matrix (PSSM), are insufficient to represent a protein sequence because of their single perspectives. This paper therefore proposes two fused feature representations, DipPSSM and PseAAPSSM, which integrate PSSM with DipC and PseAAC, respectively. When constructing each fused representation, we introduce balance factors to weigh the importance of its components. The optimal values of the balance factors are sought by a genetic algorithm. Because of the high dimensionality of the proposed representations, linear discriminant analysis (LDA) is used to find their important low-dimensional structure, which is essential for classification and location prediction. Numerical experiments on two public datasets with a KNN classifier and cross-validation tests showed that, in terms of the common indexes of sensitivity, specificity, accuracy and MCC, the proposed fused representations outperform the traditional representations in protein sub-nuclear localization, and the representation treated by LDA outperforms the untreated one.
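To make the fused representation concrete, here is a minimal sketch; the weighted-concatenation form of the fusion and the fixed balance factor `w` are simplifying assumptions (the paper tunes balance factors with a genetic algorithm), and the per-sequence PSSM feature vector is assumed to be given:

```python
import numpy as np
from itertools import product
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

AA = "ACDEFGHIKLMNPQRSTVWY"
IDX = {a + b: i for i, (a, b) in enumerate(product(AA, repeat=2))}

def dipc(seq):
    """400-dimensional dipeptide composition (DipC) of a protein sequence."""
    v = np.zeros(len(IDX))
    for i in range(len(seq) - 1):
        j = IDX.get(seq[i:i + 2])
        if j is not None:                 # skip pairs with non-standard residues
            v[j] += 1.0
    return v / max(len(seq) - 1, 1)

def fuse(dipc_vec, pssm_vec, w=0.5):
    """Weighted concatenation standing in for DipPSSM; w plays the role of
    the balance factor that the paper optimizes with a genetic algorithm."""
    return np.concatenate([w * dipc_vec, (1.0 - w) * pssm_vec])

# LDA for dimension reduction followed by KNN, mirroring the paper's pipeline
# (the cross-validation protocol is omitted for brevity).
model = make_pipeline(LinearDiscriminantAnalysis(), KNeighborsClassifier(n_neighbors=5))
```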
Multi-atlas learner fusion: An efficient segmentation approach for large-scale data.
Asman, Andrew J; Huo, Yuankai; Plassard, Andrew J; Landman, Bennett A
2015-12-01
We propose multi-atlas learner fusion (MLF), a framework for rapidly and accurately replicating the highly accurate, yet computationally expensive, multi-atlas segmentation framework based on fusing local learners. In the largest whole-brain multi-atlas study yet reported, multi-atlas segmentations are estimated for a training set of 3464 MR brain images. Using these multi-atlas estimates we (1) estimate a low-dimensional representation for selecting locally appropriate example images, and (2) build AdaBoost learners that map a weak initial segmentation to the multi-atlas segmentation result. Thus, to segment a new target image we project the image into the low-dimensional space, construct a weak initial segmentation, and fuse the trained, locally selected learners. The MLF framework cuts the runtime on a modern computer from 36 h down to 3-8 min - a 270× speedup - by completely bypassing the need for deformable atlas-target registrations. Additionally, we (1) describe a technique for optimizing the weak initial segmentation and the AdaBoost learning parameters, (2) quantify the ability to replicate the multi-atlas result with mean accuracies approaching the multi-atlas intra-subject reproducibility on a testing set of 380 images, (3) demonstrate significant increases in the reproducibility of intra-subject segmentations when compared to a state-of-the-art multi-atlas framework on a separate reproducibility dataset, (4) show that under the MLF framework the large-scale data model significantly improves the segmentation over the small-scale model, and (5) indicate that the MLF framework has performance comparable to state-of-the-art multi-atlas segmentation algorithms without using non-local information. Copyright © 2015 Elsevier B.V. All rights reserved.
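A rough sketch of the replication idea follows; all data, feature choices, and dimensions here are hypothetical stand-ins, and the locality of the learner selection is only hinted at, so this illustrates the structure rather than the paper's implementation:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
# Hypothetical per-voxel features derived from a weak initial segmentation,
# with the expensive multi-atlas result supplying the labels to replicate.
X_train = rng.random((10000, 8))           # voxel features (intensities, weak labels, ...)
y_train = rng.integers(0, 4, 10000)        # multi-atlas labels as training targets

# (1) Low-dimensional representation of whole images, used to select locally
# appropriate example images (and hence locally trained learners).
train_images = rng.random((50, 4096))      # flattened training images (stand-in)
selector = PCA(n_components=10).fit(train_images)

# (2) A learner that maps weak-segmentation features to multi-atlas labels.
learner = AdaBoostClassifier(n_estimators=50).fit(X_train, y_train)

def mlf_segment(target_image, target_voxel_features):
    """Project the target into the selection space (selection logic elided),
    then predict labels; no deformable atlas-target registration is needed."""
    _coords = selector.transform(target_image.reshape(1, -1))
    return learner.predict(target_voxel_features)
```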
Implementing and validating of pan-sharpening algorithms in open-source software
NASA Astrophysics Data System (ADS)
Pesántez-Cobos, Paúl; Cánovas-García, Fulgencio; Alonso-Sarría, Francisco
2017-10-01
Several approaches have been used in remote sensing to integrate images with different spectral and spatial resolutions in order to obtain fused enhanced images. The objective of this research is three-fold: to implement in R three image fusion techniques (High Pass Filter, Principal Component Analysis and Gram-Schmidt); to apply these techniques to merge multispectral and panchromatic images from five images with different spatial resolutions; and to evaluate the results using the universal image quality index (Q index) and the ERGAS index. As regards the qualitative analysis, Landsat-7 and Landsat-8 show greater colour distortion with the three pan-sharpening methods, although the results for the other images were better. The Q index revealed that HPF fusion performs better for the QuickBird, IKONOS and Landsat-7 images, followed by GS fusion, whereas in the case of the Landsat-8 and Natmur-08 images the results were more even. Regarding the ERGAS spatial index, the ACP algorithm performed better for the QuickBird, IKONOS, Landsat-7 and Natmur-08 images, followed closely by the GS algorithm; only for the Landsat-8 image did the GS fusion present the best result. In the evaluation of the spectral components, HPF results tended to be better and ACP results worse; the opposite was the case with the spatial components. Better quantitative results are obtained for the Landsat-7 and Landsat-8 images with the three fusion methods than for the QuickBird, IKONOS and Natmur-08 images. This contrasts with the qualitative evaluation, reflecting the importance of separating the two evaluation approaches (qualitative and quantitative). Significant disagreement may arise when different methodologies are used to assess the quality of an image fusion. Moreover, it is not possible to designate a given algorithm as the best a priori, not only because of the different characteristics of the sensors, but also because of the different atmospheric conditions and the peculiarities of the different study areas, among other reasons.
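The ERGAS index used in such evaluations has a standard closed form, ERGAS = 100 (h/l) √((1/N) Σₖ (RMSEₖ/μₖ)²), where h/l is the pan-to-MS pixel-size ratio; a minimal sketch (the example ratio is illustrative):

```python
import numpy as np

def ergas(reference, fused, ratio):
    """ERGAS spectral quality index (lower is better).

    reference, fused : (H, W, N) arrays of N spectral bands.
    ratio            : pan/MS pixel-size ratio h/l, e.g. 0.25 for a sensor
                       with 0.6 m pan and 2.4 m MS pixels.
    """
    n_bands = reference.shape[2]
    acc = 0.0
    for k in range(n_bands):
        rmse = np.sqrt(np.mean((reference[..., k] - fused[..., k]) ** 2))
        acc += (rmse / reference[..., k].mean()) ** 2   # relative error per band
    return 100.0 * ratio * np.sqrt(acc / n_bands)
```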
F-18 Labeled Diabody-Luciferase Fusion Proteins for Optical-ImmunoPET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Anna M.
2013-01-18
The goal of the proposed work is to develop novel dual-labeled molecular imaging probes for multimodality imaging. Based on small, engineered antibodies called diabodies, these probes will be radioactively tagged with Fluorine-18 for PET imaging, and fused to luciferases for optical (bioluminescence) detection. Performance will be evaluated and validated using a prototype integrated optical-PET imaging system, OPET. Multimodality probes for optical-PET imaging will be based on diabodies that are dually labeled with 18F for PET detection and fused to luciferases for optical imaging. 1) Two sets of fusion proteins will be built, targeting the cell surface markers CEA or HER2. Coelenterazine-based luciferases and variant forms will be evaluated in combination with the native substrate and analogs, in order to obtain two distinct probes recognizing different targets with different spectral signatures. 2) Diabody-luciferase fusion proteins will be labeled with 18F using amine-reactive [18F]-SFB produced using a novel microwave-assisted, one-pot method. 3) Site-specific, chemoselective radiolabeling methods will be devised, to reduce the chance that radiolabeling will inactivate either the target-binding properties or the bioluminescence properties of the diabody-luciferase fusion proteins. 4) Combined optical and PET imaging of these dual-modality probes will be evaluated and validated in vitro and in vivo using a prototype integrated optical-PET imaging system, OPET. Each imaging modality has its strengths and weaknesses. Development and use of dual-modality probes allows optical imaging to benefit from the localization and quantitation offered by the PET mode, and enhances PET imaging by enabling simultaneous detection of more than one probe.
Spontaneous and evoked release are independently regulated at individual active zones.
Melom, Jan E; Akbergenova, Yulia; Gavornik, Jeffrey P; Littleton, J Troy
2013-10-30
Neurotransmitter release from synaptic vesicle fusion is the fundamental mechanism for neuronal communication at synapses. Evoked release following an action potential has been well characterized for its function in activating the postsynaptic cell, but the significance of spontaneous release is less clear. Using transgenic tools to image single synaptic vesicle fusion events at individual release sites (active zones) in Drosophila, we characterized the spatial and temporal dynamics of exocytotic events that occur spontaneously or in response to an action potential. We also analyzed the relationship between these two modes of fusion at single release sites. A majority of active zones participate in both modes of fusion, although release probability is not correlated between the two modes of release and is highly variable across the population. A subset of active zones is specifically dedicated to spontaneous release, indicating that a population of postsynaptic receptors is uniquely activated by this mode of vesicle fusion. Imaging synaptic transmission at individual release sites also revealed general rules for spontaneous and evoked release, and indicated that active zones with similar release probability can cluster spatially within individual synaptic boutons. These findings suggest neuronal connections contain two information channels that can be spatially segregated and independently regulated to transmit evoked or spontaneous fusion signals.
Multi-focus image fusion using a guided-filter-based difference image.
Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Yang, Tingwu
2016-03-20
The aim of multi-focus image fusion technology is to integrate different partially focused images into one all-focused image. To realize this goal, a new multi-focus image fusion method based on a guided filter is proposed, and an efficient salient-feature extraction method is presented in this paper; feature extraction is the main objective of the present work. Based on salient-feature extraction, the guided filter is first used to acquire a smoothed image that retains the sharpest regions. To obtain the initial fusion map, we compose a mixed focus measure by combining the variance of image intensities and the energy of the image gradient. Then, the initial fusion map is further processed by a morphological filter to obtain a reprocessed fusion map. Lastly, the final fusion map is determined from the reprocessed fusion map and is optimized by a guided filter. Experimental results demonstrate that the proposed method markedly improves fusion performance compared to previous fusion methods and can be competitive with, or even outperform, state-of-the-art fusion methods in terms of both subjective visual effects and objective quality metrics.
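A minimal sketch of the mixed focus measure (local intensity variance plus energy of gradient) and the initial decision map follows; the window size and the equal weighting of the two terms are assumptions, and the morphological and guided-filter refinements are omitted:

```python
import numpy as np
from scipy.ndimage import uniform_filter, sobel

def focus_map(img, win=7):
    """Mixed focus measure: local intensity variance + energy of gradient."""
    mean = uniform_filter(img, win)
    var = np.maximum(uniform_filter(img * img, win) - mean * mean, 0.0)
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    eog = uniform_filter(gx * gx + gy * gy, win)     # local gradient energy
    return var + eog                                 # equal weighting assumed

def initial_fusion_map(img_a, img_b):
    """1 where img_a is judged sharper, else 0: the initial fusion map,
    before morphological clean-up and guided-filter optimization."""
    return (focus_map(img_a) > focus_map(img_b)).astype(float)
```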
A hybrid image fusion system for endovascular interventions of peripheral artery disease.
Lalys, Florent; Favre, Ketty; Villena, Alexandre; Durrmann, Vincent; Colleaux, Mathieu; Lucas, Antoine; Kaladji, Adrien
2018-07-01
Interventional endovascular treatment has become the first line of management in the treatment of peripheral artery disease (PAD). However, contrast and radiation exposure continue to limit the feasibility of these procedures. This paper presents a novel hybrid image fusion system for endovascular intervention of PAD. We present two different roadmapping methods from intra- and pre-interventional imaging that can be used either simultaneously or independently, together constituting the navigation system. The navigation system is decomposed into several steps that can be entirely integrated within the procedure workflow, without modifying it, to benefit from the roadmapping. First, a 2D panorama of the entire peripheral artery system is automatically created based on a sequence of stepping fluoroscopic images acquired during the intra-interventional diagnosis phase. During the interventional phase, the live image can be synchronized on the panorama to form the basis of the image fusion system. Two types of augmented information are then integrated. First, an angiography panorama is proposed to avoid contrast media re-injection. Information exploiting the pre-interventional computed tomography angiography (CTA) is also brought to the surgeon by means of semiautomatic 3D/2D registration on the 2D panorama. Each step of the workflow was independently validated. Experiments for both the 2D panorama creation and the synchronization processes showed very accurate results (errors of 1.24 and [Formula: see text] mm, respectively), as did the registration on the 3D CTA (errors of [Formula: see text] mm), with minimal user interaction and very low computation time. First results of an ongoing clinical study highlighted its clinical added value in terms of intraoperative parameters. No image fusion system had previously been proposed for endovascular procedures of PAD in the lower extremities. More globally, such a navigation system, combining image fusion from different 2D and 3D image sources, is novel in the field of endovascular procedures.
A dual-channel fusion system of visual and infrared images based on color transfer
NASA Astrophysics Data System (ADS)
Pei, Chuang; Jiang, Xiao-yu; Zhang, Peng-wei; Liang, Hao-cong
2013-09-01
The increasing availability and deployment of imaging sensors operating in multiple spectra has led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, most of these algorithms produce gray or false-color fusion results that are not adapted to human vision. Transferring color from a daytime reference image to obtain a natural-color fusion result is an effective way to solve this problem, but the computational cost of color transfer is high and cannot meet the requirements of real-time image processing. We developed a dual-channel infrared and visual image fusion system based on a TMS320DM642 digital signal processing chip. The system is divided into an image acquisition and registration unit, an image fusion processing unit, a system control unit and an output unit for the fusion result. Image registration of the dual-channel images is realized by combining hardware and software methods. A false-color image fusion algorithm in RGB color space is used to obtain an R-G fused image, and the system then chooses a reference image whose color is transferred to the fusion result. A color lookup table based on the statistical properties of the images is proposed to solve the computational complexity problem in color transfer. The mapping calculation between the standard lookup table and the improved color lookup table is simple and is performed only once for a fixed scene. Real-time fusion and natural colorization of infrared and visual images are realized by this system. The experimental results show that the color-transferred images have a natural color perception to human eyes and can highlight targets effectively with clear background details. Human observers using this system can interpret the image better and faster, thereby improving situational awareness and reducing target detection time.
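Statistics-based color transfer of this kind can be sketched as a per-channel affine map to the reference image's mean and standard deviation, realized once per scene as a lookup table; working directly in RGB here is a simplification (color transfer is classically done in a decorrelated space such as lαβ), so this illustrates the idea rather than the authors' exact table:

```python
import numpy as np

def transfer_color(fused, reference):
    """Match each channel of the (false-color) fused image to the first- and
    second-order statistics of a daytime reference image."""
    out = np.empty_like(fused, dtype=float)
    for c in range(3):
        src = fused[..., c].astype(float)
        ref = reference[..., c].astype(float)
        out[..., c] = (src - src.mean()) / (src.std() + 1e-6) * ref.std() + ref.mean()
    return np.clip(out, 0, 255)

def build_lut(src_mean, src_std, ref_mean, ref_std):
    """256-entry per-channel LUT realizing the same affine map, computed once
    per fixed scene so run-time cost is a single table read per pixel."""
    x = np.arange(256, dtype=float)
    y = (x - src_mean) / (src_std + 1e-6) * ref_std + ref_mean
    return np.clip(y, 0, 255).astype(np.uint8)
```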
A Review of Multivariate Methods for Multimodal Fusion of Brain Imaging Data
Adali, Tülay; Yu, Qingbao; Calhoun, Vince D.
2011-01-01
The development of various neuroimaging techniques is rapidly improving the measurement of brain function and structure. However, despite improvements in individual modalities, it is becoming increasingly clear that the most effective research approaches will utilize multi-modal fusion, which takes advantage of the fact that each modality provides a limited view of the brain. The goal of multimodal fusion is to capitalize on the strength of each modality in a joint analysis, rather than a separate analysis of each. This is a more complicated endeavor that must be approached carefully, and efficient methods should be developed to draw generalized and valid conclusions from high-dimensional data with a limited number of subjects. Numerous research efforts have been reported in the field based on various statistical approaches, e.g., independent component analysis (ICA), canonical correlation analysis (CCA) and partial least squares (PLS). In this review paper, we survey a number of multivariate methods appearing in previous reports, which are performed with or without prior information and may have utility for identifying potential brain illness biomarkers. We also discuss the possible strengths and limitations of each method, and review their applications to brain imaging data. PMID:22108139
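As a toy illustration of one of the surveyed approaches (CCA), the sketch below pairs two synthetic "modalities" and extracts correlated components; the data, dimensions, and component count are invented for the example:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
# Synthetic two-modality data: rows are subjects, columns are features
# summarizing each modality (e.g., fMRI contrast maps, gray-matter maps).
shared = rng.standard_normal((60, 5))                              # hidden joint sources
X = shared @ rng.standard_normal((5, 120)) + 0.5 * rng.standard_normal((60, 120))
Y = shared @ rng.standard_normal((5, 80)) + 0.5 * rng.standard_normal((60, 80))

cca = CCA(n_components=5).fit(X, Y)
U, V = cca.transform(X, Y)          # paired canonical variates per subject

# Strongly correlated pairs indicate structure shared across modalities,
# i.e., candidate joint components (potential biomarkers in the review's terms).
corrs = [np.corrcoef(U[:, k], V[:, k])[0, 1] for k in range(U.shape[1])]
```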
Inverse energy cascades in three-dimensional turbulence
NASA Technical Reports Server (NTRS)
Hossain, Murshed
1991-01-01
Fully three-dimensional magnetohydrodynamic (MHD) turbulence at large kinetic and low magnetic Reynolds numbers is considered in the presence of a strong uniform magnetic field. It is shown by numerical simulation of a model of MHD that the energy inverse-cascades to longer length scales when the interaction parameter is large. While the steady-state dynamics of the driven problem is three-dimensional in character, the behavior resembles two-dimensional hydrodynamics. These results have implications for turbulence theory, MHD power generators, planetary dynamos, and fusion reactor blanket design.
Pfister, Karin; Schierling, Wilma; Jung, Ernst Michael; Apfelbeck, Hanna; Hennersperger, Christoph; Kasprzak, Piotr M
2016-01-01
To compare standardised 2D ultrasound (US) with the novel ultrasonographic imaging techniques 3D/4D US and image fusion (combined real-time display of B mode and CT scan) for routine measurement of aortic diameter in follow-up after endovascular aortic aneurysm repair (EVAR). 300 measurements were performed on 20 patients after EVAR by one experienced sonographer (3rd degree of the German society of ultrasound (DEGUM)) with a high-end ultrasound machine and a convex probe (1-5 MHz). An internally standardized scanning protocol for the aortic aneurysm diameter in B mode used a so-called leading-edge method. In summary, five different US methods (2D, 3D free-hand, magnetic field tracked 3D - Curefab™, 4D volume sweep, image fusion), each including contrast-enhanced ultrasound (CEUS), were used for measurement of the maximum aortic aneurysm diameter. Standardized 2D sonography was the defined reference standard for statistical analysis. CEUS was used for endoleak detection. Technical success was 100%. In augmented transverse imaging, the mean aortic anteroposterior (AP) diameter was 4.0±1.3 cm for 2D US, 4.0±1.2 cm for 3D Curefab™, 3.9±1.3 cm for 4D US and 4.0±1.2 cm for image fusion. The mean differences were below 1 mm (0.2-0.9 mm). Concerning estimation of aneurysm growth, agreement was found between 2D, 3D and 4D US in 19 of the 20 patients (95%). A definitive decision could always be made by image fusion. CEUS was combined with all methods and detected a type II endoleak in two of the 20 patients (10%). In one case, the endoleak feeding arteries remained unclear with 2D CEUS but could be clearly localized by 3D CEUS and image fusion. Standardized 2D US allows adequate routine follow-up of the maximum aortic aneurysm diameter after EVAR. Image fusion enables a definitive statement about aneurysm growth without the need for new CT imaging by combining the postoperative CT scan with real-time B mode in a dual image display. 3D/4D CEUS and image fusion can improve endoleak characterization in selected cases but are not mandatory for routine practice.
Improving the recognition of fingerprint biometric system using enhanced image fusion
NASA Astrophysics Data System (ADS)
Alsharif, Salim; El-Saba, Aed; Stripathi, Reshma
2010-04-01
Fingerprint recognition systems have been widely used by financial institutions, law enforcement, border control and visa issuing, to mention just a few applications. Biometric identifiers can be counterfeited, but they are considered more reliable and secure than traditional ID cards or personal passwords. Fingerprint pattern fusion improves the performance of a fingerprint recognition system in terms of accuracy and security. This paper presents digital enhancement and fusion approaches that improve the biometric performance of the fingerprint recognition system. It is a two-step approach. In the first step, raw fingerprint images are enhanced using high-frequency-emphasis filtering (HFEF). The second step is a simple linear fusion between the raw images and the HFEF ones. It is shown that the proposed approach increases the verification and identification performance of the fingerprint biometric recognition system, where any improvement is justified using the correlation performance metrics of the matching algorithm.
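A minimal sketch of the two steps under common assumptions, using a Gaussian high-pass kernel and the usual offset/gain form H = a + b·Hhp for the emphasis filter (the paper's exact filter parameters and fusion weight are not specified here):

```python
import numpy as np

def hfef(img, d0=30.0, a=0.5, b=1.5):
    """High-frequency-emphasis filtering in the frequency domain:
    H(u, v) = a + b * Hhp(u, v), with a Gaussian high-pass Hhp."""
    h, w = img.shape
    u = np.fft.fftfreq(h)[:, None] * h           # frequency grid (index units)
    v = np.fft.fftfreq(w)[None, :] * w
    d2 = u * u + v * v                           # squared distance from DC
    hhp = 1.0 - np.exp(-d2 / (2.0 * d0 ** 2))    # Gaussian high-pass
    spec = np.fft.fft2(img) * (a + b * hhp)
    return np.real(np.fft.ifft2(spec))

def fuse(raw, alpha=0.5):
    """Step 2: simple linear fusion of the raw print with its HFEF version."""
    return alpha * raw + (1.0 - alpha) * hfef(raw)
```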
Feature-Motivated Simplified Adaptive PCNN-Based Medical Image Fusion Algorithm in NSST Domain.
Ganasala, Padma; Kumar, Vinod
2016-02-01
Multimodality medical image fusion plays a vital role in diagnosis, treatment planning, and follow-up studies of various diseases. It provides a composite image containing critical information from the source images required for better localization and definition of different organs and lesions. In the state-of-the-art image fusion methods based on the nonsubsampled shearlet transform (NSST) and pulse-coupled neural networks (PCNN), authors have used the normalized coefficient value to motivate the PCNN processing of both low-frequency (LF) and high-frequency (HF) sub-bands. This blurs the fused image and decreases its contrast. The main objective of this work is to design an image fusion method that gives the fused image better contrast and more detail information and is suitable for clinical use. We propose a novel image fusion method utilizing a feature-motivated adaptive PCNN in the NSST domain for fusion of anatomical images. The basic PCNN model is simplified, and an adaptive linking strength is used. Different features are used to motivate the PCNN processing of the LF and HF sub-bands. The proposed method is extended to fusion of a functional image with an anatomical image in the improved nonlinear intensity hue and saturation (INIHS) color model. Extensive fusion experiments have been performed on CT-MRI and SPECT-MRI datasets. Visual and quantitative analysis of the experimental results proved that the proposed method provides satisfactory fusion outcomes compared to other image fusion methods.
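A minimal sketch of a simplified PCNN of the kind described, assuming a normalized stimulus and standard linking/threshold dynamics; the paper's adaptive linking strength and its specific motivating features are not reproduced here:

```python
import numpy as np
from scipy.ndimage import convolve

def simplified_pcnn(stimulus, beta=0.2, a_theta=0.2, v_theta=20.0, iters=100):
    """Simplified pulse-coupled neural network: returns per-pixel firing
    counts, usable as an activity map for sub-band coefficient selection.
    `stimulus` is a normalized motivating feature (e.g., coefficient
    magnitude) feeding the network."""
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])              # linking weights to neighbors
    Y = np.zeros_like(stimulus)                  # pulse (firing) state
    theta = np.ones_like(stimulus)               # dynamic threshold
    fire_count = np.zeros_like(stimulus)
    for _ in range(iters):
        L = convolve(Y, W, mode="constant")      # linking input from neighbors
        U = stimulus * (1.0 + beta * L)          # internal activity
        Y = (U > theta).astype(float)            # neurons fire when U exceeds theta
        theta = np.exp(-a_theta) * theta + v_theta * Y   # decay, jump on firing
        fire_count += Y
    return fire_count

# Coefficient selection rule: at each position keep the sub-band coefficient
# whose PCNN fire count is larger across the two source images.
```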
Time-resolved wide-field optically sectioned fluorescence microscopy
NASA Astrophysics Data System (ADS)
Dupuis, Guillaume; Benabdallah, Nadia; Chopinaud, Aurélien; Mayet, Céline; Lévêque-Fort, Sandrine
2013-02-01
We present the implementation of a fast wide-field optical sectioning technique called HiLo microscopy on a fluorescence lifetime imaging microscope. HiLo microscopy is based on the fusion of two images, one acquired with structured illumination and another with uniform illumination. Optically sectioned images are then generated digitally by a fusion algorithm. HiLo images are comparable in quality to confocal images, but they can be acquired faster over larger fields of view. We obtain 4D imaging by combining HiLo optical sectioning, time-gated detection, and z-displacement. We characterize the performance of this set-up in terms of 3D spatial resolution and time-resolved capabilities in both fixed- and live-cell imaging modes.
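The HiLo fusion step can be sketched as follows; this is a simplified, assumption-laden rendition (Gaussian filters and a local-contrast weighting of the low frequencies) rather than the authors' exact algorithm:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hilo(uniform, structured, sigma=4.0, eta=1.0):
    """Minimal HiLo fusion sketch. High frequencies are inherently sectioned
    and come from the uniform image; low frequencies are weighted by the
    local contrast of the difference image (in-focus structure modulates
    the illumination pattern most strongly) before low-pass filtering."""
    diff = structured - uniform
    # Local contrast of the difference image as a sectioning weight.
    mean = gaussian_filter(uniform, sigma) + 1e-6
    contrast = np.sqrt(gaussian_filter(diff * diff, sigma)) / mean
    lo = gaussian_filter(contrast * uniform, sigma)     # sectioned low-pass part
    hi = uniform - gaussian_filter(uniform, sigma)      # complementary high-pass part
    return eta * lo + hi                                # eta balances the two bands
```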
Laser-driven magnetized liner inertial fusion
Davies, J. R.
2017-06-05
A laser-driven, magnetized liner inertial fusion (MagLIF) experiment is designed in this paper for the OMEGA Laser System by scaling down the Z point design to provide the first experimental data on MagLIF scaling. OMEGA delivers roughly 1000× less energy than Z, so target linear dimensions are reduced by factors of ~10. The magneto-inertial fusion electrical discharge system could provide an axial magnetic field of 10 T. Two-dimensional hydrocode modeling indicates that a single OMEGA beam can preheat the fuel to a mean temperature of ~200 eV, limited by mix caused by heat flow into the wall. One-dimensional magnetohydrodynamic (MHD) modeling is used to determine the pulse duration and fuel density that optimize neutron yield at a fuel convergence ratio of roughly 25 or less, matching the Z point design, for a range of shell thicknesses. A relatively thinner shell, giving a higher implosion velocity, is required to give adequate fuel heating on OMEGA compared to Z because of the increase in thermal losses in smaller targets. Two-dimensional MHD modeling of the point design gives roughly a 50% reduction in compressed density, temperature, and magnetic field from 1-D because of end losses. Finally, scaling up the OMEGA point design to the MJ laser energy available on the National Ignition Facility gives a 500-fold increase in neutron yield in 1-D modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukai, K.; Peterson, B. J.
The InfraRed imaging Video Bolometer (IRVB) is a useful diagnostic for the multi-dimensional measurement of plasma radiation profiles. For the application of IRVB measurement to the neutron environment in fusion plasma devices such as the Large Helical Device (LHD), in situ calibration of the thermal characteristics of the foil detector is required. Laser irradiation tests of sample foils show that the reproducibility and uniformity of the carbon coating for the foil were improved using a vacuum evaporation method. Also, the principle of the in situ calibration system was justified.
Multiscale Medical Image Fusion in Wavelet Domain
Khare, Ashish
2013-01-01
Wavelet transforms have emerged as a powerful tool in image fusion. However, the study and analysis of medical image fusion is still a challenging area of research. Therefore, in this paper, we propose a multiscale fusion of multimodal medical images in the wavelet domain. Fusion of medical images has been performed at multiple scales, varying from the minimum to the maximum level, using the maximum selection rule, which provides more flexibility and choice in selecting the relevant fused images. The experimental analysis of the proposed method has been performed with several sets of medical images. The fusion results have been evaluated subjectively and objectively against existing state-of-the-art fusion methods, which include several pyramid- and wavelet-transform-based fusion methods and the principal component analysis (PCA) fusion method. The comparative analysis of the fusion results has been performed with the edge strength (Q), mutual information (MI), entropy (E), standard deviation (SD), blind structural similarity index metric (BSSIM), spatial frequency (SF), and average gradient (AG) metrics. The combined subjective and objective evaluations of the proposed fusion method at multiple scales showed the effectiveness of the proposed approach. PMID:24453868
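A minimal sketch of multiscale fusion with the maximum-selection rule, using the PyWavelets package; the wavelet family and decomposition level are illustrative choices, and applying the maximum rule to the approximation band as well is a simplification:

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", level=3):
    """Decompose both images, keep the larger-magnitude coefficient at every
    scale and orientation, and reconstruct the fused image."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    # Approximation coefficients (coarsest scale).
    fused = [np.where(np.abs(ca[0]) >= np.abs(cb[0]), ca[0], cb[0])]
    # Detail coefficients: (horizontal, vertical, diagonal) per level.
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```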
2016-11-30
AFRL-AFOSR-JP-TR-2017-0016: In-situ Manipulation and Imaging of Switchable Two-dimensional Electron Gas at Oxide Heterointerfaces. Chang Beom Eom. Grant number FA2386-15-1-4046. The recent discovery of a two-dimensional electron gas (2DEG) at the interface between the insulating perovskite oxides SrTiO3 and LaAlO3
Adaptive fusion of infrared and visible images in dynamic scene
NASA Astrophysics Data System (ADS)
Yang, Guang; Yin, Yafeng; Man, Hong; Desai, Sachi
2011-11-01
Multiple-modality sensor fusion has been widely employed in various surveillance and military applications. A variety of image fusion techniques, including PCA, wavelet, curvelet and HSV, have been proposed in recent years to improve human visual perception for object detection. One of the main challenges for visible and infrared image fusion is to automatically determine an optimal fusion strategy for different input scenes at an acceptable computational cost. In this paper, we propose a fast and adaptive feature-selection-based image fusion method to obtain a high-contrast image from visible and infrared sensors for target detection. At first, fuzzy c-means clustering is applied to the infrared image to highlight possible hotspot regions, which are considered potential target locations. After that, the region surrounding the target area is segmented as the background region. Then image fusion is locally applied on the selected target and background regions by computing different linear combinations of color components from the registered visible and infrared images. After obtaining the different fused images, histogram distributions are computed on these local fusion images as the fusion feature set. The variance ratio, a measure based on Linear Discriminant Analysis (LDA), is employed to sort the feature set, and the most discriminative feature is selected for the whole-image fusion. As the feature selection is performed over time, the process dynamically determines the most suitable feature for image fusion in different scenes. Experiments were conducted on the OSU Color-Thermal database and the TNO Human Factors dataset. The fusion results indicate that our proposed method achieved competitive performance compared with other fusion algorithms at a relatively low computational cost.
Present status and trends of image fusion
NASA Astrophysics Data System (ADS)
Xiang, Dachao; Fu, Sheng; Cai, Yiheng
2009-10-01
Image fusion extracts information from multiple images, and the result is more accurate and reliable than that from a single image, since the various images contain different aspects of the measured scene and comprehensive information can be obtained by integrating them. Image fusion is a main branch of the application of data fusion technology. At present it is widely used in computer vision, remote sensing, robot vision, medical image processing and the military field. This paper mainly presents the content and research methods of image fusion and the status quo at home and abroad, and analyzes development trends.
Djan, Igor; Petrović, Borislava; Erak, Marko; Nikolić, Ivan; Lucić, Silvija
2013-08-01
The development of imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET) has had a great impact on radiotherapy treatment planning by improving the localization of target volumes. Improved localization allows better local control of tumor volumes and also minimizes geographical misses. Mutual information is obtained by registration and fusion of images, achieved manually or automatically. The aim of this study was to validate the CT-MRI image fusion method and compare delineation obtained by CT alone versus CT-MRI image fusion. The image fusion software (XIO CMS 4.50.0) was used for delineation in 16 patients. The patients were scanned on CT and MRI in the treatment position within an immobilization device before the initial treatment. The gross tumor volume (GTV) and clinical target volume (CTV) were delineated consecutively on CT alone and on fused CT-MRI images. Image fusion showed that the CTV delineated on a CT image study set is mainly inadequate for treatment planning compared with the CTV delineated on the fused CT-MRI image study set. Fusion of different modalities enables the most accurate target volume delineation. This study shows that registration and image fusion allow precise target localization in terms of GTV and CTV and local disease control.
[Possibilities of sonographic image fusion: Current developments].
Jung, E M; Clevert, D-A
2015-11-01
For diagnostic and interventional procedures, ultrasound (US) image fusion can be used as a complementary imaging technique. Image fusion offers real-time imaging and can be combined with other cross-sectional imaging techniques. With the introduction of US contrast agents, sonography and image fusion have gained importance in the detection and characterization of liver lesions. Fusion of US images with computed tomography (CT) or magnetic resonance imaging (MRI) facilitates diagnostics and post-interventional therapy control. In addition to the primary application of image fusion in the diagnosis and treatment of liver lesions, there are further useful indications for contrast-enhanced US (CEUS) in routine clinical diagnostics, such as intraoperative US (IOUS), vascular imaging and the assessment of other organs, such as the kidneys and prostate gland.
Airborne Infrared and Visible Image Fusion Combined with Region Segmentation
Zuo, Yujia; Liu, Jinghong; Bai, Guanbing; Wang, Xuan; Sun, Mingchao
2017-01-01
This paper proposes an infrared (IR) and visible image fusion method introducing region segmentation into the dual-tree complex wavelet transform (DTCWT) domain. This method should effectively improve both the target indication and scene spectrum features of fusion images, and the target identification and tracking reliability of the fusion system, on an airborne photoelectric platform. The method involves segmenting the regions in an IR image by significance, identifying the target region and the background region; then fusing the low-frequency components in the DTCWT domain according to the region segmentation result. For high-frequency components, region weights are assigned by the information richness of region details to conduct fusion based on both weights and adaptive phases, and a shrinkage function is introduced to suppress noise. Finally, the fused low-frequency and high-frequency components are reconstructed to obtain the fusion image. The experimental results show that the proposed method can fully extract complementary information from the source images to obtain a fusion image with good target indication and rich information on scene details. It also gives a fusion result superior to existing popular fusion methods, based on either subjective or objective evaluation. With good stability and high fusion accuracy, this method can meet the fusion requirements of IR-visible image fusion systems. PMID:28505137
Three dimensional identification card and applications
NASA Astrophysics Data System (ADS)
Zhou, Changhe; Wang, Shaoqing; Li, Chao; Li, Hao; Liu, Zhao
2016-10-01
A three-dimensional identification card, with a three-dimensional personal image displayed and stored for personal identification, is expected to be the advanced version of the present two-dimensional identification card [1]. A three-dimensional identification card means that three-dimensional optical techniques are used: the personal image on the ID card is displayed in three dimensions, so that a three-dimensional personal face can be seen. The ID card also stores the three-dimensional face information in its embedded electronic chip; this information might be recorded using two-channel cameras and can be displayed on a computer as three-dimensional images for personal identification. Three-dimensional ID cards might be one interesting direction for updating the present two-dimensional card. They might be widely used at airport customs, at the entrances of hotels, schools and universities, as a passport for online banking, for registration in online games, etc.
Three-dimensional digital mapping of the optic nerve head cupping in glaucoma
NASA Astrophysics Data System (ADS)
Mitra, Sunanda; Ramirez, Manuel; Morales, Jose
1992-08-01
Visualization of the optic nerve head cupping is clinically achieved by stereoscopic viewing of a fundus image pair of the suspected eye. A novel algorithm for three-dimensional digital surface representation of the optic nerve head, using fusion of a stereo depth map with a linearly stretched intensity image of a stereo fundus image pair, is presented. Prior to depth map acquisition, a number of preprocessing tasks are performed, including feature extraction, registration by cepstral analysis, and correction for intensity variations. The depth map is obtained by using a coarse-to-fine strategy for obtaining disparities between corresponding areas. The matching technique used to obtain the translational differences at every step employs cepstral analysis, with a correlation-like scanning technique in the spatial domain for the finest details. The quantitative and precise representation of the optic nerve head surface topography following this algorithm is not computationally intensive and should provide more useful information than qualitative stereoscopic viewing of the fundus alone as one of the criteria for the diagnosis of glaucoma.
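The translation-matching step can be illustrated with a closely related frequency-domain technique; the sketch below uses phase correlation (the cross-power spectrum), a relative of the cepstral matching the authors describe, not their exact algorithm:

```python
import numpy as np

def estimate_shift(img_a, img_b):
    """Estimate the integer translation of img_a relative to img_b from the
    peak of the normalized cross-power spectrum (phase correlation)."""
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak coordinates to signed shifts (FFT wrap-around).
    h, w = img_a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```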
Toet, Alexander; Hogervorst, Maarten A; Pinkus, Alan R
2016-01-01
The fusion and enhancement of multiband nighttime imagery for surveillance and navigation has been the subject of extensive research for over two decades. Despite the ongoing efforts in this area there is still only a small number of static multiband test images available for the development and evaluation of new image fusion and enhancement methods. Moreover, dynamic multiband imagery is also currently lacking. To fill this gap we present the TRICLOBS dynamic multi-band image data set containing sixteen registered visual (0.4-0.7μm), near-infrared (NIR, 0.7-1.0μm) and long-wave infrared (LWIR, 8-14μm) motion sequences. They represent different military and civilian surveillance scenarios registered in three different scenes. Scenes include (military and civilian) people that are stationary, walking or running, or carrying various objects. Vehicles, foliage, and buildings or other man-made structures are also included in the scenes. This data set is primarily intended for the development and evaluation of image fusion, enhancement and color mapping algorithms for short-range surveillance applications. The imagery was collected during several field trials with our newly developed TRICLOBS (TRI-band Color Low-light OBServation) all-day all-weather surveillance system. This system registers a scene in the Visual, NIR and LWIR part of the electromagnetic spectrum using three optically aligned sensors (two digital image intensifiers and an uncooled long-wave infrared microbolometer). The three sensor signals are mapped to three individual RGB color channels, digitized, and stored as uncompressed RGB (false) color frames. The TRICLOBS data set enables the development and evaluation of (both static and dynamic) image fusion, enhancement and color mapping algorithms. To allow the development of realistic color remapping procedures, the data set also contains color photographs of each of the three scenes. The color statistics derived from these photographs can be used to define color mappings that give the multi-band imagery a realistic color appearance.
A survey of infrared and visual image fusion methods
NASA Astrophysics Data System (ADS)
Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Hai, Jinjin; He, Kangjian
2017-09-01
Infrared (IR) and visual (VI) image fusion is designed to fuse multiple source images into a comprehensive image to boost imaging quality and reduce redundant information, and it is widely used in various imaging equipment to improve the visual ability of humans and robots. The accurate, reliable and complementary descriptions of the scene in fused images allow these techniques to be widely used in various fields. In recent years, a large number of fusion methods for IR and VI images have been proposed owing to ever-growing demands and progress in image representation methods; however, no integrated survey of this field has been published in the last several years. Therefore, we report a survey of the algorithmic developments in IR and VI image fusion. In this paper, we first characterize the applications based on IR and VI image fusion to give an overview of the research status. Then we present a synthesized survey of the state of the art. Third, the frequently used image fusion quality measures are introduced. Fourth, we perform experiments on typical methods and provide the corresponding analysis. Finally, we summarize the tendencies and challenges in IR and VI image fusion. This survey concludes that although various IR and VI image fusion methods have been proposed, there remain further improvements and potential research directions in different applications of IR and VI image fusion.
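Among the frequently used quality measures such surveys discuss, mutual information is easy to state concretely; a minimal histogram-based sketch follows (the bin count is an arbitrary choice, and the common fusion metric sums MI(source1, fused) + MI(source2, fused)):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information between two images in bits; for fusion evaluation,
    higher MI(source, fused) means more source information is retained."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()                       # joint distribution
    px = pxy.sum(axis=1, keepdims=True)           # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)           # marginal of img_b
    nz = pxy > 0                                  # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```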
Paprottka, P M; Zengel, P; Cyran, C C; Ingrisch, M; Nikolaou, K; Reiser, M F; Clevert, D A
2014-01-01
To evaluate ultrasound tissue elasticity imaging by comparison with multimodality imaging using image fusion with magnetic resonance imaging (MRI) and conventional grey-scale imaging with additional elasticity ultrasound in an experimental small-animal squamous cell carcinoma model for the assessment of tissue morphology. Human hypopharynx carcinoma cells were subcutaneously injected into the left flank of 12 female athymic nude rats. After 10 days (SD ± 2) of subcutaneous tumor growth, sonographic grey-scale and elasticity imaging and MRI measurements were performed using a high-end ultrasound system and a 3-T MR scanner. For image fusion, the contrast-enhanced MRI DICOM dataset was uploaded to the ultrasound device (GE Logiq E9), which has a magnetic field generator, a linear array transducer (6-15 MHz) and a dedicated software package that can detect transducers by means of a positioning system. Conventional grey-scale and elasticity imaging were integrated in the image fusion examination. After successful registration and image fusion, the registered MR images were shown simultaneously with the respective ultrasound sectional plane. Data evaluation was performed on the digitally stored video sequence datasets by two experienced radiologists using a modified Tsukuba elasticity score. The colors red and green are assigned to areas of soft tissue; blue indicates hard tissue. In all cases successful image fusion and plane registration with MRI and ultrasound imaging, including grey-scale and elasticity imaging, was possible. The mean tumor volume based on caliper measurements in 3 dimensions was ~323 mm³. 4/12 rats were evaluated with Score I, 5/12 rats with Score II, and 3/12 rats with Score III. There was a close correlation with the fused MRI in cases with small necroses in the tumor. None of the Score II or III lesions was visible on conventional grey-scale imaging. The comparison shows that ultrasound tissue elasticity imaging enables a secure differentiation between different tumor tissue areas relative to image fusion with MRI in our small study group. Ultrasound tissue elasticity imaging might therefore be used for fast detection of tumor response in the future, whereas conventional grey-scale imaging alone could not provide this additional information. By using standard contrast-enhanced MRI images for reliable and reproducible slice positioning, the strongly user-dependent limitation of ultrasound tissue elasticity imaging may be overcome, especially for comparisons between baseline and follow-up measurements.
Lattanzi, J P; Fein, D A; McNeeley, S W; Shaer, A H; Movsas, B; Hanks, G E
1997-01-01
We describe our initial experience with the AcQSim (Picker International, St. David, PA) computed tomography-magnetic resonance imaging (CT-MRI) fusion software in eight patients with intracranial lesions. MRI data are electronically integrated into the CT-based treatment planning system. Since MRI is superior to CT in identifying intracranial abnormalities, we evaluated the precision and feasibility of this new localization method. Patients initially underwent CT simulation from C2 to the most superior portion of the scalp. T2 and post-contrast T1-weighted MRI of this area was then performed. Patient positioning was duplicated utilizing a head cup and bridge of nose to forehead angle measurements. First, a gross tumor volume (GTV) was identified utilizing the CT (CT/GTV). The CT and MRI scans were subsequently fused utilizing a point pair matching method and a second GTV (CT-MRI/GTV) was contoured with the aid of both studies. The fusion process was uncomplicated and completed in a timely manner. Volumetric analysis revealed the CT-MRI/GTV to be larger than the CT/GTV in all eight cases. The mean CT-MRI/GTV was 28.7 cm3 compared to 16.7 cm3 by CT alone. This translated into a 72% increase in the radiographic tumor volume by CT-MRI. A simulated dose-volume histogram in two patients revealed that marginal portions of the lesion, as identified by CT and MRI, were not included in the high dose treatment volume as contoured with the use of CT alone. Our initial experience with the fusion software demonstrated an improvement in tumor localization with this technique. Based on these patients the use of CT alone for treatment planning purposes in central nervous system (CNS) lesions is inadequate and would result in an unacceptable rate of marginal misses. The importation of MRI data into three-dimensional treatment planning is therefore crucial to accurate tumor localization. The fusion process simplifies and improves precision of this task.
A high-resolution imaging x-ray crystal spectrometer for high energy density plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Hui, E-mail: chen33@llnl.gov, E-mail: bitter@pppl.gov; Magee, E.; Nagel, S. R.
2014-11-15
Adapting a concept developed for magnetic confinement fusion experiments, an imaging crystal spectrometer has been designed and tested for HED plasmas. The instrument uses a spherically bent quartz [211] crystal with radius of curvature of 490.8 mm. The instrument was tested at the Titan laser at Lawrence Livermore National Laboratory by irradiating titanium slabs with laser intensities of 10^19-10^20 W/cm^2. He-like and Li-like Ti lines were recorded, from which the spectrometer performance was evaluated. This spectrometer provides very high spectral resolving power (E/dE > 7000) while acquiring a one-dimensional image of the source.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shiraga, H.; Mahigashi, N.; Yamada, T.
2008-10-15
Low-density plastic foam filled with liquid deuterium is one of the candidates for inertial fusion targets. The density profile and trajectory of a 527 nm laser-irradiated planar foam-deuterium target in the acceleration phase were observed with streaked side-on x-ray backlighting. An x-ray imager employing twin slits coupled to an x-ray streak camera was used to simultaneously observe three images of the target: self-emission from the target, the x-ray backlighter profile, and the backlit target. The experimentally obtained density profile and trajectory were in good agreement with predictions by the one-dimensional hydrodynamic simulation code ILESTA-1D.
Objective quality assessment for multiexposure multifocus image fusion.
Hassen, Rania; Wang, Zhou; Salama, Magdy M A
2015-09-01
There has been a growing interest in image fusion technologies, but how to objectively evaluate the quality of fused images has not been fully understood. Here, we propose a method for objective quality assessment of multiexposure multifocus image fusion based on the evaluation of three key factors of fused image quality: 1) contrast preservation; 2) sharpness; and 3) structure preservation. Subjective experiments were conducted to create an image fusion database; performance evaluation on this database shows that the proposed fusion quality index correlates well with subjective scores and gives a significant improvement over existing fusion quality measures.
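As a rough, illustrative sketch of the three factors named above (not the authors' index; the proxy choices, the averaging, and the function names are assumptions), one can score a fused image F against source images A and B with gradient energy for sharpness/contrast and SSIM for structure preservation:

```python
import numpy as np
from skimage.metrics import structural_similarity

def gradient_energy(img):
    # mean squared gradient magnitude: a common sharpness/contrast proxy
    gy, gx = np.gradient(img.astype(float))
    return np.mean(gx ** 2 + gy ** 2)

def fusion_quality(A, B, F):
    rng = float(F.max() - F.min()) or 1.0
    # structure preservation: SSIM of the fused image against each source
    structure = 0.5 * (structural_similarity(A, F, data_range=rng)
                       + structural_similarity(B, F, data_range=rng))
    # sharpness: the fused image should keep at least the sharper source's detail
    sharpness = gradient_energy(F) / max(gradient_energy(A), gradient_energy(B))
    return structure, sharpness
```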
Dauwe, Dieter Frans; Nuyens, Dieter; De Buck, Stijn; Claus, Piet; Gheysens, Olivier; Koole, Michel; Coudyzer, Walter; Vanden Driessche, Nina; Janssens, Laurens; Ector, Joris; Dymarkowski, Steven; Bogaert, Jan; Heidbuchel, Hein; Janssens, Stefan
2014-08-01
Biological therapies for ischaemic heart disease require efficient, safe, and affordable intramyocardial delivery. Integration of multiple imaging modalities within the fluoroscopy framework can provide valuable information to guide these procedures. We compared an anatomo-electric method (LARCA) with a non-fluoroscopic electromechanical mapping system (NOGA(®)). LARCA integrates selective three-dimensional rotational angiograms with biplane fluoroscopy. To identify the infarct region, we studied LARCA-fusion with pre-procedural magnetic resonance imaging (MRI), dedicated CT, or (18)F-FDG-PET/CT. We induced myocardial infarction in 20 pigs by 90-min LAD occlusion. Six weeks later, we compared peri-infarct delivery accuracy of coloured fluospheres using sequential NOGA(®)- and LARCA-MRI-guided vs. LARCA-CT- and LARCA-(18)F-FDG-PET/CT-guided intramyocardial injections. MRI after 6 weeks revealed significant left ventricular (LV) functional impairment and remodelling (LVEF 31 ± 3%, LVEDV 178 ± 15 mL, infarct size 17 ± 2% LV mass). During NOGA(®)-procedures, three of five animals required DC-shock for major ventricular arrhythmias vs. one of ten during LARCA-procedures. Online procedure time was shorter for LARCA than NOGA(®) (77 ± 6 vs. 130 ± 3 min, P < 0.0001). Absolute distance of injection spots to the infarct border was similar for LARCA-MRI (4.8 ± 0.5 mm) and NOGA(®) (5.4 ± 0.5 mm). LARCA-CT-integration allowed closer approximation of the targeted border zone than LARCA-PET (4.0 ± 0.5 mm vs. 6.2 ± 0.6 mm, P < 0.05). Three-dimensional rotational angiography fused with multimodal imaging offers a new, cost-effective, and safe strategy to guide intramyocardial injections. Endoventricular procedure times and arrhythmias compare favourably to NOGA(®), without compromising injection accuracy. LARCA-based fusion imaging is a promising enabling technology for cardiac biological therapies. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2014. For permissions please email: journals.permissions@oup.com.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ince-Cushman, A.; Rice, J. E.; Reinke, M. L.
2008-10-15
The use of high resolution x-ray crystal spectrometers to diagnose fusion plasmas has been limited by the poor spatial localization associated with chord-integrated measurements. Taking advantage of a new x-ray imaging spectrometer concept [M. Bitter et al., Rev. Sci. Instrum. 75, 3660 (2004)], and improvements in x-ray detector technology [Ch. Broennimann et al., J. Synchrotron Radiat. 13, 120 (2006)], a spatially resolving high resolution x-ray spectrometer has been built and installed on the Alcator C-Mod tokamak. This instrument utilizes a spherically bent quartz crystal and a set of two-dimensional x-ray detectors arranged in the Johann configuration [H. H. Johann, Z. Phys. 69, 185 (1931)] to image the entire plasma cross section with a spatial resolution of about 1 cm. The spectrometer was designed to measure line emission from H-like and He-like argon in the wavelength range between 3.7 and 4.0 Å with a resolving power of approximately 10 000 at frame rates up to 200 Hz. Using spectral tomographic techniques [I. Condrea, Phys. Plasmas 11, 2427 (2004)] the line-integrated spectra can be inverted to infer profiles of impurity emissivity, velocity, and temperature. From these quantities it is then possible to calculate impurity density and electron temperature profiles. An overview of the instrument, analysis techniques, and example profiles are presented.
A New Approach to Image Fusion Based on Cokriging
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; LeMoigne, Jacqueline; Mount, David M.; Morisette, Jeffrey T.
2005-01-01
We consider the image fusion problem involving remotely sensed data, and introduce cokriging as a method to perform fusion. We investigate the advantages of fusing Hyperion with ALI. The evaluation is performed by comparing the classification of the fused data with that of the input images and by calculating well-chosen quantitative fusion quality metrics. We consider the Invasive Species Forecasting System (ISFS) project as our fusion application. The fusion of ALI with Hyperion data is studied using PCA and wavelet-based fusion. We then propose a geostatistics-based interpolation method called cokriging as a new approach for image fusion.
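A minimal numerical sketch of the cokriging idea under assumed covariance models (the simple-cokriging form, the use of sample means, and all names below are illustrative, not the paper's implementation): primary and secondary samples are stacked, a joint covariance matrix is assembled, and one linear solve yields the estimation weights.

```python
import numpy as np

def simple_cokriging(xy1, z1, xy2, z2, s0, cov11, cov22, cov12):
    """Estimate the primary variable at location s0 from primary samples
    (xy1, z1) and secondary samples (xy2, z2). cov11/cov22/cov12 are assumed
    isotropic (cross-)covariance functions of distance, e.g.
    cov11 = lambda h: np.exp(-h / 3.0)."""
    pts = np.vstack([xy1, xy2])
    z = np.concatenate([z1 - z1.mean(), z2 - z2.mean()])
    n1 = len(z1)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    C = np.empty_like(d)
    C[:n1, :n1] = cov11(d[:n1, :n1])   # primary-primary covariances
    C[n1:, n1:] = cov22(d[n1:, n1:])   # secondary-secondary covariances
    C[:n1, n1:] = cov12(d[:n1, n1:])   # cross-covariances
    C[n1:, :n1] = C[:n1, n1:].T
    d0 = np.linalg.norm(pts - s0, axis=-1)
    c0 = np.concatenate([cov11(d0[:n1]), cov12(d0[n1:])])
    w = np.linalg.solve(C, c0)         # cokriging weights
    return z1.mean() + w @ z
```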
NASA Astrophysics Data System (ADS)
Ai, Yan-Ting; Guan, Jiao-Yue; Fei, Cheng-Wei; Tian, Jing; Zhang, Feng-Ling
2017-05-01
To monitor the operating status of rolling bearings with casings efficiently and accurately in real time, a fusion method based on n-dimensional characteristic parameter distance (n-DCPD) was proposed for rolling bearing fault diagnosis using two types of signals: vibration and acoustic emission. The n-DCPD was built on four information entropies (singular spectrum entropy in the time domain, power spectrum entropy in the frequency domain, and wavelet space characteristic spectrum entropy and wavelet energy spectrum entropy in the time-frequency domain), and the basic idea of the fusion information entropy fault diagnosis method with n-DCPD was presented. On a rotor simulation test rig, the vibration and acoustic emission signals of six rolling bearing conditions (ball fault, inner race fault, outer race fault, inner race-ball faults, inner-outer race faults, and normal) were collected under different operating conditions, with emphasis on rotation speeds from 800 rpm to 2000 rpm. Using the proposed fusion information entropy method with n-DCPD, the diagnosis of rolling bearing faults was completed. The fault diagnosis results show that the fusion entropy method achieves high precision in recognizing rolling bearing faults. This study provides a novel and useful methodology for the fault diagnosis of aeroengine rolling bearings.
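Two of the four entropies are easy to sketch. The snippet below computes a power-spectrum entropy and a singular-spectrum entropy for a 1-D signal, and a plain Euclidean n-DCPD between a measured feature vector and a fault-template vector; the exact normalization of the paper's n-DCPD may differ, so treat this as an assumed form.

```python
import numpy as np

def power_spectrum_entropy(x):
    # frequency-domain information entropy of a 1-D signal
    p = np.abs(np.fft.rfft(x)) ** 2
    p = p / p.sum()
    return -np.sum(p * np.log(p + 1e-12))

def singular_spectrum_entropy(x, m=10):
    # time-domain entropy from singular values of the trajectory matrix
    traj = np.lib.stride_tricks.sliding_window_view(x, m)
    s = np.linalg.svd(traj, compute_uv=False)
    p = s / s.sum()
    return -np.sum(p * np.log(p + 1e-12))

def n_dcpd(measured, template):
    # n-dimensional characteristic parameter distance: the smallest
    # distance over fault templates identifies the fault class
    return np.linalg.norm(np.asarray(measured) - np.asarray(template))
```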
NASA Astrophysics Data System (ADS)
Perkins, L. J.; Ho, D. D.-M.; Logan, B. G.; Zimmerman, G. B.; Rhodes, M. A.; Strozzi, D. J.; Blackfield, D. T.; Hawkins, S. A.
2017-06-01
We examine the potential that imposed magnetic fields of tens of Tesla that increase to greater than 10 kT (100 MGauss) under implosion compression may relax the conditions required for ignition and propagating burn in indirect-drive inertial confinement fusion (ICF) targets. This may allow the attainment of ignition, or at least significant fusion energy yields, in presently performing ICF targets on the National Ignition Facility (NIF) that today are sub-marginal for thermonuclear burn through adverse hydrodynamic conditions at stagnation [Doeppner et al., Phys. Rev. Lett. 115, 055001 (2015)]. Results of detailed two-dimensional radiation-hydrodynamic-burn simulations applied to NIF capsule implosions with low-mode shape perturbations and residual kinetic energy loss indicate that such compressed fields may increase the probability for ignition through range reduction of fusion alpha particles, suppression of electron heat conduction, and potential stabilization of higher-mode Rayleigh-Taylor instabilities. Optimum initial applied fields are found to be around 50 T. Given that the full plasma structure at capsule stagnation may be governed by three-dimensional resistive magneto-hydrodynamics, the formation of closed magnetic field lines might further augment ignition prospects. Experiments are now required to further assess the potential of applied magnetic fields to ICF ignition and burn on NIF.
Numerical modeling of the sensitivity of x-ray driven implosions to low-mode flux asymmetries.
Scott, R H H; Clark, D S; Bradley, D K; Callahan, D A; Edwards, M J; Haan, S W; Jones, O S; Spears, B K; Marinak, M M; Town, R P J; Norreys, P A; Suter, L J
2013-02-15
The sensitivity of inertial confinement fusion implosions, of the type performed on the National Ignition Facility (NIF) [1], to low-mode flux asymmetries is investigated numerically. It is shown that large-amplitude, low-order mode shapes (Legendre polynomial P(4)) resulting from low-order flux asymmetries cause spatial variations in capsule and fuel momentum that prevent the deuterium and tritium (DT) "ice" layer from being decelerated uniformly by the hot spot pressure. This reduces the transfer of implosion kinetic energy to internal energy of the central hot spot, thus reducing the neutron yield. Furthermore, synthetic gated x-ray images of the hot spot self-emission indicate that P(4) shapes may be unquantifiable for DT layered capsules. Instead the positive P(4) asymmetry "aliases" itself as an oblate P(2) in the x-ray images. Correction of this apparent P(2) distortion can further distort the implosion while creating a round x-ray image. Long wavelength asymmetries may be playing a significant role in the observed yield reduction of NIF DT implosions relative to detailed postshot two-dimensional simulations.
Distributed memory approaches for robotic neural controllers
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
1990-01-01
The suitability of two varieties of distributed memory neural networks as trainable controllers for a simulated robotics task is explored. The task requires that two cameras observe an arbitrary target point in space. Coordinates of the target on the camera image planes are passed to a neural controller, which must learn to solve the inverse kinematics of a manipulator with one revolute and two prismatic joints. Two new network designs are evaluated. The first, radial basis sparse distributed memory (RBSDM), approximates functional mappings as sums of multivariate Gaussians centered around previously learned patterns. The second network type involves variations of adaptive vector quantizers or self-organizing maps. In these networks, random N-dimensional points are given local connectivities. They are then exposed to training patterns and readjust their locations based on a nearest-neighbor rule. Both approaches are tested on their ability to interpolate manipulator joint coordinates for simulated arm movement while simultaneously performing stereo fusion of the camera data. Comparisons are made with classical k-nearest-neighbor pattern recognition techniques.
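A toy version of the RBSDM mapping described above, with the kernel width and the normalization as assumptions: the response to a query is a Gaussian-weighted sum over previously stored pattern/response pairs.

```python
import numpy as np

def rbsdm_predict(patterns, responses, query, sigma=0.5):
    # sum of multivariate Gaussians centered on previously learned patterns
    d2 = np.sum((patterns - query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w @ responses / (w.sum() + 1e-12)

# e.g. patterns: target coordinates on both camera image planes stacked into
# one vector per example; responses: the corresponding manipulator joint
# coordinates learned during training.
```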
Qi, Shile; Calhoun, Vince D.; van Erp, Theo G. M.; Bustillo, Juan; Damaraju, Eswar; Turner, Jessica A.; Du, Yuhui; Chen, Jiayu; Yu, Qingbao; Mathalon, Daniel H.; Ford, Judith M.; Voyvodic, James; Mueller, Bryon A.; Belger, Aysenil; McEwen, Sarah; Potkin, Steven G.; Preda, Adrian; Jiang, Tianzi
2017-01-01
Multimodal fusion is an effective approach for taking advantage of cross-information among multiple imaging datasets to better understand brain diseases. However, most current fusion approaches are blind, adopting no prior information. There is increasing interest in uncovering the neurocognitive mapping of specific behavioral measures onto enriched brain imaging data; hence, a supervised, goal-directed model that uses a priori information as a reference to guide multimodal data fusion is needed and a natural option. Here we propose a fusion-with-reference model, called "multi-site canonical correlation analysis with reference plus joint independent component analysis" (MCCAR+jICA), which can precisely identify co-varying multimodal imaging patterns closely related to reference information, such as cognitive scores. In a 3-way fusion simulation, the proposed method was compared with its alternatives on the estimation accuracy of both target component decomposition and modality linkage detection; MCCAR+jICA outperformed the others with higher precision. In human imaging data, working memory performance was used as a reference to investigate the covarying functional and structural brain patterns among 3 modalities and how they are impaired in schizophrenia. Two independent cohorts (294 and 83 subjects, respectively) were used. Interestingly, similar brain maps were identified between the two cohorts, with substantial overlap in the executive control networks in fMRI, the salience network in sMRI, and major white matter tracts in dMRI. These regions have been linked with working memory deficits in schizophrenia in multiple reports, and MCCAR+jICA further verified them in a repeatable, joint manner, demonstrating the potential of such results to identify neuromarkers for mental disorders. PMID:28708547
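The reference-guided idea can be illustrated, very loosely, with plain two-modality CCA: extract covarying components, then rank them by correlation with the reference score. MCCAR+jICA instead optimizes the reference correlation jointly over three or more modalities, so everything below, including the toy data, is an illustration only.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 120
ref = rng.normal(size=n)                       # e.g. working-memory scores
# toy modalities sharing one component tied to the reference
X = np.outer(ref, rng.normal(size=50)) + rng.normal(size=(n, 50))
Y = np.outer(ref, rng.normal(size=40)) + rng.normal(size=(n, 40))

U, V = CCA(n_components=3).fit(X, Y).transform(X, Y)
# rank covarying component pairs by their link to the reference measure
scores = [abs(np.corrcoef(ref, U[:, k])[0, 1]) for k in range(3)]
print(np.argsort(scores)[::-1], scores)
```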
Gradient-based multiresolution image fusion.
Petrović, Vladimir S; Xydeas, Costas S
2004-02-01
A novel approach to multiresolution signal-level image fusion is presented for accurately transferring visual information from any number of input image signals into a single fused image without loss of information or the introduction of distortion. The proposed system uses a "fuse-then-decompose" technique realized through a novel fusion/decomposition system architecture. In particular, information fusion is performed on a multiresolution gradient map representation domain of image signal information. At each resolution, input images are represented as gradient maps and combined to produce new, fused gradient maps. The fused gradient map signals are processed, using gradient filters derived from high-pass quadrature mirror filters, to yield a fused multiresolution pyramid representation. The fused output image is obtained by applying, on the fused pyramid, a reconstruction process analogous to that of the conventional discrete wavelet transform. This new gradient fusion significantly reduces the amount of distortion artefacts and the loss of contrast information usually observed in fused images obtained from conventional multiresolution fusion schemes. This is because fusion in the gradient map domain significantly improves the reliability of the feature selection and information fusion processes. Fusion performance is evaluated through informal visual inspection and subjective psychometric preference tests, as well as objective fusion performance measurements. Results clearly demonstrate the superiority of this new approach when compared to conventional fusion systems.
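The per-level combination rule admits a compact sketch. A common "maximum gradient magnitude" selection is assumed here; the paper's full system adds QMF-derived gradient filters and pyramid reconstruction, which are omitted.

```python
import numpy as np

def fuse_gradient_maps(images):
    # represent each input as a gradient map and keep, per pixel,
    # the gradient vector with the largest magnitude
    grads = np.stack([np.stack(np.gradient(im.astype(float)), axis=-1)
                      for im in images])            # (k, H, W, 2)
    mags = np.linalg.norm(grads, axis=-1)           # (k, H, W)
    idx = np.argmax(mags, axis=0)                   # winning input per pixel
    return np.take_along_axis(grads, idx[None, ..., None], axis=0)[0]
```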
Qi, Rui; Zhou, Xiangping; Yu, Jianqun; Li, Zhenlin
2014-04-01
This study aims to explore the inferior adhesion of the renal fascia (RF) and the inferior connectivity of the perirenal spaces (PS) with multidetector computed tomography (MDCT), and to investigate the diagnostic value of CT for showing this anatomy. From May to July 2012, eighty-two patients with acute pancreatitis who presented at our hospital were enrolled in this study and underwent contrast-enhanced CT scans. All the image data were used to perform three-dimensional reconstruction to show the inferior attachment of the RF and the inferior connectivity of the PS. Fusion of the anterior renal fascia (ARF) and posterior renal fascia (PRF) near the plane of the iliac fossa was found on the left in 71.95% (59/82) of cases, and on the right in 75.61% (62/82). In these cases, the bilateral perirenal spaces and the anterior and posterior pararenal spaces were not found to be connected with each other. No fusion of the ARF and PRF below the level of the kidneys occurred on the left side in 28.05% (23/82) of cases and on the right side in 24.39% (20/82). In these patients, the PS extended to the extraperitoneal space of the pelvic cavity and further to the inguinal region, and the bilateral anterior and posterior pararenal spaces were not found to be connected with each other. Three-dimensional reconstruction of contrast-enhanced MDCT could be a valuable procedure for depicting the inferior attachment of the RF and the inferior connectivity of the PS.
Insights into bunyavirus architecture from electron cryotomography of Uukuniemi virus
Överby, A. K.; Pettersson, R. F.; Grünewald, K.; Huiskonen, J. T.
2008-01-01
Bunyaviridae is a large family of viruses that have gained attention as "emerging viruses" because many members cause serious disease in humans, with an increasing number of outbreaks. These negative-strand RNA viruses possess a membrane envelope covered by glycoproteins. The virions are pleiomorphic and thus have not been amenable to structural characterization using common techniques that involve averaging of electron microscopic images. Here, we determined the three-dimensional structure of a member of the Bunyaviridae family by using electron cryotomography. The genome, incorporated as a complex with the nucleoprotein inside the virions, was seen as a thread-like structure partially interacting with the viral membrane. Although no ordered nucleocapsid was observed, lateral interactions between the two membrane glycoproteins determine the structure of the viral particles. In the most regular particles, the glycoprotein protrusions, or "spikes," were seen to be arranged on an icosahedral lattice with T = 12 triangulation, an arrangement that had not previously been described for a virus. Two distinctly different spike conformations were observed, which were shown to depend on pH. This finding is reminiscent of the fusion proteins of alpha-, flavi-, and influenza viruses, in which conformational changes occur in the low pH of the endosome to facilitate fusion of the viral and host membranes during viral entry. PMID:18272496
NASA Technical Reports Server (NTRS)
McDowell, Mark (Inventor); Glasgow, Thomas K. (Inventor)
1999-01-01
A system and a method for measuring three-dimensional velocities at a plurality of points in a fluid employing at least two cameras positioned approximately perpendicular to one another. The cameras are calibrated to accurately represent image coordinates in a world coordinate system. The two-dimensional views of the cameras are recorded for image processing and centroid coordinate determination. Any overlapping particle clusters are decomposed into constituent centroids. The tracer particles are tracked on a two-dimensional basis and then stereo-matched to obtain three-dimensional locations of the particles as a function of time so that velocities can be measured therefrom. The stereo imaging velocimetry technique of the present invention provides a full-field, quantitative, three-dimensional map of any optically transparent fluid which is seeded with tracer particles.
Fusion of multi-spectral and panchromatic images based on 2D-PWVD and SSIM
NASA Astrophysics Data System (ADS)
Tan, Dongjie; Liu, Yi; Hou, Ruonan; Xue, Bindang
2016-03-01
A combined method using the 2D pseudo Wigner-Ville distribution (2D-PWVD) and the structural similarity (SSIM) index is proposed for fusion of a low resolution multi-spectral (MS) image and a high resolution panchromatic (PAN) image. First, the intensity component of the multi-spectral image is extracted with the generalized IHS (GIHS) transform. Then, the spectrum diagrams of the intensity component of the multi-spectral image and of the panchromatic image are obtained with the 2D-PWVD. Different fusion rules are designed for different frequency information in the spectrum diagrams. The SSIM index is used to evaluate the high frequency information of the spectrum diagrams, adaptively assigning the weights used in fusion. After the new spectrum diagram is obtained according to the fusion rules, the final fused image is produced by the inverse 2D-PWVD and inverse GIHS transform. Experimental results show that the proposed method can obtain high quality fusion images.
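The GIHS bookends of this pipeline are standard and easy to sketch; the 2D-PWVD spectrum fusion and the SSIM-weighted rules in between are specific to the paper and omitted here. In the plain GIHS scheme, the injected detail is simply the difference between the PAN band and the MS intensity:

```python
import numpy as np

def gihs_pansharpen(ms, pan):
    # ms: (H, W, B) multispectral image, pan: (H, W) panchromatic band,
    # assumed co-registered and resampled to the same grid
    intensity = ms.mean(axis=-1)        # generalized IHS intensity component
    detail = pan - intensity            # high-frequency spatial detail
    return ms + detail[..., None]       # inject the detail into every band
```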
NASA Astrophysics Data System (ADS)
Naderi, D.; Pahlavani, M. R.; Alavi, S. A.
2013-05-01
Using the Langevin dynamical approach, the neutron multiplicity and the anisotropy of the angular distribution of fission fragments in heavy-ion fusion-fission reactions were calculated. We applied one- and two-dimensional Langevin equations to study the decay of a hot excited compound nucleus. The influence of the level-density parameter on the neutron multiplicity and the anisotropy of the angular distribution of fission fragments was investigated. We used level-density parameters based on the liquid drop model with two different prescriptions, the Bartel approach and the Pomorska approach. Our calculations show that the anisotropy and neutron multiplicity are affected by the level-density parameter and the neck thickness. The calculations were performed for the 16O+208Pb and 20Ne+209Bi reactions. The results obtained with the two-dimensional Langevin equations and the level-density parameter based on the approach of Bartel and co-workers are in better agreement with experimental data.
2009-05-01
transport, and thermonuclear burn. Using FAST, three classes of shock-ignited targets were designed that achieve one-dimensional fusion-energy gains in the ... [figure residue removed; recoverable captions: Figure 1: results of one-dimensional simulations showing the fusion energy gain as a function of KrF laser energy (MJ) for three classes of targets; Figure 4: examples of fusion-energy gain contours for a shock-ignited target, (a) spike width: 160 ps, (b) spike power: 1530 TW] ... rises smoothly (according to a double power ...
Phase-sensitive two-dimensional neutron shearing interferometer and Hartmann sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Kevin
2015-12-08
A neutron imaging system detects both the phase shift and absorption of neutrons passing through an object. The neutron imaging system is based on either of two different neutron wavefront sensor techniques: 2-D shearing interferometry and Hartmann wavefront sensing. Both approaches measure an entire two-dimensional neutron complex field, including its amplitude and phase. Each measures the full-field, two-dimensional phase gradients and, concomitantly, the two-dimensional amplitude mapping, requiring only a single measurement.
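Both sensor types deliver two-dimensional phase gradients, from which the phase map itself can be recovered by least-squares integration. A standard Fourier-domain solver is sketched below; periodic boundary conditions are an assumption of this sketch, not of the instrument.

```python
import numpy as np

def integrate_gradients(gx, gy):
    # least-squares phase map from measured x/y gradients (Fourier method)
    H, W = gx.shape
    fx = np.fft.fftfreq(W)[None, :]
    fy = np.fft.fftfreq(H)[:, None]
    Gx, Gy = np.fft.fft2(gx), np.fft.fft2(gy)
    denom = 2j * np.pi * (fx ** 2 + fy ** 2)
    denom[0, 0] = 1.0                 # avoid 0/0; the piston term is arbitrary
    phi = (fx * Gx + fy * Gy) / denom
    phi[0, 0] = 0.0
    return np.fft.ifft2(phi).real
```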
Yang, Hai-song; Chen, De-yu; Lu, Xu-hua; Yang, Li-li; Yan, Wang-jun; Yuan, Wen; Chen, Yu
2010-03-01
Ossification of the posterior longitudinal ligament (OPLL) is a common spinal disorder that presents with or without cervical myelopathy. Furthermore, there is evidence suggesting that OPLL often coexists with cervical disc hernia (CDH), and that the latter is the more important compression factor. To raise awareness of CDH in OPLL among spinal surgeons, we performed a retrospective study of 142 patients with radiologically proven OPLL who had received surgery between January 2004 and January 2008 in our hospital. Plain radiography, three-dimensional computed tomography reconstruction (3D CT), and magnetic resonance imaging (MRI) of the cervical spine were all performed. Twenty-six patients with obvious CDH (15 of segmental type, nine of mixed type, two of continuous type) were selected via clinical and radiographic features and intraoperative findings. By MRI, the most commonly involved level was C5/6, followed by C3/4, C4/5, and C6/7. The areas of greatest spinal cord compression were at the disc levels because of herniated cervical discs. Eight patients were decompressed via anterior cervical discectomy and fusion (ACDF), 13 patients via anterior cervical corpectomy and fusion (ACCF), and five patients via ACDF combined with posterior laminectomy and fusion. The outcomes were all favorable. In conclusion, surgeons should consider the potential for CDH when performing spinal cord decompression and deciding the surgical approach in patients presenting with OPLL.
Jun, Yong Woong; Wang, Taejun; Hwang, Sekyu; Kim, Dokyoung; Ma, Donghee; Kim, Ki Hean; Kim, Sungjee; Jung, Junyang; Ahn, Kyo Han
2018-06-05
Vesicles exchange their contents through two membrane fusion processes: kiss-and-run and full-collapse fusion. Indirect observation of these fusion processes using artificial vesicles has enhanced our understanding of the molecular mechanisms involved. Direct observation of the fusion processes in a real biological system, however, remains a challenge owing to many technical obstacles. We disclose a ratiometric two-photon probe offering real-time tracking of lysosomal ATP with quantitative information for the first time. By applying the probe in a two-photon live-cell imaging technique, the lysosomal membrane fusion process in cells has been directly observed along with the concentration of the lysosomal ATP content. Results show that the kiss-and-run process between lysosomes proceeds through repeated transient interactions with gradual content mixing, whereas the full-collapse fusion process occurs at once. Furthermore, it is confirmed that both fusion processes proceed with conservation of the contents. Such a small-molecule probe exerts minimal disturbance and hence has potential for studying various biological processes associated with lysosomal ATP. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
[Research progress of multi-modal medical image fusion and recognition].
Zhou, Tao; Lu, Huiling; Chen, Zhiqiang; Ma, Jingxian
2013-10-01
Medical image fusion and recognition have a wide range of applications, such as lesion localization, cancer staging, and treatment effect assessment. Multi-modal medical image fusion and recognition are analyzed and summarized in this paper. First, the problem of multi-modal medical image fusion and recognition is discussed, along with its advantages and key steps. Second, three fusion strategies are reviewed from the algorithmic point of view, and four fusion-recognition structures are discussed. Third, difficulties, challenges, and possible future research directions are discussed.
Self characterization of a coded aperture array for neutron source imaging
Volegov, P. L.; Danly, C. R.; Fittinghoff, D. N.; ...
2014-12-15
The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning DT plasma during the stagnation stage of ICF implosions. Since the neutron source is small (~100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.
A novel method to acquire 3D data from serial 2D images of a dental cast
NASA Astrophysics Data System (ADS)
Yi, Yaxing; Li, Zhongke; Chen, Qi; Shao, Jun; Li, Xinshe; Liu, Zhiqin
2007-05-01
This paper introduces a newly developed method to acquire three-dimensional data from serial two-dimensional images of a dental cast. The system consists of a computer and a data acquisition device. The data acquisition device takes serial pictures of a dental cast; an artificial neural network translates the two-dimensional pictures into three-dimensional data; the three-dimensional image can then be reconstructed by the computer. Three-dimensional data acquisition of dental casts is the foundation of computer-aided diagnosis and treatment planning in orthodontics.
Extended depth of field integral imaging using multi-focus fusion
NASA Astrophysics Data System (ADS)
Piao, Yongri; Zhang, Miao; Wang, Xiaohui; Li, Peihua
2018-03-01
In this paper, we propose a new method for depth-of-field extension in integral imaging by applying image fusion to multi-focus elemental images. In the proposed method, a camera is translated on a 2D grid to take multi-focus elemental images by sweeping the focus plane across the scene. Simply applying an image fusion method to elemental images holding rich parallax information does not work effectively, because accurate registration is a prerequisite for image fusion. To solve this problem, an elemental image generalization method is proposed. The aim of this generalization process is to geometrically align the objects in all elemental images so that the correct regions of the multi-focus elemental images can be extracted. The all-in-focus elemental images are then generated by fusing the generalized elemental images using a block-based fusion method. The experimental results demonstrate that the depth of field of the synthetic aperture integral imaging system is extended by combining the generalization method with image fusion of the multi-focus elemental images.
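The block-based fusion step admits a compact sketch; local variance is one common sharpness score, and the paper's exact block rule may differ. After generalization aligns the elemental images, each output block is copied from the input that is sharpest there.

```python
import numpy as np

def block_fuse(images, block=16):
    # images: list of geometrically aligned multi-focus elemental images
    H, W = images[0].shape
    out = np.empty((H, W), dtype=float)
    for y in range(0, H, block):
        for x in range(0, W, block):
            tiles = [im[y:y + block, x:x + block].astype(float) for im in images]
            # keep the block from the input with the highest local variance
            out[y:y + block, x:x + block] = max(tiles, key=np.var)
    return out
```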
Image-fusion of MR spectroscopic images for treatment planning of gliomas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang Jenghwa; Thakur, Sunitha; Perera, Gerard
2006-01-15
¹H magnetic resonance spectroscopic imaging (MRSI) can improve the accuracy of target delineation for gliomas, but it lacks the anatomic resolution needed for image fusion. This paper presents a simple protocol for fusing simulation computed tomography (CT) and MRSI images for glioma intensity-modulated radiotherapy (IMRT), including a retrospective study of 12 patients. Each patient first underwent whole-brain axial fluid-attenuated-inversion-recovery (FLAIR) MRI (3 mm slice thickness, no spacing), followed by three-dimensional (3D) MRSI measurements (TE/TR: 144/1000 ms) of a user-specified volume encompassing the extent of the tumor. The nominal voxel size of MRSI ranged from 8×8×10 mm³ to 12×12×10 mm³. A system was developed to grade the tumor using the choline-to-creatine (Cho/Cr) ratios from each MRSI voxel. The merged MRSI images were then generated by replacing the Cho/Cr value of each MRSI voxel with intensities according to the Cho/Cr grades, and resampling the poorer-resolution Cho/Cr map into the higher-resolution FLAIR image space. The FUNCTOOL processing software was also used to create the screen-dumped MRSI images in which these data were overlaid with each FLAIR MRI image. The screen-dumped MRSI images were manually translated and fused with the FLAIR MRI images. Since the merged MRSI images were intrinsically fused with the FLAIR MRI images, they were also registered with the screen-dumped MRSI images. The position of the MRSI volume on the merged MRSI images was compared with that of the screen-dumped MRSI images and was shifted until agreement was within a predetermined tolerance. Three clinical target volumes (CTVs) were then contoured on the FLAIR MRI images corresponding to the Cho/Cr grades. Finally, the FLAIR MRI images were fused with the simulation CT images using a mutual-information algorithm, yielding an IMRT plan that simultaneously delivers three different dose levels to the three CTVs. The image-fusion protocol was tested on 12 (six high-grade and six low-grade) glioma patients. The average agreement of the MRSI volume position on the screen-dumped MRSI images and the merged MRSI images was 0.29 mm with a standard deviation of 0.07 mm. Of all the voxels with Cho/Cr grade one or above, the distribution of Cho/Cr grade was found to correlate with the glioma grade from pathologic findings and is consistent with literature results indicating Cho/Cr elevation as a marker for malignancy. In conclusion, an image-fusion protocol was developed that successfully incorporates MRSI information into the IMRT treatment plan for glioma.
Multi-modality image fusion based on enhanced fuzzy radial basis function neural networks.
Chao, Zhen; Kim, Dohyeon; Kim, Hee-Joung
2018-04-01
In clinical applications, single-modality images do not provide sufficient diagnostic information. Therefore, it is necessary to combine the advantages or complementarities of different modalities of images. Recently, neural network techniques have been applied to medical image fusion by many researchers, but there are still many deficiencies. In this study, we propose a novel fusion method to combine multi-modality medical images based on the enhanced fuzzy radial basis function neural network (Fuzzy-RBFNN), which includes five layers: input, fuzzy partition, front combination, inference, and output. Moreover, we propose a hybrid of the gravitational search algorithm (GSA) and the error back-propagation algorithm (EBPA) to train the network and update its parameters. Two different patterns of images are used as inputs to the neural network, and the output is the fused image. A comparison with conventional fusion methods and another neural network method, through subjective observation and objective evaluation indexes, reveals that the proposed method effectively synthesizes the information of the input images and achieves better results. Meanwhile, we also trained the network using the EBPA and GSA individually. The results reveal that the hybrid EBPGSA not only outperformed both the EBPA and GSA but also trained the neural network more accurately according to the same evaluation indexes. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Hyperspectral and LiDAR remote sensing of fire fuels in Hawaii Volcanoes National Park.
Varga, Timothy A; Asner, Gregory P
2008-04-01
Alien invasive grasses threaten to transform Hawaiian ecosystems through the alteration of ecosystem dynamics, especially the creation or intensification of a fire cycle. Across sub-montane ecosystems of Hawaii Volcanoes National Park on Hawaii Island, we quantified fine fuels and fire spread potential of invasive grasses using a combination of airborne hyperspectral and light detection and ranging (LiDAR) measurements. Across a gradient from forest to savanna to shrubland, automated mixture analysis of hyperspectral data provided spatially explicit fractional cover estimates of photosynthetic vegetation, non-photosynthetic vegetation, and bare substrate and shade. Small-footprint LiDAR provided measurements of vegetation height along this gradient of ecosystems. Through the fusion of hyperspectral and LiDAR data, a new fire fuel index (FFI) was developed to model the three-dimensional volume of grass fuels. Regionally, savanna ecosystems had the highest volumes of fire fuels, averaging 20% across the ecosystem and frequently filling all of the three-dimensional space represented by each image pixel. The forest and shrubland ecosystems had lower FFI values, averaging 4.4% and 8.4%, respectively. The results indicate that the fusion of hyperspectral and LiDAR remote sensing can provide unique information on the three-dimensional properties of ecosystems, their flammability, and the potential for fire spread.
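The abstract does not give the FFI formula, so the following is purely an assumed reading: FFI as the fraction of each pixel's three-dimensional column occupied by grass fuels, i.e., the photosynthetic-vegetation cover fraction from the hyperspectral unmixing times the LiDAR vegetation height normalized by an assumed full-column height.

```python
import numpy as np

def fire_fuel_index(pv_cover, veg_height, max_height=2.0):
    # pv_cover: photosynthetic-vegetation fraction per pixel (0..1)
    # veg_height: LiDAR vegetation height per pixel, in meters
    # max_height: assumed height of a "full" grass-fuel column (hypothetical)
    return np.clip(pv_cover, 0.0, 1.0) * np.clip(veg_height / max_height, 0.0, 1.0)
```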
Classification Accuracy Increase Using Multisensor Data Fusion
NASA Astrophysics Data System (ADS)
Makarau, A.; Palubinskas, G.; Reinartz, P.
2011-09-01
The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, Quickbird, GeoEye-1, etc.), but for classification purposes the number of bands is limited in comparison to full spectral imaging. These limitations may lead to confusion of materials such as different roofs, pavements, roads, etc., and therefore may result in wrong interpretation and use of classification products. Employment of hyperspectral data is another solution, but their low spatial resolution (compared to multispectral data) restricts their usage for many applications. Further improvement can be achieved by fusion of multisensor data, since this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed for automatic classification, interpretation, and change detection. In this paper we present an approach for fusing very high resolution SAR and multispectral data for automatic classification in urban areas. Single polarization TerraSAR-X (SpotLight mode) and multispectral data are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), unsupervised clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network). This framework allows a relevant way of combining multisource data following consensus theory. The classification is not influenced by the limitations of dimensionality, and the calculation complexity primarily depends on the dimensionality reduction step. Fusion of single polarization TerraSAR-X, WorldView-2 (VNIR or full set), and Digital Surface Model (DSM) data allows different types of urban objects to be classified into predefined classes of interest with increased accuracy. A comparison to classification results of WorldView-2 multispectral data (8 spectral bands) is provided, and the numerical evaluation of the method against other established methods illustrates the advantage in classification accuracy for many classes such as buildings, low vegetation, sport objects, forest, roads, rail roads, etc.
Zhang, Jinpeng; Zhang, Lichi; Xiang, Lei; Shao, Yeqin; Wu, Guorong; Zhou, Xiaodong; Shen, Dinggang; Wang, Qian
2017-01-01
It is fundamentally important to fuse the brain atlas from magnetic resonance (MR) images for many imaging-based studies. Most existing works focus on fusing the atlases from high-quality MR images. However, for low-quality diagnostic images (i.e., with high inter-slice thickness), the problem of atlas fusion has not been addressed yet. In this paper, we intend to fuse the brain atlas from the high-thickness diagnostic MR images that are prevalent for clinical routines. The main idea of our work is to extend the conventional groupwise registration by incorporating a novel super-resolution strategy. The contribution of the proposed super-resolution framework is two-fold. First, each high-thickness subject image is reconstructed to be isotropic by the patch-based sparsity learning. Then, the reconstructed isotropic image is enhanced for better quality through the random-forest-based regression model. In this way, the images obtained by the super-resolution strategy can be fused together by applying the groupwise registration method to construct the required atlas. Our experiments have shown that the proposed framework can effectively solve the problem of atlas fusion from the low-quality brain MR images. PMID:29062159
Two-dimensional signal processing with application to image restoration
NASA Technical Reports Server (NTRS)
Assefi, T.
1974-01-01
A recursive technique for modeling and estimating a two-dimensional signal contaminated by noise is presented. A two-dimensional signal is assumed to be an undistorted picture, where the noise introduces the distortion. Both the signal and the noise are assumed to be wide-sense stationary processes with known statistics. Thus, to estimate the two-dimensional signal is to enhance the picture. The picture representing the two-dimensional signal is converted to one dimension by scanning the image horizontally one line at a time. The scanner output becomes a nonstationary random process due to the periodic nature of the scanner operation. Procedures to obtain a dynamical model corresponding to the autocorrelation function of the scanner output are derived. Utilizing the model, a discrete Kalman estimator is designed to enhance the image.
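A minimal sketch of the estimator described above, with an assumed first-order autoregressive signal model standing in for the model the paper derives from the scanner-output autocorrelation function: a scalar Kalman filter run along each scan line.

```python
import numpy as np

def kalman_scanline_denoise(img, a=0.95, q=0.1, r=1.0):
    # assumed AR(1) model: x[k+1] = a*x[k] + w (var q), z[k] = x[k] + v (var r)
    out = np.empty_like(img, dtype=float)
    for i, row in enumerate(img.astype(float)):
        xhat, p = row[0], r                   # initialize on the first sample
        for k, z in enumerate(row):
            xhat, p = a * xhat, a * a * p + q          # predict
            g = p / (p + r)                            # Kalman gain
            xhat, p = xhat + g * (z - xhat), (1 - g) * p  # update
            out[i, k] = xhat
    return out
```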
Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck
2008-04-10
One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems that use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems that work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps the active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using dynamic programming, in which image regions between laser patterns are matched pixel-by-pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms work in real applications, the sensor system is implemented on a robotic system, and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.
Applicability of common measures in multifocus image fusion comparison
NASA Astrophysics Data System (ADS)
Vajgl, Marek
2017-11-01
Image fusion is an image processing area aimed at fusing multiple input images to achieve an output image that is somehow better than each of the input ones. In the case of "multifocus fusion", the input images capture the same scene but differ in focus distance. The aim is to obtain an image that is sharp in all its areas. There are several different approaches and methods used to solve this problem; however, it is a common question which one is best. This work describes research covering the field of common measures, asking whether some of them can be used as quality measures for evaluating fusion results.
Newton, Peter O; Hahn, Gregory W; Fricka, Kevin B; Wenger, Dennis R
2002-04-15
A retrospective radiographic review of 31 patients with congenital spine abnormalities who underwent conventional radiography and advanced imaging studies was conducted. To analyze the utility of three-dimensional computed tomography with multiplanar reformatted images for congenital spine anomalies, as compared with plain radiographs and axial two-dimensional computed tomography imaging. Conventional radiographic images of congenital spine disorders are often difficult to interpret because of the patient's small size, the complexity of the disorder, a deformity not in the plane of the radiographs, superimposed structures, and difficulty in forming a mental three-dimensional image. Multiplanar reformatted and three-dimensional computed tomographic imaging offers many potential advantages for defining congenital spine anomalies, including visualization of the deformity in any plane, from any angle, with the overlying structures subtracted. The imaging studies of patients who had undergone three-dimensional computed tomography for congenital deformities of the spine between 1992 and 1998 were reviewed (31 cases). All plain radiographs and axial two-dimensional computed tomography images performed before the three-dimensional computed tomography were reviewed and the findings documented. This was repeated for the three-dimensional reconstructions and, when available, the multiplanar reformatted images (15 cases). In each case, the utility of the advanced imaging was graded as one of the following: Grade A (substantial new information obtained), Grade B (confirmatory with improved visualization and understanding of the deformity), and Grade C (no added useful information obtained). In 17 of 31 cases, the multiplanar reformatted and three-dimensional images allowed identification of unrecognized malformations. In nine additional cases, the advanced imaging was helpful in better visualizing and understanding previously identified deformities. In five cases, no new information was gained. The standard and curved multiplanar reformatted images were best for defining the occiput-C1-C2 anatomy and the extent of segmentation defects. The curved multiplanar reformatted images were especially helpful in keeping the spine from "coming in" and "going out" of the plane of the image when there was significant spine deformity in the sagittal or coronal plane. The three-dimensional reconstructions proved valuable in defining failures of formation. Advanced computed tomography imaging (three-dimensional computed tomography and curved/standard multiplanar reformatted images) allows better definition of congenital spine anomalies. More than 50% of the cases showed additional abnormalities not appreciated on plain radiographs or axial two-dimensional computed tomography images. Curved multiplanar reformatted images allowed imaging in the coronal and sagittal planes of the entire deformity.
NASA Astrophysics Data System (ADS)
Huang, Yadong; Gao, Kun; Gong, Chen; Han, Lu; Guo, Yue
2016-03-01
During traditional multi-resolution infrared and visible image fusion processing, low contrast targets may be weakened and become inconspicuous because of opposite DN values in the source images. Therefore, a novel target pseudo-color enhanced image fusion algorithm based on a modified attention model and the fast discrete curvelet transform is proposed. The interesting target regions are extracted from the source images by introducing motion features gained from the modified attention model, and the source images are gray-fused in the curvelet domain via rules based on the physical characteristics of the sensors. The final fused image is obtained by mapping the extracted targets into the gray fusion result with appropriate pseudo-colors. The experiments show that the algorithm can highlight dim targets effectively and improve the SNR of the fused image.
A three-dimensional quality-guided phase unwrapping method for MR elastography
NASA Astrophysics Data System (ADS)
Wang, Huifang; Weaver, John B.; Perreard, Irina I.; Doyley, Marvin M.; Paulsen, Keith D.
2011-07-01
Magnetic resonance elastography (MRE) uses accumulated phases that are acquired at multiple, uniformly spaced relative phase offsets, to estimate harmonic motion information. Heavily wrapped phase occurs when the motion is large and unwrapping procedures are necessary to estimate the displacements required by MRE. Two unwrapping methods were developed and compared in this paper. The first method is a sequentially applied approach. The three-dimensional MRE phase image block for each slice was processed by two-dimensional unwrapping followed by a one-dimensional phase unwrapping approach along the phase-offset direction. This unwrapping approach generally works well for low noise data. However, there are still cases where the two-dimensional unwrapping method fails when noise is high. In this case, the baseline of the corrupted regions within an unwrapped image will not be consistent. Instead of separating the two-dimensional and one-dimensional unwrapping in a sequential approach, an interleaved three-dimensional quality-guided unwrapping method was developed to combine both the two-dimensional phase image continuity and one-dimensional harmonic motion information. The quality of one-dimensional harmonic motion unwrapping was used to guide the three-dimensional unwrapping procedures and it resulted in stronger guidance than in the sequential method. In this work, in vivo results generated by the two methods were compared.
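The one-dimensional step along the phase-offset direction is easy to sketch: at each pixel, the N accumulated phases sample one period of the harmonic motion, so after unwrapping along that axis, the first temporal Fourier coefficient gives the complex harmonic displacement. The array layout and scaling below are assumptions.

```python
import numpy as np

def harmonic_displacement(phase_stack):
    # phase_stack: (N, H, W), N uniformly spaced phase offsets over one period
    u = np.unwrap(phase_stack, axis=0)        # 1-D unwrap along the offset axis
    n = u.shape[0]
    k = np.exp(-2j * np.pi * np.arange(n) / n)
    # first temporal Fourier coefficient at every pixel (complex amplitude)
    return (2.0 / n) * np.tensordot(k, u, axes=(0, 0))
```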
Wang, Shun-Yi; Chen, Xian-Xia; Li, Yi; Zhang, Yu-Ying
2016-12-20
The arrival of the precision medicine plan brings new opportunities and challenges for patients undergoing precision diagnosis and treatment of malignant tumors. With the development of medical imaging, information from different imaging modalities can be integrated and comprehensively analyzed by image fusion systems. This review aimed to update the application of multimodality imaging fusion technology in the precise diagnosis and treatment of malignant tumors under the precision medicine plan. We introduced several multimodality imaging fusion technologies and their application to the diagnosis and treatment of malignant tumors in clinical practice. The data cited in this review were obtained mainly from the PubMed database from 1996 to 2016, using the keywords "precision medicine", "fusion imaging", "multimodality", and "tumor diagnosis and treatment". Original articles, clinical practice, reviews, and other relevant literature published in English were reviewed. Papers focusing on precision medicine, fusion imaging, multimodality, and tumor diagnosis and treatment were selected. Duplicated papers were excluded. Multimodality imaging fusion technology plays an important role in tumor diagnosis and treatment under the precision medicine plan, such as accurate localization, qualitative diagnosis, tumor staging, treatment plan design, and real-time intraoperative monitoring. Multimodality imaging fusion systems can provide more imaging information about tumors from different dimensions and angles, thereby offering strong technical support for the implementation of precision oncology. Under the precision medicine plan, personalized treatment of tumors is a distinct possibility. We believe that multimodality imaging fusion technology will find an increasingly wide application in clinical practice.
2013-10-01
Award Number: W81XWH-12-1-0597. TITLE: Parametric PET/MR Fusion Imaging to Differentiate Aggressive from Indolent Primary Prostate Cancer with Application for Image-Guided Prostate Cancer Biopsies. The study investigates whether fusion PET/MRI imaging with 18F-choline PET/CT and diffusion-weighted MRI can be successfully applied to target prostate ...
[Time consumption and quality of an automated fusion tool for SPECT and MRI images of the brain].
Fiedler, E; Platsch, G; Schwarz, A; Schmiedehausen, K; Tomandl, B; Huk, W; Rupprecht, Th; Rahn, N; Kuwert, T
2003-10-01
Although the fusion of images from different modalities may improve diagnostic accuracy, it is rarely used in clinical routine work due to logistic problems. We therefore evaluated the performance of, and time needed for, fusing MRI and SPECT images using dedicated semiautomated software. PATIENTS, MATERIAL AND METHOD: In 32 patients, regional cerebral blood flow was measured using (99m)Tc ethylcystein dimer (ECD) and the three-headed SPECT camera MultiSPECT 3. MRI scans of the brain were performed using either a 0.2 T Open or a 1.5 T Sonata scanner. Twelve of the MRI data sets were acquired using a 3D T1w MPRAGE sequence, 20 with a 2D acquisition technique and different echo sequences. Image fusion was performed on a Syngo workstation using an entropy-minimizing algorithm by an experienced user of the software, and the fusion results were classified. We measured the time needed for the automated fusion procedure and, where necessary, the time for manual realignment after automated but insufficient fusion. The mean time of the automated fusion procedure was 123 s; it was significantly shorter for the 2D than for the 3D MRI data sets. For four of the 2D data sets and two of the 3D data sets an optimal fit was reached using the automated approach; the remaining 26 data sets required manual correction. The sum of the time required for automated fusion and that needed for manual correction averaged 320 s (50-886 s). The fusion of 3D MRI data sets lasted significantly longer than that of the 2D MRI data. The automated fusion tool thus delivered an optimal fit in 20% of cases; in 80%, manual correction was necessary. Nevertheless, each of the 32 SPECT data sets could be merged with the corresponding MRI data in less than 15 min, which seems acceptable for clinical routine use.
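For readers without access to the Syngo workstation used above, a rough open-source analogue of automated SPECT-MRI rigid registration can be sketched with SimpleITK; the mutual-information metric plays the role of the entropy-minimizing criterion, and the file names are hypothetical.

```python
import SimpleITK as sitk

fixed = sitk.ReadImage("mri_t1.nii", sitk.sitkFloat32)      # hypothetical path
moving = sitk.ReadImage("spect_ecd.nii", sitk.sitkFloat32)  # hypothetical path

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY))

transform = reg.Execute(fixed, moving)
# Resample the SPECT volume onto the MRI grid for fused display.
fused = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```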
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohara-Imaizumi, Mica; Aoyagi, Kyota; Akimoto, Yoshihiro
To analyze the exocytosis of glucagon-like peptide-1 (GLP-1) granules, we imaged the motion of GLP-1 granules labeled with enhanced yellow fluorescent protein (Venus) fused to human growth hormone (hGH-Venus) in an enteroendocrine cell line, STC-1 cells, by total internal reflection fluorescent (TIRF) microscopy. We found glucose stimulation caused biphasic GLP-1 granule exocytosis: during the first phase, fusion events occurred from two types of granules (previously docked granules and newcomers), and thereafter continuous fusion was observed mostly from newcomers during the second phase. Closely similar to the insulin granule fusion from pancreatic β cells, the regulated biphasic exocytosis from two types of granules may be a common mechanism in glucose-evoked hormone release from endocrine cells.
Tissue fusion during early mammalian development requires crosstalk between multiple cell types. For example, paracrine signaling between palatal epithelial cells and palatal mesenchyme mediates the fusion of opposing palatal shelves during embryonic development. Fusion events in...
Shirvani, Atefeh; Jabbari, Keyvan; Amouheidari, Alireza
2017-01-01
Background: In radiation therapy, computed tomography (CT) simulation is used in treatment planning to define the tumor location. Magnetic resonance imaging (MRI)-CT image fusion leads to more efficient tumor contouring. This work sought to identify the practical issues in combining CT and MRI images in real clinical cases and to evaluate the effect of various factors on image fusion quality. Materials and Methods: The data of thirty patients with brain tumors were used for image fusion. The effect of several parameters on the possibility and quality of image fusion was evaluated, including the angle of the patient's head on the bed, slice thickness, slice gap, and height of the patient's head. Results: The dominant factor affecting image fusion quality was the difference in slice gap between the CT and MRI images (cor = 0.86, P < 0.005); the second was the angle between the CT and MRI slices in the sagittal plane (cor = 0.75, P < 0.005). In 20% of patients, this angle was more than 28° and image fusion was not efficient. In 17% of patients, the difference in slice gap between CT and MRI was >4 cm and image fusion quality was <25%. Conclusion: The most important problem in image fusion is that MRI images are taken without regard to their later use in treatment planning. In general, parameters related to patient position during MRI imaging should be chosen to be consistent with the patient's CT images in terms of location and angle. PMID:29387672
Real-time broadband terahertz spectroscopic imaging by using a high-sensitivity terahertz camera
NASA Astrophysics Data System (ADS)
Kanda, Natsuki; Konishi, Kuniaki; Nemoto, Natsuki; Midorikawa, Katsumi; Kuwata-Gonokami, Makoto
2017-02-01
Terahertz (THz) imaging has strong potential for applications because many molecules have fingerprint spectra in this frequency region. Spectroscopic imaging in the THz region is a promising technique to fully exploit this characteristic. However, the performance of conventional techniques is restricted by the requirement of multidimensional scanning, which implies an image data acquisition time of several minutes. In this study, we propose and demonstrate a novel broadband THz spectroscopic imaging method that enables real-time image acquisition using a high-sensitivity THz camera. By exploiting the two-dimensionality of the detector, a broadband multi-channel spectrometer near 1 THz was constructed with a reflection-type diffraction grating and a high-power THz source. To demonstrate the advantages of the developed technique, we performed molecule-specific imaging and high-speed acquisition of two-dimensional (2D) images. Two different sugar molecules (lactose and D-fructose) were identified by their fingerprint spectra, and their distributions in one-dimensional space were obtained at a fast video rate (15 frames per second). Combined with one-dimensional (1D) mechanical scanning of the sample, two-dimensional molecule-specific images can be obtained in only a few seconds. Our method can be applied in various important fields such as security and biomedicine.
[Binocular fusion method for prevention of myopia].
Xu, G D
1989-03-01
When looking at a far object with both eyes, relaxation of convergence and accommodation occurs, accompanied by binocular fusion. Using this phenomenon, a binocular target-fusion method was designed: the distance between the two targets is made equal to the distance between the two visual lines when looking at a far object. While the images of the targets are fused, accommodation and convergence relax concomitantly; correction of pseudomyopia and prevention of myopia are thereby achieved. Eye muscle exercises conducted by means of binocular fusion moved not only the far point farther but also the near point closer. Skiascopic examination carried out during binocular fusion showed that the degree of relaxed accommodation was 97.9% of that when looking at a distant object. These results indicate that the binocular fusion method has an excellent effect on the prevention of myopia. The method is simple and feasible, conforms to visual physiology, and thus can be widely adopted.
Infrared and visible image fusion with spectral graph wavelet transform.
Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Zong, Jing-guo
2015-09-01
Infrared and visible image fusion technique is a popular topic in image analysis because it can integrate complementary information and obtain reliable and accurate description of scenes. Multiscale transform theory as a signal representation method is widely used in image fusion. In this paper, a novel infrared and visible image fusion method is proposed based on spectral graph wavelet transform (SGWT) and bilateral filter. The main novelty of this study is that SGWT is used for image fusion. On the one hand, source images are decomposed by SGWT in its transform domain. The proposed approach not only effectively preserves the details of different source images, but also excellently represents the irregular areas of the source images. On the other hand, a novel weighted average method based on bilateral filter is proposed to fuse low- and high-frequency subbands by taking advantage of spatial consistency of natural images. Experimental results demonstrate that the proposed method outperforms seven recently proposed image fusion methods in terms of both visual effect and objective evaluation metrics.
Combined neutron and x-ray imaging at the National Ignition Facility (invited).
Danly, C R; Christensen, K; Fatherley, V E; Fittinghoff, D N; Grim, G P; Hibbard, R; Izumi, N; Jedlovec, D; Merrill, F E; Schmidt, D W; Simpson, R A; Skulina, K; Volegov, P L; Wilde, C H
2016-11-01
X-rays and neutrons are commonly used to image inertial confinement fusion implosions, providing key diagnostic information on the fuel assembly of burning deuterium-tritium (DT) fuel. The x-ray and neutron data are complementary, as neutrons and x-rays are produced by different physical processes, but typically these two images are collected from different views with no opportunity for co-registration. Neutrons are produced where the DT fusion fuel is burning; x-rays are produced in regions corresponding to high temperatures. Processes such as mix of ablator material into the hotspot can result in increased x-ray production and decreased neutron production, but can only be confidently observed if the two images are collected along the same line of sight and co-registered. To allow direct comparison of x-ray and neutron data, a combined neutron x-ray imaging system has been tested at Omega and installed at the National Ignition Facility to collect an x-ray image along the currently installed neutron imaging line of sight. This system is described, and initial results are presented along with prospects for definitive co-registration of the images.
Image matrix processor for fast multi-dimensional computations
Roberson, George P.; Skeate, Michael F.
1996-01-01
An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.
Augustine, Daniel; Yaqub, Mohammad; Szmigielski, Cezary; Lima, Eduardo; Petersen, Steffen E; Becher, Harald; Noble, J Alison; Leeson, Paul
2015-02-01
Three-dimensional fusion echocardiography (3DFE) is a novel postprocessing approach that utilizes imaging data acquired from multiple 3D acquisitions. We assessed image quality, endocardial border definition, and cardiac wall motion in patients using 3DFE compared to standard 3D images (3D) and results obtained with contrast echocardiography (2DC). Twenty-four patients (mean age 66.9 ± 13 years, 17 males, 7 females) undergoing 2DC had three noncontrast 3D apical volumes acquired at rest. Images were fused using an automated image fusion approach. Quality of the 3DFE was compared to both 3D and 2DC based on contrast-to-noise ratio (CNR) and endocardial border definition. We then compared the clinical wall-motion score index (WMSI) calculated from 3DFE and 3D to that obtained from 2DC images. Fused 3D volumes had significantly improved CNR (8.92 ± 1.35 vs. 6.59 ± 1.19, P < 0.0005) and segmental image quality (2.42 ± 0.99 vs. 1.93 ± 1.18, P < 0.005) compared to unfused 3D acquisitions. Levels achieved were closer to scores for 2D contrast images (CNR: 9.04 ± 2.21, P = 0.6; segmental image quality: 2.91 ± 0.37, P < 0.005). WMSI calculated from fused 3D volumes did not differ significantly from that obtained from 2D contrast echocardiography (1.06 ± 0.09 vs. 1.07 ± 0.15, P = 0.69), whereas unfused images produced significantly more variable results (1.19 ± 0.30). This was confirmed by a better intraclass correlation coefficient (ICC 0.72; 95% CI 0.32-0.88) relative to comparisons with unfused images (ICC 0.56; 95% CI 0.02-0.81). 3DFE significantly improves left ventricular image quality compared to unfused 3D in a patient population and allows noncontrast assessment of wall motion that approaches that achieved with 2D contrast echocardiography. © 2014, Wiley Periodicals, Inc.
Nonlinear Fusion of Multispectral Citrus Fruit Image Data with Information Contents.
Li, Peilin; Lee, Sang-Heon; Hsu, Hung-Yao; Park, Jae-Sam
2017-01-13
The main issue for vision-based automatic harvesting manipulators is the difficulty of correctly identifying fruit in images under natural lighting conditions. Most solutions have been based on a linear combination of color components in multispectral images, but the results have not reached a satisfactory level. To overcome this issue, this paper proposes a robust nonlinear fusion method that augments the original color image with a synchronized near-infrared image. The two images are fused with the Daubechies wavelet transform (DWT) in a multiscale decomposition approach. With the DWT, background noise is reduced and the necessary image features are enhanced by fusing the color contrast of the color components with the homogeneity of the near-infrared (NIR) component. The resulting fused color image is classified with a C-means algorithm for reconstruction. The performance of the proposed approach is evaluated with the statistical F-measure in comparison to existing methods using linear combinations of color components. The results show that fusing information from different spectral components enhances image quality and therefore improves the classification accuracy of citrus fruit identification under natural lighting conditions.
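A minimal pywt sketch of the wavelet-domain fusion step, assuming co-registered single-band visible and NIR arrays; a simple max-absolute rule for detail coefficients and averaging for the approximation stand in for the paper's contrast/homogeneity-based rules.

```python
import numpy as np
import pywt

def dwt_fuse(vis_band, nir, wavelet="db2", levels=3):
    """Fuse a visible-band image with a synchronized NIR image in the
    Daubechies wavelet domain (simplified fusion rules)."""
    cv = pywt.wavedec2(vis_band.astype(float), wavelet, level=levels)
    cn = pywt.wavedec2(nir.astype(float), wavelet, level=levels)
    fused = [(cv[0] + cn[0]) / 2.0]              # approximation: average
    for dv, dn in zip(cv[1:], cn[1:]):           # details: max |coefficient|
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(dv, dn)))
    return pywt.waverec2(fused, wavelet)
```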
Hoshi, K; Fujihara, Y; Mori, Y; Asawa, Y; Kanazawa, S; Nishizawa, S; Misawa, M; Numano, T; Inoue, H; Sakamoto, T; Watanabe, M; Komura, M; Takato, T
2016-09-01
In this study, the mutual fusion of chondrocyte pellets was promoted in order to produce large tissue-engineered cartilage with a three-dimensional (3D) shape. Five pellets of human auricular chondrocytes were first prepared and then incubated in an agarose mold. After 3 weeks of culture in matrix production-promoting medium under 5.78 g/cm² compression, the tissue-engineered cartilage showed sufficient mechanical strength. To confirm the usefulness of these methods, a transplantation experiment was performed using beagles. Tissue-engineered cartilage prepared with 50 pellets of beagle chondrocytes was transplanted subcutaneously into the cell-donor dog for 2 months. The tissue-engineered cartilage of the beagles maintained a rod-like shape, even after harvest, and histology showed fair cartilage regeneration. Furthermore, 20 pellets were made and placed on a beta-tricalcium phosphate prism, which was then incubated within the agarose mold for 3 weeks. The construct was transplanted into a bone/cartilage defect in the cell-donor beagle. After 2 months, bone and cartilage regeneration was identified on micro-computed tomography and magnetic resonance imaging. This approach of fusing small pellets into a large structure enabled the production of 3D tissue-engineered cartilage close to physiological cartilage tissue in its properties, without conventional polymer scaffolds. Copyright © 2016. Published by Elsevier Ltd.
Investigations of image fusion
NASA Astrophysics Data System (ADS)
Zhang, Zhong
1999-12-01
The objective of image fusion is to combine information from multiple images of the same scene. The result of image fusion is a single image which is more suitable for human visual perception or further image processing tasks. In this thesis, a region-based fusion algorithm using the wavelet transform is proposed. The identification of important features in each image, such as edges and regions of interest, is used to guide the fusion process. The idea of multiscale grouping is also introduced, and a generic image fusion framework based on multiscale decomposition is studied. The framework includes all of the existing multiscale-decomposition-based fusion approaches found in the literature that do not assume a statistical model for the source images. Comparisons indicate that our framework includes some new approaches which outperform the existing approaches for the cases we consider. Because registration must precede fusion, we also propose a hybrid registration scheme which uses both feature-based and intensity-based methods. The idea of robust estimation of optical flow from time-varying images is employed with a coarse-to-fine multi-resolution approach and feature-based registration to overcome some of the limitations of intensity-based schemes. Experiments show that this approach is robust and efficient. Assessing image fusion performance in a real application is a complicated issue. In this dissertation, a mixture probability density function model is used in conjunction with the Expectation-Maximization algorithm to model histograms of edge intensity. Some new techniques are proposed for estimating the quality of a noisy image of a natural scene; such quality measures can be used to guide the fusion. Finally, we study fusion of images obtained from several copies of a new type of camera developed for video surveillance. Our techniques increase the capability and reliability of the surveillance system and provide an easy way to obtain 3-D information about objects in the space monitored by the system.
Sensor fusion for synthetic vision
NASA Technical Reports Server (NTRS)
Pavel, M.; Larimer, J.; Ahumada, A.
1991-01-01
Display methodologies are explored for fusing images gathered by millimeter-wave sensors with images rendered from an on-board terrain database, to facilitate visually guided flight and ground operations in low-visibility conditions. An approach to fusion based on multiresolution image representation and processing is described, which facilitates the fusion of images that differ in resolution, both within and between images. To investigate possible fusion methods, a workstation-based simulation environment is being developed.
Yang, Minglei; Ding, Hui; Zhu, Lei; Wang, Guangzhi
2016-12-01
Ultrasound fusion imaging is an emerging tool that benefits a variety of clinical applications, such as image-guided diagnosis and treatment of hepatocellular carcinoma and unresectable liver metastases. However, respiratory liver motion-induced misalignment of multimodal images (i.e., fusion error) compromises the effectiveness and practicability of this method. The purpose of this paper is to develop a subject-specific liver motion model and an automatic registration-based method to correct the fusion error. An online-built subject-specific motion model and an automatic image registration method for 2D ultrasound-3D magnetic resonance (MR) images were combined to compensate for respiratory liver motion. The key steps were: 1) build a subject-specific liver motion model for the current subject online and perform the initial registration of pre-acquired 3D MR and intraoperative ultrasound images; 2) during fusion imaging, compensate for liver motion first using the motion model, and then use an automatic registration method to further correct the residual respiratory fusion error. Evaluation experiments were conducted on a liver phantom and five subjects. In the phantom study, the fusion error (superior-inferior axis) was reduced from 13.90±2.38 mm to 4.26±0.78 mm by using the motion model only, and further decreased to 0.63±0.53 mm by using the registration method, which also decreased the rotation error from 7.06±0.21° to 1.18±0.66°. In the clinical study, the fusion error was reduced from 12.90±9.58 mm to 6.12±2.90 mm by using the motion model alone, and decreased to 1.96±0.33 mm with the registration method. The proposed method can effectively correct the respiration-induced fusion error and improve fusion image quality, while reducing the dependence of error correction on the initial registration of the ultrasound and MR images. Overall, the proposed method can improve the clinical practicability of ultrasound fusion imaging. Copyright © 2016 Elsevier Ltd. All rights reserved.
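A toy sketch of the two-stage correction, under heavy simplification: a 1D superior-inferior shift predicted by the motion model is applied first, then refined by a small search that maximizes normalized cross-correlation against pre-resliced MR images. All names are hypothetical; the paper's automatic 2D US-3D MR registration is far more general than this exhaustive search.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two same-shape images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() /
                 (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def correct_si_shift(us_frame, mr_reslices, model_shift_px, search_px=10):
    """mr_reslices: dict mapping integer SI shift (px) -> 2D MR reslice.
    Stage 1 applies the model-predicted shift; stage 2 searches a small
    neighborhood around it for the residual-error correction."""
    candidates = [s for s in mr_reslices
                  if abs(s - model_shift_px) <= search_px]
    return max(candidates, key=lambda s: ncc(us_frame, mr_reslices[s]))
```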
Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion
Zhao, Yuanshen; Gong, Liang; Huang, Yixiang; Liu, Chengliang
2016-01-01
Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot due to the various disturbances present in the image background. The bottleneck for robust fruit recognition is reducing the influence of the two main disturbances: illumination and overlapping. In order to recognize tomatoes in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. First, two novel feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Second, wavelet transformation was adopted to fuse the two feature images at the pixel level, combining the feature information of the two source images. Third, in order to segment the target tomato from the background, an adaptive threshold algorithm was used to obtain the optimal threshold, and the segmentation result was post-processed by morphology operations to remove a small amount of noise. In the detection tests, 93% of target tomatoes were recognized out of 200 samples, indicating that the proposed recognition method is suitable for low-cost robotic tomato harvesting in uncontrolled environments. PMID:26840313
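An OpenCV sketch of the feature-extraction and segmentation pipeline, with a plain pixel average standing in for the paper's wavelet-level fusion; the 0.5/0.5 weights and the kernel size are assumptions.

```python
import cv2
import numpy as np

def segment_tomato(bgr):
    """Extract the a* (L*a*b*) and I (YIQ) components, fuse them, and
    segment with Otsu's adaptive threshold plus morphological cleanup."""
    a_star = cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab)[:, :, 1].astype(np.float32)
    b, g, r = [bgr[:, :, i].astype(np.float32) for i in range(3)]
    i_comp = 0.596 * r - 0.274 * g - 0.322 * b     # YIQ in-phase component
    norm = lambda x: cv2.normalize(x, None, 0, 255, cv2.NORM_MINMAX)
    fused = (0.5 * norm(a_star) + 0.5 * norm(i_comp)).astype(np.uint8)
    _, mask = cv2.threshold(fused, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove speckle
```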
A motorized ultrasound system for MRI-ultrasound fusion guided prostatectomy
NASA Astrophysics Data System (ADS)
Seifabadi, Reza; Xu, Sheng; Pinto, Peter; Wood, Bradford J.
2016-03-01
Purpose: This study presents MoTRUS, a motorized transrectal ultrasound system that enables remote navigation of a transrectal ultrasound (TRUS) probe during da Vinci-assisted prostatectomy. MoTRUS not only provides a stable platform for the ultrasound probe, but also allows the physician to navigate it remotely while seated at the da Vinci console. The study also presents a phantom feasibility study of intraoperative MRI-US image fusion, the goal being to bring preoperative MR images into the operating room for the best visualization of the gland, boundaries, and nerves. Method: A two degree-of-freedom probe holder was developed to insert and rotate a bi-plane transrectal ultrasound transducer, with a custom joystick for remote navigation of MoTRUS. Safety features were included to avoid inadvertent risk to the patient. Custom software was developed to fuse preoperative MR images with intraoperative ultrasound images acquired by MoTRUS. Results: Remote TRUS probe navigation was evaluated, with the required consents, on a patient undergoing prostatectomy. It took 10 min to set up the system in the OR. MoTRUS provided capability similar to manual scanning, with the addition of remote navigation and stable imaging; no complications were observed. Image fusion was evaluated on a commercial prostate phantom using electromagnetic tracking. Conclusions: Motorized navigation of the TRUS probe during prostatectomy is safe and feasible. Remote navigation gives the physician more precise and easier control of the ultrasound image while removing the burden of manual probe manipulation. Image fusion improved visualization of the prostate and its boundaries in a phantom study.
a Comparative Analysis of Spatiotemporal Data Fusion Models for Landsat and Modis Data
NASA Astrophysics Data System (ADS)
Hazaymeh, K.; Almagbile, A.
2018-04-01
In this study, three documented spatiotemporal data fusion models were applied to Landsat-7 and MODIS surface reflectance and NDVI. The algorithms included the spatial and temporal adaptive reflectance fusion model (STARFM), the sparse-representation-based spatiotemporal reflectance fusion model (SPSTFM), and the spatiotemporal image-fusion model (STI-FM). The objectives of this study were to (i) compare the performance of these three fusion models using one Landsat-MODIS spectral reflectance image pair from time-series datasets of the Coleambally irrigation area in Australia, and (ii) quantitatively evaluate the accuracy of the synthetic images generated by each fusion model using statistical measurements. Results showed that the three fusion models predicted the synthetic Landsat-7 image with adequate agreement. STI-FM produced more accurate reconstructions of both the Landsat-7 spectral bands and NDVI, and it produced surface reflectance images having the highest correlation with the actual Landsat-7 images. This study indicates that STI-FM would be more suitable for spatiotemporal data fusion applications such as vegetation monitoring, drought monitoring, and evapotranspiration.
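The per-band statistical comparison used to rank the models can be reproduced with a few lines of numpy; the metric choice (correlation coefficient and RMSE) is an assumption consistent with the evaluation described above.

```python
import numpy as np

def fusion_accuracy(synthetic, actual):
    """Agreement between a synthetic and an actual Landsat band."""
    s = synthetic.astype(float).ravel()
    a = actual.astype(float).ravel()
    r = np.corrcoef(s, a)[0, 1]                   # correlation coefficient
    rmse = float(np.sqrt(np.mean((s - a) ** 2)))  # root-mean-square error
    return r, rmse
```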
Tissue fusion during early mammalian development requires coordination of multiple cell types, the extracellular matrix, and complex signaling pathways. Fusion events during processes including heart development, neural tube closure, and palatal fusion are dependent on signaling ...
An FPGA-based heterogeneous image fusion system design method
NASA Astrophysics Data System (ADS)
Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong
2011-08-01
Taking advantage of the FPGA's low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible-light CCD camera and an infrared thermal imager are used as the image-capturing devices to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms, such as gray-scale weighted averaging, maximum selection, and minimum selection, are analyzed and compared. VHDL and a synchronous design method are used to produce a reliable RTL-level description, and Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Contrast experiments with the various fusion algorithms show that good heterogeneous image fusion quality can be obtained with the proposed system. The applicability of the different fusion algorithms is also discussed.
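The three tailor-made fusion rules are simple enough to state exactly; the numpy sketch below mirrors what the VHDL modules compute per pixel (the weight w is a free parameter).

```python
import numpy as np

def fuse_pixelwise(vis, ir, rule="weighted", w=0.5):
    """Gray-scale weighted averaging, maximum selection, or minimum
    selection of two registered 8-bit images."""
    vis = vis.astype(np.float32)
    ir = ir.astype(np.float32)
    if rule == "weighted":
        out = w * vis + (1.0 - w) * ir
    elif rule == "max":
        out = np.maximum(vis, ir)
    else:
        out = np.minimum(vis, ir)
    return np.clip(out, 0, 255).astype(np.uint8)
```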
Image fusion based on millimeter-wave for concealed weapon detection
NASA Astrophysics Data System (ADS)
Zhu, Weiwen; Zhao, Yuejin; Deng, Chao; Zhang, Cunlin; Zhang, Yalin; Zhang, Jingshui
2010-11-01
This paper describes a novel multi-sensor image fusion technology for concealed weapon detection (CWD). Because clothing is largely transparent in the millimeter-wave band, a millimeter-wave radiometer can image and distinguish concealed contraband beneath clothes, such as guns, knives, and detonators; we therefore adopt passive millimeter-wave (PMMW) imaging technology for airport security. However, given the wavelength of millimeter waves and the single-channel mechanical scanning, the millimeter-wave image has low spatial resolution, which cannot meet the needs of practical application. Therefore, a visible image (VI), which has higher resolution, is fused with the millimeter-wave image to enhance readability. Before fusion, a novel image pre-processing step specific to the fusion of millimeter-wave and visible images is applied, and the fusion itself uses multiresolution analysis (MRA) based on the wavelet transform (WT). Experimental results show that this method has advantages for concealed weapon detection and practical significance.
Maximizing Science Return from Future Mars Missions with Onboard Image Analyses
NASA Technical Reports Server (NTRS)
Gulick, V. C.; Morris, R. L.; Bandari, E. B.; Roush, T. L.
2000-01-01
We have developed two new techniques to enhance science return and to decrease returned data volume for near-term Mars missions: 1) multi-spectral image compression and 2) autonomous identification and fusion of in-focus regions in an image series.
3D Object Classification Based on Thermal and Visible Imagery in Urban Area
NASA Astrophysics Data System (ADS)
Hasani, H.; Samadzadegan, F.
2015-12-01
The spatial distribution of land cover in urban areas, especially of 3D objects (buildings and trees), is a fundamental dataset for urban planning, ecological research, disaster management, etc. Thanks to recent advances in sensor technologies, several types of remotely sensed data are available for the same area, and data fusion has been widely investigated for integrating different data sources in urban-area classification. Thermal infrared imagery (TIR) contains information on emitted radiation and has unique radiometric properties; however, due to the coarse spatial resolution of thermal data, its application in urban areas has been restricted. Visible imagery (VIS), on the other hand, has high spatial resolution and information in the visible spectrum, so the two are complementary for classifying urban areas. This paper evaluates the potential of fusing aerial thermal hyperspectral and visible imagery for urban-area classification. In the pre-processing step, the thermal imagery is resampled to the spatial resolution of the visible image. Feature-level fusion is then applied to construct a hybrid feature space including visible bands, thermal hyperspectral bands, and spatial and texture features, and Principal Component Analysis (PCA) is applied to extract principal components. Due to the high dimensionality of the feature space, a dimension reduction method is performed. Finally, Support Vector Machines (SVMs) classify the reduced hybrid feature space. The results show that using thermal imagery along with visible imagery improved the classification accuracy by up to 8% with respect to classification of the visible image alone.
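A compact sklearn sketch of the feature-level fusion, PCA reduction, and SVM classification chain; the feature dimensions and class labels are placeholders, not the paper's data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
vis_feat = rng.random((1000, 3))     # visible bands (placeholder)
tir_feat = rng.random((1000, 84))    # resampled thermal hyperspectral bands
tex_feat = rng.random((1000, 8))     # spatial/texture features
labels = rng.integers(0, 4, 1000)    # e.g., building/tree/road/ground

X = np.hstack([vis_feat, tir_feat, tex_feat])   # feature-level fusion
clf = make_pipeline(StandardScaler(),
                    PCA(n_components=20),       # dimension reduction
                    SVC(kernel="rbf"))
clf.fit(X, labels)
print(clf.score(X, labels))
```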
Advances in multi-sensor data fusion: algorithms and applications.
Dong, Jiang; Zhuang, Dafang; Huang, Yaohuan; Fu, Jingying
2009-01-01
With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. This paper presents an overview of recent advances in multi-sensor satellite image fusion. First, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection, and maneuvering target tracking, are then described, and both the advantages and limitations of those applications are discussed. Recommendations are addressed, including: (1) improvement of fusion algorithms; (2) development of "algorithm fusion" methods; (3) establishment of an automatic quality assessment scheme.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hakime, Antoine, E-mail: thakime@yahoo.com; Yevich, Steven; Tselikas, Lambros
Purpose: To assess whether fusion imaging-guided percutaneous microwave ablation (MWA) can improve visibility and targeting of liver metastases deemed inconspicuous on ultrasound (US). Materials and Methods: MWA of liver metastases not judged conspicuous enough on US was performed under CT/US fusion imaging guidance. Conspicuity before and after fusion imaging was graded on a five-point scale, and significance was assessed with the Wilcoxon test. Technical success, procedure time, and procedure-related complications were evaluated. Results: A total of 35 patients with 40 liver metastases (mean size 1.3 ± 0.4 cm) were enrolled. Image fusion improved conspicuity sufficiently to allow fusion-targeted MWA in 33 patients. The time required for image fusion processing and tumor identification averaged 10 ± 2.1 min (range 5-14). Initial conspicuity on US by inclusion criteria was 1.2 ± 0.4 (range 0-2), while conspicuity after localization on fusion imaging was 3.5 ± 1 (range 1-5, p < 0.001). The technical success rate was 83% (33/40) in intention-to-treat analysis and 100% in analysis of treated tumors. There were no major procedure-related complications. Conclusions: Fusion imaging broadens the scope of US-guided MWA to metastases lacking adequate conspicuity on conventional US, and is an effective tool to increase the conspicuity of liver metastases initially deemed not visualizable on conventional US imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S; Kim, M; Lee, M
Purpose: Novel three-dimensional (3D)-printed spine quality assurance (QA) phantoms generated by two different 3D-printing technologies, digital light processing (DLP) and Polyjet, were developed and evaluated for spine stereotactic body radiation treatment (SBRT). Methods: The developed 3D-printed spine QA phantom consisted of an acrylic body and a 3D-printed spine phantom. DLP and Polyjet 3D printers using a high-density acrylic polymer were employed to produce spine-shaped phantoms based on CT images. To verify dosimetric effects, the phantom was designed to allow films to be inserted between the slabs of the acrylic body; to measure the internal dose of the spine, the 3D-printed spine phantom was designed to split laterally exactly in half. Image fusion was performed to evaluate the reproducibility of the phantom, and Hounsfield units (HU) were measured on each CT image. Intensity-modulated radiotherapy plans delivering a fraction of a 16 Gy dose to a planning target volume (PTV), based on the two 3D-printing techniques, were compared for target coverage and normal-organ sparing. Results: Image fusion demonstrated good reproducibility of the fabricated spine QA phantom. The HU values of the DLP- and Polyjet-printed spine vertebrae differed by 54.3 on average. The PTV Dmax dose for the DLP-generated phantom was about 1.488 Gy higher than for the Polyjet-generated phantom, and the organs at risk received a lower dose with the DLP technique than with the Polyjet technique. Conclusion: This study confirmed that a novel 3D-printed phantom mimicking a high-density organ can be created from CT images, and that the developed 3D-printed spine phantom can be used for patient-specific QA in SBRT. Despite using the same main material, DLP and Polyjet yielded different HU values; the printing technique and materials must therefore be chosen carefully to accurately produce a patient-specific QA phantom.
Multifocus image fusion using phase congruency
NASA Astrophysics Data System (ADS)
Zhan, Kun; Li, Qiaoqiao; Teng, Jicai; Wang, Mingying; Shi, Jinhui
2015-05-01
We address the problem of fusing multifocus images based on the phase congruency (PC). PC provides a sharpness feature of a natural image. The focus measure (FM) is identified as strong PC near a distinctive image feature evaluated by the complex Gabor wavelet. The PC is more robust against noise than other FMs. The fusion image is obtained by a new fusion rule (FR), and the focused region is selected by the FR from one of the input images. Experimental results show that the proposed fusion scheme achieves the fusion performance of the state-of-the-art methods in terms of visual quality and quantitative evaluations.
NASA Astrophysics Data System (ADS)
Guan, Wen; Li, Li; Jin, Weiqi; Qiu, Su; Zou, Yan
2015-10-01
Extreme-low-light CMOS sensors have been widely applied in the field of night vision as a new type of solid-state image sensor. However, when scene illumination changes drastically or is too strong, such a sensor cannot clearly present both the high-light and the low-light regions. To address this partial saturation problem in night vision, an HDR image fusion algorithm based on the Laplacian pyramid was investigated. Because the overall gray level and contrast of the low-light image are very low, a fusion strategy based on the regional average gradient is chosen for the top pyramid layer of the long- and short-exposure images, which carries rich brightness and textural features. The remaining layers, which represent edge features of the target, are fused using a strategy based on regional energy. In reconstructing the fused image from the Laplacian pyramid, fusion results obtained with four kinds of base images are compared. The algorithm is tested in Matlab and compared with different fusion strategies, using information entropy, average gradient, and standard deviation as three objective evaluation parameters for further analysis of the fusion results. Experiments in different low-illumination environments show that the proposed algorithm can rapidly achieve a wide dynamic range while keeping high entropy. Verification of the algorithm's features suggests further application prospects for the optimized algorithm. Keywords: high dynamic range imaging, image fusion, multi-exposure image, weight coefficient, information fusion, Laplacian pyramid transform.
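An OpenCV/numpy sketch of Laplacian-pyramid exposure fusion, assuming grayscale inputs whose sides are divisible by 2**levels; a per-pixel max-|coefficient| rule stands in for the paper's regional-energy rule, and top-level averaging stands in for the regional-average-gradient rule.

```python
import cv2
import numpy as np

def lap_pyramid(img, levels):
    g = [img.astype(np.float32)]
    for _ in range(levels):
        g.append(cv2.pyrDown(g[-1]))
    # Band-pass (Laplacian) levels plus the coarsest Gaussian level.
    return [g[i] - cv2.pyrUp(g[i + 1], dstsize=g[i].shape[1::-1])
            for i in range(levels)] + [g[levels]]

def fuse_exposures(long_exp, short_exp, levels=4):
    pl = lap_pyramid(long_exp, levels)
    ps = lap_pyramid(short_exp, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)   # detail layers
             for a, b in zip(pl[:-1], ps[:-1])]
    fused.append(0.5 * (pl[-1] + ps[-1]))             # top layer
    out = fused[-1]
    for lap in reversed(fused[:-1]):                  # collapse the pyramid
        out = cv2.pyrUp(out, dstsize=lap.shape[1::-1]) + lap
    return np.clip(out, 0, 255).astype(np.uint8)
```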
Morris, Michael D.; Treado, Patrick J.
1991-01-01
An imaging system for providing spectrographically resolved images. The system incorporates a one-dimensional spatial encoding mask which enables an image to be projected onto a two-dimensional image detector after spectral dispersion of the image. The dimension of the image which is lost due to spectral dispersion on the two-dimensional detector is recovered by employing a reverse transform based on presenting a multiplicity of different spatial encoding patterns to the image. The system is especially adapted for detecting Raman scattering of monochromatic light transmitted through or reflected from physical samples. Preferably, spatial encoding is achieved through the use of a Hadamard mask which selectively transmits or blocks portions of the image from the sample being evaluated.
Multi-focus image fusion based on window empirical mode decomposition
NASA Astrophysics Data System (ADS)
Qin, Xinqiang; Zheng, Jiaoyue; Hu, Gang; Wang, Jiao
2017-09-01
In order to improve multi-focus image fusion quality, a novel fusion algorithm based on window empirical mode decomposition (WEMD) is proposed. WEMD is an improved form of bidimensional empirical mode decomposition (BEMD): its decomposition process uses an adding-window principle, which effectively resolves the signal concealment problem. We used WEMD for multi-focus image fusion and formulated different fusion rules for the bidimensional intrinsic mode function (BIMF) components and the residue component. For fusion of the BIMF components, the concept of the Sum-modified-Laplacian was used and a scheme based on visual feature contrast was adopted; for the residue coefficients, pixel values were selected based on local visibility. We carried out four groups of multi-focus image fusion experiments and compared objective evaluation criteria with three other fusion methods. The experimental results show that the proposed approach is effective and fuses multi-focus images better than some traditional methods.
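The Sum-modified-Laplacian focus measure mentioned above is easy to state in numpy; the window size is an assumption, and the per-coefficient selection rule is a simplified stand-in for the paper's visual-feature-contrast scheme.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sum_modified_laplacian(img, window=5):
    """Modified Laplacian |2I - I_left - I_right| + |2I - I_up - I_down|,
    summed (up to a constant factor) over a local window."""
    f = img.astype(float)
    ml = (np.abs(2 * f - np.roll(f, 1, axis=1) - np.roll(f, -1, axis=1)) +
          np.abs(2 * f - np.roll(f, 1, axis=0) - np.roll(f, -1, axis=0)))
    return uniform_filter(ml, size=window)

def fuse_components(c1, c2, src1, src2):
    """Keep each coefficient from the source image that is locally in focus."""
    mask = sum_modified_laplacian(src1) >= sum_modified_laplacian(src2)
    return np.where(mask, c1, c2)
```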
NASA Astrophysics Data System (ADS)
Zhang, Cheng; Wenbo, Mei; Huiqian, Du; Zexian, Wang
2018-04-01
A new algorithm for medical image fusion is proposed in this paper, combining a gradient minimization smoothing filter (GMSF) with a nonsubsampled directional filter bank (NSDFB). In order to preserve more detail information, a multiscale edge-preserving decomposition framework (MEDF) is used to decompose an image into a base image and a series of detail images. For the fusion of base images, a local Gaussian membership function is applied to construct the fusion weighting factor. For the fusion of detail images, the NSDFB is applied to decompose each detail image into multiple directional sub-images, which are each fused by a pulse-coupled neural network (PCNN). The experimental results demonstrate that the proposed algorithm is superior to the compared algorithms in both visual effect and objective assessment.
[Contrast-enhanced ultrasound (CEUS) and image fusion for procedures of liver interventions].
Jung, E M; Clevert, D A
2018-06-01
Contrast-enhanced ultrasound (CEUS) is becoming increasingly important for the detection and characterization of malignant liver lesions and allows percutaneous treatment when surgery is not possible. Contrast-enhanced ultrasound image fusion with computed tomography (CT) and magnetic resonance imaging (MRI) opens up further options for the targeted assessment of tumor treatment. Ultrasound image fusion offers the potential for real-time imaging and can be combined with other cross-sectional imaging techniques as well as with CEUS. With the introduction of ultrasound contrast agents and image fusion, ultrasound has improved in the detection and characterization of liver lesions in comparison to other cross-sectional imaging techniques, and the method can also be used for interventional procedures. The success rate of fusion-guided biopsies or CEUS-guided tumor ablation lies between 80 and 100% in the literature. Ultrasound-guided image fusion using CT or MRI data, in combination with CEUS, can facilitate diagnosis and therapy follow-up after liver interventions. Beyond the primary applications of image fusion in the diagnosis and treatment of liver lesions, further useful indications can be integrated into daily work, including intraoperative and vascular applications as well as applications in other organ systems.
Application and evaluation of ISVR method in QuickBird image fusion
NASA Astrophysics Data System (ADS)
Cheng, Bo; Song, Xiaolu
2014-05-01
QuickBird satellite images are widely used in many fields, and applications have placed high demands on the integration of the spatial and spectral information of the imagery. A fusion method for high-resolution remote sensing images based on ISVR is presented in this study. The core principle of ISVR is to exploit radiance calibration to remove the effects of the differing gains and errors of the satellite's sensors. After transformation from digital numbers (DN) to radiance, the energy of the multispectral image is used to simulate the panchromatic band: a linear regression analysis is carried out during the simulation to find a new synthetic panchromatic image that is highly linearly correlated with the original panchromatic image. To evaluate, test, and compare the algorithm results, this paper used ISVR and two other fusion methods in a comparative study of spatial and spectral information, taking the average gradient and the correlation coefficient as indicators. Experiments showed that this method can significantly improve the quality of the fused image, especially in preserving spectral information, retaining as much of the spectral information of the original multispectral images as possible while maintaining abundant spatial information.
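The regression step, simulating a synthetic panchromatic band from the multispectral radiance, reduces to an ordinary least-squares fit; the band count and array shapes below are assumptions.

```python
import numpy as np

def simulate_pan(ms_radiance, pan_radiance):
    """Fit a synthetic pan band as a linear combination (plus offset) of
    the multispectral radiance bands, so that it is highly linearly
    correlated with the observed pan band.

    ms_radiance: (bands, h, w); pan_radiance: (h, w)."""
    nb, h, w = ms_radiance.shape
    A = np.column_stack([ms_radiance.reshape(nb, -1).T,
                         np.ones(h * w)])               # design matrix
    coef, *_ = np.linalg.lstsq(A, pan_radiance.ravel(), rcond=None)
    return (A @ coef).reshape(h, w)
```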
Zhang, Ji-Bin; Zhao, Li-Rong; Cui, Tian-Xiang; Chen, Xie-Wan; Yang, Qiao; Zhou, Yi-Bing; Chen, Zheng-Tang; Zhang, Shao-Xiang; Sun, Jian-Guo
2018-01-01
The aim of the present study was to investigate the optimal strategy and dosimetric measurement of thoracic radiotherapy based on three-dimensional (3D) modeling of mediastinal lymph nodes (MLNs). A 3D model of MLNs was constructed from a Chinese Visible Human female dataset. Image registration and fusion between the reconstructed MLNs and original chest computed tomography (CT) images were conducted in the Eclipse™ treatment planning system (TPS). Three plans, 3D conformal radiotherapy (3D-CRT), intensity-modulated radiotherapy (IMRT), and volumetric-modulated arc therapy (VMAT), were designed based on 10 cases of simulated lung lesions (SLLs) and MLNs. The quality of these plans was evaluated using indices including the conformity index (CI), homogeneity index, and clinical target volume (CTV) coverage. Dose-volume histogram analysis was performed on the SLLs, MLNs, and organs at risk (OARs). A Chengdu Dosimetric Phantom (CDP) was then drilled at specific MLN positions derived from 20 medium-build patients with thoracic tumors. These plans were repeated on fused MLN and CDP CT images in the Eclipse™ TPS. Radiation doses at the SLLs and MLNs of the CDP were measured and compared with the calculated doses. The established 3D MLN model demonstrated the spatial location of MLNs and adjacent structures. Precise image registration and fusion were achieved between the reconstructed MLNs and the original chest CT or CDP CT images. IMRT demonstrated greater values of CI, CTV coverage, and OAR (lung and spinal cord) protection compared with 3D-CRT and VMAT (P<0.05). The deviation between measured and calculated doses was within ±10% at the SLLs and at the 2R and 7th MLN stations. In conclusion, the 3D MLN model can benefit plan optimization and dosimetric measurement in thoracic radiotherapy, and when combined with the CDP, it may provide a tool for clinical dosimetric monitoring. PMID:29556300
Qiu, Chenhui; Wang, Yuanyuan; Guo, Yanen; Xia, Shunren
2018-03-14
Image fusion techniques can integrate information from different imaging modalities into a composite image that is more suitable for human visual perception and further image processing tasks. Fusing green fluorescent protein (GFP) and phase contrast images is very important for subcellular localization, functional analysis of proteins, and genome expression. A fusion method for GFP and phase contrast images based on the complex shearlet transform (CST) is proposed in this paper. First, the GFP image is converted to the IHS model and its intensity component is obtained. Second, the CST is performed on the intensity component and the phase contrast image to acquire the low-frequency and high-frequency subbands. The high-frequency subbands are then merged by the absolute-maximum rule, while the low-frequency subbands are merged by the proposed Haar wavelet-based energy (HWE) rule. Finally, the fused image is obtained by performing the inverse CST on the merged subbands and conducting IHS-to-RGB conversion. The proposed fusion method is tested on a number of GFP and phase contrast images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed method provides better fusion results in terms of subjective quality and objective evaluation. © 2018 Wiley Periodicals, Inc.
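A pipeline sketch under strong simplifications: a Haar DWT stands in for the complex shearlet transform, the mean of the RGB channels stands in for the IHS intensity, color is restored by channel rescaling rather than an exact IHS-to-RGB conversion, and image sides are assumed divisible by 2**levels. The low-frequency rule below is an energy-weighted average inspired by, but not identical to, the HWE rule.

```python
import numpy as np
import pywt

def fuse_gfp_phase(gfp_rgb, phase, wavelet="haar", levels=2):
    intensity = gfp_rgb.astype(float).mean(axis=2)      # IHS-like intensity
    ci = pywt.wavedec2(intensity, wavelet, level=levels)
    cp = pywt.wavedec2(phase.astype(float), wavelet, level=levels)
    ei, ep = (ci[0] ** 2).mean(), (cp[0] ** 2).mean()
    fused = [(ei * ci[0] + ep * cp[0]) / (ei + ep)]     # low freq: energy weights
    for di, dp in zip(ci[1:], cp[1:]):                  # high freq: abs-max rule
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(di, dp)))
    new_i = pywt.waverec2(fused, wavelet)
    ratio = new_i / (intensity + 1e-6)                  # restore color by rescaling
    return np.clip(gfp_rgb * ratio[..., None], 0, 255).astype(np.uint8)
```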
Local-global classifier fusion for screening chest radiographs
NASA Astrophysics Data System (ADS)
Ding, Meng; Antani, Sameer; Jaeger, Stefan; Xue, Zhiyun; Candemir, Sema; Kohli, Marc; Thoma, George
2017-03-01
Tuberculosis (TB) is a severe comorbidity of HIV, and chest x-ray (CXR) analysis is a necessary step in screening for this infectious disease. Automatic analysis of digital CXR images for detecting pulmonary abnormalities is critical for population screening, especially in medical resource-constrained developing regions. In this article, we describe steps that improve the previously reported performance of NLM's CXR screening algorithms and help advance the state of the art in the field. We propose a local-global classifier fusion method in which two complementary classification systems are combined. The local classifier focuses on subtle and partial presentations of the disease, leveraging information in radiology reports that roughly indicates the locations of abnormalities. The global classifier models the dominant spatial structure in the gestalt image using the GIST descriptor for semantic differentiation. Finally, the two complementary classifiers are combined using linear fusion, where the weight of each decision is calculated from the confidence probabilities of the two classifiers. We evaluated our method on three datasets in terms of the area under the Receiver Operating Characteristic (ROC) curve, sensitivity, specificity, and accuracy. The evaluation demonstrates the superiority of our proposed local-global fusion method over either single classifier.
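The confidence-weighted linear fusion of the two decisions can be sketched in a few lines; taking the distance from 0.5 as the confidence is an assumption standing in for the classifiers' actual confidence probabilities.

```python
def fuse_decisions(p_local, p_global):
    """Linearly fuse two classifiers' abnormality probabilities, each
    weighted by its confidence (distance from the uncertain point 0.5)."""
    w_l = abs(p_local - 0.5)
    w_g = abs(p_global - 0.5)
    if w_l + w_g == 0.0:
        return 0.5                      # both classifiers are uninformative
    return (w_l * p_local + w_g * p_global) / (w_l + w_g)

# A hesitant local classifier defers to a confident global one:
print(fuse_decisions(0.55, 0.90))       # ~0.86
```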
Dual wavelength imaging allows analysis of membrane fusion of influenza virus inside cells.
Sakai, Tatsuya; Ohuchi, Masanobu; Imai, Masaki; Mizuno, Takafumi; Kawasaki, Kazunori; Kuroda, Kazumichi; Yamashina, Shohei
2006-02-01
Influenza virus hemagglutinin (HA) is a determinant of virus infectivity. It is therefore important to determine whether the HA of a new influenza virus, which could potentially cause a pandemic, is functional in human cells. The novel imaging technique reported here allows rapid analysis of HA function by visualizing viral fusion inside cells. The imaging detects fusion as a change in the spectrum of the fluorescence-labeled virus. Using this approach, we detected fusion between a virus and a very small endosome that could not be detected previously, indicating that the imaging allows highly sensitive detection of viral fusion.
Electron cyclotron emission imaging and applications in magnetic fusion energy
NASA Astrophysics Data System (ADS)
Tobias, Benjamin John
Energy production through the burning of fossil fuels is an unsustainable practice. Exponentially increasing energy consumption and dwindling natural resources ensure that coal and gas fueled power plants will someday be a thing of the past. However, even before fuel reserves are depleted, our planet may well succumb to disastrous side effects, namely the build up of carbon emissions in the environment triggering world-wide climate change and the countless industrial spills of pollutants that continue to this day. Many alternatives are currently being developed, but none has so much promise as fusion nuclear energy, the energy of the sun. The confinement of hot plasma at temperatures in excess of 100 million Kelvin by a carefully arranged magnetic field for the realization of a self-sustaining fusion power plant requires new technologies and improved understanding of fundamental physical phenomena. Imaging of electron cyclotron radiation lends insight into the spatial and temporal behavior of electron temperature fluctuations and instabilities, providing a powerful diagnostic for investigations into basic plasma physics and nuclear fusion reactor operation. This dissertation presents the design and implementation of a new generation of Electron Cyclotron Emission Imaging (ECEI) diagnostics on toroidal magnetic fusion confinement devices, or tokamaks, around the world. The underlying physics of cyclotron radiation in fusion plasmas is reviewed, and a thorough discussion of millimeter wave imaging techniques and heterodyne radiometry in ECEI follows. The imaging of turbulence and fluid flows has evolved over half a millennium since Leonardo da Vinci's first sketches of cascading water, and applications for ECEI in fusion research are broad ranging. Two areas of physical investigation are discussed in this dissertation: the identification of poloidal shearing in Alfven eigenmode structures predicted by hybrid gyrofluid-magnetohydrodynamic (gyrofluid-MHD) modeling, and magnetic field line displacement during precursor oscillations associated with the sawtooth crash, a disruptive instability observed both in tokamak plasmas with high core current and in the magnetized plasmas of solar flares and other interstellar plasmas. Understanding both of these phenomena is essential for the future of magnetic fusion energy, and important new observations described herein underscore the advantages of imaging techniques in experimental physics.
Hamm, Klaus D; Surber, Gunnar; Schmücking, Michael; Wurm, Reinhard E; Aschenbach, Rene; Kleinert, Gabriele; Niesen, A; Baum, Richard P
2004-11-01
Innovative new software solutions may enable image fusion to produce the desired data superposition for precise target definition and follow-up studies in radiosurgery/stereotactic radiotherapy of patients with intracranial lesions. The aim is to integrate anatomical and functional information completely into radiation treatment planning and to achieve exact comparisons in follow-up examinations. The special conditions and advantages of BrainLAB's fully automatic image fusion system are evaluated and described for this purpose. In 458 patients, radiation treatment planning and some follow-up studies were performed using an automatic image fusion technique involving different imaging modalities; each fusion was visually checked and corrected as necessary. The computerized tomography (CT) scans for radiation treatment planning (slice thickness 1.25 mm), as well as stereotactic angiography for arteriovenous malformations, were acquired using head fixation with a stereotactic arc or, in the case of stereotactic radiotherapy, with a relocatable stereotactic mask. Different magnetic resonance (MR) imaging sequences (T1, T2, and fluid-attenuated inversion-recovery) and positron emission tomography (PET) scans were obtained without head fixation. Fusion results and their effects on radiation treatment planning and follow-up studies were analyzed. The precision of the automatic fusion results depended primarily on image quality, especially slice thickness and field homogeneity when using MR images, as well as on patient movement during data acquisition. Fully automated image fusion of the different MR, CT, and PET studies was performed for each patient; only in a few cases was manual correction necessary after visual evaluation, and these corrections were minor and did not materially affect treatment planning. High-quality fusion of thin slices of a region of interest with a complete head dataset could be performed easily. The target volume for radiation treatment planning could be accurately delineated using the multimodal information provided by CT, MR, angiography, and PET studies. The fusion of follow-up image datasets yielded results that could be successfully compared and quantitatively evaluated. Depending on the quality of the originally acquired images, automated image fusion can be a very valuable tool, allowing fast (approximately 1-2 minutes) and precise fusion of all relevant datasets. Fused multimodality imaging improves target volume definition for radiation treatment planning. High-quality follow-up image datasets should be acquired for image fusion to provide exactly comparable slices and volumetric results that will contribute to quality control.
Mor-Avi, Victor; Patel, Mita B; Maffessanti, Francesco; Singh, Amita; Medvedofsky, Diego; Zaidi, S Javed; Mediratta, Anuj; Narang, Akhil; Nazir, Noreen; Kachenoura, Nadjia; Lang, Roberto M; Patel, Amit R
2018-06-01
Combined evaluation of coronary stenosis and the extent of ischemia is essential in patients with chest pain. Intermediate-grade stenosis on computed tomographic coronary angiography (CTCA) frequently triggers downstream nuclear stress testing. Alternative approaches without stress and/or radiation may have important implications. Myocardial strain measured from echocardiographic images can be used to detect subclinical dysfunction. The authors recently tested the feasibility of fusing three-dimensional (3D) echocardiography-derived regional resting longitudinal strain with coronary arteries from CTCA to determine the hemodynamic significance of stenosis. The aim of the present study was to validate this approach against accepted reference techniques. Seventy-eight patients with chest pain referred for CTCA who also underwent 3D echocardiography and regadenoson stress computed tomography were prospectively studied. Left ventricular longitudinal strain data (TomTec) were used to generate fused 3D displays and detect resting strain abnormalities (RSAs) in each coronary territory. Computed tomographic coronary angiographic images were interpreted for the presence and severity of stenosis. Fused 3D displays of subendocardial x-ray attenuation were created to detect stress perfusion defects (SPDs). In patients with stenosis >25% in at least one artery, fractional flow reserve was quantified (HeartFlow). RSA as a marker of significant stenosis was validated against two different combined references: stenosis >50% on CTCA plus SPDs in the same territory (reference standard A), and fractional flow reserve <0.80 plus SPDs in the same territory (reference standard B). Of the 99 arteries with no stenosis >50% and no SPDs, considered normal, 19 (19%) had RSAs. Conversely, in arteries with stenosis >50% and SPDs, RSAs were considerably more frequent (17 of 24 [71%]). The sensitivity, specificity, and accuracy of RSA were 0.71, 0.81, and 0.79, respectively, against reference standard A and 0.83, 0.81, and 0.82 against reference standard B. Fusion of CTCA and 3D echocardiography-derived resting myocardial strain provides combined displays that may be useful in determining the hemodynamic or functional impact of coronary abnormalities without additional ionizing radiation or stress testing. Copyright © 2018 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.
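The per-artery statistics against reference standard A can be reproduced directly from the counts given in the abstract; a minimal check:

```python
# Counts from the abstract, reference standard A (per-artery basis).
tp, fn = 17, 24 - 17   # RSA present / absent among 24 arteries with stenosis >50% and SPDs
fp, tn = 19, 99 - 19   # RSA present / absent among 99 arteries considered normal

sensitivity = tp / (tp + fn)                 # 17/24  ~ 0.71
specificity = tn / (tn + fp)                 # 80/99  ~ 0.81
accuracy = (tp + tn) / (tp + fn + fp + tn)   # 97/123 ~ 0.79
print(f"{sensitivity:.2f} {specificity:.2f} {accuracy:.2f}")
```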
Epi-Two-Dimensional Fluid Flow: A New Topological Paradigm for Dimensionality
NASA Astrophysics Data System (ADS)
Yoshida, Z.; Morrison, P. J.
2017-12-01
While a variety of fundamental differences are known to separate two-dimensional (2D) and three-dimensional (3D) fluid flows, it is not well understood how they are related. Conventionally, dimensional reduction is justified by an a priori geometrical framework; i.e., 2D flows occur under some geometrical constraint such as shallowness. However, deeper inquiry into 3D flow often finds the presence of local 2D-like structures without such a constraint, where 2D-like behavior may be identified by the integrability of vortex lines or vanishing local helicity. Here we propose a new paradigm of flow structure by introducing an intermediate class, termed epi-two-dimensional flow, and thereby build a topological bridge between 2D and 3D flows. The epi-2D property is local and is preserved in fluid elements obeying ideal (inviscid and barotropic) mechanics; a local epi-2D flow may be regarded as a "particle" carrying a generalized enstrophy as its charge. A finite viscosity may cause "fusion" of two epi-2D particles, generating helicity from their charges giving rise to 3D flow.
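For context, the "local helicity" invoked above is, in standard fluid-mechanics notation (an assumption, since the abstract gives no formulas), the helicity density

```latex
h = \mathbf{v}\cdot\boldsymbol{\omega}, \qquad \boldsymbol{\omega} = \nabla\times\mathbf{v},
```

which vanishes identically for planar 2D flow because the vorticity is then normal to the plane of motion; the epi-2D class localizes this property in fluid elements without requiring a global geometric constraint.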
Viswanathan, P; Krishna, P Venkata
2014-05-01
Teleradiology allows transmission of medical images for clinical data interpretation to provide improved e-health care access, delivery, and standards. The remote transmission raises various ethical and legal issues such as image retention, fraud, privacy, and malpractice liability. A joint FED watermarking system, that is, a joint fingerprint/encryption/dual watermarking system, is proposed to address these issues. The system combines a region-based substitution dual watermarking algorithm using spatial fusion, a stream cipher algorithm using a symmetric key, and a fingerprint verification algorithm using invariants. The aim is to provide access to the outcomes of medical images with confidentiality, availability, integrity, and verification of origin. The watermarking, encryption, and fingerprint enrollment are conducted jointly in the protection stage, so that the extraction, decryption, and verification can be applied independently. The dual watermarking system, introducing two different embedding schemes, one used for patient data and the other for fingerprint features, reduces the difficulty of maintaining multiple documents such as authentication data, personal and diagnosis data, and medical images. The spatial fusion algorithm, which determines the region of embedding using a threshold derived from the image to embed the encrypted patient data, follows the exact rules of fusion, resulting in better quality than other fusion techniques. The four-step stream cipher algorithm using a symmetric key for encrypting the patient data, together with the fingerprint verification system using algebraic invariants, improves the robustness of the medical information. The proposed scheme was evaluated for security and quality on DICOM medical images and performed well in terms of resistance to attacks, quality index, and imperceptibility.
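The paper's exact substitution rules and four-step cipher are not given in the abstract; the sketch below illustrates only the generic idea of region-based LSB substitution of encrypted payload bits. All function names are hypothetical, and the XOR stream is a toy stand-in for the authors' cipher.

```python
import itertools
import numpy as np

def _region(image, threshold):
    # Select embedding pixels ignoring the LSB, so embedding a bit
    # can never change a pixel's region membership.
    return np.where((image & 0xFE) > threshold)

def xor_stream(bits, key_bits):
    """Toy stand-in for the paper's four-step stream cipher."""
    return [b ^ k for b, k in zip(bits, itertools.cycle(key_bits))]

def embed(image, payload_bits, key_bits, threshold=128):
    """Encrypt payload bits and substitute them into region LSBs (uint8 image)."""
    marked = image.copy()
    rows, cols = _region(image, threshold)
    cipher_bits = xor_stream(payload_bits, key_bits)
    if len(cipher_bits) > rows.size:
        raise ValueError("embedding region too small for payload")
    for bit, r, c in zip(cipher_bits, rows, cols):
        marked[r, c] = (marked[r, c] & 0xFE) | bit
    return marked

def extract(marked, n_bits, key_bits, threshold=128):
    rows, cols = _region(marked, threshold)
    cipher_bits = [int(marked[r, c] & 1) for r, c in zip(rows[:n_bits], cols[:n_bits])]
    return xor_stream(cipher_bits, key_bits)   # XOR is its own inverse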
Infrared and visible image fusion method based on saliency detection in sparse domain
NASA Astrophysics Data System (ADS)
Liu, C. H.; Qi, Y.; Ding, W. R.
2017-06-01
Infrared and visible image fusion is a key problem in the field of multi-sensor image fusion. To better preserve the significant information of the infrared and visible images in the final fused image, the saliency maps of the source images are introduced into the fusion procedure. Firstly, under the framework of the joint sparse representation (JSR) model, the global and local saliency maps of the source images are obtained based on sparse coefficients. Then, a saliency detection model is proposed, which combines the global and local saliency maps to generate an integrated saliency map. Finally, a weighted fusion algorithm based on the integrated saliency map is developed to carry out the fusion process. The experimental results show that our method is superior to state-of-the-art methods in terms of several universal quality evaluation indexes, as well as in visual quality.
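The JSR-based saliency computation is beyond an abstract-level sketch, but the final weighted fusion step described here reduces to a per-pixel convex combination driven by the normalized saliency maps; a minimal numpy version, with the saliency maps taken as given:

```python
import numpy as np

def saliency_weighted_fusion(ir, vis, sal_ir, sal_vis, eps=1e-8):
    """Fuse IR and visible images with per-pixel weights from saliency maps."""
    w = sal_ir / (sal_ir + sal_vis + eps)   # normalize the two saliency maps
    return w * ir + (1.0 - w) * vis
```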
Spectral CT imaging in patients with Budd-Chiari syndrome: investigation of image quality.
Su, Lei; Dong, Junqiang; Sun, Qiang; Liu, Jie; Lv, Peijie; Hu, Lili; Yan, Liangliang; Gao, Jianbo
2014-11-01
To assess the image quality of monochromatic imaging from spectral CT in patients with Budd-Chiari syndrome (BCS), fifty patients with BCS underwent spectral CT to generate conventional 140 kVp polychromatic images (group A) and monochromatic images with energy levels from 40 to 80 keV, together with 40 + 70 and 50 + 70 keV fusion images (group B), during the portal venous phase (PVP) and the hepatic venous phase (HVP). Two-sample t tests compared vessel-to-liver contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) for the portal vein (PV), hepatic vein (HV), and inferior vena cava. Readers' subjective evaluations of image quality were recorded. The highest SNR values in group B occurred at 50 keV, and the highest CNR values at 40 keV. Higher CNR and SNR values were obtained in group B during the PVP for the PV (SNR 18.39 ± 6.13 vs. 10.56 ± 3.31; CNR 7.81 ± 3.40 vs. 3.58 ± 1.31) and during the HVP for the HV (3.89 ± 2.08 vs. 1.27 ± 1.55). Image noise for group B was lower at 70 keV and 50 + 70 keV (15.54 ± 8.39 vs. 18.40 ± 4.97, P = 0.0004, and 18.97 ± 7.61 vs. 18.40 ± 4.97, P = 0.0691). The results show that the quality of the 50 + 70 keV fusion image was better than that of group A. Monochromatic energy levels of 40-70 keV and the 40 + 70 and 50 + 70 keV fusion images can increase vascular contrast, which is helpful for the diagnosis of BCS; we selected the 50 + 70 keV fusion image to acquire the best BCS images.
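The abstract does not spell out its SNR/CNR definitions; the conventional forms for vessel-to-liver measurements, assumed here, are

```latex
\mathrm{SNR} = \frac{S_{\mathrm{vessel}}}{\sigma}, \qquad
\mathrm{CNR} = \frac{S_{\mathrm{vessel}} - S_{\mathrm{liver}}}{\sigma},
```

with S the mean ROI attenuation and σ the image noise standard deviation.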
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burgmans, Mark Christiaan, E-mail: m.c.burgmans@lumc.nl; Harder, J. Michiel den, E-mail: chiel.den.harder@gmail.com; Meershoek, Philippa, E-mail: P.Meershoek@lumc.nl
Purpose: To determine the accuracy of automatic and manual co-registration methods for image fusion of three-dimensional computed tomography (CT) with real-time ultrasonography (US) for image-guided liver interventions. Materials and Methods: CT images of a skills phantom with liver lesions were acquired and co-registered to US using GE Logiq E9 navigation software. Manual co-registration was compared to automatic and semiautomatic co-registration using an active tracker. Also, manual point registration was compared to plane registration with and without an additional translation point. Finally, comparison was made between manual and automatic selection of reference points. In each experiment, accuracy of the co-registration method was determined by measurement of the residual displacement in phantom lesions by two independent observers. Results: Mean displacements for a superficial and deep liver lesion were comparable after manual and semiautomatic co-registration: 2.4 and 2.0 mm versus 2.0 and 2.5 mm, respectively. Both methods were significantly better than automatic co-registration: 5.9 and 5.2 mm residual displacement (p < 0.001; p < 0.01). The accuracy of manual point registration was higher than that of plane registration, the latter being heavily dependent on accurate matching of axial CT and US images by the operator. Automatic reference point selection resulted in significantly lower registration accuracy compared to manual point selection despite lower root-mean-square deviation (RMSD) values. Conclusion: The accuracy of manual and semiautomatic co-registration is better than that of automatic co-registration. For manual co-registration using a plane, choosing the correct plane orientation is an essential first step in the registration process. Automatic reference point selection based on RMSD values is error-prone.
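Residual displacement and RMSD as used above are straightforward once corresponding points are defined; a small sketch, assuming paired fiducial coordinates in millimeters:

```python
import numpy as np

def rmsd(points_a, points_b):
    """Root-mean-square deviation between paired (N, 3) point sets."""
    d = np.asarray(points_a, float) - np.asarray(points_b, float)
    return np.sqrt((d ** 2).sum(axis=1).mean())
```

Note that a low RMSD over the reference points does not guarantee low displacement at a target lesion, which is consistent with the study's finding that automatic point selection was error-prone despite lower RMSD values.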
Hosseini, Seyyed Abed; Khalilzadeh, Mohammad Ali; Naghibi-Sistani, Mohammad Bagher; Homam, Seyyed Mehran
2015-01-01
Background: This paper proposes a new emotional stress assessment system using multi-modal bio-signals. Electroencephalogram (EEG) is the reflection of brain activity and is widely used in clinical diagnosis and biomedical research. Methods: We design an efficient acquisition protocol to acquire the EEG signals in five channels (FP1, FP2, T3, T4 and Pz) and peripheral signals such as blood volume pulse, skin conductance (SC) and respiration, under image induction (calm-neutral and negatively excited images) for the participants. The visual stimuli images are selected from a subset of the International Affective Picture System database. The qualitative and quantitative evaluation of peripheral signals are used to select suitable segments of EEG signals for improving the accuracy of signal labeling according to emotional stress states. After pre-processing, wavelet coefficients, fractal dimension, and Lempel-Ziv complexity are used to extract the features of the EEG signals. The vast number of features leads to the problem of dimensionality, which is solved using the genetic algorithm as a feature selection method. Results: The results show that the average classification accuracy is 89.6% for two categories of emotional stress states using the support vector machine (SVM). Conclusion: This is a great improvement over other similar studies. We achieve a noticeable improvement of 11.3% in accuracy using the SVM classifier compared to previous studies. Therefore, the fusion of EEG and peripheral signals is more robust than the separate signals. PMID:26622979
Segment fusion of ToF-SIMS images.
Milillo, Tammy M; Miller, Mary E; Fischione, Remo; Montes, Angelina; Gardella, Joseph A
2016-06-08
The imaging capabilities of time-of-flight secondary ion mass spectrometry (ToF-SIMS) have not been used to their full potential in the analysis of polymer and biological samples. Imaging has been limited by the size of the dataset and the chemical complexity of the sample being imaged. Pixel- and segment-based image fusion algorithms commonly used in remote sensing, ecology, geography, and geology provide a way to improve the spatial resolution and classification of biological images. In this study, a sample of Arabidopsis thaliana was treated with silver nanoparticles and imaged with ToF-SIMS. These images provide insight into the uptake mechanism for the silver nanoparticles into the plant tissue, giving new understanding of the mechanism of uptake of heavy metals in the environment. The Munechika algorithm was programmed in-house and applied to achieve pixel-based fusion, which improved the spatial resolution of the image obtained. Multispectral and quadtree segment- or region-based fusion algorithms were performed using eCognition, a commercially available remote sensing software suite, and used to classify the images. The Munechika fusion improved the spatial resolution for the images containing silver nanoparticles, while the segment fusion allowed classification and fusion based on the tissue types in the sample, suggesting potential pathways for the uptake of the silver nanoparticles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arinilhaq,; Widita, Rena
2014-09-30
Optical coherence tomography (OCT) is often used in medical image acquisition for diagnosis because it is easy to use and inexpensive. Unfortunately, this type of examination produces a two-dimensional retinal image at the point of acquisition. Therefore, this study developed a method that combines and reconstructs 2-dimensional retinal images into a three-dimensional image to display the macular volume accurately. The system is built with three main stages: data acquisition, data extraction, and 3-dimensional reconstruction. At the data acquisition step, OCT produced six *.jpg images for each patient, which were extracted with MATLAB 2010a software into six one-dimensional arrays. The six arrays are combined into a 3-dimensional matrix using a kriging interpolation method in SURFER9, resulting in 3-dimensional graphics of the macula. Finally, the system provides three-dimensional color graphs based on the data distribution of the normal macula. The reconstruction system produces three-dimensional images with a size of 481 × 481 × h (retinal thickness) pixels.
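SURFER9's kriging is a GUI workflow and is not reproduced here; as a rough stand-in for the interpolation step (scattered retinal-thickness samples onto a regular grid), scipy's griddata can illustrate the reconstruction. The sample arrays below are hypothetical placeholders for the values extracted from the six images.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical scattered samples standing in for the values extracted
# from the six OCT *.jpg images (positions in pixels, thickness values).
rng = np.random.default_rng(0)
xy = rng.uniform(0, 480, size=(600, 2))
thickness = 100 + 20 * np.sin(xy[:, 0] / 80.0)

# Interpolate onto the 481 x 481 grid reported in the abstract
# (points outside the convex hull of the samples become NaN).
xi, yi = np.meshgrid(np.arange(481), np.arange(481))
surface = griddata(xy, thickness, (xi, yi), method="cubic")
```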
Direct heating of a laser-imploded core by ultraintense laser-driven ions.
Kitagawa, Y; Mori, Y; Komeda, O; Ishii, K; Hanayama, R; Fujita, K; Okihara, S; Sekine, T; Satoh, N; Kurita, T; Takagi, M; Watari, T; Kawashima, T; Kan, H; Nishimura, Y; Sunahara, A; Sentoku, Y; Nakamura, N; Kondo, T; Fujine, M; Azuma, H; Motohiro, T; Hioki, T; Kakeno, M; Miura, E; Arikawa, Y; Nagai, T; Abe, Y; Ozaki, S; Noda, A
2015-05-15
A novel direct core heating fusion process is introduced, in which a preimploded core is predominantly heated by energetic ions driven by LFEX, an extremely energetic ultrashort pulse laser. Consequently, we have observed the D(d,n)^{3}He-reacted neutrons (DD beam-fusion neutrons) with a yield of 5×10^{8} n/4π sr. Examination of the beam-fusion neutrons verified that the ions directly collide with the core plasma. While the hot electrons heat the whole core volume, the energetic ions deposit their energies locally in the core, forming hot spots for fuel ignition. As evidenced in the spectrum, the process simultaneously excited thermal neutrons with a yield of 6×10^{7} n/4π sr, raising the local core temperature from 0.8 to 1.8 keV. The one-dimensional hydrocode STAR 1D explains the shell implosion dynamics, including the beam fusion and thermal fusion initiated by fast deuterons and carbon ions. A two-dimensional collisional particle-in-cell code predicts the core heating due to resistive processes driven by hot electrons, and also the generation of fast ions, which could be an additional heating source when they reach the core. Since the core density is limited to 2 g/cm^{3} in the current experiment, neither hot electrons nor fast ions can efficiently deposit their energy and the neutron yield remains low. In future work, we will achieve a higher core density (>10 g/cm^{3}), where hot electrons could contribute more to the core heating via drag heating. Together with hot electrons, the ion contribution to fast ignition is indispensable for realizing high-gain fusion. By virtue of its core heating and ignition, the proposed scheme can potentially achieve high-gain fusion.
Image matrix processor for fast multi-dimensional computations
Roberson, G.P.; Skeate, M.F.
1996-10-15
An apparatus for multi-dimensional computation is disclosed which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination. 10 figs.
Enhancing hyperspectral spatial resolution using multispectral image fusion: A wavelet approach
NASA Astrophysics Data System (ADS)
Jazaeri, Amin
High spectral and spatial resolution images have a significant impact on remote sensing applications. Because both the spatial and spectral resolutions of spaceborne sensors are fixed by design and cannot be increased after launch, techniques such as image fusion must be applied to achieve such goals. This dissertation introduces the concept of wavelet fusion between hyperspectral and multispectral sensors in order to enhance the spectral and spatial resolution of a hyperspectral image. To test the robustness of this concept, images from Hyperion (a hyperspectral sensor) and the Advanced Land Imager (a multispectral sensor) were first co-registered and then fused using different wavelet algorithms. A regression-based fusion algorithm was also implemented for comparison purposes. The results show that images fused using a combined bi-linear wavelet-regression algorithm have less error than other methods when compared to the ground truth. In addition, the combined regression-wavelet algorithm shows more immunity to misalignment of the pixels due to the lack of proper registration. Quantitative measures of average mean square error show that the performance of wavelet-based methods degrades when the spatial resolution of the hyperspectral image becomes eight times lower than that of its corresponding multispectral image. Regardless of the fusion method used, the main challenge in image fusion is image registration, which is also a very time-intensive process. Because the combined regression-wavelet technique is computationally expensive, a hybrid technique based on regression and wavelet methods was also implemented to decrease computational overhead; however, the gain in computation speed was offset by the introduction of more error in the outcome. The secondary objective of this dissertation is to examine the feasibility and sensor requirements for image fusion for future NASA missions in order to be able to perform onboard image fusion. In this process, the main challenge of image registration was resolved by registering the input images using transformation matrices of previously acquired data. The composite image resulting from the fusion process matched the ground truth remarkably well, indicating the possibility of real-time onboard fusion processing.
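The dissertation's bi-linear wavelet-regression algorithm is not specified in this abstract; the sketch below shows only the generic wavelet substitution scheme such methods build on: decompose both co-registered bands, keep the coarse approximation of the lower-resolution band, and inject the detail coefficients of the higher-resolution band. Function and argument names are assumptions.

```python
import pywt

def wavelet_detail_injection(low_res_band, high_res_band, wavelet="db2", levels=2):
    """Fuse by keeping the coarse approximation of one band and the
    high-frequency detail coefficients of the other (same-size float arrays)."""
    c_low = pywt.wavedec2(low_res_band, wavelet, level=levels)
    c_high = pywt.wavedec2(high_res_band, wavelet, level=levels)
    fused = [c_low[0]] + c_high[1:]   # approximation from LR, details from HR
    return pywt.waverec2(fused, wavelet)
```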
Two-dimensional imaging of sprays with fluorescence, lasing, and stimulated Raman scattering.
Serpengüzel, A; Swindal, J C; Chang, R K; Acker, W P
1992-06-20
Two-dimensional fluorescence, lasing, and stimulated Raman scattering images of a hollow-cone nozzle spray are observed. The various constituents of the spray, such as vapor, liquid ligaments, small droplets, and large droplets, are distinguished by selectively imaging different colors associated with the inelastic light-scattering processes.
Navarro-Ramirez, Rodrigo; Berlin, Connor; Lang, Gernot; Hussain, Ibrahim; Janssen, Insa; Sloan, Stephen; Askin, Gulce; Avila, Mauricio J; Zubkov, Micaella; Härtl, Roger
2018-01-01
Two-dimensional radiographic methods have been proposed to evaluate the radiographic outcome after indirect decompression through extreme lateral interbody fusion (XLIF). However, the assessment of neural decompression in a single plane may underestimate the effect of indirect decompression on central canal and foraminal volumes. The present study aimed to assess the reliability and consistency of a novel 3-dimensional radiographic method that assesses neural decompression by volumetric analysis using a new generation of intraoperative fan-beam computed tomography scanner in patients undergoing XLIF. Prospectively collected data from 7 patients (9 levels) undergoing XLIF were retrospectively analyzed. Three independent, blinded raters using imaging analysis software performed volumetric measurements pre- and postoperatively to determine central canal and foraminal volumes. Intrarater and interrater reliability tests were performed to assess the reliability of this novel volumetric method. The interrater reliability between the three raters ranged from 0.800 to 0.952, P < 0.0001. The test-retest analysis on a randomly selected subset of three patients showed good to excellent internal reliability (range 0.78-1.00) for all three raters. There was a significant postoperative increase in mean volume of approximately 20% for the right foramen, left foramen, and central canal (P = 0.0472, P = 0.0066, and P = 0.0003, respectively). Here we demonstrate a new volumetric analysis technique that is feasible, reliable, and reproducible amongst independent raters for central canal and foraminal volumes in the lumbar spine using an intraoperative computed tomography scanner. Copyright © 2017. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Cheng, Boyang; Jin, Longxu; Li, Guoning
2018-06-01
The fusion of visible light and infrared images has been a significant subject in imaging science. As a new contribution to this field, a novel fusion framework for visible light and infrared images based on adaptive dual-channel unit-linking pulse coupled neural networks with singular value decomposition (ADS-PCNN) in the non-subsampled shearlet transform (NSST) domain is presented in this paper. First, the source images are decomposed into multi-direction and multi-scale sub-images by NSST. Furthermore, an improved novel sum modified-Laplacian (INSML) of the low-pass sub-image and an improved average gradient (IAVG) of the high-pass sub-images are input to stimulate the ADS-PCNN, respectively. To address the large spectral difference between infrared and visible light and the occurrence of black artifacts in fused images, a local structure information operator (LSI), derived from local-area singular value decomposition of each source image, serves as the adaptive linking strength that enhances fusion accuracy. Compared with PCNN models in other studies, the proposed method simplifies certain peripheral parameters, and the time matrix is utilized to decide the iteration number adaptively. A series of images from diverse scenes are used for fusion experiments, and the fusion results are evaluated subjectively and objectively. The results of the subjective and objective evaluations show that our algorithm exhibits superior fusion performance and is more effective than existing typical fusion techniques.
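The full ADS-PCNN model (NSST decomposition, SVD-derived linking strength, INSML/IAVG stimuli) is too involved to reproduce from the abstract; the sketch below is only the basic unit-linking PCNN firing-time machinery that such fusion rules are built on, with illustrative parameter values.

```python
import numpy as np
from scipy.ndimage import convolve

def unit_linking_pcnn(stimulus, beta=0.2, v_theta=20.0, a_theta=0.1, n_iter=50):
    """Return the first firing iteration of each neuron for a stimulus map."""
    s = stimulus / (stimulus.max() + 1e-8)        # normalized feeding input
    theta = np.ones_like(s)                       # dynamic threshold
    y = np.zeros_like(s)                          # binary firing output
    fire_time = np.full(s.shape, n_iter, dtype=int)
    kernel = np.ones((3, 3)); kernel[1, 1] = 0.0  # 8-neighborhood linking
    for t in range(n_iter):
        # Unit-linking: linking input is 1 if any neighbor fired.
        link = (convolve(y, kernel, mode="constant") > 0).astype(float)
        u = s * (1.0 + beta * link)               # internal activity
        y = (u > theta).astype(float)
        fire_time[(y > 0) & (fire_time == n_iter)] = t
        theta = theta * np.exp(-a_theta) + v_theta * y  # decay plus refractory kick
    return fire_time
```

In a PCNN-based fusion rule, the sub-band coefficient whose neuron fires earlier (smaller firing time) would typically be selected for the fused image.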
Satellite image fusion based on principal component analysis and high-pass filtering.
Metwalli, Mohamed R; Nasr, Ayman H; Allah, Osama S Farag; El-Rabaie, S; Abd El-Samie, Fathi E
2010-06-01
This paper presents an integrated method for the fusion of satellite images. Several commercial earth observation satellites carry dual-resolution sensors, which provide high spatial resolution or simply high-resolution (HR) panchromatic (pan) images and low-resolution (LR) multi-spectral (MS) images. Image fusion methods are therefore required to integrate a high-spectral-resolution MS image with a high-spatial-resolution pan image to produce a pan-sharpened image with high spectral and spatial resolutions. Some image fusion methods such as the intensity, hue, and saturation (IHS) method, the principal component analysis (PCA) method, and the Brovey transform (BT) method provide HR MS images, but with low spectral quality. Another family of image fusion methods, such as the high-pass-filtering (HPF) method, operates on the basis of the injection of high frequency components from the HR pan image into the MS image. This family of methods provides less spectral distortion. In this paper, we propose the integration of the PCA method and the HPF method to provide a pan-sharpened MS image with superior spatial resolution and less spectral distortion. The experimental results show that the proposed fusion method retains the spectral characteristics of the MS image and, at the same time, improves the spatial resolution of the pan-sharpened image.
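A hedged sketch of the proposed integration, PCA component substitution followed by high-pass detail injection, is given below; it is one plausible reading of the two steps named in the abstract, not the authors' exact algorithm, and all array names are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def pca_hpf_pansharpen(ms, pan, box=5):
    """ms: (H, W, B) multispectral image upsampled to the pan grid;
    pan: (H, W) panchromatic image."""
    h, w, b = ms.shape
    x = ms.reshape(-1, b).astype(float)
    mu = x.mean(axis=0)
    xc = x - mu
    _, _, vt = np.linalg.svd(xc, full_matrices=False)   # principal axes (PCA via SVD)
    pcs = xc @ vt.T                                     # principal component scores
    # PCA step: match pan to the first PC's mean/std, then substitute it.
    pc1 = pcs[:, 0]
    pan_flat = pan.astype(float).ravel()
    pcs[:, 0] = (pan_flat - pan_flat.mean()) / (pan_flat.std() + 1e-8) \
        * pc1.std() + pc1.mean()
    sharpened = (pcs @ vt + mu).reshape(h, w, b)
    # HPF step: inject residual high frequencies of pan into every band.
    detail = pan.astype(float) - uniform_filter(pan.astype(float), size=box)
    return sharpened + detail[..., None]
```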
Color-coded Live Imaging of Heterokaryon Formation and Nuclear Fusion of Hybridizing Cancer Cells.
Suetsugu, Atsushi; Matsumoto, Takuro; Hasegawa, Kosuke; Nakamura, Miki; Kunisada, Takahiro; Shimizu, Masahito; Saji, Shigetoyo; Moriwaki, Hisataka; Bouvet, Michael; Hoffman, Robert M
2016-08-01
Fusion of cancer cells has been studied for over half a century. However, the steps involved after initial fusion between cells, such as heterokaryon formation and nuclear fusion, have been difficult to observe in real time. In order to visualize these steps, we established cancer-cell sublines from the human HT-1080 fibrosarcoma, one expressing green fluorescent protein (GFP) linked to histone H2B in the nucleus and red fluorescent protein (RFP) in the cytoplasm, and the other expressing RFP (mCherry) linked to histone H2B in the nucleus and GFP in the cytoplasm. The two reciprocal color-coded sublines of HT-1080 cells were fused using the Sendai virus. The fused cells were cultured on plastic and observed using an Olympus FV1000 confocal microscope. Multi-nucleate (heterokaryotic) cancer cells, in addition to hybrid cancer cells with single- or multiple-fused nuclei, including fused mitotic nuclei, were observed among the fused cells. Heterokaryons with red, green, orange, and yellow nuclei were observed by confocal imaging, even in single hybrid cells. The orange and yellow nuclei indicate nuclear fusion; red and green nuclei remained unfused. Cell fusion with heterokaryon formation and subsequent nuclear fusion resulting in hybridization may be an important natural phenomenon between cancer cells that may make them more malignant. The ability to image the complex processes following cell fusion using reciprocal color-coded cancer cells will allow greater understanding of the genetic basis of malignancy. Copyright© 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
Remote sensing fusion based on guided image filtering
NASA Astrophysics Data System (ADS)
Zhao, Wenfei; Dai, Qinling; Wang, Leiguang
2015-12-01
In this paper, we propose a novel remote sensing fusion approach based on guided image filtering. The fused images preserve the spectral features of the original multispectral (MS) images well while enhancing spatial detail. Four quality assessment indexes are also introduced to evaluate the fusion effect in comparison with other fusion methods. Experiments were carried out on Gaofen-2, QuickBird, WorldView-2, and Landsat-8 images, and the results show the excellent performance of the proposed method.
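The abstract does not detail its fusion scheme, but the guided image filter itself (He et al.) is compact enough to sketch; a minimal gray-guide version with box-filter means:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-4):
    """Edge-preserving filter of src guided by guide (float arrays in [0, 1])."""
    size = 2 * radius + 1
    mean_i = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_ip = uniform_filter(guide * src, size)
    var_i = uniform_filter(guide * guide, size) - mean_i ** 2
    cov_ip = corr_ip - mean_i * mean_p
    a = cov_ip / (var_i + eps)     # local linear coefficients of src on guide
    b = mean_p - a * mean_i
    return uniform_filter(a, size) * guide + uniform_filter(b, size)
```

In pan-sharpening settings, such a filter is commonly used to transfer the spatial structure of a panchromatic guide into each MS band while limiting spectral distortion, which is consistent with the behavior the abstract reports.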
Wu, Guorong; Kim, Minjeong; Sanroma, Gerard; Wang, Qian; Munsell, Brent C.; Shen, Dinggang
2014-01-01
Multi-atlas patch-based label fusion methods have been successfully used to improve segmentation accuracy in many important medical image analysis applications. In general, to achieve label fusion a single target image is first registered to several atlas images; after registration, a label is assigned to each target point in the target image by determining the similarity between the underlying target image patch (centered at the target point) and the aligned image patch in each atlas image. To achieve the highest level of accuracy during the label fusion process, it is critical that the chosen patch similarity measure accurately capture the tissue/shape appearance of the anatomical structure. One major limitation of existing state-of-the-art label fusion methods is that they often apply a fixed-size image patch throughout the entire label fusion procedure. Doing so may severely affect the fidelity of the patch similarity measurement, which in turn may not adequately capture complex tissue appearance patterns expressed by the anatomical structure. To address this limitation, we advance the state of the art with three new label fusion contributions. First, each image patch is now characterized by a multi-scale feature representation that encodes both local and semi-local image information, increasing the accuracy of the patch-based similarity measurement. Second, to limit the possibility of the patch-based similarity measurement being wrongly guided by the presence of multiple anatomical structures in the same image patch, each atlas image patch is further partitioned into a set of label-specific partial image patches according to the existing labels. Since image information has now been semantically divided into different patterns, these new label-specific atlas patches make the label fusion process more specific and flexible. Lastly, in order to correct target points that are mislabeled during label fusion, a hierarchical, coarse-to-fine iterative label fusion approach is used that gradually reduces the patch size. To evaluate the accuracy of our label fusion approach, the proposed method was used to segment the hippocampus in the ADNI dataset and 7.0 tesla MR images, sub-cortical regions in the LONI LBPA40 dataset, mid-brain regions in the SATA dataset from the MICCAI 2013 segmentation challenge, and a set of key internal gray matter structures in the IXI dataset. In all experiments, the segmentation results of the proposed hierarchical label fusion method with multi-scale feature representations and label-specific atlas patches were more accurate than those of several well-known state-of-the-art label fusion methods. PMID:25463474
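Stripped of the multi-scale and label-specific refinements, the underlying patch-based weighting such methods build on is a similarity-weighted vote; a minimal sketch for one target point, with hypothetical names:

```python
import numpy as np

def fuse_label(target_patch, atlas_patches, atlas_labels, sigma=0.5):
    """Similarity-weighted voting over aligned atlas patches at one target point."""
    votes = {}
    for patch, label in zip(atlas_patches, atlas_labels):
        d2 = float(((target_patch - patch) ** 2).sum())  # patch dissimilarity
        w = np.exp(-d2 / (2.0 * sigma ** 2))             # Gaussian similarity weight
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)
```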
Graphene metamaterial spatial light modulator for infrared single pixel imaging.
Fan, Kebin; Suen, Jonathan Y; Padilla, Willie J
2017-10-16
High-resolution and hyperspectral imaging has long been a goal for multi-dimensional data fusion sensing applications - of interest for autonomous vehicles and environmental monitoring. In the long-wave infrared regime this quest has been impeded by size, weight, power, and cost issues, especially as focal-plane array detector sizes increase. Here we propose and experimentally demonstrate a new approach based on a metamaterial graphene spatial light modulator (GSLM) for infrared single pixel imaging. A frequency-division multiplexing (FDM) imaging technique is designed and implemented, which relies entirely on the electronic reconfigurability of the GSLM. We compare our approach to the more common raster-scan method and directly show that FDM image frame rates can be 64 times faster with no degradation of image quality. Our device and related imaging architecture are not restricted to the infrared regime and may be scaled to other bands of the electromagnetic spectrum. The study presented here opens a new approach for fast and efficient single pixel imaging utilizing graphene metamaterials with novel acquisition strategies.
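The FDM idea, tagging each spatial location with its own modulation frequency so a single detector time trace encodes the whole scene, can be sketched in a few lines. Frequencies here are chosen on integer FFT bins, an idealization of the modulator's electronic drive.

```python
import numpy as np

n_pix, n_samples = 64, 4096
freqs = np.arange(1, n_pix + 1)               # one FFT bin per pixel
scene = np.random.rand(n_pix)                 # unknown image (flattened)

# Single-detector trace: each pixel modulated at its own frequency.
t = np.arange(n_samples) / n_samples
signal = (scene[:, None] * np.cos(2 * np.pi * freqs[:, None] * t)).sum(axis=0)

# Demultiplex: amplitude at each pixel's frequency recovers that pixel.
spectrum = np.fft.rfft(signal) / n_samples
recovered = 2.0 * np.abs(spectrum[freqs])
assert np.allclose(recovered, scene, atol=1e-6)
```

Because all pixels are read simultaneously rather than one per measurement, the frame-rate advantage over raster scanning scales with the number of multiplexed frequencies, consistent with the 64x speedup reported.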
Three-dimensional imaging technology offers promise in medicine.
Karako, Kenji; Wu, Qiong; Gao, Jianjun
2014-04-01
Medical imaging plays an increasingly important role in the diagnosis and treatment of disease. Currently, most medical equipment has two-dimensional (2D) imaging systems. Although this conventional imaging largely satisfies clinical requirements, it cannot depict pathologic changes in three dimensions. The development of three-dimensional (3D) imaging technology has encouraged advances in medical imaging. Three-dimensional imaging technology offers doctors much more information on a pathology than 2D imaging, thus significantly improving diagnostic capability and the quality of treatment. Moreover, the combination of 3D imaging with augmented reality significantly improves the surgical navigation process. These advantages have made 3D imaging technology an important component of technological progress in the field of medical imaging.
Park, Jae-Hyeung; Kim, Hak-Rin; Kim, Yunhee; Kim, Joohwan; Hong, Jisoo; Lee, Sin-Doo; Lee, Byoungho
2004-12-01
A depth-enhanced three-dimensional-two-dimensional convertible display that uses a polymer-dispersed liquid crystal based on the principle of integral imaging is proposed. In the proposed method, a lens array is located behind a transmission-type display panel to form an array of point-light sources, and a polymer-dispersed liquid crystal is electrically controlled to pass or to scatter light coming from these point-light sources. Therefore, three-dimensional-two-dimensional conversion is accomplished electrically without any mechanical movement. Moreover, the nonimaging structure of the proposed method increases the expressible depth range considerably. We explain the method of operation and present experimental results.
Shankar, Hariharan; Reddy, Sapna
2012-07-01
Ultrasound imaging has gained acceptance in pain management interventions. Features of myofascial pain syndrome have been explored using ultrasound imaging and elastography, but there is a paucity of reports showing clinical benefit. This report describes three-dimensional features of taut bands and highlights the advantages of using two-dimensional ultrasound imaging to improve targeting of taut bands in deeper locations. A fifty-eight-year-old man with pain and decreased range of motion of the right shoulder was referred for further management of pain above the scapula after having failed conservative management for myofascial pain syndrome. Three-dimensional ultrasound images provided evidence of aberrancy in the architecture of the muscle fascicles around the taut bands compared with the adjacent normal muscle tissue during serial sectioning of the acquired image. On two-dimensional ultrasound imaging over the palpated taut band, areas of hyperechogenicity were visualized in the trapezius and supraspinatus muscles. Subsequently, the patient received ultrasound-guided real-time lidocaine injections to the trigger points with successful resolution of symptoms. This is a successful demonstration of the utility of ultrasound imaging of taut bands in the management of myofascial pain syndrome. The utility of this imaging modality in myofascial pain syndrome requires further clinical validation. Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Ma, Dan; Liu, Jun; Chen, Kai; Li, Huali; Liu, Ping; Chen, Huijuan; Qian, Jing
2016-04-01
In remote sensing fusion, the spatial details of a panchromatic (PAN) image and the spectral information of multispectral (MS) images are transferred into the fused image according to the characteristics of the human visual system. Thus, a remote sensing image fusion quality assessment index called the feature-based fourth-order correlation coefficient (FFOCC) is proposed. FFOCC is based on the feature-based coefficient concept. Spatial features related to the spatial details of the PAN image and spectral features related to the spectral information of the MS images are first extracted from the fused image. Then, the fourth-order correlation coefficient between the spatial and spectral features is calculated and treated as the assessment result. FFOCC was then compared with existing, widely used indices such as the Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS) and the Quality with No Reference (QNR) index. Results of the fusion and distortion experiments indicate that FFOCC is consistent with subjective evaluation. FFOCC significantly outperforms the other indices in evaluating fusion images that are produced by different fusion methods and that are distorted in spatial and spectral features by blurring, adding noise, and changing intensity. All the findings indicate that the proposed method is an objective and effective quality assessment for remote sensing image fusion.
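For reference, ERGAS is conventionally defined (standard remote sensing usage, not restated in the abstract) as

```latex
\mathrm{ERGAS} = 100\,\frac{h}{l}\,\sqrt{\frac{1}{N}\sum_{k=1}^{N}\left(\frac{\mathrm{RMSE}_k}{\mu_k}\right)^{2}},
```

where h/l is the ratio of the high (PAN) to low (MS) resolution pixel sizes, N the number of bands, RMSE_k the error of band k against the reference, and μ_k the band mean; lower values indicate better fusion, which is the benchmark FFOCC is validated against.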
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krengli, Marco; Ballare, Andrea; Cannillo, Barbara
2006-11-15
Purpose: This study aims to investigate the in vivo drainage of lymphatic spread by using the sentinel node (SN) technique and single-photon emission computed tomography (SPECT)-computed tomography (CT) image fusion, and to analyze the impact of such information on conformal pelvic irradiation. Methods and Materials: Twenty-three prostate cancer patients, candidates for radical prostatectomy already included in a trial studying the SN technique, were enrolled. CT and SPECT images were obtained after intraprostate injection of 115 MBq of {sup 99m}Tc-nanocolloid, allowing identification of SN and other pelvic lymph nodes. Target and nontarget structures, including lymph nodes identified by SPECT, were drawn on SPECT-CT fusion images. A three-dimensional conformal treatment plan was performed for each patient. Results: Single-photon emission computed tomography lymph nodal uptake was detected in 20 of 23 cases (87%). The SN was inside the pelvic clinical target volume (CTV{sub 2}) in 16 of 20 cases (80%) and received no less than the prescribed dose in 17 of 20 cases (85%). The most frequent locations of SN outside the CTV{sub 2} were the common iliac and presacral lymph nodes. Sixteen of the 32 other lymph nodes (50%) identified by SPECT were found outside the CTV{sub 2}. Overall, the SN and other intrapelvic lymph nodes identified by SPECT were not included in the CTV{sub 2} in 5 of 20 (25%) patients. Conclusions: The study of lymphatic drainage can contribute to a better knowledge of the in vivo potential pattern of lymph node metastasis in prostate cancer and can lead to a modification of treatment volume with consequent optimization of pelvic irradiation.
Turkbey, Baris; Xu, Sheng; Kruecker, Jochen; Locklin, Julia; Pang, Yuxi; Shah, Vijay; Bernardo, Marcelino; Baccala, Angelo; Rastinehad, Ardeshir; Benjamin, Compton; Merino, Maria J; Wood, Bradford J; Choyke, Peter L; Pinto, Peter A
2011-03-29
During transrectal ultrasound (TRUS)-guided prostate biopsies, the actual location of the biopsy site is rarely documented. Here, we demonstrate the capability of TRUS-magnetic resonance imaging (MRI) image fusion to document the biopsy site and correlate biopsy results with multi-parametric MRI findings. Fifty consecutive patients (median age 61 years) with a median prostate-specific antigen (PSA) level of 5.8 ng/ml underwent 12-core TRUS-guided biopsy of the prostate. Pre-procedural T2-weighted magnetic resonance images were fused to TRUS. A disposable needle guide with miniature tracking sensors was attached to the TRUS probe to enable fusion with MRI. Real-time TRUS images during biopsy and the corresponding tracking information were recorded. Each biopsy site was superimposed onto the MRI. Each biopsy site was classified as positive or negative for cancer based on the results of each MRI sequence. Sensitivity, specificity, and receiver operating characteristic (ROC) area under the curve (AUC) values were calculated for multi-parametric MRI. Gleason scores for each multi-parametric MRI pattern were also evaluated. Six hundred five systematic biopsy cores were analyzed in 50 patients, of whom 20 patients had 56 positive cores. MRI identified 34 of 56 positive cores. Overall, sensitivity, specificity, and ROC area values for multi-parametric MRI were 0.607, 0.727, and 0.667, respectively. TRUS-MRI fusion after biopsy can be used to document the location of each biopsy site, which can then be correlated with MRI findings. Based on correlation with tracked biopsies, T2-weighted MRI and apparent diffusion coefficient maps derived from diffusion-weighted MRI are the most sensitive sequences, whereas the addition of delayed contrast enhancement MRI and three-dimensional magnetic resonance spectroscopy demonstrated higher specificity, consistent with results obtained using radical prostatectomy specimens.
Development of a robust MRI fiducial system for automated fusion of MR-US abdominal images.
Favazza, Christopher P; Gorny, Krzysztof R; Callstrom, Matthew R; Kurup, Anil N; Washburn, Michael; Trester, Pamela S; Fowler, Charles L; Hangiandreou, Nicholas J
2018-05-21
We present the development of a two-component magnetic resonance (MR) fiducial system, that is, a fiducial marker device combined with an auto-segmentation algorithm, designed to be paired with existing ultrasound probe tracking and image fusion technology to automatically fuse MR and ultrasound (US) images. The fiducial device consisted of four ~6.4 mL cylindrical wells filled with 1 g/L copper sulfate solution. The algorithm was designed to automatically segment the device in clinical abdominal MR images. The algorithm's detection rate and repeatability were investigated through a phantom study and in human volunteers. The detection rate was 100% in all phantom and human images. The center-of-mass of the fiducial device was robustly identified with maximum variations of 2.9 mm in position and 0.9° in angular orientation. In volunteer images, average differences between algorithm-measured inter-marker spacings and actual separation distances were 0.53 ± 0.36 mm. "Proof-of-concept" automatic MR-US fusions were conducted with sets of images from both a phantom and volunteer using a commercial prototype system, which was built based on the above findings. Image fusion accuracy was measured to be within 5 mm for breath-hold scanning. These results demonstrate the capability of this approach to automatically fuse US and MR images acquired across a wide range of clinical abdominal pulse sequences. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Different source image fusion based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Xiao; Piao, Yan
2016-03-01
Video image fusion uses technical means to make the video streams obtained by different image sensors complement each other, yielding video that is rich in information and well suited to the human visual system. Infrared cameras have strong penetrating power in harsh environments such as smoke, fog, and low light, but their ability to capture image detail is poor and their output does not suit the human visual system. Visible light imaging alone can provide detailed, high-resolution images suited to the visual system, but visible images are easily affected by the external environment. The fusion of infrared and visible video involves algorithms of high complexity and computational load that occupy considerable memory resources and demand high clock rates; most implementations are in software such as C++ or C, and few are based on hardware platforms. In this paper, based on the imaging characteristics of infrared and visible images, software and hardware are combined: the registration parameters are obtained in MATLAB, and a gray-level weighted average fusion method is implemented on the FPGA hardware platform. The resulting fused image effectively increases the amount of information acquired from the scene.
OPTICAL PROCESSING OF INFORMATION: Multistage optoelectronic two-dimensional image switches
NASA Astrophysics Data System (ADS)
Fedorov, V. B.
1994-06-01
The implementation principles and the feasibility of construction of high-throughput multistage optoelectronic switches, capable of transmitting data in the form of two-dimensional images along interconnected pairs of optical channels, are considered. Different ways of realising compact switches are proposed. They are based on the use of polarisation-sensitive elements, arrays of modulators of the plane of polarisation of light, arrays of objectives, and free-space optics. Optical systems of such switches can theoretically ensure that the resolution and optical losses in two-dimensional image transmission are limited only by diffraction. Estimates are obtained of the main maximum-performance parameters of the proposed optoelectronic image switches.
[Perceptual sharpness metric for visible and infrared color fusion images].
Gao, Shao-Shu; Jin, Wei-Qi; Wang, Xia; Wang, Ling-Xue; Luo, Yuan
2012-12-01
For visible and infrared color fusion images, an objective sharpness assessment model is proposed to measure the clarity of detail and edge definition of the fusion image. First, the contrast sensitivity function (CSF) of the human visual system is used to remove frequency components to which the eye is insensitive under given viewing conditions. Second, a perceptual contrast model, which takes the human luminance masking effect into account, is proposed based on the local band-limited contrast model. Finally, the perceptual contrast is calculated in the region of interest (containing image details and edges) of the fusion image to evaluate image perceptual sharpness. Experimental results show that the proposed perceptual sharpness metric provides predictions that are more closely matched to human perceptual evaluations than five existing sharpness (blur) metrics for color images. The proposed metric can effectively evaluate the perceptual sharpness of color fusion images.
Tools and Methods for the Registration and Fusion of Remotely Sensed Data
NASA Technical Reports Server (NTRS)
Goshtasby, Arthur Ardeshir; LeMoigne, Jacqueline
2010-01-01
Tools and methods for image registration were reviewed, with attention to the registration of remotely sensed data at NASA. Image fusion techniques were also reviewed, challenges in the registration of remotely sensed data were discussed, and examples of image registration and image fusion were given.
Yang, Hai-song; Lu, Xu-hua; Yang, Li-li; Yan, Wang-jun; Yuan, Wen; Chen, Yu
2009-01-01
Ossification of the posterior longitudinal ligament (OPLL) is a common spinal disorder that presents with or without cervical myelopathy. Furthermore, there is evidence suggesting that OPLL often coexists with cervical disc hernia (CDH), and that the latter is the more important compression factor. To raise spinal surgeons' awareness of CDH in OPLL, we performed a retrospective study of 142 patients with radiologically proven OPLL who had received surgery between January 2004 and January 2008 in our hospital. Plain radiography, three-dimensional computed tomography reconstruction (3D CT), and magnetic resonance imaging (MRI) of the cervical spine were all performed. Twenty-six patients with obvious CDH (15 segmental-type, nine mixed-type, two continuous-type) were selected via clinical and radiographic features and intraoperative findings. By MRI, the most commonly involved level was C5/6, followed by C3/4, C4/5, and C6/7. The areas of greatest spinal cord compression were at the disc levels because of herniated cervical discs. Eight patients were decompressed via anterior cervical discectomy and fusion (ACDF), 13 patients via anterior cervical corpectomy and fusion (ACCF), and five patients via ACDF combined with posterior laminectomy and fusion. The outcomes were all favorable. In conclusion, surgeons should consider the potential for CDH when performing spinal cord decompression and deciding on the surgical approach in patients presenting with OPLL. PMID:20012451
Fusion Imaging for Procedural Guidance.
Wiley, Brandon M; Eleid, Mackram F; Thaden, Jeremy J
2018-05-01
The field of percutaneous structural heart interventions has grown tremendously in recent years. This growth has fueled the development of new imaging protocols and technologies in parallel to help facilitate these minimally invasive procedures. Fusion imaging is an exciting new technology that combines the strengths of two imaging modalities and has the potential to improve procedural planning and the safety of many commonly performed transcatheter procedures. In this review we discuss the basic concepts of fusion imaging along with the relative strengths and weaknesses of static vs dynamic fusion imaging modalities. This review will focus primarily on echocardiographic-fluoroscopic fusion imaging and its application in commonly performed transcatheter structural heart procedures. Copyright © 2017 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
NASA Astrophysics Data System (ADS)
Witharana, Chandi; LaRue, Michelle A.; Lynch, Heather J.
2016-03-01
Remote sensing is a rapidly developing tool for mapping the abundance and distribution of Antarctic wildlife. While both panchromatic and multispectral imagery have been used in this context, image fusion techniques have received little attention. We tasked seven widely used fusion algorithms - Ehlers fusion, hyperspherical color space fusion, high-pass fusion, principal component analysis (PCA) fusion, Gram-Schmidt fusion, University of New Brunswick fusion, and wavelet-PCA fusion - with resolution-enhancing a series of single-date QuickBird-2 and Worldview-2 image scenes comprising penguin guano, seals, and vegetation. Fused images were assessed for spectral and spatial fidelity using a variety of quantitative quality indicators and visual inspection methods. Our visual evaluation selected the high-pass fusion algorithm and the University of New Brunswick fusion algorithm as best for manual wildlife detection, while the quantitative assessment suggested the Gram-Schmidt fusion algorithm and the University of New Brunswick fusion algorithm as best for automated classification. The hyperspherical color space fusion algorithm exhibited mediocre results in terms of spectral and spatial fidelity. The PCA fusion algorithm showed spatial superiority at the expense of spectral inconsistencies. The Ehlers fusion algorithm and the wavelet-PCA algorithm showed the weakest performances. As remote sensing becomes a more routine method of surveying Antarctic wildlife, these benchmarks will provide guidance for image fusion and pave the way for more standardized products for specific types of wildlife surveys.
Abdomen and spinal cord segmentation with augmented active shape models.
Xu, Zhoubing; Conrad, Benjamin N; Baucom, Rebeccah B; Smith, Seth A; Poulose, Benjamin K; Landman, Bennett A
2016-07-01
Active shape models (ASMs) have been widely used for extracting human anatomies in medical images, given their capability for shape regularization and topology preservation. However, sensitivity to model initialization and local correspondence search often undermines their performance, especially in highly variable contexts in computed tomography (CT) and magnetic resonance (MR) images. In this study, we propose an augmented ASM (AASM) that integrates multi-atlas label fusion (MALF) and level set (LS) techniques into the traditional ASM framework. Using AASM, landmark updates are optimized globally via a region-based LS evolution applied to the probability map generated from MALF. This augmentation effectively extends the search range for correspondent landmarks while reducing sensitivity to the image context, improving segmentation robustness. We propose the AASM framework as a two-dimensional segmentation technique targeting structures with one axis of regularity. We apply the AASM approach to abdomen CT and spinal cord (SC) MR segmentation challenges. On 20 CT scans, the AASM segmentation of the whole abdominal wall enables subcutaneous/visceral fat measurement, with high correlation to measurements derived from manual segmentation. On 28 3T MR scans, AASM yields better performance than other state-of-the-art approaches in segmenting the white/gray matter of the SC.
Development of a fusion approach selection tool
NASA Astrophysics Data System (ADS)
Pohl, C.; Zeng, Y.
2015-06-01
During the last decades the number and quality of remote sensing satellite sensors available for Earth observation has grown significantly. The amount of available multi-sensor imagery, along with its increased spatial and spectral resolution, presents new challenges to Earth scientists. With a Fusion Approach Selection Tool (FAST) the remote sensing community would obtain access to an optimized and improved image processing technology. Remote sensing image fusion is a means to produce images containing information that is not inherent in any single image alone. In the meantime the user has access to sophisticated commercial image fusion techniques, plus the option to tune the parameters of each individual technique to match the anticipated application. This leaves the operator with an uncountable number of options for combining remote sensing images, not to mention the selection of the appropriate images, resolutions, and bands. Image fusion can be a machine- and time-consuming endeavour. In addition, it requires knowledge of remote sensing, image fusion, digital image processing, and the application. FAST shall provide the user with a quick overview of processing flows to choose from to reach the target. FAST will ask for the available images, application parameters, and desired information, and will process this input to produce a workflow for quickly obtaining the best results. It will optimize data selection and image fusion techniques, and it will provide an overview of the possible results from which the user can choose the best. FAST will enable even inexperienced users to use advanced processing methods to maximize the benefit of multi-sensor image exploitation.
Fusion of MODIS and Landsat-8 Surface Temperature Images: A New Approach
Hazaymeh, Khaled; Hassan, Quazi K.
2015-01-01
Here, our objective was to develop a spatio-temporal image fusion model (STI-FM) for enhancing the temporal resolution of Landsat-8 land surface temperature (LST) images by fusing them with LST images acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS), and to implement the developed algorithm over a heterogeneous semi-arid study area in Jordan, Middle East. The STI-FM technique consists of two major components: (i) establishing a linear relationship between two consecutive MODIS 8-day composite LST images acquired at time 1 and time 2; and (ii) applying that relationship to a Landsat-8 LST image acquired at time 1 in order to predict a synthetic Landsat-8 LST image at time 2. Strong linear relationships existed between the two consecutive MODIS LST images (i.e., r2, slopes, and intercepts were in the ranges 0.93–0.94, 0.94–0.99, and 2.97–20.07, respectively). We evaluated the synthetic LST images qualitatively and found high visual agreement with the actual Landsat-8 LST images. In addition, we conducted quantitative evaluations of these synthetic images and found strong agreement with the actual Landsat-8 LST images; for example, r2, root mean square error (RMSE), and absolute average difference (AAD) values were in the ranges 0.84–0.90, 0.061–0.080, and 0.003–0.004, respectively. PMID:25730279
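The two STI-FM components lend themselves to a compact sketch. The code below (Python/NumPy) fits a scene-wide linear relationship between the two MODIS composites and applies it to the Landsat-8 image at time 1; whether the paper fits the regression globally or per pixel is not stated in the abstract, so the scene-wide fit here is an assumption.

```python
import numpy as np

def stifm_predict(modis_t1, modis_t2, landsat_t1):
    # (i) scene-wide linear fit: MODIS_t2 ~= slope * MODIS_t1 + intercept
    slope, intercept = np.polyfit(modis_t1.ravel(), modis_t2.ravel(), 1)
    # (ii) apply the same temporal relationship at Landsat-8 resolution
    return slope * landsat_t1 + intercept
```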
Fusion of 3D laser scanner and depth images for obstacle recognition in mobile applications
NASA Astrophysics Data System (ADS)
Budzan, Sebastian; Kasprzyk, Jerzy
2016-02-01
The problem of obstacle detection and recognition or, more generally, scene mapping is one of the most investigated problems in computer vision, especially in mobile applications. In this paper, a fused optical system is proposed that combines depth information and color images gathered from the Microsoft Kinect sensor with 3D laser range scanner data for obstacle detection and ground estimation in real-time mobile systems. The algorithm consists of feature extraction in the laser range images, processing of the depth information from the Kinect sensor, fusion of the sensor information, and classification of the data into two separate categories: road and obstacle. Exemplary results are presented, and it is shown that fusing information gathered from different sources increases the effectiveness of obstacle detection in different scenarios and can be used successfully for road surface mapping.
Production and characterization of pure cryogenic inertial fusion targets
NASA Astrophysics Data System (ADS)
Boyd, B. A.; Kamerman, G. W.
An experimental cryogenic inertial fusion target generator and two optical techniques for automated target inspection are described. The generator produces 100-micron-diameter solid hydrogen spheres at a rate compatible with the fueling requirements of conceptual inertial fusion power plants. A jet of liquefied hydrogen is disrupted into droplets by an ultrasonically excited nozzle. The droplets solidify into microspheres while falling through a chamber maintained below the hydrogen triple-point pressure. Stable operation of the generator has been demonstrated for up to three hours. The optical inspection techniques are computer-aided photomicrography and coarse diffraction pattern analysis (CDPA). The photomicrography system uses a conventional microscope coupled to a computer by a solid-state camera and digital image memory. The computer enhances the stored image and performs feature extraction to determine pellet parameters. The CDPA technique uses Fourier transform optics and a special detector array to perform optical processing of a target image.
Han, Lei; Shi, Lu; Yang, Yiling; Song, Dalei
2014-01-01
Geostationary meteorological satellite infrared (IR) channel data contain important spectral information for meteorological research and applications, but their spatial resolution is relatively low. The objective of this study is to obtain higher-resolution IR images. One common method of increasing resolution fuses the IR data with high-resolution visible (VIS) channel data. However, most existing image fusion methods focus only on visual performance and often fail to take into account the thermal physical properties of the IR images. As a result, spectral distortion occurs frequently. To tackle this problem, we propose a thermal physical properties-based correction method for fusing geostationary meteorological satellite IR and VIS images. In our two-step process, the high-resolution structural features of the VIS image are first extracted and incorporated into the IR image using a regular multi-resolution fusion approach, such as multiwavelet analysis. This step significantly increases the visual detail in the IR image, but fake thermal information may be introduced. Next, the Stefan-Boltzmann law is applied to correct the distortion, retaining or recovering the thermal infrared nature of the fused image. The results of both qualitative and quantitative evaluation demonstrate that the proposed physical correction method both improves the spatial resolution and preserves the infrared thermal properties. PMID:24919017
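A hedged sketch of the second (correction) step: after fusion has injected VIS detail, each block of the fused brightness-temperature image is rescaled so that its mean emitted power, proportional to T^4 by the Stefan-Boltzmann law, matches the original coarse IR pixel. The 4x4 block size, the uniform per-block rescaling, and the assumption that image dimensions divide evenly are illustrative, not the authors' exact procedure.

```python
import numpy as np

def stefan_boltzmann_correct(fused_T, ir_T, factor=4):
    """fused_T: high-res fused brightness temperature; ir_T: original coarse IR.
    Assumes fused_T dimensions are exactly `factor` times those of ir_T."""
    corrected = fused_T.astype(float).copy()
    for i in range(0, fused_T.shape[0], factor):
        for j in range(0, fused_T.shape[1], factor):
            block = corrected[i:i+factor, j:j+factor]
            # Scale the block so its mean T^4 (emitted power) matches the
            # coarse IR pixel covering the same area.
            ratio = ir_T[i // factor, j // factor] ** 4 / (np.mean(block ** 4) + 1e-12)
            corrected[i:i+factor, j:j+factor] = block * ratio ** 0.25
    return corrected
```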
A novel framework of tissue membrane systems for image fusion.
Zhang, Zulin; Yi, Xinzhong; Peng, Hong
2014-01-01
This paper proposes a tissue membrane system-based framework to deal with the optimal image fusion problem. A spatial-domain fusion algorithm is given, and a tissue membrane system of multiple cells is used as its computing framework. Based on the multicellular structure and inherent communication mechanism of the tissue membrane system, an improved velocity-position model is developed. The performance of the fusion framework is studied in comparison with several traditional fusion methods as well as genetic algorithm (GA)-based and differential evolution (DE)-based spatial-domain fusion methods. Experimental results show that the proposed fusion framework is superior or comparable to the other methods and can be used efficiently for image fusion.
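The velocity-position model evolved inside the membrane structure resembles a particle swarm update. As a loose illustration only, the sketch below optimizes a single spatial-domain blending weight w in fused = w*A + (1-w)*B by maximizing image entropy with a plain velocity-position (PSO-style) update; the multicellular communication of the actual P-system framework, and the choice of entropy as the objective, are assumptions.

```python
import numpy as np

def entropy(img):
    # Shannon entropy of the gray-level histogram; img assumed in [0, 1]
    hist, _ = np.histogram(img, bins=256, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def pso_fusion_weight(A, B, n_particles=10, iters=30):
    rng = np.random.default_rng(0)
    w = rng.uniform(0, 1, n_particles)   # positions: candidate blending weights
    v = np.zeros(n_particles)            # velocities
    pbest = w.copy()
    pbest_val = np.array([entropy(x * A + (1 - x) * B) for x in w])
    for _ in range(iters):
        gbest = pbest[pbest_val.argmax()]
        # Velocity-position update with inertia and cognitive/social terms
        v = (0.7 * v + 1.5 * rng.random(n_particles) * (pbest - w)
             + 1.5 * rng.random(n_particles) * (gbest - w))
        w = np.clip(w + v, 0.0, 1.0)
        vals = np.array([entropy(x * A + (1 - x) * B) for x in w])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = w[better], vals[better]
    return pbest[pbest_val.argmax()]
```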
Sedai, Suman; Garnavi, Rahil; Roy, Pallab; Xi Liang
2015-08-01
Multi-atlas segmentation first registers each atlas image to the target image and transfers the label of the atlas image to the coordinate system of the target image. The transferred labels are then combined using a label fusion algorithm. In this paper, we propose a novel label fusion method which aggregates discriminative learning and generative modeling for segmentation of cardiac MR images. First, a probabilistic random forest classifier is trained as a discriminative model to obtain the prior probability of a label at a given voxel of the target image. Then, a probability distribution of image patches is modeled using a Gaussian mixture model for each label, providing the likelihood of the voxel belonging to the label. The final label posterior is obtained by combining the classification score and the likelihood score under Bayes' rule. A comparative study performed on the MICCAI 2013 SATA Segmentation Challenge demonstrates that our proposed hybrid label fusion algorithm is more accurate than five other state-of-the-art label fusion methods. The proposed method obtains Dice similarity coefficients of 0.94 and 0.92 in segmenting the epicardium and endocardium, respectively. Moreover, our label fusion method achieves more accurate segmentation results than four other label fusion methods.
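The Bayesian combination at the heart of the method can be sketched with off-the-shelf scikit-learn components: a random forest supplies the discriminative prior, a per-label Gaussian mixture supplies the generative likelihood, and the posterior is their product. Feature extraction, patch handling, and registration are omitted; the toy data and model parameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 16))        # per-voxel feature vectors (toy data)
y_train = (X_train[:, 0] > 0).astype(int)   # labels: 0 = background, 1 = structure

# Discriminative prior: P(label | voxel features)
prior_model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
# Generative likelihood: one patch-distribution model per label
gmms = {c: GaussianMixture(n_components=2, random_state=0).fit(X_train[y_train == c])
        for c in (0, 1)}

X_test = rng.normal(size=(10, 16))
prior = prior_model.predict_proba(X_test)                       # columns follow classes_ = [0, 1]
loglik = np.column_stack([gmms[c].score_samples(X_test) for c in (0, 1)])
posterior = prior * np.exp(loglik)                              # Bayes: prior * likelihood
labels = posterior.argmax(axis=1)
```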
Log-Gabor Energy Based Multimodal Medical Image Fusion in NSCT Domain
Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan
2014-01-01
Multimodal medical image fusion is a powerful tool in clinical applications such as noninvasive diagnosis, image-guided radiotherapy, and treatment planning. In this paper, a novel nonsubsampled contourlet transform (NSCT) based method for multimodal medical image fusion is presented, which is approximately shift invariant and can effectively suppress pseudo-Gibbs phenomena. The source medical images are initially transformed by NSCT, followed by fusion of the low- and high-frequency components. Phase congruency, which provides a contrast- and brightness-invariant representation, is applied to fuse the low-frequency coefficients, whereas the Log-Gabor energy, which can efficiently determine the frequency coefficients from the clear and detailed parts, is employed to fuse the high-frequency coefficients. The proposed fusion method has been compared with discrete wavelet transform (DWT), fast discrete curvelet transform (FDCT), and dual-tree complex wavelet transform (DTCWT) based image fusion methods and other NSCT-based methods. Visual and quantitative experimental results indicate that the proposed fusion method obtains more effective and accurate fusion results for multimodal medical images than the other algorithms. Further, the applicability of the proposed method has been demonstrated by a clinical example involving images of a woman affected by a recurrent tumor. PMID:25214889
[A study on medical image fusion].
Zhang, Er-hu; Bian, Zheng-zhong
2002-09-01
Five algorithms for medical image fusion are analyzed, together with their advantages and disadvantages. Four kinds of quantitative evaluation criteria for the quality of image fusion algorithms are proposed, which will provide guidance for future research.
NASA Astrophysics Data System (ADS)
Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Lee, Shin-Jye; He, Kangjian
2018-01-01
In order to improve the performance of infrared and visual image fusion and provide better visual effects, this paper proposes a hybrid fusion method for infrared and visual images that combines the discrete stationary wavelet transform (DSWT), the discrete cosine transform (DCT) and local spatial frequency (LSF). The proposed method has three key processing steps. Firstly, DSWT is employed to decompose the important features of the source image into a series of sub-images at different levels and spatial frequencies. Secondly, DCT is used to separate the significant details of the sub-images according to the energy of the different frequencies. Thirdly, LSF is applied to enhance the regional features of the DCT coefficients, which is helpful for image feature extraction. Several frequently used image fusion methods and evaluation metrics are employed to evaluate the validity of the proposed method. The experiments indicate that the proposed method achieves a good fusion effect and is more efficient than other conventional image fusion methods.
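The local spatial frequency measure in the third step is straightforward to sketch: row and column frequencies over a window, combined as LSF = sqrt(RF^2 + CF^2). The code below (Python/NumPy) uses it only to drive a simple per-window selection between two sources; the DSWT and DCT stages of the full method are omitted, and the window size is an assumption.

```python
import numpy as np

def local_spatial_frequency(img, win=8):
    h, w = img.shape
    lsf = np.zeros((h // win, w // win))
    for i in range(lsf.shape[0]):
        for j in range(lsf.shape[1]):
            b = img[i*win:(i+1)*win, j*win:(j+1)*win].astype(float)
            rf = np.sqrt(np.mean(np.diff(b, axis=1) ** 2))  # row frequency
            cf = np.sqrt(np.mean(np.diff(b, axis=0) ** 2))  # column frequency
            lsf[i, j] = np.hypot(rf, cf)
    return lsf

def select_by_lsf(a, b, win=8):
    # Keep, per window, the source with the higher local spatial frequency;
    # any leftover border (when dimensions are not multiples of win) stays from a.
    fused = a.astype(float).copy()
    mask = local_spatial_frequency(b, win) > local_spatial_frequency(a, win)
    for i, j in zip(*np.nonzero(mask)):
        fused[i*win:(i+1)*win, j*win:(j+1)*win] = b[i*win:(i+1)*win, j*win:(j+1)*win]
    return fused
```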
PET-CT image fusion using random forest and à-trous wavelet transform.
Seal, Ayan; Bhattacharjee, Debotosh; Nasipuri, Mita; Rodríguez-Esparragón, Dionisio; Menasalvas, Ernestina; Gonzalo-Martin, Consuelo
2018-03-01
New image fusion rules for multimodal medical images are proposed in this work. The fusion rules are defined by a random forest learning algorithm and a translation-invariant à-trous wavelet transform (AWT). The proposed method is threefold. First, the source images are decomposed into approximation and detail coefficients using the AWT. Second, a random forest is used to choose pixels from the approximation and detail coefficients to form the approximation and detail coefficients of the fused image. Lastly, the inverse AWT is applied to reconstruct the fused image. All experiments have been performed on 198 slices of both computed tomography and positron emission tomography images of a patient. A traditional fusion method based on the Mallat wavelet transform has also been implemented on these slices. A new image fusion performance measure, along with four existing measures, is presented, which helps to compare the performance of the two pixel-level fusion methods. The experimental results clearly indicate that the proposed method outperforms the traditional method in terms of visual and quantitative quality and that the new measure is meaningful. Copyright © 2017 John Wiley & Sons, Ltd.
Image Fusion of CT and MR with Sparse Representation in NSST Domain
Qiu, Chenhui; Wang, Yuanyuan; Zhang, Huan; Xia, Shunren
2017-01-01
Multimodal image fusion techniques can integrate information from different medical images to produce an informative image that is more suitable for joint diagnosis, preoperative planning, intraoperative guidance, and interventional treatment. The fusion of CT images with different MR modalities is studied in this paper. Firstly, the CT and MR images are both transformed into the nonsubsampled shearlet transform (NSST) domain, yielding low-frequency and high-frequency components. The high-frequency components are then merged using the absolute-maximum rule, while the low-frequency components are merged by a sparse representation (SR) based approach, and a dynamic group sparsity recovery (DGSR) algorithm is proposed to improve the performance of the SR-based approach. Finally, the fused image is obtained by performing the inverse NSST on the merged components. The proposed fusion method is tested on a number of clinical CT and MR images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method provides better fusion results in terms of subjective quality and objective evaluation. PMID:29250134
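The coefficient-level fusion rules generalize across transforms, so they can be sketched with the PyWavelets discrete wavelet transform standing in for NSST (which has no common off-the-shelf Python implementation). High-frequency subbands are merged by the absolute-maximum rule as in the paper; the low-frequency band is simply averaged here in place of the sparse-representation approach.

```python
import numpy as np
import pywt  # PyWavelets

def fuse_ct_mr(ct, mr, wavelet="db2", level=2):
    ca = pywt.wavedec2(ct, wavelet, level=level)
    cb = pywt.wavedec2(mr, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                 # low-pass band: simple average
    for da, db in zip(ca[1:], cb[1:]):              # high-pass bands: abs-max per subband
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```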
Data processing from lobster eye type optics
NASA Astrophysics Data System (ADS)
Nentvich, Ondrej; Stehlikova, Veronika; Urban, Martin; Hudec, Rene; Sieger, Ladislav
2017-05-01
Wolter I optics are commonly used for imaging in the X-ray spectrum. The system uses two reflections; at higher energies it is not very efficient, but it has very good optical resolution. The Lobster Eye is another type of optics that also uses two reflections to focus rays, in Schmidt's or Angel's arrangement. It is also possible to use Lobster Eye optics as two independent one-dimensional optics. This paper describes the advantages of one-dimensional and two-dimensional Lobster Eye optics in Schmidt's arrangement and the associated data processing, namely determining the number of sources in a wide field of view. Two-dimensional (2D) optics are suitable for detecting the number of point X-ray sources and their magnitudes, but long exposures are necessary because the 2D system has much lower transmissivity than one-dimensional (1D) optics, due to the double reflection. For this reason, among others, two 1D optics are better suited to sources of lower magnitude. In this case, additional image processing is necessary to achieve a 2D image. This article describes an approach to image reconstruction and the advantages of two 1D optics without significant losses of transmissivity.
Multiple brain atlas database and atlas-based neuroimaging system.
Nowinski, W L; Fang, A; Nguyen, B T; Raphel, J K; Jagannathan, L; Raghavan, R; Bryan, R N; Miller, G A
1997-01-01
For the purpose of developing multiple, complementary, fully labeled electronic brain atlases and an atlas-based neuroimaging system for analysis, quantification, and real-time manipulation of cerebral structures in two and three dimensions, we have digitized, enhanced, segmented, and labeled the following print brain atlases: Co-Planar Stereotaxic Atlas of the Human Brain by Talairach and Tournoux, Atlas for Stereotaxy of the Human Brain by Schaltenbrand and Wahren, Referentially Oriented Cerebral MRI Anatomy by Talairach and Tournoux, and Atlas of the Cerebral Sulci by Ono, Kubik, and Abernathey. Three-dimensional extensions of these atlases have been developed as well. All two- and three-dimensional atlases are mutually preregistered and may be interactively registered with an actual patient's data. An atlas-based neuroimaging system has been developed that provides support for reformatting, registration, visualization, navigation, image processing, and quantification of clinical data. The anatomical index contains about 1,000 structures and over 400 sulcal patterns. Several new applications of the brain atlas database also have been developed, supported by various technologies such as virtual reality, the Internet, and electronic publishing. Fusion of information from multiple atlases assists the user in comprehensively understanding brain structures and identifying and quantifying anatomical regions in clinical data. The multiple brain atlas database and atlas-based neuroimaging system have substantial potential impact in stereotactic neurosurgery and radiotherapy by assisting in visualization and real-time manipulation in three dimensions of anatomical structures, in quantitative neuroradiology by allowing interactive analysis of clinical data, in three-dimensional neuroeducation, and in brain function studies.
Recognition of Equations Using a Two-Dimensional Stochastic Context-Free Grammar
NASA Astrophysics Data System (ADS)
Chou, Philip A.
1989-11-01
We propose using two-dimensional stochastic context-free grammars for image recognition, in a manner analogous to using hidden Markov models for speech recognition. The value of the approach is demonstrated in a system that recognizes printed, noisy equations. The system uses a two-dimensional probabilistic version of the Cocke-Younger-Kasami parsing algorithm to find the most likely parse of the observed image, and then traverses the corresponding parse tree in accordance with translation formats associated with each production rule, to produce eqn/troff commands for the imaged equation. In addition, it uses two-dimensional versions of the Inside/Outside and Baum re-estimation algorithms for learning the parameters of the grammar from a training set of examples. Parsing the image of a simple noisy equation currently takes about one second of CPU time on an Alliant FX/80.
Three-Dimensional Root Phenotyping with a Novel Imaging and Software Platform
Clark, Randy T.; MacCurdy, Robert B.; Jung, Janelle K.; Shaff, Jon E.; McCouch, Susan R.; Aneshansley, Daniel J.; Kochian, Leon V.
2011-01-01
A novel imaging and software platform was developed for the high-throughput phenotyping of three-dimensional root traits during seedling development. To demonstrate the platform’s capacity, plants of two rice (Oryza sativa) genotypes, Azucena and IR64, were grown in a transparent gellan gum system and imaged daily for 10 d. Rotational image sequences consisting of 40 two-dimensional images were captured using an optically corrected digital imaging system. Three-dimensional root reconstructions were generated and analyzed using a custom-designed software, RootReader3D. Using the automated and interactive capabilities of RootReader3D, five rice root types were classified and 27 phenotypic root traits were measured to characterize these two genotypes. Where possible, measurements from the three-dimensional platform were validated and were highly correlated with conventional two-dimensional measurements. When comparing gellan gum-grown plants with those grown under hydroponic and sand culture, significant differences were detected in morphological root traits (P < 0.05). This highly flexible platform provides the capacity to measure root traits with a high degree of spatial and temporal resolution and will facilitate novel investigations into the development of entire root systems or selected components of root systems. In combination with the extensive genetic resources that are now available, this platform will be a powerful resource to further explore the molecular and genetic determinants of root system architecture. PMID:21454799
Enhanced image fusion using directional contrast rules in fuzzy transform domain.
Nandal, Amita; Rosales, Hamurabi Gamboa
2016-01-01
In this paper, a novel image fusion algorithm based on directional contrast in the fuzzy transform (FTR) domain is proposed. The input images to be fused are first divided into several non-overlapping blocks. The components of these sub-blocks are fused using a directional-contrast-based fuzzy fusion rule in the FTR domain. The fused sub-blocks are then transformed into original-size blocks using the inverse FTR. Further, these inverse-transformed blocks are fused according to a select-maximum fusion rule to reconstruct the final fused image. The proposed fusion algorithm is both visually and quantitatively compared with other standard and recent fusion algorithms. Experimental results demonstrate that the proposed method generates better results than the other methods.
Design of a new type synchronous focusing mechanism
NASA Astrophysics Data System (ADS)
Zhang, Jintao; Tan, Ruijun; Chen, Zhou; Zhang, Yongqi; Fu, Panlong; Qu, Yachen
2018-05-01
For a dual-channel telescopic imaging system composed of an infrared imager, a low-light-level imager and an image fusion module, it is evident that, when fusing low-light-level and infrared images, clear source images make it easier to obtain high-definition fused images. When the target is imaged at distances from 15 m to infinity, focusing is needed to ensure the imaging quality of the dual-channel imaging system; therefore, a new type of synchronous focusing mechanism is designed. The mechanism realizes the focusing function through synchronous translation of the imaging devices, and mainly comprises a screw-rod nut structure, a shaft-hole fit structure and a spring steel-ball clearance-elimination structure. Starting from the synchronous focusing function of the two imaging devices, the structural characteristics of the synchronous focusing mechanism are introduced in detail, and the focusing range is analyzed. The experimental results show that the synchronous focusing mechanism has an ingenious design, high focusing accuracy, and stable and reliable operation.
Multiscale infrared and visible image fusion using gradient domain guided image filtering
NASA Astrophysics Data System (ADS)
Zhu, Jin; Jin, Weiqi; Li, Li; Han, Zhenghao; Wang, Xia
2018-03-01
For better surveillance with infrared and visible imaging, a novel hybrid multiscale decomposition fusion method using gradient domain guided image filtering (HMSD-GDGF) is proposed in this study. In this method, hybrid multiscale decomposition of the source images with guided image filtering and gradient domain guided image filtering is first applied, before the weight maps for each scale are obtained using a saliency detection technique and filtering, with three different fusion rules at different scales. The three types of fusion rules apply to the small-scale detail level, the large-scale detail level, and the base level. As a result, the target becomes more salient and can be more easily detected in the fusion result, with the detail information of the scene fully displayed. Experimental comparisons with state-of-the-art fusion methods show that HMSD-GDGF has obvious advantages in fidelity of salient information (including structural similarity, brightness, and contrast), preservation of edge features, and human visual perception. Visual effects can therefore be improved by using the proposed HMSD-GDGF method.
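The guided image filter underlying this family of methods (He et al.'s classic formulation, not the gradient-domain variant used in HMSD-GDGF) admits a compact sketch built on box filtering; the window radius and regularization eps below are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)           # local means via box filtering
    mean_p = uniform_filter(src, size)
    corr_Ip = uniform_filter(guide * src, size)
    corr_II = uniform_filter(guide * guide, size)
    var_I = corr_II - mean_I ** 2
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)                     # local linear coefficients
    b = mean_p - a * mean_I
    # Average the coefficients, then apply the local linear model to the guide
    return uniform_filter(a, size) * guide + uniform_filter(b, size)
```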
Tamada, Tsutomu; Ream, Justin M; Doshi, Ankur M; Taneja, Samir S; Rosenkrantz, Andrew B
The purpose of this study was to compare image quality and tumor assessment at prostate magnetic resonance imaging (MRI) between reduced field-of-view diffusion-weighted imaging (rFOV-DWI) and standard DWI (st-DWI). A total of 49 patients undergoing prostate MRI and MRI/ultrasound fusion-targeted biopsy were included. Examinations included st-DWI (field of view [FOV], 200 × 200 mm) and rFOV-DWI (FOV, 140 × 64 mm) using a 2-dimensional (2D) spatially-selective radiofrequency pulse and parallel transmission. Two readers performed qualitative assessments; a third reader performed quantitative evaluation. Overall image quality, anatomic distortion, visualization of capsule, and visualization of peripheral/transition zone edge were better for rFOV-DWI for reader 1 (P ≤ 0.002), although not for reader 2 (P ≥ 0.567). For both readers, sensitivity, specificity, and accuracy for tumor with a Gleason Score (GS) of 3 + 4 or higher were not different (P ≥ 0.289). Lesion clarity was higher for st-DWI for reader 2 (P = 0.008), although similar for reader 1 (P = 0.409). Diagnostic confidence was not different for either reader (P ≥ 0.052). Tumor-to-benign apparent diffusion coefficient ratio was not different (P = 0.675). Potentially improved image quality of rFOV-DWI did not yield improved tumor assessment. Continued optimization is warranted.
Computer-generated 3D ultrasound images of the carotid artery
NASA Technical Reports Server (NTRS)
Selzer, Robert H.; Lee, Paul L.; Lai, June Y.; Frieden, Howard J.; Blankenhorn, David H.
1989-01-01
A method is under development to measure carotid artery lesions from a computer-generated three-dimensional ultrasound image. For each image, the position of the transducer in six coordinates (x, y, z, azimuth, elevation, and roll) is recorded and used to position each B-mode picture element in its proper spatial position in a three-dimensional memory array. After all B-mode images have been assembled in the memory, the three-dimensional image is filtered and resampled to produce a new series of parallel-plane two-dimensional images from which arterial boundaries are determined using edge tracking methods.
Computer-generated 3D ultrasound images of the carotid artery
NASA Astrophysics Data System (ADS)
Selzer, Robert H.; Lee, Paul L.; Lai, June Y.; Frieden, Howard J.; Blankenhorn, David H.
A method is under development to measure carotid artery lesions from a computer-generated three-dimensional ultrasound image. For each image, the position of the transducer in six coordinates (x, y, z, azimuth, elevation, and roll) is recorded and used to position each B-mode picture element in its proper spatial position in a three-dimensional memory array. After all B-mode images have been assembled in the memory, the three-dimensional image is filtered and resampled to produce a new series of parallel-plane two-dimensional images from which arterial boundaries are determined using edge tracking methods.
A new evaluation method research for fusion quality of infrared and visible images
NASA Astrophysics Data System (ADS)
Ge, Xingguo; Ji, Yiguo; Tao, Zhongxiang; Tian, Chunyan; Ning, Chengda
2017-03-01
In order to objectively evaluate the fusion effect of infrared and visible images, a fusion evaluation method based on energy-weighted average structural similarity and an edge information retention value is proposed to address the drawbacks of existing evaluation methods. The evaluation index of the method is given, and fusion results of infrared and visible images under different algorithms and environments are evaluated experimentally on the basis of this index. The experimental results show that the objective evaluation index is consistent with the subjective evaluation results, indicating that the method is a practical and effective means of evaluating fused image quality.
The optimal algorithm for Multi-source RS image fusion.
Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan
2016-01-01
In order to solve the issue that available fusion methods cannot self-adaptively adjust their fusion rules according to the subsequent processing requirements of Remote Sensing (RS) imagery, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), integrating the merits of the genetic algorithm with the advantages of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observed operator. It then designs the objective function as a weighted sum of evaluation indices and optimizes it with GSDA so as to obtain a higher-resolution RS image. The bullet points of the text are summarized as follows:
• The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
• The article presents the GSDA algorithm for self-adaptive adjustment of the fusion rules.
• The text proposes the model operator and the observed operator as the fusion scheme for RS images based on GSDA.
The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chugh, Brige Paul; Krishnan, Kalpagam; Liu, Jeff
2014-08-15
Integration of the biological conductivity information provided by Electrical Impedance Tomography (EIT) with the anatomical information provided by Computed Tomography (CT) imaging could improve the ability to characterize tissues in clinical applications. In this paper, we report the results of a study comparing the fusion of EIT with CT using three different image fusion algorithms: weighted averaging, wavelet fusion, and ROI indexing. The ROI-indexing method of fusion involves segmenting the regions of interest from the CT image and replacing the pixels with the pixels of the EIT image. The three algorithms were applied to CT and EIT images of an anthropomorphic phantom, constructed out of five acrylic contrast targets of varying diameter embedded in a base of gelatin bolus. The imaging performance was assessed using detectability and the Structural Similarity Index Measure (SSIM). Wavelet fusion and ROI indexing resulted in lower detectability (by 35% and 47%, respectively) yet higher SSIM (by 66% and 73%, respectively) than weighted averaging. Our results suggest that wavelet fusion and ROI indexing yield more consistent and optimal fusion performance than weighted averaging.
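The ROI-indexing rule as described reduces to a masked pixel replacement, assuming co-registered, equally sized images:

```python
import numpy as np

def roi_index_fuse(ct, eit, roi_mask):
    """roi_mask: boolean array marking the region of interest segmented from CT."""
    fused = ct.astype(float).copy()
    fused[roi_mask] = eit[roi_mask]   # replace ROI pixels with EIT conductivity data
    return fused
```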
NASA Astrophysics Data System (ADS)
Hoffmann, A.; Zimmermann, F.; Scharr, H.; Krömker, S.; Schulz, C.
2005-01-01
A laser-based technique for measuring instantaneous three-dimensional species concentration distributions in turbulent flows is presented. The laser beam from a single laser is formed into two crossed light sheets that illuminate the area of interest. The laser-induced fluorescence (LIF) signal emitted from excited species within both planes is detected with a single camera via a mirror arrangement. Image processing enables the reconstruction of the three-dimensional data set in close proximity to the cutting line of the two light sheets. Three-dimensional intensity gradients are computed and compared to the two-dimensional projections obtained from the two directly observed planes. Volume visualization by digital image processing gives unique insight into the three-dimensional structures within the turbulent processes. We apply this technique to measurements of toluene-LIF in a turbulent, non-reactive mixing process of toluene and air and to hydroxyl (OH) LIF in a turbulent methane-air flame upon excitation at 248 nm with a tunable KrF excimer laser.
Convergence and Extrusion Are Required for Normal Fusion of the Mammalian Secondary Palate
Kim, Seungil; Lewis, Ace E.; Singh, Vivek; Ma, Xuefei; Adelstein, Robert; Bush, Jeffrey O.
2015-01-01
The fusion of two distinct prominences into one continuous structure is common during development and typically requires integration of two epithelia and subsequent removal of that intervening epithelium. Using confocal live imaging, we directly observed the cellular processes underlying tissue fusion, using the secondary palatal shelves as a model. We find that convergence of a multi-layered epithelium into a single-layer epithelium is an essential early step, driven by cell intercalation, and is concurrent to orthogonal cell displacement and epithelial cell extrusion. Functional studies in mice indicate that this process requires an actomyosin contractility pathway involving Rho kinase (ROCK) and myosin light chain kinase (MLCK), culminating in the activation of non-muscle myosin IIA (NMIIA). Together, these data indicate that actomyosin contractility drives cell intercalation and cell extrusion during palate fusion and suggest a general mechanism for tissue fusion in development. PMID:25848986
Radar image and data fusion for natural hazards characterisation
Lu, Zhong; Dzurisin, Daniel; Jung, Hyung-Sup; Zhang, Jixian; Zhang, Yonghong
2010-01-01
Fusion of synthetic aperture radar (SAR) images through interferometric, polarimetric and tomographic processing provides an all-weather imaging capability to characterise and monitor various natural hazards. This article outlines interferometric synthetic aperture radar (InSAR) processing and products and their utility for natural hazards characterisation, provides an overview of the techniques and applications related to fusion of SAR/InSAR images with optical and other images and highlights the emerging SAR fusion technologies. In addition to providing precise land-surface digital elevation maps, SAR-derived imaging products can map millimetre-scale elevation changes driven by volcanic, seismic and hydrogeologic processes, by landslides and wildfires and other natural hazards. With products derived from the fusion of SAR and other images, scientists can monitor the progress of flooding, estimate water storage changes in wetlands for improved hydrological modelling predictions and assessments of future flood impacts and map vegetation structure on a global scale and monitor its changes due to such processes as fire, volcanic eruption and deforestation. With the availability of SAR images in near real-time from multiple satellites in the near future, the fusion of SAR images with other images and data is playing an increasingly important role in understanding and forecasting natural hazards.
Information recovery through image sequence fusion under wavelet transformation
NASA Astrophysics Data System (ADS)
He, Qiang
2010-04-01
Remote sensing is widely applied to provide information about areas with limited ground access, with applications such as assessing the destruction from natural disasters and planning relief and recovery operations. However, the collection of aerial digital images is constrained by bad weather, atmospheric conditions, and unstable cameras or camcorders. Therefore, how to recover information from low-quality remote sensing images and how to enhance image quality become very important for many visual understanding tasks, such as feature detection, object segmentation, and object recognition. The quality of remote sensing imagery can be improved through meaningful combination of images captured from different sensors or under different conditions via information fusion. Here we particularly address information fusion for remote sensing images under multi-resolution analysis of the employed image sequences. Image fusion recovers complete information by integrating multiple images captured of the same scene. Through image fusion, a new image that is high-resolution or more perceivable by humans and machines is created from a time series of low-quality images, based on image registration between different video frames.
Finn, Michael A; Samuelson, Mical M; Bishop, Frank; Bachus, Kent N; Brodke, Darrel S
2011-03-15
Biomechanical study. To determine the biomechanical forces exerted on intermediate and adjacent segments after two- or three-level fusion for treatment of noncontiguous levels. Increased motion adjacent to fused spinal segments is postulated to be a driving force in adjacent segment degeneration. Occasionally, a patient requires treatment of noncontiguous levels on either side of a normal level. The biomechanical forces exerted on the intermediate and adjacent levels are unknown. Seven intact human cadaveric cervical spines (C3-T1) were mounted in a custom seven-axis spine simulator equipped with a follower load apparatus and an OptoTRAK three-dimensional tracking system. Each intact specimen underwent five cycles each of flexion/extension, lateral bending, and axial rotation under a ±1.5-Nm moment and a 100-N axial follower load. Applied torque and motion data for each axis of motion and level were recorded. Testing was repeated under the same parameters after C4-C5 and C6-C7 diskectomies were performed and fused with rigid cervical plates and interbody spacers, and again after a three-level fusion from C4 to C7. Range of motion was modestly increased (35%) at the intermediate and adjacent levels in the skip fusion construct. A significant or nearly significant difference was reached in seven of nine moments. With the three-level fusion construct, motion at the infra- and supra-adjacent levels was significantly or nearly significantly increased in all applied moments over both the intact spine and the two-level noncontiguous construct. The magnitude of this change was substantial (72%). The infra- and supra-adjacent levels experienced a marked increase in strain in all moments with a three-level fusion, whereas the intermediate, supra-, and infra-adjacent segments of a two-level fusion experienced modest strain moments relative to the intact spine. It would be appropriate to consider noncontiguous fusions instead of a three-level fusion when confronted with nonadjacent disease.
Three-dimensional MRI perfusion maps: a step beyond volumetric analysis in mental disorders
Fabene, Paolo F; Farace, Paolo; Brambilla, Paolo; Andreone, Nicola; Cerini, Roberto; Pelizza, Luisa; Versace, Amelia; Rambaldelli, Gianluca; Birbaumer, Niels; Tansella, Michele; Sbarbati, Andrea
2007-01-01
A new type of magnetic resonance imaging analysis, based on fusion of three-dimensional reconstructions of time-to-peak parametric maps and high-resolution T1-weighted images, is proposed in order to evaluate the perfusion of selected volumes of interest. Because in recent years a wealth of data have suggested the crucial involvement of vascular alterations in mental diseases, we tested our new method on a restricted sample of schizophrenic patients and matched healthy controls. The perfusion of the whole brain was compared with that of the caudate nucleus by means of intrasubject analysis. As expected, owing to the encephalic vascular pattern, a significantly lower time-to-peak was observed in the caudate nucleus than in the whole brain in all healthy controls, indicating that the suggested method has enough sensitivity to detect subtle perfusion changes even in small volumes of interest. Interestingly, a less uniform pattern was observed in the schizophrenic patients. The latter finding needs to be replicated in an adequate number of subjects. In summary, the three-dimensional analysis method we propose has been shown to be a feasible tool for revealing subtle vascular changes both in normal subjects and in pathological conditions. PMID:17229290
An adaptive block-based fusion method with LUE-SSIM for multi-focus images
NASA Astrophysics Data System (ADS)
Zheng, Jianing; Guo, Yongcai; Huang, Yukun
2016-09-01
Because of a lens's limited depth of field, digital cameras are incapable of acquiring an all-in-focus image of objects at varying distances in a scene. Multi-focus image fusion techniques can effectively solve this problem, but block-based multi-focus fusion methods often suffer from blocking artifacts. An adaptive block-based fusion method based on lifting undistorted-edge structural similarity (LUE-SSIM) is therefore put forward. In this method, the image quality metric LUE-SSIM is first proposed; it utilizes characteristics of the human visual system (HVS) and structural similarity (SSIM) to make the metric consistent with human visual perception. A particle swarm optimization (PSO) algorithm with LUE-SSIM as its objective function is used to optimize the block size for constructing the fused image. Experimental results on the LIVE image database show that LUE-SSIM outperforms SSIM for quality assessment of Gaussian defocus-blurred images. In addition, a multi-focus image fusion experiment is carried out to verify the proposed fusion method in terms of visual and quantitative evaluation. The results show that the proposed method performs better than other block-based methods, especially in reducing the blocking artifacts of the fused image, and can effectively preserve undistorted edge details in the focused regions of the source images.
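The generic block-based scheme that such methods refine can be sketched as follows: for each block, keep the block from whichever source is sharper. Plain local variance serves as the focus measure here; the paper's LUE-SSIM metric and PSO block-size optimization are not reproduced.

```python
import numpy as np

def block_fuse(a, b, block=16):
    fused = a.astype(float).copy()
    for i in range(0, a.shape[0] - block + 1, block):
        for j in range(0, a.shape[1] - block + 1, block):
            pa = a[i:i+block, j:j+block]
            pb = b[i:i+block, j:j+block]
            if pb.var() > pa.var():      # higher local variance ~ better focused
                fused[i:i+block, j:j+block] = pb
    return fused
```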
Vatsa, Mayank; Singh, Richa; Noore, Afzel
2008-08-01
This paper proposes algorithms for iris segmentation, quality enhancement, match score fusion, and indexing to improve both the accuracy and the speed of iris recognition. A curve evolution approach is proposed to effectively segment a nonideal iris image using the modified Mumford-Shah functional. Different enhancement algorithms are concurrently applied on the segmented iris image to produce multiple enhanced versions of the iris image. A support-vector-machine-based learning algorithm selects locally enhanced regions from each globally enhanced image and combines these good-quality regions to create a single high-quality iris image. Two distinct features are extracted from the high-quality iris image. The global textural feature is extracted using the 1-D log polar Gabor transform, and the local topological feature is extracted using Euler numbers. An intelligent fusion algorithm combines the textural and topological matching scores to further improve the iris recognition performance and reduce the false rejection rate, whereas an indexing algorithm enables fast and accurate iris identification. The verification and identification performance of the proposed algorithms is validated and compared with other algorithms using the CASIA Version 3, ICE 2005, and UBIRIS iris databases.
CTA with fluoroscopy image fusion guidance in endovascular complex aortic aneurysm repair.
Sailer, A M; de Haan, M W; Peppelenbosch, A G; Jacobs, M J; Wildberger, J E; Schurink, G W H
2014-04-01
To evaluate the effect of intraoperative guidance by means of live fluoroscopy image fusion with computed tomography angiography (CTA) on iodinated contrast material volume, procedure time, and fluoroscopy time in endovascular thoraco-abdominal aortic repair. CTA with fluoroscopy image fusion road-mapping was prospectively evaluated in patients with complex aortic aneurysms who underwent fenestrated and/or branched endovascular repair (FEVAR/BEVAR). Total iodinated contrast material volume, overall procedure time, and fluoroscopy time were compared between the fusion group (n = 31) and case controls (n = 31). Reasons for potential fusion image inaccuracy were analyzed. Fusion imaging was feasible in all patients. Fusion image road-mapping was used for navigation and positioning of the devices and catheter guidance during access to target vessels. Iodinated contrast material volume and procedure time were significantly lower in the fusion group than in case controls (159 mL [95% CI 132-186 mL] vs. 199 mL [95% CI 170-229 mL], p = .037 and 5.2 hours [95% CI 4.5-5.9 hours] vs. 6.3 hours (95% CI 5.4-7.2 hours), p = .022). No significant differences in fluoroscopy time were observed (p = .38). Respiration-related vessel displacement, vessel elongation, and displacement by stiff devices as well as patient movement were identified as reasons for fusion image inaccuracy. Image fusion guidance provides added value in complex endovascular interventions. The technology significantly reduces iodinated contrast material dose and procedure time. Copyright © 2014 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
Multispectral image fusion based on fractal features
NASA Astrophysics Data System (ADS)
Tian, Jie; Chen, Jie; Zhang, Chunhua
2004-01-01
Imagery sensors have become an indispensable part of detection and recognition systems. They are widely used in surveillance, navigation, and control and guidance. However, different imagery sensors depend on diverse imaging mechanisms, work within diverse ranges of the spectrum, perform diverse functions and have diverse environmental requirements. It is therefore impractical to accomplish detection or recognition with a single imagery sensor across different circumstances, backgrounds and targets. Fortunately, multi-sensor image fusion has emerged as an important route to solving this problem, and image fusion has become one of the main technical routines used to detect and recognize objects in images. Loss of information is unavoidable during the fusion process, however, so preserving useful information to the utmost is always a central concern: before designing a fusion scheme, one must consider how to avoid the loss of useful information and how to preserve the features helpful for detection. In consideration of these issues, and of the fact that most detection problems amount to distinguishing man-made objects from natural background, a fractal-based multi-spectral fusion algorithm is proposed in this paper, aimed at recognizing battlefield targets in complicated backgrounds. In this algorithm, the source images are first orthogonally decomposed according to wavelet transform theory, and fractal-based detection is then applied to each decomposed image. At this step, natural background and man-made targets are distinguished by means of fractal models, which imitate natural objects well. Special fusion operators are employed in areas containing man-made targets so that useful information is preserved and target features are emphasized. The final fused image is reconstructed from the composition of the source pyramid images, so this fusion scheme is a multi-resolution analysis.
The wavelet decomposition of an image can be regarded as a special pyramid decomposition. According to wavelet decomposition theory, the approximation of the image f(x, y) at resolution 2^(j+1) equals its orthogonal projection onto the corresponding approximation space; that is, A_(j+1)f is completely represented by the coarser approximation A_j f together with the detail components D_j^1 f, D_j^2 f and D_j^3 f, where A_j f is the low-frequency approximation of f(x, y) at resolution 2^j and D_j^1 f, D_j^2 f, D_j^3 f are the vertical, horizontal and diagonal wavelet coefficients, respectively, at resolution 2^j. These coefficients describe the high-frequency information of the image in the vertical, horizontal and diagonal directions. A_j f, D_j^1 f, D_j^2 f and D_j^3 f are independent and can themselves be treated as images. In this paper J is set to 1, so the source image is decomposed into the son-images Af, D1f, D2f and D3f. To address the problem of detecting artifacts, the concepts of vertical fractal dimension FD1, horizontal fractal dimension FD2 and diagonal fractal dimension FD3 are proposed: FD1 corresponds to the vertical wavelet-coefficient image of the decomposition, FD2 to the horizontal coefficients and FD3 to the diagonal ones. These definitions enrich the description of the source images and are therefore helpful for classifying targets. The detection of man-made artifacts in the decomposed images then becomes a pattern recognition problem in a four-dimensional space.
The combination of FD0, FD1, FD2 and FD3 forms the vector (FD0, FD1, FD2, FD3), which can be considered a united feature vector of the studied image. All parts of the images are classified in the 4-D pattern space created by this vector so that areas containing man-made objects can be detected. This detection can be considered a coarse recognition; the significant areas in each son-image are then marked so that they can be treated with special rules. Various fusion rules have been developed, each aimed at a particular problem. These rules perform differently, so it is very important to select an appropriate rule when designing an image fusion system; recent research indicates that the rule should be adjustable so that it always highlights target features and preserves the pixels carrying useful information. In this paper, since fractal dimension is one of the main features distinguishing man-made targets from natural objects, the fusion rule is defined as follows: if the studied region of the image contains a man-made target, the pixels of the source image whose fractal dimension is minimal are kept as the pixels of the fused image; otherwise, a weighted-average operator is adopted to avoid loss of information. The main idea of this rule is to retain the pixels with low fractal dimensions, so it is named the Minimal Fractal Dimension (MFD) fusion rule. This fractal-based algorithm is compared with a common weighted-average fusion algorithm, and an objective assessment is applied to the two fusion results. The criteria of entropy, cross-entropy, peak signal-to-noise ratio (PSNR) and standard gray-scale difference are defined in this paper. Instead of constructing an ideal image as the assessment reference, the source images themselves are selected as the reference; the assessment thus measures how much the image quality has been enhanced and how much information has been gained when the fused image is compared with the source images. The experimental results imply that the fractal-based multi-spectral fusion algorithm can effectively preserve the information of man-made objects with high contrast. The algorithm preserves the features of military targets well, because battlefield targets are mostly man-made objects whose images differ obviously from fractal models of natural scenes. Furthermore, the fractal features are not sensitive to imaging conditions or target movement, so this fractal-based algorithm may be very practical.
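The MFD rule defined above can be sketched with a simple box-counting estimator of fractal dimension (the paper does not specify its estimator, so this one, the thresholding, the block size, and the externally supplied target mask are all assumptions):

```python
import numpy as np

def box_count_dim(block):
    # Box-counting dimension of a thresholded block: count occupied boxes at
    # several scales and fit log(count) against log(1/size).
    binary = block > 0.5 * (block.max() + 1e-12)
    sizes, counts = [], []
    k = min(binary.shape) // 2
    while k >= 1:
        h = binary.shape[0] // k * k
        w = binary.shape[1] // k * k
        view = binary[:h, :w].reshape(h // k, k, w // k, k).any(axis=(1, 3))
        sizes.append(k)
        counts.append(max(int(view.sum()), 1))
        k //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

def mfd_fuse(a, b, target_mask, block=32, w=0.5):
    fused = w * a + (1 - w) * b                     # default: weighted average
    for i in range(0, a.shape[0] - block + 1, block):
        for j in range(0, a.shape[1] - block + 1, block):
            if target_mask[i:i+block, j:j+block].any():
                pa = a[i:i+block, j:j+block]
                pb = b[i:i+block, j:j+block]
                # MFD rule: keep the source block with the smaller dimension
                fused[i:i+block, j:j+block] = (
                    pa if box_count_dim(pa) <= box_count_dim(pb) else pb)
    return fused
```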
NASA Astrophysics Data System (ADS)
Dong, Min; Dong, Chenghui; Guo, Miao; Wang, Zhe; Mu, Xiaomin
2018-04-01
Multiresolution-based methods, such as wavelet and contourlet methods, are commonly used for image fusion. This work presents a new image fusion framework utilizing area-based standard deviation in the dual-tree contourlet transform domain. Firstly, the pre-registered source images are decomposed with the dual-tree contourlet transform, yielding low-pass and high-pass coefficients. The low-pass bands are then fused with a weighted average based on area standard deviation rather than the simple "averaging" rule, while the high-pass bands are merged with the "max-absolute" fusion rule. Finally, the modified low-pass and high-pass coefficients are used to reconstruct the final fused image. The major advantage of the proposed fusion method over conventional fusion is the approximate shift invariance and multidirectional selectivity of the dual-tree contourlet transform. The proposed method is compared with wavelet- and contourlet-based methods and other state-of-the-art methods on commonly used multi-focus images. Experiments demonstrate that the proposed fusion framework is feasible and effective, and that it performs better in both subjective and objective evaluation.
Multi-focus image fusion with the all convolutional neural network
NASA Astrophysics Data System (ADS)
Du, Chao-ben; Gao, She-sheng
2018-01-01
A decision map contains complete and clear information about the image to be fused, which is crucial to various image fusion tasks, especially multi-focus image fusion. However, obtaining a good decision map is necessary for a satisfactory fusion result and is usually difficult. In this letter, we address this problem with a convolutional neural network (CNN), aiming to obtain a state-of-the-art decision map. The main idea is that the max-pooling layers of the CNN are replaced by convolution layers, the residuals are propagated backwards by gradient descent, and the training parameters of the individual layers of the CNN are updated layer by layer. Based on this, we propose a new all-convolutional-network (ACNN) based multi-focus image fusion method in the spatial domain. We demonstrate that the decision map obtained from the ACNN is reliable and can lead to high-quality fusion results. Experimental results clearly validate that the proposed algorithm obtains state-of-the-art fusion performance in terms of both qualitative and quantitative evaluation.
Energy-resolved neutron imaging for inertial confinement fusion
NASA Astrophysics Data System (ADS)
Moran, M. J.; Haan, S. W.; Hatchett, S. P.; Izumi, N.; Koch, J. A.; Lerche, R. A.; Phillips, T. W.
2003-03-01
The success of the National Ignition Facility program will depend on diagnostic measurements which study the performance of inertial confinement fusion (ICF) experiments. Neutron yield, fusion-burn time history, and images are examples of important diagnostics. Neutron and x-ray images will record the geometries of compressed targets during the fusion-burn process. Such images provide a critical test of the accuracy of numerical modeling of ICF experiments. They also can provide valuable information in cases where experiments produce unexpected results. Although x-ray and neutron images provide similar data, they do have significant differences. X-ray images represent the distribution of high-temperature regions where fusion occurs, while neutron images directly reveal the spatial distribution of fusion-neutron emission. X-ray imaging has the advantage of a relatively straightforward path to the imaging system design. Neutron imaging, by using energy-resolved detection, offers the intriguing advantage of being able to provide independent images of burning and nonburning regions of the nuclear fuel. The usefulness of energy-resolved neutron imaging depends on both the information content of the data and on the quality of the data that can be recorded. The information content will relate to the characteristic neutron spectra that are associated with emission from different regions of the source. Numerical modeling of ICF fusion burn will be required to interpret the corresponding energy-dependent images. The exercise will be useful only if the images can be recorded with sufficient definition to reveal the spatial and energy-dependent features of interest. Several options are being evaluated with respect to the feasibility of providing the desired simultaneous spatial and energy resolution.
Sjöberg, C; Ahnesjö, A
2013-06-01
Label fusion multi-atlas approaches for image segmentation can give better segmentation results than single-atlas methods. We present a multi-atlas label fusion strategy based on probabilistic weighting of distance maps. Relationships between image similarities and segmentation similarities are estimated in a learning phase and used to derive fusion weights that are proportional to the probability for each atlas to improve the segmentation result. The method was tested using a leave-one-out strategy on a database of 21 pre-segmented prostate patients for different image registrations combined with different image similarity scorings. The probabilistic weighting yields results that are equal to or better than both fusion with equal weights and the STAPLE algorithm. The experiments demonstrate that label fusion by weighted distance maps is feasible, and that the improvement from probabilistic weighted fusion grows the more strongly individual atlas segmentation quality depends on the corresponding registered image similarity. The regions used for evaluating the image similarity measures were found to be more important than the choice of similarity measure. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
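A minimal sketch of the fusion step, assuming the per-atlas signed distance maps (negative inside the propagated label) and the probabilistically learned weights are already available; all names here are hypothetical.

```python
import numpy as np

def fuse_distance_maps(dist_maps, weights):
    """Probabilistically weighted label fusion of distance maps (sketch).

    dist_maps : list of signed distance maps, negative inside each
                atlas's propagated label and positive outside
    weights   : per-atlas weights, e.g. proportional to the estimated
                probability that the atlas improves the segmentation
    The fused label is the zero sublevel set of the weighted map.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    fused = sum(wi * d for wi, d in zip(w, dist_maps))
    return fused <= 0.0
```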
Developing one-dimensional implosions for inertial confinement fusion science
Kline, John L.; Yi, Sunghwan A.; Simakov, Andrei Nikolaevich; ...
2016-12-12
Experiments on the National Ignition Facility show that multi-dimensional effects currently dominate implosion performance. Low-mode implosion symmetry and hydrodynamic instabilities seeded by capsule mounting features appear to be two key factors limiting implosion performance. One reason these factors have a large impact on the performance of inertial confinement fusion implosions is the high convergence required to achieve high fusion gains. To tackle these problems, a predictable implosion platform is needed, meaning experiments must trade off high gain for performance. LANL has adopted three main approaches to develop a one-dimensional (1D) implosion platform, where 1D means measured yield over the 1D clean calculation. A high-adiabat, low-convergence platform is being developed using beryllium capsules, enabling larger case-to-capsule ratios to improve symmetry. The second approach is liquid fuel layers using wetted-foam targets; with liquid fuel layers, the implosion convergence can be controlled via the initial vapor pressure set by the target fielding temperature. The last method is double-shell targets. For double shells, the smaller inner shell houses the DT fuel, and the convergence of this cavity is relatively small compared with hot-spot ignition. However, double-shell targets have a different set of trade-offs and advantages. Details of each of these approaches are described.
The design of red-blue 3D video fusion system based on DM642
NASA Astrophysics Data System (ADS)
Fu, Rongguo; Luo, Hao; Lv, Jin; Feng, Shu; Wei, Yifang; Zhang, Hao
2016-10-01
To address the uncertainty of traditional 3D video capture, including camera focal lengths and the distance and angle parameters between the two cameras, a red-blue 3D video fusion system based on the DM642 hardware processing platform is designed with parallel optical axes. To counter the brightness reduction of traditional 3D video, a brightness enhancement algorithm based on human visual characteristics is proposed, together with a luminance component processing method based on the YCbCr color space. The BIOS real-time operating system is used to improve real-time performance. The video processing circuit built around the DM642 enhances the brightness of the images, converts the video signals from YCbCr to RGB, extracts the R component from one camera and, synchronously, the G and B components from the other, and finally outputs the fused 3D images. Real-time adjustments such as translation and scaling of the two color components are realized through serial communication between the VC software and the BIOS. By adding the red and blue components, the system reduces the loss of chrominance and keeps the picture color saturation above 95% of the original. An optimized enhancement algorithm reduces the amount of data processed during fusion, shortening the fusion time and improving the viewing effect. Experimental results show that the system can capture images at near distance, output red-blue 3D video, and provide a pleasing experience to audiences wearing red-blue glasses.
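The red-blue component extraction lends itself to a short sketch. Assuming two brightness-enhanced, co-registered frames already converted from YCbCr to RGB (stored in OpenCV-style BGR channel order), the anaglyph frame takes R from one view and G and B from the other:

```python
import numpy as np

def make_anaglyph(left_bgr, right_bgr):
    """Assemble a red-blue anaglyph frame from two co-registered views
    (OpenCV-style BGR arrays). The left view supplies the red channel;
    the right view supplies the green and blue channels, mirroring the
    R / G,B component extraction described above."""
    out = np.empty_like(left_bgr)
    out[..., 2] = left_bgr[..., 2]    # R from the left camera
    out[..., 1] = right_bgr[..., 1]   # G from the right camera
    out[..., 0] = right_bgr[..., 0]   # B from the right camera
    return out
```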
A review of biomechanically informed breast image registration
NASA Astrophysics Data System (ADS)
Hipwell, John H.; Vavourakis, Vasileios; Han, Lianghao; Mertzanidou, Thomy; Eiben, Björn; Hawkes, David J.
2016-01-01
Breast radiology encompasses the full range of imaging modalities, from routine imaging via x-ray mammography, magnetic resonance imaging and ultrasound (both two- and three-dimensional), to more recent technologies such as digital breast tomosynthesis and dedicated breast imaging systems for positron emission mammography and ultrasound tomography. In addition, new and experimental modalities such as photoacoustics, near-infrared spectroscopy and electrical impedance tomography are emerging. The breast is, however, a highly deformable structure, and this greatly complicates visual comparison of imaging modalities for the purposes of breast screening, cancer diagnosis (including image-guided biopsy), tumour staging, treatment monitoring, surgical planning and simulation of the effects of surgery and wound healing. Due primarily to the challenges posed by these gross, non-rigid deformations, the development of automated methods that enable registration, and hence fusion, of information within and across breast imaging modalities, and between the images and the physical space of the breast during interventions, remains an active research field which has yet to translate suitable methods into clinical practice. This review describes current research in the field of breast biomechanical modelling and identifies relevant publications where the resulting models have been incorporated into breast image registration and simulation algorithms. Despite these developments, a number of issues still limit the clinical application of biomechanical modelling. These include the accuracy of constitutive modelling, the implementation of representative boundary conditions, failure to meet clinically acceptable levels of computational cost, the challenges of automating patient-specific model generation (i.e. robust image segmentation and mesh generation) and the complexity of applying biomechanical modelling methods in routine clinical practice.
NASA Astrophysics Data System (ADS)
Liu, F.; Chen, T.; He, J.; Wen, Q.; Yu, F.; Gu, X.; Wang, Z.
2018-04-01
In recent years, the rapid upgrading and improvement of SAR sensors have provided beneficial complements to traditional optical remote sensing in theory, technology and data. In this paper, Sentinel-1A SAR data and GF-1 optical data were selected for image fusion, with emphasis on dryland crop classification under a complex crop planting structure, taking corn and cotton as the research objects. Considering the differences among data fusion methods, the principal component analysis (PCA), Gram-Schmidt (GS), Brovey and wavelet transform (WT) methods were compared, and the GS and Brovey methods proved more applicable in the study area. Classification was then conducted using an object-oriented workflow: for the GS and Brovey fusion images and the GF-1 optical image, the nearest-neighbour algorithm was adopted for supervised classification with the same training samples. Accuracy was subsequently assessed against sample plots in the study area. The overall accuracy and kappa coefficient of the fusion images were higher than those of the GF-1 optical image, and the GS method performed better than the Brovey method; in particular, the overall accuracy of the GS fusion image was 79.8% and its kappa coefficient 0.644. The results thus show that GS and Brovey fusion images are superior to optical images for dryland crop classification, suggesting that the fusion of SAR and optical images is reliable for dryland crop classification under a complex crop planting structure.
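Of the compared methods, the Brovey transform is simple enough to sketch directly. Assuming the multispectral bands have been resampled to the high-resolution grid and a single high-resolution image (e.g. a SAR band standing in for a panchromatic band) supplies the intensity, each band is scaled by the ratio of that image to the total multispectral intensity:

```python
import numpy as np

def brovey_fuse(ms, hires, eps=1e-6):
    """Brovey transform fusion (sketch).

    ms    : (H, W, B) multispectral bands, upsampled to the
            high-resolution grid
    hires : (H, W) high-resolution intensity image (e.g. Pan, or a
            SAR band standing in for it)
    Each band is rescaled by hires / sum(bands), preserving the band
    ratios while substituting the high-resolution intensity.
    """
    intensity = ms.sum(axis=2) + eps
    return ms * (hires / intensity)[..., None]
```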
Bo, Xiao-Wan; Xu, Hui-Xiong; Wang, Dan; Guo, Le-Hang; Sun, Li-Ping; Li, Xiao-Long; Zhao, Chong-Ke; He, Ya-Ping; Liu, Bo-Ji; Li, Dan-Dan; Zhang, Kun
2016-11-01
To investigate the usefulness of fusion imaging of contrast-enhanced ultrasound (CEUS) and CECT/CEMRI before percutaneous ultrasound-guided radiofrequency ablation (RFA) for liver cancers. 45 consecutive patients with 70 liver lesions were included between March 2013 and October 2015, and all the lesions were identified on CEMRI/CECT prior to inclusion in the study. Planning ultrasound for percutaneous RFA was performed using conventional ultrasound, ultrasound-CECT/CEMRI fusion imaging and CEUS-CECT/CEMRI fusion imaging during the same session. The numbers of conspicuous lesions on ultrasound and fusion imaging were recorded. RFA was performed according to the results of fusion imaging. The complete response (CR) rate was calculated and complications were recorded. On conventional ultrasound, 25 (35.7%) of the 70 lesions were conspicuous, whereas 45 (64.3%) were inconspicuous. Ultrasound-CECT/CEMRI fusion imaging detected an additional 24 lesions, increasing the number of conspicuous lesions to 49 (70.0%) (70.0% vs 35.7%; p < 0.001 in comparison with conventional ultrasound). With the use of CEUS-CECT/CEMRI fusion imaging, the number of conspicuous lesions further increased to 67 (95.7%, 67/70) (95.7% vs 70.0% and 95.7% vs 35.7%; both p < 0.001 in comparison with conventional ultrasound and ultrasound-CECT/CEMRI fusion imaging, respectively). With the assistance of CEUS-CECT/CEMRI fusion imaging, the operator's confidence in performing RFA improved significantly with regard to visualization of the target lesions (p = 0.001). The CR rate for RFA was 97.0% (64/66) according to the CECT/CEMRI results 1 month later. No procedure-related deaths or major complications occurred during or after RFA. Fusion of CEUS and CECT/CEMRI improves the visualization of lesions that are inconspicuous on conventional ultrasound. It also improves the RFA operator's confidence and the CR rate of RFA. Advances in knowledge: CEUS-CECT/CEMRI fusion imaging is better than both conventional ultrasound and ultrasound-CECT/CEMRI fusion imaging for lesion visualization and improves operator confidence; it is therefore recommended for routine use in ultrasound-guided percutaneous RFA procedures for liver cancer.
Optical stereo video signal processor
NASA Technical Reports Server (NTRS)
Craig, G. D. (Inventor)
1985-01-01
An optical video signal processor is described which produces in real time a two-dimensional cross-correlation of images received by a stereo camera system. The optical image of each camera is projected onto a respective liquid crystal light valve. The images on the liquid crystal valves modulate light produced by an extended light source. This modulated light output becomes the two-dimensional cross-correlation when focused onto a video detector and is a function of the range of a target with respect to the stereo camera. Alternate embodiments utilize the two-dimensional cross-correlation to determine target movement and target identification.
van Ooij, Pim; Markl, Michael; Collins, Jeremy D; Carr, James C; Rigsby, Cynthia; Bonow, Robert O; Malaisrie, S Chris; McCarthy, Patrick M; Fedak, Paul W M; Barker, Alex J
2017-09-13
Wall shear stress (WSS) is a stimulus for vessel wall remodeling. Differences in ascending aorta (AAo) hemodynamics have been reported between bicuspid aortic valve (BAV) and tricuspid aortic valve patients with aortic dilatation, but the confounding impact of aortic valve stenosis (AS) is unknown. Five hundred seventy-one subjects underwent 4-dimensional flow magnetic resonance imaging of the thoracic aorta (210 right-left BAV cusp fusions, 60 right-noncoronary BAV cusp fusions, 245 tricuspid aortic valve patients with aortic dilatation, and 56 healthy controls). There were 166 of 515 (32%) patients with AS. WSS atlases were created to quantify group-specific WSS patterns in the AAo as a function of AS severity. In BAV patients without AS, the different cusp fusion phenotypes resulted in distinct differences in eccentric WSS elevation: right-left BAV patients exhibited WSS increased by 9% to 34% (P < 0.001) at the aortic root and along the entire outer curvature of the AAo, whereas right-noncoronary BAV patients showed a 30% WSS increase (P < 0.001) at the distal portion of the AAo. WSS in tricuspid aortic valve patients with aortic dilatation and no AS was significantly reduced by 21% to 33% (P < 0.01) in 4 of 6 AAo regions. In all patient groups, mild, moderate, and severe AS resulted in a marked increase in regional WSS (P < 0.001). Moderate-to-severe AS further increased WSS magnitude and variability in the AAo, and differences between valve phenotypes were no longer apparent. AS significantly alters aortic hemodynamics and WSS independent of aortic valve phenotype and overrides previously described flow patterns associated with BAV and tricuspid aortic valve with aortic dilatation. The severity of AS must be considered when investigating valve-mediated aortopathy. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
Three-dimensional confocal microscopy of the living cornea and ocular lens
NASA Astrophysics Data System (ADS)
Masters, Barry R.
1991-07-01
The three-dimensional reconstruction of the optic zone of the cornea and the ocular crystalline lens has been accomplished using confocal microscopy and volume-rendering computer techniques. A laser scanning confocal microscope was used in reflected-light mode to obtain two-dimensional images from the cornea and the ocular lens of a freshly enucleated rabbit eye. The light source was an argon-ion laser at a wavelength of 488 nm, and the microscope objective was a Leitz X25, NA 0.6 water-immersion lens. The 400-micron-thick cornea was optically sectioned into 133 three-micron sections. The semi-transparent cornea and the in-situ ocular lens were visualized in high-resolution, high-contrast two-dimensional images. The structures observed in the cornea include superficial epithelial cells and their nuclei, basal epithelial cells and their 'beaded' cell borders, the basal lamina, the nerve plexus, nerve fibers, nuclei of stromal keratocytes, and endothelial cells. The structures observed in the in-situ ocular lens include the lens capsule, lens epithelial cells, and individual lens fibers. The three-dimensional data sets of the cornea and the ocular lens were reconstructed in the computer using volume-rendering techniques, and stereo pairs of the two-dimensional ocular images were also created for visualization. This demonstration of three-dimensional visualization of the intact, enucleated eye provides an important step toward quantitative three-dimensional morphometry of the eye. The important aspects of three-dimensional reconstruction are discussed.
Heterogeneous Vision Data Fusion for Independently Moving Cameras
2010-03-01
target detection, tracking, and identification over a large terrain. The goal of the project is to investigate and evaluate the existing image ... fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms in moving target detection and tracking. The ... moving target detection and classification. Subject terms: image fusion, target detection, moving cameras, IR camera, EO camera.
Cha, Dong Ik; Lee, Min Woo; Kang, Tae Wook; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Kim, Kyunga
2017-10-01
To identify the more accurate reference data set, between computed tomography (CT) and magnetic resonance (MR) images, for fusion imaging-guided radiofrequency ablation or biopsy of hepatic lesions. This study was approved by the institutional review board, and written informed consent was received from all patients. Twelve consecutive patients who were referred to assess the feasibility of radiofrequency ablation or biopsy were enrolled. Automatic registration using CT and MR images was performed in each patient. Registration errors during the optimal and opposite respiratory phases, the time required for image fusion and the number of point locks used were compared using the Wilcoxon signed-rank test. The registration errors during the optimal respiratory phase were not significantly different between image fusion using CT and MR images as reference data sets (p = 0.969). During the opposite respiratory phase, the registration error was smaller with MR images than with CT (p = 0.028). The time and the number of point locks needed for complete image fusion were not significantly different between CT and MR images (p = 0.328 and p = 0.317, respectively). MR images would therefore be more suitable than CT images as the reference data set for fusion imaging-guided procedures on focal hepatic lesions.
Evaluation of MRI-US Fusion Technology in Sports-Related Musculoskeletal Injuries.
Wong-On, Manuel; Til-Pérez, Lluís; Balius, Ramón
2015-06-01
A combination of magnetic resonance imaging (MRI) with real-time high-resolution ultrasound (US), known as fusion imaging, may improve visualization of musculoskeletal (MSK) sports medicine injuries. The aim of this study was to evaluate the applicability of MRI-US fusion technology in MSK sports medicine. This study was conducted by the medical services of FC Barcelona. The participants included volunteers and referred athletes with symptomatic and asymptomatic MSK injuries. All cases underwent MRI, which was loaded into the US system for manual registration on the live US image and fusion imaging examination. After every test, an evaluation form was completed covering advantages, disadvantages, and anatomic fusion landmarks. From November 2014 to March 2015, we evaluated 20 subjects who underwent fusion imaging: 5 non-injured volunteers and 15 injured athletes (11 symptomatic and 4 asymptomatic), age range 16-50 years, mean 22. We describe some of the anatomic landmarks used to guide fusion in different regions. This technology allowed us to examine muscle and tendon injuries simultaneously in US and MRI and to correlate the two techniques, especially for low-grade muscular injuries; it also helped compensate for the limited field of view of US and improves spatial orientation in cartilage, labrum and meniscal injuries. However, a high-quality MRI image is essential for achieving an adequate fusion image, and 3D sequences need to be added to MRI protocols to improve navigation. The combination of real-time MRI-US image fusion and navigation is relatively easy to perform and is helping to improve understanding of MSK injuries, but it requires specific skills in MSK imaging and still needs further research in sports-related injuries. Toshiba Medical Systems Corporation.
Proteins on exocytic vesicles mediate calcium-triggered fusion.
Vogel, S S; Zimmerberg, J
1992-01-01
In many exocytic systems, micromolar concentrations of intracellular Ca2+ trigger fusion. We find that aggregates of secretory granules isolated from sea urchin eggs fuse together when perfused with greater than or equal to 10 microM free Ca2+. Mixing of membrane components was demonstrated by transfer of fluorescent lipophilic dye, and melding of granule contents was seen with differential interference microscopy. A technique based upon light scattering was developed to conveniently detect fusion. Two protein modifiers, trypsin and N-ethylmaleimide, inhibit granule-granule fusion at concentrations similar to those that inhibit granule-plasma membrane fusion. We suggest that molecular machinery sufficient for Ca(2+)-triggered fusion resides on secretory granules as purified and that at least some of these essential components are proteinaceous. PMID:1584814
Momeni, Saba; Pourghassem, Hossein
2014-08-01
Image fusion has recently come to play a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most widely used imaging techniques for diagnosing brain vascular disease and for brain radiosurgery. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel measure of the dispersion of the injected contrast material through the vessels. The proposed fusion scheme applies different fusion methods to the high- and low-frequency contents, based on the coefficient characteristics of the wrapping second-generation curvelet transform and a novel content selection strategy defined on the sample correlation of the curvelet transform coefficients. In the fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules to the high-frequency coefficients; for the low-frequency coefficients, the maximum selection rule based on a local energy criterion is applied for better visual perception. The proposed fusion algorithm is evaluated on a comprehensive brain angiography dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The results demonstrate its effectiveness and efficiency in comparison with common baseline fusion algorithms.
Effective Multifocus Image Fusion Based on HVS and BP Neural Network
Yang, Yong
2014-01-01
The aim of multifocus image fusion is to fuse images taken from the same scene with different focuses to obtain a resultant image with all objects in focus. In this paper, a novel multifocus image fusion method based on the human visual system (HVS) and a back-propagation (BP) neural network is presented. Three features that reflect the clarity of a pixel are first extracted and used to train a BP neural network to determine which pixel is clearer. The clearer pixels are then used to construct an initial fused image. Next, the focused regions are detected by measuring the similarity between the source images and the initial fused image, followed by morphological opening and closing operations. Finally, the final fused image is obtained by applying a fusion rule to those focused regions. Experimental results show that the proposed method provides better performance and outperforms several existing popular fusion methods in terms of both objective and subjective evaluations. PMID:24683327
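The abstract does not name the three clarity features, so the sketch below uses three common stand-ins (energy of gradient, local variance, local contrast) purely to illustrate feature extraction for a pixel-clarity classifier such as the BP network described above; the paper's actual features may differ.

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def clarity_features(img, size=8):
    """Three illustrative per-window clarity features, stacked per pixel."""
    gx, gy = sobel(img, axis=0), sobel(img, axis=1)
    eog = uniform_filter(gx * gx + gy * gy, size)        # energy of gradient
    mean = uniform_filter(img, size)
    var = uniform_filter(img * img, size) - mean * mean  # local variance
    contrast = uniform_filter(np.abs(img - mean) / (np.abs(mean) + 1e-6),
                              size)                      # local contrast
    return np.stack([eog, var, contrast], axis=-1)
```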
Post-modelling of images from a laser-induced wavy boiling front
NASA Astrophysics Data System (ADS)
Matti, R. S.; Kaplan, A. F. H.
2015-12-01
Processes like laser keyhole welding, remote fusion laser cutting or laser drilling are governed by a highly dynamic wavy boiling front that was recently recorded by ultra-high speed imaging. A new approach has now been established by post-modelling of the high speed images. Based on the image greyscale and on a cavity model the three-dimensional front topology is reconstructed. As a second step the Fresnel absorptivity modulation across the wavy front is calculated, combined with the local projection of the laser beam. Frequency polygons enable additional analysis of the statistical variations of the properties across the front. Trends like shadow formation and time dependency can be studied, locally and for the whole front. Despite strong topology modulation in space and time, for lasers with 1 μm wavelength and steel the absorptivity is bounded to a narrow range of 35-43%, owing to its Fresnel characteristics.
Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei
2016-01-01
Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation. PMID:27649190
Combined use of iterative reconstruction and monochromatic imaging in spinal fusion CT images.
Wang, Fengdan; Zhang, Yan; Xue, Huadan; Han, Wei; Yang, Xianda; Jin, Zhengyu; Zwar, Richard
2017-01-01
Spinal fusion surgery is an important procedure for treating spinal diseases and computed tomography (CT) is a critical tool for postoperative evaluation. However, CT image quality is considerably impaired by metal artifacts and image noise. To explore whether metal artifacts and image noise can be reduced by combining two technologies, adaptive statistical iterative reconstruction (ASIR) and monochromatic imaging generated by gemstone spectral imaging (GSI) dual-energy CT. A total of 51 patients with 318 spinal pedicle screws were prospectively scanned by dual-energy CT using fast kV-switching GSI between 80 and 140 kVp. Monochromatic GSI images at 110 keV were reconstructed either without or with various levels of ASIR (30%, 50%, 70%, and 100%). The quality of five sets of images was objectively and subjectively assessed. With objective image quality assessment, metal artifacts decreased when increasing levels of ASIR were applied (P < 0.001). Moreover, adding ASIR to GSI also decreased image noise (P < 0.001) and improved the signal-to-noise ratio (P < 0.001). The subjective image quality analysis showed good inter-reader concordance, with intra-class correlation coefficients between 0.89 and 0.99. The visualization of peri-implant soft tissue was improved at higher ASIR levels (P < 0.001). Combined use of ASIR and GSI decreased image noise and improved image quality in post-spinal fusion CT scans. Optimal results were achieved with ASIR levels ≥70%. © The Foundation Acta Radiologica 2016.
Improved target detection by IR dual-band image fusion
NASA Astrophysics Data System (ADS)
Adomeit, U.; Ebert, R.
2009-09-01
Dual-band thermal imagers acquire information simultaneously in both the 8-12 μm (long-wave infrared, LWIR) and the 3-5 μm (mid-wave infrared, MWIR) spectral ranges. Compared with single-band thermal imagers, they are expected to have several advantages in military applications, including the opportunity to use the best band for given atmospheric conditions (e.g. cold climate: LWIR; hot and humid climate: MWIR), the potential to better detect camouflaged targets, and improved discrimination between targets and decoys. Most of these advantages have not yet been verified and/or quantified. Image fusion is expected to allow better exploitation of the information content available with dual-band imagers, especially with respect to target detection. We have developed a method for dual-band image fusion based on the apparent temperature differences in the two bands, which showed promising results in laboratory tests. In order to evaluate its performance under operational conditions, we conducted a field trial in an area with high thermal clutter. In such areas, targets are hard to detect in single-band images because they vanish in the clutter structure. The image data collected in this field trial were used for a perception experiment, which showed an enhanced target detection range and a reduced false alarm rate for the fused images compared with the single-band images.
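The published abstract does not detail the fusion rule, so the following is only one plausible construction consistent with fusion based on apparent temperature differences: the band average carries the scene, and the normalized MWIR-LWIR difference is re-injected to emphasize dual-band contrast.

```python
import numpy as np

def dual_band_fuse(lwir, mwir, alpha=0.5, gain=0.5):
    """One plausible difference-emphasizing dual-band fusion (sketch).

    lwir, mwir : co-registered band images, assumed calibrated to
                 apparent temperature
    alpha      : weight of LWIR in the common background
    gain       : strength of the re-injected band-difference signal
    """
    base = alpha * lwir + (1.0 - alpha) * mwir   # shared scene content
    diff = lwir - mwir                           # dual-band contrast
    # Normalize the difference and re-inject it scaled to the scene's
    # dynamic range, so band-discrepant targets stand out from clutter.
    d = (diff - diff.mean()) / (diff.std() + 1e-12)
    return base + gain * d * base.std()
```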
NASA Astrophysics Data System (ADS)
Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.
2016-01-01
Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
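A simplified sketch of the transform-domain logic follows, with two substitutions stated up front: a standard discrete wavelet transform (PyWavelets) stands in for the lifting wavelet transform, and the low-frequency bands are merely averaged here where the paper applies robust principal component analysis. The high-frequency rule is the regional variance selection described above.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def variance_select(a, b, size=5):
    """Keep the coefficient whose regional variance is larger."""
    va = uniform_filter(a * a, size) - uniform_filter(a, size) ** 2
    vb = uniform_filter(b * b, size) - uniform_filter(b, size) ** 2
    return np.where(va >= vb, a, b)

def fuse_dwt(img_a, img_b, wavelet="db2", level=2):
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    # Low-frequency: plain average (the paper uses RPCA here instead).
    fused = [0.5 * (ca[0] + cb[0])]
    # High-frequency: regional variance selection per subband.
    for (ha, va_, da), (hb, vb_, db) in zip(ca[1:], cb[1:]):
        fused.append((variance_select(ha, hb),
                      variance_select(va_, vb_),
                      variance_select(da, db)))
    return pywt.waverec2(fused, wavelet)
```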
Chen, Hsin-Yu; Ng, Li-Shia; Chang, Chun-Shin; Lu, Ting-Chen; Chen, Ning-Hung; Chen, Zung-Chung
2017-06-01
Advances in three-dimensional imaging and three-dimensional printing technology have expanded the frontier of presurgical design for microtia reconstruction from two-dimensional curved lines to three-dimensional perspectives. This study presents an algorithm for combining three-dimensional surface imaging, computer-assisted design, and three-dimensional printing to create patient-specific auricular frameworks in unilateral microtia reconstruction. Between January of 2015 and January of 2016, six patients with unilateral microtia were enrolled. The average age of the patients was 7.6 years. A three-dimensional image of the patient's head was captured by 3dMDcranial, and virtual sculpture carried out using Geomagic Freeform software and a Touch X Haptic device for fabrication of the auricular template. Each template was tailored according to the patient's unique auricular morphology. The final construct was mirrored onto the defective side and printed out with biocompatible acrylic material. During the surgery, the prefabricated customized template served as a three-dimensional guide for surgical simulation and sculpture of the MEDPOR framework. Average follow-up was 10.3 months. Symmetric and good aesthetic results with regard to auricular shape, projection, and orientation were obtained. One case with severe implant exposure was salvaged with free temporoparietal fascia transfer and skin grafting. The combination of three-dimensional imaging and manufacturing technology with the malleability of MEDPOR has surpassed existing limitations resulting from the use of autologous materials and the ambiguity of two-dimensional planning. This approach allows surgeons to customize the auricular framework in a highly precise and sophisticated manner, taking a big step closer to the goal of mirror-image reconstruction for unilateral microtia patients. Therapeutic, IV.
Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.
Liu, Min; Wang, Xueping; Zhang, Hongzhong
2018-03-01
In the biomedical field, digital multi-focal images are very important for documentation and communication of specimen data, because the morphological information for a transparent specimen can be captured as a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We propose a deep convolutional neural network (CNN) image fusion based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN based image fusion technique is used to combine the relevant information of the multi-focal images within a given stack into a single image, which is more informative and complete than any single image in the stack. In addition, the multi-focal images within a stack are fused along 3 orthogonal directions, and the multiple features extracted from the fused images along different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors (texture, shape, different instances within the same class and different classes of objects), we embed the deep CNN based image fusion method within a multilinear framework to propose an image fusion based multilinear classifier. Experimental results on nematode multi-focal image stacks demonstrated that the deep CNN image fusion based multilinear classifier reaches a higher classification rate (95.7%) than the previous multilinear based approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed approach shows great potential for building an automated nematode taxonomy system for nematologists and is effective for classifying multi-focal image stacks. Copyright © 2018 Elsevier B.V. All rights reserved.
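The CCA combination step can be sketched with scikit-learn: two feature views (for instance, features extracted from stacks fused along two different directions) are projected onto maximally correlated components and concatenated into one descriptor. Feature extraction itself is assumed to be done elsewhere.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_combine(feats_x, feats_y, n_components=10):
    """Combine two feature views by canonical correlation analysis.

    feats_x, feats_y : (n_samples, n_features) arrays, e.g. CNN
                       features of stacks fused along two directions
    The views are projected onto maximally correlated components and
    concatenated into a single descriptor per sample.
    """
    cca = CCA(n_components=n_components)
    x_c, y_c = cca.fit_transform(feats_x, feats_y)
    return np.hstack([x_c, y_c])
```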
Opto-acoustic breast imaging with co-registered ultrasound
NASA Astrophysics Data System (ADS)
Zalev, Jason; Clingman, Bryan; Herzog, Don; Miller, Tom; Stavros, A. Thomas; Oraevsky, Alexander; Kist, Kenneth; Dornbluth, N. Carol; Otto, Pamela
2014-03-01
We present results from a recent study involving the Imagio™ breast imaging system, which produces fused real-time two-dimensional color-coded opto-acoustic (OA) images that are co-registered and temporally interleaved with real-time gray scale ultrasound using a specialized duplex handheld probe. The use of dual optical wavelengths provides functional blood-map images of breast tissue and tumors displayed with high contrast based on total hemoglobin and oxygen saturation of the blood. This provides functional diagnostic information pertaining to tumor metabolism. OA also shows morphologic information about tumor neo-vascularity that is complementary to the morphological information obtained with conventional gray scale ultrasound. This fusion technology conveniently enables real-time analysis of the functional opto-acoustic features of lesions detected by readers familiar with anatomical gray scale ultrasound. We demonstrate co-registered opto-acoustic and ultrasonic images of malignant and benign tumors from a recent clinical study that provide new insight into the function of tumors in vivo. Results from the feasibility study show preliminary evidence that the technology may have the capability to improve characterization of benign and malignant breast masses over conventional diagnostic breast ultrasound alone and to improve overall accuracy of breast mass diagnosis. In particular, OA improved specificity over that of conventional diagnostic ultrasound, which could potentially reduce the number of negative biopsies performed without missing cancers.
Approach for scene reconstruction from the analysis of a triplet of still images
NASA Astrophysics Data System (ADS)
Lechat, Patrick; Le Mestre, Gwenaelle; Pele, Danielle
1997-03-01
Three-dimensional modeling of a scene from the automatic analysis of 2D image sequences is a major challenge for future interactive audiovisual services based on 3D content manipulation, such as virtual visits, 3D teleconferencing and interactive television. We propose a scheme that computes 3D object models from stereo analysis of image triplets shot by calibrated cameras. After matching the different views with a correlation-based algorithm, a depth map referring to a given view is built using a fusion criterion that takes into account depth coherency, visibility constraints and correlation scores. Because luminance segmentation helps to compute accurate object borders and to detect and improve unreliable depth values, a two-step segmentation algorithm using both the depth map and the gray-level image is applied to extract the object masks. First, edge detection segments the luminance image into regions, and a multimodal thresholding method selects depth classes from the depth map. Then the regions are merged and labelled with the depth class numbers using a coherence test on depth values, according to the proportion of reliable and dominant depth values and the size of the regions. The structures of the segmented objects are obtained with a constrained Delaunay triangulation followed by a refining stage. Finally, texture mapping is performed using Open Inventor or VRML 1.0 tools.
NASA Technical Reports Server (NTRS)
Pavel, M.
1993-01-01
The topics covered include the following: an overview of the basic components of a system designed to improve the ability of a pilot to fly through low-visibility conditions such as fog; the role of the visual sciences; fusion issues; sensor characterization; sources of information; image processing; and image fusion.
Registration of 3D ultrasound computer tomography and MRI for evaluation of tissue correspondences
NASA Astrophysics Data System (ADS)
Hopp, T.; Dapp, R.; Zapf, M.; Kretzek, E.; Gemmeke, H.; Ruiter, N. V.
2015-03-01
3D Ultrasound Computer Tomography (USCT) is a new imaging method for breast cancer diagnosis. In the current state of development it is essential to correlate USCT with a known imaging modality like MRI to evaluate how different tissue types are depicted. Due to different imaging conditions, e.g. with the breast subject to buoyancy in USCT, a direct correlation is challenging. We present a 3D image registration method to reduce positioning differences and allow direct side-by-side comparison of USCT and MRI volumes. It is based on a two-step approach including a buoyancy simulation with a biomechanical model and free-form deformations using cubic B-splines for surface refinement. Simulation parameters are optimized patient-specifically in a simulated annealing scheme. The method was evaluated with in-vivo datasets, resulting in an average registration error below 5 mm. Correlating tissue structures can thereby be located in the same or nearby slices in both modalities, and three-dimensional non-linear deformations due to the buoyancy are reduced. Image fusion of MRI volumes and USCT sound speed volumes was performed for intuitive display. By applying the registration to data of our first in-vivo study with the KIT 3D USCT, we could correlate several tissue structures in MRI and USCT images and learn how connective tissue, carcinomas and breast implants observed in the MRI are depicted in the USCT imaging modes.
On the V-Line Radon Transform and Its Imaging Applications
Morvidone, M.; Nguyen, M. K.; Truong, T. T.; Zaidi, H.
2010-01-01
Radon transforms defined on smooth curves are well known and extensively studied in the literature. In this paper, we consider a Radon transform defined on a discontinuous curve formed by a pair of half-lines forming the vertical letter V. If the classical two-dimensional Radon transform has served as a workhorse for tomographic transmission and/or emission imaging, we show that this V-line Radon transform is the backbone of scattered radiation imaging in two dimensions. We establish its analytic inverse formula as well as a corresponding filtered back projection reconstruction procedure. These theoretical results allow the reconstruction of two-dimensional images from Compton-scattered radiation collected on a one-dimensional collimated camera. We illustrate the working principles of this imaging modality by presenting numerical simulation results. PMID:20706545
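For orientation, one common parameterization of the V-line Radon transform integrates the image function along two half-lines rising from a vertex on the detector axis at angle ω to the vertical; conventions vary, and the paper's exact definition may differ.

```latex
% Integral of f along the two half-lines meeting at vertex (x_v, 0),
% each making angle \omega with the vertical axis:
\mathcal{R}_V f(x_v,\omega) =
  \int_0^{\infty} \left[ f\!\left(x_v - t\sin\omega,\, t\cos\omega\right)
                       + f\!\left(x_v + t\sin\omega,\, t\cos\omega\right) \right] dt
```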
Multiclassifier information fusion methods for microarray pattern recognition
NASA Astrophysics Data System (ADS)
Braun, Jerome J.; Glina, Yan; Judson, Nicholas; Herzig-Marx, Rachel
2004-04-01
This paper addresses automatic recognition of microarray patterns, a capability that could have major significance for medical diagnostics, enabling the development of diagnostic tools for automatic discrimination of specific diseases. The paper presents multiclassifier information fusion methods for microarray pattern recognition. An input-space partitioning approach is investigated, based on fitness measures that constitute an a-priori gauging of classification efficacy for each subspace. Methods for generating fitness measures, generating input subspaces and using them in the multiclassifier fusion architecture are presented. In particular, a two-level quantification of fitness is described that accounts for the quality of each subspace as well as the quality of individual neighborhoods within the subspace. The individual-subspace classifiers are Support Vector Machine based. The decision fusion stage fuses the information from multiple SVMs along with the multi-level fitness information. Final-stage decision fusion techniques, including weighted fusion as well as Dempster-Shafer theory based fusion, are investigated. While the above methods are discussed in the context of microarray pattern recognition, they are applicable to a broader range of discrimination problems, in particular problems involving a large number of information sources irreducible to a low-dimensional feature space.
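The weighted variant of the final fusion stage is straightforward to sketch; the fitness values are assumed to come from the a-priori subspace gauging described above, and score calibration across SVMs is left aside. Dempster-Shafer combination would replace this weighted sum.

```python
import numpy as np

def weighted_decision_fusion(scores, fitness):
    """Weighted final-stage fusion of per-subspace classifier outputs.

    scores  : (n_classifiers, n_classes) calibrated SVM class scores
    fitness : (n_classifiers,) a-priori fitness of each input subspace
    Returns the fused class index.
    """
    w = np.asarray(fitness, dtype=float)
    w = w / w.sum()                       # normalize fitness to weights
    fused = (w[:, None] * np.asarray(scores)).sum(axis=0)
    return int(np.argmax(fused))
```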
Interventional Cardiology: What's New?
Scansen, Brian A
2017-09-01
Interventional cardiology in veterinary medicine continues to expand beyond the standard 3 procedures of patent ductus arteriosus occlusion, balloon pulmonary valvuloplasty, and transvenous pacing. Opportunities for fellowship training; advances in equipment, including high-resolution digital fluoroscopy, real-time 3-dimensional transesophageal echocardiography, fusion imaging, and rotational angiography; ultrasound-guided access and vascular closure devices; and refinement of techniques, including cutting and high-pressure ballooning, intracardiac and intravascular stent implantation, septal defect occlusion, transcatheter valve implantation, and hybrid approaches, are likely to transform the field over the next decade. Copyright © 2017 Elsevier Inc. All rights reserved.
Image fusion based on Bandelet and sparse representation
NASA Astrophysics Data System (ADS)
Zhang, Jiuxing; Zhang, Wei; Li, Xuzhi
2018-04-01
The Bandelet transform can capture geometric regular directions and geometric flow, while sparse representation can represent signals with as few atoms as possible over an over-complete dictionary; both are useful for image fusion. Therefore, a new fusion method based on the Bandelet transform and sparse representation is proposed, which fuses the Bandelet coefficients of multi-source images to obtain high-quality fusion results. Tests are performed on remote sensing images and simulated multi-focus images; experimental results show that the new method outperforms the tested methods in terms of objective evaluation indexes and subjective visual quality.
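A patch-level sketch of sparse-representation fusion follows, using orthogonal matching pursuit from scikit-learn. The dictionary is assumed given (e.g. learned, or an overcomplete DCT), and for brevity the coding is shown in the pixel domain, whereas the paper codes Bandelet coefficients; the activity-level (max-l1) selection rule is a common choice, not necessarily the authors' exact rule.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def sr_fuse_patch(patch_a, patch_b, dictionary, k=8):
    """Sparse-representation fusion of two co-located patches (sketch).

    dictionary : (patch_dim, n_atoms) over-complete dictionary
    Each flattened patch is sparsely coded with OMP; the code with the
    larger l1 norm (activity level) is kept, and the fused patch is
    reconstructed from it.
    """
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
    codes = []
    for p in (patch_a, patch_b):
        omp.fit(dictionary, p.ravel())
        codes.append(omp.coef_.copy())
    best = max(codes, key=lambda c: np.abs(c).sum())
    return (dictionary @ best).reshape(patch_a.shape)
```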
Fusion method of SAR and optical images for urban object extraction
NASA Astrophysics Data System (ADS)
Jia, Yonghong; Blum, Rick S.; Li, Fangfang
2007-11-01
A new image fusion method for SAR, panchromatic (Pan) and multispectral (MS) data is proposed. First, SAR texture is extracted by ratioing the despeckled SAR image to its low-pass approximation and is used to modulate the high-pass details extracted from the available Pan image by means of the à trous wavelet decomposition. The texture-modulated high-pass details are then used to obtain the fusion product by the HPFM (high-pass filter-based modulation) fusion method. A set of co-registered Landsat TM, ENVISAT SAR and SPOT Pan images is used for the experiment. The results demonstrate accurate spectral preservation in vegetated regions and bare soil, as well as in textured areas (buildings and the road network) where SAR texture information enhances the fusion product; the proposed approach is effective for image interpretation and classification.
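A hedged sketch of the texture-modulated detail injection: the B3-spline kernel classically used by the à trous decomposition supplies the Pan detail plane, the ratio of SAR to low-pass SAR supplies the texture, and each multispectral band is modulated multiplicatively. The exact normalization used by HPFM may differ.

```python
import numpy as np
from scipy.ndimage import convolve

# B3-spline kernel classically used by the a trous wavelet transform
_B3 = np.outer([1, 4, 6, 4, 1], [1, 4, 6, 4, 1]) / 256.0

def atrous_detail(pan):
    """High-pass detail plane: Pan minus its B3-spline approximation."""
    return pan - convolve(pan, _B3, mode="nearest")

def hpfm_fuse(ms_band, pan, sar, sar_lowpass, eps=1e-6):
    """Texture-modulated, HPFM-style band sharpening (sketch)."""
    texture = sar / (sar_lowpass + eps)          # SAR texture map
    detail = atrous_detail(pan) * texture        # modulated Pan detail
    approx = convolve(pan, _B3, mode="nearest") + eps
    # Multiplicative injection of the relative detail into the MS band.
    return ms_band * (1.0 + detail / approx)
```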
Three-dimensional scene reconstruction from a two-dimensional image
NASA Astrophysics Data System (ADS)
Parkins, Franz; Jacobs, Eddie
2017-05-01
We propose and simulate a method of reconstructing a three-dimensional scene from a two-dimensional image for developing and augmenting world models for autonomous navigation. This is an extension of the Perspective-n-Point (PnP) method, which uses a sampling of 3D scene and 2D image point pairings together with Random Sample Consensus (RANSAC) to infer the pose of the object and produce a 3D mesh of the original scene. Using object recognition and segmentation, we simulate the implementation on a scene of 3D objects with an eye to implementation on embeddable hardware. The final solution will be deployed on the NVIDIA Tegra platform.
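The PnP-plus-RANSAC step maps directly onto OpenCV. Given sampled 3D scene points, their 2D image projections and the camera intrinsics, the pose is recovered with inlier selection; the reprojection threshold and iteration count below are illustrative defaults.

```python
import numpy as np
import cv2

def estimate_pose(points_3d, points_2d, K):
    """Recover camera pose from 3D-2D point pairings with PnP + RANSAC.
    K is the 3x3 intrinsic matrix; thresholds are illustrative."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float32), points_2d.astype(np.float32),
        K, None, iterationsCount=200, reprojectionError=3.0)
    if not ok:
        raise RuntimeError("PnP pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> rotation matrix
    return R, tvec, inliers
```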
Scattering calculation and image reconstruction using elevation-focused beams
Duncan, David P.; Astheimer, Jeffrey P.; Waag, Robert C.
2009-01-01
Pressure scattered by cylindrical and spherical objects with elevation-focused illumination and reception has been analytically calculated, and corresponding cross sections have been reconstructed with a two-dimensional algorithm. Elevation focusing was used to elucidate constraints on quantitative imaging of three-dimensional objects with two-dimensional algorithms. Focused illumination and reception are represented by angular spectra of plane waves that were efficiently computed using a Fourier interpolation method to maintain the same angles for all temporal frequencies. Reconstructions were formed using an eigenfunction method with multiple frequencies, phase compensation, and iteration. The results show that the scattered pressure reduces to a two-dimensional expression, and two-dimensional algorithms are applicable when the region of a three-dimensional object within an elevation-focused beam is approximately constant in elevation. The results also show that energy scattered out of the reception aperture by objects contained within the focused beam can result in the reconstructed values of attenuation slope being greater than true values at the boundary of the object. Reconstructed sound speed images, however, appear to be relatively unaffected by the loss in scattered energy. The broad conclusion that can be drawn from these results is that two-dimensional reconstructions require compensation to account for uncaptured three-dimensional scattering. PMID:19425653
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shabbir, A., E-mail: aqsa.shabbir@ugent.be; Noterdaeme, J. M.; Max-Planck-Institut für Plasmaphysik, Garching D-85748
2014-11-15
Information visualization aimed at facilitating human perception is an important tool for the interpretation of experiments on the basis of complex multidimensional data characterizing the operational space of fusion devices. This work describes a method for visualizing the operational space on a two-dimensional map and applies it to the discrimination of type I and type III edge-localized modes (ELMs) from a series of carbon-wall ELMy discharges at JET. The approach accounts for the stochastic uncertainties that play an important role in fusion data sets by modeling measurements with probability distributions in a metric space. The method is aimed at contributing to the physical understanding of ELMs as well as their control. Furthermore, it is a general method that can be applied to the modeling of various other plasma phenomena.
Two-dimensional PCA-based human gait identification
NASA Astrophysics Data System (ADS)
Chen, Jinyan; Wu, Rongteng
2012-11-01
Automatically recognizing people in visual surveillance footage is highly desirable for public security. Human gait based identification aims to recognize a person from video of his or her walk, using computer vision and image processing approaches. As a potential biometric measure, human gait identification has attracted more and more researchers. Current methods can be divided into two categories: model-based and motion-based. In this paper, a human gait identification method based on two-dimensional principal component analysis (2DPCA) and temporal-space analysis is proposed. Using background estimation and image subtraction, a binary image sequence is obtained from the surveillance video; differencing each pair of adjacent images then yields a sequence of binary difference images, each of which indicates the body's motion pattern at that instant of the walk. Temporal-space features are extracted from this sequence as follows: projecting each difference image onto the Y axis and onto the X axis gives two vectors, and stacking these projections over the whole sequence gives two matrices, which together characterize one walk. 2DPCA is then used to transform these two matrices into two vectors while retaining maximum separability. Finally, the similarity of two human gait sequences is computed as the Euclidean distance between the corresponding vectors. The performance of the method is illustrated using the CASIA Gait Database.
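The projection and 2DPCA steps can be sketched compactly. Here G is the image covariance matrix computed directly on the 2D gait matrices (no vectorization), and signatures are compared by Euclidean distance as described; the number of retained axes is illustrative.

```python
import numpy as np

def projection_matrix(diff_images, axis=0):
    """Stack per-frame projections: each binary difference image is
    summed along one axis, giving one row of the gait matrix."""
    return np.stack([d.sum(axis=axis) for d in diff_images]).astype(float)

def two_d_pca_axes(gait_matrices, k=10):
    """2DPCA: eigenvectors of the image covariance matrix G, computed
    directly on the 2D matrices without vectorizing them."""
    mean = np.mean(gait_matrices, axis=0)
    G = sum((m - mean).T @ (m - mean) for m in gait_matrices)
    G = G / len(gait_matrices)
    vals, vecs = np.linalg.eigh(G)
    return vecs[:, np.argsort(vals)[::-1][:k]]   # top-k projection axes

def gait_distance(mat_a, mat_b, axes):
    """Euclidean distance between 2DPCA-projected gait matrices."""
    return np.linalg.norm(mat_a @ axes - mat_b @ axes)
```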
Plasma focus neutron anisotropy measurements and influence of a deuteron beam obstacle
NASA Astrophysics Data System (ADS)
Talebitaher, A.; Springham, S. V.; Rawat, R. S.; Lee, P.
2017-03-01
The deuterium-deuterium (DD) fusion neutron yield and anisotropy were measured on a shot-to-shot basis for the NX2 plasma focus (PF) device using two beryllium fast-neutron activation detectors at 0° and 90° to the PF axis. Measurements were performed for deuterium gas pressures in the range 6-16 mbar, and positive correlations between neutron yield and anisotropy were observed at all pressures. Subsequently, at one deuterium gas pressure (13 mbar), the contribution to the fusion yield produced by the forwardly-directed D+ ion beam, emitted from the plasma pinch, was investigated by using a circular Pyrex plate to obstruct the beam and suppress its fusion contribution. Neutron measurements were performed with the obstacle positioned at two distances from the anode tip, and also without the obstacle. It was found that approximately 80% of the neutron yield originates in and just above the plasma pinch column. In addition, proton pinhole imaging was performed from the 0° and 90° directions to the pinch. The obtained proton images are consistent with the conclusion that DD fusion is concentrated (≈80%) in the pinch column region.
A multi-component evaporation model for beam melting processes
NASA Astrophysics Data System (ADS)
Klassen, Alexander; Forster, Vera E.; Körner, Carolin
2017-02-01
In additive manufacturing using laser or electron beam melting technologies, evaporation losses and changes in chemical composition are known issues when processing alloys with volatile elements. In this paper, a recently described numerical model based on a two-dimensional free surface lattice Boltzmann method is further developed to incorporate the effects of multi-component evaporation. The model takes into account the local melt pool composition during heating and fusion of metal powder. For validation, the titanium alloy Ti-6Al-4V is melted by selective electron beam melting and analysed using mass loss measurements and high-resolution microprobe imaging. Numerically determined evaporation losses and spatial distributions of aluminium compare well with experimental data. Predictions of the melt pool formation in bulk samples provide insight into the competition between the loss of volatile alloying elements from the irradiated surface and their advective redistribution within the molten region.
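The abstract does not state which evaporation law the model employs; a common choice in such melt-pool models is the Hertz-Knudsen-Langmuir flux combined with Raoult's law for the partial pressures, sketched below with illustrative (not paper-derived) numbers.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def langmuir_mass_flux(x_i, gamma_i, p_sat_i, m_i, T):
    """Evaporative mass flux (kg m^-2 s^-1) of alloy component i from a free
    melt surface: Langmuir equation with Raoult's law for the partial
    pressure.  x_i: melt mole fraction, gamma_i: activity coefficient,
    p_sat_i: pure-element saturation pressure at T (Pa), m_i: atomic mass (kg)."""
    p_i = gamma_i * x_i * p_sat_i              # partial pressure over the melt
    return p_i * np.sqrt(m_i / (2.0 * np.pi * K_B * T))

# illustrative numbers only (not from the paper): Al over a Ti-6Al-4V melt
# at 2000 K, with an assumed saturation pressure of 1 kPa
flux_al = langmuir_mass_flux(x_i=0.10, gamma_i=1.0, p_sat_i=1.0e3,
                             m_i=26.98 * 1.66054e-27, T=2000.0)
```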
Remote Sensing Data Fusion to Detect Illicit Crops and Unauthorized Airstrips
NASA Astrophysics Data System (ADS)
Pena, J. A.; Yumin, T.; Liu, H.; Zhao, B.; Garcia, J. A.; Pinto, J.
2018-04-01
Remote sensing data fusion plays an increasingly important role in monitoring crop planting areas, especially for acquiring information on crop acreage. Multi-temporal data and multi-spectral time series are two major means of improving crop identification accuracy. Remote sensing fusion provides high-quality multi-spectral and panchromatic images in terms of spectral and spatial information, respectively. In this paper, we take this one step further and demonstrate the application of remote sensing data fusion to detecting illicit crops through LSMM, GOBIA, and MCE analysis of strategic information. This methodology emerges as a complementary and effective strategy for controlling and eradicating illicit crops.
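Of the three analyses named, LSMM (the linear spectral mixture model) is the most readily illustrated in code. The sketch below shows one conventional formulation, abundance estimation by non-negative least squares with a softly enforced sum-to-one constraint; it is a generic stand-in, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import nnls

def lsmm_unmix(pixel, endmembers, weight=1e3):
    """Linear spectral mixture model: estimate endmember abundances for one
    pixel.  Non-negativity is enforced by NNLS; the sum-to-one constraint is
    imposed softly through a heavily weighted extra equation.
    pixel: (bands,), endmembers: (bands, n_endmembers)."""
    A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    b = np.append(pixel, weight)
    abundances, _ = nnls(A, b)
    return abundances
```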
Case retrieval in medical databases by fusing heterogeneous information.
Quellec, Gwénolé; Lamard, Mathieu; Cazuguel, Guy; Roux, Christian; Cochener, Béatrice
2011-01-01
A novel content-based heterogeneous information retrieval framework, particularly well suited to browse medical databases and support new generation computer aided diagnosis (CADx) systems, is presented in this paper. It was designed to retrieve possibly incomplete documents, consisting of several images and semantic information, from a database; more complex data types such as videos can also be included in the framework. The proposed retrieval method relies on image processing, in order to characterize each individual image in a document by its digital content, and on information fusion. Once the available images in a query document are characterized, a degree of match, between the query document and each reference document stored in the database, is defined for each attribute (an image feature or a metadata). A Bayesian network is used to recover missing information if need be. Finally, two novel information fusion methods are proposed to combine these degrees of match, in order to rank the reference documents by decreasing relevance for the query. In the first method, the degrees of match are fused by the Bayesian network itself. In the second method, they are fused by the Dezert-Smarandache theory: the second approach lets us model our confidence in each source of information (i.e., each attribute) and take it into account in the fusion process for better retrieval performance. The proposed methods were applied to two heterogeneous medical databases, a diabetic retinopathy database and a mammography screening database, for computer aided diagnosis. Precision at five of 0.809 ± 0.158 and 0.821 ± 0.177, respectively, was obtained for these two databases, which is very promising.
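The Bayesian-network and Dezert-Smarandache fusion steps are beyond a short sketch; as a simplified stand-in, the snippet below ranks reference documents by a confidence-weighted average of per-attribute degrees of match, skipping missing attributes. It conveys the shape of the fusion problem without reproducing either method.

```python
import numpy as np

def rank_documents(query_matches, confidences):
    """Rank reference documents by a fused relevance score.
    query_matches: (n_docs, n_attributes) degrees of match in [0, 1],
    with np.nan where an attribute is missing from a document;
    confidences: (n_attributes,) trust in each source of information."""
    w = np.where(np.isnan(query_matches), 0.0, confidences)  # ignore missing
    m = np.nan_to_num(query_matches)
    scores = (m * w).sum(axis=1) / np.maximum(w.sum(axis=1), 1e-12)
    return np.argsort(scores)[::-1]  # document indices, most relevant first
```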
Quantifying the kinetics and morphological changes of the fusion of spheroid building blocks.
Susienka, Michael J; Wilks, Benjamin T; Morgan, Jeffrey R
2016-10-10
Tissue fusion, whereby two or more spheroids coalesce, is a process that is fundamental to biofabrication. We have designed a quantitative, high-throughput platform to investigate the fusion of multicellular spheroids using agarose micro-molds. Spheroids of primary human chondrocytes (HCH) or human breast cancer cells (MCF-7) were self-assembled for 24 h and then brought together to form an array comprised of two spheroids (one doublet) per well. To quantify spheroid fusogenicity, we developed two assays: (1) an initial tack assay, defined as the minimum amount of time for two spheroids to form a mechanically stable tissue complex or doublet, and (2) a fusion assay, in which we defined and tracked key morphological parameters of the doublets as a function of time using wide-field fluorescence microscopy over a 24 h time-lapse. The initial tack of spheroid fusion was measured by inverting the micro-molds and centrifuging doublets at various time points to assess their connectedness. We found that the initial tack between two spheroids forms rapidly, with the majority of doublets remaining intact after centrifugation following just 30 min of fusion. Over the course of 24 h of fusion, several morphological changes occurred, which were quantified using a custom image analysis pipeline. End-to-end doublet lengths decreased over time, doublet widths decreased for chondrocytes and increased for MCF-7, contact lengths increased over time, and chondrocyte doublets exhibited higher intersphere angles at the end of fusion. We also assessed fusion by measuring the fluorescence intensity at the plane of fusion, which increased over time for both cell types. Interestingly, we observed that doublets moved and rotated in the micro-wells during fusion and this rotation was inhibited by ROCK inhibitor Y-27632 and myosin II inhibitor blebbistatin. Understanding and optimizing tissue fusion is essential for creating larger tissues, organs, or other structures using individual microtissues as building parts.
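As a rough illustration of the kind of measurement such an image-analysis pipeline makes, the sketch below extracts a doublet's end-to-end length and width from a binary segmentation mask using scikit-image. The axis-based definitions are one plausible choice, not the authors' exact pipeline, which also tracks contact length and intersphere angle.

```python
from skimage.measure import label, regionprops

def doublet_morphology(mask, um_per_px):
    """Measure a fused spheroid doublet from a binary segmentation mask.
    Returns end-to-end length and width in micrometres, approximated by the
    major and minor axes of the merged region."""
    regions = regionprops(label(mask))
    doublet = max(regions, key=lambda r: r.area)   # largest object = the doublet
    return (doublet.major_axis_length * um_per_px,
            doublet.minor_axis_length * um_per_px)
```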
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deniaud-Alexandre, Elisabeth; Touboul, Emmanuel; Lerouge, Delphine
2005-12-01
Purpose: To report a retrospective study concerning the impact of fused ¹⁸F-fluoro-deoxy-D-glucose (FDG)-hybrid positron emission tomography (PET) and CT images on three-dimensional conformal radiotherapy planning for patients with non-small-cell lung cancer. Methods and Materials: A total of 101 patients consecutively treated for Stage I-III non-small-cell lung cancer were studied. Each patient underwent CT and FDG-hybrid PET for simulation treatment in the same treatment position. Images were coregistered using five fiducial markers. Target volume delineation was initially performed on the CT images, and the corresponding FDG-PET data were subsequently used as an overlay to the CT data to define the target volume. Results: ¹⁸F-fluoro-deoxy-D-glucose-PET identified previously undetected distant metastatic disease in 8 patients, making them ineligible for curative conformal radiotherapy (1 patient presented with some positive uptake corresponding to concomitant pulmonary tuberculosis). Another patient was ineligible for curative treatment because the fused PET-CT images demonstrated excessively extensive intrathoracic disease. The gross tumor volume (GTV) was decreased by CT-PET image fusion in 21 patients (23%) and was increased in 24 patients (26%). The GTV reduction was ≥25% in 7 patients, because CT-PET image fusion reduced the pulmonary GTV in 6 patients (3 patients with atelectasis) and the mediastinal nodal GTV in 1 patient. The GTV increase was ≥25% in 14 patients, owing to an increase in the pulmonary GTV in 11 patients (4 patients with atelectasis) and detection of occult mediastinal lymph node involvement in 3 patients. Of 81 patients receiving a total dose of ≥60 Gy at the International Commission on Radiation Units and Measurements point, after CT-PET image fusion, the percentage of total lung volume receiving >20 Gy increased in 15 cases and decreased in 22. The percentage of total heart volume receiving >36 Gy increased in 8 patients and decreased in 14. The spinal cord volume receiving at least 45 Gy (2 patients) decreased. Multivariate analysis showed that tumor with atelectasis was the single independent factor with a significant effect on the modification of GTV size by FDG-PET (with vs. without atelectasis, p = 0.0001). Conclusion: The results of our study have confirmed that integrated hybrid PET/CT in the treatment position and coregistered images have an impact on treatment planning and management of non-small-cell lung cancer. However, FDG images using dedicated PET scanners and respiration-gated acquisition protocols could improve the PET-CT image coregistration. Furthermore, the impact on treatment outcome remains to be demonstrated.
Tagaste, Barbara; Riboldi, Marco; Spadea, Maria F; Bellante, Simone; Baroni, Guido; Cambria, Raffaella; Garibaldi, Cristina; Ciocca, Mario; Catalano, Gianpiero; Alterio, Daniela; Orecchia, Roberto
2012-04-01
To compare infrared (IR) optical vs. stereoscopic X-ray technologies for patient setup in image-guided stereotactic radiotherapy. Retrospective data analysis of 233 fractions in 127 patients treated with hypofractionated stereotactic radiotherapy was performed. Patient setup at the linear accelerator was carried out by means of combined IR optical localization and stereoscopic X-ray image fusion in 6 degrees of freedom (6D). Data were analyzed to evaluate the geometric and dosimetric discrepancy between the two patient setup strategies. Differences between IR optical localization and 6D X-ray image fusion parameters were on average within the expected localization accuracy, as limited by CT image resolution (3 mm). A disagreement between the two systems below 1 mm in all directions was measured in patients treated for cranial tumors. In extracranial sites, larger discrepancies and higher variability were observed as a function of the initial patient alignment. The compensation of IR-detected rotational errors resulted in a significantly improved agreement with 6D X-ray image fusion. On the basis of the bony anatomy registrations, the measured differences were found not to be sensitive to patient breathing. The related dosimetric analysis showed that IR-based patient setup caused limited variations in three cases, with 7% maximum dose reduction in the clinical target volume and no dose increase in organs at risk. In conclusion, patient setup driven by IR external surrogates localization in 6D featured comparable accuracy with respect to procedures based on stereoscopic X-ray imaging. Copyright © 2012 Elsevier Inc. All rights reserved.
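A minimal sketch of how the geometric discrepancy between two 6D setup corrections might be computed, using SciPy rotations; it ignores rotation-translation coupling about the isocenter, and all names and example values are illustrative.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def setup_discrepancy(t_ir, r_ir_deg, t_xray, r_xray_deg):
    """Residual between two 6D patient-setup corrections (translation in mm,
    rotation as XYZ Euler angles in degrees), e.g. IR optical localization
    vs. stereoscopic X-ray image fusion."""
    R_ir = Rotation.from_euler("xyz", r_ir_deg, degrees=True)
    R_xr = Rotation.from_euler("xyz", r_xray_deg, degrees=True)
    dR = R_xr * R_ir.inv()                      # residual rotation
    dt = np.asarray(t_xray) - np.asarray(t_ir)  # residual translation, mm
    return dt, np.degrees(dR.magnitude())       # mm vector, total angle in degrees

# example: sub-millimetre translational and ~0.5 degree rotational disagreement
dt, dang = setup_discrepancy([0.2, -0.1, 0.4], [0.3, 0.1, -0.2],
                             [0.5, 0.2, -0.1], [0.1, 0.4, 0.0])
```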
Improved detection probability of low level light and infrared image fusion system
NASA Astrophysics Data System (ADS)
Luo, Yuxiang; Fu, Rongguo; Zhang, Junju; Wang, Wencong; Chang, Benkang
2018-02-01
A low-level-light (LLL) image contains rich detail about the environment but is easily degraded by the weather; in smoke, rain, cloud, or fog, much target information is lost. An infrared image, formed from the radiation emitted by objects themselves, can "actively" capture target information in the scene; however, its contrast and resolution are poor, it acquires little target detail, and the imaging mode does not match human visual habits. Fusing LLL and infrared images compensates for the deficiencies of each sensor while exploiting the advantages of both. We first present the hardware design of the fusion circuit. Then, by computing recognition probabilities for a target (one person) and the background (trees), we find that the detection probability of trees is higher in the LLL image than in the infrared image, whereas the detection probability of the person is clearly higher in the infrared image than in the LLL image. The fused image yields a higher detection probability for both the person and the trees than either detector alone. Image fusion can therefore significantly increase recognition probability and improve detection efficiency.
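As a hedged, single-scale illustration of pixel-level fusion (real systems, including the hardware circuit described here, typically use multiscale decompositions), the sketch below blends the two images with local-variance weights so each sensor dominates where it carries more detail.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_lll_ir(lll, ir, win=9, eps=1e-6):
    """Pixel-level fusion of a low-level-light image and an infrared image
    (both float arrays in [0, 1]).  Local variance serves as a crude
    saliency weight for each sensor."""
    def local_var(img):
        mean = uniform_filter(img, win)
        return np.maximum(uniform_filter(img * img, win) - mean * mean, 0.0)
    w_lll, w_ir = local_var(lll), local_var(ir)
    return (w_lll * lll + w_ir * ir) / (w_lll + w_ir + eps)
```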
Li, Kai; Su, Zhong-Zhen; Xu, Er-Jiao; Ju, Jin-Xiu; Meng, Xiao-Chun; Zheng, Rong-Qin
2016-04-18
To assess whether intraoperative use of contrast-enhanced ultrasound (CEUS)-CT/MR image fusion can accurately evaluate the ablative margin (AM) and guide supplementary ablation to improve the AM after hepatocellular carcinoma (HCC) ablation. Ninety-eight patients with 126 HCCs designated to undergo thermal ablation treatment were enrolled in this prospective study. CEUS-CT/MR image fusion was performed intraoperatively to evaluate whether a 5-mm AM was covered by the ablative area. Where possible, supplementary ablation was applied at the site of inadequate AM. The CEUS image quality, the time required for CEUS-CT/MR image fusion, and the success rate of image fusion were recorded. Local tumor progression (LTP) was observed during follow-up. Clinical factors including AM were examined to identify risk factors for LTP. The success rate of image fusion was 96.2% (126/131), and the duration required for image fusion was 4.9 ± 2.0 (3-13) min. The CEUS image quality was good in 36.1% (53/147) and medium in 63.9% (94/147) of the cases. With supplementary ablation, 21.8% (12/55) of lesions with inadequate AMs achieved adequate AMs. During follow-up, there were 5 LTPs in lesions with inadequate AMs and 1 LTP in lesions with adequate AMs. Multivariate analysis showed that AM was the only independent risk factor for LTP (hazard ratio, 9.167; 95% confidence interval, 1.070-78.571; p = 0.043). CEUS-CT/MR image fusion is feasible for intraoperative use and can serve as an accurate method to evaluate AMs and to guide supplementary ablation, reducing the number of inadequate AMs.
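A sketch of how 5-mm AM coverage could be checked on registered masks via a Euclidean distance transform; the masks, spacing, and function names are assumptions, not the study's software.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def ablative_margin_adequate(tumor, ablation, voxel_mm, margin_mm=5.0):
    """Check whether the ablation zone covers the tumor plus a safety margin.
    tumor, ablation: boolean 3-D masks on the same (registered) grid;
    voxel_mm: voxel spacing, e.g. (1.0, 0.7, 0.7).  The tumor is dilated by
    margin_mm via a Euclidean distance transform, then tested for inclusion."""
    dist = distance_transform_edt(~tumor, sampling=voxel_mm)  # mm to tumor surface
    tumor_plus_margin = dist <= margin_mm
    return bool(np.all(ablation[tumor_plus_margin]))
```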
NASA Astrophysics Data System (ADS)
Moonon, Altan-Ulzii; Hu, Jianwen; Li, Shutao
2015-12-01
Remote sensing image fusion is an important preprocessing technique in remote sensing image processing. In this paper, a remote sensing image fusion method based on the nonsubsampled shearlet transform (NSST) with sparse representation (SR) is proposed. First, the low-resolution multispectral (MS) image is upsampled and its color space is transformed from Red-Green-Blue (RGB) to Intensity-Hue-Saturation (IHS). Then, the high-resolution panchromatic (PAN) image and the intensity component of the MS image are decomposed by NSST into high- and low-frequency coefficients. The low-frequency coefficients of the PAN image and the intensity component are fused by SR with a learned dictionary, while the high-frequency coefficients are fused by a local-energy-based fusion rule. Finally, the fused result is obtained by performing the inverse NSST and inverse IHS transforms. Experimental results on IKONOS and QuickBird satellite images demonstrate that the proposed method provides better spectral quality and superior spatial information in the fused image than other remote sensing image fusion methods, in both visual effect and objective evaluation.
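The sketch below shows only the IHS skeleton of such a pansharpening scheme, with comments marking where the paper's NSST decomposition and sparse-representation fusion would slot in; the direct intensity substitution used here is a simplification, not the authors' method.

```python
import numpy as np

def ihs_pansharpen(ms_up, pan):
    """Skeleton of IHS-based pansharpening: ms_up is the upsampled
    multispectral image (H, W, 3) and pan the panchromatic band (H, W),
    both float arrays."""
    intensity = ms_up.mean(axis=2)                      # simple I of the IHS model
    pan_m = (pan - pan.mean()) / (pan.std() + 1e-12)
    pan_m = pan_m * intensity.std() + intensity.mean()  # match PAN statistics to I
    # --- in the paper: NSST on `intensity` and `pan_m`, low-frequency
    # coefficients fused by sparse representation, high-frequency ones by a
    # local-energy rule, then inverse NSST; here we substitute directly ---
    detail = pan_m - intensity
    return np.clip(ms_up + detail[..., None], 0.0, None)
```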
A Two-Stream Deep Fusion Framework for High-Resolution Aerial Scene Classification.
Yu, Yunlong; Liu, Fuxian
2018-01-01
One of the challenging problems in understanding high-resolution remote sensing images is aerial scene classification. A well-designed feature representation method and classifier can improve classification accuracy. In this paper, we construct a new two-stream deep architecture for aerial scene classification. First, we use two pretrained convolutional neural networks (CNNs) as feature extractor to learn deep features from the original aerial image and the processed aerial image through saliency detection, respectively. Second, two feature fusion strategies are adopted to fuse the two different types of deep convolutional features extracted by the original RGB stream and the saliency stream. Finally, we use the extreme learning machine (ELM) classifier for final classification with the fused features. The effectiveness of the proposed architecture is tested on four challenging datasets: UC-Merced dataset with 21 scene categories, WHU-RS dataset with 19 scene categories, AID dataset with 30 scene categories, and NWPU-RESISC45 dataset with 45 challenging scene categories. The experimental results demonstrate that our architecture gets a significant classification accuracy improvement over all state-of-the-art references.
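A compact sketch of the overall recipe, two pretrained CNN streams fused by feature concatenation and classified with an extreme learning machine; the backbone choice, layer cut point, and hyperparameters are assumptions rather than the paper's exact configuration.

```python
import numpy as np
import torch
from torchvision.models import resnet18

def extract_features(model, images):
    """Deep features from a pretrained CNN backbone; images: (N, 3, 224, 224)
    float tensor, already normalized.  The classifier head is discarded."""
    model.eval()
    with torch.no_grad():
        feats = torch.nn.Sequential(*list(model.children())[:-1])(images)
    return feats.flatten(1).numpy()

def elm_train(X, y, n_hidden=1024, seed=0):
    """Extreme learning machine: random hidden layer + least-squares readout."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    H = np.tanh(X @ W)                            # random nonlinear features
    T = np.eye(y.max() + 1)[y]                    # one-hot targets
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)  # closed-form output weights
    return W, beta

# two streams: the RGB image and its saliency-processed version, fused by
# concatenation (one of the paper's two fusion strategies):
# f = np.hstack([extract_features(resnet18(weights="DEFAULT"), rgb_batch),
#                extract_features(resnet18(weights="DEFAULT"), saliency_batch)])
```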
A pre-trained convolutional neural network based method for thyroid nodule diagnosis.
Ma, Jinlian; Wu, Fa; Zhu, Jiang; Xu, Dong; Kong, Dexing
2017-01-01
In ultrasound images, most thyroid nodules have heterogeneous appearances, with various internal components and vague boundaries, so it is difficult for physicians to discriminate malignant thyroid nodules from benign ones. In this study, we propose a hybrid method for thyroid nodule diagnosis that fuses two pre-trained convolutional neural networks (CNNs) with different convolutional and fully-connected layers. First, the two networks, pre-trained on the ImageNet database, are trained separately. Second, we fuse the feature maps learned by the trained convolutional filters and the pooling and normalization operations of the two CNNs. Finally, a softmax classifier applied to the fused feature maps diagnoses the thyroid nodules. The proposed method is validated on 15,000 ultrasound images collected from two local hospitals. Experimental results show that the proposed CNN-based methods can accurately and effectively diagnose thyroid nodules. In addition, the fusion of the two CNN-based models leads to a significant performance improvement, with an accuracy of 83.02% ± 0.72%. These results demonstrate the potential clinical applications of this method. Copyright © 2016 Elsevier B.V. All rights reserved.
Crack Modelling for Radiography
NASA Astrophysics Data System (ADS)
Chady, T.; Napierała, L.
2010-02-01
In this paper, the possibility of creating three-dimensional crack models, both of random type and based on real-life radiographic images, is discussed. A method for storing cracks in a number of two-dimensional matrices, together with an algorithm for reconstructing them into three-dimensional objects, is presented. The possibility of using an iterative algorithm to match simulated crack images to real-life radiographic images is also discussed.
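A minimal sketch of the storage scheme described above, reconstructing a voxel model from a stack of two-dimensional crack matrices; the spacing parameter and return format are illustrative assumptions.

```python
import numpy as np

def crack_volume(slices, spacing=1.0):
    """Reconstruct a 3-D crack model from a stack of 2-D matrices, each a
    binary cross-section of the crack.  Returns nonzero voxel coordinates
    (slice index scaled by inter-slice spacing) and the assembled volume,
    suitable for meshing or radiographic simulation."""
    volume = np.stack(slices, axis=0)               # (n_slices, H, W)
    z, y, x = np.nonzero(volume)
    return np.column_stack([z * spacing, y, x]), volume
```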
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gambhir, Sanjiv; Pritha, Ray
2015-07-14
Novel double and triple fusion reporter gene constructs harboring distinct imageable reporter genes are provided, as well as applications for the use of such double and triple fusion constructs in living cells and in living animals using distinct imaging technologies.