Sample records for multimodality imaging system

  1. Fluorescence Imaging Topography Scanning System for intraoperative multimodal imaging

    PubMed Central

    Quang, Tri T.; Kim, Hye-Yeong; Bao, Forrest Sheng; Papay, Francis A.; Edwards, W. Barry; Liu, Yang

    2017-01-01

    Fluorescence imaging is a powerful technique with diverse applications in intraoperative settings. Visualization of three-dimensional (3D) structures and depth assessment of lesions, however, are oftentimes limited in planar fluorescence imaging systems. In this study, a novel Fluorescence Imaging Topography Scanning (FITS) system has been developed, which offers color reflectance imaging, fluorescence imaging and surface topography scanning capabilities. The system is compact and portable, and thus suitable for deployment in the operating room without disturbing the surgical flow. System performance parameters, including the near-infrared fluorescence detection limit, contrast transfer functions, and topography depth resolution, were characterized. The developed system was tested in chicken tissues ex vivo with simulated tumors for intraoperative imaging. We subsequently conducted in vivo multimodal imaging of sentinel lymph nodes in mice using FITS and PET/CT. The PET/CT/optical multimodal images were co-registered and conveniently presented to users to guide surgeries. Our results show that the developed system can facilitate multimodal intraoperative imaging. PMID:28437441
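    The fused presentation described above amounts to overlaying the fluorescence channel on the color reflectance view. Below is a minimal, illustrative sketch of such an overlay (Python/NumPy); the array shapes, threshold, and blending weight are assumptions for demonstration, not parameters of the FITS system.

    ```python
    import numpy as np

    def overlay_fluorescence(color_rgb, fluorescence, alpha=0.5, threshold=0.1):
        """Blend a normalized fluorescence map (H x W) into an RGB reflectance image (H x W x 3)."""
        f = fluorescence.astype(float)
        f = (f - f.min()) / (f.max() - f.min() + 1e-9)   # normalize fluorescence to [0, 1]
        mask = f > threshold                             # keep only meaningful signal
        fused = color_rgb.astype(float).copy()
        # Pseudocolor the fluorescence into the green channel where it exceeds the threshold
        fused[..., 1][mask] = (1 - alpha) * fused[..., 1][mask] + alpha * 255 * f[mask]
        return np.clip(fused, 0, 255).astype(np.uint8)

    if __name__ == "__main__":
        rgb = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)  # stand-in reflectance frame
        nir = np.random.rand(480, 640)                                  # stand-in fluorescence frame
        print(overlay_fluorescence(rgb, nir).shape)
    ```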

  2. Radiolabeled Nanoparticles for Multimodality Tumor Imaging

    PubMed Central

    Xing, Yan; Zhao, Jinhua; Conti, Peter S.; Chen, Kai

    2014-01-01

    Each imaging modality has its own unique strengths. Multimodality imaging, taking advantage of the strengths of two or more imaging modalities, can provide overall structural, functional, and molecular information, offering the prospect of improved diagnostic and therapeutic monitoring abilities. Multimodal, multifunctional molecular imaging devices are of great value for cancer diagnosis and treatment and have greatly accelerated the development of radionuclide-based multimodal molecular imaging. Radiolabeled nanoparticles bearing intrinsic properties have gained great interest in multimodality tumor imaging over the past decade. Significant breakthroughs have been made toward the development of various radiolabeled nanoparticles, which can be used as novel cancer diagnostic tools in multimodality imaging systems. It is expected that quantitative multimodality imaging with multifunctional radiolabeled nanoparticles will afford accurate and precise assessment of biological signatures in cancer in a real-time manner and thus pave the path toward personalized cancer medicine. This review addresses advantages and challenges in developing multimodality imaging probes using different types of nanoparticles, and summarizes the recent advances in the applications of radiolabeled nanoparticles for multimodal imaging of tumors. The key issues involved in the translation of radiolabeled nanoparticles to the clinic are also discussed. PMID:24505237

  3. Multimodal quantitative phase and fluorescence imaging of cell apoptosis

    NASA Astrophysics Data System (ADS)

    Fu, Xinye; Zuo, Chao; Yan, Hao

    2017-06-01

    Fluorescence microscopy, utilizing fluorescence labeling, has the capability to observe intercellular changes that transmitted- and reflected-light microscopy techniques cannot resolve. However, the parts without fluorescence labeling are not imaged, so processes occurring simultaneously in those parts cannot be revealed. Meanwhile, fluorescence imaging is 2D, so depth information is missing and the information from the labeled parts is also incomplete. On the other hand, quantitative phase imaging is capable of imaging cells in 3D in real time through phase calculation. However, its resolution is limited by optical diffraction, and it cannot observe intercellular changes below 200 nanometers. In this work, fluorescence imaging and quantitative phase imaging are combined to build a multimodal imaging system. Such a system has the capability to simultaneously observe detailed intercellular phenomena and 3D cell morphology. In this study, the proposed multimodal imaging system is used to observe cell behavior during apoptosis. The aim is to highlight the limitations of fluorescence microscopy and to point out the advantages of multimodal quantitative phase and fluorescence imaging. The proposed multimodal quantitative phase imaging could be further applied in cell-related biomedical research, such as tumor studies.
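    The 3D morphology mentioned above comes from converting the measured phase into optical path difference and thickness. A minimal sketch of that conversion follows; the wavelength and refractive-index difference below are placeholder values, not parameters from this study.

    ```python
    import numpy as np

    def phase_to_thickness(phase_rad, wavelength_nm=633.0, delta_n=0.03):
        """Convert an unwrapped phase map (radians) to a thickness map (nm).

        thickness = phase * wavelength / (2 * pi * delta_n), where delta_n is the
        (assumed) refractive-index difference between cell and surrounding medium.
        """
        opd_nm = phase_rad * wavelength_nm / (2.0 * np.pi)   # optical path difference in nm
        return opd_nm / delta_n

    if __name__ == "__main__":
        phase = np.random.rand(256, 256) * 2 * np.pi          # stand-in unwrapped phase map
        print(phase_to_thickness(phase).mean(), "nm mean thickness")
    ```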

  4. Application of Multimodality Imaging Fusion Technology in Diagnosis and Treatment of Malignant Tumors under the Precision Medicine Plan.

    PubMed

    Wang, Shun-Yi; Chen, Xian-Xia; Li, Yi; Zhang, Yu-Ying

    2016-12-20

    The arrival of the precision medicine plan brings new opportunities and challenges for patients undergoing precision diagnosis and treatment of malignant tumors. With the development of medical imaging, information from different modality imaging can be integrated and comprehensively analyzed by imaging fusion systems. This review aimed to update the application of multimodality imaging fusion technology in the precise diagnosis and treatment of malignant tumors under the precision medicine plan. We introduced several multimodality imaging fusion technologies and their application to the diagnosis and treatment of malignant tumors in clinical practice. The data cited in this review were obtained mainly from the PubMed database from 1996 to 2016, using the keywords "precision medicine", "fusion imaging", "multimodality", and "tumor diagnosis and treatment". Original articles, clinical practice reports, reviews, and other relevant literature published in English were reviewed. Papers focusing on precision medicine, fusion imaging, multimodality, and tumor diagnosis and treatment were selected. Duplicated papers were excluded. Multimodality imaging fusion technology plays an important role in tumor diagnosis and treatment under the precision medicine plan, including accurate localization, qualitative diagnosis, tumor staging, treatment plan design, and real-time intraoperative monitoring. Multimodality imaging fusion systems could provide more imaging information on tumors from different dimensions and angles, thereby offering strong technical support for the implementation of precision oncology. Under the precision medicine plan, personalized treatment of tumors is a distinct possibility. We believe that multimodality imaging fusion technology will find increasingly wide application in clinical practice.

  5. A simultaneous multimodal imaging system for tissue functional parameters

    NASA Astrophysics Data System (ADS)

    Ren, Wenqi; Zhang, Zhiwu; Wu, Qiang; Zhang, Shiwu; Xu, Ronald

    2014-02-01

    Simultaneous and quantitative assessment of skin functional characteristics in different modalities will facilitate diagnosis and therapy in many clinical applications such as wound healing. However, many existing clinical practices and multimodal imaging systems are subjective and qualitative, collect multimodal data sequentially, and need co-registration between different modalities. To overcome these limitations, we developed a multimodal imaging system for quantitative, non-invasive, and simultaneous imaging of cutaneous tissue oxygenation and blood perfusion parameters. The imaging system integrated multispectral and laser speckle imaging technologies into one experimental setup. A LabVIEW interface was developed for equipment control, synchronization, and image acquisition. Advanced algorithms based on wide-gap second derivative reflectometry and laser speckle contrast analysis (LASCA) were developed for accurate reconstruction of tissue oxygenation and blood perfusion, respectively. Quantitative calibration experiments and a new style of skin-simulating phantom were designed to verify the accuracy and reliability of the imaging system. The experimental results were compared with a Moor tissue oxygenation and perfusion monitor. For in vivo testing, a post-occlusion reactive hyperemia (PORH) procedure in a human subject and an ongoing wound healing monitoring experiment using dorsal skinfold chamber models were conducted to validate the usability of our system for dynamic detection of oxygenation and perfusion parameters. In this study, we have not only set up an advanced multimodal imaging system for cutaneous tissue oxygenation and perfusion parameters but also elucidated its potential for wound healing assessment in clinical practice.
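    A minimal sketch of the laser speckle contrast analysis (LASCA) step mentioned above, assuming a single raw speckle frame as a 2D array; the window size is illustrative, and the paper's full pipeline also covers the multispectral oxygenation channel and calibration.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def speckle_contrast(frame, window=7):
        """Spatial speckle contrast map K = std/mean over a window x window neighborhood."""
        frame = frame.astype(float)
        mean = uniform_filter(frame, size=window)
        mean_sq = uniform_filter(frame ** 2, size=window)
        var = np.clip(mean_sq - mean ** 2, 0, None)      # guard against small negative values
        return np.sqrt(var) / (mean + 1e-9)

    if __name__ == "__main__":
        raw = np.random.poisson(100, (512, 512)).astype(float)   # stand-in speckle frame
        K = speckle_contrast(raw)
        print("contrast range:", K.min(), K.max())
    ```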

  6. Nanoparticles in Higher-Order Multimodal Imaging

    NASA Astrophysics Data System (ADS)

    Rieffel, James Ki

    Imaging procedures are a cornerstone in our current medical infrastructure. In everything from screening, diagnostics, and treatment, medical imaging is perhaps our greatest tool in evaluating individual health. Recently, there has been tremendous increase in the development of multimodal systems that combine the strengths of complimentary imaging technologies to overcome their independent weaknesses. Clinically, this has manifested in the virtually universal manufacture of combined PET-CT scanners. With this push toward more integrated imaging, new contrast agents with multimodal functionality are needed. Nanoparticle-based systems are ideal candidates based on their unique size, properties, and diversity. In chapter 1, an extensive background on recent multimodal imaging agents capable of enhancing signal or contrast in three or more modalities is presented. Chapter 2 discusses the development and characterization of a nanoparticulate probe with hexamodal imaging functionality. It is my hope that the information contained in this thesis will demonstrate the many benefits of nanoparticles in multimodal imaging, and provide insight into the potential of fully integrated imaging.

  7. Developing single-laser sources for multimodal coherent anti-Stokes Raman scattering microscopy

    NASA Astrophysics Data System (ADS)

    Pegoraro, Adrian Frank

    Coherent anti-Stokes Raman scattering (CARS) microscopy has developed rapidly and is opening the door to new types of experiments. This work describes the development of new laser sources for CARS microscopy and their use for different applications. It is specifically focused on multimodal nonlinear optical microscopy—the simultaneous combination of different imaging techniques. This allows us to address a diverse range of applications, such as the study of biomaterials, fluid inclusions, atherosclerosis, hepatitis C infection in cells, and ice formation in cells. For these applications new laser sources are developed that allow for practical multimodal imaging. For example, it is shown that using a single Ti:sapphire oscillator with a photonic crystal fiber, it is possible to develop a versatile multimodal imaging system using optimally chirped laser pulses. This system can perform simultaneous two photon excited fluorescence, second harmonic generation, and CARS microscopy. The versatility of the system is further demonstrated by showing that it is possible to probe different Raman modes using CARS microscopy simply by changing a time delay between the excitation beams. Using optimally chirped pulses also enables further simplification of the laser system required by using a single fiber laser combined with nonlinear optical fibers to perform effective multimodal imaging. While these sources are useful for practical multimodal imaging, it is believed that for further improvements in CARS microscopy sensitivity, new excitation schemes are necessary. This has led to the design of a new, high power, extended cavity oscillator that should be capable of implementing new excitation schemes for CARS microscopy as well as other techniques. Our interest in multimodal imaging has led us to other areas of research as well. For example, a fiber-coupling scheme for signal collection in the forward direction is demonstrated that allows for fluorescence lifetime imaging without significant temporal distortion. Also highlighted is an imaging artifact that is unique to CARS microscopy that can alter image interpretation, especially when using multimodal imaging. By combining expertise in nonlinear optics, laser development, fiber optics, and microscopy, we have developed systems and techniques that will be of benefit for multimodal CARS microscopy.

  8. A multimodal image sensor system for identifying water stress in grapevines

    NASA Astrophysics Data System (ADS)

    Zhao, Yong; Zhang, Qin; Li, Minzan; Shao, Yongni; Zhou, Jianfeng; Sun, Hong

    2012-11-01

    Water stress is one of the most common limitations on fruit growth, and water is the most limiting resource for crop growth. In grapevines, as in other fruit crops, fruit quality benefits from a certain level of water deficit, which helps balance vegetative and reproductive growth and the flow of carbohydrates to reproductive structures. In this paper, a multi-modal sensor system was designed to measure the reflectance signature of grape plant surfaces and identify different water stress levels. The multi-modal sensor system was equipped with one 3CCD camera (three channels in R, G, and IR). The multi-modal sensor can capture and analyze the grape canopy from its reflectance features and identify the different water stress levels. This research aims at solving the aforementioned problems. The core technology of this multi-modal sensor system could further be used in a decision support system that combines multi-modal sensory data to improve plant stress detection and identify the causes of stress. The images were taken by the multi-modal sensor, which outputs images in near-infrared, green, and red spectral bands. Based on analysis of the acquired images, color features based on color space and reflectance features based on image processing methods were calculated. The results showed that these parameters have potential as water stress indicators. More experiments and analysis are needed to validate this conclusion.
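    As one concrete example of a reflectance feature computable from the camera's red and near-infrared channels, the sketch below computes an NDVI-style normalized difference index; the band ordering in the stand-in image is an assumption, and this index is only one candidate stress indicator, not necessarily the feature set used in the paper.

    ```python
    import numpy as np

    def ndvi(red, nir):
        """Normalized difference index (NIR - R) / (NIR + R), a common canopy stress proxy."""
        red = red.astype(float)
        nir = nir.astype(float)
        return (nir - red) / (nir + red + 1e-9)

    if __name__ == "__main__":
        # Stand-in 3-channel frame: channel 0 = red, 1 = green, 2 = near-infrared (assumed order)
        img = np.random.randint(0, 255, (480, 640, 3)).astype(float)
        index = ndvi(img[..., 0], img[..., 2])
        print("mean index:", index.mean())
    ```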

  9. Optical/MRI Multimodality Molecular Imaging

    NASA Astrophysics Data System (ADS)

    Ma, Lixin; Smith, Charles; Yu, Ping

    2007-03-01

    Multimodality molecular imaging that combines anatomical and functional information has shown promise in the development of tumor-targeted pharmaceuticals for cancer detection or therapy. We present a new multimodality imaging technique that combines fluorescence molecular tomography (FMT) and magnetic resonance imaging (MRI) for in vivo molecular imaging of preclinical tumor models. Unlike other optical/MRI systems, the new molecular imaging system uses parallel phase acquisition based on the heterodyne principle. The system has higher accuracy of phase measurements, reduced noise bandwidth, and efficient modulation of the fluorescence diffuse photon density waves. Fluorescent Bombesin probes were developed for targeting breast cancer cells and prostate cancer cells. Tissue phantom and small animal experiments were performed for calibration of the imaging system and validation of the targeting probes.

  10. Image-guided plasma therapy of cutaneous wound

    NASA Astrophysics Data System (ADS)

    Zhang, Zhiwu; Ren, Wenqi; Yu, Zelin; Zhang, Shiwu; Yue, Ting; Xu, Ronald

    2014-02-01

    The wound healing process involves the reparative phases of inflammation, proliferation, and remodeling. Interrupting any of these phases may result in chronically unhealed wounds, amputation, or even patient death. Despite the clinical significance of chronic wound management, no effective methods have been developed for quantitative image-guided treatment. We integrated a multimodal imaging system with a cold atmospheric plasma probe for image-guided treatment of chronic wounds. The multimodal imaging system offers non-invasive, painless, simultaneous, and quantitative assessment of cutaneous wound healing. Cold atmospheric plasma accelerates the wound healing process through many mechanisms, including decontamination, coagulation, and stimulation of wound healing. The therapeutic effect of cold atmospheric plasma is studied in vivo under the guidance of the multimodal imaging system. Cutaneous wounds are created on the dorsal skin of nude mice. During the healing process, the sample wound is treated by cold atmospheric plasma at different controlled dosages, while the control wound heals naturally. The multimodal imaging system, integrating a multispectral imaging module and a laser speckle imaging module, is used to collect information on cutaneous tissue oxygenation (i.e., oxygen saturation, StO2) and blood perfusion simultaneously to assess and guide the plasma therapy. Our preliminary tests show that cold atmospheric plasma in combination with multimodal imaging guidance has the potential to facilitate the healing of chronic wounds.
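    A minimal sketch of how an oxygen saturation (StO2) map can be estimated from multispectral absorbance under a simplified two-wavelength Beer-Lambert model; the extinction-coefficient matrix below is a placeholder, and the actual system reconstructs StO2 from more wavelengths with a calibrated model.

    ```python
    import numpy as np

    # Placeholder extinction coefficients [HbO2, Hb] at two wavelengths (arbitrary units)
    EPSILON = np.array([[1.0, 3.0],    # wavelength 1
                        [2.5, 1.2]])   # wavelength 2

    def sto2_from_absorbance(a1, a2):
        """Solve A = EPSILON @ [C_HbO2, C_Hb] pixel-wise; return StO2 = C_HbO2 / (C_HbO2 + C_Hb)."""
        inv = np.linalg.inv(EPSILON)
        c_hbo2 = inv[0, 0] * a1 + inv[0, 1] * a2
        c_hb = inv[1, 0] * a1 + inv[1, 1] * a2
        total = c_hbo2 + c_hb
        return np.clip(c_hbo2 / np.where(np.abs(total) > 1e-9, total, 1e-9), 0, 1)

    if __name__ == "__main__":
        a1 = np.random.rand(128, 128)     # stand-in absorbance maps at the two wavelengths
        a2 = np.random.rand(128, 128)
        print("mean StO2:", sto2_from_absorbance(a1, a2).mean())
    ```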

  11. Melanoma detection using smartphone and multimode hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    MacKinnon, Nicholas; Vasefi, Fartash; Booth, Nicholas; Farkas, Daniel L.

    2016-04-01

    This project's goal is to determine how to effectively implement a technology continuum from a low-cost, remotely deployable imaging device to a more sophisticated multimode imaging system within standard clinical practice. In this work, a smartphone is used in conjunction with an optical attachment to capture cross-polarized and collinear color images of a nevus that are analyzed to quantify chromophore distribution. The nevus is also imaged by a multimode hyperspectral system, our proprietary SkinSpect™ device. The relative accuracy and biological plausibility of the two systems' algorithms are compared to assess the feasibility of in-home or primary care practitioner smartphone screening prior to rigorous clinical analysis via the SkinSpect.

  12. MULTIMODAL IMAGING OF SYPHILITIC MULTIFOCAL RETINITIS.

    PubMed

    Curi, Andre L; Sarraf, David; Cunningham, Emmett T

    2015-01-01

    To describe multimodal imaging of syphilitic multifocal retinitis. Observational case series. Two patients developed multifocal retinitis after treatment of unrecognized syphilitic uveitis with systemic corticosteroids in the absence of appropriate antibiotic therapy. Multimodal imaging localized the foci of retinitis within the retina in contrast to superficial retinal precipitates that accumulate on the surface of the retina in eyes with untreated syphilitic uveitis. Although the retinitis resolved after treatment with systemic penicillin in both cases, vision remained poor in the patient with multifocal retinitis involving the macula. Treatment of unrecognized syphilitic uveitis with corticosteroids in the absence of antitreponemal treatment can lead to the development of multifocal retinitis. Multimodal imaging, and optical coherence tomography in particular, can be used to distinguish multifocal retinitis from superficial retinal precipitates or accumulations.

  13. Simultaneous multimodal ophthalmic imaging using swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography

    PubMed Central

    Malone, Joseph D.; El-Haddad, Mohamed T.; Bozic, Ivan; Tye, Logan A.; Majeau, Lucas; Godbout, Nicolas; Rollins, Andrew M.; Boudoux, Caroline; Joos, Karen M.; Patel, Shriji N.; Tao, Yuankai K.

    2016-01-01

    Scanning laser ophthalmoscopy (SLO) benefits diagnostic imaging and therapeutic guidance by allowing for high-speed en face imaging of retinal structures. When combined with optical coherence tomography (OCT), SLO enables real-time aiming and retinal tracking and provides complementary information for post-acquisition volumetric co-registration, bulk motion compensation, and averaging. However, multimodality SLO-OCT systems generally require dedicated light sources, scanners, relay optics, detectors, and additional digitization and synchronization electronics, which increase system complexity. Here, we present a multimodal ophthalmic imaging system using swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography (SS-SESLO-OCT) for in vivo human retinal imaging. SESLO reduces the complexity of en face imaging systems by multiplexing spatial positions as a function of wavelength. SESLO image quality benefited from single-mode illumination and multimode collection through a prototype double-clad fiber coupler, which optimized scattered light throughput and reduced speckle contrast while maintaining lateral resolution. Using a shared 1060 nm swept-source, shared scanner and imaging optics, and a shared dual-channel high-speed digitizer, we acquired inherently co-registered en face retinal images and OCT cross-sections simultaneously at 200 frames per second. PMID:28101411

  14. Multimodal system for the planning and guidance of bronchoscopy

    NASA Astrophysics Data System (ADS)

    Higgins, William E.; Cheirsilp, Ronnarit; Zang, Xiaonan; Byrnes, Patrick

    2015-03-01

    Many technical innovations in multimodal radiologic imaging and bronchoscopy have emerged recently in the effort against lung cancer. Modern X-ray computed-tomography (CT) scanners provide three-dimensional (3D) high-resolution chest images, positron emission tomography (PET) scanners give complementary molecular imaging data, and new integrated PET/CT scanners combine the strengths of both modalities. State-of-the-art bronchoscopes permit minimally invasive tissue sampling, with vivid endobronchial video enabling navigation deep into the airway-tree periphery, while complementary endobronchial ultrasound (EBUS) reveals local views of anatomical structures outside the airways. In addition, image-guided intervention (IGI) systems have proven their utility for CT-based planning and guidance of bronchoscopy. Unfortunately, no IGI system exists that integrates all sources effectively through the complete lung-cancer staging work flow. This paper presents a prototype of a computer-based multimodal IGI system that strives to fill this need. The system combines a wide range of automatic and semi-automatic image-processing tools for multimodal data fusion and procedure planning. It also provides a flexible graphical user interface for follow-on guidance of bronchoscopy/EBUS. Human-study results demonstrate the system's potential.

  15. Multimodal imaging of cutaneous wound tissue

    NASA Astrophysics Data System (ADS)

    Zhang, Shiwu; Gnyawali, Surya; Huang, Jiwei; Ren, Wenqi; Gordillo, Gayle; Sen, Chandan K.; Xu, Ronald

    2015-01-01

    Quantitative assessment of wound tissue ischemia, perfusion, and inflammation provides critical information for appropriate detection, staging, and treatment of chronic wounds. However, few methods are available for simultaneous assessment of these tissue parameters in a noninvasive and quantitative fashion. We integrated hyperspectral, laser speckle, and thermographic imaging modalities in a single-experimental setup for multimodal assessment of tissue oxygenation, perfusion, and inflammation characteristics. Algorithms were developed for appropriate coregistration between wound images acquired by different imaging modalities at different times. The multimodal wound imaging system was validated in an occlusion experiment, where oxygenation and perfusion maps of a healthy subject's upper extremity were continuously monitored during a postocclusive reactive hyperemia procedure and compared with standard measurements. The system was also tested in a clinical trial where a wound of three millimeters in diameter was introduced on a healthy subject's lower extremity and the healing process was continuously monitored. Our in vivo experiments demonstrated the clinical feasibility of multimodal cutaneous wound imaging.
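    Co-registration between frames acquired at different times often starts with a global translation estimate. The sketch below uses FFT-based phase correlation for that step; it is an illustrative approach, not the authors' registration algorithm.

    ```python
    import numpy as np

    def phase_correlation_shift(ref, mov):
        """Estimate the integer-pixel (row, col) displacement of `mov` relative to `ref`."""
        F_ref = np.fft.fft2(ref.astype(float))
        F_mov = np.fft.fft2(mov.astype(float))
        cross = F_mov * np.conj(F_ref)
        cross /= np.abs(cross) + 1e-12                   # normalized cross-power spectrum
        corr = np.fft.ifft2(cross).real
        peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
        wrap = peak > np.array(ref.shape) / 2            # map large indices to negative shifts
        peak[wrap] -= np.array(ref.shape)[wrap]
        return tuple(peak)

    if __name__ == "__main__":
        base = np.random.rand(256, 256)
        shifted = np.roll(base, (5, -8), axis=(0, 1))    # known displacement for a quick check
        print(phase_correlation_shift(base, shifted))    # expect (5.0, -8.0)
    ```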

  16. Multimodal digital color imaging system for facial skin lesion analysis

    NASA Astrophysics Data System (ADS)

    Bae, Youngwoo; Lee, Youn-Heum; Jung, Byungjo

    2008-02-01

    In dermatology, various digital imaging modalities have been used as important tools to quantitatively evaluate the treatment effect on skin lesions. Cross-polarization color images have been used to evaluate skin chromophore (melanin and hemoglobin) information and parallel-polarization images to evaluate skin texture information. In addition, UV-A-induced fluorescent images have been widely used to evaluate various skin conditions such as sebum, keratosis, sun damage, and vitiligo. In order to maximize the evaluation efficacy for various skin lesions, it is necessary to integrate various imaging modalities into one imaging system. In this study, we propose a multimodal digital color imaging system that provides four different digital color images: a standard color image, parallel- and cross-polarization color images, and a UV-A-induced fluorescent color image. Herein, we describe the imaging system and present examples of image analysis. By analyzing the color information and morphological features of facial skin lesions, we are able to comparably and simultaneously evaluate various skin lesions. In conclusion, we are confident that the multimodal color imaging system can be utilized as an important assistant tool in dermatology.
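    As an illustration of how chromophore information can be pulled from a cross-polarized color image, the sketch below computes simple log-ratio melanin and erythema indices; these particular formulas are common simplifications and are not claimed to be the analysis used by this system.

    ```python
    import numpy as np

    def chromophore_indices(rgb):
        """Return (melanin_index, erythema_index) maps from an RGB image scaled to [0, 1]."""
        r = np.clip(rgb[..., 0].astype(float), 1e-3, 1.0)
        g = np.clip(rgb[..., 1].astype(float), 1e-3, 1.0)
        melanin = 100.0 * np.log10(1.0 / r)                          # darker red channel -> more melanin
        erythema = 100.0 * (np.log10(1.0 / g) - np.log10(1.0 / r))   # green absorption relative to red
        return melanin, erythema

    if __name__ == "__main__":
        img = np.random.rand(256, 256, 3)          # stand-in cross-polarized color image
        m, e = chromophore_indices(img)
        print(m.mean(), e.mean())
    ```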

  17. Multimodality bronchoscopic imaging of tracheopathia osteochondroplastica

    NASA Astrophysics Data System (ADS)

    Colt, Henri; Murgu, Septimiu D.; Ahn, Yeh-Chan; Brenner, Matt

    2009-05-01

    Results of a commercial optical coherence tomography system used as part of a multimodality diagnostic bronchoscopy platform are presented for a 61-year-old patient with central airway obstruction from tracheopathia osteochondroplastica. Comparison with the results of white-light bronchoscopy, histology, and endobronchial ultrasound examination is accompanied by a discussion of the resolution, penetration depth, contrast, and field of view of these imaging modalities. White-light bronchoscopy revealed irregularly shaped, firm submucosal nodules along cartilaginous structures of the anterior and lateral walls of the trachea, sparing the muscular posterior membrane. Endobronchial ultrasound showed a hyperechoic density of 0.4 cm thickness. Optical coherence tomography (OCT) was performed using a commercially available, compact time-domain OCT system (Niris System, Imalux Corp., Cleveland, Ohio) with a magnetically actuated probe (two-dimensional, front imaging, and inside actuation). Images showed epithelium, upper submucosa, and osseous submucosal nodule layers corresponding with histopathology. To our knowledge, this is the first time these commercially available systems have been used as part of a multimodality bronchoscopy platform to study diagnostic imaging of a benign disease causing central airway obstruction. Further studies are needed to optimize these systems for pulmonary applications and to determine how new-generation imaging modalities will be integrated into a multimodality bronchoscopy platform.

  18. A Multimodal Search Engine for Medical Imaging Studies.

    PubMed

    Pinho, Eduardo; Godinho, Tiago; Valente, Frederico; Costa, Carlos

    2017-02-01

    The use of digital medical imaging systems in healthcare institutions has increased significantly, and the large amounts of data in these systems have led to the conception of powerful support tools: recent studies on content-based image retrieval (CBIR) and multimodal information retrieval in the field hold great potential in decision support, as well as for addressing multiple challenges in healthcare systems, such as computer-aided diagnosis (CAD). However, the subject is still under heavy research, and very few solutions have become part of Picture Archiving and Communication Systems (PACS) in hospitals and clinics. This paper proposes an extensible platform for multimodal medical image retrieval, integrated in an open-source PACS software with profile-based CBIR capabilities. In this article, we detail a technical approach to the problem by describing its main architecture and each sub-component, as well as the available web interfaces and the multimodal query techniques applied. Finally, we assess our implementation of the engine with computational performance benchmarks.

  19. Multimodality Image Fusion-Guided Procedures: Technique, Accuracy, and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abi-Jaoudeh, Nadine, E-mail: naj@mail.nih.gov; Kruecker, Jochen, E-mail: jochen.kruecker@philips.com; Kadoury, Samuel, E-mail: samuel.kadoury@polymtl.ca

    2012-10-15

    Personalized therapies play an increasingly critical role in cancer care: Image guidance with multimodality image fusion facilitates the targeting of specific tissue for tissue characterization and plays a role in drug discovery and optimization of tailored therapies. Positron-emission tomography (PET), magnetic resonance imaging (MRI), and contrast-enhanced computed tomography (CT) may offer additional information not otherwise available to the operator during minimally invasive image-guided procedures, such as biopsy and ablation. With use of multimodality image fusion for image-guided interventions, navigation with advanced modalities does not require the physical presence of the PET, MRI, or CT imaging system. Several commercially available methods of image-fusion and device navigation are reviewed along with an explanation of common tracking hardware and software. An overview of current clinical applications for multimodality navigation is provided.

  20. Multimodal optoacoustic and multiphoton fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Sela, Gali; Razansky, Daniel; Shoham, Shy

    2013-03-01

    Multiphoton microscopy is a powerful imaging modality that enables structural and functional imaging with cellular and sub-cellular resolution, deep within biological tissues. Yet, its main contrast mechanism relies on extrinsically administered fluorescent indicators. Here we developed a system for simultaneous multimodal optoacoustic and multiphoton fluorescence 3D imaging, which attains both absorption- and fluorescence-based contrast by integrating an ultrasonic transducer into a two-photon laser scanning microscope. The system is readily shown to enable acquisition of multimodal microscopic images of fluorescently labeled targets and cell cultures as well as intrinsic absorption-based images of pigmented biological tissue. During initial experiments, it was further observed that the detected optoacoustically induced response contains low-frequency signal variations, presumably due to cavitation-mediated signal generation by the high-repetition-rate (80 MHz) near-IR femtosecond laser. The multimodal system may provide complementary structural and functional information on fluorescently labeled tissue by superimposing optoacoustic images of intrinsic tissue chromophores, such as melanin deposits, pigmentation, and hemoglobin, or of other extrinsic particle- or dye-based markers highly absorptive in the NIR spectrum.

  1. Multi-Modality Phantom Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huber, Jennifer S.; Peng, Qiyu; Moses, William W.

    2009-03-20

    Multi-modality imaging has an increasing role in the diagnosis and treatment of a large number of diseases, particularly if both functional and anatomical information are acquired and accurately co-registered. Hence, there is a resulting need for multi-modality phantoms in order to validate image co-registration and calibrate the imaging systems. We present our PET-ultrasound phantom development, including PET and ultrasound images of a simple prostate phantom. We use agar and gelatin mixed with a radioactive solution. We also present our development of custom multi-modality phantoms that are compatible with PET, transrectal ultrasound (TRUS), MRI and CT imaging. We describe both our selection of tissue-mimicking materials and our phantom construction procedures. These custom PET-TRUS-CT-MRI prostate phantoms use agar-gelatin radioactive mixtures with additional contrast agents and preservatives. We show multi-modality images of these custom prostate phantoms, as well as discuss phantom construction alternatives. Although we are currently focused on prostate imaging, this phantom development is applicable to many multi-modality imaging applications.

  2. A multimodal parallel architecture: A cognitive framework for multimodal interactions.

    PubMed

    Cohn, Neil

    2016-01-01

    Human communication is naturally multimodal, and substantial research has examined the semantic correspondences in speech-gesture and text-image relationships. However, visual narratives, like those in comics, provide an interesting challenge to multimodal communication because the words and/or images can guide the overall meaning, and both modalities can appear in complicated "grammatical" sequences: sentences use a syntactic structure and sequential images use a narrative structure. These dual structures create complexity beyond that typically addressed by theories of multimodality where only a single form uses combinatorial structure, and also pose challenges for models of the linguistic system that focus on single modalities. This paper outlines a broad theoretical framework for multimodal interactions by expanding on Jackendoff's (2002) parallel architecture for language. Multimodal interactions are characterized in terms of their component cognitive structures: whether a particular modality (verbal, bodily, visual) is present, whether it uses a grammatical structure (syntax, narrative), and whether it "dominates" the semantics of the overall expression. Altogether, this approach integrates multimodal interactions into an existing framework of language and cognition, and characterizes interactions between varying complexity in the verbal, bodily, and graphic domains. The resulting theoretical model presents an expanded consideration of the boundaries of the "linguistic" system and its involvement in multimodal interactions, with a framework that can benefit research on corpus analyses, experimentation, and the educational benefits of multimodality. Copyright © 2015.

  3. Multimodal imaging of ischemic wounds

    NASA Astrophysics Data System (ADS)

    Zhang, Shiwu; Gnyawali, Surya; Huang, Jiwei; Liu, Peng; Gordillo, Gayle; Sen, Chandan K.; Xu, Ronald

    2012-12-01

    The wound healing process involves the reparative phases of inflammation, proliferation, and remodeling. Interrupting any of these phases may result in chronically unhealed wounds, amputation, or even patient death. Quantitative assessment of wound tissue ischemia, perfusion, and inflammation provides critical information for appropriate detection, staging, and treatment of chronic wounds. However, no method is available for noninvasive, simultaneous, and quantitative imaging of these tissue parameters. We integrated hyperspectral, laser speckle, and thermographic imaging modalities into a single setup for multimodal assessment of tissue oxygenation, perfusion, and inflammation characteristics. Advanced algorithms were developed for accurate reconstruction of wound oxygenation and appropriate co-registration between the different imaging modalities. The multimodal wound imaging system was validated in an ongoing clinical trial approved by the OSU IRB. In the clinical trial, a wound of 3 mm in diameter was introduced on a healthy subject's lower extremity and the healing process was serially monitored by the multimodal imaging setup. Our experiments demonstrated the clinical usability of multimodal wound imaging.

  4. Fast and Robust Registration of Multimodal Remote Sensing Images via Dense Orientated Gradient Feature

    NASA Astrophysics Data System (ADS)

    Ye, Y.

    2017-09-01

    This paper presents a fast and robust method for the registration of multimodal remote sensing data (e.g., optical, LiDAR, SAR, and map). The proposed method is based on the hypothesis that structural similarity between images is preserved across different modalities. In the proposed method, we first develop a pixel-wise feature descriptor named the Dense Orientated Gradient Histogram (DOGH), which can be computed efficiently at every pixel and is robust to non-linear intensity differences between images. Then a fast similarity metric based on DOGH is built in the frequency domain using the Fast Fourier Transform (FFT) technique. Finally, a template matching scheme is applied to detect tie points between images. Experimental results on different types of multimodal remote sensing images show that the proposed similarity metric achieves superior matching performance and computational efficiency compared with state-of-the-art methods. Moreover, based on the proposed similarity metric, we also design a fast and robust automatic registration system for multimodal images. This system has been evaluated using a pair of very large SAR and optical images (more than 20000 × 20000 pixels). Experimental results show that our system outperforms two popular commercial software systems (i.e., ENVI and ERDAS) in both registration accuracy and computational efficiency.
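    The frequency-domain matching idea can be illustrated with plain intensities: correlate a template against the image via the FFT and take the peak of the similarity surface. The sketch below does exactly that; it omits the DOGH descriptor, so it is a simplified stand-in for the paper's metric.

    ```python
    import numpy as np

    def fft_match(image, template):
        """Return the top-left (row, col) location where `template` best matches `image`."""
        H, W = image.shape
        h, w = template.shape
        img = image.astype(float) - image.mean()
        tmp = template.astype(float) - template.mean()
        # Zero-pad the template to the image size and correlate via the FFT
        padded = np.zeros_like(img)
        padded[:h, :w] = tmp
        corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(padded))).real
        # Restrict to positions where the template fully fits inside the image
        valid = corr[:H - h + 1, :W - w + 1]
        return np.unravel_index(np.argmax(valid), valid.shape)

    if __name__ == "__main__":
        big = np.random.rand(512, 512)
        patch = big[100:164, 200:264]                # known location for a quick check
        print(fft_match(big, patch))                 # expect (100, 200)
    ```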

  5. Volume curtaining: a focus+context effect for multimodal volume visualization

    NASA Astrophysics Data System (ADS)

    Fairfield, Adam J.; Plasencia, Jonathan; Jang, Yun; Theodore, Nicholas; Crawford, Neil R.; Frakes, David H.; Maciejewski, Ross

    2014-03-01

    In surgical preparation, physicians will often utilize multimodal imaging scans to capture complementary information to improve diagnosis and to drive patient-specific treatment. These imaging scans may consist of data from magnetic resonance imaging (MR), computed tomography (CT), or other various sources. The challenge in using these different modalities is that the physician must mentally map the two modalities together during the diagnosis and planning phase. Furthermore, the different imaging modalities will be generated at various resolutions as well as slightly different orientations due to patient placement during scans. In this work, we present an interactive system for multimodal data fusion, analysis and visualization. Developed with partners from neurological clinics, this work discusses initial system requirements and physician feedback at the various stages of component development. Finally, we present a novel focus+context technique for the interactive exploration of coregistered multi-modal data.

  6. A multimodal imaging platform with integrated simultaneous photoacoustic microscopy, optical coherence tomography, optical Doppler tomography and fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Dadkhah, Arash; Zhou, Jun; Yeasmin, Nusrat; Jiao, Shuliang

    2018-02-01

    Various optical imaging modalities with different optical contrast mechanisms have been developed over the past few years. Although most of these imaging techniques are used in many biomedical applications and research areas, integration of these techniques will allow researchers to reach the full potential of these technologies. Nevertheless, combining different imaging techniques is always challenging due to differences in the optical and hardware requirements of different imaging systems. Here, we developed a multimodal optical imaging system with the capability of providing comprehensive structural, functional, and molecular information on living tissue at the micrometer scale. This imaging system integrates photoacoustic microscopy (PAM), optical coherence tomography (OCT), optical Doppler tomography (ODT), and fluorescence microscopy in one platform. Optical-resolution PAM (OR-PAM) provides absorption-based imaging of biological tissues. Spectral-domain OCT provides structural information based on the scattering properties of biological samples with no need for exogenous contrast agents. In addition, ODT is a functional extension of OCT with the capability of measuring and visualizing blood flow based on the Doppler effect. Fluorescence microscopy reveals molecular information of biological tissue using autofluorescence or exogenous fluorophores. In vivo as well as ex vivo imaging studies demonstrated the capability of our multimodal imaging system to provide comprehensive microscopic information on biological tissues. Integrating all the aforementioned imaging modalities for simultaneous multimodal imaging has promising potential for preclinical research and clinical practice in the near future.
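    The ODT component extracts flow from the phase difference between successive A-scans. A minimal sketch of that computation follows; the center wavelength, A-scan period, and refractive index are illustrative values, not this system's specifications.

    ```python
    import numpy as np

    def doppler_velocity(bscan, center_wavelength_m=1.06e-6,
                         ascan_period_s=1.0 / 100e3, n_tissue=1.38):
        """Axial velocity map from a complex B-scan of shape (depth, n_ascans), in m/s.

        Uses the standard Doppler-OCT relation v = lambda0 * dphi / (4 * pi * n * T).
        """
        dphi = np.angle(bscan[:, 1:] * np.conj(bscan[:, :-1]))   # phase step between adjacent A-scans
        return center_wavelength_m * dphi / (4.0 * np.pi * n_tissue * ascan_period_s)

    if __name__ == "__main__":
        fake = np.exp(1j * np.cumsum(np.full((512, 400), 0.1), axis=1))  # constant phase ramp
        v = doppler_velocity(fake)
        print("mean velocity (m/s):", v.mean())
    ```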

  7. Multimode intravascular RF coil for MRI-guided interventions.

    PubMed

    Kurpad, Krishna N; Unal, Orhan

    2011-04-01

    To demonstrate the feasibility of using a single intravascular radiofrequency (RF) probe connected to the external magnetic resonance imaging (MRI) system via a single coaxial cable to perform active tip tracking and catheter visualization and high signal-to-noise ratio (SNR) intravascular imaging. A multimode intravascular RF coil was constructed on a 6F balloon catheter and interfaced to a 1.5T MRI scanner via a decoupling circuit. Bench measurements of coil impedances were followed by imaging experiments in saline and phantoms. The multimode coil behaves as an inductively coupled transmit coil. The forward-looking capability of 6 mm was measured. A greater than 3-fold increase in SNR compared to conventional imaging using optimized external coil was demonstrated. Simultaneous active tip tracking and catheter visualization was demonstrated. It is feasible to perform 1) active tip tracking, 2) catheter visualization, and 3) high SNR imaging using a single multimode intravascular RF coil that is connected to the external system via a single coaxial cable. Copyright © 2011 Wiley-Liss, Inc.

  8. Medical Image Retrieval: A Multimodal Approach

    PubMed Central

    Cao, Yu; Steffey, Shawn; He, Jianbiao; Xiao, Degui; Tao, Cui; Chen, Ping; Müller, Henning

    2014-01-01

    Medical imaging is becoming a vital component of the war on cancer. Tremendous amounts of medical image data are captured and recorded in digital format during cancer care and cancer research. Facing such an unprecedented volume of image data with heterogeneous image modalities, it is necessary to develop effective and efficient content-based medical image retrieval systems for cancer clinical practice and research. While substantial progress has been made in different areas of content-based image retrieval (CBIR) research, direct application of existing CBIR techniques to medical images has produced unsatisfactory results because of the unique characteristics of medical images. In this paper, we develop a new multimodal medical image retrieval approach based on recent advances in statistical graphical models and deep learning. Specifically, we first investigate a new extended probabilistic Latent Semantic Analysis model to integrate the visual and textual information from medical images to bridge the semantic gap. We then develop a new deep Boltzmann machine-based multimodal learning model to learn the joint density model from multimodal information in order to derive the missing modality. Experimental results with a large volume of real-world medical images have shown that our new approach is a promising solution for the next-generation medical image indexing and retrieval system. PMID:26309389
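    The sketch below illustrates the general multimodal-retrieval idea in its simplest form, ranking a database by cosine similarity over concatenated visual and textual feature vectors; it deliberately omits the paper's extended pLSA and deep Boltzmann machine models, and the feature dimensions are assumptions.

    ```python
    import numpy as np

    def retrieve(query_visual, query_text, db_visual, db_text, top_k=5):
        """Rank database entries by cosine similarity of concatenated [visual, text] features."""
        query = np.concatenate([query_visual, query_text])
        db = np.concatenate([db_visual, db_text], axis=1)
        sims = db @ query / (np.linalg.norm(db, axis=1) * np.linalg.norm(query) + 1e-9)
        return np.argsort(-sims)[:top_k]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        db_v, db_t = rng.random((1000, 128)), rng.random((1000, 64))   # stand-in feature database
        print(retrieve(db_v[42], db_t[42], db_v, db_t))                # index 42 should rank first
    ```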

  9. Multimodal swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography at 400 kHz

    NASA Astrophysics Data System (ADS)

    El-Haddad, Mohamed T.; Joos, Karen M.; Patel, Shriji N.; Tao, Yuankai K.

    2017-02-01

    Multimodal imaging systems that combine scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT) have demonstrated the utility of concurrent en face and volumetric imaging for aiming, eye tracking, bulk motion compensation, mosaicking, and contrast enhancement. However, this additional functionality trades off with increased system complexity and cost because both SLO and OCT generally require dedicated light sources, galvanometer scanners, relay and imaging optics, detectors, and control and digitization electronics. We previously demonstrated multimodal ophthalmic imaging using swept-source spectrally encoded SLO and OCT (SS-SESLO-OCT). Here, we present system enhancements and a new optical design that increase our SS-SESLO-OCT data throughput by >7x and field-of-view (FOV) by >4x. A 200 kHz 1060 nm Axsun swept-source was optically buffered to a 400 kHz sweep rate, and SESLO and OCT were simultaneously digitized on dual input channels of a 4 GS/s digitizer at 1.2 GS/s per channel using a custom k-clock. We show in vivo human imaging of the anterior segment out to the limbus and retinal fundus over a >40° FOV. In addition, nine overlapping volumetric SS-SESLO-OCT volumes were acquired under video-rate SESLO preview and guidance. In post-processing, all nine SESLO images and en face projections of the corresponding OCT volumes were mosaicked to show widefield multimodal fundus imaging with a >80° FOV. Concurrent multimodal SS-SESLO-OCT may have applications in clinical diagnostic imaging by enabling aiming, image registration, and multi-field mosaicking and benefit intraoperative imaging by allowing for real-time surgical feedback, instrument tracking, and overlays of computationally extracted image-based surrogate biomarkers of disease.
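    The post-processing step of mosaicking en face OCT projections can be illustrated as follows: project each volume along depth, then paste the projections into a larger canvas. The offsets, array shapes, and simple overwrite blending below are assumptions for demonstration only.

    ```python
    import numpy as np

    def en_face(volume):
        """Mean-intensity projection of a (depth, y, x) OCT volume along the depth axis."""
        return volume.mean(axis=0)

    def mosaic(tiles, offsets, canvas_shape):
        """Paste 2D tiles into a canvas at (row, col) offsets; later tiles overwrite earlier ones."""
        canvas = np.zeros(canvas_shape)
        for tile, (r, c) in zip(tiles, offsets):
            canvas[r:r + tile.shape[0], c:c + tile.shape[1]] = tile
        return canvas

    if __name__ == "__main__":
        vols = [np.random.rand(300, 200, 200) for _ in range(3)]        # stand-in OCT volumes
        tiles = [en_face(v) for v in vols]
        print(mosaic(tiles, [(0, 0), (0, 150), (150, 75)], (400, 400)).shape)
    ```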

  10. Multifocus confocal Raman microspectroscopy for fast multimode vibrational imaging of living cells.

    PubMed

    Okuno, Masanari; Hamaguchi, Hiro-o

    2010-12-15

    We have developed a multifocus confocal Raman microspectroscopic system for the fast multimode vibrational imaging of living cells. It consists of an inverted microscope equipped with a microlens array, a pinhole array, a fiber bundle, and a multichannel Raman spectrometer. Forty-eight Raman spectra from 48 foci under the microscope are simultaneously obtained by using multifocus excitation and image-compression techniques. The multifocus confocal configuration suppresses the background generated from the cover glass and the cell culturing medium so that high-contrast images are obtainable with a short accumulation time. The system enables us to obtain multimode (10 different vibrational modes) vibrational images of living cells in tens of seconds with only 1 mW laser power at one focal point. This image acquisition time is more than 10 times faster than that in conventional single-focus Raman microspectroscopy.

  11. ADMultiImg: a novel missing modality transfer learning based CAD system for diagnosis of MCI due to AD using incomplete multi-modality imaging data

    NASA Astrophysics Data System (ADS)

    Liu, Xiaonan; Chen, Kewei; Wu, Teresa; Weidman, David; Lure, Fleming; Li, Jing

    2018-02-01

    Alzheimer's Disease (AD) is the most common cause of dementia and currently has no cure. Treatments targeting early stages of AD, such as Mild Cognitive Impairment (MCI), may be most effective at decelerating AD, thus attracting increasing attention. However, MCI has substantial heterogeneity in that it can be caused by various underlying conditions, not only AD. To detect MCI due to AD, the NIA-AA published updated consensus criteria in 2011, in which the use of multi-modality images was highlighted as one of the most promising methods. It is of great interest to develop a CAD system based on automatic, quantitative analysis of multi-modality images and machine learning algorithms to help physicians more adequately diagnose MCI due to AD. The challenge, however, is that multi-modality images are not universally available for many patients due to cost, access, safety, and lack of consent. We developed a novel Missing Modality Transfer Learning (MMTL) algorithm capable of utilizing whatever imaging modalities are available for an MCI patient to diagnose the patient's likelihood of MCI due to AD. Furthermore, we integrated MMTL with radiomics steps including image processing, feature extraction, and feature screening, and a post-processing step for uncertainty quantification (UQ), and developed a CAD system called "ADMultiImg" to assist clinical diagnosis of MCI due to AD using multi-modality images together with patient demographic and genetic information. Tested on ADNI data, our system can generate a diagnosis with high accuracy even for patients with only partially available image modalities (AUC=0.94), and therefore may have broad clinical utility.
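    The missing-modality idea can be illustrated with a much simpler stand-in than MMTL: learn a least-squares mapping from an available modality's features to the missing modality's features on subjects who have both, then impute for subjects who do not. The sketch below does only that; the feature dimensions and linear model are assumptions, not the paper's algorithm.

    ```python
    import numpy as np

    def fit_imputer(available, missing):
        """Least-squares mapping W from available-modality features to missing-modality features."""
        W, *_ = np.linalg.lstsq(available, missing, rcond=None)
        return W

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        mri = rng.random((200, 50))                                       # stand-in MRI features
        pet = mri @ rng.random((50, 30)) + 0.01 * rng.random((200, 30))   # stand-in PET features
        W = fit_imputer(mri[:150], pet[:150])    # train on subjects with both modalities
        pet_hat = mri[150:] @ W                  # impute PET features for MRI-only subjects
        print("imputation RMSE:", np.sqrt(((pet_hat - pet[150:]) ** 2).mean()))
    ```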

  12. Integrated photoacoustic microscopy, optical coherence tomography, and fluorescence microscopy for multimodal chorioretinal imaging

    NASA Astrophysics Data System (ADS)

    Tian, Chao; Zhang, Wei; Nguyen, Van Phuc; Huang, Ziyi; Wang, Xueding; Paulus, Yannis M.

    2018-02-01

    Currently available clinical retinal imaging techniques have limitations, including limited depth of penetration or the requirement for invasive injection of exogenous contrast agents. Here, we developed a novel multimodal imaging system for high-speed, high-resolution retinal imaging of larger animals, such as rabbits. The system integrates three state-of-the-art imaging modalities: photoacoustic microscopy (PAM), optical coherence tomography (OCT), and fluorescence microscopy (FM). In vivo experimental results in rabbit eyes show that the PAM is able to visualize laser-induced retinal burns and distinguish individual eye blood vessels using a laser exposure dose of 80 nJ, which is well below the American National Standards Institute (ANSI) safety limit of 160 nJ. The OCT can discern different retinal layers and visualize laser burns and choroidal detachments. The novel multi-modal imaging platform holds great promise for ophthalmic imaging.

  13. The new frontiers of multimodality and multi-isotope imaging

    NASA Astrophysics Data System (ADS)

    Behnam Azad, Babak; Nimmagadda, Sridhar

    2014-06-01

    Technological advances in imaging systems and the development of target-specific imaging tracers have grown rapidly over the past two decades. Recent progress in "all-in-one" imaging systems that allow for automated image coregistration has significantly added to the growth of this field. These developments include ultra-high-resolution PET and SPECT scanners that can be integrated with CT or MR, resulting in PET/CT, SPECT/CT, SPECT/PET and PET/MRI scanners for simultaneous high-resolution, high-sensitivity anatomical and functional imaging. These technological developments have also resulted in drastic enhancements in image quality and acquisition time while eliminating cross-compatibility issues between modalities. Furthermore, the most cutting-edge technology, though mostly preclinical, also allows for simultaneous multimodality multi-isotope image acquisition and image reconstruction based on radioisotope decay characteristics. These scientific advances, in conjunction with the explosion in the development of highly specific multimodality molecular imaging agents, may aid in realizing simultaneous imaging of multiple biological processes and pave the way towards more efficient diagnosis and improved patient care.

  14. A low-cost multimodal head-mounted display system for neuroendoscopic surgery.

    PubMed

    Xu, Xinghua; Zheng, Yi; Yao, Shujing; Sun, Guochen; Xu, Bainan; Chen, Xiaolei

    2018-01-01

    With rapid advances in technology, wearable devices such as head-mounted displays (HMDs) have been adopted for various uses in medical science, ranging from simply aiding fitness to assisting surgery. We aimed to investigate the feasibility and practicability of a low-cost multimodal HMD system in neuroendoscopic surgery. A multimodal HMD system, consisting mainly of an HMD with two built-in displays, an action camera, and a laptop computer displaying reconstructed medical images, was developed to assist neuroendoscopic surgery. With this intensively integrated system, the neurosurgeon could freely switch between endoscopic images, three-dimensional (3D) reconstructed virtual endoscopy images, and images of the surrounding environment. Using a Leap Motion controller, the neurosurgeon could adjust or rotate the 3D virtual endoscopic images at a distance to better understand the positional relation between lesions and normal tissues at will. A total of 21 consecutive patients with ventricular system diseases underwent neuroendoscopic surgery with the aid of this system. All operations were accomplished successfully, and no system-related complications occurred. The HMD was comfortable to wear and easy to operate. The screen resolution of the HMD was high enough for the neurosurgeon to operate carefully. With the system, the neurosurgeon could gain a better comprehension of lesions by freely switching among images of different modalities. The system had a steep learning curve, meaning that skill with it increased quickly. Compared with commercially available surgical assistant instruments, this system was relatively low cost. The multimodal HMD system is feasible, practical, helpful, and relatively cost efficient in neuroendoscopic surgery.

  15. Design and applications of a multimodality image data warehouse framework.

    PubMed

    Wong, Stephen T C; Hoo, Kent Soo; Knowlton, Robert C; Laxer, Kenneth D; Cao, Xinhau; Hawkins, Randall A; Dillon, William P; Arenson, Ronald L

    2002-01-01

    A comprehensive data warehouse framework is needed, which encompasses imaging and non-imaging information in supporting disease management and research. The authors propose such a framework, describe general design principles and system architecture, and illustrate a multimodality neuroimaging data warehouse system implemented for clinical epilepsy research. The data warehouse system is built on top of a picture archiving and communication system (PACS) environment and applies an iterative object-oriented analysis and design (OOAD) approach and recognized data interface and design standards. The implementation is based on a Java CORBA (Common Object Request Broker Architecture) and Web-based architecture that separates the graphical user interface presentation, data warehouse business services, data staging area, and backend source systems into distinct software layers. To illustrate the practicality of the data warehouse system, the authors describe two distinct biomedical applications--namely, clinical diagnostic workup of multimodality neuroimaging cases and research data analysis and decision threshold on seizure foci lateralization. The image data warehouse framework can be modified and generalized for new application domains.

  16. Design and Applications of a Multimodality Image Data Warehouse Framework

    PubMed Central

    Wong, Stephen T.C.; Hoo, Kent Soo; Knowlton, Robert C.; Laxer, Kenneth D.; Cao, Xinhau; Hawkins, Randall A.; Dillon, William P.; Arenson, Ronald L.

    2002-01-01

    A comprehensive data warehouse framework is needed, which encompasses imaging and non-imaging information in supporting disease management and research. The authors propose such a framework, describe general design principles and system architecture, and illustrate a multimodality neuroimaging data warehouse system implemented for clinical epilepsy research. The data warehouse system is built on top of a picture archiving and communication system (PACS) environment and applies an iterative object-oriented analysis and design (OOAD) approach and recognized data interface and design standards. The implementation is based on a Java CORBA (Common Object Request Broker Architecture) and Web-based architecture that separates the graphical user interface presentation, data warehouse business services, data staging area, and backend source systems into distinct software layers. To illustrate the practicality of the data warehouse system, the authors describe two distinct biomedical applications—namely, clinical diagnostic workup of multimodality neuroimaging cases and research data analysis and decision threshold on seizure foci lateralization. The image data warehouse framework can be modified and generalized for new application domains. PMID:11971885

  17. Image-guided thoracic surgery in the hybrid operation room.

    PubMed

    Ujiie, Hideki; Effat, Andrew; Yasufuku, Kazuhiro

    2017-01-01

    There has been an increase in the use of image-guided technology to facilitate minimally invasive therapy. The next generation of minimally invasive therapy is focused on the advancement and translation of novel image-guided technologies in therapeutic interventions, including surgery, interventional pulmonology, radiation therapy, and interventional laser therapy. To establish the efficacy of different minimally invasive therapies, we have developed a hybrid operating room, known as the guided therapeutics operating room (GTx OR), at the Toronto General Hospital. The GTx OR is equipped with multi-modality image-guidance systems, which feature a dual-source, dual-energy computed tomography (CT) scanner, a robotic cone-beam CT (CBCT)/fluoroscopy system, a high-performance endobronchial ultrasound system, an endoscopic surgery system, a near-infrared (NIR) fluorescence imaging system, and navigation tracking systems. The novel multimodality image-guidance systems allow physicians to quickly and accurately image patients while they are on the operating table. This yields improved outcomes, since physicians are able to use image guidance during their procedures and carry out innovative multi-modality therapeutics. Multiple preclinical translational studies pertaining to innovative minimally invasive technology are being developed in our guided therapeutics laboratory (GTx Lab). The GTx Lab is equipped with similar technology and multimodality image-guidance systems as the GTx OR, and acts as an appropriate platform for the translation of research into human clinical trials. Through the GTx Lab, we are able to perform basic research, such as the development of image-guided technologies, preclinical model testing, and preclinical imaging, and then translate that research into the GTx OR. This OR allows for the utilization of new technologies in cancer therapy, including molecular imaging and other innovative imaging modalities, and therefore enables a better quality of life for patients, both during and after the procedure. In this article, we describe the capabilities of the GTx systems and discuss the first-in-human technologies used and evaluated in the GTx OR.

  18. Multi-mode Intravascular RF Coil for MRI-guided Interventions

    PubMed Central

    Kurpad, Krishna N.; Unal, Orhan

    2011-01-01

    Purpose: To demonstrate the feasibility of using a single intravascular RF probe, connected to the external MRI system via a single coaxial cable, to perform active tip tracking, catheter visualization, and high-SNR intravascular imaging. Materials and Methods: A multi-mode intravascular RF coil was constructed on a 6F balloon catheter and interfaced to a 1.5T MRI scanner via a decoupling circuit. Bench measurements of coil impedances were followed by imaging experiments in saline and phantoms. Results: The multi-mode coil behaves as an inductively coupled transmit coil. A forward-looking capability of 6 mm is measured. A greater than 3-fold increase in SNR compared to conventional imaging using an optimized external coil is demonstrated, as is simultaneous active tip tracking and catheter visualization. Conclusions: It is feasible to perform 1) active tip tracking, 2) catheter visualization, and 3) high-SNR imaging using a single multi-mode intravascular RF coil connected to the external system via a single coaxial cable. PMID:21448969

  19. Calibration for single multi-mode fiber digital scanning microscopy imaging system

    NASA Astrophysics Data System (ADS)

    Yin, Zhe; Liu, Guodong; Liu, Bingguo; Gan, Yu; Zhuang, Zhitao; Chen, Fengdong

    2015-11-01

    Single multimode fiber (MMF) digital scanning imaging systems are a development trend for modern endoscopes. We concentrate on the calibration method of the imaging system. The calibration method comprises two processes: forming scanning focused spots and calibrating the couple factors, which vary with position. An adaptive parallel coordinate (APC) algorithm is adopted to form the focused spots at the MMF output. Compared with other algorithms, APC has several merits, i.e., rapid speed, a small amount of calculation, and no iterations. The ratio of the optical power captured by the MMF to the intensity of the focused spot is called the couple factor. We set up a calibration experimental system to form the scanning focused spots and calculate the couple factors for different object positions. The experimental results show that the couple factor is higher in the center than at the edge.
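    The couple-factor correction described above reduces to a per-position ratio. The following is a minimal numerical sketch under the assumption that the captured power, focused-spot intensity, and raw object signal are available as flat arrays ordered by scan position; all names are hypothetical and do not come from the record.

```python
import numpy as np

def couple_factors(captured_power, spot_intensity):
    """Couple factor per scan position = optical power captured back through the
    MMF divided by the intensity of the corresponding focused output spot."""
    return np.asarray(captured_power, dtype=float) / np.asarray(spot_intensity, dtype=float)

def corrected_image(raw_signal, factors, grid_shape):
    """Divide out the position-dependent couple factor and reshape the corrected
    signal onto the 2D scan grid."""
    return (np.asarray(raw_signal, dtype=float) / factors).reshape(grid_shape)
```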

  20. Combined multi-modal photoacoustic tomography, optical coherence tomography (OCT) and OCT angiography system with an articulated probe for in vivo human skin structure and vasculature imaging

    PubMed Central

    Liu, Mengyang; Chen, Zhe; Zabihian, Behrooz; Sinz, Christoph; Zhang, Edward; Beard, Paul C.; Ginner, Laurin; Hoover, Erich; Minneman, Micheal P.; Leitgeb, Rainer A.; Kittler, Harald; Drexler, Wolfgang

    2016-01-01

    Cutaneous blood flow accounts for approximately 5% of cardiac output in humans and plays a key role in a number of physiological and pathological processes. We show for the first time a multi-modal photoacoustic tomography (PAT), optical coherence tomography (OCT), and OCT angiography system with an articulated probe to extract human cutaneous vasculature in vivo in various skin regions. OCT angiography supplements the microvasculature that PAT alone is unable to provide. The co-registered vessel-network volumes are further embedded in the morphologic image provided by OCT. This multi-modal system is therefore demonstrated as a valuable tool for comprehensive non-invasive human skin vasculature and morphology imaging in vivo. PMID:27699106

  1. Multimodal optical imaging system for in vivo investigation of cerebral oxygen delivery and energy metabolism

    PubMed Central

    Yaseen, Mohammad A.; Srinivasan, Vivek J.; Gorczynska, Iwona; Fujimoto, James G.; Boas, David A.; Sakadžić, Sava

    2015-01-01

    Improving our understanding of brain function requires novel tools to observe multiple physiological parameters with high resolution in vivo. We have developed a multimodal imaging system for investigating multiple facets of cerebral blood flow and metabolism in small animals. The system was custom designed and features multiple optical imaging capabilities, including 2-photon and confocal lifetime microscopy, optical coherence tomography, laser speckle imaging, and optical intrinsic signal imaging. Here, we provide details of the system’s design and present in vivo observations of multiple metrics of cerebral oxygen delivery and energy metabolism, including oxygen partial pressure, microvascular blood flow, and NADH autofluorescence. PMID:26713212

  2. Deep features for efficient multi-biometric recognition with face and ear images

    NASA Astrophysics Data System (ADS)

    Omara, Ibrahim; Xiao, Gang; Amrani, Moussa; Yan, Zifei; Zuo, Wangmeng

    2017-07-01

    Recently, multimodal biometric systems have received considerable research interest in many applications, especially in the field of security. Multimodal systems can increase resistance to spoof attacks, provide more detail and flexibility, and lead to better performance and lower error rates. In this paper, we present a multimodal biometric system based on face and ear, and propose exploiting deep features extracted with Convolutional Neural Networks (CNNs) from face and ear images to obtain more powerful discriminative features and a more robust representation. First, the deep features for face and ear images are extracted with VGG-M Net. Second, the extracted deep features are fused using either traditional concatenation or a Discriminant Correlation Analysis (DCA) algorithm. Third, a multiclass support vector machine is adopted for matching and classification. The experimental results show that the proposed multimodal system based on deep features is efficient and achieves a promising recognition rate of up to 100% using face and ear. In addition, the results indicate that DCA-based fusion is superior to traditional fusion.
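    As an illustration of the pipeline summarized above (feature extraction, fusion, SVM classification), here is a minimal Python sketch that assumes the CNN descriptors have already been extracted. The function and variable names are hypothetical, and plain concatenation stands in for the paper's DCA fusion, which is not reproduced here.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fuse_and_classify(face_train, ear_train, labels, face_test, ear_test):
    """Concatenation-based fusion of pre-extracted face and ear CNN descriptors,
    followed by a multiclass linear SVM for identification."""
    X_train = np.hstack([face_train, ear_train])   # serial (concatenation) feature fusion
    X_test = np.hstack([face_test, ear_test])
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    clf.fit(X_train, labels)
    return clf.predict(X_test)
```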

  3. Novel multifunctional theranostic liposome drug delivery system: construction, characterization, and multimodality MR, near-infrared fluorescent, and nuclear imaging.

    PubMed

    Li, Shihong; Goins, Beth; Zhang, Lujun; Bao, Ande

    2012-06-20

    Liposomes are effective lipid nanoparticle drug delivery systems, which can also be functionalized with noninvasive multimodality imaging agents with each modality providing distinct information and having synergistic advantages in diagnosis, monitoring of disease treatment, and evaluation of liposomal drug pharmacokinetics. We designed and constructed a multifunctional theranostic liposomal drug delivery system, which integrated multimodality magnetic resonance (MR), near-infrared (NIR) fluorescent and nuclear imaging of liposomal drug delivery, and therapy monitoring and prediction. The premanufactured liposomes were composed of DSPC/cholesterol/Gd-DOTA-DSPE/DOTA-DSPE with the molar ratio of 39:35:25:1 and having ammonium sulfate/pH gradient. A lipidized NIR fluorescent tracer, IRDye-DSPE, was effectively postinserted into the premanufactured liposomes. Doxorubicin could be effectively postloaded into the multifunctional liposomes. The multifunctional doxorubicin-liposomes could also be stably radiolabeled with (99m)Tc or (64)Cu for single-photon emission computed tomography (SPECT) or positron emission tomography (PET) imaging, respectively. MR images displayed the high-resolution micro-intratumoral distribution of the liposomes in squamous cell carcinoma of head and neck (SCCHN) tumor xenografts in nude rats after intratumoral injection. NIR fluorescent, SPECT, and PET images also clearly showed either the high intratumoral retention or distribution of the multifunctional liposomes. This multifunctional drug carrying liposome system is promising for disease theranostics allowing noninvasive multimodality NIR fluorescent, MR, SPECT, and PET imaging of their in vivo behavior and capitalizing on the inherent advantages of each modality.

  4. Design and demonstration of multimodal optical scanning microscopy for confocal and two-photon imaging

    NASA Astrophysics Data System (ADS)

    Chun, Wanhee; Do, Dukho; Gweon, Dae-Gab

    2013-01-01

    We developed a multimodal microscope based on an optical scanning system in order to obtain diverse optical information from the same area of a sample. Multimodal imaging research has mostly relied on commercial microscope platforms, which are easy to use but restrictive when extending imaging modalities. In this work, the beam-scanning optics, notably including a relay lens, was customized to deliver broadband (400-1000 nm) light to a sample without optical error or loss. The customized scanning optics guarantees the best performance of imaging techniques that use light within the design wavelength range. Confocal reflection, confocal fluorescence, and two-photon excitation fluorescence images were obtained through the respective implemented imaging channels to demonstrate imaging feasibility for near-UV, visible, and near-IR continuous light, as well as pulsed light, in the scanning optics. The imaging performance in terms of spatial resolution and image contrast was verified experimentally; the results compared satisfactorily with theory. The advantages of customization, including low cost, outstanding ability to combine modalities, and diverse applications, will help to vitalize multimodal imaging research.

  5. Multimodal flexible cystoscopy for creating co-registered panoramas of the bladder urothelium

    NASA Astrophysics Data System (ADS)

    Seibel, Eric J.; Soper, Timothy D.; Burkhardt, Matthew R.; Porter, Michael P.; Yoon, W. Jong

    2012-02-01

    Bladder cancer is the most expensive cancer to treat due to its high rate of recurrence. Though white light cystoscopy is the gold standard for bladder cancer surveillance, the advent of fluorescence biomarkers provides an opportunity to improve sensitivity for early detection and to reduce recurrence through more accurate excision. Ideally, fluorescence information could be combined with standard reflectance images to provide multimodal views of the bladder wall. The 1.2 mm diameter scanning fiber endoscope (SFE) is able to acquire wide-field multimodal video from a bladder phantom with fluorescence cancer "hot-spots". The SFE generates images by scanning red, green, and blue (RGB) laser light and detecting the backscatter signal for reflectance video of 500-line resolution at 30 frames per second. We imaged a bladder phantom with painted vessels and mimicked fluorescent lesions by applying green fluorescent microspheres to the surface. By eliminating the green laser illumination, simultaneous reflectance and fluorescence images can be acquired at the same field of view, resolution, and frame rate. Moreover, the multimodal SFE is combined with a robotic steering mechanism and image stitching software as part of a fully automated bladder surveillance system. Using this system, the SFE can be reliably articulated over the entire 360° bladder surface. Acquired images can then be stitched into a multimodal 3D panorama of the bladder using software developed in our laboratory. In each panorama, the fluorescence images are exactly co-registered with the RGB reflectance.

  6. A novel automated method for doing registration and 3D reconstruction from multi-modal RGB/IR image sequences

    NASA Astrophysics Data System (ADS)

    Kirby, Richard; Whitaker, Ross

    2016-09-01

    In recent years, the use of multi-modal camera rigs consisting of an RGB sensor and an infrared (IR) sensor has become increasingly popular in surveillance and robotics applications. The advantages of using multi-modal camera rigs include improved foreground/background segmentation, a wider range of lighting conditions under which the system works, and richer information (e.g. visible light and heat signature) for target identification. However, the traditional computer vision method of mapping pairs of images using pixel intensities or image features is often not possible with an RGB/IR image pair. We introduce a novel method to overcome the lack of common features in RGB/IR image pairs by using a variational-methods optimization algorithm to map the optical flow fields computed from the different wavelength images. This results in the alignment of the flow fields, which in turn produces correspondences similar to those found in a stereo RGB/RGB camera rig using pixel intensities or image features. In addition to aligning the different wavelength images, these correspondences are used to generate dense disparity and depth maps. We obtain accuracies similar to other multi-modal image alignment methodologies as long as the scene contains sufficient depth variation, although a direct comparison is not possible because of the lack of standard image sets from moving multi-modal camera rigs. We test our method on synthetic optical flow fields and on real image sequences that we created with a multi-modal binocular stereo RGB/IR camera rig. We determine our method's accuracy by comparing against ground truth.
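    The flow-field mapping idea above can be illustrated with a much simpler stand-in: estimate dense optical flow independently in each modality and then register the two flow-magnitude maps. The sketch below assumes grayscale IR frames and equal image sizes, uses scikit-image, and recovers only a translation via phase correlation rather than the variational optimization described in the record; all names are hypothetical.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.registration import optical_flow_tvl1, phase_cross_correlation

def align_by_flow(rgb_t0, rgb_t1, ir_t0, ir_t1):
    """Estimate the RGB-to-IR offset by matching dense optical-flow magnitudes.
    rgb_* are RGB frames at two time points; ir_* are single-channel IR frames."""
    v_rgb, u_rgb = optical_flow_tvl1(rgb2gray(rgb_t0), rgb2gray(rgb_t1))
    v_ir, u_ir = optical_flow_tvl1(ir_t0, ir_t1)
    mag_rgb = np.hypot(u_rgb, v_rgb)          # flow magnitude is modality-agnostic
    mag_ir = np.hypot(u_ir, v_ir)
    shift, error, _ = phase_cross_correlation(mag_rgb, mag_ir)
    return shift  # (row, col) translation mapping the IR flow field onto the RGB one
```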

  7. Multimodal molecular 3D imaging for the tumoral volumetric distribution assessment of folate-based biosensors.

    PubMed

    Ramírez-Nava, Gerardo J; Santos-Cuevas, Clara L; Chairez, Isaac; Aranda-Lara, Liliana

    2017-12-01

    The aim of this study was to characterize the in vivo volumetric distribution of three folate-based biosensors by different imaging modalities (X-ray, fluorescence, Cerenkov luminescence, and radioisotopic imaging) through the development of a tridimensional image reconstruction algorithm. The preclinical multimodal Xtreme imaging system, with a Multimodal Animal Rotation System (MARS), was used to acquire bidimensional images, which were processed to obtain the tridimensional reconstruction. Images of mice at different times (biosensor distribution) were simultaneously obtained from the four imaging modalities. Filtered back projection and the inverse Radon transformation were used as the main image-processing techniques. The algorithm, developed in Matlab, was able to calculate the volumetric profiles of 99mTc-Folate-Bombesin (radioisotopic image), 177Lu-Folate-Bombesin (Cerenkov image), and FolateRSense™ 680 (fluorescence image) in tumors and kidneys of mice, and no significant differences were detected in the volumetric quantifications among measurement techniques. The tridimensional image reconstruction algorithm can be easily extrapolated to different 2D acquisition-type images. This flexibility of the algorithm developed in this study is a remarkable advantage in comparison to similar reconstruction methods.
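    For reference, the core reconstruction step mentioned above (filtered back projection, i.e. the inverse Radon transform) can be sketched with scikit-image as follows; the array names and the slice-by-slice looping are assumptions for illustration, not the authors' Matlab implementation.

```python
import numpy as np
from skimage.transform import iradon  # filter_name requires scikit-image >= 0.19

def reconstruct_slice(sinogram, angles_deg):
    """Filtered back projection of one 2D projection set.
    sinogram   : array of shape (detector bins, number of projection angles)
    angles_deg : acquisition angles in degrees (e.g. from the rotation stage)"""
    return iradon(sinogram, theta=angles_deg, filter_name="ramp", circle=False)

def reconstruct_volume(projections, angles_deg):
    """Stack slice-by-slice reconstructions into a 3D volume.
    projections : array of shape (axial slices, detector bins, angles)"""
    return np.stack([reconstruct_slice(projections[z], angles_deg)
                     for z in range(projections.shape[0])])
```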

  8. A prototype hand-held tri-modal instrument for in vivo ultrasound, photoacoustic, and fluorescence imaging

    NASA Astrophysics Data System (ADS)

    Kang, Jeeun; Chang, Jin Ho; Wilson, Brian C.; Veilleux, Israel; Bai, Yanhui; DaCosta, Ralph; Kim, Kang; Ha, Seunghan; Lee, Jong Gun; Kim, Jeong Seok; Lee, Sang-Goo; Kim, Sun Mi; Lee, Hak Jong; Ahn, Young Bok; Han, Seunghee; Yoo, Yangmo; Song, Tai-Kyong

    2015-03-01

    Multi-modality imaging is beneficial for both preclinical and clinical applications as it enables complementary information from each modality to be obtained in a single procedure. In this paper, we report the design, fabrication, and testing of a novel tri-modal in vivo imaging system to exploit molecular/functional information from fluorescence (FL) and photoacoustic (PA) imaging as well as anatomical information from ultrasound (US) imaging. The same ultrasound transducer was used for both US and PA imaging, bringing the pulsed laser light into a compact probe by fiberoptic bundles. The FL subsystem is independent of the acoustic components but the front end that delivers and collects the light is physically integrated into the same probe. The tri-modal imaging system was implemented to provide each modality image in real time as well as co-registration of the images. The performance of the system was evaluated through phantom and in vivo animal experiments. The results demonstrate that combining the modalities does not significantly compromise the performance of each of the separate US, PA, and FL imaging techniques, while enabling multi-modality registration. The potential applications of this novel approach to multi-modality imaging range from preclinical research to clinical diagnosis, especially in detection/localization and surgical guidance of accessible solid tumors.

  9. High-resolution multimodal clinical multiphoton tomography of skin

    NASA Astrophysics Data System (ADS)

    König, Karsten

    2011-03-01

    This review focuses on multimodal multiphoton tomography based on near-infrared femtosecond lasers. Clinical multiphoton tomographs for 3D high-resolution in vivo imaging were placed on the market several years ago. The second generation of this Prism-Award-winning high-tech skin imaging tool (MPTflex) was introduced in 2010. The same year, the world's first clinical CARS studies were performed with a hybrid multimodal multiphoton tomograph. In particular, non-fluorescent lipids and water, mitochondrial fluorescent NAD(P)H, fluorescent elastin, keratin, and melanin, as well as SHG-active collagen, have been imaged with submicron resolution in patients suffering from psoriasis. Further multimodal approaches include the combination of multiphoton tomographs with low-resolution wide-field systems such as ultrasound, optoacoustic, OCT, and dermoscopy systems. Multiphoton tomographs are currently employed in Australia, Japan, the US, and several European countries for early diagnosis of skin cancer, optimization of treatment strategies, and cosmetic research, including long-term testing of sunscreen nanoparticles and anti-aging products.

  10. Computational method for multi-modal microscopy based on transport of intensity equation

    NASA Astrophysics Data System (ADS)

    Li, Jiaji; Chen, Qian; Sun, Jiasong; Zhang, Jialin; Zuo, Chao

    2017-02-01

    In this paper, we develop the requisite theory to describe a hybrid virtual-physical multi-modal imaging system which yields quantitative phase, Zernike phase contrast, differential interference contrast (DIC), and light field moment imaging simultaneously based on the transport of intensity equation (TIE). We then give the experimental demonstration of these ideas by time-lapse imaging of live HeLa cell mitosis. Experimental results verify that a tunable-lens-based TIE system, combined with the appropriate post-processing algorithm, can achieve a variety of promising imaging modalities in parallel with the quantitative phase images for the dynamic study of cellular processes.
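    For context, the transport of intensity equation underlying the phase retrieval referred to above is commonly written in the following standard paraxial form (notation is generic, with k the wavenumber, I the in-focus intensity, and phi the phase; it is not quoted from this record):

```latex
% Transport of intensity equation (paraxial, standard form)
\[
  -k \,\frac{\partial I(x,y;z)}{\partial z}
  \;=\;
  \nabla_{\!\perp} \cdot \left[\, I(x,y;z)\, \nabla_{\!\perp}\,\phi(x,y) \,\right],
  \qquad k = \frac{2\pi}{\lambda}.
\]
```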

  11. A Novel Multifunctional Theranostic Liposome Drug Delivery System: Construction, Characterization, and Multimodality MR, Near-infrared Fluorescent and Nuclear Imaging

    PubMed Central

    Li, Shihong; Goins, Beth; Zhang, Lujun; Bao, Ande

    2012-01-01

    Liposomes are effective lipid nanoparticle drug delivery systems, which can also be functionalized with non-invasive multimodality imaging agents with each modality providing distinct information and having synergistic advantages in diagnosis, monitoring of disease treatment, and evaluation of liposomal drug pharmacokinetics. We designed and constructed a multifunctional theranostic liposomal drug delivery system, which integrated multimodality magnetic resonance (MR), near-infrared (NIR) fluorescent and nuclear imaging of liposomal drug delivery, and therapy monitoring and prediction. The pre-manufactured liposomes were composed of DSPC/cholesterol/Gd-DOTA-DSPE/DOTA-DSPE with the molar ratio of 39:35:25:1 and having ammonium sulfate/pH gradient. A lipidized NIR fluorescent tracer, IRDye-DSPE, was effectively post-inserted into the pre-manufactured liposomes. Doxorubicin could be effectively post-loaded into the multifunctional liposomes. The multifunctional doxorubicin-liposomes could also be stably radiolabeled with 99mTc or 64Cu for single photon emission computed tomography (SPECT) or positron emission tomography (PET) imaging, respectively. MR images displayed the high resolution micro-intratumoral distribution of the liposomes in squamous cell carcinoma of head and neck (SCCHN) tumor xenografts in nude rats after intratumoral injection. NIR fluorescent, SPECT and PET images also clearly showed either the high intratumoral retention or distribution of the multifunctional liposomes. This multifunctional drug carrying liposome system is promising for disease theranostics allowing non-invasive multimodality NIR fluorescent, MR, SPECT and PET imaging of their in vivo behavior and capitalizing on the inherent advantages of each modality. PMID:22577859

  12. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    NASA Astrophysics Data System (ADS)

    Morishima, Shigeo; Nakamura, Satoshi

    2004-12-01

    We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.

  13. Combined multimodal photoacoustic tomography, optical coherence tomography (OCT) and OCT based angiography system for in vivo imaging of multiple skin disorders in human (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Liu, Mengyang; Chen, Zhe; Sinz, Christoph; Rank, Elisabet; Zabihian, Behrooz; Zhang, Edward Z.; Beard, Paul C.; Kittler, Harald; Drexler, Wolfgang

    2017-02-01

    All-optical photoacoustic tomography (PAT) using a planar Fabry-Perot interferometer polymer film sensor has been demonstrated for in vivo human palm imaging with an imaging penetration depth of 5 mm. The relatively larger vessels in the superficial plexus and the vessels in the dermal plexus are visible in PAT. However, due to both resolution and sensitivity limits, all-optical PAT cannot reveal smaller vessels such as capillary loops and venules. Melanin absorption also sometimes makes it difficult for PAT to resolve vessels. Optical coherence tomography (OCT) based angiography, on the other hand, has proven suitable for microvasculature visualization in the first couple of millimeters of human skin. In our work, we combine an all-optical PAT system with an OCT system featuring a phase-stable akinetic swept source. This multimodal PAT/OCT/OCT-angiography system provides co-registered human skin vasculature information as well as cutaneous structural information. The scanning units of the sub-systems are assembled into one probe, which is then mounted onto a portable rack. The probe and rack design gives six degrees of freedom, allowing the multimodal optical imaging probe to access nearly all regions of the human body. Utilizing this probe, we perform imaging on patients with various skin disorders as well as on healthy controls. The fused PAT/OCT-angiography volume shows the complete blood vessel network in human skin, which is further embedded in the morphology provided by OCT. A comparison between the results from the disordered regions and the normal regions demonstrates the clinical translational value of this multimodal optical imaging system in dermatology.

  14. WE-H-206-02: Recent Advances in Multi-Modality Molecular Imaging of Small Animals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsui, B.

    Lihong V. Wang: Photoacoustic tomography (PAT), combining non-ionizing optical and ultrasonic waves via the photoacoustic effect, provides in vivo multiscale functional, metabolic, and molecular imaging. Broad applications include imaging of the breast, brain, skin, esophagus, colon, vascular system, and lymphatic system in humans or animals. Light offers rich contrast but does not penetrate biological tissue in straight paths as x-rays do. Consequently, high-resolution pure optical imaging (e.g., confocal microscopy, two-photon microscopy, and optical coherence tomography) is limited to penetration within the optical diffusion limit (∼1 mm in the skin). Ultrasonic imaging, on the contrary, provides fine spatial resolution but suffers from both poor contrast in early-stage tumors and strong speckle artifacts. In PAT, pulsed laser light penetrates tissue and generates a small but rapid temperature rise, which induces emission of ultrasonic waves due to thermoelastic expansion. The ultrasonic waves, orders of magnitude less scattering than optical waves, are then detected to form high-resolution images of optical absorption at depths up to 7 cm, conquering the optical diffusion limit. PAT is the only modality capable of imaging across the length scales of organelles, cells, tissues, and organs (up to whole-body small animals) with consistent contrast. This rapidly growing technology promises to enable multiscale biological research and accelerate translation from microscopic laboratory discoveries to macroscopic clinical practice. PAT may also hold the key to label-free early detection of cancer by in vivo quantification of hypermetabolism, the quintessential hallmark of malignancy. Learning Objectives: To understand the contrast mechanism of PAT; to understand the multiscale applications of PAT. Benjamin M. W. Tsui: Multi-modality molecular imaging instrumentation and techniques have been major developments in small animal imaging that have contributed significantly to biomedical research during the past decade. The initial development was an extension of clinical PET/CT and SPECT/CT from humans to small animals, combining the unique functional information obtained from PET and SPECT with the anatomical information provided by CT in registered multi-modality images. The requirements to image a mouse whose size is an order of magnitude smaller than that of a human have spurred advances in new radiation detector technologies, novel imaging system designs, and special image reconstruction and processing techniques. Examples are new detector materials and designs with high intrinsic resolution, multi-pinhole (MPH) collimator designs for much improved resolution and detection efficiency compared to the conventional collimator designs in SPECT, 3D high-resolution and artifact-free MPH and sparse-view image reconstruction techniques, and iterative image reconstruction methods with system response modeling for resolution recovery and image noise reduction for much improved image quality. The spatial resolution of PET and SPECT has improved from ∼6–12 mm to ∼1 mm a few years ago to sub-millimeter today. A recent commercial small animal SPECT system has achieved a resolution of ∼0.25 mm, which surpasses that of a state-of-the-art PET system whose resolution is limited by the positron range. More recently, multimodality SA PET/MRI and SPECT/MRI systems have been developed in research laboratories. Also, multi-modality SA imaging systems that include other imaging modalities such as optical and ultrasound are being actively pursued. In this presentation, we will provide a review of the development, recent advances, and future outlook of multi-modality molecular imaging of small animals. Learning Objectives: To learn about the two major multi-modality molecular imaging techniques of small animals; to learn about the spatial resolution achievable by the molecular imaging systems for small animals today; to learn about the new multi-modality imaging instrumentation and techniques that are being developed. Sang Hyun Cho: X-ray fluorescence (XRF) imaging, such as x-ray fluorescence computed tomography (XFCT), offers unique capabilities for accurate identification and quantification of metals within the imaging objects. As a result, it has emerged as a promising quantitative imaging modality in recent years, especially in conjunction with metal-based imaging probes. This talk will familiarize the audience with the basic principles of XRF/XFCT imaging. It will also cover the latest development of benchtop XFCT technology. Additionally, the use of metallic nanoparticles such as gold nanoparticles, in conjunction with benchtop XFCT, will be discussed within the context of preclinical multimodal multiplexed molecular imaging. Learning Objectives: To learn the basic principles of XRF/XFCT imaging; to learn the latest advances in benchtop XFCT development for preclinical imaging. Funding support received from NIH and DOD; funding support received from GE Healthcare; funding support received from Siemens AX; patent royalties received from GE Healthcare; L. Wang, Funding Support: NIH; COI: Microphotoacoustics; S. Cho: NIH/NCI grant R01CA155446; DOD/PCRP grant W81XWH-12-1-0198.

  15. A Multimode Optical Imaging System for Preclinical Applications In Vivo: Technology Development, Multiscale Imaging, and Chemotherapy Assessment

    PubMed Central

    Hwang, Jae Youn; Wachsmann-Hogiu, Sebastian; Ramanujan, V. Krishnan; Ljubimova, Julia; Gross, Zeev; Gray, Harry B.; Medina-Kauwe, Lali K.; Farkas, Daniel L.

    2012-01-01

    Purpose: Several established optical imaging approaches have been applied, usually in isolation, to preclinical studies; however, truly useful in vivo imaging may require a simultaneous combination of imaging modalities to examine dynamic characteristics of cells and tissues. We developed a new multimode optical imaging system designed to be application-versatile, yielding high-sensitivity and high-specificity molecular imaging. Procedures: We integrated several optical imaging technologies, including fluorescence intensity, spectral, lifetime, intravital confocal, two-photon excitation, and bioluminescence, into a single system that enables functional multiscale imaging in animal models. Results: The approach offers a comprehensive imaging platform for kinetic, quantitative, and environmental analysis of highly relevant information, with micro-to-macroscopic resolution. Applied to small animals in vivo, this provides superior monitoring of processes of interest, represented here by chemo-/nanoconstruct therapy assessment. Conclusions: This new system is versatile and can be optimized for various applications, of which cancer detection and targeted treatment are emphasized here. PMID:21874388

  16. Multimodal microscopy and the stepwise multi-photon activation fluorescence of melanin

    NASA Astrophysics Data System (ADS)

    Lai, Zhenhua

    The author's work is divided into three aspects: multimodal microscopy, stepwise multi-photon activation fluorescence (SMPAF) of melanin, and customized-profile lenses (CPL) for on-axis laser scanners, which are introduced in turn. A multimodal microscope provides the ability to image samples with multiple modalities on the same stage, which incorporates the benefits of all modalities. The multimodal microscopes developed in this dissertation are the Keck 3D fusion multimodal microscope 2.0 (3DFM 2.0), upgraded from the old 3DFM with improved performance and flexibility, and the multimodal microscope for targeting small particles (the "Target" system). The control systems developed for both microscopes are low-cost and easy to build, with all components off-the-shelf. The control systems have not only significantly decreased the complexity and size of the microscopes, but also increased the pixel resolution and flexibility. The SMPAF of melanin, activated by a continuous-wave (CW) mode near-infrared (NIR) laser, has potential applications as a low-cost and reliable method of detecting melanin. The photophysics of melanin SMPAF has been studied by theoretical analysis of the excitation process and investigation of the spectra, activation threshold, and photon number absorption of melanin SMPAF. SMPAF images of melanin in mouse hair and skin, mouse melanoma, and human black and white hairs are compared with images taken by conventional multi-photon fluorescence microscopy (MPFM) and confocal reflectance microscopy (CRM). SMPAF images significantly increase specificity and demonstrate the potential to increase sensitivity for melanin detection compared to MPFM and CRM images. Employing melanin SMPAF imaging to detect melanin inside human skin in vivo has been demonstrated, which proves the effectiveness of melanin detection using SMPAF for medical purposes. Selective melanin ablation with micrometer resolution has been presented using the Target system. Compared to traditional selective photothermolysis, this method demonstrates higher precision, higher specificity, and deeper penetration. Therefore, SMPAF-guided selective ablation of melanin is a promising tool for removing melanin for both medical and cosmetic purposes. Three CPLs have been designed for low-cost linear-motion scanners, low-cost fast spinning scanners, and high-precision fast spinning scanners. Each design has been tailored to industrial manufacturing capabilities and market demands.

  17. Image recovery from defocused 2D fluorescent images in multimodal digital holographic microscopy.

    PubMed

    Quan, Xiangyu; Matoba, Osamu; Awatsuji, Yasuhiro

    2017-05-01

    A technique for three-dimensional (3D) intensity retrieval from defocused, two-dimensional (2D) fluorescent images in multimodal digital holographic microscopy (DHM) is proposed. In the multimodal DHM, 3D phase and 2D fluorescence distributions are obtained simultaneously by an integrated system of an off-axis DHM and a conventional epifluorescence microscope, respectively. This gives us more information about the target; however, defocused fluorescent images are observed due to the short depth of field. In this Letter, we propose a method to recover the defocused images based on phase compensation and backpropagation from the defocused plane to the focused plane, using distance information obtained from the 3D phase distribution. By applying Zernike polynomial phase correction, we bring the fluorescence intensity back to the focused imaging planes. An experimental demonstration using fluorescent beads is presented, and expected applications are suggested.
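    The backpropagation step described above can be illustrated with a standard angular-spectrum propagator. The sketch below is a generic numerical refocusing routine (function and parameter names are assumptions), and it omits the Zernike polynomial phase compensation applied in the Letter; a negative propagation distance back-propagates the field.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, dz):
    """Propagate a 2D complex field by distance dz using the angular spectrum method.
    field: complex array (ny, nx); wavelength, dx, dz in the same length units."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)              # spatial frequencies along x
    fy = np.fft.fftfreq(ny, d=dx)              # assume square pixels of pitch dx
    FX, FY = np.meshgrid(fx, fy)
    k = 2 * np.pi / wavelength
    kz_sq = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))       # evanescent components suppressed
    H = np.exp(1j * kz * dz) * (kz_sq > 0)     # transfer function of free space
    return np.fft.ifft2(np.fft.fft2(field) * H)
```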

  18. Multimodal medical information retrieval with unsupervised rank fusion.

    PubMed

    Mourão, André; Martins, Flávio; Magalhães, João

    2015-01-01

    Modern medical information retrieval systems are paramount for managing the enormous quantities of clinical data. These systems empower health care experts in the diagnosis of patients and play an important role in the clinical decision process. However, the ever-growing heterogeneous information generated in medical environments poses several challenges for retrieval systems. We propose a medical information retrieval system with support for multimodal medical case-based retrieval. The system supports medical information discovery by providing multimodal search, through a novel data fusion algorithm, and term suggestions from a medical thesaurus. Our search system compared favorably to other systems in the 2013 ImageCLEFMedical evaluation. Copyright © 2014 Elsevier Ltd. All rights reserved.
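    The multimodal rank-fusion idea can be illustrated with reciprocal rank fusion, a common unsupervised baseline for combining ranked lists from different modalities; this is not necessarily the specific fusion algorithm proposed in the record, and the function below is only a generic sketch.

```python
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k=60):
    """Fuse several ranked result lists (one per modality or query representation)
    without any training. ranked_lists: iterable of lists of document ids, best first.
    Each document scores sum(1 / (k + rank)) over the lists in which it appears."""
    scores = defaultdict(float)
    for results in ranked_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Example: fuse an image-based and a text-based ranking for one clinical case query.
fused = reciprocal_rank_fusion([["case12", "case7", "case3"], ["case7", "case12", "case9"]])
```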

  19. Development of a multi-scale and multi-modality imaging system to characterize tumours and their microenvironment in vivo

    NASA Astrophysics Data System (ADS)

    Rouffiac, Valérie; Ser-Leroux, Karine; Dugon, Emilie; Leguerney, Ingrid; Polrot, Mélanie; Robin, Sandra; Salomé-Desnoulez, Sophie; Ginefri, Jean-Christophe; Sebrié, Catherine; Laplace-Builhé, Corinne

    2015-03-01

    In vivo high-resolution imaging of tumor development is possible through a dorsal skinfold chamber implantable in mouse models. However, current intravital imaging systems are poorly tolerated by mice over time and do not allow multimodality imaging. Our project aims to develop a new chamber for: 1- long-term micro/macroscopic visualization of the tumor (vascular and cellular compartments) and tissue microenvironment; and 2- multimodality imaging (photonic, MRI, and sonography). Our new experimental device was patented in March 2014, was primarily assessed on 75 mice engrafted with the 4T1-Luc tumor cell line, and was validated in confocal and multiphoton imaging after staining the mouse vasculature using Dextran 155 kDa-TRITC or Dextran 2000 kDa-FITC. Simultaneously, a universal stage was designed for optimal removal of respiratory and cardiac artifacts during microscopy assays. Experimental results from optical, ultrasound (B-mode and pulse subtraction mode), and MRI imaging (anatomic sequences) showed that our patented design, unlike commercial devices, improves longitudinal monitoring over several weeks (35 days on average versus 12 for the commercial chamber) and allows for better characterization of the early and late tissue alterations due to tumour development. We also demonstrated compatibility with multimodality imaging and a 2.9-fold increase in mouse survival with our new skinfold chamber. Current developments include: 1- defining new procedures for multi-labelling of cells and tissue (screening of fluorescent molecules and imaging protocols); 2- developing ultrasound and MRI imaging procedures with specific probes; 3- correlating optical/ultrasound/MRI data for a complete mapping of tumour development and the microenvironment.

  20. Towards an ultra-thin medical endoscope: multimode fibre as a wide-field image transferring medium

    NASA Astrophysics Data System (ADS)

    Duriš, Miroslav; Bradu, Adrian; Podoleanu, Adrian; Hughes, Michael

    2018-03-01

    Multimode optical fibres are attractive for biomedical and industrial applications such as endoscopes because of the small cross section and imaging resolution they can provide in comparison to widely-used fibre bundles. However, the image is randomly scrambled by propagation through a multimode fibre. Even though the scrambling is unpredictable, it is deterministic, and therefore the scrambling can be reversed. To unscramble the image, we treat the multimode fibre as a linear, disordered scattering medium. To calibrate, we scan a focused beam of coherent light over thousands of different beam positions at the distal end and record complex fields at the proximal end of the fibre. This way, the input-output response of the system is determined, which then allows computational reconstruction of reflection-mode images. However, there remains the problem of illuminating the tissue via the fibre while avoiding back reflections from the proximal face. To avoid this drawback, we provide here the first preliminary confirmation that an image can be transferred through a 2x2 fibre coupler, with the sample at its distal port interrogated in reflection. Light is injected into one port for illumination and then collected from a second port for imaging.
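    Conceptually, the calibration described above amounts to measuring a transmission matrix and inverting it. A minimal numerical sketch follows, assuming the proximal complex fields have already been measured for each distal focus position; the names are hypothetical, and regularization is reduced to a simple least-squares cutoff.

```python
import numpy as np

def build_transmission_matrix(calibration_outputs):
    """Stack the measured proximal complex fields, one column per distal focus position.
    calibration_outputs: list of 2D complex arrays, one per calibration spot."""
    return np.column_stack([out.ravel() for out in calibration_outputs])

def reconstruct_image(measured_field, T, scan_grid_shape, rcond=1e-3):
    """Undo the fibre scrambling by solving T @ c = measured_field in the least-squares
    sense; |c| approximates the reflectance at each scanned distal position."""
    coeffs, *_ = np.linalg.lstsq(T, measured_field.ravel(), rcond=rcond)
    return np.abs(coeffs).reshape(scan_grid_shape)
```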

  1. Development of a Multi-modal Tissue Diagnostic System Combining High Frequency Ultrasound and Photoacoustic Imaging with Lifetime Fluorescence Spectroscopy

    PubMed Central

    Sun, Yang; Stephens, Douglas N.; Park, Jesung; Sun, Yinghua; Marcu, Laura; Cannata, Jonathan M.; Shung, K. Kirk

    2010-01-01

    We report the development and validation of a multi-modal tissue diagnostic technology that combines three complementary techniques into one system: ultrasound backscatter microscopy (UBM), photoacoustic imaging (PAI), and time-resolved laser-induced fluorescence spectroscopy (TR-LIFS). UBM enables the reconstruction of the tissue microanatomy. PAI maps the optical absorption heterogeneity of the tissue associated with structural information and has the potential to provide functional imaging of the tissue. Examination of the UBM and PAI images allows for localization of regions of interest for TR-LIFS evaluation of the tissue composition. The hybrid probe consists of a single-element ring transducer with concentric fiber optics for multi-modal data acquisition. Validation and characterization of the multi-modal system and coregistration of the ultrasonic, photoacoustic, and spectroscopic data were conducted in a physical phantom with properties of ultrasound scattering, optical absorption, and fluorescence. The UBM system with the 41 MHz ring transducer reaches axial and lateral resolutions of 30 and 65 μm, respectively. The PAI system, with 532 nm excitation light from a Nd:YAG laser, shows strong contrast for the distribution of optical absorbers. The TR-LIFS system records the fluorescence decay with a time resolution of ~300 ps and high sensitivity in the nM concentration range. A biological phantom constructed from different types of tissue (tendon and fat) was used to demonstrate the complementary information provided by the three modalities. Fluorescence spectra and lifetimes were compared to differentiate the chemical composition of tissues at the regions of interest determined from the coregistered high-resolution UBM and PAI images. Current results demonstrate that the fusion of these techniques enables sequential detection of functional, morphological, and compositional features of biological tissue, suggesting potential applications in the diagnosis of tumors and atherosclerotic plaques. PMID:21894259

  2. Development of a Multi-modal Tissue Diagnostic System Combining High Frequency Ultrasound and Photoacoustic Imaging with Lifetime Fluorescence Spectroscopy.

    PubMed

    Sun, Yang; Stephens, Douglas N; Park, Jesung; Sun, Yinghua; Marcu, Laura; Cannata, Jonathan M; Shung, K Kirk

    2008-01-01

    We report the development and validation of a multi-modal tissue diagnostic technology that combines three complementary techniques into one system: ultrasound backscatter microscopy (UBM), photoacoustic imaging (PAI), and time-resolved laser-induced fluorescence spectroscopy (TR-LIFS). UBM enables the reconstruction of the tissue microanatomy. PAI maps the optical absorption heterogeneity of the tissue associated with structural information and has the potential to provide functional imaging of the tissue. Examination of the UBM and PAI images allows for localization of regions of interest for TR-LIFS evaluation of the tissue composition. The hybrid probe consists of a single-element ring transducer with concentric fiber optics for multi-modal data acquisition. Validation and characterization of the multi-modal system and coregistration of the ultrasonic, photoacoustic, and spectroscopic data were conducted in a physical phantom with properties of ultrasound scattering, optical absorption, and fluorescence. The UBM system with the 41 MHz ring transducer reaches axial and lateral resolutions of 30 and 65 μm, respectively. The PAI system, with 532 nm excitation light from a Nd:YAG laser, shows strong contrast for the distribution of optical absorbers. The TR-LIFS system records the fluorescence decay with a time resolution of ~300 ps and high sensitivity in the nM concentration range. A biological phantom constructed from different types of tissue (tendon and fat) was used to demonstrate the complementary information provided by the three modalities. Fluorescence spectra and lifetimes were compared to differentiate the chemical composition of tissues at the regions of interest determined from the coregistered high-resolution UBM and PAI images. Current results demonstrate that the fusion of these techniques enables sequential detection of functional, morphological, and compositional features of biological tissue, suggesting potential applications in the diagnosis of tumors and atherosclerotic plaques.

  3. Analysis of multimode fiber bundles for endoscopic spectral-domain optical coherence tomography

    PubMed Central

    Risi, Matthew D.; Makhlouf, Houssine; Rouse, Andrew R.; Gmitro, Arthur F.

    2016-01-01

    A theoretical analysis of the use of a fiber bundle in spectral-domain optical coherence tomography (OCT) systems is presented. The fiber bundle enables a flexible endoscopic design and provides fast, parallelized acquisition of the OCT data. However, the multimode characteristic of the fibers in the fiber bundle affects the depth sensitivity of the imaging system. A description of light interference in a multimode fiber is presented along with numerical simulations and experimental studies to illustrate the theoretical analysis. PMID:25967012

  4. An atlas-based multimodal registration method for 2D images with discrepancy structures.

    PubMed

    Lv, Wenchao; Chen, Houjin; Peng, Yahui; Li, Yanfeng; Li, Jupeng

    2018-06-04

    An atlas-based multimodal registration method for 2-dimensional (2D) images with discrepancy structures is proposed in this paper. The atlas is utilized to complement the discrepancy structure information in multimodal medical images. The scheme includes three steps: floating-image-to-atlas registration, atlas-to-reference-image registration, and field-based deformation. To evaluate the performance, a frame model, a brain model, and clinical images were employed in registration experiments. We measured the registration performance by the squared sum of intensity differences. Results indicate that this method is robust and performs better than direct registration for multimodal images with discrepancy structures. We conclude that the proposed method is suitable for multimodal images with discrepancy structures. Graphical Abstract: An atlas-based multimodal registration method schematic diagram.
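    The three-step scheme above can be mimicked numerically by composing the two displacement fields and scoring the result with the squared sum of intensity differences. The sketch below assumes dense 2D pixel-unit displacement fields in a pull-back (resampling) convention, with hypothetical names; it is not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, disp):
    """Resample `image` with a dense displacement field of shape (2, H, W):
    output(x) = image(x + disp(x)), i.e. a pull-back warp in pixel units."""
    h, w = image.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([rows + disp[0], cols + disp[1]])
    return map_coordinates(image, coords, order=1, mode="nearest")

def compose(d_float2atlas, d_atlas2ref):
    """Compose the two registration steps so the floating image can be resampled
    directly onto the reference grid: d(x) = d_a2r(x) + d_f2a(x + d_a2r(x))."""
    warped = np.stack([warp(d_float2atlas[i], d_atlas2ref) for i in range(2)])
    return d_atlas2ref + warped

def ssd(a, b):
    """Squared sum of intensity differences, the evaluation metric named in the record."""
    return float(np.sum((np.asarray(a, float) - np.asarray(b, float)) ** 2))
```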

  5. Structured illumination multimodal 3D-resolved quantitative phase and fluorescence sub-diffraction microscopy

    PubMed Central

    Chowdhury, Shwetadwip; Eldridge, Will J.; Wax, Adam; Izatt, Joseph A.

    2017-01-01

    Sub-diffraction resolution imaging has played a pivotal role in biological research by visualizing key, but previously unresolvable, sub-cellular structures. Unfortunately, applications of far-field sub-diffraction resolution are currently divided between fluorescent and coherent-diffraction regimes, and a multimodal sub-diffraction technique that bridges this gap has not yet been demonstrated. Here we report that structured illumination (SI) allows multimodal sub-diffraction imaging of both coherent quantitative-phase (QP) and fluorescence. Due to SI’s conventionally fluorescent applications, we first demonstrate the principle of SI-enabled three-dimensional (3D) QP sub-diffraction imaging with calibration microspheres. Image analysis confirmed enhanced lateral and axial resolutions over diffraction-limited QP imaging, and established striking parallels between coherent SI and conventional optical diffraction tomography. We next introduce an optical system utilizing SI to achieve 3D sub-diffraction, multimodal QP/fluorescent visualization of A549 biological cells fluorescently tagged for F-actin. Our results suggest that SI has a unique utility in studying biological phenomena with significant molecular, biophysical, and biochemical components. PMID:28663887

  6. Laser Microdissection and Atmospheric Pressure Chemical Ionization Mass Spectrometry Coupled for Multimodal Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lorenz, Matthias; Ovchinnikova, Olga S; Kertesz, Vilmos

    2013-01-01

    This paper describes the coupling of ambient laser ablation surface sampling, accomplished using a laser capture microdissection system, with atmospheric pressure chemical ionization mass spectrometry for high spatial resolution multimodal imaging. A commercial laser capture microdissection system was placed in close proximity to a modified ion source of a mass spectrometer designed to allow for sampling of laser ablated material via a transfer tube directly into the ionization region. Rhodamine 6G dye of red Sharpie ink in a laser-etched pattern, as well as cholesterol and phosphatidylcholine in a cerebellum mouse brain thin tissue section, were identified and imaged from full-scan mass spectra. A minimal spot diameter of 8 μm was achieved using the 10X microscope cutting objective, with a lateral oversampling pixel resolution of about 3.7 μm. Distinguishing between features approximately 13 μm apart in a cerebellum mouse brain thin tissue section was demonstrated in a multimodal fashion, including co-registered optical and mass spectral chemical images.

  7. Enhancing image classification models with multi-modal biomarkers

    NASA Astrophysics Data System (ADS)

    Caban, Jesus J.; Liao, David; Yao, Jianhua; Mollura, Daniel J.; Gochuico, Bernadette; Yoo, Terry

    2011-03-01

    Currently, most computer-aided diagnosis (CAD) systems rely on image analysis and statistical models to diagnose, quantify, and monitor the progression of a particular disease. In general, CAD systems have proven to be effective at providing quantitative measurements and assisting physicians during the decision-making process. As the need for more flexible and effective CADs continues to grow, questions about how to enhance their accuracy have surged. In this paper, we show how statistical image models can be augmented with multi-modal physiological values to create more robust, stable, and accurate CAD systems. In particular, this paper demonstrates how highly correlated blood and EKG features can be treated as biomarkers and used to enhance image classification models designed to automatically score subjects with pulmonary fibrosis. In our results, a 3-5% improvement was observed when comparing the accuracy of CADs that use multi-modal biomarkers with those that only used image features. Our results show that lab values such as Erythrocyte Sedimentation Rate and Fibrinogen, as well as EKG measurements such as QRS and I:40, are statistically significant and can provide valuable insights about the severity of the pulmonary fibrosis disease.
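    The augmentation strategy described above can be sketched generically: concatenate the physiological biomarkers with the image features and compare cross-validated accuracy with and without them. The snippet below is an illustrative scikit-learn sketch with hypothetical variable names, not the CAD model used in the study.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def score_with_and_without_biomarkers(X_image, X_bio, y, cv=5):
    """Compare a classifier trained on image features alone versus image features
    concatenated with lab/EKG biomarkers (e.g. ESR, fibrinogen, QRS measurements).
    X_image, X_bio: feature matrices with matching rows; y: severity labels."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    acc_image_only = cross_val_score(clf, X_image, y, cv=cv).mean()
    acc_augmented = cross_val_score(clf, np.hstack([X_image, X_bio]), y, cv=cv).mean()
    return acc_image_only, acc_augmented
```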

  8. Stereoscopic Integrated Imaging Goggles for Multimodal Intraoperative Image Guidance

    PubMed Central

    Mela, Christopher A.; Patterson, Carrie; Thompson, William K.; Papay, Francis; Liu, Yang

    2015-01-01

    We have developed novel stereoscopic wearable multimodal intraoperative imaging and display systems entitled Integrated Imaging Goggles for guiding surgeries. The prototype systems offer real time stereoscopic fluorescence imaging and color reflectance imaging capacity, along with in vivo handheld microscopy and ultrasound imaging. With the Integrated Imaging Goggle, both wide-field fluorescence imaging and in vivo microscopy are provided. The real time ultrasound images can also be presented in the goggle display. Furthermore, real time goggle-to-goggle stereoscopic video sharing is demonstrated, which can greatly facilitate telemedicine. In this paper, the prototype systems are described, characterized and tested in surgeries in biological tissues ex vivo. We have found that the system can detect fluorescent targets with as low as 60 nM indocyanine green and can resolve structures down to 0.25 mm with large FOV stereoscopic imaging. The system has successfully guided simulated cancer surgeries in chicken. The Integrated Imaging Goggle is novel in 4 aspects: it is (a) the first wearable stereoscopic wide-field intraoperative fluorescence imaging and display system, (b) the first wearable system offering both large FOV and microscopic imaging simultaneously, (c) the first wearable system that offers both ultrasound imaging and fluorescence imaging capacities, and (d) the first demonstration of goggle-to-goggle communication to share stereoscopic views for medical guidance. PMID:26529249

  9. Integrated scanning laser ophthalmoscopy and optical coherence tomography for quantitative multimodal imaging of retinal degeneration and autofluorescence

    NASA Astrophysics Data System (ADS)

    Issaei, Ali; Szczygiel, Lukasz; Hossein-Javaheri, Nima; Young, Mei; Molday, L. L.; Molday, R. S.; Sarunic, M. V.

    2011-03-01

    Scanning Laser Ophthalmoscopy (SLO) and Optical Coherence Tomography (OCT) are complementary retinal imaging modalities. Integration of SLO and OCT allows both fluorescence detection and depth-resolved structural imaging of the retinal cell layers to be performed in vivo. System customization is required to image the rodents used in medical research by vision scientists. We are investigating multimodal SLO/OCT imaging of a rodent model of Stargardt's Macular Dystrophy, which is characterized by retinal degeneration and accumulation of toxic autofluorescent lipofuscin deposits. Our new findings demonstrate the ability to track fundus autofluorescence and retinal degeneration concurrently.

  10. WE-H-206-03: Promises and Challenges of Benchtop X-Ray Fluorescence CT (XFCT) for Quantitative in Vivo Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, S.

    Lihong V. Wang: Photoacoustic tomography (PAT), combining non-ionizing optical and ultrasonic waves via the photoacoustic effect, provides in vivo multiscale functional, metabolic, and molecular imaging. Broad applications include imaging of the breast, brain, skin, esophagus, colon, vascular system, and lymphatic system in humans or animals. Light offers rich contrast but does not penetrate biological tissue in straight paths as x-rays do. Consequently, high-resolution pure optical imaging (e.g., confocal microscopy, two-photon microscopy, and optical coherence tomography) is limited to penetration within the optical diffusion limit (∼1 mm in the skin). Ultrasonic imaging, by contrast, provides fine spatial resolution but suffers from both poor contrast in early-stage tumors and strong speckle artifacts. In PAT, pulsed laser light penetrates tissue and generates a small but rapid temperature rise, which induces emission of ultrasonic waves due to thermoelastic expansion. The ultrasonic waves, which are orders of magnitude less scattered than optical waves, are then detected to form high-resolution images of optical absorption at depths up to 7 cm, conquering the optical diffusion limit. PAT is the only modality capable of imaging across the length scales of organelles, cells, tissues, and organs (up to whole-body small animals) with consistent contrast. This rapidly growing technology promises to enable multiscale biological research and accelerate translation from microscopic laboratory discoveries to macroscopic clinical practice. PAT may also hold the key to label-free early detection of cancer by in vivo quantification of hypermetabolism, the quintessential hallmark of malignancy. Learning Objectives: To understand the contrast mechanism of PAT. To understand the multiscale applications of PAT. Benjamin M. W. Tsui: Multi-modality molecular imaging instrumentation and techniques have been major developments in small-animal imaging that have contributed significantly to biomedical research during the past decade. The initial development was an extension of clinical PET/CT and SPECT/CT from humans to small animals, combining the unique functional information obtained from PET and SPECT with the anatomical information provided by CT in registered multi-modality images. The requirements to image a mouse whose size is an order of magnitude smaller than that of a human have spurred advances in new radiation detector technologies, novel imaging system designs, and special image reconstruction and processing techniques. Examples are new detector materials and designs with high intrinsic resolution; multi-pinhole (MPH) collimator designs with much improved resolution and detection efficiency compared to conventional collimator designs in SPECT; 3D high-resolution, artifact-free MPH and sparse-view image reconstruction techniques; and iterative image reconstruction methods with system response modeling for resolution recovery and image noise reduction, giving much improved image quality. The spatial resolution of PET and SPECT improved from ∼6–12 mm to ∼1 mm a few years ago, and to sub-millimeter today. A recent commercial small-animal SPECT system has achieved a resolution of ∼0.25 mm, which surpasses that of a state-of-the-art PET system, whose resolution is limited by the positron range. More recently, multimodality small-animal (SA) PET/MRI and SPECT/MRI systems have been developed in research laboratories. Also, multi-modality SA imaging systems that include other imaging modalities such as optical and ultrasound are being actively pursued. In this presentation, we will provide a review of the development, recent advances, and future outlook of multi-modality molecular imaging of small animals. Learning Objectives: To learn about the two major multi-modality molecular imaging techniques for small animals. To learn about the spatial resolution achievable by molecular imaging systems for small animals today. To learn about the new multi-modality imaging instrumentation and techniques that are being developed. Sang Hyun Cho: X-ray fluorescence (XRF) imaging, such as x-ray fluorescence computed tomography (XFCT), offers unique capabilities for accurate identification and quantification of metals within imaged objects. As a result, it has emerged as a promising quantitative imaging modality in recent years, especially in conjunction with metal-based imaging probes. This talk will familiarize the audience with the basic principles of XRF/XFCT imaging. It will also cover the latest developments in benchtop XFCT technology. Additionally, the use of metallic nanoparticles, such as gold nanoparticles, in conjunction with benchtop XFCT will be discussed within the context of preclinical multimodal multiplexed molecular imaging. Learning Objectives: To learn the basic principles of XRF/XFCT imaging. To learn the latest advances in benchtop XFCT development for preclinical imaging. Funding support received from NIH and DOD; funding support received from GE Healthcare; funding support received from Siemens AX; patent royalties received from GE Healthcare; L. Wang, funding support: NIH, COI: Microphotoacoustics; S. Cho, funding support: NIH/NCI grant R01CA155446 and DOD/PCRP grant W81XWH-12-1-0198.
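    As a generic, first-order illustration of why XRF/XFCT is quantitative (this is a textbook-style sketch with generic symbols, not an equation taken from the record above), the K-shell fluorescence signal detected from a voxel containing a metal probe at concentration C scales approximately as
    \[
    N_{\mathrm{fluo}} \;\propto\; I_0 \, e^{-\int \mu_{\mathrm{in}}\,ds} \; C \, \sigma_{\mathrm{PE}} \, \omega_K \; e^{-\int \mu_{\mathrm{out}}\,ds'} \, \frac{\Omega}{4\pi},
    \]
    where I_0 is the incident x-ray fluence, the exponentials account for attenuation along the incident and emission paths, \sigma_{\mathrm{PE}} is the photoelectric cross section of the metal, \omega_K its fluorescence yield, and \Omega the detector solid angle. The linear dependence on C is what allows metal (e.g., gold nanoparticle) concentrations to be recovered after attenuation correction.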

  11. WE-H-206-01: Photoacoustic Tomography: Multiscale Imaging From Organelles to Patients by Ultrasonically Beating the Optical Diffusion Limit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, L.

    Lihong V. Wang: Photoacoustic tomography (PAT), combining non-ionizing optical and ultrasonic waves via the photoacoustic effect, provides in vivo multiscale functional, metabolic, and molecular imaging. Broad applications include imaging of the breast, brain, skin, esophagus, colon, vascular system, and lymphatic system in humans or animals. Light offers rich contrast but does not penetrate biological tissue in straight paths as x-rays do. Consequently, high-resolution pure optical imaging (e.g., confocal microscopy, two-photon microscopy, and optical coherence tomography) is limited to penetration within the optical diffusion limit (∼1 mm in the skin). Ultrasonic imaging, by contrast, provides fine spatial resolution but suffers from both poor contrast in early-stage tumors and strong speckle artifacts. In PAT, pulsed laser light penetrates tissue and generates a small but rapid temperature rise, which induces emission of ultrasonic waves due to thermoelastic expansion. The ultrasonic waves, which are orders of magnitude less scattered than optical waves, are then detected to form high-resolution images of optical absorption at depths up to 7 cm, conquering the optical diffusion limit. PAT is the only modality capable of imaging across the length scales of organelles, cells, tissues, and organs (up to whole-body small animals) with consistent contrast. This rapidly growing technology promises to enable multiscale biological research and accelerate translation from microscopic laboratory discoveries to macroscopic clinical practice. PAT may also hold the key to label-free early detection of cancer by in vivo quantification of hypermetabolism, the quintessential hallmark of malignancy. Learning Objectives: To understand the contrast mechanism of PAT. To understand the multiscale applications of PAT. Benjamin M. W. Tsui: Multi-modality molecular imaging instrumentation and techniques have been major developments in small-animal imaging that have contributed significantly to biomedical research during the past decade. The initial development was an extension of clinical PET/CT and SPECT/CT from humans to small animals, combining the unique functional information obtained from PET and SPECT with the anatomical information provided by CT in registered multi-modality images. The requirements to image a mouse whose size is an order of magnitude smaller than that of a human have spurred advances in new radiation detector technologies, novel imaging system designs, and special image reconstruction and processing techniques. Examples are new detector materials and designs with high intrinsic resolution; multi-pinhole (MPH) collimator designs with much improved resolution and detection efficiency compared to conventional collimator designs in SPECT; 3D high-resolution, artifact-free MPH and sparse-view image reconstruction techniques; and iterative image reconstruction methods with system response modeling for resolution recovery and image noise reduction, giving much improved image quality. The spatial resolution of PET and SPECT improved from ∼6–12 mm to ∼1 mm a few years ago, and to sub-millimeter today. A recent commercial small-animal SPECT system has achieved a resolution of ∼0.25 mm, which surpasses that of a state-of-the-art PET system, whose resolution is limited by the positron range. More recently, multimodality small-animal (SA) PET/MRI and SPECT/MRI systems have been developed in research laboratories. Also, multi-modality SA imaging systems that include other imaging modalities such as optical and ultrasound are being actively pursued. In this presentation, we will provide a review of the development, recent advances, and future outlook of multi-modality molecular imaging of small animals. Learning Objectives: To learn about the two major multi-modality molecular imaging techniques for small animals. To learn about the spatial resolution achievable by molecular imaging systems for small animals today. To learn about the new multi-modality imaging instrumentation and techniques that are being developed. Sang Hyun Cho: X-ray fluorescence (XRF) imaging, such as x-ray fluorescence computed tomography (XFCT), offers unique capabilities for accurate identification and quantification of metals within imaged objects. As a result, it has emerged as a promising quantitative imaging modality in recent years, especially in conjunction with metal-based imaging probes. This talk will familiarize the audience with the basic principles of XRF/XFCT imaging. It will also cover the latest developments in benchtop XFCT technology. Additionally, the use of metallic nanoparticles, such as gold nanoparticles, in conjunction with benchtop XFCT will be discussed within the context of preclinical multimodal multiplexed molecular imaging. Learning Objectives: To learn the basic principles of XRF/XFCT imaging. To learn the latest advances in benchtop XFCT development for preclinical imaging. Funding support received from NIH and DOD; funding support received from GE Healthcare; funding support received from Siemens AX; patent royalties received from GE Healthcare; L. Wang, funding support: NIH, COI: Microphotoacoustics; S. Cho, funding support: NIH/NCI grant R01CA155446 and DOD/PCRP grant W81XWH-12-1-0198.
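    The PAT contrast mechanism summarized above is commonly written as the standard initial-pressure relation (a textbook expression, not quoted from this record):
    \[
    p_0 = \Gamma \, \mu_a \, F,
    \]
    where p_0 is the initial acoustic pressure generated by the laser pulse, \Gamma is the dimensionless Grüneisen parameter describing thermoelastic conversion efficiency, \mu_a is the optical absorption coefficient, and F is the local optical fluence. Because the detected ultrasound amplitude is proportional to p_0, PAT maps optical absorption with ultrasonic spatial resolution.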

  12. WE-H-206-00: Advances in Preclinical Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Lihong V. Wang: Photoacoustic tomography (PAT), combining non-ionizing optical and ultrasonic waves via the photoacoustic effect, provides in vivo multiscale functional, metabolic, and molecular imaging. Broad applications include imaging of the breast, brain, skin, esophagus, colon, vascular system, and lymphatic system in humans or animals. Light offers rich contrast but does not penetrate biological tissue in straight paths as x-rays do. Consequently, high-resolution pure optical imaging (e.g., confocal microscopy, two-photon microscopy, and optical coherence tomography) is limited to penetration within the optical diffusion limit (∼1 mm in the skin). Ultrasonic imaging, by contrast, provides fine spatial resolution but suffers from both poor contrast in early-stage tumors and strong speckle artifacts. In PAT, pulsed laser light penetrates tissue and generates a small but rapid temperature rise, which induces emission of ultrasonic waves due to thermoelastic expansion. The ultrasonic waves, which are orders of magnitude less scattered than optical waves, are then detected to form high-resolution images of optical absorption at depths up to 7 cm, conquering the optical diffusion limit. PAT is the only modality capable of imaging across the length scales of organelles, cells, tissues, and organs (up to whole-body small animals) with consistent contrast. This rapidly growing technology promises to enable multiscale biological research and accelerate translation from microscopic laboratory discoveries to macroscopic clinical practice. PAT may also hold the key to label-free early detection of cancer by in vivo quantification of hypermetabolism, the quintessential hallmark of malignancy. Learning Objectives: To understand the contrast mechanism of PAT. To understand the multiscale applications of PAT. Benjamin M. W. Tsui: Multi-modality molecular imaging instrumentation and techniques have been major developments in small-animal imaging that have contributed significantly to biomedical research during the past decade. The initial development was an extension of clinical PET/CT and SPECT/CT from humans to small animals, combining the unique functional information obtained from PET and SPECT with the anatomical information provided by CT in registered multi-modality images. The requirements to image a mouse whose size is an order of magnitude smaller than that of a human have spurred advances in new radiation detector technologies, novel imaging system designs, and special image reconstruction and processing techniques. Examples are new detector materials and designs with high intrinsic resolution; multi-pinhole (MPH) collimator designs with much improved resolution and detection efficiency compared to conventional collimator designs in SPECT; 3D high-resolution, artifact-free MPH and sparse-view image reconstruction techniques; and iterative image reconstruction methods with system response modeling for resolution recovery and image noise reduction, giving much improved image quality. The spatial resolution of PET and SPECT improved from ∼6–12 mm to ∼1 mm a few years ago, and to sub-millimeter today. A recent commercial small-animal SPECT system has achieved a resolution of ∼0.25 mm, which surpasses that of a state-of-the-art PET system, whose resolution is limited by the positron range. More recently, multimodality small-animal (SA) PET/MRI and SPECT/MRI systems have been developed in research laboratories. Also, multi-modality SA imaging systems that include other imaging modalities such as optical and ultrasound are being actively pursued. In this presentation, we will provide a review of the development, recent advances, and future outlook of multi-modality molecular imaging of small animals. Learning Objectives: To learn about the two major multi-modality molecular imaging techniques for small animals. To learn about the spatial resolution achievable by molecular imaging systems for small animals today. To learn about the new multi-modality imaging instrumentation and techniques that are being developed. Sang Hyun Cho: X-ray fluorescence (XRF) imaging, such as x-ray fluorescence computed tomography (XFCT), offers unique capabilities for accurate identification and quantification of metals within imaged objects. As a result, it has emerged as a promising quantitative imaging modality in recent years, especially in conjunction with metal-based imaging probes. This talk will familiarize the audience with the basic principles of XRF/XFCT imaging. It will also cover the latest developments in benchtop XFCT technology. Additionally, the use of metallic nanoparticles, such as gold nanoparticles, in conjunction with benchtop XFCT will be discussed within the context of preclinical multimodal multiplexed molecular imaging. Learning Objectives: To learn the basic principles of XRF/XFCT imaging. To learn the latest advances in benchtop XFCT development for preclinical imaging. Funding support received from NIH and DOD; funding support received from GE Healthcare; funding support received from Siemens AX; patent royalties received from GE Healthcare; L. Wang, funding support: NIH, COI: Microphotoacoustics; S. Cho, funding support: NIH/NCI grant R01CA155446 and DOD/PCRP grant W81XWH-12-1-0198.

  13. XML-based scripting of multimodality image presentations in multidisciplinary clinical conferences

    NASA Astrophysics Data System (ADS)

    Ratib, Osman M.; Allada, Vivekanand; Dahlbom, Magdalena; Marcus, Phillip; Fine, Ian; Lapstra, Lorelle

    2002-05-01

    We developed multi-modality image presentation software for the display and analysis of images and related data from different imaging modalities. The software is part of a cardiac image review and presentation platform that supports integration of digital images and data from digital and analog media such as videotapes, analog x-ray films, and 35 mm cine films. The software supports standard DICOM image files as well as AVI and PDF data formats. The system is integrated into a digital conferencing room that includes projections of digital and analog sources, remote videoconferencing capabilities, and an electronic whiteboard. The goal of this pilot project is to: 1) develop a new paradigm for image and data management for presentation in a clinically meaningful sequence adapted to case-specific scenarios, 2) design and implement a multi-modality review and conferencing workstation using component technology and a customizable 'plug-in' architecture to support complex review and diagnostic tasks applicable to all cardiac imaging modalities, and 3) develop an XML-based scripting model of image and data presentation for clinical review and decision making during routine clinical tasks and multidisciplinary clinical conferences.
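    Because the record describes an XML-based scripting model for presentation sequences without giving its schema, the sketch below is purely illustrative: the element and attribute names (presentation, step, modality, file, caption) are hypothetical, and Python's standard xml.etree.ElementTree is used only to show how such a script could be generated programmatically.

# Illustrative only: the XML schema below is hypothetical and is not the one
# described in the record above.
import xml.etree.ElementTree as ET

def build_presentation_script(case_id, steps):
    """Build a minimal presentation script listing images to show in order."""
    root = ET.Element("presentation", attrib={"case": case_id})
    for order, (modality, filename, caption) in enumerate(steps, start=1):
        step = ET.SubElement(root, "step",
                             attrib={"order": str(order), "modality": modality})
        ET.SubElement(step, "file").text = filename
        ET.SubElement(step, "caption").text = caption
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    script = build_presentation_script(
        "cardiac-001",
        [("DICOM", "echo_4ch.dcm", "Apical four-chamber view"),
         ("AVI", "cath_cine.avi", "Coronary angiogram cine loop"),
         ("PDF", "report.pdf", "Prior catheterization report")])
    print(script)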

  14. Use of anomalous thermal imaging effects for multi-mode systems control during crystal growth

    NASA Technical Reports Server (NTRS)

    Wargo, Michael J.

    1989-01-01

    Real-time image processing techniques, combined with multitasking computational capabilities, are used to establish thermal imaging as a multimode sensor for systems control during crystal growth. Whereas certain regions of the high-temperature scene are presently unusable for quantitative determination of temperature, the anomalous information thus obtained is found to serve as a potentially low-noise source of other important systems control output. Using this approach, the light emission/reflection characteristics of the crystal, meniscus, and melt system are used to infer the crystal diameter, and a linear regression algorithm is employed to determine the local diameter trend. These data are utilized as input for closed-loop control of crystal shape. No performance penalty in thermal imaging speed is paid for this added functionality. The approach to secondary (diameter) sensor design and the systems control structure are discussed. Preliminary experimental results are presented.
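    As a minimal sketch of the diameter-trend step described above (the sampling interval, controller gain, and the idea of correcting the pull rate are illustrative assumptions, not details taken from the record), a linear regression over recent diameter estimates yields a local growth-rate trend that can drive a simple proportional correction:

import numpy as np

def diameter_trend(times_s, diameters_mm):
    """Least-squares slope (mm/s) of recent diameter estimates."""
    slope, _intercept = np.polyfit(times_s, diameters_mm, deg=1)
    return slope

def pull_rate_correction(trend_mm_per_s, target_trend=0.0, gain=0.5):
    """Proportional correction based on the diameter trend.
    Gain and sign convention are illustrative, not from the record."""
    return gain * (trend_mm_per_s - target_trend)

# Example: ten diameter estimates sampled once per second from thermal images.
t = np.arange(10.0)
d = 50.0 + 0.02 * t + np.random.normal(0, 0.01, size=t.size)
print(pull_rate_correction(diameter_trend(t, d)))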

  15. Live animal myelin histomorphometry of the spinal cord with video-rate multimodal nonlinear microendoscopy

    NASA Astrophysics Data System (ADS)

    Bélanger, Erik; Crépeau, Joël; Laffray, Sophie; Vallée, Réal; De Koninck, Yves; Côté, Daniel

    2012-02-01

    In vivo imaging of cellular dynamics can be dramatically enabling for understanding the pathophysiology of nervous system diseases. To fully exploit the power of this approach, the main challenges have been to minimize invasiveness and maximize the number of concurrent optical signals that can be combined to probe the interplay between multiple cellular processes. Label-free coherent anti-Stokes Raman scattering (CARS) microscopy, for example, can be used to follow demyelination in neurodegenerative diseases or after trauma, but myelin imaging alone is not sufficient to understand the complex sequence of events that leads to the appearance of lesions in the white matter. A commercially available microendoscope is used here to achieve minimally invasive, video-rate multimodal nonlinear imaging of cellular processes in the live mouse spinal cord. The system allows for simultaneous CARS imaging of myelin sheaths and two-photon excitation fluorescence microendoscopy of microglial cells and axons. Morphometric data extraction at high spatial resolution is also described, along with a technique for reducing motion-related imaging artifacts. Despite its small diameter, the microendoscope enables high-speed multimodal imaging over wide areas of tissue, yet at a resolution sufficient to quantify subtle differences in myelin thickness and microglial motility.

  16. Development of ClearPEM-Sonic, a multimodal mammography system for PET and Ultrasound

    NASA Astrophysics Data System (ADS)

    Cucciati, G.; Auffray, E.; Bugalho, R.; Cao, L.; Di Vara, N.; Farina, F.; Felix, N.; Frisch, B.; Ghezzi, A.; Juhan, V.; Jun, D.; Lasaygues, P.; Lecoq, P.; Mensah, S.; Mundler, O.; Neves, J.; Paganoni, M.; Peter, J.; Pizzichemi, M.; Siles, P.; Silva, J. C.; Silva, R.; Tavernier, S.; Tessonnier, L.; Varela, J.

    2014-03-01

    ClearPEM-Sonic is an innovative imaging device specifically developed for breast cancer imaging. Working in PEM-ultrasound multimodality makes it possible to obtain both metabolic and morphological information, increasing the specificity of the exam. The ClearPEM detector was developed to maximize sensitivity and spatial resolution compared to whole-body PET scanners. It is coupled with a 3D ultrasound system, the SuperSonic Imagine Aixplorer, which improves the specificity of the exam by providing a tissue elasticity map. This work describes the ClearPEM-Sonic project, focusing on the technological developments it has required, its technical merits (and limits), and the first multimodal images acquired on a dedicated phantom. Finally, it presents selected clinical case studies that confirm the value of PEM information.

  17. Water-stable NaLuF4-based upconversion nanophosphors with long-term validity for multimodal lymphatic imaging.

    PubMed

    Zhou, Jing; Zhu, Xingjun; Chen, Min; Sun, Yun; Li, Fuyou

    2012-09-01

    Multimodal imaging is rapidly becoming an important tool for biomedical applications because it can compensate for the deficiencies of individual imaging modalities. Herein, multifunctional NaLuF(4)-based upconversion nanoparticles (Lu-UCNPs) were synthesized through a facile one-step microemulsion method under ambient conditions. The doping of lanthanide ions (Gd(3+), Yb(3+) and Er(3+)/Tm(3+)) endows the Lu-UCNPs with high T(1)-enhancement, bright upconversion luminescence (UCL) emissions, and an excellent X-ray absorption coefficient. Moreover, the as-prepared Lu-UCNPs are stable in water for more than six months, due to the protection of sodium glutamate and diethylene triamine pentaacetic acid (DTPA) coordinating ligands on the surface. Lu-UCNPs have been successfully applied to trimodal CT/MR/UCL lymphatic imaging in small animal models. It is worth noting that Lu-UCNPs could be used for imaging even after storage for over six months. In vitro transmission electron microscopy (TEM), methyl thiazolyl tetrazolium (MTT) assays, and histological analysis demonstrated that Lu-UCNPs exhibited low toxicity to living systems. Therefore, Lu-UCNPs could serve as multimodal agents for CT/MR/UCL imaging, and the concept can serve as a platform technology for the next generation of probes for multimodal imaging. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Ex vivo catheter-based imaging of coronary atherosclerosis using multimodality OCT and NIRAF excited at 633 nm

    PubMed Central

    Wang, Hao; Gardecki, Joseph A.; Ughi, Giovanni J.; Jacques, Paulino Vacas; Hamidi, Ehsan; Tearney, Guillermo J.

    2015-01-01

    While optical coherence tomography (OCT) has been shown to be capable of imaging coronary plaque microstructure, additional chemical/molecular information may be needed in order to determine which lesions are at risk of causing an acute coronary event. In this study, we used a recently developed imaging system and double-clad fiber (DCF) catheter capable of simultaneously acquiring both OCT and red-excited near-infrared autofluorescence (NIRAF) images (excitation: 633 nm, emission: 680 nm to 900 nm). We found that NIRAF is elevated in lesions that contain necrotic core, a feature that is critical for vulnerable plaque diagnosis and that is not readily discriminated by OCT alone. We first utilized a DCF ball lens probe and a benchtop setup to acquire en face NIRAF images of aortic plaques ex vivo (n = 20). In addition, we used the OCT-NIRAF system and fully assembled catheters to acquire multimodality images from human coronary arteries (n = 15) prosected from human cadaver hearts (n = 5). Comparison of these images with corresponding histology demonstrated that necrotic core plaques exhibited significantly higher NIRAF intensity than other plaque types. These results suggest that multimodality intracoronary OCT-NIRAF imaging technology may be used in the future to provide improved characterization of coronary artery disease in human patients. PMID:25909020

  19. Multimodal imaging system for dental caries detection

    NASA Astrophysics Data System (ADS)

    Liang, Rongguang; Wong, Victor; Marcus, Michael; Burns, Peter; McLaughlin, Paul

    2007-02-01

    Dental caries is a disease in which minerals of the tooth are dissolved by surrounding bacterial plaques. A caries process present for some time may result in a caries lesion. However, if it is detected early enough, the dentist and dental professionals can implement measures to reverse and control caries. Several optical, nonionizing methods have been investigated and used to detect dental caries in its early stages. However, no single method can detect the caries process with both high sensitivity and high specificity. In this paper, we present a multimodal imaging system that combines visible reflectance, fluorescence, and Optical Coherence Tomography (OCT) imaging. This imaging system is designed to obtain one or more two-dimensional images of the tooth (reflectance and fluorescence images) and a three-dimensional OCT image providing depth and size information for the caries. The combination of two- and three-dimensional images of the tooth has the potential for highly sensitive and specific detection of dental caries.

  20. Magnetic Nanoparticles for Multi-Imaging and Drug Delivery

    PubMed Central

    Lee, Jae-Hyun; Kim, Ji-wook; Cheon, Jinwoo

    2013-01-01

    Various biomedical applications of magnetic nanoparticles have been explored during the past few decades. As tools that hold great potential for advancing biological sciences, magnetic nanoparticles have been used as platform materials for enhanced magnetic resonance imaging (MRI) agents, biological separation and magnetic drug delivery systems, and magnetic hyperthermia treatment. Furthermore, approaches that integrate various imaging and bioactive moieties have been used in the design of multi-modality systems, which possess synergistically enhanced properties such as better imaging resolution and sensitivity, molecular recognition capabilities, stimulus-responsive drug delivery with on-demand control, and spatio-temporally controlled cell signal activation. Below, recent studies that focus on the design and synthesis of multi-mode magnetic nanoparticles will be briefly reviewed, and their potential applications in the imaging and therapy areas will also be discussed. PMID:23579479

  1. Multi-modal diffuse optical techniques for breast cancer neoadjuvant chemotherapy monitoring (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Cochran, Jeffrey M.; Busch, David R.; Ban, Han Y.; Kavuri, Venkaiah C.; Schweiger, Martin J.; Arridge, Simon R.; Yodh, Arjun G.

    2017-02-01

    We present high-spatial-density, multi-modal, parallel-plate Diffuse Optical Tomography (DOT) imaging systems for breast tumor detection. One hybrid instrument provides time-domain (TD) and continuous-wave (CW) DOT at 64 source fiber positions. The TD diffuse optical spectroscopy with PMT detection produces low-resolution images of absolute tissue scattering and absorption, while the spatially dense array of CCD-coupled detector fibers (108 detectors) provides higher-resolution CW images of relative tissue optical properties. Reconstruction of the tissue optical properties, along with total hemoglobin concentration and tissue oxygen saturation, is performed using the TOAST software suite. Comparison of the spatially dense DOT images and MR images allows for a robust validation of DOT against an accepted clinical modality. Additionally, the structural information from co-registered MR images is used as a spatial prior to improve the quality of the functional optical images and provide more accurate quantification of the optical and hemodynamic properties of tumors. We also present an optical-only imaging system that provides frequency-domain (FD) DOT at 209 source positions with full CCD detection and incorporates optical fringe projection profilometry to determine the breast boundary. This profilometry serves as a spatial constraint, improving the quality of the DOT reconstructions while retaining the benefits of an optical-only device. We present initial images from both human subjects and phantoms to display the utility of high spatial density data and multi-modal information in DOT reconstruction with the two systems.
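    A heavily simplified sketch of how an anatomical prior can be folded into a DOT reconstruction is shown below. It is a generic linearized Tikhonov inversion with a region-weighted penalty, not the authors' TOAST-based pipeline, and the Jacobian, data vector, and region labels are placeholders.

import numpy as np

def reconstruct_with_prior(J, y, region_labels, lam=1e-2):
    """Linearized DOT update: solve min ||y - J x||^2 + lam ||L x||^2,
    where L penalizes deviation from the mean within each MR-derived region.
    Placeholder inputs; not the authors' TOAST-based reconstruction."""
    n = J.shape[1]
    L = np.eye(n)
    for label in np.unique(region_labels):
        idx = np.where(region_labels == label)[0]
        # Soft prior: couple voxels belonging to the same anatomical region.
        L[np.ix_(idx, idx)] -= 1.0 / idx.size
    A = J.T @ J + lam * (L.T @ L)
    return np.linalg.solve(A, J.T @ y)

# Toy example: 5 measurements, 6 voxels in two MR-derived regions.
rng = np.random.default_rng(0)
J = rng.normal(size=(5, 6))
x_true = np.array([1.0, 1.0, 1.0, 0.2, 0.2, 0.2])
y = J @ x_true
print(reconstruct_with_prior(J, y, np.array([0, 0, 0, 1, 1, 1])))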

  2. Magnetic nanobubbles with potential for targeted drug delivery and trimodal imaging in breast cancer: an in vitro study.

    PubMed

    Song, Weixiang; Luo, Yindeng; Zhao, Yajing; Liu, Xinjie; Zhao, Jiannong; Luo, Jie; Zhang, Qunxia; Ran, Haitao; Wang, Zhigang; Guo, Dajing

    2017-05-01

    The aim of this study was to improve tumor-targeted therapy for breast cancer by designing magnetic nanobubbles with the potential for targeted drug delivery and multimodal imaging. Herceptin-decorated and ultrasmall superparamagnetic iron oxide (USPIO)/paclitaxel (PTX)-embedded nanobubbles (PTX-USPIO-HER-NBs) were manufactured by combining a modified double-emulsion evaporation process with a carbodiimide technique. PTX-USPIO-HER-NBs were characterized and examined for specific cell-targeting ability and multimodal imaging. PTX-USPIO-HER-NBs exhibited excellent entrapment efficiency of Herceptin/PTX/USPIO and showed greater cytotoxic effects than other delivery platforms. Low-frequency ultrasound triggered accelerated PTX release. Moreover, the magnetic nanobubbles were able to enhance trimodal ultrasound, magnetic resonance, and photoacoustic imaging. These results suggest that PTX-USPIO-HER-NBs have potential as a multimodal contrast agent and as a system for ultrasound-triggered drug release in breast cancer.

  3. Intra-operative label-free multimodal multiphoton imaging of breast cancer margins and microenvironment (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Sun, Yi; You, Sixian; Tu, Haohua; Spillman, Darold R.; Marjanovic, Marina; Chaney, Eric J.; Liu, George Z.; Ray, Partha S.; Higham, Anna; Boppart, Stephen A.

    2017-02-01

    Label-free multi-photon imaging has been a powerful tool for studying tissue microstructures and biochemical distributions, particularly for investigating tumors and their microenvironments. However, it remains challenging for traditional bench-top multi-photon microscope systems to conduct ex vivo tumor tissue imaging in the operating room due to their bulky setups and laser sources. In this study, we designed, built, and clinically demonstrated a portable multi-modal nonlinear label-free microscope system that combined four modalities: two- and three-photon fluorescence for studying the distributions of FAD and NADH, and second and third harmonic generation for collagen fiber structures and the distribution of microvesicles found in tumors and their microenvironment. Optical realignment and switching between modalities were motorized for more rapid and efficient imaging and to allow a light-tight enclosure, reducing ambient light noise to only 5% within the brightly lit operating room. Using up to 20 mW of laser power after a 20x objective, this system can acquire multi-modal sets of images over a 600 μm × 600 μm field in 60 seconds using galvo-mirror scanning. This portable microscope system was demonstrated in the operating room for imaging fresh, resected, unstained breast tissue specimens, and for assessing tumor margins and the tumor microenvironment. This real-time label-free nonlinear imaging system has the potential to uniquely characterize breast cancer margins and the microenvironment of tumors to intraoperatively identify structural, functional, and molecular changes that could indicate the aggressiveness of the tumor.

  4. Intraoperative imaging-guided cancer surgery: from current fluorescence molecular imaging methods to future multi-modality imaging technology.

    PubMed

    Chi, Chongwei; Du, Yang; Ye, Jinzuo; Kou, Deqiang; Qiu, Jingdan; Wang, Jiandong; Tian, Jie; Chen, Xiaoyuan

    2014-01-01

    Cancer is a major threat to human health. Diagnosis and treatment using precision medicine are expected to be an effective approach for preventing the initiation and progression of cancer. Although anatomical and functional imaging techniques such as radiography, computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET) have played an important role in accurate preoperative diagnostics, for the most part these techniques cannot be applied intraoperatively. Optical molecular imaging is a promising technique that provides a high degree of sensitivity and specificity in tumor margin detection. Furthermore, existing clinical applications have proven that optical molecular imaging is a powerful intraoperative tool for guiding surgeons performing precision procedures, thus enabling radical resection and improved survival rates. However, a detection depth limitation exists in optical molecular imaging methods, and further breakthroughs from optical to multi-modality intraoperative imaging methods are needed to develop more extensive and comprehensive intraoperative applications. Here, we review the current intraoperative optical molecular imaging technologies, focusing on contrast agents and surgical navigation systems, and then discuss the future prospects of multi-modality imaging technology for intraoperative imaging-guided cancer surgery.

  5. Intraoperative Imaging-Guided Cancer Surgery: From Current Fluorescence Molecular Imaging Methods to Future Multi-Modality Imaging Technology

    PubMed Central

    Chi, Chongwei; Du, Yang; Ye, Jinzuo; Kou, Deqiang; Qiu, Jingdan; Wang, Jiandong; Tian, Jie; Chen, Xiaoyuan

    2014-01-01

    Cancer is a major threat to human health. Diagnosis and treatment using precision medicine are expected to be an effective approach for preventing the initiation and progression of cancer. Although anatomical and functional imaging techniques such as radiography, computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET) have played an important role in accurate preoperative diagnostics, for the most part these techniques cannot be applied intraoperatively. Optical molecular imaging is a promising technique that provides a high degree of sensitivity and specificity in tumor margin detection. Furthermore, existing clinical applications have proven that optical molecular imaging is a powerful intraoperative tool for guiding surgeons performing precision procedures, thus enabling radical resection and improved survival rates. However, a detection depth limitation exists in optical molecular imaging methods, and further breakthroughs from optical to multi-modality intraoperative imaging methods are needed to develop more extensive and comprehensive intraoperative applications. Here, we review the current intraoperative optical molecular imaging technologies, focusing on contrast agents and surgical navigation systems, and then discuss the future prospects of multi-modality imaging technology for intraoperative imaging-guided cancer surgery. PMID:25250092

  6. Introduction of a standardized multimodality image protocol for navigation-guided surgery of suspected low-grade gliomas.

    PubMed

    Mert, Aygül; Kiesel, Barbara; Wöhrer, Adelheid; Martínez-Moreno, Mauricio; Minchev, Georgi; Furtner, Julia; Knosp, Engelbert; Wolfsberger, Stefan; Widhalm, Georg

    2015-01-01

    OBJECT Surgery of suspected low-grade gliomas (LGGs) poses a special challenge for neurosurgeons due to their diffusely infiltrative growth and histopathological heterogeneity. Consequently, neuronavigation with multimodality imaging data, such as structural and metabolic data, fiber tracking, and 3D brain visualization, has been proposed to optimize surgery. However, currently no standardized protocol has been established for multimodality imaging data in modern glioma surgery. The aim of this study was therefore to define a specific protocol for multimodality imaging and navigation for suspected LGG. METHODS Fifty-one patients who underwent surgery for a diffusely infiltrating glioma with nonsignificant contrast enhancement on MRI and available multimodality imaging data were included. In the first 40 patients with glioma, the authors retrospectively reviewed the imaging data, including structural MRI (contrast-enhanced T1-weighted, T2-weighted, and FLAIR sequences), metabolic images derived from PET or MR spectroscopy chemical shift imaging, fiber tracking, and 3D brain surface/vessel visualization, to define standardized image settings and specific indications for each imaging modality. The feasibility and surgical relevance of this new protocol were subsequently prospectively investigated during surgery with the assistance of an advanced electromagnetic navigation system in the remaining 11 patients. Furthermore, specific surgical outcome parameters, including the extent of resection, histological analysis of the metabolic hotspot, presence of a new postoperative neurological deficit, and intraoperative accuracy of 3D brain visualization models, were assessed in each of these patients. RESULTS After reviewing these first 40 cases of glioma, the authors defined a specific protocol with standardized image settings and specific indications that allows for optimal and simultaneous visualization of structural and metabolic data, fiber tracking, and 3D brain visualization. This new protocol was feasible and was estimated to be surgically relevant during navigation-guided surgery in all 11 patients. According to the authors' predefined surgical outcome parameters, they observed a complete resection in all resectable gliomas (n = 5) by using contour visualization with T2-weighted or FLAIR images. Additionally, tumor tissue derived from the metabolic hotspot showed the presence of malignant tissue in all WHO Grade III or IV gliomas (n = 5). Moreover, no permanent postoperative neurological deficits occurred in any of these patients, and fiber tracking and/or intraoperative monitoring were applied during surgery in the vast majority of cases (n = 10). Furthermore, the authors found a significant intraoperative topographical correlation of 3D brain surface and vessel models with gyral anatomy and superficial vessels. Finally, real-time navigation with multimodality imaging data using the advanced electromagnetic navigation system was found to be useful for precise guidance to surgical targets, such as the tumor margin or the metabolic hotspot. CONCLUSIONS In this study, the authors defined a specific protocol for multimodality imaging data in suspected LGGs, and they propose the application of this new protocol for advanced navigation-guided procedures, optimally in conjunction with continuous electromagnetic instrument tracking, to optimize glioma surgery.

  7. Molecular brain imaging in the multimodality era

    PubMed Central

    Price, Julie C

    2012-01-01

    Multimodality molecular brain imaging encompasses in vivo visualization, evaluation, and measurement of cellular/molecular processes. Instrumentation and software developments over the past 30 years have fueled advancements in multimodality imaging platforms that enable acquisition of multiple complementary imaging outcomes by either combined sequential or simultaneous acquisition. This article provides a general overview of multimodality neuroimaging in the context of positron emission tomography as a molecular imaging tool and magnetic resonance imaging as a structural and functional imaging tool. Several image examples are provided and general challenges are discussed to exemplify complementary features of the modalities, as well as important strengths and weaknesses of combined assessments. Alzheimer's disease is highlighted, as this clinical area has been strongly impacted by multimodality neuroimaging findings that have improved understanding of the natural history of disease progression, early disease detection, and informed therapy evaluation. PMID:22434068

  8. Visual tracking for multi-modality computer-assisted image guidance

    NASA Astrophysics Data System (ADS)

    Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp

    2017-03-01

    With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support the localization of anatomical targets, the placement of imaging probes and instruments, and fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, urology, and others, many of which see increasing pressure to utilize medical imaging and especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system, consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex vivo and in vivo animal models, and patients.
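    For point-like multi-modality markers, the "instant automatic registration" step mentioned above typically reduces to a rigid least-squares alignment of corresponding 3D points. The sketch below is the standard SVD (Kabsch-style) solution under that assumption; it is not taken from the product's actual implementation, and the point sets are placeholders.

import numpy as np

def rigid_register(camera_pts, ct_pts):
    """Least-squares rigid transform (R, t) mapping camera-space marker
    centroids onto CT-space marker centroids. Standard SVD solution;
    inputs are N x 3 arrays of corresponding points."""
    mu_c = camera_pts.mean(axis=0)
    mu_x = ct_pts.mean(axis=0)
    H = (camera_pts - mu_c).T @ (ct_pts - mu_x)
    U, _S, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = mu_x - R @ mu_c
    return R, t

# Toy check with a known rotation about z and a translation.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
pts = np.random.default_rng(1).normal(size=(4, 3))
R_est, t_est = rigid_register(pts, pts @ R_true.T + np.array([5.0, -2.0, 1.0]))
print(np.allclose(R_est, R_true), np.round(t_est, 3))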

  9. How acute and chronic alcohol consumption affects brain networks: insights from multimodal neuroimaging.

    PubMed

    Schulte, Tilman; Oberlin, Brandon G; Kareken, David A; Marinkovic, Ksenija; Müller-Oehring, Eva M; Meyerhoff, Dieter J; Tapert, Susan

    2012-12-01

    Multimodal imaging combining two or more techniques is becoming increasingly important because no single imaging approach has the capacity to elucidate all clinically relevant characteristics of a network. This review highlights recent advances in multimodal neuroimaging (i.e., combined use and interpretation of data collected through magnetic resonance imaging [MRI], functional MRI, diffusion tensor imaging, positron emission tomography, magnetoencephalography, MR perfusion, and MR spectroscopy methods) that lead to a more comprehensive understanding of how acute and chronic alcohol consumption affect neural networks underlying cognition, emotion, reward processing, and drinking behavior. Several innovative investigators have started utilizing multiple imaging approaches within the same individual to better understand how alcohol influences brain systems, both during intoxication and after years of chronic heavy use. Their findings can help identify mechanism-based therapeutic and pharmacological treatment options, and they may increase the efficacy and cost effectiveness of such treatments by predicting those at greatest risk for relapse. Copyright © 2012 by the Research Society on Alcoholism.

  10. Dye-Enhanced Multimodal Confocal Imaging of Brain Cancers

    NASA Astrophysics Data System (ADS)

    Wirth, Dennis; Snuderl, Matija; Sheth, Sameer; Curry, William; Yaroslavsky, Anna

    2011-04-01

    Background and Significance: Accurate high-resolution intraoperative detection of brain tumors may result in improved patient survival and better quality of life. The goal of this study was to evaluate dye-enhanced multimodal confocal imaging for discriminating normal and cancerous brain tissue. Materials and Methods: Fresh thick brain specimens were obtained from the surgeries. Normal and cancer tissues were investigated. Samples were stained in methylene blue and imaged. Reflectance and fluorescence signals were excited at 658 nm. Fluorescence emission and polarization were registered from 670 nm to 710 nm. The system provided a lateral resolution of 0.6 μm and an axial resolution of 7 μm. Normal and cancer specimens exhibited distinctively different characteristics. H&E histopathology was processed from each imaged sample. Results and Conclusions: The analysis of normal and cancerous tissues indicated clear differences in appearance in both the reflectance and fluorescence responses. These results confirm the feasibility of multimodal confocal imaging for intraoperative detection of small cancer nests and cells.

  11. Multimode optical dermoscopy (SkinSpect) analysis for skin with melanocytic nevus

    NASA Astrophysics Data System (ADS)

    Vasefi, Fartash; MacKinnon, Nicholas; Saager, Rolf; Kelly, Kristen M.; Maly, Tyler; Chave, Robert; Booth, Nicholas; Durkin, Anthony J.; Farkas, Daniel L.

    2016-04-01

    We have developed a multimode dermoscope (SkinSpect™) capable of illuminating human skin samples in vivo with spectrally programmable, linearly polarized light at 33 wavelengths between 468 nm and 857 nm. Diffusely reflected photons are separated into collinear and cross-polarized image paths, and images are captured for each illumination wavelength. In vivo human skin nevi (N = 20) were evaluated with the multimode dermoscope, and melanin and hemoglobin concentrations were compared with Spatially Modulated Quantitative Spectroscopy (SMoQS) measurements. Both systems show low correlation between their melanin and hemoglobin concentrations, demonstrating the ability of the SkinSpect™ to separate these molecular signatures and thus act as a biologically plausible device capable of early melanoma detection.

  12. Simultaneous in vivo imaging of melanin and lipofuscin in the retina with multimodal photoacoustic ophthalmoscopy

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangyang; Zhang, Hao F.; Zhou, Lixiang; Jiao, Shuliang

    2012-02-01

    We combined photoacoustic ophthalmoscopy (PAOM) with autofluorescence imaging for simultaneous in vivo imaging of dual molecular contrasts in the retina using a single light source. The dual molecular contrasts come from melanin and lipofuscin in the retinal pigment epithelium (RPE). Melanin and lipofuscin are two types of pigments and are believed to play opposite roles (protective vs. exacerbating) in the RPE during the aging process. We successfully imaged the retinas of pigmented and albino rats at different ages. The experimental results showed that the multimodal PAOM system can be a potentially powerful tool in the study of age-related degenerative retinal diseases.

  13. [Fusion of MRI, fMRI and intraoperative MRI data. Methods and clinical significance exemplified by neurosurgical interventions].

    PubMed

    Moche, M; Busse, H; Dannenberg, C; Schulz, T; Schmitgen, A; Trantakis, C; Winkler, D; Schmidt, F; Kahn, T

    2001-11-01

    The aim of this work was to realize and clinically evaluate an image fusion platform for the integration of preoperative MRI and fMRI data into the intraoperative images of an interventional MRI system, with a focus on neurosurgical procedures. A vertically open 0.5 T MRI scanner was equipped with a dedicated navigation system enabling the registration of additional imaging modalities (MRI, fMRI, CT) with the intraoperatively acquired data sets. These merged image data served as the basis for interventional planning and multimodal navigation. So far, the system has been used in 70 neurosurgical interventions (13 of which involved image data fusion, requiring 15 minutes of extra time). The augmented navigation system is characterized by a higher frame rate and a higher image quality compared to the system-integrated navigation based on continuously acquired (near) real-time images. Patient movement and tissue shifts can be immediately detected by monitoring the morphological differences between the two navigation scenes. The multimodal image fusion allowed refined navigation planning, especially for the resection of deeply seated brain lesions or pathologies close to eloquent areas. Augmented intraoperative orientation and instrument guidance improve the safety and accuracy of neurosurgical interventions.

  14. Resolution and throughput optimized intraoperative spectrally encoded coherence tomography and reflectometry (iSECTR) for multimodal imaging during ophthalmic microsurgery

    NASA Astrophysics Data System (ADS)

    Malone, Joseph D.; El-Haddad, Mohamed T.; Leeburg, Kelsey C.; Terrones, Benjamin D.; Tao, Yuankai K.

    2018-02-01

    Limited visualization of semi-transparent structures in the eye remains a critical barrier to improving clinical outcomes and developing novel surgical techniques. While increases in imaging speed have enabled intraoperative optical coherence tomography (iOCT) imaging of surgical dynamics, several critical barriers to clinical adoption remain. Specifically, these include (1) static fields-of-view (FOVs) requiring manual instrument-tracking; (2) high frame-rates requiring sparse sampling, which limits FOV; and (3) the small iOCT FOV, which also limits the ability to co-register data with surgical microscopy. We previously addressed these limitations in image-guided ophthalmic microsurgery by developing microscope-integrated multimodal intraoperative swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography. Complementary en face images enabled orientation and co-registration with the widefield surgical microscope view, while OCT imaging enabled depth-resolved visualization of surgical instrument positions relative to anatomic structures-of-interest. In addition, we demonstrated novel integrated segmentation overlays for augmented-reality surgical guidance. Unfortunately, our previous system lacked the resolution and optical throughput for in vivo retinal imaging and necessitated removal of the cornea and lens. These limitations were predominantly a result of optical aberrations from imaging through a shared surgical microscope objective lens, which was modeled as a paraxial surface. Here, we present an optimized intraoperative spectrally encoded coherence tomography and reflectometry (iSECTR) system. We use a novel lens characterization method to develop an accurate model of surgical microscope objective performance and balance out inherent aberrations using iSECTR relay optics. Using this system, we demonstrate in vivo multimodal ophthalmic imaging through a surgical microscope.

  15. Exogenous Molecular Probes for Targeted Imaging in Cancer: Focus on Multi-modal Imaging

    PubMed Central

    Joshi, Bishnu P.; Wang, Thomas D.

    2010-01-01

    Cancer is one of the major causes of mortality and morbidity in our healthcare system. Molecular imaging is an emerging methodology for the early detection of cancer, guidance of therapy, and monitoring of response. The development of new instruments and exogenous molecular probes that can be labeled for multi-modality imaging is critical to this process. Today, molecular imaging is at a crossroads, and new targeted imaging agents are expected to broadly expand our ability to detect and manage cancer. This integrated imaging strategy will permit clinicians not only to localize lesions within the body but also to manage their therapy by visualizing the expression and activity of specific molecules. This information is expected to have a major impact on drug development and understanding of basic cancer biology. At this time, a number of molecular probes have been developed by conjugating various labels to affinity ligands for targeting in different imaging modalities. This review will describe the current status of exogenous molecular probes for optical, scintigraphic, MRI and ultrasound imaging platforms. Furthermore, we will also shed light on how these techniques can be used synergistically in multi-modal platforms and how these techniques are being employed in current research. PMID:22180839

  16. Concept for Classifying Facade Elements Based on Material, Geometry and Thermal Radiation Using Multimodal Uav Remote Sensing

    NASA Astrophysics Data System (ADS)

    Ilehag, R.; Schenk, A.; Hinz, S.

    2017-08-01

    This paper presents a concept for the classification of facade elements based on the material and geometry of the elements, in addition to the thermal radiation of the facade, using a multimodal Unmanned Aerial Vehicle (UAV) system. Once the concept is finalized and functional, the workflow can be used for building energy demand estimation by exploiting existing methods for estimating the heat transfer coefficient and the transmitted heat loss. The multimodal system consists of a thermal, a hyperspectral, and an optical sensor, and can be operated from a UAV. The challenges of dealing with sensors that operate in different spectral ranges and have different technical specifications, such as radiometric and geometric resolution, are presented. Different approaches to data fusion are addressed, such as image registration, generation of 3D models by image matching, and classification based on either the geometry of the object or the pixel values. As a first step towards realizing the concept, the result of a geometric calibration with a designed multimodal calibration pattern is presented.
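    For context on the energy-demand use case mentioned above, the transmitted heat loss through a facade element is conventionally estimated with the standard building-physics relation (not a formula given in the paper):
    \[
    \dot{Q}_T = U \cdot A \cdot (T_{\mathrm{in}} - T_{\mathrm{out}}),
    \]
    where U is the heat transfer coefficient of the element, A is its area (obtainable from the reconstructed geometry), and T_in and T_out are the indoor and outdoor temperatures; classifying facade elements by material and geometry supplies the inputs for A and U.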

  17. Multimode C-arm fluoroscopy, tomosynthesis, and cone-beam CT for image-guided interventions: from proof of principle to patient protocols

    NASA Astrophysics Data System (ADS)

    Siewerdsen, J. H.; Daly, M. J.; Bachar, G.; Moseley, D. J.; Bootsma, G.; Brock, K. K.; Ansell, S.; Wilson, G. A.; Chhabra, S.; Jaffray, D. A.; Irish, J. C.

    2007-03-01

    High-performance intraoperative imaging is essential to an ever-expanding scope of therapeutic procedures ranging from tumor surgery to interventional radiology. The need for precise visualization of bony and soft-tissue structures with minimal obstruction to the therapy setup presents challenges and opportunities in the development of novel imaging technologies specifically for image-guided procedures. Over the past ~5 years, a mobile C-arm has been modified in collaboration with Siemens Medical Solutions for 3D imaging. Based upon a Siemens PowerMobil, the device includes: a flat-panel detector (Varian PaxScan 4030CB); a motorized orbit; a system for geometric calibration; integration with real-time tracking and navigation (NDI Polaris); and a computer control system for multi-mode fluoroscopy, tomosynthesis, and cone-beam CT. Investigation of 3D imaging performance (noise-equivalent quanta), image quality (human observer studies), and image artifacts (scatter, truncation, and cone-beam artifacts) has driven the development of imaging techniques appropriate to a host of image-guided interventions. Multi-mode functionality presents a valuable spectrum of acquisition techniques: i.) fluoroscopy for real-time 2D guidance; ii.) limited-angle tomosynthesis for fast 3D imaging (e.g., ~10 sec acquisition of coronal slices containing the surgical target); and iii.) fully 3D cone-beam CT (e.g., ~30-60 sec acquisition providing bony and soft-tissue visualization across the field of view). Phantom and cadaver studies clearly indicate the potential for improved surgical performance - up to a factor of 2 increase in challenging surgical target excisions. The C-arm system is currently being deployed in patient protocols ranging from brachytherapy to chest, breast, spine, and head and neck surgery.

  18. Multimodal Discourse Analysis of the Movie "Argo"

    ERIC Educational Resources Information Center

    Bo, Xu

    2018-01-01

    Based on multimodal discourse theory, this paper makes a multimodal discourse analysis of some shots in the movie "Argo" from the perspective of context of culture, context of situation and meaning of image. Results show that this movie constructs multimodal discourse through particular context, language and image, and successfully…

  19. Multimodal Imaging of Human Brain Activity: Rational, Biophysical Aspects and Modes of Integration

    PubMed Central

    Blinowska, Katarzyna; Müller-Putz, Gernot; Kaiser, Vera; Astolfi, Laura; Vanderperren, Katrien; Van Huffel, Sabine; Lemieux, Louis

    2009-01-01

    Until relatively recently the vast majority of imaging and electrophysiological studies of human brain activity have relied on single-modality measurements usually correlated with readily observable or experimentally modified behavioural or brain state patterns. Multi-modal imaging is the concept of bringing together observations or measurements from different instruments. We discuss the aims of multi-modal imaging and the ways in which it can be accomplished using representative applications. Given the importance of haemodynamic and electrophysiological signals in current multi-modal imaging applications, we also review some of the basic physiology relevant to understanding their relationship. PMID:19547657

  20. A multimodal 3D framework for fire characteristics estimation

    NASA Astrophysics Data System (ADS)

    Toulouse, T.; Rossi, L.; Akhloufi, M. A.; Pieri, A.; Maldague, X.

    2018-02-01

    In the last decade, we have witnessed an increasing interest in using computer vision and image processing in forest fire research. Image processing techniques have been successfully used in different fire analysis areas such as early detection, monitoring, modeling, and fire front characteristics estimation. While the majority of the work deals with the use of 2D visible spectrum images, recent work has introduced the use of 3D vision in this field. This work proposes a new multimodal vision framework permitting the extraction of the three-dimensional geometrical characteristics of fires captured by multiple 3D vision systems. The 3D system is a multispectral stereo system operating in both the visible and near-infrared (NIR) spectral bands. The framework supports the use of multiple stereo pairs positioned so as to capture complementary views of the fire front during its propagation. Multimodal registration is conducted using the captured views in order to build a complete 3D model of the fire front. The registration process is achieved using multisensory fusion based on visual data (2D and NIR images), GPS positions, and IMU inertial data. Experiments were conducted outdoors in order to show the performance of the proposed framework. The obtained results are promising and show the potential of using the proposed framework in operational scenarios for wildland fire research and as a decision management system in fire fighting.
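    The geometric core of each stereo pair described above can be summarized by the standard rectified-pair depth relation (a textbook formula, not one quoted from the paper):
    \[
    Z = \frac{f\,B}{d},
    \]
    where Z is the distance to a fire-front point, f is the focal length in pixels, B is the stereo baseline, and d is the disparity between matched points; the triangulated points from each pair are then fused into a single 3D fire-front model using the GPS and IMU pose data mentioned in the abstract.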

  1. Simulation of brain tumors in MR images for evaluation of segmentation efficacy.

    PubMed

    Prastawa, Marcel; Bullitt, Elizabeth; Gerig, Guido

    2009-04-01

    Obtaining validation data and comparison metrics for segmentation of magnetic resonance images (MRI) are difficult tasks due to the lack of reliable ground truth. This problem is even more evident for images presenting pathology, which can both alter tissue appearance through infiltration and cause geometric distortions. Systems for generating synthetic images with user-defined degradation by noise and intensity inhomogeneity offer the possibility for testing and comparison of segmentation methods. Such systems do not yet offer simulation of sufficiently realistic-looking pathology. This paper presents a system that combines physical and statistical modeling to generate synthetic multi-modal 3D brain MRI with tumor and edema, along with the underlying anatomical ground truth. Main emphasis is placed on simulation of the major effects known for tumor MRI, such as contrast enhancement, local distortion of healthy tissue, infiltrating edema adjacent to tumors, destruction and deformation of fiber tracts, and multi-modal MRI contrast of healthy tissue and pathology. The new method synthesizes pathology in multi-modal MRI and diffusion tensor imaging (DTI) by simulating mass effect, warping and destruction of white matter fibers, and infiltration of brain tissues by tumor cells. We generate synthetic contrast-enhanced MR images by simulating the accumulation of contrast agent within the brain. The appearance of the brain tissue and tumor in MRI is simulated by synthesizing texture images from real MR images. The proposed method is able to generate synthetic ground truth and synthesized MR images with tumor and edema that exhibit comparable segmentation challenges to real tumor MRI. Such image data sets will find use in segmentation reliability studies, comparison and validation of different segmentation methods, training and teaching, or even in evaluating standards for tumor size like the RECIST criteria (response evaluation criteria in solid tumors).
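    As a small, generic illustration of the "user-defined degradation by noise and intensity inhomogeneity" mentioned above (the bias-field model and noise parameters here are assumptions, not the authors' implementation), a synthetic slice can be corrupted with a smooth multiplicative bias field and Rician noise:

import numpy as np

def degrade_slice(img, bias_strength=0.2, noise_sigma=5.0, seed=0):
    """Apply a smooth multiplicative bias field and Rician noise to a 2D slice.
    Illustrative parameters only; not the simulation used in the paper."""
    rng = np.random.default_rng(seed)
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx]
    # Low-order polynomial bias field, normalized around 1.0.
    bias = 1.0 + bias_strength * ((x / nx - 0.5) + (y / ny - 0.5) ** 2)
    biased = img * bias
    # Rician noise: magnitude of a complex Gaussian perturbation.
    noisy = np.sqrt((biased + rng.normal(0, noise_sigma, img.shape)) ** 2
                    + rng.normal(0, noise_sigma, img.shape) ** 2)
    return noisy

# Toy phantom: a bright square on a dark background.
phantom = np.zeros((64, 64))
phantom[16:48, 16:48] = 100.0
print(degrade_slice(phantom).mean())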

  2. Results from the commissioning of a multi-modal endoscope for ultrasound and time of flight PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bugalho, Ricardo

    2015-07-01

    The EndoTOFPET-US collaboration has developed a multi-modal imaging system combining Ultrasound with Time-of-Flight Positron Emission Tomography into an endoscopic imaging device. The objective of the project is to obtain a coincidence time resolution of about 200 ps FWHM and to achieve about 1 mm spatial resolution of the PET system, while integrating all the components in a very compact detector suitable for endoscopic use. This scanner aims to be exploited for diagnostic and surgical oncology, as well as being instrumental in the clinical test of new biomarkers especially targeted for prostate and pancreatic cancer. (authors)

  3. Tinnitus Multimodal Imaging

    DTIC Science & Technology

    2014-10-01

    Annual report for award W81XWH-13-1-0494, "Tinnitus Multimodal Imaging," covering 30 Sept 2013 – 29 Oct 2014; approved for public release, distribution unlimited. Abstract (truncated in the source record): Tinnitus is a common auditory ...

  4. Albumin based versatile multifunctional nanocarriers for cancer therapy: Fabrication, surface modification, multimodal therapeutics and imaging approaches.

    PubMed

    Kudarha, Ritu R; Sawant, Krutika K

    2017-12-01

    Albumin is a versatile protein used as a carrier system for cancer therapeutics. As a carrier, it can provide tumor specificity, reduce drug-related toxicity, and maintain a therapeutic concentration of the active moiety (drug, gene, peptide, protein, etc.) for a long period of time. Apart from cancer therapy, it is also utilized in the imaging and multimodal therapy of cancer. This review highlights the important properties, structure and types of albumin-based nanocarriers with regard to their use for cancer targeting. It also provides a brief discussion of methods of preparation of these nanocarriers and of their surface modification. Applications of albumin nanocarriers for cancer therapy, gene delivery, imaging, phototherapy and multimodal therapy are also discussed, along with albumin-based marketed nano formulations and those under clinical trials. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. The Neurochemical and Microstructural Changes in the Brain of Systemic Lupus Erythematosus Patients: A Multimodal MRI Study

    PubMed Central

    Zhang, Zhiyan; Wang, Yukai; Shen, Zhiwei; Yang, Zhongxian; Li, Li; Chen, Dongxiao; Yan, Gen; Cheng, Xiaofang; Shen, Yuanyu; Tang, Xiangyong; Hu, Wei; Wu, Renhua

    2016-01-01

    The diagnosis and pathology of neuropsychiatric systemic lupus erythematosus (NPSLE) remain challenging. Herein, we used multimodal imaging to assess anatomical and functional changes in the brains of SLE patients, instead of the single MRI approach generally used in previous studies. Twenty-two NPSLE patients, 21 non-NPSLE patients and 20 healthy controls (HCs) underwent 3.0 T MRI with multivoxel magnetic resonance spectroscopy, T1-weighted volumetric imaging for voxel-based morphometry (VBM) and diffusional kurtosis imaging (DKI) scans. While there were findings in other basal ganglia regions, the most consistent findings were observed in the posterior cingulate gyrus (PCG). Reduced concentrations of multiple metabolites were observed in the PCG in both patient groups, with the reduction more prominent in NPSLE patients. The two patient groups displayed lower mean kurtosis (MK) values in the bilateral PCG compared with HCs (p < 0.01) as assessed by DKI. Grey matter reduction in the PCG was observed in the NPSLE group using VBM. Positive correlations between cognitive function scores and imaging metrics in the bilateral PCG were detected. Multimodal imaging is useful for evaluating SLE subjects and potentially determining disease pathology. Impairments of cognitive function in SLE patients may be explained by metabolic and microstructural changes in the PCG. PMID:26758023

  6. Multimodality Molecular Imaging-Guided Tumor Border Delineation and Photothermal Therapy Analysis Based on Graphene Oxide-Conjugated Gold Nanoparticles Chelated with Gd.

    PubMed

    Ma, Xibo; Jin, Yushen; Wang, Yi; Zhang, Shuai; Peng, Dong; Yang, Xin; Wei, Shoushui; Chai, Wei; Li, Xuejun; Tian, Jie

    2018-01-01

    Complete extinction of tumor cells is a crucial measure for evaluating antitumor efficacy. Difficulty in defining tumor margins and in finding satellite metastases is a major cause of tumor recurrence. A synergistic method based on multimodality molecular imaging therefore needs to be developed to achieve complete extinction of the tumor cells. In this study, graphene oxide conjugated with gold nanostars and chelated with Gd through 1,4,7,10-tetraazacyclododecane-N,N',N,N'-tetraacetic acid (DOTA) (GO-AuNS-DOTA-Gd) was prepared to target HCC-LM3-fLuc cells and used for therapy. For subcutaneous tumors, multimodality molecular imaging, including photoacoustic imaging (PAI) and magnetic resonance imaging (MRI), and the related processing techniques were used to monitor the pharmacokinetics of GO-AuNS-DOTA-Gd in order to determine the optimal time for treatment. For orthotopic tumors, MRI was used to delineate the tumor location and margin in vivo before treatment, and a handheld photoacoustic imaging system was then used to determine the tumor location during surgery and to guide the photothermal therapy. The experimental results on the orthotopic tumor model demonstrated that this synergistic method could effectively reduce tumor residual and satellite metastases by 85.71% compared with the routine photothermal method without handheld PAI guidance. These results indicate that this multimodality molecular imaging-guided photothermal therapy method is promising for clinical application.

  7. Photoacoustic-Based Multimodal Nanoprobes: from Constructing to Biological Applications.

    PubMed

    Gao, Duyang; Yuan, Zhen

    2017-01-01

    Multimodal nanoprobes have attracted intensive attention since they can integrate various imaging modalities to combine the complementary merits of each single modality. Meanwhile, interest in laser-induced photoacoustic imaging is growing rapidly due to its unique advantages in visualizing tissue structure and function with high spatial resolution and satisfactory imaging depth. In this review, we summarize multimodal nanoprobes involving photoacoustic imaging, focusing in particular on the methods used to construct them. We divide the synthetic methods into two types: the first, a "one for all" concept, exploits the intrinsic multimodal properties of the elements in a single particle; the second, an "all in one" concept, integrates different functional blocks into one particle. We then briefly introduce applications of these multifunctional nanoprobes for in vivo imaging and imaging-guided tumor therapy. Finally, we discuss the advantages and disadvantages of the present methods for constructing multimodal nanoprobes and share our viewpoints in this area.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barstow, Del R; Patlolla, Dilip Reddy; Mann, Christopher J

    The data captured by existing standoff biometric systems typically yields lower biometric recognition performance than that of their close-range counterparts due to imaging challenges, pose challenges, and other factors. To assist in overcoming these limitations, such systems typically operate in a multi-modal capacity, as in Honeywell's Combined Face and Iris (CFAIRS) [21] system. While this improves system performance, standoff systems have yet to be proven as accurate as their close-range equivalents. We present a standoff system capable of operating at up to 7 meters in range. Unlike many systems such as the CFAIRS, our system captures high quality 12 MP video, allowing for multi-sample as well as multi-modal comparison. We found that for standoff systems, multi-sample comparison improved performance more than multi-modal comparison. For a small test group of 50 subjects, we were able to achieve 100% rank one recognition performance with our system.

  9. Multimodal Diffuse Optical Imaging

    NASA Astrophysics Data System (ADS)

    Intes, Xavier; Venugopal, Vivek; Chen, Jin; Azar, Fred S.

    Diffuse optical imaging, particularly diffuse optical tomography (DOT), is an emerging clinical modality capable of providing unique functional information, at a relatively low cost, and with nonionizing radiation. Multimodal diffuse optical imaging has enabled a synergistic combination of functional and anatomical information: the quality of DOT reconstructions has been significantly improved by incorporating the structural information derived by the combined anatomical modality. In this chapter, we will review the basic principles of diffuse optical imaging, including instrumentation and reconstruction algorithm design. We will also discuss the approaches for multimodal imaging strategies that integrate DOI with clinically established modalities. The merit of the multimodal imaging approaches is demonstrated in the context of optical mammography, but the techniques described herein can be translated to other clinical scenarios such as brain functional imaging or muscle functional imaging.

  10. Prospective Evaluation of Multimodal Optical Imaging with Automated Image Analysis to Detect Oral Neoplasia In Vivo.

    PubMed

    Quang, Timothy; Tran, Emily Q; Schwarz, Richard A; Williams, Michelle D; Vigneswaran, Nadarajah; Gillenwater, Ann M; Richards-Kortum, Rebecca

    2017-10-01

    The 5-year survival rate for patients with oral cancer remains low, in part because diagnosis often occurs at a late stage. Early and accurate identification of oral high-grade dysplasia and cancer can help improve patient outcomes. Multimodal optical imaging is an adjunctive diagnostic technique in which autofluorescence imaging is used to identify high-risk regions within the oral cavity, followed by high-resolution microendoscopy to confirm or rule out the presence of neoplasia. Multimodal optical images were obtained from 206 sites in 100 patients. Histologic diagnosis, either from a punch biopsy or an excised surgical specimen, was used as the gold standard for all sites. Histopathologic diagnoses of moderate dysplasia or worse were considered neoplastic. Images from 92 sites in the first 30 patients were used as a training set to develop automated image analysis methods for identification of neoplasia. Diagnostic performance was evaluated prospectively using images from 114 sites in the remaining 70 patients as a test set. In the training set, multimodal optical imaging with automated image analysis correctly classified 95% of nonneoplastic sites and 94% of neoplastic sites. Among the 56 sites in the test set that were biopsied, multimodal optical imaging correctly classified 100% of nonneoplastic sites and 85% of neoplastic sites. Among the 58 sites in the test set that corresponded to a surgical specimen, multimodal imaging correctly classified 100% of nonneoplastic sites and 61% of neoplastic sites. These findings support the potential of multimodal optical imaging to aid in the early detection of oral cancer. Cancer Prev Res; 10(10); 563-70. ©2017 American Association for Cancer Research.

  11. vECTlab—A fully integrated multi-modality Monte Carlo simulation framework for the radiological imaging sciences

    NASA Astrophysics Data System (ADS)

    Peter, Jörg; Semmler, Wolfhard

    2007-10-01

    Alongside and in part motivated by recent advances in molecular diagnostics, the development of dual-modality instruments for patient and dedicated small animal imaging has gained attention from diverse research groups. The desire for such systems is high not only to link molecular or functional information with the anatomical structures, but also for detecting multiple molecular events simultaneously at shorter total acquisition times. While PET and SPECT have been integrated successfully with X-ray CT, the advance of optical imaging approaches (OT) and their integration into existing modalities carry a high application potential, particularly for imaging small animals. A multi-modality Monte Carlo (MC) simulation approach has been developed that is able to trace high-energy (keV) as well as optical (eV) photons concurrently within identical phantom representation models. We show that the two ray-tracing approaches for keV and eV photons can be integrated into a single simulation framework which enables both photon classes to be propagated through various geometry models representing both phantoms and scanners. The main advantage of such an integrated framework for our specific application is the investigation of novel tomographic multi-modality instrumentation intended for in vivo small animal imaging through time-resolved MC simulation upon identical phantom geometries. Design examples are provided for recently proposed SPECT-OT and PET-OT imaging systems.

  12. Versatile quantitative phase imaging system applied to high-speed, low noise and multimodal imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Federici, Antoine; Aknoun, Sherazade; Savatier, Julien; Wattellier, Benoit F.

    2017-02-01

    Quadriwave lateral shearing interferometry (QWLSI) is a well-established quantitative phase imaging (QPI) technique based on the analysis of the interference patterns of four diffraction orders produced by an optical grating set in front of an array detector [1]. As a QPI modality, this is a non-invasive imaging technique which allows measurement of the optical path difference (OPD) of semi-transparent samples. We present a system enabling QWLSI with high-performance sCMOS cameras [2] and apply it to high-speed, low-noise and multimodal imaging. This modified QWLSI system contains a versatile optomechanical device which images the optical grating near the detector plane; such a device can be coupled with any kind of camera by varying its magnification. In this paper, we study the use of an Andor Zyla5.5 sCMOS camera with our modified QWLSI system. We will present high-speed live cell imaging, up to a 200 Hz frame rate, in order to follow fast intracellular motions while measuring the quantitative phase information. The structural and density information extracted from the OPD signal is complementary to the specific and localized fluorescence signal [2]. In addition, QPI detects cells even when the fluorophore is not expressed, which is very useful for following protein expression over time. Because the 10 µm pixel pitch of our modified QWLSI combined with the high sensitivity of the Zyla5.5 also enables high-quality fluorescence imaging, we have carried out multimodal imaging revealing fine cell structures, such as actin filaments, merged with the morphological information of the phase. References: [1] P. Bon, G. Maucort, B. Wattellier, and S. Monneret, "Quadriwave lateral shearing interferometry for quantitative phase microscopy of living cells," Opt. Express, vol. 17, pp. 13080-13094, 2009. [2] P. Bon, S. Lécart, E. Fort and S. Lévêque-Fort, "Fast label-free cytoskeletal network imaging in living mammalian cells," Biophysical Journal, 106(8), pp. 1588-1595, 2014.

  13. The pivotal role of multimodality reporter sensors in drug discovery: from cell based assays to real time molecular imaging.

    PubMed

    Ray, Pritha

    2011-04-01

    Development and marketing of new drugs require stringent validations that are expensive and time consuming. Non-invasive multimodality molecular imaging using reporter genes holds great potential to expedite these processes at reduced cost. New generations of smarter molecular imaging strategies, such as split reporter, bioluminescence resonance energy transfer, and multimodality fusion reporter technologies, will further assist in streamlining and shortening the drug discovery and development process. This review illustrates the importance and potential of molecular imaging using multimodality reporter genes in drug development at preclinical phases.

  14. In vivo multimodal nonlinear optical imaging of mucosal tissue

    NASA Astrophysics Data System (ADS)

    Sun, Ju; Shilagard, Tuya; Bell, Brent; Motamedi, Massoud; Vargas, Gracie

    2004-05-01

    We present a multimodal nonlinear imaging approach to elucidate microstructures and spectroscopic features of oral mucosa and submucosa in vivo. The hamster buccal pouch was imaged using 3-D high resolution multiphoton and second harmonic generation microscopy. The multimodal imaging approach enables colocalization and differentiation of prominent known spectroscopic and structural features such as keratin, epithelial cells, and submucosal collagen at various depths in tissue. Visualization of cellular morphology and epithelial thickness are in excellent agreement with histological observations. These results suggest that multimodal nonlinear optical microscopy can be an effective tool for studying the physiology and pathology of mucosal tissue.

  15. Design and implementation of a contactless multiple hand feature acquisition system

    NASA Astrophysics Data System (ADS)

    Zhao, Qiushi; Bu, Wei; Wu, Xiangqian; Zhang, David

    2012-06-01

    In this work, an integrated contactless multiple hand feature acquisition system is designed. The system can capture palmprint, palm vein, and palm dorsal vein images simultaneously. Moreover, the images are captured in a contactless manner, that is, users need not touch any part of the device during capture. The palmprint is imaged under visible illumination while the palm vein and palm dorsal vein are imaged under near-infrared (NIR) illumination. Capture is controlled by computer and the whole process takes less than 1 second, which is sufficient for online biometric systems. Based on this device, this paper also implements a contactless hand-based multimodal biometric system. Palmprint, palm vein, palm dorsal vein, finger vein, and hand geometry features are extracted from the captured images. After similarity measurement, the matching scores are fused using a weighted-sum fusion rule. Experimental results show that although the verification accuracy of each single modality is not as high as that of the state of the art, the fusion result is superior to most existing hand-based biometric systems. This result indicates that the proposed device is competent for contactless multimodal hand-based biometrics.
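
    A minimal sketch of the weighted-sum score fusion mentioned above (illustrative only; the modality names, weights and min-max normalisation are assumptions, not values from the paper):

        import numpy as np

        def min_max_normalize(scores):
            """Map raw matching scores to [0, 1] so scores from different modalities are comparable."""
            s = np.asarray(scores, dtype=float)
            return (s - s.min()) / (s.max() - s.min() + 1e-12)

        def weighted_sum_fusion(per_modality_scores, weights):
            """Fuse per-modality score vectors (one score per enrolled identity) by a weighted sum."""
            fused = None
            for modality, scores in per_modality_scores.items():
                contribution = weights[modality] * min_max_normalize(scores)
                fused = contribution if fused is None else fused + contribution
            return fused

        # Hypothetical scores for four enrolled identities from three of the captured modalities.
        scores = {
            "palmprint":   [0.62, 0.15, 0.80, 0.33],
            "palm_vein":   [0.55, 0.20, 0.71, 0.40],
            "dorsal_vein": [0.48, 0.25, 0.77, 0.30],
        }
        weights = {"palmprint": 0.4, "palm_vein": 0.3, "dorsal_vein": 0.3}
        print("best match:", int(np.argmax(weighted_sum_fusion(scores, weights))))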

  16. Au Nanocage Functionalized with Ultra-small Fe3O4 Nanoparticles for Targeting T1-T2 Dual MRI and CT Imaging of Tumor

    NASA Astrophysics Data System (ADS)

    Wang, Guannan; Gao, Wei; Zhang, Xuanjun; Mei, Xifan

    2016-06-01

    Diagnostic approaches based on multimodal imaging with clinical noninvasive scanners (e.g., MRI/CT) have been highly developed in recent years for accurate selection of therapeutic regimens in critical diseases. There is therefore high demand for appropriate all-in-one multimodal contrast agents (MCAs) for MRI/CT multimodal imaging. Here, a novel MCA (F-AuNC@Fe3O4) was engineered by assembling Au nanocages (Au NC) and ultra-small iron oxide nanoparticles (Fe3O4) for simultaneous T1-T2 dual MRI and CT contrast imaging. In this system, the Au nanocages offer facile thiol modification and a strong X-ray attenuation property for CT imaging. The ultra-small Fe3O4 nanoparticles serve as an excellent contrast agent, providing greatly enhanced T1- and T2-weighted MRI signal (r1 = 6.263 mM-1 s-1, r2 = 28.117 mM-1 s-1) due to their ultra-refined size. After functionalization, the resulting MCA nanoparticles exhibited a small average size, low aggregation and excellent biocompatibility. In vitro and in vivo studies revealed that the MCAs show a long circulation time, renal clearance properties and an outstanding capability for selective accumulation in tumor tissues for simultaneous CT imaging and T1- and T2-weighted MRI. Taken together, these results show that the as-prepared MCAs are excellent candidates as MRI/CT multimodal imaging contrast agents.
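
    For context, the quoted relaxivities r1 and r2 relate the measured relaxation rates to contrast agent concentration through the standard linear relaxivity model (a textbook relation, not a result specific to this paper):

        \frac{1}{T_i} = \frac{1}{T_i^{0}} + r_i\,[\mathrm{CA}], \qquad i = 1, 2,

    where T_i^0 is the relaxation time in the absence of the agent, [CA] is the agent concentration in mM, and r_i is the relaxivity in mM^-1 s^-1; the r2/r1 ratio is commonly used to judge whether an agent is better suited to T1-weighted, T2-weighted, or dual-mode contrast.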

  17. SU-E-J-110: A Novel Level Set Active Contour Algorithm for Multimodality Joint Segmentation/Registration Using the Jensen-Rényi Divergence.

    PubMed

    Markel, D; Naqa, I El; Freeman, C; Vallières, M

    2012-06-01

    To present a novel joint segmentation/registration framework for multimodality image-guided and adaptive radiotherapy. A major challenge to this framework is the sensitivity of many segmentation or registration algorithms to noise. Presented is a level set active contour based on the Jensen-Rényi (JR) divergence to achieve improved noise robustness in a multi-modality imaging space. It was found that the JR divergence, when used for segmentation, has improved robustness to noise compared with mutual information (MI) or other entropy-based metrics; the MI metric failed at around 2/3 of the noise power at which the JR divergence failed. The JR divergence metric is useful for the task of joint segmentation/registration of multimodality images and shows improved results compared to entropy-based metrics. The algorithm can be easily modified to incorporate non-intensity based images, which would allow applications in multi-modality and texture analysis. © 2012 American Association of Physicists in Medicine.
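
    For reference, the Jensen-Rényi divergence driving the active contour is conventionally defined in terms of the Rényi entropy (standard definition, reproduced here for clarity rather than quoted from the abstract):

        JR_{\alpha}^{\omega}(p_1,\dots,p_n) = H_{\alpha}\Big(\sum_{i=1}^{n} \omega_i\, p_i\Big) - \sum_{i=1}^{n} \omega_i\, H_{\alpha}(p_i),
        \qquad
        H_{\alpha}(p) = \frac{1}{1-\alpha}\,\log\Big(\sum_{x} p(x)^{\alpha}\Big), \quad \alpha > 0,\ \alpha \neq 1,

    with nonnegative weights ω_i summing to one; as α → 1 it reduces to the generalised Jensen-Shannon divergence. In a segmentation setting, the divergence between the intensity distributions inside and outside the contour is typically maximised as the level set evolves.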

  18. Multimodal autofluorescence detection of cancer: from single cells to living organism

    NASA Astrophysics Data System (ADS)

    Horilova, J.; Cunderlikova, B.; Cagalinec, M.; Chorvat, D.; Marcek Chorvatova, A.

    2018-02-01

    Multimodal optical imaging of suspected tissues is showing promise as a method for distinguishing suspected cancerous tissues from healthy ones. In particular, the combination of steady-state spectroscopic methods with time-resolved fluorescence provides more precise insight into native metabolism when focused on tissue autofluorescence. Cancer is linked to specific metabolic remodeling that is detectable spectroscopically. In this work, we evaluate the possibilities and limitations of multimodal optical cancer detection in single cells, collagen-based 3D cell cultures and in living organisms (whole mice), as a representation of gradually increasing complexity of model systems.

  19. Nanoengineered multimodal contrast agent for medical image guidance

    NASA Astrophysics Data System (ADS)

    Perkins, Gregory J.; Zheng, Jinzi; Brock, Kristy; Allen, Christine; Jaffray, David A.

    2005-04-01

    Multimodality imaging has gained momentum in radiation therapy planning and image-guided treatment delivery. Specifically, computed tomography (CT) and magnetic resonance (MR) imaging are two complementary imaging modalities often utilized in radiation therapy for visualization of anatomical structures for tumour delineation and accurate registration of image data sets for volumetric dose calculation. The development of a multimodal contrast agent for CT and MR with prolonged in vivo residence time would provide long-lasting spatial and temporal correspondence of the anatomical features of interest, and therefore facilitate multimodal image registration, treatment planning and delivery. The multimodal contrast agent investigated consists of nano-sized stealth liposomes encapsulating conventional iodine and gadolinium-based contrast agents. The average loading achieved was 33.5 +/- 7.1 mg/mL of iodine for iohexol and 9.8 +/- 2.0 mg/mL of gadolinium for gadoteridol. The average liposome diameter was 46.2 +/- 13.5 nm. The system was found to be stable in physiological buffer over a 15-day period, releasing 11.9 +/- 1.1% and 11.2 +/- 0.9% of the total amounts of iohexol and gadoteridol loaded, respectively. 200 minutes following in vivo administration, the contrast agent maintained a relative contrast enhancement of 81.4 +/- 13.05 differential Hounsfield units (ΔHU) in CT (40% decrease from the peak signal value achieved 3 minutes post-injection) and 731.9 +/- 144.2 differential signal intensity (ΔSI) in MR (46% decrease from the peak signal value achieved 3 minutes post-injection) in the blood (aorta), a relative contrast enhancement of 38.0 +/- 5.1 ΔHU (42% decrease from the peak signal value achieved 3 minutes post-injection) and 178.6 +/- 41.4 ΔSI (62% decrease from the peak signal value achieved 3 minutes post-injection) in the liver (parenchyma), a relative contrast enhancement of 9.1 +/- 1.7 ΔHU (94% decrease from the peak signal value achieved 3 minutes post-injection) and 461.7 +/- 78.1 ΔSI (60% decrease from the peak signal value achieved 5 minutes post-injection) in the kidney (cortex) of a New Zealand white rabbit. This multimodal contrast agent, with prolonged in vivo residence time and imaging efficacy, has the potential to bring about improvements in the fields of medical imaging and radiation therapy, particularly for image registration and guidance.

  20. Image formation analysis and high resolution image reconstruction for plenoptic imaging systems.

    PubMed

    Shroff, Sapna A; Berkner, Kathrin

    2013-04-01

    Plenoptic imaging systems are often used for applications like refocusing, multimodal imaging, and multiview imaging. However, their resolution is limited to the number of lenslets. In this paper we investigate paraxial, incoherent, plenoptic image formation, and develop a method to recover some of the resolution for the case of a two-dimensional (2D) in-focus object. This enables the recovery of a conventional-resolution, 2D image from the data captured in a plenoptic system. We show simulation results for a plenoptic system with a known response and Gaussian sensor noise.

  1. Distance-Dependent Multimodal Image Registration for Agriculture Tasks

    PubMed Central

    Berenstein, Ron; Hočevar, Marko; Godeša, Tone; Edan, Yael; Ben-Shahar, Ohad

    2015-01-01

    Image registration is the process of aligning two or more images of the same scene taken at different times, from different viewpoints, and/or by different sensors. This research focuses on developing a practical method for automatic image registration for agricultural systems that use multimodal sensory systems and operate in natural environments. While not limited to any particular modalities, here we focus on systems with visual and thermal sensory inputs. Our approach is based on pre-calibrating a distance-dependent transformation matrix (DDTM) between the sensors and representing it in a compact way by regressing the distance-dependent coefficients as functions of distance. The DDTM is measured by calculating a projective transformation matrix for varying distances between the sensors and possible targets. To do so we designed a dedicated experimental setup including Artificial Control Points (ACPs) and their detection algorithms for the two sensors. We demonstrate the utility of our approach using different experiments and evaluation criteria. PMID:26308000
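
    A minimal sketch of the distance-dependent calibration idea (illustrative only; the polynomial degree, the 3x3 projective parameterisation and the function names are assumptions): each coefficient of the visual-to-thermal transform is measured at several calibration distances and regressed as a function of distance, so a registration matrix can be synthesised for any target range.

        import numpy as np

        def fit_ddtm(distances, homographies, degree=2):
            """Fit each of the 9 projective-transform coefficients as a polynomial in distance.
            `homographies` is a list of 3x3 matrices measured at the given calibration distances."""
            H = np.stack([h.ravel() for h in homographies])           # shape (num_distances, 9)
            return [np.polyfit(distances, H[:, k], degree) for k in range(9)]

        def eval_ddtm(coefficient_polys, distance):
            """Reconstruct the 3x3 transform predicted for a given sensor-to-target distance."""
            h = np.array([np.polyval(p, distance) for p in coefficient_polys]).reshape(3, 3)
            return h / h[2, 2]                                         # keep the usual H[2,2] = 1 scaling

        # Calibration with the ACP target at a few known distances (synthetic matrices used here).
        dists = np.array([2.0, 4.0, 6.0, 8.0])
        Hs = [np.eye(3) + 0.01 * d * np.random.randn(3, 3) for d in dists]
        polys = fit_ddtm(dists, Hs)
        H_at_5m = eval_ddtm(polys, 5.0)    # warp the thermal image onto the visual one at 5 m range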

  2. System and method for ultrasonic tomography

    DOEpatents

    Haddad, Waleed Sami

    2002-01-01

    A system and method for doing both transmission mode and reflection mode three-dimensional ultrasonic imaging. The multimode imaging capability may be used to provide enhanced detectability of cancer tumors within the human breast; however, similar imaging systems are applicable to a number of other medical problems as well as a variety of non-medical problems in non-destructive evaluation (NDE).

  3. Ridge-branch-based blood vessel detection algorithm for multimodal retinal images

    NASA Astrophysics Data System (ADS)

    Li, Y.; Hutchings, N.; Knighton, R. W.; Gregori, G.; Lujan, B. J.; Flanagan, J. G.

    2009-02-01

    Automatic detection of retinal blood vessels is important to medical diagnosis and imaging. With the development of imaging technologies, various modalities of retinal images are available. Few currently published algorithms are applicable to multimodal retinal images, and the performance of existing algorithms on images with pathologies needs improvement. The purpose of this paper is to propose an automatic Ridge-Branch-Based (RBB) detection algorithm for blood vessel centerlines and blood vessels in multimodal retinal images (for example, color fundus photographs, fluorescein angiograms, fundus autofluorescence images, SLO fundus images and OCT fundus images). Ridges, which can be considered centerlines of vessel-like patterns, are first extracted. The method uses the connective branching information of image ridges: if ridge pixels are connected, they are more likely to be in the same class, vessel ridge pixels or non-vessel ridge pixels. Thanks to the good discriminating ability of the designed "Segment-Based Ridge Features", the classifier and its parameters can be easily adapted to multimodal retinal images without ground-truth training. We present thorough experimental results on SLO images, a color fundus photograph database and other multimodal retinal images, as well as comparisons with other published algorithms. Results showed that the RBB algorithm achieved a good performance.

  4. NaGdF4:Nd3+/Yb3+ Nanoparticles as Multimodal Imaging Agents

    NASA Astrophysics Data System (ADS)

    Pedraza, Francisco; Rightsell, Chris; Kumar, Ga; Giuliani, Jason; Monton, Car; Sardar, Dhiraj

    Medical imaging is a fundamental tool used for the diagnosis of numerous ailments. Each imaging modality has unique advantages; however, each also possesses intrinsic limitations, including low spatial resolution, low sensitivity, limited penetration depth, or radiation damage. To circumvent this problem, the combination of imaging modalities, or multimodal imaging, has been proposed, for example near-infrared fluorescence (NIRF) imaging combined with magnetic resonance imaging (MRI). By combining the specificity and selectivity of NIRF with the deep penetration and high spatial resolution of MRI, it is possible to circumvent the shortcomings of each for a more robust imaging technique. In addition, both imaging modalities are very safe and minimally invasive. Fluorescent nanoparticles, such as NaGdF4:Nd3+/Yb3+, are excellent candidates for NIRF/MRI multimodal imaging. The dopants, Nd and Yb, absorb and emit within the biological window, where near-infrared light is less attenuated by soft tissue. This results in less tissue damage and deeper tissue penetration, making them viable candidates for biological imaging. In addition, the inclusion of Gd results in paramagnetic properties, allowing their use as contrast agents in multimodal imaging. The work presented includes crystallographic results, as well as full optical and magnetic characterization to determine the nanoparticles' viability in multimodal imaging.

  5. Mobile, Multi-modal, Label-Free Imaging Probe Analysis of Choroidal Oximetry and Retinal Hypoxia

    DTIC Science & Technology

    2015-10-01

    Report documentation fragment for award W81XWH-14-1-0537, "Mobile, Multi-modal, Label-Free Imaging Probe Analysis of Choroidal Oximetry and Retinal Hypoxia." Recoverable task text: image choroidal vessels/capillaries using CARS intravital microscopy; Subtask 3: measure oxy-hemoglobin levels in PBI test and control eyes.

  6. Radioactive Nanomaterials for Multimodality Imaging

    PubMed Central

    Chen, Daiqin; Dougherty, Casey A.; Yang, Dongzhi; Wu, Hongwei; Hong, Hao

    2016-01-01

    Nuclear imaging techniques, primarily positron emission tomography (PET) and single-photon emission computed tomography (SPECT), can provide quantitative information on a biological event in vivo with ultra-high sensitivity; however, their comparatively low spatial resolution is the major limitation in clinical application. By converging nuclear imaging with other imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI) and optical imaging, hybrid imaging platforms can overcome the limitations of each individual imaging technique. Possessing versatile chemical linking ability and good cargo-loading capacity, radioactive nanomaterials can serve as ideal imaging contrast agents. In this review, we provide a brief overview of current state-of-the-art applications of radioactive nanomaterials in multimodality imaging. We present strategies for incorporation of radioisotope(s) into nanomaterials along with applications of radioactive nanomaterials in multimodal imaging. Advantages and limitations of radioactive nanomaterials for multimodal imaging applications are discussed. Finally, a future perspective on possible radioactive nanomaterial utilization is presented for improving diagnosis and patient management in a variety of diseases. PMID:27227167

  7. MO-DE-202-03: Image-Guided Surgery and Interventions in the Advanced Multimodality Image-Guided Operating (AMIGO) Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kapur, T.

    At least three major trends in surgical intervention have emerged over the last decade: a move toward more minimally invasive (or non-invasive) approaches to the surgical target; the development of high-precision treatment delivery techniques; and the increasing role of multi-modality intraoperative imaging in support of such procedures. This symposium includes invited presentations on recent advances in each of these areas and the emerging role for medical physics research in the development and translation of high-precision interventional techniques. The four speakers are: Keyvan Farahani, “Image-guided focused ultrasound surgery and therapy”; Jeffrey H. Siewerdsen, “Advances in image registration and reconstruction for image-guided neurosurgery”; Tina Kapur, “Image-guided surgery and interventions in the advanced multimodality image-guided operating (AMIGO) suite”; Raj Shekhar, “Multimodality image-guided interventions: Multimodality for the rest of us”. Learning Objectives: Understand the principles and applications of HIFU in surgical ablation. Learn about recent advances in 3D–2D and 3D deformable image registration in support of surgical safety and precision. Learn about recent advances in model-based 3D image reconstruction in application to intraoperative 3D imaging. Understand the multi-modality imaging technologies and clinical applications investigated in the AMIGO suite. Understand the emerging need and techniques to implement multi-modality image guidance in surgical applications such as neurosurgery, orthopaedic surgery, vascular surgery, and interventional radiology. Research supported by the NIH and Siemens Healthcare. J. Siewerdsen: Grant Support - National Institutes of Health; Grant Support - Siemens Healthcare; Grant Support - Carestream Health; Advisory Board - Carestream Health; Licensing Agreement - Carestream Health; Licensing Agreement - Elekta Oncology. T. Kapur: P41EB015898. R. Shekhar: Funding R42CA137886 and R41CA192504; Disclosure and CoI: IGI Technologies, small-business partner on the grants.

  8. Simultaneous acquisition of magnetic resonance spectroscopy (MRS) data and positron emission tomography (PET) images with a prototype MR-compatible, small animal PET imager

    NASA Astrophysics Data System (ADS)

    Raylman, Raymond R.; Majewski, Stan; Velan, S. Sendhil; Lemieux, Susan; Kross, Brian; Popov, Vladimir; Smith, Mark F.; Weisenberger, Andrew G.

    2007-06-01

    Multi-modality imaging (such as PET-CT) is rapidly becoming a valuable tool in the diagnosis of disease and in the development of new drugs. Functional images produced with PET, fused with anatomical images created by MRI, allow the correlation of form with function. Perhaps more exciting than the combination of anatomical MRI with PET is the melding of PET with MR spectroscopy (MRS): two aspects of physiology could be combined in novel ways to produce new insights into the physiology of normal and pathological processes. Our team is developing a system to acquire MRI images, MRS spectra, and PET images contemporaneously. The prototype MR-compatible PET system consists of two opposed detector heads (appropriate in size for small animal imaging), operating in coincidence mode with an active field-of-view of ~14 cm in diameter. Each detector consists of an array of LSO detector elements coupled through a 2-m long fiber optic light guide to a single position-sensitive photomultiplier tube. The use of light guides allows these magnetic field-sensitive elements of the PET imager to be positioned outside the strong magnetic field of our 3T MRI scanner. The PET imager was integrated with a 12-cm diameter, 12-leg custom birdcage coil. Simultaneous MRS spectra and PET images were successfully acquired from a multi-modality phantom consisting of a sphere filled with 17 brain-relevant substances and a positron-emitting radionuclide. There were no significant changes in MRI or PET scanner performance when both were present in the MRI magnet bore. This successful initial test demonstrates the potential for using such a multi-modality system to obtain complementary MRS and PET data.

  9. Bubble-generating nano-lipid carriers for ultrasound/CT imaging-guided efficient tumor therapy.

    PubMed

    Zhang, Nan; Li, Jia; Hou, Ruirui; Zhang, Jiangnan; Wang, Pei; Liu, Xinyang; Zhang, Zhenzhong

    2017-12-20

    Ideal therapeutic effectiveness of chemotherapy is obtained only when tumor cells are exposed to a maximal drug concentration, which is often hindered by dose-limiting toxicity. We designed a bubble-generating liposomal delivery system by introducing ammonium bicarbonate and gold nanorods into folic acid-conjugated liposomes to allow both multimodal imaging and the local release of drug (doxorubicin) with hyperthermia. The key component, ammonium bicarbonate, allows a controlled, rapid release of doxorubicin to provide an effective drug concentration in the tumor microenvironment. An in vitro temperature-triggered drug release study showed that cumulative release improved more than two-fold. In addition, in vitro and in vivo studies indicated that local heat treatment or ultrasonic cavitation enhanced the therapeutic efficiency greatly. The delivery system could also serve as an excellent contrast agent to allow ultrasonic imaging and computerized tomography imaging simultaneously to further achieve the aim of accurate diagnostics. Results of this study showed that this versatile bubble-generating liposome is a promising system to provide optimal therapeutic effects that are guided by multimodal imaging. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Design and fabrication of multimode interference couplers based on digital micro-mirror system

    NASA Astrophysics Data System (ADS)

    Wu, Sumei; He, Xingdao; Shen, Chenbo

    2008-03-01

    Multimode interference (MMI) couplers, based on the self-imaging effect (SIE), are widely used in integrated optics. Given the importance of MMI devices, in this paper we present a novel method to design and fabricate MMI couplers: a maskless lithography technique based on a smart digital micro-mirror device (DMD) system. A 1×4 MMI device is designed as an example, which shows that the present method is efficient and cost-effective.

  11. Robust Multimodal Dictionary Learning

    PubMed Central

    Cao, Tian; Jojic, Vladimir; Modla, Shannon; Powell, Debbie; Czymmek, Kirk; Niethammer, Marc

    2014-01-01

    We propose a robust multimodal dictionary learning method for multimodal images. Joint dictionary learning for both modalities may be impaired by a lack of correspondence between image modalities in the training data, for example due to areas of low quality in one of the modalities. Dictionaries learned with such non-corresponding data will induce uncertainty about image representation. In this paper, we propose a probabilistic model that accounts for image areas that are poorly corresponding between the image modalities. We cast the problem of learning a dictionary in the presence of problematic image patches as a likelihood maximization problem and solve it with a variant of the EM algorithm. Our algorithm iterates identification of poorly corresponding patches and refinements of the dictionary. We tested our method on synthetic and real data. We show improvements in image prediction quality and alignment accuracy when using the method for multimodal image registration. PMID:24505674
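
    The alternation described above, flagging poorly corresponding patch pairs and then refitting the dictionary, can be approximated with a much simpler stand-in (a heuristic sketch using scikit-learn, not the authors' probabilistic EM model; the quantile cutoff and data layout are assumptions):

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning

        def robust_joint_dictionary(patch_pairs, n_atoms=64, n_rounds=5, keep_fraction=0.8):
            """patch_pairs: (num_patches, d1 + d2) array, each row concatenating a patch from each modality.
            Alternates dictionary fitting with rejection of the worst-reconstructed (non-corresponding) rows."""
            data = np.asarray(patch_pairs, dtype=float)
            keep = np.ones(len(data), dtype=bool)
            dico = None
            for _ in range(n_rounds):
                dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0)
                codes = dico.fit(data[keep]).transform(data)
                errors = np.linalg.norm(data - codes @ dico.components_, axis=1)
                keep = errors <= np.quantile(errors, keep_fraction)   # crude stand-in for the EM weighting step
            return dico, keep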

  12. Multimodal fusion of brain imaging data: A key to finding the missing link(s) in complex mental illness

    PubMed Central

    Calhoun, Vince D; Sui, Jing

    2016-01-01

    It is becoming increasingly clear that combining multi-modal brain imaging data is able to provide more information for individual subjects by exploiting the rich multimodal information that exists. However, the number of studies that do true multimodal fusion (i.e. capitalizing on joint information among modalities) is still remarkably small given the known benefits. In part, this is because multi-modal studies require broader expertise in collecting, analyzing, and interpreting the results than do unimodal studies. In this paper, we start by introducing the basic reasons why multimodal data fusion is important and what it can do, and importantly how it can help us avoid wrong conclusions and help compensate for imperfect brain imaging studies. We also discuss the challenges that need to be confronted for such approaches to be more widely applied by the community. We then provide a review of the diverse studies that have used multimodal data fusion (primarily focused on psychosis) as well as provide an introduction to some of the existing analytic approaches. Finally, we discuss some up-and-coming approaches to multi-modal fusion including deep learning and multimodal classification which show considerable promise. Our conclusion is that multimodal data fusion is rapidly growing, but it is still underutilized. The complexity of the human brain coupled with the incomplete measurement provided by existing imaging technology makes multimodal fusion essential in order to mitigate against misdirection and hopefully provide a key to finding the missing link(s) in complex mental illness. PMID:27347565

  13. Multimodal fusion of brain imaging data: A key to finding the missing link(s) in complex mental illness.

    PubMed

    Calhoun, Vince D; Sui, Jing

    2016-05-01

    It is becoming increasingly clear that combining multi-modal brain imaging data is able to provide more information for individual subjects by exploiting the rich multimodal information that exists. However, the number of studies that do true multimodal fusion (i.e. capitalizing on joint information among modalities) is still remarkably small given the known benefits. In part, this is because multi-modal studies require broader expertise in collecting, analyzing, and interpreting the results than do unimodal studies. In this paper, we start by introducing the basic reasons why multimodal data fusion is important and what it can do, and importantly how it can help us avoid wrong conclusions and help compensate for imperfect brain imaging studies. We also discuss the challenges that need to be confronted for such approaches to be more widely applied by the community. We then provide a review of the diverse studies that have used multimodal data fusion (primarily focused on psychosis) as well as provide an introduction to some of the existing analytic approaches. Finally, we discuss some up-and-coming approaches to multi-modal fusion including deep learning and multimodal classification which show considerable promise. Our conclusion is that multimodal data fusion is rapidly growing, but it is still underutilized. The complexity of the human brain coupled with the incomplete measurement provided by existing imaging technology makes multimodal fusion essential in order to mitigate against misdirection and hopefully provide a key to finding the missing link(s) in complex mental illness.

  14. Microscopy with multimode fibers

    NASA Astrophysics Data System (ADS)

    Moser, Christophe; Papadopoulos, Ioannis; Farahi, Salma; Psaltis, Demetri

    2013-04-01

    Microscopes are usually thought of as comprising imaging elements such as objectives and eyepiece lenses. A different type of microscope, used for endoscopy, consists of waveguiding elements such as fiber bundles, where each fiber in the bundle transports the light corresponding to one pixel in the image. Recently a new type of microscope has emerged that exploits the large number of propagating modes in a single multimode fiber. We have successfully produced fluorescence images of neural cells with sub-micrometer resolution via a 200 micrometer core multimode fiber. Imaging is achieved by using digital phase conjugation to produce a focal spot at the tip of the multimode fiber; the image is formed by scanning the focal spot digitally and collecting the fluorescence point by point.

  15. Landmark Image Retrieval by Jointing Feature Refinement and Multimodal Classifier Learning.

    PubMed

    Zhang, Xiaoming; Wang, Senzhang; Li, Zhoujun; Ma, Shuai

    2018-06-01

    Landmark retrieval is to return a set of images whose landmarks are similar to those of the query images. Existing studies on landmark retrieval focus on exploiting the geometries of landmarks for visual similarity matching. However, the visual content of social images is highly diverse within many landmarks, and some images share common patterns across different landmarks. On the other hand, it has been observed that social images usually contain multimodal contents, i.e., visual content and text tags, and each landmark has unique characteristics in both. Therefore, approaches based purely on similarity matching may not be effective in this setting. In this paper, we investigate whether the geographical correlation between the visual content and the text content can be exploited for landmark retrieval. In particular, we propose an effective multimodal landmark classification paradigm that leverages the multimodal contents of social images for landmark retrieval, integrating feature refinement and landmark classification over multimodal contents in a joint model. The geo-tagged images are automatically labeled for classifier learning. Visual features are refined based on low-rank matrix recovery, and a multimodal classifier combined with group sparsity is learned from the automatically labeled images. Finally, candidate images are ranked by combining the classification result with a measure of semantic consistency between the visual content and the text content. Experiments on real-world datasets demonstrate the superiority of the proposed approach compared to existing methods.

  16. Multimodal hyperspectral optical microscopy

    DOE PAGES

    Novikova, Irina V.; Smallwood, Chuck R.; Gong, Yu; ...

    2017-09-02

    We describe a unique and convenient approach to multimodal hyperspectral optical microscopy, herein achieved by coupling a portable and transferable hyperspectral imager to various optical microscopes. The experimental and data analysis schemes involved in recording spectrally and spatially resolved fluorescence, dark field, and optical absorption micrographs are illustrated through prototypical measurements targeting selected model systems. Namely, hyperspectral fluorescence micrographs of isolated fluorescent beads are employed to ensure spectral calibration of our detector and to gauge the attainable spatial resolution of our measurements; the recorded images are diffraction-limited. Moreover, spatially over-sampled absorption spectroscopy of a single lipid (18:1 Liss Rhod PE) layer reveals that optical densities on the order of 10^-3 may be resolved by spatially averaging the recorded optical signatures. We also briefly illustrate two applications of our setup in the general areas of plasmonics and cell biology. Most notably, we deploy hyperspectral optical absorption microscopy to identify and image algal pigments within a single live Tisochrysis lutea cell. Overall, this work paves the way for multimodal multidimensional spectral imaging measurements spanning the realms of several scientific disciplines.

  17. Simultaneous fluorescence and quantitative phase microscopy with single-pixel detectors

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Suo, Jinli; Zhang, Yuanlong; Dai, Qionghai

    2018-02-01

    Multimodal microscopy offers high flexibility for biomedical observation and diagnosis. Conventional multimodal approaches either use multiple cameras or a single camera spatially multiplexing different modes; the former needs expertise-demanding alignment and the latter suffers from limited spatial resolution. Here, we report an alignment-free, full-resolution, simultaneous fluorescence and quantitative phase imaging approach using single-pixel detectors. By combining reference-free interferometry with single-pixel detection, we encode the phase and fluorescence of the sample in two detection arms at the same time. We then employ structured illumination and the correlated measurements between the sample and the illuminations for reconstruction. The recovered fluorescence and phase images are inherently aligned thanks to single-pixel detection. To validate the proposed method, we built a proof-of-concept setup, first imaging the phase of etched glass with a depth of a few hundred nanometers and then imaging the fluorescence and phase of a quantum dot drop. This method holds great potential for multispectral fluorescence microscopy with additional single-pixel detectors or a spectrometer. Besides, this cost-efficient multimodal system might find broad applications in biomedical science and neuroscience.
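
    The correlation-based recovery from structured illumination and single-pixel measurements can be illustrated with a minimal ghost-imaging-style sketch (illustrative only; the actual system encodes phase and fluorescence interferometrically, which is not reproduced here):

        import numpy as np

        def single_pixel_reconstruct(patterns, signals):
            """patterns: (K, H, W) illumination patterns; signals: (K,) single-pixel readings.
            Reconstruct the image as the correlation between mean-removed patterns and measurements."""
            P = np.asarray(patterns, dtype=float)
            s = np.asarray(signals, dtype=float)
            return np.tensordot(s - s.mean(), P - P.mean(axis=0), axes=1) / len(s)

        # Simulated acquisition of a hypothetical 32 x 32 object with random binary patterns.
        rng = np.random.default_rng(0)
        obj = np.zeros((32, 32)); obj[8:24, 12:20] = 1.0
        patterns = rng.integers(0, 2, size=(4000, 32, 32)).astype(float)
        signals = (patterns * obj).sum(axis=(1, 2))        # total light reaching the bucket detector
        image = single_pixel_reconstruct(patterns, signals)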

  18. Multimodal hyperspectral optical microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novikova, Irina V.; Smallwood, Chuck R.; Gong, Yu

    We describe a unique and convenient approach to multimodal hyperspectral optical microscopy, herein achieved by coupling a portable and transferable hyperspectral imager to various optical microscopes. The experimental and data analysis schemes involved in recording spectrally and spatially resolved fluorescence, dark field, and optical absorption micrographs are illustrated through prototypical measurements targeting selected model systems. Namely, hyperspectral fluorescence micrographs of isolated fluorescent beads are employed to ensure spectral calibration of our detector and to gauge the attainable spatial resolution of our measurements; the recorded images are diffraction-limited. Moreover, spatially over-sampled absorption spectroscopy of a single lipid (18:1 Liss Rhod PE) layer reveals that optical densities on the order of 10^-3 may be resolved by spatially averaging the recorded optical signatures. We also briefly illustrate two applications of our setup in the general areas of plasmonics and cell biology. Most notably, we deploy hyperspectral optical absorption microscopy to identify and image algal pigments within a single live Tisochrysis lutea cell. Overall, this work paves the way for multimodal multidimensional spectral imaging measurements spanning the realms of several scientific disciplines.

  19. Supervised learning based multimodal MRI brain tumour segmentation using texture features from supervoxels.

    PubMed

    Soltaninejad, Mohammadreza; Yang, Guang; Lambrou, Tryphon; Allinson, Nigel; Jones, Timothy L; Barrick, Thomas R; Howe, Franklyn A; Ye, Xujiong

    2018-04-01

    Accurate segmentation of brain tumour in magnetic resonance images (MRI) is a difficult task due to the variety of tumour types. Using information and features from multimodal MRI, including structural MRI and the isotropic (p) and anisotropic (q) components derived from diffusion tensor imaging (DTI), may result in a more accurate analysis of brain images. We propose a novel 3D supervoxel-based learning method for segmentation of tumour in multimodal MRI brain images (conventional MRI and DTI). Supervoxels are generated using the information across the multimodal MRI dataset. For each supervoxel, a variety of features is extracted, including histograms of a texton descriptor, calculated using a set of Gabor filters with different sizes and orientations, and first-order intensity statistical features. These features are fed into a random forest (RF) classifier to classify each supervoxel as tumour core, oedema or healthy brain tissue. The method is evaluated on two datasets: 1) our clinical dataset of 11 multimodal images of patients and 2) the BRATS 2013 clinical dataset of 30 multimodal images. For our clinical dataset, the average detection sensitivity of tumour (including tumour core and oedema) using multimodal MRI is 86% with a balanced error rate (BER) of 7%, while the Dice score for automatic tumour segmentation against ground truth is 0.84. The corresponding results for the BRATS 2013 dataset are 96%, 2% and 0.89, respectively. The method demonstrates promising results in the segmentation of brain tumour. Adding features from multimodal MRI images can substantially increase segmentation accuracy. The method provides a close match to expert delineation across all tumour grades, leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management. Copyright © 2018 Elsevier B.V. All rights reserved.
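
    The per-supervoxel classification step maps naturally onto a standard random forest; a minimal sketch follows (the feature extraction below is a rough stand-in using first-order statistics only, not the paper's Gabor/texton pipeline, and the array layouts are assumptions):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def supervoxel_features(volumes, supervoxel_labels):
            """volumes: (C, X, Y, Z) stack of MRI/DTI channels; supervoxel_labels: (X, Y, Z) integer map.
            Returns per-supervoxel first-order intensity statistics across all channels."""
            ids = np.unique(supervoxel_labels)
            feats = []
            for sv in ids:
                voxels = volumes[:, supervoxel_labels == sv]           # (C, n_voxels_in_supervoxel)
                feats.append(np.concatenate([voxels.mean(axis=1),
                                             voxels.std(axis=1),
                                             np.percentile(voxels, [25, 75], axis=1).ravel()]))
            return ids, np.stack(feats)

        # Tiny synthetic stand-in for annotated training data (0 = healthy, 1 = oedema, 2 = tumour core).
        rng = np.random.default_rng(0)
        X_train, y_train = rng.normal(size=(300, 16)), rng.integers(0, 3, size=300)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
        predicted_classes = clf.predict(X_train[:5])                   # one class label per supervoxel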

  20. Rational Design of a Triple Reporter Gene for Multimodality Molecular Imaging

    PubMed Central

    Hsieh, Ya-Ju; Ke, Chien-Chih; Yeh, Skye Hsin-Hsien; Lin, Chien-Feng; Chen, Fu-Du; Lin, Kang-Ping; Chen, Ran-Chou; Liu, Ren-Shyan

    2014-01-01

    Multimodality imaging using noncytotoxic triple fusion (TF) reporter genes is an important application for cell-based tracking, drug screening, and therapy. The firefly luciferase (fl), monomeric red fluorescence protein (mrfp), and truncated herpes simplex virus type 1 thymidine kinase SR39 mutant (ttksr39) genes were fused together to create TF reporter gene constructs in different orders. The enzymatic activities of the TF protein in vitro and in vivo were determined by luciferase reporter assay, an H-FEAU cellular uptake experiment, bioluminescence imaging, and micro-positron emission tomography (microPET). The TF construct expressed in H1299 cells possesses luciferase activity and red fluorescence. The tTKSR39 activity is preserved in the TF protein and mediates high levels of H-FEAU accumulation and significant cell death from ganciclovir (GCV) prodrug activation. In living animals, the luciferase and tTKSR39 activities of the TF protein have also been successfully validated by multimodality imaging systems. The red fluorescence signal is relatively weak for in vivo imaging but may expedite FACS-based selection of TF reporter-expressing cells. We have developed an optimized triple fusion reporter construct, DsRedm-fl-ttksr39, for more effective and sensitive in vivo animal imaging using fluorescence, bioluminescence, and PET imaging modalities, which may facilitate different fields of biomedical research and applications. PMID:24809057

  1. Multimodal optical imager for inner ear hearing loss diagnosis (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Park, Jesung; Maguluri, Gopi N.; Zhao, Youbo; Iftimia, Nicusor V.

    2017-02-01

    Sensorineural hearing loss (SNHL), which typically originates in the cochlea, is the most common otologic problem caused by aging and noise trauma. The cochlea, a delicate and complex biological mechanosensory transducer in the inner ear, has been extensively studied with the goal of improving diagnosis of SNHL. However, the difficulty associated with accessing the cochlea and resolving the microstructures that facilitate hearing within it in a minimally-invasive way has prevented us from being able to assess the pathology underlying SNHL in humans. To address this problem we investigated the ability of a multimodal optical system that combines optical coherence tomography (OCT) and single photon autofluorescence imaging (AFI) to enable visualization and evaluation of microstructures in the cochlea. A laboratory OCT/AFI imager was built to acquire high resolution OCT and single photon fluorescence images of the cochlea. The imager's ability to resolve diagnostically-relevant details was evaluated in ears extracted from normal and noise-exposed mice. A prototype endoscopic OCT/AFI imager was developed based on a double-clad fiber approach. Our measurements show that the multimodal OCT/AFI imager can be used to evaluate structural integrity in the mouse cochlea. Therefore, we believe that this technology is promising as a potential clinical evaluation tool, and as a technique for guiding otologic surgeries such as cochlear implant surgery.

  2. MO-DE-202-04: Multimodality Image-Guided Surgery and Intervention: For the Rest of Us

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shekhar, R.

    At least three major trends in surgical intervention have emerged over the last decade: a move toward a more minimally invasive (or non-invasive) approach to the surgical target; the development of high-precision treatment delivery techniques; and the increasing role of multi-modality intraoperative imaging in support of such procedures. This symposium includes invited presentations on recent advances in each of these areas and the emerging role for medical physics research in the development and translation of high-precision interventional techniques. The four speakers are: Keyvan Farahani, “Image-guided focused ultrasound surgery and therapy” Jeffrey H. Siewerdsen, “Advances in image registration and reconstruction for image-guided neurosurgery” Tina Kapur, “Image-guided surgery and interventions in the advanced multimodality image-guided operating (AMIGO) suite” Raj Shekhar, “Multimodality image-guided interventions: Multimodality for the rest of us” Learning Objectives: Understand the principles and applications of HIFU in surgical ablation. Learn about recent advances in 3D–2D and 3D deformable image registration in support of surgical safety and precision. Learn about recent advances in model-based 3D image reconstruction in application to intraoperative 3D imaging. Understand the multi-modality imaging technologies and clinical applications investigated in the AMIGO suite. Understand the emerging need and techniques to implement multi-modality image guidance in surgical applications such as neurosurgery, orthopaedic surgery, vascular surgery, and interventional radiology. Research supported by the NIH and Siemens Healthcare.; J. Siewerdsen; Grant Support - National Institutes of Health; Grant Support - Siemens Healthcare; Grant Support - Carestream Health; Advisory Board - Carestream Health; Licensing Agreement - Carestream Health; Licensing Agreement - Elekta Oncology.; T. Kapur, P41EB015898; R. Shekhar, Funding: R42CA137886 and R41CA192504 Disclosure and CoI: IGI Technologies, small-business partner on the grants.

  3. Tinnitus Multimodal Imaging

    DTIC Science & Technology

    2015-10-01

    AWARD NUMBER: W81XWH-13-1-0494. TITLE: Tinnitus Multimodal Imaging. PRINCIPAL INVESTIGATOR: Steven Wan Cheung. ABSTRACT: Tinnitus is a common auditory perceptual disorder whose neural substrates are under intense debate. This project

  4. A gantry-based tri-modality system for bioluminescence tomography

    PubMed Central

    Yan, Han; Lin, Yuting; Barber, William C.; Unlu, Mehmet Burcin; Gulsen, Gultekin

    2012-01-01

    A gantry-based tri-modality system that combines bioluminescence tomography (BLT), diffuse optical tomography (DOT), and x-ray computed tomography (XCT) into the same setting is presented here. The purpose of this system is to perform bioluminescence tomography using a multi-modality imaging approach. As part of this hybrid system, XCT and DOT provide anatomical information and background optical property maps. This structural and functional a priori information is used to guide and constrain the bioluminescence reconstruction algorithm and ultimately improve the BLT results. The performance of the combined system is evaluated using multi-modality phantoms. In particular, a cylindrical heterogeneous multi-modality phantom that contains regions with higher optical absorption and x-ray attenuation is constructed. We show that a 1.5 mm diameter bioluminescence inclusion can be localized accurately with the functional a priori information, while its source strength can be recovered more accurately using both the structural and functional a priori information. PMID:22559540
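
    One common way to fold such anatomical and functional priors into the inverse problem is a "soft prior" regulariser that penalises source differences only within regions labelled as homogeneous by the XCT/DOT step. The toy sketch below is not the authors' algorithm; the sensitivity matrix A, the measurement vector b and the per-node region labels are assumed inputs, and the regularisation weight is arbitrary.

    ```python
    # Toy prior-guided source reconstruction: minimise ||A x - b||^2 + alpha ||L x||^2,
    # where L only penalises deviations from the mean within each anatomical region.
    import numpy as np

    def soft_prior_matrix(region):
        """region: integer label per reconstruction node (e.g. from an XCT segmentation)."""
        region = np.asarray(region)
        n = len(region)
        L = np.zeros((n, n))
        for i in range(n):
            same = np.flatnonzero(region == region[i])
            L[i, same] = -1.0 / len(same)
            L[i, i] += 1.0
        return L

    def reconstruct(A, b, region, alpha=1e-2):
        L = soft_prior_matrix(region)
        H = A.T @ A + alpha * (L.T @ L)
        return np.linalg.solve(H, A.T @ b)
    ```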

  5. Quantitative reconstructions in multi-modal photoacoustic and optical coherence tomography imaging

    NASA Astrophysics Data System (ADS)

    Elbau, P.; Mindrinos, L.; Scherzer, O.

    2018-01-01

    In this paper we perform quantitative reconstruction of the electric susceptibility and the Grüneisen parameter of a non-magnetic linear dielectric medium using measurements from a multi-modal photoacoustic and optical coherence tomography system. We consider the mathematical model presented in Elbau et al (2015 Handbook of Mathematical Methods in Imaging ed O Scherzer (New York: Springer) pp 1169-204), where a Fredholm integral equation of the first kind for the Grüneisen parameter was derived. For the numerical solution of the integral equation, we consider a Galerkin-type method.
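
    As a reminder of what a Galerkin discretisation of a first-kind Fredholm equation looks like in practice, here is a generic toy example with piecewise-constant basis functions, midpoint quadrature and a small Tikhonov term (first-kind problems are ill-posed). The kernel and data are synthetic placeholders, not the model of the cited paper.

    ```python
    # Toy Galerkin discretisation of  ∫_0^1 k(s, u) f(u) du = g(s)  on a uniform grid.
    import numpy as np

    n = 200
    h = 1.0 / n
    t = (np.arange(n) + 0.5) * h                   # cell midpoints
    kernel = lambda s, u: np.exp(-np.abs(s - u))   # placeholder smooth kernel
    f_true = np.sin(np.pi * t)                     # used only to synthesise data

    # Piecewise-constant Galerkin matrix with midpoint quadrature:
    # A[i, j] ≈ ∫∫ φ_i(s) k(s, u) φ_j(u) du ds ≈ h * h * k(s_i, u_j)
    A = h * h * kernel(t[:, None], t[None, :])
    b = A @ f_true                                 # synthetic right-hand side

    # First-kind equations are ill-posed, so regularise before solving.
    alpha = 1e-8
    f_rec = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
    ```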

  6. A novel multimodal optical imaging system for early detection of oral cancer

    PubMed Central

    Malik, Bilal H.; Jabbour, Joey M.; Cheng, Shuna; Cuenca, Rodrigo; Cheng, Yi-Shing Lisa; Wright, John M.; Jo, Javier A.; Maitland, Kristen C.

    2015-01-01

    Objectives Several imaging techniques have been advocated as clinical adjuncts to improve identification of suspicious oral lesions. However, these have not yet shown superior sensitivity or specificity over conventional oral examination techniques. We developed a multimodal, multi-scale optical imaging system that combines macroscopic biochemical imaging of fluorescence lifetime imaging (FLIM) with subcellular morphologic imaging of reflectance confocal microscopy (RCM) for early detection of oral cancer. We tested our system on excised human oral tissues. Study Design A total of four tissue specimens were imaged. These specimens were diagnosed as one each: clinically normal, oral lichen planus, gingival hyperplasia, and superficially-invasive squamous cell carcinoma (SCC). The optical and fluorescence lifetime properties of each specimen were recorded. Results Both quantitative and qualitative differences between normal, benign and SCC lesions can be resolved with FLIM-RCM imaging. The results demonstrate that an integrated approach based on these two methods can potentially enable rapid screening and evaluation of large areas of oral epithelial tissue. Conclusions Early results from ongoing studies of imaging the human oral cavity illustrate the synergistic combination of the two modalities. An adjunct device based on such optical characterization of oral mucosa can potentially be used to detect oral carcinogenesis in early stages. PMID:26725720

  7. Multimodal Imaging of the Normal Eye.

    PubMed

    Kawali, Ankush; Pichi, Francesco; Avadhani, Kavitha; Invernizzi, Alessandro; Hashimoto, Yuki; Mahendradas, Padmamalini

    2017-10-01

    Multimodal imaging is the concept of "bundling" images obtained from various imaging modalities, viz., fundus photography, fundus autofluorescence imaging, infrared (IR) imaging, simultaneous fluorescein and indocyanine angiography, optical coherence tomography (OCT), and, more recently, OCT angiography. Each modality has its own strengths and limitations. Combining multiple imaging techniques overcomes their individual weaknesses and gives a comprehensive picture. Such an approach helps in accurately localizing a lesion and understanding posterior segment pathology. It is important to know the imaging appearance of the normal eye before evaluating pathology. This article describes multimodal imaging modalities in detail and discusses healthy eye features as seen on the various imaging modalities mentioned above.

  8. Fast segmentation and high-quality three-dimensional volume mesh creation from medical images for diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Jermyn, Michael; Ghadyani, Hamid; Mastanduno, Michael A.; Turner, Wes; Davis, Scott C.; Dehghani, Hamid; Pogue, Brian W.

    2013-08-01

    Multimodal approaches that combine near-infrared (NIR) and conventional imaging modalities have been shown to improve optical parameter estimation dramatically and thus represent a prevailing trend in NIR imaging. These approaches typically involve applying anatomical templates from magnetic resonance imaging/computed tomography/ultrasound images to guide the recovery of optical parameters. However, merging these data sets using current technology requires multiple software packages, substantial expertise, a significant time commitment, and often results in unacceptably poor mesh quality for optical image reconstruction, a reality that represents a significant roadblock for translational research of multimodal NIR imaging. This work addresses these challenges directly by introducing automated Digital Imaging and Communications in Medicine (DICOM) image stack segmentation and a new one-click three-dimensional mesh generator optimized for multimodal NIR imaging, and combining these capabilities into a single software package (available for free download) with a streamlined workflow. Image processing time and mesh quality benchmarks were examined for four common multimodal NIR use-cases (breast, brain, pancreas, and small animal) and were compared to a commercial image processing package. Applying these tools resulted in a fivefold decrease in image processing time and a 62% improvement in minimum mesh quality, in the absence of extra mesh postprocessing. These capabilities represent a significant step toward enabling translational multimodal NIR research for both expert and nonexpert users in an open-source platform.
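
    For orientation only, the snippet below shows the simplest version of such a workflow: read a DICOM series, form a crude binary mask, and extract a surface mesh. The published tool goes much further (one-click, FEM-quality volumetric meshes); the file path and threshold here are placeholders.

    ```python
    # Load a DICOM series, threshold it, and extract a triangulated surface.
    import glob
    import numpy as np
    import pydicom
    from skimage import measure

    files = sorted(glob.glob("series/*.dcm"))                # placeholder path
    slices = sorted((pydicom.dcmread(f) for f in files),
                    key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array.astype(np.float32) for s in slices])

    mask = volume > volume.mean()                            # crude intensity threshold
    verts, faces, normals, values = measure.marching_cubes(mask.astype(np.float32), level=0.5)
    print(f"surface mesh: {len(verts)} vertices, {len(faces)} triangles")
    ```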

  9. Correlative super-resolution fluorescence microscopy combined with optical coherence microscopy

    NASA Astrophysics Data System (ADS)

    Kim, Sungho; Kim, Gyeong Tae; Jang, Soohyun; Shim, Sang-Hee; Bae, Sung Chul

    2015-03-01

    Recent development of super-resolution fluorescence imaging techniques such as stochastic optical reconstruction microscopy (STORM) and photoactivated localization microscopy (PALM) has brought us beyond the diffraction limit. It opens up numerous opportunities in biology because a vast number of molecular structures formerly obscured by the lack of spatial resolution can now be directly observed. A drawback of fluorescence imaging, however, is that it lacks complete structural information. For this reason, we have developed a super-resolution multimodal imaging system based on STORM and full-field optical coherence microscopy (FF-OCM). FF-OCM is a type of interferometry system based on a broadband light source and a bulk Michelson interferometer, which provides label-free and non-invasive visualization of biological samples. The integration between the two systems is simple because both systems use a wide-field illumination scheme and a conventional microscope. This combined imaging system gives us both functional information at the molecular level (~20 nm) and structural information at the sub-cellular level (~1 μm). For thick samples such as tissue slices, while FF-OCM is readily capable of imaging the 3D architecture, STORM suffers from aberrations and high background fluorescence that substantially degrade the resolution. In order to correct the aberrations in thick tissues, we employed an adaptive optics system in the detection path of the STORM microscope. We used our multimodal system to obtain images of brain tissue samples with structural and functional information.

  10. Integrated femtosecond stimulated Raman scattering and two-photon fluorescence imaging of subcellular lipid and vesicular structures

    NASA Astrophysics Data System (ADS)

    Li, Xuesong; Lam, Wen Jiun; Cao, Zhe; Hao, Yan; Sun, Qiqi; He, Sicong; Mak, Ho Yi; Qu, Jianan Y.

    2015-11-01

    The primary goal of this study is to demonstrate that stimulated Raman scattering (SRS) as a new imaging modality can be integrated into a femtosecond (fs) nonlinear optical (NLO) microscope system. Femtosecond sources with high peak pulse power are routinely used in multimodal nonlinear microscopy to enable efficient excitation of multiple NLO signals. However, with fs excitation, the SRS imaging of subcellular lipid and vesicular structures encounters significant interference from proteins due to poor spectral resolution and a lack of chemical specificity, respectively. We developed a unique fs-excitation NLO microscope that enables rapid acquisition of SRS and multiple two-photon excited fluorescence (TPEF) signals. In the in vivo imaging of transgenic C. elegans animals, we discovered that by cross-filtering false positive lipid signals based on the TPEF signals from tryptophan-bearing endogenous proteins and lysosome-related organelles, the imaging system produced highly accurate assignment of SRS signals to lipids. Furthermore, we demonstrated that the multimodal NLO microscope system could sequentially image lipid structure/content and organelles, such as mitochondria, lysosomes, and the endoplasmic reticulum, which are intricately linked to lipid metabolism.

  11. Multimodality animal rotation imaging system (Mars) for in vivo detection of intraperitoneal tumors.

    PubMed

    Pizzonia, John; Holmberg, Jennie; Orton, Sean; Alvero, Ayesha; Viteri, Oscar; McLaughlin, William; Feke, Gil; Mor, Gil

    2012-01-01

    PROBLEM Ovarian cancer stem cells (OCSCs) have been postulated as the potential source of recurrence and chemoresistance. Therefore, identification of OCSCs and their complete removal is pivotal for the treatment of ovarian cancer. The objective of the following study was to develop a new in vivo imaging model that allows for the detection and monitoring of OCSCs. METHOD OF STUDY OCSCs were labeled with X-Sight 761 Nanospheres and injected intra-peritoneally (i.p.) and sub-cutaneously (s.c.) into athymic nude mice. The Carestream In-Vivo Imaging System FX was used to obtain X-ray and, concurrently, near-infrared fluorescence images. Tumor images in the mouse were observed from different angles by automatic rotation of the mouse. RESULTS X-Sight 761 Nanospheres labeled almost 100% of the cells. No difference in growth rate was observed between labeled and unlabeled cells. Tumors were observed, and monitoring revealed a strong signal for up to 21 days. CONCLUSION We describe the use of near-infrared nanoparticle probes for in vivo imaging of metastatic ovarian cancer models. Visualization of multiple sites around the animals was enhanced with the use of the Carestream Multimodal Animal Rotation System. © 2011 John Wiley & Sons A/S.

  12. Computer-assisted surgical planning and automation of laser delivery systems

    NASA Astrophysics Data System (ADS)

    Zamorano, Lucia J.; Dujovny, Manuel; Dong, Ada; Kadi, A. Majeed

    1991-05-01

    This paper describes a 'real time' surgical treatment planning interactive workstation, utilizing multimodality imaging (computed tomography, magnetic resonance imaging, digital angiography), that has been developed to provide the neurosurgeon with two-dimensional multiplanar and three-dimensional 'display' of a patient's lesion.

  13. Multimodal Image Registration through Simultaneous Segmentation.

    PubMed

    Aganj, Iman; Fischl, Bruce

    2017-11-01

    Multimodal image registration facilitates the combination of complementary information from images acquired with different modalities. Most existing methods require computation of the joint histogram of the images, while some perform joint segmentation and registration in alternating iterations. In this work, we introduce a new non-information-theoretical method for pairwise multimodal image registration, in which the segmentation error, computed using both images, is taken as the registration cost function. We empirically evaluate our method via rigid registration of multi-contrast brain magnetic resonance images, and demonstrate that the proposed technique often achieves higher registration accuracy than several existing methods.

  14. Multimodal hard x-ray imaging with resolution approaching 10 nm for studies in material science

    NASA Astrophysics Data System (ADS)

    Yan, Hanfei; Bouet, Nathalie; Zhou, Juan; Huang, Xiaojing; Nazaretski, Evgeny; Xu, Weihe; Cocco, Alex P.; Chiu, Wilson K. S.; Brinkman, Kyle S.; Chu, Yong S.

    2018-03-01

    We report multimodal scanning hard x-ray imaging with spatial resolution approaching 10 nm and its application to contemporary studies in the field of material science. The high spatial resolution is achieved by focusing hard x-rays with two crossed multilayer Laue lenses and raster-scanning a sample with respect to the nanofocusing optics. Various techniques are used to characterize and verify the achieved focus size and imaging resolution. The multimodal imaging is realized by utilizing simultaneously absorption-, phase-, and fluorescence-contrast mechanisms. The combination of high spatial resolution and multimodal imaging enables a comprehensive study of a sample on a very fine length scale. In this work, the unique multimodal imaging capability was used to investigate a mixed ionic-electronic conducting ceramic-based membrane material employed in solid oxide fuel cells and membrane separations (compound of Ce0.8Gd0.2O2‑x and CoFe2O4) which revealed the existence of an emergent material phase and quantified the chemical complexity at the nanoscale.

  16. Towards a compact fiber laser for multimodal imaging

    NASA Astrophysics Data System (ADS)

    Nie, Bai; Saytashev, Ilyas; Dantus, Marcos

    2014-03-01

    We report on multimodal depth-resolved imaging of unstained living Drosophila melanogaster larva using sub-50 fs pulses centered at 1060 nm wavelength. Both second harmonic and third harmonic generation imaging modalities are demonstrated.

  17. Multimodal breast cancer imaging using coregistered dynamic diffuse optical tomography and digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Zimmermann, Bernhard B.; Deng, Bin; Singh, Bhawana; Martino, Mark; Selb, Juliette; Fang, Qianqian; Sajjadi, Amir Y.; Cormier, Jayne; Moore, Richard H.; Kopans, Daniel B.; Boas, David A.; Saksena, Mansi A.; Carp, Stefan A.

    2017-04-01

    Diffuse optical tomography (DOT) is emerging as a noninvasive functional imaging method for breast cancer diagnosis and neoadjuvant chemotherapy monitoring. In particular, the multimodal approach of combining DOT with x-ray digital breast tomosynthesis (DBT) is especially synergistic as DBT prior information can be used to enhance the DOT reconstruction. DOT, in turn, provides a functional information overlay onto the mammographic images, increasing sensitivity and specificity to cancer pathology. We describe a dynamic DOT apparatus designed for tight integration with commercial DBT scanners and providing a fast (up to 1 Hz) image acquisition rate to enable tracking hemodynamic changes induced by the mammographic breast compression. The system integrates 96 continuous-wave and 24 frequency-domain source locations as well as 32 continuous wave and 20 frequency-domain detection locations into low-profile plastic plates that can easily mate to the DBT compression paddle and x-ray detector cover, respectively. We demonstrate system performance using static and dynamic tissue-like phantoms as well as in vivo images acquired from the pool of patients recalled for breast biopsies at the Massachusetts General Hospital Breast Imaging Division.

  18. Multimodal computational microscopy based on transport of intensity equation

    NASA Astrophysics Data System (ADS)

    Li, Jiaji; Chen, Qian; Sun, Jiasong; Zhang, Jialin; Zuo, Chao

    2016-12-01

    The transport of intensity equation (TIE) is a powerful tool for phase retrieval and quantitative phase imaging, which requires intensity measurements only at axially closely spaced planes, without a separate reference beam. It does not require coherent illumination and works well on conventional bright-field microscopes. The quantitative phase reconstructed by TIE gives valuable information that has been encoded in the complex wave field by passage through a sample of interest. Such information may provide tremendous flexibility to emulate various microscopy modalities computationally without requiring specialized hardware components. We develop the requisite theory to describe such a hybrid computational multimodal imaging system, which yields quantitative phase, Zernike phase contrast, differential interference contrast, and light field moment imaging, simultaneously. This makes a range of observation modes readily available for biomedical samples. We then demonstrate these ideas experimentally through time-lapse imaging of live HeLa cell mitosis. Experimental results verify that a tunable lens-based TIE system, combined with the appropriate postprocessing algorithm, can achieve a variety of promising imaging modalities in parallel with the quantitative phase images for the dynamic study of cellular processes.
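
    Under the usual simplifying assumptions (paraxial propagation, approximately uniform in-focus intensity), TIE phase retrieval reduces to an inverse Laplacian that can be applied in Fourier space. The sketch below illustrates only that generic step, not the authors' tunable-lens pipeline; the two defocused images, sampling parameters and regularisation constant are assumed inputs.

    ```python
    # Generic FFT-based TIE phase retrieval from two symmetrically defocused images.
    import numpy as np

    def tie_phase(I_plus, I_minus, dz, wavelength, pixel_size, eps=1e-9):
        """Return the phase (radians) assuming a roughly uniform in-focus intensity."""
        k = 2 * np.pi / wavelength
        dIdz = (I_plus - I_minus) / (2 * dz)               # axial intensity derivative
        I0 = np.mean(0.5 * (I_plus + I_minus))             # uniform-intensity assumption
        ny, nx = dIdz.shape
        fy = np.fft.fftfreq(ny, d=pixel_size)
        fx = np.fft.fftfreq(nx, d=pixel_size)
        FX, FY = np.meshgrid(fx, fy)
        q2 = (2 * np.pi) ** 2 * (FX ** 2 + FY ** 2)        # squared spatial frequency
        rhs_hat = np.fft.fft2(-k * dIdz / I0)              # Fourier transform of laplacian(phi)
        phi_hat = -rhs_hat / (q2 + eps)                    # regularised inverse Laplacian
        return np.real(np.fft.ifft2(phi_hat))
    ```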

  19. A multimodal image guiding system for Navigated Ultrasound Bronchoscopy (EBUS): A human feasibility study

    PubMed Central

    Hofstad, Erlend Fagertun; Amundsen, Tore; Langø, Thomas; Bakeng, Janne Beate Lervik; Leira, Håkon Olav

    2017-01-01

    Background Endobronchial ultrasound transbronchial needle aspiration (EBUS-TBNA) is the endoscopic method of choice for confirming lung cancer metastasis to mediastinal lymph nodes. Precision is crucial for correct staging and clinical decision-making. Navigation and multimodal imaging can potentially improve EBUS-TBNA efficiency. Aims To demonstrate the feasibility of a multimodal image guiding system using electromagnetic navigation for ultrasound bronchoscopy in humans. Methods Four patients referred for lung cancer diagnosis and staging with EBUS-TBNA were enrolled in the study. Target lymph nodes were predefined from the preoperative computed tomography (CT) images. A prototype convex probe ultrasound bronchoscope with an attached sensor for position tracking was used for EBUS-TBNA. Electromagnetic tracking of the ultrasound bronchoscope and ultrasound images allowed fusion of preoperative CT and intraoperative ultrasound in the navigation software. Navigated EBUS-TBNA was used to guide target lymph node localization and sampling. Navigation system accuracy was calculated, measured by the deviation between lymph node position in ultrasound and CT in three planes. Procedure time, diagnostic yield and adverse events were recorded. Results Preoperative CT and real-time ultrasound images were successfully fused and displayed in the navigation software during the procedures. Overall navigation accuracy (11 measurements) was 10.0 ± 3.8 mm, maximum 17.6 mm, minimum 4.5 mm. An adequate sample was obtained in 6/6 (100%) of targeted lymph nodes. No adverse events were registered. Conclusions Electromagnetically navigated EBUS-TBNA was feasible, safe and easy in this human pilot study. The clinical usefulness was clearly demonstrated. Fusion of real-time ultrasound, preoperative CT and electromagnetic navigational bronchoscopy provided controlled guidance to the level of the target, an intraoperative overview, and procedure documentation. PMID:28182758

  20. Calibration and analysis of a multimodal micro-CT and structured light imaging system for the evaluation of excised breast tissue

    NASA Astrophysics Data System (ADS)

    McClatchy, David M., III; Rizzo, Elizabeth J.; Meganck, Jeff; Kempner, Josh; Vicory, Jared; Wells, Wendy A.; Paulsen, Keith D.; Pogue, Brian W.

    2017-12-01

    A multimodal micro-computed tomography (micro-CT) and multi-spectral structured light imaging (SLI) system is introduced and systematically analyzed to test its feasibility to aid in margin delineation during breast conserving surgery (BCS). Phantom analysis of the micro-CT yielded a signal-to-noise ratio of 34, a contrast of 1.64, and a minimum detectable resolution of 240 μm for a 1.2 min scan. The SLI system, spanning wavelengths 490 nm to 800 nm and spatial frequencies up to 1.37 mm⁻¹, was evaluated with aqueous tissue-simulating phantoms having variations in particle size distribution, scatter density, and blood volume fraction. The reduced scattering coefficient, μs′, and the phase function parameter, γ, were accurately recovered over all wavelengths independent of blood volume fractions from 0% to 4%, assuming a flat sample geometry perpendicular to the imaging plane. The resolution of the optical system was tested with a step phantom, from which the modulation transfer function was calculated, yielding a maximum resolution of 3.78 cycles per mm. The three-dimensional spatial co-registration between the CT and optical imaging space was tested and shown to be accurate within 0.7 mm. A freshly resected breast specimen, with lobular carcinoma, fibrocystic disease, and adipose, was imaged with the system. The micro-CT provided visualization of the tumor mass and its spiculations, and SLI yielded superficial quantification of light scattering parameters for the malignant and benign tissue types. These results appear to be the first demonstration of SLI combined with standard medical tomography for imaging excised tumor specimens. While further investigations are needed to determine and test the spectral, spatial, and CT features required to classify tissue, this study demonstrates the ability of multimodal CT/SLI to quantify, visualize, and spatially navigate breast tumor specimens, which could potentially aid in the assessment of tumor margin status during BCS.
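
    The MTF-from-step-phantom analysis mentioned above follows a standard recipe: differentiate the edge spread function to obtain the line spread function, then take the magnitude of its Fourier transform. The generic sketch below is not the authors' exact processing chain; the edge profile and pixel size are assumed inputs.

    ```python
    # Estimate an MTF curve from a 1D profile across a step (edge) phantom.
    import numpy as np

    def mtf_from_edge(edge_profile, pixel_size_mm):
        esf = np.asarray(edge_profile, dtype=float)        # edge spread function
        lsf = np.gradient(esf)                             # line spread function
        if lsf.sum() != 0:
            lsf = lsf / lsf.sum()
        mtf = np.abs(np.fft.rfft(lsf))
        mtf = mtf / mtf[0]                                 # normalise to unity at DC
        freqs = np.fft.rfftfreq(len(lsf), d=pixel_size_mm) # cycles per mm
        return freqs, mtf

    # The resolution limit can then be read off as the frequency at which the
    # MTF falls below a chosen threshold (e.g. 10%).
    ```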

  1. A hybrid multimodal non-rigid registration of MR images based on diffeomorphic demons.

    PubMed

    Lu, Huanxiang; Cattin, Philippe C; Reyes, Mauricio

    2010-01-01

    In this paper we present a novel hybrid approach for multimodal medical image registration based on diffeomorphic demons. Diffeomorphic demons have proven to be a robust and efficient approach to intensity-based image registration. A recent extension even allows mutual information (MI) to be used as a similarity measure to register multimodal images. However, due to the intensity correspondence uncertainty in some anatomical regions, it is difficult for a purely intensity-based algorithm to solve the registration problem. Therefore, we propose to combine the resulting transformations from both intensity-based and landmark-based methods for multimodal non-rigid registration based on diffeomorphic demons. Several experiments on different types of MR images were conducted, for which we show that a better anatomical correspondence between the images can be obtained using the hybrid approach than using either intensity information or landmarks alone.
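
    For readers unfamiliar with the baseline method, the SimpleITK snippet below runs a plain diffeomorphic-demons registration. The stock filter is driven by intensity differences (essentially mono-modal), which is why histogram matching is applied first; the MI-driven, landmark-augmented hybrid described in the paper is not shown. File names and parameter values are placeholders.

    ```python
    # Baseline diffeomorphic-demons registration with SimpleITK.
    import SimpleITK as sitk

    fixed = sitk.ReadImage("fixed_mr.nii.gz", sitk.sitkFloat32)    # placeholder files
    moving = sitk.ReadImage("moving_mr.nii.gz", sitk.sitkFloat32)

    # Rough intensity normalisation so the demons forces are meaningful.
    moving = sitk.HistogramMatching(moving, fixed)

    demons = sitk.DiffeomorphicDemonsRegistrationFilter()
    demons.SetNumberOfIterations(50)
    demons.SetStandardDeviations(1.5)          # Gaussian smoothing of the update field
    displacement = demons.Execute(fixed, moving)

    transform = sitk.DisplacementFieldTransform(displacement)
    warped = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
    ```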

  2. Multimodal MSI in Conjunction with Broad Coverage Spatially Resolved MS2 Increases Confidence in Both Molecular Identification and Localization.

    PubMed

    Veličković, Dušan; Chu, Rosalie K; Carrell, Alyssa A; Thomas, Mathew; Paša-Tolić, Ljiljana; Weston, David J; Anderton, Christopher R

    2018-01-02

    One critical aspect of mass spectrometry imaging (MSI) is the need to confidently identify detected analytes. While orthogonal tandem MS (e.g., LC-MS2) experiments from sample extracts can assist in annotating ions, the spatial information about these molecules is lost. Accordingly, this could lead to misleading conclusions, especially in cases where isobaric species exhibit different distributions within a sample. In this Technical Note, we employed a multimodal imaging approach, using matrix-assisted laser desorption/ionization (MALDI)-MSI and liquid extraction surface analysis (LESA)-MS2I, to confidently annotate and localize a broad range of metabolites involved in a tripartite symbiosis system of moss, cyanobacteria, and fungus. We found that the combination of these two imaging modalities generated very congruent ion images, providing the link between the highly accurate structural information offered by LESA and the high spatial resolution attainable by MALDI. These results demonstrate how this combined methodology could be very useful in differentiating metabolite routes in complex systems.

  3. MINC 2.0: A Flexible Format for Multi-Modal Images.

    PubMed

    Vincent, Robert D; Neelin, Peter; Khalili-Mahani, Najmeh; Janke, Andrew L; Fonov, Vladimir S; Robbins, Steven M; Baghdadi, Leila; Lerch, Jason; Sled, John G; Adalat, Reza; MacDonald, David; Zijdenbos, Alex P; Collins, D Louis; Evans, Alan C

    2016-01-01

    It is often useful for an imaging data format to afford rich metadata, be flexible, scale to very large file sizes, support multi-modal data, and have strong inbuilt mechanisms for data provenance. Beginning in 1992, MINC was developed as a system for flexible, self-documenting representation of neuroscientific imaging data with arbitrary orientation and dimensionality. The MINC system incorporates three broad components: a file format specification, a programming library, and a growing set of tools. In the early 2000s, the MINC developers created MINC 2.0, which added support for 64-bit file sizes, internal compression, and a number of other modern features. Because of its extensible design, it has been easy to incorporate details of provenance in the header metadata, including an explicit processing history, unique identifiers, and vendor-specific scanner settings. This makes MINC ideal for use in large-scale imaging studies and databases. It also makes it easy to adapt to new scanning sequences and modalities.
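
    For quick inspection of MINC volumes from Python, nibabel is one commonly used option (to the best of our knowledge it offers read-only support for MINC 1 and MINC 2); the filename below is a placeholder.

    ```python
    # Read a MINC volume and inspect its data and geometry.
    import nibabel as nib

    img = nib.load("subject01_t1.mnc")     # placeholder MINC 2.0 file
    data = img.get_fdata()                 # voxel data as a NumPy array
    print(img.shape)                       # dimensionality
    print(img.affine)                      # voxel-to-world transform
    ```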

  4. A portable microscopy system for fluorescence, polarized, and brightfield imaging

    NASA Astrophysics Data System (ADS)

    Gordon, Paul; Wattinger, Rolla; Lewis, Cody; Venancio, Vinicius Paula; Mertens-Talcott, Susanne U.; Coté, Gerard

    2018-02-01

    The use of mobile phones to conduct diagnostic microscopy at the point of care presents intriguing possibilities for the advancement of high-quality medical care in remote settings. However, it is challenging to create a single device that can adapt to the ever-varying camera technologies in phones or that can image with the customization that multiple modalities require for applications such as malaria diagnosis. A portable multi-modal microscope system is presented that utilizes a Raspberry Pi to collect and transmit data wirelessly to a myriad of electronic devices for image analysis. The microscopy system provides the user with correlated brightfield, polarized, and fluorescent images of samples fixed on traditional microscopy slides. The multimodal diagnostic capabilities of the microscope were assessed by measuring parasitemia of Plasmodium falciparum-infected thin blood smears. The device is capable of detecting fluorescently labeled DNA using FITC excitation (490 nm) and emission (525 nm), imaging the birefringent P. falciparum byproduct hemozoin, and resolving brightfield absorption features down to 0.78 micrometers (group 9, element 3 of a 1951 Air Force target). This microscopy system is a novel portable imaging tool that may be a viable candidate for field implementation if challenges of system durability, cost considerations, and full automation can be overcome.
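
    The quoted 0.78 micrometer figure can be checked against the 1951 USAF target convention, in which the line-pair frequency of group g, element e is 2^(g + (e - 1)/6) line pairs per mm and the resolvable line width is half a line pair:

    ```python
    # Resolution implied by group 9, element 3 of a 1951 USAF resolution target.
    group, element = 9, 3
    lp_per_mm = 2 ** (group + (element - 1) / 6)
    line_width_um = 1000.0 / (2 * lp_per_mm)
    print(f"{lp_per_mm:.0f} lp/mm -> {line_width_um:.2f} µm")   # ≈ 0.78 µm
    ```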

  5. Deep Multimodal Distance Metric Learning Using Click Constraints for Image Ranking.

    PubMed

    Yu, Jun; Yang, Xiaokang; Gao, Fei; Tao, Dacheng

    2017-12-01

    How do we retrieve images accurately? Also, how do we rank a group of images precisely and efficiently for specific queries? These problems are critical for researchers and engineers building novel image search engines. First, it is important to obtain an appropriate description that effectively represents the images. In this paper, multimodal features are considered for describing images. The images' unique properties are reflected by visual features, which are correlated to each other. However, semantic gaps always exist between images' visual features and their semantics. Therefore, we utilize click features to reduce the semantic gap. The second key issue is learning an appropriate distance metric to combine these multimodal features. This paper develops a novel deep multimodal distance metric learning (Deep-MDML) method. A structured ranking model is adopted to utilize both visual and click features in distance metric learning (DML). Specifically, images and their related ranking results are first collected to form the training set. Multimodal features, including click and visual features, are collected with these images. Next, a group of autoencoders is applied to obtain an initial distance metric in different visual spaces, and an MDML method is used to assign optimal weights for different modalities. Then, we conduct alternating optimization to train the ranking model, which is used for the ranking of new queries with click features. Compared with existing image ranking methods, the proposed method adopts a new ranking model to use multimodal features, including click features and visual features, in DML. We conducted experiments on two benchmark data sets to evaluate the proposed Deep-MDML, and the results validate the effectiveness of the method.
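
    As a schematic of the general idea (not the authors' Deep-MDML architecture), the PyTorch sketch below embeds visual and click features with separate encoders, combines them with learnable modality weights, and enforces ranking constraints with a margin-based loss. All dimensions, layer widths and the margin are illustrative.

    ```python
    # Schematic multimodal embedding + margin ranking loss (illustrative only).
    import torch
    import torch.nn as nn

    class ModalityEncoder(nn.Module):
        def __init__(self, in_dim, out_dim=64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, out_dim))

        def forward(self, x):
            return self.net(x)

    visual_enc = ModalityEncoder(in_dim=512)          # assumed visual feature size
    click_enc = ModalityEncoder(in_dim=100)           # assumed click feature size
    weights = nn.Parameter(torch.tensor([0.5, 0.5]))  # learnable modality weights

    def embed(visual, click):
        return weights[0] * visual_enc(visual) + weights[1] * click_enc(click)

    def distance(a, b):
        return ((a - b) ** 2).sum(dim=1)

    # For a query q, a more relevant image p and a less relevant image n, the
    # constraint d(q, p) + margin < d(q, n) is enforced with a ranking loss:
    loss_fn = nn.MarginRankingLoss(margin=1.0)
    # loss = loss_fn(distance(q_emb, n_emb), distance(q_emb, p_emb),
    #                torch.ones(batch_size))
    ```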

  6. Multimodality cardiac imaging at IRCCS Policlinico San Donato: a new interdisciplinary vision.

    PubMed

    Lombardi, Massimo; Secchi, Francesco; Pluchinotta, Francesca R; Castelvecchio, Serenella; Montericcio, Vincenzo; Camporeale, Antonia; Bandera, Francesco

    2016-04-28

    Multimodality imaging is the efficient integration of various methods of cardiovascular imaging to improve the ability to diagnose, guide therapy, or predict outcome. This approach implies both the availability of different technologies in a single unit and the presence of dedicated staff with cardiologic and radiologic background and certified competence in more than one imaging technique. Interaction with clinical practice and existence of research programmes and educational activities are pivotal for the success of this model. The aim of this paper is to describe the multimodality cardiac imaging programme recently started at San Donato Hospital.

  7. Polarization-Sensitive Hyperspectral Imaging in vivo: A Multimode Dermoscope for Skin Analysis

    NASA Astrophysics Data System (ADS)

    Vasefi, Fartash; MacKinnon, Nicholas; Saager, Rolf B.; Durkin, Anthony J.; Chave, Robert; Lindsley, Erik H.; Farkas, Daniel L.

    2014-05-01

    Attempts to understand the changes in the structure and physiology of human skin abnormalities by non-invasive optical imaging are aided by spectroscopic methods that quantify, at the molecular level, variations in tissue oxygenation and melanin distribution. However, current commercial and research systems to map hemoglobin and melanin do not correlate well with pathology for pigmented lesions or darker skin. We developed a multimode dermoscope that combines polarization and hyperspectral imaging with an efficient analytical model to map the distribution of specific skin bio-molecules. This corrects for the melanin-hemoglobin misestimation common to other systems, without resorting to complex and computationally intensive tissue optical models. As a proof of concept, human skin measurements were performed in volunteers on melanocytic nevus, vitiligo, and venous occlusion conditions. The resulting molecular distribution maps matched physiological and anatomical expectations, confirming a technological approach that can be applied to next-generation dermoscopes, with a biological plausibility likely to appeal to dermatologists.
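
    At its simplest, the chromophore mapping described here is a linear spectral unmixing problem. The sketch below shows only that generic step (it is not the authors' analytical model); the wavelengths and extinction values are made-up placeholders rather than tabulated data.

    ```python
    # Generic linear unmixing of diffuse reflectance into chromophore maps.
    import numpy as np

    wavelengths = np.array([540, 560, 580, 620, 660])   # nm, illustrative
    E = np.array([                                      # columns: HbO2, Hb, melanin
        [35.0, 30.0, 9.0],                              # placeholder coefficients
        [32.0, 34.0, 8.0],
        [38.0, 28.0, 7.5],
        [ 5.0, 15.0, 6.0],
        [ 2.0,  8.0, 5.0],
    ])

    def unmix(reflectance):
        """reflectance: (n_wavelengths, n_pixels) diffuse reflectance values."""
        absorbance = -np.log(np.clip(reflectance, 1e-6, None))   # modified Beer-Lambert
        concentrations, *_ = np.linalg.lstsq(E, absorbance, rcond=None)
        return concentrations                            # (3, n_pixels) relative maps
    ```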

  8. Use of multidimensional, multimodal imaging and PACS to support neurological diagnoses

    NASA Astrophysics Data System (ADS)

    Wong, Stephen T. C.; Knowlton, Robert C.; Hoo, Kent S.; Huang, H. K.

    1995-05-01

    Technological advances in brain imaging have revolutionized diagnosis in neurology and neurological surgery. Major imaging techniques include magnetic resonance imaging (MRI) to visualize structural anatomy, positron emission tomography (PET) to image metabolic function and cerebral blood flow, magnetoencephalography (MEG) to visualize the location of physiologic current sources, and magnetic resonance spectroscopy (MRS) to measure specific biochemicals. Each of these techniques studies different biomedical aspects of the brain, but an effective means to quantify and correlate the disparate imaging datasets, and thereby improve clinical decision-making, has been lacking. This paper describes several techniques developed in a UNIX-based neurodiagnostic workstation to aid the noninvasive presurgical evaluation of epilepsy patients. These techniques include online access to the picture archiving and communication systems (PACS) multimedia archive, coregistration of multimodality image datasets, and correlation and quantitation of structural and functional information contained in the registered images. For illustration, we describe the use of these techniques in a patient case of nonlesional neocortical epilepsy. We also present our future work based on preliminary studies.

  9. Ions doped melanin nanoparticle as a multiple imaging agent.

    PubMed

    Ha, Shin-Woo; Cho, Hee-Sang; Yoon, Young Il; Jang, Moon-Sun; Hong, Kwan Soo; Hui, Emmanuel; Lee, Jung Hee; Yoon, Tae-Jong

    2017-10-10

    Multimodal nanomaterials are useful for providing enhanced diagnostic information simultaneously for a variety of in vivo imaging methods. According to our research findings, these multimodal nanomaterials offer promising applications for cancer therapy. Melanin nanoparticles can be used as a platform imaging material, and they can be simply produced by complexation with various imaging-active ions. They are capable of specifically targeting epidermal growth factor receptor (EGFR)-expressing cancer cells by being anchored with a specific antibody. Ion-doped melanin nanoparticles were found to have high bioavailability with long-term stability in solution, without any cytotoxicity in both in vitro and in vivo systems. By combining different imaging modalities with melanin particles, we can use the complexes to obtain faster diagnoses by computed tomography deep-body imaging and more detailed pathological diagnostic information by magnetic resonance imaging. The ion-doped melanin nanoparticles also have applications for radio-diagnostic treatment and radio imaging-guided surgery, warranting further proof-of-concept experiments.

  10. Dye-enhanced multimodal confocal imaging as a novel approach to intraoperative diagnosis of brain tumors.

    PubMed

    Snuderl, Matija; Wirth, Dennis; Sheth, Sameer A; Bourne, Sarah K; Kwon, Churl-Su; Ancukiewicz, Marek; Curry, William T; Frosch, Matthew P; Yaroslavsky, Anna N

    2013-01-01

    Intraoperative diagnosis plays an important role in accurate sampling of brain tumors, limiting the number of biopsies required and improving the distinction between brain and tumor. The goal of this study was to evaluate dye-enhanced multimodal confocal imaging for discriminating gliomas from nonglial brain tumors and from normal brain tissue for diagnostic use. We investigated a total of 37 samples including glioma (13), meningioma (7), metastatic tumors (9) and normal brain removed for nontumoral indications (8). Tissue was stained in a 0.05 mg/mL aqueous solution of methylene blue (MB) for 2-5 minutes, and multimodal confocal images were acquired using a custom-built microscope. After imaging, tissue was formalin-fixed and paraffin-embedded for standard neuropathologic evaluation. Thirteen pathologists provided diagnoses based on the multimodal confocal images. The investigated tumor types exhibited distinctive and complementary characteristics in both the reflectance and fluorescence responses. Images showed distinct morphological features similar to standard histology. Pathologists were able to distinguish gliomas from normal brain tissue and nonglial brain tumors, and to render diagnoses from the images in a manner comparable to haematoxylin and eosin (H&E) slides. These results confirm the feasibility of multimodal confocal imaging for intravital intraoperative diagnosis. © 2012 The Authors; Brain Pathology © 2012 International Society of Neuropathology.

  11. Multimodal MSI in Conjunction with Broad Coverage Spatially Resolved MS2 Increases Confidence in Both Molecular Identification and Localization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veličković, Dušan; Chu, Rosalie K.; Carrell, Alyssa A.

    One critical aspect of mass spectrometry imaging (MSI) is the need to confidently identify detected analytes. While orthogonal tandem MS (e.g., LC-MS2) experiments from sample extracts can assist in annotating ions, the spatial information about these molecules is lost. Accordingly, this could lead to misleading conclusions, especially in cases where isobaric species exhibit different distributions within a sample. In this Technical Note, we employed a multimodal imaging approach, using matrix-assisted laser desorption/ionization (MALDI)-MSI and liquid extraction surface analysis (LESA)-MS2I, to confidently annotate and localize a broad range of metabolites involved in a tripartite symbiosis system of moss, cyanobacteria, and fungus. We found that the combination of these two imaging modalities generated very congruent ion images, providing the link between the highly accurate structural information offered by LESA and the high spatial resolution attainable by MALDI. These results demonstrate how this combined methodology could be very useful in differentiating metabolite routes in complex systems.

  12. Multi-modal molecular diffuse optical tomography system for small animal imaging

    PubMed Central

    Guggenheim, James A.; Basevi, Hector R. A.; Frampton, Jon; Styles, Iain B.; Dehghani, Hamid

    2013-01-01

    A multi-modal optical imaging system for quantitative 3D bioluminescence and functional diffuse imaging is presented, which has no moving parts and uses mirrors to provide multi-view tomographic data for image reconstruction. It is demonstrated that through the use of trans-illuminated spectral near infrared measurements and spectrally constrained tomographic reconstruction, recovered concentrations of absorbing agents can be used as prior knowledge for bioluminescence imaging within the visible spectrum. Additionally, the first use of a recently developed multi-view optical surface capture technique is shown and its application to model-based image reconstruction and free-space light modelling is demonstrated. The benefits of model-based tomographic image recovery as compared to 2D planar imaging are highlighted in a number of scenarios where the internal luminescence source is not visible or is confounding in 2D images. The results presented show that the luminescence tomographic imaging method produces 3D reconstructions of individual light sources within a mouse-sized solid phantom that are accurately localised to within 1.5 mm for a range of target locations and depths, indicating sensitivity and accurate imaging throughout the phantom volume. Additionally, the total reconstructed luminescence source intensity is consistent to within 15%, which is a dramatic improvement upon standard bioluminescence imaging. Finally, results from a heterogeneous phantom with an absorbing anomaly are presented, demonstrating the use and benefits of a multi-view, spectrally constrained coupled imaging system that provides accurate 3D luminescence images. PMID:24954977

  13. Large field of view, fast and low dose multimodal phase-contrast imaging at high x-ray energy.

    PubMed

    Astolfo, Alberto; Endrizzi, Marco; Vittoria, Fabio A; Diemoz, Paul C; Price, Benjamin; Haig, Ian; Olivo, Alessandro

    2017-05-19

    X-ray phase contrast imaging (XPCI) is an innovative imaging technique which extends the contrast capabilities of 'conventional' absorption-based x-ray systems. However, so far all XPCI implementations have suffered from one or more of the following limitations: low x-ray energies, small field of view (FOV) and long acquisition times. Those limitations relegated XPCI to a 'research-only' technique with an uncertain future in terms of large scale, high impact applications. We recently succeeded in designing, realizing and testing an XPCI system, which achieves significant steps toward simultaneously overcoming these limitations. Our system combines, for the first time, large FOV, high energy and fast scanning. Importantly, it is capable of providing high image quality at low x-ray doses, compatible with or even below those currently used in medical imaging. This extends the use of XPCI to areas which were impractical or even inaccessible to previous XPCI solutions. We expect this will enable a long overdue translation into application fields such as security screening, industrial inspections and large FOV medical radiography, all with the inherent advantages of the XPCI multimodality.

  14. Toward in vivo diagnosis of skin cancer using multimode imaging dermoscopy: (II) molecular mapping of highly pigmented lesions

    NASA Astrophysics Data System (ADS)

    Vasefi, Fartash; MacKinnon, Nicholas; Farkas, Daniel L.

    2014-03-01

    We have developed a multimode imaging dermoscope that combines polarization and hyperspectral imaging with a computationally rapid analytical model. This approach employs specific spectral ranges of visible and near infrared wavelengths for mapping the distribution of specific skin bio-molecules. This corrects for the melanin-hemoglobin misestimation common to other systems, without resorting to complex and computationally intensive tissue optical models that are prone to inaccuracies due to over-modeling. Various human skin measurements, including a melanocytic nevus and venous occlusion conditions, were investigated and compared with other ratiometric spectral imaging approaches. Access to the broad range of hyperspectral data in the visible and near-infrared range allows our algorithm to flexibly use different wavelength ranges for chromophore estimation while minimizing melanin-hemoglobin optical signature cross-talk.

  15. MO-DE-202-00: Image-Guided Interventions: Advances in Intraoperative Imaging, Guidance, and An Emerging Role for Medical Physics in Surgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    At least three major trends in surgical intervention have emerged over the last decade: a move toward a more minimally invasive (or non-invasive) approach to the surgical target; the development of high-precision treatment delivery techniques; and the increasing role of multi-modality intraoperative imaging in support of such procedures. This symposium includes invited presentations on recent advances in each of these areas and the emerging role for medical physics research in the development and translation of high-precision interventional techniques. The four speakers are: Keyvan Farahani, “Image-guided focused ultrasound surgery and therapy” Jeffrey H. Siewerdsen, “Advances in image registration and reconstruction for image-guided neurosurgery” Tina Kapur, “Image-guided surgery and interventions in the advanced multimodality image-guided operating (AMIGO) suite” Raj Shekhar, “Multimodality image-guided interventions: Multimodality for the rest of us” Learning Objectives: Understand the principles and applications of HIFU in surgical ablation. Learn about recent advances in 3D–2D and 3D deformable image registration in support of surgical safety and precision. Learn about recent advances in model-based 3D image reconstruction in application to intraoperative 3D imaging. Understand the multi-modality imaging technologies and clinical applications investigated in the AMIGO suite. Understand the emerging need and techniques to implement multi-modality image guidance in surgical applications such as neurosurgery, orthopaedic surgery, vascular surgery, and interventional radiology. Research supported by the NIH and Siemens Healthcare.; J. Siewerdsen; Grant Support - National Institutes of Health; Grant Support - Siemens Healthcare; Grant Support - Carestream Health; Advisory Board - Carestream Health; Licensing Agreement - Carestream Health; Licensing Agreement - Elekta Oncology.; T. Kapur, P41EB015898; R. Shekhar, Funding: R42CA137886 and R41CA192504 Disclosure and CoI: IGI Technologies, small-business partner on the grants.

  16. MO-DE-202-02: Advances in Image Registration and Reconstruction for Image-Guided Neurosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siewerdsen, J.

    At least three major trends in surgical intervention have emerged over the last decade: a move toward a more minimally invasive (or non-invasive) approach to the surgical target; the development of high-precision treatment delivery techniques; and the increasing role of multi-modality intraoperative imaging in support of such procedures. This symposium includes invited presentations on recent advances in each of these areas and the emerging role for medical physics research in the development and translation of high-precision interventional techniques. The four speakers are: Keyvan Farahani, “Image-guided focused ultrasound surgery and therapy” Jeffrey H. Siewerdsen, “Advances in image registration and reconstruction for image-guided neurosurgery” Tina Kapur, “Image-guided surgery and interventions in the advanced multimodality image-guided operating (AMIGO) suite” Raj Shekhar, “Multimodality image-guided interventions: Multimodality for the rest of us” Learning Objectives: Understand the principles and applications of HIFU in surgical ablation. Learn about recent advances in 3D–2D and 3D deformable image registration in support of surgical safety and precision. Learn about recent advances in model-based 3D image reconstruction in application to intraoperative 3D imaging. Understand the multi-modality imaging technologies and clinical applications investigated in the AMIGO suite. Understand the emerging need and techniques to implement multi-modality image guidance in surgical applications such as neurosurgery, orthopaedic surgery, vascular surgery, and interventional radiology. Research supported by the NIH and Siemens Healthcare.; J. Siewerdsen; Grant Support - National Institutes of Health; Grant Support - Siemens Healthcare; Grant Support - Carestream Health; Advisory Board - Carestream Health; Licensing Agreement - Carestream Health; Licensing Agreement - Elekta Oncology.; T. Kapur, P41EB015898; R. Shekhar, Funding: R42CA137886 and R41CA192504 Disclosure and CoI: IGI Technologies, small-business partner on the grants.

  17. A high-resolution multimode digital microscope system.

    PubMed

    Salmon, Edward D; Shaw, Sidney L; Waters, Jennifer C; Waterman-Storer, Clare M; Maddox, Paul S; Yeh, Elaine; Bloom, Kerry

    2013-01-01

    This chapter describes the development of a high-resolution, multimode digital imaging system based on a wide-field epifluorescent and transmitted light microscope, and a cooled charge-coupled device (CCD) camera. The three main parts of this imaging system are the Nikon FXA microscope, the Hamamatsu C4880 cooled CCD camera, and the MetaMorph digital imaging system. This chapter presents various design criteria for the instrument and describes the major features of the microscope components: the cooled CCD camera and the MetaMorph digital imaging system. The Nikon FXA upright microscope can produce high-resolution images for both epifluorescent and transmitted light illumination without switching the objective or moving the specimen. The functional aspects of the microscope set-up can be considered in terms of the imaging optics, the epi-illumination optics, the transillumination optics, the focus control, and the vibration isolation table. This instrument is somewhat specialized for microtubule and mitosis studies, and it is also applicable to a variety of problems in cellular imaging, including tracking proteins fused to the green fluorescent protein in live cells. The instrument is also valuable for correlating the assembly dynamics of individual cytoplasmic microtubules (labeled by conjugating X-rhodamine to tubulin) with the dynamics of membranes of the endoplasmic reticulum (labeled with DiOC6) and the dynamics of the cell cortex (by differential interference contrast) in migrating vertebrate epithelial cells. This imaging system also plays an important role in the analysis of mitotic mutants in the powerful yeast genetic system Saccharomyces cerevisiae. Copyright © 1998 Elsevier Inc. All rights reserved.

  18. The multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    PubMed Central

    Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.

    2014-01-01

    The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computed tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019

  19. New developments in multimodal clinical multiphoton tomography

    NASA Astrophysics Data System (ADS)

    König, Karsten

    2011-03-01

    Eighty years ago, the PhD student Maria Goeppert predicted two-photon effects in her thesis in Goettingen, Germany. It took 30 years to prove her theory, and another three decades to realize the first two-photon microscope. With the beginning of this millennium, the first clinical multiphoton tomographs started operation in research institutions, hospitals, and in the cosmetic industry. The multiphoton tomograph MPTflex™, with its miniaturized flexible scan head, became the Prism Award 2010 winner in the category Life Sciences. Multiphoton tomographs, with their superior submicron spatial resolution, can be upgraded to 5D imaging tools by adding spectral time-correlated single photon counting units. Furthermore, multimodal hybrid tomographs provide chemical fingerprinting and fast wide-field imaging. The world's first clinical CARS studies were performed with a hybrid multimodal multiphoton tomograph in spring 2010. In particular, nonfluorescent lipids and water as well as mitochondrial fluorescent NAD(P)H, fluorescent elastin, keratin, and melanin as well as SHG-active collagen have been imaged in patients with dermatological disorders. Further multimodal approaches include the combination of multiphoton tomographs with low-resolution imaging tools such as ultrasound, optoacoustic, OCT, and dermoscopy systems. Multiphoton tomographs are currently employed in Australia, Japan, the US, and in several European countries for early diagnosis of skin cancer (malignant melanoma), optimization of treatment strategies (wound healing, dermatitis), and cosmetic research including long-term biosafety tests of ZnO sunscreen nanoparticles and the measurement of the stimulated biosynthesis of collagen by anti-ageing products.

  20. Hybrid Core-Shell (HyCoS) Nanoparticles produced by Complex Coacervation for Multimodal Applications

    NASA Astrophysics Data System (ADS)

    Vecchione, D.; Grimaldi, A. M.; Forte, E.; Bevilacqua, Paolo; Netti, P. A.; Torino, E.

    2017-03-01

    Multimodal imaging probes can provide diagnostic information by combining different imaging modalities. Nanoparticles (NPs) can contain two or more imaging tracers that allow several diagnostic techniques to be used simultaneously. In this work, a complex coacervation process to produce completely biocompatible core-shell polymeric nanoparticles (HyCoS) for multimodal imaging applications is described. Innovations over the traditional coacervation process include control of the reaction temperature, which speeds up the reaction itself, and the production of a double-crosslinked system to improve the stability of the nanostructures in the presence of a clinically relevant contrast agent for MRI (Gd-DTPA). Through control of the crosslinking behavior, an increase of up to six-fold in the relaxometric properties of Gd-DTPA is achieved. Furthermore, HyCoS can be loaded with a high amount of a dye such as ATTO 633 or conjugated with a model dye such as FITC for in vivo optical imaging. The results show stable core-shell polymeric nanoparticles that can be used both for MRI and for optical applications, allowing detection free from harmful radiation. Additionally, preliminary results on the possibility of triggering drug release through a pH effect are reported.
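
    To make the reported relaxometric gain concrete, the sketch below applies the standard linear relaxivity model R1 = R1,0 + r1·[Gd]; the relaxivity (about 4 mM⁻¹ s⁻¹ for free Gd-DTPA at 1.5 T) and the background rate are rounded literature values used only for illustration, not measurements from the HyCoS study.

    ```python
    # Illustrative only: linear relaxivity model with assumed, rounded values
    # (r1 ~ 4 /mM/s for free Gd-DTPA at 1.5 T; background water R1 ~ 0.3 /s).

    def r1_rate(gd_mM, r1_per_mM_s=4.0, r1_background=0.3):
        """Longitudinal relaxation rate R1 = R1_background + r1 * [Gd] (s^-1)."""
        return r1_background + r1_per_mM_s * gd_mM

    for label, boost in (("free Gd-DTPA", 1.0), ("~6x enhanced (as reported)", 6.0)):
        t1 = 1.0 / r1_rate(0.5, r1_per_mM_s=4.0 * boost)
        print(f"{label:28s}: T1 at 0.5 mM Gd ~ {t1:.3f} s")
    ```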

  1. A new approach of building 3D visualization framework for multimodal medical images display and computed assisted diagnosis

    NASA Astrophysics Data System (ADS)

    Li, Zhenwei; Sun, Jianyong; Zhang, Jianguo

    2012-02-01

    As more and more CT/MR studies are acquired with larger data sets, more and more radiologists and clinicians would like to use PACS workstations to display and manipulate these large image data sets with 3D rendering features. In this paper, we propose a design method and implementation strategy for a 3D image display component that provides not only standard 3D display functions but also multi-modal medical image fusion and computer-assisted diagnosis of coronary heart disease. The 3D component has been integrated into the PACS display workstation of Shanghai Huadong Hospital, and clinical practice showed that radiologists and physicians find it easy to use 3D functions such as multi-modality (e.g., CT, MRI, PET, SPECT) visualization, registration and fusion, and quantitative lesion measurement. The users were satisfied with the rendering speed and the quality of the 3D reconstruction. The advantages of the component include low requirements for computer hardware, easy integration, reliable performance, and a comfortable application experience. With this system, radiologists and clinicians can manipulate 3D images easily and use the advanced visualization tools to facilitate their work at a PACS display workstation at any time.
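
    As a sketch of the registration-and-fusion step such a component performs, the snippet below rigidly aligns a PET volume to a CT volume with mutual information using the open-source SimpleITK library and blends the result for display; the file names, transform choice, and equal blending weights are illustrative assumptions rather than the workstation's actual pipeline.

    ```python
    # Sketch: mutual-information rigid registration of PET to CT, then a simple
    # 50/50 intensity blend for fused display (file names are placeholders).
    import SimpleITK as sitk

    fixed = sitk.ReadImage("ct_volume.nii.gz", sitk.sitkFloat32)
    moving = sitk.ReadImage("pet_volume.nii.gz", sitk.sitkFloat32)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.1)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4,
                                                 numberOfIterations=200)
    reg.SetOptimizerScalesFromPhysicalShift()
    reg.SetInitialTransform(
        sitk.CenteredTransformInitializer(fixed, moving, sitk.Euler3DTransform(),
                                          sitk.CenteredTransformInitializerFilter.GEOMETRY),
        inPlace=False)

    transform = reg.Execute(fixed, moving)
    resampled = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)

    # Rescale both volumes to [0, 255] and average them for a fused overlay.
    fused = 0.5 * sitk.RescaleIntensity(fixed) + 0.5 * sitk.RescaleIntensity(resampled)
    sitk.WriteImage(sitk.Cast(fused, sitk.sitkUInt8), "fused_ct_pet.nii.gz")
    ```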

  2. Shape-Controlled Synthesis of Isotopic Yttrium-90-Labeled Rare Earth Fluoride Nanocrystals for Multimodal Imaging.

    PubMed

    Paik, Taejong; Chacko, Ann-Marie; Mikitsh, John L; Friedberg, Joseph S; Pryma, Daniel A; Murray, Christopher B

    2015-09-22

    Isotopically labeled nanomaterials have recently attracted much attention in biomedical research, environmental health studies, and clinical medicine because radioactive probes allow the elucidation of in vitro and in vivo cellular transport mechanisms, as well as the unambiguous distribution and localization of nanomaterials in vivo. In addition, nanocrystal-based inorganic materials have a unique capability of customizing size, shape, and composition, with the potential to be designed as multimodal imaging probes. Size and shape of nanocrystals can directly influence interactions with biological systems; hence, it is important to develop synthetic methods to design radiolabeled nanocrystals with precise control of size and shape. Here, we report size- and shape-controlled synthesis of rare earth fluoride nanocrystals doped with the β-emitting radioisotope yttrium-90 ((90)Y). Size and shape of the nanocrystals are tailored via tight control of reaction parameters and the type of rare earth host (e.g., Gd or Y) employed. Radiolabeled nanocrystals are synthesized in high radiochemical yield and purity, with excellent radiolabel stability upon surface modification with different polymeric ligands. We demonstrate the Cerenkov radioluminescence imaging and magnetic resonance imaging capabilities of (90)Y-doped GdF3 nanoplates, which offer unique opportunities as a promising platform for multimodal imaging and targeted therapy.
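
    Because the label is a radionuclide, the usable imaging and therapy window is bounded by physical decay; the short sketch below applies the standard exponential decay law with a ⁹⁰Y half-life of roughly 64 hours (a literature value quoted here only for illustration, not a result of this study).

    ```python
    # Illustrative decay calculation for a 90Y-labeled probe (t1/2 ~ 64.1 h).
    import math

    def remaining_activity(a0_mbq, elapsed_h, half_life_h=64.1):
        """Activity left after elapsed_h hours of purely physical decay."""
        return a0_mbq * math.exp(-math.log(2) * elapsed_h / half_life_h)

    for t in (24.0, 64.1, 7 * 24.0):
        print(f"after {t:6.1f} h: {remaining_activity(100.0, t):5.1f} MBq of an initial 100 MBq")
    ```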

  3. On-road anomaly detection by multimodal sensor analysis and multimedia processing

    NASA Astrophysics Data System (ADS)

    Orhan, Fatih; Eren, P. E.

    2014-03-01

    The use of smartphones in Intelligent Transportation Systems is gaining popularity, yet many challenges exist in developing functional applications. Due to the dynamic nature of transportation, vehicular social applications face complexities such as developing robust sensor management, performing signal and image processing tasks, and sharing information among users. This study utilizes a multimodal sensor analysis framework that enables sensors to be analyzed jointly across modalities. It also provides plugin-based analysis interfaces for developing sensor- and image-processing-based applications, and it connects its users through a centralized application as well as social networks to facilitate communication and socialization. Using this framework, an on-road anomaly detector has been developed and tested. The detector utilizes the sensors of a mobile device and is able to identify anomalies such as hard braking, pothole crossings, and speed bump crossings. Upon such detection, the video portion containing the anomaly is automatically extracted to enable further image processing analysis. The detection results are shared on a central portal application for online traffic condition monitoring.
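
    A minimal sketch of the kind of threshold-based detection such a plugin could perform on accelerometer data is shown below; the sampling rate, threshold, and event-merging window are arbitrary illustrative choices, not the parameters of the deployed detector.

    ```python
    # Illustrative on-road anomaly detector on the vertical accelerometer axis.
    import numpy as np

    def detect_events(accel_z, fs=50.0, threshold_g=0.35, min_gap_s=1.0):
        """Flag anomalies (potholes, speed bumps, hard braking) as samples whose
        deviation from a 1 s moving-average baseline exceeds a threshold;
        detections closer together than min_gap_s are merged."""
        baseline = np.convolve(accel_z, np.ones(int(fs)) / fs, mode="same")
        jolt = np.abs(accel_z - baseline)
        idx = np.flatnonzero(jolt > threshold_g)
        events, last = [], -np.inf
        for i in idx:
            if (i - last) / fs >= min_gap_s:
                events.append(i / fs)          # event time in seconds
            last = i
        return events

    fs = 50.0
    t = np.arange(0, 60, 1 / fs)
    z = 0.05 * np.random.default_rng(0).standard_normal(t.size)
    z[int(20 * fs)] += 1.0                     # simulated pothole at t = 20 s
    print(detect_events(z, fs))
    ```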

  4. MO-DE-202-01: Image-Guided Focused Ultrasound Surgery and Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farahani, K.

    At least three major trends in surgical intervention have emerged over the last decade: a move toward more minimally invasive (or non-invasive) approaches to the surgical target; the development of high-precision treatment delivery techniques; and the increasing role of multi-modality intraoperative imaging in support of such procedures. This symposium includes invited presentations on recent advances in each of these areas and the emerging role for medical physics research in the development and translation of high-precision interventional techniques. The four speakers are: Keyvan Farahani, “Image-guided focused ultrasound surgery and therapy” Jeffrey H. Siewerdsen, “Advances in image registration and reconstruction for image-guided neurosurgery” Tina Kapur, “Image-guided surgery and interventions in the advanced multimodality image-guided operating (AMIGO) suite” Raj Shekhar, “Multimodality image-guided interventions: Multimodality for the rest of us” Learning Objectives: Understand the principles and applications of HIFU in surgical ablation. Learn about recent advances in 3D–2D and 3D deformable image registration in support of surgical safety and precision. Learn about recent advances in model-based 3D image reconstruction in application to intraoperative 3D imaging. Understand the multi-modality imaging technologies and clinical applications investigated in the AMIGO suite. Understand the emerging need and techniques to implement multi-modality image guidance in surgical applications such as neurosurgery, orthopaedic surgery, vascular surgery, and interventional radiology. Research supported by the NIH and Siemens Healthcare.; J. Siewerdsen; Grant Support - National Institutes of Health; Grant Support - Siemens Healthcare; Grant Support - Carestream Health; Advisory Board - Carestream Health; Licensing Agreement - Carestream Health; Licensing Agreement - Elekta Oncology.; T. Kapur, P41EB015898; R. Shekhar, Funding: R42CA137886 and R41CA192504 Disclosure and CoI: IGI Technologies, small-business partner on the grants.

  5. Hand biometric recognition based on fused hand geometry and vascular patterns.

    PubMed

    Park, GiTae; Kim, Soowon

    2013-02-28

    A hand biometric authentication method based on measurements of the user's hand geometry and vascular pattern is proposed. To acquire the hand geometry, the thickness of the side view of the hand, the K-curvature with a hand-shaped chain code, the lengths and angles of the finger valleys, and the lengths and profiles of the fingers were used; for the vascular pattern, a direction-based vascular-pattern extraction method was used. Together, these form a new multimodal biometric approach. The proposed multimodal biometric system uses only one image to extract the feature points, and it can be configured for low-cost devices. Our multimodal approach fuses hand-geometry recognition (from the side view of the hand and the back of the hand) and vascular-pattern recognition at the score level. The results of our study showed that the equal error rate of the proposed system was 0.06%.
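
    The sketch below illustrates score-level fusion of two matchers followed by an empirical equal-error-rate estimate; the score distributions, min-max normalization, and equal weights are illustrative assumptions and are not taken from the paper.

    ```python
    # Sketch: score-level fusion of a hand-geometry matcher and a vein matcher,
    # then an empirical equal error rate (EER) on synthetic score distributions.
    import numpy as np

    def min_max(gen, imp):
        """Per-matcher min-max normalization over all of that matcher's scores."""
        lo, hi = min(gen.min(), imp.min()), max(gen.max(), imp.max())
        return (gen - lo) / (hi - lo), (imp - lo) / (hi - lo)

    def eer(genuine, impostor):
        """Equal error rate: threshold where false accept ~ false reject."""
        thresholds = np.sort(np.concatenate([genuine, impostor]))
        far = np.array([(impostor >= t).mean() for t in thresholds])
        frr = np.array([(genuine < t).mean() for t in thresholds])
        i = np.argmin(np.abs(far - frr))
        return 0.5 * (far[i] + frr[i])

    rng = np.random.default_rng(0)
    geom_gen, geom_imp = min_max(rng.normal(0.80, 0.08, 1000), rng.normal(0.45, 0.08, 1000))
    vein_gen, vein_imp = min_max(rng.normal(0.75, 0.10, 1000), rng.normal(0.40, 0.10, 1000))

    w = 0.5  # equal weight to the two matchers
    fused_gen = w * geom_gen + (1 - w) * vein_gen
    fused_imp = w * geom_imp + (1 - w) * vein_imp
    print(f"fused EER ~ {100 * eer(fused_gen, fused_imp):.2f}%")
    ```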

  7. Multimodal ophthalmic imaging using swept source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Malone, Joseph D.; El-Haddad, Mohamed T.; Tye, Logan A.; Majeau, Lucas; Godbout, Nicolas; Rollins, Andrew M.; Boudoux, Caroline; Tao, Yuankai K.

    2016-03-01

    Scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT) benefit clinical diagnostic imaging in ophthalmology by enabling in vivo noninvasive en face and volumetric visualization of retinal structures, respectively. Spectral encoding enables confocal imaging through fiber optics and reduces system complexity. Previous applications in ophthalmic imaging include spectrally encoded confocal scanning laser ophthalmoscopy (SECSLO) and a combined SECSLO-OCT system for image guidance, tracking, and registration. However, spectrally encoded imaging suffers from speckle noise because each spectrally encoded channel is effectively monochromatic. Here, we demonstrate in vivo human retinal imaging using a swept-source spectrally encoded scanning laser ophthalmoscope and OCT (SS-SESLO-OCT) at 1060 nm. SS-SESLO-OCT uses a shared 100 kHz Axsun swept source and shared scanner and imaging optics, and both channels are detected simultaneously on a shared, dual-channel high-speed digitizer. SESLO illumination and detection were performed using the single-mode core and multimode inner cladding of a double-clad fiber coupler, respectively, to preserve lateral resolution while improving collection efficiency and reducing speckle contrast at the expense of confocality. Concurrent en face SESLO and cross-sectional OCT images were acquired with 1376 x 500 pixels at 200 frames per second. Our system design is compact and uses a shared light source, imaging optics, and digitizer, which reduces overall system complexity and ensures inherent co-registration between the SESLO and OCT FOVs. En face SESLO images acquired concurrently with OCT cross-sections enable lateral motion tracking and three-dimensional volume registration, with broad applications in multivolume OCT averaging, image mosaicking, and intraoperative instrument tracking.

  8. A coaxially focused multi-mode beam for optical coherence tomography imaging with extended depth of focus (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Yin, Biwei; Liang, Chia-Pin; Vuong, Barry; Tearney, Guillermo J.

    2017-02-01

    Conventional OCT images, obtained using a focused Gaussian beam, have a lateral resolution of approximately 30 μm and a depth of focus (DOF) of 2-3 mm, defined as the confocal parameter (twice the Gaussian beam Rayleigh range). Improvement of lateral resolution without sacrificing imaging range requires techniques that can extend the DOF. Previously, we described a self-imaging wavefront division optical system that provided an estimated one order of magnitude DOF extension. In this study, we further investigate the properties of the coaxially focused multi-mode (CAFM) beam created by this self-imaging wavefront division optical system and demonstrate its feasibility for real-time biological tissue imaging. Gaussian beam and CAFM beam fiber optic probes with similar numerical apertures (objective NA≈0.5) were fabricated, providing lateral resolutions of approximately 2 μm. Rigorous lateral resolution characterization over depth was performed for both probes. The CAFM beam probe was found to provide a DOF approximately one order of magnitude greater than that of the Gaussian beam probe. By incorporating the CAFM beam fiber optic probe into a μOCT system with 1.5 μm axial resolution, we were able to acquire cross-sectional images of swine small intestine ex vivo, enabling the visualization of subcellular structures and providing high-quality OCT images over more than a 300 μm depth range.
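
    The trade-off motivating the CAFM probe follows from textbook Gaussian-beam formulas; the sketch below evaluates the confocal parameter b = 2·z_R = 2πw₀²/λ for a conventional spot and a 2 μm class spot. The wavelength and the conversion from quoted resolution to beam waist are illustrative assumptions, not values from this study.

    ```python
    # Illustrative Gaussian-beam depth-of-focus calculation: confocal parameter
    # b = 2 * z_R = 2 * pi * w0^2 / lambda, with w0 the beam waist radius.
    import math

    def confocal_parameter_um(waist_radius_um, wavelength_um=1.3):
        return 2.0 * math.pi * waist_radius_um**2 / wavelength_um

    for label, w0 in (("~30 um spot (conventional OCT)", 15.0),
                      ("~2 um spot (high-NA probe)", 1.0)):
        print(f"{label:32s}: DOF ~ {confocal_parameter_um(w0):8.1f} um")
    ```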

  9. Superparamagnetic nanoparticles for enhanced magnetic resonance and multimodal imaging

    NASA Astrophysics Data System (ADS)

    Sikma, Elise Ann Schultz

    Magnetic resonance imaging (MRI) is a powerful tool for noninvasive tomographic imaging of biological systems with high spatial and temporal resolution. Superparamagnetic (SPM) nanoparticles have emerged as highly effective MR contrast agents due to their biocompatibility, ease of surface modification and magnetic properties. Conventional nanoparticle contrast agents suffer from difficult synthetic reproducibility, polydisperse sizes and weak magnetism. Numerous synthetic techniques and nanoparticle formulations have been developed to overcome these barriers. However, there are still major limitations in the development of new nanoparticle-based probes for MR and multimodal imaging including low signal amplification and absence of biochemical reporters. To address these issues, a set of multimodal (T2/optical) and dual contrast (T1/T2) nanoparticle probes has been developed. Their unique magnetic properties and imaging capabilities were thoroughly explored. An enzyme-activatable contrast agent is currently being developed as an innovative means for early in vivo detection of cancer at the cellular level. Multimodal probes function by combining the strengths of multiple imaging techniques into a single agent. Co-registration of data obtained by multiple imaging modalities validates the data, enhancing its quality and reliability. A series of T2/optical probes were successfully synthesized by attachment of a fluorescent dye to the surface of different types of nanoparticles. The multimodal nanoparticles generated sufficient MR and fluorescence signal to image transplanted islets in vivo. Dual contrast T1/T2 imaging probes were designed to overcome disadvantages inherent in the individual T1 and T2 components. A class of T1/T2 agents was developed consisting of a gadolinium (III) complex (DTPA chelate or DO3A macrocycle) conjugated to a biocompatible silica-coated metal oxide nanoparticle through a disulfide linker. The disulfide linker has the ability to be reduced in vivo by glutathione, releasing large payloads of signal-enhancing T1 probes into the surrounding environment. Optimization of the agent occurred over three sequential generations, with each generation addressing a new challenge. The result was a T2 nanoparticle containing high levels of conjugated T1 complex demonstrating enhanced MR relaxation properties. The probes created here have the potential to play a key role in the advancement of nanoparticle-based agents in biomedical MRI applications.

  10. Continuum generation in optical fibers for high-resolution holographic coherence domain imaging application

    NASA Astrophysics Data System (ADS)

    Li, Linghui; Gruzdev, Vitaly; Yu, Ping; Chen, J. K.

    2009-02-01

    High-pulse-energy continuum generation in conventional multimode optical fibers has been studied for potential applications to a holographic optical coherence imaging system. As a new modality for biological tissue imaging, high-resolution holographic optical coherence imaging requires a broadband light source with high brightness, relatively low spatial coherence, and high stability. A broadband femtosecond laser cannot be used directly as the light source of a holographic imaging system because the laser produces strong speckle patterns. By coupling high-peak-power femtosecond laser pulses into a multimode optical fiber, nonlinear optical effects generate a continuum that can serve as a super-bright, broadband light source. In our experiment, an amplified femtosecond laser was coupled into the fiber through a microscope objective. We measured the FWHM of the continuum generation as a function of incident pulse energy from 80 nJ to 800 μJ. The maximum FWHM is about 8 times that of the input pulses. The stability was analyzed at different pump energies, integration times, and fiber lengths. The spectral broadening and peak position show that more than two processes compete in the fiber.

  11. Predictive assessment of kidney functional recovery following ischemic injury using optical spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raman, Rajesh N.; Pivetti, Christopher D.; Ramsamooj, Rajendra

    Functional changes in rat kidneys during the induced ischemic injury and recovery phases were explored using multimodal autofluorescence and light scattering imaging. We aim to evaluate the use of noncontact optical signatures for rapid assessment of tissue function and viability. Specifically, autofluorescence images were acquired in vivo under 355, 325, and 266 nm illumination, while light scattering images were collected at the excitation wavelengths as well as using relatively narrowband light centered at 500 nm. The images were simultaneously recorded using a multimodal optical imaging system and analyzed to obtain time constants, which were correlated to kidney dysfunction as determined by a subsequent survival study and histopathological analysis. This analysis of both the light scattering and autofluorescence images suggests that changes in tissue microstructure, fluorophore emission, and blood absorption spectral characteristics, coupled with the vascular response, contribute to the behavior of the observed signal, which may be used to obtain tissue functional information and offers the ability to predict posttransplant kidney function.
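
    Time constants such as those described above are typically obtained by fitting the optical signal to a simple exponential model; the sketch below fits synthetic recovery data with scipy (the single-exponential form, time axis, and noise level are illustrative assumptions, not the study's actual model).

    ```python
    # Illustrative single-exponential fit to a recovery curve to extract a time
    # constant; the model, time axis, and noise are synthetic assumptions.
    import numpy as np
    from scipy.optimize import curve_fit

    def recovery(t, amplitude, tau, offset):
        """S(t) = offset + amplitude * (1 - exp(-t / tau))."""
        return offset + amplitude * (1.0 - np.exp(-t / tau))

    t_min = np.linspace(0.0, 30.0, 60)                     # minutes after reperfusion
    clean = recovery(t_min, amplitude=1.0, tau=6.0, offset=0.2)
    noisy = clean + 0.03 * np.random.default_rng(1).standard_normal(t_min.size)

    (amplitude, tau, offset), _ = curve_fit(recovery, t_min, noisy, p0=(1.0, 5.0, 0.0))
    print(f"fitted time constant: tau ~ {tau:.1f} min")
    ```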

  12. A versatile clearing agent for multi-modal brain imaging

    PubMed Central

    Costantini, Irene; Ghobril, Jean-Pierre; Di Giovanna, Antonino Paolo; Mascaro, Anna Letizia Allegra; Silvestri, Ludovico; Müllenbroich, Marie Caroline; Onofri, Leonardo; Conti, Valerio; Vanzi, Francesco; Sacconi, Leonardo; Guerrini, Renzo; Markram, Henry; Iannello, Giulio; Pavone, Francesco Saverio

    2015-01-01

    Extensive mapping of neuronal connections in the central nervous system requires high-throughput µm-scale imaging of large volumes. In recent years, different approaches have been developed to overcome the limitations due to tissue light scattering. These methods are generally developed to improve the performance of a specific imaging modality, thus limiting comprehensive neuroanatomical exploration by multi-modal optical techniques. Here, we introduce a versatile brain clearing agent (2,2′-thiodiethanol; TDE) suitable for various applications and imaging techniques. TDE is cost-efficient, water-soluble, and of low viscosity; more importantly, it preserves fluorescence, is compatible with immunostaining, and does not cause deformations at the sub-cellular level. We demonstrate the effectiveness of this method in different applications: in fixed samples, by imaging a whole mouse hippocampus with serial two-photon tomography; in combination with CLARITY, by reconstructing an entire mouse brain with light sheet microscopy; and in translational research, by imaging immunostained human dysplastic brain tissue. PMID:25950610

  14. Adaptive Optics Imaging in Laser Pointer Maculopathy.

    PubMed

    Sheyman, Alan T; Nesper, Peter L; Fawzi, Amani A; Jampol, Lee M

    2016-08-01

    The authors report multimodal imaging including adaptive optics scanning laser ophthalmoscopy (AOSLO) (Apaeros retinal image system AOSLO prototype; Boston Micromachines Corporation, Boston, MA) in a case of previously diagnosed unilateral acute idiopathic maculopathy (UAIM) that demonstrated features of laser pointer maculopathy. The authors also show the adaptive optics images of a laser pointer maculopathy case previously reported. A 15-year-old girl was referred for the evaluation of a maculopathy suspected to be UAIM. The authors reviewed the patient's history and obtained fluorescein angiography, autofluorescence, optical coherence tomography, infrared reflectance, and AOSLO. The time course of disease and clinical examination did not fit with UAIM, but the linear pattern of lesions was suspicious for self-inflicted laser pointer injury. This was confirmed on subsequent questioning of the patient. The presence of linear lesions in the macula that are best highlighted with multimodal imaging techniques should alert the physician to the possibility of laser pointer injury. AOSLO further characterizes photoreceptor damage in this condition. [Ophthalmic Surg Lasers Imaging Retina. 2016;47:782-785.]. Copyright 2016, SLACK Incorporated.

  15. Medical image informatics infrastructure design and applications.

    PubMed

    Huang, H K; Wong, S T; Pietka, E

    1997-01-01

    A picture archiving and communication system (PACS) integrates multimodality images and health information systems with the goal of improving the operation of a radiology department. As it evolves, PACS becomes a hospital image document management system with a voluminous repository of images and related data files. A medical image informatics infrastructure can be designed to take advantage of these existing data, providing PACS with added value for health care service, research, and education. A medical image informatics infrastructure (MIII) consists of the following components: medical images and associated data (including the PACS database), image processing, data/knowledge base management, visualization, graphical user interface, communication networking, and application-oriented software. This paper describes these components and their logical connections, and illustrates some applications based on the concept of the MIII.

  16. DBSAR's First Multimode Flight Campaign

    NASA Technical Reports Server (NTRS)

    Rincon, Rafael F.; Vega, Manuel; Buenfil, Manuel; Geist, Alessandro; Hilliard, Lawrence; Racette, Paul

    2010-01-01

    The Digital Beamforming SAR (DBSAR) is an airborne imaging radar system that combines phased array technology, reconfigurable on-board processing and waveform generation, and advances in signal processing to enable techniques not possible with conventional SARs. The system exploits the versatility inherent in phased-array technology, together with a state-of-the-art data acquisition and real-time processor, to implement multi-mode measurement techniques in a single radar system. Operational modes include scatterometry over multiple antenna beams, Synthetic Aperture Radar (SAR) imaging over several antenna beams, and altimetry. The radar was flight tested in October 2008 on board the NASA P3 aircraft over the Delmarva Peninsula, MD. Results on DBSAR system performance are presented.
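
    Digital beamforming of this kind amounts to applying per-element phase weights and summing; the sketch below steers a small uniform linear array and evaluates its pattern at a few angles. The element count, spacing, and L-band carrier frequency are illustrative assumptions, not DBSAR's actual parameters.

    ```python
    # Illustrative delay-and-sum (phase-steering) beamforming for a uniform
    # linear array; element count, spacing, and frequency are assumptions.
    import numpy as np

    C = 3.0e8  # speed of light, m/s

    def steering_weights(n_elements, spacing_m, steer_deg, freq_hz):
        k = 2.0 * np.pi * freq_hz / C
        n = np.arange(n_elements)
        return np.exp(-1j * k * spacing_m * n * np.sin(np.radians(steer_deg)))

    def pattern_db(weights, spacing_m, angles_deg, freq_hz):
        k = 2.0 * np.pi * freq_hz / C
        n = np.arange(weights.size)
        response = np.array([np.abs(np.sum(weights * np.exp(1j * k * spacing_m * n *
                                                            np.sin(np.radians(a)))))
                             for a in angles_deg])
        return 20.0 * np.log10(response / weights.size)

    freq = 1.2e9                        # assumed L-band carrier
    w = steering_weights(8, 0.12, steer_deg=15.0, freq_hz=freq)
    for angle in (0.0, 15.0, 30.0):
        print(f"{angle:5.1f} deg: {pattern_db(w, 0.12, [angle], freq_hz=freq)[0]:6.1f} dB")
    ```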

  17. Tinnitus Multimodal Imaging

    DTIC Science & Technology

    2016-12-01

    AWARD NUMBER: W81XWH-13-1-0494. TITLE: Tinnitus Multimodal Imaging. PRINCIPAL INVESTIGATOR: Steven Wan Cheung. CONTRACTING ORGANIZATION: ... Medical Research and Materiel Command, Fort Detrick, Maryland 21702-5012. DISTRIBUTION STATEMENT: Approved for Public Release; Distribution Unlimited. Images were segmented into gray and white matter images and spatially normalized to the MNI template (3 mm isotropic voxels) using the DARTEL toolbox in...

  18. Observation of Geometric Parametric Instability Induced by the Periodic Spatial Self-Imaging of Multimode Waves

    NASA Astrophysics Data System (ADS)

    Krupa, Katarzyna; Tonello, Alessandro; Barthélémy, Alain; Couderc, Vincent; Shalaby, Badr Mohamed; Bendahmane, Abdelkrim; Millot, Guy; Wabnitz, Stefan

    2016-05-01

    Spatiotemporal mode coupling in highly multimode physical systems permits new routes for exploring complex instabilities and forming coherent wave structures. We present here the first experimental demonstration of multiple geometric parametric instability sidebands, generated in the frequency domain through resonant space-time coupling, owing to the natural periodic spatial self-imaging of a multimode quasi-continuous-wave beam in a standard graded-index multimode fiber. The input beam was launched in the fiber by means of an amplified microchip laser emitting sub-ns pulses at 1064 nm. The experimentally observed frequency spacing among sidebands agrees well with analytical predictions and numerical simulations. The first-order peaks are located at the considerably large detuning of 123.5 THz from the pump. These results open the remarkable possibility to convert a near-infrared laser directly into a broad spectral range spanning visible and infrared wavelengths, by means of a single resonant parametric nonlinear effect occurring in the normal dispersion regime. As further evidence of our strong space-time coupling regime, we observed the striking effect that all of the different sideband peaks were carried by a well-defined and stable bell-shaped spatial profile.

  19. Multimodal fiber source for nonlinear microscopy based on a dissipative soliton laser

    PubMed Central

    Lamb, Erin S.; Wise, Frank W.

    2015-01-01

    Recent developments in high energy femtosecond fiber lasers have enabled robust and lower-cost sources for multiphoton-fluorescence and harmonic-generation imaging. However, picosecond pulses are better suited for Raman scattering microscopy, so the ideal multimodal source for nonlinear microscopy needs to provide both durations. Here we present spectral compression of a high-power femtosecond fiber laser as a route to producing transform-limited picosecond pulses. These pulses pump a fiber optical parametric oscillator to yield a robust fiber source capable of providing the synchronized picosecond pulse trains needed for Raman scattering microscopy. Thus, this system can be used as a multimodal platform for nonlinear microscopy techniques. PMID:26417497

  20. Ambulatory diffuse optical tomography and multimodality physiological monitoring system for muscle and exercise applications

    NASA Astrophysics Data System (ADS)

    Hu, Gang; Zhang, Quan; Ivkovic, Vladimir; Strangman, Gary E.

    2016-09-01

    Ambulatory diffuse optical tomography (aDOT) is based on near-infrared spectroscopy (NIRS) and enables three-dimensional imaging of regional hemodynamics and oxygen consumption during a person's normal activities. Although NIRS has been previously used for muscle assessment, it has been notably limited in terms of the number of channels measured, the extent to which subjects can be ambulatory, and/or the ability to simultaneously acquire synchronized auxiliary data such as electromyography (EMG) or electrocardiography (ECG). We describe the development of a prototype aDOT system, called NINscan-M, capable of ambulatory tomographic imaging as well as simultaneous auxiliary multimodal physiological monitoring. Powered by four AA batteries and weighing 577 g, the NINscan-M prototype can synchronously record 64-channel NIRS imaging data; eight channels of EMG, ECG, or other analog signals; plus force, acceleration, rotation, and temperature for 24+ h at up to 250 Hz. We describe the system's design, characterization, and performance. We also describe examples of isometric, cycle ergometer, and free-running ambulatory exercise to demonstrate tomographic imaging at 25 Hz. NINscan-M represents a multiuse tool for muscle physiology studies as well as clinical muscle assessment.

  1. Targeted delivery of cancer-specific multimodal contrast agents for intraoperative detection of tumor boundaries and therapeutic margins

    NASA Astrophysics Data System (ADS)

    Xu, Ronald X.; Xu, Jeff S.; Huang, Jiwei; Tweedle, Michael F.; Schmidt, Carl; Povoski, Stephen P.; Martin, Edward W.

    2010-02-01

    Background: Accurate assessment of tumor boundaries and intraoperative detection of therapeutic margins are important oncologic principles for minimal recurrence rates and improved long-term outcomes. However, many existing cancer imaging tools are based on preoperative image acquisition and do not provide real-time intraoperative information that supports critical decision-making in the operating room. Method: Poly lactic-co-glycolic acid (PLGA) microbubbles (MBs) and nanobubbles (NBs) were synthesized by a modified double emulsion method. The MB and NB surfaces were conjugated with CC49 antibody to target TAG-72 antigen, a human glycoprotein complex expressed in many epithelial-derived cancers. Multiple imaging agents were encapsulated in MBs and NBs for multimodal imaging. Both one-step and multi-step cancer targeting strategies were explored. Active MBs/NBs were also fabricated for therapeutic margin assessment in cancer ablation therapies. Results: The multimodal contrast agents and the cancer-targeting strategies were tested on tissue simulating phantoms, LS174 colon cancer cell cultures, and cancer xenograft nude mice. Concurrent multimodal imaging was demonstrated using fluorescence and ultrasound imaging modalities. Technical feasibility of using active MBs and portable imaging tools such as ultrasound for intraoperative therapeutic margin assessment was demonstrated in a biological tissue model. Conclusion: The cancer-specific multimodal contrast agents described in this paper have the potential for intraoperative detection of tumor boundaries and therapeutic margins.

  2. Frameless multimodal image guidance of localized convection-enhanced delivery of therapeutics in the brain

    PubMed Central

    van der Bom, Imramsjah M J; Moser, Richard P; Gao, Guanping; Sena-Esteves, Miguel; Aronin, Neil

    2013-01-01

    Introduction: Convection-enhanced delivery (CED) has been shown to be an effective method of administering macromolecular compounds into the brain that are unable to cross the blood-brain barrier. Because the administration is highly localized, accurate cannula placement by minimally invasive surgery is an important requisite. This paper reports on the use of an angiographic C-arm system which enables truly frameless multimodal image guidance during CED surgery. Methods: A microcannula was placed into the striatum of five sheep under real-time fluoroscopic guidance using imaging data previously acquired by cone beam computed tomography (CBCT) and MRI, enabling three-dimensional navigation. After introduction of the cannula, high resolution CBCT was performed and registered with MRI to confirm the position of the cannula tip and to make adjustments as necessary. Adeno-associated viral vector-10, designed to deliver small-hairpin micro RNA (shRNAmir), was mixed with 2.0 mM gadolinium (Gd) and infused at a rate of 3 μl/min for a total of 100 μl. Upon completion, the animals were transferred to an MR scanner to assess the approximate distribution by measuring the volume of spread of Gd. Results: The cannula was successfully introduced under multimodal image guidance. High resolution CBCT enabled validation of the cannula position, and Gd-enhanced MRI after CED confirmed localized administration of the therapy. Conclusion: A microcannula for CED was introduced into the striatum of five sheep under multimodal image guidance. The non-alloy 300 μm diameter cannula tip was well visualized using CBCT, enabling confirmation of the position of the end of the tip in the area of interest. PMID:22193239

  3. GRAFT-VERSUS-HOST DISEASE PANUVEITIS AND BILATERAL SEROUS DETACHMENTS: MULTIMODAL IMAGING ANALYSIS.

    PubMed

    Jung, Jesse J; Chen, Michael H; Rofagha, Soraya; Lee, Scott S

    2017-01-01

    To report the multimodal imaging findings and follow-up of a case of graft-versus-host disease-induced bilateral panuveitis and serous retinal detachments after allogenic bone marrow transplant for acute myeloid leukemia. A 75-year-old black man presented with acute decreased vision in both eyes for 1 week. Clinical examination and multimodal imaging, including spectral domain optical coherence tomography, fundus autofluorescence, fluorescein angiography, and swept-source optical coherence tomography angiography (Investigational Device; Carl Zeiss Meditec Inc) were performed. Clinical examination of the patient revealed anterior and posterior inflammation and bilateral serous retinal detachments. Ultra-widefield fundus autofluorescence demonstrated hyperautofluorescence secondary to subretinal fluid, and fluorescein angiography revealed multiple areas of punctate hyperfluorescence, leakage, and staining of the optic discs. Spectral domain and enhanced depth imaging optical coherence tomography demonstrated subretinal fluid, a thickened, undulating retinal pigment epithelium layer, and a thickened choroid in both eyes. En-face swept-source optical coherence tomography angiography did not show any retinal vascular abnormalities but did demonstrate patchy areas of decreased choriocapillaris flow. An extensive systemic infectious and malignancy workup was negative, and the patient was treated with high-dose oral prednisone immunosuppression. Subsequent 6-month follow-up demonstrated complete resolution of the inflammation and bilateral serous detachments after completion of the prednisone taper over a 3-month period. Graft-versus-host disease panuveitis and bilateral serous retinal detachments are rare complications of allogenic bone marrow transplant for acute myeloid leukemia and can be diagnosed with clinical and multimodal imaging analysis. This form of autoimmune inflammation may occur after the recovery of T-cell activity within the donor graft targeting the host. Infectious and recurrent malignancy must be ruled out before initiation of immunosuppression, which can effectively treat this form of graft-versus-host disease.

  4. Multimodal MSI in Conjunction with Broad Coverage Spatially Resolved MS2 Increases Confidence in Both Molecular Identification and Localization

    DOE PAGES

    Veličković, Dušan; Chu, Rosalie K.; Carrell, Alyssa A.; ...

    2017-12-06

    One critical aspect of mass spectrometry imaging (MSI) is the need to confidently identify detected analytes. While orthogonal tandem MS (e.g., LC–MS2) experiments from sample extracts can assist in annotating ions, the spatial information about these molecules is lost. Accordingly, this could lead to misleading conclusions, especially in cases where isobaric species exhibit different distributions within a sample. In this Technical Note, we employed a multimodal imaging approach, using matrix-assisted laser desorption/ionization (MALDI)-MSI and liquid extraction surface analysis (LESA)-MS2I, to confidently annotate and localize a broad range of metabolites involved in a tripartite symbiosis system of moss, cyanobacteria, and fungus. We found that the combination of these two imaging modalities generated very congruent ion images, providing the link between the highly accurate structural information offered by LESA and the high spatial resolution attainable by MALDI. These results demonstrate how this combined methodology could be very useful in differentiating metabolite routes in complex systems.

  6. Carbon Tube Electrodes for Electrocardiography-Gated Cardiac Multimodality Imaging in Mice

    PubMed Central

    Choquet, Philippe; Goetz, Christian; Aubertin, Gaelle; Hubele, Fabrice; Sannié, Sébastien; Constantinesco, André

    2011-01-01

    This report describes a simple design of noninvasive carbon tube electrodes that facilitates electrocardiography (ECG) in mice during cardiac multimodality preclinical imaging. Both forepaws and the left hindpaw of the mice, covered with conductive gel, were placed into the openings of small carbon tubes. Cardiac ECG-gated single-photon emission CT, X-ray CT, and MRI were tested (n = 60) in 20 mice. For all applications, the electrodes were used in a warmed multimodality imaging cell. A heart rate of 563 ± 48 bpm was recorded from anesthetized mice regardless of the imaging technique used, with acquisition times ranging from 1 to 2 h. PMID:21333165

  7. Use of multidimensional, multimodal imaging and PACS to support neurological diagnoses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, S.T.C.; Knowlton, R.; Hoo, K.S.

    1995-12-31

    Technological advances in brain imaging have revolutionized diagnosis in neurology and neurological surgery. Major imaging techniques include magnetic resonance imaging (MRI) to visualize structural anatomy, positron emission tomography (PET) to image metabolic function and cerebral blood flow, magnetoencephalography (MEG) to visualize the location of physiologic current sources, and magnetic resonance spectroscopy (MRS) to measure specific biochemicals. Each of these techniques studies different biomedical aspects of the brain, but an effective means to quantify and correlate the disparate imaging datasets, and thereby improve clinical decision making, has been lacking. This paper describes several techniques developed in a UNIX-based neurodiagnostic workstation to aid the non-invasive presurgical evaluation of epilepsy patients. These techniques include on-line access to the picture archiving and communication system (PACS) multimedia archive, coregistration of multimodality image datasets, and correlation and quantification of the structural and functional information contained in the registered images. For illustration, the authors describe the use of these techniques in a patient case of non-lesional neocortical epilepsy. They also present future work based on preliminary studies.

  8. In vivo multi-modality photoacoustic and pulse echo tracking of prostate tumor growth using a window chamber

    NASA Astrophysics Data System (ADS)

    Bauer, Daniel R.; Olafsson, Ragnar; Montilla, Leonardo G.; Witte, Russell S.

    2010-02-01

    Understanding the tumor microenvironment is critical to characterizing how cancers operate and predicting how they will eventually respond to treatment. The mouse window chamber model is an excellent tool for cancer research, because it enables high resolution tumor imaging and cross-validation using multiple modalities. We describe a novel multimodality imaging system that incorporates three-dimensional (3D) photoacoustics with pulse echo ultrasound for imaging the tumor microenvironment and tracking tissue growth in mice. Three mice were implanted with a dorsal skin flap window chamber. PC-3 prostate tumor cells, expressing green fluorescent protein (GFP), were injected into the skin. The ensuing tumor invasion was mapped using photoacoustic and pulse echo imaging, as well as optical and fluorescent imaging for comparison and cross-validation. The photoacoustic imaging and spectroscopy system, consisting of a tunable (680-1000 nm) pulsed laser and a 25 MHz ultrasound transducer, revealed near-infrared-absorbing regions, primarily blood vessels. Pulse echo images, obtained simultaneously, provided details of the tumor microstructure and growth with 100 μm³ resolution. The tumor size in all three mice increased between threefold and fivefold during 3+ weeks of imaging. Results were consistent with the optical and fluorescent images. Photoacoustic imaging revealed detailed maps of the tumor vasculature, whereas photoacoustic spectroscopy identified regions of oxygenated and deoxygenated blood vessels. The 3D photoacoustic and pulse echo imaging system provided complementary information to track the tumor microenvironment, evaluate new cancer therapies, and develop molecular imaging agents in vivo. Finally, these safe and noninvasive techniques are potentially applicable to human cancer imaging.
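
    Photoacoustic spectroscopy separates oxygenated from deoxygenated blood by linear spectral unmixing; the sketch below solves the two-wavelength case. The extinction coefficients are rounded literature values and wavelength-dependent optical fluence is ignored, so this illustrates the algebra rather than the study's calibration.

    ```python
    # Illustrative two-wavelength unmixing of photoacoustic amplitudes into
    # relative HbO2/Hb concentrations and oxygen saturation (sO2).
    import numpy as np

    EXTINCTION = np.array([[ 518.0, 1405.0],   # 750 nm: [HbO2, Hb] (cm^-1 M^-1)
                           [1058.0,  691.0]])  # 850 nm: [HbO2, Hb]

    def estimate_so2(pa_750, pa_850):
        """Solve EXTINCTION @ [C_HbO2, C_Hb] = PA amplitudes, then sO2."""
        c_hbo2, c_hb = np.linalg.solve(EXTINCTION, np.array([pa_750, pa_850]))
        return c_hbo2 / (c_hbo2 + c_hb)

    print(f"estimated sO2 ~ {100 * estimate_so2(0.9, 1.2):.0f}%")
    ```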

  9. Stability, structure and scale: improvements in multi-modal vessel extraction for SEEG trajectory planning.

    PubMed

    Zuluaga, Maria A; Rodionov, Roman; Nowell, Mark; Achhala, Sufyan; Zombori, Gergely; Mendelson, Alex F; Cardoso, M Jorge; Miserocchi, Anna; McEvoy, Andrew W; Duncan, John S; Ourselin, Sébastien

    2015-08-01

    Brain vessels are among the most critical landmarks that need to be assessed for mitigating surgical risks in stereo-electroencephalography (SEEG) implantation. Intracranial haemorrhage is the most common complication associated with implantation, carrying significant associated morbidity. SEEG planning is done pre-operatively to identify avascular trajectories for the electrodes. In current practice, neurosurgeons have no assistance in the planning of electrode trajectories. There is great interest in developing computer-assisted planning systems that can optimise the safety profile of electrode trajectories, maximising the distance to critical structures. This paper presents a method that integrates the concepts of scale, neighbourhood structure and feature stability with the aim of improving robustness and accuracy of vessel extraction within a SEEG planning system. The developed method accounts for scale and vicinity of a voxel by formulating the problem within a multi-scale tensor voting framework. Feature stability is achieved through a similarity measure that evaluates the multi-modal consistency in vesselness responses. The proposed measurement allows the combination of multiple image modalities into a single image that is used within the planning system to visualise critical vessels. Twelve paired data sets from two image modalities available within the planning system were used for evaluation. The mean Dice similarity coefficient was 0.89 ± 0.04, representing a statistically significant improvement when compared to a semi-automated, single-human-rater, single-modality segmentation protocol used in clinical practice (0.80 ± 0.03). Multi-modal vessel extraction is superior to semi-automated single-modality segmentation, indicating the possibility of safer SEEG planning, with reduced patient morbidity.
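
    The evaluation metric quoted above is the Dice similarity coefficient; a minimal computation on two binary masks is sketched below (the toy masks are arbitrary placeholders).

    ```python
    # Dice similarity coefficient between two binary segmentation masks.
    import numpy as np

    def dice(mask_a, mask_b):
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        intersection = np.logical_and(a, b).sum()
        return 2.0 * intersection / (a.sum() + b.sum())

    auto = np.zeros((64, 64), dtype=bool);   auto[10:40, 10:40] = True
    manual = np.zeros((64, 64), dtype=bool); manual[12:42, 12:42] = True
    print(f"Dice = {dice(auto, manual):.3f}")
    ```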

  10. Automated diagnosis of prostate cancer in multi-parametric MRI based on multimodal convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Le, Minh Hung; Chen, Jingyu; Wang, Liang; Wang, Zhiwei; Liu, Wenyu; Cheng, Kwang-Ting (Tim); Yang, Xin

    2017-08-01

    Automated methods for prostate cancer (PCa) diagnosis in multi-parametric magnetic resonance imaging (MP-MRIs) are critical for alleviating requirements for interpretation of radiographs while helping to improve diagnostic accuracy (Artan et al 2010 IEEE Trans. Image Process. 19 2444-55, Litjens et al 2014 IEEE Trans. Med. Imaging 33 1083-92, Liu et al 2013 SPIE Medical Imaging (International Society for Optics and Photonics) p 86701G, Moradi et al 2012 J. Magn. Reson. Imaging 35 1403-13, Niaf et al 2014 IEEE Trans. Image Process. 23 979-91, Niaf et al 2012 Phys. Med. Biol. 57 3833, Peng et al 2013a SPIE Medical Imaging (International Society for Optics and Photonics) p 86701H, Peng et al 2013b Radiology 267 787-96, Wang et al 2014 BioMed. Res. Int. 2014). This paper presents an automated method based on multimodal convolutional neural networks (CNNs) for two PCa diagnostic tasks: (1) distinguishing between cancerous and noncancerous tissues and (2) distinguishing between clinically significant (CS) and indolent PCa. Specifically, our multimodal CNNs effectively fuse apparent diffusion coefficients (ADCs) and T2-weighted MP-MRI images (T2WIs). To effectively fuse ADCs and T2WIs we design a new similarity loss function to enforce consistent features being extracted from both ADCs and T2WIs. The similarity loss is combined with the conventional classification loss functions and integrated into the back-propagation procedure of CNN training. The similarity loss enables better fusion results than existing methods as the feature learning processes of both modalities are mutually guided, jointly facilitating CNN to ‘see’ the true visual patterns of PCa. The classification results of multimodal CNNs are further combined with the results based on handcrafted features using a support vector machine classifier. To achieve a satisfactory accuracy for clinical use, we comprehensively investigate three critical factors which could greatly affect the performance of our multimodal CNNs but have not been carefully studied previously. (1) Given limited training data, how can these be augmented in sufficient numbers and variety for fine-tuning deep CNN networks for PCa diagnosis? (2) How can multimodal MP-MRI information be effectively combined in CNNs? (3) What is the impact of different CNN architectures on the accuracy of PCa diagnosis? Experimental results on extensive clinical data from 364 patients with a total of 463 PCa lesions and 450 identified noncancerous image patches demonstrate that our system can achieve a sensitivity of 89.85% and a specificity of 95.83% for distinguishing cancer from noncancerous tissues and a sensitivity of 100% and a specificity of 76.92% for distinguishing indolent PCa from CS PCa. This result is significantly superior to the state-of-the-art method relying on handcrafted features.
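
    The core idea of coupling the two modality branches with a similarity term can be sketched in PyTorch as below; the tiny network, cosine-based similarity loss, and weighting are illustrative stand-ins under stated assumptions, not the authors' architecture or exact loss function.

    ```python
    # Sketch: two-branch multimodal CNN (ADC + T2WI) trained with a
    # classification loss plus a feature-similarity term (illustrative only).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Branch(nn.Module):
        """Tiny CNN feature extractor for one MRI modality."""
        def __init__(self):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(32, 64)

        def forward(self, x):
            return self.fc(self.conv(x).flatten(1))

    adc_net, t2_net = Branch(), Branch()
    classifier = nn.Linear(128, 2)          # cancerous vs. noncancerous

    def multimodal_loss(adc, t2, labels, lam=0.5):
        f_adc, f_t2 = adc_net(adc), t2_net(t2)
        logits = classifier(torch.cat([f_adc, f_t2], dim=1))
        cls_loss = F.cross_entropy(logits, labels)
        # similarity term: encourage the two modality branches to agree
        sim_loss = 1.0 - F.cosine_similarity(f_adc, f_t2, dim=1).mean()
        return cls_loss + lam * sim_loss

    # toy batch: 4 patches of 64x64 per modality
    adc = torch.randn(4, 1, 64, 64)
    t2 = torch.randn(4, 1, 64, 64)
    labels = torch.randint(0, 2, (4,))
    multimodal_loss(adc, t2, labels).backward()
    ```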

  11. Quantitative multimodality imaging in cancer research and therapy.

    PubMed

    Yankeelov, Thomas E; Abramson, Richard G; Quarles, C Chad

    2014-11-01

    Advances in hardware and software have enabled the realization of clinically feasible, quantitative multimodality imaging of tissue pathophysiology. Earlier efforts relating to multimodality imaging of cancer have focused on the integration of anatomical and functional characteristics, such as PET-CT and single-photon emission CT (SPECT-CT), whereas more-recent advances and applications have involved the integration of multiple quantitative, functional measurements (for example, multiple PET tracers, varied MRI contrast mechanisms, and PET-MRI), thereby providing a more-comprehensive characterization of the tumour phenotype. The enormous amount of complementary quantitative data generated by such studies is beginning to offer unique insights into opportunities to optimize care for individual patients. Although important technical optimization and improved biological interpretation of multimodality imaging findings are needed, this approach can already be applied informatively in clinical trials of cancer therapeutics using existing tools. These concepts are discussed herein.

  12. Vision 20/20: Simultaneous CT-MRI — Next chapter of multimodality imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Ge, E-mail: wangg6@rpi.edu; Xi, Yan; Gjesteby, Lars

    Multimodality imaging systems such as positron emission tomography-computed tomography (PET-CT) and MRI-PET are widely available, but a simultaneous CT-MRI instrument has not been developed. Synergies between independent modalities, e.g., CT, MRI, and PET/SPECT can be realized with image registration, but such postprocessing suffers from registration errors that can be avoided with synchronized data acquisition. The clinical potential of simultaneous CT-MRI is significant, especially in cardiovascular and oncologic applications where studies of the vulnerable plaque, response to cancer therapy, and kinetic and dynamic mechanisms of targeted agents are limited by current imaging technologies. The rationale, feasibility, and realization of simultaneous CT-MRI are described in this perspective paper. The enabling technologies include interior tomography, unique gantry designs, open magnet and RF sequences, and source and detector adaptation. Based on the experience with PET-CT, PET-MRI, and MRI-LINAC instrumentation where hardware innovation and performance optimization were instrumental to construct commercial systems, the authors provide top-level concepts for simultaneous CT-MRI to meet clinical requirements and new challenges. Simultaneous CT-MRI fills a major gap of modality coupling and represents a key step toward the so-called “omnitomography” defined as the integration of all relevant imaging modalities for systems biology and precision medicine.

  13. Log-Gabor Energy Based Multimodal Medical Image Fusion in NSCT Domain

    PubMed Central

    Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan

    2014-01-01

    Multimodal medical image fusion is a powerful tool in clinical applications such as noninvasive diagnosis, image-guided radiotherapy, and treatment planning. In this paper, a novel nonsubsampled Contourlet transform (NSCT) based method for multimodal medical image fusion is presented, which is approximately shift invariant and can effectively suppress the pseudo-Gibbs phenomena. The source medical images are initially transformed by NSCT, followed by fusing the low- and high-frequency components. Phase congruency, which provides a contrast- and brightness-invariant representation, is applied to fuse the low-frequency coefficients, whereas the Log-Gabor energy, which efficiently identifies coefficients corresponding to the clear and detailed parts, is employed to fuse the high-frequency coefficients. The proposed fusion method has been compared with the discrete wavelet transform (DWT), the fast discrete curvelet transform (FDCT), and the dual tree complex wavelet transform (DTCWT) based image fusion methods and other NSCT-based methods. Visual and quantitative experimental results indicate that the proposed fusion method can obtain more effective and accurate fusion results for multimodal medical images than the other algorithms. Further, the applicability of the proposed method has been demonstrated on a clinical example: images of a woman affected by a recurrent tumor. PMID:25214889
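
    A much-simplified version of this coefficient-level fusion rule can be sketched with a plain discrete wavelet transform (via PyWavelets) standing in for the NSCT: average the low-frequency band and keep the maximum-magnitude high-frequency coefficients. This illustrates the general scheme, not the paper's phase-congruency/Log-Gabor rules.

    ```python
    # Illustrative transform-domain fusion of two co-registered images using a
    # plain DWT as a stand-in for the NSCT.
    import numpy as np
    import pywt

    def fuse_dwt(img_a, img_b, wavelet="db2", level=2):
        ca = pywt.wavedec2(img_a, wavelet, level=level)
        cb = pywt.wavedec2(img_b, wavelet, level=level)
        fused = [0.5 * (ca[0] + cb[0])]                  # average approximation band
        for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
            pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
            fused.append((pick(ha, hb), pick(va, vb), pick(da, db)))
        return pywt.waverec2(fused, wavelet)

    mri = np.random.rand(128, 128)   # placeholder co-registered inputs
    ct = np.random.rand(128, 128)
    print(fuse_dwt(mri, ct).shape)
    ```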

  14. Enhanced EDX images by fusion of multimodal SEM images using pansharpening techniques.

    PubMed

    Franchi, G; Angulo, J; Moreaud, M; Sorbier, L

    2018-01-01

    The goal of this paper is to explore the potential interest of image fusion in the context of multimodal scanning electron microscope (SEM) imaging. In particular, we aim at merging backscattered electron images, which usually have a high spatial resolution but do not provide enough discriminative information to physically classify the nature of the sample, with energy-dispersive X-ray spectroscopy (EDX) images, which have discriminative information but a lower spatial resolution. The produced images are termed enhanced EDX images. To achieve this goal, we have compared the results obtained with classical pansharpening techniques for image fusion with an original approach tailored for multimodal SEM fusion of information. Quantitative assessment is obtained by means of two SEM images and a simulated dataset produced by software based on PENELOPE. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
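
    Among the classical pansharpening techniques referred to above, a Brovey-style ratio fusion is one of the simplest; the sketch below applies it with the high-resolution backscattered-electron image playing the role of the panchromatic band. The random arrays and the assumption that the EDX maps are already upsampled to the BSE grid are illustrative, not the paper's protocol.

    ```python
    # Illustrative Brovey-style fusion of low-resolution EDX maps with a
    # high-resolution backscattered-electron (BSE) image.
    import numpy as np

    def brovey_pansharpen(bse, edx, eps=1e-6):
        """Scale each EDX band by the ratio of the BSE image to the EDX
        intensity sum; `edx` has shape (bands, H, W) on the BSE grid."""
        intensity = edx.sum(axis=0) + eps
        return edx * (bse / intensity)[None, :, :]

    bse = np.random.rand(256, 256)            # high-resolution BSE image
    edx = np.random.rand(3, 256, 256) * 0.3   # three upsampled elemental maps
    print(brovey_pansharpen(bse, edx).shape)
    ```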

  15. Multi-mode of Four and Six Wave Parametric Amplified Process

    NASA Astrophysics Data System (ADS)

    Zhu, Dayu; Yang, Yiheng; Zhang, Da; Liu, Ruizhou; Ma, Danmeng; Li, Changbiao; Zhang, Yanpeng

    2017-03-01

    Multiple quantum modes in correlated fields are essential for future quantum information processing and quantum computing. Here we report the generation of a multi-mode phenomenon through parametric amplified four- and six-wave mixing processes in a rubidium atomic ensemble. The multi-mode properties in both the frequency and spatial domains are studied. On one hand, the multi-mode behavior in the frequency domain is dominantly controlled by the intensity of the external dressing effect, or by the nonlinear phase shift through the internal dressing effect; on the other hand, the multi-mode behavior in the spatial domain is visually demonstrated directly from the images of the biphoton fields. In addition, the correlation of the two output fields is demonstrated in both domains. Our approach supports efficient applications for scalable quantum correlated imaging.

  16. Multi-mode of Four and Six Wave Parametric Amplified Process.

    PubMed

    Zhu, Dayu; Yang, Yiheng; Zhang, Da; Liu, Ruizhou; Ma, Danmeng; Li, Changbiao; Zhang, Yanpeng

    2017-03-03

    Multiple quantum modes in correlated fields are essential for future quantum information processing and quantum computing. Here we report the generation of multi-mode phenomena through parametrically amplified four- and six-wave mixing processes in a rubidium atomic ensemble. The multi-mode properties are studied in both the frequency and spatial domains. In the frequency domain, the multi-mode behavior is dominantly controlled by the intensity of the external dressing effect, or by the nonlinear phase shift through the internal dressing effect; in the spatial domain, the multi-mode behavior is demonstrated visually and directly from the images of the biphoton fields. In addition, the correlation of the two output fields is demonstrated in both domains. Our approach enables efficient applications in scalable quantum correlated imaging.

  17. Improved heuristics for early melanoma detection using multimode hyperspectral dermoscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Vasefi, Fartash; MacKinnon, Nicholas B.; Booth, Nicholas; Farkas, Daniel L.

    2017-02-01

    Purpose: To determine the performance of a multimode dermoscopy system (SkinSpect) designed to quantify and 3-D map in vivo melanin and hemoglobin concentrations in skin, and to compare the accuracy of its melanoma scoring system with SIAscopy and histopathology. Methods: A multimode imaging dermoscope is presented that combines polarization, fluorescence, and hyperspectral imaging to accurately map the distribution of skin melanin, collagen, and hemoglobin in pigmented lesions. We combine two depth-sensitive techniques, polarization and hyperspectral imaging, to determine the spatial distribution of melanin and hemoglobin oxygenation in a skin lesion. By quantifying melanin absorption in pigmented areas, we can also more accurately estimate the fluorescence emission distribution, which arises mainly from skin collagen. Results and discussion: We compared in vivo features of melanocytic lesions (N = 10) extracted by the non-invasive SkinSpect and the SIMSYS-MoleMate SIAscope, and correlated them with the pathology reports. Melanin distribution at different depths, as well as hemodynamic features including the abnormal vascularity we detected, will be discussed. We will adapt SkinSpect scoring to the ABCDE criteria (asymmetry, border, color, diameter, evolution) and the seven-point dermatologic checklist estimated by a dermatologist, including: (1) atypical pigment network, (2) blue-whitish veil, (3) atypical vascular pattern, (4) irregular streaks, (5) irregular pigmentation, (6) irregular dots and globules, and (7) regression structures. Conclusion: Distinctive, diagnostic features seen by SkinSpect in melanoma vs. normal pigmented lesions will be compared with SIAscopy results and histopathology.

  18. MULTIMODAL IMAGING OF CHOROIDAL LESIONS IN DISSEMINATED MYCOBACTERIUM CHIMAERA INFECTION AFTER CARDIOTHORACIC SURGERY.

    PubMed

    Böni, Christian; Al-Sheikh, Mayss; Hasse, Barbara; Eberhard, Roman; Kohler, Philipp; Hasler, Pascal; Erb, Stefan; Hoffmann, Matthias; Barthelmes, Daniel; Zweifel, Sandrine A

    2017-12-04

    To explore morphologic characteristics of choroidal lesions in patients with disseminated Mycobacterium chimaera infection subsequent to open-heart surgery. Nine patients (18 eyes) with systemic M. chimaera infection were reviewed. Activity of choroidal lesions was evaluated using biomicroscopy, fundus autofluorescence, enhanced depth imaging optical coherence tomography, fluorescein angiography/indocyanine green angiography, and optical coherence tomography angiography. Relationships of choroidal findings to systemic disease activity were sought. All 9 male patients, aged between 49 and 66 years, were diagnosed with endocarditis and/or aortic graft infection. Mean follow-up was 17.6 months. Four patients had only inactive lesions (mild disease). In all five patients (10 eyes) with progressive ocular disease, indocyanine green angiography was superior to other tests for revealing new lesions, and active lesions correlated with hyporeflective choroidal areas on enhanced depth imaging optical coherence tomography. One eye with a large choroidal granuloma developed choroidal neovascularization. Optical coherence tomography angiography showed areas with reduced perfusion at the inner choroid. All 5 patients with progressive ocular disease had evidence of systemic disease activity within ±6 weeks. Choroidal manifestation of disseminated M. chimaera infection indicates systemic disease activity. Multimodal imaging is suitable for recognizing progressive ocular disease. We propose ophthalmologic screening examinations for patients with M. chimaera infection.

  19. Highlights lecture EANM 2016: "Embracing molecular imaging and multi-modal imaging: a smart move for nuclear medicine towards personalized medicine".

    PubMed

    Aboagye, Eric O; Kraeber-Bodéré, Françoise

    2017-08-01

    The 2016 EANM Congress took place in Barcelona, Spain, from 15 to 19 October under the leadership of Prof. Wim Oyen, chair of the EANM Scientific Committee. With more than 6,000 participants, this congress was the most important European event in nuclear medicine, bringing together a multidisciplinary community involved in the different fields of nuclear medicine. There were over 600 oral and 1,200 poster or e-Poster presentations with an overwhelming focus on development and application of imaging for personalized care, which is timely for the community. Beyond FDG PET, major highlights included progress in the use of PSMA and SSTR receptor-targeted radiopharmaceuticals and associated theranostics in oncology. Innovations in radiopharmaceuticals for imaging pathologies of the brain and cardiovascular system, as well as infection and inflammation, were also highlighted. In the areas of physics and instrumentation, multimodality imaging and radiomics were highlighted as promising areas of research.

  20. Recommendations on nuclear and multimodality imaging in IE and CIED infections.

    PubMed

    Erba, Paola Anna; Lancellotti, Patrizio; Vilacosta, Isidre; Gaemperli, Oliver; Rouzet, Francois; Hacker, Marcus; Signore, Alberto; Slart, Riemer H J A; Habib, Gilbert

    2018-05-24

    In the latest update of the European Society of Cardiology (ESC) guidelines for the management of infective endocarditis (IE), imaging is positioned at the centre of the diagnostic work-up so that an early and accurate diagnosis can be reached. Besides echocardiography, contrast-enhanced CT (ce-CT), radiolabelled leucocyte (white blood cell, WBC) SPECT/CT and [18F]FDG PET/CT are included as diagnostic tools in the diagnostic flow chart for IE. Following the clinical guidelines that provided a straightforward message on the role of multimodality imaging, we believe that it is highly relevant to produce specific recommendations on nuclear multimodality imaging in IE and cardiac implantable electronic device infections. In these procedural recommendations we therefore describe in detail the technical and practical aspects of WBC SPECT/CT and [18F]FDG PET/CT, including ce-CT acquisition protocols. We also discuss the advantages and limitations of each procedure, specific pitfalls when interpreting images, and the most important results from the literature, and also provide recommendations on the appropriate use of multimodality imaging.

  1. On-the-fly augmented reality for orthopedic surgery using a multimodal fiducial.

    PubMed

    Andress, Sebastian; Johnson, Alex; Unberath, Mathias; Winkler, Alexander Felix; Yu, Kevin; Fotouhi, Javad; Weidert, Simon; Osgood, Greg; Navab, Nassir

    2018-04-01

    Fluoroscopic x-ray guidance is a cornerstone for percutaneous orthopedic surgical procedures. However, two-dimensional (2-D) observations of the three-dimensional (3-D) anatomy suffer from the effects of projective simplification. Consequently, many x-ray images from various orientations need to be acquired for the surgeon to accurately assess the spatial relations between the patient's anatomy and the surgical tools. We present an on-the-fly surgical support system that provides guidance using augmented reality and can be used in quasi-unprepared operating rooms. The proposed system builds upon a multimodality marker and a simultaneous localization and mapping technique to co-calibrate an optical see-through head-mounted display to a C-arm fluoroscopy system. Annotations on the 2-D x-ray images can then be rendered as virtual objects in 3-D, providing surgical guidance. We quantitatively evaluate the components of the proposed system and, finally, conduct a feasibility study on a semi-anthropomorphic phantom. The accuracy of our system was comparable to the traditional image-guided technique while substantially reducing the number of acquired x-ray images as well as procedure time. Our promising results encourage further research on the interaction between virtual and real objects that we believe will directly benefit the proposed method. Further, we would like to explore the capabilities of our on-the-fly augmented reality support system in a larger study directed toward common orthopedic interventions.

  2. Multimodal adaptive optics for depth-enhanced high-resolution ophthalmic imaging

    NASA Astrophysics Data System (ADS)

    Hammer, Daniel X.; Mujat, Mircea; Iftimia, Nicusor V.; Lue, Niyom; Ferguson, R. Daniel

    2010-02-01

    We developed a multimodal adaptive optics (AO) retinal imager for the diagnosis of retinal diseases, including glaucoma, diabetic retinopathy (DR), age-related macular degeneration (AMD), and retinitis pigmentosa (RP). The instrument is the first high-performance AO system to combine AO-corrected scanning laser ophthalmoscopy (SLO) and swept-source Fourier-domain optical coherence tomography (SSOCT) imaging modes in a single compact clinical prototype platform. The SSOCT channel operates at a wavelength of 1 μm for increased penetration and visualization of the choriocapillaris and choroid, sites of major disease activity for DR and wet AMD. The system is designed to operate on a broad clinical population with a dual deformable mirror (DM) configuration that allows simultaneous low- and high-order aberration correction. The system also includes a wide-field line-scanning ophthalmoscope (LSO) for initial screening, target identification, and global orientation; an integrated retinal tracker (RT) to stabilize the SLO, OCT, and LSO imaging fields in the presence of rotational eye motion; and a high-resolution LCD-based fixation target for presentation of stimuli and other visual cues to the subject. The system was tested in a limited number of human subjects without retinal disease for performance optimization and validation. It was able to resolve and quantify cone photoreceptors across the macula to within ~0.5 deg (~100-150 μm) of the fovea, image and delineate ten retinal layers, and penetrate deep into the choroid to resolve targets there. In addition to instrument hardware development, analysis algorithms were developed for efficient information extraction from clinical imaging sessions, with functionality including automated image registration, photoreceptor counting, strip and montage stitching, and segmentation. The system provides clinicians and researchers with high-resolution, high-performance adaptive optics imaging to help guide therapies, develop new drugs, and improve patient outcomes.

  3. Multimodal MRI for early diabetic mild cognitive impairment: study protocol of a prospective diagnostic trial.

    PubMed

    Yu, Ying; Sun, Qian; Yan, Lin-Feng; Hu, Yu-Chuan; Nan, Hai-Yan; Yang, Yang; Liu, Zhi-Cheng; Wang, Wen; Cui, Guang-Bin

    2016-08-24

    Type 2 diabetes mellitus (T2DM) is a risk factor for dementia. Mild cognitive impairment (MCI), an intermediary state between normal cognition and dementia, often occurs during the prodromal diabetic stage, making early diagnosis and intervention of MCI very important. The latest neuroimaging techniques have revealed some underlying microstructural alterations in diabetic MCI from certain aspects, but an integrated multimodal MRI approach for detecting early neuroimaging changes in diabetic MCI patients is still lacking. We therefore intend to conduct a diagnostic trial using multimodal MRI techniques to detect early diabetic MCI as determined by the Montreal Cognitive Assessment (MoCA). In this study, healthy controls, prodromal diabetes, and diabetes subjects (53 subjects/group) aged 40-60 years will be recruited from the physical examination center of Tangdu Hospital. The neuroimaging and psychometric measurements will be repeated at 0.5-year intervals over a 2.5-year follow-up. The primary outcome measures are: 1) microstructural and functional alterations revealed with multimodal MRI scans, including structural magnetic resonance imaging (sMRI), resting-state functional magnetic resonance imaging (rs-fMRI), diffusion kurtosis imaging (DKI), and three-dimensional pseudo-continuous arterial spin labeling (3D-pCASL); and 2) cognition evaluated with the MoCA. The secondary outcome measures are obesity, metabolic characteristics, lifestyle, and quality of life. The study will provide evidence for the potential use of multimodal MRI techniques with psychometric evaluation in diagnosing MCI at the prodromal diabetic stage, so as to support decision making in early intervention and improve the prognosis of T2DM. This study was registered at ClinicalTrials.gov (NCT02420470) on April 2, 2015 and published on July 29, 2015.

  4. A Review of Intravascular Ultrasound–Based Multimodal Intravascular Imaging: The Synergistic Approach to Characterizing Vulnerable Plaques

    PubMed Central

    Ma, Teng; Zhou, Bill; Hsiai, Tzung K.; Shung, K. Kirk

    2015-01-01

    Catheter-based intravascular imaging modalities are being developed to visualize pathologies in coronary arteries, such as high-risk vulnerable atherosclerotic plaques known as thin-cap fibroatheromas, to guide therapeutic strategies for preventing heart attacks. Mounting evidence has shown that three distinctive histopathological features (the presence of a thin fibrous cap, a lipid-rich necrotic core, and numerous infiltrating macrophages) are key markers of increased vulnerability in atherosclerotic plaques. To visualize these changes, the majority of catheter-based imaging modalities use intravascular ultrasound (IVUS) as the technical foundation and integrate emerging intravascular imaging techniques to enhance the characterization of vulnerable plaques. However, no current imaging technology is the unequivocal "gold standard" for the diagnosis of vulnerable atherosclerotic plaques. Each intravascular imaging technology possesses its own unique features that yield valuable information, although each is encumbered by inherent limitations not seen in other modalities. In this context, the aim of this review is to discuss current scientific innovations, technical challenges, and prospective strategies in the development of IVUS-based multi-modality intravascular imaging systems aimed at assessing atherosclerotic plaque vulnerability. PMID:26400676

  5. Cross-modal learning to rank via latent joint representation.

    PubMed

    Wu, Fei; Jiang, Xinyang; Li, Xi; Tang, Siliang; Lu, Weiming; Zhang, Zhongfei; Zhuang, Yueting

    2015-05-01

    Cross-modal ranking is a research topic that is imperative to many applications involving multimodal data. Discovering a joint representation for multimodal data and learning a ranking function are essential in order to boost cross-media retrieval (i.e., image-query-text or text-query-image). In this paper, we propose an approach that discovers the latent joint representation of pairs of multimodal data (e.g., pairs of an image query and a text document) via a conditional random field and structural learning in a listwise ranking manner. We call this approach cross-modal learning to rank via latent joint representation (CML²R). In CML²R, the correlations between multimodal data are captured in terms of their shared hidden variables (e.g., topics), and a hidden-topic-driven discriminative ranking function is learned in a listwise manner. The experiments show that the proposed approach achieves good performance in cross-media retrieval while also learning a discriminative representation of multimodal data.
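
    For readers unfamiliar with the term, the snippet below illustrates what a "listwise" ranking objective looks like: a ListNet-style cross-entropy between the top-one probability distributions induced by predicted scores and by graded relevance over one query's candidate list. It is only a generic illustration; the paper's CML²R model couples a conditional random field with latent topics and is not reproduced here.

    ```python
    # Generic listwise (ListNet-style) ranking loss for one query's candidate list.
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def listnet_loss(scores, relevance):
        """Cross-entropy between top-one distributions from predicted scores
        and ground-truth relevance labels for a single candidate list."""
        p_true = softmax(relevance.astype(float))
        p_pred = softmax(scores.astype(float))
        return float(-(p_true * np.log(p_pred + 1e-12)).sum())

    if __name__ == "__main__":
        # One image query, five candidate text documents (toy numbers).
        predicted = np.array([2.0, 0.5, 1.2, -0.3, 0.0])
        truth = np.array([3, 0, 2, 0, 1])   # graded relevance labels
        print(listnet_loss(predicted, truth))
    ```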

  6. Multi-Modality Cascaded Convolutional Neural Networks for Alzheimer's Disease Diagnosis.

    PubMed

    Liu, Manhua; Cheng, Danni; Wang, Kundong; Wang, Yaping

    2018-03-23

    Accurate and early diagnosis of Alzheimer's disease (AD) plays an important role in patient care and the development of future treatments. Structural and functional neuroimages, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), provide powerful imaging modalities to help understand the anatomical and functional neural changes related to AD. In recent years, machine learning methods have been widely studied for the analysis of multi-modality neuroimages for quantitative evaluation and computer-aided diagnosis (CAD) of AD. Most existing methods extract hand-crafted imaging features after image preprocessing such as registration and segmentation, and then train a classifier to distinguish AD subjects from other groups. This paper proposes to construct cascaded convolutional neural networks (CNNs) to learn the multi-level and multimodal features of MRI and PET brain images for AD classification. First, multiple deep 3D-CNNs are constructed on different local image patches to transform the local brain image into more compact high-level features. Then, an upper high-level 2D-CNN followed by a softmax layer is cascaded to ensemble the high-level features learned from the multiple modalities and to generate the latent multimodal correlation features of the corresponding image patches for the classification task. Finally, these learned features are combined by a fully connected layer followed by a softmax layer for AD classification. The proposed method can automatically learn generic multi-level and multimodal features from multiple imaging modalities for classification, and these features are robust to scale and rotation variations to some extent. No image segmentation or rigid registration is required in preprocessing the brain images. Our method is evaluated on the baseline MRI and PET images of 397 subjects, including 93 AD patients, 204 patients with mild cognitive impairment (MCI; 76 pMCI + 128 sMCI), and 100 normal controls (NC), from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Experimental results show that the proposed method achieves an accuracy of 93.26% for classification of AD vs. NC and 82.95% for classification of pMCI vs. NC, demonstrating promising classification performance.
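
    The underlying fusion idea (modality-specific 3D-CNN feature extractors whose outputs are combined for classification) can be sketched in a heavily reduced form as below. This is not the paper's cascaded architecture of many patch-level 3D-CNNs plus an upper 2D-CNN; it is a minimal two-branch PyTorch illustration with made-up patch sizes and channel counts.

    ```python
    # Minimal two-branch multimodal 3D-CNN sketch (MRI branch + PET branch, late fusion).
    import torch
    import torch.nn as nn

    class Branch3D(nn.Module):
        """Small 3D-CNN encoder for one imaging modality (e.g., an MRI or PET patch)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.AdaptiveAvgPool3d(1),
            )

        def forward(self, x):
            return self.features(x).flatten(1)    # (batch, 16)

    class MultimodalADNet(nn.Module):
        def __init__(self, n_classes=2):
            super().__init__()
            self.mri_branch = Branch3D()
            self.pet_branch = Branch3D()
            self.classifier = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, n_classes))

        def forward(self, mri, pet):
            fused = torch.cat([self.mri_branch(mri), self.pet_branch(pet)], dim=1)
            return self.classifier(fused)          # raw logits; pair with CrossEntropyLoss

    if __name__ == "__main__":
        net = MultimodalADNet()
        mri = torch.randn(4, 1, 32, 32, 32)        # hypothetical patch size
        pet = torch.randn(4, 1, 32, 32, 32)
        print(net(mri, pet).shape)                  # torch.Size([4, 2])
    ```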

  7. Crucial breakthrough of second near-infrared biological window fluorophores: design and synthesis toward multimodal imaging and theranostics

    DOE PAGES

    He, Shuqing; Song, Jun; Qu, Junle; ...

    2018-01-01

    Recent advances in the chemical design and synthesis of fluorophores in the second near-infrared biological window (NIR-II) for multimodal imaging and theranostics are summarized and highlighted in this review article.

  8. Crucial breakthrough of second near-infrared biological window fluorophores: design and synthesis toward multimodal imaging and theranostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Shuqing; Song, Jun; Qu, Junle

    Recent advances in the chemical design and synthesis of fluorophores in the second near-infrared biological window (NIR-II) for multimodal imaging and theranostics are summarized and highlighted in this review article.

  9. The sweet spot: FDG and other 2-carbon glucose analogs for multi-modal metabolic imaging of tumor metabolism

    PubMed Central

    Cox, Benjamin L; Mackie, Thomas R; Eliceiri, Kevin W

    2015-01-01

    Multi-modal imaging approaches to tumor metabolism that provide improved specificity, physiological relevance, and spatial resolution would improve the diagnosis of tumors and the evaluation of tumor progression. Currently, the molecular probe FDG, glucose fluorinated with 18F at the 2-carbon, is the primary metabolic approach for clinical diagnostics with PET imaging. However, PET lacks the resolution necessary to yield intratumoral distributions of deoxyglucose at the cellular level. Multi-modal imaging could address this problem, but it requires the development of new glucose analogs that are better suited for other imaging modalities. Several such analogs have been created and are reviewed here. Also reviewed are several multi-modal imaging studies that attempt to shed light on the cellular distribution of glucose analogs within tumors. Some of these studies were performed in vitro, while others were performed in vivo in an animal model. The results from these studies reveal a visualization gap between the in vitro and in vivo studies that, if closed, could enable the early detection of tumors, high-resolution monitoring of tumors during treatment, and greater accuracy in the assessment of different imaging agents. PMID:25625022

  10. Multimodal imaging of lung cancer and its microenvironment (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Hariri, Lida P.; Niederst, Matthew J.; Mulvey, Hillary; Adams, David C.; Hu, Haichuan; Chico Calero, Isabel; Szabari, Margit V.; Vakoc, Benjamin J.; Hasan, Tayyaba; Bouma, Brett E.; Engelman, Jeffrey A.; Suter, Melissa J.

    2016-03-01

    Despite significant advances in targeted therapies for lung cancer, nearly all patients develop drug resistance within 6-12 months and prognosis remains poor. Developing drug resistance is a progressive process that involves tumor cells and their microenvironment. We hypothesize that microenvironmental factors alter tumor growth and response to targeted therapy. We conducted in vitro studies in human EGFR-mutant lung carcinoma cells and demonstrated that factors secreted from lung fibroblasts result in increased tumor cell survival during targeted therapy with the EGFR inhibitor gefitinib. We also demonstrated that increased environmental stiffness results in increased tumor survival during gefitinib therapy. In order to test our hypothesis in vivo, we developed a multimodal optical imaging protocol for preclinical intravital imaging in mouse models to assess the tumor and its microenvironment over time. We have successfully conducted multimodal imaging of dorsal skinfold chamber (DSC) window mice implanted with GFP-labeled human EGFR-mutant lung carcinoma cells and visualized changes in tumor development and microenvironment facets over time. Multimodal imaging included structural OCT to assess tumor viability and necrosis, polarization-sensitive OCT to measure tissue birefringence for collagen/fibroblast detection, and Doppler OCT to assess tumor vasculature. Confocal imaging was also performed for high-resolution visualization of the GFP-labeled EGFR-mutant lung cancer cells and was coregistered with OCT. Our results demonstrate that stromal support and vascular growth are essential to tumor progression. Multimodal imaging is a useful tool for assessing the tumor and its microenvironment over time.

  11. COSBID-M3: a platform for multimodal monitoring, data collection, and research in neurocritical care.

    PubMed

    Wilson, J Adam; Shutter, Lori A; Hartings, Jed A

    2013-01-01

    Neuromonitoring in patients with severe brain trauma and stroke is often limited to intracranial pressure (ICP); advanced neuroscience intensive care units may also monitor brain oxygenation (partial pressure of brain tissue oxygen, PbtO2), electroencephalogram (EEG), cerebral blood flow (CBF), or neurochemistry. For example, cortical spreading depolarizations (CSDs) recorded by electrocorticography (ECoG) are associated with delayed cerebral ischemia after subarachnoid hemorrhage and are an attractive target for novel therapeutic approaches. However, to better understand pathophysiologic relations and realize the potential of multimodal monitoring, a common platform for data collection and integration is needed. We have developed a multimodal system that integrates clinical, research, and imaging data into a single research and development (R&D) platform. Our system is adapted from the widely used BCI2000, a brain-computer interface tool which is written in the C++ language and supports over 20 data acquisition systems. It is optimized for real-time analysis of multimodal data using advanced time and frequency domain analyses and is extensible for research development using a combination of C++, MATLAB, and Python languages. Continuous streams of raw and processed data, including BP (blood pressure), ICP, PbtO2, CBF, ECoG, EEG, and patient video are stored in an open binary data format. Selected events identified in raw (e.g., ICP) or processed (e.g., CSD) measures are displayed graphically, can trigger alarms, or can be sent to researchers or clinicians via text message. For instance, algorithms for automated detection of CSD have been incorporated, and processed ECoG signals are projected onto three-dimensional (3D) brain models based on patient magnetic resonance imaging (MRI) and computed tomographic (CT) scans, allowing real-time correlation of pathoanatomy and cortical function. This platform will provide clinicians and researchers with an advanced tool to investigate pathophysiologic relationships and novel measures of cerebral status, as well as implement treatment algorithms based on such multimodal measures.

  12. Development of a precision multimodal surgical navigation system for lung robotic segmentectomy

    PubMed Central

    Soldea, Valentin; Lachkar, Samy; Rinieri, Philippe; Sarsam, Mathieu; Bottet, Benjamin; Peillon, Christophe

    2018-01-01

    Minimally invasive sublobar anatomical resection is becoming more and more popular for the management of early lung lesions. Robotic-assisted thoracic surgery (RATS) is unique in comparison with other minimally invasive techniques. Indeed, RATS is able to better integrate multiple streams of information, including advanced imaging techniques, in an immersive experience at the level of the robotic console. Our aim was to describe three-dimensional (3D) imaging throughout the surgical procedure, from preoperative planning to intraoperative assistance and complementary investigations such as radial endobronchial ultrasound (R-EBUS) and virtual bronchoscopy for pleural dye marking. All cases were operated using the DaVinci System™. Modelling was provided by Visible Patient™ (Strasbourg, France). Image integration in the operative field was achieved using the Tile Pro multi-display input of the DaVinci console. Our experience was based on 114 robotic segmentectomies performed between January 2012 and October 2017. The clinical value of 3D imaging integration was evaluated in 2014 in a pilot study. Progressively, we have reached the conclusion that the use of such an anatomic model improves the safety and reliability of procedures. The multimodal system including 3D imaging has been used in more than 40 patients so far and has demonstrated perfect operative anatomic accuracy. Currently, we are developing an original virtual reality experience by exploring 3D imaging models at the robotic console level. The act of operating is being transformed: the surgeon now oversees a complex system that improves decision making. PMID:29785294

  13. Development of a precision multimodal surgical navigation system for lung robotic segmentectomy.

    PubMed

    Baste, Jean Marc; Soldea, Valentin; Lachkar, Samy; Rinieri, Philippe; Sarsam, Mathieu; Bottet, Benjamin; Peillon, Christophe

    2018-04-01

    Minimally invasive sublobar anatomical resection is becoming more and more popular for the management of early lung lesions. Robotic-assisted thoracic surgery (RATS) is unique in comparison with other minimally invasive techniques. Indeed, RATS is able to better integrate multiple streams of information, including advanced imaging techniques, in an immersive experience at the level of the robotic console. Our aim was to describe three-dimensional (3D) imaging throughout the surgical procedure, from preoperative planning to intraoperative assistance and complementary investigations such as radial endobronchial ultrasound (R-EBUS) and virtual bronchoscopy for pleural dye marking. All cases were operated using the DaVinci System™. Modelling was provided by Visible Patient™ (Strasbourg, France). Image integration in the operative field was achieved using the Tile Pro multi-display input of the DaVinci console. Our experience was based on 114 robotic segmentectomies performed between January 2012 and October 2017. The clinical value of 3D imaging integration was evaluated in 2014 in a pilot study. Progressively, we have reached the conclusion that the use of such an anatomic model improves the safety and reliability of procedures. The multimodal system including 3D imaging has been used in more than 40 patients so far and has demonstrated perfect operative anatomic accuracy. Currently, we are developing an original virtual reality experience by exploring 3D imaging models at the robotic console level. The act of operating is being transformed: the surgeon now oversees a complex system that improves decision making.

  14. Novel design of interactive multimodal biofeedback system for neurorehabilitation.

    PubMed

    Huang, He; Chen, Y; Xu, W; Sundaram, H; Olson, L; Ingalls, T; Rikakis, T; He, Jiping

    2006-01-01

    A previous design of a biofeedback system for neurorehabilitation in an interactive multimodal environment demonstrated the potential of engaging stroke patients in task-oriented neuromotor rehabilitation. This report explores a new concept and alternative designs for multimedia-based biofeedback systems. In this system, the new interactive multimodal environment was constructed with an abstract presentation of movement parameters. Scenery images or pictures, and their clarity and orientation, are used to reflect the arm movement and its position relative to the target, instead of an animated arm. The multiple biofeedback parameters were classified into hierarchical levels according to the importance of each movement parameter to performance. A new quantified measurement for these parameters was developed to assess the patient's performance both in real time and offline. The parameters were represented by combined visual and auditory presentations using various distinct musical instruments. Overall, the objective of the newly designed system is to explore what information should be fed back, and how, in an interactive virtual environment in order to enhance the sensorimotor integration that may facilitate the efficient design and application of virtual environment based therapeutic interventions.

  15. Multimodal visualization interface for data management, self-learning and data presentation.

    PubMed

    Van Sint Jan, S; Demondion, X; Clapworthy, G; Louryan, S; Rooze, M; Cotten, A; Viceconti, M

    2006-10-01

    Multimodal visualization software, called the Data Manager (DM), has been developed to increase interdisciplinary communication around the topic of visualization and modeling of various aspects of the human anatomy. Numerous tools used in radiology are integrated into the interface, which runs on standard personal computers. The available tools, combined with hierarchical data management and custom layouts, allow analysis of medical imaging data using advanced features outside radiological premises (for example, for patient review, conference presentation, or tutorial preparation). The system is free and based on an open-source software development architecture, so updates of the system for custom applications are possible.

  16. Composite PET and MRI for accurate localization and metabolic modeling: a very useful tool for research and clinic

    NASA Astrophysics Data System (ADS)

    Bidaut, Luc M.

    1991-06-01

    To help analyze PET data and take full advantage of their metabolic content, a system was designed and implemented to align and process data from various medical imaging modalities, particularly (but not only) for brain studies. Although this system is for now mostly used for anatomical localization, multi-modality ROIs, and pharmacokinetic modeling, more multi-modality protocols will be implemented in the future, not only to help in PET reconstruction data correction and semi-automated ROI definition, but also to help improve diagnostic accuracy as well as surgery and therapy planning.

  17. New Technologies, New Possibilities for the Arts and Multimodality in English Language Arts

    ERIC Educational Resources Information Center

    Williams, Wendy R.

    2014-01-01

    This article discusses the arts, multimodality, and new technologies in English language arts. It then turns to the example of the illuminated text--a multimodal book report consisting of animated text, music, and images--to consider how art, multimodality, and technology can work together to support students' reading of literature and inspire…

  18. Combining kriging, multispectral and multimodal microscopy to resolve malaria-infected erythrocyte contents.

    PubMed

    Dabo-Niang, S; Zoueu, J T

    2012-09-01

    In this communication, we demonstrate how kriging, combined with multispectral and multimodal microscopy, can enhance the resolution of images of malaria-infected erythrocytes and provide more detail on their composition for analysis and diagnosis. The results of this interpolation, applied to the two principal components of the multispectral and multimodal images, illustrate that examination of the content of Plasmodium falciparum-infected human erythrocytes is improved. © 2012 The Authors Journal of Microscopy © 2012 Royal Microscopical Society.
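
    Kriging of an image channel can be approximated with Gaussian-process regression, since simple kriging with a Gaussian covariance is mathematically equivalent to GP regression with an RBF kernel. The sketch below upsamples a small patch this way; the variogram model, the principal-component preprocessing, and all parameter values used in the paper are not reproduced here and are illustrative assumptions.

    ```python
    # Kriging-style interpolation sketch (GP regression surrogate) to upsample a patch.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def krige_upsample(img, factor=2):
        h, w = img.shape
        yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        X_train = np.column_stack([yy.ravel(), xx.ravel()]).astype(float)
        y_train = img.ravel().astype(float)

        gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(1e-3),
                                      normalize_y=True)
        gp.fit(X_train, y_train)

        fine_y = np.linspace(0, h - 1, h * factor)
        fine_x = np.linspace(0, w - 1, w * factor)
        fy, fx = np.meshgrid(fine_y, fine_x, indexing="ij")
        X_test = np.column_stack([fy.ravel(), fx.ravel()])
        return gp.predict(X_test).reshape(h * factor, w * factor)

    if __name__ == "__main__":
        patch = np.random.default_rng(2).random((16, 16))  # keep small: exact GP fit is O(N^3)
        print(krige_upsample(patch).shape)                  # (32, 32)
    ```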

  19. Multi-Modal Nano-Probes for Radionuclide and 5-color Near Infrared Optical Lymphatic Imaging

    PubMed Central

    Kobayashi, Hisataka; Koyama, Yoshinori; Barrett, Tristan; Hama, Yukihiro; Regino, Celeste A. S.; Shin, In Soo; Jang, Beom-Su; Le, Nhat; Paik, Chang H.; Choyke, Peter L.; Urano, Yasuteru

    2008-01-01

    Current contrast agents generally have one function and can only be imaged in monochrome; therefore, the majority of imaging methods can only impart uniparametric information. A single nano-particle has the potential to be loaded with multiple payloads. Such multi-modality probes have the ability to be imaged by more than one imaging technique, which could compensate for the weakness or even combine the advantages of each individual modality. Furthermore, optical imaging using different optical probes enables us to achieve multi-color in vivo imaging, wherein multiple parameters can be read from a single image. To allow differentiation of multiple optical signals in vivo, each probe should have a close but different near infrared emission. To this end, we synthesized nano-probes with multi-modal and multi-color potential, which employed a polyamidoamine dendrimer platform linked to both radionuclides and optical probes, permitting dual-modality scintigraphic and 5-color near infrared optical lymphatic imaging using a multiple excitation spectrally-resolved fluorescence imaging technique. PMID:19079788

  20. Multimodal nanoprobes for radionuclide and five-color near-infrared optical lymphatic imaging.

    PubMed

    Kobayashi, Hisataka; Koyama, Yoshinori; Barrett, Tristan; Hama, Yukihiro; Regino, Celeste A S; Shin, In Soo; Jang, Beom-Su; Le, Nhat; Paik, Chang H; Choyke, Peter L; Urano, Yasuteru

    2007-11-01

    Current contrast agents generally have one function and can only be imaged in monochrome; therefore, the majority of imaging methods can only impart uniparametric information. A single nanoparticle has the potential to be loaded with multiple payloads. Such multimodality probes have the ability to be imaged by more than one imaging technique, which could compensate for the weakness or even combine the advantages of each individual modality. Furthermore, optical imaging using different optical probes enables us to achieve multicolor in vivo imaging, wherein multiple parameters can be read from a single image. To allow differentiation of multiple optical signals in vivo, each probe should have a close but different near-infrared emission. To this end, we synthesized nanoprobes with multimodal and multicolor potential, which employed a polyamidoamine dendrimer platform linked to both radionuclides and optical probes, permitting dual-modality scintigraphic and five-color near-infrared optical lymphatic imaging using a multiple-excitation spectrally resolved fluorescence imaging technique.

  1. A framework for biomedical figure segmentation towards image-based document retrieval

    PubMed Central

    2013-01-01

    The figures included in many biomedical publications play an important role in understanding the biological experiments and facts described within. Recent studies have shown that it is possible to integrate the information extracted from figures into classical document classification and retrieval tasks in order to improve their accuracy. One important observation about the figures included in biomedical publications is that they are often composed of multiple subfigures or panels, each describing different methodologies or results. The use of these multimodal figures is a common practice in bioscience, as experimental results are graphically validated via multiple methodologies or procedures. Thus, for a better use of multimodal figures in document classification or retrieval tasks, as well as for providing the evidence source for derived assertions, it is important to automatically segment multimodal figures into subfigures and panels. This is a challenging task, however, as different panels can contain similar objects (e.g., bar charts and line charts) with multiple layouts. Also, certain types of biomedical figures are text-heavy (e.g., DNA sequence and protein sequence images) and differ from traditional images. As a result, classical image segmentation techniques based on low-level image features, such as edges or color, are not directly applicable to robustly partition multimodal figures into single-modal panels. In this paper, we describe a robust solution for automatically identifying and segmenting unimodal panels from a multimodal figure. Our framework starts by robustly harvesting figure-caption pairs from biomedical articles. We base our approach on the observation that the document layout can be used to identify encoded figures and figure boundaries within PDF files. Taking the document layout into consideration allows us to correctly extract figures from the PDF document and associate them with their corresponding captions. We combine pixel-level representations of the extracted images with information gathered from their corresponding captions to estimate the number of panels in the figure. Thus, our approach simultaneously identifies the number of panels and the layout of figures. In order to evaluate the approach described here, we applied our system to documents containing protein-protein interactions (PPIs) and compared the results against a gold standard annotated by biologists. Experimental results showed that our automatic figure segmentation approach surpasses pure caption-based and image-based approaches, achieving 96.64% accuracy. To allow for efficient retrieval of information, as well as to provide the basis for integration into document classification and retrieval systems, among others, we further developed a web-based interface that lets users easily retrieve panels containing the terms specified in their queries. PMID:24565394
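
    As a pixel-level complement to the layout- and caption-driven pipeline described above, the sketch below shows the simplest form of panel splitting: detecting near-white "gutter" bands along one axis of a figure. The thresholds and the synthetic two-panel figure are illustrative assumptions, not the paper's implementation.

    ```python
    # Naive panel-gutter detection via projection profiles of near-white pixels.
    import numpy as np

    def find_gutters(gray, axis, white_thresh=0.98, min_width=10):
        """Return (start, end) index pairs of bands that are almost entirely white.
        axis=1 scans columns (vertical gutters); axis=0 scans rows (horizontal gutters)."""
        is_white = (gray > white_thresh).mean(axis=1 - axis) > 0.99
        gutters, start = [], None
        for i, w in enumerate(is_white):
            if w and start is None:
                start = i
            elif not w and start is not None:
                if i - start >= min_width:
                    gutters.append((start, i))
                start = None
        if start is not None and len(is_white) - start >= min_width:
            gutters.append((start, len(is_white)))
        return gutters

    if __name__ == "__main__":
        # Synthetic two-panel figure: two dark panels separated by a white gutter.
        fig = np.ones((100, 220))
        fig[10:90, 10:100] = 0.2
        fig[10:90, 120:210] = 0.3
        print(find_gutters(fig, axis=1))   # white margins plus the central gutter near columns 100-120
    ```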

  2. Regional Lung Ventilation Analysis Using Temporally Resolved Magnetic Resonance Imaging.

    PubMed

    Kolb, Christoph; Wetscherek, Andreas; Buzan, Maria Teodora; Werner, René; Rank, Christopher M; Kachelrieß, Marc; Kreuter, Michael; Dinkel, Julien; Heußel, Claus Peter; Maier-Hein, Klaus

    We propose a computer-aided method for regional ventilation analysis and observation of lung diseases in temporally resolved magnetic resonance imaging (4D MRI). A shape model-based segmentation and registration workflow was used to create an atlas-derived reference system in which regional tissue motion can be quantified and multimodal image data can be compared regionally. Model-based temporal registration of the lung surfaces in 4D MRI data was compared with the registration of 4D computed tomography (CT) images. A ventilation analysis was performed on 4D MR images of patients with lung fibrosis; 4D MR ventilation maps were compared with corresponding diagnostic 3D CT images of the patients and 4D CT maps of subjects without impaired lung function (serving as reference). Comparison between the computed patient-specific 4D MR regional ventilation maps and diagnostic CT images shows good correlation in conspicuous regions. Comparison to 4D CT-derived ventilation maps supports the plausibility of the 4D MR maps. Dynamic MRI-based flow-volume loops and spirograms further visualize the free-breathing behavior. The proposed methods allow for 4D MR-based regional analysis of tissue dynamics and ventilation in spontaneous breathing and comparison of patient data. The proposed atlas-based reference coordinate system provides an automated manner of annotating and comparing multimodal lung image data.
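
    A common surrogate for registration-derived regional ventilation, conceptually related to the maps described above, is the Jacobian determinant of the expiration-to-inspiration deformation field (values above 1 indicate local expansion). The sketch below computes it from a displacement field with NumPy; it is a generic illustration, not the paper's atlas- and shape-model-based pipeline.

    ```python
    # Jacobian-determinant ventilation surrogate from a 3D displacement field.
    import numpy as np

    def jacobian_determinant(disp):
        """disp: displacement field of shape (3, Z, Y, X) in voxel units."""
        grads = [np.gradient(disp[i]) for i in range(3)]   # grads[i][j] = d u_i / d x_j
        jac = np.empty(disp.shape[1:] + (3, 3))
        for i in range(3):
            for j in range(3):
                jac[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)  # I + grad(u)
        return np.linalg.det(jac)

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        u = 0.05 * rng.standard_normal((3, 20, 20, 20))    # small random toy field
        vent = jacobian_determinant(u)
        print(vent.shape, float(vent.mean()))               # values near 1 for a small field
    ```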

  3. Multimodality imaging of foreign bodies of the musculoskeletal system.

    PubMed

    Jarraya, Mohamed; Hayashi, Daichi; de Villiers, Richard V; Roemer, Frank W; Murakami, Akira M; Cossi, Alda; Guermazi, Ali

    2014-07-01

    The purpose of this article is to clarify the most relevant points in managing suspected foreign bodies of the musculoskeletal system on the basis of a literature review and published reports with cases to illustrate each type on different imaging modalities. Foreign bodies of the musculoskeletal system are a common problem in emergency departments, with more than a third missed in the initial clinical evaluation. These retained objects may result in various complications and also offer fertile ground for litigation.

  4. Spectral embedding-based registration (SERg) for multimodal fusion of prostate histology and MRI

    NASA Astrophysics Data System (ADS)

    Hwuang, Eileen; Rusu, Mirabela; Karthigeyan, Sudha; Agner, Shannon C.; Sparks, Rachel; Shih, Natalie; Tomaszewski, John E.; Rosen, Mark; Feldman, Michael; Madabhushi, Anant

    2014-03-01

    Multi-modal image registration is needed to align medical images collected from different protocols or imaging sources, thereby allowing the mapping of complementary information between images. One challenge of multimodal image registration is that typical similarity measures rely on statistical correlations between image intensities to determine anatomical alignment. The use of alternate image representations could allow for mapping of intensities into a space or representation such that the multimodal images appear more similar, thus facilitating their co-registration. In this work, we present a spectral embedding based registration (SERg) method that uses non-linearly embedded representations obtained from independent components of statistical texture maps of the original images to facilitate multimodal image registration. Our methodology comprises the following main steps: 1) image-derived textural representation of the original images, 2) dimensionality reduction using independent component analysis (ICA), 3) spectral embedding to generate the alternate representations, and 4) image registration. The rationale behind our approach is that SERg yields embedded representations that can allow for very different looking images to appear more similar, thereby facilitating improved co-registration. Statistical texture features are derived from the image intensities and then reduced to a smaller set by using independent component analysis to remove redundant information. Spectral embedding generates a new representation by eigendecomposition from which only the most important eigenvectors are selected. This helps to accentuate areas of salience based on modality-invariant structural information and therefore better identifies corresponding regions in both the template and target images. The spirit behind SERg is that image registration driven by these areas of salience and correspondence should improve alignment accuracy. In this work, SERg is implemented using Demons to allow the algorithm to more effectively register multimodal images. SERg is also tested within the free-form deformation framework driven by mutual information. Nine pairs of synthetic T1-weighted to T2-weighted brain MRI were registered under the following conditions: five levels of noise (0%, 1%, 3%, 5%, and 7%) and two levels of bias field (20% and 40%) each with and without noise. We demonstrate that across all of these conditions, SERg yields a mean squared error that is 81.51% lower than that of Demons driven by MRI intensity alone. We also spatially align twenty-six ex vivo histology sections and in vivo prostate MRI in order to map the spatial extent of prostate cancer onto corresponding radiologic imaging. SERg performs better than intensity registration by decreasing the root mean squared distance of annotated landmarks in the prostate gland via both Demons algorithm and mutual information-driven free-form deformation. In both synthetic and clinical experiments, the observed improvement in alignment of the template and target images suggest the utility of parametric eigenvector representations and hence SERg for multimodal image registration.
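
    The front end of the SERg pipeline (texture features, ICA, spectral embedding) can be sketched compactly as below; the registration step itself (Demons or mutual-information-driven free-form deformation) is omitted, and the feature set, component counts, and neighborhood size are illustrative assumptions rather than the authors' settings.

    ```python
    # Sketch of a SERg-style alternate representation: texture features -> ICA -> spectral embedding.
    import numpy as np
    from scipy.ndimage import uniform_filter
    from sklearn.decomposition import FastICA
    from sklearn.manifold import SpectralEmbedding

    def texture_features(img, windows=(3, 5, 7)):
        """Local mean and local standard deviation at several window sizes, per pixel."""
        feats = []
        for w in windows:
            mean = uniform_filter(img, size=w)
            sq_mean = uniform_filter(img ** 2, size=w)
            feats += [mean, np.sqrt(np.clip(sq_mean - mean ** 2, 0, None))]
        return np.stack(feats, axis=-1).reshape(-1, 2 * len(windows))

    def serg_representation(img, n_components=2):
        X = texture_features(img.astype(float))
        X_ica = FastICA(n_components=4, random_state=0).fit_transform(X)   # remove redundancy
        emb = SpectralEmbedding(n_components=n_components, n_neighbors=10)  # eigen-decomposition
        return emb.fit_transform(X_ica).reshape(img.shape + (n_components,))

    if __name__ == "__main__":
        toy = np.random.default_rng(4).random((32, 32))   # keep tiny: embedding scales poorly
        print(serg_representation(toy).shape)             # (32, 32, 2)
    ```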

  5. NIR fluorescence lifetime sensing through a multimode fiber for intravascular molecular probing

    NASA Astrophysics Data System (ADS)

    Ingelberts, H.; Hernot, S.; Debie, P.; Lahoutte, T.; Kuijk, M.

    2016-04-01

    Coronary artery disease (CAD) contributes to millions of deaths each year. The identification of vulnerable plaques is essential to the diagnosis of CAD but is challenging. Molecular probes can improve the detection of these plaques using intravascular imaging methods, and fluorescence lifetime sensing is a safe and robust way to image such probes. We present two variations of an optical system for intravascular near-infrared (NIR) fluorescence lifetime sensing through a multimode fiber. Both systems are built around a recently developed fast and efficient CMOS detector, the current-assisted photonic sampler (CAPS), which is optimized for sub-nanosecond NIR fluorescence lifetime sensing. One system mimics the optical setup of an epifluorescence microscope, while the other uses a practical fiber-optic coupler to separate fluorescence excitation and emission. We test both systems by measuring the lifetimes of several NIR dyes in DMSO solutions and show that they are capable of detecting lifetimes of solutions with concentrations down to 370 nM with short acquisition times. These results are compared with time-correlated single photon counting (TCSPC) measurements for reference.

  6. Improving In Vivo High-Resolution CT Imaging of the Tumour Vasculature in Xenograft Mouse Models through Reduction of Motion and Bone-Streak Artefacts

    PubMed Central

    Kersemans, Veerle; Kannan, Pavitra; Beech, John S.; Bates, Russell; Irving, Benjamin; Gilchrist, Stuart; Allen, Philip D.; Thompson, James; Kinchesh, Paul; Casteleyn, Christophe; Schnabel, Julia; Partridge, Mike; Muschel, Ruth J.; Smart, Sean C.

    2015-01-01

    Introduction: Preclinical in vivo CT is commonly used to visualise vessels at a macroscopic scale. However, it is prone to many artefacts which can degrade the quality of CT images significantly. Although some artefacts can be partially corrected for during image processing, they are best avoided during acquisition. Here, a novel imaging cradle and tumour holder was designed to maximise CT resolution. This approach was used to improve preclinical in vivo imaging of the tumour vasculature. Procedures: A custom-built cradle containing a tumour holder was developed and fix-mounted to the CT system gantry to avoid artefacts arising from scanner vibrations and out-of-field sample positioning. The tumour holder separated the tumour from bones along the axis of rotation of the CT scanner to avoid bone-streaking. It also kept the tumour stationary and insensitive to respiratory motion. System performance was evaluated in terms of tumour immobilisation and reduction of motion and bone artefacts. Pre- and post-contrast CT followed by sequential DCE-MRI of the tumour vasculature in xenograft transplanted mice was performed to confirm vessel patency and demonstrate the multimodal capacity of the new cradle. Vessel characteristics such as diameter and branching were quantified. Results: Image artefacts originating from bones and out-of-field sample positioning were avoided, whilst those resulting from motion were reduced significantly, thereby maximising the resolution that can be achieved with CT imaging in vivo. Tumour vessels ≥ 77 μm could be resolved and blood flow to the tumour remained functional. The diameter of each tumour vessel was determined and plotted as histograms, and vessel branching maps were created. Multimodal imaging using this cradle assembly was preserved and demonstrated. Conclusions: The presented imaging workflow minimised image artefacts arising from scanner-induced vibrations, respiratory motion and radiopaque structures, and enabled in vivo CT imaging and quantitative analysis of the tumour vasculature at higher resolution than was possible before. Moreover, it can be applied in a multimodal setting, thereby combining anatomical and dynamic information. PMID:26046526

  7. Design of multi-mode compatible image acquisition system for HD area array CCD

    NASA Astrophysics Data System (ADS)

    Wang, Chen; Sui, Xiubao

    2014-11-01

    In line with the current trends in video surveillance toward digitization and high definition, a multimode-compatible image acquisition system for an HD area array CCD is designed. The hardware and software designs of the color video capture system for the HD area array CCD KAI-02150 from Truesense Imaging are analyzed, and the structural parameters of the HD area array CCD and the color video acquisition principle of the system are introduced. The CCD control sequence and the timing logic of the whole capture system are then realized. The noise in the video signal (kTC noise and 1/f noise) is filtered using the Correlated Double Sampling (CDS) technique to enhance the signal-to-noise ratio of the system. Compatible hardware and software designs are also presented for two other image sensors of the same series, the KAI-04050 and KAI-08050, whose effective pixel counts are four million and eight million, respectively. A Field Programmable Gate Array (FPGA) is adopted as the key controller of the system to perform a top-down modular design, which implements the hardware design in software and improves development efficiency. Finally, the required timing sequences are simulated accurately in the Quartus II 12.1 development platform using VHDL. The simulation results indicate that the driving circuit is characterized by a simple framework, low power consumption, and strong anti-interference ability, meeting the demands of miniaturization and high definition.
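
    The noise-cancellation arithmetic behind correlated double sampling is simple enough to verify numerically: the kTC (reset) noise is common to the reset and signal samples of a pixel readout, so their difference removes it while only the uncorrelated read noise remains. The sketch below uses made-up noise levels purely for illustration.

    ```python
    # Numerical illustration of correlated double sampling (CDS); values are illustrative only.
    import numpy as np

    rng = np.random.default_rng(5)
    n_pixels = 100_000
    signal = 500.0                                    # photo-generated level (arbitrary units)

    ktc_noise = rng.normal(0.0, 30.0, n_pixels)       # reset noise, identical in both samples
    reset_sample = ktc_noise + rng.normal(0.0, 5.0, n_pixels)            # reset level + read noise
    signal_sample = signal + ktc_noise + rng.normal(0.0, 5.0, n_pixels)  # signal level + read noise

    raw = signal_sample                                # without CDS: kTC noise remains
    cds = signal_sample - reset_sample                 # with CDS: the common kTC term cancels

    print(f"std without CDS: {raw.std():5.1f}")        # ~sqrt(30^2 + 5^2) ≈ 30.4
    print(f"std with    CDS: {cds.std():5.1f}")        # ~sqrt(5^2 + 5^2)  ≈ 7.1
    ```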

  8. Identifying Multimodal Intermediate Phenotypes between Genetic Risk Factors and Disease Status in Alzheimer’s Disease

    PubMed Central

    Hao, Xiaoke; Yao, Xiaohui; Yan, Jingwen; Risacher, Shannon L.; Saykin, Andrew J.; Zhang, Daoqiang; Shen, Li

    2016-01-01

    Neuroimaging genetics has attracted growing attention and interest, and is thought to be a powerful strategy for examining the influence of genetic variants (i.e., single nucleotide polymorphisms (SNPs)) on structures or functions of the human brain. In recent studies, univariate or multivariate regression analysis methods are typically used to capture the effective associations between genetic variants and quantitative traits (QTs) such as brain imaging phenotypes. The identified imaging QTs, although associated with certain genetic markers, may not all be disease specific. A useful, but underexplored, scenario could be to discover only those QTs associated with both genetic markers and disease status, thereby revealing the chain from genotype to phenotype to symptom. In addition, multimodal brain imaging phenotypes are extracted from different perspectives, and imaging markers consistently showing up across multiple modalities may provide more insights for a mechanistic understanding of diseases (e.g., Alzheimer's disease (AD)). In this work, we propose a general framework to exploit multi-modal brain imaging phenotypes as intermediate traits that bridge genetic risk factors and multi-class disease status. We applied our proposed method to explore the relation between the well-known AD risk SNP APOE rs429358 and three baseline brain imaging modalities (i.e., structural magnetic resonance imaging (MRI), fluorodeoxyglucose positron emission tomography (FDG-PET), and F-18 florbetapir PET amyloid imaging (AV45)) from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The empirical results demonstrate that our proposed method not only helps improve the performance of imaging genetics associations, but also discovers robust and consistent regions of interest (ROIs) across modalities to guide disease-related interpretation. PMID:27277494

  9. [Research on non-rigid registration of multi-modal medical image based on Demons algorithm].

    PubMed

    Hao, Peibo; Chen, Zhen; Jiang, Shaofeng; Wang, Yang

    2014-02-01

    Non-rigid medical image registration is an active research topic in medical imaging and has important clinical value. In this paper we propose an improved Demons algorithm that combines a gray-level conservation model with a local structure tensor conservation model to construct a new energy function for the multi-modal registration problem. We then apply the L-BFGS algorithm to optimize the energy function and solve the complex three-dimensional optimization problem. Finally, a multi-scale hierarchical refinement scheme is used to handle large-deformation registration. The experimental results showed that the proposed algorithm performs well for large-deformation and multi-modal three-dimensional medical image registration.
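
    For context, the classical mono-modal Demons iteration that the proposed method builds on alternates a force update with Gaussian smoothing of the displacement field, as sketched below. The paper's actual contribution (the gray-level and structure-tensor conservation energy optimized with L-BFGS, plus multi-scale refinement) is not reproduced here; the parameters and the toy example are illustrative.

    ```python
    # Classical 2D Demons iteration (Thirion's force + Gaussian regularisation) for reference.
    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def demons_2d(fixed, moving, n_iter=50, sigma=2.0, eps=1e-6):
        grid = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
        u = np.zeros_like(grid)                                  # displacement field, shape (2, H, W)
        gfy, gfx = np.gradient(fixed)                            # fixed-image gradient (passive force)
        for _ in range(n_iter):
            warped = map_coordinates(moving, grid + u, order=1, mode="nearest")
            diff = warped - fixed
            denom = gfy ** 2 + gfx ** 2 + diff ** 2 + eps
            u[0] -= diff * gfy / denom                           # Thirion's demons force update
            u[1] -= diff * gfx / denom
            u[0] = gaussian_filter(u[0], sigma)                  # regularise the field
            u[1] = gaussian_filter(u[1], sigma)
        return u

    if __name__ == "__main__":
        rng = np.random.default_rng(6)
        fixed = gaussian_filter(rng.random((64, 64)), 3)
        moving = np.roll(fixed, shift=2, axis=0)                 # known 2-pixel shift (toy example)
        u = demons_2d(fixed, moving)
        print(u[0].mean())   # mean displacement drifts toward the imposed shift of +2 pixels
    ```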

  10. Multimodality Imaging Approach towards Primary Aortic Sarcomas Arising after Endovascular Abdominal Aortic Aneurysm Repair: Case Series Report.

    PubMed

    Kamran, Mudassar; Fowler, Kathryn J; Mellnick, Vincent M; Sicard, Gregorio A; Narra, Vamsi R

    2016-06-01

    Primary aortic neoplasms are rare. Aortic sarcoma arising after endovascular aneurysm repair (EVAR) is a scarce subset of primary aortic malignancies, reports of which are infrequent in the published literature. The diagnosis of aortic sarcoma is challenging due to its non-specific clinical presentation, and the prognosis is poor due to delayed diagnosis, rapid proliferation, and propensity for metastasis. Post-EVAR, aortic sarcomas may mimic other, more common aortic processes on surveillance imaging. Radiologists are often unfamiliar with this rare entity, for which multimodality imaging and awareness are invaluable for early diagnosis. A series of three pathologically confirmed cases is presented to illustrate the multimodality imaging features and clinical presentations of aortic sarcoma arising after EVAR.

  11. Intravascular atherosclerotic imaging with combined fluorescence and optical coherence tomography probe based on a double-clad fiber combiner

    NASA Astrophysics Data System (ADS)

    Liang, Shanshan; Saidi, Arya; Jing, Joe; Liu, Gangjun; Li, Jiawen; Zhang, Jun; Sun, Changsen; Narula, Jagat; Chen, Zhongping

    2012-07-01

    We developed a multimodality fluorescence and optical coherence tomography probe based on a double-clad fiber (DCF) combiner. The probe is composed of a DCF combiner, a GRIN lens, and a micromotor at the distal end. An integrated swept-source optical coherence tomography and fluorescence intensity imaging system was developed based on the combined probe for the early diagnosis of atherosclerosis. This system is capable of real-time data acquisition and processing as well as image display. For fluorescence imaging, inflammation in atherosclerosis and the necrotic core were imaged using annexin V-conjugated Cy5.5. Ex vivo imaging of New Zealand white rabbit arteries demonstrated the capability of the combined system.

  12. Computer-aided, multi-modal, and compression diffuse optical studies of breast tissue

    NASA Astrophysics Data System (ADS)

    Busch, David Richard, Jr.

    Diffuse Optical Tomography and Spectroscopy permit measurement of important physiological parameters non-invasively through ~10 cm of tissue. I have applied these techniques in measurements of human breast and breast cancer. My thesis integrates three loosely connected themes in this context: multi-modal breast cancer imaging, automated data analysis of breast cancer images, and microvascular hemodynamics of breast under compression. As per the first theme, I describe construction, testing, and the initial clinical usage of two generations of imaging systems for simultaneous diffuse optical and magnetic resonance imaging. The second project develops a statistical analysis of optical breast data from many spatial locations in a population of cancers to derive a novel optical signature of malignancy; I then apply this data-derived signature for localization of cancer in additional subjects. Finally, I construct and deploy diffuse optical instrumentation to measure blood content and blood flow during breast compression; besides optics, this research has implications for any method employing breast compression, e.g., mammography.

  13. Research-oriented image registry for multimodal image integration.

    PubMed

    Tanaka, M; Sadato, N; Ishimori, Y; Yonekura, Y; Yamashita, Y; Komuro, H; Hayahsi, N; Ishii, Y

    1998-01-01

    To provide multimodal biomedical images automatically, we constructed a research-oriented image registry, the Data Delivery System (DDS). DDS was built on the campus local area network. Machines that generate images (imagers: DSA, ultrasound, PET, MRI, SPECT and CT) were connected to the campus LAN. Once a patient is registered, all of the patient's images are automatically picked up by DDS as they are generated, transferred through the gateway server to the intermediate server, and copied into the directory of the user who registered the patient. DDS informs the user by e-mail that new data have been generated and transferred. The data format is automatically converted into the one chosen by the user. Data inactive for a certain period on the intermediate server are automatically archived to the final, permanent data server based on compact disc. As a soft link is automatically generated through this step, a user has access to all (old or new) image data of the patient of interest. As DDS runs with minimal maintenance, the cost and time for data transfer are significantly reduced. By making the complex process of data transfer and conversion invisible, DDS has made it easy for computer-naive researchers to concentrate on their biomedical interests.

  14. Multimodal targeted high relaxivity thermosensitive liposome for in vivo imaging

    NASA Astrophysics Data System (ADS)

    Kuijten, Maayke M. P.; Hannah Degeling, M.; Chen, John W.; Wojtkiewicz, Gregory; Waterman, Peter; Weissleder, Ralph; Azzi, Jamil; Nicolay, Klaas; Tannous, Bakhos A.

    2015-11-01

    Liposomes are spherical, self-closed structures formed by lipid bilayers that can encapsulate drugs and/or imaging agents in their hydrophilic core or within their membrane moiety, making them suitable delivery vehicles. We have synthesized a new liposome containing a gadolinium-DOTA lipid in its bilayer as a targeted multimodal molecular imaging agent for magnetic resonance and optical imaging. We showed that this liposome has much higher molar relaxivities r1 and r2 than a more conventional liposome containing a gadolinium-DTPA-BSA lipid. By incorporating both gadolinium and rhodamine in the lipid bilayer as well as biotin on its surface, we used this agent for multimodal imaging and targeting of tumors through the strong biotin-streptavidin interaction. Since this new liposome is thermosensitive, it can be used for ultrasound-mediated drug delivery at specific sites, such as tumors, and can be guided by magnetic resonance imaging.

  15. Multimodality Imaging in Cardiac Sarcoidosis: Is There a Winner?

    PubMed Central

    Perez, Irving E.; Garcia, Mario J.; Taub, Cynthia C.

    2016-01-01

    Sarcoidosis is a multisystem granulomatous disease of unknown cause that can affect the heart. Cardiac sarcoidosis may be present in as many as 25% of patients with systemic sarcoidosis, and it is frequently underdiagnosed. The early and accurate diagnosis of myocardial involvement is challenging. Advanced imaging techniques play important roles in the diagnosis and management of patients with cardiac sarcoidosis. PMID:25784137

  16. Amphiphilic semiconducting polymer as multifunctional nanocarrier for fluorescence/photoacoustic imaging guided chemo-photothermal therapy.

    PubMed

    Jiang, Yuyan; Cui, Dong; Fang, Yuan; Zhen, Xu; Upputuri, Paul Kumar; Pramanik, Manojit; Ding, Dan; Pu, Kanyi

    2017-11-01

    Chemo-photothermal nanotheranostics has the advantage of a synergistic therapeutic effect, providing opportunities for optimized cancer therapy. However, current chemo-photothermal nanotheranostic systems generally comprise more than three components, encountering the potential issues of unstable nanostructures and unexpected conflicts in optical and biophysical properties among different components. We herein synthesize an amphiphilic semiconducting polymer (PEG-PCB) and utilize it as a multifunctional nanocarrier to simplify chemo-photothermal nanotheranostics. PEG-PCB has a semiconducting backbone that not only serves as the diagnostic component for near-infrared (NIR) fluorescence and photoacoustic (PA) imaging, but also acts as the therapeutic agent for photothermal therapy. In addition, the hydrophobic backbone of PEG-PCB provides strong hydrophobic and π-π interactions with aromatic anticancer drugs such as doxorubicin for drug encapsulation and delivery. This trifunctionality of PEG-PCB results in a greatly simplified nanotheranostic system with only two components but multimodal imaging and therapeutic capacities, permitting effective NIR fluorescence/PA imaging guided chemo-photothermal therapy of cancer in living mice. Our study thus provides a molecular engineering approach to integrate essential properties into one polymer for multimodal nanotheranostics. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Real-time dynamic display of registered 4D cardiac MR and ultrasound images using a GPU

    NASA Astrophysics Data System (ADS)

    Zhang, Q.; Huang, X.; Eagleson, R.; Guiraudon, G.; Peters, T. M.

    2007-03-01

    In minimally invasive image-guided surgical interventions, different imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), and real-time three-dimensional (3D) ultrasound (US), can provide complementary, multi-spectral image information. Multimodality dynamic image registration is a well-established approach that permits real-time diagnostic information to be enhanced by placing lower-quality real-time images within a high quality anatomical context. For the guidance of cardiac procedures, it would be valuable to register dynamic MRI or CT with intraoperative US. However, in practice, either the high computational cost prohibits such real-time visualization of volumetric multimodal images in a real-world medical environment, or else the resulting image quality is not satisfactory for accurate guidance during the intervention. Modern graphics processing units (GPUs) provide the programmability, parallelism and increased computational precision to begin to address this problem. In this work, we first outline our research on dynamic 3D cardiac MR and US image acquisition, real-time dual-modality registration and US tracking. Then we describe image processing and optimization techniques for 4D (3D + time) cardiac image real-time rendering. We also present our multimodality 4D medical image visualization engine, which directly runs on a GPU in real-time by exploiting the advantages of the graphics hardware. In addition, techniques such as multiple transfer functions for different imaging modalities, dynamic texture binding, advanced texture sampling and multimodality image compositing are employed to facilitate the real-time display and manipulation of the registered dual-modality dynamic 3D MR and US cardiac datasets.

  18. Parallel Information Processing (Image Transmission Via Fiber Bundle and Multimode Fiber

    NASA Technical Reports Server (NTRS)

    Kukhtarev, Nicholai

    2003-01-01

    Growing demand for visual, user-friendly representation of information inspires the search for new methods of image transmission. Currently used in-series (sequential) methods of information processing are inherently slow and are designed mainly for transmission of one- or two-dimensional arrays of data. Conventional transmission of data by fibers requires many fibers together with arrays of laser diodes and photodetectors. In practice, fiber bundles are also used for transmission of images. An image is formed on the fiber-optic bundle entrance surface, and each fiber transmits the incident image to the exit surface. Since the fibers do not preserve phase, only a 2D intensity distribution can be transmitted in this way. Each single-mode fiber transmits only one pixel of an image. Multimode fibers may also be used, so that each mode represents a different pixel element. Direct transmission of an image through a multimode fiber is hindered by mode scrambling and phase randomization. To overcome these obstacles, wavelength and time-division multiplexing have been used, with each pixel transmitted on a separate wavelength or time interval. Phase-conjugate techniques have also been tested, but only in an impractical scheme in which the reconstructed image returns to the fiber input end. Another method of three-dimensional imaging over single-mode fibers has been demonstrated using laser light of reduced spatial coherence. Coherence encoding, needed for the transmission of images by this method, was realized with a grating interferometer or with the help of an acousto-optic deflector. We suggest a simple, practical holographic method of image transmission over a single multimode fiber or over a fiber bundle with coherent light, using filtering by holographic optical elements. Originally this method was successfully tested for the single multimode fiber. In this research, we have modified the holographic method for transmission of laser-illuminated images over a commercially available fiber bundle (fiber endoscope, or fiberscope).

  19. High resolution multimodal clinical ophthalmic imaging system

    PubMed Central

    Mujat, Mircea; Ferguson, R. Daniel; Patel, Ankit H.; Iftimia, Nicusor; Lue, Niyom; Hammer, Daniel X.

    2010-01-01

    We developed a multimodal adaptive optics (AO) retinal imager which is the first to combine high performance AO-corrected scanning laser ophthalmoscopy (SLO) and swept source Fourier domain optical coherence tomography (SSOCT) imaging modes in a single compact clinical prototype platform. Such systems are becoming ever more essential to vision research and are expected to prove their clinical value for diagnosis of retinal diseases, including glaucoma, diabetic retinopathy (DR), age-related macular degeneration (AMD), and retinitis pigmentosa. The SSOCT channel operates at a wavelength of 1 µm for increased penetration and visualization of the choriocapillaris and choroid, sites of major disease activity for DR and wet AMD. This AO system is designed for use in clinical populations; a dual deformable mirror (DM) configuration allows simultaneous low- and high-order aberration correction over a large range of refractions and ocular media quality. The system also includes a wide field (33 deg.) line scanning ophthalmoscope (LSO) for initial screening, target identification, and global orientation, an integrated retinal tracker (RT) to stabilize the SLO, OCT, and LSO imaging fields in the presence of lateral eye motion, and a high-resolution LCD-based fixation target for presentation of visual cues. The system was tested in human subjects without retinal disease for performance optimization and validation. We were able to resolve and quantify cone photoreceptors across the macula to within ~0.5 deg (~100-150 µm) of the fovea, image and delineate ten retinal layers, and penetrate to resolve features deep into the choroid. The prototype presented here is the first of a new class of powerful flexible imaging platforms that will provide clinicians and researchers with high-resolution, high performance adaptive optics imaging to help guide therapies, develop new drugs, and improve patient outcomes. PMID:20589021

  20. An image database management system for conducting CAD research

    NASA Astrophysics Data System (ADS)

    Gruszauskas, Nicholas; Drukker, Karen; Giger, Maryellen L.

    2007-03-01

    The development of image databases for CAD research is not a trivial task. The collection and management of images and their related metadata from multiple sources is a time-consuming but necessary process. By standardizing and centralizing the methods by which these data are maintained, one can generate subsets of a larger database that match the specific criteria needed for a particular research project in a quick and efficient manner. A research-oriented management system of this type is highly desirable in a multi-modality CAD research environment. An online, web-based database system for the storage and management of research-specific medical image metadata was designed for use with four modalities of breast imaging: screen-film mammography, full-field digital mammography, breast ultrasound and breast MRI. The system was designed to consolidate data from multiple clinical sources and provide the user with the ability to anonymize the data. Input concerning the type of data to be stored as well as desired searchable parameters was solicited from researchers in each modality. The backbone of the database was created using MySQL. A robust and easy-to-use interface for entering, removing, modifying and searching information in the database was created using HTML and PHP. This standardized system can be accessed using any modern web-browsing software and is fundamental for our various research projects on computer-aided detection, diagnosis, cancer risk assessment, multimodality lesion assessment, and prognosis. Our CAD database system stores large amounts of research-related metadata and successfully generates subsets of cases that match the user's desired search criteria.
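
    The kind of subset query such a system supports can be sketched as follows. The original backend is MySQL with a PHP front end, so this SQLite-based Python snippet is only an illustrative stand-in, and the schema and field names are hypothetical.

    ```python
    # Illustrative stand-in for the metadata store (the described system uses
    # MySQL + HTML/PHP; schema and field names here are hypothetical).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE image_metadata (
            case_id       TEXT,
            modality      TEXT,     -- 'SFM', 'FFDM', 'US', 'MRI'
            laterality    TEXT,
            biopsy_proven INTEGER,  -- 1 = malignant, 0 = benign
            acquired      DATE
        )
    """)
    conn.execute("INSERT INTO image_metadata VALUES ('C001', 'US', 'L', 1, '2006-05-14')")

    # Generate a research subset: all biopsy-proven ultrasound cases.
    rows = conn.execute(
        "SELECT case_id FROM image_metadata WHERE modality = ? AND biopsy_proven = 1",
        ("US",),
    ).fetchall()
    print(rows)
    ```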

  1. Simultaneous off-axis multiplexed holography and regular fluorescence microscopy of biological cells.

    PubMed

    Nygate, Yoav N; Singh, Gyanendra; Barnea, Itay; Shaked, Natan T

    2018-06-01

    We present a new technique for obtaining simultaneous multimodal quantitative phase and fluorescence microscopy of biological cells, providing both quantitative phase imaging and molecular specificity using a single camera. Our system is based on an interferometric multiplexing module, externally positioned at the exit of an optical microscope. In contrast to previous approaches, the presented technique allows conventional fluorescence imaging, rather than interferometric off-axis fluorescence imaging. We demonstrate the presented technique for imaging fluorescent beads and live biological cells.

  2. A digital library for medical imaging activities

    NASA Astrophysics Data System (ADS)

    dos Santos, Marcelo; Furuie, Sérgio S.

    2007-03-01

    This work presents the development of an electronic infrastructure to make available a free, online, multipurpose and multimodality medical image database. The proposed infrastructure implements a distributed architecture comprising a medical image database, authoring tools, and a repository for multimedia documents. It also includes a peer-review model that assures the quality of the dataset. This public repository provides a single point of access for medical images and related information to facilitate retrieval tasks. The proposed approach has also been used as an electronic teaching system in radiology.

  3. Multimodal image analysis of the retina in Hunter syndrome (mucopolysaccharidosis type II): Case report.

    PubMed

    Salvucci, Isadora Darriba Macedo; Finzi, Simone; Oyamada, Maria Kiyoko; Kim, Chong Ae; Pimentel, Sérgio Luis Gianotti

    2018-01-01

    We report the retinal and posterior ocular findings, documented with multimodal imaging, in a 33-year-old man diagnosed with Hunter syndrome (mucopolysaccharidosis type II). Our patient had been complaining of blurred night vision for the past 3 years. He had not received any systemic treatment for Hunter syndrome. Visual acuity was 20/20 in both eyes and the corneas were clear. Fundus examination revealed bilateral crowded and hyperemic optic nerve heads (elevated on ocular ultrasound) and areas of subretinal hypopigmentation. There was hyperautofluorescence at the central fovea and perifovea, and diffuse bilateral choroidal fluorescence on angiography. Macular SD-OCT showed thinning of the external retina at the perifovea in both eyes. Visual field testing showed a bilateral ring scotoma. The full-field ERG was subnormal, with a negative response in the scotopic phase. Visual evoked potential testing and cranial MRI were normal. The multimodal analysis reported here contributes to knowledge of the natural history of GAG deposition in the eye, focusing on the retina and retinal pigment epithelium. Defining this natural history is essential for proper comparison with Hunter patients receiving systemic treatment, to determine whether such treatment can improve retinal function in humans with this disorder.

  4. A system for simultaneous near-infrared reflectance and transillumination imaging of occlusal carious lesions

    NASA Astrophysics Data System (ADS)

    Simon, Jacob C.; Darling, Cynthia L.; Fried, Daniel

    2016-02-01

    Clinicians need technologies to improve the diagnosis of questionable occlusal carious lesions (QOC's) and determine if decay has penetrated to the underlying dentin. Assessing lesion depth from near-infrared (NIR) images holds great potential due to the high transparency of enamel and stain to NIR light at λ=1300-1700-nm, which allows direct visualization and quantified measurements of enamel demineralization. Unfortunately, NIR reflectance measurements alone are limited in utility for approximating occlusal lesion depth >200-μm due to light attenuation from the lesion body. Previous studies sought to combine NIR reflectance and transillumination measurements taken at λ=1300-nm in order to estimate QOC depth and severity. The objective of this study was to quantify the change in lesion contrast and size measured from multispectral NIR reflectance and transillumination images of natural occlusal carious lesions with increasing lesion depth and severity in order to determine the optimal multimodal wavelength combinations for estimating QOC depth. Extracted teeth with varying amounts of natural occlusal decay were measured using a multispectral-multimodal NIR imaging system at prominent wavelengths within the λ=1300-1700-nm spectral region. Image analysis software was used to calculate lesion contrast and area values between sound and carious enamel regions.
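
    The lesion-contrast measurement referred to above is, in general, a normalized intensity difference between carious and sound-enamel regions of interest; the abstract does not give the exact definition used in the study, so the sketch below is only an illustrative assumption with hypothetical names.

    ```python
    # Hedged sketch of a lesion-contrast measurement between manually defined ROIs;
    # the exact contrast definition used in the study is not stated in the abstract.
    import numpy as np

    def lesion_contrast(image, lesion_mask, sound_mask):
        """Normalized intensity difference between lesion and sound-enamel ROIs."""
        i_lesion = image[lesion_mask].mean()
        i_sound = image[sound_mask].mean()
        # In NIR reflectance, lesions typically appear brighter than sound enamel;
        # in transillumination they typically appear darker.
        return (i_lesion - i_sound) / max(i_lesion, i_sound)
    ```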

  5. Simultaneous measurement of breathing rate and heart rate using a microbend multimode fiber optic sensor

    NASA Astrophysics Data System (ADS)

    Chen, Zhihao; Lau, Doreen; Teo, Ju Teng; Ng, Soon Huat; Yang, Xiufeng; Kei, Pin Lin

    2014-05-01

    We propose and demonstrate the feasibility of using a highly sensitive microbend multimode fiber optic sensor for simultaneous measurement of breathing rate (BR) and heart rate (HR). The sensing system consists of a transceiver, microbend multimode fiber, and a computer. The transceiver is comprised of an optical transmitter, an optical receiver, and circuits for data communication with the computer via Bluetooth. Comparative experiments conducted between the sensor and predicate commercial physiologic devices showed an accuracy of ±2 bpm for both BR and HR measurement. Our preliminary study of simultaneous measurement of BR and HR in a clinical trial conducted on 11 healthy subjects during magnetic resonance imaging (MRI) also showed very good agreement with measurements obtained from conventional MR-compatible devices.

  6. Influence of mode-beating pulse on laser-induced plasma

    NASA Astrophysics Data System (ADS)

    Nishihara, M.; Freund, J. B.; Glumac, N. G.; Elliott, G. S.

    2018-04-01

    This paper addresses the influence of mode-beating pulse on laser-induced plasma. The second harmonic of a Nd:YAG laser, operated either with the single mode or multimode, was used for non-resonant optical breakdown, and subsequent plasma development was visualized using a streak imaging system. The single mode lasing leads to a stable breakdown location and smooth envelopment of the plasma boundary, while the multimode lasing, with the dominant mode-beating frequency of 500-800 MHz, leads to fluctuations in the breakdown location, a globally modulated plasma surface, and growth of local microstructures at the plasma boundary. The distribution of the local inhomogeneity was measured from the elastic scattering signals on the streak image. The distance between the local structures agreed with the expected wavelength of hydrodynamic instability development due to the interference between the surface excited wave and transmitted wave. A numerical simulation, however, indicates that the local microstructure could also be directly generated at the peaks of the higher harmonic components if the multimode pulse contains up to the eighth harmonic of the fundamental cavity mode.

  7. Simulated microsurgery monitoring using intraoperative multimodal surgical microscopy

    NASA Astrophysics Data System (ADS)

    Lee, Donghyun; Lee, Changho; Kim, Sehui; Zhou, Qifa; Kim, Jeehyun; Kim, Chulhong

    2016-03-01

    We have developed an intraoperative multimodal surgical microscopy system that provides simultaneous real-time enlarged surface views and subsurface anatomic information during surgeries by integrating spectral domain optical coherence tomography (SD-OCT), optical-resolution photoacoustic microscopy (OR-PAM), and conventional surgical microscopy. By sharing the same optical path, OCT and PAM images were acquired simultaneously. Additionally, a custom-made needle-type transducer received the generated PA signals, enabling convenient surgical operation without a water bath. Using a simple augmented device, the OCT and PAM images were projected onto the view plane of the surgical microscope. To quantify performance, we measured the spatial resolutions of the system. Then, three microsurgery simulations were performed and analyzed: (1) ex vivo needle tracking and monitoring of carbon-particle injection in biological tissues, (2) in vivo needle tracking and monitoring of carbon-particle injection in tumor-bearing mice, and (3) in vivo guidance of melanoma removal in melanoma-bearing mice. The results indicate that this triple-modality system is useful for intraoperative purposes and can potentially be a vital tool in microsurgery.

  8. Novel multimodality segmentation using level sets and Jensen-Rényi divergence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Markel, Daniel, E-mail: daniel.markel@mail.mcgill.ca; Zaidi, Habib; Geneva Neuroscience Center, Geneva University, CH-1205 Geneva

    2013-12-15

    Purpose: Positron emission tomography (PET) is playing an increasing role in radiotherapy treatment planning. However, despite progress, robust algorithms for PET and multimodal image segmentation are still lacking, especially if the algorithm were extended to image-guided and adaptive radiotherapy (IGART). This work presents a novel multimodality segmentation algorithm using the Jensen-Rényi divergence (JRD) to evolve the geometric level set contour. The algorithm offers improved noise tolerance which is particularly applicable to segmentation of regions found in PET and cone-beam computed tomography. Methods: A steepest gradient ascent optimization method is used in conjunction with the JRD and a level set active contour to iteratively evolve a contour to partition an image based on statistical divergence of the intensity histograms. The algorithm is evaluated using PET scans of pharyngolaryngeal squamous cell carcinoma with the corresponding histological reference. The multimodality extension of the algorithm is evaluated using 22 PET/CT scans of patients with lung carcinoma and a physical phantom scanned under varying image quality conditions. Results: The average concordance index (CI) of the JRD segmentation of the PET images was 0.56 with an average classification error of 65%. The segmentation of the lung carcinoma images had a maximum diameter relative error of 63%, 19.5%, and 14.8% when using CT, PET, and combined PET/CT images, respectively. The estimated maximal diameters of the gross tumor volume (GTV) showed a high correlation with the macroscopically determined maximal diameters, with an R(2) value of 0.85 and 0.88 using the PET and PET/CT images, respectively. Results from the physical phantom show that the JRD is more robust to image noise compared to mutual information and region growing. Conclusions: The JRD has shown improved noise tolerance compared to mutual information for the purpose of PET image segmentation. Presented is a flexible framework for multimodal image segmentation that can incorporate a large number of inputs efficiently for IGART.

  9. Novel multimodality segmentation using level sets and Jensen-Rényi divergence.

    PubMed

    Markel, Daniel; Zaidi, Habib; El Naqa, Issam

    2013-12-01

    Positron emission tomography (PET) is playing an increasing role in radiotherapy treatment planning. However, despite progress, robust algorithms for PET and multimodal image segmentation are still lacking, especially if the algorithm were extended to image-guided and adaptive radiotherapy (IGART). This work presents a novel multimodality segmentation algorithm using the Jensen-Rényi divergence (JRD) to evolve the geometric level set contour. The algorithm offers improved noise tolerance which is particularly applicable to segmentation of regions found in PET and cone-beam computed tomography. A steepest gradient ascent optimization method is used in conjunction with the JRD and a level set active contour to iteratively evolve a contour to partition an image based on statistical divergence of the intensity histograms. The algorithm is evaluated using PET scans of pharyngolaryngeal squamous cell carcinoma with the corresponding histological reference. The multimodality extension of the algorithm is evaluated using 22 PET/CT scans of patients with lung carcinoma and a physical phantom scanned under varying image quality conditions. The average concordance index (CI) of the JRD segmentation of the PET images was 0.56 with an average classification error of 65%. The segmentation of the lung carcinoma images had a maximum diameter relative error of 63%, 19.5%, and 14.8% when using CT, PET, and combined PET/CT images, respectively. The estimated maximal diameters of the gross tumor volume (GTV) showed a high correlation with the macroscopically determined maximal diameters, with a R(2) value of 0.85 and 0.88 using the PET and PET/CT images, respectively. Results from the physical phantom show that the JRD is more robust to image noise compared to mutual information and region growing. The JRD has shown improved noise tolerance compared to mutual information for the purpose of PET image segmentation. Presented is a flexible framework for multimodal image segmentation that can incorporate a large number of inputs efficiently for IGART.
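
    The divergence driving the level-set evolution described above can be computed directly from intensity histograms; a minimal sketch is shown below (the contour evolution itself and the steepest gradient ascent scheme are not reproduced, and the example data are synthetic).

    ```python
    # Minimal sketch of the Jensen-Renyi divergence between intensity histograms,
    # the quantity driving the level-set evolution (contour evolution omitted).
    import numpy as np

    def renyi_entropy(p, alpha=2.0, eps=1e-12):
        p = p / (p.sum() + eps)
        return np.log((p**alpha).sum() + eps) / (1.0 - alpha)

    def jensen_renyi_divergence(hists, weights=None, alpha=2.0):
        """JRD of a set of histograms: H_a(sum w_i p_i) - sum w_i H_a(p_i)."""
        hists = [h / h.sum() for h in np.asarray(hists, dtype=float)]
        w = np.full(len(hists), 1.0 / len(hists)) if weights is None else np.asarray(weights)
        mixture = sum(wi * h for wi, h in zip(w, hists))
        return renyi_entropy(mixture, alpha) - sum(
            wi * renyi_entropy(h, alpha) for wi, h in zip(w, hists)
        )

    # Example: histograms inside vs. outside a candidate contour.
    inside = np.histogram(np.random.gamma(5.0, size=1000), bins=64, range=(0, 20))[0]
    outside = np.histogram(np.random.gamma(2.0, size=1000), bins=64, range=(0, 20))[0]
    print(jensen_renyi_divergence([inside, outside]))
    ```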

  10. Beyond endoscopic assessment in inflammatory bowel disease: real-time histology of disease activity by non-linear multimodal imaging

    NASA Astrophysics Data System (ADS)

    Chernavskaia, Olga; Heuke, Sandro; Vieth, Michael; Friedrich, Oliver; Schürmann, Sebastian; Atreya, Raja; Stallmach, Andreas; Neurath, Markus F.; Waldner, Maximilian; Petersen, Iver; Schmitt, Michael; Bocklitz, Thomas; Popp, Jürgen

    2016-07-01

    Assessing disease activity is a prerequisite for adequate treatment of inflammatory bowel diseases (IBD) such as Crohn’s disease and ulcerative colitis. In addition to endoscopic mucosal healing, histologic remission poses a promising end-point of IBD therapy. However, evaluating histological remission harbors the risk of complications due to the acquisition of biopsies and delays diagnosis because of tissue processing procedures. In this regard, non-linear multimodal imaging might serve as an unparalleled technique that allows real-time evaluation of microscopic IBD activity in the endoscopy unit. In this study, tissue sections were investigated using the non-linear multimodal microscopy combination of coherent anti-Stokes Raman scattering (CARS), two-photon excited autofluorescence (TPEF) and second-harmonic generation (SHG). After the measurements, a gold-standard assessment of histological indexes was carried out based on a conventional H&E stain. Subsequently, various geometry- and intensity-related features were extracted from the multimodal images. An optimized feature set was utilized to predict histological index levels with a linear classifier. Based on this automated prediction, the time to diagnosis is decreased. Therefore, non-linear multimodal imaging may provide a real-time diagnosis of IBD activity suited to assist clinical decision making within the endoscopy unit.
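
    The prediction step described above (image-derived features mapped to a histological index level via a linear classifier) can be sketched roughly as follows; the feature set, classifier choice, and data below are hypothetical and not the authors' exact pipeline.

    ```python
    # Rough sketch of predicting histological index levels from multimodal image
    # features with a linear classifier (features, model, and data hypothetical).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_samples, n_features = 120, 12          # e.g., CARS/TPEF/SHG intensity and geometry features
    X = rng.normal(size=(n_samples, n_features))
    y = rng.integers(0, 3, size=n_samples)   # histological index level (e.g., 0/1/2)

    clf = LogisticRegression(max_iter=1000)  # linear classifier
    print(cross_val_score(clf, X, y, cv=5).mean())
    ```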

  11. Single photon emission computed tomography/positron emission tomography imaging and targeted radionuclide therapy of melanoma: new multimodal fluorinated and iodinated radiotracers.

    PubMed

    Maisonial, Aurélie; Kuhnast, Bertrand; Papon, Janine; Boisgard, Raphaël; Bayle, Martine; Vidal, Aurélien; Auzeloux, Philippe; Rbah, Latifa; Bonnet-Duquennoy, Mathilde; Miot-Noirault, Elisabeth; Galmier, Marie-Josèphe; Borel, Michèle; Askienazy, Serge; Dollé, Frédéric; Tavitian, Bertrand; Madelmont, Jean-Claude; Moins, Nicole; Chezal, Jean-Michel

    2011-04-28

    This study reports a series of 14 new iodinated and fluorinated compounds offering the potential for both early imaging ((123)I, (124)I, (18)F) and systemic treatment ((131)I) of melanoma. The biodistribution of each (125)I-labeled tracer was evaluated in a model of B16F0 melanoma-bearing mice, using in vivo serial γ scintigraphic imaging. Among this series, [(125)I]56 emerged as the most promising compound in terms of specific tumoral uptake and in vivo kinetic profile. To validate our multimodality concept, the radiosynthesis of [(18)F]56 was then optimized and this radiotracer was successfully investigated for in vivo PET imaging of melanoma in B16F0- and B16F10-bearing mouse models. The therapeutic efficacy of [(131)I]56 was then evaluated in mice bearing subcutaneous B16F0 melanoma, and a significant slowdown in tumor growth was demonstrated. These data support further development of 56 for PET imaging ((18)F, (124)I) and targeted radionuclide therapy ((131)I) of melanoma using a single chemical structure.

  12. Multimodality imaging of adult gastric emergencies: A pictorial review

    PubMed Central

    Sunnapwar, Abhijit; Ojili, Vijayanadh; Katre, Rashmi; Shah, Hardik; Nagar, Arpit

    2017-01-01

    Acute gastric emergencies require urgent surgical or nonsurgical intervention because they are associated with high morbidity and mortality. Imaging plays an important role in diagnosis: the clinical symptoms are often nonspecific, and because the imaging findings are often characteristic, the radiologist may be the first to suggest a diagnosis. The purpose of this article is to provide a comprehensive review of multimodality imaging (plain radiography, fluoroscopy, and computed tomography) of various life-threatening gastric emergencies. PMID:28515579

  13. Cross-Modality Image Synthesis via Weakly Coupled and Geometry Co-Regularized Joint Dictionary Learning.

    PubMed

    Huang, Yawen; Shao, Ling; Frangi, Alejandro F

    2018-03-01

    Multi-modality medical imaging is increasingly used for the comprehensive assessment of complex diseases, either in diagnostic examinations or as part of medical research trials. Different imaging modalities provide complementary information about living tissues. However, multi-modal examinations are not always possible due to adverse factors such as patient discomfort, increased cost, prolonged scanning time, and scanner unavailability. Additionally, in large imaging studies, incomplete records are not uncommon owing to image artifacts, data corruption or data loss, which compromise the potential of multi-modal acquisitions. In this paper, we propose a weakly coupled and geometry co-regularized joint dictionary learning method to address the problem of cross-modality synthesis, taking into account that collecting large amounts of training data is often impractical. Our learning stage requires only a few registered multi-modality image pairs as training data. To employ both paired images and a large set of unpaired data, a cross-modality image matching criterion is proposed. We then propose a unified model that integrates this criterion into joint dictionary learning and the observed common feature space to associate cross-modality data for synthesis. Furthermore, two regularization terms are added to construct robust sparse representations. Our experimental results demonstrate superior performance of the proposed model over state-of-the-art methods.

  14. Application of Virtual Navigation with Multimodality Image Fusion in Foramen Ovale Cannulation.

    PubMed

    Qiu, Xixiong; Liu, Weizong; Zhang, Mingdong; Lin, Hengzhou; Zhou, Shoujun; Lei, Yi; Xia, Jun

    2017-11-01

    Idiopathic trigeminal neuralgia (ITN) can be effectively treated with radiofrequency thermocoagulation. However, this procedure requires cannulation of the foramen ovale, and conventional cannulation methods are associated with high failure rates. Multimodality imaging can improve the accuracy of cannulation because each imaging method can compensate for the drawbacks of the other. We aim to determine the feasibility and accuracy of percutaneous foramen ovale cannulation under the guidance of virtual navigation with multimodality image fusion in a self-designed anatomical model of human cadaveric heads. Five cadaveric head specimens were investigated in this study. Spiral computed tomography (CT) scanning clearly displayed the foramen ovale in all five specimens (10 foramina), which could not be visualized using two-dimensional ultrasound alone. The ultrasound and spiral CT images were fused, and percutaneous cannulation of the foramen ovale was performed under virtual navigation. After this, spiral CT scanning was immediately repeated to confirm the accuracy of the cannulation. Postprocedural spiral CT confirmed that the ultrasound and CT images had been successfully fused for all 10 foramina, which were accurately and successfully cannulated. The success rates of both image fusion and cannulation were 100%. Virtual navigation with multimodality image fusion can substantially facilitate foramen ovale cannulation and is worthy of clinical application. © 2017 American Academy of Pain Medicine. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  15. Multimodal US-gamma imaging using collaborative robotics for cancer staging biopsies.

    PubMed

    Esposito, Marco; Busam, Benjamin; Hennersperger, Christoph; Rackerseder, Julia; Navab, Nassir; Frisch, Benjamin

    2016-09-01

    The staging of female breast cancer requires detailed information about the level of cancer spread through the lymphatic system. Common practice to obtain this information for patients with early-stage cancer is sentinel lymph node (SLN) biopsy, where LNs are radioactively identified for surgical removal and subsequent histological analysis. Punch needle biopsy is a less invasive approach but suffers from the lack of combined anatomical and nuclear information. We present and evaluate a system that introduces live collaborative robotic 2D gamma imaging in addition to live 2D ultrasound to identify SLNs in the surrounding anatomy. The system consists of a robotic arm equipped with both a gamma camera and a stereoscopic tracking system that monitors the position of an ultrasound probe operated by the physician. The arm cooperatively places the gamma camera parallel to the ultrasound imaging plane to provide live multimodal visualization and guidance. We validate the system by evaluating the target registration errors between fused nuclear and US image data in a phantom consisting of two spheres, one of which is filled with radioactivity. Medical experts perform punch biopsies on agar-gelatine phantoms with complex configurations of hot and cold lesions to provide a qualitative and quantitative evaluation of the system. The average point registration error for the overlay is [Formula: see text] mm. The time of the entire procedure was reduced by 36%, with 80% of the biopsies being successful. The users' feedback was very positive, and the system was deemed to be very intuitive, with handling similar to classic US-guided needle biopsy. We present and evaluate the first medical collaborative robotic imaging system. Feedback from potential users for SLN punch needle biopsy is encouraging. Ongoing work investigates the clinical feasibility with more complex and realistic phantoms.

  16. In vivo evaluation of adipose- and muscle-derived stem cells as a treatment for nonhealing diabetic wounds using multimodal microscopy

    NASA Astrophysics Data System (ADS)

    Li, Joanne; Pincu, Yair; Marjanovic, Marina; Bower, Andrew J.; Chaney, Eric J.; Jensen, Tor; Boppart, Marni D.; Boppart, Stephen A.

    2016-08-01

    Impaired skin wound healing is a significant comorbid condition of diabetes, which often results in nonhealing diabetic ulcers due to poor peripheral microcirculation, among other factors. The regenerative effectiveness of adipose-derived stem cells (ADSCs) and muscle-derived stem cells (MDSCs) was assessed using an integrated multimodal microscopy system equipped with two-photon fluorescence and second-harmonic generation imaging. These imaging modalities, integrated in a single platform for spatial and temporal coregistration, allowed us to monitor in vivo changes in the collagen network and cell dynamics in a skin wound. Fluorescently labeled ADSCs and MDSCs were applied topically to the wound bed of wild-type and diabetic (db/db) mice following punch biopsy. Longitudinal imaging demonstrated that ADSCs and MDSCs provided remarkable capacity for improved diabetic wound healing, and integrated microscopy revealed more organized collagen remodeling in the wound bed of treated mice. The results from this study verify the regenerative capacity of stem cells toward healing and, with multimodal microscopy, provide insight regarding their impact on the skin microenvironment. The optical method outlined in this study, which has the potential for in vivo human use, may optimize the care and treatment of diabetic nonhealing wounds.

  17. Comparing the quality of accessing medical literature using content-based visual and textual information retrieval

    NASA Astrophysics Data System (ADS)

    Müller, Henning; Kalpathy-Cramer, Jayashree; Kahn, Charles E., Jr.; Hersh, William

    2009-02-01

    Content-based visual information (or image) retrieval (CBIR) has been an extremely active research domain within medical imaging over the past ten years, with the goal of improving the management of visual medical information. Many technical solutions have been proposed, and application scenarios for image retrieval as well as image classification have been set up. However, in contrast to medical information retrieval using textual methods, visual retrieval has only rarely been applied in clinical practice. This is despite the large amount and variety of visual information produced in hospitals every day. This information overload imposes a significant burden upon clinicians, and CBIR technologies have the potential to help the situation. However, in order for CBIR to become an accepted clinical tool, it must demonstrate a higher level of technical maturity than it has to date. Since 2004, the ImageCLEF benchmark has included a task for the comparison of visual information retrieval algorithms for medical applications. In 2005, a task for medical image classification was introduced and both tasks have been run successfully for the past four years. These benchmarks allow an annual comparison of visual retrieval techniques based on the same data sets and the same query tasks, enabling the meaningful comparison of various retrieval techniques. The datasets used from 2004-2007 contained images and annotations from medical teaching files. In 2008, however, the dataset used was made up of 67,000 images (along with their associated figure captions and the full text of their corresponding articles) from two Radiological Society of North America (RSNA) scientific journals. This article describes the results of the medical image retrieval task of the ImageCLEF 2008 evaluation campaign. We compare the retrieval results of both visual and textual information retrieval systems from 15 research groups on the aforementioned data set. The results show clearly that, currently, visual retrieval alone does not achieve the performance necessary for real-world clinical applications. Most of the common visual retrieval techniques have a MAP (Mean Average Precision) of around 2-3%, which is much lower than that achieved using textual retrieval (MAP=29%). Advanced machine learning techniques, together with good training data, have been shown to improve the performance of visual retrieval systems in the past. Multimodal retrieval (basing retrieval on both visual and textual information) can achieve better results than purely visual, but only when carefully applied. In many cases, multimodal retrieval systems performed even worse than purely textual retrieval systems. On the other hand, some multimodal retrieval systems demonstrated significantly increased early precision, which has been shown to be a desirable behavior in real-world systems.
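
    For reference, the MAP figures quoted above are the mean over queries of average precision; a minimal sketch of the computation is given below. The function names and toy relevance lists are illustrative only.

    ```python
    # Minimal sketch of Mean Average Precision (MAP), the retrieval metric quoted
    # above (~2-3% for visual runs vs. 29% for textual runs).
    def average_precision(ranked_relevance, n_relevant):
        """ranked_relevance: 0/1 flags in ranked order; n_relevant: total relevant docs for the query."""
        hits, ap = 0, 0.0
        for rank, rel in enumerate(ranked_relevance, start=1):
            if rel:
                hits += 1
                ap += hits / rank
        return ap / n_relevant if n_relevant else 0.0

    def mean_average_precision(runs):
        # runs: list of (ranked_relevance, n_relevant) pairs, one per query
        return sum(average_precision(r, n) for r, n in runs) / len(runs)

    print(mean_average_precision([([1, 0, 1, 0], 3), ([0, 1, 0, 0], 1)]))
    ```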

  18. Active Multimodal Sensor System for Target Recognition and Tracking

    PubMed Central

    Zhang, Guirong; Zou, Zhaofan; Liu, Ziyue; Mao, Jiansen

    2017-01-01

    High accuracy target recognition and tracking systems using a single sensor or a passive multisensor set are susceptible to external interferences and exhibit environmental dependencies. These difficulties stem mainly from limitations to the available imaging frequency bands, and a general lack of coherent diversity of the available target-related data. This paper proposes an active multimodal sensor system for target recognition and tracking, consisting of a visible, an infrared, and a hyperspectral sensor. The system makes full use of its multisensor information collection abilities; furthermore, it can actively control different sensors to collect additional data, according to the needs of the real-time target recognition and tracking processes. This level of integration between hardware collection control and data processing is experimentally shown to effectively improve the accuracy and robustness of the target recognition and tracking system. PMID:28657609

  19. Dual CARS and SHG image acquisition scheme that combines single central fiber and multimode fiber bundle to collect and differentiate backward and forward generated photons

    PubMed Central

    Weng, Sheng; Chen, Xu; Xu, Xiaoyun; Wong, Kelvin K.; Wong, Stephen T. C.

    2016-01-01

    In coherent anti-Stokes Raman scattering (CARS) and second harmonic generation (SHG) imaging, backward and forward generated photons exhibit different image patterns and thus capture salient intrinsic information of tissues from different perspectives. However, they are often mixed during collection with traditional image acquisition methods and are thus hard to interpret. We developed a multimodal scheme using a single central fiber and a multimode fiber bundle to simultaneously collect and differentiate images formed by these two types of photons, and evaluated the scheme in an endomicroscopy prototype. The ratio of the collected photons was calculated to characterize tissue regions with strong or weak epi-photon generation, while different image patterns of these photons at different tissue depths were revealed. This scheme provides a new approach to extract and integrate information captured by backward and forward generated photons in dual CARS/SHG imaging synergistically for biomedical applications. PMID:27375938

  20. Tumor image signatures and habitats: a processing pipeline of multimodality metabolic and physiological images.

    PubMed

    You, Daekeun; Kim, Michelle M; Aryal, Madhava P; Parmar, Hemant; Piert, Morand; Lawrence, Theodore S; Cao, Yue

    2018-01-01

    To create tumor "habitats" from the "signatures" discovered from multimodality metabolic and physiological images, we developed a framework of a processing pipeline. The processing pipeline consists of six major steps: (1) creating superpixels as a spatial unit in a tumor volume; (2) forming a data matrix [Formula: see text] containing all multimodality image parameters at superpixels; (3) forming and clustering a covariance or correlation matrix [Formula: see text] of the image parameters to discover major image "signatures;" (4) clustering the superpixels and organizing the parameter order of the [Formula: see text] matrix according to the one found in step 3; (5) creating "habitats" in the image space from the superpixels associated with the "signatures;" and (6) pooling and clustering a matrix consisting of correlation coefficients of each pair of image parameters from all patients to discover subgroup patterns of the tumors. The pipeline was applied to a dataset of multimodality images in glioblastoma (GBM) first, which consisted of 10 image parameters. Three major image "signatures" were identified. The three major "habitats" plus their overlaps were created. To test generalizability of the processing pipeline, a second image dataset from GBM, acquired on the scanners different from the first one, was processed. Also, to demonstrate the clinical association of image-defined "signatures" and "habitats," the patterns of recurrence of the patients were analyzed together with image parameters acquired prechemoradiation therapy. An association of the recurrence patterns with image-defined "signatures" and "habitats" was revealed. These image-defined "signatures" and "habitats" can be used to guide stereotactic tissue biopsy for genetic and mutation status analysis and to analyze for prediction of treatment outcomes, e.g., patterns of failure.
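
    Steps 2-4 of the pipeline above amount to building a superpixel-by-parameter data matrix, computing the correlation matrix of the image parameters, and clustering it to discover "signatures"; a highly simplified sketch with synthetic data follows (superpixel generation and the spatial "habitat" maps are omitted).

    ```python
    # Highly simplified sketch of steps 2-4 of the pipeline: data matrix at
    # superpixels, parameter correlation matrix, and clustering of parameters
    # into image "signatures" (synthetic data; not the authors' implementation).
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    rng = np.random.default_rng(2)
    n_superpixels, n_params = 500, 10                 # e.g., CBV, CBF, ADC, FDG SUV, ...
    D = rng.normal(size=(n_superpixels, n_params))    # data matrix (step 2)

    C = np.corrcoef(D, rowvar=False)                  # parameter correlation matrix (step 3)

    # Cluster parameters on correlation distance to discover "signatures" (step 4).
    dist = np.clip(1.0 - C[np.triu_indices(n_params, k=1)], 0.0, None)
    signatures = fcluster(linkage(dist, method="average"), t=3, criterion="maxclust")
    print(signatures)                                 # signature label per image parameter
    ```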

  1. Biological Parametric Mapping: A Statistical Toolbox for Multi-Modality Brain Image Analysis

    PubMed Central

    Casanova, Ramon; Ryali, Srikanth; Baer, Aaron; Laurienti, Paul J.; Burdette, Jonathan H.; Hayasaka, Satoru; Flowers, Lynn; Wood, Frank; Maldjian, Joseph A.

    2006-01-01

    In recent years multiple brain MR imaging modalities have emerged; however, analysis methodologies have mainly remained modality specific. In addition, when comparing across imaging modalities, most researchers have been forced to rely on simple region-of-interest type analyses, which do not allow the voxel-by-voxel comparisons necessary to answer more sophisticated neuroscience questions. To overcome these limitations, we developed a toolbox for multimodal image analysis called biological parametric mapping (BPM), based on a voxel-wise use of the general linear model. The BPM toolbox incorporates information obtained from other modalities as regressors in a voxel-wise analysis, thereby permitting investigation of more sophisticated hypotheses. The BPM toolbox has been developed in MATLAB with a user friendly interface for performing analyses, including voxel-wise multimodal correlation, ANCOVA, and multiple regression. It has a high degree of integration with the SPM (statistical parametric mapping) software relying on it for visualization and statistical inference. Furthermore, statistical inference for a correlation field, rather than a widely-used T-field, has been implemented in the correlation analysis for more accurate results. An example with in-vivo data is presented demonstrating the potential of the BPM methodology as a tool for multimodal image analysis. PMID:17070709
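
    The core BPM idea, a voxel-wise general linear model in which a second imaging modality enters as a regressor, can be sketched as below; this is not the MATLAB/SPM toolbox itself, and the data and variable names are synthetic.

    ```python
    # Sketch of a voxel-wise GLM with a second modality as a regressor
    # (not the BPM/SPM toolbox; synthetic data).
    import numpy as np

    rng = np.random.default_rng(3)
    n_subjects, n_voxels = 30, 1000
    fmri = rng.normal(size=(n_subjects, n_voxels))   # dependent modality
    gm = rng.normal(size=(n_subjects, n_voxels))     # covariate modality (e.g., gray-matter maps)
    group = rng.integers(0, 2, size=n_subjects)      # scalar covariate of interest

    t_map = np.empty(n_voxels)
    for v in range(n_voxels):
        X = np.column_stack([np.ones(n_subjects), group, gm[:, v]])  # voxel-specific design
        beta, res, rank, sv = np.linalg.lstsq(X, fmri[:, v], rcond=None)
        dof = n_subjects - X.shape[1]
        sigma2 = res[0] / dof
        cov = sigma2 * np.linalg.inv(X.T @ X)
        t_map[v] = beta[1] / np.sqrt(cov[1, 1])      # t-statistic for the group effect
    print(t_map[:5])
    ```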

  2. Recent Advances in Molecular, Multimodal and Theranostic Ultrasound Imaging

    PubMed Central

    Kiessling, Fabian; Fokong, Stanley; Bzyl, Jessica; Lederle, Wiltrud; Palmowski, Moritz; Lammers, Twan

    2014-01-01

    Ultrasound (US) imaging is an exquisite tool for the non-invasive and real-time diagnosis of many different diseases. In this context, US contrast agents can improve lesion delineation, characterization and therapy response evaluation. US contrast agents are usually micrometer-sized gas bubbles, stabilized with soft or hard shells. By conjugating antibodies to the microbubble (MB) surface, and by incorporating diagnostic agents, drugs or nucleic acids into or onto the MB shell, molecular, multimodal and theranostic MB can be generated. We here summarize recent advances in molecular, multimodal and theranostic US imaging, and introduce concepts how such advanced MB can be generated, applied and imaged. Examples are given for their use to image and treat oncological, cardiovascular and neurological diseases. Furthermore, we discuss for which therapeutic entities incorporation into (or conjugation to) MB is meaningful, and how US-mediated MB destruction can increase their extravasation, penetration, internalization and efficacy. PMID:24316070

  3. Magnetic Nanoliposomes as in Situ Microbubble Bombers for Multimodality Image-Guided Cancer Theranostics.

    PubMed

    Liu, Yang; Yang, Fang; Yuan, Chuxiao; Li, Mingxi; Wang, Tuantuan; Chen, Bo; Jin, Juan; Zhao, Peng; Tong, Jiayi; Luo, Shouhua; Gu, Ning

    2017-02-28

    Nanosized drug delivery systems have offered promising approaches for cancer theranostics. However, few can simultaneously maximize tumor-specific uptake, imaging, and therapy in a single nanoplatform. Here, we report a simple yet stimuli-responsive anethole dithiolethione (ADT)-loaded magnetic nanoliposome (AML) delivery system, which consists of ADT, a hydrogen sulfide (H2S) pro-drug, doped in the lipid bilayer, and superparamagnetic nanoparticles encapsulated inside. HepG2 cells could be effectively bombed after 6 h of co-incubation with AMLs. For in vivo applications, after preferentially targeting the tumor tissue when spatiotemporally navigated by an external magnetic field, the nanoscale AMLs can convert intratumorally to microsized H2S bubbles. This dynamic process can be monitored by magnetic resonance and ultrasound dual-modal imaging. Importantly, the intratumorally generated H2S bubbles, imaged by real-time ultrasound, can first bomb and ablate the tumor tissue when exposed to higher acoustic intensity; then, as gasotransmitters, the intratumorally generated high-concentration H2S molecules can diffuse into the inner tumor regions to produce a further synergistic antitumor effect. After a 7-day follow-up, AMLs combined with magnetic field treatment showed significantly greater inhibition of tumor growth. Therefore, this elaborately designed intratumoral conversion of nanostructures to microstructures exhibited improved anticancer efficacy, which may be promising for multimodal image-guided, accurate cancer therapy.

  4. Coherence switching of a vertical-cavity semiconductor-laser for multimode biomedical imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Cao, Hui; Knitter, Sebastian; Liu, Changgeng; Redding, Brandon; Khokha, Mustafa Kezar; Choma, Michael Andrew

    2017-02-01

    Speckle formation is a limiting factor when using coherent sources for imaging and sensing, but can provide useful information about the motion of an object. Illumination sources with tunable spatial coherence are therefore desirable as they can offer both speckled and speckle-free images. Efficient methods of coherence switching have been achieved with a solid-state degenerate laser, and here we demonstrate a semiconductor-based degenerate laser system that can be switched between a large number of mutually incoherent spatial modes and few-mode operation. Our system is designed around a semiconductor gain element, and overcomes barriers presented by previous low spatial coherence lasers. The gain medium is an electrically-pumped vertical external cavity surface emitting laser (VECSEL) with a large active area. The use of a degenerate external cavity enables either distributing the laser emission over a large (~1000) number of mutually incoherent spatial modes or concentrating emission to few modes by using a pinhole in the Fourier plane of the self-imaging cavity. To demonstrate the unique potential of spatial coherence switching for multimodal biomedical imaging, we use both low and high spatial coherence light generated by our VECSEL-based degenerate laser for imaging embryo heart function in Xenopus, an important animal model of heart disease. The low-coherence illumination is used for high-speed (100 frames per second) speckle-free imaging of dynamic heart structure, while the high-coherence emission is used for laser speckle contrast imaging of the blood flow.

  5. The value of multimodality imaging in the investigation of a PSA recurrence after radical prostatectomy in the Irish hospital setting.

    PubMed

    McLoughlin, L C; Inder, S; Moran, D; O'Rourke, C; Manecksha, R P; Lynch, T H

    2018-02-01

    The diagnostic evaluation of a PSA recurrence after RP in the Irish hospital setting involves multimodality imaging with MRI, CT, and bone scanning, despite the low diagnostic yield from imaging at low PSA levels. We aimed to investigate the value of multimodality imaging in prostate cancer patients with a PSA recurrence after RP. Forty-eight patients with a PSA recurrence after RP who underwent multimodality imaging were evaluated. Demographic data, postoperative PSA levels, and imaging studies performed at those levels were evaluated. Eight (21%) MRIs, 6 (33%) CTs, and 4 (9%) bone scans had PCa-specific findings. Three (12%) patients had a positive MRI at a PSA <1.0 ng/ml, while 5 (56%) were positive at PSA ≥1.1 ng/ml (p = 0.05). No patient had a positive CT TAP at a PSA level <1.0 ng/ml, while 5 (56%) were positive at levels ≥1.1 ng/ml (p = 0.03). No patient had a positive bone scan at PSA levels <1.0 ng/ml, while 4 (27%) were positive at levels ≥1.1 ng/ml (p = 0.01). The diagnostic yield from multimodality imaging, and isotope bone scanning in particular, at PSA levels <1.0 ng/ml is low. There is a statistically significant increase in the frequency of positive findings on CT and bone scanning at PSA levels ≥1.1 ng/ml. MRI alone is of investigative value at PSA <1.0 ng/ml. The indication for CT, MRI, or isotope bone scanning should be carefully correlated with the clinical question and how it will affect further management.

  6. Dense depth maps from correspondences derived from perceived motion

    NASA Astrophysics Data System (ADS)

    Kirby, Richard; Whitaker, Ross

    2017-01-01

    Many computer vision applications require finding corresponding points between images and using the corresponding points to estimate disparity. Today's correspondence finding algorithms primarily use image features or pixel intensities common between image pairs. Some 3-D computer vision applications, however, do not produce the desired results using correspondences derived from image features or pixel intensities. Two examples are the multimodal camera rig and the center region of a coaxial camera rig. We present an image correspondence finding technique that aligns pairs of image sequences using optical flow fields. The optical flow fields provide information about the structure and motion of the scene, which are not available in still images but can be used in image alignment. We apply the technique to a dual focal length stereo camera rig consisting of a visible light-infrared camera pair and to a coaxial camera rig. We test our method on real image sequences and compare our results with the state-of-the-art multimodal and structure from motion (SfM) algorithms. Our method produces more accurate depth and scene velocity reconstruction estimates than the state-of-the-art multimodal and SfM algorithms.
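
    A rough sketch of the general idea, using dense optical flow rather than raw intensities as the shared representation when aligning two sequences, is shown below. It uses OpenCV's Farneback flow and a simple translation estimate via phase correlation as stand-ins; it is not the authors' algorithm, and frame variables are hypothetical 8-bit grayscale images.

    ```python
    # Rough sketch: use motion (dense optical flow) as the common representation
    # for aligning two co-temporal sequences (e.g., visible and infrared).
    # OpenCV Farneback flow + phase correlation are stand-ins, not the authors' method.
    import cv2
    import numpy as np

    def flow_magnitude(prev_frame, next_frame):
        """Dense flow magnitude between two consecutive 8-bit grayscale frames."""
        flow = cv2.calcOpticalFlowFarneback(prev_frame, next_frame, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        return np.linalg.norm(flow, axis=2).astype(np.float32)

    def align_by_motion(seq_a, seq_b):
        """Estimate a translation between two sequences from their flow-magnitude maps."""
        mag_a = flow_magnitude(seq_a[0], seq_a[1])
        mag_b = flow_magnitude(seq_b[0], seq_b[1])
        (dx, dy), _ = cv2.phaseCorrelate(mag_a, mag_b)
        return dx, dy
    ```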

  7. Fast and robust multimodal image registration using a local derivative pattern.

    PubMed

    Jiang, Dongsheng; Shi, Yonghong; Chen, Xinrong; Wang, Manning; Song, Zhijian

    2017-02-01

    Deformable multimodal image registration, which can benefit radiotherapy and image guided surgery by providing complementary information, remains a challenging task in the medical image analysis field due to the difficulty of defining a proper similarity measure. This article presents a novel, robust and fast binary descriptor, the discriminative local derivative pattern (dLDP), which is able to encode images of different modalities into similar image representations. dLDP calculates a binary string for each voxel according to the pattern of intensity derivatives in its neighborhood. The descriptor similarity is evaluated using the Hamming distance, which can be efficiently computed, instead of conventional L1 or L2 norms. For the first time, we validated the effectiveness and feasibility of the local derivative pattern for multimodal deformable image registration with several multi-modal registration applications. dLDP was compared with three state-of-the-art methods in artificial image and clinical settings. In the experiments of deformable registration between different magnetic resonance imaging (MRI) modalities from BrainWeb, between computed tomography and MRI images from patient data, and between MRI and ultrasound images from the BITE database, we show our method outperforms localized mutual information and entropy images in terms of both accuracy and time efficiency. We have further validated dLDP for the deformable registration of preoperative MRI and three-dimensional intraoperative ultrasound images. Our results indicate that dLDP reduces the average mean target registration error from 4.12 mm to 2.30 mm. This accuracy is statistically equivalent to the accuracy of the state-of-the-art methods in the study; however, in terms of computational complexity, our method significantly outperforms other methods and is even comparable to that of the sum of absolute differences. The results reveal that dLDP can achieve superior performance regarding both accuracy and time efficiency in general multimodal image registration. In addition, dLDP also shows potential for clinical ultrasound-guided intervention. © 2016 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
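    The general idea of a binarized derivative-pattern descriptor matched by Hamming distance can be sketched as follows. This is a toy illustration of the concept only; the published dLDP encoding is richer and differs in detail, and the neighborhood layout here is an assumption.

    ```python
    import numpy as np

    def binary_derivative_descriptor(patch):
        """Toy binary descriptor: sign of derivative differences between the
        patch centre and its 8 neighbours, for x- and y-derivatives (16 bits).
        Illustrates the dLDP idea; not the published encoding."""
        gy, gx = np.gradient(patch.astype(np.float64))
        c = patch.shape[0] // 2
        bits = []
        for deriv in (gx, gy):
            centre = deriv[c, c]
            neigh = [deriv[c-1, c-1], deriv[c-1, c], deriv[c-1, c+1], deriv[c, c+1],
                     deriv[c+1, c+1], deriv[c+1, c], deriv[c+1, c-1], deriv[c, c-1]]
            bits += [1 if (n - centre) >= 0 else 0 for n in neigh]
        return np.packbits(np.array(bits, dtype=np.uint8))

    def hamming(d1, d2):
        """Hamming distance between two packed binary descriptors (XOR + popcount)."""
        return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())
    ```

    Matching by Hamming distance is attractive because the XOR-and-popcount comparison is far cheaper than recomputing mutual information at every candidate displacement.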

  8. Gadolinium-Conjugated Gold Nanoshells for Multimodal Diagnostic Imaging and Photothermal Cancer Therapy

    PubMed Central

    Coughlin, Andrew J.; Ananta, Jeyarama S.; Deng, Nanfu; Larina, Irina V.; Decuzzi, Paolo

    2014-01-01

    Multimodal imaging offers the potential to improve diagnosis and enhance the specificity of photothermal cancer therapy. Toward this goal, we have engineered gadolinium-conjugated gold nanoshells and demonstrated that they enhance contrast for magnetic resonance imaging, X-Ray, optical coherence tomography, reflectance confocal microscopy, and two-photon luminescence. Additionally, these particles effectively convert near-infrared light to heat, which can be used to ablate cancer cells. Ultimately, these studies demonstrate the potential of gadolinium-nanoshells for image-guided photothermal ablation. PMID:24115690

  9. Image-guided intraocular injection using multimodality optical coherence tomography and fluorescence confocal scanning laser ophthalmoscopy in rodent ophthalmological models

    NASA Astrophysics Data System (ADS)

    Terrones, Benjamin D.; Benavides, Oscar R.; Leeburg, Kelsey C.; Mehanathan, Sankarathi B.; Levine, Edward M.; Tao, Yuankai K.

    2018-02-01

    Intraocular injections are routinely performed for delivery of anti-VEGF and anti-inflammatory therapies in humans. While these injections are also performed in mice to develop novel models of ophthalmic diseases and screen novel therapeutics, the injection location and volume are not well-controlled and reproducible. We overcome limitations of conventional injection methods by developing a multimodality, long working distance, non-contact optical coherence tomography (OCT) and fluorescence confocal scanning laser ophthalmoscopy (cSLO) system for retinal imaging before and after injections. Our OCT+cSLO system combines a custom-built spectral-domain OCT engine (875 ± 85 nm, 125 kHz line rate) with a modified commercial cSLO with a maximum frame rate of 30 fps (512 × 512 pixels). The system was designed for an overlapping OCT+cSLO field-of-view of 1.1 mm with a 7.76 mm working distance to the pupil. cSLO excitation light sources and filters were optimized for simultaneous GFP and tdTomato imaging. Lateral resolution was 3.02 µm for OCT and 2.74 μm for cSLO. Intravitreal injections of 5%, 10%, and 20% Intralipid with Alexa Fluor 488 were performed manually in C57BL/6 mice. Post-injection imaging showed structural changes associated with retinal puncture, including the injection track, a retinal elevation, and detachment of the posterior hyaloid. OCT enables quantitative analysis of injection location and volumes whereas complementary cSLO improves specificity for identifying fluorescently labeled injected compounds and transgenic cells. The long working distance of our non-contact OCT+cSLO system is uniquely suited for concurrent imaging with intraocular injections and may be applied for imaging of ophthalmic surgical dynamics and real-time image-guided injections.

  10. TeraSCREEN: multi-frequency multi-mode Terahertz screening for border checks

    NASA Astrophysics Data System (ADS)

    Alexander, Naomi E.; Alderman, Byron; Allona, Fernando; Frijlink, Peter; Gonzalo, Ramón; Hägelen, Manfred; Ibáñez, Asier; Krozer, Viktor; Langford, Marian L.; Limiti, Ernesto; Platt, Duncan; Schikora, Marek; Wang, Hui; Weber, Marc Andree

    2014-06-01

    The challenge for any security screening system is to identify potentially harmful objects such as weapons and explosives concealed under clothing. Classical border and security checkpoints are no longer capable of fulfilling the demands of today's ever-growing security requirements, especially with respect to the high throughput generally required, which entails a high detection rate of threat material and a low false alarm rate. TeraSCREEN proposes to develop an innovative concept of multi-frequency multi-mode Terahertz and millimeter-wave detection with new automatic detection and classification functionalities. The system developed will demonstrate, at a live control point, the safe automatic detection and classification of objects concealed under clothing, whilst respecting privacy and increasing current throughput rates. This innovative screening system will combine multi-frequency, multi-mode images taken by passive and active subsystems which will scan the subjects and obtain complementary spatial and spectral information, thus allowing for automatic threat recognition. The TeraSCREEN project, which will run from 2013 to 2016, has received funding from the European Union's Seventh Framework Programme under the Security Call. This paper will describe the project objectives and approach.

  11. Utilization of a multimedia PACS workstation for surgical planning of epilepsy

    NASA Astrophysics Data System (ADS)

    Soo Hoo, Kent; Wong, Stephen T.; Hawkins, Randall A.; Knowlton, Robert C.; Laxer, Kenneth D.; Rowley, Howard A.

    1997-05-01

    Surgical treatment of temporal lobe epilepsy requires the localization of the epileptogenic zone for surgical resection. Currently, clinicians utilize electroencephalography, various neuroimaging modalities, and psychological tests together to determine the location of this zone. We investigate how a multimedia neuroimaging workstation built on top of the UCSF Picture Archiving and Communication System can be used to aid surgical planning of epilepsy and related brain diseases. This usage demonstrates the ability of the workstation to retrieve image and textual data from PACS and other image sources, register multimodality images, visualize and render 3D data sets, analyze images, generate new image and text data from the analysis, and organize all data in a relational database management system.

  12. Multimodality imaging using SPECT/CT and MRI and ligand functionalized 99mTc-labeled magnetic microbubbles

    PubMed Central

    2013-01-01

    Background In the present study, we used multimodal imaging to investigate biodistribution in rats after intravenous administration of a new 99mTc-labeled delivery system consisting of polymer-shelled microbubbles (MBs) functionalized with diethylenetriaminepentaacetic acid (DTPA), thiolated poly(methacrylic acid) (PMAA), chitosan, 1,4,7-triazacyclononane-1,4,7-triacetic acid (NOTA), NOTA-superparamagnetic iron oxide nanoparticles (SPION), or DTPA-SPION. Methods Examinations utilizing planar dynamic scintigraphy and hybrid imaging were performed using a commercially available single-photon emission computed tomography (SPECT)/computed tomography (CT) system. For SPION containing MBs, the biodistribution pattern of 99mTc-labeled NOTA-SPION and DTPA-SPION MBs was investigated and co-registered using fusion SPECT/CT and magnetic resonance imaging (MRI). Moreover, to evaluate the biodistribution, organs were removed and radioactivity was measured and calculated as percentage of injected dose. Results SPECT/CT and MRI showed that the distribution of 99mTc-labeled ligand-functionalized MBs varied with the type of ligand as well as with the presence of SPION. The highest uptake was observed in the lungs 1 h post injection of 99mTc-labeled DTPA and chitosan MBs, while a similar distribution to the lungs and the liver was seen after the administration of PMAA MBs. The highest counts of 99mTc-labeled NOTA-SPION and DTPA-SPION MBs were observed in the lungs, liver, and kidneys 1 h post injection. The highest counts were observed in the liver, spleen, and kidneys as confirmed by MRI 24 h post injection. Furthermore, the results obtained from organ measurements were in good agreement with those obtained from SPECT/CT. Conclusions In conclusion, microbubbles functionalized by different ligands can be labeled with radiotracers and utilized for SPECT/CT imaging, while the incorporation of SPION in MB shells enables imaging using MR. Our investigation revealed that biodistribution may be modified using different ligands. Furthermore, using a single contrast agent with fusion SPECT/CT/MR multimodal imaging enables visualization of functional and anatomical information in one image, thus improving the diagnostic benefit for patients. PMID:23442550

  13. Multimodal Image-Based Virtual Reality Presurgical Simulation and Evaluation for Trigeminal Neuralgia and Hemifacial Spasm.

    PubMed

    Yao, Shujing; Zhang, Jiashu; Zhao, Yining; Hou, Yuanzheng; Xu, Xinghua; Zhang, Zhizhong; Kikinis, Ron; Chen, Xiaolei

    2018-05-01

    To address the feasibility and predictive value of multimodal image-based virtual reality in detecting and assessing features of neurovascular conflict (NVC), particularly the detection of offending vessels and the degree of compression exerted on the nerve root, in patients who underwent microvascular decompression for nonlesional trigeminal neuralgia and hemifacial spasm (HFS). This prospective study includes 42 consecutive patients who underwent microvascular decompression for classic primary trigeminal neuralgia or HFS. All patients underwent preoperative 1.5-T magnetic resonance imaging (MRI) with T2-weighted three-dimensional (3D) sampling perfection with application-optimized contrasts by using different flip angle evolutions, 3D time-of-flight magnetic resonance angiography, and 3D T1-weighted gadolinium-enhanced sequences in combination, whereas 2 patients underwent extra experimental preoperative 7.0-T MRI scans with the same imaging protocol. Multimodal MRIs were then coregistered with the open-source software 3D Slicer, followed by 3D image reconstruction to generate virtual reality (VR) images for detection of possible NVC in the cerebellopontine angle. Evaluations were performed by 2 reviewers and compared with the intraoperative findings. For detection of NVC, multimodal image-based VR sensitivity was 97.6% (40/41) and specificity was 100% (1/1). Compared with the intraoperative findings, the κ coefficients for predicting the offending vessel and the degree of compression were >0.75 (P < 0.001). The 7.0-T scans provided a clearer view of vessels in the cerebellopontine angle, which may have a significant impact on the detection of small-caliber offending vessels with relatively slow flow speed in cases of HFS. Multimodal image-based VR using 3D sampling perfection with application-optimized contrasts by using different flip angle evolutions in combination with 3D time-of-flight magnetic resonance angiography sequences proved to be reliable in detecting NVC and in predicting the degree of root compression. The VR image-based simulation correlated well with the real surgical view. Copyright © 2018 Elsevier Inc. All rights reserved.
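    An agreement statistic of the kind reported here (κ > 0.75 between VR prediction and intraoperative findings) can be computed as in the sketch below. The grading labels and values are hypothetical and only illustrate the calculation; this is not the study's data.

    ```python
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical per-patient gradings of compression degree (0 = none .. 3 = severe)
    vr_prediction        = [2, 1, 3, 0, 2, 2, 1, 3]
    intraoperative_grade = [2, 1, 3, 0, 1, 2, 1, 3]

    kappa = cohen_kappa_score(vr_prediction, intraoperative_grade)
    print(f"Cohen's kappa = {kappa:.2f}")   # values > 0.75 indicate substantial agreement
    ```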

  14. Multimodal discrimination of immune cells using a combination of Raman spectroscopy and digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    McReynolds, Naomi; Cooke, Fiona G. M.; Chen, Mingzhou; Powis, Simon J.; Dholakia, Kishan

    2017-03-01

    The ability to identify and characterise individual cells of the immune system under label-free conditions would be a significant advantage in biomedical and clinical studies where untouched and unmodified cells are required. We present a multi-modal system capable of simultaneously acquiring both single point Raman spectra and digital holographic images of single cells. We use this combined approach to identify and discriminate between immune cell populations CD4+ T cells, B cells and monocytes. We investigate several approaches to interpret the phase images including signal intensity histograms and texture analysis. Both modalities are independently able to discriminate between cell subsets, and dual-modality operation may therefore be used as a means of validation. We demonstrate here sensitivities achieved in the range of 86.8% to 100%, and specificities in the range of 85.4% to 100%. Additionally, each modality provides information not available from the other, yielding both a molecular and a morphological signature of each cell.
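    The histogram and texture analysis of phase images mentioned above can be sketched as follows, assuming scikit-image is available. The quantization level, distances, and angles are assumptions; the authors' exact feature set is not specified here.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in scikit-image < 0.19

    def phase_texture_features(phase_img, levels=32):
        """Intensity histogram plus a few GLCM texture statistics for a phase image."""
        # Quantize the continuous phase values to integer grey levels.
        edges = np.linspace(phase_img.min(), phase_img.max(), levels)
        q = (np.digitize(phase_img, edges) - 1).astype(np.uint8)
        hist, _ = np.histogram(q, bins=levels, range=(0, levels))
        glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        return {
            "histogram": hist,
            "contrast": graycoprops(glcm, "contrast").mean(),
            "homogeneity": graycoprops(glcm, "homogeneity").mean(),
            "energy": graycoprops(glcm, "energy").mean(),
        }
    ```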

  15. Multimodal inspection in power engineering and building industries: new challenges and solutions

    NASA Astrophysics Data System (ADS)

    Kujawińska, Małgorzata; Malesa, Marcin; Malowany, Krzysztof

    2013-09-01

    Recently, the demand for and number of applications of full-field optical measurement methods based on noncoherent light sources have increased significantly. They include traditional image processing, thermovision, digital image correlation (DIC) and structured light methods. However, there are still numerous challenges connected with implementing these methods for in-situ, long-term monitoring in industrial, civil engineering and cultural heritage applications, for multimodal measurements of a variety of object features, or simply for adapting instruments to work in harsh environmental conditions. In this paper we focus on the 3D DIC method and present its enhancements concerning software modifications (new visualization methods and a method for automatic merging of data distributed in time) and hardware improvements. The modified 3D DIC system, combined with an infrared camera system, is applied in several cases: measurements of a boiler drum during annealing and of pipelines in heat power stations, monitoring of steel struts at a construction site, and validation of numerical models of large building structures constructed of graded metal plate arches.
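    The core measurement step of DIC, tracking the displacement of small image subsets between a reference and a deformed speckle image, can be approximated with normalized cross-correlation as sketched below. This is a generic integer-pixel illustration, not the authors' 3D DIC implementation; subset and search sizes are assumptions.

    ```python
    import cv2

    def subset_displacement(ref, deformed, centre, subset=31, search=20):
        """Integer-pixel displacement of one DIC subset via normalized cross-correlation.

        ref, deformed : 8-bit grayscale speckle images
        centre        : (x, y) of the subset centre in the reference image
        """
        x, y = centre
        h = subset // 2
        template = ref[y - h:y + h + 1, x - h:x + h + 1]
        region = deformed[y - h - search:y + h + search + 1,
                          x - h - search:x + h + search + 1]
        score = cv2.matchTemplate(region, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(score)
        return max_loc[0] - search, max_loc[1] - search   # (dx, dy) in pixels
    ```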

  16. Multimodal Research: Addressing the Complexity of Multimodal Environments and the Challenges for CALL

    ERIC Educational Resources Information Center

    Tan, Sabine; O'Halloran, Kay L.; Wignell, Peter

    2016-01-01

    Multimodality, the study of the interaction of language with other semiotic resources such as images and sound resources, has significant implications for computer assisted language learning (CALL) with regards to understanding the impact of digital environments on language teaching and learning. In this paper, we explore recent manifestations of…

  17. Feature-based Alignment of Volumetric Multi-modal Images

    PubMed Central

    Toews, Matthew; Zöllei, Lilla; Wells, William M.

    2014-01-01

    This paper proposes a method for aligning image volumes acquired from different imaging modalities (e.g. MR, CT) based on 3D scale-invariant image features. A novel method for encoding invariant feature geometry and appearance is developed, based on the assumption of locally linear intensity relationships, providing a solution to poor repeatability of feature detection in different image modalities. The encoding method is incorporated into a probabilistic feature-based model for multi-modal image alignment. The model parameters are estimated via a group-wise alignment algorithm that iteratively alternates between estimating a feature-based model from feature data, then realigning feature data to the model, converging to a stable alignment solution with few pre-processing or pre-alignment requirements. The resulting model can be used to align multi-modal image data with the benefits of invariant feature correspondence: globally optimal solutions, high efficiency and low memory usage. The method is tested on the difficult RIRE data set of CT, T1, T2, PD and MP-RAGE brain images of subjects exhibiting significant inter-subject variability due to pathology. PMID:24683955
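    The feature-based alignment idea can be illustrated with a familiar 2D analogue: detect invariant keypoints, match descriptors, and fit a robust transform. The sketch below uses SIFT in OpenCV and is explicitly not the paper's volumetric, group-wise method; the similarity-transform model is an assumption for illustration.

    ```python
    import cv2
    import numpy as np

    def align_2d(fixed, moving):
        """Rough 2D analogue of feature-based alignment: SIFT keypoints,
        cross-checked descriptor matching, and a robust similarity-transform fit."""
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(fixed, None)
        kp2, des2 = sift.detectAndCompute(moving, None)
        matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
        src = np.float32([kp2[m.trainIdx].pt for m in matches])
        dst = np.float32([kp1[m.queryIdx].pt for m in matches])
        M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
        return M  # 2x3 matrix mapping 'moving' coordinates into 'fixed'
    ```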

  18. Multimodal Spectral Imaging of Cells Using a Transmission Diffraction Grating on a Light Microscope

    PubMed Central

    Isailovic, Dragan; Xu, Yang; Copus, Tyler; Saraswat, Suraj; Nauli, Surya M.

    2011-01-01

    A multimodal methodology for spectral imaging of cells is presented. The spectral imaging setup uses a transmission diffraction grating on a light microscope to concurrently record spectral images of cells and cellular organelles by fluorescence, darkfield, brightfield, and differential interference contrast (DIC) spectral microscopy. Initially, the setup was applied for fluorescence spectral imaging of yeast and mammalian cells labeled with multiple fluorophores. Fluorescence signals originating from fluorescently labeled biomolecules in cells were collected through triple or single filter cubes, separated by the grating, and imaged using a charge-coupled device (CCD) camera. Cellular components such as nuclei, cytoskeleton, and mitochondria were spatially separated by the fluorescence spectra of the fluorophores present in them, providing detailed multi-colored spectral images of cells. Additionally, the grating-based spectral microscope enabled measurement of scattering and absorption spectra of unlabeled cells and stained tissue sections using darkfield and brightfield or DIC spectral microscopy, respectively. The presented spectral imaging methodology provides a readily affordable approach for multimodal spectral characterization of biological cells and other specimens. PMID:21639978

  19. Framework for 2D-3D image fusion of infrared thermography with preoperative MRI.

    PubMed

    Hoffmann, Nico; Weidner, Florian; Urban, Peter; Meyer, Tobias; Schnabel, Christian; Radev, Yordan; Schackert, Gabriele; Petersohn, Uwe; Koch, Edmund; Gumhold, Stefan; Steiner, Gerald; Kirsch, Matthias

    2017-11-27

    Multimodal medical image fusion combines information of one or more images in order to improve the diagnostic value. While previous applications mainly focus on merging images from computed tomography, magnetic resonance imaging (MRI), ultrasonic and single-photon emission computed tomography, we propose a novel approach for the registration and fusion of preoperative 3D MRI with intraoperative 2D infrared thermography. Image-guided neurosurgeries are based on neuronavigation systems, which further allow us to track the position and orientation of arbitrary cameras. Thereby, we are able to relate the 2D coordinate system of the infrared camera with the 3D MRI coordinate system. The registered image data are then combined by calibration-based image fusion in order to map our intraoperative 2D thermographic images onto the respective brain surface recovered from preoperative MRI. In extensive accuracy measurements, we found that the proposed framework achieves a mean accuracy of 2.46 mm.
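    The geometric core of such 2D-3D fusion, projecting 3D surface points into the 2D camera image given a tracked camera pose, can be sketched with a pinhole model as below. The intrinsics K and the pose (R, t) are assumed inputs (e.g., from camera calibration and the neuronavigation system); this is a generic sketch, not the authors' framework.

    ```python
    import numpy as np

    def project_points(points_mri, K, R, t):
        """Project 3-D points (N x 3, MRI/world coordinates, mm) into the
        thermal camera image using a pinhole model.

        K    : 3x3 intrinsic matrix of the calibrated infrared camera
        R, t : rotation (3x3) and translation (3,) from MRI space to camera space
        """
        cam = (R @ points_mri.T).T + t          # world -> camera coordinates
        uvw = (K @ cam.T).T                     # camera -> homogeneous pixel coordinates
        return uvw[:, :2] / uvw[:, 2:3]         # perspective division -> (u, v)
    ```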

  20. The biometric recognition on contactless multi-spectrum finger images

    NASA Astrophysics Data System (ADS)

    Kang, Wenxiong; Chen, Xiaopeng; Wu, Qiuxia

    2015-01-01

    This paper presents a novel multimodal biometric system based on contactless multi-spectrum finger images, which aims to deal with the limitations of unimodal biometrics. The chief merits of the system are the richness of the permissible texture and the ease of data access. We constructed a multi-spectrum instrument to simultaneously acquire three different types of biometrics from a finger: contactless fingerprint, finger vein, and knuckleprint. On the basis of the samples with these characteristics, a moderate database was built for the evaluation of our system. Considering the real-time requirements and the respective characteristics of the three biometrics, the block local binary patterns algorithm was used to extract and match features for the fingerprints and finger veins, while the Oriented FAST and Rotated BRIEF algorithm was applied for knuckleprints. Finally, score-level fusion was performed on the matching results from the aforementioned three types of biometrics. The experiments showed that our proposed multimodal biometric recognition system achieves an equal error rate of 0.109%, which is 88.9%, 94.6%, and 89.7% lower than the individual fingerprint, knuckleprint, and finger vein recognitions, respectively. Moreover, our proposed system also satisfies the real-time requirements of the applications.
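    A common form of score-level fusion, min-max normalization followed by a weighted sum, is sketched below. The weights are illustrative assumptions; the paper's exact fusion rule and weights are not reproduced here.

    ```python
    import numpy as np

    def minmax_norm(scores):
        """Scale a vector of matcher scores into [0, 1]."""
        s = np.asarray(scores, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)

    def fuse_scores(fingerprint, finger_vein, knuckleprint, weights=(0.4, 0.4, 0.2)):
        """Weighted-sum score-level fusion of three matchers (weights are illustrative)."""
        mods = [minmax_norm(m) for m in (fingerprint, finger_vein, knuckleprint)]
        w = np.asarray(weights, dtype=float) / np.sum(weights)
        return sum(wi * mi for wi, mi in zip(w, mods))
    ```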

  1. Direct visualization of gastrointestinal tract with lanthanide-doped BaYbF5 upconversion nanoprobes.

    PubMed

    Liu, Zhen; Ju, Enguo; Liu, Jianhua; Du, Yingda; Li, Zhengqiang; Yuan, Qinghai; Ren, Jinsong; Qu, Xiaogang

    2013-10-01

    Nanoparticulate contrast agents have attracted a great deal of attention along with the rapid development of modern medicine. Here, a binary contrast agent based on PAA-modified BaYbF5:Tm nanoparticles for direct visualization of the gastrointestinal (GI) tract has been designed and developed via a one-pot solvothermal route. By taking advantage of the excellent colloidal stability, low cytotoxicity, and negligible hemolysis of these well-designed nanoparticles, their feasibility as a multi-modal contrast agent for the GI tract was intensively investigated. Significant enhancement of contrast efficacy relative to clinical barium meal and iodine-based contrast agents was demonstrated via X-ray imaging and CT imaging in vivo. By doping Tm(3+) ions into these nanoprobes, in vivo NIR-NIR imaging was then demonstrated. Unlike some invasive imaging modalities, a non-invasive imaging strategy combining X-ray, CT, and UCL imaging of the GI tract can greatly reduce discomfort to patients, facilitate the imaging procedure, and economize diagnostic time. Critical to clinical applications, the long-term toxicity of our contrast agent was additionally investigated in detail, indicating its overall safety. Based on our results, PAA-BaYbF5:Tm nanoparticles are an excellent multi-modal contrast agent integrating X-ray, CT, and UCL imaging for direct visualization of the GI tract with low systemic toxicity. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Rapid Screening of Cancer Margins in Tissue with Multimodal Confocal Microscopy

    PubMed Central

    Gareau, Daniel S.; Jeon, Hana; Nehal, Kishwer S.; Rajadhyaksha, Milind

    2012-01-01

    Background Complete and accurate excision of cancer is guided by the examination of histopathology. However, preparation of histopathology is labor intensive and slow, leading to insufficient sampling of tissue and incomplete and/or inaccurate excision of margins. We demonstrate the potential utility of multimodal confocal mosaicing microscopy for rapid screening of cancer margins, directly in fresh surgical excisions, without the need for conventional embedding, sectioning or processing. Materials/Methods A multimodal confocal mosaicing microscope was developed to image basal cell carcinoma margins in surgical skin excisions, with resolution that shows nuclear detail. Multimodal contrast is provided by fluorescence for imaging nuclei and reflectance for cellular cytoplasm and dermal collagen. Thirty-five excisions of basal cell carcinomas from Mohs surgery were imaged, and the mosaics were analyzed by comparison to the corresponding frozen pathology. Results Confocal mosaics are produced in about 9 minutes, displaying tissue in fields-of-view of 12 mm with 2X magnification. A digital staining algorithm transforms black and white contrast to purple and pink, which simulates the appearance of standard histopathology. Mosaicing enables rapid digital screening, which mimics the examination of histopathology. Conclusions Multimodal confocal mosaicing microscopy offers a technology platform to potentially enable real-time pathology at the bedside. The imaging may serve as an adjunct to conventional histopathology, to expedite screening of margins and guide surgery toward more complete and accurate excision of cancer. PMID:22721570
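    One simple way to render a fluorescence (nuclear) channel purple and a reflectance channel pink, in the spirit of the digital staining described above, is a multiplicative color-mixing scheme as sketched below. The reference colors and mixing model are illustrative assumptions, not the published algorithm or coefficients.

    ```python
    import numpy as np

    # Illustrative H&E-like reference colors (RGB in [0, 1]); not the published values.
    HEMATOXYLIN = np.array([0.30, 0.20, 0.55])   # purple for the fluorescence (nuclear) channel
    EOSIN       = np.array([0.95, 0.55, 0.65])   # pink for the reflectance (cytoplasm/collagen) channel

    def digital_stain(fluorescence, reflectance):
        """Map two normalized grayscale channels to an H&E-like color image
        using a simple Beer-Lambert-style multiplicative mixing."""
        f = np.clip(fluorescence, 0, 1)[..., None]
        r = np.clip(reflectance, 0, 1)[..., None]
        # Start from white and attenuate toward each stain color with channel strength.
        rgb = (1 - f * (1 - HEMATOXYLIN)) * (1 - r * (1 - EOSIN))
        return np.clip(rgb, 0, 1)
    ```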

  3. Deep Convolutional Neural Networks for Multi-Modality Isointense Infant Brain Image Segmentation

    PubMed Central

    Zhang, Wenlu; Li, Rongjian; Deng, Houtao; Wang, Li; Lin, Weili; Ji, Shuiwang; Shen, Dinggang

    2015-01-01

    The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6–8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making the tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they only used a single T1 or T2 images, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a type of deep models in which trainable filters and local neighborhood pooling operations are applied alternatingly on the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multimodality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement. PMID:25562829
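    The multi-modality idea, feeding T1, T2, and FA as separate input channels and predicting per-pixel tissue classes, can be sketched with a minimal PyTorch model as below. The architecture, layer sizes, and patch dimensions are illustrative assumptions and not the paper's network.

    ```python
    import torch
    import torch.nn as nn

    class MultiModalitySegNet(nn.Module):
        """Toy CNN: 3 input channels (T1, T2, FA) -> 3 tissue classes (WM, GM, CSF)."""
        def __init__(self, n_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            )
            # A 1x1 convolution produces a (downsampled) dense class-score map.
            self.classifier = nn.Conv2d(64, n_classes, kernel_size=1)

        def forward(self, x):            # x: (batch, 3, H, W)
            return self.classifier(self.features(x))

    net = MultiModalitySegNet()
    scores = net(torch.randn(2, 3, 64, 64))   # -> (2, 3, 32, 32) class scores
    ```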

  4. 1024-Pixel CMOS Multimodality Joint Cellular Sensor/Stimulator Array for Real-Time Holistic Cellular Characterization and Cell-Based Drug Screening.

    PubMed

    Park, Jong Seok; Aziz, Moez Karim; Li, Sensen; Chi, Taiyun; Grijalva, Sandra Ivonne; Sung, Jung Hoon; Cho, Hee Cheol; Wang, Hua

    2018-02-01

    This paper presents a fully integrated CMOS multimodality joint sensor/stimulator array with 1024 pixels for real-time holistic cellular characterization and drug screening. The proposed system consists of four pixel groups and four parallel signal-conditioning blocks. Every pixel group contains 16 × 16 pixels, and each pixel includes one gold-plated electrode, four photodiodes, and in-pixel circuits, within a pixel footprint. Each pixel supports real-time extracellular potential recording, optical detection, charge-balanced biphasic current stimulation, and cellular impedance measurement for the same cellular sample. The proposed system is fabricated in a standard 130-nm CMOS process. Rat cardiomyocytes are successfully cultured on-chip. Measured high-resolution optical opacity images, extracellular potential recordings, biphasic current stimulations, and cellular impedance images demonstrate the unique advantages of the system for holistic cell characterization and drug screening. Furthermore, this paper demonstrates the use of optical detection on the on-chip cultured cardiomyocytes to track their cyclic beating pattern and beating rate in real time.
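    Beating-rate tracking from an optical opacity trace can be illustrated with simple peak detection, as sketched below. This assumes SciPy and illustrative peak-detection thresholds; it is not the chip's on-board processing.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def beating_rate_bpm(opacity_trace, frame_rate_hz):
        """Estimate beating rate (beats per minute) from a 1-D optical opacity trace."""
        trace = np.asarray(opacity_trace, dtype=float)
        trace = (trace - trace.mean()) / (trace.std() + 1e-12)
        # Require peaks at least ~0.25 s apart (illustrative refractory period).
        min_gap = max(1, int(0.25 * frame_rate_hz))
        peaks, _ = find_peaks(trace, height=0.5, distance=min_gap)
        duration_min = len(trace) / frame_rate_hz / 60.0
        return len(peaks) / duration_min
    ```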

  5. Enabling the detection of UV signal in multimodal nonlinear microscopy with catalogue lens components.

    PubMed

    Vogel, Martin; Wingert, Axel; Fink, Rainer H A; Hagl, Christian; Ganikhanov, Feruz; Pfeffer, Christian P

    2015-10-01

    Using an optical system made from fused silica catalogue optical components, third-order nonlinear microscopy has been enabled on conventional Ti:sapphire laser-based multiphoton microscopy setups. The optical system is designed using two lens groups with straightforward adaptation to other microscope stands when one of the lens groups is exchanged. Within the theoretical design, the optical system collects and transmits light with wavelengths between the near ultraviolet and the near infrared from an object field of at least 1 mm in diameter within a resulting numerical aperture of up to 0.56. The numerical aperture can be controlled with a variable aperture stop between the two lens groups of the condenser. We demonstrate this new detection capability in third harmonic generation imaging experiments at the harmonic wavelength of ∼300 nm and in multimodal nonlinear optical imaging experiments using third-order sum frequency generation and coherent anti-Stokes Raman scattering microscopy so that the wavelengths of the detected signals range from ∼300 nm to ∼660 nm. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
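    The wavelength arithmetic behind the ~300-660 nm detection range can be sketched as follows: the third harmonic sits at one third of the excitation wavelength, and the CARS anti-Stokes signal follows 1/λ_as = 2/λ_pump − 1/λ_Stokes. The specific excitation values below are illustrative, not necessarily the authors' settings.

    ```python
    def third_harmonic_nm(pump_nm):
        """Third-harmonic generation: lambda_THG = lambda_pump / 3."""
        return pump_nm / 3.0

    def cars_anti_stokes_nm(pump_nm, stokes_nm):
        """CARS anti-Stokes wavelength: 1/l_as = 2/l_pump - 1/l_Stokes."""
        return 1.0 / (2.0 / pump_nm - 1.0 / stokes_nm)

    # Illustrative Ti:sapphire-based values:
    print(third_harmonic_nm(900.0))            # ~300 nm third harmonic
    print(cars_anti_stokes_nm(800.0, 1040.0))  # ~650 nm anti-Stokes signal
    ```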

  6. MPO4:Nd3+ (M=Ca, Gd), Luminomagnetic Nanophosphors with Optical and Magnetic Features for Multimodal Imaging Applications

    NASA Astrophysics Data System (ADS)

    Rightsell, Chris; Mimun, Lawrence C.; Kumar, Ajith G.; Sardar, Dhiraj K.

    2015-03-01

    Nanomaterials with multiple functionalities play a very important role in several high technology applications. A major area of such applications is the biomedical industry, where contrast agents with multiple imaging modalities can provide better results than conventional materials. Many of the contrast agents available now have drawbacks such as toxicity, photobleaching, low contrast, size restrictions, and overall cost of the imaging system. Rare-earth doped inorganic nanophosphors are alternatives to circumvent several of these issues, together with the added advantage of super-high-resolution imaging due to the excellent near infrared sensitivity of the phosphors. In addition to optical imaging features, by adding a magnetic ion such as Gd3+ at suitable lattice positions, the phosphor can be made magnetic, yielding dual imaging functionalities. In this research, we present the optical and magnetic imaging features of sub-nanometer size MPO4:Nd3+ (M=Ca, Gd) phosphors for the potential application of these nanophosphors as multimodal contrast agents. Cytotoxicity, in vitro and in vivo imaging, penetration depth, and related properties are studied for various phosphor compositions, and optimized compositions are explored. This research was funded by the National Science Foundation Partnerships for Research and Education in Materials (NSF-PREM) Grant N0-DMR-0934218.

  7. Multimodal Imaging in Klippel-Trénaunay-Weber Syndrome: Clinical Photography, Computed Tomoangiography, Infrared Thermography, and 99mTc-Phytate Lymphoscintigraphy.

    PubMed

    Kim, Su Wan; Song, Heesung

    2017-12-01

    We report the case of a 19-year-old man who presented with a 12-year history of progressive fatigue, feeling hot, excessive sweating, and numbness in the left arm. He had undergone multimodal imaging and was diagnosed as having Klippel-Trénaunay-Weber syndrome (KTWS). This is a rare congenital disease, defined by combinations of nevus flammeus, venous and lymphatic malformation, and hypertrophy of the affected limbs. The lower extremities are most commonly affected. Conventional modalities for evaluating KTWS are ultrasonography, CT, MRI, lymphoscintigraphy, and angiography. There are few reports on multimodal imaging of the upper extremities of KTWS patients, and this is the first report of infrared thermography in KTWS.

  8. Ultrasmall biomolecule-anchored hybrid GdVO4 nanophosphors as a metabolizable multimodal bioimaging contrast agent.

    PubMed

    Dong, Kai; Ju, Enguo; Liu, Jianhua; Han, Xueli; Ren, Jinsong; Qu, Xiaogang

    2014-10-21

    Multimodal molecular imaging has recently attracted much attention in disease diagnostics by taking advantage of individual imaging modalities. Herein, we have demonstrated a new paradigm for multimodal bioimaging based on amino acid-anchored ultrasmall lanthanide-doped GdVO4 nanoprobes. Owing to specific metal-cation complexation and abundant functional groups, these amino acid-anchored nanoprobes showed high colloidal stability and excellent dispersibility. Additionally, due to typical paramagnetic behaviour, high X-ray mass absorption coefficient and strong fluorescence, these nanoprobes would provide a unique opportunity to develop multifunctional probes for MRI, CT and luminescence imaging. More importantly, the small size and biomolecular coatings endow the nanoprobes with effective metabolisability and high biocompatibility. With the superior stability, high biocompatibility, effective metabolisability and excellent contrast performance, amino acid-capped GdVO4:Eu(3+) nanocastings are promising candidates as multimodal contrast agents and would bring more opportunities for biological and medical applications with further modifications.

  9. Image-guided cold atmosphere plasma (CAP) therapy for cutaneous wound

    NASA Astrophysics Data System (ADS)

    Yu, Zelin; Ren, Wenqi; Gan, Qi; Li, Jiahong; Li, XiangXiang; Zhang, Shiwu; Jin, Fan; Cheng, Cheng; Ting, Yue; Xu, Ronald X.

    2016-03-01

    Bacterial infection is one of the major factors contributing to compromised healing in chronic wounds. Bacterial biofilms formed on the wound are sometimes more resistant than adherent bacteria. Cold atmosphere plasma (CAP) has already shown its potential in contact-free disinfection, blood coagulation, and wound healing. In this study, we integrated a multimodal imaging system with a portable CAP device for image-guided treatment of infected wounds in vivo and evaluated the antimicrobial effect on Pseudomonas aeruginosa samples in vitro. Fifteen ICR mice were divided into three groups for the therapeutic experiments: (1) a control group with neither infection nor treatment, (2) an infection group without treatment, and (3) an infection group with treatment. For each mouse, a three-millimeter punch biopsy was created on the dorsal skin. Infection was induced by Staphylococcus aureus inoculation one day post-wounding. The treated group was subjected to CAP for 2 min daily until day 13. For each group, the oxygenation and blood perfusion of five fixed wounds were evaluated daily until day 13 by a multimodal imaging system that integrates a multispectral imaging module and a laser speckle imaging module. In studying the relationship between therapeutic depth and sterilization effect on P. aeruginosa in agarose, we found that the CAP-generated reactive species reached a depth of 26.7 μm at 30 s and 41.6 μm at 60 s for antibacterial effects. Image-guided CAP therapy can potentially be used to control infection and facilitate the healing process of infected wounds.

  10. MULTIMODAL IMAGING OF ANGIOID STREAKS ASSOCIATED WITH TURNER SYNDROME.

    PubMed

    Chiu, Bing Q; Tsui, Edmund; Hussnain, Syed Amal; Barbazetto, Irene A; Smith, R Theodore

    2018-02-13

    To report multimodal imaging in a novel case of angioid streaks in a patient with Turner syndrome with 10-year follow-up. Case report of a patient with Turner syndrome and angioid streaks followed at Bellevue Hospital Eye Clinic from 2007 to 2017. Fundus photography, fluorescein angiography, and optical coherence tomography angiography were obtained. Angioid streaks with choroidal neovascularization were noted in this patient with Turner syndrome without other systemic conditions previously correlated with angioid streaks. We report a case of angioid streaks with choroidal neovascularization in a patient with Turner syndrome. We demonstrate that angioid streaks, previously associated with pseudoxanthoma elasticum, Ehlers-Danlos syndrome, Paget disease of bone, and hemoglobinopathies, may also be associated with Turner syndrome, and may continue to develop choroidal neovascularization, suggesting the need for careful ophthalmic examination in these patients.

  11. Integrated approach to multimodal media content analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1999-12-01

    In this work, we present a system for the automatic segmentation, indexing and retrieval of audiovisual data based on the combination of audio, visual and textual content analysis. The video stream is demultiplexed into audio, image and caption components. Then, a semantic segmentation of the audio signal based on audio content analysis is conducted, and each segment is indexed as one of the basic audio types. The image sequence is segmented into shots based on visual information analysis, and keyframes are extracted from each shot. Meanwhile, keywords are detected from the closed caption. Index tables are designed for both linear and non-linear access to the video. Experiments show that the proposed methods for multimodal media content analysis are effective and that the integrated framework achieves satisfactory results for video information filtering and retrieval.
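    Shot segmentation of the image sequence can be illustrated with a simple histogram-difference cut detector, as sketched below. This is one common approach, not necessarily the authors' algorithm; the histogram bins and threshold are assumptions.

    ```python
    import cv2

    def detect_shot_boundaries(video_path, threshold=0.6):
        """Flag frame indices whose color-histogram correlation with the previous
        frame drops below `threshold` (a simple hard-cut detector)."""
        cap = cv2.VideoCapture(video_path)
        boundaries, prev_hist, idx = [], None, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
            cv2.normalize(hist, hist)
            if prev_hist is not None:
                if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < threshold:
                    boundaries.append(idx)
            prev_hist, idx = hist, idx + 1
        cap.release()
        return boundaries
    ```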

  12. Comparing Yb-fiber and Ti:Sapphire lasers for depth resolved imaging of human skin (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Balu, Mihaela; Saytashev, Ilyas; Hou, Jue; Dantus, Marcos; Tromberg, Bruce J.

    2016-02-01

    We report on a direct comparison between Ti:Sapphire and Yb fiber lasers for depth-resolved label-free multimodal imaging of human skin. We found that the penetration depth achieved with the Yb laser was 80% greater than for the Ti:Sapphire. Third harmonic generation (THG) imaging with Yb laser excitation provides additional information about skin structure. Our results indicate the potential of fiber-based laser systems for moving into clinical use.

  13. OSIRIX: open source multimodality image navigation software

    NASA Astrophysics Data System (ADS)

    Rosset, Antoine; Pysher, Lance; Spadola, Luca; Ratib, Osman

    2005-04-01

    The goal of our project is to develop a completely new software platform that will allow users to efficiently and conveniently navigate through large sets of multidimensional data without the need for expensive high-end hardware or software. We also elected to develop our system on new open source software libraries, allowing other institutions and developers to contribute to this project. OsiriX is a free and open-source imaging software designed to manipulate and visualize large sets of medical images: http://homepage.mac.com/rossetantoine/osirix/

  14. Analyzing multimodality tomographic images and associated regions of interest with MIDAS

    NASA Astrophysics Data System (ADS)

    Tsui, Wai-Hon; Rusinek, Henry; Van Gelder, Peter; Lebedev, Sergey

    2001-07-01

    This paper outlines the design and features incorporated in a software package for analyzing multi-modality tomographic images. The package MIDAS has been evolving for the past 15 years and is in wide use by researchers at New York University School of Medicine and a number of collaborating research sites. It was written in the C language and runs on Sun workstations and Intel PCs under the Solaris operating system. A unique strength of the MIDAS package lies in its ability to generate, manipulate and analyze a practically unlimited number of regions of interest (ROIs). These regions are automatically saved in an efficient data structure and linked to associated images. A wide selection of set theoretical (e.g. union, xor, difference), geometrical (e.g. move, rotate) and morphological (grow, peel) operators can be applied to an arbitrary selection of ROIs. ROIs are constructed as a result of image segmentation algorithms incorporated in MIDAS; they also can be drawn interactively. These ROI editing operations can be applied in either 2D or 3D mode. ROI statistics generated by MIDAS include means, standard deviations, centroids and histograms. Other image manipulation tools incorporated in MIDAS are multimodality and within modality coregistration methods (including landmark matching, surface fitting and Woods' correlation methods) and image reformatting methods (using nearest-neighbor, tri-linear or sinc interpolation). Applications of MIDAS include: (1) neuroanatomy research: marking anatomical structures in one orientation, reformatting marks to another orientation; (2) tissue volume measurements: brain structures (PET, MRI, CT), lung nodules (low dose CT), breast density (MRI); (3) analysis of functional (SPECT, PET) experiments by overlaying corresponding structural scans; (4) longitudinal studies: regional measurement of atrophy.
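    The set-theoretical ROI editing and ROI statistics described above can be expressed compactly as boolean-mask operations; a small sketch is given below. This is a generic NumPy illustration and not MIDAS code (which is written in C).

    ```python
    import numpy as np

    def roi_stats(image, mask):
        """Mean, standard deviation and centroid of an ROI given as a boolean mask."""
        vals = image[mask]
        rows, cols = np.nonzero(mask)
        return {"mean": vals.mean(), "std": vals.std(),
                "centroid": (rows.mean(), cols.mean())}

    # Set-theoretical ROI editing on boolean masks:
    roi_union      = lambda a, b: a | b
    roi_xor        = lambda a, b: a ^ b
    roi_difference = lambda a, b: a & ~b

    # Example: two circular ROIs on a synthetic slice.
    yy, xx = np.mgrid[0:128, 0:128]
    roi_a = (yy - 50) ** 2 + (xx - 50) ** 2 < 20 ** 2
    roi_b = (yy - 60) ** 2 + (xx - 60) ** 2 < 20 ** 2
    slice_img = np.random.rand(128, 128)
    print(roi_stats(slice_img, roi_difference(roi_a, roi_b)))
    ```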

  15. In vivo features of melanocytic lesions: multimode hyperspectral dermoscopy, reflectance confocal microscopy, and histopathologic correlates (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Vasefi, Fartash; MacKinnon, Nicholas B.; Jain, Manu; Cordova, Miguel A.; Kose, Kivanc; Rajadhyaksha, Milind; Halpern, Allan C.; Farkas, Daniel L.

    2017-02-01

    Motivation and background: Melanoma, the fastest growing cancer worldwide, kills more than one person every hour in the United States. Determining the depth and distribution of dermal melanin and hemoglobin adds physio-morphologic information to the current diagnostic standard, cellular morphology, to further develop noninvasive methods to discriminate between melanoma and benign skin conditions. Purpose: To compare the performance of a multimode dermoscopy system (SkinSpect), which is designed to quantify and map in vivo melanin and hemoglobin in skin in three dimensions, and to validate this with histopathology and three-dimensional reflectance confocal microscopy (RCM) imaging. Methods: SkinSpect and RCM images of suspect lesions and nearby normal skin were captured sequentially and compared with histopathology reports. RCM imaging allows noninvasive observation of nuclear, cellular and structural detail in 1-5 μm-thin optical sections in skin, and detection of pigmented skin lesions with a sensitivity of 90-95% and specificity of 70-80%. The multimode imaging dermoscope combines polarization (cross and parallel), autofluorescence and hyperspectral imaging to noninvasively map the distribution of melanin, collagen and hemoglobin oxygenation in pigmented skin lesions. Results: We compared in vivo features of ten melanocytic lesions extracted by SkinSpect and RCM imaging, and correlated them to histopathologic results. We present results of two melanoma cases (in situ and invasive), and compare with in vivo features from eight benign lesions. Melanin distribution at different depths and hemodynamics, including abnormal vascularity, detected by both SkinSpect and RCM, will be discussed. Conclusion: Diagnostic features such as dermal melanin and hemoglobin concentration provided in SkinSpect skin analysis for melanoma and normal pigmented lesions can be compared and validated using results from RCM and histopathology.

  16. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS).

    PubMed

    Menze, Bjoern H; Jakab, Andras; Bauer, Stefan; Kalpathy-Cramer, Jayashree; Farahani, Keyvan; Kirby, Justin; Burren, Yuliya; Porz, Nicole; Slotboom, Johannes; Wiest, Roland; Lanczi, Levente; Gerstner, Elizabeth; Weber, Marc-André; Arbel, Tal; Avants, Brian B; Ayache, Nicholas; Buendia, Patricia; Collins, D Louis; Cordier, Nicolas; Corso, Jason J; Criminisi, Antonio; Das, Tilak; Delingette, Hervé; Demiralp, Çağatay; Durst, Christopher R; Dojat, Michel; Doyle, Senan; Festa, Joana; Forbes, Florence; Geremia, Ezequiel; Glocker, Ben; Golland, Polina; Guo, Xiaotao; Hamamci, Andac; Iftekharuddin, Khan M; Jena, Raj; John, Nigel M; Konukoglu, Ender; Lashkari, Danial; Mariz, José Antonió; Meier, Raphael; Pereira, Sérgio; Precup, Doina; Price, Stephen J; Raviv, Tammy Riklin; Reza, Syed M S; Ryan, Michael; Sarikaya, Duygu; Schwartz, Lawrence; Shin, Hoo-Chang; Shotton, Jamie; Silva, Carlos A; Sousa, Nuno; Subbanna, Nagesh K; Szekely, Gabor; Taylor, Thomas J; Thomas, Owen M; Tustison, Nicholas J; Unal, Gozde; Vasseur, Flor; Wintermark, Max; Ye, Dong Hye; Zhao, Liang; Zhao, Binsheng; Zikic, Darko; Prastawa, Marcel; Reyes, Mauricio; Van Leemput, Koen

    2015-10-01

    In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients-manually annotated by up to four raters-and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
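    The fusion result reported above can be illustrated, at its simplest, as a per-voxel majority vote across the label maps produced by different algorithms, as sketched below. The benchmark's hierarchical variant applies the vote separately to nested tumor sub-regions; this plain vote is only an illustration.

    ```python
    import numpy as np

    def majority_vote(label_maps):
        """Per-voxel majority vote over a list of integer label volumes of equal shape."""
        stack = np.stack(label_maps, axis=0)            # (n_raters, ...) integer labels
        n_labels = int(stack.max()) + 1
        votes = np.stack([(stack == lab).sum(axis=0) for lab in range(n_labels)], axis=0)
        return votes.argmax(axis=0)

    # Example: three toy segmentations of a 4-voxel row.
    segs = [np.array([0, 1, 2, 2]), np.array([0, 1, 1, 2]), np.array([1, 1, 2, 2])]
    print(majority_vote(segs))   # -> [0 1 2 2]
    ```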

  17. Integrated nanotechnology platform for tumor-targeted multimodal imaging and therapeutic cargo release

    PubMed Central

    Hosoya, Hitomi; Dobroff, Andrey S.; Driessen, Wouter H. P.; Cristini, Vittorio; Brinker, Lina M.; Staquicini, Fernanda I.; Cardó-Vila, Marina; D’Angelo, Sara; Ferrara, Fortunato; Proneth, Bettina; Lin, Yu-Shen; Dunphy, Darren R.; Dogra, Prashant; Melancon, Marites P.; Stafford, R. Jason; Miyazono, Kohei; Gelovani, Juri G.; Kataoka, Kazunori; Brinker, C. Jeffrey; Sidman, Richard L.; Arap, Wadih; Pasqualini, Renata

    2016-01-01

    A major challenge of targeted molecular imaging and drug delivery in cancer is establishing a functional combination of ligand-directed cargo with a triggered release system. Here we develop a hydrogel-based nanotechnology platform that integrates tumor targeting, photon-to-heat conversion, and triggered drug delivery within a single nanostructure to enable multimodal imaging and controlled release of therapeutic cargo. In proof-of-concept experiments, we show a broad range of ligand peptide-based applications with phage particles, heat-sensitive liposomes, or mesoporous silica nanoparticles that self-assemble into a hydrogel for tumor-targeted drug delivery. Because nanoparticles pack densely within the nanocarrier, their surface plasmon resonance shifts to near-infrared, thereby enabling a laser-mediated photothermal mechanism of cargo release. We demonstrate both noninvasive imaging and targeted drug delivery in preclinical mouse models of breast and prostate cancer. Finally, we applied mathematical modeling to predict and confirm tumor targeting and drug delivery. These results are meaningful steps toward the design and initial translation of an enabling nanotechnology platform with potential for broad clinical applications. PMID:26839407

  18. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)

    PubMed Central

    Jakab, Andras; Bauer, Stefan; Kalpathy-Cramer, Jayashree; Farahani, Keyvan; Kirby, Justin; Burren, Yuliya; Porz, Nicole; Slotboom, Johannes; Wiest, Roland; Lanczi, Levente; Gerstner, Elizabeth; Weber, Marc-André; Arbel, Tal; Avants, Brian B.; Ayache, Nicholas; Buendia, Patricia; Collins, D. Louis; Cordier, Nicolas; Corso, Jason J.; Criminisi, Antonio; Das, Tilak; Delingette, Hervé; Demiralp, Çağatay; Durst, Christopher R.; Dojat, Michel; Doyle, Senan; Festa, Joana; Forbes, Florence; Geremia, Ezequiel; Glocker, Ben; Golland, Polina; Guo, Xiaotao; Hamamci, Andac; Iftekharuddin, Khan M.; Jena, Raj; John, Nigel M.; Konukoglu, Ender; Lashkari, Danial; Mariz, José António; Meier, Raphael; Pereira, Sérgio; Precup, Doina; Price, Stephen J.; Raviv, Tammy Riklin; Reza, Syed M. S.; Ryan, Michael; Sarikaya, Duygu; Schwartz, Lawrence; Shin, Hoo-Chang; Shotton, Jamie; Silva, Carlos A.; Sousa, Nuno; Subbanna, Nagesh K.; Szekely, Gabor; Taylor, Thomas J.; Thomas, Owen M.; Tustison, Nicholas J.; Unal, Gozde; Vasseur, Flor; Wintermark, Max; Ye, Dong Hye; Zhao, Liang; Zhao, Binsheng; Zikic, Darko; Prastawa, Marcel; Reyes, Mauricio; Van Leemput, Koen

    2016-01-01

    In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients—manually annotated by up to four raters—and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%–85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource. PMID:25494501

  19. Integrated nanotechnology platform for tumor-targeted multimodal imaging and therapeutic cargo release.

    PubMed

    Hosoya, Hitomi; Dobroff, Andrey S; Driessen, Wouter H P; Cristini, Vittorio; Brinker, Lina M; Staquicini, Fernanda I; Cardó-Vila, Marina; D'Angelo, Sara; Ferrara, Fortunato; Proneth, Bettina; Lin, Yu-Shen; Dunphy, Darren R; Dogra, Prashant; Melancon, Marites P; Stafford, R Jason; Miyazono, Kohei; Gelovani, Juri G; Kataoka, Kazunori; Brinker, C Jeffrey; Sidman, Richard L; Arap, Wadih; Pasqualini, Renata

    2016-02-16

    A major challenge of targeted molecular imaging and drug delivery in cancer is establishing a functional combination of ligand-directed cargo with a triggered release system. Here we develop a hydrogel-based nanotechnology platform that integrates tumor targeting, photon-to-heat conversion, and triggered drug delivery within a single nanostructure to enable multimodal imaging and controlled release of therapeutic cargo. In proof-of-concept experiments, we show a broad range of ligand peptide-based applications with phage particles, heat-sensitive liposomes, or mesoporous silica nanoparticles that self-assemble into a hydrogel for tumor-targeted drug delivery. Because nanoparticles pack densely within the nanocarrier, their surface plasmon resonance shifts to near-infrared, thereby enabling a laser-mediated photothermal mechanism of cargo release. We demonstrate both noninvasive imaging and targeted drug delivery in preclinical mouse models of breast and prostate cancer. Finally, we applied mathematical modeling to predict and confirm tumor targeting and drug delivery. These results are meaningful steps toward the design and initial translation of an enabling nanotechnology platform with potential for broad clinical applications.

  20. New Dioxaborolane Chemistry Enables [18F]-Positron-Emitting, Fluorescent [18F]-Multimodality Biomolecule Generation from the Solid Phase

    PubMed Central

    Crisp, Jessica L.; Vera, David R.; Tsien, Roger Y.; Ting, Richard

    2016-01-01

    New protecting group chemistry is used to greatly simplify imaging probe production. Temperature and organic solvent-sensitive biomolecules are covalently attached to a biotin-bearing dioxaborolane, which facilitates antibody immobilization on a streptavidin-agarose solid-phase support. Treatment with aqueous fluoride triggers fluoride-labeled antibody release from the solid phase, separated from unlabeled antibody, and creates [18F]-trifluoroborate-antibody for positron emission tomography and near-infrared fluorescent (PET/NIRF) multimodality imaging. This dioxaborolane-fluoride reaction is bioorthogonal, does not inhibit antigen binding, and increases [18F]-specific activity relative to solution-based radiosyntheses. Two applications are investigated: an anti-epithelial cell adhesion molecule (EpCAM) monoclonal antibody (mAb) that labels prostate tumors and Cetuximab, an anti-epidermal growth factor receptor (EGFR) mAb (FDA approved) that labels lung adenocarcinoma tumors. Colocalized, tumor-specific NIRF and PET imaging confirm utility of the new technology. The described chemistry should allow labeling of many commercial systems, diabodies, nanoparticles, and small molecules for dual modality imaging of many diseases. PMID:27064381

  1. New Dioxaborolane Chemistry Enables [(18)F]-Positron-Emitting, Fluorescent [(18)F]-Multimodality Biomolecule Generation from the Solid Phase.

    PubMed

    Rodriguez, Erik A; Wang, Ye; Crisp, Jessica L; Vera, David R; Tsien, Roger Y; Ting, Richard

    2016-05-18

    New protecting group chemistry is used to greatly simplify imaging probe production. Temperature and organic solvent-sensitive biomolecules are covalently attached to a biotin-bearing dioxaborolane, which facilitates antibody immobilization on a streptavidin-agarose solid-phase support. Treatment with aqueous fluoride triggers fluoride-labeled antibody release from the solid phase, separated from unlabeled antibody, and creates [(18)F]-trifluoroborate-antibody for positron emission tomography and near-infrared fluorescent (PET/NIRF) multimodality imaging. This dioxaborolane-fluoride reaction is bioorthogonal, does not inhibit antigen binding, and increases [(18)F]-specific activity relative to solution-based radiosyntheses. Two applications are investigated: an anti-epithelial cell adhesion molecule (EpCAM) monoclonal antibody (mAb) that labels prostate tumors and Cetuximab, an anti-epidermal growth factor receptor (EGFR) mAb (FDA approved) that labels lung adenocarcinoma tumors. Colocalized, tumor-specific NIRF and PET imaging confirm utility of the new technology. The described chemistry should allow labeling of many commercial systems, diabodies, nanoparticles, and small molecules for dual modality imaging of many diseases.

  2. Integrated nanotechnology platform for tumor-targeted multimodal imaging and therapeutic cargo release

    DOE PAGES

    Hosoya, Hitomi; Dobroff, Andrey S.; Driessen, Wouter H. P.; ...

    2016-02-02

    A major challenge of targeted molecular imaging and drug delivery in cancer is establishing a functional combination of ligand-directed cargo with a triggered release system. Here we develop a hydrogel-based nanotechnology platform that integrates tumor targeting, photon-to-heat conversion, and triggered drug delivery within a single nanostructure to enable multimodal imaging and controlled release of therapeutic cargo. In proof-of-concept experiments, we show a broad range of ligand peptide-based applications with phage particles, heat-sensitive liposomes, or mesoporous silica nanoparticles that self-assemble into a hydrogel for tumor-targeted drug delivery. Because nanoparticles pack densely within the nanocarrier, their surface plasmon resonance shifts to near-infrared, thereby enabling a laser-mediated photothermal mechanism of cargo release. We demonstrate both noninvasive imaging and targeted drug delivery in preclinical mouse models of breast and prostate cancer. Finally, we applied mathematical modeling to predict and confirm tumor targeting and drug delivery. We conclude that these results are meaningful steps toward the design and initial translation of an enabling nanotechnology platform with potential for broad clinical applications.

  3. Integrated nanotechnology platform for tumor-targeted multimodal imaging and therapeutic cargo release

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hosoya, Hitomi; Dobroff, Andrey S.; Driessen, Wouter H. P.

    A major challenge of targeted molecular imaging and drug delivery in cancer is establishing a functional combination of ligand-directed cargo with a triggered release system. Here we develop a hydrogel-based nanotechnology platform that integrates tumor targeting, photon-to-heat conversion, and triggered drug delivery within a single nanostructure to enable multimodal imaging and controlled release of therapeutic cargo. In proof-of-concept experiments, we show a broad range of ligand peptide-based applications with phage particles, heat-sensitive liposomes, or mesoporous silica nanoparticles that self-assemble into a hydrogel for tumor-targeted drug delivery. Because nanoparticles pack densely within the nanocarrier, their surface plasmon resonance shifts to near-infrared, thereby enabling a laser-mediated photothermal mechanism of cargo release. We demonstrate both noninvasive imaging and targeted drug delivery in preclinical mouse models of breast and prostate cancer. Finally, we applied mathematical modeling to predict and confirm tumor targeting and drug delivery. We conclude that these results are meaningful steps toward the design and initial translation of an enabling nanotechnology platform with potential for broad clinical applications.

  4. In vivo integrated photoacoustic ophthalmoscopy, optical coherence tomography, and scanning laser ophthalmoscopy for retinal imaging

    NASA Astrophysics Data System (ADS)

    Song, Wei; Zhang, Rui; Zhang, Hao F.; Wei, Qing; Cao, Wenwu

    2012-12-01

    The physiological and pathological properties of retina are closely associated with various optical contrasts. Hence, integrating different ophthalmic imaging technologies is more beneficial in both fundamental investigation and clinical diagnosis of several blinding diseases. Recently, photoacoustic ophthalmoscopy (PAOM) was developed for in vivo retinal imaging in small animals, which demonstrated the capability of imaging retinal vascular networks and retinal pigment epithelium (RPE) at high sensitivity. We combined PAOM with traditional imaging modalities, such as fluorescein angiography (FA), spectral-domain optical coherence tomography (SD-OCT), and auto-fluorescence scanning laser ophthalmoscopy (AF-SLO), for imaging rats and mice. The multimodal imaging system provided more comprehensive evaluation of the retina based on the complementary imaging contrast mechanisms. The high-quality retinal images show that the integrated ophthalmic imaging system has great potential in the investigation of blinding disorders.

  5. Radionuclide Myocardial Perfusion Imaging for the Evaluation of Patients With Known or Suspected Coronary Artery Disease in the Era of Multimodality Cardiovascular Imaging

    PubMed Central

    Taqueti, Viviany R.; Di Carli, Marcelo F.

    2018-01-01

    Over the last several decades, radionuclide myocardial perfusion imaging (MPI) with single photon emission tomography and positron emission tomography has been a mainstay for the evaluation of patients with known or suspected coronary artery disease (CAD). More recently, technical advances in separate and complementary imaging modalities including coronary computed tomography angiography, computed tomography perfusion, cardiac magnetic resonance imaging, and contrast stress echocardiography have expanded the toolbox of diagnostic testing for cardiac patients. While the growth of available technologies has heralded an exciting era of multimodality cardiovascular imaging, coordinated and dispassionate utilization of these techniques is needed to implement the right test for the right patient at the right time, a promise of “precision medicine.” In this article, we review the maturing role of MPI in the current era of multimodality cardiovascular imaging, particularly in the context of recent advances in myocardial blood flow quantitation, and as applied to the evaluation of patients with known or suspected CAD. PMID:25770849

  6. Multimodal Image Alignment via Linear Mapping between Feature Modalities.

    PubMed

    Jiang, Yanyun; Zheng, Yuanjie; Hou, Sujuan; Chang, Yuchou; Gee, James

    2017-01-01

    We propose a novel landmark matching based method for aligning multimodal images, which is accomplished uniquely by resolving a linear mapping between different feature modalities. This linear mapping results in a new measure of the similarity of images captured from different modalities. In addition, our method simultaneously solves this linear mapping and the landmark correspondences by minimizing a convex quadratic function. Our method can estimate complex image relationships between different modalities and nonlinear nonrigid spatial transformations even in the presence of heavy noise, as shown in our experiments carried out by using a variety of image modalities.
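    The core computation above lends itself to a short illustration. The sketch below (a simplification, not the authors' joint formulation, which also solves the landmark correspondences) estimates the linear mapping between two feature modalities by minimizing a convex quadratic least-squares objective over hypothetical, already-corresponding landmark features; the residual under the learned mapping then serves as a cross-modality similarity measure.

    ```python
    import numpy as np

    # Hypothetical corresponding landmark features from two modalities:
    # rows are landmarks, columns are feature dimensions.
    rng = np.random.default_rng(0)
    F_a = rng.normal(size=(50, 8))                          # modality A features
    W_true = rng.normal(size=(8, 8))
    F_b = F_a @ W_true + 0.01 * rng.normal(size=(50, 8))    # modality B features

    # Least-squares estimate of the linear mapping W minimizing ||F_a W - F_b||^2,
    # a convex quadratic objective (correspondences are assumed known here).
    W_hat, *_ = np.linalg.lstsq(F_a, F_b, rcond=None)

    # Similarity of a landmark pair under the learned mapping: a small residual
    # suggests the two feature vectors describe the same physical landmark.
    residual = np.linalg.norm(F_a @ W_hat - F_b, axis=1)
    print(residual.mean())
    ```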

  7. Medical image registration based on normalized multidimensional mutual information

    NASA Astrophysics Data System (ADS)

    Li, Qi; Ji, Hongbing; Tong, Ming

    2009-10-01

    Registration of medical images is an essential research topic in medical image processing and applications, and especially a preliminary and key step for multimodality image fusion. This paper offers a solution to medical image registration based on normalized multi-dimensional mutual information. Firstly, affine transformation with translational and rotational parameters is applied to the floating image. Then ordinal features are extracted by ordinal filters with different orientations to represent spatial information in medical images. Integrating ordinal features with pixel intensities, the normalized multi-dimensional mutual information is defined as similarity criterion to register multimodality images. Finally the immune algorithm is used to search registration parameters. The experimental results demonstrate the effectiveness of the proposed registration scheme.
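    Since normalized mutual information is the similarity criterion named above, a minimal sketch may help; it computes a standard normalized mutual information from a joint intensity histogram and omits the paper's ordinal-feature (multi-dimensional) extension and the immune-algorithm parameter search.

    ```python
    import numpy as np

    def normalized_mutual_information(img_a, img_b, bins=32):
        """NMI = (H(A) + H(B)) / H(A, B), from a joint intensity histogram."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        eps = 1e-12
        h_a = -np.sum(px * np.log(px + eps))
        h_b = -np.sum(py * np.log(py + eps))
        h_ab = -np.sum(pxy * np.log(pxy + eps))
        return (h_a + h_b) / h_ab

    # Toy check: an image compared against a contrast-altered copy of itself
    # scores higher than against an unrelated image.
    rng = np.random.default_rng(1)
    a = rng.random((128, 128))
    print(normalized_mutual_information(a, a ** 2),
          normalized_mutual_information(a, rng.random((128, 128))))
    ```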

  8. Use of multimodality imaging and artificial intelligence for diagnosis and prognosis of early stages of Alzheimer's disease.

    PubMed

    Liu, Xiaonan; Chen, Kewei; Wu, Teresa; Weidman, David; Lure, Fleming; Li, Jing

    2018-04-01

    Alzheimer's disease (AD) is a major neurodegenerative disease and the most common cause of dementia. Currently, no treatment exists to slow down or stop the progression of AD. There is converging belief that disease-modifying treatments should focus on early stages of the disease, that is, the mild cognitive impairment (MCI) and preclinical stages. Making a diagnosis of AD and offering a prognosis (likelihood of converting to AD) at these early stages are challenging tasks but possible with the help of multimodality imaging, such as magnetic resonance imaging (MRI), fluorodeoxyglucose (FDG)-positron emission tomography (PET), amyloid-PET, and recently introduced tau-PET, which provide different but complementary information. This article is a focused review of existing research in the recent decade that used statistical machine learning and artificial intelligence methods to perform quantitative analysis of multimodality image data for diagnosis and prognosis of AD at the MCI or preclinical stages. We review the existing work in 3 subareas: diagnosis, prognosis, and methods for handling modality-wise missing data, a commonly encountered problem when using multimodality imaging for prediction or classification. Factors contributing to missing data include lack of imaging equipment, cost, difficulty of obtaining patient consent, and patient drop-off (in longitudinal studies). Finally, we summarize our major findings and provide some recommendations for potential future research directions. Copyright © 2018 Elsevier Inc. All rights reserved.
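    As a concrete illustration of the modality-wise missing-data issue discussed above, the hedged sketch below shows one common baseline: mean imputation of an absent modality followed by classification on the concatenated multimodal features. The data, feature dimensions, and labels are synthetic placeholders (not ADNI data), and the review surveys far more sophisticated strategies.

    ```python
    import numpy as np
    from sklearn.impute import SimpleImputer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(2)
    n = 200
    mri = rng.normal(size=(n, 10))        # synthetic MRI features
    pet = rng.normal(size=(n, 10))        # synthetic FDG-PET features
    pet[rng.random(n) < 0.3] = np.nan     # ~30% of subjects lack the PET modality
    X = np.hstack([mri, pet])
    y = (mri[:, 0] + np.nan_to_num(pet[:, 0]) > 0).astype(int)  # toy labels

    # Mean-impute the missing modality, then classify.
    clf = make_pipeline(SimpleImputer(strategy="mean"),
                        LogisticRegression(max_iter=1000))
    clf.fit(X, y)
    print(clf.score(X, y))
    ```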

  9. A DICOM-based 2nd generation Molecular Imaging Data Grid implementing the IHE XDS-i integration profile.

    PubMed

    Lee, Jasper; Zhang, Jianguo; Park, Ryan; Dagliyan, Grant; Liu, Brent; Huang, H K

    2012-07-01

    A Molecular Imaging Data Grid (MIDG) was developed to address current informatics challenges in archival, sharing, search, and distribution of preclinical imaging studies between animal imaging facilities and investigator sites. This manuscript presents a 2nd generation MIDG replacing the Globus Toolkit with a new system architecture that implements the IHE XDS-i integration profile. Implementation and evaluation were conducted using a 3-site interdisciplinary test-bed at the University of Southern California. The 2nd generation MIDG design architecture replaces the initial design's Globus Toolkit with dedicated web services and XML-based messaging for dedicated management and delivery of multi-modality DICOM imaging datasets. The Cross-enterprise Document Sharing for Imaging (XDS-i) integration profile from the field of enterprise radiology informatics was adopted into the MIDG design because streamlined image registration, management, and distribution dataflow are likewise needed in preclinical imaging informatics systems as in enterprise PACS application. Implementation of the MIDG is demonstrated at the University of Southern California Molecular Imaging Center (MIC) and two other sites with specified hardware, software, and network bandwidth. Evaluation of the MIDG involves data upload, download, and fault-tolerance testing scenarios using multi-modality animal imaging datasets collected at the USC Molecular Imaging Center. The upload, download, and fault-tolerance tests of the MIDG were performed multiple times using 12 collected animal study datasets. Upload and download times demonstrated reproducibility and improved real-world performance. Fault-tolerance tests showed that automated failover between Grid Node Servers has minimal impact on normal download times. Building upon the 1st generation concepts and experiences, the 2nd generation MIDG system improves accessibility of disparate animal-model molecular imaging datasets to users outside a molecular imaging facility's LAN using a new architecture, dataflow, and dedicated DICOM-based management web services. Productivity and efficiency of preclinical research for translational sciences investigators has been further streamlined for multi-center study data registration, management, and distribution.

  10. Overlapping-image multimode interference couplers with a reduced number of self-images for uniform and nonuniform power splitting

    NASA Astrophysics Data System (ADS)

    Bachmann, M.; Besse, P. A.; Melchior, H.

    1995-10-01

    Overlapping-image multimode interference (MMI) couplers, a new class of devices, permit uniform and nonuniform power splitting. A theoretical description directly relates coupler geometry to image intensities, positions, and phases. Among many possibilities of nonuniform power splitting, examples of 1 × 2 couplers with ratios of 15:85 and 28:72 are given. An analysis of uniform power splitters includes the well-known 2 × N and 1 × N MMI couplers. Applications of MMI couplers include mode filters, mode splitters-combiners, and mode converters.
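    The self-imaging behaviour underlying such couplers is commonly summarised by the beat length of the two lowest-order modes; the relation below is the standard one from general MMI coupler theory (quoted here as background, not taken from this paper), with β0 and β1 the propagation constants of the two lowest-order modes, n_r the effective index, W_e the effective width of the multimode section, and λ0 the free-space wavelength. N-fold self-images then form at well-defined fractions of 3L_π along the multimode section.

    ```latex
    L_{\pi} \;=\; \frac{\pi}{\beta_{0} - \beta_{1}} \;\approx\; \frac{4\, n_{r}\, W_{e}^{2}}{3\, \lambda_{0}}
    ```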

  11. Spiking Cortical Model Based Multimodal Medical Image Fusion by Combining Entropy Information with Weber Local Descriptor

    PubMed Central

    Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei

    2016-01-01

    Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation. PMID:27649190

  12. Spiking Cortical Model Based Multimodal Medical Image Fusion by Combining Entropy Information with Weber Local Descriptor.

    PubMed

    Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei

    2016-09-15

    Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation.
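    To make the weighting idea concrete, the hedged sketch below fuses two co-registered source images with per-pixel weights derived from local entropy; local entropy here is only a stand-in for the SCM firing-time entropy and Weber local descriptor used in the paper, which are not reproduced.

    ```python
    import numpy as np
    from scipy.ndimage import generic_filter

    def local_entropy(img, size=7, bins=16):
        """Per-pixel Shannon entropy of intensities in a size x size neighbourhood."""
        def entropy(window):
            hist, _ = np.histogram(window, bins=bins, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]
            return -np.sum(p * np.log2(p))
        return generic_filter(img, entropy, size=size)

    def entropy_weighted_fusion(img_a, img_b):
        """Fuse two co-registered images with weights from their local entropy maps."""
        w_a, w_b = local_entropy(img_a), local_entropy(img_b)
        return (w_a * img_a + w_b * img_b) / (w_a + w_b + 1e-12)

    rng = np.random.default_rng(3)
    ct_slice = rng.random((64, 64))   # placeholder for a CT slice scaled to [0, 1]
    mr_slice = rng.random((64, 64))   # placeholder for a co-registered MR slice
    print(entropy_weighted_fusion(ct_slice, mr_slice).shape)
    ```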

  13. Multimodal Task-Driven Dictionary Learning for Image Classification

    DTIC Science & Technology

    2015-12-18

    Multimodal Task-Driven Dictionary Learning for Image Classification. Soheil Bahrampour, Student Member, IEEE; Nasser M. Nasrabadi, Fellow, IEEE; ... Asok Ray, Fellow, IEEE; and W. Kenneth Jenkins, Life Fellow, IEEE. Abstract—Dictionary learning algorithms have been successfully used for both ... reconstructive and discriminative tasks, where an input signal is represented with a sparse linear combination of dictionary atoms. While these methods are
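    The sparse representation mentioned in the snippet, expressing each input signal as a sparse linear combination of dictionary atoms, can be illustrated with a generic scikit-learn sketch (this is plain dictionary learning, not the task-driven multimodal algorithm of the report; the data are random placeholders).

    ```python
    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    rng = np.random.default_rng(4)
    X = rng.normal(size=(100, 20))   # 100 training signals, each of dimension 20

    # Learn a dictionary; each signal is then encoded as a sparse linear
    # combination of its atoms (orthogonal matching pursuit, 5 nonzeros).
    dico = DictionaryLearning(n_components=32, transform_algorithm="omp",
                              transform_n_nonzero_coefs=5, random_state=0)
    codes = dico.fit_transform(X)                # sparse codes, shape (100, 32)
    reconstruction = codes @ dico.components_    # approximate the input signals
    print(np.mean((X - reconstruction) ** 2))
    ```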

  14. A semantic model for multimodal data mining in healthcare information systems.

    PubMed

    Iakovidis, Dimitris; Smailis, Christos

    2012-01-01

    Electronic health records (EHRs) are representative examples of multimodal/multisource data collections, including measurements, images and free texts. The diversity of such information sources and the increasing amounts of medical data produced by healthcare institutes annually pose significant challenges in data mining. In this paper we present a novel semantic model that describes knowledge extracted from the lowest level of a data mining process, where information is represented by multiple features, i.e. measurements or numerical descriptors extracted from measurements, images, texts or other medical data, forming multidimensional feature spaces. Knowledge collected by manual annotation or extracted by unsupervised data mining from one or more feature spaces is modeled through generalized qualitative spatial semantics. This model enables a unified representation of knowledge across multimodal data repositories. It contributes to bridging the semantic gap, by enabling direct links between low-level features and higher-level concepts e.g. describing body parts, anatomies and pathological findings. The proposed model has been developed in web ontology language based on description logics (OWL-DL) and can be applied to a variety of data mining tasks in medical informatics. Its utility is demonstrated for automatic annotation of medical data.

  15. Cell-permeable Ln(III) chelate-functionalized InP quantum dots as multimodal imaging agents.

    PubMed

    Stasiuk, Graeme J; Tamang, Sudarsan; Imbert, Daniel; Poillot, Cathy; Giardiello, Marco; Tisseyre, Céline; Barbier, Emmanuel L; Fries, Pascal Henry; de Waard, Michel; Reiss, Peter; Mazzanti, Marinella

    2011-10-25

    Quantum dots (QDs) are ideal scaffolds for the development of multimodal imaging agents, but their application in clinical diagnostics is limited by the toxicity of classical CdSe QDs. A new bimodal MRI/optical nanosized contrast agent with high gadolinium payload has been prepared through direct covalent attachment of up to 80 Gd(III) chelates on fluorescent nontoxic InP/ZnS QDs. It shows a high relaxivity of 900 mM(-1) s(-1) (13 mM(-1) s(-1) per Gd ion) at 35 MHz (0.81 T) and 298 K, while the bright luminescence of the QDs is preserved. Eu(III) and Tb(III) chelates were also successfully grafted to the InP/ZnS QDs. The absence of energy transfer between the QD and lanthanide emitting centers results in a multicolor system. Using this convenient direct grafting strategy, additional targeting ligands can be included on the QD. Here a cell-penetrating peptide has been co-grafted in a one-pot reaction to afford a cell-permeable multimodal multimeric MRI contrast agent that reports cellular localization by fluorescence and provides high relaxivity and increased tissue retention with respect to commercial contrast agents.

  16. Distributed multimodal data fusion for large scale wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Ertin, Emre

    2006-05-01

    Sensor network technology has enabled new surveillance systems where sensor nodes equipped with processing and communication capabilities can collaboratively detect, classify and track targets of interest over a large surveillance area. In this paper we study distributed fusion of multimodal sensor data for extracting target information from a large scale sensor network. Optimal tracking, classification, and reporting of threat events require joint consideration of multiple sensor modalities. Multiple sensor modalities improve tracking by reducing the uncertainty in the track estimates as well as resolving track-sensor data association problems. Our approach to solving the fusion problem with a large number of multimodal sensors is the construction of likelihood maps. The likelihood maps provide summary data for the solution of the detection, tracking and classification problem. The likelihood map presents the sensory information in a format that is easy for decision makers to interpret and is suitable for fusion with spatial prior information such as maps and imaging data from stand-off imaging sensors. We follow a statistical approach to combine sensor data at different levels of uncertainty and resolution. The likelihood-map approach transforms each sensor data stream to a spatio-temporal likelihood map ideally suited for fusion with imaging sensor outputs and prior geographic information about the scene. We also discuss distributed computation of the likelihood map using a gossip-based algorithm and present simulation results.
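    The likelihood-map construction can be sketched in a few lines: each sensor contributes a spatial log-likelihood surface for target presence, and, assuming conditional independence across sensors, the surfaces are fused by summation. The grid size, range-only sensor model, and sensor positions below are illustrative assumptions, not details from the paper.

    ```python
    import numpy as np

    GRID = (100, 100)                          # surveillance area discretised into cells
    yy, xx = np.mgrid[0:GRID[0], 0:GRID[1]]

    def sensor_loglik(pos, rng_reading, spread):
        """Log-likelihood surface for one sensor under a simple range-only model:
        the likelihood peaks where the cell-to-sensor distance matches the reading."""
        dist = np.hypot(yy - pos[0], xx - pos[1])
        return -0.5 * ((dist - rng_reading) / spread) ** 2

    # Two sensors of different modalities reporting ranges to the same target.
    fused = (sensor_loglik((20, 30), rng_reading=40.0, spread=5.0)
             + sensor_loglik((80, 70), rng_reading=35.0, spread=8.0))

    # The fused map summarises the data; its maximum is a point estimate of location.
    print(np.unravel_index(np.argmax(fused), GRID))
    ```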

  17. Learning of Multimodal Representations With Random Walks on the Click Graph.

    PubMed

    Wu, Fei; Lu, Xinyan; Song, Jun; Yan, Shuicheng; Zhang, Zhongfei Mark; Rui, Yong; Zhuang, Yueting

    2016-02-01

    In multimedia information retrieval, most classic approaches tend to represent different modalities of media in the same feature space. With the click data collected from the users' searching behavior, existing approaches take either one-to-one paired data (text-image pairs) or ranking examples (text-query-image and/or image-query-text ranking lists) as training examples, which do not make full use of the click data, particularly the implicit connections among the data objects. In this paper, we treat the click data as a large click graph, in which vertices are images/text queries and edges indicate the clicks between an image and a query. We consider learning a multimodal representation from the perspective of encoding the explicit/implicit relevance relationship between the vertices in the click graph. By minimizing both the truncated random walk loss and the distance between the learned representation of vertices and their corresponding deep neural network output, the proposed model, named multimodal random walk neural network (MRW-NN), can be applied not only to learn robust representations of the existing multimodal data in the click graph, but also to deal with unseen queries and images to support cross-modal retrieval. We evaluate the latent representation learned by MRW-NN on a public large-scale click log data set Clickture and further show that MRW-NN achieves much better cross-modal retrieval performance on the unseen queries/images than the other state-of-the-art methods.
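    The graph side of the method can be illustrated compactly: a truncated random walk over the bipartite click graph (queries and images as vertices, clicks as edges) yields mixed-modality vertex contexts that would feed the representation learner. The toy click log below is an assumption for illustration, and the neural embedding step of MRW-NN is not reproduced.

    ```python
    import random
    from collections import defaultdict

    # Toy bipartite click graph: text queries <-> clicked images.
    clicks = [("q:red car", "img1"), ("q:red car", "img2"),
              ("q:sports car", "img2"), ("q:sports car", "img3"),
              ("q:sunset", "img4")]
    graph = defaultdict(list)
    for q, img in clicks:
        graph[q].append(img)
        graph[img].append(q)

    rng = random.Random(0)

    def truncated_random_walk(start, length=5):
        """Short walk over the click graph; visited vertices form a multimodal context."""
        walk = [start]
        for _ in range(length - 1):
            walk.append(rng.choice(graph[walk[-1]]))
        return walk

    # Context windows from such walks supply query/image co-occurrence pairs.
    for v in ("q:red car", "img3"):
        print(truncated_random_walk(v))
    ```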

  18. Multimodality optical coherence tomography and fluorescence confocal scanning laser ophthalmoscopy for image-guided feedback of intraocular injections in mouse models

    NASA Astrophysics Data System (ADS)

    Benavides, Oscar R.; Terrones, Benjamin D.; Leeburg, Kelsey C.; Mehanathan, Sankarathi B.; Levine, Edward M.; Tao, Yuankai K.

    2018-02-01

    Rodent models are robust tools for understanding human retinal disease and function because of their similarities with human physiology and anatomy and availability of genetic mutants. Optical coherence tomography (OCT) has been well-established for ophthalmic imaging in rodents and enables depth-resolved visualization of structures and image-based surrogate biomarkers of disease. Similarly, fluorescence confocal scanning laser ophthalmoscopy (cSLO) has demonstrated utility for imaging endogenous and exogenous fluorescence and scattering contrast in the mouse retina. Complementary volumetric scattering and en face fluorescence contrast from OCT and cSLO, respectively, enables cellular-resolution longitudinal imaging of changes in ophthalmic structure and function. We present a non-contact multimodal OCT+cSLO small animal imaging system with extended working distance to the pupil, which enables imaging during and after intraocular injection. While injections are routinely performed in mice to develop novel models of ophthalmic diseases and screen novel therapeutics, the location and volume delivered are not precisely controlled and are difficult to reproduce. Animals were imaged using a custom-built OCT engine and scan-head combined with a modified commercial cSLO scan-head. Post-injection imaging showed structural changes associated with retinal puncture, including the injection track, a retinal elevation, and detachment of the posterior hyaloid. When combined with image segmentation, we believe OCT can be used to precisely identify injection locations and quantify injection volumes. Fluorescence cSLO can provide complementary contrast for either fluorescently labeled compounds or transgenic cells for improved specificity. Our non-contact OCT+cSLO system is uniquely suited for concurrent imaging with intraocular injections, which may be used for real-time image-guided injections.
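    The volume quantification suggested in the last sentences reduces to simple voxel arithmetic once the injected bolus has been segmented in the OCT volume: volume = voxel count × voxel dimensions. The mask and voxel sizes below are illustrative assumptions, not the system's calibrated values.

    ```python
    import numpy as np

    # Boolean mask of OCT voxels segmented as injected material (placeholder).
    rng = np.random.default_rng(5)
    mask = rng.random((256, 256, 128)) > 0.999

    # Assumed voxel dimensions (axial x lateral x lateral), in micrometres.
    voxel_um = (2.0, 10.0, 10.0)
    voxel_volume_nl = np.prod(voxel_um) * 1e-6   # 1 nL = 1e6 um^3

    injected_volume_nl = mask.sum() * voxel_volume_nl
    print(f"estimated injection volume: {injected_volume_nl:.2f} nL")
    ```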

  19. F-18 Labeled Diabody-Luciferase Fusion Proteins for Optical-ImmunoPET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Anna M.

    2013-01-18

    The goal of the proposed work is to develop novel dual-labeled molecular imaging probes for multimodality imaging. Based on small, engineered antibodies called diabodies, these probes will be radioactively tagged with Fluorine-18 for PET imaging, and fused to luciferases for optical (bioluminescence) detection. Performance will be evaluated and validated using a prototype integrated optical-PET imaging system, OPET. Multimodality probes for optical-PET imaging will be based on diabodies that are dually labeled with 18F for PET detection and fused to luciferases for optical imaging. 1) Two sets of fusion proteins will be built, targeting the cell surface markers CEA or HER2. Coelenterazine-based luciferases and variant forms will be evaluated in combination with native substrate and analogs, in order to obtain two distinct probes recognizing different targets with different spectral signatures. 2) Diabody-luciferase fusion proteins will be labeled with 18F using amine reactive [18F]-SFB produced using a novel microwave-assisted, one-pot method. 3) Site-specific, chemoselective radiolabeling methods will be devised, to reduce the chance that radiolabeling will inactivate either the target-binding properties or the bioluminescence properties of the diabody-luciferase fusion proteins. 4) Combined optical and PET imaging of these dual modality probes will be evaluated and validated in vitro and in vivo using a prototype integrated optical-PET imaging system, OPET. Each imaging modality has its strengths and weaknesses. Development and use of dual modality probes allows optical imaging to benefit from the localization and quantitation offered by the PET mode, and enhances the PET imaging by enabling simultaneous detection of more than one probe.

  20. Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery

    PubMed Central

    Fuerst, Bernhard; Tateno, Keisuke; Johnson, Alex; Fotouhi, Javad; Osgood, Greg; Tombari, Federico; Navab, Nassir

    2017-01-01

    Orthopaedic surgeons are still following the decades-old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. the pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking for creating a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via an iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surface and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows for tracking of the surgical tools occluded by hand. This proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion. PMID:29184659

  1. Development of an X-ray imaging system to prevent scintillator degradation for white synchrotron radiation.

    PubMed

    Zhou, Tunhe; Wang, Hongchang; Connolley, Thomas; Scott, Steward; Baker, Nick; Sawhney, Kawal

    2018-05-01

    The high flux of the white X-ray beams from third-generation synchrotron light sources can significantly benefit the development of high-speed X-ray imaging, but can also bring technical challenges to existing X-ray imaging systems. One prevalent problem is that the image quality deteriorates because of dust particles accumulating on the scintillator screen during exposure to intense X-ray radiation. Here, this problem has been solved by embedding the scintillator in a flowing inert-gas environment. It is also shown that the detector maintains the quality of the captured images even after days of X-ray exposure. This modification is cost-efficient and easy to implement. Representative examples of applications using the X-ray imaging system are also provided, including fast tomography and multimodal phase-contrast imaging for biomedical and geological samples.

  2. Optical diagnostics of mercury jet for an intense proton target.

    PubMed

    Park, H; Tsang, T; Kirk, H G; Ladeinde, F; Graves, V B; Spampinato, P T; Carroll, A J; Titus, P H; McDonald, K T

    2008-04-01

    An optical diagnostic system is designed and constructed for imaging a free mercury jet interacting with a high intensity proton beam in a pulsed high-field solenoid magnet. The optical imaging system employs a back-illuminated, laser shadow photography technique. Object illumination and image capture are transmitted through radiation-hard multimode optical fibers and flexible coherent imaging fibers. A retroreflected illumination design allows the entire passive imaging system to fit inside the bore of the solenoid magnet. A sequence of synchronized short laser light pulses is used to freeze the transient events, and the images are recorded by several high speed charge coupled devices. Quantitative and qualitative data analysis using image processing based on a probability approach is described. The characteristics of the free mercury jet as a high power target for beam-jet interaction at various levels of the magnetic induction field are reported in this paper.

  3. Development of an X-ray imaging system to prevent scintillator degradation for white synchrotron radiation

    PubMed Central

    Zhou, Tunhe; Wang, Hongchang; Scott, Steward

    2018-01-01

    The high flux of the white X-ray beams from third-generation synchrotron light sources can significantly benefit the development of high-speed X-ray imaging, but can also bring technical challenges to existing X-ray imaging systems. One prevalent problem is that the image quality deteriorates because of dust particles accumulating on the scintillator screen during exposure to intense X-ray radiation. Here, this problem has been solved by embedding the scintillator in a flowing inert-gas environment. It is also shown that the detector maintains the quality of the captured images even after days of X-ray exposure. This modification is cost-efficient and easy to implement. Representative examples of applications using the X-ray imaging system are also provided, including fast tomography and multimodal phase-contrast imaging for biomedical and geological samples. PMID:29714191

  4. Multimodal Nonlinear Optical Microscopy

    PubMed Central

    Yue, Shuhua; Slipchenko, Mikhail N.; Cheng, Ji-Xin

    2013-01-01

    Because each nonlinear optical (NLO) imaging modality is sensitive to specific molecules or structures, multimodal NLO imaging capitalizes on the potential of NLO microscopy for studies of complex biological tissues. The coupling of multiphoton fluorescence, second harmonic generation, and coherent anti-Stokes Raman scattering (CARS) has allowed investigation of a broad range of biological questions concerning lipid metabolism, cancer development, cardiovascular disease, and skin biology. Moreover, recent research shows the great potential of using a CARS microscope as a platform to develop more advanced NLO modalities such as electronic-resonance-enhanced four-wave mixing, stimulated Raman scattering, and pump-probe microscopy. This article reviews the various approaches developed for the realization of multimodal NLO imaging as well as developments of new NLO modalities on a CARS microscope. Applications to various aspects of biological and biomedical research are discussed.

  5. Fluorine-18-labeled Gd3+/Yb3+/Er3+ co-doped NaYF4 nanophosphors for multimodality PET/MR/UCL imaging.

    PubMed

    Zhou, Jing; Yu, Mengxiao; Sun, Yun; Zhang, Xianzhong; Zhu, Xingjun; Wu, Zhanhong; Wu, Dongmei; Li, Fuyou

    2011-02-01

    Molecular imaging modalities provide a wealth of information that is highly complementary and rarely redundant. To combine the advantages of molecular imaging techniques, (18)F-labeled Gd(3+)/Yb(3+)/Er(3+) co-doped NaYF(4) nanophosphors (NPs) simultaneously possessing radioactive, magnetic, and upconversion luminescent properties have been fabricated for multimodality positron emission tomography (PET), magnetic resonance imaging (MRI), and laser scanning upconversion luminescence (UCL) imaging. Hydrophilic citrate-capped NaY(0.2)Gd(0.6)Yb(0.18)Er(0.02)F(4) nanophosphors (cit-NPs) were obtained from hydrophobic oleic acid (OA)-coated nanoparticles (OA-NPs) through a process of ligand exchange of OA with citrate, and were found to be monodisperse with an average size of 22 × 19 nm. The obtained hexagonal cit-NPs show intense UCL emission in the visible region and paramagnetic longitudinal relaxivity (r(1) = 0.405 s(-1)·(mM)(-1)). Through a facile inorganic reaction based on the strong binding between Y(3+) and F(-), (18)F-labeled NPs have been fabricated in high yield. The use of cit-NPs as a multimodal probe has been further explored for T(1)-weighted MR and PET imaging in vivo and UCL imaging of living cells and tissue slides. The results indicate that (18)F-labeled NaY(0.2)Gd(0.6)Yb(0.18)Er(0.02)F(4) is a potential candidate as a multimodal nanoprobe for ultra-sensitive molecular imaging from the cellular scale to whole-body evaluation. Copyright © 2010 Elsevier Ltd. All rights reserved.

  6. Multimodal nonlinear microscopy of biopsy specimen: towards intraoperative diagnostics (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Schmitt, Michael; Heuke, Sandro; Meyer, Tobias; Chernavskaia, Olga; Bocklitz, Thomas W.; Popp, Juergen

    2016-03-01

    The realization of label-free, molecule-specific imaging of morphology and chemical composition of tissue at subcellular spatial resolution in real time is crucial for many envisioned applications in medicine, e.g., precise surgical guidance and non-invasive histopathologic examination of tissue. Thus, new approaches for fast and reliable in vivo and near in vivo (ex corpore in vivo) tissue characterization to supplement routine pathological diagnostics are needed. Spectroscopic imaging approaches are particularly important since they have the potential to provide a pathologist with adequate support in the form of clinically-relevant information under both ex vivo and in vivo conditions. In this contribution it is demonstrated that multimodal nonlinear microscopy combining coherent anti-Stokes Raman scattering (CARS), two photon excited fluorescence (TPEF) and second harmonic generation (SHG) enables the detection of characteristic structures and the accompanying molecular changes of widespread diseases, particularly of cancer and atherosclerosis. The detailed images enable an objective evaluation of the tissue samples for an early diagnosis of the disease status. Increasing the spectral resolution and analyzing CARS images at multiple Raman resonances improves the chemical specificity. To facilitate handling and interpretation of the image data, characteristic properties can be automatically extracted by advanced image processing algorithms, e.g., for tissue classification. Overall, the presented examples show the great potential of multimodal imaging to augment standard intraoperative clinical assessment with functional multimodal CARS/SHG/TPEF images to highlight functional activity and tumor boundaries. It ensures fast, label-free and non-invasive intraoperative tissue classification, paving the way towards in vivo optical pathology.

  7. A small field of view camera for hybrid gamma and optical imaging

    NASA Astrophysics Data System (ADS)

    Lees, J. E.; Bugby, S. L.; Bhatia, B. S.; Jambi, L. K.; Alqahtani, M. S.; McKnight, W. R.; Ng, A. H.; Perkins, A. C.

    2014-12-01

    The development of compact, low-profile gamma-ray detectors has allowed the production of small-field-of-view, hand-held imaging devices for use at the patient bedside and in operating theatres. The combination of an optical and a gamma camera, in a co-aligned configuration, offers high-spatial-resolution multi-modal imaging, giving a superimposed scintigraphic and optical image. This innovative introduction of hybrid imaging offers new possibilities for assisting surgeons in localising the site of uptake in procedures such as sentinel node detection. Recent improvements to the camera system, along with results of phantom and clinical imaging, are reported.

  8. Showing or Telling a Story: A Comparative Study of Public Education Texts in Multimodality and Monomodality

    ERIC Educational Resources Information Center

    Wang, Kelu

    2013-01-01

    Multimodal texts that combine words and images produce meaning in a different way from monomodal texts that rely on words. They differ not only in representing the subject matter, but also constructing relationships between text producers and text receivers. This article uses two multimodal texts and one monomodal written text as samples, which…

  9. Atmospheric aerosol measurements by employing a polarization scheimpflug lidar system

    NASA Astrophysics Data System (ADS)

    Mei, Liang; Guan, Peng; Yang, Yang

    2018-04-01

    A polarization lidar system based on the Scheimpflug principle has been developed at Dalian University of Technology (DLUT), Dalian, China, employing a compact 808-nm multimode high-power laser diode and two highly integrated CMOS sensors. The parallel- and orthogonally-polarized backscattering signals are recorded by two 45-degree-tilted image sensors, respectively. Atmospheric particle measurements were carried out by employing this polarization Scheimpflug lidar system.

  10. Label-aligned Multi-task Feature Learning for Multimodal Classification of Alzheimer’s Disease and Mild Cognitive Impairment

    PubMed Central

    Zu, Chen; Jie, Biao; Liu, Mingxia; Chen, Songcan

    2015-01-01

    Multimodal classification methods using different modalities of imaging and non-imaging data have recently shown great advantages over traditional single-modality-based ones for diagnosis and prognosis of Alzheimer’s disease (AD), as well as its prodromal stage, i.e., mild cognitive impairment (MCI). However, to the best of our knowledge, most existing methods focus on mining the relationship across multiple modalities of the same subjects, while ignoring the potentially useful relationship across different subjects. Accordingly, in this paper, we propose a novel learning method for multimodal classification of AD/MCI, by fully exploring the relationships across both modalities and subjects. Specifically, our proposed method includes two subsequent components, i.e., label-aligned multi-task feature selection and multimodal classification. In the first step, feature selection from multiple modalities is treated as a set of different learning tasks, and a group sparsity regularizer is imposed to jointly select a subset of relevant features. Furthermore, to utilize the discriminative information among labeled subjects, a new label-aligned regularization term is added into the objective function of standard multi-task feature selection, where label-alignment means that all multi-modality subjects with the same class labels should be closer in the new feature-reduced space. In the second step, a multi-kernel support vector machine (SVM) is adopted to fuse the selected features from multi-modality data for final classification. To validate our method, we perform experiments on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database using baseline MRI and FDG-PET imaging data. The experimental results demonstrate that our proposed method achieves better classification performance compared with several state-of-the-art methods for multimodal classification of AD/MCI. PMID:26572145
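    The second step, multi-kernel SVM fusion of the selected features, can be sketched as a weighted sum of per-modality kernels fed to a precomputed-kernel SVM; the data, feature counts, and kernel weight below are placeholders, and the label-aligned group-sparse feature selection of the first step is omitted.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics.pairwise import rbf_kernel

    rng = np.random.default_rng(6)
    n = 120
    mri = rng.normal(size=(n, 30))                  # selected MRI features (placeholder)
    pet = rng.normal(size=(n, 30))                  # selected FDG-PET features (placeholder)
    y = (mri[:, 0] + pet[:, 0] > 0).astype(int)     # toy class labels

    # Multi-kernel combination: weighted sum of per-modality RBF kernels.
    beta = 0.6
    K = beta * rbf_kernel(mri) + (1.0 - beta) * rbf_kernel(pet)

    clf = SVC(kernel="precomputed").fit(K, y)
    print(clf.score(K, y))                          # training accuracy of the fused kernel
    ```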

  11. GMars-T Enabling Multimodal Subdiffraction Structural and Functional Fluorescence Imaging in Live Cells.

    PubMed

    Wang, Sheng; Chen, Xuanze; Chang, Lei; Ding, Miao; Xue, Ruiying; Duan, Haifeng; Sun, Yujie

    2018-06-05

    Fluorescent probes with multimodal and multilevel imaging capabilities are highly valuable, as imaging with such probes can not only obtain new layers of information but also enable cross-validation of results under different experimental conditions. In recent years, the development of genetically encoded reversibly photoswitchable fluorescent proteins (RSFPs) has greatly promoted the application of various kinds of live-cell nanoscopy approaches, including reversible saturable optical fluorescence transitions (RESOLFT) and stochastic optical fluctuation imaging (SOFI). However, these two classes of live-cell nanoscopy approaches require different optical characteristics of specific RSFPs. In this work, we developed GMars-T, a monomeric bright green RSFP that can satisfy both RESOLFT and photochromic SOFI (pcSOFI) imaging in live cells. We further generated a biosensor based on bimolecular fluorescence complementation (BiFC) of GMars-T, which offers high specificity and sensitivity in detecting and visualizing various protein-protein interactions (PPIs) in different subcellular compartments under physiological conditions (e.g., 37 °C) in live mammalian cells. Thus, the newly developed GMars-T can serve as both a structural imaging probe with multimodal super-resolution imaging capability and a functional imaging probe for reporting PPIs with high specificity and sensitivity based on its derived biosensor.

  12. Multimodal optical workstation for simultaneous linear, nonlinear microscopy and nanomanipulation: upgrading a commercial confocal inverted microscope.

    PubMed

    Mathew, Manoj; Santos, Susana I C O; Zalvidea, Dobryna; Loza-Alvarez, Pablo

    2009-07-01

    In this work we propose and build a multimodal optical workstation that extends a commercially available confocal microscope (Nikon Confocal C1-Si) to include nonlinear/multiphoton microscopy and optical manipulation/stimulation tools such as nanosurgery. The setup allows both subsystems (confocal and nonlinear) to work independently and simultaneously. The workstation enables, for instance, nanosurgery along with simultaneous confocal and brightfield imaging. The nonlinear microscopy capabilities are added around the commercial confocal microscope by exploiting all the flexibility offered by this microscope and without need for any mechanical or electronic modification of the confocal microscope systems. As an example, the standard differential interference contrast condenser and diascopic detector in the confocal microscope are readily used as a forward detection mount for second harmonic generation imaging. The various capabilities of this workstation, as applied directly to biology, are demonstrated using the model organism Caenorhabditis elegans.

  13. Introduction to clinical and laboratory (small-animal) image registration and fusion.

    PubMed

    Zanzonico, Pat B; Nehmeh, Sadek A

    2006-01-01

    Imaging has long been a vital component of clinical medicine and, increasingly, of biomedical research in small animals. Clinical and laboratory imaging modalities can be divided into two general categories, structural (or anatomical) and functional (or physiological). The latter, in particular, has spawned what has come to be known as "molecular imaging". Image registration and fusion have rapidly emerged as invaluable components of both clinical and small-animal imaging and have led to the development and marketing of a variety of multi-modality devices, e.g. PET-CT, which provide registered and fused three-dimensional image sets. This paper briefly reviews the basics of image registration and fusion and available clinical and small-animal multi-modality instrumentation.

  14. Multi-Modal Curriculum Learning for Semi-Supervised Image Classification.

    PubMed

    Gong, Chen; Tao, Dacheng; Maybank, Stephen J; Liu, Wei; Kang, Guoliang; Yang, Jie

    2016-07-01

    Semi-supervised image classification aims to classify a large quantity of unlabeled images by typically harnessing scarce labeled images. Existing semi-supervised methods often suffer from inadequate classification accuracy when encountering difficult yet critical images, such as outliers, because they treat all unlabeled images equally and conduct classifications in an imperfectly ordered sequence. In this paper, we employ the curriculum learning methodology by investigating the difficulty of classifying every unlabeled image. The reliability and the discriminability of these unlabeled images are particularly investigated for evaluating their difficulty. As a result, an optimized image sequence is generated during the iterative propagations, and the unlabeled images are logically classified from simple to difficult. Furthermore, since images are usually characterized by multiple visual feature descriptors, we associate each kind of features with a teacher, and design a multi-modal curriculum learning (MMCL) strategy to integrate the information from different feature modalities. In each propagation, each teacher analyzes the difficulties of the currently unlabeled images from its own modality viewpoint. A consensus is subsequently reached among all the teachers, determining the currently simplest images (i.e., a curriculum), which are to be reliably classified by the multi-modal learner. This well-organized propagation process leveraging multiple teachers and one learner enables our MMCL to outperform five state-of-the-art methods on eight popular image data sets.
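    A hedged sketch of the curriculum idea follows: each feature modality acts as a "teacher" that scores how easy every unlabeled image is (classifier confidence is used here as a crude stand-in for the reliability/discriminability analysis in the paper), and a consensus across teachers selects the currently simplest images for the learner to classify first. The two-modality data are synthetic placeholders.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(8)
    n_lab, n_unlab = 40, 200
    X_mod = [rng.normal(size=(n_lab + n_unlab, 5)) for _ in range(2)]  # two modalities
    y = (X_mod[0][:, 0] + X_mod[1][:, 0] > 0).astype(int)
    labeled = np.arange(n_lab)
    unlabeled = np.arange(n_lab, n_lab + n_unlab)

    # Each modality "teacher" rates how easy each unlabeled image is,
    # via the confidence of a classifier trained on the labeled pool.
    ease = []
    for X in X_mod:
        clf = LogisticRegression().fit(X[labeled], y[labeled])
        ease.append(clf.predict_proba(X[unlabeled]).max(axis=1))

    # Consensus curriculum: images every teacher finds easy are classified first.
    consensus = np.minimum.reduce(ease)
    curriculum = unlabeled[np.argsort(-consensus)]
    print("first batch to classify:", curriculum[:10])
    ```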

  15. Multimodality Molecular Imaging of Cardiac Cell Transplantation: Part I. Reporter Gene Design, Characterization, and Optical in Vivo Imaging of Bone Marrow Stromal Cells after Myocardial Infarction

    PubMed Central

    Parashurama, Natesh; Ahn, Byeong-Cheol; Ziv, Keren; Ito, Ken; Paulmurugan, Ramasamy; Willmann, Jürgen K.; Chung, Jaehoon; Ikeno, Fumiaki; Swanson, Julia C.; Merk, Denis R.; Lyons, Jennifer K.; Yerushalmi, David; Teramoto, Tomohiko; Kosuge, Hisanori; Dao, Catherine N.; Ray, Pritha; Patel, Manishkumar; Chang, Ya-fang; Mahmoudi, Morteza; Cohen, Jeff Eric; Goldstone, Andrew Brooks; Habte, Frezghi; Bhaumik, Srabani; Yaghoubi, Shahriar; Robbins, Robert C.; Dash, Rajesh; Yang, Phillip C.; Brinton, Todd J.; Yock, Paul G.; McConnell, Michael V.

    2016-01-01

    Purpose To use multimodality reporter-gene imaging to assess the serial survival of marrow stromal cells (MSC) after therapy for myocardial infarction (MI) and to determine if the requisite preclinical imaging end point was met prior to a follow-up large-animal MSC imaging study. Materials and Methods Animal studies were approved by the Institutional Administrative Panel on Laboratory Animal Care. Mice (n = 19) that had experienced MI were injected with bone marrow–derived MSC that expressed a multimodality triple fusion (TF) reporter gene. The TF reporter gene (fluc2-egfp-sr39ttk) consisted of a human promoter, ubiquitin, driving firefly luciferase 2 (fluc2), enhanced green fluorescent protein (egfp), and the sr39tk positron emission tomography reporter gene. Serial bioluminescence imaging of MSC-TF and ex vivo luciferase assays were performed. Correlations were analyzed with the Pearson product-moment correlation, and serial imaging results were analyzed with a mixed-effects regression model. Results Analysis of the MSC-TF after cardiac cell therapy showed significantly lower signal on days 8 and 14 than on day 2 (P = .011 and P = .001, respectively). MSC-TF with MI demonstrated significantly higher signal than MSC-TF without MI at days 4, 8, and 14 (P = .016). Ex vivo luciferase activity assay confirmed the presence of MSC-TF on days 8 and 14 after MI. Conclusion Multimodality reporter-gene imaging was successfully used to assess serial MSC survival after therapy for MI, and it was determined that the requisite preclinical imaging end point, 14 days of MSC survival, was met prior to a follow-up large-animal MSC study. © RSNA, 2016 Online supplemental material is available for this article. PMID:27308957
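    The serial data above were analysed with a mixed-effects regression model; a minimal sketch of that type of analysis is shown below using statsmodels, with a fixed effect of imaging day and a random intercept per mouse. The mouse IDs, day grid, and signal values are synthetic placeholders, not the study's data.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    mice = np.repeat(np.arange(10), 4)                 # 10 mice imaged on 4 days
    day = np.tile([2, 4, 8, 14], 10)
    signal = 1e6 * np.exp(-0.1 * day) * rng.lognormal(0.0, 0.3, size=day.size)
    df = pd.DataFrame({"mouse": mice, "day": day, "log_signal": np.log(signal)})

    # Mixed-effects regression: fixed effect of day, random intercept per mouse.
    result = smf.mixedlm("log_signal ~ day", df, groups=df["mouse"]).fit()
    print(result.summary())
    ```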

  16. Drusen Characterization with Multimodal Imaging

    PubMed Central

    Spaide, Richard F.; Curcio, Christine A.

    2010-01-01

    Summary Multimodal imaging findings and histological demonstration of soft drusen, cuticular drusen, and subretinal drusenoid deposits provided information used to develop a model explaining their imaging characteristics. Purpose To characterize the known appearance of cuticular drusen, subretinal drusenoid deposits (reticular pseudodrusen), and soft drusen as revealed by multimodal fundus imaging; to create an explanatory model that accounts for these observations. Methods Reported color, fluorescein angiographic, autofluorescence, and spectral domain optical coherence tomography (SD-OCT) images of patients with cuticular drusen, soft drusen, and subretinal drusenoid deposits were reviewed, as were actual images from affected eyes. Representative histological sections were examined. The geometry, location, and imaging characteristics of these lesions were evaluated. A hypothesis based on the Beer-Lambert Law of light absorption was generated to fit these observations. Results Cuticular drusen appear as numerous uniform round yellow-white punctate accumulations under the retinal pigment epithelium (RPE). Soft drusen are larger yellow-white dome-shaped mounds of deposit under the RPE. Subretinal drusenoid deposits are polymorphous light-grey interconnected accumulations above the RPE. Based on the model, both cuticular and soft drusen appear yellow due to the removal of shorter wavelength light by a double pass through the RPE. Subretinal drusenoid deposits, which are located on the RPE, are not subjected to short wavelength attenuation and therefore are more prominent when viewed with blue light. The location and morphology of extracellular material in relationship to the RPE, and associated changes to RPE morphology and pigmentation, appeared to be primary determinants of druse appearance in different imaging modalities. Conclusion Although cuticular drusen, subretinal drusenoid deposits, and soft drusen are composed of common components, they are distinguishable by multimodal imaging due to differences in location, morphology, and optical filtering effects by drusenoid material and the RPE. PMID:20924263
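    The explanatory model rests on the Beer-Lambert law; written generically (the symbols below are generic, not the paper's notation), transmitted intensity falls exponentially with the attenuation coefficient and path length, and sub-RPE material is seen through a double pass of the RPE:

    ```latex
    I(\lambda) = I_{0}(\lambda)\, e^{-\mu(\lambda)\, d}
    \qquad\Longrightarrow\qquad
    I_{\text{sub-RPE}}(\lambda) \propto I_{0}(\lambda)\, e^{-2\,\mu_{\mathrm{RPE}}(\lambda)\, d_{\mathrm{RPE}}}
    ```

    Because the RPE attenuates short (blue) wavelengths more strongly, deposits beneath it lose their blue component and appear yellow, whereas subretinal drusenoid deposits sitting above the RPE do not, consistent with the observations summarised above.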

  17. Fully integrated reflection-mode photoacoustic, two-photon, and second harmonic generation microscopy in vivo

    NASA Astrophysics Data System (ADS)

    Song, Wei; Xu, Qiang; Zhang, Yang; Zhan, Yang; Zheng, Wei; Song, Liang

    2016-08-01

    The ability to obtain comprehensive structural and functional information from intact biological tissue in vivo is highly desirable for many important biomedical applications, including cancer and brain studies. Here, we developed a fully integrated multimodal microscopy that can provide photoacoustic (optical absorption), two-photon (fluorescence), and second harmonic generation (SHG) information from tissue in vivo, with intrinsically co-registered images. Moreover, using a delicately designed optical-acoustic coupling configuration, a high-frequency miniature ultrasonic transducer was integrated into a water-immersion optical objective, thus allowing all three imaging modalities to provide a high lateral resolution of ~290 nm with reflection-mode imaging capability, which is essential for studying intricate anatomy, such as that of the brain. Taking advantage of the complementary and comprehensive contrasts of the system, we demonstrated high-resolution imaging of various tissues in living mice, including microvasculature (by photoacoustics), epidermis cells, cortical neurons (by two-photon fluorescence), and extracellular collagen fibers (by SHG). The intrinsic image co-registration of the three modalities conveniently provided improved visualization and understanding of the tissue microarchitecture. The reported results suggest that, by revealing complementary tissue microstructures in vivo, this multimodal microscopy can potentially facilitate a broad range of biomedical studies, such as imaging of the tumor microenvironment and neurovascular coupling.

  18. Dual delivery of biological therapeutics for multimodal and synergistic cancer therapies.

    PubMed

    Jang, Bora; Kwon, Hyokyoung; Katila, Pramila; Lee, Seung Jin; Lee, Hyukjin

    2016-03-01

    Cancer causes >8.2 million deaths annually worldwide; thus, various cancer treatments have been investigated over the past decades. Among them, combination drug therapy has become extremely popular, and treatment with more than one drug is often necessary to achieve appropriate anticancer efficacy. With the development of nanoformulations and nanoparticulate-based drug delivery, researchers have explored the feasibility of dual delivery of biological therapeutics to overcome the current drawbacks of cancer therapy. Compared with the conventional single drug therapy, dual delivery of therapeutics has provided various synergistic effects in addition to offering multimodality to cancer treatment. In this review, we highlight and summarize three aspects of dual-delivery systems for cancer therapy. These include (1) overcoming drug resistance by the dual delivery of chemical drugs with biological therapeutics for synergistic therapy, (2) targeted and controlled drug release by the dual delivery of drugs with stimuli-responsive nanomaterials, and (3) multimodal theranostics by the dual delivery of drugs and molecular imaging probes. Furthermore, recent developments, perspectives, and new challenges regarding dual-delivery systems for cancer therapy are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Tunable X-ray speckle-based phase-contrast and dark-field imaging using the unified modulated pattern analysis approach

    NASA Astrophysics Data System (ADS)

    Zdora, M.-C.; Thibault, P.; Deyhle, H.; Vila-Comamala, J.; Rau, C.; Zanette, I.

    2018-05-01

    X-ray phase-contrast and dark-field imaging provides valuable, complementary information about the specimen under study. Among the multimodal X-ray imaging methods, X-ray grating interferometry and speckle-based imaging have drawn particular attention, which, however, in their common implementations incur certain limitations that can restrict their range of applications. Recently, the unified modulated pattern analysis (UMPA) approach was proposed to overcome these limitations and combine grating- and speckle-based imaging in a single approach. Here, we demonstrate the multimodal imaging capabilities of UMPA and highlight its tunable character regarding spatial resolution, signal sensitivity and scan time by using different reconstruction parameters.

  20. Multimodality imaging of ovarian cystic lesions: Review with an imaging based algorithmic approach

    PubMed Central

    Wasnik, Ashish P; Menias, Christine O; Platt, Joel F; Lalchandani, Usha R; Bedi, Deepak G; Elsayes, Khaled M

    2013-01-01

    Ovarian cystic masses include a spectrum of benign, borderline and high grade malignant neoplasms. Imaging plays a crucial role in characterization and pretreatment planning of incidentally detected or suspected adnexal masses, as diagnosis of ovarian malignancy at an early stage is correlated with a better prognosis. Knowledge of differential diagnosis, imaging features, management trends and an algorithmic approach of such lesions is important for optimal clinical management. This article illustrates a multi-modality approach in the diagnosis of a spectrum of ovarian cystic masses and also proposes an algorithmic approach for the diagnosis of these lesions. PMID:23671748

  1. Rapid multi-modality preregistration based on SIFT descriptor.

    PubMed

    Chen, Jian; Tian, Jie

    2006-01-01

    This paper describes the scale invariant feature transform (SIFT) method for rapid preregistration of medical images. The technique originates from Lowe's method, wherein preregistration is achieved by matching corresponding keypoints between two images. Applying the SIFT preregistration method before refined registration reduces the overall computational complexity of the registration. The features of SIFT are highly distinctive, invariant to image scaling and rotation, and partially invariant to changes in illumination and contrast, making them robust and repeatable for coarsely matching two images. We also altered the descriptor so that our method can handle multimodality preregistration.
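    A generic sketch of SIFT-based preregistration with OpenCV is given below (standard SIFT with ratio-test matching, not the altered descriptor described above); the estimated affine transform would serve as the starting point for the refined registration. File names are placeholders.

    ```python
    import cv2
    import numpy as np

    # Two images of the same anatomy from different acquisitions (placeholder paths).
    fixed = cv2.imread("fixed.png", cv2.IMREAD_GRAYSCALE)
    moving = cv2.imread("moving.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp_f, des_f = sift.detectAndCompute(fixed, None)
    kp_m, des_m = sift.detectAndCompute(moving, None)

    # Lowe's ratio test on brute-force descriptor matches.
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(des_m, des_f, k=2)
            if m.distance < 0.75 * n.distance]

    # Estimate an affine preregistration transform from the matched keypoints.
    src = np.float32([kp_m[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    affine, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    print(affine)
    ```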

  2. A systematic, multimodality approach to emergency elbow imaging.

    PubMed

    Singer, Adam D; Hanna, Tarek; Jose, Jean; Datir, Abhijit

    2016-01-01

    The elbow is a complex synovial hinge joint that is frequently involved in both athletic and nonathletic injuries. A thorough understanding of the normal anatomy and various injury patterns is essential when utilizing diagnostic imaging to identify damaged structures and to assist in surgical planning. In this review, the elbow anatomy will be scrutinized in a systematic approach. This will be followed by a comprehensive presentation of elbow injuries that are commonly seen in the emergency department accompanied by multimodality imaging findings. A short discussion regarding pitfalls in elbow imaging is also included. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Multimode-Optical-Fiber Imaging Probe

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah

    2000-01-01

    Currently, endoscopic surgery uses single-mode fiber bundles to obtain in vivo image information inside orifices of the body. This limits their use to the larger natural bodily orifices and to surgical procedures where there is plenty of room for manipulation. The knee joint, for example, can be easily viewed with a fiber-optic viewer, but joints in the finger cannot. However, there are a host of smaller orifices where fiber endoscopy would play an important role if a cost-effective fiber probe were developed with small enough dimensions (< 250 microns). Examples of beneficiaries of micro-endoscopes are the treatment of the Eustachian tube of the middle ear, the breast ducts, tear ducts, coronary arteries, fallopian tubes, as well as the treatment of salivary duct parotid disease, and the neuroendoscopy of the ventricles and spinal canal. To solve this problem, this work describes an approach for recovering images from tightly confined spaces using multimode fibers and analytically demonstrates that the concept is sound. The proof of concept draws upon earlier works that concentrated on image recovery after two-way transmission through a multimode fiber, as well as work that demonstrated the recovery of images after one-way transmission through a multimode fiber. Both relied on generating a phase-conjugated wavefront that was predistorted with the characteristics of the fiber. The described approach also relies on generating a phase-conjugated wavefront, but utilizes two fibers to capture the image at some intermediate point (accessible by the fibers, but otherwise visually inaccessible).

  4. Multimode-Optical-Fiber Imaging Probe

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah

    1999-01-01

    Currently, endoscopic surgery uses single-mode fiber bundles to obtain in vivo image information inside the orifices of the body. This limits their use to the larger natural orifices and to surgical procedures where there is plenty of room for manipulation. The knee joint, for example, can be easily viewed with a fiber-optic viewer, but joints in the finger cannot. However, there are a host of smaller orifices where fiber endoscopy would play an important role if a cost-effective fiber probe were developed with small enough dimensions (less than or equal to 250 microns). Examples of beneficiaries of micro-endoscopes are the treatment of the Eustachian tube of the middle ear, the breast ducts, tear ducts, coronary arteries, fallopian tubes, as well as the treatment of salivary duct parotid disease, and the neuroendoscopy of the ventricles and spinal canal. This work describes an approach for recovering images from tightly confined spaces using multimode fibers. The concept draws upon earlier works that concentrated on image recovery after two-way transmission through a multimode fiber, as well as work that demonstrated the recovery of images after one-way transmission through a multimode fiber. Both relied on generating a phase-conjugated wavefront that was predistorted with the characteristics of the fiber. The approach described here also relies on generating a phase-conjugated wavefront, but utilizes two fibers to capture the image at some intermediate point (accessible by the fibers, but otherwise visually inaccessible).

  5. Patient-tailored multimodal neuroimaging, visualization and quantification of human intra-cerebral hemorrhage

    NASA Astrophysics Data System (ADS)

    Goh, Sheng-Yang M.; Irimia, Andrei; Vespa, Paul M.; Van Horn, John D.

    2016-03-01

    In traumatic brain injury (TBI) and intracerebral hemorrhage (ICH), the heterogeneity of lesion sizes and types necessitates a variety of imaging modalities to acquire a comprehensive perspective on injury extent. Although it is advantageous to combine imaging modalities and to leverage their complementary benefits, there are difficulties in integrating information across imaging types. Thus, it is important that efforts be dedicated to the creation and sustained refinement of resources for multimodal data integration. Here, we propose a novel approach to the integration of neuroimaging data acquired from human patients with TBI/ICH using various modalities; we also demonstrate the integrated use of multimodal magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI) data for TBI analysis based on both visual observations and quantitative metrics. 3D models of healthy-appearing tissues and TBI-related pathology are generated, both of which are derived from multimodal imaging data. MRI volumes acquired using FLAIR, SWI, and T2 GRE are used to segment pathology. Healthy tissues are segmented using user-supervised tools, and results are visualized using a novel graphical approach called a `connectogram', where brain connectivity information is depicted within a circle of radially aligned elements. Inter-region connectivity and its strength are represented by links of variable opacities drawn between regions, where opacity reflects the percentage longitudinal change in brain connectivity density. Our method for integrating, analyzing and visualizing structural brain changes due to TBI and ICH can promote knowledge extraction and enhance the understanding of mechanisms underlying recovery.

  6. MMX-I: A data-processing software for multi-modal X-ray imaging and tomography

    NASA Astrophysics Data System (ADS)

    Bergamaschi, A.; Medjoubi, K.; Messaoudi, C.; Marco, S.; Somogyi, A.

    2017-06-01

    Scanning hard X-ray imaging allows the simultaneous acquisition of multimodal information, including X-ray fluorescence, absorption, phase and dark-field contrasts, providing structural and chemical details of the samples. Combining these scanning techniques with the infrastructure developed for fast data acquisition at Synchrotron Soleil makes it possible to perform multimodal imaging and tomography during routine user experiments at the Nanoscopium beamline. A main challenge of such imaging techniques is the online processing and analysis of the very large (several hundred gigabytes) multimodal datasets they generate. This is especially important for the wide user community foreseen at the user-oriented Nanoscopium beamline (e.g. from the fields of biology, life sciences, geology and geobiology), much of which has no experience in such data handling. MMX-I is a new multi-platform open-source freeware for the processing and reconstruction of scanning multi-technique X-ray imaging and tomographic datasets. The MMX-I project aims to offer both expert users and beginners the possibility of processing and analysing raw data, either on-site or off-site. Therefore, we have developed a multi-platform (Mac, Windows and Linux 64-bit) data processing tool that is easy to install, comprehensive, intuitive, extendable and user-friendly. MMX-I is now routinely used by the Nanoscopium user community and has demonstrated its performance in treating big data.

  7. Molecular Imaging for Breast Cancer Using Magnetic Resonance-Guided Positron Emission Mammography and Excitation-Resolved Near-Infrared Fluorescence Imaging

    NASA Astrophysics Data System (ADS)

    Cho, Jaedu

    The aim of this work is to develop novel breast-specific molecular imaging techniques for the management of breast cancer. In this dissertation, we describe two novel molecular imaging approaches for breast cancer management. In Part I, we introduce our multimodal molecular imaging approach for breast cancer therapy monitoring using magnetic resonance imaging and positron emission mammography (MR/PEM). We have focused on therapy monitoring for aggressive cancer molecular subtypes, which is challenging due to time constraints. Breast cancer therapy planning relies on fast and accurate monitoring of functional and anatomical change. We demonstrate a proof of concept of sequential dual-modal magnetic resonance and positron emission mammography (MR/PEM) for cancer therapy monitoring. We have developed dedicated breast coils with a breast compression mechanism equipped with MR-compatible PEM detector heads. We have designed a fiducial marker that allows straightforward registration of images obtained from MRI and PEM, and we propose an optimal multimodal imaging procedure for MR/PEM. In Part II, we have focused on the development of a novel intraoperative near-infrared fluorescence (NIRF) imaging system for image-guided breast cancer surgery. Conventional spectrally resolved NIRF systems are unable to resolve various NIR fluorescence dyes for the following reasons. First, the fluorescence spectra of viable NIR fluorescence dyes overlap heavily. Second, conventional emission-resolved NIRF suffers from a trade-off between fluence rate and spectral resolution. Third, multiple scattering in tissue degrades not only the spatial information but also the spectral content through red-shifting. We develop a wavelength-swept laser-based NIRF system that can resolve the excitation shift of various NIR fluorescence dyes without substantial loss of fluence rate. A linear ratiometric model is employed to measure the relative shift of the excitation spectrum of a fluorescence dye.
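
    A minimal sketch of the linear ratiometric idea mentioned above, under the assumption that the ratio of fluorescence intensities measured at two excitation wavelengths varies linearly with the excitation-spectrum shift over the range of interest; the calibration data and variable names are hypothetical, not taken from the dissertation.

    import numpy as np

    def calibrate_ratiometric(shift_nm, i_lam1, i_lam2):
        """Fit ratio = a * shift + b from calibration measurements (1D arrays)."""
        a, b = np.polyfit(shift_nm, i_lam1 / i_lam2, 1)
        return a, b

    def estimate_shift(i_lam1, i_lam2, a, b):
        """Invert the linear model to recover the relative excitation shift."""
        return (i_lam1 / i_lam2 - b) / a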

  8. Characterizing virus-induced gene silencing at the cellular level with in situ multimodal imaging

    DOE PAGES

    Burkhow, Sadie J.; Stephens, Nicole M.; Mei, Yu; ...

    2018-05-25

    Reverse genetic strategies, such as virus-induced gene silencing, are powerful techniques to study gene function. Currently, there are few tools to study the spatial dependence of the consequences of gene silencing at the cellular level. Here, we report the use of multimodal Raman and mass spectrometry imaging to study the cellular-level biochemical changes that occur from silencing the phytoene desaturase (pds) gene using a Foxtail mosaic virus (FoMV) vector in maize leaves. The multimodal imaging method allows the localized carotenoid distribution to be measured and reveals differences lost in the spatial average when analyzing a carotenoid extraction of the whole leaf. The Raman and mass spectrometry signals are complementary in nature: silencing pds reduces the downstream carotenoid Raman signal and increases the phytoene mass spectrometry signal.

  9. Characterizing virus-induced gene silencing at the cellular level with in situ multimodal imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burkhow, Sadie J.; Stephens, Nicole M.; Mei, Yu

    Reverse genetic strategies, such as virus-induced gene silencing, are powerful techniques to study gene function. Currently, there are few tools to study the spatial dependence of the consequences of gene silencing at the cellular level. Here, we report the use of multimodal Raman and mass spectrometry imaging to study the cellular-level biochemical changes that occur from silencing the phytoene desaturase (pds) gene using a Foxtail mosaic virus (FoMV) vector in maize leaves. The multimodal imaging method allows the localized carotenoid distribution to be measured and reveals differences lost in the spatial average when analyzing a carotenoid extraction of the whole leaf. The Raman and mass spectrometry signals are complementary in nature: silencing pds reduces the downstream carotenoid Raman signal and increases the phytoene mass spectrometry signal.

  10. Overview of chemical imaging methods to address biological questions.

    PubMed

    da Cunha, Marcel Menezes Lyra; Trepout, Sylvain; Messaoudi, Cédric; Wu, Ting-Di; Ortega, Richard; Guerquin-Kern, Jean-Luc; Marco, Sergio

    2016-05-01

    Chemical imaging offers extensive possibilities for a better understanding of biological systems by allowing the identification of chemical components at the tissue, cellular, and subcellular levels. In this review, we introduce modern methods for chemical imaging that can be applied to biological samples. This work is mainly addressed to the biological sciences community and includes the basics of the different technologies, examples of their application, as well as an introduction to approaches for combining multimodal data. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  11. SU-E-J-97: Evaluation of Multi-Modality (CT/MR/PET) Image Registration Accuracy in Radiotherapy Planning.

    PubMed

    Sethi, A; Rusu, I; Surucu, M; Halama, J

    2012-06-01

    To evaluate the accuracy of multi-modality image registration in the radiotherapy planning process. A water-filled anthropomorphic head phantom containing eight 'donut-shaped' fiducial markers (3 internal + 5 external) was selected for this study. Seven image sets (3 CTs, 3 MRs and PET) of the phantom were acquired and fused in a commercial treatment planning system. First, a narrow-slice (0.75 mm) baseline CT scan was acquired (CT1). Subsequently, the phantom was re-scanned with a coarser slice width of 1.5 mm (CT2) and after subjecting the phantom to rotation/displacement (CT3). Next, the phantom was scanned in a 1.5 Tesla MR scanner and three MR image sets (axial T1, axial T2, coronal T1) were acquired at 2 mm slice width. Finally, the phantom and the centers of the fiducials were doped with 18F and a PET scan was performed with 2 mm cubic voxels. All image sets (CT/MR/PET) were fused to the baseline (CT1) data using an automated mutual-information-based fusion algorithm. The difference between centroids of fiducial markers in the various image modalities was used to assess image registration accuracy. CT/CT image registration was superior to CT/MR and CT/PET: the average CT/CT fusion error was 0.64 ± 0.14 mm, while the corresponding values for CT/MR and CT/PET fusion were 1.33 ± 0.71 mm and 1.11 ± 0.37 mm. Internal markers near the center of the phantom fused better than external markers placed on the phantom surface; this was particularly true for CT/MR and CT/PET. The inferior quality of external marker fusion indicates possible distortion effects toward the edges of the MR image. Peripheral targets in the PET scan may be subject to parallax error caused by the depth of interaction of photons in the detectors. The current widespread use of multimodality imaging in radiotherapy planning calls for periodic quality assurance of the image registration process. Such studies may help improve safety and accuracy in treatment planning. © 2012 American Association of Physicists in Medicine.
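
    The evaluation described above reduces to comparing corresponding fiducial centroids after fusion; a minimal sketch, with array names and shapes assumed for illustration, is:

    import numpy as np

    def fusion_error(centroids_ref, centroids_fused):
        """centroids_*: (n_markers, 3) arrays of marker centroid coordinates in mm."""
        d = np.linalg.norm(centroids_ref - centroids_fused, axis=1)
        return d.mean(), d.std()

    # e.g. mean_err, std_err = fusion_error(ct1_centroids, fused_mr_centroids)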

  12. Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets.

    PubMed

    Scharfe, Michael; Pielot, Rainer; Schreiber, Falk

    2010-01-11

    Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular the registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. We evaluate the CBE-driven PlayStation 3 as a high-performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch-reducing and loop-unrolling techniques, with special attention to 32-bit multiplies and the limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. The results demonstrate that the CBE processor in a PlayStation 3 accelerates computationally intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3, as a low-cost CBE-based platform, offers an efficient alternative to conventional hardware for solving computational problems in image processing and bioinformatics.

  13. Multi-modal anatomical optical coherence tomography and CT for in vivo dynamic upper airway imaging

    NASA Astrophysics Data System (ADS)

    Balakrishnan, Santosh; Bu, Ruofei; Price, Hillel; Zdanski, Carlton; Oldenburg, Amy L.

    2017-02-01

    We describe a novel, multi-modal imaging protocol for validating quantitative dynamic airway imaging performed using anatomical Optical Coherence Tomography (aOCT). The aOCT system consists of a catheter-based aOCT probe that is deployed via a bronchoscope, while a programmable ventilator is used to control airway pressure. This setup is employed on the bed of a Siemens Biograph CT system capable of performing respiratory-gated acquisitions. In this arrangement, the position of the aOCT catheter may be visualized with CT to aid in co-registration. Utilizing this setup, we investigate multiple respiratory pressure parameters with aOCT and respiratory-gated CT, on both ex vivo porcine trachea and live, anesthetized pigs. This acquisition protocol has enabled real-time measurement of airway deformation with simultaneous measurement of pressure under physiologically relevant static and dynamic conditions: inspiratory peak or peak positive airway pressures of 10-40 cm H2O, and 20-30 breaths per minute for dynamic studies. We subsequently compare the airway cross-sectional areas (CSA) obtained from aOCT and CT, including the change in CSA at different stages of the breathing cycle for dynamic studies, and the CSA at different peak positive airway pressures for static studies. This approach has allowed us to improve our acquisition methodology and to validate aOCT measurements of the dynamic airway for the first time. We believe that this protocol will prove invaluable for aOCT system development and greatly facilitate translation of OCT systems for airway imaging into the clinical setting.

  14. Simultaneous in vivo imaging of melanin and lipofuscin in the retina with photoacoustic ophthalmoscopy and autofluorescence imaging.

    PubMed

    Zhang, Xiangyang; Zhang, Hao F; Puliafito, Carmen A; Jiao, Shuliang

    2011-08-01

    We combined photoacoustic ophthalmoscopy (PAOM) with autofluorescence imaging for simultaneous in vivo imaging of dual molecular contrasts in the retina using a single light source. The dual molecular contrasts come from melanin and lipofuscin in the retinal pigment epithelium (RPE). Melanin and lipofuscin are two types of pigments that are believed to play opposite roles (protective versus exacerbating) in the RPE during the aging process. We have successfully imaged the retinas of pigmented and albino rats at different ages. The experimental results showed that the multimodal PAOM system can be a potentially powerful tool in the study of age-related degenerative retinal diseases.

  15. Simultaneous in vivo imaging of melanin and lipofuscin in the retina with photoacoustic ophthalmoscopy and autofluorescence imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangyang; Zhang, Hao F.; Puliafito, Carmen A.; Jiao, Shuliang

    2011-08-01

    We combined photoacoustic ophthalmoscopy (PAOM) with autofluorescence imaging for simultaneous in vivo imaging of dual molecular contrasts in the retina using a single light source. The dual molecular contrasts come from melanin and lipofuscin in the retinal pigment epithelium (RPE). Melanin and lipofuscin are two types of pigments that are believed to play opposite roles (protective versus exacerbating) in the RPE during the aging process. We have successfully imaged the retinas of pigmented and albino rats at different ages. The experimental results showed that the multimodal PAOM system can be a potentially powerful tool in the study of age-related degenerative retinal diseases.

  16. MindSeer: a portable and extensible tool for visualization of structural and functional neuroimaging data

    PubMed Central

    Moore, Eider B; Poliakov, Andrew V; Lincoln, Peter; Brinkley, James F

    2007-01-01

    Background: Three-dimensional (3-D) visualization of multimodality neuroimaging data provides a powerful technique for viewing the relationship between structure and function. A number of applications are available that include some aspect of 3-D visualization, including both free and commercial products. These applications range from highly specific programs for a single modality to general-purpose toolkits that include many image processing functions in addition to visualization. However, few if any of these combine both stand-alone and remote multi-modality visualization in an open source, portable and extensible tool that is easy to install and use, yet can be included as a component of a larger information system. Results: We have developed a new open source multimodality 3-D visualization application, called MindSeer, that has these features: integrated and interactive 3-D volume and surface visualization, Java and Java3D for true cross-platform portability, one-click installation and startup, integrated data management to help organize large studies, extensibility through plugins, transparent remote visualization, and the ability to be integrated into larger information management systems. We describe the design and implementation of the system, as well as several case studies that demonstrate its utility. These case studies are available as tutorials or demos on the associated website: http://sig.biostr.washington.edu/projects/MindSeer. Conclusion: MindSeer provides a powerful visualization tool for multimodality neuroimaging data. Its architecture and unique features also allow it to be extended into other visualization domains within biomedicine. PMID:17937818

  17. MindSeer: a portable and extensible tool for visualization of structural and functional neuroimaging data.

    PubMed

    Moore, Eider B; Poliakov, Andrew V; Lincoln, Peter; Brinkley, James F

    2007-10-15

    Three-dimensional (3-D) visualization of multimodality neuroimaging data provides a powerful technique for viewing the relationship between structure and function. A number of applications are available that include some aspect of 3-D visualization, including both free and commercial products. These applications range from highly specific programs for a single modality, to general purpose toolkits that include many image processing functions in addition to visualization. However, few if any of these combine both stand-alone and remote multi-modality visualization in an open source, portable and extensible tool that is easy to install and use, yet can be included as a component of a larger information system. We have developed a new open source multimodality 3-D visualization application, called MindSeer, that has these features: integrated and interactive 3-D volume and surface visualization, Java and Java3D for true cross-platform portability, one-click installation and startup, integrated data management to help organize large studies, extensibility through plugins, transparent remote visualization, and the ability to be integrated into larger information management systems. We describe the design and implementation of the system, as well as several case studies that demonstrate its utility. These case studies are available as tutorials or demos on the associated website: http://sig.biostr.washington.edu/projects/MindSeer. MindSeer provides a powerful visualization tool for multimodality neuroimaging data. Its architecture and unique features also allow it to be extended into other visualization domains within biomedicine.

  18. High frame-rate MR-guided near-infrared tomography system to monitor breast hemodynamics

    NASA Astrophysics Data System (ADS)

    Li, Zhiqiu; Jiang, Shudong; Krishnaswamy, Venkataramanan; Davis, Scott C.; Srinivasan, Subhadra; Paulsen, Keith D.; Pogue, Brian W.

    2011-02-01

    A near-infrared (NIR) tomography system with spectrally encoded sources at two wavelength bands was built to quantify temporal contrast at 20 Hz bandwidth while imaging breast tissue. The NIR system was integrated with a magnetic resonance (MR) scanner through a custom breast coil interface, and NIR data and MR images were acquired simultaneously. The MR images provided breast tissue structural information for the NIR reconstruction. Acquisition of a finger pulse oximeter (PO) plethysmogram was synchronized with the NIR system during the experiment to provide a frequency-locked reference. The recovered absorption coefficients of the breast at the two wavelengths showed the same temporal frequency as the PO output, demonstrating that this multi-modality design can recover the small pulsatile variation of the absorption properties in breast tissue related to the heartbeat. It also demonstrated the system's capability for novel contrast imaging of fast flow signals in deep tissue.
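
    One way to check the frequency lock described above is to compare the dominant frequency of the recovered absorption time series with that of the PO plethysmogram; the sketch below assumes both are sampled at the 20 Hz rate quoted in the abstract, and the signal names are hypothetical.

    import numpy as np

    def dominant_frequency(signal, fs=20.0):
        """Return the dominant (non-DC) frequency of a real-valued time series."""
        spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        return freqs[np.argmax(spectrum)]

    # e.g. dominant_frequency(mu_a_timeseries) should match dominant_frequency(po_signal)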

  19. Multimodal brain-tumor segmentation based on Dirichlet process mixture model with anisotropic diffusion and Markov random field prior.

    PubMed

    Lu, Yisu; Jiang, Jun; Yang, Wei; Feng, Qianjin; Chen, Wufan

    2014-01-01

    Brain-tumor segmentation is an important clinical requirement for brain-tumor diagnosis and radiotherapy planning. It is well known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without initializing the number of clusters. Because the classical MDP segmentation cannot be applied for real-time diagnosis, a new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smoothness constraint is proposed in this study. Besides segmenting single-modal brain-tumor images, we extended the algorithm to segment multimodal brain-tumor images using magnetic resonance (MR) multimodal features and to obtain the active tumor and edema at the same time. The proposed algorithm is evaluated using 32 multimodal MR glioma image sequences, and the segmentation results are compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance and great potential for practical real-time clinical use.
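
    For illustration only, the nonparametric clustering step can be approximated with scikit-learn's truncated Dirichlet-process Gaussian mixture over voxel-wise multimodal intensities; the anisotropic diffusion and MRF smoothing of the paper are not reproduced here, and the variable names are assumptions.

    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    def dp_segment(volumes, max_components=10):
        """volumes: list of co-registered 3D arrays (e.g. T1, T2, FLAIR); returns a label volume."""
        # Voxel-wise feature vectors; in practice one would mask the brain and subsample first.
        features = np.stack([v.ravel() for v in volumes], axis=1)
        dpgmm = BayesianGaussianMixture(
            n_components=max_components,  # truncation level, not a fixed cluster count
            weight_concentration_prior_type="dirichlet_process",
            covariance_type="full",
            max_iter=200,
        )
        labels = dpgmm.fit_predict(features)
        return labels.reshape(volumes[0].shape)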

  20. Multimodal Brain-Tumor Segmentation Based on Dirichlet Process Mixture Model with Anisotropic Diffusion and Markov Random Field Prior

    PubMed Central

    Lu, Yisu; Jiang, Jun; Chen, Wufan

    2014-01-01

    Brain-tumor segmentation is an important clinical requirement for brain-tumor diagnosis and radiotherapy planning. It is well known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without initializing the number of clusters. Because the classical MDP segmentation cannot be applied for real-time diagnosis, a new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smoothness constraint is proposed in this study. Besides segmenting single-modal brain-tumor images, we extended the algorithm to segment multimodal brain-tumor images using magnetic resonance (MR) multimodal features and to obtain the active tumor and edema at the same time. The proposed algorithm is evaluated using 32 multimodal MR glioma image sequences, and the segmentation results are compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance and great potential for practical real-time clinical use. PMID:25254064

  1. Intraoperative high-field magnetic resonance imaging, multimodal neuronavigation, and intraoperative electrophysiological monitoring-guided surgery for treating supratentorial cavernomas.

    PubMed

    Li, Fang-Ye; Chen, Xiao-Lei; Xu, Bai-Nan

    2016-09-01

    To determine the beneficial effects of intraoperative high-field magnetic resonance imaging (MRI), multimodal neuronavigation, and intraoperative electrophysiological monitoring-guided surgery for treating supratentorial cavernomas. Twelve patients with 13 supratentorial cavernomas were prospectively enrolled and operated on using 1.5 T intraoperative MRI, multimodal neuronavigation, and intraoperative electrophysiological monitoring. All cavernomas were deeply located in subcortical areas or involved critical areas. Intraoperative high-field MRIs were obtained for intraoperative "visualization" of surrounding eloquent structures, "brain shift" corrections, and navigational plan updates. All cavernomas were successfully resected with guidance from intraoperative MRI, multimodal neuronavigation, and intraoperative electrophysiological monitoring. In 5 cases with supratentorial cavernomas, intraoperative "brain shift" severely hindered localization of the lesions; however, intraoperative MRI facilitated precise localization. During long-term (>3 months) follow-up, some or all presenting signs and symptoms improved or resolved in 4 cases, but were unchanged in 7 patients. Intraoperative high-field MRI, multimodal neuronavigation, and intraoperative electrophysiological monitoring are helpful in surgeries for the treatment of small, deeply seated subcortical cavernomas.

  2. Real-time, label-free, intraoperative visualization of peripheral nerves and micro-vasculatures using multimodal optical imaging techniques

    PubMed Central

    Cha, Jaepyeong; Broch, Aline; Mudge, Scott; Kim, Kihoon; Namgoong, Jung-Man; Oh, Eugene; Kim, Peter

    2018-01-01

    Accurate, real-time identification and display of critical anatomic structures, such as nerves and vasculature, are critical for reducing complications and improving surgical outcomes. Human vision is frequently limited in clearly distinguishing and contrasting these structures. We present a novel imaging system that enables noninvasive visualization of critical anatomic structures during surgical dissection. Peripheral nerves are visualized by snapshot polarimetry, which calculates anisotropic optical properties. Vascular structures, both venous and arterial, are identified and monitored in real time using near-infrared laser-speckle-contrast imaging. We evaluate the system by performing in vivo animal studies with qualitative comparison to contrast-agent-aided fluorescence imaging. PMID:29541506

  3. Machine learning approaches for integrating clinical and imaging features in late-life depression classification and response prediction.

    PubMed

    Patel, Meenal J; Andreescu, Carmen; Price, Julie C; Edelman, Kathryn L; Reynolds, Charles F; Aizenstein, Howard J

    2015-10-01

    Currently, depression diagnosis relies primarily on behavioral symptoms and signs, and treatment is guided by trial and error instead of by evaluating the associated underlying brain characteristics. Unlike past studies, we attempted to estimate accurate prediction models for late-life depression diagnosis and treatment response using multiple machine learning methods, with inputs of multi-modal imaging and non-imaging whole-brain and network-based features. Late-life depression patients (medicated post-recruitment) (n = 33) and older non-depressed individuals (n = 35) were recruited. Their demographics and cognitive ability scores were recorded, and brain characteristics were acquired using multi-modal magnetic resonance imaging pretreatment. Linear and nonlinear learning methods were tested for estimating accurate prediction models. A learning method called alternating decision trees estimated the most accurate prediction models for late-life depression diagnosis (87.27% accuracy) and treatment response (89.47% accuracy). The diagnosis model included measures of age, Mini-Mental State Examination score, and structural imaging (e.g. whole-brain atrophy and global white matter hyperintensity burden). The treatment response model included measures of structural and functional connectivity. Combinations of multi-modal imaging and/or non-imaging measures may help better predict late-life depression diagnosis and treatment response. As a preliminary observation, we speculate that the results may also suggest that different underlying brain characteristics defined by multi-modal imaging measures, rather than region-based differences, are associated with depression versus depression recovery; to our knowledge, this is the first depression study to accurately predict both using the same approach. These findings may help better understand late-life depression and identify preliminary steps toward personalized late-life depression treatment. Copyright © 2015 John Wiley & Sons, Ltd.
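
    The workflow of combining imaging and non-imaging features with cross-validated accuracy can be sketched as below; note this uses a gradient-boosted tree classifier from scikit-learn purely as a stand-in, since the alternating decision trees used in the study come from other toolkits (e.g. Weka), and the feature matrix and labels are hypothetical.

    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    # X: rows = subjects; columns = age, MMSE score, whole-brain atrophy,
    #    white-matter hyperintensity burden, connectivity measures, ...
    # y: 1 = late-life depression, 0 = non-depressed control
    def evaluate_model(X, y):
        clf = GradientBoostingClassifier(n_estimators=200, max_depth=2)
        scores = cross_val_score(clf, X, y, cv=10)  # cross-validated accuracy
        return scores.mean(), scores.std()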

  4. Full optical model of micro-endoscope with optical coherence microscopy, multiphoton microscopy and visible capabilities

    NASA Astrophysics Data System (ADS)

    Vega, David; Kiekens, Kelli C.; Syson, Nikolas C.; Romano, Gabriella; Baker, Tressa; Barton, Jennifer K.

    2018-02-01

    While Optical Coherence Microscopy (OCM), Multiphoton Microscopy (MPM), and narrowband imaging are powerful imaging techniques that can be used to detect cancer, each imaging technique has limitations when used by itself. Combining them into an endoscope that works in synergy can help achieve high sensitivity and specificity for diagnosis at the point of care. Such complex endoscopes have an elevated risk of failure, and performing proper modelling ensures functionality and minimizes risk. We present full 2D and 3D models of a multimodality optical micro-endoscope for real-time detection of carcinomas, called a salpingoscope. The models evaluate the illumination and light-collection capabilities of the various modalities. The design features two optical paths with different numerical apertures (NA) through a single lens system with a scanning optical fiber. The dual path is achieved using dichroic coatings embedded in a triplet. A high-NA optical path is designed to perform OCM and MPM, while a low-NA optical path is designed for the visible spectrum to navigate the endoscope to areas of interest and for narrowband imaging. Different tests, such as evaluating the reflectance profile of homogeneous epithelial tissue, were performed to adjust the models. Light-collection models for the different modalities were created and tested for efficiency. While it is challenging to evaluate the efficiency of multimodality endoscopes, the models ensure that the system is designed for the expected light-collection levels and provides a detectable signal for the intended imaging.

  5. A spline-based non-linear diffeomorphism for multimodal prostate registration.

    PubMed

    Mitra, Jhimli; Kato, Zoltan; Martí, Robert; Oliver, Arnau; Lladó, Xavier; Sidibé, Désiré; Ghose, Soumya; Vilanova, Joan C; Comet, Josep; Meriaudeau, Fabrice

    2012-08-01

    This paper presents a novel method for non-rigid registration of transrectal ultrasound and magnetic resonance prostate images based on a non-linear regularized framework of point correspondences obtained from a statistical measure of shape contexts. The segmented prostate shapes are represented by shape contexts, and the Bhattacharyya distance between the shape representations is used to find the point correspondences between the 2D fixed and moving images. The registration method involves parametric estimation of the non-linear diffeomorphism between the multimodal images and has its basis in solving a set of non-linear equations of thin-plate splines. The solution is obtained as the least-squares solution of an over-determined system of non-linear equations constructed by integrating a set of non-linear functions over the fixed and moving images. However, this may not result in clinically acceptable transformations of the anatomical targets. Therefore, the regularized bending energy of the thin-plate splines, along with the localization error of the established correspondences, should be included in the system of equations. The registration accuracy of the proposed method is evaluated on 20 pairs of prostate mid-gland ultrasound and magnetic resonance images. The results show an average Dice similarity coefficient of 0.980±0.004, an average 95% Hausdorff distance of 1.63±0.48 mm, and mean target registration and target localization errors of 1.60±1.17 mm and 0.15±0.12 mm, respectively. Copyright © 2012 Elsevier B.V. All rights reserved.
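
    One ingredient of the method, matching shape-context histograms via the Bhattacharyya distance, can be sketched as follows; a greedy nearest-descriptor matcher is used here for brevity, and the sqrt(1 - BC) form of the distance is one common convention, so neither detail is claimed to match the paper exactly.

    import numpy as np

    def bhattacharyya(p, q):
        p = p / p.sum()
        q = q / q.sum()
        bc = np.sum(np.sqrt(p * q))  # Bhattacharyya coefficient
        return np.sqrt(max(0.0, 1.0 - bc))

    def match_points(contexts_fixed, contexts_moving):
        """Greedy correspondence: index of the nearest fixed descriptor for each moving one."""
        cost = np.array([[bhattacharyya(m, f) for f in contexts_fixed]
                         for m in contexts_moving])
        return cost.argmin(axis=1)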

  6. A Multimodal Approach to Counselor Supervision.

    ERIC Educational Resources Information Center

    Ponterotto, Joseph G.; Zander, Toni A.

    1984-01-01

    Represents an initial effort to apply Lazarus's multimodal approach to a model of counselor supervision. Includes continuously monitoring the trainee's behavior, affect, sensations, images, cognitions, interpersonal functioning, and when appropriate, biological functioning (diet and drugs) in the supervisory process. (LLL)

  7. Multimodal Nonlinear Optical Imaging for Sensitive Detection of Multiple Pharmaceutical Solid-State Forms and Surface Transformations.

    PubMed

    Novakovic, Dunja; Saarinen, Jukka; Rojalin, Tatu; Antikainen, Osmo; Fraser-Miller, Sara J; Laaksonen, Timo; Peltonen, Leena; Isomäki, Antti; Strachan, Clare J

    2017-11-07

    Two nonlinear imaging modalities, coherent anti-Stokes Raman scattering (CARS) and sum-frequency generation (SFG), were successfully combined for sensitive multimodal imaging of multiple solid-state forms and their changes on drug tablet surfaces. Two imaging approaches were used and compared: (i) hyperspectral CARS combined with principal component analysis (PCA) and SFG imaging, and (ii) simultaneous narrowband CARS and SFG imaging. Three different solid-state forms of indomethacin (the crystalline gamma and alpha forms, as well as the amorphous form) were clearly distinguished using both approaches. Simultaneous narrowband CARS and SFG imaging was faster, but hyperspectral CARS and SFG imaging has the potential to be applied to a wider variety of more complex samples. These methodologies were further used to follow the crystallization of indomethacin on tablet surfaces under two storage conditions: 30 °C/23% RH and 30 °C/75% RH. Imaging with (sub)micron resolution showed that the approach allowed detection of very early stage surface crystallization. The surfaces progressively crystallized to predominantly (but not exclusively) the gamma form at lower humidity and the alpha form at higher humidity. Overall, this study suggests that multimodal nonlinear imaging is a highly sensitive, solid-state (and chemically) specific, rapid, and versatile imaging technique for understanding and hence controlling (surface) solid-state forms and their complex changes in pharmaceuticals.

  8. A Multi-modal, Discriminative and Spatially Invariant CNN for RGB-D Object Labeling.

    PubMed

    Asif, Umar; Bennamoun, Mohammed; Sohel, Ferdous

    2017-08-30

    While deep convolutional neural networks have shown remarkable success in image classification, the problems of inter-class similarities, intra-class variances, the effective combination of multimodal data, and the spatial variability in images of objects remain major challenges. To address these problems, this paper proposes a novel framework to learn a discriminative and spatially invariant classification model for object and indoor scene recognition using multimodal RGB-D imagery. This is achieved through three postulates: 1) spatial invariance - achieved by combining a spatial transformer network with a deep convolutional neural network to learn features that are invariant to spatial translations, rotations, and scale changes; 2) high discriminative capability - achieved by introducing Fisher encoding within the CNN architecture to learn features that have small inter-class similarities and large intra-class compactness; and 3) multimodal hierarchical fusion - achieved through the regularization of semantic segmentation to a multi-modal CNN architecture, where class probabilities are estimated at different hierarchical levels (i.e., image and pixel levels) and fused into a Conditional Random Field (CRF)-based inference hypothesis, the optimization of which produces consistent class labels in RGB-D images. Extensive experimental evaluations on RGB-D object and scene datasets, and live video streams (acquired from Kinect), show that our framework produces superior object and scene classification results compared to the state-of-the-art methods.
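
    A minimal sketch of the spatial-invariance ingredient only, using PyTorch: a small localization network predicts an affine warp that is applied to the input before the downstream CNN. The Fisher encoding and CRF-based fusion described above are not shown, and the layer sizes are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SpatialTransformer(nn.Module):
        def __init__(self, in_channels=4):  # e.g. RGB-D input
            super().__init__()
            self.loc = nn.Sequential(       # small localization network
                nn.Conv2d(in_channels, 8, 7), nn.MaxPool2d(2), nn.ReLU(),
                nn.Conv2d(8, 10, 5), nn.MaxPool2d(2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(10, 6),
            )
            # Initialize the predicted affine parameters to the identity transform.
            self.loc[-1].weight.data.zero_()
            self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

        def forward(self, x):
            theta = self.loc(x).view(-1, 2, 3)  # per-image affine parameters
            grid = F.affine_grid(theta, x.size(), align_corners=False)
            return F.grid_sample(x, grid, align_corners=False)

    # The warped output would then feed a standard classification CNN.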

  9. Application of high-resolution linear Radon transform for Rayleigh-wave dispersive energy imaging and mode separating

    USGS Publications Warehouse

    Luo, Y.; Xia, J.; Miller, R.D.; Liu, J.; Xu, Y.; Liu, Q.

    2008-01-01

    Multichannel analysis of surface waves (MASW) is an efficient tool for obtaining the vertical shear-wave velocity profile. One of the key steps in the MASW method is to generate an image of dispersive energy in the frequency-velocity domain, so that dispersion curves can be determined by picking peaks of dispersion energy. In this paper, we image Rayleigh-wave dispersive energy and separate multiple modes from a multichannel record by high-resolution linear Radon transform (LRT). We first introduce Rayleigh-wave dispersive-energy imaging by high-resolution LRT. We then show the process of Rayleigh-wave mode separation. Results of synthetic and real-world examples demonstrate that (1) compared with the slant-stacking algorithm, high-resolution LRT can improve the resolution of dispersion-energy images by more than 50%; (2) high-resolution LRT can successfully separate multimode dispersive energy of Rayleigh waves with high resolution; and (3) multimode separation and reconstruction expand the frequency ranges of higher-mode dispersive energy, which not only increases the investigation depth but also provides a means to accurately determine cut-off frequencies.
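
    For orientation, the baseline dispersion-energy image that high-resolution LRT improves upon can be computed with a frequency-domain slant stack, as sketched below; the iterative, high-resolution LRT itself is not reproduced, and the offsets, sampling interval and velocity grid are assumptions.

    import numpy as np

    def dispersion_image(traces, offsets, dt, velocities):
        """traces: (n_receivers, n_samples) shot gather; returns frequencies and |E(f, v)|."""
        spectra = np.fft.rfft(traces, axis=1)
        freqs = np.fft.rfftfreq(traces.shape[1], d=dt)
        energy = np.zeros((len(freqs), len(velocities)))
        for j, v in enumerate(velocities):
            # Compensate each trace's phase for a trial phase velocity, then stack coherently.
            shifts = np.exp(2j * np.pi * np.outer(offsets, freqs) / v)
            energy[:, j] = np.abs((spectra * shifts).sum(axis=0))
        return freqs, energy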

  10. Multimodal imaging of the human knee down to the cellular level

    NASA Astrophysics Data System (ADS)

    Schulz, G.; Götz, C.; Müller-Gerbl, M.; Zanette, I.; Zdora, M.-C.; Khimchenko, A.; Deyhle, H.; Thalmann, P.; Müller, B.

    2017-06-01

    Computed tomography reaches the best spatial resolution for the three-dimensional visualization of human tissues among the available nondestructive clinical imaging techniques. Nowadays, sub-millimeter voxel sizes are regularly obtained. For investigations at the true micrometer level, lab-based micro-CT (μCT) has become the gold standard. The aim of the present study is, firstly, the hierarchical investigation of a human knee post mortem using hard X-ray μCT and, secondly, multimodal imaging using absorption and phase-contrast modes in order to investigate hard (bone) and soft (cartilage) tissues at the cellular level. After the visualization of the entire knee using a clinical CT, a hierarchical imaging study was performed using the lab system nanotom® m. First, the entire knee was measured with a pixel length of 65 μm. The highest resolution, with a pixel length of 3 μm, could be achieved after extracting cylindrically shaped plugs from the femoral bones. For the visualization of the cartilage, grating-based phase-contrast μCT (I13-2, Diamond Light Source) was performed. With an effective voxel size of 2.3 μm it was possible to visualize individual chondrocytes within the cartilage.

  11. “How many times must a man look up before he can really see the sky?” Rheumatic cardiovascular disease in the era of multimodality imaging

    PubMed Central

    Mavrogeni, Sophie I; Markousis-Mavrogenis, George; Heutemann, David; van Wijk, Kees; Reiber, Hans J; Kolovou, Genovefa

    2015-01-01

    Cardiovascular involvement in rheumatic diseases (RD) is the result of various pathophysiologic mechanisms, including inflammation, accelerated atherosclerosis, myocardial ischemia due to micro- or macro-vascular lesions, and fibrosis. Noninvasive cardiovascular imaging, including echocardiography, nuclear techniques, cardiovascular computed tomography and cardiovascular magnetic resonance, represents the main diagnostic tool for the early, non-invasive diagnosis of heart disease in RD. However, in the era of multimodality imaging and financial crisis, there is an imperative need for the rational use of imaging techniques in order to obtain the maximum benefit at the lowest possible cost for the health insurance system. The oligo-asymptomatic cardiovascular presentation and the high cardiovascular mortality of RD necessitate a reliable and reproducible diagnostic approach to detect early cardiovascular involvement. Echocardiography remains the routine cornerstone of cardiovascular evaluation. However, a normal echocardiogram cannot always exclude cardiac involvement and/or identify the acuity and pathophysiology of heart disease. Therefore, cardiovascular magnetic resonance is a necessary adjunct to echocardiography, especially in new-onset heart failure and when there are conflicting data from the clinical, electrocardiographic and echocardiographic evaluation of RD patients. PMID:26413486

  12. CT and Ultrasound Guided Stereotactic High Intensity Focused Ultrasound (HIFU)

    NASA Astrophysics Data System (ADS)

    Wood, Bradford J.; Yanof, J.; Frenkel, V.; Viswanathan, A.; Dromi, S.; Oh, K.; Kruecker, J.; Bauer, C.; Seip, R.; Kam, A.; Li, K. C. P.

    2006-05-01

    To demonstrate the feasibility of CT- and B-mode ultrasound (US)-targeted HIFU, a prototype coaxial focused ultrasound transducer was registered and integrated with a CT scanner. CT and diagnostic ultrasound were used for HIFU targeting and monitoring, with the goals of both thermal ablation and non-thermal enhanced drug delivery. A 1 MHz coaxial ultrasound transducer was custom fabricated and attached to a passive position-sensing arm and an active six degree-of-freedom robotic arm via a CT stereotactic frame. The outer therapeutic transducer, with a 10 cm fixed focal zone, was coaxially mounted to an inner diagnostic US transducer (2-4 MHz, Philips Medical Systems). This coaxial US transducer was connected to a modified commercial focused ultrasound generator (Focus Surgery, Indianapolis, IN) with a maximum total acoustic power of 100 W. This pre-clinical paradigm was tested for its ability to heat tissue in phantoms with monitoring and navigation from CT and live US. The feasibility of navigation via image fusion of CT with other modalities such as PET and MRI was demonstrated. Heated water phantoms were tested for correlation between CT numbers and temperature (for ablation monitoring). The prototype transducer and integrated CT/US imaging system enabled simultaneous multimodality imaging and therapy. Pre-clinical phantom models validated the treatment paradigm and demonstrated integrated multimodality guidance and treatment monitoring. Temperature changes during phantom cooling corresponded to CT number changes. Contrast-enhanced or non-enhanced CT numbers may potentially be used to monitor thermal ablation with HIFU. Integrated CT, diagnostic US, and therapeutic focused ultrasound bridge a gap between diagnosis and therapy. Preliminary results show that the multimodality system may represent a relatively inexpensive, accessible, and simple method of both targeting and monitoring HIFU effects. Small-animal pre-clinical models may be translated to large animals and humans for HIFU-induced ablation and drug delivery. Integrated CT-guided focused ultrasound holds promise for tissue ablation, enhancing local drug delivery, and CT thermometry for monitoring ablation in near real-time.

  13. PIRATE: pediatric imaging response assessment and targeting environment

    NASA Astrophysics Data System (ADS)

    Glenn, Russell; Zhang, Yong; Krasin, Matthew; Hua, Chiaho

    2010-02-01

    By combining the strengths of various imaging modalities, the multimodality imaging approach has the potential to improve tumor staging, delineation of tumor boundaries, chemo-radiotherapy regimen design, and treatment response assessment in cancer management. To address the urgent need for efficient tools to analyze large-scale clinical trial data, we have developed an integrated multimodality, functional and anatomical imaging analysis software package for target definition and therapy response assessment in pediatric radiotherapy (RT) patients. Our software provides quantitative tools for automated image segmentation, region-of-interest (ROI) histogram analysis, spatial volume-of-interest (VOI) analysis, and voxel-wise correlation across modalities. To demonstrate the clinical applicability of this software, histogram analyses were performed on baseline and follow-up 18F-fluorodeoxyglucose (18F-FDG) PET images of nine patients with rhabdomyosarcoma enrolled in an institutional clinical trial at St. Jude Children's Research Hospital. In addition, we combined 18F-FDG PET, dynamic contrast-enhanced (DCE) MR, and anatomical MR data to visualize the heterogeneity in tumor pathophysiology, with the ultimate goal of adaptive targeting of regions with high tumor burden. Our software is able to simultaneously analyze multimodality images across multiple time points, which could greatly speed up the analysis of large-scale clinical trial data and the validation of potential imaging biomarkers.
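
    The ROI histogram analysis described above amounts to summarizing uptake values inside a tumor mask at each time point; a minimal sketch with illustrative metric choices and variable names (not the PIRATE implementation) is:

    import numpy as np

    def roi_stats(suv_volume, roi_mask):
        """Histogram summary of uptake values inside a binary region-of-interest mask."""
        vals = suv_volume[roi_mask > 0]
        return {"mean": vals.mean(), "max": vals.max(), "p90": np.percentile(vals, 90)}

    def response(baseline, followup, roi_mask):
        b, f = roi_stats(baseline, roi_mask), roi_stats(followup, roi_mask)
        return {k: 100.0 * (f[k] - b[k]) / b[k] for k in b}  # percent change per metric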

  14. Design of small confocal endo-microscopic probe working under multiwavelength environment

    NASA Astrophysics Data System (ADS)

    Kim, Young-Duk; Ahn, MyoungKi; Gweon, Dae-Gab

    2010-02-01

    Recently, optical imaging systems have become widely used for medical purposes. Specific diseases can be diagnosed at an early stage because optical imaging systems offer high resolution and a variety of imaging methods. These methods are used to obtain high-resolution images of the human body and can be used to verify whether a cell is infected by a virus. The confocal microscope is one of the well-known imaging systems used for in vivo imaging. Because most diseases are accompanied by cellular-level changes, physicians can diagnose them at an early stage by observing cellular images of human organs. Current research is focused on the development of endo-microscopes, which have a great advantage in accessibility to the human body. In this research, we designed a small probe that is connected to a confocal microscope through an optical fiber bundle and works as an endo-microscope. The probe is designed primarily to correct chromatic aberration so that various laser sources can be used for both fluorescence-type and reflection-type confocal images. By using two kinds of laser sources at the same time, we demonstrate a multi-modality confocal endo-microscope.

  15. In-situ Multimodal Imaging and Spectroscopy of Mg Electrodeposition at Electrode-Electrolyte Interfaces

    NASA Astrophysics Data System (ADS)

    Wu, Yimin A.; Yin, Zuwei; Farmand, Maryam; Yu, Young-Sang; Shapiro, David A.; Liao, Hong-Gang; Liang, Wen-I.; Chu, Ying-Hao; Zheng, Haimei

    2017-02-01

    We report the study of Mg cathodic electrochemical deposition on Ti and Au electrodes using a multimodal approach, examining the sample area in-situ with liquid cell transmission electron microscopy (TEM), scanning transmission X-ray microscopy (STXM) and X-ray absorption spectroscopy (XAS). A Magnesium Aluminum Chloride Complex was synthesized and utilized as the electrolyte, and non-reversible features were observed during in situ charging-discharging cycles. During charging, a uniform Mg film was deposited on the electrode, which is consistent with the intrinsic non-dendritic nature of Mg deposition in Mg ion batteries. The Mg thin film was not dissolvable during the following discharge process. In-situ STXM and XAS showed that this Mg thin film consists of hexacoordinated Mg compounds. This study provides insights into the non-reversibility issue and failure mechanism of Mg ion batteries. Our approach also provides a novel, generic way to probe in situ battery chemistry without further sample processing, preserving the original nature of the battery or electrodeposited materials. This multimodal in situ imaging and spectroscopy provides many opportunities to attack complex problems that span orders of magnitude in length and time scales, and it can be applied to a broad range of energy storage systems.

  16. In-situ Multimodal Imaging and Spectroscopy of Mg Electrodeposition at Electrode-Electrolyte Interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Yimin A.; Yin, Zuwei; Farmand, Maryam

    We report the study of Mg cathodic electrochemical deposition on Ti and Au electrodes using a multimodal approach, examining the sample area in-situ with liquid cell transmission electron microscopy (TEM), scanning transmission X-ray microscopy (STXM) and X-ray absorption spectroscopy (XAS). A Magnesium Aluminum Chloride Complex was synthesized and utilized as the electrolyte, and non-reversible features were observed during in situ charging-discharging cycles. During charging, a uniform Mg film was deposited on the electrode, which is consistent with the intrinsic non-dendritic nature of Mg deposition in Mg ion batteries. The Mg thin film was not dissolvable during the following discharge process. In-situ STXM and XAS showed that this Mg thin film consists of hexacoordinated Mg compounds. This study provides insights into the non-reversibility issue and failure mechanism of Mg ion batteries. Our approach also provides a novel, generic way to probe in situ battery chemistry without further sample processing, preserving the original nature of the battery or electrodeposited materials. This multimodal in situ imaging and spectroscopy provides many opportunities to attack complex problems that span orders of magnitude in length and time scales, and it can be applied to a broad range of energy storage systems.

  17. In-situ Multimodal Imaging and Spectroscopy of Mg Electrodeposition at Electrode-Electrolyte Interfaces

    DOE PAGES

    Wu, Yimin A.; Yin, Zuwei; Farmand, Maryam; ...

    2017-02-10

    We report the study of Mg cathodic electrochemical deposition on Ti and Au electrodes using a multimodal approach, examining the sample area in-situ with liquid cell transmission electron microscopy (TEM), scanning transmission X-ray microscopy (STXM) and X-ray absorption spectroscopy (XAS). A Magnesium Aluminum Chloride Complex was synthesized and utilized as the electrolyte, and non-reversible features were observed during in situ charging-discharging cycles. During charging, a uniform Mg film was deposited on the electrode, which is consistent with the intrinsic non-dendritic nature of Mg deposition in Mg ion batteries. The Mg thin film was not dissolvable during the following discharge process. In-situ STXM and XAS showed that this Mg thin film consists of hexacoordinated Mg compounds. This study provides insights into the non-reversibility issue and failure mechanism of Mg ion batteries. Our approach also provides a novel, generic way to probe in situ battery chemistry without further sample processing, preserving the original nature of the battery or electrodeposited materials. This multimodal in situ imaging and spectroscopy provides many opportunities to attack complex problems that span orders of magnitude in length and time scales, and it can be applied to a broad range of energy storage systems.

  18. In-situ Multimodal Imaging and Spectroscopy of Mg Electrodeposition at Electrode-Electrolyte Interfaces.

    PubMed

    Wu, Yimin A; Yin, Zuwei; Farmand, Maryam; Yu, Young-Sang; Shapiro, David A; Liao, Hong-Gang; Liang, Wen-I; Chu, Ying-Hao; Zheng, Haimei

    2017-02-10

    We report the study of Mg cathodic electrochemical deposition on Ti and Au electrodes using a multimodal approach, examining the sample area in-situ with liquid cell transmission electron microscopy (TEM), scanning transmission X-ray microscopy (STXM) and X-ray absorption spectroscopy (XAS). A Magnesium Aluminum Chloride Complex was synthesized and utilized as the electrolyte, and non-reversible features were observed during in situ charging-discharging cycles. During charging, a uniform Mg film was deposited on the electrode, which is consistent with the intrinsic non-dendritic nature of Mg deposition in Mg ion batteries. The Mg thin film was not dissolvable during the following discharge process. In-situ STXM and XAS showed that this Mg thin film consists of hexacoordinated Mg compounds. This study provides insights into the non-reversibility issue and failure mechanism of Mg ion batteries. Our approach also provides a novel, generic way to probe in situ battery chemistry without further sample processing, preserving the original nature of the battery or electrodeposited materials. This multimodal in situ imaging and spectroscopy provides many opportunities to attack complex problems that span orders of magnitude in length and time scales, and it can be applied to a broad range of energy storage systems.

  19. In-situ Multimodal Imaging and Spectroscopy of Mg Electrodeposition at Electrode-Electrolyte Interfaces

    PubMed Central

    Wu, Yimin A.; Yin, Zuwei; Farmand, Maryam; Yu, Young-Sang; Shapiro, David A.; Liao, Hong-Gang; Liang, Wen-I; Chu, Ying-Hao; Zheng, Haimei

    2017-01-01

    We report a study of Mg cathodic electrochemical deposition on Ti and Au electrodes using a multimodal approach, examining the sample area in situ with liquid cell transmission electron microscopy (TEM), scanning transmission X-ray microscopy (STXM) and X-ray absorption spectroscopy (XAS). A magnesium aluminum chloride complex was synthesized and utilized as the electrolyte, and non-reversible features were observed during in situ charging-discharging cycles. During charging, a uniform Mg film was deposited on the electrode, which is consistent with the intrinsically non-dendritic nature of Mg deposition in Mg ion batteries. The Mg thin film did not dissolve during the following discharge process. In situ STXM and XAS showed that this Mg thin film consists of hexacoordinated Mg compounds. This study provides insights into the non-reversibility issue and failure mechanism of Mg ion batteries. Our approach also offers a generic method to probe battery chemistry in situ without any further sample processing, which preserves the original nature of the battery or electrodeposited materials. This multimodal in situ imaging and spectroscopy opens many opportunities to attack complex problems that span orders of magnitude in length and time scale, and can be applied to a broad range of energy storage systems. PMID:28186175

  20. Research of the multimodal brain-tumor segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Lu, Yisu; Chen, Wufan

    2015-12-01

    It is well known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without initializing the number of clusters. A new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smoothness constraint is proposed in this study. Beyond the segmentation of single-modal brain tumor images, we extended the algorithm to segment multimodal brain tumor images using the magnetic resonance (MR) multimodal features, obtaining the active tumor and the edema at the same time. The proposed algorithm is evaluated and compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance.
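
    As a rough illustration of the clustering step described above (not the authors' implementation), the sketch below uses scikit-learn's truncated Dirichlet-process Gaussian mixture on synthetic two-channel intensities; the anisotropic diffusion and MRF smoothing stages of the paper are omitted, and all data and parameter values are made up.

        # Toy sketch only: cluster synthetic two-channel "MR" intensities with a
        # truncated Dirichlet-process mixture so the number of tissue clusters is
        # inferred rather than fixed in advance. Data and parameters are illustrative.
        import numpy as np
        from sklearn.mixture import BayesianGaussianMixture

        rng = np.random.default_rng(0)
        voxels = np.vstack([
            rng.normal([0.2, 0.3], 0.05, size=(500, 2)),   # stand-in healthy tissue
            rng.normal([0.7, 0.4], 0.05, size=(300, 2)),   # stand-in edema
            rng.normal([0.9, 0.9], 0.05, size=(200, 2)),   # stand-in active tumor
        ])

        dpgmm = BayesianGaussianMixture(
            n_components=10,                               # deliberately too many
            weight_concentration_prior_type="dirichlet_process",
            covariance_type="full",
            random_state=0,
        ).fit(voxels)

        labels = dpgmm.predict(voxels)
        print("effective number of clusters:", np.unique(labels).size)

    The stick-breaking prior leaves unused components with negligible weight, which is what lets the number of tissue clusters emerge from the data rather than being fixed in advance.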

  1. Multimodal MR imaging in hepatic encephalopathy: state of the art.

    PubMed

    Zhang, Xiao Dong; Zhang, Long Jiang

    2018-06-01

    Hepatic encephalopathy (HE) is a neurological or neuropsychological complication due to liver failure or portosystemic shunting. The clinical manifestation is highly variable, which can exhibit mild cognitive or motor impairment initially, or gradually progress to a coma, even death, without treatment. Neuroimaging plays a critical role in uncovering the neural mechanism of HE. In particular, multimodality MR imaging is able to assess both structural and functional derangements of the brain with HE in focal or neural network perspectives. In recent years, there has been rapid development in novel MR technologies and applications to investigate the pathophysiological mechanism of HE. Therefore, it is necessary to update the latest MR findings regarding HE by use of multimodality MRI to refine and deepen our understanding of the neural traits in HE. Herein, this review highlights the latest MR imaging findings in HE to refresh our understanding of MRI application in HE.

  2. Spinal metastases: multimodality imaging in diagnosis and stereotactic body radiation therapy planning.

    PubMed

    Jabehdar Maralani, Pejman; Lo, Simon S; Redmond, Kristin; Soliman, Hany; Myrehaug, Sten; Husain, Zain A; Heyn, Chinthaka; Kapadia, Anish; Chan, Aimee; Sahgal, Arjun

    2017-01-01

    Due to increased effectiveness of cancer treatments and increasing survival rates, metastatic disease has become more frequent compared to the past, with the spine being the most common site of bony metastases. Diagnostic imaging is an integral part of screening, diagnosis and follow-up of spinal metastases. In this article, we review the principles of multimodality imaging for tumor detection with respect to their value for diagnosis and stereotactic body radiation therapy planning for spinal metastases. We will also review the current international consensus agreement for stereotactic body radiation therapy planning, and the role of imaging in achieving the best possible treatment plan.

  3. New Finger Biometric Method Using Near Infrared Imaging

    PubMed Central

    Lee, Eui Chul; Jung, Hyunwoo; Kim, Daeyeoul

    2011-01-01

    In this paper, we propose a new finger biometric method. Infrared finger images are first captured, and then feature extraction is performed using a modified Gaussian high-pass filter together with binarization, local binary pattern (LBP), and local derivative pattern (LDP) methods. Infrared finger images include the multimodal features of finger veins and finger geometries. Instead of extracting each feature using different methods, the modified Gaussian high-pass filter is convolved over the entire image. Therefore, the extracted binary patterns of finger images include the multimodal features of veins and finger geometries. Experimental results show that the proposed method has an error rate of 0.13%. PMID:22163741
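
    A minimal sketch of the general pipeline described above, assuming a simple difference-of-Gaussian high-pass step and scikit-image's standard LBP operator; the image, sigma, and histogram settings are placeholders, and the LDP branch is not shown.

        # Illustrative only: high-pass enhancement of a stand-in infrared finger
        # image followed by LBP encoding. Sigma and LBP settings are assumptions.
        import numpy as np
        from scipy.ndimage import gaussian_filter
        from skimage.feature import local_binary_pattern

        rng = np.random.default_rng(1)
        img = rng.random((128, 256))                      # stand-in IR finger image

        highpass = img - gaussian_filter(img, sigma=5)    # high-pass = image - low-pass
        binary = (highpass > 0).astype(np.uint8)          # simple binarisation

        lbp = local_binary_pattern(binary, P=8, R=1, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        print(hist)                                       # feature vector for matching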

  4. Detail-enhanced multimodality medical image fusion based on gradient minimization smoothing filter and shearing filter.

    PubMed

    Liu, Xingbin; Mei, Wenbo; Du, Huiqian

    2018-02-13

    In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed using the proposed multi-scale joint decomposition framework (MJDF) and a shearing filter (SF). The MJDF, constructed with a gradient minimization smoothing filter (GMSF) and a Gaussian low-pass filter (GLF), is used to decompose source images into low-pass layers, edge layers, and detail layers at multiple scales. In order to highlight the detail information in the fused image, the edge layer and the detail layer at each scale are weighted and combined into a detail-enhanced layer. Because directional filters are effective at capturing salient information, SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. A visual saliency map-based fusion rule is designed for fusing the low-pass layers, and the sum of standard deviations is used as the activity-level measurement for directional coefficient fusion. The final fusion result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method, with shift invariance, directional selectivity, and a detail-enhancing property, is efficient in preserving and enhancing the detail information of multimodality medical images. Graphical abstract: the detailed implementation of the proposed medical image fusion algorithm.
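
    The sketch below illustrates only the base/detail decomposition idea, with a plain Gaussian low-pass filter standing in for the paper's GMSF and without the shearing-filter stage; the input images and the sigma value are placeholders.

        # Simplified stand-in for the multi-scale decomposition: one Gaussian
        # base/detail split per image, average the base layers, keep the stronger
        # detail per pixel.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def fuse(img_a, img_b, sigma=3.0):
            base_a, base_b = gaussian_filter(img_a, sigma), gaussian_filter(img_b, sigma)
            detail_a, detail_b = img_a - base_a, img_b - base_b
            base = 0.5 * (base_a + base_b)
            detail = np.where(np.abs(detail_a) >= np.abs(detail_b), detail_a, detail_b)
            return base + detail

        rng = np.random.default_rng(2)
        ct, mr = rng.random((64, 64)), rng.random((64, 64))   # stand-ins for CT/MR slices
        print(fuse(ct, mr).shape)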

  5. A Versatile High-Vacuum Cryo-transfer System for Cryo-microscopy and Analytics

    PubMed Central

    Tacke, Sebastian; Krzyzanek, Vladislav; Nüsse, Harald; Wepf, Roger Albert; Klingauf, Jürgen; Reichelt, Rudolf

    2016-01-01

    Cryogenic microscopy methods have gained increasing popularity, as they offer an unaltered view on the architecture of biological specimens. As a prerequisite, samples must be handled under cryogenic conditions below their recrystallization temperature, and contamination during sample transfer and handling must be prevented. We present a high-vacuum cryo-transfer system that streamlines the entire handling of frozen-hydrated samples from the vitrification process to low temperature imaging for scanning transmission electron microscopy and transmission electron microscopy. A template for cryo-electron microscopy and multimodal cryo-imaging approaches with numerous sample transfer steps is presented. PMID:26910419

  6. A comparison of line enhancement techniques: applications to guide-wire detection and respiratory motion tracking

    NASA Astrophysics Data System (ADS)

    Bismuth, Vincent; Vancamberg, Laurence; Gorges, Sébastien

    2009-02-01

    During interventional radiology procedures, guide-wires are usually inserted into the patient's vascular tree for diagnostic or therapeutic purposes. These procedures are monitored with an X-ray interventional system providing images of the interventional devices navigating through the patient's body. The automatic detection of such tools by image processing means has gained maturity over the past years and enables applications ranging from image enhancement to multimodal image fusion. Sophisticated detection methods are emerging, which rely on a variety of device enhancement techniques. In this article we reviewed and classified these techniques into three families. We chose a state-of-the-art approach in each of them and built a rigorous framework to compare their detection capability and their computational complexity. Through simulations and the intensive use of ROC curves, we demonstrated that the Hessian-based methods are the most robust to strong curvature of the devices and that the family of rotated-filter techniques is the most suited for detecting low-CNR, low-curvature devices. The steerable filter approach demonstrated less interesting detection capabilities and appears to be the most expensive to compute. Finally, we demonstrated the interest of automatic guide-wire detection for a clinical application: the compensation of respiratory motion in multimodal image fusion.

  7. Dual-band wavelength-swept laser for multimodal biomedical imaging

    NASA Astrophysics Data System (ADS)

    Goulamhoussen, Nadir

    A novel swept laser providing simultaneous dual-band (780 nm and 1300 nm) wavelength scanning has been designed for use in multimodal imaging systems. The swept laser is based on two gain media: a fibered semiconductor optical amplifier (SOA) centered at 1300 nm and a free-space laser diode centered at 780 nm. Simultaneous wavelength tuning for both bands is obtained by separate wavelength filters set up around the same rotating polygonal mirror. For each band, a telescope in an infinite-conjugate setup converges the wavelengths dispersed by a grating onto the polygon. The polygon reflects back a narrow band of wavelengths for amplification in the gain medium. Rotating the polygon enables wavelength tuning and imaging at a rate of 6000 to 30,000 spectral lines/s, or A-lines/s in Optical Coherence Tomography (OCT). The 780 nm source has a bandwidth of 37 nm, a fibered output power of 54 mW and a coherence length of 11 mm. The 1300 nm source has a bandwidth of 75 nm, a fibered output power of 17 mW and a coherence length of 7.2 mm. Three multimodal systems were designed to test the potential of the swept laser in biomedical imaging. A two-color OCT system, which allows three-dimensional depth imaging of biological tissues with good morphological contrast, was designed first, including a novel arrangement for balanced detection in both bands. A simultaneous OCT and SECM instrument was also built, in which spectrally encoded confocal microscopy (SECM) provides en face images of subcellular features with high resolution on top of the deep-penetration 3D image obtained by OCT. Finally, a system combining OCT with fluorescence was designed, thus adding functional imaging to structural OCT images. There are many prospective paths for these three modalities, first among them the adaptation of the systems so that they may be used with imaging probes. One potential solution would be the development of novel fiber components to combine the illumination of these modalities while demultiplexing their detection, as would be the development of new optomechanics to enable 3D real-time in vivo imaging.

  8. 3D multi-scale FCN with random modality voxel dropout learning for Intervertebral Disc Localization and Segmentation from Multi-modality MR Images.

    PubMed

    Li, Xiaomeng; Dou, Qi; Chen, Hao; Fu, Chi-Wing; Qi, Xiaojuan; Belavý, Daniel L; Armbrecht, Gabriele; Felsenberg, Dieter; Zheng, Guoyan; Heng, Pheng-Ann

    2018-04-01

    Intervertebral discs (IVDs) are small joints that lie between adjacent vertebrae. The localization and segmentation of IVDs are important for spine disease diagnosis and measurement quantification. However, manual annotation is time-consuming and error-prone with limited reproducibility, particularly for volumetric data. In this work, our goal is to develop an automatic and accurate method based on fully convolutional networks (FCN) for the localization and segmentation of IVDs from multi-modality 3D MR data. Compared with single modality data, multi-modality MR images provide complementary contextual information, which contributes to better recognition performance. However, how to effectively integrate such multi-modality information to generate accurate segmentation results remains to be further explored. In this paper, we present a novel multi-scale and modality dropout learning framework to locate and segment IVDs from four-modality MR images. First, we design a 3D multi-scale context fully convolutional network, which processes the input data in multiple scales of context and then merges the high-level features to enhance the representation capability of the network for handling the scale variation of anatomical structures. Second, to harness the complementary information from different modalities, we present a random modality voxel dropout strategy which alleviates the co-adaption issue and increases the discriminative capability of the network. Our method achieved the 1st place in the MICCAI challenge on automatic localization and segmentation of IVDs from multi-modality MR images, with a mean segmentation Dice coefficient of 91.2% and a mean localization error of 0.62 mm. We further conduct extensive experiments on the extended dataset to validate our method. We demonstrate that the proposed modality dropout strategy with multi-modality images as contextual information improved the segmentation accuracy significantly. Furthermore, experiments conducted on extended data collected from two different time points demonstrate the efficacy of our method on tracking the morphological changes in a longitudinal study. Copyright © 2018 Elsevier B.V. All rights reserved.
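
    A hedged sketch of the random modality dropout idea in isolation (the FCN itself is not reproduced); the drop probability, array shapes, and the rule of always keeping at least one modality are assumptions made for illustration.

        # Sketch of modality dropout on a four-channel volume; all settings are
        # illustrative, not the authors' training configuration.
        import numpy as np

        def random_modality_dropout(volume, p_drop=0.25, rng=None):
            """volume: (modalities, D, H, W). Each modality is independently zeroed
            with probability p_drop, keeping at least one modality."""
            if rng is None:
                rng = np.random.default_rng()
            keep = rng.random(volume.shape[0]) >= p_drop
            if not keep.any():                            # never drop everything
                keep[rng.integers(volume.shape[0])] = True
            out = volume.copy()
            out[~keep] = 0.0
            return out

        vol = np.random.rand(4, 32, 64, 64)               # four-modality MR stand-in
        print(random_modality_dropout(vol).shape)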

  9. Medical information, communication, and archiving system (MICAS): Phase II integration and acceptance testing

    NASA Astrophysics Data System (ADS)

    Smith, Edward M.; Wandtke, John; Robinson, Arvin E.

    1999-07-01

    The Medical Information, Communication and Archive System (MICAS) is a multi-modality integrated image management system that is seamlessly integrated with the Radiology Information System (RIS). The project was initiated in the summer of 1995, with the first phase installed during the first half of 1997 and the second phase installed during the summer of 1998. Phase II enhancements include a permanent archive, automated workflow including modality worklists, study caches, and NT diagnostic workstations, with all components adhering to Digital Imaging and Communications in Medicine (DICOM) standards. This multi-vendor phased approach to PACS implementation is designed as an enterprise-wide PACS that provides images and reports throughout our healthcare network. MICAS demonstrates that a multi-vendor, open-system, phased approach to PACS is feasible, cost-effective, and has significant advantages over a single-vendor implementation.

  10. Advances in combined endoscopic fluorescence confocal microscopy and optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Risi, Matthew D.

    Confocal microendoscopy provides real-time high resolution cellular level images via a minimally invasive procedure. Results from an ongoing clinical study to detect ovarian cancer with a novel confocal fluorescent microendoscope are presented. As an imaging modality, confocal fluorescence microendoscopy typically requires exogenous fluorophores, has a relatively limited penetration depth (100 μm), and often employs specialized aperture configurations to achieve real-time imaging in vivo. Two primary research directions designed to overcome these limitations and improve diagnostic capability are presented. Ideal confocal imaging performance is obtained with a scanning point illumination and confocal aperture, but this approach is often unsuitable for real-time, in vivo biomedical imaging. By scanning a slit aperture in one direction, image acquisition speeds are greatly increased, but at the cost of a reduction in image quality. The design, implementation, and experimental verification of a custom multi-point-scanning modification to a slit-scanning multi-spectral confocal microendoscope is presented. This new design improves the axial resolution while maintaining real-time imaging rates. In addition, the multi-point aperture geometry greatly reduces the effects of tissue scatter on imaging performance. Optical coherence tomography (OCT) has seen wide acceptance and FDA approval as a technique for ophthalmic retinal imaging, and has been adapted for endoscopic use. As a minimally invasive imaging technique, it provides morphological characteristics of tissues at a cellular level without requiring the use of exogenous fluorophores. OCT is capable of imaging deeper into biological tissue (˜1-2 mm) than confocal fluorescence microscopy. A theoretical analysis of the use of a fiber-bundle in spectral-domain OCT systems is presented. The fiber-bundle enables a flexible endoscopic design and provides fast, parallelized acquisition of the optical coherence tomography data. However, the multi-mode characteristic of the fibers in the fiber-bundle affects the depth sensitivity of the imaging system. A description of light interference in a multi-mode fiber is presented along with numerical simulations and experimental studies to illustrate the theoretical analysis.

  11. Unifying framework for multimodal brain MRI segmentation based on Hidden Markov Chains.

    PubMed

    Bricq, S; Collet, Ch; Armspach, J P

    2008-12-01

    In the context of 3D medical imaging, accurate segmentation of multimodal brain MR images is of interest for many brain disorders. However, due to several factors such as noise, imaging artifacts, intrinsic tissue variation and partial volume effects, tissue classification remains a challenging task. In this paper, we present a unifying framework for unsupervised segmentation of multimodal brain MR images that includes the partial volume effect, bias field correction, and information given by a probabilistic atlas. The proposed method takes neighborhood information into account using a Hidden Markov Chain (HMC) model. Due to the limited resolution of imaging devices, voxels may be composed of a mixture of different tissue types; this partial volume effect is included to achieve an accurate segmentation of brain tissues. Instead of assigning each voxel to a single tissue class (i.e., hard classification), we compute the relative amount of each pure tissue class in each voxel (mixture estimation). Further, a bias field estimation step is added to the proposed algorithm to correct intensity inhomogeneities. Furthermore, atlas priors are incorporated using a probabilistic brain atlas containing prior expectations about the spatial localization of different tissue classes. This atlas is considered a complementary sensor, and the proposed method is extended to multimodal brain MRI without any user-tunable parameter (unsupervised algorithm). To validate this new unifying framework, we present experimental results on both synthetic and real brain images for which the ground truth is available. Comparison with other commonly used techniques demonstrates the accuracy and robustness of this new Markovian segmentation scheme.
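
    As a toy illustration of the mixture-estimation idea only (not the HMC model itself), the snippet below recovers a per-voxel tissue fraction from a noisy intensity lying between two assumed pure-tissue means; the means, noise level, and one-dimensional setting are all invented.

        # Toy partial-volume estimate: recover the grey-matter fraction of each
        # voxel from its intensity, assuming the two pure-tissue means are known.
        import numpy as np

        mu_gm, mu_wm = 60.0, 110.0                        # assumed pure-tissue means
        rng = np.random.default_rng(8)
        true_frac = rng.random(10)                        # true GM fraction per voxel
        intensity = true_frac * mu_gm + (1 - true_frac) * mu_wm + rng.normal(0, 2, 10)

        est_frac = np.clip((mu_wm - intensity) / (mu_wm - mu_gm), 0, 1)
        print(np.round(est_frac - true_frac, 3))          # small residuals from noise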

  12. Architecture for a PACS primary diagnosis workstation

    NASA Astrophysics Data System (ADS)

    Shastri, Kaushal; Moran, Byron

    1990-08-01

    A major factor in determining the overall utility of a medical Picture Archiving and Communication System (PACS) is the functionality of the diagnostic workstation. Meyer-Ebrecht and Wendler [1] have proposed a modular picture computer architecture with high throughput, and Perry et al. [2] have defined performance requirements for radiology workstations. In order to be clinically useful, a primary diagnosis workstation must not only provide the functions of current viewing systems (e.g. mechanical alternators [3,4]), such as acceptable image quality, simultaneous viewing of multiple images, and rapid switching of image banks, but must also provide a diagnostic advantage over the current systems. This includes window-level functions on any image, simultaneous display of multi-modality images, rapid image manipulation, image processing, dynamic image display (cine), electronic image archival, hardcopy generation, image acquisition, network support, and an easy user interface. Implementation of such a workstation requires an underlying hardware architecture that provides high-speed image transfer channels, local storage facilities, and image processing functions. This paper describes the hardware architecture of the Siemens Diagnostic Reporting Console (DRC), which meets these requirements.

  13. Multi Modality Brain Mapping System (MBMS) Using Artificial Intelligence and Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Nikzad, Shouleh (Inventor); Kateb, Babak (Inventor)

    2017-01-01

    A Multimodality Brain Mapping System (MBMS), comprising one or more scopes (e.g., microscopes or endoscopes) coupled to one or more processors, wherein the one or more processors obtain training data from one or more first images and/or first data, wherein one or more abnormal regions and one or more normal regions are identified; receive a second image captured by one or more of the scopes at a later time than the one or more first images and/or first data and/or captured using a different imaging technique; and generate, using machine learning trained on the training data, one or more viewable indicators identifying one or more abnormalities in the second image, wherein the one or more viewable indicators are generated in real time as the second image is formed. One or more of the scopes display the one or more viewable indicators on the second image.

  14. Tangible interactive system for document browsing and visualisation of multimedia data

    NASA Astrophysics Data System (ADS)

    Rytsar, Yuriy; Voloshynovskiy, Sviatoslav; Koval, Oleksiy; Deguillaume, Frederic; Topak, Emre; Startchik, Sergei; Pun, Thierry

    2006-01-01

    In this paper we introduce and develop a framework for interactive document navigation in multimodal databases. First, we analyze the main open issues of existing multimodal interfaces and then discuss two applications that involve interaction with documents in several human environments, i.e., the so-called smart rooms. Second, we propose a system set-up dedicated to efficient navigation in printed documents. This set-up is based on the fusion of data from several modalities, including images and text. Both modalities can be used as cover data for hidden indexes using data-hiding technologies, as well as source data for robust visual hashing. The particularities of the proposed robust visual hashing are described in the paper. Finally, we address two practical applications of smart rooms for tourism and education and demonstrate the advantages of the proposed solution.

  15. Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets

    PubMed Central

    2010-01-01

    Background: Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular the registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. Results: We evaluate the CBE-driven PlayStation 3 as a high-performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch-reduction and loop-unrolling techniques, with special attention to 32-bit multiplies and the limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. Conclusions: The results demonstrate that the CBE processor in a PlayStation 3 accelerates computationally intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3, as a low-cost CBE-based platform, offers an efficient alternative to conventional hardware for solving computational problems in image processing and bioinformatics. PMID:20064262

  16. Biodistribution of biodegradable polymeric nano-carriers loaded with busulphan and designed for multimodal imaging.

    PubMed

    Asem, Heba; Zhao, Ying; Ye, Fei; Barrefelt, Åsa; Abedi-Valugerdi, Manuchehr; El-Sayed, Ramy; El-Serafi, Ibrahim; Abu-Salah, Khalid M; Hamm, Jörg; Muhammed, Mamoun; Hassan, Moustapha

    2016-12-19

    Multifunctional nanocarriers for controlled drug delivery, imaging of disease development and follow-up of treatment efficacy are promising novel tools for disease diagnosis and treatment. In the current investigation, we present a multifunctional theranostic nanocarrier system for anticancer drug delivery and molecular imaging. Superparamagnetic iron oxide nanoparticles (SPIONs) as an MRI contrast agent and busulphan as a model for lipophilic antineoplastic drugs were encapsulated into poly(ethylene glycol)-co-poly(caprolactone) (PEG-PCL) micelles via the emulsion-evaporation method, and PEG-PCL was labelled with VivoTag 680XL fluorochrome for in vivo fluorescence imaging. Busulphan entrapment efficiency was 83%, while the drug release showed a sustained pattern over 10 h. SPION-loaded PEG-PCL micelles showed contrast enhancement in T2*-weighted MRI with high r2* relaxivity. In vitro cellular uptake of PEG-PCL micelles labeled with fluorescein in J774A cells was found to be time-dependent. The maximum uptake was observed after 24 h of incubation. The biodistribution of PEG-PCL micelles functionalized with VivoTag 680XL was investigated in Balb/c mice over 48 h using in vivo fluorescence imaging. The results of real-time live imaging were then confirmed by ex vivo organ imaging and histological examination. Generally, PEG-PCL micelles were highly distributed into the lungs during the first 4 h post intravenous administration, then redistributed and accumulated in the liver and spleen until 48 h post administration. No pathological impairment was found in the major organs studied. Thus, with a loaded contrast agent and a conjugated fluorochrome, PEG-PCL micelles as biodegradable and biocompatible nanocarriers are efficient multimodal imaging agents, offering high drug loading capacity and sustained drug release. These might offer high treatment efficacy and real-time tracking of the drug delivery system in vivo, which is crucial for the design of an efficient drug delivery system.

  17. A Multi-Modal Face Recognition Method Using Complete Local Derivative Patterns and Depth Maps

    PubMed Central

    Yin, Shouyi; Dai, Xu; Ouyang, Peng; Liu, Leibo; Wei, Shaojun

    2014-01-01

    In this paper, we propose a multi-modal 2D + 3D face recognition method for a smart city application based on a Wireless Sensor Network (WSN) and various kinds of sensors. Depth maps are exploited for the 3D face representation. For feature extraction, we propose a new feature called the Complete Local Derivative Pattern (CLDP). It adopts the idea of layering and has four layers. In the whole system, we apply CLDP separately to Gabor features extracted from the 2D image and the depth map. We thus obtain two features: CLDP-Gabor and CLDP-Depth. The two features, weighted by the corresponding coefficients, are combined at the decision level to compute the total classification distance. Finally, the probe face is assigned the identity with the smallest classification distance. Extensive experiments are conducted on three different databases. The results demonstrate the robustness and superiority of the new approach. The experimental results also show that the proposed multi-modal 2D + 3D method is superior to other multi-modal methods and that CLDP performs better than other Local Binary Pattern (LBP) based features. PMID:25333290
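
    A minimal sketch of the weighted decision-level fusion step, assuming fixed illustrative weights and toy distance values; the CLDP feature extraction itself is not reproduced.

        # Weighted decision-level fusion of two per-modality distance vectors;
        # the weights and distances are toy values, not from the paper.
        import numpy as np

        def fuse_distances(d_gabor, d_depth, w_gabor=0.6, w_depth=0.4):
            total = w_gabor * np.asarray(d_gabor) + w_depth * np.asarray(d_depth)
            return int(np.argmin(total))                  # index of the best gallery match

        print(fuse_distances([0.8, 0.3, 0.9], [0.6, 0.5, 0.7]))   # -> identity 1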

  18. A Multimodal Imaging Protocol, (123)I/(99)Tc-Sestamibi, SPECT, and SPECT/CT, in Primary Hyperparathyroidism Adds Limited Benefit for Preoperative Localization.

    PubMed

    Lee, Grace S; McKenzie, Travis J; Mullan, Brian P; Farley, David R; Thompson, Geoffrey B; Richards, Melanie L

    2016-03-01

    Focused parathyroidectomy in primary hyperparathyroidism (1°HPT) is possible with accurate preoperative localization and intraoperative PTH monitoring (IOPTH). The added benefit of multimodal imaging techniques for operative success is unknown. Patients with 1°HPT who underwent parathyroidectomy in 2012-2014 at a single institution were retrospectively reviewed. Only the patients who underwent the standardized multimodal imaging workup consisting of (123)I/(99)Tc-sestamibi subtraction scintigraphy, SPECT, and SPECT/CT were assessed. Of 360 patients who were identified, a curative operation was performed in 96%, using preoperative imaging and IOPTH. Imaging analysis showed that (123)I/(99)Tc-sestamibi had a sensitivity of 86% (95% CI 82-90%), positive predictive value (PPV) of 93%, and accuracy of 81%, based on correct lateralization. SPECT had a sensitivity of 77% (95% CI 72-82%), PPV of 92% and accuracy of 72%. SPECT/CT had a sensitivity of 75% (95% CI 70-80%), PPV of 94%, and accuracy of 71%. There were 3 of 45 (7%) patients with negative sestamibi imaging who had an accurate SPECT and SPECT/CT. Of 312 patients (87%) with positive uptake on sestamibi (93% true positive, 7% false positive), concordant findings were present in 86% of SPECT and 84% of SPECT/CT studies. In cases where imaging modalities were discordant, but at least one method was true-positive, (123)I/(99)Tc-sestamibi was significantly better than both SPECT and SPECT/CT (p < 0.001). The inclusion of SPECT and SPECT/CT in the 1°HPT imaging protocol increases patient cost up to 2.4-fold. (123)I/(99)Tc-sestamibi subtraction imaging is highly sensitive for preoperative localization in 1°HPT. SPECT and SPECT/CT are commonly concordant with (123)I/(99)Tc-sestamibi and rarely increase the sensitivity. Routine inclusion of multimodality imaging adds minimal clinical benefit but increases cost to the patient in a high-volume setting.
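
    For readers unfamiliar with the reported metrics, the short worked example below shows how sensitivity, PPV, and accuracy follow from a confusion matrix; the counts are hypothetical and are not the study's raw data.

        # Hypothetical counts, used only to show how the reported metrics are computed.
        tp, fp, fn, tn = 86, 6, 14, 4
        sensitivity = tp / (tp + fn)                      # 0.86 -> "sensitivity of 86%"
        ppv = tp / (tp + fp)                              # ~0.93 -> "PPV of 93%"
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        print(f"sens={sensitivity:.2f}  ppv={ppv:.2f}  acc={accuracy:.2f}")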

  19. Fiber-optic-bundle-based optical coherence tomography.

    PubMed

    Xie, Tuqiang; Mukai, David; Guo, Shuguang; Brenner, Matthew; Chen, Zhongping

    2005-07-15

    A fiber-optic-bundle-based optical coherence tomography (OCT) probe method is presented. The experimental results demonstrate that this multimode optical-fiber-bundle-based OCT system can achieve a lateral resolution of 12 µm and an axial resolution of 10 µm with a superluminescent diode source. This novel OCT imaging approach eliminates any moving parts in the probe, offers a primary advantage for use in extremely compact and safe OCT endoscopes for imaging internal organs, and has great potential to be combined with confocal endoscopic microscopy.

  20. Wave analysis of a plenoptic system and its applications

    NASA Astrophysics Data System (ADS)

    Shroff, Sapna A.; Berkner, Kathrin

    2013-03-01

    Traditional imaging systems directly image a 2D object plane on to the sensor. Plenoptic imaging systems contain a lenslet array at the conventional image plane and a sensor at the back focal plane of the lenslet array. In this configuration the data captured at the sensor is not a direct image of the object. Each lenslet effectively images the aperture of the main imaging lens at the sensor. Therefore the sensor data retains angular light-field information which can be used for a posteriori digital computation of multi-angle images and axially refocused images. If a filter array, containing spectral filters or neutral density or polarization filters, is placed at the pupil aperture of the main imaging lens, then each lenslet images the filters on to the sensor. This enables the digital separation of multiple filter modalities giving single snapshot, multi-modal images. Due to the diversity of potential applications of plenoptic systems, their investigation is increasing. As the application space moves towards microscopes and other complex systems, and as pixel sizes become smaller, the consideration of diffraction effects in these systems becomes increasingly important. We discuss a plenoptic system and its wave propagation analysis for both coherent and incoherent imaging. We simulate a system response using our analysis and discuss various applications of the system response pertaining to plenoptic system design, implementation and calibration.

  1. Innovations in Small-Animal PET/MR Imaging Instrumentation.

    PubMed

    Tsoumpas, Charalampos; Visvikis, Dimitris; Loudos, George

    2016-04-01

    Multimodal imaging has led to a more detailed exploration of different physiologic processes with integrated PET/MR imaging being the most recent entry. Although the clinical need is still questioned, it is well recognized that it represents one of the most active and promising fields of medical imaging research in terms of software and hardware. The hardware developments have moved from small detector components to high-performance PET inserts and new concepts in full systems. Conversely, the software focuses on the efficient performance of necessary corrections without the use of CT data. The most recent developments in both directions are reviewed. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. "The Mermaid's Purse:" Looking Closely at Young Children's Art and Poetry

    ERIC Educational Resources Information Center

    Wolf, Shelby A.

    2006-01-01

    In this article, the author explores the multimodal poems, digital photographs, and three-dimensional artistic creations of young children who live by the sea. Encouraged by their teachers and adult artists, the children learned to look closely at the sign systems of art and poetry to open up worlds of image creation and metaphor making. Teachers…

  3. Latest advances in molecular imaging instrumentation.

    PubMed

    Pichler, Bernd J; Wehrl, Hans F; Judenhofer, Martin S

    2008-06-01

    This review concentrates on the latest advances in molecular imaging technology, including PET, MRI, and optical imaging. In PET, significant improvements in tumor detection and image resolution have been achieved by introducing new scintillation materials, iterative image reconstruction, and correction methods. These advances enabled the first clinical scanners capable of time-of-flight detection and incorporating point-spread-function reconstruction to compensate for depth-of-interaction effects. In the field of MRI, the most important developments in recent years have mainly been MRI systems with higher field strengths and improved radiofrequency coil technology. Hyperpolarized imaging, functional MRI, and MR spectroscopy provide molecular information in vivo. A special focus of this review article is multimodality imaging and, in particular, the emerging field of combined PET/MRI.

  4. Advanced Contrast Agents for Multimodal Biomedical Imaging Based on Nanotechnology.

    PubMed

    Calle, Daniel; Ballesteros, Paloma; Cerdán, Sebastián

    2018-01-01

    Clinical imaging modalities have reached a prominent role in medical diagnosis and patient management in the last decades. Different imaging methodologies such as Positron Emission Tomography, Single Photon Emission Tomography, X-Rays, or Magnetic Resonance Imaging are in continuous evolution to satisfy the increasing demands of current medical diagnosis. Progress in these methodologies has been favored by the parallel development of increasingly more powerful contrast agents. These are molecules that enhance the intrinsic contrast of the images in the tissues where they accumulate, revealing noninvasively the presence of characteristic molecular targets or differential physiopathological microenvironments. The contrast agent field is currently moving to improve the performance of these molecules by incorporating the advantages that modern nanotechnology offers. These include, mainly, the possibilities to combine imaging and therapeutic capabilities on the same theranostic platform or to improve the targeting efficiency in vivo by molecular engineering of the nanostructures. In this review, we provide an introduction to multimodal imaging methods in biomedicine, the sub-nanometric imaging agents previously used, and the development of advanced multimodal and theranostic imaging agents based on nanotechnology. We conclude by providing some illustrative examples from our own laboratories, including recent progress in theranostic formulations of magnetoliposomes containing ω-3 poly-unsaturated fatty acids to treat inflammatory diseases, or the use of stealth liposomes engineered with a pH-sensitive nanovalve to release their cargo specifically in the acidic extracellular pH microenvironment of tumors.

  5. Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification.

    PubMed

    Rajagopal, Gayathri; Palaniswamy, Ramamoorthy

    2015-01-01

    This research proposes a multimodal multifeature biometric system for human recognition using two traits, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal and multifeature biometric system using feature-level fusion to achieve better performance. The main aim of the proposed system is to increase the recognition accuracy using feature-level fusion. The features at the feature-level fusion stage are raw biometric data, which contain rich information compared to decision-level and matching-score-level fusion. Hence, information fused at the feature level is expected to yield improved recognition accuracy. However, information fused at the feature level suffers from the curse of dimensionality; here PCA (principal component analysis) is used to reduce the dimensionality of the feature sets, as they are high dimensional. The proposed multimodal results were compared with other multimodal and monomodal approaches. Out of these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm is tested using a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database.

  6. Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification

    PubMed Central

    Rajagopal, Gayathri; Palaniswamy, Ramamoorthy

    2015-01-01

    This research proposes a multimodal multifeature biometric system for human recognition using two traits, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal and multifeature biometric system using feature-level fusion to achieve better performance. The main aim of the proposed system is to increase the recognition accuracy using feature-level fusion. The features at the feature-level fusion stage are raw biometric data, which contain rich information compared to decision-level and matching-score-level fusion. Hence, information fused at the feature level is expected to yield improved recognition accuracy. However, information fused at the feature level suffers from the curse of dimensionality; here PCA (principal component analysis) is used to reduce the dimensionality of the feature sets, as they are high dimensional. The proposed multimodal results were compared with other multimodal and monomodal approaches. Out of these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm is tested using a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database. PMID:26640813
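
    A hedged sketch of the feature-level fusion pipeline described above (concatenate, reduce with PCA, classify with KNN) on synthetic feature vectors; the feature dimensions, number of PCA components, k, and the data itself are placeholders rather than the authors' setup.

        # Placeholder data and settings; not the authors' palmprint/iris pipeline.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        n_subjects, n_samples = 10, 8
        labels = np.repeat(np.arange(n_subjects), n_samples)
        palm = rng.random((labels.size, 256)) + labels[:, None] * 0.05   # palmprint features
        iris = rng.random((labels.size, 256)) + labels[:, None] * 0.05   # iris features

        fused = np.hstack([palm, iris])                       # feature-level fusion
        fused = PCA(n_components=30).fit_transform(fused)     # curb dimensionality

        x_tr, x_te, y_tr, y_te = train_test_split(
            fused, labels, test_size=0.25, stratify=labels, random_state=0)
        knn = KNeighborsClassifier(n_neighbors=3).fit(x_tr, y_tr)
        print("recognition accuracy:", knn.score(x_te, y_te))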

  7. Long-Term Live Cell Imaging of Cell Migration: Effects of Pathogenic Fungi on Human Epithelial Cell Migration.

    PubMed

    Wöllert, Torsten; Langford, George M

    2016-01-01

    Long-term live cell imaging was used in this study to determine the responses of human epithelial cells to pathogenic biofilms formed by Candida albicans. Epithelial cells of the skin represent the front line of defense against invasive pathogens such as C. albicans but under certain circumstances, especially when the host's immune system is compromised, the skin barrier is breached. The mechanisms by which the fungal pathogen penetrates the skin and invade the deeper layers are not fully understood. In this study we used keratinocytes grown in culture as an in vitro model system to determine changes in host cell migration and the actin cytoskeleton in response to virulence factors produced by biofilms of pathogenic C. albicans. It is clear that changes in epithelial cell migration are part of the response to virulence factors secreted by biofilms of C. albicans and the actin cytoskeleton is the downstream effector that mediates cell migration. Our goal is to understand the mechanism by which virulence factors hijack the signaling pathways of the actin cytoskeleton to alter cell migration and thereby invade host tissues. To understand the dynamic changes of the actin cytoskeleton during infection, we used long-term live cell imaging to obtain spatial and temporal information of actin filament dynamics and to identify signal transduction pathways that regulate the actin cytoskeleton and its associated proteins. Long-term live cell imaging was achieved using a high resolution, multi-mode epifluorescence microscope equipped with specialized light sources, high-speed cameras with high sensitivity detectors, and specific biocompatible fluorescent markers. In addition to the multi-mode epifluorescence microscope, a spinning disk confocal long-term live cell imaging system (Olympus CV1000) equipped with a stage incubator to create a stable in vitro environment for long-term real-time and time-lapse microscopy was used. Detailed descriptions of these two long-term live cell imaging systems are provided.

  8. Spinal focal lesion detection in multiple myeloma using multimodal image features

    NASA Astrophysics Data System (ADS)

    Fränzle, Andrea; Hillengass, Jens; Bendl, Rolf

    2015-03-01

    Multiple myeloma is a tumor disease of the bone marrow that affects the skeleton systemically, i.e. multiple lesions can occur at different sites in the skeleton. To quantify overall tumor mass for determining the degree of disease and for analysis of therapy response, volumetry of all lesions is needed. Since the large number of lesions in one patient impedes manual segmentation of all lesions, quantification of overall tumor volume has not been possible until now. Therefore, the development of automatic lesion detection and segmentation methods is necessary. Since focal tumors in multiple myeloma show different characteristics in different modalities (changes in bone structure in CT images, hypointensity in T1-weighted MR images and hyperintensity in T2-weighted MR images), multimodal image analysis is necessary for the detection of focal tumors. In this paper a pattern recognition approach is presented that identifies focal lesions in lumbar vertebrae based on features from T1- and T2-weighted MR images. Image voxels within bone are classified using random forests based on plain intensities and intensity-value-derived features (maximum, minimum, mean, median) in a 5 x 5 neighborhood around each voxel from both T1- and T2-weighted MR images. A test data sample of lesions in 8 lumbar vertebrae from 4 multiple myeloma patients can be classified at an accuracy of 95% (using a leave-one-patient-out test). The approach provides a reasonable delineation of the example lesions. This is an important step towards automatic tumor volume quantification in multiple myeloma.
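
    An illustrative sketch of the voxel-wise feature construction and random-forest classification described above, using a synthetic T1/T2 slice pair and a toy lesion mask; the filter sizes, forest size, and 2D (rather than 3D) setting are simplifying assumptions.

        # Synthetic 2D stand-in for the voxel classification step.
        import numpy as np
        from scipy.ndimage import maximum_filter, minimum_filter, uniform_filter, median_filter
        from sklearn.ensemble import RandomForestClassifier

        def neighbourhood_features(img, size=5):
            return np.stack([img,
                             maximum_filter(img, size),
                             minimum_filter(img, size),
                             uniform_filter(img, size),       # mean
                             median_filter(img, size)], axis=-1)

        rng = np.random.default_rng(4)
        t1, t2 = rng.random((64, 64)), rng.random((64, 64))   # stand-in MR slices
        lesion_mask = np.zeros((64, 64), dtype=int)
        lesion_mask[20:30, 20:30] = 1                         # toy lesion label

        features = np.concatenate([neighbourhood_features(t1),
                                   neighbourhood_features(t2)], axis=-1).reshape(-1, 10)
        clf = RandomForestClassifier(n_estimators=50, random_state=0)
        clf.fit(features, lesion_mask.ravel())
        print("training accuracy:", clf.score(features, lesion_mask.ravel()))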

  9. Hyperspectral microscope for in vivo imaging of microstructures and cells in tissues

    DOEpatents

    Demos, Stavros G. [Livermore, CA]

    2011-05-17

    An optical hyperspectral/multimodal imaging method and apparatus is utilized to provide high signal sensitivity for implementation of various optical imaging approaches. Such a system utilizes long-working-distance microscope objectives so as to enable off-axis illumination of predetermined tissue, thereby allowing for excitation at any optical wavelength; it simplifies the design, reduces the required optical elements, significantly reduces spectral noise from the optical elements, and allows for fast image acquisition, enabling high-quality imaging in vivo. Such a technology provides a means of detecting disease at the single-cell level, such as cancer, precancer, ischemic, traumatic or other types of injury, infection, or other diseases or conditions causing alterations in cells and tissue microstructures.

  10. Evaluation of state-of-the-art imaging systems for in vivo monitoring of retinal structure in mice: current capabilities and limitations

    NASA Astrophysics Data System (ADS)

    Zhang, Pengfei; Zam, Azhar; Pugh, Edward N.; Zawadzki, Robert J.

    2014-02-01

    Animal models of human diseases play an important role in studying and advancing our understanding of these conditions, allowing molecular level studies of pathogenesis as well as testing of new therapies. Recently several non-invasive imaging modalities including Fundus Camera, Scanning Laser Ophthalmoscopy (SLO) and Optical Coherence Tomography (OCT) have been successfully applied to monitor changes in the retinas of the living animals in experiments in which a single animal is followed over a portion of its lifespan. Here we evaluate the capabilities and limitations of these three imaging modalities for visualization of specific structures in the mouse eye. Example images acquired from different types of mice are presented. Future directions of development for these instruments and potential advantages of multi-modal imaging systems are discussed as well.

  11. Multimodality imaging of hepato-biliary disorders in pregnancy: a pictorial essay.

    PubMed

    Ong, Eugene M W; Drukteinis, Jennifer S; Peters, Hope E; Mortelé, Koenraad J

    2009-09-01

    Hepato-biliary disorders are rare complications of pregnancy, but they may be severe, with high fetal and maternal morbidity and mortality. Imaging is, therefore, essential in the rapid diagnosis of some of these conditions so that appropriate, life-saving treatment can be administered. This pictorial essay illustrates the multimodality imaging features of pregnancy-induced hepato-biliary disorders, such as acute fatty liver of pregnancy, preeclampsia and eclampsia, and HELLP syndrome, as well as those conditions which occur in pregnancy but are not unique to it, such as viral hepatitis, Budd-Chiari syndrome, focal hepatic lesions, biliary sludge, cholecystolithiasis, and choledocholithiasis.

  12. Multimodal correlation and intraoperative matching of virtual models in neurosurgery

    NASA Technical Reports Server (NTRS)

    Ceresole, Enrico; Dalsasso, Michele; Rossi, Aldo

    1994-01-01

    The multimodal correlation between different diagnostic exams, the intraoperative calibration of pointing tools, and the correlation of the patient's virtual models with the patient himself are examples, taken from the biomedical field, of a single underlying problem: determining the relationship linking representations of the same object in different reference frames. Several methods have been developed to determine this relationship; among them, the surface matching method gives the patient minimum discomfort, and the errors occurring are compatible with the required precision. The surface matching method has been successfully applied to the multimodal correlation of diagnostic exams such as CT, MR, PET and SPECT. Algorithms for automatic segmentation of diagnostic images have been developed to extract the reference surfaces from the diagnostic exams, whereas the surface of the patient's skull has been monitored, in our approach, by means of a laser sensor mounted on the end effector of an industrial robot. An integrated system for virtual planning and real-time execution of surgical procedures has been realized.
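
    As a hedged sketch of the underlying frame-to-frame registration problem (not the authors' surface-matching algorithm), the snippet below recovers a rigid transform between corresponding 3D surface points with a standard least-squares (Kabsch) solution on synthetic data.

        # Standard rigid alignment on synthetic corresponding points; this stands in
        # for, but is not, the surface-matching method described in the record.
        import numpy as np

        def rigid_transform(p_src, p_dst):
            """Return R, t such that p_dst ~= p_src @ R.T + t for (N, 3) point sets."""
            c_src, c_dst = p_src.mean(0), p_dst.mean(0)
            h = (p_src - c_src).T @ (p_dst - c_dst)
            u, _, vt = np.linalg.svd(h)
            r = vt.T @ u.T
            if np.linalg.det(r) < 0:                      # avoid reflections
                vt[-1] *= -1
                r = vt.T @ u.T
            return r, c_dst - r @ c_src

        rng = np.random.default_rng(5)
        src = rng.random((100, 3))                        # e.g. skull surface points from CT
        q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        true_r = q if np.linalg.det(q) > 0 else -q        # a proper rotation
        true_t = np.array([5.0, -2.0, 1.0])
        dst = src @ true_r.T + true_t                     # same surface in the sensor frame
        r, t = rigid_transform(src, dst)
        print(np.allclose(dst, src @ r.T + t))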

  13. The evolution of gadolinium based contrast agents: from single-modality to multi-modality

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Liu, Ruiqing; Peng, Hui; Li, Penghui; Xu, Zushun; Whittaker, Andrew K.

    2016-05-01

    Gadolinium-based contrast agents are extensively used as magnetic resonance imaging (MRI) contrast agents due to their outstanding signal enhancement and ease of chemical modification. However, it is increasingly recognized that information obtained from single-modality molecular imaging cannot satisfy the growing requirements for efficiency and accuracy in clinical diagnosis and medical research, owing to limitations inherent in any single molecular imaging technique. To compensate for the deficiencies of single-function magnetic resonance imaging contrast agents, combined multi-modality imaging has become a research hotspot in recent years. This review presents an overview of recent developments in the functionalization of gadolinium-based contrast agents and their applications in biomedicine.

  14. Study of electromechanical and mechanical properties of bacteria using force microscopy

    NASA Astrophysics Data System (ADS)

    Reukov, Vladimir; Thompson, Gary; Nikiforov, Maxim; Guo, Senli; Ovchinnikov, Oleg; Jesse, Stephen; Kalinin, Sergei; Vertegel, Alexey

    2010-03-01

    The application of scanning probe microscopy (SPM) to biological systems has evolved over the past decade into a multimodal and spectroscopic instrument that provides multiple information channels at each spatial pixel acquired. Recently, functional recognition imaging based on differing electromechanical properties between Gram-negative and Gram-positive bacteria was achieved using artificial neural network analysis of band excitation piezoresponse force microscopy (BEPFM) data. The immediate goal of this project was to study the mechanical and electromechanical properties of bacterial systems in physiologically relevant solutions using band excitation piezoresponse force microscopy (BEPFM) in combination with force mapping. Electromechanical imaging in physiological environments will improve the versatility of functional recognition imaging and open the way for application of the rapid BEPFM line mode method to other living cell systems.

  15. Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems.

    PubMed

    Snelick, Robert; Uludag, Umut; Mink, Alan; Indovina, Michael; Jain, Anil

    2005-03-01

    We examine the performance of multimodal biometric authentication systems using state-of-the-art Commercial Off-the-Shelf (COTS) fingerprint and face biometric systems on a population approaching 1,000 individuals. The majority of prior studies of multimodal biometrics have been limited to relatively low accuracy non-COTS systems and populations of a few hundred users. Our work is the first to demonstrate that multimodal fingerprint and face biometric systems can achieve significant accuracy gains over either biometric alone, even when using highly accurate COTS systems on a relatively large-scale population. In addition to examining well-known multimodal methods, we introduce new methods of normalization and fusion that further improve the accuracy.
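
    A minimal sketch of score normalization and fusion for two matchers, using simple min-max normalization and sum fusion on toy scores; the study's specific normalization and fusion methods are not reproduced here.

        # Toy scores and simple min-max/sum fusion, for illustration only.
        import numpy as np

        def min_max(scores):
            scores = np.asarray(scores, dtype=float)
            return (scores - scores.min()) / (scores.max() - scores.min())

        face_scores = [0.31, 0.78, 0.55, 0.91]            # e.g. face matcher outputs
        finger_scores = [102.0, 240.0, 180.0, 310.0]      # e.g. fingerprint matcher outputs

        fused = min_max(face_scores) + min_max(finger_scores)
        print("best candidate:", int(np.argmax(fused)))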

  16. A Pretargeted Approach for the Multimodal PET/NIRF Imaging of Colorectal Cancer.

    PubMed

    Adumeau, Pierre; Carnazza, Kathryn E; Brand, Christian; Carlin, Sean D; Reiner, Thomas; Agnew, Brian J; Lewis, Jason S; Zeglis, Brian M

    2016-01-01

    The complementary nature of positron emission tomography (PET) and near-infrared fluorescence (NIRF) imaging makes the development of strategies for the multimodal PET/NIRF imaging of cancer a very enticing prospect. Indeed, in the context of colorectal cancer, a single multimodal PET/NIRF imaging agent could be used to stage the disease, identify candidates for surgical intervention, and facilitate the image-guided resection of the disease. While antibodies have proven to be highly effective vectors for the delivery of radioisotopes and fluorophores to malignant tissues, the use of radioimmunoconjugates labeled with long-lived nuclides such as 89Zr poses two important clinical complications: high radiation doses to the patient and the need for significant lag time between imaging and surgery. In vivo pretargeting strategies that decouple the targeting vector from the radioactivity at the time of injection have the potential to circumvent these issues by facilitating the use of positron-emitting radioisotopes with far shorter half-lives. Here, we report the synthesis, characterization, and in vivo validation of a pretargeted strategy for the multimodal PET and NIRF imaging of colorectal carcinoma. This approach is based on the rapid and bioorthogonal ligation between a trans-cyclooctene- and fluorophore-bearing immunoconjugate of the huA33 antibody (huA33-Dye800-TCO) and a 64Cu-labeled tetrazine radioligand (64Cu-Tz-SarAr). In vivo imaging experiments in mice bearing A33 antigen-expressing SW1222 colorectal cancer xenografts clearly demonstrate that this approach enables the non-invasive visualization of tumors and the image-guided resection of malignant tissue, all at only a fraction of the radiation dose created by a directly labeled radioimmunoconjugate. Additional in vivo experiments in peritoneal and patient-derived xenograft models of colorectal carcinoma reinforce the efficacy of this methodology and underscore its potential as an innovative and useful clinical tool.

  17. Acute imaging does not improve ASTRAL score's accuracy despite having a prognostic value.

    PubMed

    Ntaios, George; Papavasileiou, Vasileios; Faouzi, Mohamed; Vanacker, Peter; Wintermark, Max; Michel, Patrik

    2014-10-01

    The ASTRAL score was recently shown to reliably predict three-month functional outcome in patients with acute ischemic stroke. This study aimed to investigate whether information from multimodal imaging increases the ASTRAL score's accuracy. All patients registered in the ASTRAL registry until March 2011 were included. In multivariate logistic-regression analyses, we added covariates derived from parenchymal, vascular, and perfusion imaging to the 6-parameter model of the ASTRAL score. If a specific imaging covariate remained an independent predictor of a three-month modified Rankin score > 2, the area under the curve (AUC) of this new model was calculated and compared with the ASTRAL score's AUC. We also performed similar logistic regression analyses in arbitrarily chosen patient subgroups. When added to the ASTRAL score, the following covariates from admission computed tomography/magnetic resonance imaging-based multimodal imaging were not significant predictors of outcome: any stroke-related acute lesion, any nonstroke-related lesions, chronic/subacute stroke, leukoaraiosis, significant arterial pathology in the ischemic territory on computed tomography angiography/magnetic resonance angiography/Doppler, significant intracranial arterial pathology in the ischemic territory, and focal hypoperfusion on perfusion computed tomography. The Alberta Stroke Program Early CT score on plain imaging and any significant extracranial arterial pathology on computed tomography angiography/magnetic resonance angiography/Doppler were independent predictors of outcome (odds ratio: 0·93, 95% CI: 0·87-0·99 and odds ratio: 1·49, 95% CI: 1·08-2·05, respectively) but did not increase the ASTRAL score's AUC (0·849 vs. 0·850, and 0·8563 vs. 0·8564, respectively). In exploratory analyses in subgroups of different prognosis, age or stroke severity, no covariate was found to increase the ASTRAL score's AUC either. The addition of information derived from multimodal imaging does not increase the ASTRAL score's accuracy in predicting functional outcome, despite having an independent prognostic value. More carefully selected radiological parameters applied in specific subgroups of stroke patients may add to the prognostic value of multimodal imaging. © 2014 World Stroke Organization.
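
    The sketch below mimics the analysis pattern (a base-score model versus the base score plus one imaging covariate, compared by AUC) on synthetic data; the variables, effect sizes, and model settings are invented for illustration and bear no relation to the registry.

        # Synthetic data only; unrelated to the ASTRAL registry.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(6)
        n = 500
        astral = rng.normal(20, 8, n)                     # stand-in ASTRAL scores
        imaging = rng.binomial(1, 0.3, n)                 # e.g. extracranial pathology flag
        outcome = (rng.random(n) < 1 / (1 + np.exp(-(astral - 20) / 5))).astype(int)

        base = LogisticRegression().fit(astral.reshape(-1, 1), outcome)
        ext = LogisticRegression().fit(np.column_stack([astral, imaging]), outcome)

        auc_base = roc_auc_score(outcome, base.predict_proba(astral.reshape(-1, 1))[:, 1])
        auc_ext = roc_auc_score(outcome, ext.predict_proba(np.column_stack([astral, imaging]))[:, 1])
        print(f"AUC base={auc_base:.3f}  AUC with imaging covariate={auc_ext:.3f}")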

  18. A mercury arc lamp-based multi-color confocal real time imaging system for cellular structure and function.

    PubMed

    Saito, Kenta; Kobayashi, Kentaro; Tani, Tomomi; Nagai, Takeharu

    2008-01-01

    Multi-point scanning confocal microscopy using a Nipkow disk enables the acquisition of fluorescent images with high spatial and temporal resolutions. Like other single-point scanning confocal systems that use galvanometer mirrors, a commercially available Nipkow spinning disk confocal unit, the Yokogawa CSU10, requires lasers as the excitation light source. The choice of fluorescent dyes is strongly restricted, however, because only a limited number of laser lines can be introduced into a single confocal system. To overcome this problem, we developed an illumination system in which light from a mercury arc lamp is scrambled to make homogeneous light by passing it through a multi-mode optical fiber. This illumination system provides incoherent light with continuous wavelengths, enabling the observation of a wide range of fluorophores. Using this optical system, we demonstrate both the high-speed imaging (up to 100 Hz) of intracellular Ca(2+) propagation, and the multi-color imaging of Ca(2+) and PKC-gamma dynamics in living cells.

  19. Using multimodal information for the segmentation of fluorescent micrographs with application to virology and microbiology.

    PubMed

    Held, Christian; Wenzel, Jens; Webel, Rike; Marschall, Manfred; Lang, Roland; Palmisano, Ralf; Wittenberg, Thomas

    2011-01-01

    In order to improve the reproducibility and objectivity of fluorescence microscopy based experiments and to enable the evaluation of large datasets, flexible segmentation methods are required which are able to adapt to different stainings and cell types. This adaptation is usually achieved by manual adjustment of the segmentation methods' parameters, which is time-consuming and challenging for biologists with no background in image processing. To avoid this, parameters of the presented methods automatically adapt to user-generated ground truth to determine the best method and the optimal parameter setup. These settings can then be used for segmentation of the remaining images. As robust segmentation methods form the core of such a system, the currently used watershed-transform-based segmentation routine is replaced by a fast-marching level-set-based segmentation routine which incorporates knowledge of the cell nuclei. Our evaluations reveal that incorporation of multimodal information improves segmentation quality for the presented fluorescent datasets.

  20. Synthesis and radiolabeling of a somatostatin analog for multimodal imaging

    NASA Astrophysics Data System (ADS)

    Edwards, W. Barry; Liang, Kexian; Xu, Baogang; Anderson, Carolyn J.; Achilefu, Samuel

    2006-02-01

    A new multimodal imaging agent for imaging the somatostatin receptor has been synthesized and evaluated in vitro and in vivo. A somatostatin analog, conjugated to both 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) and cypate (BS-296), was synthesized entirely on the solid phase (Fmoc) and purified by RP-HPLC. DOTA was added as a ligand for radiometals such as 64Cu or 177Lu for radioimaging or radiotherapy, respectively. Cytate, a cypate-somatostatin analog conjugate, has previously demonstrated the ability to visualize somatostatin receptor-rich tumor xenografts and natural organs by optical imaging techniques. BS-296 exhibited low nanomolar inhibitory capacity toward the binding of radiolabeled somatostatin analogs in cell membranes enriched in the somatostatin receptor, demonstrating the high affinity of this multimodal imaging peptide and indicating its potential as a molecular imaging agent. 64Cu, an isotope for diagnostic imaging and radiotherapy, was selected as the isotope for radiolabeling BS-296. BS-296 was radiolabeled with 64Cu at high specific activity (200 μCi/μg) in 90% radiochemical yield. Addition of 2,5-dihydroxybenzoic acid (gentisic acid) prevented radiolysis of the sample, allowing 64Cu-BS-296 to be studied the day following radiolabeling. Furthermore, inclusion of DMSO at a level of 20% was found not to interfere with radiolabeling yields and prevented the adherence of 64Cu-BS-296 to the walls of the reaction vessel.

  1. A multimodal imaging approach enables in vivo assessment of antifungal treatment in a mouse model of invasive pulmonary aspergillosis.

    PubMed

    Poelmans, Jennifer; Himmelreich, Uwe; Vanherp, Liesbeth; Zhai, Luca; Hillen, Amy; Holvoet, Bryan; Belderbos, Sarah; Brock, Matthias; Maertens, Johan; Velde, Greetje Vande; Lagrou, Katrien

    2018-05-14

    Aspergillus fumigatus causes life-threatening lung infections in immunocompromised patients. Mouse models are extensively used in research to assess the in vivo efficacy of antifungals. In recent years, there has been an increasing interest in the use of non-invasive imaging techniques to evaluate experimental infections. However, single imaging modalities have limitations concerning the type of information they can provide. In this study, magnetic resonance imaging and bioluminescence imaging were combined to obtain longitudinal information on the extent of developing lesions and the fungal load in a leucopenic mouse model of invasive pulmonary aspergillosis (IPA). This multimodal imaging approach was used to assess changes occurring within the lungs of infected mice receiving voriconazole treatment starting at different time points after infection. Results showed that IPA development depends on the inoculum size used to infect animals and that disease can be successfully prevented or treated by initiating intervention during early stages of infection. Furthermore, we demonstrated that reduction of the fungal load is not necessarily associated with the disappearance of lesions on anatomical lung images, especially when antifungal treatment coincides with immune recovery. In conclusion, multimodal imaging allows investigation of different aspects of disease progression or recovery by providing complementary information on dynamic processes, which is highly useful for assessing the efficacy of (novel) therapeutic compounds in a time- and labor-efficient manner. Copyright © 2018 American Society for Microbiology.

  2. Anti-Stokes effect CCD camera and SLD based optical coherence tomography for full-field imaging in the 1550nm region

    NASA Astrophysics Data System (ADS)

    Kredzinski, Lukasz; Connelly, Michael J.

    2012-06-01

    Full-field optical coherence tomography (OCT) is an en-face interferometric imaging technology capable of carrying out high-resolution cross-sectional imaging of the internal microstructure of an examined specimen in a non-invasive manner. The presented system is based on competitively priced optical components available at the main optical communications band located in the 1550 nm region. It consists of a superluminescent diode and an anti-Stokes imaging device. The single-mode fibre-coupled SLD was connected to a multi-mode fibre inserted into a mode scrambler to obtain spatially incoherent illumination, suitable for the wide-field OCT modality in terms of crosstalk suppression and image enhancement. This relatively inexpensive system, with a moderate resolution of approximately 24 µm x 12 µm (axial x lateral), was constructed to perform 3D cross-sectional imaging of a human tooth. To our knowledge this is the first 1550 nm full-field OCT system reported.

  3. Large Margin Multi-Modal Multi-Task Feature Extraction for Image Classification.

    PubMed

    Yong Luo; Yonggang Wen; Dacheng Tao; Jie Gui; Chao Xu

    2016-01-01

    The features used in many image analysis-based applications are frequently of very high dimension. Feature extraction offers several advantages in high-dimensional cases, and many recent studies have used multi-task feature extraction approaches, which often outperform single-task feature extraction approaches. However, most of these methods are limited in that they only consider data represented by a single type of feature, even though features usually represent images from multiple modalities. We, therefore, propose a novel large margin multi-modal multi-task feature extraction (LM3FE) framework for handling multi-modal features for image classification. In particular, LM3FE simultaneously learns the feature extraction matrix for each modality and the modality combination coefficients. In this way, LM3FE not only handles correlated and noisy features, but also utilizes the complementarity of different modalities to further help reduce feature redundancy in each modality. The large margin principle employed also helps to extract strongly predictive features, so that they are more suitable for prediction (e.g., classification). An alternating algorithm is developed for problem optimization, and each subproblem can be efficiently solved. Experiments on two challenging real-world image data sets demonstrate the effectiveness and superiority of the proposed method.

  4. Colocalization of cellular nanostructure using confocal fluorescence and partial wave spectroscopy.

    PubMed

    Chandler, John E; Stypula-Cyrus, Yolanda; Almassalha, Luay; Bauer, Greta; Bowen, Leah; Subramanian, Hariharan; Szleifer, Igal; Backman, Vadim

    2017-03-01

    A new multimodal confocal microscope has been developed, which includes a parallel Partial Wave Spectroscopic (PWS) microscopy path. This combination of modalities allows molecular-specific sensing of nanoscale intracellular structure using fluorescent labels. Combining molecular specificity and sensitivity to nanoscale structure allows localization of nanostructural intracellular changes, which is critical for understanding the mechanisms of diseases such as cancer. To demonstrate the capabilities of this multimodal instrument, we imaged HeLa cells treated with valinomycin, a potassium ionophore that uncouples oxidative phosphorylation. Colocalization of fluorescence images of the nuclei (Hoechst 33342) and mitochondria (anti-mitochondria conjugated to Alexa Fluor 488) with PWS measurements allowed us to detect a significant decrease in nuclear nanoscale heterogeneity (Σ), while no significant change in Σ was observed at mitochondrial sites. In addition, application of the new multimodal imaging approach was demonstrated on human buccal samples prepared using a cancer screening protocol. These images demonstrate that nanoscale intracellular structure can be studied in healthy and diseased cells at molecular-specific sites. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Love that Book: Multimodal Response to Literature

    ERIC Educational Resources Information Center

    Dalton, Bridget; Grisham, Dana L.

    2013-01-01

    Composing with different modes--image, sound, video and the written word--to respond to and analyze literary and informational text helps students develop as readers and digital communicators. This article showcases five multimodal strategies for engaging children in rich literature-based learning using digital tools and Internet resources.

  6. Toward intravascular morphological and biochemical imaging of atherosclerosis with optical coherence tomography (OCT) and fluorescence lifetime imaging (FLIM) (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Kim, Wihan; Serafino, Michael; Walton, Brian; Jo, Javier A.; Applegate, Brian E.

    2017-02-01

    We have shown in an ex vivo human coronary artery study that the biochemical information derived from FLIM, interpreted in the context of the morphological information from OCT, enables a detailed classification of human coronary plaques associated with atherosclerosis. The identification of lipid-rich plaques prone to erosion or rupture and associated with sudden coronary events can impact current clinical practice as well as future development of targeted therapies for "vulnerable" plaques. In order to realize clinical translation of intravascular OCT/FLIM we have had to develop several key technologies. A multimodal catheter endoscope capable of delivering near-UV excitation for FLIM and shortwave-IR light for OCT has been fabricated using a ball lens design with a double-clad fiber. The OCT illumination and the FLIM excitation propagate down the inner core while the large outer multimode core captures the fluorescence emission. To enable intravascular pullback imaging with this endoscope we have developed an ultra-wideband fiber optic rotary joint using the same double-clad fiber. The rotary joint is based on a lensless design where two cleaved fibers, one fixed and one rotating, are brought into close proximity but not touching. Using water as the lubricant enabled operation over the near-UV to shortwave-IR range. Transmission over this bandwidth has been measured to be near 100% at rotational frequencies up to 147 Hz. The entire system has been assembled and placed on a mobile cart suitable for cath-lab-based imaging. System development, performance, and early ex vivo imaging results will be discussed.

  7. Compressive Coded-Aperture Multimodal Imaging Systems

    NASA Astrophysics Data System (ADS)

    Rueda-Chacon, Hoover F.

    Multimodal imaging refers to the framework of capturing images that span different physical domains such as space, spectrum, depth, time, polarization, and others. For instance, spectral images are modeled as 3D cubes with two spatial and one spectral coordinate. Three-dimensional cubes spanning just the space domain are referred to as depth volumes. Imaging cubes varying in time, spectrum, or depth are referred to as 4D images. Nature itself spans different physical domains, thus imaging our real world demands capturing information in at least 6 different domains simultaneously, giving rise to 3D-spatial+spectral+polarized dynamic sequences. Conventional imaging devices, however, can capture dynamic sequences with up to 3 spectral channels, in real time, by the use of color sensors. Capturing more spectral channels requires scanning methodologies, which demand long acquisition times. In general, to date multimodal imaging requires a sequence of different imaging sensors, placed in tandem, to simultaneously capture the different physical properties of a scene. Then, different fusion techniques are employed to combine all the individual information into a single image. Therefore, new ways to efficiently capture more than 3 spectral channels of 3D time-varying spatial information, in a single or few sensors, are of high interest. Compressive spectral imaging (CSI) is an imaging framework that seeks to optimally capture spectral imagery (tens of spectral channels of 2D spatial information) using fewer measurements than required by traditional sensing procedures, which follow Shannon-Nyquist sampling. Instead of capturing direct one-to-one representations of natural scenes, CSI systems acquire linear random projections of the scene and then solve an optimization algorithm to estimate the 3D spatio-spectral data cube by exploiting the theory of compressive sensing (CS). To date, the coding procedure in CSI has been realized through the use of "block-unblock" coded apertures, commonly implemented as chrome-on-quartz photomasks. These apertures block or transmit the entire spectrum from the scene at given spatial locations, thus modulating the spatial characteristics of the scene. In the first part, this thesis aims to expand the framework of CSI by replacing the traditional block-unblock coded apertures with patterned optical filter arrays, referred to as "color" coded apertures. These apertures are formed by tiny pixelated optical filters, which in turn allow the input image to be modulated not only spatially but spectrally as well, entailing more powerful coding strategies. The proposed colored coded apertures are either synthesized through linear combinations of low-pass, high-pass and band-pass filters, paired with binary pattern ensembles realized by a digital micromirror device (DMD), or experimentally realized through thin-film color-patterned filter arrays. The optical forward model of the proposed CSI architectures will be presented along with the design and proof-of-concept implementations, which achieve noticeable improvements in the quality of the reconstructions compared with conventional block-unblock coded-aperture-based CSI architectures.
On another front, due to the rich information contained in the infrared spectrum as well as the depth domain, this thesis aims to explore multimodal imaging by extending the sensitivity range of current CSI systems to a dual-band visible+near-infrared spectral domain, and it also proposes, for the first time, a new imaging device that simultaneously captures 4D data cubes (2D spatial + 1D spectral + depth) in as few as a single snapshot. Due to the snapshot advantage of this camera, video sequences are possible, thus enabling the joint capture of 5D imagery. It aims to create super-human sensing that will enable the perception of our world in new and exciting ways. With this, we intend to advance the state of the art in compressive sensing systems to extract depth while accurately capturing spatial and spectral material properties. The applications of such a sensor are self-evident in fields such as computer/robotic vision, because they would allow an artificial intelligence to make informed decisions about not only the location of objects within a scene but also their material properties.
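
    The reconstruction step that CSI relies on, recovering a sparse signal from a small number of random linear projections, can be sketched in a few lines. The example below is a generic compressive-sensing toy problem solved with orthogonal matching pursuit; the dimensions, sparsity level, and random measurement matrix are illustrative assumptions and do not model the coded-aperture forward operator described in this thesis.

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(1)
    n, m, k = 256, 80, 8                 # signal length, measurements, sparsity
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # k-sparse "spectrum"

    A = rng.normal(size=(m, n)) / np.sqrt(m)   # random projections (coded measurements)
    y = A @ x                                  # compressive measurements, m << n

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(A, y)
    x_hat = omp.coef_
    print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))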

  8. Depth-resolved imaging of colon tumor using optical coherence tomography and fluorescence laminar optical tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Tang, Qinggong; Frank, Aaron; Wang, Jianting; Chen, Chao-wei; Jin, Lily; Lin, Jon; Chan, Joanne M.; Chen, Yu

    2016-03-01

    Early detection of neoplastic changes remains a critical challenge in clinical cancer diagnosis and treatment. Many cancers arise from epithelial layers such as those of the gastrointestinal (GI) tract. Current standard endoscopic technology is unable to detect those subsurface lesions. Since cancer development is associated with both morphological and molecular alterations, imaging technologies that can quantitatively image a tissue's morphological and molecular biomarkers and assess the depth extent of a lesion in real time, without the need for tissue excision, would be a major advance in GI cancer diagnostics and therapy. In this research, we investigated the feasibility of multi-modal optical imaging, including high-resolution optical coherence tomography (OCT) and depth-resolved high-sensitivity fluorescence laminar optical tomography (FLOT), for structural and molecular imaging. An APC (adenomatous polyposis coli) mouse model was imaged using OCT and FLOT, and the correlated histopathological diagnosis was obtained. Quantitative structural (the scattering coefficient) and molecular (fluorescence intensity) imaging parameters from OCT and FLOT images were developed for multi-parametric analysis. This multi-modal imaging method demonstrated the feasibility of more accurate diagnosis, with 87.4% sensitivity and 87.3% specificity, corresponding to the largest area under the receiver operating characteristic (ROC) curve. This project results in a new non-invasive multi-modal imaging platform for improved GI cancer detection, which is expected to have a major impact on the detection, diagnosis, and characterization of GI cancers, as well as a wide range of epithelial cancers.
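
    The multi-parametric analysis described here, combining a structural parameter (scattering coefficient) with a molecular parameter (fluorescence intensity) and reading sensitivity and specificity off an ROC curve, follows a common pattern that can be sketched as below. The data are simulated and the feature names are illustrative assumptions; this is not the authors' analysis code.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_curve, auc

    rng = np.random.default_rng(2)
    n = 300
    tumor = rng.random(n) < 0.5
    scattering = rng.normal(1.0 + 0.6 * tumor, 0.4)      # hypothetical OCT-derived feature
    fluorescence = rng.normal(1.0 + 0.8 * tumor, 0.5)    # hypothetical FLOT-derived feature

    X = np.column_stack([scattering, fluorescence])
    scores = LogisticRegression().fit(X, tumor).predict_proba(X)[:, 1]

    fpr, tpr, thresholds = roc_curve(tumor, scores)
    best = np.argmax(tpr - fpr)    # Youden's J statistic picks one operating point
    print(f"AUC={auc(fpr, tpr):.3f}  sensitivity={tpr[best]:.2f}  specificity={1 - fpr[best]:.2f}")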

  9. An interventional multispectral photoacoustic imaging platform for the guidance of minimally invasive procedures

    NASA Astrophysics Data System (ADS)

    Xia, Wenfeng; Nikitichev, Daniil I.; Mari, Jean Martial; West, Simeon J.; Ourselin, Sebastien; Beard, Paul C.; Desjardins, Adrien E.

    2015-07-01

    Precise and efficient guidance of medical devices is of paramount importance for many minimally invasive procedures. These procedures include fetal interventions, tumor biopsies and treatments, central venous catheterisations and peripheral nerve blocks. Ultrasound imaging is commonly used for guidance, but it often provides insufficient contrast with which to identify soft tissue structures such as vessels, tumors, and nerves. In this study, a hybrid interventional imaging system that combines ultrasound imaging and multispectral photoacoustic imaging for guiding minimally invasive procedures was developed and characterized. The system provides both structural information from ultrasound imaging and molecular information from multispectral photoacoustic imaging. It uses a commercial linear-array ultrasound imaging probe as the ultrasound receiver, with a multimode optical fiber embedded in a needle to deliver pulsed excitation light to tissue. Co-registration of ultrasound and photoacoustic images is achieved with the use of the same ultrasound receiver for both modalities. Using tissue ex vivo, the system successfully discriminated deeply located fat tissue from the surrounding muscle tissue. The measured photoacoustic spectrum of the fat tissue was in good agreement with the lipid spectrum reported in the literature.

  10. Integrated photoacoustic, ultrasound and fluorescence platform for diagnostic medical imaging-proof of concept study with a tissue mimicking phantom.

    PubMed

    James, Joseph; Murukeshan, Vadakke Matham; Woh, Lye Sun

    2014-07-01

    The structural and molecular heterogeneities of biological tissues demand the interrogation of the samples with multiple energy sources and provide visualization capabilities at varying spatial resolution and depth scales for obtaining complementary diagnostic information. A novel multi-modal imaging approach that uses optical and acoustic energies to perform photoacoustic, ultrasound and fluorescence imaging at multiple resolution scales from the tissue surface and depth is proposed in this paper. The system comprises two distinct forms of hardware-level integration so as to have an integrated imaging system under a single instrumentation set-up. The experimental studies show that the system is capable of mapping high-resolution fluorescence signatures from the surface, and optical absorption and acoustic heterogeneities along the depth (>2 cm) of the tissue, at multi-scale resolution (<1 µm to <0.5 mm).

  11. Exploring infrared neural stimulation with multimodal nonlinear imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Adams, Wilson R.; Mahadevan-Jansen, Anita

    2017-02-01

    Infrared neural stimulation (INS) provides optical control of neural excitability using near- to mid-infrared (mid-IR) light, which allows for spatially selective, artifact-free excitation without the introduction of exogenous agents or genetic modification. Although neural excitability is mediated by a transient temperature increase due to water absorption of IR energy, the molecular nature of IR excitability in neural tissue remains unknown. Current research suggests that transient changes in local tissue temperature give rise to a myriad of cellular responses that have been individually attributed to IR-mediated excitability. To further elucidate the underlying biophysical mechanisms, we have begun work towards employing a novel multimodal nonlinear imaging platform to probe the molecular underpinnings of INS. Our imaging system integrates coherent anti-Stokes Raman scattering (CARS), stimulated Raman scattering (SRS), two-photon excitation fluorescence (TPEF), second-harmonic generation (SHG) and thermal imaging into a single platform that allows for unprecedented co-registration of thermal and biochemical information in real time. Here, we present our work leveraging CARS and SRS in acute thalamocortical brain slice preparations. We observe the evolution of lipid- and protein-specific Raman bands during INS and electrically evoked activity in real time. Combined with two-photon fluorescence and second-harmonic generation, we offer insight into cellular metabolism and membrane dynamics during INS. Thermal imaging allows for the co-registration of acquired biochemical information with temperature information. Our work previews the versatility and capabilities of coherent Raman imaging combined with multiphoton imaging to observe biophysical phenomena for neuroscience applications.

  12. Multimodal full-field optical coherence tomography on biological tissue: toward all optical digital pathology

    NASA Astrophysics Data System (ADS)

    Harms, F.; Dalimier, E.; Vermeulen, P.; Fragola, A.; Boccara, A. C.

    2012-03-01

    Optical Coherence Tomography (OCT) is an efficient technique for in-depth optical biopsy of biological tissues, relying on interferometric selection of ballistic photons. Full-Field Optical Coherence Tomography (FF-OCT) is an alternative approach to Fourier-domain OCT (spectral or swept-source), allowing parallel acquisition of en-face optical sections. Using a medium-numerical-aperture objective, it is possible to reach an isotropic resolution of about 1 x 1 x 1 µm. After stitching a grid of acquired images, FF-OCT gives access to the architecture of the tissue, for both macroscopic and microscopic structures, in a non-invasive process, which makes the technique particularly suitable for applications in pathology. Here we report a multimodal approach to FF-OCT, combining two full-field techniques for collecting a backscattered endogenous OCT image and an exogenous fluorescence image in parallel. Considering the pathological diagnosis of cancer, visualization of cell nuclei is of paramount importance. OCT images, even at the highest resolution, usually fail to identify individual nuclei due to the nature of the optical contrast used. We have built a multimodal optical microscope based on the combination of FF-OCT and Structured Illumination Microscopy (SIM). We used 30x immersion objectives, with a numerical aperture of 1.05, allowing for sub-micron transverse resolution. Fluorescent staining of nuclei was obtained using specific fluorescent dyes such as acridine orange. We present multimodal images of healthy and pathological skin tissue at various scales. This instrumental development paves the way for improvements of standard pathology procedures, as a faster, non-sacrificial, operator-independent digital optical method compared with frozen sections.

  13. Voxel-based automated detection of focal cortical dysplasia lesions using diffusion tensor imaging and T2-weighted MRI data.

    PubMed

    Wang, Yanming; Zhou, Yawen; Wang, Huijuan; Cui, Jin; Nguchu, Benedictor Alexander; Zhang, Xufei; Qiu, Bensheng; Wang, Xiaoxiao; Zhu, Mingwang

    2018-05-21

    The aim of this study was to automatically detect focal cortical dysplasia (FCD) lesions in patients with extratemporal lobe epilepsy by relying on diffusion tensor imaging (DTI) and T2-weighted magnetic resonance imaging (MRI) data. We implemented an automated classifier using voxel-based multimodal features to identify gray and white matter abnormalities of FCD in patient cohorts. In addition to the commonly used T2-weighted image intensity feature, DTI-based features were also utilized. A Gaussian processes for machine learning (GPML) classifier was tested on 12 patients with FCD (8 with histologically confirmed FCD) scanned at 1.5 T and cross-validated using a leave-one-out strategy. Moreover, we compared the multimodal GPML paradigm's performance with that of single-modal GPML and a classical support vector machine (SVM). Our results demonstrated that the GPML performance on DTI-based features (mean AUC = 0.63) matches the GPML performance on the T2-weighted image intensity feature (mean AUC = 0.64). More promisingly, GPML yielded significantly improved performance (mean AUC = 0.76) when DTI-based features were included in the multimodal paradigm. Based on the results, it can also be clearly stated that the proposed GPML strategy performed better and is robust to the unbalanced dataset, in contrast to SVM, which performed poorly (AUC = 0.69). Therefore, the GPML paradigm using multimodal MRI data containing the DTI modality shows promising results for detection of FCD lesions and provides an effective direction for future research. Copyright © 2018 Elsevier Inc. All rights reserved.
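
    The evaluation scheme described here, a Gaussian-process classifier on voxel-wise multimodal features scored by leave-one-out cross-validation and AUC, can be sketched as follows. The features, effect sizes, and sample size are simulated assumptions; this is not the authors' implementation and uses scikit-learn's generic Gaussian-process classifier rather than the GPML toolbox.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessClassifier
    from sklearn.model_selection import LeaveOneOut
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(3)
    n = 60
    lesion = rng.random(n) < 0.5
    t2_intensity = rng.normal(0.5 + 0.4 * lesion, 0.3)    # hypothetical T2-based feature
    fa = rng.normal(0.5 - 0.3 * lesion, 0.3)              # hypothetical DTI (FA) feature
    X = np.column_stack([t2_intensity, fa])               # multimodal feature vector

    probs = np.empty(n)
    for train, test in LeaveOneOut().split(X):
        # Refit the classifier with one sample held out and predict that sample.
        clf = GaussianProcessClassifier().fit(X[train], lesion[train])
        probs[test] = clf.predict_proba(X[test])[:, 1]

    print("leave-one-out AUC:", round(roc_auc_score(lesion, probs), 3))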

  14. High-Resolution, Non-Invasive Imaging of Upper Vocal Tract Articulators Compatible with Human Brain Recordings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bouchard, Kristofer E.; Conant, David F.; Anumanchipalli, Gopala K.

    A complete neurobiological understanding of speech motor control requires determination of the relationship between simultaneously recorded neural activity and the kinematics of the lips, jaw, tongue, and larynx. Many speech articulators are internal to the vocal tract, and therefore simultaneously tracking the kinematics of all articulators is nontrivial-especially in the context of human electrophysiology recordings. Here, we describe a noninvasive, multi-modal imaging system to monitor vocal tract kinematics, demonstrate this system in six speakers during production of nine American English vowels, and provide new analysis of such data. Classification and regression analysis revealed considerable variability in the articulator-to-acoustic relationship across speakers. Non-negative matrix factorization extracted basis sets capturing vocal tract shapes allowing for higher vowel classification accuracy than traditional methods. Statistical speech synthesis generated speech from vocal tract measurements, and we demonstrate perceptual identification. We demonstrate the capacity to predict lip kinematics from ventral sensorimotor cortical activity. These results demonstrate a multi-modal system to non-invasively monitor articulator kinematics during speech production, describe novel analytic methods for relating kinematic data to speech acoustics, and provide the first decoding of speech kinematics from electrocorticography. These advances will be critical for understanding the cortical basis of speech production and the creation of vocal prosthetics.
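
    The non-negative matrix factorization step mentioned here, extracting a small set of basis shapes whose per-frame weights can then feed a classifier, can be sketched with a generic example. The data matrix below is random and all dimensions are illustrative assumptions; the real input would be articulator coordinates tracked over time.

    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(4)
    n_frames, n_points, n_bases = 200, 40, 5
    V = rng.random((n_frames, n_points))          # non-negative articulator data per frame

    model = NMF(n_components=n_bases, init="nndsvda", max_iter=500)
    W = model.fit_transform(V)                    # per-frame weights on each basis shape
    H = model.components_                         # basis "vocal tract shapes"
    print("reconstruction error:", round(model.reconstruction_err_, 3))
    # The weight matrix W can replace raw coordinates as input to a vowel classifier.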

  15. High-Resolution, Non-Invasive Imaging of Upper Vocal Tract Articulators Compatible with Human Brain Recordings

    PubMed Central

    Anumanchipalli, Gopala K.; Dichter, Benjamin; Chaisanguanthum, Kris S.; Johnson, Keith; Chang, Edward F.

    2016-01-01

    A complete neurobiological understanding of speech motor control requires determination of the relationship between simultaneously recorded neural activity and the kinematics of the lips, jaw, tongue, and larynx. Many speech articulators are internal to the vocal tract, and therefore simultaneously tracking the kinematics of all articulators is nontrivial—especially in the context of human electrophysiology recordings. Here, we describe a noninvasive, multi-modal imaging system to monitor vocal tract kinematics, demonstrate this system in six speakers during production of nine American English vowels, and provide new analysis of such data. Classification and regression analysis revealed considerable variability in the articulator-to-acoustic relationship across speakers. Non-negative matrix factorization extracted basis sets capturing vocal tract shapes allowing for higher vowel classification accuracy than traditional methods. Statistical speech synthesis generated speech from vocal tract measurements, and we demonstrate perceptual identification. We demonstrate the capacity to predict lip kinematics from ventral sensorimotor cortical activity. These results demonstrate a multi-modal system to non-invasively monitor articulator kinematics during speech production, describe novel analytic methods for relating kinematic data to speech acoustics, and provide the first decoding of speech kinematics from electrocorticography. These advances will be critical for understanding the cortical basis of speech production and the creation of vocal prosthetics. PMID:27019106

  16. High-Resolution, Non-Invasive Imaging of Upper Vocal Tract Articulators Compatible with Human Brain Recordings

    DOE PAGES

    Bouchard, Kristofer E.; Conant, David F.; Anumanchipalli, Gopala K.; ...

    2016-03-28

    A complete neurobiological understanding of speech motor control requires determination of the relationship between simultaneously recorded neural activity and the kinematics of the lips, jaw, tongue, and larynx. Many speech articulators are internal to the vocal tract, and therefore simultaneously tracking the kinematics of all articulators is nontrivial-especially in the context of human electrophysiology recordings. Here, we describe a noninvasive, multi-modal imaging system to monitor vocal tract kinematics, demonstrate this system in six speakers during production of nine American English vowels, and provide new analysis of such data. Classification and regression analysis revealed considerable variability in the articulator-to-acoustic relationship across speakers. Non-negative matrix factorization extracted basis sets capturing vocal tract shapes allowing for higher vowel classification accuracy than traditional methods. Statistical speech synthesis generated speech from vocal tract measurements, and we demonstrate perceptual identification. We demonstrate the capacity to predict lip kinematics from ventral sensorimotor cortical activity. These results demonstrate a multi-modal system to non-invasively monitor articulator kinematics during speech production, describe novel analytic methods for relating kinematic data to speech acoustics, and provide the first decoding of speech kinematics from electrocorticography. These advances will be critical for understanding the cortical basis of speech production and the creation of vocal prosthetics.

  17. Image-guided endobronchial ultrasound

    NASA Astrophysics Data System (ADS)

    Higgins, William E.; Zang, Xiaonan; Cheirsilp, Ronnarit; Byrnes, Patrick; Kuhlengel, Trevor; Bascom, Rebecca; Toth, Jennifer

    2016-03-01

    Endobronchial ultrasound (EBUS) is now recommended as a standard procedure for in vivo verification of extraluminal diagnostic sites during cancer-staging bronchoscopy. Yet, physicians vary considerably in their skills at using EBUS effectively. Regarding existing bronchoscopy guidance systems, studies have shown their effectiveness in the lung-cancer management process. With such a system, a patient's X-ray computed tomography (CT) scan is used to plan a procedure to regions of interest (ROIs). This plan is then used during follow-on guided bronchoscopy. Recent clinical guidelines for lung cancer, however, also dictate using positron emission tomography (PET) imaging for identifying suspicious ROIs and aiding in the cancer-staging process. While researchers have attempted to use guided bronchoscopy systems in tandem with PET imaging and EBUS, no true EBUS-centric guidance system exists. We now propose a full multimodal image-based methodology for guiding EBUS. The complete methodology involves two components: 1) a procedure planning protocol that gives bronchoscope movements appropriate for live EBUS positioning; and 2) a guidance strategy and associated system graphical user interface (GUI) designed for image-guided EBUS. We present results demonstrating the operation of the system.

  18. Focusing and imaging with increased numerical apertures through multimode fibers with micro-fabricated optics.

    PubMed

    Bianchi, S; Rajamanickam, V P; Ferrara, L; Di Fabrizio, E; Liberale, C; Di Leonardo, R

    2013-12-01

    The use of individual multimode optical fibers in endoscopy applications has the potential to provide highly miniaturized and noninvasive probes for microscopy and optical micromanipulation. A few different strategies have been proposed recently, but they all suffer from intrinsically low resolution related to the low numerical aperture of multimode fibers. Here, we show that two-photon polymerization allows for direct fabrication of micro-optics components on the fiber end, resulting in an increase of the numerical aperture to a value that is close to 1. Coupling light into the fiber through a spatial light modulator, we were able to optically scan a submicrometer spot (300 nm FWHM) over an extended region, facing the opposite fiber end. Fluorescence imaging with improved resolution is also demonstrated.

  19. Children Creating Multimodal Stories about a Familiar Environment

    ERIC Educational Resources Information Center

    Kervin, Lisa; Mantei, Jessica

    2017-01-01

    Storytelling is a practice that enables children to apply their literacy skills. This article shares a collaborative literacy strategy devised to enable children to create multimodal stories about their familiar school environment. The strategy uses resources, including the children's own drawings, images from Google Maps, and the Puppet Pals…

  20. Generation of light-sheet at the end of multimode fibre (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Plöschner, Martin; Kollárová, Véra; Dostál, Zbyněk; Nylk, Jonathan; Barton-Owen, Thomas; Ferrier, David E. K.; Chmelik, Radim; Dholakia, Kishan; Cizmár, Tomáš

    2017-02-01

    Light-sheet fluorescence microscopy is quickly becoming one of the cornerstone imaging techniques in biology, as it provides rapid, three-dimensional sectioning of specimens at minimal levels of phototoxicity. It is very appealing to bring this unique combination of imaging properties into an endoscopic setting and be able to perform optical sectioning deep in tissues. Current endoscopic approaches for the delivery of light-sheet illumination are based on a single-mode optical fibre terminated by a cylindrical gradient-index lens. Such a configuration generates a light-sheet plane that is axially fixed, and a mechanical movement of either the sample or the endoscope is required to acquire three-dimensional information about the sample. Furthermore, the axial resolution of this technique is limited to 5 µm. The delivery of the light-sheet through a multimode fibre provides better axial resolution, limited only by its numerical aperture; the light-sheet is scanned holographically without any mechanical movement, and multiple advanced light-sheet imaging modalities, such as Bessel and structured-illumination Bessel beams, are intrinsically supported by the system due to the cylindrical symmetry of the fibre. We discuss the holographic techniques for generation of multiple light-sheet types and demonstrate imaging on a sample of fluorescent beads fixed in agarose gel, as well as on a biological sample of Spirobranchus lamarcki.

  1. Evaluating laser-driven Bremsstrahlung radiation sources for imaging and analysis of nuclear waste packages.

    PubMed

    Jones, Christopher P; Brenner, Ceri M; Stitt, Camilla A; Armstrong, Chris; Rusby, Dean R; Mirfayzi, Seyed R; Wilson, Lucy A; Alejo, Aarón; Ahmed, Hamad; Allott, Ric; Butler, Nicholas M H; Clarke, Robert J; Haddock, David; Hernandez-Gomez, Cristina; Higginson, Adam; Murphy, Christopher; Notley, Margaret; Paraskevoulakos, Charilaos; Jowsey, John; McKenna, Paul; Neely, David; Kar, Satya; Scott, Thomas B

    2016-11-15

    A small-scale sample nuclear waste package, consisting of a 28 mm diameter uranium penny encased in grout, was imaged by absorption contrast radiography using a single pulse exposure from an X-ray source driven by a high-power laser. The Vulcan laser was used to deliver a focused pulse of photons to a tantalum foil, in order to generate a bright burst of highly penetrating X-rays (with energy >500 keV), with a source size of <0.5 mm. BAS-TR and BAS-SR image plates were used for image capture, alongside a newly developed thallium-doped caesium iodide scintillator-based detector coupled to CCD chips. The uranium penny was clearly resolved to sub-mm accuracy over a 30 cm² scan area from a single-shot acquisition. In addition, neutron generation was demonstrated in situ with the X-ray beam, with a single shot, thus demonstrating the potential for multi-modal criticality testing of waste materials. This feasibility study successfully demonstrated non-destructive radiography of encapsulated, high-density nuclear material. With recent developments of high-power laser systems, to 10 Hz operation, a laser-driven multi-modal beamline for waste monitoring applications is envisioned. Copyright © 2016. Published by Elsevier B.V.

  2. Design and rationale of the Mechanical Retrieval and Recanalization of Stroke Clots Using Embolectomy (MR RESCUE) Trial.

    PubMed

    Kidwell, Chelsea S; Jahan, Reza; Alger, Jeffry R; Schaewe, Timothy J; Guzy, Judy; Starkman, Sidney; Elashoff, Robert; Gornbein, Jeffrey; Nenov, Val; Saver, Jeffrey L

    2014-01-01

    Multimodal imaging has the potential to identify acute ischaemic stroke patients most likely to benefit from late recanalization therapies. The general aim of the Mechanical Retrieval and Recanalization of Stroke Clots Using Embolectomy Trial is to investigate whether multimodal imaging can identify patients who will benefit substantially from mechanical embolectomy for the treatment of acute ischaemic stroke up to eight hours from symptom onset. Mechanical Retrieval and Recanalization of Stroke Clots Using Embolectomy is a randomized, controlled, blinded-outcome clinical trial. Acute ischaemic stroke patients with large vessel intracranial internal carotid artery or middle cerebral artery M1 or M2 occlusion enrolled within eight hours of symptom onset are eligible. The study sample size is 120 patients. Patients are randomized to endovascular embolectomy employing the Merci Retriever (Concentric Medical, Mountain View, CA) or the Penumbra System (Penumbra, Alameda, CA) vs. standard medical care, with randomization stratified by penumbral pattern. The primary aim of the trial is to test the hypothesis that the presence of substantial ischaemic penumbral tissue visualized on multimodal imaging (magnetic resonance imaging or computed tomography) predicts patients most likely to respond to mechanical embolectomy for treatment of acute ischaemic stroke due to a large vessel, intracranial occlusion up to eight hours from symptom onset. This hypothesis will be tested by analysing whether pretreatment imaging pattern has a significant interaction with treatment as a determinant of functional outcome based on the distribution of scores on the modified Rankin Scale measure of global disability assessed 90 days post-stroke. Nested hypotheses test for (1) treatment efficacy in patients with a penumbral pattern pretreatment, and (2) absence of treatment benefit (equivalency) in patients without a penumbral pattern pretreatment. An additional aim will only be tested if the primary hypothesis of an interaction is negative: that patients treated with mechanical embolectomy have improved functional outcome vs. standard medical management. © 2012 The Authors. International Journal of Stroke © 2012 World Stroke Organization.
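
    The primary hypothesis here is a treatment-by-imaging-pattern interaction, and its structure can be sketched with a simple logistic model. The sketch below uses simulated data, a binary good/poor outcome instead of the trial's full modified Rankin Scale distribution, and made-up variable names; it only illustrates where the interaction term sits in such an analysis.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)
    n = 120
    df = pd.DataFrame({
        "embolectomy": rng.integers(0, 2, n),   # 1 = endovascular embolectomy
        "penumbral": rng.integers(0, 2, n),     # 1 = penumbral pattern on imaging
    })
    # Simulated benefit only when embolectomy is given to penumbral-pattern patients.
    logit = -0.5 + 1.2 * df.embolectomy * df.penumbral
    df["good_outcome"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    fit = smf.logit("good_outcome ~ embolectomy * penumbral", data=df).fit(disp=0)
    print(fit.params)   # the 'embolectomy:penumbral' coefficient is the interaction of interest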

  3. Multimodal segmentation of optic disc and cup from stereo fundus and SD-OCT images

    NASA Astrophysics Data System (ADS)

    Miri, Mohammad Saleh; Lee, Kyungmoo; Niemeijer, Meindert; Abràmoff, Michael D.; Kwon, Young H.; Garvin, Mona K.

    2013-03-01

    Glaucoma is one of the major causes of blindness worldwide. One important structural parameter for the diagnosis and management of glaucoma is the cup-to-disc ratio (CDR), which tends to become larger as glaucoma progresses. While approaches exist for segmenting the optic disc and cup within fundus photographs, and more recently, within spectral-domain optical coherence tomography (SD-OCT) volumes, no approaches have been reported for the simultaneous segmentation of these structures within both modalities combined. In this work, a multimodal pixel-classification approach for the segmentation of the optic disc and cup within fundus photographs and SD-OCT volumes is presented. In particular, after segmentation of other important structures (such as the retinal layers and retinal blood vessels) and fundus-to-SD-OCT image registration, features are extracted from both modalities and a k-nearest-neighbor classification approach is used to classify each pixel as cup, rim, or background. The approach is evaluated on 70 multimodal image pairs from 35 subjects in a leave-10%-out fashion (by subject). A significant improvement in classification accuracy is obtained using the multimodal approach over that obtained from the corresponding unimodal approach (97.8% versus 95.2%; p < 0.05; paired t-test).
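
    The pixel-classification step described here, k-nearest-neighbor labeling of each pixel as cup, rim, or background from features drawn from both modalities, can be sketched generically. The two features, class separations, and sample counts below are simulated assumptions standing in for the paper's actual fundus and SD-OCT derived features.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(6)
    n = 3000
    labels = rng.integers(0, 3, n)                       # 0=background, 1=rim, 2=cup
    fundus_intensity = rng.normal(labels * 0.5, 0.3)     # hypothetical fundus feature
    oct_depth = rng.normal(labels * 0.4, 0.3)            # hypothetical SD-OCT feature
    X = np.column_stack([fundus_intensity, oct_depth])   # multimodal per-pixel features

    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
    knn = KNeighborsClassifier(n_neighbors=15).fit(X_tr, y_tr)
    print("pixel classification accuracy:", round(knn.score(X_te, y_te), 3))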

  4. Mechanisms of murine cerebral malaria: Multimodal imaging of altered cerebral metabolism and protein oxidation at hemorrhage sites

    PubMed Central

    Hackett, Mark J.; Aitken, Jade B.; El-Assaad, Fatima; McQuillan, James A.; Carter, Elizabeth A.; Ball, Helen J.; Tobin, Mark J.; Paterson, David; de Jonge, Martin D.; Siegele, Rainer; Cohen, David D.; Vogt, Stefan; Grau, Georges E.; Hunt, Nicholas H.; Lay, Peter A.

    2015-01-01

    Using a multimodal biospectroscopic approach, we settle several long-standing controversies over the molecular mechanisms that lead to brain damage in cerebral malaria, which is a major health concern in developing countries because of high levels of mortality and permanent brain damage. Our results provide the first conclusive evidence that important components of the pathology of cerebral malaria include peroxidative stress and protein oxidation within cerebellar gray matter, which are colocalized with elevated nonheme iron at the site of microhemorrhage. Such information could not be obtained previously from routine imaging methods, such as electron microscopy, fluorescence, and optical microscopy in combination with immunocytochemistry, or from bulk assays, where the level of spatial information is restricted to the minimum size of tissue that can be dissected. We describe the novel combination of chemical probe–free, multimodal imaging to quantify molecular markers of disturbed energy metabolism and peroxidative stress, which were used to provide new insights into understanding the pathogenesis of cerebral malaria. In addition to these mechanistic insights, the approach described acts as a template for the future use of multimodal biospectroscopy for understanding the molecular processes involved in a range of clinically important acute and chronic (neurodegenerative) brain diseases to improve treatment strategies. PMID:26824064

  5. A Deep and Autoregressive Approach for Topic Modeling of Multimodal Data.

    PubMed

    Zheng, Yin; Zhang, Yu-Jin; Larochelle, Hugo

    2016-06-01

    Topic modeling based on latent Dirichlet allocation (LDA) has been a framework of choice to deal with multimodal data, such as in image annotation tasks. Another popular approach to model the multimodal data is through deep neural networks, such as the deep Boltzmann machine (DBM). Recently, a new type of topic model called the Document Neural Autoregressive Distribution Estimator (DocNADE) was proposed and demonstrated state-of-the-art performance for text document modeling. In this work, we show how to successfully apply and extend this model to multimodal data, such as simultaneous image classification and annotation. First, we propose SupDocNADE, a supervised extension of DocNADE, that increases the discriminative power of the learned hidden topic features and show how to employ it to learn a joint representation from image visual words, annotation words and class label information. We test our model on the LabelMe and UIUC-Sports data sets and show that it compares favorably to other topic models. Second, we propose a deep extension of our model and provide an efficient way of training the deep model. Experimental results show that our deep model outperforms its shallow version and reaches state-of-the-art performance on the Multimedia Information Retrieval (MIR) Flickr data set.

  6. Drug-related webpages classification based on multi-modal local decision fusion

    NASA Astrophysics Data System (ADS)

    Hu, Ruiguang; Su, Xiaojing; Liu, Yanxin

    2018-03-01

    In this paper, multi-modal local decision fusion is used for drug-related webpage classification. First, meaningful text is extracted through HTML parsing, and effective images are chosen by the FOCARSS algorithm. Second, six SVM classifiers are trained for six kinds of drug-taking instruments, which are represented by PHOG. One SVM classifier is trained for cannabis, which is represented by the mid-feature of a BOW model. For each instance in a webpage, seven SVMs give seven labels for its image, and another seven labels are given by searching for the names of drug-taking instruments and cannabis in its related text. Concatenating the seven image labels and the seven text labels generates the representation of those instances in webpages. Finally, multi-instance learning is used to classify those drug-related webpages. Experimental results demonstrate that the classification accuracy of multi-instance learning with multi-modal local decision fusion is much higher than that of single-modal classification.
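
    The fused representation described here, concatenating per-modality local decisions into a single feature vector before a final classifier, can be sketched as below. The data are simulated, and the final multi-instance learning step is replaced by a plain SVM for brevity, so this illustrates only the decision-fusion idea, not the full method.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    n, n_classes = 400, 7
    drug_related = rng.integers(0, 2, n)

    # Simulated per-modality local decisions: one binary label per instrument class,
    # from an image classifier and from keyword search in the surrounding text.
    image_labels = (rng.random((n, n_classes)) < 0.2 + 0.5 * drug_related[:, None]).astype(int)
    text_labels = (rng.random((n, n_classes)) < 0.1 + 0.6 * drug_related[:, None]).astype(int)
    fused = np.hstack([image_labels, text_labels])       # 14-dimensional fused representation

    X_tr, X_te, y_tr, y_te = train_test_split(fused, drug_related, test_size=0.25, random_state=0)
    clf = SVC(kernel="linear").fit(X_tr, y_tr)
    print("webpage classification accuracy:", round(clf.score(X_te, y_te), 3))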

  7. Multimodal Deep Autoencoder for Human Pose Recovery.

    PubMed

    Hong, Chaoqun; Yu, Jun; Wan, Jian; Tao, Dacheng; Wang, Meng

    2015-12-01

    Video-based human pose recovery is usually conducted by retrieving relevant poses using image features. In the retrieving process, the mapping between 2D images and 3D poses is assumed to be linear in most of the traditional methods. However, their relationships are inherently non-linear, which limits recovery performance of these methods. In this paper, we propose a novel pose recovery method using non-linear mapping with multi-layered deep neural network. It is based on feature extraction with multimodal fusion and back-propagation deep learning. In multimodal fusion, we construct hypergraph Laplacian with low-rank representation. In this way, we obtain a unified feature description by standard eigen-decomposition of the hypergraph Laplacian matrix. In back-propagation deep learning, we learn a non-linear mapping from 2D images to 3D poses with parameter fine-tuning. The experimental results on three data sets show that the recovery error has been reduced by 20%-25%, which demonstrates the effectiveness of the proposed method.
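
    A simplified version of the fusion step described here, building a similarity graph over samples from two modalities and taking eigenvectors of its Laplacian as a unified feature description, is sketched below. This uses a plain graph Laplacian on random data rather than the paper's hypergraph Laplacian with low-rank representation; all names and dimensions are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(8)
    n = 100
    modality_a = rng.normal(size=(n, 20))     # e.g., image features
    modality_b = rng.normal(size=(n, 10))     # e.g., pose features

    def affinity(X, sigma=1.0):
        # Gaussian similarity between every pair of samples.
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    W = affinity(modality_a) + affinity(modality_b)      # fused pairwise similarity
    D = np.diag(W.sum(1))
    L = D - W                                            # combinatorial graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)
    unified_features = eigvecs[:, 1:11]                  # 10-dimensional joint embedding
    print(unified_features.shape)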

  8. Integration of Sparse Multi-modality Representation and Anatomical Constraint for Isointense Infant Brain MR Image Segmentation

    PubMed Central

    Wang, Li; Shi, Feng; Gao, Yaozong; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang

    2014-01-01

    Segmentation of infant brain MR images is challenging due to poor spatial resolution, severe partial volume effect, and the ongoing maturation and myelination process. During the first year of life, the brain image contrast between white and gray matter undergoes dramatic changes. In particular, the image contrast inverts around 6–8 months of age, when the white and gray matter tissues are isointense in T1- and T2-weighted images and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a general framework that adopts sparse representation to fuse the multi-modality image information and further incorporates anatomical constraints for brain tissue segmentation. Specifically, we first derive an initial segmentation from a library of aligned images with ground-truth segmentations by using sparse representation in a patch-based fashion for the multi-modality T1, T2 and FA images. The segmentation result is further iteratively refined by integration of the anatomical constraint. The proposed method was evaluated on 22 infant brain MR images acquired at around 6 months of age by using a leave-one-out cross-validation, as well as on 10 other unseen testing subjects. Our method achieved a high accuracy for the Dice ratios that measure the volume overlap between automated and manual segmentations, i.e., 0.889±0.008 for white matter and 0.870±0.006 for gray matter. PMID:24291615
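
    The Dice ratio used here to score the overlap between automated and manual segmentations is straightforward to compute; a minimal sketch follows, with simulated binary masks standing in for real tissue segmentations.

    import numpy as np

    def dice(seg_a, seg_b):
        # Dice similarity coefficient between two binary masks.
        seg_a, seg_b = seg_a.astype(bool), seg_b.astype(bool)
        intersection = np.logical_and(seg_a, seg_b).sum()
        return 2.0 * intersection / (seg_a.sum() + seg_b.sum())

    rng = np.random.default_rng(9)
    auto = rng.random((64, 64, 64)) < 0.3        # hypothetical automated tissue mask
    manual = auto.copy()
    flip = rng.random(auto.shape) < 0.05         # perturb 5% of voxels to mimic disagreement
    manual[flip] = ~manual[flip]
    print("Dice ratio:", round(dice(auto, manual), 3))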

  9. Fluorescence labeled microbubbles for multimodal imaging.

    PubMed

    Barrefelt, Åsa; Zhao, Ying; Larsson, Malin K; Egri, Gabriella; Kuiper, Raoul V; Hamm, Jörg; Saghafian, Maryam; Caidahl, Kenneth; Brismar, Torkel B; Aspelin, Peter; Heuchel, Rainer; Muhammed, Mamoun; Dähne, Lars; Hassan, Moustapha

    2015-08-28

    Air-filled polyvinyl alcohol microbubbles (PVA-MBs) were recently introduced as a contrast agent for ultrasound imaging. In the present study, we explore the possibility of extending their application in multimodal imaging by labeling them with a near infrared (NIR) fluorophore, VivoTag-680. PVA-MBs were injected intravenously into FVB/N female mice and their dynamic biodistribution over 24 h was determined by 3D-fluorescence imaging co-registered with 3D-μCT imaging, to verify the anatomic location. To further confirm the biodistribution results from in vivo imaging, organs were removed and examined histologically using bright field and fluorescence microscopy. Fluorescence imaging detected PVA-MB accumulation in the lungs within the first 30 min post-injection. Redistribution to a low extent was observed in liver and kidneys at 4 h, and to a high extent mainly in the liver and spleen at 24 h. Histology confirmed PVA-MB localization in lung capillaries and macrophages. In the liver, they were associated with Kupffer cells; in the spleen, they were located mostly within the marginal-zone. Occasional MBs were observed in the kidney glomeruli and interstitium. The potential application of PVA-MBs as a contrast agent was also studied using ultrasound (US) imaging in subcutaneous and orthotopic pancreatic cancer mouse models, to visualize blood flow within the tumor mass. In conclusion, this study showed that PVA-MBs are useful as a contrast agent for multimodal imaging. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Stereotactic radiation treatment planning and follow-up studies involving fused multimodality imaging.

    PubMed

    Hamm, Klaus D; Surber, Gunnar; Schmücking, Michael; Wurm, Reinhard E; Aschenbach, Rene; Kleinert, Gabriele; Niesen, A; Baum, Richard P

    2004-11-01

    Innovative new software solutions may enable image fusion to produce the desired data superposition for precise target definition and follow-up studies in radiosurgery/stereotactic radiotherapy in patients with intracranial lesions. The aim is to integrate the anatomical and functional information completely into the radiation treatment planning and to achieve an exact comparison for follow-up examinations. Special conditions and advantages of BrainLAB's fully automatic image fusion system are evaluated and described for this purpose. In 458 patients, the radiation treatment planning and some follow-up studies were performed using an automatic image fusion technique involving the use of different imaging modalities. Each fusion was visually checked and corrected as necessary. The computerized tomography (CT) scans for radiation treatment planning (slice thickness 1.25 mm), as well as stereotactic angiography for arteriovenous malformations, were acquired using head fixation with a stereotactic arc or, in the case of stereotactic radiotherapy, with a relocatable stereotactic mask. Different magnetic resonance (MR) imaging sequences (T1, T2, and fluid-attenuated inversion-recovery images) and positron emission tomography (PET) scans were obtained without head fixation. Fusion results and the effects on radiation treatment planning and follow-up studies were analyzed. The precision level of the results of the automatic fusion depended primarily on the image quality, especially the slice thickness and the field homogeneity when using MR images, as well as on patient movement during data acquisition. Fully automated image fusion of different MR, CT, and PET studies was performed for each patient. Only in a few cases was it necessary to correct the fusion manually after visual evaluation. These corrections were minor and did not materially affect treatment planning. High-quality fusion of thin slices of a region of interest with a complete head data set could be performed easily. The target volume for radiation treatment planning could be accurately delineated using multimodal information provided by CT, MR, angiography, and PET studies. The fusion of follow-up image data sets yielded results that could be successfully compared and quantitatively evaluated. Depending on the quality of the originally acquired image, automated image fusion can be a very valuable tool, allowing for fast (approximately 1-2 minutes) and precise fusion of all relevant data sets. Fused multimodality imaging improves the target volume definition for radiation treatment planning. High-quality follow-up image data sets should be acquired for image fusion to provide exactly comparable slices and volumetric results that will contribute to quality control.

  11. Echocardiography in the Era of Multimodality Cardiovascular Imaging

    PubMed Central

    Shah, Benoy Nalin

    2013-01-01

    Echocardiography remains the most frequently performed cardiac imaging investigation and is an invaluable tool for detailed and accurate evaluation of cardiac structure and function. Echocardiography, nuclear cardiology, cardiac magnetic resonance imaging, and cardiovascular-computed tomography comprise the subspeciality of cardiovascular imaging, and these techniques are often used together for a multimodality, comprehensive assessment of a number of cardiac diseases. This paper provides the general cardiologist and physician with an overview of state-of-the-art modern echocardiography, summarising established indications as well as highlighting advances in stress echocardiography, three-dimensional echocardiography, deformation imaging, and contrast echocardiography. Strengths and limitations of echocardiography are discussed as well as the growing role of real-time three-dimensional echocardiography in the guidance of structural heart interventions in the cardiac catheter laboratory. PMID:23878804

  12. Decomposing Multifractal Crossovers

    PubMed Central

    Nagy, Zoltan; Mukli, Peter; Herman, Peter; Eke, Andras

    2017-01-01

    Physiological processes, such as the brain's resting-state electrical activity or hemodynamic fluctuations, exhibit scale-free temporal structuring. However, impacts common in biological systems, such as noise, multiple signal generators, or filtering by transport functions, result in multimodal scaling that cannot be reliably assessed by standard analytical tools that assume unimodal scaling. Here, we present two methods to identify breakpoints or crossovers in multimodal multifractal scaling functions. These methods incorporate the robust iterative fitting approach of the focus-based multifractal formalism (FMF). The first approach (moment-wise scaling range adaptivity) allows for a breakpoint-based adaptive treatment that analyzes segregated scale-invariant ranges. The second method (scaling function decomposition method, SFD) is a crossover-based design aimed at decomposing signal constituents from multimodal scaling functions resulting from signal addition or co-sampling, such as contamination by uncorrelated fractals. We demonstrated that these methods can handle multimodal, mono- or multifractal, and exact or empirical signals alike. Their precision was numerically characterized on ideal signals, and a robust performance was demonstrated on exemplary empirical signals capturing resting-state brain dynamics by near infrared spectroscopy (NIRS), electroencephalography (EEG), and blood oxygen level-dependent functional magnetic resonance imaging (fMRI-BOLD). The NIRS and fMRI-BOLD low-frequency fluctuations were dominated by a multifractal component over an underlying biologically relevant random noise, thus forming a bimodal signal. The crossover between the EEG signal components was found at the boundary between the δ and θ bands, suggesting an independent generator for the multifractal δ rhythm. The robust implementation of the SFD method should be regarded as essential in the seamless processing of large volumes of bimodal fMRI-BOLD imaging data for the topology of multifractal metrics free of the masking effect of the underlying random noise. PMID:28798694
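
    The crossover (breakpoint) detection central to both methods can be illustrated, in a much-simplified form, by a grid search for the scale at which a two-segment linear fit of the log-log scaling function minimizes the residual error. This is only a conceptual sketch, not the FMF-based implementation described in the paper; the function and variable names are invented for illustration.

        import numpy as np

        def find_crossover(log_scales, log_fluct):
            """Grid-search a single breakpoint of a bimodal (two-segment)
            scaling function by minimizing the total squared residuals of
            two independent linear fits in log-log space."""
            best = None
            n = len(log_scales)
            for k in range(3, n - 3):              # require >= 3 points per segment
                sse = 0.0
                for seg in (slice(0, k), slice(k, n)):
                    x, y = log_scales[seg], log_fluct[seg]
                    slope, intercept = np.polyfit(x, y, 1)
                    sse += np.sum((y - (slope * x + intercept)) ** 2)
                if best is None or sse < best[1]:
                    best = (k, sse)
            return log_scales[best[0]]             # log of the estimated crossover scale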

  13. Rat brain imaging using full field optical coherence microscopy with short multimode fiber probe

    NASA Astrophysics Data System (ADS)

    Sato, Manabu; Saito, Daisuke; Kurotani, Reiko; Abe, Hiroyuki; Kawauchi, Satoko; Sato, Shunichi; Nishidate, Izumi

    2017-02-01

    We demonstrated full-field optical coherence microscopy (FF-OCM) using an ultrathin forward-imaging short multimode fiber (SMMF) probe of 50 μm core diameter, 125 μm outer diameter, and 7.4 mm length, which is a typical graded-index multimode fiber for optical communications. The axial resolution was measured to be 2.20 μm, close to the calculated axial resolution of 2.06 μm. The lateral resolution was evaluated to be 4.38 μm using a test pattern. Taking the FWHM of the contrast as the depth of focus (DOF), the DOF of the signal images was 36 μm and that of the OCM images was 66 μm. The contrast of the OCT images was 6.1 times higher than that of the signal images due to the coherence gate. After euthanasia, the rat brain was resected and cut 2.6 mm caudal to bregma. With the SMMF in contact with the primary somatosensory cortex and the agranular insular cortex of the ex vivo brain, OCM images were acquired 100 times in 2 μm steps. 3D OCM images of the brain were obtained, yielding information on internal structure. The feasibility of an SMMF as an ultrathin forward-imaging probe in full-field OCM has been demonstrated.
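
    The calculated axial resolution quoted above (2.06 μm) presumably follows from the standard coherence-gate expression for a source with a Gaussian spectrum; since the record does not list the center wavelength or bandwidth, the relation is shown here only symbolically (a textbook formula, not taken from the paper):

        \Delta z = \frac{2\ln 2}{\pi}\,\frac{\lambda_0^{2}}{\Delta\lambda}

    where \lambda_0 is the source center wavelength and \Delta\lambda its FWHM bandwidth.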

  14. Multimodality image display station

    NASA Astrophysics Data System (ADS)

    Myers, H. Joseph

    1990-07-01

    The Multi-modality Image Display Station (MIDS) is designed for the use of physicians outside of the radiology department. Connected to a local area network or a host computer, it provides speedy access to digitized radiology images and written diagnostics needed by attending and consulting physicians near the patient bedside. Emphasis has been placed on low cost, high performance and ease of use. The work is being done as a joint study with the University of Texas Southwestern Medical Center at Dallas, and as part of a joint development effort with the Mayo Clinic. MIDS is a prototype, and should not be assumed to be an IBM product.

  15. TiO2-coated fluoride nanoparticles for dental multimodal optical imaging.

    PubMed

    Braz, Ana K S; Moura, Diógenes S; Gomes, Anderson S L; Ohulchanskyy, Tymish Y; Chen, Guanying; Liu, Maixian; Damasco, Jossana; de Araujo, Renato E; Prasad, Paras N

    2018-04-01

    Core-shell nanostructures associated with photonics techniques have found numerous applications in diagnostics and therapy. In this work, we introduce a novel core-shell nanostructure design that serves as a multimodal optical imaging contrast agent for dental adhesion evaluation. This nanostructure consists of a rare-earth-doped (NaYF4:Yb 60%, Tm 0.5%)/NaYF4 particle as the core (hexagonal prism, ~51 nm base side length) and the highly refractive TiO2 material as the shell (thickness of ~15 nm). We show that the TiO2 shell provides enhanced contrast for optical coherence tomography (OCT), while the rare-earth-doped core upconverts excitation light from 975 nm to an emission peaked at 800 nm for photoluminescence imaging. OCT and photoluminescence wide-field images of a human tooth were demonstrated with this core-shell nanoparticle contrast agent. In addition, the described core-shell nanoparticles (CSNps) were dispersed in the primer of a commercially available dental bonding system, allowing clear identification of dental adhesive layers with OCT. We found that the presence of the CSNps in the adhesive induced a 67% enhancement of the scattering coefficient, significantly increasing the OCT contrast. Moreover, our results highlight that the upconversion photoluminescence in the near-infrared spectral region is suitable for imaging deep dental tissue. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Intrasubject multimodal groupwise registration with the conditional template entropy.

    PubMed

    Polfliet, Mathias; Klein, Stefan; Huizinga, Wyke; Paulides, Margarethus M; Niessen, Wiro J; Vandemeulebroucke, Jef

    2018-05-01

    Image registration is an important task in medical image analysis. Whereas most methods are designed for the registration of two images (pairwise registration), there is an increasing interest in simultaneously aligning more than two images using groupwise registration. Multimodal registration in a groupwise setting remains difficult, due to the lack of generally applicable similarity metrics. In this work, a novel similarity metric for such groupwise registration problems is proposed. The metric calculates the sum of the conditional entropy between each image in the group and a representative template image constructed iteratively using principal component analysis. The proposed metric is validated in extensive experiments on synthetic and intrasubject clinical image data. These experiments showed equivalent or improved registration accuracy compared to other state-of-the-art (dis)similarity metrics and improved transformation consistency compared to pairwise mutual information. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
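
    A minimal sketch of the metric's core quantity, assuming the template image is already available (the paper constructs it iteratively with principal component analysis): the conditional entropy of each group image given the template is estimated from a joint histogram and summed. Function names and the histogram bin count are illustrative choices, not the authors' implementation.

        import numpy as np

        def conditional_entropy(img, template, bins=64):
            """H(img | template) estimated from a 2-D joint intensity histogram."""
            joint, _, _ = np.histogram2d(img.ravel(), template.ravel(), bins=bins)
            p_joint = joint / joint.sum()
            p_t = p_joint.sum(axis=0)                      # marginal of the template
            nz = p_joint > 0
            h_joint = -np.sum(p_joint[nz] * np.log(p_joint[nz]))
            h_t = -np.sum(p_t[p_t > 0] * np.log(p_t[p_t > 0]))
            return h_joint - h_t                           # H(I,T) - H(T)

        def conditional_template_entropy(images, template):
            """Groupwise dissimilarity: sum of conditional entropies of each
            image in the group given a representative template image."""
            return sum(conditional_entropy(im, template) for im in images)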

  17. BNU-LSVED: a multimodal spontaneous expression database in educational environment

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Wei, Qinglan; He, Jun; Yu, Lejun; Zhu, Xiaoming

    2016-09-01

    In the field of pedagogy or educational psychology, emotions are treated as very important factors that are closely associated with cognitive processes. Hence, it is meaningful for teachers to analyze students' emotions in classrooms, thus adjusting their teaching activities and improving students' individual development. To provide a benchmark for different expression recognition algorithms, a large collection of training and test data recorded in classroom environments is urgently needed. In this paper, we present a multimodal spontaneous expression database collected in a real learning environment. To collect the data, students watched seven kinds of teaching videos and were simultaneously filmed by a camera. Trained coders assigned one of five learning expression labels to each image sequence extracted from the captured videos. The resulting subset consists of 554 multimodal spontaneous expression image sequences (22,160 frames) recorded in real classrooms. The database has four main advantages. 1) Because the data were recorded in real classroom environments, the subjects' distance from the camera and the lighting vary considerably between image sequences. 2) All the data are natural spontaneous responses to teaching videos. 3) The database also contains nonverbal behavior, including eye movement, head posture and gestures, from which a student's affective state during the courses can be inferred. 4) The video sequences contain different kinds of temporal activation patterns. In addition, we have demonstrated through Cronbach's alpha that the labels for the image sequences have high reliability.
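
    Label reliability via Cronbach's alpha, as mentioned above, can be computed in a few lines; the sketch below assumes a hypothetical matrix of numerically coded labels with one row per image sequence and one column per coder, and is not taken from the authors' tooling.

        import numpy as np

        def cronbach_alpha(ratings):
            """Cronbach's alpha for an (n_sequences x n_coders) array of
            numerically coded labels; a standard reliability estimate."""
            ratings = np.asarray(ratings, dtype=float)
            k = ratings.shape[1]                            # number of coders
            item_var = ratings.var(axis=0, ddof=1).sum()    # per-coder variances
            total_var = ratings.sum(axis=1).var(ddof=1)     # variance of summed scores
            return (k / (k - 1.0)) * (1.0 - item_var / total_var)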

  18. Computer-aided detection of prostate cancer in T2-weighted MRI within the peripheral zone

    NASA Astrophysics Data System (ADS)

    Rampun, Andrik; Zheng, Ling; Malcolm, Paul; Tiddeman, Bernie; Zwiggelaar, Reyer

    2016-07-01

    In this paper we propose a prostate cancer computer-aided diagnosis (CAD) system and suggest a set of discriminant texture descriptors extracted from T2-weighted MRI data which can serve as a good basis for a multimodality system. For this purpose, 215 texture descriptors were extracted and eleven different classifiers were employed to achieve the best possible results. The proposed method was tested on 418 T2-weighted MR images from 45 patients and evaluated using 9-fold cross validation with five patients in each fold. The results were comparable to those of existing CAD systems using multimodality MRI. We achieved area under the receiver operating characteristic curve (Az) values of 90.0% ± 7.6%, 89.5% ± 8.9%, 87.9% ± 9.3% and 87.4% ± 9.2% for Bayesian network, ADTree, random forest and multilayer perceptron classifiers, respectively, while a meta-voting classifier using average probability as the combination rule achieved 92.7% ± 7.4%.
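
    The evaluation protocol above (9-fold cross-validation with all images of a patient kept in one fold, scored by the area under the ROC curve) can be sketched roughly with scikit-learn as follows; the feature matrix, labels and patient IDs are placeholders, and a random forest stands in for the full set of eleven classifiers.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import GroupKFold

        def patientwise_cv_auc(X, y, patient_ids, n_splits=9):
            """Patient-grouped cross-validation (numpy arrays as input) so that
            all images from one patient stay in the same fold; returns the mean
            and standard deviation of the per-fold AUC."""
            aucs = []
            for train_idx, test_idx in GroupKFold(n_splits=n_splits).split(X, y, groups=patient_ids):
                clf = RandomForestClassifier(n_estimators=200, random_state=0)
                clf.fit(X[train_idx], y[train_idx])
                scores = clf.predict_proba(X[test_idx])[:, 1]
                aucs.append(roc_auc_score(y[test_idx], scores))
            return np.mean(aucs), np.std(aucs)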

  19. Fall Risk Assessment and Early-Warning for Toddler Behaviors at Home

    PubMed Central

    Yang, Mau-Tsuen; Chuang, Min-Wen

    2013-01-01

    Accidental falls are the major cause of serious injuries in toddlers, with most of these falls happening at home. Instead of providing immediate fall detection based on short-term observations, this paper proposes an early-warning childcare system to monitor fall-prone behaviors of toddlers at home. Using 3D human skeleton tracking and floor plane detection based on depth images captured by a Kinect system, eight fall-prone behavioral modules of toddlers are developed and organized according to four essential criteria: posture, motion, balance, and altitude. The final fall risk assessment is generated by a multi-modal fusion using either a weighted mean thresholding or a support vector machine (SVM) classification. Optimizations are performed to determine local parameter in each module and global parameters of the multi-modal fusion. Experimental results show that the proposed system can assess fall risks and trigger alarms with an accuracy rate of 92% at a speed of 20 frames per second. PMID:24335727

  20. Fall risk assessment and early-warning for toddler behaviors at home.

    PubMed

    Yang, Mau-Tsuen; Chuang, Min-Wen

    2013-12-10

    Accidental falls are the major cause of serious injuries in toddlers, with most of these falls happening at home. Instead of providing immediate fall detection based on short-term observations, this paper proposes an early-warning childcare system to monitor fall-prone behaviors of toddlers at home. Using 3D human skeleton tracking and floor plane detection based on depth images captured by a Kinect system, eight fall-prone behavioral modules of toddlers are developed and organized according to four essential criteria: posture, motion, balance, and altitude. The final fall risk assessment is generated by a multi-modal fusion using either a weighted mean thresholding or a support vector machine (SVM) classification. Optimizations are performed to determine local parameter in each module and global parameters of the multi-modal fusion. Experimental results show that the proposed system can assess fall risks and trigger alarms with an accuracy rate of 92% at a speed of 20 frames per second.
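
    One of the two fusion rules described above, weighted-mean thresholding, reduces to a few lines; the weights and threshold below are placeholders rather than the optimized values reported in the paper.

        import numpy as np

        def fused_fall_risk(module_scores, weights, threshold=0.5):
            """Weighted-mean fusion of the eight behavioral-module scores
            (each assumed normalized to [0, 1]); returns the fused risk and
            whether an alarm should be triggered."""
            module_scores = np.asarray(module_scores, dtype=float)
            weights = np.asarray(weights, dtype=float)
            risk = float(np.dot(weights, module_scores) / weights.sum())
            return risk, risk >= threshold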

  1. Gelatin-based Hydrogel Degradation and Tissue Interaction in vivo: Insights from Multimodal Preclinical Imaging in Immunocompetent Nude Mice.

    PubMed

    Tondera, Christoph; Hauser, Sandra; Krüger-Genge, Anne; Jung, Friedrich; Neffe, Axel T; Lendlein, Andreas; Klopfleisch, Robert; Steinbach, Jörg; Neuber, Christin; Pietzsch, Jens

    2016-01-01

    Hydrogels based on gelatin have evolved as promising multifunctional biomaterials. Gelatin is crosslinked with lysine diisocyanate ethyl ester (LDI), and the molar ratio of gelatin and LDI in the starting material mixture determines the elastic properties of the resulting hydrogel. In order to investigate the clinical potential of these biopolymers, hydrogels with different ratios of gelatin and diisocyanate (3-fold (G10_LNCO3) and 8-fold (G10_LNCO8) molar excess of isocyanate groups) were subcutaneously implanted in mice (uni- or bilateral implantation). Degradation and biomaterial-tissue interaction were investigated in vivo (MRI, optical imaging, PET) and ex vivo (autoradiography, histology, serum analysis). Multimodal imaging revealed that the number of covalent net points correlates well with degradation time, which allows for targeted modification of hydrogels based on the properties of the tissue to be replaced. Importantly, the degradation time was also dependent on the number of implants per animal. Despite local mechanisms of tissue remodeling, no adverse tissue responses were observed, either locally or systemically. Finally, this preclinical investigation in immunocompetent mice clearly demonstrated a complete restoration of the original healthy tissue.

  2. Multimodal nanoparticle imaging agents: design and applications

    NASA Astrophysics Data System (ADS)

    Burke, Benjamin P.; Cawthorne, Christopher; Archibald, Stephen J.

    2017-10-01

    Molecular imaging, where the location of molecules or nanoscale constructs can be tracked in the body to report on disease or biochemical processes, is rapidly expanding to include combined modality or multimodal imaging. No single imaging technique can offer the optimum combination of properties (e.g. resolution, sensitivity, cost, availability). The rapid technological advances in hardware to scan patients, and software to process and fuse images, are pushing the boundaries of novel medical imaging approaches, and hand-in-hand with this is the requirement for advanced and specific multimodal imaging agents. These agents can be detected using a selection from radioisotope, magnetic resonance and optical imaging, among others. Nanoparticles offer great scope in this area as they lend themselves, via facile modification procedures, to act as multifunctional constructs. They have relevance as therapeutics and drug delivery agents that can be tracked by molecular imaging techniques with the particular development of applications in optically guided surgery and as radiosensitizers. There has been a huge amount of research work to produce nanoconstructs for imaging, and the parameters for successful clinical translation and validation of therapeutic applications are now becoming much better understood. It is an exciting time of progress for these agents as their potential is closer to being realized with translation into the clinic. The coming 5-10 years will be critical, as we will see if the predicted improvement in clinical outcomes becomes a reality. Some of the latest advances in combination modality agents are selected and the progression pathway to clinical trials analysed. This article is part of the themed issue 'Challenges for chemistry in molecular imaging'.

  3. Appearance of osteolysis with melorheostosis: redefining the disease or a new disorder? A novel case report with multimodality imaging.

    PubMed

    Osher, Lawrence S; Blazer, Marie Mantini; Bumpus, Kelly

    2013-01-01

    We present a case report of melorheostosis with the novel radiographic finding of underlying cortical resorption. A number of radiographic patterns of melorheostosis have been described; however, the combination of new bone formation and resorption of the original cortex appears unique. Although the presence of underlying lysis has been postulated in published studies, direct radiographic evidence of bony resorption in melorheostosis has not been reported. These findings can be subtle and might go unnoticed using standard imaging. An in-depth review of the radiographic features is presented, including multimodality imaging with magnetic resonance imaging and computed tomography. Copyright © 2013 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  4. Multimodality hard-x-ray imaging of a chromosome with nanoscale spatial resolution

    DOE PAGES

    Yan, Hanfei; Nazaretski, Evgeny; Lauer, Kenneth R.; ...

    2016-02-05

    Here, we developed a scanning hard x-ray microscope using a new class of x-ray nano-focusing optic called a multilayer Laue lens and imaged a chromosome with nanoscale spatial resolution. The combination of the hard x-ray's superior penetration power, high sensitivity to elemental composition, high spatial-resolution and quantitative analysis creates a unique tool with capabilities that other microscopy techniques cannot provide. Using this microscope, we simultaneously obtained absorption-, phase-, and fluorescence-contrast images of Pt-stained human chromosome samples. The high spatial-resolution of the microscope and its multi-modality imaging capabilities enabled us to observe the internal ultra-structures of a thick chromosome without sectioning it.

  5. Multimodality hard-x-ray imaging of a chromosome with nanoscale spatial resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Hanfei; Nazaretski, Evgeny; Lauer, Kenneth R.

    Here, we developed a scanning hard x-ray microscope using a new class of x-ray nano-focusing optic called a multilayer Laue lens and imaged a chromosome with nanoscale spatial resolution. The combination of the hard x-ray's superior penetration power, high sensitivity to elemental composition, high spatial-resolution and quantitative analysis creates a unique tool with capabilities that other microscopy techniques cannot provide. Using this microscope, we simultaneously obtained absorption-, phase-, and fluorescence-contrast images of Pt-stained human chromosome samples. The high spatial-resolution of the microscope and its multi-modality imaging capabilities enabled us to observe the internal ultra-structures of a thick chromosome without sectioning it.

  6. Quantitative fluorescence tomography using a trimodality system: in vivo validation

    PubMed Central

    Lin, Yuting; Barber, William C.; Iwanczyk, Jan S.; Roeck, Werner W.; Nalcioglu, Orhan; Gulsen, Gultekin

    2010-01-01

    A fully integrated trimodality fluorescence, diffuse optical, and x-ray computed tomography (FT∕DOT∕XCT) system for small animal imaging is reported in this work. The main purpose of this system is to obtain quantitatively accurate fluorescence concentration images using a multimodality approach. XCT offers anatomical information, while DOT provides the necessary background optical property map to improve FT image accuracy. The quantitative accuracy of this trimodality system is demonstrated in vivo. In particular, we show that a 2-mm-diam fluorescence inclusion located 8 mm deep in a nude mouse can only be localized when functional a priori information from DOT is available. However, the error in the recovered fluorophore concentration is nearly 87%. On the other hand, the fluorophore concentration can be accurately recovered within 2% error when both DOT functional and XCT structural a priori information are utilized together to guide and constrain the FT reconstruction algorithm. PMID:20799770
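
    The role of the XCT/DOT a priori information can be illustrated with a generic "soft prior" regularized linear reconstruction, in which voxels sharing an anatomical label are encouraged toward a common value. This is only a schematic stand-in for the authors' FT reconstruction algorithm; the sensitivity matrix A, data vector b and label map are hypothetical.

        import numpy as np

        def soft_prior_reconstruction(A, b, labels, lam=1e-2):
            """Solve min ||A x - b||^2 + lam * ||L x||^2, where L penalizes the
            deviation of each unknown from the mean of its anatomical region
            (labels from, e.g., an XCT segmentation)."""
            n = A.shape[1]
            L = np.eye(n)
            for region in np.unique(labels):
                idx = np.where(labels == region)[0]
                L[np.ix_(idx, idx)] -= 1.0 / len(idx)    # (L x)_i = x_i - mean(x_region)
            A_aug = np.vstack([A, np.sqrt(lam) * L])
            b_aug = np.concatenate([b, np.zeros(n)])
            x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
            return x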

  7. Hybrid-modality ocular imaging using a clinical ultrasound system and nanosecond pulsed laser.

    PubMed

    Lim, Hoong-Ta; Matham, Murukeshan Vadakke

    2015-07-01

    Hybrid optical modality imaging is a special type of multimodality imaging significantly used in the recent past in order to harness the strengths of different imaging methods as well as to furnish complementary information beyond that provided by any individual method. We present a hybrid-modality imaging system based on a commercial clinical ultrasound imaging (USI) system using a linear array ultrasound transducer (UST) and a tunable nanosecond pulsed laser as the source. The integrated system uses photoacoustic imaging (PAI) and USI for ocular imaging to provide the complementary absorption and structural information of the eye. In this system, B-mode images from PAI and USI are acquired at 10 Hz and about 40 Hz, respectively. A linear array UST makes the system much faster compared to other ocular imaging systems using a single-element UST to form B-mode images. The results show that the proposed instrumentation is able to incorporate PAI and USI in a single setup. The feasibility and efficiency of this developed probe system was illustrated by using enucleated pig eyes as test samples. It was demonstrated that PAI could successfully capture photoacoustic signals from the iris, anterior lens surface, and posterior pole, while USI could accomplish the mapping of the eye to reveal the structures like the cornea, anterior chamber, lens, iris, and posterior pole. This system and the proposed methodology are expected to enable ocular disease diagnostic applications and can be used as a preclinical imaging system.

  8. Towards the future : the promise of intermodal and multimodal transportation systems

    DOT National Transportation Integrated Search

    1995-02-01

    Issues relating to intermodal and multimodal transportation systems are introduced and defined. Intermodal and multimodal transportation solutions are assessed within the framework of legislative efforts such as the Intermodal Surface Transportation Effic...

  9. Instrumentation for optical remote sensing from space; Proceedings of the Meeting, Cannes, France, November 27-29, 1985

    NASA Technical Reports Server (NTRS)

    Seeley, John S. (Editor); Lear, John W. (Editor); Russak, Sidney L. (Editor); Monfils, Andre (Editor)

    1986-01-01

    Papers are presented on such topics as the development of the Imaging Spectrometer for Shuttle and space platform applications; the in-flight calibration of pushbroom remote sensing instruments for the SPOT program; buttable detector arrays for 1.55-1.7 micron imaging; the design of the Improved Stratospheric and Mesospheric Sounder on the Upper Atmosphere Research Satellite; and SAGE II design and in-orbit performance. Consideration is also given to the Shuttle Imaging Radar-B/C instruments; the Venus Radar Mapper multimode radar system design; various ISO instruments (ISOCAM, ISOPHOT, and SWS and LWS); and instrumentation for the Space Infrared Telescope Facility.

  10. On the Multi-Modal Object Tracking and Image Fusion Using Unsupervised Deep Learning Methodologies

    NASA Astrophysics Data System (ADS)

    LaHaye, N.; Ott, J.; Garay, M. J.; El-Askary, H. M.; Linstead, E.

    2017-12-01

    The number of different remote-sensing modalities has been on the rise, resulting in large datasets with different complexity levels. Such complex datasets can provide valuable information separately, yet there is greater value in a comprehensive view of them combined. Hidden information can then be deduced by applying data mining techniques to the fused data. The curse of dimensionality of such fused data, due to the potentially vast dimension space, hinders a deep understanding of them. This is because each dataset requires a user to have instrument-specific and dataset-specific knowledge for optimum and meaningful usage. Once a user decides to use multiple datasets together, a deeper understanding of how to translate and combine these datasets correctly and effectively is needed. Although data-centric techniques exist, generic automated methodologies that can fully solve this problem do not. Here we are developing a system that aims to gain a detailed understanding of different data modalities. Such a system will provide an analysis environment that gives the user useful feedback and can aid in research tasks. In our current work, we show the initial outputs of our system implementation, which leverages unsupervised deep learning techniques so as not to burden the user with labeling input data while still allowing a detailed machine understanding of the data. Our goal is to be able to track objects, like cloud systems or aerosols, across different image-like data modalities. The proposed system is flexible, scalable and robust in capturing complex likenesses within multi-modal data over a similar spatio-temporal range, and it can co-register and fuse these images when needed.

  11. Cross-modal face recognition using multi-matcher face scores

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2015-05-01

    The performance of face recognition can be improved using information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face is a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face onto the visible gallery faces requires crossmodal matching approaches. A few such studies were implemented in facial feature space with medium recognition performance. In this paper, we propose a cross-modal recognition approach, where multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at image, feature and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal images) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed with three cross-matched face scores from the aforementioned three algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logical regression [BLR]) is trained then tested with the score vectors by using 10-fold cross validations. The proposed approach was validated with a multispectral stereo face dataset from 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84%, FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces onto the fused visible faces by using three face scores and the BLR classifier.
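
    The final score-level fusion step amounts to training a classifier on 3-element score vectors (one cross-matched score per algorithm) and evaluating it with 10-fold cross-validation. A minimal sketch with scikit-learn's logistic regression (standing in for the BLR classifier) is given below; the score and label arrays are hypothetical.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        def score_level_fusion_accuracy(score_vectors, labels):
            """Classify 3-element cross-matched score vectors with binomial
            logistic regression, evaluated by 10-fold cross-validation."""
            clf = LogisticRegression(max_iter=1000)
            acc = cross_val_score(clf, score_vectors, labels, cv=10, scoring="accuracy")
            return acc.mean(), acc.std()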

  12. Computer-aided psychotherapy based on multimodal elicitation, estimation and regulation of emotion.

    PubMed

    Cosić, Krešimir; Popović, Siniša; Horvat, Marko; Kukolja, Davor; Dropuljić, Branimir; Kovač, Bernard; Jakovljević, Miro

    2013-09-01

    Contemporary psychiatry is looking to the affective sciences to understand human behavior, cognition and the mind in health and disease. Since it has been recognized that emotions play a pivotal role for the human mind, an ever increasing number of laboratories and research centers are interested in affective sciences, affective neuroscience, affective psychology and affective psychopathology. Therefore, this paper presents multidisciplinary research results of the Laboratory for Interactive Simulation System at the Faculty of Electrical Engineering and Computing, University of Zagreb, on stress resilience. A patient's distortions in the emotional processing of multimodal input stimuli are predominantly a consequence of cognitive deficits resulting from individual mental health disorders. These emotional distortions in the patient's multimodal physiological, facial, acoustic, and linguistic features related to the presented stimulation can be used as indicators of mental illness. Real-time processing and analysis of the patient's multimodal response to annotated input stimuli is based on appropriate machine learning methods from computer science. Comprehensive longitudinal multimodal analysis of the patient's emotion, mood, feelings, attention, motivation, decision-making, and working memory, synchronized with the multimodal stimuli, provides an extremely valuable big database for data mining, machine learning and machine reasoning. The presented multimedia stimulus sequence includes personalized images, movies and sounds, as well as semantically congruent narratives. Simultaneously with stimulus presentation, the patient provides subjective emotional ratings of the presented stimuli in terms of subjective units of discomfort/distress, discrete emotions, or valence and arousal. These subjective emotional ratings of the input stimuli and the corresponding physiological, speech, and facial output features provide enough information to evaluate the patient's cognitive appraisal deficit. Aggregated real-time visualization of this information provides valuable assistance in diagnosing the patient's mental state, enabling the therapist to gain deeper and broader insights into the dynamics and progress of the psychotherapy.

  13. Multimodal Reading Comprehension: Curriculum Expectations and Large-Scale Literacy Testing Practices

    ERIC Educational Resources Information Center

    Unsworth, Len

    2014-01-01

    Interpreting the image-language interface in multimodal texts is now well recognized as a crucial aspect of reading comprehension in a number of official school syllabi such as the recently published Australian Curriculum: English (ACE). This article outlines the relevant expected student learning outcomes in this curriculum and draws attention to…

  14. An fMRI Study of Multimodal Semantic and Phonological Processing in Reading Disabled Adolescents

    ERIC Educational Resources Information Center

    Landi, Nicole; Mencl, W. Einar; Frost, Stephen J.; Sandak, Rebecca; Pugh, Kenneth R.

    2010-01-01

    Using functional magnetic resonance imaging, we investigated multimodal (visual and auditory) semantic and unimodal (visual only) phonological processing in reading disabled (RD) adolescents and non-impaired (NI) control participants. We found reduced activation for RD relative to NI in a number of left-hemisphere reading-related areas across all…

  15. A Multimodal Perspective on Textuality and Contexts

    ERIC Educational Resources Information Center

    Jewitt, Carey

    2007-01-01

    Textuality is often thought of in linguistic terms; for instance, the talk and writing that circulate in the classroom. In this paper I take a multimodal perspective on textuality and context. I draw on illustrative examples from school Science and English to examine how image, colour, gesture, gaze, posture and movement--as well as writing and…

  16. "Convince Me!" Valuing Multimodal Literacies and Composing Public Service Announcements

    ERIC Educational Resources Information Center

    Selfe, Richard J.; Selfe, Cynthia L.

    2008-01-01

    For some teachers, the increasing attention to digital and multimodal composing in English and Language Arts classrooms has brought into sharp relief the profession's investment in print as the primary means of expression. Although new forms of communication that combine words, still and moving images, and animation have begun to dominate digital…

  17. MCA-NMF: Multimodal Concept Acquisition with Non-Negative Matrix Factorization

    PubMed Central

    Mangin, Olivier; Filliat, David; ten Bosch, Louis; Oudeyer, Pierre-Yves

    2015-01-01

    In this paper we introduce MCA-NMF, a computational model of the acquisition of multimodal concepts by an agent grounded in its environment. More precisely our model finds patterns in multimodal sensor input that characterize associations across modalities (speech utterances, images and motion). We propose this computational model as an answer to the question of how some class of concepts can be learnt. In addition, the model provides a way of defining such a class of plausibly learnable concepts. We detail why the multimodal nature of perception is essential to reduce the ambiguity of learnt concepts as well as to communicate about them through speech. We then present a set of experiments that demonstrate the learning of such concepts from real non-symbolic data consisting of speech sounds, images, and motions. Finally we consider structure in perceptual signals and demonstrate that a detailed knowledge of this structure, named compositional understanding can emerge from, instead of being a prerequisite of, global understanding. An open-source implementation of the MCA-NMF learner as well as scripts and associated experimental data to reproduce the experiments are publicly available. PMID:26489021
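
    A heavily simplified sketch of the underlying idea (the authors' own MCA-NMF implementation is open source and differs in detail): non-negative feature vectors from the three modalities are concatenated per sample and jointly factorized, so each learned component couples speech, image and motion features of one candidate concept. Array shapes and parameters here are illustrative.

        import numpy as np
        from sklearn.decomposition import NMF

        def learn_multimodal_dictionary(speech_feats, image_feats, motion_feats, k=10):
            """Stack non-negative per-sample feature vectors from all modalities
            (each array: samples x features) and factorize V ~ W H, so the k rows
            of H tie together speech, image and motion components of a concept."""
            V = np.hstack([speech_feats, image_feats, motion_feats])
            model = NMF(n_components=k, init="nndsvda", max_iter=500)
            W = model.fit_transform(V)     # sample-to-concept activations
            H = model.components_          # concept-to-feature dictionary
            return W, H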

  18. Whole mouse cryo-imaging

    NASA Astrophysics Data System (ADS)

    Wilson, David; Roy, Debashish; Steyer, Grant; Gargesha, Madhusudhana; Stone, Meredith; McKinley, Eliot

    2008-03-01

    The Case cryo-imaging system is a section-and-image system that acquires micron-scale, information-rich, color bright-field and molecular fluorescence images of an entire mouse. Cryo-imaging is used in a variety of applications, including mouse and embryo anatomical phenotyping, drug delivery, imaging agents, metastatic cancer, stem cells, and very high resolution vascular imaging, among many others. Cryo-imaging fills the gap between whole animal in vivo imaging and histology, allowing one to image a mouse along the continuum from the whole mouse -> organ -> tissue structure -> cell -> sub-cellular domains. In this overview, we describe the technology and a variety of exciting applications. Enhancements to the system now enable tiled acquisition of high resolution images to cover an entire mouse. High resolution fluorescence imaging, aided by a novel subtraction processing algorithm to remove sub-surface fluorescence, makes it possible to detect fluorescently labeled single cells. Multi-modality experiments combining Magnetic Resonance Imaging and cryo-imaging of a whole mouse demonstrate the superior resolution of cryo-images and the efficiency of the registration techniques. The 3D results demonstrate the novel true-color volume visualization tools we have developed and the inherent advantage of cryo-imaging in providing unlimited depth of field and spatial resolution. The recent results continue to demonstrate the value cryo-imaging provides in the field of small animal imaging research.
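
    The record mentions a novel subtraction algorithm for removing sub-surface fluorescence; a common simple variant of this idea is "next-image" subtraction, in which an attenuated copy of the block-face image acquired after the next cut is subtracted from the current one. The sketch below shows only that generic variant, not the authors' algorithm, and the attenuation factor is a placeholder.

        import numpy as np

        def next_image_subtraction(current, next_below, atten=0.6):
            """Suppress sub-surface fluorescence in block-face image `current`
            by subtracting an attenuated copy of the image acquired after the
            next section is cut (`next_below`); clip negatives to zero."""
            corrected = current.astype(float) - atten * next_below.astype(float)
            return np.clip(corrected, 0, None)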

  19. Design and characterization of a handheld multimodal imaging device for the assessment of oral epithelial lesions

    NASA Astrophysics Data System (ADS)

    Higgins, Laura M.; Pierce, Mark C.

    2014-08-01

    A compact handpiece combining high resolution fluorescence (HRF) imaging with optical coherence tomography (OCT) was developed to provide real-time assessment of oral lesions. This multimodal imaging device simultaneously captures coregistered en face images with subcellular detail alongside cross-sectional images of tissue microstructure. The HRF imaging acquires a 712×594 μm2 field-of-view at the sample with a spatial resolution of 3.5 μm. The OCT images were acquired to a depth of 1.5 mm with axial and lateral resolutions of 9.3 and 8.0 μm, respectively. HRF and OCT images are simultaneously displayed at 25 fps. The handheld device was used to image a healthy volunteer, demonstrating the potential for in vivo assessment of the epithelial surface for dysplastic and neoplastic changes at the cellular level, while simultaneously evaluating submucosal involvement. We anticipate potential applications in real-time assessment of oral lesions for improved surveillance and surgical guidance.

  20. NOVEL PRERETINAL HAIR PIN-LIKE VESSEL IN RETINAL ASTROCYTIC HAMARTOMA WITH VITREOUS HEMORRHAGE.

    PubMed

    Soeta, Megumi; Arai, Yusuke; Takahashi, Hidenori; Fujino, Yujiro; Tanabe, Tatsuro; Inoue, Yuji; Kawashima, Hidetoshi

    2018-01-01

    To report a case of retinal astrocytic hamartoma with vitreous hemorrhage and a hairpin-like vessel adhering to the posterior vitreous membrane. A 33-year-old man with a retinal astrocytic hamartoma presented with vitreous hemorrhage five times. Multimodal imaging was performed, including fundus photography, fluorescein angiography, optical coherence tomography, and B-mode ultrasonography, and demonstrated a novel hairpin-like vessel adhering to the posterior vitreous membrane. Some cases of retinal astrocytic hamartoma with vitreous hemorrhage may be related to structural abnormalities of tumor vessels.
