Sample records for multidimensional multimodal imaging

  1. Medical image registration based on normalized multidimensional mutual information

    NASA Astrophysics Data System (ADS)

    Li, Qi; Ji, Hongbing; Tong, Ming

    2009-10-01

    Registration of medical images is an essential research topic in medical image processing and applications, and in particular a preliminary and key step for multimodality image fusion. This paper offers a solution to medical image registration based on normalized multi-dimensional mutual information. First, an affine transformation with translational and rotational parameters is applied to the floating image. Then ordinal features are extracted by ordinal filters with different orientations to represent spatial information in medical images. Integrating ordinal features with pixel intensities, the normalized multi-dimensional mutual information is defined as the similarity criterion for registering multimodality images. Finally, an immune algorithm is used to search for the registration parameters. The experimental results demonstrate the effectiveness of the proposed registration scheme.
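
    The criterion above augments pixel intensities with ordinal features; as a minimal intensity-only sketch of the underlying measure (illustrative, not the authors' implementation), the normalized mutual information between a reference and a floating image can be estimated from a joint histogram:

    ```python
    import numpy as np

    def normalized_mutual_information(ref, flt, bins=64):
        """NMI = (H(ref) + H(flt)) / H(ref, flt); larger values indicate better alignment."""
        joint, _, _ = np.histogram2d(ref.ravel(), flt.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
    ```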

  2. Informatics in radiology (infoRAD): navigating the fifth dimension: innovative interface for multidimensional multimodality image navigation.

    PubMed

    Rosset, Antoine; Spadola, Luca; Pysher, Lance; Ratib, Osman

    2006-01-01

    The display and interpretation of images obtained by combining three-dimensional data acquired with two different modalities (eg, positron emission tomography and computed tomography) in the same subject require complex software tools that allow the user to adjust the image parameters. With the current fast imaging systems, it is possible to acquire dynamic images of the beating heart, which add a fourth dimension of visual information: the temporal dimension. Moreover, images acquired at different points during the transit of a contrast agent or during different functional phases add a fifth dimension: functional data. To facilitate real-time image navigation in the resultant large multidimensional image data sets, the authors developed a Digital Imaging and Communications in Medicine-compliant software program. The open-source software, called OsiriX, allows the user to navigate through multidimensional image series while adjusting the blending of images from different modalities, image contrast and intensity, and the rate of cine display of dynamic images. The software is available for free download at http://homepage.mac.com/rossetantoine/osirix. (c) RSNA, 2006.

  3. Multimodal hyperspectral optical microscopy

    DOE PAGES

    Novikova, Irina V.; Smallwood, Chuck R.; Gong, Yu; ...

    2017-09-02

    We describe a unique and convenient approach to multimodal hyperspectral optical microscopy, herein achieved by coupling a portable and transferable hyperspectral imager to various optical microscopes. The experimental and data analysis schemes involved in recording spectrally and spatially resolved fluorescence, dark field, and optical absorption micrographs are illustrated through prototypical measurements targeting selected model systems. Namely, hyperspectral fluorescence micrographs of isolated fluorescent beads are employed to ensure spectral calibration of our detector and to gauge the attainable spatial resolution of our measurements; the recorded images are diffraction-limited. Moreover, spatially over-sampled absorption spectroscopy of a single lipid (18:1 Liss Rhod PE) layer reveals that optical densities on the order of 10⁻³ may be resolved by spatially averaging the recorded optical signatures. We also briefly illustrate two applications of our setup in the general areas of plasmonics and cell biology. Most notably, we deploy hyperspectral optical absorption microscopy to identify and image algal pigments within a single live Tisochrysis lutea cell. Overall, this work paves the way for multimodal multidimensional spectral imaging measurements spanning the realms of several scientific disciplines.
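
    As a hedged sketch of the spatial-averaging step described above (an assumed form, not the authors' analysis code), per-pixel optical densities OD = -log10(I/I0) can be averaged over a region to pull a weak absorption signal out of the noise:

    ```python
    import numpy as np

    def mean_optical_density(transmitted, reference, mask=None):
        """Average per-pixel OD = -log10(I / I0) over an optional boolean region mask."""
        od = -np.log10(np.clip(transmitted / reference, 1e-12, None))
        return od[mask].mean() if mask is not None else od.mean()
    ```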

  4. Multimodal hyperspectral optical microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novikova, Irina V.; Smallwood, Chuck R.; Gong, Yu

    We describe a unique and convenient approach to multimodal hyperspectral optical microscopy, herein achieved by coupling a portable and transferable hyperspectral imager to various optical microscopes. The experimental and data analysis schemes involved in recording spectrally and spatially resolved fluorescence, dark field, and optical absorption micrographs are illustrated through prototypical measurements targeting selected model systems. Namely, hyperspectral fluorescence micrographs of isolated fluorescent beads are employed to ensure spectral calibration of our detector and to gauge the attainable spatial resolution of our measurements; the recorded images are diffraction-limited. Moreover, spatially over-sampled absorption spectroscopy of a single lipid (18:1 Liss Rhod PE) layer reveals that optical densities on the order of 10⁻³ may be resolved by spatially averaging the recorded optical signatures. We also briefly illustrate two applications of our setup in the general areas of plasmonics and cell biology. Most notably, we deploy hyperspectral optical absorption microscopy to identify and image algal pigments within a single live Tisochrysis lutea cell. Overall, this work paves the way for multimodal multidimensional spectral imaging measurements spanning the realms of several scientific disciplines.

  5. OSIRIX: open source multimodality image navigation software

    NASA Astrophysics Data System (ADS)

    Rosset, Antoine; Pysher, Lance; Spadola, Luca; Ratib, Osman

    2005-04-01

    The goal of our project is to develop a completely new software platform that will allow users to efficiently and conveniently navigate through large sets of multidimensional data without the need for expensive high-end hardware or software. We also elected to develop our system on new open-source software libraries, allowing other institutions and developers to contribute to this project. OsiriX is a free and open-source imaging software designed to manipulate and visualize large sets of medical images: http://homepage.mac.com/rossetantoine/osirix/

  6. Use of multidimensional, multimodal imaging and PACS to support neurological diagnoses

    NASA Astrophysics Data System (ADS)

    Wong, Stephen T. C.; Knowlton, Robert C.; Hoo, Kent S.; Huang, H. K.

    1995-05-01

    Technological advances in brain imaging have revolutionized diagnosis in neurology and neurological surgery. Major imaging techniques include magnetic resonance imaging (MRI) to visualize structural anatomy, positron emission tomography (PET) to image metabolic function and cerebral blood flow, magnetoencephalography (MEG) to visualize the location of physiologic current sources, and magnetic resonance spectroscopy (MRS) to measure specific biochemicals. Each of these techniques studies different biomedical aspects of the brain, but an effective means to quantify and correlate the disparate imaging datasets, and thereby improve clinical decision-making processes, has been lacking. This paper describes several techniques developed in a UNIX-based neurodiagnostic workstation to aid the noninvasive presurgical evaluation of epilepsy patients. These techniques include online access to the picture archiving and communication systems (PACS) multimedia archive, coregistration of multimodality image datasets, and correlation and quantitation of structural and functional information contained in the registered images. For illustration, we describe the use of these techniques in a patient case of nonlesional neocortical epilepsy. We also present our future work based on preliminary studies.

  7. Use of multidimensional, multimodal imaging and PACS to support neurological diagnoses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, S.T.C.; Knowlton, R.; Hoo, K.S.

    1995-12-31

    Technological advances in brain imaging have revolutionized diagnosis in neurology and neurological surgery. Major imaging techniques include magnetic resonance imaging (MRI) to visualize structural anatomy, positron emission tomography (PET) to image metabolic function and cerebral blood flow, magnetoencephalography (MEG) to visualize the location of physiologic current sources, and magnetic resonance spectroscopy (MRS) to measure specific biochemicals. Each of these techniques studies different biomedical aspects of the brain, but an effective means to quantify and correlate the disparate imaging datasets, and thereby improve clinical decision-making processes, has been lacking. This paper describes several techniques developed in a UNIX-based neurodiagnostic workstation to aid the non-invasive presurgical evaluation of epilepsy patients. These techniques include on-line access to the picture archiving and communication systems (PACS) multimedia archive, coregistration of multimodality image datasets, and correlation and quantitation of structural and functional information contained in the registered images. For illustration, the authors describe the use of these techniques in a patient case of non-lesional neocortical epilepsy. They also present their future work based on preliminary studies.

  8. MEG-BIDS, the brain imaging data structure extended to magnetoencephalography

    PubMed Central

    Niso, Guiomar; Gorgolewski, Krzysztof J.; Bock, Elizabeth; Brooks, Teon L.; Flandin, Guillaume; Gramfort, Alexandre; Henson, Richard N.; Jas, Mainak; Litvak, Vladimir; T. Moreau, Jeremy; Oostenveld, Robert; Schoffelen, Jan-Mathijs; Tadel, Francois; Wexler, Joseph; Baillet, Sylvain

    2018-01-01

    We present a significant extension of the Brain Imaging Data Structure (BIDS) to support the specific aspects of magnetoencephalography (MEG) data. MEG measures brain activity with millisecond temporal resolution and unique source imaging capabilities. So far, BIDS has been a solution to organise magnetic resonance imaging (MRI) data. The nature and acquisition parameters of MRI and MEG data are strongly dissimilar. Although there is no standard data format for MEG, we propose MEG-BIDS as a principled solution to store, organise, process and share the multidimensional data volumes produced by the modality. The standard also includes well-defined metadata, to facilitate future data harmonisation and sharing efforts. This responds to unmet needs from the multimodal neuroimaging community and paves the way to further integration of other techniques in electrophysiology. MEG-BIDS builds on MRI-BIDS, extending BIDS to a multimodal data structure. We feature several data-analytics software packages that have adopted MEG-BIDS, and a diverse sample of open MEG-BIDS data resources available to everyone. PMID:29917016

  9. MEG-BIDS, the brain imaging data structure extended to magnetoencephalography.

    PubMed

    Niso, Guiomar; Gorgolewski, Krzysztof J; Bock, Elizabeth; Brooks, Teon L; Flandin, Guillaume; Gramfort, Alexandre; Henson, Richard N; Jas, Mainak; Litvak, Vladimir; T Moreau, Jeremy; Oostenveld, Robert; Schoffelen, Jan-Mathijs; Tadel, Francois; Wexler, Joseph; Baillet, Sylvain

    2018-06-19

    We present a significant extension of the Brain Imaging Data Structure (BIDS) to support the specific aspects of magnetoencephalography (MEG) data. MEG measures brain activity with millisecond temporal resolution and unique source imaging capabilities. So far, BIDS has been a solution to organise magnetic resonance imaging (MRI) data. The nature and acquisition parameters of MRI and MEG data are strongly dissimilar. Although there is no standard data format for MEG, we propose MEG-BIDS as a principled solution to store, organise, process and share the multidimensional data volumes produced by the modality. The standard also includes well-defined metadata, to facilitate future data harmonisation and sharing efforts. This responds to unmet needs from the multimodal neuroimaging community and paves the way to further integration of other techniques in electrophysiology. MEG-BIDS builds on MRI-BIDS, extending BIDS to a multimodal data structure. We feature several data-analytics software packages that have adopted MEG-BIDS, and a diverse sample of open MEG-BIDS data resources available to everyone.
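
    As a minimal sketch of the kind of layout MEG-BIDS prescribes (the entity names and sidecar fields below are illustrative assumptions, not quoted from the specification), a single recording and its metadata might be organised as follows:

    ```python
    from pathlib import Path
    import json

    sub, ses, task = "01", "01", "rest"
    meg_dir = Path(f"sub-{sub}") / f"ses-{ses}" / "meg"
    recording = meg_dir / f"sub-{sub}_ses-{ses}_task-{task}_meg.fif"
    sidecar = recording.with_suffix("").with_suffix(".json")  # sub-01_..._meg.json

    metadata = {
        "TaskName": task,            # example sidecar fields; assumed for illustration
        "SamplingFrequency": 1000.0,
        "PowerLineFrequency": 50,
    }
    sidecar_text = json.dumps(metadata, indent=2)  # would be written next to the recording
    ```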

  10. n-SIFT: n-dimensional scale invariant feature transform.

    PubMed

    Cheung, Warren; Hamarneh, Ghassan

    2009-09-01

    We propose the n-dimensional scale invariant feature transform (n-SIFT) method for extracting and matching salient features from scalar images of arbitrary dimensionality, and compare this method's performance to other related features. The proposed features extend the concepts used for 2-D scalar images in the computer vision SIFT technique for extracting and matching distinctive scale invariant features. We apply the features to images of arbitrary dimensionality through the use of hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. We analyze the performance of a fully automated multimodal medical image matching technique based on these features, and successfully apply the technique to determine accurate feature point correspondence between pairs of 3-D MRI images and dynamic 3D + time CT data.
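
    The published n-SIFT method builds scale-space keypoints and matches full descriptors; the following is only an illustrative sketch (not the authors' code) of the core idea of binning gradient orientations of an n-dimensional patch, here for n = 3, in (hyper)spherical coordinates weighted by gradient magnitude:

    ```python
    import numpy as np

    def orientation_histogram_3d(patch, bins=8):
        """Magnitude-weighted histogram over spherical gradient angles of a 3-D patch."""
        gz, gy, gx = np.gradient(patch.astype(float))
        mag = np.sqrt(gx**2 + gy**2 + gz**2)
        theta = np.arctan2(gy, gx)                             # azimuth in [-pi, pi]
        phi = np.arccos(np.clip(gz / (mag + 1e-12), -1, 1))    # inclination in [0, pi]
        hist, _, _ = np.histogram2d(theta.ravel(), phi.ravel(), bins=bins,
                                    range=[[-np.pi, np.pi], [0, np.pi]],
                                    weights=mag.ravel())
        return hist.ravel() / (np.linalg.norm(hist) + 1e-12)   # normalised feature vector
    ```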

  11. Composing for Affect, Audience, and Identity: Toward a Multidimensional Understanding of Adolescents' Multimodal Composing Goals and Designs

    ERIC Educational Resources Information Center

    Smith, Blaine E.

    2018-01-01

    This study examined adolescents' perspectives on their multimodal composing goals and designs when creating digital projects in the context of an English Language Arts class. Sociocultural and social semiotics theoretical frameworks were integrated to understand six 12th grade students' viewpoints when composing three multimodal products--a…

  12. A semantic model for multimodal data mining in healthcare information systems.

    PubMed

    Iakovidis, Dimitris; Smailis, Christos

    2012-01-01

    Electronic health records (EHRs) are representative examples of multimodal/multisource data collections, including measurements, images and free texts. The diversity of such information sources and the increasing amounts of medical data produced by healthcare institutes annually pose significant challenges in data mining. In this paper we present a novel semantic model that describes knowledge extracted from the lowest level of a data mining process, where information is represented by multiple features, i.e. measurements or numerical descriptors extracted from measurements, images, texts or other medical data, forming multidimensional feature spaces. Knowledge collected by manual annotation or extracted by unsupervised data mining from one or more feature spaces is modeled through generalized qualitative spatial semantics. This model enables a unified representation of knowledge across multimodal data repositories. It contributes to bridging the semantic gap by enabling direct links between low-level features and higher-level concepts, e.g. those describing body parts, anatomies and pathological findings. The proposed model has been developed in the web ontology language based on description logics (OWL-DL) and can be applied to a variety of data mining tasks in medical informatics. Its utility is demonstrated for automatic annotation of medical data.

  13. OsiriX: an open-source software for navigating in multidimensional DICOM images.

    PubMed

    Rosset, Antoine; Spadola, Luca; Ratib, Osman

    2004-09-01

    A multidimensional image navigation and display software was designed for display and interpretation of large sets of multidimensional and multimodality images such as combined PET-CT studies. The software is developed in Objective-C on a Macintosh platform under the MacOS X operating system using the GNUstep development environment. It also benefits from the extremely fast and optimized 3D graphic capabilities of the OpenGL graphic standard widely used for computer games optimized for taking advantage of any hardware graphic accelerator boards available. In the design of the software special attention was given to adapt the user interface to the specific and complex tasks of navigating through large sets of image data. An interactive jog-wheel device widely used in the video and movie industry was implemented to allow users to navigate in the different dimensions of an image set much faster than with a traditional mouse or on-screen cursors and sliders. The program can easily be adapted for very specific tasks that require a limited number of functions, by adding and removing tools from the program's toolbar and avoiding an overwhelming number of unnecessary tools and functions. The processing and image rendering tools of the software are based on the open-source libraries ITK and VTK. This ensures that all new developments in image processing that could emerge from other academic institutions using these libraries can be directly ported to the OsiriX program. OsiriX is provided free of charge under the GNU open-source licensing agreement at http://homepage.mac.com/rossetantoine/osirix.

  14. Open framework for management and processing of multi-modality and multidimensional imaging data for analysis and modelling muscular function

    NASA Astrophysics Data System (ADS)

    García Juan, David; Delattre, Bénédicte M. A.; Trombella, Sara; Lynch, Sean; Becker, Matthias; Choi, Hon Fai; Ratib, Osman

    2014-03-01

    Musculoskeletal disorders (MSD) are becoming a big healthcare economic burden in developed countries with aging populations. Classical methods like biopsy or EMG used in clinical practice for muscle assessment are invasive and not sufficiently accurate for measuring impairments of muscular performance. Non-invasive imaging techniques can nowadays provide effective alternatives for static and dynamic assessment of muscle function. In this paper we present work aimed toward the development of a generic data structure for handling n-dimensional metabolic and anatomical data acquired from hybrid PET/MR scanners. Special static and dynamic protocols were developed for assessment of physical and functional images of individual muscles of the lower limb. In an initial stage of the project a manual segmentation of selected muscles was performed on high-resolution 3D static images and subsequently interpolated to a full dynamic set of contours from selected 2D dynamic images across different levels of the leg. This results in a full set of 4D data of lower limb muscles at rest and during exercise. These data can further be extended to 5D by adding metabolic data obtained from PET images. Our data structure and corresponding image processing extension allow for better evaluation of the large volumes of multidimensional imaging data that are acquired and processed to generate dynamic models of the moving lower limb and its muscular function.
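
    A minimal sketch of such a generic n-dimensional container (the axis ordering and sizes are assumptions for illustration, not the authors' data structure) could simply be a 5-D array holding anatomical and metabolic channels over exercise phases and slices:

    ```python
    import numpy as np

    # axes: [channel (0=MR anatomy, 1=PET uptake), exercise phase, slice, y, x]
    n_channels, n_phases, n_slices, ny, nx = 2, 20, 40, 256, 256
    limb = np.zeros((n_channels, n_phases, n_slices, ny, nx), dtype=np.float32)

    anatomy_at_rest = limb[0, 0]        # 3-D MR volume at the first phase
    uptake_midcycle = limb[1, 10, 15]   # PET slice 15 at phase 10
    ```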

  15. Burnout: A Multimodal Approach to Assessment and Resolution.

    ERIC Educational Resources Information Center

    Kesler, Kathryn D.

    1990-01-01

    Claims that the assessment and treatment of guidance counselor burnout are not simple. A variety of causes and symptoms leads to the need for a multidimensional conceptualization and action plan. The multimodal behavior model, BASIC I.D., with the adoption of a Setting modality, has been shown to be a comprehensive approach when applied to the understanding…

  16. Navigating the fifth dimension: new concepts in interactive multimodality and multidimensional image navigation

    NASA Astrophysics Data System (ADS)

    Ratib, Osman; Rosset, Antoine; Dahlbom, Magnus; Czernin, Johannes

    2005-04-01

    Display and interpretation of multidimensional data obtained from the combination of 3D data acquired from different modalities (such as PET-CT) require complex software tools allowing the user to navigate and modify the different image parameters. With faster scanners it is now possible to acquire dynamic images of a beating heart or the transit of a contrast agent, adding a fifth dimension to the data. We developed DICOM-compliant software for real-time navigation in very large sets of five-dimensional data based on an intuitive multidimensional jog-wheel widely used in the video-editing industry. The software, provided under open-source licensing, allows interactive, single-handed navigation through 3D images while adjusting the blending of image modalities, image contrast and intensity, and the rate of cine display of dynamic images. In this study we focused our effort on the user interface and means for interactively navigating these large data sets while easily and rapidly changing multiple parameters such as image position, contrast, intensity, blending of colors, magnification, etc. Conventional mouse-driven user interfaces requiring the user to manipulate cursors and sliders on the screen are too cumbersome and slow. We evaluated several hardware devices and identified a category of multipurpose jog-wheel devices used in the video-editing industry that is particularly suitable for rapidly navigating in five dimensions while adjusting several display parameters interactively. The application of this tool will be demonstrated in cardiac PET-CT imaging and functional cardiac MRI studies.

  17. Using the Interactive Whiteboard to Resource Continuity and Support Multimodal Teaching in a Primary Science Classroom

    ERIC Educational Resources Information Center

    Gillen, J.; Littleton, K.; Twiner, A.; Staarman, J. K.; Mercer, N.

    2008-01-01

    All communication is inherently multimodal, and understandings of science need to be multidimensional. The interactive whiteboard offers a range of potential benefits to the primary science classroom in terms of relative ease of integration of a number of presentational and ICT functions, which, taken together, offers new opportunities for…

  18. MIND: modality independent neighbourhood descriptor for multi-modal deformable registration.

    PubMed

    Heinrich, Mattias P; Jenkinson, Mark; Bhushan, Manav; Matin, Tahreema; Gleeson, Fergus V; Brady, Sir Michael; Schnabel, Julia A

    2012-10-01

    Deformable registration of images obtained from different modalities remains a challenging task in medical image analysis. This paper addresses this important problem and proposes a modality independent neighbourhood descriptor (MIND) for both linear and deformable multi-modal registration. Based on the similarity of small image patches within one image, it aims to extract the distinctive structure in a local neighbourhood, which is preserved across modalities. The descriptor is based on the concept of image self-similarity, which has been introduced for non-local means filtering for image denoising. It is able to distinguish between different types of features such as corners, edges and homogeneously textured regions. MIND is robust to the most considerable differences between modalities: non-functional intensity relations, image noise and non-uniform bias fields. The multi-dimensional descriptor can be efficiently computed in a dense fashion across the whole image and provides point-wise local similarity across modalities based on the absolute or squared difference between descriptors, making it applicable for a wide range of transformation models and optimisation algorithms. We use the sum of squared differences of the MIND representations of the images as a similarity metric within a symmetric non-parametric Gauss-Newton registration framework. In principle, MIND would be applicable to the registration of arbitrary modalities. In this work, we apply and validate it for the registration of clinical 3D thoracic CT scans between inhale and exhale as well as the alignment of 3D CT and MRI scans. Experimental results show the advantages of MIND over state-of-the-art techniques such as conditional mutual information and entropy images, with respect to clinically annotated landmark locations. Copyright © 2012 Elsevier B.V. All rights reserved.
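
    As a much-simplified 2-D sketch of the self-similarity idea (assuming a four-neighbourhood and box-filtered patch distances; this is not the authors' released implementation), a MIND-like descriptor and the point-wise SSD metric could be written as:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter, shift

    def mind_descriptor(img, patch=3, eps=1e-6):
        """Self-similarity descriptor over a 4-neighbourhood, normalised per pixel."""
        offsets = [(0, 1), (0, -1), (1, 0), (-1, 0)]
        # patch-wise squared distance between the image and each shifted copy
        dists = np.stack([uniform_filter((img - shift(img, o, order=0)) ** 2, patch)
                          for o in offsets])              # shape (4, H, W)
        variance = dists.mean(axis=0) + eps                # local noise estimate
        desc = np.exp(-dists / variance)
        return desc / (desc.max(axis=0) + eps)

    def mind_ssd(img_a, img_b):
        """Modality-independent similarity: SSD between the two descriptors."""
        return np.mean((mind_descriptor(img_a) - mind_descriptor(img_b)) ** 2)
    ```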

  19. Radiolabeled Nanoparticles for Multimodality Tumor Imaging

    PubMed Central

    Xing, Yan; Zhao, Jinhua; Conti, Peter S.; Chen, Kai

    2014-01-01

    Each imaging modality has its own unique strengths. Multimodality imaging, taking advantages of strengths from two or more imaging modalities, can provide overall structural, functional, and molecular information, offering the prospect of improved diagnostic and therapeutic monitoring abilities. The devices of molecular imaging with multimodality and multifunction are of great value for cancer diagnosis and treatment, and greatly accelerate the development of radionuclide-based multimodal molecular imaging. Radiolabeled nanoparticles bearing intrinsic properties have gained great interest in multimodality tumor imaging over the past decade. Significant breakthrough has been made toward the development of various radiolabeled nanoparticles, which can be used as novel cancer diagnostic tools in multimodality imaging systems. It is expected that quantitative multimodality imaging with multifunctional radiolabeled nanoparticles will afford accurate and precise assessment of biological signatures in cancer in a real-time manner and thus, pave the path towards personalized cancer medicine. This review addresses advantages and challenges in developing multimodality imaging probes by using different types of nanoparticles, and summarizes the recent advances in the applications of radiolabeled nanoparticles for multimodal imaging of tumor. The key issues involved in the translation of radiolabeled nanoparticles to the clinic are also discussed. PMID:24505237

  20. Towards Omni-Tomography—Grand Fusion of Multiple Modalities for Simultaneous Interior Tomography

    PubMed Central

    Wang, Ge; Zhang, Jie; Gao, Hao; Weir, Victor; Yu, Hengyong; Cong, Wenxiang; Xu, Xiaochen; Shen, Haiou; Bennett, James; Furth, Mark; Wang, Yue; Vannier, Michael

    2012-01-01

    We recently elevated interior tomography from its origin in computed tomography (CT) to a general tomographic principle, and proved its validity for other tomographic modalities including SPECT, MRI, and others. Here we propose “omni-tomography”, a novel concept for the grand fusion of multiple tomographic modalities for simultaneous data acquisition in a region of interest (ROI). Omni-tomography can be instrumental when physiological processes under investigation are multi-dimensional, multi-scale, multi-temporal and multi-parametric. Both preclinical and clinical studies now depend on in vivo tomography, often requiring separate evaluations by different imaging modalities. Over the past decade, two approaches have been used for multimodality fusion: software-based image registration and hybrid scanners such as PET-CT, PET-MRI, and SPECT-CT, among others. While there are intrinsic limitations with both approaches, the main obstacle to the seamless fusion of multiple imaging modalities has been the bulkiness of each individual imager and the conflict of their physical (especially spatial) requirements. To address this challenge, omni-tomography is now unveiled as an emerging direction for biomedical imaging and systems biomedicine. PMID:22768108

  1. An atlas-based multimodal registration method for 2D images with discrepancy structures.

    PubMed

    Lv, Wenchao; Chen, Houjin; Peng, Yahui; Li, Yanfeng; Li, Jupeng

    2018-06-04

    An atlas-based multimodal registration method for 2-dimensional images with discrepancy structures was proposed in this paper. An atlas was utilized for complementing the discrepancy structure information in multimodal medical images. The scheme includes three steps: floating image to atlas registration, atlas to reference image registration, and field-based deformation. To evaluate the performance, a frame model, a brain model, and clinical images were employed in registration experiments. We measured the registration performance by the squared sum of intensity differences. Results indicate that this method is robust and performs better than direct registration for multimodal images with discrepancy structures. We conclude that the proposed method is suitable for multimodal images with discrepancy structures. Graphical abstract: schematic diagram of the atlas-based multimodal registration method.
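
    A minimal sketch of the three-step scheme and of the squared sum of intensity differences used as the evaluation measure (the registration helpers are hypothetical placeholders, not the authors' pipeline):

    ```python
    import numpy as np

    def ssd(a, b):
        """Squared sum of intensity differences between two aligned images."""
        return float(np.sum((a.astype(float) - b.astype(float)) ** 2))

    # Hypothetical pipeline following the three steps described in the abstract:
    # field_1 = register(floating, atlas)                         # step 1: floating -> atlas
    # field_2 = register(atlas, reference)                        # step 2: atlas -> reference
    # warped  = apply_field(floating, compose(field_1, field_2))  # step 3: field-based deformation
    # score   = ssd(warped, reference)
    ```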

  2. Accessing Multi-Dimensional Images and Data Cubes in the Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Tody, Douglas; Plante, R. L.; Berriman, G. B.; Cresitello-Dittmar, M.; Good, J.; Graham, M.; Greene, G.; Hanisch, R. J.; Jenness, T.; Lazio, J.; Norris, P.; Pevunova, O.; Rots, A. H.

    2014-01-01

    Telescopes across the spectrum are routinely producing multi-dimensional images and datasets, such as Doppler velocity cubes, polarization datasets, and time-resolved “movies.” Examples of current telescopes producing such multi-dimensional images include the JVLA, ALMA, and the IFU instruments on large optical and near-infrared wavelength telescopes. In the near future, both the LSST and JWST will also produce such multi-dimensional images routinely. High-energy instruments such as Chandra produce event datasets that are also a form of multi-dimensional data, in effect being a very sparse multi-dimensional image. Ensuring that the data sets produced by these telescopes can be both discovered and accessed by the community is essential and is part of the mission of the Virtual Observatory (VO). The Virtual Astronomical Observatory (VAO, http://www.usvao.org/), in conjunction with its international partners in the International Virtual Observatory Alliance (IVOA), has developed a protocol and an initial demonstration service designed for the publication, discovery, and access of arbitrarily large multi-dimensional images. The protocol describing multi-dimensional images is the Simple Image Access Protocol, version 2, which provides the minimal set of metadata required to characterize a multi-dimensional image for its discovery and access. A companion Image Data Model formally defines the semantics and structure of multi-dimensional images independently of how they are serialized, while providing capabilities such as support for sparse data that are essential to deal effectively with large cubes. A prototype data access service has been deployed and tested, using a suite of multi-dimensional images from a variety of telescopes. The prototype has demonstrated the capability to discover and remotely access multi-dimensional data via standard VO protocols. The prototype informs the specification of a protocol that will be submitted to the IVOA for approval, with an operational data cube service to be delivered in mid-2014. An associated user-installable VO data service framework will provide the capabilities required to publish VO-compatible multi-dimensional images or data cubes.
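
    As a hedged sketch of how such a discovery query might look (the endpoint URL is a placeholder and the parameter values are illustrative; consult the published SIA v2 specification for the authoritative interface):

    ```python
    import requests

    SIA2_ENDPOINT = "https://example.org/sia2/query"  # placeholder, not a real service
    params = {
        "POS": "CIRCLE 180.0 -30.0 0.5",   # RA, Dec, radius in degrees
        "BAND": "0.0021 0.0022",           # wavelength interval in metres
        "RESPONSEFORMAT": "votable",
    }
    response = requests.get(SIA2_ENDPOINT, params=params, timeout=60)
    votable_xml = response.text  # parse with astropy.io.votable to list matching cubes
    ```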

  3. Application of Multimodality Imaging Fusion Technology in Diagnosis and Treatment of Malignant Tumors under the Precision Medicine Plan.

    PubMed

    Wang, Shun-Yi; Chen, Xian-Xia; Li, Yi; Zhang, Yu-Ying

    2016-12-20

    The arrival of the precision medicine plan brings new opportunities and challenges for patients undergoing precision diagnosis and treatment of malignant tumors. With the development of medical imaging, information from different modality imaging can be integrated and comprehensively analyzed by an imaging fusion system. This review aimed to update the application of multimodality imaging fusion technology in the precise diagnosis and treatment of malignant tumors under the precision medicine plan. We introduced several multimodality imaging fusion technologies and their application to the diagnosis and treatment of malignant tumors in clinical practice. The data cited in this review were obtained mainly from the PubMed database from 1996 to 2016, using the keywords of "precision medicine", "fusion imaging", "multimodality", and "tumor diagnosis and treatment". Original articles, clinical practice, reviews, and other relevant literature published in English were reviewed. Papers focusing on precision medicine, fusion imaging, multimodality, and tumor diagnosis and treatment were selected. Duplicated papers were excluded. Multimodality imaging fusion technology plays an important role in tumor diagnosis and treatment under the precision medicine plan, such as accurate location, qualitative diagnosis, tumor staging, treatment plan design, and real-time intraoperative monitoring. Multimodality imaging fusion systems could provide more imaging information on tumors from different dimensions and angles, thereby offering strong technical support for the implementation of precision oncology. Under the precision medicine plan, personalized treatment of tumors is a distinct possibility. We believe that multimodality imaging fusion technology will find an increasingly wide application in clinical practice.

  4. Molecular brain imaging in the multimodality era

    PubMed Central

    Price, Julie C

    2012-01-01

    Multimodality molecular brain imaging encompasses in vivo visualization, evaluation, and measurement of cellular/molecular processes. Instrumentation and software developments over the past 30 years have fueled advancements in multimodality imaging platforms that enable acquisition of multiple complementary imaging outcomes by either combined sequential or simultaneous acquisition. This article provides a general overview of multimodality neuroimaging in the context of positron emission tomography as a molecular imaging tool and magnetic resonance imaging as a structural and functional imaging tool. Several image examples are provided and general challenges are discussed to exemplify complementary features of the modalities, as well as important strengths and weaknesses of combined assessments. Alzheimer's disease is highlighted, as this clinical area has been strongly impacted by multimodality neuroimaging findings that have improved understanding of the natural history of disease progression, early disease detection, and informed therapy evaluation. PMID:22434068

  5. Coexistence of collapse and stable spatiotemporal solitons in multimode fibers

    NASA Astrophysics Data System (ADS)

    Shtyrina, Olga V.; Fedoruk, Mikhail P.; Kivshar, Yuri S.; Turitsyn, Sergei K.

    2018-01-01

    We analyze spatiotemporal solitons in multimode optical fibers and demonstrate the existence of stable solitons, in a sharp contrast to earlier predictions of collapse of multidimensional solitons in three-dimensional media. We discuss the coexistence of blow-up solutions and collapse stabilization by a low-dimensional external potential in graded-index media, and also predict the existence of stable higher-order nonlinear waves such as dipole-mode spatiotemporal solitons. To support the main conclusions of our numerical studies we employ a variational approach and derive analytically the stability criterion for input powers for the collapse stabilization.

  6. Diagnostic possibilities with multidimensional images in head and neck area using efficient registration and visualization methods

    NASA Astrophysics Data System (ADS)

    Zeilhofer, Hans-Florian U.; Krol, Zdzislaw; Sader, Robert; Hoffmann, Karl-Heinz; Gerhardt, Paul; Schweiger, Markus; Horch, Hans-Henning

    1997-05-01

    For several diseases in the head and neck area, different imaging modalities are applied to the same patient. Each of these image data sets has its specific advantages and disadvantages. The combination of different methods makes it possible to exploit the advantageous properties of each method while minimizing the impact of its negative aspects. Soft-tissue alterations can be judged better in an MRI image while they may be unrecognizable in the corresponding CT. Bone tissue, on the other hand, is optimally imaged in CT. Inflammatory nuclei of the bone can be detected best by their increased signal in SPECT. Only the combination of all modalities lets the physician come to an exact statement on pathological processes that involve multiple tissue structures. Several surface- and voxel-based matching functions we have tested allowed a precise merging by means of numerical optimization methods, e.g. simulated annealing, without the complicated application of fiducial markers or the localization of landmarks in 2D cross-sectional slice images. The quality of the registration depends on the choice of the optimization procedure according to the complexity of the matching-function landscape. Precise correlation of the multimodal head and neck area images, together with its 2D and 3D presentation techniques, provides a valuable tool for physicians.

  7. Multimodal Discourse Analysis of the Movie "Argo"

    ERIC Educational Resources Information Center

    Bo, Xu

    2018-01-01

    Based on multimodal discourse theory, this paper makes a multimodal discourse analysis of some shots in the movie "Argo" from the perspective of context of culture, context of situation and meaning of image. Results show that this movie constructs multimodal discourse through particular context, language and image, and successfully…

  8. A review of snapshot multidimensional optical imaging: measuring photon tags in parallel

    PubMed Central

    Gao, Liang; Wang, Lihong V.

    2015-01-01

    Multidimensional optical imaging has seen remarkable growth in the past decade. Rather than measuring only the two-dimensional spatial distribution of light, as in conventional photography, multidimensional optical imaging captures light in up to nine dimensions, providing unprecedented information about incident photons’ spatial coordinates, emittance angles, wavelength, time, and polarization. Multidimensional optical imaging can be accomplished either by scanning or parallel acquisition. Compared with scanning-based imagers, parallel acquisition—also dubbed snapshot imaging—has a prominent advantage in maximizing optical throughput, particularly when measuring a datacube of high dimensions. Here, we first categorize snapshot multidimensional imagers based on their acquisition and image reconstruction strategies, then highlight the snapshot advantage in the context of optical throughput, and finally we discuss their state-of-the-art implementations and applications. PMID:27134340

  9. Multimodal Imaging of Human Brain Activity: Rational, Biophysical Aspects and Modes of Integration

    PubMed Central

    Blinowska, Katarzyna; Müller-Putz, Gernot; Kaiser, Vera; Astolfi, Laura; Vanderperren, Katrien; Van Huffel, Sabine; Lemieux, Louis

    2009-01-01

    Until relatively recently the vast majority of imaging and electrophysiological studies of human brain activity have relied on single-modality measurements usually correlated with readily observable or experimentally modified behavioural or brain state patterns. Multi-modal imaging is the concept of bringing together observations or measurements from different instruments. We discuss the aims of multi-modal imaging and the ways in which it can be accomplished using representative applications. Given the importance of haemodynamic and electrophysiological signals in current multi-modal imaging applications, we also review some of the basic physiology relevant to understanding their relationship. PMID:19547657

  10. Nanoparticles in Higher-Order Multimodal Imaging

    NASA Astrophysics Data System (ADS)

    Rieffel, James Ki

    Imaging procedures are a cornerstone in our current medical infrastructure. In everything from screening to diagnostics to treatment, medical imaging is perhaps our greatest tool in evaluating individual health. Recently, there has been a tremendous increase in the development of multimodal systems that combine the strengths of complementary imaging technologies to overcome their independent weaknesses. Clinically, this has manifested in the virtually universal manufacture of combined PET-CT scanners. With this push toward more integrated imaging, new contrast agents with multimodal functionality are needed. Nanoparticle-based systems are ideal candidates based on their unique size, properties, and diversity. In chapter 1, an extensive background on recent multimodal imaging agents capable of enhancing signal or contrast in three or more modalities is presented. Chapter 2 discusses the development and characterization of a nanoparticulate probe with hexamodal imaging functionality. It is my hope that the information contained in this thesis will demonstrate the many benefits of nanoparticles in multimodal imaging, and provide insight into the potential of fully integrated imaging.

  11. Designing a Digital Story Assignment for Basic Writers Using the TPCK Framework

    ERIC Educational Resources Information Center

    Bandi-Rao, Shoba; Sepp, Mary

    2014-01-01

    The process of digital storytelling allows basic writers to take a personal narrative and translate it into a multimodal and multidimensional experience, motivating a diverse group of writers with different learning styles to engage more creatively and meaningfully in the writing process. Digital storytelling has the capacity to contextualize…

  12. Multidimensional Functional Behaviour Assessment within a Problem Analysis Framework.

    ERIC Educational Resources Information Center

    Ryba, Ken; Annan, Jean

    This paper presents a new approach to contextualized problem analysis developed for use with multimodal Functional Behaviour Assessment (FBA) at Massey University in Auckland, New Zealand. The aim of problem analysis is to simplify complex problems that are difficult to understand. It accomplishes this by providing a high order framework that can…

  13. Tinnitus Multimodal Imaging

    DTIC Science & Technology

    2014-10-01

    Award number: W81XWH-13-1-0494. Title: Tinnitus Multimodal Imaging. Report type: Annual; dates covered: 30 Sept 2013 – 29 Oct 2014. Distribution: approved for public release, distribution unlimited. Abstract: Tinnitus is a common auditory…

  14. Multi-Modality Phantom Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huber, Jennifer S.; Peng, Qiyu; Moses, William W.

    2009-03-20

    Multi-modality imaging has an increasing role in the diagnosis and treatment of a large number of diseases, particularly if both functional and anatomical information are acquired and accurately co-registered. Hence, there is a resulting need for multi-modality phantoms in order to validate image co-registration and calibrate the imaging systems. We present our PET-ultrasound phantom development, including PET and ultrasound images of a simple prostate phantom. We use agar and gelatin mixed with a radioactive solution. We also present our development of custom multi-modality phantoms that are compatible with PET, transrectal ultrasound (TRUS), MRI and CT imaging. We describe both our selection of tissue mimicking materials and phantom construction procedures. These custom PET-TRUS-CT-MRI prostate phantoms use agar-gelatin radioactive mixtures with additional contrast agents and preservatives. We show multi-modality images of these custom prostate phantoms, as well as discuss phantom construction alternatives. Although we are currently focused on prostate imaging, this phantom development is applicable to many multi-modality imaging applications.

  15. Photoacoustic-Based Multimodal Nanoprobes: from Constructing to Biological Applications.

    PubMed

    Gao, Duyang; Yuan, Zhen

    2017-01-01

    Multimodal nanoprobes have attracted intensive attention since they can integrate various imaging modalities to obtain the complementary merits of each single modality. Meanwhile, recent interest in laser-induced photoacoustic imaging is rapidly growing due to its unique advantages in visualizing tissue structure and function with high spatial resolution and satisfactory imaging depth. In this review, we summarize multimodal nanoprobes involving photoacoustic imaging. In particular, we focus on the methods used to construct multimodal nanoprobes. We have divided the synthetic methods into two types. The first, which we call the "one for all" concept, involves the intrinsic properties of the elements in a single particle. The second, the "all in one" concept, means integrating different functional blocks in one particle. Then, we briefly introduce the applications of the multifunctional nanoprobes for in vivo imaging and imaging-guided tumor therapy. At last, we discuss the advantages and disadvantages of the present methods to construct the multimodal nanoprobes and share our viewpoints in this area.

  16. Development of multi-dimensional body image scale for Malaysian female adolescents

    PubMed Central

    Taib, Mohd Nasir Mohd; Shariff, Zalilah Mohd; Khor, Geok Lin

    2008-01-01

    The present study was conducted to develop a Multi-dimensional Body Image Scale for Malaysian female adolescents. Data were collected among 328 female adolescents from a secondary school in Kuantan district, state of Pahang, Malaysia by using a self-administered questionnaire and anthropometric measurements. The self-administered questionnaire comprised multiple measures of body image, the Eating Attitude Test (EAT-26; Garner & Garfinkel, 1979) and the Rosenberg Self-esteem Inventory (Rosenberg, 1965). The 152 items from selected multiple measures of body image were examined through factor analysis and for internal consistency. Correlations between the Multi-dimensional Body Image Scale and body mass index (BMI), risk of eating disorders and self-esteem were assessed for construct validity. A seven-factor model of a 62-item Multi-dimensional Body Image Scale for Malaysian female adolescents with construct validity and good internal consistency was developed. The scale encompasses 1) preoccupation with thinness and dieting behavior, 2) appearance and body satisfaction, 3) body importance, 4) muscle increasing behavior, 5) extreme dieting behavior, 6) appearance importance, and 7) perception of size and shape dimensions. In addition, a multidimensional body image composite score was proposed to screen for negative body image risk in female adolescents. The results showed that body image was correlated with BMI, risk of eating disorders and self-esteem in female adolescents. In short, the present study supports a multi-dimensional concept of body image and provides new insight into its multi-dimensionality in Malaysian female adolescents, with preliminary validity and reliability of the scale. The Multi-dimensional Body Image Scale can be used in future intervention programs to identify female adolescents who are potentially at risk of developing body image disturbance. PMID:20126371

  17. Development of multi-dimensional body image scale for Malaysian female adolescents.

    PubMed

    Chin, Yit Siew; Taib, Mohd Nasir Mohd; Shariff, Zalilah Mohd; Khor, Geok Lin

    2008-01-01

    The present study was conducted to develop a Multi-dimensional Body Image Scale for Malaysian female adolescents. Data were collected among 328 female adolescents from a secondary school in Kuantan district, state of Pahang, Malaysia by using a self-administered questionnaire and anthropometric measurements. The self-administered questionnaire comprised multiple measures of body image, the Eating Attitude Test (EAT-26; Garner & Garfinkel, 1979) and the Rosenberg Self-esteem Inventory (Rosenberg, 1965). The 152 items from selected multiple measures of body image were examined through factor analysis and for internal consistency. Correlations between the Multi-dimensional Body Image Scale and body mass index (BMI), risk of eating disorders and self-esteem were assessed for construct validity. A seven-factor model of a 62-item Multi-dimensional Body Image Scale for Malaysian female adolescents with construct validity and good internal consistency was developed. The scale encompasses 1) preoccupation with thinness and dieting behavior, 2) appearance and body satisfaction, 3) body importance, 4) muscle increasing behavior, 5) extreme dieting behavior, 6) appearance importance, and 7) perception of size and shape dimensions. In addition, a multidimensional body image composite score was proposed to screen for negative body image risk in female adolescents. The results showed that body image was correlated with BMI, risk of eating disorders and self-esteem in female adolescents. In short, the present study supports a multi-dimensional concept of body image and provides new insight into its multi-dimensionality in Malaysian female adolescents, with preliminary validity and reliability of the scale. The Multi-dimensional Body Image Scale can be used in future intervention programs to identify female adolescents who are potentially at risk of developing body image disturbance.

  18. Multimodal Diffuse Optical Imaging

    NASA Astrophysics Data System (ADS)

    Intes, Xavier; Venugopal, Vivek; Chen, Jin; Azar, Fred S.

    Diffuse optical imaging, particularly diffuse optical tomography (DOT), is an emerging clinical modality capable of providing unique functional information, at a relatively low cost, and with nonionizing radiation. Multimodal diffuse optical imaging has enabled a synergistic combination of functional and anatomical information: the quality of DOT reconstructions has been significantly improved by incorporating the structural information derived by the combined anatomical modality. In this chapter, we will review the basic principles of diffuse optical imaging, including instrumentation and reconstruction algorithm design. We will also discuss the approaches for multimodal imaging strategies that integrate DOI with clinically established modalities. The merit of the multimodal imaging approaches is demonstrated in the context of optical mammography, but the techniques described herein can be translated to other clinical scenarios such as brain functional imaging or muscle functional imaging.
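
    As a hedged illustration of how structural information from a combined anatomical modality can enter a DOT reconstruction (a generic Tikhonov-regularised least-squares sketch, not taken from this chapter), an anatomically derived operator L can act as the prior:

    ```python
    import numpy as np

    def reconstruct_with_prior(J, y, L, lam=1e-2):
        """Solve min ||J x - y||^2 + lam ||L x||^2, where L encodes anatomical prior structure."""
        A = J.T @ J + lam * (L.T @ L)      # normal equations with structural regularisation
        return np.linalg.solve(A, J.T @ y)
    ```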

  19. Prospective Evaluation of Multimodal Optical Imaging with Automated Image Analysis to Detect Oral Neoplasia In Vivo.

    PubMed

    Quang, Timothy; Tran, Emily Q; Schwarz, Richard A; Williams, Michelle D; Vigneswaran, Nadarajah; Gillenwater, Ann M; Richards-Kortum, Rebecca

    2017-10-01

    The 5-year survival rate for patients with oral cancer remains low, in part because diagnosis often occurs at a late stage. Early and accurate identification of oral high-grade dysplasia and cancer can help improve patient outcomes. Multimodal optical imaging is an adjunctive diagnostic technique in which autofluorescence imaging is used to identify high-risk regions within the oral cavity, followed by high-resolution microendoscopy to confirm or rule out the presence of neoplasia. Multimodal optical images were obtained from 206 sites in 100 patients. Histologic diagnosis, either from a punch biopsy or an excised surgical specimen, was used as the gold standard for all sites. Histopathologic diagnoses of moderate dysplasia or worse were considered neoplastic. Images from 92 sites in the first 30 patients were used as a training set to develop automated image analysis methods for identification of neoplasia. Diagnostic performance was evaluated prospectively using images from 114 sites in the remaining 70 patients as a test set. In the training set, multimodal optical imaging with automated image analysis correctly classified 95% of nonneoplastic sites and 94% of neoplastic sites. Among the 56 sites in the test set that were biopsied, multimodal optical imaging correctly classified 100% of nonneoplastic sites and 85% of neoplastic sites. Among the 58 sites in the test set that corresponded to a surgical specimen, multimodal imaging correctly classified 100% of nonneoplastic sites and 61% of neoplastic sites. These findings support the potential of multimodal optical imaging to aid in the early detection of oral cancer. Cancer Prev Res; 10(10); 563-70. ©2017 American Association for Cancer Research.

  20. Multimodality Image Fusion-Guided Procedures: Technique, Accuracy, and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abi-Jaoudeh, Nadine, E-mail: naj@mail.nih.gov; Kruecker, Jochen, E-mail: jochen.kruecker@philips.com; Kadoury, Samuel, E-mail: samuel.kadoury@polymtl.ca

    2012-10-15

    Personalized therapies play an increasingly critical role in cancer care: Image guidance with multimodality image fusion facilitates the targeting of specific tissue for tissue characterization and plays a role in drug discovery and optimization of tailored therapies. Positron-emission tomography (PET), magnetic resonance imaging (MRI), and contrast-enhanced computed tomography (CT) may offer additional information not otherwise available to the operator during minimally invasive image-guided procedures, such as biopsy and ablation. With use of multimodality image fusion for image-guided interventions, navigation with advanced modalities does not require the physical presence of the PET, MRI, or CT imaging system. Several commercially available methods of image fusion and device navigation are reviewed, along with an explanation of common tracking hardware and software. An overview of current clinical applications for multimodality navigation is provided.

  1. Multimodal quantitative phase and fluorescence imaging of cell apoptosis

    NASA Astrophysics Data System (ADS)

    Fu, Xinye; Zuo, Chao; Yan, Hao

    2017-06-01

    Fluorescence microscopy, utilizing fluorescence labeling, has the capability to observe intracellular changes that transmitted- and reflected-light microscopy techniques cannot resolve. However, the parts without fluorescence labeling are not imaged, so processes happening simultaneously in these parts cannot be revealed. Moreover, fluorescence imaging is 2D imaging in which information along the depth dimension is missing, so the information on the labeled parts is also incomplete. On the other hand, quantitative phase imaging is capable of imaging cells in 3D in real time through phase calculation. However, its resolution is limited by optical diffraction, and it cannot observe intracellular changes below 200 nanometers. In this work, fluorescence imaging and quantitative phase imaging are combined to build a multimodal imaging system. Such a system has the capability to simultaneously observe detailed intracellular phenomena and 3D cell morphology. In this study the proposed multimodal imaging system is used to observe cell behavior during apoptosis. The aim is to highlight the limitations of fluorescence microscopy and to point out the advantages of multimodal quantitative phase and fluorescence imaging. The proposed multimodal quantitative phase imaging could be further applied in cell-related biomedical research, such as tumor studies.

  2. Multimodal imaging of ischemic wounds

    NASA Astrophysics Data System (ADS)

    Zhang, Shiwu; Gnyawali, Surya; Huang, Jiwei; Liu, Peng; Gordillo, Gayle; Sen, Chandan K.; Xu, Ronald

    2012-12-01

    The wound healing process involves the reparative phases of inflammation, proliferation, and remodeling. Interrupting any of these phases may result in chronically unhealed wounds, amputation, or even patient death. Quantitative assessment of wound tissue ischemia, perfusion, and inflammation provides critical information for appropriate detection, staging, and treatment of chronic wounds. However, no method is available for noninvasive, simultaneous, and quantitative imaging of these tissue parameters. We integrated hyperspectral, laser speckle, and thermographic imaging modalities into a single setup for multimodal assessment of tissue oxygenation, perfusion, and inflammation characteristics. Advanced algorithms were developed for accurate reconstruction of wound oxygenation and appropriate co-registration between different imaging modalities. The multimodal wound imaging system was validated in an ongoing clinical trial approved by the OSU IRB. In the clinical trial, a wound of 3 mm in diameter was introduced on a healthy subject's lower extremity and the healing process was serially monitored by the multimodal imaging setup. Our experiments demonstrated the clinical usability of multimodal wound imaging.
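
    As a hedged sketch of one way tissue oxygenation can be reconstructed from hyperspectral data (assuming a simple two-chromophore linear attenuation model; this is not the authors' algorithm), per-pixel haemoglobin concentrations can be fitted by least squares and converted to StO2:

    ```python
    import numpy as np

    def oxygenation_map(attenuation, eps_hbo2, eps_hb):
        """attenuation: (n_wavelengths, H, W); eps_*: (n_wavelengths,) extinction spectra."""
        E = np.stack([eps_hbo2, eps_hb], axis=1)              # (n_wl, 2) model matrix
        nwl, h, w = attenuation.shape
        conc, *_ = np.linalg.lstsq(E, attenuation.reshape(nwl, -1), rcond=None)
        hbo2, hb = conc.reshape(2, h, w)
        return hbo2 / (hbo2 + hb + 1e-12)                      # tissue oxygen saturation map
    ```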

  3. The pivotal role of multimodality reporter sensors in drug discovery: from cell based assays to real time molecular imaging.

    PubMed

    Ray, Pritha

    2011-04-01

    Development and marketing of new drugs require stringent validation that is expensive and time consuming. Non-invasive multimodality molecular imaging using reporter genes holds great potential to expedite these processes at reduced cost. New generations of smarter molecular imaging strategies, such as split-reporter, bioluminescence resonance energy transfer, and multimodality fusion reporter technologies, will further assist in streamlining and shortening the drug discovery and development process. This review illustrates the importance and potential of molecular imaging using multimodality reporter genes in drug development at preclinical phases.

  4. In vivo multimodal nonlinear optical imaging of mucosal tissue

    NASA Astrophysics Data System (ADS)

    Sun, Ju; Shilagard, Tuya; Bell, Brent; Motamedi, Massoud; Vargas, Gracie

    2004-05-01

    We present a multimodal nonlinear imaging approach to elucidate microstructures and spectroscopic features of oral mucosa and submucosa in vivo. The hamster buccal pouch was imaged using 3-D high-resolution multiphoton and second harmonic generation microscopy. The multimodal imaging approach enables colocalization and differentiation of prominent known spectroscopic and structural features such as keratin, epithelial cells, and submucosal collagen at various depths in tissue. Visualization of cellular morphology and epithelial thickness is in excellent agreement with histological observations. These results suggest that multimodal nonlinear optical microscopy can be an effective tool for studying the physiology and pathology of mucosal tissue.

  5. SU-E-J-110: A Novel Level Set Active Contour Algorithm for Multimodality Joint Segmentation/Registration Using the Jensen-Rényi Divergence.

    PubMed

    Markel, D; Naqa, I El; Freeman, C; Vallières, M

    2012-06-01

    To present a novel joint segmentation/registration framework for multimodality image-guided and adaptive radiotherapy. A major challenge to this framework is the sensitivity of many segmentation or registration algorithms to noise. Presented is a level set active contour based on the Jensen-Rényi (JR) divergence to achieve improved noise robustness in a multi-modality imaging space. It was found that the JR divergence, when used for segmentation, is more robust to noise than mutual information or other entropy-based metrics; the MI metric failed at roughly two-thirds of the noise power at which the JR divergence failed. The JR divergence metric is useful for the task of joint segmentation/registration of multimodality images and shows improved results compared to entropy-based metrics. The algorithm can be easily modified to incorporate non-intensity-based images, which would allow applications into multi-modality and texture analysis. © 2012 American Association of Physicists in Medicine.
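
    The record does not spell out the divergence formulation, so the following is a minimal sketch assuming the standard definition of the Jensen-Rényi divergence (Rényi entropy of a weighted mixture minus the weighted Rényi entropies); in the segmentation setting it would typically be evaluated on the intensity histograms inside and outside the evolving contour. The function names, default alpha, and toy histograms are illustrative assumptions only.

        import numpy as np

        def renyi_entropy(p, alpha=0.5, eps=1e-12):
            # Renyi entropy of a discrete distribution p (alpha != 1);
            # alpha in (0, 1) keeps the entropy concave
            p = np.asarray(p, dtype=float)
            p = p / (p.sum() + eps)
            return np.log((p ** alpha).sum() + eps) / (1.0 - alpha)

        def jensen_renyi_divergence(hists, weights=None, alpha=0.5):
            # H_alpha(weighted mixture) - weighted sum of H_alpha(each histogram)
            hists = [np.asarray(h, dtype=float) / (h.sum() + 1e-12) for h in hists]
            if weights is None:
                weights = np.full(len(hists), 1.0 / len(hists))
            mixture = sum(w * h for w, h in zip(weights, hists))
            return renyi_entropy(mixture, alpha) - sum(
                w * renyi_entropy(h, alpha) for w, h in zip(weights, hists))

        # toy example: intensity histograms inside vs. outside a contour
        inside = np.histogram(np.random.normal(100, 10, 5000), bins=64, range=(0, 255))[0]
        outside = np.histogram(np.random.normal(160, 10, 5000), bins=64, range=(0, 255))[0]
        print(jensen_renyi_divergence([inside, outside]))  # larger when the regions differ

    A level set evolution would then seek the contour that maximizes this quantity, summed over the modalities, which is where the reported robustness to noise relative to mutual information comes in.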

  6. "Star Wars", Model Making, and Cultural Critique: A Case for Film Study in Art Classrooms

    ERIC Educational Resources Information Center

    Briggs, Judith

    2009-01-01

    Films are multimodal, often memorable, and change one's way of thinking. Films provide narratives and visual metaphors that function as tools for one's imagination and learning. No other film has amplified this phenomenon in the United States more than the "Star Wars" Cycle. "Star Wars" exemplifies the multidimensionality of…

  7. Literacy through Photography: Multimodal and Visual Literacy in a Third Grade Classroom

    ERIC Educational Resources Information Center

    Wiseman, Angela M.; Mäkinen, Marita; Kupiainen, Reijo

    2016-01-01

    This article reports findings from a diverse third grade classroom that integrates a literacy through photography (LTP) curriculum as a central component of writing instruction in an urban public school. A case study approach was used in order to provide an in-depth, multi-dimensional consideration of phenomena by drawing on multiple data sources…

  8. Bridging In-School and Out-of-School Literacies: An Adolescent EL's Composition of a Multimodal Project

    ERIC Educational Resources Information Center

    Pyo, Jeongsoo

    2016-01-01

    As new technology has changed adolescents' literate life pathways outside of school in remarkable ways, new uses of terminology, such as "multiliteracies", are necessary to capture the multidimensional nature of literacy. However, there have been few studies on the multiliteracies experiences of Korean adolescent English learners (ELs).…

  9. The Behavior Assessment Battery: A Preliminary Study of Non-Stuttering Pakistani Grade-School Children

    ERIC Educational Resources Information Center

    Vanryckeghem, Martine; Mukati, Samad A.

    2006-01-01

    Background: In recent years, the importance of a multimodal approach to the assessment of the person who stutters (PWS) has become increasingly recognized. The Behavior Assessment Battery (BAB), which is a normed test procedure developed by G. Brutten, makes it possible to assess the multidimensional facets of this disorder. The emotional and…

  10. Ridge-branch-based blood vessel detection algorithm for multimodal retinal images

    NASA Astrophysics Data System (ADS)

    Li, Y.; Hutchings, N.; Knighton, R. W.; Gregori, G.; Lujan, B. J.; Flanagan, J. G.

    2009-02-01

    Automatic detection of retinal blood vessels is important for medical diagnosis and imaging. With the development of imaging technologies, various modalities of retinal images are available, yet few published algorithms can be applied to multimodal retinal images, and performance on images with pathologies still needs improvement. The purpose of this paper is to propose an automatic Ridge-Branch-Based (RBB) algorithm that detects blood vessel centerlines and blood vessels in multimodal retinal images (for example, color fundus photographs, fluorescein angiograms, fundus autofluorescence images, SLO fundus images, and OCT fundus images). Ridges, which can be considered centerlines of vessel-like patterns, are first extracted. The method then uses the connective branching information of image ridges: if ridge pixels are connected, they are more likely to belong to the same class, vessel ridge pixels or non-vessel ridge pixels. Thanks to the good discriminating ability of the designed "Segment-Based Ridge Features", the classifier and its parameters can be easily adapted to multimodal retinal images without ground-truth training. We present thorough experimental results on SLO images, a color fundus photograph database, and other multimodal retinal images, as well as comparisons with other published algorithms. The results show that the RBB algorithm achieves good performance.
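
    The record only outlines the pipeline, so the sketch below illustrates a generic first step (ridge extraction from the Hessian of a smoothed image) rather than the authors' exact RBB implementation; grouping ridge pixels into segments and the segment-based classification are only indicated in comments, and all names and parameters are assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def ridge_strength(image, sigma=2.0):
            # principal-curvature ridge measure from the Hessian of a smoothed image;
            # bright, elongated (vessel-like) structures give large positive values
            image = image.astype(float)
            Hxx = gaussian_filter(image, sigma, order=(0, 2))  # d2/dx2
            Hyy = gaussian_filter(image, sigma, order=(2, 0))  # d2/dy2
            Hxy = gaussian_filter(image, sigma, order=(1, 1))
            tmp = np.sqrt(((Hxx - Hyy) / 2.0) ** 2 + Hxy ** 2)
            lam_small = (Hxx + Hyy) / 2.0 - tmp   # strongly negative across a bright ridge
            return np.maximum(-lam_small, 0.0)

        # candidate ridge pixels would then be thresholded, grouped into connected
        # segments, and classified as vessel / non-vessel from segment-based features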

  11. NaGdF4:Nd3+/Yb3+ Nanoparticles as Multimodal Imaging Agents

    NASA Astrophysics Data System (ADS)

    Pedraza, Francisco; Rightsell, Chris; Kumar, Ga; Giuliani, Jason; Monton, Car; Sardar, Dhiraj

    Medical imaging is a fundamental tool used for the diagnosis of numerous ailments. Each imaging modality has unique advantages; however, each also has intrinsic limitations, including low spatial resolution, limited sensitivity, shallow penetration depth, and radiation damage. To circumvent these problems, the combination of imaging modalities, or multimodal imaging, has been proposed, such as near-infrared fluorescence (NIRF) imaging combined with magnetic resonance imaging (MRI). By combining the specificity and selectivity of NIRF with the deep penetration and high spatial resolution of MRI, it is possible to overcome their individual shortcomings and obtain a more robust imaging technique. In addition, both imaging modalities are very safe and minimally invasive. Fluorescent nanoparticles, such as NaGdF4:Nd3+/Yb3+, are excellent candidates for NIRF/MRI multimodal imaging. The dopants, Nd and Yb, absorb and emit within the biological window, where near-infrared light is less attenuated by soft tissue; this results in less tissue damage and deeper tissue penetration, making the particles viable candidates for biological imaging. In addition, the inclusion of Gd imparts paramagnetic properties, allowing their use as contrast agents in multimodal imaging. The work presented includes crystallographic results as well as full optical and magnetic characterization to determine the nanoparticles' viability in multimodal imaging.

  12. Developing single-laser sources for multimodal coherent anti-Stokes Raman scattering microscopy

    NASA Astrophysics Data System (ADS)

    Pegoraro, Adrian Frank

    Coherent anti-Stokes Raman scattering (CARS) microscopy has developed rapidly and is opening the door to new types of experiments. This work describes the development of new laser sources for CARS microscopy and their use for different applications. It is specifically focused on multimodal nonlinear optical microscopy—the simultaneous combination of different imaging techniques. This allows us to address a diverse range of applications, such as the study of biomaterials, fluid inclusions, atherosclerosis, hepatitis C infection in cells, and ice formation in cells. For these applications new laser sources are developed that allow for practical multimodal imaging. For example, it is shown that using a single Ti:sapphire oscillator with a photonic crystal fiber, it is possible to develop a versatile multimodal imaging system using optimally chirped laser pulses. This system can perform simultaneous two photon excited fluorescence, second harmonic generation, and CARS microscopy. The versatility of the system is further demonstrated by showing that it is possible to probe different Raman modes using CARS microscopy simply by changing a time delay between the excitation beams. Using optimally chirped pulses also enables further simplification of the laser system required by using a single fiber laser combined with nonlinear optical fibers to perform effective multimodal imaging. While these sources are useful for practical multimodal imaging, it is believed that for further improvements in CARS microscopy sensitivity, new excitation schemes are necessary. This has led to the design of a new, high power, extended cavity oscillator that should be capable of implementing new excitation schemes for CARS microscopy as well as other techniques. Our interest in multimodal imaging has led us to other areas of research as well. For example, a fiber-coupling scheme for signal collection in the forward direction is demonstrated that allows for fluorescence lifetime imaging without significant temporal distortion. Also highlighted is an imaging artifact that is unique to CARS microscopy that can alter image interpretation, especially when using multimodal imaging. By combining expertise in nonlinear optics, laser development, fiber optics, and microscopy, we have developed systems and techniques that will be of benefit for multimodal CARS microscopy.

  13. Mobile, Multi-modal, Label-Free Imaging Probe Analysis of Choroidal Oximetry and Retinal Hypoxia

    DTIC Science & Technology

    2015-10-01

    Award number: W81XWH-14-1-0537. The recoverable report fragments describe imaging choroidal vessels and capillaries using CARS intravital microscopy and measuring oxy-hemoglobin levels in PBI test and control eyes; the remainder of the record consists of standard report-documentation (SF 298) form fields.

  14. Radioactive Nanomaterials for Multimodality Imaging

    PubMed Central

    Chen, Daiqin; Dougherty, Casey A.; Yang, Dongzhi; Wu, Hongwei; Hong, Hao

    2016-01-01

    Nuclear imaging techniques, primarily positron emission tomography (PET) and single-photon emission computed tomography (SPECT), can provide quantitative information about a biological event in vivo with ultra-high sensitivity; however, their comparatively low spatial resolution is the major limitation in clinical application. By converging nuclear imaging with other imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and optical imaging, hybrid imaging platforms can overcome the limitations of each individual imaging technique. Possessing versatile chemical linking ability and good cargo-loading capacity, radioactive nanomaterials can serve as ideal imaging contrast agents. In this review, we provide a brief overview of current state-of-the-art applications of radioactive nanomaterials in multimodality imaging. We present strategies for incorporation of radioisotope(s) into nanomaterials along with applications of radioactive nanomaterials in multimodal imaging. Advantages and limitations of radioactive nanomaterials for multimodal imaging applications are discussed. Finally, a future perspective on possible radioactive nanomaterial utilization is presented for improving diagnosis and patient management in a variety of diseases. PMID:27227167

  15. MO-DE-202-03: Image-Guided Surgery and Interventions in the Advanced Multimodality Image-Guided Operating (AMIGO) Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kapur, T.

    At least three major trends in surgical intervention have emerged over the last decade: a move toward more minimally invasive (or non-invasive) approach to the surgical target; the development of high-precision treatment delivery techniques; and the increasing role of multi-modality intraoperative imaging in support of such procedures. This symposium includes invited presentations on recent advances in each of these areas and the emerging role for medical physics research in the development and translation of high-precision interventional techniques. The four speakers are: Keyvan Farahani, “Image-guided focused ultrasound surgery and therapy” Jeffrey H. Siewerdsen, “Advances in image registration and reconstruction for image-guided neurosurgery” Tina Kapur, “Image-guided surgery and interventions in the advanced multimodality image-guided operating (AMIGO) suite” Raj Shekhar, “Multimodality image-guided interventions: Multimodality for the rest of us” Learning Objectives: Understand the principles and applications of HIFU in surgical ablation. Learn about recent advances in 3D–2D and 3D deformable image registration in support of surgical safety and precision. Learn about recent advances in model-based 3D image reconstruction in application to intraoperative 3D imaging. Understand the multi-modality imaging technologies and clinical applications investigated in the AMIGO suite. Understand the emerging need and techniques to implement multi-modality image guidance in surgical applications such as neurosurgery, orthopaedic surgery, vascular surgery, and interventional radiology. Research supported by the NIH and Siemens Healthcare.; J. Siewerdsen; Grant Support - National Institutes of Health; Grant Support - Siemens Healthcare; Grant Support - Carestream Health; Advisory Board - Carestream Health; Licensing Agreement - Carestream Health; Licensing Agreement - Elekta Oncology.; T. Kapur, P41EB015898; R. Shekhar, Funding: R42CA137886 and R41CA192504 Disclosure and CoI: IGI Technologies, small-business partner on the grants.

  16. Robust Multimodal Dictionary Learning

    PubMed Central

    Cao, Tian; Jojic, Vladimir; Modla, Shannon; Powell, Debbie; Czymmek, Kirk; Niethammer, Marc

    2014-01-01

    We propose a robust multimodal dictionary learning method for multimodal images. Joint dictionary learning for both modalities may be impaired by lack of correspondence between image modalities in training data, for example due to areas of low quality in one of the modalities. Dictionaries learned with such non-corresponding data will induce uncertainty about image representation. In this paper, we propose a probabilistic model that accounts for image areas that are poorly corresponding between the image modalities. We cast the problem of learning a dictionary in presence of problematic image patches as a likelihood maximization problem and solve it with a variant of the EM algorithm. Our algorithm iterates identification of poorly corresponding patches and refinements of the dictionary. We tested our method on synthetic and real data. We show improvements in image prediction quality and alignment accuracy when using the method for multimodal image registration. PMID:24505674
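
    As a rough illustration of the iterate-and-reweight idea (not the authors' probabilistic model, which uses soft responsibilities within an EM variant), the sketch below alternates between flagging poorly corresponding patch pairs by their joint reconstruction residual and relearning the joint dictionary on the pairs currently believed to correspond. The function name, parameters, and the use of scikit-learn's DictionaryLearning are assumptions made for the sketch.

        import numpy as np
        from sklearn.decomposition import DictionaryLearning

        def robust_joint_dictionary(patches_a, patches_b, n_atoms=32, n_iter=5, keep=0.8):
            # hard-EM style sketch: patches_a / patches_b are (n_patches x dim) arrays
            # of corresponding patches from the two modalities
            X = np.hstack([patches_a, patches_b])          # joint patch vectors
            good = np.ones(len(X), dtype=bool)
            for _ in range(n_iter):
                dl = DictionaryLearning(n_components=n_atoms, alpha=1.0, max_iter=200)
                dl.fit(X[good])                            # M-step: refit on trusted pairs
                codes = dl.transform(X)                    # sparse codes for every pair
                resid = np.linalg.norm(X - codes @ dl.components_, axis=1)
                good = resid <= np.quantile(resid, keep)   # E-step: flag outlier pairs
            return dl, good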

  17. Multimodal fusion of brain imaging data: A key to finding the missing link(s) in complex mental illness

    PubMed Central

    Calhoun, Vince D; Sui, Jing

    2016-01-01

    It is becoming increasingly clear that combining multi-modal brain imaging data is able to provide more information for individual subjects by exploiting the rich multimodal information that exists. However, the number of studies that do true multimodal fusion (i.e. capitalizing on joint information among modalities) is still remarkably small given the known benefits. In part, this is because multi-modal studies require broader expertise in collecting, analyzing, and interpreting the results than do unimodal studies. In this paper, we start by introducing the basic reasons why multimodal data fusion is important and what it can do, and importantly how it can help us avoid wrong conclusions and help compensate for imperfect brain imaging studies. We also discuss the challenges that need to be confronted for such approaches to be more widely applied by the community. We then provide a review of the diverse studies that have used multimodal data fusion (primarily focused on psychosis) as well as provide an introduction to some of the existing analytic approaches. Finally, we discuss some up-and-coming approaches to multi-modal fusion including deep learning and multimodal classification which show considerable promise. Our conclusion is that multimodal data fusion is rapidly growing, but it is still underutilized. The complexity of the human brain coupled with the incomplete measurement provided by existing imaging technology makes multimodal fusion essential in order to mitigate against misdirection and hopefully provide a key to finding the missing link(s) in complex mental illness. PMID:27347565

  18. Multimodal fusion of brain imaging data: A key to finding the missing link(s) in complex mental illness.

    PubMed

    Calhoun, Vince D; Sui, Jing

    2016-05-01

    It is becoming increasingly clear that combining multi-modal brain imaging data is able to provide more information for individual subjects by exploiting the rich multimodal information that exists. However, the number of studies that do true multimodal fusion (i.e. capitalizing on joint information among modalities) is still remarkably small given the known benefits. In part, this is because multi-modal studies require broader expertise in collecting, analyzing, and interpreting the results than do unimodal studies. In this paper, we start by introducing the basic reasons why multimodal data fusion is important and what it can do, and importantly how it can help us avoid wrong conclusions and help compensate for imperfect brain imaging studies. We also discuss the challenges that need to be confronted for such approaches to be more widely applied by the community. We then provide a review of the diverse studies that have used multimodal data fusion (primarily focused on psychosis) as well as provide an introduction to some of the existing analytic approaches. Finally, we discuss some up-and-coming approaches to multi-modal fusion including deep learning and multimodal classification which show considerable promise. Our conclusion is that multimodal data fusion is rapidly growing, but it is still underutilized. The complexity of the human brain coupled with the incomplete measurement provided by existing imaging technology makes multimodal fusion essential in order to mitigate against misdirection and hopefully provide a key to finding the missing link(s) in complex mental illness.

  19. Fluorescence Imaging Topography Scanning System for intraoperative multimodal imaging

    PubMed Central

    Quang, Tri T.; Kim, Hye-Yeong; Bao, Forrest Sheng; Papay, Francis A.; Edwards, W. Barry; Liu, Yang

    2017-01-01

    Fluorescence imaging is a powerful technique with diverse applications in intraoperative settings. Visualization of three dimensional (3D) structures and depth assessment of lesions, however, are oftentimes limited in planar fluorescence imaging systems. In this study, a novel Fluorescence Imaging Topography Scanning (FITS) system has been developed, which offers color reflectance imaging, fluorescence imaging and surface topography scanning capabilities. The system is compact and portable, and thus suitable for deployment in the operating room without disturbing the surgical flow. For system performance, parameters including near infrared fluorescence detection limit, contrast transfer functions and topography depth resolution were characterized. The developed system was tested in chicken tissues ex vivo with simulated tumors for intraoperative imaging. We subsequently conducted in vivo multimodal imaging of sentinel lymph nodes in mice using FITS and PET/CT. The PET/CT/optical multimodal images were co-registered and conveniently presented to users to guide surgeries. Our results show that the developed system can facilitate multimodal intraoperative imaging. PMID:28437441

  20. Microscopy with multimode fibers

    NASA Astrophysics Data System (ADS)

    Moser, Christophe; Papadopoulos, Ioannis; Farahi, Salma; Psaltis, Demetri

    2013-04-01

    Microscopes are usually thought of as comprising imaging elements such as objectives and eye-piece lenses. A different type of microscope, used for endoscopy, consists of waveguiding elements such as fiber bundles, where each fiber in the bundle transports the light corresponding to one pixel in the image. Recently a new type of microscope has emerged that exploits the large number of propagating modes in a single multimode fiber. We have successfully produced fluorescence images of neural cells with sub-micrometer resolution via a 200 micrometer core multimode fiber. The method for achieving imaging consists of using digital phase conjugation to reproduce a focal spot at the tip of the multimode fiber. The image is formed by scanning the focal spot digitally and collecting the fluorescence point by point.

  1. Landmark Image Retrieval by Jointing Feature Refinement and Multimodal Classifier Learning.

    PubMed

    Zhang, Xiaoming; Wang, Senzhang; Li, Zhoujun; Ma, Shuai

    2018-06-01

    Landmark retrieval is the task of returning a set of images whose landmarks are similar to those of the query images. Existing studies on landmark retrieval focus on exploiting the geometries of landmarks for visual similarity matching. However, the visual content of social images is highly diverse within many landmarks, and some images share common patterns across different landmarks. On the other hand, it has been observed that social images usually contain multimodal contents, i.e., visual content and text tags, and each landmark has unique characteristics in both its visual content and its text content. Therefore, approaches based on similarity matching may not be effective in this environment. In this paper, we investigate whether the geographical correlation between the visual content and the text content can be exploited for landmark retrieval. In particular, we propose an effective multimodal landmark classification paradigm that leverages the multimodal contents of social images for landmark retrieval, integrating feature refinement and the landmark classifier with multimodal contents in a joint model. Geo-tagged images are automatically labeled for classifier learning. Visual features are refined based on low-rank matrix recovery, and a multimodal classifier combined with a group-sparse constraint is learned from the automatically labeled images. Finally, candidate images are ranked by combining the classification result with a measure of semantic consistency between the visual content and the text content. Experiments on real-world datasets demonstrate the superiority of the proposed approach over existing methods.

  2. Supervised learning based multimodal MRI brain tumour segmentation using texture features from supervoxels.

    PubMed

    Soltaninejad, Mohammadreza; Yang, Guang; Lambrou, Tryphon; Allinson, Nigel; Jones, Timothy L; Barrick, Thomas R; Howe, Franklyn A; Ye, Xujiong

    2018-04-01

    Accurate segmentation of brain tumour in magnetic resonance images (MRI) is a difficult task due to various tumour types. Using information and features from multimodal MRI, including structural MRI and the isotropic (p) and anisotropic (q) components derived from diffusion tensor imaging (DTI), may result in a more accurate analysis of brain images. We propose a novel 3D supervoxel-based learning method for segmentation of tumour in multimodal MRI brain images (conventional MRI and DTI). Supervoxels are generated using the information across the multimodal MRI dataset. For each supervoxel, a variety of features are extracted, including histograms of a texton descriptor, calculated using a set of Gabor filters with different sizes and orientations, and first-order intensity statistics. These features are fed into a random forest (RF) classifier to classify each supervoxel into tumour core, oedema, or healthy brain tissue. The method is evaluated on two datasets: 1) our clinical dataset of 11 multimodal images of patients and 2) the BRATS 2013 clinical dataset of 30 multimodal images. For our clinical dataset, the average detection sensitivity of tumour (including tumour core and oedema) using multimodal MRI is 86% with a balanced error rate (BER) of 7%, while the Dice score for automatic tumour segmentation against ground truth is 0.84. The corresponding results for the BRATS 2013 dataset are 96%, 2%, and 0.89, respectively. The method demonstrates promising results in the segmentation of brain tumour. Adding features from multimodal MRI images can largely increase the segmentation accuracy. The method provides a close match to expert delineation across all tumour grades, leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management. Copyright © 2018 Elsevier B.V. All rights reserved.
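
    A minimal 2D sketch of the texton-histogram feature construction described above (the paper works with 3D supervoxels and a larger Gabor bank; supervoxel generation itself, e.g. SLIC, is not shown). Channel names, filter settings, and the number of textons are assumptions made for illustration.

        import numpy as np
        from skimage.filters import gabor
        from sklearn.cluster import KMeans
        from sklearn.ensemble import RandomForestClassifier

        def texton_features(channels, supervoxel_labels, n_textons=16):
            # channels: list of 2D arrays (one per MRI modality), same shape
            # supervoxel_labels: integer label image of the same shape
            responses = []
            for img in channels:
                for freq in (0.1, 0.2, 0.4):
                    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
                        real, _ = gabor(img.astype(float), frequency=freq, theta=theta)
                        responses.append(real.ravel())
            responses = np.stack(responses, axis=1)               # pixels x filters
            textons = KMeans(n_clusters=n_textons, n_init=4).fit_predict(responses)
            feats = []
            for sv in np.unique(supervoxel_labels):
                mask = (supervoxel_labels == sv).ravel()
                hist = np.bincount(textons[mask], minlength=n_textons) / mask.sum()
                stats = [f(img.ravel()[mask]) for img in channels for f in (np.mean, np.std)]
                feats.append(np.concatenate([hist, stats]))
            return np.array(feats)

        # per-supervoxel features from labelled training cases are then classified, e.g.
        # clf = RandomForestClassifier(n_estimators=200).fit(train_feats, train_labels)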

  3. MO-DE-202-04: Multimodality Image-Guided Surgery and Intervention: For the Rest of Us

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shekhar, R.

    At least three major trends in surgical intervention have emerged over the last decade: a move toward more minimally invasive (or non-invasive) approach to the surgical target; the development of high-precision treatment delivery techniques; and the increasing role of multi-modality intraoperative imaging in support of such procedures. This symposium includes invited presentations on recent advances in each of these areas and the emerging role for medical physics research in the development and translation of high-precision interventional techniques. The four speakers are: Keyvan Farahani, “Image-guided focused ultrasound surgery and therapy” Jeffrey H. Siewerdsen, “Advances in image registration and reconstruction for image-guided neurosurgery” Tina Kapur, “Image-guided surgery and interventions in the advanced multimodality image-guided operating (AMIGO) suite” Raj Shekhar, “Multimodality image-guided interventions: Multimodality for the rest of us” Learning Objectives: Understand the principles and applications of HIFU in surgical ablation. Learn about recent advances in 3D–2D and 3D deformable image registration in support of surgical safety and precision. Learn about recent advances in model-based 3D image reconstruction in application to intraoperative 3D imaging. Understand the multi-modality imaging technologies and clinical applications investigated in the AMIGO suite. Understand the emerging need and techniques to implement multi-modality image guidance in surgical applications such as neurosurgery, orthopaedic surgery, vascular surgery, and interventional radiology. Research supported by the NIH and Siemens Healthcare.; J. Siewerdsen; Grant Support - National Institutes of Health; Grant Support - Siemens Healthcare; Grant Support - Carestream Health; Advisory Board - Carestream Health; Licensing Agreement - Carestream Health; Licensing Agreement - Elekta Oncology.; T. Kapur, P41EB015898; R. Shekhar, Funding: R42CA137886 and R41CA192504 Disclosure and CoI: IGI Technologies, small-business partner on the grants.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lingerfelt, Eric J; Endeve, Eirik; Hui, Yawei

    Improvements in scientific instrumentation allow imaging at mesoscopic to atomic length scales and in many spectroscopic modes, and now--with the rise of multimodal acquisition systems and the associated processing capability--the era of multidimensional, informationally dense data sets has arrived. Technical issues in these combinatorial scientific fields are exacerbated by computational challenges best summarized as a necessity for drastic improvement in the capability to transfer, store, and analyze large volumes of data. The Bellerophon Environment for Analysis of Materials (BEAM) platform provides material scientists the capability to directly leverage the integrated computational and analytical power of High Performance Computing (HPC) to perform scalable data analysis and simulation and manage uploaded data files via an intuitive, cross-platform client user interface. This framework delivers authenticated, "push-button" execution of complex user workflows that deploy data analysis algorithms and computational simulations utilizing compute-and-data cloud infrastructures and HPC environments like Titan at the Oak Ridge Leadership Computing Facility (OLCF).

  5. Tinnitus Multimodal Imaging

    DTIC Science & Technology

    2015-10-01

    Award number: W81XWH-13-1-0494. Principal investigator: Steven Wan Cheung. Most of the record consists of standard report-documentation (SF 298) form fields; the abstract fragment reads: Tinnitus is a common auditory perceptual disorder whose neural substrates are under intense debate. This project

  6. Application of a hierarchical structure stochastic learning automation

    NASA Technical Reports Server (NTRS)

    Neville, R. G.; Chrystall, M. S.; Mars, P.

    1979-01-01

    A hierarchical structure automaton was developed using a two-state stochastic learning automaton (SLA) in a time-shared model. Application of the hierarchical SLA to systems with multidimensional, multimodal performance criteria is described. Results of experiments performed with the hierarchical SLA, using a performance index with a superimposed noise component of ±δ distributed uniformly over the surface, are discussed.
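
    For readers unfamiliar with the building block, a minimal sketch of a two-action stochastic learning automaton with a linear reward-inaction (L_R-I) update follows; the hierarchical, time-shared structure and the noisy multidimensional performance index from the record are not reproduced, and the reward probabilities are invented for the toy example.

        import random

        def two_state_sla(reward_prob, steps=5000, a=0.02):
            # reward_prob[i]: unknown probability that the environment rewards action i
            p = [0.5, 0.5]                      # action selection probabilities
            for _ in range(steps):
                action = 0 if random.random() < p[0] else 1
                rewarded = random.random() < reward_prob[action]
                if rewarded:                    # reinforce the chosen action only
                    other = 1 - action
                    p[action] += a * p[other]
                    p[other] -= a * p[other]
                # on penalty, L_R-I leaves the probabilities unchanged
            return p

        print(two_state_sla([0.8, 0.4]))        # drifts towards the better action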

  7. A multimodal imaging workflow to visualize metal mixtures in the human placenta and explore colocalization with biological response markers.

    PubMed

    Niedzwiecki, Megan M; Austin, Christine; Remark, Romain; Merad, Miriam; Gnjatic, Sacha; Estrada-Gutierrez, Guadalupe; Espejel-Nuñez, Aurora; Borboa-Olivares, Hector; Guzman-Huerta, Mario; Wright, Rosalind J; Wright, Robert O; Arora, Manish

    2016-04-01

    Fetal exposure to essential and toxic metals can influence life-long health trajectories. The placenta regulates chemical transmission from maternal circulation to the fetus and itself exhibits a complex response to environmental stressors. The placenta can thus be a useful matrix to monitor metal exposures and stress responses in utero, but strategies to explore the biologic effects of metal mixtures in this organ are not well developed. In this proof-of-concept study, we used laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) to measure the distributions of multiple metals in placental tissue from a low-birth-weight pregnancy, and we developed an approach to identify the components of metal mixtures that colocalized with biological response markers. Our novel workflow, which includes custom-developed software tools and algorithms for spatial outlier identification and background subtraction in multidimensional elemental image stacks, enables rapid image processing and seamless integration of data from elemental imaging and immunohistochemistry. Using quantitative spatial statistics, we identified distinct patterns of metal accumulation at sites of inflammation. Broadly, our multiplexed approach can be used to explore the mechanisms mediating complex metal exposures and biologic responses within placentae and other tissue types. Our LA-ICP-MS image processing workflow can be accessed through our interactive R Shiny application 'shinyImaging', available through our laboratory's website.
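
    The authors' tools are R/Shiny-based and their exact algorithms are not given in the record, so the following is only a generic sketch of the two preprocessing steps named above (background subtraction and spatial-outlier suppression) applied to a channels-last elemental image stack; thresholds and parameter names are assumptions.

        import numpy as np
        from scipy.ndimage import median_filter

        def clean_elemental_stack(stack, bg_percentile=5, outlier_z=6.0, size=3):
            # stack: rows x cols x elements array of elemental intensities
            cleaned = np.empty_like(stack, dtype=float)
            for k in range(stack.shape[-1]):
                chan = stack[..., k].astype(float)
                chan = chan - np.percentile(chan, bg_percentile)   # crude background level
                chan[chan < 0] = 0.0
                smooth = median_filter(chan, size=size)
                resid = chan - smooth
                mad = np.median(np.abs(resid)) + 1e-9              # robust residual scale
                spikes = np.abs(resid) > outlier_z * 1.4826 * mad
                chan[spikes] = smooth[spikes]                      # replace isolated spikes
                cleaned[..., k] = chan
            return cleaned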

  8. Multimodal Imaging of the Normal Eye.

    PubMed

    Kawali, Ankush; Pichi, Francesco; Avadhani, Kavitha; Invernizzi, Alessandro; Hashimoto, Yuki; Mahendradas, Padmamalini

    2017-10-01

    Multimodal imaging is the concept of "bundling" images obtained from various imaging modalities, viz., fundus photography, fundus autofluorescence imaging, infrared (IR) imaging, simultaneous fluorescein and indocyanine angiography, optical coherence tomography (OCT), and, more recently, OCT angiography. Each modality has its pros and cons as well as its limitations. Combining multiple imaging techniques overcomes their individual weaknesses and gives a comprehensive picture. Such an approach helps in accurate localization of a lesion and in understanding pathology in the posterior segment. It is important to know the imaging appearance of the normal eye before evaluating pathology. This article describes these multimodal imaging modalities in detail and discusses the features of the healthy eye as seen on the various imaging modalities mentioned above.

  9. A novel automated method for doing registration and 3D reconstruction from multi-modal RGB/IR image sequences

    NASA Astrophysics Data System (ADS)

    Kirby, Richard; Whitaker, Ross

    2016-09-01

    In recent years, the use of multi-modal camera rigs consisting of an RGB sensor and an infrared (IR) sensor has become increasingly popular for surveillance and robotics applications. The advantages of using multi-modal camera rigs include improved foreground/background segmentation, a wider range of lighting conditions under which the system works, and richer information (e.g., visible light and heat signature) for target identification. However, the traditional computer vision approach of mapping pairs of images using pixel intensities or image features is often not possible with an RGB/IR image pair. We introduce a novel method to overcome the lack of common features in RGB/IR image pairs by using a variational optimization algorithm to map the optical flow fields computed from the different-wavelength images. This results in the alignment of the flow fields, which in turn produces correspondences similar to those found in a stereo RGB/RGB camera rig using pixel intensities or image features. In addition to aligning the different-wavelength images, these correspondences are used to generate dense disparity and depth maps. We obtain accuracies similar to other multi-modal image alignment methodologies as long as the scene contains sufficient depth variation, although a direct comparison is not possible because of the lack of standard image sets from moving multi-modal camera rigs. We test our method on synthetic optical flow fields and on real image sequences that we created with a multi-modal binocular stereo RGB/IR camera rig, and we determine our method's accuracy by comparing against ground truth.
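
    The flow-field mapping itself is a dense variational problem that the record does not detail; the sketch below only shows the two ingredients it builds on, per-modality dense optical flow (here via OpenCV's Farneback implementation) and a toy alignment cost for a trial shift between the two flow fields. The brute-force shift search is a stand-in for the variational optimization, and all parameter values are assumptions.

        import numpy as np
        import cv2

        def dense_flow(prev_gray, next_gray):
            # dense Farneback optical flow between two consecutive single-channel frames
            return cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)

        def flow_alignment_cost(flow_rgb, flow_ir, dx):
            # cost of a trial horizontal shift dx aligning the IR flow field to the RGB one
            shifted = np.roll(flow_ir, dx, axis=1)
            return float(np.mean((flow_rgb - shifted) ** 2))

        # flow_rgb = dense_flow(rgb_gray_t0, rgb_gray_t1)
        # flow_ir  = dense_flow(ir_t0, ir_t1)
        # best_dx  = min(range(-64, 65), key=lambda d: flow_alignment_cost(flow_rgb, flow_ir, d))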

  10. Fast segmentation and high-quality three-dimensional volume mesh creation from medical images for diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Jermyn, Michael; Ghadyani, Hamid; Mastanduno, Michael A.; Turner, Wes; Davis, Scott C.; Dehghani, Hamid; Pogue, Brian W.

    2013-08-01

    Multimodal approaches that combine near-infrared (NIR) and conventional imaging modalities have been shown to improve optical parameter estimation dramatically and thus represent a prevailing trend in NIR imaging. These approaches typically involve applying anatomical templates from magnetic resonance imaging/computed tomography/ultrasound images to guide the recovery of optical parameters. However, merging these data sets using current technology requires multiple software packages, substantial expertise, and a significant time commitment, and it often results in unacceptably poor mesh quality for optical image reconstruction, a reality that represents a significant roadblock for translational research in multimodal NIR imaging. This work addresses these challenges directly by introducing automated Digital Imaging and Communications in Medicine (DICOM) image stack segmentation and a new one-click three-dimensional mesh generator optimized for multimodal NIR imaging, and by combining these capabilities into a single software package (available for free download) with a streamlined workflow. Image processing time and mesh quality benchmarks were examined for four common multimodal NIR use cases (breast, brain, pancreas, and small animal) and were compared to a commercial image processing package. Applying these tools resulted in a fivefold decrease in image processing time and a 62% improvement in minimum mesh quality, in the absence of extra mesh postprocessing. These capabilities represent a significant step toward enabling translational multimodal NIR research for both expert and nonexpert users in an open-source platform.

  11. Multimodal Image Registration through Simultaneous Segmentation.

    PubMed

    Aganj, Iman; Fischl, Bruce

    2017-11-01

    Multimodal image registration facilitates the combination of complementary information from images acquired with different modalities. Most existing methods require computation of the joint histogram of the images, while some perform joint segmentation and registration in alternate iterations. In this work, we introduce a new non-information-theoretical method for pairwise multimodal image registration, in which the error of segmentation - using both images - is considered as the registration cost function. We empirically evaluate our method via rigid registration of multi-contrast brain magnetic resonance images, and demonstrate an often higher registration accuracy in the results produced by the proposed technique, compared to those by several existing methods.
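
    The record describes the core idea (using the error of a segmentation computed jointly from both images as the registration cost) without giving the formulation, so the following is only a loose illustration: a k-means segmentation of the joint intensity pairs, whose within-cluster variance serves as a misalignment-sensitive cost over candidate translations. The published method optimizes a proper segmentation energy over rigid transforms; all names and the translation-only search here are assumptions.

        import numpy as np
        from scipy.ndimage import shift as nd_shift
        from sklearn.cluster import KMeans

        def joint_segmentation_cost(fixed, moving, offset, n_classes=3):
            # within-cluster variance of a k-means segmentation of the joint intensities
            warped = nd_shift(moving.astype(float), offset, order=1, mode='nearest')
            joint = np.stack([fixed.ravel(), warped.ravel()], axis=1)
            km = KMeans(n_clusters=n_classes, n_init=3).fit(joint)
            return km.inertia_ / joint.shape[0]       # smaller when tissue classes align

        def register_translation(fixed, moving, search=range(-5, 6)):
            # exhaustive search over integer translations (illustration only)
            return min(((dy, dx) for dy in search for dx in search),
                       key=lambda off: joint_segmentation_cost(fixed, moving, off))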

  12. Multimodal hard x-ray imaging with resolution approaching 10 nm for studies in material science

    NASA Astrophysics Data System (ADS)

    Yan, Hanfei; Bouet, Nathalie; Zhou, Juan; Huang, Xiaojing; Nazaretski, Evgeny; Xu, Weihe; Cocco, Alex P.; Chiu, Wilson K. S.; Brinkman, Kyle S.; Chu, Yong S.

    2018-03-01

    We report multimodal scanning hard x-ray imaging with spatial resolution approaching 10 nm and its application to contemporary studies in the field of material science. The high spatial resolution is achieved by focusing hard x-rays with two crossed multilayer Laue lenses and raster-scanning a sample with respect to the nanofocusing optics. Various techniques are used to characterize and verify the achieved focus size and imaging resolution. The multimodal imaging is realized by utilizing simultaneously absorption-, phase-, and fluorescence-contrast mechanisms. The combination of high spatial resolution and multimodal imaging enables a comprehensive study of a sample on a very fine length scale. In this work, the unique multimodal imaging capability was used to investigate a mixed ionic-electronic conducting ceramic-based membrane material employed in solid oxide fuel cells and membrane separations (compound of Ce0.8Gd0.2O2‑x and CoFe2O4) which revealed the existence of an emergent material phase and quantified the chemical complexity at the nanoscale.

  13. Towards a Compact Fiber Laser for Multimodal Imaging

    NASA Astrophysics Data System (ADS)

    Nie, Bai; Saytashev, Ilyas; Dantus, Marcos

    We report on multimodal depth-resolved imaging of unstained living Drosophila Melanogaster larva using sub-50 fs pulses centered at 1060 nm wavelength. Both second harmonic and third harmonic generation imaging modalities are demonstrated.

  14. Towards a compact fiber laser for multimodal imaging

    NASA Astrophysics Data System (ADS)

    Nie, Bai; Saytashev, Ilyas; Dantus, Marcos

    2014-03-01

    We report on multimodal depth-resolved imaging of unstained living Drosophila Melanogaster larva using sub-50 fs pulses centered at 1060 nm wavelength. Both second harmonic and third harmonic generation imaging modalities are demonstrated.

  15. A generative probabilistic model and discriminative extensions for brain lesion segmentation – with application to tumor and stroke

    PubMed Central

    Menze, Bjoern H.; Van Leemput, Koen; Lashkari, Danial; Riklin-Raviv, Tammy; Geremia, Ezequiel; Alberts, Esther; Gruber, Philipp; Wegener, Susanne; Weber, Marc-André; Székely, Gabor; Ayache, Nicholas; Golland, Polina

    2016-01-01

    We introduce a generative probabilistic model for segmentation of brain lesions in multi-dimensional images that generalizes the EM segmenter, a common approach for modelling brain images using Gaussian mixtures and a probabilistic tissue atlas that employs expectation-maximization (EM) to estimate the label map for a new image. Our model augments the probabilistic atlas of the healthy tissues with a latent atlas of the lesion. We derive an estimation algorithm with closed-form EM update equations. The method extracts a latent atlas prior distribution and the lesion posterior distributions jointly from the image data. It delineates lesion areas individually in each channel, allowing for differences in lesion appearance across modalities, an important feature of many brain tumor imaging sequences. We also propose discriminative model extensions to map the output of the generative model to arbitrary labels with semantic and biological meaning, such as “tumor core” or “fluid-filled structure”, but without a one-to-one correspondence to the hypo-or hyper-intense lesion areas identified by the generative model. We test the approach in two image sets: the publicly available BRATS set of glioma patient scans, and multimodal brain images of patients with acute and subacute ischemic stroke. We find the generative model that has been designed for tumor lesions to generalize well to stroke images, and the generative-discriminative model to be one of the top ranking methods in the BRATS evaluation. PMID:26599702

  16. A Generative Probabilistic Model and Discriminative Extensions for Brain Lesion Segmentation--With Application to Tumor and Stroke.

    PubMed

    Menze, Bjoern H; Van Leemput, Koen; Lashkari, Danial; Riklin-Raviv, Tammy; Geremia, Ezequiel; Alberts, Esther; Gruber, Philipp; Wegener, Susanne; Weber, Marc-Andre; Szekely, Gabor; Ayache, Nicholas; Golland, Polina

    2016-04-01

    We introduce a generative probabilistic model for segmentation of brain lesions in multi-dimensional images that generalizes the EM segmenter, a common approach for modelling brain images using Gaussian mixtures and a probabilistic tissue atlas that employs expectation-maximization (EM) to estimate the label map for a new image. Our model augments the probabilistic atlas of the healthy tissues with a latent atlas of the lesion. We derive an estimation algorithm with closed-form EM update equations. The method extracts a latent atlas prior distribution and the lesion posterior distributions jointly from the image data. It delineates lesion areas individually in each channel, allowing for differences in lesion appearance across modalities, an important feature of many brain tumor imaging sequences. We also propose discriminative model extensions to map the output of the generative model to arbitrary labels with semantic and biological meaning, such as "tumor core" or "fluid-filled structure", but without a one-to-one correspondence to the hypo- or hyper-intense lesion areas identified by the generative model. We test the approach in two image sets: the publicly available BRATS set of glioma patient scans, and multimodal brain images of patients with acute and subacute ischemic stroke. We find the generative model that has been designed for tumor lesions to generalize well to stroke images, and the generative-discriminative model to be one of the top ranking methods in the BRATS evaluation.

  17. A multimodal image sensor system for identifying water stress in grapevines

    NASA Astrophysics Data System (ADS)

    Zhao, Yong; Zhang, Qin; Li, Minzan; Shao, Yongni; Zhou, Jianfeng; Sun, Hong

    2012-11-01

    Water stress is one of the most common limitations on fruit growth, and water is the most limiting resource for crop growth. In grapevines, as in other fruit crops, fruit quality benefits from a certain level of water deficit, which helps balance vegetative and reproductive growth and the flow of carbohydrates to reproductive structures. In this paper, a multi-modal sensor system was designed to measure the reflectance signature of grape plant surfaces and identify different water stress levels. The multi-modal sensor system was equipped with one 3CCD camera (three channels: R, G, and IR). The sensor can capture and analyze the grape canopy from its reflectance features and identify different water stress levels. This research aims to address these problems. The core technology of this multi-modal sensor system could further be used in a decision support system that combines multi-modal sensory data to improve plant stress detection and identify the causes of stress. Images were taken by the multi-modal sensor, which outputs images in near-infrared, green, and red spectral bands. Based on the analysis of the acquired images, color features based on color space and reflectance features based on image processing methods were calculated. The results showed that these parameters have potential as water stress indicators. More experiments and analysis are needed to validate this conclusion.
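
    The record does not name the specific reflectance features, so the sketch below simply computes two common NDVI-style indices from the camera's red, green, and near-infrared channels, averaged over a canopy mask; the function name, the choice of indices, and the mask are assumptions for illustration.

        import numpy as np

        def canopy_reflectance_indices(red, green, nir, canopy_mask):
            # red / green / nir: 2D channel images from the 3CCD camera; canopy_mask: boolean
            red, green, nir = (c.astype(float) for c in (red, green, nir))
            eps = 1e-6
            ndvi = (nir - red) / (nir + red + eps)        # vigour-related index
            gndvi = (nir - green) / (nir + green + eps)   # greenness-related index
            return {
                "mean_ndvi": float(ndvi[canopy_mask].mean()),
                "mean_gndvi": float(gndvi[canopy_mask].mean()),
                "mean_nir_red_ratio": float((nir[canopy_mask] / (red[canopy_mask] + eps)).mean()),
            }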

  18. A hybrid multimodal non-rigid registration of MR images based on diffeomorphic demons.

    PubMed

    Lu, Huanxiang; Cattin, Philippe C; Reyes, Mauricio

    2010-01-01

    In this paper we present a novel hybrid approach for multimodal medical image registration based on diffeomorphic demons. Diffeomorphic demons have proven to be a robust and efficient approach to intensity-based image registration, and a recent extension even allows mutual information (MI) to be used as the similarity measure for registering multimodal images. However, due to the intensity correspondence uncertainty that exists in some anatomical regions, it is difficult for a purely intensity-based algorithm to solve the registration problem. Therefore, we propose to combine the transformations resulting from both intensity-based and landmark-based methods for multimodal non-rigid registration based on diffeomorphic demons. Several experiments on different types of MR images were conducted, showing that a better anatomical correspondence between the images can be obtained using the hybrid approach than using either intensity information or landmarks alone.
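
    The record does not specify how the two transformations are combined, so the following is only a plausible sketch in which a landmark-driven displacement field is blended into the intensity-driven (demons) field with a weight that decays with distance from the landmarks; the Gaussian weighting, field shapes, and names are all assumptions.

        import numpy as np

        def blend_displacement_fields(u_intensity, u_landmark, landmark_coords, shape, sigma=20.0):
            # u_intensity, u_landmark: (H, W, 2) displacement fields; landmark_coords: list of (y, x)
            yy, xx = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing='ij')
            dist2 = np.full(shape, np.inf)
            for (ly, lx) in landmark_coords:
                dist2 = np.minimum(dist2, (yy - ly) ** 2 + (xx - lx) ** 2)
            w = np.exp(-dist2 / (2.0 * sigma ** 2))       # 1 at landmarks, -> 0 far away
            return w[..., None] * u_landmark + (1.0 - w[..., None]) * u_intensity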

  19. Multimodal imaging of cutaneous wound tissue

    NASA Astrophysics Data System (ADS)

    Zhang, Shiwu; Gnyawali, Surya; Huang, Jiwei; Ren, Wenqi; Gordillo, Gayle; Sen, Chandan K.; Xu, Ronald

    2015-01-01

    Quantitative assessment of wound tissue ischemia, perfusion, and inflammation provides critical information for appropriate detection, staging, and treatment of chronic wounds. However, few methods are available for simultaneous assessment of these tissue parameters in a noninvasive and quantitative fashion. We integrated hyperspectral, laser speckle, and thermographic imaging modalities in a single experimental setup for multimodal assessment of tissue oxygenation, perfusion, and inflammation characteristics. Algorithms were developed for appropriate coregistration between wound images acquired by different imaging modalities at different times. The multimodal wound imaging system was validated in an occlusion experiment, where oxygenation and perfusion maps of a healthy subject's upper extremity were continuously monitored during a postocclusive reactive hyperemia procedure and compared with standard measurements. The system was also tested in a clinical trial where a wound of three millimeters in diameter was introduced on a healthy subject's lower extremity and the healing process was continuously monitored. Our in vivo experiments demonstrated the clinical feasibility of multimodal cutaneous wound imaging.

  20. MULTIMODAL IMAGING OF SYPHILITIC MULTIFOCAL RETINITIS.

    PubMed

    Curi, Andre L; Sarraf, David; Cunningham, Emmett T

    2015-01-01

    To describe multimodal imaging of syphilitic multifocal retinitis. Observational case series. Two patients developed multifocal retinitis after treatment of unrecognized syphilitic uveitis with systemic corticosteroids in the absence of appropriate antibiotic therapy. Multimodal imaging localized the foci of retinitis within the retina in contrast to superficial retinal precipitates that accumulate on the surface of the retina in eyes with untreated syphilitic uveitis. Although the retinitis resolved after treatment with systemic penicillin in both cases, vision remained poor in the patient with multifocal retinitis involving the macula. Treatment of unrecognized syphilitic uveitis with corticosteroids in the absence of antitreponemal treatment can lead to the development of multifocal retinitis. Multimodal imaging, and optical coherence tomography in particular, can be used to distinguish multifocal retinitis from superficial retinal precipitates or accumulations.

  1. Deep Multimodal Distance Metric Learning Using Click Constraints for Image Ranking.

    PubMed

    Yu, Jun; Yang, Xiaokang; Gao, Fei; Tao, Dacheng

    2017-12-01

    How do we retrieve images accurately, and how do we rank a group of images precisely and efficiently for specific queries? These problems are critical for researchers and engineers developing novel image search engines. First, it is important to obtain an appropriate description that effectively represents the images. In this paper, multimodal features are considered for describing images. The images' unique properties are reflected by visual features, which are correlated with each other; however, semantic gaps always exist between images' visual features and their semantics. Therefore, we utilize click features to reduce the semantic gap. The second key issue is learning an appropriate distance metric to combine these multimodal features. This paper develops a novel deep multimodal distance metric learning (Deep-MDML) method. A structured ranking model is adopted to utilize both visual and click features in distance metric learning (DML). Specifically, images and their related ranking results are first collected to form the training set, and multimodal features, including click and visual features, are collected for these images. A group of autoencoders is then applied to obtain initial distance metrics in the different visual spaces, and an MDML method is used to assign optimal weights to the different modalities. Alternating optimization is then used to train the ranking model, which is applied to rank new queries with click features. Compared with existing image ranking methods, the proposed method adopts a new ranking model that uses multimodal features, including click features and visual features, in DML. Experiments on two benchmark data sets validate the effectiveness of the proposed Deep-MDML method.
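
    To make the modality-weighting idea concrete, here is a minimal sketch of the final distance computation assumed by such a scheme: per-modality distances (e.g., in visual and click embedding spaces produced by separate autoencoders) combined with nonnegative weights. In the paper the weights are learned by alternating optimization; here they are fixed, and all names are illustrative.

        import numpy as np

        def multimodal_distance(query_feats, cand_feats, weights):
            # query_feats / cand_feats: lists of per-modality feature vectors
            weights = np.asarray(weights, dtype=float)
            weights = weights / weights.sum()
            dists = np.array([np.linalg.norm(q - c)
                              for q, c in zip(query_feats, cand_feats)])
            return float(np.dot(weights, dists))

        # example with two modalities (visual embedding, click embedding):
        # d = multimodal_distance([vis_q, clk_q], [vis_c, clk_c], weights=[0.7, 0.3])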

  2. Multimodality cardiac imaging at IRCCS Policlinico San Donato: a new interdisciplinary vision.

    PubMed

    Lombardi, Massimo; Secchi, Francesco; Pluchinotta, Francesca R; Castelvecchio, Serenella; Montericcio, Vincenzo; Camporeale, Antonia; Bandera, Francesco

    2016-04-28

    Multimodality imaging is the efficient integration of various methods of cardiovascular imaging to improve the ability to diagnose, guide therapy, or predict outcome. This approach implies both the availability of different technologies in a single unit and the presence of dedicated staff with cardiologic and radiologic background and certified competence in more than one imaging technique. Interaction with clinical practice and existence of research programmes and educational activities are pivotal for the success of this model. The aim of this paper is to describe the multimodality cardiac imaging programme recently started at San Donato Hospital.

  3. A multimodal parallel architecture: A cognitive framework for multimodal interactions.

    PubMed

    Cohn, Neil

    2016-01-01

    Human communication is naturally multimodal, and substantial focus has examined the semantic correspondences in speech-gesture and text-image relationships. However, visual narratives, like those in comics, provide an interesting challenge to multimodal communication because the words and/or images can guide the overall meaning, and both modalities can appear in complicated "grammatical" sequences: sentences use a syntactic structure and sequential images use a narrative structure. These dual structures create complexity beyond those typically addressed by theories of multimodality where only a single form uses combinatorial structure, and also poses challenges for models of the linguistic system that focus on single modalities. This paper outlines a broad theoretical framework for multimodal interactions by expanding on Jackendoff's (2002) parallel architecture for language. Multimodal interactions are characterized in terms of their component cognitive structures: whether a particular modality (verbal, bodily, visual) is present, whether it uses a grammatical structure (syntax, narrative), and whether it "dominates" the semantics of the overall expression. Altogether, this approach integrates multimodal interactions into an existing framework of language and cognition, and characterizes interactions between varying complexity in the verbal, bodily, and graphic domains. The resulting theoretical model presents an expanded consideration of the boundaries of the "linguistic" system and its involvement in multimodal interactions, with a framework that can benefit research on corpus analyses, experimentation, and the educational benefits of multimodality. Copyright © 2015.

  4. Design and demonstration of multimodal optical scanning microscopy for confocal and two-photon imaging

    NASA Astrophysics Data System (ADS)

    Chun, Wanhee; Do, Dukho; Gweon, Dae-Gab

    2013-01-01

    We developed a multimodal microscope based on an optical scanning system in order to obtain diverse optical information from the same area of a sample. Multimodal imaging research has mostly depended on commercial microscope platforms, which are easy to use but restrictive when extending imaging modalities. In this work, the beam-scanning optics, including in particular a relay lens, was customized to deliver broadband (400-1000 nm) light to a sample without optical error or loss. The customized scanning optics guarantees the best performance of imaging techniques that use light within the design wavelength range. Confocal reflection, confocal fluorescence, and two-photon excitation fluorescence images were obtained through the respective imaging channels to demonstrate imaging feasibility for near-UV, visible, and near-IR continuous light as well as pulsed light in the scanning optics. The imaging performance in terms of spatial resolution and image contrast was verified experimentally, and the results were satisfactory in comparison with theory. The advantages of customization, including low cost, excellent integration capability, and diverse applications, will help vitalize multimodal imaging research.

  5. Dye-enhanced multimodal confocal imaging as a novel approach to intraoperative diagnosis of brain tumors.

    PubMed

    Snuderl, Matija; Wirth, Dennis; Sheth, Sameer A; Bourne, Sarah K; Kwon, Churl-Su; Ancukiewicz, Marek; Curry, William T; Frosch, Matthew P; Yaroslavsky, Anna N

    2013-01-01

    Intraoperative diagnosis plays an important role in accurate sampling of brain tumors, limiting the number of biopsies required and improving the distinction between brain and tumor. The goal of this study was to evaluate dye-enhanced multimodal confocal imaging for discriminating gliomas from nonglial brain tumors and from normal brain tissue for diagnostic use. We investigated a total of 37 samples, including glioma (13), meningioma (7), metastatic tumors (9), and normal brain removed for nontumoral indications (8). Tissue was stained in a 0.05 mg/mL aqueous solution of methylene blue (MB) for 2-5 minutes, and multimodal confocal images were acquired using a custom-built microscope. After imaging, tissue was formalin fixed and paraffin embedded for standard neuropathologic evaluation. Thirteen pathologists provided diagnoses based on the multimodal confocal images. The investigated tumor types exhibited distinctive and complementary characteristics in both the reflectance and fluorescence responses. Images showed distinct morphological features similar to standard histology. Pathologists were able to distinguish gliomas from normal brain tissue and nonglial brain tumors, and to render diagnoses from the images in a manner comparable to haematoxylin and eosin (H&E) slides. These results confirm the feasibility of multimodal confocal imaging for intravital intraoperative diagnosis. © 2012 The Authors; Brain Pathology © 2012 International Society of Neuropathology.

  6. Image-guided plasma therapy of cutaneous wound

    NASA Astrophysics Data System (ADS)

    Zhang, Zhiwu; Ren, Wenqi; Yu, Zelin; Zhang, Shiwu; Yue, Ting; Xu, Ronald

    2014-02-01

    The wound healing process involves the reparative phases of inflammation, proliferation, and remodeling. Interrupting any of these phases may result in chronically unhealed wounds, amputation, or even patient death. Despite the clinical significance in chronic wound management, no effective methods have been developed for quantitative image-guided treatment. We integrated a multimodal imaging system with a cold atmospheric plasma probe for image-guided treatment of chronic wounds. The multimodal imaging system offers a non-invasive, painless, simultaneous, and quantitative assessment of cutaneous wound healing. Cold atmospheric plasma accelerates the wound healing process through many mechanisms, including decontamination, coagulation, and stimulation of wound healing. The therapeutic effect of cold atmospheric plasma is studied in vivo under the guidance of the multimodal imaging system. Cutaneous wounds are created on the dorsal skin of nude mice. During the healing process, the sample wound is treated by cold atmospheric plasma at different controlled dosages, while the control wound heals naturally. The multimodal imaging system, integrating a multispectral imaging module and a laser speckle imaging module, is used to collect information on cutaneous tissue oxygenation (i.e., oxygen saturation, StO2) and blood perfusion simultaneously to assess and guide the plasma therapy. Our preliminary tests show that cold atmospheric plasma in combination with multimodal imaging guidance has the potential to facilitate the healing of chronic wounds.

  7. MO-DE-202-00: Image-Guided Interventions: Advances in Intraoperative Imaging, Guidance, and An Emerging Role for Medical Physics in Surgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    At least three major trends in surgical intervention have emerged over the last decade: a move toward more minimally invasive (or non-invasive) approaches to the surgical target; the development of high-precision treatment delivery techniques; and the increasing role of multi-modality intraoperative imaging in support of such procedures. This symposium includes invited presentations on recent advances in each of these areas and the emerging role for medical physics research in the development and translation of high-precision interventional techniques. The four speakers are: Keyvan Farahani, “Image-guided focused ultrasound surgery and therapy” Jeffrey H. Siewerdsen, “Advances in image registration and reconstruction for image-guided neurosurgery” Tina Kapur, “Image-guided surgery and interventions in the advanced multimodality image-guided operating (AMIGO) suite” Raj Shekhar, “Multimodality image-guided interventions: Multimodality for the rest of us” Learning Objectives: Understand the principles and applications of HIFU in surgical ablation. Learn about recent advances in 3D–2D and 3D deformable image registration in support of surgical safety and precision. Learn about recent advances in model-based 3D image reconstruction in application to intraoperative 3D imaging. Understand the multi-modality imaging technologies and clinical applications investigated in the AMIGO suite. Understand the emerging need and techniques to implement multi-modality image guidance in surgical applications such as neurosurgery, orthopaedic surgery, vascular surgery, and interventional radiology. Research supported by the NIH and Siemens Healthcare.; J. Siewerdsen; Grant Support - National Institutes of Health; Grant Support - Siemens Healthcare; Grant Support - Carestream Health; Advisory Board - Carestream Health; Licensing Agreement - Carestream Health; Licensing Agreement - Elekta Oncology.; T. Kapur, P41EB015898; R. Shekhar, Funding: R42CA137886 and R41CA192504 Disclosure and CoI: IGI Technologies, small-business partner on the grants.

  8. MO-DE-202-02: Advances in Image Registration and Reconstruction for Image-Guided Neurosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siewerdsen, J.

    At least three major trends in surgical intervention have emerged over the last decade: a move toward more minimally invasive (or non-invasive) approaches to the surgical target; the development of high-precision treatment delivery techniques; and the increasing role of multi-modality intraoperative imaging in support of such procedures. This symposium includes invited presentations on recent advances in each of these areas and the emerging role for medical physics research in the development and translation of high-precision interventional techniques. The four speakers are: Keyvan Farahani, “Image-guided focused ultrasound surgery and therapy” Jeffrey H. Siewerdsen, “Advances in image registration and reconstruction for image-guided neurosurgery” Tina Kapur, “Image-guided surgery and interventions in the advanced multimodality image-guided operating (AMIGO) suite” Raj Shekhar, “Multimodality image-guided interventions: Multimodality for the rest of us” Learning Objectives: Understand the principles and applications of HIFU in surgical ablation. Learn about recent advances in 3D–2D and 3D deformable image registration in support of surgical safety and precision. Learn about recent advances in model-based 3D image reconstruction in application to intraoperative 3D imaging. Understand the multi-modality imaging technologies and clinical applications investigated in the AMIGO suite. Understand the emerging need and techniques to implement multi-modality image guidance in surgical applications such as neurosurgery, orthopaedic surgery, vascular surgery, and interventional radiology. Research supported by the NIH and Siemens Healthcare.; J. Siewerdsen; Grant Support - National Institutes of Health; Grant Support - Siemens Healthcare; Grant Support - Carestream Health; Advisory Board - Carestream Health; Licensing Agreement - Carestream Health; Licensing Agreement - Elekta Oncology.; T. Kapur, P41EB015898; R. Shekhar, Funding: R42CA137886 and R41CA192504 Disclosure and CoI: IGI Technologies, small-business partner on the grants.

  9. A simultaneous multimodal imaging system for tissue functional parameters

    NASA Astrophysics Data System (ADS)

    Ren, Wenqi; Zhang, Zhiwu; Wu, Qiang; Zhang, Shiwu; Xu, Ronald

    2014-02-01

    Simultaneous and quantitative assessment of skin functional characteristics in different modalities will facilitate diagnosis and therapy in many clinical applications such as wound healing. However, many existing clinical practices and multimodal imaging systems are subjective and qualitative, collect multimodal data sequentially, and require co-registration between different modalities. To overcome these limitations, we developed a multimodal imaging system for quantitative, non-invasive, and simultaneous imaging of cutaneous tissue oxygenation and blood perfusion parameters. The imaging system integrated multispectral and laser speckle imaging technologies into one experimental setup. A LabVIEW interface was developed for equipment control, synchronization, and image acquisition. Advanced algorithms based on wide-gap second-derivative reflectometry and laser speckle contrast analysis (LASCA) were developed for accurate reconstruction of tissue oxygenation and blood perfusion, respectively. Quantitative calibration experiments and a new type of skin-simulating phantom were designed to verify the accuracy and reliability of the imaging system. The experimental results were compared with those from a Moor tissue oxygenation and perfusion monitor. For in vivo testing, a post-occlusion reactive hyperemia (PORH) procedure in a human subject and an ongoing wound healing monitoring experiment using dorsal skinfold chamber models were conducted to validate the usability of our system for dynamic detection of oxygenation and perfusion parameters. In this study, we have not only set up an advanced multimodal imaging system for cutaneous tissue oxygenation and perfusion parameters but also elucidated its potential for wound healing assessment in clinical practice.
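
    The record above names laser speckle contrast analysis (LASCA) as its blood perfusion algorithm. As a rough, hedged illustration of that idea (not the authors' implementation), the Python sketch below computes a spatial speckle contrast map K = std/mean over a sliding window and a commonly used relative perfusion index proportional to 1/K²; the function names, window size, and synthetic data are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def speckle_contrast_map(raw_speckle, window=7):
            # Spatial speckle contrast K = std / mean within a sliding window.
            img = raw_speckle.astype(np.float64)
            mean = uniform_filter(img, size=window)
            mean_sq = uniform_filter(img * img, size=window)
            var = np.clip(mean_sq - mean ** 2, 0.0, None)
            return np.sqrt(var) / (mean + 1e-12)

        def relative_perfusion(contrast):
            # A common simplification: perfusion index proportional to 1 / K^2 (arbitrary units).
            return 1.0 / (contrast ** 2 + 1e-12)

        # Toy example with a synthetic speckle-like frame.
        rng = np.random.default_rng(0)
        frame = rng.gamma(shape=4.0, scale=50.0, size=(256, 256))
        K = speckle_contrast_map(frame)
        print(K.mean(), relative_perfusion(K).mean())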

  10. MO-DE-202-01: Image-Guided Focused Ultrasound Surgery and Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farahani, K.

    At least three major trends in surgical intervention have emerged over the last decade: a move toward more minimally invasive (or non-invasive) approaches to the surgical target; the development of high-precision treatment delivery techniques; and the increasing role of multi-modality intraoperative imaging in support of such procedures. This symposium includes invited presentations on recent advances in each of these areas and the emerging role for medical physics research in the development and translation of high-precision interventional techniques. The four speakers are: Keyvan Farahani, “Image-guided focused ultrasound surgery and therapy” Jeffrey H. Siewerdsen, “Advances in image registration and reconstruction for image-guided neurosurgery” Tina Kapur, “Image-guided surgery and interventions in the advanced multimodality image-guided operating (AMIGO) suite” Raj Shekhar, “Multimodality image-guided interventions: Multimodality for the rest of us” Learning Objectives: Understand the principles and applications of HIFU in surgical ablation. Learn about recent advances in 3D–2D and 3D deformable image registration in support of surgical safety and precision. Learn about recent advances in model-based 3D image reconstruction in application to intraoperative 3D imaging. Understand the multi-modality imaging technologies and clinical applications investigated in the AMIGO suite. Understand the emerging need and techniques to implement multi-modality image guidance in surgical applications such as neurosurgery, orthopaedic surgery, vascular surgery, and interventional radiology. Research supported by the NIH and Siemens Healthcare.; J. Siewerdsen; Grant Support - National Institutes of Health; Grant Support - Siemens Healthcare; Grant Support - Carestream Health; Advisory Board - Carestream Health; Licensing Agreement - Carestream Health; Licensing Agreement - Elekta Oncology.; T. Kapur, P41EB015898; R. Shekhar, Funding: R42CA137886 and R41CA192504 Disclosure and CoI: IGI Technologies, small-business partner on the grants.

  11. Optical/MRI Multimodality Molecular Imaging

    NASA Astrophysics Data System (ADS)

    Ma, Lixin; Smith, Charles; Yu, Ping

    2007-03-01

    Multimodality molecular imaging that combines anatomical and functional information has shown promise in the development of tumor-targeted pharmaceuticals for cancer detection or therapy. We present a new multimodality imaging technique that combines fluorescence molecular tomography (FMT) and magnetic resonance imaging (MRI) for in vivo molecular imaging of preclinical tumor models. Unlike other optical/MRI systems, the new molecular imaging system uses parallel phase acquisition based on the heterodyne principle. The system has higher accuracy of phase measurements, reduced noise bandwidth, and efficient modulation of the fluorescence diffuse density waves. Fluorescent bombesin probes were developed for targeting breast cancer cells and prostate cancer cells. Tissue phantom and small animal experiments were performed for calibration of the imaging system and validation of the targeting probes.

  12. Medical Image Retrieval: A Multimodal Approach

    PubMed Central

    Cao, Yu; Steffey, Shawn; He, Jianbiao; Xiao, Degui; Tao, Cui; Chen, Ping; Müller, Henning

    2014-01-01

    Medical imaging is becoming a vital component of the war on cancer. Tremendous amounts of medical image data are captured and recorded in a digital format during cancer care and cancer research. Facing such an unprecedented volume of image data with heterogeneous image modalities, it is necessary to develop effective and efficient content-based medical image retrieval systems for cancer clinical practice and research. While substantial progress has been made in different areas of content-based image retrieval (CBIR) research, direct applications of existing CBIR techniques to medical images have produced unsatisfactory results because of the unique characteristics of medical images. In this paper, we develop a new multimodal medical image retrieval approach based on recent advances in statistical graphical models and deep learning. Specifically, we first investigate a new extended probabilistic Latent Semantic Analysis model to integrate the visual and textual information from medical images and bridge the semantic gap. We then develop a new deep Boltzmann machine-based multimodal learning model to learn the joint density model from multimodal information in order to derive the missing modality. Experimental results with a large volume of real-world medical images have shown that our new approach is a promising solution for the next-generation medical image indexing and retrieval system. PMID:26309389
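
    The record above builds its joint representation with an extended probabilistic Latent Semantic Analysis model and a deep Boltzmann machine. As a much simpler, hedged illustration of the underlying idea of multimodal retrieval over a shared representation (not the authors' model), the sketch below fuses visual and textual feature vectors by weighted concatenation and ranks database entries by cosine similarity; the array shapes and the weighting parameter are illustrative assumptions.

        import numpy as np

        def l2_normalize(x, axis=-1):
            return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-12)

        def joint_representation(visual_feats, text_feats, alpha=0.5):
            # Weighted concatenation of L2-normalized visual and textual features.
            return np.concatenate([alpha * l2_normalize(visual_feats),
                                   (1.0 - alpha) * l2_normalize(text_feats)], axis=-1)

        def retrieve(query_joint, database_joint, top_k=5):
            # Rank database entries by cosine similarity to the query.
            sims = l2_normalize(database_joint) @ l2_normalize(query_joint)
            order = np.argsort(-sims)[:top_k]
            return order, sims[order]

        # Toy example: 100 database items with 64-d visual and 32-d text features.
        rng = np.random.default_rng(1)
        db = joint_representation(rng.normal(size=(100, 64)), rng.normal(size=(100, 32)))
        q = joint_representation(rng.normal(size=64), rng.normal(size=32))
        print(retrieve(q, db))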

  13. Tinnitus Multimodal Imaging

    DTIC Science & Technology

    2016-12-01

    AWARD NUMBER: W81XWH-13-1-0494. TITLE: Tinnitus Multimodal Imaging. PRINCIPAL INVESTIGATOR: Steven Wan Cheung. CONTRACTING ORGANIZATION: ... Medical Research and Materiel Command, Fort Detrick, Maryland 21702-5012. DISTRIBUTION STATEMENT: Approved for Public Release; Distribution Unlimited. Report excerpt: images were segmented into gray and white matter images and spatially normalized to the MNI template (3 mm isotropic voxels) using the DARTEL toolbox in...

  14. Melanoma detection using smartphone and multimode hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    MacKinnon, Nicholas; Vasefi, Fartash; Booth, Nicholas; Farkas, Daniel L.

    2016-04-01

    This project's goal is to determine how to effectively implement a technology continuum from a low-cost, remotely deployable imaging device to a more sophisticated multimode imaging system within standard clinical practice. In this work a smartphone is used in conjunction with an optical attachment to capture cross-polarized and collinear color images of a nevus that are analyzed to quantify chromophore distribution. The nevus is also imaged by a multimode hyperspectral system, our proprietary SkinSpect™ device. Relative accuracy and biological plausibility of the two systems' algorithms are compared to assess the feasibility of in-home or primary care practitioner smartphone screening prior to rigorous clinical analysis via the SkinSpect.

  15. Simultaneous multimodal ophthalmic imaging using swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography

    PubMed Central

    Malone, Joseph D.; El-Haddad, Mohamed T.; Bozic, Ivan; Tye, Logan A.; Majeau, Lucas; Godbout, Nicolas; Rollins, Andrew M.; Boudoux, Caroline; Joos, Karen M.; Patel, Shriji N.; Tao, Yuankai K.

    2016-01-01

    Scanning laser ophthalmoscopy (SLO) benefits diagnostic imaging and therapeutic guidance by allowing for high-speed en face imaging of retinal structures. When combined with optical coherence tomography (OCT), SLO enables real-time aiming and retinal tracking and provides complementary information for post-acquisition volumetric co-registration, bulk motion compensation, and averaging. However, multimodality SLO-OCT systems generally require dedicated light sources, scanners, relay optics, detectors, and additional digitization and synchronization electronics, which increase system complexity. Here, we present a multimodal ophthalmic imaging system using swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography (SS-SESLO-OCT) for in vivo human retinal imaging. SESLO reduces the complexity of en face imaging systems by multiplexing spatial positions as a function of wavelength. SESLO image quality benefited from single-mode illumination and multimode collection through a prototype double-clad fiber coupler, which optimized scattered light throughput and reduced speckle contrast while maintaining lateral resolution. Using a shared 1060 nm swept-source, shared scanner and imaging optics, and a shared dual-channel high-speed digitizer, we acquired inherently co-registered en face retinal images and OCT cross-sections simultaneously at 200 frames-per-second. PMID:28101411

  16. Targeted delivery of cancer-specific multimodal contrast agents for intraoperative detection of tumor boundaries and therapeutic margins

    NASA Astrophysics Data System (ADS)

    Xu, Ronald X.; Xu, Jeff S.; Huang, Jiwei; Tweedle, Michael F.; Schmidt, Carl; Povoski, Stephen P.; Martin, Edward W.

    2010-02-01

    Background: Accurate assessment of tumor boundaries and intraoperative detection of therapeutic margins are important oncologic principles for minimal recurrence rates and improved long-term outcomes. However, many existing cancer imaging tools are based on preoperative image acquisition and do not provide real-time intraoperative information that supports critical decision-making in the operating room. Method: Poly(lactic-co-glycolic acid) (PLGA) microbubbles (MBs) and nanobubbles (NBs) were synthesized by a modified double emulsion method. The MB and NB surfaces were conjugated with CC49 antibody to target the TAG-72 antigen, a human glycoprotein complex expressed in many epithelial-derived cancers. Multiple imaging agents were encapsulated in MBs and NBs for multimodal imaging. Both one-step and multi-step cancer targeting strategies were explored. Active MBs/NBs were also fabricated for therapeutic margin assessment in cancer ablation therapies. Results: The multimodal contrast agents and the cancer-targeting strategies were tested on tissue-simulating phantoms, LS174 colon cancer cell cultures, and cancer xenograft nude mice. Concurrent multimodal imaging was demonstrated using fluorescence and ultrasound imaging modalities. Technical feasibility of using active MBs and portable imaging tools such as ultrasound for intraoperative therapeutic margin assessment was demonstrated in a biological tissue model. Conclusion: The cancer-specific multimodal contrast agents described in this paper have the potential for intraoperative detection of tumor boundaries and therapeutic margins.

  17. Efficient generation of sum-of-products representations of high-dimensional potential energy surfaces based on multimode expansions

    NASA Astrophysics Data System (ADS)

    Ziegler, Benjamin; Rauhut, Guntram

    2016-03-01

    The transformation of multi-dimensional potential energy surfaces (PESs) from a grid-based multimode representation to an analytical one is a standard procedure in quantum chemical programs. Within the framework of linear least squares fitting, a simple and highly efficient algorithm is presented, which relies on a direct product representation of the PES and a repeated use of Kronecker products. It shows the same scalings in computational cost and memory requirements as the potfit approach. In comparison to customary linear least squares fitting algorithms, this corresponds to a speed-up and memory saving by several orders of magnitude. Different fitting bases are tested, namely, polynomials, B-splines, and distributed Gaussians. Benchmark calculations are provided for the PESs of a set of small molecules.
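
    The efficiency gain described above comes from the separable (direct-product) structure of the basis, which lets the least-squares problem be solved dimension-wise rather than through an explicit Kronecker-product design matrix. The Python sketch below illustrates that principle for a two-dimensional grid with polynomial bases; it is a minimal, assumed illustration of separable least squares, not the potfit-like algorithm of the record, which handles general multimode expansions and other basis functions.

        import numpy as np

        def poly_basis(x, degree):
            # Vandermonde-style 1D polynomial basis, shape (len(x), degree + 1).
            return np.vander(x, N=degree + 1, increasing=True)

        def fit_product_surface(x1, x2, V, deg1=4, deg2=4):
            # With a separable (direct-product) basis the design matrix is the
            # Kronecker product A1 (x) A2, so the coefficient matrix can be found
            # dimension-wise without ever forming the full Kronecker matrix.
            A1, A2 = poly_basis(x1, deg1), poly_basis(x2, deg2)
            return np.linalg.pinv(A1) @ V @ np.linalg.pinv(A2).T

        def evaluate(C, x1, x2, deg1=4, deg2=4):
            return poly_basis(x1, deg1) @ C @ poly_basis(x2, deg2).T

        # Toy 2D "potential" on a grid; it lies in the span of the product basis.
        x1 = np.linspace(-1.0, 1.0, 21)
        x2 = np.linspace(-1.0, 1.0, 25)
        V = np.add.outer(x1 ** 2, x2 ** 2) + 0.3 * np.outer(x1, x2)
        C = fit_product_surface(x1, x2, V)
        print(np.max(np.abs(evaluate(C, x1, x2) - V)))  # close to machine precision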

  18. Efficient generation of sum-of-products representations of high-dimensional potential energy surfaces based on multimode expansions.

    PubMed

    Ziegler, Benjamin; Rauhut, Guntram

    2016-03-21

    The transformation of multi-dimensional potential energy surfaces (PESs) from a grid-based multimode representation to an analytical one is a standard procedure in quantum chemical programs. Within the framework of linear least squares fitting, a simple and highly efficient algorithm is presented, which relies on a direct product representation of the PES and a repeated use of Kronecker products. It shows the same scalings in computational cost and memory requirements as the potfit approach. In comparison to customary linear least squares fitting algorithms, this corresponds to a speed-up and memory saving by several orders of magnitude. Different fitting bases are tested, namely, polynomials, B-splines, and distributed Gaussians. Benchmark calculations are provided for the PESs of a set of small molecules.

  19. Carbon Tube Electrodes for Electrocardiography-Gated Cardiac Multimodality Imaging in Mice

    PubMed Central

    Choquet, Philippe; Goetz, Christian; Aubertin, Gaelle; Hubele, Fabrice; Sannié, Sébastien; Constantinesco, André

    2011-01-01

    This report describes a simple design of noninvasive carbon tube electrodes that facilitates electrocardiography (ECG) in mice during cardiac multimodality preclinical imaging. Both forepaws and the left hindpaw of the mice, covered with conductive gel, were placed into the openings of small carbon tubes. Cardiac ECG-gated single-photon emission CT, X-ray CT, and MRI were tested (n = 60) in 20 mice. For all applications, electrodes were used in a warmed multimodality imaging cell. A heart rate of 563 ± 48 bpm was recorded from anesthetized mice regardless of the imaging technique used, with acquisition times ranging from 1 to 2 h. PMID:21333165

  20. Volume curtaining: a focus+context effect for multimodal volume visualization

    NASA Astrophysics Data System (ADS)

    Fairfield, Adam J.; Plasencia, Jonathan; Jang, Yun; Theodore, Nicholas; Crawford, Neil R.; Frakes, David H.; Maciejewski, Ross

    2014-03-01

    In surgical preparation, physicians will often utilize multimodal imaging scans to capture complementary information to improve diagnosis and to drive patient-specific treatment. These imaging scans may consist of data from magnetic resonance imaging (MR), computed tomography (CT), or other various sources. The challenge in using these different modalities is that the physician must mentally map the two modalities together during the diagnosis and planning phase. Furthermore, the different imaging modalities will be generated at various resolutions as well as slightly different orientations due to patient placement during scans. In this work, we present an interactive system for multimodal data fusion, analysis and visualization. Developed with partners from neurological clinics, this work discusses initial system requirements and physician feedback at the various stages of component development. Finally, we present a novel focus+context technique for the interactive exploration of coregistered multi-modal data.

  1. Quantitative multimodality imaging in cancer research and therapy.

    PubMed

    Yankeelov, Thomas E; Abramson, Richard G; Quarles, C Chad

    2014-11-01

    Advances in hardware and software have enabled the realization of clinically feasible, quantitative multimodality imaging of tissue pathophysiology. Earlier efforts relating to multimodality imaging of cancer have focused on the integration of anatomical and functional characteristics, such as PET-CT and single-photon emission CT (SPECT-CT), whereas more-recent advances and applications have involved the integration of multiple quantitative, functional measurements (for example, multiple PET tracers, varied MRI contrast mechanisms, and PET-MRI), thereby providing a more-comprehensive characterization of the tumour phenotype. The enormous amount of complementary quantitative data generated by such studies is beginning to offer unique insights into opportunities to optimize care for individual patients. Although important technical optimization and improved biological interpretation of multimodality imaging findings are needed, this approach can already be applied informatively in clinical trials of cancer therapeutics using existing tools. These concepts are discussed herein.

  2. Multimode intravascular RF coil for MRI-guided interventions.

    PubMed

    Kurpad, Krishna N; Unal, Orhan

    2011-04-01

    To demonstrate the feasibility of using a single intravascular radiofrequency (RF) probe connected to the external magnetic resonance imaging (MRI) system via a single coaxial cable to perform active tip tracking and catheter visualization and high signal-to-noise ratio (SNR) intravascular imaging. A multimode intravascular RF coil was constructed on a 6F balloon catheter and interfaced to a 1.5T MRI scanner via a decoupling circuit. Bench measurements of coil impedances were followed by imaging experiments in saline and phantoms. The multimode coil behaves as an inductively coupled transmit coil. The forward-looking capability of 6 mm was measured. A greater than 3-fold increase in SNR compared to conventional imaging using optimized external coil was demonstrated. Simultaneous active tip tracking and catheter visualization was demonstrated. It is feasible to perform 1) active tip tracking, 2) catheter visualization, and 3) high SNR imaging using a single multimode intravascular RF coil that is connected to the external system via a single coaxial cable. Copyright © 2011 Wiley-Liss, Inc.

  3. Log-Gabor Energy Based Multimodal Medical Image Fusion in NSCT Domain

    PubMed Central

    Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan

    2014-01-01

    Multimodal medical image fusion is a powerful tool in clinical applications such as noninvasive diagnosis, image-guided radiotherapy, and treatment planning. In this paper, a novel nonsubsampled Contourlet transform (NSCT) based method for multimodal medical image fusion is presented, which is approximately shift invariant and can effectively suppress the pseudo-Gibbs phenomena. The source medical images are initially transformed by NSCT, and the low- and high-frequency components are then fused. Phase congruency, which provides a contrast- and brightness-invariant representation, is applied to fuse the low-frequency coefficients, whereas the Log-Gabor energy, which can efficiently select frequency coefficients from the clear and detailed parts, is employed to fuse the high-frequency coefficients. The proposed fusion method has been compared with the discrete wavelet transform (DWT), the fast discrete curvelet transform (FDCT), and the dual tree complex wavelet transform (DTCWT) based image fusion methods and other NSCT-based methods. Visual and quantitative experimental results indicate that the proposed fusion method can obtain more effective and accurate fusion results of multimodal medical images than the other algorithms. Further, the applicability of the proposed method has been demonstrated on a clinical example of images from a woman with a recurrent tumor. PMID:25214889
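
    To make the transform-domain fusion rules above concrete, the hedged Python sketch below performs a single-level fusion using a standard discrete wavelet transform (PyWavelets) as a stand-in for the NSCT: the low-pass bands are averaged and each high-pass coefficient is taken from whichever source has the larger local energy, a simplification of the phase-congruency and Log-Gabor-energy rules used in the record; the wavelet choice and window size are illustrative assumptions.

        import numpy as np
        import pywt                              # PyWavelets; a plain DWT stands in for the NSCT
        from scipy.ndimage import uniform_filter

        def local_energy(c, window=3):
            return uniform_filter(c * c, size=window)

        def fuse_pair(img_a, img_b, wavelet="db2"):
            # Single-level fusion: average the low-pass band, and for each high-pass
            # band keep the coefficient with the larger local energy ("max-energy" rule).
            cA_a, highs_a = pywt.dwt2(img_a, wavelet)
            cA_b, highs_b = pywt.dwt2(img_b, wavelet)
            cA_f = 0.5 * (cA_a + cA_b)
            fused_highs = tuple(np.where(local_energy(ha) >= local_energy(hb), ha, hb)
                                for ha, hb in zip(highs_a, highs_b))
            return pywt.idwt2((cA_f, fused_highs), wavelet)

        # Toy example with two random "modalities" of the same size.
        rng = np.random.default_rng(2)
        a, b = rng.normal(size=(128, 128)), rng.normal(size=(128, 128))
        print(fuse_pair(a, b).shape)             # (128, 128)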

  4. Enhanced EDX images by fusion of multimodal SEM images using pansharpening techniques.

    PubMed

    Franchi, G; Angulo, J; Moreaud, M; Sorbier, L

    2018-01-01

    The goal of this paper is to explore the potential interest of image fusion in the context of multimodal scanning electron microscope (SEM) imaging. In particular, we aim at merging the backscattered electron images, which usually have a high spatial resolution but do not provide enough discriminative information to physically classify the nature of the sample, with energy-dispersive X-ray spectroscopy (EDX) images, which have discriminative information but a lower spatial resolution. The produced images are named enhanced EDX. To achieve this goal, we have compared the results obtained with classical pansharpening techniques for image fusion with an original approach tailored for multimodal SEM fusion of information. Quantitative assessment is obtained by means of two SEM images and a simulated dataset produced by software based on PENELOPE. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
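
    The classical pansharpening baselines mentioned above can be illustrated with a simple Brovey-style fusion, sketched below under the assumption that the high-resolution backscattered electron image plays the role of the panchromatic band and the low-resolution EDX maps play the role of the multispectral bands; this is a generic textbook technique, not the paper's tailored SEM fusion approach, and the array sizes are illustrative.

        import numpy as np
        from scipy.ndimage import zoom

        def brovey_pansharpen(ms_bands, pan, eps=1e-6):
            # Upsample each low-resolution band to the panchromatic grid, then
            # rescale by the ratio of the high-resolution image to the band sum.
            factor = (pan.shape[0] / ms_bands.shape[1], pan.shape[1] / ms_bands.shape[2])
            up = np.stack([zoom(band, factor, order=1) for band in ms_bands])
            intensity = up.sum(axis=0) + eps
            return up * (pan / intensity)[None, :, :]

        # Toy example: three EDX-like maps at 64x64 fused with a 256x256 BSE-like image.
        rng = np.random.default_rng(3)
        edx = rng.random((3, 64, 64))
        bse = rng.random((256, 256))
        print(brovey_pansharpen(edx, bse).shape)  # (3, 256, 256)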

  5. Current concepts in adult aphasia.

    PubMed

    McNeil, M R

    1984-01-01

    This paper provides a review of recent research from the areas of speech and language pathology, cognitive psychology, psycholinguistics, neurology, and rehabilitation medicine which is used to refine and extend current definitions of aphasia. Evidence is presented from these diverse disciplines which supports a multimodality, performance-based, verbal and non-verbal, cortical and subcortical, and cognitively multidimensional view of aphasia. Current practice in the assessment and treatment of adult aphasia is also summarized.

  6. Multi-mode of Four and Six Wave Parametric Amplified Process

    NASA Astrophysics Data System (ADS)

    Zhu, Dayu; Yang, Yiheng; Zhang, Da; Liu, Ruizhou; Ma, Danmeng; Li, Changbiao; Zhang, Yanpeng

    2017-03-01

    Multiple quantum modes in correlated fields are essential for future quantum information processing and quantum computing. Here we report the generation of multi-mode phenomena through parametrically amplified four- and six-wave mixing processes in a rubidium atomic ensemble. The multi-mode properties in both the frequency and spatial domains are studied. On one hand, the multi-mode behavior is dominantly controlled by the intensity of the external dressing effect, or the nonlinear phase shift through the internal dressing effect, in the frequency domain; on the other hand, the multi-mode behavior is demonstrated visually and directly from the images of the biphoton fields in the spatial domain. In addition, the correlation of the two output fields is demonstrated in both domains. Our approach supports efficient applications for scalable quantum correlated imaging.

  7. Multi-mode of Four and Six Wave Parametric Amplified Process.

    PubMed

    Zhu, Dayu; Yang, Yiheng; Zhang, Da; Liu, Ruizhou; Ma, Danmeng; Li, Changbiao; Zhang, Yanpeng

    2017-03-03

    Multiple quantum modes in correlated fields are essential for future quantum information processing and quantum computing. Here we report the generation of multi-mode phenomena through parametrically amplified four- and six-wave mixing processes in a rubidium atomic ensemble. The multi-mode properties in both the frequency and spatial domains are studied. On one hand, the multi-mode behavior is dominantly controlled by the intensity of the external dressing effect, or the nonlinear phase shift through the internal dressing effect, in the frequency domain; on the other hand, the multi-mode behavior is demonstrated visually and directly from the images of the biphoton fields in the spatial domain. In addition, the correlation of the two output fields is demonstrated in both domains. Our approach supports efficient applications for scalable quantum correlated imaging.

  8. Multimodal optoacoustic and multiphoton fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Sela, Gali; Razansky, Daniel; Shoham, Shy

    2013-03-01

    Multiphoton microscopy is a powerful imaging modality that enables structural and functional imaging with cellular and sub-cellular resolution, deep within biological tissues. Yet, its main contrast mechanism relies on extrinsically administered fluorescent indicators. Here we developed a system for simultaneous multimodal optoacoustic and multiphoton fluorescence 3D imaging, which attains both absorption- and fluorescence-based contrast by integrating an ultrasonic transducer into a two-photon laser scanning microscope. The system is readily shown to enable acquisition of multimodal microscopic images of fluorescently labeled targets and cell cultures as well as intrinsic absorption-based images of pigmented biological tissue. During initial experiments, it was further observed that the detected optoacoustically induced response contains low-frequency signal variations, presumably due to cavitation-mediated signal generation by the high repetition rate (80 MHz) near-IR femtosecond laser. The multimodal system may provide structural and functional information complementary to that of the fluorescently labeled tissue, by superimposing optoacoustic images of intrinsic tissue chromophores, such as melanin deposits, pigmentation, and hemoglobin, or other extrinsic particle- or dye-based markers highly absorptive in the NIR spectrum.

  9. Recommendations on nuclear and multimodality imaging in IE and CIED infections.

    PubMed

    Erba, Paola Anna; Lancellotti, Patrizio; Vilacosta, Isidre; Gaemperli, Oliver; Rouzet, Francois; Hacker, Marcus; Signore, Alberto; Slart, Riemer H J A; Habib, Gilbert

    2018-05-24

    In the latest update of the European Society of Cardiology (ESC) guidelines for the management of infective endocarditis (IE), imaging is positioned at the centre of the diagnostic work-up so that an early and accurate diagnosis can be reached. Besides echocardiography, contrast-enhanced CT (ce-CT), radiolabelled leucocyte (white blood cell, WBC) SPECT/CT and [18F]FDG PET/CT are included as diagnostic tools in the diagnostic flow chart for IE. Following the clinical guidelines that provided a straightforward message on the role of multimodality imaging, we believe that it is highly relevant to produce specific recommendations on nuclear multimodality imaging in IE and cardiac implantable electronic device infections. In these procedural recommendations we therefore describe in detail the technical and practical aspects of WBC SPECT/CT and [18F]FDG PET/CT, including ce-CT acquisition protocols. We also discuss the advantages and limitations of each procedure, specific pitfalls when interpreting images, and the most important results from the literature, and also provide recommendations on the appropriate use of multimodality imaging.

  10. Multimodal swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography at 400 kHz

    NASA Astrophysics Data System (ADS)

    El-Haddad, Mohamed T.; Joos, Karen M.; Patel, Shriji N.; Tao, Yuankai K.

    2017-02-01

    Multimodal imaging systems that combine scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT) have demonstrated the utility of concurrent en face and volumetric imaging for aiming, eye tracking, bulk motion compensation, mosaicking, and contrast enhancement. However, this additional functionality trades off with increased system complexity and cost because both SLO and OCT generally require dedicated light sources, galvanometer scanners, relay and imaging optics, detectors, and control and digitization electronics. We previously demonstrated multimodal ophthalmic imaging using swept-source spectrally encoded SLO and OCT (SS-SESLO-OCT). Here, we present system enhancements and a new optical design that increase our SS-SESLO-OCT data throughput by >7x and field-of-view (FOV) by >4x. A 200 kHz 1060 nm Axsun swept-source was optically buffered to 400 kHz sweep-rate, and SESLO and OCT were simultaneously digitized on dual input channels of a 4 GS/s digitizer at 1.2 GS/s per channel using a custom k-clock. We show in vivo human imaging of the anterior segment out to the limbus and retinal fundus over a >40° FOV. In addition, nine overlapping volumetric SS-SESLO-OCT volumes were acquired under video-rate SESLO preview and guidance. In post-processing, all nine SESLO images and en face projections of the corresponding OCT volumes were mosaicked to show widefield multimodal fundus imaging with a >80° FOV. Concurrent multimodal SS-SESLO-OCT may have applications in clinical diagnostic imaging by enabling aiming, image registration, and multi-field mosaicking and benefit intraoperative imaging by allowing for real-time surgical feedback, instrument tracking, and overlays of computationally extracted image-based surrogate biomarkers of disease.

  11. Cross-modal learning to rank via latent joint representation.

    PubMed

    Wu, Fei; Jiang, Xinyang; Li, Xi; Tang, Siliang; Lu, Weiming; Zhang, Zhongfei; Zhuang, Yueting

    2015-05-01

    Cross-modal ranking is a research topic that is imperative to many applications involving multimodal data. Discovering a joint representation for multimodal data and learning a ranking function are essential in order to boost cross-media retrieval (i.e., image-query-text or text-query-image). In this paper, we propose an approach to discover the latent joint representation of pairs of multimodal data (e.g., pairs of an image query and a text document) via a conditional random field and structural learning in a listwise ranking manner. We call this approach cross-modal learning to rank via latent joint representation (CML²R). In CML²R, the correlations between multimodal data are captured in terms of their shared hidden variables (e.g., topics), and a hidden-topic-driven discriminative ranking function is learned in a listwise manner. The experiments show that the proposed approach achieves good performance in cross-media retrieval and is also able to learn a discriminative representation of multimodal data.
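
    The record above learns its ranking function in a listwise manner. As a hedged, generic illustration of what a listwise objective looks like (a ListNet-style top-one loss, not the CRF-based CML²R model itself), the Python sketch below computes the cross-entropy between the softmax of graded relevance labels and the softmax of predicted scores for a single query; the example scores and labels are made up.

        import numpy as np

        def listwise_softmax_loss(scores, relevance):
            # ListNet-style top-one loss: cross-entropy between the softmax of the
            # graded relevance labels and the softmax of the predicted scores.
            def softmax(x):
                e = np.exp(x - x.max())
                return e / e.sum()
            p_true = softmax(relevance.astype(float))
            p_pred = softmax(scores.astype(float))
            return -np.sum(p_true * np.log(p_pred + 1e-12))

        # One query with five candidate documents.
        scores = np.array([2.1, 0.3, 1.5, -0.2, 0.9])    # model scores
        relevance = np.array([3, 0, 2, 0, 1])            # graded relevance labels
        print(listwise_softmax_loss(scores, relevance))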

  12. Multi-Modality Cascaded Convolutional Neural Networks for Alzheimer's Disease Diagnosis.

    PubMed

    Liu, Manhua; Cheng, Danni; Wang, Kundong; Wang, Yaping

    2018-03-23

    Accurate and early diagnosis of Alzheimer's disease (AD) plays an important role in patient care and the development of future treatments. Structural and functional neuroimages, such as magnetic resonance images (MRI) and positron emission tomography (PET), provide powerful imaging modalities to help understand the anatomical and functional neural changes related to AD. In recent years, machine learning methods have been widely studied for the analysis of multi-modality neuroimages for quantitative evaluation and computer-aided diagnosis (CAD) of AD. Most existing methods extract hand-crafted imaging features after image preprocessing such as registration and segmentation, and then train a classifier to distinguish AD subjects from other groups. This paper proposes to construct cascaded convolutional neural networks (CNNs) to learn the multi-level and multimodal features of MRI and PET brain images for AD classification. First, multiple deep 3D-CNNs are constructed on different local image patches to transform the local brain image into more compact high-level features. Then, an upper high-level 2D-CNN followed by a softmax layer is cascaded to ensemble the high-level features learned from the multiple modalities and generate the latent multimodal correlation features of the corresponding image patches for the classification task. Finally, these learned features are combined by a fully connected layer followed by a softmax layer for AD classification. The proposed method can automatically learn generic multi-level and multimodal features from multiple imaging modalities for classification, which are robust to scale and rotation variations to some extent. No image segmentation or rigid registration is required in pre-processing the brain images. Our method is evaluated on the baseline MRI and PET images of 397 subjects, including 93 AD patients, 204 patients with mild cognitive impairment (MCI; 76 pMCI + 128 sMCI), and 100 normal controls (NC), from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Experimental results show that the proposed method achieves an accuracy of 93.26% for classification of AD vs. NC and 82.95% for classification of pMCI vs. NC, demonstrating promising classification performance.
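
    The multi-stream, patch-based idea described above can be sketched in a few lines of PyTorch. The toy model below is only a hedged structural illustration, not the authors' cascaded 3D/2D-CNN ensemble: it encodes a handful of MRI and PET patches with small 3D CNNs, concatenates the per-modality features, and classifies with a fully connected head; the patch size, patch count, and layer widths are arbitrary assumptions.

        import torch
        import torch.nn as nn

        class Patch3DCNN(nn.Module):
            # Small 3D CNN mapping one 16x16x16 image patch to a feature vector.
            def __init__(self, out_dim=64):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                    nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                )
                self.fc = nn.Linear(16 * 4 * 4 * 4, out_dim)

            def forward(self, x):
                return self.fc(self.features(x).flatten(1))

        class MultimodalADClassifier(nn.Module):
            # Encode MRI and PET patches separately, fuse the features, classify AD vs. NC.
            def __init__(self, n_patches=4, n_classes=2):
                super().__init__()
                self.mri_enc, self.pet_enc = Patch3DCNN(), Patch3DCNN()
                self.head = nn.Sequential(
                    nn.Linear(2 * 64 * n_patches, 128), nn.ReLU(), nn.Linear(128, n_classes))

            def forward(self, mri_patches, pet_patches):
                # patches: (batch, n_patches, 1, 16, 16, 16)
                b = mri_patches.shape[0]
                f_mri = self.mri_enc(mri_patches.flatten(0, 1)).view(b, -1)
                f_pet = self.pet_enc(pet_patches.flatten(0, 1)).view(b, -1)
                return self.head(torch.cat([f_mri, f_pet], dim=1))

        # Toy forward pass with random patches.
        model = MultimodalADClassifier()
        mri = torch.randn(2, 4, 1, 16, 16, 16)
        pet = torch.randn(2, 4, 1, 16, 16, 16)
        print(model(mri, pet).shape)              # torch.Size([2, 2])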

  13. Crucial breakthrough of second near-infrared biological window fluorophores: design and synthesis toward multimodal imaging and theranostics

    DOE PAGES

    He, Shuqing; Song, Jun; Qu, Junle; ...

    2018-01-01

    Recent advances in the chemical design and synthesis of fluorophores in the second near-infrared biological window (NIR-II) for multimodal imaging and theranostics are summarized and highlighted in this review article.

  14. Crucial breakthrough of second near-infrared biological window fluorophores: design and synthesis toward multimodal imaging and theranostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Shuqing; Song, Jun; Qu, Junle

    Recent advances in the chemical design and synthesis of fluorophores in the second near-infrared biological window (NIR-II) for multimodal imaging and theranostics are summarized and highlighted in this review article.

  15. Image matrix processor for fast multi-dimensional computations

    DOEpatents

    Roberson, George P.; Skeate, Michael F.

    1996-01-01

    An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.

  16. The sweet spot: FDG and other 2-carbon glucose analogs for multi-modal metabolic imaging of tumor metabolism

    PubMed Central

    Cox, Benjamin L; Mackie, Thomas R; Eliceiri, Kevin W

    2015-01-01

    Multi-modal imaging approaches to tumor metabolism that provide improved specificity, physiological relevance, and spatial resolution would improve the diagnosis of tumors and the evaluation of tumor progression. Currently, the molecular probe FDG, glucose fluorinated with 18F at the 2-carbon, is the primary metabolic approach for clinical diagnostics with PET imaging. However, PET lacks the resolution necessary to yield intratumoral distributions of deoxyglucose at the cellular level. Multi-modal imaging could address this problem, but requires the development of new glucose analogs that are better suited to other imaging modalities. Several such analogs have been created and are reviewed here. Also reviewed are several multi-modal imaging studies that attempt to shed light on the cellular distribution of glucose analogs within tumors. Some of these studies are performed in vitro, while others are performed in vivo, in an animal model. The results from these studies reveal a visualization gap between the in vitro and in vivo studies that, if closed, could enable the early detection of tumors, high-resolution monitoring of tumors during treatment, and greater accuracy in the assessment of different imaging agents. PMID:25625022

  17. Towards an ultra-thin medical endoscope: multimode fibre as a wide-field image transferring medium

    NASA Astrophysics Data System (ADS)

    Duriš, Miroslav; Bradu, Adrian; Podoleanu, Adrian; Hughes, Michael

    2018-03-01

    Multimode optical fibres are attractive for biomedical and industrial applications such as endoscopes because of the small cross section and high imaging resolution they can provide in comparison to widely used fibre bundles. However, the image is randomly scrambled by propagation through a multimode fibre. Even though the scrambling is unpredictable, it is deterministic, and therefore it can be reversed. To unscramble the image, we treat the multimode fibre as a linear, disordered scattering medium. To calibrate, we scan a focused beam of coherent light over thousands of different beam positions at the distal end and record the complex fields at the proximal end of the fibre. In this way, the input-output response of the system is determined, which then allows computational reconstruction of reflection-mode images. However, there remains the problem of illuminating the tissue via the fibre while avoiding back reflections from the proximal face. To address this drawback, we provide here the first preliminary confirmation that an image can be transferred through a 2x2 fibre coupler, with the sample at its distal port interrogated in reflection. Light is injected into one port for illumination and then collected from a second port for imaging.
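
    The calibration step described above amounts to measuring a transmission matrix and inverting it. The Python sketch below is a hedged, idealized illustration of that principle on a random complex matrix: calibration spots form an identity input basis, the matrix is estimated by a pseudo-inverse, and an unknown pattern is recovered from the scrambled output. The real system measures complex fields interferometrically and works with far larger mode counts, so the sizes and variable names here are assumptions.

        import numpy as np

        def calibrate_transmission_matrix(input_fields, output_fields):
            # Estimate T in output = T @ input from calibration pairs recorded
            # while scanning a focused spot over the distal fibre face.
            return output_fields @ np.linalg.pinv(input_fields)

        def reconstruct_image(T, measured_output):
            # Undo the scrambling with the pseudo-inverse of the transmission matrix.
            return np.linalg.pinv(T) @ measured_output

        # Toy example: a random complex "fibre" calibrated with one spot per input pixel.
        rng = np.random.default_rng(4)
        n = 64
        T_true = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        calib_in = np.eye(n)                      # focused spots form an identity basis
        calib_out = T_true @ calib_in
        T_est = calibrate_transmission_matrix(calib_in, calib_out)
        obj = rng.random(n)                       # unknown object pattern
        print(np.allclose(reconstruct_image(T_est, T_true @ obj).real, obj))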

  18. The new frontiers of multimodality and multi-isotope imaging

    NASA Astrophysics Data System (ADS)

    Behnam Azad, Babak; Nimmagadda, Sridhar

    2014-06-01

    Technological advances in imaging systems and the development of target-specific imaging tracers have progressed rapidly over the past two decades. Recent progress in "all-in-one" imaging systems that allow for automated image coregistration has significantly added to the growth of this field. These developments include ultra-high-resolution PET and SPECT scanners that can be integrated with CT or MR, resulting in PET/CT, SPECT/CT, SPECT/PET and PET/MRI scanners for simultaneous high-resolution, high-sensitivity anatomical and functional imaging. These technological developments have also resulted in drastic enhancements in image quality and acquisition time while eliminating cross-compatibility issues between modalities. Furthermore, the most cutting-edge technology, though mostly preclinical, also allows for simultaneous multimodality multi-isotope image acquisition and image reconstruction based on radioisotope decay characteristics. These scientific advances, in conjunction with the explosion in the development of highly specific multimodality molecular imaging agents, may aid in realizing simultaneous imaging of multiple biological processes and pave the way towards more efficient diagnosis and improved patient care.

  19. Multimodal imaging of lung cancer and its microenvironment (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Hariri, Lida P.; Niederst, Matthew J.; Mulvey, Hillary; Adams, David C.; Hu, Haichuan; Chico Calero, Isabel; Szabari, Margit V.; Vakoc, Benjamin J.; Hasan, Tayyaba; Bouma, Brett E.; Engelman, Jeffrey A.; Suter, Melissa J.

    2016-03-01

    Despite significant advances in targeted therapies for lung cancer, nearly all patients develop drug resistance within 6-12 months and prognosis remains poor. Developing drug resistance is a progressive process that involves tumor cells and their microenvironment. We hypothesize that microenvironment factors alter tumor growth and response to targeted therapy. We conducted in vitro studies in human EGFR-mutant lung carcinoma cells and demonstrated that factors secreted by lung fibroblasts result in increased tumor cell survival during targeted therapy with the EGFR inhibitor gefitinib. We also demonstrated that increased environmental stiffness results in increased tumor survival during gefitinib therapy. To test our hypothesis in vivo, we developed a multimodal optical imaging protocol for preclinical intravital imaging in mouse models to assess the tumor and its microenvironment over time. We have successfully conducted multimodal imaging of dorsal skinfold chamber (DSC) window mice implanted with GFP-labeled human EGFR-mutant lung carcinoma cells and visualized changes in tumor development and microenvironment facets over time. Multimodal imaging included structural OCT to assess tumor viability and necrosis, polarization-sensitive OCT to measure tissue birefringence for collagen/fibroblast detection, and Doppler OCT to assess tumor vasculature. Confocal imaging was also performed for high-resolution visualization of the GFP-labeled EGFR-mutant lung cancer cells and was coregistered with OCT. Our results demonstrated that stromal support and vascular growth are essential to tumor progression. Multimodal imaging is a useful tool to assess a tumor and its microenvironment over time.

  20. Multimodal flexible cystoscopy for creating co-registered panoramas of the bladder urothelium

    NASA Astrophysics Data System (ADS)

    Seibel, Eric J.; Soper, Timothy D.; Burkhardt, Matthew R.; Porter, Michael P.; Yoon, W. Jong

    2012-02-01

    Bladder cancer is the most expensive cancer to treat due to its high rate of recurrence. Though white light cystoscopy is the gold standard for bladder cancer surveillance, the advent of fluorescence biomarkers provides an opportunity to improve sensitivity for early detection and to reduce recurrence through more accurate excision. Ideally, fluorescence information could be combined with standard reflectance images to provide multimodal views of the bladder wall. The scanning fiber endoscope (SFE), 1.2 mm in diameter, is able to acquire wide-field multimodal video from a bladder phantom with fluorescent cancer "hot-spots". The SFE generates images by scanning red, green, and blue (RGB) laser light and detects the backscatter signal for reflectance video of 500-line resolution at 30 frames per second. We imaged a bladder phantom with painted vessels and mimicked fluorescent lesions by applying green fluorescent microspheres to the surface. By eliminating the green laser illumination, simultaneous reflectance and fluorescence images can be acquired at the same field of view, resolution, and frame rate. Moreover, the multimodal SFE is combined with a robotic steering mechanism and image stitching software as part of a fully automated bladder surveillance system. Using this system, the SFE can be reliably articulated over the entire 360° bladder surface. Acquired images can then be stitched into a multimodal 3D panorama of the bladder using software developed in our laboratory. In each panorama, the fluorescence images are exactly co-registered with the RGB reflectance images.

  1. New Technologies, New Possibilities for the Arts and Multimodality in English Language Arts

    ERIC Educational Resources Information Center

    Williams, Wendy R.

    2014-01-01

    This article discusses the arts, multimodality, and new technologies in English language arts. It then turns to the example of the illuminated text--a multimodal book report consisting of animated text, music, and images--to consider how art, multimodality, and technology can work together to support students' reading of literature and inspire…

  2. Combining kriging, multispectral and multimodal microscopy to resolve malaria-infected erythrocyte contents.

    PubMed

    Dabo-Niang, S; Zoueu, J T

    2012-09-01

    In this communication, we demonstrate how kriging, combined with multispectral and multimodal microscopy, can enhance the resolution of images of malaria-infected cells and provide more detail on their composition for analysis and diagnosis. The results of this interpolation, applied to the two principal components of the multispectral and multimodal images, illustrate that examination of the contents of Plasmodium falciparum-infected human erythrocytes is improved. © 2012 The Authors Journal of Microscopy © 2012 Royal Microscopical Society.
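
    The pipeline sketched by the record, principal-component decomposition of a multispectral stack followed by kriging interpolation of the component images, can be illustrated with standard Python tools. In the hedged toy example below, scikit-learn's Gaussian process regressor with an RBF kernel stands in for the kriging interpolator (the two are closely related), and the image sizes, band count, and kernel length scale are assumptions rather than values from the paper.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def upsample_component(component_img, factor=4):
            # Interpolate one principal-component image onto a finer grid with a
            # Gaussian process (RBF kernel), a close relative of ordinary kriging.
            h, w = component_img.shape
            ys, xs = np.mgrid[0:h, 0:w]
            coords = np.column_stack([ys.ravel(), xs.ravel()]).astype(float)
            gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0), alpha=1e-6)
            gp.fit(coords, component_img.ravel())
            fy, fx = np.mgrid[0:h:1.0 / factor, 0:w:1.0 / factor]
            fine = np.column_stack([fy.ravel(), fx.ravel()])
            return gp.predict(fine).reshape(fy.shape)

        # Toy multispectral stack: 8 bands over a 16x16 field of view.
        rng = np.random.default_rng(5)
        stack = rng.random((8, 16, 16))
        components = PCA(n_components=2).fit_transform(stack.reshape(8, -1).T)
        pc_images = components.T.reshape(2, 16, 16)
        print(upsample_component(pc_images[0]).shape)   # (64, 64)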

  3. Multifocus confocal Raman microspectroscopy for fast multimode vibrational imaging of living cells.

    PubMed

    Okuno, Masanari; Hamaguchi, Hiro-o

    2010-12-15

    We have developed a multifocus confocal Raman microspectroscopic system for the fast multimode vibrational imaging of living cells. It consists of an inverted microscope equipped with a microlens array, a pinhole array, a fiber bundle, and a multichannel Raman spectrometer. Forty-eight Raman spectra from 48 foci under the microscope are simultaneously obtained by using multifocus excitation and image-compression techniques. The multifocus confocal configuration suppresses the background generated from the cover glass and the cell culturing medium so that high-contrast images are obtainable with a short accumulation time. The system enables us to obtain multimode (10 different vibrational modes) vibrational images of living cells in tens of seconds with only 1 mW laser power at one focal point. This image acquisition time is more than 10 times faster than that in conventional single-focus Raman microspectroscopy.

  4. Integrated photoacoustic microscopy, optical coherence tomography, and fluorescence microscopy for multimodal chorioretinal imaging

    NASA Astrophysics Data System (ADS)

    Tian, Chao; Zhang, Wei; Nguyen, Van Phuc; Huang, Ziyi; Wang, Xueding; Paulus, Yannis M.

    2018-02-01

    Currently available clinical retinal imaging techniques have limitations, including limited depth of penetration or the requirement for invasive injection of exogenous contrast agents. Here, we developed a novel multimodal imaging system for high-speed, high-resolution retinal imaging of larger animals, such as rabbits. The system integrates three state-of-the-art imaging modalities: photoacoustic microscopy (PAM), optical coherence tomography (OCT), and fluorescence microscopy (FM). In vivo experimental results in rabbit eyes show that the PAM is able to visualize laser-induced retinal burns and distinguish individual eye blood vessels using a laser exposure dose of 80 nJ, which is well below the American National Standards Institute (ANSI) safety limit of 160 nJ. The OCT can discern different retinal layers and visualize laser burns and choroidal detachments. The novel multi-modal imaging platform holds great promise for ophthalmic imaging.

  5. Multi-Modal Nano-Probes for Radionuclide and 5-color Near Infrared Optical Lymphatic Imaging

    PubMed Central

    Kobayashi, Hisataka; Koyama, Yoshinori; Barrett, Tristan; Hama, Yukihiro; Regino, Celeste A. S.; Shin, In Soo; Jang, Beom-Su; Le, Nhat; Paik, Chang H.; Choyke, Peter L.; Urano, Yasuteru

    2008-01-01

    Current contrast agents generally have one function and can only be imaged in monochrome; therefore, the majority of imaging methods can only impart uniparametric information. A single nano-particle has the potential to be loaded with multiple payloads. Such multi-modality probes have the ability to be imaged by more than one imaging technique, which could compensate for the weakness or even combine the advantages of each individual modality. Furthermore, optical imaging using different optical probes enables us to achieve multi-color in vivo imaging, wherein multiple parameters can be read from a single image. To allow differentiation of multiple optical signals in vivo, each probe should have a close but different near infrared emission. To this end, we synthesized nano-probes with multi-modal and multi-color potential, which employed a polyamidoamine dendrimer platform linked to both radionuclides and optical probes, permitting dual-modality scintigraphic and 5-color near infrared optical lymphatic imaging using a multiple excitation spectrally-resolved fluorescence imaging technique. PMID:19079788

  6. Multimodal nanoprobes for radionuclide and five-color near-infrared optical lymphatic imaging.

    PubMed

    Kobayashi, Hisataka; Koyama, Yoshinori; Barrett, Tristan; Hama, Yukihiro; Regino, Celeste A S; Shin, In Soo; Jang, Beom-Su; Le, Nhat; Paik, Chang H; Choyke, Peter L; Urano, Yasuteru

    2007-11-01

    Current contrast agents generally have one function and can only be imaged in monochrome; therefore, the majority of imaging methods can only impart uniparametric information. A single nanoparticle has the potential to be loaded with multiple payloads. Such multimodality probes have the ability to be imaged by more than one imaging technique, which could compensate for the weakness or even combine the advantages of each individual modality. Furthermore, optical imaging using different optical probes enables us to achieve multicolor in vivo imaging, wherein multiple parameters can be read from a single image. To allow differentiation of multiple optical signals in vivo, each probe should have a close but different near-infrared emission. To this end, we synthesized nanoprobes with multimodal and multicolor potential, which employed a polyamidoamine dendrimer platform linked to both radionuclides and optical probes, permitting dual-modality scintigraphic and five-color near-infrared optical lymphatic imaging using a multiple-excitation spectrally resolved fluorescence imaging technique.

  7. Multimodal digital color imaging system for facial skin lesion analysis

    NASA Astrophysics Data System (ADS)

    Bae, Youngwoo; Lee, Youn-Heum; Jung, Byungjo

    2008-02-01

    In dermatology, various digital imaging modalities have been used as important tools to quantitatively evaluate the treatment effect on skin lesions. Cross-polarization color imaging has been used to evaluate skin chromophore (melanin and hemoglobin) information and parallel-polarization imaging to evaluate skin texture information. In addition, UV-A-induced fluorescent imaging has been widely used to evaluate various skin conditions such as sebum, keratosis, sun damage, and vitiligo. In order to maximize the evaluation efficacy for various skin lesions, it is necessary to integrate various imaging modalities into one imaging system. In this study, we propose a multimodal digital color imaging system, which provides four different digital color images: a standard color image, parallel- and cross-polarization color images, and a UV-A-induced fluorescent color image. Herein, we describe the imaging system and present examples of image analysis. By analyzing the color information and morphological features of facial skin lesions, we are able to evaluate various skin lesions simultaneously and comparably. In conclusion, the multimodal color imaging system can be utilized as an important assistive tool in dermatology.

  8. General consumer communication tools for improved image management and communication in medicine.

    PubMed

    Rosset, Chantal; Rosset, Antoine; Ratib, Osman

    2005-12-01

    We elected to explore new technologies emerging on the general consumer market that can improve and facilitate image and data communication in medical and clinical environments. These new technologies developed for communication and storage of data can improve user convenience and facilitate the communication and transport of images and related data beyond the usual limits and restrictions of a traditional picture archiving and communication system (PACS) network. We specifically tested and implemented three new technologies provided on Apple computer platforms. (1) We adopted the iPod, an MP3 portable player with hard-disk storage, to easily and quickly move large numbers of DICOM images. (2) We adopted iChat, a videoconference and instant-messaging software, to transmit DICOM images in real time to a distant computer for teleradiology conferencing. (3) Finally, we developed a direct secure interface to use the iDisk service, a file-sharing service based on the WebDAV technology, to send and share DICOM files between distant computers. These three technologies were integrated in a new open-source image navigation and display software called OsiriX, allowing for manipulation and communication of multimodality and multidimensional DICOM image data sets. This software is freely available as an open-source project at http://homepage.mac.com/rossetantoine/OsiriX. Our experience showed that the implementation of these technologies allowed us to significantly enhance the existing PACS with valuable new features without any additional investment or the need for complex extensions of our infrastructure. The added features such as teleradiology, secure and convenient image and data communication, and the use of external data storage services open the gate to a much broader extension of our imaging infrastructure to the outside world.

  9. Spectral embedding-based registration (SERg) for multimodal fusion of prostate histology and MRI

    NASA Astrophysics Data System (ADS)

    Hwuang, Eileen; Rusu, Mirabela; Karthigeyan, Sudha; Agner, Shannon C.; Sparks, Rachel; Shih, Natalie; Tomaszewski, John E.; Rosen, Mark; Feldman, Michael; Madabhushi, Anant

    2014-03-01

    Multi-modal image registration is needed to align medical images collected from different protocols or imaging sources, thereby allowing the mapping of complementary information between images. One challenge of multimodal image registration is that typical similarity measures rely on statistical correlations between image intensities to determine anatomical alignment. The use of alternate image representations could allow for mapping of intensities into a space or representation such that the multimodal images appear more similar, thus facilitating their co-registration. In this work, we present a spectral embedding-based registration (SERg) method that uses non-linearly embedded representations obtained from independent components of statistical texture maps of the original images to facilitate multimodal image registration. Our methodology comprises the following main steps: 1) image-derived textural representation of the original images, 2) dimensionality reduction using independent component analysis (ICA), 3) spectral embedding to generate the alternate representations, and 4) image registration. The rationale behind our approach is that SERg yields embedded representations that allow very different-looking images to appear more similar, thereby facilitating improved co-registration. Statistical texture features are derived from the image intensities and then reduced to a smaller set by using independent component analysis to remove redundant information. Spectral embedding generates a new representation by eigendecomposition from which only the most important eigenvectors are selected. This helps to accentuate areas of salience based on modality-invariant structural information and therefore better identifies corresponding regions in both the template and target images. The spirit behind SERg is that image registration driven by these areas of salience and correspondence should improve alignment accuracy. In this work, SERg is implemented using Demons to allow the algorithm to more effectively register multimodal images. SERg is also tested within the free-form deformation framework driven by mutual information. Nine pairs of synthetic T1-weighted to T2-weighted brain MRI were registered under the following conditions: five levels of noise (0%, 1%, 3%, 5%, and 7%) and two levels of bias field (20% and 40%) each with and without noise. We demonstrate that across all of these conditions, SERg yields a mean squared error that is 81.51% lower than that of Demons driven by MRI intensity alone. We also spatially align twenty-six ex vivo histology sections and in vivo prostate MRI in order to map the spatial extent of prostate cancer onto corresponding radiologic imaging. SERg performs better than intensity registration by decreasing the root mean squared distance of annotated landmarks in the prostate gland via both the Demons algorithm and mutual information-driven free-form deformation. In both synthetic and clinical experiments, the observed improvement in alignment of the template and target images suggests the utility of parametric eigenvector representations and hence SERg for multimodal image registration.
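
    As a rough illustration of the representation steps described in this record (texture maps, ICA, spectral embedding), the following Python sketch strings together off-the-shelf components; the texture features, parameter values, and subsampling strategy are illustrative assumptions, not the authors' implementation, and the registration step itself is omitted.

    ```python
    # Hypothetical sketch of a SERg-style representation step (not the authors' code):
    # texture features -> ICA -> spectral embedding, per pixel.
    import numpy as np
    from scipy import ndimage
    from sklearn.decomposition import FastICA
    from sklearn.manifold import SpectralEmbedding

    def texture_features(img):
        """Stack a few simple statistical texture maps (illustrative choices)."""
        feats = [
            img,
            ndimage.uniform_filter(img, size=5),          # local mean
            ndimage.generic_filter(img, np.std, size=5),  # local standard deviation
            ndimage.sobel(img, axis=0),                   # row-direction gradient
            ndimage.sobel(img, axis=1),                   # column-direction gradient
        ]
        return np.stack([f.ravel() for f in feats], axis=1)

    def embedded_representation(img, n_ica=3, n_embed=1, n_pixels=2000, seed=0):
        X = texture_features(img.astype(float))
        X = FastICA(n_components=n_ica, random_state=seed).fit_transform(X)
        # Spectral embedding is costly, so only a random subset of pixels is embedded here.
        rng = np.random.default_rng(seed)
        idx = rng.choice(X.shape[0], size=min(n_pixels, X.shape[0]), replace=False)
        emb = SpectralEmbedding(n_components=n_embed, random_state=seed).fit_transform(X[idx])
        return idx, emb

    if __name__ == "__main__":
        img = np.random.rand(64, 64)  # stand-in for a T1- or T2-weighted slice
        idx, emb = embedded_representation(img)
        print(emb.shape)
    ```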

  10. Multimodal system for the planning and guidance of bronchoscopy

    NASA Astrophysics Data System (ADS)

    Higgins, William E.; Cheirsilp, Ronnarit; Zang, Xiaonan; Byrnes, Patrick

    2015-03-01

    Many technical innovations in multimodal radiologic imaging and bronchoscopy have emerged recently in the effort against lung cancer. Modern X-ray computed-tomography (CT) scanners provide three-dimensional (3D) high-resolution chest images, positron emission tomography (PET) scanners give complementary molecular imaging data, and new integrated PET/CT scanners combine the strengths of both modalities. State-of-the-art bronchoscopes permit minimally invasive tissue sampling, with vivid endobronchial video enabling navigation deep into the airway-tree periphery, while complementary endobronchial ultrasound (EBUS) reveals local views of anatomical structures outside the airways. In addition, image-guided intervention (IGI) systems have proven their utility for CT-based planning and guidance of bronchoscopy. Unfortunately, no IGI system exists that integrates all sources effectively through the complete lung-cancer staging work flow. This paper presents a prototype of a computer-based multimodal IGI system that strives to fill this need. The system combines a wide range of automatic and semi-automatic image-processing tools for multimodal data fusion and procedure planning. It also provides a flexible graphical user interface for follow-on guidance of bronchoscopy/EBUS. Human-study results demonstrate the system's potential.

  11. Structured illumination multimodal 3D-resolved quantitative phase and fluorescence sub-diffraction microscopy

    PubMed Central

    Chowdhury, Shwetadwip; Eldridge, Will J.; Wax, Adam; Izatt, Joseph A.

    2017-01-01

    Sub-diffraction resolution imaging has played a pivotal role in biological research by visualizing key, but previously unresolvable, sub-cellular structures. Unfortunately, applications of far-field sub-diffraction resolution are currently divided between fluorescent and coherent-diffraction regimes, and a multimodal sub-diffraction technique that bridges this gap has not yet been demonstrated. Here we report that structured illumination (SI) allows multimodal sub-diffraction imaging of both coherent quantitative-phase (QP) and fluorescence. Due to SI’s conventionally fluorescent applications, we first demonstrate the principle of SI-enabled three-dimensional (3D) QP sub-diffraction imaging with calibration microspheres. Image analysis confirmed enhanced lateral and axial resolutions over diffraction-limited QP imaging, and established striking parallels between coherent SI and conventional optical diffraction tomography. We next introduce an optical system utilizing SI to achieve 3D sub-diffraction, multimodal QP/fluorescent visualization of A549 biological cells fluorescently tagged for F-actin. Our results suggest that SI has a unique utility in studying biological phenomena with significant molecular, biophysical, and biochemical components. PMID:28663887

  12. A Multimodal Search Engine for Medical Imaging Studies.

    PubMed

    Pinho, Eduardo; Godinho, Tiago; Valente, Frederico; Costa, Carlos

    2017-02-01

    The use of digital medical imaging systems in healthcare institutions has increased significantly, and the large amounts of data in these systems have led to the conception of powerful support tools: recent studies on content-based image retrieval (CBIR) and multimodal information retrieval in the field hold great potential in decision support, as well as for addressing multiple challenges in healthcare systems, such as computer-aided diagnosis (CAD). However, the subject is still under heavy research, and very few solutions have become part of Picture Archiving and Communication Systems (PACS) in hospitals and clinics. This paper proposes an extensible platform for multimodal medical image retrieval, integrated in an open-source PACS software with profile-based CBIR capabilities. In this article, we detail a technical approach to the problem by describing its main architecture and each sub-component, as well as the available web interfaces and the multimodal query techniques applied. Finally, we assess our implementation of the engine with computational performance benchmarks.

  13. Multi-mode Intravascular RF Coil for MRI-guided Interventions

    PubMed Central

    Kurpad, Krishna N.; Unal, Orhan

    2011-01-01

    Purpose To demonstrate the feasibility of using a single intravascular RF probe connected to the external MRI system via a single coaxial cable to perform active tip tracking and catheter visualization, and high SNR intravascular imaging. Materials and Methods A multi-mode intravascular RF coil was constructed on a 6F balloon catheter and interfaced to a 1.5T MRI scanner via a decoupling circuit. Bench measurements of coil impedances were followed by imaging experiments in saline and phantoms. Results The multi-mode coil behaves as an inductively-coupled transmit coil. Forward looking capability of 6mm is measured. Greater than 3-fold increase in SNR compared to conventional imaging using optimized external coil is demonstrated. Simultaneous active tip tracking and catheter visualization is demonstrated. Conclusions It is feasible to perform 1) active tip tracking, 2) catheter visualization, and 3) high SNR imaging using a single multi-mode intravascular RF coil that is connected to the external system via a single coaxial cable. PMID:21448969

  14. Identifying Multimodal Intermediate Phenotypes between Genetic Risk Factors and Disease Status in Alzheimer’s Disease

    PubMed Central

    Hao, Xiaoke; Yao, Xiaohui; Yan, Jingwen; Risacher, Shannon L.; Saykin, Andrew J.; Zhang, Daoqiang; Shen, Li

    2016-01-01

    Neuroimaging genetics has attracted growing attention and interest, as it is thought to be a powerful strategy for examining the influence of genetic variants (i.e., single nucleotide polymorphisms (SNPs)) on structures or functions of the human brain. In recent studies, univariate or multivariate regression analysis methods are typically used to capture the effective associations between genetic variants and quantitative traits (QTs) such as brain imaging phenotypes. The identified imaging QTs, although associated with certain genetic markers, may not all be disease specific. A useful, but underexplored, scenario could be to discover only those QTs associated with both genetic markers and disease status for revealing the chain from genotype to phenotype to symptom. In addition, multimodal brain imaging phenotypes are extracted from different perspectives, and imaging markers consistently showing up across multiple modalities may provide more insight for the mechanistic understanding of diseases (i.e., Alzheimer's disease (AD)). In this work, we propose a general framework to exploit multi-modal brain imaging phenotypes as intermediate traits that bridge genetic risk factors and multi-class disease status. We applied our proposed method to explore the relation between the well-known AD risk SNP APOE rs429358 and three baseline brain imaging modalities (i.e., structural magnetic resonance imaging (MRI), fluorodeoxyglucose positron emission tomography (FDG-PET) and F-18 florbetapir PET amyloid imaging (AV45)) from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The empirical results demonstrate that our proposed method not only helps improve the performance of imaging genetic associations, but also discovers robust and consistent regions of interest (ROIs) across modalities to guide the disease-induced interpretation. PMID:27277494
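
    For readers unfamiliar with the basic building block behind such analyses, the sketch below shows a plain univariate SNP-to-QT association test on synthetic data; it is only a toy and does not reproduce the multi-modal, multi-class framework proposed in the paper.

    ```python
    # Illustrative only (not the authors' framework): a basic ROI-level association
    # test between an additive SNP genotype (0/1/2 copies of the risk allele) and
    # an imaging quantitative trait, via ordinary least squares.
    import numpy as np
    from scipy import stats

    def snp_qt_association(genotype, qt):
        """Return slope and p-value of regressing an imaging QT on genotype dosage."""
        res = stats.linregress(genotype, qt)
        return res.slope, res.pvalue

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        g = rng.integers(0, 3, size=200)                      # hypothetical allele dosage
        qt = 1.0 - 0.05 * g + rng.normal(0, 0.1, size=200)    # e.g., a synthetic ROI volume
        print(snp_qt_association(g, qt))
    ```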

  15. Femtosecond Multidimensional Imaging - Watching Chemistry from the Molecule's Point of View

    NASA Astrophysics Data System (ADS)

    Geßner, O.; Lee, A. M. D.; Chrysostom, E. t.-H.; Hayden, C. C.; Stolow, Albert

    Using Femtosecond Multidimensional Imaging we disentangle the complex neutral dissociation mechanism of the NO dimer. We characterize all electronic configurations from start to finish and directly observe the evolution of intramolecular vibrational energy redistribution (IVR).

  16. Multimodality bronchoscopic imaging of tracheopathica osteochondroplastica

    NASA Astrophysics Data System (ADS)

    Colt, Henri; Murgu, Septimiu D.; Ahn, Yeh-Chan; Brenner, Matt

    2009-05-01

    Results of a commercial optical coherence tomography system used as part of a multimodality diagnostic bronchoscopy platform are presented for a 61-year-old patient with central airway obstruction from tracheopathica osteochondroplastica. Comparison to results of white-light bronchoscopy, histology, and endobronchial ultrasound examination is accompanied by a discussion of resolution, penetration depth, contrast, and field of view of these imaging modalities. White-light bronchoscopy revealed irregularly shaped, firm submucosal nodules along cartilaginous structures of the anterior and lateral walls of the trachea, sparing the muscular posterior membrane. Endobronchial ultrasound showed a hyperechoic density of 0.4 cm thickness. Optical coherence tomography (OCT) was performed using a commercially available, compact time-domain OCT system (Niris System, Imalux Corp., Cleveland, Ohio) with a magnetically actuating probe (two-dimensional, front imaging, and inside actuation). Images showed epithelium, upper submucosa, and osseous submucosal nodule layers corresponding with histopathology. To our knowledge, this is the first time these commercially available systems have been used as part of a multimodality bronchoscopy platform to study diagnostic imaging of a benign disease causing central airway obstruction. Further studies are needed to optimize these systems for pulmonary applications and to determine how new-generation imaging modalities will be integrated into a multimodality bronchoscopy platform.

  17. [Research on non-rigid registration of multi-modal medical image based on Demons algorithm].

    PubMed

    Hao, Peibo; Chen, Zhen; Jiang, Shaofeng; Wang, Yang

    2014-02-01

    Non-rigid medical image registration is a popular subject in medical image research and has important clinical value. In this paper, we put forward an improved Demons algorithm that combines a gray-level conservation model with a local structure tensor conservation model to construct a new energy function for the multi-modal registration problem. We then applied the L-BFGS algorithm to optimize the energy function and solve the complex three-dimensional data optimization problem. Finally, we used multi-scale hierarchical refinement to handle large-deformation registration. The experimental results showed that the proposed algorithm performs well for large-deformation, multi-modal three-dimensional medical image registration.
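
    The following minimal sketch shows a classic single-modality Demons iteration (gray-level force plus Gaussian regularization) to make the starting point of such work concrete; the paper's combined gray-level/structure-tensor energy, L-BFGS optimization, and multi-scale scheme are not reproduced, and all parameters are placeholders.

    ```python
    # Minimal sketch of a classic Demons iteration (gray-level constancy only);
    # not the improved multi-modal algorithm described in the record above.
    import numpy as np
    from scipy import ndimage

    def demons_step(fixed, moving_warped, eps=1e-6):
        """One Thirion-style demons force computed from the fixed-image gradient."""
        gy, gx = np.gradient(fixed)                 # gradients along rows (y) and columns (x)
        diff = moving_warped - fixed
        denom = gx**2 + gy**2 + diff**2 + eps
        return -diff * gx / denom, -diff * gy / denom

    def register(fixed, moving, n_iter=50, sigma=1.5):
        ux_total = np.zeros_like(fixed)
        uy_total = np.zeros_like(fixed)
        yy, xx = np.meshgrid(np.arange(fixed.shape[0]), np.arange(fixed.shape[1]),
                             indexing="ij")
        for _ in range(n_iter):
            warped = ndimage.map_coordinates(moving, [yy + uy_total, xx + ux_total],
                                             order=1, mode="nearest")
            ux, uy = demons_step(fixed, warped)
            # Gaussian smoothing acts as a simple diffusion-like regularizer.
            ux_total = ndimage.gaussian_filter(ux_total + ux, sigma)
            uy_total = ndimage.gaussian_filter(uy_total + uy, sigma)
        return ux_total, uy_total
    ```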

  18. Multimodality Imaging Approach towards Primary Aortic Sarcomas Arising after Endovascular Abdominal Aortic Aneurysm Repair: Case Series Report.

    PubMed

    Kamran, Mudassar; Fowler, Kathryn J; Mellnick, Vincent M; Sicard, Gregorio A; Narra, Vamsi R

    2016-06-01

    Primary aortic neoplasms are rare. Aortic sarcoma arising after endovascular aneurysm repair (EVAR) is a scarce subset of primary aortic malignancies, reports of which are infrequent in the published literature. The diagnosis of aortic sarcoma is challenging due to its non-specific clinical presentation, and the prognosis is poor due to delayed diagnosis, rapid proliferation, and propensity for metastasis. Post-EVAR, aortic sarcomas may mimic other more common aortic processes on surveillance imaging. Radiologists are rarely familiar with this rare entity, for which multimodality imaging and awareness are invaluable in early diagnosis. A series of three pathologically confirmed cases is presented to illustrate the multimodality imaging features and clinical presentations of aortic sarcoma arising after EVAR.

  19. A multimodal imaging platform with integrated simultaneous photoacoustic microscopy, optical coherence tomography, optical Doppler tomography and fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Dadkhah, Arash; Zhou, Jun; Yeasmin, Nusrat; Jiao, Shuliang

    2018-02-01

    Various optical imaging modalities with different optical contrast mechanisms have been developed over the past years. Although most of these imaging techniques are being used in many biomedical applications and research studies, integration of these techniques will allow researchers to reach the full potential of these technologies. Nevertheless, combining different imaging techniques is always challenging due to the difference in optical and hardware requirements for different imaging systems. Here, we developed a multimodal optical imaging system with the capability of providing comprehensive structural, functional and molecular information of living tissue at the micrometer scale. This imaging system integrates photoacoustic microscopy (PAM), optical coherence tomography (OCT), optical Doppler tomography (ODT) and fluorescence microscopy in one platform. Optical-resolution PAM (OR-PAM) provides absorption-based imaging of biological tissues. Spectral domain OCT is able to provide structural information based on the scattering properties of biological samples with no need for exogenous contrast agents. In addition, ODT is a functional extension of OCT with the capability of measurement and visualization of blood flow based on the Doppler effect. Fluorescence microscopy reveals molecular information of biological tissue using autofluorescence or exogenous fluorophores. In vivo as well as ex vivo imaging studies demonstrated the capability of our multimodal imaging system to provide comprehensive microscopic information on biological tissues. Integrating all the aforementioned imaging modalities for simultaneous multimodal imaging has promising potential for preclinical research and clinical practice in the near future.

  20. Multimodal targeted high relaxivity thermosensitive liposome for in vivo imaging

    NASA Astrophysics Data System (ADS)

    Kuijten, Maayke M. P.; Hannah Degeling, M.; Chen, John W.; Wojtkiewicz, Gregory; Waterman, Peter; Weissleder, Ralph; Azzi, Jamil; Nicolay, Klaas; Tannous, Bakhos A.

    2015-11-01

    Liposomes are spherical, self-closed structures formed by lipid bilayers that can encapsulate drugs and/or imaging agents in their hydrophilic core or within their membrane moiety, making them suitable delivery vehicles. We synthesized a new liposome containing gadolinium-DOTA in its lipid bilayer as a targeted multimodal molecular imaging agent for magnetic resonance and optical imaging. We showed that this liposome has much higher molar relaxivities r1 and r2 than a more conventional liposome containing gadolinium-DTPA-BSA lipid. By incorporating both gadolinium and rhodamine in the lipid bilayer as well as biotin on its surface, we used this agent for multimodal imaging and targeting of tumors through the strong biotin-streptavidin interaction. Since this new liposome is thermosensitive, it can be used for ultrasound-mediated drug delivery at specific sites, such as tumors, and can be guided by magnetic resonance imaging.

  1. Image recovery from defocused 2D fluorescent images in multimodal digital holographic microscopy.

    PubMed

    Quan, Xiangyu; Matoba, Osamu; Awatsuji, Yasuhiro

    2017-05-01

    A technique of three-dimensional (3D) intensity retrieval from defocused, two-dimensional (2D) fluorescent images in multimodal digital holographic microscopy (DHM) is proposed. In the multimodal DHM, 3D phase and 2D fluorescence distributions are obtained simultaneously by an integrated system of an off-axis DHM and a conventional epifluorescence microscope, respectively. This gives us more information about the target; however, defocused fluorescent images are observed due to the short depth of field. In this Letter, we propose a method to recover the defocused images based on phase compensation and backpropagation from the defocused plane to the focused plane using distance information obtained from the 3D phase distribution. By applying Zernike polynomial phase correction, we restored the fluorescence intensity to the focused imaging planes. An experimental demonstration using fluorescent beads is presented, and the expected applications are suggested.
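
    A building block such a refocusing step could use is free-space angular-spectrum propagation of a complex field, sketched below under assumed wavelength, pixel pitch, and propagation distance; this is not the authors' Zernike-based correction pipeline.

    ```python
    # Generic angular-spectrum free-space propagation (a possible ingredient of such
    # numerical refocusing); wavelength, pixel size, and distance are placeholders.
    import numpy as np

    def angular_spectrum_propagate(field, wavelength, dx, dz):
        """Propagate a complex 2D field by distance dz (units consistent with wavelength/dx)."""
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=dx)
        fy = np.fft.fftfreq(ny, d=dx)
        FX, FY = np.meshgrid(fx, fy)
        arg = 1.0 / wavelength**2 - FX**2 - FY**2
        kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # evanescent components suppressed
        H = np.exp(1j * kz * dz) * (arg > 0)
        return np.fft.ifft2(np.fft.fft2(field) * H)

    if __name__ == "__main__":
        field = np.ones((256, 256), dtype=complex)       # stand-in for a defocused field
        refocused = angular_spectrum_propagate(field, wavelength=0.52e-6, dx=0.1e-6, dz=-5e-6)
        print(np.abs(refocused).max())
    ```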

  2. Real-time dynamic display of registered 4D cardiac MR and ultrasound images using a GPU

    NASA Astrophysics Data System (ADS)

    Zhang, Q.; Huang, X.; Eagleson, R.; Guiraudon, G.; Peters, T. M.

    2007-03-01

    In minimally invasive image-guided surgical interventions, different imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), and real-time three-dimensional (3D) ultrasound (US), can provide complementary, multi-spectral image information. Multimodality dynamic image registration is a well-established approach that permits real-time diagnostic information to be enhanced by placing lower-quality real-time images within a high quality anatomical context. For the guidance of cardiac procedures, it would be valuable to register dynamic MRI or CT with intraoperative US. However, in practice, either the high computational cost prohibits such real-time visualization of volumetric multimodal images in a real-world medical environment, or else the resulting image quality is not satisfactory for accurate guidance during the intervention. Modern graphics processing units (GPUs) provide the programmability, parallelism and increased computational precision to begin to address this problem. In this work, we first outline our research on dynamic 3D cardiac MR and US image acquisition, real-time dual-modality registration and US tracking. Then we describe image processing and optimization techniques for 4D (3D + time) cardiac image real-time rendering. We also present our multimodality 4D medical image visualization engine, which directly runs on a GPU in real-time by exploiting the advantages of the graphics hardware. In addition, techniques such as multiple transfer functions for different imaging modalities, dynamic texture binding, advanced texture sampling and multimodality image compositing are employed to facilitate the real-time display and manipulation of the registered dual-modality dynamic 3D MR and US cardiac datasets.

  3. Parallel Information Processing (Image Transmission Via Fiber Bundle and Multimode Fiber

    NASA Technical Reports Server (NTRS)

    Kukhtarev, Nicholai

    2003-01-01

    Growing demand for visual, user-friendly representation of information inspires the search for new methods of image transmission. Currently used in-series (sequential) methods of information processing are inherently slow and are designed mainly for transmission of one- or two-dimensional arrays of data. Conventional transmission of data by fibers requires many fibers with an array of laser diodes and photodetectors. In practice, fiber bundles are also used for transmission of images. An image is formed on the fiber-optic bundle entrance surface and each fiber transmits the incident image to the exit surface. Since the fibers do not preserve phase, only a 2D intensity distribution can be transmitted in this way. Each single-mode fiber transmits only one pixel of an image. Multimode fibers may also be used, so that each mode represents a different pixel element. Direct transmission of an image through a multimode fiber is hindered by mode scrambling and phase randomization. To overcome these obstacles, wavelength- and time-division multiplexing have been used, with each pixel transmitted on a separate wavelength or time interval. Phase-conjugate techniques have also been tested, but only in an impractical scheme in which the reconstructed image returns to the fiber input end. Another method of three-dimensional imaging over single-mode fibers has been demonstrated using laser light of reduced spatial coherence. Coherence encoding, needed for transmission of images by this method, was realized with a grating interferometer or with the help of an acousto-optic deflector. We suggest a simple, practical holographic method of image transmission over a single multimode fiber or over a fiber bundle with coherent light, using filtering by holographic optical elements. Originally this method was successfully tested for a single multimode fiber. In this research we have modified the holographic method for transmission of laser-illuminated images over a commercially available fiber bundle (fiber endoscope, or fiberscope).

  4. Image matrix processor for fast multi-dimensional computations

    DOEpatents

    Roberson, G.P.; Skeate, M.F.

    1996-10-15

    An apparatus for multi-dimensional computation is disclosed which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination. 10 figs.

  5. Fast and Robust Registration of Multimodal Remote Sensing Images via Dense Orientated Gradient Feature

    NASA Astrophysics Data System (ADS)

    Ye, Y.

    2017-09-01

    This paper presents a fast and robust method for the registration of multimodal remote sensing data (e.g., optical, LiDAR, SAR, and map data). The proposed method is based on the hypothesis that structural similarity between images is preserved across different modalities. To define the proposed method, we first develop a pixel-wise feature descriptor named Dense Orientated Gradient Histogram (DOGH), which can be computed efficiently at every pixel and is robust to non-linear intensity differences between images. Then a fast similarity metric based on DOGH is built in the frequency domain using the Fast Fourier Transform (FFT) technique. Finally, a template matching scheme is applied to detect tie points between images. Experimental results on different types of multimodal remote sensing images show that the proposed similarity metric offers superior matching performance and computational efficiency compared with state-of-the-art methods. Moreover, based on the proposed similarity metric, we also design a fast and robust automatic registration system for multimodal images. This system has been evaluated using a pair of very large SAR and optical images (more than 20000 × 20000 pixels). Experimental results show that our system outperforms the two popular commercial software systems (i.e., ENVI and ERDAS) in both registration accuracy and computational efficiency.
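
    To illustrate the frequency-domain similarity idea, the sketch below correlates dense gradient-orientation channels of a template and a search image via FFT-based convolution; the simple channels stand in for the actual DOGH descriptor, whose construction is not reproduced here.

    ```python
    # Sketch of the FFT-based matching idea: sum of per-channel cross-correlations
    # between dense feature maps of a template and a search image, computed in the
    # frequency domain. Gradient-orientation channels are a stand-in for DOGH.
    import numpy as np
    from scipy import ndimage, signal

    def orientation_channels(img, n_bins=8):
        gy, gx = np.gradient(img.astype(float))
        mag = np.hypot(gx, gy)
        ang = np.mod(np.arctan2(gy, gx), np.pi)          # orientation folded into [0, pi)
        bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
        return [ndimage.gaussian_filter(mag * (bins == b), 2.0) for b in range(n_bins)]

    def fft_match(search_img, template):
        score = None
        for cs, ct in zip(orientation_channels(search_img), orientation_channels(template)):
            c = signal.fftconvolve(cs, ct[::-1, ::-1], mode="valid")  # correlation via FFT
            score = c if score is None else score + c
        return np.unravel_index(np.argmax(score), score.shape)        # best top-left offset

    if __name__ == "__main__":
        scene = np.random.rand(128, 128)
        tmpl = scene[40:72, 50:82]
        print(fft_match(scene, tmpl))   # expected near (40, 50)
    ```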

  6. Water-stable NaLuF4-based upconversion nanophosphors with long-term validity for multimodal lymphatic imaging.

    PubMed

    Zhou, Jing; Zhu, Xingjun; Chen, Min; Sun, Yun; Li, Fuyou

    2012-09-01

    Multimodal imaging is rapidly becoming an important tool for biomedical applications because it can compensate for the deficiencies of individual imaging modalities. Herein, multifunctional NaLuF(4)-based upconversion nanoparticles (Lu-UCNPs) were synthesized through a facile one-step microemulsion method under ambient conditions. The doping of lanthanide ions (Gd(3+), Yb(3+) and Er(3+)/Tm(3+)) endows the Lu-UCNPs with high T(1)-enhancement, bright upconversion luminescence (UCL) emissions, and an excellent X-ray absorption coefficient. Moreover, the as-prepared Lu-UCNPs are stable in water for more than six months, due to the protection of sodium glutamate and diethylene triamine pentaacetic acid (DTPA) coordinating ligands on the surface. Lu-UCNPs have been successfully applied to trimodal CT/MR/UCL lymphatic imaging in small-animal models. It is worth noting that Lu-UCNPs could be used for imaging even after storage for over six months. In vitro transmission electron microscopy (TEM), methyl thiazolyl tetrazolium (MTT) assays and histological analysis demonstrated that Lu-UCNPs exhibit low toxicity to living systems. Therefore, Lu-UCNPs could be multimodal agents for CT/MR/UCL imaging, and the concept can serve as a platform technology for the next generation of probes for multimodal imaging. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. ADMultiImg: a novel missing modality transfer learning based CAD system for diagnosis of MCI due to AD using incomplete multi-modality imaging data

    NASA Astrophysics Data System (ADS)

    Liu, Xiaonan; Chen, Kewei; Wu, Teresa; Weidman, David; Lure, Fleming; Li, Jing

    2018-02-01

    Alzheimer's Disease (AD) is the most common cause of dementia and currently has no cure. Treatments targeting early stages of AD such as Mild Cognitive Impairment (MCI) may be most effective at decelerating AD, thus attracting increasing attention. However, MCI has substantial heterogeneity in that it can be caused by various underlying conditions, not only AD. To detect MCI due to AD, NIA-AA published updated consensus criteria in 2011, in which the use of multi-modality images was highlighted as one of the most promising methods. It is of great interest to develop a CAD system based on automatic, quantitative analysis of multi-modality images and machine learning algorithms to help physicians more adequately diagnose MCI due to AD. The challenge, however, is that multi-modality images are not universally available for many patients due to cost, access, safety, and lack of consent. We developed a novel Missing Modality Transfer Learning (MMTL) algorithm capable of utilizing whatever imaging modalities are available for an MCI patient to diagnose the patient's likelihood of MCI due to AD. Furthermore, we integrated MMTL with radiomics steps including image processing, feature extraction, and feature screening, and a post-processing step for uncertainty quantification (UQ), and developed a CAD system called "ADMultiImg" to assist clinical diagnosis of MCI due to AD using multi-modality images together with patient demographic and genetic information. Tested on ADNI data, our system can generate a diagnosis with high accuracy even for patients with only partially available image modalities (AUC=0.94), and therefore may have broad clinical utility.

  8. Novel multimodality segmentation using level sets and Jensen-Rényi divergence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Markel, Daniel, E-mail: daniel.markel@mail.mcgill.ca; Zaidi, Habib; Geneva Neuroscience Center, Geneva University, CH-1205 Geneva

    2013-12-15

    Purpose: Positron emission tomography (PET) is playing an increasing role in radiotherapy treatment planning. However, despite progress, robust algorithms for PET and multimodal image segmentation are still lacking, especially if the algorithm were extended to image-guided and adaptive radiotherapy (IGART). This work presents a novel multimodality segmentation algorithm using the Jensen-Rényi divergence (JRD) to evolve the geometric level set contour. The algorithm offers improved noise tolerance which is particularly applicable to segmentation of regions found in PET and cone-beam computed tomography. Methods: A steepest gradient ascent optimization method is used in conjunction with the JRD and a level set active contour to iteratively evolve a contour to partition an image based on statistical divergence of the intensity histograms. The algorithm is evaluated using PET scans of pharyngolaryngeal squamous cell carcinoma with the corresponding histological reference. The multimodality extension of the algorithm is evaluated using 22 PET/CT scans of patients with lung carcinoma and a physical phantom scanned under varying image quality conditions. Results: The average concordance index (CI) of the JRD segmentation of the PET images was 0.56 with an average classification error of 65%. The segmentation of the lung carcinoma images had a maximum diameter relative error of 63%, 19.5%, and 14.8% when using CT, PET, and combined PET/CT images, respectively. The estimated maximal diameters of the gross tumor volume (GTV) showed a high correlation with the macroscopically determined maximal diameters, with an R² value of 0.85 and 0.88 using the PET and PET/CT images, respectively. Results from the physical phantom show that the JRD is more robust to image noise compared to mutual information and region growing. Conclusions: The JRD has shown improved noise tolerance compared to mutual information for the purpose of PET image segmentation. Presented is a flexible framework for multimodal image segmentation that can incorporate a large number of inputs efficiently for IGART.

  9. Novel multimodality segmentation using level sets and Jensen-Rényi divergence.

    PubMed

    Markel, Daniel; Zaidi, Habib; El Naqa, Issam

    2013-12-01

    Positron emission tomography (PET) is playing an increasing role in radiotherapy treatment planning. However, despite progress, robust algorithms for PET and multimodal image segmentation are still lacking, especially if the algorithm were extended to image-guided and adaptive radiotherapy (IGART). This work presents a novel multimodality segmentation algorithm using the Jensen-Rényi divergence (JRD) to evolve the geometric level set contour. The algorithm offers improved noise tolerance which is particularly applicable to segmentation of regions found in PET and cone-beam computed tomography. A steepest gradient ascent optimization method is used in conjunction with the JRD and a level set active contour to iteratively evolve a contour to partition an image based on statistical divergence of the intensity histograms. The algorithm is evaluated using PET scans of pharyngolaryngeal squamous cell carcinoma with the corresponding histological reference. The multimodality extension of the algorithm is evaluated using 22 PET/CT scans of patients with lung carcinoma and a physical phantom scanned under varying image quality conditions. The average concordance index (CI) of the JRD segmentation of the PET images was 0.56 with an average classification error of 65%. The segmentation of the lung carcinoma images had a maximum diameter relative error of 63%, 19.5%, and 14.8% when using CT, PET, and combined PET/CT images, respectively. The estimated maximal diameters of the gross tumor volume (GTV) showed a high correlation with the macroscopically determined maximal diameters, with an R(2) value of 0.85 and 0.88 using the PET and PET/CT images, respectively. Results from the physical phantom show that the JRD is more robust to image noise compared to mutual information and region growing. The JRD has shown improved noise tolerance compared to mutual information for the purpose of PET image segmentation. Presented is a flexible framework for multimodal image segmentation that can incorporate a large number of inputs efficiently for IGART.
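
    The divergence itself is simple to compute; the sketch below evaluates a Jensen-Rényi divergence between two normalized intensity histograms (the level-set evolution it drives is not shown, and the choices of alpha and bin count are illustrative).

    ```python
    # Minimal sketch of the Jensen-Renyi divergence between intensity histograms,
    # the statistic that drives the contour evolution described above.
    import numpy as np

    def renyi_entropy(p, alpha):
        p = p[p > 0]
        return np.log(np.sum(p**alpha)) / (1.0 - alpha)

    def jensen_renyi(histograms, weights=None, alpha=0.5):
        """JRD of a set of histograms (one per row); histograms are re-normalized."""
        P = np.asarray(histograms, dtype=float)
        P = P / P.sum(axis=1, keepdims=True)
        w = np.full(P.shape[0], 1.0 / P.shape[0]) if weights is None else np.asarray(weights)
        mixture = w @ P
        return renyi_entropy(mixture, alpha) - np.sum([wi * renyi_entropy(p, alpha)
                                                       for wi, p in zip(w, P)])

    if __name__ == "__main__":
        inside = np.histogram(np.random.normal(5, 1, 5000), bins=64, range=(0, 10))[0]
        outside = np.histogram(np.random.normal(7, 1, 5000), bins=64, range=(0, 10))[0]
        print(jensen_renyi([inside, outside]))
    ```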

  10. Beyond endoscopic assessment in inflammatory bowel disease: real-time histology of disease activity by non-linear multimodal imaging

    NASA Astrophysics Data System (ADS)

    Chernavskaia, Olga; Heuke, Sandro; Vieth, Michael; Friedrich, Oliver; Schürmann, Sebastian; Atreya, Raja; Stallmach, Andreas; Neurath, Markus F.; Waldner, Maximilian; Petersen, Iver; Schmitt, Michael; Bocklitz, Thomas; Popp, Jürgen

    2016-07-01

    Assessing disease activity is a prerequisite for an adequate treatment of inflammatory bowel diseases (IBD) such as Crohn’s disease and ulcerative colitis. In addition to endoscopic mucosal healing, histologic remission poses a promising end-point of IBD therapy. However, evaluating histological remission harbors the risk for complications due to the acquisition of biopsies and results in a delay of diagnosis because of tissue processing procedures. In this regard, non-linear multimodal imaging techniques might serve as an unparalleled approach that allows the real-time evaluation of microscopic IBD activity in the endoscopy unit. In this study, tissue sections were investigated using the non-linear multimodal microscopy combination of coherent anti-Stokes Raman scattering (CARS), two-photon excited autofluorescence (TPEF) and second-harmonic generation (SHG). After the measurements, a gold-standard assessment of histological indices was carried out based on a conventional H&E stain. Subsequently, various geometry- and intensity-related features were extracted from the multimodal images. An optimized feature set was utilized to predict histological index levels based on a linear classifier. Based on this automated prediction, the time to diagnosis is decreased. Therefore, non-linear multimodal imaging may provide a real-time diagnosis of IBD activity suited to assist clinical decision making within the endoscopy unit.
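
    As a toy counterpart to the classification step described above, the sketch below fits a linear classifier to synthetic geometry- and intensity-like features; it is illustrative only and does not use the study's features, index definitions, or data.

    ```python
    # Illustrative only: predicting a histological index level from image-derived
    # features with a linear classifier; features and labels are synthetic placeholders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 12))            # e.g., geometry + CARS/TPEF/SHG intensity features
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 120) > 0).astype(int)  # synthetic index level
    clf = LogisticRegression(max_iter=1000)
    print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated accuracy of the toy model
    ```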

  11. Multimodality imaging of adult gastric emergencies: A pictorial review

    PubMed Central

    Sunnapwar, Abhijit; Ojili, Vijayanadh; Katre, Rashmi; Shah, Hardik; Nagar, Arpit

    2017-01-01

    Acute gastric emergencies require urgent surgical or nonsurgical intervention because they are associated with high morbidity and mortality. Imaging plays an important role in diagnosis since the clinical symptoms are often nonspecific, and the radiologist may be the first to suggest a diagnosis, as the imaging findings are often characteristic. The purpose of this article is to provide a comprehensive review of multimodality imaging (plain radiography, fluoroscopy, and computed tomography) of various life-threatening gastric emergencies. PMID:28515579

  12. Cross-Modality Image Synthesis via Weakly Coupled and Geometry Co-Regularized Joint Dictionary Learning.

    PubMed

    Huang, Yawen; Shao, Ling; Frangi, Alejandro F

    2018-03-01

    Multi-modality medical imaging is increasingly used for comprehensive assessment of complex diseases, either in diagnostic examinations or as part of medical research trials. Different imaging modalities provide complementary information about living tissues. However, multi-modal examinations are not always possible due to adverse factors, such as patient discomfort, increased cost, prolonged scanning time, and scanner unavailability. Additionally, in large imaging studies, incomplete records are not uncommon owing to image artifacts, data corruption or data loss, which compromise the potential of multi-modal acquisitions. In this paper, we propose a weakly coupled and geometry co-regularized joint dictionary learning method to address the problem of cross-modality synthesis while considering the fact that collecting large amounts of training data is often impractical. Our learning stage requires only a few registered multi-modality image pairs as training data. To employ both paired images and a large set of unpaired data, a cross-modality image matching criterion is proposed. We then propose a unified model that integrates this criterion into joint dictionary learning and the observed common feature space for associating cross-modality data for the purpose of synthesis. Furthermore, two regularization terms are added to construct robust sparse representations. Our experimental results demonstrate superior performance of the proposed model over state-of-the-art methods.

  13. Application of Virtual Navigation with Multimodality Image Fusion in Foramen Ovale Cannulation.

    PubMed

    Qiu, Xixiong; Liu, Weizong; Zhang, Mingdong; Lin, Hengzhou; Zhou, Shoujun; Lei, Yi; Xia, Jun

    2017-11-01

    Idiopathic trigeminal neuralgia (ITN) can be effectively treated with radiofrequency thermocoagulation. However, this procedure requires cannulation of the foramen ovale, and conventional cannulation methods are associated with high failure rates. Multimodality imaging can improve the accuracy of cannulation because each imaging method can compensate for the drawbacks of the other. We aim to determine the feasibility and accuracy of percutaneous foramen ovale cannulation under the guidance of virtual navigation with multimodality image fusion in a self-designed anatomical model of human cadaveric heads. Five cadaveric head specimens were investigated in this study. Spiral computed tomography (CT) scanning clearly displayed the foramen ovale in all five specimens (10 foramina), which could not be visualized using two-dimensional ultrasound alone. The ultrasound and spiral CT images were fused, and percutaneous cannulation of the foramen ovale was performed under virtual navigation. After this, spiral CT scanning was immediately repeated to confirm the accuracy of the cannulation. Postprocedural spiral CT confirmed that the ultrasound and CT images had been successfully fused for all 10 foramina, which were accurately and successfully cannulated. The success rates of both image fusion and cannulation were 100%. Virtual navigation with multimodality image fusion can substantially facilitate foramen ovale cannulation and is worthy of clinical application. © 2017 American Academy of Pain Medicine. All rights reserved.

  14. XML-based scripting of multimodality image presentations in multidisciplinary clinical conferences

    NASA Astrophysics Data System (ADS)

    Ratib, Osman M.; Allada, Vivekanand; Dahlbom, Magdalena; Marcus, Phillip; Fine, Ian; Lapstra, Lorelle

    2002-05-01

    We developed multi-modality image presentation software for the display and analysis of images and related data from different imaging modalities. The software is part of a cardiac image review and presentation platform that supports integration of digital images and data from digital and analog media such as videotapes, analog x-ray films and 35 mm cine films. The software supports standard DICOM image files as well as AVI and PDF data formats. The system is integrated in a digital conferencing room that includes projections of digital and analog sources, remote videoconferencing capabilities, and an electronic whiteboard. The goal of this pilot project is to: 1) develop a new paradigm for image and data management for presentation in a clinically meaningful sequence adapted to case-specific scenarios, 2) design and implement a multi-modality review and conferencing workstation using component technology and customizable 'plug-in' architecture to support complex review and diagnostic tasks applicable to all cardiac imaging modalities and 3) develop an XML-based scripting model of image and data presentation for clinical review and decision making during routine clinical tasks and multidisciplinary clinical conferences.

  15. Dual CARS and SHG image acquisition scheme that combines single central fiber and multimode fiber bundle to collect and differentiate backward and forward generated photons

    PubMed Central

    Weng, Sheng; Chen, Xu; Xu, Xiaoyun; Wong, Kelvin K.; Wong, Stephen T. C.

    2016-01-01

    In coherent anti-Stokes Raman scattering (CARS) and second harmonic generation (SHG) imaging, backward and forward generated photons exhibit different image patterns and thus capture salient intrinsic information of tissues from different perspectives. However, they are often mixed in collection using traditional image acquisition methods and thus are hard to interpret. We developed a multimodal scheme using a single central fiber and multimode fiber bundle to simultaneously collect and differentiate images formed by these two types of photons and evaluated the scheme in an endomicroscopy prototype. The ratio of these photons collected was calculated for the characterization of tissue regions with strong or weak epi-photon generation while different image patterns of these photons at different tissue depths were revealed. This scheme provides a new approach to extract and integrate information captured by backward and forward generated photons in dual CARS/SHG imaging synergistically for biomedical applications. PMID:27375938

  16. Multimodal molecular 3D imaging for the tumoral volumetric distribution assessment of folate-based biosensors.

    PubMed

    Ramírez-Nava, Gerardo J; Santos-Cuevas, Clara L; Chairez, Isaac; Aranda-Lara, Liliana

    2017-12-01

    The aim of this study was to characterize the in vivo volumetric distribution of three folate-based biosensors by different imaging modalities (X-ray, fluorescence, Cerenkov luminescence, and radioisotopic imaging) through the development of a tridimensional image reconstruction algorithm. The preclinical and multimodal Xtreme imaging system, with a Multimodal Animal Rotation System (MARS), was used to acquire bidimensional images, which were processed to obtain the tridimensional reconstruction. Images of mice at different times (biosensor distribution) were simultaneously obtained from the four imaging modalities. Filtered back projection and the inverse Radon transform were used as the main image-processing techniques. The algorithm developed in Matlab was able to calculate the volumetric profiles of 99mTc-Folate-Bombesin (radioisotopic image), 177Lu-Folate-Bombesin (Cerenkov image), and FolateRSense™ 680 (fluorescence image) in tumors and kidneys of mice, and no significant differences were detected in the volumetric quantifications among measurement techniques. The tridimensional image reconstruction algorithm can be easily extrapolated to different types of 2D acquisitions. This flexibility of the algorithm developed in this study is a remarkable advantage in comparison with similar reconstruction methods.
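
    The core reconstruction step named in this record, filtered back projection via the inverse Radon transform, can be sketched in a few lines; the phantom, angle set, and scikit-image routines below are stand-ins for the Matlab implementation described by the authors.

    ```python
    # Sketch of filtered back projection: 2D projections acquired over a rotation
    # are inverted with the inverse Radon transform. Phantom and angles are placeholders.
    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, rescale

    image = rescale(shepp_logan_phantom(), 0.25)            # stand-in for an acquired slice
    theta = np.linspace(0.0, 180.0, 90, endpoint=False)     # acquisition angles (illustrative)
    sinogram = radon(image, theta=theta)                    # forward model: projections
    reconstruction = iradon(sinogram, theta=theta)          # filtered back projection
    print(np.abs(reconstruction - image).mean())            # rough reconstruction error
    ```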

  17. Tumor image signatures and habitats: a processing pipeline of multimodality metabolic and physiological images.

    PubMed

    You, Daekeun; Kim, Michelle M; Aryal, Madhava P; Parmar, Hemant; Piert, Morand; Lawrence, Theodore S; Cao, Yue

    2018-01-01

    To create tumor "habitats" from the "signatures" discovered from multimodality metabolic and physiological images, we developed a framework of a processing pipeline. The processing pipeline consists of six major steps: (1) creating superpixels as a spatial unit in a tumor volume; (2) forming a data matrix containing all multimodality image parameters at the superpixels; (3) forming and clustering a covariance or correlation matrix of the image parameters to discover major image "signatures;" (4) clustering the superpixels and organizing the parameter order of the matrix according to the one found in step 3; (5) creating "habitats" in the image space from the superpixels associated with the "signatures;" and (6) pooling and clustering a matrix consisting of correlation coefficients of each pair of image parameters from all patients to discover subgroup patterns of the tumors. The pipeline was first applied to a dataset of multimodality images in glioblastoma (GBM), which consisted of 10 image parameters. Three major image "signatures" were identified. The three major "habitats" plus their overlaps were created. To test generalizability of the processing pipeline, a second image dataset from GBM, acquired on scanners different from the first, was processed. Also, to demonstrate the clinical association of image-defined "signatures" and "habitats," the patterns of recurrence of the patients were analyzed together with image parameters acquired before chemoradiation therapy. An association of the recurrence patterns with image-defined "signatures" and "habitats" was revealed. These image-defined "signatures" and "habitats" can be used to guide stereotactic tissue biopsy for genetic and mutation status analysis and to analyze for prediction of treatment outcomes, e.g., patterns of failure.
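
    Steps (3) and (4) of the pipeline can be approximated, under stated assumptions, by clustering the parameter-by-parameter correlation matrix and reordering the parameters accordingly, as in the hypothetical sketch below; the data, cluster count, and linkage choice are placeholders rather than the authors' settings.

    ```python
    # Hypothetical sketch of discovering image "signatures" by clustering the
    # correlation matrix of image parameters computed over superpixels.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster, leaves_list

    rng = np.random.default_rng(1)
    D = rng.normal(size=(500, 10))                        # superpixels x image parameters (synthetic)
    C = np.corrcoef(D, rowvar=False)                      # parameter-by-parameter correlation
    dist = 1.0 - np.abs(C)                                # correlation -> dissimilarity
    Z = linkage(dist[np.triu_indices(10, k=1)], method="average")
    signatures = fcluster(Z, t=3, criterion="maxclust")   # group parameters into three "signatures"
    order = leaves_list(Z)                                # parameter order for displaying the matrix
    print(signatures, order)
    ```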

  18. Biological Parametric Mapping: A Statistical Toolbox for Multi-Modality Brain Image Analysis

    PubMed Central

    Casanova, Ramon; Ryali, Srikanth; Baer, Aaron; Laurienti, Paul J.; Burdette, Jonathan H.; Hayasaka, Satoru; Flowers, Lynn; Wood, Frank; Maldjian, Joseph A.

    2006-01-01

    In recent years, multiple brain MR imaging modalities have emerged; however, analysis methodologies have mainly remained modality specific. In addition, when comparing across imaging modalities, most researchers have been forced to rely on simple region-of-interest type analyses, which do not allow the voxel-by-voxel comparisons necessary to answer more sophisticated neuroscience questions. To overcome these limitations, we developed a toolbox for multimodal image analysis called biological parametric mapping (BPM), based on a voxel-wise use of the general linear model. The BPM toolbox incorporates information obtained from other modalities as regressors in a voxel-wise analysis, thereby permitting investigation of more sophisticated hypotheses. The BPM toolbox has been developed in MATLAB with a user-friendly interface for performing analyses, including voxel-wise multimodal correlation, ANCOVA, and multiple regression. It has a high degree of integration with the SPM (statistical parametric mapping) software, relying on it for visualization and statistical inference. Furthermore, statistical inference for a correlation field, rather than a widely used T-field, has been implemented in the correlation analysis for more accurate results. An example with in vivo data is presented, demonstrating the potential of the BPM methodology as a tool for multimodal image analysis. PMID:17070709
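
    A stripped-down version of the BPM idea is a voxel-wise general linear model in which a second modality enters as a regressor; the sketch below (plain NumPy, synthetic data) conveys that idea only and is not the MATLAB/SPM toolbox itself.

    ```python
    # Minimal sketch of the BPM idea: at each voxel, fit a GLM in which a second
    # modality is a regressor alongside a group covariate. Data are synthetic.
    import numpy as np

    def voxelwise_glm(mod_a, mod_b, covariate):
        """mod_a, mod_b: (n_subjects, n_voxels); covariate: (n_subjects,). Returns betas per voxel."""
        n_sub, n_vox = mod_a.shape
        betas = np.zeros((n_vox, 3))
        for v in range(n_vox):
            X = np.column_stack([np.ones(n_sub), mod_b[:, v], covariate])
            betas[v], *_ = np.linalg.lstsq(X, mod_a[:, v], rcond=None)
        return betas  # intercept, cross-modality slope, covariate effect at each voxel

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        fmri = rng.normal(size=(20, 1000))        # modality A (e.g., functional signal)
        gm_density = rng.normal(size=(20, 1000))  # modality B used as a regressor
        group = rng.integers(0, 2, 20)            # simple group covariate
        print(voxelwise_glm(fmri, gm_density, group).shape)
    ```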

  19. Brain development during the preschool years

    PubMed Central

    Brown, Timothy T.; Jernigan, Terry L.

    2012-01-01

    The preschool years represent a time of expansive psychological growth, with the initial expression of many psychological abilities that will continue to be refined into young adulthood. Likewise, brain development during this age is characterized by its “blossoming” nature, showing some of its most dynamic and elaborative anatomical and physiological changes. In this article, we review human brain development during the preschool years, sampling scientific evidence from a variety of sources. First, we cover neurobiological foundations of early postnatal development, explaining some of the primary mechanisms seen at a larger scale within neuroimaging studies. Next, we review evidence from both structural and functional imaging studies, which now accounts for a large portion of our current understanding of typical brain development. Within anatomical imaging, we focus on studies of developing brain morphology and tissue properties, including diffusivity of white matter fiber tracts. We also present new data on changes during the preschool years in cortical area, thickness, and volume. Physiological brain development is then reviewed, touching on influential results from several different functional imaging and recording modalities in the preschool and early school-age years, including positron emission tomography (PET), electroencephalography (EEG) and event-related potentials (ERP), functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), and near-infrared spectroscopy (NIRS). Here, more space is devoted to explaining some of the key methodological factors that are required for interpretation. We end with a section on multimodal and multidimensional imaging approaches, which we believe will be critical for increasing our understanding of brain development and its relationship to cognitive and behavioral growth in the preschool years and beyond. PMID:23007644

  20. Recent Advances in Molecular, Multimodal and Theranostic Ultrasound Imaging

    PubMed Central

    Kiessling, Fabian; Fokong, Stanley; Bzyl, Jessica; Lederle, Wiltrud; Palmowski, Moritz; Lammers, Twan

    2014-01-01

    Ultrasound (US) imaging is an exquisite tool for the non-invasive and real-time diagnosis of many different diseases. In this context, US contrast agents can improve lesion delineation, characterization and therapy response evaluation. US contrast agents are usually micrometer-sized gas bubbles, stabilized with soft or hard shells. By conjugating antibodies to the microbubble (MB) surface, and by incorporating diagnostic agents, drugs or nucleic acids into or onto the MB shell, molecular, multimodal and theranostic MB can be generated. We here summarize recent advances in molecular, multimodal and theranostic US imaging, and introduce concepts how such advanced MB can be generated, applied and imaged. Examples are given for their use to image and treat oncological, cardiovascular and neurological diseases. Furthermore, we discuss for which therapeutic entities incorporation into (or conjugation to) MB is meaningful, and how US-mediated MB destruction can increase their extravasation, penetration, internalization and efficacy. PMID:24316070

  1. The value of multimodality imaging in the investigation of a PSA recurrence after radical prostatectomy in the Irish hospital setting.

    PubMed

    McLoughlin, L C; Inder, S; Moran, D; O'Rourke, C; Manecksha, R P; Lynch, T H

    2018-02-01

    The diagnostic evaluation of a PSA recurrence after RP in the Irish hospital setting involves multimodality imaging with MRI, CT, and bone scanning, despite the low diagnostic yield from imaging at low PSA levels. We aim to investigate the value of multimodality imaging in prostate cancer patients after RP with a PSA recurrence. Forty-eight patients with a PSA recurrence after RP who underwent multimodality imaging were evaluated. Demographic data, postoperative PSA levels, and imaging studies performed at those levels were evaluated. Eight (21%) MRIs, 6 (33%) CTs, and 4 (9%) bone scans had PCa-specific findings. Three (12%) patients had a positive MRI with a PSA <1.0 ng/ml, while 5 (56%) were positive at PSA ≥1.1 ng/ml (p = 0.05). No patients had a positive CT TAP at a PSA level <1.0 ng/ml, while 5 (56%) were positive at levels ≥1.1 ng/ml (p = 0.03). No patients had a positive bone scan at PSA levels <1.0 ng/ml, while 4 (27%) were positive at levels ≥1.1 ng/ml (p = 0.01). The diagnostic yield from multimodality imaging, and isotope bone scanning in particular, at PSA levels <1.0 ng/ml is low. There is a statistically significant increase in the frequency of positive findings on CT and bone scanning at PSA levels ≥1.1 ng/ml. MRI alone is of investigative value at PSA <1.0 ng/ml. The indication for CT, MRI, or isotope bone scanning should be carefully correlated with the clinical question and how it will affect further management.
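
    The comparison of positivity rates below and above the 1.0/1.1 ng/ml threshold lends itself to a simple 2 × 2 test. The abstract does not state which statistical test was used; the sketch below applies Fisher's exact test to an illustrative table whose group sizes are assumptions, not figures taken from the study.

    ```python
    # Hedged illustration: comparing imaging positivity below vs. at/above a PSA
    # threshold with Fisher's exact test. The test choice and the group denominators
    # are assumptions for illustration, not details reported in the study.
    from scipy.stats import fisher_exact

    # rows: PSA < 1.0 ng/ml, PSA >= 1.1 ng/ml; columns: positive, negative scans
    table = [[3, 22],   # e.g. 3 positive MRIs out of an assumed 25 low-PSA patients
             [5, 4]]    # e.g. 5 positive MRIs out of an assumed 9 higher-PSA patients

    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
    ```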

  2. Dense depth maps from correspondences derived from perceived motion

    NASA Astrophysics Data System (ADS)

    Kirby, Richard; Whitaker, Ross

    2017-01-01

    Many computer vision applications require finding corresponding points between images and using the corresponding points to estimate disparity. Today's correspondence finding algorithms primarily use image features or pixel intensities common between image pairs. Some 3-D computer vision applications, however, do not produce the desired results using correspondences derived from image features or pixel intensities. Two examples are the multimodal camera rig and the center region of a coaxial camera rig. We present an image correspondence finding technique that aligns pairs of image sequences using optical flow fields. The optical flow fields provide information about the structure and motion of the scene, which are not available in still images but can be used in image alignment. We apply the technique to a dual focal length stereo camera rig consisting of a visible light-infrared camera pair and to a coaxial camera rig. We test our method on real image sequences and compare our results with the state-of-the-art multimodal and structure from motion (SfM) algorithms. Our method produces more accurate depth and scene velocity reconstruction estimates than the state-of-the-art multimodal and SfM algorithms.
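
    The key input in this record is a dense optical flow field rather than still-image features. As a minimal, hedged sketch (not the authors' dual-camera alignment pipeline), dense flow between two consecutive frames can be computed with OpenCV's Farneback method; the file names below are placeholders.

    ```python
    # Minimal sketch: dense optical flow between two consecutive frames with OpenCV.
    # Illustrates only the flow-field input described in the record; it is not the
    # authors' multimodal alignment method. "frame0.png"/"frame1.png" are placeholders.
    import cv2
    import numpy as np

    prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

    flow = cv2.calcOpticalFlowFarneback(
        prev, curr, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    print("mean flow magnitude (pixels):", float(np.mean(magnitude)))
    ```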

  3. Fast and robust multimodal image registration using a local derivative pattern.

    PubMed

    Jiang, Dongsheng; Shi, Yonghong; Chen, Xinrong; Wang, Manning; Song, Zhijian

    2017-02-01

    Deformable multimodal image registration, which can benefit radiotherapy and image-guided surgery by providing complementary information, remains a challenging task in the medical image analysis field due to the difficulty of defining a proper similarity measure. This article presents a novel, robust and fast binary descriptor, the discriminative local derivative pattern (dLDP), which is able to encode images of different modalities into similar image representations. dLDP calculates a binary string for each voxel according to the pattern of intensity derivatives in its neighborhood. The descriptor similarity is evaluated using the Hamming distance, which can be efficiently computed, instead of conventional L1 or L2 norms. For the first time, we validated the effectiveness and feasibility of the local derivative pattern for multimodal deformable image registration with several multi-modal registration applications. dLDP was compared with three state-of-the-art methods in artificial image and clinical settings. In the experiments of deformable registration between different magnetic resonance imaging (MRI) modalities from BrainWeb, between computed tomography and MRI images from patient data, and between MRI and ultrasound images from the BITE database, we show that our method outperforms localized mutual information and entropy images in terms of both accuracy and time efficiency. We have further validated dLDP for the deformable registration of preoperative MRI and three-dimensional intraoperative ultrasound images. Our results indicate that dLDP reduces the average mean target registration error from 4.12 mm to 2.30 mm. This accuracy is statistically equivalent to the accuracy of the state-of-the-art methods in the study; however, in terms of computational complexity, our method significantly outperforms other methods and is even comparable to that of the sum of absolute differences. The results reveal that dLDP can achieve superior performance regarding both accuracy and time efficiency in general multimodal image registration. In addition, dLDP also shows potential for clinical ultrasound-guided intervention. © 2016 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
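
    The central idea is a per-voxel binary code built from the signs of local intensity derivatives, compared across modalities with the Hamming distance. The 2-D sketch below is a simplified stand-in for that idea (it is not the published dLDP definition): it thresholds differences toward the 8 neighbours into a bit pattern and counts differing bits.

    ```python
    # Simplified 2-D illustration of a binary local-derivative-style pattern and the
    # Hamming distance. A stand-in for the idea in the record, not the published dLDP.
    import numpy as np

    OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # 8-neighbourhood

    def derivative_pattern(img, y, x):
        """8-bit code: bit k is set if the neighbour in direction k is brighter."""
        code = 0
        for k, (dy, dx) in enumerate(OFFSETS):
            if img[y + dy, x + dx] > img[y, x]:
                code |= 1 << k
        return code

    def hamming(a, b):
        """Number of differing bits between two integer codes."""
        return bin(a ^ b).count("1")

    rng = np.random.default_rng(0)
    img_a = rng.random((5, 5))
    img_b = img_a + 0.01 * rng.random((5, 5))   # slightly perturbed second "modality"
    print(hamming(derivative_pattern(img_a, 2, 2), derivative_pattern(img_b, 2, 2)))
    ```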

  4. Gadolinium-Conjugated Gold Nanoshells for Multimodal Diagnostic Imaging and Photothermal Cancer Therapy

    PubMed Central

    Coughlin, Andrew J.; Ananta, Jeyarama S.; Deng, Nanfu; Larina, Irina V.; Decuzzi, Paolo

    2014-01-01

    Multimodal imaging offers the potential to improve diagnosis and enhance the specificity of photothermal cancer therapy. Toward this goal, we have engineered gadolinium-conjugated gold nanoshells and demonstrated that they enhance contrast for magnetic resonance imaging, X-Ray, optical coherence tomography, reflectance confocal microscopy, and two-photon luminescence. Additionally, these particles effectively convert near-infrared light to heat, which can be used to ablate cancer cells. Ultimately, these studies demonstrate the potential of gadolinium-nanoshells for image-guided photothermal ablation. PMID:24115690

  5. Assessing Anxiety in Youth with the Multidimensional Anxiety Scale for Children (MASC)

    PubMed Central

    Wei, Chiaying; Hoff, Alexandra; Villabø, Marianne A.; Peterman, Jeremy; Kendall, Philip C.; Piacentini, John; McCracken, James; Walkup, John T.; Albano, Anne Marie; Rynn, Moira; Sherrill, Joel; Sakolsky, Dara; Birmaher, Boris; Ginsburg, Golda; Keaton, Courtney; Gosch, Elizabeth; Compton, Scott N.; March, John

    2013-01-01

    The present study examined the psychometric properties, including discriminant validity and clinical utility, of the youth self-report and parent-report forms of the Multidimensional Anxiety Scale for Children (MASC) among youth with anxiety disorders. The sample included parents and youth (N = 488, 49.6% male) ages 7–17 who participated in the Child/Adolescent Anxiety Multimodal Study (CAMS). Although the typical low agreement between parent and youth self-reports was found, the MASC evidenced good internal reliability across MASC subscales and informants. The main MASC subscales (i.e., Physical Symptoms, Harm Avoidance, Social Anxiety, and Separation/Panic) were examined. The Social Anxiety and Separation/Panic subscales were found to be significantly predictive of the presence and severity of social phobia and separation anxiety disorder, respectively. Using multiple informants improved the accuracy of prediction. The MASC subscales demonstrated good psychometric properties and clinical utility in identifying youth with anxiety disorders. PMID:23845036

  6. Intraoperative imaging-guided cancer surgery: from current fluorescence molecular imaging methods to future multi-modality imaging technology.

    PubMed

    Chi, Chongwei; Du, Yang; Ye, Jinzuo; Kou, Deqiang; Qiu, Jingdan; Wang, Jiandong; Tian, Jie; Chen, Xiaoyuan

    2014-01-01

    Cancer is a major threat to human health. Diagnosis and treatment using precision medicine is expected to be an effective method for preventing the initiation and progression of cancer. Although anatomical and functional imaging techniques such as radiography, computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET) have played an important role for accurate preoperative diagnostics, for the most part these techniques cannot be applied intraoperatively. Optical molecular imaging is a promising technique that provides a high degree of sensitivity and specificity in tumor margin detection. Furthermore, existing clinical applications have proven that optical molecular imaging is a powerful intraoperative tool for guiding surgeons performing precision procedures, thus enabling radical resection and improved survival rates. However, detection depth limitation exists in optical molecular imaging methods and further breakthroughs from optical to multi-modality intraoperative imaging methods are needed to develop more extensive and comprehensive intraoperative applications. Here, we review the current intraoperative optical molecular imaging technologies, focusing on contrast agents and surgical navigation systems, and then discuss the future prospects of multi-modality imaging technology for intraoperative imaging-guided cancer surgery.

  7. Intraoperative Imaging-Guided Cancer Surgery: From Current Fluorescence Molecular Imaging Methods to Future Multi-Modality Imaging Technology

    PubMed Central

    Chi, Chongwei; Du, Yang; Ye, Jinzuo; Kou, Deqiang; Qiu, Jingdan; Wang, Jiandong; Tian, Jie; Chen, Xiaoyuan

    2014-01-01

    Cancer is a major threat to human health. Diagnosis and treatment using precision medicine is expected to be an effective method for preventing the initiation and progression of cancer. Although anatomical and functional imaging techniques such as radiography, computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET) have played an important role for accurate preoperative diagnostics, for the most part these techniques cannot be applied intraoperatively. Optical molecular imaging is a promising technique that provides a high degree of sensitivity and specificity in tumor margin detection. Furthermore, existing clinical applications have proven that optical molecular imaging is a powerful intraoperative tool for guiding surgeons performing precision procedures, thus enabling radical resection and improved survival rates. However, detection depth limitation exists in optical molecular imaging methods and further breakthroughs from optical to multi-modality intraoperative imaging methods are needed to develop more extensive and comprehensive intraoperative applications. Here, we review the current intraoperative optical molecular imaging technologies, focusing on contrast agents and surgical navigation systems, and then discuss the future prospects of multi-modality imaging technology for intraoperative imaging-guided cancer surgery. PMID:25250092

  8. Introduction of a standardized multimodality image protocol for navigation-guided surgery of suspected low-grade gliomas.

    PubMed

    Mert, Aygül; Kiesel, Barbara; Wöhrer, Adelheid; Martínez-Moreno, Mauricio; Minchev, Georgi; Furtner, Julia; Knosp, Engelbert; Wolfsberger, Stefan; Widhalm, Georg

    2015-01-01

    OBJECT Surgery of suspected low-grade gliomas (LGGs) poses a special challenge for neurosurgeons due to their diffusely infiltrative growth and histopathological heterogeneity. Consequently, neuronavigation with multimodality imaging data, such as structural and metabolic data, fiber tracking, and 3D brain visualization, has been proposed to optimize surgery. However, currently no standardized protocol has been established for multimodality imaging data in modern glioma surgery. The aim of this study was therefore to define a specific protocol for multimodality imaging and navigation for suspected LGG. METHODS Fifty-one patients who underwent surgery for a diffusely infiltrating glioma with nonsignificant contrast enhancement on MRI and available multimodality imaging data were included. In the first 40 patients with glioma, the authors retrospectively reviewed the imaging data, including structural MRI (contrast-enhanced T1-weighted, T2-weighted, and FLAIR sequences), metabolic images derived from PET, or MR spectroscopy chemical shift imaging, fiber tracking, and 3D brain surface/vessel visualization, to define standardized image settings and specific indications for each imaging modality. The feasibility and surgical relevance of this new protocol was subsequently prospectively investigated during surgery with the assistance of an advanced electromagnetic navigation system in the remaining 11 patients. Furthermore, specific surgical outcome parameters, including the extent of resection, histological analysis of the metabolic hotspot, presence of a new postoperative neurological deficit, and intraoperative accuracy of 3D brain visualization models, were assessed in each of these patients. RESULTS After reviewing these first 40 cases of glioma, the authors defined a specific protocol with standardized image settings and specific indications that allows for optimal and simultaneous visualization of structural and metabolic data, fiber tracking, and 3D brain visualization. This new protocol was feasible and was estimated to be surgically relevant during navigation-guided surgery in all 11 patients. According to the authors' predefined surgical outcome parameters, they observed a complete resection in all resectable gliomas (n = 5) by using contour visualization with T2-weighted or FLAIR images. Additionally, tumor tissue derived from the metabolic hotspot showed the presence of malignant tissue in all WHO Grade III or IV gliomas (n = 5). Moreover, no permanent postoperative neurological deficits occurred in any of these patients, and fiber tracking and/or intraoperative monitoring were applied during surgery in the vast majority of cases (n = 10). Furthermore, the authors found a significant intraoperative topographical correlation of 3D brain surface and vessel models with gyral anatomy and superficial vessels. Finally, real-time navigation with multimodality imaging data using the advanced electromagnetic navigation system was found to be useful for precise guidance to surgical targets, such as the tumor margin or the metabolic hotspot. CONCLUSIONS In this study, the authors defined a specific protocol for multimodality imaging data in suspected LGGs, and they propose the application of this new protocol for advanced navigation-guided procedures optimally in conjunction with continuous electromagnetic instrument tracking to optimize glioma surgery.

  9. Using complex networks towards information retrieval and diagnostics in multidimensional imaging

    NASA Astrophysics Data System (ADS)

    Banerjee, Soumya Jyoti; Azharuddin, Mohammad; Sen, Debanjan; Savale, Smruti; Datta, Himadri; Dasgupta, Anjan Kr; Roy, Soumen

    2015-12-01

    We present a fresh and broad yet simple approach towards information retrieval in general and diagnostics in particular by applying the theory of complex networks on multidimensional, dynamic images. We demonstrate a successful use of our method with the time series generated from high content thermal imaging videos of patients suffering from the aqueous deficient dry eye (ADDE) disease. Remarkably, network analyses of thermal imaging time series of contact lens users and patients upon whom Laser-Assisted in situ Keratomileusis (Lasik) surgery has been conducted, exhibit pronounced similarity with results obtained from ADDE patients. We also propose a general framework for the transformation of multidimensional images to networks for futuristic biometry. Our approach is general and scalable to other fluctuation-based devices where network parameters derived from fluctuations, act as effective discriminators and diagnostic markers.

  10. Using complex networks towards information retrieval and diagnostics in multidimensional imaging.

    PubMed

    Banerjee, Soumya Jyoti; Azharuddin, Mohammad; Sen, Debanjan; Savale, Smruti; Datta, Himadri; Dasgupta, Anjan Kr; Roy, Soumen

    2015-12-02

    We present a fresh and broad yet simple approach towards information retrieval in general and diagnostics in particular by applying the theory of complex networks on multidimensional, dynamic images. We demonstrate a successful use of our method with the time series generated from high content thermal imaging videos of patients suffering from the aqueous deficient dry eye (ADDE) disease. Remarkably, network analyses of thermal imaging time series of contact lens users and patients upon whom Laser-Assisted in situ Keratomileusis (Lasik) surgery has been conducted, exhibit pronounced similarity with results obtained from ADDE patients. We also propose a general framework for the transformation of multidimensional images to networks for futuristic biometry. Our approach is general and scalable to other fluctuation-based devices where network parameters derived from fluctuations, act as effective discriminators and diagnostic markers.

  11. Using complex networks towards information retrieval and diagnostics in multidimensional imaging

    PubMed Central

    Banerjee, Soumya Jyoti; Azharuddin, Mohammad; Sen, Debanjan; Savale, Smruti; Datta, Himadri; Dasgupta, Anjan Kr; Roy, Soumen

    2015-01-01

    We present a fresh and broad yet simple approach towards information retrieval in general and diagnostics in particular by applying the theory of complex networks on multidimensional, dynamic images. We demonstrate a successful use of our method with the time series generated from high content thermal imaging videos of patients suffering from the aqueous deficient dry eye (ADDE) disease. Remarkably, network analyses of thermal imaging time series of contact lens users and patients upon whom Laser-Assisted in situ Keratomileusis (Lasik) surgery has been conducted, exhibit pronounced similarity with results obtained from ADDE patients. We also propose a general framework for the transformation of multidimensional images to networks for futuristic biometry. Our approach is general and scalable to other fluctuation-based devices where network parameters derived from fluctuations, act as effective discriminators and diagnostic markers. PMID:26626047
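
    The three records above turn fluctuation time series into networks and use network parameters as discriminators, but the abstracts do not specify the construction. One common, simple mapping from a 1-D series to a graph is the natural visibility graph, sketched below purely as an illustration of the time-series-to-network step.

    ```python
    # Hedged sketch: natural visibility graph from a 1-D time series, followed by a
    # simple network parameter (mean degree). A generic construction, not necessarily
    # the one used in the records above.
    import numpy as np

    def visibility_graph(y):
        """Edges (i, j) of the natural visibility graph of series y."""
        n = len(y)
        edges = set()
        for i in range(n):
            for j in range(i + 1, n):
                if all(y[k] < y[j] + (y[i] - y[j]) * (j - k) / (j - i)
                       for k in range(i + 1, j)):
                    edges.add((i, j))
        return edges

    series = np.sin(np.linspace(0, 6 * np.pi, 60)) + 0.1 * np.random.default_rng(1).random(60)
    degree = np.zeros(len(series), dtype=int)
    for i, j in visibility_graph(series):
        degree[i] += 1
        degree[j] += 1
    print("mean degree:", degree.mean())   # one candidate discriminator/marker
    ```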

  12. [New aspects of complex chronic tinnitus. I: Assessment of a multi-modality behavioral medicine treatment concept].

    PubMed

    Goebel, G; Keeser, W; Fichter, M; Rief, W

    1991-01-01

    "Complex tinnitus" is a diagnostic term denoting a disturbance pattern where the patient hears highly annoying and painful noises or sounds that do not originate from a recognisable external source and can be described only by the patient himself. It seems that the suffering mainly depends upon the extent to which the tinnitus is experienced as a phenomenon that is beyond control. Part I reports on an examination of the treatment success achieved with 28 consecutive patients who had been treated according to an integrative multimodal behavioural medicine concept. This resulted--despite continual loudness--in a decrease in the degree of unpleasantness of the tinnitus, by 17% (p less than 0.01) with corresponding normalisation of decisive symptom factors in Hopkins Symptom-Check-List (SCL-90-R) and Freiburg Personality-Inventary (FPI-R). On the whole, 19 out of the total of 28 patients showed essential to marked improvement of the disturbance pattern. Part II presents a multidimensional tinnitus model and the essential psychotherapeutic focal points of a multimodal psychotherapy concept in complex chronic tinnitus, as well as the parallel phenomena in the chronic pain syndrome.

  13. Nanoengineered multimodal contrast agent for medical image guidance

    NASA Astrophysics Data System (ADS)

    Perkins, Gregory J.; Zheng, Jinzi; Brock, Kristy; Allen, Christine; Jaffray, David A.

    2005-04-01

    Multimodality imaging has gained momentum in radiation therapy planning and image-guided treatment delivery. Specifically, computed tomography (CT) and magnetic resonance (MR) imaging are two complementary imaging modalities often utilized in radiation therapy for visualization of anatomical structures for tumour delineation and accurate registration of image data sets for volumetric dose calculation. The development of a multimodal contrast agent for CT and MR with prolonged in vivo residence time would provide long-lasting spatial and temporal correspondence of the anatomical features of interest, and therefore facilitate multimodal image registration, treatment planning and delivery. The multimodal contrast agent investigated consists of nano-sized stealth liposomes encapsulating conventional iodine and gadolinium-based contrast agents. The average loading achieved was 33.5 ± 7.1 mg/mL of iodine for iohexol and 9.8 ± 2.0 mg/mL of gadolinium for gadoteridol. The average liposome diameter was 46.2 ± 13.5 nm. The system was found to be stable in physiological buffer over a 15-day period, releasing 11.9 ± 1.1% and 11.2 ± 0.9% of the total amounts of iohexol and gadoteridol loaded, respectively. At 200 minutes following in vivo administration in a New Zealand white rabbit, the contrast agent maintained a relative contrast enhancement of 81.4 ± 13.05 differential Hounsfield units (ΔHU) in CT (40% decrease from the peak signal value achieved 3 minutes post-injection) and 731.9 ± 144.2 differential signal intensity (ΔSI) in MR (46% decrease from the peak signal value achieved 3 minutes post-injection) in the blood (aorta); a relative contrast enhancement of 38.0 ± 5.1 ΔHU (42% decrease from the peak signal value achieved 3 minutes post-injection) and 178.6 ± 41.4 ΔSI (62% decrease from the peak signal value achieved 3 minutes post-injection) in the liver (parenchyma); and a relative contrast enhancement of 9.1 ± 1.7 ΔHU (94% decrease from the peak signal value achieved 3 minutes post-injection) and 461.7 ± 78.1 ΔSI (60% decrease from the peak signal value achieved 5 minutes post-injection) in the kidney (cortex). This multimodal contrast agent, with prolonged in vivo residence time and imaging efficacy, has the potential to bring about improvements in the fields of medical imaging and radiation therapy, particularly for image registration and guidance.

  14. Multimodal Image-Based Virtual Reality Presurgical Simulation and Evaluation for Trigeminal Neuralgia and Hemifacial Spasm.

    PubMed

    Yao, Shujing; Zhang, Jiashu; Zhao, Yining; Hou, Yuanzheng; Xu, Xinghua; Zhang, Zhizhong; Kikinis, Ron; Chen, Xiaolei

    2018-05-01

    To assess the feasibility and predictive value of multimodal image-based virtual reality in detecting and characterizing neurovascular conflict (NVC), particularly identification of the offending vessel and the degree of compression exerted on the nerve root, in patients who underwent microvascular decompression for nonlesional trigeminal neuralgia and hemifacial spasm (HFS). This prospective study includes 42 consecutive patients who underwent microvascular decompression for classic primary trigeminal neuralgia or HFS. All patients underwent preoperative 1.5-T magnetic resonance imaging (MRI) with T2-weighted three-dimensional (3D) sampling perfection with application-optimized contrasts by using different flip angle evolutions, 3D time-of-flight magnetic resonance angiography, and 3D T1-weighted gadolinium-enhanced sequences in combination, and 2 patients additionally underwent experimental preoperative 7.0-T MRI scans with the same imaging protocol. Multimodal MRIs were then coregistered with the open-source software 3D Slicer, followed by 3D image reconstruction to generate virtual reality (VR) images for detection of possible NVC in the cerebellopontine angle. Evaluations were performed by 2 reviewers and compared with the intraoperative findings. For detection of NVC, multimodal image-based VR sensitivity was 97.6% (40/41) and specificity was 100% (1/1). Compared with the intraoperative findings, the κ coefficients for predicting the offending vessel and the degree of compression were >0.75 (P < 0.001). The 7.0-T scans provided a clearer view of vessels in the cerebellopontine angle, which may have a significant impact on the detection of small-caliber offending vessels with relatively slow flow in cases of HFS. Multimodal image-based VR using 3D sampling perfection with application-optimized contrasts by using different flip angle evolutions in combination with 3D time-of-flight magnetic resonance angiography sequences proved to be reliable in detecting NVC and in predicting the degree of root compression. The VR image-based simulation correlated well with the real surgical view. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. Multimodal Research: Addressing the Complexity of Multimodal Environments and the Challenges for CALL

    ERIC Educational Resources Information Center

    Tan, Sabine; O'Halloran, Kay L.; Wignell, Peter

    2016-01-01

    Multimodality, the study of the interaction of language with other semiotic resources such as images and sound resources, has significant implications for computer assisted language learning (CALL) with regards to understanding the impact of digital environments on language teaching and learning. In this paper, we explore recent manifestations of…

  16. Feature-based Alignment of Volumetric Multi-modal Images

    PubMed Central

    Toews, Matthew; Zöllei, Lilla; Wells, William M.

    2014-01-01

    This paper proposes a method for aligning image volumes acquired from different imaging modalities (e.g. MR, CT) based on 3D scale-invariant image features. A novel method for encoding invariant feature geometry and appearance is developed, based on the assumption of locally linear intensity relationships, providing a solution to poor repeatability of feature detection in different image modalities. The encoding method is incorporated into a probabilistic feature-based model for multi-modal image alignment. The model parameters are estimated via a group-wise alignment algorithm that iteratively alternates between estimating a feature-based model from feature data and realigning feature data to the model, converging to a stable alignment solution with few pre-processing or pre-alignment requirements. The resulting model can be used to align multi-modal image data with the benefits of invariant feature correspondence: globally optimal solutions, high efficiency and low memory usage. The method is tested on the difficult RIRE data set of CT, T1, T2, PD and MP-RAGE brain images of subjects exhibiting significant inter-subject variability due to pathology. PMID:24683955

  17. Multimodal Spectral Imaging of Cells Using a Transmission Diffraction Grating on a Light Microscope

    PubMed Central

    Isailovic, Dragan; Xu, Yang; Copus, Tyler; Saraswat, Suraj; Nauli, Surya M.

    2011-01-01

    A multimodal methodology for spectral imaging of cells is presented. The spectral imaging setup uses a transmission diffraction grating on a light microscope to concurrently record spectral images of cells and cellular organelles by fluorescence, darkfield, brightfield, and differential interference contrast (DIC) spectral microscopy. Initially, the setup was applied for fluorescence spectral imaging of yeast and mammalian cells labeled with multiple fluorophores. Fluorescence signals originating from fluorescently labeled biomolecules in cells were collected through triple or single filter cubes, separated by the grating, and imaged using a charge-coupled device (CCD) camera. Cellular components such as nuclei, cytoskeleton, and mitochondria were spatially separated by the fluorescence spectra of the fluorophores present in them, providing detailed multi-colored spectral images of cells. Additionally, the grating-based spectral microscope enabled measurement of scattering and absorption spectra of unlabeled cells and stained tissue sections using darkfield and brightfield or DIC spectral microscopy, respectively. The presented spectral imaging methodology provides a readily affordable approach for multimodal spectral characterization of biological cells and other specimens. PMID:21639978

  18. Rapid Screening of Cancer Margins in Tissue with Multimodal Confocal Microscopy

    PubMed Central

    Gareau, Daniel S.; Jeon, Hana; Nehal, Kishwer S.; Rajadhyaksha, Milind

    2012-01-01

    Background Complete and accurate excision of cancer is guided by the examination of histopathology. However, preparation of histopathology is labor intensive and slow, leading to insufficient sampling of tissue and incomplete and/or inaccurate excision of margins. We demonstrate the potential utility of multimodal confocal mosaicing microscopy for rapid screening of cancer margins, directly in fresh surgical excisions, without the need for conventional embedding, sectioning or processing. Materials/Methods A multimodal confocal mosaicing microscope was developed to image basal cell carcinoma margins in surgical skin excisions, with resolution that shows nuclear detail. Multimodal contrast is provided by fluorescence for imaging nuclei and reflectance for cellular cytoplasm and dermal collagen. Thirty-five excisions of basal cell carcinomas from Mohs surgery were imaged, and the mosaics analyzed by comparison to the corresponding frozen pathology. Results Confocal mosaics are produced in about 9 minutes, displaying tissue in fields-of-view of 12 mm with 2X magnification. A digital staining algorithm transforms black and white contrast to purple and pink, which simulates the appearance of standard histopathology. Mosaicing enables rapid digital screening, which mimics the examination of histopathology. Conclusions Multimodal confocal mosaicing microscopy offers a technology platform to potentially enable real-time pathology at the bedside. The imaging may serve as an adjunct to conventional histopathology, to expedite screening of margins and guide surgery toward more complete and accurate excision of cancer. PMID:22721570
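
    The "digital staining" step maps the two grayscale confocal channels (fluorescence for nuclei, reflectance for cytoplasm and collagen) into purple and pink tones that mimic H&E histopathology. The sketch below is only a generic false-colour blend in that spirit; the hue values and the mapping are illustrative assumptions, not the authors' algorithm.

    ```python
    # Illustrative "digital staining" sketch: tint a white background toward purple
    # where the fluorescence (nuclear) channel is strong and toward pink where the
    # reflectance channel is strong. Hues and mapping are assumptions, not the
    # published algorithm.
    import numpy as np

    def digital_stain(fluor, refl):
        """fluor, refl: 2-D arrays scaled to [0, 1]; returns an RGB image."""
        purple = np.array([0.55, 0.27, 0.68])   # nuclei-like hue (illustrative)
        pink   = np.array([0.96, 0.65, 0.78])   # stroma-like hue (illustrative)
        rgb = (1 - fluor[..., None] * (1 - purple)) * (1 - refl[..., None] * (1 - pink))
        return np.clip(rgb, 0.0, 1.0)

    rng = np.random.default_rng(2)
    stained = digital_stain(rng.random((64, 64)), rng.random((64, 64)))
    print(stained.shape)   # (64, 64, 3)
    ```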

  19. Development of a multi-scale and multi-modality imaging system to characterize tumours and their microenvironment in vivo

    NASA Astrophysics Data System (ADS)

    Rouffiac, Valérie; Ser-Leroux, Karine; Dugon, Emilie; Leguerney, Ingrid; Polrot, Mélanie; Robin, Sandra; Salomé-Desnoulez, Sophie; Ginefri, Jean-Christophe; Sebrié, Catherine; Laplace-Builhé, Corinne

    2015-03-01

    In vivo high-resolution imaging of tumor development is possible through a dorsal skinfold chamber implantable on mouse models. However, current intravital imaging systems are poorly tolerated over time by mice and do not allow multimodality imaging. Our project aims to develop a new chamber for: 1- long-term micro/macroscopic visualization of the tumor (vascular and cellular compartments) and tissue microenvironment; and 2- multimodality imaging (photonic, MRI and sonography). Our new experimental device was patented in March 2014 and was initially assessed on 75 mice engrafted with the 4T1-Luc tumor cell line, and validated in confocal and multiphoton imaging after staining the mouse vasculature using Dextran 155 kDa-TRITC or Dextran 2000 kDa-FITC. Simultaneously, a universal stage was designed for optimal removal of respiratory and cardiac artifacts during microscopy assays. Experimental results from optical, ultrasound (B-mode and pulse subtraction mode) and MRI imaging (anatomic sequences) showed that our patented design, unlike commercial devices, improves longitudinal monitoring over several weeks (35 days on average versus 12 for the commercial chamber) and allows for a better characterization of the early and late tissue alterations due to tumour development. We also demonstrated compatibility with multimodality imaging and a 2.9-fold increase in mouse survival with our new skinfold chamber. Current developments include: 1- defining new procedures for multi-labelling of cells and tissue (screening of fluorescent molecules and imaging protocols); 2- developing ultrasound and MRI imaging procedures with specific probes; 3- correlating optical/ultrasound/MRI data for a complete mapping of tumour development and microenvironment.

  20. Deep Convolutional Neural Networks for Multi-Modality Isointense Infant Brain Image Segmentation

    PubMed Central

    Zhang, Wenlu; Li, Rongjian; Deng, Houtao; Wang, Li; Lin, Weili; Ji, Shuiwang; Shen, Dinggang

    2015-01-01

    The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6–8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making the tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they used only a single T1 or T2 image, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a type of deep model in which trainable filters and local neighborhood pooling operations are applied alternately to the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multimodality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement. PMID:25562829
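
    The record's pipeline stacks T1, T2, and FA images as joint inputs to a convolutional network that emits per-pixel tissue labels. The PyTorch sketch below reproduces only that input/output structure; the layer sizes and depth are assumptions, not the published architecture.

    ```python
    # Toy sketch of a multi-modality CNN for 3-class tissue maps (WM/GM/CSF).
    # Layer sizes are illustrative assumptions, not the architecture from the paper.
    import torch
    import torch.nn as nn

    class TinySegNet(nn.Module):
        def __init__(self, in_channels=3, n_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            )
            self.classifier = nn.Conv2d(32, n_classes, kernel_size=1)  # per-pixel logits

        def forward(self, x):
            return self.classifier(self.features(x))

    # One "slice" with T1, T2 and FA stacked as three input channels.
    x = torch.randn(1, 3, 64, 64)
    logits = TinySegNet()(x)
    print(logits.shape)            # torch.Size([1, 3, 64, 64])
    tissue_map = logits.argmax(1)  # predicted class per pixel
    ```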

  1. Multimodal Imaging in Klippel-Trénaunay-Weber Syndrome: Clinical Photography, Computed Tomoangiography, Infrared Thermography, and 99mTc-Phytate Lymphoscintigraphy.

    PubMed

    Kim, Su Wan; Song, Heesung

    2017-12-01

    We report the case of a 19-year-old man who presented with a 12-year history of progressive fatigue, feeling hot, excessive sweating, and numbness in the left arm. He had undergone multimodal imaging and was diagnosed as having Klippel-Trénaunay-Weber syndrome (KTWS). This is a rare congenital disease, defined by combinations of nevus flammeus, venous and lymphatic malformation, and hypertrophy of the affected limbs. The lower extremities are most commonly affected. Conventional modalities for evaluating KTWS are ultrasonography, CT, MRI, lymphoscintigraphy, and angiography. There are few reports on multimodal imaging of the upper extremities in KTWS patients, and this is the first report of infrared thermography in KTWS.

  2. High-resolution multimodal clinical multiphoton tomography of skin

    NASA Astrophysics Data System (ADS)

    König, Karsten

    2011-03-01

    This review focuses on multimodal multiphoton tomography based on near-infrared femtosecond lasers. Clinical multiphoton tomographs for 3D high-resolution in vivo imaging were introduced to the market several years ago. The second generation of this Prism-Award winning High-Tech skin imaging tool (MPTflex) was introduced in 2010. That same year, the world's first clinical CARS studies were performed with a hybrid multimodal multiphoton tomograph. In particular, non-fluorescent lipids and water, as well as mitochondrial fluorescent NAD(P)H, fluorescent elastin, keratin, melanin, and SHG-active collagen, have been imaged with submicron resolution in patients suffering from psoriasis. Further multimodal approaches include the combination of multiphoton tomographs with low-resolution wide-field systems such as ultrasound, optoacoustical, OCT, and dermoscopy systems. Multiphoton tomographs are currently employed in Australia, Japan, the US, and in several European countries for early diagnosis of skin cancer, optimization of treatment strategies, and cosmetic research including long-term testing of sunscreen nanoparticles as well as anti-aging products.

  3. Ultrasmall biomolecule-anchored hybrid GdVO4 nanophosphors as a metabolizable multimodal bioimaging contrast agent.

    PubMed

    Dong, Kai; Ju, Enguo; Liu, Jianhua; Han, Xueli; Ren, Jinsong; Qu, Xiaogang

    2014-10-21

    Multimodal molecular imaging has recently attracted much attention in disease diagnostics by taking advantage of individual imaging modalities. Herein, we have demonstrated a new paradigm for multimodal bioimaging based on amino acid-anchored ultrasmall lanthanide-doped GdVO4 nanoprobes. Owing to specific metal-cation complexation and abundant functional groups, these amino acid-anchored nanoprobes showed high colloidal stability and excellent dispersibility. Additionally, due to typical paramagnetic behaviour, high X-ray mass absorption coefficient and strong fluorescence, these nanoprobes would provide a unique opportunity to develop multifunctional probes for MRI, CT and luminescence imaging. More importantly, the small size and biomolecular coatings endow the nanoprobes with effective metabolisability and high biocompatibility. With the superior stability, high biocompatibility, effective metabolisability and excellent contrast performance, amino acid-capped GdVO4:Eu(3+) nanocastings are promising candidates as multimodal contrast agents and would bring more opportunities for biological and medical applications with further modifications.

  4. Plasma-assisted quadruple-channel optosensing of proteins and cells with Mn-doped ZnS quantum dots.

    PubMed

    Li, Chenghui; Wu, Peng; Hou, Xiandeng

    2016-02-21

    Information extraction from nano-bio-systems is crucial for understanding their inner molecular level interactions and can help in the development of multidimensional/multimodal sensing devices to realize novel or expanded functionalities. The intrinsic fluorescence (IF) of proteins has long been considered as an effective tool for studying protein structures and dynamics, but not for protein recognition analysis partially because it generally contributes to the fluorescence background in bioanalysis. Here we explored the use of IF as the fourth channel optical input for a multidimensional optosensing device, together with the triple-channel optical output of Mn-doped ZnS QDs (fluorescence from ZnS host, phosphorescence from Mn(2+) dopant, and Rayleigh light scattering from the QDs), to dramatically improve the protein recognition and discrimination resolution. To further increase the cross-reactivity of the multidimensional optosensing device, plasma modification of proteins was explored to enhance the IF difference as well as their interactions with Mn-doped ZnS QDs. Such a sensor device was demonstrated for highly discriminative and precise identification of proteins in human serum and urine samples, and for cancer and normal cells as well.

  5. Dynamic State Estimation of Terrestrial and Solar Plasmas

    NASA Astrophysics Data System (ADS)

    Kamalabadi, Farzad

    A pervasive problem in virtually all branches of space science is the estimation of multi-dimensional state parameters of a dynamical system from a collection of indirect, often incomplete, and imprecise measurements. Subsequent scientific inference is predicated on rigorous analysis, interpretation, and understanding of physical observations and on the reliability of the associated quantitative statistical bounds and performance characteristics of the algorithms used. In this work, we focus on these dynamic state estimation problems and illustrate their importance in the context of two timely activities in space remote sensing. First, we discuss the estimation of multi-dimensional ionospheric state parameters from UV spectral imaging measurements anticipated to be acquired by the recently selected NASA Heliophysics mission, the Ionospheric Connection Explorer (ICON). Next, we illustrate that similar state-space formulations provide the means for the estimation of 3D, time-dependent densities and temperatures in the solar corona from a series of white-light and EUV measurements. We demonstrate that, while a general framework for the stochastic formulation of the state estimation problem is suited for systematic inference of the parameters of a hidden Markov process, several challenges must be addressed in the assimilation of an increasing volume and diversity of space observations. These challenges are: (1) the computational tractability when faced with voluminous and multimodal data, (2) the inherent limitations of the underlying models which assume, often incorrectly, linear dynamics and Gaussian noise, and (3) the unavailability or inaccuracy of transition probabilities and noise statistics. We argue that pursuing answers to these questions necessitates cross-disciplinary research that enables progress toward systematically reconciling observational and theoretical understanding of the space environment.
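
    The record frames these problems as inference on a hidden Markov process, with the caveat that linear-dynamics/Gaussian-noise assumptions are often violated. Under exactly those assumptions the workhorse estimator is the Kalman filter; the scalar sketch below illustrates the predict/update cycle only and does not reproduce any ionospheric or coronal model.

    ```python
    # Generic linear-Gaussian state estimation (scalar Kalman filter). Illustrates
    # the state-space framing in the record; the actual ICON or coronal models are
    # not reproduced here.
    import numpy as np

    def kalman_filter(measurements, F=1.0, H=1.0, Q=1e-3, R=0.1, x0=0.0, P0=1.0):
        x, P, estimates = x0, P0, []
        for z in measurements:
            x, P = F * x, F * P * F + Q                    # predict
            K = P * H / (H * P * H + R)                    # Kalman gain
            x, P = x + K * (z - H * x), (1 - K * H) * P    # update
            estimates.append(x)
        return np.array(estimates)

    rng = np.random.default_rng(3)
    truth = np.cumsum(rng.normal(scale=0.03, size=200))    # slowly varying state
    observed = truth + rng.normal(scale=0.3, size=200)     # noisy measurements
    print("mean filtered error:", np.abs(kalman_filter(observed) - truth).mean())
    ```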

  6. Radionuclide Myocardial Perfusion Imaging for the Evaluation of Patients With Known or Suspected Coronary Artery Disease in the Era of Multimodality Cardiovascular Imaging

    PubMed Central

    Taqueti, Viviany R.; Di Carli, Marcelo F.

    2018-01-01

    Over the last several decades, radionuclide myocardial perfusion imaging (MPI) with single photon emission tomography and positron emission tomography has been a mainstay for the evaluation of patients with known or suspected coronary artery disease (CAD). More recently, technical advances in separate and complementary imaging modalities including coronary computed tomography angiography, computed tomography perfusion, cardiac magnetic resonance imaging, and contrast stress echocardiography have expanded the toolbox of diagnostic testing for cardiac patients. While the growth of available technologies has heralded an exciting era of multimodality cardiovascular imaging, coordinated and dispassionate utilization of these techniques is needed to implement the right test for the right patient at the right time, a promise of “precision medicine.” In this article, we review the maturing role of MPI in the current era of multimodality cardiovascular imaging, particularly in the context of recent advances in myocardial blood flow quantitation, and as applied to the evaluation of patients with known or suspected CAD. PMID:25770849

  7. Integrated scanning laser ophthalmoscopy and optical coherence tomography for quantitative multimodal imaging of retinal degeneration and autofluorescence

    NASA Astrophysics Data System (ADS)

    Issaei, Ali; Szczygiel, Lukasz; Hossein-Javaheri, Nima; Young, Mei; Molday, L. L.; Molday, R. S.; Sarunic, M. V.

    2011-03-01

    Scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT) are complementary retinal imaging modalities. Integration of SLO and OCT allows both fluorescence detection and depth-resolved structural imaging of the retinal cell layers to be performed in vivo. System customization is required to image rodents used in medical research by vision scientists. We are investigating multimodal SLO/OCT imaging of a rodent model of Stargardt's macular dystrophy, which is characterized by retinal degeneration and accumulation of toxic autofluorescent lipofuscin deposits. Our new findings demonstrate the ability to track fundus autofluorescence and retinal degeneration concurrently.

  8. Multimodal Image Alignment via Linear Mapping between Feature Modalities.

    PubMed

    Jiang, Yanyun; Zheng, Yuanjie; Hou, Sujuan; Chang, Yuchou; Gee, James

    2017-01-01

    We propose a novel landmark matching based method for aligning multimodal images, which is accomplished uniquely by resolving a linear mapping between different feature modalities. This linear mapping results in a new measurement on similarity of images captured from different modalities. In addition, our method simultaneously solves this linear mapping and the landmark correspondences by minimizing a convex quadratic function. Our method can estimate complex image relationship between different modalities and nonlinear nonrigid spatial transformations even in the presence of heavy noise, as shown in our experiments carried out by using a variety of image modalities.
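
    The heart of the method is a linear map that carries features of one modality into the feature space of the other, estimated together with the landmark correspondences by minimizing a convex quadratic. As a much smaller stand-in (correspondences assumed known, which the paper does not assume), the mapping alone can be recovered by least squares.

    ```python
    # Minimal stand-in: estimate a linear map W between feature modalities from
    # already-paired features by least squares. The paper solves the mapping and the
    # landmark correspondences jointly; this sketch assumes correspondences are given.
    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.random((200, 8))                       # modality-1 features (200 landmarks)
    W_true = rng.random((8, 8))
    B = A @ W_true + 0.01 * rng.random((200, 8))   # noisy modality-2 features

    W, *_ = np.linalg.lstsq(A, B, rcond=None)      # argmin_W ||A W - B||^2
    similarity = -np.linalg.norm(A @ W - B, axis=1)   # per-landmark similarity score
    print(float(similarity.mean()))
    ```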

  9. Use of multimodality imaging and artificial intelligence for diagnosis and prognosis of early stages of Alzheimer's disease.

    PubMed

    Liu, Xiaonan; Chen, Kewei; Wu, Teresa; Weidman, David; Lure, Fleming; Li, Jing

    2018-04-01

    Alzheimer's disease (AD) is a major neurodegenerative disease and the most common cause of dementia. Currently, no treatment exists to slow down or stop the progression of AD. There is converging belief that disease-modifying treatments should focus on early stages of the disease, that is, the mild cognitive impairment (MCI) and preclinical stages. Making a diagnosis of AD and offering a prognosis (likelihood of converting to AD) at these early stages are challenging tasks but possible with the help of multimodality imaging, such as magnetic resonance imaging (MRI), fluorodeoxyglucose (FDG)-positron emission tomography (PET), amyloid-PET, and recently introduced tau-PET, which provides different but complementary information. This article is a focused review of existing research in the recent decade that used statistical machine learning and artificial intelligence methods to perform quantitative analysis of multimodality image data for diagnosis and prognosis of AD at the MCI or preclinical stages. We review the existing work in 3 subareas: diagnosis, prognosis, and methods for handling modality-wise missing data, a commonly encountered problem when using multimodality imaging for prediction or classification. Factors contributing to missing data include lack of imaging equipment, cost, difficulty of obtaining patient consent, and patient drop-off (in longitudinal studies). Finally, we summarize our major findings and provide some recommendations for potential future research directions. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. Overlapping-image multimode interference couplers with a reduced number of self-images for uniform and nonuniform power splitting

    NASA Astrophysics Data System (ADS)

    Bachmann, M.; Besse, P. A.; Melchior, H.

    1995-10-01

    Overlapping-image multimode interference (MMI) couplers, a new class of devices, permit uniform and nonuniform power splitting. A theoretical description directly relates coupler geometry to image intensities, positions, and phases. Among many possibilities of nonuniform power splitting, examples of 1 × 2 couplers with ratios of 15:85 and 28:72 are given. An analysis of uniform power splitters includes the well-known 2 × N and 1 × N MMI couplers. Applications of MMI couplers include mode filters, mode splitters/combiners, and mode converters.

  11. Magnetic nanobubbles with potential for targeted drug delivery and trimodal imaging in breast cancer: an in vitro study.

    PubMed

    Song, Weixiang; Luo, Yindeng; Zhao, Yajing; Liu, Xinjie; Zhao, Jiannong; Luo, Jie; Zhang, Qunxia; Ran, Haitao; Wang, Zhigang; Guo, Dajing

    2017-05-01

    The aim of this study was to improve tumor-targeted therapy for breast cancer by designing magnetic nanobubbles with the potential for targeted drug delivery and multimodal imaging. Herceptin-decorated and ultrasmall superparamagnetic iron oxide (USPIO)/paclitaxel (PTX)-embedded nanobubbles (PTX-USPIO-HER-NBs) were manufactured by combining a modified double-emulsion evaporation process with carbodiimide technique. PTX-USPIO-HER-NBs were examined for characterization, specific cell-targeting ability and multimodal imaging. PTX-USPIO-HER-NBs exhibited excellent entrapment efficiency of Herceptin/PTX/USPIO and showed greater cytotoxic effects than other delivery platforms. Low-frequency ultrasound triggered accelerated PTX release. Moreover, the magnetic nanobubbles were able to enhance ultrasound, magnetic resonance and photoacoustics trimodal imaging. These results suggest that PTX-USPIO-HER-NBs have potential as a multimodal contrast agent and as a system for ultrasound-triggered drug release in breast cancer.

  12. Spiking Cortical Model Based Multimodal Medical Image Fusion by Combining Entropy Information with Weber Local Descriptor

    PubMed Central

    Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei

    2016-01-01

    Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation. PMID:27649190

  13. Spiking Cortical Model Based Multimodal Medical Image Fusion by Combining Entropy Information with Weber Local Descriptor.

    PubMed

    Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei

    2016-09-15

    Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation.
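
    Both records above weight the fusion of two co-registered source images using entropy information derived from SCM firing maps combined with a Weber local descriptor. The sketch below keeps only the plainest part of that idea, patchwise entropy-weighted averaging, and omits the SCM and the Weber descriptor entirely.

    ```python
    # Simplified patchwise entropy-weighted fusion of two co-registered images.
    # Illustrates entropy-based weighting only; the SCM and Weber local descriptor
    # from the records above are omitted.
    import numpy as np

    def patch_entropy(patch, bins=16):
        hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def fuse(img_a, img_b, patch=8):
        fused = np.zeros_like(img_a)
        for y in range(0, img_a.shape[0], patch):
            for x in range(0, img_a.shape[1], patch):
                pa = img_a[y:y + patch, x:x + patch]
                pb = img_b[y:y + patch, x:x + patch]
                ea, eb = patch_entropy(pa), patch_entropy(pb)
                wa = ea / (ea + eb) if (ea + eb) > 0 else 0.5
                fused[y:y + patch, x:x + patch] = wa * pa + (1 - wa) * pb
        return fused

    rng = np.random.default_rng(5)
    print(fuse(rng.random((64, 64)), rng.random((64, 64))).shape)   # (64, 64)
    ```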

  14. Exogenous Molecular Probes for Targeted Imaging in Cancer: Focus on Multi-modal Imaging

    PubMed Central

    Joshi, Bishnu P.; Wang, Thomas D.

    2010-01-01

    Cancer is one of the major causes of mortality and morbidity in our healthcare system. Molecular imaging is an emerging methodology for the early detection of cancer, guidance of therapy, and monitoring of response. The development of new instruments and exogenous molecular probes that can be labeled for multi-modality imaging is critical to this process. Today, molecular imaging is at a crossroad, and new targeted imaging agents are expected to broadly expand our ability to detect and manage cancer. This integrated imaging strategy will permit clinicians to not only localize lesions within the body but also to manage their therapy by visualizing the expression and activity of specific molecules. This information is expected to have a major impact on drug development and understanding of basic cancer biology. At this time, a number of molecular probes have been developed by conjugating various labels to affinity ligands for targeting in different imaging modalities. This review will describe the current status of exogenous molecular probes for optical, scintigraphic, MRI and ultrasound imaging platforms. Furthermore, we will also shed light on how these techniques can be used synergistically in multi-modal platforms and how these techniques are being employed in current research. PMID:22180839

  15. Multimodal Task-Driven Dictionary Learning for Image Classification

    DTIC Science & Technology

    2015-12-18

    Bahrampour, Soheil; Nasrabadi, Nasser M.; Ray, Asok; Jenkins, W. Kenneth. Dictionary learning algorithms have been successfully used for both reconstructive and discriminative tasks, where an input signal is represented with a sparse linear combination of dictionary atoms. While these methods are…
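
    The surviving fragment describes representing a signal as a sparse linear combination of dictionary atoms. For a fixed dictionary, one standard way to obtain such a sparse code is ISTA for the lasso problem, sketched below; the report's task-driven, multimodal dictionary learning itself is not reproduced.

    ```python
    # Sparse coding of a signal over a fixed dictionary via ISTA (iterative
    # soft-thresholding). Illustrates "sparse linear combination of dictionary atoms";
    # it is not the task-driven multimodal dictionary learning of the report.
    import numpy as np

    def ista(D, x, lam=0.1, n_iter=200):
        """Approximately solve min_a 0.5*||x - D a||^2 + lam*||a||_1."""
        L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
        a = np.zeros(D.shape[1])
        for _ in range(n_iter):
            a = a - (D.T @ (D @ a - x)) / L                          # gradient step
            a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)    # soft threshold
        return a

    rng = np.random.default_rng(6)
    D = rng.standard_normal((32, 64))
    D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
    x = 2.0 * D[:, 3] - 1.5 * D[:, 17]         # signal built from two atoms
    code = ista(D, x)
    print("non-zero coefficients:", int((np.abs(code) > 1e-3).sum()))
    ```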

  16. Learning of Multimodal Representations With Random Walks on the Click Graph.

    PubMed

    Wu, Fei; Lu, Xinyan; Song, Jun; Yan, Shuicheng; Zhang, Zhongfei Mark; Rui, Yong; Zhuang, Yueting

    2016-02-01

    In multimedia information retrieval, most classic approaches tend to represent different modalities of media in the same feature space. With the click data collected from the users' searching behavior, existing approaches take either one-to-one paired data (text-image pairs) or ranking examples (text-query-image and/or image-query-text ranking lists) as training examples, which do not make full use of the click data, particularly the implicit connections among the data objects. In this paper, we treat the click data as a large click graph, in which vertices are images/text queries and edges indicate the clicks between an image and a query. We consider learning a multimodal representation from the perspective of encoding the explicit/implicit relevance relationship between the vertices in the click graph. By minimizing both the truncated random walk loss as well as the distance between the learned representation of vertices and their corresponding deep neural network output, the proposed model which is named multimodal random walk neural network (MRW-NN) can be applied to not only learn robust representation of the existing multimodal data in the click graph, but also deal with the unseen queries and images to support cross-modal retrieval. We evaluate the latent representation learned by MRW-NN on a public large-scale click log data set Clickture and further show that MRW-NN achieves much better cross-modal retrieval performance on the unseen queries/images than the other state-of-the-art methods.
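
    The training signal in this record comes from truncated random walks over a bipartite graph whose vertices are text queries and images and whose edges are clicks. The toy sketch below only generates such walks from a tiny hand-made click graph; the MRW-NN model, its loss, and the Clickture data are not reproduced.

    ```python
    # Toy generation of truncated random walks on a bipartite query-image click graph.
    # The graph is hand-made for illustration; the MRW-NN network and its loss are
    # not reproduced here.
    import random

    clicks = {                        # adjacency: query -> images, image -> queries
        "q:red car":    ["img1", "img2"],
        "q:sports car": ["img2", "img3"],
        "img1": ["q:red car"],
        "img2": ["q:red car", "q:sports car"],
        "img3": ["q:sports car"],
    }

    def random_walk(start, length, rng):
        walk = [start]
        for _ in range(length - 1):
            walk.append(rng.choice(clicks[walk[-1]]))
        return walk

    rng = random.Random(0)
    walks = [random_walk(v, 5, rng) for v in clicks for _ in range(2)]
    print(walks[0])   # e.g. ['q:red car', 'img2', ...]
    ```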

  17. Multimodality Molecular Imaging-Guided Tumor Border Delineation and Photothermal Therapy Analysis Based on Graphene Oxide-Conjugated Gold Nanoparticles Chelated with Gd.

    PubMed

    Ma, Xibo; Jin, Yushen; Wang, Yi; Zhang, Shuai; Peng, Dong; Yang, Xin; Wei, Shoushui; Chai, Wei; Li, Xuejun; Tian, Jie

    2018-01-01

    Complete extinction of tumor cells is a crucial measure of antitumor efficacy. Difficulty in defining tumor margins and finding satellite metastases is a major reason for tumor recurrence. A synergistic method based on multimodality molecular imaging is needed to achieve complete extinction of the tumor cells. In this study, graphene oxide conjugated with gold nanostars and chelated with Gd through 1,4,7,10-tetraazacyclododecane-N,N',N,N'-tetraacetic acid (DOTA) (GO-AuNS-DOTA-Gd) was prepared to target HCC-LM3-fLuc cells and used for therapy. For subcutaneous tumors, multimodality molecular imaging including photoacoustic imaging (PAI) and magnetic resonance imaging (MRI), together with the related processing techniques, was used to monitor the pharmacokinetics of GO-AuNS-DOTA-Gd in order to determine the optimal time for treatment. For orthotopic tumors, MRI was used to delineate the tumor location and margin in vivo before treatment, and a handheld photoacoustic imaging system was then used to determine the tumor location during surgery and to guide the photothermal therapy. The experimental results in the orthotopic tumor model demonstrated that this synergistic method reduced residual tumor and satellite metastases by 85.71% compared with the routine photothermal method without handheld PAI guidance. These results indicate that this multimodality molecular imaging-guided photothermal therapy method is promising for clinical application.

  18. Novel multifunctional theranostic liposome drug delivery system: construction, characterization, and multimodality MR, near-infrared fluorescent, and nuclear imaging.

    PubMed

    Li, Shihong; Goins, Beth; Zhang, Lujun; Bao, Ande

    2012-06-20

    Liposomes are effective lipid nanoparticle drug delivery systems, which can also be functionalized with noninvasive multimodality imaging agents with each modality providing distinct information and having synergistic advantages in diagnosis, monitoring of disease treatment, and evaluation of liposomal drug pharmacokinetics. We designed and constructed a multifunctional theranostic liposomal drug delivery system, which integrated multimodality magnetic resonance (MR), near-infrared (NIR) fluorescent and nuclear imaging of liposomal drug delivery, and therapy monitoring and prediction. The premanufactured liposomes were composed of DSPC/cholesterol/Gd-DOTA-DSPE/DOTA-DSPE with the molar ratio of 39:35:25:1 and having ammonium sulfate/pH gradient. A lipidized NIR fluorescent tracer, IRDye-DSPE, was effectively postinserted into the premanufactured liposomes. Doxorubicin could be effectively postloaded into the multifunctional liposomes. The multifunctional doxorubicin-liposomes could also be stably radiolabeled with (99m)Tc or (64)Cu for single-photon emission computed tomography (SPECT) or positron emission tomography (PET) imaging, respectively. MR images displayed the high-resolution micro-intratumoral distribution of the liposomes in squamous cell carcinoma of head and neck (SCCHN) tumor xenografts in nude rats after intratumoral injection. NIR fluorescent, SPECT, and PET images also clearly showed either the high intratumoral retention or distribution of the multifunctional liposomes. This multifunctional drug carrying liposome system is promising for disease theranostics allowing noninvasive multimodality NIR fluorescent, MR, SPECT, and PET imaging of their in vivo behavior and capitalizing on the inherent advantages of each modality.

  19. How acute and chronic alcohol consumption affects brain networks: insights from multimodal neuroimaging.

    PubMed

    Schulte, Tilman; Oberlin, Brandon G; Kareken, David A; Marinkovic, Ksenija; Müller-Oehring, Eva M; Meyerhoff, Dieter J; Tapert, Susan

    2012-12-01

    Multimodal imaging combining 2 or more techniques is becoming increasingly important because no single imaging approach has the capacity to elucidate all clinically relevant characteristics of a network. This review highlights recent advances in multimodal neuroimaging (i.e., combined use and interpretation of data collected through magnetic resonance imaging [MRI], functional MRI, diffusion tensor imaging, positron emission tomography, magnetoencephalography, MR perfusion, and MR spectroscopy methods) that leads to a more comprehensive understanding of how acute and chronic alcohol consumption affect neural networks underlying cognition, emotion, reward processing, and drinking behavior. Several innovative investigators have started utilizing multiple imaging approaches within the same individual to better understand how alcohol influences brain systems, both during intoxication and after years of chronic heavy use. Their findings can help identify mechanism-based therapeutic and pharmacological treatment options, and they may increase the efficacy and cost effectiveness of such treatments by predicting those at greatest risk for relapse. Copyright © 2012 by the Research Society on Alcoholism.

  20. Dye-Enhanced Multimodal Confocal Imaging of Brain Cancers

    NASA Astrophysics Data System (ADS)

    Wirth, Dennis; Snuderl, Matija; Sheth, Sameer; Curry, William; Yaroslavsky, Anna

    2011-04-01

    Background and Significance: Accurate high resolution intraoperative detection of brain tumors may result in improved patient survival and better quality of life. The goal of this study was to evaluate dye enhanced multimodal confocal imaging for discriminating normal and cancerous brain tissue. Materials and Methods: Fresh thick brain specimens were obtained from the surgeries. Normal and cancer tissues were investigated. Samples were stained in methylene blue and imaged. Reflectance and fluorescence signals were excited at 658 nm. Fluorescence emission and polarization were registered from 670 nm to 710 nm. The system provided lateral resolution of 0.6 μm and axial resolution of 7 μm. Normal and cancer specimens exhibited distinctively different characteristics. H&E histopathology was processed from each imaged sample. Results and Conclusions: The analysis of normal and cancerous tissues indicated clear differences in appearance in both the reflectance and fluorescence responses. These results confirm the feasibility of multimodal confocal imaging for intraoperative detection of small cancer nests and cells.

  1. Simulation of range imaging-based estimation of respiratory lung motion. Influence of noise, signal dimensionality and sampling patterns.

    PubMed

    Wilms, M; Werner, R; Blendowski, M; Ortmüller, J; Handels, H

    2014-01-01

    A major problem associated with the irradiation of thoracic and abdominal tumors is respiratory motion. In clinical practice, motion compensation approaches are frequently steered by low-dimensional breathing signals (e.g., spirometry) and patient-specific correspondence models, which are used to estimate the sought internal motion given a signal measurement. Recently, the use of multidimensional signals derived from range images of the moving skin surface has been proposed to better account for complex motion patterns. In this work, a simulation study is carried out to investigate the motion estimation accuracy of such multidimensional signals and the influence of noise, the signal dimensionality, and different sampling patterns (points, lines, regions). A diffeomorphic correspondence modeling framework is employed to relate multidimensional breathing signals derived from simulated range images to internal motion patterns represented by diffeomorphic non-linear transformations. Furthermore, an automatic approach for the selection of optimal signal combinations/patterns within this framework is presented. This simulation study focuses on lung motion estimation and is based on 28 4D CT data sets. The results show that the use of multidimensional signals instead of one-dimensional signals significantly improves the motion estimation accuracy, which is, however, highly affected by noise. Only small differences exist between different multidimensional sampling patterns (lines and regions). Automatically determined optimal combinations of points and lines do not lead to accuracy improvements compared to results obtained by using all points or lines. Our results show the potential of multidimensional breathing signals derived from range images for the model-based estimation of respiratory motion in radiation therapy.
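
    The correspondence-model idea can be illustrated with a much simpler stand-in than the diffeomorphic framework used in the study: the sketch below fits a linear mapping from a multidimensional range-image surrogate signal to internal motion parameters by least squares and applies it to a new signal measurement; the data are synthetic and all dimensions are assumed.

      import numpy as np

      rng = np.random.default_rng(0)
      T, d_signal, d_motion = 40, 12, 300            # breathing phases, surrogate-signal dimension, motion-parameter dimension (assumed)
      S = rng.standard_normal((T, d_signal))         # multidimensional surrogate signals (e.g., sampled range-image patches)
      M = S @ rng.standard_normal((d_signal, d_motion)) + 0.01 * rng.standard_normal((T, d_motion))   # synthetic internal motion parameters

      W, *_ = np.linalg.lstsq(S, M, rcond=None)      # fit the correspondence model M ≈ S W
      s_new = rng.standard_normal(d_signal)          # new signal measurement acquired during treatment
      m_est = s_new @ W                              # estimated internal motion parameters for this measurement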

  2. Combined multi-modal photoacoustic tomography, optical coherence tomography (OCT) and OCT angiography system with an articulated probe for in vivo human skin structure and vasculature imaging

    PubMed Central

    Liu, Mengyang; Chen, Zhe; Zabihian, Behrooz; Sinz, Christoph; Zhang, Edward; Beard, Paul C.; Ginner, Laurin; Hoover, Erich; Minneman, Micheal P.; Leitgeb, Rainer A.; Kittler, Harald; Drexler, Wolfgang

    2016-01-01

    Cutaneous blood flow accounts for approximately 5% of cardiac output in humans and plays a key role in a number of physiological and pathological processes. We show for the first time a multi-modal photoacoustic tomography (PAT), optical coherence tomography (OCT) and OCT angiography system with an articulated probe to extract human cutaneous vasculature in vivo in various skin regions. OCT angiography supplements the microvasculature that PAT alone is unable to provide. The co-registered vessel-network volumes are further embedded in the morphologic image provided by OCT. This multi-modal system is therefore demonstrated as a valuable tool for comprehensive non-invasive human skin vasculature and morphology imaging in vivo. PMID:27699106

  3. Multimodal Nonlinear Optical Microscopy

    PubMed Central

    Yue, Shuhua; Slipchenko, Mikhail N.; Cheng, Ji-Xin

    2013-01-01

    Because each nonlinear optical (NLO) imaging modality is sensitive to specific molecules or structures, multimodal NLO imaging capitalizes on the potential of NLO microscopy for studies of complex biological tissues. The coupling of multiphoton fluorescence, second harmonic generation, and coherent anti-Stokes Raman scattering (CARS) has allowed investigation of a broad range of biological questions concerning lipid metabolism, cancer development, cardiovascular disease, and skin biology. Moreover, recent research shows the great potential of using a CARS microscope as a platform to develop more advanced NLO modalities such as electronic-resonance-enhanced four-wave mixing, stimulated Raman scattering, and pump-probe microscopy. This article reviews the various approaches developed for the realization of multimodal NLO imaging, as well as developments of new NLO modalities on a CARS microscope. Applications to various aspects of biological and biomedical research are discussed. PMID:24353747

  4. MO-D-BRB-02: SBRT Treatment Planning and Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y.

    2016-06-15

    Increased use of SBRT and hypofractionation in radiation oncology practice has posed a number of challenges to medical physicists, ranging from planning, image-guided patient setup and on-treatment monitoring, to quality assurance (QA) and dose delivery. This symposium is designed to provide current knowledge necessary for the safe and efficient implementation of SBRT in various linac platforms, including the emerging digital linacs equipped with high dose rate FFF beams. Issues related to 4D CT, PET and MRI simulations, 3D/4D CBCT guided patient setup, real-time image guidance during SBRT dose delivery using gated/un-gated VMAT/IMRT, and technical advancements in QA of SBRT (in particular, strategies dealing with high dose rate FFF beams) will be addressed. The symposium will help the attendees to gain a comprehensive understanding of the SBRT workflow and facilitate their clinical implementation of state-of-the-art imaging and planning techniques. Learning Objectives: Present background knowledge of SBRT, describe essential requirements for safe implementation of SBRT, and discuss issues specific to SBRT treatment planning and QA. Update on the use of multi-dimensional and multi-modality imaging for reliable guidance of SBRT. Discuss treatment planning and QA issues specific to SBRT. Provide a comprehensive overview of emerging digital linacs and summarize the key geometric and dosimetric features of the new generation of linacs for substantially improved SBRT. NIH/NCI; Varian Medical Systems; F. Yin, Duke University has a research agreement with Varian Medical Systems. In addition to the research grant, I had a technology license agreement with Varian Medical Systems.

  5. A Multidimensional Approach to the Study of Emotion Recognition in Autism Spectrum Disorders

    PubMed Central

    Xavier, Jean; Vignaud, Violaine; Ruggiero, Rosa; Bodeau, Nicolas; Cohen, David; Chaby, Laurence

    2015-01-01

    Although deficits in emotion recognition have been widely reported in autism spectrum disorder (ASD), experiments have been restricted to either facial or vocal expressions. Here, we explored multimodal emotion processing in children with ASD (N = 19) and with typical development (TD, N = 19), considering unimodal (faces or voices) and multimodal (faces and voices simultaneously) stimuli and developmental comorbidities (neuro-visual, language and motor impairments). Compared to TD controls, children with ASD had rather high and heterogeneous emotion recognition scores but also showed several significant differences: lower emotion recognition scores for visual stimuli, for neutral emotion, and a greater number of saccades during the visual task. Multivariate analyses showed that: (1) the difficulties they experienced with visual stimuli were partially alleviated with multimodal stimuli. (2) Developmental age was significantly associated with emotion recognition in TD children, whereas it was the case only for the multimodal task in children with ASD. (3) Language impairments tended to be associated with emotion recognition scores of ASD children in the auditory modality. Conversely, in the visual or bimodal (visuo-auditory) tasks, the impact of developmental coordination disorder or neuro-visual impairments was not found. We conclude that impaired emotion processing constitutes a dimension to explore in the field of ASD, as research has the potential to define more homogeneous subgroups and tailored interventions. However, it is clear that developmental age, the nature of the stimuli, and other developmental comorbidities must also be taken into account when studying this dimension. PMID:26733928

  6. Fluorine-18-labeled Gd3+/Yb3+/Er3+ co-doped NaYF4 nanophosphors for multimodality PET/MR/UCL imaging.

    PubMed

    Zhou, Jing; Yu, Mengxiao; Sun, Yun; Zhang, Xianzhong; Zhu, Xingjun; Wu, Zhanhong; Wu, Dongmei; Li, Fuyou

    2011-02-01

    Molecular imaging modalities provide a wealth of information that is highly complementary and rarely redundant. To combine the advantages of molecular imaging techniques, (18)F-labeled Gd(3+)/Yb(3+)/Er(3+) co-doped NaYF(4) nanophosphors (NPs) simultaneously possessing radioactive, magnetic, and upconversion luminescent properties have been fabricated for multimodality positron emission tomography (PET), magnetic resonance imaging (MRI), and laser scanning upconversion luminescence (UCL) imaging. Hydrophilic citrate-capped NaY(0.2)Gd(0.6)Yb(0.18)Er(0.02)F(4) nanophosphors (cit-NPs) were obtained from hydrophobic oleic acid (OA)-coated nanoparticles (OA-NPs) through a process of ligand exchange of OA with citrate, and were found to be monodisperse with an average size of 22 × 19 nm. The obtained hexagonal cit-NPs show intense UCL emission in the visible region and paramagnetic longitudinal relaxivity (r(1) = 0.405 s(-1)·(mM)(-1)). Through a facile inorganic reaction based on the strong binding between Y(3+) and F(-), (18)F-labeled NPs have been fabricated in high yield. The use of cit-NPs as a multimodal probe has been further explored for T(1)-weighted MR and PET imaging in vivo and UCL imaging of living cells and tissue slides. The results indicate that (18)F-labeled NaY(0.2)Gd(0.6)Yb(0.18)Er(0.02)F(4) is a potential candidate as a multimodal nanoprobe for ultra-sensitive molecular imaging from the cellular scale to whole-body evaluation. Copyright © 2010 Elsevier Ltd. All rights reserved.

  7. Multimodal nonlinear microscopy of biopsy specimen: towards intraoperative diagnostics (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Schmitt, Michael; Heuke, Sandro; Meyer, Tobias; Chernavskaia, Olga; Bocklitz, Thomas W.; Popp, Juergen

    2016-03-01

    The realization of label-free, molecule-specific imaging of the morphology and chemical composition of tissue at subcellular spatial resolution in real time is crucial for many envisioned applications in medicine, e.g., precise surgical guidance and non-invasive histopathologic examination of tissue. Thus, new approaches for fast and reliable in vivo and near in vivo (ex corpore in vivo) tissue characterization to supplement routine pathological diagnostics are needed. Spectroscopic imaging approaches are particularly important since they have the potential to provide a pathologist with adequate support in the form of clinically relevant information under both ex vivo and in vivo conditions. In this contribution it is demonstrated that multimodal nonlinear microscopy combining coherent anti-Stokes Raman scattering (CARS), two-photon excited fluorescence (TPEF) and second harmonic generation (SHG) enables the detection of characteristic structures and the accompanying molecular changes of widespread diseases, particularly of cancer and atherosclerosis. The detailed images enable an objective evaluation of the tissue samples for an early diagnosis of the disease status. Increasing the spectral resolution and analyzing CARS images at multiple Raman resonances improves the chemical specificity. To facilitate handling and interpretation of the image data, characteristic properties can be automatically extracted by advanced image processing algorithms, e.g., for tissue classification. Overall, the presented examples show the great potential of multimodal imaging to augment standard intraoperative clinical assessment with functional multimodal CARS/SHG/TPEF images to highlight functional activity and tumor boundaries. It ensures fast, label-free and non-invasive intraoperative tissue classification, paving the way towards in vivo optical pathology.

  8. Au Nanocage Functionalized with Ultra-small Fe3O4 Nanoparticles for Targeting T1-T2 Dual MRI and CT Imaging of Tumor

    NASA Astrophysics Data System (ADS)

    Wang, Guannan; Gao, Wei; Zhang, Xuanjun; Mei, Xifan

    2016-06-01

    Diagnostic approaches based on multimodal clinical noninvasive imaging (e.g., MRI/CT scanners) have been highly developed in recent years for accurate selection of therapeutic regimens in critical diseases. There is therefore strong demand for appropriate all-in-one multimodal contrast agents (MCAs) for MRI/CT multimodal imaging. Here a novel MCA (F-AuNC@Fe3O4) was engineered by assembling Au nanocages (Au NC) and ultra-small iron oxide nanoparticles (Fe3O4) for simultaneous T1-T2 dual MRI and CT contrast imaging. In this system, the Au nanocages offer facile thiol modification and strong X-ray attenuation for CT imaging. The ultra-small Fe3O4 nanoparticles are excellent contrast agents, providing greatly enhanced T1- and T2-weighted MRI signal (r1 = 6.263 mM-1 s-1, r2 = 28.117 mM-1 s-1) owing to their ultra-small size. After functionalization, the MCA nanoparticles exhibited small average size, low aggregation and excellent biocompatibility. In vitro and in vivo studies revealed that the MCAs show long circulation time, renal clearance properties and outstanding capability for selective accumulation in tumor tissues for simultaneous CT imaging and T1- and T2-weighted MRI. Taken together, these results show that the as-prepared MCAs are excellent candidates as MRI/CT multimodal imaging contrast agents.

  9. A prototype hand-held tri-modal instrument for in vivo ultrasound, photoacoustic, and fluorescence imaging

    NASA Astrophysics Data System (ADS)

    Kang, Jeeun; Chang, Jin Ho; Wilson, Brian C.; Veilleux, Israel; Bai, Yanhui; DaCosta, Ralph; Kim, Kang; Ha, Seunghan; Lee, Jong Gun; Kim, Jeong Seok; Lee, Sang-Goo; Kim, Sun Mi; Lee, Hak Jong; Ahn, Young Bok; Han, Seunghee; Yoo, Yangmo; Song, Tai-Kyong

    2015-03-01

    Multi-modality imaging is beneficial for both preclinical and clinical applications as it enables complementary information from each modality to be obtained in a single procedure. In this paper, we report the design, fabrication, and testing of a novel tri-modal in vivo imaging system to exploit molecular/functional information from fluorescence (FL) and photoacoustic (PA) imaging as well as anatomical information from ultrasound (US) imaging. The same ultrasound transducer was used for both US and PA imaging, bringing the pulsed laser light into a compact probe by fiberoptic bundles. The FL subsystem is independent of the acoustic components but the front end that delivers and collects the light is physically integrated into the same probe. The tri-modal imaging system was implemented to provide each modality image in real time as well as co-registration of the images. The performance of the system was evaluated through phantom and in vivo animal experiments. The results demonstrate that combining the modalities does not significantly compromise the performance of each of the separate US, PA, and FL imaging techniques, while enabling multi-modality registration. The potential applications of this novel approach to multi-modality imaging range from preclinical research to clinical diagnosis, especially in detection/localization and surgical guidance of accessible solid tumors.

  10. Showing or Telling a Story: A Comparative Study of Public Education Texts in Multimodality and Monomodality

    ERIC Educational Resources Information Center

    Wang, Kelu

    2013-01-01

    Multimodal texts that combine words and images produce meaning in a different way from monomodal texts that rely on words. They differ not only in representing the subject matter, but also in constructing relationships between text producers and text receivers. This article uses two multimodal texts and one monomodal written text as samples, which…

  11. Label-aligned Multi-task Feature Learning for Multimodal Classification of Alzheimer’s Disease and Mild Cognitive Impairment

    PubMed Central

    Zu, Chen; Jie, Biao; Liu, Mingxia; Chen, Songcan

    2015-01-01

    Multimodal classification methods using different modalities of imaging and non-imaging data have recently shown great advantages over traditional single-modality-based ones for diagnosis and prognosis of Alzheimer’s disease (AD), as well as its prodromal stage, i.e., mild cognitive impairment (MCI). However, to the best of our knowledge, most existing methods focus on mining the relationship across multiple modalities of the same subjects, while ignoring the potentially useful relationship across different subjects. Accordingly, in this paper, we propose a novel learning method for multimodal classification of AD/MCI, by fully exploring the relationships across both modalities and subjects. Specifically, our proposed method includes two subsequent components, i.e., label-aligned multi-task feature selection and multimodal classification. In the first step, feature selection from the multiple modalities is treated as a set of different learning tasks, and a group sparsity regularizer is imposed to jointly select a subset of relevant features. Furthermore, to utilize the discriminative information among labeled subjects, a new label-aligned regularization term is added into the objective function of standard multi-task feature selection, where label-alignment means that all multi-modality subjects with the same class labels should be closer in the new feature-reduced space. In the second step, a multi-kernel support vector machine (SVM) is adopted to fuse the selected features from multi-modality data for final classification. To validate our method, we perform experiments on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database using baseline MRI and FDG-PET imaging data. The experimental results demonstrate that our proposed method achieves better classification performance compared with several state-of-the-art methods for multimodal classification of AD/MCI. PMID:26572145
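
    A minimal stand-in sketch of the two-stage pipeline is given below under strong simplifications: per-modality L1 feature selection replaces the label-aligned group-sparse multi-task selection, and an equal-weight sum of modality-specific RBF kernels replaces learned multi-kernel fusion; the data, sizes, and parameters are assumptions.

      import numpy as np
      from sklearn.linear_model import Lasso
      from sklearn.metrics.pairwise import rbf_kernel
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)
      n, d = 60, 90
      X = {"mri": rng.standard_normal((n, d)), "pet": rng.standard_normal((n, d))}   # synthetic stand-ins for MRI/FDG-PET features
      y = rng.integers(0, 2, n)                                                       # toy binary labels (e.g., AD vs. MCI)

      selected = {}
      for name, Xm in X.items():
          w = Lasso(alpha=0.05).fit(Xm, y).coef_               # sparse weights act as a crude per-modality feature selector
          keep = np.abs(w) > 1e-8
          selected[name] = Xm[:, keep] if keep.any() else Xm    # guard against selecting no features at all

      K = sum(0.5 * rbf_kernel(Z) for Z in selected.values())   # equal-weight multi-kernel fusion of the modalities
      clf = SVC(kernel="precomputed").fit(K, y)                 # multi-kernel SVM on the fused kernel
      print("training accuracy:", clf.score(K, y))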

  12. GMars-T Enabling Multimodal Subdiffraction Structural and Functional Fluorescence Imaging in Live Cells.

    PubMed

    Wang, Sheng; Chen, Xuanze; Chang, Lei; Ding, Miao; Xue, Ruiying; Duan, Haifeng; Sun, Yujie

    2018-06-05

    Fluorescent probes with multimodal and multilevel imaging capabilities are highly valuable, as imaging with such probes can not only obtain new layers of information but also enable cross-validation of results under different experimental conditions. In recent years, the development of genetically encoded reversibly photoswitchable fluorescent proteins (RSFPs) has greatly promoted the application of various kinds of live-cell nanoscopy approaches, including reversible saturable optical fluorescence transitions (RESOLFT) and stochastic optical fluctuation imaging (SOFI). However, these two classes of live-cell nanoscopy approaches require different optical characteristics of specific RSFPs. In this work, we developed GMars-T, a monomeric bright green RSFP which can satisfy both RESOLFT and photochromic SOFI (pcSOFI) imaging in live cells. We further generated a biosensor based on bimolecular fluorescence complementation (BiFC) of GMars-T, which offers high specificity and sensitivity in detecting and visualizing various protein-protein interactions (PPIs) in different subcellular compartments under physiological conditions (e.g., 37 °C) in live mammalian cells. Thus, the newly developed GMars-T can serve as both a structural imaging probe with multimodal super-resolution imaging capability and a functional imaging probe for reporting PPIs with high specificity and sensitivity based on its derived biosensor.

  13. Introduction to clinical and laboratory (small-animal) image registration and fusion.

    PubMed

    Zanzonico, Pat B; Nehmeh, Sadek A

    2006-01-01

    Imaging has long been a vital component of clinical medicine and, increasingly, of biomedical research in small animals. Clinical and laboratory imaging modalities can be divided into two general categories, structural (or anatomical) and functional (or physiological). The latter, in particular, has spawned what has come to be known as "molecular imaging". Image registration and fusion have rapidly emerged as invaluable components of both clinical and small-animal imaging and have led to the development and marketing of a variety of multi-modality devices (e.g., PET-CT) which provide registered and fused three-dimensional image sets. This paper briefly reviews the basics of image registration and fusion and available clinical and small-animal multi-modality instrumentation.

  14. Multi-Modal Curriculum Learning for Semi-Supervised Image Classification.

    PubMed

    Gong, Chen; Tao, Dacheng; Maybank, Stephen J; Liu, Wei; Kang, Guoliang; Yang, Jie

    2016-07-01

    Semi-supervised image classification aims to classify a large quantity of unlabeled images by typically harnessing scarce labeled images. Existing semi-supervised methods often suffer from inadequate classification accuracy when encountering difficult yet critical images, such as outliers, because they treat all unlabeled images equally and conduct classifications in an imperfectly ordered sequence. In this paper, we employ the curriculum learning methodology by investigating the difficulty of classifying every unlabeled image. The reliability and the discriminability of these unlabeled images are particularly investigated for evaluating their difficulty. As a result, an optimized image sequence is generated during the iterative propagations, and the unlabeled images are logically classified from simple to difficult. Furthermore, since images are usually characterized by multiple visual feature descriptors, we associate each kind of features with a teacher, and design a multi-modal curriculum learning (MMCL) strategy to integrate the information from different feature modalities. In each propagation, each teacher analyzes the difficulties of the currently unlabeled images from its own modality viewpoint. A consensus is subsequently reached among all the teachers, determining the currently simplest images (i.e., a curriculum), which are to be reliably classified by the multi-modal learner. This well-organized propagation process leveraging multiple teachers and one learner enables our MMCL to outperform five state-of-the-art methods on eight popular image data sets.
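
    The teacher/learner idea can be sketched roughly as follows (a simplified stand-in, not the MMCL algorithm itself: difficulty is scored only by distance to the nearest labeled image, and the propagation step is replaced by a k-NN learner); the features, sizes, and curriculum length are assumptions.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier, NearestNeighbors

      rng = np.random.default_rng(2)
      n_lab, n_unlab, d = 30, 200, 50
      feats = {m: (rng.standard_normal((n_lab, d)), rng.standard_normal((n_unlab, d)))
               for m in ("color", "texture", "shape")}          # one feature modality per "teacher"
      y_lab = rng.integers(0, 2, n_lab)

      ranks = []
      for X_lab, X_unlab in feats.values():
          dist, _ = NearestNeighbors(n_neighbors=1).fit(X_lab).kneighbors(X_unlab)
          ranks.append(np.argsort(np.argsort(dist.ravel())))     # each teacher's difficulty rank (0 = easiest)
      consensus = np.mean(ranks, axis=0)                          # consensus difficulty across all teachers

      easiest = np.argsort(consensus)[:20]                        # current curriculum: the simplest unlabeled images
      X_lab_all = np.hstack([Xl for Xl, _ in feats.values()])
      X_unlab_all = np.hstack([Xu for _, Xu in feats.values()])
      learner = KNeighborsClassifier(n_neighbors=3).fit(X_lab_all, y_lab)
      pseudo_labels = learner.predict(X_unlab_all[easiest])       # labels assigned in this propagation round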

  15. Deep features for efficient multi-biometric recognition with face and ear images

    NASA Astrophysics Data System (ADS)

    Omara, Ibrahim; Xiao, Gang; Amrani, Moussa; Yan, Zifei; Zuo, Wangmeng

    2017-07-01

    Recently, multimodal biometric systems have received considerable research interest in many applications, especially in the field of security. Multimodal systems can increase the resistance to spoof attacks, provide more details and flexibility, and lead to better performance and lower error rates. In this paper, we present a multimodal biometric system based on face and ear, and show how to exploit deep features extracted with Convolutional Neural Networks (CNNs) from the face and ear images to obtain more powerful discriminative features and more robust representations. First, the deep features for face and ear images are extracted based on VGG-M Net. Second, the extracted deep features are fused by using traditional concatenation and a Discriminant Correlation Analysis (DCA) algorithm. Third, a multiclass support vector machine is adopted for matching and classification. The experimental results show that the proposed multimodal system based on deep features is efficient and achieves a promising recognition rate of up to 100% using face and ear. In addition, the results indicate that the fusion based on DCA is superior to traditional fusion.
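
    A compressed sketch of this pipeline is shown below under simplifying assumptions: fixed random projections stand in for VGG-M deep features, plain concatenation is used in place of DCA fusion (DCA itself is not part of standard libraries), and a linear multiclass SVM performs matching; all sizes are illustrative.

      import numpy as np
      from sklearn.preprocessing import normalize
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(3)
      n_subjects, n_samples, raw_dim, feat_dim = 10, 200, 4096, 256
      labels = rng.integers(0, n_subjects, n_samples)

      # Stand-ins for CNN activations of the face and ear image of each sample.
      face_raw = rng.standard_normal((n_samples, raw_dim))
      ear_raw = rng.standard_normal((n_samples, raw_dim))
      proj = rng.standard_normal((raw_dim, feat_dim)) / np.sqrt(raw_dim)   # fixed random projection as a toy "deep feature"

      face_feat = normalize(face_raw @ proj)             # L2-normalised per-modality descriptors
      ear_feat = normalize(ear_raw @ proj)
      fused = np.hstack([face_feat, ear_feat])            # concatenation fusion (DCA would align/reduce dimensions instead)

      clf = LinearSVC().fit(fused, labels)                # one-vs-rest multiclass SVM for identification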

  16. Multimodal optical imaging system for in vivo investigation of cerebral oxygen delivery and energy metabolism

    PubMed Central

    Yaseen, Mohammad A.; Srinivasan, Vivek J.; Gorczynska, Iwona; Fujimoto, James G.; Boas, David A.; Sakadžić, Sava

    2015-01-01

    Improving our understanding of brain function requires novel tools to observe multiple physiological parameters with high resolution in vivo. We have developed a multimodal imaging system for investigating multiple facets of cerebral blood flow and metabolism in small animals. The system was custom designed and features multiple optical imaging capabilities, including 2-photon and confocal lifetime microscopy, optical coherence tomography, laser speckle imaging, and optical intrinsic signal imaging. Here, we provide details of the system’s design and present in vivo observations of multiple metrics of cerebral oxygen delivery and energy metabolism, including oxygen partial pressure, microvascular blood flow, and NADH autofluorescence. PMID:26713212

  17. Computational method for multi-modal microscopy based on transport of intensity equation

    NASA Astrophysics Data System (ADS)

    Li, Jiaji; Chen, Qian; Sun, Jiasong; Zhang, Jialin; Zuo, Chao

    2017-02-01

    In this paper, we develop the requisite theory to describe a hybrid virtual-physical multi-modal imaging system that yields quantitative phase, Zernike phase contrast, differential interference contrast (DIC), and light field moment imaging simultaneously, based on the transport of intensity equation (TIE). We then give an experimental demonstration of these ideas through time-lapse imaging of live HeLa cell mitosis. Experimental results verify that a tunable-lens-based TIE system, combined with the appropriate post-processing algorithm, can achieve a variety of promising imaging modalities in parallel with the quantitative phase images for the dynamic study of cellular processes.
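
    For reference, a minimal Fourier-domain TIE phase solver is sketched below (a standard textbook simplification, not the authors' code): it assumes a nearly uniform in-focus intensity I0, so the TIE, -k dI/dz = div(I grad(phi)), reduces to a Poisson equation for the phase; the wavelength and pixel size defaults are assumed values.

      import numpy as np

      def tie_phase(dIdz, I0, k=2 * np.pi / 0.5e-6, pixel=1e-6):
          """Recover phase from the axial intensity derivative via the TIE,
          assuming a nearly uniform in-focus intensity I0 (Fourier Poisson solver)."""
          ny, nx = dIdz.shape
          fx = np.fft.fftfreq(nx, d=pixel)
          fy = np.fft.fftfreq(ny, d=pixel)
          FX, FY = np.meshgrid(fx, fy)
          lap = -4 * np.pi**2 * (FX**2 + FY**2)      # Fourier symbol of the transverse Laplacian
          lap[0, 0] = 1.0                            # avoid division by zero at the DC term
          rhs = -k * dIdz / I0                       # -k dI/dz = I0 * Laplacian(phi)  =>  Laplacian(phi) = rhs
          phi_hat = np.fft.fft2(rhs) / lap
          phi_hat[0, 0] = 0.0                        # the constant (piston) phase is undetermined
          return np.real(np.fft.ifft2(phi_hat))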

  18. Multimodality Molecular Imaging of Cardiac Cell Transplantation: Part I. Reporter Gene Design, Characterization, and Optical in Vivo Imaging of Bone Marrow Stromal Cells after Myocardial Infarction

    PubMed Central

    Parashurama, Natesh; Ahn, Byeong-Cheol; Ziv, Keren; Ito, Ken; Paulmurugan, Ramasamy; Willmann, Jürgen K.; Chung, Jaehoon; Ikeno, Fumiaki; Swanson, Julia C.; Merk, Denis R.; Lyons, Jennifer K.; Yerushalmi, David; Teramoto, Tomohiko; Kosuge, Hisanori; Dao, Catherine N.; Ray, Pritha; Patel, Manishkumar; Chang, Ya-fang; Mahmoudi, Morteza; Cohen, Jeff Eric; Goldstone, Andrew Brooks; Habte, Frezghi; Bhaumik, Srabani; Yaghoubi, Shahriar; Robbins, Robert C.; Dash, Rajesh; Yang, Phillip C.; Brinton, Todd J.; Yock, Paul G.; McConnell, Michael V.

    2016-01-01

    Purpose To use multimodality reporter-gene imaging to assess the serial survival of marrow stromal cells (MSC) after therapy for myocardial infarction (MI) and to determine if the requisite preclinical imaging end point was met prior to a follow-up large-animal MSC imaging study. Materials and Methods Animal studies were approved by the Institutional Administrative Panel on Laboratory Animal Care. Mice (n = 19) that had experienced MI were injected with bone marrow–derived MSC that expressed a multimodality triple fusion (TF) reporter gene. The TF reporter gene (fluc2-egfp-sr39ttk) consisted of a human promoter, ubiquitin, driving firefly luciferase 2 (fluc2), enhanced green fluorescent protein (egfp), and the sr39tk positron emission tomography reporter gene. Serial bioluminescence imaging of MSC-TF and ex vivo luciferase assays were performed. Correlations were analyzed with the Pearson product-moment correlation, and serial imaging results were analyzed with a mixed-effects regression model. Results Analysis of the MSC-TF after cardiac cell therapy showed significantly lower signal on days 8 and 14 than on day 2 (P = .011 and P = .001, respectively). MSC-TF with MI demonstrated significantly higher signal than MSC-TF without MI at days 4, 8, and 14 (P = .016). Ex vivo luciferase activity assay confirmed the presence of MSC-TF on days 8 and 14 after MI. Conclusion Multimodality reporter-gene imaging was successfully used to assess serial MSC survival after therapy for MI, and it was determined that the requisite preclinical imaging end point, 14 days of MSC survival, was met prior to a follow-up large-animal MSC study. © RSNA, 2016 Online supplemental material is available for this article. PMID:27308957

  19. Drusen Characterization with Multimodal Imaging

    PubMed Central

    Spaide, Richard F.; Curcio, Christine A.

    2010-01-01

    Summary Multimodal imaging findings and histological demonstration of soft drusen, cuticular drusen, and subretinal drusenoid deposits provided information used to develop a model explaining their imaging characteristics. Purpose To characterize the known appearance of cuticular drusen, subretinal drusenoid deposits (reticular pseudodrusen), and soft drusen as revealed by multimodal fundus imaging; to create an explanatory model that accounts for these observations. Methods Reported color, fluorescein angiographic, autofluorescence, and spectral domain optical coherence tomography (SD-OCT) images of patients with cuticular drusen, soft drusen, and subretinal drusenoid deposits were reviewed, as were actual images from affected eyes. Representative histological sections were examined. The geometry, location, and imaging characteristics of these lesions were evaluated. A hypothesis based on the Beer-Lambert Law of light absorption was generated to fit these observations. Results Cuticular drusen appear as numerous uniform round yellow-white punctate accumulations under the retinal pigment epithelium (RPE). Soft drusen are larger yellow-white dome-shaped mounds of deposit under the RPE. Subretinal drusenoid deposits are polymorphous light-grey interconnected accumulations above the RPE. Based on the model, both cuticular and soft drusen appear yellow due to the removal of shorter wavelength light by a double pass through the RPE. Subretinal drusenoid deposits, which are located on the RPE, are not subjected to short wavelength attenuation and therefore are more prominent when viewed with blue light. The location and morphology of extracellular material in relationship to the RPE, and associated changes to RPE morphology and pigmentation, appeared to be primary determinants of druse appearance in different imaging modalities. Conclusion Although cuticular drusen, subretinal drusenoid deposits, and soft drusen are composed of common components, they are distinguishable by multimodal imaging due to differences in location, morphology, and optical filtering effects by drusenoid material and the RPE. PMID:20924263
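
    The color argument can be made concrete with a small Beer-Lambert calculation (illustrative only; the absorption coefficients and RPE thickness below are assumed, not measured values): a double pass through the RPE attenuates short (blue) wavelengths much more than long ones, so sub-RPE deposits appear yellow, whereas deposits located above the RPE escape this filtering.

      import numpy as np

      wavelengths_nm = np.array([450.0, 550.0, 650.0])     # blue, green, red
      mu_rpe_per_cm = np.array([120.0, 60.0, 30.0])        # assumed RPE absorption coefficients, larger at short wavelengths
      rpe_thickness_cm = 10e-4                             # roughly 10 micrometres

      # Beer-Lambert law: I/I0 = exp(-mu * path length); sub-RPE drusen are viewed through a double pass.
      transmission = np.exp(-2 * mu_rpe_per_cm * rpe_thickness_cm)
      for lam, t in zip(wavelengths_nm, transmission):
          print(f"{lam:.0f} nm: double-pass transmission {t:.2f}")   # blue is attenuated most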

  20. Tunable X-ray speckle-based phase-contrast and dark-field imaging using the unified modulated pattern analysis approach

    NASA Astrophysics Data System (ADS)

    Zdora, M.-C.; Thibault, P.; Deyhle, H.; Vila-Comamala, J.; Rau, C.; Zanette, I.

    2018-05-01

    X-ray phase-contrast and dark-field imaging provides valuable, complementary information about the specimen under study. Among the multimodal X-ray imaging methods, X-ray grating interferometry and speckle-based imaging have drawn particular attention, which, however, in their common implementations incur certain limitations that can restrict their range of applications. Recently, the unified modulated pattern analysis (UMPA) approach was proposed to overcome these limitations and combine grating- and speckle-based imaging in a single approach. Here, we demonstrate the multimodal imaging capabilities of UMPA and highlight its tunable character regarding spatial resolution, signal sensitivity and scan time by using different reconstruction parameters.

  1. Multimodality imaging of ovarian cystic lesions: Review with an imaging based algorithmic approach

    PubMed Central

    Wasnik, Ashish P; Menias, Christine O; Platt, Joel F; Lalchandani, Usha R; Bedi, Deepak G; Elsayes, Khaled M

    2013-01-01

    Ovarian cystic masses include a spectrum of benign, borderline and high grade malignant neoplasms. Imaging plays a crucial role in characterization and pretreatment planning of incidentally detected or suspected adnexal masses, as diagnosis of ovarian malignancy at an early stage is correlated with a better prognosis. Knowledge of differential diagnosis, imaging features, management trends and an algorithmic approach of such lesions is important for optimal clinical management. This article illustrates a multi-modality approach in the diagnosis of a spectrum of ovarian cystic masses and also proposes an algorithmic approach for the diagnosis of these lesions. PMID:23671748

  2. Rapid multi-modality preregistration based on SIFT descriptor.

    PubMed

    Chen, Jian; Tian, Jie

    2006-01-01

    This paper describes the scale invariant feature transform (SIFT) method for rapid preregistration of medical images. The technique originates from Lowe's method, wherein preregistration is achieved by matching corresponding keypoints between two images. Applying the SIFT preregistration step before refined registration reduces the overall computational complexity, since the preregistration itself requires only O(n) calculations. SIFT features are highly distinctive, invariant to image scaling and rotation, and partially invariant to changes in illumination and contrast; the method is therefore robust and repeatable for coarsely matching two images. We also altered the descriptor so that our method can deal with multimodality preregistration.
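
    A minimal example of SIFT-based keypoint matching for coarse pre-alignment is sketched below using OpenCV (note that this uses the standard SIFT descriptor rather than the authors' modified multimodality variant, and the image file names are placeholders).

      import cv2

      # Placeholder file names; any pair of grayscale images to be pre-aligned would do.
      img_fixed = cv2.imread("fixed_slice.png", cv2.IMREAD_GRAYSCALE)
      img_moving = cv2.imread("moving_slice.png", cv2.IMREAD_GRAYSCALE)

      sift = cv2.SIFT_create()
      kp1, des1 = sift.detectAndCompute(img_fixed, None)     # keypoints + 128-D descriptors
      kp2, des2 = sift.detectAndCompute(img_moving, None)

      matcher = cv2.BFMatcher(cv2.NORM_L2)
      matches = matcher.knnMatch(des1, des2, k=2)
      good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe's ratio test

      # The matched keypoint pairs can seed a coarse rigid/affine pre-alignment
      # before a refined (e.g., intensity-based) registration step.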

  3. A systematic, multimodality approach to emergency elbow imaging.

    PubMed

    Singer, Adam D; Hanna, Tarek; Jose, Jean; Datir, Abhijit

    2016-01-01

    The elbow is a complex synovial hinge joint that is frequently involved in both athletic and nonathletic injuries. A thorough understanding of the normal anatomy and various injury patterns is essential when utilizing diagnostic imaging to identify damaged structures and to assist in surgical planning. In this review, the elbow anatomy will be scrutinized in a systematic approach. This will be followed by a comprehensive presentation of elbow injuries that are commonly seen in the emergency department accompanied by multimodality imaging findings. A short discussion regarding pitfalls in elbow imaging is also included. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Multimode-Optical-Fiber Imaging Probe

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah

    2000-01-01

    Currently, endoscopic surgery uses single-mode fiber-bundles to obtain in vivo image information inside orifices of the body. This limits their use to the larger natural bodily orifices and to surgical procedures where there is plenty of room for manipulation. The knee joint, for example, can be easily viewed with a fiber optic viewer, but joints in the finger cannot. However, there are a host of smaller orifices where fiber endoscopy would play an important role if a cost-effective fiber probe were developed with small enough dimensions (< 250 microns). Examples of beneficiaries of micro-endoscopes are the treatment of the Eustachian tube of the middle ear, the breast ducts, tear ducts, coronary arteries, fallopian tubes, as well as the treatment of salivary duct parotid disease, and the neuro endoscopy of the ventricles and spinal canal. To solve this problem, this work describes an approach for recovering images from tightly confined spaces using multimode fibers and analytically demonstrates that the concept is sound. The proof of concept draws upon earlier works that concentrated on image recovery after two-way transmission through a multimode fiber as well as work that demonstrated the recovery of images after one-way transmission through a multimode fiber. Both relied on generating a phase conjugated wavefront which was predistorted with the characteristics of the fiber. The described approach also relies on generating a phase conjugated wavefront, but utilizes two fibers to capture the image at some intermediate point (accessible by the fibers, but which is otherwise visually inaccessible).

  5. Multimode-Optical-Fiber Imaging Probe

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah

    1999-01-01

    Currently, endoscopic surgery uses single-mode fiber-bundles to obtain in vivo image information inside the orifices of the body. This limits their use to the larger natural orifices and to surgical procedures where there is plenty of room for manipulation. The knee joint, for example, can be easily viewed with a fiber optic viewer, but joints in the finger cannot. However, there are a host of smaller orifices where fiber endoscopy would play an important role if a cost-effective fiber probe were developed with small enough dimensions (less than or equal to 250 microns). Examples of beneficiaries of micro-endoscopes are the treatment of the Eustachian tube of the middle ear, the breast ducts, tear ducts, coronary arteries, fallopian tubes, as well as the treatment of salivary duct parotid disease, and the neuro endoscopy of the ventricles and spinal canal. This work describes an approach for recovering images from tightly confined spaces using multimode fibers. The concept draws upon earlier works that concentrated on image recovery after two-way transmission through a multimode fiber as well as work that demonstrated the recovery of images after one-way transmission through a multimode fiber. Both relied on generating a phase conjugated wavefront, which was predistorted with the characteristics of the fiber. The approach described here also relies on generating a phase conjugated wavefront, but utilizes two fibers to capture the image at some intermediate point (accessible by the fibers, but which is otherwise visually inaccessible).

  6. Patient-tailored multimodal neuroimaging, visualization and quantification of human intra-cerebral hemorrhage

    NASA Astrophysics Data System (ADS)

    Goh, Sheng-Yang M.; Irimia, Andrei; Vespa, Paul M.; Van Horn, John D.

    2016-03-01

    In traumatic brain injury (TBI) and intracerebral hemorrhage (ICH), the heterogeneity of lesion sizes and types necessitates a variety of imaging modalities to acquire a comprehensive perspective on injury extent. Although it is advantageous to combine imaging modalities and to leverage their complementary benefits, there are difficulties in integrating information across imaging types. Thus, it is important that efforts be dedicated to the creation and sustained refinement of resources for multimodal data integration. Here, we propose a novel approach to the integration of neuroimaging data acquired from human patients with TBI/ICH using various modalities; we also demonstrate the integrated use of multimodal magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI) data for TBI analysis based on both visual observations and quantitative metrics. 3D models of healthy-appearing tissues and TBI-related pathology are generated, both of which are derived from multimodal imaging data. MRI volumes acquired using FLAIR, SWI, and T2 GRE are used to segment pathology. Healthy tissues are segmented using user-supervised tools, and results are visualized using a novel graphical approach called a 'connectogram', where brain connectivity information is depicted within a circle of radially aligned elements. Inter-region connectivity and its strength are represented by links of variable opacities drawn between regions, where opacity reflects the percentage longitudinal change in brain connectivity density. Our method for integrating, analyzing and visualizing structural brain changes due to TBI and ICH can promote knowledge extraction and enhance the understanding of mechanisms underlying recovery.

  7. MMX-I: A data-processing software for multi-modal X-ray imaging and tomography

    NASA Astrophysics Data System (ADS)

    Bergamaschi, A.; Medjoubi, K.; Messaoudi, C.; Marco, S.; Somogyi, A.

    2017-06-01

    Scanning hard X-ray imaging allows simultaneous acquisition of multimodal information, including X-ray fluorescence, absorption, phase and dark-field contrasts, providing structural and chemical details of the samples. Combining these scanning techniques with the infrastructure developed for fast data acquisition at Synchrotron Soleil makes it possible to perform multimodal imaging and tomography during routine user experiments at the Nanoscopium beamline. A main challenge of such imaging techniques is the online processing and analysis of the generated very large (several hundred gigabyte) multimodal data sets. This is especially important for the wide user community foreseen at the user-oriented Nanoscopium beamline (e.g., from the fields of biology, life sciences, geology and geobiology), much of which has no experience in such data handling. MMX-I is a new multi-platform open-source freeware for the processing and reconstruction of scanning multi-technique X-ray imaging and tomographic datasets. The MMX-I project aims to offer both expert users and beginners the possibility of processing and analysing raw data, either on-site or off-site. Therefore we have developed a multi-platform (Mac, Windows and Linux 64bit) data processing tool, which is easy to install, comprehensive, intuitive, extendable and user-friendly. MMX-I is now routinely used by the Nanoscopium user community and has demonstrated its performance in treating big data.

  8. Image-guided thoracic surgery in the hybrid operation room.

    PubMed

    Ujiie, Hideki; Effat, Andrew; Yasufuku, Kazuhiro

    2017-01-01

    There has been an increase in the use of image-guided technology to facilitate minimally invasive therapy. The next generation of minimally invasive therapy is focused on advancement and translation of novel image-guided technologies in therapeutic interventions, including surgery, interventional pulmonology, radiation therapy, and interventional laser therapy. To establish the efficacy of different minimally invasive therapies, we have developed a hybrid operating room, known as the guided therapeutics operating room (GTx OR), at the Toronto General Hospital. The GTx OR is equipped with multi-modality image-guidance systems, which feature a dual source-dual energy computed tomography (CT) scanner, a robotic cone-beam CT (CBCT)/fluoroscopy unit, a high-performance endobronchial ultrasound system, an endoscopic surgery system, a near-infrared (NIR) fluorescence imaging system, and navigation tracking systems. The novel multimodality image-guidance systems allow physicians to quickly and accurately image patients while they are on the operating table. This yields improved outcomes, since physicians are able to use image guidance during their procedures and to carry out innovative multi-modality therapeutics. Multiple preclinical translational studies pertaining to innovative minimally invasive technology are being developed in our guided therapeutics laboratory (GTx Lab). The GTx Lab is equipped with technology and multimodality image-guidance systems similar to those in the GTx OR, and acts as an appropriate platform for translation of research into human clinical trials. Through the GTx Lab, we are able to perform basic research, such as the development of image-guided technologies, preclinical model testing, as well as preclinical imaging, and then translate that research into the GTx OR. This OR allows for the utilization of new technologies in cancer therapy, including molecular imaging and other innovative imaging modalities, and therefore enables a better quality of life for patients, both during and after the procedure. In this article, we describe the capabilities of the GTx systems and discuss the first-in-human technologies used and evaluated in the GTx OR.

  9. WE-H-206-02: Recent Advances in Multi-Modality Molecular Imaging of Small Animals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsui, B.

    Lihong V. Wang: Photoacoustic tomography (PAT), combining non-ionizing optical and ultrasonic waves via the photoacoustic effect, provides in vivo multiscale functional, metabolic, and molecular imaging. Broad applications include imaging of the breast, brain, skin, esophagus, colon, vascular system, and lymphatic system in humans or animals. Light offers rich contrast but does not penetrate biological tissue in straight paths as x-rays do. Consequently, high-resolution pure optical imaging (e.g., confocal microscopy, two-photon microscopy, and optical coherence tomography) is limited to penetration within the optical diffusion limit (∼1 mm in the skin). Ultrasonic imaging, on the contrary, provides fine spatial resolution but suffers from both poor contrast in early-stage tumors and strong speckle artifacts. In PAT, pulsed laser light penetrates tissue and generates a small but rapid temperature rise, which induces emission of ultrasonic waves due to thermoelastic expansion. The ultrasonic waves, orders of magnitude less scattering than optical waves, are then detected to form high-resolution images of optical absorption at depths up to 7 cm, conquering the optical diffusion limit. PAT is the only modality capable of imaging across the length scales of organelles, cells, tissues, and organs (up to whole-body small animals) with consistent contrast. This rapidly growing technology promises to enable multiscale biological research and accelerate translation from microscopic laboratory discoveries to macroscopic clinical practice. PAT may also hold the key to label-free early detection of cancer by in vivo quantification of hypermetabolism, the quintessential hallmark of malignancy. Learning Objectives: To understand the contrast mechanism of PAT To understand the multiscale applications of PAT Benjamin M. W. Tsui: Multi-modality molecular imaging instrumentation and techniques have been major developments in small animal imaging that have contributed significantly to biomedical research during the past decade. The initial development was an extension of clinical PET/CT and SPECT/CT from humans to small animals, combining the unique functional information obtained from PET and SPECT with the anatomical information provided by CT in registered multi-modality images. The requirements to image a mouse whose size is an order of magnitude smaller than that of a human have spurred advances in new radiation detector technologies, novel imaging system designs and special image reconstruction and processing techniques. Examples are new detector materials and designs with high intrinsic resolution, multi-pinhole (MPH) collimator design for much improved resolution and detection efficiency compared to the conventional collimator designs in SPECT, 3D high-resolution and artifact-free MPH and sparse-view image reconstruction techniques, and iterative image reconstruction methods with system response modeling for resolution recovery and image noise reduction for much improved image quality. The spatial resolution of PET and SPECT has improved from ∼6–12 mm to ∼1 mm a few years ago to sub-millimeter today. A recent commercial small animal SPECT system has achieved a resolution of ∼0.25 mm, which surpasses that of a state-of-the-art PET system whose resolution is limited by the positron range. More recently, multimodality SA PET/MRI and SPECT/MRI systems have been developed in research laboratories.
Also, multi-modality SA imaging systems that include other imaging modalities such as optical and ultrasound are being actively pursued. In this presentation, we will provide a review of the development, recent advances and future outlook of multi-modality molecular imaging of small animals. Learning Objectives: To learn about the two major multi-modality molecular imaging techniques of small animals. To learn about the spatial resolution achievable by the molecular imaging systems for small animals today. To learn about the new multi-modality imaging instrumentation and techniques that are being developed. Sang Hyun Cho: X-ray fluorescence (XRF) imaging, such as x-ray fluorescence computed tomography (XFCT), offers unique capabilities for accurate identification and quantification of metals within the imaging objects. As a result, it has emerged as a promising quantitative imaging modality in recent years, especially in conjunction with metal-based imaging probes. This talk will familiarize the audience with the basic principles of XRF/XFCT imaging. It will also cover the latest development of benchtop XFCT technology. Additionally, the use of metallic nanoparticles such as gold nanoparticles, in conjunction with benchtop XFCT, will be discussed within the context of preclinical multimodal multiplexed molecular imaging. Learning Objectives: To learn the basic principles of XRF/XFCT imaging To learn the latest advances in benchtop XFCT development for preclinical imaging Funding support received from NIH and DOD; Funding support received from GE Healthcare; Funding support received from Siemens AX; Patent royalties received from GE Healthcare; L. Wang, Funding Support: NIH; COI: Microphotoacoustics; S. Cho, Yes: NIH/NCI grant R01CA155446; DOD/PCRP grant W81XWH-12-1-0198.

  10. Characterizing virus-induced gene silencing at the cellular level with in situ multimodal imaging

    DOE PAGES

    Burkhow, Sadie J.; Stephens, Nicole M.; Mei, Yu; ...

    2018-05-25

    Reverse genetic strategies, such as virus-induced gene silencing, are powerful techniques to study gene function. Currently, there are few tools to study the spatial dependence of the consequences of gene silencing at the cellular level. Here, we report the use of multimodal Raman and mass spectrometry imaging to study the cellular-level biochemical changes that occur from silencing the phytoene desaturase (pds) gene using a Foxtail mosaic virus (FoMV) vector in maize leaves. The multimodal imaging method allows the localized carotenoid distribution to be measured and reveals differences lost in the spatial average when analyzing a carotenoid extraction of the whole leaf. The Raman and mass spectrometry signals are complementary in nature: silencing pds reduces the downstream carotenoid Raman signal and increases the phytoene mass spectrometry signal.

  11. Characterizing virus-induced gene silencing at the cellular level with in situ multimodal imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burkhow, Sadie J.; Stephens, Nicole M.; Mei, Yu

    Reverse genetic strategies, such as virus-induced gene silencing, are powerful techniques to study gene function. Currently, there are few tools to study the spatial dependence of the consequences of gene silencing at the cellular level. Here, we report the use of multimodal Raman and mass spectrometry imaging to study the cellular-level biochemical changes that occur from silencing the phytoene desaturase (pds) gene using a Foxtail mosaic virus (FoMV) vector in maize leaves. The multimodal imaging method allows the localized carotenoid distribution to be measured and reveals differences lost in the spatial average when analyzing a carotenoid extraction of the whole leaf. The Raman and mass spectrometry signals are complementary in nature: silencing pds reduces the downstream carotenoid Raman signal and increases the phytoene mass spectrometry signal.

  12. Calibration for single multi-mode fiber digital scanning microscopy imaging system

    NASA Astrophysics Data System (ADS)

    Yin, Zhe; Liu, Guodong; Liu, Bingguo; Gan, Yu; Zhuang, Zhitao; Chen, Fengdong

    2015-11-01

    The single multimode fiber (MMF) digital scanning imaging system is a development trend in modern endoscopy. We concentrate on the calibration method for the imaging system. The calibration method comprises two processes: forming scanning focused spots and calibrating the couple factors, which vary with position. An adaptive parallel coordinate (APC) algorithm is adopted to form the focused spots at the multimode fiber (MMF) output. Compared with other algorithms, APC has several merits, i.e., high speed, a small amount of calculation and no iterations. The ratio of the optical power captured by the MMF to the intensity of the focused spot is called the couple factor. We set up the calibration experimental system to form the scanning focused spots and calculate the couple factors for different object positions. The experimental results show that the couple factor is higher in the center than at the edge.

  13. Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets.

    PubMed

    Scharfe, Michael; Pielot, Rainer; Schreiber, Falk

    2010-01-11

    Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. The results demonstrate that the CBE processor in a PlayStation 3 accelerates computational intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics.
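
    The paper targets the Cell Broadband Engine specifically; the hardware-agnostic sketch below only illustrates the general idea of distributing a 2D-into-3D slice-matching search across CPU cores, using normalized cross-correlation as a stand-in similarity measure (the authors' actual metric and optimisation strategy are not given in the abstract).

    ```python
    import numpy as np
    from multiprocessing import Pool

    def ncc(a, b):
        """Normalized cross-correlation between two equally shaped 2D arrays."""
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float((a * b).mean())

    def score_slice(args):
        """Similarity of the 2D section against one z-slice of the 3D volume."""
        z, volume, section = args
        return z, ncc(volume[z], section)

    if __name__ == "__main__":
        volume = np.random.rand(64, 128, 128)                    # hypothetical 3D dataset
        section = volume[20] + 0.05 * np.random.rand(128, 128)   # noisy 2D cut

        with Pool() as pool:  # distribute slice scoring across CPU cores
            scores = pool.map(score_slice,
                              [(z, volume, section) for z in range(volume.shape[0])])

        best_z, best_score = max(scores, key=lambda s: s[1])
        print(f"best matching slice: z={best_z}, NCC={best_score:.3f}")
    ```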

  14. A Novel Multifunctional Theranostic Liposome Drug Delivery System: Construction, Characterization, and Multimodality MR, Near-infrared Fluorescent and Nuclear Imaging

    PubMed Central

    Li, Shihong; Goins, Beth; Zhang, Lujun; Bao, Ande

    2012-01-01

Liposomes are effective lipid nanoparticle drug delivery systems, which can also be functionalized with non-invasive multimodality imaging agents with each modality providing distinct information and having synergistic advantages in diagnosis, monitoring of disease treatment, and evaluation of liposomal drug pharmacokinetics. We designed and constructed a multifunctional theranostic liposomal drug delivery system, which integrated multimodality magnetic resonance (MR), near-infrared (NIR) fluorescent and nuclear imaging of liposomal drug delivery, and therapy monitoring and prediction. The pre-manufactured liposomes were composed of DSPC/cholesterol/Gd-DOTA-DSPE/DOTA-DSPE at a molar ratio of 39:35:25:1, with an ammonium sulfate/pH gradient. A lipidized NIR fluorescent tracer, IRDye-DSPE, was effectively post-inserted into the pre-manufactured liposomes. Doxorubicin could be effectively post-loaded into the multifunctional liposomes. The multifunctional doxorubicin-liposomes could also be stably radiolabeled with 99mTc or 64Cu for single photon emission computed tomography (SPECT) or positron emission tomography (PET) imaging, respectively. MR images displayed the high resolution micro-intratumoral distribution of the liposomes in squamous cell carcinoma of head and neck (SCCHN) tumor xenografts in nude rats after intratumoral injection. NIR fluorescent, SPECT and PET images also clearly showed either the high intratumoral retention or distribution of the multifunctional liposomes. This multifunctional drug carrying liposome system is promising for disease theranostics allowing non-invasive multimodality NIR fluorescent, MR, SPECT and PET imaging of their in vivo behavior and capitalizing on the inherent advantages of each modality. PMID:22577859

  15. Multimodal microscopy and the stepwise multi-photon activation fluorescence of melanin

    NASA Astrophysics Data System (ADS)

    Lai, Zhenhua

The author's work is divided into three aspects: multimodal microscopy, stepwise multi-photon activation fluorescence (SMPAF) of melanin, and customized-profile lenses (CPL) for on-axis laser scanners, which are introduced in turn. A multimodal microscope provides the ability to image samples with multiple modalities on the same stage, which combines the benefits of all modalities. The multimodal microscopes developed in this dissertation are the Keck 3D fusion multimodal microscope 2.0 (3DFM 2.0), upgraded from the old 3DFM with improved performance and flexibility, and the multimodal microscope for targeting small particles (the "Target" system). The control systems developed for both microscopes are low-cost and easy to build, with all components off-the-shelf. The control systems have not only significantly decreased the complexity and size of the microscopes, but also increased the pixel resolution and flexibility. The SMPAF of melanin, activated by a continuous-wave (CW) near-infrared (NIR) laser, has potential applications as a low-cost and reliable method of detecting melanin. The photophysics of melanin SMPAF has been studied by theoretical analysis of the excitation process and investigation of the spectra, activation threshold, and photon number absorption of melanin SMPAF. SMPAF images of melanin in mouse hair and skin, mouse melanoma, and human black and white hairs are compared with images taken by conventional multi-photon fluorescence microscopy (MPFM) and confocal reflectance microscopy (CRM). SMPAF images significantly increase specificity and demonstrate the potential to increase sensitivity for melanin detection compared to MPFM and CRM images. Employing melanin SMPAF imaging to detect melanin inside human skin in vivo has been demonstrated, which proves the effectiveness of melanin detection using SMPAF for medical purposes. Selective melanin ablation with micrometer resolution has been demonstrated using the Target system. Compared to traditional selective photothermolysis, this method offers higher precision, higher specificity, and deeper penetration. Therefore, SMPAF-guided selective ablation of melanin is a promising tool for removing melanin for both medical and cosmetic purposes. Three CPLs have been designed: for low-cost linear-motion scanners, low-cost fast spinning scanners, and high-precision fast spinning scanners. Each design has been tailored to industrial manufacturing capabilities and market demands.

  16. Multimodal brain-tumor segmentation based on Dirichlet process mixture model with anisotropic diffusion and Markov random field prior.

    PubMed

    Lu, Yisu; Jiang, Jun; Yang, Wei; Feng, Qianjin; Chen, Wufan

    2014-01-01

Brain-tumor segmentation is an important clinical requirement for brain-tumor diagnosis and radiotherapy planning. It is well-known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without the initialization of the number of clusters. Because the classical MDP segmentation cannot be applied for real-time diagnosis, a new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smooth constraint is proposed in this study. Besides the segmentation of single-modal brain-tumor images, we developed the algorithm to segment multimodal brain-tumor images using the magnetic resonance (MR) multimodal features and obtain the active tumor and edema at the same time. The proposed algorithm is evaluated using 32 multimodal MR glioma image sequences, and the segmentation results are compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance, and the method has great potential for practical real-time clinical use.
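
    A minimal sketch of the general idea, using scikit-learn's BayesianGaussianMixture with a Dirichlet-process prior as a stand-in for the paper's MDP model; plain Gaussian smoothing replaces anisotropic diffusion, the MRF constraint is omitted, and the input images are synthetic.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from sklearn.mixture import BayesianGaussianMixture

    # Hypothetical multimodal MR slices (e.g., T1, T2, FLAIR), already co-registered.
    t1, t2, flair = (np.random.rand(128, 128) for _ in range(3))

    # Edge-preserving anisotropic diffusion is replaced here by a simple Gaussian smooth.
    channels = [gaussian_filter(img, sigma=1.0) for img in (t1, t2, flair)]
    features = np.stack([c.ravel() for c in channels], axis=1)

    # The Dirichlet-process prior lets the effective number of clusters be inferred
    # rather than fixed in advance; n_components is only an upper bound.
    dpgmm = BayesianGaussianMixture(
        n_components=10,
        weight_concentration_prior_type="dirichlet_process",
        max_iter=200,
        random_state=0,
    )
    labels = dpgmm.fit_predict(features).reshape(t1.shape)
    print("clusters actually used:", np.unique(labels).size)
    ```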

  17. Multimodal Brain-Tumor Segmentation Based on Dirichlet Process Mixture Model with Anisotropic Diffusion and Markov Random Field Prior

    PubMed Central

    Lu, Yisu; Jiang, Jun; Chen, Wufan

    2014-01-01

Brain-tumor segmentation is an important clinical requirement for brain-tumor diagnosis and radiotherapy planning. It is well-known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without the initialization of the number of clusters. Because the classical MDP segmentation cannot be applied for real-time diagnosis, a new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smooth constraint is proposed in this study. Besides the segmentation of single-modal brain-tumor images, we developed the algorithm to segment multimodal brain-tumor images using the magnetic resonance (MR) multimodal features and obtain the active tumor and edema at the same time. The proposed algorithm is evaluated using 32 multimodal MR glioma image sequences, and the segmentation results are compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance, and the method has great potential for practical real-time clinical use. PMID:25254064

  18. Intraoperative high-field magnetic resonance imaging, multimodal neuronavigation, and intraoperative electrophysiological monitoring-guided surgery for treating supratentorial cavernomas.

    PubMed

    Li, Fang-Ye; Chen, Xiao-Lei; Xu, Bai-Nan

    2016-09-01

To determine the beneficial effects of intraoperative high-field magnetic resonance imaging (MRI), multimodal neuronavigation, and intraoperative electrophysiological monitoring-guided surgery for treating supratentorial cavernomas. Twelve patients with 13 supratentorial cavernomas were prospectively enrolled and operated on using 1.5 T intraoperative MRI, multimodal neuronavigation, and intraoperative electrophysiological monitoring. All cavernomas were deeply located in subcortical areas or involved critical areas. Intraoperative high-field MRIs were obtained for intraoperative "visualization" of surrounding eloquent structures, "brain shift" corrections, and navigational plan updates. All cavernomas were successfully resected with guidance from intraoperative MRI, multimodal neuronavigation, and intraoperative electrophysiological monitoring. In 5 cases of supratentorial cavernomas, intraoperative "brain shift" severely hindered localization of the lesions; however, intraoperative MRI enabled precise localization of these lesions. During long-term (>3 months) follow-up, some or all presenting signs and symptoms improved or resolved in 4 cases but were unchanged in 7 patients. Intraoperative high-field MRI, multimodal neuronavigation, and intraoperative electrophysiological monitoring are helpful in surgeries for the treatment of small, deeply seated subcortical cavernomas.

  19. Machine learning approaches for integrating clinical and imaging features in late-life depression classification and response prediction.

    PubMed

    Patel, Meenal J; Andreescu, Carmen; Price, Julie C; Edelman, Kathryn L; Reynolds, Charles F; Aizenstein, Howard J

    2015-10-01

Currently, depression diagnosis relies primarily on behavioral symptoms and signs, and treatment is guided by trial and error instead of evaluating associated underlying brain characteristics. Unlike past studies, we attempted to estimate accurate prediction models for late-life depression diagnosis and treatment response using multiple machine learning methods with inputs of multi-modal imaging and non-imaging whole brain and network-based features. Late-life depression patients (medicated post-recruitment) (n = 33) and older non-depressed individuals (n = 35) were recruited. Their demographics and cognitive ability scores were recorded, and brain characteristics were acquired using multi-modal magnetic resonance imaging pretreatment. Linear and nonlinear learning methods were tested for estimating accurate prediction models. A learning method called alternating decision trees estimated the most accurate prediction models for late-life depression diagnosis (87.27% accuracy) and treatment response (89.47% accuracy). The diagnosis model included measures of age, Mini-Mental State Examination score, and structural imaging (e.g., whole brain atrophy and global white matter hyperintensity burden). The treatment response model included measures of structural and functional connectivity. Combinations of multi-modal imaging and/or non-imaging measures may help better predict late-life depression diagnosis and treatment response. As a preliminary observation, we speculate that the results may also suggest that different underlying brain characteristics defined by multi-modal imaging measures, rather than region-based differences, are associated with depression versus depression recovery, because to our knowledge this is the first depression study to accurately predict both using the same approach. These findings may help better understand late-life depression and identify preliminary steps toward personalized late-life depression treatment. Copyright © 2015 John Wiley & Sons, Ltd.
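
    Alternating decision trees are not available in scikit-learn, so the sketch below uses a gradient-boosted tree ensemble as a stand-in to show how combined clinical and imaging features could be cross-validated for diagnosis prediction; the feature matrix is synthetic and the feature names are hypothetical.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(0)
    n = 68  # 33 depressed + 35 non-depressed subjects, as in the study

    # Hypothetical feature matrix: age, MMSE, whole-brain atrophy, WMH burden,
    # plus a few connectivity measures; labels are diagnosis (1 = depressed).
    X = rng.normal(size=(n, 6))
    y = np.array([1] * 33 + [0] * 35)

    clf = GradientBoostingClassifier(random_state=0)
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
    print(f"cross-validated accuracy: {acc.mean():.2%} ± {acc.std():.2%}")
    ```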

  20. Multimode optical dermoscopy (SkinSpect) analysis for skin with melanocytic nevus

    NASA Astrophysics Data System (ADS)

    Vasefi, Fartash; MacKinnon, Nicholas; Saager, Rolf; Kelly, Kristen M.; Maly, Tyler; Chave, Robert; Booth, Nicholas; Durkin, Anthony J.; Farkas, Daniel L.

    2016-04-01

We have developed a multimode dermoscope (SkinSpect™) capable of illuminating human skin samples in vivo with spectrally-programmable linearly-polarized light at 33 wavelengths between 468 nm and 857 nm. Diffusely reflected photons are separated into collinear and cross-polarized image paths, and images are captured for each illumination wavelength. In vivo human skin nevi (N = 20) were evaluated with the multimode dermoscope, and melanin and hemoglobin concentrations were compared with Spatially Modulated Quantitative Spectroscopy (SMoQS) measurements. Both systems show low correlation between their melanin and hemoglobin concentrations, demonstrating the ability of the SkinSpect™ to separate these molecular signatures and thus act as a biologically plausible device capable of early onset melanoma detection.
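
    One common way to estimate per-pixel melanin and hemoglobin concentrations from multi-wavelength reflectance is Beer-Lambert-style linear unmixing; the sketch below illustrates that idea only, with placeholder extinction spectra rather than SkinSpect's actual calibration.

    ```python
    import numpy as np

    wavelengths_nm = np.linspace(468, 857, 33)           # SkinSpect illumination band
    # Placeholder extinction spectra (arbitrary units), NOT measured values.
    eps_melanin = np.exp(-wavelengths_nm / 300.0)
    eps_hemoglobin = 0.5 + 0.4 * np.sin(wavelengths_nm / 60.0)
    E = np.stack([eps_melanin, eps_hemoglobin], axis=1)  # (33, 2) design matrix

    # Simulated absorbance spectrum of one pixel with known concentrations.
    true_c = np.array([0.8, 0.3])
    absorbance = E @ true_c + 0.01 * np.random.default_rng(0).normal(size=33)

    # Least-squares unmixing recovers the two chromophore concentrations.
    c_hat, *_ = np.linalg.lstsq(E, absorbance, rcond=None)
    print("estimated [melanin, hemoglobin]:", np.round(c_hat, 3))
    ```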

  1. Laser Microdissection and Atmospheric Pressure Chemical Ionization Mass Spectrometry Coupled for Multimodal Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lorenz, Matthias; Ovchinnikova, Olga S; Kertesz, Vilmos

    2013-01-01

This paper describes the coupling of ambient laser ablation surface sampling, accomplished using a laser capture microdissection system, with atmospheric pressure chemical ionization mass spectrometry for high spatial resolution multimodal imaging. A commercial laser capture microdissection system was placed in close proximity to a modified ion source of a mass spectrometer designed to allow for sampling of laser ablated material via a transfer tube directly into the ionization region. Rhodamine 6G dye of red Sharpie ink in a laser etched pattern as well as cholesterol and phosphatidylcholine in a cerebellum mouse brain thin tissue section were identified and imaged from full scan mass spectra. A minimal spot diameter of 8 μm was achieved using the 10X microscope cutting objective with a lateral oversampling pixel resolution of about 3.7 μm. Distinguishing between features approximately 13 μm apart in a cerebellum mouse brain thin tissue section was demonstrated in a multimodal fashion including co-registered optical and mass spectral chemical images.

  2. Design and applications of a multimodality image data warehouse framework.

    PubMed

    Wong, Stephen T C; Hoo, Kent Soo; Knowlton, Robert C; Laxer, Kenneth D; Cao, Xinhau; Hawkins, Randall A; Dillon, William P; Arenson, Ronald L

    2002-01-01

    A comprehensive data warehouse framework is needed, which encompasses imaging and non-imaging information in supporting disease management and research. The authors propose such a framework, describe general design principles and system architecture, and illustrate a multimodality neuroimaging data warehouse system implemented for clinical epilepsy research. The data warehouse system is built on top of a picture archiving and communication system (PACS) environment and applies an iterative object-oriented analysis and design (OOAD) approach and recognized data interface and design standards. The implementation is based on a Java CORBA (Common Object Request Broker Architecture) and Web-based architecture that separates the graphical user interface presentation, data warehouse business services, data staging area, and backend source systems into distinct software layers. To illustrate the practicality of the data warehouse system, the authors describe two distinct biomedical applications--namely, clinical diagnostic workup of multimodality neuroimaging cases and research data analysis and decision threshold on seizure foci lateralization. The image data warehouse framework can be modified and generalized for new application domains.

  3. Design and Applications of a Multimodality Image Data Warehouse Framework

    PubMed Central

    Wong, Stephen T.C.; Hoo, Kent Soo; Knowlton, Robert C.; Laxer, Kenneth D.; Cao, Xinhau; Hawkins, Randall A.; Dillon, William P.; Arenson, Ronald L.

    2002-01-01

    A comprehensive data warehouse framework is needed, which encompasses imaging and non-imaging information in supporting disease management and research. The authors propose such a framework, describe general design principles and system architecture, and illustrate a multimodality neuroimaging data warehouse system implemented for clinical epilepsy research. The data warehouse system is built on top of a picture archiving and communication system (PACS) environment and applies an iterative object-oriented analysis and design (OOAD) approach and recognized data interface and design standards. The implementation is based on a Java CORBA (Common Object Request Broker Architecture) and Web-based architecture that separates the graphical user interface presentation, data warehouse business services, data staging area, and backend source systems into distinct software layers. To illustrate the practicality of the data warehouse system, the authors describe two distinct biomedical applications—namely, clinical diagnostic workup of multimodality neuroimaging cases and research data analysis and decision threshold on seizure foci lateralization. The image data warehouse framework can be modified and generalized for new application domains. PMID:11971885

  4. A Multimodal Approach to Counselor Supervision.

    ERIC Educational Resources Information Center

    Ponterotto, Joseph G.; Zander, Toni A.

    1984-01-01

    Represents an initial effort to apply Lazarus's multimodal approach to a model of counselor supervision. Includes continuously monitoring the trainee's behavior, affect, sensations, images, cognitions, interpersonal functioning, and when appropriate, biological functioning (diet and drugs) in the supervisory process. (LLL)

  5. A Visual Analytics Approach for Station-Based Air Quality Data

    PubMed Central

    Du, Yi; Ma, Cuixia; Wu, Chao; Xu, Xiaowei; Guo, Yike; Zhou, Yuanchun; Li, Jianhui

    2016-01-01

    With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support. PMID:28029117

  6. A Visual Analytics Approach for Station-Based Air Quality Data.

    PubMed

    Du, Yi; Ma, Cuixia; Wu, Chao; Xu, Xiaowei; Guo, Yike; Zhou, Yuanchun; Li, Jianhui

    2016-12-24

    With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support.

  7. Moderated histogram equalization, an automatic means of enhancing the contrast in digital light micrographs reversibly.

    PubMed

    Entwistle, A

    2004-06-01

    A means for improving the contrast in the images produced from digital light micrographs is described that requires no intervention by the experimenter: zero-order, scaling, tonally independent, moderated histogram equalization. It is based upon histogram equalization, which often results in digital light micrographs that contain regions that appear to be saturated, negatively biased or very grainy. Here a non-decreasing monotonic function is introduced into the process, which moderates the changes in contrast that are generated. This method is highly effective for all three of the main types of contrast found in digital light micrography: bright objects viewed against a dark background, e.g. fluorescence and dark-ground or dark-field image data sets; bright and dark objects sets against a grey background, e.g. image data sets collected with phase or Nomarski differential interference contrast optics; and darker objects set against a light background, e.g. views of absorbing specimens. Moreover, it is demonstrated that there is a single fixed moderating function, whose actions are independent of the number of elements of image data, which works well with all types of digital light micrographs, including multimodal or multidimensional image data sets. The use of this fixed function is very robust as the appearance of the final image is not altered discernibly when it is applied repeatedly to an image data set. Consequently, moderated histogram equalization can be applied to digital light micrographs as a push-button solution, thereby eliminating biases that those undertaking the processing might have introduced during manual processing. Finally, moderated histogram equalization yields a mapping function and so, through the use of look-up tables, indexes or palettes, the information present in the original data file can be preserved while an image with the improved contrast is displayed on the monitor screen.
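
    The abstract does not give the authors' moderating function, so the sketch below uses a blend of the equalizing CDF mapping with the identity as one plausible non-decreasing moderation, returned as a look-up table so that, as described, the original image data remain untouched.

    ```python
    import numpy as np

    def moderated_equalization_lut(image, alpha=0.5, levels=256):
        """Return a monotonic look-up table blending histogram equalization
        with the identity mapping; alpha=1 is full equalization, alpha=0 none.
        The blending used here is an illustrative choice, not the paper's."""
        hist, _ = np.histogram(image.ravel(), bins=levels, range=(0, levels))
        cdf = np.cumsum(hist).astype(float)
        cdf /= cdf[-1]                                    # normalized CDF in [0, 1]
        equalized = cdf * (levels - 1)                    # classic equalization map
        identity = np.arange(levels, dtype=float)
        lut = alpha * equalized + (1 - alpha) * identity  # still non-decreasing
        return np.round(lut).astype(np.uint16)

    img = (np.random.rand(256, 256) ** 2 * 255).astype(np.uint16)  # dark-biased test image
    lut = moderated_equalization_lut(img, alpha=0.6)
    display = lut[img]   # original data preserved; only the displayed copy changes
    print(display.min(), display.max())
    ```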

  8. Multimodal Nonlinear Optical Imaging for Sensitive Detection of Multiple Pharmaceutical Solid-State Forms and Surface Transformations.

    PubMed

    Novakovic, Dunja; Saarinen, Jukka; Rojalin, Tatu; Antikainen, Osmo; Fraser-Miller, Sara J; Laaksonen, Timo; Peltonen, Leena; Isomäki, Antti; Strachan, Clare J

    2017-11-07

    Two nonlinear imaging modalities, coherent anti-Stokes Raman scattering (CARS) and sum-frequency generation (SFG), were successfully combined for sensitive multimodal imaging of multiple solid-state forms and their changes on drug tablet surfaces. Two imaging approaches were used and compared: (i) hyperspectral CARS combined with principal component analysis (PCA) and SFG imaging and (ii) simultaneous narrowband CARS and SFG imaging. Three different solid-state forms of indomethacin-the crystalline gamma and alpha forms, as well as the amorphous form-were clearly distinguished using both approaches. Simultaneous narrowband CARS and SFG imaging was faster, but hyperspectral CARS and SFG imaging has the potential to be applied to a wider variety of more complex samples. These methodologies were further used to follow crystallization of indomethacin on tablet surfaces under two storage conditions: 30 °C/23% RH and 30 °C/75% RH. Imaging with (sub)micron resolution showed that the approach allowed detection of very early stage surface crystallization. The surfaces progressively crystallized to predominantly (but not exclusively) the gamma form at lower humidity and the alpha form at higher humidity. Overall, this study suggests that multimodal nonlinear imaging is a highly sensitive, solid-state (and chemically) specific, rapid, and versatile imaging technique for understanding and hence controlling (surface) solid-state forms and their complex changes in pharmaceuticals.
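
    A minimal sketch of the hyperspectral-CARS-plus-PCA step: unfold the (y, x, wavenumber) cube into a pixel-by-spectrum matrix and compute score maps for the leading components, which is where distinct solid-state forms would separate; the cube here is synthetic.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Synthetic hyperspectral CARS cube: 64 x 64 pixels, 200 spectral points.
    rng = np.random.default_rng(1)
    cube = rng.normal(size=(64, 64, 200))

    # Unfold to (n_pixels, n_wavenumbers) and project onto principal components.
    X = cube.reshape(-1, cube.shape[-1])
    pca = PCA(n_components=3)
    scores = pca.fit_transform(X)

    # Score images: spatial maps of each component, where gamma, alpha, and
    # amorphous regions would show up as distinct contrast.
    score_maps = scores.reshape(64, 64, 3)
    print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
    ```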

  9. A Multi-modal, Discriminative and Spatially Invariant CNN for RGB-D Object Labeling.

    PubMed

    Asif, Umar; Bennamoun, Mohammed; Sohel, Ferdous

    2017-08-30

While deep convolutional neural networks have shown a remarkable success in image classification, the problems of inter-class similarities, intra-class variances, the effective combination of multimodal data, and the spatial variability in images of objects remain major challenges. To address these problems, this paper proposes a novel framework to learn a discriminative and spatially invariant classification model for object and indoor scene recognition using multimodal RGB-D imagery. This is achieved through three postulates: 1) spatial invariance - this is achieved by combining a spatial transformer network with a deep convolutional neural network to learn features which are invariant to spatial translations, rotations, and scale changes, 2) high discriminative capability - this is achieved by introducing Fisher encoding within the CNN architecture to learn features which have small inter-class similarities and large intra-class compactness, and 3) multimodal hierarchical fusion - this is achieved through the regularization of semantic segmentation to a multi-modal CNN architecture, where class probabilities are estimated at different hierarchical levels (i.e., image- and pixel-levels), and fused into a Conditional Random Field (CRF)-based inference hypothesis, the optimization of which produces consistent class labels in RGB-D images. Extensive experimental evaluations on RGB-D object and scene datasets, and live video streams (acquired from Kinect) show that our framework produces superior object and scene classification results compared to the state-of-the-art methods.

  10. Application of high-resolution linear Radon transform for Rayleigh-wave dispersive energy imaging and mode separating

    USGS Publications Warehouse

    Luo, Y.; Xia, J.; Miller, R.D.; Liu, J.; Xu, Y.; Liu, Q.

    2008-01-01

Multichannel Analysis of Surface Waves (MASW) is an efficient tool for obtaining the vertical shear-wave velocity profile. One of the key steps in the MASW method is to generate an image of dispersive energy in the frequency-velocity domain, so dispersion curves can be determined by picking the peaks of dispersion energy. In this paper, we image Rayleigh-wave dispersive energy and separate multiple modes from a multichannel record by high-resolution linear Radon transform (LRT). We first introduce Rayleigh-wave dispersive energy imaging by high-resolution LRT. We then show the process of Rayleigh-wave mode separation. Results of synthetic and real-world examples demonstrate that (1) compared with the slant-stacking algorithm, high-resolution LRT can improve the resolution of dispersion-energy images by more than 50%; (2) high-resolution LRT can successfully separate multimode dispersive energy of Rayleigh waves with high resolution; and (3) multimode separation and reconstruction expand the frequency ranges of higher-mode dispersive energy, which not only increases the investigation depth but also provides a means to accurately determine cut-off frequencies.
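
    The paper's high-resolution LRT itself is not reproduced here; the sketch below shows the simpler phase-shift/slant-stack dispersion image it is compared against, mapping a multichannel record into the frequency-velocity domain. Record geometry and sampling are hypothetical.

    ```python
    import numpy as np

    def dispersion_image(record, dt, offsets, freqs, velocities):
        """Slant-stack (phase-shift) dispersion image |E(f, v)| of a shot record.
        record: (n_traces, n_samples); offsets: source-receiver distances in m."""
        spectra = np.fft.rfft(record, axis=1)
        f_axis = np.fft.rfftfreq(record.shape[1], dt)
        image = np.zeros((len(freqs), len(velocities)))
        for i, f in enumerate(freqs):
            k = np.argmin(np.abs(f_axis - f))
            u = spectra[:, k] / (np.abs(spectra[:, k]) + 1e-12)  # amplitude-normalized
            for j, v in enumerate(velocities):
                phase = np.exp(2j * np.pi * f * offsets / v)     # delay compensation
                image[i, j] = np.abs(np.sum(phase * u))
        return image

    # Hypothetical 24-channel record, 1 ms sampling, 2 m geophone spacing.
    rec = np.random.randn(24, 1024)
    img = dispersion_image(rec, 1e-3, np.arange(24) * 2.0 + 5.0,
                           freqs=np.arange(5, 50, 1.0),
                           velocities=np.arange(100, 800, 10.0))
    print(img.shape)  # dispersion curves would be picked along the peaks of img
    ```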

  11. Multidimensional custom-made non-linear microscope: from ex-vivo to in-vivo imaging

    NASA Astrophysics Data System (ADS)

    Cicchi, R.; Sacconi, L.; Jasaitis, A.; O'Connor, R. P.; Massi, D.; Sestini, S.; de Giorgi, V.; Lotti, T.; Pavone, F. S.

    2008-09-01

We have built a custom-made multidimensional non-linear microscope equipped with a combination of several non-linear laser imaging techniques involving fluorescence lifetime, multispectral two-photon, and second-harmonic generation imaging. The optical system was mounted on a vertical honeycomb breadboard in an upright configuration, using two galvo-mirrors relayed by two spherical mirrors as scanners. A double detection system working in non-descanning mode allows both photon-counting and proportional detection regimes. This experimental setup, offering high spatial (micrometric) and temporal (sub-nanosecond) resolution, has been used to image both ex-vivo and in-vivo biological samples, including cells, tissues, and living animals. Multidimensional imaging was used to spectroscopically characterize human skin lesions, such as malignant melanoma and naevi. Moreover, two-color detection of two-photon excited fluorescence was applied to in-vivo imaging of the intact neocortex of living mice, as well as to induce neuronal microlesions by femtosecond laser burning. The presented applications demonstrate the capability of the instrument to be used in a wide range of biological and biomedical studies.
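
    As one example of the lifetime dimension mentioned above, the sketch below fits a mono-exponential decay to a photon-counting (TCSPC-style) histogram; the data are synthetic and the model is the simplest possible choice, not the authors' analysis.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(t, amplitude, tau, background):
        """Mono-exponential fluorescence decay model."""
        return amplitude * np.exp(-t / tau) + background

    # Synthetic photon-counting histogram: 2.5 ns lifetime, Poisson noise.
    t = np.linspace(0, 12.5, 256)                      # ns, one laser period
    rng = np.random.default_rng(2)
    counts = rng.poisson(decay(t, 1000.0, 2.5, 20.0))

    popt, _ = curve_fit(decay, t, counts, p0=(counts.max(), 1.0, 10.0))
    print(f"fitted lifetime: {popt[1]:.2f} ns")
    ```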

  12. Probes for multidimensional nanospectroscopic imaging and methods of fabrication thereof

    DOEpatents

    Weber-Bargioni, Alexander; Cabrini, Stefano; Bao, Wei; Melli, Mauro; Yablonovitch, Eli; Schuck, Peter J

    2015-03-17

    This disclosure provides systems, methods, and apparatus related to probes for multidimensional nanospectroscopic imaging. In one aspect, a method includes providing a transparent tip comprising a dielectric material. A four-sided pyramidal-shaped structure is formed at an apex of the transparent tip using a focused ion beam. Metal layers are deposited over two opposing sides of the four-sided pyramidal-shaped structure.

  13. PIRATE: pediatric imaging response assessment and targeting environment

    NASA Astrophysics Data System (ADS)

    Glenn, Russell; Zhang, Yong; Krasin, Matthew; Hua, Chiaho

    2010-02-01

    By combining the strengths of various imaging modalities, the multimodality imaging approach has potential to improve tumor staging, delineation of tumor boundaries, chemo-radiotherapy regime design, and treatment response assessment in cancer management. To address the urgent needs for efficient tools to analyze large-scale clinical trial data, we have developed an integrated multimodality, functional and anatomical imaging analysis software package for target definition and therapy response assessment in pediatric radiotherapy (RT) patients. Our software provides quantitative tools for automated image segmentation, region-of-interest (ROI) histogram analysis, spatial volume-of-interest (VOI) analysis, and voxel-wise correlation across modalities. To demonstrate the clinical applicability of this software, histogram analyses were performed on baseline and follow-up 18F-fluorodeoxyglucose (18F-FDG) PET images of nine patients with rhabdomyosarcoma enrolled in an institutional clinical trial at St. Jude Children's Research Hospital. In addition, we combined 18F-FDG PET, dynamic-contrast-enhanced (DCE) MR, and anatomical MR data to visualize the heterogeneity in tumor pathophysiology with the ultimate goal of adaptive targeting of regions with high tumor burden. Our software is able to simultaneously analyze multimodality images across multiple time points, which could greatly speed up the analysis of large-scale clinical trial data and validation of potential imaging biomarkers.
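
    A minimal sketch of the voxel-wise cross-modality correlation feature, assuming the PET and DCE-MR volumes are already resampled to a common grid; the volume names and the crude VOI definition are hypothetical.

    ```python
    import numpy as np

    def voxelwise_correlation(vol_a, vol_b, mask):
        """Pearson correlation between two co-registered volumes inside an ROI mask."""
        a = vol_a[mask].astype(float)
        b = vol_b[mask].astype(float)
        return float(np.corrcoef(a, b)[0, 1])

    # Hypothetical co-registered volumes: FDG-PET SUV and a DCE-MR parameter map.
    rng = np.random.default_rng(3)
    pet_suv = rng.random((64, 64, 32))
    dce_ktrans = 0.6 * pet_suv + 0.4 * rng.random((64, 64, 32))  # partly related
    tumor_mask = pet_suv > 0.7                                   # crude VOI

    r = voxelwise_correlation(pet_suv, dce_ktrans, tumor_mask)
    print(f"PET vs. DCE correlation in VOI: {r:.2f}")
    ```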

  14. Ex vivo catheter-based imaging of coronary atherosclerosis using multimodality OCT and NIRAF excited at 633 nm

    PubMed Central

    Wang, Hao; Gardecki, Joseph A.; Ughi, Giovanni J.; Jacques, Paulino Vacas; Hamidi, Ehsan; Tearney, Guillermo J.

    2015-01-01

While optical coherence tomography (OCT) has been shown to be capable of imaging coronary plaque microstructure, additional chemical/molecular information may be needed in order to determine which lesions are at risk of causing an acute coronary event. In this study, we used a recently developed imaging system and double-clad fiber (DCF) catheter capable of simultaneously acquiring both OCT and red excited near-infrared autofluorescence (NIRAF) images (excitation: 633 nm, emission: 680 nm to 900 nm). We found that NIRAF is elevated in lesions that contain necrotic core – a feature that is critical for vulnerable plaque diagnosis and that is not readily discriminated by OCT alone. We first utilized a DCF ball lens probe and a bench top setup to acquire en face NIRAF images of aortic plaques ex vivo (n = 20). In addition, we used the OCT-NIRAF system and fully assembled catheters to acquire multimodality images from human coronary arteries (n = 15) prosected from human cadaver hearts (n = 5). Comparison of these images with corresponding histology demonstrated that necrotic core plaques exhibited significantly higher NIRAF intensity than other plaque types. These results suggest that multimodality intracoronary OCT-NIRAF imaging technology may be used in the future to provide improved characterization of coronary artery disease in human patients. PMID:25909020

  15. Research of the multimodal brain-tumor segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Lu, Yisu; Chen, Wufan

    2015-12-01

It is well-known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without the initialization of the number of clusters. A new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smooth constraint is proposed in this study. Besides the segmentation of single-modal brain tumor images, we developed the algorithm to segment multimodal brain tumor images using the magnetic resonance (MR) multimodal features and obtain the active tumor and edema at the same time. The proposed algorithm is evaluated and compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance.

  16. Albumin based versatile multifunctional nanocarriers for cancer therapy: Fabrication, surface modification, multimodal therapeutics and imaging approaches.

    PubMed

    Kudarha, Ritu R; Sawant, Krutika K

    2017-12-01

Albumin is a versatile protein used as a carrier system for cancer therapeutics. As a carrier, it can provide tumor specificity, maintain a therapeutic concentration of the active moiety (drug, gene, peptide, protein, etc.) for a long period of time, and reduce drug-related toxicities. Apart from cancer therapy, it is also utilized in the imaging and multimodal therapy of cancer. This review highlights the important properties, structure, and types of albumin-based nanocarriers with regard to their use for cancer targeting. It also provides a brief discussion of methods for preparing these nanocarriers and of their surface modification. Applications of albumin nanocarriers for cancer therapy, gene delivery, imaging, phototherapy, and multimodal therapy are also discussed. The review concludes with a brief discussion of marketed albumin-based nanoformulations and those under clinical trials. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Multimodal MR imaging in hepatic encephalopathy: state of the art.

    PubMed

    Zhang, Xiao Dong; Zhang, Long Jiang

    2018-06-01

Hepatic encephalopathy (HE) is a neurological or neuropsychological complication due to liver failure or portosystemic shunting. The clinical manifestations are highly variable, ranging from mild cognitive or motor impairment initially to gradual progression to coma, or even death, without treatment. Neuroimaging plays a critical role in uncovering the neural mechanism of HE. In particular, multimodality MR imaging is able to assess both structural and functional derangements of the brain with HE from focal or neural-network perspectives. In recent years, there has been rapid development of novel MR technologies and applications to investigate the pathophysiological mechanism of HE. Therefore, it is necessary to update the latest MR findings regarding HE obtained with multimodality MRI to refine and deepen our understanding of the neural traits of HE. Herein, this review highlights the latest MR imaging findings in HE to refresh our understanding of MRI applications in HE.

  18. Spinal metastases: multimodality imaging in diagnosis and stereotactic body radiation therapy planning.

    PubMed

    Jabehdar Maralani, Pejman; Lo, Simon S; Redmond, Kristin; Soliman, Hany; Myrehaug, Sten; Husain, Zain A; Heyn, Chinthaka; Kapadia, Anish; Chan, Aimee; Sahgal, Arjun

    2017-01-01

    Due to increased effectiveness of cancer treatments and increasing survival rates, metastatic disease has become more frequent compared to the past, with the spine being the most common site of bony metastases. Diagnostic imaging is an integral part of screening, diagnosis and follow-up of spinal metastases. In this article, we review the principles of multimodality imaging for tumor detection with respect to their value for diagnosis and stereotactic body radiation therapy planning for spinal metastases. We will also review the current international consensus agreement for stereotactic body radiation therapy planning, and the role of imaging in achieving the best possible treatment plan.

  19. Simultaneous in vivo imaging of melanin and lipofuscin in the retina with multimodal photoacoustic ophthalmoscopy

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangyang; Zhang, Hao F.; Zhou, Lixiang; Jiao, Shuliang

    2012-02-01

We combined photoacoustic ophthalmoscopy (PAOM) with autofluorescence imaging for simultaneous in vivo imaging of dual molecular contrasts in the retina using a single light source. The dual molecular contrasts come from melanin and lipofuscin in the retinal pigment epithelium (RPE). Melanin and lipofuscin are two types of pigments believed to play opposite roles (protective vs. exacerbating) in the RPE during the aging process. We successfully imaged the retina of pigmented and albino rats at different ages. The experimental results showed that the multimodal PAOM system can be a potentially powerful tool in the study of age-related degenerative retinal diseases.

  20. New Finger Biometric Method Using Near Infrared Imaging

    PubMed Central

    Lee, Eui Chul; Jung, Hyunwoo; Kim, Daeyeoul

    2011-01-01

    In this paper, we propose a new finger biometric method. Infrared finger images are first captured, and then feature extraction is performed using a modified Gaussian high-pass filter through binarization, local binary pattern (LBP), and local derivative pattern (LDP) methods. Infrared finger images include the multimodal features of finger veins and finger geometries. Instead of extracting each feature using different methods, the modified Gaussian high-pass filter is fully convolved. Therefore, the extracted binary patterns of finger images include the multimodal features of veins and finger geometries. Experimental results show that the proposed method has an error rate of 0.13%. PMID:22163741
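
    A sketch of the filter-then-binary-pattern pipeline outlined above, using an unsharp-style Gaussian high-pass as a stand-in for the authors' modified filter and scikit-image's local_binary_pattern for the LBP step; the LDP step and the matching stage are omitted, and the input image is synthetic.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.feature import local_binary_pattern

    rng = np.random.default_rng(4)
    finger_ir = rng.random((120, 240))                 # hypothetical NIR finger image

    # Gaussian high-pass: original minus its low-pass copy (stand-in for the
    # authors' modified filter), emphasizing vein and contour structure.
    highpass = finger_ir - gaussian_filter(finger_ir, sigma=4.0)

    binary = (highpass > 0).astype(np.uint8)           # simple binarization
    lbp = local_binary_pattern(highpass, P=8, R=1, method="uniform")

    # Histogram of LBP codes as a matching feature vector.
    feature, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    print(binary.mean(), feature.round(3))
    ```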

  1. Live animal myelin histomorphometry of the spinal cord with video-rate multimodal nonlinear microendoscopy

    NASA Astrophysics Data System (ADS)

    Bélanger, Erik; Crépeau, Joël; Laffray, Sophie; Vallée, Réal; De Koninck, Yves; Côté, Daniel

    2012-02-01

    In vivo imaging of cellular dynamics can be dramatically enabling to understand the pathophysiology of nervous system diseases. To fully exploit the power of this approach, the main challenges have been to minimize invasiveness and maximize the number of concurrent optical signals that can be combined to probe the interplay between multiple cellular processes. Label-free coherent anti-Stokes Raman scattering (CARS) microscopy, for example, can be used to follow demyelination in neurodegenerative diseases or after trauma, but myelin imaging alone is not sufficient to understand the complex sequence of events that leads to the appearance of lesions in the white matter. A commercially available microendoscope is used here to achieve minimally invasive, video-rate multimodal nonlinear imaging of cellular processes in live mouse spinal cord. The system allows for simultaneous CARS imaging of myelin sheaths and two-photon excitation fluorescence microendoscopy of microglial cells and axons. Morphometric data extraction at high spatial resolution is also described, with a technique for reducing motion-related imaging artifacts. Despite its small diameter, the microendoscope enables high speed multimodal imaging over wide areas of tissue, yet at resolution sufficient to quantify subtle differences in myelin thickness and microglial motility.

  2. Detail-enhanced multimodality medical image fusion based on gradient minimization smoothing filter and shearing filter.

    PubMed

    Liu, Xingbin; Mei, Wenbo; Du, Huiqian

    2018-02-13

In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed, based on a multi-scale joint decomposition framework (MJDF) and a shearing filter (SF). The MJDF, constructed with a gradient minimization smoothing filter (GMSF) and a Gaussian low-pass filter (GLF), is used to decompose source images into low-pass layers, edge layers, and detail layers at multiple scales. In order to highlight the detail information in the fused image, the edge layer and the detail layer at each scale are weighted and combined into a detail-enhanced layer. As the directional filter is effective in capturing salient information, the SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. A visual saliency map-based fusion rule is designed for fusing the low-pass layers, and the sum of standard deviations is used as the activity-level measurement for fusing the directional coefficients. The final fusion result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method, with shift-invariance, directional selectivity, and a detail-enhancing property, is efficient in preserving and enhancing the detail information of multimodality medical images. Graphical abstract: The detailed implementation of the proposed medical image fusion algorithm.
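
    A much-reduced sketch of the base/detail decomposition-and-fusion idea: Gaussian low-pass layers are averaged and the stronger detail coefficient is kept per pixel. The gradient-minimization smoothing, shearing-filter, and saliency-map stages of the actual MJDF pipeline are omitted, and the inputs are synthetic.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fuse_two_scale(img_a, img_b, sigma=2.0):
        """Fuse two co-registered modality images via base/detail decomposition."""
        base_a, base_b = gaussian_filter(img_a, sigma), gaussian_filter(img_b, sigma)
        detail_a, detail_b = img_a - base_a, img_b - base_b

        fused_base = 0.5 * (base_a + base_b)                 # average low-pass layers
        fused_detail = np.where(np.abs(detail_a) >= np.abs(detail_b),
                                detail_a, detail_b)          # keep the stronger detail
        return fused_base + fused_detail

    rng = np.random.default_rng(5)
    ct_like, mr_like = rng.random((128, 128)), rng.random((128, 128))
    fused = fuse_two_scale(ct_like, mr_like)
    print(fused.shape, fused.dtype)
    ```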

  3. 3D multi-scale FCN with random modality voxel dropout learning for Intervertebral Disc Localization and Segmentation from Multi-modality MR Images.

    PubMed

    Li, Xiaomeng; Dou, Qi; Chen, Hao; Fu, Chi-Wing; Qi, Xiaojuan; Belavý, Daniel L; Armbrecht, Gabriele; Felsenberg, Dieter; Zheng, Guoyan; Heng, Pheng-Ann

    2018-04-01

    Intervertebral discs (IVDs) are small joints that lie between adjacent vertebrae. The localization and segmentation of IVDs are important for spine disease diagnosis and measurement quantification. However, manual annotation is time-consuming and error-prone with limited reproducibility, particularly for volumetric data. In this work, our goal is to develop an automatic and accurate method based on fully convolutional networks (FCN) for the localization and segmentation of IVDs from multi-modality 3D MR data. Compared with single modality data, multi-modality MR images provide complementary contextual information, which contributes to better recognition performance. However, how to effectively integrate such multi-modality information to generate accurate segmentation results remains to be further explored. In this paper, we present a novel multi-scale and modality dropout learning framework to locate and segment IVDs from four-modality MR images. First, we design a 3D multi-scale context fully convolutional network, which processes the input data in multiple scales of context and then merges the high-level features to enhance the representation capability of the network for handling the scale variation of anatomical structures. Second, to harness the complementary information from different modalities, we present a random modality voxel dropout strategy which alleviates the co-adaption issue and increases the discriminative capability of the network. Our method achieved the 1st place in the MICCAI challenge on automatic localization and segmentation of IVDs from multi-modality MR images, with a mean segmentation Dice coefficient of 91.2% and a mean localization error of 0.62 mm. We further conduct extensive experiments on the extended dataset to validate our method. We demonstrate that the proposed modality dropout strategy with multi-modality images as contextual information improved the segmentation accuracy significantly. Furthermore, experiments conducted on extended data collected from two different time points demonstrate the efficacy of our method on tracking the morphological changes in a longitudinal study. Copyright © 2018 Elsevier B.V. All rights reserved.
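
    A minimal sketch of the random modality dropout strategy described above, applied to a four-modality array with channels first; whole modality channels are zeroed at random while always keeping at least one, to discourage co-adaptation between modalities.

    ```python
    import numpy as np

    def random_modality_dropout(volume, drop_prob=0.25, rng=None):
        """Randomly zero whole modality channels of a (C, D, H, W) MR array,
        always keeping at least one modality."""
        rng = np.random.default_rng() if rng is None else rng
        keep = rng.random(volume.shape[0]) >= drop_prob
        if not keep.any():                      # never drop every modality
            keep[rng.integers(volume.shape[0])] = True
        out = volume.copy()
        out[~keep] = 0.0
        return out, keep

    four_modality = np.random.rand(4, 36, 64, 64)   # hypothetical four-modality stack
    augmented, kept = random_modality_dropout(four_modality, drop_prob=0.3)
    print("modalities kept:", kept)
    ```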

  4. Unifying framework for multimodal brain MRI segmentation based on Hidden Markov Chains.

    PubMed

    Bricq, S; Collet, Ch; Armspach, J P

    2008-12-01

In the frame of 3D medical imaging, accurate segmentation of multimodal brain MR images is of interest for many brain disorders. However, due to several factors such as noise, imaging artifacts, intrinsic tissue variation, and partial volume effects, tissue classification remains a challenging task. In this paper, we present a unifying framework for unsupervised segmentation of multimodal brain MR images that includes partial volume effects, bias field correction, and information given by a probabilistic atlas. The proposed method takes neighborhood information into account using a Hidden Markov Chain (HMC) model. Due to the limited resolution of imaging devices, voxels may be composed of a mixture of different tissue types; this partial volume effect is included to achieve an accurate segmentation of brain tissues. Instead of assigning each voxel to a single tissue class (i.e., hard classification), we compute the relative amount of each pure tissue class in each voxel (mixture estimation). Further, a bias field estimation step is added to the proposed algorithm to correct intensity inhomogeneities. Furthermore, atlas priors were incorporated using a probabilistic brain atlas containing prior expectations about the spatial localization of different tissue classes. This atlas is considered as a complementary sensor, and the proposed method is extended to multimodal brain MRI without any user-tunable parameter (unsupervised algorithm). To validate this new unifying framework, we present experimental results on both synthetic and real brain images for which the ground truth is available. Comparison with other commonly used techniques demonstrates the accuracy and robustness of this new Markovian segmentation scheme.

  5. A low-cost multimodal head-mounted display system for neuroendoscopic surgery.

    PubMed

    Xu, Xinghua; Zheng, Yi; Yao, Shujing; Sun, Guochen; Xu, Bainan; Chen, Xiaolei

    2018-01-01

With rapid advances in technology, wearable devices such as head-mounted displays (HMD) have been adopted for various uses in medical science, ranging from simply aiding in fitness to assisting surgery. We aimed to investigate the feasibility and practicability of a low-cost multimodal HMD system in neuroendoscopic surgery. A multimodal HMD system, consisting mainly of an HMD with two built-in displays, an action camera, and a laptop computer displaying reconstructed medical images, was developed to assist neuroendoscopic surgery. With this tightly integrated system, the neurosurgeon could freely switch between the endoscopic image, three-dimensional (3D) reconstructed virtual endoscopy images, and images of the surrounding environment. Using a Leap Motion controller, the neurosurgeon could adjust or rotate the 3D virtual endoscopic images at a distance to better understand the positional relation between lesions and normal tissue at will. A total of 21 consecutive patients with ventricular system diseases underwent neuroendoscopic surgery with the aid of this system. All operations were accomplished successfully, and no system-related complications occurred. The HMD was comfortable to wear and easy to operate. The screen resolution of the HMD was high enough for the neurosurgeon to operate carefully. With the system, the neurosurgeon could gain a better comprehension of lesions by freely switching among images of different modalities. The system had a steep learning curve, meaning that proficiency with it increased quickly. Compared with commercially available surgical assistant instruments, this system was relatively low-cost. The multimodal HMD system is feasible, practical, helpful, and relatively cost-efficient in neuroendoscopic surgery.

  6. Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets

    PubMed Central

    2010-01-01

    Background Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. Results We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. Conclusions The results demonstrate that the CBE processor in a PlayStation 3 accelerates computational intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics. PMID:20064262

  7. A Multimodal Imaging Protocol, (123)I/(99)Tc-Sestamibi, SPECT, and SPECT/CT, in Primary Hyperparathyroidism Adds Limited Benefit for Preoperative Localization.

    PubMed

    Lee, Grace S; McKenzie, Travis J; Mullan, Brian P; Farley, David R; Thompson, Geoffrey B; Richards, Melanie L

    2016-03-01

Focused parathyroidectomy in primary hyperparathyroidism (1°HPT) is possible with accurate preoperative localization and intraoperative PTH monitoring (IOPTH). The added benefit of multimodal imaging techniques for operative success is unknown. Patients with 1°HPT who underwent parathyroidectomy in 2012-2014 at a single institution were retrospectively reviewed. Only the patients who underwent the standardized multimodal imaging workup consisting of (123)I/(99)Tc-sestamibi subtraction scintigraphy, SPECT, and SPECT/CT were assessed. Of 360 patients who were identified, a curative operation was performed in 96%, using pre-operative imaging and IOPTH. Imaging analysis showed that (123)I/(99)Tc-sestamibi had a sensitivity of 86% (95% CI 82-90%), positive predictive value (PPV) of 93%, and accuracy of 81%, based on correct lateralization. SPECT had a sensitivity of 77% (95% CI 72-82%), PPV of 92%, and accuracy of 72%. SPECT/CT had a sensitivity of 75% (95% CI 70-80%), PPV of 94%, and accuracy of 71%. There were 3 of 45 (7%) patients with negative sestamibi imaging who had an accurate SPECT and SPECT/CT. Of the 312 patients (87%) with positive uptake on sestamibi (93% true positive, 7% false positive), concordant findings were present in 86% of SPECT and 84% of SPECT/CT studies. In cases where imaging modalities were discordant but at least one method was true-positive, (123)I/(99)Tc-sestamibi was significantly better than both SPECT and SPECT/CT (p < 0.001). The inclusion of SPECT and SPECT/CT in the 1°HPT imaging protocol increases patient cost up to 2.4-fold. (123)I/(99)Tc-sestamibi subtraction imaging is highly sensitive for preoperative localization in 1°HPT. SPECT and SPECT/CT are commonly concordant with (123)I/(99)Tc-sestamibi and rarely increase the sensitivity. Routine inclusion of multimodality imaging techniques adds minimal clinical benefit but increases cost to the patient in a high-volume setting.
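
    The reported sensitivity, PPV, and accuracy follow directly from 2x2 confusion-table counts; the sketch below shows that arithmetic with hypothetical counts, since the abstract reports only the rates.

    ```python
    def localization_stats(tp, fp, tn, fn):
        """Sensitivity, positive predictive value, and accuracy from a 2x2 table."""
        sensitivity = tp / (tp + fn)
        ppv = tp / (tp + fp)
        accuracy = (tp + tn) / (tp + fp + tn + fn)
        return sensitivity, ppv, accuracy

    # Hypothetical counts only; the abstract does not give the raw numbers.
    sens, ppv, acc = localization_stats(tp=268, fp=20, tn=24, fn=44)
    print(f"sensitivity {sens:.0%}, PPV {ppv:.0%}, accuracy {acc:.0%}")
    ```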

  8. Rational Design of a Triple Reporter Gene for Multimodality Molecular Imaging

    PubMed Central

    Hsieh, Ya-Ju; Ke, Chien-Chih; Yeh, Skye Hsin-Hsien; Lin, Chien-Feng; Chen, Fu-Du; Lin, Kang-Ping; Chen, Ran-Chou; Liu, Ren-Shyan

    2014-01-01

    Multimodality imaging using noncytotoxic triple fusion (TF) reporter genes is an important application for cell-based tracking, drug screening, and therapy. The firefly luciferase (fl), monomeric red fluorescence protein (mrfp), and truncated herpes simplex virus type 1 thymidine kinase SR39 mutant (ttksr39) were fused together to create TF reporter gene constructs with different order. The enzymatic activities of TF protein in vitro and in vivo were determined by luciferase reporter assay, H-FEAU cellular uptake experiment, bioluminescence imaging, and micropositron emission tomography (microPET). The TF construct expressed in H1299 cells possesses luciferase activity and red fluorescence. The tTKSR39 activity is preserved in TF protein and mediates high levels of H-FEAU accumulation and significant cell death from ganciclovir (GCV) prodrug activation. In living animals, the luciferase and tTKSR39 activities of TF protein have also been successfully validated by multimodality imaging systems. The red fluorescence signal is relatively weak for in vivo imaging but may expedite FACS-based selection of TF reporter expressing cells. We have developed an optimized triple fusion reporter construct DsRedm-fl-ttksr39 for more effective and sensitive in vivo animal imaging using fluorescence, bioluminescence, and PET imaging modalities, which may facilitate different fields of biomedical research and applications. PMID:24809057

  9. Multimodal optical imager for inner ear hearing loss diagnosis (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Park, Jesung; Maguluri, Gopi N.; Zhao, Youbo; Iftimia, Nicusor V.

    2017-02-01

    Sensorineural hearing loss (SNHL), which typically originates in the cochlea, is the most common otologic problem caused by aging and noise trauma. The cochlea, a delicate and complex biological mechanosensory transducer in the inner ear, has been extensively studied with the goal of improving diagnosis of SNHL. However, the difficulty associated with accessing the cochlea and resolving the microstructures that facilitate hearing within it in a minimally-invasive way has prevented us from being able to assess the pathology underlying SNHL in humans. To address this problem we investigated the ability of a multimodal optical system that combines optical coherence tomography (OCT) and single photon autofluorescence imaging (AFI) to enable visualization and evaluation of microstructures in the cochlea. A laboratory OCT/AFI imager was built to acquire high resolution OCT and single photon fluorescence images of the cochlea. The imager's ability to resolve diagnostically-relevant details was evaluated in ears extracted from normal and noise-exposed mice. A prototype endoscopic OCT/AFI imager was developed based on a double-clad fiber approach. Our measurements show that the multimodal OCT/AFI imager can be used to evaluate structural integrity in the mouse cochlea. Therefore, we believe that this technology is promising as a potential clinical evaluation tool, and as a technique for guiding otologic surgeries such as cochlear implant surgery.

  10. Advanced Contrast Agents for Multimodal Biomedical Imaging Based on Nanotechnology.

    PubMed

    Calle, Daniel; Ballesteros, Paloma; Cerdán, Sebastián

    2018-01-01

    Clinical imaging modalities have attained a prominent role in medical diagnosis and patient management over the last decades. Different imaging methodologies such as Positron Emission Tomography, Single Photon Emission Tomography, X-rays, or Magnetic Resonance Imaging are in continuous evolution to satisfy the increasing demands of current medical diagnosis. Progress in these methodologies has been favored by the parallel development of increasingly more powerful contrast agents. These are molecules that enhance the intrinsic contrast of the images in the tissues where they accumulate, revealing noninvasively the presence of characteristic molecular targets or differential physiopathological microenvironments. The contrast agent field is currently moving to improve the performance of these molecules by incorporating the advantages that modern nanotechnology offers. These include, mainly, the possibilities to combine imaging and therapeutic capabilities over the same theranostic platform or to improve the targeting efficiency in vivo by molecular engineering of the nanostructures. In this review, we provide an introduction to multimodal imaging methods in biomedicine, the sub-nanometric imaging agents previously used, and the development of advanced multimodal and theranostic imaging agents based on nanotechnology. We conclude by providing some illustrative examples from our own laboratories, including recent progress in theranostic formulations of magnetoliposomes containing ω-3 poly-unsaturated fatty acids to treat inflammatory diseases, or the use of stealth liposomes engineered with a pH-sensitive nanovalve to release their cargo specifically in the acidic extracellular pH microenvironment of tumors.

  11. Superparamagnetic nanoparticles for enhanced magnetic resonance and multimodal imaging

    NASA Astrophysics Data System (ADS)

    Sikma, Elise Ann Schultz

    Magnetic resonance imaging (MRI) is a powerful tool for noninvasive tomographic imaging of biological systems with high spatial and temporal resolution. Superparamagnetic (SPM) nanoparticles have emerged as highly effective MR contrast agents due to their biocompatibility, ease of surface modification and magnetic properties. Conventional nanoparticle contrast agents suffer from difficult synthetic reproducibility, polydisperse sizes and weak magnetism. Numerous synthetic techniques and nanoparticle formulations have been developed to overcome these barriers. However, there are still major limitations in the development of new nanoparticle-based probes for MR and multimodal imaging including low signal amplification and absence of biochemical reporters. To address these issues, a set of multimodal (T2/optical) and dual contrast (T1/T2) nanoparticle probes has been developed. Their unique magnetic properties and imaging capabilities were thoroughly explored. An enzyme-activatable contrast agent is currently being developed as an innovative means for early in vivo detection of cancer at the cellular level. Multimodal probes function by combining the strengths of multiple imaging techniques into a single agent. Co-registration of data obtained by multiple imaging modalities validates the data, enhancing its quality and reliability. A series of T2/optical probes were successfully synthesized by attachment of a fluorescent dye to the surface of different types of nanoparticles. The multimodal nanoparticles generated sufficient MR and fluorescence signal to image transplanted islets in vivo. Dual contrast T1/T2 imaging probes were designed to overcome disadvantages inherent in the individual T1 and T2 components. A class of T1/T2 agents was developed consisting of a gadolinium (III) complex (DTPA chelate or DO3A macrocycle) conjugated to a biocompatible silica-coated metal oxide nanoparticle through a disulfide linker. The disulfide linker has the ability to be reduced in vivo by glutathione, releasing large payloads of signal-enhancing T1 probes into the surrounding environment. Optimization of the agent occurred over three sequential generations, with each generation addressing a new challenge. The result was a T2 nanoparticle containing high levels of conjugated T1 complex demonstrating enhanced MR relaxation properties. The probes created here have the potential to play a key role in the advancement of nanoparticle-based agents in biomedical MRI applications.

  12. GLO-Roots: an imaging platform enabling multidimensional characterization of soil-grown root systems

    PubMed Central

    Rellán-Álvarez, Rubén; Lobet, Guillaume; Lindner, Heike; Pradier, Pierre-Luc; Sebastian, Jose; Yee, Muh-Ching; Geng, Yu; Trontin, Charlotte; LaRue, Therese; Schrager-Lavelle, Amanda; Haney, Cara H; Nieu, Rita; Maloof, Julin; Vogel, John P; Dinneny, José R

    2015-01-01

    Root systems develop different root types that individually sense cues from their local environment and integrate this information with systemic signals. This complex multi-dimensional amalgam of inputs enables continuous adjustment of root growth rates, direction, and metabolic activity that define a dynamic physical network. Current methods for analyzing root biology balance physiological relevance with imaging capability. To bridge this divide, we developed an integrated-imaging system called Growth and Luminescence Observatory for Roots (GLO-Roots) that uses luminescence-based reporters to enable studies of root architecture and gene expression patterns in soil-grown, light-shielded roots. We have developed image analysis algorithms that allow the spatial integration of soil properties, gene expression, and root system architecture traits. We propose GLO-Roots as a system that has great utility in presenting environmental stimuli to roots in ways that evoke natural adaptive responses and in providing tools for studying the multi-dimensional nature of such processes. DOI: http://dx.doi.org/10.7554/eLife.07597.001 PMID:26287479

  13. GLO-Roots: An imaging platform enabling multidimensional characterization of soil-grown root systems

    DOE PAGES

    Rellan-Alvarez, Ruben; Lobet, Guillaume; Lindner, Heike; ...

    2015-08-19

    Root systems develop different root types that individually sense cues from their local environment and integrate this information with systemic signals. This complex multi-dimensional amalgam of inputs enables continuous adjustment of root growth rates, direction, and metabolic activity that define a dynamic physical network. Current methods for analyzing root biology balance physiological relevance with imaging capability. To bridge this divide, we developed an integrated-imaging system called Growth and Luminescence Observatory for Roots (GLO-Roots) that uses luminescence-based reporters to enable studies of root architecture and gene expression patterns in soil-grown, light-shielded roots. We have developed image analysis algorithms that allow the spatial integration of soil properties, gene expression, and root system architecture traits. We propose GLO-Roots as a system that has great utility in presenting environmental stimuli to roots in ways that evoke natural adaptive responses and in providing tools for studying the multi-dimensional nature of such processes.

  14. Multimodality imaging of hepato-biliary disorders in pregnancy: a pictorial essay.

    PubMed

    Ong, Eugene M W; Drukteinis, Jennifer S; Peters, Hope E; Mortelé, Koenraad J

    2009-09-01

    Hepato-biliary disorders are rare complications of pregnancy, but they may be severe, with high fetal and maternal morbidity and mortality. Imaging is, therefore, essential in the rapid diagnosis of some of these conditions so that appropriate, life-saving treatment can be administered. This pictorial essay illustrates the multimodality imaging features of pregnancy-induced hepato-biliary disorders, such as acute fatty liver of pregnancy, preeclampsia and eclampsia, and HELLP syndrome, as well as those conditions which occur in pregnancy but are not unique to it, such as viral hepatitis, Budd-Chiari syndrome, focal hepatic lesions, biliary sludge, cholecystolithiasis, and choledocholithiasis.

  15. Automated diagnosis of prostate cancer in multi-parametric MRI based on multimodal convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Le, Minh Hung; Chen, Jingyu; Wang, Liang; Wang, Zhiwei; Liu, Wenyu; Cheng, Kwang-Ting (Tim); Yang, Xin

    2017-08-01

    Automated methods for prostate cancer (PCa) diagnosis in multi-parametric magnetic resonance imaging (MP-MRIs) are critical for alleviating requirements for interpretation of radiographs while helping to improve diagnostic accuracy (Artan et al 2010 IEEE Trans. Image Process. 19 2444-55, Litjens et al 2014 IEEE Trans. Med. Imaging 33 1083-92, Liu et al 2013 SPIE Medical Imaging (International Society for Optics and Photonics) p 86701G, Moradi et al 2012 J. Magn. Reson. Imaging 35 1403-13, Niaf et al 2014 IEEE Trans. Image Process. 23 979-91, Niaf et al 2012 Phys. Med. Biol. 57 3833, Peng et al 2013a SPIE Medical Imaging (International Society for Optics and Photonics) p 86701H, Peng et al 2013b Radiology 267 787-96, Wang et al 2014 BioMed. Res. Int. 2014). This paper presents an automated method based on multimodal convolutional neural networks (CNNs) for two PCa diagnostic tasks: (1) distinguishing between cancerous and noncancerous tissues and (2) distinguishing between clinically significant (CS) and indolent PCa. Specifically, our multimodal CNNs effectively fuse apparent diffusion coefficients (ADCs) and T2-weighted MP-MRI images (T2WIs). To effectively fuse ADCs and T2WIs we design a new similarity loss function to enforce consistent features being extracted from both ADCs and T2WIs. The similarity loss is combined with the conventional classification loss functions and integrated into the back-propagation procedure of CNN training. The similarity loss enables better fusion results than existing methods as the feature learning processes of both modalities are mutually guided, jointly facilitating CNN to ‘see’ the true visual patterns of PCa. The classification results of multimodal CNNs are further combined with the results based on handcrafted features using a support vector machine classifier. To achieve a satisfactory accuracy for clinical use, we comprehensively investigate three critical factors which could greatly affect the performance of our multimodal CNNs but have not been carefully studied previously. (1) Given limited training data, how can these be augmented in sufficient numbers and variety for fine-tuning deep CNN networks for PCa diagnosis? (2) How can multimodal MP-MRI information be effectively combined in CNNs? (3) What is the impact of different CNN architectures on the accuracy of PCa diagnosis? Experimental results on extensive clinical data from 364 patients with a total of 463 PCa lesions and 450 identified noncancerous image patches demonstrate that our system can achieve a sensitivity of 89.85% and a specificity of 95.83% for distinguishing cancer from noncancerous tissues and a sensitivity of 100% and a specificity of 76.92% for distinguishing indolent PCa from CS PCa. This result is significantly superior to the state-of-the-art method relying on handcrafted features.
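
    The fusion described above can be illustrated with a minimal two-branch network in which a similarity loss pulls the per-modality feature vectors toward each other while a standard classification loss is applied to each branch. The PyTorch sketch below is schematic only: the layer sizes, loss weighting, and names are illustrative assumptions, not the authors' architecture.

        # Two CNN branches (one per MRI modality) trained with classification
        # losses plus a similarity (feature-consistency) loss; illustrative only.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class Branch(nn.Module):
            def __init__(self, n_classes=2):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
                self.classifier = nn.Linear(32, n_classes)

            def forward(self, x):
                f = self.features(x)
                return f, self.classifier(f)

        def multimodal_loss(branch_adc, branch_t2w, adc, t2w, labels, lam=0.1):
            f_adc, logits_adc = branch_adc(adc)
            f_t2w, logits_t2w = branch_t2w(t2w)
            cls = F.cross_entropy(logits_adc, labels) + F.cross_entropy(logits_t2w, labels)
            sim = F.mse_loss(f_adc, f_t2w)          # push toward consistent features
            return cls + lam * sim

        # random tensors stand in for ADC and T2WI patches
        adc, t2w = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)
        labels = torch.randint(0, 2, (4,))
        loss = multimodal_loss(Branch(), Branch(), adc, t2w, labels)
        loss.backward()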

  16. Multidimensional incremental parsing for universal source coding.

    PubMed

    Bae, Soo Hyun; Juang, Biing-Hwang

    2008-10-01

    A multidimensional incremental parsing algorithm (MDIP) for multidimensional discrete sources, as a generalization of the Lempel-Ziv coding algorithm, is investigated. It consists of three essential component schemes: maximum decimation matching, a hierarchical structure of multidimensional source coding, and dictionary augmentation. As a counterpart of the longest match search in the Lempel-Ziv algorithm, two classes of maximum decimation matching are studied. Also, an underlying behavior of the dictionary augmentation scheme for estimating the source statistics is examined. For an m-dimensional source, m augmentative patches are appended to the dictionary at each coding epoch, thus requiring the transmission of a substantial amount of information to the decoder. The hierarchical structure of the source coding algorithm resolves this issue by successively incorporating lower dimensional coding procedures in the scheme. In regard to universal lossy source coders, we propose two distortion functions, the local average distortion and the local minimax distortion with a set of threshold levels for each source symbol. For performance evaluation, we implemented three image compression algorithms based upon the MDIP; one is lossless and the others are lossy. The lossless image compression algorithm does not perform better than Lempel-Ziv-Welch coding, but experimentally shows efficiency in capturing the source structure. The two lossy image compression algorithms are implemented using the two distortion functions, respectively. The algorithm based on the local average distortion is efficient at minimizing the signal distortion, whereas the images produced with the local minimax distortion have good perceptual fidelity compared with other compression algorithms. Our insights inspire future research on feature extraction of multidimensional discrete sources.
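
    For orientation, the sketch below shows the one-dimensional building block that MDIP generalizes: LZ78-style incremental parsing with longest-match search and dictionary augmentation. It is an illustrative Python analogue only, not the paper's multidimensional algorithm.

        # LZ78-style incremental parsing: longest-match search against a growing
        # dictionary, emitting (prefix index, new symbol) pairs.
        def lz78_parse(symbols):
            dictionary = {(): 0}          # phrase -> index; the empty phrase is 0
            phrases = []                  # list of (prefix_index, new_symbol)
            current = ()
            for s in symbols:
                candidate = current + (s,)
                if candidate in dictionary:               # longest-match search
                    current = candidate
                else:
                    phrases.append((dictionary[current], s))
                    dictionary[candidate] = len(dictionary)   # dictionary augmentation
                    current = ()
            if current:                                   # flush a trailing match
                phrases.append((dictionary[current[:-1]], current[-1]))
            return phrases

        print(lz78_parse("abababcababc"))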

  17. The evolution of gadolinium based contrast agents: from single-modality to multi-modality

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Liu, Ruiqing; Peng, Hui; Li, Penghui; Xu, Zushun; Whittaker, Andrew K.

    2016-05-01

    Gadolinium-based contrast agents are extensively used as magnetic resonance imaging (MRI) contrast agents due to their outstanding signal enhancement and ease of chemical modification. However, it is increasingly recognized that information obtained from single-modal molecular imaging cannot satisfy the growing requirements on efficiency and accuracy for clinical diagnosis and medical research, owing to the limitations inherent in any single molecular imaging technique. To compensate for the deficiencies of single-function magnetic resonance imaging contrast agents, the combination of multi-modality imaging has become a research hotspot in recent years. This review presents an overview of recent developments in the functionalization of gadolinium-based contrast agents and their biomedical applications.

  18. A Pretargeted Approach for the Multimodal PET/NIRF Imaging of Colorectal Cancer.

    PubMed

    Adumeau, Pierre; Carnazza, Kathryn E; Brand, Christian; Carlin, Sean D; Reiner, Thomas; Agnew, Brian J; Lewis, Jason S; Zeglis, Brian M

    2016-01-01

    The complementary nature of positron emission tomography (PET) and near-infrared fluorescence (NIRF) imaging makes the development of strategies for the multimodal PET/NIRF imaging of cancer a very enticing prospect. Indeed, in the context of colorectal cancer, a single multimodal PET/NIRF imaging agent could be used to stage the disease, identify candidates for surgical intervention, and facilitate the image-guided resection of the disease. While antibodies have proven to be highly effective vectors for the delivery of radioisotopes and fluorophores to malignant tissues, the use of radioimmunoconjugates labeled with long-lived nuclides such as 89Zr poses two important clinical complications: high radiation doses to the patient and the need for significant lag time between imaging and surgery. In vivo pretargeting strategies that decouple the targeting vector from the radioactivity at the time of injection have the potential to circumvent these issues by facilitating the use of positron-emitting radioisotopes with far shorter half-lives. Here, we report the synthesis, characterization, and in vivo validation of a pretargeted strategy for the multimodal PET and NIRF imaging of colorectal carcinoma. This approach is based on the rapid and bioorthogonal ligation between a trans-cyclooctene- and fluorophore-bearing immunoconjugate of the huA33 antibody (huA33-Dye800-TCO) and a 64Cu-labeled tetrazine radioligand (64Cu-Tz-SarAr). In vivo imaging experiments in mice bearing A33 antigen-expressing SW1222 colorectal cancer xenografts clearly demonstrate that this approach enables the non-invasive visualization of tumors and the image-guided resection of malignant tissue, all at only a fraction of the radiation dose created by a directly labeled radioimmunoconjugate. Additional in vivo experiments in peritoneal and patient-derived xenograft models of colorectal carcinoma reinforce the efficacy of this methodology and underscore its potential as an innovative and useful clinical tool.

  19. Acute imaging does not improve ASTRAL score's accuracy despite having a prognostic value.

    PubMed

    Ntaios, George; Papavasileiou, Vasileios; Faouzi, Mohamed; Vanacker, Peter; Wintermark, Max; Michel, Patrik

    2014-10-01

    The ASTRAL score was recently shown to reliably predict three-month functional outcome in patients with acute ischemic stroke. The study aims to investigate whether information from multimodal imaging increases the ASTRAL score's accuracy. All patients registered in the ASTRAL registry until March 2011 were included. In multivariate logistic-regression analyses, we added covariates derived from parenchymal, vascular, and perfusion imaging to the 6-parameter model of the ASTRAL score. If a specific imaging covariate remained an independent predictor of a three-month modified Rankin score > 2, the area-under-the-curve (AUC) of this new model was calculated and compared with the ASTRAL score's AUC. We also performed similar logistic regression analyses in arbitrarily chosen patient subgroups. When added to the ASTRAL score, the following covariates on admission computed tomography/magnetic resonance imaging-based multimodal imaging were not significant predictors of outcome: any stroke-related acute lesion, any nonstroke-related lesions, chronic/subacute stroke, leukoaraiosis, significant arterial pathology in the ischemic territory on computed tomography angiography/magnetic resonance angiography/Doppler, significant intracranial arterial pathology in the ischemic territory, and focal hypoperfusion on perfusion-computed tomography. The Alberta Stroke Program Early CT score on plain imaging and any significant extracranial arterial pathology on computed tomography angiography/magnetic resonance angiography/Doppler were independent predictors of outcome (odds ratio: 0.93, 95% CI: 0.87-0.99 and odds ratio: 1.49, 95% CI: 1.08-2.05, respectively) but did not increase the ASTRAL score's AUC (0.849 vs. 0.850, and 0.8563 vs. 0.8564, respectively). In exploratory analyses in subgroups of different prognosis, age, or stroke severity, no covariate was found to increase the ASTRAL score's AUC either. The addition of information derived from multimodal imaging does not increase the ASTRAL score's accuracy to predict functional outcome despite having an independent prognostic value. More selective radiological parameters applied in specific subgroups of stroke patients may add to the prognostic value of multimodal imaging. © 2014 World Stroke Organization.
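
    The analysis pattern described here, adding a candidate imaging covariate to a baseline logistic model and comparing the resulting AUCs, can be sketched as follows in Python. The data are synthetic stand-ins, not the registry data, and the covariate names are assumptions.

        # Compare AUC of a baseline logistic model vs. the same model plus an
        # added imaging covariate; synthetic data for illustration only.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n = 500
        base = rng.normal(size=(n, 6))        # stand-ins for the 6 ASTRAL parameters
        imaging = rng.normal(size=(n, 1))     # e.g. an ASPECTS-like imaging covariate
        logit = base @ np.array([0.8, -0.5, 0.4, 0.3, -0.2, 0.6]) + 0.1 * imaging[:, 0]
        y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

        full = np.hstack([base, imaging])
        auc_base = roc_auc_score(
            y, LogisticRegression(max_iter=1000).fit(base, y).predict_proba(base)[:, 1])
        auc_full = roc_auc_score(
            y, LogisticRegression(max_iter=1000).fit(full, y).predict_proba(full)[:, 1])
        print(f"AUC baseline {auc_base:.3f} vs. with imaging covariate {auc_full:.3f}")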

  20. Multidimensional Visualization of MHD and Turbulence in Fusion Plasmas [Multi-dimensional Visualization of Turbulence in Fusion Plasmas]

    DOE PAGES

    Muscatello, Christopher M.; Domier, Calvin W.; Hu, Xing; ...

    2014-08-13

    Here, quasi-optical imaging at sub-THz frequencies has had a major impact on fusion plasma diagnostics. Mm-wave imaging reflectometry utilizes microwaves to actively probe fusion plasmas, inferring the local properties of electron density fluctuations. Electron cyclotron emission imaging is a multichannel radiometer that passively measures the spontaneous emission of microwaves from the plasma to infer local properties of electron temperature fluctuations. These imaging diagnostics work together to diagnose the characteristics of turbulence. Important quantities such as the amplitude and wavenumber of coherent fluctuations, correlation lengths and decorrelation times of turbulence, and the poloidal flow velocity of the plasma are readily inferred.
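
    As an illustration of one of the quantities named above, the Python sketch below estimates a decorrelation time from a fluctuation time series using the 1/e crossing of its autocorrelation. The signal is synthetic; the analysis of real reflectometry or ECE imaging data involves considerably more processing.

        # Estimate a decorrelation time as the 1/e crossing of the normalized
        # autocorrelation of a (synthetic) correlated fluctuation signal.
        import numpy as np

        def decorrelation_time(signal, dt):
            x = signal - signal.mean()
            acf = np.correlate(x, x, mode="full")[x.size - 1:]
            acf /= acf[0]                                  # normalize to 1 at zero lag
            below = np.nonzero(acf < 1 / np.e)[0]
            return below[0] * dt if below.size else np.inf

        rng = np.random.default_rng(1)
        dt, tau = 1e-6, 50e-6                              # 1 us sampling, 50 us correlation
        kernel = np.exp(-np.arange(0, 5 * tau, dt) / tau)
        signal = np.convolve(rng.normal(size=4000), kernel, mode="same")
        print(f"estimated decorrelation time ~ {decorrelation_time(signal, dt) * 1e6:.0f} us")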

  1. Synthesis and radiolabeling of a somatostatin analog for multimodal imaging

    NASA Astrophysics Data System (ADS)

    Edwards, W. Barry; Liang, Kexian; Xu, Baogang; Anderson, Carolyn J.; Achilefu, Samuel

    2006-02-01

    A new multimodal imaging agent for imaging the somatostatin receptor has been synthesized and evaluated in vitro and in vivo. A somatostatin analog, conjugated to both 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) and cypate (BS-296), was synthesized entirely on the solid phase (Fmoc) and purified by RP-HPLC. DOTA was added as a ligand for radiometals such as 64Cu or 177Lu for either radio-imaging or radiotherapy, respectively. Cytate, a cypate-somatostatin analog conjugate, has previously demonstrated the ability to visualize somatostatin receptor-rich tumor xenografts and natural organs by optical imaging techniques. BS-296 exhibited low nanomolar inhibitory capacity toward the binding of radiolabeled somatostatin analogs in cell membranes enriched in the somatostatin receptor, demonstrating the high affinity of this multimodal imaging peptide and indicating its potential as a molecular imaging agent. 64Cu, an isotope for diagnostic imaging and radiotherapy, was selected as the isotope for radiolabeling BS-296. BS-296 was radiolabeled with 64Cu in high specific activity (200 μCi/μg) in 90% radiochemical yield. Addition of 2,5-dihydroxybenzoic acid (gentisic acid) prevented radiolysis of the sample, allowing for study of 64Cu-BS-296 the day following radiolabeling. Furthermore, inclusion of DMSO at a level of 20% was found not to interfere with radiolabeling yields and prevented the adherence of 64Cu-BS-296 to the walls of the reaction vessel.

  2. A multimodal imaging approach enables in vivo assessment of antifungal treatment in a mouse model of invasive pulmonary aspergillosis.

    PubMed

    Poelmans, Jennifer; Himmelreich, Uwe; Vanherp, Liesbeth; Zhai, Luca; Hillen, Amy; Holvoet, Bryan; Belderbos, Sarah; Brock, Matthias; Maertens, Johan; Velde, Greetje Vande; Lagrou, Katrien

    2018-05-14

    Aspergillus fumigatus causes life-threatening lung infections in immunocompromised patients. Mouse models are extensively used in research to assess the in vivo efficacy of antifungals. In recent years, there has been an increasing interest in the use of non-invasive imaging techniques to evaluate experimental infections. However, single imaging modalities have limitations concerning the type of information they can provide. In this study, magnetic resonance imaging and bioluminescence imaging were combined to obtain longitudinal information on the extent of developing lesions and the fungal load in a leucopenic mouse model of IPA. This multimodal imaging approach was used to assess changes occurring within the lungs of infected mice receiving voriconazole treatment starting at different time points after infection. Results showed that IPA development depends on the inoculum size used to infect animals and that disease can be successfully prevented or treated by initiating intervention during early stages of infection. Furthermore, we demonstrated that reduction of the fungal load is not necessarily associated with the disappearance of lesions on anatomical lung images, especially when antifungal treatment coincides with immune recovery. In conclusion, multimodal imaging allows investigation of different aspects of disease progression or recovery by providing complementary information on dynamic processes, which is highly useful for assessing the efficacy of (novel) therapeutic compounds in a time- and labor-efficient manner. Copyright © 2018 American Society for Microbiology.

  3. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    NASA Astrophysics Data System (ADS)

    Morishima, Shigeo; Nakamura, Satoshi

    2004-12-01

    We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.
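
    The face tracking step mentioned above relies on template matching, a generic operation that can be sketched as a brute-force normalized cross-correlation search. The Python code below is illustrative only and is not the authors' tracker.

        # Brute-force template matching by normalized cross-correlation (NCC).
        import numpy as np

        def match_template(image, template):
            th, tw = template.shape
            t = template - template.mean()
            best, best_pos = -np.inf, (0, 0)
            for i in range(image.shape[0] - th + 1):
                for j in range(image.shape[1] - tw + 1):
                    patch = image[i:i + th, j:j + tw]
                    p = patch - patch.mean()
                    denom = np.sqrt((p * p).sum() * (t * t).sum())
                    score = (p * t).sum() / denom if denom > 0 else 0.0
                    if score > best:
                        best, best_pos = score, (i, j)
            return best_pos, best

        rng = np.random.default_rng(2)
        frame = rng.random((60, 80))
        template = frame[20:36, 30:46].copy()      # a known patch as the template
        print(match_template(frame, template))     # expected position: (20, 30)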

  4. Enhancing image classification models with multi-modal biomarkers

    NASA Astrophysics Data System (ADS)

    Caban, Jesus J.; Liao, David; Yao, Jianhua; Mollura, Daniel J.; Gochuico, Bernadette; Yoo, Terry

    2011-03-01

    Currently, most computer-aided diagnosis (CAD) systems rely on image analysis and statistical models to diagnose, quantify, and monitor the progression of a particular disease. In general, CAD systems have proven to be effective at providing quantitative measurements and assisting physicians during the decision-making process. As the need for more flexible and effective CADs continues to grow, questions about how to enhance their accuracy have surged. In this paper, we show how statistical image models can be augmented with multi-modal physiological values to create more robust, stable, and accurate CAD systems. In particular, this paper demonstrates how highly correlated blood and EKG features can be treated as biomarkers and used to enhance image classification models designed to automatically score subjects with pulmonary fibrosis. In our results, a 3-5% improvement was observed when comparing the accuracy of CADs that use multi-modal biomarkers with those that only used image features. Our results show that lab values such as Erythrocyte Sedimentation Rate and Fibrinogen, as well as EKG measurements such as QRS and I:40, are statistically significant and can provide valuable insights about the severity of the pulmonary fibrosis disease.
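
    The fusion strategy described here, concatenating image-derived features with clinical biomarkers before classification, can be sketched as below. Feature names, data, and effect sizes are synthetic assumptions; the sketch only illustrates comparing a classifier trained with and without the biomarker block.

        # Compare cross-validated accuracy of an image-only classifier with one
        # that also uses clinical biomarkers; all data are synthetic stand-ins.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(3)
        n = 300
        image_feats = rng.normal(size=(n, 20))      # e.g. texture/intensity features
        biomarkers = rng.normal(size=(n, 4))        # e.g. ESR, fibrinogen, QRS, I:40
        y = (image_feats[:, 0] + 0.8 * biomarkers[:, 0] + rng.normal(size=n) > 0).astype(int)

        clf = make_pipeline(StandardScaler(), SVC())
        acc_img = cross_val_score(clf, image_feats, y, cv=5).mean()
        acc_all = cross_val_score(clf, np.hstack([image_feats, biomarkers]), y, cv=5).mean()
        print(f"image-only accuracy {acc_img:.3f}, image + biomarkers {acc_all:.3f}")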

  5. Large Margin Multi-Modal Multi-Task Feature Extraction for Image Classification.

    PubMed

    Yong Luo; Yonggang Wen; Dacheng Tao; Jie Gui; Chao Xu

    2016-01-01

    The features used in many image analysis-based applications are frequently of very high dimension. Feature extraction offers several advantages in high-dimensional cases, and many recent studies have used multi-task feature extraction approaches, which often outperform single-task feature extraction approaches. However, most of these methods are limited in that they only consider data represented by a single type of feature, even though features usually represent images from multiple modalities. We, therefore, propose a novel large margin multi-modal multi-task feature extraction (LM3FE) framework for handling multi-modal features for image classification. In particular, LM3FE simultaneously learns the feature extraction matrix for each modality and the modality combination coefficients. In this way, LM3FE not only handles correlated and noisy features, but also utilizes the complementarity of different modalities to further help reduce feature redundancy in each modality. The large margin principle employed also helps to extract strongly predictive features, so that they are more suitable for prediction (e.g., classification). An alternating algorithm is developed for problem optimization, and each subproblem can be efficiently solved. Experiments on two challenging real-world image data sets demonstrate the effectiveness and superiority of the proposed method.

  6. Colocalization of cellular nanostructure using confocal fluorescence and partial wave spectroscopy.

    PubMed

    Chandler, John E; Stypula-Cyrus, Yolanda; Almassalha, Luay; Bauer, Greta; Bowen, Leah; Subramanian, Hariharan; Szleifer, Igal; Backman, Vadim

    2017-03-01

    A new multimodal confocal microscope has been developed, which includes a parallel Partial Wave Spectroscopic (PWS) microscopy path. This combination of modalities allows molecular-specific sensing of nanoscale intracellular structure using fluorescent labels. Combining molecular specificity and sensitivity to nanoscale structure allows localization of nanostructural intracellular changes, which is critical for understanding the mechanisms of diseases such as cancer. To demonstrate the capabilities of this multimodal instrument, we imaged HeLa cells treated with valinomycin, a potassium ionophore that uncouples oxidative phosphorylation. Colocalization of fluorescence images of the nuclei (Hoechst 33342) and mitochondria (anti-mitochondria conjugated to Alexa Fluor 488) with PWS measurements allowed us to detect a significant decrease in nuclear nanoscale heterogeneity (Σ), while no significant change in Σ was observed at mitochondrial sites. In addition, application of the new multimodal imaging approach was demonstrated on human buccal samples prepared using a cancer screening protocol. These images demonstrate that nanoscale intracellular structure can be studied in healthy and diseased cells at molecular-specific sites. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Simultaneous fluorescence and quantitative phase microscopy with single-pixel detectors

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Suo, Jinli; Zhang, Yuanlong; Dai, Qionghai

    2018-02-01

    Multimodal microscopy offers high flexibilities for biomedical observation and diagnosis. Conventional multimodal approaches either use multiple cameras or a single camera spatially multiplexing different modes. The former needs expertise demanding alignment and the latter suffers from limited spatial resolution. Here, we report an alignment-free full-resolution simultaneous fluorescence and quantitative phase imaging approach using single-pixel detectors. By combining reference-free interferometry with single-pixel detection, we encode the phase and fluorescence of the sample in two detection arms at the same time. Then we employ structured illumination and the correlated measurements between the sample and the illuminations for reconstruction. The recovered fluorescence and phase images are inherently aligned thanks to single-pixel detection. To validate the proposed method, we built a proof-of-concept setup for first imaging the phase of etched glass with the depth of a few hundred nanometers and then imaging the fluorescence and phase of the quantum dot drop. This method holds great potential for multispectral fluorescence microscopy with additional single-pixel detectors or a spectrometer. Besides, this cost-efficient multimodal system might find broad applications in biomedical science and neuroscience.
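
    The reconstruction principle behind single-pixel detection, correlating bucket-detector measurements with the known structured illumination patterns, is sketched below for a simple intensity object. The interferometric phase channel of the actual system is not modeled; the sizes and patterns are assumptions made for illustration.

        # Single-pixel (ghost-imaging style) reconstruction: correlate bucket
        # measurements with the known illumination patterns.
        import numpy as np

        rng = np.random.default_rng(4)
        size, n_patterns = 32, 6000
        obj = np.zeros((size, size))
        obj[8:24, 12:20] = 1.0                                   # simple test object

        patterns = rng.random((n_patterns, size, size))          # structured illumination
        y = np.tensordot(patterns, obj, axes=([1, 2], [0, 1]))   # single-pixel measurements

        # reconstruction: <(y - <y>) * (P - <P>)> averaged over patterns
        recon = np.tensordot(y - y.mean(), patterns - patterns.mean(axis=0),
                             axes=([0], [0])) / n_patterns
        print("correlation with object:", np.corrcoef(recon.ravel(), obj.ravel())[0, 1])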

  8. Love that Book: Multimodal Response to Literature

    ERIC Educational Resources Information Center

    Dalton, Bridget; Grisham, Dana L.

    2013-01-01

    Composing with different modes--image, sound, video and the written word--to respond to and analyze literary and informational text helps students develop as readers and digital communicators. This article showcases five multimodal strategies for engaging children in rich literature-based learning using digital tools and Internet resources.

  9. Depth-resolved imaging of colon tumor using optical coherence tomography and fluorescence laminar optical tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Tang, Qinggong; Frank, Aaron; Wang, Jianting; Chen, Chao-wei; Jin, Lily; Lin, Jon; Chan, Joanne M.; Chen, Yu

    2016-03-01

    Early detection of neoplastic changes remains a critical challenge in clinical cancer diagnosis and treatment. Many cancers arise from epithelial layers such as those of the gastrointestinal (GI) tract. Current standard endoscopic technology is unable to detect those subsurface lesions. Since cancer development is associated with both morphological and molecular alterations, imaging technologies that can quantitatively image tissue morphological and molecular biomarkers and assess the depth extent of a lesion in real time, without the need for tissue excision, would be a major advance in GI cancer diagnostics and therapy. In this research, we investigated the feasibility of multi-modal optical imaging, including high-resolution optical coherence tomography (OCT) and depth-resolved high-sensitivity fluorescence laminar optical tomography (FLOT), for structural and molecular imaging. An APC (adenomatous polyposis coli) mouse model was imaged using OCT and FLOT, and the correlated histopathological diagnosis was obtained. Quantitative structural (the scattering coefficient) and molecular imaging parameters (fluorescence intensity) from OCT and FLOT images were developed for multi-parametric analysis. This multi-modal imaging method has demonstrated the feasibility for more accurate diagnosis, with 87.4% (87.3%) sensitivity (specificity), which gives the most optimal diagnosis (the largest area under the receiver operating characteristic (ROC) curve). This project results in a new non-invasive multi-modal imaging platform for improved GI cancer detection, which is expected to have a major impact on detection, diagnosis, and characterization of GI cancers, as well as a wide range of epithelial cancers.

  10. WE-H-206-03: Promises and Challenges of Benchtop X-Ray Fluorescence CT (XFCT) for Quantitative in Vivo Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, S.

    Lihong V. Wang: Photoacoustic tomography (PAT), combining non-ionizing optical and ultrasonic waves via the photoacoustic effect, provides in vivo multiscale functional, metabolic, and molecular imaging. Broad applications include imaging of the breast, brain, skin, esophagus, colon, vascular system, and lymphatic system in humans or animals. Light offers rich contrast but does not penetrate biological tissue in straight paths as x-rays do. Consequently, high-resolution pure optical imaging (e.g., confocal microscopy, two-photon microscopy, and optical coherence tomography) is limited to penetration within the optical diffusion limit (∼1 mm in the skin). Ultrasonic imaging, on the contrary, provides fine spatial resolution but suffers from both poor contrast in early-stage tumors and strong speckle artifacts. In PAT, pulsed laser light penetrates tissue and generates a small but rapid temperature rise, which induces emission of ultrasonic waves due to thermoelastic expansion. The ultrasonic waves, orders of magnitude less scattering than optical waves, are then detected to form high-resolution images of optical absorption at depths up to 7 cm, conquering the optical diffusion limit. PAT is the only modality capable of imaging across the length scales of organelles, cells, tissues, and organs (up to whole-body small animals) with consistent contrast. This rapidly growing technology promises to enable multiscale biological research and accelerate translation from microscopic laboratory discoveries to macroscopic clinical practice. PAT may also hold the key to label-free early detection of cancer by in vivo quantification of hypermetabolism, the quintessential hallmark of malignancy. Learning Objectives: To understand the contrast mechanism of PAT. To understand the multiscale applications of PAT. Benjamin M. W. Tsui: Multi-modality molecular imaging instrumentation and techniques have been major developments in small animal imaging that have contributed significantly to biomedical research during the past decade. The initial development was an extension of clinical PET/CT and SPECT/CT from humans to small animals, combining the unique functional information obtained from PET and SPECT with anatomical information provided by the CT in registered multi-modality images. The requirements to image a mouse whose size is an order of magnitude smaller than that of a human have spurred advances in new radiation detector technologies, novel imaging system designs, and special image reconstruction and processing techniques. Examples are new detector materials and designs with high intrinsic resolution, multi-pinhole (MPH) collimator designs for much improved resolution and detection efficiency compared to the conventional collimator designs in SPECT, 3D high-resolution and artifact-free MPH and sparse-view image reconstruction techniques, and iterative image reconstruction methods with system response modeling for resolution recovery and image noise reduction for much improved image quality. The spatial resolution of PET and SPECT has improved from ∼6–12 mm to ∼1 mm a few years ago to sub-millimeter today. A recent commercial small animal SPECT system has achieved a resolution of ∼0.25 mm, which surpasses that of a state-of-the-art PET system whose resolution is limited by the positron range. More recently, multimodality SA (small animal) PET/MRI and SPECT/MRI systems have been developed in research laboratories. Also, multi-modality SA imaging systems that include other imaging modalities such as optical and ultrasound are being actively pursued. In this presentation, we will provide a review of the development, recent advances, and future outlook of multi-modality molecular imaging of small animals. Learning Objectives: To learn about the two major multi-modality molecular imaging techniques of small animals. To learn about the spatial resolution achievable by the molecular imaging systems for small animals today. To learn about the new multi-modality imaging instrumentation and techniques that are being developed. Sang Hyun Cho: X-ray fluorescence (XRF) imaging, such as x-ray fluorescence computed tomography (XFCT), offers unique capabilities for accurate identification and quantification of metals within the imaged objects. As a result, it has emerged as a promising quantitative imaging modality in recent years, especially in conjunction with metal-based imaging probes. This talk will familiarize the audience with the basic principles of XRF/XFCT imaging. It will also cover the latest development of benchtop XFCT technology. Additionally, the use of metallic nanoparticles such as gold nanoparticles, in conjunction with benchtop XFCT, will be discussed within the context of preclinical multimodal multiplexed molecular imaging. Learning Objectives: To learn the basic principles of XRF/XFCT imaging. To learn the latest advances in benchtop XFCT development for preclinical imaging. Funding support received from NIH and DOD; Funding support received from GE Healthcare; Funding support received from Siemens AX; Patent royalties received from GE Healthcare; L. Wang, Funding Support: NIH; COI: Microphotoacoustics; S. Cho, Yes: NIH/NCI grant R01CA155446; DOD/PCRP grant W81XWH-12-1-0198.

  11. WE-H-206-01: Photoacoustic Tomography: Multiscale Imaging From Organelles to Patients by Ultrasonically Beating the Optical Diffusion Limit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, L.


  12. WE-H-206-00: Advances in Preclinical Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE


  13. Combined multimodal photoacoustic tomography, optical coherence tomography (OCT) and OCT based angiography system for in vivo imaging of multiple skin disorders in human(Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Liu, Mengyang; Chen, Zhe; Sinz, Christoph; Rank, Elisabet; Zabihian, Behrooz; Zhang, Edward Z.; Beard, Paul C.; Kittler, Harald; Drexler, Wolfgang

    2017-02-01

    All-optical photoacoustic tomography (PAT) using a planar Fabry-Perot interferometer polymer film sensor has been demonstrated for in vivo human palm imaging with an imaging penetration depth of 5 mm. The relatively larger vessels in the superficial plexus and the vessels in the dermal plexus are visible in PAT. However, due to both resolution and sensitivity limits, all-optical PAT cannot reveal the smaller vessels such as capillary loops and venules. Melanin absorption also sometimes makes it difficult for PAT to resolve vessels. Optical coherence tomography (OCT) based angiography, on the other hand, has been proven suitable for microvasculature visualization in the first couple of millimeters of human tissue. In our work, we combine an all-optical PAT system with an OCT system featuring a phase-stable akinetic swept source. This multimodal PAT/OCT/OCT-angiography system provides co-registered human skin vasculature information as well as cutaneous structural information. The scanning units of the sub-systems are assembled into one probe, which is then mounted onto a portable rack. The probe and rack design gives six degrees of freedom, allowing the multimodal optical imaging probe to access nearly all regions of the human body. Utilizing this probe, we perform imaging on patients with various skin disorders as well as on healthy controls. The fused PAT/OCT-angiography volume shows the complete blood vessel network in human skin, which is further embedded in the morphology provided by OCT. A comparison between the results from the disordered regions and the normal regions demonstrates the clinical translational value of this multimodal optical imaging system in dermatology.

  14. Video event classification and image segmentation based on noncausal multidimensional hidden Markov models.

    PubMed

    Ma, Xiang; Schonfeld, Dan; Khokhar, Ashfaq A

    2009-06-01

    In this paper, we propose a novel solution to an arbitrary noncausal, multidimensional hidden Markov model (HMM) for image and video classification. First, we show that the noncausal model can be solved by splitting it into multiple causal HMMs and simultaneously solving each causal HMM using a fully synchronous distributed computing framework, therefore referred to as distributed HMMs. Next we present an approximate solution to the multiple causal HMMs that is based on an alternating updating scheme and assumes a realistic sequential computing framework. The parameters of the distributed causal HMMs are estimated by extending the classical 1-D training and classification algorithms to multiple dimensions. The proposed extension to arbitrary causal, multidimensional HMMs allows state transitions that are dependent on all causal neighbors. We, thus, extend three fundamental algorithms to multidimensional causal systems, i.e., 1) expectation-maximization (EM), 2) general forward-backward (GFB), and 3) Viterbi algorithms. In the simulations, we choose to limit ourselves to a noncausal 2-D model whose noncausality is along a single dimension, in order to significantly reduce the computational complexity. Simulation results demonstrate the superior performance, higher accuracy rate, and applicability of the proposed noncausal HMM framework to image and video classification.
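
    The classical one-dimensional Viterbi algorithm that the paper extends to multidimensional causal HMMs is sketched below in its standard log-domain textbook form (Python); the toy parameters are arbitrary.

        # Standard 1-D Viterbi decoding in the log domain.
        import numpy as np

        def viterbi(obs, log_pi, log_A, log_B):
            """obs: observation indices; log_pi: (S,); log_A: (S, S); log_B: (S, O)."""
            S, T = log_pi.size, len(obs)
            delta = np.empty((T, S))           # best log-probability ending in each state
            psi = np.zeros((T, S), dtype=int)  # back-pointers
            delta[0] = log_pi + log_B[:, obs[0]]
            for t in range(1, T):
                scores = delta[t - 1][:, None] + log_A     # (from state, to state)
                psi[t] = scores.argmax(axis=0)
                delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
            path = [int(delta[-1].argmax())]
            for t in range(T - 1, 0, -1):
                path.append(int(psi[t, path[-1]]))
            return path[::-1]

        pi = np.array([0.6, 0.4])
        A = np.array([[0.7, 0.3], [0.4, 0.6]])
        B = np.array([[0.9, 0.1], [0.2, 0.8]])
        print(viterbi([0, 0, 1, 1], np.log(pi), np.log(A), np.log(B)))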

  15. Multimodal full-field optical coherence tomography on biological tissue: toward all optical digital pathology

    NASA Astrophysics Data System (ADS)

    Harms, F.; Dalimier, E.; Vermeulen, P.; Fragola, A.; Boccara, A. C.

    2012-03-01

    Optical Coherence Tomography (OCT) is an efficient technique for in-depth optical biopsy of biological tissues, relying on interferometric selection of ballistic photons. Full-Field Optical Coherence Tomography (FF-OCT) is an alternative approach to Fourier-domain OCT (spectral or swept-source), allowing parallel acquisition of en-face optical sections. Using a medium-numerical-aperture objective, it is possible to reach an isotropic resolution of about 1x1x1 μm. After stitching a grid of acquired images, FF-OCT gives access to the architecture of the tissue, for both macroscopic and microscopic structures, in a non-invasive process, which makes the technique particularly suitable for applications in pathology. Here we report a multimodal approach to FF-OCT, combining two full-field techniques for collecting a backscattered endogenous OCT image and an exogenous fluorescence image in parallel. Considering pathological diagnosis of cancer, visualization of cell nuclei is of paramount importance. OCT images, even at the highest resolution, usually fail to identify individual nuclei due to the nature of the optical contrast used. We have built a multimodal optical microscope based on the combination of FF-OCT and Structured Illumination Microscopy (SIM). We used x30 immersion objectives, with a numerical aperture of 1.05, allowing for sub-micron transverse resolution. Fluorescent staining of nuclei was obtained using specific fluorescent dyes such as acridine orange. We present multimodal images of healthy and pathological skin tissue at various scales. This instrumental development paves the way for improvements of standard pathology procedures, as a faster, non-sacrificial, operator-independent digital optical method compared to frozen sections.

  16. Voxel-based automated detection of focal cortical dysplasia lesions using diffusion tensor imaging and T2-weighted MRI data.

    PubMed

    Wang, Yanming; Zhou, Yawen; Wang, Huijuan; Cui, Jin; Nguchu, Benedictor Alexander; Zhang, Xufei; Qiu, Bensheng; Wang, Xiaoxiao; Zhu, Mingwang

    2018-05-21

    The aim of this study was to automatically detect focal cortical dysplasia (FCD) lesions in patients with extratemporal lobe epilepsy by relying on diffusion tensor imaging (DTI) and T2-weighted magnetic resonance imaging (MRI) data. We implemented an automated classifier using voxel-based multimodal features to identify gray and white matter abnormalities of FCD in patient cohorts. In addition to the commonly used T2-weighted image intensity feature, DTI-based features were also utilized. A Gaussian processes for machine learning (GPML) classifier was tested on 12 patients with FCD (8 with histologically confirmed FCD) scanned at 1.5 T and cross-validated using a leave-one-out strategy. Moreover, we compared the multimodal GPML paradigm's performance with that of single-modal GPML and a classical support vector machine (SVM). Our results demonstrated that the GPML performance on DTI-based features (mean AUC = 0.63) matches the GPML performance on the T2-weighted image intensity feature (mean AUC = 0.64). More promisingly, GPML yielded significantly improved performance (mean AUC = 0.76) when DTI-based features were added to the multimodal paradigm. Based on the results, the proposed GPML strategy performed better and is robust to unbalanced datasets, in contrast to the SVM, which performed poorly (AUC = 0.69). Therefore, the GPML paradigm using multimodal MRI data containing the DTI modality shows promising results for detection of FCD lesions and provides an effective direction for future research. Copyright © 2018 Elsevier Inc. All rights reserved.
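
    A minimal sketch of the evaluation pattern described above, a Gaussian-process classifier with leave-one-out cross-validation on voxel-based multimodal features, is given below; the feature columns, labels and data are synthetic placeholders rather than the study's cohort, and scikit-learn's classifier stands in for the GPML toolbox.

```python
# Hedged sketch: Gaussian-process classification with leave-one-out cross-validation
# on voxel-based multimodal features. Data, labels and feature columns are synthetic
# placeholders; scikit-learn's classifier stands in for the GPML toolbox.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
y = np.array([0] * 12 + [1] * 12)                  # 0 = control, 1 = FCD (toy labels)
X = rng.normal(size=(24, 3)) + 0.8 * y[:, None]    # assumed columns: T2 intensity, FA, MD

probs, truth = [], []
for train, test in LeaveOneOut().split(X):
    clf = GaussianProcessClassifier().fit(X[train], y[train])
    probs.append(clf.predict_proba(X[test])[0, 1])  # probability of the lesion class
    truth.append(y[test][0])
print("leave-one-out AUC:", roc_auc_score(truth, probs))
```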

  17. Development of ClearPEM-Sonic, a multimodal mammography system for PET and Ultrasound

    NASA Astrophysics Data System (ADS)

    Cucciati, G.; Auffray, E.; Bugalho, R.; Cao, L.; Di Vara, N.; Farina, F.; Felix, N.; Frisch, B.; Ghezzi, A.; Juhan, V.; Jun, D.; Lasaygues, P.; Lecoq, P.; Mensah, S.; Mundler, O.; Neves, J.; Paganoni, M.; Peter, J.; Pizzichemi, M.; Siles, P.; Silva, J. C.; Silva, R.; Tavernier, S.; Tessonnier, L.; Varela, J.

    2014-03-01

    ClearPEM-Sonic is an innovative imaging device specifically developed for breast cancer imaging. Working in PEM-Ultrasound multimodality makes it possible to obtain metabolic and morphological information, increasing the specificity of the exam. The ClearPEM detector is designed to maximize sensitivity and spatial resolution compared with whole-body PET scanners. It is coupled with a 3D ultrasound system, the SuperSonic Imagine Aixplorer, which improves the specificity of the exam by providing a tissue elasticity map. This work describes the ClearPEM-Sonic project, focusing on the technological developments it has required, the technical merits (and limits), and the first multimodal images acquired on a dedicated phantom. It finally presents selected clinical case studies that confirm the value of PEM information.

  18. Focusing and imaging with increased numerical apertures through multimode fibers with micro-fabricated optics.

    PubMed

    Bianchi, S; Rajamanickam, V P; Ferrara, L; Di Fabrizio, E; Liberale, C; Di Leonardo, R

    2013-12-01

    The use of individual multimode optical fibers in endoscopy applications has the potential to provide highly miniaturized and noninvasive probes for microscopy and optical micromanipulation. A few different strategies have been proposed recently, but they all suffer from intrinsically low resolution related to the low numerical aperture of multimode fibers. Here, we show that two-photon polymerization allows for direct fabrication of micro-optics components on the fiber end, resulting in an increase of the numerical aperture to a value that is close to 1. Coupling light into the fiber through a spatial light modulator, we were able to optically scan a submicrometer spot (300 nm FWHM) over an extended region, facing the opposite fiber end. Fluorescence imaging with improved resolution is also demonstrated.

  19. Analysis of multimode fiber bundles for endoscopic spectral-domain optical coherence tomography

    PubMed Central

    Risi, Matthew D.; Makhlouf, Houssine; Rouse, Andrew R.; Gmitro, Arthur F.

    2016-01-01

    A theoretical analysis of the use of a fiber bundle in spectral-domain optical coherence tomography (OCT) systems is presented. The fiber bundle enables a flexible endoscopic design and provides fast, parallelized acquisition of the OCT data. However, the multimode characteristic of the fibers in the fiber bundle affects the depth sensitivity of the imaging system. A description of light interference in a multimode fiber is presented along with numerical simulations and experimental studies to illustrate the theoretical analysis. PMID:25967012

  20. Children Creating Multimodal Stories about a Familiar Environment

    ERIC Educational Resources Information Center

    Kervin, Lisa; Mantei, Jessica

    2017-01-01

    Storytelling is a practice that enables children to apply their literacy skills. This article shares a collaborative literacy strategy devised to enable children to create multimodal stories about their familiar school environment. The strategy uses resources, including the children's own drawings, images from Google Maps, and the Puppet Pals…

  1. Multimodal segmentation of optic disc and cup from stereo fundus and SD-OCT images

    NASA Astrophysics Data System (ADS)

    Miri, Mohammad Saleh; Lee, Kyungmoo; Niemeijer, Meindert; Abràmoff, Michael D.; Kwon, Young H.; Garvin, Mona K.

    2013-03-01

    Glaucoma is one of the major causes of blindness worldwide. One important structural parameter for the diagnosis and management of glaucoma is the cup-to-disc ratio (CDR), which tends to become larger as glaucoma progresses. While approaches exist for segmenting the optic disc and cup within fundus photographs, and more recently, within spectral-domain optical coherence tomography (SD-OCT) volumes, no approaches have been reported for the simultaneous segmentation of these structures within both modalities combined. In this work, a multimodal pixel-classification approach for the segmentation of the optic disc and cup within fundus photographs and SD-OCT volumes is presented. In particular, after segmentation of other important structures (such as the retinal layers and retinal blood vessels) and fundus-to-SD-OCT image registration, features are extracted from both modalities and a k-nearest-neighbor classification approach is used to classify each pixel as cup, rim, or background. The approach is evaluated on 70 multimodal image pairs from 35 subjects in a leave-10%-out fashion (by subject). A significant improvement in classification accuracy is obtained using the multimodal approach over that obtained from the corresponding unimodal approach (97.8% versus 95.2%; p < 0.05; paired t-test).
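
    The pixel-classification step described above can be pictured with the short sketch below, in which features pooled from two registered modalities feed a k-nearest-neighbor classifier; the feature columns and class labels are assumed for illustration and do not reproduce the authors' feature set.

```python
# Minimal sketch (not the authors' feature set): k-nearest-neighbor classification of
# pixels into cup / rim / background using features pooled from two registered modalities.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
n = 300
fundus_feats = rng.normal(size=(n, 4))    # e.g. color/texture features per pixel (assumed)
oct_feats = rng.normal(size=(n, 3))       # e.g. layer-thickness features per pixel (assumed)
X = np.hstack([fundus_feats, oct_feats])  # one multimodal feature vector per pixel
y = rng.integers(0, 3, size=n)            # 0 = background, 1 = rim, 2 = cup (toy labels)

knn = KNeighborsClassifier(n_neighbors=15).fit(X[:200], y[:200])
print("held-out accuracy:", knn.score(X[200:], y[200:]))
```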

  2. Mechanisms of murine cerebral malaria: Multimodal imaging of altered cerebral metabolism and protein oxidation at hemorrhage sites

    PubMed Central

    Hackett, Mark J.; Aitken, Jade B.; El-Assaad, Fatima; McQuillan, James A.; Carter, Elizabeth A.; Ball, Helen J.; Tobin, Mark J.; Paterson, David; de Jonge, Martin D.; Siegele, Rainer; Cohen, David D.; Vogt, Stefan; Grau, Georges E.; Hunt, Nicholas H.; Lay, Peter A.

    2015-01-01

    Using a multimodal biospectroscopic approach, we settle several long-standing controversies over the molecular mechanisms that lead to brain damage in cerebral malaria, which is a major health concern in developing countries because of high levels of mortality and permanent brain damage. Our results provide the first conclusive evidence that important components of the pathology of cerebral malaria include peroxidative stress and protein oxidation within cerebellar gray matter, which are colocalized with elevated nonheme iron at the site of microhemorrhage. Such information could not be obtained previously from routine imaging methods, such as electron microscopy, fluorescence, and optical microscopy in combination with immunocytochemistry, or from bulk assays, where the level of spatial information is restricted to the minimum size of tissue that can be dissected. We describe the novel combination of chemical probe–free, multimodal imaging to quantify molecular markers of disturbed energy metabolism and peroxidative stress, which were used to provide new insights into understanding the pathogenesis of cerebral malaria. In addition to these mechanistic insights, the approach described acts as a template for the future use of multimodal biospectroscopy for understanding the molecular processes involved in a range of clinically important acute and chronic (neurodegenerative) brain diseases to improve treatment strategies. PMID:26824064

  3. A Deep and Autoregressive Approach for Topic Modeling of Multimodal Data.

    PubMed

    Zheng, Yin; Zhang, Yu-Jin; Larochelle, Hugo

    2016-06-01

    Topic modeling based on latent Dirichlet allocation (LDA) has been a framework of choice to deal with multimodal data, such as in image annotation tasks. Another popular approach to model the multimodal data is through deep neural networks, such as the deep Boltzmann machine (DBM). Recently, a new type of topic model called the Document Neural Autoregressive Distribution Estimator (DocNADE) was proposed and demonstrated state-of-the-art performance for text document modeling. In this work, we show how to successfully apply and extend this model to multimodal data, such as simultaneous image classification and annotation. First, we propose SupDocNADE, a supervised extension of DocNADE, that increases the discriminative power of the learned hidden topic features and show how to employ it to learn a joint representation from image visual words, annotation words and class label information. We test our model on the LabelMe and UIUC-Sports data sets and show that it compares favorably to other topic models. Second, we propose a deep extension of our model and provide an efficient way of training the deep model. Experimental results show that our deep model outperforms its shallow version and reaches state-of-the-art performance on the Multimedia Information Retrieval (MIR) Flickr data set.
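
    For orientation, the sketch below illustrates the LDA baseline mentioned in the abstract, applied to a bag-of-words that concatenates image visual words and annotation words; the counts are synthetic and the sketch does not reproduce DocNADE or SupDocNADE.

```python
# Sketch of the LDA baseline mentioned above, fitted on a bag-of-words that concatenates
# image visual words with annotation words. Counts are synthetic; this does not
# reproduce DocNADE or SupDocNADE.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(7)
visual_words = rng.integers(0, 5, size=(40, 200))     # visual-word counts per image (assumed)
annotation_words = rng.integers(0, 3, size=(40, 50))  # annotation-word counts (assumed)
X = np.hstack([visual_words, annotation_words])       # joint multimodal vocabulary

lda = LatentDirichletAllocation(n_components=10, random_state=0)
doc_topics = lda.fit_transform(X)   # per-image topic mixture
print(doc_topics.shape)             # (40, 10)
```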

  4. Drug-related webpages classification based on multi-modal local decision fusion

    NASA Astrophysics Data System (ADS)

    Hu, Ruiguang; Su, Xiaojing; Liu, Yanxin

    2018-03-01

    In this paper, multi-modal local decision fusion is used for drug-related webpages classification. First, meaningful text is extracted through HTML parsing, and effective images are chosen by the FOCARSS algorithm. Second, six SVM classifiers are trained for six kinds of drug-taking instruments, which are represented by PHOG. One SVM classifier is trained for cannabis, which is represented by the mid-level feature of the BOW model. For each instance in a webpage, seven SVMs give seven labels for its image, and another seven labels are obtained by searching its related text for the names of the drug-taking instruments and cannabis. Concatenating the seven image labels and the seven text labels generates the representation of the instances in webpages. Last, Multi-Instance Learning is used to classify those drug-related webpages. Experimental results demonstrate that the classification accuracy of multi-instance learning with multi-modal local decision fusion is much higher than that of single-modal classification.
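
    The local decision fusion idea, per-modality classifier outputs concatenated into a single instance representation, can be sketched as below; the features, classifiers and data are stand-ins for illustration, not the paper's PHOG/BOW pipeline.

```python
# Illustrative sketch of local decision fusion: per-modality classifier outputs are
# concatenated into a single instance representation. Features, labels and classifiers
# are stand-ins, not the paper's PHOG/BOW pipeline.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
img_feats = rng.normal(size=(100, 32))   # e.g. PHOG descriptors of page images (assumed)
txt_feats = rng.normal(size=(100, 32))   # e.g. keyword-match features of page text (assumed)
labels = rng.integers(0, 2, size=100)    # 1 = drug-related, 0 = not (toy labels)

img_clf = LinearSVC(max_iter=5000).fit(img_feats, labels)
txt_clf = LinearSVC(max_iter=5000).fit(txt_feats, labels)

# Local decisions from each modality are concatenated; a downstream (e.g. multi-instance)
# learner would classify the webpage from these fused instance representations.
fused = np.column_stack([img_clf.decision_function(img_feats),
                         txt_clf.decision_function(txt_feats)])
print(fused.shape)   # (100, 2)
```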

  5. Multimodal Deep Autoencoder for Human Pose Recovery.

    PubMed

    Hong, Chaoqun; Yu, Jun; Wan, Jian; Tao, Dacheng; Wang, Meng

    2015-12-01

    Video-based human pose recovery is usually conducted by retrieving relevant poses using image features. In the retrieval process, the mapping between 2D images and 3D poses is assumed to be linear in most of the traditional methods. However, their relationship is inherently non-linear, which limits the recovery performance of these methods. In this paper, we propose a novel pose recovery method using non-linear mapping with a multi-layered deep neural network. It is based on feature extraction with multimodal fusion and back-propagation deep learning. In multimodal fusion, we construct a hypergraph Laplacian with low-rank representation. In this way, we obtain a unified feature description by standard eigen-decomposition of the hypergraph Laplacian matrix. In back-propagation deep learning, we learn a non-linear mapping from 2D images to 3D poses with parameter fine-tuning. The experimental results on three data sets show that the recovery error is reduced by 20%-25%, which demonstrates the effectiveness of the proposed method.
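
    A rough sketch of the fusion idea, obtaining a unified feature description from the eigen-decomposition of a graph Laplacian built over pooled multimodal features, follows; it uses an ordinary Gaussian-affinity graph rather than the paper's hypergraph with low-rank representation, and all data are synthetic.

```python
# Rough sketch: a unified feature description from the eigen-decomposition of a graph
# Laplacian built over pooled multimodal features. An ordinary Gaussian-affinity graph is
# used instead of the paper's hypergraph with low-rank representation; data are synthetic.
import numpy as np

rng = np.random.default_rng(9)
feats = [rng.normal(size=(60, 16)), rng.normal(size=(60, 24))]  # two feature modalities
X = np.hstack(feats)                                            # pooled per-sample features

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
W = np.exp(-d2 / d2.mean())                          # Gaussian affinity matrix
L = np.diag(W.sum(axis=1)) - W                       # (unnormalized) graph Laplacian

vals, vecs = np.linalg.eigh(L)
embedding = vecs[:, 1:9]   # smallest nontrivial eigenvectors give a fused low-dim description
print(embedding.shape)     # (60, 8)
```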

  6. Multi-modal diffuse optical techniques for breast cancer neoadjuvant chemotherapy monitoring (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Cochran, Jeffrey M.; Busch, David R.; Ban, Han Y.; Kavuri, Venkaiah C.; Schweiger, Martin J.; Arridge, Simon R.; Yodh, Arjun G.

    2017-02-01

    We present high spatial density, multi-modal, parallel-plate Diffuse Optical Tomography (DOT) imaging systems for the purpose of breast tumor detection. One hybrid instrument provides time domain (TD) and continuous wave (CW) DOT at 64 source fiber positions. The TD diffuse optical spectroscopy with PMT detection produces low-resolution images of absolute tissue scattering and absorption, while the spatially dense array of CCD-coupled detector fibers (108 detectors) provides higher-resolution CW images of relative tissue optical properties. Reconstruction of the tissue optical properties, along with total hemoglobin concentration and tissue oxygen saturation, is performed using the TOAST software suite. Comparison of the spatially dense DOT images and MR images allows for a robust validation of DOT against an accepted clinical modality. Additionally, the structural information from co-registered MR images is used as a spatial prior to improve the quality of the functional optical images and provide more accurate quantification of the optical and hemodynamic properties of tumors. We also present an optical-only imaging system that provides frequency domain (FD) DOT at 209 source positions with full CCD detection and incorporates optical fringe projection profilometry to determine the breast boundary. This profilometry serves as a spatial constraint, improving the quality of the DOT reconstructions while retaining the benefits of an optical-only device. We present initial images from both human subjects and phantoms to demonstrate the utility of high spatial density data and multi-modal information in DOT reconstruction with the two systems.

  7. Integration of Sparse Multi-modality Representation and Anatomical Constraint for Isointense Infant Brain MR Image Segmentation

    PubMed Central

    Wang, Li; Shi, Feng; Gao, Yaozong; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang

    2014-01-01

    Segmentation of infant brain MR images is challenging due to poor spatial resolution, severe partial volume effect, and the ongoing maturation and myelination process. During the first year of life, the brain image contrast between white and gray matter undergoes dramatic changes. In particular, the image contrast inverts around 6–8 months of age, when the white and gray matter tissues are isointense in T1- and T2-weighted images and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a general framework that adopts sparse representation to fuse the multi-modality image information and further incorporates anatomical constraints for brain tissue segmentation. Specifically, we first derive an initial segmentation from a library of aligned images with ground-truth segmentations by using sparse representation in a patch-based fashion for the multi-modality T1, T2 and FA images. The segmentation result is then iteratively refined by integration of the anatomical constraint. The proposed method was evaluated on 22 infant brain MR images acquired at around 6 months of age by using leave-one-out cross-validation, as well as on 10 other unseen testing subjects. Our method achieved high accuracy in terms of the Dice ratios that measure the volume overlap between automated and manual segmentations, i.e., 0.889±0.008 for white matter and 0.870±0.006 for gray matter. PMID:24291615
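
    The patch-based sparse-representation step can be pictured with the sketch below, in which a multimodal patch is coded against a library of labeled patches and the code weights vote for a tissue label; orthogonal matching pursuit is used here as a generic sparse solver, and the library, patch and labels are synthetic, so this is not the authors' implementation.

```python
# Hedged sketch of patch-based sparse representation: a multimodal patch (T1+T2+FA stacked)
# is coded against a library of labeled patches and the code weights vote for a tissue label.
# Orthogonal matching pursuit is used as a generic sparse solver; all data are synthetic.
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(8)
library = rng.normal(size=(75, 60))        # 60 library patches; 75 = 3 modalities x 25 voxels
library /= np.linalg.norm(library, axis=0) # normalize each library patch (column)
lib_labels = rng.integers(0, 3, size=60)   # 0 = CSF, 1 = GM, 2 = WM (assumed labels)

patch = rng.normal(size=75)                              # target multimodal patch
code = orthogonal_mp(library, patch, n_nonzero_coefs=5)  # sparse coefficients, one per atom

votes = np.array([np.abs(code[lib_labels == k]).sum() for k in range(3)])
print("predicted tissue label:", votes.argmax())
```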

  8. Fluorescence labeled microbubbles for multimodal imaging.

    PubMed

    Barrefelt, Åsa; Zhao, Ying; Larsson, Malin K; Egri, Gabriella; Kuiper, Raoul V; Hamm, Jörg; Saghafian, Maryam; Caidahl, Kenneth; Brismar, Torkel B; Aspelin, Peter; Heuchel, Rainer; Muhammed, Mamoun; Dähne, Lars; Hassan, Moustapha

    2015-08-28

    Air-filled polyvinyl alcohol microbubbles (PVA-MBs) were recently introduced as a contrast agent for ultrasound imaging. In the present study, we explore the possibility of extending their application in multimodal imaging by labeling them with a near infrared (NIR) fluorophore, VivoTag-680. PVA-MBs were injected intravenously into FVB/N female mice and their dynamic biodistribution over 24 h was determined by 3D-fluorescence imaging co-registered with 3D-μCT imaging, to verify the anatomic location. To further confirm the biodistribution results from in vivo imaging, organs were removed and examined histologically using bright field and fluorescence microscopy. Fluorescence imaging detected PVA-MB accumulation in the lungs within the first 30 min post-injection. Redistribution to a low extent was observed in liver and kidneys at 4 h, and to a high extent mainly in the liver and spleen at 24 h. Histology confirmed PVA-MB localization in lung capillaries and macrophages. In the liver, they were associated with Kupffer cells; in the spleen, they were located mostly within the marginal-zone. Occasional MBs were observed in the kidney glomeruli and interstitium. The potential application of PVA-MBs as a contrast agent was also studied using ultrasound (US) imaging in subcutaneous and orthotopic pancreatic cancer mouse models, to visualize blood flow within the tumor mass. In conclusion, this study showed that PVA-MBs are useful as a contrast agent for multimodal imaging. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Echocardiography in the Era of Multimodality Cardiovascular Imaging

    PubMed Central

    Shah, Benoy Nalin

    2013-01-01

    Echocardiography remains the most frequently performed cardiac imaging investigation and is an invaluable tool for detailed and accurate evaluation of cardiac structure and function. Echocardiography, nuclear cardiology, cardiac magnetic resonance imaging, and cardiovascular-computed tomography comprise the subspeciality of cardiovascular imaging, and these techniques are often used together for a multimodality, comprehensive assessment of a number of cardiac diseases. This paper provides the general cardiologist and physician with an overview of state-of-the-art modern echocardiography, summarising established indications as well as highlighting advances in stress echocardiography, three-dimensional echocardiography, deformation imaging, and contrast echocardiography. Strengths and limitations of echocardiography are discussed as well as the growing role of real-time three-dimensional echocardiography in the guidance of structural heart interventions in the cardiac catheter laboratory. PMID:23878804

  10. Magnetic Nanoparticles for Multi-Imaging and Drug Delivery

    PubMed Central

    Lee, Jae-Hyun; Kim, Ji-wook; Cheon, Jinwoo

    2013-01-01

    Various bio-medical applications of magnetic nanoparticles have been explored during the past few decades. As tools that hold great potential for advancing biological sciences, magnetic nanoparticles have been used as platform materials for enhanced magnetic resonance imaging (MRI) agents, biological separation and magnetic drug delivery systems, and magnetic hyperthermia treatment. Furthermore, approaches that integrate various imaging and bioactive moieties have been used in the design of multi-modality systems, which possess synergistically enhanced properties such as better imaging resolution and sensitivity, molecular recognition capabilities, stimulus-responsive drug delivery with on-demand control, and spatio-temporally controlled cell signal activation. Below, recent studies that focus on the design and synthesis of multi-mode magnetic nanoparticles are briefly reviewed and their potential applications in the imaging and therapy areas are also discussed. PMID:23579479

  11. Rat brain imaging using full field optical coherence microscopy with short multimode fiber probe

    NASA Astrophysics Data System (ADS)

    Sato, Manabu; Saito, Daisuke; Kurotani, Reiko; Abe, Hiroyuki; Kawauchi, Satoko; Sato, Shunichi; Nishidate, Izumi

    2017-02-01

    We demonstrated FF-OCM (full-field optical coherence microscopy) using an ultrathin forward-imaging SMMF (short multimode fiber) probe of 50 μm core diameter, 125 μm outer diameter, and 7.4 mm length, which is a typical graded-index multimode fiber for optical communications. The axial resolution was measured to be 2.20 μm, which is close to the calculated axial resolution of 2.06 μm. The lateral resolution was evaluated to be 4.38 μm using a test pattern. Assuming that the FWHM of the contrast is the DOF (depth of focus), the DOF of the signal images is 36 μm and that of the OCM images is 66 μm. The contrast of the OCM images was 6.1 times higher than that of the signal images due to the coherence gate. After euthanasia, the rat brain was resected and cut 2.6 mm caudal to the bregma. With the SMMF in contact with the primary somatosensory cortex and the agranular insular cortex of the ex vivo brain, OCM images were acquired 100 times in 2 μm steps. 3D OCM images of the brain were obtained, and internal structure information was extracted. The feasibility of an SMMF as an ultrathin forward-imaging probe in full-field OCM has been demonstrated.

  12. Multimodality image display station

    NASA Astrophysics Data System (ADS)

    Myers, H. Joseph

    1990-07-01

    The Multi-modality Image Display Station (MIDS) is designed for the use of physicians outside of the radiology department. Connected to a local area network or a host computer, it provides speedy access to digitized radiology images and written diagnostics needed by attending and consulting physicians near the patient bedside. Emphasis has been placed on low cost, high performance and ease of use. The work is being done as a joint study with the University of Texas Southwestern Medical Center at Dallas, and as part of a joint development effort with the Mayo Clinic. MIDS is a prototype, and should not be assumed to be an IBM product.

  13. Multidimensional Processing and Visual Rendering of Complex 3D Biomedical Images

    NASA Technical Reports Server (NTRS)

    Sams, Clarence F.

    2016-01-01

    The proposed technology uses advanced image analysis techniques to maximize the resolution and utility of medical imaging methods being used during spaceflight. We utilize COTS technology for medical imaging, but our applications require higher resolution assessment of the medical images than is routinely applied with nominal system software. By leveraging advanced data reduction and multidimensional imaging techniques utilized in analysis of Planetary Sciences and Cell Biology imaging, it is possible to significantly increase the information extracted from the onboard biomedical imaging systems. Year 1 focused on application of these techniques to the ocular images collected on ground test subjects and ISS crewmembers. Focus was on the choroidal vasculature and the structure of the optic disc. Methods allowed for increased resolution and quantitation of structural changes enabling detailed assessment of progression over time. These techniques enhance the monitoring and evaluation of crew vision issues during space flight.

  14. Intrasubject multimodal groupwise registration with the conditional template entropy.

    PubMed

    Polfliet, Mathias; Klein, Stefan; Huizinga, Wyke; Paulides, Margarethus M; Niessen, Wiro J; Vandemeulebroucke, Jef

    2018-05-01

    Image registration is an important task in medical image analysis. Whereas most methods are designed for the registration of two images (pairwise registration), there is an increasing interest in simultaneously aligning more than two images using groupwise registration. Multimodal registration in a groupwise setting remains difficult, due to the lack of generally applicable similarity metrics. In this work, a novel similarity metric for such groupwise registration problems is proposed. The metric calculates the sum of the conditional entropy between each image in the group and a representative template image constructed iteratively using principal component analysis. The proposed metric is validated in extensive experiments on synthetic and intrasubject clinical image data. These experiments showed equivalent or improved registration accuracy compared to other state-of-the-art (dis)similarity metrics and improved transformation consistency compared to pairwise mutual information. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
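
    The metric described above sums, over the group, the conditional entropy between each image and a template; the sketch below estimates one such conditional-entropy term from a joint intensity histogram, using the group mean as a stand-in for the iteratively constructed PCA template, and is an illustration of the idea rather than the authors' implementation.

```python
# Illustration of one conditional-entropy term H(image | template), estimated from a joint
# intensity histogram. The group mean stands in for the iteratively constructed PCA template;
# this is a sketch of the idea, not the authors' implementation.
import numpy as np

def conditional_entropy(image, template, bins=32):
    joint, _, _ = np.histogram2d(image.ravel(), template.ravel(), bins=bins)
    p_joint = joint / joint.sum()
    p_template = p_joint.sum(axis=0)          # marginal over the template intensities
    with np.errstate(divide="ignore", invalid="ignore"):
        cond = p_joint / p_template[np.newaxis, :]   # p(image | template), columnwise
    nz = p_joint > 0
    return -np.sum(p_joint[nz] * np.log(cond[nz]))

rng = np.random.default_rng(3)
group = [rng.normal(size=(64, 64)) for _ in range(4)]   # toy image group
template = np.mean(group, axis=0)                       # stand-in for the PCA template
metric = sum(conditional_entropy(img, template) for img in group)
print("groupwise metric:", metric)
```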

  15. BNU-LSVED: a multimodal spontaneous expression database in educational environment

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Wei, Qinglan; He, Jun; Yu, Lejun; Zhu, Xiaoming

    2016-09-01

    In the field of pedagogy or educational psychology, emotions are treated as very important factors that are closely associated with cognitive processes. Hence, it is meaningful for teachers to analyze students' emotions in classrooms, thus adjusting their teaching activities and improving students' individual development. To provide a benchmark for different expression recognition algorithms, a large collection of training and test data in the classroom environment has become an acute need. In this paper, we present a multimodal spontaneous database collected in a real learning environment. To collect the data, students watched seven kinds of teaching videos and were simultaneously filmed by a camera. Trained coders assigned one of five learning expression labels to each image sequence extracted from the captured videos. This subset consists of 554 multimodal spontaneous expression image sequences (22,160 frames) recorded in real classrooms. There are four main advantages to this database. 1) Because it was recorded in a real classroom environment, the viewer's distance from the camera and the lighting vary considerably between image sequences. 2) All the data presented are natural spontaneous responses to teaching videos. 3) The multimodal database also contains nonverbal behavior, including eye movement, head posture and gestures, to infer a student's affective state during the courses. 4) In the video sequences, there are different kinds of temporal activation patterns. In addition, we have demonstrated that the labels for the image sequences are highly reliable using Cronbach's alpha.

  16. New developments in multimodal clinical multiphoton tomography

    NASA Astrophysics Data System (ADS)

    König, Karsten

    2011-03-01

    80 years ago, the PhD student Maria Goeppert predicted two-photon effects in her thesis in Goettingen, Germany. It took 30 years to prove her theory, and another three decades to realize the first two-photon microscope. With the beginning of this millennium, the first clinical multiphoton tomographs started operation in research institutions, hospitals, and in the cosmetic industry. The multiphoton tomograph MPTflexTM with its miniaturized flexible scan head became the Prism Award 2010 winner in the category Life Sciences. Multiphoton tomographs, with their superior submicron spatial resolution, can be upgraded to 5D imaging tools by adding spectral time-correlated single photon counting units. Furthermore, multimodal hybrid tomographs provide chemical fingerprinting and fast wide-field imaging. The world's first clinical CARS studies were performed with a hybrid multimodal multiphoton tomograph in spring 2010. In particular, nonfluorescent lipids and water as well as mitochondrial fluorescent NAD(P)H, fluorescent elastin, keratin, and melanin as well as SHG-active collagen have been imaged in patients with dermatological disorders. Further multimodal approaches include the combination of multiphoton tomographs with low-resolution imaging tools such as ultrasound, optoacoustic, OCT, and dermoscopy systems. Multiphoton tomographs are currently employed in Australia, Japan, the US, and several European countries for early diagnosis of skin cancer (malignant melanoma), optimization of treatment strategies (wound healing, dermatitis), and cosmetic research including long-term biosafety tests of ZnO sunscreen nanoparticles and the measurement of the stimulated biosynthesis of collagen by anti-ageing products.

  17. Multimodal nanoparticle imaging agents: design and applications

    NASA Astrophysics Data System (ADS)

    Burke, Benjamin P.; Cawthorne, Christopher; Archibald, Stephen J.

    2017-10-01

    Molecular imaging, where the location of molecules or nanoscale constructs can be tracked in the body to report on disease or biochemical processes, is rapidly expanding to include combined modality or multimodal imaging. No single imaging technique can offer the optimum combination of properties (e.g. resolution, sensitivity, cost, availability). The rapid technological advances in hardware to scan patients, and software to process and fuse images, are pushing the boundaries of novel medical imaging approaches, and hand-in-hand with this is the requirement for advanced and specific multimodal imaging agents. These agents can be detected using a selection from radioisotope, magnetic resonance and optical imaging, among others. Nanoparticles offer great scope in this area as they lend themselves, via facile modification procedures, to act as multifunctional constructs. They have relevance as therapeutics and drug delivery agents that can be tracked by molecular imaging techniques with the particular development of applications in optically guided surgery and as radiosensitizers. There has been a huge amount of research work to produce nanoconstructs for imaging, and the parameters for successful clinical translation and validation of therapeutic applications are now becoming much better understood. It is an exciting time of progress for these agents as their potential is closer to being realized with translation into the clinic. The coming 5-10 years will be critical, as we will see if the predicted improvement in clinical outcomes becomes a reality. Some of the latest advances in combination modality agents are selected and the progression pathway to clinical trials analysed. This article is part of the themed issue 'Challenges for chemistry in molecular imaging'.

  18. Brain's tumor image processing using shearlet transform

    NASA Astrophysics Data System (ADS)

    Cadena, Luis; Espinosa, Nikolai; Cadena, Franklin; Korneeva, Anna; Kruglyakov, Alexey; Legalov, Alexander; Romanenko, Alexey; Zotin, Alexander

    2017-09-01

    Brain tumor detection is a well-known research area for medical and computer scientists. In recent decades there has been much research on tumor detection, segmentation, and classification. Medical imaging plays a central role in the diagnosis of brain tumors and nowadays relies on non-invasive, high-resolution techniques, especially magnetic resonance imaging and computed tomography scans. Edge detection is a fundamental tool in image processing, particularly in the areas of feature detection and feature extraction, which aim at identifying points in a digital image at which the image has discontinuities. Shearlets are among the most successful frameworks for the efficient representation of multidimensional data, capturing edges and other anisotropic features which frequently dominate multidimensional phenomena. The paper proposes an improved brain tumor detection method that automatically detects the tumor location in MR images; its features are extracted by a new shearlet transform.

  19. Crafting a Social Context for Medical Informatics Networks

    NASA Astrophysics Data System (ADS)

    Patel, Salil H.

    Effective healthcare delivery is increasingly predicated upon the availability, accuracy, and integrity of personal health information. Tracking and analysis of medical information throughout its lifecycle may be viewed through the lenses of both physical network architecture and the broader social context in which such information is gathered and applied. As information technology and evidence-based practice models evolve in tandem, the development of interlinked multimodal and multidimensional databases has shown great promise for improving public health. To this end, providers, regulators, payers, and individual patients each share rights and responsibilities in creating a milieu which both fosters and protects the practice and promise of medical information.

  20. Multi-Dimensionality of Synthetic Vision Cockpit Displays: Prevention of Controlled-Flight-Into-Terrain

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, Jarvis J.; Bailey, Randall E.

    2006-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that will help to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. The paper describes experimental evaluation of a multi-mode 3-D exocentric synthetic vision navigation display concept for commercial aircraft. Experimental results showed the situation awareness benefits of 2-D and 3-D exocentric synthetic vision displays over traditional 2-D co-planar navigation and vertical situation displays. Conclusions and future research directions are discussed.

  1. Appearance of osteolysis with melorheostosis: redefining the disease or a new disorder? A novel case report with multimodality imaging.

    PubMed

    Osher, Lawrence S; Blazer, Marie Mantini; Bumpus, Kelly

    2013-01-01

    We present a case report of melorheostosis with the novel radiographic finding of underlying cortical resorption. A number of radiographic patterns of melorheostosis have been described; however, the combination of new bone formation and resorption of the original cortex appears unique. Although the presence of underlying lysis has been postulated in published studies, direct radiographic evidence of bony resorption in melorheostosis has not been reported. These findings can be subtle and might go unnoticed using standard imaging. An in-depth review of the radiographic features is presented, including multimodality imaging with magnetic resonance imaging and computed tomography. Copyright © 2013 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  2. Multimodality hard-x-ray imaging of a chromosome with nanoscale spatial resolution

    DOE PAGES

    Yan, Hanfei; Nazaretski, Evgeny; Lauer, Kenneth R.; ...

    2016-02-05

    Here, we developed a scanning hard x-ray microscope using a new class of x-ray nano-focusing optic called a multilayer Laue lens and imaged a chromosome with nanoscale spatial resolution. The combination of the hard x-ray's superior penetration power, high sensitivity to elemental composition, high spatial-resolution and quantitative analysis creates a unique tool with capabilities that other microscopy techniques cannot provide. Using this microscope, we simultaneously obtained absorption-, phase-, and fluorescence-contrast images of Pt-stained human chromosome samples. The high spatial-resolution of the microscope and its multi-modality imaging capabilities enabled us to observe the internal ultra-structures of a thick chromosome without sectioning it.

  3. Multimodality hard-x-ray imaging of a chromosome with nanoscale spatial resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Hanfei; Nazaretski, Evgeny; Lauer, Kenneth R.

    Here, we developed a scanning hard x-ray microscope using a new class of x-ray nano-focusing optic called a multilayer Laue lens and imaged a chromosome with nanoscale spatial resolution. The combination of the hard x-ray's superior penetration power, high sensitivity to elemental composition, high spatial-resolution and quantitative analysis creates a unique tool with capabilities that other microscopy techniques cannot provide. Using this microscope, we simultaneously obtained absorption-, phase-, and fluorescence-contrast images of Pt-stained human chromosome samples. The high spatial-resolution of the microscope and its multi-modality imaging capabilities enabled us to observe the internal ultra-structures of a thick chromosome without sectioning it.

  4. A Multimode Optical Imaging System for Preclinical Applications In Vivo: Technology Development, Multiscale Imaging, and Chemotherapy Assessment

    PubMed Central

    Hwang, Jae Youn; Wachsmann-Hogiu, Sebastian; Ramanujan, V. Krishnan; Ljubimova, Julia; Gross, Zeev; Gray, Harry B.; Medina-Kauwe, Lali K.; Farkas, Daniel L.

    2012-01-01

    Purpose Several established optical imaging approaches have been applied, usually in isolation, to preclinical studies; however, truly useful in vivo imaging may require a simultaneous combination of imaging modalities to examine dynamic characteristics of cells and tissues. We developed a new multimode optical imaging system designed to be application-versatile, yielding high-sensitivity and high-specificity molecular imaging. Procedures We integrated several optical imaging technologies, including fluorescence intensity, spectral, lifetime, intravital confocal, two-photon excitation, and bioluminescence, into a single system that enables functional multiscale imaging in animal models. Results The approach offers a comprehensive imaging platform for kinetic, quantitative, and environmental analysis of highly relevant information, with micro-to-macroscopic resolution. Applied to small animals in vivo, this provides superior monitoring of processes of interest, represented here by chemo-/nanoconstruct therapy assessment. Conclusions This new system is versatile and can be optimized for various applications, of which cancer detection and targeted treatment are emphasized here. PMID:21874388

  5. Multi-channel MRI segmentation with graph cuts using spectral gradient and multidimensional Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Lecoeur, Jérémy; Ferré, Jean-Christophe; Collins, D. Louis; Morrisey, Sean P.; Barillot, Christian

    2009-02-01

    A new segmentation framework is presented taking advantage of the multimodal image signature of the different brain tissues (healthy and/or pathological). This is achieved by merging three different modalities of gray-level MRI sequences into a single RGB-like MRI, hence creating a unique 3-dimensional signature for each tissue by utilising the complementary information of each MRI sequence. Using the scale-space spectral gradient operator, we can obtain a spatial gradient robust to intensity inhomogeneity. Even though it is based on psycho-visual color theory, it can be very efficiently applied to the RGB colored images. Moreover, it is not influenced by the channel assignment of each MRI sequence. Its optimisation by the graph cuts paradigm provides a powerful and accurate tool to segment either healthy or pathological tissues in a short time (about ninety seconds on average for a brain-tissue classification). As it is a semi-automatic method, we ran experiments to quantify the number of seeds needed to perform a correct segmentation (Dice similarity score above 0.85). Depending on the different sets of MRI sequences used, this number of seeds (expressed as a percentage of the number of voxels of the ground truth) is between 6% and 16%. We tested this algorithm on BrainWeb for validation purposes (healthy tissue classification and MS lesion segmentation) and also on clinical data for tumour and MS lesion detection and tissue classification.
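
    The first step described above, merging three gray-level MRI sequences into a single RGB-like volume so that every voxel carries a 3-dimensional signature, can be sketched as follows; a multidimensional Gaussian mixture is fitted here as a simple stand-in for the graph-cut optimisation, and the volumes are synthetic.

```python
# Sketch of the first step above: three gray-level MRI sequences stacked into an RGB-like
# volume so every voxel carries a 3-dimensional signature. A multidimensional Gaussian
# mixture is fitted as a simple stand-in for the graph-cut optimisation; volumes are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
t1 = rng.normal(size=(32, 32, 32))      # placeholder T1-weighted volume
t2 = rng.normal(size=(32, 32, 32))      # placeholder T2-weighted volume
flair = rng.normal(size=(32, 32, 32))   # placeholder FLAIR volume

rgb = np.stack([t1, t2, flair], axis=-1)   # one 3-D intensity signature per voxel
voxels = rgb.reshape(-1, 3)

gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0).fit(voxels)
labels = gmm.predict(voxels).reshape(32, 32, 32)   # coarse tissue classes
print(np.bincount(labels.ravel()))
```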

  6. Cross-modal face recognition using multi-matcher face scores

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2015-05-01

    The performance of face recognition can be improved using information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face may be a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face onto the visible gallery faces requires cross-modal matching approaches. A few such studies have been implemented in facial feature space with medium recognition performance. In this paper, we propose a cross-modal recognition approach, where multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at the image, feature and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed with the three cross-matched face scores from the aforementioned algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logistic regression [BLR]) is trained and then tested with the score vectors using 10-fold cross-validation. The proposed approach was validated with a multispectral stereo face dataset from 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84%, FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces onto the fused visible faces by using three face scores and the BLR classifier.
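
    Score-level fusion as described above can be outlined with the sketch below: each probe-gallery comparison yields a three-element score vector (one score per matching algorithm), which a simple classifier maps to a genuine/impostor decision under cross-validation; the score distributions are synthetic and the matchers themselves are not reproduced.

```python
# Hedged sketch of score-level fusion: each probe-gallery comparison yields a three-element
# score vector (one score per matching algorithm), classified as genuine or impostor under
# cross-validation. Score distributions are synthetic; the matchers are not reproduced.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
genuine = rng.normal(loc=0.7, scale=0.1, size=(100, 3))    # scores for matching identities
impostor = rng.normal(loc=0.4, scale=0.1, size=(100, 3))   # scores for non-matching identities
X = np.vstack([genuine, impostor])
y = np.array([1] * 100 + [0] * 100)

clf = LogisticRegression()   # a simple stand-in for the BLR classifier named above
print("10-fold accuracy:", cross_val_score(clf, X, y, cv=10).mean())
```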

  7. Multimodal Reading Comprehension: Curriculum Expectations and Large-Scale Literacy Testing Practices

    ERIC Educational Resources Information Center

    Unsworth, Len

    2014-01-01

    Interpreting the image-language interface in multimodal texts is now well recognized as a crucial aspect of reading comprehension in a number of official school syllabi such as the recently published Australian Curriculum: English (ACE). This article outlines the relevant expected student learning outcomes in this curriculum and draws attention to…

  8. An fMRI Study of Multimodal Semantic and Phonological Processing in Reading Disabled Adolescents

    ERIC Educational Resources Information Center

    Landi, Nicole; Mencl, W. Einar; Frost, Stephen J.; Sandak, Rebecca; Pugh, Kenneth R.

    2010-01-01

    Using functional magnetic resonance imaging, we investigated multimodal (visual and auditory) semantic and unimodal (visual only) phonological processing in reading disabled (RD) adolescents and non-impaired (NI) control participants. We found reduced activation for RD relative to NI in a number of left-hemisphere reading-related areas across all…

  9. A Multimodal Perspective on Textuality and Contexts

    ERIC Educational Resources Information Center

    Jewitt, Carey

    2007-01-01

    Textuality is often thought of in linguistic terms; for instance, the talk and writing that circulate in the classroom. In this paper I take a multimodal perspective on textuality and context. I draw on illustrative examples from school Science and English to examine how image, colour, gesture, gaze, posture and movement--as well as writing and…

  10. "Convince Me!" Valuing Multimodal Literacies and Composing Public Service Announcements

    ERIC Educational Resources Information Center

    Selfe, Richard J.; Selfe, Cynthia L.

    2008-01-01

    For some teachers, the increasing attention to digital and multimodal composing in English and Language Arts classrooms has brought into sharp relief the profession's investment in print as the primary means of expression. Although new forms of communication that combine words, still and moving images, and animation have begun to dominate digital…

  11. Concept for Classifying Facade Elements Based on Material, Geometry and Thermal Radiation Using Multimodal Uav Remote Sensing

    NASA Astrophysics Data System (ADS)

    Ilehag, R.; Schenk, A.; Hinz, S.

    2017-08-01

    This paper presents a concept for the classification of facade elements based on the material and geometry of the elements, in addition to the thermal radiation of the facade, using a multimodal Unmanned Aerial Vehicle (UAV) system. Once the concept is finalized and functional, the workflow can be used for building energy demand estimation by exploiting existing methods for estimating the heat transfer coefficient and the transmitted heat loss. The multimodal system consists of a thermal, a hyperspectral and an optical sensor, which can be operated from a UAV. The challenges faced when dealing with sensors that operate in different spectral ranges and have different technical specifications, such as radiometric and geometric resolution, are presented. Different approaches to data fusion are addressed, such as image registration, generation of 3D models by image matching, and classification based on either the geometry of the object or the pixel values. As a first step towards realizing the concept, the result of a geometric calibration with a designed multimodal calibration pattern is presented.

  12. MCA-NMF: Multimodal Concept Acquisition with Non-Negative Matrix Factorization

    PubMed Central

    Mangin, Olivier; Filliat, David; ten Bosch, Louis; Oudeyer, Pierre-Yves

    2015-01-01

    In this paper we introduce MCA-NMF, a computational model of the acquisition of multimodal concepts by an agent grounded in its environment. More precisely our model finds patterns in multimodal sensor input that characterize associations across modalities (speech utterances, images and motion). We propose this computational model as an answer to the question of how some class of concepts can be learnt. In addition, the model provides a way of defining such a class of plausibly learnable concepts. We detail why the multimodal nature of perception is essential to reduce the ambiguity of learnt concepts as well as to communicate about them through speech. We then present a set of experiments that demonstrate the learning of such concepts from real non-symbolic data consisting of speech sounds, images, and motions. Finally we consider structure in perceptual signals and demonstrate that a detailed knowledge of this structure, named compositional understanding can emerge from, instead of being a prerequisite of, global understanding. An open-source implementation of the MCA-NMF learner as well as scripts and associated experimental data to reproduce the experiments are publicly available. PMID:26489021
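
    A minimal sketch of the underlying factorization idea, non-negative matrix factorization applied to concatenated per-modality feature histograms so that the learned components couple speech, image and motion features, is given below; the feature layout and data are assumed for illustration and this is not the released MCA-NMF code.

```python
# Minimal sketch of the factorization idea (assumed feature layout): NMF on concatenated
# per-modality histograms, so each learned component couples speech, image and motion
# features. This is illustrative only, not the released MCA-NMF code.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(6)
speech = rng.random((50, 20))   # e.g. sound-event histograms (assumed)
image = rng.random((50, 30))    # e.g. visual-word histograms (assumed)
motion = rng.random((50, 10))   # e.g. motion-primitive histograms (assumed)

V = np.hstack([speech, image, motion])            # samples x concatenated modality features
model = NMF(n_components=8, init="nndsvda", max_iter=500)
W = model.fit_transform(V)                        # per-sample concept activations
H = model.components_                             # cross-modal concept dictionary
print(W.shape, H.shape)                           # (50, 8) (8, 60)
```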

  13. Airborne multidimensional integrated remote sensing system

    NASA Astrophysics Data System (ADS)

    Xu, Weiming; Wang, Jianyu; Shu, Rong; He, Zhiping; Ma, Yanhua

    2006-12-01

    In this paper, we present an airborne multidimensional integrated remote sensing system that consists of an imaging spectrometer, a three-line scanner, a laser ranger, a position & orientation subsystem and a PAV30 stabilizer. The imaging spectrometer is composed of two identical push-broom hyperspectral imagers, each with a field of view of 22°, providing a combined field of view of 42°. The spectral range of the imaging spectrometer is from 420 nm to 900 nm, and its spectral resolution is 5 nm. The three-line scanner is composed of two panchromatic CCDs and an RGB CCD, with a 20° stereo angle and a 10 cm GSD (ground sample distance) at a 1000 m flying height. The laser ranger provides height data for three points every four scanning lines of the spectral imager, and those three points are calibrated to match the corresponding pixels of the spectral imager. The post-processing attitude accuracy of the POS/AV 510 used as the position & orientation subsystem, a dedicated airborne exterior-orientation measurement product of the Canadian Applanix Corporation, is 0.005° when combined with base station data. The airborne multidimensional integrated remote sensing system was implemented successfully, performed its first flight experiment in April 2005, and obtained satisfactory data.

  14. Design and characterization of a handheld multimodal imaging device for the assessment of oral epithelial lesions

    NASA Astrophysics Data System (ADS)

    Higgins, Laura M.; Pierce, Mark C.

    2014-08-01

    A compact handpiece combining high resolution fluorescence (HRF) imaging with optical coherence tomography (OCT) was developed to provide real-time assessment of oral lesions. This multimodal imaging device simultaneously captures coregistered en face images with subcellular detail alongside cross-sectional images of tissue microstructure. The HRF imaging acquires a 712×594 μm2 field-of-view at the sample with a spatial resolution of 3.5 μm. The OCT images were acquired to a depth of 1.5 mm with axial and lateral resolutions of 9.3 and 8.0 μm, respectively. HRF and OCT images are simultaneously displayed at 25 fps. The handheld device was used to image a healthy volunteer, demonstrating the potential for in vivo assessment of the epithelial surface for dysplastic and neoplastic changes at the cellular level, while simultaneously evaluating submucosal involvement. We anticipate potential applications in real-time assessment of oral lesions for improved surveillance and surgical guidance.

  15. NOVEL PRERETINAL HAIR PIN-LIKE VESSEL IN RETINAL ASTROCYTIC HAMARTOMA WITH VITREOUS HEMORRHAGE.

    PubMed

    Soeta, Megumi; Arai, Yusuke; Takahashi, Hidenori; Fujino, Yujiro; Tanabe, Tatsuro; Inoue, Yuji; Kawashima, Hidetoshi

    2018-01-01

    To report a case of retinal astrocytic hamartoma with vitreous hemorrhage and a hair pin-like vessel adhering to a posterior vitreous membrane. A 33-year-old man with a retinal astrocytic hamartoma presented with vitreous hemorrhage five times. Multimodal imaging was performed, including fundus photography, fluorescein angiography, optical coherence tomography, and B-mode ultrasonography. Multimodal imaging demonstrated a novel hair pin-like vessel that adhered to the posterior vitreous membrane. Some cases of retinal astrocytic hamartoma with vitreous hemorrhage may be related to structural abnormalities of tumor vessels.

  16. Papillary fibroelastoma diagnosed through multimodality cardiac imaging: a rare tumour in an uncommon location with review of literature.

    PubMed

    Anand, Senthil; Sydow, Nicole; Janardhanan, Rajesh

    2017-08-08

    We describe the case of a woman presenting with transient ischaemic attack, who was found to have a papillary fibroelastoma arising from the aortic wall, an extremely rare location. We describe the multimodality imaging techniques used in diagnosing this patient and review the most recent literature on evaluation and management of patients with cardiac papillary fibroelastomas. © BMJ Publishing Group Ltd (unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  17. Hybrid Core-Shell (HyCoS) Nanoparticles produced by Complex Coacervation for Multimodal Applications

    NASA Astrophysics Data System (ADS)

    Vecchione, D.; Grimaldi, A. M.; Forte, E.; Bevilacqua, Paolo; Netti, P. A.; Torino, E.

    2017-03-01

    Multimodal imaging probes can provide diagnostic information combining different imaging modalities. Nanoparticles (NPs) can contain two or more imaging tracers that allow several diagnostic techniques to be used simultaneously. In this work, a complex coacervation process to produce completely biocompatible core-shell polymeric nanoparticles (HyCoS) for multimodal imaging applications is described. Innovations over the traditional coacervation process are the control of the reaction temperature, which speeds up the reaction itself, and the production of a double-crosslinked system to improve the stability of the nanostructures in the presence of a clinically relevant contrast agent for MRI (Gd-DTPA). Through control of the crosslinking behavior, an up to 6-fold increase in the relaxometric properties of Gd-DTPA is achieved. Furthermore, HyCoS can be loaded with a high amount of a dye such as ATTO 633 or conjugated with a model dye such as FITC for in vivo optical imaging. The results show stable core-shell polymeric nanoparticles that can be used both for MRI and for optical applications, allowing detection free from harmful radiation. Additionally, preliminary results on the possibility of triggering drug release through a pH effect are reported.

  18. Exploring photoreceptor reflectivity via multimodal imaging of outer retinal tubulation in advanced age-related macular degeneration

    PubMed Central

    Litts, Katie M.; Wang, Xiaolin; Clark, Mark E.; Owsley, Cynthia; Freund, K. Bailey; Curcio, Christine A.; Zhang, Yuhua

    2016-01-01

    Purpose To investigate the microscopic structure of outer retinal tubulation (ORT) and optical properties of cone photoreceptors in vivo, we studied ORT appearance by multimodal imaging, including spectral domain optical coherence tomography (SD-OCT) and adaptive optics scanning laser ophthalmoscopy (AOSLO). Methods Four eyes of 4 subjects with advanced AMD underwent color fundus photography, infrared reflectance imaging, SD-OCT, and AOSLO with a high-resolution research instrument. ORT was identified in closely spaced (11 μm) SD-OCT volume scans. Results ORT in cross-sectional and en face SD-OCT was a hyporeflective area representing a lumen surrounded by a hyperreflective border consisting of cone photoreceptor mitochondria and external limiting membrane, per previous histology. In contrast, ORT by AOSLO was a hyporeflective structure of the same shape as in en face SD-OCT but lacking visualizable cone photoreceptors. Conclusion Lack of ORT cone reflectivity by AOSLO indicates that cones have lost their normal directionality and waveguiding property due to loss of outer segments and subsequent retinal remodeling. Reflective ORT cones by SD-OCT, in contrast, may depend partly on mitochondria as light scatterers within inner segments of these degenerating cells, a phenomenon enhanced by coherent imaging. Multimodal imaging of ORT provides insight into cone degeneration and reflectivity sources in OCT. PMID:27584549

  19. The Neurochemical and Microstructural Changes in the Brain of Systemic Lupus Erythematosus Patients: A Multimodal MRI Study

    PubMed Central

    Zhang, Zhiyan; Wang, Yukai; Shen, Zhiwei; Yang, Zhongxian; Li, Li; Chen, Dongxiao; Yan, Gen; Cheng, Xiaofang; Shen, Yuanyu; Tang, Xiangyong; Hu, Wei; Wu, Renhua

    2016-01-01

    The diagnosis and pathology of neuropsychiatric systemic lupus erythematosus (NPSLE) remain challenging. Herein, we used multimodal imaging to assess anatomical and functional changes in the brains of SLE patients, instead of the single MRI approach generally used in previous studies. Twenty-two NPSLE patients, 21 non-NPSLE patients and 20 healthy controls (HCs) underwent 3.0 T MRI with multivoxel magnetic resonance spectroscopy, T1-weighted volumetric imaging for voxel-based morphometry (VBM) and diffusional kurtosis imaging (DKI) scans. While there were findings in other basal ganglia regions, the most consistent findings were observed in the posterior cingulate gyrus (PCG). Reduced concentrations of multiple metabolites were observed in the PCG in the two patient groups, more prominently in the NPSLE patients. The two patient groups displayed lower mean kurtosis (MK) values in the bilateral PCG compared with HCs (p < 0.01) as assessed by DKI. Grey matter reduction in the PCG was observed in the NPSLE group using VBM. Positive correlations among cognitive function scores and imaging metrics in the bilateral PCG were detected. Multimodal imaging is useful for evaluating SLE subjects and potentially determining disease pathology. Impairments of cognitive function in SLE patients may be explained by metabolic and microstructural changes in the PCG. PMID:26758023

  20. Luminomagnetic Eu3+- and Dy3+-doped hydroxyapatite for multimodal imaging.

    PubMed

    Tesch, Annemarie; Wenisch, Christoph; Herrmann, Karl-Heinz; Reichenbach, Jürgen R; Warncke, Paul; Fischer, Dagmar; Müller, Frank A

    2017-12-01

    Multimodal imaging has recently attracted much attention due to the advantageous combination of different imaging modalities, such as photoluminescence (PL) and magnetic resonance imaging (MRI). In the present study, luminescent and magnetic hydroxyapatites (HAp) were prepared via doping with europium (Eu3+) and dysprosium (Dy3+), respectively. Co-doping of Eu3+ and Dy3+ was used to combine the desired physical properties. Both lanthanide ions were successfully incorporated in the HAp crystal lattice, where they preferentially occupied calcium(I) sites. While Eu-doped HAp (Eu:HAp) exhibits dopant-concentration-dependent persistent PL properties, Dy-doped HAp (Dy:HAp) shows paramagnetic behavior due to the high magnetic moment of Dy3+. Co-doped HAp (Eu:Dy:HAp) nanoparticles combine both properties in one single crystal. Remarkably, multimodal co-doped HAp features enhanced PL properties due to an energy transfer from Dy3+ sensitizer to Eu3+ activator ions. Eu:Dy:HAp exhibits strong transverse relaxation effects with a maximum transverse relaxivity of 83.3 L/(mmol·s). Due to their tunable PL, magnetic properties, and cytocompatibility, Eu:HAp, Dy:HAp and Eu:Dy:HAp represent promising biocompatible ceramic materials for luminescence imaging that may simultaneously serve as contrast agents for MRI in permanent implants or functional coatings. Copyright © 2017 Elsevier B.V. All rights reserved.
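
    The relaxivity quoted above is conventionally obtained as the slope of the transverse relaxation rate 1/T2 versus dopant concentration. A minimal sketch of that fit, using made-up concentrations and T2 values rather than the study's data:

```python
import numpy as np

# Hypothetical Dy3+ concentrations (mmol/L) and measured T2 times (s).
conc = np.array([0.0, 0.05, 0.10, 0.20, 0.40])    # mmol/L
t2 = np.array([2.00, 0.19, 0.11, 0.056, 0.029])   # s

# Relaxation rate R2 = 1/T2; relaxivity r2 is the slope of R2 versus concentration:
# R2(C) = R2(0) + r2 * C
r2_rate = 1.0 / t2
slope, intercept = np.polyfit(conc, r2_rate, 1)

print(f"transverse relaxivity r2 ≈ {slope:.1f} L/(mmol·s)")
print(f"diamagnetic background R2(0) ≈ {intercept:.2f} 1/s")
```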

  1. A new approach of building 3D visualization framework for multimodal medical images display and computed assisted diagnosis

    NASA Astrophysics Data System (ADS)

    Li, Zhenwei; Sun, Jianyong; Zhang, Jianguo

    2012-02-01

    As more and more CT/MR studies are scanned with larger volumes of data, more and more radiologists and clinicians would like to use PACS workstations to display and manipulate these larger image data sets with 3D rendering features. In this paper, we propose a design method and implementation strategy to develop a 3D image display component that provides not only standard 3D display functions but also multi-modal medical image fusion as well as computer-assisted diagnosis of coronary heart disease. The 3D component has been integrated into the PACS display workstation of Shanghai Huadong Hospital, and clinical practice showed that it is easy for radiologists and physicians to use 3D functions such as multi-modality (e.g. CT, MRI, PET, SPECT) visualization, registration and fusion, and quantitative lesion measurement. Users were satisfied with the rendering speed and the quality of the 3D reconstruction. The advantages of the component include low hardware requirements, easy integration, reliable performance and a comfortable user experience. With this system, radiologists and clinicians can manipulate 3D images easily and use the advanced visualization tools to facilitate their work at a PACS display workstation at any time.

  2. Shape-Controlled Synthesis of Isotopic Yttrium-90-Labeled Rare Earth Fluoride Nanocrystals for Multimodal Imaging.

    PubMed

    Paik, Taejong; Chacko, Ann-Marie; Mikitsh, John L; Friedberg, Joseph S; Pryma, Daniel A; Murray, Christopher B

    2015-09-22

    Isotopically labeled nanomaterials have recently attracted much attention in biomedical research, environmental health studies, and clinical medicine because radioactive probes allow the elucidation of in vitro and in vivo cellular transport mechanisms, as well as the unambiguous distribution and localization of nanomaterials in vivo. In addition, nanocrystal-based inorganic materials offer a unique capability for customizing size, shape, and composition, with the potential to be designed as multimodal imaging probes. Size and shape of nanocrystals can directly influence interactions with biological systems; hence, it is important to develop synthetic methods to design radiolabeled nanocrystals with precise control of size and shape. Here, we report size- and shape-controlled synthesis of rare earth fluoride nanocrystals doped with the β-emitting radioisotope yttrium-90 ((90)Y). Size and shape of the nanocrystals are tailored via tight control of reaction parameters and the type of rare earth host (e.g., Gd or Y) employed. Radiolabeled nanocrystals are synthesized in high radiochemical yield and purity, with excellent radiolabel stability in the face of surface modification with different polymeric ligands. We demonstrate the Cerenkov radioluminescence imaging and magnetic resonance imaging capabilities of (90)Y-doped GdF3 nanoplates, which offer unique opportunities as a promising platform for multimodal imaging and targeted therapy.

  3. Development of a Multi-modal Tissue Diagnostic System Combining High Frequency Ultrasound and Photoacoustic Imaging with Lifetime Fluorescence Spectroscopy

    PubMed Central

    Sun, Yang; Stephens, Douglas N.; Park, Jesung; Sun, Yinghua; Marcu, Laura; Cannata, Jonathan M.; Shung, K. Kirk

    2010-01-01

    We report the development and validation of a multi-modal tissue diagnostic technology that combines three complementary techniques into one system: ultrasound backscatter microscopy (UBM), photoacoustic imaging (PAI), and time-resolved laser-induced fluorescence spectroscopy (TR-LIFS). UBM enables the reconstruction of the tissue microanatomy. PAI maps the optical absorption heterogeneity of the tissue associated with structural information and has the potential to provide functional imaging of the tissue. Examination of the UBM and PAI images allows localization of regions of interest for TR-LIFS evaluation of the tissue composition. The hybrid probe consists of a single-element ring transducer with concentric fiber optics for multi-modal data acquisition. Validation and characterization of the multi-modal system and coregistration of the ultrasonic, photoacoustic, and spectroscopic data were conducted in a physical phantom with ultrasound scattering, optical absorption, and fluorescence properties. The UBM system with the 41 MHz ring transducer reaches axial and lateral resolutions of 30 and 65 μm, respectively. The PAI system with 532 nm excitation light from a Nd:YAG laser shows strong contrast for the distribution of optical absorbers. The TR-LIFS system records the fluorescence decay with a time resolution of ~300 ps and a high sensitivity in the nM concentration range. A biological phantom constructed with different types of tissue (tendon and fat) was used to demonstrate the complementary information provided by the three modalities. Fluorescence spectra and lifetimes were compared to differentiate the chemical composition of tissues at the regions of interest determined by the coregistered high-resolution UBM and PAI images. Current results demonstrate that the fusion of these techniques enables sequential detection of functional, morphological, and compositional features of biological tissue, suggesting potential applications in the diagnosis of tumors and atherosclerotic plaques. PMID:21894259

  5. IDH mutation assessment of glioma using texture features of multimodal MR images

    NASA Astrophysics Data System (ADS)

    Zhang, Xi; Tian, Qiang; Wu, Yu-Xia; Xu, Xiao-Pan; Li, Bao-Juan; Liu, Yi-Xiong; Liu, Yang; Lu, Hong-Bing

    2017-03-01

    Purpose: To 1) identify effective texture features from multimodal MRI that can distinguish IDH mutant from wild-type status, and 2) propose a radiomic strategy for preoperatively detecting IDH mutation in patients with glioma. Materials and Methods: 152 patients with glioma were retrospectively included from the Cancer Genome Atlas. The corresponding pre- and post-contrast T1-weighted, T2-weighted and fluid-attenuated inversion recovery images from the Cancer Imaging Archive were analyzed. Appropriate statistical tests were applied to analyze the different kinds of baseline information of lower-grade glioma (LrGG) patients. Finally, 168 texture features were derived from multimodal MRI per patient. Support vector machine-based recursive feature elimination (SVM-RFE) and a classification strategy were then adopted to find the optimal feature subset and build identification models for detecting IDH mutation. Results: Among the 152 patients, 92 and 60 were confirmed to be IDH wild-type and mutant, respectively. Statistical analysis showed that patients without IDH mutation were significantly older than patients with IDH mutation (p<0.01), and the distribution of some histological subtypes was significantly different between the IDH wild-type and mutant groups (p<0.01). After SVM-RFE, 15 optimal features were determined for IDH mutation detection. The accuracy, sensitivity, specificity, and AUC after SVM-RFE and parameter optimization were 82.2%, 85.0%, 78.3%, and 0.841, respectively. Conclusion: This study presents a radiomic strategy for noninvasively discriminating IDH mutation status in patients with glioma. It effectively incorporates multiple kinds of texture features from multimodal MRI with an SVM-based classification strategy. The results suggest that features selected by SVM-RFE have greater potential for identifying IDH mutation. The proposed radiomics strategy could facilitate clinical decision making in patients with glioma.
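
    The feature-selection step described in this abstract (SVM-based recursive feature elimination over 168 texture features, retaining 15) can be sketched with scikit-learn. The feature matrix, labels, and hyperparameters below are placeholders, not the study's data or settings:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(152, 168))      # 152 patients x 168 texture features (placeholder)
y = rng.integers(0, 2, size=152)     # 1 = IDH mutant, 0 = IDH wild-type (placeholder)

# A linear SVM ranks features by |weight|; RFE drops the weakest until 15 remain.
selector = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=15, step=1)
model = make_pipeline(StandardScaler(), selector, SVC(kernel="rbf", C=1.0, gamma="scale"))

acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.3f} ± {acc.std():.3f}")
```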

  6. Frameless multimodal image guidance of localized convection-enhanced delivery of therapeutics in the brain

    PubMed Central

    van der Bom, Imramsjah M J; Moser, Richard P; Gao, Guanping; Sena-Esteves, Miguel; Aronin, Neil

    2013-01-01

    Introduction Convection-enhanced delivery (CED) has been shown to be an effective method of administering macromolecular compounds into the brain that are unable to cross the blood-brain barrier. Because the administration is highly localized, accurate cannula placement by minimally invasive surgery is an important requisite. This paper reports on the use of an angiographic c-arm system which enables truly frameless multimodal image guidance during CED surgery. Methods A microcannula was placed into the striatum of five sheep under real-time fluoroscopic guidance using imaging data previously acquired by cone beam computed tomography (CBCT) and MRI, enabling three-dimensional navigation. After introduction of the cannula, high resolution CBCT was performed and registered with MRI to confirm the position of the cannula tip and to make adjustments as necessary. Adeno-associated viral vector-10, designed to deliver small-hairpin micro RNA (shRNAmir), was mixed with 2.0 mM gadolinium (Gd) and infused at a rate of 3 μl/min for a total of 100 μl. Upon completion, the animals were transferred to an MR scanner to assess the approximate distribution by measuring the volume of spread of Gd. Results The cannula was successfully introduced under multimodal image guidance. High resolution CBCT enabled validation of the cannula position and Gd-enhanced MRI after CED confirmed localized administration of the therapy. Conclusion A microcannula for CED was introduced into the striatum of five sheep under multimodal image guidance. The non-alloy 300 μm diameter cannula tip was well visualized using CBCT, enabling confirmation of the position of the end of the tip in the area of interest. PMID:22193239

  7. GRAFT-VERSUS-HOST DISEASE PANUVEITIS AND BILATERAL SEROUS DETACHMENTS: MULTIMODAL IMAGING ANALYSIS.

    PubMed

    Jung, Jesse J; Chen, Michael H; Rofagha, Soraya; Lee, Scott S

    2017-01-01

    To report the multimodal imaging findings and follow-up of a case of graft-versus-host disease-induced bilateral panuveitis and serous retinal detachments after allogeneic bone marrow transplant for acute myeloid leukemia. A 75-year-old black man presented with acute decreased vision in both eyes for 1 week. Clinical examination and multimodal imaging, including spectral domain optical coherence tomography, fundus autofluorescence, fluorescein angiography, and swept-source optical coherence tomography angiography (Investigational Device; Carl Zeiss Meditec Inc) were performed. Clinical examination of the patient revealed anterior and posterior inflammation and bilateral serous retinal detachments. Ultra-widefield fundus autofluorescence demonstrated hyperautofluorescence secondary to subretinal fluid; and fluorescein angiography revealed multiple areas of punctate hyperfluorescence, leakage, and staining of the optic discs. Spectral domain and enhanced depth imaging optical coherence tomography demonstrated subretinal fluid, a thickened, undulating retinal pigment epithelium layer, and a thickened choroid in both eyes. En-face swept-source optical coherence tomography angiography did not show any retinal vascular abnormalities but did demonstrate patchy areas of decreased choriocapillaris flow. An extensive systemic infectious and malignancy workup was negative and the patient was treated with high-dose oral prednisone immunosuppression. Subsequent 6-month follow-up demonstrated complete resolution of the inflammation and bilateral serous detachments after completion of the prednisone taper over a 3-month period. Graft-versus-host disease panuveitis and bilateral serous retinal detachments are rare complications of allogeneic bone marrow transplant for acute myeloid leukemia and can be diagnosed with clinical and multimodal imaging analysis. This form of autoimmune inflammation may occur after the recovery of T-cell activity within the donor graft targeting the host. Infectious and recurrent malignancy must be ruled out before initiation of immunosuppression, which can effectively treat this form of graft-versus-host disease.

  8. Morphology supporting function: attenuation correction for SPECT/CT, PET/CT, and PET/MR imaging

    PubMed Central

    Lee, Tzu C.; Alessio, Adam M.; Miyaoka, Robert M.; Kinahan, Paul E.

    2017-01-01

    Both SPECT and, in particular, PET are unique in medical imaging for their high sensitivity and direct link to a physical quantity, i.e. radiotracer concentration. This gives PET and SPECT imaging unique capabilities for accurately monitoring disease activity for the purposes of clinical management or therapy development. However, to achieve a direct quantitative connection between the underlying radiotracer concentration and the reconstructed image values, several confounding physical effects have to be estimated, notably photon attenuation and scatter. With the advent of dual-modality SPECT/CT, PET/CT, and PET/MR scanners, the complementary CT or MR image data can enable these corrections, although there are unique challenges for each combination. This review covers the basic physics underlying photon attenuation and scatter, summarizes technical considerations for multimodal imaging with regard to PET and SPECT quantification, and describes methods to address the challenges for each multimodal combination. PMID:26576737
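
    For PET, the attenuation correction that the complementary CT enables reduces to the line integral of the linear attenuation coefficient along each line of response, ACF = exp(∫ μ dl). A toy sketch on a synthetic μ-map (illustrative geometry and values, not a clinical CT-to-μ conversion):

```python
import numpy as np

# Synthetic 2D mu-map (1/cm) at 511 keV: a water-equivalent disc in air.
n, pix_cm = 128, 0.4
mu = np.zeros((n, n))
yy, xx = np.mgrid[0:n, 0:n]
mu[(xx - n // 2) ** 2 + (yy - n // 2) ** 2 < (n // 3) ** 2] = 0.096  # ~water at 511 keV

# Attenuation correction factor for one horizontal line of response (LOR):
# ACF = exp(sum(mu) * pixel_size), i.e. exp of the discrete line integral.
row = n // 2
line_integral = mu[row, :].sum() * pix_cm
acf = np.exp(line_integral)
print(f"line integral = {line_integral:.2f}, ACF = {acf:.1f}")
```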

  9. The Use of Interactive Raster Graphics in the Display and Manipulation of Multidimensional Data

    NASA Technical Reports Server (NTRS)

    Anderson, D. C.

    1981-01-01

    Techniques for the review, display, and manipulation of multidimensional data are developed and described. Multidimensional data is meant in this context to describe scalar data associated with a three dimensional geometry or otherwise too complex to be well represented by traditional graphs. Raster graphics techniques are used to display a shaded image of a three dimensional geometry. The use of color to represent scalar data associated with the geometries in shaded images is explored. Distinct hues are associated with discrete data ranges, thus emulating the traditional representation of data with isarithms, or lines of constant numerical value. Data ranges are alternatively associated with a continuous spectrum of hues to show subtler data trends. The application of raster graphics techniques to the display of bivariate functions is explored.
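
    The two color encodings described here (distinct hues for discrete data ranges, emulating isarithms, versus a continuous spectrum for subtler trends) map directly onto modern colormap tools. A small illustrative sketch with matplotlib, on a synthetic scalar field:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import BoundaryNorm

# Scalar field over a surface (placeholder for data on a 3D geometry).
x, y = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-2, 2, 200))
data = np.exp(-(x**2 + y**2)) * np.cos(3 * x)

# Discrete data ranges -> distinct hues (emulates isarithms / contour bands).
levels = np.linspace(data.min(), data.max(), 9)
discrete = BoundaryNorm(levels, ncolors=256)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
ax1.imshow(data, cmap="viridis", norm=discrete)   # banded hues
ax1.set_title("discrete ranges (isarithm-like)")
ax2.imshow(data, cmap="viridis")                  # continuous spectrum
ax2.set_title("continuous spectrum")
plt.show()
```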

  10. CHARACTERIZING PHOTORECEPTOR CHANGES IN ACUTE POSTERIOR MULTIFOCAL PLACOID PIGMENT EPITHELIOPATHY USING ADAPTIVE OPTICS.

    PubMed

    Roberts, Philipp K; Nesper, Peter L; Onishi, Alex C; Skondra, Dimitra; Jampol, Lee M; Fawzi, Amani A

    2018-01-01

    To characterize lesions of acute posterior multifocal placoid pigment epitheliopathy (APMPPE) by multimodal imaging including adaptive optics scanning laser ophthalmoscopy (AOSLO). We included patients with APMPPE at different stages of evolution of the placoid lesions. Color fundus photography, spectral domain optical coherence tomography, infrared reflectance, fundus autofluorescence, and AOSLO images were obtained and registered to correlate microstructural changes. Eight eyes of four patients (two women) were included and analyzed by multimodal imaging. Photoreceptor reflectivity within APMPPE lesions was more heterogeneous than in adjacent healthy areas. Hyperpigmentation on color fundus photography appeared hyperreflective on infrared reflectance and on AOSLO. Irregularity of the interdigitation zone and the photoreceptor inner and outer segment junctions (IS/OS) on spectral domain optical coherence tomography was associated with photoreceptor hyporeflectivity on AOSLO. Interruption of the interdigitation zone or IS/OS was associated with loss of photoreceptor reflectivity on AOSLO. Irregularities in the reflectivity of the photoreceptor mosaic are visible on AOSLO even in inactive APMPPE lesions, where the photoreceptor bands on spectral domain optical coherence tomography have recovered. Adaptive optics scanning laser ophthalmoscopy combined with multimodal imaging has the potential to enhance our understanding of photoreceptor involvement in APMPPE.

  11. Tissue imaging using full field optical coherence microscopy with short multimode fiber probe

    NASA Astrophysics Data System (ADS)

    Sato, Manabu; Eto, Kai; Goto, Tetsuhiro; Kurotani, Reiko; Abe, Hiroyuki; Nishidate, Izumi

    2018-03-01

    For minimally invasive access to deeply located regions, the size of the imaging probe is important. We demonstrated full-field optical coherence microscopy (FF-OCM) using an ultrathin forward-imaging short multimode fiber (SMMF) probe for optical communications, with a 50 μm core diameter, 125 μm outer diameter, and 7.4 mm length. The axial resolution was measured to be 2.14 μm, and the lateral resolution was evaluated to be below 4.38 μm using a test pattern (TP). The spatial mode and polarization characteristics of the SMMF were evaluated. Inserting the SMMF into an in vivo rat brain, 3D images were acquired and 2D information on nerve fibers was obtained. The feasibility of an SMMF as an ultrathin forward-imaging probe in FF-OCM has been demonstrated.

  12. vECTlab—A fully integrated multi-modality Monte Carlo simulation framework for the radiological imaging sciences

    NASA Astrophysics Data System (ADS)

    Peter, Jörg; Semmler, Wolfhard

    2007-10-01

    Alongside and in part motivated by recent advances in molecular diagnostics, the development of dual-modality instruments for patient and dedicated small animal imaging has gained attention from diverse research groups. The desire for such systems is high, not only to link molecular or functional information with anatomical structure, but also to detect multiple molecular events simultaneously at shorter total acquisition times. While PET and SPECT have been integrated successfully with X-ray CT, the advance of optical imaging approaches (OT) and their integration into existing modalities carry a high application potential, particularly for imaging small animals. A multi-modality Monte Carlo (MC) simulation approach has been developed that is able to trace high-energy (keV) as well as optical (eV) photons concurrently within identical phantom representation models. We show that the two ray-tracing approaches for keV and eV photons can be integrated into a single simulation framework that enables both photon classes to be propagated through various geometry models representing both phantoms and scanners. The main advantage of such an integrated framework for our specific application is the investigation of novel tomographic multi-modality instrumentation intended for in vivo small animal imaging through time-resolved MC simulation on identical phantom geometries. Design examples are provided for recently proposed SPECT-OT and PET-OT imaging systems.
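
    At its simplest, the keV ray-tracing side of such a framework amounts to sampling exponential free-path lengths against the local attenuation coefficient. A deliberately minimal one-dimensional sketch (not the vECTlab code) that checks a Monte Carlo transmission estimate against Beer-Lambert:

```python
import numpy as np

rng = np.random.default_rng(1)

def transmitted_fraction(mu_per_cm: float, thickness_cm: float, n_photons: int = 100_000) -> float:
    """Toy MC: each photon either passes the slab or interacts once (absorbed); no scatter."""
    # Free path length sampled from an exponential distribution with mean 1/mu.
    path = rng.exponential(scale=1.0 / mu_per_cm, size=n_photons)
    return float(np.mean(path > thickness_cm))

mu, t = 0.2, 5.0   # 1/cm and cm, illustrative values
mc = transmitted_fraction(mu, t)
print(f"MC estimate: {mc:.3f}  vs  Beer-Lambert exp(-mu*t) = {np.exp(-mu * t):.3f}")
```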

  13. Multimodal autofluorescence detection of cancer: from single cells to living organism

    NASA Astrophysics Data System (ADS)

    Horilova, J.; Cunderlikova, B.; Cagalinec, M.; Chorvat, D.; Marcek Chorvatova, A.

    2018-02-01

    Multimodal optical imaging of suspected tissues is proving to be a promising method for distinguishing suspected cancerous tissue from healthy tissue. In particular, the combination of steady-state spectroscopic methods with time-resolved fluorescence provides more precise insight into native metabolism when focused on tissue autofluorescence. Cancer is linked to specific metabolic remodeling that is detectable spectroscopically. In this work, we evaluate the possibilities and limitations of multimodal optical cancer detection in single cells, collagen-based 3D cell cultures, and living organisms (whole mice), representing gradually increasing complexity of model systems.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamran, Mudassar, E-mail: kamranm@mir.wustl.edu; Fowler, Kathryn J., E-mail: fowlerk@mir.wustl.edu; Mellnick, Vincent M., E-mail: mellnickv@mir.wustl.edu

    Primary aortic neoplasms are rare. Aortic sarcoma arising after endovascular aneurysm repair (EVAR) is a scarce subset of primary aortic malignancies, reports of which are infrequent in the published literature. The diagnosis of aortic sarcoma is challenging due to its non-specific clinical presentation, and the prognosis is poor due to delayed diagnosis, rapid proliferation, and propensity for metastasis. Post-EVAR, aortic sarcomas may mimic other more common aortic processes on surveillance imaging. Radiologists are rarely knowledgeable about this rare entity, for which multimodality imaging and awareness are invaluable in early diagnosis. A series of three pathologically confirmed cases is presented to display the multimodality imaging features and clinical presentations of aortic sarcoma arising after EVAR.

  15. Multimodal device for assessment of skin malformations

    NASA Astrophysics Data System (ADS)

    Bekina, A.; Garancis, V.; Rubins, U.; Spigulis, J.; Valeine, L.; Berzina, A.

    2013-11-01

    A variety of multi-spectral imaging devices is commercially available and used for skin diagnostics and monitoring; however, an alternative cost-efficient device can provide an advanced spectral analysis of skin. A compact multimodal device for diagnosis of pigmented skin lesions was developed and tested. A polarized LED light source illuminates the skin surface at four different wavelengths - blue (450 nm), green (545 nm), red (660 nm) and infrared (940 nm). Spectra of reflected light from the 25 mm wide skin spot are imaged by a CMOS sensor. Four spectral images are obtained for mapping of the main skin chromophores. The specific chromophore distribution differences between different skin malformations were analyzed and information of subcutaneous structures was consecutively extracted.

  16. Multimodal instrument for high-sensitivity autofluorescence and spectral optical coherence tomography of the human eye fundus

    PubMed Central

    Komar, Katarzyna; Stremplewski, Patrycjusz; Motoczyńska, Marta; Szkulmowski, Maciej; Wojtkowski, Maciej

    2013-01-01

    In this paper we present a multimodal device for in vivo imaging of the fundus of the human eye, which combines the functionality of autofluorescence confocal SLO with Fourier-domain OCT. Native fluorescence of the human fundus was excited by a modulated laser beam (λ = 473 nm, 20 MHz) and lock-in detection was applied, resulting in improved sensitivity. The setup allows acquisition of high-resolution OCT and high-contrast AF images using a fluorescence excitation power of 50-65 μW without averaging consecutive images. Successful functioning of the constructed device has been demonstrated for 8 healthy volunteers ranging in age from 24 to 83 years. PMID:24298426

  17. Multifunctional fluorescent and magnetic nanoparticles for biomedical applications

    NASA Astrophysics Data System (ADS)

    Selvan, Subramanian T.

    2012-03-01

    Hybrid multifunctional nanoparticles (NPs) are emerging as useful probes for magnetic targeting, delivery, cell separation, magnetic resonance imaging (MRI), and fluorescence-based bio-labeling applications. Judging from the literature, the development of multifunctional NPs for multimodality imaging is still in its infancy. This report focuses on our recent work on quantum dots (QDs), magnetic NPs (MNPs) and bi-functional NPs (composed of either QDs or rare-earth NPs, together with magnetic NPs - iron oxide or gadolinium oxide) for multimodality-imaging-based biomedical applications. The combination of MRI and fluorescence is complementary, improving sensitivity and resolution and resulting in improved and earlier diagnosis of disease. The challenges in this area are discussed.

  18. Resolution and throughput optimized intraoperative spectrally encoded coherence tomography and reflectometry (iSECTR) for multimodal imaging during ophthalmic microsurgery

    NASA Astrophysics Data System (ADS)

    Malone, Joseph D.; El-Haddad, Mohamed T.; Leeburg, Kelsey C.; Terrones, Benjamin D.; Tao, Yuankai K.

    2018-02-01

    Limited visualization of semi-transparent structures in the eye remains a critical barrier to improving clinical outcomes and developing novel surgical techniques. While increases in imaging speed have enabled intraoperative optical coherence tomography (iOCT) imaging of surgical dynamics, several critical barriers to clinical adoption remain. Specifically, these include (1) static fields-of-view (FOVs) requiring manual instrument-tracking; (2) high frame-rates requiring sparse sampling, which limits the FOV; and (3) the small iOCT FOV, which also limits the ability to co-register data with surgical microscopy. We previously addressed these limitations in image-guided ophthalmic microsurgery by developing microscope-integrated multimodal intraoperative swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography. Complementary en face images enabled orientation and coregistration with the widefield surgical microscope view, while OCT imaging enabled depth-resolved visualization of surgical instrument positions relative to anatomic structures-of-interest. In addition, we demonstrated novel integrated segmentation overlays for augmented-reality surgical guidance. Unfortunately, our previous system lacked the resolution and optical throughput for in vivo retinal imaging and necessitated removal of the cornea and lens. These limitations were predominantly a result of optical aberrations from imaging through a shared surgical microscope objective lens, which was modeled as a paraxial surface. Here, we present an optimized intraoperative spectrally encoded coherence tomography and reflectometry (iSECTR) system. We use a novel lens characterization method to develop an accurate model of surgical microscope objective performance and balance out inherent aberrations using the iSECTR relay optics. Using this system, we demonstrate in vivo multimodal ophthalmic imaging through a surgical microscope.

  19. Molecular Imaging and Therapy of Prostate Cancer

    DTIC Science & Technology

    2015-10-01

    arsenic-based, IGF1R-targeted radiopharmaceuticals can allow for PET imaging, IRT, and monitoring the therapeutic response of PCa. Specific Aims: Aim 1: To...models with PET imaging. Aim 3: To monitor the efficacy of 76As-based IRT of PCa with multimodality imaging.

  20. Multimodal Medical Image Fusion by Adaptive Manifold Filter.

    PubMed

    Geng, Peng; Liu, Shuaiqi; Zhuang, Shanna

    2015-01-01

    Medical image fusion plays an important role in the diagnosis and treatment of diseases, for example in image-guided radiotherapy and surgery. Modified local contrast information is proposed to fuse multimodal medical images. First, the adaptive manifold filter is applied to the source images to obtain the low-frequency part of the modified local contrast. Second, the modified spatial frequency of the source images is adopted as the high-frequency part of the modified local contrast. Finally, the pixel with the larger modified local contrast is selected for the fused image. The presented scheme outperforms the guided-filter method in the spatial domain, the dual-tree complex wavelet transform-based method, the nonsubsampled contourlet transform-based method, and four classic fusion methods in terms of visual quality. Furthermore, the mutual information values of the presented method are on average 55%, 41%, and 62% higher than those of the three methods, and its edge-based similarity measure values are on average 13%, 33%, and 14% higher than those of the three methods, for the six pairs of source images.
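
    The fusion rule described above can be approximated as: take a smoothed (low-frequency) version of each source, a local spatial-frequency (high-frequency) measure, form their ratio as the modified local contrast, and pick, pixel by pixel, the source with the larger contrast. In the sketch below a Gaussian filter stands in for the adaptive manifold filter and a gradient-energy term stands in for the modified spatial frequency, so it is an approximation of the scheme rather than the authors' implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def local_contrast(img, sigma=2.0, win=5, eps=1e-6):
    """Contrast ~ local high-frequency energy / smoothed (low-frequency) image."""
    low = gaussian_filter(img, sigma)                   # stand-in for the adaptive manifold filter
    gy, gx = np.gradient(img)
    high = np.sqrt(uniform_filter(gx**2 + gy**2, win))  # gradient-energy proxy for spatial frequency
    return high / (np.abs(low) + eps)

def fuse(img_a, img_b):
    """Pixel-wise selection of the source with the larger modified local contrast."""
    mask = local_contrast(img_a) >= local_contrast(img_b)
    return np.where(mask, img_a, img_b)

# a = normalized CT slice, b = co-registered MR slice (random placeholders here)
a, b = np.random.rand(256, 256), np.random.rand(256, 256)
fused = fuse(a, b)
```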

  1. Ions doped melanin nanoparticle as a multiple imaging agent.

    PubMed

    Ha, Shin-Woo; Cho, Hee-Sang; Yoon, Young Il; Jang, Moon-Sun; Hong, Kwan Soo; Hui, Emmanuel; Lee, Jung Hee; Yoon, Tae-Jong

    2017-10-10

    Multimodal nanomaterials are useful for providing enhanced diagnostic information simultaneously for a variety of in vivo imaging methods. According to our research findings, these multimodal nanomaterials offer promising applications for cancer therapy. Melanin nanoparticles can be used as a platform imaging material, and they can be simply produced by complexation with various imaging-active ions. They are capable of specifically targeting epidermal growth factor receptor (EGFR)-expressing cancer cells when anchored with a specific antibody. Ion-doped melanin nanoparticles were found to have high bioavailability and long-term stability in solution, without any cytotoxicity in both in vitro and in vivo systems. By combining different imaging modalities with melanin particles, the complexes can be used to obtain faster diagnoses by computed tomography deep-body imaging and more detailed pathological diagnostic information by magnetic resonance imaging. The ion-doped melanin nanoparticles also have applications in radio-diagnostic treatment and radio-imaging-guided surgery, warranting further proof-of-concept experiments.

  2. Multimodal imaging system for dental caries detection

    NASA Astrophysics Data System (ADS)

    Liang, Rongguang; Wong, Victor; Marcus, Michael; Burns, Peter; McLaughlin, Paul

    2007-02-01

    Dental caries is a disease in which minerals of the tooth are dissolved by surrounding bacterial plaques. A caries process present for some time may result in a caries lesion. However, if it is detected early enough, the dentist and dental professionals can implement measures to reverse and control caries. Several optical, nonionizing methods have been investigated and used to detect dental caries in its early stages. However, no single method can detect the caries process with both high sensitivity and high specificity. In this paper, we present a multimodal imaging system that combines visible reflectance, fluorescence, and optical coherence tomography (OCT) imaging. This imaging system is designed to obtain one or more two-dimensional images of the tooth (reflectance and fluorescence images) and a three-dimensional OCT image providing depth and size information of the caries. The combination of two- and three-dimensional images of the tooth has the potential for highly sensitive and specific detection of dental caries.

  3. How do nurses in palliative care perceive the concept of self-image?

    PubMed

    Jeppsson, Margareth; Thomé, Bibbi

    2015-09-01

    Nursing research indicates that serious illness and impending death influence the individual's self-image. Few studies define what self-image means. Thus it seems urgent to explore how nurses in palliative care perceive the concept of self-image, to gain deeper insight into the concept's applicability in palliative care. To explore how nurses in palliative care perceive the concept of self-image. Qualitative descriptive design. In-depth interviews with 17 nurses in palliative care were analysed using phenomenography. The study gained ethical approval. The concept of self-image was perceived as both a familiar and an unfamiliar concept. Four categories of description with gradually increasing complexity were distinguished: Identity, Self-assessment, Social function and Self-knowledge. They represent the collective understanding of the concept and are illustrated in a 'self-image map'. The identity category emerged as the most comprehensive one and includes the understanding of 'Who I am' in a multidimensional way. The collective understanding of the concept of self-image includes multi-dimensional aspects which were not always evident to the individual nurse. Thus, the concept of self-image needs to be more clearly verbalised and reflected on if nurses are to be comfortable with it and adopt it in their caring context. The 'self-image map' can be used in this reflection to expand understanding of the concept. If the multi-dimensional aspects of the concept of self-image are explored, there are improved possibilities to make identity-promoting strategies visible and to support person-centred care. © 2014 Nordic College of Caring Science.

  4. Multimodal Imaging of Brain Connectivity Using the MIBCA Toolbox: Preliminary Application to Alzheimer's Disease

    NASA Astrophysics Data System (ADS)

    Ribeiro, André Santos; Lacerda, Luís Miguel; Silva, Nuno André da; Ferreira, Hugo Alexandre

    2015-06-01

    The Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox is a fully automated all-in-one connectivity analysis toolbox that offers pre-processing, connectivity, and graph theory analysis of multimodal images such as anatomical, diffusion, and functional MRI, and PET. In this work, the MIBCA functionalities were used to study Alzheimer's Disease (AD) in a multimodal MR/PET approach. Materials and Methods: Data from 12 healthy controls and 36 patients with early mild cognitive impairment (EMCI), late mild cognitive impairment (LMCI) and AD (12 patients per group) were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu), including T1-weighted (T1-w) images, Diffusion Tensor Imaging (DTI) data, and 18F-AV-45 (florbetapir) dynamic PET data from 40-60 min post injection (4x5 min). Both the MR and PET data were automatically pre-processed for all subjects using MIBCA. The T1-w data were parcellated into cortical and subcortical regions-of-interest (ROIs), and the corresponding thicknesses and volumes were calculated. The DTI data were used to compute structural connectivity matrices based on fibers connecting pairs of ROIs. Lastly, the dynamic PET images were summed, and the relative Standard Uptake Values calculated for each ROI. Results: An overall higher uptake of 18F-AV-45, consistent with increased deposition of beta-amyloid, was observed for the AD group. Additionally, patients showed significant cortical atrophy (thickness and volume), especially in the entorhinal cortex and temporal areas, and a significant increase in Mean Diffusivity (MD) in the hippocampus, amygdala and temporal areas. Furthermore, patients showed a reduction of fiber connectivity with the progression of the disease, especially for intra-hemispherical connections. Conclusion: This work shows the potential of the MIBCA toolbox for the study of AD, as the findings are in agreement with the literature. Here, only structural changes and beta-amyloid accumulation were considered; yet MIBCA is also able to process fMRI and different radiotracers, thus enabling the integration of functional information and supporting the search for new multimodal biomarkers for AD and other neurodegenerative diseases.
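
    Once structural connectivity matrices have been built from fiber counts between ROI pairs, graph-theory summaries of the kind MIBCA reports can be computed with standard tools. A sketch with networkx, using a random matrix in place of real tractography counts:

```python
import numpy as np
import networkx as nx

n_rois = 84                                              # e.g. a cortical + subcortical parcellation
rng = np.random.default_rng(42)
fibers = rng.integers(0, 200, size=(n_rois, n_rois))
fibers = np.triu(fibers, 1) + np.triu(fibers, 1).T       # symmetric fiber-count matrix (placeholder)

# Threshold weak connections and build a weighted graph.
adj = np.where(fibers >= 20, fibers, 0)
G = nx.from_numpy_array(adj)

degree = dict(G.degree(weight="weight"))
clustering = nx.average_clustering(G, weight="weight")
efficiency = nx.global_efficiency(G)                     # unweighted global efficiency

print(f"mean weighted degree: {np.mean(list(degree.values())):.1f}")
print(f"average clustering:   {clustering:.3f}")
print(f"global efficiency:    {efficiency:.3f}")
```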

  5. Evaluation of the role of CD207 on Langerhans cells in a murine model of atopic dermatitis by in situ imaging using Cr:forsterite laser-based multimodality nonlinear microscopy

    NASA Astrophysics Data System (ADS)

    Lee, Jyh-Hong; Tsai, Ming-Rung; Sun, Chi-Kuang; Chiang, Bor-Luen

    2012-11-01

    Atopic dermatitis (AD) is an allergic inflammatory disease of the skin. It remains unclear whether CD207 on Langerhans cells (LCs) plays a central role in the development of allergic sensitization. There are little data on LCs within their microenvironment in vivo. We used a murine model of epicutaneous (EC) ovalbumin (OVA) sensitization, which induces inflammatory skin lesions resembling AD, to explore the role of CD207 in the pathogenesis of AD. Cr:forsterite laser-based multimodality nonlinear microscopy was applied for in situ imaging. Peritoneal injections of Alexa Fluor 647-rat anti-mouse CD207 into mice were performed to specifically trace the LCs. Peritoneal injections of OVA-Alexa Fluor 647 conjugate into mice were performed to specifically trace the OVA. We found that combining Alexa Fluor fluorescent probes with multimodality nonlinear microscopy permitted the unequivocal in situ imaging of CD207-expressing LCs. The relevant time-course, expression, and functional studies reveal that CD207 on LCs plays an essential role during the induction of EC sensitization. We establish and validate that Cr:forsterite laser-based multimodality nonlinear microscopy is applicable for the specific detection of labeled mAb-bound LCs and labeled antigen. We suggest that CD207-expressing LCs initiate the allergic response through CD207-mediated epicutaneous sensitization associated with the development of AD.

  6. PET-CMR in heart failure - synergistic or redundant imaging?

    PubMed

    Quail, Michael A; Sinusas, Albert J

    2017-07-01

    Imaging in heart failure (HF) provides data for diagnosis, prognosis and disease monitoring. Both MRI and nuclear imaging techniques have been successfully used for this purpose in HF. Positron Emission Tomography-Cardiac Magnetic Resonance (PET-CMR) is an example of a new multimodality diagnostic imaging technique with potential applications in HF. The threshold for adopting a new diagnostic tool to clinical practice must necessarily be high, lest they exacerbate costs without improving care. New modalities must demonstrate clinical superiority, or at least equivalence, combined with another important advantage, such as lower cost or improved patient safety. The purpose of this review is to outline the current status of multimodality PET-CMR with regard to HF applications, and determine whether the clinical utility of this new technology justifies the cost.

  7. Multimodal nonlinear imaging of arabidopsis thaliana root cell

    NASA Astrophysics Data System (ADS)

    Jang, Bumjoon; Lee, Sung-Ho; Woo, Sooah; Park, Jong-Hyun; Lee, Myeong Min; Park, Seung-Han

    2017-07-01

    Nonlinear optical microscopy has enabled the possibility to explore inside living organisms. It utilizes ultrashort laser pulses with long wavelengths (greater than 800 nm). Ultrashort pulses produce the high peak power needed to induce nonlinear optical phenomena such as two-photon excitation fluorescence (TPEF) and harmonic generation in the medium, while maintaining relatively low average energy per area. In plant developmental biology, confocal microscopy has been widely used in plant cell imaging since the development of biological fluorescence labels in the mid-1990s. However, fluorescence labeling itself affects the sample, and the sample deviates from its intact condition, especially when the entire cell is labeled. In this work, we report dynamic images of Arabidopsis thaliana root cells. This demonstrates that multimodal nonlinear optical microscopy is an effective tool for long-term plant cell imaging.

  8. Simultaneous acquisition of magnetic resonance spectroscopy (MRS) data and positron emission tomography (PET) images with a prototype MR-compatible, small animal PET imager

    NASA Astrophysics Data System (ADS)

    Raylman, Raymond R.; Majewski, Stan; Velan, S. Sendhil; Lemieux, Susan; Kross, Brian; Popov, Vladimir; Smith, Mark F.; Weisenberger, Andrew G.

    2007-06-01

    Multi-modality imaging (such as PET-CT) is rapidly becoming a valuable tool in the diagnosis of disease and in the development of new drugs. Functional images produced with PET, fused with anatomical images created by MRI, allow the correlation of form with function. Perhaps more exciting than the combination of anatomical MRI with PET is the melding of PET with MR spectroscopy (MRS). Thus, two aspects of physiology can be combined in novel ways to produce new insights into the physiology of normal and pathological processes. Our team is developing a system to acquire MRI images, MRS spectra, and PET images contemporaneously. The prototype MR-compatible PET system consists of two opposed detector heads (appropriately sized for small animal imaging), operating in coincidence mode with an active field-of-view of ˜14 cm in diameter. Each detector consists of an array of LSO detector elements coupled through a 2-m long fiber optic light guide to a single position-sensitive photomultiplier tube. The use of light guides allows these magnetic-field-sensitive elements of the PET imager to be positioned outside the strong magnetic field of our 3T MRI scanner. The PET imager was integrated with a 12-cm diameter, 12-leg custom birdcage coil. Simultaneous MRS spectra and PET images were successfully acquired from a multi-modality phantom consisting of a sphere filled with 17 brain-relevant substances and a positron-emitting radionuclide. There were no significant changes in MRI or PET scanner performance when both were present in the MRI magnet bore. This successful initial test demonstrates the potential for using such a multi-modality system to obtain complementary MRS and PET data.

  9. Multimode C-arm fluoroscopy, tomosynthesis, and cone-beam CT for image-guided interventions: from proof of principle to patient protocols

    NASA Astrophysics Data System (ADS)

    Siewerdsen, J. H.; Daly, M. J.; Bachar, G.; Moseley, D. J.; Bootsma, G.; Brock, K. K.; Ansell, S.; Wilson, G. A.; Chhabra, S.; Jaffray, D. A.; Irish, J. C.

    2007-03-01

    High-performance intraoperative imaging is essential to an ever-expanding scope of therapeutic procedures ranging from tumor surgery to interventional radiology. The need for precise visualization of bony and soft-tissue structures with minimal obstruction to the therapy setup presents challenges and opportunities in the development of novel imaging technologies specifically for image-guided procedures. Over the past ~5 years, a mobile C-arm has been modified in collaboration with Siemens Medical Solutions for 3D imaging. Based upon a Siemens PowerMobil, the device includes: a flat-panel detector (Varian PaxScan 4030CB); a motorized orbit; a system for geometric calibration; integration with real-time tracking and navigation (NDI Polaris); and a computer control system for multi-mode fluoroscopy, tomosynthesis, and cone-beam CT. Investigation of 3D imaging performance (noise-equivalent quanta), image quality (human observer studies), and image artifacts (scatter, truncation, and cone-beam artifacts) has driven the development of imaging techniques appropriate to a host of image-guided interventions. Multi-mode functionality presents a valuable spectrum of acquisition techniques: i.) fluoroscopy for real-time 2D guidance; ii.) limited-angle tomosynthesis for fast 3D imaging (e.g., ~10 sec acquisition of coronal slices containing the surgical target); and iii.) fully 3D cone-beam CT (e.g., ~30-60 sec acquisition providing bony and soft-tissue visualization across the field of view). Phantom and cadaver studies clearly indicate the potential for improved surgical performance - up to a factor of 2 increase in challenging surgical target excisions. The C-arm system is currently being deployed in patient protocols ranging from brachytherapy to chest, breast, spine, and head and neck surgery.

  10. Adaptive wavefront shaping for controlling nonlinear multimode interactions in optical fibres

    NASA Astrophysics Data System (ADS)

    Tzang, Omer; Caravaca-Aguirre, Antonio M.; Wagner, Kelvin; Piestun, Rafael

    2018-06-01

    Recent progress in wavefront shaping has enabled control of light propagation inside linear media to focus and image through scattering objects. In particular, light propagation in multimode fibres comprises complex intermodal interactions and rich spatiotemporal dynamics. Control of physical phenomena in multimode fibres and its applications are in their infancy, opening opportunities to take advantage of complex nonlinear modal dynamics. Here, we demonstrate a wavefront shaping approach for controlling nonlinear phenomena in multimode fibres. Using a spatial light modulator at the fibre input, real-time spectral feedback and a genetic algorithm optimization, we control a highly nonlinear multimode stimulated Raman scattering cascade and its interplay with four-wave mixing via a flexible implicit control on the superposition of modes coupled into the fibre. We show versatile spectrum manipulations including shifts, suppression, and enhancement of Stokes and anti-Stokes peaks. These demonstrations illustrate the power of wavefront shaping to control and optimize nonlinear wave propagation.
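
    The control loop described here is: apply a candidate phase pattern at the fibre input, read the output spectrum, score it (for example, power in a target Stokes band), and let a genetic-style search evolve the pattern. The toy sketch below keeps only selection and mutation and uses a stand-in fitness function; no real SLM or spectrometer is driven:

```python
import numpy as np

rng = np.random.default_rng(0)
N_SEGMENTS = 64                      # SLM macro-pixels controlling the input superposition

def fitness(phases: np.ndarray) -> float:
    """Stand-in for 'power in the target Stokes band'; a real system would
    apply the phases to the SLM and integrate the measured spectrum."""
    target = np.linspace(0, 2 * np.pi, N_SEGMENTS)
    return float(np.cos(phases - target).sum())

def genetic_search(pop_size=40, generations=200, mut_sigma=0.3):
    pop = rng.uniform(0, 2 * np.pi, size=(pop_size, N_SEGMENTS))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]     # keep the better half
        children = parents[rng.integers(0, len(parents), pop_size // 2)].copy()
        children += rng.normal(0, mut_sigma, children.shape)   # mutation
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(p) for p in pop])]

best_mask = genetic_search()
print(f"best fitness: {fitness(best_mask):.1f} / {N_SEGMENTS}")
```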

  11. A Remaking Pedagogy: Adaptation and Archetypes in the Child's Multimodal Reading and Writing

    ERIC Educational Resources Information Center

    Berger, Richard; Zezulkova, Marketa

    2018-01-01

    This paper proposes combining theories about, and practices of, using archetypes and adaptation in education for the purposes of multimodal literacy learning. Within such contexts, children of primary school age act as readers, performers and researchers, exploring and analysing existing adaptations of archetypal stories and images across time,…

  12. Multimodal MRI for early diabetic mild cognitive impairment: study protocol of a prospective diagnostic trial.

    PubMed

    Yu, Ying; Sun, Qian; Yan, Lin-Feng; Hu, Yu-Chuan; Nan, Hai-Yan; Yang, Yang; Liu, Zhi-Cheng; Wang, Wen; Cui, Guang-Bin

    2016-08-24

    Type 2 diabetes mellitus (T2DM) is a risk factor for dementia. Mild cognitive impairment (MCI), an intermediate state between normal cognition and dementia, often occurs during the prodromal diabetic stage, making early diagnosis and intervention in MCI very important. The latest neuroimaging techniques have revealed some of the underlying microstructural alterations in diabetic MCI, but only from limited perspectives, and an integrated multimodal MRI system to detect early neuroimaging changes in diabetic MCI patients is still lacking. We therefore intend to conduct a diagnostic trial using multimodal MRI techniques to detect early diabetic MCI as determined by the Montreal Cognitive Assessment (MoCA). In this study, healthy controls, prodromal diabetes and diabetes subjects (53 subjects/group) aged 40-60 years will be recruited from the physical examination center of Tangdu Hospital. The neuroimaging and psychometric measurements will be repeated at 0.5-year intervals for 2.5 years of follow-up. The primary outcome measures are 1) microstructural and functional alterations revealed with multimodal MRI scans, including structural magnetic resonance imaging (sMRI), resting-state functional magnetic resonance imaging (rs-fMRI), diffusion kurtosis imaging (DKI), and three-dimensional pseudo-continuous arterial spin labeling (3D-pCASL); and 2) cognition evaluated with the MoCA. The secondary outcome measures are obesity, metabolic characteristics, lifestyle and quality of life. The study will provide evidence for the potential use of multimodal MRI techniques with psychometric evaluation in diagnosing MCI at the prodromal diabetic stage, so as to support decision making in early intervention and improve the prognosis of T2DM. This study was registered with ClinicalTrials.gov (NCT02420470) on April 2, 2015 and published on July 29, 2015.

  13. On-road anomaly detection by multimodal sensor analysis and multimedia processing

    NASA Astrophysics Data System (ADS)

    Orhan, Fatih; Eren, P. E.

    2014-03-01

    The use of smartphones in Intelligent Transportation Systems is gaining popularity, yet many challenges exist in developing functional applications. Due to the dynamic nature of transportation, vehicular social applications face complexities such as developing robust sensor management, performing signal and image processing tasks, and sharing information among users. This study utilizes a multimodal sensor analysis framework that enables the analysis of sensors in a multimodal fashion. It also provides plugin-based analysis interfaces for developing sensor- and image-processing-based applications, and connects its users via a centralized application as well as to social networks to facilitate communication and socialization. Using this framework, an on-road anomaly detector is being developed and tested. The detector utilizes the sensors of a mobile device and is able to identify anomalies such as hard braking, pothole crossing, and speed bump crossing. Upon such detection, the video portion containing the anomaly is automatically extracted to enable further image processing analysis. The detection results are shared on a central portal application for online traffic condition monitoring.
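
    At its simplest, detecting hard braking or pothole crossings from a phone's accelerometer reduces to thresholding a windowed acceleration signal. A minimal sketch with placeholder thresholds and window length (not the paper's tuned values):

```python
import numpy as np

def detect_events(accel_z: np.ndarray, fs_hz: float, win_s: float = 0.5, thresh_g: float = 0.5):
    """Return sample indices where the windowed peak-to-peak vertical
    acceleration exceeds the threshold (candidate pothole / bump / hard brake)."""
    win = max(1, int(win_s * fs_hz))
    events = []
    for start in range(0, len(accel_z) - win, win):
        seg = accel_z[start:start + win]
        if seg.max() - seg.min() > thresh_g:
            events.append(start)
    return events

# 60 s of simulated vertical acceleration at 50 Hz with two injected spikes.
fs = 50.0
z = np.random.normal(0, 0.05, int(60 * fs))
z[1500] += 1.2   # pothole-like spike
z[2400] -= 0.9   # hard-brake-like dip
print(detect_events(z, fs))
```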

  14. Non-responders to cardiac resynchronization therapy: Insights from multimodality imaging and electrocardiography. A brief review.

    PubMed

    Carità, Patrizia; Corrado, Egle; Pontone, Gianluca; Curnis, Antonio; Bontempi, Luca; Novo, Giuseppina; Guglielmo, Marco; Ciaramitaro, Gianfranco; Assennato, Pasquale; Novo, Salvatore; Coppola, Giuseppe

    2016-12-15

    Cardiac resynchronization therapy (CRT) is a successful strategy for heart failure (HF) patients. The prerequisite for response is evidence of electrical dyssynchrony on the surface electrocardiogram, usually as left bundle branch block (LBBB). Non-response to CRT is a significant problem in clinical practice. Patient selection, inadequate delivery and sub-optimal left ventricular lead position may be important causes. In an effort to improve CRT response, multimodality imaging (especially echocardiography, computed tomography and cardiac magnetic resonance) could play a decisive role, and extensive literature has been published on the matter. However, we are still far from routine use in clinical practice. Electrocardiography (with respect to left ventricular capture and QRS narrowing) may represent a simple and low-cost approach for early prediction of potential non-responders, with immediate practical implications. This brief review covers the current recommendations for CRT in HF patients, with particular attention to the potential benefits of multimodality imaging and electrocardiography in improving the response rate. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. Application of a multicompartment dynamical model to multimodal optical imaging for investigating individual cerebrovascular properties

    NASA Astrophysics Data System (ADS)

    Desjardins, Michèle; Gagnon, Louis; Gauthier, Claudine; Hoge, Rick D.; Dehaes, Mathieu; Desjardins-Crépeau, Laurence; Bherer, Louis; Lesage, Frédéric

    2009-02-01

    Biophysical models of hemodynamics provide a tool for quantitative multimodal brain imaging by allowing a deeper understanding of the interplay between neural activity and blood oxygenation, volume and flow responses to stimuli. Multicompartment dynamical models that describe the dynamics and interactions of the vascular and metabolic components of evoked hemodynamic responses have been developed in the literature. In this work, multimodal data using near-infrared spectroscopy (NIRS) and diffuse correlation flowmetry (DCF) is used to estimate total baseline hemoglobin concentration (HBT0) in 7 adult subjects. A validation of the model estimate and investigation of the partial volume effect is done by comparing with time-resolved spectroscopy (TRS) measures of absolute HBT0. Simultaneous NIRS and DCF measurements during hypercapnia are then performed, but are found to be hardly reproducible. The results raise questions about the feasibility of an all-optical model-based estimation of individual vascular properties.

  16. Grid-Enabled Quantitative Analysis of Breast Cancer

    DTIC Science & Technology

    2010-10-01

    large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer...research, we designed a pilot study utilizing large-scale parallel Grid computing harnessing nationwide infrastructure for medical image analysis. Also

  17. Imaging the pleura.

    PubMed

    Mortensen, Chloe; Bhatnagar, Rahul; Edey, Anthony J

    2012-11-01

    Pleural disease is now recognized as an important subspecialty of pulmonary medicine, with increasing provision being made for specialist services and procedures. In response, the field of pleural imaging has advanced in recent years, especially with regard to ultrasound. Salient multimodality imaging techniques are discussed.

  18. Simulation of brain tumors in MR images for evaluation of segmentation efficacy.

    PubMed

    Prastawa, Marcel; Bullitt, Elizabeth; Gerig, Guido

    2009-04-01

    Obtaining validation data and comparison metrics for segmentation of magnetic resonance images (MRI) is difficult due to the lack of reliable ground truth. This problem is even more evident for images presenting pathology, which can both alter tissue appearance through infiltration and cause geometric distortions. Systems for generating synthetic images with user-defined degradation by noise and intensity inhomogeneity offer the possibility of testing and comparing segmentation methods. Such systems do not yet offer simulation of sufficiently realistic-looking pathology. This paper presents a system that combines physical and statistical modeling to generate synthetic multi-modal 3D brain MRI with tumor and edema, along with the underlying anatomical ground truth. The main emphasis is placed on simulation of the major effects known from tumor MRI, such as contrast enhancement, local distortion of healthy tissue, infiltrating edema adjacent to tumors, destruction and deformation of fiber tracts, and multi-modal MRI contrast of healthy tissue and pathology. The new method synthesizes pathology in multi-modal MRI and diffusion tensor imaging (DTI) by simulating mass effect, warping and destruction of white matter fibers, and infiltration of brain tissues by tumor cells. We generate synthetic contrast-enhanced MR images by simulating the accumulation of contrast agent within the brain. The appearance of the brain tissue and tumor in MRI is simulated by synthesizing texture images from real MR images. The proposed method is able to generate synthetic ground truth and synthesized MR images with tumor and edema that exhibit segmentation challenges comparable to real tumor MRI. Such image data sets will find use in segmentation reliability studies, comparison and validation of different segmentation methods, training and teaching, or even in evaluating standards for tumor size such as the RECIST criteria (response evaluation criteria in solid tumors).

  19. A meta-classifier for detecting prostate cancer by quantitative integration of in vivo magnetic resonance spectroscopy and magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Viswanath, Satish; Tiwari, Pallavi; Rosen, Mark; Madabhushi, Anant

    2008-03-01

    Recently, in vivo Magnetic Resonance Imaging (MRI) and Magnetic Resonance Spectroscopy (MRS) have emerged as promising new modalities to aid in prostate cancer (CaP) detection. MRI provides anatomic and structural information of the prostate while MRS provides functional data pertaining to biochemical concentrations of metabolites such as creatine, choline and citrate. We have previously presented a hierarchical clustering scheme for CaP detection on in vivo prostate MRS and have recently developed a computer-aided method for CaP detection on in vivo prostate MRI. In this paper we present a novel scheme to develop a meta-classifier to detect CaP in vivo via quantitative integration of multimodal prostate MRS and MRI by use of non-linear dimensionality reduction (NLDR) methods including spectral clustering and locally linear embedding (LLE). Quantitative integration of multimodal image data (MRI and PET) involves the concatenation of image intensities following image registration. However multimodal data integration is non-trivial when the individual modalities include spectral and image intensity data. We propose a data combination solution wherein we project the feature spaces (image intensities and spectral data) associated with each of the modalities into a lower dimensional embedding space via NLDR. NLDR methods preserve the relationships between the objects in the original high dimensional space when projecting them into the reduced low dimensional space. Since the original spectral and image intensity data are divorced from their original physical meaning in the reduced dimensional space, data at the same spatial location can be integrated by concatenating the respective embedding vectors. Unsupervised consensus clustering is then used to partition objects into different classes in the combined MRS and MRI embedding space. Quantitative results of our multimodal computer-aided diagnosis scheme on 16 sets of patient data obtained from the ACRIN trial, for which corresponding histological ground truth for spatial extent of CaP is known, show a marginally higher sensitivity, specificity, and positive predictive value compared to corresponding CAD results with the individual modalities.
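
    The integration strategy described above (embed each modality's feature space into a low-dimensional space, concatenate the per-voxel embedding vectors, then cluster the fused vectors) can be sketched generically. In the sketch below, scikit-learn's LocallyLinearEmbedding and k-means stand in for the paper's NLDR and consensus-clustering steps, and the feature matrices are random placeholders for per-voxel MRS spectra and MRI intensity features at the same spatial locations.

```python
# Generic sketch of the embed-then-concatenate fusion idea described above.
# LLE + k-means stand in for the paper's NLDR and consensus clustering; the
# feature matrices are random placeholders for co-registered per-voxel data.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_voxels = 500
mrs_features = rng.normal(size=(n_voxels, 128))   # e.g. spectral samples per voxel
mri_features = rng.normal(size=(n_voxels, 9))     # e.g. intensity/texture per voxel

def embed(features, n_components=3, n_neighbors=12):
    lle = LocallyLinearEmbedding(n_neighbors=n_neighbors, n_components=n_components)
    return lle.fit_transform(features)

# Project each modality separately, then concatenate the embedding vectors of the
# same voxel so the fused representation no longer depends on the original units.
fused = np.hstack([embed(mrs_features), embed(mri_features)])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(fused)
print(np.bincount(labels))
```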

  20. The multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    PubMed Central

    Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.

    2014-01-01

    The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific and Industrial Research Organization (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computed tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019

  1. Multimode nonlinear optical imaging of the dermis in ex vivo human skin based on the combination of multichannel mode and Lambda mode.

    PubMed

    Zhuo, Shuangmu; Chen, Jianxin; Luo, Tianshu; Zou, Dingsong

    2006-08-21

    A multimode nonlinear optical imaging technique based on the combination of multichannel mode and Lambda mode is developed to investigate human dermis. Our findings show that this technique not only improves the image contrast of the structural proteins of the extracellular matrix (ECM) but also provides an image-guided spectral analysis method to identify both cellular and ECM intrinsic components including collagen, elastin, NAD(P)H and flavin. By the combined use of multichannel mode and Lambda mode in tandem, the obtained in-depth two-photon excited fluorescence (TPEF) and second-harmonic generation (SHG) imaging, together with the depth-dependent decay of the TPEF/SHG signals, can offer a sensitive tool for obtaining quantitative tissue structural and biochemical information. These results suggest that the technique has the potential to provide more accurate information for determining tissue physiological and pathological states.

  2. Red fluorescent zinc oxide nanoparticle: A novel platform for cancer targeting

    DOE PAGES

    Hong, Hao; Wang, Fei; Zhang, Yin; ...

    2015-01-21

    Multifunctional zinc oxide (ZnO) nanoparticles (NPs) with well-integrated multimodality imaging capacities have generated increasing research interest in the past decade. However, limited progress has been made in developing ZnO NP-based multimodality tumor-imaging agents. In this paper, we developed novel red fluorescent ZnO NPs and described the successful conjugation of 64Cu (t1/2 = 12.7 h) and TRC105, a chimeric monoclonal antibody against CD105, to these ZnO NPs via well-developed surface engineering procedures. The produced dual-modality ZnO NPs were readily applicable for positron emission tomography (PET) imaging and fluorescence imaging of the tumor vasculature. Their pharmacokinetics and tumor-targeting efficacy/specificity in mice bearing murine breast 4T1 tumor were thoroughly investigated. In conclusion, ZnO NPs with dual-modality imaging properties can serve as an attractive candidate for future cancer theranostics.

  3. The Interactive Origin and the Aesthetic Modelling of Image-Schemas and Primary Metaphors.

    PubMed

    Martínez, Isabel C; Español, Silvia A; Pérez, Diana I

    2018-06-02

    According to the theory of conceptual metaphor, image-schemas and primary metaphors are preconceptual structures configured in human cognition, based on sensory-motor environmental activity. Focusing on the way both non-conceptual structures are embedded in early social interaction, we provide empirical evidence for the interactive and intersubjective ontogenesis of image-schemas and primary metaphors. We present the results of a multimodal image-schematic microanalysis of three interactive infant-directed performances (the composition of movement, touch, speech, and vocalization that adults produce for-and-with the infants). The microanalyses show that adults aesthetically highlight the image-schematic structures embedded in the multimodal composition of the performance, and that primary metaphors are also lived as embedded in these inter-enactive experiences. The findings corroborate that the psychological domains of cognition and affection are not in rivalry or conflict but rather are intertwined in meaningful experiences.

  4. Assessment of fibrotic liver disease with multimodal nonlinear optical microscopy

    NASA Astrophysics Data System (ADS)

    Lu, Fake; Zheng, Wei; Tai, Dean C. S.; Lin, Jian; Yu, Hanry; Huang, Zhiwei

    2010-02-01

    Liver fibrosis is the excessive accumulation of extracellular matrix proteins such as collagens, which may result in cirrhosis, liver failure, and portal hypertension. In this study, we apply a multimodal nonlinear optical microscopy platform to investigate fibrotic liver disease in rat models established by bile duct ligation (BDL) surgery. The three nonlinear microscopy modalities are applied sequentially to the same sectioned tissues of the diseased model: second harmonic generation (SHG) imaging quantifies the collagen content, two-photon excitation fluorescence (TPEF) imaging reveals the morphology of hepatic cells, and coherent anti-Stokes Raman scattering (CARS) imaging quantitatively maps the distribution of fats or lipids across the tissue. Our imaging results show that during the development of liver fibrosis (collagen accumulation) in the BDL model, fatty liver disease also occurs. The aggregated concentrations of collagen and fat constituents in the liver fibrosis model are correlated with each other.

  5. Multi-modality molecular imaging: pre-clinical laboratory configuration

    NASA Astrophysics Data System (ADS)

    Wu, Yanjun; Wellen, Jeremy W.; Sarkar, Susanta K.

    2006-02-01

    In recent years, the prevalence of in vivo molecular imaging applications has rapidly increased. Here we report on the construction of a multi-modality imaging facility in a pharmaceutical setting that is expected to further advance existing capabilities for in vivo imaging of drug distribution and the interaction with their target. The imaging instrumentation in our facility includes a microPET scanner, a four wavelength time-domain optical imaging scanner, a 9.4T/30cm MRI scanner and a SPECT/X-ray CT scanner. An electronics shop and a computer room dedicated to image analysis are additional features of the facility. The layout of the facility was designed with a central animal preparation room surrounded by separate laboratory rooms for each of the major imaging modalities to accommodate the work-flow of simultaneous in vivo imaging experiments. This report will focus on the design of and anticipated applications for our microPET and optical imaging laboratory spaces. Additionally, we will discuss efforts to maximize the daily throughput of animal scans through development of efficient experimental work-flows and the use of multiple animals in a single scanning session.

  6. Quantitative reconstructions in multi-modal photoacoustic and optical coherence tomography imaging

    NASA Astrophysics Data System (ADS)

    Elbau, P.; Mindrinos, L.; Scherzer, O.

    2018-01-01

    In this paper we perform quantitative reconstruction of the electric susceptibility and the Grüneisen parameter of a non-magnetic linear dielectric medium using measurements from a multi-modal photoacoustic and optical coherence tomography system. We consider the mathematical model presented in Elbau et al (2015 Handbook of Mathematical Methods in Imaging ed O Scherzer (New York: Springer) pp 1169-204), where a Fredholm integral equation of the first kind for the Grüneisen parameter was derived. For the numerical solution of the integral equation we consider a Galerkin-type method.
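
    The paper's specific kernel and data are not reproduced here; as a generic illustration of a Galerkin-type discretization of a Fredholm integral equation of the first kind, the sketch below projects onto a piecewise-constant basis and adds Tikhonov regularization (first-kind problems are ill-posed). The kernel, unknown function and noise level are arbitrary examples, not those of the photoacoustic/OCT model.

```python
# Generic Galerkin-type discretization of a Fredholm integral equation of the
# first kind, with Tikhonov regularization; kernel and right-hand side are
# arbitrary examples, not those of the photoacoustic/OCT model.
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

kernel = lambda xi, yj: np.exp(-10.0 * (xi - yj) ** 2)       # example smooth kernel
f_true = np.sin(2 * np.pi * x)                                # example unknown function

# Piecewise-constant Galerkin matrix: K[i, j] ~ integral of k(x_i, y) over cell j.
K = kernel(x[:, None], x[None, :]) * h
g = K @ f_true + 1e-4 * np.random.default_rng(1).normal(size=n)  # noisy data

alpha = 1e-6                                                  # regularization weight
f_rec = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ g)
print("relative error:", np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true))
```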

  7. Detection of suspicious pain regions on a digital infrared thermal image using the multimodal function optimization.

    PubMed

    Lee, Junghoon; Lee, Joosung; Song, Sangha; Lee, Hyunsook; Lee, Kyoungjoung; Yoon, Youngro

    2008-01-01

    Automatic detection of suspicious pain regions is very useful in medical digital infrared thermal imaging research. To detect those regions, we use the SOFES (Survival Of the Fitness kind of the Evolution Strategy) algorithm, which is one of the multimodal function optimization methods. We apply this algorithm to common conditions such as the diabetic (glycosuria-affected) foot, degenerative arthritis, and varicose veins. The SOFES algorithm is able to detect hot spots and warm lines such as veins, and over one hundred trials it converged rapidly.
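
    The SOFES variant itself is not detailed in the abstract. The sketch below is a generic, simplified evolution-strategy search over a synthetic 2-D "thermal image" that keeps many survivors so that several warm regions can be reported at once; it only illustrates the multimodal-optimization idea and is not the SOFES algorithm, and all parameters are arbitrary.

```python
# Simplified, generic evolution-strategy search for multiple warm regions in a
# synthetic 2-D "thermal image" (an illustrative stand-in, not SOFES itself).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic thermogram with two hot spots.
yy, xx = np.mgrid[0:128, 0:128]
image = np.exp(-((yy - 40) ** 2 + (xx - 40) ** 2) / 200.0) \
      + np.exp(-((yy - 90) ** 2 + (xx - 95) ** 2) / 300.0)

def fitness(points):
    p = np.clip(np.round(points).astype(int), 0, 127)
    return image[p[:, 0], p[:, 1]]          # fitness = local temperature

pop = rng.uniform(0, 127, size=(200, 2))     # candidate (row, col) positions
for _ in range(60):
    children = np.clip(pop + rng.normal(scale=3.0, size=pop.shape), 0, 127)
    both = np.vstack([pop, children])
    pop = both[np.argsort(fitness(both))[-200:]]   # (mu + lambda) survival

# Group surviving candidates that lie close together into distinct hot spots.
spots = []
for p in pop[np.argsort(fitness(pop))[::-1]]:
    if all(np.linalg.norm(p - s) > 15 for s in spots):
        spots.append(p)
print(np.round(spots, 1))
```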

  8. Body image construct of Sri Lankan adolescents

    PubMed

    Goonapienuwala, B L; Agampodi, S B; Kalupahana, N S; Siribaddana, S

    2017-03-31

    “Body image” is more than the visual perception of size, and it is probably multidimensional. It is known to influence eating behaviors and the self-esteem of adolescents. Although widely studied in developed countries, it has been studied little in Sri Lanka. This study was designed to translate and culturally adapt a tool to assess dimensions of body image in Sri Lankan adolescents. The study was carried out in the Anuradhapura District on school-going children in grades nine to eleven. A multidimensional body image questionnaire was translated into Sinhalese using the nominal group consensus method. The translated version was administered to 278 (114 boys) students after content validation and pre-testing. To assess test-retest reliability, the same questionnaire was administered to the same sample after two weeks. Psychometric properties were assessed using exploratory factor analysis. A three-factor model emerged when the dimensions of body image were analysed. Both boys and girls had an almost identical factor structure. The three dimensions identified were “affective body image”, “body perception” and “orientation on body size”. All factors had good internal consistency with Cronbach’s alpha > 0.76 and explained more than 56% of the total variance in both sexes. The translated body image questionnaire was a valid and reliable tool which can be used in Sri Lankan adolescents. Both genders had a similar, multidimensional body image construct.
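
    The internal-consistency figure quoted above is Cronbach's alpha, computed per factor as alpha = k/(k-1) * (1 - sum of item variances / variance of the summed score). The sketch below computes it for a set of items assumed to belong to one dimension; the response matrix is a random placeholder, not the study's data.

```python
# Cronbach's alpha for questionnaire items belonging to one body-image dimension;
# the response matrix below is a random placeholder, not the study's data.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) numeric response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(278, 1))                     # one underlying dimension
responses = latent + 0.7 * rng.normal(size=(278, 8))   # 8 correlated items
print(round(cronbach_alpha(responses), 2))
```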

  9. Invariance test of the Multidimensional Body Self-Relations Questionnaire: do women with breast cancer interpret this measure differently?

    PubMed

    Sabiston, Catherine M; Rusticus, Shayna; Brunet, Jennifer; McDonough, Meghan H; Hadd, Valerie; Hubley, Anita M; Crocker, Peter R E

    2010-10-01

    To examine whether the meaning and interpretation of body image are similar for breast cancer survivors and women without breast cancer. Women completed the Multidimensional Body Self-Relations Questionnaire--Appearance Scales as part of two studies. There were 469 women with breast cancer and 385 women without breast cancer. Invariance testing was conducted to examine whether the items assessing the body image dimensions were similar, whether the dimensions were interpreted similarly, whether the items were equally salient and meaningful, and whether there were mean differences on the body image dimensions across the two groups. The meaning and interpretation of body image dimensions related to appearance evaluation and appearance orientation were similar across the groups, yet some group differences were found for overweight preoccupation and body areas satisfaction (and not testable for self-classified weight). Breast cancer survivors reported a small yet significantly higher mean on appearance evaluation and lower mean on appearance orientation compared to the women without breast cancer. Meaningful comparisons in body image across cancer and non-cancer women can be made using two of the Multidimensional Body Self-Relations Questionnaire--Appearance Scales. The overweight preoccupation subscale could be used to assess body image but should not be used if group mean differences are desirable. Assessing satisfaction with body areas across these groups is not recommended and may introduce systematic bias.

  10. Clinically Approved Nanoparticle Imaging Agents

    PubMed Central

    Thakor, Avnesh S.; Jokerst, Jesse V.; Ghanouni, Pejman; Campbell, Jos L.; Mittra, Erik

    2016-01-01

    Nanoparticles are a new class of imaging agent used for both anatomic and molecular imaging. Nanoparticle-based imaging exploits the signal intensity, stability, and biodistribution behavior of submicron-diameter molecular imaging agents. This review focuses on nanoparticles used in human medical imaging, with an emphasis on radionuclide imaging and MRI. Newer nanoparticle platforms are also discussed in relation to theranostic and multimodal uses. PMID:27738007

  11. Transferring biomarker into molecular probe: Melanin nanoparticle as a naturally active platform for multimodality imaging

    DOE PAGES

    Fan, Quli; Cheng, Kai; Hu, Xiang; ...

    2014-10-07

    Developing multifunctional and easily prepared nanoplatforms that integrate different modalities is highly challenging for molecular imaging. Here, we report the successful transfer of an important molecular target, melanin, into a novel multimodality imaging nanoplatform. Melanin is abundantly expressed in melanotic melanomas and thus has been actively studied as a target for melanoma imaging. In our work, a multifunctional biopolymer nanoplatform based on ultrasmall (<10 nm) water-soluble melanin nanoparticles (MNPs) was developed and showed unique photoacoustic properties and a natural binding ability with metal ions (for example, 64Cu2+, Fe3+). Therefore, MNPs can serve not only as a photoacoustic contrast agent, but also as a nanoplatform for positron emission tomography (PET) and magnetic resonance imaging (MRI). Traditional passive nanoplatforms require complicated and time-consuming processes for prebuilding reporting moieties or chemical modifications using active groups to integrate different contrast properties into one entity. In comparison, utilizing the functional biomarker melanin can greatly simplify the building process. We further conjugated the αvβ3 integrin-targeting cyclic peptide c(RGDfC) to MNPs to allow for U87MG tumor accumulation due to its targeting property combined with the enhanced permeability and retention (EPR) effect. As a result, the multimodal properties of MNPs demonstrate the high potential of endogenous materials with multiple functions as nanoplatforms for molecular theranostics and clinical translation.

  12. Observation of Geometric Parametric Instability Induced by the Periodic Spatial Self-Imaging of Multimode Waves

    NASA Astrophysics Data System (ADS)

    Krupa, Katarzyna; Tonello, Alessandro; Barthélémy, Alain; Couderc, Vincent; Shalaby, Badr Mohamed; Bendahmane, Abdelkrim; Millot, Guy; Wabnitz, Stefan

    2016-05-01

    Spatiotemporal mode coupling in highly multimode physical systems permits new routes for exploring complex instabilities and forming coherent wave structures. We present here the first experimental demonstration of multiple geometric parametric instability sidebands, generated in the frequency domain through resonant space-time coupling, owing to the natural periodic spatial self-imaging of a multimode quasi-continuous-wave beam in a standard graded-index multimode fiber. The input beam was launched in the fiber by means of an amplified microchip laser emitting sub-ns pulses at 1064 nm. The experimentally observed frequency spacing among sidebands agrees well with analytical predictions and numerical simulations. The first-order peaks are located at the considerably large detuning of 123.5 THz from the pump. These results open the remarkable possibility to convert a near-infrared laser directly into a broad spectral range spanning visible and infrared wavelengths, by means of a single resonant parametric nonlinear effect occurring in the normal dispersion regime. As further evidence of our strong space-time coupling regime, we observed the striking effect that all of the different sideband peaks were carried by a well-defined and stable bell-shaped spatial profile.

  13. Portable laser synthesizer for high-speed multi-dimensional spectroscopy

    DOEpatents

    Demos, Stavros G [Livermore, CA; Shverdin, Miroslav Y [Sunnyvale, CA; Shirk, Michael D [Brentwood, CA

    2012-05-29

    Portable, field-deployable laser synthesizer devices designed for multi-dimensional spectrometry and time-resolved and/or hyperspectral imaging include a coherent light source which simultaneously produces a very broad, energetic, discrete spectrum spanning through or within the ultraviolet, visible, and near infrared wavelengths. The light output is spectrally resolved and each wavelength is delayed with respect to each other. A probe enables light delivery to a target. For multidimensional spectroscopy applications, the probe can collect the resulting emission and deliver this radiation to a time gated spectrometer for temporal and spectral analysis.

  14. [Isolated left ventricular non-compaction associated with Ebstein's anomaly. Multimodality non-invasive imaging for the assessment of congenital heart disease].

    PubMed

    Renilla, Alfredo; Santamarta, Elena; Corros, Cecilia; Martín, María; Barreiro, Manuel; de la Hera, Jesús

    2013-01-01

    Establishing the etiology of heart failure in patients with congenital heart disease can be challenging. These patients may have multiple concomitant anomalies that can be missed at the initial diagnosis. In patients with congenital heart disease, recent non-invasive cardiac imaging techniques allow a more accurate evaluation of cardiac morphology and left ventricular systolic function. We present a rare case where multimodal cardiac imaging was useful to establish the final diagnosis of left ventricular non-compaction associated with Ebstein's anomaly. Copyright © 2012 Instituto Nacional de Cardiología Ignacio Chávez. Published by Masson Doyma México S.A. All rights reserved.

  15. AMIDE: a free software tool for multimodality medical image analysis.

    PubMed

    Loening, Andreas Markus; Gambhir, Sanjiv Sam

    2003-07-01

    Amide's a Medical Image Data Examiner (AMIDE) has been developed as a user-friendly, open-source software tool for displaying and analyzing multimodality volumetric medical images. Central to the package's abilities to simultaneously display multiple data sets (e.g., PET, CT, MRI) and regions of interest is the on-demand data reslicing implemented within the program. Data sets can be freely shifted, rotated, viewed, and analyzed with the program automatically handling interpolation as needed from the original data. Validation has been performed by comparing the output of AMIDE with that of several existing software packages. AMIDE runs on UNIX, Macintosh OS X, and Microsoft Windows platforms, and it is freely available with source code under the terms of the GNU General Public License.
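
    The on-demand reslicing that makes this kind of multimodality overlay possible, i.e. resampling a shifted and rotated volume onto a display grid with interpolation, can be sketched generically with scipy. The sketch below is not AMIDE's implementation; the volume, rotation and translation values are arbitrary.

```python
# Generic sketch of on-demand volume reslicing (rotate + shift + interpolate),
# the kind of operation needed to overlay PET/CT/MRI data sets; the transform
# parameters here are arbitrary, and this is not AMIDE's own code.
import numpy as np
from scipy.ndimage import affine_transform

volume = np.random.rand(64, 64, 64)             # placeholder volumetric data set

theta = np.deg2rad(15.0)                         # rotate 15 degrees about the first axis
rotation = np.array([[1, 0, 0],
                     [0, np.cos(theta), -np.sin(theta)],
                     [0, np.sin(theta),  np.cos(theta)]])
center = (np.array(volume.shape) - 1) / 2.0
shift = np.array([0.0, 5.0, -3.0])               # translation in voxels

# affine_transform maps output coords to input coords: in = R @ out + offset,
# so this offset rotates about the volume center and then translates.
offset = center - rotation @ center + shift
resliced = affine_transform(volume, rotation, offset=offset, order=1)
print(resliced.shape)
```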

  16. Multimodal and synthetic aperture approach to full-field 3D shape and displacement measurements

    NASA Astrophysics Data System (ADS)

    Kujawińska, M.; Sitnik, R.

    2017-08-01

    Recently most of the measurement tasks in industry, civil engineering and culture heritage applications require archiving, characterization and monitoring of 3D objects and structures and their performance under changing conditions. These requirements can be met if multimodal measurement (MM) strategy is applied. It rely on effective combining structured light method and 3D digital image correlation with laser scanning/ToF, thermal imaging, multispectral imaging and BDRF measurements. In the case of big size and/or complicated objects MM have to be combined with hierarchical or synthetic aperture (SA) measurements. The new solutions in MM and SA strategies are presented and their applicability is shown at interesting cultural heritage and civil engineering applications.

  17. Toward in vivo diagnosis of skin cancer using multimode imaging dermoscopy: (II) molecular mapping of highly pigmented lesions

    NASA Astrophysics Data System (ADS)

    Vasefi, Fartash; MacKinnon, Nicholas; Farkas, Daniel L.

    2014-03-01

    We have developed a multimode imaging dermoscope that combines polarization and hyperspectral imaging with a computationally rapid analytical model. This approach employs specific spectral ranges of visible and near-infrared wavelengths for mapping the distribution of specific skin bio-molecules. This corrects for the melanin-hemoglobin misestimation common to other systems, without resorting to complex and computationally intensive tissue optical models that are prone to inaccuracies due to over-modeling. Various human skin measurements, including a melanocytic nevus and venous occlusion conditions, were investigated and compared with other ratiometric spectral imaging approaches. Access to the broad range of hyperspectral data in the visible and near-infrared range allows our algorithm to flexibly use different wavelength ranges for chromophore estimation while minimizing melanin-hemoglobin optical signature cross-talk.
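
    The chromophore-mapping step can be illustrated generically as per-pixel linear unmixing of an absorbance spectrum into melanin-like and hemoglobin-like contributions with non-negative least squares. The basis spectra below are crude analytic placeholders, not calibrated extinction curves, and the fitting scheme is not the authors' analytical model.

```python
# Generic per-pixel chromophore unmixing sketch: fit an absorbance spectrum as a
# non-negative mix of "melanin-like" and "hemoglobin-like" basis spectra.
# The basis curves are crude placeholders, not calibrated extinction spectra.
import numpy as np
from scipy.optimize import nnls

wl = np.linspace(500, 900, 81)                            # wavelengths in nm
melanin_basis = (wl / 500.0) ** -3.5                       # smoothly decaying, melanin-like
hemoglobin_basis = np.exp(-((wl - 560.0) / 40.0) ** 2)     # single-band, hemoglobin-like
basis = np.column_stack([melanin_basis, hemoglobin_basis])

# Simulated pixel spectrum: 0.6 "melanin" + 0.3 "hemoglobin" + noise.
pixel = 0.6 * melanin_basis + 0.3 * hemoglobin_basis \
      + 0.01 * np.random.default_rng(0).normal(size=wl.size)

concentrations, residual = nnls(basis, pixel)
print("estimated [melanin, hemoglobin]:", np.round(concentrations, 2))
```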

  18. Visual tracking for multi-modality computer-assisted image guidance

    NASA Astrophysics Data System (ADS)

    Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp

    2017-03-01

    With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support the localization of anatomical targets, support placement of imaging probe and instruments, and provide fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, urology, and others, many of which see increasing pressure to utilize medical imaging and especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system, consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex- and in-vivo animals, and patients.

  19. Delivery of ultrashort spatially focused pulses through a multimode fiber

    NASA Astrophysics Data System (ADS)

    Morales-Delgado, Edgar E.; Papadopoulos, Ioannis N.; Farahi, Salma; Psaltis, Demetri; Moser, Christophe

    2015-08-01

    Multimode optical fibers potentially allow the transmission of larger amounts of information than their single mode counterparts because of their high number of supported modes. However, propagation of a light pulse through a multimode fiber suffers from spatial distortions due to the superposition of the various excited modes and from time broadening due to modal dispersion. We present a method based on digital phase conjugation to selectively excite in a multimode fiber specific optical fiber modes that follow similar optical paths as they travel through the fiber. The excited modes interfere constructively at the fiber output generating an ultrashort spatially focused pulse. The excitation of a limited number of modes following similar optical paths limits modal dispersion, allowing the transmission of the ultrashort pulse. We have experimentally demonstrated the delivery of a focused spot of pulse width equal to 500 fs through a 30 cm, 200 micrometer core step-index multimode fiber. The results of this study show that two-photon imaging capability can be added to ultra-thin lensless endoscopy using commercial multimode fibers.

  20. Delivery of an ultrashort spatially focused pulse to the other end of a multimode fiber using digital phase conjugation

    NASA Astrophysics Data System (ADS)

    Morales Delgado, Edgar E.; Papadopoulos, Ioannis N.; Farahi, Salma; Psaltis, Demetri; Moser, Christophe

    2015-03-01

    Multimode optical fibers potentially allow the transmission of larger amounts of information than their single mode counterparts because of their high number of supported modes. However, propagation of a light pulse through a multimode fiber suffers from spatial distortions due to the superposition of the various excited modes and from time broadening due to modal dispersion. We present a method based on digital phase conjugation to selectively excite in a multimode fiber specific optical fiber modes that follow similar optical paths as they travel through the fiber. The excited modes interfere constructively at the fiber output generating an ultrashort spatially focused pulse. The excitation of a limited number of modes following similar optical paths limits modal dispersion, allowing the transmission of the ultrashort pulse. We have experimentally demonstrated the delivery of a focused spot of pulse width equal to 500 fs through a 30 cm, 200 micrometer core step-index multimode fiber. The results of this study show that two-photon imaging capability can be added to ultra-thin lensless endoscopy using commercial multimode fibers.

  1. Robust temporal alignment of multimodal cardiac sequences

    NASA Astrophysics Data System (ADS)

    Perissinotto, Andrea; Queirós, Sandro; Morais, Pedro; Baptista, Maria J.; Monaghan, Mark; Rodrigues, Nuno F.; D'hooge, Jan; Vilaça, João. L.; Barbosa, Daniel

    2015-03-01

    Given the dynamic nature of cardiac function, correct temporal alignment of pre-operative models and intraoperative images is crucial for augmented reality in cardiac image-guided interventions. As such, the current study focuses on the development of an image-based strategy for temporal alignment of multimodal cardiac imaging sequences, such as cine Magnetic Resonance Imaging (MRI) or 3D Ultrasound (US). First, we derive a robust, modality-independent signal from the image sequences, estimated by computing the normalized cross-correlation between each frame in the temporal sequence and the end-diastolic frame. This signal acts as a surrogate for the left-ventricular (LV) volume curve over time, whose variation indicates different temporal landmarks of the cardiac cycle. We then perform the temporal alignment of these surrogate signals derived from MRI and US sequences of the same patient through Dynamic Time Warping (DTW), allowing both sequences to be synchronized. The proposed framework was evaluated in 98 patients who had undergone both 3D+t MRI and US scans. The end-systolic frame could be accurately estimated as the minimum of the image-derived surrogate signal, presenting a relative error of 1.6 +/- 1.9% and 4.0 +/- 4.2% for the MRI and US sequences, respectively, thus supporting its association with key temporal instants of the cardiac cycle. The use of DTW reduces the desynchronization of the cardiac events in MRI and US sequences, allowing multimodal cardiac imaging sequences to be temporally aligned. Overall, a generic, fast and accurate method for temporal synchronization of MRI and US sequences of the same patient was introduced. This approach could be straightforwardly used for the correct temporal alignment of pre-operative MRI information and intra-operative US images.
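
    The two key steps described above, a per-frame normalized cross-correlation signal against the end-diastolic frame followed by DTW alignment of the two surrogates, can be sketched in a few lines. The frame data below are random placeholders, the end-diastolic frame is assumed to be frame 0, and the DTW is a plain dynamic-programming implementation rather than the authors' code.

```python
# Sketch of the two steps described above: (1) a per-frame surrogate signal from
# normalized cross-correlation with the end-diastolic frame, (2) dynamic time
# warping of the MRI and US surrogates. Frame data here are random placeholders.
import numpy as np

def surrogate(frames):
    """frames: (n_frames, H, W); frame 0 is assumed end-diastolic."""
    ref = frames[0] - frames[0].mean()
    out = []
    for f in frames:
        g = f - f.mean()
        out.append((ref * g).sum() / (np.linalg.norm(ref) * np.linalg.norm(g)))
    return np.array(out)

def dtw_path(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping; returns the warping path."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost[i, j] = abs(a[i - 1] - b[j - 1]) + min(cost[i - 1, j],
                                                        cost[i, j - 1],
                                                        cost[i - 1, j - 1])
    path, (i, j) = [], (n, m)
    while (i, j) != (1, 1):
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j), (i, j - 1), (i - 1, j - 1)], key=lambda p: cost[p])
    return [(0, 0)] + path[::-1]

rng = np.random.default_rng(0)
mri_sig = surrogate(rng.random((30, 64, 64)))
us_sig = surrogate(rng.random((45, 48, 48)))
print(dtw_path(mri_sig, us_sig)[:5])
```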

  2. Prussian blue nanocubes: multi-functional nanoparticles for multimodal imaging and image-guided therapy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Cook, Jason R.; Dumani, Diego S.; Kubelick, Kelsey P.; Luci, Jeffrey; Emelianov, Stanislav Y.

    2017-03-01

    Imaging modalities utilize contrast agents to improve morphological visualization and to assess functional and molecular/cellular information. Here we present a new type of nanometer scale multi-functional particle that can be used for multi-modal imaging and therapeutic applications. Specifically, we synthesized monodisperse 20 nm Prussian Blue Nanocubes (PBNCs) with desired optical absorption in the near-infrared region and superparamagnetic properties. PBNCs showed excellent contrast in photoacoustic (700 nm wavelength) and MR (3T) imaging. Furthermore, photostability was assessed by exposing the PBNCs to nearly 1,000 laser pulses (5 ns pulse width) with up to 30 mJ/cm2 laser fluences. The PBNCs exhibited insignificant changes in photoacoustic signal, demonstrating enhanced robustness compared to the commonly used gold nanorods (substantial photodegradation with fluences greater than 5 mJ/cm2). Furthermore, the PBNCs exhibited superparamagnetism with a magnetic saturation of 105 emu/g, a 5x improvement over superparamagnetic iron-oxide (SPIO) nanoparticles. PBNCs exhibited enhanced T2 contrast measured using 3T clinical MRI. Because of the excellent optical absorption and magnetism, PBNCs have potential uses in other imaging modalities including optical tomography, microscopy, magneto-motive OCT/ultrasound, etc. In addition to multi-modal imaging, the PBNCs are multi-functional and, for example, can be used to enhance magnetic delivery and as therapeutic agents. Our initial studies show that stem cells can be labeled with PBNCs to perform image-guided magnetic delivery. Overall, PBNCs can act as imaging/therapeutic agents in diverse applications including cancer, cardiovascular disease, ophthalmology, and tissue engineering. Furthermore, PBNCs are based on FDA approved Prussian Blue thus potentially easing clinical translation of PBNCs.

  3. Amyloid PET in clinical practice: Its place in the multidimensional space of Alzheimer's disease☆

    PubMed Central

    Vandenberghe, Rik; Adamczuk, Katarzyna; Dupont, Patrick; Laere, Koen Van; Chételat, Gaël

    2013-01-01

    Amyloid imaging is currently introduced to the market for clinical use. We will review the evidence demonstrating that the different amyloid PET ligands that are currently available are valid biomarkers for Alzheimer-related β amyloidosis. Based on recent findings from cross-sectional and longitudinal imaging studies using different modalities, we will incorporate amyloid imaging into a multidimensional model of Alzheimer's disease. Aside from the critical role in improving clinical trial design for amyloid-lowering drugs, we will also propose a tentative algorithm for when it may be useful in a memory clinic environment. Gaps in our evidence-based knowledge of the added value of amyloid imaging in a clinical context will be identified and will need to be addressed by dedicated studies of clinical utility. PMID:24179802

  4. STRUCTURAL AND FUNCTIONAL CHARACTERIZATION OF BENIGN FLECK RETINA USING MULTIMODAL IMAGING.

    PubMed

    Neriyanuri, Srividya; Rao, Chetan; Raman, Rajiv

    2017-01-01

    To report structural and functional features in a case series of benign fleck retina using multimodal imaging. Four cases with benign fleck retina underwent complete ophthalmic examination that included detailed history, visual acuity and refractive error testing, the FM-100 hue test, dilated fundus evaluation, full-field electroretinography, fundus photography with autofluorescence, fundus fluorescein angiography, and swept-source optical coherence tomography. The ages of the cases ranged from 19 to 35 years (3 males and 1 female). Parental consanguinity was reported in two cases. All of them were visually asymptomatic with best-corrected visual acuity of 20/20 (moderate astigmatism) in both eyes. Low color discrimination was seen in two cases. Fundus photography showed pisciform flecks that were compactly placed at the posterior pole and became discrete, diverging toward the periphery. Lesions were seen as smaller dots within 1500 microns of the fovea and were hyperfluorescent on autofluorescence. Palisading retinal pigment epithelium defects were seen at the posterior pole on fundus fluorescein angiography; irregular hyperfluorescence was also noted. One case had reduced cone responses on full-field electroretinography; the other three cases had a normal electroretinogram. On optical coherence tomography, the level of the lesions varied from the retinal pigment epithelium to the inner and outer segments, extending to the external limiting membrane. Functional and structural deficits in benign fleck retina were picked up using multimodal imaging.

  5. Design and validation of a diffuse reflectance and spectroscopic microendoscope with poly(dimethylsiloxane)-based phantoms

    PubMed Central

    Greening, Gage J.; Powless, Amy J.; Hutcheson, Joshua A.; Prieto, Sandra P.; Majid, Aneeka A.; Muldoon, Timothy J.

    2015-01-01

    Many cases of epithelial cancer originate in basal layers of tissue and are initially undetected by conventional microendoscopy techniques. We present a bench-top, fiber-bundle microendoscope capable of providing high resolution images of surface cell morphology. Additionally, the microendoscope has the capability to interrogate deeper into material by using diffuse reflectance and broadband diffuse reflectance spectroscopy. The purpose of this multimodal technique was to overcome the limitation of microendoscopy techniques that are limited to only visualizing morphology at the tissue or cellular level. Using a custom fiber optic probe, high resolution surface images were acquired using topical proflavine to fluorescently stain non-keratinized epithelia. A 635 nm laser coupled to a 200 μm multimode fiber delivers light to the sample and the diffuse reflectance signal was captured by a 1 mm image guide fiber. Finally, a tungsten-halogen lamp coupled to a 200 μm multimode fiber delivers broadband light to the sample to acquire spectra at source-detector separations of 374, 729, and 1051 μm. To test the instrumentation, a high resolution proflavine-induced fluorescent image of resected healthy mouse colon was acquired. Additionally, five monolayer poly(dimethylsiloxane)-based optical phantoms with varying absorption and scattering properties were created to acquire diffuse reflectance profiles and broadband spectra. PMID:25983372

  6. Design and validation of a diffuse reflectance and spectroscopic microendoscope with poly(dimethylsiloxane)-based phantoms

    NASA Astrophysics Data System (ADS)

    Greening, Gage J.; Powless, Amy J.; Hutcheson, Joshua A.; Prieto, Sandra P.; Majid, Aneeka A.; Muldoon, Timothy J.

    2015-03-01

    Many cases of epithelial cancer originate in basal layers of tissue and are initially undetected by conventional microendoscopy techniques. We present a bench-top, fiber-bundle microendoscope capable of providing high resolution images of surface cell morphology. Additionally, the microendoscope has the capability to interrogate deeper into material by using diffuse reflectance and broadband diffuse reflectance spectroscopy. The purpose of this multimodal technique was to overcome the limitation of microendoscopy techniques that are limited to only visualizing morphology at the tissue or cellular level. Using a custom fiber optic probe, high resolution surface images were acquired using topical proflavine to fluorescently stain non-keratinized epithelia. A 635 nm laser coupled to a 200 μm multimode fiber delivers light to the sample and the diffuse reflectance signal was captured by a 1 mm image guide fiber. Finally, a tungsten-halogen lamp coupled to a 200 μm multimode fiber delivers broadband light to the sample to acquire spectra at source-detector separations of 374, 729, and 1051 μm. To test the instrumentation, a high resolution proflavine-induced fluorescent image of resected healthy mouse colon was acquired. Additionally, five monolayer poly(dimethylsiloxane)-based optical phantoms with varying absorption and scattering properties were created to acquire diffuse reflectance profiles and broadband spectra.

  7. Near-infrared light-triggered theranostics for tumor-specific enhanced multimodal imaging and photothermal therapy

    PubMed Central

    Wu, Bo; Wan, Bing; Lu, Shu-Ting; Deng, Kai; Li, Xiao-Qi; Wu, Bao-Lin; Li, Yu-Shuang; Liao, Ru-Fang; Huang, Shi-Wen; Xu, Hai-Bo

    2017-01-01

    The major challenge with current clinical contrast agents (CAs) and chemotherapy is poor tumor selectivity and response. Based on the self-quenching property of IR820 at high concentrations, and on the different contrast effect of Gd-DOTA inside and outside the liposome, we developed “bomb-like” light-triggered CAs (LTCAs) for enhanced CT/MRI/FI multimodal imaging, which can specifically improve the signal-to-noise ratio of tumor tissue. IR820, iohexol and Gd-chelates were first encapsulated at high concentration into the thermosensitive nanocarrier, resulting in protection and fluorescence quenching. The release of CAs was then triggered by near-infrared (NIR) laser irradiation, leading to fluorescence and MRI activation and enabling imaging of inflammation. In vitro and in vivo experiments demonstrated that LTCAs with 808 nm laser irradiation have a shorter T1 relaxation time in MRI and stronger intensity in FI compared to those without irradiation. Additionally, due to the high photothermal conversion efficiency of IR820, the injection of LTCAs was demonstrated to completely inhibit C6 tumor growth in nude mice up to 17 days after NIR laser irradiation. The results indicate that the LTCAs can serve as a promising platform for NIR-activated multimodal imaging and photothermal therapy. PMID:28670120

  8. SU-E-I-23: Design and Clinical Application of External Marking Body in Multi- Mode Medical Images Registration and Fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Z; Gong, G

    2014-06-01

    Purpose: To design an external marking body (EMB) that is visible on computed tomography (CT), magnetic resonance (MR), positron emission tomography (PET) and single-photon emission computed tomography (SPECT) images, and to investigate the use of the EMB for registration and fusion of multiple medical images in the clinic. Methods: We generated a solution containing paramagnetic metal ions and iodide ions (a CT/MR dual-visible solution) that could be viewed on CT and MR images; a multi-mode image visible solution (MIVS) was obtained by adding radioactive nuclear material. The EMB was produced by filling a globular plastic theca (diameter: 3–6 mm) that encloses the MIVS. The EMBs were fixed on the patient surface and CT, MR, PET and SPECT scans were obtained. The feasibility of clinical application and the display and registration error of the EMB among different image modalities were investigated. Results: The dual-visible solution was highly dense on CT images (HU>700). A high signal was also found on all MR scanning (T1, T2, STIR and FLAIR) images, and the signal was higher than that of subcutaneous fat. The EMB with radioactive nuclear material produced a region of radionuclide concentration on PET and SPECT images, and the signal of the EMB was similar to or higher than tumor signals. The theca with MIVS was clearly visible on all images without artifact, and its shape was round or oval with a sharp edge. The maximum diameter display error was 0.3 ± 0.2 mm on CT and MRI images, and 1.0 ± 0.3 mm on PET and SPECT images. In addition, the registration accuracy of the theca center among multi-mode images was less than 1 mm. Conclusion: The application of the EMB with MIVS improves the registration and fusion accuracy of multi-mode medical images. Furthermore, it has the potential to improve disease diagnosis and treatment outcome.
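
    Once the marker centers have been localized in two modalities, a rigid transform between them can be estimated by the standard Kabsch/Procrustes procedure and the per-marker fiducial registration error reported as a sanity check. The sketch below is a generic illustration with synthetic coordinates, not the registration method used in the abstract.

```python
# Generic point-based rigid registration (Kabsch) between marker centers
# localized in two modalities; coordinates are synthetic placeholders.
import numpy as np

def rigid_register(src, dst):
    """Least-squares rotation R and translation t with dst ~ src @ R.T + t."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))                # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(0) - src.mean(0) @ R.T
    return R, t

rng = np.random.default_rng(0)
ct_markers = rng.uniform(0, 100, size=(6, 3))             # marker centers in CT (mm)
ang = 0.3                                                  # synthetic rotation about z
true_R = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
pet_markers = ct_markers @ true_R.T + np.array([5.0, -2.0, 8.0]) \
            + 0.5 * rng.normal(size=(6, 3))                # same markers in PET, noisy

R, t = rigid_register(ct_markers, pet_markers)
fre = np.linalg.norm(ct_markers @ R.T + t - pet_markers, axis=1)
print("fiducial registration error (mm):", np.round(fre, 2))
```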

  9. Stent-induced coronary artery stenosis characterized by multimodal nonlinear optical microscopy

    NASA Astrophysics Data System (ADS)

    Wang, Han-Wei; Simianu, Vlad; Locker, Mattew J.; Cheng, Ji-Xin; Sturek, Michael

    2011-02-01

    We demonstrate for the first time the applicability of multimodal nonlinear optical (NLO) microscopy to the interrogation of stented coronary arteries under different diet and stent deployment conditions. Bare metal stents and Taxus drug-eluting stents (DES) were placed in coronary arteries of Ossabaw pigs of control and atherogenic diet groups. Multimodal NLO imaging was performed to inspect changes in arterial structures and compositions after stenting. Sum frequency generation, one of the multimodalities, was used for the quantitative analysis of collagen content in the peristent and in-stent artery segments of both pig groups. Atherogenic diet increased lipid and collagen in peristent segments. In-stent segments showed decreased collagen expression in neointima compared to media. Deployment of DES in atheromatous arteries inhibited collagen expression in the arterial media.

  10. Quantitative label-free multimodality nonlinear optical imaging for in situ differentiation of cancerous lesions

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoyun; Li, Xiaoyan; Cheng, Jie; Liu, Zhengfan; Thrall, Michael J.; Wang, Xi; Wang, Zhiyong; Wong, Stephen T. C.

    2013-03-01

    The development of real-time, label-free imaging techniques has recently attracted research interest for in situ differentiation of cancerous lesions from normal tissues. Molecule-specific intrinsic contrast can arise from label-free imaging techniques such as Coherent Anti-Stokes Raman Scattering (CARS), Two-Photon Excited AutoFluorescence (TPEAF), and Second Harmonic Generation (SHG), which, in combination, would hold the promise of a powerful label-free tool for cancer diagnosis. Among cancer-related deaths, lung carcinoma is the leading cause for both sexes. Although early treatment can increase the survival rate dramatically, lesion detection and precise diagnosis at an early stage is unusual due to its asymptomatic nature and limitations of current diagnostic techniques that make screening difficult. We investigated the potential of using multimodality nonlinear optical microscopy that incorporates CARS, TPEAF, and SHG techniques for differentiation of lung cancer from normal tissue. Cancerous and non-cancerous lung tissue samples from patients were imaged using CARS, TPEAF, and SHG techniques for comparison. These images showed good pathology correlation with hematoxylin and eosin (H and E) stained sections from the same tissue samples. Ongoing work includes imaging at various penetration depths to show three-dimensional morphologies of tumor cell nuclei using CARS, elastin using TPEAF, and collagen using SHG and developing classification algorithms for quantitative feature extraction to enable lung cancer diagnosis. Our results indicate that via real-time morphology analyses, a multimodality nonlinear optical imaging platform potentially offers a powerful minimally-invasive way to differentiate cancer lesions from surrounding non-tumor tissues in vivo for clinical applications.

  11. Intra-operative label-free multimodal multiphoton imaging of breast cancer margins and microenvironment (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Sun, Yi; You, Sixian; Tu, Haohua; Spillman, Darold R.; Marjanovic, Marina; Chaney, Eric J.; Liu, George Z.; Ray, Partha S.; Higham, Anna; Boppart, Stephen A.

    2017-02-01

    Label-free multi-photon imaging has been a powerful tool for studying tissue microstructures and biochemical distributions, particularly for investigating tumors and their microenvironments. However, it remains challenging for traditional bench-top multi-photon microscope systems to conduct ex vivo tumor tissue imaging in the operating room due to their bulky setups and laser sources. In this study, we designed, built, and clinically demonstrated a portable multi-modal nonlinear label-free microscope system that combined four modalities, including two- and three- photon fluorescence for studying the distributions of FAD and NADH, and second and third harmonic generation, respectively, for collagen fiber structures and the distribution of micro-vesicles found in tumors and the microenvironment. Optical realignments and switching between modalities were motorized for more rapid and efficient imaging and for a light-tight enclosure, reducing ambient light noise to only 5% within the brightly lit operating room. Using up to 20 mW of laser power after a 20x objective, this system can acquire multi-modal sets of images over 600 μm × 600 μm at an acquisition rate of 60 seconds using galvo-mirror scanning. This portable microscope system was demonstrated in the operating room for imaging fresh, resected, unstained breast tissue specimens, and for assessing tumor margins and the tumor microenvironment. This real-time label-free nonlinear imaging system has the potential to uniquely characterize breast cancer margins and the microenvironment of tumors to intraoperatively identify structural, functional, and molecular changes that could indicate the aggressiveness of the tumor.

  12. Investigating the Abscopal Effects of Radioablation on Shielded Bone Marrow in Rodent Models Using Multimodality Imaging.

    PubMed

    Afshar, Solmaz F; Zawaski, Janice A; Inoue, Taeko; Rendon, David A; Zieske, Arthur W; Punia, Jyotinder N; Sabek, Omaima M; Gaber, M Waleed

    2017-07-01

    The abscopal effect is the response to radiation at sites that are distant from the irradiated site of an organism, and it is thought to play a role in bone marrow (BM) recovery by initiating responses in the unirradiated bone marrow. Understanding the mechanism of this effect has applications in treating BM failure (BMF) and BM transplantation (BMT), and improving survival of nuclear disaster victims. Here, we investigated the use of multimodality imaging as a translational tool to longitudinally assess bone marrow recovery. We used positron emission tomography/computed tomography (PET/CT), magnetic resonance imaging (MRI) and optical imaging to quantify bone marrow activity, vascular response and marrow repopulation in fully and partially irradiated rodent models. We further measured the effects of radiation on serum cytokine levels, hematopoietic cell counts and histology. PET/CT imaging revealed a radiation-induced increase in proliferation in the shielded bone marrow (SBM) compared to exposed bone marrow (EBM) and sham controls. T 2 -weighted MRI showed radiation-induced hemorrhaging in the EBM and unirradiated SBM. In the EBM and SBM groups, we found alterations in serum cytokine and hormone levels and in hematopoietic cell population proportions, and histological evidence of osteoblast activation at the bone marrow interface. Importantly, we generated a BMT mouse model using fluorescent-labeled bone marrow donor cells and performed fluorescent imaging to reveal the migration of bone marrow cells from shielded to radioablated sites. Our study validates the use of multimodality imaging to monitor bone marrow recovery and provides evidence for the abscopal response in promoting bone marrow recovery after irradiation.

  13. Folic acid-targeted magnetic Tb-doped CeF3 fluorescent nanoparticles as bimodal probes for cellular fluorescence and magnetic resonance imaging.

    PubMed

    Ma, Zhi-Ya; Liu, Yu-Ping; Bai, Ling-Yu; An, Jie; Zhang, Lin; Xuan, Yang; Zhang, Xiao-Shuai; Zhao, Yuan-Di

    2015-10-07

    Magnetic fluorescent nanoparticles (NPs) have great potential applications for diagnostics, imaging and therapy. We developed a facile polyol method to synthesize multifunctional Fe3O4@CeF3:Tb@CeF3 NPs with small size (<20 nm), high water solubility and good biocompatibility. The NPs were modified by ligand exchange reactions with citric acid (CA) to obtain carboxyl-functionalized NPs (Fe3O4@CeF3:Tb@CeF3-COOH). Folic acid (FA), as an affinity ligand, was then covalently conjugated onto the NPs to yield Fe3O4@CeF3:Tb@CeF3-FA NPs. They were then applied as multimodal imaging agents for simultaneous in vitro targeted fluorescence imaging and magnetic resonance imaging (MRI) of HeLa cells with overexpressed folate receptors (FR). The results indicated that these NPs had strong luminescence and enhanced T2-weighted MR contrast and would be promising candidates as multimodal probes for both fluorescence imaging and MRI.

  14. Predictive assessment of kidney functional recovery following ischemic injury using optical spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raman, Rajesh N.; Pivetti, Christopher D.; Ramsamooj, Rajendra

    Functional changes in rat kidneys during the induced ischemic injury and recovery phases were explored using multimodal autofluorescence and light scattering imaging. We aim to evaluate the use of noncontact optical signatures for rapid assessment of tissue function and viability. Specifically, autofluorescence images were acquired in vivo under 355, 325, and 266 nm illumination while light scattering images were collected at the excitation wavelengths as well as using relatively narrowband light centered at 500 nm. The images were simultaneously recorded using a multimodal optical imaging system. We also analyzed the images to obtain time constants, which were correlated to kidney dysfunction as determined by a subsequent survival study and histopathological analysis. This analysis of both the light scattering and autofluorescence images suggests that changes in tissue microstructure, fluorophore emission, and blood absorption spectral characteristics, coupled with vascular response, contribute to the behavior of the observed signal, which may be used to obtain tissue functional information and offer the ability to predict posttransplant kidney function.

  15. A versatile clearing agent for multi-modal brain imaging

    PubMed Central

    Costantini, Irene; Ghobril, Jean-Pierre; Di Giovanna, Antonino Paolo; Mascaro, Anna Letizia Allegra; Silvestri, Ludovico; Müllenbroich, Marie Caroline; Onofri, Leonardo; Conti, Valerio; Vanzi, Francesco; Sacconi, Leonardo; Guerrini, Renzo; Markram, Henry; Iannello, Giulio; Pavone, Francesco Saverio

    2015-01-01

    Extensive mapping of neuronal connections in the central nervous system requires high-throughput µm-scale imaging of large volumes. In recent years, different approaches have been developed to overcome the limitations due to tissue light scattering. These methods are generally developed to improve the performance of a specific imaging modality, thus limiting comprehensive neuroanatomical exploration by multi-modal optical techniques. Here, we introduce a versatile brain clearing agent (2,2′-thiodiethanol; TDE) suitable for various applications and imaging techniques. TDE is cost-efficient, water-soluble and low-viscous and, more importantly, it preserves fluorescence, is compatible with immunostaining and does not cause deformations at sub-cellular level. We demonstrate the effectiveness of this method in different applications: in fixed samples by imaging a whole mouse hippocampus with serial two-photon tomography; in combination with CLARITY by reconstructing an entire mouse brain with light sheet microscopy and in translational research by imaging immunostained human dysplastic brain tissue. PMID:25950610

  16. Predictive assessment of kidney functional recovery following ischemic injury using optical spectroscopy

    DOE PAGES

    Raman, Rajesh N.; Pivetti, Christopher D.; Ramsamooj, Rajendra; ...

    2017-05-03

    Functional changes in rat kidneys during the induced ischemic injury and recovery phases were explored using multimodal autofluorescence and light scattering imaging. We aim to evaluate the use of noncontact optical signatures for rapid assessment of tissue function and viability. Specifically, autofluorescence images were acquired in vivo under 355, 325, and 266 nm illumination while light scattering images were collected at the excitation wavelengths as well as using relatively narrowband light centered at 500 nm. The images were simultaneously recorded using a multimodal optical imaging system. We also analyzed the images to obtain time constants, which were correlated to kidney dysfunction as determined by a subsequent survival study and histopathological analysis. This analysis of both the light scattering and autofluorescence images suggests that changes in tissue microstructure, fluorophore emission, and blood absorption spectral characteristics, coupled with vascular response, contribute to the behavior of the observed signal, which may be used to obtain tissue functional information and offer the ability to predict posttransplant kidney function.

  17. Adaptive Optics Imaging in Laser Pointer Maculopathy.

    PubMed

    Sheyman, Alan T; Nesper, Peter L; Fawzi, Amani A; Jampol, Lee M

    2016-08-01

    The authors report multimodal imaging including adaptive optics scanning laser ophthalmoscopy (AOSLO) (Apaeros retinal image system AOSLO prototype; Boston Micromachines Corporation, Boston, MA) in a case of previously diagnosed unilateral acute idiopathic maculopathy (UAIM) that demonstrated features of laser pointer maculopathy. The authors also show the adaptive optics images of a laser pointer maculopathy case previously reported. A 15-year-old girl was referred for the evaluation of a maculopathy suspected to be UAIM. The authors reviewed the patient's history and obtained fluorescein angiography, autofluorescence, optical coherence tomography, infrared reflectance, and AOSLO. The time course of disease and clinical examination did not fit with UAIM, but the linear pattern of lesions was suspicious for self-inflicted laser pointer injury. This was confirmed on subsequent questioning of the patient. The presence of linear lesions in the macula that are best highlighted with multimodal imaging techniques should alert the physician to the possibility of laser pointer injury. AOSLO further characterizes photoreceptor damage in this condition. [Ophthalmic Surg Lasers Imaging Retina. 2016;47:782-785.]. Copyright 2016, SLACK Incorporated.

  18. Multimodal Fusion with Reference: Searching for Joint Neuromarkers of Working Memory Deficits in Schizophrenia

    PubMed Central

    Qi, Shile; Calhoun, Vince D.; van Erp, Theo G. M.; Bustillo, Juan; Damaraju, Eswar; Turner, Jessica A.; Du, Yuhui; Chen, Jiayu; Yu, Qingbao; Mathalon, Daniel H.; Ford, Judith M.; Voyvodic, James; Mueller, Bryon A.; Belger, Aysenil; McEwen, Sarah; Potkin, Steven G.; Preda, Adrian; Jiang, Tianzi

    2017-01-01

    Multimodal fusion is an effective approach to take advantage of cross-information among multiple imaging data to better understand brain diseases. However, most current fusion approaches are blind, without adopting any prior information. To date, there is increasing interest in uncovering the neurocognitive mapping of specific behavioral measurements on enriched brain imaging data; hence, a supervised, goal-directed model that uses a priori information as a reference to guide multimodal data fusion is needed and is a natural option. Here we proposed a fusion with reference model, called “multi-site canonical correlation analysis with reference plus joint independent component analysis” (MCCAR+jICA), which can precisely identify co-varying multimodal imaging patterns closely related to reference information, such as cognitive scores. In a 3-way fusion simulation, the proposed method was compared with its alternatives on estimation accuracy of both target component decomposition and modality linkage detection; MCCAR+jICA outperformed the others with higher precision. In human imaging data, working memory performance was utilized as a reference to investigate the covarying functional and structural brain patterns among 3 modalities and how they are impaired in schizophrenia. Two independent cohorts (294 and 83 subjects, respectively) were used. Interestingly, similar brain maps were identified between the two cohorts, with substantial overlap in the executive control networks in fMRI, the salience network in sMRI, and major white matter tracts in dMRI. These regions have been linked with working memory deficits in schizophrenia in multiple reports, and MCCAR+jICA further verified them in a repeatable, joint manner, demonstrating the potential of such results to serve as neuromarkers for mental disorders. PMID:28708547
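
    The record above describes a supervised, reference-guided fusion model (MCCAR+jICA). As a purely conceptual stand-in, the sketch below runs a plain two-view canonical correlation analysis in scikit-learn and then checks which joint component tracks a behavioural reference score; the array shapes and variable names are illustrative assumptions, and this is not the MCCAR+jICA algorithm itself.

    ```python
    # Simplified two-view CCA with a post-hoc reference check; NOT MCCAR+jICA.
    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    n_subjects = 120
    fmri = rng.normal(size=(n_subjects, 60))    # modality 1: subject-by-feature
    smri = rng.normal(size=(n_subjects, 40))    # modality 2: subject-by-feature
    memory_score = rng.normal(size=n_subjects)  # reference measure (hypothetical)

    cca = CCA(n_components=3)
    U, V = cca.fit_transform(fmri, smri)        # co-varying components across views

    # See which joint component correlates most with the reference measure.
    for k in range(U.shape[1]):
        r = np.corrcoef(U[:, k], memory_score)[0, 1]
        print(f"component {k}: corr(fMRI variate, reference) = {r:+.3f}")
    ```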

  19. MULTIMODAL IMAGING ADDS NEW INSIGHTS INTO ACUTE SYPHILITIC POSTERIOR PLACOID CHORIORETINITIS.

    PubMed

    Tsui, Edmund; Gal-Or, Orly; Ghadiali, Quraish; Freund, K Bailey

    2017-10-11

    Acute syphilitic posterior placoid chorioretinitis (ASPPC) is an uncommon manifestation of ocular syphilis with distinct clinical features. We describe new multimodal imaging findings in a patient with ASPPC. Observational case report with multimodal imaging. A 44-year-old woman presented with 5 days of decreased vision in her right eye. Visual acuity was counting fingers in her right eye and 20/20 in her left eye. Funduscopic examination of the right eye showed a yellow placoid macular lesion with extension beyond the equator, which was encircled by an annular ring of outer retinal whitening. Ultra-widefield fundus autofluorescence demonstrated hyperautofluorescence corresponding to the placoid lesion. Examination of the left eye appeared unremarkable, but ultra-widefield fundus autofluorescence showed an area of hyperautofluorescence located superonasal to the optic nerve. Optical coherence tomography of the right eye demonstrated subretinal fluid and overlying disruption of the ellipsoid zone. Fluorescein angiography demonstrated early hypofluorescent and hyperfluorescent spots and late staining within the placoid lesion. Optical coherence tomography angiography showed several areas of decreased flow signal within the placoid lesion at the level of the choriocapillaris. Laboratory testing revealed a rapid plasma reagin titer of 1:1,024. Two months after treatment with intravenous penicillin G, visual acuity had improved to 20/25 in her right eye, and optical coherence tomography showed partial restoration of the ellipsoid zone. The annular ring resolved with near normalization of fundus autofluorescence and optical coherence tomography angiography demonstrated resolution of flow. Multimodal imaging provides further insight into the pathogenesis of ASPPC. Ultra-widefield fundus autofluorescence may show evidence of ellipsoid zone disruption in areas that clinically appear normal. Flow voids within the choriocapillaris in ASPPC appear to resolve with appropriate treatment, a finding that suggests a transient disruption of choriocapillaris flow in ASPPC.

  20. A Review of Multidimensional, Multifluid Intermediate-scale Experiments: Flow Behavior, Saturation Imaging, and Tracer Detection and Quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oostrom, Mart; Dane, J. H.; Wietsma, Thomas W.

    2007-08-01

    A review is presented of original multidimensional, intermediate-scale experiments involving non-aqueous phase liquid (NAPL) flow behavior, imaging, and detection/quantification with solute tracers. In a companion paper (Oostrom, M., J.H. Dane, and T.W. Wietsma. 2006. A review of multidimensional, multifluid intermediate-scale experiments: Nonaqueous phase dissolution and enhanced remediation. Vadose Zone Journal 5:570-598) experiments related to aqueous dissolution and enhanced remediation were discussed. The experiments investigating flow behavior include infiltration and redistribution experiments with both light and dense NAPLs in homogeneous and heterogeneous porous medium systems. The techniques used for NAPL saturation mapping for intermediate-scale experiments include photon-attenuation methods such as gamma and X-ray techniques, and photographic methods such as the light reflection, light transmission, and multispectral image analysis techniques. Solute tracer methods used for detection and quantification of NAPL in the subsurface are primarily limited to variations of techniques comparing the behavior of conservative and partitioning tracers. Besides a discussion of the experimental efforts, recommendations for future research at this laboratory scale are provided.

  1. Nanoparticles in practice for molecular-imaging applications: An overview.

    PubMed

    Padmanabhan, Parasuraman; Kumar, Ajay; Kumar, Sundramurthy; Chaudhary, Ravi Kumar; Gulyás, Balázs

    2016-09-01

    Nanoparticles (NPs) are playing a progressively more significant role in multimodal and multifunctional molecular imaging. Agents such as superparamagnetic iron oxide (SPIO), manganese oxide (MnO), gold NPs/nanorods, and quantum dots (QDs) possess specific properties, namely superparamagnetism, paramagnetism, surface plasmon resonance (SPR), and photoluminescence, respectively. These properties make them well suited to single/multi-modal and single/multi-functional molecular imaging. NPs generally have nanomolar to micromolar sensitivity and can be detected with standard imaging instrumentation. The distinctive characteristics of these NPs make them suitable for imaging, therapy, and drug delivery. Multifunctional nanoparticles (MNPs) can be produced either through modification of the shell or surface or by attaching an affinity ligand to the nanoparticles. They are utilized for targeted imaging by magnetic resonance imaging (MRI), single photon emission computed tomography (SPECT), positron emission tomography (PET), computed tomography (CT), photoacoustic imaging (PAI), two-photon or fluorescence imaging, and ultrasound. The toxicity of NPs is also an important concern, and toxic effects should be eliminated. First-generation NPs have been designed, developed, and tested in living subjects, and a few of them are already in clinical use. In the near future, molecular imaging will advance in multimodality and multifunctionality to detect diseases such as cancer, neurodegenerative diseases, cardiac diseases, inflammation, stroke, atherosclerosis, and many others in their early stages. In the current review, we discuss single/multifunctional nanoparticles along with molecular imaging modalities. The article reveals recent avenues for nanomaterials in multimodal and multifunctional molecular imaging through a review of the pertinent literature, emphasising the distinctive characteristics of nanomaterials that make them suitable for biomedical imaging, therapy, and drug delivery, and is intended to help readers plan, understand, and lead nanotechnology-related work. Copyright © 2016 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  2. [New aspects of complex chronic tinnitus. II: The lost silence: effects and psychotherapeutic possibilities in complex chronic tinnitus].

    PubMed

    Goebel, G; Keeser, W; Fichter, M; Rief, W

    1991-01-01

    "Complex tinnitus" is a diagnostic term denoting a disturbance pattern where the patient hears highly annoying and painful noises or sounds that do not originate from a recognisable external source and can be described only by the patient himself. It seems that the suffering mainly depends upon the extent to which the tinnitus is experienced as a phenomenon that is beyond control. Part I reports on an examination of the treatment success achieved with 28 consecutive patients who had been treated according to an integrative multimodal behavioural medicine concept. This resulted--despite continual loudness--in a decrease in the degree of unpleasantness of the tinnitus, by 17% (p less than 0.01) with corresponding normalisation of decisive symptom factors in Hopkins-Symptom-Check-List (SCL-90-R) and Freiburg Personality-Inventary (FPI-R). On the whole, 19 out of the total of 28 patients showed essential to marked improvement of the disturbance pattern. Part II presents a multidimensional tinnitus model and the essential psychotherapeutic focal points of a multimodal psychotherapy concept in complex chronic tinnitus, as well as the parallel phenomena in the chronic pain syndrome.

  3. Computer object segmentation by nonlinear image enhancement, multidimensional clustering, and geometrically constrained contour optimization

    NASA Astrophysics Data System (ADS)

    Bruynooghe, Michel M.

    1998-04-01

    In this paper, we present a robust method for automatic object detection and delineation in noisy complex images. The proposed procedure is a three-stage process that integrates image segmentation by multidimensional pixel clustering and geometrically constrained optimization of deformable contours. The first step is to enhance the original image by nonlinear unsharp masking. The second step is to segment the enhanced image by multidimensional pixel clustering, using our reducible-neighborhoods clustering algorithm, which has a favorable theoretical worst-case complexity. Candidate objects are then extracted and initially delineated by an optimized region-merging algorithm based on ascendant hierarchical clustering with contiguity constraints and on the maximization of average contour gradients. The third step is to optimize the delineation of the previously extracted and initially delineated objects. Deformable object contours are modeled by cubic splines, and an affine invariant is used to control the undesired formation of cusps and loops. Nonlinear constrained optimization is used to maximize the external energy, which avoids the difficult and non-reproducible choice of regularization parameters required by classical snake models. The proposed method has been applied successfully to the detection of fine and subtle microcalcifications in X-ray mammographic images, to defect detection by moiré image analysis, and to the analysis of microrugosities of thin metallic films. A later implementation of the proposed method on a digital signal processor associated with a vector coprocessor would allow the design of a real-time object detection and delineation system for applications in medical imaging and industrial computer vision.
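
    As a rough illustration of the enhancement stage described above, the sketch below applies a nonlinear unsharp-masking step with NumPy/SciPy; the Gaussian width, gain, and tanh weighting are illustrative assumptions rather than the authors' exact formulation.

    ```python
    # Nonlinear unsharp masking: add back a nonlinearly weighted high-pass residual.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def nonlinear_unsharp_mask(image, sigma=2.0, gain=1.5):
        blurred = gaussian_filter(image.astype(float), sigma=sigma)
        residual = image - blurred                     # high-frequency detail
        # Nonlinear weighting: emphasise weak detail, saturate strong edges.
        weighted = np.tanh(gain * residual / (residual.std() + 1e-8))
        enhanced = image + residual.std() * weighted
        return np.clip(enhanced, image.min(), image.max())

    # Example on a synthetic noisy image:
    img = np.random.rand(128, 128)
    out = nonlinear_unsharp_mask(img)
    ```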

  4. Dermatological Feasibility of Multimodal Facial Color Imaging Modality for Cross-Evaluation of Facial Actinic Keratosis

    PubMed Central

    Bae, Youngwoo; Son, Taeyoon; Nelson, J. Stuart; Kim, Jae-Hong; Choi, Eung Ho; Jung, Byungjo

    2010-01-01

    Background/Purpose Digital color image analysis is currently considered as a routine procedure in dermatology. In our previous study, a multimodal facial color imaging modality (MFCIM), which provides a conventional, parallel- and cross-polarization, and fluorescent color image, was introduced for objective evaluation of various facial skin lesions. This study introduces a commercial version of MFCIM, DermaVision-PRO, for routine clinical use in dermatology and demonstrates its dermatological feasibility for cross-evaluation of skin lesions. Methods/Results Sample images of subjects with actinic keratosis or non-melanoma skin cancers were obtained at four different imaging modes. Various image analysis methods were applied to cross-evaluate the skin lesion and, finally, extract valuable diagnostic information. DermaVision-PRO is potentially a useful tool as an objective macroscopic imaging modality for quick prescreening and cross-evaluation of facial skin lesions. Conclusion DermaVision-PRO may be utilized as a useful tool for cross-evaluation of widely distributed facial skin lesions and an efficient database management of patient information. PMID:20923462

  5. Multi-modality imaging of tumor phenotype and response to therapy

    NASA Astrophysics Data System (ADS)

    Nyflot, Matthew J.

    2011-12-01

    Imaging and radiation oncology have historically been closely linked. However, the vast majority of techniques used in the clinic involve anatomical imaging. Biological imaging offers the potential for innovation in the areas of cancer diagnosis and staging, radiotherapy target definition, and treatment response assessment. Some relevant imaging techniques are FDG PET (for imaging cellular metabolism), FLT PET (proliferation), CuATSM PET (hypoxia), and contrast-enhanced CT (vasculature and perfusion). Here, a technique for quantitative spatial correlation of tumor phenotype is presented for FDG PET, FLT PET, and CuATSM PET images. Additionally, multimodality imaging of treatment response with FLT PET, CuATSM, and dynamic contrast-enhanced CT is presented, in a trial of patients receiving an antiangiogenic agent (Avastin) combined with cisplatin and radiotherapy. Results are also presented for translational applications in animal models, including quantitative assessment of proliferative response to cetuximab with FLT PET and quantification of vascular volume with a blood-pool contrast agent (Fenestra). These techniques have clear applications to radiobiological research and optimized treatment strategies, and may eventually be used for personalized therapy for patients.

  6. Imaging nanoscale lattice variations by machine learning of x-ray diffraction microscopy data

    DOE PAGES

    Laanait, Nouamane; Zhang, Zhan; Schlepütz, Christian M.

    2016-08-09

    In this paper, we present a novel methodology based on machine learning to extract lattice variations in crystalline materials, at the nanoscale, from an x-ray Bragg diffraction-based imaging technique. By employing a full-field microscopy setup, we capture real space images of materials, with imaging contrast determined solely by the x-ray diffracted signal. The data sets that emanate from this imaging technique are a hybrid of real space information (image spatial support) and reciprocal lattice space information (image contrast), and are intrinsically multidimensional (5D). By a judicious application of established unsupervised machine learning techniques and multivariate analysis to this multidimensional data cube, we show how to extract features that can be ascribed physical interpretations in terms of common structural distortions, such as lattice tilts and dislocation arrays. Finally, we demonstrate this 'big data' approach to x-ray diffraction microscopy by identifying structural defects present in an epitaxial ferroelectric thin-film of lead zirconate titanate.
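
    To make the "unfold and apply unsupervised learning" step above concrete, the sketch below flattens a hypothetical real-space-by-signature data cube and clusters pixels by their reduced signatures; the array shapes, component count, and cluster count are illustrative assumptions, not values from the paper.

    ```python
    # Unfold a multidimensional imaging data cube and cluster pixel signatures.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    # Hypothetical hybrid cube: (y, x) real-space pixels, each carrying a
    # reciprocal-space/angular signature of length n_features.
    ny, nx, n_features = 64, 64, 200
    cube = np.random.rand(ny, nx, n_features)

    # Unfold to a (pixels x features) matrix for multivariate analysis.
    X = cube.reshape(ny * nx, n_features)

    # Reduce dimensionality, then group pixels with similar signatures.
    scores = PCA(n_components=5).fit_transform(X)
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)

    # Map cluster labels back to real space to visualise candidate structural domains.
    label_map = labels.reshape(ny, nx)
    ```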

  7. Imaging nanoscale lattice variations by machine learning of x-ray diffraction microscopy data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laanait, Nouamane; Zhang, Zhan; Schlepütz, Christian M.

    In this paper, we present a novel methodology based on machine learning to extract lattice variations in crystalline materials, at the nanoscale, from an x-ray Bragg diffraction-based imaging technique. By employing a full-field microscopy setup, we capture real space images of materials, with imaging contrast determined solely by the x-ray diffracted signal. The data sets that emanate from this imaging technique are a hybrid of real space information (image spatial support) and reciprocal lattice space information (image contrast), and are intrinsically multidimensional (5D). By a judicious application of established unsupervised machine learning techniques and multivariate analysis to this multidimensional data cube, we show how to extract features that can be ascribed physical interpretations in terms of common structural distortions, such as lattice tilts and dislocation arrays. Finally, we demonstrate this 'big data' approach to x-ray diffraction microscopy by identifying structural defects present in an epitaxial ferroelectric thin-film of lead zirconate titanate.

  8. Coding Strategies and Implementations of Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han

    This dissertation studies coding strategies for computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager, and increasing sensitivity in any one dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to extract more bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information onto a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining temporal resolution. The experimental results show that appropriate coding strategies can increase sensing capacity by hundreds of times. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information from a noisy environment. Accomplishing the same task with engineering usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate sound localization and selective attention. This research investigates and optimizes the sensing capacity and the spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows localization of multiple speakers in both stationary and dynamic auditory scenes, and distinguishes mixed conversations from independent sources with a high audio recognition rate.
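
    For reference, the multiplexed measurement and reconstruction described above is conventionally written in the standard compressive-sensing form below (textbook notation, not notation taken from the dissertation): far fewer coded measurements than unknowns are acquired, and the signal is recovered by sparsity-regularized inversion.

    ```latex
    % Measurement model: M coded measurements of an N-dimensional signal, M << N
    y = \Phi x + n, \qquad x = \Psi s \quad (s \ \text{sparse})
    % Sparsity-regularized recovery
    \hat{s} = \operatorname*{arg\,min}_{s} \ \tfrac{1}{2}\,\lVert y - \Phi \Psi s \rVert_2^{2} + \lambda\,\lVert s \rVert_1
    ```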

  9. VoxelStats: A MATLAB Package for Multi-Modal Voxel-Wise Brain Image Analysis.

    PubMed

    Mathotaarachchi, Sulantha; Wang, Seqian; Shin, Monica; Pascoal, Tharick A; Benedet, Andrea L; Kang, Min Su; Beaudry, Thomas; Fonov, Vladimir S; Gauthier, Serge; Labbe, Aurélie; Rosa-Neto, Pedro

    2016-01-01

    In healthy individuals, behavioral outcomes are highly associated with variability in regional brain structure or neurochemical phenotypes. Similarly, in the context of neurodegenerative conditions, neuroimaging reveals that cognitive decline is linked to the magnitude of atrophy, neurochemical decline, or concentrations of abnormal protein aggregates across brain regions. However, modeling the effects of multiple regional abnormalities as determinants of cognitive decline at the voxel level remains largely unexplored by multimodal imaging research, given the high computational cost of estimating regression models for every single voxel from various imaging modalities. VoxelStats is a voxel-wise computational framework developed to overcome these computational limitations and to perform statistical operations on multiple scalar variables and imaging modalities at the voxel level. The VoxelStats package has been developed in MATLAB® and supports imaging formats such as NIfTI-1, ANALYZE, and MINC v2. Prebuilt functions in VoxelStats enable the user to perform voxel-wise general and generalized linear models and mixed-effects models with multiple volumetric covariates. Importantly, VoxelStats can recognize scalar values or image volumes as response variables and can accommodate volumetric statistical covariates as well as their interaction effects with other variables. Furthermore, this package includes built-in functionality to perform voxel-wise receiver operating characteristic analysis and paired and unpaired group contrast analysis. Validation of VoxelStats was conducted by comparing its linear regression functionality with existing toolboxes such as glim_image and RMINC. The validation results were identical to those of existing methods, and the additional functionality was demonstrated by generating feature case assessments (t-statistics, odds ratio, and true positive rate maps). In summary, VoxelStats expands the current methods for multimodal imaging analysis by allowing the estimation of advanced regional association metrics at the voxel level.
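
    To illustrate the kind of voxel-wise regression the package automates, the sketch below fits an ordinary least-squares model at every voxel with a scalar response and a volumetric covariate; it is a minimal NumPy illustration under assumed array shapes, not the VoxelStats API.

    ```python
    # Minimal voxel-wise GLM illustration (NumPy only); not the VoxelStats API.
    import numpy as np

    rng = np.random.default_rng(0)
    n_subjects, n_voxels = 40, 1000
    cognition = rng.normal(size=n_subjects)             # scalar response
    age = rng.normal(size=n_subjects)                    # scalar covariate
    atrophy = rng.normal(size=(n_subjects, n_voxels))    # volumetric covariate

    t_map = np.empty(n_voxels)
    for v in range(n_voxels):
        # Design matrix: intercept, age, and this voxel's imaging value.
        X = np.column_stack([np.ones(n_subjects), age, atrophy[:, v]])
        beta, *_ = np.linalg.lstsq(X, cognition, rcond=None)
        resid = cognition - X @ beta
        dof = n_subjects - X.shape[1]
        sigma2 = resid @ resid / dof
        se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[2, 2])
        t_map[v] = beta[2] / se      # t-statistic for the imaging covariate

    # t_map can be reshaped back into a volume for inspection or thresholding.
    ```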

  10. A graph-based approach for the retrieval of multi-modality medical images.

    PubMed

    Kumar, Ashnil; Kim, Jinman; Wen, Lingfeng; Fulham, Michael; Feng, Dagan

    2014-02-01

    In this paper, we address the retrieval of multi-modality medical volumes, which consist of two different imaging modalities, acquired sequentially, from the same scanner. One such example, positron emission tomography and computed tomography (PET-CT), provides physicians with complementary functional and anatomical features as well as spatial relationships and has led to improved cancer diagnosis, localisation, and staging. The challenge of multi-modality volume retrieval for cancer patients lies in representing the complementary geometric and topologic attributes between tumours and organs. These attributes and relationships, which are used for tumour staging and classification, can be formulated as a graph. It has been demonstrated that graph-based methods have high accuracy for retrieval by spatial similarity. However, naïvely representing all relationships on a complete graph obscures the structure of the tumour-anatomy relationships. We propose a new graph structure derived from complete graphs that structurally constrains the edges connected to tumour vertices based upon the spatial proximity of tumours and organs. This enables retrieval on the basis of tumour localisation. We also present a similarity matching algorithm that accounts for different feature sets for graph elements from different imaging modalities. Our method emphasises the relationships between a tumour and related organs, while still modelling patient-specific anatomical variations. Constraining tumours to related anatomical structures improves the discrimination potential of graphs, making it easier to retrieve similar images based on tumour location. We evaluated our retrieval methodology on a dataset of clinical PET-CT volumes. Our results showed that our method enabled the retrieval of multi-modality images using spatial features. Our graph-based retrieval algorithm achieved a higher precision than several other retrieval techniques: gray-level histograms as well as state-of-the-art methods such as visual words using the scale-invariant feature transform (SIFT) and relational matrices representing the spatial arrangements of objects. Copyright © 2013 Elsevier B.V. All rights reserved.
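
    A toy version of the constrained tumour-anatomy graph described above is sketched below with NetworkX; the organ labels, centroids, and proximity threshold are fabricated purely for illustration.

    ```python
    # Tumour-centred graph whose tumour edges are restricted to nearby organs.
    import networkx as nx

    organs = {"lung_L": (10, 42, 30), "lung_R": (60, 40, 31), "liver": (55, 80, 20)}
    tumours = {"tumour_1": (58, 45, 30)}
    PROXIMITY_MM = 25.0

    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

    G = nx.Graph()
    for name, c in {**organs, **tumours}.items():
        G.add_node(name, centroid=c, kind="tumour" if name in tumours else "organ")

    # Keep only tumour-organ edges within the proximity threshold, so retrieval
    # emphasises tumour localisation rather than a fully connected graph.
    for t, tc in tumours.items():
        for o, oc in organs.items():
            d = dist(tc, oc)
            if d <= PROXIMITY_MM:
                G.add_edge(t, o, weight=d)

    print(sorted(G.edges(data=True)))
    ```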

  11. Stereoscopic Integrated Imaging Goggles for Multimodal Intraoperative Image Guidance

    PubMed Central

    Mela, Christopher A.; Patterson, Carrie; Thompson, William K.; Papay, Francis; Liu, Yang

    2015-01-01

    We have developed novel stereoscopic wearable multimodal intraoperative imaging and display systems entitled Integrated Imaging Goggles for guiding surgeries. The prototype systems offer real time stereoscopic fluorescence imaging and color reflectance imaging capacity, along with in vivo handheld microscopy and ultrasound imaging. With the Integrated Imaging Goggle, both wide-field fluorescence imaging and in vivo microscopy are provided. The real time ultrasound images can also be presented in the goggle display. Furthermore, real time goggle-to-goggle stereoscopic video sharing is demonstrated, which can greatly facilitate telemedicine. In this paper, the prototype systems are described, characterized and tested in surgeries in biological tissues ex vivo. We have found that the system can detect fluorescent targets with as low as 60 nM indocyanine green and can resolve structures down to 0.25 mm with large FOV stereoscopic imaging. The system has successfully guided simulated cancer surgeries in chicken. The Integrated Imaging Goggle is novel in 4 aspects: it is (a) the first wearable stereoscopic wide-field intraoperative fluorescence imaging and display system, (b) the first wearable system offering both large FOV and microscopic imaging simultaneously, (c) the first wearable system that offers both ultrasound imaging and fluorescence imaging capacities, and (d) the first demonstration of goggle-to-goggle communication to share stereoscopic views for medical guidance. PMID:26529249

  12. Multi-modal spectroscopic imaging with synchrotron light to study mechanisms of brain disease

    NASA Astrophysics Data System (ADS)

    Summers, Kelly L.; Fimognari, Nicholas; Hollings, Ashley; Kiernan, Mitchell; Lam, Virginie; Tidy, Rebecca J.; Takechi, Ryu; George, Graham N.; Pickering, Ingrid J.; Mamo, John C.; Harris, Hugh H.; Hackett, Mark J.

    2017-04-01

    The international health care costs associated with Alzheimer's disease (AD) and dementia have been predicted to reach $2 trillion USD by 2030. As such, there is urgent need to develop new treatments and diagnostic methods to stem an international health crisis. A major limitation to therapy and diagnostic development is the lack of complete understanding about the disease mechanisms. Spectroscopic methods at synchrotron light sources, such as FTIR, XRF, and XAS, offer a "multi-modal imaging platform" to reveal a wealth of important biochemical information in situ within ex vivo tissue sections, to increase our understanding of disease mechanisms.

  13. Multimodal biophotonic workstation for live cell analysis.

    PubMed

    Esseling, Michael; Kemper, Björn; Antkowiak, Maciej; Stevenson, David J; Chaudet, Lionel; Neil, Mark A A; French, Paul W; von Bally, Gert; Dholakia, Kishan; Denz, Cornelia

    2012-01-01

    A reliable description and quantification of the complex physiology and reactions of living cells requires a multimodal analysis with various measurement techniques. We have investigated the integration of different techniques into a biophotonic workstation that can provide biological researchers with these capabilities. The combination of a micromanipulation tool with three different imaging principles is accomplished in a single inverted microscope which makes the results from all the techniques directly comparable. Chinese Hamster Ovary (CHO) cells were manipulated by optical tweezers while the feedback was directly analyzed by fluorescence lifetime imaging, digital holographic microscopy and dynamic phase-contrast microscopy. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Multimodal optical coherence tomography for in vivo imaging of brain tissue structure and microvascular network at glioblastoma

    NASA Astrophysics Data System (ADS)

    Yashin, Konstantin S.; Kiseleva, Elena B.; Gubarkova, Ekaterina V.; Matveev, Lev A.; Karabut, Maria M.; Elagin, Vadim V.; Sirotkina, Marina A.; Medyanik, Igor A.; Kravets, L. Y.; Gladkova, Natalia D.

    2017-02-01

    In the case of infiltrative brain tumors, the surgeon faces difficulties in determining their boundaries to achieve total resection. The aim of this investigation was to evaluate the performance of multimodal OCT (MM OCT) for the differential diagnosis of normal brain tissue and glioma using an experimental model of glioblastoma. The spectral-domain OCT device used for the study simultaneously provides two modes: cross-polarization OCT and microangiographic OCT. Comparative analysis of images from both OCT modalities in tumorous and normal brain tissue areas, together with histologic correlation, shows clear differences between the two tissue types in their morphological and microvascular features.

  15. Results from the commissioning of a multi-modal endoscope for ultrasound and time of flight PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bugalho, Ricardo

    2015-07-01

    The EndoTOFPET-US collaboration has developed a multi-modal imaging system combining Ultrasound with Time-of-Flight Positron Emission Tomography into an endoscopic imaging device. The objective of the project is to obtain a coincidence time resolution of about 200 ps FWHM and to achieve about 1 mm spatial resolution of the PET system, while integrating all the components in a very compact detector suitable for endoscopic use. This scanner aims to be exploited for diagnostic and surgical oncology, as well as being instrumental in the clinical test of new biomarkers especially targeted for prostate and pancreatic cancer. (authors)

  16. Multimodality imaging to plan and guide transcatheter tricuspid valve interventions.

    PubMed

    Prihadi, Edgard A; Delgado, Victoria; Bax, Jeroen J

    2017-10-01

    Tricuspid regurgitation (TR) is a highly prevalent valvular heart disease. The natural history of untreated significant TR portends an unfavorable outcome, but only a minority of patients is currently referred for surgical treatment. Organic TR (caused by a primary abnormality of the leaflets) is relatively infrequent, whereas secondary or functional TR (caused by dilatation of the tricuspid annulus, right ventricle [RV] and right atrium) is the predominant mechanism. The success of transcatheter therapies for left valvular heart disease over the last decade has fueled similar development of novel transcatheter devices for the treatment of TR. Each of these devices, currently being tested in several clinical trials, has specific requirements that define procedural suitability. In addition, an accurate evaluation of the complex tricuspid anatomy, RV geometry and their relationship with the surrounding structures is mandatory. Therefore, accurate pre-procedural assessment using multimodality imaging techniques will undoubtedly play a pivotal role in achieving procedural success and safety. This review article provides a comprehensive overview of the etiology and different mechanisms of TR, and highlights the role of multimodality imaging techniques in the assessment of TR severity, RV dysfunction and fulfilment of device-specific selection criteria.

  17. Use of anomalous thermal imaging effects for multi-mode systems control during crystal growth

    NASA Technical Reports Server (NTRS)

    Wargo, Michael J.

    1989-01-01

    Real-time image processing techniques, combined with multitasking computational capabilities, are used to establish thermal imaging as a multimode sensor for systems control during crystal growth. Although certain regions of the high-temperature scene are presently unusable for quantitative determination of temperature, the anomalous information obtained there is found to serve as a potentially low-noise source of other important systems control output. Using this approach, the light emission/reflection characteristics of the crystal, meniscus, and melt system are used to infer the crystal diameter, and a linear regression algorithm is employed to determine the local diameter trend. These data are utilized as input for closed-loop control of crystal shape. No performance penalty in thermal imaging speed is paid for this added functionality. The approach to secondary (diameter) sensor design and the systems control structure is discussed. Preliminary experimental results are presented.

  18. Advanced multimodality imaging of an anomalous vessel between the ascending aorta and main pulmonary artery in a dog

    PubMed Central

    Markovic, Lauren E.; Kellihan, Heidi B.; Roldán-Alzate, Alejandro; Drees, Randi; Bjorling, Dale E.; Francois, Chris J.

    2014-01-01

    A 1-year-old male German shorthaired pointer was referred for evaluation of tachypnea and hemoptysis. A grade VI/VI left basilar continuous murmur was auscultated. Multimodality imaging consisting of thoracic radiographs, transthoracic and transesophageal echocardiography, fluoroscopy-guided selective angiography, computed tomography angiography (CTA), and magnetic resonance angiography (MRA) was performed on this patient. The defect included a left-to-right shunting anomalous vessel between the ascending aorta and main pulmonary artery, along with a dissecting aneurysm of the main and right pulmonary artery. An MRA postprocessing technique (PC-VIPR) was used to allow for high resolution angiographic images and further assessment of the patient’s hemodynamics prior to surgical correction. This case report describes the clinical course of a canine patient with a rare form of congenital cardiac disease, and the multiple imaging modalities that were used to aid in diagnosis and treatment. PMID:24485987

  19. Multi-Modal Ultra-Widefield Imaging Features in Waardenburg Syndrome

    PubMed Central

    Choudhry, Netan; Rao, Rajesh C.

    2015-01-01

    Background: Waardenburg syndrome is characterized by a group of features including telecanthus, a broad nasal root, synophrys of the eyebrows, piebaldism, heterochromia irides, and deaf-mutism. Hypopigmentation of the choroid is a unique feature of this condition, examined in this report with multi-modal ultra-widefield imaging. Material/Methods: Report of a single case. Results: Bilateral symmetric choroidal hypopigmentation was observed, with hypoautofluorescence in the region of hypopigmentation. Fluorescein angiography revealed normal vasculature; however, a thickened choroid was seen on enhanced-depth imaging spectral-domain OCT (EDI SD-OCT). Conclusions: Choroidal hypopigmentation is a unique feature of Waardenburg syndrome that can be visualized with ultra-widefield fundus autofluorescence. The choroid may also be thickened in this condition, and its thickness can be measured with EDI SD-OCT. PMID:26114849

  20. Multimodality Imaging of Myocardial Injury and Remodeling

    PubMed Central

    Kramer, Christopher M.; Sinusas, Albert J.; Sosnovik, David E.; French, Brent A.; Bengel, Frank M.

    2011-01-01

    Advances in cardiovascular molecular imaging have come at a rapid pace over the last several years. Multiple approaches have been taken to better understand the structural, molecular, and cellular events that underlie the progression from myocardial injury to myocardial infarction (MI) and, ultimately, to congestive heart failure. Multimodality molecular imaging including SPECT, PET, cardiac MRI, and optical approaches is offering new insights into the pathophysiology of MI and left ventricular remodeling in small-animal models. Targets that are being probed include, among others, angiotensin receptors, matrix metalloproteinases, integrins, apoptosis, macrophages, and sympathetic innervation. It is only a matter of time before these advances are applied in the clinical setting to improve post-MI prognostication and identify appropriate therapies in patients to prevent the onset of congestive heart failure. PMID:20395347

  1. Multimodal medical information retrieval with unsupervised rank fusion.

    PubMed

    Mourão, André; Martins, Flávio; Magalhães, João

    2015-01-01

    Modern medical information retrieval systems are essential for managing the overwhelming quantities of clinical data. These systems empower health care experts in the diagnosis of patients and play an important role in the clinical decision process. However, the ever-growing heterogeneous information generated in medical environments poses several challenges for retrieval systems. We propose a medical information retrieval system with support for multimodal medical case-based retrieval. The system supports medical information discovery by providing multimodal search, through a novel data fusion algorithm, and term suggestions from a medical thesaurus. Our search system compared favorably with other systems in ImageCLEF Medical 2013. Copyright © 2014 Elsevier Ltd. All rights reserved.
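
    The record above mentions a novel unsupervised data fusion algorithm without detailing it; as a generic illustration of unsupervised rank fusion (not the paper's specific method), the sketch below implements standard reciprocal rank fusion over two per-modality rankings.

    ```python
    # Generic reciprocal rank fusion over ranked lists from different modalities.
    from collections import defaultdict

    def reciprocal_rank_fusion(ranked_lists, k=60):
        """ranked_lists: iterable of lists of document ids, best first."""
        scores = defaultdict(float)
        for ranking in ranked_lists:
            for rank, doc in enumerate(ranking, start=1):
                scores[doc] += 1.0 / (k + rank)
        return sorted(scores, key=scores.get, reverse=True)

    # Example: fuse a text-based and an image-based ranking for one medical case.
    text_run = ["case_12", "case_07", "case_33"]
    image_run = ["case_07", "case_33", "case_12"]
    print(reciprocal_rank_fusion([text_run, image_run]))
    ```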

  2. Note: Broadly tunable all-fiber ytterbium laser with 0.05 nm spectral width based on multimode interference filter.

    PubMed

    Mukhopadhyay, Pranab K; Gupta, Pradeep K; Singh, Amarjeet; Sharma, Sunil K; Bindra, Kushvinder S; Oak, Shrikant M

    2014-05-01

    A multimode interference filter with narrow transmission bandwidth and large self-imaging wavelength interval is constructed and implemented in an ytterbium doped fiber laser in all-fiber format for broad wavelength tunability as well as narrow spectral width of the output beam. The peak transmission wavelength of the multimode interference filter was tuned with the help of a standard in-fiber polarization controller. With this simple mechanism more than 30 nm (1038 nm-1070 nm) tuning range is demonstrated. The spectral width of the output beam from the laser was measured to be 0.05 nm.

  3. Note: Broadly tunable all-fiber ytterbium laser with 0.05 nm spectral width based on multimode interference filter

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Pranab K.; Gupta, Pradeep K.; Singh, Amarjeet; Sharma, Sunil K.; Bindra, Kushvinder S.; Oak, Shrikant M.

    2014-05-01

    A multimode interference filter with narrow transmission bandwidth and large self-imaging wavelength interval is constructed and implemented in an ytterbium doped fiber laser in all-fiber format for broad wavelength tunability as well as narrow spectral width of the output beam. The peak transmission wavelength of the multimode interference filter was tuned with the help of a standard in-fiber polarization controller. With this simple mechanism more than 30 nm (1038 nm-1070 nm) tuning range is demonstrated. The spectral width of the output beam from the laser was measured to be 0.05 nm.

  4. Note: Broadly tunable all-fiber ytterbium laser with 0.05 nm spectral width based on multimode interference filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukhopadhyay, Pranab K., E-mail: pkm@rrcat.gov.in; Gupta, Pradeep K.; Singh, Amarjeet

    2014-05-15

    A multimode interference filter with narrow transmission bandwidth and large self-imaging wavelength interval is constructed and implemented in an ytterbium doped fiber laser in all-fiber format for broad wavelength tunability as well as narrow spectral width of the output beam. The peak transmission wavelength of the multimode interference filter was tuned with the help of a standard in-fiber polarization controller. With this simple mechanism more than 30 nm (1038 nm–1070 nm) tuning range is demonstrated. The spectral width of the output beam from the laser was measured to be 0.05 nm.
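
    For background on the self-imaging behaviour exploited in the three records above, standard multimode-interference theory (a textbook result, not an equation quoted from the note) relates the beat length of the two lowest-order modes to the distances at which the input field is re-imaged; the wavelength dependence of this beat length is what allows the peak transmission wavelength to be tuned.

    ```latex
    L_{\pi} = \frac{\pi}{\beta_{0} - \beta_{1}} \approx \frac{4\, n_{r}\, W_{e}^{2}}{3\,\lambda_{0}},
    \qquad z_{\mathrm{image}} = p\,(3 L_{\pi}), \quad p = 1, 2, \dots
    ```

    Here β0 and β1 are the propagation constants of the two lowest-order modes, n_r is the effective index, and W_e is the effective width of the multimode section.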

  5. A Multi-Dimensional Approach to Gradient Change in Phonological Acquisition: A Case Study of Disordered Speech Development

    ERIC Educational Resources Information Center

    Glaspey, Amy M.; MacLeod, Andrea A. N.

    2010-01-01

    The purpose of the current study is to document phonological change from a multidimensional perspective for a 3-year-old boy with phonological disorder by comparing three measures: (1) accuracy of consonant productions, (2) dynamic assessment, and (3) acoustic analysis. The methods included collecting a sample of the targets /s, [image omitted],…

  6. An Evaluation of Body Image Assessments in Hispanic College Women: The Multidimensional Body-Self Relations Questionnaire and the Appearance Schemas Inventory-Revised

    ERIC Educational Resources Information Center

    Smith, Ashlea R.; Davenport, Becky R.

    2012-01-01

    The authors evaluated the utility of the Multidimensional Body-Self Relations Questionnaire (MBSRQ; Brown, Cash, & Mikulka, 1990) and the Appearance Schemas Inventory-Revised (ASI-R; Cash, Melnyk, & Hrabosky, 2004) by administering the instruments to Hispanic female college students. Results indicated that the means of the MBSRQ and the…

  7. Multimodality imaging of reporter gene expression using a novel fusion vector in living cells and animals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gambhir, Sanjiv; Pritha, Ray

    Novel double and triple fusion reporter gene constructs harboring distinct imagable reporter genes are provided, as well as applications for the use of such double and triple fusion constructs in living cells and in living animals using distinct imaging technologies.

  8. Multimodality imaging of reporter gene expression using a novel fusion vector in living cells and animals

    DOEpatents

    Gambhir, Sanjiv; Pritha, Ray

    2015-07-14

    Novel double and triple fusion reporter gene constructs harboring distinct imagable reporter genes are provided, as well as applications for the use of such double and triple fusion constructs in living cells and in living animals using distinct imaging technologies.

  9. Multimodal Imaging in Diabetic Macular Edema.

    PubMed

    Acón, Dhariana; Wu, Lihteh

    2018-01-01

    Throughout ophthalmic history it has been shown that progress has gone hand in hand with technological breakthroughs. In the past, fluorescein angiography and fundus photographs were the most commonly used imaging modalities in the management of diabetic macular edema (DME). Today, despite the moderate correlation between macular thickness and functional outcomes, spectral domain optical coherence tomography (SD-OCT) has become the DME workhorse in clinical practice. Several SD-OCT biomarkers have been looked at including presence of epiretinal membrane, vitreomacular adhesion, disorganization of the inner retinal layers, central macular thickness, integrity of the ellipsoid layer, and subretinal fluid, among others. Emerging imaging modalities include fundus autofluorescence, macular pigment optical density, fluorescence lifetime imaging ophthalmoscopy, OCT angiography, and adaptive optics. Technological advances in imaging of the posterior segment of the eye have enabled ophthalmologists to develop hypotheses about pathological mechanisms of disease, monitor disease progression, and assess response to treatment. Spectral domain OCT is the most commonly performed imaging modality in the management of DME. However, reliable biomarkers have yet to be identified. Machine learning may provide treatment algorithms based on multimodal imaging. Copyright 2018 Asia-Pacific Academy of Ophthalmology.

  10. A Review of Intravascular Ultrasound–Based Multimodal Intravascular Imaging: The Synergistic Approach to Characterizing Vulnerable Plaques

    PubMed Central

    Ma, Teng; Zhou, Bill; Hsiai, Tzung K.; Shung, K. Kirk

    2015-01-01

    Catheter-based intravascular imaging modalities are being developed to visualize pathologies in the coronary arteries, such as the high-risk vulnerable atherosclerotic plaques known as thin-cap fibroatheromas, to guide therapeutic strategies aimed at preventing heart attacks. Mounting evidence has shown that three distinctive histopathological features (a thin fibrous cap, a lipid-rich necrotic core, and numerous infiltrating macrophages) are key markers of increased vulnerability in atherosclerotic plaques. To visualize these changes, most catheter-based imaging modalities use intravascular ultrasound (IVUS) as the technical foundation and integrate emerging intravascular imaging techniques to enhance the characterization of vulnerable plaques. However, no current imaging technology is an unequivocal "gold standard" for the diagnosis of vulnerable atherosclerotic plaques. Each intravascular imaging technology possesses its own unique features that yield valuable information, although it is encumbered by inherent limitations not seen in other modalities. In this context, the aim of this review is to discuss current scientific innovations, technical challenges, and prospective strategies in the development of IVUS-based multi-modality intravascular imaging systems aimed at assessing atherosclerotic plaque vulnerability. PMID:26400676

  11. Integration of Multi-Modal Biomedical Data to Predict Cancer Grade and Patient Survival.

    PubMed

    Phan, John H; Hoffman, Ryan; Kothari, Sonal; Wu, Po-Yen; Wang, May D

    2016-02-01

    The Big Data era in Biomedical research has resulted in large-cohort data repositories such as The Cancer Genome Atlas (TCGA). These repositories routinely contain hundreds of matched patient samples for genomic, proteomic, imaging, and clinical data modalities, enabling holistic and multi-modal integrative analysis of human disease. Using TCGA renal and ovarian cancer data, we conducted a novel investigation of multi-modal data integration by combining histopathological image and RNA-seq data. We compared the performances of two integrative prediction methods: majority vote and stacked generalization. Results indicate that integration of multiple data modalities improves prediction of cancer grade and outcome. Specifically, stacked generalization, a method that integrates multiple data modalities to produce a single prediction result, outperforms both single-data-modality prediction and majority vote. Moreover, stacked generalization reveals the contribution of each data modality (and specific features within each data modality) to the final prediction result and may provide biological insights to explain prediction performance.
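
    As a schematic of the stacked-generalization strategy described above, the sketch below trains one base learner per synthetic "modality" and a logistic-regression meta-learner on their cross-validated predictions with scikit-learn; the feature counts and estimators are illustrative assumptions, not the study's actual pipeline.

    ```python
    # Stacked generalization over two synthetic modalities; illustrative only.
    import numpy as np
    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 200
    X_image = rng.normal(size=(n, 30))     # stand-in for histopathology features
    X_rna = rng.normal(size=(n, 100))      # stand-in for RNA-seq features
    y = rng.integers(0, 2, size=n)         # stand-in for tumor grade
    X = np.hstack([X_image, X_rna])

    image_cols = list(range(30))
    rna_cols = list(range(30, 130))

    # Each base learner sees only its own modality's columns.
    image_clf = make_pipeline(
        ColumnTransformer([("img", StandardScaler(), image_cols)]),
        LogisticRegression(max_iter=1000))
    rna_clf = make_pipeline(
        ColumnTransformer([("rna", StandardScaler(), rna_cols)]),
        RandomForestClassifier(n_estimators=200, random_state=0))

    # The meta-learner integrates the base predictions into a single result.
    stack = StackingClassifier(
        estimators=[("image", image_clf), ("rna", rna_clf)],
        final_estimator=LogisticRegression(),
        cv=5)
    stack.fit(X, y)
    print("training accuracy:", stack.score(X, y))
    ```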

  12. Versatile quantitative phase imaging system applied to high-speed, low noise and multimodal imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Federici, Antoine; Aknoun, Sherazade; Savatier, Julien; Wattellier, Benoit F.

    2017-02-01

    Quadriwave lateral shearing interferometry (QWLSI) is a well-established quantitative phase imaging (QPI) technique based on the analysis of the interference pattern of four diffraction orders produced by an optical grating set in front of an array detector [1]. As a QPI modality, it is a non-invasive imaging technique that measures the optical path difference (OPD) of semi-transparent samples. We present a system enabling QWLSI with high-performance sCMOS cameras [2] and apply it to high-speed, low-noise, and multimodal imaging. This modified QWLSI system contains a versatile optomechanical device that images the optical grating near the detector plane; the device can be coupled with any kind of camera by varying its magnification. In this paper, we study the use of an sCMOS Zyla 5.5 camera from Andor with our modified QWLSI system. We present high-speed live-cell imaging, at frame rates up to 200 Hz, to follow fast intracellular motions while measuring the quantitative phase information. The structural and density information extracted from the OPD signal is complementary to the specific and localized fluorescence signal [2]. In addition, QPI detects cells even when the fluorophore is not expressed, which is very useful for following protein expression over time. Combining the 10 µm spatial pixel resolution of our modified QWLSI with the high sensitivity of the Zyla 5.5, which enables high-quality fluorescence imaging, we have carried out multimodal imaging revealing fine cell structures, such as actin filaments, merged with the morphological information of the phase. References: [1] P. Bon, G. Maucort, B. Wattellier, and S. Monneret, "Quadriwave lateral shearing interferometry for quantitative phase microscopy of living cells," Opt. Express, vol. 17, pp. 13080-13094, 2009. [2] P. Bon, S. Lécart, E. Fort and S. Lévêque-Fort, "Fast label-free cytoskeletal network imaging in living mammalian cells," Biophysical Journal, 106(8), pp. 1588-1595, 2014.
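
    For reference, the quantity measured by a quantitative phase imaging modality such as QWLSI is conventionally related to the sample's refractive-index contrast as shown below; this is the standard textbook relation rather than an equation taken from the presentation.

    ```latex
    \mathrm{OPD}(x,y) = \int \big[\, n_{\mathrm{sample}}(x,y,z) - n_{\mathrm{medium}} \,\big]\, \mathrm{d}z,
    \qquad
    \varphi(x,y) = \frac{2\pi}{\lambda}\, \mathrm{OPD}(x,y)
    ```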

  13. Multifunctional Fe3O4@Au core/shell nanostars: a unique platform for multimode imaging and photothermal therapy of tumors

    PubMed Central

    Hu, Yong; Wang, Ruizhi; Wang, Shige; Ding, Ling; Li, Jingchao; Luo, Yu; Wang, Xiaolin; Shen, Mingwu; Shi, Xiangyang

    2016-01-01

    We herein report the development of multifunctional folic acid (FA)-targeted Fe3O4@Au nanostars (NSs) for targeted multi-mode magnetic resonance (MR)/computed tomography (CT)/photoacoustic (PA) imaging and photothermal therapy (PTT) of tumors. In the present work, citric acid-stabilized Fe3O4/Ag composite nanoparticles prepared by a mild reduction route were utilized as seeds and exposed to the Au growth solution to induce the formation of Fe3O4@Au core/shell NSs. Successive decoration with thiolated polyethyleneimine (PEI-SH) and with FA via a polyethylene glycol spacer, followed by acetylation of the residual PEI amines, yielded the multifunctional Fe3O4@Au NSs. The designed multifunctional NSs possess excellent colloidal stability, good cytocompatibility within a given concentration range, and specific recognition of cancer cells overexpressing FA receptors. Due to the co-existence of the Fe3O4 core and the star-shaped Au shell, the NSs can be used for MR and CT imaging of tumors, respectively. Likewise, the near-infrared plasmonic absorption feature also enables the NSs to be used for PA imaging and PTT of tumors. Our study clearly demonstrates a unique theranostic nanoplatform that can be used for high-performance multi-mode imaging-guided PTT of tumors, which may be extendable to theranostics of different diseases in translational medicine. PMID:27325015

  14. A practical salient region feature based 3D multi-modality registration method for medical images

    NASA Astrophysics Data System (ADS)

    Hahn, Dieter A.; Wolz, Gabriele; Sun, Yiyong; Hornegger, Joachim; Sauer, Frank; Kuwert, Torsten; Xu, Chenyang

    2006-03-01

    We present a novel representation of 3D salient region features and its integration into a hybrid rigid-body registration framework. We exploit the scale, translation, and rotation invariance of these intrinsic 3D features to estimate a transform between underlying mono- or multi-modal 3D medical images. Our method combines advantageous aspects of both feature- and intensity-based approaches and consists of three steps: automatic extraction of a set of 3D salient region features on each image, robust estimation of correspondences, and their sub-pixel-accurate refinement with outlier elimination. We propose a region-growing-based approach for the extraction of 3D salient region features, a solution to the problem of feature clustering, and a reduction of the correspondence search space complexity. Results of the developed algorithm are presented for both mono- and multi-modal intra-patient 3D image pairs (CT, PET, and SPECT) acquired for change detection, tumor localization, and time-based intra-patient studies. The accuracy of the method is clinically evaluated by a medical expert with an approach that measures the distance between a set of selected corresponding points comprising both anatomical and functional structures or lesion sites. This demonstrates the robustness of the proposed method to image overlap, missing information, and artefacts. We conclude by discussing potential medical applications and possibilities for integration into a non-rigid registration framework.

  15. Multimodal imaging of the disease progression of birdshot chorioretinopathy.

    PubMed

    Teussink, Michel M; Huis In Het Veld, Paulien I; de Vries, Lieuwe A M; Hoyng, Carel B; Klevering, B Jeroen; Theelen, Thomas

    2016-12-01

    To study outer retinal deterioration in relation to clinical disease activity in patients with birdshot chorioretinopathy using fundus autofluorescence and spectral-domain optical coherence tomography (OCT). A single-centre retrospective cohort study was carried out on 42 eyes of 21 patients with birdshot disease, using a multimodal imaging approach including fundus autofluorescence, OCT, fluorescein angiography and indocyanine green angiography in combination with a patient chart review. The patients' overall clinical activity of retinal vasculitis during the follow-up period was determined by periods of clinical activity as indicated by fluorescein angiography and associated treatment decisions. Image analysis was performed to examine the spatial correspondence between autofluorescence changes and disruption of the photoreceptor inner segment ellipsoid zone on OCT. Three common types of outer retinal lesions were observed in fovea-centred images of 43% of patients: circular patches of chorioretinal atrophy, ellipsoid zone disruption on OCT, and outer retinal atrophy on autofluorescence and OCT. There was good spatial correspondence between ellipsoid zone disruption and areas of diffuse hyper-autofluorescence outside the fovea. Interestingly, the ellipsoid zone disruption recovered in four out of seven patients upon intensified therapeutic immunosuppression. Most patients only developed peripapillary atrophy and occasional perivascular hypo-autofluorescence. A multimodal imaging approach with autofluorescence imaging and OCT may help to detect ellipsoid zone disruption in the central retina of patients with birdshot disease. Our results suggest that ellipsoid zone disruption may be related to both the activity and duration of retinal vasculitis, and could help to determine therapeutic success in birdshot disease. © 2016 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  16. Multimodal Chemical Imaging of Amyloid Plaque Polymorphism Reveals Aβ Aggregation Dependent Anionic Lipid Accumulations and Metabolism.

    PubMed

    Michno, Wojciech; Kaya, Ibrahim; Nyström, Sofie; Guerard, Laurent; Nilsson, K Peter R; Hammarström, Per; Blennow, Kaj; Zetterberg, Henrik; Hanrieder, Jörg

    2018-06-01

    Amyloid plaque formation constitutes one of the main pathological hallmarks of Alzheimer's disease (AD) and is suggested to be a critical factor driving disease pathogenesis. Interestingly, in patients who display amyloid pathology but remain cognitively normal, Aβ deposits are predominantly of diffuse morphology, suggesting that cored plaque formation is primarily associated with cognitive deterioration and AD pathogenesis. Little is known about the molecular mechanisms responsible for the conversion of monomeric Aβ into neurotoxic aggregates and the predominantly cored deposits observed in AD. The structural diversity among Aβ plaques, including cored/compact and diffuse, may be linked to their distinct Aβ profiles and other chemical species, including neuronal lipids. We developed a novel chemical imaging paradigm combining matrix assisted laser desorption/ionization imaging mass spectrometry (MALDI IMS) and fluorescent amyloid staining. This multimodal imaging approach was used to probe the lipid chemistry associated with structural plaque heterogeneity in transgenic AD mice (tgAPPSwe) and was correlated to Aβ profiles determined by subsequent laser microdissection and immunoprecipitation-mass spectrometry. Multivariate image analysis revealed an inverse localization of ceramides and their matching metabolites to diffuse and cored structures within single plaques, respectively. Moreover, phosphatidylinositols, which have been implicated in AD pathogenesis, were found to localise to the diffuse Aβ structures and correlate with Aβ1-42. Further, lysophospholipids implicated in neuroinflammation were increased in all Aβ deposits. The results support previous clinical findings on the importance of lipid disturbances in AD pathophysiology and associated sphingolipid processing. These data highlight the potential of multimodal imaging as a powerful technology to probe neuropathological mechanisms.

  17. A digital 3D atlas of the marmoset brain based on multi-modal MRI.

    PubMed

    Liu, Cirong; Ye, Frank Q; Yen, Cecil Chern-Chyi; Newman, John D; Glen, Daniel; Leopold, David A; Silva, Afonso C

    2018-04-01

    The common marmoset (Callithrix jacchus) is a New-World monkey of growing interest in neuroscience. Magnetic resonance imaging (MRI) is an essential tool to unveil the anatomical and functional organization of the marmoset brain. To facilitate identification of regions of interest, it is desirable to register MR images to an atlas of the brain. However, currently available atlases of the marmoset brain are mainly based on 2D histological data, which are difficult to apply to 3D imaging techniques. Here, we constructed a 3D digital atlas based on high-resolution ex-vivo MRI images, including magnetization transfer ratio (a T1-like contrast), T2w images, and multi-shell diffusion MRI. Based on the multi-modal MRI images, we manually delineated 54 cortical areas and 16 subcortical regions on one hemisphere of the brain (the core version). The 54 cortical areas were merged into 13 larger cortical regions according to their locations to yield a coarse version of the atlas, and also parcellated into 106 sub-regions using a connectivity-based parcellation method to produce a refined atlas. Finally, we compared the new atlas set with existing histology atlases and demonstrated its applications in connectome studies, and in resting state and stimulus-based fMRI. The atlas set has been integrated into the widely-distributed neuroimaging data analysis software AFNI and SUMA, providing a readily usable multi-modal template space with multi-level anatomical labels (including labels from the Paxinos atlas) that can facilitate various neuroimaging studies of marmosets. Published by Elsevier Inc.

  18. Multimodal MSI in Conjunction with Broad Coverage Spatially Resolved MS2 Increases Confidence in Both Molecular Identification and Localization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veličković, Dušan; Chu, Rosalie K.; Carrell, Alyssa A.

    One critical aspect of mass spectrometry imaging (MSI) is the need to confidently identify detected analytes. While orthogonal tandem MS (e.g., LC-MS2) experiments from sample extracts can assist in annotating ions, the spatial information about these molecules is lost. Accordingly, this could lead to misleading conclusions, especially in cases where isobaric species exhibit different distributions within a sample. In this Technical Note, we employed a multimodal imaging approach, using matrix assisted laser desorption/ionization (MALDI)-MSI and liquid extraction surface analysis (LESA)-MS2, to confidently annotate and localize a broad range of metabolites involved in a tripartite symbiosis system of moss, cyanobacteria, and fungus. We found that the combination of these two imaging modalities generated very congruent ion images, providing the link between the highly accurate structural information offered by LESA and the high spatial resolution attainable by MALDI. These results demonstrate how this combined methodology could be very useful in differentiating metabolite routes in complex systems.

  19. IMAGING WITH MULTIMODAL ADAPTIVE-OPTICS OPTICAL COHERENCE TOMOGRAPHY IN MULTIPLE EVANESCENT WHITE DOT SYNDROME: THE STRUCTURE AND FUNCTIONAL RELATIONSHIP.

    PubMed

    Labriola, Leanne T; Legarreta, Andrew D; Legarreta, John E; Nadler, Zach; Gallagher, Denise; Hammer, Daniel X; Ferguson, R Daniel; Iftimia, Nicusor; Wollstein, Gadi; Schuman, Joel S

    2016-01-01

    To elucidate the location of pathological changes in multiple evanescent white dot syndrome (MEWDS) with the use of multimodal adaptive optics (AO) imaging. A 5-year observational case study of a 24-year-old female with recurrent MEWDS. Full examination included history, Snellen chart visual acuity, pupil assessment, intraocular pressures, slit lamp evaluation, dilated fundoscopic exam, imaging with Fourier-domain optical coherence tomography (FD-OCT), blue-light fundus autofluorescence (FAF), fundus photography, fluorescein angiography, and adaptive-optics optical coherence tomography. Three distinct acute episodes of MEWDS occurred during the period of follow-up. Fourier-domain optical coherence tomography and adaptive-optics imaging showed disturbance in the photoreceptor outer segments (PR OS) in the posterior pole with each flare. The degree of disturbance at the photoreceptor level corresponded to the size and extent of the visual field changes. All findings were transient, with delineation of the photoreceptor recovery from the outer edges of the lesion inward. Hyperautofluorescence was seen during acute flares. An increase in choroidal thickness did occur with each active flare but resolved. Although changes in the choroid and RPE can be observed in MEWDS, Fourier-domain optical coherence tomography and multimodal adaptive optics imaging localized the visually significant changes seen in this disease to the level of the photoreceptors. These transient retinal changes specifically occur at the level of the inner segment ellipsoid and OS/RPE line. En face optical coherence tomography imaging provides a detailed, yet noninvasive method for following the convalescence of MEWDS and provides insight into the structural and functional relationship of this transient inflammatory retinal disease.

  20. The Cognitive Visualization System with the Dynamic Projection of Multidimensional Data

    NASA Astrophysics Data System (ADS)

    Gorohov, V.; Vitkovskiy, V.

    2008-08-01

    The phenomenon of cognitive computer graphics consists in generating special graphic representations on the screen that create visual images in the mind of the human operator. These images appear aesthetically attractive and thus stimulate the operator's visual imagination, which is closely related to the intuitive mechanisms of thinking. The essence of the cognitive effect is that the viewer perceives the moving projection as a pseudo-three-dimensional object characterizing multidimensional data in multidimensional space. After a thorough qualitative study of the visual aspects of the multidimensional data with the aid of the described algorithms, it becomes possible, using standard computer graphics techniques, to colour the individual objects or groups of objects of interest to the user. One can then return to the dynamic, rotating projection in order to check the user's intuitive ideas about clusters and connections in the multidimensional data. The methods of cognitive computer graphics can be developed further in combination with other information technologies, above all with packages for digital image processing and multidimensional statistical analysis.
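
    The abstract describes the dynamic projection only qualitatively. As an illustrative sketch (not the authors' system), a 2D projection of multidimensional points whose projection plane rotates over time, in the spirit of a "grand tour", can be written in a few lines of NumPy; the data set and parameterization below are assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        # Illustrative 5-D data: two clusters that separate only in some directions.
        data = np.vstack([rng.normal(0.0, 1.0, (200, 5)),
                          rng.normal(3.0, 1.0, (200, 5))])

        def projection_basis(t, dim=5):
            """Two orthonormal 'screen' axes that rotate smoothly with time t."""
            e1 = np.zeros(dim); e1[0] = np.cos(t); e1[2] = np.sin(t)
            e2 = np.zeros(dim); e2[1] = np.cos(0.7 * t); e2[3] = np.sin(0.7 * t)
            e2 -= e1 * (e1 @ e2)                       # keep the pair orthogonal
            return e1 / np.linalg.norm(e1), e2 / np.linalg.norm(e2)

        # Each frame is one 2-D view shown to the operator; stepping through frames
        # lets the viewer build an intuition for clusters in the 5-D point cloud.
        for frame, t in enumerate(np.linspace(0.0, 2 * np.pi, 8)):
            e1, e2 = projection_basis(t)
            xy = np.column_stack([data @ e1, data @ e2])
            print(f"frame {frame}: projected spread = {xy.std(axis=0).round(2)}")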

  1. Multimodal registration via spatial-context mutual information.

    PubMed

    Yi, Zhao; Soatto, Stefano

    2011-01-01

    We propose a method to efficiently compute mutual information between high-dimensional distributions of image patches. This in turn is used to perform accurate registration of images captured under different modalities, while exploiting their local structure otherwise missed in traditional mutual information definition. We achieve this by organizing the space of image patches into orbits under the action of Euclidean transformations of the image plane, and estimating the modes of a distribution in such an orbit space using affinity propagation. This way, large collections of patches that are equivalent up to translations and rotations are mapped to the same representative, or "dictionary element". We then show analytically that computing mutual information for a joint distribution in this space reduces to computing mutual information between the (scalar) label maps, and between the transformations mapping each patch into its closest dictionary element. We show that our approach improves registration performance compared with the state of the art in multimodal registration, using both synthetic and real images with quantitative ground truth.
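
    The reduction described in the last step ends with mutual information between scalar label maps, which can be computed directly from a joint histogram. A minimal sketch follows (NumPy); the synthetic label maps stand in for the dictionary-element labels of the two modalities and are not taken from the paper.

        import numpy as np

        def mutual_information(labels_a, labels_b):
            """Mutual information (in nats) between two aligned integer label maps."""
            a, b = labels_a.ravel(), labels_b.ravel()
            joint = np.zeros((a.max() + 1, b.max() + 1))
            np.add.at(joint, (a, b), 1)                # joint histogram of labels
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

        # Toy example: one label map is a mildly corrupted relabelling of the other.
        rng = np.random.default_rng(2)
        a = rng.integers(0, 8, size=(64, 64))
        b = (a + (rng.random(a.shape) < 0.1)) % 8      # 10% of labels perturbed
        print(mutual_information(a, b))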

  2. Multimodality 3D Superposition and Automated Whole Brain Tractography: Comprehensive Printing of the Functional Brain

    PubMed Central

    Brimley, Cameron J; Sublett, Jesna Mathew; Stefanowicz, Edward; Flora, Sarah; Mongelluzzo, Gino; Schirmer, Clemens M

    2017-01-01

    Whole brain tractography using diffusion tensor imaging (DTI) sequences can be used to map cerebral connectivity; however, this can be time-consuming due to the manual component of image manipulation required, highlighting the need for a standardized, automated, and accurate fiber tracking protocol with automatic whole brain tractography (AWBT). Interpreting conventional two-dimensional (2D) images, such as computed tomography (CT) and magnetic resonance imaging (MRI), as an intraoperative three-dimensional (3D) environment is a difficult task with recognized inter-operator variability. Three-dimensional printing in neurosurgery has gained significant traction in the past decade, and as software, equipment, and practices become more refined, trainee education, surgical skills, research endeavors, innovation, patient education, and outcomes via valued care are projected to improve. We describe a novel multimodality 3D superposition (MMTS) technique, which fuses multiple imaging sequences alongside cerebral tractography into one patient-specific 3D printed model. Inferences on cost and improved outcomes fueled by encouraging patient engagement are explored. PMID:29201580

  3. Optical and nuclear imaging of glioblastoma with phosphatidylserine-targeted nanovesicles.

    PubMed

    Blanco, Víctor M; Chu, Zhengtao; LaSance, Kathleen; Gray, Brian D; Pak, Koon Yan; Rider, Therese; Greis, Kenneth D; Qi, Xiaoyang

    2016-05-31

    Multimodal tumor imaging with targeted nanoparticles potentially offers both enhanced specificity and sensitivity, leading to more precise cancer diagnosis and monitoring. We describe the synthesis and characterization of phenol-substituted, lipophilic orange and far-red fluorescent dyes and a simple radioiodination procedure to generate a dual (optical and nuclear) imaging probe. MALDI-ToF analyses revealed high iodination efficiency of the lipophilic reporters, achieved by electrophilic aromatic substitution using the chloramide 1,3,4,6-tetrachloro-3α,6α-diphenyl glycoluril (Iodogen) as the oxidizing agent in an organic/aqueous co-solvent mixture. Upon conjugation of iodine-127 or iodine-124-labeled reporters to tumor-targeting SapC-DOPS nanovesicles, optical (fluorescent) and PET imaging was performed in mice bearing intracranial glioblastomas. In addition, tumor vs non-tumor (normal brain) uptake was compared using iodine-125. These data provide proof-of-principle for the potential value of SapC-DOPS for multimodal imaging of glioblastoma, the most aggressive primary brain tumor.

  4. Nanoparticles speckled by ready-to-conjugate lanthanide complexes for multimodal imaging

    NASA Astrophysics Data System (ADS)

    Biju, Vasudevanpillai; Hamada, Morihiko; Ono, Kenji; Sugino, Sakiko; Ohnishi, Takashi; Shibu, Edakkattuparambil Sidharth; Yamamura, Shohei; Sawada, Makoto; Nakanishi, Shunsuke; Shigeri, Yasushi; Wakida, Shin-Ichi

    2015-09-01

    Multimodal and multifunctional contrast agents receive enormous attention in the biomedical imaging field. Such contrast agents are routinely prepared by the incorporation of organic molecules and inorganic nanoparticles (NPs) into host materials such as gold NPs, silica NPs, polymer NPs, and liposomes. Despite their non-cytotoxic nature, the large size of these NPs limits the in vivo distribution and clearance and inflames complex pharmacokinetics, which hinder the regulatory approval for clinical applications. Herein, we report a unique method that combines magnetic resonance imaging (MRI) and fluorescence imaging modalities together in nanoscale entities by the simple, direct and stable conjugation of novel biotinylated coordination complexes of gadolinium(iii) to CdSe/ZnS quantum dots (QD) and terbium(iii) to super paramagnetic iron oxide NPs (SPION) but without any host material. Subsequently, we evaluate the potentials of such lanthanide-speckled fluorescent-magnetic NPs for bioimaging at single-molecule, cell and in vivo levels. The simple preparation and small size make such fluorescent-magnetic NPs promising contrast agents for biomedical imaging.

  5. Highlights lecture EANM 2016: "Embracing molecular imaging and multi-modal imaging: a smart move for nuclear medicine towards personalized medicine".

    PubMed

    Aboagye, Eric O; Kraeber-Bodéré, Françoise

    2017-08-01

    The 2016 EANM Congress took place in Barcelona, Spain, from 15 to 19 October under the leadership of Prof. Wim Oyen, chair of the EANM Scientific Committee. With more than 6,000 participants, this congress was the most important European event in nuclear medicine, bringing together a multidisciplinary community involved in the different fields of nuclear medicine. There were over 600 oral and 1,200 poster or e-Poster presentations with an overwhelming focus on development and application of imaging for personalized care, which is timely for the community. Beyond FDG PET, major highlights included progress in the use of PSMA and SSTR receptor-targeted radiopharmaceuticals and associated theranostics in oncology. Innovations in radiopharmaceuticals for imaging pathologies of the brain and cardiovascular system, as well as infection and inflammation, were also highlighted. In the areas of physics and instrumentation, multimodality imaging and radiomics were highlighted as promising areas of research.

  6. Separating Bulk and Surface Contributions to Electronic Excited-State Processes in Hybrid Mixed Perovskite Thin Films via Multimodal All-Optical Imaging.

    PubMed

    Simpson, Mary Jane; Doughty, Benjamin; Das, Sanjib; Xiao, Kai; Ma, Ying-Zhong

    2017-07-20

    A comprehensive understanding of electronic excited-state phenomena underlying the impressive performance of solution-processed hybrid halide perovskite solar cells requires access to both spatially resolved electronic processes and corresponding sample morphological characteristics. Here, we demonstrate an all-optical multimodal imaging approach that enables us to obtain both electronic excited-state and morphological information on a single optical microscope platform with simultaneous high temporal and spatial resolution. Specifically, images were acquired for the same region of interest in thin films of chloride-containing mixed lead halide perovskites (CH3NH3PbI3-xClx) using femtosecond transient absorption, time-integrated photoluminescence, confocal reflectance, and transmission microscopies. Comprehensive image analysis revealed the presence of surface- and bulk-dominated contributions to the various images, which describe either spatially dependent electronic excited-state properties or morphological variations across the probed region of the thin films. These results show that photoluminescence (PL) effectively probes species near or at the film surface.

  7. Multimodality 3D Superposition and Automated Whole Brain Tractography: Comprehensive Printing of the Functional Brain.

    PubMed

    Konakondla, Sanjay; Brimley, Cameron J; Sublett, Jesna Mathew; Stefanowicz, Edward; Flora, Sarah; Mongelluzzo, Gino; Schirmer, Clemens M

    2017-09-29

    Whole brain tractography using diffusion tensor imaging (DTI) sequences can be used to map cerebral connectivity; however, this can be time-consuming due to the manual component of image manipulation required, highlighting the need for a standardized, automated, and accurate fiber tracking protocol with automatic whole brain tractography (AWBT). Interpreting conventional two-dimensional (2D) images, such as computed tomography (CT) and magnetic resonance imaging (MRI), as an intraoperative three-dimensional (3D) environment is a difficult task with recognized inter-operator variability. Three-dimensional printing in neurosurgery has gained significant traction in the past decade, and as software, equipment, and practices become more refined, trainee education, surgical skills, research endeavors, innovation, patient education, and outcomes via valued care are projected to improve. We describe a novel multimodality 3D superposition (MMTS) technique, which fuses multiple imaging sequences alongside cerebral tractography into one patient-specific 3D printed model. Inferences on cost and improved outcomes fueled by encouraging patient engagement are explored.

  8. Multimodal imaging of language reorganization in patients with left temporal lobe epilepsy.

    PubMed

    Chang, Yu-Hsuan A; Kemmotsu, Nobuko; Leyden, Kelly M; Kucukboyaci, N Erkut; Iragui, Vicente J; Tecoma, Evelyn S; Kansal, Leena; Norman, Marc A; Compton, Rachelle; Ehrlich, Tobin J; Uttarwar, Vedang S; Reyes, Anny; Paul, Brianna M; McDonald, Carrie R

    2017-07-01

    This study explored the relationships among multimodal imaging, clinical features, and language impairment in patients with left temporal lobe epilepsy (LTLE). Fourteen patients with LTLE and 26 controls underwent structural MRI, functional MRI, diffusion tensor imaging, and neuropsychological language tasks. Laterality indices were calculated for each imaging modality and a principal component (PC) was derived from language measures. Correlations were performed among imaging measures, as well as to the language PC. In controls, better language performance was associated with stronger left-lateralized temporo-parietal and temporo-occipital activations. In LTLE, better language performance was associated with stronger right-lateralized inferior frontal, temporo-parietal, and temporo-occipital activations. These right-lateralized activations in LTLE were associated with right-lateralized arcuate fasciculus fractional anisotropy. These data suggest that interhemispheric language reorganization in LTLE is associated with alterations to perisylvian white matter. These concurrent structural and functional shifts from left to right may help to mitigate language impairment in LTLE. Copyright © 2017 Elsevier Inc. All rights reserved.
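
    The abstract does not state how the laterality indices were computed; the conventional definition used in language-lateralization studies (whether this study used voxel counts or another activation measure is not specified) is

        \[
          \mathrm{LI} = \frac{L - R}{L + R},
        \]

    where L and R are the activation measures (for example, suprathreshold voxel counts) in homologous left- and right-hemisphere regions, so that LI > 0 indicates left lateralization and LI < 0 indicates right lateralization.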

  9. Robust multi-site MR data processing: iterative optimization of bias correction, tissue classification, and registration.

    PubMed

    Young Kim, Eun; Johnson, Hans J

    2013-01-01

    A robust multi-modal tool for automated registration, bias correction, and tissue classification has been implemented for large-scale heterogeneous multi-site longitudinal MR data analysis. This work focused on improving an iterative optimization framework between bias correction, registration, and tissue classification, inspired by previous work. The primary contributions are robustness improvements from the incorporation of the following four elements: (1) utilize multi-modal and repeated scans, (2) incorporate highly deformable registration, (3) use an extended set of tissue definitions, and (4) use multi-modal-aware intensity-context priors. The benefits of these enhancements were investigated by a series of experiments with both a simulated brain data set (BrainWeb) and highly heterogeneous data from a 32-site imaging study, with quality assessed through expert visual inspection. The implementation of this tool is tailored for, but not limited to, large-scale data processing with great data variation, with a flexible interface. In this paper, we describe enhancements to joint registration, bias correction, and tissue classification that improve the generalizability and robustness for processing multi-modal longitudinal MR scans collected at multiple sites. The tool was evaluated using both simulated and human-subject MRI images. With these enhancements, the results showed improved robustness for large-scale heterogeneous MRI processing.

  10. Decomposed multidimensional control grid interpolation for common consumer electronic image processing applications

    NASA Astrophysics Data System (ADS)

    Zwart, Christine M.; Venkatesan, Ragav; Frakes, David H.

    2012-10-01

    Interpolation is an essential and broadly employed function of signal processing. Accordingly, considerable development has focused on advancing interpolation algorithms toward optimal accuracy. Such development has motivated a clear shift in the state-of-the art from classical interpolation to more intelligent and resourceful approaches, registration-based interpolation for example. As a natural result, many of the most accurate current algorithms are highly complex, specific, and computationally demanding. However, the diverse hardware destinations for interpolation algorithms present unique constraints that often preclude use of the most accurate available options. For example, while computationally demanding interpolators may be suitable for highly equipped image processing platforms (e.g., computer workstations and clusters), only more efficient interpolators may be practical for less well equipped platforms (e.g., smartphones and tablet computers). The latter examples of consumer electronics present a design tradeoff in this regard: high accuracy interpolation benefits the consumer experience but computing capabilities are limited. It follows that interpolators with favorable combinations of accuracy and efficiency are of great practical value to the consumer electronics industry. We address multidimensional interpolation-based image processing problems that are common to consumer electronic devices through a decomposition approach. The multidimensional problems are first broken down into multiple, independent, one-dimensional (1-D) interpolation steps that are then executed with a newly modified registration-based one-dimensional control grid interpolator. The proposed approach, decomposed multidimensional control grid interpolation (DMCGI), combines the accuracy of registration-based interpolation with the simplicity, flexibility, and computational efficiency of a 1-D interpolation framework. Results demonstrate that DMCGI provides improved interpolation accuracy (and other benefits) in image resizing, color sample demosaicing, and video deinterlacing applications, at a computational cost that is manageable or reduced in comparison to popular alternatives.
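
    The registration-based 1-D control grid interpolator at the core of DMCGI is not reproduced here; the decomposition idea itself, running independent 1-D interpolation passes along each axis in turn, can be illustrated with ordinary 1-D linear interpolation (NumPy). The helper names below are illustrative.

        import numpy as np

        def resize_1d(arr, new_len, axis):
            """Resample one axis of an array with 1-D linear interpolation."""
            old_len = arr.shape[axis]
            old_x = np.arange(old_len)
            new_x = np.linspace(0, old_len - 1, new_len)
            return np.apply_along_axis(lambda v: np.interp(new_x, old_x, v), axis, arr)

        def resize_2d_decomposed(img, new_shape):
            """Decompose a 2-D resize into two independent 1-D interpolation steps.

            DMCGI replaces np.interp with a registration-based 1-D control grid
            interpolator, but the pass-per-axis structure is the same.
            """
            tmp = resize_1d(img, new_shape[0], axis=0)     # rows first
            return resize_1d(tmp, new_shape[1], axis=1)    # then columns

        img = np.arange(16, dtype=float).reshape(4, 4)
        print(resize_2d_decomposed(img, (8, 8)).shape)     # -> (8, 8)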

  11. A multimodal 3D framework for fire characteristics estimation

    NASA Astrophysics Data System (ADS)

    Toulouse, T.; Rossi, L.; Akhloufi, M. A.; Pieri, A.; Maldague, X.

    2018-02-01

    In the last decade we have witnessed an increasing interest in using computer vision and image processing in forest fire research. Image processing techniques have been successfully used in different fire analysis areas such as early detection, monitoring, modeling and fire front characteristics estimation. While the majority of the work deals with the use of 2D visible spectrum images, recent work has introduced the use of 3D vision in this field. This work proposes a new multimodal vision framework permitting the extraction of the three-dimensional geometrical characteristics of fires captured by multiple 3D vision systems. The 3D system is a multispectral stereo system operating in both the visible and near-infrared (NIR) spectral bands. The framework supports the use of multiple stereo pairs positioned so as to capture complementary views of the fire front during its propagation. Multimodal registration is conducted using the captured views in order to build a complete 3D model of the fire front. The registration process is achieved using multisensory fusion based on visual data (2D and NIR images), GPS positions and IMU inertial data. Experiments were conducted outdoors in order to show the performance of the proposed framework. The obtained results are promising and show the potential of using the proposed framework in operational scenarios for wildland fire research and as a decision management system in firefighting.

  12. Tunable and noncytotoxic PET/SPECT-MRI multimodality imaging probes using colloidally stable ligand-free superparamagnetic iron oxide nanoparticles

    PubMed Central

    Pham, TH Nguyen; Lengkeek, Nigel A; Greguric, Ivan; Kim, Byung J; Pellegrini, Paul A; Bickley, Stephanie A; Tanudji, Marcel R; Jones, Stephen K; Hawkett, Brian S; Pham, Binh TT

    2017-01-01

    Physiologically stable multimodality imaging probes for positron emission tomography/single-photon emission computed tomography (PET/SPECT)-magnetic resonance imaging (MRI) were synthesized using the superparamagnetic maghemite iron oxide (γ-Fe2O3) nanoparticles (SPIONs). The SPIONs were sterically stabilized with a finely tuned mixture of diblock copolymers with either methoxypolyethylene glycol (MPEG) or primary amine NH2 end groups. The radioisotope for PET or SPECT imaging was incorporated with the SPIONs at high temperature. 57Co2+ ions with a long half-life of 270.9 days were used as a model for the radiotracer to study the kinetics of radiolabeling, characterization, and the stability of the radiolabeled SPIONs. Radioactive 67Ga3+ and Cu2+-labeled SPIONs were also produced successfully using the optimized conditions from the 57Co2+-labeling process. No free radioisotopes were detected in the aqueous phase for the radiolabeled SPIONs 1 week after dispersion in phosphate-buffered saline (PBS). All labeled SPIONs were not only well dispersed and stable under physiological conditions but also noncytotoxic in vitro. The ability to design and produce physiologically stable radiolabeled magnetic nanoparticles with a finely controlled number of functionalizable end groups on the SPIONs enables the generation of a desirable and biologically compatible multimodality PET/SPECT-MRI agent on a single T2 contrast MRI probe. PMID:28184160

  13. A framework for biomedical figure segmentation towards image-based document retrieval

    PubMed Central

    2013-01-01

    The figures included in many of the biomedical publications play an important role in understanding the biological experiments and facts described within. Recent studies have shown that it is possible to integrate the information that is extracted from figures in classical document classification and retrieval tasks in order to improve their accuracy. One important observation about the figures included in biomedical publications is that they are often composed of multiple subfigures or panels, each describing different methodologies or results. The use of these multimodal figures is a common practice in bioscience, as experimental results are graphically validated via multiple methodologies or procedures. Thus, for a better use of multimodal figures in document classification or retrieval tasks, as well as for providing the evidence source for derived assertions, it is important to automatically segment multimodal figures into subfigures and panels. This is a challenging task, however, as different panels can contain similar objects (e.g., bar charts and line charts) with multiple layouts. Also, certain types of biomedical figures are text-heavy (e.g., DNA sequence and protein sequence images) and they differ from traditional images. As a result, classical image segmentation techniques based on low-level image features, such as edges or color, are not directly applicable to robustly partition multimodal figures into single modal panels. In this paper, we describe a robust solution for automatically identifying and segmenting unimodal panels from a multimodal figure. Our framework starts by robustly harvesting figure-caption pairs from biomedical articles. We base our approach on the observation that the document layout can be used to identify encoded figures and figure boundaries within PDF files. Taking into consideration the document layout allows us to correctly extract figures from the PDF document and associate their corresponding captions. We combine pixel-level representations of the extracted images with information gathered from their corresponding captions to estimate the number of panels in the figure. Thus, our approach simultaneously identifies the number of panels and the layout of figures. In order to evaluate the approach described here, we applied our system to documents containing protein-protein interactions (PPIs) and compared the results against a gold standard that was annotated by biologists. Experimental results showed that our automatic figure segmentation approach surpasses pure caption-based and image-based approaches, achieving a 96.64% accuracy. To allow for efficient retrieval of information, as well as to provide the basis for integration into document classification and retrieval systems among others, we further developed a web-based interface that lets users easily retrieve panels containing the terms specified in the user queries. PMID:24565394
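
    The authors' segmentation combines document layout, pixel-level features and caption text and is not reproduced here. A much simpler baseline, splitting a figure at near-uniform whitespace gutters, conveys the low-level half of the idea; the NumPy sketch below uses illustrative thresholds and a synthetic two-panel figure.

        import numpy as np

        def gutter_splits(gray, axis, white=0.98, min_gap=10):
            """Return split positions along `axis` where the figure is almost all white.

            gray: 2-D float array scaled to [0, 1]; a gutter is a run of at least
            min_gap consecutive rows/columns whose pixels are nearly all white.
            """
            frac_white = (gray > white).mean(axis=1 - axis)
            is_gutter = frac_white > 0.99
            splits, start = [], None
            for i, g in enumerate(is_gutter):
                if g and start is None:
                    start = i
                elif not g and start is not None:
                    if i - start >= min_gap:
                        splits.append((start + i) // 2)
                    start = None
            return splits

        # Toy figure: two dark panels separated by a white gutter.
        fig = np.ones((100, 220))
        fig[10:90, 10:100] = 0.2
        fig[10:90, 120:210] = 0.2
        print(gutter_splits(fig, axis=1))   # -> [5, 110]: left margin and inter-panel gutter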

  14. Multi-focus beam shaping of high power multimode lasers

    NASA Astrophysics Data System (ADS)

    Laskin, Alexander; Volpp, Joerg; Laskin, Vadim; Ostrun, Aleksei

    2017-08-01

    Beam shaping of powerful multimode fiber lasers, fiber-coupled solid-state lasers and diode lasers is of great importance for improving industrial laser applications. Welding and cladding with millimetre-scale working spots benefit from "inverse-Gauss" intensity profiles; the performance of thick metal sheet cutting and deep penetration welding can be enhanced by distributing the laser energy along the optical axis, since more efficient usage of laser energy, higher edge quality and a reduction of the heat-affected zone can be achieved. Building beam shaping optics for multimode lasers encounters physical limitations due to the low beam spatial coherence of multimode fiber-coupled lasers, resulting in large Beam Parameter Products (BPP) or M² values. The laser radiation emerging from a multimode fiber presents a mixture of wavefronts. The fiber end can be considered as a light source whose optical properties are intermediate between a Lambertian source and a single mode laser beam. Imaging of the fiber end, using a collimator and a focusing objective, is a robust and widely used beam delivery approach. Beam shaping solutions are suggested in the form of optics combining fiber end imaging and geometrical separation of focused spots either perpendicular to or along the optical axis. Thus, the energy of high power lasers is distributed among multiple foci. In order to provide reliable operation with multi-kW lasers and avoid damage, the optics are designed as refractive elements with smooth optical surfaces. The paper presents descriptions of multi-focus optics as well as examples of intensity profile measurements of beam caustics and application results.

  15. Fluorescent magnetic hybrid nanoprobe for multimodal bioimaging

    PubMed Central

    Bright, Vanessa

    2011-01-01

    A fluorescent magnetic hybrid imaging nanoprobe (HINP) was fabricated by conjugation of superparamagnetic Fe3O4 nanoparticles and visible light-emitting (~600 nm) fluorescent CdTe/CdS quantum dots (QDs). The assembly strategy used the covalent linking of the oxidized dextran shell of magnetic particles to the glutathione ligands of QDs. Synthesized HINP formed stable water-soluble colloidal dispersions. The structure and properties of the particles were characterized by transmission electron and atomic force microscopy, energy dispersive X-ray analysis and inductively coupled plasma optical emission spectroscopy, dynamic light scattering analysis, optical absorption and photoluminescence spectroscopy, and fluorescent imaging. The luminescence imaging region of the nanoprobe was extended to the near-infrared (NIR) (~800 nm) by conjugation of superparamagnetic nanoparticles with synthesized CdHgTe/CdS QDs. Cadmium, mercury based QDs in HINP can be easily replaced by novel water soluble glutathione stabilized AgInS2/ZnS QDs to present a new class of cadmium-free multimodal imaging agents. Observed NIR photoluminescence of fluorescent magnetic nanocomposites supports their use for bioimaging. The developed HINP provides dual-imaging channels for simultaneous optical and magnetic resonance imaging. PMID:21597146

  16. Multimodal breast cancer imaging using coregistered dynamic diffuse optical tomography and digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Zimmermann, Bernhard B.; Deng, Bin; Singh, Bhawana; Martino, Mark; Selb, Juliette; Fang, Qianqian; Sajjadi, Amir Y.; Cormier, Jayne; Moore, Richard H.; Kopans, Daniel B.; Boas, David A.; Saksena, Mansi A.; Carp, Stefan A.

    2017-04-01

    Diffuse optical tomography (DOT) is emerging as a noninvasive functional imaging method for breast cancer diagnosis and neoadjuvant chemotherapy monitoring. In particular, the multimodal approach of combining DOT with x-ray digital breast tomosynthesis (DBT) is especially synergistic as DBT prior information can be used to enhance the DOT reconstruction. DOT, in turn, provides a functional information overlay onto the mammographic images, increasing sensitivity and specificity to cancer pathology. We describe a dynamic DOT apparatus designed for tight integration with commercial DBT scanners and providing a fast (up to 1 Hz) image acquisition rate to enable tracking hemodynamic changes induced by the mammographic breast compression. The system integrates 96 continuous-wave and 24 frequency-domain source locations as well as 32 continuous wave and 20 frequency-domain detection locations into low-profile plastic plates that can easily mate to the DBT compression paddle and x-ray detector cover, respectively. We demonstrate system performance using static and dynamic tissue-like phantoms as well as in vivo images acquired from the pool of patients recalled for breast biopsies at the Massachusetts General Hospital Breast Imaging Division.

  17. Multifunctional PHPMA-Derived Polymer for Ratiometric pH Sensing, Fluorescence Imaging, and Magnetic Resonance Imaging.

    PubMed

    Su, Fengyu; Agarwal, Shubhangi; Pan, Tingting; Qiao, Yuan; Zhang, Liqiang; Shi, Zhengwei; Kong, Xiangxing; Day, Kevin; Chen, Meiwan; Meldrum, Deirdre; Kodibagkar, Vikram D; Tian, Yanqing

    2018-01-17

    In this paper, we report synthesis and characterization of a novel multimodality (MRI/fluorescence) probe for pH sensing and imaging. A multifunctional polymer was derived from poly(N-(2-hydroxypropyl)methacrylamide) (PHPMA) and integrated with a naphthalimide-based-ratiometric fluorescence probe and a gadolinium-1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid complex (Gd-DOTA complex). The polymer was characterized using UV-vis absorption spectrophotometry, fluorescence spectrofluorophotometry, magnetic resonance imaging (MRI), and confocal microscopy for optical and MRI-based pH sensing and cellular imaging. In vitro labeling of macrophage J774 and esophageal CP-A cell lines shows the polymer's ability to be internalized in the cells. The transverse relaxation time (T 2 ) of the polymer was observed to be pH-dependent, whereas the spin-lattice relaxation time (T 1 ) was not. The pH probe in the polymer shows a strong fluorescence-based ratiometric pH response with emission window changes, exhibiting blue emission under acidic conditions and green emission under basic conditions, respectively. This study provides new materials with multimodalities for pH sensing and imaging.

  18. Multimodal computational microscopy based on transport of intensity equation

    NASA Astrophysics Data System (ADS)

    Li, Jiaji; Chen, Qian; Sun, Jiasong; Zhang, Jialin; Zuo, Chao

    2016-12-01

    Transport of intensity equation (TIE) is a powerful tool for phase retrieval and quantitative phase imaging, which requires intensity measurements only at axially closely spaced planes without a separate reference beam. It does not require coherent illumination and works well on conventional bright-field microscopes. The quantitative phase reconstructed by TIE gives valuable information that has been encoded in the complex wave field by passage through a sample of interest. Such information may provide tremendous flexibility to emulate various microscopy modalities computationally without requiring specialized hardware components. We develop a requisite theory to describe such a hybrid computational multimodal imaging system, which yields quantitative phase, Zernike phase contrast, differential interference contrast, and light field moment imaging, simultaneously. This makes various observations of biomedical samples straightforward. We then give an experimental demonstration of these ideas by time-lapse imaging of live HeLa cell mitosis. Experimental results verify that a tunable lens-based TIE system, combined with the appropriate postprocessing algorithm, can achieve a variety of promising imaging modalities in parallel with the quantitative phase images for the dynamic study of cellular processes.
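
    For reference (the abstract does not restate it), the transport of intensity equation relates the measured axial intensity derivative of a paraxial beam to the transverse phase,

        \[
          -k \, \frac{\partial I(\mathbf{r})}{\partial z}
            = \nabla_{\!\perp} \cdot \bigl( I(\mathbf{r}) \, \nabla_{\!\perp} \phi(\mathbf{r}) \bigr),
          \qquad k = \frac{2\pi}{\lambda},
        \]

    so recording intensities at a few closely spaced planes (to approximate the z-derivative) allows the quantitative phase φ to be recovered by solving this elliptic equation.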

  19. Deformable image registration for multimodal lung-cancer staging

    NASA Astrophysics Data System (ADS)

    Cheirsilp, Ronnarit; Zang, Xiaonan; Bascom, Rebecca; Allen, Thomas W.; Mahraj, Rickhesvar P. M.; Higgins, William E.

    2016-03-01

    Positron emission tomography (PET) and X-ray computed tomography (CT) serve as major diagnostic imaging modalities in the lung-cancer staging process. Modern scanners provide co-registered whole-body PET/CT studies, collected while the patient breathes freely, and high-resolution chest CT scans, collected under a brief patient breath hold. Unfortunately, no method exists for registering a PET/CT study into the space of a high-resolution chest CT scan. If this could be done, vital diagnostic information offered by the PET/CT study could be brought seamlessly into the procedure plan used during live cancer-staging bronchoscopy. We propose a method for the deformable registration of whole-body PET/CT data into the space of a high-resolution chest CT study. We then demonstrate its potential for procedure planning and subsequent use in multimodal image-guided bronchoscopy.

  20. An efficient sampling algorithm for uncertain abnormal data detection in biomedical image processing and disease prediction.

    PubMed

    Liu, Fei; Zhang, Xi; Jia, Yan

    2015-01-01

    In this paper, we propose a computer information processing algorithm that can be used for biomedical image processing and disease prediction. A biomedical image is considered a data object in a multi-dimensional space. Each dimension is a feature that can be used for disease diagnosis. We introduce a new concept of the top (k1,k2) outlier. It can be used to detect abnormal data objects in the multi-dimensional space. This technique focuses on uncertain space, where each data object has several possible instances with distinct probabilities. We design an efficient sampling algorithm for the top (k1,k2) outlier in uncertain space. Some improvement techniques are used for acceleration. Experiments show our methods' high accuracy and high efficiency.
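
    The abstract does not define the top (k1,k2) outlier formally. Under one plausible reading (score each sampled possible world by the distance to an object's k1-th nearest neighbour and estimate each object's probability of ranking among the top k2 outliers), a Monte Carlo sampling sketch could look like the following; the uncertain-object model, names and parameters are assumptions, not the authors' algorithm.

        import numpy as np

        rng = np.random.default_rng(3)

        # Illustrative uncertain data: 50 objects, each with 3 possible 2-D instances.
        n_obj, n_inst, dim = 50, 3, 2
        instances = rng.normal(0, 1, (n_obj, n_inst, dim))
        instances[0] += 6.0                               # object 0 is a planted outlier
        probs = rng.dirichlet(np.ones(n_inst), size=n_obj)

        def knn_score(points, k1):
            """Outlier score: distance to the k1-th nearest neighbour."""
            d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
            return np.sort(d, axis=1)[:, k1]              # column 0 is the point itself

        def top_k1_k2_probability(k1=5, k2=3, n_samples=500):
            hits = np.zeros(n_obj)
            for _ in range(n_samples):
                # Sample one possible world by picking one instance per object.
                choice = np.array([rng.choice(n_inst, p=p) for p in probs])
                world = instances[np.arange(n_obj), choice]
                top = np.argsort(knn_score(world, k1))[-k2:]
                hits[top] += 1
            return hits / n_samples

        print(top_k1_k2_probability().argmax())           # expected: 0 (the planted outlier)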

  1. Multimodality imaging of the orbit

    PubMed Central

    Hande, Pradipta C; Talwar, Inder

    2012-01-01

    The role of imaging is well established in the evaluation of orbital diseases. Ultrasonography, Computed tomography and Magnetic resonance imaging are complementary modalities, which allow direct visualization of regional anatomy, accurate localization and help to characterize lesions to make a reliable radiological diagnosis. The purpose of this pictorial essay is to highlight the imaging features of commonly encountered pathologies which involve the orbit. PMID:23599570

  2. [Fusion of MRI, fMRI and intraoperative MRI data. Methods and clinical significance exemplified by neurosurgical interventions].

    PubMed

    Moche, M; Busse, H; Dannenberg, C; Schulz, T; Schmitgen, A; Trantakis, C; Winkler, D; Schmidt, F; Kahn, T

    2001-11-01

    The aim of this work was to realize and clinically evaluate an image fusion platform for the integration of preoperative MRI and fMRI data into the intraoperative images of an interventional MRI system, with a focus on neurosurgical procedures. A vertically open 0.5 T MRI scanner was equipped with a dedicated navigation system enabling the registration of additional imaging modalities (MRI, fMRI, CT) with the intraoperatively acquired data sets. These merged image data served as the basis for interventional planning and multimodal navigation. So far, the system has been used in 70 neurosurgical interventions (13 of which involved image data fusion, requiring 15 minutes of extra time). The augmented navigation system is characterized by a higher frame rate and a higher image quality as compared to the system-integrated navigation based on continuously acquired (near) real-time images. Patient movement and tissue shifts can be immediately detected by monitoring the morphological differences between both navigation scenes. The multimodal image fusion allowed a refined navigation planning, especially for the resection of deeply seated brain lesions or pathologies close to eloquent areas. Augmented intraoperative orientation and instrument guidance improve the safety and accuracy of neurosurgical interventions.

  3. Research-oriented image registry for multimodal image integration.

    PubMed

    Tanaka, M; Sadato, N; Ishimori, Y; Yonekura, Y; Yamashita, Y; Komuro, H; Hayahsi, N; Ishii, Y

    1998-01-01

    To provide multimodal biomedical images automatically, we constructed a research-oriented image registry, the Data Delivery System (DDS). DDS was constructed on the campus local area network. Machines which generate images (imagers: DSA, ultrasound, PET, MRI, SPECT and CT) were connected to the campus LAN. Once a patient is registered, all of the patient's images are automatically picked up by DDS as they are generated, transferred through the gateway server to the intermediate server, and copied into the directory of the user who registered the patient. DDS informs the user through e-mail that new data have been generated and transferred. The data format is automatically converted into the one chosen by the user. Data inactive for a certain period in the intermediate server are automatically archived onto the final and permanent data server based on compact disks. As a soft link is automatically generated through this step, a user has access to all (old or new) image data of the patients of interest. As DDS runs with minimal maintenance, the cost and time for data transfer are significantly reduced. By making the complex process of data transfer and conversion invisible, DDS has made it easy for researchers with little computer experience to concentrate on their biomedical interests.

  4. An on-chip silicon compact triplexer based on cascaded tilted multimode interference couplers

    NASA Astrophysics Data System (ADS)

    Chen, Jingye; Liu, Penghao; Shi, Yaocheng

    2018-03-01

    An on-chip triplexer based on cascaded tilted multimode interference (MMI) couplers has been demonstrated to separate the 1310 nm wavelength band into one port and the 1490 nm and 1550 nm wavelength bands into the other two ports, respectively. By utilizing dispersive self-imaging and pseudo self-imaging, the device length is not critically determined by the common multiple of the beat lengths for the different wavelengths. The total device size can be reduced to ∼450 μm, which is half the size of the previously reported butterfly structure. The whole device, fabricated with only one full-etch step, is characterized by crosstalk (CT) below -15 dB and an insertion loss (IL) of ∼1 dB.

  5. Multimodal imaging of spike propagation: a technical case report.

    PubMed

    Tanaka, N; Grant, P E; Suzuki, N; Madsen, J R; Bergin, A M; Hämäläinen, M S; Stufflebeam, S M

    2012-06-01

    We report an 11-year-old boy with intractable epilepsy, who had cortical dysplasia in the right superior frontal gyrus. Spatiotemporal source analysis of MEG and EEG spikes demonstrated a similar time course of spike propagation from the superior to inferior frontal gyri, as observed on intracranial EEG. The tractography reconstructed from DTI showed a fiber connection between these areas. Our multimodal approach demonstrates spike propagation and a white matter tract guiding the propagation.

  6. Integrative, multimodal analysis of glioblastoma using TCGA molecular data, pathology images, and clinical outcomes.

    PubMed

    Kong, Jun; Cooper, Lee A D; Wang, Fusheng; Gutman, David A; Gao, Jingjing; Chisolm, Candace; Sharma, Ashish; Pan, Tony; Van Meir, Erwin G; Kurc, Tahsin M; Moreno, Carlos S; Saltz, Joel H; Brat, Daniel J

    2011-12-01

    Multimodal, multiscale data synthesis is becoming increasingly critical for successful translational biomedical research. In this letter, we present a large-scale investigative initiative on glioblastoma, a high-grade brain tumor, with complementary data types using in silico approaches. We integrate and analyze data from The Cancer Genome Atlas Project on glioblastoma that includes novel nuclear phenotypic data derived from microscopic slides, genotypic signatures described by transcriptional class and genetic alterations, and clinical outcomes defined by response to therapy and patient survival. Our preliminary results demonstrate numerous clinically and biologically significant correlations across multiple data types, revealing the power of in silico multimodal data integration for cancer research.

  7. Multimodal fiber source for nonlinear microscopy based on a dissipative soliton laser

    PubMed Central

    Lamb, Erin S.; Wise, Frank W.

    2015-01-01

    Recent developments in high energy femtosecond fiber lasers have enabled robust and lower-cost sources for multiphoton-fluorescence and harmonic-generation imaging. However, picosecond pulses are better suited for Raman scattering microscopy, so the ideal multimodal source for nonlinear microscopy needs to provide both durations. Here we present spectral compression of a high-power femtosecond fiber laser as a route to producing transform-limited picosecond pulses. These pulses pump a fiber optical parametric oscillator to yield a robust fiber source capable of providing the synchronized picosecond pulse trains needed for Raman scattering microscopy. Thus, this system can be used as a multimodal platform for nonlinear microscopy techniques. PMID:26417497

  8. Multimodal image registration based on binary gradient angle descriptor.

    PubMed

    Jiang, Dongsheng; Shi, Yonghong; Yao, Demin; Fan, Yifeng; Wang, Manning; Song, Zhijian

    2017-12-01

    Multimodal image registration plays an important role in image-guided interventions/therapy and atlas building, and it is still a challenging task due to the complex intensity variations in different modalities. The paper addresses the problem and proposes a simple, compact, fast and generally applicable modality-independent binary gradient angle descriptor (BGA) based on the rationale of gradient orientation alignment. The BGA can be easily calculated at each voxel by coding the quadrant in which a local gradient vector falls, and it has an extremely low computational complexity, requiring only three convolutions, two multiplication operations and two comparison operations. Meanwhile, the binarized encoding of the gradient orientation makes the BGA more resistant to image degradations compared with conventional gradient orientation methods. The BGA can extract similar feature descriptors for different modalities and enable the use of simple similarity measures, which makes it applicable within a wide range of optimization frameworks. The results for pairwise multimodal and monomodal registrations between various images (T1, T2, PD, T1c, Flair) consistently show that the BGA significantly outperforms localized mutual information. The experimental results also confirm that the BGA can be a reliable alternative to the sum of absolute difference in monomodal image registration. The BGA can also achieve an accuracy of [Formula: see text], similar to that of the SSC, for the deformable registration of inhale and exhale CT scans. Specifically, for the highly challenging deformable registration of preoperative MRI and 3D intraoperative ultrasound images, the BGA achieves a similar registration accuracy of [Formula: see text] compared with state-of-the-art approaches, with a computation time of 18.3 s per case. The BGA improves the registration performance in terms of both accuracy and time efficiency. With further acceleration, the framework has the potential for application in time-sensitive clinical environments, such as for preoperative MRI and intraoperative US image registration for image-guided intervention.
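
    The exact derivative kernels are not given in the abstract; the core idea, a 2-bit code for the quadrant into which the local gradient vector falls, can be sketched as follows (NumPy/SciPy, in 2-D for brevity, whereas the descriptor in the paper is computed on 3-D volumes).

        import numpy as np
        from scipy import ndimage

        def bga_2d(img, sigma=1.0):
            """Binary gradient angle code: 2 bits per pixel giving the gradient quadrant.

            Bit 0 encodes the sign of dI/dx, bit 1 the sign of dI/dy, so pixels whose
            gradients point into the same quadrant receive the same code regardless
            of gradient magnitude.
            """
            gx = ndimage.gaussian_filter(img, sigma, order=(0, 1))  # derivative along x
            gy = ndimage.gaussian_filter(img, sigma, order=(1, 0))  # derivative along y
            return (gx >= 0).astype(np.uint8) | ((gy >= 0).astype(np.uint8) << 1)

        # The code is unchanged by a positive affine rescaling of the intensities,
        # one of the degradations a binarized descriptor is meant to tolerate.
        x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
        img_a = np.exp(-(x**2 + y**2) / 0.2)
        img_b = 3.0 * img_a + 50.0
        print((bga_2d(img_a) == bga_2d(img_b)).mean())    # expected to be 1.0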

  9. Computer-assisted surgical planning and automation of laser delivery systems

    NASA Astrophysics Data System (ADS)

    Zamorano, Lucia J.; Dujovny, Manuel; Dong, Ada; Kadi, A. Majeed

    1991-05-01

    This paper describes a 'real time' surgical treatment planning interactive workstation, utilizing multimodality imaging (computed tomography, magnetic resonance imaging, digital angiography), that has been developed to provide the neurosurgeon with two-dimensional multiplanar and three-dimensional 'display' of a patient's lesion.

  10. Novel DOTA-based prochelator for divalent peptide vectorization: synthesis of dimeric bombesin analogues for multimodality tumor imaging and therapy.

    PubMed

    Abiraj, Keelara; Jaccard, Hugues; Kretzschmar, Martin; Helm, Lothar; Maecke, Helmut R

    2008-07-28

    Dimeric peptidic vectors, obtained by the divalent grafting of bombesin analogues on a newly synthesized DOTA-based prochelator, showed improved qualities as tumor targeted imaging probes in comparison to their monomeric analogues.

  11. Big Data and Deep data in scanning and electron microscopies: functionality from multidimensional data sets

    DOE PAGES

    Belianinov, Alex; Vasudevan, Rama K; Strelcov, Evgheni; ...

    2015-05-13

    The development of electron and scanning probe microscopies in the second half of the twentieth century has produced spectacular images of the internal structure and composition of matter with nanometer, molecular, and atomic resolution. Largely, this progress was enabled by computer-assisted methods of microscope operation, data acquisition and analysis. The progress in imaging technologies in the beginning of the twenty-first century has opened the proverbial floodgates of high-veracity information on structure and functionality. High resolution imaging now allows information on atomic positions with picometer precision, allowing for quantitative measurements of individual bond lengths and angles. Functional imaging often leads to multidimensional data sets containing partial or full information on properties of interest, acquired as a function of multiple parameters (time, temperature, or other external stimuli). Here, we review several recent applications of big and deep data analysis methods to visualize, compress, and translate this imaging data into physically and chemically relevant information.

  12. Big Data and Deep data in scanning and electron microscopies: functionality from multidimensional data sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belianinov, Alex; Vasudevan, Rama K; Strelcov, Evgheni

    The development of electron and scanning probe microscopies in the second half of the twentieth century has produced spectacular images of the internal structure and composition of matter with nanometer, molecular, and atomic resolution. Largely, this progress was enabled by computer-assisted methods of microscope operation, data acquisition and analysis. The progress in imaging technologies in the beginning of the twenty-first century has opened the proverbial floodgates of high-veracity information on structure and functionality. High resolution imaging now allows information on atomic positions with picometer precision, allowing for quantitative measurements of individual bond lengths and angles. Functional imaging often leads to multidimensional data sets containing partial or full information on properties of interest, acquired as a function of multiple parameters (time, temperature, or other external stimuli). Here, we review several recent applications of big and deep data analysis methods to visualize, compress, and translate this imaging data into physically and chemically relevant information.

  13. In vivo evaluation of adipose- and muscle-derived stem cells as a treatment for nonhealing diabetic wounds using multimodal microscopy

    NASA Astrophysics Data System (ADS)

    Li, Joanne; Pincu, Yair; Marjanovic, Marina; Bower, Andrew J.; Chaney, Eric J.; Jensen, Tor; Boppart, Marni D.; Boppart, Stephen A.

    2016-08-01

    Impaired skin wound healing is a significant comorbid condition of diabetes, which often results in nonhealing diabetic ulcers due to poor peripheral microcirculation, among other factors. The regenerative effectiveness of adipose-derived stem cells (ADSCs) and muscle-derived stem cells (MDSCs) was assessed using an integrated multimodal microscopy system equipped with two-photon fluorescence and second-harmonic generation imaging. These imaging modalities, integrated in a single platform for spatial and temporal coregistration, allowed us to monitor in vivo changes in the collagen network and cell dynamics in a skin wound. Fluorescently labeled ADSCs and MDSCs were applied topically to the wound bed of wild-type and diabetic (db/db) mice following punch biopsy. Longitudinal imaging demonstrated that ADSCs and MDSCs provided remarkable capacity for improved diabetic wound healing, and integrated microscopy revealed more organized collagen remodeling in the wound bed of treated mice. The results from this study verify the regenerative capacity of stem cells toward healing and, with multimodal microscopy, provide insight regarding their impact on the skin microenvironment. The optical method outlined in this study, which has the potential for in vivo human use, may optimize the care and treatment of diabetic nonhealing wounds.

  14. Mixture-Tuned, Clutter Matched Filter for Remote Detection of Subpixel Spectral Signals

    NASA Technical Reports Server (NTRS)

    Thompson, David R.; Mandrake, Lukas; Green, Robert O.

    2013-01-01

    Mapping localized spectral features in large images demands sensitive and robust detection algorithms. Two aspects of large images that can harm matched-filter detection performance are addressed simultaneously. First, multimodal backgrounds may thwart the typical Gaussian model. Second, outlier features can trigger false detections from large projections onto the target vector. Two state-of-the-art approaches are combined that independently address outlier false positives and multimodal backgrounds. The background clustering models multimodal backgrounds, and the mixture-tuned matched filter (MT-MF) addresses outliers. Combining the two methods captures significant additional performance benefits. The resulting mixture-tuned clutter matched filter (MT-CMF) shows effective performance on simulated and airborne datasets. The classical MNF transform was applied, followed by k-means clustering. Then, each cluster's mean, covariance, and the corresponding eigenvalues were estimated. This yields a cluster-specific matched filter estimate as well as a cluster-specific feasibility score to flag outlier false positives. The technology described is a proof of concept that may be employed in future target detection and mapping applications for remote imaging spectrometers. It is of most direct relevance to JPL proposals for airborne and orbital hyperspectral instruments. Applications include subpixel target detection in hyperspectral scenes for military surveillance. Earth science applications include mineralogical mapping, species discrimination for ecosystem health monitoring, and land use classification.
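
    The per-cluster scoring step described above can be sketched compactly: segment the background with k-means, then score each pixel with a matched filter built from its own cluster's statistics. The following Python fragment is an illustrative reconstruction on synthetic data, not the MT-CMF implementation itself; the MNF transform and the mixture-tuning feasibility score are omitted.

        # Cluster-specific (clutter) matched filter sketch: k-means segments the
        # background, then each pixel is scored with the matched filter built from
        # its own cluster's mean and covariance. Target spectrum and data are
        # synthetic placeholders.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(1)
        n_pixels, n_bands = 5000, 50
        data = rng.normal(size=(n_pixels, n_bands))
        target = rng.normal(size=n_bands)

        labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(data)
        scores = np.empty(n_pixels)
        for k in np.unique(labels):
            cluster = data[labels == k]
            mu = cluster.mean(axis=0)
            cov = np.cov(cluster, rowvar=False) + 1e-6 * np.eye(n_bands)
            cov_inv = np.linalg.inv(cov)
            d = target - mu
            norm = d @ cov_inv @ d                  # filter normalization
            scores[labels == k] = ((cluster - mu) @ cov_inv @ d) / norm
        print(scores.min(), scores.max())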

  15. Rotational electrical impedance tomography using electrodes with limited surface coverage provides window for multimodal sensing

    NASA Astrophysics Data System (ADS)

    Lehti-Polojärvi, Mari; Koskela, Olli; Seppänen, Aku; Figueiras, Edite; Hyttinen, Jari

    2018-02-01

    Electrical impedance tomography (EIT) is an imaging method that could become a valuable tool in multimodal applications. One challenge in simultaneous multimodal imaging is that typically the EIT electrodes cover a large portion of the object surface. This paper investigates the feasibility of rotational EIT (rEIT) in applications where electrodes cover only a limited angle of the surface of the object. In the studied rEIT, the object is rotated a full 360° during a set of measurements to increase the information content of the data. We call this approach limited angle full revolution rEIT (LAFR-rEIT). We test LAFR-rEIT setups in two-dimensional geometries with computational and experimental data. We use up to 256 rotational measurement positions, which requires a new way to solve the forward and inverse problem of rEIT. For this, we provide a modification, available for EIDORS, in the supplementary material. The computational results demonstrate that LAFR-rEIT with eight electrodes produces the same image quality as conventional 16-electrode rEIT when data from an adequate number of rotational measurement positions are used. Both computational and experimental results indicate that the novel LAFR-rEIT provides good EIT image quality with setups that have limited surface coverage and a small number of electrodes.
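
    The measurement geometry implied by the abstract can be sketched in a few lines: a small number of electrodes spanning a limited arc of the boundary, with the object rotated through a full revolution so that each electrode samples many effective angular positions. The arc width and step count below are assumptions for illustration only, not the paper's actual setup.

        # Sketch of an assumed LAFR-rEIT measurement geometry: eight electrodes on a
        # limited arc, object rotated in equal steps over a full revolution.
        import numpy as np

        n_electrodes = 8
        arc_deg = 90.0                               # limited surface coverage (assumed)
        n_rotations = 256

        electrode_angles = np.linspace(0.0, arc_deg, n_electrodes)
        rotation_steps = np.arange(n_rotations) * 360.0 / n_rotations

        # Effective electrode positions relative to the object, one row per rotation.
        effective = (electrode_angles[None, :] + rotation_steps[:, None]) % 360.0
        print(effective.shape)                        # (256, 8) virtual positions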

  16. Multidimensionally encoded magnetic resonance imaging.

    PubMed

    Lin, Fa-Hsuan

    2013-07-01

    Magnetic resonance imaging (MRI) typically achieves spatial encoding by measuring the projection of a q-dimensional object over q-dimensional spatial bases created by linear spatial encoding magnetic fields (SEMs). Recently, imaging strategies using nonlinear SEMs have demonstrated potential advantages for reconstructing images with higher spatiotemporal resolution and reducing peripheral nerve stimulation. In practice, nonlinear SEMs and linear SEMs can be used jointly to further improve the image reconstruction performance. Here, we propose the multidimensionally encoded (MDE) MRI to map a q-dimensional object onto a p-dimensional encoding space where p > q. MDE MRI is a theoretical framework linking imaging strategies using linear and nonlinear SEMs. Using a system of eight surface SEM coils with an eight-channel radiofrequency coil array, we demonstrate the five-dimensional MDE MRI for a two-dimensional object as a further generalization of PatLoc imaging and O-space imaging. We also present a method of optimizing spatial bases in MDE MRI. Results show that MDE MRI with a higher dimensional encoding space can reconstruct images more efficiently and with a smaller reconstruction error when the k-space sampling distribution and the number of samples are controlled. Copyright © 2012 Wiley Periodicals, Inc.
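
    The core idea of mapping a q-dimensional object onto a higher-dimensional encoding space can be illustrated with a toy forward model: phase encodes built from both linear and nonlinear spatial encoding fields, followed by a regularized least-squares reconstruction. The field shapes, sampling pattern, and phantom below are illustrative assumptions, not the paper's coil geometry or reconstruction method.

        # Toy multidimensional-encoding forward model and least-squares reconstruction.
        import numpy as np

        n = 16
        x1d = np.linspace(-1, 1, n, endpoint=False)
        x, y = np.meshgrid(x1d, x1d)
        obj = ((x**2 + y**2) < 0.5).astype(float)     # simple phantom

        # Encoding fields: two linear gradients and one nonlinear (quadratic) term.
        fx, fy, fq = x, y, x**2 - y**2

        # Each sample applies a phase mixing linear and nonlinear fields.
        ks = np.pi * np.arange(-n // 2, n // 2)
        rows = []
        for kx in ks:
            for ky in ks:
                for kq in (0.0, 2.0):                 # without / with the quadratic field
                    rows.append(np.exp(-1j * (kx * fx + ky * fy + kq * fq)).ravel())
        E = np.array(rows)                            # encoding matrix, shape (2*n*n, n*n)
        signal = E @ obj.ravel()

        # Regularized least-squares reconstruction from the redundant encoding.
        recon, *_ = np.linalg.lstsq(E, signal, rcond=1e-6)
        err = np.linalg.norm(np.abs(recon).reshape(n, n) - obj) / np.linalg.norm(obj)
        print(E.shape, round(err, 4))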

  17. Data mining graphene: Correlative analysis of structure and electronic degrees of freedom in graphenic monolayers with defects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ziatdinov, Maxim A.; Fujii, Shintaro; Kiguchi, Manabu

    The link between changes in a material's crystal structure and its mechanical, electronic, magnetic, and optical functionalities, known as the structure-property relationship, is the cornerstone of contemporary materials science research. Recent advances in scanning transmission electron and scanning probe microscopies (STEM and SPM) have opened an unprecedented path towards examining materials structure-property relationships at the single-impurity and atomic-configuration levels. Lacking, however, are statistics-based approaches for cross-correlation of structure and property variables obtained in different information channels of STEM and SPM experiments. Here we have designed an approach based on a combination of sliding-window Fast Fourier Transform, Pearson correlation matrix, and linear and kernel canonical correlation to study the relationship between lattice distortions and electron scattering from SPM data on graphene with defects. Our analysis revealed that the strength of coupling to strain differs between scattering channels, which can explain the coexistence of several quasiparticle interference patterns in the nanoscale regions of interest. In addition, the application of kernel functions allowed us to extract a non-linear component of the relationship between lattice strain and scattering intensity in graphene. Lastly, the outlined approach can be further utilized to analyze correlations in various multi-modal imaging techniques where the information of interest is spatially distributed and usually has a complex multidimensional nature.
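
    A minimal sketch of this correlative workflow is shown below: sliding-window FFTs turn a structural channel into local descriptors, which are then compared with a co-registered property channel via per-bin Pearson correlation and a one-component canonical correlation analysis. The synthetic arrays, window sizes, and the use of scikit-learn's linear CCA (the kernel variant is omitted) are assumptions for illustration, not the authors' implementation.

        # Sliding-window FFT descriptors correlated against a property map.
        import numpy as np
        from scipy.stats import pearsonr
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(2)
        topo = rng.normal(size=(128, 128))            # stand-in structural channel
        prop = rng.normal(size=(128, 128))            # stand-in property channel

        win, step = 8, 4
        struct_feats, prop_feats = [], []
        for i in range(0, 128 - win + 1, step):
            for j in range(0, 128 - win + 1, step):
                patch = topo[i:i + win, j:j + win]
                spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
                struct_feats.append(spectrum.ravel())
                prop_feats.append([prop[i:i + win, j:j + win].mean()])

        X, Y = np.array(struct_feats), np.array(prop_feats)

        # Pearson correlation of each FFT bin with the local property value.
        r = np.array([pearsonr(X[:, k], Y[:, 0])[0] for k in range(X.shape[1])])

        # One-component linear canonical correlation between the two blocks.
        cca = CCA(n_components=1).fit(X, Y)
        u, v = cca.transform(X, Y)
        print(np.nanmax(np.abs(r)), pearsonr(u[:, 0], v[:, 0])[0])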

  18. Data mining graphene: Correlative analysis of structure and electronic degrees of freedom in graphenic monolayers with defects

    DOE PAGES

    Ziatdinov, Maxim A.; Fujii, Shintaro; Kiguchi, Manabu; ...

    2016-11-09

    The link between changes in a material's crystal structure and its mechanical, electronic, magnetic, and optical functionalities, known as the structure-property relationship, is the cornerstone of contemporary materials science research. Recent advances in scanning transmission electron and scanning probe microscopies (STEM and SPM) have opened an unprecedented path towards examining materials structure-property relationships at the single-impurity and atomic-configuration levels. Lacking, however, are statistics-based approaches for cross-correlation of structure and property variables obtained in different information channels of STEM and SPM experiments. Here we have designed an approach based on a combination of sliding-window Fast Fourier Transform, Pearson correlation matrix, and linear and kernel canonical correlation to study the relationship between lattice distortions and electron scattering from SPM data on graphene with defects. Our analysis revealed that the strength of coupling to strain differs between scattering channels, which can explain the coexistence of several quasiparticle interference patterns in the nanoscale regions of interest. In addition, the application of kernel functions allowed us to extract a non-linear component of the relationship between lattice strain and scattering intensity in graphene. Lastly, the outlined approach can be further utilized to analyze correlations in various multi-modal imaging techniques where the information of interest is spatially distributed and usually has a complex multidimensional nature.

  19. Multimodal MRI in cerebral small vessel disease: its relationship with cognition and sensitivity to change over time.

    PubMed

    Nitkunan, Arani; Barrick, Tom R; Charlton, Rebecca A; Clark, Chris A; Markus, Hugh S

    2008-07-01

    Cerebral small vessel disease is the most common cause of vascular dementia. Interest in using MRI parameters as surrogate markers of disease to assess therapies is increasing. In patients with symptomatic sporadic small vessel disease, we determined which MRI parameters best correlated with cognitive function on cross-sectional analysis and which changed over a period of 1 year. Thirty-five patients with lacunar stroke and leukoaraiosis were recruited. They underwent multimodal MRI (brain volume, fluid-attenuated inversion recovery lesion load, lacunar infarct number, fractional anisotropy, and mean diffusivity from diffusion tensor imaging) and neuropsychological testing. Twenty-seven agreed to reattend for repeat MRI and neuropsychology at 1 year. An executive function score correlated most strongly with diffusion tensor imaging (fractional anisotropy histogram, r=-0.640, P=0.004) and brain volume (r=0.501, P=0.034). Associations with diffusion tensor imaging were stronger than with all other MRI parameters. On multiple regression of all imaging parameters, a model that contained brain volume and fractional anisotropy, together with age, gender, and premorbid IQ, explained 74% of the variance of the executive function score (P=0.0001). Changes in mean diffusivity and fractional anisotropy were detectable over the 1-year follow-up; in contrast, no change in other MRI parameters was detectable over this time period. A multimodal MRI model explains a large proportion of the variation in executive function in cerebral small vessel disease. In particular, diffusion tensor imaging correlates best with executive function and is the most sensitive to change. This supports the use of MRI, in particular diffusion tensor imaging, as a surrogate marker in treatment trials.
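
    The regression model reported above, executive function explained by brain volume and fractional anisotropy together with age, gender, and premorbid IQ, can be expressed in a few lines. The fragment below is a hedged sketch on synthetic data with assumed variable names; it illustrates the model form only, not the study's dataset or exact specification.

        # Multiple regression of an executive-function score on imaging and
        # demographic covariates, fitted with statsmodels on synthetic data.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(3)
        n = 35
        df = pd.DataFrame({
            "exec_score": rng.normal(size=n),
            "brain_volume": rng.normal(size=n),
            "fa_median": rng.normal(size=n),
            "age": rng.integers(55, 85, size=n),
            "male": rng.integers(0, 2, size=n),
            "premorbid_iq": rng.normal(100, 15, size=n),
        })

        model = smf.ols(
            "exec_score ~ brain_volume + fa_median + age + male + premorbid_iq",
            data=df,
        ).fit()
        print(model.rsquared, model.pvalues["fa_median"])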

  20. Analyzing multimodality tomographic images and associated regions of interest with MIDAS

    NASA Astrophysics Data System (ADS)

    Tsui, Wai-Hon; Rusinek, Henry; Van Gelder, Peter; Lebedev, Sergey

    2001-07-01

    This paper outlines the design and features incorporated in a software package for analyzing multi-modality tomographic images. The package MIDAS has been evolving for the past 15 years and is in wide use by researchers at New York University School of Medicine and a number of collaborating research sites. It was written in the C language and runs on Sun workstations and Intel PCs under the Solaris operating system. A unique strength of the MIDAS package lies in its ability to generate, manipulate and analyze a practically unlimited number of regions of interest (ROIs). These regions are automatically saved in an efficient data structure and linked to associated images. A wide selection of set theoretical (e.g. union, xor, difference), geometrical (e.g. move, rotate) and morphological (grow, peel) operators can be applied to an arbitrary selection of ROIs. ROIs are constructed as a result of image segmentation algorithms incorporated in MIDAS; they also can be drawn interactively. These ROI editing operations can be applied in either 2D or 3D mode. ROI statistics generated by MIDAS include means, standard deviations, centroids and histograms. Other image manipulation tools incorporated in MIDAS are multimodality and within modality coregistration methods (including landmark matching, surface fitting and Woods' correlation methods) and image reformatting methods (using nearest-neighbor, tri-linear or sinc interpolation). Applications of MIDAS include: (1) neuroanatomy research: marking anatomical structures in one orientation, reformatting marks to another orientation; (2) tissue volume measurements: brain structures (PET, MRI, CT), lung nodules (low dose CT), breast density (MRI); (3) analysis of functional (SPECT, PET) experiments by overlaying corresponding structural scans; (4) longitudinal studies: regional measurement of atrophy.
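
    The ROI algebra described for MIDAS maps naturally onto boolean-mask operations, as in the sketch below: set-theoretical operators (union, xor, difference), morphological grow/peel, and basic ROI statistics. This is a conceptual illustration with NumPy/SciPy on synthetic data and is unrelated to the MIDAS code base itself.

        # ROI set-theoretical and morphological operations expressed as boolean masks.
        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(4)
        image = rng.normal(100, 20, size=(128, 128))

        yy, xx = np.mgrid[:128, :128]
        roi_a = (xx - 50) ** 2 + (yy - 50) ** 2 < 30 ** 2   # circular ROI
        roi_b = (xx - 70) ** 2 + (yy - 70) ** 2 < 25 ** 2   # overlapping circular ROI

        union = roi_a | roi_b
        xor = roi_a ^ roi_b
        difference = roi_a & ~roi_b
        grown = ndimage.binary_dilation(roi_a, iterations=2)   # "grow"
        peeled = ndimage.binary_erosion(roi_a, iterations=2)   # "peel"

        # ROI statistics: mean, standard deviation, centroid, histogram.
        vals = image[union]
        centroid = ndimage.center_of_mass(union)
        hist, edges = np.histogram(vals, bins=32)
        print(vals.mean(), vals.std(), centroid, hist.sum())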
