Large Margin Multi-Modal Multi-Task Feature Extraction for Image Classification.
Yong Luo; Yonggang Wen; Dacheng Tao; Jie Gui; Chao Xu
2016-01-01
The features used in many image analysis-based applications are frequently of very high dimension. Feature extraction offers several advantages in high-dimensional cases, and many recent studies have used multi-task feature extraction approaches, which often outperform single-task feature extraction approaches. However, most of these methods are limited in that they only consider data represented by a single type of feature, even though images are usually represented by features from multiple modalities. We therefore propose a novel large margin multi-modal multi-task feature extraction (LM3FE) framework for handling multi-modal features for image classification. In particular, LM3FE simultaneously learns the feature extraction matrix for each modality and the modality combination coefficients. In this way, LM3FE not only handles correlated and noisy features, but also utilizes the complementarity of different modalities to further help reduce feature redundancy in each modality. The large margin principle employed also helps to extract strongly predictive features, so that they are more suitable for prediction (e.g., classification). An alternating algorithm is developed for problem optimization, and each subproblem can be efficiently solved. Experiments on two challenging real-world image data sets demonstrate the effectiveness and superiority of the proposed method.
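A minimal, heavily simplified sketch of the alternating structure described above: per-modality extraction matrices are updated, then the modality combination coefficients are re-estimated. A plain ridge fit to one-hot labels stands in for the large-margin term, so this illustrates the alternating idea rather than the authors' LM3FE objective; all names and shapes are hypothetical.

```python
import numpy as np

def lm3fe_like(X_list, Y, lam=1.0, n_iter=5):
    """Toy alternating scheme: per-modality extraction matrices W_m, then
    combination coefficients beta. In the full LM3FE objective the W update
    also depends on beta (hence the alternation); here a ridge fit to
    one-hot labels stands in for the large-margin term."""
    M = len(X_list)
    beta = np.ones(M) / M
    W = [None] * M
    for _ in range(n_iter):
        losses = np.zeros(M)
        for m, X in enumerate(X_list):
            A = X.T @ X + lam * np.eye(X.shape[1])
            W[m] = np.linalg.solve(A, X.T @ Y)          # extraction matrix
            losses[m] = np.linalg.norm(Y - X @ W[m]) ** 2
        inv = 1.0 / (losses + 1e-12)                    # reweight modalities
        beta = inv / inv.sum()
    return W, beta

# toy usage: two modalities, 100 samples, 3 classes (one-hot labels)
rng = np.random.default_rng(0)
X_list = [rng.normal(size=(100, 50)), rng.normal(size=(100, 80))]
Y = np.eye(3)[rng.integers(0, 3, size=100)]
W, beta = lm3fe_like(X_list, Y)
print("combination coefficients:", beta)
```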
Towards Omni-Tomography—Grand Fusion of Multiple Modalities for Simultaneous Interior Tomography
Wang, Ge; Zhang, Jie; Gao, Hao; Weir, Victor; Yu, Hengyong; Cong, Wenxiang; Xu, Xiaochen; Shen, Haiou; Bennett, James; Furth, Mark; Wang, Yue; Vannier, Michael
2012-01-01
We recently elevated interior tomography from its origin in computed tomography (CT) to a general tomographic principle, and proved its validity for other tomographic modalities including SPECT, MRI, and others. Here we propose “omni-tomography”, a novel concept for the grand fusion of multiple tomographic modalities for simultaneous data acquisition in a region of interest (ROI). Omni-tomography can be instrumental when physiological processes under investigation are multi-dimensional, multi-scale, multi-temporal and multi-parametric. Both preclinical and clinical studies now depend on in vivo tomography, often requiring separate evaluations by different imaging modalities. Over the past decade, two approaches have been used for multimodality fusion: Software based image registration and hybrid scanners such as PET-CT, PET-MRI, and SPECT-CT among others. While there are intrinsic limitations with both approaches, the main obstacle to the seamless fusion of multiple imaging modalities has been the bulkiness of each individual imager and the conflict of their physical (especially spatial) requirements. To address this challenge, omni-tomography is now unveiled as an emerging direction for biomedical imaging and systems biomedicine. PMID:22768108
Cox, Benjamin L; Mackie, Thomas R; Eliceiri, Kevin W
2015-01-01
Multi-modal imaging approaches of tumor metabolism that provide improved specificity, physiological relevance and spatial resolution would improve the diagnosis of tumors and the evaluation of tumor progression. Currently, the molecular probe FDG, glucose fluorinated with 18F at the 2-carbon, is the primary metabolic approach for clinical diagnostics with PET imaging. However, PET lacks the resolution necessary to yield intratumoral distributions of deoxyglucose at the cellular level. Multi-modal imaging could address this problem, but requires the development of new glucose analogs that are better suited for other imaging modalities. Several such analogs have been created and are reviewed here. Also reviewed are several multi-modal imaging studies that attempt to shed light on the cellular distribution of glucose analogs within tumors. Some of these studies are performed in vitro, while others are performed in vivo, in an animal model. The results from these studies reveal a visualization gap between the in vitro and in vivo studies that, if solved, could enable the early detection of tumors, high-resolution monitoring of tumors during treatment, and greater accuracy in the assessment of different imaging agents. PMID:25625022
NASA Astrophysics Data System (ADS)
Kang, Jeeun; Chang, Jin Ho; Wilson, Brian C.; Veilleux, Israel; Bai, Yanhui; DaCosta, Ralph; Kim, Kang; Ha, Seunghan; Lee, Jong Gun; Kim, Jeong Seok; Lee, Sang-Goo; Kim, Sun Mi; Lee, Hak Jong; Ahn, Young Bok; Han, Seunghee; Yoo, Yangmo; Song, Tai-Kyong
2015-03-01
Multi-modality imaging is beneficial for both preclinical and clinical applications as it enables complementary information from each modality to be obtained in a single procedure. In this paper, we report the design, fabrication, and testing of a novel tri-modal in vivo imaging system to exploit molecular/functional information from fluorescence (FL) and photoacoustic (PA) imaging as well as anatomical information from ultrasound (US) imaging. The same ultrasound transducer was used for both US and PA imaging, bringing the pulsed laser light into a compact probe by fiberoptic bundles. The FL subsystem is independent of the acoustic components but the front end that delivers and collects the light is physically integrated into the same probe. The tri-modal imaging system was implemented to provide each modality image in real time as well as co-registration of the images. The performance of the system was evaluated through phantom and in vivo animal experiments. The results demonstrate that combining the modalities does not significantly compromise the performance of each of the separate US, PA, and FL imaging techniques, while enabling multi-modality registration. The potential applications of this novel approach to multi-modality imaging range from preclinical research to clinical diagnosis, especially in detection/localization and surgical guidance of accessible solid tumors.
Whole-body diffusion-weighted MR image stitching and alignment to anatomical MRI
NASA Astrophysics Data System (ADS)
Ceranka, Jakub; Polfliet, Mathias; Lecouvet, Frederic; Michoux, Nicolas; Vandemeulebroucke, Jef
2017-02-01
Whole-body diffusion-weighted (WB-DW) MRI in combination with anatomical MRI has shown a great potential in bone and soft tissue tumour detection, evaluation of lymph nodes and treatment response assessment. Because of the vast body coverage, whole-body MRI is acquired in separate stations, which are subsequently combined into a whole-body image. However, inter-station and inter-modality image misalignments can occur due to image distortions and patient motion during acquisition, which may lead to inaccurate representations of patient anatomy and hinder visual assessment. Automated and accurate whole-body image formation and alignment of the multi-modal MRI images is therefore crucial. We investigated several registration approaches for the formation or stitching of the whole-body image stations, followed by a deformable alignment of the multi-modal whole-body images. We compared a pairwise approach, where diffusion-weighted (DW) image stations were sequentially aligned to a reference station (pelvis), to a groupwise approach, where all stations were simultaneously mapped to a common reference space while minimizing the overall transformation. For each, a choice of input images and corresponding metrics was investigated. Performance was evaluated by assessing the quality of the obtained whole-body images, and by verifying the accuracy of the alignment with whole-body anatomical sequences. The groupwise registration approach provided the best compromise between the formation of WB-DW images and multi-modal alignment. The fully automated method was found to be robust, making its use in the clinic feasible.
NASA Astrophysics Data System (ADS)
Peter, Jörg; Semmler, Wolfhard
2007-10-01
Alongside and in part motivated by recent advances in molecular diagnostics, the development of dual-modality instruments for patient and dedicated small animal imaging has gained attention from diverse research groups. The desire for such systems is high not only to link molecular or functional information with the anatomical structures, but also for detecting multiple molecular events simultaneously at shorter total acquisition times. While PET and SPECT have been integrated successfully with X-ray CT, the advance of optical imaging approaches (OT) and their integration into existing modalities carry a high application potential, particularly for imaging small animals. A multi-modality Monte Carlo (MC) simulation approach has been developed that is able to trace high-energy (keV) as well as optical (eV) photons concurrently within identical phantom representation models. We show that the two ray-tracing approaches for keV and eV photons can be integrated into a single simulation framework which enables both photon classes to be propagated through various geometry models representing both phantoms and scanners. The main advantage of such an integrated framework for our specific application is the investigation of novel tomographic multi-modality instrumentation intended for in vivo small animal imaging through time-resolved MC simulation upon identical phantom geometries. Design examples are provided for recently proposed SPECT-OT and PET-OT imaging systems.
Liu, Tongtong; Ge, Xifeng; Yu, Jinhua; Guo, Yi; Wang, Yuanyuan; Wang, Wenping; Cui, Ligang
2018-06-21
B-mode ultrasound (B-US) and strain elastography ultrasound (SE-US) images have the potential to distinguish thyroid tumors with different lymph node (LN) status. The purpose of our study is to investigate whether the application of multi-modality images including B-US and SE-US can improve the discriminability of thyroid tumor with LN metastasis based on a radiomics approach. Ultrasound (US) images including B-US and SE-US images of 75 papillary thyroid carcinoma (PTC) cases were retrospectively collected. A radiomics approach was developed in this study to estimate the LN status of PTC patients. The approach included image segmentation, quantitative feature extraction, feature selection and classification. Three feature sets were extracted: from B-US alone, from SE-US alone, and from the multi-modality combination of B-US and SE-US. They were used to evaluate the contribution of different modalities. A total of 684 radiomics features were extracted in our study. We used a sparse representation coefficient-based feature selection method with 10-bootstrap to reduce the dimension of the feature sets. A support vector machine with leave-one-out cross-validation was used to build the model for estimating LN status. Using features extracted from both B-US and SE-US, the radiomics-based model produced an area under the receiver operating characteristic curve (AUC) of 0.90, accuracy (ACC) of 0.85, sensitivity (SENS) of 0.77 and specificity (SPEC) of 0.88, which was better than using features extracted from B-US or SE-US separately. Multi-modality images provided more information in this radiomics study. The combined use of B-US and SE-US could improve the LN metastasis estimation accuracy for PTC patients.
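A sketch of the classification stage outlined above (feature pooling, selection, SVM with leave-one-out cross-validation, AUC and accuracy). A simple univariate filter stands in for the sparse-representation-coefficient feature selection with bootstrapping used by the authors, and all feature matrices and labels are hypothetical placeholders.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(0)
X_bus  = rng.normal(size=(75, 342))        # B-US radiomics features (placeholder)
X_seus = rng.normal(size=(75, 342))        # SE-US radiomics features (placeholder)
y = rng.integers(0, 2, size=75)            # LN metastasis status (0/1)

X = np.hstack([X_bus, X_seus])             # multi-modality feature set
clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=20),   # stand-in for sparse selection
                    SVC(kernel="linear", probability=True))

scores, preds = [], []
for tr, te in LeaveOneOut().split(X):
    clf.fit(X[tr], y[tr])
    scores.append(clf.predict_proba(X[te])[0, 1])
    preds.append(clf.predict(X[te])[0])

print("AUC:", roc_auc_score(y, scores))
print("ACC:", accuracy_score(y, preds))
```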
NASA Astrophysics Data System (ADS)
Smith, Edward M.; Wandtke, John; Robinson, Arvin E.
1999-07-01
The Medical Information, Communication and Archive System (MICAS) is a multi-modality integrated image management system that is seamlessly integrated with the Radiology Information System (RIS). This project was initiated in the summer of 1995, with the first phase being installed during the first half of 1997 and the second phase installed during the summer of 1998. Phase II enhancements include a permanent archive, automated workflow including a modality worklist, study caches, and NT diagnostic workstations, with all components adhering to Digital Imaging and Communications in Medicine (DICOM) standards. This multi-vendor phased approach to PACS implementation is designed as an enterprise-wide PACS to provide images and reports throughout our healthcare network. MICAS demonstrates that a multi-vendor open-system phased approach to PACS is feasible, cost-effective, and has significant advantages over a single-vendor implementation.
Patel, Meenal J; Andreescu, Carmen; Price, Julie C; Edelman, Kathryn L; Reynolds, Charles F; Aizenstein, Howard J
2015-10-01
Currently, depression diagnosis relies primarily on behavioral symptoms and signs, and treatment is guided by trial and error instead of evaluating associated underlying brain characteristics. Unlike past studies, we attempted to estimate accurate prediction models for late-life depression diagnosis and treatment response using multiple machine learning methods with inputs of multi-modal imaging and non-imaging whole brain and network-based features. Late-life depression patients (medicated post-recruitment) (n = 33) and older non-depressed individuals (n = 35) were recruited. Their demographics and cognitive ability scores were recorded, and brain characteristics were acquired using multi-modal magnetic resonance imaging pretreatment. Linear and nonlinear learning methods were tested for estimating accurate prediction models. A learning method called alternating decision trees estimated the most accurate prediction models for late-life depression diagnosis (87.27% accuracy) and treatment response (89.47% accuracy). The diagnosis model included measures of age, Mini-Mental State Examination score, and structural imaging (e.g. whole brain atrophy and global white matter hyperintensity burden). The treatment response model included measures of structural and functional connectivity. Combinations of multi-modal imaging and/or non-imaging measures may help better predict late-life depression diagnosis and treatment response. As a preliminary observation, we speculate that the results may also suggest that different underlying brain characteristics defined by multi-modal imaging measures, rather than region-based differences, are associated with depression versus depression recovery, because to our knowledge this is the first depression study to accurately predict both using the same approach. These findings may help better understand late-life depression and identify preliminary steps toward personalized late-life depression treatment. Copyright © 2015 John Wiley & Sons, Ltd.
Multi-Modality Phantom Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huber, Jennifer S.; Peng, Qiyu; Moses, William W.
2009-03-20
Multi-modality imaging has an increasing role in the diagnosis and treatment of a large number of diseases, particularly if both functional and anatomical information are acquired and accurately co-registered. Hence, there is a resulting need for multi-modality phantoms in order to validate image co-registration and calibrate the imaging systems. We present our PET-ultrasound phantom development, including PET and ultrasound images of a simple prostate phantom. We use agar and gelatin mixed with a radioactive solution. We also present our development of custom multi-modality phantoms that are compatible with PET, transrectal ultrasound (TRUS), MRI and CT imaging. We describe both our selection of tissue-mimicking materials and phantom construction procedures. These custom PET-TRUS-CT-MRI prostate phantoms use agar-gelatin radioactive mixtures with additional contrast agents and preservatives. We show multi-modality images of these custom prostate phantoms, as well as discuss phantom construction alternatives. Although we are currently focused on prostate imaging, this phantom development is applicable to many multi-modality imaging applications.
Multi-Modal Nano-Probes for Radionuclide and 5-color Near Infrared Optical Lymphatic Imaging
Kobayashi, Hisataka; Koyama, Yoshinori; Barrett, Tristan; Hama, Yukihiro; Regino, Celeste A. S.; Shin, In Soo; Jang, Beom-Su; Le, Nhat; Paik, Chang H.; Choyke, Peter L.; Urano, Yasuteru
2008-01-01
Current contrast agents generally have one function and can only be imaged in monochrome, therefore, the majority of imaging methods can only impart uniparametric information. A single nano-particle has the potential to be loaded with multiple payloads. Such multi-modality probes have the ability to be imaged by more than one imaging technique, which could compensate for the weakness or even combine the advantages of each individual modality. Furthermore, optical imaging using different optical probes enables us to achieve multi-color in vivo imaging, wherein multiple parameters can be read from a single image. To allow differentiation of multiple optical signals in vivo, each probe should have a close but different near infrared emission. To this end, we synthesized nano-probes with multi-modal and multi-color potential, which employed a polyamidoamine dendrimer platform linked to both radionuclides and optical probes, permitting dual-modality scintigraphic and 5-color near infrared optical lymphatic imaging using a multiple excitation spectrally-resolved fluorescence imaging technique. PMID:19079788
Fully Convolutional Neural Networks Improve Abdominal Organ Segmentation.
Bobo, Meg F; Bao, Shunxing; Huo, Yuankai; Yao, Yuang; Virostko, Jack; Plassard, Andrew J; Lyu, Ilwoo; Assad, Albert; Abramson, Richard G; Hilmes, Melissa A; Landman, Bennett A
2018-03-01
Abdominal image segmentation is a challenging, yet important clinical problem. Variations in body size, position, and relative organ positions greatly complicate the segmentation process. Historically, multi-atlas methods have achieved leading results across imaging modalities and anatomical targets. However, deep learning is rapidly overtaking classical approaches for image segmentation. Recently, Zhou et al. showed that fully convolutional networks produce excellent results in abdominal organ segmentation of computed tomography (CT) scans. Yet, deep learning approaches have not been applied to whole abdomen magnetic resonance imaging (MRI) segmentation. Herein, we evaluate the applicability of an existing fully convolutional neural network (FCNN) designed for CT imaging to segment abdominal organs on T2-weighted (T2w) MRIs with two examples. In the primary example, we compare a classical multi-atlas approach with FCNN on forty-five T2w MRIs acquired from splenomegaly patients with five organs labeled (liver, spleen, left kidney, right kidney, and stomach). Thirty-six images were used for training while nine were used for testing. The FCNN resulted in a Dice similarity coefficient (DSC) of 0.930 in spleens, 0.730 in left kidneys, 0.780 in right kidneys, 0.913 in livers, and 0.556 in stomachs. The performance measures for livers, spleens, right kidneys, and stomachs were significantly better than multi-atlas (p < 0.05, Wilcoxon rank-sum test). In a secondary example, we compare the multi-atlas approach with FCNN on 138 distinct T2w MRIs with manually labeled pancreases (one label). On the pancreas dataset, the FCNN resulted in a median DSC of 0.691 in pancreases versus 0.287 for multi-atlas. The results are highly promising given relatively limited training data and without specific training of the FCNN model, and illustrate the potential of deep learning approaches to transcend imaging modalities.
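For reference, a small sketch of the evaluation quantities cited above: the Dice similarity coefficient between a predicted and a reference label volume, and the Wilcoxon rank-sum test comparing per-subject DSCs of two methods. Inputs are hypothetical placeholders, not data from the study.

```python
import numpy as np
from scipy.stats import ranksums

def dice(pred, ref, label):
    """Dice similarity coefficient for a given organ label."""
    a = (pred == label)
    b = (ref == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

rng = np.random.default_rng(0)
pred = rng.integers(0, 6, size=(64, 64, 32))   # predicted labels (placeholder)
ref  = rng.integers(0, 6, size=(64, 64, 32))   # reference labels (placeholder)
print("spleen DSC:", dice(pred, ref, label=2))

# compare per-subject DSC distributions of two methods
dsc_fcnn  = rng.uniform(0.8, 0.95, size=9)
dsc_atlas = rng.uniform(0.6, 0.85, size=9)
stat, p = ranksums(dsc_fcnn, dsc_atlas)
print("Wilcoxon rank-sum p-value:", p)
```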
Cross contrast multi-channel image registration using image synthesis for MR brain images.
Chen, Min; Carass, Aaron; Jog, Amod; Lee, Junghoon; Roy, Snehashis; Prince, Jerry L
2017-02-01
Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images with different modalities using modality independent features or information theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem using a two-channel registration algorithm capable of using mono-modal similarity measures such as sum of squared differences or cross-correlation. To make it possible to use these same-modality measures, image synthesis is used to create proxy images for the opposite modality as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the multi-channel deformable registration algorithm that is used. With a single exception, all results demonstrated improvements when compared against single channel registrations using the same algorithm with mutual information. Copyright © 2016 Elsevier B.V. All rights reserved.
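A toy sketch of the two-channel idea under stated assumptions: each image is stacked with a synthesized proxy of the other modality, and a mono-modal metric (here SSD) is summed over the channels. A brute-force integer translation search stands in for the deformable registration algorithms actually evaluated in the paper; the images and the synthesis are synthetic placeholders.

```python
import numpy as np
from scipy.ndimage import shift

def channel_ssd(fixed_channels, moving_channels, offset):
    """Sum of squared differences accumulated over all channels."""
    cost = 0.0
    for f, m in zip(fixed_channels, moving_channels):
        cost += np.sum((f - shift(m, offset, order=1, mode="nearest")) ** 2)
    return cost

def register_translation(fixed_channels, moving_channels, search=5):
    """Brute-force integer translation minimizing the multi-channel SSD."""
    offsets = [(dy, dx) for dy in range(-search, search + 1)
                        for dx in range(-search, search + 1)]
    costs = [channel_ssd(fixed_channels, moving_channels, o) for o in offsets]
    return offsets[int(np.argmin(costs))]

# hypothetical example: a T1-like blob and a synthesized "T2" proxy with
# inverted contrast; the moving pair is offset by (2, -3)
yy, xx = np.mgrid[0:64, 0:64]
t1 = np.exp(-((xx - 32) ** 2 + (yy - 28) ** 2) / 120.0)
synth_t2 = 1.0 - t1
moving_t2 = shift(synth_t2, (2, -3), order=1, mode="nearest")
moving_synth_t1 = shift(t1, (2, -3), order=1, mode="nearest")
# channel 1: T1 vs synthesized T1; channel 2: synthesized T2 vs T2
print(register_translation([t1, synth_t2], [moving_synth_t1, moving_t2]))
```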
Calhoun, Vince D; Sui, Jing
2016-01-01
It is becoming increasingly clear that combining multi-modal brain imaging data is able to provide more information for individual subjects by exploiting the rich multimodal information that exists. However, the number of studies that do true multimodal fusion (i.e. capitalizing on joint information among modalities) is still remarkably small given the known benefits. In part, this is because multi-modal studies require broader expertise in collecting, analyzing, and interpreting the results than do unimodal studies. In this paper, we start by introducing the basic reasons why multimodal data fusion is important and what it can do, and importantly how it can help us avoid wrong conclusions and help compensate for imperfect brain imaging studies. We also discuss the challenges that need to be confronted for such approaches to be more widely applied by the community. We then provide a review of the diverse studies that have used multimodal data fusion (primarily focused on psychosis) as well as provide an introduction to some of the existing analytic approaches. Finally, we discuss some up-and-coming approaches to multi-modal fusion including deep learning and multimodal classification which show considerable promise. Our conclusion is that multimodal data fusion is rapidly growing, but it is still underutilized. The complexity of the human brain coupled with the incomplete measurement provided by existing imaging technology makes multimodal fusion essential in order to mitigate against misdirection and hopefully provide a key to finding the missing link(s) in complex mental illness. PMID:27347565
A multi-image approach to CADx of breast cancer with integration into PACS
NASA Astrophysics Data System (ADS)
Elter, Matthias; Wittenberg, Thomas; Schulz-Wendtland, Rüdiger; Deserno, Thomas M.
2009-02-01
While screening mammography is accepted as the most adequate technique for the early detection of breast cancer, its low positive predictive value leads to many breast biopsies performed on benign lesions. Therefore, we have previously developed a knowledge-based system for computer-aided diagnosis (CADx) of mammographic lesions. It supports the radiologist in the discrimination of benign and malignant lesions. So far, our approach operates on the lesion level and employs the paradigm of content-based image retrieval (CBIR). Similar lesions with known diagnosis are retrieved automatically from a library of references. However, radiologists base their diagnostic decisions on additional resources, such as related mammographic projections, other modalities (e.g. ultrasound, MRI), and clinical data. Nonetheless, most CADx systems disregard the relation between the craniocaudal (CC) and mediolateral-oblique (MLO) views of conventional mammography. Therefore, we extend our approach to the full case level: (i) Multi-frame features are developed that jointly describe a lesion in different views of mammography. Taking into account the geometric relation between different images, these features can also be extracted from multi-modal data; (ii) the CADx system architecture is extended appropriately; (iii) the CADx system is integrated into the radiology information system (RIS) and the picture archiving and communication system (PACS). Here, the framework for image retrieval in medical applications (IRMA) is used to support access to the patient's health care record. Of particular interest is the application of the proposed CADx system to digital breast tomosynthesis (DBT), which has the potential to succeed digital mammography as the standard technique for breast cancer screening. The proposed system is a natural extension of CADx approaches that integrate only two modalities. However, we are still collecting a large enough database of breast lesions with images from multiple modalities to evaluate the benefits of the proposed approach.
A practical salient region feature based 3D multi-modality registration method for medical images
NASA Astrophysics Data System (ADS)
Hahn, Dieter A.; Wolz, Gabriele; Sun, Yiyong; Hornegger, Joachim; Sauer, Frank; Kuwert, Torsten; Xu, Chenyang
2006-03-01
We present a novel representation of 3D salient region features and its integration into a hybrid rigid-body registration framework. We adopt scale, translation and rotation invariance properties of those intrinsic 3D features to estimate a transform between underlying mono- or multi-modal 3D medical images. Our method combines advantageous aspects of both feature- and intensity-based approaches and consists of three steps: an automatic extraction of a set of 3D salient region features on each image, a robust estimation of correspondences and their sub-pixel accurate refinement with outliers elimination. We propose a region-growing based approach for the extraction of 3D salient region features, a solution to the problem of feature clustering and a reduction of the correspondence search space complexity. Results of the developed algorithm are presented for both mono- and multi-modal intra-patient 3D image pairs (CT, PET and SPECT) that have been acquired for change detection, tumor localization, and time based intra-person studies. The accuracy of the method is clinically evaluated by a medical expert with an approach that measures the distance between a set of selected corresponding points consisting of both anatomical and functional structures or lesion sites. This demonstrates the robustness of the proposed method to image overlap, missing information and artefacts. We conclude by discussing potential medical applications and possibilities for integration into a non-rigid registration framework.
A versatile clearing agent for multi-modal brain imaging
Costantini, Irene; Ghobril, Jean-Pierre; Di Giovanna, Antonino Paolo; Mascaro, Anna Letizia Allegra; Silvestri, Ludovico; Müllenbroich, Marie Caroline; Onofri, Leonardo; Conti, Valerio; Vanzi, Francesco; Sacconi, Leonardo; Guerrini, Renzo; Markram, Henry; Iannello, Giulio; Pavone, Francesco Saverio
2015-01-01
Extensive mapping of neuronal connections in the central nervous system requires high-throughput µm-scale imaging of large volumes. In recent years, different approaches have been developed to overcome the limitations due to tissue light scattering. These methods are generally developed to improve the performance of a specific imaging modality, thus limiting comprehensive neuroanatomical exploration by multi-modal optical techniques. Here, we introduce a versatile brain clearing agent (2,2′-thiodiethanol; TDE) suitable for various applications and imaging techniques. TDE is cost-efficient, water-soluble and low-viscous and, more importantly, it preserves fluorescence, is compatible with immunostaining and does not cause deformations at sub-cellular level. We demonstrate the effectiveness of this method in different applications: in fixed samples by imaging a whole mouse hippocampus with serial two-photon tomography; in combination with CLARITY by reconstructing an entire mouse brain with light sheet microscopy and in translational research by imaging immunostained human dysplastic brain tissue. PMID:25950610
Multi-modal Registration for Correlative Microscopy using Image Analogies
Cao, Tian; Zach, Christopher; Modla, Shannon; Powell, Debbie; Czymmek, Kirk; Niethammer, Marc
2014-01-01
Correlative microscopy is a methodology combining the functionality of light microscopy with the high resolution of electron microscopy and other microscopy technologies for the same biological specimen. In this paper, we propose an image registration method for correlative microscopy, which is challenging due to the distinct appearance of biological structures when imaged with different modalities. Our method is based on image analogies and allows images of a given modality to be transformed into the appearance-space of another modality. Hence, the registration between two different types of microscopy images can be reduced to a mono-modality image registration. We use a sparse representation model to obtain image analogies. The method makes use of corresponding image training patches of two different imaging modalities to learn a dictionary capturing appearance relations. We test our approach on backscattered electron (BSE) scanning electron microscopy (SEM)/confocal and transmission electron microscopy (TEM)/confocal images. We perform rigid, affine, and deformable registration via B-splines and show improvements over direct registration using both mutual information and sum of squared differences similarity measures to account for differences in image appearance. PMID:24387943
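A simplified coupled-dictionary sketch of the image-analogy idea, assuming paired training patches: a dictionary is learned over concatenated patch pairs from the two modalities, and a new patch from one modality is sparse-coded against its half of the dictionary to synthesize its appearance in the other. This is an illustration, not the authors' exact formulation; the patch data are random placeholders.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
n_patches, p = 500, 49                       # 7x7 patches, flattened
patches_A = rng.normal(size=(n_patches, p))  # e.g. TEM patches (placeholder)
patches_B = 0.3 * patches_A + 0.1 * rng.normal(size=(n_patches, p))  # confocal

joint = np.hstack([patches_A, patches_B])    # concatenated training pairs
dico = DictionaryLearning(n_components=64, alpha=1.0, max_iter=20,
                          transform_algorithm="lasso_lars").fit(joint)
D = dico.components_                         # shape (64, 2p)
D_A, D_B = D[:, :p], D[:, p:]                # modality-specific halves

# synthesize the modality-B appearance of a new modality-A patch
new_A = rng.normal(size=(1, p))
codes = sparse_encode(new_A, D_A, algorithm="lasso_lars", alpha=1.0)
synthesized_B = codes @ D_B
print(synthesized_B.shape)
```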
Multimodal Imaging of Human Brain Activity: Rational, Biophysical Aspects and Modes of Integration
Blinowska, Katarzyna; Müller-Putz, Gernot; Kaiser, Vera; Astolfi, Laura; Vanderperren, Katrien; Van Huffel, Sabine; Lemieux, Louis
2009-01-01
Until relatively recently the vast majority of imaging and electrophysiological studies of human brain activity have relied on single-modality measurements usually correlated with readily observable or experimentally modified behavioural or brain state patterns. Multi-modal imaging is the concept of bringing together observations or measurements from different instruments. We discuss the aims of multi-modal imaging and the ways in which it can be accomplished using representative applications. Given the importance of haemodynamic and electrophysiological signals in current multi-modal imaging applications, we also review some of the basic physiology relevant to understanding their relationship. PMID:19547657
NASA Astrophysics Data System (ADS)
Murukeshan, Vadakke M.; Hoong Ta, Lim
2014-11-01
Medical diagnostics has in the recent past seen a challenging trend toward dual- and multi-modality imaging for implementing better diagnostic procedures. The changes in tissues in the early disease stages are often subtle and can occur beneath the tissue surface. In most of these cases, conventional types of optical medical imaging may not be able to detect these changes easily due to a penetration depth of the order of 1 mm. Each imaging modality has its own advantages and limitations, and the use of a single modality is not suitable for every diagnostic application. Therefore the need for multi- or hybrid-modality imaging arises. Combining more than one imaging modality overcomes the limitations of the individual imaging methods and integrates their respective advantages into a single setting. In this context, this paper focuses on the research and development of two multi-modality imaging platforms. The first platform combines ultrasound and photoacoustic imaging for diagnostic applications in the eye. The second platform consists of optical hyperspectral and photoacoustic imaging for diagnostic applications in the colon. Photoacoustic imaging is used as one of the modalities in both platforms as it can offer deeper penetration depth compared to optical imaging. The optical engineering and research challenges in developing the dual/multi-modality platforms are discussed, followed by initial results validating the proposed scheme. The proposed schemes offer high spatial and spectral resolution imaging and sensing, and are expected to offer potential biomedical imaging solutions in the near future.
Huang, Yawen; Shao, Ling; Frangi, Alejandro F
2018-03-01
Multi-modality medical imaging is increasingly used for comprehensive assessment of complex diseases in either diagnostic examinations or as part of medical research trials. Different imaging modalities provide complementary information about living tissues. However, multi-modal examinations are not always possible due to adverse factors, such as patient discomfort, increased cost, prolonged scanning time, and scanner unavailability. Additionally, in large imaging studies, incomplete records are not uncommon owing to image artifacts, data corruption or data loss, which compromise the potential of multi-modal acquisitions. In this paper, we propose a weakly coupled and geometry co-regularized joint dictionary learning method to address the problem of cross-modality synthesis while considering the fact that collecting large amounts of training data is often impractical. Our learning stage requires only a few registered multi-modality image pairs as training data. To employ both paired images and a large set of unpaired data, a cross-modality image matching criterion is proposed. Then, we propose a unified model by integrating such a criterion into the joint dictionary learning and the observed common feature space for associating cross-modality data for the purpose of synthesis. Furthermore, two regularization terms are added to construct robust sparse representations. Our experimental results demonstrate superior performance of the proposed model over state-of-the-art methods.
A gantry-based tri-modality system for bioluminescence tomography
Yan, Han; Lin, Yuting; Barber, William C.; Unlu, Mehmet Burcin; Gulsen, Gultekin
2012-01-01
A gantry-based tri-modality system that combines bioluminescence (BLT), diffuse optical (DOT), and x-ray computed tomography (XCT) into the same setting is presented here. The purpose of this system is to perform bioluminescence tomography using a multi-modality imaging approach. As parts of this hybrid system, XCT and DOT provide anatomical information and background optical property maps. This structural and functional a priori information is used to guide and restrain bioluminescence reconstruction algorithm and ultimately improve the BLT results. The performance of the combined system is evaluated using multi-modality phantoms. In particular, a cylindrical heterogeneous multi-modality phantom that contains regions with higher optical absorption and x-ray attenuation is constructed. We showed that a 1.5 mm diameter bioluminescence inclusion can be localized accurately with the functional a priori information while its source strength can be recovered more accurately using both structural and the functional a priori information. PMID:22559540
Li, Xiaomeng; Dou, Qi; Chen, Hao; Fu, Chi-Wing; Qi, Xiaojuan; Belavý, Daniel L; Armbrecht, Gabriele; Felsenberg, Dieter; Zheng, Guoyan; Heng, Pheng-Ann
2018-04-01
Intervertebral discs (IVDs) are small joints that lie between adjacent vertebrae. The localization and segmentation of IVDs are important for spine disease diagnosis and measurement quantification. However, manual annotation is time-consuming and error-prone with limited reproducibility, particularly for volumetric data. In this work, our goal is to develop an automatic and accurate method based on fully convolutional networks (FCN) for the localization and segmentation of IVDs from multi-modality 3D MR data. Compared with single modality data, multi-modality MR images provide complementary contextual information, which contributes to better recognition performance. However, how to effectively integrate such multi-modality information to generate accurate segmentation results remains to be further explored. In this paper, we present a novel multi-scale and modality dropout learning framework to locate and segment IVDs from four-modality MR images. First, we design a 3D multi-scale context fully convolutional network, which processes the input data in multiple scales of context and then merges the high-level features to enhance the representation capability of the network for handling the scale variation of anatomical structures. Second, to harness the complementary information from different modalities, we present a random modality voxel dropout strategy which alleviates the co-adaption issue and increases the discriminative capability of the network. Our method achieved the 1st place in the MICCAI challenge on automatic localization and segmentation of IVDs from multi-modality MR images, with a mean segmentation Dice coefficient of 91.2% and a mean localization error of 0.62 mm. We further conduct extensive experiments on the extended dataset to validate our method. We demonstrate that the proposed modality dropout strategy with multi-modality images as contextual information improved the segmentation accuracy significantly. Furthermore, experiments conducted on extended data collected from two different time points demonstrate the efficacy of our method on tracking the morphological changes in a longitudinal study. Copyright © 2018 Elsevier B.V. All rights reserved.
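A minimal numpy sketch of a random modality dropout augmentation in the spirit of the strategy described above (assumed behaviour, not the published implementation): whole modality channels and individual voxels are zeroed at training time so the network cannot co-adapt to any single modality.

```python
import numpy as np

def modality_voxel_dropout(volume, p_modality=0.25, p_voxel=0.1, rng=None):
    """volume: array shaped (modalities, depth, height, width)."""
    rng = rng or np.random.default_rng()
    out = volume.copy()
    n_mod = volume.shape[0]
    keep = rng.random(n_mod) >= p_modality           # drop whole modalities
    if not keep.any():                               # always keep at least one
        keep[rng.integers(n_mod)] = True
    out[~keep] = 0.0
    mask = rng.random(out.shape) >= p_voxel          # drop individual voxels
    return out * mask

vol = np.random.default_rng(0).normal(size=(4, 8, 64, 64))  # 4 MR modalities
aug = modality_voxel_dropout(vol)
print("fraction of zeroed voxels:", (aug == 0).mean())
```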
Peissig, Peggy L; Rasmussen, Luke V; Berg, Richard L; Linneman, James G; McCarty, Catherine A; Waudby, Carol; Chen, Lin; Denny, Joshua C; Wilke, Russell A; Pathak, Jyotishman; Carrell, David; Kho, Abel N; Starren, Justin B
2012-01-01
There is increasing interest in using electronic health records (EHRs) to identify subjects for genomic association studies, due in part to the availability of large amounts of clinical data and the expected cost efficiencies of subject identification. We describe the construction and validation of an EHR-based algorithm to identify subjects with age-related cataracts. We used a multi-modal strategy consisting of structured database querying, natural language processing on free-text documents, and optical character recognition on scanned clinical images to identify cataract subjects and related cataract attributes. Extensive validation on 3657 subjects compared the multi-modal results to manual chart review. The algorithm was also implemented at participating electronic MEdical Records and GEnomics (eMERGE) institutions. An EHR-based cataract phenotyping algorithm was successfully developed and validated, resulting in positive predictive values (PPVs) >95%. The multi-modal approach increased the identification of cataract subject attributes by a factor of three compared to single-mode approaches while maintaining high PPV. Components of the cataract algorithm were successfully deployed at three other institutions with similar accuracy. A multi-modal strategy incorporating optical character recognition and natural language processing may increase the number of cases identified while maintaining similar PPVs. Such algorithms, however, require that the needed information be embedded within clinical documents. We have demonstrated that algorithms to identify and characterize cataracts can be developed utilizing data collected via the EHR. These algorithms provide a high level of accuracy even when implemented across multiple EHRs and institutional boundaries.
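A toy sketch of how the three evidence sources (structured query, NLP on free text, OCR on scanned images) might be combined into a case flag and scored against manual chart review with a positive predictive value; all records, field names, and the combination rule are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Subject:
    structured_dx: bool    # cataract code found in structured data
    nlp_mention: bool      # cataract mention found by NLP
    ocr_mention: bool      # cataract attribute found by OCR
    chart_review: bool     # gold standard from manual review

def algorithm_positive(s: Subject) -> bool:
    # hypothetical multi-modal rule: any single source flags the subject
    return s.structured_dx or s.nlp_mention or s.ocr_mention

subjects = [
    Subject(True,  True,  False, True),
    Subject(False, True,  True,  True),
    Subject(False, False, True,  False),
    Subject(False, False, False, False),
]

flagged = [s for s in subjects if algorithm_positive(s)]
tp = sum(s.chart_review for s in flagged)
ppv = tp / len(flagged) if flagged else float("nan")
print(f"PPV = {ppv:.2f}")
```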
NASA Astrophysics Data System (ADS)
Smith, Edward M.; Wright, Jeffrey; Fontaine, Marc T.; Robinson, Arvin E.
1998-07-01
The Medical Information, Communication and Archive System (MICAS) is a multi-vendor incremental approach to PACS. MICAS is a multi-modality integrated image management system that incorporates the radiology information system (RIS) and radiology image database (RID) with future 'hooks' to other hospital databases. Even though this approach to PACS is more risky than a single-vendor turn-key approach, it offers significant advantages. The vendors involved in the initial phase of MICAS are IDX Corp., ImageLabs, Inc. and Digital Equipment Corp (DEC). The network architecture operates at 100 MBits per sec except between the modalities and the stackable intelligent switch which is used to segment MICAS by modality. Each modality segment contains the acquisition engine for the modality, a temporary archive and one or more diagnostic workstations. All archived studies are available at all workstations, but there is no permanent archive at this time. At present, the RIS vendor is responsible for study acquisition and workflow as well as maintenance of the temporary archive. Management of study acquisition, workflow and the permanent archive will become the responsibility of the archive vendor when the archive is installed in the second quarter of 1998. The modalities currently interfaced to MICAS are MRI, CT and a Howtek film digitizer with Nuclear Medicine and computed radiography (CR) to be added when the permanent archive is installed. There are six dual-monitor diagnostic workstations which use ImageLabs Shared Vision viewer software located in MRI, CT, Nuclear Medicine, musculoskeletal reading areas and two in Radiology's main reading area. One of the major lessons learned to date is that the permanent archive should have been part of the initial MICAS installation and the archive vendor should have been responsible for image acquisition rather than the RIS vendor. Currently an archive vendor is being selected who will be responsible for the management of the archive plus the HIS/RIS interface, image acquisition, modality work list manager and interfacing to the current DICOM viewer software. The next phase of MICAS will include interfacing ultrasound, locating servers outside of the Radiology LAN to support the distribution of images and reports to the clinical floors and physician offices both within and outside of the University of Rochester Medical Center (URMC) campus and the teaching archive.
NASA Astrophysics Data System (ADS)
Kirby, Richard; Whitaker, Ross
2016-09-01
In recent years, the use of multi-modal camera rigs consisting of an RGB sensor and an infrared (IR) sensor have become increasingly popular for use in surveillance and robotics applications. The advantages of using multi-modal camera rigs include improved foreground/background segmentation, wider range of lighting conditions under which the system works, and richer information (e.g. visible light and heat signature) for target identification. However, the traditional computer vision method of mapping pairs of images using pixel intensities or image features is often not possible with an RGB/IR image pair. We introduce a novel method to overcome the lack of common features in RGB/IR image pairs by using a variational methods optimization algorithm to map the optical flow fields computed from different wavelength images. This results in the alignment of the flow fields, which in turn produce correspondences similar to those found in a stereo RGB/RGB camera rig using pixel intensities or image features. In addition to aligning the different wavelength images, these correspondences are used to generate dense disparity and depth maps. We obtain accuracies similar to other multi-modal image alignment methodologies as long as the scene contains sufficient depth variations, although a direct comparison is not possible because of the lack of standard image sets from moving multi-modal camera rigs. We test our method on synthetic optical flow fields and on real image sequences that we created with a multi-modal binocular stereo RGB/IR camera rig. We determine our method's accuracy by comparing against a ground truth.
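A crude sketch of the core idea, assuming grayscale frame pairs: dense optical flow is computed independently in the RGB and IR sequences with OpenCV's Farneback method, and a single horizontal disparity is estimated by matching the two flow fields with an SSD search. This block-matching search merely stands in for the paper's variational optimization; the frame arrays are placeholders.

```python
import numpy as np
import cv2

def dense_flow(prev_gray, next_gray):
    """Dense optical flow between two 8-bit grayscale frames."""
    return cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

def flow_disparity(flow_rgb, flow_ir, max_disp=16):
    """Global horizontal disparity that best aligns the two flow magnitudes."""
    mag_rgb = np.linalg.norm(flow_rgb, axis=2)
    mag_ir = np.linalg.norm(flow_ir, axis=2)
    costs = []
    for d in range(max_disp + 1):                  # shift IR flow left by d
        shifted = np.roll(mag_ir, -d, axis=1)
        costs.append(np.sum((mag_rgb[:, :-max_disp] -
                             shifted[:, :-max_disp]) ** 2))
    return int(np.argmin(costs))

rng = np.random.default_rng(0)
rgb0, rgb1 = (rng.integers(0, 255, (120, 160), dtype=np.uint8) for _ in range(2))
ir0, ir1 = (rng.integers(0, 255, (120, 160), dtype=np.uint8) for _ in range(2))
d = flow_disparity(dense_flow(rgb0, rgb1), dense_flow(ir0, ir1))
print("estimated disparity:", d)
```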
NASA Astrophysics Data System (ADS)
Liu, Xiaonan; Chen, Kewei; Wu, Teresa; Weidman, David; Lure, Fleming; Li, Jing
2018-02-01
Alzheimer's Disease (AD) is the most common cause of dementia and currently has no cure. Treatments targeting early stages of AD such as Mild Cognitive Impairment (MCI) may be most effective at decelerating AD and are thus attracting increasing attention. However, MCI has substantial heterogeneity in that it can be caused by various underlying conditions, not only AD. To detect MCI due to AD, the NIA-AA published updated consensus criteria in 2011, in which the use of multi-modality images was highlighted as one of the most promising methods. It is of great interest to develop a CAD system based on automatic, quantitative analysis of multi-modality images and machine learning algorithms to help physicians more adequately diagnose MCI due to AD. The challenge, however, is that multi-modality images are not universally available for many patients due to cost, access, safety, and lack of consent. We developed a novel Missing Modality Transfer Learning (MMTL) algorithm capable of utilizing whatever imaging modalities are available for an MCI patient to diagnose the patient's likelihood of MCI due to AD. Furthermore, we integrated MMTL with radiomics steps including image processing, feature extraction, and feature screening, and a post-processing step for uncertainty quantification (UQ), and developed a CAD system called "ADMultiImg" to assist clinical diagnosis of MCI due to AD using multi-modality images together with patient demographic and genetic information. Tested on ADNI data, our system can generate a diagnosis with high accuracy even for patients with only partially available image modalities (AUC=0.94), and therefore may have broad clinical utility.
A graph-based approach for the retrieval of multi-modality medical images.
Kumar, Ashnil; Kim, Jinman; Wen, Lingfeng; Fulham, Michael; Feng, Dagan
2014-02-01
In this paper, we address the retrieval of multi-modality medical volumes, which consist of two different imaging modalities, acquired sequentially, from the same scanner. One such example, positron emission tomography and computed tomography (PET-CT), provides physicians with complementary functional and anatomical features as well as spatial relationships and has led to improved cancer diagnosis, localisation, and staging. The challenge of multi-modality volume retrieval for cancer patients lies in representing the complementary geometric and topologic attributes between tumours and organs. These attributes and relationships, which are used for tumour staging and classification, can be formulated as a graph. It has been demonstrated that graph-based methods have high accuracy for retrieval by spatial similarity. However, naïvely representing all relationships on a complete graph obscures the structure of the tumour-anatomy relationships. We propose a new graph structure derived from complete graphs that structurally constrains the edges connected to tumour vertices based upon the spatial proximity of tumours and organs. This enables retrieval on the basis of tumour localisation. We also present a similarity matching algorithm that accounts for different feature sets for graph elements from different imaging modalities. Our method emphasises the relationships between a tumour and related organs, while still modelling patient-specific anatomical variations. Constraining tumours to related anatomical structures improves the discrimination potential of graphs, making it easier to retrieve similar images based on tumour location. We evaluated our retrieval methodology on a dataset of clinical PET-CT volumes. Our results showed that our method enabled the retrieval of multi-modality images using spatial features. Our graph-based retrieval algorithm achieved a higher precision than several other retrieval techniques: gray-level histograms as well as state-of-the-art methods such as visual words using the scale-invariant feature transform (SIFT) and relational matrices representing the spatial arrangements of objects. Copyright © 2013 Elsevier B.V. All rights reserved.
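A minimal networkx sketch of the constrained graph idea: tumour nodes are connected only to organs within a proximity threshold, and a simple Jaccard-style score over the resulting tumour-organ edges illustrates retrieval by spatial similarity. The coordinates, the threshold, and the similarity score are illustrative assumptions, not the authors' matching algorithm.

```python
import networkx as nx
import numpy as np

def build_graph(tumours, organs, max_dist=80.0):
    """Graph with organ and tumour nodes; tumour-organ edges only if close."""
    g = nx.Graph()
    for name, pos in organs.items():
        g.add_node(name, kind="organ", pos=np.asarray(pos))
    for name, pos in tumours.items():
        g.add_node(name, kind="tumour", pos=np.asarray(pos))
        for organ, opos in organs.items():
            d = np.linalg.norm(np.asarray(pos) - np.asarray(opos))
            if d <= max_dist:                     # proximity constraint
                g.add_edge(name, organ, dist=d)
    return g

def similarity(g1, g2):
    """Jaccard overlap of labelled tumour-organ edges."""
    e1 = {tuple(sorted(e)) for e in g1.edges}
    e2 = {tuple(sorted(e)) for e in g2.edges}
    return len(e1 & e2) / max(len(e1 | e2), 1)

organs = {"liver": (0, 0, 0), "left_lung": (50, 120, 10), "right_lung": (-50, 120, 10)}
query = build_graph({"tumour_1": (10, 110, 5)}, organs)
candidate = build_graph({"tumour_1": (-40, 115, 0)}, organs)
print("similarity:", similarity(query, candidate))
```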
Feature and Intensity Based Medical Image Registration Using Particle Swarm Optimization.
Abdel-Basset, Mohamed; Fakhry, Ahmed E; El-Henawy, Ibrahim; Qiu, Tie; Sangaiah, Arun Kumar
2017-11-03
Image registration is an important aspect of medical image analysis and finds use in a variety of medical applications. Examples include diagnosis, pre/post surgery guidance, and comparing/merging/integrating images from multiple modalities such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT). Whether registering images across modalities for a single patient or registering across patients for a single modality, registration is an effective way to combine information from different images into a normalized frame of reference. Registered datasets can be used for providing information relating to the structure, function, and pathology of the organ or individual being imaged. In this paper a hybrid approach for medical image registration has been developed. It employs a modified Mutual Information (MI) as a similarity metric and the Particle Swarm Optimization (PSO) method. Computation of mutual information is modified using a weighted linear combination of image intensity and image gradient vector flow (GVF) intensity. In this manner, statistical as well as spatial image information is included in the image registration process. Maximization of the modified mutual information is carried out using the versatile Particle Swarm Optimization method, which is easy to implement and requires few parameters to be adjusted. The developed approach has been tested and verified successfully on a number of medical image data sets that include images with missing parts, noise contamination, and/or of different modalities (CT, MRI). The registration results indicate that the proposed model is accurate and effective, and show the positive contribution of including both statistical and spatial image information in the developed approach.
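A sketch of the optimization loop described above: mutual information estimated from a joint intensity histogram is maximized over a 2D translation by a small hand-rolled particle swarm. The gradient-vector-flow weighting and the full transform model used by the authors are omitted for brevity; the images and swarm parameters are placeholders.

```python
import numpy as np
from scipy.ndimage import shift

def mutual_information(a, b, bins=32):
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def fitness(offset, fixed, moving):
    return mutual_information(fixed, shift(moving, offset, order=1, mode="nearest"))

def pso_register(fixed, moving, n_particles=20, n_iter=30, bound=10.0):
    rng = np.random.default_rng(0)
    x = rng.uniform(-bound, bound, size=(n_particles, 2))   # positions (dy, dx)
    v = np.zeros_like(x)                                    # velocities
    pbest = x.copy()
    pbest_f = np.array([fitness(p, fixed, moving) for p in x])
    gbest = pbest[np.argmax(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, -bound, bound)
        f = np.array([fitness(p, fixed, moving) for p in x])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmax(pbest_f)].copy()
    return gbest

# hypothetical multi-modal pair: inverted contrast plus a (3, -4) offset
yy, xx = np.mgrid[0:64, 0:64]
fixed = np.exp(-((xx - 30) ** 2 + (yy - 34) ** 2) / 90.0)
moving = shift(1.0 - fixed, (3, -4), order=1, mode="nearest")
print("recovered offset:", pso_register(fixed, moving))
```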
A multimodal image sensor system for identifying water stress in grapevines
NASA Astrophysics Data System (ADS)
Zhao, Yong; Zhang, Qin; Li, Minzan; Shao, Yongni; Zhou, Jianfeng; Sun, Hong
2012-11-01
Water stress is one of the most common limitations of fruit growth, and water is the most limiting resource for crop growth. In grapevines, as well as in other fruit crops, fruit quality benefits from a certain level of water deficit, which helps balance vegetative and reproductive growth and the flow of carbohydrates to reproductive structures. In this paper, a multi-modal sensor system was designed to measure the reflectance signature of grape plant surfaces and identify different water stress levels. The multi-modal sensor system was equipped with one 3CCD camera (three channels in R, G, and IR). The multi-modal sensor can capture and analyze the grape canopy from its reflectance features and identify the different water stress levels. This research aims at solving the aforementioned problems. The core technology of this multi-modal sensor system could further be used as a decision support system that combines multi-modal sensory data to improve plant stress detection and identify the causes of stress. The images were taken by the multi-modal sensor, which outputs images in the near-infrared, green and red spectral bands. Based on the analysis of the acquired images, color features based on color space and reflectance features based on image processing methods were calculated. The results showed that these parameters have potential as water stress indicators. More experiments and analysis are needed to validate the conclusion.
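A sketch of the kind of reflectance features such a three-channel (R, G, NIR) sensor could yield, e.g. an NDVI-style index and simple channel statistics as candidate water-stress indicators; the channel data and the stress threshold below are hypothetical.

```python
import numpy as np

def canopy_indices(red, green, nir, eps=1e-6):
    """Reflectance indices computed from the R, G and NIR channels."""
    ndvi = (nir - red) / (nir + red + eps)        # vegetation vigour
    gndvi = (nir - green) / (nir + green + eps)   # green-based variant
    return {
        "ndvi_mean": float(ndvi.mean()),
        "gndvi_mean": float(gndvi.mean()),
        "red_mean": float(red.mean()),
        "nir_mean": float(nir.mean()),
    }

rng = np.random.default_rng(0)
red, green, nir = (rng.uniform(0.05, 0.6, size=(480, 640)) for _ in range(3))
feats = canopy_indices(red, green, nir)
print(feats)
print("possible water stress" if feats["ndvi_mean"] < 0.3 else "no stress flag")
```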
Multimodality imaging of ovarian cystic lesions: Review with an imaging based algorithmic approach
Wasnik, Ashish P; Menias, Christine O; Platt, Joel F; Lalchandani, Usha R; Bedi, Deepak G; Elsayes, Khaled M
2013-01-01
Ovarian cystic masses include a spectrum of benign, borderline and high grade malignant neoplasms. Imaging plays a crucial role in characterization and pretreatment planning of incidentally detected or suspected adnexal masses, as diagnosis of ovarian malignancy at an early stage is correlated with a better prognosis. Knowledge of differential diagnosis, imaging features, management trends and an algorithmic approach of such lesions is important for optimal clinical management. This article illustrates a multi-modality approach in the diagnosis of a spectrum of ovarian cystic masses and also proposes an algorithmic approach for the diagnosis of these lesions. PMID:23671748
Photoacoustic tomography guided diffuse optical tomography for small-animal model
NASA Astrophysics Data System (ADS)
Wang, Yihan; Gao, Feng; Wan, Wenbo; Zhang, Yan; Li, Jiao
2015-03-01
Diffuse optical tomography (DOT) is a biomedical imaging technology for noninvasive visualization of spatial variations in the optical properties of tissue, and it can be applied to in vivo small-animal disease models. However, traditional DOT suffers from low spatial resolution due to tissue scattering. To overcome this intrinsic shortcoming, multi-modal approaches that incorporate DOT with other imaging techniques have been intensively investigated, where a priori information provided by the other modalities is normally used to regularize the inverse problem of DOT. Nevertheless, these approaches usually rely on the anatomical structure, which differs from the optical structure. Photoacoustic tomography (PAT) is an emerging imaging modality that is particularly useful for visualizing light-absorbing structures embedded in soft tissue with higher spatial resolution than pure optical imaging. We therefore present a PAT-guided DOT approach, in which the locations of optical structures are first obtained from PAT as a priori information and then used to guide the quantitative reconstruction of the optical parameters by DOT. Phantom experiments demonstrate that both the quantification and the spatial resolution of DOT can be greatly improved by regularization with the feasible-region information provided by PAT.
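The paper does not reproduce its regularization scheme here; as a hedged sketch, feasible-region guidance can be mimicked by a spatially weighted Tikhonov penalty in a toy linearized DOT inverse problem, with weaker penalties inside the PAT-derived region. The Jacobian, data vector and region mask below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n_meas, n_vox = 60, 100          # toy linearized DOT problem: d = J @ x + noise
J = rng.normal(size=(n_meas, n_vox))
x_true = np.zeros(n_vox)
x_true[40:50] = 1.0              # absorber located in voxels 40-49
d = J @ x_true + 0.01 * rng.normal(size=n_meas)

# Hypothetical PAT-derived feasible region: low penalty inside, high outside.
inside = np.zeros(n_vox, dtype=bool)
inside[38:52] = True
w = np.where(inside, 0.1, 10.0)

# Weighted Tikhonov solution: argmin ||J x - d||^2 + ||diag(w) x||^2
A = J.T @ J + np.diag(w**2)
x_rec = np.linalg.solve(A, J.T @ d)
print("mean reconstructed value inside region:", x_rec[40:50].mean().round(3))
```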
Feature-based Alignment of Volumetric Multi-modal Images
Toews, Matthew; Zöllei, Lilla; Wells, William M.
2014-01-01
This paper proposes a method for aligning image volumes acquired from different imaging modalities (e.g. MR, CT) based on 3D scale-invariant image features. A novel method for encoding invariant feature geometry and appearance is developed, based on the assumption of locally linear intensity relationships, providing a solution to the poor repeatability of feature detection across image modalities. The encoding method is incorporated into a probabilistic feature-based model for multi-modal image alignment. The model parameters are estimated via a group-wise alignment algorithm that iteratively alternates between estimating a feature-based model from the feature data and realigning the feature data to the model, converging to a stable alignment solution with few pre-processing or pre-alignment requirements. The resulting model can be used to align multi-modal image data with the benefits of invariant feature correspondence: globally optimal solutions, high efficiency and low memory usage. The method is tested on the difficult RIRE data set of CT, T1, T2, PD and MP-RAGE brain images of subjects exhibiting significant inter-subject variability due to pathology. PMID:24683955
Neural network fusion: a novel CT-MR aortic aneurysm image segmentation method
NASA Astrophysics Data System (ADS)
Wang, Duo; Zhang, Rui; Zhu, Jin; Teng, Zhongzhao; Huang, Yuan; Spiga, Filippo; Du, Michael Hong-Fei; Gillard, Jonathan H.; Lu, Qingsheng; Liò, Pietro
2018-03-01
Medical imaging examination of patients usually involves more than one imaging modality, such as Computed Tomography (CT), Magnetic Resonance (MR) and Positron Emission Tomography (PET). Multimodal imaging allows examiners to benefit from the advantages of each modality. For example, for abdominal aortic aneurysm, CT imaging shows calcium deposits in the aorta clearly, while MR imaging distinguishes thrombus and soft tissues better. Analysing and segmenting both CT and MR images and combining the results will greatly help radiologists and doctors treat the disease. In this work, we present methods that use deep neural network models to perform such multi-modal medical image segmentation. As CT and MR images of the abdominal area cannot be well registered due to non-affine deformations, a naive approach is to train CT and MR segmentation networks separately. However, such an approach is time-consuming and resource-inefficient. We propose a new approach that fuses the high-level parts of the CT and MR networks, hypothesizing that neurons recognizing the high-level concept of aortic aneurysm can be shared across modalities. Such a network can be trained end-to-end with non-registered CT and MR images in a shorter training time. Moreover, network fusion allows a shared representation of the aorta in both CT and MR images to be learnt. Through experiments we found that for parts of the aorta showing similar aneurysm conditions, their representations in the network lie closer together. Such feature-level distances are helpful for registering CT and MR images.
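As a hedged sketch of this fusion idea (not the authors' architecture), two modality-specific convolutional encoders can feed a shared high-level block and segmentation head in PyTorch; all layer sizes and names below are assumptions.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # two 3x3 convolutions with ReLU, a common building block
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class FusedSegNet(nn.Module):
    """Separate low-level encoders for CT and MR, shared high-level layers."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc_ct = conv_block(1, 16)   # modality-specific low-level features
        self.enc_mr = conv_block(1, 16)
        self.shared = conv_block(16, 32)  # fused high-level representation
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x, modality):
        feats = self.enc_ct(x) if modality == "ct" else self.enc_mr(x)
        return self.head(self.shared(feats))

# Hypothetical usage on non-registered CT and MR mini-batches.
net = FusedSegNet()
ct_batch = torch.randn(2, 1, 64, 64)
mr_batch = torch.randn(2, 1, 64, 64)
print(net(ct_batch, "ct").shape, net(mr_batch, "mr").shape)
```

Because only the encoders are modality-specific, the shared block learns a representation that can be compared across CT and MR feature maps.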
[Research on non-rigid registration of multi-modal medical image based on Demons algorithm].
Hao, Peibo; Chen, Zhen; Jiang, Shaofeng; Wang, Yang
2014-02-01
Non-rigid medical image registration is an active research topic in medical imaging and has important clinical value. In this paper we put forward an improved Demons algorithm that combines a gray-level conservation model with a local structure tensor conservation model to construct a new energy function for the multi-modal registration problem. We then applied the L-BFGS algorithm to optimize this energy function and solve the resulting complex three-dimensional optimization problem, and finally used multi-scale hierarchical refinement to handle large-deformation registration. The experimental results showed that the proposed algorithm performed well for large-deformation, multi-modal three-dimensional medical image registration.
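As a hedged, much-simplified sketch of the optimization strategy (not the authors' energy function), a dense 2D displacement field can be estimated by minimizing an intensity-matching term plus a smoothness penalty with SciPy's L-BFGS-B solver; the tiny synthetic images and weights are assumptions, and gradients are approximated numerically for demonstration only.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import minimize

# Tiny synthetic pair: "moving" is "fixed" shifted by one pixel.
fixed = np.zeros((16, 16)); fixed[5:10, 5:10] = 1.0
moving = np.roll(fixed, 1, axis=1)
yy, xx = np.mgrid[0:16, 0:16].astype(float)

def energy(u_flat, lam=0.1):
    u = u_flat.reshape(2, 16, 16)                       # displacement field (dy, dx)
    warped = map_coordinates(moving, [yy + u[0], xx + u[1]], order=1)
    data = np.sum((warped - fixed) ** 2)                # intensity conservation (SSD) term
    smooth = sum(np.sum(np.diff(u, axis=a) ** 2) for a in (1, 2))
    return data + lam * smooth                          # regularized energy

res = minimize(energy, np.zeros(2 * 16 * 16), method="L-BFGS-B",
               options={"maxiter": 50})                 # numerical gradients: demo only
print("final energy:", round(res.fun, 3))
```

A multi-scale scheme would repeat this on downsampled images first and use the upsampled field as the initial guess at the next level.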
LINKS: learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images.
Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H; Lin, Weili; Shen, Dinggang
2015-03-01
Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matters of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8 months of age, and the white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used a multi-atlas label fusion strategy, which has the limitation of treating the different available image modalities equally and is often computationally expensive. To cope with these limitations, in this paper, we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images together for tissue segmentation. Here, the multi-source images include initially only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge and the proposed method was ranked top among all competing methods. Moreover, to alleviate possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach for further improving the segmentation accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.
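As a hedged illustration of the learning-based multi-source idea (not the authors' exact pipeline), a random forest can be trained on voxel-wise features from several modalities and then retrained with its own tissue probability estimates appended as extra features; the synthetic feature arrays and labels below are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n_vox = 2000
# Hypothetical voxel-wise intensities from three modalities (T1, T2, FA).
X = rng.normal(size=(n_vox, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # toy two-class tissue labels

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
prob = clf.predict_proba(X)                     # first-pass tissue probability maps

# Iterative refinement: append the estimated probabilities as extra features.
for _ in range(2):
    X_aug = np.hstack([X, prob])
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_aug, y)
    prob = clf.predict_proba(X_aug)

print("training accuracy after refinement:", clf.score(X_aug, y))
```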
Deep Convolutional Neural Networks for Multi-Modality Isointense Infant Brain Image Segmentation
Zhang, Wenlu; Li, Rongjian; Deng, Houtao; Wang, Li; Lin, Weili; Ji, Shuiwang; Shen, Dinggang
2015-01-01
The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6–8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they used only a single T1 or T2 image, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a type of deep model in which trainable filters and local neighborhood pooling operations are applied alternatingly on the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multi-modality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement. PMID:25562829
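A hedged minimal sketch (not the authors' network): the three modalities can simply be stacked as input channels of a small convolutional network that outputs per-voxel class scores; the 2D patch size and layer widths are assumptions.

```python
import torch
import torch.nn as nn

# T1, T2 and FA stacked as three input channels; output: WM/GM/CSF score maps.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 1),             # one score map per tissue class
)

batch = torch.randn(4, 3, 64, 64)    # hypothetical 2D patches from the 3 modalities
scores = model(batch)
print(scores.shape)                  # torch.Size([4, 3, 64, 64])
```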
Vergara, Victor M; Ulloa, Alvaro; Calhoun, Vince D; Boutte, David; Chen, Jiayu; Liu, Jingyu
2014-09-01
Multi-modal data analysis techniques, such as Parallel Independent Component Analysis (pICA), are essential in neuroscience, medical imaging and genetic studies. The pICA algorithm allows the simultaneous decomposition of up to two data modalities, achieving better performance than separate ICA decompositions and enabling the discovery of links between modalities. However, advances in data acquisition techniques facilitate the collection of more than two data modalities from each subject. Examples of commonly measured modalities include genetic information, structural magnetic resonance imaging (MRI) and functional MRI. In order to take full advantage of the available data, this work extends the pICA approach to incorporate three modalities in one comprehensive analysis. Simulations demonstrate the three-way pICA performance in identifying pairwise links between modalities and estimating independent components which more closely resemble the true sources than components found by pICA or separate ICA analyses. In addition, the three-way pICA algorithm is applied to real experimental data obtained from a study that investigates genetic effects on alcohol dependence. The data modalities considered include functional MRI (contrast images during an alcohol exposure paradigm), gray matter concentration images from structural MRI and genetic single nucleotide polymorphisms (SNP). The three-way pICA approach identified links between a SNP component (pointing to brain function and mental disorder associated genes, including BDNF, GRIN2B and NRG1), a functional component related to increased activation in the precuneus area, and a gray matter component comprising part of the default mode network and the caudate. Although such findings need further verification, the simulation and in-vivo results validate the three-way pICA algorithm presented here as a useful tool in biomedical data fusion applications. Copyright © 2014 Elsevier Inc. All rights reserved.
Ma, Teng; Zhou, Bill; Hsiai, Tzung K.; Shung, K. Kirk
2015-01-01
Catheter-based intravascular imaging modalities are being developed to visualize pathologies in coronary arteries, such as high-risk vulnerable atherosclerotic plaques known as thin-cap fibroatheroma, to guide therapeutic strategies aimed at preventing heart attacks. Mounting evidence has shown that three distinctive histopathological features—the presence of a thin fibrous cap, a lipid-rich necrotic core, and numerous infiltrating macrophages—are key markers of increased vulnerability in atherosclerotic plaques. To visualize these changes, the majority of catheter-based imaging modalities use intravascular ultrasound (IVUS) as the technical foundation and integrate emerging intravascular imaging techniques to enhance the characterization of vulnerable plaques. However, no current imaging technology is the unequivocal "gold standard" for the diagnosis of vulnerable atherosclerotic plaques. Each intravascular imaging technology possesses its own unique features that yield valuable information while being encumbered by inherent limitations not seen in other modalities. In this context, the aim of this review is to discuss current scientific innovations, technical challenges, and prospective strategies in the development of IVUS-based multi-modality intravascular imaging systems aimed at assessing atherosclerotic plaque vulnerability. PMID:26400676
Multi-pass transmission electron microscopy
Juffmann, Thomas; Koppell, Stewart A.; Klopfer, Brannon B.; ...
2017-05-10
Feynman once asked physicists to build better electron microscopes to be able to watch biology at work. While electron microscopes can now provide atomic resolution, electron beam induced specimen damage precludes high resolution imaging of sensitive materials, such as single proteins or polymers. Here, we use simulations to show that an electron microscope based on a multi-pass measurement protocol enables imaging of single proteins, without averaging structures over multiple images. While we demonstrate the method for particular imaging targets, the approach is broadly applicable and is expected to improve resolution and sensitivity for a range of electron microscopy imaging modalities, including, for example, scanning and spectroscopic techniques. The approach implements a quantum mechanically optimal strategy which, under idealized conditions, can be considered interaction-free.
Multiscale and multi-modality visualization of angiogenesis in a human breast cancer model
Cebulla, Jana; Kim, Eugene; Rhie, Kevin; Zhang, Jiangyang
2017-01-01
Angiogenesis in breast cancer helps fulfill the metabolic demands of the progressing tumor and plays a critical role in tumor metastasis. Therefore, various imaging modalities have been used to characterize tumor angiogenesis. While micro-CT (μCT) is a powerful tool for analyzing the tumor microvascular architecture at micron-scale resolution, magnetic resonance imaging (MRI) with its sub-millimeter resolution is useful for obtaining in vivo vascular data (e.g. tumor blood volume and vessel size index). However, integration of these microscopic and macroscopic angiogenesis data across spatial resolutions remains challenging. Here we demonstrate the feasibility of ‘multiscale’ angiogenesis imaging in a human breast cancer model, wherein we bridge the resolution gap between ex vivo μCT and in vivo MRI using intermediate resolution ex vivo MR microscopy (μMRI). To achieve this integration, we developed suitable vessel segmentation techniques for the ex vivo imaging data and co-registered the vascular data from all three imaging modalities. We showcase two applications of this multiscale, multi-modality imaging approach: (1) creation of co-registered maps of vascular volume from three independent imaging modalities, and (2) visualization of differences in tumor vasculature between viable and necrotic tumor regions by integrating μCT vascular data with tumor cellularity data obtained using diffusion-weighted MRI. Collectively, these results demonstrate the utility of ‘mesoscopic’ resolution μMRI for integrating macroscopic in vivo MRI data and microscopic μCT data. Although focused on the breast tumor xenograft vasculature, our imaging platform could be extended to include additional data types for a detailed characterization of the tumor microenvironment and computational systems biology applications. PMID:24719185
NASA Astrophysics Data System (ADS)
Badea, C. T.; Ghaghada, K.; Espinosa, G.; Strong, L.; Annapragada, A.
2011-03-01
Multi-modality PET-CT imaging is playing an important role in the field of oncology. While PET imaging facilitates functional interrogation of tumor status, the use of CT imaging is primarily limited to anatomical reference. In an attempt to extract comprehensive information about tumor cells and their microenvironment, we used a nanoparticle X-ray contrast agent to image tumor vasculature and vessel 'leakiness' and 18F-FDG to investigate the metabolic status of tumor cells. In vivo PET/CT studies were performed in mice implanted with 4T1 mammary breast cancer cells. Early-phase micro-CT imaging enabled visualization of the 3D vascular architecture of the tumors, whereas delayed-phase micro-CT demonstrated highly permeable vessels, as evidenced by nanoparticle accumulation within the tumor. Both imaging modalities demonstrated the presence of a necrotic core, indicated by a hypo-enhanced region in the center of the tumor. At early time-points, the CT-derived fractional blood volume did not correlate with 18F-FDG uptake. At delayed time-points, the tumor enhancement in 18F-FDG micro-PET images correlated with the delayed signal enhancement due to nanoparticle extravasation seen in CT images. The proposed hybrid imaging approach could be used to better understand tumor angiogenesis and could form the basis for monitoring and evaluating anti-angiogenic and nano-chemotherapies.
MIND: modality independent neighbourhood descriptor for multi-modal deformable registration.
Heinrich, Mattias P; Jenkinson, Mark; Bhushan, Manav; Matin, Tahreema; Gleeson, Fergus V; Brady, Sir Michael; Schnabel, Julia A
2012-10-01
Deformable registration of images obtained from different modalities remains a challenging task in medical image analysis. This paper addresses this important problem and proposes a modality independent neighbourhood descriptor (MIND) for both linear and deformable multi-modal registration. Based on the similarity of small image patches within one image, it aims to extract the distinctive structure in a local neighbourhood, which is preserved across modalities. The descriptor is based on the concept of image self-similarity, which has been introduced for non-local means filtering for image denoising. It is able to distinguish between different types of features such as corners, edges and homogeneously textured regions. MIND is robust to the most considerable differences between modalities: non-functional intensity relations, image noise and non-uniform bias fields. The multi-dimensional descriptor can be efficiently computed in a dense fashion across the whole image and provides point-wise local similarity across modalities based on the absolute or squared difference between descriptors, making it applicable for a wide range of transformation models and optimisation algorithms. We use the sum of squared differences of the MIND representations of the images as a similarity metric within a symmetric non-parametric Gauss-Newton registration framework. In principle, MIND would be applicable to the registration of arbitrary modalities. In this work, we apply and validate it for the registration of clinical 3D thoracic CT scans between inhale and exhale as well as the alignment of 3D CT and MRI scans. Experimental results show the advantages of MIND over state-of-the-art techniques such as conditional mutual information and entropy images, with respect to clinically annotated landmark locations. Copyright © 2012 Elsevier B.V. All rights reserved.
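As a hedged, simplified sketch of the self-similarity idea (not the full MIND implementation), a descriptor for each pixel can be built from patch-summed distances to its four neighbours, normalised by a local variance estimate; the patch size and neighbourhood below are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def mind_like(img: np.ndarray, patch: int = 3) -> np.ndarray:
    """Simplified self-similarity descriptor: one channel per 4-neighbour shift."""
    shifts = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    # patch-summed squared differences to each shifted copy of the image
    dists = [uniform_filter((img - np.roll(img, s, axis=(0, 1))) ** 2, patch)
             for s in shifts]
    D = np.stack(dists, axis=-1)
    V = D.mean(axis=-1, keepdims=True) + 1e-6   # local variance estimate
    return np.exp(-D / V)                        # descriptor values in (0, 1]

# Similarity between two modalities = SSD of their descriptors (lower is better).
rng = np.random.default_rng(3)
a = rng.random((64, 64))
b = 1.0 - a + 0.05 * rng.random((64, 64))        # inverted-contrast "other modality"
ssd = np.sum((mind_like(a) - mind_like(b)) ** 2)
print("MIND-like SSD:", round(float(ssd), 2))
```

Because the descriptor depends only on intra-image patch similarity, it is largely insensitive to the non-functional intensity relationship between the two inputs.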
Kandukuri, Jayanth; Yu, Shuai; Cheng, Bingbing; Bandi, Venugopal; D’Souza, Francis; Nguyen, Kytai T.; Hong, Yi; Yuan, Baohong
2017-01-01
Simultaneous imaging of multiple targets (SIMT) in opaque biological tissues is an important goal for molecular imaging in the future. Multi-color fluorescence imaging in deep tissues is a promising technology to reach this goal. In this work, we developed a dual-modality imaging system by combining our recently developed ultrasound-switchable fluorescence (USF) imaging technology with the conventional ultrasound (US) B-mode imaging. This dual-modality system can simultaneously image tissue acoustic structure information and multi-color fluorophores in centimeter-deep tissue with comparable spatial resolutions. To conduct USF imaging on the same plane (i.e., x-z plane) as US imaging, we adopted two 90°-crossed ultrasound transducers with an overlapped focal region, while the US transducer (the third one) was positioned at the center of these two USF transducers. Thus, the axial resolution of USF is close to the lateral resolution, which allows a point-by-point USF scanning on the same plane as the US imaging. Both multi-color USF and ultrasound imaging of a tissue phantom were demonstrated. PMID:28165390
Imaging and machine learning techniques for diagnosis of Alzheimer's disease.
Mirzaei, Golrokh; Adeli, Anahita; Adeli, Hojjat
2016-12-01
Alzheimer's disease (AD) is a common health problem in elderly people. There has been considerable research toward the diagnosis and early detection of this disease in the past decade. The sensitivity of biomarkers and the accuracy of the detection techniques have been defined to be the key to an accurate diagnosis. This paper presents a state-of-the-art review of the research performed on the diagnosis of AD based on imaging and machine learning techniques. Different segmentation and machine learning techniques used for the diagnosis of AD are reviewed including thresholding, supervised and unsupervised learning, probabilistic techniques, Atlas-based approaches, and fusion of different image modalities. More recent and powerful classification techniques such as the enhanced probabilistic neural network of Ahmadlou and Adeli should be investigated with the goal of improving the diagnosis accuracy. A combination of different image modalities can help improve the diagnosis accuracy rate. Research is needed on the combination of modalities to discover multi-modal biomarkers.
NASA Astrophysics Data System (ADS)
McReynolds, Naomi; Cooke, Fiona G. M.; Chen, Mingzhou; Powis, Simon J.; Dholakia, Kishan
2017-03-01
The ability to identify and characterise individual cells of the immune system under label-free conditions would be a significant advantage in biomedical and clinical studies where untouched and unmodified cells are required. We present a multi-modal system capable of simultaneously acquiring both single-point Raman spectra and digital holographic images of single cells. We use this combined approach to identify and discriminate between the immune cell populations CD4+ T cells, B cells and monocytes. We investigate several approaches to interpreting the phase images, including signal intensity histograms and texture analysis. Both modalities are independently able to discriminate between cell subsets, and dual-modality may therefore be used as a means of validation. We demonstrate sensitivities in the range of 86.8% to 100% and specificities in the range of 85.4% to 100%. Additionally, each modality provides information not available from the other, yielding both a molecular and a morphological signature of each cell.
Sauwen, N; Acou, M; Van Cauter, S; Sima, D M; Veraart, J; Maes, F; Himmelreich, U; Achten, E; Van Huffel, S
2016-01-01
Tumor segmentation is a particularly challenging task in high-grade gliomas (HGGs), as they are among the most heterogeneous tumors in oncology. An accurate delineation of the lesion and its main subcomponents contributes to optimal treatment planning, prognosis and follow-up. Conventional MRI (cMRI) is the imaging modality of choice for manual segmentation, and is also considered in the vast majority of automated segmentation studies. Advanced MRI modalities such as perfusion-weighted imaging (PWI), diffusion-weighted imaging (DWI) and magnetic resonance spectroscopic imaging (MRSI) have already shown their added value in tumor tissue characterization, hence there have been recent suggestions of combining different MRI modalities into a multi-parametric MRI (MP-MRI) approach for brain tumor segmentation. In this paper, we compare the performance of several unsupervised classification methods for HGG segmentation based on MP-MRI data including cMRI, DWI, MRSI and PWI. Two independent MP-MRI datasets with a different acquisition protocol were available from different hospitals. We demonstrate that a hierarchical non-negative matrix factorization variant which was previously introduced for MP-MRI tumor segmentation gives the best performance in terms of mean Dice-scores for the pathologic tissue classes on both datasets.
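The paper's hierarchical NMF variant is not detailed in this abstract; as a hedged sketch of a two-level scheme, voxels can first be split into two NMF-derived clusters and one cluster then factorized again; the synthetic feature matrix and the choice of which cluster to refine are assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(4)
# Hypothetical multi-parametric MRI: rows = voxels, columns = MP-MRI features.
X = np.abs(rng.normal(size=(500, 8)))

def nmf_split(X, seed=0):
    """Assign each voxel to the dominant of two NMF sources."""
    W = NMF(n_components=2, init="nndsvda", random_state=seed,
            max_iter=500).fit_transform(X)
    return W.argmax(axis=1)

labels = nmf_split(X)                        # level 1: two coarse tissue classes
mask = labels == 0                           # refine the first class (assumption)
sub_labels = nmf_split(X[mask], seed=1)      # level 2: split that class again
final = labels.copy()
final[mask] = 2 + sub_labels                 # resulting classes: {1, 2, 3}
print(np.bincount(final))
```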
Drug-related webpages classification based on multi-modal local decision fusion
NASA Astrophysics Data System (ADS)
Hu, Ruiguang; Su, Xiaojing; Liu, Yanxin
2018-03-01
In this paper, multi-modal local decision fusion is used for drug-related webpage classification. First, meaningful text is extracted through HTML parsing, and effective images are chosen by the FOCARSS algorithm. Second, six SVM classifiers are trained for six kinds of drug-taking instruments, which are represented by PHOG, and one SVM classifier is trained for cannabis, which is represented by the mid-level feature of a BOW model. For each instance in a webpage, the seven SVMs give seven labels for its image, and another seven labels are obtained by searching for the names of the drug-taking instruments and cannabis in its related text. Concatenating the seven image labels and the seven text labels generates the representation of each instance in a webpage. Finally, Multi-Instance Learning is used to classify the drug-related webpages. Experimental results demonstrate that the classification accuracy of multi-instance learning with multi-modal local decision fusion is much higher than that of single-modal classification.
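As a hedged sketch of decision-level fusion (simplified from the paper's PHOG/BOW pipeline), per-modality SVM decisions can be concatenated into an instance representation and passed to a final classifier; the synthetic features and the use of a standard classifier in place of multi-instance learning are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 300
X_img = rng.normal(size=(n, 20))   # hypothetical image features (PHOG-like)
X_txt = rng.normal(size=(n, 30))   # hypothetical text features (BOW-like)
y = (X_img[:, 0] + X_txt[:, 0] > 0).astype(int)

# One SVM per modality produces local decisions.
svm_img = SVC().fit(X_img[:200], y[:200])
svm_txt = SVC().fit(X_txt[:200], y[:200])
local = np.column_stack([svm_img.predict(X_img), svm_txt.predict(X_txt)])

# Fuse the local decisions with a final classifier (stand-in for MIL).
fusion = LogisticRegression().fit(local[:200], y[:200])
print("held-out accuracy:", fusion.score(local[200:], y[200:]).round(3))
```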
The evolution of gadolinium based contrast agents: from single-modality to multi-modality
NASA Astrophysics Data System (ADS)
Zhang, Li; Liu, Ruiqing; Peng, Hui; Li, Penghui; Xu, Zushun; Whittaker, Andrew K.
2016-05-01
Gadolinium-based contrast agents are extensively used in magnetic resonance imaging (MRI) due to their outstanding signal enhancement and ease of chemical modification. However, it is increasingly recognized that information obtained from single-modality molecular imaging cannot satisfy the growing requirements on efficiency and accuracy for clinical diagnosis and medical research, because of the limitations inherent in any single molecular imaging technique. To compensate for these deficiencies of single-function magnetic resonance imaging contrast agents, the combination of multi-modality imaging has become a research hotspot in recent years. This review presents an overview of recent developments in the functionalization of gadolinium-based contrast agents and their biomedical applications.
Multi-Modal Curriculum Learning for Semi-Supervised Image Classification.
Gong, Chen; Tao, Dacheng; Maybank, Stephen J; Liu, Wei; Kang, Guoliang; Yang, Jie
2016-07-01
Semi-supervised image classification aims to classify a large quantity of unlabeled images by typically harnessing scarce labeled images. Existing semi-supervised methods often suffer from inadequate classification accuracy when encountering difficult yet critical images, such as outliers, because they treat all unlabeled images equally and conduct classifications in an imperfectly ordered sequence. In this paper, we employ the curriculum learning methodology by investigating the difficulty of classifying every unlabeled image. The reliability and the discriminability of these unlabeled images are particularly investigated for evaluating their difficulty. As a result, an optimized image sequence is generated during the iterative propagations, and the unlabeled images are logically classified from simple to difficult. Furthermore, since images are usually characterized by multiple visual feature descriptors, we associate each kind of features with a teacher, and design a multi-modal curriculum learning (MMCL) strategy to integrate the information from different feature modalities. In each propagation, each teacher analyzes the difficulties of the currently unlabeled images from its own modality viewpoint. A consensus is subsequently reached among all the teachers, determining the currently simplest images (i.e., a curriculum), which are to be reliably classified by the multi-modal learner. This well-organized propagation process leveraging multiple teachers and one learner enables our MMCL to outperform five state-of-the-art methods on eight popular image data sets.
Design of magnetic and fluorescent nanoparticles for in vivo MR and NIRF cancer imaging
NASA Astrophysics Data System (ADS)
Key, Jaehong
One major challenge in cancer treatment is the difficulty of detecting cancers at an early stage, before metastasis occurs. With current imaging modalities, the detection of small tumors with metastatic potential is still very difficult. Thus, the development of multi-component nanoparticles (NPs) for dual-modality cancer imaging is invaluable, as such NPs offer an alternative that can overcome the limitations of a single imaging modality. For example, multi-component NPs can visualize small tumors in both magnetic resonance imaging (MRI) and near-infrared fluorescence (NIRF) imaging, which can help locate tumors deep inside the body using MRI and subsequently guide surgeons in delineating tumor margins using highly sensitive NIRF imaging during a surgical operation. In this dissertation, we demonstrated the potential of the MRI and NIRF dual-modality NPs for skin and bladder cancer imaging. The multi-component NPs consisted of glycol chitosan, superparamagnetic iron oxide, NIRF dye, and cancer-targeting peptides. We characterized the NPs and evaluated them in tumor-bearing mice as well as in various cancer cells. The findings of this research will contribute to the development of cancer diagnostic imaging and can also be extensively applied to drug delivery systems and fluorescence-guided surgical removal of cancer.
Magnetic Nanoparticles for Multi-Imaging and Drug Delivery
Lee, Jae-Hyun; Kim, Ji-wook; Cheon, Jinwoo
2013-01-01
Various bio-medical applications of magnetic nanoparticles have been explored during the past few decades. As tools that hold great potential for advancing biological sciences, magnetic nanoparticles have been used as platform materials for enhanced magnetic resonance imaging (MRI) agents, biological separation and magnetic drug delivery systems, and magnetic hyperthermia treatment. Furthermore, approaches that integrate various imaging and bioactive moieties have been used in the design of multi-modality systems, which possess synergistically enhanced properties such as better imaging resolution and sensitivity, molecular recognition capabilities, stimulus responsive drug delivery with on-demand control, and spatio-temporally controlled cell signal activation. Below, recent studies that focus on the design and synthesis of multi-mode magnetic nanoparticles will be briefly reviewed and their potential applications in the imaging and therapy areas will be also discussed. PMID:23579479
About CIB | Division of Cancer Prevention
The Consortium was created to improve cancer screening, early detection of aggressive cancer, assessment of cancer risk and cancer diagnosis by integrating multi-modality imaging strategies and multiplexed biomarker methodologies into a single complementary approach. Investigators perform collaborative studies, exchange information, share knowledge and leverage common
Enhancing resource coordination for multi-modal evacuation planning.
DOT National Transportation Integrated Search
2013-01-01
This research project seeks to increase knowledge about coordinating effective multi-modal evacuation for disasters. It does so by identifying, evaluating, and assessing current transportation management approaches for multi-modal evacuation planni...
Integration of Multi-Modal Biomedical Data to Predict Cancer Grade and Patient Survival.
Phan, John H; Hoffman, Ryan; Kothari, Sonal; Wu, Po-Yen; Wang, May D
2016-02-01
The Big Data era in Biomedical research has resulted in large-cohort data repositories such as The Cancer Genome Atlas (TCGA). These repositories routinely contain hundreds of matched patient samples for genomic, proteomic, imaging, and clinical data modalities, enabling holistic and multi-modal integrative analysis of human disease. Using TCGA renal and ovarian cancer data, we conducted a novel investigation of multi-modal data integration by combining histopathological image and RNA-seq data. We compared the performances of two integrative prediction methods: majority vote and stacked generalization. Results indicate that integration of multiple data modalities improves prediction of cancer grade and outcome. Specifically, stacked generalization, a method that integrates multiple data modalities to produce a single prediction result, outperforms both single-data-modality prediction and majority vote. Moreover, stacked generalization reveals the contribution of each data modality (and specific features within each data modality) to the final prediction result and may provide biological insights to explain prediction performance.
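As a hedged sketch of the comparison (with synthetic stand-ins for the histopathology image and RNA-seq features), out-of-fold probabilities from one model per modality can be combined either by a soft majority vote or by a trained meta-learner (stacked generalization); all feature arrays and base learners are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(6)
n = 400
X_hist = rng.normal(size=(n, 15))   # hypothetical histopathology image features
X_rna = rng.normal(size=(n, 25))    # hypothetical RNA-seq features
y = (X_hist[:, 0] + X_rna[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(int)
tr, te = slice(0, 300), slice(300, None)

base = lambda: RandomForestClassifier(n_estimators=200, random_state=0)
mods = [(X_hist, base()), (X_rna, base())]

# Out-of-fold probabilities on the training set (stacking inputs) and test probabilities.
P_tr = np.column_stack([cross_val_predict(m, X[tr], y[tr], cv=5,
                                           method="predict_proba")[:, 1]
                        for X, m in mods])
P_te = np.column_stack([m.fit(X[tr], y[tr]).predict_proba(X[te])[:, 1]
                        for X, m in mods])

vote_acc = np.mean((P_te.mean(axis=1) > 0.5).astype(int) == y[te])   # soft majority vote
meta = LogisticRegression().fit(P_tr, y[tr])                          # stacked generalization
print("vote:", round(vote_acc, 3), "stacking:", round(meta.score(P_te, y[te]), 3))
```

The meta-learner's coefficients also indicate how much each modality contributes to the fused prediction, echoing the interpretability point made in the abstract.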
Computational method for multi-modal microscopy based on transport of intensity equation
NASA Astrophysics Data System (ADS)
Li, Jiaji; Chen, Qian; Sun, Jiasong; Zhang, Jialin; Zuo, Chao
2017-02-01
In this paper, we develop the requisite theory to describe a hybrid virtual-physical multi-modal imaging system which yields quantitative phase, Zernike phase contrast, differential interference contrast (DIC), and light field moment imaging simultaneously based on the transport of intensity equation (TIE). We then give an experimental demonstration of these ideas by time-lapse imaging of live HeLa cell mitosis. Experimental results verify that a tunable-lens-based TIE system, combined with the appropriate post-processing algorithm, can achieve a variety of promising imaging modalities in parallel with the quantitative phase images for the dynamic study of cellular processes.
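For reference, the standard transport of intensity equation relates the axial intensity derivative to the transverse phase gradient; it is written here in its usual form with common symbol conventions rather than the paper's own notation.

```latex
% k: wavenumber 2*pi/lambda; I: in-focus intensity; phi: phase;
% nabla_perp: gradient in the transverse (x, y) plane.
-k \, \frac{\partial I(\mathbf{r}_\perp, z)}{\partial z}
  = \nabla_\perp \cdot \left[ I(\mathbf{r}_\perp, z)\, \nabla_\perp \phi(\mathbf{r}_\perp) \right]
```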
Introduction to clinical and laboratory (small-animal) image registration and fusion.
Zanzonico, Pat B; Nehmeh, Sadek A
2006-01-01
Imaging has long been a vital component of clinical medicine and, increasingly, of biomedical research in small animals. Clinical and laboratory imaging modalities can be divided into two general categories, structural (or anatomical) and functional (or physiological). The latter, in particular, has spawned what has come to be known as "molecular imaging". Image registration and fusion have rapidly emerged as invaluable components of both clinical and small-animal imaging and have led to the development and marketing of a variety of multi-modality devices, e.g. PET-CT, which provide registered and fused three-dimensional image sets. This paper briefly reviews the basics of image registration and fusion and the available clinical and small-animal multi-modality instrumentation.
Multi-modality image registration for effective thermographic fever screening
NASA Astrophysics Data System (ADS)
Dwith, C. Y. N.; Ghassemi, Pejhman; Pfefer, Joshua; Casamento, Jon; Wang, Quanzeng
2017-02-01
Fever screening based on infrared thermographs (IRTs) is a viable mass screening approach during infectious disease pandemics, such as Ebola and Severe Acute Respiratory Syndrome (SARS), for temperature monitoring in public places like hospitals and airports. IRTs have been found to be powerful, quick and non-invasive methods for detecting elevated temperatures. Moreover, regions medially adjacent to the inner canthi (called the canthi regions in this paper) are preferred sites for fever screening. Accurate localization of the canthi regions can be achieved through multi-modality registration of infrared (IR) and white-light images. Here we propose a registration method through a coarse-fine registration strategy using different registration models based on landmarks and edge detection on eye contours. We have evaluated the registration accuracy to be within +/- 2.7 mm, which enables accurate localization of the canthi regions.
Exogenous Molecular Probes for Targeted Imaging in Cancer: Focus on Multi-modal Imaging
Joshi, Bishnu P.; Wang, Thomas D.
2010-01-01
Cancer is one of the major causes of mortality and morbidity in our healthcare system. Molecular imaging is an emerging methodology for the early detection of cancer, guidance of therapy, and monitoring of response. The development of new instruments and exogenous molecular probes that can be labeled for multi-modality imaging is critical to this process. Today, molecular imaging is at a crossroad, and new targeted imaging agents are expected to broadly expand our ability to detect and manage cancer. This integrated imaging strategy will permit clinicians to not only localize lesions within the body but also to manage their therapy by visualizing the expression and activity of specific molecules. This information is expected to have a major impact on drug development and understanding of basic cancer biology. At this time, a number of molecular probes have been developed by conjugating various labels to affinity ligands for targeting in different imaging modalities. This review will describe the current status of exogenous molecular probes for optical, scintigraphic, MRI and ultrasound imaging platforms. Furthermore, we will also shed light on how these techniques can be used synergistically in multi-modal platforms and how these techniques are being employed in current research. PMID:22180839
Mobile, Multi-modal, Label-Free Imaging Probe Analysis of Choroidal Oximetry and Retinal Hypoxia
2015-10-01
Subtasks include imaging choroidal vessels/capillaries in test and control eyes using CARS intravital microscopy and measuring oxy-hemoglobin levels in PBI test and control eyes. Award number: W81XWH-14-1-0537.
A Multi-Modal Face Recognition Method Using Complete Local Derivative Patterns and Depth Maps
Yin, Shouyi; Dai, Xu; Ouyang, Peng; Liu, Leibo; Wei, Shaojun
2014-01-01
In this paper, we propose a multi-modal 2D + 3D face recognition method for a smart city application based on a Wireless Sensor Network (WSN) and various kinds of sensors. Depth maps are exploited for the 3D face representation. As for feature extraction, we propose a new feature called Complete Local Derivative Pattern (CLDP). It adopts the idea of layering and has four layers. In the whole system, we apply CLDP separately on Gabor features extracted from a 2D image and depth map. Then, we obtain two features: CLDP-Gabor and CLDP-Depth. The two features weighted by the corresponding coefficients are combined together in the decision level to compute the total classification distance. At last, the probe face is assigned the identity with the smallest classification distance. Extensive experiments are conducted on three different databases. The results demonstrate the robustness and superiority of the new approach. The experimental results also prove that the proposed multi-modal 2D + 3D method is superior to other multi-modal ones and CLDP performs better than other Local Binary Pattern (LBP) based features. PMID:25333290
Quantitative multi-modal NDT data analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heideklang, René; Shokouhi, Parisa
2014-02-18
A single NDT technique is often not adequate to provide assessments about the integrity of test objects with the required coverage or accuracy. In such situations, it is often resorted to multi-modal testing, where complementary and overlapping information from different NDT techniques are combined for a more comprehensive evaluation. Multi-modal material and defect characterization is an interesting task which involves several diverse fields of research, including signal and image processing, statistics and data mining. The fusion of different modalities may improve quantitative nondestructive evaluation by effectively exploiting the augmented set of multi-sensor information about the material. It is the redundant information in particular, whose quantification is expected to lead to increased reliability and robustness of the inspection results. There are different systematic approaches to data fusion, each with its specific advantages and drawbacks. In our contribution, these will be discussed in the context of nondestructive materials testing. A practical study adopting a high-level scheme for the fusion of Eddy Current, GMR and Thermography measurements on a reference metallic specimen with built-in grooves will be presented. Results show that fusion is able to outperform the best single sensor regarding detection specificity, while retaining the same level of sensitivity.
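As a hedged toy illustration of high-level fusion (not the authors' scheme), normalized indication maps from three sensors can be averaged before thresholding, which typically suppresses modality-specific noise; the synthetic specimen, noise levels and detection threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
shape = (50, 200)                                        # hypothetical C-scan grid
truth = np.zeros(shape, bool); truth[:, 60:65] = True    # one built-in groove

def sensor_map(noise):
    """Synthetic normalized indication map: signal at the groove plus noise."""
    m = truth * 1.0 + noise * rng.normal(size=shape)
    return (m - m.min()) / (m.max() - m.min())

ec, gmr, thermo = sensor_map(0.6), sensor_map(0.8), sensor_map(1.0)
fused = (ec + gmr + thermo) / 3.0                        # high-level (score-level) fusion

for name, m in [("EC alone", ec), ("fused", fused)]:
    det = m > 0.6                                        # common detection threshold
    print(f"{name}: false-alarm rate = {det[~truth].mean():.3f}")
```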
Ulloa, Alvaro; Jingyu Liu; Vergara, Victor; Jiayu Chen; Calhoun, Vince; Pattichis, Marios
2014-01-01
In the biomedical field, current technology allows for the collection of multiple data modalities from the same subject. Consequently, there is increasing interest in methods for analyzing multi-modal data sets. Methods based on independent component analysis have proven effective in jointly analyzing multiple modalities, including brain imaging and genetic data. This paper describes a new algorithm, three-way parallel independent component analysis (3p-ICA), for jointly identifying genomic loci associated with brain function and structure. The proposed algorithm relies on multi-objective optimization methods to identify correlations among the modalities and maximally independent sources within each modality. We test the robustness of the proposed approach by varying the effect size, cross-modality correlation, noise level, and dimensionality of the data. Simulation results suggest that 3p-ICA is robust to data with SNR levels from 0 to 10 dB and effect sizes from 0 to 3, while presenting its best performance with high cross-modality correlations and more than one subject per 1,000 variables. In an experimental study with 112 human subjects, the method identified links between a genetic component (pointing to brain function and mental disorder associated genes, including PPP3CC, KCNQ5, and CYP7B1), a functional component related to signal decreases in the default mode network during the task, and a brain structure component indicating increases of gray matter in brain regions of the default mode region. Although such findings need further replication, the simulation and in-vivo results validate the three-way parallel ICA algorithm presented here as a useful tool in biomedical data decomposition applications.
Detection of relationships among multi-modal brain imaging meta-features via information flow.
Miller, Robyn L; Vergara, Victor M; Calhoun, Vince D
2018-01-15
Neuroscientists and clinical researchers are awash in data from an ever-growing number of imaging and other bio-behavioral modalities. This flow of brain imaging data, taken under resting and various task conditions, combines with available cognitive measures, behavioral information, genetic data plus other potentially salient biomedical and environmental information to create a rich but diffuse data landscape. The conditions being studied with brain imaging data are often extremely complex, and it is common for researchers to employ more than one imaging, behavioral or biological data modality (e.g., genetics) in their investigations. While the field has advanced significantly in its approach to multimodal data, the vast majority of studies still ignore joint information among two or more features or modalities. We propose an intuitive framework based on conditional probabilities for understanding information exchange between features in what we are calling a feature meta-space; that is, a space consisting of many individual feature spaces. Features can have any dimension and can be drawn from any data source or modality. No a priori assumptions are made about the functional form (e.g., linear, polynomial, exponential) of captured inter-feature relationships. We demonstrate the framework's ability to identify relationships between disparate features of varying dimensionality by applying it to a large multi-site, multi-modal clinical dataset balanced between schizophrenia patients and controls. In our application it exposes both expected (previously observed) relationships and novel relationships rarely investigated by clinical researchers. To the best of our knowledge there is not presently a comparably efficient way to capture relationships of indeterminate functional form between features of arbitrary dimension and type. We are introducing this method as an initial foray into a space that remains relatively underpopulated. The framework we propose is powerful, intuitive and very efficiently provides a high-level overview of a massive data space. Copyright © 2017 Elsevier B.V. All rights reserved.
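The paper's conditional-probability machinery is not spelled out in this abstract; as a hedged illustration of detecting a relationship without assuming a functional form between features of different dimensionality, scikit-learn's mutual information estimator can be used; the synthetic features are assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(8)
n = 1000
# Hypothetical meta-features: a scalar behavioral score and a 3-D imaging feature.
behavior = rng.normal(size=n)
imaging = np.column_stack([
    np.sin(behavior) + 0.1 * rng.normal(size=n),   # nonlinear link to behavior
    rng.normal(size=n),                            # unrelated dimension
    rng.normal(size=n),                            # unrelated dimension
])

# Mutual information between the scalar feature and each imaging dimension.
mi = mutual_info_regression(imaging, behavior, random_state=0)
print("MI per imaging dimension:", np.round(mi, 3))   # first entry should dominate
```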
Spinal fusion-hardware construct: Basic concepts and imaging review
Nouh, Mohamed Ragab
2012-01-01
The interpretation of spinal images fixed with metallic hardware forms an increasing part of daily practice in a busy imaging department. Radiologists are required to be familiar with the instrumentation and operative options used in spinal fixation and fusion procedures, especially those used at their own institution. This is critical in evaluating the position of implants and potential complications associated with the operative approaches and spinal fixation devices used. Thus, the radiologist can play an important role in patient care and outcome. This review outlines the advantages and disadvantages of commonly used imaging methods, reports on the best yield for each modality, and describes how to overcome the problematic issues associated with the presence of metallic hardware during imaging. Baseline radiographs are essential, as they serve as the reference point for evaluation of future studies should patients develop symptoms suggesting possible complications. They may justify further imaging workup with computed tomography, magnetic resonance and/or nuclear medicine studies, as the evaluation of a patient with a spinal implant involves a multi-modality approach. This review describes imaging features of potential complications associated with spinal fusion surgery as well as the instrumentation used. This basic knowledge aims to help radiologists approach everyday practice in clinical imaging. PMID:22761979
Cardiac Sarcoidosis: Clinical Manifestations, Imaging Characteristics, and Therapeutic Approach
Houston, Brian A; Mukherjee, Monica
2014-01-01
Sarcoidosis is a multi-system disease pathologically characterized by the accumulation of T-lymphocytes and mononuclear phagocytes into the sine qua non pathologic structure of the noncaseating granuloma. Cardiac involvement remains a key source of morbidity and mortality in sarcoidosis. Definitive diagnosis of cardiac sarcoidosis, particularly early enough in the disease course to provide maximal therapeutic impact, has proven a particularly difficult challenge. However, major advancements in imaging techniques have been made in the last decade. Advancements in imaging modalities including echocardiography, nuclear spectroscopy, positron emission tomography, and magnetic resonance imaging all have improved our ability to diagnose cardiac sarcoidosis, and in many cases to provide a more accurate prognosis and thus targeted therapy. Likewise, therapy for cardiac sarcoidosis is beginning to advance past a “steroids-only” approach, as novel immunosuppressant agents provide effective steroid-sparing options. The following focused review will provide a brief discussion of the epidemiology and clinical presentation of cardiac sarcoidosis followed by a discussion of up-to-date imaging modalities employed in its assessment and therapeutic approaches. PMID:25452702
A Review of Multivariate Methods for Multimodal Fusion of Brain Imaging Data
Adali, Tülay; Yu, Qingbao; Calhoun, Vince D.
2011-01-01
The development of various neuroimaging techniques is rapidly improving the measurements of brain function/structure. However, despite improvements in individual modalities, it is becoming increasingly clear that the most effective research approaches will utilize multi-modal fusion, which takes advantage of the fact that each modality provides a limited view of the brain. The goal of multimodal fusion is to capitalize on the strength of each modality in a joint analysis, rather than a separate analysis of each. This is a more complicated endeavor that must be approached more carefully and efficient methods should be developed to draw generalized and valid conclusions from high dimensional data with a limited number of subjects. Numerous research efforts have been reported in the field based on various statistical approaches, e.g. independent component analysis (ICA), canonical correlation analysis (CCA) and partial least squares (PLS). In this review paper, we survey a number of multivariate methods appearing in previous reports, which are performed with or without prior information and may have utility for identifying potential brain illness biomarkers. We also discuss the possible strengths and limitations of each method, and review their applications to brain imaging data. PMID:22108139
Joint MR-PET reconstruction using a multi-channel image regularizer
Koesters, Thomas; Otazo, Ricardo; Bredies, Kristian; Sodickson, Daniel K
2016-01-01
While current state-of-the-art MR-PET scanners enable simultaneous MR and PET measurements, the acquired data sets are still usually reconstructed separately. We propose a new multi-modality reconstruction framework using second-order Total Generalized Variation (TGV) as a dedicated multi-channel regularization functional that jointly reconstructs images from both modalities. In this way, information about the underlying anatomy is shared during the image reconstruction process while unique differences are preserved. Results from numerical simulations and in-vivo experiments using a range of accelerated MR acquisitions and different MR image contrasts demonstrate improved PET image quality, resolution, and quantitative accuracy. PMID:28055827
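The abstract does not reproduce the variational problem; as a hedged sketch of a typical joint MR-PET formulation with a shared multi-channel TGV prior (generic notation, not necessarily the authors' exact functional):

```latex
% u = (u_MR, u_PET): jointly reconstructed images; A_MR, A_PET: forward operators;
% d_MR, d_PET: measured data; KL: Kullback-Leibler (Poisson) data fidelity for PET.
\min_{u = (u_{\mathrm{MR}},\, u_{\mathrm{PET}})}
  \;\tfrac{1}{2}\,\| A_{\mathrm{MR}}\, u_{\mathrm{MR}} - d_{\mathrm{MR}} \|_2^2
  \;+\; \mathrm{KL}\!\left( A_{\mathrm{PET}}\, u_{\mathrm{PET}},\, d_{\mathrm{PET}} \right)
  \;+\; \mathrm{TGV}_{\alpha}^{2}(u)
```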
Angiogram, fundus, and oxygen saturation optic nerve head image fusion
NASA Astrophysics Data System (ADS)
Cao, Hua; Khoobehi, Bahram
2009-02-01
A novel multi-modality optic nerve head image fusion approach has been successfully designed and applied to three ophthalmologic modalities: angiogram, fundus, and oxygen saturation retinal optic nerve head images. It achieves an excellent result, visualizing fundus or oxygen saturation images with a complete angiogram overlay. This study makes two contributions in terms of novelty, efficiency, and accuracy. The first contribution is an automated control point detection algorithm for multi-sensor images. The new method employs retinal vasculature and bifurcation features, identifying an initial guess of the control points using the Adaptive Exploratory Algorithm. The second contribution is a heuristic optimization fusion algorithm. In order to maximize the objective function (Mutual-Pixel-Count), the iterative algorithm adjusts the initial guess of the control points at the sub-pixel level. A refinement of the parameter set is obtained at the end of each loop, and an optimal fused image is generated at the end of the iteration. This is the first time the Mutual-Pixel-Count concept has been introduced into the biomedical image fusion area. By locking the images in one place, the fused image allows ophthalmologists to match the same eye over time, track disease progress, and pinpoint surgical tools. The new algorithm can be readily extended to 3D eye, brain, or body image registration and fusion in humans or animals.
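The Mutual-Pixel-Count objective is described only at a high level; as a hedged sketch, it can be read as the number of vessel pixels that coincide after applying a candidate shift, maximized by a small search standing in for the paper's iterative sub-pixel refinement. The binary vessel masks and search range are assumptions.

```python
import numpy as np

def mutual_pixel_count(mask_a: np.ndarray, mask_b: np.ndarray, dy: int, dx: int) -> int:
    """Count vessel pixels that coincide after shifting mask_b by (dy, dx)."""
    shifted = np.roll(mask_b, (dy, dx), axis=(0, 1))
    return int(np.sum(mask_a & shifted))

rng = np.random.default_rng(9)
vessels = rng.random((128, 128)) > 0.95          # hypothetical binary vessel map
other = np.roll(vessels, (3, -2), axis=(0, 1))   # same vessels, offset between modalities

# Exhaustive small search over integer shifts (stand-in for iterative refinement).
best = max((mutual_pixel_count(vessels, other, dy, dx), dy, dx)
           for dy in range(-5, 6) for dx in range(-5, 6))
print("best MPC:", best[0], "at shift", best[1:])   # expect shift (-3, 2)
```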
NASA Astrophysics Data System (ADS)
Wang, Yan; Ji, Lei; Zhang, Bingbo; Yin, Peihao; Qiu, Yanyan; Song, Daqian; Zhou, Juying; Li, Qi
2013-05-01
Multi-modal imaging based on multifunctional nanoparticles is a promising approach to improve the sensitivity of early cancer diagnosis. In this study, highly upconverting, strongly relaxive rare-earth nanoparticles coated with paramagnetic lanthanide complex shells and polyethylene glycol (PEGylated UCNPs@DTPA-Gd3+) are synthesized as dual-modality contrast agents (CAs) for upconversion fluorescence and magnetic resonance imaging. PEGylated UCNPs@DTPA-Gd3+ with sizes in the range of 32-86 nm are colloidally stable. They exhibit higher longitudinal and transverse relaxivity in water (r1 and r2 values of 7.4 and 27.8 s-1 per mM Gd3+, respectively) than commercial Gd-DTPA (r1 and r2 values of 3.7 and 4.6 s-1 per mM Gd3+, respectively), and they are found to be biocompatible. In vitro cancer cell imaging shows good imaging contrast of PEGylated UCNPs@DTPA-Gd3+. In vivo upconversion fluorescence imaging and T1-weighted MRI show excellent enhancement of both fluorescent and MR signals in the livers of mice administered PEGylated UCNPs@DTPA-Gd3+. All the experimental results indicate that the synthesized PEGylated UCNPs@DTPA-Gd3+ have great potential for biomedical upconversion fluorescence and magnetic resonance dual-modality imaging applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
At least three major trends in surgical intervention have emerged over the last decade: a move toward more minimally invasive (or non-invasive) approaches to the surgical target; the development of high-precision treatment delivery techniques; and the increasing role of multi-modality intraoperative imaging in support of such procedures. This symposium includes invited presentations on recent advances in each of these areas and the emerging role for medical physics research in the development and translation of high-precision interventional techniques. The four speakers are: Keyvan Farahani, “Image-guided focused ultrasound surgery and therapy”; Jeffrey H. Siewerdsen, “Advances in image registration and reconstruction for image-guided neurosurgery”; Tina Kapur, “Image-guided surgery and interventions in the advanced multimodality image-guided operating (AMIGO) suite”; and Raj Shekhar, “Multimodality image-guided interventions: Multimodality for the rest of us”. Learning Objectives: Understand the principles and applications of HIFU in surgical ablation. Learn about recent advances in 3D–2D and 3D deformable image registration in support of surgical safety and precision. Learn about recent advances in model-based 3D image reconstruction in application to intraoperative 3D imaging. Understand the multi-modality imaging technologies and clinical applications investigated in the AMIGO suite. Understand the emerging need and techniques to implement multi-modality image guidance in surgical applications such as neurosurgery, orthopaedic surgery, vascular surgery, and interventional radiology. Research supported by the NIH and Siemens Healthcare. J. Siewerdsen: Grant Support - National Institutes of Health; Grant Support - Siemens Healthcare; Grant Support - Carestream Health; Advisory Board - Carestream Health; Licensing Agreement - Carestream Health; Licensing Agreement - Elekta Oncology. T. Kapur: P41EB015898. R. Shekhar: Funding: R42CA137886 and R41CA192504. Disclosure and CoI: IGI Technologies, small-business partner on the grants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kapur, T.
At least three major trends in surgical intervention have emerged over the last decade: a move toward more minimally invasive (or non-invasive) approaches to the surgical target; the development of high-precision treatment delivery techniques; and the increasing role of multi-modality intraoperative imaging in support of such procedures. This symposium includes invited presentations on recent advances in each of these areas and the emerging role for medical physics research in the development and translation of high-precision interventional techniques. The four speakers are: Keyvan Farahani, “Image-guided focused ultrasound surgery and therapy”; Jeffrey H. Siewerdsen, “Advances in image registration and reconstruction for image-guided neurosurgery”; Tina Kapur, “Image-guided surgery and interventions in the advanced multimodality image-guided operating (AMIGO) suite”; and Raj Shekhar, “Multimodality image-guided interventions: Multimodality for the rest of us”. Learning Objectives: Understand the principles and applications of HIFU in surgical ablation. Learn about recent advances in 3D–2D and 3D deformable image registration in support of surgical safety and precision. Learn about recent advances in model-based 3D image reconstruction in application to intraoperative 3D imaging. Understand the multi-modality imaging technologies and clinical applications investigated in the AMIGO suite. Understand the emerging need and techniques to implement multi-modality image guidance in surgical applications such as neurosurgery, orthopaedic surgery, vascular surgery, and interventional radiology. Research supported by the NIH and Siemens Healthcare. J. Siewerdsen: Grant Support - National Institutes of Health; Grant Support - Siemens Healthcare; Grant Support - Carestream Health; Advisory Board - Carestream Health; Licensing Agreement - Carestream Health; Licensing Agreement - Elekta Oncology. T. Kapur: P41EB015898. R. Shekhar: Funding: R42CA137886 and R41CA192504. Disclosure and CoI: IGI Technologies, small-business partner on the grants.
MO-DE-202-02: Advances in Image Registration and Reconstruction for Image-Guided Neurosurgery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siewerdsen, J.
At least three major trends in surgical intervention have emerged over the last decade: a move toward more minimally invasive (or non-invasive) approaches to the surgical target; the development of high-precision treatment delivery techniques; and the increasing role of multi-modality intraoperative imaging in support of such procedures. This symposium includes invited presentations on recent advances in each of these areas and the emerging role for medical physics research in the development and translation of high-precision interventional techniques. The four speakers are: Keyvan Farahani, “Image-guided focused ultrasound surgery and therapy”; Jeffrey H. Siewerdsen, “Advances in image registration and reconstruction for image-guided neurosurgery”; Tina Kapur, “Image-guided surgery and interventions in the advanced multimodality image-guided operating (AMIGO) suite”; and Raj Shekhar, “Multimodality image-guided interventions: Multimodality for the rest of us”. Learning Objectives: Understand the principles and applications of HIFU in surgical ablation. Learn about recent advances in 3D–2D and 3D deformable image registration in support of surgical safety and precision. Learn about recent advances in model-based 3D image reconstruction in application to intraoperative 3D imaging. Understand the multi-modality imaging technologies and clinical applications investigated in the AMIGO suite. Understand the emerging need and techniques to implement multi-modality image guidance in surgical applications such as neurosurgery, orthopaedic surgery, vascular surgery, and interventional radiology. Research supported by the NIH and Siemens Healthcare. J. Siewerdsen: Grant Support - National Institutes of Health; Grant Support - Siemens Healthcare; Grant Support - Carestream Health; Advisory Board - Carestream Health; Licensing Agreement - Carestream Health; Licensing Agreement - Elekta Oncology. T. Kapur: P41EB015898. R. Shekhar: Funding: R42CA137886 and R41CA192504. Disclosure and CoI: IGI Technologies, small-business partner on the grants.
Cross-modal face recognition using multi-matcher face scores
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Blasch, Erik
2015-05-01
The performance of face recognition can be improved using information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face may be a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face to the visible gallery faces requires cross-modal matching approaches. A few such studies have been implemented in facial feature space with moderate recognition performance. In this paper, we propose a cross-modal recognition approach, where multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at the image, feature, and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed with three cross-matched face scores from the aforementioned algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logistic regression [BLR]) is trained and then tested with the score vectors using 10-fold cross validation. The proposed approach was validated with a multispectral stereo face dataset from 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84% and FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces onto the fused visible faces using three face scores and the BLR classifier.
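A minimal scikit-learn sketch of the score-level fusion step described above: three matcher scores per comparison form a score vector, and a binomial logistic regression classifier is evaluated with 10-fold cross validation. The synthetic score distributions are placeholders, not the paper's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# score_vectors: one row per probe/gallery comparison, three columns holding the
# cross-matched scores from the three matchers (CGF, FPB, LDA); labels mark
# genuine (1) vs impostor (0) pairs.  Random numbers stand in for real scores.
rng = np.random.default_rng(0)
score_vectors = np.vstack([rng.normal(0.6, 0.1, (200, 3)),    # genuine pairs
                           rng.normal(0.4, 0.1, (200, 3))])   # impostor pairs
labels = np.concatenate([np.ones(200), np.zeros(200)])

clf = LogisticRegression()   # binomial logistic regression, as in the abstract
acc = cross_val_score(clf, score_vectors, labels, cv=10, scoring="accuracy")
print(f"10-fold accuracy: {acc.mean():.3f} +/- {acc.std():.3f}")
```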
MO-DE-202-01: Image-Guided Focused Ultrasound Surgery and Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farahani, K.
At least three major trends in surgical intervention have emerged over the last decade: a move toward more minimally invasive (or non-invasive) approaches to the surgical target; the development of high-precision treatment delivery techniques; and the increasing role of multi-modality intraoperative imaging in support of such procedures. This symposium includes invited presentations on recent advances in each of these areas and the emerging role for medical physics research in the development and translation of high-precision interventional techniques. The four speakers are: Keyvan Farahani, “Image-guided focused ultrasound surgery and therapy”; Jeffrey H. Siewerdsen, “Advances in image registration and reconstruction for image-guided neurosurgery”; Tina Kapur, “Image-guided surgery and interventions in the advanced multimodality image-guided operating (AMIGO) suite”; and Raj Shekhar, “Multimodality image-guided interventions: Multimodality for the rest of us”. Learning Objectives: Understand the principles and applications of HIFU in surgical ablation. Learn about recent advances in 3D–2D and 3D deformable image registration in support of surgical safety and precision. Learn about recent advances in model-based 3D image reconstruction in application to intraoperative 3D imaging. Understand the multi-modality imaging technologies and clinical applications investigated in the AMIGO suite. Understand the emerging need and techniques to implement multi-modality image guidance in surgical applications such as neurosurgery, orthopaedic surgery, vascular surgery, and interventional radiology. Research supported by the NIH and Siemens Healthcare. J. Siewerdsen: Grant Support - National Institutes of Health; Grant Support - Siemens Healthcare; Grant Support - Carestream Health; Advisory Board - Carestream Health; Licensing Agreement - Carestream Health; Licensing Agreement - Elekta Oncology. T. Kapur: P41EB015898. R. Shekhar: Funding: R42CA137886 and R41CA192504. Disclosure and CoI: IGI Technologies, small-business partner on the grants.
MO-DE-202-04: Multimodality Image-Guided Surgery and Intervention: For the Rest of Us
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shekhar, R.
At least three major trends in surgical intervention have emerged over the last decade: a move toward more minimally invasive (or non-invasive) approaches to the surgical target; the development of high-precision treatment delivery techniques; and the increasing role of multi-modality intraoperative imaging in support of such procedures. This symposium includes invited presentations on recent advances in each of these areas and the emerging role for medical physics research in the development and translation of high-precision interventional techniques. The four speakers are: Keyvan Farahani, “Image-guided focused ultrasound surgery and therapy”; Jeffrey H. Siewerdsen, “Advances in image registration and reconstruction for image-guided neurosurgery”; Tina Kapur, “Image-guided surgery and interventions in the advanced multimodality image-guided operating (AMIGO) suite”; and Raj Shekhar, “Multimodality image-guided interventions: Multimodality for the rest of us”. Learning Objectives: Understand the principles and applications of HIFU in surgical ablation. Learn about recent advances in 3D–2D and 3D deformable image registration in support of surgical safety and precision. Learn about recent advances in model-based 3D image reconstruction in application to intraoperative 3D imaging. Understand the multi-modality imaging technologies and clinical applications investigated in the AMIGO suite. Understand the emerging need and techniques to implement multi-modality image guidance in surgical applications such as neurosurgery, orthopaedic surgery, vascular surgery, and interventional radiology. Research supported by the NIH and Siemens Healthcare. J. Siewerdsen: Grant Support - National Institutes of Health; Grant Support - Siemens Healthcare; Grant Support - Carestream Health; Advisory Board - Carestream Health; Licensing Agreement - Carestream Health; Licensing Agreement - Elekta Oncology. T. Kapur: P41EB015898. R. Shekhar: Funding: R42CA137886 and R41CA192504. Disclosure and CoI: IGI Technologies, small-business partner on the grants.
Zu, Chen; Jie, Biao; Liu, Mingxia; Chen, Songcan
2015-01-01
Multimodal classification methods using different modalities of imaging and non-imaging data have recently shown great advantages over traditional single-modality-based ones for diagnosis and prognosis of Alzheimer’s disease (AD), as well as its prodromal stage, i.e., mild cognitive impairment (MCI). However, to the best of our knowledge, most existing methods focus on mining the relationship across multiple modalities of the same subjects, while ignoring the potentially useful relationship across different subjects. Accordingly, in this paper, we propose a novel learning method for multimodal classification of AD/MCI that fully explores the relationships across both modalities and subjects. Specifically, our proposed method comprises two sequential components, i.e., label-aligned multi-task feature selection and multimodal classification. In the first step, feature selection from each modality is treated as a separate learning task, and a group sparsity regularizer is imposed to jointly select a subset of relevant features. Furthermore, to utilize the discriminative information among labeled subjects, a new label-aligned regularization term is added to the objective function of standard multi-task feature selection, where label alignment means that all multi-modality subjects with the same class labels should be closer in the new feature-reduced space. In the second step, a multi-kernel support vector machine (SVM) is adopted to fuse the selected features from the multi-modality data for final classification. To validate our method, we perform experiments on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database using baseline MRI and FDG-PET imaging data. The experimental results demonstrate that our proposed method achieves better classification performance compared with several state-of-the-art methods for multimodal classification of AD/MCI. PMID:26572145
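The group-sparse selection step can be sketched with a proximal-gradient update under an ℓ2,1 penalty, which zeroes out entire feature rows across modality-specific tasks. This toy numpy version omits the paper's label-aligned regularizer and the multi-kernel SVM stage; all names and hyperparameters are illustrative.

```python
import numpy as np

def l21_prox(W, tau):
    """Row-wise soft thresholding: shrinks whole feature rows across tasks,
    which is what drives joint feature selection."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return W * scale

def multitask_feature_select(Xs, ys, lam=0.1, lr=1e-3, n_iter=500):
    """Xs, ys: lists with one (n_m x d) feature matrix and (n_m,) target vector per
    modality; returns a d x M weight matrix whose non-zero rows are selected features."""
    d, M = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, M))
    for _ in range(n_iter):
        grad = np.zeros_like(W)
        for m in range(M):
            resid = Xs[m] @ W[:, m] - ys[m]        # least-squares loss per task
            grad[:, m] = Xs[m].T @ resid / len(ys[m])
        W = l21_prox(W - lr * grad, lr * lam)       # proximal gradient step
    return W

# Tiny synthetic example: 3 modalities, 30 subjects each, 50 candidate features.
rng = np.random.default_rng(0)
Xs = [rng.standard_normal((30, 50)) for _ in range(3)]
ys = [x[:, 0] - x[:, 1] + 0.1 * rng.standard_normal(30) for x in Xs]
W = multitask_feature_select(Xs, ys)
selected = np.flatnonzero(np.linalg.norm(W, axis=1) > 1e-8)  # features kept for the SVM stage
```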
TU-G-303-03: Machine Learning to Improve Human Learning From Longitudinal Image Sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veeraraghavan, H.
‘Radiomics’ refers to studies that extract a large amount of quantitative information from medical imaging studies as a basis for characterizing a specific aspect of patient health. Radiomics models can be built to address a wide range of outcome predictions, clinical decisions, basic cancer biology, etc. For example, radiomics models can be built to predict the aggressiveness of an imaged cancer, cancer gene expression characteristics (radiogenomics), radiation therapy treatment response, etc. Technically, radiomics brings together quantitative imaging, computer vision/image processing, and machine learning. In this symposium, speakers will discuss approaches to radiomics investigations, including: longitudinal radiomics, radiomics combined with other biomarkers (‘pan-omics’), radiomics for various imaging modalities (CT, MRI, and PET), and the use of registered multi-modality imaging datasets as a basis for radiomics. There are many challenges to the eventual use of radiomics-derived methods in clinical practice, including: standardization and robustness of selected metrics, accruing the data required, building and validating the resulting models, registering longitudinal data that often involve significant patient changes, reliable automated cancer segmentation tools, etc. Despite the hurdles, results achieved so far indicate the tremendous potential of this general approach to quantifying and using data from medical images. Specific applications of radiomics to be presented in this symposium will include: the longitudinal analysis of patients with low-grade gliomas; automatic detection and assessment of patients with metastatic bone lesions; image-based monitoring of patients with growing lymph nodes; predicting radiotherapy outcomes using multi-modality radiomics; and studies relating radiomics with genomics in lung cancer and glioblastoma. Learning Objectives: Understanding the basic image features that are often used in radiomic models. Understanding requirements for reliable radiomic models, including robustness of metrics, adequate predictive accuracy, and generalizability. Understanding the methodology behind radiomic-genomic (‘radiogenomics’) correlations. Research supported by NIH (US), CIHR (Canada), and NSERC (Canada)
Liu, Mengyang; Chen, Zhe; Zabihian, Behrooz; Sinz, Christoph; Zhang, Edward; Beard, Paul C.; Ginner, Laurin; Hoover, Erich; Minneman, Micheal P.; Leitgeb, Rainer A.; Kittler, Harald; Drexler, Wolfgang
2016-01-01
Cutaneous blood flow accounts for approximately 5% of cardiac output in humans and plays a key role in a number of physiological and pathological processes. We show for the first time a multi-modal photoacoustic tomography (PAT), optical coherence tomography (OCT), and OCT angiography system with an articulated probe to extract human cutaneous vasculature in vivo in various skin regions. OCT angiography supplies the microvasculature that PAT alone is unable to provide. The co-registered vessel network volume is further embedded in the morphologic image provided by OCT. This multi-modal system is therefore demonstrated as a valuable tool for comprehensive, non-invasive imaging of human skin vasculature and morphology in vivo. PMID:27699106
Multi-Source Learning for Joint Analysis of Incomplete Multi-Modality Neuroimaging Data
Yuan, Lei; Wang, Yalin; Thompson, Paul M.; Narayan, Vaibhav A.; Ye, Jieping
2013-01-01
Incomplete data present serious problems when integrating large-scale brain imaging data sets from different imaging modalities. In the Alzheimer’s Disease Neuroimaging Initiative (ADNI), for example, over half of the subjects lack cerebrospinal fluid (CSF) measurements; an independent half of the subjects do not have fluorodeoxyglucose positron emission tomography (FDG-PET) scans; many lack proteomics measurements. Traditionally, subjects with missing measures are discarded, resulting in a severe loss of available information. We address this problem by proposing two novel learning methods where all the samples (with at least one available data source) can be used. In the first method, we divide our samples according to the availability of data sources, and we learn shared sets of features with state-of-the-art sparse learning methods. Our second method learns a base classifier for each data source independently, based on which we represent each source using a single column of prediction scores; we then estimate the missing prediction scores, which, combined with the existing prediction scores, are used to build a multi-source fusion model. To illustrate the proposed approaches, we classify patients from the ADNI study into groups with Alzheimer’s disease (AD), mild cognitive impairment (MCI) and normal controls, based on the multi-modality data. At baseline, ADNI’s 780 participants (172 AD, 397 MCI, 211 normal) have at least one of four data types: magnetic resonance imaging (MRI), FDG-PET, CSF and proteomics. These data are used to test our algorithms. Comprehensive experiments show that our proposed methods yield stable and promising results. PMID:24014189
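The second method (per-source base classifiers, score imputation, then fusion) can be outlined as follows in scikit-learn. Mean imputation stands in for the paper's estimation of missing prediction scores, and the in-sample scoring is only for brevity; function and variable names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.impute import SimpleImputer

def build_score_matrix(sources, labels):
    """sources: list of (X_m, mask_m) pairs, where mask_m marks subjects that have
    that modality.  Each base classifier is trained on its available subjects only,
    and its prediction scores fill one column; missing entries stay NaN."""
    n = len(labels)
    scores = np.full((n, len(sources)), np.nan)
    for m, (X_m, mask_m) in enumerate(sources):
        base = LogisticRegression(max_iter=1000).fit(X_m[mask_m], labels[mask_m])
        scores[mask_m, m] = base.predict_proba(X_m[mask_m])[:, 1]
    return scores

def fit_fusion_model(scores, labels):
    # Mean imputation is a stand-in for the paper's missing-score estimation step.
    completed = SimpleImputer(strategy="mean").fit_transform(scores)
    return LogisticRegression(max_iter=1000).fit(completed, labels)

# Tiny synthetic example with two sources and partially overlapping availability.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 60)
X1 = rng.normal(labels[:, None], 1.0, (60, 5)); mask1 = rng.random(60) < 0.7
X2 = rng.normal(labels[:, None], 1.0, (60, 8)); mask2 = rng.random(60) < 0.7
scores = build_score_matrix([(X1, mask1), (X2, mask2)], labels)
fusion = fit_fusion_model(scores, labels)
```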
NASA Astrophysics Data System (ADS)
Cook, Jason R.; Dumani, Diego S.; Kubelick, Kelsey P.; Luci, Jeffrey; Emelianov, Stanislav Y.
2017-03-01
Imaging modalities utilize contrast agents to improve morphological visualization and to assess functional and molecular/cellular information. Here we present a new type of nanometer scale multi-functional particle that can be used for multi-modal imaging and therapeutic applications. Specifically, we synthesized monodisperse 20 nm Prussian Blue Nanocubes (PBNCs) with desired optical absorption in the near-infrared region and superparamagnetic properties. PBNCs showed excellent contrast in photoacoustic (700 nm wavelength) and MR (3T) imaging. Furthermore, photostability was assessed by exposing the PBNCs to nearly 1,000 laser pulses (5 ns pulse width) with up to 30 mJ/cm2 laser fluences. The PBNCs exhibited insignificant changes in photoacoustic signal, demonstrating enhanced robustness compared to the commonly used gold nanorods (substantial photodegradation with fluences greater than 5 mJ/cm2). Furthermore, the PBNCs exhibited superparamagnetism with a magnetic saturation of 105 emu/g, a 5x improvement over superparamagnetic iron-oxide (SPIO) nanoparticles. PBNCs exhibited enhanced T2 contrast measured using 3T clinical MRI. Because of the excellent optical absorption and magnetism, PBNCs have potential uses in other imaging modalities including optical tomography, microscopy, magneto-motive OCT/ultrasound, etc. In addition to multi-modal imaging, the PBNCs are multi-functional and, for example, can be used to enhance magnetic delivery and as therapeutic agents. Our initial studies show that stem cells can be labeled with PBNCs to perform image-guided magnetic delivery. Overall, PBNCs can act as imaging/therapeutic agents in diverse applications including cancer, cardiovascular disease, ophthalmology, and tissue engineering. Furthermore, PBNCs are based on FDA approved Prussian Blue thus potentially easing clinical translation of PBNCs.
XML-based scripting of multimodality image presentations in multidisciplinary clinical conferences
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Allada, Vivekanand; Dahlbom, Magdalena; Marcus, Phillip; Fine, Ian; Lapstra, Lorelle
2002-05-01
We developed multi-modality image presentation software for the display and analysis of images and related data from different imaging modalities. The software is part of a cardiac image review and presentation platform that supports integration of digital images and data from digital and analog media such as videotapes, analog x-ray films, and 35 mm cine films. The software supports standard DICOM image files as well as AVI and PDF data formats. The system is integrated in a digital conferencing room that includes projection of digital and analog sources, remote videoconferencing capabilities, and an electronic whiteboard. The goals of this pilot project are to: 1) develop a new paradigm for image and data management for presentation in a clinically meaningful sequence adapted to case-specific scenarios, 2) design and implement a multi-modality review and conferencing workstation using component technology and a customizable 'plug-in' architecture to support complex review and diagnostic tasks applicable to all cardiac imaging modalities, and 3) develop an XML-based scripting model of image and data presentation for clinical review and decision making during routine clinical tasks and multidisciplinary clinical conferences.
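To illustrate what an XML presentation script and a small interpreter for it might look like, here is a hedged Python sketch using the standard library's ElementTree. The element and attribute names (conference, step, image, src, format, layout) are invented for illustration and are not the authors' schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical script: the tag and attribute names are illustrative only.
SCRIPT = """
<conference case="cardiac-001">
  <step title="Angiography review">
    <image src="angio_lao40.dcm" format="DICOM" layout="full"/>
  </step>
  <step title="Echo vs. MR comparison">
    <image src="echo_4ch.avi" format="AVI" layout="left"/>
    <image src="mr_cine_4ch.dcm" format="DICOM" layout="right"/>
  </step>
</conference>
"""

def presentation_sequence(xml_text):
    """Yield (step title, list of (source, format, layout)) in the scripted order."""
    root = ET.fromstring(xml_text)
    for step in root.findall("step"):
        images = [(img.get("src"), img.get("format"), img.get("layout"))
                  for img in step.findall("image")]
        yield step.get("title"), images

for title, images in presentation_sequence(SCRIPT):
    print(title, images)
```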
TU-G-303-04: Radiomics and the Coming Pan-Omics Revolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
El Naqa, I.
‘Radiomics’ refers to studies that extract a large amount of quantitative information from medical imaging studies as a basis for characterizing a specific aspect of patient health. Radiomics models can be built to address a wide range of outcome predictions, clinical decisions, basic cancer biology, etc. For example, radiomics models can be built to predict the aggressiveness of an imaged cancer, cancer gene expression characteristics (radiogenomics), radiation therapy treatment response, etc. Technically, radiomics brings together quantitative imaging, computer vision/image processing, and machine learning. In this symposium, speakers will discuss approaches to radiomics investigations, including: longitudinal radiomics, radiomics combined with other biomarkers (‘pan-omics’), radiomics for various imaging modalities (CT, MRI, and PET), and the use of registered multi-modality imaging datasets as a basis for radiomics. There are many challenges to the eventual use of radiomics-derived methods in clinical practice, including: standardization and robustness of selected metrics, accruing the data required, building and validating the resulting models, registering longitudinal data that often involve significant patient changes, reliable automated cancer segmentation tools, etc. Despite the hurdles, results achieved so far indicate the tremendous potential of this general approach to quantifying and using data from medical images. Specific applications of radiomics to be presented in this symposium will include: the longitudinal analysis of patients with low-grade gliomas; automatic detection and assessment of patients with metastatic bone lesions; image-based monitoring of patients with growing lymph nodes; predicting radiotherapy outcomes using multi-modality radiomics; and studies relating radiomics with genomics in lung cancer and glioblastoma. Learning Objectives: Understanding the basic image features that are often used in radiomic models. Understanding requirements for reliable radiomic models, including robustness of metrics, adequate predictive accuracy, and generalizability. Understanding the methodology behind radiomic-genomic (‘radiogenomics’) correlations. Research supported by NIH (US), CIHR (Canada), and NSERC (Canada)
DataViewer3D: An Open-Source, Cross-Platform Multi-Modal Neuroimaging Data Visualization Tool
Gouws, André; Woods, Will; Millman, Rebecca; Morland, Antony; Green, Gary
2008-01-01
Integration and display of results from multiple neuroimaging modalities [e.g. magnetic resonance imaging (MRI), magnetoencephalography, EEG] relies on display of a diverse range of data within a common, defined coordinate frame. DataViewer3D (DV3D) is a multi-modal imaging data visualization tool offering a cross-platform, open-source solution to simultaneous data overlay visualization requirements of imaging studies. While DV3D is primarily a visualization tool, the package allows an analysis approach where results from one imaging modality can guide comparative analysis of another modality in a single coordinate space. DV3D is built on Python, a dynamic object-oriented programming language with support for integration of modular toolkits, and development of cross-platform software for neuroimaging. DV3D harnesses the power of the Visualization Toolkit (VTK) for two-dimensional (2D) and 3D rendering, calling VTK's low level C++ functions from Python. Users interact with data via an intuitive interface that uses Python to bind wxWidgets, which in turn calls the user's operating system dialogs and graphical user interface tools. DV3D currently supports NIfTI-1, ANALYZE™ and DICOM formats for MRI data display (including statistical data overlay). Formats for other data types are supported. The modularity of DV3D and ease of use of Python allows rapid integration of additional format support and user development. DV3D has been tested on Mac OSX, RedHat Linux and Microsoft Windows XP. DV3D is offered for free download with an extensive set of tutorial resources and example data. PMID:19352444
Multi-Modality Cascaded Convolutional Neural Networks for Alzheimer's Disease Diagnosis.
Liu, Manhua; Cheng, Danni; Wang, Kundong; Wang, Yaping
2018-03-23
Accurate and early diagnosis of Alzheimer's disease (AD) plays an important role in patient care and the development of future treatments. Structural and functional neuroimages, such as magnetic resonance images (MRI) and positron emission tomography (PET), provide powerful imaging modalities to help understand the anatomical and functional neural changes related to AD. In recent years, machine learning methods have been widely studied for the analysis of multi-modality neuroimages for quantitative evaluation and computer-aided diagnosis (CAD) of AD. Most existing methods extract hand-crafted imaging features after image preprocessing such as registration and segmentation, and then train a classifier to distinguish AD subjects from other groups. This paper proposes to construct cascaded convolutional neural networks (CNNs) to learn the multi-level and multimodal features of MRI and PET brain images for AD classification. First, multiple deep 3D-CNNs are constructed on different local image patches to transform the local brain image into more compact high-level features. Then, an upper high-level 2D-CNN followed by a softmax layer is cascaded to ensemble the high-level features learned from the multiple modalities and generate the latent multimodal correlation features of the corresponding image patches for the classification task. Finally, these learned features are combined by a fully connected layer followed by a softmax layer for AD classification. The proposed method can automatically learn generic multi-level and multimodal features from multiple imaging modalities for classification, and these features are robust to scale and rotation variations to some extent. No image segmentation or rigid registration is required in pre-processing the brain images. Our method is evaluated on the baseline MRI and PET images of 397 subjects, including 93 AD patients, 204 mild cognitive impairment (MCI; 76 pMCI + 128 sMCI) and 100 normal controls (NC), from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Experimental results show that the proposed method achieves an accuracy of 93.26% for classification of AD vs. NC and 82.95% for classification of pMCI vs. NC, demonstrating promising classification performance.
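A minimal PyTorch sketch of the overall shape: a small 3D CNN encodes a local patch from each modality, and a classifier head fuses the per-modality features. It simplifies the paper's cascade (fusion here is plain concatenation rather than an upper 2D-CNN over patch features), and all layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PatchEncoder3D(nn.Module):
    """Small 3D CNN that turns one local brain patch into a compact feature vector."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(32, out_dim)

    def forward(self, x):                  # x: (batch, 1, D, H, W)
        return self.fc(self.features(x).flatten(1))

class MultiModalClassifier(nn.Module):
    """Separate encoders for MRI and PET patches; fused features feed the class head."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.mri_enc = PatchEncoder3D()
        self.pet_enc = PatchEncoder3D()
        self.head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, mri_patch, pet_patch):
        fused = torch.cat([self.mri_enc(mri_patch), self.pet_enc(pet_patch)], dim=1)
        return self.head(fused)            # logits; train with cross-entropy as usual

model = MultiModalClassifier()
logits = model(torch.randn(2, 1, 16, 16, 16), torch.randn(2, 1, 16, 16, 16))
```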
Multi-detector CT imaging in the postoperative orthopedic patient with metal hardware.
Vande Berg, Bruno; Malghem, Jacques; Maldague, Baudouin; Lecouvet, Frederic
2006-12-01
Multi-detector CT imaging (MDCT) has become a routine imaging modality in the assessment of postoperative orthopedic patients with metallic instrumentation, which degrades image quality at MR imaging. This article reviews the physical basis and CT appearance of such metal-related artifacts. It also addresses the clinical value of MDCT in postoperative orthopedic patients, with emphasis on fracture healing, spinal fusion or arthrodesis, and joint replacement. MDCT imaging shows limitations in the assessment of the bone marrow cavity and of the soft tissues, for which MR imaging remains the modality of choice despite metal-related anatomic distortions and signal alterations.
Chi, Chongwei; Du, Yang; Ye, Jinzuo; Kou, Deqiang; Qiu, Jingdan; Wang, Jiandong; Tian, Jie; Chen, Xiaoyuan
2014-01-01
Cancer is a major threat to human health. Diagnosis and treatment using precision medicine is expected to be an effective method for preventing the initiation and progression of cancer. Although anatomical and functional imaging techniques such as radiography, computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET) have played an important role in accurate preoperative diagnostics, for the most part these techniques cannot be applied intraoperatively. Optical molecular imaging is a promising technique that provides a high degree of sensitivity and specificity in tumor margin detection. Furthermore, existing clinical applications have proven that optical molecular imaging is a powerful intraoperative tool for guiding surgeons performing precision procedures, thus enabling radical resection and improved survival rates. However, a detection depth limitation exists in optical molecular imaging methods, and further breakthroughs from optical to multi-modality intraoperative imaging methods are needed to develop more extensive and comprehensive intraoperative applications. Here, we review current intraoperative optical molecular imaging technologies, focusing on contrast agents and surgical navigation systems, and then discuss the future prospects of multi-modality imaging technology for intraoperative imaging-guided cancer surgery. PMID:25250092
Multi-modal automatic montaging of adaptive optics retinal images
Chen, Min; Cooper, Robert F.; Han, Grace K.; Gee, James; Brainard, David H.; Morgan, Jessica I. W.
2016-01-01
We present a fully automated adaptive optics (AO) retinal image montaging algorithm using classic scale invariant feature transform with random sample consensus for outlier removal. Our approach is capable of using information from multiple AO modalities (confocal, split detection, and dark field) and can accurately detect discontinuities in the montage. The algorithm output is compared to manual montaging by evaluating the similarity of the overlapping regions after montaging, and calculating the detection rate of discontinuities in the montage. Our results show that the proposed algorithm has high alignment accuracy and a discontinuity detection rate that is comparable (and often superior) to manual montaging. In addition, we analyze and show the benefits of using multiple modalities in the montaging process. We provide the algorithm presented in this paper as open-source and freely available to download. PMID:28018714
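A compact OpenCV sketch of the pairwise alignment step the abstract describes: SIFT keypoints plus RANSAC homography estimation, with a failure to find enough matches treated as a montage discontinuity. It operates on a single AO channel and is an illustration only; the threshold values and the helper name are assumptions, not the released algorithm.

```python
import cv2
import numpy as np

def pairwise_montage_transform(img_a, img_b, min_matches=10):
    """Estimate the homography aligning img_b onto img_a from SIFT keypoints,
    with RANSAC rejecting outlier matches."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_b, des_a, k=2)
    good = [m for m, n in (p for p in matches if len(p) == 2)
            if m.distance < 0.75 * n.distance]      # Lowe ratio test
    if len(good) < min_matches:
        return None   # too few matches: flag a discontinuity in the montage

    src = np.float32([kp_b[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```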
Math Snacks: Using Animations and Games to Fill the Gaps in Mathematics
ERIC Educational Resources Information Center
Valdiz, Alfred; Trujillo, Karen; Wiburg, Karin
2013-01-01
Math Snacks animations and support materials were developed for use on the web and mobile technologies to teach ratio, proportion, scale factor, and number line concepts using a multi-modal approach. Included in Math Snacks are: Animations which promote the visualization of a concept image; written lessons which provide cognitive complexity for…
A Multi-Modality CMOS Sensor Array for Cell-Based Assay and Drug Screening.
Chi, Taiyun; Park, Jong Seok; Butts, Jessica C; Hookway, Tracy A; Su, Amy; Zhu, Chengjie; Styczynski, Mark P; McDevitt, Todd C; Wang, Hua
2015-12-01
In this paper, we present a fully integrated multi-modality CMOS cellular sensor array with four sensing modalities to characterize different cell physiological responses, including extracellular voltage recording, cellular impedance mapping, optical detection with shadow imaging and bioluminescence sensing, and thermal monitoring. The sensor array consists of nine parallel pixel groups and nine corresponding signal conditioning blocks. Each pixel group comprises one temperature sensor and 16 tri-modality sensor pixels, while each tri-modality sensor pixel can be independently configured for extracellular voltage recording, cellular impedance measurement (voltage excitation/current sensing), and optical detection. This sensor array supports multi-modality cellular sensing at the pixel level, which enables holistic cell characterization and joint-modality physiological monitoring on the same cellular sample with a pixel resolution of 80 μm × 100 μm. Comprehensive biological experiments with different living cell samples demonstrate the functionality and benefit of the proposed multi-modality sensing in cell-based assay and drug screening.
Zuluaga, Maria A; Rodionov, Roman; Nowell, Mark; Achhala, Sufyan; Zombori, Gergely; Mendelson, Alex F; Cardoso, M Jorge; Miserocchi, Anna; McEvoy, Andrew W; Duncan, John S; Ourselin, Sébastien
2015-08-01
Brain vessels are among the most critical landmarks that need to be assessed for mitigating surgical risks in stereo-electroencephalography (SEEG) implantation. Intracranial haemorrhage is the most common complication associated with implantation, carrying significant associated morbidity. SEEG planning is done pre-operatively to identify avascular trajectories for the electrodes. In current practice, neurosurgeons have no assistance in the planning of electrode trajectories. There is great interest in developing computer-assisted planning systems that can optimise the safety profile of electrode trajectories, maximising the distance to critical structures. This paper presents a method that integrates the concepts of scale, neighbourhood structure and feature stability with the aim of improving robustness and accuracy of vessel extraction within a SEEG planning system. The developed method accounts for scale and vicinity of a voxel by formulating the problem within a multi-scale tensor voting framework. Feature stability is achieved through a similarity measure that evaluates the multi-modal consistency of vesselness responses. The proposed measure allows the combination of multiple image modalities into a single image that is used within the planning system to visualise critical vessels. Twelve paired data sets from two image modalities available within the planning system were used for evaluation. The mean Dice similarity coefficient was 0.89 ± 0.04, representing a statistically significant improvement compared to a semi-automated single human rater, single-modality segmentation protocol used in clinical practice (0.80 ± 0.03). Multi-modal vessel extraction is superior to semi-automated single-modality segmentation, indicating the possibility of safer SEEG planning with reduced patient morbidity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Juffmann, Thomas; Koppell, Stewart A.; Klopfer, Brannon B.
Feynman once asked physicists to build better electron microscopes to be able to watch biology at work. While electron microscopes can now provide atomic resolution, electron beam induced specimen damage precludes high resolution imaging of sensitive materials, such as single proteins or polymers. Here, we use simulations to show that an electron microscope based on a multi-pass measurement protocol enables imaging of single proteins, without averaging structures over multiple images. While we demonstrate the method for particular imaging targets, the approach is broadly applicable and is expected to improve resolution and sensitivity for a range of electron microscopy imaging modalities, including, for example, scanning and spectroscopic techniques. The approach implements a quantum mechanically optimal strategy which under idealized conditions can be considered interaction-free.
α-Information Based Registration of Dynamic Scans for Magnetic Resonance Cystography
Han, Hao; Lin, Qin; Li, Lihong; Duan, Chaijie; Lu, Hongbing; Li, Haifang; Yan, Zengmin; Fitzgerald, John
2015-01-01
To continue our effort on developing magnetic resonance (MR) cystography, we introduce a novel non-rigid 3D registration method to compensate for bladder wall motion and deformation in dynamic MR scans, which are impaired by a relatively low signal-to-noise ratio in each time frame. The registration method is built on the similarity measure of α-information, which has the potential to achieve higher registration accuracy than the commonly used mutual information (MI) measure for either mono-modality or multi-modality image registration. The α-information metric was also demonstrated to be superior to both the mean squares and the cross-correlation metrics in multi-modality scenarios. The proposed α-registration method was applied for bladder motion compensation in real patient studies, and its effect on the automatic and accurate segmentation of the bladder wall was also evaluated. Compared with the prevailing MI-based image registration approach, the presented α-information based registration was more effective at capturing the bladder wall motion and deformation, which ensured the success of the subsequent bladder wall segmentation and the goal of evaluating the entire bladder wall for detection and diagnosis of abnormality. PMID:26087506
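For concreteness, here is a small numpy sketch of an α-information (Rényi mutual information) similarity measure estimated from a joint intensity histogram; as α approaches 1 it reduces to the Shannon mutual information used by conventional MI registration. This is only the metric, not the paper's non-rigid registration scheme, and the bin count and α value are illustrative.

```python
import numpy as np

def renyi_mutual_information(img_a, img_b, alpha=0.7, bins=32):
    """Rényi (alpha) mutual information estimated from the joint intensity histogram."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_xy = hist / hist.sum()                     # joint distribution
    p_x = p_xy.sum(axis=1, keepdims=True)        # marginal of image A
    p_y = p_xy.sum(axis=0, keepdims=True)        # marginal of image B
    mask = p_xy > 0                              # avoid log/power of zero entries
    ratio = p_xy[mask] ** alpha / (p_x @ p_y)[mask] ** (alpha - 1.0)
    return np.log(ratio.sum()) / (alpha - 1.0)
```

In a registration loop, this value would be evaluated over candidate deformations of the moving frame and maximized, exactly as one would do with a standard MI metric.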
NASA Astrophysics Data System (ADS)
Sun, Yi; You, Sixian; Tu, Haohua; Spillman, Darold R.; Marjanovic, Marina; Chaney, Eric J.; Liu, George Z.; Ray, Partha S.; Higham, Anna; Boppart, Stephen A.
2017-02-01
Label-free multi-photon imaging has been a powerful tool for studying tissue microstructures and biochemical distributions, particularly for investigating tumors and their microenvironments. However, it remains challenging for traditional bench-top multi-photon microscope systems to conduct ex vivo tumor tissue imaging in the operating room due to their bulky setups and laser sources. In this study, we designed, built, and clinically demonstrated a portable multi-modal nonlinear label-free microscope system that combines four modalities: two- and three-photon fluorescence for studying the distributions of FAD and NADH, and second and third harmonic generation for collagen fiber structures and the distribution of micro-vesicles found in tumors and the microenvironment, respectively. Optical realignment and switching between modalities were motorized for more rapid and efficient imaging and for a light-tight enclosure, reducing ambient light noise to only 5% within the brightly lit operating room. Using up to 20 mW of laser power after a 20x objective, this system can acquire multi-modal sets of images over 600 μm × 600 μm with an acquisition time of 60 seconds using galvo-mirror scanning. This portable microscope system was demonstrated in the operating room for imaging fresh, resected, unstained breast tissue specimens, and for assessing tumor margins and the tumor microenvironment. This real-time label-free nonlinear imaging system has the potential to uniquely characterize breast cancer margins and the tumor microenvironment, intraoperatively identifying structural, functional, and molecular changes that could indicate the aggressiveness of the tumor.
NASA Astrophysics Data System (ADS)
Cochran, Jeffrey M.; Busch, David R.; Ban, Han Y.; Kavuri, Venkaiah C.; Schweiger, Martin J.; Arridge, Simon R.; Yodh, Arjun G.
2017-02-01
We present high-spatial-density, multi-modal, parallel-plate diffuse optical tomography (DOT) imaging systems for breast tumor detection. One hybrid instrument provides time domain (TD) and continuous wave (CW) DOT at 64 source fiber positions. The TD diffuse optical spectroscopy with PMT detection produces low-resolution images of absolute tissue scattering and absorption, while the spatially dense array of CCD-coupled detector fibers (108 detectors) provides higher-resolution CW images of relative tissue optical properties. Reconstruction of the tissue optical properties, along with total hemoglobin concentration and tissue oxygen saturation, is performed using the TOAST software suite. Comparison of the spatially dense DOT images and MR images allows for a robust validation of DOT against an accepted clinical modality. Additionally, the structural information from co-registered MR images is used as a spatial prior to improve the quality of the functional optical images and provide more accurate quantification of the optical and hemodynamic properties of tumors. We also present an optical-only imaging system that provides frequency domain (FD) DOT at 209 source positions with full CCD detection and incorporates optical fringe projection profilometry to determine the breast boundary. This profilometry serves as a spatial constraint, improving the quality of the DOT reconstructions while retaining the benefits of an optical-only device. We present initial images from both human subjects and phantoms to display the utility of high spatial density data and multi-modal information in DOT reconstruction with the two systems.
Multi-atlas segmentation enables robust multi-contrast MRI spleen segmentation for splenomegaly
NASA Astrophysics Data System (ADS)
Huo, Yuankai; Liu, Jiaqi; Xu, Zhoubing; Harrigan, Robert L.; Assad, Albert; Abramson, Richard G.; Landman, Bennett A.
2017-02-01
Non-invasive spleen volume estimation is essential in detecting splenomegaly. Magnetic resonance imaging (MRI) has been used to facilitate splenomegaly diagnosis in vivo. However, achieving accurate spleen volume estimation from MR images is challenging given the great inter-subject variance of human abdomens and wide variety of clinical images/modalities. Multi-atlas segmentation has been shown to be a promising approach to handle heterogeneous data and difficult anatomical scenarios. In this paper, we propose to use multi-atlas segmentation frameworks for MRI spleen segmentation for splenomegaly. To the best of our knowledge, this is the first work that integrates multi-atlas segmentation for splenomegaly as seen on MRI. To address the particular concerns of spleen MRI, automated and novel semi-automated atlas selection approaches are introduced. The automated approach interactively selects a subset of atlases using selective and iterative method for performance level estimation (SIMPLE) approach. To further control the outliers, semi-automated craniocaudal length based SIMPLE atlas selection (L-SIMPLE) is proposed to introduce a spatial prior in a fashion to guide the iterative atlas selection. A dataset from a clinical trial containing 55 MRI volumes (28 T1 weighted and 27 T2 weighted) was used to evaluate different methods. Both automated and semi-automated methods achieved median DSC > 0.9. The outliers were alleviated by the L-SIMPLE (≈1 min manual efforts per scan), which achieved 0.9713 Pearson correlation compared with the manual segmentation. The results demonstrated that the multi-atlas segmentation is able to achieve accurate spleen segmentation from the multi-contrast splenomegaly MRI scans.
Multi-atlas Segmentation Enables Robust Multi-contrast MRI Spleen Segmentation for Splenomegaly.
Huo, Yuankai; Liu, Jiaqi; Xu, Zhoubing; Harrigan, Robert L; Assad, Albert; Abramson, Richard G; Landman, Bennett A
2017-02-11
Non-invasive spleen volume estimation is essential in detecting splenomegaly. Magnetic resonance imaging (MRI) has been used to facilitate splenomegaly diagnosis in vivo. However, achieving accurate spleen volume estimation from MR images is challenging given the great inter-subject variance of human abdomens and wide variety of clinical images/modalities. Multi-atlas segmentation has been shown to be a promising approach to handle heterogeneous data and difficult anatomical scenarios. In this paper, we propose to use multi-atlas segmentation frameworks for MRI spleen segmentation for splenomegaly. To the best of our knowledge, this is the first work that integrates multi-atlas segmentation for splenomegaly as seen on MRI. To address the particular concerns of spleen MRI, automated and novel semi-automated atlas selection approaches are introduced. The automated approach interactively selects a subset of atlases using selective and iterative method for performance level estimation (SIMPLE) approach. To further control the outliers, semi-automated craniocaudal length based SIMPLE atlas selection (L-SIMPLE) is proposed to introduce a spatial prior in a fashion to guide the iterative atlas selection. A dataset from a clinical trial containing 55 MRI volumes (28 T1 weighted and 27 T2 weighted) was used to evaluate different methods. Both automated and semi-automated methods achieved median DSC > 0.9. The outliers were alleviated by the L-SIMPLE (≈1 min manual efforts per scan), which achieved 0.9713 Pearson correlation compared with the manual segmentation. The results demonstrated that the multi-atlas segmentation is able to achieve accurate spleen segmentation from the multi-contrast splenomegaly MRI scans.
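A toy numpy sketch of the SIMPLE-style iterative atlas selection loop described above: propagated atlas labels (registration assumed done upstream) are fused by majority vote, each atlas is scored by Dice against the consensus, and low-scoring atlases are dropped before re-fusing. The drop threshold and iteration count are illustrative, and the craniocaudal-length prior of L-SIMPLE is omitted.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-12)

def simple_atlas_selection(propagated_labels, n_iter=5, drop_std=1.0):
    """SIMPLE-style selection: fuse registered atlas labels by majority vote, score
    each atlas by Dice against the consensus, and iteratively drop atlases that fall
    more than `drop_std` standard deviations below the mean score."""
    selected = list(range(len(propagated_labels)))
    for _ in range(n_iter):
        stack = np.stack([propagated_labels[i] for i in selected])
        consensus = stack.mean(axis=0) >= 0.5            # majority vote fusion
        scores = np.array([dice(propagated_labels[i], consensus) for i in selected])
        keep = scores >= scores.mean() - drop_std * scores.std()
        if keep.all():
            break
        selected = [i for i, k in zip(selected, keep) if k]
    return consensus, selected
```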
Real-time dynamic display of registered 4D cardiac MR and ultrasound images using a GPU
NASA Astrophysics Data System (ADS)
Zhang, Q.; Huang, X.; Eagleson, R.; Guiraudon, G.; Peters, T. M.
2007-03-01
In minimally invasive image-guided surgical interventions, different imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), and real-time three-dimensional (3D) ultrasound (US), can provide complementary, multi-spectral image information. Multimodality dynamic image registration is a well-established approach that permits real-time diagnostic information to be enhanced by placing lower-quality real-time images within a high quality anatomical context. For the guidance of cardiac procedures, it would be valuable to register dynamic MRI or CT with intraoperative US. However, in practice, either the high computational cost prohibits such real-time visualization of volumetric multimodal images in a real-world medical environment, or else the resulting image quality is not satisfactory for accurate guidance during the intervention. Modern graphics processing units (GPUs) provide the programmability, parallelism and increased computational precision to begin to address this problem. In this work, we first outline our research on dynamic 3D cardiac MR and US image acquisition, real-time dual-modality registration and US tracking. Then we describe image processing and optimization techniques for 4D (3D + time) cardiac image real-time rendering. We also present our multimodality 4D medical image visualization engine, which directly runs on a GPU in real-time by exploiting the advantages of the graphics hardware. In addition, techniques such as multiple transfer functions for different imaging modalities, dynamic texture binding, advanced texture sampling and multimodality image compositing are employed to facilitate the real-time display and manipulation of the registered dual-modality dynamic 3D MR and US cardiac datasets.
Young Kim, Eun; Johnson, Hans J
2013-01-01
A robust multi-modal tool for automated registration, bias correction, and tissue classification has been implemented for large-scale heterogeneous multi-site longitudinal MR data analysis. This work focused on improving an iterative optimization framework linking bias correction, registration, and tissue classification, inspired by previous work. The primary contributions are robustness improvements from the incorporation of four elements: (1) use of multi-modal and repeated scans, (2) incorporation of highly deformable registration, (3) an extended set of tissue definitions, and (4) multi-modal-aware intensity-context priors. The benefits of these enhancements were investigated through a series of experiments with both a simulated brain data set (BrainWeb) and highly heterogeneous data from a 32-site imaging study, with quality assessed through expert visual inspection. The implementation of this tool is tailored for, but not limited to, large-scale processing of data with substantial variation, and it offers a flexible interface. In this paper, we describe enhancements to joint registration, bias correction, and tissue classification that improve the generalizability and robustness of processing multi-modal longitudinal MR scans collected at multiple sites. The tool was evaluated using both simulated and human-subject MRI images. With these enhancements, the results showed improved robustness for large-scale heterogeneous MRI processing.
Sun, Yang; Stephens, Douglas N.; Park, Jesung; Sun, Yinghua; Marcu, Laura; Cannata, Jonathan M.; Shung, K. Kirk
2010-01-01
We report the development and validation of a multi-modal tissue diagnostic technology that combines three complementary techniques into one system: ultrasound backscatter microscopy (UBM), photoacoustic imaging (PAI), and time-resolved laser-induced fluorescence spectroscopy (TR-LIFS). UBM enables the reconstruction of the tissue microanatomy. PAI maps the optical absorption heterogeneity of the tissue associated with structural information and has the potential to provide functional imaging of the tissue. Examination of the UBM and PAI images allows for localization of regions of interest for TR-LIFS evaluation of the tissue composition. The hybrid probe consists of a single-element ring transducer with concentric fiber optics for multi-modal data acquisition. Validation and characterization of the multi-modal system and coregistration of the ultrasonic, photoacoustic, and spectroscopic data were conducted in a physical phantom with properties of ultrasound scattering, optical absorption, and fluorescence. The UBM system with the 41 MHz ring transducer reaches axial and lateral resolutions of 30 and 65 μm, respectively. The PAI system with 532 nm excitation light from a Nd:YAG laser shows strong contrast for the distribution of optical absorbers. The TR-LIFS system records the fluorescence decay with a time resolution of ~300 ps and a high sensitivity in the nM concentration range. A biological phantom constructed with different types of tissues (tendon and fat) was used to demonstrate the complementary information provided by the three modalities. Fluorescence spectra and lifetimes were compared to differentiate the chemical composition of tissues at the regions of interest determined by the coregistered high-resolution UBM and PAI images. Current results demonstrate that the fusion of these techniques enables sequential detection of functional, morphological, and compositional features of biological tissue, suggesting potential applications in the diagnosis of tumors and atherosclerotic plaques. PMID:21894259
Sun, Yang; Stephens, Douglas N; Park, Jesung; Sun, Yinghua; Marcu, Laura; Cannata, Jonathan M; Shung, K Kirk
2008-01-01
We report the development and validation of a multi-modal tissue diagnostic technology that combines three complementary techniques into one system: ultrasound backscatter microscopy (UBM), photoacoustic imaging (PAI), and time-resolved laser-induced fluorescence spectroscopy (TR-LIFS). UBM enables the reconstruction of the tissue microanatomy. PAI maps the optical absorption heterogeneity of the tissue associated with structural information and has the potential to provide functional imaging of the tissue. Examination of the UBM and PAI images allows for localization of regions of interest for TR-LIFS evaluation of the tissue composition. The hybrid probe consists of a single-element ring transducer with concentric fiber optics for multi-modal data acquisition. Validation and characterization of the multi-modal system and coregistration of the ultrasonic, photoacoustic, and spectroscopic data were conducted in a physical phantom with properties of ultrasound scattering, optical absorption, and fluorescence. The UBM system with the 41 MHz ring transducer reaches axial and lateral resolutions of 30 and 65 μm, respectively. The PAI system with 532 nm excitation light from a Nd:YAG laser shows strong contrast for the distribution of optical absorbers. The TR-LIFS system records the fluorescence decay with a time resolution of ~300 ps and a high sensitivity in the nM concentration range. A biological phantom constructed with different types of tissues (tendon and fat) was used to demonstrate the complementary information provided by the three modalities. Fluorescence spectra and lifetimes were compared to differentiate the chemical composition of tissues at the regions of interest determined by the coregistered high-resolution UBM and PAI images. Current results demonstrate that the fusion of these techniques enables sequential detection of functional, morphological, and compositional features of biological tissue, suggesting potential applications in the diagnosis of tumors and atherosclerotic plaques.
NASA Astrophysics Data System (ADS)
Tang, Qinggong; Frank, Aaron; Wang, Jianting; Chen, Chao-wei; Jin, Lily; Lin, Jon; Chan, Joanne M.; Chen, Yu
2016-03-01
Early detection of neoplastic changes remains a critical challenge in clinical cancer diagnosis and treatment. Many cancers arise from epithelial layers such as those of the gastrointestinal (GI) tract. Current standard endoscopic technology is unable to detect such subsurface lesions. Since cancer development is associated with both morphological and molecular alterations, imaging technologies that can quantitatively image a tissue's morphological and molecular biomarkers and assess the depth extent of a lesion in real time, without the need for tissue excision, would be a major advance in GI cancer diagnostics and therapy. In this research, we investigated the feasibility of multi-modal optical imaging, combining high-resolution optical coherence tomography (OCT) and depth-resolved high-sensitivity fluorescence laminar optical tomography (FLOT), for structural and molecular imaging. An APC (adenomatous polyposis coli) mouse model was imaged using OCT and FLOT, and the correlated histopathological diagnosis was obtained. Quantitative structural (scattering coefficient) and molecular (fluorescence intensity) imaging parameters from the OCT and FLOT images were developed for multi-parametric analysis. This multi-modal imaging method demonstrated the feasibility of more accurate diagnosis, with a sensitivity of 87.4% and a specificity of 87.3%, corresponding to the largest area under the receiver operating characteristic (ROC) curve. This project results in a new non-invasive multi-modal imaging platform for improved GI cancer detection, which is expected to have a major impact on the detection, diagnosis, and characterization of GI cancers, as well as a wide range of epithelial cancers.
NASA Astrophysics Data System (ADS)
Rusu, Mirabela; Wang, Haibo; Golden, Thea; Gow, Andrew; Madabhushi, Anant
2013-03-01
Mouse lung models facilitate the investigation of conditions such as chronic inflammation, which are associated with common lung diseases. The multi-scale manifestation of lung inflammation prompted us to use multi-scale imaging, combining in vivo and ex vivo MRI with ex vivo histology, to study it in a new quantitative way. Some imaging modalities, such as MRI, are non-invasive and capture macroscopic features of the pathology, while others, e.g. ex vivo histology, depict detailed structures. Registering such multi-modal data to the same spatial coordinates will allow the construction of a comprehensive 3D model to enable the multi-scale study of diseases. Moreover, it may facilitate the identification and definition of quantitative in vivo imaging signatures for diseases and pathologic processes. We introduce a quantitative, image analytic framework to integrate in vivo MR images of the entire mouse with ex vivo histology of the lung alone, using ex vivo MRI of the lung as a conduit to facilitate their co-registration. In our framework, we first align the MR images by registering the in vivo and ex vivo MRI of the lung using an interactive rigid registration approach. Then we reconstruct the 3D volume of the ex vivo histological specimen by efficient group-wise registration of the 2D slices. The resulting 3D histologic volume is subsequently registered by interactive rigid registration directly to the ex vivo MRI, and implicitly to the in vivo MRI. Qualitative evaluation of the registration framework was performed by comparing airway tree structures in ex vivo MRI and ex vivo histology, where airways are visible and can be annotated. We present a use case for evaluation of our co-registration framework in the context of studying chronic inflammation in a diseased mouse.
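The two rigid registration stages described above (in vivo MRI to ex vivo lung MRI, then the reconstructed histology volume to ex vivo MRI) can be prototyped with an off-the-shelf registration toolkit. The sketch below uses SimpleITK with a mutual-information metric; the toolkit choice, parameter values, and file names are assumptions for illustration, not the authors' interactive implementation.

```python
# Minimal sketch of the two-stage rigid alignment, assuming SimpleITK and
# mutual-information-driven registration (not the paper's interactive tools).
import SimpleITK as sitk

def rigid_register(fixed, moving):
    """Rigid (Euler 3D) registration of `moving` onto `fixed` using Mattes MI."""
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(50)
    reg.SetOptimizerAsRegularStepGradientDescent(1.0, 1e-4, 200)
    reg.SetInterpolator(sitk.sitkLinear)
    init = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(init, False)
    return reg.Execute(fixed, moving)

# Hypothetical file names for the three volumes.
invivo = sitk.ReadImage("invivo_mri.nii.gz", sitk.sitkFloat32)
exvivo = sitk.ReadImage("exvivo_lung_mri.nii.gz", sitk.sitkFloat32)
histo3d = sitk.ReadImage("histology_volume.nii.gz", sitk.sitkFloat32)

tx_invivo_to_exvivo = rigid_register(exvivo, invivo)   # stage 1
tx_histo_to_exvivo = rigid_register(exvivo, histo3d)   # stage 2 (implicitly links histology to in vivo MRI)
```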
WE-H-206-02: Recent Advances in Multi-Modality Molecular Imaging of Small Animals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsui, B.
Lihong V. Wang: Photoacoustic tomography (PAT), combining non-ionizing optical and ultrasonic waves via the photoacoustic effect, provides in vivo multiscale functional, metabolic, and molecular imaging. Broad applications include imaging of the breast, brain, skin, esophagus, colon, vascular system, and lymphatic system in humans or animals. Light offers rich contrast but does not penetrate biological tissue in straight paths as x-rays do. Consequently, high-resolution pure optical imaging (e.g., confocal microscopy, two-photon microscopy, and optical coherence tomography) is limited to penetration within the optical diffusion limit (∼1 mm in the skin). Ultrasonic imaging, on the contrary, provides fine spatial resolution but suffers from both poor contrast in early-stage tumors and strong speckle artifacts. In PAT, pulsed laser light penetrates tissue and generates a small but rapid temperature rise, which induces emission of ultrasonic waves due to thermoelastic expansion. The ultrasonic waves, orders of magnitude less scattered than optical waves, are then detected to form high-resolution images of optical absorption at depths up to 7 cm, conquering the optical diffusion limit. PAT is the only modality capable of imaging across the length scales of organelles, cells, tissues, and organs (up to whole-body small animals) with consistent contrast. This rapidly growing technology promises to enable multiscale biological research and accelerate translation from microscopic laboratory discoveries to macroscopic clinical practice. PAT may also hold the key to label-free early detection of cancer by in vivo quantification of hypermetabolism, the quintessential hallmark of malignancy. Learning Objectives: (1) to understand the contrast mechanism of PAT; (2) to understand the multiscale applications of PAT. Benjamin M. W. Tsui: Multi-modality molecular imaging instrumentation and techniques have been major developments in small animal imaging that have contributed significantly to biomedical research during the past decade. The initial development was an extension of clinical PET/CT and SPECT/CT from humans to small animals, combining the unique functional information obtained from PET and SPECT with the anatomical information provided by CT in registered multi-modality images. The requirements to image a mouse whose size is an order of magnitude smaller than that of a human have spurred advances in new radiation detector technologies, novel imaging system designs and special image reconstruction and processing techniques. Examples are new detector materials and designs with high intrinsic resolution, multi-pinhole (MPH) collimator designs with much improved resolution and detection efficiency compared to conventional collimator designs in SPECT, 3D high-resolution and artifact-free MPH and sparse-view image reconstruction techniques, and iterative image reconstruction methods with system response modeling for resolution recovery and image noise reduction for much improved image quality. The spatial resolution of PET and SPECT has improved from ∼6–12 mm to ∼1 mm a few years ago, and to sub-millimeter today. A recent commercial small animal SPECT system has achieved a resolution of ∼0.25 mm, which surpasses that of a state-of-the-art PET system whose resolution is limited by the positron range. More recently, multimodality SA PET/MRI and SPECT/MRI systems have been developed in research laboratories.
Also, multi-modality SA imaging systems that include other imaging modalities such as optical and ultrasound are being actively pursued. In this presentation, we will provide a review of the development, recent advances and future outlook of multi-modality molecular imaging of small animals. Learning Objectives: (1) to learn about the two major multi-modality molecular imaging techniques for small animals; (2) to learn about the spatial resolution achievable by molecular imaging systems for small animals today; (3) to learn about the new multi-modality imaging instrumentation and techniques that are being developed. Sang Hyun Cho: X-ray fluorescence (XRF) imaging, such as x-ray fluorescence computed tomography (XFCT), offers unique capabilities for accurate identification and quantification of metals within imaged objects. As a result, it has emerged as a promising quantitative imaging modality in recent years, especially in conjunction with metal-based imaging probes. This talk will familiarize the audience with the basic principles of XRF/XFCT imaging. It will also cover the latest development of benchtop XFCT technology. Additionally, the use of metallic nanoparticles such as gold nanoparticles, in conjunction with benchtop XFCT, will be discussed within the context of preclinical multimodal multiplexed molecular imaging. Learning Objectives: (1) to learn the basic principles of XRF/XFCT imaging; (2) to learn the latest advances in benchtop XFCT development for preclinical imaging. Disclosures: funding support received from NIH and DOD; funding support received from GE Healthcare; funding support received from Siemens AX; patent royalties received from GE Healthcare; L. Wang, funding support: NIH, COI: Microphotoacoustics; S. Cho: NIH/NCI grant R01CA155446, DOD/PCRP grant W81XWH-12-1-0198.
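As a quick numerical aside to the PAT description above, the initial photoacoustic pressure is commonly written as p0 = Γ · μa · F (the Grueneisen parameter times the optical absorption coefficient times the local fluence). The values below are generic, order-of-magnitude assumptions, not numbers from this session abstract.

```python
# Back-of-the-envelope photoacoustic source term: p0 = Gamma * mu_a * F.
# All numbers are assumed, typical-order values for soft tissue, not from the abstract.
gamma_grueneisen = 0.2   # dimensionless Grueneisen parameter (assumed)
mu_a = 10.0              # optical absorption coefficient, 1/cm (assumed)
fluence = 10.0           # local optical fluence, mJ/cm^2 (assumed)

p0_kpa = gamma_grueneisen * mu_a * fluence   # 1 mJ/cm^3 equals 1 kPa
print(f"initial pressure rise ~ {p0_kpa:.0f} kPa")   # -> ~20 kPa
```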
Content-independent embedding scheme for multi-modal medical image watermarking.
Nyeem, Hussain; Boles, Wageeh; Boyd, Colin
2015-02-04
As the increasing adoption of information technology continues to offer better distant medical services, the distribution of, and remote access to, digital medical images over public networks continue to grow significantly. Such use of medical images raises serious concerns for their continuous security protection, which digital watermarking has shown great potential to address. We present a content-independent embedding scheme for medical image watermarking. We observe that the perceptual content of medical images varies widely with their modalities. Recent medical image watermarking schemes are image-content dependent, and thus may suffer from inconsistent embedding capacity and visual artefacts. To attain the image content-independent embedding property, we generalise the RONI (region of non-interest to medical professionals) selection process and use it for embedding by utilising the RONI's least significant bit-planes. The proposed scheme thus avoids the need for RONI segmentation, which incurs capacity and computational overheads. Our experimental results demonstrate that the proposed embedding scheme performs consistently over a dataset of 370 medical images spanning 7 different modalities. Experimental results also show how state-of-the-art reversible schemes can perform inconsistently across different medical image modalities. Our scheme has an MSSIM (Mean Structural SIMilarity) larger than 0.999 with a deterministically adaptable embedding capacity. Our proposed image content-independent embedding scheme is modality-wise consistent and maintains good image quality in the RONI while keeping all other pixels in the image untouched. Thus, with an appropriate watermarking framework (i.e., with suitable watermark generation, embedding and detection functions), our proposed scheme can be viable for multi-modality medical image applications and distant medical services such as teleradiology and eHealth.
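The core embedding operation described above, writing watermark bits into the least significant bit-planes of selected pixels, can be illustrated in a few lines. The sketch below embeds into the first pixels of the image for simplicity; the paper's generalised RONI selection and capacity control are not reproduced, so treat the pixel choice as an assumption.

```python
# Minimal least-significant-bit-plane embedding/extraction sketch (pixel selection
# is simplified; the scheme's RONI generalisation is not reproduced here).
import numpy as np

def embed_lsb(image, watermark_bits):
    """Embed a flat 0/1 bit array into the least significant bit-plane of an 8-bit image."""
    flat = image.astype(np.uint8).ravel().copy()
    n = watermark_bits.size
    if n > flat.size:
        raise ValueError("watermark larger than embedding capacity")
    flat[:n] = (flat[:n] & 0xFE) | (watermark_bits.astype(np.uint8) & 1)
    return flat.reshape(image.shape)

def extract_lsb(image, n_bits):
    """Recover the first n_bits from the least significant bit-plane."""
    return image.astype(np.uint8).ravel()[:n_bits] & 1

# Round-trip check on a synthetic 8-bit image.
img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
wm = np.random.randint(0, 2, 128)
assert np.array_equal(extract_lsb(embed_lsb(img, wm), wm.size), wm)
```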
Heideklang, René; Shokouhi, Parisa
2016-01-01
This article focuses on the fusion of flaw indications from multi-sensor nondestructive materials testing. Because each testing method makes use of a different physical principle, a multi-method approach has the potential to effectively differentiate actual defect indications from the many false alarms, thus enhancing detection reliability. In this study, we propose a new technique for aggregating scattered two- or three-dimensional sensory data. Using a density-based approach, the proposed method explicitly addresses localization uncertainties such as registration errors. This feature marks one of the major advantages of this approach over pixel-based image fusion techniques. We provide guidelines on how to set all the key parameters and demonstrate the technique’s robustness. Finally, we apply our fusion approach to experimental data and demonstrate its capability to locate small defects by substantially reducing false alarms under conditions where no single-sensor method is adequate. PMID:26784200
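A density-based aggregation of scattered indications, as described above, can be prototyped with kernel density estimates whose bandwidth absorbs the registration uncertainty. The sketch below uses Gaussian kernels and a conjunctive (product) fusion rule; the bandwidth factor, grid extent, and fusion rule are illustrative assumptions rather than the paper's parameter recommendations.

```python
# Sketch of density-based fusion of scattered 2D flaw indications from several sensors.
# Bandwidth, grid, and the product-fusion rule are assumed for illustration.
import numpy as np
from scipy.stats import gaussian_kde

def fused_density(indications_per_sensor, bw_factor=0.3, extent=(0, 100), n_grid=200):
    """indications_per_sensor: list of (N_i, 2) arrays of x, y flaw coordinates."""
    kdes = [gaussian_kde(pts.T, bw_method=bw_factor) for pts in indications_per_sensor]
    xs, ys = np.meshgrid(np.linspace(*extent, n_grid), np.linspace(*extent, n_grid))
    grid = np.vstack([xs.ravel(), ys.ravel()])
    score = np.prod([kde(grid) for kde in kdes], axis=0)  # high only where all sensors agree
    return xs, ys, score.reshape(xs.shape)

# Two sensors reporting indications near the same defect, with registration jitter.
sensor_a = np.random.normal([30.0, 40.0], 1.0, size=(25, 2))
sensor_b = np.random.normal([30.5, 39.5], 1.5, size=(20, 2))
xs, ys, score = fused_density([sensor_a, sensor_b])
peak = np.unravel_index(score.argmax(), score.shape)
print("fused peak near:", xs[peak], ys[peak])
```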
Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.
2014-01-01
The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific Industrial Research Organization (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computed tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019
NASA Astrophysics Data System (ADS)
Li, Zhenwei; Sun, Jianyong; Zhang, Jianguo
2012-02-01
As more and more CT/MR studies are acquired with larger data sets, more and more radiologists and clinicians would like to use PACS workstations (WS) to display and manipulate these large image data sets with 3D rendering features. In this paper, we propose a design method and implementation strategy for a 3D image display component that provides not only standard 3D display functions but also multi-modal medical image fusion and computer-assisted diagnosis of coronary heart disease. The 3D component has been integrated into the PACS display workstation of Shanghai Huadong Hospital, and clinical practice showed that it is easy for radiologists and physicians to use 3D functions such as multi-modality (e.g. CT, MRI, PET, SPECT) visualization, registration and fusion, and quantitative lesion measurement. The users were satisfied with the rendering speed and quality of the 3D reconstructions. The advantages of the component include low hardware requirements, easy integration, reliable performance and a comfortable user experience. With this system, radiologists and clinicians can manipulate 3D images easily and use the advanced visualization tools to facilitate their work on a PACS display workstation at any time.
Kim, Kang; Wagner, William R
2016-03-01
With the rapid expansion of biomaterial development and coupled efforts to translate such advances toward the clinic, non-invasive and non-destructive imaging tools to evaluate implants in situ in a timely manner are critically needed. The required multi-level information is comprehensive, including structural, mechanical, and biological changes such as scaffold degradation, mechanical strength, cell infiltration, extracellular matrix formation and vascularization to name a few. With its inherent advantages of non-invasiveness and non-destructiveness, ultrasound imaging can be an ideal tool for both preclinical and clinical uses. In this review, currently available ultrasound imaging technologies that have been applied in vitro and in vivo for tissue engineering and regenerative medicine are discussed and some new emerging ultrasound technologies and multi-modality approaches utilizing ultrasound are introduced.
Multi-modality molecular imaging: pre-clinical laboratory configuration
NASA Astrophysics Data System (ADS)
Wu, Yanjun; Wellen, Jeremy W.; Sarkar, Susanta K.
2006-02-01
In recent years, the prevalence of in vivo molecular imaging applications has rapidly increased. Here we report on the construction of a multi-modality imaging facility in a pharmaceutical setting that is expected to further advance existing capabilities for in vivo imaging of drug distribution and the interaction with their target. The imaging instrumentation in our facility includes a microPET scanner, a four wavelength time-domain optical imaging scanner, a 9.4T/30cm MRI scanner and a SPECT/X-ray CT scanner. An electronics shop and a computer room dedicated to image analysis are additional features of the facility. The layout of the facility was designed with a central animal preparation room surrounded by separate laboratory rooms for each of the major imaging modalities to accommodate the work-flow of simultaneous in vivo imaging experiments. This report will focus on the design of and anticipated applications for our microPET and optical imaging laboratory spaces. Additionally, we will discuss efforts to maximize the daily throughput of animal scans through development of efficient experimental work-flows and the use of multiple animals in a single scanning session.
Brain tumor image segmentation using kernel dictionary learning.
Jeon Lee; Seung-Jun Kim; Rong Chen; Herskovits, Edward H
2015-08-01
Automated brain tumor image segmentation with high accuracy and reproducibility holds great potential to enhance current clinical practice. Dictionary learning (DL) techniques have recently been applied successfully to various image processing tasks. In this work, kernel extensions of the DL approach are adopted. Both reconstructive and discriminative versions of the kernel DL technique are considered, which can efficiently incorporate multi-modal nonlinear feature mappings based on the kernel trick. Our novel discriminative kernel DL formulation allows joint learning of a task-driven kernel-based dictionary and a linear classifier using a K-SVD-type algorithm. The proposed approaches were tested using real brain magnetic resonance (MR) images of patients with high-grade glioma. The obtained preliminary performances are competitive with the state of the art. The discriminative kernel DL approach is seen to reduce computational burden without much sacrifice in performance.
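To give a concrete, if loose, flavour of combining a kernel feature map with dictionary learning, the sketch below chains an explicit Nystroem approximation of an RBF kernel map with off-the-shelf sparse dictionary learning. This is not the paper's joint K-SVD-type dictionary/classifier learning; the pipeline, kernel, and all parameters are assumptions for illustration only.

```python
# Loose sketch: explicit kernel feature map (Nystroem) followed by sparse dictionary
# learning; a stand-in for the kernelised DL idea, not the paper's K-SVD-type method.
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.decomposition import MiniBatchDictionaryLearning

X = np.random.rand(500, 40)  # stand-in for multi-modal voxel feature vectors (synthetic)

phi = Nystroem(kernel="rbf", gamma=0.5, n_components=100).fit_transform(X)
dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
codes = dico.fit_transform(phi)  # sparse codes, usable as features for a linear classifier

print(codes.shape, "non-zeros per sample ~", (codes != 0).sum(axis=1).mean())
```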
Probabilistic sparse matching for robust 3D/3D fusion in minimally invasive surgery.
Neumann, Dominik; Grbic, Sasa; John, Matthias; Navab, Nassir; Hornegger, Joachim; Ionasec, Razvan
2015-01-01
Classical surgery is being overtaken by minimally invasive and transcatheter procedures. As there is no direct view or access to the affected anatomy, advanced imaging techniques such as 3D C-arm computed tomography (CT) and C-arm fluoroscopy are routinely used in clinical practice for intraoperative guidance. However, due to constraints regarding acquisition time and device configuration, intraoperative modalities have limited soft tissue image quality and reliable assessment of the cardiac anatomy typically requires contrast agent, which is harmful to the patient and requires complex acquisition protocols. We propose a probabilistic sparse matching approach to fuse high-quality preoperative CT images and nongated, noncontrast intraoperative C-arm CT images by utilizing robust machine learning and numerical optimization techniques. Thus, high-quality patient-specific models can be extracted from the preoperative CT and mapped to the intraoperative imaging environment to guide minimally invasive procedures. Extensive quantitative experiments on 95 clinical datasets demonstrate that our model-based fusion approach has an average execution time of 1.56 s, while the accuracy of 5.48 mm between the anchor anatomy in both images lies within expert user confidence intervals. In direct comparison with image-to-image registration based on an open-source state-of-the-art medical imaging library and a recently proposed quasi-global, knowledge-driven multi-modal fusion approach for thoracic-abdominal images, our model-based method exhibits superior performance in terms of registration accuracy and robustness with respect to both target anatomy and anchor anatomy alignment errors.
Enhancing image classification models with multi-modal biomarkers
NASA Astrophysics Data System (ADS)
Caban, Jesus J.; Liao, David; Yao, Jianhua; Mollura, Daniel J.; Gochuico, Bernadette; Yoo, Terry
2011-03-01
Currently, most computer-aided diagnosis (CAD) systems rely on image analysis and statistical models to diagnose, quantify, and monitor the progression of a particular disease. In general, CAD systems have proven to be effective at providing quantitative measurements and assisting physicians during the decision-making process. As the need for more flexible and effective CADs continues to grow, questions about how to enhance their accuracy have surged. In this paper, we show how statistical image models can be augmented with multi-modal physiological values to create more robust, stable, and accurate CAD systems. In particular, this paper demonstrates how highly correlated blood and EKG features can be treated as biomarkers and used to enhance image classification models designed to automatically score subjects with pulmonary fibrosis. In our results, a 3-5% improvement was observed when comparing the accuracy of CADs that use multi-modal biomarkers with those that only used image features. Our results show that lab values such as Erythrocyte Sedimentation Rate and Fibrinogen, as well as EKG measurements such as QRS and I:40, are statistically significant and can provide valuable insights about the severity of the pulmonary fibrosis disease.
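The augmentation described above, concatenating image-derived features with laboratory and EKG biomarkers before classification, is straightforward to prototype. The sketch below uses synthetic data and a logistic-regression classifier; the feature names, data, and classifier choice are assumptions, since the abstract does not specify the underlying statistical model.

```python
# Illustration of augmenting image features with multi-modal biomarkers (synthetic data).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
image_features = rng.normal(size=(200, 50))   # e.g. texture/intensity statistics (synthetic)
biomarkers = rng.normal(size=(200, 4))        # e.g. ESR, fibrinogen, QRS, I:40 (synthetic)
severity = rng.integers(0, 2, size=200)       # binary severity label (synthetic)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
acc_img = cross_val_score(clf, image_features, severity, cv=5).mean()
acc_both = cross_val_score(clf, np.hstack([image_features, biomarkers]), severity, cv=5).mean()
print(f"image only: {acc_img:.3f}   image + biomarkers: {acc_both:.3f}")
```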
Visual tracking for multi-modality computer-assisted image guidance
NASA Astrophysics Data System (ADS)
Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp
2017-03-01
With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support the localization of anatomical targets, support placement of imaging probe and instruments, and provide fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, urology, and others, many of which see increasing pressure to utilize medical imaging and especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system, consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex- and in-vivo animals, and patients.
Prospectus on Multi-Modal Aspects of Human Factors in Transportation
DOT National Transportation Integrated Search
1991-02-01
This prospectus identifies and discusses a series of human factors issues which are critical to transportation safety and productivity, and examines the potential benefits that can accrue from taking a multi-modal approach to human factors rese...
The year 2012 in the European Heart Journal-Cardiovascular Imaging: Part I.
Edvardsen, Thor; Plein, Sven; Saraste, Antti; Knuuti, Juhani; Maurer, Gerald; Lancellotti, Patrizio
2013-06-01
The new multi-modality cardiovascular imaging journal, European Heart Journal - Cardiovascular Imaging, was started in 2012. During its first year, the new Journal has published an impressive collection of cardiovascular studies utilizing all cardiovascular imaging modalities. We will summarize the most important studies from its first year in two articles. The present 'Part I' of the review will focus on studies in myocardial function, myocardial ischaemia, and emerging techniques in cardiovascular imaging.
A digital 3D atlas of the marmoset brain based on multi-modal MRI.
Liu, Cirong; Ye, Frank Q; Yen, Cecil Chern-Chyi; Newman, John D; Glen, Daniel; Leopold, David A; Silva, Afonso C
2018-04-01
The common marmoset (Callithrix jacchus) is a New-World monkey of growing interest in neuroscience. Magnetic resonance imaging (MRI) is an essential tool to unveil the anatomical and functional organization of the marmoset brain. To facilitate identification of regions of interest, it is desirable to register MR images to an atlas of the brain. However, currently available atlases of the marmoset brain are mainly based on 2D histological data, which are difficult to apply to 3D imaging techniques. Here, we constructed a 3D digital atlas based on high-resolution ex-vivo MRI images, including magnetization transfer ratio (a T1-like contrast), T2w images, and multi-shell diffusion MRI. Based on the multi-modal MRI images, we manually delineated 54 cortical areas and 16 subcortical regions on one hemisphere of the brain (the core version). The 54 cortical areas were merged into 13 larger cortical regions according to their locations to yield a coarse version of the atlas, and also parcellated into 106 sub-regions using a connectivity-based parcellation method to produce a refined atlas. Finally, we compared the new atlas set with existing histology atlases and demonstrated its applications in connectome studies, and in resting state and stimulus-based fMRI. The atlas set has been integrated into the widely-distributed neuroimaging data analysis software AFNI and SUMA, providing a readily usable multi-modal template space with multi-level anatomical labels (including labels from the Paxinos atlas) that can facilitate various neuroimaging studies of marmosets. Published by Elsevier Inc.
AlJaroudi, Wael A; Einstein, Andrew J; Chaudhry, Farooq A; Lloyd, Steven G; Hage, Fadi G
2015-04-01
A large number of studies were presented at the 2014 American Heart Association Scientific Sessions. In this review, we will summarize key studies in nuclear cardiology, computed tomography, echocardiography, and cardiac magnetic resonance imaging. This brief review will be helpful for readers of the Journal who are interested in being updated on the latest research covering these imaging modalities.
James, Joseph; Murukeshan, Vadakke Matham; Woh, Lye Sun
2014-07-01
The structural and molecular heterogeneities of biological tissues demand interrogation of the samples with multiple energy sources, providing visualization capabilities at varying spatial resolution and depth scales to obtain complementary diagnostic information. A novel multi-modal imaging approach that uses optical and acoustic energies to perform photoacoustic, ultrasound and fluorescence imaging at multiple resolution scales, from the tissue surface and at depth, is proposed in this paper. The system comprises two distinct forms of hardware-level integration so as to form an integrated imaging system under a single instrumentation set-up. The experimental studies show that the system is capable of mapping high-resolution fluorescence signatures from the surface, and optical absorption and acoustic heterogeneities along the depth (>2 cm) of the tissue, at multi-scale resolution (<1 µm to <0.5 mm).
Longmire, Michelle R.; Ogawa, Mikako; Choyke, Peter L.
2012-01-01
In recent years, numerous in vivo molecular imaging probes have been developed. As a consequence, much has been published on the design and synthesis of molecular imaging probes focusing on each modality, each type of material, or each target disease. More recently, second generation molecular imaging probes with unique, multi-functional, or multiplexed characteristics have been designed. This critical review focuses on (i) molecular imaging using combinations of modalities and signals that employ the full range of the electromagnetic spectra, (ii) optimized chemical design of molecular imaging probes for in vivo kinetics based on biology and physiology across a range of physical sizes, (iii) practical examples of second generation molecular imaging probes designed to extract complementary data from targets using multiple modalities, color, and comprehensive signals (277 references). PMID:21607237
Quantitative Imaging Biomarkers of NAFLD
Kinner, Sonja; Reeder, Scott B.
2016-01-01
Conventional imaging modalities, including ultrasonography (US), computed tomography (CT), and magnetic resonance (MR), play an important role in the diagnosis and management of patients with nonalcoholic fatty liver disease (NAFLD) by allowing noninvasive diagnosis of hepatic steatosis. However, conventional imaging modalities are limited as biomarkers of NAFLD for various reasons. Multi-parametric quantitative MRI techniques overcome many of the shortcomings of conventional imaging and allow comprehensive and objective evaluation of NAFLD. MRI can provide unconfounded biomarkers of hepatic fat, iron, and fibrosis in a single examination—a virtual biopsy has become a clinical reality. In this article, we will review the utility and limitation of conventional US, CT, and MR imaging for the diagnosis NAFLD. Recent advances in imaging biomarkers of NAFLD are also discussed with an emphasis in multi-parametric quantitative MRI. PMID:26848588
Markel, D; Naqa, I El; Freeman, C; Vallières, M
2012-06-01
To present a novel joint segmentation/registration framework for multimodality image-guided and adaptive radiotherapy. A major challenge to this framework is the sensitivity of many segmentation or registration algorithms to noise. Presented is a level set active contour based on the Jensen-Renyi (JR) divergence to achieve improved noise robustness in a multi-modality imaging space. It was found that the JR divergence, when used for segmentation, has improved robustness to noise compared with mutual information or other entropy-based metrics. The MI metric failed at around 2/3 the noise power of the JR divergence. The JR divergence metric is useful for the task of joint segmentation/registration of multimodality images and shows improved results compared with entropy-based metrics. The algorithm can be easily modified to incorporate non-intensity-based images, which would allow applications in multi-modality and texture analysis. © 2012 American Association of Physicists in Medicine.
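For reference, the Jensen-Renyi divergence between the intensity histograms inside and outside a contour can be computed directly from the Renyi entropy, H_alpha(p) = (1/(1-alpha)) log(sum_i p_i^alpha). The sketch below uses alpha = 0.5 and equal weights, which are illustrative choices, not the parameters used in the abstract.

```python
# Jensen-Renyi divergence between normalised histograms (alpha and weights assumed).
import numpy as np

def renyi_entropy(p, alpha=0.5, eps=1e-12):
    p = p / (p.sum() + eps)
    return np.log((p ** alpha).sum() + eps) / (1.0 - alpha)

def jensen_renyi(histograms, alpha=0.5, weights=None):
    ps = [h / h.sum() for h in histograms]
    if weights is None:
        weights = np.full(len(ps), 1.0 / len(ps))
    mixture = sum(w * p for w, p in zip(weights, ps))
    return renyi_entropy(mixture, alpha) - sum(
        w * renyi_entropy(p, alpha) for w, p in zip(weights, ps))

# Histograms sampled from "inside" and "outside" a contour: a larger divergence
# means the contour separates the two intensity populations better.
inside = np.histogram(np.random.normal(0.3, 0.05, 5000), bins=64, range=(0, 1))[0].astype(float)
outside = np.histogram(np.random.normal(0.7, 0.05, 5000), bins=64, range=(0, 1))[0].astype(float)
print(jensen_renyi([inside, outside]))
```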
Thermal-to-visible face recognition using partial least squares.
Hu, Shuowen; Choi, Jonghyun; Chan, Alex L; Schwartz, William Robson
2015-03-01
Although visible face recognition has been an active area of research for several decades, cross-modal face recognition has only been explored by the biometrics community relatively recently. Thermal-to-visible face recognition is one of the most difficult cross-modal face recognition challenges, because of the difference in phenomenology between the thermal and visible imaging modalities. We address the cross-modal recognition problem using a partial least squares (PLS) regression-based approach consisting of preprocessing, feature extraction, and PLS model building. The preprocessing and feature extraction stages are designed to reduce the modality gap between the thermal and visible facial signatures, and facilitate the subsequent one-vs-all PLS-based model building. We incorporate multi-modal information into the PLS model building stage to enhance cross-modal recognition. The performance of the proposed recognition algorithm is evaluated on three challenging datasets containing visible and thermal imagery acquired under different experimental scenarios: time-lapse, physical tasks, mental tasks, and subject-to-camera range. These scenarios represent difficult challenges relevant to real-world applications. We demonstrate that the proposed method performs robustly for the examined scenarios.
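The one-vs-all PLS model building stage described above amounts to fitting one PLS regression per gallery subject against a subject-vs-rest response. A minimal sketch, with synthetic feature vectors standing in for the preprocessed thermal and visible signatures and an assumed number of PLS components, is shown below.

```python
# One-vs-all PLS model building and matching sketch (synthetic features, assumed settings).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def build_one_vs_all_pls(features, labels, n_components=5):
    """Fit one PLS regression per subject with a this-subject-vs-rest response."""
    models = {}
    for subject in np.unique(labels):
        y = (labels == subject).astype(float)
        models[subject] = PLSRegression(n_components=n_components).fit(features, y)
    return models

def match(models, probe_feature):
    """Return the subject whose PLS model responds most strongly to the probe."""
    scores = {s: m.predict(probe_feature[None, :]).item() for s, m in models.items()}
    return max(scores, key=scores.get)

gallery = np.random.rand(80, 200)      # visible-domain gallery features (synthetic)
ids = np.repeat(np.arange(10), 8)      # 10 subjects, 8 samples each
models = build_one_vs_all_pls(gallery, ids)
print(match(models, np.random.rand(200)))   # thermal probe feature (synthetic)
```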
Lu, Pei; Xia, Jun; Li, Zhicheng; Xiong, Jing; Yang, Jian; Zhou, Shoujun; Wang, Lei; Chen, Mingyang; Wang, Cheng
2016-11-08
Accurate segmentation of blood vessels plays an important role in the computer-aided diagnosis and interventional treatment of vascular diseases. The statistical method is an important component of effective vessel segmentation; however, several limitations hinder segmentation performance, i.e., dependence on the image modality, uneven contrast media, bias fields, and overlapping intensity distributions of the object and background. In addition, the mixture models of the statistical methods are constructed relying on the characteristics of the image histograms. Thus, it is challenging for traditional methods to remain applicable to vessel segmentation from multi-modality angiographic images. To overcome these limitations, a flexible segmentation method with a fixed mixture model has been proposed for various angiography modalities. Our method mainly consists of three parts. Firstly, a multi-scale filtering algorithm was applied to the original images to enhance vessels and suppress noise. As a result, the filtered data acquired a new statistical characteristic. Secondly, a mixture model formed by three probabilistic distributions (two Exponential distributions and one Gaussian distribution) was built to fit the histogram curve of the filtered data, where the expectation maximization (EM) algorithm was used for parameter estimation. Finally, a three-dimensional (3D) Markov random field (MRF) was employed to improve the accuracy of pixel-wise classification and posterior probability estimation. To quantitatively evaluate the performance of the proposed method, two phantoms simulating blood vessels with different tubular structures and noise levels were devised. Meanwhile, four clinical angiographic data sets from different human organs were used to qualitatively validate the method. To further test the performance, comparison tests between the proposed method and traditional methods were conducted on two different brain magnetic resonance angiography (MRA) data sets. The phantom results were satisfactory: the noise was greatly suppressed, the percentages of misclassified voxels (the segmentation error ratios) were no more than 0.3%, and the Dice similarity coefficients (DSCs) were above 94%. According to the opinions of clinical vascular specialists, the vessels in the various data sets were extracted with high accuracy, since complete vessel trees were extracted while few non-vessel and background voxels were falsely classified as vessel. In the comparison experiments, the proposed method showed its superiority in accuracy and robustness for extracting vascular structures from multi-modality angiographic images with complicated background noise. The experimental results demonstrate that the proposed method is applicable to various angiographic data. The main reason is that the constructed mixture probability model can uniformly classify vessel objects from the multi-scale filtered data of various angiography images. The advantages of the proposed method lie in the following aspects: firstly, it can extract vessels from angiograms of poor quality, since the multi-scale filtering algorithm improves vessel intensity under circumstances such as uneven contrast media and bias fields; secondly, it performs well in extracting vessels from multi-modality angiographic images despite various types of noise; and thirdly, it achieves better accuracy and robustness than the traditional methods. Overall, these traits indicate that the proposed method has significant potential for clinical application.
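The fixed mixture named above (two Exponential components plus one Gaussian component, fitted by EM to the multi-scale-filtered intensities) has closed-form M-step updates. A compact sketch is given below; the initialisation, iteration count, and synthetic data are assumptions, and the subsequent 3D MRF refinement stage is omitted.

```python
# EM sketch for a two-Exponential + one-Gaussian mixture on non-negative filtered intensities.
# Initial guesses, iteration count, and demo data are illustrative; the MRF stage is omitted.
import numpy as np

def em_exp_exp_gauss(x, n_iter=100):
    n = x.size
    w = np.array([0.4, 0.4, 0.2])                      # mixing weights (initial guess)
    lam = np.array([1.0 / x.mean(), 4.0 / x.mean()])   # rates of the two Exponentials
    mu, sigma = np.percentile(x, 90), x.std() / 4      # Gaussian mean and std
    for _ in range(n_iter):
        # E-step: component responsibilities for every sample
        p = np.empty((3, n))
        p[0] = w[0] * lam[0] * np.exp(-lam[0] * x)
        p[1] = w[1] * lam[1] * np.exp(-lam[1] * x)
        p[2] = w[2] * np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        r = p / p.sum(axis=0, keepdims=True)
        # M-step: closed-form updates
        nk = r.sum(axis=1)
        w = nk / n
        lam = nk[:2] / (r[:2] * x).sum(axis=1)
        mu = (r[2] * x).sum() / nk[2]
        sigma = np.sqrt((r[2] * (x - mu) ** 2).sum() / nk[2])
    return w, lam, mu, sigma

# Synthetic demo: two background populations plus a brighter "vessel" population.
data = np.concatenate([np.random.exponential(0.1, 8000),
                       np.random.exponential(0.4, 2000),
                       np.random.normal(2.0, 0.2, 500)])
print(em_exp_exp_gauss(data))
```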
Seeley, Erin H.; Wilson, Kevin J.; Yankeelov, Thomas E.; Johnson, Rachelle W.; Gore, John C.; Caprioli, Richard M.; Matrisian, Lynn M.; Sterling, Julie A.
2014-01-01
Bone metastases are a clinically significant problem that arises in approximately 70% of metastatic breast cancer patients. Once established in bone, tumor cells induce changes in the bone microenvironment that lead to bone destruction, pain, and significant morbidity. While much is known about the later stages of bone disease, less is known about the earlier stages or the changes in protein expression in the tumor microenvironment. Due to promising results of combining magnetic resonance imaging (MRI) and Matrix-Assisted Laser Desorption/Ionization Imaging Mass Spectrometry (MALDI IMS) ion images in the brain, we developed methods for applying these modalities to models of tumor-induced bone disease in order to better understand the changes in protein expression that occur within the tumor-bone microenvironment. Specifically, we integrated three-dimensional volume reconstructions of spatially resolved MALDI IMS with high-resolution anatomical and diffusion-weighted MRI data and histology in an intratibial model of breast tumor-induced bone disease. This approach enables us to analyze proteomic profiles from MALDI IMS data with corresponding in vivo imaging and ex vivo histology data. To the best of our knowledge, this is the first time these three modalities have been rigorously registered in the bone. The MALDI mass-to-charge ratio peaks indicate differential expression of calcyclin, ubiquitin, and other proteins within the tumor cells, while peaks corresponding to hemoglobin A and calgranulin A provided molecular information that aided in the identification of areas rich in red and white blood cells, respectively. This multimodality approach will allow us to comprehensively understand the bone-tumor microenvironment and thus may allow us to better develop and test approaches for inhibiting bone metastases. PMID:24487126
Sarica, Alessia; Cerasa, Antonio; Quattrone, Aldo
2017-01-01
Objective: Machine learning classification has been the most important computational development in recent years, meeting clinicians' primary need for automatic early diagnosis and prognosis. Nowadays, the Random Forest (RF) algorithm has been successfully applied for reducing high-dimensional and multi-source data in many scientific realms. Our aim was to explore the state of the art of the application of RF on single and multi-modal neuroimaging data for the prediction of Alzheimer's disease. Methods: A systematic review following PRISMA guidelines was conducted on this field of study. In particular, we constructed an advanced query using boolean operators as follows: ("random forest" OR "random forests") AND neuroimaging AND ("alzheimer's disease" OR alzheimer's OR alzheimer) AND (prediction OR classification). The query was then searched in four well-known scientific databases: Pubmed, Scopus, Google Scholar and Web of Science. Results: Twelve articles, published between 2007 and 2017, have been included in this systematic review after a quantitative and qualitative selection. The lessons learnt from these works suggest that when RF is applied to multi-modal data for predicting conversion to Alzheimer's disease (AD) from Mild Cognitive Impairment (MCI), it produces some of the best accuracies to date. Moreover, RF has important advantages in terms of robustness to overfitting, ability to handle highly non-linear data, stability in the presence of outliers, and opportunity for efficient parallel processing, particularly when applied to multi-modality neuroimaging data such as MRI morphometry, diffusion tensor imaging, and PET images. Conclusions: We discuss the strengths of RF, consider its possible limitations, and encourage further studies comparing this algorithm with other commonly used classification approaches, particularly for the early prediction of the progression from MCI to AD.
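The typical study design surveyed above, a Random Forest trained on concatenated MRI morphometry, diffusion, and PET features to predict MCI-to-AD conversion, can be sketched in a few lines. The data, feature counts, and hyperparameters below are synthetic stand-ins, not values from any reviewed study.

```python
# Sketch of a multi-modal Random Forest conversion classifier (entirely synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
mri_morphometry = rng.normal(size=(150, 60))    # e.g. cortical thickness, regional volumes
dti_metrics = rng.normal(size=(150, 20))        # e.g. FA/MD of major tracts
pet_uptake = rng.normal(size=(150, 30))         # e.g. regional SUVR values
converted = rng.integers(0, 2, size=150)        # MCI converter vs. non-converter label

X = np.hstack([mri_morphometry, dti_metrics, pet_uptake])
rf = RandomForestClassifier(n_estimators=500, random_state=0)
print("cross-validated accuracy:", cross_val_score(rf, X, converted, cv=5).mean())
```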
NASA Astrophysics Data System (ADS)
Luk, Alex T.; Lin, Yuting; Grimmond, Brian; Sood, Anup; Uzgiris, Egidijus E.; Nalcioglu, Orhan; Gulsen, Gultekin
2013-03-01
Since diffuse optical tomography (DOT) is a low-spatial-resolution modality, it is desirable to validate its quantitative accuracy against another well-established imaging modality, such as magnetic resonance imaging (MRI). In this work, we have used a polymer-based bi-functional MRI-optical contrast agent (Gd-DTPA-polylysine-IR800) developed in collaboration with GE Global Research. This multi-modality contrast agent provides not only co-localization but also identical kinetics in both modalities, allowing the two imaging modalities to be cross-validated. The bi-functional agent was injected into rats, and its pharmacokinetics at the bladder were recovered using both optical and MR imaging. DOT results were validated using the MRI results as the "gold standard".
NASA Astrophysics Data System (ADS)
Peng, Dong; Du, Yang; Shi, Yiwen; Mao, Duo; Jia, Xiaohua; Li, Hui; Zhu, Yukun; Wang, Kun; Tian, Jie
2016-07-01
Photoacoustic imaging and fluorescence molecular imaging are emerging as important research tools for biomedical studies. Photoacoustic imaging offers both strong optical absorption contrast and high ultrasonic resolution, and fluorescence molecular imaging provides excellent superficial resolution, high sensitivity, high throughput, and the ability for real-time imaging. Therefore, combining the imaging information of both modalities can provide comprehensive in vivo physiological and pathological information. However, currently there are limited probes available that can realize both fluorescence and photoacoustic imaging, and advanced biomedical applications for applying this dual-modality imaging approach remain underexplored. In this study, we developed a dual-modality photoacoustic-fluorescence imaging nanoprobe, ICG-loaded Au@SiO2, which was uniquely designed, consisting of gold nanorod cores and indocyanine green with silica shell spacer layers to overcome fluorophore quenching. This nanoprobe was examined by both PAI and FMI for in vivo imaging on tumor and ischemia mouse models. Our results demonstrated that the nanoparticles can specifically accumulate at the tumor and ischemic areas and be detected by both imaging modalities. Moreover, this dual-modality imaging strategy exhibited superior advantages for a precise diagnosis in different scenarios. The new nanoprobe with the dual-modality imaging approach holds great potential for diagnosis and stage classification of tumor and ischemia related diseases. Electronic supplementary information (ESI) available. See DOI: 10.1039/c6nr03809c
Multi-modal molecular diffuse optical tomography system for small animal imaging
Guggenheim, James A.; Basevi, Hector R. A.; Frampton, Jon; Styles, Iain B.; Dehghani, Hamid
2013-01-01
A multi-modal optical imaging system for quantitative 3D bioluminescence and functional diffuse imaging is presented, which has no moving parts and uses mirrors to provide multi-view tomographic data for image reconstruction. It is demonstrated that through the use of trans-illuminated spectral near infrared measurements and spectrally constrained tomographic reconstruction, recovered concentrations of absorbing agents can be used as prior knowledge for bioluminescence imaging within the visible spectrum. Additionally, the first use of a recently developed multi-view optical surface capture technique is shown and its application to model-based image reconstruction and free-space light modelling is demonstrated. The benefits of model-based tomographic image recovery as compared to 2D planar imaging are highlighted in a number of scenarios where the internal luminescence source is not visible or is confounding in 2D images. The results presented show that the luminescence tomographic imaging method produces 3D reconstructions of individual light sources within a mouse-sized solid phantom that are accurately localised to within 1.5mm for a range of target locations and depths indicating sensitivity and accurate imaging throughout the phantom volume. Additionally the total reconstructed luminescence source intensity is consistent to within 15% which is a dramatic improvement upon standard bioluminescence imaging. Finally, results from a heterogeneous phantom with an absorbing anomaly are presented demonstrating the use and benefits of a multi-view, spectrally constrained coupled imaging system that provides accurate 3D luminescence images. PMID:24954977
WE-H-206-00: Advances in Preclinical Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
NASA Astrophysics Data System (ADS)
Kuzmak, Peter M.; Dayhoff, Ruth E.
1998-07-01
The U.S. Department of Veterans Affairs is integrating imaging into the healthcare enterprise using the Digital Imaging and Communication in Medicine (DICOM) standard protocols. Image management is directly integrated into the VistA Hospital Information System (HIS) software and clinical database. Radiology images are acquired via DICOM and are stored directly in the HIS database. Images can be displayed on low-cost clinician's workstations throughout the medical center. High-resolution diagnostic quality multi-monitor VistA workstations with specialized viewing software can be used for reading radiology images. DICOM has played critical roles in the ability to integrate imaging functionality into the Healthcare Enterprise. Because of its openness, it allows the integration of system components from commercial and non-commercial sources to work together to provide functional cost-effective solutions (see Figure 1). Two approaches are used to acquire and handle images within the radiology department. At some VA Medical Centers, DICOM is used to interface a commercial Picture Archiving and Communications System (PACS) to the VistA HIS. At other medical centers, DICOM is used to interface the image producing modalities directly to the image acquisition and display capabilities of VistA itself. Both of these approaches use a small set of DICOM services that has been implemented by VistA to allow patient and study text data to be transmitted to image producing modalities and the commercial PACS, and to enable images and study data to be transferred back.
Multi-modality image fusion based on enhanced fuzzy radial basis function neural networks.
Chao, Zhen; Kim, Dohyeon; Kim, Hee-Joung
2018-04-01
In clinical applications, single-modality images do not provide sufficient diagnostic information. Therefore, it is necessary to combine the advantages or complementarities of images from different modalities. Recently, neural network techniques have been applied to medical image fusion by many researchers, but there are still many deficiencies. In this study, we propose a novel fusion method to combine multi-modality medical images based on an enhanced fuzzy radial basis function neural network (Fuzzy-RBFNN), which includes five layers: input, fuzzy partition, front combination, inference, and output. Moreover, we propose a hybrid of the gravitational search algorithm (GSA) and error back-propagation algorithm (EBPA) to train the network and update its parameters. Two different patterns of images are used as inputs to the neural network, and the output is the fused image. A comparison with conventional fusion methods and another neural network method, through subjective observation and objective evaluation indexes, reveals that the proposed method effectively synthesized the information of the input images and achieved better results. Meanwhile, we also trained the network using the EBPA and GSA individually. The results reveal that the hybrid EBPGSA not only outperformed both EBPA and GSA, but also trained the neural network more accurately according to the same evaluation indexes. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Xia, Yong; Eberl, Stefan; Wen, Lingfeng; Fulham, Michael; Feng, David Dagan
2012-01-01
Dual medical imaging modalities, such as PET-CT, are now a routine component of clinical practice. Medical image segmentation methods, however, have generally only been applied to single modality images. In this paper, we propose the dual-modality image segmentation model to segment brain PET-CT images into gray matter, white matter and cerebrospinal fluid. This model converts PET-CT image segmentation into an optimization process controlled simultaneously by PET and CT voxel values and spatial constraints. It is innovative in the creation and application of the modality discriminatory power (MDP) coefficient as a weighting scheme to adaptively combine the functional (PET) and anatomical (CT) information on a voxel-by-voxel basis. Our approach relies upon allowing the modality with higher discriminatory power to play a more important role in the segmentation process. We compared the proposed approach to three other image segmentation strategies, including PET-only based segmentation, combination of the results of independent PET image segmentation and CT image segmentation, and simultaneous segmentation of joint PET and CT images without an adaptive weighting scheme. Our results in 21 clinical studies showed that our approach provides the most accurate and reliable segmentation for brain PET-CT images. Copyright © 2011 Elsevier Ltd. All rights reserved.
Noninvasive imaging of oral premalignancy and malignancy
NASA Astrophysics Data System (ADS)
Wilder-Smith, Petra; Krasieva, T.; Jung, W.; You, J. S.; Chen, Z.; Osann, K.; Tromberg, B.
2005-04-01
Objectives: Early detection of cancer and its curable precursors remains the best way to ensure patient survival and quality of life. Despite significant advances in treatment, oral cancer still results in 10,000 U.S. deaths annually, mainly due to the late detection of most oral lesions. The specific aim was to use a combination of non-invasive optical in vivo technologies to test a multi-modality approach to non-invasive diagnostics of oral premalignancy and malignancy. Methods: In the hamster cheek pouch model (120 hamsters), in vivo optical coherence tomography (OCT) and optical Doppler tomography (ODT) mapped epithelial, subepithelial and vascular change throughout carcinogenesis in specific, marked sites. In vivo multi-wavelength multi-photon (MPM) and second harmonic generation (SHG) fluorescence techniques provided parallel data on surface and subsurface tissue structure, specifically collagen presence and structure, cellular presence, and vasculature. Images were diagnosed by 2 blinded, pre-standardized investigators using a standardized scale from 0-6 for all modalities. After sacrifice, histopathological sections were prepared and pathology evaluated on a scale of 0-6. ANOVA techniques compared imaging diagnostics with histopathology. 95% confidence limits of the sensitivity and specificity were established for the diagnostic capability of OCT/ODT + MPM/SHG using ROC curves and kappa statistics. Results: Imaging data were reproducibly obtained with good accuracy. Carcinogenesis-related structural and vascular changes were clearly visible to tissue depths of 2 mm. Sensitivity (OCT/ODT alone: 71-88%; OCT+MPM/SHG: 79-91%) and specificity (OCT alone: 62-83%; OCT+MPM/SHG: 67-90%) compared well with conventional techniques. Conclusions: OCT/ODT and MPM/SHG are promising non-invasive in vivo diagnostic modalities for oral dysplasia and malignancy. Supported by CRFA 30003, CCRP 00-01391V-20235, NIH (LAMMP) RR01192, DOE DE903-91ER 61227, NIH EB-00293 CA91717, NSF BES-86924, AFOSR FA 9550-04-1-0101.
Wang, Li; Shi, Feng; Gao, Yaozong; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang
2014-01-01
Segmentation of infant brain MR images is challenging due to poor spatial resolution, severe partial volume effect, and the ongoing maturation and myelination process. During the first year of life, the image contrast between white and gray matter undergoes dramatic changes. In particular, the image contrast inverts around 6–8 months of age, where white and gray matter are isointense in T1- and T2-weighted images and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a general framework that adopts sparse representation to fuse multi-modality image information and further incorporates anatomical constraints for brain tissue segmentation. Specifically, we first derive an initial segmentation from a library of aligned images with ground-truth segmentations by using sparse representation in a patch-based fashion for the multi-modality T1, T2 and FA images. The segmentation result is further iteratively refined by integration of the anatomical constraint. The proposed method was evaluated on 22 infant brain MR images acquired at around 6 months of age by using leave-one-out cross-validation, as well as on 10 additional unseen testing subjects. Our method achieved high accuracy in terms of the Dice ratios that measure the volume overlap between automated and manual segmentations, i.e., 0.889±0.008 for white matter and 0.870±0.006 for gray matter. PMID:24291615
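As a rough illustration of the patch-based sparse representation and label fusion idea described above (not the authors' implementation), the following sketch sparse-codes one target patch against a library of labelled atlas patches and fuses labels by coefficient weight; the sizes, Lasso penalty, and random data are hypothetical stand-ins.

```python
# Minimal sketch of patch-based sparse-representation label fusion
# (illustrative only). Library patches and labels are random stand-ins
# for patches sampled from aligned atlas images with known segmentations.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
patch_dim = 27                                   # e.g. a 3x3x3 multi-modal patch, flattened
n_library = 200                                  # patches drawn from the aligned library
D = rng.normal(size=(patch_dim, n_library))      # dictionary: one column per library patch
labels = rng.integers(0, 3, size=n_library)      # 0=CSF, 1=GM, 2=WM (hypothetical coding)
target = rng.normal(size=patch_dim)              # patch centred on the voxel to label

# Sparse coding: represent the target patch as a sparse non-negative
# combination of library patches.
coder = Lasso(alpha=0.1, positive=True, max_iter=5000)
coder.fit(D, target)
w = coder.coef_

# Label fusion: each candidate tissue label is scored by the total weight
# of library patches carrying it; the voxel gets the highest-scoring label.
scores = np.array([w[labels == k].sum() for k in range(3)])
print("tissue label for this voxel:", scores.argmax())
```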
Muzic, Raymond F.; DiFilippo, Frank P.
2015-01-01
PET/MR is a hybrid imaging technology with the potential to combine the molecular and functional information of PET with the soft-tissue contrast of MR. Herein we review the technical features and challenges of putting these different technologies together. We emphasize conceptual aspects to make the material accessible to a wide audience. We begin by reviewing PET/CT, a more mature multi-modality imaging technology, to provide a basis for comparison with the history of PET/MR development. We discuss the motivation and challenges of PET/MR and the different approaches that have been used to meet those challenges. We conclude with speculation about the future of this exciting imaging method. PMID:25497909
MINC 2.0: A Flexible Format for Multi-Modal Images.
Vincent, Robert D; Neelin, Peter; Khalili-Mahani, Najmeh; Janke, Andrew L; Fonov, Vladimir S; Robbins, Steven M; Baghdadi, Leila; Lerch, Jason; Sled, John G; Adalat, Reza; MacDonald, David; Zijdenbos, Alex P; Collins, D Louis; Evans, Alan C
2016-01-01
It is often useful that an imaging data format can afford rich metadata, be flexible, scale to very large file sizes, support multi-modal data, and have strong inbuilt mechanisms for data provenance. Beginning in 1992, MINC was developed as a system for flexible, self-documenting representation of neuroscientific imaging data with arbitrary orientation and dimensionality. The MINC system incorporates three broad components: a file format specification, a programming library, and a growing set of tools. In the early 2000's the MINC developers created MINC 2.0, which added support for 64-bit file sizes, internal compression, and a number of other modern features. Because of its extensible design, it has been easy to incorporate details of provenance in the header metadata, including an explicit processing history, unique identifiers, and vendor-specific scanner settings. This makes MINC ideal for use in large scale imaging studies and databases. It also makes it easy to adapt to new scanning sequences and modalities.
Rockne, Russell C.; Trister, Andrew D.; Jacobs, Joshua; Hawkins-Daarud, Andrea J.; Neal, Maxwell L.; Hendrickson, Kristi; Mrugala, Maciej M.; Rockhill, Jason K.; Kinahan, Paul; Krohn, Kenneth A.; Swanson, Kristin R.
2015-01-01
Glioblastoma multiforme (GBM) is a highly invasive primary brain tumour that has poor prognosis despite aggressive treatment. A hallmark of these tumours is diffuse invasion into the surrounding brain, necessitating a multi-modal treatment approach, including surgery, radiation and chemotherapy. We have previously demonstrated the ability of our model to predict radiographic response immediately following radiation therapy in individual GBM patients using a simplified geometry of the brain and a theoretical radiation dose. Using only two pre-treatment magnetic resonance imaging scans, we calculate net rates of proliferation and invasion as well as radiation sensitivity for a patient's disease. Here, we present the application of our clinically targeted modelling approach to a single glioblastoma patient as a demonstration of our method. We apply our model in the full three-dimensional architecture of the brain to quantify the effects of regional resistance to radiation owing to hypoxia, determined in vivo by [18F]-fluoromisonidazole positron emission tomography (FMISO-PET), together with the patient-specific three-dimensional radiation treatment plan. Incorporation of hypoxia into our model with FMISO-PET increases the model–data agreement by an order of magnitude. This improvement was robust both to how hypoxia was defined from the FMISO-PET image and to the degree of radiation resistance assigned in our computational model. This work demonstrates a useful application of patient-specific modelling in personalized medicine and how mathematical modelling has the potential to unify multi-modality imaging and radiation treatment planning. PMID:25540239
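The abstract does not reproduce the model equations; work in this line typically uses a proliferation-invasion (Fisher-KPP type) reaction-diffusion equation, du/dt = div(D grad u) + rho*u*(1-u), with patient-specific diffusion D and proliferation rho. A minimal one-dimensional finite-difference sketch with purely illustrative parameter values is shown below.

```python
# 1-D finite-difference sketch of a proliferation-invasion (Fisher-KPP type)
# tumour growth model, du/dt = D d2u/dx2 + rho*u*(1-u). Parameter values are
# illustrative only, not patient-specific estimates from the paper.
import numpy as np

D, rho = 0.1, 0.05            # mm^2/day diffusion and 1/day proliferation (hypothetical)
dx, dt = 1.0, 0.1             # grid spacing (mm) and time step (days); dt < dx^2/(2D) for stability
x = np.arange(0, 200, dx)     # 1-D brain cross-section, 200 mm
u = np.exp(-((x - 100.0) ** 2) / 10.0)   # small initial tumour cell density at the centre

for _ in range(int(365 / dt)):           # simulate one year of growth
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx ** 2
    u = u + dt * (D * lap + rho * u * (1 - u))
    u[0] = u[-1] = 0.0                   # crude zero boundary condition

# A common convention treats u > 0.16 as the imaging-detectable tumour region.
print("detectable radius:", (u > 0.16).sum() * dx / 2, "mm")
```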
Integrative Data Analysis of Multi-Platform Cancer Data with a Multimodal Deep Learning Approach.
Liang, Muxuan; Li, Zhizhong; Chen, Ting; Zeng, Jianyang
2015-01-01
Identification of cancer subtypes plays an important role in revealing useful insights into disease pathogenesis and advancing personalized therapy. The recent development of high-throughput sequencing technologies has enabled the rapid collection of multi-platform genomic data (e.g., gene expression, miRNA expression, and DNA methylation) for the same set of tumor samples. Although numerous integrative clustering approaches have been developed to analyze cancer data, few of them are particularly designed to exploit both deep intrinsic statistical properties of each input modality and complex cross-modality correlations among multi-platform input data. In this paper, we propose a new machine learning model, called multimodal deep belief network (DBN), to cluster cancer patients from multi-platform observation data. In our integrative clustering framework, relationships among inherent features of each single modality are first encoded into multiple layers of hidden variables, and then a joint latent model is employed to fuse common features derived from multiple input modalities. A practical learning algorithm, called contrastive divergence (CD), is applied to infer the parameters of our multimodal DBN model in an unsupervised manner. Tests on two available cancer datasets show that our integrative data analysis approach can effectively extract a unified representation of latent features to capture both intra- and cross-modality correlations, and identify meaningful disease subtypes from multi-platform cancer data. In addition, our approach can identify key genes and miRNAs that may play distinct roles in the pathogenesis of different cancer subtypes. Among those key miRNAs, we found that the expression level of miR-29a is highly correlated with survival time in ovarian cancer patients. These results indicate that our multimodal DBN based data analysis approach may have practical applications in cancer pathogenesis studies and provide useful guidelines for personalized cancer therapy.
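To make the contrastive divergence step concrete, here is a minimal numpy sketch of a single Bernoulli restricted Boltzmann machine layer trained with CD-1 on random binary vectors. It is a toy stand-in for one modality-specific layer of the multimodal DBN, not the authors' code; all sizes and the learning rate are arbitrary.

```python
# One RBM layer trained with CD-1 (contrastive divergence), the building
# block of a deep belief network. Data are random binarised features
# standing in for one omics modality.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 50, 16, 0.05
W = 0.01 * rng.normal(size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)
b_h = np.zeros(n_hidden)
data = (rng.random((500, n_visible)) < 0.3).astype(float)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for epoch in range(20):
    for v0 in data:
        # positive phase: sample hidden units from the data
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_hidden) < p_h0).astype(float)
        # negative phase: one Gibbs step back to visible and up again
        p_v1 = sigmoid(h0 @ W.T + b_v)
        v1 = (rng.random(n_visible) < p_v1).astype(float)
        p_h1 = sigmoid(v1 @ W + b_h)
        # CD-1 parameter update
        W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
        b_v += lr * (v0 - v1)
        b_h += lr * (p_h0 - p_h1)

recon = sigmoid(sigmoid(data @ W + b_h) @ W.T + b_v)
print("mean reconstruction error:", np.mean((data - recon) ** 2))
```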
NASA Astrophysics Data System (ADS)
De Montigny, Étienne; Goulamhoussen, Nadir; Madore, Wendy-Julie; Strupler, Mathias; Maniakas, Anastasios; Ayad, Tareck; Boudoux, Caroline
2016-02-01
While thyroidectomy is considered a safe surgery, dedicated tools facilitating tissue identification during surgery could improve its outcome. The most common complication following surgery is hypocalcaemia, which results from iatrogenic removal of or damage to the parathyroid glands. This research project aims at developing and validating an instrument based on optical microscopy modalities to identify tissues in real time during surgery. Our approach is based on a combination of reflectance confocal microscopy (RCM) and optical coherence tomography (OCT) to obtain multi-scale morphological contrast images. The orthogonal fields of view provide information to navigate through the sample. To allow simultaneous, synchronized video-rate imaging in both modalities, we designed and built a dual-band wavelength-swept laser which scans a 30 nm band centered at 780 nm and a 90 nm band centered at 1310 nm. We built an imaging setup integrating a custom-made objective lens and a double-clad fibre coupler optimized for confocal microscopy. It features high resolutions in RCM (2 µm lateral and 20 µm axial) in a 500 µm x 500 µm field-of-view, and a larger field-of-view of 2 mm (lateral) x 5 mm (axial) with 20 µm lateral and axial resolutions in OCT. Imaging of ex vivo animal samples is demonstrated on a bench-top system. Tissues that are visually difficult to distinguish from each other intra-operatively, such as parathyroid gland, lymph nodes and adipose tissue, are imaged to show the potential of this approach in differentiating neck tissues. We will also provide an update on our ongoing clinical pilot study on patients undergoing thyroidectomy.
Yang, C; Paulson, E; Li, X
2012-06-01
To develop and evaluate a tool that can improve the accuracy of contour transfer between different image modalities for radiation treatment planning under the challenging conditions of low image contrast and large image deformation, compared with a few commonly used methods. The software tool includes the following steps and functionalities: (1) accepting input images of different modalities; (2) converting existing contours on reference images (e.g., MRI) into delineated volumes and adjusting the intensity within the volumes to match the intensity distribution of the target images (e.g., CT) for an enhanced similarity metric; (3) registering reference and target images using appropriate deformable registration algorithms (e.g., B-spline, demons) and generating deformed contours; (4) mapping the deformed volumes onto the target images and calculating the mean, variance, and center of mass as initialization parameters for subsequent fuzzy connectedness (FC) image segmentation on the target images; (5) generating an affinity map from the FC segmentation; and (6) obtaining final contours by modifying the deformed contours using the affinity map with a gradient distance weighting algorithm. The tool was tested with CT and MR images of four pancreatic cancer patients acquired at the same respiration phase to minimize motion distortion. Dice's coefficient was calculated against direct delineation on the target image. Contours generated by various methods, including rigid transfer, auto-segmentation, deformable-only transfer and the proposed method, were compared. Fuzzy connected image segmentation needs careful parameter initialization and user involvement. Automatic contour transfer by multi-modality deformable registration yields up to 10% accuracy improvement over rigid transfer. The two additional proposed steps of adjusting the intensity distribution and modifying the deformed contour with the affinity map improve the transfer accuracy further, to 14% on average. Deformable image registration aided by contrast adjustment and fuzzy connectedness segmentation improves contour transfer accuracy between multi-modality images, particularly with large deformation and low image contrast. © 2012 American Association of Physicists in Medicine.
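A hedged SimpleITK sketch of the registration-and-transfer portion of such a pipeline (in the spirit of steps 2, 3 and 6 above) is shown below. Histogram matching stands in for the intensity adjustment, the fuzzy-connectedness refinement is omitted, and the file names are placeholders; this is not the authors' tool.

```python
# Sketch: deformable (demons) registration of an MR reference onto a CT
# target, followed by nearest-neighbour propagation of the reference contour.
# Assumes the MR volume has already been rigidly aligned and resampled onto
# the CT grid so the demons filter sees images of identical size.
import SimpleITK as sitk

fixed = sitk.ReadImage("target_ct.nii.gz", sitk.sitkFloat32)       # target modality
moving = sitk.ReadImage("reference_mr.nii.gz", sitk.sitkFloat32)   # reference modality
contour = sitk.ReadImage("reference_contour.nii.gz", sitk.sitkUInt8)

# Crude intensity adjustment so the two modalities look more alike.
matcher = sitk.HistogramMatchingImageFilter()
matcher.SetNumberOfHistogramLevels(128)
matcher.SetNumberOfMatchPoints(7)
matcher.ThresholdAtMeanIntensityOn()
moving_adj = matcher.Execute(moving, fixed)

# Demons deformable registration produces a displacement field over the
# fixed (CT) domain.
demons = sitk.DemonsRegistrationFilter()
demons.SetNumberOfIterations(100)
demons.SetStandardDeviations(1.5)            # field smoothing, in voxels
displacement = demons.Execute(fixed, moving_adj)
transform = sitk.DisplacementFieldTransform(displacement)

# Map the reference contour onto the target grid; nearest-neighbour keeps
# the labels binary.
deformed_contour = sitk.Resample(contour, fixed, transform,
                                 sitk.sitkNearestNeighbor, 0, contour.GetPixelID())
sitk.WriteImage(deformed_contour, "deformed_contour.nii.gz")
```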
Quantitative reconstructions in multi-modal photoacoustic and optical coherence tomography imaging
NASA Astrophysics Data System (ADS)
Elbau, P.; Mindrinos, L.; Scherzer, O.
2018-01-01
In this paper we perform quantitative reconstruction of the electric susceptibility and the Grüneisen parameter of a non-magnetic linear dielectric medium using measurements from a multi-modal photoacoustic and optical coherence tomography system. We consider the mathematical model presented in Elbau et al (2015 Handbook of Mathematical Methods in Imaging ed O Scherzer (New York: Springer) pp 1169-204), where a Fredholm integral equation of the first kind for the Grüneisen parameter was derived. For the numerical solution of the integral equation we consider a Galerkin-type method.
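A Galerkin discretisation of a first-kind Fredholm equation, g(x) = ∫ k(x, y) f(y) dy, reduces it to an ill-conditioned linear system that is usually regularised. The generic numpy sketch below uses a synthetic smooth kernel and piecewise-constant basis functions with Tikhonov regularisation; it illustrates the numerical idea only and is not the PAT/OCT model of the paper.

```python
# Generic Galerkin-type discretisation of a first-kind Fredholm equation
# with Tikhonov regularisation (synthetic test problem).
import numpy as np

n = 100
y = np.linspace(0.0, 1.0, n)
h = y[1] - y[0]

k = lambda x, t: np.exp(-50.0 * (x - t) ** 2)        # smooth test kernel (assumed)
f_true = np.sin(2 * np.pi * y) + 1.5                  # quantity we try to recover

# Galerkin system with piecewise-constant basis: A[i, j] ~ integral of k(x_i, y) phi_j(y) dy
A = k(y[:, None], y[None, :]) * h
g = A @ f_true + 1e-3 * np.random.default_rng(0).normal(size=n)   # noisy data

# Tikhonov-regularised least squares: minimise ||A f - g||^2 + alpha ||f||^2
alpha = 1e-4
f_rec = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ g)
print("relative reconstruction error:",
      np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true))
```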
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Shangjie; Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California; Hara, Wendy
Purpose: To develop a reliable method to estimate electron density based on anatomic magnetic resonance imaging (MRI) of the brain. Methods and Materials: We proposed a unifying multi-atlas approach for electron density estimation based on standard T1- and T2-weighted MRI. First, a composite atlas was constructed through a voxelwise matching process using multiple atlases, with the goal of mitigating effects of inherent anatomic variations between patients. Next we computed for each voxel 2 kinds of conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images; and (2) electron density given its spatial location in a reference anatomy, obtained by deformable image registration. These were combined into a unifying posterior probability density function using the Bayesian formalism, which provided the optimal estimates for electron density. We evaluated the method on 10 patients using leave-one-patient-out cross-validation. Receiver operating characteristic analyses for detecting different tissue types were performed. Results: The proposed method significantly reduced the errors in electron density estimation, with a mean absolute Hounsfield unit error of 119, compared with 140 and 144 (P<.0001) using conventional T1-weighted intensity and geometry-based approaches, respectively. For detection of bony anatomy, the proposed method achieved an 89% area under the curve, 86% sensitivity, 88% specificity, and 90% accuracy, which improved upon intensity and geometry-based approaches (area under the curve: 79% and 80%, respectively). Conclusion: The proposed multi-atlas approach provides robust electron density estimation and bone detection based on anatomic MRI. If validated on a larger population, our work could enable the use of MRI as a primary modality for radiation treatment planning.
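The voxelwise Bayesian combination can be illustrated with a toy one-voxel sketch: two conditional densities over a discretised electron-density axis are multiplied (assuming conditional independence and a flat prior) and the posterior mean is taken as the estimate. The Gaussians below are arbitrary stand-ins for the atlas-derived distributions, not values from the study.

```python
# Toy one-voxel illustration of fusing an intensity-conditioned and a
# location-conditioned density over a discretised electron-density axis.
import numpy as np

rho = np.linspace(0.2, 2.0, 200)          # electron density grid (relative to water)
drho = rho[1] - rho[0]

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

p_given_intensity = gaussian(rho, 1.05, 0.10)   # what the MR intensities suggest (assumed)
p_given_location = gaussian(rho, 1.45, 0.25)    # what the deformed atlas location suggests (assumed)

# Product fusion under conditional independence with a flat prior, then normalise.
posterior = p_given_intensity * p_given_location
posterior /= posterior.sum() * drho

rho_hat = (rho * posterior).sum() * drho        # posterior-mean estimate for this voxel
print("estimated electron density:", round(rho_hat, 3))
```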
Guo, Lu; Wang, Ping; Sun, Ranran; Yang, Chengwen; Zhang, Ning; Guo, Yu; Feng, Yuanming
2018-02-19
Diffusion and perfusion magnetic resonance (MR) images can provide functional information about the tumour and enable more sensitive detection of tumour extent. We aimed to develop a fuzzy feature fusion method for auto-segmentation of gliomas in radiotherapy planning using multi-parametric functional MR images including apparent diffusion coefficient (ADC), fractional anisotropy (FA) and relative cerebral blood volume (rCBV). For each functional modality, one histogram-based fuzzy model was created to transform the image volume into a fuzzy feature space. Based on the fuzzy fusion result of the three fuzzy feature spaces, regions with a high possibility of belonging to tumour were generated automatically. The auto-segmentations of tumour in structural MR images were added to the final auto-segmented gross tumour volume (GTV). For evaluation, one radiation oncologist delineated GTVs for nine patients with all modalities. Comparisons between manually delineated and auto-segmented GTVs showed that the mean volume difference was 8.69% (±5.62%); the mean Dice's similarity coefficient (DSC) was 0.88 (±0.02); and the mean sensitivity and specificity of auto-segmentation were 0.87 (±0.04) and 0.98 (±0.01), respectively. High accuracy and efficiency can be achieved with the new method, which shows the potential of utilizing functional multi-parametric MR images for target definition in precision radiation treatment planning for patients with gliomas.
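A simplified sketch of the histogram-based fuzzy membership and fusion idea is given below; the membership functions, cut-offs and product t-norm are illustrative assumptions, not the paper's calibrated fuzzy models, and the parameter maps are synthetic.

```python
# Fuzzy "tumour-likeness" memberships per functional map, fused with a
# product t-norm to flag a candidate GTV region (toy data, assumed cut-offs).
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64)
adc = rng.normal(1.2, 0.3, shape)     # toy ADC map (arbitrary units)
fa = rng.normal(0.35, 0.1, shape)     # toy FA map
rcbv = rng.normal(1.8, 0.6, shape)    # toy rCBV map

def sigmoid(x, centre, slope):
    return 1.0 / (1.0 + np.exp(-slope * (x - centre)))

# Membership directions and cut-offs are hypothetical:
m_adc = sigmoid(adc, 1.4, 6.0)        # higher ADC  -> more tumour-like
m_fa = sigmoid(-fa, -0.25, 20.0)      # lower FA    -> more tumour-like
m_rcbv = sigmoid(rcbv, 2.2, 4.0)      # higher rCBV -> more tumour-like

fused = m_adc * m_fa * m_rcbv         # product t-norm fusion of the three fuzzy spaces
gtv_candidate = fused > 0.5           # high-possibility tumour region
print("candidate GTV voxels:", int(gtv_candidate.sum()))
```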
Kampmann, Peter; Kirchner, Frank
2014-01-01
With the increasing complexity of robotic missions and the development towards long-term autonomous systems, the need for multi-modal sensing of the environment increases. Until now, the use of tactile sensor systems has been mostly based on sensing one modality of forces in the robotic end-effector. We motivate the use of a multi-modal tactile sensory system that combines static and dynamic force sensor arrays together with an absolute force measurement system. This publication is focused on the development of a compact sensor interface for a fiber-optic sensor array, as optic measurement principles tend to have a bulky interface. Mechanical, electrical and software approaches are combined to realize an integrated structure that provides decentralized data pre-processing of the tactile measurements. Local behaviors are implemented using this setup to show the effectiveness of this approach. PMID:24743158
NASA Astrophysics Data System (ADS)
Faizrahnemoon, Mahsa; Schlote, Arieh; Maggi, Lorenzo; Crisostomi, Emanuele; Shorten, Robert
2015-11-01
This paper describes a Markov-chain-based approach to modelling multi-modal transportation networks. An advantage of the model is its ability to accommodate complex dynamics and handle huge amounts of data. The transition matrix of the Markov chain is built and the model is validated using data extracted from a traffic simulator. A realistic test case using multi-modal data from the city of London is given to further support the ability of the proposed methodology to handle large quantities of data. Then, we use the Markov chain as a control tool to improve the overall efficiency of a transportation network, and some practical examples are described to illustrate the potential of the approach.
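A minimal numpy sketch of the underlying Markov-chain machinery: build a row-stochastic transition matrix over transport "states" and compute its stationary distribution to see where demand concentrates in the long run. The 4-state matrix is invented for illustration and is not the London dataset used in the paper.

```python
# Stationary distribution of a small row-stochastic transition matrix over
# hypothetical multi-modal transport nodes.
import numpy as np

P = np.array([
    [0.70, 0.20, 0.10, 0.00],   # bus stop A
    [0.10, 0.60, 0.20, 0.10],   # metro station B
    [0.05, 0.25, 0.60, 0.10],   # bike dock C
    [0.00, 0.30, 0.10, 0.60],   # rail hub D
])
assert np.allclose(P.sum(axis=1), 1.0)   # rows must sum to one

# Stationary distribution pi satisfies pi P = pi: take the left eigenvector
# of P associated with eigenvalue 1 and normalise it.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = pi / pi.sum()
print("long-run share of travellers at each node:", np.round(pi, 3))
```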
Advanced magnetic resonance imaging in glioblastoma: a review.
Shukla, Gaurav; Alexander, Gregory S; Bakas, Spyridon; Nikam, Rahul; Talekar, Kiran; Palmer, Joshua D; Shi, Wenyin
2017-08-01
Glioblastoma, the most common and most rapidly progressing primary malignant tumor of the central nervous system, continues to portend a dismal prognosis, despite improvements in diagnostic and therapeutic strategies over the last 20 years. The standard of care radiographic characterization of glioblastoma is magnetic resonance imaging (MRI), which is a widely utilized examination in the diagnosis and post-treatment management of patients with glioblastoma. Basic MRI modalities available from any clinical scanner, including native T1-weighted (T1w) and contrast-enhanced (T1CE), T2-weighted (T2w), and T2-fluid-attenuated inversion recovery (T2-FLAIR) sequences, provide critical clinical information about various processes in the tumor environment. In the last decade, advanced MRI modalities are increasingly utilized to further characterize glioblastomas more comprehensively. These include multi-parametric MRI sequences, such as dynamic susceptibility contrast (DSC), dynamic contrast enhancement (DCE), higher order diffusion techniques such as diffusion tensor imaging (DTI), and MR spectroscopy (MRS). Significant efforts are ongoing to implement these advanced imaging modalities into improved clinical workflows and personalized therapy approaches. Functional MRI (fMRI) and tractography are increasingly being used to identify eloquent cortices and important tracts to minimize postsurgical neuro-deficits. A contemporary review of the application of standard and advanced MRI in clinical neuro-oncologic practice is presented here.
Milchenko, Mikhail; Snyder, Abraham Z; LaMontagne, Pamela; Shimony, Joshua S; Benzinger, Tammie L; Fouke, Sarah Jost; Marcus, Daniel S
2016-07-01
Neuroimaging research often relies on clinically acquired magnetic resonance imaging (MRI) datasets that can originate from multiple institutions. Such datasets are characterized by high heterogeneity of modalities and variability of sequence parameters. This heterogeneity complicates the automation of image processing tasks such as spatial co-registration and physiological or functional image analysis. Given this heterogeneity, conventional processing workflows developed for research purposes are not optimal for clinical data. In this work, we describe an approach called Heterogeneous Optimization Framework (HOF) for developing image analysis pipelines that can handle the high degree of clinical data non-uniformity. HOF provides a set of guidelines for configuration, algorithm development, deployment, interpretation of results and quality control for such pipelines. At each step, we illustrate the HOF approach using the implementation of an automated pipeline for Multimodal Glioma Analysis (MGA) as an example. The MGA pipeline computes tissue diffusion characteristics of diffusion tensor imaging (DTI) acquisitions, hemodynamic characteristics using a perfusion model of susceptibility contrast (DSC) MRI, and spatial cross-modal co-registration of available anatomical, physiological and derived patient images. Developing MGA within HOF enabled the processing of neuro-oncology MR imaging studies to be fully automated. MGA has been successfully used to analyze over 160 clinical tumor studies to date within several research projects. Introduction of the MGA pipeline improved image processing throughput and, most importantly, effectively produced co-registered datasets that were suitable for advanced analysis despite high heterogeneity in acquisition protocols.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barstow, Del R; Patlolla, Dilip Reddy; Mann, Christopher J
Abstract: The data captured by existing standoff biometric systems typically yields lower biometric recognition performance than that of their close-range counterparts due to imaging challenges, pose challenges, and other factors. To assist in overcoming these limitations, such systems typically operate in a multi-modal capacity, such as Honeywell's Combined Face and Iris (CFAIRS) [21] system. While this improves system performance, standoff systems have yet to be proven as accurate as their close-range equivalents. We present a standoff system capable of operating at ranges up to 7 meters. Unlike many systems such as the CFAIRS, our system captures high-quality 12 MP video, allowing for multi-sample as well as multi-modal comparison. We found that for standoff systems, multi-sample comparison improved performance more than multi-modal comparison. For a small test group of 50 subjects we were able to achieve 100% rank-one recognition performance with our system.
Chen, Xiao-Liang; Li, Qian; Cao, Lin; Jiang, Shi-Xi
2014-01-01
In most such patients, bone metastases appear before they are evident on bone imaging. (99)Tc(m)-MDP ((99)Tc(m)-labeled methylene diphosphonate) bone imaging can diagnose bone metastases with high sensitivity but lower specificity. The aim of this study was to explore the diagnostic value of (99)Tc(m)-MDP SPECT/CT combined with SPECT/MRI multi-modality imaging for early-stage atypical bone metastases. 15 to 30 mCi of (99)Tc(m)-MDP was intravenously injected into 34 cancer patients with suspected early bone metastases. SPECT, CT and SPECT/CT images were acquired and analyzed. For the patients diagnosed with early-stage atypical bone metastases by SPECT/CT, the SPECT/CT and MRI data were combined into an integrated SPECT/MRI image. The obtained SPECT/MRI images were analyzed and compared with the pathological results of the patients. The results covered 34 suspected early metastatic foci, including 34 SPECT-positive foci, 17 foci without specific changes on CT, 11 bone metastasis foci on SPECT/CT, 23 doubtful bone metastasis foci, 8 doubtful bone metastasis foci, 14 doubtful bone metastasis foci, and 2 foci without a clear image. In total, the SPECT/CT combined with SPECT/MRI method diagnosed 30 bone metastatic foci and 4 doubtful metastatic foci. In conclusion, (99)Tc(m)-MDP SPECT/CT combined with SPECT/MRI multi-modality imaging shows a high diagnostic value for early-stage bone metastases and enhances diagnostic accuracy.
Automated unsupervised multi-parametric classification of adipose tissue depots in skeletal muscle
Valentinitsch, Alexander; Karampinos, Dimitrios C.; Alizai, Hamza; Subburaj, Karupppasamy; Kumar, Deepak; Link, Thomas M.; Majumdar, Sharmila
2012-01-01
Purpose: To introduce and validate an automated unsupervised multi-parametric method for segmentation of the subcutaneous fat and muscle regions in order to determine subcutaneous adipose tissue (SAT) and intermuscular adipose tissue (IMAT) areas based on data from a quantitative chemical shift-based water-fat separation approach. Materials and Methods: Unsupervised standard k-means clustering was employed to define sets of similar features (k = 2) within the whole multi-modal image after the water-fat separation. The automated image processing chain was composed of three primary stages including tissue, muscle and bone region segmentation. The algorithm was applied on calf and thigh datasets to compute SAT and IMAT areas and was compared to a manual segmentation. Results: The IMAT area using the automatic segmentation had excellent agreement with the IMAT area using the manual segmentation for all the cases in the thigh (R2: 0.96) and for cases with up to moderate IMAT area in the calf (R2: 0.92). The group with the highest grade of muscle fat infiltration in the calf had the highest error in the inner SAT contour calculation. Conclusion: The proposed multi-parametric segmentation approach combined with quantitative water-fat imaging provides an accurate and reliable method for an automated calculation of the SAT and IMAT areas, reducing considerably the total post-processing time. PMID:23097409
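The clustering step can be sketched in a few lines with scikit-learn: run k-means with k = 2 on a per-pixel fat-fraction feature and count pixels per cluster. The synthetic map and the omission of the tissue/muscle/bone staging and contour logic make this an illustration only.

```python
# k-means (k = 2) on a synthetic fat-fraction map as a stand-in for the
# unsupervised clustering stage of a water-fat segmentation pipeline.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Toy fat-fraction map in [0, 1]: mostly lean muscle plus a fatty band.
ff = np.clip(rng.normal(0.1, 0.05, (128, 128)), 0, 1)
ff[:, :15] = np.clip(rng.normal(0.85, 0.05, (128, 15)), 0, 1)   # subcutaneous fat band

features = ff.reshape(-1, 1)                       # one feature per pixel
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
labels = labels.reshape(ff.shape)

# Decide which cluster is "fat" by its mean fat fraction, then report pixel
# counts (multiply by pixel area for physical areas on real data).
fat_cluster = int(ff[labels == 1].mean() > ff[labels == 0].mean())
print("fat pixels:", int((labels == fat_cluster).sum()),
      "| muscle/other pixels:", int((labels != fat_cluster).sum()))
```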
Big data sharing and analysis to advance research in post-traumatic epilepsy.
Duncan, Dominique; Vespa, Paul; Pitkanen, Asla; Braimah, Adebayo; Lapinlampi, Nina; Toga, Arthur W
2018-06-01
We describe the infrastructure and functionality of a centralized preclinical and clinical data repository and analytic platform that supports importing heterogeneous multi-modal data, automatic and manual linking of data across modalities and sites, and content searching. We have developed and applied innovative image and electrophysiology processing methods to identify candidate biomarkers from MRI, EEG, and multi-modal data. Based on heterogeneous biomarkers, we present novel analytic tools designed to study epileptogenesis in animal models and humans, with the goal of tracking the probability of developing epilepsy over time. Copyright © 2017. Published by Elsevier Inc.
Li, Ziyao; Tian, Jiawei; Wang, Xiaowei; Wang, Ying; Wang, Zhenzhen; Zhang, Lei; Jing, Hui; Wu, Tong
2016-04-01
The objective of this study was to identify multi-modal ultrasound imaging parameters that could potentially help to differentiate between triple negative breast cancer (TNBC) and non-TNBC. Conventional ultrasonography, ultrasound strain elastography and 3-D ultrasound (3-D-US) findings from 50 TNBC and 179 non-TNBC patients were retrospectively reviewed. Immunohistochemical examination was used as the reference gold standard for cancer subtyping. Different ultrasound modalities were initially analyzed to define TNBC-related features. Subsequently, logistic regression analysis was applied to TNBC-related features to establish models for predicting TNBC. TNBCs often presented as micro-lobulated, markedly hypo-echoic masses with an abrupt interface (p = 0.015, 0.0015 and 0.004, compared with non-TNBCs, respectively) on conventional ultrasound, and showed a diminished retraction pattern phenomenon in the coronal plane (p = 0.035) on 3-D-US. Our findings suggest that B-mode ultrasound and 3-D-US in multi-modality ultrasonography could be a useful non-invasive technique for differentiating TNBCs from non-TNBCs. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Samala, Ravi K.; Chan, Heang-Ping; Hadjiiski, Lubomir M.; Helvie, Mark A.; Cha, Kenny H.; Richter, Caleb D.
2017-12-01
Transfer learning in deep convolutional neural networks (DCNNs) is an important step in its application to medical imaging tasks. We propose a multi-task transfer learning DCNN with the aim of translating the ‘knowledge’ learned from non-medical images to medical diagnostic tasks through supervised training and increasing the generalization capabilities of DCNNs by simultaneously learning auxiliary tasks. We studied this approach in an important application: classification of malignant and benign breast masses. With Institutional Review Board (IRB) approval, digitized screen-film mammograms (SFMs) and digital mammograms (DMs) were collected from our patient files and additional SFMs were obtained from the Digital Database for Screening Mammography. The data set consisted of 2242 views with 2454 masses (1057 malignant, 1397 benign). In single-task transfer learning, the DCNN was trained and tested on SFMs. In multi-task transfer learning, SFMs and DMs were used to train the DCNN, which was then tested on SFMs. N-fold cross-validation with the training set was used for training and parameter optimization. On the independent test set, the multi-task transfer learning DCNN was found to have significantly (p = 0.007) higher performance compared to the single-task transfer learning DCNN. This study demonstrates that multi-task transfer learning may be an effective approach for training DCNN in medical imaging applications when training samples from a single modality are limited.
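The multi-task idea above (a shared feature extractor with one head per task/modality and a summed loss) can be sketched schematically in PyTorch. The tiny network and random tensors below are placeholders and do not reflect the transfer-learned DCNN, hyperparameters, or mammography data used in the study.

```python
# Schematic multi-task training step: shared trunk, two classification heads,
# joint loss over an SFM batch and a DM batch (toy data).
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(                 # shared feature extractor
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.head_sfm = nn.Linear(16, 2)            # malignant vs benign on SFM (primary task)
        self.head_dm = nn.Linear(16, 2)             # auxiliary task on DM

    def forward(self, x):
        z = self.trunk(x)
        return self.head_sfm(z), self.head_dm(z)

net = MultiTaskNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One toy training step on random "patches" and labels for both tasks.
x_sfm, y_sfm = torch.randn(8, 1, 64, 64), torch.randint(0, 2, (8,))
x_dm, y_dm = torch.randn(8, 1, 64, 64), torch.randint(0, 2, (8,))
out_sfm, _ = net(x_sfm)
_, out_dm = net(x_dm)
loss = loss_fn(out_sfm, y_sfm) + loss_fn(out_dm, y_dm)   # joint multi-task loss
opt.zero_grad()
loss.backward()
opt.step()
print("joint loss:", float(loss))
```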
Multi-modal spectroscopic imaging with synchrotron light to study mechanisms of brain disease
NASA Astrophysics Data System (ADS)
Summers, Kelly L.; Fimognari, Nicholas; Hollings, Ashley; Kiernan, Mitchell; Lam, Virginie; Tidy, Rebecca J.; Takechi, Ryu; George, Graham N.; Pickering, Ingrid J.; Mamo, John C.; Harris, Hugh H.; Hackett, Mark J.
2017-04-01
The international health care costs associated with Alzheimer's disease (AD) and dementia have been predicted to reach $2 trillion USD by 2030. As such, there is an urgent need to develop new treatments and diagnostic methods to stem an international health crisis. A major limitation to therapy and diagnostic development is the lack of a complete understanding of the disease mechanisms. Spectroscopic methods at synchrotron light sources, such as FTIR, XRF, and XAS, offer a "multi-modal imaging platform" that reveals a wealth of important biochemical information in situ within ex vivo tissue sections, increasing our understanding of disease mechanisms.
Results from the commissioning of a multi-modal endoscope for ultrasound and time of flight PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bugalho, Ricardo
2015-07-01
The EndoTOFPET-US collaboration has developed a multi-modal imaging system combining Ultrasound with Time-of-Flight Positron Emission Tomography in an endoscopic imaging device. The objective of the project is to obtain a coincidence time resolution of about 200 ps FWHM and to achieve about 1 mm spatial resolution for the PET system, while integrating all the components into a very compact detector suitable for endoscopic use. The scanner is intended for use in diagnostic and surgical oncology, as well as in clinical tests of new biomarkers targeted especially at prostate and pancreatic cancer. (authors)
Direct visualization of gastrointestinal tract with lanthanide-doped BaYbF5 upconversion nanoprobes.
Liu, Zhen; Ju, Enguo; Liu, Jianhua; Du, Yingda; Li, Zhengqiang; Yuan, Qinghai; Ren, Jinsong; Qu, Xiaogang
2013-10-01
Nanoparticulate contrast agents have attracted a great deal of attention along with the rapid development of modern medicine. Here, a binary contrast agent based on PAA-modified BaYbF5:Tm nanoparticles for direct visualization of the gastrointestinal (GI) tract has been designed and developed via a one-pot solvothermal route. Taking advantage of the excellent colloidal stability, low cytotoxicity, and negligible hemolysis of these well-designed nanoparticles, their feasibility as a multi-modal contrast agent for the GI tract was intensively investigated. Significant enhancement of contrast efficacy relative to a clinical barium meal and an iodine-based contrast agent was demonstrated via X-ray imaging and CT imaging in vivo. By doping Tm(3+) ions into these nanoprobes, in vivo NIR-NIR imaging was then demonstrated. Unlike invasive imaging modalities, a non-invasive imaging strategy for the GI tract combining X-ray imaging, CT imaging, and UCL imaging could greatly reduce discomfort to patients, effectively facilitate the imaging procedure, and rationally economize diagnostic time. Critical to clinical applications, the long-term toxicity of our contrast agent was additionally investigated in detail, indicating its overall safety. Based on our results, PAA-BaYbF5:Tm nanoparticles are an excellent multi-modal contrast agent that integrates X-ray imaging, CT imaging, and UCL imaging for direct visualization of the GI tract with low systemic toxicity. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Raylman, Raymond R.; Majewski, Stan; Velan, S. Sendhil; Lemieux, Susan; Kross, Brian; Popov, Vladimir; Smith, Mark F.; Weisenberger, Andrew G.
2007-06-01
Multi-modality imaging (such as PET-CT) is rapidly becoming a valuable tool in the diagnosis of disease and in the development of new drugs. Functional images produced with PET, fused with anatomical images created by MRI, allow the correlation of form with function. Perhaps more exciting than the combination of anatomical MRI with PET is the melding of PET with MR spectroscopy (MRS). Thus, two aspects of physiology can be combined in novel ways to produce new insights into the physiology of normal and pathological processes. Our team is developing a system to acquire MRI images, MRS spectra, and PET images contemporaneously. The prototype MR-compatible PET system consists of two opposed detector heads (appropriately sized for small animal imaging), operating in coincidence mode with an active field-of-view of ∼14 cm in diameter. Each detector consists of an array of LSO detector elements coupled through a 2-m long fiber optic light guide to a single position-sensitive photomultiplier tube. The use of light guides allows these magnetic field-sensitive elements of the PET imager to be positioned outside the strong magnetic field of our 3T MRI scanner. The PET imager was integrated with a 12-cm diameter, 12-leg custom birdcage coil. Simultaneous MRS spectra and PET images were successfully acquired from a multi-modality phantom consisting of a sphere filled with 17 brain-relevant substances and a positron-emitting radionuclide. There were no significant changes in MRI or PET scanner performance when both were present in the MRI magnet bore. This successful initial test demonstrates the potential for using such a multi-modality system to obtain complementary MRS and PET data.
Imaging of oxygenation in 3D tissue models with multi-modal phosphorescent probes
NASA Astrophysics Data System (ADS)
Papkovsky, Dmitri B.; Dmitriev, Ruslan I.; Borisov, Sergei
2015-03-01
Cell-penetrating phosphorescence-based probes allow real-time, high-resolution imaging of O2 concentration in respiring cells and 3D tissue models. We have developed a panel of such probes, small-molecule and nanoparticle structures, which have different spectral characteristics, cell-penetrating and tissue-staining behavior. The probes are compatible with conventional live cell imaging platforms and can be used in different detection modalities, including ratiometric intensity and PLIM (Phosphorescence Lifetime IMaging) under one- or two-photon excitation. The analytical performance of these probes and the utility of the O2 imaging method have been demonstrated with different types of samples: 2D cell cultures, multi-cellular spheroids from cancer cell lines and primary neurons, excised slices from mouse brain, colon and bladder tissue, and live animals. They are particularly useful for hypoxia research, ex vivo studies of tissue physiology, cell metabolism, cancer, inflammation, and multiplexing with many conventional fluorophores and markers of cellular function.
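Lifetime-based O2 readouts such as PLIM are commonly converted to oxygen values through the Stern-Volmer relation, tau0/tau = 1 + Ksv*pO2. A minimal sketch with hypothetical, probe-specific calibration constants:

```python
# Stern-Volmer conversion of a measured phosphorescence lifetime to pO2.
# tau0 and Ksv are calibration constants that differ for every probe; the
# values below are illustrative only.
tau0 = 60.0e-6     # unquenched lifetime in seconds (hypothetical calibration)
ksv = 0.03         # Stern-Volmer constant in 1/mmHg (hypothetical calibration)
tau = 35.0e-6      # measured lifetime in the sample

po2 = (tau0 / tau - 1.0) / ksv
print(f"estimated pO2: {po2:.1f} mmHg")
```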
Grid-Enabled Quantitative Analysis of Breast Cancer
2010-10-01
large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer...research, we designed a pilot study utilizing large-scale parallel Grid computing harnessing nationwide infrastructure for medical image analysis. Also
Brücher, Björn LDM; Stojadinovic, Alexander; Bilchik, Anton J.; Protic, Mladjan; Daumer, Martin; Nissan, Aviram; Avital, Itzhak
2013-01-01
Peritoneal surface malignancy (PSM) is a frequent occurrence in the natural history of colorectal cancer (CRC). Although significant advances have been made in screening of CRC, similar progress has yet to be made in the early detection of PSM of colorectal cancer origin. The fact that advanced CRC can be confined to the peritoneal surface without distant dissemination forms the basis for aggressive multi-modality therapy consisting of cytoreductive surgery (CRS) plus hyperthermic intra-peritoneal chemotherapy (HIPEC), and neoadjuvant and/or adjuvant systemic therapy. Reported overall survival with complete CRS+HIPEC exceeds that of systemic therapy alone for the treatment of PSM from CRC, underscoring the advantage of this multi-modality therapeutic approach. Patients with limited peritoneal disease from CRC can undergo complete cytoreduction, which is associated with the best reported outcomes. As early or limited peritoneal carcinomatosis is undetectable by conventional imaging modalities, second look laparotomy is an important means to identify disease in high-risk patients at a stage most amenable to complete cytoreduction. This review focuses on the identification of patients at risk for PSM from CRC and discusses the role of second look laparotomy. PMID:23459716
Evaluation of prostate segmentation algorithms for MRI: the PROMISE12 challenge
Litjens, Geert; Toth, Robert; van de Ven, Wendy; Hoeks, Caroline; Kerkstra, Sjoerd; van Ginneken, Bram; Vincent, Graham; Guillard, Gwenael; Birbeck, Neil; Zhang, Jindang; Strand, Robin; Malmberg, Filip; Ou, Yangming; Davatzikos, Christos; Kirschner, Matthias; Jung, Florian; Yuan, Jing; Qiu, Wu; Gao, Qinquan; Edwards, Philip “Eddie”; Maan, Bianca; van der Heijden, Ferdinand; Ghose, Soumya; Mitra, Jhimli; Dowling, Jason; Barratt, Dean; Huisman, Henkjan; Madabhushi, Anant
2014-01-01
Prostate MRI image segmentation has been an area of intense research due to the increased use of MRI as a modality for the clinical workup of prostate cancer. Segmentation is useful for various tasks, e.g. to accurately localize prostate boundaries for radiotherapy or to initialize multi-modal registration algorithms. In the past, it has been difficult for research groups to evaluate prostate segmentation algorithms on multi-center, multi-vendor and multi-protocol data. Especially because we are dealing with MR images, image appearance, resolution and the presence of artifacts are affected by differences in scanners and/or protocols, which in turn can have a large influence on algorithm accuracy. The Prostate MR Image Segmentation (PROMISE12) challenge was set up to allow a fair and meaningful comparison of segmentation methods on the basis of performance and robustness. In this work we will discuss the initial results of the online PROMISE12 challenge, and the results obtained in the live challenge workshop hosted by the MICCAI2012 conference. In the challenge, 100 prostate MR cases from 4 different centers were included, with differences in scanner manufacturer, field strength and protocol. A total of 11 teams from academic research groups and industry participated. Algorithms showed a wide variety in methods and implementation, including active appearance models, atlas registration and level sets. Evaluation was performed using boundary and volume based metrics which were combined into a single score relating the metrics to human expert performance. The winners of the challenge were the algorithms by teams Imorphics and ScrAutoProstate, with scores of 85.72 and 84.29 overall. Both algorithms were significantly better than all other algorithms in the challenge (p < 0.05) and had efficient implementations, with run times of 8 minutes and 3 seconds per case, respectively. Overall, active appearance model based approaches seemed to outperform other approaches like multi-atlas registration, both on accuracy and computation time. Although average algorithm performance was good to excellent and the Imorphics algorithm outperformed the second observer on average, we showed that algorithm combination might lead to further improvement, indicating that optimal performance for prostate segmentation has not yet been obtained. All results are available online at http://promise12.grand-challenge.org/. PMID:24418598
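The two metric families mentioned above (volume overlap and boundary distance) can be computed generically as in the sketch below; this is an illustration on toy masks, not the PROMISE12 scoring implementation.

```python
# Dice coefficient and a simple mean surface distance between two binary masks.
import numpy as np
from scipy import ndimage

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def mean_surface_distance(a, b, spacing=(1.0, 1.0, 1.0)):
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ ndimage.binary_erosion(a)          # boundary voxels of each mask
    surf_b = b ^ ndimage.binary_erosion(b)
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    return 0.5 * (dist_to_b[surf_a].mean() + dist_to_a[surf_b].mean())

# Toy example: two overlapping spheres standing in for automatic and manual masks.
zz, yy, xx = np.mgrid[:64, :64, :64]
auto = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
manual = (zz - 30) ** 2 + (yy - 33) ** 2 + (xx - 32) ** 2 < 15 ** 2
print("Dice:", round(dice(auto, manual), 3),
      "| mean surface distance (mm):", round(mean_surface_distance(auto, manual), 2))
```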
Simulation of brain tumors in MR images for evaluation of segmentation efficacy.
Prastawa, Marcel; Bullitt, Elizabeth; Gerig, Guido
2009-04-01
Obtaining validation data and comparison metrics for segmentation of magnetic resonance images (MRI) are difficult tasks due to the lack of reliable ground truth. This problem is even more evident for images presenting pathology, which can both alter tissue appearance through infiltration and cause geometric distortions. Systems for generating synthetic images with user-defined degradation by noise and intensity inhomogeneity offer the possibility for testing and comparison of segmentation methods. Such systems do not yet offer simulation of sufficiently realistic looking pathology. This paper presents a system that combines physical and statistical modeling to generate synthetic multi-modal 3D brain MRI with tumor and edema, along with the underlying anatomical ground truth. The main emphasis is placed on simulation of the major effects known for tumor MRI, such as contrast enhancement, local distortion of healthy tissue, infiltrating edema adjacent to tumors, destruction and deformation of fiber tracts, and multi-modal MRI contrast of healthy tissue and pathology. The new method synthesizes pathology in multi-modal MRI and diffusion tensor imaging (DTI) by simulating mass effect, warping and destruction of white matter fibers, and infiltration of brain tissues by tumor cells. We generate synthetic contrast enhanced MR images by simulating the accumulation of contrast agent within the brain. The appearance of the brain tissue and tumor in MRI is simulated by synthesizing texture images from real MR images. The proposed method is able to generate synthetic ground truth and synthesized MR images with tumor and edema that exhibit comparable segmentation challenges to real tumor MRI. Such image data sets will find use in segmentation reliability studies, comparison and validation of different segmentation methods, training and teaching, or even in evaluating standards for tumor size like the RECIST criteria (response evaluation criteria in solid tumors).
Yin, X-X; Zhang, Y; Cao, J; Wu, J-L; Hadjiloucas, S
2016-12-01
We provide a comprehensive account of recent advances in biomedical image analysis and classification from two complementary imaging modalities: terahertz (THz) pulse imaging and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The work aims to highlight underlining commonalities in both data structures so that a common multi-channel data fusion framework can be developed. Signal pre-processing in both datasets is discussed briefly taking into consideration advances in multi-resolution analysis and model based fractional order calculus system identification. Developments in statistical signal processing using principal component and independent component analysis are also considered. These algorithms have been developed independently by the THz-pulse imaging and DCE-MRI communities, and there is scope to place them in a common multi-channel framework to provide better software standardization at the pre-processing de-noising stage. A comprehensive discussion of feature selection strategies is also provided and the importance of preserving textural information is highlighted. Feature extraction and classification methods taking into consideration recent advances in support vector machine (SVM) and extreme learning machine (ELM) classifiers and their complex extensions are presented. An outlook on Clifford algebra classifiers and deep learning techniques suitable to both types of datasets is also provided. The work points toward the direction of developing a new unified multi-channel signal processing framework for biomedical image analysis that will explore synergies from both sensing modalities for inferring disease proliferation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
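The shared PCA/ICA pre-processing stage the review points to can be sketched with scikit-learn on a pixels-by-channels matrix (THz time points or DCE-MRI time points per pixel). The synthetic data and component counts below are arbitrary choices, not values from either community's pipelines.

```python
# PCA and ICA on a pixels x channels matrix as a generic multi-channel
# pre-processing step (synthetic data).
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
n_pixels, n_channels = 4096, 32           # e.g. a 64x64 image with 32 temporal samples
X = rng.normal(size=(n_pixels, n_channels))
# Inject a shared temporal signature so the decompositions have something to find.
X += np.outer(rng.normal(size=n_pixels), np.sin(np.linspace(0, 3, n_channels)))

pca = PCA(n_components=5).fit(X)
scores = pca.transform(X)                 # low-dimensional per-pixel features
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))

ica = FastICA(n_components=5, random_state=0, max_iter=1000)
sources = ica.fit_transform(X)            # statistically independent per-pixel components
print("ICA source matrix shape:", sources.shape)
```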
Serag, Ahmed; Blesa, Manuel; Moore, Emma J; Pataky, Rozalia; Sparrow, Sarah A; Wilkinson, A G; Macnaught, Gillian; Semple, Scott I; Boardman, James P
2016-03-24
Accurate whole-brain segmentation, or brain extraction, of magnetic resonance imaging (MRI) is a critical first step in most neuroimage analysis pipelines. The majority of brain extraction algorithms have been developed and evaluated for adult data and their validity for neonatal brain extraction, which presents age-specific challenges for this task, has not been established. We developed a novel method for brain extraction of multi-modal neonatal brain MR images, named ALFA (Accurate Learning with Few Atlases). The method uses a new sparsity-based atlas selection strategy that requires a very limited number of atlases 'uniformly' distributed in the low-dimensional data space, combined with a machine learning based label fusion technique. The performance of the method for brain extraction from multi-modal data of 50 newborns is evaluated and compared with results obtained using eleven publicly available brain extraction methods. ALFA outperformed the eleven compared methods providing robust and accurate brain extraction results across different modalities. As ALFA can learn from partially labelled datasets, it can be used to segment large-scale datasets efficiently. ALFA could also be applied to other imaging modalities and other stages across the life course.
Quicksilver: Fast predictive image registration - A deep learning approach.
Yang, Xiao; Kwitt, Roland; Styner, Martin; Niethammer, Marc
2017-09-01
This paper introduces Quicksilver, a fast deformable image registration method. Quicksilver registration for image-pairs works by patch-wise prediction of a deformation model based directly on image appearance. A deep encoder-decoder network is used as the prediction model. While the prediction strategy is general, we focus on predictions for the Large Deformation Diffeomorphic Metric Mapping (LDDMM) model. Specifically, we predict the momentum-parameterization of LDDMM, which facilitates a patch-wise prediction strategy while maintaining the theoretical properties of LDDMM, such as guaranteed diffeomorphic mappings for sufficiently strong regularization. We also provide a probabilistic version of our prediction network which can be sampled during the testing time to calculate uncertainties in the predicted deformations. Finally, we introduce a new correction network which greatly increases the prediction accuracy of an already existing prediction network. We show experimental results for uni-modal atlas-to-image as well as uni-/multi-modal image-to-image registrations. These experiments demonstrate that our method accurately predicts registrations obtained by numerical optimization, is very fast, achieves state-of-the-art registration results on four standard validation datasets, and can jointly learn an image similarity measure. Quicksilver is freely available as an open-source software. Copyright © 2017 Elsevier Inc. All rights reserved.
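A minimal PyTorch sketch of the patch-wise prediction idea follows: a small encoder-decoder maps a two-channel patch (moving and target image) to a three-channel momentum patch. The layer sizes, patch size and training step are illustrative assumptions, not the published Quicksilver architecture.

```python
import torch
import torch.nn as nn

class PatchMomentumNet(nn.Module):
    """Toy encoder-decoder: (moving, target) patch -> 3-channel momentum patch."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2), nn.ReLU(),
            nn.Conv3d(16, 3, kernel_size=3, padding=1),
        )

    def forward(self, patch_pair):
        return self.decoder(self.encoder(patch_pair))

# One regression step against momenta precomputed by numerical LDDMM optimization.
net = PatchMomentumNet()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
patches = torch.randn(8, 2, 16, 16, 16)        # hypothetical image patch pairs
target_momenta = torch.randn(8, 3, 16, 16, 16) # hypothetical LDDMM momenta
optimizer.zero_grad()
loss = nn.functional.mse_loss(net(patches), target_momenta)
loss.backward()
optimizer.step()
```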
Volume curtaining: a focus+context effect for multimodal volume visualization
NASA Astrophysics Data System (ADS)
Fairfield, Adam J.; Plasencia, Jonathan; Jang, Yun; Theodore, Nicholas; Crawford, Neil R.; Frakes, David H.; Maciejewski, Ross
2014-03-01
In surgical preparation, physicians will often utilize multimodal imaging scans to capture complementary information to improve diagnosis and to drive patient-specific treatment. These imaging scans may consist of data from magnetic resonance imaging (MR), computed tomography (CT), or other various sources. The challenge in using these different modalities is that the physician must mentally map the two modalities together during the diagnosis and planning phase. Furthermore, the different imaging modalities will be generated at various resolutions as well as slightly different orientations due to patient placement during scans. In this work, we present an interactive system for multimodal data fusion, analysis and visualization. Developed with partners from neurological clinics, this work discusses initial system requirements and physician feedback at the various stages of component development. Finally, we present a novel focus+context technique for the interactive exploration of coregistered multi-modal data.
Multi-Modal Ultra-Widefield Imaging Features in Waardenburg Syndrome
Choudhry, Netan; Rao, Rajesh C.
2015-01-01
Background Waardenburg syndrome is characterized by a group of features including telecanthus, a broad nasal root, synophrys of the eyebrows, piebaldism, heterochromia irides, and deaf-mutism. Hypopigmentation of the choroid is a unique feature of this condition examined with multi-modal Ultra-Widefield Imaging in this report. Material/Methods Report of a single case. Results Bilateral symmetric choroidal hypopigmentation was observed, with hypoautofluorescence in the region of hypopigmentation. Fluorescein angiography revealed a normal vasculature; however, a thickened choroid was seen on Enhanced-Depth Imaging Spectral-Domain OCT (EDI SD-OCT). Conclusion(s) Choroidal hypopigmentation is a unique feature of Waardenburg syndrome, which can be visualized with ultra-widefield fundus autofluorescence. The choroid may also be thickened in this condition and its thickness measured with EDI SD-OCT. PMID:26114849
Yuan, Lei; Wang, Yalin; Thompson, Paul M.; Narayan, Vaibhav A.; Ye, Jieping
2012-01-01
Analysis of incomplete data is a big challenge when integrating large-scale brain imaging datasets from different imaging modalities. In the Alzheimer’s Disease Neuroimaging Initiative (ADNI), for example, over half of the subjects lack cerebrospinal fluid (CSF) measurements; an independent half of the subjects do not have fluorodeoxyglucose positron emission tomography (FDG-PET) scans; many lack proteomics measurements. Traditionally, subjects with missing measures are discarded, resulting in a severe loss of available information. In this paper, we address this problem by proposing an incomplete Multi-Source Feature (iMSF) learning method where all the samples (with at least one available data source) can be used. To illustrate the proposed approach, we classify patients from the ADNI study into groups with Alzheimer’s disease (AD), mild cognitive impairment (MCI) and normal controls, based on the multi-modality data. At baseline, ADNI’s 780 participants (172 AD, 397 MCI, 211 NC), have at least one of four data types: magnetic resonance imaging (MRI), FDG-PET, CSF and proteomics. These data are used to test our algorithm. Depending on the problem being solved, we divide our samples according to the availability of data sources, and we learn shared sets of features with state-of-the-art sparse learning methods. To build a practical and robust system, we construct a classifier ensemble by combining our method with four other methods for missing value estimation. Comprehensive experiments with various parameters show that our proposed iMSF method and the ensemble model yield stable and promising results. PMID:22498655
Hao, Xiaoke; Yao, Xiaohui; Yan, Jingwen; Risacher, Shannon L.; Saykin, Andrew J.; Zhang, Daoqiang; Shen, Li
2016-01-01
Neuroimaging genetics has attracted growing attention and interest, which is thought to be a powerful strategy to examine the influence of genetic variants (i.e., single nucleotide polymorphisms (SNPs)) on structures or functions of human brain. In recent studies, univariate or multivariate regression analysis methods are typically used to capture the effective associations between genetic variants and quantitative traits (QTs) such as brain imaging phenotypes. The identified imaging QTs, although associated with certain genetic markers, may not be all disease specific. A useful, but underexplored, scenario could be to discover only those QTs associated with both genetic markers and disease status for revealing the chain from genotype to phenotype to symptom. In addition, multimodal brain imaging phenotypes are extracted from different perspectives and imaging markers consistently showing up in multimodalities may provide more insights for mechanistic understanding of diseases (i.e., Alzheimer’s disease (AD)). In this work, we propose a general framework to exploit multi-modal brain imaging phenotypes as intermediate traits that bridge genetic risk factors and multi-class disease status. We applied our proposed method to explore the relation between the well-known AD risk SNP APOE rs429358 and three baseline brain imaging modalities (i.e., structural magnetic resonance imaging (MRI), fluorodeoxyglucose positron emission tomography (FDG-PET) and F-18 florbetapir PET scans amyloid imaging (AV45)) from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. The empirical results demonstrate that our proposed method not only helps improve the performances of imaging genetic associations, but also discovers robust and consistent regions of interests (ROIs) across multi-modalities to guide the disease-induced interpretation. PMID:27277494
Melbourne, Andrew; Eaton-Rosen, Zach; De Vita, Enrico; Bainbridge, Alan; Cardoso, Manuel Jorge; Price, David; Cady, Ernest; Kendall, Giles S; Robertson, Nicola J; Marlow, Neil; Ourselin, Sébastien
2014-01-01
Infants born prematurely are at increased risk of adverse functional outcome. The measurement of white matter tissue composition and structure can help predict functional performance and this motivates the search for new multi-modal imaging biomarkers. In this work we develop a novel combined biomarker from diffusion MRI and multi-component T2 relaxation measurements in a group of infants born very preterm and scanned between 30 and 40 weeks equivalent gestational age. We also investigate this biomarker on a group of seven adult controls, using a multi-modal joint model-fitting strategy. The proposed emergent biomarker is tentatively related to axonal energetic efficiency (in terms of axonal membrane charge storage) and conduction velocity and is thus linked to the tissue electrical properties, giving it a good theoretical justification as a predictive measurement of functional outcome.
NASA Astrophysics Data System (ADS)
Bidaut, Luc M.
1991-06-01
To help analyze PET data and take full advantage of their metabolic content, a system was designed and implemented to align and process data from various medical imaging modalities, particularly (but not only) for brain studies. Although this system is for now mostly used for anatomical localization, multi-modality ROIs and pharmaco-kinetic modeling, more multi-modality protocols will be implemented in the future, not only to support PET reconstruction data correction and semi-automated ROI definition, but also to improve diagnostic accuracy and to assist surgery and therapy planning.
MMX-I: data-processing software for multimodal X-ray imaging and tomography.
Bergamaschi, Antoine; Medjoubi, Kadda; Messaoudi, Cédric; Marco, Sergio; Somogyi, Andrea
2016-05-01
A new multi-platform freeware has been developed for the processing and reconstruction of scanning multi-technique X-ray imaging and tomography datasets. The software platform aims to treat different scanning imaging techniques: X-ray fluorescence, phase, absorption and dark field and any of their combinations, thus providing an easy-to-use data processing tool for the X-ray imaging user community. A dedicated data input stream copes with the input and management of large datasets (several hundred GB) collected during a typical multi-technique fast scan at the Nanoscopium beamline and even on a standard PC. To the authors' knowledge, this is the first software tool that aims at treating all of the modalities of scanning multi-technique imaging and tomography experiments.
Nolin, Frédérique; Ploton, Dominique; Wortham, Laurence; Tchelidze, Pavel; Balossier, Gérard; Banchet, Vincent; Bobichon, Hélène; Lalun, Nathalie; Terryn, Christine; Michel, Jean
2012-11-01
Cryo fluorescence imaging coupled with the cryo-EM technique (cryo-CLEM) avoids chemical fixation and embedding in plastic, and is the gold standard for correlated imaging in a close-to-native state. This multi-modal approach has not previously included elemental nanoanalysis or evaluation of water content. We developed a new approach allowing in situ analysis of targeted intracellular ions and water content at the nanoscale (EDXS and STEM dark field imaging) within domains identified by examination of specific GFP-tagged proteins. This method allows both water and ions, fundamental to cell biology, to be located and quantified at the subcellular level. We illustrate the potential of this approach by investigating changes in water and ion content in nuclear domains identified by GFP-tagged proteins in cells stressed by Actinomycin D treatment and controls. The resolution of our approach was sufficient to distinguish clumps of condensed chromatin from surrounding nucleoplasm by fluorescence imaging and to perform nanoanalysis in this targeted compartment. Copyright © 2012 Elsevier Inc. All rights reserved.
Glemser, Philip A; Pfleiderer, Michael; Heger, Anna; Tremper, Jan; Krauskopf, Astrid; Schlemmer, Heinz-Peter; Yen, Kathrin; Simons, David
2017-03-01
The aim of this multi-reader feasibility study was to evaluate new post-processing CT imaging tools in rib fracture assessment of forensic cases by analyzing detection time and diagnostic accuracy. Thirty autopsy cases (20 with and 10 without rib fractures in autopsy) were randomly selected and included in this study. All cases received a native whole body CT scan prior to the autopsy procedure, which included dissection and careful evaluation of each rib. In addition to standard transverse sections (modality A), CT images were subjected to a reconstruction algorithm to compute axial labelling of the ribs (modality B) as well as "unfolding" visualizations of the rib cage (modality C, "eagle tool"). Three radiologists with different clinical and forensic experience who were blinded to autopsy results evaluated all cases in random order of modality and case. Each reader's rib fracture assessment was evaluated against autopsy and against a CT consensus read as the radiologic reference. A detailed evaluation of relevant test parameters revealed better agreement with the CT consensus read than with autopsy. Modality C was significantly the quickest modality for rib fracture detection, despite slightly reduced statistical test parameters compared with modalities A and B. Modern CT post-processing software is able to shorten reading time and to increase sensitivity and specificity compared to standard autopsy alone. The easy-to-use eagle tool is suited for an initial rib fracture screening prior to autopsy and can therefore be beneficial for forensic pathologists.
NASA Technical Reports Server (NTRS)
Sargsyan, Ashot E.; Kramer, Larry A.; Hamilton, Douglas R.; Fogarty, Jennifer; Polk, J. D.
2010-01-01
Introduction: Intracranial pressure (ICP) elevation has been inferred or documented in a number of space crewmembers. Recent advances in noninvasive imaging technology offer new possibilities for ICP assessment. Most International Space Station (ISS) partner agencies have adopted a battery of occupational health monitoring tests including magnetic resonance imaging (MRI) pre- and postflight, and high-resolution sonography of the orbital structures in all mission phases including during flight. We hypothesize that joint consideration of data from the two techniques has the potential to improve quality and continuity of crewmember monitoring and care. Methods: Specially designed MRI and sonographic protocols were used to image eyes and optic nerves (ON) including the meningeal sheaths. Specific crewmembers' multi-modality imaging data were analyzed to identify points of mutual validation as well as unique features of a complementary nature. Results and Conclusion: Magnetic resonance imaging (MRI) and high-resolution sonography are both tomographic methods; however, images obtained by the two modalities are based on different physical phenomena and use different acquisition principles. Consideration of the images acquired by these two modalities allows cross-validation of findings related to the volume and fluid content of the ON subarachnoid space, shape of the globe, and other anatomical features of the orbit. Each of the imaging modalities also has unique advantages, making them complementary techniques.
Gainor, Sara Jane; Goins, R Turner; Miller, Lee Ann
2004-01-01
Making geriatric education available to rural faculty/preceptors, students, and practitioners presents many challenges. Often the only options considered for educating those in the health professions about geriatrics are either traditional face-to-face courses or distance education programs. The purpose of this paper was to examine the use of Web-based modules or courses and other distance learning technology in concert with traditional learning modalities. The Mountain State Geriatric Education Center explored the use of a multi-modal approach within a high-touch, high-tech framework. Our findings indicate the following: it is important to start where participants are ready to begin; flexibility and variety are needed; soliciting evaluative feedback from participants is valuable; there is a need to integrate distance learning with more traditional modalities; and a high-tech, high-touch approach provides a format which participants find acceptable, accessible, and attractive. This assertion does not rule out the use of technology for distance education but rather encourages educators to take advantage of a wide range of modalities, traditional and technological, to reach rural practitioners, faculty, and students.
NASA Astrophysics Data System (ADS)
Zhang, Pengfei; Zam, Azhar; Pugh, Edward N.; Zawadzki, Robert J.
2014-02-01
Animal models of human diseases play an important role in studying and advancing our understanding of these conditions, allowing molecular level studies of pathogenesis as well as testing of new therapies. Recently several non-invasive imaging modalities including Fundus Camera, Scanning Laser Ophthalmoscopy (SLO) and Optical Coherence Tomography (OCT) have been successfully applied to monitor changes in the retinas of the living animals in experiments in which a single animal is followed over a portion of its lifespan. Here we evaluate the capabilities and limitations of these three imaging modalities for visualization of specific structures in the mouse eye. Example images acquired from different types of mice are presented. Future directions of development for these instruments and potential advantages of multi-modal imaging systems are discussed as well.
Mei, Shuang; Wang, Yudan; Wen, Guojun
2018-04-02
Fabric defect detection is a necessary and essential step of quality control in the textile manufacturing industry. Traditional fabric inspections are usually performed by manual visual methods, which are low in efficiency and poor in precision for long-term industrial applications. In this paper, we propose an unsupervised learning-based automated approach to detect and localize fabric defects without any manual intervention. This approach is used to reconstruct image patches with a convolutional denoising autoencoder network at multiple Gaussian pyramid levels and to synthesize detection results from the corresponding resolution channels. The reconstruction residual of each image patch is used as the indicator for direct pixel-wise prediction. By segmenting and synthesizing the reconstruction residual map at each resolution level, the final inspection result can be generated. This newly developed method has several prominent advantages for fabric defect detection. First, it can be trained with only a small amount of defect-free samples. This is especially important for situations in which collecting large amounts of defective samples is difficult and impracticable. Second, owing to the multi-modal integration strategy, it is relatively more robust and accurate compared to general inspection methods (the results at each resolution level can be viewed as a modality). Third, according to our results, it can address multiple types of textile fabrics, from simple to more complex. Experimental results demonstrate that the proposed model is robust and yields good overall performance with high precision and acceptable recall rates.
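The core detection rule, a per-level reconstruction residual that is thresholded and combined across the Gaussian pyramid, can be sketched as follows. The autoencoder is abstracted as a callable, and the statistical threshold and pyramid depth are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
import cv2

def residual_map(patchwise_autoencoder, image):
    """Pixel-wise reconstruction residual for one resolution channel.
    `patchwise_autoencoder` is a hypothetical callable returning a reconstruction
    of a single-channel (grayscale) fabric image."""
    recon = patchwise_autoencoder(image)
    return np.abs(image.astype(np.float32) - recon.astype(np.float32))

def detect_defects(patchwise_autoencoder, image, levels=3, k=3.0):
    """Threshold each pyramid level's residual and OR the upsampled masks."""
    h, w = image.shape[:2]
    fused = np.zeros((h, w), dtype=bool)
    level_img = image
    for _ in range(levels):
        res = residual_map(patchwise_autoencoder, level_img)
        mask = res > res.mean() + k * res.std()      # simple statistical threshold
        fused |= cv2.resize(mask.astype(np.uint8), (w, h),
                            interpolation=cv2.INTER_NEAREST).astype(bool)
        level_img = cv2.pyrDown(level_img)           # next Gaussian pyramid level
    return fused
```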
Multi-Modality Imaging in the Evaluation and Treatment of Mitral Regurgitation.
Bouchard, Marc-André; Côté-Laroche, Claudia; Beaudoin, Jonathan
2017-10-13
Mitral regurgitation (MR) is frequent and associated with increased mortality and morbidity when severe. It may be caused by intrinsic valvular disease (primary MR) or ventricular deformation (secondary MR). Imaging has a critical role to document the severity, mechanism, and impact of MR on heart function as selected patients with MR may benefit from surgery whereas other will not. In patients planned for a surgical intervention, imaging is also important to select candidates for mitral valve (MV) repair over replacement and to predict surgical success. Although standard transthoracic echocardiography is the first-line modality to evaluate MR, newer imaging modalities like three-dimensional (3D) transesophageal echocardiography, stress echocardiography, cardiac magnetic resonance (CMR), and computed tomography (CT) are emerging and complementary tools for MR assessment. While some of these modalities can provide insight into MR severity, others will help to determine its mechanism. Understanding the advantages and limitations of each imaging modality is important to appreciate their respective role for MR assessment and help to resolve eventual discrepancies between different diagnostic methods. With the increasing use of transcatheter mitral procedures (repair or replacement) for high-surgical-risk patients, multimodality imaging has now become even more important to determine eligibility, preinterventional planning, and periprocedural guidance.
Oh, Gyungseok; Yoo, Su Woong; Jung, Yebin; Ryu, Yeon-Mi; Park, Youngrong; Kim, Sang-Yeob; Kim, Ki Hean; Kim, Sungjee; Myung, Seung-Jae; Chung, Euiheon
2014-05-01
Intravital imaging has provided molecular, cellular and anatomical insight into the study of tumors. Early detection and treatment of gastrointestinal (GI) diseases can be enhanced with specific molecular markers and endoscopic imaging modalities. We present a wide-field multi-channel fluorescence endoscope to screen the GI tract for colon cancer using multiple molecular probes targeting matrix metalloproteinases (MMP) conjugated with quantum dots (QD) in the AOM/DSS mouse model. MMP9 and MMP14 antibody (Ab)-QD conjugates demonstrate specific binding to colonic adenoma. The average target-to-background (T/B) ratios are 2.10 ± 0.28 and 1.78 ± 0.18 for MMP14 Ab-QD and MMP9 Ab-QD, respectively. The overlap between the two molecular probes is 67.7 ± 8.4%. The presence of false negatives indicates that targeting additional markers could further increase overall detection sensitivity, given the heterogeneous molecular expression in tumors. Our approach indicates potential for the screening of small or flat precancerous lesions.
Multimodal Imaging of the Normal Eye.
Kawali, Ankush; Pichi, Francesco; Avadhani, Kavitha; Invernizzi, Alessandro; Hashimoto, Yuki; Mahendradas, Padmamalini
2017-10-01
Multimodal imaging is the concept of "bundling" images obtained from various imaging modalities, viz., fundus photography, fundus autofluorescence imaging, infrared (IR) imaging, simultaneous fluorescein and indocyanine angiography, optical coherence tomography (OCT), and, more recently, OCT angiography. Each modality has its pros and cons as well as its limitations. Combining multiple imaging techniques overcomes their individual weaknesses and gives a comprehensive picture. Such an approach helps in accurate localization of a lesion and in understanding the pathology in the posterior segment. It is important to know the imaging appearance of the normal eye before one starts evaluating pathology. This article describes the multimodal imaging modalities in detail and discusses healthy-eye features as seen on the various imaging modalities mentioned above.
Gold-manganese nanoparticles for targeted diagnostic and imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murph, Simona Hunyadi
Imagine the possibility of non-invasive, non-radiation-based magnetic resonance imaging (MRI) in combating cardiac disease. Researchers at the Savannah River National Laboratory (SRNL) are developing a process that would use nanotechnology in a novel, targeted approach that would allow MRIs to be more descriptive and brighter, and to target specific organs. Researchers at SRNL have discovered a way to use multifunctional metallic gold-manganese nanoparticles to create a unique, targeted positive contrast agent. SRNL Senior Scientist Dr. Simona Hunyadi Murph says she first thought of using the nanoparticles for cardiac disease applications after learning that people who survive an infarct exhibit an up to 15 times higher rate of developing chronic heart failure, arrhythmias and/or sudden death compared to the general population. Without question, nanotechnology will revolutionize the future of technology. The development of functional nanomaterials with multi-detection modalities opens up new avenues for creating multi-purpose technologies for biomedical applications.
Sensor modeling and demonstration of a multi-object spectrometer for performance-driven sensing
NASA Astrophysics Data System (ADS)
Kerekes, John P.; Presnar, Michael D.; Fourspring, Kenneth D.; Ninkov, Zoran; Pogorzala, David R.; Raisanen, Alan D.; Rice, Andrew C.; Vasquez, Juan R.; Patel, Jeffrey P.; MacIntyre, Robert T.; Brown, Scott D.
2009-05-01
A novel multi-object spectrometer (MOS) is being explored for use as an adaptive performance-driven sensor that tracks moving targets. Developed originally for astronomical applications, the instrument utilizes an array of micromirrors to reflect light to a panchromatic imaging array. When an object of interest is detected, the individual micromirrors imaging the object are tilted to reflect the light to a spectrometer to collect a full spectrum. This paper will present example sensor performance from empirical data collected in laboratory experiments, as well as our approach to designing optical and radiometric models of the MOS channels and the micromirror array. Simulation of moving vehicles in a high-fidelity hyperspectral scene is used to generate a dynamic video input for the adaptive sensor. Performance-driven algorithms for feature-aided target tracking and modality selection exploit multiple electromagnetic observables to track moving vehicle targets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, S; Tianjin University, Tianjin; Hara, W
Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck problem still remains, which is the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1 and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1 and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and the remaining 1 as test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU>200), the proposed method had an accuracy of 84% and a sensitivity of 73% at a specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.
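The fusion of the two conditional probabilities can be sketched as a discretized posterior over candidate HU values whose mean gives the estimate. The Gaussian likelihood shapes and the HU grid below are illustrative assumptions, not the authors' fitted densities.

```python
import numpy as np

def fuse_electron_density(hu_grid, p_hu_given_intensity, p_hu_given_location):
    """Combine intensity-based and atlas/location-based conditional densities
    into a posterior over candidate HU values and return its mean."""
    posterior = p_hu_given_intensity * p_hu_given_location
    posterior /= posterior.sum() + 1e-12
    return np.sum(hu_grid * posterior)

# Toy voxel: T1/T2 intensities suggest soft tissue, atlas location suggests bone.
hu_grid = np.linspace(-1000, 2000, 601)
p_intensity = np.exp(-0.5 * ((hu_grid - 40) / 80.0) ** 2)   # hypothetical likelihood
p_location = np.exp(-0.5 * ((hu_grid - 700) / 300.0) ** 2)  # hypothetical atlas term
estimated_hu = fuse_electron_density(hu_grid, p_intensity, p_location)
```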
NASA Astrophysics Data System (ADS)
Emge, Darren K.; Adalı, Tülay
2014-06-01
As the availability and use of imaging methodologies continue to increase, there is a fundamental need to jointly analyze data collected from multiple modalities. This analysis is further complicated when the size or resolution of the images differs, implying that the observation lengths of the individual modalities can vary widely. To address this expanding landscape, we introduce the multiset singular value decomposition (MSVD), which can perform a joint analysis on any number of modalities regardless of their individual observation lengths. Simulations illustrate the inter-modal relationships across the different modalities that are revealed by the MSVD. We apply the MSVD to forensic fingerprint analysis, showing that MSVD joint analysis successfully identifies relevant similarities for further analysis, significantly reducing the processing time required. This reduction takes the technique from a laboratory method to a useful forensic tool with applications across the law enforcement and security regimes.
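A generic joint-SVD sketch is given below as a schematic stand-in: modalities that share samples but have different observation lengths are column-centred, concatenated along their feature axes and decomposed once. This is not an implementation of the MSVD itself; dataset sizes and names are hypothetical.

```python
import numpy as np

def joint_svd(datasets, rank=3):
    """Jointly decompose multiple modalities sharing samples (rows) but having
    different observation lengths (columns): centre, stack along features,
    and take a truncated SVD of the stacked matrix."""
    centred = [d - d.mean(axis=0, keepdims=True) for d in datasets]
    stacked = np.concatenate(centred, axis=1)
    u, s, vt = np.linalg.svd(stacked, full_matrices=False)
    # Split the right singular vectors back into per-modality loadings.
    splits = np.cumsum([d.shape[1] for d in datasets])[:-1]
    loadings = np.split(vt[:rank].T, splits, axis=0)
    return u[:, :rank] * s[:rank], loadings   # shared scores, per-modality loadings

# Hypothetical example: 50 fingerprint samples captured by two sensors of
# different resolution, hence different observation lengths.
optical = np.random.rand(50, 4096)    # e.g. flattened 64x64 optical captures
chemical = np.random.rand(50, 900)    # e.g. flattened 30x30 spectral maps
scores, (load_optical, load_chemical) = joint_svd([optical, chemical], rank=3)
```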
Armstrong, Anderson C; Gjesdal, Ola; Almeida, André; Nacif, Marcelo; Wu, Colin; Bluemke, David A; Brumback, Lyndia; Lima, João A C
2014-01-01
Left ventricular mass (LVM) and hypertrophy (LVH) are important parameters, but their use is surrounded by controversies. We compare LVM by echocardiography and cardiac magnetic resonance (CMR), investigating reproducibility aspects and the effect of echocardiography image quality. We also compare indexing methods within and between imaging modalities for classification of LVH and cardiovascular risk. The Multi-Ethnic Study of Atherosclerosis enrolled 880 participants in Baltimore city; 146 had echocardiograms and CMR on the same day. LVM was then assessed using standard techniques. Echocardiography image quality was rated (good/limited) according to the parasternal view. LVH was defined after indexing LVM to body surface area, height^1.7, height^2.7, or to the predicted LVM from a reference group. Participants were classified for cardiovascular risk according to Framingham score. Pearson's correlation, Bland-Altman plots, percent agreement, and kappa coefficient assessed agreement within and between modalities. Left ventricular mass by echocardiography (140 ± 40 g) and by CMR were correlated (r = 0.8, P < 0.001) regardless of the echocardiography image quality. The reproducibility profile had strong correlations and agreement for both modalities. Image quality groups had similar characteristics; those with good images showed slightly better agreement with CMR. The prevalence of LVH tended to be higher with higher cardiovascular risk. The agreement for LVH between imaging modalities ranged from 77% to 98% and the kappa coefficient from 0.10 to 0.76. Echocardiography has a reliable performance for LVM assessment and classification of LVH, with limited influence of image quality. Echocardiography and CMR differ in the assessment of LVH, and additional differences arise from the indexing methods. © 2013. This article is a U.S. Government work and is in the public domain in the USA.
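The indexing arithmetic the comparison rests on can be written in a few lines. The LVH cut-off used below is a placeholder for illustration, not one of the study's reference limits.

```python
def lvm_indices(lvm_g, bsa_m2, height_m):
    """Return LVM indexed to body surface area, height^1.7 and height^2.7."""
    return {
        "lvm_bsa": lvm_g / bsa_m2,            # g/m^2
        "lvm_ht17": lvm_g / height_m ** 1.7,  # g/m^1.7
        "lvm_ht27": lvm_g / height_m ** 2.7,  # g/m^2.7
    }

# Hypothetical participant; the threshold below is a placeholder only.
idx = lvm_indices(lvm_g=140.0, bsa_m2=1.9, height_m=1.70)
lvh_by_bsa = idx["lvm_bsa"] > 95.0   # placeholder cut-off, not the study's value
```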
Multimodal Diffuse Optical Imaging
NASA Astrophysics Data System (ADS)
Intes, Xavier; Venugopal, Vivek; Chen, Jin; Azar, Fred S.
Diffuse optical imaging, particularly diffuse optical tomography (DOT), is an emerging clinical modality capable of providing unique functional information, at a relatively low cost, and with nonionizing radiation. Multimodal diffuse optical imaging has enabled a synergistic combination of functional and anatomical information: the quality of DOT reconstructions has been significantly improved by incorporating the structural information derived by the combined anatomical modality. In this chapter, we will review the basic principles of diffuse optical imaging, including instrumentation and reconstruction algorithm design. We will also discuss the approaches for multimodal imaging strategies that integrate DOI with clinically established modalities. The merit of the multimodal imaging approaches is demonstrated in the context of optical mammography, but the techniques described herein can be translated to other clinical scenarios such as brain functional imaging or muscle functional imaging.
Curvers, W L; Singh, R; Song, L-M Wong-Kee; Wolfsen, H C; Ragunath, K; Wang, K; Wallace, M B; Fockens, P; Bergman, J J G H M
2008-02-01
To investigate the diagnostic potential of endoscopic tri-modal imaging and the relative contribution of each imaging modality (i.e. high-resolution endoscopy (HRE), autofluorescence imaging (AFI) and narrow-band imaging (NBI)) for the detection of early neoplasia in Barrett's oesophagus. Prospective multi-centre study. Tertiary referral centres. 84 Patients with Barrett's oesophagus. The Barrett's oesophagus was inspected with HRE followed by AFI. All lesions detected with HRE and/or AFI were subsequently inspected in detail by NBI for the presence of abnormal mucosal and/or microvascular patterns. Biopsies were obtained from all suspicious lesions for blinded histopathological assessment followed by random biopsies. (1) Number of patients with early neoplasia diagnosed by HRE and AFI; (2) number of lesions with early neoplasia detected with HRE and AFI; and (3) reduction of false positive AFI findings after NBI. Per patient analysis: AFI identified all 16 patients with early neoplasia identified with HRE and detected an additional 11 patients with early neoplasia that were not identified with HRE. In three patients no abnormalities were seen but random biopsies revealed HGIN. After HRE inspection, AFI detected an additional 102 lesions; 19 contained HGIN/EC (false positive rate of AFI after HRE: 81%). Detailed inspection with NBI reduced this false positive rate to 26%. In this international multi-centre study, the addition of AFI to HRE increased the detection of both the number of patients and the number of lesions with early neoplasia in patients with Barrett's oesophagus. The false positive rate of AFI was reduced after detailed inspection with NBI.
NASA Astrophysics Data System (ADS)
Wáng, Yì Xiáng J.; Idée, Jean-Marc; Corot, Claire
2015-10-01
The design of theranostics and of dual- or multi-modality contrast agents is currently among the hottest topics in biotechnology and biomaterials science. However, for single-entity theranostics, the right ratio of diagnostic to therapeutic component may not always be realized in a composite suitable for clinical application. For dual/multiple-modality molecular imaging agents, there is an optimal time window for imaging after in vivo administration; when an agent is imaged by one modality, its pharmacokinetics may not allow imaging by another modality. Due to reticuloendothelial system clearance, efficient in vivo delivery of nanoparticles to the lesion site is sometimes difficult. The toxicity of these entities also remains poorly understood. While the medical need for theranostics is acknowledged, the business model remains to be established. There is an urgent need for a global and internationally harmonized re-evaluation of the approval and marketing processes of theranostics. However, a reasonable expectation exists that, in the near future, the current obstacles will be removed, thus allowing the wide use of these very promising agents.
A patient-specific segmentation framework for longitudinal MR images of traumatic brain injury
NASA Astrophysics Data System (ADS)
Wang, Bo; Prastawa, Marcel; Irimia, Andrei; Chambers, Micah C.; Vespa, Paul M.; Van Horn, John D.; Gerig, Guido
2012-02-01
Traumatic brain injury (TBI) is a major cause of death and disability worldwide. Robust, reproducible segmentations of MR images with TBI are crucial for quantitative analysis of recovery and treatment efficacy. However, this is a significant challenge due to severe anatomy changes caused by edema (swelling), bleeding, tissue deformation, skull fracture, and other effects related to head injury. In this paper, we introduce a multi-modal image segmentation framework for longitudinal TBI images. The framework is initialized through manual input of primary lesion sites at each time point, which are then refined by a joint approach composed of Bayesian segmentation and construction of a personalized atlas. The personalized atlas construction estimates the average of the posteriors of the Bayesian segmentation at each time point and warps the average back to each time point to provide the updated priors for Bayesian segmentation. The difference between our approach and segmenting longitudinal images independently is that we use the information from all time points to improve the segmentations. Given a manual initialization, our framework automatically segments healthy structures (white matter, grey matter, cerebrospinal fluid) as well as different lesions such as hemorrhagic lesions and edema. Our framework can handle different sets of modalities at each time point, which provides flexibility in analyzing clinical scans. We show results on three subjects with acute baseline scans and chronic follow-up scans. The results demonstrate that joint analysis of all the points yields improved segmentation compared to independent analysis of the two time points.
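The alternation between Bayesian segmentation and personalized-atlas construction can be sketched schematically: posteriors from all time points are averaged in a common space and warped back as the next round's priors. The warp operators are abstracted as callables, and everything else is an illustrative assumption rather than the authors' implementation.

```python
import numpy as np

def refine_priors(posteriors, warps_to_common, warps_to_timepoint):
    """One iteration of personalized-atlas construction.

    posteriors: list over time points, each an array (K_classes, X, Y, Z)
    warps_to_common / warps_to_timepoint: callables mapping probability maps
    between each time point and the common (atlas) space (abstracted here).
    """
    in_common = [warp(post) for warp, post in zip(warps_to_common, posteriors)]
    personalized_atlas = np.mean(in_common, axis=0)
    # Warp the averaged atlas back to each time point as the updated prior.
    new_priors = [warp(personalized_atlas) for warp in warps_to_timepoint]
    # Renormalize so class probabilities sum to one at every voxel.
    return [p / np.clip(p.sum(axis=0, keepdims=True), 1e-8, None)
            for p in new_priors]
```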
Multi-test cervical cancer diagnosis with missing data estimation
NASA Astrophysics Data System (ADS)
Xu, Tao; Huang, Xiaolei; Kim, Edward; Long, L. Rodney; Antani, Sameer
2015-03-01
Cervical cancer is a leading most common type of cancer for women worldwide. Existing screening programs for cervical cancer suffer from low sensitivity. Using images of the cervix (cervigrams) as an aid in detecting pre-cancerous changes to the cervix has good potential to improve sensitivity and help reduce the number of cervical cancer cases. In this paper, we present a method that utilizes multi-modality information extracted from multiple tests of a patient's visit to classify the patient visit to be either low-risk or high-risk. Our algorithm integrates image features and text features to make a diagnosis. We also present two strategies to estimate the missing values in text features: Image Classifier Supervised Mean Imputation (ICSMI) and Image Classifier Supervised Linear Interpolation (ICSLI). We evaluate our method on a large medical dataset and compare it with several alternative approaches. The results show that the proposed method with ICSLI strategy achieves the best result of 83.03% specificity and 76.36% sensitivity. When higher specificity is desired, our method can achieve 90% specificity with 62.12% sensitivity.
Modal Identification in an Automotive Multi-Component System Using HS 3D-DIC
López-Alba, Elías; Felipe-Sesé, Luis; Díaz, Francisco A.
2018-01-01
The modal characterization of automotive lighting systems is difficult with attached sensors because of the low weight of the elements that compose the component and the intricate access needed to place them. In experimental modal analysis, high-speed 3D digital image correlation (HS 3D-DIC) is attracting attention since its main advantage over other techniques is that it provides full-field, contactless measurements of 3D displacements. Different methodologies have been published that perform modal identification, i.e., estimation of natural frequencies, damping ratios, and mode shapes, using the full-field information. In this work, experimental modal analysis has been performed on a multi-component automotive lighting system using HS 3D-DIC. Base motion excitation was applied to simulate operating conditions. A recently validated methodology has been employed for modal identification using transmissibility functions, i.e., the transfer functions from base motion tests. Results make it possible to identify the local and global behavior of the different elements, made of injected polymeric and metallic materials. PMID:29401725
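Transmissibility-based identification starts from transfer functions between the base motion and each measured point. A minimal estimate via cross- and auto-spectra is sketched below using scipy's Welch estimators, with peak picking as a crude stand-in for the full modal identification; the signals and sampling rate are hypothetical.

```python
import numpy as np
from scipy.signal import csd, welch, find_peaks

def transmissibility(base_motion, response, fs, nperseg=1024):
    """Estimate T(f) = S_xy(f) / S_xx(f) between base excitation x and response y."""
    f, s_xy = csd(base_motion, response, fs=fs, nperseg=nperseg)
    _, s_xx = welch(base_motion, fs=fs, nperseg=nperseg)
    return f, s_xy / s_xx

# Hypothetical signals: base excitation and one DIC-derived displacement trace.
fs = 2000.0
t = np.arange(0, 10, 1 / fs)
base = np.random.randn(t.size)
resp = np.random.randn(t.size)
f, T = transmissibility(base, resp, fs)
peaks, _ = find_peaks(np.abs(T))      # candidate natural frequencies (crude)
candidate_freqs = f[peaks]
```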
Context-Aware Fusion of RGB and Thermal Imagery for Traffic Monitoring
Alldieck, Thiemo; Bahnsen, Chris H.; Moeslund, Thomas B.
2016-01-01
In order to enable a robust 24-h monitoring of traffic under changing environmental conditions, it is beneficial to observe the traffic scene using several sensors, preferably from different modalities. To fully benefit from multi-modal sensor output, however, one must fuse the data. This paper introduces a new approach for fusing color RGB and thermal video streams by using not only the information from the videos themselves, but also the available contextual information of a scene. The contextual information is used to judge the quality of a particular modality and guides the fusion of two parallel segmentation pipelines of the RGB and thermal video streams. The potential of the proposed context-aware fusion is demonstrated by extensive tests of quantitative and qualitative characteristics on existing and novel video datasets and benchmarked against competing approaches to multi-modal fusion. PMID:27869730
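The quality-guided fusion step can be sketched as a weighted combination of the two pipelines' per-pixel foreground probabilities, with the weight derived from contextual cues (here reduced to a single scalar per modality). The cue-to-weight mapping and all values are illustrative assumptions.

```python
import numpy as np

def fuse_segmentations(rgb_prob, thermal_prob, rgb_quality, thermal_quality):
    """Blend per-pixel foreground probabilities from the RGB and thermal
    pipelines, weighting each modality by a context-derived quality score."""
    w_rgb = rgb_quality / (rgb_quality + thermal_quality + 1e-8)
    fused = w_rgb * rgb_prob + (1.0 - w_rgb) * thermal_prob
    return fused > 0.5

# Hypothetical context: low ambient light degrades the RGB stream at night.
rgb_prob = np.random.rand(480, 640)
thermal_prob = np.random.rand(480, 640)
mask = fuse_segmentations(rgb_prob, thermal_prob,
                          rgb_quality=0.2, thermal_quality=0.9)
```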
Dual-Modality, Dual-Functional Nanoprobes for Cellular and Molecular Imaging
Menon, Jyothi U.; Gulaka, Praveen K.; McKay, Madalyn A.; Geethanath, Sairam; Liu, Li; Kodibagkar, Vikram D.
2012-01-01
An emerging need for evaluation of promising cellular therapies is a non-invasive method to image the movement and health of cells following transplantation. However, the use of a single modality to serve this purpose may not be advantageous as it may convey inaccurate or insufficient information. Multi-modal imaging strategies are becoming more popular for in vivo cellular and molecular imaging because of their improved sensitivity, higher resolution and structural/functional visualization. This study aims at formulating Nile Red doped hexamethyldisiloxane (HMDSO) nanoemulsions as dual modality (Magnetic Resonance Imaging/Fluorescence), dual-functional (oximetry/detection) nanoprobes for cellular and molecular imaging. HMDSO nanoprobes were prepared using a HS15-lecithin combination as surfactant and showed an average radius of 71±39 nm by dynamic light scattering and in vitro particle stability in human plasma over 24 hrs. They were found to readily localize in the cytosol of MCF7-GFP cells within 18 minutes of incubation. As proof of principle, these nanoprobes were successfully used for fluorescence imaging and for measuring pO2 changes in cells by magnetic resonance imaging, in vitro, thus showing potential for in vivo applications. PMID:23382776
Biological Parametric Mapping: A Statistical Toolbox for Multi-Modality Brain Image Analysis
Casanova, Ramon; Ryali, Srikanth; Baer, Aaron; Laurienti, Paul J.; Burdette, Jonathan H.; Hayasaka, Satoru; Flowers, Lynn; Wood, Frank; Maldjian, Joseph A.
2006-01-01
In recent years multiple brain MR imaging modalities have emerged; however, analysis methodologies have mainly remained modality specific. In addition, when comparing across imaging modalities, most researchers have been forced to rely on simple region-of-interest type analyses, which do not allow the voxel-by-voxel comparisons necessary to answer more sophisticated neuroscience questions. To overcome these limitations, we developed a toolbox for multimodal image analysis called biological parametric mapping (BPM), based on a voxel-wise use of the general linear model. The BPM toolbox incorporates information obtained from other modalities as regressors in a voxel-wise analysis, thereby permitting investigation of more sophisticated hypotheses. The BPM toolbox has been developed in MATLAB with a user friendly interface for performing analyses, including voxel-wise multimodal correlation, ANCOVA, and multiple regression. It has a high degree of integration with the SPM (statistical parametric mapping) software relying on it for visualization and statistical inference. Furthermore, statistical inference for a correlation field, rather than a widely-used T-field, has been implemented in the correlation analysis for more accurate results. An example with in-vivo data is presented demonstrating the potential of the BPM methodology as a tool for multimodal image analysis. PMID:17070709
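The voxel-wise use of the general linear model with a second modality as regressor reduces, at each voxel, to an ordinary least-squares fit. A minimal numpy sketch is shown below; the toolbox's inference machinery (correlation fields, SPM integration) is not reproduced, and all data sizes are hypothetical.

```python
import numpy as np

def bpm_voxelwise_glm(primary, covariate_modality, group_covariates):
    """At each voxel fit: primary ~ [1, covariate_modality_voxel, group_covariates].

    primary, covariate_modality: arrays (n_subjects, n_voxels)
    group_covariates: array (n_subjects, n_regressors), e.g. age or group coding
    Returns the beta of the voxel-wise imaging covariate at every voxel.
    """
    n_subj, n_vox = primary.shape
    betas = np.empty(n_vox)
    for v in range(n_vox):
        design = np.column_stack([np.ones(n_subj),
                                  covariate_modality[:, v],
                                  group_covariates])
        coef, *_ = np.linalg.lstsq(design, primary[:, v], rcond=None)
        betas[v] = coef[1]     # effect of the second modality at this voxel
    return betas

# Hypothetical data: 30 subjects, 500 voxels, one additional covariate (age).
fmri = np.random.rand(30, 500)
vbm = np.random.rand(30, 500)
age = np.random.rand(30, 1)
betas = bpm_voxelwise_glm(fmri, vbm, age)
```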
MIND Demons for MR-to-CT Deformable Image Registration In Image-Guided Spine Surgery
Reaungamornrat, S.; De Silva, T.; Uneri, A.; Wolinsky, J.-P.; Khanna, A. J.; Kleinszig, G.; Vogt, S.; Prince, J. L.; Siewerdsen, J. H.
2016-01-01
Purpose Localization of target anatomy and critical structures defined in preoperative MR images can be achieved by means of multi-modality deformable registration to intraoperative CT. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration. Method The method, called MIND Demons, solves for the deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness on the velocity fields and the diffeomorphisms, a modality-insensitive similarity function suitable to multi-modality images, and constraints on geodesics in Lagrangian coordinates. Direct optimization (without relying on an exponential map of stationary velocity fields used in conventional diffeomorphic Demons) is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, in phantom experiments, and clinical studies emulating application in image-guided spine surgery, and results were compared to conventional mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, and normalized MI (NMI) Demons. Result The method yielded sub-voxel invertibility (0.006 mm) and nonsingular spatial Jacobians with capability to preserve local orientation and topology. It demonstrated improved registration accuracy in comparison to the reference methods, with mean target registration error (TRE) of 1.5 mm compared to 10.9, 2.3, and 4.6 mm for MI FFD, LMI FFD, and NMI Demons methods, respectively. Validation in clinical studies demonstrated realistic deformation with sub-voxel TRE in cases of cervical, thoracic, and lumbar spine. Conclusions A modality-independent deformable registration method has been developed to estimate a viscoelastic diffeomorphic map between preoperative MR and intraoperative CT. The method yields registration accuracy suitable to application in image-guided spine surgery across a broad range of anatomical sites and modes of deformation. PMID:27330239
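One ingredient of the energy, the robust Huber penalty on descriptor differences, is simple enough to sketch. Treating the MIND descriptors as precomputed arrays and the threshold value below are illustrative assumptions, not the published parameterization.

```python
import numpy as np

def huber_similarity(mind_fixed, mind_moving, delta=0.1):
    """Robust Huber penalty between (pre-computed) MIND descriptor images:
    quadratic for small differences, linear for large ones, summed over voxels
    and descriptor channels."""
    diff = np.abs(mind_fixed - mind_moving)
    quadratic = 0.5 * diff ** 2
    linear = delta * (diff - 0.5 * delta)
    return np.where(diff <= delta, quadratic, linear).sum()

# Hypothetical descriptors: 6 MIND channels on a 64^3 grid for MR and CT.
d_mr = np.random.rand(6, 64, 64, 64)
d_ct = np.random.rand(6, 64, 64, 64)
cost = huber_similarity(d_mr, d_ct, delta=0.1)
```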
Karayanidis, Frini; Keuken, Max C; Wong, Aaron; Rennie, Jaime L; de Hollander, Gilles; Cooper, Patrick S; Ross Fulham, W; Lenroot, Rhoshel; Parsons, Mark; Phillips, Natalie; Michie, Patricia T; Forstmann, Birte U
2016-01-01
Our understanding of the complex interplay between structural and functional organisation of brain networks is being advanced by the development of novel multi-modal analysis approaches. The Age-ility Project (Phase 1) data repository offers open access to structural MRI, diffusion MRI, and resting-state fMRI scans, as well as resting-state EEG recorded from the same community participants (n=131, 15-35 y, 66 male). Raw imaging and electrophysiological data as well as essential demographics are made available via the NITRC website. All data have been reviewed for artifacts using a rigorous quality control protocol, and detailed case notes are provided. Copyright © 2015. Published by Elsevier Inc.
Advances in combined endoscopic fluorescence confocal microscopy and optical coherence tomography
NASA Astrophysics Data System (ADS)
Risi, Matthew D.
Confocal microendoscopy provides real-time high resolution cellular level images via a minimally invasive procedure. Results from an ongoing clinical study to detect ovarian cancer with a novel confocal fluorescent microendoscope are presented. As an imaging modality, confocal fluorescence microendoscopy typically requires exogenous fluorophores, has a relatively limited penetration depth (100 μm), and often employs specialized aperture configurations to achieve real-time imaging in vivo. Two primary research directions designed to overcome these limitations and improve diagnostic capability are presented. Ideal confocal imaging performance is obtained with a scanning point illumination and confocal aperture, but this approach is often unsuitable for real-time, in vivo biomedical imaging. By scanning a slit aperture in one direction, image acquisition speeds are greatly increased, but at the cost of a reduction in image quality. The design, implementation, and experimental verification of a custom multi-point-scanning modification to a slit-scanning multi-spectral confocal microendoscope is presented. This new design improves the axial resolution while maintaining real-time imaging rates. In addition, the multi-point aperture geometry greatly reduces the effects of tissue scatter on imaging performance. Optical coherence tomography (OCT) has seen wide acceptance and FDA approval as a technique for ophthalmic retinal imaging, and has been adapted for endoscopic use. As a minimally invasive imaging technique, it provides morphological characteristics of tissues at a cellular level without requiring the use of exogenous fluorophores. OCT is capable of imaging deeper into biological tissue (˜1-2 mm) than confocal fluorescence microscopy. A theoretical analysis of the use of a fiber-bundle in spectral-domain OCT systems is presented. The fiber-bundle enables a flexible endoscopic design and provides fast, parallelized acquisition of the optical coherence tomography data. However, the multi-mode characteristic of the fibers in the fiber-bundle affects the depth sensitivity of the imaging system. A description of light interference in a multi-mode fiber is presented along with numerical simulations and experimental studies to illustrate the theoretical analysis.
Adali, Tülay; Levin-Schwartz, Yuri; Calhoun, Vince D.
2015-01-01
Fusion of information from multiple sets of data in order to extract a set of features that are most useful and relevant for the given task is inherent to many problems we deal with today. Since, usually, very little is known about the actual interaction among the datasets, it is highly desirable to minimize the underlying assumptions. This has been the main reason for the growing importance of data-driven methods, and in particular of independent component analysis (ICA) as it provides useful decompositions with a simple generative model and using only the assumption of statistical independence. A recent extension of ICA, independent vector analysis (IVA) generalizes ICA to multiple datasets by exploiting the statistical dependence across the datasets, and hence, as we discuss in this paper, provides an attractive solution to fusion of data from multiple datasets along with ICA. In this paper, we focus on two multivariate solutions for multi-modal data fusion that let multiple modalities fully interact for the estimation of underlying features that jointly report on all modalities. One solution is the Joint ICA model that has found wide application in medical imaging, and the second one is the Transposed IVA model introduced here as a generalization of an approach based on multi-set canonical correlation analysis. In the discussion, we emphasize the role of diversity in the decompositions achieved by these two models, present their properties and implementation details to enable the user to make informed decisions on the selection of a model along with its associated parameters. Discussions are supported by simulation results to help highlight the main issues in the implementation of these methods. PMID:26525830
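The Joint ICA model mentioned here is commonly implemented by concatenating the modalities' features for each subject and running a single spatial ICA on the stacked matrix, so that every component has a map spanning both modalities plus one subject loading. The sketch below follows that common reading with sklearn's FastICA; all sizes and names are illustrative, and it should not be taken as the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import FastICA

def joint_ica(modality_a, modality_b, n_components=5):
    """Joint ICA on two modalities sharing subjects: stack features, unmix once,
    then split the joint component maps back per modality."""
    stacked = np.hstack([modality_a, modality_b])        # (subjects, feat_a + feat_b)
    ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
    maps = ica.fit_transform(stacked.T).T                # (components, feat_a + feat_b)
    loadings = ica.mixing_                               # (subjects, components)
    split = modality_a.shape[1]
    return loadings, maps[:, :split], maps[:, split:]

# Hypothetical fMRI contrast maps and EEG spectra for 40 subjects.
fmri_feats = np.random.rand(40, 2000)
eeg_feats = np.random.rand(40, 300)
loadings, src_fmri, src_eeg = joint_ica(fmri_feats, eeg_feats, n_components=5)
```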
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lapuyade-Lahorgue, J; Ruan, S; Li, H
Purpose: Multi-tracer PET imaging is receiving more attention in radiotherapy because it provides additional tumor volume information, such as glucose metabolism and oxygenation. However, automatic PET-based tumor segmentation is still a very challenging problem. We propose a statistical fusion approach to jointly segment the sub-areas of tumors from FDG and FMISO PET images. Methods: Non-standardized Gamma distributions are convenient for modeling intensity distributions in PET. Because a strong correlation exists between multi-tracer PET images, we propose a new fusion method based on copulas, which can represent the dependency between different tracers. The Hidden Markov Field (HMF) model is used to represent the spatial relationship between PET image voxels and the statistical dynamics of intensities for each modality. Real PET images of five patients with FDG and FMISO are used to evaluate our method quantitatively and qualitatively. A comparison between individual and multi-tracer segmentations was conducted to show the advantages of the proposed fusion method. Results: The segmentation results show that fusion with a Gaussian copula achieves a high Dice coefficient of 0.84, compared with 0.54 and 0.3 for monomodal segmentation based on the individual FDG and FMISO PET images. In addition, high correlation coefficients (0.75 to 0.91) for the Gaussian copula across all five test patients indicate the dependency between tumor regions in the multi-tracer PET images. Conclusion: This study shows that multi-tracer PET imaging can efficiently improve the segmentation of tumor regions where hypoxia and elevated glucose consumption are present at the same time. Introducing copulas to model the dependency between two tracers makes it possible to take information from both tracers into account simultaneously and to deal with the two pathological phenomena. Future work will consider other families of copulas, such as spherical and Archimedean copulas, and will aim to eliminate the partial volume effect by considering dependency between neighboring voxels.
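Two of the ingredients mentioned above, a non-standardized Gamma intensity model and the Dice overlap used for evaluation, are easy to illustrate in isolation; the sketch below uses synthetic arrays rather than patient data, and the Gamma parameters and mask sizes are arbitrary.

```python
# Illustrative Gamma intensity fit and Dice overlap (synthetic data only).
import numpy as np
from scipy import stats

def dice(seg, ref):
    """Dice coefficient between two boolean masks."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())

rng = np.random.default_rng(1)
intensities = rng.gamma(shape=3.0, scale=2.0, size=5000)   # stand-in PET intensities
shape, loc, scale = stats.gamma.fit(intensities, floc=0)   # Gamma model of one tracer

seg = rng.random((64, 64)) > 0.6    # placeholder fused segmentation
ref = rng.random((64, 64)) > 0.6    # placeholder reference segmentation
print(f"Gamma shape={shape:.2f}, scale={scale:.2f}, Dice={dice(seg, ref):.2f}")
```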
Multi-modal gesture recognition using integrated model of motion, audio and video
NASA Astrophysics Data System (ADS)
Goutsu, Yusuke; Kobayashi, Takaki; Obara, Junya; Kusajima, Ikuo; Takeichi, Kazunari; Takano, Wataru; Nakamura, Yoshihiko
2015-07-01
Gesture recognition is used in many practical applications such as human-robot interaction, medical rehabilitation and sign language. With increasing motion sensor development, multiple data sources have become available, which leads to the rise of multi-modal gesture recognition. Since our previous approach to gesture recognition depends on a unimodal system, it is difficult to classify similar motion patterns. In order to solve this problem, a novel approach which integrates motion, audio and video models is proposed, using a dataset captured by a Kinect sensor. The proposed system can recognize observed gestures by using the three models. Recognition results of the three models are integrated using the proposed framework to produce the final result. The motion and audio models are learned using Hidden Markov Models, and a Random Forest classifier is used to learn the video model. In the experiments to test the performance of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All experiments are conducted on the dataset provided by the organizer of the Multi-Modal Gesture Recognition Challenge (MMGRC) workshop. The comparison results show that the multi-modal model composed of the three models achieves the highest recognition rate. This improvement indicates that the complementary relationship among the three models improves the accuracy of gesture recognition. The proposed system provides the application technology to understand human actions of daily life more precisely.
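The decision-level integration step can be sketched as a weighted average of per-modality class probabilities; the arrays below are random placeholders standing in for the HMM (motion, audio) and Random Forest (video) outputs, and the weights are illustrative, not values from the paper.

```python
# Late (decision-level) fusion of per-modality class-probability arrays.
import numpy as np

def fuse_scores(score_list, weights=None):
    """Weighted average of (n_samples, n_classes) score arrays, one per modality."""
    scores = np.stack(score_list)                 # (n_modalities, n_samples, n_classes)
    if weights is None:
        weights = np.ones(len(score_list)) / len(score_list)
    fused = np.tensordot(weights, scores, axes=1) # (n_samples, n_classes)
    return fused.argmax(axis=1), fused

rng = np.random.default_rng(0)
p_motion, p_audio, p_video = (rng.dirichlet(np.ones(5), size=10) for _ in range(3))
labels, fused = fuse_scores([p_motion, p_audio, p_video], weights=[0.4, 0.2, 0.4])
```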
Wang, Zhengxia; Zhu, Xiaofeng; Adeli, Ehsan; Zhu, Yingying; Nie, Feiping; Munsell, Brent
2018-01-01
Graph-based transductive learning (GTL) is a powerful machine learning technique that is used when sufficient training data is not available. In particular, conventional GTL approaches first construct a fixed inter-subject relation graph that is based on similarities in voxel intensity values in the feature domain, which can then be used to propagate the known phenotype data (i.e., clinical scores and labels) from the training data to the testing data in the label domain. However, this type of graph is exclusively learned in the feature domain, and primarily due to outliers in the observed features, may not be optimal for label propagation in the label domain. To address this limitation, a progressive GTL (pGTL) method is proposed that gradually finds an intrinsic data representation that more accurately aligns imaging features with the phenotype data. In general, optimal feature-to-phenotype alignment is achieved using an iterative approach that: (1) refines inter-subject relationships observed in the feature domain by using the learned intrinsic data representation in the label domain, (2) updates the intrinsic data representation from the refined inter-subject relationships, and (3) verifies the intrinsic data representation on the training data to guarantee an optimal classification when applied to testing data. Additionally, the iterative approach is extended to multi-modal imaging data to further improve pGTL classification accuracy. Using Alzheimer’s disease and Parkinson’s disease study data, the classification accuracy of the proposed pGTL method is compared to several state-of-the-art classification methods, and the results show pGTL can more accurately identify subjects, even at different progression stages, in these two study data sets. PMID:28551556
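For context, a conventional GTL baseline of the kind pGTL improves upon, i.e., a fixed feature-domain affinity graph with label propagation to unlabeled subjects, can be sketched with scikit-learn; the feature matrix, labels, and kernel parameters below are synthetic and illustrative, and this is not the proposed pGTL algorithm.

```python
# Conventional graph-based transductive learning baseline (not pGTL itself).
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 30))        # imaging feature vectors (illustrative)
y = np.full(100, -1)                      # -1 marks unlabeled (testing) subjects
y[:20] = rng.integers(0, 2, 20)           # known clinical labels for training subjects

# Fixed RBF affinity graph built in the feature domain, then label propagation.
model = LabelSpreading(kernel="rbf", gamma=0.5, alpha=0.2)
model.fit(X, y)
predicted = model.transduction_[20:]      # propagated labels for the unlabeled subjects
```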
Smoking Cessation: Uni-Modal and Multi-Modal Hypnotic and Non-Hypnotic Approaches.
ERIC Educational Resources Information Center
Habicht, Manuela H.
A survey of Queensland's population in 1993 determined that 24% of the adults were smokers. National data compiled in 1992 indicated that 72% of the drug-related deaths were related to tobacco use. In light of the need for effective smoking cessation approaches, a literature review was undertaken to determine the efficacy of hypnotic and…
Feature-based fusion of medical imaging data.
Calhoun, Vince D; Adali, Tülay
2009-09-01
The acquisition of multiple brain imaging types for a given study is a very common practice. There have been a number of approaches proposed for combining or fusing multitask or multimodal information. These can be roughly divided into those that attempt to study convergence of multimodal imaging, for example, how function and structure are related in the same region of the brain, and those that attempt to study the complementary nature of modalities, for example, utilizing temporal EEG information and spatial functional magnetic resonance imaging information. Within each of these categories, one can attempt data integration (the use of one imaging modality to improve the results of another) or true data fusion (in which multiple modalities are utilized to inform one another). We review both approaches and present a recent computational approach that first preprocesses the data to compute features of interest. The features are then analyzed in a multivariate manner using independent component analysis. We describe the approach in detail and provide examples of how it has been used for different fusion tasks. We also propose a method for selecting which combination of modalities provides the greatest value in discriminating groups. Finally, we summarize and describe future research topics.
Multi-modal anatomical optical coherence tomography and CT for in vivo dynamic upper airway imaging
NASA Astrophysics Data System (ADS)
Balakrishnan, Santosh; Bu, Ruofei; Price, Hillel; Zdanski, Carlton; Oldenburg, Amy L.
2017-02-01
We describe a novel, multi-modal imaging protocol for validating quantitative dynamic airway imaging performed using anatomical Optical Coherence Tomography (aOCT). The aOCT system consists of a catheter-based aOCT probe that is deployed via a bronchoscope, while a programmable ventilator is used to control airway pressure. This setup is employed on the bed of a Siemens Biograph CT system capable of performing respiratory-gated acquisitions. In this arrangement the position of the aOCT catheter may be visualized with CT to aid in co-registration. Utilizing this setup we investigate multiple respiratory pressure parameters with aOCT, and respiratory-gated CT, on both ex vivo porcine trachea and live, anesthetized pigs. This acquisition protocol has enabled real-time measurement of airway deformation with simultaneous measurement of pressure under physiologically relevant static and dynamic conditions- inspiratory peak or peak positive airway pressures of 10-40 cm H2O, and 20-30 breaths per minute for dynamic studies. We subsequently compare the airway cross sectional areas (CSA) obtained from aOCT and CT, including the change in CSA at different stages of the breathing cycle for dynamic studies, and the CSA at different peak positive airway pressures for static studies. This approach has allowed us to improve our acquisition methodology and to validate aOCT measurements of the dynamic airway for the first time. We believe that this protocol will prove invaluable for aOCT system development and greatly facilitate translation of OCT systems for airway imaging into the clinical setting.
Kim, Kang; Wagner, William R.
2015-01-01
With the rapid expansion of biomaterial development and coupled efforts to translate such advances toward the clinic, non-invasive and non-destructive imaging tools to evaluate implants in situ in a timely manner are critically needed. The required multilevel information is comprehensive, including structural, mechanical, and biological changes such as scaffold degradation, mechanical strength, cell infiltration, extracellular matrix formation and vascularization to name a few. With its inherent advantages of non-invasiveness and non-destructiveness, ultrasound imaging can be an ideal tool for both preclinical and clinical uses. In this review, currently available ultrasound imaging technologies that have been applied in vitro and in vivo for tissue engineering and regenerative medicine are discussed and some new emerging ultrasound technologies and multi-modality approaches utilizing ultrasound are introduced. PMID:26518412
Tumor Lysing Genetically Engineered T Cells Loaded with Multi-Modal Imaging Agents
NASA Astrophysics Data System (ADS)
Bhatnagar, Parijat; Alauddin, Mian; Bankson, James A.; Kirui, Dickson; Seifi, Payam; Huls, Helen; Lee, Dean A.; Babakhani, Aydin; Ferrari, Mauro; Li, King C.; Cooper, Laurence J. N.
2014-03-01
Genetically modified T cells expressing chimeric antigen receptors (CAR) exert an anti-tumor effect by identifying tumor-associated antigens (TAA), independent of the major histocompatibility complex. For maximal efficacy and safety of adoptively transferred cells, imaging their biodistribution is critical. This will determine whether cells home to the tumor and assist in moderating cell dose. Here, T cells are modified to express CAR. An efficient, non-toxic process with potential for cGMP compliance is developed for loading a high number of cells with multi-modal (PET-MRI) contrast agents (Super Paramagnetic Iron Oxide Nanoparticles - Copper-64; SPION-64Cu). This can now potentially be used for 64Cu-based whole-body PET to detect regions of T cell accumulation with high sensitivity, followed by SPION-based MRI of these regions for high-resolution, anatomically correlated images of T cells. CD19-specific CAR+ SPIONpos T cells effectively target CD19+ lymphoma in vitro.
A collaborative interaction and visualization multi-modal environment for surgical planning.
Foo, Jung Leng; Martinez-Escobar, Marisol; Peloquin, Catherine; Lobe, Thom; Winer, Eliot
2009-01-01
The proliferation of virtual reality visualization and interaction technologies has changed the way medical image data is analyzed and processed. This paper presents a multi-modal environment that combines a virtual reality application with a desktop application for collaborative surgical planning. Both visualization applications can function independently but can also be synced over a network connection for collaborative work. Any changes to either application are immediately synced and updated in the other. This is an efficient collaboration tool that allows multiple teams of doctors with only an internet connection to visualize and interact with the same patient data simultaneously. With this multi-modal environment framework, one team working in the VR environment and another team from a remote location working on a desktop machine can both collaborate in the examination and discussion for procedures such as diagnosis, surgical planning, teaching and tele-mentoring.
Robust Nonrigid Multimodal Image Registration using Local Frequency Maps*
Jian, Bing; Vemuri, Baba C.; Marroquin, José L.
2008-01-01
Automatic multi-modal image registration is central to numerous tasks in medical imaging today and has a vast range of applications, e.g., image guidance, atlas construction, etc. In this paper, we present a novel multi-modal 3D non-rigid registration algorithm wherein the 3D images to be registered are represented by their corresponding local frequency maps efficiently computed using the Riesz transform as opposed to the popularly used Gabor filters. The non-rigid registration between these local frequency maps is formulated in a statistically robust framework involving the minimization of the integral squared error, a.k.a. L2E (L2 error). This error is expressed as the squared difference between the true density of the residual (which is the squared difference between the non-rigidly transformed reference and the target local frequency representations) and a Gaussian or mixture of Gaussians density approximation of the same. The non-rigid transformation is expressed in a B-spline basis to achieve the desired smoothness in the transformation as well as computational efficiency. The key contributions of this work are (i) the use of the Riesz transform to achieve better efficiency in computing the local frequency representation in comparison to Gabor filter-based approaches, (ii) a new mathematical model for local-frequency-based non-rigid registration, and (iii) analytic computation of the gradient of the robust non-rigid registration cost function to achieve efficient and accurate registration. The proposed non-rigid L2E-based registration is a significant extension of research reported in the literature to date. We present experimental results for registering several real data sets with synthetic and real non-rigid misalignments. PMID:17354721
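A bare-bones NumPy sketch of the Riesz-transform (monogenic) representation behind the local frequency maps is given below; it assumes a single 2D image, omits the band-pass filtering that would normally precede the transform, and estimates local frequency only crudely as the gradient of the local phase.

```python
# FFT-based Riesz transform and monogenic local amplitude/phase (2D sketch).
import numpy as np

def riesz_local_phase(img):
    rows, cols = img.shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(u**2 + v**2)
    radius[0, 0] = 1.0                                  # avoid division by zero at DC
    F = np.fft.fft2(img - img.mean())
    r1 = np.real(np.fft.ifft2(F * (-1j * u / radius)))  # first Riesz component
    r2 = np.real(np.fft.ifft2(F * (-1j * v / radius)))  # second Riesz component
    f = img - img.mean()
    amplitude = np.sqrt(f**2 + r1**2 + r2**2)
    phase = np.arctan2(np.sqrt(r1**2 + r2**2), f)       # local phase
    return amplitude, phase

rng = np.random.default_rng(0)
img = rng.standard_normal((128, 128))                   # stand-in for a bandpassed image
amp, phase = riesz_local_phase(img)
freq_rows, freq_cols = np.gradient(np.unwrap(phase, axis=0))  # crude local-frequency estimate
```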
NASA Astrophysics Data System (ADS)
Hu, Ruiguang; Xiao, Liping; Zheng, Wenjuan
2015-12-01
In this paper, multi-kernel learning (MKL) is used for the classification of drug-related webpages. First, body text and image-label text are extracted through HTML parsing, and valid images are chosen by the FOCARSS algorithm. Second, a text-based bag-of-words (BOW) model is used to generate the text representation, and an image-based BOW model is used to generate the image representation. Finally, the text and image representations are fused using several methods. Experimental results demonstrate that the classification accuracy of MKL is higher than that of all other fusion methods at the decision level and feature level, and much higher than the accuracy of single-modal classification.
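A simplified stand-in for the kernel-level fusion can be written with per-modality RBF kernels combined by a fixed weight and fed to a precomputed-kernel SVM; full MKL would also learn the kernel weights, and the feature matrices, gamma values, and weight below are illustrative assumptions.

```python
# Fixed-weight kernel combination as a simplified stand-in for MKL.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_text = rng.standard_normal((80, 300))     # BOW text features (illustrative)
X_image = rng.standard_normal((80, 500))    # BOW image features (illustrative)
y = rng.integers(0, 2, 80)                  # drug-related vs. not (placeholder labels)

K_text = rbf_kernel(X_text, gamma=1.0 / X_text.shape[1])
K_image = rbf_kernel(X_image, gamma=1.0 / X_image.shape[1])
w = 0.6                                     # fixed weight; MKL would learn this
K = w * K_text + (1 - w) * K_image          # fused kernel

clf = SVC(kernel="precomputed").fit(K, y)
train_acc = clf.score(K, y)                 # training accuracy on the fused kernel
```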
Computer-aided, multi-modal, and compression diffuse optical studies of breast tissue
NASA Astrophysics Data System (ADS)
Busch, David Richard, Jr.
Diffuse Optical Tomography and Spectroscopy permit measurement of important physiological parameters non-invasively through ˜10 cm of tissue. I have applied these techniques in measurements of human breast and breast cancer. My thesis integrates three loosely connected themes in this context: multi-modal breast cancer imaging, automated data analysis of breast cancer images, and microvascular hemodynamics of breast under compression. As per the first theme, I describe construction, testing, and the initial clinical usage of two generations of imaging systems for simultaneous diffuse optical and magnetic resonance imaging. The second project develops a statistical analysis of optical breast data from many spatial locations in a population of cancers to derive a novel optical signature of malignancy; I then apply this data-derived signature for localization of cancer in additional subjects. Finally, I construct and deploy diffuse optical instrumentation to measure blood content and blood flow during breast compression; besides optics, this research has implications for any method employing breast compression, e.g., mammography.
Robust biological parametric mapping: an improved technique for multimodal brain image analysis
NASA Astrophysics Data System (ADS)
Yang, Xue; Beason-Held, Lori; Resnick, Susan M.; Landman, Bennett A.
2011-03-01
Mapping the quantitative relationship between structure and function in the human brain is an important and challenging problem. Numerous volumetric, surface, region of interest and voxelwise image processing techniques have been developed to statistically assess potential correlations between imaging and non-imaging metrics. Recently, biological parametric mapping has extended the widely popular statistical parametric approach to enable application of the general linear model to multiple image modalities (both for regressors and regressands) along with scalar valued observations. This approach offers great promise for direct, voxelwise assessment of structural and functional relationships with multiple imaging modalities. However, as presented, the biological parametric mapping approach is not robust to outliers and may lead to invalid inferences (e.g., artifactual low p-values) due to slight mis-registration or variation in anatomy between subjects. To enable widespread application of this approach, we introduce robust regression and robust inference in the neuroimaging context of application of the general linear model. Through simulation and empirical studies, we demonstrate that our robust approach reduces sensitivity to outliers without substantial degradation in power. The robust approach and associated software package provides a reliable way to quantitatively assess voxelwise correlations between structural and functional neuroimaging modalities.
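The robust voxelwise regression idea can be sketched by fitting a Huber-weighted robust linear model independently at each voxel; the synthetic data, the particular design matrix (one structural regressor plus an age covariate), and the use of statsmodels below are illustrative choices rather than the authors' toolbox.

```python
# Voxelwise robust GLM sketch with Huber weights (synthetic data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_subjects, n_voxels = 40, 1000
func = rng.standard_normal((n_subjects, n_voxels))     # functional metric per voxel
struct = rng.standard_normal((n_subjects, n_voxels))   # structural regressor per voxel
age = rng.uniform(20, 80, n_subjects)                  # scalar covariate

tvals = np.empty(n_voxels)
for v in range(n_voxels):
    X = sm.add_constant(np.column_stack([struct[:, v], age]))
    res = sm.RLM(func[:, v], X, M=sm.robust.norms.HuberT()).fit()
    tvals[v] = res.params[1] / res.bse[1]              # robust t-statistic, structure term
```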
A multi-modal approach to assessing recovery in youth athletes following concussion.
Reed, Nick; Murphy, James; Dick, Talia; Mah, Katie; Paniccia, Melissa; Verweel, Lee; Dobney, Danielle; Keightley, Michelle
2014-09-25
Concussion is one of the most commonly reported injuries amongst children and youth involved in sport participation. Following a concussion, youth can experience a range of short and long term neurobehavioral symptoms (somatic, cognitive and emotional/behavioral) that can have a significant impact on one's participation in daily activities and pursuits of interest (e.g., school, sports, work, family/social life, etc.). Despite this, there remains a paucity in clinically driven research aimed specifically at exploring concussion within the youth sport population, and more specifically, multi-modal approaches to measuring recovery. This article provides an overview of a novel and multi-modal approach to measuring recovery amongst youth athletes following concussion. The presented approach involves the use of both pre-injury/baseline testing and post-injury/follow-up testing to assess performance across a wide variety of domains (post-concussion symptoms, cognition, balance, strength, agility/motor skills and resting state heart rate variability). The goal of this research is to gain a more objective and accurate understanding of recovery following concussion in youth athletes (ages 10-18 years). Findings from this research can help to inform the development and use of improved approaches to concussion management and rehabilitation specific to the youth sport community.
NASA Astrophysics Data System (ADS)
Smith, Edward M.; Wandtke, John; Robinson, Arvin E.
1999-07-01
The selection criteria for the archive were based on the objectives of the Medical Information, Communication and Archive System (MICAS), a multi-vendor incremental approach to PACS. These objectives include interoperability between all components, seamless integration of the Radiology Information System (RIS) with MICAS and eventually other hospital databases, DICOM compliance of all components demonstrated prior to acceptance, and automated workflow that can be programmed to meet changes in the healthcare environment. The long-term multi-modality archive is being implemented in 3 or more phases, with the first phase designed to provide a 12 to 18 month storage solution. This decision was made because the cost per GB of storage is rapidly decreasing and the speed at which data can be retrieved is increasing with time. The open solution selected allows incorporation of leading-edge, 'best of breed' hardware and software and provides maximum flexibility of workflow both within and outside of radiology. The selected solution is media independent, supports multiple jukeboxes, provides expandable storage capacity and will provide redundancy and fault tolerance at minimal cost. Some of the required attributes of the archive include a scalable archive strategy, a virtual image database with global query, and an object-oriented database. The selection process took approximately 10 months with Cemax-Icon being the vendor selected. Prior to signing a purchase order, Cemax-Icon performed a site survey, agreed upon the acceptance test protocol and provided a written guarantee of connectivity between their archive and the imaging modalities and other MICAS components.
Comprehensive approach to image-guided surgery
NASA Astrophysics Data System (ADS)
Peters, Terence M.; Comeau, Roch M.; Kasrai, Reza; St. Jean, Philippe; Clonda, Diego; Sinasac, M.; Audette, Michel A.; Fenster, Aaron
1998-06-01
Image-guided surgery has evolved over the past 15 years from stereotactic planning, where the surgeon planned approaches to intracranial targets on the basis of 2D images presented on a simple workstation, to the use of sophisticated multi-modality 3D image integration in the operating room, with guidance being provided by mechanically, optically or electro-magnetically tracked probes or microscopes. In addition, sophisticated procedures such as thalamotomies and pallidotomies to relieve the symptoms of Parkinson's disease are performed with the aid of volumetric atlases integrated with the 3D image data. Operations that are performed stereotactically, that is to say via a small burr-hole in the skull, are able to assume that the information contained in the pre-operative imaging study accurately represents the brain morphology during the surgical procedure. On the other hand, performing a procedure via an open craniotomy presents a problem. Not only does tissue shift when the operation begins, even the act of opening the skull can cause significant shift of the brain tissue due to the relief of intra-cranial pressure, or the effect of drugs. Means of tracking and correcting such shifts form an important part of the work in the field of image-guided surgery today. One approach has been through the development of intra-operative MRI systems. We describe an alternative approach which integrates intra-operative ultrasound with pre-operative MRI to track such changes in tissue morphology.
NASA Technical Reports Server (NTRS)
Jones, R. L.
1984-01-01
An interactive digital computer program for modal analysis and gain estimation for eigensystem synthesis was written. Both mathematical and operation considerations are described; however, the mathematical presentation is limited to those concepts essential to the operational capability of the program. The program is capable of both modal and spectral synthesis of multi-input control systems. It is user friendly, has scratchpad capability and dynamic memory, and can be used to design either state or output feedback systems.
Kim, James D.; Hashemi, Nafiseh; Gelman, Rachel; Lee, Andrew G.
2012-01-01
In the past three decades, there have been countless advances in imaging modalities that have revolutionized evaluation, management, and treatment of neuro-ophthalmic disorders. Non-invasive approaches for early detection and monitoring of treatments have decreased morbidity and mortality. Understanding of basic methods of imaging techniques and choice of imaging modalities in cases encountered in neuro-ophthalmology clinic is critical for proper evaluation of patients. Two main imaging modalities that are often used are computed tomography (CT) and magnetic resonance imaging (MRI). However, variations of these modalities and appropriate location of imaging must be considered in each clinical scenario. In this article, we review and summarize the best neuroimaging studies for specific neuro-ophthalmic indications and the diagnostic radiographic findings for important clinical entities. PMID:23961025
STAMPS: Software Tool for Automated MRI Post-processing on a supercomputer.
Bigler, Don C; Aksu, Yaman; Miller, David J; Yang, Qing X
2009-08-01
This paper describes a Software Tool for Automated MRI Post-processing (STAMP) of multiple types of brain MRIs on a workstation and for parallel processing on a supercomputer (STAMPS). This software tool enables the automation of nonlinear registration for a large image set and for multiple MR image types. The tool uses standard brain MRI post-processing tools (such as SPM, FSL, and HAMMER) for multiple MR image types in a pipeline fashion. It also contains novel MRI post-processing features. The STAMP image outputs can be used to perform brain analysis using Statistical Parametric Mapping (SPM) or single-/multi-image modality brain analysis using Support Vector Machines (SVMs). Since STAMPS is PBS-based, the supercomputer may be a multi-node computer cluster or one of the latest multi-core computers.
Kharche, Sanjay R.; So, Aaron; Salerno, Fabio; Lee, Ting-Yim; Ellis, Chris; Goldman, Daniel; McIntyre, Christopher W.
2018-01-01
Dialysis prolongs life but augments cardiovascular mortality. Imaging data suggest that dialysis increases myocardial blood flow (BF) heterogeneity, but its causes remain poorly understood. A biophysical model of human coronary vasculature was used to explain the imaging observations, and highlight causes of coronary BF heterogeneity. Post-dialysis CT images from patients under control, pharmacological stress (adenosine), therapy (cooled dialysate), and adenosine and cooled dialysate conditions were obtained. The data presented disparate phenotypes. To dissect vascular mechanisms, a 3D human vasculature model based on known experimental coronary morphometry and a space filling algorithm was implemented. Steady state simulations were performed to investigate the effects of altered aortic pressure and blood vessel diameters on myocardial BF heterogeneity. Imaging showed that stress and therapy potentially increased mean and total BF, while reducing heterogeneity. BF histograms of one patient showed multi-modality. Using the model, it was found that total coronary BF increased as coronary perfusion pressure was increased. BF heterogeneity was differentially affected by large or small vessel blocking. BF heterogeneity was found to be inversely related to small blood vessel diameters. Simulation of large artery stenosis indicates that BF became heterogeneous (increased relative dispersion) and gave multi-modal histograms. The total transmural BF as well as transmural BF heterogeneity were reduced by large artery stenosis, generating large patches of very low BF regions downstream. Blocking of arteries at various orders showed that blocking larger arteries results in multi-modal BF histograms and large patches of low BF, whereas smaller artery blocking results in augmented relative dispersion and fractal dimension. Transmural heterogeneity was also affected. Finally, augmented aortic pressure in the presence of blood vessel blocking has differential effects on BF heterogeneity as well as transmural BF. Improved aortic blood pressure may improve total BF. Stress and therapy may be effective if they dilate small vessels. The observed complex BF distributions (multi-modal BF histograms) may indicate existing large vessel stenosis. The intuitive BF heterogeneity measures used here can be readily applied in clinical studies. Further development of the model and methods will permit personalized assessment of patient BF status. PMID:29867555
Frangioni, John V.; De Grand, Alec M.
2007-10-30
The invention is based, in part, on the discovery that by combining certain components one can generate a tissue-like phantom that mimics any desired tissue, is simple and inexpensive to prepare, and is stable over many weeks or months. In addition, new multi-modal imaging objects (e.g., beads) can be inserted into the phantoms to mimic tissue pathologies, such as cancer, or merely to serve as calibration standards. These objects can be imaged using one, two, or more (e.g., four) different imaging modalities (e.g., x-ray computed tomography (CT), positron emission tomography (PET), single photon emission computed tomography (SPECT), and near-infrared (NIR) fluorescence) simultaneously.
AlJaroudi, Wael A; Lloyd, Steven G; Chaudhry, Farooq A; Hage, Fadi G
2017-06-01
This review summarizes key imaging studies that were presented in the American Heart Association Scientific Sessions 2016 related to the fields of nuclear cardiology, cardiac computed tomography, cardiac magnetic resonance, and echocardiography. This bird's eye view will inform readers about multiple studies from these different modalities. We hope that this general overview will be useful for those that did not attend the conference as well as to those that did since it is often difficult to get exposure to many abstracts at large meetings. The review, therefore, aims to help readers stay updated on the newest imaging studies presented at the meeting.
Serag, Ahmed; Wilkinson, Alastair G.; Telford, Emma J.; Pataky, Rozalia; Sparrow, Sarah A.; Anblagan, Devasuda; Macnaught, Gillian; Semple, Scott I.; Boardman, James P.
2017-01-01
Quantitative volumes from brain magnetic resonance imaging (MRI) acquired across the life course may be useful for investigating long term effects of risk and resilience factors for brain development and healthy aging, and for understanding early life determinants of adult brain structure. Therefore, there is an increasing need for automated segmentation tools that can be applied to images acquired at different life stages. We developed an automatic segmentation method for human brain MRI, where a sliding window approach and a multi-class random forest classifier were applied to high-dimensional feature vectors for accurate segmentation. The method performed well on brain MRI data acquired from 179 individuals, analyzed in three age groups: newborns (38–42 weeks gestational age), children and adolescents (4–17 years) and adults (35–71 years). As the method can learn from partially labeled datasets, it can be used to segment large-scale datasets efficiently. It could also be applied to different populations and imaging modalities across the life course. PMID:28163680
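A toy 2D version of the sliding-window, random-forest classification strategy might look as follows; the window size, the fraction of labeled voxels, the forest size, and the thresholded "labels" are all placeholder assumptions meant only to show the data flow.

```python
# Sliding-window random-forest segmentation sketch (single 2D slice, toy labels).
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from sklearn.ensemble import RandomForestClassifier

def patch_features(img, k):
    """Flattened k x k neighborhood for every interior pixel."""
    win = sliding_window_view(img, (k, k))              # (H-k+1, W-k+1, k, k)
    return win.reshape(-1, k * k)

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))                     # intensity slice (illustrative)
labels = (img > 0.5).astype(int)                        # placeholder tissue labels

k = 5
X = patch_features(img, k)
y = labels[k // 2:-(k // 2), k // 2:-(k // 2)].ravel()  # labels for interior pixels

train = rng.random(y.size) < 0.2                        # partially labeled training set
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[train], y[train])
pred = clf.predict(X).reshape(64 - k + 1, 64 - k + 1)   # predicted label map
```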
Chen, Cheng; Wang, Wei; Ozolek, John A.; Rohde, Gustavo K.
2013-01-01
We describe a new supervised learning-based template matching approach for segmenting cell nuclei from microscopy images. The method uses examples selected by a user for building a statistical model which captures the texture and shape variations of the nuclear structures from a given dataset to be segmented. Segmentation of subsequent, unlabeled, images is then performed by finding the model instance that best matches (in the normalized cross-correlation sense) the local neighborhood in the input image. We demonstrate the application of our method to segmenting nuclei from a variety of imaging modalities, and quantitatively compare our results to several other methods. Quantitative results using both simulated and real image data show that, while certain methods may work well for certain imaging modalities, our software is able to obtain high accuracy across the several imaging modalities studied. Results also demonstrate that, relative to several existing methods, the template-based method we propose presents increased robustness in the sense of better handling variations in illumination and variations in texture from different imaging modalities, providing smoother and more accurate segmentation borders, and better handling cluttered nuclei. PMID:23568787
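The core matching step, normalized cross-correlation of a model instance against the input image, can be sketched with scikit-image; in the sketch the "template" is simply a patch cut from a random image rather than an instance of the learned statistical model.

```python
# Normalized cross-correlation template matching sketch.
import numpy as np
from skimage.feature import match_template

rng = np.random.default_rng(0)
image = rng.standard_normal((200, 200))          # stand-in microscopy image
template = image[60:80, 100:120].copy()          # stand-in for a model instance

ncc = match_template(image, template)            # normalized cross-correlation map
peak = np.unravel_index(np.argmax(ncc), ncc.shape)
print("best match (row, col):", peak, "NCC:", float(ncc[peak]))
```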
3D prostate MR-TRUS non-rigid registration using dual optimization with volume-preserving constraint
NASA Astrophysics Data System (ADS)
Qiu, Wu; Yuan, Jing; Fenster, Aaron
2016-03-01
We introduce an efficient and novel convex optimization-based approach to the challenging non-rigid registration of 3D prostate magnetic resonance (MR) and transrectal ultrasound (TRUS) images, which incorporates a new volume-preserving constraint to essentially improve the accuracy of targeting suspicious regions during the 3D TRUS guided prostate biopsy. In particular, we propose a fast sequential convex optimization scheme to efficiently minimize the employed highly nonlinear image fidelity function using the robust multi-channel modality independent neighborhood descriptor (MIND) across the two modalities of MR and TRUS. The registration accuracy was evaluated using 10 patient images by calculating the target registration error (TRE) using manually identified corresponding intrinsic fiducials in the whole prostate gland. We also compared the MR and TRUS manually segmented prostate surfaces in the registered images in terms of the Dice similarity coefficient (DSC), mean absolute surface distance (MAD), and maximum absolute surface distance (MAXD). Experimental results showed that the proposed method with the introduced volume-preserving prior significantly improves the registration accuracy compared to the method without the volume-preserving constraint, yielding an overall mean TRE of 2.0±0.7 mm, an average DSC of 86.5±3.5%, a MAD of 1.4±0.6 mm, and a MAXD of 6.5±3.5 mm.
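The reported metrics are straightforward to compute once corresponding fiducials and surface point sets are available; the sketch below uses synthetic coordinates, computes TRE from paired fiducials, and approximates MAD/MAXD as symmetric nearest-neighbour surface distances.

```python
# Registration-accuracy metrics on synthetic fiducials and surface points.
import numpy as np
from scipy.spatial import cKDTree

def tre(fid_a, fid_b):
    """Target registration error: mean distance between corresponding fiducials (mm)."""
    return np.linalg.norm(fid_a - fid_b, axis=1).mean()

def surface_distances(pts_a, pts_b):
    """Symmetric mean (MAD) and maximum (MAXD) absolute surface distance."""
    d_ab = cKDTree(pts_b).query(pts_a)[0]
    d_ba = cKDTree(pts_a).query(pts_b)[0]
    d = np.concatenate([d_ab, d_ba])
    return d.mean(), d.max()

rng = np.random.default_rng(0)
fid_mr = rng.uniform(0, 50, (10, 3))                   # MR fiducials (mm), illustrative
fid_trus = fid_mr + rng.normal(0, 1.0, (10, 3))        # corresponding TRUS fiducials
surf_mr = rng.uniform(0, 50, (500, 3))                 # MR prostate surface points
surf_trus = surf_mr + rng.normal(0, 0.8, (500, 3))     # TRUS prostate surface points

mad, maxd = surface_distances(surf_mr, surf_trus)
print(f"TRE={tre(fid_mr, fid_trus):.2f} mm  MAD={mad:.2f} mm  MAXD={maxd:.2f} mm")
```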
Diverse Application of Magnetic Resonance Imaging for Mouse Phenotyping
Wu, Yijen L.; Lo, Cecilia W.
2017-01-01
Small animal models, particularly mouse models, of human diseases are becoming an indispensable tool for biomedical research. Studies in animal models have provided important insights into the etiology of diseases and accelerated the development of therapeutic strategies. Detailed phenotypic characterization is essential, both for the development of such animal models and mechanistic studies into disease pathogenesis and testing the efficacy of experimental therapeutics. Magnetic Resonance Imaging (MRI) is a versatile and non-invasive imaging modality with excellent penetration depth, tissue coverage, and soft tissue contrast. MRI, being a multi-modal imaging modality, together with proven imaging protocols and availability of good contrast agents, is ideally suited for phenotyping mutant mouse models. Here we describe the applications of MRI for phenotyping structural birth defects involving the brain, heart, and kidney in mice. The versatility of MRI and its ease of use are well suited to meet the rapidly increasing demands for mouse phenotyping in the coming age of functional genomics. PMID:28544650
Multifunctional Inorganic Nanoparticles: Recent Progress in Thermal Therapy and Imaging
Cherukula, Kondareddy; Manickavasagam Lekshmi, Kamali; Uthaman, Saji; Cho, Kihyun; Cho, Chong-Su; Park, In-Kyu
2016-01-01
Nanotechnology has enabled the development of many alternative anti-cancer approaches, such as thermal therapies, which cause minimal damage to healthy cells. Current challenges in cancer treatment are the identification of the diseased area and its efficient treatment without generating many side effects. Image-guided therapies can be a useful tool to diagnose and treat the diseased tissue and they offer therapy and imaging using a single nanostructure. The present review mainly focuses on recent advances in the field of thermal therapy and imaging integrated with multifunctional inorganic nanoparticles. The main heating sources for heat-induced therapies are the surface plasmon resonance (SPR) in the near infrared region and alternating magnetic fields (AMFs). The different families of inorganic nanoparticles employed for SPR- and AMF-based thermal therapies and imaging are described. Furthermore, inorganic nanomaterials developed for multimodal therapies with different and multi-imaging modalities are presented in detail. Finally, relevant clinical perspectives and the future scope of inorganic nanoparticles in image-guided therapies are discussed. PMID:28335204
Compressed single pixel imaging in the spatial frequency domain
Torabzadeh, Mohammad; Park, Il-Yong; Bartels, Randy A.; Durkin, Anthony J.; Tromberg, Bruce J.
2017-01-01
We have developed compressed sensing single pixel spatial frequency domain imaging (cs-SFDI) to characterize tissue optical properties over a wide field of view (35 mm×35 mm) using multiple near-infrared (NIR) wavelengths simultaneously. Our approach takes advantage of the relatively sparse spatial content required for mapping tissue optical properties at length scales comparable to the transport scattering length in tissue (l_tr ∼ 1 mm) and the high bandwidth available for spectral encoding using a single-element detector. cs-SFDI recovered absorption (μa) and reduced scattering (μs′) coefficients of a tissue phantom at three NIR wavelengths (660, 850, and 940 nm) within 7.6% and 4.3% of absolute values determined using camera-based SFDI, respectively. These results suggest that cs-SFDI can be developed as a multi- and hyperspectral imaging modality for quantitative, dynamic imaging of tissue optical and physiological properties. PMID:28300272
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garrett, J; Ge, Y; Li, K
2015-06-15
Purpose: The anatomical noise power spectra (NPS) for differential phase contrast (DPC) and dark field (DF) imaging have recently been characterized using a power-law model with two parameters, alpha and beta, an innovative extension to the methodology used in x-ray attenuation-based breast imaging such as mammography, DBT, or cone-beam CT. Beta values of 3.6, 2.6, and 1.3 have been measured for absorption, DPC, and DF respectively for cadaver breasts imaged in the coronal plane; these dramatic differences should be reflected in their detection performance. The purpose of this study was to determine the impact of anatomical noise on breast calcification detection and compare the detection performance of the three contrast mechanisms of a multi-contrast x-ray imaging system. Methods: In our studies, a calcification image object was segmented out of the multi-contrast images of a cadaver breast specimen. Fifty total NPS were measured directly from breast cadavers. The ideal model observer detectability was calculated for a range of doses (5–100%) and a range of calcification sizes (diameter = 0.25–2.5 mm). Results: Overall we found the highest average detectability corresponded to DPC imaging (7.4 for 1 mm calc.), with DF the next highest (3.8 for 1 mm calc.), and absorption the lowest (3.2 for 1 mm calc.). However, absorption imaging also showed the slowest dependence on dose of the three modalities due to the significant anatomical noise. DPC showed a peak detectability for calcifications ∼1.25 mm in diameter, DF showed a peak for calcifications around 0.75 mm in diameter, and absorption imaging had no such peak in the range explored. Conclusion: Understanding imaging performance for DPC and DF is critical to transition these modalities to the clinic. The results presented here offer new insight into how these modalities complement absorption imaging to maximize the likelihood of detecting early breast cancers. J. Garrett, Y. Ge, K. Li: Nothing to disclose. G.-H. Chen: Research funded, GE Healthcare; Research funded, Siemens AX.
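The power-law anatomical NPS model referred to above, NPS(f) = alpha / f^beta, can be fitted by linear regression in log-log coordinates; the frequency range, the DPC-like beta of 2.6, and the noise level in the sketch are simulated values for illustration only.

```python
# Log-log fit of a power-law anatomical NPS model (simulated data).
import numpy as np

def fit_power_law_nps(freq, nps):
    """Fit NPS(f) = alpha / f**beta via linear regression of log(NPS) on log(f)."""
    slope, log_alpha = np.polyfit(np.log(freq), np.log(nps), 1)
    return np.exp(log_alpha), -slope                    # (alpha, beta)

rng = np.random.default_rng(0)
freq = np.linspace(0.05, 2.0, 100)                      # spatial frequency (cycles/mm)
nps_meas = (10.0 / freq**2.6) * rng.lognormal(0.0, 0.1, freq.size)

alpha, beta = fit_power_law_nps(freq, nps_meas)
print(f"alpha={alpha:.2f}, beta={beta:.2f}")            # beta should be close to 2.6
```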
Digital image archiving: challenges and choices.
Dumery, Barbara
2002-01-01
In the last five years, imaging exam volume has grown rapidly. In addition to increased image acquisition, there is more patient information per study. RIS-PACS integration and information-rich DICOM headers now provide us with more patient information relative to each study. The volume of archived digital images is increasing and will continue to rise at a steeper incline than film-based storage of the past. Many filmless facilities have been caught off guard by this increase, which has been stimulated by many factors. The most significant factor is investment in new digital and DICOM-compliant modalities. A huge volume driver is the increase in images per study from multi-slice technology. Storage requirements also are affected by disaster recovery initiatives and state retention mandates. This burgeoning rate of imaging data volume presents many challenges: cost of ownership, data accessibility, storage media obsolescence, database considerations, physical limitations, reliability and redundancy. There are two basic approaches to archiving--single tier and multi-tier. Each has benefits. With a single-tier approach, all the data is stored on a single media that can be accessed very quickly. A redundant copy of the data is then stored onto another less expensive media. This is usually a removable media. In this approach, the on-line storage is increased incrementally as volume grows. In a multi-tier approach, storage levels are set up based on access speed and cost. In other words, all images are stored at the deepest archiving level, which is also the least expensive. Images are stored on or moved back to the intermediate and on-line levels if they will need to be accessed more quickly. It can be difficult to decide what the best approach is for your organization. The options include RAIDs (redundant array of independent disks), direct attached RAID storage (DAS), network storage using RAIDs (NAS and SAN), removable media such as different types of tape, compact disks (CDs and DVDs) and magneto-optical disks (MODs). As you evaluate the various options for storage, it is important to consider both performance and cost. For most imaging enterprises, a single-tier archiving approach is the best solution. With the cost of hard drives declining, NAS is a very feasible solution today. It is highly reliable, offers immediate access to all exams, and easily scales as imaging volume grows. Best of all, media obsolescence challenges need not be of concern. For back-up storage, removable media can be implemented, with a smaller investment needed as it will only be used for a redundant copy of the data. There is no need to keep it online and available. If further system redundancy is desired, multiple servers should be considered. The multi-tier approach still has its merits for smaller enterprises, but with a detailed long-term cost of ownership analysis, NAS will probably still come out on top as the solution of choice for many imaging facilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Y; Fullerton, G; Goins, B
Purpose: In our previous study, a preclinical multi-modality quality assurance (QA) phantom that contains five tumor-simulating test objects with 2, 4, 7, 10 and 14 mm diameters was developed for accurate tumor size measurement by researchers during cancer drug development and testing. This study analyzed the errors during tumor volume measurement from preclinical magnetic resonance (MR), micro-computed tomography (micro-CT) and ultrasound (US) images acquired in a rodent tumor model using the preclinical multi-modality QA phantom. Methods: Using preclinical 7-Tesla MR, US and micro-CT scanners, images were acquired of subcutaneous SCC4 tumor xenografts in nude rats (3–4 rats per group; 5 groups) along with the QA phantom using the same imaging protocols. After tumors were excised, in-air micro-CT imaging was performed to determine reference tumor volume. Volumes measured for the rat tumors and phantom test objects were calculated using the formula V = (π/6)·a·b·c, where a, b and c are the maximum diameters in three perpendicular dimensions determined by the three imaging modalities. Then linear regression analysis was performed to compare image-based tumor volumes with the reference tumor volume and known test object volume for the rats and the phantom respectively. Results: The slopes of regression lines for in-vivo tumor volumes measured by the three imaging modalities were 1.021, 1.101 and 0.862 for MRI, micro-CT and US respectively. For the phantom, the slopes were 0.9485, 0.9971 and 0.9734 for MRI, micro-CT and US respectively. Conclusion: For both animal and phantom studies, random and systematic errors were observed. Random errors were observer-dependent and systematic errors were mainly due to selected imaging protocols and/or measurement method. In the animal study, there were additional systematic errors attributed to the ellipsoidal assumption for tumor shape. The systematic errors measured using the QA phantom need to be taken into account to reduce measurement errors during the animal study.
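The volume formula and the regression comparison above translate directly into a few lines of code; the reference volumes and the assumed 2% diameter bias in the sketch are synthetic and serve only to illustrate the workflow.

```python
# Ellipsoid volume estimate and regression against reference volumes (synthetic).
import numpy as np
from scipy.stats import linregress

def ellipsoid_volume(a, b, c):
    """Tumor volume from three perpendicular maximum diameters: V = (pi/6)*a*b*c."""
    return (np.pi / 6.0) * a * b * c

rng = np.random.default_rng(0)
ref_vol = rng.uniform(50, 2000, 15)                      # reference (ex vivo) volumes, mm^3
diam = (6.0 * ref_vol / np.pi) ** (1.0 / 3.0)            # equivalent-sphere diameter
meas_vol = ellipsoid_volume(1.02 * diam, 0.98 * diam, diam)  # assumed measurement bias

fit = linregress(ref_vol, meas_vol)
print(f"slope={fit.slope:.3f}, r^2={fit.rvalue**2:.3f}") # slope near 1 indicates low bias
```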
Sensor-agnostic photogrammetric image registration with applications to population modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Devin A; Moehl, Jessica J
2016-01-01
Photogrammetric registration of airborne and spaceborne imagery is a crucial prerequisite to many data fusion tasks. While embedded sensor models provide a rough geolocation estimate, these metadata may be incomplete or imprecise. Manual solutions are appropriate for small-scale projects, but for rapid streams of cross-modal, multi-sensor, multi-temporal imagery with varying metadata standards, an automated approach is required. We present a high-performance image registration workflow to address this need. This paper outlines the core development concepts and demonstrates its utility with respect to the 2016 data fusion contest imagery. In particular, Iris ultra-HD video is georeferenced to the Earth surface via registration to DEIMOS-2 imagery, which serves as a trusted control source. Geolocation provides the opportunity to augment the video with spatial context, stereo-derived disparity, spectral sensitivity, change detection, and numerous ancillary geospatial layers. We conclude by leveraging these derivative data layers towards one such fusion application: population distribution modeling.
Multi-modality 3D breast imaging with X-Ray tomosynthesis and automated ultrasound.
Sinha, Sumedha P; Roubidoux, Marilyn A; Helvie, Mark A; Nees, Alexis V; Goodsitt, Mitchell M; LeCarpentier, Gerald L; Fowlkes, J Brian; Chalek, Carl L; Carson, Paul L
2007-01-01
This study evaluated the utility of 3D automated ultrasound in conjunction with 3D digital X-Ray tomosynthesis for breast cancer detection and assessment, to better localize and characterize lesions in the breast. Tomosynthesis image volumes and automated ultrasound image volumes were acquired in the same geometry and in the same view for 27 patients. 3 MQSA certified radiologists independently reviewed the image volumes, visually correlating the images from the two modalities with in-house software. More sophisticated software was used on a smaller set of 10 cases, which enabled the radiologist to draw a 3D box around the suspicious lesion in one image set and isolate an anatomically correlated, similarly boxed region in the other modality image set. In the primary study, correlation was found to be moderately useful to the readers. In the additional study, using improved software, the median usefulness rating increased and confidence in localizing and identifying the suspicious mass increased in more than half the cases. As automated scanning and reading software techniques advance, superior results are expected.
Semi-automated Image Processing for Preclinical Bioluminescent Imaging.
Slavine, Nikolai V; McColl, Roderick W
Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals to better understand the effects of human disease in animal models. The purpose of this study was to develop and test a strategy for automated bioluminescence image processing, from data acquisition to obtaining 3D images. In order to optimize this procedure, a semi-automated image processing approach with a multi-modality image handling environment was developed. To identify a bioluminescent source location and strength, we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction, we used the MLEM algorithm. For internal bioluminescent sources, we used the diffusion approximation, balancing the internal and external intensities on the boundary of the medium; after determining an initial-order approximation for the photon fluence, we applied a novel iterative deconvolution method to obtain the final reconstruction result. We find that the reconstruction techniques successfully used the depth-dependent light transport approach and semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can optimize and decrease the time required for volumetric imaging and quantitative assessment. The data obtained from light phantom and mouse lung tumor images demonstrate the utility of the image reconstruction algorithms and the semi-automated approach for the bioluminescent image processing procedure. We suggest that the developed image processing approach can be applied to preclinical imaging studies to characterize tumor growth, identify metastases, and potentially determine the effectiveness of cancer treatment.
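As a generic illustration of the multiplicative, non-negativity-preserving MLEM iterations mentioned above, the sketch below reconstructs a small synthetic source from Poisson measurements through a random forward matrix; it is not the authors' depth-dependent light-transport implementation.

```python
# Generic MLEM iterations for y ~ Poisson(A @ x), x >= 0 (synthetic example).
import numpy as np

def mlem(A, y, n_iter=100):
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                     # sensitivity term, A^T 1
    for _ in range(n_iter):
        proj = A @ x
        proj[proj == 0] = 1e-12              # guard against division by zero
        x *= (A.T @ (y / proj)) / sens       # multiplicative MLEM update
    return x

rng = np.random.default_rng(0)
A = rng.random((200, 50))                    # stand-in light-transport matrix
x_true = np.zeros(50)
x_true[20:25] = 5.0                          # compact internal source
y = rng.poisson(A @ x_true).astype(float)    # noisy surface flux measurements

x_hat = mlem(A, y)                           # reconstructed source distribution
```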
Agile Multi-Scale Decompositions for Automatic Image Registration
NASA Technical Reports Server (NTRS)
Murphy, James M.; Leija, Omar Navarro; Le Moigne, Jacqueline
2016-01-01
In recent works, the first and third authors developed an automatic image registration algorithm based on a multiscale hybrid image decomposition with anisotropic shearlets and isotropic wavelets. This prototype showed strong performance, improving robustness over registration with wavelets alone. However, this method imposed a strict hierarchy on the order in which shearlet and wavelet features were used in the registration process, and also involved an unintegrated mixture of MATLAB and C code. In this paper, we introduce a more agile model for generating features, in which a flexible and user-guided mix of shearlet and wavelet features are computed. Compared to the previous prototype, this method introduces a flexibility to the order in which shearlet and wavelet features are used in the registration process. Moreover, the present algorithm is now fully coded in C, making it more efficient and portable than the MATLAB and C prototype. We demonstrate the versatility and computational efficiency of this approach by performing registration experiments with the fully-integrated C algorithm. In particular, meaningful timing studies can now be performed, to give a concrete analysis of the computational costs of the flexible feature extraction. Examples of synthetically warped and real multi-modal images are analyzed.
Assessment of Abdominal Adipose Tissue and Organ Fat Content by Magnetic Resonance Imaging
Hu, Houchun H.; Nayak, Krishna S.; Goran, Michael I.
2010-01-01
As the prevalence of obesity continues to rise, rapid and accurate tools for assessing abdominal body and organ fat quantity and distribution are critically needed to assist researchers investigating therapeutic and preventive measures against obesity and its comorbidities. Magnetic resonance imaging (MRI) is the most promising modality to address such need. It is non-invasive, utilizes no ionizing radiation, provides unmatched 3D visualization, is repeatable, and is applicable to subject cohorts of all ages. This article is aimed to provide the reader with an overview of current and state-of-the-art techniques in MRI and associated image analysis methods for fat quantification. The principles underlying traditional approaches such as T1-weighted imaging and magnetic resonance spectroscopy as well as more modern chemical-shift imaging techniques are discussed and compared. The benefits of contiguous 3D acquisitions over 2D multi-slice approaches are highlighted. Typical post-processing procedures for extracting adipose tissue depot volumes and percent organ fat content from abdominal MRI data sets are explained. Furthermore, the advantages and disadvantages of each MRI approach with respect to imaging parameters, spatial resolution, subject motion, scan time, and appropriate fat quantitative endpoints are also provided. Practical considerations in implementing these methods are also presented. PMID:21348916
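For the chemical-shift (water-fat) methods discussed above, the basic quantitative endpoint is a voxelwise fat-fraction map; the sketch below uses placeholder water and fat images, and the 0.5 adipose threshold is an illustrative choice rather than a recommended value.

```python
# Voxelwise signal fat-fraction map from separated water/fat images (placeholders).
import numpy as np

def fat_fraction(water, fat, eps=1e-9):
    return fat / (water + fat + eps)

rng = np.random.default_rng(0)
water = rng.uniform(0, 1000, (128, 128))     # illustrative water image
fat = rng.uniform(0, 1000, (128, 128))       # illustrative fat image

ff = fat_fraction(water, fat)
adipose_mask = ff > 0.5                      # simple adipose-tissue threshold (assumed)
percent_fat = 100.0 * ff[adipose_mask].mean()
```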
Numerical study on simultaneous emission and transmission tomography in the MRI framework
NASA Astrophysics Data System (ADS)
Gjesteby, Lars; Cong, Wenxiang; Wang, Ge
2017-09-01
Multi-modality imaging methods are instrumental for advanced diagnosis and therapy. Specifically, a hybrid system that combines computed tomography (CT), nuclear imaging, and magnetic resonance imaging (MRI) will be a Holy Grail of medical imaging, delivering complementary structural/morphological, functional, and molecular information for precision medicine. A novel imaging method was recently demonstrated that takes advantage of radiotracer polarization to combine MRI principles with nuclear imaging. This approach allows the concentration of a polarized γ-ray-emitting radioisotope to be imaged with MRI resolution, potentially outperforming the standard nuclear imaging mode, at a sensitivity significantly higher than that of MRI. In our work, we propose to acquire MRI-modulated nuclear data for simultaneous image reconstruction of both emission and transmission parameters, suggesting the potential for simultaneous CT-SPECT-MRI. The synchronized diverse datasets allow excellent spatiotemporal registration and unique insight into physiological and pathological features. Here we describe the methodology, including the system design, with emphasis on the formulation for tomographic image reconstruction, even when significant radiotracer signals are limited to a region of interest (ROI). Initial numerical results demonstrate the feasibility of our approach for reconstructing concentration and attenuation images through a head phantom with various radio-labeled ROIs. Additional considerations regarding the radioisotope characteristics are also discussed.
Principles of Simultaneous PET/MR Imaging.
Catana, Ciprian
2017-05-01
Combined PET/MR imaging scanners capable of acquiring simultaneously the complementary information provided by the 2 imaging modalities are now available for human use. After addressing the hardware challenges for integrating the 2 imaging modalities, most of the efforts in the field have focused on developing MR-based attenuation correction methods for neurologic and whole-body applications, implementing approaches for improving one modality by using the data provided by the other and exploring research and clinical applications that could benefit from the synergistic use of the multimodal data. Copyright © 2017 Elsevier Inc. All rights reserved.
In vivo bioluminescence tomography based on multi-view projection and 3D surface reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Shuang; Wang, Kun; Leng, Chengcai; Deng, Kexin; Hu, Yifang; Tian, Jie
2015-03-01
Bioluminescence tomography (BLT) is a powerful optical molecular imaging modality that enables non-invasive, real-time in vivo imaging as well as 3D quantitative analysis in preclinical studies. In order to solve the inverse problem and reconstruct inner light sources accurately, prior structural information is commonly necessary and is obtained from computed tomography or magnetic resonance imaging. This strategy requires an expensive hybrid imaging system, a complicated operation protocol, and possible involvement of ionizing radiation. The overall robustness highly depends on the fusion accuracy between the optical and structural information. In this study we present a pure optical bioluminescence tomographic system (POBTS) and a novel BLT method based on multi-view projection acquisition and 3D surface reconstruction. The POBTS acquired a sparse set of white light surface images and bioluminescent images of a mouse. The white light images were then applied to an approximate surface model to generate a high-quality textured 3D surface reconstruction of the mouse. After that we integrated multi-view luminescent images based on the previous reconstruction, and applied an algorithm to calibrate and quantify the surface luminescent flux in 3D. Finally, the internal bioluminescence source reconstruction was achieved with this prior information. A BALB/c mouse model bearing a breast tumor of 4T1-fLuc cells was used to evaluate the performance of the new system and technique. Compared with the conventional hybrid optical-CT approach using the same inverse reconstruction method, the reconstruction accuracy of this technique was improved. The distance error between the actual and reconstructed internal source was decreased by 0.184 mm.
Grid-Enabled Quantitative Analysis of Breast Cancer
2009-10-01
large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer...pilot study to utilize large-scale parallel Grid computing to harness the nationwide cluster infrastructure for optimization of medical image...analysis parameters. Additionally, we investigated the use of cutting-edge data analysis/mining techniques as applied to Ultrasound, FFDM, and DCE-MRI Breast
Hu, Dehong; Sheng, Zonghai; Gao, Guanhui; Siu, Fungming; Liu, Chengbo; Wan, Qian; Gong, Ping; Zheng, Hairong; Ma, Yifan; Cai, Lintao
2016-07-01
Photodynamic therapy (PDT) is a noninvasive and effective approach for cancer treatment. The main bottlenecks of clinical PDT are the poor selectivity of the photosensitizer and inadequate oxygen supply, resulting in serious side effects and low therapeutic efficiency. Herein, a thermal-modulated reactive oxygen species (ROS) strategy using activatable human serum albumin-chlorin e6 nanoassemblies (HSA-Ce6 NAs) for promoting PDT against cancer is developed. Through intermolecular disulfide bond crosslinking and hydrophobic interaction, the Ce6 photosensitizer is effectively loaded into the HSA NAs, and the obtained HSA-Ce6 NAs exhibit excellent reduction response, as well as enhanced tumor accumulation and retention. By precisely controlling the overall body temperature, rather than the local tumor temperature, from 37 °C to 43 °C, the photosensitization reaction rate of HSA-Ce6 NAs increases by 20% and the oxygen saturation of tumor tissue rises by 52%, significantly enhancing the generation of ROS for promoting PDT. Meanwhile, the intrinsic fluorescence and photoacoustic properties, and the chelating characteristic of the porphyrin ring, endow the HSA-Ce6 NAs with fluorescence, photoacoustic and magnetic resonance triple-modal imaging functions. Upon irradiation with a low-energy near-infrared laser, the tumors are completely suppressed without tumor recurrence and therapy-induced side effects. The robust thermal-modulated ROS strategy combined with an albumin-based activatable nanophotosensitizer holds great potential for multi-modal imaging-guided PDT and clinical translation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Gold nanoclusters as contrast agents for fluorescent and X-ray dual-modality imaging.
Zhang, Aili; Tu, Yu; Qin, Songbing; Li, Yan; Zhou, Juying; Chen, Na; Lu, Qiang; Zhang, Bingbo
2012-04-15
Multimodal imaging is an alternative approach to improve the sensitivity of early cancer diagnosis. In this study, gold nanoclusters (Au NCs) with high fluorescence and a strong X-ray absorption coefficient are synthesized as dual-modality imaging contrast agents (CAs) for fluorescent and X-ray dual-modality imaging. The experimental results show that the as-prepared Au NCs are well constructed with ultrasmall sizes, reliable fluorescent emission, high computed tomography (CT) value and fine biocompatibility. In vivo imaging results indicate that the obtained Au NCs are capable of fluorescent and X-ray enhanced imaging. Copyright © 2012 Elsevier Inc. All rights reserved.
Potential of PET-MRI for imaging of non-oncologic musculoskeletal disease.
Kogan, Feliks; Fan, Audrey P; Gold, Garry E
2016-12-01
Early detection of musculoskeletal disease leads to improved therapies and patient outcomes, and would benefit greatly from imaging at the cellular and molecular level. As it becomes clear that assessment of multiple tissues and functional processes are often necessary to study the complex pathogenesis of musculoskeletal disorders, the role of multi-modality molecular imaging becomes increasingly important. New positron emission tomography-magnetic resonance imaging (PET-MRI) systems offer to combine high-resolution MRI with simultaneous molecular information from PET to study the multifaceted processes involved in numerous musculoskeletal disorders. In this article, we aim to outline the potential clinical utility of hybrid PET-MRI to these non-oncologic musculoskeletal diseases. We summarize current applications of PET molecular imaging in osteoarthritis (OA), rheumatoid arthritis (RA), metabolic bone diseases and neuropathic peripheral pain. Advanced MRI approaches that reveal biochemical and functional information offer complementary assessment in soft tissues. Additionally, we discuss technical considerations for hybrid PET-MR imaging including MR attenuation correction, workflow, radiation dose, and quantification.
Still-to-video face recognition in unconstrained environments
NASA Astrophysics Data System (ADS)
Wang, Haoyu; Liu, Changsong; Ding, Xiaoqing
2015-02-01
Face images from video sequences captured in unconstrained environments usually contain several kinds of variations, e.g. pose, facial expression, illumination, image resolution and occlusion. Motion blur and compression artifacts also deteriorate recognition performance. Besides, in various practical systems such as law enforcement, video surveillance and e-passport identification, only a single still image per person is enrolled as the gallery set. Many existing methods may fail to work due to variations in face appearances and the limited number of available gallery samples. In this paper, we propose a novel approach for still-to-video face recognition in unconstrained environments. By assuming that faces from still images and video frames share the same identity space, a regularized least squares regression method is utilized to tackle the multi-modality problem. Regularization terms based on heuristic assumptions are introduced to avoid overfitting. In order to deal with the single-image-per-person problem, we exploit face variations learned from training sets to synthesize virtual samples for gallery samples. We adopt a learning algorithm combining both affine/convex hull-based approaches and regularizations to match image sets. Experimental results on a real-world dataset consisting of unconstrained video sequences demonstrate that our method outperforms the state-of-the-art methods.
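The regression step described in the abstract above is, at its core, a regularized least squares fit from probe features to an identity space. The sketch below shows only that closed-form ridge solution; the array names, one-hot identity targets, and regularization weight are illustrative assumptions, not the authors' exact formulation (which adds further heuristic regularization terms and hull-based set matching).

```python
import numpy as np

def ridge_regression(X, Y, lam=1.0):
    """Closed-form regularized least squares: W = argmin ||XW - Y||^2 + lam ||W||^2.

    X : (n, d) probe features (e.g. video-frame descriptors)
    Y : (n, k) targets (e.g. one-hot identities or gallery-space codes)
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# toy usage: 100 frames with 128-d features, 10 gallery identities
rng = np.random.default_rng(3)
X = rng.standard_normal((100, 128))
Y = np.eye(10)[rng.integers(0, 10, 100)]
W = ridge_regression(X, Y, lam=10.0)
print(np.argmax(X[:5] @ W, axis=1))   # predicted identities for the first 5 frames
```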
Building an EEG-fMRI Multi-Modal Brain Graph: A Concurrent EEG-fMRI Study
Yu, Qingbao; Wu, Lei; Bridwell, David A.; Erhardt, Erik B.; Du, Yuhui; He, Hao; Chen, Jiayu; Liu, Peng; Sui, Jing; Pearlson, Godfrey; Calhoun, Vince D.
2016-01-01
The topological architecture of brain connectivity has been well-characterized by graph theory based analysis. However, previous studies have primarily built brain graphs based on a single modality of brain imaging data. Here we develop a framework to construct multi-modal brain graphs using concurrent EEG-fMRI data which are simultaneously collected during eyes open (EO) and eyes closed (EC) resting states. fMRI data are decomposed into independent components with associated time courses by group independent component analysis (ICA). EEG time series are segmented, and then spectral power time courses are computed and averaged within 5 frequency bands (delta; theta; alpha; beta; low gamma). EEG-fMRI brain graphs, with EEG electrodes and fMRI brain components serving as nodes, are built by computing correlations within and between fMRI ICA time courses and EEG spectral power time courses. Dynamic EEG-fMRI graphs are built using a sliding window method, in contrast to static ones that treat the entire time course as stationary. At the global level, static graph measures and properties of dynamic graph measures differ across frequency bands and mainly show higher values during eyes closed than eyes open. Nodal-level graph measures of a few brain components also show higher values during eyes closed in specific frequency bands. Overall, these findings incorporate fMRI spatial localization and EEG frequency information which could not be obtained by examining only one modality. This work provides a new approach to examine EEG-fMRI associations within a graph theoretic framework with potential application to many topics. PMID:27733821
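Stripped of the ICA and spectral preprocessing, the graph-construction step described above amounts to correlating two sets of node time courses, both over the whole run (static) and within sliding windows (dynamic). The sketch below assumes the fMRI component time courses and EEG band-power time courses are already extracted and temporally aligned; the array names, window length, and step size are hypothetical.

```python
import numpy as np

def build_eeg_fmri_graph(fmri_tc, eeg_power_tc, win_len=60, step=10):
    """Correlation-based multi-modal graph; nodes = fMRI ICs + EEG band-power channels.

    fmri_tc      : (T, n_ic) fMRI independent-component time courses
    eeg_power_tc : (T, n_el) EEG spectral-power time courses for one frequency band
    Returns the static adjacency matrix and a list of sliding-window (dynamic) ones.
    """
    nodes = np.hstack([fmri_tc, eeg_power_tc])        # (T, n_ic + n_el)
    static_graph = np.corrcoef(nodes, rowvar=False)   # full node-by-node correlation matrix

    dynamic_graphs = []
    for start in range(0, nodes.shape[0] - win_len + 1, step):
        win = nodes[start:start + win_len]
        dynamic_graphs.append(np.corrcoef(win, rowvar=False))
    return static_graph, dynamic_graphs

# toy example: 300 time points, 20 fMRI components, 32 EEG electrodes
rng = np.random.default_rng(0)
static_g, dyn_gs = build_eeg_fmri_graph(rng.standard_normal((300, 20)),
                                        rng.standard_normal((300, 32)))
print(static_g.shape, len(dyn_gs))
```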
Wave analysis of a plenoptic system and its applications
NASA Astrophysics Data System (ADS)
Shroff, Sapna A.; Berkner, Kathrin
2013-03-01
Traditional imaging systems directly image a 2D object plane on to the sensor. Plenoptic imaging systems contain a lenslet array at the conventional image plane and a sensor at the back focal plane of the lenslet array. In this configuration the data captured at the sensor is not a direct image of the object. Each lenslet effectively images the aperture of the main imaging lens at the sensor. Therefore the sensor data retains angular light-field information which can be used for a posteriori digital computation of multi-angle images and axially refocused images. If a filter array, containing spectral filters or neutral density or polarization filters, is placed at the pupil aperture of the main imaging lens, then each lenslet images the filters on to the sensor. This enables the digital separation of multiple filter modalities giving single snapshot, multi-modal images. Due to the diversity of potential applications of plenoptic systems, their investigation is increasing. As the application space moves towards microscopes and other complex systems, and as pixel sizes become smaller, the consideration of diffraction effects in these systems becomes increasingly important. We discuss a plenoptic system and its wave propagation analysis for both coherent and incoherent imaging. We simulate a system response using our analysis and discuss various applications of the system response pertaining to plenoptic system design, implementation and calibration.
Example based lesion segmentation
NASA Astrophysics Data System (ADS)
Roy, Snehashis; He, Qing; Carass, Aaron; Jog, Amod; Cuzzocreo, Jennifer L.; Reich, Daniel S.; Prince, Jerry; Pham, Dzung
2014-03-01
Automatic and accurate detection of white matter lesions is a significant step toward understanding the progression of many diseases, like Alzheimer's disease or multiple sclerosis. Multi-modal MR images are often used to segment T2 white matter lesions that can represent regions of demyelination or ischemia. Some automated lesion segmentation methods describe the lesion intensities using generative models, and then classify the lesions with some combination of heuristics and cost minimization. In contrast, we propose a patch-based method, in which lesions are found using examples from an atlas containing multi-modal MR images and corresponding manual delineations of lesions. Patches from subject MR images are matched to patches from the atlas and lesion memberships are found based on patch similarity weights. We experiment on 43 subjects with MS, whose scans show various levels of lesion-load. We demonstrate significant improvement in Dice coefficient and total lesion volume compared to a state of the art model-based lesion segmentation method, indicating more accurate delineation of lesions.
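The patch-matching idea above can be illustrated with a simple similarity-weighted label fusion, in the spirit of non-local means. This is a sketch under the assumption that multi-modal patches have already been extracted and flattened; it omits the search-window restriction and atlas preselection a practical implementation would need, and the parameter names are illustrative.

```python
import numpy as np

def patch_lesion_membership(subject_patches, atlas_patches, atlas_labels, sigma=2.0):
    """Weight atlas lesion labels by patch similarity (non-local-means style fusion).

    subject_patches : (n_sub, p) flattened multi-modal patches from the subject
    atlas_patches   : (n_atl, p) flattened patches from the atlas images
    atlas_labels    : (n_atl,)   manual lesion label (0/1) at each atlas patch centre
    Returns a lesion membership in [0, 1] for each subject patch centre.
    """
    memberships = np.empty(len(subject_patches))
    for i, patch in enumerate(subject_patches):
        d2 = np.sum((atlas_patches - patch) ** 2, axis=1)   # squared patch distances
        w = np.exp(-d2 / (2 * sigma ** 2))                  # similarity weights
        memberships[i] = np.dot(w, atlas_labels) / (w.sum() + 1e-12)
    return memberships

# toy usage: 5 subject patches, library of 200 atlas patches of length 27 (3x3x3)
rng = np.random.default_rng(4)
m = patch_lesion_membership(rng.random((5, 27)), rng.random((200, 27)),
                            rng.integers(0, 2, 200).astype(float))
print(m)
```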
AlJaroudi, Wael A; Lloyd, Steven G; Hage, Fadi G
2018-04-01
This review summarizes key imaging studies that were presented in the American Heart Association Scientific Sessions 2017 related to the fields of nuclear cardiology, cardiac computed tomography, cardiac magnetic resonance, and echocardiography. The aim of this bird's eye view is to inform readers about multiple studies reported at the meeting from these different imaging modalities. While such a review is most useful for those that did not attend the conference, we find that a general overview may also be useful to those that did since it is often difficult to get exposure to many abstracts at large meetings. The review, therefore, aims to help readers stay updated on the newest imaging studies presented at the meeting and will hopefully stimulate new ideas for future research in imaging.
NASA Astrophysics Data System (ADS)
Deng, Zijian; Li, Changhui
2016-06-01
Imaging small blood vessels and measuring their functional information in finger joint are still challenges for clinical imaging modalities. In this study, we developed a multi-transducer functional photoacoustic tomography (PAT) system and successfully imaged human finger-joint vessels from ~1 mm to <0.2 mm in diameter. In addition, the oxygen saturation (SO2) values of these vessels were also measured. Our results demonstrate that PAT can provide both anatomical and functional information of individual finger-joint vessels with different sizes, which might help the study of finger-joint diseases, such as rheumatoid arthritis.
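Functional SO2 estimation of this kind typically reduces to linear spectral unmixing of per-wavelength photoacoustic amplitudes into oxy- and deoxyhemoglobin contributions. The sketch below illustrates only that unmixing step; the extinction coefficients and signal values are placeholder numbers, and the real system's wavelength choice, calibration, and fluence correction are not represented.

```python
import numpy as np

# Illustrative extinction coefficients for [HbO2, Hb] at two hypothetical wavelengths;
# real values must be taken from tabulated hemoglobin absorption spectra.
EPS = np.array([[290.0, 3200.0],    # wavelength 1
                [1200.0, 800.0]])   # wavelength 2

def estimate_so2(pa_signals):
    """Linear spectral unmixing of per-wavelength photoacoustic amplitudes.

    pa_signals : (n_wavelengths,) PA amplitude of one vessel pixel.
    Returns the estimated oxygen saturation SO2 = C_HbO2 / (C_HbO2 + C_Hb).
    """
    conc, *_ = np.linalg.lstsq(EPS, pa_signals, rcond=None)
    c_hbo2, c_hb = np.clip(conc, 0, None)
    return c_hbo2 / (c_hbo2 + c_hb + 1e-12)

print(estimate_so2(np.array([0.8, 1.1])))   # toy amplitudes at the two wavelengths
```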
State-of-the-art radiation detectors for medical imaging: Demands and trends
NASA Astrophysics Data System (ADS)
Darambara, Dimitra G.
2006-12-01
Over the last half-century, a variety of significant technical advances in several scientific fields has driven explosive growth in the field of medical imaging, leading to a better interpretation of more specific anatomical, biochemical and molecular pathways. In particular, the development of novel imaging detectors and readout electronics has been critical to the advancement of medical imaging, allowing the invention of breakthrough platforms for the simultaneous acquisition of multi-modality images at the molecular level. The present paper reviews the challenges, demands and constraints on radiation imaging detectors imposed by the nature of the modality and the physics of the imaging source. This is followed by a concise review and perspective on various types of state-of-the-art detector technologies that have been developed to meet these requirements. Trends, prospects and new concepts for future imaging detectors are also highlighted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibson, Adam; Piquette, Kathryn E.; Bergmann, Uwe
2018-02-26
Ancient Egyptian mummies were often covered with an outer casing, panels and masks made from cartonnage: a lightweight material made from linen, plaster, and recycled papyrus held together with adhesive. Egyptologists, papyrologists, and historians aim to recover and read extant text on the papyrus contained within cartonnage layers, but some methods, such as dissolving mummy casings, are destructive. The use of an advanced range of different imaging modalities was investigated to test the feasibility of non-destructive approaches applied to multi-layered papyrus found in ancient Egyptian mummy cartonnage. Eight different techniques were compared by imaging four synthetic phantoms designed to provide robust, well-understood, yet relevant sample standards using modern papyrus and replica inks. The techniques include optical (multispectral imaging with reflection and transillumination, and optical coherence tomography), X-ray (X-ray fluorescence imaging, X-ray fluorescence spectroscopy, X-ray micro computed tomography and phase contrast X-ray) and terahertz-based approaches. Optical imaging techniques were able to detect inks on all four phantoms, but were unable to significantly penetrate papyrus. X-ray-based techniques were sensitive to iron-based inks with excellent penetration but were not able to detect carbon-based inks. However, using terahertz imaging, it was possible to detect carbon-based inks with good penetration but with less sensitivity to iron-based inks. The phantoms allowed reliable and repeatable tests to be made at multiple sites on three continents. Finally, the tests demonstrated that each imaging modality needs to be optimised for this particular application: it is, in general, not sufficient to repurpose an existing device without modification. Furthermore, it is likely that no single imaging technique will be able to robustly detect and enable the reading of text within ancient Egyptian mummy cartonnage. However, by carefully selecting, optimising and combining techniques, text contained within these fragile and rare artefacts may eventually be open to non-destructive imaging, identification, and interpretation.
MO-E-12A-01: Quantitative Imaging: Techniques, Applications, and Challenges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackson, E; Jeraj, R; McNitt-Gray, M
The first symposium in the Quantitative Imaging Track focused on the introduction of quantitative imaging (QI) by illustrating the potential of QI in diagnostic and therapeutic applications in research and patient care, highlighting key challenges in implementation of such QI applications, and reviewing QI efforts of selected national and international agencies and organizations, including the FDA, NCI, NIST, and RSNA. This second QI symposium will focus more specifically on the techniques, applications, and challenges of QI. The first talk of the session will focus on modality-agnostic challenges of QI, beginning with challenges of the development and implementation of QI applications in single-center, single-vendor settings and progressing to the challenges encountered in the most general setting of multi-center, multi-vendor settings. The subsequent three talks will focus on specific QI challenges and opportunities in the modality-specific settings of CT, PET/CT, and MR. Each talk will provide information on modality-specific QI techniques, applications, and challenges, including current efforts focused on solutions to such challenges. Learning Objectives: Understand key general challenges of QI application development and implementation, regardless of modality. Understand selected QI techniques and applications in CT, PET/CT, and MR. Understand challenges, and potential solutions for such challenges, for the applications presented for each modality.
Joint modality fusion and temporal context exploitation for semantic video analysis
NASA Astrophysics Data System (ADS)
Papadopoulos, Georgios Th; Mezaris, Vasileios; Kompatsiaris, Ioannis; Strintzis, Michael G.
2011-12-01
In this paper, a multi-modal context-aware approach to semantic video analysis is presented. Overall, the examined video sequence is initially segmented into shots and for every resulting shot appropriate color, motion and audio features are extracted. Then, Hidden Markov Models (HMMs) are employed for performing an initial association of each shot with the semantic classes that are of interest separately for each modality. Subsequently, a graphical modeling-based approach is proposed for jointly performing modality fusion and temporal context exploitation. Novelties of this work include the combined use of contextual information and multi-modal fusion, and the development of a new representation for providing motion distribution information to HMMs. Specifically, an integrated Bayesian Network is introduced for simultaneously performing information fusion of the individual modality analysis results and exploitation of temporal context, contrary to the usual practice of performing each task separately. Contextual information is in the form of temporal relations among the supported classes. Additionally, a new computationally efficient method for providing motion energy distribution-related information to HMMs, which supports the incorporation of motion characteristics from previous frames to the currently examined one, is presented. The final outcome of this overall video analysis framework is the association of a semantic class with every shot. Experimental results as well as comparative evaluation from the application of the proposed approach to four datasets belonging to the domains of tennis, news and volleyball broadcast video are presented.
NASA Astrophysics Data System (ADS)
Chen, K.; Weinmann, M.; Gao, X.; Yan, M.; Hinz, S.; Jutzi, B.; Weinmann, M.
2018-05-01
In this paper, we address the deep semantic segmentation of aerial imagery based on multi-modal data. Given multi-modal data composed of true orthophotos and the corresponding Digital Surface Models (DSMs), we extract a variety of hand-crafted radiometric and geometric features which are provided separately and in different combinations as input to a modern deep learning framework. The latter is represented by a Residual Shuffling Convolutional Neural Network (RSCNN) combining the characteristics of a Residual Network with the advantages of atrous convolution and a shuffling operator to achieve a dense semantic labeling. Via performance evaluation on a benchmark dataset, we analyze the value of different feature sets for the semantic segmentation task. The derived results reveal that the use of radiometric features yields better classification results than the use of geometric features for the considered dataset. Furthermore, the consideration of data on both modalities leads to an improvement of the classification results. However, the derived results also indicate that the use of all defined features is less favorable than the use of selected features. Consequently, data representations derived via feature extraction and feature selection techniques still provide a gain if used as the basis for deep semantic segmentation.
On the Multi-Modal Object Tracking and Image Fusion Using Unsupervised Deep Learning Methodologies
NASA Astrophysics Data System (ADS)
LaHaye, N.; Ott, J.; Garay, M. J.; El-Askary, H. M.; Linstead, E.
2017-12-01
The number of different remote-sensor modalities has been on the rise, resulting in large datasets with different complexity levels. Such complex datasets can provide valuable information separately, yet there is greater value in having a comprehensive view of them combined. As such, hidden information can be deduced by applying data mining techniques to the fused data. The curse of dimensionality of such fused data, due to the potentially vast dimension space, hinders our ability to understand them deeply. This is because each dataset requires a user to have instrument-specific and dataset-specific knowledge for optimum and meaningful usage. Once a user decides to use multiple datasets together, a deeper understanding of how to translate and combine these datasets in a correct and effective manner is needed. Although data-centric techniques exist, generic automated methodologies that can completely solve this problem do not. Here we are developing a system that aims to gain a detailed understanding of different data modalities. Such a system will provide an analysis environment that gives the user useful feedback and can aid in research tasks. In our current work, we show the initial outputs of our system implementation, which leverages unsupervised deep learning techniques so as not to burden the user with the task of labeling input data, while still allowing for a detailed machine understanding of the data. Our goal is to be able to track objects, like cloud systems or aerosols, across different image-like data modalities. The proposed system is designed to be flexible, scalable, and robust in understanding complex likenesses within multi-modal data in a similar spatio-temporal range, and to be able to co-register and fuse these images when needed.
Wang, Li; Shi, Feng; Li, Gang; Lin, Weili; Gilmore, John H.; Shen, Dinggang
2014-01-01
Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination process. During the first year of life, the signal contrast between white matter (WM) and gray matter (GM) in MR images undergoes inverse changes. In particular, the inversion of WM/GM signal contrast appears around 6–8 months of age, where brain tissues appear isointense and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a novel segmentation method to address the above-mentioned challenge based on the sparse representation of the complementary tissue distribution information from T1, T2 and diffusion-weighted images. Specifically, we first derive an initial segmentation from a library of aligned multi-modality images with ground-truth segmentations by using sparse representation in a patch-based fashion. The segmentation is further refined by the integration of the geometrical constraint information. The proposed method was evaluated on 22 6-month-old training subjects using leave-one-out cross-validation, as well as 10 additional infant testing subjects, showing superior results in comparison to other state-of-the-art methods. PMID:24505729
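The core sparse-representation step described above can be pictured as representing each subject patch as a sparse non-negative combination of library patches and propagating the library labels with the coefficients. The snippet below is an illustration using a generic Lasso coder; patch sizes, the exact modality concatenation, and the geometric-constraint refinement stage of the paper are not reproduced, and all names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_patch_label(subject_patch, library_patches, library_labels, alpha=0.01):
    """Label a voxel via sparse representation over a multi-modality patch library.

    subject_patch   : (p,)   concatenated T1/T2/FA patch centred on the voxel
    library_patches : (m, p) patches from aligned atlas images with known segmentations
    library_labels  : (m, c) one-hot tissue labels (e.g. WM/GM/CSF) at the patch centres
    """
    coder = Lasso(alpha=alpha, positive=True, fit_intercept=False, max_iter=5000)
    coder.fit(library_patches.T, subject_patch)       # subject_patch ≈ library^T · w, w sparse
    w = coder.coef_
    probs = w @ library_labels / (w.sum() + 1e-12)    # coefficient-weighted label fusion
    return probs.argmax(), probs

# toy usage: 27-voxel patches, a library of 300 patches, 3 tissue classes
rng = np.random.default_rng(6)
lib = rng.random((300, 27))
lab = np.eye(3)[rng.integers(0, 3, 300)]
print(sparse_patch_label(rng.random(27), lib, lab))
```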
Wang, Li; Shi, Feng; Li, Gang; Lin, Weili; Gilmore, John H; Shen, Dinggang
2013-01-01
Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination process. During the first year of life, the signal contrast between white matter (WM) and gray matter (GM) in MR images undergoes inverse changes. In particular, the inversion of WM/GM signal contrast appears around 6-8 months of age, where brain tissues appear isointense and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a novel segmentation method to address the above-mentioned challenge based on the sparse representation of the complementary tissue distribution information from T1, T2 and diffusion-weighted images. Specifically, we first derive an initial segmentation from a library of aligned multi-modality images with ground-truth segmentations by using sparse representation in a patch-based fashion. The segmentation is further refined by the integration of the geometrical constraint information. The proposed method was evaluated on 22 6-month-old training subjects using leave-one-out cross-validation, as well as 10 additional infant testing subjects, showing superior results in comparison to other state-of-the-art methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larner, J.
In this interactive session, lung SBRT patient cases will be presented to highlight real-world considerations for ensuring safe and accurate treatment delivery. An expert panel of speakers will discuss challenges specific to lung SBRT including patient selection, patient immobilization techniques, 4D CT simulation and respiratory motion management, target delineation for treatment planning, online treatment alignment, and established prescription regimens and OAR dose limits. Practical examples of cases, including the patient flow through the clinical process, are presented and audience participation will be encouraged. This panel session is designed to provide case demonstration and review for lung SBRT in terms of (1) clinical appropriateness in patient selection, (2) strategies for simulation, including 4D and respiratory motion management, (3) applying multiple imaging modalities (4D CT imaging, MRI, PET) for tumor volume delineation and motion extent, and (4) image guidance in treatment delivery. Learning Objectives: Understand the established requirements for patient selection in lung SBRT. Become familiar with the various immobilization strategies for lung SBRT, including technology for respiratory motion management. Understand the benefits and pitfalls of applying multiple imaging modalities (4D CT imaging, MRI, PET) for tumor volume delineation and motion extent determination for lung SBRT. Understand established prescription regimens and OAR dose limits.
Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery
Fuerst, Bernhard; Tateno, Keisuke; Johnson, Alex; Fotouhi, Javad; Osgood, Greg; Tombari, Federico; Navab, Nassir
2017-01-01
Orthopaedic surgeons are still following the decades old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking for creating a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surface and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows for tracking of the surgical tools occluded by hand. This proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion. PMID:29184659
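The RGB-D-to-CBCT calibration mentioned above relies on iterative closest point alignment of two surface point clouds. A bare-bones ICP with an SVD-based (Kabsch) rigid fit is sketched below as an illustration; the real system's initialization, outlier rejection, and full calibration chain are outside this sketch, and the toy check at the end uses synthetic points.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(src, dst):
    """Least-squares rotation/translation (Kabsch) mapping src points onto dst points."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, n_iter=30):
    """Basic ICP: align one surface point cloud (e.g. RGB-D) to another (e.g. CBCT)."""
    tree = cKDTree(dst)
    R_tot, t_tot = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(n_iter):
        _, idx = tree.query(cur)             # closest-point correspondences
        R, t = rigid_fit(cur, dst[idx])
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot

# toy check: a slightly translated copy of a random cloud
rng = np.random.default_rng(5)
dst = rng.random((500, 3))
R_est, t_est = icp(dst - np.array([0.01, 0.02, 0.0]), dst)
print(np.round(t_est, 3))   # expected to be close to [0.01, 0.02, 0.0]
```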
NASA Astrophysics Data System (ADS)
Tian, Chao; Zhang, Wei; Nguyen, Van Phuc; Huang, Ziyi; Wang, Xueding; Paulus, Yannis M.
2018-02-01
Currently available clinical retinal imaging techniques have limitations, including limited depth of penetration or the requirement for invasive injection of exogenous contrast agents. Here, we developed a novel multimodal imaging system for high-speed, high-resolution retinal imaging of larger animals, such as rabbits. The system integrates three state-of-the-art imaging modalities, including photoacoustic microscopy (PAM), optical coherence tomography (OCT), and fluorescence microscopy (FM). In vivo experimental results in rabbit eyes show that the PAM is able to visualize laser-induced retinal burns and distinguish individual eye blood vessels using a laser exposure dose of 80 nJ, which is well below the American National Standards Institute (ANSI) safety limit of 160 nJ. The OCT can discern different retinal layers and visualize laser burns and choroidal detachments. The novel multi-modal imaging platform holds great promise in ophthalmic imaging.
NASA Astrophysics Data System (ADS)
Zakiya, Hanifah; Sinaga, Parlindungan; Hamidah, Ida
2017-05-01
The results of field studies showed that students' science literacy was still low. One root of the problem is that the textbooks used in learning are not oriented toward the components of science literacy. This study focused on the effectiveness of a textbook designed to build science literacy using multi-modal representation. The textbook was developed with the Design Representational Approach Learning to Write (DRALW) method. The textbook design, applied to the topic of "Kinetic Theory of Gases", was implemented with grade XI high school students. Effectiveness was determined from the effect size and the normalized gain, while the hypothesis was tested using an independent t-test. The results showed that the textbook developed using multi-modal representation can improve students' science literacy skills; based on the effect size, it was found effective in improving these skills, and the improvement occurred in all competences and knowledge aspects of scientific literacy. The hypothesis test showed a significant difference in science literacy between the class that used the textbook with multi-modal representation and the class that used the regular textbook used in schools.
Weinstein, Ronald S; Graham, Anna R; Lian, Fangru; Braunhut, Beth L; Barker, Gail R; Krupinski, Elizabeth A; Bhattacharyya, Achyut K
2012-04-01
Telepathology, the distant service component of digital pathology, is a growth industry. The word "telepathology" was introduced into the English Language in 1986. Initially, two different, competing imaging modalities were used for telepathology. These were dynamic (real time) robotic telepathology and static image (store-and-forward) telepathology. In 1989, a hybrid dynamic robotic/static image telepathology system was developed in Norway. This hybrid imaging system bundled these two primary pathology imaging modalities into a single multi-modality pathology imaging system. Similar hybrid systems were subsequently developed and marketed in other countries as well. It is noteworthy that hybrid dynamic robotic/static image telepathology systems provided the infrastructure for the first truly sustainable telepathology services. Since then, impressive progress has been made in developing another telepathology technology, so-called "virtual microscopy" telepathology (also called "whole slide image" telepathology or "WSI" telepathology). Over the past decade, WSI has appeared to be emerging as the preferred digital telepathology digital imaging modality. However, recently, there has been a re-emergence of interest in dynamic-robotic telepathology driven, in part, by concerns over the lack of a means for up-and-down focusing (i.e., Z-axis focusing) using early WSI processors. In 2010, the initial two U.S. patents for robotic telepathology (issued in 1993 and 1994) expired enabling many digital pathology equipment companies to incorporate dynamic-robotic telepathology modules into their WSI products for the first time. The dynamic-robotic telepathology module provided a solution to the up-and-down focusing issue. WSI and dynamic robotic telepathology are now, rapidly, being bundled into a new class of telepathology/digital pathology imaging system, the "WSI-enhanced dynamic robotic telepathology system". To date, six major WSI processor equipment companies have embraced the approach and developed WSI-enhanced dynamic-robotic digital telepathology systems, marketed under a variety of labels. Successful commercialization of such systems could help overcome the current resistance of some pathologists to incorporate digital pathology, and telepathology, into their routine and esoteric laboratory services. Also, WSI-enhanced dynamic robotic telepathology could be useful for providing general pathology and subspecialty pathology services to many of the world's underserved populations in the decades ahead. This could become an important enabler for the delivery of patient-centered healthcare in the future. © 2012 The Authors APMIS © 2012 APMIS.
Image-guided thoracic surgery in the hybrid operation room.
Ujiie, Hideki; Effat, Andrew; Yasufuku, Kazuhiro
2017-01-01
There has been an increase in the use of image-guided technology to facilitate minimally invasive therapy. The next generation of minimally invasive therapy is focused on the advancement and translation of novel image-guided technologies in therapeutic interventions, including surgery, interventional pulmonology, radiation therapy, and interventional laser therapy. To establish the efficacy of different minimally invasive therapies, we have developed a hybrid operating room, known as the guided therapeutics operating room (GTx OR), at the Toronto General Hospital. The GTx OR is equipped with multi-modality image-guidance systems, which feature a dual-source, dual-energy computed tomography (CT) scanner, robotic cone-beam CT (CBCT)/fluoroscopy, a high-performance endobronchial ultrasound system, an endoscopic surgery system, a near-infrared (NIR) fluorescence imaging system, and navigation tracking systems. The novel multimodality image-guidance systems allow physicians to quickly and accurately image patients while they are on the operating table. This yields improved outcomes, since physicians are able to use image guidance during their procedures and carry out innovative multi-modality therapeutics. Multiple preclinical translational studies pertaining to innovative minimally invasive technology are being developed in our guided therapeutics laboratory (GTx Lab). The GTx Lab is equipped with technology and multimodality image-guidance systems similar to those in the GTx OR, and acts as an appropriate platform for translating research into human clinical trials. Through the GTx Lab, we are able to perform basic research, such as the development of image-guided technologies, preclinical model testing, and preclinical imaging, and then translate that research into the GTx OR. This OR allows for the utilization of new technologies in cancer therapy, including molecular imaging and other innovative imaging modalities, and therefore enables a better quality of life for patients, both during and after the procedure. In this article, we describe the capabilities of the GTx systems and discuss the first-in-human technologies used and evaluated in the GTx OR.
Nanodiamond Landmarks for Subcellular Multimodal Optical and Electron Imaging
Zurbuchen, Mark A.; Lake, Michael P.; Kohan, Sirus A.; Leung, Belinda; Bouchard, Louis-S.
2013-01-01
There is a growing need for biolabels that can be used in both optical and electron microscopies, are non-cytotoxic, and do not photobleach. Such biolabels could enable targeted nanoscale imaging of sub-cellular structures, and help to establish correlations between conjugation-delivered biomolecules and function. Here we demonstrate a sub-cellular multi-modal imaging methodology that enables localization of inert particulate probes, consisting of nanodiamonds having fluorescent nitrogen-vacancy centers. These are functionalized to target specific structures, and are observable by both optical and electron microscopies. Nanodiamonds targeted to the nuclear pore complex are rapidly localized in electron-microscopy diffraction mode to enable “zooming-in” to regions of interest for detailed structural investigations. Optical microscopies reveal nanodiamonds for in-vitro tracking or uptake-confirmation. The approach is general, works down to the single nanodiamond level, and can leverage the unique capabilities of nanodiamonds, such as biocompatibility, sensitive magnetometry, and gene and drug delivery. PMID:24036840
Kellogg, Marissa; Liang, Conrad W; Liebeskind, David S
2016-01-01
Clinicians treating sudden neurologic deficit are being faced with an increasing number of available imaging modalities. In this chapter we discuss a general approach to acute neuroimaging and weigh the considerations that determine which modality or modalities should be utilized. © 2016 Elsevier B.V. All rights reserved.
Xie, Yunyan; Cui, Zaixu; Zhang, Zhongmin; Sun, Yu; Sheng, Can; Li, Kuncheng; Gong, Gaolang; Han, Ying; Jia, Jianping
2015-01-01
Identifying amnestic mild cognitive impairment (aMCI) is of great clinical importance because aMCI is a putative prodromal stage of Alzheimer's disease. The present study aimed to explore the feasibility of accurately identifying aMCI with a magnetic resonance imaging (MRI) biomarker. We integrated measures of both gray matter (GM) abnormalities derived from structural MRI and white matter (WM) alterations acquired from diffusion tensor imaging at the voxel level across the entire brain. In particular, multi-modal brain features, including GM volume, WM fractional anisotropy, and mean diffusivity, were extracted from a relatively large sample of 64 Han Chinese aMCI patients and 64 matched controls. Then, support vector machine classifiers for GM volume, FA, and MD were fused to distinguish the aMCI patients from the controls. The fused classifier was evaluated with the leave-one-out and the 10-fold cross-validations, and the classifier had an accuracy of 83.59% and an area under the curve of 0.862. The most discriminative regions of GM were mainly located in the medial temporal lobe, temporal lobe, precuneus, cingulate gyrus, parietal lobe, and frontal lobe, whereas the most discriminative regions of WM were mainly located in the corpus callosum, cingulum, corona radiata, frontal lobe, and parietal lobe. Our findings suggest that aMCI is characterized by a distributed pattern of GM abnormalities and WM alterations that represent discriminative power and reflect relevant pathological changes in the brain, and these changes further highlight the advantage of multi-modal feature integration for identifying aMCI.
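The classifier-fusion step described above can be pictured as training one SVM per feature modality and averaging their decision scores before thresholding, evaluated with leave-one-out cross-validation. The sketch below uses linear SVMs, simple score averaging, and synthetic features purely for illustration; the kernel choice, feature selection, and exact fusion rule of the study may differ.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_auc_score

def fused_svm_scores(modalities, y):
    """Leave-one-out fusion of per-modality SVM decision scores.

    modalities : list of (n_subjects, n_features) arrays, e.g. [GM volume, FA, MD]
    y          : (n_subjects,) labels (1 = aMCI, 0 = control)
    Returns the fused decision score for each left-out subject.
    """
    scores = np.zeros(len(y))
    for train, test in LeaveOneOut().split(y):
        per_mod = []
        for X in modalities:
            clf = SVC(kernel="linear").fit(X[train], y[train])
            per_mod.append(clf.decision_function(X[test]))
        scores[test] = np.mean(per_mod)          # simple average fusion across modalities
    return scores

# toy example with random features for 40 subjects (no real discriminative signal)
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 20)
mods = [rng.standard_normal((40, 50)) for _ in range(3)]
s = fused_svm_scores(mods, y)
print("AUC:", roc_auc_score(y, s))
print("Accuracy:", np.mean((s > 0).astype(int) == y))
```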
Analysis of chronic aortic regurgitation by 2D and 3D echocardiography and cardiac MRI
Stoebe, Stephan; Metze, Michael; Jurisch, Daniel; Tayal, Bhupendar; Solty, Kilian; Laufs, Ulrich; Pfeiffer, Dietrich; Hagendorff, Andreas
2018-01-01
Purpose: The study compares the feasibility of the quantitative volumetric and semi-quantitative approach for quantification of chronic aortic regurgitation (AR) using different imaging modalities. Methods: Left ventricular (LV) volumes, regurgitant volumes (RVol) and regurgitant fractions (RF) were assessed retrospectively by 2D, 3D echocardiography and cMRI in 55 chronic AR patients. Semi-quantitative parameters were assessed by 2D echocardiography. Results: 22 (40%) patients had mild, 25 (46%) moderate and 8 (14%) severe AR. The quantitative volumetric approach was feasible using 2D, 3D echocardiography and cMRI, whereas the feasibility of semi-quantitative parameters varied considerably. LV volume (LVEDV, LVESV, SVtot) analyses showed good correlations between the different imaging modalities, although significantly increased LV volumes were assessed by cMRI. RVol was significantly different between 2D/3D echocardiography and 2D echocardiography/cMRI but was not significantly different between 3D echocardiography/cMRI. RF was not statistically different between 2D echocardiography/cMRI and 3D echocardiography/cMRI showing poor correlations (r < 0.5) between the different imaging modalities. For AR grading by RF, moderate agreement was observed between 2D/3D echocardiography and 2D echocardiography/cMRI and good agreement was observed between 3D echocardiography/cMRI. Conclusion: Semi-quantitative parameters are difficult to determine by 2D echocardiography in clinical routine. The quantitative volumetric RF assessment seems to be feasible and can be discussed as an alternative approach in chronic AR. However, RVol and RF did not correlate well between the different imaging modalities. The best agreement for grading of AR severity by RF was observed between 3D echocardiography and cMRI. LV volumes can be verified by different approaches and different imaging modalities. PMID:29519957
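The quantitative volumetric approach reduces to simple stroke-volume arithmetic once the LV volumes and the effective forward stroke volume have been measured. A minimal sketch, assuming a single regurgitant lesion and illustrative numbers, is:

```python
def regurgitant_indices(lvedv, lvesv, forward_sv):
    """Quantitative volumetric AR indices, assuming no other regurgitant lesion.

    lvedv, lvesv : end-diastolic / end-systolic LV volumes (ml)
    forward_sv   : effective forward stroke volume (ml)
    Returns total stroke volume (ml), regurgitant volume (ml), regurgitant fraction (%).
    """
    sv_tot = lvedv - lvesv                  # total LV stroke volume
    rvol = sv_tot - forward_sv              # regurgitant volume
    rf = 100.0 * rvol / sv_tot              # regurgitant fraction
    return sv_tot, rvol, rf

print(regurgitant_indices(180.0, 70.0, 75.0))   # -> (110.0, 35.0, ~31.8)
```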
Fixed Base Modal Survey of the MPCV Orion European Service Module Structural Test Article
NASA Technical Reports Server (NTRS)
Winkel, James P.; Akers, J. C.; Suarez, Vicente J.; Staab, Lucas D.; Napolitano, Kevin L.
2017-01-01
Recently, the MPCV Orion European Service Module Structural Test Article (E-STA) underwent sine vibration testing using the multi-axis shaker system at NASA GRC Plum Brook Station Mechanical Vibration Facility (MVF). An innovative approach using measured constraint shapes at the interface of E-STA to the MVF allowed high-quality fixed base modal parameters of the E-STA to be extracted, which have been used to update the E-STA finite element model (FEM), without the need for a traditional fixed base modal survey. This innovative approach provided considerable program cost and test schedule savings. This paper documents this modal survey, which includes the modal pretest analysis sensor selection, the fixed base methodology using measured constraint shapes as virtual references and measured frequency response functions, and post-survey comparison between measured and analysis fixed base modal parameters.
Bakas, Spyridon; Zeng, Ke; Sotiras, Aristeidis; Rathore, Saima; Akbari, Hamed; Gaonkar, Bilwaj; Rozycki, Martin; Pati, Sarthak; Davatzikos, Christos
2016-01-01
We present an approach for segmenting low- and high-grade gliomas in multimodal magnetic resonance imaging volumes. The proposed approach is based on a hybrid generative-discriminative model. Firstly, a generative approach based on an Expectation-Maximization framework that incorporates a glioma growth model is used to segment the brain scans into tumor, as well as healthy tissue labels. Secondly, a gradient boosting multi-class classification scheme is used to refine tumor labels based on information from multiple patients. Lastly, a probabilistic Bayesian strategy is employed to further refine and finalize the tumor segmentation based on patient-specific intensity statistics from the multiple modalities. We evaluated our approach in 186 cases during the training phase of the BRAin Tumor Segmentation (BRATS) 2015 challenge and report promising results. During the testing phase, the algorithm was additionally evaluated in 53 unseen cases, achieving the best performance among the competing methods.
Manuel, Anura Michelle; Kalimuthu, Santhi; Pathmanathan, Sitra Siri; Narayanan, Prepageran; Zainal Abidin, Zurina; Azmi, Khairul; Khalil, Alizan
2017-04-01
Arteriovenous malformations are congenital lesions that may evolve with time and manifest in a plethora of presentations. They can present as torrential epistaxis when the facial region is extensively involved. Multiple imaging modalities are available to assist in characterizing the structure of the lesion as well as its location and extent. This complex disease requires a multidisciplinary team approach with preoperative embolization and surgery. We present a rare cause of life-threatening epistaxis in a gentleman with a longstanding orbital and hemifacial arteriovenous malformation and discuss the complexities involved in its management. Copyright © 2017. Published by Elsevier Taiwan.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zheng; Ukida, H.; Ramuhalli, Pradeep
2010-06-05
Imaging- and vision-based techniques play an important role in industrial inspection. The sophistication of the techniques assures high-quality performance of the manufacturing process through precise positioning, online monitoring, and real-time classification. Advanced systems incorporating multiple imaging and/or vision modalities provide robust solutions to complex situations and problems in industrial applications. A diverse range of industries, including aerospace, automotive, electronics, pharmaceutical, biomedical, semiconductor, and food/beverage, etc., have benefited from recent advances in multi-modal imaging, data fusion, and computer vision technologies. Many of the open problems in this context are in the general area of image analysis methodologies (preferably in an automated fashion). This editorial article introduces a special issue of this journal highlighting recent advances and demonstrating the successful applications of integrated imaging and vision technologies in industrial inspection.
Liu, Yu; Kang, Ning; Lv, Jing; Zhou, Zijian; Zhao, Qingliang; Ma, Lingceng; Chen, Zhong; Ren, Lei; Nie, Liming
2016-08-01
A gadolinium-doped multi-shell upconversion nanoparticle under 800 nm excitation is synthesized with a 10-fold fluorescence-intensity enhancement over that under 980 nm. The nanoformulations exhibit excellent photoacoustic/luminescence/magnetic resonance tri-modal imaging capabilities, enabling visualization of tumor morphology and microvessel distribution at a new imaging depth. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A Belief-Space Approach to Integrated Intelligence - Research Area 10.3: Intelligent Networks
2017-12-05
A Belief-Space Approach to Integrated Intelligence - Research Area 10.3: Intelligent Networks. The views, opinions and/or findings contained in this...high dimensionality and multi-modality of their hybrid configuration spaces. Planners that perform a purely geometric search are prohibitively slow...Hamburg, January. Paper Title: Hierarchical planning for multi-contact non-prehensile manipulation. Publication Type: Conference Paper or Presentation
Fully Scalable Porous Metal Electrospray Propulsion
2012-03-20
particular emphasis on the variation of specific impulse for multi-modal propulsion is currently carried out by MIT and the Busek Company under an...Beam profile distributions in the negative (left) and positive (center) modes as visualized directly through a multi-channel plate and phosphor...screen. These profiles are parabolic (right), indicating the non-thermal character of these types of ion beams. Microscopic image of pattern imprinted on Si
Innovative approach for in-vivo ablation validation on multimodal images
NASA Astrophysics Data System (ADS)
Shahin, O.; Karagkounis, G.; Carnegie, D.; Schlaefer, A.; Boctor, E.
2014-03-01
Radiofrequency ablation (RFA) is an important therapeutic procedure for small hepatic tumors. To make sure that the target tumor is effectively treated, RFA monitoring is essential. While several imaging modalities can observe the ablation procedure, it is not clear how ablated lesions on the images correspond to actual necroses. This uncertainty contributes to the high local recurrence rates (up to 55%) after radiofrequency ablative therapy. This study investigates a novel approach to correlate images of ablated lesions with actual necroses. We mapped both intraoperative images of the lesion and a slice through the actual necrosis in a common reference frame. An electromagnetic tracking system was used to accurately match lesion slices from different imaging modalities. To minimize the liver deformation effect, the tracking reference frame was defined inside the tissue by anchoring an electromagnetic sensor adjacent to the lesion. A validation test was performed using a phantom and proved that the end-to-end accuracy of the approach was within 2 mm. In an in-vivo experiment, intraoperative magnetic resonance imaging (MRI) and ultrasound (US) ablation images were correlated to gross and histopathology. The results indicate that the proposed method can accurately correlate in-vivo ablations on different modalities. Ultimately, this will improve the interpretation of the ablation monitoring and reduce the recurrence rates associated with RFA.
Garimella, Rama; Eskew, Jeff; Bhamidi, Priyanka; Vielhauer, George; Hong, Yan; Anderson, H. Clarke; Tawfik, Ossama; Rowe, Peter
2013-01-01
Osteosarcoma (OS) is a bone malignancy that affects children and adolescents. It is a highly aggressive tumor and typically metastasizes to lungs. Despite aggressive chemotherapy and surgical treatments, the current 5 year survival rate is 60–70%. Clinically relevant models are needed to understand OS pathobiology, metastatic progression from bones to lungs, and ultimately, to develop more efficacious treatment strategies and improve survival rates in OS patients with metastasis. The main goal of this study was to develop and characterize an in vivo OS model that will allow non-invasive tracking of tumor progression in real time, and aid in studying OS pathobiology, and screening of potential therapeutic agents against OS. In this study, we have used a multi-modality approach using bioluminescent imaging, electron microscopy, micro-computed tomography, and histopathology to develop and characterize a preclinical Bioluminescent Osteosarcoma Orthotopic Mouse (BOOM) model, using 143B human OS cell line. The results of this study clearly demonstrate that the BOOM model represents the clinical disease as evidenced by a spectrum of changes associated with tumor establishment, progression and metastasis, and detection of known OS biomarkers in the primary and metastatic tumor tissue. Key novel findings of this study include: (a) multimodality approach for extensive characterization of the BOOM model using 143B human OS cell line; (b) evidence of renal metastasis in OS orthotopic model using 143B cells; (c) evidence of Runx2 expression in the metastatic lung tissue; and (d) evidence of the presence of extracellular membrane vesicles and myofibroblasts in the BOOM model. PMID:25688332
Hayes, Ashley R; Gayzik, F Scott; Moreno, Daniel P; Martin, R Shayn; Stitzel, Joel D
The purpose of this study was to use data from a multi-modality image set of males and females representing the 5th, 50th, and 95th percentiles (n=6) to examine abdominal organ location, morphology, and rib coverage variations between supine and seated postures. Medical images were acquired from volunteers in three image modalities including Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and upright MRI (uMRI). A manual and semi-automated segmentation method was used to acquire data and a registration technique was employed to conduct a comparative analysis between abdominal organs (liver, spleen, and kidneys) in both postures. Location of abdominal organs, defined by center of gravity movement, varied between postures and was found to be significant (p=0.002 to p=0.04) in multiple directions for each organ. In addition, morphology changes, including compression and expansion, were seen in each organ as a result of postural changes. Rib coverage, defined as the projected area of the ribs onto the abdominal organs, was measured in frontal, lateral, and posterior projections, and also varied between postures. A significant change in rib coverage between postures was measured for the spleen and right kidney (p=0.03 and p=0.02). The results indicate that posture affects the location, morphology and rib coverage area of abdominal organs and these implications should be noted in computational modeling efforts focused on a seated posture.
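The organ-location comparison above boils down to computing each organ's centre of gravity in a common reference frame for both postures and differencing them. The sketch below assumes the supine and seated segmentations are already co-registered; the mask shapes, voxel spacing, and toy example are illustrative only.

```python
import numpy as np

def organ_cog(mask, spacing, origin=(0.0, 0.0, 0.0)):
    """Centre of gravity (mm) of a binary organ mask in a common reference frame."""
    idx = np.argwhere(mask)                       # voxel indices belonging to the organ
    return idx.mean(axis=0) * np.asarray(spacing) + np.asarray(origin)

def cog_displacement(mask_supine, mask_seated, spacing):
    """Per-axis and total CoG displacement between postures (masks already co-registered)."""
    d = organ_cog(mask_seated, spacing) - organ_cog(mask_supine, spacing)
    return d, np.linalg.norm(d)

# toy example: shift a synthetic 'organ' 4 voxels along the third axis (2 mm voxels -> 8 mm)
sup = np.zeros((64, 64, 64), bool)
sup[20:30, 20:30, 20:30] = True
sea = np.roll(sup, 4, axis=2)
print(cog_displacement(sup, sea, spacing=(2.0, 2.0, 2.0)))
```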
Imaging has enormous untapped potential to improve cancer research through software to extract and process morphometric and functional biomarkers. In the era of non-cytotoxic treatment agents, multi-modality image-guided ablative therapies and rapidly evolving computational resources, quantitative imaging software can be transformative in enabling minimally invasive, objective and reproducible evaluation of cancer treatment response. Post-processing algorithms are integral to high-throughput analysis and fine-grained differentiation of multiple molecular targets.
Gamma-Ray imaging for nuclear security and safety: Towards 3-D gamma-ray vision
NASA Astrophysics Data System (ADS)
Vetter, Kai; Barnowksi, Ross; Haefner, Andrew; Joshi, Tenzing H. Y.; Pavlovsky, Ryan; Quiter, Brian J.
2018-01-01
The development of portable gamma-ray imaging instruments in combination with the recent advances in sensor and related computer vision technologies enable unprecedented capabilities in the detection, localization, and mapping of radiological and nuclear materials in complex environments relevant for nuclear security and safety. Though multi-modal imaging has been established in medicine and biomedical imaging for some time, the potential of multi-modal data fusion for radiological localization and mapping problems in complex indoor and outdoor environments remains to be explored in detail. In contrast to the well-defined settings in medical or biological imaging, associated with a small field of view and well-constrained extension of the radiation field, in many radiological search and mapping scenarios the radiation fields are not constrained and objects and sources are not necessarily known prior to the measurement. The ability to fuse radiological with contextual or scene data in three dimensions, analogous to the fusion of radiological and functional imaging with anatomical imaging in medicine, provides new capabilities enhancing image clarity, context, quantitative estimates, and visualization of the data products. We have developed new means to register and fuse gamma-ray imaging with contextual data from portable or moving platforms. These developments enhance detection and mapping capabilities as well as provide unprecedented visualization of complex radiation fields, moving us one step closer to the realization of gamma-ray vision in three dimensions.
Multi-modality endoscopic imaging for the detection of colorectal cancer
NASA Astrophysics Data System (ADS)
Wall, Richard Andrew
Optical coherence tomography (OCT) is an imaging method that is considered the optical analog to ultrasound, using the technique of optical interferometry to construct two-dimensional depth-resolved images of tissue microstructure. With a resolution on the order of 10 μm, a penetration depth of 1-2 mm in highly scattering tissue, and miniaturization capabilities, fiber-optic-coupled OCT is an ideal modality for inspection of the mouse colon. In the present study, the complementary modalities laser-induced fluorescence (LIF), which offers information on the biochemical makeup of the tissue, and surface magnifying chromoendoscopy (SMC), which offers high-contrast surface visualization, are combined with OCT in endoscopic imaging systems for greater specificity and sensitivity in differentiating between normal and neoplastic tissue, and for visualizing biomarkers indicative of early events in colorectal carcinogenesis. Oblique incidence reflectometry (OIR) also offers advantages, allowing the calculation of bulk tissue optical properties for use as a diagnostic tool. The study comprised three parts. First, a dual-modality OCT-LIF imaging system was designed, capable of focusing light over 325-1300 nm using a reflective distal optics design. A dual-modality fluorescence-based SMC-OCT system was then designed and constructed, capable of resolving the stained mucosal crypt structure of the in vivo mouse colon. Finally, the SMC-OCT instrument's OIR capabilities were modeled, and a modified version of the probe was used to measure tissue scattering and absorption coefficients.
A scalable method to improve gray matter segmentation at ultra high field MRI.
Gulban, Omer Faruk; Schneider, Marian; Marquardt, Ingo; Haast, Roy A M; De Martino, Federico
2018-01-01
High-resolution (functional) magnetic resonance imaging (MRI) at ultra high magnetic fields (7 Tesla and above) enables researchers to study how anatomical and functional properties change within the cortical ribbon, along surfaces and across cortical depths. These studies require an accurate delineation of the gray matter ribbon, which often suffers from inclusion of blood vessels, dura mater and other non-brain tissue. Residual segmentation errors are commonly corrected by browsing the data slice-by-slice and manually changing labels. This task becomes increasingly laborious and prone to error at higher resolutions since both work and error scale with the number of voxels. Here we show that many mislabeled, non-brain voxels can be corrected more efficiently and semi-automatically by representing three-dimensional anatomical images using two-dimensional histograms. We propose both a uni-modal (based on first spatial derivative) and multi-modal (based on compositional data analysis) approach to this representation and quantify the benefits in 7 Tesla MRI data of nine volunteers. We present an openly accessible Python implementation of these approaches and demonstrate that editing cortical segmentations using two-dimensional histogram representations as an additional post-processing step aids existing algorithms and yields improved gray matter borders. By making our data and corresponding expert (ground truth) segmentations openly available, we facilitate future efforts to develop and test segmentation algorithms on this challenging type of data.
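A minimal sketch of the two-dimensional histogram idea described above, assuming only NumPy: voxel intensity is paired with gradient magnitude, and any voxel whose histogram bin falls in a user-selected region can be relabelled in one step. The function names are hypothetical and this is not the authors' released Python implementation.

```python
import numpy as np

def intensity_gradient_histogram(volume, bins=200):
    """2-D histogram of voxel intensity vs. gradient magnitude (the uni-modal case).

    Returns the histogram plus each voxel's (x, y) bin indices, so that all voxels
    falling into a user-selected histogram region can be relabelled in one step.
    """
    vol = volume.astype(float)
    gmag = np.sqrt(sum(g ** 2 for g in np.gradient(vol)))      # first spatial derivative
    x, y = vol.ravel(), gmag.ravel()
    hist, xedges, yedges = np.histogram2d(x, y, bins=bins)
    xi = np.clip(np.digitize(x, xedges) - 1, 0, bins - 1).reshape(volume.shape)
    yi = np.clip(np.digitize(y, yedges) - 1, 0, bins - 1).reshape(volume.shape)
    return hist, xi, yi

def voxels_in_selected_bins(xi, yi, bin_mask):
    """Boolean volume of voxels whose 2-D histogram bin is flagged in bin_mask."""
    return bin_mask[xi, yi]
```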
Richard Mazurchuk, PhD | Division of Cancer Prevention
Dr. Richard Mazurchuk received a BS in Physics and MS and PhD in Biophysics from SUNY Buffalo. His research focused on developing novel multi-modality imaging techniques, contrast (enhancing) agents and methods to assess the efficacy of experimental therapeutics.
Páramo, José A.; Rodríguez JA, José A.; Orbe, Josune
2006-01-01
The clinical utility of a biomarker depends on its ability to identify high-risk individuals to optimally manage the patient. A new biomarker would be of clinical value if it is accurate and reliable, provides good sensitivity and specificity, and is available for widespread application. Data are accumulating on the potential clinical utility of integrating imaging technologies and circulating biomarkers for the identification of vulnerable (high-risk) cardiovascular patients. A multi-biomarker strategy consisting of markers of inflammation, hemostasis and thrombosis, proteolysis and oxidative stress, combined with new imaging modalities (optical coherence tomography, virtual histology plus IVUS, PET) can increase our ability to identify such thrombosis-prone patients. In an ideal scenario, cardiovascular biomarkers and imaging combined will provide a better diagnostic tool to identify high-risk individuals and also more efficient methods for effective therapies to reduce such cardiovascular risk. However, additional studies are required to show that this approach can contribute to improved diagnosis and treatment of atherosclerotic disease. PMID:19690647
Preclinical imaging methods for assessing the safety and efficacy of regenerative medicine therapies
NASA Astrophysics Data System (ADS)
Scarfe, Lauren; Brillant, Nathalie; Kumar, J. Dinesh; Ali, Noura; Alrumayh, Ahmed; Amali, Mohammed; Barbellion, Stephane; Jones, Vendula; Niemeijer, Marije; Potdevin, Sophie; Roussignol, Gautier; Vaganov, Anatoly; Barbaric, Ivana; Barrow, Michael; Burton, Neal C.; Connell, John; Dazzi, Francesco; Edsbagge, Josefina; French, Neil S.; Holder, Julie; Hutchinson, Claire; Jones, David R.; Kalber, Tammy; Lovatt, Cerys; Lythgoe, Mark F.; Patel, Sara; Patrick, P. Stephen; Piner, Jacqueline; Reinhardt, Jens; Ricci, Emanuelle; Sidaway, James; Stacey, Glyn N.; Starkey Lewis, Philip J.; Sullivan, Gareth; Taylor, Arthur; Wilm, Bettina; Poptani, Harish; Murray, Patricia; Goldring, Chris E. P.; Park, B. Kevin
2017-10-01
Regenerative medicine therapies hold enormous potential for a variety of currently incurable conditions with high unmet clinical need. Most progress in this field to date has been achieved with cell-based regenerative medicine therapies, with over a thousand clinical trials performed up to 2015. However, lack of adequate safety and efficacy data is currently limiting wider uptake of these therapies. To facilitate clinical translation, non-invasive in vivo imaging technologies that enable careful evaluation and characterisation of the administered cells and their effects on host tissues are critically required to evaluate their safety and efficacy in relevant preclinical models. This article reviews the most common imaging technologies available and how they can be applied to regenerative medicine research. We cover details of how each technology works, which cell labels are most appropriate for different applications, and the value of multi-modal imaging approaches to gain a comprehensive understanding of the responses to cell therapy in vivo.
PRECISION MANAGEMENT OF LOCALIZED PROSTATE CANCER
VanderWeele, David J.; Turkbey, Baris; Sowalsky, Adam G.
2017-01-01
Introduction: The vast majority of men who are diagnosed with prostate cancer die of other causes, highlighting the importance of determining which patients are at risk of death from prostate cancer. Precision management of prostate cancer patients includes distinguishing which men have potentially lethal disease and employing strategies for determining which treatment modality appropriately balances the desire to achieve a durable response while preventing unnecessary overtreatment. Areas covered: In this review, we highlight precision approaches to risk assessment and a context for the precision-guided application of definitive therapy. We focus on three dilemmas relevant to the diagnosis of localized prostate cancer: screening, the decision to treat, and postoperative management. Expert commentary: In the last five years, numerous precision tools have emerged with potential benefit to the patient. However, to achieve optimal outcomes, the decision to employ one or more of these tests must be considered in the context of prevailing conventional factors. Moreover, performance and interpretation of a molecular or imaging precision test remains practitioner-dependent. The next five years will witness increased marriage of molecular and imaging biomarkers for improved multi-modal diagnosis and discrimination of disease that is aggressive versus truly indolent. PMID:28133630
Cocchia, Rosangela; D’Andrea, Antonello; Conte, Marianna; Cavallaro, Massimo; Riegler, Lucia; Citro, Rodolfo; Sirignano, Cesare; Imbriaco, Massimo; Cappelli, Maurizio; Gregorio, Giovanni; Calabrò, Raffaele; Bossone, Eduardo
2017-01-01
Transcatheter aortic valve replacement (TAVR) has been validated as a new therapy for patients affected by severe symptomatic aortic stenosis who are not eligible for surgical intervention because of major contraindications or high operative risk. Patient selection for TAVR should be based not only on accurate assessment of aortic stenosis morphology, but also on several clinical and functional data. Multiple imaging modalities should be preferred for assessing the anatomy and the dimensions of the aortic valve and annulus before TAVR. Ultrasound represents the first-line tool in the evaluation of these patients, giving a detailed anatomic description of the aortic valve complex and allowing a sufficiently reliable estimate of the hemodynamic severity of the valvular stenosis. Angiography should be used to assess coronary involvement and plan a revascularization strategy before the implant. Multislice computed tomography plays a central role, as it can give anatomical details for choosing the best-fitting prosthesis, evaluate the morphology of the access path, and detect other relevant comorbidities. Cardiovascular magnetic resonance and positron emission tomography are emerging modalities helpful in aortic stenosis evaluation. The aim of this review is to give an overview of the clinical and technical aspects of TAVR essential for adequate patient selection. PMID:28400918
Optimal Co-segmentation of Tumor in PET-CT Images with Context Information
Song, Qi; Bai, Junjie; Han, Dongfeng; Bhatia, Sudershan; Sun, Wenqing; Rockey, William; Bayouth, John E.; Buatti, John M.
2014-01-01
PET-CT images have been widely used in clinical practice for radiotherapy treatment planning. Many existing segmentation approaches work for only a single imaging modality and therefore suffer from the low spatial resolution of PET or the low contrast of CT. In this work we propose a novel method for the co-segmentation of the tumor in both PET and CT images, which makes use of the advantages of each modality: the functional information from PET and the anatomical structure information from CT. The approach formulates the segmentation problem as the minimization of a Markov Random Field (MRF) model, which encodes the information from both modalities. The optimization is solved using a graph-cut based method. Two sub-graphs are constructed for the segmentation of the PET and the CT images, respectively. To achieve consistent results in the two modalities, an adaptive context cost is enforced by adding context arcs between the two sub-graphs. An optimal solution can be obtained by solving a single maximum flow problem, which leads to simultaneous segmentation of the tumor volumes in both modalities. The proposed algorithm was validated for robust delineation of lung tumors on 23 PET-CT datasets and two head-and-neck cancer subjects. Both qualitative and quantitative results show significant improvement compared to graph cut methods using PET or CT alone. PMID:23693127
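The function below is a hedged sketch of the kind of MRF objective described above: per-modality data terms, a Potts smoothness term in each sub-graph, and a context term penalising PET/CT label disagreement. A real implementation would minimise such an energy with a max-flow/graph-cut solver rather than merely evaluating it; the array shapes, weights, and function name are assumptions.

```python
import numpy as np

def cosegmentation_energy(label_pet, label_ct, unary_pet, unary_ct,
                          lam_pet=1.0, lam_ct=1.0, lam_context=1.0):
    """Evaluate a co-segmentation energy of the MRF type sketched above.

    label_pet, label_ct : (Z, Y, X) arrays with values 0/1 (1 = tumour), common grid
    unary_pet, unary_ct : (2, Z, Y, X) per-voxel costs for assigning label 0 or 1
    """
    # Data terms: cost of the chosen label at every voxel, per modality.
    data = (np.where(label_pet == 1, unary_pet[1], unary_pet[0]).sum()
            + np.where(label_ct == 1, unary_ct[1], unary_ct[0]).sum())

    def potts(lbl):
        # Count label changes between neighbouring voxels along each axis.
        cost = 0
        for ax in range(lbl.ndim):
            a = np.swapaxes(lbl, 0, ax)
            cost += np.count_nonzero(a[1:] != a[:-1])
        return cost

    smooth = lam_pet * potts(label_pet) + lam_ct * potts(label_ct)
    # Context term: penalise disagreement between the PET and CT labelings.
    context = lam_context * np.count_nonzero(label_pet != label_ct)
    return data + smooth + context
```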
NASA Astrophysics Data System (ADS)
Allen, Matthew S.; Mayes, Randall L.; Bergman, Elizabeth J.
2010-11-01
Modal substructuring or component mode synthesis (CMS) has been standard practice for many decades in the analytical realm, yet a number of significant difficulties have been encountered when attempting to combine experimentally derived modal models with analytical ones or when predicting the effect of structural modifications using experimental measurements. This work presents a new method that removes the effects of a flexible fixture from an experimentally obtained modal model. It can be viewed as an extension to the approach where rigid masses are removed from a structure. The approach presented here improves the modal basis of the substructure, so that it can be used to more accurately estimate the modal parameters of the built-up system. New types of constraints are also presented, which constrain the modal degrees of freedom of the substructures, avoiding the need to estimate the connection point displacements and rotations. These constraints together with the use of a flexible fixture enable a new approach for joining structures, especially those with statically indeterminate multi-point connections, such as two circular flanges that are joined by many more bolts than required to enforce compatibility if the substructures were rigid. Fixture design is discussed, one objective of which is to achieve a mass-loaded boundary condition that exercises the substructure at the connection point as it is in the built up system. The proposed approach is demonstrated with two examples using experimental measurements from laboratory systems. The first is a simple problem of joining two beams of differing lengths, while the second consists of a three-dimensional structure comprising a circular plate that is bolted at eight locations to a flange on a cylindrical structure. In both cases frequency response functions predicted by the substructuring methods agree well with those of the actual coupled structures over a significant range of frequencies.
Pettersson-Yeo, William; Benetti, Stefania; Marquand, Andre F.; Joules, Richard; Catani, Marco; Williams, Steve C. R.; Allen, Paul; McGuire, Philip; Mechelli, Andrea
2014-01-01
In the pursuit of clinical utility, neuroimaging researchers of psychiatric and neurological illness are increasingly using analyses, such as support vector machine, that allow inference at the single-subject level. Recent studies employing single-modality data, however, suggest that classification accuracies must be improved for such utility to be realized. One possible solution is to integrate different data types to provide a single combined output classification; either by generating a single decision function based on an integrated kernel matrix, or, by creating an ensemble of multiple single modality classifiers and integrating their predictions. Here, we describe four integrative approaches: (1) an un-weighted sum of kernels, (2) multi-kernel learning, (3) prediction averaging, and (4) majority voting, and compare their ability to enhance classification accuracy relative to the best single-modality classification accuracy. We achieve this by integrating structural, functional, and diffusion tensor magnetic resonance imaging data, in order to compare ultra-high risk (n = 19), first episode psychosis (n = 19) and healthy control subjects (n = 23). Our results show that (i) whilst integration can enhance classification accuracy by up to 13%, the frequency of such instances may be limited, (ii) where classification can be enhanced, simple methods may yield greater increases relative to more computationally complex alternatives, and, (iii) the potential for classification enhancement is highly influenced by the specific diagnostic comparison under consideration. In conclusion, our findings suggest that for moderately sized clinical neuroimaging datasets, combining different imaging modalities in a data-driven manner is no “magic bullet” for increasing classification accuracy. However, it remains possible that this conclusion is dependent on the use of neuroimaging modalities that had little, or no, complementary information to offer one another, and that the integration of more diverse types of data would have produced greater classification enhancement. We suggest that future studies ideally examine a greater variety of data types (e.g., genetic, cognitive, and neuroimaging) in order to identify the data types and combinations optimally suited to the classification of early stage psychosis. PMID:25076868
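Two of the four integration strategies listed above are easy to sketch with scikit-learn, assuming per-modality precomputed kernel matrices (e.g. linear kernels on each modality's features) and binary labels; this is an illustrative sketch, not the authors' pipeline.

```python
import numpy as np
from sklearn.svm import SVC

def unweighted_kernel_sum(kernels_train, kernels_test, y_train):
    """Approach (1): sum precomputed kernels from each modality, train one SVM."""
    K_train = sum(kernels_train)          # each element: (n_train, n_train)
    K_test = sum(kernels_test)            # each element: (n_test, n_train)
    clf = SVC(kernel="precomputed").fit(K_train, y_train)
    return clf.predict(K_test)

def majority_vote(predictions):
    """Approach (4): majority vote over per-modality class predictions (labels 0/1)."""
    votes = np.vstack(predictions)        # shape: (n_modalities, n_subjects)
    return (votes.mean(axis=0) >= 0.5).astype(int)
```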
Su, Hang; Yin, Zhaozheng; Huh, Seungil; Kanade, Takeo
2013-10-01
Phase-contrast microscopy is one of the most common and convenient imaging modalities for observing long-term multi-cellular processes; it generates images by the interference of light passing through transparent specimens and the background medium with different retarded phases. Despite many years of study, computer-aided analysis of cell behavior in phase contrast microscopy is challenged by image quality issues and artifacts caused by phase contrast optics. Addressing these unsolved challenges, the authors propose (1) a phase contrast microscopy image restoration method that produces phase retardation features, which are intrinsic features of phase contrast microscopy, and (2) a semi-supervised learning based algorithm for cell segmentation, which is a fundamental task for various cell behavior analyses. Specifically, the image formation process of phase contrast microscopy images is first computationally modeled with a dictionary of diffraction patterns; as a result, each pixel of a phase contrast microscopy image is represented by a linear combination of the bases, which we call phase retardation features. Images are then partitioned into phase-homogeneous atoms by clustering neighboring pixels with similar phase retardation features. Consequently, cell segmentation is performed via a semi-supervised classification technique over the phase-homogeneous atoms. Experiments demonstrate that the proposed approach produces quality segmentation of individual cells and outperforms previous approaches.
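As a loose illustration of "each pixel represented by a linear combination of the bases", the toy sketch below decomposes image patches onto a small dictionary with non-negative least squares. It is not the authors' restoration algorithm; the dictionary, patch size, and function name are assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def patch_coefficients(image, dictionary, patch=7):
    """Non-negative decomposition of image patches onto a basis-pattern dictionary.

    image      : 2-D phase-contrast image
    dictionary : (patch*patch, n_bases) matrix; each column is a flattened basis pattern
    Returns an (n_patches, n_bases) array of coefficients that could be clustered
    into homogeneous atoms and passed to a semi-supervised classifier.
    """
    h, w = image.shape
    feats = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            b = image[i:i + patch, j:j + patch].astype(float).ravel()
            coef, _ = nnls(dictionary, b)     # non-negative least-squares fit
            feats.append(coef)
    return np.array(feats)
```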
Deeply learnt hashing forests for content based image retrieval in prostate MR images
NASA Astrophysics Data System (ADS)
Shah, Amit; Conjeti, Sailesh; Navab, Nassir; Katouzian, Amin
2016-03-01
The deluge in the size and heterogeneity of medical image databases necessitates content-based retrieval systems for their efficient organization. In this paper, we propose such a system to retrieve prostate MR images which share similarities in appearance and content with a query image. We introduce deeply learnt hashing forests (DL-HF) for this image retrieval task. DL-HF effectively leverages the semantic descriptiveness of deep-learnt Convolutional Neural Networks, used in conjunction with hashing forests, which are unsupervised random forests. DL-HF hierarchically parses the deep-learnt feature space to encode subspaces with compact binary code words. We propose a similarity-preserving feature descriptor called the Parts Histogram, which is derived from DL-HF. Correlation defined on this descriptor is used as a similarity metric for retrieval from the database. Validation on a publicly available multi-center prostate MR image database established the validity of the proposed approach. The proposed method is fully automated without any user interaction and does not depend on external image standardization such as normalization and registration. The image retrieval method is generalizable and well suited for retrieval in heterogeneous databases, other imaging modalities, and other anatomies.
Qi, Shile; Calhoun, Vince D.; van Erp, Theo G. M.; Bustillo, Juan; Damaraju, Eswar; Turner, Jessica A.; Du, Yuhui; Chen, Jiayu; Yu, Qingbao; Mathalon, Daniel H.; Ford, Judith M.; Voyvodic, James; Mueller, Bryon A.; Belger, Aysenil; Ewen, Sarah Mc; Potkin, Steven G.; Preda, Adrian; Jiang, Tianzi
2017-01-01
Multimodal fusion is an effective approach to take advantage of cross-information among multiple imaging data to better understand brain diseases. However, most current fusion approaches are blind, without adopting any prior information. To date, there is increasing interest to uncover the neurocognitive mapping of specific behavioral measurement on enriched brain imaging data; hence, a supervised, goal-directed model that enables a priori information as a reference to guide multimodal data fusion is needed and a natural option. Here we proposed a fusion with reference model, called “multi-site canonical correlation analysis with reference plus joint independent component analysis” (MCCAR+jICA), which can precisely identify co-varying multimodal imaging patterns closely related to reference information, such as cognitive scores. In a 3-way fusion simulation, the proposed method was compared with its alternatives on estimation accuracy of both target component decomposition and modality linkage detection. MCCAR+jICA outperforms others with higher precision. In human imaging data, working memory performance was utilized as a reference to investigate the covarying functional and structural brain patterns among 3 modalities and how they are impaired in schizophrenia. Two independent cohorts (294 and 83 subjects respectively) were used. Interestingly, similar brain maps were identified between the two cohorts, with substantial overlap in the executive control networks in fMRI, salience network in sMRI, and major white matter tracts in dMRI. These regions have been linked with working memory deficits in schizophrenia in multiple reports, while MCCAR+jICA further verified them in a repeatable, joint manner, demonstrating the promise of such results for identifying potential neuromarkers for mental disorders. PMID:28708547
Griffis, Joseph C; Allendorfer, Jane B; Szaflarski, Jerzy P
2016-01-15
Manual lesion delineation by an expert is the standard for lesion identification in MRI scans, but it is time-consuming and can introduce subjective bias. Alternative methods often require multi-modal MRI data, user interaction, scans from a control population, and/or arbitrary statistical thresholding. We present an approach for automatically identifying stroke lesions in individual T1-weighted MRI scans using naïve Bayes classification. Probabilistic tissue segmentation and image algebra were used to create feature maps encoding information about missing and abnormal tissue. Leave-one-case-out training and cross-validation were used to obtain out-of-sample predictions for each of 30 cases with left hemisphere stroke lesions. Our method correctly predicted lesion locations for 30/30 untrained cases. Post-processing with smoothing (8mm FWHM) and cluster-extent thresholding (100 voxels) was found to improve performance. Quantitative evaluations of post-processed out-of-sample predictions on 30 cases revealed high spatial overlap (mean Dice similarity coefficient=0.66) and volume agreement (mean percent volume difference=28.91; Pearson's r=0.97) with manual lesion delineations. Our automated approach agrees with manual tracing. It provides an alternative to automated methods that require multi-modal MRI data, additional control scans, or user interaction to achieve optimal performance. Our fully trained classifier has applications in neuroimaging and clinical contexts.
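A minimal sketch of the general pipeline described above (voxel-wise Gaussian naive Bayes on feature maps, followed by smoothing and cluster-extent thresholding), assuming NumPy/SciPy/scikit-learn. Thresholds, array layouts, and function names are illustrative assumptions, and the authors' leave-one-case-out training loop is not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label
from sklearn.naive_bayes import GaussianNB

def fit_lesion_classifier(feature_maps, lesion_masks):
    """Voxel-wise Gaussian naive Bayes from training cases.

    feature_maps : list of (n_features, Z, Y, X) float arrays
    lesion_masks : list of (Z, Y, X) binary arrays (expert tracings, 1 = lesion)
    """
    X = np.concatenate([f.reshape(f.shape[0], -1).T for f in feature_maps])
    y = np.concatenate([m.ravel() for m in lesion_masks])
    return GaussianNB().fit(X, y)

def predict_lesion(clf, feature_map, smooth_mm=8.0, voxel_mm=2.0, min_voxels=100):
    """Predict lesion probability, smooth it, threshold, and drop small clusters."""
    X = feature_map.reshape(feature_map.shape[0], -1).T
    prob = clf.predict_proba(X)[:, 1].reshape(feature_map.shape[1:])
    sigma = smooth_mm / (2.355 * voxel_mm)   # convert FWHM (mm) to sigma (voxels)
    mask = gaussian_filter(prob, sigma) > 0.5
    labels, _ = label(mask)                  # connected components
    sizes = np.bincount(labels.ravel())
    valid = np.nonzero(sizes >= min_voxels)[0]
    valid = valid[valid != 0]                # ignore the background component
    return np.isin(labels, valid)
```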
Gueninchault, N; Proudhon, H; Ludwig, W
2016-11-01
Multi-modal characterization of polycrystalline materials by combined use of three-dimensional (3D) X-ray diffraction and imaging techniques may be considered as the 3D equivalent of surface studies in the electron microscope combining diffraction and other imaging modalities. Since acquisition times at synchrotron sources are nowadays compatible with four-dimensional (time lapse) studies, suitable mechanical testing devices are needed which enable switching between these different imaging modalities over the course of a mechanical test. Here a specifically designed tensile device, fulfilling severe space constraints and permitting switching between X-ray (holo)tomography, diffraction contrast tomography and topotomography, is presented. As a proof of concept the 3D characterization of an Al-Li alloy multicrystal by means of diffraction contrast tomography is presented, followed by repeated topotomography characterization of one selected grain at increasing levels of deformation. Signatures of slip bands and sudden lattice rotations inside the grain have been shown by means of in situ topography carried out during the load ramps, and diffraction spot peak broadening has been monitored throughout the experiment.
SU-E-J-218: Novel Validation Paradigm of MRI to CT Deformation of Prostate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Padgett, K; University of Miami School of Medicine - Radiology, Miami, FL; Pirozzi, S
2015-06-15
Purpose: Deformable registration algorithms are inherently difficult to characterize in the multi-modality setting due to significant differences in the characteristics of the different modalities (CT and MRI) as well as tissue deformations. We present a unique paradigm in which this is overcome by utilizing a planning-MRI acquired within an hour of the planning-CT, serving as a surrogate for quantifying MRI to CT deformation and eliminating the issues of multi-modality comparisons. Methods: For nine subjects, T2 fast-spin-echo images were acquired at two different time points, the first several weeks prior to planning (diagnostic-MRI) and the second on the same day as the planning-CT (planning-MRI). Significant effort in patient positioning and bowel/bladder preparation was undertaken to minimize distortion of the prostate in all datasets. The diagnostic-MRI was rigidly and deformably aligned to the planning-CT utilizing a commercially available deformable registration algorithm synthesized from local registrations. Additionally, the quality of rigid alignment was ranked by an imaging physicist. The distances between corresponding anatomical landmarks on rigid and deformed registrations (diagnostic-MR to planning-CT) were evaluated. Results: In cases where the rigid registration was of acceptable quality, the deformable registration did not improve the alignment; this was true for all metrics employed. When the analysis was restricted to cases where the rigid alignment was ranked as unacceptable, the deformable registration significantly improved the alignment, with 4.62 mm residual landmark error compared to 5.72 mm for rigid alignment (p=0.0008). Conclusion: This paradigm provides an ideal testing ground for MR to CT deformable registration algorithms by allowing for inter-modality comparisons of multi-modality registrations. Consistent positioning and bowel and bladder preparation may result in higher quality rigid registrations than typically achieved, which limits the impact of deformable registration. In cases where significant differences exist, deformable registration provides significant value.
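A small sketch of the landmark-based comparison described above, assuming per-landmark coordinates are available after each alignment. The abstract does not state which statistical test was used; the paired Wilcoxon test here is an assumption.

```python
import numpy as np
from scipy import stats

def residual_errors(moving_pts, fixed_pts):
    """Euclidean distances (mm) between corresponding landmarks after alignment."""
    return np.linalg.norm(np.asarray(moving_pts) - np.asarray(fixed_pts), axis=1)

def compare_alignments(rigid_err, deform_err):
    """Paired comparison of per-landmark residuals from rigid vs. deformable alignment."""
    stat, p = stats.wilcoxon(rigid_err, deform_err)   # paired, non-parametric (an assumption)
    return rigid_err.mean(), deform_err.mean(), p
```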
MO-E-BRB-00: PANEL DISCUSSION: SBRT/SRS Case Studies - Lung
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2016-06-15
In this interactive session, lung SBRT patient cases will be presented to highlight real-world considerations for ensuring safe and accurate treatment delivery. An expert panel of speakers will discuss challenges specific to lung SBRT including patient selection, patient immobilization techniques, 4D CT simulation and respiratory motion management, target delineation for treatment planning, online treatment alignment, and established prescription regimens and OAR dose limits. Practical examples of cases, including the patient flow through the clinical process, are presented and audience participation will be encouraged. This panel session is designed to provide case demonstration and review for lung SBRT in terms of (1) clinical appropriateness in patient selection, (2) strategies for simulation, including 4D and respiratory motion management, (3) applying multiple imaging modalities (4D CT, MRI, PET) for tumor volume delineation and motion extent, and (4) image guidance in treatment delivery. Learning Objectives: Understand the established requirements for patient selection in lung SBRT. Become familiar with the various immobilization strategies for lung SBRT, including technology for respiratory motion management. Understand the benefits and pitfalls of applying multiple imaging modalities (4D CT, MRI, PET) for tumor volume delineation and motion extent determination for lung SBRT. Understand established prescription regimens and OAR dose limits.
Carbon-11 radiolabeling of iron-oxide nanoparticles for dual-modality PET/MR imaging
NASA Astrophysics Data System (ADS)
Sharma, Ramesh; Xu, Youwen; Kim, Sung Won; Schueller, Michael J.; Alexoff, David; Smith, S. David; Wang, Wei; Schlyer, David
2013-07-01
Dual-modality imaging, using Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) simultaneously, is a powerful tool to gain valuable information correlating structure with function in biomedicine. The advantage of this dual approach is that the strengths of one modality can balance the weaknesses of the other. However, success of this technique requires developing imaging probes suitable for both. Here, we report on the development of a nanoparticle labeling procedure via covalent bonding with carbon-11 PET isotope. Carbon-11 in the form of [11C]methyl iodide was used as a methylation agent to react with carboxylic acid (-COOH) and amine (-NH2) functional groups of ligands bound to the nanoparticles (NPs). The surface coating ligands present on superparamagnetic iron-oxide nanoparticles (SPIO NPs) were radiolabeled to achieve dual-modality PET/MR imaging capabilities. The proof-of-concept dual-modality PET/MR imaging using the radiolabeled SPIO NPs was demonstrated in an in vivo experiment. Electronic supplementary information (ESI) available; see DOI: 10.1039/c3nr02519e
Lee, Jasper; Zhang, Jianguo; Park, Ryan; Dagliyan, Grant; Liu, Brent; Huang, H K
2012-07-01
A Molecular Imaging Data Grid (MIDG) was developed to address current informatics challenges in archival, sharing, search, and distribution of preclinical imaging studies between animal imaging facilities and investigator sites. This manuscript presents a 2nd generation MIDG replacing the Globus Toolkit with a new system architecture that implements the IHE XDS-i integration profile. Implementation and evaluation were conducted using a 3-site interdisciplinary test-bed at the University of Southern California. The 2nd generation MIDG design architecture replaces the initial design's Globus Toolkit with dedicated web services and XML-based messaging for dedicated management and delivery of multi-modality DICOM imaging datasets. The Cross-enterprise Document Sharing for Imaging (XDS-i) integration profile from the field of enterprise radiology informatics was adopted into the MIDG design because streamlined image registration, management, and distribution dataflow are likewise needed in preclinical imaging informatics systems as in enterprise PACS application. Implementation of the MIDG is demonstrated at the University of Southern California Molecular Imaging Center (MIC) and two other sites with specified hardware, software, and network bandwidth. Evaluation of the MIDG involves data upload, download, and fault-tolerance testing scenarios using multi-modality animal imaging datasets collected at the USC Molecular Imaging Center. The upload, download, and fault-tolerance tests of the MIDG were performed multiple times using 12 collected animal study datasets. Upload and download times demonstrated reproducibility and improved real-world performance. Fault-tolerance tests showed that automated failover between Grid Node Servers has minimal impact on normal download times. Building upon the 1st generation concepts and experiences, the 2nd generation MIDG system improves accessibility of disparate animal-model molecular imaging datasets to users outside a molecular imaging facility's LAN using a new architecture, dataflow, and dedicated DICOM-based management web services. Productivity and efficiency of preclinical research for translational sciences investigators has been further streamlined for multi-center study data registration, management, and distribution.
Yu, Guan; Liu, Yufeng; Thung, Kim-Han; Shen, Dinggang
2014-01-01
Accurately identifying mild cognitive impairment (MCI) individuals who will progress to Alzheimer's disease (AD) is very important for making early interventions. Many classification methods focus on integrating multiple imaging modalities such as magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET). However, the main challenge for MCI classification using multiple imaging modalities is the existence of substantial missing data in many subjects. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, almost half of the subjects do not have PET images. In this paper, we propose a new and flexible binary classification method, namely Multi-task Linear Programming Discriminant (MLPD) analysis, for the incomplete multi-source feature learning. Specifically, we decompose the classification problem into different classification tasks, i.e., one for each combination of available data sources. To solve all different classification tasks jointly, our proposed MLPD method links them together by constraining them to achieve similar estimated mean differences between the two classes for the shared features. Compared with the state-of-the-art incomplete Multi-Source Feature (iMSF) learning method, instead of constraining different classification tasks to choose a common feature subset for those shared features, MLPD can flexibly and adaptively choose different feature subsets for different classification tasks. Furthermore, our proposed MLPD method can be efficiently implemented by linear programming. To validate our MLPD method, we perform experiments on the ADNI baseline dataset with the incomplete MRI and PET images from 167 progressive MCI (pMCI) subjects and 226 stable MCI (sMCI) subjects. We further compared our method with the iMSF method (using incomplete MRI and PET images) and also the single-task classification method (using only MRI or only subjects with both MRI and PET images). Experimental results show very promising performance of our proposed MLPD method.
Computational Intelligence Techniques for Tactile Sensing Systems
Gastaldo, Paolo; Pinna, Luigi; Seminara, Lucia; Valle, Maurizio; Zunino, Rodolfo
2014-01-01
Tactile sensing helps robots interact with humans and objects effectively in real environments. Piezoelectric polymer sensors provide the functional building blocks of the robotic electronic skin, mainly thanks to their flexibility and suitability for detecting dynamic contact events and for recognizing the touch modality. The paper focuses on the ability of tactile sensing systems to support the challenging recognition of certain qualities/modalities of touch. The research applies novel computational intelligence techniques and a tensor-based approach for the classification of touch modalities; its main results are a procedure to enhance system generalization ability and an architecture for multi-class recognition applications. An experimental campaign involving 70 participants using three different modalities in touching the upper surface of the sensor array was conducted, and confirmed the validity of the approach. PMID:24949646
Cross-modal learning to rank via latent joint representation.
Wu, Fei; Jiang, Xinyang; Li, Xi; Tang, Siliang; Lu, Weiming; Zhang, Zhongfei; Zhuang, Yueting
2015-05-01
Cross-modal ranking is a research topic that is imperative to many applications involving multimodal data. Discovering a joint representation for multimodal data and learning a ranking function are essential in order to boost cross-media retrieval (i.e., image-query-text or text-query-image). In this paper, we propose an approach to discover the latent joint representation of pairs of multimodal data (e.g., pairs of an image query and a text document) via a conditional random field and structural learning in a listwise ranking manner. We call this approach cross-modal learning to rank via latent joint representation (CML²R). In CML²R, the correlations between multimodal data are captured in terms of their shared hidden variables (e.g., topics), and a hidden-topic-driven discriminative ranking function is learned in a listwise ranking manner. The experiments show that the proposed approach achieves good performance in cross-media retrieval and is also able to learn discriminative representations of multimodal data.
Multi-modal and targeted imaging improves automated mid-brain segmentation
NASA Astrophysics Data System (ADS)
Plassard, Andrew J.; D'Haese, Pierre F.; Pallavaram, Srivatsan; Newton, Allen T.; Claassen, Daniel O.; Dawant, Benoit M.; Landman, Bennett A.
2017-02-01
The basal ganglia and limbic system, particularly the thalamus, putamen, internal and external globus pallidus, substantia nigra, and sub-thalamic nucleus, comprise a clinically relevant signal network for Parkinson's disease. In order to manually trace these structures, a combination of high-resolution and specialized sequences at 7T are used, but it is not feasible to scan clinical patients in those scanners. Targeted imaging sequences at 3T such as FGATIR, and other optimized inversion recovery sequences, have been presented which enhance contrast in a select group of these structures. In this work, we show that a series of atlases generated at 7T can be used to accurately segment these structures at 3T using a combination of standard and optimized imaging sequences, though no one approach provided the best result across all structures. In the thalamus and putamen, a median Dice coefficient over 0.88 and a mean surface distance less than 1.0mm was achieved using a combination of T1 and optimized inversion recovery imaging sequences. In the internal and external globus pallidus a Dice over 0.75 and a mean surface distance less than 1.2mm was achieved using a combination of T1 and FGATIR imaging sequences. In the substantia nigra and sub-thalamic nucleus a Dice coefficient of over 0.6 and a mean surface distance of less than 1.0mm was achieved using the optimized inversion recovery imaging sequence. On average, using T1 and optimized inversion recovery together produced significantly improved segmentation results over any individual modality (p<0.05, Wilcoxon signed-rank test).
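The evaluation metrics quoted above (Dice coefficient and mean surface distance) can be sketched as follows with NumPy/SciPy, assuming binary segmentation masks and known voxel spacing; this is a generic sketch of the metrics, not the authors' evaluation code.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    """Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.count_nonzero(a & b) / (np.count_nonzero(a) + np.count_nonzero(b))

def mean_surface_distance(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean distance (mm) between the surfaces of two binary masks."""
    def surface(m):
        m = m.astype(bool)
        return m & ~binary_erosion(m)          # voxels on the boundary of the mask
    sa, sb = surface(a), surface(b)
    # Distance from each surface voxel of one mask to the nearest surface voxel of the other.
    d_a_to_b = distance_transform_edt(~sb, sampling=spacing)[sa]
    d_b_to_a = distance_transform_edt(~sa, sampling=spacing)[sb]
    return (d_a_to_b.mean() + d_b_to_a.mean()) / 2.0
```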
Ahn, Hyungwoo; Chun, Eun Ju; Lee, Hak Jong; Hwang, Sung Il; Choi, Dong-Ju; Chae, In-Ho; Lee, Kyung Won
2018-01-01
Although the causes of hypertension are usually unknown, about 10% of the cases occur secondary to specific etiologies, which are often treatable. Common categories of secondary hypertension include renal parenchymal disease, renovascular stenosis, vascular and endocrinologic disorders. For diseases involving the renal parenchyma and adrenal glands, ultrasonography (US), computed tomography (CT) or magnetic resonance (MR) imaging is recommended. For renovascular stenosis and vascular disorders, Doppler US, conventional or noninvasive (CT or MR) angiography is an appropriate modality. Nuclear imaging can be useful in the differential diagnosis of endocrine causes. Radiologists should understand the role of each imaging modality and its typical findings in various causes of secondary hypertension. This article focuses on appropriate imaging approaches in accordance with the categorized etiologies leading to hypertension.
NASA Astrophysics Data System (ADS)
Hasan, Taufiq; Bořil, Hynek; Sangwan, Abhijeet; L Hansen, John H.
2013-12-01
The ability to detect and organize 'hot spots' representing areas of excitement within video streams is a challenging research problem when techniques rely exclusively on video content. A generic method for sports video highlight selection is presented in this study which leverages both video/image structure and audio/speech properties. Processing begins by partitioning the video into small segments and extracting several multi-modal features from each segment. Excitability is computed based on the likelihood of the segmental features residing in regions of their joint probability density function space which are considered both exciting and rare. The proposed measure is used to rank order the partitioned segments to compress the overall video sequence and produce a contiguous set of highlights. Experiments are performed on baseball videos using excitement assessment of the commentators' speech, audio energy, slow-motion replay, scene-cut density, and motion activity as features. Detailed analysis of the correlation between user excitability and various speech production parameters is conducted, and an effective scheme is designed to estimate the excitement level of the commentators' speech from the sports videos. Subjective evaluation of excitability and ranking of video segments demonstrate a higher correlation with the proposed measure compared to well-established techniques, indicating the effectiveness of the overall approach.
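One plausible way to realise "likelihood of the segmental features residing in exciting and rare regions of their joint density" is sketched below with scikit-learn Gaussian mixtures; this is an assumption-laden stand-in, not the paper's estimator, and the heuristic used to pre-flag exciting segments is hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def rank_segments_by_excitability(features, exciting_idx, n_components=8):
    """Rank segments that are both rare overall and close to 'exciting' modes.

    features     : (n_segments, n_features) multi-modal segment descriptors
    exciting_idx : boolean mask over segments pre-flagged by simple cues
                   (e.g. high audio energy); needs enough True entries to fit a GMM
    """
    gmm_all = GaussianMixture(n_components=n_components, random_state=0).fit(features)
    gmm_exc = GaussianMixture(n_components=2, random_state=0).fit(features[exciting_idx])
    # High when a segment is likely under the 'exciting' density but rare overall.
    score = gmm_exc.score_samples(features) - gmm_all.score_samples(features)
    return np.argsort(score)[::-1]      # most excitable segments first
```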
2D-3D registration using gradient-based MI for image guided surgery systems
NASA Astrophysics Data System (ADS)
Yim, Yeny; Chen, Xuanyi; Wakid, Mike; Bielamowicz, Steve; Hahn, James
2011-03-01
Registration of preoperative CT data to intra-operative video images is necessary not only to compare the outcome of the vocal fold after surgery with the preplanned shape but also to provide image guidance for fusion of all imaging modalities. We propose a 2D-3D registration method using gradient-based mutual information. The 3D CT scan is aligned to 2D endoscopic images by finding the corresponding viewpoint between the real camera for the endoscopic images and the virtual camera for the CT scans. Even though mutual information has been successfully used to register different imaging modalities, it is difficult to robustly register the CT-rendered image to the endoscopic image due to varying light patterns and the shape of the vocal fold. The proposed method calculates the mutual information in the gradient images as well as the original images, assigning more weight to high-gradient regions. The proposed method can emphasize the effect of the vocal fold and allows robust matching regardless of the surface illumination. To find the viewpoint with maximum mutual information, a downhill simplex method is applied in a conditional multi-resolution scheme, which makes the result less sensitive to local maxima. To validate the registration accuracy, we evaluated the sensitivity to the initial viewpoint of the preoperative CT. Experimental results showed that gradient-based mutual information provided robust matching not only for two identical images with different viewpoints but also for different images acquired before and after surgery. The results also showed that the conditional multi-resolution scheme led to a more accurate registration than a single-resolution approach.
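A simplified stand-in for the gradient-weighted mutual information score described above, assuming two co-sized images scaled to [0, 1]; the paper combines MI of gradient and original images and searches viewpoints with a downhill simplex in a multi-resolution scheme, none of which is reproduced here.

```python
import numpy as np

def gradient_weighted_mi(img_a, img_b, bins=64):
    """Mutual information between two same-size images, weighted towards edges.

    img_a, img_b : 2-D arrays scaled to [0, 1] (e.g. endoscopic frame and CT rendering)
    """
    def gmag(im):
        gy, gx = np.gradient(im.astype(float))
        return np.hypot(gx, gy)

    w = gmag(img_a) + gmag(img_b)               # up-weight high-gradient pixels
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins,
                                range=[[0, 1], [0, 1]], weights=w.ravel())
    p = hist / hist.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))
```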
A multi-modal approach for activity classification and fall detection
NASA Astrophysics Data System (ADS)
Castillo, José Carlos; Carneiro, Davide; Serrano-Cuerda, Juan; Novais, Paulo; Fernández-Caballero, Antonio; Neves, José
2014-04-01
Society is changing towards a new paradigm in which an increasing number of older adults live alone. In parallel, the incidence of conditions that affect mobility and independence is also rising as a consequence of a longer life expectancy. In this paper, the specific problem of falls among older adults is addressed by devising a technological solution for monitoring these users. Video cameras, accelerometers and GPS sensors are combined in a multi-modal approach to monitor humans inside and outside the domestic environment. Machine learning techniques are used to detect falls and classify activities from accelerometer data. Video feeds and GPS are used to provide location inside and outside the domestic environment. The result is a monitoring solution that does not require confining the users to a closed environment.
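A small sketch of the accelerometer branch under stated assumptions: per-window magnitude statistics are fed to an off-the-shelf random forest. The window length, feature set, and class labels are illustrative; the paper's actual features and classifier may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def window_features(acc_windows):
    """Simple per-window statistics from tri-axial accelerometer data.

    acc_windows: (n_windows, n_samples, 3) array of raw accelerations.
    """
    mag = np.linalg.norm(acc_windows, axis=2)  # acceleration magnitude
    return np.column_stack([mag.mean(axis=1), mag.std(axis=1),
                            mag.max(axis=1), mag.min(axis=1)])

# toy data; labels 0 = walking, 1 = sitting, 2 = fall (illustrative classes)
rng = np.random.default_rng(1)
X = window_features(rng.normal(size=(300, 128, 3)))
y = rng.integers(0, 3, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```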
Beyond RGB: Very high resolution urban remote sensing with multimodal deep networks
NASA Astrophysics Data System (ADS)
Audebert, Nicolas; Le Saux, Bertrand; Lefèvre, Sébastien
2018-06-01
In this work, we investigate various methods to deal with semantic labeling of very high resolution multi-modal remote sensing data. In particular, we study how deep fully convolutional networks can be adapted to deal with multi-modal and multi-scale remote sensing data for semantic labeling. Our contributions are threefold: (a) we present an efficient multi-scale approach to leverage both a large spatial context and the high resolution data, (b) we investigate early and late fusion of Lidar and multispectral data, (c) we validate our methods on two public datasets with state-of-the-art results. Our results indicate that late fusion makes it possible to recover errors stemming from ambiguous data, while early fusion allows for better joint-feature learning, but at the cost of higher sensitivity to missing data.
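To make the fusion terminology concrete, here is a minimal late-fusion sketch: per-pixel class probabilities predicted independently from the optical and Lidar branches are averaged before the argmax. The fusion weight and array shapes are illustrative assumptions, not the authors' network.

```python
import numpy as np

def late_fusion(prob_optical, prob_lidar, w=0.5):
    """Average per-pixel class probabilities from two modality-specific
    fully convolutional networks and return the fused label map.

    prob_*: (H, W, n_classes) softmax outputs; w weights the optical branch.
    """
    fused = w * prob_optical + (1 - w) * prob_lidar
    return fused.argmax(axis=-1)

# toy usage with random "predictions" for a 4-class labeling task
rng = np.random.default_rng(0)
p_optical = rng.dirichlet(np.ones(4), size=(64, 64))
p_lidar = rng.dirichlet(np.ones(4), size=(64, 64))
labels = late_fusion(p_optical, p_lidar)
print(labels.shape)  # (64, 64)
```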
Hierarchical patch-based co-registration of differently stained histopathology slides
NASA Astrophysics Data System (ADS)
Yigitsoy, Mehmet; Schmidt, Günter
2017-03-01
Over the past decades, digital pathology has emerged as an alternative way of looking at tissue at the subcellular level. It enables multiplexed analysis of different cell types at the micron level. Information about cell types can be extracted by staining sections of a tissue block using different markers. However, robust fusion of structural and functional information from different stains is necessary for reproducible multiplexed analysis. Such a fusion can be obtained via image co-registration by establishing spatial correspondences between tissue sections. Spatial correspondences can then be used to transfer various statistics about cell types between sections. However, the multi-modal nature of the images and the sparse distribution of interesting cell types pose several challenges for the registration of differently stained tissue sections. In this work, we propose a co-registration framework that efficiently addresses these challenges. We present a hierarchical patch-based registration of intensity-normalized tissue sections. Preliminary experiments demonstrate the potential of the proposed technique for the fusion of multi-modal information from differently stained digital histopathology sections.
Design and development of a simple UV fluorescence multi-spectral imaging system
NASA Astrophysics Data System (ADS)
Tovar, Carlos; Coker, Zachary; Yakovlev, Vladislav V.
2018-02-01
Healthcare access in low-resource settings is compromised by the availability of affordable and accurate diagnostic equipment. The four primary poverty-related diseases - AIDS, pneumonia, malaria, and tuberculosis - account for approximately 400 million annual deaths worldwide as of 2016 estimates. Current diagnostic procedures for these diseases are prolonged and can become unreliable under various conditions. We present the development of a simple low-cost UV fluorescence multi-spectral imaging system geared towards low resource settings for a variety of biological and in-vitro applications. Fluorescence microscopy serves as a useful diagnostic indicator and imaging tool. The addition of a multi-spectral imaging modality allows for the detection of fluorophores within specific wavelength bands, as well as the distinction between fluorophores possessing overlapping spectra. The developed instrument has the potential for a very diverse range of diagnostic applications in basic biomedical science and biomedical diagnostics and imaging. Performance assessment of the microscope will be validated with a variety of samples ranging from organic compounds to biological samples.
Ahmad, Fareed; Mangano, Robert; Shore, Shirah; Polimenakos, Anastasios
2017-10-01
This is a case report of a premature, low-birth-weight infant with hypoplasia of left heart structures and a large malaligned VSD who underwent a successful staged approach to biventricular repair. We obtained qualitative and quantitative echocardiographic, MRI, and conventional catheterization data to support a stepwise strategy towards LV rehabilitation to sustain adequate cardiac output. A thorough and intense follow-up has shown significant growth of left heart structures and favorable clinical status following staged biventricular repair. Our data indicate the usefulness of qualitative and quantitative advanced complementary multi-imaging modalities in predicting the postnatal growth potential of critically underdeveloped left heart structures.
Shen, Kai; Lu, Hui; Baig, Sarfaraz; Wang, Michael R
2017-11-01
The multi-frame superresolution technique is introduced to significantly improve the lateral resolution and image quality of spectral domain optical coherence tomography (SD-OCT). Using several sets of low resolution C-scan 3D images with lateral sub-spot-spacing shifts between the sets, the multi-frame superresolution processing of these sets at each depth layer reconstructs a lateral image of higher resolution and quality. Layer-by-layer processing yields an overall high lateral resolution and quality 3D image. In theory, the superresolution processing, including deconvolution, can address the diffraction limit, lateral scan density and background noise problems together. In experiments, an improvement of lateral resolution by ~3 times, reaching 7.81 µm and 2.19 µm using sample arm optics of 0.015 and 0.05 numerical aperture respectively, as well as a doubling of image quality, has been confirmed by imaging a known resolution test target. Improved lateral resolution on in vitro skin C-scan images has been demonstrated. For in vivo 3D SD-OCT imaging of human skin, fingerprint and retina layers, we used the multi-modal volume registration method to effectively estimate the lateral image shifts among different C-scans due to random minor unintended live body motion. Further processing of these images generated high lateral resolution 3D images as well as high quality B-scan images of these in vivo tissues.
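The layer-by-layer reconstruction can be pictured with a minimal shift-and-add sketch: each low-resolution en-face frame is placed onto a finer grid according to its estimated sub-pixel shift and the contributions are averaged. This omits the deconvolution step mentioned above, and the shift estimation (done in the paper via multi-modal volume registration) is assumed to be given; the sign convention of the shifts is illustrative.

```python
import numpy as np

def shift_and_add_sr(low_res_frames, shifts, factor=2):
    """Tiny shift-and-add super-resolution sketch for one depth layer.

    low_res_frames: list of (H, W) en-face images of the same layer.
    shifts: list of (dy, dx) sub-pixel lateral shifts (in low-res pixels)
            mapping each frame into the reference frame.
    factor: lateral upsampling factor of the reconstruction grid.
    """
    H, W = low_res_frames[0].shape
    acc = np.zeros((H * factor, W * factor))
    weight = np.zeros_like(acc)
    for frame, (dy, dx) in zip(low_res_frames, shifts):
        yi = np.clip(np.round((np.arange(H)[:, None] + dy) * factor).astype(int),
                     0, H * factor - 1)
        xi = np.clip(np.round((np.arange(W)[None, :] + dx) * factor).astype(int),
                     0, W * factor - 1)
        np.add.at(acc, (yi, xi), frame)      # accumulate shifted samples
        np.add.at(weight, (yi, xi), 1.0)     # count contributions per cell
    return acc / np.maximum(weight, 1e-9)
```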
NASA Astrophysics Data System (ADS)
Ma, Xibo; Tian, Jie; Zhang, Bo; Zhang, Xing; Xue, Zhenwen; Dong, Di; Han, Dong
2011-03-01
Among the many optical molecular imaging modalities, bioluminescence imaging (BLI) has found increasingly wide application in tumor detection and in the evaluation of pharmacodynamics, toxicity and pharmacokinetics because of its noninvasive molecular- and cellular-level detection ability, high sensitivity and low cost in comparison with other imaging technologies. However, BLI cannot present the accurate location and intensity of internal bioluminescence sources, such as those in the bone, liver or lung. Bioluminescence tomography (BLT) shows its advantage in determining the bioluminescence source distribution inside a small animal or phantom. Considering the deficiency of a two-dimensional imaging modality, we developed three-dimensional tomography to reconstruct the bioluminescence source distribution in transgenic mOC-Luc mouse bone from boundary measured data. In this paper, to study osteocalcin (OC) accumulation in transgenic mOC-Luc mouse bone, a BLT reconstruction method based on a multilevel adaptive finite element (FEM) algorithm was used for localizing and quantifying multiple bioluminescence sources. Optical and anatomical information of the tissues is incorporated as a priori knowledge in this method, which can reduce the ill-posedness of BLT. The data were acquired by the dual-modality BLT and micro-CT prototype system that we developed. Through temperature control and absolute intensity calibration, a relatively accurate intensity can be calculated. The location of the OC accumulation was reconstructed, which was consistent with the principle of bone differentiation. This result was also verified by an ex vivo experiment in a black 96-well plate using the BLI system and a chemiluminescence apparatus.
3D/2D model-to-image registration by imitation learning for cardiac procedures.
Toth, Daniel; Miao, Shun; Kurzendorfer, Tanja; Rinaldi, Christopher A; Liao, Rui; Mansi, Tommaso; Rhode, Kawal; Mountney, Peter
2018-05-12
In cardiac interventions, such as cardiac resynchronization therapy (CRT), image guidance can be enhanced by involving preoperative models. Multimodality 3D/2D registration for image guidance, however, remains a significant research challenge for fundamentally different image data, i.e., MR to X-ray. Registration methods must account for differences in intensity, contrast levels, resolution, dimensionality, field of view. Furthermore, same anatomical structures may not be visible in both modalities. Current approaches have focused on developing modality-specific solutions for individual clinical use cases, by introducing constraints, or identifying cross-modality information manually. Machine learning approaches have the potential to create more general registration platforms. However, training image to image methods would require large multimodal datasets and ground truth for each target application. This paper proposes a model-to-image registration approach instead, because it is common in image-guided interventions to create anatomical models for diagnosis, planning or guidance prior to procedures. An imitation learning-based method, trained on 702 datasets, is used to register preoperative models to intraoperative X-ray images. Accuracy is demonstrated on cardiac models and artificial X-rays generated from CTs. The registration error was [Formula: see text] on 1000 test cases, superior to that of manual ([Formula: see text]) and gradient-based ([Formula: see text]) registration. High robustness is shown in 19 clinical CRT cases. Besides the proposed methods feasibility in a clinical environment, evaluation has shown good accuracy and high robustness indicating that it could be applied in image-guided interventions.
Cloud-based processing of multi-spectral imaging data
NASA Astrophysics Data System (ADS)
Bernat, Amir S.; Bolton, Frank J.; Weiser, Reuven; Levitz, David
2017-03-01
Multispectral imaging holds great promise as a non-contact tool for the assessment of tissue composition. Performing multi-spectral imaging on a hand-held mobile device would make it possible to bring this technology, and with it knowledge, to low-resource settings to provide state-of-the-art classification of tissue health. This modality, however, produces considerably larger data sets than white-light imaging and requires preliminary image analysis before it can be used. The data then need to be analyzed and logged without requiring too much of the system resources, long computation times, or battery use on the end-point device. Cloud environments were designed for exactly this situation, allowing end-point devices (smartphones) to offload computationally hard tasks. To this end, we present a method in which a hand-held device built around a smartphone captures a multi-spectral dataset in a movie file format (mp4), and we compare it to other image formats in size, noise and correctness. We present the cloud configuration used for segmenting the captured video into frames that can later be used for further analysis.
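A minimal sketch of the frame-segmentation step under stated assumptions: the mp4 capture is decoded frame by frame with OpenCV and converted to grayscale band images. The file name and the mapping of frames to spectral bands are placeholders rather than details from the paper.

```python
import cv2

def extract_frames(video_path, step=1):
    """Read a multi-spectral capture stored as an mp4 movie and return its
    frames (here assumed to correspond to spectral bands / time points)."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        idx += 1
    cap.release()
    return frames

# frames = extract_frames("capture.mp4")   # hypothetical file name
# print(len(frames))
```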
Description of patellar movement by 3D parameters obtained from dynamic CT acquisition
NASA Astrophysics Data System (ADS)
de Sá Rebelo, Marina; Moreno, Ramon Alfredo; Gobbi, Riccardo Gomes; Camanho, Gilberto Luis; de Ávila, Luiz Francisco Rodrigues; Demange, Marco Kawamura; Pecora, Jose Ricardo; Gutierrez, Marco Antonio
2014-03-01
The patellofemoral joint is critical in the biomechanics of the knee. Patellofemoral instability is a condition that generates pain and functional impairment and often requires surgery as part of orthopedic treatment. The analysis of patellofemoral dynamics has been performed with several medical imaging modalities. The clinical parameters assessed are mainly based on 2D measurements, such as the patellar tilt angle and the lateral shift, among others. In addition, the acquisition protocols are mostly performed with the leg held static at fixed angles. The use of a helical multi-slice CT scanner allows the capture and display of the joint's movement performed actively by the patient. However, the orthopedic applications of this scanner have not yet been standardized or become widespread. In this work we present a method to evaluate the biomechanics of the patellofemoral joint during active contraction using multi-slice CT images. This approach can greatly improve the analysis of patellar instability by displaying the physiology during muscle contraction. The movement was evaluated by computing its 3D displacements and rotations between different knee angles. The first processing step registered the images at both angles based on the femur's position. The transformation matrix of the patella between the images was then calculated, which provided the rotations and translations performed by the patella from its position in the first image to its position in the second image. Analysis of these parameters for all frames provided real 3D information about the patellar displacement.
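The transformation-matrix step can be illustrated with a standard least-squares rigid fit (Kabsch/Procrustes) between corresponding patellar landmarks at two knee angles, from which rotation angles and a translation are reported. Landmark correspondence and the Euler-angle convention are assumptions for illustration; the authors' exact parameterization may differ.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping point set P to Q.

    P, Q: (n_points, 3) corresponding landmarks on the patella segmented
    at two knee angles (Kabsch algorithm via SVD)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

def euler_angles_deg(R):
    """Rotation angles (x, y, z, degrees) from a rotation matrix, assuming
    an R = Rz*Ry*Rx convention for reporting patellar tilt/rotation."""
    rx = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    ry = np.degrees(np.arcsin(-R[2, 0]))
    rz = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return rx, ry, rz
```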
Alternative magnetic flux leakage modalities for pipeline inspection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katragadda, G.; Lord, W.; Sun, Y.S.
1996-05-01
Increasing quality consciousness is placing higher demands on the accuracy and reliability of inspection systems used in defect detection and characterization. Nondestructive testing techniques often rely on using multi-transducer approaches to obtain greater defect sensitivity. This paper investigates the possibility of taking advantage of alternative modalities associated with the standard magnetic flux leakage tool to obtain additional defect information, while still using a single excitation source.
Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.
Liu, Min; Wang, Xueping; Zhang, Hongzhong
2018-03-01
In the biomedical field, digital multi-focal images are very important for documentation and communication of specimen data, because the morphological information of a transparent specimen can be captured in the form of a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We propose a deep convolutional neural network (CNN) image-fusion-based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN-based image fusion technique is used to combine relevant information of the multi-focal images within a given image stack into a single image, which is more informative and complete than any single image in the given stack. In addition, multi-focal images within a stack are fused along 3 orthogonal directions, and multiple features extracted from the fused images along different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors - texture, shape, different instances within the same class and different classes of objects - we embed the deep CNN-based image fusion method within a multilinear framework to propose an image fusion based multilinear classifier. The experimental results on nematode multi-focal image stacks demonstrated that the deep CNN image fusion based multilinear classifier can reach a higher classification rate (95.7%) than the previous multilinear-based approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed deep CNN image fusion based multilinear approach shows great potential in building an automated nematode taxonomy system for nematologists and is effective for classifying multi-focal image stacks. Copyright © 2018 Elsevier B.V. All rights reserved.
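A minimal sketch of the CCA combination step under stated assumptions: features extracted from images fused along two of the three directions are projected onto correlated subspaces with scikit-learn's CCA and concatenated before classification. The feature dimensions and number of components are illustrative, and the CNN fusion and feature extraction themselves are not shown.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Illustrative features from images fused along two orthogonal directions
# (e.g. XY and XZ); real features would come from the CNN fusion stage.
rng = np.random.default_rng(0)
feats_xy = rng.normal(size=(120, 64))   # 120 stacks, 64-D features
feats_xz = rng.normal(size=(120, 64))   # same stacks, other fusion direction

cca = CCA(n_components=8)
z_xy, z_xz = cca.fit_transform(feats_xy, feats_xz)

# concatenate the correlated projections as the combined representation,
# which would then be fed to the downstream (multilinear) classifier
combined = np.hstack([z_xy, z_xz])      # shape (120, 16)
print(combined.shape)
```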
Development of a platform for co-registered ultrasound and MR contrast imaging in vivo
NASA Astrophysics Data System (ADS)
Chandrana, Chaitanya; Bevan, Peter; Hudson, John; Pang, Ian; Burns, Peter; Plewes, Donald; Chopra, Rajiv
2011-02-01
Imaging of the microvasculature is often performed using contrast agents in combination with either ultrasound (US) or magnetic resonance (MR) imaging. Contrast agents are used to enhance medical imaging by highlighting microvascular properties and function. Dynamic signal changes arising from the passage of contrast agents through the microvasculature can be used to characterize different pathologies; however, comparisons across modalities are difficult due to differences in the interactions of contrast agents with the microvasculature. Better knowledge of the relationship of contrast enhancement patterns with both modalities could enable better characterization of tissue microvasculature. We developed a co-registration platform for multi-modal US and MR imaging using clinical imaging systems in order to study the relationship between US and MR contrast enhancement. A preliminary validation study was performed in phantoms to determine the registration accuracy of the platform. In phantoms, the in-plane registration accuracy was measured to be 0.2 ± 0.2 and 0.3 ± 0.2 mm, in the lateral and axial directions, respectively. The out-of-plane registration accuracy was estimated to be 0.5 ± 0.1 mm. Co-registered US and MR imaging was performed in a rabbit model to evaluate contrast kinetics in different tissue types after bolus injections of US and MR contrast agents. The arrival time of the contrast agent in the plane of imaging was relatively similar for both modalities. We studied three different tissue types: muscle, large vessels and fat. In US, the temporal kinetics of signal enhancement were not strongly dependent on tissue type. In MR, however, due to the different amounts of agent extravasation in each tissue type, tissue-specific contrast kinetics were observed. This study demonstrates the feasibility of performing in vivo co-registered contrast US and MR imaging to study the relationships of the enhancement patterns with each modality.
Beltrami, Matteo; Palazzuoli, Alberto; Padeletti, Luigi; Cerbai, Elisabetta; Coiro, Stefano; Emdin, Michele; Marcucci, Rossella; Morrone, Doralisa; Cameli, Matteo; Savino, Ketty; Pedrinelli, Roberto; Ambrosio, Giuseppe
2018-02-01
Functional analysis and measurement of the left atrium are an integral part of cardiac evaluation, and they represent a key element during non-invasive analysis of diastolic function in patients with hypertension (HT) and/or heart failure with preserved ejection fraction (HFpEF). However, diastolic dysfunction remains quite elusive regarding classification, and atrial size and function are two key factors for left ventricular (LV) filling evaluation. Chronic left atrial (LA) remodelling is the final step of chronic intra-cavitary pressure overload, and it accompanies increased neurohormonal, proarrhythmic and prothrombotic activities. In this systematic review, we aim to propose a multi-modality approach for LA geometry and function analysis, which integrates diastolic flow with LA characteristics and remodelling through application of both traditional and new diagnostic tools. The most important studies published in the literature on LA size, function and diastolic dysfunction in patients with HFpEF, HT and/or atrial fibrillation (AF) are considered and discussed. In HFpEF and HT, pulsed and tissue Doppler assessments are useful tools to estimate LV filling pressure, atrio-ventricular coupling and LV relaxation, but they need to be enriched with LA evaluation in terms of morphology and function. An integrated evaluation should also be applied to patients with a high arrhythmic risk, in whom eccentric LA remodelling and higher LA stiffness are associated with a greater AF risk. Evaluation of LA size, volume, function and structure is mandatory in the management of patients with HT, HFpEF and AF. A multi-modality approach could provide additional information, identifying subjects with more severe LA remodelling. Left atrial assessment deserves accurate study within the cardiac imaging approach, and optimised measurements with established cut-offs need to be better recognised through multicenter studies. © 2017 John Wiley & Sons Ltd.
A multi-modal stereo microscope based on a spatial light modulator.
Lee, M P; Gibson, G M; Bowman, R; Bernet, S; Ritsch-Marte, M; Phillips, D B; Padgett, M J
2013-07-15
Spatial Light Modulators (SLMs) can emulate the classic microscopy techniques, including differential interference (DIC) contrast and (spiral) phase contrast. Their programmability entails the benefit of flexibility or the option to multiplex images, for single-shot quantitative imaging or for simultaneous multi-plane imaging (depth-of-field multiplexing). We report the development of a microscope sharing many of the previously demonstrated capabilities, within a holographic implementation of a stereo microscope. Furthermore, we use the SLM to combine stereo microscopy with a refocusing filter and with a darkfield filter. The instrument is built around a custom inverted microscope and equipped with an SLM which gives various imaging modes laterally displaced on the same camera chip. In addition, there is a wide angle camera for visualisation of a larger region of the sample.
Primary Prevention of Cannabis Use: A Systematic Review of Randomized Controlled Trials
Norberg, Melissa M.; Kezelman, Sarah; Lim-Howe, Nicholas
2013-01-01
A systematic review of primary prevention was conducted for cannabis use outcomes in youth and young adults. The aim of the review was to develop a comprehensive understanding of prevention programming by assessing universal, targeted, uni-modal, and multi-modal approaches as well as individual program characteristics. Twenty-eight articles, representing 25 unique studies, identified from eight electronic databases (EMBASE, MEDLINE, CINAHL, ERIC, PsycINFO, DRUG, EBM Reviews, and Project CORK), were eligible for inclusion. Results indicated that primary prevention programs can be effective in reducing cannabis use in youth populations, with statistically significant effect sizes ranging from trivial (0.07) to extremely large (5.26), with the majority of significant effect sizes being trivial to small. Given that the preponderance of significant effect sizes were trivial to small and that percentages of statistically significant and non-statistically significant findings were often equivalent across program type and individual components, the effectiveness of primary prevention for cannabis use should be interpreted with caution. Universal multi-modal programs appeared to outperform other program types (i.e., universal uni-modal, targeted multi-modal, targeted uni-modal). Specifically, universal multi-modal programs that targeted early adolescents (10–13 year olds), utilised non-teacher or multiple facilitators, were short in duration (10 sessions or less), and implemented booster sessions were associated with large median effect sizes. While there were studies in these areas that contradicted these results, the results highlight the importance of assessing the interdependent relationship of program components and program types. Finally, results indicated that the overall quality of included studies was poor, with an average quality rating of 4.64 out of 9. Thus, further quality research and reporting and the development of new innovative programs are required. PMID:23326396
Eytan, Danny; Pang, Elizabeth W; Doesburg, Sam M; Nenadovic, Vera; Gavrilovic, Bojan; Laussen, Peter; Guerguerian, Anne-Marie
2016-01-01
Acute brain injury is a common cause of death and critical illness in children and young adults. Fundamental management focuses on early characterization of the extent of injury and optimizing recovery by preventing secondary damage during the days following the primary injury. Currently, bedside technology for measuring neurological function is mainly limited to using electroencephalography (EEG) for detection of seizures and encephalopathic features, and evoked potentials. We present a proof of concept study in patients with acute brain injury in the intensive care setting, featuring a bedside functional imaging set-up designed to map cortical brain activation patterns by combining high density EEG recordings, multi-modal sensory stimulation (auditory, visual, and somatosensory), and EEG source modeling. Use of source-modeling allows for examination of spatiotemporal activation patterns at the cortical region level as opposed to the traditional scalp potential maps. The application of this system in both healthy and brain-injured participants is demonstrated with modality-specific source-reconstructed cortical activation patterns. By combining stimulation obtained with different modalities, most of the cortical surface can be monitored for changes in functional activation without having to physically transport the subject to an imaging suite. The results in patients in an intensive care setting with anatomically well-defined brain lesions suggest a topographic association between their injuries and activation patterns. Moreover, we report the reproducible application of a protocol examining a higher-level cortical processing with an auditory oddball paradigm involving presentation of the patient's own name. This study reports the first successful application of a bedside functional brain mapping tool in the intensive care setting. This application has the potential to provide clinicians with an additional dimension of information to manage critically-ill children and adults, and potentially patients not suited for magnetic resonance imaging technologies.
The new frontiers of multimodality and multi-isotope imaging
NASA Astrophysics Data System (ADS)
Behnam Azad, Babak; Nimmagadda, Sridhar
2014-06-01
Technological advances in imaging systems and the development of target-specific imaging tracers have accelerated rapidly over the past two decades. Recent progress in "all-in-one" imaging systems that allow for automated image coregistration has significantly added to the growth of this field. These developments include ultra-high-resolution PET and SPECT scanners that can be integrated with CT or MR, resulting in PET/CT, SPECT/CT, SPECT/PET and PET/MRI scanners for simultaneous high-resolution, high-sensitivity anatomical and functional imaging. These technological developments have also resulted in drastic enhancements in image quality and acquisition time while eliminating cross-compatibility issues between modalities. Furthermore, the most cutting-edge technology, though mostly preclinical, also allows for simultaneous multimodality multi-isotope image acquisition and image reconstruction based on radioisotope decay characteristics. These scientific advances, in conjunction with the explosion in the development of highly specific multimodality molecular imaging agents, may aid in realizing simultaneous imaging of multiple biological processes and pave the way towards more efficient diagnosis and improved patient care.
NASA Astrophysics Data System (ADS)
Hsu, Chih-Wei; Le, Henry H.; Li-Villarreal, Nanbing; Piazza, Victor G.; Kalaga, Sowmya; Dickinson, Mary E.
2017-02-01
Hemodynamic force is vital to cardiovascular remodeling in the early post-implantation mouse embryo. Here, we present work using microCT and lightsheet microscopy to establish the critical sequence of developmental events required for forming functional vasculature and circulation in the embryo, yolk sac, and placenta in the context of normal and impaired flow. A flow-impaired model, Mlc2a+/-, will be used to determine how hemodynamic force affects specific events during embryonic development and vascular remodeling between the 4- and 29-somite stages using microCT. We have recently established high-resolution methods for the generation of 3D image volumes from the whole embryo within the deciduum (Hsu et al., in revision). This method enables the careful characterization of 3D images of vitelline and umbilical vessel remodeling to define how poor blood flow impacts both vitelline and umbilical vessel remodeling. Novel lightsheet live imaging techniques will be used to determine the consequence of impaired blood flow on yolk sac vasculature remodeling and formation of umbilical vessels using transgenic reporters: Flk-myr::mCherry, Flk1-H2B::YFP, or ɛGlobin-GFP. High-resolution 3D imaging of fixed and ScaleA2-cleared whole mount embryos labeled with Ki67 and Caspase3 will also be performed using lightsheet microscopy to quantify the proliferation and apoptotic indexes of early post-implanted embryos and yolk sacs. This multi-modality approach is aimed at revealing further information about the cellular mechanisms required for proper vessel remodeling and the initial stages in placentation during early post-implantation development.
Muldoon, Timothy J; Polydorides, Alexandros D; Maru, Dipen M; Harpaz, Noam; Harris, Michael T; Hofstetter, Wayne; Hiotis, Spiros P; Kim, Sanghyun A; Ky, Alex J; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca
2012-01-01
Background: Confocal endomicroscopy has revolutionized endoscopy by offering sub-cellular images of gastrointestinal epithelium; however, field-of-view is limited. There is a need for multi-scale endoscopy platforms that use widefield imaging to better direct placement of high-resolution probes. Design: Feasibility study. Objective: This study evaluates the feasibility of a single agent, proflavine hemisulfate, as a contrast medium during both widefield and high-resolution imaging to characterize morphologic changes associated with a variety of gastrointestinal conditions. Setting: U.T. M.D. Anderson Cancer Center (Houston, TX) and Mount Sinai Medical Center (New York, NY). Patients, Interventions, and Main Outcome Measurements: Surgical specimens were obtained from 15 patients undergoing esophagectomy/colectomy. Proflavine, a vital fluorescent dye, was applied topically. Specimens were imaged with a widefield multispectral microscope and a high-resolution microendoscope. Images were compared to histopathology. Results: Widefield fluorescence imaging enhanced visualization of morphology, including the presence and spatial distribution of glands, glandular distortion, atrophy and crowding. High-resolution imaging of widefield-abnormal areas revealed that neoplastic progression corresponded to glandular heterogeneity and nuclear crowding in dysplasia, with glandular effacement in carcinoma. These widefield and high-resolution image features correlated well with histopathology. Limitations: This imaging approach must be validated in vivo with a larger sample size. Conclusions: Multi-scale proflavine-enhanced fluorescence imaging can delineate epithelial changes in a variety of gastrointestinal conditions. Distorted glandular features seen with widefield imaging could serve as a critical 'bridge' to high-resolution probe placement. An endoscopic platform combining the two modalities with a single vital dye may facilitate point-of-care decision-making by providing real-time, in vivo diagnoses. PMID:22301343
Mobile, Multi-modal, Label-Free Imaging Probe Analysis of Choroidal Oximetry and Retinal Hypoxia
2016-10-01
Report abstract (fragmentary in the source): Coherent anti-Stokes Raman spectroscopy (CARS) can be used to detect differences in the oxygen content... Subject terms: oxygen, eye, retina, photoreceptor, neuron, TRPM7, neurodegeneration, neurotoxicity, coherent anti-Stokes Raman spectroscopy, CARS, mouse. Section 1: Introduction. The study is based on the premise that Coherent Anti-Stokes Raman scattering (CARS) imaging provides a...
Jones, Christopher P; Brenner, Ceri M; Stitt, Camilla A; Armstrong, Chris; Rusby, Dean R; Mirfayzi, Seyed R; Wilson, Lucy A; Alejo, Aarón; Ahmed, Hamad; Allott, Ric; Butler, Nicholas M H; Clarke, Robert J; Haddock, David; Hernandez-Gomez, Cristina; Higginson, Adam; Murphy, Christopher; Notley, Margaret; Paraskevoulakos, Charilaos; Jowsey, John; McKenna, Paul; Neely, David; Kar, Satya; Scott, Thomas B
2016-11-15
A small scale sample nuclear waste package, consisting of a 28 mm diameter uranium penny encased in grout, was imaged by absorption contrast radiography using a single pulse exposure from an X-ray source driven by a high-power laser. The Vulcan laser was used to deliver a focused pulse of photons to a tantalum foil, in order to generate a bright burst of highly penetrating X-rays (with energy >500 keV), with a source size of <0.5 mm. BAS-TR and BAS-SR image plates were used for image capture, alongside a newly developed thallium-doped caesium iodide scintillator-based detector coupled to CCD chips. The uranium penny was clearly resolved to sub-mm accuracy over a 30 cm² scan area from a single shot acquisition. In addition, neutron generation was demonstrated in situ with the X-ray beam, with a single shot, thus demonstrating the potential for multi-modal criticality testing of waste materials. This feasibility study successfully demonstrated non-destructive radiography of encapsulated, high-density nuclear material. With recent developments of high-power laser systems to 10 Hz operation, a laser-driven multi-modal beamline for waste monitoring applications is envisioned. Copyright © 2016. Published by Elsevier B.V.
Pinkerton, Nathalie M.; Gindy, Marian E.; Calero-DdelC, Victoria L.; Wolfson, Theodore; Pagels, Robert F.; Adler, Derek; Gao, Dayuan; Li, Shike; Wang, Ruobing; Zevon, Margot; Yao, Nan; Pacheco, Carlos; Therien, Michael J.; Rinaldi, Carlos; Sinko, Patrick J.
2015-01-01
MRI and NIR-active, multi-modal Composite NanoCarriers (CNCs) are prepared using a simple, one-step process, Flash NanoPrecipitation (FNP). The FNP process allows for the independent control of the hydrodynamic diameter, co-core excipient and NIR dye loading, and iron oxide-based nanocrystal (IONC) content of the CNCs. In the controlled precipitation process, 10 nm IONCs are encapsulated into poly(ethylene glycol) stabilized CNCs to make biocompatible T2 contrast agents. By adjusting the formulation, CNC size is tuned between 80 and 360 nm. Holding the CNC size constant at an intensity weighted average diameter of 99 ± 3 nm (PDI width 28 nm), the particle relaxivity varies linearly with encapsulated IONC content, ranging from 66 to 533 mM⁻¹ s⁻¹ for CNCs formulated with 4 to 16 wt% IONC. To demonstrate the use of CNCs as in vivo MRI contrast agents, CNCs are surface functionalized with liver targeting hydroxyl groups. The CNCs enable the detection of 0.8 mm³ non-small cell lung cancer metastases in mice livers via MRI. Incorporating the hydrophobic, NIR dye PZn3 into CNCs enables complementary visualization with long-wavelength fluorescence at 800 nm. In vivo imaging demonstrates the ability of CNCs to act both as MRI and fluorescent imaging agents. PMID:25925128
Light Sheet Fluorescence Microscopy (LSFM)
Adams, Michael W.; Loftus, Andrew F.; Dunn, Sarah E.; Joens, Matthew S.; Fitzpatrick, James A.J.
2015-01-01
The development of confocal microscopy techniques introduced the ability to optically section fluorescent samples in the axial dimension, perpendicular to the image plane. These approaches, via the placement of a pinhole in the conjugate image plane, provided superior resolution in the axial (z) dimension resulting in nearly isotropic optical sections. However, increased axial resolution, via pinhole optics, comes at the cost of both speed and excitation efficiency. Light Sheet Fluorescence Microscopy (LSFM), a century-old idea (Siedentopf and Zsigmondy, 1902) made possible with modern developments in both excitation and detection optics, provides sub-cellular resolution and optical sectioning capabilities without compromising speed or excitation efficiency. Over the past decade, several variations of LSFM have been implemented, each with its own benefits and deficiencies. Here we discuss LSFM fundamentals and outline the basic principles of several major light sheet based imaging modalities (SPIM, inverted SPIM, multi-view SPIM, Bessel beam SPIM, and stimulated emission depletion SPIM) while considering their biological relevance in terms of intrusiveness, temporal resolution, and sample requirements. PMID:25559221
Topology optimized design of functionally graded piezoelectric ultrasonic transducers
NASA Astrophysics Data System (ADS)
Rubio, Wilfredo Montealegre; Buiochi, Flávio; Adamowski, Julio Cezar; Silva, Emílio C. N.
2010-01-01
This work presents a new approach to systematically design piezoelectric ultrasonic transducers based on the Topology Optimization Method (TOM) and Functionally Graded Material (FGM) concepts. The main goal is to find the optimal material distribution of Functionally Graded Piezoelectric Ultrasonic Transducers, to achieve the following requirements: (i) the transducer must be designed to have a multi-modal or uni-modal frequency response, which defines the kind of generated acoustic wave, either short pulse or continuous wave, respectively; (ii) the transducer is required to oscillate in a thickness extensional mode or piston-like mode, aiming at acoustic wave generation applications. Two kinds of piezoelectric materials are mixed for producing the FGM transducer. Material type 1 represents a PZT-5A piezoelectric ceramic and material type 2 represents a PZT-5H piezoelectric ceramic. To illustrate the proposed method, two Functionally Graded Piezoelectric Ultrasonic Transducers are designed. The TOM has been shown to be a useful tool for designing Functionally Graded Piezoelectric Ultrasonic Transducers with uni-modal or multi-modal dynamic behavior.
Asteroid models from photometry and complementary data sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaasalainen, Mikko
I discuss inversion methods for asteroid shape and spin reconstruction with photometry (lightcurves) and complementary data sources such as adaptive optics or other images, occultation timings, interferometry, and range-Doppler radar data. These are essentially different sampling modes (generalized projections) of plane-of-sky images. An important concept in this approach is the optimal weighting of the various data modes. The maximum compatibility estimate, a multi-modal generalization of the maximum likelihood estimate, can be used for this purpose. I discuss the fundamental properties of lightcurve inversion by examining the two-dimensional case that, though not usable in our three-dimensional world, is simple to analyze, and it shares essentially the same uniqueness and stability properties as the 3-D case. After this, I review the main aspects of 3-D shape representations, lightcurve inversion, and the inclusion of complementary data.
ACIR: automatic cochlea image registration
NASA Astrophysics Data System (ADS)
Al-Dhamari, Ibraheem; Bauer, Sabine; Paulus, Dietrich; Lissek, Friedrich; Jacob, Roland
2017-02-01
Efficient Cochlear Implant (CI) surgery requires prior knowledge of the cochlea's size and its characteristics. This information helps to select suitable implants for different patients. To get these measurements, a segmentation method for cochlea medical images is needed. An important pre-processing step for good cochlea segmentation involves efficient image registration. The cochlea's small size and complex structure, in addition to the different resolutions and head positions during imaging, pose a significant challenge for the automated registration of the different image modalities. In this paper, an Automatic Cochlea Image Registration (ACIR) method for multi-modal human cochlea images is proposed. This method is based on using small areas that have clear structures from both input images instead of registering the complete image. It uses the Adaptive Stochastic Gradient Descent optimizer (ASGD) and the Mattes Mutual Information metric (MMI) to estimate 3D rigid transform parameters. The use of state-of-the-art medical image registration optimizers published over the last two years is studied and compared quantitatively using the standard Dice Similarity Coefficient (DSC). ACIR requires only 4.86 seconds on average to align cochlea images automatically and to put all the modalities in the same spatial locations without human intervention. The source code is based on the tool elastix and is provided for free as a 3D Slicer plugin. Another contribution of this work is a proposed public cochlea standard dataset which can be downloaded for free from a public XNAT server.
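For readers who want a starting point, the following is a minimal multi-modal rigid registration sketch in SimpleITK using a Mattes mutual information metric, in the spirit of the pipeline described above. The ACIR implementation itself is built on elastix with an ASGD optimizer and registers small cropped regions, whereas this sketch uses a plain gradient-descent optimizer on whole images; the file names are placeholders.

```python
import SimpleITK as sitk

fixed = sitk.ReadImage("cochlea_ct.nii.gz", sitk.sitkFloat32)    # placeholder
moving = sitk.ReadImage("cochlea_mr.nii.gz", sitk.sitkFloat32)   # placeholder

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.1)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()

# start from a geometry-centered rigid (Euler) transform
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg.SetInitialTransform(initial, inPlace=False)

rigid = reg.Execute(fixed, moving)
aligned = sitk.Resample(moving, fixed, rigid, sitk.sitkLinear, 0.0)
```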
mPano: cloud-based mobile panorama view from single picture
NASA Astrophysics Data System (ADS)
Li, Hongzhi; Zhu, Wenwu
2013-09-01
Panorama view provides people an informative and natural user experience to represent the whole scene. Advances in mobile augmented reality, mobile-cloud computing, and the mobile internet can enable panorama view on mobile phones with new functionalities, such as anytime, anywhere queries of where a landmark picture was taken and what the whole scene looks like. Generating and exploring panorama views on mobile devices poses significant challenges due to the limitations of computing capacity, battery life, and memory size of mobile phones, as well as the bandwidth of the mobile Internet connection. To address these challenges, this paper presents a novel cloud-based mobile panorama view system, namely "mPano", that can generate and view a panorama on mobile devices from a single picture. In our system, first, we propose a novel iterative multi-modal image retrieval (IMIR) approach to get spatially adjacent images using both tag and content information from the single picture. Second, we propose a cloud-based parallel server synthing approach to generate the panorama view in the cloud, in contrast to today's local-client synthing approach that is almost impossible for mobile phones. Third, we propose a predictive-cache solution to reduce the latency of image delivery from the cloud server to the mobile client. We have built a real mobile panorama view system and performed experiments. The experimental results demonstrated the effectiveness of our system and the proposed key component technologies, especially for landmark images.
The year 2013 in the European Heart Journal--Cardiovascular Imaging: Part II.
Plein, Sven; Edvardsen, Thor; Pierard, Luc A; Saraste, Antti; Knuuti, Juhani; Maurer, Gerald; Lancellotti, Patrizio
2014-08-01
The new multi-modality cardiovascular imaging journal, European Heart Journal - Cardiovascular Imaging, was created in 2012. Here we summarize the most important studies from the journal's second year in two articles. Part I of the review has summarized studies in myocardial function, myocardial ischaemia, and emerging techniques in cardiovascular imaging. Part II is focussed on valvular heart diseases, heart failure, cardiomyopathies, and congenital heart diseases. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2014. For permissions please email: journals.permissions@oup.com.
Sethi, A; Rusu, I; Surucu, M; Halama, J
2012-06-01
To evaluate the accuracy of multi-modality image registration in the radiotherapy planning process. A water-filled anthropomorphic head phantom containing eight 'donut-shaped' fiducial markers (3 internal + 5 external) was selected for this study. Seven image sets (3 CTs, 3 MRs and PET) of the phantom were acquired and fused in a commercial treatment planning system. First, a narrow-slice (0.75 mm) baseline CT scan was acquired (CT1). Subsequently, the phantom was re-scanned with a coarse slice width of 1.5 mm (CT2) and after subjecting the phantom to rotation/displacement (CT3). Next, the phantom was scanned in a 1.5 Tesla MR scanner and three MR image sets (axial T1, axial T2, coronal T1) were acquired at 2 mm slice width. Finally, the phantom and the centers of the fiducials were doped with 18F and a PET scan was performed with 2 mm cubic voxels. All image scans (CT/MR/PET) were fused to the baseline (CT1) data using an automated mutual-information-based fusion algorithm. The difference between centroids of fiducial markers in the various image modalities was used to assess image registration accuracy. CT/CT image registration was superior to CT/MR and CT/PET: the average CT/CT fusion error was found to be 0.64 ± 0.14 mm. Corresponding values for CT/MR and CT/PET fusion were 1.33 ± 0.71 mm and 1.11 ± 0.37 mm. Internal markers near the center of the phantom fused better than external markers placed on the phantom surface. This was particularly true for CT/MR and CT/PET. The inferior quality of external marker fusion indicates possible distortion effects toward the edges of the MR image. Peripheral targets in the PET scan may be subject to parallax error caused by the depth of interaction of photons in the detectors. The current widespread use of multimodality imaging in radiotherapy planning calls for periodic quality assurance of the image registration process. Such studies may help improve safety and accuracy in treatment planning. © 2012 American Association of Physicists in Medicine.
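The fiducial-based error metric described above amounts to comparing marker centroids in a common coordinate frame; a minimal sketch is shown below with purely illustrative numbers, not the study's measurements.

```python
import numpy as np

def fusion_error(centroids_ref, centroids_fused):
    """Per-marker and summary 3D distances between fiducial centroids in the
    reference CT and in a fused (CT/MR/PET) image, both expressed in the same
    physical coordinate frame as (n_markers, 3) arrays in mm."""
    d = np.linalg.norm(centroids_ref - centroids_fused, axis=1)
    return d, d.mean(), d.std()

# illustrative marker positions (mm) and a slightly misregistered copy
ref = np.array([[0.0, 0.0, 0.0], [40.0, 0.0, 10.0], [0.0, 55.0, 20.0]])
mr = ref + np.array([[0.5, -0.3, 0.2], [1.1, 0.4, -0.6], [-0.8, 0.9, 0.7]])

per_marker, mean_err, std_err = fusion_error(ref, mr)
print(per_marker, mean_err, std_err)
```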
NASA Astrophysics Data System (ADS)
Kuzmak, Peter M.; Dayhoff, Ruth E.
1999-07-01
The US Department of Veterans Affairs (VA) is integrating imaging into the healthcare enterprise using the Digital Imaging and Communications in Medicine (DICOM) standard protocols. Image management is directly integrated into the VistA Hospital Information System (HIS) software and the clinical database. Radiology images are acquired via DICOM and are stored directly in the HIS database. Images can be displayed on low-cost clinicians' workstations throughout the medical center. High-resolution, diagnostic-quality, multi-monitor VistA workstations with specialized viewing software can be used for reading radiology images. Two approaches are used to acquire and handle images within the radiology department. Some sites have a commercial Picture Archiving and Communications System (PACS) interfaced to the VistA HIS, while other sites use the direct image acquisition and integrated diagnostic reading capabilities of VistA itself. A small set of DICOM services has been implemented by VistA to allow patient and study text data to be transmitted to image-producing modalities and the commercial PACS, and to enable images and study data to be transferred back. The VistA DICOM capabilities are now used to interface seven different commercial PACS products and over twenty different radiology modalities. The communications capabilities of DICOM and the VA wide area network are being used to support reading of radiology images from remote sites. DICOM has been the cornerstone in the ability to integrate imaging functionality into the healthcare enterprise. Because of its openness, it allows the integration of system components from commercial and non-commercial sources to work together to provide functional, cost-effective solutions. As DICOM expands to non-radiology devices, integration must occur with the specialty information subsystems that handle orders and reports, their associated DICOM image capture systems, and the computer-based patient record. The model and concepts of the DICOM standard can be extended to these other areas, but some adjustments may be required.
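As a generic illustration of the kind of DICOM service mentioned above (moving images between modalities, PACS, and the HIS), the following minimal C-STORE client uses the open-source pynetdicom library. It is not the VistA implementation; the host, port, AE titles, and file name are placeholders.

```python
from pydicom import dcmread
from pynetdicom import AE
from pynetdicom.sop_class import CTImageStorage

# minimal DICOM storage (C-STORE) client: send one CT slice to an archive
ae = AE(ae_title="WORKSTATION")                 # placeholder calling AE title
ae.add_requested_context(CTImageStorage)

ds = dcmread("ct_slice_0001.dcm")               # placeholder file name

assoc = ae.associate("pacs.example.org", 104, ae_title="ARCHIVE")
if assoc.is_established:
    status = assoc.send_c_store(ds)             # returns a status dataset
    print("C-STORE status: 0x{0:04X}".format(status.Status))
    assoc.release()
```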
Imaging multi-scale dynamics in vivo with spiral volumetric optoacoustic tomography
NASA Astrophysics Data System (ADS)
Deán-Ben, X. Luís.; Fehm, Thomas F.; Ford, Steven J.; Gottschalk, Sven; Razansky, Daniel
2017-03-01
Imaging dynamics in living organisms is essential for the understanding of biological complexity. While multiple imaging modalities are often required to cover both microscopic and macroscopic spatial scales, dynamic phenomena may also extend over different temporal scales, necessitating the use of different imaging technologies based on the trade-off between temporal resolution and effective field of view. Optoacoustic (photoacoustic) imaging has been shown to offer the exclusive capability to link multiple spatial scales ranging from organelles to entire organs of small animals. Yet, efficient visualization of multi-scale dynamics remained difficult with state-of-the-art systems due to inefficient trade-offs between image acquisition and effective field of view. Herein, we introduce a spiral volumetric optoacoustic tomography (SVOT) technique that provides spectrally-enriched high-resolution optical absorption contrast across multiple spatio-temporal scales. We demonstrate that SVOT can be used to monitor various in vivo dynamics, from video-rate volumetric visualization of cardiac-associated motion in whole organs to high-resolution imaging of pharmacokinetics in larger regions. The multi-scale dynamic imaging capability thus emerges as a powerful and unique feature of the optoacoustic technology that adds to the multiple advantages of this technology for structural, functional and molecular imaging.
Kitzing, Yu Xuan; Gallagher, James; Waugh, Richard
2011-10-01
Congenital extrahepatic portocaval shunt is a rare condition that is described mostly in female patients. We report an unusual case of a young adult male patient with type 1 congenital extrahepatic portocaval shunt with associated development of a focal nodular hyperplasia on a background of regenerative nodules. With multi-slice CT utilisation, there is increased detection of portocaval malformation in asymptomatic patients. This congenital variant is clinically significant with associated development of hepatocellular lesions, hepatic dysfunction and/or encephalopathy. © 2011 The Authors. Journal of Medical Imaging and Radiation Oncology © 2011 The Royal Australian and New Zealand College of Radiologists.
Voxelwise multivariate analysis of multimodality magnetic resonance imaging
Naylor, Melissa G.; Cardenas, Valerie A.; Tosun, Duygu; Schuff, Norbert; Weiner, Michael; Schwartzman, Armin
2015-01-01
Most brain magnetic resonance imaging (MRI) studies concentrate on a single MRI contrast or modality, frequently structural MRI. By performing an integrated analysis of several modalities, such as structural, perfusion-weighted, and diffusion-weighted MRI, new insights may be attained to better understand the underlying processes of brain diseases. We compare two voxelwise approaches: (1) fitting multiple univariate models, one for each outcome and then adjusting for multiple comparisons among the outcomes and (2) fitting a multivariate model. In both cases, adjustment for multiple comparisons is performed over all voxels jointly to account for the search over the brain. The multivariate model is able to account for the multiple comparisons over outcomes without assuming independence because the covariance structure between modalities is estimated. Simulations show that the multivariate approach is more powerful when the outcomes are correlated and, even when the outcomes are independent, the multivariate approach is just as powerful or more powerful when at least two outcomes are dependent on predictors in the model. However, multiple univariate regressions with Bonferroni correction remains a desirable alternative in some circumstances. To illustrate the power of each approach, we analyze a case control study of Alzheimer's disease, in which data from three MRI modalities are available. PMID:23408378
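A small single-voxel sketch of the two strategies compared above, using statsmodels: three univariate fits with Bonferroni correction over the outcomes versus one multivariate fit tested jointly (e.g., Wilks' lambda). The data are simulated and the outcome names are illustrative; in the actual method the test is repeated voxelwise with an additional adjustment over voxels.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.multivariate.manova import MANOVA

# simulated data for one voxel: three modality outcomes, one group predictor
rng = np.random.default_rng(0)
n = 60
x = rng.integers(0, 2, n)                       # 0 = control, 1 = case
df = pd.DataFrame({
    "x": x,
    "y_struct": 0.4 * x + rng.normal(size=n),   # structural measure
    "y_perf":   0.3 * x + rng.normal(size=n),   # perfusion measure
    "y_diff":   rng.normal(size=n),             # diffusion measure
})

# (1) three univariate fits with Bonferroni correction over the outcomes
pvals = []
for col in ["y_struct", "y_perf", "y_diff"]:
    fit = sm.OLS(df[col], sm.add_constant(df["x"])).fit()
    pvals.append(fit.pvalues["x"])
print("Bonferroni-corrected univariate p:", np.minimum(np.array(pvals) * 3, 1.0))

# (2) one multivariate fit; Wilks' lambda tests all outcomes jointly
mv = MANOVA.from_formula("y_struct + y_perf + y_diff ~ x", data=df)
print(mv.mv_test())
```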
Ontology-based approach for in vivo human connectomics: the medial Brodmann area 6 case study
Moreau, Tristan; Gibaud, Bernard
2015-01-01
Different non-invasive neuroimaging modalities and multi-level analysis of human connectomics datasets yield a great amount of heterogeneous data which are hard to integrate into a unified representation. Biomedical ontologies can provide a suitable integrative framework for domain knowledge as well as a tool to facilitate information retrieval, data sharing and data comparisons across scales, modalities and species. In particular, there is an urgent need to fill the gap between neurobiology and in vivo human connectomics in order to better take into account the reality highlighted in Magnetic Resonance Imaging (MRI) and relate it to existing brain knowledge. The aim of this study was to create a neuroanatomical ontology, called “Human Connectomics Ontology” (HCO), in order to represent macroscopic gray matter regions connected with fiber bundles assessed by diffusion tractography and to annotate MRI connectomics datasets acquired in the living human brain. First, a neuroanatomical “view” called NEURO-DL-FMA was extracted from the reference ontology Foundational Model of Anatomy (FMA) in order to construct a gross anatomy ontology of the brain. HCO extends NEURO-DL-FMA by introducing entities (such as “MR_Node” and “MR_Route”) and object properties (such as “tracto_connects”) pertaining to MR connectivity. The Web Ontology Language Description Logics (OWL DL) formalism was used in order to enable reasoning with common reasoning engines. Moreover, experimental work was carried out to demonstrate how the HCO could be effectively used to address complex queries concerning in vivo MRI connectomics datasets. Indeed, neuroimaging datasets of five healthy subjects were annotated with terms of the HCO, and a multi-level analysis of the connectivity patterns assessed by diffusion tractography of the right medial Brodmann Area 6 was performed using a set of queries. This approach can facilitate comparison of data across scales, modalities and species. PMID:25914640
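To illustrate how such annotations might be queried, the sketch below runs a SPARQL query over an RDF/OWL file with rdflib, using the connectivity property named in the abstract. The namespace URI, the file name, and the auxiliary property and class names are assumptions, not the actual HCO vocabulary.

```python
from rdflib import Graph

g = Graph()
g.parse("hco_annotations.owl")   # placeholder file with HCO-annotated data

# Which gray-matter regions are connected to the right medial BA6 node?
# The prefix URI and the term spellings other than tracto_connects are assumed.
query = """
PREFIX hco: <http://example.org/hco#>
SELECT ?region WHERE {
    ?node  hco:tracto_connects ?other .
    ?node  hco:represents      hco:Right_medial_BA6 .
    ?other hco:represents      ?region .
}
"""
for row in g.query(query):
    print(row.region)
```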
Imaging approaches for the study of cell based cardiac therapies
Lau, Joe F.; Anderson, Stasia A.; Adler, Eric; Frank, Joseph A.
2009-01-01
Despite promising preclinical data, the treatment of cardiovascular diseases using embryonic, bone-marrow-derived, and skeletal myoblast stem cells has not yet come to fruition within mainstream clinical practice. Major obstacles in cardiac stem cell investigations include the ability to monitor cell engraftment and survival following implantation within the myocardium. Several cellular imaging modalities, including reporter gene and MRI-based tracking approaches, have emerged that provide the means to identify, localize and monitor stem cells longitudinally in vivo following implantation. This Review will examine the various cardiac cellular tracking modalities, including the combinatorial use of several probes in multimodality imaging, with a focus on data from the last five years. PMID:20027188
NASA Astrophysics Data System (ADS)
Zhang, Pengfei; Zam, Azhar; Jian, Yifan; Wang, Xinlei; Burns, Marie E.; Sarunic, Marinko V.; Pugh, Edward N.; Zawadzki, Robert J.
2015-03-01
A compact, non-invasive multi-modal system has been developed for in vivo mouse retina imaging. It is configured for simultaneously detecting green and red fluorescent protein signals with scanning laser ophthalmoscopy (SLO), back-scattered light from the SLO illumination beam, and depth information about different retinal layers by means of Optical Coherence Tomography (OCT). Simultaneous assessment of retinal characteristics with different modalities can provide a wealth of information about the structural and functional changes in the retinal neural tissue and chorio-retinal vasculature in vivo. Additionally, simultaneous acquisition of multiple channels facilitates analysis of the data of different modalities by automatic temporal and structural co-registration. As an example of the instrument's performance, we imaged the retina of a mouse with constitutive expression of GFP in microglia cells (Cx3cr1GFP/+), which also expressed the red fluorescent protein mCherry in Müller glial cells by means of adeno-associated virus delivery (AAV2) of an mCherry cDNA driven by the GFAP (glial fibrillary acid protein) promoter.
Namiki, Kana; Miyawaki, Atsushi; Ishikawa, Takuji
2017-01-01
Whole slide imaging (WSI) is a useful tool for multi-modal imaging, and in our work, we have often combined WSI with darkfield microscopy. However, traditional darkfield microscopy cannot use a single condenser to support high- and low-numerical-aperture objectives, which limits the modality of WSI. To overcome this limitation, we previously developed a darkfield internal reflection illumination (DIRI) microscope using white light-emitting diodes (LEDs). Although the developed DIRI is useful for biological applications, substantial problems remain to be resolved. In this study, we propose a novel illumination technique called color DIRI. The use of three-color LEDs dramatically improves the capability of the system, such that color DIRI (1) enables optimization of the illumination color; (2) can be combined with an oil objective lens; (3) can produce fluorescence excitation illumination; (4) can adjust the wavelength of light to avoid cell damage or reactions; and (5) can be used as a photostimulator. These results clearly illustrate that the proposed color DIRI can significantly extend WSI modalities for biological applications. PMID:28085892
An efficient method for the fusion of light field refocused images
NASA Astrophysics Data System (ADS)
Wang, Yingqian; Yang, Jungang; Xiao, Chao; An, Wei
2018-04-01
Light field cameras have drawn much attention due to the advantage of post-capture adjustments such as refocusing after exposure. The depth of field in refocused images is always shallow because of the large equivalent aperture. As a result, a large number of multi-focus images are obtained and an all-in-focus image is demanded. Most multi-focus image fusion algorithms are not specifically designed for large numbers of source images, and the traditional DWT-based fusion approach has serious problems when dealing with many multi-focus images, causing color distortion and ringing artifacts. To solve this problem, this paper proposes an efficient multi-focus image fusion method based on the stationary wavelet transform (SWT), which can deal with a large quantity of multi-focus images with shallow depths of field. We compare the SWT-based approach with the DWT-based approach in various settings, and the results demonstrate that the proposed method performs much better both visually and quantitatively.
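A minimal sketch of SWT-based multi-focus fusion of the kind described above, using PyWavelets (assumed available): approximation coefficients are averaged and, for each detail subband, the coefficient with the largest magnitude across the source images is kept. The wavelet, level, and toy images are illustrative choices, not the paper's exact settings.

```python
import numpy as np
import pywt                          # PyWavelets, assumed available
from scipy.ndimage import gaussian_filter

def swt_fuse(images, wavelet="db2", level=2):
    """Fuse multi-focus images: average the approximation coefficients and, per detail
    subband, keep the coefficient with the largest magnitude across the source images."""
    coeffs = [pywt.swt2(img.astype(float), wavelet, level=level) for img in images]
    fused = []
    for lvl in range(level):
        cA = np.mean([c[lvl][0] for c in coeffs], axis=0)
        details = []
        for d in range(3):                                   # cH, cV, cD subbands
            stack = np.stack([c[lvl][1][d] for c in coeffs])
            idx = np.argmax(np.abs(stack), axis=0)
            details.append(np.take_along_axis(stack, idx[None], axis=0)[0])
        fused.append((cA, tuple(details)))
    return pywt.iswt2(fused, wavelet)

# Toy demo: two synthetic 'refocused' images, each defocused in a different half
rng = np.random.default_rng(1)
sharp = rng.random((128, 128))                               # size divisible by 2**level
img1, img2 = sharp.copy(), sharp.copy()
img1[:, 64:] = gaussian_filter(sharp, 2)[:, 64:]             # right half blurred
img2[:, :64] = gaussian_filter(sharp, 2)[:, :64]             # left half blurred
all_in_focus = swt_fuse([img1, img2])
print(all_in_focus.shape)
```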
NASA Astrophysics Data System (ADS)
Khan, Faisal M.; Kulikowski, Casimir A.
2016-03-01
A major focus area for precision medicine is in managing the treatment of newly diagnosed prostate cancer patients. For patients with a positive biopsy, clinicians aim to develop an individualized treatment plan based on a mechanistic understanding of the disease factors unique to each patient. Recently, there has been a movement towards a multi-modal view of the cancer through the fusion of quantitative information from multiple sources, imaging and otherwise. Simultaneously, there have been significant advances in machine learning methods for medical prognostics which integrate a multitude of predictive factors to develop an individualized risk assessment and prognosis for patients. An emerging area of research is in semi-supervised approaches which transduce the appropriate survival time for censored patients. In this work, we apply a novel semi-supervised approach for support vector regression to predict the prognosis for newly diagnosed prostate cancer patients. We integrate clinical characteristics of a patient's disease with imaging derived metrics for biomarker expression as well as glandular and nuclear morphology. In particular, our goal was to explore the performance of nuclear and glandular architecture within the transduction algorithm and assess their predictive power when compared with the Gleason score manually assigned by a pathologist. Our analysis in a multi-institutional cohort of 1027 patients indicates that not only do glandular and morphometric characteristics improve the predictive power of the semi-supervised transduction algorithm; they perform better when the pathological Gleason is absent. This work represents one of the first assessments of quantitative prostate biopsy architecture versus the Gleason grade in the context of a data fusion paradigm which leverages a semi-supervised approach for risk prognosis.
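The abstract does not give the algorithm's details, so what follows is only a generic sketch of a transductive loop for censored survival times using scikit-learn's SVR: predictions for censored patients are clamped to be no earlier than their censoring times and fed back as training targets. The synthetic data, kernel, and iteration count are assumptions for illustration, not the authors' method.

```python
import numpy as np
from sklearn.svm import SVR

def transductive_svr(X_event, t_event, X_cens, t_cens, n_iter=5):
    """Iteratively impute survival times for censored patients (censoring time = lower bound)."""
    model = SVR(kernel="rbf", C=10.0)
    model.fit(X_event, t_event)
    t_imp = np.maximum(model.predict(X_cens), t_cens)
    for _ in range(n_iter):
        model.fit(np.vstack([X_event, X_cens]), np.concatenate([t_event, t_imp]))
        t_imp = np.maximum(model.predict(X_cens), t_cens)
    return model, t_imp

# Toy demo with synthetic clinical/morphometric features and follow-up times
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 10))
t_true = np.exp(1.0 + X[:, 0] - 0.5 * X[:, 1] + 0.2 * rng.normal(size=200))
censored = rng.random(200) < 0.4
t_obs = np.where(censored, t_true * rng.uniform(0.3, 0.9, 200), t_true)
model, t_imputed = transductive_svr(X[~censored], t_obs[~censored],
                                    X[censored], t_obs[censored])
print("imputed times for censored cases (first 5):", np.round(t_imputed[:5], 2))
```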
Armstrong, Anderson C.; Gjesdal, Ola; Almeida, André; Nacif, Marcelo; Wu, Colin; Bluemke, David A.; Brumback, Lyndia; Lima, João A. C.
2013-01-01
BACKGROUND Left ventricular mass (LVM) and hypertrophy (LVH) are important parameters, but their use is surrounded by controversies. We compare LVM by echocardiography and cardiac magnetic resonance (CMR), investigating reproducibility aspects and the effect of echocardiography image quality. We also compare indexing methods within and between imaging modalities for classification of LVH and cardiovascular risk. METHODS MESA enrolled 880 participants in Baltimore City; 146 had echocardiograms and CMR on the same day. LVM was then assessed using standard techniques. Echocardiography image quality was rated (good/limited) according to the parasternal view. LVH was defined after indexing LVM to body surface area, height^1.7, height^2.7, or by the predicted LVM from a reference group. Participants were classified for cardiovascular risk according to Framingham score. Pearson’s correlation, Bland-Altman plots, percent agreement, and kappa coefficient assessed agreement within and between modalities. RESULTS LVM by echocardiography (140 ± 40 g) and by CMR were correlated (r = 0.8, p < 0.001) regardless of the echocardiography image quality. The reproducibility profile had strong correlations and agreement for both modalities. Image quality groups had similar characteristics; those with good images agreed slightly better with CMR. The prevalence of LVH tended to be higher with higher cardiovascular risk. The agreement for LVH between imaging modalities ranged from 77% to 98% and the kappa coefficient from 0.10 to 0.76. CONCLUSIONS Echocardiography has a reliable performance for LVM assessment and classification of LVH, with limited influence of image quality. Echocardiography and CMR differ in the assessment of LVH, and additional differences arise from the indexing methods. PMID:23930739
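A small sketch of the kind of agreement analysis described above: LVH is classified after indexing LVM by different denominators, and agreement is summarised by percent agreement and Cohen's kappa. The synthetic values and cutoffs below are placeholders for illustration only, not the MESA data or guideline thresholds.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(3)
n = 146
lvm_echo = rng.normal(140, 40, n)                      # g, echocardiography
lvm_cmr = 0.95 * lvm_echo + rng.normal(0, 15, n)       # correlated CMR values (synthetic)
height = rng.normal(1.68, 0.09, n)                     # m
bsa = rng.normal(1.85, 0.20, n)                        # m^2

def lvh(lvm, denom, cutoff):                           # LVH = indexed mass above a cutoff
    return (lvm / denom) > cutoff

# Hypothetical cutoffs, for illustration only
echo_bsa = lvh(lvm_echo, bsa, 95)
cmr_bsa = lvh(lvm_cmr, bsa, 85)
echo_h27 = lvh(lvm_echo, height ** 2.7, 45)

agree = np.mean(echo_bsa == cmr_bsa)
print(f"echo vs CMR (BSA indexing): agreement {agree:.2f}, "
      f"kappa {cohen_kappa_score(echo_bsa, cmr_bsa):.2f}")
print(f"BSA vs height^2.7 indexing (echo): kappa {cohen_kappa_score(echo_bsa, echo_h27):.2f}")
```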
Combating atherosclerosis with targeted nanomedicines: recent advances and future prospective
Nakhlband, Ailar; Eskandani, Morteza; Saeedi, Nazli; Ghaffari, Samad; Garjani, Alireza
2018-01-01
Introduction: Cardiovascular diseases (CVDs) are recognized as the leading cause of mortality worldwide. The increasing prevalence of such diseases demands novel therapeutic and diagnostic approaches to overcome associated clinical/social issues. Recent advances in nanotechnology and biological sciences have provided intriguing insights into employing targeted nanomachines delivered to the desired location as imaging, diagnostic, and therapeutic modalities. Nanomedicines as novel tools for enhanced drug delivery, imaging, and diagnosis strategies have shown great promise to combat cardiovascular diseases. Methods: In the current study, we intend to review the most recent studies on nano-based strategies for improved management of CVDs. Results: A cascade of events results in the formation of atheromatous plaque and arterial stenosis. Furthermore, recent studies have shown that nanomedicines have displayed unique functionalities and provided de novo applications in the diagnosis and treatment of atherosclerosis. Conclusion: Despite some limitations, nanomedicines hold considerable potential in the prevention, diagnosis, and treatment of various ailments including atherosclerosis. Fewer side effects, favorable physicochemical properties, and the multi-purpose applicability of such nano-systems have been recognized in various investigations. Therefore, it is strongly believed that with targeted drug delivery to atherosclerotic lesions and plaque, management of the onset and progression of disease would be more efficient than with classical treatment modalities. PMID:29713603
Hybrid optical acoustic seafloor mapping
NASA Astrophysics Data System (ADS)
Inglis, Gabrielle
The oceanographic research and industrial communities have a persistent demand for detailed three-dimensional sea floor maps which convey both shape and texture. Such data products are used for archeology, geology, ship inspection, biology, and habitat classification. There are a variety of sensing modalities and processing techniques available to produce these maps, and each has its own potential benefits and related challenges. Multibeam sonar and stereo vision are two such sensors with complementary strengths, making them ideally suited for data fusion. Data fusion approaches, however, have seen only limited application to underwater mapping and there are no established methods for creating hybrid, 3D reconstructions from two underwater sensing modalities. This thesis develops a processing pipeline to synthesize hybrid maps from multi-modal survey data. It is helpful to think of this processing pipeline as having two distinct phases: Navigation Refinement and Map Construction. This thesis extends existing work in underwater navigation refinement by incorporating methods which increase measurement consistency between both multibeam and camera. The result is a self-consistent 3D point cloud comprised of camera and multibeam measurements. In the map construction phase, a subset of the multi-modal point cloud retaining the best characteristics of each sensor is selected to be part of the final map. To quantify the desired traits of a map, several characteristics of a useful map are distilled into specific criteria. The different ways that hybrid maps can address these criteria provide justification for producing them as an alternative to current methodologies. The processing pipeline implements multi-modal data fusion and outlier rejection with emphasis on different aspects of map fidelity. The resulting point cloud is evaluated in terms of how well it addresses the map criteria. The final hybrid maps retain the strengths of both sensors and show significant improvement over the single-modality maps and naively assembled multi-modal maps.
NASA Astrophysics Data System (ADS)
Merčep, Elena; Burton, Neal C.; Deán-Ben, Xosé Luís.; Razansky, Daniel
2017-02-01
The complementary contrast of the optoacoustic (OA) and pulse-echo ultrasound (US) modalities makes the combined usage of these imaging technologies highly advantageous. Due to the different physical contrast mechanisms, the development of a detector array optimally suited for both modalities is one of the challenges to the efficient implementation of a single OA-US imaging device. We demonstrate the imaging performance of the first hybrid detector array whose novel design, incorporating array segments of linear and concave geometry, optimally supports image acquisition in both reflection-mode ultrasonography and optoacoustic tomography modes. The hybrid detector array has a total of 256 elements arranged in three segments of different geometry and pitch: a central 128-element linear segment with a pitch of 0.25 mm, ideally suited for pulse-echo US imaging, and two external 64-element segments with concave geometry and 0.6 mm pitch, optimized for OA image acquisition. Interleaved OA and US image acquisition at up to 25 fps is facilitated through a custom-made multiplexer unit. The spatial resolution of the transducer was characterized in numerical simulations and validated in phantom experiments, amounting to 230 μm and 300 μm in the OA and US imaging modes, respectively. The imaging performance of the multi-segment detector array was experimentally shown in a series of imaging sessions with healthy volunteers. Employing mixed array geometries simultaneously achieves excellent OA contrast over a large field of view and US contrast for complementary structural features, with reduced side lobes and improved resolution. The newly designed hybrid detector array, comprising segments of linear and concave geometries, optimally fulfills the requirements for efficient US and OA imaging and may expand the applicability of the developed hybrid OPUS imaging technology and accelerate its clinical translation.
Shen, Kai; Lu, Hui; Baig, Sarfaraz; Wang, Michael R.
2017-01-01
The multi-frame superresolution technique is introduced to significantly improve the lateral resolution and image quality of spectral domain optical coherence tomography (SD-OCT). Using several sets of low-resolution C-scan 3D images with lateral sub-spot-spacing shifts between sets, multi-frame superresolution processing of these sets at each depth layer reconstructs a lateral image of higher resolution and quality. Layer-by-layer processing yields an overall 3D image of high lateral resolution and quality. In theory, superresolution processing including deconvolution can jointly address the diffraction limit, lateral scan density, and background noise problems. Experimentally, an improvement of the lateral resolution by a factor of ~3, reaching 7.81 µm and 2.19 µm with sample arm optics of 0.015 and 0.05 numerical aperture respectively, as well as a doubling of image quality, has been confirmed by imaging a known resolution test target. Improved lateral resolution on in vitro skin C-scan images has also been demonstrated. For in vivo 3D SD-OCT imaging of human skin, fingerprint, and retinal layers, we used a multi-modal volume registration method to effectively estimate the lateral image shifts among different C-scans caused by random minor unintended body motion. Further processing of these images generated high lateral resolution 3D images as well as high quality B-scan images of these in vivo tissues. PMID:29188089
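A minimal shift-and-add sketch of the multi-frame idea described above (without the deconvolution step): each low-resolution layer is placed onto a finer grid according to its known sub-pixel shift and the contributions are averaged. The shifts, upsampling factor, and test image are assumptions for illustration, not the paper's processing chain.

```python
import numpy as np

def shift_and_add_sr(frames, shifts, factor=2):
    """Fuse low-resolution frames with known sub-pixel (dy, dx) shifts onto a finer grid."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        ys = np.clip(np.rint((np.arange(h)[:, None] + dy) * factor).astype(int), 0, h * factor - 1)
        xs = np.clip(np.rint((np.arange(w)[None, :] + dx) * factor).astype(int), 0, w * factor - 1)
        acc[ys, xs] += frame          # each low-res sample lands on its own fine-grid position
        cnt[ys, xs] += 1
    cnt[cnt == 0] = 1                 # leave unvisited fine-grid pixels at zero
    return acc / cnt

# Toy demo: four half-pixel-shifted decimations of a high-resolution pattern
rng = np.random.default_rng(4)
hi = rng.random((128, 128))
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
frames = [hi[int(2 * dy)::2, int(2 * dx)::2] for dy, dx in shifts]
sr = shift_and_add_sr(frames, shifts, factor=2)
print(sr.shape, np.allclose(sr, hi))  # exact recovery in this noise-free toy case
```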
Segmentation and Visual Analysis of Whole-Body Mouse Skeleton microSPECT
Khmelinskii, Artem; Groen, Harald C.; Baiker, Martin; de Jong, Marion; Lelieveldt, Boudewijn P. F.
2012-01-01
Whole-body SPECT small animal imaging is used to study cancer, and plays an important role in the development of new drugs. Comparing and exploring whole-body datasets can be a difficult and time-consuming task due to the inherent heterogeneity of the data (high volume/throughput, multi-modality, postural and positioning variability). The goal of this study was to provide a method to align and compare side-by-side multiple whole-body skeleton SPECT datasets in a common reference, thus eliminating acquisition variability that exists between the subjects in cross-sectional and multi-modal studies. Six whole-body SPECT/CT datasets of BALB/c mice injected with bone targeting tracers 99mTc-methylene diphosphonate (99mTc-MDP) and 99mTc-hydroxymethane diphosphonate (99mTc-HDP) were used to evaluate the proposed method. An articulated version of the MOBY whole-body mouse atlas was used as a common reference. Its individual bones were registered one-by-one to the skeleton extracted from the acquired SPECT data following an anatomical hierarchical tree. Sequential registration was used while constraining the local degrees of freedom (DoFs) of each bone in accordance to the type of joint and its range of motion. The Articulated Planar Reformation (APR) algorithm was applied to the segmented data for side-by-side change visualization and comparison of data. To quantitatively evaluate the proposed algorithm, bone segmentations of extracted skeletons from the correspondent CT datasets were used. Euclidean point to surface distances between each dataset and the MOBY atlas were calculated. The obtained results indicate that after registration, the mean Euclidean distance decreased from 11.5±12.1 to 2.6±2.1 voxels. The proposed approach yielded satisfactory segmentation results with minimal user intervention. It proved to be robust for “incomplete” data (large chunks of skeleton missing) and for an intuitive exploration and comparison of multi-modal SPECT/CT cross-sectional mouse data. PMID:23152834
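For the quantitative evaluation step mentioned above, here is a minimal sketch of computing Euclidean distances from one point set to its nearest neighbours in another (a point-to-point stand-in for the true point-to-surface distance), using SciPy's cKDTree. The point clouds below are random placeholders measured in voxels, not the MOBY atlas or the mouse data.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_nn_distance(reference_pts, query_pts):
    """Mean and std of Euclidean distances from each query point to its nearest reference point."""
    d, _ = cKDTree(reference_pts).query(query_pts)
    return d.mean(), d.std()

# Placeholder point clouds (voxel coordinates): atlas bone surface vs. extracted skeleton
rng = np.random.default_rng(5)
atlas_surface = rng.uniform(0, 100, size=(5000, 3))
skeleton_voxels = atlas_surface[:2000] + rng.normal(0, 2.5, size=(2000, 3))
print("mean +/- std distance (voxels): %.1f +/- %.1f"
      % mean_nn_distance(atlas_surface, skeleton_voxels))
```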
Viewpoints on Medical Image Processing: From Science to Application
Deserno (né Lehmann), Thomas M.; Handels, Heinz; Maier-Hein (né Fritzsche), Klaus H.; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas
2013-01-01
Medical image processing provides core innovation for medical imaging. This paper is focused on recent developments from science to applications analyzing the past fifteen years of history of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of views: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing is seen as field of rapid development with clear trends to integrated applications in diagnostics, treatment planning and treatment. PMID:24078804
Multimodal Image Registration through Simultaneous Segmentation.
Aganj, Iman; Fischl, Bruce
2017-11-01
Multimodal image registration facilitates the combination of complementary information from images acquired with different modalities. Most existing methods require computation of the joint histogram of the images, while some perform joint segmentation and registration in alternate iterations. In this work, we introduce a new non-information-theoretical method for pairwise multimodal image registration, in which the error of segmentation - using both images - is considered as the registration cost function. We empirically evaluate our method via rigid registration of multi-contrast brain magnetic resonance images, and demonstrate an often higher registration accuracy in the results produced by the proposed technique, compared to those by several existing methods.
NASA Astrophysics Data System (ADS)
Wu, Binlin
New near-infrared (NIR) diffuse optical tomography (DOT) approaches were developed to detect, locate, and image small targets embedded in highly scattering turbid media. The first approach, referred to as time reversal optical tomography (TROT), is based on time reversal (TR) imaging and multiple signal classification (MUSIC). The second approach uses the decomposition methods of non-negative matrix factorization (NMF) and principal component analysis (PCA) commonly used in blind source separation (BSS) problems, and compares the outcomes with those of optical imaging using independent component analysis (OPTICA). The goal is to develop a safe, affordable, noninvasive imaging modality for detection and characterization of breast tumors in early growth stages when those are more amenable to treatment. The efficacy of the approaches was tested using simulated data, and experiments involving model media and absorptive, scattering, and fluorescent targets, as well as a "realistic human breast model" composed of ex vivo breast tissues with embedded tumors. The experimental arrangements realized continuous wave (CW) multi-source probing of samples and multi-detector acquisition of the diffusely transmitted signal in rectangular slab geometry. A data matrix was generated using the perturbation in the transmitted light intensity distribution due to the presence of absorptive or scattering targets. For fluorescent targets the data matrix was generated using the diffusely transmitted fluorescence signal distribution from the targets. The data matrix was analyzed using the different approaches to detect and characterize the targets. The salient features of the approaches include the ability to: (a) detect small targets; (b) provide the three-dimensional location of the targets with high accuracy (within a millimeter or two); and (c) assess the optical strength of the targets. The approaches are less computationally intensive and consequently faster than other inverse image reconstruction methods that attempt to reconstruct the optical properties of every voxel of the sample volume. The location of a target was estimated as the weighted center of the optical property of the target. Consequently, the locations of small targets were better specified than those of extended targets. It was more difficult to retrieve the size and shape of a target. The fluorescent measurements seemed to provide better accuracy than the transillumination measurements. In the case of ex vivo detection of tumors embedded in human breast tissue, measurements using multiple wavelengths provided more robust results and better suppression of artifacts (false positives) than single-wavelength measurements. The ability to detect and locate small targets and the speedier reconstruction, combined with fluorophore-specific multi-wavelength probing, have the potential to make these approaches suitable for breast cancer detection and diagnosis.
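A rough sketch of the decomposition step described above, using scikit-learn's NMF and PCA on a synthetic perturbation data matrix (sources by detectors). The blob-shaped target signatures and noise level are invented for illustration, and the component peaks stand in for the weighted-centre localization that the abstract describes.

```python
import numpy as np
from sklearn.decomposition import NMF, PCA

rng = np.random.default_rng(6)
n_src, n_det = 32, 32

def blob(center, width, n):
    """Smooth, non-negative 1D profile standing in for a target's signature."""
    return np.exp(-0.5 * ((np.arange(n) - center) / width) ** 2)

# Two hypothetical absorptive targets seen over the source-detector grid
target1 = np.outer(blob(8, 3, n_src), blob(10, 3, n_det))
target2 = np.outer(blob(22, 3, n_src), blob(25, 3, n_det))
data = target1 + 0.7 * target2 + 0.02 * rng.random((n_src, n_det))

nmf = NMF(n_components=2, init="nndsvda", max_iter=500)
W = nmf.fit_transform(data)      # source-side signatures of each recovered component
H = nmf.components_              # detector-side signatures
pca_scores = PCA(n_components=2).fit_transform(data)

# Peaks of the recovered signatures indicate which source/detector rows each target affects most
print("NMF source-side peaks:", W.argmax(axis=0))
print("NMF detector-side peaks:", H.argmax(axis=1))
```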
Menze, Bjoern H.; Van Leemput, Koen; Lashkari, Danial; Riklin-Raviv, Tammy; Geremia, Ezequiel; Alberts, Esther; Gruber, Philipp; Wegener, Susanne; Weber, Marc-André; Székely, Gabor; Ayache, Nicholas; Golland, Polina
2016-01-01
We introduce a generative probabilistic model for segmentation of brain lesions in multi-dimensional images that generalizes the EM segmenter, a common approach for modelling brain images using Gaussian mixtures and a probabilistic tissue atlas that employs expectation-maximization (EM) to estimate the label map for a new image. Our model augments the probabilistic atlas of the healthy tissues with a latent atlas of the lesion. We derive an estimation algorithm with closed-form EM update equations. The method extracts a latent atlas prior distribution and the lesion posterior distributions jointly from the image data. It delineates lesion areas individually in each channel, allowing for differences in lesion appearance across modalities, an important feature of many brain tumor imaging sequences. We also propose discriminative model extensions to map the output of the generative model to arbitrary labels with semantic and biological meaning, such as “tumor core” or “fluid-filled structure”, but without a one-to-one correspondence to the hypo- or hyper-intense lesion areas identified by the generative model. We test the approach in two image sets: the publicly available BRATS set of glioma patient scans, and multimodal brain images of patients with acute and subacute ischemic stroke. We find the generative model that has been designed for tumor lesions to generalize well to stroke images, and the generative-discriminative model to be one of the top ranking methods in the BRATS evaluation. PMID:26599702
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, X; Jani, A; Rossi, P
Purpose: MRI has shown promise in identifying prostate tumors with high sensitivity and specificity for the detection of prostate cancer. Accurate segmentation of the prostate plays a key role in various tasks: to accurately localize prostate boundaries for biopsy needle placement and radiotherapy, to initialize multi-modal registration algorithms or to obtain the region of interest for computer-aided detection of prostate cancer. However, manual segmentation during biopsy or radiation therapy can be time consuming and subject to inter- and intra-observer variation. This study’s purpose is to develop an automated method to address this technical challenge. Methods: We present an automated multi-atlas method for MR prostate segmentation using patch-based label fusion. After an initial preprocessing of all images, all the atlases are non-rigidly registered to a target image. Then, the resulting transformation is used to propagate the anatomical structure labels of the atlas into the space of the target image. The top L similar atlases are further chosen by measuring intensity and structure differences in the region of interest around the prostate. Finally, using voxel weighting based on a patch-based anatomical signature, the label that the majority of all warped labels predict for each voxel is used for the final segmentation of the target image. Results: This segmentation technique was validated with a clinical study of 13 patients. The accuracy of our approach was assessed using the manual segmentation (gold standard). The mean volume Dice Overlap Coefficient was 89.5±2.9% between our and manual segmentation, which indicates that the automatic segmentation method works well and could be used for 3D MRI-guided prostate intervention. Conclusion: We have developed a new prostate segmentation approach based on the optimal feature learning label fusion framework, demonstrated its clinical feasibility, and validated its accuracy. This segmentation technique could be a useful tool in image-guided interventions for prostate-cancer diagnosis and treatment.
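A simplified sketch of patch-based label fusion as described above, assuming the atlases have already been non-rigidly warped to the target space (registration and atlas selection are omitted). The patch size, weighting bandwidth, and Gaussian patch-similarity weights are illustrative choices, not the authors' exact formulation.

```python
import numpy as np

def patch_label_fusion(target, warped_imgs, warped_labels, patch=3, h=0.05):
    """Weighted voting with patch-based weights.
    target: (H, W) intensity image; warped_imgs/labels: (L, H, W) atlases already
    registered to the target space."""
    L, H, W = warped_imgs.shape
    r = patch // 2
    pad_t = np.pad(target, r, mode="edge")
    pad_a = np.pad(warped_imgs, ((0, 0), (r, r), (r, r)), mode="edge")
    out = np.zeros((H, W), dtype=warped_labels.dtype)
    for i in range(H):
        for j in range(W):
            tp = pad_t[i:i + patch, j:j + patch]
            votes = {}
            for a in range(L):
                ap = pad_a[a, i:i + patch, j:j + patch]
                w = np.exp(-np.sum((tp - ap) ** 2) / h)   # patch similarity weight
                lab = warped_labels[a, i, j]
                votes[lab] = votes.get(lab, 0.0) + w
            out[i, j] = max(votes, key=votes.get)         # label with the largest total weight
    return out

# Tiny demo: two atlases voting on a two-label image
t = np.zeros((20, 20)); t[5:15, 5:15] = 1.0
atl = np.stack([t + 0.05, t - 0.05])
lab = np.stack([(t > 0.5).astype(int), (t > 0.5).astype(int)])
print(patch_label_fusion(t, atl, lab).sum())   # 100 foreground pixels
```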
Combinatorial Markov Random Fields and Their Applications to Information Organization
2008-02-01
Excerpted fragments: text (titles, part-of-speech tags); image processing (images, colors, texture, blobs, interest points, caption words); video processing (video signal, audio) … McGurk and MacDonald published their pioneering work [80] that revealed the multi-modal nature of speech perception: sound and moving lips compose one … Part-of-Speech (POS) n-grams (that correspond to the syntactic structure of text) are extracted from sentences in an incremental manner: the first n …
NASA Astrophysics Data System (ADS)
Daffara, Claudia; Parisotto, Simone; Ambrosini, Dario
2018-05-01
We present a multi-purpose, dual-mode imaging method in the Mid-Wavelength Infrared (MWIR) range (from 3 μm to 5 μm) for a more efficient nondestructive analysis of artworks. Using a setup based on a MWIR thermal camera and multiple radiation sources, two radiometric image datasets are acquired in different acquisition modalities: the image in quasi-reflectance mode (TQR) and the thermal sequence in emission mode. The advantages are: the complementarity of the information; the use of the quasi-reflectance map for calculating the emissivity map; and the use of the TQR map to reference the thermographic images to the visible image. The concept of the method is presented, its practical feasibility is demonstrated through a custom imaging setup, and its potential for nondestructive analysis is shown in a notable application to cultural heritage. The method has been used as an experimental tool in support of the restoration of the mural painting "Monocromo" by Leonardo da Vinci. Feedback from the operators and a comparison with some conventional diagnostic techniques are also given to underline the novelty and potential of the method.
A Multi-Functional Imaging Approach to High-Content Protein Interaction Screening
Matthews, Daniel R.; Fruhwirth, Gilbert O.; Weitsman, Gregory; Carlin, Leo M.; Ofo, Enyinnaya; Keppler, Melanie; Barber, Paul R.; Tullis, Iain D. C.; Vojnovic, Borivoj; Ng, Tony; Ameer-Beg, Simon M.
2012-01-01
Functional imaging can provide a level of quantification that is not possible in what might be termed traditional high-content screening. This is due to the fact that the current state-of-the-art high-content screening systems take the approach of scaling-up single cell assays, and are therefore based on essentially pictorial measures as assay indicators. Such phenotypic analyses have become extremely sophisticated, advancing screening enormously, but this approach can still be somewhat subjective. We describe the development, and validation, of a prototype high-content screening platform that combines steady-state fluorescence anisotropy imaging with fluorescence lifetime imaging (FLIM). This functional approach allows objective, quantitative screening of small molecule libraries in protein-protein interaction assays. We discuss the development of the instrumentation, the process by which information on fluorescence resonance energy transfer (FRET) can be extracted from wide-field, acceptor fluorescence anisotropy imaging and cross-checking of this modality using lifetime imaging by time-correlated single-photon counting. Imaging of cells expressing protein constructs where eGFP and mRFP1 are linked with amino-acid chains of various lengths (7, 19 and 32 amino acids) shows the two methodologies to be highly correlated. We validate our approach using a small-scale inhibitor screen of a Cdc42 FRET biosensor probe expressed in epidermoid cancer cells (A431) in a 96 microwell-plate format. We also show that acceptor fluorescence anisotropy can be used to measure variations in hetero-FRET in protein-protein interactions. We demonstrate this using a screen of inhibitors of internalization of the transmembrane receptor, CXCR4. These assays enable us to demonstrate all the capabilities of the instrument, image processing and analytical techniques that have been developed. Direct correlation between acceptor anisotropy and donor FLIM is observed for FRET assays, providing an opportunity to rapidly screen proteins, interacting on the nano-meter scale, using wide-field imaging. PMID:22506000
Chew, Avenell L.; Lamey, Tina; McLaren, Terri; De Roach, John
2016-01-01
Purpose To present en face optical coherence tomography (OCT) images generated by graph-search theory algorithm-based custom software and examine correlation with other imaging modalities. Methods En face OCT images derived from high density OCT volumetric scans of 3 healthy subjects and 4 patients using a custom algorithm (graph-search theory) and commercial software (Heidelberg Eye Explorer software (Heidelberg Engineering)) were compared and correlated with near infrared reflectance, fundus autofluorescence, adaptive optics flood-illumination ophthalmoscopy (AO-FIO) and microperimetry. Results Commercial software was unable to generate accurate en face OCT images in eyes with retinal pigment epithelium (RPE) pathology due to segmentation error at the level of Bruch’s membrane (BM). Accurate segmentation of the basal RPE and BM was achieved using custom software. The en face OCT images from eyes with isolated interdigitation or ellipsoid zone pathology were of similar quality between custom software and Heidelberg Eye Explorer software in the absence of any other significant outer retinal pathology. En face OCT images demonstrated angioid streaks, lesions of acute macular neuroretinopathy, hydroxychloroquine toxicity and Bietti crystalline deposits that correlated with other imaging modalities. Conclusions The graph-search theory algorithm helps to overcome the limitations of outer retinal segmentation inaccuracies in commercial software. En face OCT images can provide detailed topography of the reflectivity within a specific layer of the retina which correlates with other forms of fundus imaging. Our results highlight the need for standardization of image reflectivity to facilitate quantification of en face OCT images and longitudinal analysis. PMID:27959968
Ashok, Praveen C.; Praveen, Bavishna B.; Bellini, Nicola; Riches, Andrew; Dholakia, Kishan; Herrington, C. Simon
2013-01-01
We report a multimodal optical approach using both Raman spectroscopy and optical coherence tomography (OCT) in tandem to discriminate between colonic adenocarcinoma and normal colon. Although both of these non-invasive techniques are capable of discriminating between normal and tumour tissues, they are unable individually to provide both the high specificity and high sensitivity required for disease diagnosis. We combine the chemical information derived from Raman spectroscopy with the texture parameters extracted from OCT images. The sensitivity obtained using Raman spectroscopy and OCT individually was 89% and 78% respectively and the specificity was 77% and 74% respectively. Combining the information derived using the two techniques increased both sensitivity and specificity to 94% demonstrating that combining complementary optical information enhances diagnostic accuracy. These data demonstrate that multimodal optical analysis has the potential to achieve accurate non-invasive cancer diagnosis. PMID:24156073
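A hedged sketch of the kind of modality-fusion classification reported above: Raman-like and OCT-texture-like features are concatenated, and a cross-validated classifier's sensitivity and specificity are compared against each modality alone. The synthetic features, classifier choice, and fold count are assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def sens_spec(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1)); fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0)); fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

rng = np.random.default_rng(7)
n = 120
y = rng.integers(0, 2, n)                                              # 0 = normal, 1 = adenocarcinoma
raman = rng.normal(size=(n, 20)) + 0.8 * y[:, None] * rng.random(20)   # synthetic spectral features
oct_tex = rng.normal(size=(n, 6)) + 0.8 * y[:, None] * rng.random(6)   # synthetic OCT texture features

clf = make_pipeline(StandardScaler(), SVC())
for name, X in [("Raman only", raman), ("OCT only", oct_tex),
                ("Raman + OCT", np.hstack([raman, oct_tex]))]:
    pred = cross_val_predict(clf, X, y, cv=5)
    se, sp = sens_spec(y, pred)
    print(f"{name}: sensitivity {se:.2f}, specificity {sp:.2f}")
```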
Designing a Website to Support Students' Academic Writing Process
ERIC Educational Resources Information Center
Åberg, Eva Svärdemo; Ståhle, Ylva; Engdahl, Ingrid; Knutes-Nyqvist, Helen
2016-01-01
Academic writing skills are crucial when students, e.g., in teacher education programs, write their undergraduate theses. A multi-modal web-based and self-regulated learning resource on academic writing was developed, using texts, hypertext, moving images, podcasts and templates. A study, using surveys and a focus group, showed that students used…
Articulatory Mediation of Speech Perception: A Causal Analysis of Multi-Modal Imaging Data
ERIC Educational Resources Information Center
Gow, David W., Jr.; Segawa, Jennifer A.
2009-01-01
The inherent confound between the organization of articulation and the acoustic-phonetic structure of the speech signal makes it exceptionally difficult to evaluate the competing claims of motor and acoustic-phonetic accounts of how listeners recognize coarticulated speech. Here we use Granger causation analysis of high spatiotemporal resolution…
Development of Convergence Nanoparticles for Multi-Modal Bio-Medical Imaging
2008-09-18
Excerpted fragments: Synthesized nanoparticles (1 mg/ml (Mn+Fe)) are mixed with cancer cells (MCF7) and heat generation efficacy was measured with the cell viability under … fabrication of MnFe2O4, which has superior magnetic properties compared to other types of metal ferrites. Figure 1 caption: Magnetic nanoparticle for disease …
Prada, Francesco; Del Bene, Massimiliano; Moiraghi, Alessandro; Casali, Cecilia; Legnani, Federico Giuseppe; Saladino, Andrea; Perin, Alessandro; Vetrano, Ignazio Gaspare; Mattei, Luca; Richetta, Carla; Saini, Marco; DiMeco, Francesco
2015-01-01
The main goal in meningioma surgery is to achieve complete tumor removal, when possible, while improving or preserving patient neurological functions. Intraoperative imaging guidance is a fundamental tool for this achievement. In this regard, intra-operative ultrasound (ioUS) is a reliable solution for obtaining real-time information during surgery and it has been applied to many different aspects of neurosurgery. In recent years, different ioUS modalities have been described: B-mode, Fusion Imaging with pre-operatively acquired MRI, Doppler, contrast enhanced ultrasound (CEUS), and elastosonography. In this paper, we present our US-based multimodal approach in meningioma surgery. We describe all the most relevant ioUS modalities and their intraoperative application to obtain precise and specific information regarding the lesion for a tailored approach in meningioma surgery. For each modality, we perform a review of the literature accompanied by a pictorial essay based on our routine use of ioUS for meningioma resection.
Shapes, scents and sounds: quantifying the full multi-sensory basis of conceptual knowledge.
Hoffman, Paul; Lambon Ralph, Matthew A
2013-01-01
Contemporary neuroscience theories assume that concepts are formed through experience in multiple sensory-motor modalities. Quantifying the contribution of each modality to different object categories is critical to understanding the structure of the conceptual system and to explaining category-specific knowledge deficits. Verbal feature listing is typically used to elicit this information but has a number of drawbacks: sensory knowledge often cannot easily be translated into verbal features and many features are experienced in multiple modalities. Here, we employed a more direct approach in which subjects rated their knowledge of objects in each sensory-motor modality separately. Compared with these ratings, feature listing over-estimated the importance of visual form and functional knowledge and under-estimated the contributions of other sensory channels. An item's sensory rating proved to be a better predictor of lexical-semantic processing speed than the number of features it possessed, suggesting that ratings better capture the overall quantity of sensory information associated with a concept. Finally, the richer, multi-modal rating data not only replicated the sensory-functional distinction between animals and non-living things but also revealed novel distinctions between different types of artefact. Hierarchical cluster analyses indicated that mechanical devices (e.g., vehicles) were distinct from other non-living objects because they had strong sound and motion characteristics, making them more similar to animals in this respect. Taken together, the ratings align with neuroscience evidence in suggesting that a number of distinct sensory processing channels make important contributions to object knowledge. Multi-modal ratings for 160 objects are provided as supplementary materials. Copyright © 2012 Elsevier Ltd. All rights reserved.
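A small sketch of the hierarchical cluster analysis step mentioned above, applied to made-up multi-modal ratings. The real study rated 160 objects across several sensory-motor channels; the items, channels, and numbers below are purely illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical mean ratings (0-7) in five sensory-motor channels:
# [visual form, sound, motion, tactile/action, smell/taste]
items = {
    "dog":     [6.5, 6.0, 6.2, 4.0, 3.0],
    "sparrow": [6.0, 5.5, 6.0, 1.5, 0.5],
    "car":     [6.3, 5.8, 6.4, 4.5, 1.0],   # mechanical device: strong sound and motion
    "hammer":  [5.5, 3.0, 2.5, 6.0, 0.2],
    "apple":   [6.0, 0.5, 0.5, 4.5, 6.0],
}
X = np.array(list(items.values()))
Z = linkage(X, method="ward")                       # agglomerative clustering of rating profiles
labels = fcluster(Z, t=3, criterion="maxclust")     # cut the tree into three clusters
print(dict(zip(items, labels)))
```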
Using consumer-grade devices for multi-imager non-contact imaging photoplethysmography
NASA Astrophysics Data System (ADS)
Blackford, Ethan B.; Estepp, Justin R.
2017-02-01
Imaging photoplethysmography is a technique through which the morphology of the blood volume pulse can be obtained through non-contact video recordings of exposed skin with superficial vasculature. The acceptance of such a convenient modality for use in everyday applications may well depend upon the availability of consumer-grade imagers that facilitate ease of adoption. Multiple imagers have been used previously in concept demonstrations, showing improvements in the quality of the extracted blood volume pulse signal. However, the use of multi-imager sensors requires synchronization of the frame exposures between the individual imagers, a capability that has only recently been available without creating custom solutions. In this work, we consider the use of multiple, commercially available, synchronous imagers for imaging photoplethysmography. A commercially available solution for multi-imager synchronization was analyzed for 21 stationary, seated participants while ground-truth physiological signals were simultaneously measured. A total of three imagers were used, facilitating a comparison between fused data from all three imagers and data from the single, central imager in the array. The within-subjects design included analyses of pulse rate and pulse signal-to-noise ratio. Using the fused data from the triple-imager array, the mean absolute error in pulse rate measurement was reduced to 3.8 beats per minute, compared to 7.4 beats per minute with the single imager. While this represents an overall improvement in the multi-imager case, it is also noted that these errors are substantially higher than those obtained in comparable studies. We further discuss these results and their implications for using readily available commercial imaging solutions for imaging photoplethysmography applications.
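To make the single-imager versus fused-imager comparison concrete, here is a toy sketch in which three synthetic skin-channel traces with different noise levels are combined by simple averaging and the pulse rate is read from the Welch power spectrum. The frame rate, noise model, and fusion-by-averaging are assumptions, much simpler than the blood volume pulse extraction used in the study.

```python
import numpy as np
from scipy.signal import welch, detrend

fs = 30.0                         # assumed frame rate (Hz)
t = np.arange(0, 60, 1 / fs)      # one minute of video
true_hr = 72 / 60.0               # 72 beats per minute

def imager_trace(noise_level):
    """Synthetic green-channel trace from one imager: pulse sinusoid plus sensor noise."""
    rng = np.random.default_rng(int(noise_level * 100))
    return np.sin(2 * np.pi * true_hr * t) + noise_level * rng.normal(size=t.size)

traces = [imager_trace(s) for s in (1.5, 1.8, 2.0)]   # three imagers, different noise levels

def pulse_rate_bpm(sig):
    f, p = welch(detrend(sig), fs=fs, nperseg=512)
    band = (f >= 0.7) & (f <= 4.0)                    # plausible pulse-rate band
    return 60.0 * f[band][np.argmax(p[band])]

print("single imager estimate (bpm):", pulse_rate_bpm(traces[1]))
print("fused (mean of imagers) estimate (bpm):", pulse_rate_bpm(np.mean(traces, axis=0)))
```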
A novel multimodal optical imaging system for early detection of oral cancer
Malik, Bilal H.; Jabbour, Joey M.; Cheng, Shuna; Cuenca, Rodrigo; Cheng, Yi-Shing Lisa; Wright, John M.; Jo, Javier A.; Maitland, Kristen C.
2015-01-01
Objectives Several imaging techniques have been advocated as clinical adjuncts to improve identification of suspicious oral lesions. However, these have not yet shown superior sensitivity or specificity over conventional oral examination techniques. We developed a multimodal, multi-scale optical imaging system that combines macroscopic biochemical imaging by fluorescence lifetime imaging (FLIM) with subcellular morphologic imaging by reflectance confocal microscopy (RCM) for early detection of oral cancer. We tested our system on excised human oral tissues. Study Design A total of four tissue specimens were imaged. The specimens were diagnosed, one each, as clinically normal, oral lichen planus, gingival hyperplasia, and superficially invasive squamous cell carcinoma (SCC). The optical and fluorescence lifetime properties of each specimen were recorded. Results Both quantitative and qualitative differences between normal, benign and SCC lesions can be resolved with FLIM-RCM imaging. The results demonstrate that an integrated approach based on these two methods can potentially enable rapid screening and evaluation of large areas of oral epithelial tissue. Conclusions Early results from ongoing studies of imaging the human oral cavity illustrate the synergistic combination of the two modalities. An adjunct device based on such optical characterization of oral mucosa could potentially be used to detect oral carcinogenesis in early stages. PMID:26725720
Forbes, Ruaridh; Makhija, Varun; Veyrinas, Kévin; Stolow, Albert; Lee, Jason W L; Burt, Michael; Brouard, Mark; Vallance, Claire; Wilkinson, Iain; Lausten, Rune; Hockett, Paul
2017-07-07
The Pixel-Imaging Mass Spectrometry (PImMS) camera allows for 3D charged particle imaging measurements, in which the particle time-of-flight is recorded along with (x, y) position. Coupling the PImMS camera to an ultrafast pump-probe velocity-map imaging spectroscopy apparatus therefore provides a route to time-resolved multi-mass ion imaging, with both high count rates and large dynamic range, thus allowing for rapid measurements of complex photofragmentation dynamics. Furthermore, the use of vacuum ultraviolet wavelengths for the probe pulse allows for an enhanced observation window for the study of excited state molecular dynamics in small polyatomic molecules having relatively high ionization potentials. Herein, preliminary time-resolved multi-mass imaging results from C2F3I photolysis are presented. The experiments utilized femtosecond VUV and UV (160.8 nm and 267 nm) pump and probe laser pulses in order to demonstrate and explore this new time-resolved experimental ion imaging configuration. The data indicate the depth and power of this measurement modality, with a range of photofragments readily observed, and many indications of complex underlying wavepacket dynamics on the excited state(s) prepared.
Clinical Utility and Future Applications of PET/CT and PET/CMR in Cardiology
Pan, Jonathan A.; Salerno, Michael
2016-01-01
Over the past several years, there have been major advances in cardiovascular positron emission tomography (PET) in combination with either computed tomography (CT) or, more recently, cardiovascular magnetic resonance (CMR). These multi-modality approaches have significant potential to leverage the strengths of each modality to improve the characterization of a variety of cardiovascular diseases and to predict clinical outcomes. This review will discuss current developments and potential future uses of PET/CT and PET/CMR for cardiovascular applications, which promise to add significant incremental benefits to the data provided by each modality alone. PMID:27598207
Unsupervised Segmentation of Head Tissues from Multi-modal MR Images for EEG Source Localization.
Mahmood, Qaiser; Chodorowski, Artur; Mehnert, Andrew; Gellermann, Johanna; Persson, Mikael
2015-08-01
In this paper, we present and evaluate an automatic unsupervised segmentation method, hierarchical segmentation approach (HSA)-Bayesian-based adaptive mean shift (BAMS), for use in the construction of a patient-specific head conductivity model for electroencephalography (EEG) source localization. It is based on a HSA and BAMS for segmenting the tissues from multi-modal magnetic resonance (MR) head images. The evaluation of the proposed method was done both directly in terms of segmentation accuracy and indirectly in terms of source localization accuracy. The direct evaluation was performed relative to a commonly used reference method brain extraction tool (BET)-FMRIB's automated segmentation tool (FAST) and four variants of the HSA using both synthetic data and real data from ten subjects. The synthetic data includes multiple realizations of four different noise levels and several realizations of typical noise with a 20% bias field level. The Dice index and Hausdorff distance were used to measure the segmentation accuracy. The indirect evaluation was performed relative to the reference method BET-FAST using synthetic two-dimensional (2D) multimodal magnetic resonance (MR) data with 3% noise and synthetic EEG (generated for a prescribed source). The source localization accuracy was determined in terms of localization error and relative error of potential. The experimental results demonstrate the efficacy of HSA-BAMS, its robustness to noise and the bias field, and that it provides better segmentation accuracy than the reference method and variants of the HSA. They also show that it leads to a more accurate localization accuracy than the commonly used reference method and suggest that it has potential as a surrogate for expert manual segmentation for the EEG source localization problem.
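The evaluation metrics named above (Dice index and Hausdorff distance) are straightforward to compute; a minimal sketch with NumPy and SciPy follows, using toy 2D masks in place of segmented head tissues.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(mask_a, mask_b):
    """Symmetric Hausdorff distance between the foreground voxel sets of two masks."""
    pa, pb = np.argwhere(mask_a), np.argwhere(mask_b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# Toy 2D masks standing in for a segmented tissue slice vs. its reference
ref = np.zeros((64, 64), bool); ref[20:44, 20:44] = True
seg = np.zeros((64, 64), bool); seg[22:46, 21:45] = True
print("Dice:", dice(seg, ref), "Hausdorff:", hausdorff(seg, ref))
```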
Neitzel, Julia; Nuttall, Rachel; Sorg, Christian
2018-01-01
Previous animal research suggests that the spread of pathological agents in Alzheimer’s disease (AD) follows the direction of signaling pathways. Specifically, tau pathology has been suggested to propagate in an infection-like mode along axons, from transentorhinal cortices to medial temporal lobe cortices and consequently to other cortical regions, while amyloid-beta (Aβ) pathology seems to spread in an activity-dependent manner among and from isocortical regions into limbic and then subcortical regions. These directed connectivity-based spread models, however, have not been tested directly in AD patients due to the lack of an in vivo method to identify directed connectivity in humans. Recently, a new method—metabolic connectivity mapping (MCM)—has been developed and validated in healthy participants that uses simultaneous FDG-PET and resting-state fMRI data acquisition to identify directed intrinsic effective connectivity (EC). To this end, postsynaptic energy consumption (FDG-PET) is used to identify regions with afferent input from other functionally connected brain regions (resting-state fMRI). Here, we discuss how this multi-modal imaging approach allows quantitative, whole-brain mapping of signaling direction in AD patients, thereby pointing out some of the advantages it offers compared to other EC methods (i.e., Granger causality, dynamic causal modeling, Bayesian networks). Most importantly, MCM provides the basis on which models of pathology spread, derived from animal studies, can be tested in AD patients. In particular, future work should investigate whether tau and Aβ in humans propagate along the trajectories of directed connectivity in order to advance our understanding of the neuropathological mechanisms causing disease progression. PMID:29434570
A small field of view camera for hybrid gamma and optical imaging
NASA Astrophysics Data System (ADS)
Lees, J. E.; Bugby, S. L.; Bhatia, B. S.; Jambi, L. K.; Alqahtani, M. S.; McKnight, W. R.; Ng, A. H.; Perkins, A. C.
2014-12-01
The development of compact low profile gamma-ray detectors has allowed the production of small field of view, hand held imaging devices for use at the patient bedside and in operating theatres. The combination of an optical and a gamma camera, in a co-aligned configuration, offers high spatial resolution multi-modal imaging giving a superimposed scintigraphic and optical image. This innovative introduction of hybrid imaging offers new possibilities for assisting surgeons in localising the site of uptake in procedures such as sentinel node detection. Recent improvements to the camera system along with results of phantom and clinical imaging are reported.
NASA Astrophysics Data System (ADS)
Wei, Qingyang; Wang, Shi; Ma, Tianyu; Wu, Jing; Liu, Hui; Xu, Tianpeng; Xia, Yan; Fan, Peng; Lyu, Zhenlei; Liu, Yaqiang
2015-06-01
PET, SPECT and CT imaging techniques are widely used in preclinical small animal imaging applications. In this paper, we present a compact small animal PET/SPECT/CT tri-modality system. A dual-functional, shared detector design is implemented which enables PET and SPECT imaging with the same LYSO ring detector. A multi-pinhole collimator is mounted on the system and inserted into the detector ring in SPECT imaging mode. A cone-beam CT consisting of a micro-focus X-ray tube and a CMOS detector is implemented. The detailed design and the performance evaluations are reported in this paper. In PET imaging mode, the measured NEMA-based spatial resolution is 2.12 mm (FWHM), and the sensitivity at the central field of view (CFOV) is 3.2%. The FOV size is 50 mm (∅)×100 mm (L). The SPECT has a spatial resolution of 1.32 mm (FWHM) and an average sensitivity of 0.031% at the axial center, and a 30 mm (∅)×90 mm (L) FOV. The CT spatial resolution is 8.32 lp/mm @10% MTF, and the contrast discrimination function value is 2.06% with a 1.5 mm cubic box object. In conclusion, a compact, tri-modality PET/SPECT/CT system was successfully built with low cost and high performance.
Estimating Missing Features to Improve Multimedia Information Retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bagherjeiran, A; Love, N S; Kamath, C
Retrieval in a multimedia database usually involves combining information from different modalities of data, such as text and images. However, all modalities of the data may not be available to form the query. The retrieval results from such a partial query are often less than satisfactory. In this paper, we present an approach to complete a partial query by estimating the missing features in the query. Our experiments with a database of images and their associated captions show that, with an initial text-only query, our completion method has similar performance to a full query with both image and text features. In addition, when we use relevance feedback, our approach outperforms the results obtained using a full query.
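A generic sketch of completing a partial query by estimating the missing features, loosely following the idea above: a text-to-image-feature mapping is learned from the database and used to fill in the image part of a text-only query before nearest-neighbour retrieval. The linear data model, ridge regression, and Euclidean retrieval are assumptions for illustration, not the authors' method.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n, d_text, d_img = 500, 50, 30
text = rng.normal(size=(n, d_text))
A = rng.normal(size=(d_text, d_img))
image = text @ A + 0.3 * rng.normal(size=(n, d_img))     # image features correlated with text

txt_db, txt_q, img_db, img_q = train_test_split(text, image, test_size=0.2, random_state=0)

# Learn a text -> image-feature mapping from the database, then complete a text-only query
estimator = Ridge(alpha=1.0).fit(txt_db, img_db)
img_est = estimator.predict(txt_q)

def retrieve(query_vec, db, k=5):
    """Return indices of the k nearest database items by Euclidean distance."""
    return np.argsort(np.linalg.norm(db - query_vec, axis=1))[:k]

full_db = np.hstack([txt_db, img_db])
completed_query = np.hstack([txt_q[0], img_est[0]])      # text part observed, image part estimated
print("top-5 matches:", retrieve(completed_query, full_db))
```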
Medical Image Retrieval: A Multimodal Approach
Cao, Yu; Steffey, Shawn; He, Jianbiao; Xiao, Degui; Tao, Cui; Chen, Ping; Müller, Henning
2014-01-01
Medical imaging is becoming a vital component of the war on cancer. Tremendous amounts of medical image data are captured and recorded in a digital format during cancer care and cancer research. Facing such an unprecedented volume of image data with heterogeneous image modalities, it is necessary to develop effective and efficient content-based medical image retrieval systems for cancer clinical practice and research. While substantial progress has been made in different areas of content-based image retrieval (CBIR) research, direct applications of existing CBIR techniques to medical images have produced unsatisfactory results because of the unique characteristics of medical images. In this paper, we develop a new multimodal medical image retrieval approach based on recent advances in statistical graphical models and deep learning. Specifically, we first investigate a new extended probabilistic Latent Semantic Analysis model to integrate the visual and textual information from medical images to bridge the semantic gap. We then develop a new deep Boltzmann machine-based multimodal learning model to learn the joint density model from multimodal information in order to derive the missing modality. Experimental results with a large volume of real-world medical images have shown that our new approach is a promising solution for the next-generation medical imaging indexing and retrieval system. PMID:26309389
Dennis, Emily L; Babikian, Talin; Alger, Jeffry; Rashid, Faisal; Villalon-Reina, Julio E; Jin, Yan; Olsen, Alexander; Mink, Richard; Babbitt, Christopher; Johnson, Jeffrey; Giza, Christopher C; Thompson, Paul M; Asarnow, Robert F
2018-05-10
Traumatic brain injury can cause extensive damage to the white matter (WM) of the brain. These disruptions can be especially damaging in children, whose brains are still maturing. Diffusion magnetic resonance imaging (dMRI) is the most commonly used method to assess WM organization, but it has limited resolution to differentiate causes of WM disruption. Magnetic resonance spectroscopy (MRS) yields spectra showing the levels of neurometabolites that can indicate neuronal/axonal health, inflammation, membrane proliferation/turnover, and other cellular processes that are on-going post-injury. Previous analyses on this dataset revealed a significant division within the msTBI patient group, based on interhemispheric transfer time (IHTT); one subgroup of patients (TBI-normal) showed evidence of recovery over time, while the other showed continuing degeneration (TBI-slow). We combined dMRI with MRS to better understand WM disruptions in children with moderate-severe traumatic brain injury (msTBI). Tracts with poorer WM organization, as shown by lower FA and higher MD and RD, also showed lower N-acetylaspartate (NAA), a marker of neuronal and axonal health and myelination. We did not find lower NAA in tracts with normal WM organization. Choline, a marker of inflammation, membrane turnover, or gliosis, did not show such associations. We further show that multi-modal imaging can improve outcome prediction over a single modality, as well as over earlier cognitive function measures. Our results suggest that demyelination plays an important role in WM disruption post-injury in a subgroup of msTBI children and indicate the utility of multi-modal imaging. © 2018 Wiley Periodicals, Inc.
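As a schematic of the "multi-modal imaging improves outcome prediction over a single modality" claim above, the sketch below compares cross-validated outcome prediction from dMRI-like features, MRS-like features, and their combination. The synthetic data and ridge model are placeholders, not the study's analysis.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
n = 80
fa = rng.normal(size=(n, 5))            # dMRI summaries per tract (e.g. FA/MD/RD)
naa = rng.normal(size=(n, 5))           # MRS metabolite levels per region (e.g. NAA)
outcome = fa @ rng.normal(size=5) + naa @ rng.normal(size=5) + rng.normal(size=n)

for name, X in [("dMRI only", fa), ("MRS only", naa), ("dMRI + MRS", np.hstack([fa, naa]))]:
    r2 = cross_val_score(Ridge(), X, outcome, cv=5, scoring="r2")
    print(f"{name}: mean cross-validated R^2 = {r2.mean():.2f}")
```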
2014-01-01
Background: Diabetes, a highly prevalent, chronic disease, is associated with increasing frailty and functional decline in older people, with concomitant personal, social, and public health implications. We describe the rationale and methods of the multi-modal intervention in diabetes in frailty (MID-Frail) study. Methods/Design: The MID-Frail study is an open, randomised, multicentre study, with random allocation by clusters (each trial site) to a usual care group or an intervention group. A total of 1,718 subjects will be randomised with each site enrolling on average 14 or 15 subjects. The primary objective of the study is to evaluate, in comparison with usual clinical practice, the effectiveness of a multi-modal intervention (specific clinical targets, education, diet, and resistance training exercise) in frail and pre-frail subjects aged ≥70 years with type 2 diabetes in terms of the difference in function 2 years post-randomisation. Difference in function will be measured by changes in a summary ordinal score on the short physical performance battery (SPPB) of at least one point. Secondary outcomes include daily activities, economic evaluation, and quality of life. Discussion: The MID-Frail study will provide evidence on the clinical, functional, social, and economic impact of a multi-modal approach in frail and pre-frail older people with type 2 diabetes. Trial registration: ClinicalTrials.gov: NCT01654341. PMID:24456998
Rodríguez-Mañas, Leocadio; Bayer, Antony J; Kelly, Mark; Zeyfang, Andrej; Izquierdo, Mikel; Laosa, Olga; Hardman, Timothy C; Sinclair, Alan J; Moreira, Severina; Cook, Justin
2014-01-24
Diabetes, a highly prevalent, chronic disease, is associated with increasing frailty and functional decline in older people, with concomitant personal, social, and public health implications. We describe the rationale and methods of the multi-modal intervention in diabetes in frailty (MID-Frail) study. The MID-Frail study is an open, randomised, multicentre study, with random allocation by clusters (each trial site) to a usual care group or an intervention group. A total of 1,718 subjects will be randomised with each site enrolling on average 14 or 15 subjects. The primary objective of the study is to evaluate, in comparison with usual clinical practice, the effectiveness of a multi-modal intervention (specific clinical targets, education, diet, and resistance training exercise) in frail and pre-frail subjects aged ≥70 years with type 2 diabetes in terms of the difference in function 2 years post-randomisation. Difference in function will be measured by changes in a summary ordinal score on the short physical performance battery (SPPB) of at least one point. Secondary outcomes include daily activities, economic evaluation, and quality of life. The MID-Frail study will provide evidence on the clinical, functional, social, and economic impact of a multi-modal approach in frail and pre-frail older people with type 2 diabetes. ClinicalTrials.gov: NCT01654341.
Capsule endoscopy in Crohn’s disease: Are we seeing any better?
Hudesman, David; Mazurek, Jonathan; Swaminath, Arun
2014-01-01
Crohn’s disease (CD) is a complex, immune-mediated disorder that often requires a multi-modality approach for optimal diagnosis and management. While traditional methods include ileocolonoscopy and radiologic modalities, increasingly, capsule endoscopy (CE) has been incorporated into the algorithm for both the diagnosis and monitoring of CD. Multiple studies have examined the utility of this emerging technology in the management of CD, and have compared it to other available modalities. CE offers a noninvasive approach to evaluate areas of the small bowel that are difficult to reach with traditional endoscopy. Furthermore, CE may be favored in specific subgroups of patients with inflammatory bowel disease (IBD), such as those with IBD unclassified (IBD-U), pediatric patients, and patients with CD who have previously undergone surgery. PMID:25278698
Yang, Xin; Liu, Chaoyue; Wang, Zhiwei; Yang, Jun; Min, Hung Le; Wang, Liang; Cheng, Kwang-Ting Tim
2017-12-01
Multi-parameter magnetic resonance imaging (mp-MRI) is increasingly popular for prostate cancer (PCa) detection and diagnosis. However, interpreting mp-MRI data, which typically contain multiple unregistered 3D sequences, e.g. apparent diffusion coefficient (ADC) and T2-weighted (T2w) images, is time-consuming and demands special expertise, limiting its usage for large-scale PCa screening. Therefore, solutions to computer-aided detection of PCa in mp-MRI images are highly desirable. Most recent advances in automated methods for PCa detection employ a handcrafted-feature-based two-stage classification flow, i.e. voxel-level classification followed by a region-level classification. This work presents an automated PCa detection system which can concurrently identify the presence of PCa in an image and localize lesions based on deep convolutional neural network (CNN) features and a single-stage SVM classifier. Specifically, the developed co-trained CNNs consist of two parallel convolutional networks for ADC and T2w images, respectively. Each network is trained using images of a single modality in a weakly-supervised manner by providing a set of prostate images with image-level labels indicating only the presence of PCa without priors of lesions' locations. Discriminative visual patterns of lesions can be learned effectively from clutters of prostate and surrounding tissues. A cancer response map with each pixel indicating the likelihood to be cancerous is explicitly generated at the last convolutional layer of the network for each modality. A new back-propagated error E is defined to enforce both optimized classification results and consistent cancer response maps for different modalities, which help capture highly representative PCa-relevant features during the CNN feature learning process. The CNN features of each modality are concatenated and fed into an SVM classifier. For images which are classified to contain cancers, non-maximum suppression and adaptive thresholding are applied to the corresponding cancer response maps for PCa foci localization. Evaluation based on data from 160 patients with 12-core systematic TRUS-guided prostate biopsy as the reference standard demonstrates that our system achieves a sensitivity of 0.46, 0.92 and 0.97 at 0.1, 1 and 10 false positives per normal/benign patient, which is significantly superior to two state-of-the-art CNN-based methods (Oquab et al., 2015; Zhou et al., 2015) and 6-core systematic prostate biopsies. Copyright © 2017 Elsevier B.V. All rights reserved.
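As a rough structural sketch of the pipeline described above, the code below builds two small parallel CNN branches (one per modality), concatenates their per-image features, and trains an SVM on top. Layer sizes, patch sizes and the training procedure, including the consistency term on the cancer response maps, are placeholders rather than the authors' settings.

    # Two-branch CNN (ADC + T2w) with concatenated features fed to an SVM.
    import torch
    import torch.nn as nn
    from sklearn.svm import SVC

    class Branch(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 1, 1))            # 1-channel "cancer response map"
        def forward(self, x):
            cmap = self.net(x)
            return cmap, cmap.flatten(1)        # map + per-image feature vector

    adc_net, t2w_net = Branch(), Branch()
    adc = torch.randn(8, 1, 64, 64)             # toy ADC patches
    t2w = torch.randn(8, 1, 64, 64)             # toy T2w patches
    labels = torch.tensor([0, 1] * 4)           # image-level PCa labels

    with torch.no_grad():
        _, f_adc = adc_net(adc)
        _, f_t2w = t2w_net(t2w)
    features = torch.cat([f_adc, f_t2w], dim=1).numpy()

    svm = SVC(kernel="rbf").fit(features, labels.numpy())
    print("training accuracy:", svm.score(features, labels.numpy()))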
Fiot, Jean-Baptiste; Cohen, Laurent D; Raniga, Parnesh; Fripp, Jurgen
2013-09-01
Support vector machines (SVM) are machine learning techniques that have been used for segmentation and classification of medical images, including segmentation of white matter hyper-intensities (WMH). Current approaches using SVM for WMH segmentation extract features from the brain and classify these, followed by complex post-processing steps to remove false positives. The method presented in this paper combines advanced pre-processing, tissue-based feature selection and SVM classification to obtain efficient and accurate WMH segmentation. Features from 125 patients, generated from up to four MR modalities [T1-w, T2-w, proton-density and fluid-attenuated inversion recovery (FLAIR)], differing neighbourhood sizes and the use of multi-scale features were compared. We found that although using all four modalities gave the best overall classification (average Dice scores of 0.54 ± 0.12, 0.72 ± 0.06 and 0.82 ± 0.06 for small, moderate and severe lesion loads, respectively), this was not significantly different (p = 0.50) from using just the T1-w and FLAIR sequences (Dice scores of 0.52 ± 0.13, 0.71 ± 0.08 and 0.81 ± 0.07). Furthermore, there was a negligible difference between using 5 × 5 × 5 and 3 × 3 × 3 features (p = 0.93). Finally, we show that careful consideration of features and pre-processing techniques not only saves storage space and computation time but also leads to more efficient classification, which outperforms the one based on all features with post-processing. Copyright © 2013 John Wiley & Sons, Ltd.
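The classification step above amounts to labelling each voxel from a small multi-modality feature vector; the sketch below does this with a linear SVM and reports a Dice score on synthetic voxels, which stand in for T1-w/FLAIR neighbourhood features rather than the study's data.

    # Voxelwise WMH classification from multi-modality features, with Dice score.
    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(2)
    y = rng.integers(0, 2, 2000)                        # 1 = lesion voxel
    X = rng.normal(0, 1, (2000, 6)) + y[:, None] * 1.5  # lesion voxels shifted

    clf = LinearSVC().fit(X[:1500], y[:1500])
    pred, truth = clf.predict(X[1500:]), y[1500:]
    dice = 2 * np.sum((pred == 1) & (truth == 1)) / (np.sum(pred == 1) + np.sum(truth == 1))
    print(f"Dice on held-out voxels: {dice:.2f}")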
Leferink, Anne M; Reis, Diogo Santos; van Blitterswijk, Clemens A; Moroni, Lorenzo
2018-04-11
When tissue engineering strategies rely on the combination of three-dimensional (3D) polymeric or ceramic scaffolds with cells to culture implantable tissue constructs in vitro, it is desirable to monitor tissue growth and cell fate to be able to more rationally predict the quality and success of the construct upon implantation. Such a 3D construct is often referred to as a 'black box' since the properties of the scaffold material limit the applicability of most imaging modalities to assess important construct parameters. These parameters include the number of cells, the amount and type of tissue formed and the distribution of cells and tissue throughout the construct. Immunolabeling enables the spatial and temporal identification of multiple tissue types within one scaffold without the need to sacrifice the construct. In this report, we concisely review the applicability of antibodies (Abs) and their conjugation chemistries in tissue-engineered constructs. With some preliminary experiments, we show an efficient conjugation strategy to couple extracellular matrix Abs to fluorophores. The conjugated probes proved to be effective in determining the presence of collagen type I and type II on electrospun and additive manufactured 3D scaffolds seeded with adult human bone marrow-derived mesenchymal stromal cells. The conjugation chemistry applied in our proof of concept study is expected to be applicable in the coupling of any other fluorophore or particle to the Abs. This could ultimately lead to a library of probes to permit high-contrast imaging by several imaging modalities.
Wu, Dan; Chang, Linda; Akazawa, Kentaro; Oishi, Kumiko; Skranes, Jon; Ernst, Thomas; Oishi, Kenichi
2017-01-01
Preterm birth adversely affects postnatal brain development. In order to investigate the critical gestational age at birth (GAB) that alters the developmental trajectory of gray and white matter structures in the brain, we investigated diffusion tensor and quantitative T2 mapping data in 43 term-born and 43 preterm-born infants. A novel multivariate linear model, the change point model, was applied to detect change points in fractional anisotropy, mean diffusivity, and T2 relaxation time. Change points captured the “critical” GAB value associated with a change in the linear relation between GAB and MRI measures. The analysis was performed in 126 regions across the whole brain using an atlas-based image quantification approach to investigate the spatial pattern of the critical GAB. Our results demonstrate that the critical GABs are region- and modality-specific, generally following a central-to-peripheral and bottom-to-top order of structural development. This study may offer unique insights into the postnatal neurological development associated with differential degrees of preterm birth. PMID:28111189
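The change point idea can be illustrated with a univariate simplification: scan candidate break points and keep the one whose piecewise-linear (hinge) fit has the lowest residual sum of squares. The sketch below uses synthetic GAB and FA values and is not the multivariate model used in the study.

    # Univariate change-point scan for a piecewise-linear GAB vs FA relation.
    import numpy as np

    rng = np.random.default_rng(3)
    gab = np.sort(rng.uniform(26, 42, 86))              # gestational age at birth (weeks)
    fa = 0.30 + 0.02 * np.minimum(gab - 34.0, 0) + rng.normal(0, 0.01, gab.size)

    def hinge_rss(x, y, cp):
        """Residual sum of squares of a piecewise-linear fit with a break at cp."""
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - cp, 0)])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta
        return float(resid @ resid)

    candidates = np.linspace(28, 40, 121)
    best_cp = min(candidates, key=lambda cp: hinge_rss(gab, fa, cp))
    print(f"estimated change point: {best_cp:.1f} weeks")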
Wu, Dan; Chang, Linda; Akazawa, Kentaro; Oishi, Kumiko; Skranes, Jon; Ernst, Thomas; Oishi, Kenichi
2017-04-01
Preterm birth adversely affects postnatal brain development. In order to investigate the critical gestational age at birth (GAB) that alters the developmental trajectory of gray and white matter structures in the brain, we investigated diffusion tensor and quantitative T2 mapping data in 43 term-born and 43 preterm-born infants. A novel multivariate linear model, the change point model, was applied to detect change points in fractional anisotropy, mean diffusivity, and T2 relaxation time. Change points captured the "critical" GAB value associated with a change in the linear relation between GAB and MRI measures. The analysis was performed in 126 regions across the whole brain using an atlas-based image quantification approach to investigate the spatial pattern of the critical GAB. Our results demonstrate that the critical GABs are region- and modality-specific, generally following a central-to-peripheral and bottom-to-top order of structural development. This study may offer unique insights into the postnatal neurological development associated with differential degrees of preterm birth. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Rapid multi-modality preregistration based on SIFT descriptor.
Chen, Jian; Tian, Jie
2006-01-01
This paper describes the scale invariant feature transform (SIFT) method for rapid preregistration of medical images. The technique originates from Lowe's method, wherein preregistration is achieved by matching corresponding keypoints between two images. The overall computational cost is reduced when the SIFT preregistration method is applied before refined registration, owing to its O(n) computational cost. SIFT features are highly distinctive, invariant to image scaling and rotation, and partially invariant to changes in illumination and contrast, making the method robust and repeatable for coarsely matching two images. We also altered the descriptor so that our method can deal with multi-modality preregistration.
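A generic version of this kind of SIFT-based preregistration can be written with OpenCV as below; the file names are placeholders and the paper's descriptor modification for multi-modality data is not reproduced.

    # SIFT keypoint matching as a coarse preregistration step (OpenCV).
    import cv2
    import numpy as np

    fixed = cv2.imread("fixed.png", cv2.IMREAD_GRAYSCALE)    # placeholder files
    moving = cv2.imread("moving.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp_f, des_f = sift.detectAndCompute(fixed, None)
    kp_m, des_m = sift.detectAndCompute(moving, None)

    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_f, des_m, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe's ratio test

    pts_f = np.float32([kp_f[m.queryIdx].pt for m in good])
    pts_m = np.float32([kp_m[m.trainIdx].pt for m in good])
    M, _ = cv2.estimateAffinePartial2D(pts_m, pts_f, method=cv2.RANSAC)
    prereg = cv2.warpAffine(moving, M, (fixed.shape[1], fixed.shape[0]))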
Voxelwise multivariate analysis of multimodality magnetic resonance imaging.
Naylor, Melissa G; Cardenas, Valerie A; Tosun, Duygu; Schuff, Norbert; Weiner, Michael; Schwartzman, Armin
2014-03-01
Most brain magnetic resonance imaging (MRI) studies concentrate on a single MRI contrast or modality, frequently structural MRI. By performing an integrated analysis of several modalities, such as structural, perfusion-weighted, and diffusion-weighted MRI, new insights may be attained to better understand the underlying processes of brain diseases. We compare two voxelwise approaches: (1) fitting multiple univariate models, one for each outcome and then adjusting for multiple comparisons among the outcomes and (2) fitting a multivariate model. In both cases, adjustment for multiple comparisons is performed over all voxels jointly to account for the search over the brain. The multivariate model is able to account for the multiple comparisons over outcomes without assuming independence because the covariance structure between modalities is estimated. Simulations show that the multivariate approach is more powerful when the outcomes are correlated and, even when the outcomes are independent, the multivariate approach is just as powerful or more powerful when at least two outcomes are dependent on predictors in the model. However, multiple univariate regressions with Bonferroni correction remain a desirable alternative in some circumstances. To illustrate the power of each approach, we analyze a case control study of Alzheimer's disease, in which data from three MRI modalities are available. Copyright © 2013 Wiley Periodicals, Inc.
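At a single voxel, the two strategies compared above can be written down in a few lines: separate univariate fits with a Bonferroni adjustment over the outcomes versus one multivariate test. The sketch below uses synthetic data and omits the joint adjustment over voxels described in the paper.

    # One-voxel toy comparison: univariate OLS + Bonferroni vs a MANOVA test.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.multivariate.manova import MANOVA

    rng = np.random.default_rng(4)
    n = 60
    group = rng.integers(0, 2, n).astype(float)        # e.g. patients vs controls
    struct = 0.5 * group + rng.normal(size=n)          # structural outcome
    perf = 0.4 * group + rng.normal(size=n)            # perfusion outcome
    diffu = 0.3 * group + rng.normal(size=n)           # diffusion outcome

    # (1) multiple univariate models, Bonferroni-corrected over the 3 outcomes
    X = sm.add_constant(group)
    pvals = [sm.OLS(y, X).fit().pvalues[1] for y in (struct, perf, diffu)]
    print("Bonferroni-adjusted p-values:", [min(1.0, 3 * p) for p in pvals])

    # (2) one multivariate model over all outcomes jointly
    df = pd.DataFrame({"group": group, "struct": struct, "perf": perf, "diffu": diffu})
    print(MANOVA.from_formula("struct + perf + diffu ~ group", data=df).mv_test())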
Seifi, Payam; Epel, Boris; Sundramoorthy, Subramanian V.; Mailer, Colin; Halpern, Howard J.
2011-01-01
Purpose: Electron spin-echo (ESE) oxygen imaging is a new and evolving electron paramagnetic resonance (EPR) imaging (EPRI) modality that is useful for physiological in vivo applications, such as EPR oxygen imaging (EPROI), with potential application to imaging of multicentimeter objects as large as human tumors. A present limitation on the size of the object to be imaged at a given resolution is the frequency bandwidth of the system, since the location is encoded as a frequency offset in ESE imaging. The authors’ aim in this study was to demonstrate the object size advantage of the multioffset bandwidth extension technique. Methods: The multiple-stepped Zeeman field offset (or simply multi-B) technique was used for imaging of an 8.5-cm-long phantom containing a narrow single line triaryl methyl compound (trityl) solution at the 250 MHz imaging frequency. The image is compared to a standard single-field ESE image of the same phantom. Results: For the phantom used in this study, transverse relaxation (T2e) electron spin-echo (ESE) images from multi-B acquisition are more uniform, contain less prominent artifacts, and have a better signal to noise ratio (SNR) compared to single-field T2e images. Conclusions: The multi-B method is suitable for imaging of samples whose physical size restricts the applicability of the conventional single-field ESE imaging technique. PMID:21815379
Hilar cholangiocarcinoma: Cross sectional evaluation of disease spectrum
Mahajan, Mangal S; Moorthy, Srikanth; Karumathil, Sreekumar P; Rajeshkannan, R; Pothera, Ramchandran
2015-01-01
Although hilar cholangiocarcinoma is relatively rare, it can be diagnosed on imaging by identifying its typical pattern. In most cases, the tumor appears to be centered on the right or left hepatic duct with involvement of the ipsilateral portal vein, atrophy of hepatic lobe on that side, and invasion of adjacent liver parenchyma. Multi-detector computed tomography (MDCT) and magnetic resonance cholangiopancreatography (MRCP) are commonly used imaging modalities to assess the longitudinal and horizontal spread of tumor. PMID:25969643
Current status of the joint Mayo Clinic-IBM PACS project
NASA Astrophysics Data System (ADS)
Hangiandreou, Nicholas J.; Williamson, Byrn, Jr.; Gehring, Dale G.; Persons, Kenneth R.; Reardon, Frank J.; Salutz, James R.; Felmlee, Joel P.; Loewen, M. D.; Forbes, Glenn S.
1994-05-01
A multi-phase collaboration between Mayo Clinic and IBM-Rochester was undertaken, with the goal of developing a picture archiving and communication system for routine clinical use in the Radiology Department. The initial phase of this project (phase 0) was started in 1988. The current system has been fully integrated into the clinical practice and, to date, over 6.5 million images from 16 imaging modalities have been archived. Phase 3 of this project has recently concluded.
High-intensity focused ultrasound (HIFU) array system for image-guided ablative therapy (IGAT)
NASA Astrophysics Data System (ADS)
Kaczkowski, Peter J.; Keilman, George W.; Cunitz, Bryan W.; Martin, Roy W.; Vaezy, Shahram; Crum, Lawrence A.
2003-06-01
Recent interest in using High Intensity Focused Ultrasound (HIFU) for surgical applications such as hemostasis and tissue necrosis has stimulated the development of image-guided systems for non-invasive HIFU therapy. Seeking an all-ultrasound therapeutic modality, we have developed a clinical HIFU system comprising an integrated applicator that permits precisely registered HIFU therapy delivery and high quality ultrasound imaging using two separate arrays, a multi-channel signal generator and RF amplifier system, and a software program that provides the clinician with a graphical overlay of the ultrasound image and therapeutic protocol controls. Electronic phasing of a 32 element 2 MHz HIFU annular array allows adjusting the focus within the range of about 4 to 12 cm from the face. A central opening in the HIFU transducer permits mounting a commercial medical imaging scanhead (ATL P7-4) that is held in place within a special housing. This mechanical fixture ensures precise coaxial registration between the HIFU transducer and the image plane of the imaging probe. Recent enhancements include development of an acoustic lens using numerical simulations for use with a 5-element array. Our image-guided therapy system is very flexible and enables exploration of a variety of new HIFU therapy delivery and monitoring approaches in the search for safe, effective, and efficient treatment protocols.
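For reference, electronic focusing of an annular array is usually derived from simple geometry: an element at radius r_i must fire earlier than the central element by

    \Delta t_i = \frac{\sqrt{r_i^{2} + z_f^{2}} - z_f}{c}

so that all wavefronts arrive simultaneously at the focal depth z_f, where c is the speed of sound in the medium. This standard relation is an assumption here; the abstract does not state the phasing law actually used in the system.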
Partially coherent lensfree tomographic microscopy
Isikman, Serhan O.; Bishara, Waheb; Ozcan, Aydogan
2012-01-01
Optical sectioning of biological specimens provides detailed volumetric information regarding their internal structure. To provide a complementary approach to existing three-dimensional (3D) microscopy modalities, we have recently demonstrated lensfree optical tomography that offers high-throughput imaging within a compact and simple platform. In this approach, in-line holograms of objects at different angles of partially coherent illumination are recorded using a digital sensor-array, which enables computing pixel super-resolved tomographic images of the specimen. This imaging modality, which forms the focus of this review, offers micrometer-scale 3D resolution over large imaging volumes of, for example, 10–15 mm3, and can be assembled in light weight and compact architectures. Therefore, lensfree optical tomography might be particularly useful for lab-on-a-chip applications as well as for microscopy needs in resource-limited settings. PMID:22193016
Detection of bladder metabolic artifacts in (18)F-FDG PET imaging.
Roman-Jimenez, Geoffrey; Crevoisier, Renaud De; Leseur, Julie; Devillers, Anne; Ospina, Juan David; Simon, Antoine; Terve, Pierre; Acosta, Oscar
2016-04-01
Positron emission tomography using (18)F-fluorodeoxyglucose ((18)F-FDG-PET) is a widely used imaging modality in oncology. It enables significant functional information to be included in analyses of anatomical data provided by other image modalities. Although PET offers high sensitivity in detecting suspected malignant metabolism, (18)F-FDG uptake is not tumor-specific and can also be fixed in surrounding healthy tissue, which may consequently be mistaken as cancerous. PET analyses may be particularly hampered in pelvic cancers by the bladder's physiological uptake potentially obliterating the tumor uptake. In this paper, we propose a novel method for detecting (18)F-FDG bladder artifacts based on a multi-feature double-step classification approach. Using two manually defined seeds (tumor and bladder), the method consists of a semi-automated double-step clustering strategy that simultaneously takes into consideration standard uptake values (SUV) on PET, Hounsfield values on computed tomography (CT), and the distance to the seeds. This method was performed on 52 PET/CT images from patients treated for locally advanced cervical cancer. Manual delineations of the bladder on CT images were used in order to evaluate bladder uptake detection capability. Tumor preservation was evaluated using a manual segmentation of the tumor, with a threshold of 42% of the maximal uptake within the tumor. Robustness was assessed by randomly selecting different initial seeds. The average classification performance was 0.94±0.09 for sensitivity, 0.98±0.01 for specificity, and 0.98±0.01 for accuracy. These results suggest that this method is able to detect most (18)F-FDG bladder metabolism artifacts while preserving tumor uptake, and could thus be used as a pre-processing step for subsequent, artifact-free PET analyses. Copyright © 2016. Published by Elsevier Ltd.
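The double-step idea of combining intensity and spatial information can be illustrated with a generic seed-guided clustering over per-voxel features (SUV, Hounsfield value, distance to each seed). The sketch below runs a plain two-cluster k-means on synthetic volumes; the actual thresholds and two-step logic of the method are not reproduced.

    # Seed-guided clustering of PET/CT voxels using SUV, HU and seed distances.
    import numpy as np
    from scipy.ndimage import distance_transform_edt
    from sklearn.cluster import KMeans

    shape = (40, 40, 40)
    rng = np.random.default_rng(5)
    suv = rng.random(shape)                          # toy SUV map
    hu = rng.normal(0, 50, shape)                    # toy CT numbers

    tumor_seed, bladder_seed = (10, 10, 10), (30, 30, 30)   # manual seeds
    def dist_to(seed):
        mask = np.ones(shape, bool)
        mask[seed] = False
        return distance_transform_edt(mask)          # distance from the seed voxel

    feats = np.stack([suv.ravel(), hu.ravel(),
                      dist_to(tumor_seed).ravel(),
                      dist_to(bladder_seed).ravel()], axis=1)
    feats = (feats - feats.mean(0)) / feats.std(0)   # standardize features

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
    candidate_bladder = labels.reshape(shape)        # cluster map to inspect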
Multimodality medical image database for temporal lobe epilepsy
NASA Astrophysics Data System (ADS)
Siadat, Mohammad-Reza; Soltanian-Zadeh, Hamid; Fotouhi, Farshad A.; Elisevich, Kost
2003-05-01
This paper presents the development of a human brain multi-modality database for surgical candidacy determination in temporal lobe epilepsy. The focus of the paper is on content-based image management, navigation and retrieval. Several medical image-processing methods, including our newly developed segmentation method, are utilized for information extraction/correlation and indexing. The input data include T1- and T2-weighted and FLAIR MRI and ictal/interictal SPECT modalities, with associated clinical data and EEG data analysis. The database can answer queries regarding issues such as the correlation between the attribute X of the entity Y and the outcome of temporal lobe epilepsy surgery. The entity Y can be a brain anatomical structure such as the hippocampus. The attribute X can be either a functionality feature of the anatomical structure Y, calculated with SPECT modalities, such as signal average, or a volumetric/morphological feature of the entity Y such as volume or average curvature. The outcome of the surgery can be any surgery assessment such as non-verbal Wechsler memory quotient. A determination is made regarding surgical candidacy by analysis of both textual and image data. The current database system suggests a surgical determination for cases with a relatively small hippocampus and a high average signal intensity on FLAIR images within the hippocampus. This indication matches the neurosurgeons' expectations/observations. Moreover, as the database gets more populated with patient profiles and individual surgical outcomes, one may use data mining methods to discover otherwise hidden correlations between the contents of different data modalities and the outcome of the surgery.
Towards an intelligent framework for multimodal affective data analysis.
Poria, Soujanya; Cambria, Erik; Hussain, Amir; Huang, Guang-Bin
2015-03-01
An increasingly large amount of multimodal content is posted on social media websites such as YouTube and Facebook every day. To cope with the growth of such multimodal data, there is an urgent need to develop an intelligent multi-modal analysis framework that can effectively extract information from multiple modalities. In this paper, we propose a novel multimodal information extraction agent, which infers and aggregates the semantic and affective information associated with user-generated multimodal data in contexts such as e-learning, e-health, automatic video content tagging and human-computer interaction. In particular, the developed intelligent agent adopts an ensemble feature extraction approach by exploiting the joint use of tri-modal (text, audio and video) features to enhance the multimodal information extraction process. In preliminary experiments using the eNTERFACE dataset, our proposed multi-modal system is shown to achieve an accuracy of 87.95%, outperforming the best state-of-the-art system by more than 10%, or in relative terms, a 56% reduction in error rate. Copyright © 2014 Elsevier Ltd. All rights reserved.
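The ensemble/early-fusion idea above reduces, in its simplest form, to concatenating the per-clip feature vectors of the three modalities and training one classifier; the sketch below does exactly that on synthetic placeholders, not the eNTERFACE data or the paper's feature extractors.

    # Early fusion of tri-modal (text, audio, video) features into one classifier.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)
    n = 300
    text, audio, video = (rng.normal(size=(n, d)) for d in (50, 30, 40))
    emotion = rng.integers(0, 6, n)                  # six emotion classes

    fused = np.hstack([text, audio, video])
    clf = LogisticRegression(max_iter=1000)
    print("CV accuracy:", cross_val_score(clf, fused, emotion, cv=5).mean())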
3D widefield light microscope image reconstruction without dyes
NASA Astrophysics Data System (ADS)
Larkin, S.; Larson, J.; Holmes, C.; Vaicik, M.; Turturro, M.; Jurkevich, A.; Sinha, S.; Ezashi, T.; Papavasiliou, G.; Brey, E.; Holmes, T.
2015-03-01
3D image reconstruction using light microscope modalities without exogenous contrast agents is proposed and investigated as an approach to produce 3D images of biological samples for live imaging applications. Multimodality and multispectral imaging, used in concert with this 3D optical sectioning approach is also proposed as a way to further produce contrast that could be specific to components in the sample. The methods avoid usage of contrast agents. Contrast agents, such as fluorescent or absorbing dyes, can be toxic to cells or alter cell behavior. Current modes of producing 3D image sets from a light microscope, such as 3D deconvolution algorithms and confocal microscopy generally require contrast agents. Zernike phase contrast (ZPC), transmitted light brightfield (TLB), darkfield microscopy and others can produce contrast without dyes. Some of these modalities have not previously benefitted from 3D image reconstruction algorithms, however. The 3D image reconstruction algorithm is based on an underlying physical model of scattering potential, expressed as the sample's 3D absorption and phase quantities. The algorithm is based upon optimizing an objective function - the I-divergence - while solving for the 3D absorption and phase quantities. Unlike typical deconvolution algorithms, each microscope modality, such as ZPC or TLB, produces two output image sets instead of one. Contrast in the displayed image and 3D renderings is further enabled by treating the multispectral/multimodal data as a feature set in a mathematical formulation that uses the principal component method of statistics.
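For reference, the I-divergence mentioned above is, in its standard (Csiszár) form for nonnegative data g and model prediction ĝ,

    I(g \| \hat g) = \sum_i \Big[ g_i \ln \frac{g_i}{\hat g_i} - g_i + \hat g_i \Big],

minimized here over the unknown 3D absorption and phase quantities; the paper's exact objective may add constraints or regularization terms not shown in this standard form.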
Dual modality virtual colonoscopy workstation: design, implementation, and preliminary evaluation
NASA Astrophysics Data System (ADS)
Chen, Dongqing; Meissner, Michael
2006-03-01
The aim of this study is to develop a virtual colonoscopy (VC) workstation that supports both CT (computed tomography) and MR (magnetic resonance) imaging procedures. The workflow should be optimized and able to take advantage of both image modalities. The technological breakthrough is the real-time volume rendering of spatial-intensity-inhomogeneous MR images to achieve a high-quality 3D endoluminal view. VC aims at visualizing CT or MR tomography images for the detection of colonic polyps and lesions. It is also called CT or MR colonography, depending on the imaging modality that is employed. The published results of large-scale clinical trials demonstrated more than 90% sensitivity for polyp detection for certain CT colonography (CTC) workstations. A drawback of CT colonography is the radiation exposure. MR colonography (MRC) is free from X-ray radiation. It achieved almost 100% specificity for polyp detection in published trials. The better tissue contrast in MR images also allows the accurate diagnosis of inflammatory bowel disease, which is usually difficult in CTC. At present, most VC workstations are designed for CT examination. They are not able to display multi-sequence MR series concurrently in a single application. The automatic correlation between 2D and 3D views is not available due to the difficulty of 3D model building for MR images. This study aims at enhancing a commercial VC product that was successfully used for CTC so that it equally supports the dark-lumen protocol MR procedure.
Review of photoacoustic flow imaging: its current state and its promises
van den Berg, P.J.; Daoudi, K.; Steenbergen, W.
2015-01-01
Flow imaging is an important method for quantification in many medical imaging modalities, with applications ranging from estimating wall shear rate to detecting angiogenesis. Modalities like ultrasound and optical coherence tomography both offer flow imaging capabilities, but suffer from low contrast to red blood cells and are sensitive to clutter artefacts. Photoacoustic imaging (PAI) is a relatively new field, with a recent interest in flow imaging. The recent enthusiasm for PA flow imaging is due to its intrinsic contrast to haemoglobin, which offers a new spin on existing methods of flow imaging, and some unique approaches in addition. This review article will delve into the research on photoacoustic flow imaging, explain the principles behind the many techniques and comment on their individual advantages and disadvantages. PMID:26640771
Review of photoacoustic flow imaging: its current state and its promises.
van den Berg, P J; Daoudi, K; Steenbergen, W
2015-09-01
Flow imaging is an important method for quantification in many medical imaging modalities, with applications ranging from estimating wall shear rate to detecting angiogenesis. Modalities like ultrasound and optical coherence tomography both offer flow imaging capabilities, but suffer from low contrast to red blood cells and are sensitive to clutter artefacts. Photoacoustic imaging (PAI) is a relatively new field, with a recent interest in flow imaging. The recent enthusiasm for PA flow imaging is due to its intrinsic contrast to haemoglobin, which offers a new spin on existing methods of flow imaging, and some unique approaches in addition. This review article will delve into the research on photoacoustic flow imaging, explain the principles behind the many techniques and comment on their individual advantages and disadvantages.
ERIC Educational Resources Information Center
Donnelly, Debra J.
2018-01-01
Traditional privileging of the printed text has been considerably eroded by rapid technological advancement and in Australia, as elsewhere, many History teaching programs feature an array of multi-modal historical representations. Research suggests that engagement with the visual and multi-modal constructs has the potential to enrich the pedagogy…
Laser-driven x-ray and neutron source development for industrial applications of plasma accelerators
NASA Astrophysics Data System (ADS)
Brenner, C. M.; Mirfayzi, S. R.; Rusby, D. R.; Armstrong, C.; Alejo, A.; Wilson, L. A.; Clarke, R.; Ahmed, H.; Butler, N. M. H.; Haddock, D.; Higginson, A.; McClymont, A.; Murphy, C.; Notley, M.; Oliver, P.; Allott, R.; Hernandez-Gomez, C.; Kar, S.; McKenna, P.; Neely, D.
2016-01-01
Pulsed beams of energetic x-rays and neutrons from intense laser interactions with solid foils are promising for applications where bright, small emission area sources, capable of multi-modal delivery are ideal. Possible end users of laser-driven multi-modal sources are those requiring advanced non-destructive inspection techniques in industry sectors of high value commerce such as aerospace, nuclear and advanced manufacturing. We report on experimental work that demonstrates multi-modal operation of high power laser-solid interactions for neutron and x-ray beam generation. Measurements and Monte Carlo radiation transport simulations show that neutron yield is increased by a factor ~2 when a 1 mm copper foil is placed behind a 2 mm lithium foil, compared to using a 2 cm block of lithium only. We explore x-ray generation with a 10 picosecond drive pulse in order to tailor the spectral content for radiography with medium density alloy metals. The impact of using >1 ps pulse duration on laser-accelerated electron beam generation and transport is discussed alongside the optimisation of subsequent bremsstrahlung emission in thin, high atomic number target foils. X-ray spectra are deconvolved from spectrometer measurements and simulation data generated using the GEANT4 Monte Carlo code. We also demonstrate the unique capability of laser-driven x-rays in being able to deliver single pulse high spatial resolution projection imaging of thick metallic objects. Active detector radiographic imaging of industrially relevant sample objects with a 10 ps drive pulse is presented for the first time, demonstrating that features of 200 μm size are resolved when projected at high magnification.
Combined multi-spectrum and orthogonal Laplacianfaces for fast CB-XLCT imaging with single-view data
NASA Astrophysics Data System (ADS)
Zhang, Haibo; Geng, Guohua; Chen, Yanrong; Qu, Xuan; Zhao, Fengjun; Hou, Yuqing; Yi, Huangjian; He, Xiaowei
2017-12-01
Cone-beam X-ray luminescence computed tomography (CB-XLCT) is an attractive hybrid imaging modality, which has the potential to monitor the metabolic processes of nanophosphor-based drugs in vivo. Reconstruction from single-view data is a key issue in CB-XLCT imaging and enables the effective study of dynamic XLCT imaging. However, it suffers from serious ill-posedness in the inverse problem. In this paper, a multi-spectrum strategy is adopted to relieve the ill-posedness of the reconstruction. The strategy is based on the third-order simplified spherical harmonic approximation model. Then, an orthogonal Laplacianfaces-based method is proposed to reduce the large computational burden without degrading the imaging quality. Both simulated data and in vivo experimental data were used to evaluate the efficiency and robustness of the proposed method. The results are satisfactory in terms of both localization and quantitative recovery, with good computational efficiency, indicating that the proposed method is practical and promising for single-view CB-XLCT imaging.
Efficient generation of image chips for training deep learning algorithms
NASA Astrophysics Data System (ADS)
Han, Sanghui; Fafard, Alex; Kerekes, John; Gartley, Michael; Ientilucci, Emmett; Savakis, Andreas; Law, Charles; Parhan, Jason; Turek, Matt; Fieldhouse, Keith; Rovito, Todd
2017-05-01
Training deep convolutional networks for satellite or aerial image analysis often requires a large amount of training data. For a more robust algorithm, training data need to have variations not only in the background and target, but also radiometric variations in the image such as shadowing, illumination changes, atmospheric conditions, and imaging platforms with different collection geometry. Data augmentation is a commonly used approach to generating additional training data. However, this approach is often insufficient in accounting for real world changes in lighting, location or viewpoint outside of the collection geometry. Alternatively, image simulation can be an efficient way to augment training data that incorporates all these variations, such as changing backgrounds, that may be encountered in real data. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is a tool that produces synthetic imagery using a suite of physics-based radiation propagation modules. DIRSIG can simulate images taken from different sensors with variation in collection geometry, spectral response, solar elevation and angle, atmospheric models, target, and background. Simulation of Urban Mobility (SUMO) is a multi-modal traffic simulation tool that explicitly models vehicles that move through a given road network. The output of the SUMO model was incorporated into DIRSIG to generate scenes with moving vehicles. The same approach was used when using helicopters as targets, but with slight modifications. Using the combination of DIRSIG and SUMO, we quickly generated many small images, with the target at the center and different backgrounds. The simulations generated images with vehicles and helicopters as targets, and corresponding images without targets. Using parallel computing, 120,000 training images were generated in about an hour. Some preliminary results show an improvement in the deep learning algorithm when real image training data are augmented with the simulated images, especially when obtaining sufficient real data is particularly challenging.
The Mind Research Network - Mental Illness Neuroscience Discovery Grant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, J.; Calhoun, V.
The scientific and technological programs of the Mind Research Network (MRN) reflect DOE missions in basic science and associated instrumentation, computational modeling, and experimental techniques. MRN's technical goals over the course of this project have been to develop and apply integrated, multi-modality functional imaging techniques derived from a decade of DOE-supported research and technology development.
NASA Astrophysics Data System (ADS)
Adi Aizudin Bin Radin Nasirudin, Radin; Meier, Reinhard; Ahari, Carmen; Sievert, Matti; Fiebich, Martin; Rummeny, Ernst J.; Noël, Peter B.
2011-03-01
Optical imaging (OI) is a relatively new method for detecting active inflammation of the hand joints of patients suffering from rheumatoid arthritis (RA). With the high number of people affected by this disease, especially in Western countries, the availability of OI as an early diagnostic imaging method is clinically highly relevant. In this paper, we present a newly developed in-house OI analysis tool and a clinical evaluation study. Our analysis tool extends the capabilities of existing OI tools. We include many features in the tool, such as region-based image analysis, hyperperfusion curve analysis, and multi-modality image fusion, to aid clinicians in localizing and determining the intensity of inflammation in joints. Additionally, image data management options, such as full PACS/RIS integration, are included. In our clinical study we demonstrate how OI facilitates the detection of active inflammation in rheumatoid arthritis. The preliminary clinical results indicate a sensitivity of 43.5%, a specificity of 80.3%, an accuracy of 65.7%, a positive predictive value of 76.6%, and a negative predictive value of 64.9% in relation to clinical results from MRI. The accuracy of inflammation detection is evidence of the potential of OI as a useful imaging modality for early detection of active inflammation in patients with rheumatoid arthritis. With our in-house tool we extend the usefulness of OI in the clinical arena. Overall, we show that OI is a fast, inexpensive, non-invasive and nonionizing yet highly sensitive and accurate imaging modality.
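The reported sensitivity, specificity, accuracy, PPV and NPV follow from the usual confusion-matrix definitions; the helper below shows the computation with made-up counts, not the study's counts.

    # Standard confusion-matrix diagnostic metrics (illustrative counts only).
    def diagnostic_metrics(tp, fp, tn, fn):
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "accuracy": (tp + tn) / (tp + fp + tn + fn),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    print(diagnostic_metrics(tp=40, fp=12, tn=49, fn=52))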
Multimodal Nonlinear Optical Microscopy
Yue, Shuhua; Slipchenko, Mikhail N.; Cheng, Ji-Xin
2013-01-01
Because each nonlinear optical (NLO) imaging modality is sensitive to specific molecules or structures, multimodal NLO imaging capitalizes on the potential of NLO microscopy for studies of complex biological tissues. The coupling of multiphoton fluorescence, second harmonic generation, and coherent anti-Stokes Raman scattering (CARS) has allowed investigation of a broad range of biological questions concerning lipid metabolism, cancer development, cardiovascular disease, and skin biology. Moreover, recent research shows the great potential of using a CARS microscope as a platform to develop more advanced NLO modalities such as electronic-resonance-enhanced four-wave mixing, stimulated Raman scattering, and pump-probe microscopy. This article reviews the various approaches developed for the realization of multimodal NLO imaging as well as the development of new NLO modalities on a CARS microscope. Applications to various aspects of biological and biomedical research are discussed. PMID:24353747
Harmonization of multi-site diffusion tensor imaging data.
Fortin, Jean-Philippe; Parker, Drew; Tunç, Birkan; Watanabe, Takanori; Elliott, Mark A; Ruparel, Kosha; Roalf, David R; Satterthwaite, Theodore D; Gur, Ruben C; Gur, Raquel E; Schultz, Robert T; Verma, Ragini; Shinohara, Russell T
2017-11-01
Diffusion tensor imaging (DTI) is a well-established magnetic resonance imaging (MRI) technique used for studying microstructural changes in the white matter. As with many other imaging modalities, DTI images suffer from technical between-scanner variation that hinders comparisons of images across imaging sites, scanners and over time. Using fractional anisotropy (FA) and mean diffusivity (MD) maps of 205 healthy participants acquired on two different scanners, we show that the DTI measurements are highly site-specific, highlighting the need to correct for site effects before performing downstream statistical analyses. We first show evidence that combining DTI data from multiple sites, without harmonization, may be counter-productive and negatively impacts the inference. Then, we propose and compare several harmonization approaches for DTI data, and show that ComBat, a popular batch-effect correction tool used in genomics, performs best at modeling and removing the unwanted inter-site variability in FA and MD maps. Using age as a biological phenotype of interest, we show that ComBat both preserves biological variability and removes the unwanted variation introduced by site. Finally, we assess the different harmonization methods in the presence of different levels of confounding between site and age, and additionally test robustness in small-sample-size studies. Copyright © 2017 Elsevier Inc. All rights reserved.
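ComBat estimates site-specific location and scale effects with empirical Bayes shrinkage while protecting covariates of interest. The sketch below shows only a simpler, non-Bayesian location/scale adjustment with age preserved, to convey the shape of the correction; it is not the full ComBat estimator used in the paper.

    # Simplified location/scale site correction for FA features, preserving age.
    import numpy as np

    rng = np.random.default_rng(8)
    n, n_feat = 205, 100
    site = rng.integers(0, 2, n)
    age = rng.uniform(8, 22, n)
    fa = 0.4 + 0.003 * age[:, None] + 0.02 * site[:, None] \
         + rng.normal(0, 0.01, (n, n_feat))

    X = np.column_stack([np.ones(n), age])           # age model (covariate to keep)
    beta = np.linalg.lstsq(X, fa, rcond=None)[0]
    resid = fa - X @ beta                            # remove the age effect

    adj = np.empty_like(resid)
    for s in np.unique(site):                        # standardize within each site
        idx = site == s
        adj[idx] = (resid[idx] - resid[idx].mean(0)) / resid[idx].std(0)
    harmonized = adj * resid.std(0) + X @ beta       # restore pooled scale + age effect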
Information Systems for Subject Specialists: A Multi-Modal Approach to Indexing and Classification.
ERIC Educational Resources Information Center
Swift, D.F.; And Others
A fundamental problem in the two broad approaches to indexing in the social sciences--providing structure using preferred terms, cross references, and groupings of sets of materials, or compiling a concordance of an author's terms based on occurrence, leaving users free to impose their own structure--is that different indexers and users focus on…
Uddin, Muhammad Shahin; Tahtali, Murat; Lambert, Andrew J; Pickering, Mark R; Marchese, Margaret; Stuart, Iain
2016-05-20
Compared with other medical-imaging modalities, ultrasound (US) imaging is a valuable way to examine the body's internal organs, and two-dimensional (2D) imaging is currently the most common technique used in clinical diagnoses. Conventional 2D US imaging systems are highly flexible, cost-effective imaging tools that permit operators to observe and record images of a large variety of thin anatomical sections in real time. Recently, 3D US imaging has also been gaining popularity due to its considerable advantages over 2D US imaging. It reduces dependency on the operator and provides better qualitative and quantitative information for an effective diagnosis. Furthermore, it provides a 3D view, which allows the observation of volume information. The major shortcoming of any type of US imaging is the presence of speckle noise. Hence, speckle reduction is vital in providing a better clinical diagnosis. The key objective of any speckle-reduction algorithm is to attain a speckle-free image while preserving the important anatomical features. In this paper we introduce a nonlinear multi-scale complex wavelet-diffusion based algorithm for speckle reduction and sharp-edge preservation of 2D and 3D US images. In the proposed method we use Rayleigh and Maxwell mixture models for 2D and 3D US images, respectively, where a genetic algorithm is used in combination with an expectation-maximization method to estimate the mixture parameters. Experimental results using both 2D and 3D synthetic, physical phantom, and clinical data demonstrate that our proposed algorithm significantly reduces speckle noise while preserving sharp edges without discernible distortions. The proposed approach performs better than the state-of-the-art approaches in both qualitative and quantitative measures.
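For the 2D speckle model, the expectation-maximization updates of a Rayleigh mixture have a simple closed form; the sketch below fits a two-component mixture to synthetic envelope samples. The genetic-algorithm initialization and the wavelet-diffusion filtering stages of the method are not reproduced.

    # EM for a two-component Rayleigh mixture (2D speckle model).
    import numpy as np

    rng = np.random.default_rng(9)
    x = np.concatenate([rng.rayleigh(1.0, 4000), rng.rayleigh(3.0, 2000)])

    def rayleigh_pdf(x, s2):
        return (x / s2) * np.exp(-x**2 / (2 * s2))

    pi, s2 = np.array([0.5, 0.5]), np.array([0.5, 4.0])     # initial guesses
    for _ in range(100):
        resp = pi * np.stack([rayleigh_pdf(x, s) for s in s2], axis=1)
        resp /= resp.sum(axis=1, keepdims=True)              # E-step
        pi = resp.mean(axis=0)                               # M-step: weights
        s2 = (resp * x[:, None]**2).sum(axis=0) / (2 * resp.sum(axis=0))  # scales

    print("weights:", pi.round(2), "sigma^2:", s2.round(2))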
DOT National Transportation Integrated Search
2014-08-06
This white paper provides a review of research and current practices of integrating economic development goals in metropolitan area transportation planning. The information presented is intended to serve as a technical resource for transportation pla...
Near-common-path interferometer for imaging Fourier-transform spectroscopy in wide-field microscopy
Wadduwage, Dushan N.; Singh, Vijay Raj; Choi, Heejin; Yaqoob, Zahid; Heemskerk, Hans; Matsudaira, Paul; So, Peter T. C.
2017-01-01
Imaging Fourier-transform spectroscopy (IFTS) is a powerful method for biological hyperspectral analysis based on various imaging modalities, such as fluorescence or Raman. Since the measurements are taken in the Fourier space of the spectrum, it can also take advantage of compressed sensing strategies. IFTS has been readily implemented in high-throughput, high-content microscope systems based on wide-field imaging modalities. However, there are limitations in existing wide-field IFTS designs. Non-common-path approaches are less phase-stable. Alternatively, designs based on the common-path Sagnac interferometer are stable, but incompatible with high-throughput imaging. They require exhaustive sequential scanning over large interferometric path delays, making compressive strategic data acquisition impossible. In this paper, we present a novel phase-stable, near-common-path interferometer enabling high-throughput hyperspectral imaging based on strategic data acquisition. Our results suggest that this approach can improve throughput over those of many other wide-field spectral techniques by more than an order of magnitude without compromising phase stability. PMID:29392168
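At a single pixel, IFTS reduces to Fourier-transforming the interferogram recorded as a function of optical path delay; the toy sketch below recovers a synthetic two-line spectrum to within the bin spacing. The sampling parameters are arbitrary, and no compressed-sensing or super-resolution step is included.

    # Recovering a spectrum from a synthetic single-pixel interferogram by FFT.
    import numpy as np

    delays = np.linspace(0, 60e-6, 2048)                 # optical path difference (m)
    lines = np.array([520e-9, 580e-9])                   # two emission wavelengths
    interferogram = sum(1 + np.cos(2 * np.pi * delays / w) for w in lines)

    spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
    wavenumbers = np.fft.rfftfreq(delays.size, d=delays[1] - delays[0])  # cycles/m
    peaks = wavenumbers[np.argsort(spectrum)[-2:]]
    print("recovered wavelengths (nm):", np.sort(1e9 / peaks))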
Non-rigid multi-frame registration of cell nuclei in live cell fluorescence microscopy image data.
Tektonidis, Marco; Kim, Il-Han; Chen, Yi-Chun M; Eils, Roland; Spector, David L; Rohr, Karl
2015-01-01
The analysis of the motion of subcellular particles in live cell microscopy images is essential for understanding biological processes within cells. For accurate quantification of the particle motion, compensation of the motion and deformation of the cell nucleus is required. We introduce a non-rigid multi-frame registration approach for live cell fluorescence microscopy image data. Compared to existing approaches using pairwise registration, our approach exploits information from multiple consecutive images simultaneously to improve the registration accuracy. We present three intensity-based variants of the multi-frame registration approach and we investigate two different temporal weighting schemes. The approach has been successfully applied to synthetic and live cell microscopy image sequences, and an experimental comparison with non-rigid pairwise registration has been carried out. Copyright © 2014 Elsevier B.V. All rights reserved.
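The pairwise building block of such a registration can be sketched with a generic non-rigid deformation estimator; below, TV-L1 optical flow from scikit-image aligns two toy frames. The multi-frame formulation and temporal weighting schemes described above are not reproduced.

    # Pairwise non-rigid alignment of two frames with TV-L1 optical flow.
    import numpy as np
    from skimage import data, img_as_float
    from skimage.registration import optical_flow_tvl1
    from skimage.transform import warp

    reference = img_as_float(data.camera())[:128, :128]         # stand-in frame t
    moving = np.roll(reference, shift=(3, -2), axis=(0, 1))     # toy "deformed" frame t+1

    v, u = optical_flow_tvl1(reference, moving)                 # row/col displacements
    rows, cols = np.meshgrid(np.arange(128), np.arange(128), indexing="ij")
    registered = warp(moving, np.array([rows + v, cols + u]), mode="edge")

    print("mean abs error before:", float(np.abs(moving - reference).mean()))
    print("mean abs error after: ", float(np.abs(registered - reference).mean()))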
Evidence-based pain management: is the concept of integrative medicine applicable?
2012-01-01
This article is dedicated to the concept of predictive, preventive, and personalized (integrative) medicine beneficial and applicable to advance pain management, overviews recent insights, and discusses novel minimally invasive tools, performed under ultrasound guidance, enhanced by model-guided approach in the field of musculoskeletal pain and neuromuscular diseases. The complexity of pain emergence and regression demands intellectual-, image-guided techniques personally specified to the patient. For personalized approach, the combination of the modalities of ultrasound, EMG, MRI, PET, and SPECT gives new opportunities to experimental and clinical studies. Neuromuscular imaging should be crucial for emergence of studies concerning advanced neuroimaging technologies to predict movement disorders, postural imbalance with integrated application of imaging, and functional modalities for rehabilitation and pain management. Scientific results should initiate evidence-based preventive movement programs in sport medicine rehabilitation. Traditional medicine and mathematical analytical approaches and education challenges are discussed in this review. The physiological management of exactly assessed pathological condition, particularly in movement disorders, requires participative medical approach to gain harmonized and sustainable effect. PMID:23088743
Nanoparticles for cancer imaging: The good, the bad, and the promise
Chapman, Sandra; Dobrovolskaia, Marina; Farahani, Keyvan; Goodwin, Andrew; Joshi, Amit; Lee, Hakho; Meade, Thomas; Pomper, Martin; Ptak, Krzysztof; Rao, Jianghong; Singh, Ravi; Sridhar, Srinivas; Stern, Stephan; Wang, Andrew; Weaver, John B.; Woloschak, Gayle; Yang, Lily
2014-01-01
Summary Recent advances in molecular imaging and nanotechnology are providing new opportunities for biomedical imaging with great promise for the development of novel imaging agents. The unique optical, magnetic, and chemical properties of materials at the scale of nanometers allow the creation of imaging probes with better contrast enhancement, increased sensitivity, controlled biodistribution, better spatial and temporal information, multi-functionality and multi-modal imaging across MRI, PET, SPECT, and ultrasound. These features could ultimately translate to clinical advantages such as earlier detection, real time assessment of disease progression and personalized medicine. However, several years of investigation into the application of these materials to cancer research has revealed challenges that have delayed the successful application of these agents to the field of biomedical imaging. Understanding these challenges is critical to take full advantage of the benefits offered by nano-sized imaging agents. Therefore, this article presents the lessons learned and challenges encountered by a group of leading researchers in this field, and suggests ways forward to develop nanoparticle probes for cancer imaging. Published by Elsevier Ltd. PMID:25419228
Methods to mitigate data truncation artifacts in multi-contrast tomosynthesis image reconstructions
NASA Astrophysics Data System (ADS)
Garrett, John; Ge, Yongshuai; Li, Ke; Chen, Guang-Hong
2015-03-01
Differential phase contrast imaging is a promising new imaging modality that utilizes the refraction rather than the absorption of x-rays to image an object. A Talbot-Lau interferometer may be used to permit differential phase contrast imaging with a conventional medical x-ray source and detector. However, the gratings currently fabricated for these interferometers are often relatively small. As a result, data truncation image artifacts are often observed in a tomographic acquisition and reconstruction. When data are truncated in x-ray absorption imaging, methods have been introduced to mitigate the truncation artifacts. However, the same strategies used to mitigate absorption truncation artifacts may not be appropriate for differential phase contrast or dark-field tomographic imaging. In this work, several new methods to mitigate data truncation artifacts in a multi-contrast imaging system have been proposed and evaluated for tomosynthesis data acquisitions. The proposed methods were validated using experimental data acquired for a bovine udder as well as several cadaver breast specimens using a benchtop system at our facility.
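For the absorption channel, a common generic mitigation is to extrapolate each truncated projection row smoothly to zero before reconstruction; the sketch below uses a cosine taper on a synthetic truncated profile. The phase- and dark-field-specific methods proposed in this work are not reproduced here.

    # Cosine-taper extrapolation of a laterally truncated projection row.
    import numpy as np

    def extend_row(row, pad):
        """Pad one projection row on both sides with a smooth roll-off to zero."""
        taper = 0.5 * (1 + np.cos(np.linspace(0, np.pi, pad)))   # 1 -> 0
        return np.concatenate([row[0] * taper[::-1], row, row[-1] * taper])

    row = np.clip(1.0 - np.linspace(-0.7, 1.3, 256) ** 2, 0, None)  # truncated profile
    extended = extend_row(row, pad=64)
    print(row.shape, "->", extended.shape)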
Henderson, Michael L; Dayhoff, Ruth E; Titton, Csaba P; Casertano, Andrew
2006-01-01
As part of its patient care mission, the U.S. Veterans Health Administration performs diagnostic imaging procedures at 141 medical centers and 850 outpatient clinics. VHA's VistA Imaging Package provides a full archival, display, and communications infrastructure and interfaces to radiology and other HIS modules as well as modalities and a worklist provider. In addition, various medical center entities within VHA have elected to install commercial picture archiving and communications systems to enable image organization and interpretation. To evaluate interfaces between commercial PACS, the VistA hospital information system, and imaging modalities, VHA has built a fully constrained specification that is based on the Radiology Technical Framework (Rad-TF) of Integrating the Healthcare Enterprise (IHE). The Health Level Seven normative conformance mechanism was applied to the IHE Rad-TF and agency requirements to arrive at a baseline set of message specifications. VHA provides a thorough implementation and testing process to promote the adoption of standards-based interoperability by all PACS vendors that want to interface with VistA Imaging.
Multi-Modal Hallucinations and Cognitive Function in Parkinson's Disease
Katzen, Heather; Myerson, Connie; Papapetropoulos, Spiridon; Nahab, Fatta; Gallo, Bruno; Levin, Bonnie
2010-01-01
Background/Aims: Hallucinations have been linked to a constellation of cognitive deficits in Parkinson's disease (PD), but it is not known whether multi-modal hallucinations are associated with greater neuropsychological dysfunction. Methods: 152 idiopathic PD patients were categorized based on the presence or absence of hallucinations and then were further subdivided into visual-only (VHonly; n = 35) or multi-modal (VHplus; n = 12) hallucination groups. All participants underwent detailed neuropsychological assessment. Results: Participants with hallucinations performed more poorly on select neuropsychological measures and exhibited more mood symptoms. There were no differences between VHonly and VHplus groups. Conclusions: PD patients with multi-modal hallucinations are not at greater risk for neuropsychological impairment than those with single-modal hallucinations. PMID:20689283
Targeting of deep-brain structures in nonhuman primates using MR and CT Images
NASA Astrophysics Data System (ADS)
Chen, Antong; Hines, Catherine; Dogdas, Belma; Bone, Ashleigh; Lodge, Kenneth; O'Malley, Stacey; Connolly, Brett; Winkelmann, Christopher T.; Bagchi, Ansuman; Lubbers, Laura S.; Uslaner, Jason M.; Johnson, Colena; Renger, John; Zariwala, Hatim A.
2015-03-01
In vivo gene delivery in the central nervous system of nonhuman primates (NHP) is an important approach for gene therapy and for developing animal models of human disease. To achieve more accurate delivery of genetic probes, precise stereotactic targeting of brain structures is required. However, even with assistance from multi-modality 3D imaging techniques (e.g. MR and CT), precise targeting is often challenging due to difficulties in identifying deep brain structures, e.g. the striatum, which consists of multiple substructures, and the nucleus basalis of Meynert (NBM), which often lack clear boundaries relative to supporting anatomical landmarks. Here we demonstrate a 3D-image-based intracranial stereotactic approach applied toward reproducible intracranial targeting of the bilateral NBM and striatum in rhesus macaques. For the targeting we discuss the feasibility of an atlas-based automatic approach. Delineated originally on a high-resolution 3D histology-MR atlas set, the NBM and the striatum could be located on the MR image of a rhesus subject through affine and nonrigid registrations. The atlas-based targeting of the NBM was compared with targeting conducted manually by an experienced neuroscientist. Based on the targeting, the trajectories and entry points for delivering the genetic probes to the targets could be established on the CT images of the subject after rigid registration. The accuracy of the targeting was assessed quantitatively by comparison between NBM locations obtained automatically and manually, and finally demonstrated qualitatively via post mortem analysis of slices that had been labelled via Evans blue infusion and immunohistochemistry.
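A minimal version of the atlas-to-subject step can be sketched with SimpleITK: an affine, mutual-information-driven registration followed by nearest-neighbour propagation of the atlas labels. The file names are placeholders, and the nonrigid refinement used in the study is omitted.

    # Affine atlas-to-subject registration and label propagation (SimpleITK).
    import SimpleITK as sitk

    subject = sitk.ReadImage("subject_T1.nii.gz", sitk.sitkFloat32)   # placeholder paths
    atlas = sitk.ReadImage("atlas_T1.nii.gz", sitk.sitkFloat32)
    atlas_labels = sitk.ReadImage("atlas_labels.nii.gz")              # NBM / striatum labels

    init = sitk.CenteredTransformInitializer(
        subject, atlas, sitk.AffineTransform(3),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(50)
    reg.SetOptimizerAsRegularStepGradientDescent(2.0, 1e-4, 200)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetInitialTransform(init, inPlace=False)
    transform = reg.Execute(subject, atlas)

    # propagate the atlas labels into subject space (nearest neighbour)
    warped_labels = sitk.Resample(atlas_labels, subject, transform,
                                  sitk.sitkNearestNeighbor, 0)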
Einstein, Andrew J; Lloyd, Steven G; Chaudhry, Farooq A; AlJaroudi, Wael A; Hage, Fadi G
2016-04-01
Multiple novel studies were presented at the 2015 American Heart Association Scientific Sessions, which was considered a successful conference at many levels. In this review, we summarize key studies in nuclear cardiology, cardiac magnetic resonance, echocardiography, and cardiac computed tomography that were presented at the Sessions. We hope that this bird's eye view will keep readers updated on the newest imaging studies presented at the meeting, whether or not they were able to attend.
ERIC Educational Resources Information Center
Chen, Hsin-Liang; Williams, James Patrick
2009-01-01
This project studies the use of multi-modal media objects in an online information literacy class. One hundred sixty-two undergraduate students answered seven surveys. Significant relationships are found among computer skills, teaching materials, communication tools and learning experience. Multi-modal media objects and communication tools are…
Radiological Evaluation of Ambiguous Genitalia with Various Imaging Modalities
NASA Astrophysics Data System (ADS)
Ravi, N.; Bindushree, Kadakola
2012-07-01
Disorders of sex development (DSDs) are congenital conditions in which the development of chromosomal, gonadal, or anatomic sex is atypical. These can be classified broadly into four categories on the basis of gonadal histologic features: female pseudohermaphroditism (46,XX with two ovaries); male pseudohermaphroditism (46,XY with two testes); true hermaphroditism (ovotesticular DSD) (both ovarian and testicular tissues); and gonadal dysgenesis, either mixed (a testis and a streak gonad) or pure (bilateral streak gonads). Imaging plays an important role in demonstrating the anatomy and associated anomalies. Ultrasonography is the primary modality for demonstrating internal organs, and magnetic resonance imaging is used as an adjunct modality to assess the internal gonads and genitalia. Early and appropriate gender assignment is necessary for healthy physical and psychologic development of children with ambiguous genitalia. Gender assignment can be facilitated with a team approach that involves a pediatric endocrinologist, geneticist, urologist, psychiatrist, social worker, neonatologist, nurse, and radiologist, allowing timely diagnosis and proper management. We describe a case series of patients with ambiguous genitalia who presented to our department and were evaluated with multiple imaging modalities.
Construction of Silica-Based Micro/Nanoplatforms for Ultrasound Theranostic Biomedicine.
Zhou, Yang; Han, Xiaoxia; Jing, Xiangxiang; Chen, Yu
2017-09-01
Ultrasound (US)-based biomedicine has been extensively explored for its applications in both diagnostic imaging and disease therapy. The fast development of theranostic nanomedicine significantly promotes the development of US-based biomedicine. This progress report summarizes and discusses the recent developments of rational design and fabrication of silica-based micro/nanoparticles for versatile US-based biomedical applications. The synthetic strategies and surface-engineering approaches of silica-based micro/nanoparticles are initially discussed, followed by detailed introduction on their US-based theranostic applications. They have been extensively explored in contrast-enhanced US imaging, US-based multi-modality imaging, synergistic high-intensity focused US (HIFU) ablation, sonosensitizer-enhanced sonodynamic therapy (SDT), as well as US-triggered chemotherapy. Their biological effects and biosafety have been briefly discussed to guarantee further clinical translation. Based on the high biocompatibility, versatile composition/structure and high performance in US-based theranostic biomedicine, these silica-based theranostic agents are expected to pave a new way for achieving efficient US-based theranostics of disease by taking the specific advantages of material science, nanotechnology and US-based biomedicine. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
SU-G-BRA-01: A Real-Time Tumor Localization and Guidance Platform for Radiotherapy Using US and MRI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bednarz, B; Culberson, W; Bassetti, M
Purpose: To develop and validate a real-time motion management platform for radiotherapy that directly tracks tumor motion using ultrasound and MRI. This will be a cost-effective and non-invasive real-time platform combining the excellent temporal resolution of ultrasound with the excellent soft-tissue contrast of MRI. Methods: A 4D planar ultrasound acquisition is performed during the treatment and coupled to a pre-treatment calibration (training) image set consisting of a simultaneous 4D ultrasound and 4D MRI acquisition. The image sets will be rapidly matched using advanced image and signal processing algorithms, allowing the display of virtual MR images of the tumor/organ motion in real-time from an ultrasound acquisition. Results: The completion of this work will result in several innovations including: a (2D) patch-like, MR and LINAC compatible 4D planar ultrasound transducer that is electronically steerable for hands-free operation to provide real-time virtual MR and ultrasound imaging for motion management during radiation therapy; a multi-modal tumor localization strategy that uses ultrasound and MRI; and fast and accurate image processing algorithms that provide real-time information about the motion and location of the tumor or related soft-tissue structures within the patient. Conclusion: If successful, the proposed approach will provide real-time guidance for radiation therapy without degrading image or treatment plan quality. The approach would be equally suitable for image-guided proton beam or heavy ion-beam therapy. This work is partially funded by NIH grant R01CA190298.
Molecular imaging and the unification of multilevel mechanisms and data in medical physics.
Nikiforidis, George C; Sakellaropoulos, George C; Kagadis, George C
2008-08-01
Molecular imaging (MI) constitutes a recently developed approach to imaging, where modalities and agents have been reinvented and used in novel combinations in order to expose and measure biologic processes occurring at molecular and cellular levels. It is an approach that bridges the gap between modalities acquiring data from high (e.g., computed tomography, magnetic resonance imaging, and positron-emitting isotopes) and low (e.g., PCR, microarrays) levels of biological organization. While data integration methodologies will lead to improved diagnostic and prognostic performance, interdisciplinary collaboration, triggered by MI, will result in a better perception of the underlying biological mechanisms. Toward the development of a unifying theory describing these mechanisms, medical physicists can formulate new hypotheses, provide the physical constraints bounding them, and consequently design appropriate experiments. Their new scientific and working environment calls for interventions in their syllabi to educate scientists with enhanced capabilities for holistic views and synthesis.
Pre-Motor Response Time Benefits in Multi-Modal Displays
2013-11-12
when animals are presented with stimuli from two sensory modalities as compared with stimulation from only one modality. The combinations of two...modality attention and orientation behaviors (see also Wallace, Meredith, & Stein, 1998). Multi-modal stimulation in the world is not always...perceptually when the stimuli are congruent. In another study, Craig (2006) had participants judge the direction of apparent motion by stimulating
Probing consciousness in a sensory-disconnected paralyzed patient.
Rohaut, Benjamin; Raimondo, Federico; Galanaud, Damien; Valente, Mélanie; Sitt, Jacobo Diego; Naccache, Lionel
2017-01-01
Diagnosis of consciousness can be very challenging in some clinical situations such as severe sensory-motor impairments. We report the case study of a patient who presented a total "locked-in syndrome" associated with multi-sensory deafferentation (visual, auditory and tactile modalities) following a protuberantial infarction. In spite of this severe and extreme disconnection from the external world, we could detect reliable evidence of consciousness using a multivariate analysis of his high-density resting state electroencephalogram. This EEG-based diagnosis was eventually confirmed by the clinical evolution of the patient. This approach illustrates the potential importance of functional brain-imaging data to improve the diagnosis of consciousness and of cognitive abilities in critical situations in which the behavioral channel is compromised, such as deafferented locked-in syndrome.
Survey on the use of smart and adaptive engineering systems in medicine.
Abbod, M F; Linkens, D A; Mahfouf, M; Dounias, G
2002-11-01
In this paper, the current published knowledge about smart and adaptive engineering systems in medicine is reviewed. The achievements of frontier research in this particular field within medical engineering are described. A multi-disciplinary approach to the applications of adaptive systems is observed from the literature surveyed. The three modalities of diagnosis, imaging and therapy are considered to be an appropriate classification method for the analysis of smart systems being applied to specified medical sub-disciplines. It is expected that future research in biomedicine should identify subject areas where more advanced intelligent systems could be applied than is currently evident. The literature provides evidence of hybridisation of different types of adaptive and smart systems with applications in different areas of medical specialisation. Copyright 2002 Elsevier Science B.V.
NASA Astrophysics Data System (ADS)
Chiarelli, Antonio Maria; Croce, Pierpaolo; Merla, Arcangelo; Zappasodi, Filippo
2018-06-01
Objective. Brain–computer interface (BCI) refers to procedures that link the central nervous system to a device. BCI was historically performed using electroencephalography (EEG). In recent years, encouraging results were obtained by combining EEG with other neuroimaging technologies, such as functional near infrared spectroscopy (fNIRS). A crucial step of BCI is brain state classification from recorded signal features. Deep artificial neural networks (DNNs) recently reached unprecedented complex classification outcomes. These performances were achieved through increased computational power, efficient learning algorithms, valuable activation functions, and restricted or back-fed neuron connections. Expecting improved overall BCI performance, we investigated the capabilities of combining EEG and fNIRS recordings with state-of-the-art deep learning procedures. Approach. We performed a guided left and right hand motor imagery task on 15 subjects with a fixed classification response time of 1 s and an overall experiment length of 10 min. Left versus right classification accuracy of a DNN in the multi-modal recording modality was estimated and compared to standalone EEG and fNIRS and other classifiers. Main results. At a group level we obtained a significant increase in performance when considering multi-modal recordings and the DNN classifier, with a synergistic effect. Significance. BCI performances can be significantly improved by employing multi-modal recordings that provide electrical and hemodynamic brain activity information, in combination with advanced non-linear deep learning classification procedures.
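For readers who want a concrete picture of the multi-modal classification step, the sketch below concatenates per-trial EEG and fNIRS feature vectors and cross-validates a small neural network; the synthetic arrays, feature dimensions, and the scikit-learn MLP are illustrative stand-ins for the features and DNN used in the study, not a reproduction of it.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical per-trial features: EEG band powers and fNIRS HbO/HbR amplitudes
rng = np.random.default_rng(0)
n_trials = 200
eeg_feats = rng.normal(size=(n_trials, 32))
fnirs_feats = rng.normal(size=(n_trials, 16))
labels = rng.integers(0, 2, size=n_trials)      # left vs right motor imagery

# Multi-modal fusion by feature concatenation, then a small neural classifier
X = np.hstack([eeg_feats, fnirs_feats])
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500))
print("multi-modal CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```

Running the same pipeline on `eeg_feats` or `fnirs_feats` alone gives the standalone baselines against which the multi-modal result is compared.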
Christie, Andrew; Roditi, Giles
2014-01-01
This article reviews the importance of preinterventional cross-sectional imaging in the evaluation of peripheral arterial disease, as well as discussing the pros and cons of each imaging modality. The importance of a multidisciplinary team approach is emphasized. PMID:25435657
Conradsen, Isa; Beniczky, Sandor; Wolf, Peter; Terney, Daniella; Sams, Thomas; Sorensen, Helge B D
2009-01-01
Many epilepsy patients cannot call for help during a seizure, because they are unconscious or because of the affection of their motor system or speech function. This can lead to injuries, medical complications and, at worst, death. An alarm system that triggers at seizure onset could help to avoid such hazards. Today no reliable alarm systems are available. A Multi-modal Intelligent Seizure Acquisition (MISA) system based on full body motion data seems to be a good approach towards the detection of epileptic seizures. The system is the first to provide a full body description for epilepsy applications. Three test subjects were used for this pilot project. Each subject simulated 15 seizures and in addition performed some predefined normal activities, during a 4-hour monitoring with electromyography (EMG), accelerometer, magnetometer and gyroscope (AMG), electrocardiography (ECG), electroencephalography (EEG) and audio and video recording. The results showed that a non-subject specific MISA system developed on data from the modalities accelerometer (ACM), gyroscope and EMG is able to detect 98% of the simulated seizures and at the same time mistakes only 4 of the normal movements for seizures. If the system is individualized (subject specific) it is able to detect all simulated seizures with a maximum of 1 false positive. Based on the results from the simulated seizures and normal movements the MISA system seems to be a promising approach to seizure detection.
Multi-Modal Electronic Payment Systems Best Practices and Convergence White Paper
DOT National Transportation Integrated Search
2010-02-25
The United States transportation industry has changed dramatically in the methodologies to collect fares and fees over the past 10-15 years. Largely a cash-centric approach until the early 90s, the toll and transit industries began automating ostensi...
Multiview echocardiography fusion using an electromagnetic tracking system.
Punithakumar, Kumaradevan; Hareendranathan, Abhilash R; Paakkanen, Riitta; Khan, Nehan; Noga, Michelle; Boulanger, Pierre; Becher, Harald
2016-08-01
Three-dimensional ultrasound is an emerging modality for the assessment of complex cardiac anatomy and function. The advantages of this modality include lack of ionizing radiation, portability, low cost, and high temporal resolution. Major limitations include a limited field-of-view, reliance on frequently limited acoustic windows, and poor signal to noise ratio. This study proposes a novel approach to combine multiple views into a single image using an electromagnetic tracking system in order to improve the field-of-view. The novel method has several advantages: 1) it does not rely on image information for alignment, and therefore does not require image overlap; 2) the alignment accuracy of the proposed approach is not affected by poor image quality, as in the case of image registration based approaches; 3) in contrast to previous optical tracking based systems, the proposed approach does not suffer from the line-of-sight limitation; and 4) it does not require any initial calibration. In this pilot project, we were able to show, using a heart phantom, that our method can fuse multiple echocardiographic images and improve the field-of-view. Quantitative evaluations showed that the proposed method yielded a nearly optimal alignment of image data sets in three-dimensional space. The proposed method demonstrates that the electromagnetic system can be used for the fusion of multiple echocardiography images with a seamless integration of sensors into the transducer.
VIGAN: Missing View Imputation with Generative Adversarial Networks.
Shang, Chao; Palmer, Aaron; Sun, Jiangwen; Chen, Ko-Shin; Lu, Jin; Bi, Jinbo
2017-01-01
In an era when big data are becoming the norm, there is less concern with the quantity and more with the quality and completeness of the data. In many disciplines, data are collected from heterogeneous sources, resulting in multi-view or multi-modal datasets. The missing data problem has been challenging to address in multi-view data analysis. In particular, when certain samples miss an entire view of data, this creates the missing view problem. Classic multiple imputation or matrix completion methods are hardly effective here, because there is no information in the missing view on which to base the imputation for such samples. The commonly-used simple method of removing samples with a missing view can dramatically reduce sample size, thus diminishing the statistical power of a subsequent analysis. In this paper, we propose a novel approach for view imputation via generative adversarial networks (GANs), which we name VIGAN. This approach first treats each view as a separate domain and identifies domain-to-domain mappings via a GAN using randomly-sampled data from each view, and then employs a multi-modal denoising autoencoder (DAE) to reconstruct the missing view from the GAN outputs based on paired data across the views. Then, by optimizing the GAN and DAE jointly, our model enables knowledge integration for domain mappings and view correspondences to effectively recover the missing view. Empirical results on benchmark datasets validate the VIGAN approach by comparing against the state of the art. The evaluation of VIGAN in a genetic study of substance use disorders further proves the effectiveness and usability of this approach in life science.
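As a rough illustration of the reconstruction idea, the sketch below implements only a cross-view denoising autoencoder on synthetic data; the full VIGAN method additionally learns CycleGAN-style domain mappings and optimizes both parts jointly, which is omitted here, and all names, dimensions, and hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn

class CrossViewDAE(nn.Module):
    """Map (noisy) view A to view B with a small encoder/decoder."""
    def __init__(self, dim_a, dim_b, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_a, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, dim_b)

    def forward(self, x_a):
        return self.decoder(self.encoder(x_a))

def train_dae(paired_a, paired_b, epochs=200, noise_std=0.1):
    """Fit on samples that have both views, corrupting inputs with noise."""
    model = CrossViewDAE(paired_a.shape[1], paired_b.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        noisy = paired_a + noise_std * torch.randn_like(paired_a)
        opt.zero_grad()
        loss = loss_fn(model(noisy), paired_b)
        loss.backward()
        opt.step()
    return model

# Synthetic usage: impute view B for samples where only view A was observed
a = torch.randn(256, 20)
b = a @ torch.randn(20, 15)                 # correlated second view
model = train_dae(a[:200], b[:200])         # train on the paired subset
with torch.no_grad():
    b_imputed = model(a[200:])              # reconstruct the missing view
```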
Wang, Hongzhi; Yushkevich, Paul A.
2013-01-01
Label fusion based multi-atlas segmentation has proven to be one of the most competitive techniques for medical image segmentation. This technique transfers segmentations from expert-labeled images, called atlases, to a novel image using deformable image registration. Errors produced by label transfer are further reduced by label fusion, which combines the results produced by all atlases into a consensus solution. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity is a simple and highly effective label fusion technique. However, one limitation of most weighted voting methods is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this problem, we recently developed the joint label fusion technique and the corrective learning technique, which won first place in the 2012 MICCAI Multi-Atlas Labeling Challenge and was one of the top performers in the 2013 MICCAI Segmentation: Algorithms, Theory and Applications (SATA) challenge. To make our techniques more accessible to the scientific research community, we describe an Insight Toolkit based open source implementation of our label fusion methods. Our implementation extends our methods to work with multi-modality imaging data and is more suitable for segmentation problems with multiple labels. We demonstrate the usage of our tools by applying them to the 2012 MICCAI Multi-Atlas Labeling Challenge brain image dataset and the 2013 SATA challenge canine leg image dataset. We report the best results on these two datasets so far. PMID:24319427
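For context, the simple independently-weighted voting scheme that the abstract contrasts with joint label fusion can be sketched in a few lines; the weighting kernel, exponent, and array shapes below are illustrative and this is not the authors' ITK implementation.

```python
import numpy as np

def weighted_voting(atlas_intensities, atlas_labels, target, beta=2.0):
    """Fuse warped atlas labels using local intensity-similarity weights.

    atlas_intensities: (n_atlases, H, W) atlas intensities warped to target space
    atlas_labels:      (n_atlases, H, W) corresponding warped label maps
    target:            (H, W) target intensity image
    """
    # Per-voxel weight: inverse of squared intensity difference, per atlas independently
    diff2 = (atlas_intensities - target[None]) ** 2
    weights = 1.0 / (diff2 + 1e-6) ** beta
    weights /= weights.sum(axis=0, keepdims=True)

    labels = np.unique(atlas_labels)
    votes = np.stack([(weights * (atlas_labels == lab)).sum(axis=0) for lab in labels])
    return labels[np.argmax(votes, axis=0)]
```

Joint label fusion replaces the independent weights above with weights estimated from pairwise atlas error correlations, which is the key difference highlighted in the abstract.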
Nanoparticles in practice for molecular-imaging applications: An overview.
Padmanabhan, Parasuraman; Kumar, Ajay; Kumar, Sundramurthy; Chaudhary, Ravi Kumar; Gulyás, Balázs
2016-09-01
Nanoparticles (NPs) are playing a progressively more significant role in multimodal and multifunctional molecular imaging. Agents such as superparamagnetic iron oxide (SPIO), manganese oxide (MnO), gold NPs/nanorods and quantum dots (QDs) possess specific properties, namely paramagnetism, superparamagnetism, surface plasmon resonance (SPR) and photoluminescence, respectively. These specific properties make them suitable for single/multi-modal and single/multi-functional molecular imaging. NPs generally have a nanomolar or micromolar sensitivity range and can be detected via imaging instrumentation. The distinctive characteristics of these NPs make them suitable for imaging, therapy and delivery of drugs. Multifunctional nanoparticles (MNPs) can be produced either through modification of the shell or surface or by attaching an affinity ligand to the nanoparticles. They are utilized for targeted imaging by magnetic resonance imaging (MRI), single photon emission computed tomography (SPECT), positron emission tomography (PET), computed tomography (CT), photoacoustic imaging (PAI), two-photon or fluorescent imaging, ultrasound, etc. The toxicity of NPs is also a very important concern and toxic effects should be eliminated. First-generation NPs have been designed, developed and tested in living subjects, and a few of them are already in clinical use. In the near future, molecular imaging will advance with multimodality and multifunctionality to detect diseases such as cancer, neurodegenerative diseases, cardiac diseases, inflammation, stroke and atherosclerosis, among many others, in their early stages. In the current review, we discuss single/multifunctional nanoparticles along with molecular imaging modalities. The present article intends to reveal recent avenues for nanomaterials in multimodal and multifunctional molecular imaging through a review of pertinent literature. The topic emphasises the distinctive characteristics of nanomaterials which make them suitable for biomedical imaging, therapy and delivery of drugs. This review is intended to be informative about indicative technologies that will be helpful in planning, understanding and leading nanotechnology-related work. Copyright © 2016 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Baker, Jennifer H E; McPhee, Kelly C; Moosvi, Firas; Saatchi, Katayoun; Häfeli, Urs O; Minchinton, Andrew I; Reinsberg, Stefan A
2016-01-01
Macromolecular gadolinium (Gd)-based contrast agents are in development as blood pool markers for MRI. HPG-GdF is a 583 kDa hyperbranched polyglycerol doubly tagged with Gd and Alexa 647 nm dye, making it both MR and histologically visible. In this study we examined the location of HPG-GdF in whole-tumor xenograft sections matched to in vivo DCE-MR images of both HPG-GdF and Gadovist. Despite its large size, we have shown that HPG-GdF extravasates from some tumor vessels and accumulates over time, but does not distribute beyond a few cell diameters from vessels. Fractional plasma volume (fPV) and apparent permeability-surface area product (aPS) parameters were derived from the MR concentration-time curves of HPG-GdF. Non-viable necrotic tumor tissue was excluded from the analysis by applying a novel bolus arrival time (BAT) algorithm to all voxels. aPS derived from HPG-GdF was the only MR parameter to identify a difference in vascular function between HCT116 and HT29 colorectal tumors. This study is the first to relate low and high molecular weight contrast agents with matched whole-tumor histological sections. These detailed comparisons identified tumor regions that appear distinct from each other using the HPG-GdF biomarkers related to perfusion and vessel leakiness, while Gadovist-imaged parameter measures in the same regions were unable to detect variation in vascular function. We have established HPG-GdF as a biocompatible multi-modal high molecular weight contrast agent with application for examining vascular function in both MR and histological modalities. Copyright © 2015 John Wiley & Sons, Ltd.
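The fPV and aPS estimates mentioned above are consistent with a two-parameter linear (Patlak-type) model of the tissue concentration-time curve; the sketch below fits such a model by least squares, purely as an assumption-laden illustration, since the summary does not spell out the exact pharmacokinetic model or fitting procedure used.

```python
import numpy as np

def fit_fpv_aps(t, c_tissue, c_plasma):
    """Least-squares fit of C_t(t) ~ fPV*C_p(t) + aPS*integral(C_p) (Patlak-type)."""
    # Trapezoidal running integral of the plasma concentration curve
    integral_cp = np.concatenate(
        [[0.0], np.cumsum(0.5 * (c_plasma[1:] + c_plasma[:-1]) * np.diff(t))])
    A = np.column_stack([c_plasma, integral_cp])
    params, *_ = np.linalg.lstsq(A, c_tissue, rcond=None)
    fpv, aps = params
    return fpv, aps
```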
Combined X-ray CT and mass spectrometry for biomedical imaging applications
NASA Astrophysics Data System (ADS)
Schioppa, E., Jr.; Ellis, S.; Bruinen, A. L.; Visser, J.; Heeren, R. M. A.; Uher, J.; Koffeman, E.
2014-04-01
Imaging technologies play a key role in many branches of science, especially in biology and medicine. They provide an invaluable insight into both internal structure and processes within a broad range of samples. There are many techniques that allow one to obtain images of an object. Different techniques are based on the analysis of a particular sample property by means of a dedicated imaging system, and as such, each imaging modality provides the researcher with different information. The use of multimodal imaging (imaging with several different techniques) can provide additional and complementary information that is not possible when employing a single imaging technique alone. In this study, we present for the first time a multi-modal imaging technique where X-ray computerized tomography (CT) is combined with mass spectrometry imaging (MSI). While X-ray CT provides 3-dimensional information regarding the internal structure of the sample based on X-ray absorption coefficients, MSI of thin sections acquired from the same sample allows the spatial distribution of many elements/molecules, each distinguished by its unique mass-to-charge ratio (m/z), to be determined within a single measurement and with a spatial resolution as low as 1 μm or even less. The aim of the work is to demonstrate how molecular information from MSI can be spatially correlated with 3D structural information acquired from X-ray CT. In these experiments, frozen samples are imaged in an X-ray CT setup using Medipix based detectors equipped with a CO2 cooled sample holder. Single projections are pre-processed before tomographic reconstruction using a signal-to-thickness calibration. In the second step, the object is sliced into thin sections (circa 20 μm) that are then imaged using both matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) and secondary ion mass spectrometry (SIMS), where the spatial distribution of specific molecules within the sample is determined. The combination of two vastly different imaging approaches provides complementary information (i.e., anatomical and molecular distributions) that allows the correlation of distinct structural features with specific molecular distributions, leading to unique insights into disease development.
Quantitative Medical Image Analysis for Clinical Development of Therapeutics
NASA Astrophysics Data System (ADS)
Analoui, Mostafa
There has been significant progress in the development of therapeutics for the prevention and management of several disease areas in recent years, leading to increased average life expectancy, as well as quality of life, globally. However, due to the complexity of addressing a number of medical needs and the financial burden of developing new classes of therapeutics, there is a need for better tools for decision making and for validation of the efficacy and safety of new compounds. Numerous biological markers (biomarkers) have been proposed either as adjuncts to current clinical endpoints or as surrogates. Imaging biomarkers are among the most rapidly growing classes of biomarkers being examined to expedite effective and rational drug development. Clinical imaging often involves a complex set of multi-modality data sets that require rapid and objective analysis, independent of reviewer bias and training. In this chapter, an overview of imaging biomarkers for drug development is offered, along with the challenges that necessitate quantitative and objective image analysis. Examples of automated and semi-automated analysis approaches are provided, along with a technical review of such methods. These examples include the use of 3D MRI for osteoarthritis, ultrasound vascular imaging, and dynamic contrast enhanced MRI for oncology. Additionally, a brief overview of regulatory requirements is discussed. In conclusion, this chapter highlights key challenges and future directions in this area.
Chen, T N; Yin, X T; Li, X G; Zhao, J; Wang, L; Mu, N; Ma, K; Huo, K; Liu, D; Gao, B Y; Feng, H; Li, F
2018-05-08
Objective: To explore the clinical and teaching application value of virtual reality technology in the preoperative planning and intraoperative guidance of gliomas located in the central sulcus region. Method: Ten patients with glioma in the central sulcus region were scheduled for surgical treatment. The neuro-imaging data, including CT, CTA, DSA, MRI and fMRI, were input to the 3dgo sczhry workstation for image fusion and 3D reconstruction. Spatial relationships between the lesions and the surrounding structures were obtained on the virtual reality images. These images were applied to operative approach design, operation process simulation, intraoperative auxiliary decision making and the training of specialist physicians. Results: Intraoperative findings in the 10 patients were highly consistent with the preoperative simulation using virtual reality technology. Preoperative 3D-reconstructed virtual reality images improved the feasibility of operation planning and operation accuracy. The technology not only showed advantages for neurological function protection and lesion resection during surgery, but also improved the training efficiency and effectiveness of dedicated physicians by turning abstract comprehension into virtual reality. Conclusion: Image fusion and 3D reconstruction based virtual reality technology in glioma resection is helpful for formulating the operation plan, improving operation safety, increasing the total resection rate, and facilitating the teaching and training of specialist physicians.
Wong, K K; Chondrogiannis, S; Bowles, H; Fuster, D; Sánchez, N; Rampin, L; Rubello, D
Nuclear medicine traditionally employs planar and single photon emission computed tomography (SPECT) imaging techniques to depict the biodistribution of radiotracers for the diagnostic investigation of a range of disorders of endocrine gland function. The usefulness of combining functional information with anatomy derived from computed tomography (CT), magnetic resonance imaging (MRI), and high resolution ultrasound (US) has long been appreciated, either using visual side-by-side correlation or software-based co-registration. The emergence of hybrid SPECT/CT camera technology now allows the simultaneous acquisition of combined multi-modality imaging, with seamless fusion of 3D volume datasets. Thus, it is not surprising that there is a growing literature describing the many advantages that contemporary SPECT/CT technology brings to the radionuclide investigation of endocrine disorders, showing potential advantages for the pre-operative localization of parathyroid adenomas prior to a minimally invasive surgical approach, especially in the presence of ectopic glands and in multiglandular disease. In conclusion, hybrid SPECT/CT imaging has become an essential tool to ensure the most accurate diagnosis in the management of patients with hyperparathyroidism. Copyright © 2016 Elsevier España, S.L.U. y SEMNIM. All rights reserved.
Hollow fiber: a biophotonic implant for live cells
NASA Astrophysics Data System (ADS)
Silvestre, Oscar F.; Holton, Mark D.; Summers, Huw D.; Smith, Paul J.; Errington, Rachel J.
2009-02-01
The technical objective of this study has been to design, build and validate biocompatible, fluorescence-based hollow fiber implants with integrated biophotonic components to enable in-fiber kinetic cell-based assays. A human osteosarcoma in vitro cell model fiber system has been established, with validation studies to determine in-fiber cell growth, cell cycle analysis and organization in normal and drug-treated conditions. The rationale for implant development has focused on developing benchmark concepts in standard monolayer tissue culture, followed by the development of in vitro hollow fiber designs encompassing imaging with and without integrated biophotonics. Furthermore, the effect of introducing targetable biosensors, such as quantum dots, into the encapsulated tumor implant has been evaluated to inform new detection readouts and possible implant designs. A preliminary micro/macro imaging approach has been undertaken that could provide a means to track distinct morphological changes in cells growing in a 3D matrix within the fiber, which affect the light scattering properties of the implant. Parallel engineering studies have shown the influence of the optical properties of the fiber polymer wall in all imaging modes. Taken together, we show the basic foundation and the opportunities for multi-modal imaging within an in vitro implant format.
Accelerating image recognition on mobile devices using GPGPU
NASA Astrophysics Data System (ADS)
Bordallo López, Miguel; Nykänen, Henri; Hannuksela, Jari; Silvén, Olli; Vehviläinen, Markku
2011-01-01
The future multi-modal user interfaces of battery-powered mobile devices are expected to require computationally costly image analysis techniques. Graphics Processing Units are very well suited for parallel processing, and the addition of programmable stages and high precision arithmetic provides opportunities to implement energy-efficient complete algorithms. At the moment the first mobile graphics accelerators with programmable pipelines are available, enabling the GPGPU implementation of several image processing algorithms. In this context, we consider a face tracking approach that uses efficient gray-scale invariant texture features and boosting. The solution is based on Local Binary Pattern (LBP) features and makes use of the GPU in the pre-processing and feature extraction phase. We have implemented a series of image processing techniques in the shader language of OpenGL ES 2.0, compiled them for a mobile graphics processing unit and performed tests on a mobile application processor platform (OMAP3530). In our contribution, we describe the challenges of designing on a mobile platform, present the performance achieved and provide measurement results for the actual power consumption in comparison to using the CPU (ARM) on the same platform.
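As background for the feature-extraction stage, a plain NumPy version of the basic 8-neighbour LBP code is sketched below; the paper implements the equivalent operation in OpenGL ES 2.0 shaders on the mobile GPU, so this CPU sketch is only for illustrating what is being computed.

```python
import numpy as np

def lbp_8neighbour(img):
    """Basic 8-neighbour Local Binary Pattern codes for a grayscale image."""
    img = img.astype(np.float32)
    c = img[1:-1, 1:-1]                      # center pixels (image border excluded)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
        code += (neighbour >= c).astype(np.int32) << bit
    return code                              # values in [0, 255]
```

A histogram of these codes over a detection window is the gray-scale invariant texture descriptor that the boosting stage then consumes.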
Falahati, Farshad; Westman, Eric; Simmons, Andrew
2014-01-01
Machine learning algorithms and multivariate data analysis methods have been widely utilized in the field of Alzheimer's disease (AD) research in recent years. Advances in medical imaging and medical image analysis have provided a means to generate and extract valuable neuroimaging information. Automatic classification techniques provide tools to analyze this information and observe inherent disease-related patterns in the data. In particular, these classifiers have been used to discriminate AD patients from healthy control subjects and to predict conversion from mild cognitive impairment to AD. In this paper, recent studies are reviewed that have used machine learning and multivariate analysis in the field of AD research. The main focus is on studies that used structural magnetic resonance imaging (MRI), but studies that included positron emission tomography and cerebrospinal fluid biomarkers in addition to MRI are also considered. A wide variety of materials and methods has been employed in different studies, resulting in a range of different outcomes. Influential factors such as classifiers, feature extraction algorithms, feature selection methods, validation approaches, and cohort properties are reviewed, as well as key MRI-based and multi-modal based studies. Current and future trends are discussed.
de Borst, Aline W; Valente, Giancarlo; Jääskeläinen, Iiro P; Tikka, Pia
2016-04-01
In the perceptual domain, it has been shown that the human brain is strongly shaped through experience, leading to expertise in highly-skilled professionals. What has remained unclear is whether specialization also shapes brain networks underlying mental imagery. In our fMRI study, we aimed to uncover modality-specific mental imagery specialization of film experts. Using multi-voxel pattern analysis we decoded from brain activity of professional cinematographers and sound designers whether they were imagining sounds or images of particular film clips. In each expert group distinct multi-voxel patterns, specific for the modality of their expertise, were found during classification of imagery modality. These patterns were mainly localized in the occipito-temporal and parietal cortex for cinematographers and in the auditory cortex for sound designers. We also found generalized patterns across perception and imagery that were distinct for the two expert groups: they involved frontal cortex for the cinematographers and temporal cortex for the sound designers. Notably, the mental representations of film clips and sounds of cinematographers contained information that went beyond modality-specificity. We were able to successfully decode the implicit presence of film genre from brain activity during mental imagery in cinematographers. The results extend existing neuroimaging literature on expertise into the domain of mental imagery and show that experience in visual versus auditory imagery can alter the representation of information in modality-specific association cortices. Copyright © 2016 Elsevier Inc. All rights reserved.
Cho, In K; Wang, Silun; Mao, Hui; Chan, Anthony WS
2016-01-01
Recent advances in stem cell-based regenerative medicine, cell replacement therapy, and genome editing technologies (e.g., CRISPR-Cas9) have sparked great interest in in vivo cell monitoring. Molecular imaging promises a unique approach to noninvasively monitor cellular and molecular phenomena, including cell survival, migration, proliferation, and even differentiation at the whole organism level. Several imaging modalities and strategies have been explored for monitoring cell grafts in vivo. We begin this review with an introduction describing the progress in stem cell technology, with a perspective toward cell replacement therapy. The importance of molecular imaging in reporting and assessing the status of cell grafts and their relation to the local microenvironment is highlighted, since the current knowledge gap is one of the major obstacles in the clinical translation of stem cell therapy. Based on currently available imaging techniques, we provide a brief discussion on the pros and cons of each imaging modality used for monitoring cell grafts, with particular emphasis on magnetic resonance imaging (MRI) and the reporter gene approach. Finally, we conclude with a comprehensive discussion of future directions for applying molecular imaging in regenerative medicine, to further emphasize the importance of correlating cell graft conditions and clinical outcomes to advance regenerative medicine. PMID:27766183
ERIC Educational Resources Information Center
Kuby, Candace R.
2013-01-01
Drawing on theories of multi-modality and critical visual literacy, this article focuses on images that five-and six year-olds painted in a class-made book, Voice on the Bus, about racial segregation. The article discusses how children used illustrations to convey their understandings of Rosa Parks' bus arrest in Alabama. A post-structural view…
ERIC Educational Resources Information Center
John, Benneaser; Thavavel, V.; Jayaraj, Jayakumar; Muthukumar, A.; Jeevanandam, Poornaselvan Kittu
2016-01-01
Academic writing skills are crucial when students, e.g., in teacher education programs, write their undergraduate theses. A multi-modal web-based and self-regulated learning resource on academic writing was developed, using texts, hypertext, moving images, podcasts and templates. A study, using surveys and a focus group, showed that students used…
Extra-skeletal Ewing's sarcoma in adults: presentation of two cases.
Lipski, Samuel M; Cermak, Katia; Shumelinsky, Felix; Gil, Thierry; Gebhart, Michael J
2010-12-01
Extraosseous Ewing's sarcoma represents about 5% of the Ewing family of tumours. Two cases in adult patients are presented, emphasizing the complexity of a multi-modality treatment approach to this tumour. Clinical presentation and chemotherapeutic, surgical and radiotherapeutic approaches are discussed. A thorough literature search was done to correlate our therapeutic approach with current knowledge of this very rare disease.
Magnetomotive Molecular Nanoprobes
John, Renu; Boppart, Stephen A.
2012-01-01
Tremendous developments in the field of biomedical imaging in the past two decades have resulted in the transformation of anatomical imaging to molecular-specific imaging. The main approaches towards imaging at a molecular level are the development of high resolution imaging modalities with high penetration depths and increased sensitivity, and the development of molecular probes with high specificity. The development of novel molecular contrast agents and their success in molecular optical imaging modalities have led to the emergence of molecular optical imaging as a more versatile and capable technique for providing morphological, spatial, and functional information at the molecular level with high sensitivity and precision, compared to other imaging modalities. In this review, we discuss a new class of dynamic contrast agents called magnetomotive molecular nanoprobes for molecular-specific imaging. Magnetomotive agents are superparamagnetic nanoparticles, typically iron-oxide, that are physically displaced by the application of a small modulating external magnetic field. Dynamic phase-sensitive position measurements are performed using any high resolution imaging modality, including optical coherence tomography (OCT), ultrasonography, or magnetic resonance imaging (MRI). The dynamics of the magnetomotive agents can be used to extract the biomechanical properties of the tissue in which the nanoparticles are bound, and the agents can be used to deliver therapy via magnetomotive displacements to modulate or disrupt cell function, or hyperthermia to kill cells. These agents can be targeted via conjugation to antibodies, and in vivo targeted imaging has been shown in a carcinogen-induced rat mammary tumor model. The iron-oxide nanoparticles also exhibit negative T2 contrast in MRI, and modulations can produce ultrasound imaging contrast for multimodal imaging applications. PMID:21517766
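The "dynamic phase-sensitive position measurement" can be pictured as lock-in demodulation of the displacement trace at the coil's modulation frequency; the following generic sketch (not the specific OCT, ultrasound, or MRI processing described in the review) recovers the magnetomotive amplitude and phase.

```python
import numpy as np

def lockin_displacement(trace, fs, f_mod):
    """Lock-in (phase-sensitive) estimate of the displacement component at f_mod.

    trace: 1D displacement (or OCT phase) signal sampled at fs in Hz
    f_mod: magnetic field modulation frequency in Hz
    """
    t = np.arange(trace.size) / fs
    ref = np.exp(-2j * np.pi * f_mod * t)    # complex reference at the modulation
    demod = 2.0 * np.mean(trace * ref)       # complex amplitude A*exp(i*phase)
    return np.abs(demod), np.angle(demod)
```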
Phase congruency map driven brain tumour segmentation
NASA Astrophysics Data System (ADS)
Szilágyi, Tünde; Brady, Michael; Berényi, Ervin
2015-03-01
Computer Aided Diagnostic (CAD) systems are already of proven value in healthcare, especially for surgical planning; nevertheless much remains to be done. Gliomas are the most common brain tumours (70%) in adults, with a survival time of just 2-3 months if detected at WHO grade III or higher. Such tumours are extremely variable, necessitating multi-modal Magnetic Resonance Imaging (MRI). The use of Gadolinium-based contrast agents is only relevant at later stages of the disease, where they highlight the enhancing rim of the tumour. Currently, there is no single accepted method that can be used as a reference. There are three main challenges with such images: to decide whether a tumour is present and, if so, to localize it; to construct a mask that separates healthy and diseased tissue; and to differentiate between the tumour core and the surrounding oedema. This paper presents two contributions. First, we develop tumour seed selection based on multiscale multi-modal texture feature vectors. Second, we develop a method based on a local phase congruency based feature map to drive level-set segmentation. The segmentations achieved with our method are more accurate than previously presented methods, particularly for challenging low grade tumours.
Multi-modal porous microstructure for high temperature fuel cell application
NASA Astrophysics Data System (ADS)
Wejrzanowski, T.; Haj Ibrahim, S.; Cwieka, K.; Loeffler, M.; Milewski, J.; Zschech, E.; Lee, C.-G.
2018-01-01
In this study, the effect of the microstructure of a porous nickel electrode on the performance of a high temperature fuel cell is investigated based on a molten carbonate fuel cell (MCFC) cathode. The cathode materials are fabricated from a slurry consisting of nickel powder and a polymeric binder/solvent mixture, using the tape casting method. The final pore structure is shaped by modifying the slurry composition - with or without the addition of porogen(s). The manufactured materials are extensively characterized by various techniques including micro-computed tomography (micro-XCT), scanning electron microscopy (SEM), mercury porosimetry, BET and the Archimedes method. Tomographic images are also analyzed and quantified to reveal the evolution of the pore space due to in situ oxidation of nickel to NiO and infiltration by the electrolyte. Single-cell performance tests are carried out under MCFC operating conditions to estimate the performance of the manufactured materials. It is found that a multi-modal microstructure of the MCFC cathode results in a significant enhancement of the power density generated by the reference cell. To give greater insight into the effect of microstructure on the properties of the cathode, a model based on 3D tomography image transformation is proposed.
On using the Hilbert transform for blind identification of complex modes: A practical approach
NASA Astrophysics Data System (ADS)
Antunes, Jose; Debut, Vincent; Piteau, Philippe; Delaune, Xavier; Borsoi, Laurent
2018-01-01
The modal identification of dynamical systems under operational conditions, when subjected to wide-band unmeasured excitations, is today a viable alternative to more traditional modal identification approaches based on processing sets of measured FRFs or impulse responses. Among current techniques for performing operational modal identification, the so-called blind identification methods are the subject of considerable investigation. In particular, the SOBI (Second-Order Blind Identification) method was found to be quite efficient. SOBI was originally developed for systems with normal modes. To address systems with complex modes, various extension approaches have been proposed, in particular: (a) Using a first-order state-space formulation for the system dynamics; (b) Building complex analytic signals from the measured responses using the Hilbert transform. In this paper we further explore the latter option, which is conceptually interesting while preserving the model order and size. Focus is on applicability of the SOBI technique for extracting the modal responses from analytic signals built from a set of vibratory responses. The novelty of this work is to propose a straightforward computational procedure for obtaining the complex cross-correlation response matrix to be used for the modal identification procedure. After clarifying subtle aspects of the general theoretical framework, we demonstrate that the correlation matrix of the analytic responses can be computed through a Hilbert transform of the real correlation matrix, so that the actual time-domain responses are no longer required for modal identification purposes. The numerical validation of the proposed technique is presented based on time-domain simulations of a conceptual physical multi-modal system, designed to display modes ranging from normal to highly complex, while keeping modal damping low and nearly independent of the modal complexity, and which can prove very interesting in test bench applications. Numerical results for complex modal identifications are presented, and the quality of the identified modal matrix and modal responses, extracted using the complex SOBI technique and implementing the proposed formulation, is assessed.
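As a small illustration of the starting point, the sketch below builds the analytic responses with the Hilbert transform and forms the complex lagged correlation matrices that a SOBI-type joint diagonalization would consume; the paper's shortcut, obtaining these matrices by Hilbert-transforming the real correlation matrix rather than the raw responses, is only noted in the comments, and the synthetic two-channel signal is illustrative.

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic two-channel response with a phase-shifted (complex-mode-like) component
fs, n = 1000, 4000
t = np.arange(n) / fs
x = np.vstack([np.sin(2 * np.pi * 7 * t),
               np.sin(2 * np.pi * 7 * t + 0.8)]) + 0.05 * np.random.randn(2, n)

# Analytic responses z(t) = x(t) + i*H[x](t)
z = hilbert(x, axis=1)

def lagged_corr(z, lag):
    """Complex correlation matrix estimated from the analytic samples at a given lag."""
    return z[:, lag:] @ z[:, :z.shape[1] - lag].conj().T / (z.shape[1] - lag)

# Matrices at several lags, ready for SOBI-style joint diagonalization.
# (The paper shows these can equivalently be obtained by Hilbert-transforming the
#  real correlation matrix, so the analytic time-domain responses are not needed.)
R = [lagged_corr(z, lag) for lag in (0, 5, 10, 20)]
```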
Designing Image Operators for MRI-PET Image Fusion of the Brain
NASA Astrophysics Data System (ADS)
Márquez, Jorge; Gastélum, Alfonso; Padilla, Miguel A.
2006-09-01
Our goal is to obtain images that combine, in a useful and precise way, the information from 3D volumes of medical imaging sets. We address two modalities combining anatomy (Magnetic Resonance Imaging or MRI) and functional information (Positron Emission Tomography or PET). Commercial imaging software offers image fusion tools based on fixed blending or color-channel combination of two modalities and color Look-Up Tables (LUTs), without considering the anatomical and functional character of the image features. We used a sensible approach for image fusion, taking advantage mainly of the HSL (Hue, Saturation and Luminosity) color space, in order to enhance the fusion results. We further tested operators for gradient and contour extraction to enhance anatomical details, plus other spatial-domain filters for functional features corresponding to wide point-spread-function responses in PET images. A set of image-fusion operators was formulated and tested on PET and MRI acquisitions.
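A toy version of the kind of HSL-space combination discussed, with anatomy (MRI) driving lightness and function (PET) driving hue and saturation, is shown below; the particular hue mapping and the use of Python's colorsys are our own illustrative choices, not the operators formulated in the paper.

```python
import numpy as np
import colorsys

def fuse_mri_pet_hls(mri, pet):
    """Fuse coregistered MRI and PET slices in HLS space.

    mri, pet: 2D arrays normalized to [0, 1]; returns an (H, W, 3) RGB image.
    """
    h = 0.66 * (1.0 - pet)        # high uptake mapped toward red, low toward blue
    l = mri                       # anatomical detail carried by lightness
    s = pet                       # saturation grows with the functional signal
    rgb = [colorsys.hls_to_rgb(hi, li, si)
           for hi, li, si in zip(h.ravel(), l.ravel(), s.ravel())]
    return np.asarray(rgb).reshape(mri.shape + (3,))
```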
Pancreatic fluid collections: What is the ideal imaging technique?
Dhaka, Narendra; Samanta, Jayanta; Kochhar, Suman; Kalra, Navin; Appasani, Sreekanth; Manrai, Manish; Kochhar, Rakesh
2015-12-28
Pancreatic fluid collections (PFCs) are seen in up to 50% of cases of acute pancreatitis. The Revised Atlanta classification categorized these collections on the basis of the duration of disease and their contents, whether liquid alone or a mixture of fluid and necrotic debris. Management of these different types of collections differs because of the variable quantity of debris; while patients with pseudocysts can be drained by straightforward stent placement, walled-off necrosis requires a multi-disciplinary approach. Differentiating these collections on the basis of clinical severity alone is not reliable, so imaging is primarily performed. Contrast-enhanced computed tomography is the commonly used modality for the diagnosis and assessment of the proportion of solid contents in PFCs, however with certain limitations, such as the use of iodinated contrast material (especially in renal failure patients) and radiation exposure. Magnetic resonance imaging (MRI) performs better than computed tomography (CT) in the characterization of pancreatic/peripancreatic fluid collections, especially for the quantification of solid debris and fat necrosis (seen as fat density globules), and is an alternative in situations where CT is contraindicated. Magnetic resonance cholangiopancreatography is also highly sensitive for detecting pancreatic duct disruption and choledocholithiasis. Endoscopic ultrasound is an evolving technique with higher reproducibility for fluid-to-debris component estimation, with the added advantage of being a single-stage procedure for both diagnosis (solid debris delineation) and management (drainage of the collection) in the same sitting. Recently, the role of diffusion-weighted MRI and positron emission tomography/CT with (18)F-FDG labeled autologous leukocytes is also emerging for the noninvasive detection of infection. Comparative studies between these imaging modalities are still limited. However, we look forward to a time when this gap in the literature will be filled.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
Is Non-invasive Image-Guided Breast Brachytherapy Good? – Jess Hiatt, MS: Non-invasive Image-Guided Breast Brachytherapy (NIBB) is an emerging therapy for breast boost treatments as well as Accelerated Partial Breast Irradiation (APBI) using HDR surface breast brachytherapy. NIBB allows for smaller treatment volumes while maintaining optimal target coverage. Considering the real-time image-guidance and immobilization provided by the NIBB modality, minimal margins around the target tissue are necessary. Accelerated Partial Breast Irradiation in brachytherapy: is shorter better? – Dorin Todor, PhD, VCU: A review of balloon and strut devices will be provided together with the origins of APBI: the interstitial multi-catheter implant. A dosimetric and radiobiological perspective will help point out the evolution in breast brachytherapy, both in terms of devices and the protocols/clinical trials under which these devices are used. Improvements in imaging, delivery modalities and convenience are among the factors driving the ultrashort fractionation schedules, but our understanding of both local control and the toxicities associated with various treatments is lagging. A comparison between various schedules, from a radiobiological perspective, will be given together with a critical analysis of the issues. Learning Objectives: to review and understand the evolution and development of APBI using brachytherapy methods; to understand the basis and limitations of radio-biological 'equivalence' between fractionation schedules; to review commonly used and proposed fractionation schedules. Intra-operative breast brachytherapy: Is one stop shopping best? – Bruce Libby, PhD, University of Virginia: A review of intraoperative breast brachytherapy will be presented, including the Targit-A and other trials that have used electronic brachytherapy. More modern approaches, in which the lumpectomy procedure is integrated into an APBI workflow, will also be discussed. Learning Objectives: To review past and current clinical trials for IORT; To discuss the lumpectomy-scan-plan-treat workflow for IORT.
New false color mapping for image fusion
NASA Astrophysics Data System (ADS)
Toet, Alexander; Walraven, Jan
1996-03-01
A pixel-based color-mapping algorithm is presented that produces a fused false color rendering of two gray-level images representing different sensor modalities. The resulting images have a higher information content than each of the original images and retain sensor-specific image information. The unique component of each image modality is enhanced in the resulting fused color image representation. First, the common component of the two original input images is determined. Second, the common component is subtracted from the original images to obtain the unique component of each image. Third, the unique component of each image modality is subtracted from the image of the other modality. This step serves to enhance the representation of sensor-specific details in the final fused result. Finally, a fused color image is produced by displaying the images resulting from the last step through, respectively, the red and green channels of a color display. The method is applied to fuse thermal and visual images. The results show that the color mapping enhances the visibility of certain details and preserves the specificity of the sensor information. The fused images also have a fairly natural appearance. The fusion scheme involves only operations on corresponding pixels. The resolution of a fused image is therefore directly related to the resolution of the input images. Before fusing, the contrast of the images can be enhanced and their noise can be reduced by standard image-processing techniques. The color mapping algorithm is computationally simple. This implies that the investigated approaches can eventually be applied in real time and that the hardware needed is not too complicated or too voluminous (an important consideration when it has to fit in an airplane, for instance).
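The four steps translate directly into array operations; the sketch below follows the sequence given in the abstract, taking the common component as the pixel-wise minimum of the two images (an assumption on our part, since the abstract does not name the operator).

```python
import numpy as np

def false_color_fusion(img_a, img_b):
    """Pixel-based false-color fusion of two coregistered gray-level images.

    img_a, img_b: 2D arrays scaled to [0, 1] (e.g. visual and thermal).
    Returns an (H, W, 3) RGB image with modality A in red and B in green.
    """
    common = np.minimum(img_a, img_b)            # component shared by both sensors
    unique_a = img_a - common                    # sensor-specific detail of A
    unique_b = img_b - common                    # sensor-specific detail of B
    a_enh = np.clip(img_a - unique_b, 0.0, 1.0)  # suppress the other sensor's detail
    b_enh = np.clip(img_b - unique_a, 0.0, 1.0)
    return np.dstack([a_enh, b_enh, np.zeros_like(img_a)])
```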
Kashif, Amer S; Lotz, Thomas F; Heeren, Adrianus M W; Chase, James G
2013-11-01
It is estimated that every year, 1 × 10^6 women are diagnosed with breast cancer, and more than 410,000 die annually worldwide. Digital Image Elasto Tomography (DIET) is a new noninvasive breast cancer screening modality that induces mechanical vibrations in the breast and images its surface motion with digital cameras to detect changes in stiffness. This research develops a new automated approach for diagnosing breast cancer using DIET based on a modal analysis model. The first and second natural frequencies of silicone phantom breasts are analyzed. Separate modal analysis is performed for each region of the phantom to estimate the modal parameters using imaged motion data over several input frequencies. Statistical methods are used to assess the likelihood of a frequency shift, which can indicate tumor location. Phantoms with 5, 10, and 20 mm stiff inclusions are tested, as well as a homogeneous (healthy) phantom. Inclusions are located at four locations at different depths. The second natural frequency proves to be a reliable metric with the potential to clearly distinguish lesion-like inclusions of different stiffness, as well as providing an approximate location for the tumor-like inclusions. The 10 and 20 mm inclusions are always detected regardless of depth. The 5 mm inclusions are only detected near the surface. The homogeneous phantom always yields a negative result, as expected. Detection is based on a statistical likelihood analysis to determine the presence of a significantly different frequency response over the phantom, which is a novel approach to this problem. The overall results show promise and justify proof of concept trials with human subjects.
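As a simplified stand-in for the per-region modal analysis and likelihood test (the actual method fits modal parameters across several input frequencies and applies a statistical likelihood analysis), one might flag regions whose dominant response frequency deviates from the phantom-wide distribution:

```python
import numpy as np
from scipy.stats import zscore

def detect_frequency_shift(region_responses, fs, z_threshold=2.0):
    """Flag regions whose dominant surface-motion frequency is an outlier.

    region_responses: (n_regions, n_samples) surface-motion time series
    fs:               sampling rate in Hz
    """
    spectra = np.abs(np.fft.rfft(region_responses, axis=1))
    freqs = np.fft.rfftfreq(region_responses.shape[1], d=1.0 / fs)
    peak_freq = freqs[np.argmax(spectra[:, 1:], axis=1) + 1]   # skip the DC bin
    suspicious = np.where(np.abs(zscore(peak_freq)) > z_threshold)[0]
    return peak_freq, suspicious
```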
Integration of heterogeneous data for classification in hyperspectral satellite imagery
NASA Astrophysics Data System (ADS)
Benedetto, J.; Czaja, W.; Dobrosotskaya, J.; Doster, T.; Duke, K.; Gillis, D.
2012-06-01
As new remote sensing modalities emerge, it becomes increasingly important to find more suitable algorithms for fusion and integration of different data types for the purposes of target/anomaly detection and classification. Typical techniques that deal with this problem are based on performing detection/classification/segmentation separately in chosen modalities, and then integrating the resulting outcomes into a more complete picture. In this paper we provide a broad analysis of a new approach, based on creating fused representations of the multi-modal data, which then can be subjected to analysis by means of the state-of-the-art classifiers or detectors. In this scenario we shall consider the hyperspectral imagery combined with spatial information. Our approach involves machine learning techniques based on analysis of joint data-dependent graphs and their associated diffusion kernels. Then, the significant eigenvectors of the derived fused graph Laplace operator form the new representation, which provides integrated features from the heterogeneous input data. We compare these fused approaches with analysis of integrated outputs of spatial and spectral graph methods.
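A dense toy sketch of the fused-graph idea follows: a joint affinity is built from spectral and spatial distances and the leading non-trivial Laplacian eigenvectors serve as the integrated features. The Gaussian kernels, bandwidths, and unnormalized Laplacian are our assumptions, and a real hyperspectral scene would require sparse neighbourhood graphs rather than the dense matrices used here.

```python
import numpy as np

def fused_laplacian_features(spectral, spatial, sigma_f=1.0, sigma_s=1.0, k=10):
    """Leading eigenvectors of a joint spectral-spatial graph Laplacian.

    spectral: (n_pixels, n_bands) hyperspectral vectors
    spatial:  (n_pixels, 2) pixel coordinates
    Returns (n_pixels, k) fused features (dense version, small n_pixels only).
    """
    d_f = np.sum((spectral[:, None] - spectral[None]) ** 2, axis=-1)
    d_s = np.sum((spatial[:, None] - spatial[None]) ** 2, axis=-1)
    W = np.exp(-d_f / sigma_f**2) * np.exp(-d_s / sigma_s**2)   # fused affinity
    L = np.diag(W.sum(axis=1)) - W                              # graph Laplacian
    _, vecs = np.linalg.eigh(L)                                 # ascending eigenvalues
    return vecs[:, 1:k + 1]                                     # drop the constant mode

# These fused features can then be fed to any standard classifier or detector.
```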
23 CFR 972.214 - Federal lands congestion management system (CMS).
Code of Federal Regulations, 2011 CFR
2011-04-01
... management strategies; (v) Determine methods to monitor and evaluate the performance of the multi-modal... means the level at which transportation system performance is no longer acceptable due to traffic... improve existing transportation system efficiency. Approaches may include the use of alternate mode...
23 CFR 972.214 - Federal lands congestion management system (CMS).
Code of Federal Regulations, 2010 CFR
2010-04-01
... management strategies; (v) Determine methods to monitor and evaluate the performance of the multi-modal... means the level at which transportation system performance is no longer acceptable due to traffic... improve existing transportation system efficiency. Approaches may include the use of alternate mode...
Stochastic Approaches to Understanding Dissociations in Inflectional Morphology
ERIC Educational Resources Information Center
Plunkett, Kim; Bandelow, Stephan
2006-01-01
Computer modelling research has undermined the view that double dissociations in behaviour are sufficient to infer separability in the cognitive mechanisms underlying those behaviours. However, all these models employ "multi-modal" representational schemes, where functional specialisation of processing emerges from the training process.…
Future Directions in Medical Physics
NASA Astrophysics Data System (ADS)
Jeraj, Robert
Medical Physics is a highly interdisciplinary field at the intersection of physics, medicine and biology. It aims to develop novel applications of physical processes and techniques in various areas of medicine and biology. Medical Physics has had, and continues to have, a profound impact by developing improved imaging and treatment technologies and by helping to advance our understanding of the complexity of disease. The general trend in medicine towards personalized therapy, and the emphasis on accelerated translational research, are having a profound impact on medical physics as well. In the traditional stronghold of medical physicists - radiation therapy - the new reality is taking shape in the form of biologically conformal and combination therapies, as well as advanced particle therapy approaches such as proton and ion therapies. The rapid increase in faster and more informative multi-modality medical imaging is bringing a wealth of information that is being complemented with data obtained from genomic profiling and other biomarkers. Novel data analysis and data mining approaches are proving grounds for employing various artificial intelligence methods that will help further improve clinical decision making, optimize therapies, and deepen understanding of disease properties and disease evolution, ultimately leading to improved clinical outcomes.
NASA Astrophysics Data System (ADS)
Nuster, Robert; Wurzinger, Gerhild; Paltauf, Guenther
2017-03-01
CCD camera-based optical ultrasound detection is a promising alternative approach for high-resolution 3D photoacoustic imaging (PAI). To fully exploit its potential and achieve an image resolution <50 μm, it is necessary to incorporate variations of the speed of sound (SOS) into the image reconstruction algorithm. Hence, the proposed work presents the idea behind, and a first implementation of, adding speed-of-sound imaging to a previously developed camera-based PAI setup. The current setup provides SOS maps with a spatial resolution of 2 mm and an accuracy of the obtained absolute SOS values of about 1%. The proposed dual-modality setup has the potential to provide highly resolved and perfectly co-registered 3D photoacoustic and SOS images.
Domínguez D, Juan F; Egan, Gary F; Gray, Marcus A; Poudel, Govinda R; Churchyard, Andrew; Chua, Phyllis; Stout, Julie C; Georgiou-Karistianis, Nellie
2013-01-01
IMAGE-HD is an Australian-based multi-modal longitudinal magnetic resonance imaging (MRI) study in premanifest and early symptomatic Huntington's disease (pre-HD and symp-HD, respectively). In this investigation we sought to determine the sensitivity of imaging methods to detect macrostructural (volume) and microstructural (diffusivity) longitudinal change in HD. We used a 3T MRI scanner to acquire T1- and diffusion-weighted images at baseline and 18 months in 31 pre-HD, 31 symp-HD and 29 controls. Volume was measured across the whole brain, and volume and diffusion measures were ascertained for caudate and putamen. We observed a range of significant volumetric and, for the first time, diffusion changes over 18 months in both pre-HD and symp-HD, relative to controls, detectable at the brain-wide level (volume change in grey and white matter) and in caudate and putamen (volume and diffusivity change). Importantly, longitudinal volume change in the caudate was the only measure that discriminated between groups across all stages of disease: far from diagnosis (>15 years), close to diagnosis (<15 years) and after diagnosis. Of the two diffusion metrics (mean diffusivity, MD; fractional anisotropy, FA), only longitudinal FA change was sensitive to group differences, but only after diagnosis. These findings further confirm caudate atrophy as one of the most sensitive and early biomarkers of neurodegeneration in HD. They also highlight that the ability of different tissue properties to discriminate between groups varies along disease progression, and may therefore inform biomarker selection for future therapeutic interventions.
In vivo confirmation of hydration based contrast mechanisms for terahertz medical imaging using MRI
NASA Astrophysics Data System (ADS)
Bajwa, Neha; Sung, Shijun; Garritano, James; Nowroozi, Bryan; Tewari, Priyamvada; Ennis, Daniel B.; Alger, Jeffery; Grundfest, Warren; Taylor, Zachary
2014-09-01
Terahertz (THz) detection has been proposed and applied to a variety of medical imaging applications in view of its unrivaled hydration profiling capabilities. Variations in tissue dielectric function have been shown to generate high-contrast imagery of tissue at THz frequencies; however, the source of image contrast remains to be verified using a modality with a comparable sensing scheme. To investigate the primary contrast mechanism, a pilot comparison study was performed in a rat burn-wound model, widely known to create detectable gradients in tissue hydration through both injured and surrounding tissue. Parallel T2-weighted multi-slice multi-echo (T2w MSME) 7T magnetic resonance (MR) scans and THz surface reflectance maps were acquired of a full-thickness skin burn in a rat model over a 5-hour period. A comparison of uninjured and injured regions in the full-thickness burn demonstrates a 3-fold increase in average T2 relaxation times and a 15% increase in average THz reflectivity. These results support the sensitivity and specificity of MRI for measuring in vivo burn tissue water content and the use of this modality to verify and understand the hydration sensing capabilities of THz imaging for acute assessment of the onset and evolution of diseases that affect the skin. As a starting point for more sophisticated in vivo studies, this preliminary analysis may be used in the future to explore how, and to what extent, the release of unbound water affects imaging contrast in THz burn sensing.
MRI-guided fiber-based fluorescence molecular tomography for preclinical atherosclerosis imaging
NASA Astrophysics Data System (ADS)
Li, Baoqiang; Pouliot, Philippe; Lesage, Frederic
2014-09-01
Multi-modal imaging combining fluorescence molecular tomography (FMT) with MRI could provide information from both modalities and optimize the recovery of functional information through MR guidance. Here, we present an MRI-guided FMT system. An optical probe was designed consisting of fiber plates on the top and bottom sides of the animal bed. In the experiment, the animal was positioned between the two plates. With fibers mounted on each plate, transmission measurements could be acquired from both sides of the animal. Moreover, accurate fluorescence reconstruction was achieved with MRI-derived anatomical guidance. The sensitivity of the FMT system was evaluated with a phantom, showing that, with long fibers, a 10 nM Cy5.5 solution could be detected at ~28.5 dB in the phantom. The system was then used to image MMP activity involved in atherosclerosis in two ATX mice and two control mice. The reconstruction results were in agreement with ex vivo measurements.
NASA Astrophysics Data System (ADS)
Blume, H.; Alexandru, R.; Applegate, R.; Giordano, T.; Kamiya, K.; Kresina, R.
1986-06-01
In a digital diagnostic imaging department, the majority of operations for handling and processing of images can be grouped into a small set of basic operations, such as image data buffering and storage, image processing and analysis, image display, image data transmission and image data compression. These operations occur in almost all nodes of the diagnostic imaging communications network of the department. An image processor architecture was developed in which each of these functions has been mapped into hardware and software modules. The modular approach has advantages in terms of economics, service, expandability and upgradeability. The architectural design is based on the principles of hierarchical functionality, distributed and parallel processing and aims at real-time response. Parallel processing and real-time response are facilitated in part by a dual bus system: a VME control bus and a high-speed image data bus consisting of 8 independent parallel 16-bit busses, capable of handling a combined throughput of up to 144 MBytes/sec. The presented image processor is versatile enough to meet the video-rate processing needs of digital subtraction angiography, the large pixel matrix processing requirements of static projection radiography, or the broad range of manipulation and display needs of a multi-modality diagnostic workstation. Several hardware modules are described in detail. To illustrate the capabilities of the image processor, processed 2000 x 2000 pixel computed radiographs are shown and estimated computation times for executing the processing operations are presented.
Dynamic characteristics of a wind turbine blade using 3D digital image correlation
NASA Astrophysics Data System (ADS)
Baqersad, Javad; Carr, Jennifer; Lundstrom, Troy; Niezrecki, Christopher; Avitabile, Peter; Slattery, Micheal
2012-04-01
Digital image correlation (DIC) has become increasingly popular as a means of structural health monitoring because of its full-field, non-contacting measurement ability. In this paper, 3D DIC techniques are used to identify the mode shapes of a wind turbine blade. The blade, fitted with a set of optical targets, is excited at different frequencies using a shaker as well as a pluck test. The response is recorded using two PHOTRON™ high-speed cameras. Time-domain data are transformed to the frequency domain to extract mode shapes and natural frequencies using an Operational Modal Approach. A finite element model of the blade is also used for comparison of the mode shapes. Furthermore, a modal hammer impact test is performed using the more conventional approach with an accelerometer. A comparison of mode shapes from the photogrammetric, finite element, and impact test approaches is presented to show the accuracy of the DIC measurement approach.
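The frequency-domain step mentioned above, transforming point-motion time histories and picking resonance peaks, can be sketched as follows. The windowing, sampling rate, and simple peak picking are illustrative assumptions, not the operational modal analysis procedure used in the paper.

```python
import numpy as np
from scipy.signal import find_peaks

def natural_frequencies(displacement, fs, n_peaks=3):
    """Estimate candidate natural frequencies from a single target's
    displacement time history (illustrative peak picking only).

    displacement : 1-D array of displacement samples from DIC tracking
    fs           : sampling rate of the high-speed cameras [Hz]
    """
    x = displacement - np.mean(displacement)
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    peaks, props = find_peaks(spectrum, height=0.1 * spectrum.max())
    # Keep the strongest peaks as candidate natural frequencies.
    order = np.argsort(props["peak_heights"])[::-1][:n_peaks]
    return np.sort(freqs[peaks[order]])
```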
Learning of Multimodal Representations With Random Walks on the Click Graph.
Wu, Fei; Lu, Xinyan; Song, Jun; Yan, Shuicheng; Zhang, Zhongfei Mark; Rui, Yong; Zhuang, Yueting
2016-02-01
In multimedia information retrieval, most classic approaches tend to represent different modalities of media in the same feature space. With the click data collected from users' searching behavior, existing approaches take either one-to-one paired data (text-image pairs) or ranking examples (text-query-image and/or image-query-text ranking lists) as training examples, which do not make full use of the click data, particularly the implicit connections among the data objects. In this paper, we treat the click data as a large click graph, in which vertices are images/text queries and edges indicate the clicks between an image and a query. We consider learning a multimodal representation from the perspective of encoding the explicit/implicit relevance relationships between the vertices in the click graph. By minimizing both the truncated random walk loss and the distance between the learned representation of vertices and their corresponding deep neural network output, the proposed model, named multimodal random walk neural network (MRW-NN), can be applied not only to learn robust representations of the existing multimodal data in the click graph, but also to deal with unseen queries and images to support cross-modal retrieval. We evaluate the latent representation learned by MRW-NN on Clickture, a public large-scale click log data set, and further show that MRW-NN achieves much better cross-modal retrieval performance on unseen queries/images than other state-of-the-art methods.
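The truncated random walk component can be illustrated with a small sketch that samples short walks over a bipartite query-image click graph to produce relevance training pairs. The graph layout, walk length, and pair generation are assumptions for illustration, not the MRW-NN implementation.

```python
import random
from collections import defaultdict

def build_click_graph(click_log):
    """click_log: iterable of (query, image) click records -> adjacency sets."""
    graph = defaultdict(set)
    for query, image in click_log:
        graph[("q", query)].add(("i", image))
        graph[("i", image)].add(("q", query))
    return graph

def truncated_random_walks(graph, walk_len=5, walks_per_node=10, seed=0):
    """Sample short walks; consecutive vertices become positive training pairs."""
    rng = random.Random(seed)
    pairs = []
    for start in graph:
        for _ in range(walks_per_node):
            node = start
            for _ in range(walk_len):
                neighbors = list(graph[node])
                if not neighbors:
                    break
                nxt = rng.choice(neighbors)
                pairs.append((node, nxt))   # implicit relevance implied by the click graph
                node = nxt
    return pairs
```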
NASA Astrophysics Data System (ADS)
Yoon, Kyungho; Lee, Wonhye; Croce, Phillip; Cammalleri, Amanda; Yoo, Seung-Schik
2018-05-01
Transcranial focused ultrasound (tFUS) is emerging as a non-invasive brain stimulation modality. Complicated interactions between acoustic pressure waves and osseous tissue introduce many challenges in the accurate targeting of an acoustic focus through the cranium. Image-guidance accompanied by a numerical simulation is desired to predict the intracranial acoustic propagation through the skull; however, such simulations typically demand heavy computation, which warrants an expedited processing method to provide on-site feedback for the user in guiding the acoustic focus to a particular brain region. In this paper, we present a multi-resolution simulation method based on the finite-difference time-domain formulation to model the transcranial propagation of acoustic waves from a single-element transducer (250 kHz). The multi-resolution approach improved computational efficiency by providing flexibility in adjusting the spatial resolution. The simulation was also accelerated by utilizing parallelized computation on the graphics processing unit. To evaluate the accuracy of the method, we measured the actual acoustic fields through ex vivo sheep skulls at different sonication incident angles. The measured acoustic fields were compared to the simulation results in terms of focal location, dimensions, and pressure levels. The computational efficiency of the presented method was also assessed by comparing simulation speeds at various combinations of resolution grid settings. The multi-resolution grids consisting of 0.5 and 1.0 mm resolutions gave acceptable accuracy (under 3 mm in terms of focal position and dimension, less than 5% difference in peak pressure ratio) at a speed compatible with semi-real-time user feedback (within 30 s). The proposed multi-resolution approach may serve as a novel tool for simulation-based guidance for tFUS applications.
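For orientation, the core finite-difference time-domain pressure update on a single uniform grid looks like the sketch below (homogeneous medium, second-order scheme, periodic boundaries for brevity). The multi-resolution gridding, skull acoustic properties, and GPU parallelization described above are omitted, and the numbers are illustrative.

```python
import numpy as np

def fdtd_2d(nx=200, ny=200, dx=0.5e-3, c=1500.0, n_steps=400, f0=250e3):
    """Minimal uniform-grid 2-D acoustic FDTD sketch (illustrative only).

    dx : grid spacing [m] (0.5 mm), c : speed of sound [m/s],
    f0 : source frequency [Hz] matching the 250 kHz transducer.
    """
    dt = 0.5 * dx / (c * np.sqrt(2.0))          # CFL-stable time step
    p_prev = np.zeros((nx, ny))
    p = np.zeros((nx, ny))
    coef = (c * dt / dx) ** 2
    for step in range(n_steps):
        lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
               np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4.0 * p)
        p_next = 2.0 * p - p_prev + coef * lap
        # Continuous-wave point source near one edge (stand-in for the transducer).
        p_next[nx // 2, 5] += np.sin(2.0 * np.pi * f0 * step * dt)
        p_prev, p = p, p_next
    return p
```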
NASA Astrophysics Data System (ADS)
Margitus, Michael R.; Tagliaferri, William A., Jr.; Sudit, Moises; LaMonica, Peter M.
2012-06-01
Understanding the structure and dynamics of networks is of vital importance to winning the global war on terror. To fully comprehend the network environment, analysts must be able to investigate the interconnected relationships of many diverse network types simultaneously as they evolve both spatially and temporally. To remove from the analyst the burden of making mental correlations of observations and conclusions from multiple domains, we introduce the Dynamic Graph Analytic Framework (DYGRAF). DYGRAF provides the infrastructure that facilitates a layered multi-modal network analysis (LMMNA) approach, enabling analysts to assemble previously disconnected, yet related, networks into a common battle space picture. In doing so, DYGRAF provides the analyst with timely situation awareness, understanding and anticipation of threats, and support for effective decision-making in diverse environments.
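One minimal way to represent the layered, multi-modal networks that an LMMNA-style analysis operates on is a single multigraph whose edges are tagged with the layer (modality) they came from. The NetworkX-based sketch below illustrates that data structure only; it is not the DYGRAF implementation, and the layer names are hypothetical.

```python
import networkx as nx

def assemble_layers(layered_edges):
    """layered_edges: dict mapping layer name -> iterable of (u, v) edges,
    e.g. {"comms": [...], "financial": [...], "social": [...]}.
    Returns one multigraph carrying all layers for joint analysis."""
    G = nx.MultiGraph()
    for layer, edges in layered_edges.items():
        for u, v in edges:
            G.add_edge(u, v, layer=layer)
    return G

# Example: entities linked in different modalities end up in one common picture.
G = assemble_layers({"comms": [("A", "B")], "financial": [("B", "C")]})
print(G.number_of_edges(), sorted(G.nodes()))
```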
MMX-I: A data-processing software for multi-modal X-ray imaging and tomography
NASA Astrophysics Data System (ADS)
Bergamaschi, A.; Medjoubi, K.; Messaoudi, C.; Marco, S.; Somogyi, A.
2017-06-01
Scanning hard X-ray imaging allows simultaneous acquisition of multimodal information, including X-ray fluorescence, absorption, phase and dark-field contrasts, providing structural and chemical details of the samples. Combining these scanning techniques with the infrastructure developed for fast data acquisition at Synchrotron Soleil makes it possible to perform multimodal imaging and tomography during routine user experiments at the Nanoscopium beamline. A main challenge of such imaging techniques is the online processing and analysis of the very large (several hundred gigabytes) multimodal datasets generated. This is especially important for the wide user community foreseen at the user-oriented Nanoscopium beamline (e.g. from the fields of biology, life sciences, geology and geobiology), much of which has no experience with such data handling. MMX-I is a new multi-platform open-source freeware for the processing and reconstruction of scanning multi-technique X-ray imaging and tomographic datasets. The MMX-I project aims to offer both expert users and beginners the possibility of processing and analysing raw data, either on-site or off-site. We have therefore developed a multi-platform (Mac, Windows and Linux 64-bit) data processing tool that is easy to install, comprehensive, intuitive, extendable and user-friendly. MMX-I is now routinely used by the Nanoscopium user community and has demonstrated its performance in handling big data.
Automatic classification and detection of clinically relevant images for diabetic retinopathy
NASA Astrophysics Data System (ADS)
Xu, Xinyu; Li, Baoxin
2008-03-01
We propose a novel approach to automatic classification of Diabetic Retinopathy (DR) images and retrieval of clinically-relevant DR images from a database. Given a query image, our approach first classifies the image into one of the three categories: microaneurysm (MA), neovascularization (NV) and normal, and then it retrieves DR images that are clinically-relevant to the query image from an archival image database. In the classification stage, the query DR images are classified by the Multi-class Multiple-Instance Learning (McMIL) approach, where images are viewed as bags, each of which contains a number of instances corresponding to non-overlapping blocks, and each block is characterized by low-level features including color, texture, histogram of edge directions, and shape. McMIL first learns a collection of instance prototypes for each class that maximizes the Diverse Density function using an Expectation-Maximization algorithm. A nonlinear mapping is then defined using the instance prototypes and maps every bag to a point in a new multi-class bag feature space. Finally, a multi-class Support Vector Machine is trained in the multi-class bag feature space. In the retrieval stage, we retrieve images from the archival database that bear the same label as the query image and that are the top K nearest neighbors of the query image in terms of similarity in the multi-class bag feature space. The classification approach achieves high classification accuracy, and the retrieval of clinically-relevant images not only facilitates utilization of the vast amount of hidden diagnostic knowledge in the database, but also improves the efficiency and accuracy of DR lesion diagnosis and assessment.
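The bag-to-feature mapping and multi-class SVM stage can be sketched roughly as below, where each bag is embedded by its best match to a set of instance prototypes. The Gaussian similarity, the prototypes being passed in already learned, and the scikit-learn SVM are stand-ins for the McMIL learning procedure described in the abstract.

```python
import numpy as np
from sklearn.svm import SVC

def bag_features(bags, prototypes, sigma=1.0):
    """Map each bag (set of block-level feature vectors) to a fixed-length
    vector: the best Gaussian similarity of any instance to each prototype.

    bags       : list of (n_i, d) arrays of instance features
    prototypes : (k, d) array of instance prototypes (assumed already learned)
    """
    feats = np.zeros((len(bags), len(prototypes)))
    for b, inst in enumerate(bags):
        d2 = ((inst[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
        feats[b] = np.exp(-d2 / (2.0 * sigma ** 2)).max(axis=0)
    return feats

def train_and_retrieve(train_bags, train_labels, prototypes, query_bag, k=5):
    X = bag_features(train_bags, prototypes)
    clf = SVC(kernel="rbf").fit(X, train_labels)      # multi-class SVM (one-vs-one)
    q = bag_features([query_bag], prototypes)
    label = clf.predict(q)[0]
    # Retrieve the k nearest same-label bags in the bag feature space.
    same = np.where(np.asarray(train_labels) == label)[0]
    dists = np.linalg.norm(X[same] - q, axis=1)
    return label, same[np.argsort(dists)[:k]]
```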
A Multi-modal, Discriminative and Spatially Invariant CNN for RGB-D Object Labeling.
Asif, Umar; Bennamoun, Mohammed; Sohel, Ferdous
2017-08-30
While deep convolutional neural networks have shown remarkable success in image classification, the problems of inter-class similarities, intra-class variances, the effective combination of multimodal data, and the spatial variability in images of objects remain major challenges. To address these problems, this paper proposes a novel framework to learn a discriminative and spatially invariant classification model for object and indoor scene recognition using multimodal RGB-D imagery. This is achieved through three postulates: 1) spatial invariance - this is achieved by combining a spatial transformer network with a deep convolutional neural network to learn features which are invariant to spatial translations, rotations, and scale changes, 2) high discriminative capability - this is achieved by introducing Fisher encoding within the CNN architecture to learn features which have small inter-class similarities and large intra-class compactness, and 3) multimodal hierarchical fusion - this is achieved through the regularization of semantic segmentation to a multi-modal CNN architecture, where class probabilities are estimated at different hierarchical levels (i.e., image- and pixel-levels), and fused into a Conditional Random Field (CRF)-based inference hypothesis, the optimization of which produces consistent class labels in RGB-D images. Extensive experimental evaluations on RGB-D object and scene datasets, and live video streams (acquired from Kinect) show that our framework produces superior object and scene classification results compared to the state-of-the-art methods.
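The first postulate, spatial invariance via a spatial transformer placed in front of the classification CNN, can be sketched in PyTorch as below. The localization-network layers, pooling sizes, and single-modality input are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformer(nn.Module):
    """Predicts a 2-D affine transform and resamples the input accordingly,
    so downstream features tolerate translation, rotation, and scale changes."""

    def __init__(self, in_channels=3):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(in_channels, 8, kernel_size=7), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),                  # fixed-size summary of the input
        )
        self.fc = nn.Linear(8 * 4 * 4, 6)
        # Initialize to the identity transform.
        self.fc.weight.data.zero_()
        self.fc.bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        theta = self.fc(self.loc(x).flatten(1)).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

# Usage: warp RGB (or depth) inputs before the classification CNN.
x = torch.randn(2, 3, 64, 64)
print(SpatialTransformer()(x).shape)   # torch.Size([2, 3, 64, 64])
```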
NASA Astrophysics Data System (ADS)
Malof, Jordan M.; Collins, Leslie M.
2016-05-01
Many remote sensing modalities have been developed for buried target detection (BTD), each one offering relative advantages over the others. There has been interest in combining several modalities into a single BTD system that benefits from the advantages of each constituent sensor. Recently, an approach called multi-state management (MSM) was developed that aims to achieve this goal by separating BTD system operation into discrete states, each with different sensor activity and system velocity. Additionally, a modeling approach, called Q-MSM, was developed to quickly analyze multi-modality BTD systems operating with MSM. This work extends previous work by demonstrating how Q-MSM modeling can be used to design BTD systems operating with MSM and to guide research toward the greatest performance benefits. In this work, an MSM system is considered that combines a forward-looking infrared (FLIR) camera and a ground penetrating radar (GPR). Experiments are conducted using a dataset of real, field-collected data, which demonstrate how the Q-MSM model can be used to evaluate the performance benefits of altering, or improving via research investment, various characteristics of the GPR and FLIR systems. Q-MSM permits fast analysis that can determine where system improvements will have the greatest impact, and can therefore help guide BTD research.
Mehta, Nehal N; Torigian, Drew A; Gelfand, Joel M; Saboury, Babak; Alavi, Abass
2012-05-02
Conventional non-invasive imaging modalities of atherosclerosis such as coronary artery calcium (CAC) and carotid intimal medial thickness (C-IMT) provide information about the burden of disease. However, despite multiple validation studies of CAC and C-IMT, these modalities do not accurately assess plaque characteristics, and it is the composition and inflammatory state of the plaque that determine its stability and, therefore, the risk of clinical events. [(18)F]-2-fluoro-2-deoxy-D-glucose (FDG) imaging using positron-emission tomography (PET)/computed tomography (CT) has been extensively studied in oncologic metabolism. Studies using animal models and immunohistochemistry in humans show that FDG-PET/CT is exquisitely sensitive for detecting macrophage activity, an important source of cellular inflammation in vessel walls. More recently, we and others have shown that FDG-PET/CT enables highly precise, novel measurements of the inflammatory activity of atherosclerotic plaques in large and medium-sized arteries. FDG-PET/CT studies have many advantages over other imaging modalities: 1) high contrast resolution; 2) quantification of plaque volume and metabolic activity allowing for multi-modal atherosclerotic plaque quantification; 3) dynamic, real-time, in vivo imaging; 4) minimal operator dependence. Finally, vascular inflammation detected by FDG-PET/CT has been shown to predict cardiovascular (CV) events independent of traditional risk factors and is also highly associated with the overall burden of atherosclerosis. Plaque activity by FDG-PET/CT is modulated by known beneficial CV interventions such as short-term (12-week) statin therapy as well as longer-term therapeutic lifestyle changes (16 months). The current methodology for quantification of FDG uptake in atherosclerotic plaque involves measurement of the standardized uptake value (SUV) of an artery of interest and of the venous blood pool in order to calculate a target-to-background ratio (TBR), obtained by dividing the arterial SUV by the venous blood-pool SUV. This method has been shown to represent a stable, reproducible phenotype over time, has a high sensitivity for the detection of vascular inflammation, and also has high inter- and intra-reader reliability. Here we present our methodology for patient preparation, image acquisition, and quantification of atherosclerotic plaque activity and vascular inflammation using SUV, TBR, and a global parameter called the metabolic volumetric product (MVP). These approaches may be applied to assess vascular inflammation in various study samples of interest in a consistent fashion, as we have shown in several prior publications.
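The target-to-background ratio described above is a simple quotient, reflected in the small helper below. The metabolic volumetric product is computed here as mean SUV times region volume purely as an illustrative assumption, since its exact definition is not given in this excerpt.

```python
def target_to_background_ratio(arterial_suv, venous_blood_pool_suv):
    """TBR = arterial SUV divided by the venous blood-pool SUV."""
    return arterial_suv / venous_blood_pool_suv

def metabolic_volumetric_product(mean_suv, volume_ml):
    """Illustrative MVP: mean SUV of the region times its volume (assumed form)."""
    return mean_suv * volume_ml

# Example: an arterial SUV of 2.4 against a blood-pool SUV of 1.5 gives TBR = 1.6.
print(target_to_background_ratio(2.4, 1.5))
```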
Eren, Suat
2010-01-01
Objective: To evaluate the efficacy of multi-detector row CT (MDCT) in pelvic congestion syndrome (PCS), which is often overlooked or poorly visualized with routine imaging examinations. Materials and Methods: We evaluated the MDCT features of 40 patients with PCS (mean age, 45 years; range, 29–60 years) using axial, coronal, sagittal, 3D volume-rendered, and maximum intensity projection (MIP) images. Results: MDCT revealed pelvic varices and ovarian vein dilatation in all patients. Bilateral ovarian vein dilatation was present in 25 patients, and 15 patients had unilateral dilatation. While 12 cases of secondary pelvic varices occurred simultaneously with a retroaortic left renal vein, 10 cases were due solely to mass obstruction or stenosis of venous structures. Conclusion: MDCT is an effective tool in the evaluation of PCS, and it offers advantages over other imaging modalities. PMID:25610142