Sample records for calibrating MRI machines

  1. MR signal intensity: staying on the bright side in MR image interpretation

    PubMed Central

    Bloem, Johan L; Reijnierse, Monique; Huizinga, Tom W J

    2018-01-01

    In 2003, the Nobel Prize for Medicine was awarded for contribution to the invention of MRI, reflecting the incredible value of MRI for medicine. Since 2003, enormous technical advancements have been made in acquiring MR images. However, MRI has a complicated, accident-prone dark side; images are not calibrated and respective images are dependent on all kinds of subjective choices in the settings of the machine, acquisition technique parameters, reconstruction techniques, data transmission, filtering and postprocessing techniques. The bright side is that understanding MR techniques increases opportunities to unravel characteristics of tissue. In this viewpoint, we summarise the different subjective choices that can be made to generate MR images and stress the importance of communication between radiologists and rheumatologists to correctly interpret images.

  2. A new vibrator to stimulate muscle proprioceptors in fMRI.

    PubMed

    Montant, Marie; Romaiguère, Patricia; Roll, Jean-Pierre

    2009-03-01

    Studying cognitive brain functions by functional magnetic resonance imaging (fMRI) requires appropriate stimulation devices that do not interfere with the magnetic fields. Since the emergence of fMRI in the 1990s, a number of stimulation devices have been developed for the visual and auditory modalities. Only a few devices, however, have been developed for the somesthesic modality. Here, we present a vibration device for studying somesthesia that is compatible with high magnetic field environments and that can be used in fMRI machines. This device consists of a polyvinyl chloride (PVC) vibrator containing a wind turbine and a pneumatic apparatus that controls 1-6 vibrators simultaneously. Just like classical electromagnetic vibrators, our device stimulates muscle mechanoreceptors (muscle spindles) and generates reliable illusions of movement. We provide the fMRI compatibility data (phantom test), the calibration curve (vibration frequency as a function of air flow), as well as the results of a kinesthetic test (perceived speed of the illusory movement as a function of vibration frequency). This device was used successfully in several brain imaging studies using both fMRI and magnetoencephalography.

  3. A Contemporary Approach for Evaluation of the Best Measurement Capability of a Force Calibration Machine

    NASA Astrophysics Data System (ADS)

    Kumar, Harish

    The present paper discusses the procedure for evaluation of the best measurement capability of a force calibration machine. The best measurement capability of a force calibration machine is evaluated by comparison, through precision force transfer standards, to force standard machines. The force transfer standards are calibrated by the force standard machine and then by the force calibration machine following a similar procedure. The results are reported and discussed in the paper for a force calibration machine of 200 kN capacity. Different force transfer standards of nominal capacity 20 kN, 50 kN and 200 kN are used. It is found that there are significant variations in the uncertainty of force realization by the force calibration machine according to the proposed method in comparison to the earlier method adopted.

  4. Multivariate analysis of fMRI time series: classification and regression of brain responses using machine learning.

    PubMed

    Formisano, Elia; De Martino, Federico; Valente, Giancarlo

    2008-09-01

    Machine learning and pattern recognition techniques are being increasingly employed in functional magnetic resonance imaging (fMRI) data analysis. By taking into account the full spatial pattern of brain activity measured simultaneously at many locations, these methods allow detecting subtle, non-strictly localized effects that may remain invisible to the conventional analysis with univariate statistical methods. In typical fMRI applications, pattern recognition algorithms "learn" a functional relationship between brain response patterns and a perceptual, cognitive or behavioral state of a subject expressed in terms of a label, which may assume discrete (classification) or continuous (regression) values. This learned functional relationship is then used to predict the unseen labels from a new data set ("brain reading"). In this article, we describe the mathematical foundations of machine learning applications in fMRI. We focus on two methods, support vector machines and relevance vector machines, which are respectively suited for the classification and regression of fMRI patterns. Furthermore, by means of several examples and applications, we illustrate and discuss the methodological challenges of using machine learning algorithms in the context of fMRI data analysis.
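The classification setting the authors describe can be sketched with a linear support vector machine on synthetic data (scikit-learn stands in for the authors' tooling; the "voxel patterns" below are simulated for illustration, not real fMRI):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic multivoxel patterns: 100 "trials" x 50 "voxels". The two
# conditions differ by a weak distributed pattern -- hard to see in any
# single voxel, but detectable multivariately.
n_trials, n_voxels = 100, 50
labels = rng.integers(0, 2, n_trials)           # discrete condition labels
pattern = 0.5 * rng.standard_normal(n_voxels)   # fixed spatial signature
X = rng.standard_normal((n_trials, n_voxels)) + np.outer(labels, pattern)

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                          random_state=0)

# A linear SVM "learns" the pattern-to-label mapping, then predicts the
# unseen labels of held-out trials ("brain reading").
clf = SVC(kernel="linear").fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"decoding accuracy: {accuracy:.2f}")
```

Swapping the discrete labels for continuous ones (and `SVC` for a regression model) gives the regression variant discussed in the abstract.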

  5. Quantitative magnetic resonance imaging phantoms: A review and the need for a system phantom.

    PubMed

    Keenan, Kathryn E; Ainslie, Maureen; Barker, Alex J; Boss, Michael A; Cecil, Kim M; Charles, Cecil; Chenevert, Thomas L; Clarke, Larry; Evelhoch, Jeffrey L; Finn, Paul; Gembris, Daniel; Gunter, Jeffrey L; Hill, Derek L G; Jack, Clifford R; Jackson, Edward F; Liu, Guoying; Russek, Stephen E; Sharma, Samir D; Steckner, Michael; Stupic, Karl F; Trzasko, Joshua D; Yuan, Chun; Zheng, Jie

    2018-01-01

    The MRI community is using quantitative mapping techniques to complement qualitative imaging. For quantitative imaging to reach its full potential, it is necessary to analyze measurements across systems and longitudinally. Clinical use of quantitative imaging can be facilitated through adoption and use of a standard system phantom, a calibration/standard reference object, to assess the performance of an MRI machine. The International Society of Magnetic Resonance in Medicine AdHoc Committee on Standards for Quantitative Magnetic Resonance was established in February 2007 to facilitate the expansion of MRI as a mainstream modality for multi-institutional measurements, including, among other things, multicenter trials. The goal of the Standards for Quantitative Magnetic Resonance committee was to provide a framework to ensure that quantitative measures derived from MR data are comparable over time, between subjects, between sites, and between vendors. This paper, written by members of the Standards for Quantitative Magnetic Resonance committee, reviews standardization attempts and then details the need, requirements, and implementation plan for a standard system phantom for quantitative MRI. In addition, application-specific phantoms and implementation of quantitative MRI are reviewed. Magn Reson Med 79:48-61, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  6. Using machine learning for sequence-level automated MRI protocol selection in neuroradiology.

    PubMed

    Brown, Andrew D; Marotta, Thomas R

    2018-05-01

    Incorrect imaging protocol selection can lead to important clinical findings being missed, contributing to both wasted health care resources and patient harm. We present a machine learning method for analyzing the unstructured text of clinical indications and patient demographics from magnetic resonance imaging (MRI) orders to automatically protocol MRI procedures at the sequence level. We compared 3 machine learning models - support vector machine, gradient boosting machine, and random forest - to a baseline model that predicted the most common protocol for all observations in our test set. The gradient boosting machine model significantly outperformed the baseline and demonstrated the best performance of the 3 models in terms of accuracy (95%), precision (86%), recall (80%), and Hamming loss (0.0487). This demonstrates the feasibility of automating sequence selection by applying machine learning to MRI orders. Automated sequence selection has important safety, quality, and financial implications and may facilitate improvements in the quality and safety of medical imaging service delivery.
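The reported metrics can be made concrete for sequence-level protocoling, where each order maps to a subset of candidate sequences and Hamming loss is the fraction of mispredicted sequence slots. A minimal sketch with an invented sequence menu and predictions:

```python
# Each MRI order is protocolled as a binary vector over candidate sequences.
SEQUENCES = ["T1", "T2", "FLAIR", "DWI", "GRE", "T1+C"]  # hypothetical menu

# True vs predicted sequence selections for three orders (illustrative).
y_true = [
    [1, 1, 1, 1, 0, 0],
    [1, 0, 1, 1, 1, 1],
    [1, 1, 0, 1, 0, 0],
]
y_pred = [
    [1, 1, 1, 1, 0, 0],   # exact match
    [1, 0, 1, 1, 0, 1],   # one sequence missed
    [1, 1, 1, 1, 0, 0],   # one sequence over-called
]

def hamming_loss(t, p):
    """Fraction of label slots that disagree, over all orders."""
    slots = sum(len(row) for row in t)
    wrong = sum(a != b for rt, rp in zip(t, p) for a, b in zip(rt, rp))
    return wrong / slots

def precision_recall(t, p):
    """Micro-averaged precision and recall over sequence slots."""
    tp = sum(a == 1 and b == 1 for rt, rp in zip(t, p) for a, b in zip(rt, rp))
    fp = sum(a == 0 and b == 1 for rt, rp in zip(t, p) for a, b in zip(rt, rp))
    fn = sum(a == 1 and b == 0 for rt, rp in zip(t, p) for a, b in zip(rt, rp))
    return tp / (tp + fp), tp / (tp + fn)

loss = hamming_loss(y_true, y_pred)
prec, rec = precision_recall(y_true, y_pred)
print(f"Hamming loss={loss:.4f}  precision={prec:.2f}  recall={rec:.2f}")
```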

  7. Linear positioning laser calibration setup of CNC machine tools

    NASA Astrophysics Data System (ADS)

    Sui, Xiulin; Yang, Congjing

    2002-10-01

    The linear positioning laser calibration setup of CNC machine tools is capable of executing machine tool laser calibration and backlash compensation. Using this setup, hole locations on CNC machine tools will be correct and machine tool geometry will be evaluated and adjusted. Machine tool laser calibration and backlash compensation is a simple and straightforward process. First, the setup 'finds' the stroke limits of the axis and the laser head is brought into correct alignment. Second, the machine axis is moved to the other extreme and the laser head is aligned using rotation and elevation adjustments. Finally, the machine is moved to the start position and final alignment is verified. The stroke of the machine and the machine compensation interval dictate the amount of data required for each axis. These factors determine the amount of time required for a thorough compensation of the linear positioning accuracy. The Laser Calibrator System monitors the material temperature and the air density; this takes into consideration machine thermal growth and laser beam frequency. This linear positioning laser calibration setup can be used on CNC machine tools, CNC lathes, horizontal machining centers and vertical machining centers.
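The compensation data such a setup produces can be sketched as a simple table: at each compensation interval the laser-measured position is compared against the commanded position, and the signed error becomes the correction stored in the controller (all numbers below are invented for illustration):

```python
# Commanded axis positions at each compensation interval (mm) and the
# corresponding laser-measured positions (mm); values are illustrative.
interval = 50.0
commanded = [0.0, 50.0, 100.0, 150.0, 200.0]
measured = [0.000, 50.012, 100.021, 150.035, 200.044]

# Compensation = commanded - measured: the controller applies this signed
# correction to cancel the accumulated positioning error at each point.
compensation = [round(c - m, 4) for c, m in zip(commanded, measured)]

for c, comp in zip(commanded, compensation):
    print(f"{c:7.1f} mm -> compensation {comp:+.4f} mm")
```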

  8. WE-G-BRB-08: TG-51 Calibration of First Commercial MRI-Guided IMRT System in the Presence of 0.35 Tesla Magnetic Field.

    PubMed

    Goddu, S; Green, O Pechenaya; Mutic, S

    2012-06-01

    The first real-time-MRI-guided radiotherapy system has been installed in a clinic and it is being evaluated. Presence of magnetic field (MF) during radiation output calibration may have implications on ionization measurements and there is a possibility that standard calibration protocols may not be suitable for dose measurements for such devices. In this study, we evaluated whether a standard calibration protocol (AAPM TG-51) is appropriate for absolute dose measurement in the presence of MF. Treatment delivery of the ViewRay (VR) system is via three 15,000 Ci Cobalt-60 heads positioned 120 degrees apart, and all calibration measurements were done in the presence of the 0.35 T MF. Two ADCL-calibrated ionization chambers (Exradin A12, A16) were used for TG-51 calibration. Chambers were positioned at 5 cm depth (SSD = 105 cm: VR's isocenter), and the MLC leaves were shaped to a 10.5 cm × 10.5 cm field size. Percent-depth-dose (PDD) measurements were performed for 5 and 10 cm depths. Individual output of each head was measured using the AAPM TG-51 protocol. Calibration accuracy for each head was subsequently verified by Radiological Physics Center (RPC) TLD measurements. Measured ion-recombination (Pion) and polarity (Ppol) correction factors were less than 1.002 and 1.006, respectively. Measured PDDs agreed with BJR-25 within ±0.2%. Maximum dose rates for the reference field size at VR's isocenter for heads 1, 2 and 3 were 1.445±0.005, 1.446±0.107, 1.431±0.006 Gy/minute, respectively. Our calibrations agreed with RPC TLD measurements within ±1.3%, ±2.6% and ±2.0% for treatment heads 1, 2 and 3, respectively. At the time of calibration, mean activity of the Co-60 sources was 10,800 Ci ± 0.1%. This study shows that TG-51 calibration is feasible in the presence of a 0.35 T MF and the measurement agreement is within the range of results obtainable for conventional treatment machines. Drs. Green, Goddu, and Mutic served as scientific consultants for ViewRay, Inc. Dr. Mutic is on the clinical focus group for ViewRay, Inc., and his spouse holds shares in ViewRay, Inc. © 2012 American Association of Physicists in Medicine.
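The correction factors quoted above enter the TG-51 dose equation multiplicatively. A minimal sketch of that arithmetic, assuming placeholder chamber coefficients (the N_Dw value is illustrative, not the study's; for Co-60, k_Q = 1 by definition):

```python
# TG-51: D_w = M_corrected * k_Q * N_Dw, with
# M_corrected = M_raw * P_ion * P_pol * P_TP * P_elec.

def corrected_reading(m_raw, p_ion, p_pol, p_tp=1.0, p_elec=1.0):
    """Fully corrected electrometer reading (nC)."""
    return m_raw * p_ion * p_pol * p_tp * p_elec

def dose_to_water(m_corr, n_dw, k_q=1.0):
    """Absorbed dose to water at reference depth (Gy); k_Q = 1 for Co-60."""
    return m_corr * k_q * n_dw

# Illustrative numbers: a 20 nC raw reading with the worst-case Pion and
# Ppol from the abstract, and a placeholder N_Dw in Gy/nC.
m = corrected_reading(m_raw=20.0, p_ion=1.002, p_pol=1.006)
d = dose_to_water(m, n_dw=0.0537)
print(f"corrected reading {m:.3f} nC -> dose {d:.3f} Gy")
```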

  9. Vessel calibre—a potential MRI biomarker of tumour response in clinical trials

    PubMed Central

    Emblem, Kyrre E.; Farrar, Christian T.; Gerstner, Elizabeth R.; Batchelor, Tracy T.; Borra, Ronald J. H.; Rosen, Bruce R.; Sorensen, A. Gregory; Jain, Rakesh K.

    2015-01-01

    Our understanding of the importance of blood vessels and angiogenesis in cancer has increased considerably over the past decades, and the assessment of tumour vessel calibre and structure has become increasingly important for in vivo monitoring of therapeutic response. The preferred method for in vivo imaging of most solid cancers is MRI, and the concept of vessel-calibre MRI has evolved since its initial inception in the early 1990s. Almost a quarter of a century later, unlike traditional contrast-enhanced MRI techniques, vessel-calibre MRI remains widely inaccessible to the general clinical community. The narrow availability of the technique is, in part, attributable to limited awareness and a lack of imaging standardization. Thus, the role of vessel-calibre MRI in early phase clinical trials remains to be determined. By contrast, regulatory approvals of antiangiogenic agents that are not directly cytotoxic have created an urgent need for clinical trials incorporating advanced imaging analyses, going beyond traditional assessments of tumour volume. To this end, we review the field of vessel-calibre MRI and summarize the emerging evidence supporting the use of this technique to monitor response to anticancer therapy. We also discuss the potential use of this biomarker assessment in clinical imaging trials and highlight relevant avenues for future research. PMID:25113840

  10. Comparison of fMRI data analysis by SPM99 on different operating systems.

    PubMed

    Shinagawa, Hideo; Honda, Ei-ichi; Ono, Takashi; Kurabayashi, Tohru; Ohyama, Kimie

    2004-09-01

    The hardware chosen for fMRI data analysis may depend on the platform already present in the laboratory or the supporting software. In this study, we ran SPM99 software on multiple platforms to examine whether we could analyze fMRI data by SPM99, and to compare their differences and limitations in processing fMRI data, which can be attributed to hardware capabilities. Six normal right-handed volunteers participated in a study of hand-grasping to obtain fMRI data. Each subject performed a run that consisted of 98 images. The run was measured using a gradient echo-type echo planar imaging sequence on a 1.5T apparatus with a head coil. We used several personal computer (PC), Unix and Linux machines to analyze the fMRI data. There were no differences in the results obtained on several PC, Unix and Linux machines. The only limitations in processing large amounts of fMRI data were found using PC machines. This suggests that the results obtained with different machines were not affected by differences in hardware components, such as the CPU, memory and hard drive. Rather, it is likely that the limitations in analyzing large amounts of fMRI data were due to differences in the operating system (OS).

  11. Efficient gradient calibration based on diffusion MRI.

    PubMed

    Teh, Irvin; Maguire, Mahon L; Schneider, Jürgen E

    2017-01-01

    To propose a method for calibrating gradient systems and correcting gradient nonlinearities based on diffusion MRI measurements. The gradient scalings in x, y, and z were first offset by up to 5% from precalibrated values to simulate a poorly calibrated system. Diffusion MRI data were acquired in a phantom filled with cyclooctane, and corrections for gradient scaling errors and nonlinearity were determined. The calibration was assessed with diffusion tensor imaging and independently validated with high resolution anatomical MRI of a second structured phantom. The errors in apparent diffusion coefficients along orthogonal axes ranged from -9.2% ± 0.4% to +8.8% ± 0.7% before calibration and -0.5% ± 0.4% to +0.8% ± 0.3% after calibration. Concurrently, fractional anisotropy decreased from 0.14 ± 0.03 to 0.03 ± 0.01. Errors in geometric measurements in x, y and z ranged from -5.5% to +4.5% precalibration and were likewise reduced to -0.97% to +0.23% postcalibration. Image distortions from gradient nonlinearity were markedly reduced. Periodic gradient calibration is an integral part of quality assurance in MRI. The proposed approach is both accurate and efficient, can be set up with readily available materials, and improves accuracy in both anatomical and diffusion MRI to within ±1%. Magn Reson Med 77:170-179, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. © 2016 Wiley Periodicals, Inc.
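The scaling correction at the heart of such a method can be sketched: the b-value scales with the square of the gradient amplitude, so a gradient scaling error s inflates the fitted ADC by s², and s is recovered from the ratio of measured to reference diffusivity (the reference value below is a placeholder, not the paper's calibrated cyclooctane figure; the per-axis offsets mimic the reported error range):

```python
import math

# Reference diffusivity of the phantom liquid at scan temperature
# (placeholder value, in units of 1e-9 m^2/s).
D_REF = 0.85

# ADCs fitted along each axis with the miscalibrated gradients; offsets
# mimic the -9.2% .. +8.8% errors reported in the abstract.
adc_measured = {"x": D_REF * 0.908, "y": D_REF * 1.088, "z": D_REF * 0.995}

# b ~ G**2, so ADC_meas = s**2 * D_REF for a gradient scaling error s.
scale_error = {ax: math.sqrt(adc / D_REF) for ax, adc in adc_measured.items()}

# Dividing the gradient amplitude by s recalibrates the axis; the
# re-fitted ADC then matches the reference diffusivity.
for ax, s in scale_error.items():
    corrected = adc_measured[ax] / s**2
    print(f"{ax}: gradient scaling error {100 * (s - 1):+.2f}%, "
          f"corrected ADC = {corrected:.3f}")
```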

  12. Efficient gradient calibration based on diffusion MRI

    PubMed Central

    Teh, Irvin; Maguire, Mahon L.

    2016-01-01

    Purpose To propose a method for calibrating gradient systems and correcting gradient nonlinearities based on diffusion MRI measurements. Methods The gradient scalings in x, y, and z were first offset by up to 5% from precalibrated values to simulate a poorly calibrated system. Diffusion MRI data were acquired in a phantom filled with cyclooctane, and corrections for gradient scaling errors and nonlinearity were determined. The calibration was assessed with diffusion tensor imaging and independently validated with high resolution anatomical MRI of a second structured phantom. Results The errors in apparent diffusion coefficients along orthogonal axes ranged from −9.2% ± 0.4% to +8.8% ± 0.7% before calibration and −0.5% ± 0.4% to +0.8% ± 0.3% after calibration. Concurrently, fractional anisotropy decreased from 0.14 ± 0.03 to 0.03 ± 0.01. Errors in geometric measurements in x, y and z ranged from −5.5% to +4.5% precalibration and were likewise reduced to −0.97% to +0.23% postcalibration. Image distortions from gradient nonlinearity were markedly reduced. Conclusion Periodic gradient calibration is an integral part of quality assurance in MRI. The proposed approach is both accurate and efficient, can be set up with readily available materials, and improves accuracy in both anatomical and diffusion MRI to within ±1%. Magn Reson Med 77:170–179, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. PMID:26749277

  13. Integrated calibration sphere and calibration step fixture for improved coordinate measurement machine calibration

    DOEpatents

    Clifford, Harry J [Los Alamos, NM

    2011-03-22

    A method and apparatus for mounting a calibration sphere to a calibration fixture for Coordinate Measurement Machine (CMM) calibration and qualification is described, decreasing the time required for such qualification, thus allowing the CMM to be used more productively. A number of embodiments are disclosed that allow for new and retrofit manufacture to perform as integrated calibration sphere and calibration fixture devices. This invention renders unnecessary the removal of a calibration sphere prior to CMM measurement of calibration features on calibration fixtures, thereby greatly reducing the time spent qualifying a CMM.

  14. Machine-learning in grading of gliomas based on multi-parametric magnetic resonance imaging at 3T.

    PubMed

    Citak-Er, Fusun; Firat, Zeynep; Kovanlikaya, Ilhami; Ture, Ugur; Ozturk-Isik, Esin

    2018-06-15

    The objective of this study was to assess the contribution of multi-parametric (mp) magnetic resonance imaging (MRI) quantitative features in the machine learning-based grading of gliomas with a multi-region-of-interest approach. Forty-three patients who were newly diagnosed as having a glioma were included in this study. The patients were scanned prior to any therapy using a standard brain tumor magnetic resonance (MR) imaging protocol that included T1 and T2-weighted, diffusion-weighted, diffusion tensor, MR perfusion and MR spectroscopic imaging. Three different regions-of-interest were drawn for each subject to encompass tumor, immediate tumor periphery, and distant peritumoral edema/normal. The normalized mp-MRI features were used to build machine-learning models for differentiating low-grade gliomas (WHO grades I and II) from high grades (WHO grades III and IV). In order to assess the contribution of regional mp-MRI quantitative features to the classification models, a support vector machine-based recursive feature elimination method was applied prior to classification. A machine-learning model based on support vector machine algorithm with linear kernel achieved an accuracy of 93.0%, a specificity of 86.7%, and a sensitivity of 96.4% for the grading of gliomas using ten-fold cross validation based on the proposed subset of the mp-MRI features. In this study, machine-learning based on multiregional and multi-parametric MRI data has proven to be an important tool in grading glial tumors accurately even in this limited patient population. Future studies are needed to investigate the use of machine learning algorithms for brain tumor classification in a larger patient cohort. Copyright © 2018. Published by Elsevier Ltd.
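The support vector machine-based recursive feature elimination step can be sketched with scikit-learn on synthetic data standing in for the normalized mp-MRI features (patient counts, feature counts, and effect sizes below are illustrative, not derived from the study):

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Stand-in data: 43 "patients" x 30 features, of which only the first 5
# carry grade information.
n_pat, n_feat, n_informative = 43, 30, 5
grade = rng.integers(0, 2, n_pat)             # 0 = low grade, 1 = high grade
X = rng.standard_normal((n_pat, n_feat))
X[:, :n_informative] += 2.0 * grade[:, None]  # informative features shift

# SVM-based recursive feature elimination: repeatedly fit a linear SVM
# and drop the lowest-weight feature until the requested subset remains.
selector = RFE(SVC(kernel="linear"), n_features_to_select=5).fit(X, grade)
kept = np.flatnonzero(selector.support_)
print("selected feature indices:", kept)
```

The surviving subset would then feed the final cross-validated classifier, as in the study's pipeline.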

  15. Structural brain changes versus self-report: machine-learning classification of chronic fatigue syndrome patients.

    PubMed

    Sevel, Landrew S; Boissoneault, Jeff; Letzen, Janelle E; Robinson, Michael E; Staud, Roland

    2018-05-30

    Chronic fatigue syndrome (CFS) is a disorder associated with fatigue, pain, and structural/functional abnormalities seen during magnetic resonance brain imaging (MRI). Therefore, we evaluated the performance of structural MRI (sMRI) abnormalities in the classification of CFS patients versus healthy controls and compared it to machine learning (ML) classification based upon self-report (SR). Participants included 18 CFS patients and 15 healthy controls (HC). All subjects underwent T1-weighted sMRI and provided visual analogue-scale ratings of fatigue, pain intensity, anxiety, depression, anger, and sleep quality. sMRI data were segmented using FreeSurfer, and 61 regions were selected based on functional and structural abnormalities previously reported in patients with CFS. Classification was performed in RapidMiner using a linear support vector machine and bootstrap optimism correction. We compared ML classifiers based on (1) the 61 a priori sMRI regional estimates and (2) SR ratings. The sMRI model achieved 79.58% classification accuracy. The SR model (accuracy = 95.95%) outperformed the sMRI model. Estimates from multiple brain areas related to cognition, emotion, and memory contributed strongly to group classification. This is the first ML-based group classification of CFS. Our findings suggest that sMRI abnormalities are useful for discriminating CFS patients from HC, but SR ratings remain most effective in classification tasks.

  16. Calibrated FMRI.

    PubMed

    Hoge, Richard D

    2012-08-15

    Functional magnetic resonance imaging with blood oxygenation level-dependent (BOLD) contrast has had a tremendous influence on human neuroscience in the last twenty years, providing a non-invasive means of mapping human brain function with often exquisite sensitivity and detail. However the BOLD method remains a largely qualitative approach. While the same can be said of anatomic MRI techniques, whose clinical and research impact has not been diminished in the slightest by the lack of a quantitative interpretation of their image intensity, the quantitative expression of BOLD responses as a percent of the baseline T2*-weighted signal has been viewed as necessary since the earliest days of fMRI. Calibrated MRI attempts to dissociate changes in oxygen metabolism from changes in blood flow and volume, the latter three quantities contributing jointly to determine the physiologically ambiguous percent BOLD change. This dissociation is typically performed using a "calibration" procedure in which subjects inhale a gas mixture containing small amounts of carbon dioxide or enriched oxygen to produce changes in blood flow and BOLD signal which can be measured under well-defined hemodynamic conditions. The outcome is a calibration parameter M which can then be substituted into an expression providing the fractional change in oxygen metabolism given changes in blood flow and BOLD signal during a task. The latest generation of calibrated MRI methods goes beyond fractional changes to provide absolute quantification of resting-state oxygen consumption in micromolar units, in addition to absolute measures of evoked metabolic response. This review discusses the history, challenges, and advances in calibrated MRI, from the personal perspective of the author. Copyright © 2012 Elsevier Inc. All rights reserved.
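The dissociation described here is commonly expressed through the Davis model; a sketch of the final step, solving for the fractional CMRO2 change given the calibration parameter M, the flow change, and the BOLD change (the alpha and beta exponents are typical literature choices, not values from this review):

```python
# Davis model: dBOLD/BOLD = M * (1 - f**(alpha - beta) * r**beta),
# with f = CBF/CBF0 and r = CMRO2/CMRO2_0. Inverting for r:

ALPHA, BETA = 0.38, 1.5   # typical venous-volume and field-strength exponents

def cmro2_ratio(bold_frac, flow_ratio, m):
    """Fractional oxygen-metabolism change implied by the Davis model."""
    return ((1.0 - bold_frac / m) * flow_ratio ** (BETA - ALPHA)) ** (1.0 / BETA)

# Example: a task evokes +2% BOLD and +50% CBF, and the hypercapnic
# calibration yielded M = 0.08 (8% maximal BOLD change).
r = cmro2_ratio(bold_frac=0.02, flow_ratio=1.5, m=0.08)
print(f"implied CMRO2 change: {100 * (r - 1):+.1f}%")
```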

  17. The National Aeronautics and Space Administration's Gilmore Load Cell Machine: Load Cell Calibrations to 2.22 × 10^7 Newtons

    NASA Technical Reports Server (NTRS)

    Haynes, Michael W.

    2000-01-01

    Designed in 1964 and erected in 1966, the mission of the Gilmore Load Cell Machine was to provide highly accurate calibrations for large capacity load cells in support of NASA's Apollo Program. Still in use today, the Gilmore Machine is a national treasure with no equal.

  18. Accelerated dynamic cardiac MRI exploiting sparse-Kalman-smoother self-calibration and reconstruction (k-t SPARKS)

    NASA Astrophysics Data System (ADS)

    Park, Suhyung; Park, Jaeseok

    2015-05-01

    Accelerated dynamic MRI, which exploits spatiotemporal redundancies in k-t space and the coil dimension, has been widely used to reduce the number of signal encodings and thus increase imaging efficiency with minimal loss of image quality. Nonetheless, particularly in cardiac MRI it still suffers from artifacts and amplified noise in the presence of time-drifting coil sensitivity due to relative motion between coil and subject (e.g. free breathing). Furthermore, a substantial number of additional calibrating signals must be acquired to warrant accurate calibration of coil sensitivity. In this work, we propose a novel accelerated dynamic cardiac MRI with sparse-Kalman-smoother self-calibration and reconstruction (k-t SPARKS), which is robust to time-varying coil sensitivity even with a small number of calibrating signals. The proposed k-t SPARKS incorporates Kalman-smoother self-calibration in k-t space and sparse signal recovery in x-f space into a single optimization problem, leading to iterative, joint estimation of time-varying convolution kernels and missing signals in k-t space. In the Kalman-smoother calibration, motion-induced uncertainties over the entire time frames were included in modeling the state transition, while a coil-dependent noise statistic was employed in describing the measurement process. The sparse signal recovery iteratively alternates with the self-calibration to tackle the ill-conditioning problem potentially resulting from insufficient calibrating signals. Simulations and experiments were performed using both the proposed and conventional methods for comparison, revealing that the proposed k-t SPARKS yields higher signal-to-error ratio and superior temporal fidelity in both breath-hold and free-breathing cardiac applications over all reduction factors.
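A one-dimensional toy version of the Kalman-smoother idea: tracking a slowly drifting coil sensitivity from noisy per-frame estimates with a random-walk state model and Rauch-Tung-Striebel smoothing (the actual method jointly estimates k-space convolution kernels during reconstruction, which this sketch does not attempt; all values are simulated):

```python
import numpy as np

rng = np.random.default_rng(2)

# Truth: a coil sensitivity drifting slowly over 200 time frames
# (e.g. due to free breathing), observed through noisy per-frame estimates.
T = 200
true = 1.0 + 0.2 * np.sin(np.linspace(0, 2 * np.pi, T))
meas = true + 0.1 * rng.standard_normal(T)

q, r = 1e-4, 0.1 ** 2          # process / measurement noise variances

# Forward Kalman filter, random-walk state model x_t = x_{t-1} + w_t.
xf, pf = np.empty(T), np.empty(T)     # filtered mean / variance
xp, pp = np.empty(T), np.empty(T)     # one-step predictions
x, p = meas[0], r
for t in range(T):
    xp[t], pp[t] = x, p + q                    # predict
    k = pp[t] / (pp[t] + r)                    # Kalman gain
    x = xp[t] + k * (meas[t] - xp[t])          # update with frame t
    p = (1.0 - k) * pp[t]
    xf[t], pf[t] = x, p

# Backward Rauch-Tung-Striebel smoother.
xs = xf.copy()
for t in range(T - 2, -1, -1):
    g = pf[t] / pp[t + 1]
    xs[t] = xf[t] + g * (xs[t + 1] - xp[t + 1])

mse_raw = float(np.mean((meas - true) ** 2))
mse_smooth = float(np.mean((xs - true) ** 2))
print(f"raw MSE {mse_raw:.5f} -> smoothed MSE {mse_smooth:.5f}")
```

Because the smoother uses both past and future frames, it avoids the lag a causal filter would show against the drifting sensitivity.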

  19. Accelerated dynamic cardiac MRI exploiting sparse-Kalman-smoother self-calibration and reconstruction (k-t SPARKS).

    PubMed

    Park, Suhyung; Park, Jaeseok

    2015-05-07

    Accelerated dynamic MRI, which exploits spatiotemporal redundancies in k-t space and the coil dimension, has been widely used to reduce the number of signal encodings and thus increase imaging efficiency with minimal loss of image quality. Nonetheless, particularly in cardiac MRI it still suffers from artifacts and amplified noise in the presence of time-drifting coil sensitivity due to relative motion between coil and subject (e.g. free breathing). Furthermore, a substantial number of additional calibrating signals must be acquired to warrant accurate calibration of coil sensitivity. In this work, we propose a novel accelerated dynamic cardiac MRI with sparse-Kalman-smoother self-calibration and reconstruction (k-t SPARKS), which is robust to time-varying coil sensitivity even with a small number of calibrating signals. The proposed k-t SPARKS incorporates Kalman-smoother self-calibration in k-t space and sparse signal recovery in x-f space into a single optimization problem, leading to iterative, joint estimation of time-varying convolution kernels and missing signals in k-t space. In the Kalman-smoother calibration, motion-induced uncertainties over the entire time frames were included in modeling the state transition, while a coil-dependent noise statistic was employed in describing the measurement process. The sparse signal recovery iteratively alternates with the self-calibration to tackle the ill-conditioning problem potentially resulting from insufficient calibrating signals. Simulations and experiments were performed using both the proposed and conventional methods for comparison, revealing that the proposed k-t SPARKS yields higher signal-to-error ratio and superior temporal fidelity in both breath-hold and free-breathing cardiac applications over all reduction factors.

  20. An RF dosimeter for independent SAR measurement in MRI scanners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qian, Di; Bottomley, Paul A.; El-Sharkawy, AbdEl-Monem M.

    2013-12-15

    Purpose: The monitoring and management of radio frequency (RF) exposure is critical for ensuring magnetic resonance imaging (MRI) safety. Commercial MRI scanners can overestimate specific absorption rates (SAR) and improperly restrict clinical MRI scans or the application of new MRI sequences, while underestimation of SAR can lead to tissue heating and thermal injury. Accurate scanner-independent RF dosimetry is essential for measuring actual exposure when SAR is critical for ensuring regulatory compliance and MRI safety, for establishing RF exposure while evaluating interventional leads and devices, and for routine MRI quality assessment by medical physicists. However, at present there are no scanner-independent SAR dosimeters. Methods: An SAR dosimeter with an RF transducer comprises two orthogonal, rectangular copper loops and a spherical MRI phantom. The transducer is placed in the magnet bore and calibrated to approximate the resistive loading of the scanner's whole-body birdcage RF coil for human subjects in Philips, GE and Siemens 3 tesla (3T) MRI scanners. The transducer loop reactances are adjusted to minimize interference with the transmit RF field (B1) at the MRI frequency. Power from the RF transducer is sampled with a high dynamic range power monitor and recorded on a computer. The deposited power is calibrated and tested on eight different MRI scanners. Whole-body absorbed power vs weight and body mass index (BMI) is measured directly on 26 subjects. Results: A single linear calibration curve sufficed for RF dosimetry at 127.8 MHz on three different Philips and three GE 3T MRI scanners. An RF dosimeter operating at 123.2 MHz on two Siemens 3T scanners required a separate transducer and a slightly different calibration curve. Measurement accuracy was ∼3%. With the torso landmarked at the xiphoid, human adult whole-body absorbed power varied approximately linearly with patient weight and BMI. This indicates that whole-body torso SAR is on average independent of the imaging subject, albeit with fluctuations. Conclusions: Our 3T RF dosimeter and transducers accurately measure RF exposure in body-equivalent loads and provide scanner-independent assessments of whole-body RF power deposition for establishing safety compliance useful for MRI sequence and device testing.
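The dosimeter's end calculation is straightforward: whole-body SAR is absorbed RF power divided by patient mass, and the roughly linear power-versus-weight relation is what makes whole-body SAR nearly subject-independent. A sketch with invented measurements:

```python
import numpy as np

# Invented whole-body absorbed power (W) vs patient weight (kg) pairs,
# mimicking the approximately linear relation reported for 3T body MRI.
weight = np.array([55.0, 65.0, 75.0, 85.0, 95.0, 110.0])
power = np.array([8.1, 9.8, 11.2, 12.9, 14.1, 16.6])

# Whole-body SAR = absorbed power / body mass (W/kg), per subject.
sar = power / weight
print("per-subject SAR (W/kg):", np.round(sar, 3))

# A near-constant slope with a small intercept in the power-vs-weight fit
# is what makes whole-body SAR roughly subject-independent.
slope, intercept = np.polyfit(weight, power, 1)
print(f"fit: power = {slope:.3f} * weight + {intercept:.2f}")
```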

  1. Measurement of liver iron overload: noninvasive calibration of MRI-R2* by magnetic iron detector susceptometer.

    PubMed

    Gianesin, B; Zefiro, D; Musso, M; Rosa, A; Bruzzone, C; Balocco, M; Carrara, P; Bacigalupo, L; Banderali, S; Rollandi, G A; Gambaro, M; Marinelli, M; Forni, G L

    2012-06-01

    An accurate assessment of body iron accumulation is essential for the diagnosis and therapy of iron overload in diseases such as thalassemia or hemochromatosis. Magnetic iron detector susceptometry and MRI are noninvasive techniques capable of detecting iron overload in the liver. Although the transverse relaxation rate measured by MRI can be correlated with the presence of iron, a calibration step is needed to obtain the liver iron concentration. The magnetic iron detector provides an evaluation of the iron overload in the whole liver. In this article, we describe a retrospective observational study comparing magnetic iron detector and MRI examinations performed on the same group of 97 patients with transfusional or congenital iron overload. A biopsy-free linear calibration to convert the average transverse relaxation rate into iron overload (R² = 0.72), or into liver iron concentration evaluated in wet tissue (R² = 0.68), is presented. This article also compares liver iron concentrations calculated in dry tissue using MRI and the existing biopsy calibration with liver iron concentrations evaluated in wet tissue by magnetic iron detector, yielding an estimate of the wet-to-dry conversion factor of 6.7 ± 0.8 (95% confidence level). Copyright © 2011 Wiley-Liss, Inc.
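    The conversion chain described here (relaxation rate to wet-tissue liver iron concentration, then wet to dry tissue) can be sketched as below. The linear coefficients are illustrative placeholders, not the study's fitted values; only the 6.7 wet-to-dry factor comes from the abstract.

```python
# Hypothetical linear calibration: LIC_wet (mg/g wet) = A * R2* (1/s) + B.
A, B = 0.004, 0.1        # illustrative coefficients, NOT the published fit
WET_TO_DRY = 6.7         # wet-to-dry conversion factor reported in the study

def lic_wet_from_r2star(r2star_hz):
    """Liver iron concentration in wet tissue from the relaxation rate."""
    return A * r2star_hz + B

def lic_dry_from_wet(lic_wet):
    """Convert a wet-tissue concentration to the dry-tissue scale used
    by biopsy-based calibrations."""
    return lic_wet * WET_TO_DRY

r2star = 500.0                       # example relaxation rate in 1/s
wet = lic_wet_from_r2star(r2star)    # mg Fe per g wet tissue
dry = lic_dry_from_wet(wet)          # mg Fe per g dry tissue
```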

  2. Application of calibrated fMRI in Alzheimer's disease.

    PubMed

    Lajoie, Isabelle; Nugent, Scott; Debacker, Clément; Dyson, Kenneth; Tancredi, Felipe B; Badhwar, AmanPreet; Belleville, Sylvie; Deschaintre, Yan; Bellec, Pierre; Doyon, Julien; Bocti, Christian; Gauthier, Serge; Arnold, Douglas; Kergoat, Marie-Jeanne; Chertkow, Howard; Monchi, Oury; Hoge, Richard D

    2017-01-01

    Calibrated fMRI based on arterial spin-labeling (ASL) and blood oxygenation level-dependent (BOLD) contrast, combined with periods of hypercapnia and hyperoxia, can provide information on cerebrovascular reactivity (CVR), resting blood flow (CBF), oxygen extraction fraction (OEF), and resting oxidative metabolism (CMRO2). Vascular and metabolic integrity are believed to be affected in Alzheimer's disease (AD); thus, the use of calibrated fMRI in AD may help understand the disease and monitor therapeutic responses in future clinical trials. In the present work, we applied a calibrated fMRI approach referred to as Quantitative O2 (QUO2) in a cohort of probable AD dementia and age-matched control participants. The resulting CBF, OEF and CMRO2 values fell within the range from previous studies using positron emission tomography (PET) with 15O labeling. Moreover, the typical parietotemporal pattern of hypoperfusion and hypometabolism in AD was observed, especially in the precuneus, a particularly vulnerable region. We detected no deficit in frontal CBF, nor in whole grey matter CVR, which supports the hypothesis that the effects observed were associated specifically with AD rather than generalized vascular disease. Some key pitfalls affecting both ASL and BOLD methods were encountered, such as prolonged arterial transit times (particularly in the occipital lobe), the presence of susceptibility artifacts obscuring medial temporal regions, and the challenges associated with the hypercapnic manipulation in AD patients and elderly participants. The present results are encouraging and demonstrate the promise of calibrated fMRI measurements as potential biomarkers in AD. Although CMRO2 can be imaged with 15O PET, the QUO2 method uses more widely available imaging infrastructure, avoids exposure to ionizing radiation, and integrates with other MRI-based measures of brain structure and function.

  3. Performance of a Machine Learning Classifier of Knee MRI Reports in Two Large Academic Radiology Practices: A Tool to Estimate Diagnostic Yield.

    PubMed

    Hassanpour, Saeed; Langlotz, Curtis P; Amrhein, Timothy J; Befera, Nicholas T; Lungren, Matthew P

    2017-04-01

    The purpose of this study is to evaluate the performance of a natural language processing (NLP) system in classifying a database of free-text knee MRI reports at two separate academic radiology practices. An NLP system that uses terms and patterns in manually classified narrative knee MRI reports was constructed. The NLP system was trained and tested on expert-classified knee MRI reports from two major health care organizations. Radiology reports were modeled in the training set as vectors, and a support vector machine framework was used to train the classifier. A separate test set from each organization was used to evaluate the performance of the system. We evaluated the performance of the system both within and across organizations. Standard evaluation metrics, such as accuracy, precision, recall, and F1 score (i.e., the harmonic mean of precision and recall), and their respective 95% CIs were used to measure the efficacy of our classification system. The accuracy for radiology reports that belonged to the model's clinically significant concept classes after training on data from the same institution was good, yielding an F1 score greater than 90% (95% CI, 84.6-97.3%). Performance of the classifier on cross-institutional application without institution-specific training data yielded F1 scores of 77.6% (95% CI, 69.5-85.7%) and 90.2% (95% CI, 84.5-95.9%) at the two organizations studied. The results show excellent accuracy by the NLP machine learning classifier in classifying free-text knee MRI reports, supporting the institution-independent reproducibility of knee MRI report classification. Furthermore, the machine learning classifier performed well on free-text knee MRI reports from another institution. These data support the feasibility of multi-institutional classification of radiologic imaging text reports with a single machine learning classifier without requiring institution-specific training data.
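    For reference, the metrics quoted above follow directly from confusion-matrix counts, with F1 as the harmonic mean of precision and recall. A minimal sketch with hypothetical counts:

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall and their harmonic mean (F1) from
    true-positive, false-positive and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical confusion counts for one report class.
p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=10)
```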

  4. Accurate and simple method for quantification of hepatic fat content using magnetic resonance imaging: a prospective study in biopsy-proven nonalcoholic fatty liver disease.

    PubMed

    Hatta, Tomoko; Fujinaga, Yasunari; Kadoya, Masumi; Ueda, Hitoshi; Murayama, Hiroaki; Kurozumi, Masahiro; Ueda, Kazuhiko; Komatsu, Michiharu; Nagaya, Tadanobu; Joshita, Satoru; Kodama, Ryo; Tanaka, Eiji; Uehara, Tsuyoshi; Sano, Kenji; Tanaka, Naoki

    2010-12-01

    To assess the degree of hepatic fat content, simple and noninvasive methods with high objectivity and reproducibility are required. Magnetic resonance imaging (MRI) is one such candidate, although its accuracy remains unclear. We aimed to validate an MRI method for quantifying hepatic fat content by calibrating MRI reading with a phantom and comparing MRI measurements in human subjects with estimates of liver fat content in liver biopsy specimens. The MRI method was performed by a combination of MRI calibration using a phantom and double-echo chemical shift gradient-echo sequence (double-echo fast low-angle shot sequence) that has been widely used on a 1.5-T scanner. Liver fat content in patients with nonalcoholic fatty liver disease (NAFLD, n = 26) was derived from a calibration curve generated by scanning the phantom. Liver fat was also estimated by optical image analysis. The correlation between the MRI measurements and liver histology findings was examined prospectively. Magnetic resonance imaging measurements showed a strong correlation with liver fat content estimated from the results of light microscopic examination (correlation coefficient 0.91, P < 0.001) regardless of the degree of hepatic steatosis. Moreover, the severity of lobular inflammation or fibrosis did not influence the MRI measurements. This MRI method is simple and noninvasive, has excellent ability to quantify hepatic fat content even in NAFLD patients with mild steatosis or advanced fibrosis, and can be performed easily without special devices.
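    The double-echo (in-phase/opposed-phase) acquisition mentioned above supports the standard two-point chemical-shift estimate of signal fat fraction. The sketch below shows that common formula only; it is not the study's phantom-calibrated pipeline, and it assumes water-dominant voxels.

```python
def fat_fraction(s_in_phase, s_opposed_phase):
    """Signal fat fraction from in-phase and opposed-phase magnitudes:
    water+fat vs water-fat, so fat = (IP - OP) / (2 * IP).
    Assumes water-dominant voxels (no fat/water ambiguity)."""
    return (s_in_phase - s_opposed_phase) / (2.0 * s_in_phase)

ff = fat_fraction(100.0, 60.0)   # signal fat fraction of the voxel
```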

  5. Absolute calibration for complex-geometry biomedical diffuse optical spectroscopy

    NASA Astrophysics Data System (ADS)

    Mastanduno, Michael A.; Jiang, Shudong; El-Ghussein, Fadi; diFlorio-Alexander, Roberta; Pogue, Brian W.; Paulsen, Keith D.

    2013-03-01

    We have presented methodology to calibrate data in NIRS/MRI imaging versus an absolute reference phantom and results in both phantoms and healthy volunteers. This method directly calibrates data to a diffusion-based model, takes advantage of patient specific geometry from MRI prior information, and generates an initial guess without the need for a large data set. This method of calibration allows for more accurate quantification of total hemoglobin, oxygen saturation, water content, scattering, and lipid concentration as compared with other, slope-based methods. We found the main source of error in the method to be derived from incorrect assignment of reference phantom optical properties rather than initial guess in reconstruction. We also present examples of phantom and breast images from a combined frequency domain and continuous wave MRI-coupled NIRS system. We were able to recover phantom data within 10% of expected contrast and within 10% of the actual value using this method and compare these results with slope-based calibration methods. Finally, we were able to use this technique to calibrate and reconstruct images from healthy volunteers. Representative images are shown and discussion is provided for comparison with existing literature. These methods work towards fully combining the synergistic attributes of MRI and NIRS for in-vivo imaging of breast cancer. Complete software and hardware integration in dual modality instruments is especially important due to the complexity of the technology and success will contribute to complex anatomical and molecular prognostic information that can be readily obtained in clinical use.

  6. Comparison between laser interferometric and calibrated artifacts for the geometric test of machine tools

    NASA Astrophysics Data System (ADS)

    Sousa, Andre R.; Schneider, Carlos A.

    2001-09-01

    A touch probe is used on a 3-axis vertical machining center to check against a hole plate calibrated on a coordinate measuring machine (CMM). By comparing the results obtained from the machine tool and the CMM, the main machine tool error components are measured, attesting to the machine's accuracy. The error values can also be used to update the error compensation table at the CNC, enhancing the machine's accuracy. The method is easy to use, has a lower cost than classical test techniques, and preliminary results have shown that its uncertainty is comparable to well-established techniques. In this paper the method is compared with the laser interferometric system regarding reliability, cost and time efficiency.

  7. Life cycle costing as a decision making tool for technology acquisition in radio-diagnosis

    PubMed Central

    Chakravarty, Abhijit; Debnath, Jyotindu

    2014-01-01

    Background Life cycle costing analysis is an emerging conceptual tool to validate capital investment in healthcare. Methods A preliminary study was done to analyze the long-term cost impact of acquiring a new 3 T MRI system when compared to technological upgradation of the existing 1.5 T MRI system with a view to evolve a decision matrix for correct investment planning and technology management. Operating costing method was utilized to estimate cost per unit MRI scan, costing inputs were considered for the existing 1.5 T and the proposed 3 T machine. Cost for each expected year in the life span of both 1.5 T and 3 T MRI scan options were then discounted to its Net Present Value. Net Present Value thus calculated for both the alternative options of 1.5 T and 3 T MRI machine was charted along with various intangible but critical Figures of Merit (FOM) to create a decision matrix for capital investment planning. Result Considering all fixed and variable costs contributing towards assumed operation, unit cost per MRI procedure was found to be Rs. 4244.58 for the 1.5 T upgrade and Rs. 6059.37 for the new 3 T MRI machine. Life Cycle Cost Analysis of the proposed 1.5 T upgrade and new 3 T machine showed a Net Present Value of Rs. 42,148,587.80 and Rs. 27,587,842.38 respectively. Conclusion The utility of life cycle costing as a strategic decision making tool towards evaluating alternative options for capital investment planning in health care environment is reiterated. PMID:25609862
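    The discounting step used to compare the two acquisition options can be sketched as a net-present-value computation over each option's yearly cost stream. The cost streams and discount rate below are hypothetical round numbers, not the study's figures.

```python
def npv(discount_rate, cash_flows):
    """Net present value of yearly cash flows; cash_flows[0] is year 0."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical life-cycle cost streams in Rs. (outlay followed by yearly
# operating costs); illustrative only, not the study's inputs.
upgrade_costs = [10_000_000] + [4_000_000] * 5   # 1.5 T upgrade option
new_costs     = [30_000_000] + [2_000_000] * 5   # new 3 T option
npv_upgrade = npv(0.08, upgrade_costs)
npv_new     = npv(0.08, new_costs)
# The lower discounted life-cycle cost is preferred, weighed alongside
# the intangible figures of merit in the decision matrix.
```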

  8. Machine learning classification with confidence: application of transductive conformal predictors to MRI-based diagnostic and prognostic markers in depression.

    PubMed

    Nouretdinov, Ilia; Costafreda, Sergi G; Gammerman, Alexander; Chervonenkis, Alexey; Vovk, Vladimir; Vapnik, Vladimir; Fu, Cynthia H Y

    2011-05-15

    There is rapidly accumulating evidence that the application of machine learning classification to neuroimaging measurements may be valuable for the development of diagnostic and prognostic prediction tools in psychiatry. However, current methods do not produce a measure of the reliability of the predictions. Knowing the risk of the error associated with a given prediction is essential for the development of neuroimaging-based clinical tools. We propose a general probabilistic classification method to produce measures of confidence for magnetic resonance imaging (MRI) data. We describe the application of transductive conformal predictor (TCP) to MRI images. TCP generates the most likely prediction and a valid measure of confidence, as well as the set of all possible predictions for a given confidence level. We present the theoretical motivation for TCP, and we have applied TCP to structural and functional MRI data in patients and healthy controls to investigate diagnostic and prognostic prediction in depression. We verify that TCP predictions are as accurate as those obtained with more standard machine learning methods, such as support vector machine, while providing the additional benefit of a valid measure of confidence for each prediction. Copyright © 2010 Elsevier Inc. All rights reserved.
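    A minimal transductive conformal predictor can be sketched with a 1-nearest-neighbor nonconformity score: the test point is provisionally added with each candidate label, and its score is ranked among all scores to give a per-label p-value. The one-dimensional "features" and labels below are toy values, not the study's MRI data.

```python
def nonconformity(x, label, examples):
    """Distance to the nearest other example of the same class
    (simple 1-NN nonconformity score; assumes distinct feature values)."""
    same = [abs(x - xi) for xi, yi in examples if yi == label and xi != x]
    return min(same) if same else float("inf")

def conformal_p_values(train, x_new, labels):
    """Transductive p-value per candidate label: provisionally add the
    test point with that label, then rank its score among all scores."""
    p = {}
    for y in labels:
        extended = train + [(x_new, y)]
        scores = [nonconformity(xi, yi, extended) for xi, yi in extended]
        test_score = scores[-1]
        p[y] = sum(s >= test_score for s in scores) / len(scores)
    return p

# Toy 1-D data: two well-separated classes.
train = [(1.0, "ctrl"), (1.2, "ctrl"), (5.0, "dep"), (5.3, "dep")]
pvals = conformal_p_values(train, 1.1, ["ctrl", "dep"])
# Predicted label = argmax p-value; confidence = 1 - (second-largest p).
```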

  9. Demystifying liver iron concentration measurements with MRI.

    PubMed

    Henninger, B

    2018-06-01

    This Editorial comment refers to the article: Non-invasive measurement of liver iron concentration using 3-Tesla magnetic resonance imaging: validation against biopsy. D'Assignies G, et al. Eur Radiol Nov 2017. • MRI is a widely accepted reliable tool to determine liver iron concentration. • MRI cannot measure iron directly; it needs calibration. • Calibration curves for 3.0T are rare in the literature. • The study by d'Assignies et al. provides valuable information on this topic. • Evaluation of liver iron overload should no longer be restricted to experts.

  10. Calibration and validation of TRUST MRI for the estimation of cerebral blood oxygenation

    PubMed Central

    Lu, Hanzhang; Xu, Feng; Grgac, Ksenija; Liu, Peiying; Qin, Qin; van Zijl, Peter

    2011-01-01

    Recently, a T2-Relaxation-Under-Spin-Tagging (TRUST) MRI technique was developed to quantitatively estimate blood oxygen saturation fraction (Y) via the measurement of pure blood T2. This technique has shown promise for normalization of fMRI signals, for the assessment of oxygen metabolism, and in studies of cognitive aging and multiple sclerosis. However, a human validation study has not been conducted. In addition, the calibration curve used to convert blood T2 to Y has not accounted for the effects of hematocrit (Hct). In the present study, we first conducted experiments on blood samples under physiologic conditions, and the Carr-Purcell-Meiboom-Gill (CPMG) T2 was determined for a range of Y and Hct values. The data were fitted to a two-compartment exchange model to allow the characterization of a three-dimensional plot that can serve to calibrate the in vivo data. Next, in a validation study in humans, we showed that arterial Y estimated using TRUST MRI was 0.837±0.036 (N=7) during the inhalation of 14% O2, which was in excellent agreement with the gold-standard Y values of 0.840±0.036 based on Pulse-Oximetry. These data suggest that the availability of this calibration plot should enhance the applicability of TRUST MRI for non-invasive assessment of cerebral blood oxygenation. PMID:21590721
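    The calibration model form (blood R2 = 1/T2 as a quadratic in 1 − Y, with hematocrit-dependent coefficients) can be inverted numerically to recover Y from a measured blood T2. The coefficients below are illustrative only, not the published Hct-specific fit.

```python
def r2_blood(y, a1, a2, a3):
    """Blood R2 (= 1/T2, in 1/s) modeled as a quadratic in (1 - Y),
    the calibration form used for TRUST; coefficients vary with Hct."""
    d = 1.0 - y
    return a1 + a2 * d + a3 * d * d

def y_from_t2(t2, a1, a2, a3):
    """Invert the calibration by bisection: R2 falls as Y rises."""
    target = 1.0 / t2
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if r2_blood(mid, a1, a2, a3) > target:
            lo = mid        # R2 too high -> oxygenation guess too low
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative coefficients only (1/s), NOT the published calibration.
A1, A2, A3 = 7.0, 15.0, 80.0
t2_measured = 1.0 / r2_blood(0.84, A1, A2, A3)   # simulate a TRUST T2 reading
y = y_from_t2(t2_measured, A1, A2, A3)           # recovers Y close to 0.84
```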

  11. Optimizing a machine learning based glioma grading system using multi-parametric MRI histogram and texture features

    PubMed Central

    Hu, Yu-Chuan; Li, Gang; Yang, Yang; Han, Yu; Sun, Ying-Zhi; Liu, Zhi-Cheng; Tian, Qiang; Han, Zi-Yang; Liu, Le-De; Hu, Bin-Quan; Qiu, Zi-Yu; Wang, Wen; Cui, Guang-Bin

    2017-01-01

    Current machine learning techniques provide the opportunity to develop noninvasive and automated glioma grading tools by utilizing quantitative parameters derived from multi-modal magnetic resonance imaging (MRI) data. However, the efficacies of different machine learning methods in glioma grading have not been investigated. A comprehensive comparison of varied machine learning methods in differentiating low-grade gliomas (LGGs) from high-grade gliomas (HGGs), as well as WHO grade II, III and IV gliomas, based on multi-parametric MRI images was proposed in the current study. The parametric histogram and image texture attributes of 120 glioma patients were extracted from the perfusion, diffusion and permeability parametric maps of preoperative MRI. Then, 25 commonly used machine learning classifiers combined with 8 independent attribute selection methods were applied and evaluated using a leave-one-out cross-validation (LOOCV) strategy. In addition, the influences of parameter selection on classification performance were investigated. We found that the support vector machine (SVM) exhibited superior performance to the other classifiers. By combining all tumor attributes with the synthetic minority over-sampling technique (SMOTE), the highest classification accuracy of 0.945 or 0.961 for LGG vs HGG or grade II, III and IV gliomas, respectively, was achieved. Application of the Recursive Feature Elimination (RFE) attribute selection strategy further improved the classification accuracies. Moreover, the performances of the LibSVM, SMO and IBk classifiers were influenced by key parameters such as kernel type, c, gamma and K. SVM is a promising tool for developing an automated preoperative glioma grading system, especially when combined with the RFE strategy. Model parameters should be considered in glioma grading model optimization. PMID:28599282

  12. Optimizing a machine learning based glioma grading system using multi-parametric MRI histogram and texture features.

    PubMed

    Zhang, Xin; Yan, Lin-Feng; Hu, Yu-Chuan; Li, Gang; Yang, Yang; Han, Yu; Sun, Ying-Zhi; Liu, Zhi-Cheng; Tian, Qiang; Han, Zi-Yang; Liu, Le-De; Hu, Bin-Quan; Qiu, Zi-Yu; Wang, Wen; Cui, Guang-Bin

    2017-07-18

    Current machine learning techniques provide the opportunity to develop noninvasive and automated glioma grading tools by utilizing quantitative parameters derived from multi-modal magnetic resonance imaging (MRI) data. However, the efficacies of different machine learning methods in glioma grading have not been investigated. A comprehensive comparison of varied machine learning methods in differentiating low-grade gliomas (LGGs) from high-grade gliomas (HGGs), as well as WHO grade II, III and IV gliomas, based on multi-parametric MRI images was proposed in the current study. The parametric histogram and image texture attributes of 120 glioma patients were extracted from the perfusion, diffusion and permeability parametric maps of preoperative MRI. Then, 25 commonly used machine learning classifiers combined with 8 independent attribute selection methods were applied and evaluated using a leave-one-out cross-validation (LOOCV) strategy. In addition, the influences of parameter selection on classification performance were investigated. We found that the support vector machine (SVM) exhibited superior performance to the other classifiers. By combining all tumor attributes with the synthetic minority over-sampling technique (SMOTE), the highest classification accuracy of 0.945 or 0.961 for LGG vs HGG or grade II, III and IV gliomas, respectively, was achieved. Application of the Recursive Feature Elimination (RFE) attribute selection strategy further improved the classification accuracies. Moreover, the performances of the LibSVM, SMO and IBk classifiers were influenced by key parameters such as kernel type, c, gamma and K. SVM is a promising tool for developing an automated preoperative glioma grading system, especially when combined with the RFE strategy. Model parameters should be considered in glioma grading model optimization.

  13. Development of a Machine-Vision System for Recording of Force Calibration Data

    NASA Astrophysics Data System (ADS)

    Heamawatanachai, Sumet; Chaemthet, Kittipong; Changpan, Tawat

    This paper presents the development of a new system for recording force calibration data using machine vision technology. A real-time camera and computer system were used to capture images of the readings from the instruments during calibration. Then, the measurement images were transformed and translated into numerical data using an optical character recognition (OCR) technique. These numerical data, along with the raw images, were automatically saved to memory as the calibration database files. With this new system, the human error of recording is eliminated. Verification experiments were done by using this system to record the measurement results from an amplifier (DMP 40) with a load cell (HBM-Z30-10kN). NIMT's 100-kN deadweight force standard machine (DWM-100kN) was used to generate test forces. The experiments were set up in 3 categories: 1) dynamic condition (recording during load changes), 2) static condition (recording during fixed load), and 3) full calibration experiments in accordance with ISO 376:2011. The captured images from the dynamic-condition experiment gave >94% of images without overlapping digits. The static-condition experiment gave >98% of images without overlapping. All measurement images without overlapping were translated into numbers by the developed program with 100% accuracy. The full calibration experiments also gave 100% accurate results. Moreover, in case of an incorrect translation of any result, it is possible to trace back to the raw calibration image to check and correct it. Therefore, this machine-vision-based system and program should be appropriate for recording force calibration data.

  14. Using Active Learning for Speeding up Calibration in Simulation Models.

    PubMed

    Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2016-07-01

    Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
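    The selection loop described above can be sketched as follows, with a toy "simulation" and a simple proximity surrogate standing in for the UWBCS model and the neural-network learner; all names and values here are hypothetical.

```python
import random

def expensive_simulation(params):
    """Stand-in for a costly model run: the run 'matches observed data'
    only inside a small target region of the 2-D parameter space."""
    x, y = params
    return (x - 0.3) ** 2 + (y - 0.7) ** 2 < 0.02

def surrogate_score(params, evaluated):
    """Cheap promise score: closeness to the nearest known good point.
    Before any good point is seen, explore with a random score."""
    good = [p for p, ok in evaluated if ok]
    if not good:
        return random.random()
    d2 = min((params[0] - g[0]) ** 2 + (params[1] - g[1]) ** 2 for g in good)
    return 1.0 / (1.0 + d2)

random.seed(0)
candidates = [(random.random(), random.random()) for _ in range(500)]
evaluated = []                      # (params, matched?) pairs, in eval order
seen = set()
budget = 150                        # far fewer runs than the 500 candidates
for _ in range(budget):
    remaining = [c for c in candidates if c not in seen]
    best = max(remaining, key=lambda c: surrogate_score(c, evaluated))
    seen.add(best)
    evaluated.append((best, expensive_simulation(best)))
found = [p for p, ok in evaluated if ok]
```

The loop alternates between exploring untested parameter combinations and exploiting the neighborhood of combinations already known to match, so most matching combinations are found well before the full grid is exhausted.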

  15. Using Active Learning for Speeding up Calibration in Simulation Models

    PubMed Central

    Cevik, Mucahit; Ali Ergun, Mehmet; Stout, Natasha K.; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2015-01-01

    Background Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Methods Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We develop an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin Breast Cancer Simulation Model (UWBCS). Results In a recent study, calibration of the UWBCS required the evaluation of 378,000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378,000 combinations. Conclusion Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. PMID:26471190

  16. Machine learning algorithm accurately detects fMRI signature of vulnerability to major depression.

    PubMed

    Sato, João R; Moll, Jorge; Green, Sophie; Deakin, John F W; Thomaz, Carlos E; Zahn, Roland

    2015-08-30

    Standard functional magnetic resonance imaging (fMRI) analyses cannot assess the potential of a neuroimaging signature as a biomarker to predict individual vulnerability to major depression (MD). Here, we use machine learning for the first time to address this question. Using a recently identified neural signature of guilt-selective functional disconnection, the classification algorithm was able to distinguish remitted MD from control participants with 78.3% accuracy. This demonstrates the high potential of our fMRI signature as a biomarker of MD vulnerability. Crown Copyright © 2015. Published by Elsevier Ireland Ltd. All rights reserved.

  17. Multiclass Classification for the Differential Diagnosis on the ADHD Subtypes Using Recursive Feature Elimination and Hierarchical Extreme Learning Machine: Structural MRI Study

    PubMed Central

    Qureshi, Muhammad Naveed Iqbal; Min, Beomjun; Jo, Hang Joon; Lee, Boreom

    2016-01-01

    The classification of neuroimaging data for the diagnosis of certain brain diseases is one of the main research goals of the neuroscience and clinical communities. In this study, we performed multiclass classification using a hierarchical extreme learning machine (H-ELM) classifier. We compared the performance of this classifier with that of a support vector machine (SVM) and basic extreme learning machine (ELM) for cortical MRI data from attention deficit/hyperactivity disorder (ADHD) patients. We used 159 structural MRI images of children from the publicly available ADHD-200 MRI dataset. The data consisted of three types, namely, typically developing (TDC), ADHD-inattentive (ADHD-I), and ADHD-combined (ADHD-C). We carried out feature selection by using standard SVM-based recursive feature elimination (RFE-SVM) that enabled us to achieve good classification accuracy (60.78%). In this study, we found the RFE-SVM feature selection approach in combination with H-ELM to effectively enable the acquisition of high multiclass classification accuracy rates for structural neuroimaging data. In addition, we found that the most important features for classification were the surface area of the superior frontal lobe, and the cortical thickness, volume, and mean surface area of the whole cortex. PMID:27500640

  18. Multiclass Classification for the Differential Diagnosis on the ADHD Subtypes Using Recursive Feature Elimination and Hierarchical Extreme Learning Machine: Structural MRI Study.

    PubMed

    Qureshi, Muhammad Naveed Iqbal; Min, Beomjun; Jo, Hang Joon; Lee, Boreom

    2016-01-01

    The classification of neuroimaging data for the diagnosis of certain brain diseases is one of the main research goals of the neuroscience and clinical communities. In this study, we performed multiclass classification using a hierarchical extreme learning machine (H-ELM) classifier. We compared the performance of this classifier with that of a support vector machine (SVM) and basic extreme learning machine (ELM) for cortical MRI data from attention deficit/hyperactivity disorder (ADHD) patients. We used 159 structural MRI images of children from the publicly available ADHD-200 MRI dataset. The data consisted of three types, namely, typically developing (TDC), ADHD-inattentive (ADHD-I), and ADHD-combined (ADHD-C). We carried out feature selection by using standard SVM-based recursive feature elimination (RFE-SVM) that enabled us to achieve good classification accuracy (60.78%). In this study, we found the RFE-SVM feature selection approach in combination with H-ELM to effectively enable the acquisition of high multiclass classification accuracy rates for structural neuroimaging data. In addition, we found that the most important features for classification were the surface area of the superior frontal lobe, and the cortical thickness, volume, and mean surface area of the whole cortex.

  19. Simultaneous auto-calibration and gradient delays estimation (SAGE) in non-Cartesian parallel MRI using low-rank constraints.

    PubMed

    Jiang, Wenwen; Larson, Peder E Z; Lustig, Michael

    2018-03-09

    To correct gradient timing delays in non-Cartesian MRI while simultaneously recovering corruption-free auto-calibration data for parallel imaging, without additional calibration scans. The calibration matrix constructed from multi-channel k-space data should be inherently low-rank. This property is used to construct reconstruction kernels or sensitivity maps. Delays between the gradient hardware across different axes and RF receive chain, which are relatively benign in Cartesian MRI (excluding EPI), lead to trajectory deviations and hence data inconsistencies for non-Cartesian trajectories. These in turn lead to higher rank and corrupted calibration information which hampers the reconstruction. Here, a method named Simultaneous Auto-calibration and Gradient delays Estimation (SAGE) is proposed that estimates the actual k-space trajectory while simultaneously recovering the uncorrupted auto-calibration data. This is done by estimating the gradient delays that result in the lowest rank of the calibration matrix. The Gauss-Newton method is used to solve the non-linear problem. The method is validated in simulations using center-out radial, projection reconstruction and spiral trajectories. Feasibility is demonstrated on phantom and in vivo scans with center-out radial and projection reconstruction trajectories. SAGE is able to estimate gradient timing delays with high accuracy at a signal to noise ratio level as low as 5. The method is able to effectively remove artifacts resulting from gradient timing delays and restore image quality in center-out radial, projection reconstruction, and spiral trajectories. The low-rank based method introduced simultaneously estimates gradient timing delays and provides accurate auto-calibration data for improved image quality, without any additional calibration scans. © 2018 International Society for Magnetic Resonance in Medicine.

  20. Automated Quality Assessment of Structural Magnetic Resonance Brain Images Based on a Supervised Machine Learning Algorithm.

    PubMed

    Pizarro, Ricardo A; Cheng, Xi; Barnett, Alan; Lemaitre, Herve; Verchinski, Beth A; Goldman, Aaron L; Xiao, Ena; Luo, Qian; Berman, Karen F; Callicott, Joseph H; Weinberger, Daniel R; Mattay, Venkata S

    2016-01-01

    High-resolution three-dimensional magnetic resonance imaging (3D-MRI) is being increasingly used to delineate morphological changes underlying neuropsychiatric disorders. Unfortunately, artifacts frequently compromise the utility of 3D-MRI yielding irreproducible results, from both type I and type II errors. It is therefore critical to screen 3D-MRIs for artifacts before use. Currently, quality assessment involves slice-wise visual inspection of 3D-MRI volumes, a procedure that is both subjective and time consuming. Automating the quality rating of 3D-MRI could improve the efficiency and reproducibility of the procedure. The present study is one of the first efforts to apply a support vector machine (SVM) algorithm in the quality assessment of structural brain images, using global and region of interest (ROI) automated image quality features developed in-house. SVM is a supervised machine-learning algorithm that can predict the category of test datasets based on the knowledge acquired from a learning dataset. The performance (accuracy) of the automated SVM approach was assessed, by comparing the SVM-predicted quality labels to investigator-determined quality labels. The accuracy for classifying 1457 3D-MRI volumes from our database using the SVM approach is around 80%. These results are promising and illustrate the possibility of using SVM as an automated quality assessment tool for 3D-MRI.
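As a rough sketch of the approach: a linear soft-margin SVM can be trained on per-volume quality features and used to predict usable versus artifact-laden volumes. The features, their values, and the Pegasos-style training loop below are all invented for illustration; the study's in-house features and SVM implementation are not detailed in this abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical per-volume quality features: [ghosting score, SNR, motion index]
good = rng.normal([0.1, 30.0, 0.2], [0.05, 4.0, 0.1], size=(200, 3))
bad = rng.normal([0.5, 18.0, 0.8], [0.10, 4.0, 0.2], size=(200, 3))
X = np.vstack([good, bad])
y = np.hstack([np.ones(200), -np.ones(200)])   # +1 usable, -1 artifact-laden

X = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize the features

# linear soft-margin SVM trained with the Pegasos sub-gradient method
lam, T = 1e-2, 4000
w, b = np.zeros(3), 0.0
for t in range(1, T + 1):
    i = rng.integers(len(y))
    eta = 1.0 / (lam * t)
    if y[i] * (X[i] @ w + b) < 1:              # hinge loss is active
        w = (1 - eta * lam) * w + eta * y[i] * X[i]
        b += eta * y[i]
    else:
        w = (1 - eta * lam) * w

accuracy = np.mean(np.sign(X @ w + b) == y)
```

In practice one would of course report accuracy on held-out volumes, as the study does against investigator-determined labels.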

  1. CMM Interim Check (U)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montano, Joshua Daniel

    2015-03-23

Coordinate Measuring Machines (CMM) are widely used in industry, throughout the Nuclear Weapons Complex and at Los Alamos National Laboratory (LANL) to verify part conformance to design definition. Calibration cycles for CMMs at LANL are predominantly one year in length. Unfortunately, several nonconformance reports have been generated to document the discovery of a certified machine found out of tolerance during a calibration closeout. In an effort to reduce risk to product quality, two solutions were proposed: shorten the calibration cycle, which could be costly, or perform an interim check to monitor the machine's performance between cycles. The CMM interim check discussed makes use of Renishaw's Machine Checking Gauge. This off-the-shelf product simulates a large sphere within a CMM's measurement volume and allows for error estimation. Data were gathered, analyzed, and simulated from seven machines in seventeen different configurations to create statistical process control run charts for on-the-floor monitoring.
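The run-chart logic described — set control limits from in-control history, then flag interim-check results that fall outside them — can be sketched as follows. The error values and the three-sigma limit choice are invented for illustration.

```python
import numpy as np

# hypothetical daily Machine Checking Gauge error estimates (micrometres)
errors = np.array([1.8, 2.1, 1.9, 2.0, 2.2, 1.7, 2.0, 1.9, 2.1, 3.4])

baseline = errors[:8]                      # in-control history sets the limits
center = baseline.mean()
sigma = baseline.std(ddof=1)
ucl = center + 3 * sigma                   # upper control limit
lcl = center - 3 * sigma                   # lower control limit

# interim checks landing outside the band trigger investigation
flags = (errors > ucl) | (errors < lcl)
```

Here only the final check (3.4 µm) escapes the control band, which would prompt action well before the next annual calibration.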

  2. Machine Learning and Radiology

    PubMed Central

    Wang, Shijun; Summers, Ronald M.

    2012-01-01

In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focused on six categories of applications in radiology: medical image segmentation, registration, computer aided detection and diagnosis, brain function or activity analysis and neurological disease diagnosis from fMR images, content-based image retrieval systems for CT or MRI images, and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has been shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit from each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers. PMID:22465077

  3. Long Term Uncertainty Investigations of 1 MN Force Calibration Machine at NPL, India (NPLI)

    NASA Astrophysics Data System (ADS)

    Kumar, Rajesh; Kumar, Harish; Kumar, Anil; Vikram

    2012-01-01

    The present paper is an attempt to study the long term uncertainty of 1 MN hydraulic multiplication system (HMS) force calibration machine (FCM) at the National Physical Laboratory, India (NPLI), which is used for calibration of the force measuring instruments in the range of 100 kN - 1 MN. The 1 MN HMS FCM was installed at NPLI in 1993 and was built on the principle of hydraulic amplifications of dead weights. The best measurement capability (BMC) of the machine is ± 0.025% (k = 2) and it is traceable to national standards by means of precision force transfer standards (FTS). The present study discusses the uncertainty variations of the 1 MN HMS FCM over the years and describes the other parameters in detail, too. The 1 MN HMS FCM was calibrated in the years 2004, 2006, 2007, 2008, 2009 and 2010 and the results have been reported.

  4. Method and apparatus for calibrating multi-axis load cells in a dexterous robot

    NASA Technical Reports Server (NTRS)

    Wampler, II, Charles W. (Inventor); Platt, Jr., Robert J. (Inventor)

    2012-01-01

    A robotic system includes a dexterous robot having robotic joints, angle sensors adapted for measuring joint angles at a corresponding one of the joints, load cells for measuring a set of strain values imparted to a corresponding one of the load cells during a predetermined pose of the robot, and a host machine. The host machine is electrically connected to the load cells and angle sensors, and receives the joint angle values and strain values during the predetermined pose. The robot presses together mating pairs of load cells to form the poses. The host machine executes an algorithm to process the joint angles and strain values, and from the set of all calibration matrices that minimize error in force balance equations, selects the set of calibration matrices that is closest in a value to a pre-specified value. A method for calibrating the load cells via the algorithm is also provided.
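A simplified variant of the calibration step can be sketched as a least-squares fit of a cell's calibration matrix from pose data. Note the assumption: this sketch uses known applied loads, whereas the patent's method derives constraints from force balance between pressed-together cell pairs and then selects among the minimizing matrices; the dimensions and noise level here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
C_true = rng.normal(size=(6, 6))            # hypothetical 6x6 calibration matrix
strains = rng.normal(size=(6, 40))          # raw gauge readings over 40 poses
forces = C_true @ strains + 1e-3 * rng.normal(size=(6, 40))  # applied loads

# least-squares fit of the calibration matrix:
# forces = C @ strains  =>  solve strains.T @ C.T ~= forces.T for C.T
M, *_ = np.linalg.lstsq(strains.T, forces.T, rcond=None)
C_est = M.T
err = np.abs(C_est - C_true).max()
```

With enough well-conditioned poses the fit recovers the calibration matrix to within the measurement noise.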

  5. Visual brain activity patterns classification with simultaneous EEG-fMRI: A multimodal approach.

    PubMed

    Ahmad, Rana Fayyaz; Malik, Aamir Saeed; Kamel, Nidal; Reza, Faruque; Amin, Hafeez Ullah; Hussain, Muhammad

    2017-01-01

Classification of visual information from brain activity data is a challenging task. Many studies reported in the literature are based on brain activity patterns using either fMRI or EEG/MEG alone. EEG and fMRI are considered two complementary neuroimaging modalities in terms of their temporal and spatial resolution for mapping brain activity. To obtain high spatial and temporal resolution of the brain at the same time, simultaneous EEG-fMRI appears fruitful. In this article, we propose a new method based on simultaneous EEG-fMRI data and a machine learning approach to classify visual brain activity patterns. We acquired EEG-fMRI data simultaneously from ten healthy human participants while showing them visual stimuli. A data fusion approach is used to merge the EEG and fMRI data, and a machine learning classifier is used for classification. Results showed that superior classification performance is achieved with simultaneous EEG-fMRI data compared with EEG or fMRI data alone. This shows that the multimodal approach improved classification accuracy compared with other approaches reported in the literature. The proposed simultaneous EEG-fMRI approach for classifying brain activity patterns can be helpful for predicting or fully decoding brain activity patterns.

  6. Automated discrimination of dementia spectrum disorders using extreme learning machine and structural T1 MRI features.

    PubMed

    Jongin Kim; Boreom Lee

    2017-07-01

The classification of neuroimaging data for the diagnosis of Alzheimer's Disease (AD) is one of the main research goals of the neuroscience and clinical fields. In this study, we applied an extreme learning machine (ELM) classifier to discriminate AD and mild cognitive impairment (MCI) from normal controls (NC). We compared the performance of ELM with that of a linear-kernel support vector machine (SVM) for 718 structural MRI images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The data consisted of normal controls, MCI converters (MCI-C), MCI non-converters (MCI-NC), and AD. We employed the SVM-based recursive feature elimination (RFE-SVM) algorithm to find the optimal subset of features. In this study, we found that the RFE-SVM feature selection approach in combination with ELM shows superior classification accuracy to that of the linear-kernel SVM for structural T1 MRI data.
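Recursive feature elimination as described — train a model, discard the feature with the smallest weight magnitude, repeat — can be sketched on synthetic data. A ridge least-squares classifier stands in here for the SVM base model, and only the first three features are informative by construction.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, informative = 300, 10, 3
X = rng.normal(size=(n, p))
w_true = np.zeros(p)
w_true[:informative] = [2.0, -1.5, 1.0]          # only features 0..2 matter
y = np.sign(X @ w_true + 0.1 * rng.normal(size=n))

def fit_linear(X, y):
    # ridge-regularized least-squares classifier as the RFE base model
    return np.linalg.solve(X.T @ X + 1e-2 * np.eye(X.shape[1]), X.T @ y)

keep = list(range(p))
while len(keep) > informative:                   # eliminate weakest feature
    w = fit_linear(X[:, keep], y)
    keep.pop(int(np.argmin(np.abs(w))))
```

After the elimination loop, `keep` retains exactly the informative feature indices; in the study the surviving subset is then passed to the ELM classifier.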

  7. Whole-machine calibration approach for phased array radar with self-test

    NASA Astrophysics Data System (ADS)

    Shen, Kai; Yao, Zhi-Cheng; Zhang, Jin-Chang; Yang, Jian

    2017-06-01

The performance of missile-borne phased array radar is greatly influenced by inter-channel amplitude and phase inconsistencies. To ensure its performance, the amplitude and phase characteristics of the radar should be calibrated. Commonly used methods mainly focus on antenna calibration, such as FFT, REV, etc. However, the radar channel also contains T/R modules, transmission channels, ADCs and other components. To achieve rapid whole-machine calibration and compensation of the phased array radar's amplitude and phase information, we adopt a high-precision plane-scanning test platform for amplitude and phase testing. A calibration approach for the whole channel system based on a radar frequency source test is proposed. Finally, the advantages and application prospects of this approach are analysed.

  8. Calibrated thermal microscopy of the tool-chip interface in machining

    NASA Astrophysics Data System (ADS)

    Yoon, Howard W.; Davies, Matthew A.; Burns, Timothy J.; Kennedy, M. D.

    2000-03-01

A critical parameter in predicting tool wear during machining, and in accurate computer simulations of machining, is the spatially resolved temperature at the tool-chip interface. We describe the development and calibration of a nearly diffraction-limited thermal-imaging microscope to measure spatially resolved temperatures during the machining of AISI 1045 steel with a tungsten-carbide tool bit. The microscope images a 0.5 mm × 0.5 mm target region with < 5 µm spatial resolution and is based on a commercial InSb 128 × 128 focal plane array with an all-reflective microscope objective. The minimum frame acquisition time is < 1 ms. The microscope is calibrated using a standard blackbody source from the radiance temperature calibration laboratory at the National Institute of Standards and Technology, and the emissivity of the machined material is deduced from infrared reflectivity measurements. The steady-state thermal images from the machining of 1045 steel are compared with previous determinations of tool temperatures from micro-hardness measurements and are found to be in agreement with those studies. The measured average chip temperatures are also in agreement with the temperature rise estimated from energy-balance considerations. From these calculations, and the agreement between the experimental and calculated determinations of the emissivity of the 1045 steel, the standard uncertainty of the temperature measurements is estimated to be about 45 °C at 900 °C.

  9. A Comparison of Supervised Machine Learning Algorithms and Feature Vectors for MS Lesion Segmentation Using Multimodal Structural MRI

    PubMed Central

    Sweeney, Elizabeth M.; Vogelstein, Joshua T.; Cuzzocreo, Jennifer L.; Calabresi, Peter A.; Reich, Daniel S.; Crainiceanu, Ciprian M.; Shinohara, Russell T.

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance. PMID:24781953

  10. A comparison of supervised machine learning algorithms and feature vectors for MS lesion segmentation using multimodal structural MRI.

    PubMed

    Sweeney, Elizabeth M; Vogelstein, Joshua T; Cuzzocreo, Jennifer L; Calabresi, Peter A; Reich, Daniel S; Crainiceanu, Ciprian M; Shinohara, Russell T

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance.

  11. Classifying Cognitive Profiles Using Machine Learning with Privileged Information in Mild Cognitive Impairment.

    PubMed

    Alahmadi, Hanin H; Shen, Yuan; Fouad, Shereen; Luft, Caroline Di B; Bentham, Peter; Kourtzi, Zoe; Tino, Peter

    2016-01-01

Early diagnosis of dementia is critical for assessing disease progression and potential treatment. State-of-the-art machine learning techniques have been increasingly employed to take on this diagnostic task. In this study, we employed Generalized Matrix Learning Vector Quantization (GMLVQ) classifiers to discriminate patients with Mild Cognitive Impairment (MCI) from healthy controls based on their cognitive skills. Further, we adopted a "Learning with privileged information" approach to combine cognitive and fMRI data for the classification task. The resulting classifier operates solely on the cognitive data while it incorporates the fMRI data as privileged information (PI) during training. This novel classifier is of practical use as the collection of brain imaging data is not always possible with patients and older participants. MCI patients and healthy age-matched controls were trained to extract structure from temporal sequences. We ask whether machine learning classifiers can be used to discriminate patients from controls and whether differences between these groups relate to individual cognitive profiles. To this end, we tested participants in four cognitive tasks: working memory, cognitive inhibition, divided attention, and selective attention. We also collected fMRI data before and after training on a probabilistic sequence learning task and extracted fMRI responses and connectivity as features for machine learning classifiers. Our results show that the PI-guided GMLVQ classifiers outperform the baseline classifier that only used the cognitive data. In addition, we found that for the baseline classifier, divided attention is the only relevant cognitive feature. When PI was incorporated, divided attention remained the most relevant feature, while cognitive inhibition also became relevant for the task. Interestingly, this analysis for the fMRI GMLVQ classifier suggests that (1) when the overall fMRI signal is used as input to the classifier, the post-training session is most relevant; and (2) when the graph feature reflecting the underlying spatiotemporal fMRI pattern is used, the pre-training session is most relevant. Taken together, these results suggest that brain connectivity before training and overall fMRI signal after training are both diagnostic of cognitive skills in MCI.

  12. A Novel Diffusion MRI Phantom, and a Method for Enhancing MR Image Quality | NCI Technology Transfer Center | TTC

    Cancer.gov

    The use of Polyvinyl Pyrrolidone (PVP) solutions of varying concentrations as phantoms for diffusion MRI calibration and quality control is disclosed. This diffusion MRI phantom material is already being adopted by radiologists for quality control and assurance in clinical studies.

  13. A quantitative comparison of two methods to correct eddy current-induced distortions in DT-MRI.

    PubMed

    Muñoz Maniega, Susana; Bastin, Mark E; Armitage, Paul A

    2007-04-01

    Eddy current-induced geometric distortions of single-shot, diffusion-weighted, echo-planar (DW-EP) images are a major confounding factor to the accurate determination of water diffusion parameters in diffusion tensor MRI (DT-MRI). Previously, it has been suggested that these geometric distortions can be removed from brain DW-EP images using affine transformations determined from phantom calibration experiments using iterative cross-correlation (ICC). Since this approach was first described, a number of image-based registration methods have become available that can also correct eddy current-induced distortions in DW-EP images. However, as yet no study has investigated whether separate eddy current calibration or image-based registration provides the most accurate way of removing these artefacts from DT-MRI data. Here we compare how ICC phantom calibration and affine FLIRT (http://www.fmrib.ox.ac.uk), a popular image-based multi-modal registration method that can correct both eddy current-induced distortions and bulk subject motion, perform when registering DW-EP images acquired with different slice thicknesses (2.8 and 5 mm) and b-values (1000 and 3000 s/mm(2)). With the use of consistency testing, it was found that ICC was a more robust algorithm for correcting eddy current-induced distortions than affine FLIRT, especially at high b-value and small slice thickness. In addition, principal component analysis demonstrated that the combination of ICC phantom calibration (to remove eddy current-induced distortions) with rigid body FLIRT (to remove bulk subject motion) provided a more accurate registration of DT-MRI data than that achieved by affine FLIRT.

  14. Image-guided Navigation of Single-element Focused Ultrasound Transducer

    PubMed Central

    Kim, Hyungmin; Chiu, Alan; Park, Shinsuk; Yoo, Seung-Schik

    2014-01-01

    The spatial specificity and controllability of focused ultrasound (FUS), in addition to its ability to modify the excitability of neural tissue, allows for the selective and reversible neuromodulation of the brain function, with great potential in neurotherapeutics. Intra-operative magnetic resonance imaging (MRI) guidance (in short, MRg) has limitations due to its complicated examination logistics, such as fixation through skull screws to mount the stereotactic frame, simultaneous sonication in the MRI environment, and restrictions in choosing MR-compatible materials. In order to overcome these limitations, an image-guidance system based on optical tracking and pre-operative imaging data is developed, separating the imaging acquisition for guidance and sonication procedure for treatment. Techniques to define the local coordinates of the focal point of sonication are presented. First, mechanical calibration detects the concentric rotational motion of a rigid-body optical tracker, attached to a straight rod mimicking the sonication path, pivoted at the virtual FUS focus. The spatial error presented in the mechanical calibration was compensated further by MRI-based calibration, which estimates the spatial offset between the navigated focal point and the ground-truth location of the sonication focus obtained from a temperature-sensitive MR sequence. MRI-based calibration offered a significant decrease in spatial errors (1.9±0.8 mm; 57% reduction) compared to the mechanical calibration method alone (4.4±0.9 mm). Using the presented method, pulse-mode FUS was applied to the motor area of the rat brain, and successfully stimulated the motor cortex. The presented techniques can be readily adapted for the transcranial application of FUS to intact human brain. PMID:25232203

  15. Probe-Specific Procedure to Estimate Sensitivity and Detection Limits for 19F Magnetic Resonance Imaging.

    PubMed

    Taylor, Alexander J; Granwehr, Josef; Lesbats, Clémentine; Krupa, James L; Six, Joseph S; Pavlovskaya, Galina E; Thomas, Neil R; Auer, Dorothee P; Meersmann, Thomas; Faas, Henryk M

    2016-01-01

Due to the low fluorine background signal in vivo, 19F is a good marker to study the fate of exogenous molecules by magnetic resonance imaging (MRI) using equilibrium nuclear spin polarization schemes. Since 19F MRI applications require high sensitivity, it can be important to assess experimental feasibility already at the design stage by estimating the minimum detectable fluorine concentration. Here we propose a simple method for the calibration of MRI hardware, providing sensitivity estimates for a given scanner and coil configuration. An experimental "calibration factor" to account for variations in coil configuration and hardware set-up is specified. Once it has been determined in a calibration experiment, the sensitivity of an experiment or, alternatively, the minimum number of required spins or the minimum marker concentration can be estimated without the need for a pilot experiment. The definition of this calibration factor is derived from standard equations for the sensitivity in magnetic resonance, yet the method is not restricted by the limited validity of these equations, since additional instrument-dependent factors are implicitly included during calibration. The method is demonstrated using MR spectroscopy and imaging experiments with different 19F samples, both paramagnetically and susceptibility broadened, to approximate a range of realistic environments.

  16. Classification of fMRI resting-state maps using machine learning techniques: A comparative study

    NASA Astrophysics Data System (ADS)

    Gallos, Ioannis; Siettos, Constantinos

    2017-11-01

We compare the efficiency of Principal Component Analysis (PCA) and nonlinear manifold learning algorithms (ISOMAP and Diffusion Maps) for classifying brain maps between groups of schizophrenia patients and healthy controls from fMRI scans acquired during a resting-state experiment. After a standard pre-processing pipeline, we applied spatial independent component analysis (ICA) to reduce (a) the noise and (b) the spatial-temporal dimensionality of the fMRI maps. On the cross-correlation matrix of the ICA components, we applied PCA, ISOMAP and Diffusion Maps to find an embedded low-dimensional space. Finally, support vector machine (SVM) and k-NN algorithms were used to evaluate the performance of the algorithms in classifying between the two groups.
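The last two stages of such a pipeline — linear dimensionality reduction followed by a simple classifier — can be sketched with PCA via SVD and leave-one-out k-NN. The feature matrix below is synthetic and stands in for the study's ICA-component cross-correlations; group sizes, dimensions, and effect size are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical subject-level connectivity features:
# 20 patients and 20 controls, 20 features, 5 of them discriminative
patients = rng.normal(size=(20, 20))
patients[:, :5] += 1.5
controls = rng.normal(size=(20, 20))
X = np.vstack([patients, controls])
y = np.array([1] * 20 + [0] * 20)

# PCA via SVD of the centered data; keep 5 components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T

def knn_loo_accuracy(Z, y, k=3):
    """Leave-one-out k-NN accuracy in the embedded space."""
    hits = 0
    for i in range(len(y)):
        d = np.linalg.norm(Z - Z[i], axis=1)
        d[i] = np.inf                      # exclude the test subject itself
        hits += (y[np.argsort(d)[:k]].sum() > k / 2) == y[i]
    return hits / len(y)

accuracy = knn_loo_accuracy(Z, y)
```

Swapping the PCA step for ISOMAP or Diffusion Maps changes only how `Z` is produced; the classifier evaluation stays the same.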

  17. Application of coordinate transform on ball plate calibration

    NASA Astrophysics Data System (ADS)

    Wei, Hengzheng; Wang, Weinong; Ren, Guoying; Pei, Limei

    2015-02-01

For the ball plate calibration method using a coordinate measuring machine (CMM) equipped with a laser interferometer, it is essential to adjust the ball plate parallel to the direction of the laser beam, which is very time-consuming. To solve this problem, a method based on coordinate transformation between the machine system and the object system is presented. With the coordinates of fixed points on the ball plate measured in both the object system and the machine system, the transformation matrix between the coordinate systems is calculated. The laser interferometer measurement error due to the placement of the ball plate can be corrected with this transformation matrix. Experimental results indicate that this method is consistent with the manual adjustment method. It avoids the complexity of ball plate adjustment and can also be applied to ball bar calibration.
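The transformation-matrix step — recovering the rigid motion relating the object and machine coordinate systems from common fixed points — is a classic Procrustes/Kabsch problem. A sketch with noise-free synthetic points (the point coordinates, rotation, and translation are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
# hypothetical fixed-point coordinates of the ball plate in the object system
obj = rng.uniform(0, 500, size=(6, 3))

# the same points measured in the machine system: rotated and translated
theta = np.deg2rad(17.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([120.0, -45.0, 8.0])
mach = obj @ R_true.T + t_true

# Kabsch/Procrustes estimate of the object-to-machine transformation
co, cm = obj.mean(axis=0), mach.mean(axis=0)
U, _, Vt = np.linalg.svd((mach - cm).T @ (obj - co))
D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # no reflections
R = U @ D @ Vt
t = cm - R @ co
err = np.abs(obj @ R.T + t - mach).max()
```

With real measurements the residual `err` would reflect CMM noise instead of machine precision, and the fitted `R`, `t` would then correct the interferometer data for the plate's placement.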

  18. Machine learning and radiology.

    PubMed

    Wang, Shijun; Summers, Ronald M

    2012-07-01

In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focused on six categories of applications in radiology: medical image segmentation, registration, computer aided detection and diagnosis, brain function or activity analysis and neurological disease diagnosis from fMR images, content-based image retrieval systems for CT or MRI images, and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has been shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit from each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers. Copyright © 2012. Published by Elsevier B.V.

  19. Calibrated LCD/TFT stimulus presentation for visual psychophysics in fMRI.

    PubMed

    Strasburger, H; Wüstenberg, T; Jäncke, L

    2002-11-15

    Standard projection techniques using liquid crystal (LCD) or thin-film transistor (TFT) technology show drastic distortions in luminance and contrast characteristics across the screen and across grey levels. Common luminance measurement and calibration techniques are not applicable in the vicinity of MRI scanners. With the aid of a fibre optic, we measured screen luminances for the full space of screen position and image grey values and on that basis developed a compensation technique that involves both luminance homogenisation and position-dependent gamma correction. By the technique described, images displayed to a subject in functional MRI can be specified with high precision by a matrix of desired luminance values rather than by local grey value.
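The position-dependent gamma correction can be sketched as inverting a locally measured display response. The response model, its parameters, and the helper `grey_for_luminance` below are invented for illustration; the paper's actual compensation is built from fibre-optic measurements over the full grid of positions and grey values.

```python
import numpy as np

# hypothetical measured response at three screen positions:
# luminance(g, pos) = lmax[pos] * (g / 255) ** gamma[pos]
lmax = np.array([210.0, 240.0, 198.0])   # local peak luminance, cd/m^2
gamma = np.array([2.1, 2.3, 1.9])        # local gamma exponent

def grey_for_luminance(L, pos):
    """Invert the local response so every position shows the desired
    luminance; homogenise to the dimmest position's ceiling first."""
    L = min(L, lmax.min())
    return int(round(255 * (L / lmax[pos]) ** (1 / gamma[pos])))

# one target luminance maps to a different grey value at each position,
# which is exactly the position-dependent correction the paper describes
codes = [grey_for_luminance(100.0, p) for p in range(3)]
```

An image specified as a matrix of desired luminances is then rendered by applying the local inverse response at every pixel position.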

  20. Dying art of a history and physical: pulsatile tinnitus

    PubMed Central

    Fekete, Zoltan

    2017-01-01

    Modern medicine often leaves the history and physical by the wayside. Physicians instead skip directly to diagnostic modalities like MRI and angiography. In this case report, we discuss a patient who presented with migraine symptoms. Auscultation revealed signs of pulsatile tinnitus. Further imaging concluded that it was secondary to a type I dural arteriovenous fistula. Thanks to a proper and thorough history and physical, the patient was streamlined into an accurate and efficient work-up leading to symptomatic relief and quality of life improvement. Imaging is a powerful adjunctive technique in modern medicine, but physicians must not rely on machines to diagnose their patients. If this trend continues, it will have a tremendous negative impact on the cost and calibre of healthcare. Our hope is that this case will spread awareness in the medical community, urging physicians to use the lost art of a history and physical. PMID:29183894

  1. The production of calibration specimens for impact testing of subsize Charpy specimens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexander, D.J.; Corwin, W.R.; Owings, T.D.

    1994-09-01

Calibration specimens have been manufactured for checking the performance of a pendulum impact testing machine that has been configured for testing subsize specimens, both half-size (5.0 × 5.0 × 25.4 mm) and third-size (3.33 × 3.33 × 25.4 mm). Specimens were fabricated from quenched-and-tempered 4340 steel heat treated to produce different microstructures that would result in either high or low absorbed energy levels on testing. A large group of both half- and third-size specimens were tested at −40 °C. The results of the tests were analyzed for average value and standard deviation, and these values were used to establish calibration limits for the Charpy impact machine when testing subsize specimens. These average values plus or minus two standard deviations were set as the acceptable limits for the average of five tests for calibration of the impact testing machine.
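The acceptance rule described — the lot average plus or minus two standard deviations as the band for the mean of five check tests — is straightforward to compute. The energy values and the helper `machine_in_calibration` are invented for illustration.

```python
import numpy as np

# hypothetical absorbed energies (J) from qualification tests of one
# specimen lot at the reference temperature
energies = np.array([9.1, 8.7, 9.4, 8.9, 9.2, 9.0, 8.8, 9.3, 9.1, 8.9])

mean, sd = energies.mean(), energies.std(ddof=1)
lower, upper = mean - 2 * sd, mean + 2 * sd   # acceptance band for the
                                              # average of five check tests

def machine_in_calibration(five_results):
    """The machine passes if the average of five calibration-specimen
    tests falls inside the two-standard-deviation band."""
    return lower <= np.mean(five_results) <= upper
```

A pendulum machine whose five-test average drifts outside the band would be taken out of service for recalibration.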

  2. A Fabry-Perot Interferometry Based MRI-Compatible Miniature Uniaxial Force Sensor for Percutaneous Needle Placement

    PubMed Central

    Shang, Weijian; Su, Hao; Li, Gang; Furlong, Cosme; Fischer, Gregory S.

    2014-01-01

Robot-assisted surgical procedures, taking advantage of the high soft tissue contrast and real-time imaging of magnetic resonance imaging (MRI), are developing rapidly. However, it is crucial to maintain tactile force feedback in MRI-guided needle-based procedures. This paper presents a Fabry-Perot interference (FPI) based system of an MRI-compatible fiber optic sensor which has been integrated into a piezoelectrically actuated robot for prostate cancer biopsy and brachytherapy in a 3T MRI scanner. The opto-electronic sensing system design was miniaturized to fit inside an MRI-compatible robot controller enclosure. A flexure mechanism was designed that integrates the FPI sensor fiber for measuring needle insertion force, and finite element analysis was performed to optimize the force-deformation relationship. The compact, low-cost FPI sensing system was integrated into the robot and calibration was conducted. The root mean square (RMS) error of the calibration over the range of 0–10 N was 0.318 N relative to the theoretical model, which has been shown sufficient for robot control and teleoperation. PMID:25126153

  3. 3D artifact for calibrating kinematic parameters of articulated arm coordinate measuring machines

    NASA Astrophysics Data System (ADS)

    Zhao, Huining; Yu, Liandong; Xia, Haojie; Li, Weishi; Jiang, Yizhou; Jia, Huakun

    2018-06-01

    In this paper, a 3D artifact is proposed to calibrate the kinematic parameters of articulated arm coordinate measuring machines (AACMMs). The artifact is composed of 14 reference points at three different heights, which provide 91 different reference lengths, and a method is proposed to calibrate the artifact with laser tracker multi-stations. The kinematic parameters of an AACMM can therefore be calibrated in a single setup of the proposed artifact, instead of having to adjust 1D or 2D artifacts to different positions and orientations as in existing methods. As a result, calibrating the AACMM with the proposed artifact saves time in comparison with the traditional 1D or 2D artifacts. The performance of the AACMM calibrated with the proposed artifact is verified with a 600.003 mm gauge block. The result shows that the measurement accuracy of the AACMM is improved effectively through calibration with the proposed artifact.
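
    The count of 91 reference lengths follows from the 14 reference points: every unordered pair of points defines one length, and C(14, 2) = 91. A sketch of generating those pairwise lengths from calibrated point coordinates (the coordinates here are placeholders):

```python
from itertools import combinations
import math

def reference_lengths(points):
    """All pairwise distances between calibrated 3D reference points."""
    return [math.dist(p, q) for p, q in combinations(points, 2)]

# 14 placeholder reference points at three different heights (mm):
points = [(100 * i, 50 * (i % 4), [0, 150, 300][i % 3]) for i in range(14)]
lengths = reference_lengths(points)
print(len(lengths))  # 14 choose 2 = 91
```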

  4. New algorithms for motion error detection of numerical control machine tool by laser tracking measurement on the basis of GPS principle.

    PubMed

    Wang, Jindong; Chen, Peng; Deng, Yufen; Guo, Junjie

    2018-01-01

    As a three-dimensional measuring instrument, the laser tracker is widely used in industrial measurement. To avoid the influence of angle measurement error on the overall measurement accuracy, multi-station and time-sharing measurement with a laser tracker is introduced in this paper on the basis of the global positioning system (GPS) principle. For the proposed method, how to accurately determine the coordinates of each measuring point from a large amount of measured data is a critical issue. Taking the detection of motion error of a numerical control machine tool as an example, the corresponding measurement algorithms are investigated thoroughly. By establishing the mathematical model of detecting the motion error of a machine tool with this method, an analytical algorithm for base station calibration and measuring point determination is deduced that requires no initial iterative value in the calculation. However, when the motion area of the machine tool lies in a 2D plane, the coefficient matrix of the base station calibration is singular, which distorts the result. To overcome this limitation of the original algorithm, an improved analytical algorithm is also derived. Meanwhile, the calibration accuracy of the base station with the improved algorithm is compared with that of the original analytical algorithm and of iterative algorithms such as the Gauss-Newton and Levenberg-Marquardt algorithms. The experiment further verifies the feasibility and effectiveness of the improved algorithm. In addition, the different motion areas of the machine tool have a certain influence on the calibration accuracy of the base station, and the influence of measurement error on the calibration result, which depends on the condition number of the coefficient matrix, is analyzed.
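
    The core numerical sub-problem in the multi-station, time-sharing scheme is GPS-style multilateration: recovering a point's coordinates from its distances to several base stations. A minimal iterative Gauss-Newton sketch of that sub-problem (the station layout and values are made up; the paper's own contribution is an analytical algorithm that needs no initial iterative value):

```python
import numpy as np

def locate_point(stations, distances, x0, iters=20):
    """Gauss-Newton solution of ||x - s_i|| = d_i for the point x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diffs = x - stations                      # (n, 3) offsets to stations
        ranges = np.linalg.norm(diffs, axis=1)    # predicted distances
        J = diffs / ranges[:, None]               # Jacobian d(range)/dx
        r = ranges - distances                    # range residuals
        x -= np.linalg.lstsq(J, r, rcond=None)[0]
    return x

# Four illustrative base stations and a known "truth" point for the demo:
stations = np.array([[0.0, 0, 0], [2, 0, 0], [0, 2, 0], [0, 0, 2]])
truth = np.array([0.7, 0.4, 0.9])
d = np.linalg.norm(stations - truth, axis=1)      # noise-free distances
print(locate_point(stations, d, x0=[1.0, 1.0, 1.0]))
```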

  6. Machine tools error characterization and compensation by on-line measurement of artifact

    NASA Astrophysics Data System (ADS)

    Wahid Khan, Abdul; Chen, Wuyi; Wu, Lili

    2009-11-01

    Most manufacturing machine tools are utilized for mass production or batch production with high accuracy under a deterministic manufacturing principle. The volumetric accuracy of machine tools depends on the positional accuracy of the cutting tool, probe or end effector relative to the workpiece in the workspace volume. In this research paper, a methodology is presented for volumetric calibration of machine tools by on-line measurement of an artifact or an object of a similar type. The machine tool geometric error characterization was carried out through a standard or an artifact having similar geometry to the mass-production or batch-production product. The artifact was measured at an arbitrary position in the volumetric workspace with a calibrated Renishaw touch-trigger probe system. Positional errors were stored in a computer for compensation purposes, so that the manufacturing batch could subsequently be run through compensated codes. This methodology was found quite effective for manufacturing high-precision components with greater dimensional accuracy and reliability. Calibration by on-line measurement offers the advantage of improving the manufacturing process through the deterministic manufacturing principle and was found efficient and economical, but it is limited to the workspace or envelope surface of the measured artifact's geometry or profile.

  7. A novel approach to calibrate the hemodynamic model using functional Magnetic Resonance Imaging (fMRI) measurements.

    PubMed

    Khoram, Nafiseh; Zayane, Chadia; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem

    2016-03-15

    The calibration of the hemodynamic model that describes changes in blood flow and blood oxygenation during brain activation is a crucial step for successfully monitoring and possibly predicting brain activity. This in turn has the potential to provide diagnosis and treatment of brain diseases in early stages. We propose an efficient numerical procedure for calibrating the hemodynamic model using fMRI measurements. The proposed solution methodology is a regularized iterative method equipped with a Kalman filtering-type procedure. The Newton component of the proposed method addresses the nonlinear aspect of the problem. The regularization feature is used to ensure the stability of the algorithm. The Kalman filter procedure is incorporated to address the noise in the data. Numerical results obtained with synthetic data as well as with real fMRI measurements are presented to illustrate the accuracy, robustness to noise, and cost-effectiveness of the proposed method. We present numerical results that clearly demonstrate that the proposed method outperforms the Cubature Kalman Filter (CKF), one of the most prominent existing numerical methods. We have designed an iterative numerical technique, called the TNM-CKF algorithm, for calibrating the mathematical model that describes the single-event-related brain response when fMRI measurements are given. The method appears to be highly accurate and effective in reconstructing the BOLD signal even when the measurements are tainted with a high noise level (as high as 30%). Published by Elsevier B.V.

  8. Calibration standard of body tissue with magnetic nanocomposites for MRI and X-ray imaging

    NASA Astrophysics Data System (ADS)

    Rahn, Helene; Woodward, Robert; House, Michael; Engineer, Diana; Feindel, Kirk; Dutz, Silvio; Odenbach, Stefan; StPierre, Tim

    2016-05-01

    We present a first study of a long-term phantom for Magnetic Resonance Imaging (MRI) and X-ray imaging of biological tissues with magnetic nanocomposites (MNC), suitable for 3-dimensional and quantitative imaging of tissues after, e.g., magnetically assisted cancer treatments. We performed a cross-calibration of X-ray microcomputed tomography (XμCT) and MRI with a joint calibration standard for both imaging techniques. For this, we have designed a phantom for MRI and X-ray computed tomography which represents biological tissue enriched with MNC. The developed phantoms consist of an elastomer with different concentrations of multi-core MNC. The matrix material is a synthetic thermoplastic gel, PermaGel (PG). The developed phantoms have been analyzed with Nuclear Magnetic Resonance (NMR) relaxometry (Bruker minispec mq 60) at 1.4 T to obtain R2 transverse relaxation rates, with SQUID (Superconducting QUantum Interference Device) magnetometry and Inductively Coupled Plasma Mass Spectrometry (ICP-MS) to verify the magnetite concentration, and with XμCT and 9.4 T MRI to visualize the phantoms 3-dimensionally and also to obtain T2 relaxation times. A sensitivity range is specified for the standard imaging techniques X-ray computed tomography (XCT) and MRI as well as for NMR. These novel phantoms show long-term stability over several months up to years. It was possible to suspend a particular MNC within the PG, reaching a concentration range from 0 mg/ml to 6.914 mg/ml. The R2 relaxation rates from 1.4 T NMR relaxometry correlate clearly (R² = 0.994) with MNC concentrations between 0 mg/ml and 4.5 mg/ml. The MRI experiments likewise show a linear correlation between R2 relaxation and MNC concentration, but over a range of MNC concentrations from 0 mg/ml to 1.435 mg/ml. XμCT was shown to best display moderate and high MNC concentrations; the sensitivity range for this particular XμCT apparatus extends from 0.569 mg/ml to 6.914 mg/ml. The cross-calibration defined a shared sensitivity range for the measuring method (NMR relaxometry) and the imaging modalities (XμCT and MRI): it extends from 0.569 mg/ml (limited by XμCT) to 1.435 mg/ml (limited by MRI). The presented phantoms have been found suitable to act as a body tissue substitute for XCT imaging, as well as an acceptable T2 phantom of biological tissue enriched with magnetic nanoparticles for MRI.
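
    The linear R2-versus-concentration relationship on which the calibration rests can be expressed as an ordinary least-squares fit with a coefficient of determination; a minimal sketch with made-up values in place of the measured relaxation rates:

```python
import numpy as np

def linear_calibration(conc, r2_rates):
    """Fit R2 = slope * c + intercept; return the fit and its R^2."""
    slope, intercept = np.polyfit(conc, r2_rates, 1)
    predicted = slope * np.asarray(conc) + intercept
    resid = np.asarray(r2_rates) - predicted
    ss_res = float(resid @ resid)
    ss_tot = float(((np.asarray(r2_rates) - np.mean(r2_rates)) ** 2).sum())
    return slope, intercept, 1 - ss_res / ss_tot

# Illustrative concentrations (mg/ml) and relaxation rates (1/s):
c = [0.0, 0.5, 1.0, 2.0, 3.0, 4.5]
r2 = [8.0, 15.9, 24.2, 40.1, 55.8, 80.3]
slope, intercept, r_squared = linear_calibration(c, r2)
print(round(r_squared, 3))
```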

  9. Data filtering with support vector machines in geometric camera calibration.

    PubMed

    Ergun, B; Kavzoglu, T; Colkesen, I; Sahin, C

    2010-02-01

    The use of non-metric digital cameras in close-range photogrammetric applications and machine vision has become a popular research agenda. Being an essential component of photogrammetric evaluation, camera calibration is a crucial stage for non-metric cameras. Therefore, accurate camera calibration and orientation procedures have become prerequisites for the extraction of precise and reliable 3D metric information from images. The lack of accurate inner orientation parameters can lead to unreliable results in the photogrammetric process. A camera can be well defined by its principal distance, principal point offset and lens distortion parameters. Different camera models have been formulated and used in close-range photogrammetry, but generally sensor orientation and calibration are performed with a perspective geometrical model by means of the bundle adjustment. In this study, support vector machines (SVMs) with a radial basis function kernel are employed to model the distortions measured for an Olympus E10 camera system with an aspherical zoom lens; the modelled distortions are later used in the geometric calibration process. The intention is to introduce an alternative approach for the on-the-job photogrammetric calibration stage. Experimental results for the DSLR camera at three focal length settings (9, 18 and 36 mm) were estimated using bundle adjustment with additional parameters, and analyses were conducted based on object point discrepancies and standard errors. Results show the robustness of the SVM approach in correcting image coordinates by modelling total distortion in the on-the-job calibration process using a limited number of images.
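
    A rough sense of RBF-kernel distortion modelling can be given with a small kernel ridge regressor; this is a simplified stand-in for the paper's SVM formulation, and the radial distortion profile below is synthetic:

```python
import numpy as np

def rbf_kernel(a, b, gamma):
    """Gaussian kernel matrix between two 1D sample sets."""
    return np.exp(-gamma * (np.asarray(a)[:, None] - np.asarray(b)[None, :]) ** 2)

def fit_krr(x, y, gamma=5.0, lam=1e-8):
    """Kernel ridge regression: solve (K + lam*I) alpha = y."""
    alpha = np.linalg.solve(rbf_kernel(x, x, gamma) + lam * np.eye(len(x)), y)
    return lambda q: rbf_kernel(q, x, gamma) @ alpha

# Synthetic radial distortion profile dr = k1*r^3 + k2*r^5 (a classic
# polynomial distortion form; the coefficients are made up for illustration).
r = np.linspace(0.0, 1.0, 30)
dr = 0.08 * r**3 - 0.03 * r**5
model = fit_krr(r, dr)
max_err = float(np.max(np.abs(model(r) - dr)))
print(max_err)  # smooth profile is fitted closely at the sample points
```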

  10. Binary pressure-sensitive paint measurements using miniaturised, colour, machine vision cameras

    NASA Astrophysics Data System (ADS)

    Quinn, Mark Kenneth

    2018-05-01

    Recent advances in machine vision technology and capability have led to machine vision cameras becoming applicable for scientific imaging. This study aims to demonstrate the applicability of machine vision colour cameras for the measurement of dual-component pressure-sensitive paint (PSP). The presence of a second luminophore component in the PSP mixture significantly reduces its inherent temperature sensitivity, increasing its applicability at low speeds. All of the devices tested are smaller than the cooled CCD cameras traditionally used and most are of significantly lower cost, thereby increasing the accessibility of such technology and techniques. Comparisons between three machine vision cameras, a three CCD camera, and a commercially available specialist PSP camera are made on a range of parameters, and a detailed PSP calibration is conducted in a static calibration chamber. The findings demonstrate that colour machine vision cameras can be used for quantitative, dual-component, pressure measurements. These results give rise to the possibility of performing on-board dual-component PSP measurements in wind tunnels or on real flight/road vehicles.
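
    Static PSP calibrations of this kind are conventionally reduced to a Stern-Volmer-type relation, I_ref/I = A + B(P/P_ref), whose coefficients come from the calibration-chamber data; a sketch of that fit (all numbers illustrative):

```python
import numpy as np

# Illustrative calibration-chamber data: pressure ratios and intensity ratios.
p_ratio = np.array([0.4, 0.6, 0.8, 1.0, 1.2, 1.4])
i_ratio = 0.15 + 0.85 * p_ratio + np.array([0.004, -0.003, 0.002, 0.0, -0.002, 0.001])

# Least-squares fit of the Stern-Volmer coefficients A and B.
B, A = np.polyfit(p_ratio, i_ratio, 1)

# Invert the calibration: recover P/P_ref from a measured intensity ratio.
pressure = lambda ir: (ir - A) / B
print(round(float(A), 2), round(float(B), 2))
```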

  11. Prediction of activation patterns preceding hallucinations in patients with schizophrenia using machine learning with structured sparsity.

    PubMed

    de Pierrefeu, Amicie; Fovet, Thomas; Hadj-Selem, Fouad; Löfstedt, Tommy; Ciuciu, Philippe; Lefebvre, Stephanie; Thomas, Pierre; Lopes, Renaud; Jardri, Renaud; Duchesnay, Edouard

    2018-04-01

    Despite significant progress in the field, the detection of fMRI signal changes during hallucinatory events remains difficult and time-consuming. This article first proposes a machine-learning algorithm to automatically identify resting-state fMRI periods that precede hallucinations versus periods that do not. When applied to whole-brain fMRI data, state-of-the-art classification methods, such as support vector machines (SVM), yield dense solutions that are difficult to interpret. We proposed to extend existing sparse classification methods by taking the spatial structure of brain images into account with structured sparsity using the total variation penalty. Based on this approach, we obtained reliable classification performance associated with interpretable predictive patterns, composed of two clearly identifiable clusters in speech-related brain regions. The variation in transition-to-hallucination functional patterns not only from one patient to another but also from one occurrence to the next (e.g., depending on the sensory modalities involved) appeared to be the major difficulty when developing effective classifiers. Second, this article therefore aimed to characterize the variability within the prehallucination patterns using an extension of principal component analysis with spatial constraints. The principal components (PCs) and the associated basis patterns shed light on the intrinsic structures of the variability present in the dataset. Such results are promising in the scope of innovative fMRI-guided therapy for drug-resistant hallucinations, such as fMRI-based neurofeedback. © 2018 Wiley Periodicals, Inc.

  12. Portable MRI developed at Los Alamos

    ScienceCinema

    Espy, Michelle

    2018-02-14

    Scientists at Los Alamos National Laboratory are developing an ultra-low-field Magnetic Resonance Imaging (MRI) system that could be low-power and lightweight enough for forward deployment on the battlefield and to field hospitals in the World's poorest regions. "MRI technology is a powerful medical diagnostic tool," said Michelle Espy, the Battlefield MRI (bMRI) project leader, "ideally suited for imaging soft-tissue injury, particularly to the brain." But hospital-based MRI devices are big and expensive, and require considerable infrastructure, such as large quantities of cryogens like liquid nitrogen and helium, and they typically use a large amount of energy. "Standard MRI machines just can't go everywhere," said Espy. "Soldiers wounded in battle usually have to be flown to a large hospital and people in emerging nations just don't have access to MRI at all. We've been in contact with doctors who routinely work in the Third World and report that MRI would be extremely valuable in treating pediatric encephalopathy, and other serious diseases in children." So the Los Alamos team started thinking about a way to make an MRI device that could be relatively easy to transport, set up, and use in an unconventional setting. Conventional MRI machines use very large magnetic fields that align the protons in water molecules to then create magnetic resonance signals, which are detected by the machine and turned into images. The large magnetic fields create exceptionally detailed images, but they are difficult and expensive to make. Espy and her team wanted to see if images of sufficient quality could be made with ultra-low-magnetic fields, similar in strength to the Earth's magnetic field. To achieve images at such low fields they use exquisitely sensitive detectors called Superconducting Quantum Interference Devices, or SQUIDs. SQUIDs are among the most sensitive magnetic field detectors available, so interference with the signal is the primary stumbling block. 
"SQUIDs are so sensitive they'll respond to a truck driving by outside or a radio signal 50 miles away," said Al Urbaitis, a bMRI engineer. The team's first generation bMRI had to be built in a large metal housing in order to shield it from interference. Now the Los Alamos team is working in the open environment without the large metal housing using a lightweight series of wire coils that surround the bMRI system to compensate the Earth’s magnetic field. In the future, the field compensation system will also function similar to noise-cancelling headphones to eradicate invading magnetic field signals on-the-fly.

  14. Segmentation of white matter hyperintensities using convolutional neural networks with global spatial information in routine clinical brain MRI with none or mild vascular pathology.

    PubMed

    Rachmadi, Muhammad Febrian; Valdés-Hernández, Maria Del C; Agan, Maria Leonora Fatimah; Di Perri, Carol; Komura, Taku

    2018-06-01

    We propose an adaptation of a convolutional neural network (CNN) scheme proposed for segmenting brain lesions with considerable mass-effect, to segment white matter hyperintensities (WMH) characteristic of brains with none or mild vascular pathology in routine clinical brain magnetic resonance images (MRI). This is a rather difficult segmentation problem because of the small area (i.e., volume) of the WMH and their similarity to non-pathological brain tissue. We investigate the effectiveness of the 2D CNN scheme by comparing its performance against those obtained from another deep learning approach: Deep Boltzmann Machine (DBM); two conventional machine learning approaches: Support Vector Machine (SVM) and Random Forest (RF); and a public toolbox: Lesion Segmentation Tool (LST), all reported to be useful for segmenting WMH in MRI. We also introduce a way to incorporate spatial information at the convolutional level of the CNN for WMH segmentation, named global spatial information (GSI). Analysis of covariance corroborated known associations between WMH progression, as assessed by all methods evaluated, and demographic and clinical data. Deep learning algorithms outperform conventional machine learning algorithms by excluding MRI artefacts and pathologies that appear similar to WMH. Our proposed approach of incorporating GSI also successfully helped the CNN achieve better automatic WMH segmentation regardless of the network settings tested. The mean Dice Similarity Coefficient (DSC) values for LST-LGA, SVM, RF, DBM, CNN and CNN-GSI were 0.2963, 0.1194, 0.1633, 0.3264, 0.5359 and 0.5389 respectively. Crown Copyright © 2018. Published by Elsevier Ltd. All rights reserved.
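
    The Dice Similarity Coefficient used in the comparison is twice the overlap of the predicted and reference masks divided by the sum of their sizes; a minimal sketch on toy binary masks:

```python
import numpy as np

def dice(pred, truth):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

truth = np.zeros((8, 8), int); truth[2:6, 2:6] = 1   # 16 reference voxels
pred = np.zeros((8, 8), int); pred[3:7, 2:6] = 1     # shifted prediction
print(dice(pred, truth))  # 2*12 / (16+16) = 0.75
```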

  15. Thoughts turned into high-level commands: Proof-of-concept study of a vision-guided robot arm driven by functional MRI (fMRI) signals.

    PubMed

    Minati, Ludovico; Nigri, Anna; Rosazza, Cristina; Bruzzone, Maria Grazia

    2012-06-01

    Previous studies have demonstrated the possibility of using functional MRI to control a robot arm through a brain-machine interface by directly coupling haemodynamic activity in the sensory-motor cortex to the position of two axes. Here, we extend this work by implementing interaction at a more abstract level, whereby imagined actions deliver structured commands to a robot arm guided by a machine vision system. Rather than extracting signals from a small number of pre-selected regions, the proposed system adaptively determines at the individual level how to map representative brain areas to the input nodes of a classifier network. In this initial study, a median action recognition accuracy of 90% was attained on five volunteers performing a game consisting of collecting randomly positioned coloured pawns and placing them into cups. The "pawn" and "cup" instructions were imparted through four mental imagery tasks, linked to robot arm actions by a state machine. With the current implementation in the MATLAB language, the median action recognition time was 24.3 s and the robot execution time was 17.7 s. We demonstrate the notion of combining haemodynamic brain-machine interfacing with computer vision to implement interaction at the level of high-level commands rather than individual movements, which may find application in future fMRI approaches relevant to brain-lesioned patients, and provide source code supporting further work on larger command sets and real-time processing. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.

  16. Predicting conversion from MCI to AD using resting-state fMRI, graph theoretical approach and SVM.

    PubMed

    Hojjati, Seyed Hani; Ebrahimzadeh, Ata; Khazaee, Ali; Babajani-Feremi, Abbas

    2017-04-15

    We investigated identifying patients with mild cognitive impairment (MCI) who progress to Alzheimer's disease (AD), MCI converters (MCI-C), from those with MCI who do not progress to AD, MCI non-converters (MCI-NC), based on resting-state fMRI (rs-fMRI). Graph theory and a machine learning approach were utilized to predict the progression of patients with MCI to AD using rs-fMRI. Eighteen MCI converters (average age 73.6 years; 11 male) and 62 age-matched MCI non-converters (average age 73.0 years, 28 male) were included in this study. We trained and tested a support vector machine (SVM) to classify MCI-C from MCI-NC using features constructed from local and global graph measures. A novel feature selection algorithm was developed and utilized to select an optimal subset of features. Using the subset of optimal features in the SVM, we classified MCI-C from MCI-NC with an accuracy, sensitivity, specificity, and area under the receiver operating characteristic (ROC) curve of 91.4%, 83.24%, 90.1%, and 0.95, respectively. Furthermore, the results of our statistical analyses were used to identify the affected brain regions in AD. To the best of our knowledge, this is the first study that combines graph measures (constructed based on rs-fMRI) with a machine learning approach and accurately classifies MCI-C from MCI-NC. The results of this study demonstrate the potential of the proposed approach for early AD diagnosis and the capability of rs-fMRI to predict conversion from MCI to AD by identifying the affected brain regions underlying this conversion. Copyright © 2017 Elsevier B.V. All rights reserved.
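
    The reported accuracy, sensitivity, and specificity follow directly from the confusion counts of the classifier's predictions; a minimal sketch with illustrative labels, taking MCI-C as the positive class:

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity (TPR), and specificity (TNR) for binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return (tp + tn) / len(y_true), tp / (tp + fn), tn / (tn + fp)

# Illustrative labels: 1 = MCI-C (positive class), 0 = MCI-NC.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
print(classification_metrics(y_true, y_pred))  # (0.8, 0.75, ~0.833)
```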

  17. Assessment of New Load Schedules for the Machine Calibration of a Force Balance

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Gisler, R.; Kew, R.

    2015-01-01

    New load schedules for the machine calibration of a six-component force balance are currently being developed and evaluated at the NASA Ames Balance Calibration Laboratory. One of the proposed load schedules is discussed in the paper. It has a total of 2082 points that are distributed across 16 load series. Several criteria were applied to define the load schedule. It was decided, for example, to specify the calibration load set in force balance format as this approach greatly simplifies the definition of the lower and upper bounds of the load schedule. In addition, all loads are assumed to be applied in a calibration machine by using the one-factor-at-a-time approach. At first, all single-component loads are applied in six load series. Then, three two-component load series are applied. They consist of the load pairs (N1, N2), (S1, S2), and (RM, AF). Afterwards, four three-component load series are applied. They consist of the combinations (N1, N2, AF), (S1, S2, AF), (N1, N2, RM), and (S1, S2, RM). In the next step, one four-component load series is applied. It is the load combination (N1, N2, S1, S2). Finally, two five-component load series are applied. They are the load combinations (N1, N2, S1, S2, AF) and (N1, N2, S1, S2, RM). The maximum difference between loads of two subsequent data points of the load schedule is limited to 33% of capacity. This constraint helps avoid unwanted load "jumps" in the load schedule that can have a negative impact on the performance of a calibration machine. Only loadings of the single- and two-component load series are loaded to 100% of capacity. This approach was selected because it keeps the total number of calibration points to a reasonable limit while still allowing for the application of some of the more complex load combinations. Data from two of NASA's force balances are used to illustrate important characteristics of the proposed 2082-point calibration load schedule.
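
    The 33%-of-capacity constraint on consecutive points can be checked mechanically: between adjacent points of a load series, no load component may change by more than a third of its capacity. A sketch of that check (the component capacities and load values are illustrative):

```python
def max_step_ok(series, capacities, limit=0.33):
    """True if no component jumps by more than `limit` of its capacity
    between consecutive points of a load series."""
    for prev, curr in zip(series, series[1:]):
        for step, cap in zip((abs(b - a) for a, b in zip(prev, curr)), capacities):
            if step > limit * cap + 1e-9:
                return False
    return True

# Illustrative two-component series (e.g. N1, N2) with 2500-unit capacities:
capacities = (2500.0, 2500.0)
series = [(0.0, 0.0), (800.0, 0.0), (1600.0, 800.0), (2400.0, 1600.0)]
print(max_step_ok(series, capacities))  # every step of 800 is under 33% of 2500
```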

  18. Usage of CT data in biomechanical research

    NASA Astrophysics Data System (ADS)

    Safonov, Roman A.; Golyadkina, Anastasiya A.; Kirillova, Irina V.; Kossovich, Leonid Y.

    2017-02-01

    Object of study: The investigation is focused on the development of personalized medicine, considering the determination of mechanical properties of bone tissues based on in vivo data. Methods: CT, MRI, mechanical experiments on a single-column test machine Instron 5944, and numerical experiments using Python programs. Results: A medical diagnostic method that allows determination of the mechanical properties of bone tissues from in vivo data, and a series of experiments to define the values of the mechanical parameters of bone tissues. For one and the same sample, computed tomography (CT), magnetic resonance imaging (MRI), ultrasonic investigations and mechanical experiments on a single-column test machine Instron 5944 were carried out. A computer program for comparison of CT and MRI images was created. The grayscale values at the same points of the samples were determined on both CT and MRI images. The Hounsfield grayscale values were used to determine the rigidity (Young's modulus) and tensile strength of the samples. The obtained data were compared to the results of the mechanical experiments for verification.
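
    One common way such grayscale-to-stiffness mappings are implemented (assumed here for illustration, not taken from the paper) is a two-step empirical chain: Hounsfield units to apparent density, then density to Young's modulus via a power law. All coefficients below are placeholders that would have to be calibrated against the mechanical experiments:

```python
def hu_to_young_modulus(hu, a=0.0, b=0.0008, c=6950.0, d=1.49):
    """Placeholder empirical chain: density (g/cm^3) = a + b*HU, then
    E (MPa) = c * density**d (power-law form; the coefficients here are
    illustrative and must be fitted to experimental data)."""
    density = a + b * hu
    return c * density ** d

# E for a cortical-bone-like HU value, with these placeholder coefficients:
print(round(hu_to_young_modulus(1000.0), 1))
```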

  19. Identification of Alzheimer's disease and mild cognitive impairment using multimodal sparse hierarchical extreme learning machine.

    PubMed

    Kim, Jongin; Lee, Boreom

    2018-05-07

    Different modalities such as structural MRI, FDG-PET, and CSF have complementary information, which is likely to be very useful for the diagnosis of AD and MCI. Therefore, it is possible to develop a more effective and accurate AD/MCI automatic diagnosis method by integrating the complementary information of different modalities. In this paper, we propose the multi-modal sparse hierarchical extreme learning machine (MSH-ELM). We used volume and mean intensity extracted from 93 regions of interest (ROIs) as features of MRI and FDG-PET, respectively, and used p-tau, t-tau, and Aβ42 as CSF features. In detail, a high-level representation was individually extracted from each of MRI, FDG-PET, and CSF using a stacked sparse extreme learning machine auto-encoder (sELM-AE). Then, another stacked sELM-AE was devised to acquire a joint hierarchical feature representation by fusing the high-level representations obtained from each modality. Finally, we classified the joint hierarchical feature representation using a kernel-based extreme learning machine (KELM). The results of MSH-ELM were compared with those of conventional ELM, single-kernel support vector machine (SK-SVM), multiple-kernel support vector machine (MK-SVM) and stacked auto-encoder (SAE). Performance was evaluated through 10-fold cross-validation. In the classification of the AD vs. HC and MCI vs. HC problems, the proposed MSH-ELM method showed mean balanced accuracies of 96.10% and 86.46%, respectively, which is much better than those of the competing methods. In summary, the proposed algorithm exhibits consistently better performance than SK-SVM, ELM, MK-SVM and SAE in the two binary classification problems (AD vs. HC and MCI vs. HC). © 2018 Wiley Periodicals, Inc.
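
    The extreme learning machine at the heart of the pipeline trains only its output layer: the hidden weights are random and fixed, and the output weights come from a single least-squares solve. A minimal single-layer ELM sketch on synthetic two-class data (not the study's features):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_elm(X, y, hidden=50):
    """ELM: random fixed hidden layer, least-squares readout."""
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)            # fixed random hidden features
    beta = np.linalg.pinv(H) @ y      # the only trained parameters
    return lambda Xq: np.tanh(Xq @ W + b) @ beta

# Two Gaussian blobs as stand-ins for the multimodal feature vectors.
X = np.vstack([rng.normal(-1, 0.5, (40, 4)), rng.normal(1, 0.5, (40, 4))])
y = np.array([-1.0] * 40 + [1.0] * 40)
predict = train_elm(X, y)
acc = np.mean(np.sign(predict(X)) == y)
print(acc)  # near-perfect on this well-separated toy set
```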

  20. Machine learning: a useful radiological adjunct in determination of a newly diagnosed glioma's grade and IDH status.

    PubMed

    De Looze, Céline; Beausang, Alan; Cryan, Jane; Loftus, Teresa; Buckley, Patrick G; Farrell, Michael; Looby, Seamus; Reilly, Richard; Brett, Francesca; Kearney, Hugh

    2018-05-16

    Machine learning methods have been introduced as a computer-aided diagnostic tool, with applications to glioma characterisation on MRI. Such an algorithmic approach may provide a useful adjunct for a rapid and accurate diagnosis of a glioma. The aim of this study is to devise a machine learning algorithm that may be used by radiologists in routine practice to aid diagnosis of both WHO grade and IDH mutation status in de novo gliomas. To evaluate the status quo, we interrogated the accuracy of neuroradiology reports in relation to WHO grade: grade II 96.49% (95% confidence interval [CI] 0.88, 0.99); III 36.51% (95% CI 0.24, 0.50); IV 72.9% (95% CI 0.67, 0.78). We derived five MRI parameters from the same diagnostic brain scans, in under two minutes per case, and then supplied these data to a random forest algorithm. Machine learning resulted in a high level of accuracy in prediction of tumour grade: grade II/III: area under the receiver operating characteristic curve (AUC) = 98%, sensitivity = 0.82, specificity = 0.94; grade II/IV: AUC = 100%, sensitivity = 1.0, specificity = 1.0; grade III/IV: AUC = 97%, sensitivity = 0.83, specificity = 0.97. Furthermore, machine learning also facilitated the discrimination of IDH status: AUC of 88%, sensitivity = 0.81, specificity = 0.77. These data demonstrate the ability of machine learning to accurately classify diffuse gliomas by both WHO grade and IDH status from routine MRI alone, without significant image processing, which may facilitate usage as a diagnostic adjunct in clinical practice.
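
    The workflow described (a handful of per-case MRI parameters fed to a random forest, scored by cross-validated AUC) can be sketched with scikit-learn. The data below are synthetic stand-ins for the study's five derived MRI parameters and grade labels, which are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: five per-case "MRI parameters" and an illustrative
# binary grade label driven mainly by the first two features.
rng = np.random.default_rng(0)
n = 120
X = rng.normal(size=(n, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
```

    With only five input variables per case, both feature extraction and prediction are fast, consistent with the abstract's claim of under two minutes per case.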

  1. Application of machine learning classification for structural brain MRI in mood disorders: Critical review from a clinical perspective.

    PubMed

    Kim, Yong-Ku; Na, Kyoung-Sae

    2018-01-03

    Mood disorders are a highly prevalent group of mental disorders causing substantial socioeconomic burden. There are various methodological approaches for identifying the underlying mechanisms of the etiology, symptomatology, and therapeutics of mood disorders; however, neuroimaging studies have provided the most direct evidence for mood disorder neural substrates by visualizing the brains of living individuals. The prefrontal cortex, hippocampus, amygdala, thalamus, ventral striatum, and corpus callosum are associated with depression and bipolar disorder. Identifying the distinct and common contributions of these anatomical regions to depression and bipolar disorder has broadened and deepened our understanding of mood disorders. However, the extent to which neuroimaging research findings contribute to clinical practice in the real-world setting is unclear. As traditional, non-machine learning MRI studies have analyzed group-level differences, it is not possible to directly translate their findings from research to clinical practice; the knowledge gained pertains to the disorder, but not to individuals. On the other hand, a machine learning approach makes it possible to provide individual-level classifications. For the past two decades, many studies have reported on the classification accuracy of machine learning-based neuroimaging studies from the perspective of diagnosis and treatment response. However, for the application of a machine learning-based brain MRI approach in real-world clinical settings, several major issues should be considered. Secondary changes due to illness duration and medication, clinical subtypes and heterogeneity, comorbidities, and cost-effectiveness restrict the generalization of the current machine learning findings. Sophisticated classification of clinical and diagnostic subtypes is needed. Additionally, as the approach is inevitably limited by sample size, multi-site participation and data-sharing are needed in the future.
Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Department of Defense Logistics Roadmap 2008. Volume 1

    DTIC Science & Technology

    2008-07-01

    machine-readable identification mark on the Department's tangible qualifying assets, and establishes the data management protocols needed to...uniquely identify items with a Unique Item Identifier (UII) via machine-readable information (MRI) marking represented by a two-dimensional data...property items with a machine-readable Unique Item Identifier (UII), which is a set of globally unique data elements. The UII is used in functional

  3. Multiparametric MRI characterization and prediction in autism spectrum disorder using graph theory and machine learning.

    PubMed

    Zhou, Yongxia; Yu, Fang; Duong, Timothy

    2014-01-01

    This study employed graph theory and machine learning analysis of multiparametric MRI data to improve characterization and prediction in autism spectrum disorders (ASD). Data from 127 children with ASD (13.5±6.0 years) and 153 age- and gender-matched typically developing children (14.5±5.7 years) were selected from the multi-center Functional Connectome Project. Regional gray matter volume and cortical thickness increased, whereas white matter volume decreased in ASD compared to controls. Small-world network analysis of quantitative MRI data demonstrated decreased global efficiency based on gray matter cortical thickness but not with functional connectivity MRI (fcMRI) or volumetry. An integrative model of 22 quantitative imaging features was used for classification and prediction of phenotypic features that included the autism diagnostic observation schedule, the revised autism diagnostic interview, and intelligence quotient scores. Among the 22 imaging features, four (caudate volume, caudate-cortical functional connectivity and inferior frontal gyrus functional connectivity) were found to be highly informative, markedly improving classification and prediction accuracy when compared with the single imaging features. This approach could potentially serve as a biomarker in prognosis, diagnosis, and monitoring disease progression.
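
    The small-world measure reported here, global efficiency, is the average inverse shortest-path length over all node pairs of the brain graph. The sketch below computes it with networkx on two toy 90-node graphs (the AAL-style parcellation size); real inputs would be graphs thresholded from MRI connectivity matrices, not these synthetic examples.

```python
import networkx as nx

# A rewired small-world graph has shorter average paths, hence higher
# global efficiency, than a pure ring lattice of the same size/degree.
ws = nx.watts_strogatz_graph(n=90, k=6, p=0.3, seed=0)    # 30% rewired
ring = nx.watts_strogatz_graph(n=90, k=6, p=0.0, seed=0)  # ring lattice
e_ws = nx.global_efficiency(ws)
e_ring = nx.global_efficiency(ring)
```

    A decrease in this measure, as reported for cortical-thickness graphs in ASD, indicates longer characteristic paths and less integrated network topology.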

  4. Machine learning for predicting the response of breast cancer to neoadjuvant chemotherapy

    PubMed Central

    Mani, Subramani; Chen, Yukun; Li, Xia; Arlinghaus, Lori; Chakravarthy, A Bapsi; Abramson, Vandana; Bhave, Sandeep R; Levy, Mia A; Xu, Hua; Yankeelov, Thomas E

    2013-01-01

    Objective To employ machine learning methods to predict the eventual therapeutic response of breast cancer patients after a single cycle of neoadjuvant chemotherapy (NAC). Materials and methods Quantitative dynamic contrast-enhanced MRI and diffusion-weighted MRI data were acquired on 28 patients before and after one cycle of NAC. A total of 118 semiquantitative and quantitative parameters were derived from these data and combined with 11 clinical variables. We used Bayesian logistic regression in combination with feature selection using a machine learning framework for predictive model building. Results The best predictive models using feature selection obtained an area under the curve of 0.86 and an accuracy of 0.86, with a sensitivity of 0.88 and a specificity of 0.82. Discussion With the numerous options for NAC available, development of a method to predict response early in the course of therapy is needed. Unfortunately, by the time most patients are found not to be responding, their disease may no longer be surgically resectable, and this situation could be avoided by the development of techniques to assess response earlier in the treatment regimen. The method outlined here is one possible solution to this important clinical problem. Conclusions Predictive modeling approaches based on machine learning using readily available clinical and quantitative MRI data show promise in distinguishing breast cancer responders from non-responders after the first cycle of NAC. PMID:23616206
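
    With 129 candidate variables and only 28 patients, feature selection must be nested inside the cross-validation loop to avoid optimistic bias. The sketch below shows that pattern with a scikit-learn pipeline on synthetic data; it uses plain logistic regression with univariate F-test selection as an illustrative stand-in for the study's Bayesian logistic regression framework.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: many candidate predictors, small cohort, with
# signal carried by the first two features only.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 129))
y = (X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=60) > 0).astype(int)

model = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=10)),   # keep the 10 best features
    ("clf", LogisticRegression(max_iter=1000)),
])
# Selection is re-fitted inside each CV fold, so test folds stay unseen.
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
```

    Placing selection inside the pipeline means each fold's held-out patients never influence which features are kept, which is essential for honest AUC estimates at this sample size.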

  5. The precision measurement and assembly for miniature parts based on double machine vision systems

    NASA Astrophysics Data System (ADS)

    Wang, X. D.; Zhang, L. F.; Xin, M. Z.; Qu, Y. Q.; Luo, Y.; Ma, T. M.; Chen, L.

    2015-02-01

    In the assembly of miniature parts, structural features on the bottom or side of a part often need to be aligned and positioned. General-purpose assembly equipment integrating a single vertical, downward-looking machine vision system cannot satisfy this requirement. Precision automatic assembly equipment was therefore developed that integrates two machine vision systems. A horizontal vision system measures the position of feature structures in the parts' side view, which the vertical system cannot see. The position measured by the horizontal camera is converted into the vertical vision system's frame using calibration information. With careful calibration, the parts' alignment and positioning during assembly can be guaranteed. The developed equipment is easy to implement, modular, and cost-effective. The handling of the miniature parts and the assembly procedure are briefly introduced, the calibration procedure is given, and the assembly error is analyzed for compensation.
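
    The coordinate conversion between the two cameras can be modeled, in the simplest case, as a 2D affine transform fitted by least squares from matched calibration-target points. This is a generic sketch under that assumption, not the paper's actual calibration model.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src points to dst.

    src, dst: (N, 2) arrays of matched calibration-target coordinates
    seen by the two cameras. Returns a 2x3 matrix A such that
    dst ~= [x, y, 1] @ A.T.
    """
    N = src.shape[0]
    M = np.hstack([src, np.ones((N, 1))])        # homogeneous coordinates
    A, *_ = np.linalg.lstsq(M, dst, rcond=None)  # (3, 2) solution
    return A.T                                   # (2, 3)

# Synthetic calibration: a known rotation + scale + offset between views.
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, size=(20, 2))
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
dst = 1.2 * src @ R.T + np.array([5.0, -3.0])

A = fit_affine(src, dst)
pred = np.hstack([src, np.ones((20, 1))]) @ A.T
err = np.abs(pred - dst).max()   # residual of the recovered mapping
```

    In practice, lens distortion and perspective effects would call for a richer camera model, but the fit-from-matched-points structure stays the same.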

  6. Machine Learning-based Texture Analysis of Contrast-enhanced MR Imaging to Differentiate between Glioblastoma and Primary Central Nervous System Lymphoma.

    PubMed

    Kunimatsu, Akira; Kunimatsu, Natsuko; Yasaka, Koichiro; Akai, Hiroyuki; Kamiya, Kouhei; Watadani, Takeyuki; Mori, Harushi; Abe, Osamu

    2018-05-16

    Although advanced MRI techniques are increasingly available, imaging differentiation between glioblastoma and primary central nervous system lymphoma (PCNSL) is sometimes confusing. We aimed to evaluate the performance of image classification by support vector machine, a method of traditional machine learning, using texture features computed from contrast-enhanced T1-weighted images. This retrospective study on preoperative brain tumor MRI included 76 consecutive, initially treated patients with glioblastoma (n = 55) or PCNSL (n = 21) from one institution, consisting of an independent training group (n = 60: 44 glioblastomas and 16 PCNSLs) and a test group (n = 16: 11 glioblastomas and 5 PCNSLs) sequentially separated by time period. A total set of 67 texture features was computed on routine contrast-enhanced T1-weighted images of the training group, and the top four most discriminating features were selected as input variables to train support vector machine classifiers. These features were then evaluated on the test group with subsequent image classification. The area under the receiver operating characteristic curve on the training data was 0.99 (95% confidence interval [CI]: 0.96-1.00) for the classifier with a Gaussian kernel and 0.87 (95% CI: 0.77-0.95) for the classifier with a linear kernel. On the test data, both classifiers showed a prediction accuracy of 75% (12/16). Although further improvement is needed, our preliminary results suggest that machine learning-based image classification may provide complementary diagnostic information on routine brain MRI.
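
    The classification stage (four selected texture features, Gaussian-kernel SVM, held-out test set) can be sketched as below. The features and labels are synthetic stand-ins, not the study's texture data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Synthetic stand-in for 4 selected texture features per tumor; the two
# classes play the roles of glioblastoma and PCNSL (labels illustrative).
rng = np.random.default_rng(0)
X0 = rng.normal(loc=0.0, size=(40, 4))
X1 = rng.normal(loc=1.5, size=(40, 4))
X = np.vstack([X0, X1])
y = np.array([0] * 40 + [1] * 40)

Xtr, Xte, ytr, yte = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Standardize, then fit a Gaussian (RBF) kernel SVM, as in the abstract.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(Xtr, ytr)
acc = clf.score(Xte, yte)   # accuracy on the held-out test split
```

    The gap the authors observe between training AUC (0.99) and test accuracy (75%) is a reminder that the held-out evaluation, not the training fit, is the number that matters.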

  7. Calibrationless parallel magnetic resonance imaging: a joint sparsity model.

    PubMed

    Majumdar, Angshul; Chaudhury, Kunal Narayan; Ward, Rabab

    2013-12-05

    State-of-the-art parallel MRI techniques either explicitly or implicitly require certain parameters to be estimated, e.g., the sensitivity maps for SENSE and SMASH, and the interpolation weights for GRAPPA and SPIRiT. Thus all these techniques are sensitive to the calibration (parameter estimation) stage. In this work, we have proposed a parallel MRI technique that does not require any calibration but yields reconstruction results that are on par with (or even better than) state-of-the-art methods in parallel MRI. Our proposed method requires solving non-convex analysis- and synthesis-prior joint-sparsity problems, and this work also derives the algorithms for solving them. Experimental validation was carried out on two datasets: an eight-channel brain scan and an eight-channel Shepp-Logan phantom. Two sampling methods were used: variable-density random sampling and non-Cartesian radial sampling. For the brain data an acceleration factor of 4 was used, and for the phantom an acceleration factor of 6. The reconstruction results were quantitatively evaluated by the normalised mean squared error between the reconstructed image and the original, and qualitatively evaluated on the actual reconstructed images. We compared our work with four state-of-the-art parallel imaging techniques: two calibrated methods (CS SENSE and l1SPIRiT) and two calibration-free techniques (Distributed CS and SAKE). Our method yields better reconstruction results than all of them.
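
    The quantitative metric used here, normalised mean squared error (NMSE), is the reconstruction error energy divided by the reference image energy. A minimal implementation and a worked check:

```python
import numpy as np

def nmse(recon, ref):
    """Normalised mean squared error between a reconstruction and the
    reference image; lower is better, 0 means a perfect reconstruction."""
    return np.sum(np.abs(recon - ref) ** 2) / np.sum(np.abs(ref) ** 2)

# Worked example: a uniform 10% intensity offset on a unit image gives
# NMSE = (0.1^2 * N) / N = 0.01.
ref = np.ones((8, 8))
noisy = ref + 0.1
val = nmse(noisy, ref)
```

    Using `np.abs` makes the same function valid for the complex-valued images that parallel MRI reconstruction produces.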

  8. Dying art of a history and physical: pulsatile tinnitus.

    PubMed

    Lee, Jonathan; Fekete, Zoltan

    2017-11-27

    Modern medicine often leaves the history and physical by the wayside. Physicians instead skip directly to diagnostic modalities like MRI and angiography. In this case report, we discuss a patient who presented with migraine symptoms. Auscultation revealed signs of pulsatile tinnitus. Further imaging concluded that it was secondary to a type I dural arteriovenous fistula. Thanks to a proper and thorough history and physical, the patient was streamlined into an accurate and efficient work-up leading to symptomatic relief and quality of life improvement. Imaging is a powerful adjunctive technique in modern medicine, but physicians must not rely on machines to diagnose their patients. If this trend continues, it will have a tremendous negative impact on the cost and calibre of healthcare. Our hope is that this case will spread awareness in the medical community, urging physicians to use the lost art of a history and physical. © BMJ Publishing Group Ltd (unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  9. Modeling of solid-state and excimer laser processes for 3D micromachining

    NASA Astrophysics Data System (ADS)

    Holmes, Andrew S.; Onischenko, Alexander I.; George, David S.; Pedder, James E.

    2005-04-01

    An efficient simulation method has recently been developed for multi-pulse ablation processes. This is based on pulse-by-pulse propagation of the machined surface according to one of several phenomenological models for the laser-material interaction. The technique allows quantitative predictions to be made about the surface shapes of complex machined parts, given only a minimal set of input data for parameter calibration. In the case of direct-write machining of polymers or glasses with ns-duration pulses, this data set can typically be limited to the surface profiles of a small number of standard test patterns. The use of phenomenological models for the laser-material interaction, calibrated by experimental feedback, allows fast simulation, and can achieve a high degree of accuracy for certain combinations of material, laser and geometry. In this paper, the capabilities and limitations of the approach are discussed, and recent results are presented for structures machined in SU8 photoresist.

  10. Machine-Learning Based Co-adaptive Calibration: A Perspective to Fight BCI Illiteracy

    NASA Astrophysics Data System (ADS)

    Vidaurre, Carmen; Sannelli, Claudia; Müller, Klaus-Robert; Blankertz, Benjamin

    "BCI illiteracy" is one of the biggest problems and challenges in BCI research. It means that BCI control cannot be achieved by a non-negligible number of subjects (estimated at 20% to 25%). There are two main causes for BCI illiteracy in BCI users: either no SMR idle rhythm is observed over motor areas, or this idle rhythm is not attenuated during motor imagery, resulting in a classification performance lower than 70% (criterion level) already for offline calibration data. In previous work by the same authors, the concept of machine learning based co-adaptive calibration was introduced. This new type of calibration provided substantially improved performance for a variety of users. Here, we use a similar approach and investigate to what extent co-adaptive learning enables substantial BCI control for completely novice users and those who suffered from BCI illiteracy before.

  11. Multivariate data analysis and machine learning in Alzheimer's disease with a focus on structural magnetic resonance imaging.

    PubMed

    Falahati, Farshad; Westman, Eric; Simmons, Andrew

    2014-01-01

    Machine learning algorithms and multivariate data analysis methods have been widely utilized in the field of Alzheimer's disease (AD) research in recent years. Advances in medical imaging and medical image analysis have provided a means to generate and extract valuable neuroimaging information. Automatic classification techniques provide tools to analyze this information and observe inherent disease-related patterns in the data. In particular, these classifiers have been used to discriminate AD patients from healthy control subjects and to predict conversion from mild cognitive impairment to AD. In this paper, recent studies are reviewed that have used machine learning and multivariate analysis in the field of AD research. The main focus is on studies that used structural magnetic resonance imaging (MRI), but studies that included positron emission tomography and cerebrospinal fluid biomarkers in addition to MRI are also considered. A wide variety of materials and methods has been employed in different studies, resulting in a range of different outcomes. Influential factors such as classifiers, feature extraction algorithms, feature selection methods, validation approaches, and cohort properties are reviewed, as well as key MRI-based and multi-modal based studies. Current and future trends are discussed.

  12. Perspectives on Machine Learning for Classification of Schizotypy Using fMRI Data.

    PubMed

    Madsen, Kristoffer H; Krohne, Laerke G; Cai, Xin-Lu; Wang, Yi; Chan, Raymond C K

    2018-03-15

    Functional magnetic resonance imaging is capable of estimating functional activation and connectivity in the human brain, and lately there has been increased interest in the use of these functional modalities combined with machine learning for identification of psychiatric traits. While these methods bear great potential for early diagnosis and better understanding of disease processes, there are wide ranges of processing choices and pitfalls that may severely hamper interpretation and generalization performance unless carefully considered. In this perspective article, we aim to motivate the use of machine learning in schizotypy research. To this end, we describe common data processing steps while commenting on best practices and procedures. First, we introduce the important role of schizotypy to motivate the importance of reliable classification, and summarize existing machine learning literature on schizotypy. Then, we describe procedures for extraction of features based on fMRI data, including statistical parametric mapping, parcellation, complex network analysis, and decomposition methods, as well as classification with a special focus on support vector classification and deep learning. We provide more detailed descriptions and software as supplementary material. Finally, we present current challenges in machine learning for classification of schizotypy and comment on future trends and perspectives.

  13. Biomarkers for Musculoskeletal Pain Conditions: Use of Brain Imaging and Machine Learning.

    PubMed

    Boissoneault, Jeff; Sevel, Landrew; Letzen, Janelle; Robinson, Michael; Staud, Roland

    2017-01-01

    Chronic musculoskeletal pain conditions often show poor correlations between tissue abnormalities and clinical pain. Therefore, classification of pain conditions like chronic low back pain, osteoarthritis, and fibromyalgia depends mostly on self-report and less on objective findings like X-ray or magnetic resonance imaging (MRI) changes. However, recent advances in structural and functional brain imaging have identified brain abnormalities in chronic pain conditions that can be used for illness classification. Because the analysis of complex and multivariate brain imaging data is challenging, machine learning techniques have been increasingly utilized for this purpose. The goal of machine learning is to train specific classifiers to best identify variables of interest on brain MRIs (i.e., biomarkers). This report describes classification techniques capable of separating MRI-based brain biomarkers of chronic pain patients from healthy controls with high accuracy (70-92%) using machine learning, as well as critical scientific, practical, and ethical considerations related to their potential clinical application. Although self-report remains the gold standard for pain assessment, machine learning may aid in the classification of chronic pain disorders like chronic back pain and fibromyalgia as well as provide mechanistic information regarding their neural correlates.

  14. Evaluating Statistical Process Control (SPC) techniques and computing the uncertainty of force calibrations

    NASA Technical Reports Server (NTRS)

    Navard, Sharon E.

    1989-01-01

    In recent years there has been a push within NASA to use statistical techniques to improve the quality of production. Two areas where statistics are used are in establishing product and process quality control of flight hardware and in evaluating the uncertainty of calibration of instruments. The Flight Systems Quality Engineering branch is responsible for developing and assuring the quality of all flight hardware; the statistical process control methods employed are reviewed and evaluated. The Measurement Standards and Calibration Laboratory performs the calibration of all instruments used on-site at JSC as well as those used by all off-site contractors. These calibrations must be performed in such a way as to be traceable to national standards maintained by the National Institute of Standards and Technology, and they must meet a four-to-one ratio of the instrument specifications to calibrating standard uncertainty. In some instances this ratio is not met, and in these cases it is desirable to compute the exact uncertainty of the calibration and determine ways of reducing it. A particular example where this problem is encountered is with a machine which does automatic calibrations of force. The process of force calibration using the United Force Machine is described in detail. The sources of error are identified and quantified when possible. Suggestions for improvement are made.

  15. OT calibration and service maintenance manual.

    DOT National Transportation Integrated Search

    2012-01-01

    The machine conditions, as well as the values of the calibration and control parameters, may determine the quality of the test results obtained. In order to keep consistency and accuracy, the conditions, performance and measurements of an OT must be...

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cernoch, Antonin; Soubusta, Jan; Celechovska, Lucie

    We report on experimental implementation of the optimal universal asymmetric 1->2 quantum cloning machine for qubits encoded into polarization states of single photons. Our linear-optical machine performs asymmetric cloning by partially symmetrizing the input polarization state of signal photon and a blank copy idler photon prepared in a maximally mixed state. We show that the employed method of measurement of mean clone fidelities exhibits strong resilience to imperfect calibration of the relative efficiencies of single-photon detectors used in the experiment. Reliable characterization of the quantum cloner is thus possible even when precise detector calibration is difficult to achieve.

  17. Accuracy of automated classification of major depressive disorder as a function of symptom severity.

    PubMed

    Ramasubbu, Rajamannar; Brown, Matthew R G; Cortese, Filmeno; Gaxiola, Ismael; Goodyear, Bradley; Greenshaw, Andrew J; Dursun, Serdar M; Greiner, Russell

    2016-01-01

    Growing evidence documents the potential of machine learning for developing brain-based diagnostic methods for major depressive disorder (MDD). As symptom severity may influence brain activity, we investigated whether the severity of MDD affected the accuracy of machine-learned MDD-vs-control diagnostic classifiers. Forty-five medication-free patients with DSM-IV defined MDD and 19 healthy controls participated in the study. Based on depression severity as determined by the Hamilton Rating Scale for Depression (HRSD), MDD patients were sorted into three groups: mild to moderate depression (HRSD 14-19), severe depression (HRSD 20-23), and very severe depression (HRSD ≥ 24). We collected functional magnetic resonance imaging (fMRI) data during both resting-state and an emotional-face matching task. Patients in each of the three severity groups were compared against controls in separate analyses, using either the resting-state or task-based fMRI data. We used each of these six datasets with linear support vector machine (SVM) binary classifiers to identify individuals as patients or controls. The resting-state fMRI data showed statistically significant classification accuracy only for the very severe depression group (accuracy 66%, p = 0.012 corrected), while mild to moderate (accuracy 58%, p = 1.0 corrected) and severe depression (accuracy 52%, p = 1.0 corrected) were only at chance. With task-based fMRI data, the automated classifier performed at chance in all three severity groups. Binary linear SVM classifiers achieved significant classification of very severe depression with resting-state fMRI, but the contribution of brain measurements may have limited potential in differentiating patients with less severe depression from healthy controls.

  18. Identifying patients with Alzheimer's disease using resting-state fMRI and graph theory.

    PubMed

    Khazaee, Ali; Ebrahimzadeh, Ata; Babajani-Feremi, Abbas

    2015-11-01

    Study of brain networks on the basis of resting-state functional magnetic resonance imaging (fMRI) has provided promising results for investigating disease-related changes in connectivity among different brain regions. Graph theory can efficiently characterize different aspects of the brain network by calculating measures of integration and segregation. In this study, we combine graph theoretical approaches with advanced machine learning methods to study functional brain network alteration in patients with Alzheimer's disease (AD). A support vector machine (SVM) was used to explore the ability of graph measures in diagnosis of AD. We applied our method to the resting-state fMRI data of twenty patients with AD and twenty age- and gender-matched healthy subjects. The data were preprocessed and each subject's graph was constructed by parcellation of the whole brain into 90 distinct regions using the automated anatomical labeling (AAL) atlas. The graph measures were then calculated and used as the discriminating features. Extracted network-based features were fed to different feature selection algorithms to choose the most significant features. In addition to the machine learning approach, statistical analysis was performed on connectivity matrices to find altered connectivity patterns in patients with AD. Using the selected features, we were able to classify patients with AD from healthy subjects with an accuracy of 100%. Results of this study show that pattern recognition and graph analysis of the brain network, on the basis of resting-state fMRI data, can efficiently assist in the diagnosis of AD. Classification based on resting-state fMRI can be used as a non-invasive and automatic tool to diagnose Alzheimer's disease. Copyright © 2015 International Federation of Clinical Neurophysiology. All rights reserved.

  19. Deriving stable multi-parametric MRI radiomic signatures in the presence of inter-scanner variations: survival prediction of glioblastoma via imaging pattern analysis and machine learning techniques

    NASA Astrophysics Data System (ADS)

    Rathore, Saima; Bakas, Spyridon; Akbari, Hamed; Shukla, Gaurav; Rozycki, Martin; Davatzikos, Christos

    2018-02-01

    There is mounting evidence that assessment of multi-parametric magnetic resonance imaging (mpMRI) profiles can noninvasively predict survival in many cancers, including glioblastoma. The clinical adoption of mpMRI as a prognostic biomarker, however, depends on its applicability in a multicenter setting, which is hampered by inter-scanner variations; this concept has not been addressed in existing studies. We developed a comprehensive set of within-patient normalized tumor features, such as intensity profile, shape, volume, and tumor location, extracted from multicenter mpMRI of two large cohorts (353 patients in total), comprising the Hospital of the University of Pennsylvania (HUP: 252 patients, 3 scanners) and The Cancer Imaging Archive (TCIA: 101 patients, 8 scanners). Inter-scanner harmonization was conducted by normalizing the tumor intensity profile with that of the contralateral healthy tissue. The extracted features were integrated by support vector machines to derive survival predictors. The predictors' generalizability was evaluated within each cohort by two cross-validation configurations: i) pooled/scanner-agnostic, and ii) across scanners (training on multiple scanners and testing on one). The median survival in each configuration was used as a cut-off to divide patients into long- and short-survivors. Accuracy (ACC) for predicting long- versus short-survivors was ACC(pooled) = 79.06% and 84.70%, and ACC(across) = 73.55% and 74.76%, in the HUP and TCIA datasets, respectively. The hazard ratio (95% confidence interval) was 3.87 (2.87-5.20, P < 0.001) and 6.65 (3.57-12.36, P < 0.001) for the HUP and TCIA datasets, respectively. Our findings suggest that adequate data normalization coupled with machine learning classification allows robust prediction of survival estimates on mpMRI acquired by multiple scanners.
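
    The harmonization idea, normalizing tumor intensities against the same patient's contralateral healthy tissue, can be illustrated with a simple z-score against the healthy-tissue distribution. This is an illustrative stand-in for the paper's normalization, shown on synthetic intensities where a second "scanner" applies a different gain and offset.

```python
import numpy as np

def normalize_to_reference(tumor_intensities, healthy_intensities):
    """Z-score tumor intensities against the same patient's contralateral
    healthy-tissue distribution, removing scanner-dependent gain/offset."""
    mu = healthy_intensities.mean()
    sigma = healthy_intensities.std()
    return (tumor_intensities - mu) / sigma

# Two "scanners": scanner B applies a gain of 2 and offset of 30 to the
# same underlying anatomy. After normalization the profiles coincide.
rng = np.random.default_rng(0)
tumor_a = 100 + 20 * rng.normal(size=500)
healthy_a = 80 + 10 * rng.normal(size=500)
tumor_b = 2.0 * tumor_a + 30
healthy_b = 2.0 * healthy_a + 30

za = normalize_to_reference(tumor_a, healthy_a)
zb = normalize_to_reference(tumor_b, healthy_b)
gap = np.abs(za.mean() - zb.mean())   # ~0: scanner effect removed
```

    Any normalization that is invariant to a positive linear rescaling of intensities cancels this class of scanner effect exactly, which is why within-patient references are attractive for multicenter studies.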

  20. An Introduction to Normalization and Calibration Methods in Functional MRI

    ERIC Educational Resources Information Center

    Liu, Thomas T.; Glover, Gary H.; Mueller, Bryon A.; Greve, Douglas N.; Brown, Gregory G.

    2013-01-01

    In functional magnetic resonance imaging (fMRI), the blood oxygenation level dependent (BOLD) signal is often interpreted as a measure of neural activity. However, because the BOLD signal reflects the complex interplay of neural, vascular, and metabolic processes, such an interpretation is not always valid. There is growing evidence that changes…

  1. Automated detection of focal cortical dysplasia type II with surface-based magnetic resonance imaging postprocessing and machine learning.

    PubMed

    Jin, Bo; Krishnan, Balu; Adler, Sophie; Wagstyl, Konrad; Hu, Wenhan; Jones, Stephen; Najm, Imad; Alexopoulos, Andreas; Zhang, Kai; Zhang, Jianguo; Ding, Meiping; Wang, Shuang; Wang, Zhong Irene

    2018-05-01

    Focal cortical dysplasia (FCD) is a major pathology in patients undergoing surgical resection to treat pharmacoresistant epilepsy. Magnetic resonance imaging (MRI) postprocessing methods may provide essential help for detection of FCD. In this study, we utilized surface-based MRI morphometry and machine learning for automated lesion detection in a mixed cohort of patients with FCD type II from 3 different epilepsy centers. Sixty-one patients with pharmacoresistant epilepsy and histologically proven FCD type II were included in the study. The patients had been evaluated at 3 different epilepsy centers using 3 different MRI scanners. A T1-volumetric sequence was used for postprocessing. A normal database was constructed with 120 healthy controls. We also included 35 healthy test controls and 15 disease test controls with histologically confirmed hippocampal sclerosis to assess specificity. Features were calculated and incorporated into a nonlinear neural network classifier, which was trained to identify the lesional cluster. We optimized the threshold of the output probability map from the classifier by performing receiver operating characteristic (ROC) analyses. Success of detection was defined by overlap between the final cluster and the manual labeling. Performance was evaluated using k-fold cross-validation. The threshold of 0.9 showed an optimal sensitivity of 73.7% and specificity of 90.0%. The area under the curve for the ROC analysis was 0.75, which suggests a discriminative classifier. Sensitivity and specificity were not significantly different for patients from different centers, suggesting robustness of performance. The correct detection rate was significantly lower in patients with initially normal MRI than in patients with unequivocally positive MRI. Subgroup analysis showed the size of the training group and normal control database impacted classifier performance.
Automated surface-based MRI morphometry equipped with machine learning showed robust performance across cohorts from different centers and scanners. The proposed method may be a valuable tool to improve FCD detection in presurgical evaluation for patients with pharmacoresistant epilepsy. Wiley Periodicals, Inc. © 2018 International League Against Epilepsy.
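The ROC-based threshold optimization described above can be sketched in a few lines. This is an illustrative reconstruction, not the study's code: it sweeps candidate cutoffs over classifier output probabilities and picks the one maximizing Youden's J (sensitivity + specificity - 1); the function names and the choice of Youden's J as the selection rule are my own assumptions.

```python
def roc_points(scores, labels, thresholds):
    """Sweep candidate thresholds over classifier output probabilities and
    return (threshold, sensitivity, specificity) triples."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = []
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        pts.append((t, tp / pos, tn / neg))
    return pts

def best_threshold(scores, labels, thresholds):
    """Pick the threshold maximizing Youden's J = sensitivity + specificity - 1."""
    return max(roc_points(scores, labels, thresholds),
               key=lambda p: p[1] + p[2] - 1)[0]
```

Applied to per-cluster output probabilities against the manual labels, such a sweep yields an operating point analogous to the 0.9 threshold reported above.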

  2. Supervised machine learning-based classification scheme to segment the brainstem on MRI in multicenter brain tumor treatment context.

    PubMed

    Dolz, Jose; Laprie, Anne; Ken, Soléakhéna; Leroy, Henri-Arthur; Reyns, Nicolas; Massoptier, Laurent; Vermandel, Maximilien

    2016-01-01

    To constrain the risk of severe toxicity in radiotherapy and radiosurgery, precise volume delineation of organs at risk is required. This task is still performed manually, which is time-consuming and prone to observer variability. To address these issues, and as an alternative to atlas-based segmentation methods, machine learning techniques, such as support vector machines (SVM), have recently been presented to segment subcortical structures on magnetic resonance images (MRI). Here, an SVM is proposed to segment the brainstem on MRI in a multicenter brain cancer context. A dataset composed of 14 adult brain MRI scans is used to evaluate its performance. In addition to spatial and probabilistic information, five different image intensity value (IIV) configurations are evaluated as features to train the SVM classifier. Segmentation accuracy is evaluated by computing the Dice similarity coefficient (DSC), absolute volume difference (AVD) and percentage volume difference between automatic and manual contours. Mean DSC for all proposed IIV configurations ranged from 0.89 to 0.90. Mean AVD values were below 1.5 cm(3); the value for the best performing IIV configuration was 0.85 cm(3), representing an absolute mean difference of 3.99% with respect to the manually segmented volumes. Results suggest consistent volume estimation and high spatial similarity with respect to expert delineations. The proposed approach outperformed previously presented methods for segmenting the brainstem, not only in volume similarity metrics but also in segmentation time. Preliminary results showed that the approach might be promising for adoption in clinical use.
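The two headline accuracy metrics above, DSC and AVD, are straightforward to compute from binary masks. A minimal sketch (function names and the flat 0/1-mask representation are my own simplifications):

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks,
    given as flat sequences of 0/1 voxel labels."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    return 2 * inter / (sum(a) + sum(b))

def absolute_volume_difference(mask_auto, mask_manual, voxel_volume_cm3):
    """Absolute volume difference (AVD) in cm^3 between an automatic
    and a manual segmentation."""
    return abs(sum(mask_auto) - sum(mask_manual)) * voxel_volume_cm3
```

A DSC of 1.0 means perfect overlap; the 0.89-0.90 range reported above indicates strong but imperfect spatial agreement with the manual contours.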

  3. Spectrophotometric determination of ternary mixtures of thiamin, riboflavin and pyridoxal in pharmaceutical and human plasma by least-squares support vector machines.

    PubMed

    Niazi, Ali; Zolgharnein, Javad; Afiuni-Zadeh, Somaie

    2007-11-01

    Ternary mixtures of thiamin, riboflavin and pyridoxal have been simultaneously determined in synthetic and real samples by application of spectrophotometry and least-squares support vector machines (LS-SVM). The calibration graphs were linear in the ranges of 1.0-20.0, 1.0-10.0 and 1.0-20.0 microg ml(-1), with detection limits of 0.6, 0.5 and 0.7 microg ml(-1) for thiamin, riboflavin and pyridoxal, respectively. The experimental calibration matrix was designed with 21 mixtures of these chemicals, with concentrations varied within the linear ranges of the calibration graphs. The simultaneous determination of these vitamin mixtures by spectrophotometric methods is a difficult problem due to spectral interferences. Partial least squares (PLS) modeling and LS-SVM were used for multivariate calibration of the spectrophotometric data. An excellent model was built using LS-SVM, with low prediction errors and performance superior to that of PLS. The root mean square errors of prediction (RMSEP) for thiamin, riboflavin and pyridoxal with PLS and LS-SVM were 0.6926, 0.3755, 0.4322 and 0.0421, 0.0318, 0.0457, respectively. The proposed method was satisfactorily applied to the rapid simultaneous determination of thiamin, riboflavin and pyridoxal in commercial pharmaceutical preparations and human plasma samples.
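The figure of merit used to compare PLS and LS-SVM above is the RMSEP. As a point of reference, it is computed over a prediction (validation) set as follows (a generic sketch, not the authors' code):

```python
import math

def rmsep(y_true, y_pred):
    """Root mean square error of prediction: the square root of the mean
    squared deviation between reference and predicted concentrations
    over a validation set."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))
```

The order-of-magnitude gap between the PLS RMSEP values (~0.4-0.7) and the LS-SVM values (~0.03-0.05) is what supports the abstract's claim of superior LS-SVM performance.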

  4. Snack food as a modulator of human resting-state functional connectivity.

    PubMed

    Mendez-Torrijos, Andrea; Kreitz, Silke; Ivan, Claudiu; Konerth, Laura; Rösch, Julie; Pischetsrieder, Monika; Moll, Gunther; Kratz, Oliver; Dörfler, Arnd; Horndasch, Stefanie; Hess, Andreas

    2018-04-04

    To elucidate the mechanisms by which snack foods may induce non-homeostatic food intake, we used resting-state functional magnetic resonance imaging (fMRI), as resting-state networks can individually adapt to experience after short exposures. In addition, we used graph-theoretical analysis together with machine learning techniques (support vector machine) to identify biomarkers that can discriminate between high-caloric (potato chips) and low-caloric (zucchini) food stimulation. Seventeen healthy human subjects with body mass indices (BMI) between 19 and 27 underwent 2 different fMRI sessions in which an initial resting-state scan was acquired, followed by visual presentation of different images of potato chips and zucchini. There was then a 5-minute pause to ingest food (day 1 = potato chips, day 3 = zucchini), followed by a second resting-state scan. fMRI data were further analyzed using graph theory analysis and support vector machine techniques. Potato chips vs. zucchini stimulation led to significant connectivity changes. The support vector machine was able to categorize the 2 types of food stimuli with 100% accuracy. Visual, auditory, and somatosensory structures, as well as thalamus, insula, and basal ganglia, were found to be important for food classification. After potato chip consumption, BMI was associated with the path length and degree in nucleus accumbens, middle temporal gyrus, and thalamus. The results suggest that high- vs. low-caloric food stimulation in healthy individuals can induce significant changes in resting-state networks. These changes can be detected using graph theory measures in conjunction with a support vector machine. Additionally, we found that BMI affects the response of the nucleus accumbens when high-caloric food is consumed.
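The two graph measures named above, degree and path length, can be computed from a binary connectivity matrix with nothing more than breadth-first search. A minimal stdlib sketch (representation and function names are my own; the study's pipeline is not reproduced here):

```python
from collections import deque

def degrees(adj):
    """Node degrees from a binary adjacency matrix (list of lists of 0/1)."""
    return [sum(row) for row in adj]

def shortest_path_lengths(adj, src):
    """BFS shortest-path lengths from src on an unweighted graph;
    unreachable nodes are left as None."""
    n = len(adj)
    dist = [None] * n
    dist[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v in range(n):
            if adj[u][v] and dist[v] is None:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist
```

Per-node degree and average path length computed this way are the kind of features that can then be fed to a support vector machine for classification.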

  5. Discriminative analysis of schizophrenia using support vector machine and recursive feature elimination on structural MRI images.

    PubMed

    Lu, Xiaobing; Yang, Yongzhe; Wu, Fengchun; Gao, Minjian; Xu, Yong; Zhang, Yue; Yao, Yongcheng; Du, Xin; Li, Chengwei; Wu, Lei; Zhong, Xiaomei; Zhou, Yanling; Fan, Ni; Zheng, Yingjun; Xiong, Dongsheng; Peng, Hongjun; Escudero, Javier; Huang, Biao; Li, Xiaobo; Ning, Yuping; Wu, Kai

    2016-07-01

    Structural abnormalities in schizophrenia (SZ) patients have been well documented with structural magnetic resonance imaging (MRI) data using voxel-based morphometry (VBM) and region of interest (ROI) analyses. However, these analyses can only detect group-wise differences and thus have a poor predictive value for individuals. In the present study, we applied a machine learning method that combined support vector machine (SVM) with recursive feature elimination (RFE) to discriminate SZ patients from normal controls (NCs) using their structural MRI data. We first employed both VBM and ROI analyses to compare gray matter volume (GMV) and white matter volume (WMV) between 41 SZ patients and 42 age- and sex-matched NCs. The method of SVM combined with RFE was used to discriminate SZ patients from NCs using significant between-group differences in both GMV and WMV as input features. We found that SZ patients showed GM and WM abnormalities in several brain structures primarily involved in the emotion, memory, and visual systems. An SVM with an RFE classifier using the significant structural abnormalities identified by the VBM analysis as input features achieved the best performance (an accuracy of 88.4%, a sensitivity of 91.9%, and a specificity of 84.4%) in the discriminative analyses of SZ patients. These results suggest that distinct neuroanatomical profiles associated with SZ patients might provide a potential biomarker for disease diagnosis, and that machine-learning methods can reveal neurobiological mechanisms in psychiatric diseases.
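The RFE loop described above is simple at its core: fit a linear model, drop the feature with the smallest absolute weight, repeat. The sketch below is illustrative only; it uses a least-squares fit as a stand-in for the linear SVM used in the study, and the function name is my own.

```python
import numpy as np

def rfe_ranking(X, y, n_keep):
    """Recursive feature elimination: repeatedly fit a linear model and drop
    the feature with the smallest absolute weight until n_keep remain.
    A least-squares fit stands in here for the study's linear SVM."""
    remaining = list(range(X.shape[1]))
    while len(remaining) > n_keep:
        w, *_ = np.linalg.lstsq(X[:, remaining], y, rcond=None)
        drop = remaining[int(np.argmin(np.abs(w)))]
        remaining.remove(drop)
    return remaining
```

Run on GMV/WMV features, such a loop yields the small subset of structural measures on which the final classifier is trained.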

  6. Self-Calibrating Surface Measuring Machine

    NASA Astrophysics Data System (ADS)

    Greenleaf, Allen H.

    1983-04-01

    A new kind of surface-measuring machine has been developed under government contract at Itek Optical Systems, a Division of Itek Corporation, to assist in the fabrication of large, highly aspheric optical elements. The machine uses four steerable distance-measuring interferometers at the corners of a tetrahedron to measure the positions of a retroreflective target placed at various locations against the surface being measured. Using four interferometers gives redundant information, so that, from a set of measurement data, the dimensions of the machine as well as the coordinates of the measurement points can be determined. The machine is therefore self-calibrating and does not require a structure made to high accuracy. A wood-structured prototype of this machine was built; its key components are a simple form of air-bearing steering mirror, a wide-angle cat's eye retroreflector used as the movable target, and tracking sensors and servos that provide automatic tracking of the cat's eye by the four laser beams. The data are taken and analyzed by computer, and the output is given in terms of error relative to an equation of the desired surface. In tests of this machine, measurements of a 0.7 m diameter mirror blank have been made with an accuracy on the order of 0.2 µm rms.

  7. Rapid geodesic mapping of brain functional connectivity: implementation of a dedicated co-processor in a field-programmable gate array (FPGA) and application to resting state functional MRI.

    PubMed

    Minati, Ludovico; Cercignani, Mara; Chan, Dennis

    2013-10-01

    Graph theory-based analyses of brain network topology can be used to model the spatiotemporal correlations in neural activity detected through fMRI, and such approaches have wide-ranging potential, from detection of alterations in preclinical Alzheimer's disease through to command identification in brain-machine interfaces. However, due to prohibitive computational costs, graph-based analyses to date have principally focused on measuring connection density rather than mapping the topological architecture in full by exhaustive shortest-path determination. This paper outlines a solution to this problem through parallel implementation of Dijkstra's algorithm in programmable logic. The processor design is optimized for large, sparse graphs and provided in full as synthesizable VHDL code. An acceleration factor between 15 and 18 is obtained on a representative resting-state fMRI dataset, and maps of Euclidean path length reveal the anticipated heterogeneous cortical involvement in long-range integrative processing. These results enable high-resolution geodesic connectivity mapping for resting-state fMRI in patient populations and real-time geodesic mapping to support identification of imagined actions for fMRI-based brain-machine interfaces. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
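The core computation accelerated in hardware above is single-source shortest paths by Dijkstra's algorithm on a large, sparse graph. For reference, the standard priority-queue formulation (in Python rather than VHDL; this is the textbook algorithm, not the paper's parallel processor design) looks like this:

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest paths on a sparse weighted graph.
    adj maps each node to a list of (neighbour, edge_weight) pairs."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

Exhaustive geodesic mapping requires running this from every node, which is exactly the cost that motivates the FPGA co-processor.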

  8. Small mammal MRI imaging in spinal cord injury: a novel practical technique for using a 1.5 T MRI.

    PubMed

    Levene, Howard B; Mohamed, Feroze B; Faro, Scott H; Seshadri, Asha B; Loftus, Christopher M; Tuma, Ronald F; Jallo, Jack I

    2008-07-30

    Spinal cord injury (SCI) research is an active field, and the pathophysiology of SCI is not yet fully understood. As such, animal models are required for the exploration of new therapies and treatments. We present a novel technique using available hospital MRI machines to examine SCI in a mouse model. The model is a 60 kdyne direct contusion injury to the mouse thoracic spine. No new electronic equipment is required: a 1.5 T MRI machine with a human wrist coil is employed, and a standard multisection 2D fast spin-echo (FSE) T2-weighted sequence is used for imaging the mouse. The contrast-to-noise ratio (CNR) between the injured and normal areas of the spinal cord showed a three-fold increase in contrast between these two regions. The MRI findings could be correlated with kinematic outcome scores of ambulation, such as BBB or BMS. The ability to follow an SCI in the same animal over time should improve the quality of data while reducing the number of animals required in SCI research. It is the aim of the authors to share this non-invasive technique and to make it available to the scientific research community.
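The CNR reported above is conventionally the signal difference between two regions divided by the noise standard deviation. A minimal sketch (one common CNR definition; the paper may define its denominator differently):

```python
import statistics

def cnr(roi_lesion, roi_normal, roi_background):
    """Contrast-to-noise ratio: absolute mean-signal difference between
    two tissue ROIs divided by the standard deviation of a background
    (noise) ROI."""
    contrast = abs(statistics.mean(roi_lesion) - statistics.mean(roi_normal))
    return contrast / statistics.stdev(roi_background)
```

Computed on T2-weighted intensities from injured vs. normal cord ROIs, this is the quantity whose three-fold increase the abstract reports.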

  9. Automated segmentation of the parotid gland based on atlas registration and machine learning: a longitudinal MRI study in head-and-neck radiation therapy.

    PubMed

    Yang, Xiaofeng; Wu, Ning; Cheng, Guanghui; Zhou, Zhengyang; Yu, David S; Beitler, Jonathan J; Curran, Walter J; Liu, Tian

    2014-12-01

    To develop an automated magnetic resonance imaging (MRI) parotid segmentation method to monitor radiation-induced parotid gland changes in patients after head and neck radiation therapy (RT). The proposed method combines the atlas registration method, which captures the global variation of anatomy, with a machine learning technology, which captures the local statistical features, to automatically segment the parotid glands from the MRIs. The segmentation method consists of 3 major steps. First, an atlas (pre-RT MRI and manually contoured parotid gland mask) is built for each patient. A hybrid deformable image registration is used to map the pre-RT MRI to the post-RT MRI, and the transformation is applied to the pre-RT parotid volume. Second, the kernel support vector machine (SVM) is trained with the subject-specific atlas pair consisting of multiple features (intensity, gradient, and others) from the aligned pre-RT MRI and the transformed parotid volume. Third, the well-trained kernel SVM is used to differentiate the parotid from surrounding tissues in the post-RT MRIs by statistically matching multiple texture features. A longitudinal study of 15 patients undergoing head and neck RT was conducted: baseline MRI was acquired prior to RT, and the post-RT MRIs were acquired at 3-, 6-, and 12-month follow-up examinations. The resulting segmentations were compared with the physicians' manual contours. Successful parotid segmentation was achieved for all 15 patients (42 post-RT MRIs). The average percentage volume difference between the automated segmentations and the physicians' manual contours was 7.98% for the left parotid and 8.12% for the right parotid. The average volume overlap was 91.1% ± 1.6% for the left parotid and 90.5% ± 2.4% for the right parotid. The parotid gland volume reduction at follow-up was 25% at 3 months, 27% at 6 months, and 16% at 12 months. 
We have validated our automated parotid segmentation algorithm in a longitudinal study. This segmentation method may be useful in future studies to address radiation-induced xerostomia in head and neck radiation therapy. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. The role of fMRI in cognitive neuroscience: where do we stand?

    PubMed

    Poldrack, Russell A

    2008-04-01

    Functional magnetic resonance imaging (fMRI) has quickly become the most prominent tool in cognitive neuroscience. In this article, I outline some of the limits on the kinds of inferences that can be supported by fMRI, focusing particularly on reverse inference, in which the engagement of specific mental processes is inferred from patterns of brain activation. Although this form of inference is weak, newly developed methods from the field of machine learning offer the potential to formalize and strengthen reverse inferences. I conclude by discussing the increasing presence of fMRI results in the popular media and the ethical implications of the increasing predictive power of fMRI.

  11. Machine-Specific Magnetic Resonance Imaging Quality Control Procedures for Stereotactic Radiosurgery Treatment Planning

    PubMed Central

    Taghizadeh, Somayeh; Yang, Claus Chunli; R. Kanakamedala, Madhava; Morris, Bart; Vijayakumar, Srinivasan

    2017-01-01

    Purpose Magnetic resonance (MR) images are necessary for accurate contouring of intracranial targets, determination of gross target volume and evaluation of organs at risk during stereotactic radiosurgery (SRS) treatment planning procedures. Many centers use magnetic resonance imaging (MRI) simulators or regular diagnostic MRI machines for SRS treatment planning; while both types of machine require two stages of quality control (QC), both machine- and patient-specific, before use for SRS, no accepted guidelines for such QC currently exist. This article describes appropriate machine-specific QC procedures for SRS applications. Methods and materials We describe the adaptation of American College of Radiology (ACR)-recommended QC tests using an ACR MRI phantom for SRS treatment planning. In addition, commercial Quasar MRID3D and Quasar GRID3D phantoms were used to evaluate the effects of static magnetic field (B0) inhomogeneity, gradient nonlinearity, and a Leksell G frame (SRS frame) and its accessories on geometrical distortion in MR images. Results QC procedures found in-plane distortions of (Maximum = 3.5 mm, Mean = 0.91 mm, Standard deviation = 0.67 mm, >2.5 mm (%) = 2) in the X-direction, (Maximum = 2.51 mm, Mean = 0.52 mm, Standard deviation = 0.39 mm, >2.5 mm (%) = 0) in the Y-direction and (Maximum = 13.1 mm, Mean = 2.38 mm, Standard deviation = 2.45 mm, >2.5 mm (%) = 34) in the Z-direction, with <1 mm distortion over a head-sized region of interest. MR images acquired using a Leksell G frame and localization devices showed a mean absolute deviation of 2.3 mm from isocenter. The results of modified ACR tests were all within recommended limits, and baseline measurements have been defined for regular weekly QC tests. Conclusions With appropriate QC procedures in place, it is possible to routinely obtain clinically useful MR images suitable for SRS treatment planning purposes. 
MRI examination for SRS planning can benefit from the improved localization and planning possible with the superior image quality and soft tissue contrast achieved under optimal conditions. PMID:29487771

  12. Machine-Specific Magnetic Resonance Imaging Quality Control Procedures for Stereotactic Radiosurgery Treatment Planning.

    PubMed

    Fatemi, Ali; Taghizadeh, Somayeh; Yang, Claus Chunli; R Kanakamedala, Madhava; Morris, Bart; Vijayakumar, Srinivasan

    2017-12-18

    Purpose Magnetic resonance (MR) images are necessary for accurate contouring of intracranial targets, determination of gross target volume and evaluation of organs at risk during stereotactic radiosurgery (SRS) treatment planning procedures. Many centers use magnetic resonance imaging (MRI) simulators or regular diagnostic MRI machines for SRS treatment planning; while both types of machine require two stages of quality control (QC), both machine- and patient-specific, before use for SRS, no accepted guidelines for such QC currently exist. This article describes appropriate machine-specific QC procedures for SRS applications. Methods and materials We describe the adaptation of American College of Radiology (ACR)-recommended QC tests using an ACR MRI phantom for SRS treatment planning. In addition, commercial Quasar MRID3D and Quasar GRID3D phantoms were used to evaluate the effects of static magnetic field (B0) inhomogeneity, gradient nonlinearity, and a Leksell G frame (SRS frame) and its accessories on geometrical distortion in MR images. Results QC procedures found in-plane distortions of (Maximum = 3.5 mm, Mean = 0.91 mm, Standard deviation = 0.67 mm, >2.5 mm (%) = 2) in the X-direction, (Maximum = 2.51 mm, Mean = 0.52 mm, Standard deviation = 0.39 mm, >2.5 mm (%) = 0) in the Y-direction and (Maximum = 13.1 mm, Mean = 2.38 mm, Standard deviation = 2.45 mm, >2.5 mm (%) = 34) in the Z-direction, with <1 mm distortion over a head-sized region of interest. MR images acquired using a Leksell G frame and localization devices showed a mean absolute deviation of 2.3 mm from isocenter. The results of modified ACR tests were all within recommended limits, and baseline measurements have been defined for regular weekly QC tests. Conclusions With appropriate QC procedures in place, it is possible to routinely obtain clinically useful MR images suitable for SRS treatment planning purposes. 
MRI examination for SRS planning can benefit from the improved localization and planning possible with the superior image quality and soft tissue contrast achieved under optimal conditions.

  13. SU-E-I-65: Estimation of Tagging Efficiency in Pseudo-Continuous Arterial Spin Labeling (pCASL) MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jen, M; Yan, F; Tseng, Y

    2015-06-15

    Purpose: pCASL was recommended as a potent approach for absolute cerebral blood flow (CBF) quantification in clinical practice. However, uncertainties in the tagging efficiency of pCASL remain an issue. This study aimed to estimate tagging efficiency by using a short quantitative pulsed ASL scan (FAIR-QUIPSSII) and to compare the resultant CBF values with those calibrated using 2D phase contrast (PC) MRI. Methods: Fourteen normal volunteers participated in this study. All images, including whole brain (WB) pCASL, WB FAIR-QUIPSSII and single-slice 2D PC, were collected on a 3T clinical MRI scanner with an 8-channel head coil. The deltaM map was calculated by averaging the subtraction of tag/control pairs in the pCASL and FAIR-QUIPSSII images and used for CBF calculation. Tagging efficiency was then calculated as the ratio of mean gray matter CBF obtained from pCASL and FAIR-QUIPSSII. For comparison, tagging efficiency was also estimated with 2D PC, a previously established method, by contrasting WB CBF from pCASL and 2D PC. The feasibility of estimation from a short FAIR-QUIPSSII scan was evaluated by the number of averages required to obtain a stable deltaM value. Setting deltaM calculated with the maximum number of averages (50 pairs) as reference, stable results were defined as within ±10% variation. Results: Tagging efficiencies obtained by 2D PC MRI (0.732±0.092) were significantly lower than those obtained by FAIR-QUIPSSII (0.846±0.097) (P<0.05). Feasibility results revealed that four pairs of images in the FAIR-QUIPSSII scan were sufficient to obtain a robust calibration, with less than 10% difference from using 50 pairs. Conclusion: This study found that a reliable estimate of tagging efficiency can be obtained from a few pairs of FAIR-QUIPSSII images, suggesting that a calibration scan of short duration (within 30 s) is feasible. 
Considering recent reports concerning the variability of PC MRI-based calibration, this study proposed an effective alternative for CBF quantification with pCASL.
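The two calculations at the heart of the abstract, the efficiency ratio and the stability criterion for the number of averaged pairs, can be sketched as follows. Function names and the exact form of the stability rule are my own assumptions, not the study's code:

```python
def tagging_efficiency(gm_cbf_pcasl, gm_cbf_reference):
    """Tagging efficiency estimated as the ratio of mean grey-matter CBF
    from pCASL (computed assuming unit efficiency) to the CBF from the
    reference FAIR-QUIPSSII scan."""
    return gm_cbf_pcasl / gm_cbf_reference

def pairs_needed(running_delta_m, reference, tol=0.10):
    """Smallest number of averaged tag/control pairs from which the
    running-mean deltaM stays within +/- tol of the full-average
    reference (a simple stability rule; the study's exact criterion
    may differ)."""
    for n in range(1, len(running_delta_m) + 1):
        tail = running_delta_m[n - 1:]
        if all(abs(x - reference) <= tol * abs(reference) for x in tail):
            return n
    return None
```

With a criterion like this, the study's finding is that the running deltaM stabilizes after only about four pairs.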

  14. Can multi-slice or navigator-gated R2* MRI replace single-slice breath-hold acquisition for hepatic iron quantification?

    PubMed

    Loeffler, Ralf B; McCarville, M Beth; Wagstaff, Anne W; Smeltzer, Matthew P; Krafft, Axel J; Song, Ruitian; Hankins, Jane S; Hillenbrand, Claudia M

    2017-01-01

    Liver R2* values calculated from multi-gradient echo (mGRE) magnetic resonance images (MRI) are strongly correlated with hepatic iron concentration (HIC), as shown in several independently derived biopsy calibration studies. These calibrations were established for axial single-slice breath-hold imaging at the location of the portal vein. Scanning in multi-slice mode makes the exam more efficient, since whole-liver coverage can be achieved with two breath-holds and the optimal slice can be selected afterward. Navigator echoes remove the need for breath-holds and allow use in sedated patients. Our aim was to evaluate whether the existing biopsy calibrations can be applied to multi-slice and navigator-controlled mGRE imaging in children with hepatic iron overload, by testing whether there is a bias-free correlation between single-slice R2* and multi-slice or multi-slice navigator-controlled R2*. This study included MRI data from 71 patients with transfusional iron overload, who received an MRI exam to estimate HIC using gradient echo sequences. Patient scans contained 2 or 3 of the following imaging methods used for analysis: single-slice images (n = 71), multi-slice images (n = 69) and navigator-controlled images (n = 17). Small and large blood-corrected regions of interest were selected on axial images of the liver to obtain R2* values for all data sets. Bland-Altman and linear regression analyses were used to compare R2* values from single-slice images to those of multi-slice images and navigator-controlled images. Bland-Altman analysis showed that all imaging method comparisons were strongly associated with each other and had high correlation coefficients (0.98 ≤ r ≤ 1.00) with P-values ≤0.0001. Linear regression yielded slopes close to 1. We found that navigator-gated or breath-held multi-slice R2* MRI for HIC determination measures R2* values comparable to the biopsy-validated single-slice, single breath-hold scan. 
We conclude that these three R2* methods can be interchangeably used in existing R2*-HIC calibrations.
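The Bland-Altman analysis used above summarizes agreement between two measurement methods by the mean of the paired differences (bias) and its 95% limits of agreement. A minimal sketch of the standard computation (not the authors' code):

```python
import statistics

def bland_altman(a, b):
    """Bland-Altman agreement statistics for paired measurements from two
    methods: returns the mean difference (bias) and the 95% limits of
    agreement (bias +/- 1.96 * SD of the differences)."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A bias near zero with tight limits of agreement is what licenses the conclusion that single-slice, multi-slice and navigator-gated R2* can be used interchangeably.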

  15. Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard

    2014-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend to a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, rendering building energy modeling infeasible for smaller projects. In this paper, we describe the Autotune research effort, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and subsequently used to train machine learning algorithms to generate agents. These agents, once created, can run in a fraction of the time, thereby allowing cost-effective calibration of building models.
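The idea of replacing an expensive simulator with a trained agent can be illustrated with a drastically simplified surrogate: fit a cheap model to (parameter, simulated output) samples, then use it to score candidate parameters against measured data. This one-dimensional least-squares sketch is my own illustration, not Autotune's architecture:

```python
def fit_linear_surrogate(params, outputs):
    """Fit y ~ a*x + b by least squares to (parameter, simulated output)
    samples; the surrogate then predicts in O(1) instead of re-running
    a full EnergyPlus simulation."""
    n = len(params)
    mx = sum(params) / n
    my = sum(outputs) / n
    a = sum((x - mx) * (y - my) for x, y in zip(params, outputs)) / \
        sum((x - mx) ** 2 for x in params)
    return a, my - a * mx

def calibrate(surrogate, candidates, measured):
    """Pick the candidate parameter whose surrogate prediction is closest
    to the measured building energy use."""
    a, b = surrogate
    return min(candidates, key=lambda x: abs(a * x + b - measured))
```

The real problem is thousands-dimensional and needs far richer models, which is why the training data must come from millions of supercomputer simulations.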

  16. Teaching Camera Calibration by a Constructivist Methodology

    ERIC Educational Resources Information Center

    Samper, D.; Santolaria, J.; Pastor, J. J.; Aguilar, J. J.

    2010-01-01

    This article describes the Metrovisionlab simulation software and practical sessions designed to teach the most important machine vision camera calibration aspects in courses for senior undergraduate students. By following a constructivist methodology, having received introductory theoretical classes, students use the Metrovisionlab application to…

  17. Power Doppler signal calibration between ultrasound machines by use of a capillary-flow phantom for pannus vascularity in rheumatoid finger joints: a basic study.

    PubMed

    Sakano, Ryosuke; Kamishima, Tamotsu; Nishida, Mutsumi; Horie, Tatsunori

    2015-01-01

    Ultrasound allows the detection and grading of inflammation in rheumatology. Despite these advantages of ultrasound in the management of rheumatoid patients, it is well known that there are significant machine-to-machine disagreements in signal quantification. In this study, we tried to calibrate the power Doppler (PD) signal of two models of ultrasound machines by using a capillary-flow phantom. After flow velocity analysis in the perfusion cartridge at various injection rates (0.1-0.5 ml/s), we measured the signal count in the perfusion cartridge at various injection rates and pulse repetition frequencies (PRFs) by using PD, perfusing an ultrasound microbubble contrast agent diluted with normal saline to simulate human blood. Using data from two models of ultrasound machines, the Aplio 500 (Toshiba) and the Avius (Hitachi Aloka), the quantitative PD (QPD) index [the summation of the colored pixels in a 1 cm × 1 cm rectangular region of interest (ROI)] was calculated with ImageJ (free software). We found a positive correlation between the injection rate and the flow velocity. For both the Aplio 500 and the Avius, we found negative correlations between the PRF and the QPD index when the flow velocity was constant, and a positive correlation between flow velocity and the QPD index at constant PRF. The equation relating the PRFs of the two machines was: y = 0.023x + 0.36 [y = PRF of the Avius (kHz), x = PRF of the Aplio 500 (kHz)]. Our results suggest that signal calibration between different models of ultrasound machines is possible by adjustment of the PRF setting.
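The two computations above, the QPD pixel count and the reported PRF calibration line, can be sketched directly. The pixel predicate and function names are my own; the calibration coefficients are the ones reported in the abstract:

```python
def qpd_index(roi_pixels, is_colored):
    """QPD index: number of colour (power Doppler) pixels inside the
    1 cm x 1 cm rectangular ROI. is_colored is a predicate deciding
    whether a pixel carries Doppler colour."""
    return sum(1 for p in roi_pixels if is_colored(p))

def avius_prf_from_aplio(prf_aplio_khz):
    """Machine-to-machine PRF calibration reported in the study:
    y = 0.023 x + 0.36, mapping Aplio 500 PRF (kHz) to Avius PRF (kHz)."""
    return 0.023 * prf_aplio_khz + 0.36
```

Matching the PRF settings through this linear relation is what makes QPD indices from the two machines comparable.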

  18. Classification of fMRI independent components using IC-fingerprints and support vector machine classifiers.

    PubMed

    De Martino, Federico; Gentile, Francesco; Esposito, Fabrizio; Balsi, Marco; Di Salle, Francesco; Goebel, Rainer; Formisano, Elia

    2007-01-01

    We present a general method for the classification of independent components (ICs) extracted from functional MRI (fMRI) data sets. The method consists of two steps. In the first step, each fMRI-IC is associated with an IC-fingerprint, i.e., a representation of the component in a multidimensional space of parameters. These parameters are post hoc estimates of global properties of the ICs and are largely independent of a specific experimental design and stimulus timing. In the second step a machine learning algorithm automatically separates the IC-fingerprints into six general classes after preliminary training performed on a small subset of expert-labeled components. We illustrate this approach in a multisubject fMRI study employing visual structure-from-motion stimuli encoding faces and control random shapes. We show that: (1) IC-fingerprints are a valuable tool for the inspection, characterization and selection of fMRI-ICs and (2) automatic classifications of fMRI-ICs in new subjects present a high correspondence with those obtained by expert visual inspection of the components. Importantly, our classification procedure highlights several neurophysiologically interesting processes. The most intriguing of which is reflected, with high intra- and inter-subject reproducibility, in one IC exhibiting a transiently task-related activation in the 'face' region of the primary sensorimotor cortex. This suggests that in addition to or as part of the mirror system, somatotopic regions of the sensorimotor cortex are involved in disambiguating the perception of a moving body part. Finally, we show that the same classification algorithm can be successfully applied, without re-training, to fMRI collected using acquisition parameters, stimulation modality and timing considerably different from those used for training.

  19. Using the cloud to speed-up calibration of watershed-scale hydrologic models (Invited)

    NASA Astrophysics Data System (ADS)

    Goodall, J. L.; Ercan, M. B.; Castronova, A. M.; Humphrey, M.; Beekwilder, N.; Steele, J.; Kim, I.

    2013-12-01

    This research focuses on using the cloud to address computational challenges associated with hydrologic modeling. One example is calibration of a watershed-scale hydrologic model, which can take days of execution time on typical computers. While parallel algorithms for model calibration exist and some researchers have used multi-core computers or clusters to run them, these solutions do not fully address the challenge because (i) calibration can still be too time-consuming even on multicore personal computers and (ii) few in the community have the time and expertise needed to manage a compute cluster. Given this, another option we are exploring through this work is the use of the cloud to speed up calibration of watershed-scale hydrologic models. The cloud used in this capacity provides a means for renting a specific number and type of machines for only the time needed to perform a calibration run. The cloud allows one to precisely balance the duration of the calibration with the financial cost, so that, if the budget allows, the calibration can be performed more quickly by renting more machines. Focusing specifically on the SWAT hydrologic model and a parallel version of the DDS calibration algorithm, we show significant speed-ups across a range of watershed sizes using up to 256 cores to perform a model calibration. The tool provides a simple web-based user interface and the ability to monitor job submission during the calibration process. Finally, the talk concludes with initial work to leverage the cloud for other tasks associated with hydrologic modeling, including preparing inputs for constructing place-based hydrologic models.

  20. Study on on-machine defects measuring system on high power laser optical elements

    NASA Astrophysics Data System (ADS)

    Luo, Chi; Shi, Feng; Lin, Zhifan; Zhang, Tong; Wang, Guilin

    2017-10-01

Surface defects on high power laser optical elements degrade the performance of the imaging system, increasing energy consumption and damaging the film layer. To improve the detection of such defects, an on-machine defect measuring system was investigated. First, the system components were selected and designed based on an analysis of the working conditions of on-machine defect detection. Image-processing algorithms were then designed to recognize, classify and evaluate surface defects. A calibration experiment for scratches was carried out using a self-made standard alignment plate. Finally, detection and evaluation of surface defects on a large-diameter semi-cylindrical silicon mirror were realized. The calibration results show a size deviation of less than 4%, which meets the precision requirement for defect detection. Through image analysis, the on-machine defect detection system achieves accurate identification of surface defects.

  1. Decision forests for learning prostate cancer probability maps from multiparametric MRI

    NASA Astrophysics Data System (ADS)

    Ehrenberg, Henry R.; Cornfeld, Daniel; Nawaf, Cayce B.; Sprenkle, Preston C.; Duncan, James S.

    2016-03-01

Objectives: Advances in multiparametric magnetic resonance imaging (mpMRI) and ultrasound/MRI fusion imaging offer a powerful alternative to the typical undirected approach to diagnosing prostate cancer. However, these methods require the time and expertise needed to interpret mpMRI image scenes. In this paper, a machine learning framework for automatically detecting and localizing cancerous lesions within the prostate is developed and evaluated. Methods: Two studies were performed to gather MRI and pathology data. The 12 patients in the first study underwent an MRI session to obtain structural, diffusion-weighted, and dynamic contrast enhanced image volumes of the prostate, and regions suspected of being cancerous from the MRI data were manually contoured by radiologists. Whole-mount slices of the prostate were obtained for the patients in the second study, in addition to structural and diffusion-weighted MRI data, for pathology verification. A 3-D feature set for voxel-wise appearance description combining intensity data, textural operators, and zonal approximations was generated. Voxels in a test set were classified as normal or cancer using a decision forest-based model initialized using Gaussian discriminant analysis. A leave-one-patient-out cross-validation scheme was used to assess the predictions against the expert manual segmentations confirmed as cancer by biopsy. Results: We achieved an area under the average receiver-operator characteristic curve of 0.923 for the first study, and visual assessment of the probability maps showed 21 out of 22 tumors were identified while a high level of specificity was maintained. In addition to evaluating the model against related approaches, the effects of the individual MRI parameter types were explored, and pathological verification using whole-mount slices from the second study was performed. Conclusions: The results of this paper show that the combination of mpMRI and machine learning is a powerful tool for quantitatively diagnosing prostate cancer.

  2. An fMRI and effective connectivity study investigating miss errors during advice utilization from human and machine agents.

    PubMed

    Goodyear, Kimberly; Parasuraman, Raja; Chernyak, Sergey; de Visser, Ewart; Madhavan, Poornima; Deshpande, Gopikrishna; Krueger, Frank

    2017-10-01

    As society becomes more reliant on machines and automation, understanding how people utilize advice is a necessary endeavor. Our objective was to reveal the underlying neural associations during advice utilization from expert human and machine agents with fMRI and multivariate Granger causality analysis. During an X-ray luggage-screening task, participants accepted or rejected good or bad advice from either the human or machine agent framed as experts with manipulated reliability (high miss rate). We showed that the machine-agent group decreased their advice utilization compared to the human-agent group and these differences in behaviors during advice utilization could be accounted for by high expectations of reliable advice and changes in attention allocation due to miss errors. Brain areas involved with the salience and mentalizing networks, as well as sensory processing involved with attention, were recruited during the task and the advice utilization network consisted of attentional modulation of sensory information with the lingual gyrus as the driver during the decision phase and the fusiform gyrus as the driver during the feedback phase. Our findings expand on the existing literature by showing that misses degrade advice utilization, which is represented in a neural network involving salience detection and self-processing with perceptual integration.

  3. MO-F-CAMPUS-T-04: Implementation of a Standardized Monthly Quality Check for Linac Output Management in a Large Multi-Site Clinic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, H; Yi, B; Prado, K

    2015-06-15

Purpose: This work is to investigate the feasibility of a standardized monthly quality check (QC) of LINAC output determination in a multi-site, multi-LINAC institution. The QC was developed to determine individual LINAC output using the same optimized measurement setup and a constant calibration factor for all machines across the institution. Methods: The QA data over 4 years of 7 Varian machines over four sites were analyzed. The monthly output constancy checks were performed using a fixed source-to-chamber-distance (SCD), with no couch position adjustment throughout the measurement cycle, for all the photon energies (6 and 18 MV) and electron energies (6, 9, 12, 16 and 20 MeV). The constant monthly output calibration factor (Nconst) was determined by averaging the machines' output data, acquired with the same monthly ion chamber. If a different monthly ion chamber was used, Nconst was re-normalized to account for its different NDW,Co-60. Here, the possible changes of Nconst over 4 years have been tracked, and the precision of output results based on this standardized monthly QA program relative to the TG-51 calibration for each machine was calculated. Any outlier of the group was investigated. Results: The possible changes of Nconst varied between 0–0.9% over 4 years. The normalization of absorbed-dose-to-water calibration factors corrects for up to 3.3% variations between different monthly QA chambers. The LINAC output precision based on this standardized monthly QC relative to the TG-51 output calibration is within 1% for the 6 MV photon energy and 2% for 18 MV and all the electron energies. A human error in one TG-51 report was found through a close scrutiny of outlier data. Conclusion: This standardized QC allows for a reasonably simplified, precise and robust monthly LINAC output constancy check, with the increased sensitivity needed to detect possible human errors and machine problems.
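The constant-factor bookkeeping described above can be illustrated in a few lines; all readings, chamber factors, and function names below are made-up examples (not the clinic's data), and the direction of the N_DW renormalization ratio is an assumption:

```python
def n_const(outputs):
    """Constant monthly calibration factor: the average of the machines'
    output data acquired with the same monthly ion chamber."""
    return sum(outputs.values()) / len(outputs)

def renormalize(n, ndw_old, ndw_new):
    """If the monthly chamber changes, rescale N_const by the ratio of the
    chambers' absorbed-dose-to-water factors (direction assumed here)."""
    return n * ndw_new / ndw_old

def within_tolerance(monthly, tg51, six_mv_photon):
    """Monthly output vs. TG-51 output: 1% tolerance for 6 MV photons,
    2% for 18 MV and electron beams, mirroring the precision reported."""
    tol = 0.01 if six_mv_photon else 0.02
    return abs(monthly / tg51 - 1.0) <= tol
```

Outputs failing `within_tolerance` would be the outliers scrutinized for human error or machine problems.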

  4. [How do metallic middle ear implants behave in the MRI?].

    PubMed

    Kwok, P; Waldeck, A; Strutz, J

    2003-01-01

Magnetic resonance imaging (MRI) has gained in frequency and importance as a diagnostic procedure. Given the close anatomical relationships in the temporal bone, it is necessary to know whether MRI is hazardous to patients with metallic middle ear implants with regard to displacement and rise in temperature. For good MR image quality, artefacts caused by metallic prostheses should be low. Four different stapes prostheses made from titanium, gold, teflon/platinum and teflon/steel, a titanium total ossicular reconstruction prosthesis (TORP) and two ventilation tubes (made from titanium and gold) were tested in a 1.5 Tesla MRI machine regarding their displacement. All objects were first placed in a petri dish, then suspended from a thread and finally immersed in a dish filled with gadolinium. Temperature changes of the implants were recorded by a pyrometer. None of the implants moved when they were placed in the petri dish or suspended from the thread. On the water surface, the teflon/platinum and teflon/steel pistons aligned their axes longitudinally with the MRI scanner opening, and the teflon/steel piston floated towards the MRI machine when put close enough to the scanner opening. No rise in temperature was recorded. All implants produced artefacts small enough that evaluation of the surrounding tissue would still be possible. Patients with any of the metallic middle ear implants examined in this study may undergo MRI investigations without significant adverse effects.

  5. Single slice US-MRI registration for neurosurgical MRI-guided US

    NASA Astrophysics Data System (ADS)

    Pardasani, Utsav; Baxter, John S. H.; Peters, Terry M.; Khan, Ali R.

    2016-03-01

Image-based ultrasound to magnetic resonance image (US-MRI) registration can be an invaluable tool in image-guided neuronavigation systems. State-of-the-art commercial and research systems utilize image-based registration to assist in functions such as brain-shift correction, image fusion, and probe calibration. Since traditional US-MRI registration techniques use reconstructed US volumes or a series of tracked US slices, the functionality of this approach can be compromised by the limitations of optical or magnetic tracking systems in the neurosurgical operating room. These drawbacks include ergonomic issues, line-of-sight/magnetic interference, and maintenance of the sterile field. For those seeking a US vendor-agnostic system, these issues are compounded by the challenge of instrumenting the probe without permanent modification and calibrating the probe face to the tracking tool. To address these challenges, this paper explores the feasibility of real-time US-MRI volume registration in a small virtual craniotomy site using a single slice. We employ the Linear Correlation of Linear Combination (LC2) similarity metric in its patch-based form on data from MNI's Brain Images for Tumour Evaluation (BITE) dataset, as a PyCUDA-enabled Python module in Slicer. By retaining the original orientation information, we are able to improve on the poses using this approach. To further assist the challenge of US-MRI registration, we also present the BOXLC2 metric, which demonstrates a speed improvement over LC2 while retaining a similar accuracy in this context.
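The patch-wise LC2 idea can be sketched in one dimension: fit the US intensities of a patch by an affine combination of MRI intensity and MRI gradient magnitude, and score the fraction of US variance explained. This is a simplified illustration (real LC2 works on 2-D/3-D patches with variance-weighted aggregation), not the paper's PyCUDA module:

```python
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            M[r] = [v - f * w for v, w in zip(M[r], M[c])]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

def lc2(us, mri):
    """1-D LC2 sketch for one patch: least-squares fit of US intensities by
    [MRI intensity, MRI gradient magnitude, 1], scored as the fraction of
    US variance explained (1 = perfect local fit, 0 = none)."""
    n = len(us)
    grad = [abs(mri[min(i + 1, n - 1)] - mri[max(i - 1, 0)]) for i in range(n)]
    X = [[m, g, 1.0] for m, g in zip(mri, grad)]
    A = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
    b = [sum(row[i] * u for row, u in zip(X, us)) for i in range(3)]
    w = solve3(A, b)  # normal equations
    resid = [u - sum(wi * xi for wi, xi in zip(w, row)) for u, row in zip(us, X)]
    mean_u = sum(us) / n
    var_u = sum((u - mean_u) ** 2 for u in us)
    return 1.0 - sum(r * r for r in resid) / var_u if var_u > 0 else 0.0
```

Because the fit is local and affine, the metric tolerates the very different contrast mechanisms of US and MRI, which is why it suits this registration problem.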

  6. Implementation of compressive sensing for preclinical cine-MRI

    NASA Astrophysics Data System (ADS)

    Tan, Elliot; Yang, Ming; Ma, Lixin; Zheng, Yahong Rosa

    2014-03-01

This paper presents a practical implementation of Compressive Sensing (CS) for a preclinical MRI machine to acquire randomly undersampled k-space data in cardiac function imaging applications. First, random undersampling masks were generated based on Gaussian, Cauchy, wrapped Cauchy and von Mises probability density functions by the inverse transform method. The best masks for undersampling ratios of 0.3, 0.4 and 0.5 were chosen for animal experimentation and were programmed into a Bruker Avance III BioSpec 7.0T MRI system through method programming in ParaVision. Three undersampled mouse heart datasets were obtained using a fast low angle shot (FLASH) sequence, along with a control undersampled phantom dataset. ECG and respiratory gating were used to obtain high quality images. After CS reconstructions were applied to all acquired data, the resulting images were quantitatively analyzed using the performance metrics of reconstruction error and Structural Similarity Index (SSIM). The comparative analysis indicated that CS reconstructed images from MRI machine undersampled data were indeed comparable to CS reconstructed images from retrospectively undersampled data, and that CS techniques are practical in a preclinical setting. The implementation achieved 2 to 4 times acceleration for image acquisition and satisfactory quality of image reconstruction.
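Inverse-transform generation of an undersampling mask might look like the sketch below for the Cauchy case (the width parameter `gamma` and the line-indexing convention are assumptions; the paper also evaluated Gaussian, wrapped Cauchy and von Mises densities):

```python
import math
import random

def cauchy_mask(n_lines, ratio, gamma=0.15, seed=0):
    """Build a phase-encode sampling mask by inverse transform sampling:
    draw line indices from a Cauchy density centred on the k-space centre
    until the requested undersampling ratio is reached."""
    rng = random.Random(seed)
    target = max(1, round(ratio * n_lines))
    centre = (n_lines - 1) / 2.0
    keep = set()
    while len(keep) < target:
        u = rng.random()
        # inverse CDF of the Cauchy distribution: x = x0 + g * tan(pi*(u - 1/2))
        x = centre + gamma * n_lines * math.tan(math.pi * (u - 0.5))
        if 0 <= x < n_lines:
            keep.add(int(x))
    return [1 if i in keep else 0 for i in range(n_lines)]

mask = cauchy_mask(128, 0.4)
```

Densities peaked at the k-space centre keep the heavily weighted low-frequency lines while randomly thinning the periphery, which is what makes the aliasing incoherent and CS reconstruction effective.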

  7. Ensemble support vector machine classification of dementia using structural MRI and mini-mental state examination.

    PubMed

    Sørensen, Lauge; Nielsen, Mads

    2018-05-15

    The International Challenge for Automated Prediction of MCI from MRI data offered independent, standardized comparison of machine learning algorithms for multi-class classification of normal control (NC), mild cognitive impairment (MCI), converting MCI (cMCI), and Alzheimer's disease (AD) using brain imaging and general cognition. We proposed to use an ensemble of support vector machines (SVMs) that combined bagging without replacement and feature selection. SVM is the most commonly used algorithm in multivariate classification of dementia, and it was therefore valuable to evaluate the potential benefit of ensembling this type of classifier. The ensemble SVM, using either a linear or a radial basis function (RBF) kernel, achieved multi-class classification accuracies of 55.6% and 55.0% in the challenge test set (60 NC, 60 MCI, 60 cMCI, 60 AD), resulting in a third place in the challenge. Similar feature subset sizes were obtained for both kernels, and the most frequently selected MRI features were the volumes of the two hippocampal subregions left presubiculum and right subiculum. Post-challenge analysis revealed that enforcing a minimum number of selected features and increasing the number of ensemble classifiers improved classification accuracy up to 59.1%. The ensemble SVM outperformed single SVM classifications consistently in the challenge test set. Ensemble methods using bagging and feature selection can improve the performance of the commonly applied SVM classifier in dementia classification. This resulted in competitive classification accuracies in the International Challenge for Automated Prediction of MCI from MRI data. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Support vector machine learning-based fMRI data group analysis.

    PubMed

    Wang, Ze; Childress, Anna R; Wang, Jiongjiong; Detre, John A

    2007-07-15

    To explore the multivariate nature of fMRI data and to consider the inter-subject brain response discrepancies, a multivariate and brain response model-free method is fundamentally required. Two such methods are presented in this paper by integrating a machine learning algorithm, the support vector machine (SVM), and the random effect model. Without any brain response modeling, SVM was used to extract a whole brain spatial discriminance map (SDM), representing the brain response difference between the contrasted experimental conditions. Population inference was then obtained through the random effect analysis (RFX) or permutation testing (PMU) on the individual subjects' SDMs. Applied to arterial spin labeling (ASL) perfusion fMRI data, SDM RFX yielded lower false-positive rates in the null hypothesis test and higher detection sensitivity for synthetic activations with varying cluster size and activation strengths, compared to the univariate general linear model (GLM)-based RFX. For a sensory-motor ASL fMRI study, both SDM RFX and SDM PMU yielded similar activation patterns to GLM RFX and GLM PMU, respectively, but with higher t values and cluster extensions at the same significance level. Capitalizing on the absence of temporal noise correlation in ASL data, this study also incorporated PMU in the individual-level GLM and SVM analyses accompanied by group-level analysis through RFX or group-level PMU. Providing inferences on the probability of being activated or deactivated at each voxel, these individual-level PMU-based group analysis methods can be used to threshold the analysis results of GLM RFX, SDM RFX or SDM PMU.
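The two stages described above, a linear SVM whose voxel-wise weight vector serves as the SDM, followed by a random-effects test across subjects, can be sketched as below; the Pegasos-style training loop and the toy two-voxel data are illustrative stand-ins, not the authors' implementation:

```python
import math
import random

def linear_svm_sdm(X, y, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM with a Pegasos-style subgradient loop; the learned
    weight vector over voxels plays the role of the spatial discriminance
    map (SDM) contrasting the two experimental conditions."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            # shrink by the L2 penalty; add the hinge subgradient if violated
            w = [(1.0 - eta * lam) * wj + (eta * y[i] * xj if margin < 1.0 else 0.0)
                 for wj, xj in zip(w, X[i])]
    return w

def rfx_t(values):
    """Random-effects group inference: one-sample t statistic of the
    per-subject SDM values at a voxel against zero."""
    n = len(values)
    m = sum(values) / n
    sd = math.sqrt(sum((v - m) ** 2 for v in values) / (n - 1))
    return m / (sd / math.sqrt(n))

# toy data: "voxel" 0 discriminates the two conditions, "voxel" 1 is noise
X = [[1.0, 0.1], [0.9, -0.2], [1.1, 0.05], [-1.0, 0.1], [-0.95, -0.1], [-1.05, 0.0]]
y = [1, 1, 1, -1, -1, -1]
w = linear_svm_sdm(X, y)
```

In the paper's setting each subject contributes one SDM, and `rfx_t` (or permutation testing) is applied voxel-wise across those maps for population inference.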

  9. Geometric calibration of a coordinate measuring machine using a laser tracking system

    NASA Astrophysics Data System (ADS)

    Umetsu, Kenta; Furutnani, Ryosyu; Osawa, Sonko; Takatsuji, Toshiyuki; Kurosawa, Tomizo

    2005-12-01

    This paper proposes a calibration method for a coordinate measuring machine (CMM) using a laser tracking system. The laser tracking system can measure three-dimensional coordinates based on the principle of trilateration with high accuracy and is easy to set up. The accuracy of length measurement of a single laser tracking interferometer (laser tracker) is about 0.3 µm over a length of 600 mm. In this study, we first measured 3D coordinates using the laser tracking system. Secondly, 21 geometric errors, namely, parametric errors of the CMM, were estimated by the comparison of the coordinates obtained by the laser tracking system and those obtained by the CMM. As a result, the estimated parametric errors agreed with those estimated by a ball plate measurement, which demonstrates the validity of the proposed calibration system.
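Trilateration from interferometer distances can be illustrated with a deliberately simple geometry: the sketch below assumes four stations at the origin and at distance L along each axis, so that subtracting the origin's sphere equation from the others both linearizes and decouples the system (the real system solves a general least-squares problem for arbitrary station positions):

```python
import math

def trilaterate(d0, d1, d2, d3, L):
    """Recover a 3-D point from four measured distances, assuming tracker
    stations at the origin and at distance L along each coordinate axis.
    From |p|^2 = d0^2 and |p - L*e_k|^2 = dk^2 it follows that
    p_k = (L^2 + d0^2 - dk^2) / (2L)."""
    x = (L * L + d0 * d0 - d1 * d1) / (2.0 * L)
    y = (L * L + d0 * d0 - d2 * d2) / (2.0 * L)
    z = (L * L + d0 * d0 - d3 * d3) / (2.0 * L)
    return (x, y, z)

# demo: synthesize distances to a known point and recover it
p = (0.1, 0.2, 0.3)
stations = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
d0, d1, d2, d3 = (math.dist(p, s) for s in stations)
est = trilaterate(d0, d1, d2, d3, 1.0)
```

Comparing many such recovered coordinates with the CMM's own readout is what allows the 21 parametric errors to be estimated.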

  10. Omega-X micromachining system

    DOEpatents

    Miller, Donald M.

    1978-01-01

A micromachining tool system with X- and omega-axes is used to machine spherical, aspherical, and irregular surfaces with a maximum contour error of 100 nanometers (nm) and surface waviness of no more than 0.8 nm RMS. The omega axis, named for the angular measurement of the rotation of an eccentric mechanism supporting one end of a tool bar, enables the pulse increments of the tool toward the workpiece to be as little as 0 to 4.4 nm. A dedicated computer coordinates motion in the two axes to produce the workpiece contour. Inertia is reduced by reducing the mass pulsed toward the workpiece to about one-fifth of its former value. The tool system includes calibration instruments to calibrate the micromachining tool system. Backlash is reduced and flexing decreased by using a rotary table and servomotor, instead of a ball screw mechanism, to pulse the tool in the omega-axis. A thermally-stabilized spindle rotates the workpiece and is driven, through a torque-smoothing pulley and vibrationless rotary coupling, by a motor not mounted on the micromachining tool base. Abbe offset errors are almost eliminated by tool setting and calibration at spindle center height. Tool contour and workpiece contour are gaged on the machine; this enables the source of machining errors to be determined more readily, because the workpiece is gaged before its shape can be changed by removal from the machine.

  11. Review of the energy check of an electron-only linear accelerator over a 6 year period: sensitivity of the technique to energy shift.

    PubMed

    Biggs, Peter J

    2003-04-01

    The calibration and monthly QA of an electron-only linear accelerator dedicated to intra-operative radiation therapy has been reviewed. Since this machine is calibrated prior to every procedure, there was no necessity to adjust the output calibration at any time except, perhaps, when the magnetron is changed, provided the machine output is reasonably stable. This gives a unique opportunity to study the dose output of the machine per monitor unit, variation in the timer error, flatness and symmetry of the beam and the energy check as a function of time. The results show that, although the dose per monitor unit varied within +/- 2%, the timer error within +/- 0.005 MU and the asymmetry within 1-2%, none of these parameters showed any systematic change with time. On the other hand, the energy check showed a linear drift with time for 6, 9, and 12 MeV (2.1, 3.5, and 2.5%, respectively, over 5 years), while at 15 and 18 MeV, the energy check was relatively constant. It is further shown that based on annual calibrations and RPC TLD checks, the energy of each beam is constant and that therefore the energy check is an exquisitely sensitive one. The consistency of the independent checks is demonstrated.

  12. Machine Learning Principles Can Improve Hip Fracture Prediction.

    PubMed

    Kruse, Christian; Eiken, Pia; Vestergaard, Peter

    2017-04-01

We applied machine learning principles to predict hip fractures and estimate predictor importance in dual-energy X-ray absorptiometry (DXA)-scanned men and women. DXA data from two Danish regions between 1996 and 2006 were combined with national Danish patient data, yielding 4722 women and 717 men with 5 years of follow-up time (original cohort n = 6606 men and women). Twenty-four statistical models were built on 75% of the data points through 5-fold, 5-repeat cross-validation, and then validated on the remaining 25% of the data points to calculate the area under the curve (AUC) and calibrate probability estimates. The best models were retrained with restricted predictor subsets to estimate the best subsets. For women, bootstrap aggregated flexible discriminant analysis ("bagFDA") performed best with a test AUC of 0.92 [0.89; 0.94] and well-calibrated probabilities following Naïve Bayes adjustments. A "bagFDA" model limited to 11 predictors (among them bone mineral densities (BMD), biochemical glucose measurements, and general practitioner and dentist use) achieved a test AUC of 0.91 [0.88; 0.93]. For men, eXtreme Gradient Boosting ("xgbTree") performed best with a test AUC of 0.89 [0.82; 0.95], but with poor calibration at higher probabilities. A ten-predictor subset (BMD, biochemical cholesterol and liver function tests, penicillin use and osteoarthritis diagnoses) achieved a test AUC of 0.86 [0.78; 0.94] using an "xgbTree" model. Machine learning can improve hip fracture prediction beyond logistic regression using ensemble models. Compiling data from international cohorts with longer follow-up and performing similar machine learning procedures has the potential to further improve discrimination and calibration.
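The test-set AUC figures quoted above are areas under ROC curves, which can be computed directly from the rank statistic independently of any particular model; a minimal sketch:

```python
def auc(case_scores, control_scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen case (e.g. hip fracture) is scored
    above a randomly chosen control, counting ties as one half."""
    wins = sum((c > n) + 0.5 * (c == n)
               for c in case_scores for n in control_scores)
    return wins / (len(case_scores) * len(control_scores))
```

Note that a high AUC says nothing about calibration, which is why the study separately checked (and for "xgbTree" found lacking) the agreement between predicted probabilities and observed fracture rates.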

  13. Volumetric brain magnetic resonance imaging predicts functioning in bipolar disorder: A machine learning approach.

    PubMed

    Sartori, Juliana M; Reckziegel, Ramiro; Passos, Ives Cavalcante; Czepielewski, Leticia S; Fijtman, Adam; Sodré, Leonardo A; Massuda, Raffael; Goi, Pedro D; Vianna-Sulzbach, Miréia; Cardoso, Taiane de Azevedo; Kapczinski, Flávio; Mwangi, Benson; Gama, Clarissa S

    2018-08-01

Neuroimaging in Bipolar Disorder (BD) has been explored steadily in the last decades. Neuroanatomical changes tend to be more pronounced in patients with repeated episodes. Although the role of such changes in cognition and memory is well established, impairments in daily-life functioning loom large among the consequences of the proposed progression. The objective of this study was to analyze MRI volumetric modifications in BD and healthy controls (HC) as possible predictors of daily-life functioning through a machine learning approach. Ninety-four participants (35 DSM-IV BD type I and 59 HC) underwent clinical and functioning assessments, and structural MRI. Functioning was assessed using the Functioning Assessment Short Test (FAST). The machine learning analysis was used to identify candidate regional brain volumes that could predict functioning status, through a support vector regression algorithm. Patients with BD and HC did not differ in age, education and marital status. There were significant differences between groups in gender, BMI, FAST score, and employment status. There was a significant correlation between observed and predicted FAST scores for patients with BD, but not for controls. According to the model, the brain structure volumes that could predict FAST scores were: left superior frontal cortex, left rostral medial frontal cortex, right total white matter volume and right lateral ventricle volume. The machine learning approach demonstrated that brain volume changes in MRI were predictors of FAST score in patients with BD and could identify specific brain areas related to functioning impairment. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Temperature Measurement and Numerical Prediction in Machining Inconel 718.

    PubMed

    Díaz-Álvarez, José; Tapetado, Alberto; Vázquez, Carmen; Miguélez, Henar

    2017-06-30

Thermal issues are critical when machining Ni-based superalloy components designed for high temperature applications. The low thermal conductivity and extreme strain hardening of this family of materials result in elevated temperatures around the cutting area. This elevated temperature could lead to machining-induced damage such as phase changes and residual stresses, resulting in reduced service life of the component. Measurement of temperature during machining is crucial in order to control the cutting process and avoid workpiece damage. On the other hand, the development of predictive tools based on numerical models helps in the definition of machining processes and in obtaining difficult-to-measure parameters such as the penetration of the heated layer. However, the validation of numerical models strongly depends on the accurate measurement of physical parameters such as temperature, ensuring the calibration of the model. This paper focuses on the measurement and prediction of temperature during the machining of Ni-based superalloys. The temperature sensor was based on a fiber-optic two-color pyrometer developed for localized temperature measurements in turning of Inconel 718. The sensor is capable of measuring temperature in the range of 250 to 1200 °C. Temperature evolution was recorded in a lathe at different feed rates and cutting speeds. The measurements were used to calibrate a simplified numerical model for the prediction of temperature fields during turning.
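The principle behind a two-color pyrometer can be sketched under the Wien approximation with an assumed gray body (a constant emissivity cancels in the intensity ratio); the wavelengths below are illustrative, not the sensor's actual design values:

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def two_color_temperature(i1, i2, lam1, lam2):
    """Gray-body two-color pyrometry under the Wien approximation:
    I(lam) ~ eps * lam**-5 * exp(-C2 / (lam * T)), so the ratio of the two
    intensities cancels the (assumed wavelength-independent) emissivity
    and can be inverted for temperature. Wavelengths in metres, T in kelvin."""
    num = C2 * (1.0 / lam2 - 1.0 / lam1)
    den = math.log(i1 / i2) - 5.0 * math.log(lam2 / lam1)
    return num / den

# demo: synthesize Wien-law intensities at a known temperature
T_true = 1000.0
lam1, lam2 = 1.0e-6, 1.6e-6
i1 = lam1 ** -5 * math.exp(-C2 / (lam1 * T_true))
i2 = lam2 ** -5 * math.exp(-C2 / (lam2 * T_true))
```

Working from a ratio rather than an absolute intensity is what makes the measurement robust to the unknown emissivity of the chip and tool surfaces.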

  15. Design and calibration of a scanning tunneling microscope for large machined surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grigg, D.A.; Russell, P.E.; Dow, T.A.

During the last year the large sample STM has been designed, built and used for the observation of several different samples. Calibration of the scanner for proper dimensional interpretation of surface features has been a chief concern, as well as corrections for non-linear effects such as hysteresis during scans. Several procedures used in the calibration and correction of the piezoelectric scanners used in the laboratory's STMs are described.

  16. Low-cost precision rotary index calibration

    NASA Astrophysics Data System (ADS)

    Ng, T. W.; Lim, T. S.

    2005-08-01

The traditional method for calibrating the angular indexing repeatability of rotary axes on machine tools and measuring equipment is with a precision polygon (usually 12-sided) and an autocollimator or angular interferometer. Such a setup is typically expensive. Here, we propose a far more cost-effective approach that uses just a laser, a diffractive optical element, and a CCD camera. We show that significantly high accuracy can be achieved for angular index calibration.

  17. Force Measurement Services at Kebs: AN Overview of Equipment, Procedures and Uncertainty

    NASA Astrophysics Data System (ADS)

    Bangi, J. O.; Maranga, S. M.; Nganga, S. P.; Mutuli, S. M.

This paper describes the facilities, instrumentation and procedures currently used in the force laboratory at the Kenya Bureau of Standards (KEBS) for force measurement services. The laboratory uses the Force Calibration Machine (FCM) to calibrate force-measuring instruments. The FCM derives its traceability via comparisons using reference transfer force transducers calibrated by the Force Standard Machines (FSM) of a National Metrology Institute (NMI). The force laboratory is accredited to ISO/IEC 17025 by the German accreditation body (DAkkS). The accredited measurement scope of the laboratory extends to 1 MN for the calibration of force transducers in both compression and tension modes. ISO 376 procedures are used when calibrating force transducers. The KEBS reference transfer standards have capacities of 10, 50, 300 and 1000 kN to cover the full range of the FCM. The uncertainty in the forces measured by the FCM was reviewed and determined in accordance with the new EURAMET calibration guide. The relative expanded uncertainty of force W realized by the FCM was evaluated over the range 10 kN to 1 MN and found to be 5.0 × 10-4 with a coverage factor k equal to 2. The overall normalized error (En) of the comparison results was also found to be less than 1. The accredited Calibration and Measurement Capability (CMC) of the KEBS force laboratory was based on the results of those intercomparisons. The FCM enables KEBS to provide traceability for the calibration of class ‘1’ force instruments as per ISO 376.
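The quoted relative expanded uncertainty and normalized error follow textbook formulas; the component values in the sketch below are illustrative, not the KEBS uncertainty budget:

```python
import math

def expanded_uncertainty(rel_components, k=2.0):
    """Relative expanded uncertainty: combine relative standard-uncertainty
    components in quadrature and multiply by the coverage factor k."""
    return k * math.sqrt(sum(c * c for c in rel_components))

def en_number(x_lab, x_ref, U_lab, U_ref):
    """Normalized error of a comparison result; |En| < 1 indicates
    agreement within the two laboratories' expanded uncertainties."""
    return abs(x_lab - x_ref) / math.sqrt(U_lab ** 2 + U_ref ** 2)
```

With k = 2 the expanded uncertainty corresponds to a coverage probability of roughly 95% for a normal distribution, which is the convention behind the quoted W = 5.0 × 10⁻⁴.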

  18. Calibrated fMRI in the Medial Temporal Lobe During a Memory Encoding Task

    PubMed Central

    Restom, Khaled; Perthen, Joanna E.; Liu, Thomas T.

    2008-01-01

Prior measures of the blood oxygenation level dependent (BOLD) and cerebral blood flow (CBF) responses to a memory encoding task within the medial temporal lobe have suggested that the coupling between functional changes in CBF and changes in the cerebral metabolic rate of oxygen (CMRO2) may be tighter in the medial temporal lobe as compared to the primary sensory areas. In this study, we used a calibrated functional magnetic resonance imaging (fMRI) approach to directly estimate memory-encoding-related changes in CMRO2 and to assess the coupling between CBF and CMRO2 in the medial temporal lobe. The CBF-CMRO2 coupling ratio was estimated using a linear fit to the flow and metabolism changes observed across subjects. In addition, we examined the effect of region-of-interest (ROI) selection on the estimates. In response to the memory encoding task, CMRO2 increased by 23.1% ± 8.8 to 25.3% ± 5.7 (depending upon ROI), with an estimated CBF-CMRO2 coupling ratio of 1.66 ± 0.07 to 1.75 ± 0.16. There was not a significant effect of ROI selection on either the CMRO2 or coupling ratio estimates. The observed coupling ratios were significantly lower than the values (2 to 4.5) that have been reported in previous calibrated fMRI studies of the visual and motor cortices. In addition, the estimated coupling ratio was found to be less sensitive to the calibration procedure for functional responses in the medial temporal lobe as compared to the primary sensory areas. PMID:18329291
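The across-subject coupling-ratio estimate can be sketched as a least-squares slope; the fit-through-origin form and the numbers below are simplifying assumptions for illustration, not the study's exact fitting procedure or data:

```python
def coupling_ratio(dcbf_pct, dcmro2_pct):
    """Estimate the CBF:CMRO2 coupling ratio n as the least-squares slope
    (through the origin) of percent CBF change against percent CMRO2
    change across subjects."""
    num = sum(f * m for f, m in zip(dcbf_pct, dcmro2_pct))
    den = sum(m * m for m in dcmro2_pct)
    return num / den

# illustrative per-subject percent changes (not the study's data)
n = coupling_ratio([40.0, 50.0, 60.0], [20.0, 25.0, 30.0])
```

A smaller slope, as found here for the medial temporal lobe, means a given metabolic change is accompanied by a proportionally smaller flow change than in visual or motor cortex.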

  19. Identifying autism from neural representations of social interactions: neurocognitive markers of autism.

    PubMed

    Just, Marcel Adam; Cherkassky, Vladimir L; Buchweitz, Augusto; Keller, Timothy A; Mitchell, Tom M

    2014-01-01

    Autism is a psychiatric/neurological condition in which alterations in social interaction (among other symptoms) are diagnosed by behavioral psychiatric methods. The main goal of this study was to determine how the neural representations and meanings of social concepts (such as to insult) are altered in autism. A second goal was to determine whether these alterations can serve as neurocognitive markers of autism. The approach is based on previous advances in fMRI analysis methods that permit (a) the identification of a concept, such as the thought of a physical object, from its fMRI pattern, and (b) the ability to assess the semantic content of a concept from its fMRI pattern. These factor analysis and machine learning methods were applied to the fMRI activation patterns of 17 adults with high-functioning autism and matched controls, scanned while thinking about 16 social interactions. One prominent neural representation factor that emerged (manifested mainly in posterior midline regions) was related to self-representation, but this factor was present only for the control participants, and was near-absent in the autism group. Moreover, machine learning algorithms classified individuals as autistic or control with 97% accuracy from their fMRI neurocognitive markers. The findings suggest that psychiatric alterations of thought can begin to be biologically understood by assessing the form and content of the altered thought's underlying brain activation patterns.

  20. Identifying Autism from Neural Representations of Social Interactions: Neurocognitive Markers of Autism

    PubMed Central

    Just, Marcel Adam; Cherkassky, Vladimir L.; Buchweitz, Augusto; Keller, Timothy A.; Mitchell, Tom M.

    2014-01-01

    Autism is a psychiatric/neurological condition in which alterations in social interaction (among other symptoms) are diagnosed by behavioral psychiatric methods. The main goal of this study was to determine how the neural representations and meanings of social concepts (such as to insult) are altered in autism. A second goal was to determine whether these alterations can serve as neurocognitive markers of autism. The approach is based on previous advances in fMRI analysis methods that permit (a) the identification of a concept, such as the thought of a physical object, from its fMRI pattern, and (b) the ability to assess the semantic content of a concept from its fMRI pattern. These factor analysis and machine learning methods were applied to the fMRI activation patterns of 17 adults with high-functioning autism and matched controls, scanned while thinking about 16 social interactions. One prominent neural representation factor that emerged (manifested mainly in posterior midline regions) was related to self-representation, but this factor was present only for the control participants, and was near-absent in the autism group. Moreover, machine learning algorithms classified individuals as autistic or control with 97% accuracy from their fMRI neurocognitive markers. The findings suggest that psychiatric alterations of thought can begin to be biologically understood by assessing the form and content of the altered thought’s underlying brain activation patterns. PMID:25461818

  1. SU-F-P-49: Comparison of Mapcheck 2 Commission for Photon and Electron Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, J; Yang, C; Morris, B

    2016-06-15

    Purpose: We investigated the performance variation of the MapCheck2 detector array with different array calibration and dose calibration pairs from different radiation therapy machines. Methods: A MapCheck2 detector array was calibrated on 3 Elekta accelerators with photon beams of different energies (6 MV, 10 MV, 15 MV and 18 MV) and electron beams (6 MeV, 9 MeV, 12 MeV, 15 MeV, 18 MeV and 20 MeV). Dose calibration was conducted by reference to a water phantom measurement following the TG-51 protocol and the commissioning data for each accelerator. A 10 cm × 10 cm beam was measured. The measured map was morphed by applying different calibration pairs. The difference was then quantified by comparing the doses and assessing similarity with gamma analysis using (0.5%, 0 mm) criteria. Profile variation was evaluated on the same dataset with different calibration pairs. The passing rate of an IMRT QA planar dose was calculated using 3 mm and 3% criteria and compared for each calibration pair. Results: In this study, a dose variation of up to 0.67% for matched photon beams and 1.0% for electron beams was observed. Differences in flatness and symmetry were as high as 1% and 0.7%, respectively. Gamma analysis showed passing rates ranging from 34% to 85% for the standard 10 × 10 cm field. Conclusion: Our work demonstrated that a customized array calibration and dose calibration for each machine is preferred to meet a high standard of patient QA.
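The gamma analysis used above can be sketched for a 1-D profile. Note that with a 0 mm distance-to-agreement, as in the (0.5%, 0 mm) criterion, the gamma test reduces to a pure dose-difference comparison; the sketch below uses the more common global 3%/3 mm criteria, on made-up profile data (this is a hypothetical helper, not the vendor's software):

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, positions, dose_crit=0.03, dta_mm=3.0):
    # Global 1-D gamma analysis: an evaluated point passes when some
    # reference point lies within the combined dose-difference /
    # distance-to-agreement ellipsoid (gamma <= 1).
    d_max = dose_ref.max()
    passed = []
    for x_e, d_e in zip(positions, dose_eval):
        dd = (dose_ref - d_e) / (dose_crit * d_max)   # dose axis
        dx = (positions - x_e) / dta_mm               # distance axis
        passed.append(np.sqrt(dd**2 + dx**2).min() <= 1.0)
    return float(np.mean(passed))

x = np.linspace(-50.0, 50.0, 101)        # detector positions (mm)
ref = np.exp(-(x / 40.0) ** 4)           # idealized 10 x 10 cm profile shape
ev = 1.01 * ref                          # evaluated profile with a 1% offset
print(gamma_pass_rate(ref, ev, x))       # 1% offset passes 3%/3 mm everywhere
```

Tightening `dose_crit` toward 0.005 and `dta_mm` toward 0 approaches the (0.5%, 0 mm) comparison used in the study (with `dta_mm=0` excluded, since the distance term is undefined there).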

  2. The prototype of high stiffness load cell for Rockwell hardness testing machine calibration according to ISO 6508-2:2015

    NASA Astrophysics Data System (ADS)

    Pakkratoke, M.; Sanponpute, T.

    2017-09-01

    The penetration depth of a Rockwell hardness testing machine is normally not more than 0.260 mm. A commercial load cell cannot achieve the required force calibration according to ISO 6508-2 [1]. For this reason, the high stiffness load cell (HSL) was fabricated. Its obvious advantage is a deformation of less than 0.020 mm under the maximum applied load of 150 kgf. The HSL prototype was designed around the concept of direct compression and then confirmed with finite element analysis (FEA). The results showed that the maximum deformation was lower than 0.012 mm at capacity.

  3. SU-F-J-166: Volumetric Spatial Distortions Comparison for 1.5 Tesla Versus 3 Tesla MRI for Gamma Knife Radiosurgery Scans Using Frame Marker Fusion and Co-Registration Modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neyman, G

    Purpose: To compare typical volumetric spatial distortions of 1.5 Tesla versus 3 Tesla MRI Gamma Knife radiosurgery scans in frame marker fusion and frameless co-registration modes. Methods: A Quasar phantom by Modus Medical Devices Inc. with GRID image distortion software was used to measure volumetric distortions. 3D volumetric T1-weighted scans of the phantom were produced on 1.5 T Avanto and 3 T Skyra Siemens MRI scanners. The analysis was done two ways: for scans with localizer markers from the Leksell frame, and relative to the phantom only (simulated co-registration technique). The phantom grid contained a total of 2002 vertices or control points that were used in the assessment of volumetric geometric distortion for all scans. Results: Volumetric mean absolute spatial deviations relative to the frame localizer markers for the 1.5 and 3 Tesla machines were 1.39 ± 0.15 and 1.63 ± 0.28 mm, with maximum errors of 1.86 and 2.65 mm, respectively. Mean 2D errors from the Gamma Plan were 0.3 and 1.0 mm. For the simulated co-registration technique, the volumetric mean absolute spatial deviations relative to the phantom for the 1.5 and 3 Tesla machines were 0.36 ± 0.08 and 0.62 ± 0.13 mm, with maximum errors of 0.57 and 1.22 mm, respectively. Conclusion: Volumetric spatial distortions are lower for 1.5 Tesla than for 3 Tesla MRI machines localized with markers on frames, and significantly lower for co-registration techniques with no frame localization. The results show the advantage of using the co-registration technique for minimizing MRI volumetric spatial distortions, which can be especially important for the steep dose gradient fields typically used in Gamma Knife radiosurgery. Consultant for Elekta AB.

  4. Calibration Device Designed for proof ring used in SCC Experiment

    NASA Astrophysics Data System (ADS)

    Hu, X. Y.; Kang, Z. Y.; Yu, Y. L.

    2017-11-01

    In this paper, a calibration device for the proof rings used in SCC (stress corrosion cracking) experiments was designed. A compact loading device was developed to replace a traditional force standard machine or a long screw nut. The deformation of the proof ring was measured by a CCD (charge-coupled device) during calibration instead of a digital caliper or a dial gauge. Laboratory verification showed that the precision of force loading is ±0.1% and the precision of deformation measurement is ±0.002 mm.

  5. Automated Segmentation of the Parotid Gland Based on Atlas Registration and Machine Learning: A Longitudinal MRI Study in Head-and-Neck Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaofeng; Wu, Ning; Cheng, Guanghui

    Purpose: To develop an automated magnetic resonance imaging (MRI) parotid segmentation method to monitor radiation-induced parotid gland changes in patients after head and neck radiation therapy (RT). Methods and Materials: The proposed method combines the atlas registration method, which captures the global variation of anatomy, with a machine learning technology, which captures the local statistical features, to automatically segment the parotid glands from the MRIs. The segmentation method consists of 3 major steps. First, an atlas (pre-RT MRI and manually contoured parotid gland mask) is built for each patient. A hybrid deformable image registration is used to map the pre-RT MRI to the post-RT MRI, and the transformation is applied to the pre-RT parotid volume. Second, the kernel support vector machine (SVM) is trained with the subject-specific atlas pair consisting of multiple features (intensity, gradient, and others) from the aligned pre-RT MRI and the transformed parotid volume. Third, the well-trained kernel SVM is used to differentiate the parotid from surrounding tissues in the post-RT MRIs by statistically matching multiple texture features. A longitudinal study of 15 patients undergoing head and neck RT was conducted: baseline MRI was acquired prior to RT, and the post-RT MRIs were acquired at 3-, 6-, and 12-month follow-up examinations. The resulting segmentations were compared with the physicians' manual contours. Results: Successful parotid segmentation was achieved for all 15 patients (42 post-RT MRIs). The average percentage volume difference between the automated segmentations and the physicians' manual contours was 7.98% for the left parotid and 8.12% for the right parotid. The average volume overlap was 91.1% ± 1.6% for the left parotid and 90.5% ± 2.4% for the right parotid. The parotid gland volume reduction at follow-up was 25% at 3 months, 27% at 6 months, and 16% at 12 months.
Conclusions: We have validated our automated parotid segmentation algorithm in a longitudinal study. This segmentation method may be useful in future studies to address radiation-induced xerostomia in head and neck radiation therapy.
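A "volume overlap" between an automated and a manual contour is often reported as a Dice coefficient; the study's exact metric is not stated in the abstract, so the sketch below simply illustrates a Dice-style overlap on small synthetic masks:

```python
import numpy as np

def dice_overlap(mask_a, mask_b):
    # Dice similarity between two binary segmentation masks, in percent:
    # 200 * |A ∩ B| / (|A| + |B|). One common definition of volume overlap.
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 200.0 * inter / (a.sum() + b.sum())

# Two overlapping synthetic "parotid" masks on a 20^3 voxel grid,
# offset by one voxel along the first axis.
auto = np.zeros((20, 20, 20), bool); auto[5:15, 5:15, 5:15] = True
manual = np.zeros((20, 20, 20), bool); manual[6:16, 5:15, 5:15] = True
print(round(dice_overlap(auto, manual), 1))  # → 90.0
```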

  6. Articulated Arm Coordinate Measuring Machine Calibration by Laser Tracker Multilateration

    PubMed Central

    Majarena, Ana C.; Brau, Agustín; Velázquez, Jesús

    2014-01-01

    A new procedure for the calibration of an articulated arm coordinate measuring machine (AACMM) is presented in this paper. First, a self-calibration algorithm of four laser trackers (LTs) is developed. The spatial localization of a retroreflector target, placed in different positions within the workspace, is determined by means of a geometric multilateration system constructed from the four LTs. Next, a nonlinear optimization algorithm for the identification procedure of the AACMM is explained. An objective function based on Euclidean distances and standard deviations is developed. This function is obtained from the captured nominal data (given by the LTs used as a gauge instrument) and the data obtained by the AACMM, and compares the measured and calculated coordinates of the target to obtain the identified model parameters that minimize this difference. Finally, results show that the procedure presented, using the measurements of the LTs as a gauge instrument, is very effective, improving the AACMM precision. PMID:24688418
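The multilateration step above (locating a retroreflector from range measurements to four laser trackers) can be sketched by linearizing the range equations into a least-squares problem. Tracker positions and the target below are made-up values, and the ranges are noiseless for clarity:

```python
import numpy as np

# Hypothetical laser-tracker positions (mm) acting as the gauge instrument,
# and a made-up retroreflector target.
trackers = np.array([[0.0, 0.0, 0.0],
                     [2000.0, 0.0, 0.0],
                     [0.0, 2000.0, 0.0],
                     [0.0, 0.0, 2000.0]])
target_true = np.array([500.0, 700.0, 300.0])
r = np.linalg.norm(trackers - target_true, axis=1)   # measured ranges

# Subtracting the first range equation removes the quadratic |x|^2 term:
#   2 (p_i - p_0) . x = r_0^2 - r_i^2 + |p_i|^2 - |p_0|^2
A = 2.0 * (trackers[1:] - trackers[0])
b = (r[0] ** 2 - r[1:] ** 2
     + np.sum(trackers[1:] ** 2, axis=1) - np.sum(trackers[0] ** 2))
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x_hat, 3))  # recovers the target position
```

With noisy ranges from more than four trackers, the same linearized system is simply taller and the least-squares solution averages out the noise.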

  7. Advice Taking from Humans and Machines: An fMRI and Effective Connectivity Study.

    PubMed

    Goodyear, Kimberly; Parasuraman, Raja; Chernyak, Sergey; Madhavan, Poornima; Deshpande, Gopikrishna; Krueger, Frank

    2016-01-01

    With new technological advances, advice can come from different sources such as machines or humans, but how individuals respond to such advice and the neural correlates involved need to be better understood. We combined functional MRI and multivariate Granger causality analysis with an X-ray luggage-screening task to investigate the neural basis and corresponding effective connectivity involved with advice utilization from agents framed as experts. Participants were asked to accept or reject good or bad advice from a human or machine agent with low reliability (high false alarm rate). We showed that unreliable advice decreased performance overall and participants interacting with the human agent had a greater depreciation of advice utilization during bad advice compared to the machine agent. These differences in advice utilization may be attributable to reevaluation of expectations arising from the association of dispositional credibility with each agent. We demonstrated that differences in advice utilization engaged brain regions that may be associated with evaluation of personal characteristics and traits (precuneus, posterior cingulate cortex, temporoparietal junction) and interoception (posterior insula). We found that the right posterior insula and left precuneus were the drivers of the advice utilization network; they were reciprocally connected to each other and also projected to all other regions. Our behavioral and neuroimaging results have significant implications for society, given the progression of technology and increased interaction with machines.

  8. Advice Taking from Humans and Machines: An fMRI and Effective Connectivity Study

    PubMed Central

    Goodyear, Kimberly; Parasuraman, Raja; Chernyak, Sergey; Madhavan, Poornima; Deshpande, Gopikrishna; Krueger, Frank

    2016-01-01

    With new technological advances, advice can come from different sources such as machines or humans, but how individuals respond to such advice and the neural correlates involved need to be better understood. We combined functional MRI and multivariate Granger causality analysis with an X-ray luggage-screening task to investigate the neural basis and corresponding effective connectivity involved with advice utilization from agents framed as experts. Participants were asked to accept or reject good or bad advice from a human or machine agent with low reliability (high false alarm rate). We showed that unreliable advice decreased performance overall and participants interacting with the human agent had a greater depreciation of advice utilization during bad advice compared to the machine agent. These differences in advice utilization may be attributable to reevaluation of expectations arising from the association of dispositional credibility with each agent. We demonstrated that differences in advice utilization engaged brain regions that may be associated with evaluation of personal characteristics and traits (precuneus, posterior cingulate cortex, temporoparietal junction) and interoception (posterior insula). We found that the right posterior insula and left precuneus were the drivers of the advice utilization network; they were reciprocally connected to each other and also projected to all other regions. Our behavioral and neuroimaging results have significant implications for society, given the progression of technology and increased interaction with machines. PMID:27867351

  9. A SYSTEMS APPROACH UTILIZING GENERAL-PURPOSE AND SPECIAL-PURPOSE TEACHING MACHINES.

    ERIC Educational Resources Information Center

    SILVERN, LEONARD C.

    In order to improve the employee training-evaluation method, teaching machines and performance aids must be physically and operationally integrated into the system, thus returning training to the actual job environment. Given these conditions, training can be measured, calibrated, and controlled with respect to actual job performance standards and…

  10. Design and Analysis of a Sensor System for Cutting Force Measurement in Machining Processes

    PubMed Central

    Liang, Qiaokang; Zhang, Dan; Coppola, Gianmarc; Mao, Jianxu; Sun, Wei; Wang, Yaonan; Ge, Yunjian

    2016-01-01

    Multi-component force sensors have infiltrated a wide variety of automation products since the 1970s. However, one seldom finds full-component sensor systems available in the market for cutting force measurement in machining processes. In this paper, a new six-component sensor system with a compact monolithic elastic element (EE) is designed and developed to detect the tangential cutting forces Fx, Fy and Fz (i.e., forces along x-, y-, and z-axis) as well as the cutting moments Mx, My and Mz (i.e., moments about x-, y-, and z-axis) simultaneously. Optimal structural parameters of the EE are carefully designed via simulation-driven optimization. Moreover, a prototype sensor system is fabricated, which is applied to a 5-axis parallel kinematic machining center. Calibration experimental results demonstrate that the system is capable of measuring cutting forces and moments with good linearity while minimizing coupling error. Both the Finite Element Analysis (FEA) and calibration experimental studies validate the high performance of the proposed sensor system that is expected to be adopted into machining processes. PMID:26751451

  11. Design and Analysis of a Sensor System for Cutting Force Measurement in Machining Processes.

    PubMed

    Liang, Qiaokang; Zhang, Dan; Coppola, Gianmarc; Mao, Jianxu; Sun, Wei; Wang, Yaonan; Ge, Yunjian

    2016-01-07

    Multi-component force sensors have infiltrated a wide variety of automation products since the 1970s. However, one seldom finds full-component sensor systems available in the market for cutting force measurement in machining processes. In this paper, a new six-component sensor system with a compact monolithic elastic element (EE) is designed and developed to detect the tangential cutting forces Fx, Fy and Fz (i.e., forces along x-, y-, and z-axis) as well as the cutting moments Mx, My and Mz (i.e., moments about x-, y-, and z-axis) simultaneously. Optimal structural parameters of the EE are carefully designed via simulation-driven optimization. Moreover, a prototype sensor system is fabricated, which is applied to a 5-axis parallel kinematic machining center. Calibration experimental results demonstrate that the system is capable of measuring cutting forces and moments with good linearity while minimizing coupling error. Both the Finite Element Analysis (FEA) and calibration experimental studies validate the high performance of the proposed sensor system that is expected to be adopted into machining processes.

  12. Prediction of brain maturity in infants using machine-learning algorithms.

    PubMed

    Smyser, Christopher D; Dosenbach, Nico U F; Smyser, Tara A; Snyder, Abraham Z; Rogers, Cynthia E; Inder, Terrie E; Schlaggar, Bradley L; Neil, Jeffrey J

    2016-08-01

    Recent resting-state functional MRI investigations have demonstrated that much of the large-scale functional network architecture supporting motor, sensory and cognitive functions in older pediatric and adult populations is present in term- and prematurely-born infants. Application of new analytical approaches can help translate the improved understanding of early functional connectivity provided through these studies into predictive models of neurodevelopmental outcome. One approach to achieving this goal is multivariate pattern analysis, a machine-learning, pattern classification approach well-suited for high-dimensional neuroimaging data. It has previously been adapted to predict brain maturity in children and adolescents using structural and resting state-functional MRI data. In this study, we evaluated resting state-functional MRI data from 50 preterm-born infants (born at 23-29 weeks of gestation and without moderate-severe brain injury) scanned at term equivalent postmenstrual age compared with data from 50 term-born control infants studied within the first week of life. Using 214 regions of interest, binary support vector machines distinguished term from preterm infants with 84% accuracy (p<0.0001). Inter- and intra-hemispheric connections throughout the brain were important for group categorization, indicating that widespread changes in the brain's functional network architecture associated with preterm birth are detectable by term equivalent age. Support vector regression enabled quantitative estimation of birth gestational age in single subjects using only term equivalent resting state-functional MRI data, indicating that the present approach is sensitive to the degree of disruption of brain development associated with preterm birth (using gestational age as a surrogate for the extent of disruption). This suggests that support vector regression may provide a means for predicting neurodevelopmental outcome in individual infants.
Copyright © 2016 Elsevier Inc. All rights reserved.
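The support vector regression step described above (estimating birth gestational age from resting-state connectivity features) can be sketched with scikit-learn on synthetic data. The feature model below is a made-up stand-in, not the study's connectivity matrices:

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic stand-in for the study's data: each "infant" is a vector of
# connectivity features whose mean shifts with gestational age (assumption).
rng = np.random.default_rng(42)
n, p = 100, 50
ga = rng.uniform(23, 42, size=n)                  # gestational age (weeks)
X = rng.normal(0, 1, (n, p)) + 0.1 * ga[:, None]  # noisy age-linked features

# Train on 80 subjects, predict age for 20 held-out subjects
model = SVR(kernel="linear", C=1.0)
model.fit(X[:80], ga[:80])
pred = model.predict(X[80:])
mae = np.mean(np.abs(pred - ga[80:]))
print(round(mae, 2))  # mean absolute error in weeks
```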

  13. Prediction of brain maturity in infants using machine-learning algorithms

    PubMed Central

    Smyser, Christopher D.; Dosenbach, Nico U.F.; Smyser, Tara A.; Snyder, Abraham Z.; Rogers, Cynthia E.; Inder, Terrie E.; Schlaggar, Bradley L.; Neil, Jeffrey J.

    2016-01-01

    Recent resting-state functional MRI investigations have demonstrated that much of the large-scale functional network architecture supporting motor, sensory and cognitive functions in older pediatric and adult populations is present in term- and prematurely-born infants. Application of new analytical approaches can help translate the improved understanding of early functional connectivity provided through these studies into predictive models of neurodevelopmental outcome. One approach to achieving this goal is multivariate pattern analysis, a machine-learning, pattern classification approach well-suited for high-dimensional neuroimaging data. It has previously been adapted to predict brain maturity in children and adolescents using structural and resting state-functional MRI data. In this study, we evaluated resting state-functional MRI data from 50 preterm-born infants (born at 23–29 weeks of gestation and without moderate–severe brain injury) scanned at term equivalent postmenstrual age compared with data from 50 term-born control infants studied within the first week of life. Using 214 regions of interest, binary support vector machines distinguished term from preterm infants with 84% accuracy (p < 0.0001). Inter- and intra-hemispheric connections throughout the brain were important for group categorization, indicating that widespread changes in the brain's functional network architecture associated with preterm birth are detectable by term equivalent age. Support vector regression enabled quantitative estimation of birth gestational age in single subjects using only term equivalent resting state-functional MRI data, indicating that the present approach is sensitive to the degree of disruption of brain development associated with preterm birth (using gestational age as a surrogate for the extent of disruption). This suggests that support vector regression may provide a means for predicting neurodevelopmental outcome in individual infants. PMID:27179605

  14. Application of advanced machine learning methods on resting-state fMRI network for identification of mild cognitive impairment and Alzheimer's disease.

    PubMed

    Khazaee, Ali; Ebrahimzadeh, Ata; Babajani-Feremi, Abbas

    2016-09-01

    The study of brain networks by resting-state functional magnetic resonance imaging (rs-fMRI) is a promising method for identifying patients with dementia from healthy controls (HC). Using graph theory, different aspects of the brain network can be efficiently characterized by calculating measures of integration and segregation. In this study, we combined a graph theoretical approach with advanced machine learning methods to study the brain network in 89 patients with mild cognitive impairment (MCI), 34 patients with Alzheimer's disease (AD), and 45 age-matched HC. The rs-fMRI connectivity matrix was constructed using a brain parcellation based on 264 putative functional areas. Using the optimal features extracted from the graph measures, we were able to accurately classify the three groups (i.e., HC, MCI, and AD) with an accuracy of 88.4%. We also investigated the performance of our proposed method for binary classification of one group (e.g., MCI) from the two other groups (e.g., HC and AD). The classification accuracies for identifying HC from AD and MCI, AD from HC and MCI, and MCI from HC and AD were 87.3%, 97.5%, and 72.0%, respectively. In addition, results based on the parcellation of 264 regions were compared to those of the automated anatomical labeling (AAL) atlas, consisting of 90 regions. The accuracy of classification of the three groups using AAL degraded to 83.2%. Our results show that combining the graph measures with the machine learning approach, on the basis of the rs-fMRI connectivity analysis, may assist in the diagnosis of AD and MCI.
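The pipeline above (graph measures as features, then classification) can be sketched on synthetic graphs. The edge densities, group sizes, and graph measures below are illustrative assumptions, not the study's data or feature set:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)

def graph_features(p_edge, n_nodes=30):
    # Degree (an integration proxy) and clustering coefficient (a
    # segregation proxy) of a random binary graph, standing in for a
    # thresholded rs-fMRI connectivity matrix.
    A = (rng.random((n_nodes, n_nodes)) < p_edge).astype(float)
    A = np.triu(A, 1); A = A + A.T                 # symmetric, no self-loops
    k = A.sum(axis=1)                              # node degrees
    tri = np.diag(A @ A @ A) / 2.0                 # triangles per node
    denom = np.maximum(k * (k - 1) / 2.0, 1.0)
    c = tri / denom                                # clustering coefficient
    return np.concatenate([k, c])

# Two hypothetical groups differing in connection density
X = np.array([graph_features(0.25) for _ in range(40)]
             + [graph_features(0.35) for _ in range(40)])
y = np.array([0] * 40 + [1] * 40)
acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
print(round(acc, 3))
```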

  15. INFLUENCE OF IRON CHELATION ON R1 AND R2 CALIBRATION CURVES IN GERBIL LIVER AND HEART

    PubMed Central

    Wood, John C.; Aguilar, Michelle; Otto-Duessel, Maya; Nick, Hanspeter; Nelson, Marvin D.; Moats, Rex

    2008-01-01

    MRI is gaining increasing importance for the noninvasive quantification of organ iron burden. Since transverse relaxation rates depend on iron distribution as well as iron concentration, physiologic and pharmacologic processes that alter iron distribution could change MRI calibration curves. This paper compares the effect of three iron chelators, deferoxamine, deferiprone, and deferasirox, on R1 and R2 calibration curves according to two iron loading and chelation strategies. 33 Mongolian gerbils underwent iron loading (iron dextran 500 mg/kg/week) for 3 weeks followed by 4 weeks of chelation. An additional 56 animals received less aggressive loading (200 mg/kg/week) for 10 weeks, followed by 12 weeks of chelation. R1 and R2 calibration curves were compared to results from 23 iron-loaded animals that had not received chelation. Acute iron loading and chelation biased R1 and R2 from the unchelated reference calibration curves, but chelator-specific changes were not observed, suggesting physiologic rather than pharmacologic differences in iron distribution. Long-term deferiprone treatment increased liver R1 by 50% (p<0.01), while long-term deferasirox lowered liver R2 by 30.9% (p<0.0001). The relationship between R1 and R2 and organ iron concentration may depend upon the acuity of iron loading and unloading as well as the iron chelator administered. PMID:18581418

  16. Coupling machine learning with mechanistic models to study runoff production and river flow at the hillslope scale

    NASA Astrophysics Data System (ADS)

    Marçais, J.; Gupta, H. V.; De Dreuzy, J. R.; Troch, P. A. A.

    2016-12-01

    Geomorphological structure and geological heterogeneity of hillslopes are major controls on runoff responses. The diversity of hillslopes (morphological shapes and geological structures) on one hand, and the highly nonlinear runoff response mechanism on the other, make it difficult to transpose what has been learned at one specific hillslope to another. Therefore, making reliable predictions of runoff generation or river flow for a given hillslope is a challenge. Classic model calibration (based on inverse-problem techniques) must be repeated for each specific hillslope and requires calibration data; applied to thousands of cases, it is not always practical. Here we propose a novel modeling framework that couples process-based models with a data-based approach. First, we develop a mechanistic model, based on the hillslope-storage Boussinesq equations (Troch et al. 2003), able to model nonlinear runoff responses to rainfall at the hillslope scale. Second, we set up a model database representing thousands of uncalibrated simulations. These simulations investigate different hillslope shapes (real ones obtained by analyzing a 5 m digital elevation model of Brittany, and synthetic ones), different hillslope geological structures (i.e., different parametrizations), and different hydrologic forcing terms (i.e., different infiltration chronicles). We then use this model library to train a machine learning model on the physically based database. Machine learning model performance is assessed by a classic validation phase (testing the model on new hillslopes and comparing machine learning outputs with mechanistic ones). Finally, we use the machine learning model to learn which hillslope properties control runoff. This methodology will be further tested by combining synthetic datasets with real ones.
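The emulation idea above (train a data-driven model on a database of uncalibrated mechanistic simulations, then validate on held-out cases) can be sketched with a toy runoff function standing in for the hillslope-storage Boussinesq model. The function, parameters, and ranges below are all made-up illustrations:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def toy_hillslope_runoff(slope, conductivity, rain):
    # Toy stand-in for a mechanistic hillslope simulation (NOT the actual
    # hsB model): a nonlinear, threshold-like runoff response.
    storage = rain / (conductivity * (1.0 + slope))
    return np.maximum(storage - 0.5, 0.0) ** 1.5

# "Simulation database": thousands of parameter sets and their runoff
rng = np.random.default_rng(1)
n = 2000
params = np.column_stack([rng.uniform(0.01, 0.5, n),   # hillslope slope
                          rng.uniform(0.1, 2.0, n),    # conductivity
                          rng.uniform(0.0, 3.0, n)])   # rainfall forcing
runoff = toy_hillslope_runoff(*params.T)

# Train the data-driven emulator, then validate on held-out "hillslopes"
emu = RandomForestRegressor(n_estimators=100, random_state=0)
emu.fit(params[:1500], runoff[:1500])
r2 = emu.score(params[1500:], runoff[1500:])
print(round(r2, 3))
```

Feature importances from such an emulator (`emu.feature_importances_`) are one simple way to ask which hillslope properties control the simulated runoff.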

  17. Calibrator device for the extrusion of cable coatings

    NASA Astrophysics Data System (ADS)

    Garbacz, Tomasz; Dulebová, Ľudmila; Spišák, Emil; Dulebová, Martina

    2016-05-01

    This paper presents selected results of theoretical and experimental work on a new calibration device (calibrator) used to produce coatings of electric cables. The aim of this study is to present the design of the calibration equipment and a new calibration machine, which is an important element of the modernized extrusion lines for coating cables. As a result of the extrusion of PVC modified with blowing agents, an extrudate in the form of an electrical cable was obtained. The conditions of the extrusion process were properly selected, which made it possible to obtain a product with a solid external surface and a cellular core.

  18. Improvement of the repeatability of parallel transmission at 7T using interleaved acquisition in the calibration scan.

    PubMed

    Kameda, Hiroyuki; Kudo, Kohsuke; Matsuda, Tsuyoshi; Harada, Taisuke; Iwadate, Yuji; Uwano, Ikuko; Yamashita, Fumio; Yoshioka, Kunihiro; Sasaki, Makoto; Shirato, Hiroki

    2017-12-04

    Respiration-induced phase shift affects B0/B1+ mapping repeatability in parallel transmission (pTx) calibration for 7T brain MRI, but is improved by breath-holding (BH). However, BH cannot be applied during long scans. To examine whether interleaved acquisition during calibration scanning could improve pTx repeatability and image homogeneity. Prospective. Nine healthy subjects. 7T MRI with a two-channel RF transmission system was used. Calibration scanning for B0/B1+ mapping was performed under sequential acquisition/free-breathing (Seq-FB), Seq-BH, and interleaved acquisition/FB (Int-FB) conditions. The B0 map was calculated with two echo times, and the B1+ map was obtained using the Bloch-Siegert method. Actual flip-angle imaging (AFI) and gradient echo (GRE) imaging were performed using pTx and quadrature-Tx (qTx). All scans were acquired in five sessions. Repeatability was evaluated using intersession standard deviation (SD) or coefficient of variance (CV), and in-plane homogeneity was evaluated using in-plane CV. A paired t-test with Bonferroni correction for multiple comparisons was used. The intersession CV/SDs for the B0/B1+ maps were significantly smaller in Int-FB than in Seq-FB (Bonferroni-corrected P < 0.05 for all). The intersession CVs for the AFI and GRE images were also significantly smaller in Int-FB, Seq-BH, and qTx than in Seq-FB (Bonferroni-corrected P < 0.05 for all). The in-plane CVs for the AFI and GRE images in Seq-FB, Int-FB, and Seq-BH were significantly smaller than in qTx (Bonferroni-corrected P < 0.01 for all). Using interleaved acquisition during calibration scans of pTx for 7T brain MRI improved the repeatability of B0/B1+ mapping, AFI, and GRE images, without BH. Level of Evidence: 1. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017. © 2017 International Society for Magnetic Resonance in Medicine.

  19. Common component classification: what can we learn from machine learning?

    PubMed

    Anderson, Ariana; Labus, Jennifer S; Vianna, Eduardo P; Mayer, Emeran A; Cohen, Mark S

    2011-05-15

    Machine learning methods have been applied to classifying fMRI scans by studying locations in the brain that exhibit temporal intensity variation between groups, frequently reporting classification accuracy of 90% or better. Although empirical results are quite favorable, one might doubt the ability of classification methods to withstand changes in task ordering and the reproducibility of activation patterns over runs, and question how much of the classification machines' power is due to artifactual noise versus genuine neurological signal. To examine the true strength and power of machine learning classifiers we create and then deconstruct a classifier to examine its sensitivity to physiological noise, task reordering, and across-scan classification ability. The models are trained and tested both within and across runs to assess stability and reproducibility across conditions. We demonstrate the use of independent components analysis for both feature extraction and artifact removal and show that removal of such artifacts can reduce predictive accuracy even when data has been cleaned in the preprocessing stages. We demonstrate how mistakes in the feature selection process can cause the cross-validation error seen in publication to be a biased estimate of the testing error seen in practice and measure this bias by purposefully making flawed models. We discuss other ways to introduce bias and the statistical assumptions lying behind the data and model themselves. Finally we discuss the complications in drawing inference from the smaller sample sizes typically seen in fMRI studies, the effects of small or unbalanced samples on the Type 1 and Type 2 error rates, and how publication bias can give a false confidence of the power of such methods. Collectively this work identifies challenges specific to fMRI classification and methods affecting the stability of models. Copyright © 2010 Elsevier Inc. All rights reserved.
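The feature-selection pitfall described above (mistakes in feature selection making the reported cross-validation error a biased estimate of real-world testing error) can be demonstrated on pure-noise data, where any accuracy above chance is artifactual. Sample sizes and feature counts below are arbitrary illustrations:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2000))   # pure noise "voxels": no real signal
y = np.array([0, 1] * 20)         # labels carry no information about X

# WRONG: select the top features on the *full* dataset, then cross-validate.
# The selection has already peeked at the test folds.
idx = np.argsort(f_classif(X, y)[0])[-20:]
biased = cross_val_score(SVC(kernel="linear"), X[:, idx], y, cv=5).mean()

# RIGHT: refit the feature selection inside every cross-validation fold.
pipe = make_pipeline(SelectKBest(f_classif, k=20), SVC(kernel="linear"))
proper = cross_val_score(pipe, X, y, cv=5).mean()
print(round(biased, 2), round(proper, 2))  # biased is optimistic on noise
```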

  20. Neuroimaging biomarkers to predict treatment response in schizophrenia: the end of 30 years of solitude?

    PubMed

    Dazzan, Paola

    2014-12-01

Studies that have used structural magnetic resonance imaging (MRI) suggest that individuals with psychoses have brain alterations, particularly in frontal and temporal cortices and in the white matter tracts that connect them. Furthermore, these studies suggest that brain alterations may be particularly prominent, already at illness onset, in those individuals more likely to have poorer outcomes (eg, a higher number of hospital admissions, and poorer symptom remission, level of functioning, and response to the first treatment with antipsychotic drugs). The fact that, even when present, these brain alterations are subtle and distributed in nature has, until now, limited the utility of MRI in the clinical management of these disorders. More recently, analytic approaches to MRI data, such as machine learning, have suggested that these neuroanatomical biomarkers can be used for direct clinical benefit. For example, using a support vector machine, MRI data obtained at illness onset have been used to predict, with significant accuracy, whether a specific individual is likely to experience a remission of symptoms later in the course of the illness. Taken together, this evidence suggests that validated, robust neuroanatomical markers could be used not only to inform tailored intervention strategies in a single individual, but also to allow patient stratification in clinical trials of new treatments.

  1. A Non-Parametric Approach for the Activation Detection of Block Design fMRI Simulated Data Using Self-Organizing Maps and Support Vector Machine.

    PubMed

    Bahrami, Sheyda; Shamsi, Mousa

    2017-01-01

Functional magnetic resonance imaging (fMRI) is a popular method for probing the functional organization of the brain using hemodynamic responses. In this method, volume images of the entire brain are obtained with very good spatial resolution but low temporal resolution, and the resulting datasets are high-dimensional, which poses a challenge for classification algorithms. In this work, we combine a self-organizing map (SOM) for feature extraction and dataset labeling with a support vector machine (SVM) for feature-based classification; a linear-kernel SVM is then used to detect the active areas. The SOM offers two major advantages: (i) it reduces the dimensionality of the datasets, lowering computational complexity, and (ii) it is useful for identifying brain regions with small onset differences in their hemodynamic responses. Our non-parametric model is compared with parametric and non-parametric methods. We use simulated fMRI datasets with block design inputs, taking the contrast-to-noise ratio (CNR) to be 0.6; the simulated datasets have 1-4% contrast in active areas. The accuracy of our proposed method is 93.63% and the error rate is 6.37%.
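The SOM-for-feature-extraction idea described above can be sketched in a few lines of numpy. Everything here is illustrative: the synthetic data stand in for fMRI voxel time-series, the map size is arbitrary, and a nearest-centroid rule stands in for the paper's linear-kernel SVM.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for high-dimensional fMRI voxel time-series
X = rng.normal(size=(200, 50))
X[:100] += 1.0            # "active" class has a mean offset
y = np.array([1] * 100 + [0] * 100)

# --- Minimal 1-D self-organizing map: maps 50-D inputs onto K prototypes ---
K, epochs, lr0, sigma0 = 8, 20, 0.5, 2.0
W = rng.normal(size=(K, X.shape[1]))
for epoch in range(epochs):
    lr = lr0 * (1 - epoch / epochs)
    sigma = sigma0 * (1 - epoch / epochs) + 0.5
    for x in X[rng.permutation(len(X))]:
        bmu = np.argmin(np.linalg.norm(W - x, axis=1))   # best-matching unit
        h = np.exp(-((np.arange(K) - bmu) ** 2) / (2 * sigma ** 2))
        W += lr * h[:, None] * (x - W)                   # pull neighbours toward x

# Feature extraction: distance of each sample to every SOM prototype (K features)
F = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)

# Nearest-centroid classifier on the reduced features (stand-in for the SVM)
c1, c0 = F[y == 1].mean(axis=0), F[y == 0].mean(axis=0)
pred = (np.linalg.norm(F - c1, axis=1) < np.linalg.norm(F - c0, axis=1)).astype(int)
accuracy = (pred == y).mean()
print(F.shape, round(float(accuracy), 2))
```

The 50-dimensional inputs are compressed to 8 SOM-derived features before classification, which is the dimensionality-reduction role the abstract attributes to the SOM.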

  2. Magnetic resonance imaging using chemical exchange saturation transfer

    NASA Astrophysics Data System (ADS)

    Park, Jaeseok

    2012-10-01

Magnetic resonance imaging (MRI) has been widely used as a valuable diagnostic imaging modality that exploits water content and water relaxation properties to provide both structural and functional information with high resolution. Chemical exchange saturation transfer (CEST) in MRI has recently been introduced as a new mechanism of image contrast, wherein exchangeable protons from mobile proteins and peptides, which are not observable using conventional MRI, are indirectly detected through saturation transfer. It has been demonstrated that CEST MRI can detect important tissue metabolites and byproducts such as glucose, glycogen, and lactate. Additionally, CEST MRI is sensitive to pH and temperature and can probe the tissue microenvironment as a function of these parameters. In this work, we provide an overview of recent trends in CEST MRI, introducing the general principles of the CEST mechanism, a quantitative description of the proton transfer process between the water pool and the exchangeable solute pool in the presence or absence of the conventional magnetization transfer effect, and its applications.

  3. Temperature Measurement and Numerical Prediction in Machining Inconel 718

    PubMed Central

    Tapetado, Alberto; Vázquez, Carmen; Miguélez, Henar

    2017-01-01

Thermal issues are critical when machining Ni-based superalloy components designed for high-temperature applications. The low thermal conductivity and extreme strain hardening of this family of materials result in elevated temperatures around the cutting area. This elevated temperature can lead to machining-induced damage such as phase changes and residual stresses, reducing the service life of the component. Measurement of temperature during machining is therefore crucial for controlling the cutting process and avoiding workpiece damage. In addition, predictive tools based on numerical models help in defining machining processes and in estimating difficult-to-measure parameters such as the penetration of the heated layer. However, the validation of numerical models depends strongly on the accurate measurement of physical parameters such as temperature, which ensures the calibration of the model. This paper focuses on the measurement and prediction of temperature during the machining of Ni-based superalloys. The temperature sensor was a fiber-optic two-color pyrometer developed for localized temperature measurements in the turning of Inconel 718, capable of measuring temperatures in the range of 250 to 1200 °C. Temperature evolution was recorded in a lathe at different feed rates and cutting speeds, and the measurements were used to calibrate a simplified numerical model for predicting temperature fields during turning. PMID:28665312

  4. Decoder calibration with ultra small current sample set for intracortical brain-machine interface

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Ma, Xuan; Chen, Luyao; Zhou, Jin; Wang, Changyong; Li, Wei; He, Jiping

    2018-04-01

Objective. Intracortical brain-machine interfaces (iBMIs) aim to restore efficient communication and movement ability to paralyzed patients. However, frequent recalibration is required for consistency and reliability, and each recalibration demands a relatively large current sample set. The aim of this study is to develop an effective decoder calibration method that achieves good performance while minimizing recalibration time. Approach. Two rhesus macaques implanted with intracortical microelectrode arrays were trained separately on movement and sensory paradigms. Neural signals were recorded to decode reaching positions or grasping postures. A novel principal component analysis-based domain adaptation (PDA) method was proposed to recalibrate the decoder with only an ultra-small current sample set by taking advantage of large historical datasets, and its decoding performance was compared with three other calibration methods for evaluation. Main results. The PDA method effectively closed the gap between historical and current data, making it possible to exploit large historical datasets when recalibrating the decoder for current data decoding. Using only an ultra-small current sample set (five trials of each category), the decoder calibrated with the PDA method achieved much better and more robust performance in all sessions than with the three other calibration methods, in both monkeys. Significance. (1) This study brings transfer learning theory into iBMI decoder calibration for the first time. (2) Unlike most transfer learning studies, the target data here are an ultra-small sample set and are transferred to the source data. (3) By taking advantage of historical data, the PDA method was demonstrated to reduce recalibration time for both the movement and sensory paradigms, indicating viable generalization. By reducing the demand for large current training datasets, this new method may facilitate the application of intracortical brain-machine interfaces in clinical practice.
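The abstract does not specify the PDA algorithm in detail, but the general idea of PCA-based domain adaptation (learn a subspace from abundant historical data, then align the ultra-small current sample set to it) can be sketched roughly as follows. The dimensions, the simulated session shift, and the mean-shift alignment step are all illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Large historical session and an ultra-small current sample set (10 trials),
# with a simulated shift between sessions (e.g., electrode drift).
hist = rng.normal(size=(500, 96))               # 500 trials x 96 channels
curr = rng.normal(size=(10, 96)) + 0.8          # small, shifted current set

# PCA subspace learned from the abundant historical data
mu = hist.mean(axis=0)
_, _, Vt = np.linalg.svd(hist - mu, full_matrices=False)
components = Vt[:10]                            # top 10 principal directions

def project(X):
    # Center with historical statistics, then project onto the shared subspace
    return (X - mu) @ components.T

hist_lowd = project(hist)
curr_lowd = project(curr)

# Align the current data to the historical distribution in the low-D space,
# so a decoder trained on historical data can be reused
curr_aligned = curr_lowd - curr_lowd.mean(axis=0) + hist_lowd.mean(axis=0)
print(curr_aligned.shape)
```

After alignment, the few current trials live in the same low-dimensional coordinate frame as the historical data, which is what allows the historical data to carry most of the calibration burden.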

  5. An innovative method for coordinate measuring machine one-dimensional self-calibration with simplified experimental process.

    PubMed

    Fang, Cheng; Butler, David Lee

    2013-05-01

In this paper, an innovative method for CMM (Coordinate Measuring Machine) self-calibration is proposed. In contrast to conventional CMM calibration, which relies heavily on a high-precision reference standard such as a laser interferometer, the proposed method is based on a low-cost artefact fabricated from commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. In mathematical terms, the number of samples can be minimized by eliminating the redundant equations among those configured from the experimental data array. The section lengths of the artefact are measured at the arranged positions, from which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is underdetermined by one equation, which can be supplemented either by measuring the total length of the artefact with a higher-precision CMM or by calibrating the single-point error at the extreme position with a laser interferometer; in this paper, the latter is selected. With spline interpolation, the error compensation curve can then be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that, with the error compensation curve, the measurement uncertainty can be reduced to 50% of its uncompensated value.
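The final compensation step (fitting a spline through the per-position errors and subtracting its interpolated value from raw readings) might look roughly like this. The positions and error values are invented for illustration, and scipy's `CubicSpline` stands in for whatever spline formulation the authors used.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Measurement errors determined at the arranged sampling positions
# (positions in mm, errors in µm); values are illustrative, not from the paper.
positions = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 500.0])
errors = np.array([0.0, 0.8, 1.1, 0.7, 1.4, 0.9])   # µm

# Error compensation curve via spline interpolation
curve = CubicSpline(positions, errors)

# Compensate a raw reading taken at an arbitrary axis position
raw_reading, position = 250.004, 250.0              # mm
compensated = raw_reading - float(curve(position)) * 1e-3   # µm -> mm
print(round(float(curve(position)), 3))
```

The spline passes exactly through the calibrated error points and provides a smooth estimate of the error at every intermediate position along the axis.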

  6. Prostate cancer localization with multispectral MRI using cost-sensitive support vector machines and conditional random fields.

    PubMed

    Artan, Yusuf; Haider, Masoom A; Langer, Deanna L; van der Kwast, Theodorus H; Evans, Andrew J; Yang, Yongyi; Wernick, Miles N; Trachtenberg, John; Yetik, Imam Samil

    2010-09-01

Prostate cancer is a leading cause of cancer death among men in the United States. Fortunately, the survival rate for patients diagnosed early is relatively high, so in vivo imaging plays an important role in the detection and treatment of the disease. Accurate prostate cancer localization with noninvasive imaging can be used to guide biopsy, radiotherapy, and surgery, as well as to monitor disease progression. Magnetic resonance imaging (MRI) performed with an endorectal coil provides higher prostate cancer localization accuracy than transrectal ultrasound (TRUS). In general, however, a single type of MRI is not sufficient for reliable tumor localization. As an alternative, multispectral MRI, i.e., the use of multiple MRI-derived datasets, has emerged as a promising noninvasive imaging technique for prostate cancer localization; almost all studies to date, however, rely on human readers. There is significant inter- and intraobserver variability among human readers, and it is very difficult for humans to analyze the large datasets of multispectral MRI. To address these problems, this study presents an automated localization method using cost-sensitive support vector machines (SVMs) and shows that this method achieves higher localization accuracy than a classical SVM. Additionally, we develop a new segmentation method by combining conditional random fields (CRFs) with a cost-sensitive framework and show that it further improves on the cost-sensitive SVM results by incorporating spatial information. We test the SVM, the cost-sensitive SVM, and the proposed cost-sensitive CRF on multispectral MRI datasets acquired from 21 biopsy-confirmed cancer patients. Our results show that multispectral MRI increases the accuracy of prostate cancer localization compared to single MR images, and that advanced methods such as the cost-sensitive SVM, and especially the proposed cost-sensitive CRF, boost performance significantly compared to the standard SVM.
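Cost-sensitivity generally amounts to weighting errors on the rare (cancer) class more heavily in the training loss. A minimal numpy sketch of that weighting idea follows, using a class-weighted logistic regression rather than the authors' SVM, on synthetic imbalanced data; the inverse-frequency weighting rule is one common illustrative choice, not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

# Imbalanced synthetic data: few "cancer" voxels (y=1), many benign (y=0)
X = np.vstack([rng.normal(0, 1, (450, 3)), rng.normal(1.5, 1, (50, 3))])
y = np.array([0] * 450 + [1] * 50)

# Cost-sensitive weighting: misclassifying the rare class costs more.
# Here each class is weighted inversely to its frequency.
w = np.where(y == 1, len(y) / (2 * (y == 1).sum()), len(y) / (2 * (y == 0).sum()))

# Weighted logistic regression by gradient descent (stand-in for a weighted SVM)
Xb = np.hstack([X, np.ones((len(X), 1))])       # append bias column
theta = np.zeros(Xb.shape[1])
for _ in range(2000):
    p = 1 / (1 + np.exp(-Xb @ theta))
    grad = Xb.T @ (w * (p - y)) / len(y)        # weighted cross-entropy gradient
    theta -= 0.5 * grad

pred = (1 / (1 + np.exp(-Xb @ theta)) > 0.5).astype(int)
sensitivity = float((pred[y == 1] == 1).mean())  # recall on the rare class
print(round(sensitivity, 2))
```

Without the per-sample weights, a classifier trained on 9:1 imbalanced data tends to favor the majority class; the weights push the decision boundary back toward the rare class, which is the point of the cost-sensitive formulation.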

  7. Machine learning methods for the classification of gliomas: Initial results using features extracted from MR spectroscopy.

    PubMed

    Ranjith, G; Parvathy, R; Vikas, V; Chandrasekharan, Kesavadas; Nair, Suresh

    2015-04-01

    With the advent of new imaging modalities, radiologists are faced with handling increasing volumes of data for diagnosis and treatment planning. The use of automated and intelligent systems is becoming essential in such a scenario. Machine learning, a branch of artificial intelligence, is increasingly being used in medical image analysis applications such as image segmentation, registration and computer-aided diagnosis and detection. Histopathological analysis is currently the gold standard for classification of brain tumors. The use of machine learning algorithms along with extraction of relevant features from magnetic resonance imaging (MRI) holds promise of replacing conventional invasive methods of tumor classification. The aim of the study is to classify gliomas into benign and malignant types using MRI data. Retrospective data from 28 patients who were diagnosed with glioma were used for the analysis. WHO Grade II (low-grade astrocytoma) was classified as benign while Grade III (anaplastic astrocytoma) and Grade IV (glioblastoma multiforme) were classified as malignant. Features were extracted from MR spectroscopy. The classification was done using four machine learning algorithms: multilayer perceptrons, support vector machine, random forest and locally weighted learning. Three of the four machine learning algorithms gave an area under ROC curve in excess of 0.80. Random forest gave the best performance in terms of AUC (0.911) while sensitivity was best for locally weighted learning (86.1%). The performance of different machine learning algorithms in the classification of gliomas is promising. An even better performance may be expected by integrating features extracted from other MR sequences. © The Author(s) 2015 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  8. Time-dependent correlation of cerebral blood flow with oxygen metabolism in activated human visual cortex as measured by fMRI.

    PubMed

    Lin, Ai-Ling; Fox, Peter T; Yang, Yihong; Lu, Hanzhang; Tan, Li-Hai; Gao, Jia-Hong

    2009-01-01

The aim of this study was to investigate the relationship between relative cerebral blood flow (delta CBF) and relative cerebral metabolic rate of oxygen (delta CMRO(2)) during continuous visual stimulation (21 min at 8 Hz) using fMRI biophysical models, by simultaneously measuring BOLD, CBF and CBV fMRI signals. The delta CMRO(2) was determined by both a newly calibrated single-compartment model (SCM) and a multi-compartment model (MCM), and the two models were in agreement (P>0.5). The duration-varying delta CBF and delta CMRO(2) showed a negative correlation with time (r=-0.97, P<0.001); i.e., delta CBF declines while delta CMRO(2) increases during continuous stimulation. This study also illustrated that, without properly calibrating the critical parameters employed in the SCM, an incorrect and even opposite appearance of the flow-metabolism relationship during prolonged visual stimulation (a positively linear coupling) can result. The time-dependent negative correlation between flow and metabolism demonstrated in this fMRI study is consistent with a previous PET observation and further supports the view that the increase in CBF is driven by factors other than oxygen demand, and that the energy demands will eventually require increased aerobic metabolism as stimulation continues.

  9. Voxel-wise prostate cell density prediction using multiparametric magnetic resonance imaging and machine learning.

    PubMed

    Sun, Yu; Reynolds, Hayley M; Wraith, Darren; Williams, Scott; Finnegan, Mary E; Mitchell, Catherine; Murphy, Declan; Haworth, Annette

    2018-04-26

There are currently no methods to estimate cell density in the prostate. This study aimed to develop predictive models to estimate prostate cell density from multiparametric magnetic resonance imaging (mpMRI) data at the voxel level using machine learning techniques. In vivo mpMRI data were collected from 30 patients before radical prostatectomy. Sequences included T2-weighted imaging, diffusion-weighted imaging and dynamic contrast-enhanced imaging. Ground-truth cell density maps were computed from histology and co-registered with mpMRI. Feature extraction and selection were performed on the mpMRI data. Final models were fitted using three regression algorithms: multivariate adaptive regression splines (MARS), polynomial regression (PR) and generalised additive models (GAM). Model parameters were optimised using leave-one-out cross-validation on the training data, and model performance was evaluated on test data using root mean square error (RMSE) measurements. Predictive models to estimate voxel-wise prostate cell density were successfully trained and tested using the three algorithms. The best model (GAM) achieved an RMSE of 1.06 (± 0.06) × 10³ cells/mm² and a relative deviation of 13.3 ± 0.8%. Prostate cell density can thus be quantitatively estimated non-invasively from mpMRI data using high-quality co-registered data at the voxel level. These cell density predictions could be used for tissue classification, treatment response evaluation and personalised radiotherapy.
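The leave-one-out/RMSE evaluation protocol described above can be illustrated with a plain polynomial regression on synthetic one-feature data. The feature, the coefficients, and the noise level are invented, and the paper's MARS and GAM models are not reproduced here; only the validation scheme is.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in: one mpMRI-derived feature vs cell density (10^3 cells/mm^2)
x = rng.uniform(0, 1, 40)
density = 3.0 - 2.0 * x + 0.5 * x**2 + rng.normal(0, 0.1, 40)

# Leave-one-out cross-validation of a degree-2 polynomial regression:
# each sample is predicted by a model fitted on all the others
residuals = []
for i in range(len(x)):
    mask = np.arange(len(x)) != i                 # hold out sample i
    coeffs = np.polyfit(x[mask], density[mask], deg=2)
    residuals.append(np.polyval(coeffs, x[i]) - density[i])

rmse = float(np.sqrt(np.mean(np.square(residuals))))
print(round(rmse, 3))
```

Because every prediction comes from a model that never saw the held-out sample, the resulting RMSE is an out-of-sample error estimate, which is what makes it suitable for tuning model parameters.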

  10. Machine learning on brain MRI data for differential diagnosis of Parkinson's disease and Progressive Supranuclear Palsy.

    PubMed

    Salvatore, C; Cerasa, A; Castiglioni, I; Gallivanone, F; Augimeri, A; Lopez, M; Arabia, G; Morelli, M; Gilardi, M C; Quattrone, A

    2014-01-30

Supervised machine learning has been proposed as a revolutionary approach for identifying sensitive medical image biomarkers (or combinations of them), allowing for automatic diagnosis of individual subjects. The aim of this work was to assess the feasibility of a supervised machine learning algorithm for the assisted diagnosis of patients with clinically diagnosed Parkinson's disease (PD) and Progressive Supranuclear Palsy (PSP). Morphological T1-weighted Magnetic Resonance Images (MRIs) of PD patients (28), PSP patients (28) and healthy control subjects (28) were used by a supervised machine learning algorithm based on the combination of Principal Components Analysis as the feature extraction technique and Support Vector Machines as the classification algorithm. The algorithm was able to obtain voxel-based morphological biomarkers of PD and PSP, and allowed individual diagnosis of PD versus controls, PSP versus controls and PSP versus PD with accuracy, specificity and sensitivity >90%. Voxels influencing classification between PD and PSP patients involved the midbrain, pons, corpus callosum and thalamus, four critical regions known to be strongly involved in the pathophysiological mechanisms of PSP. Classification accuracy for individual PSP patients was consistent with previous manual morphological metrics and with other supervised machine learning applications to MRI data, whereas accuracy in the detection of individual PD patients was significantly higher with our classification method. The algorithm provides excellent discrimination of PD patients from PSP patients at an individual level, thus encouraging the application of computer-based diagnosis in clinical practice. Copyright © 2013 Elsevier B.V. All rights reserved.
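A PCA-plus-SVM pipeline of the kind described can be sketched with numpy alone: PCA via SVD for feature extraction, followed by a linear SVM trained by sub-gradient descent on the hinge loss. The group sizes mirror the study (28 per group), but the data and all hyperparameters are synthetic and illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic voxel features for two groups (stand-in for T1-weighted morphometry)
X = np.vstack([rng.normal(0, 1, (28, 200)), rng.normal(0.6, 1, (28, 200))])
y = np.array([-1] * 28 + [1] * 28)              # e.g., PD vs PSP labels

# Feature extraction: PCA via SVD on mean-centered data
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
Z = (X - mu) @ Vt[:10].T                        # top-10 principal components

# Linear SVM trained by sub-gradient descent on the regularized hinge loss
w, b, lam, lr = np.zeros(Z.shape[1]), 0.0, 0.01, 0.01
for _ in range(500):
    margins = y * (Z @ w + b)
    viol = margins < 1                           # margin violations
    w -= lr * (lam * w - (y[viol, None] * Z[viol]).sum(axis=0) / len(y))
    b += lr * y[viol].sum() / len(y)

train_acc = float((np.sign(Z @ w + b) == y).mean())
print(round(train_acc, 2))
```

PCA reduces the 200 voxel features to 10 components before the SVM sees them; with only 56 subjects, this kind of reduction is what keeps the classifier from overfitting raw voxel data.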

  11. Axial calibration methods of piezoelectric load sharing dynamometer

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Chang, Qingbing; Ren, Zongjin; Shao, Jun; Wang, Xinlei; Tian, Yu

    2018-06-01

The relationship between the input and output of a load sharing dynamometer is strongly non-linear across the different loading points of a plane, so precise calibration of this non-linear relationship is essential for accurate force measurement. In this paper, calibration experiments are first performed at different loading points in a plane on a piezoelectric load sharing dynamometer. The load sharing testing system is then calibrated using both a BP (backpropagation) neural network and an ELM (Extreme Learning Machine). The results show that ELM calibrates the non-linear input-output relationship across the loading points of a plane better than BP, verifying that the ELM algorithm is a feasible solution to this non-linear force measurement problem.
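An ELM differs from a BP network in that its hidden-layer weights are random and fixed, so only the output weights are fitted, in a single closed-form least-squares step. A minimal numpy sketch on an invented position-dependent force map follows (the true calibration data are of course not available here, and the hidden-layer size is arbitrary).

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic calibration data: loading point (x, y) plus applied force, with a
# non-linear, position-dependent response (illustrative, not measured).
P = rng.uniform(-1, 1, (300, 3))                 # [x, y, force]
target = P[:, 2] * (1 + 0.3 * np.sin(P[:, 0]) * np.cos(P[:, 1]))

# --- Extreme Learning Machine: random hidden layer, analytic output weights ---
n_hidden = 60
W_in = rng.normal(size=(P.shape[1], n_hidden))   # fixed random input weights
bias = rng.normal(size=n_hidden)
H = np.tanh(P @ W_in + bias)                     # hidden-layer activations

# Output weights solved in one step by regularized least squares (no backprop)
beta = np.linalg.solve(H.T @ H + 1e-6 * np.eye(n_hidden), H.T @ target)

rmse = float(np.sqrt(np.mean((H @ beta - target) ** 2)))
print(round(rmse, 4))
```

The absence of iterative backpropagation is what makes ELM training fast; the trade-off is that the random hidden layer must be wide enough to span the non-linearity being calibrated.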

  12. 76 FR 37838 - Petitions for Modification of Application of Existing Mandatory Safety Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-28

    ... may include periodic tests of methane levels and limits on the minimum methane concentrations that may...) Methane monitor(s) will be calibrated on the longwall, continuous mining machine, or cutting machine and... petitioner will test for methane with a hand-held methane detector at least every 10 minutes from the time...

  13. A catalog of stellar spectrophotometry (Adelman, et al. 1989): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.; Adelman, Saul J.

    1990-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the astronomical data centers, is described. The catalog is a collection of spectrophotometric observations made using rotating grating scanners and calibrated with the fluxes of Vega. The observations cover various wavelength regions between about 330 and 1080 nm.

  14. The Usefulness of Zone Division Using Belt Partition at the Entry Zone of MRI Machine Room: An Analysis of the Restrictive Effect of Dangerous Action Using a Questionnaire.

    PubMed

    Funada, Tatsuro; Shibuya, Tsubasa

    2016-08-01

The American College of Radiology recommends dividing magnetic resonance imaging (MRI) sites into four zones with progressively restricted access. However, structural limitations prevent most Japanese facilities from applying this recommendation. This study examines, via a questionnaire survey, the effectiveness of using a belt partition to create such a zonal division, considering three critical parameters: the influence of individuals' backgrounds (relevance to MRI, years of experience, post, occupation [i.e., nurse or nursing assistant], outpatient section or ward); the presence or absence of a door or belt partition (open or closed); and four personnel scenarios that may be encountered during a visit to an MRI site (e.g., visiting the MRI site to receive a patient). In this survey, the influence of individuals' backgrounds (maximum odds ratio: 6.3, 95% CI: 1.47-27.31) and of the personnel scenarios (maximum risk ratio: 2.4, 95% CI: 1.16-4.85) on dangerous actions was uncertain. Conversely, the presence of the door and belt partition had a significant influence (maximum risk ratio: 17.4, 95% CI: 7.94-17.38). We therefore suggest that visual impression has a strong influence on individuals' actions. Even where structural limitations are present, zonal division by belt partition provides a visual deterrent, and the partitioned zone serves as a buffer zone. We conclude that, used properly, the belt partition is an inexpensive and effective safety management device for MRI rooms.

  15. Regional autonomy changes in resting-state functional MRI in patients with HIV associated neurocognitive disorder

    NASA Astrophysics Data System (ADS)

    DSouza, Adora M.; Abidin, Anas Z.; Chockanathan, Udaysankar; Wismüller, Axel

    2018-03-01

    In this study, we investigate whether there are discernable changes in influence that brain regions have on themselves once patients show symptoms of HIV Associated Neurocognitive Disorder (HAND) using functional MRI (fMRI). Simple functional connectivity measures, such as correlation cannot reveal such information. To this end, we use mutual connectivity analysis (MCA) with Local Models (LM), which reveals a measure of influence in terms of predictability. Once such measures of interaction are obtained, we train two classifiers to characterize difference in patterns of regional self-influence between healthy subjects and subjects presenting with HAND symptoms. The two classifiers we use are Support Vector Machines (SVM) and Localized Generalized Matrix Learning Vector Quantization (LGMLVQ). Performing machine learning on fMRI connectivity measures is popularly known as multi-voxel pattern analysis (MVPA). By performing such an analysis, we are interested in studying the impact HIV infection has on an individual's brain. The high area under receiver operating curve (AUC) and accuracy values for 100 different train/test separations using MCA-LM self-influence measures (SVM: mean AUC=0.86, LGMLVQ: mean AUC=0.88, SVM and LGMLVQ: mean accuracy=0.78) compared with standard MVPA analysis using cross-correlation between fMRI time-series (SVM: mean AUC=0.58, LGMLVQ: mean AUC=0.57), demonstrates that self-influence features can be more discriminative than measures of interaction between time-series pairs. Furthermore, our results suggest that incorporating measures of self-influence in MVPA analysis used commonly in fMRI analysis has the potential to provide a performance boost and indicate important changes in dynamics of regions in the brain as a consequence of HIV infection.

  16. Quantitative Machine Learning Analysis of Brain MRI Morphology throughout Aging.

    PubMed

    Shamir, Lior; Long, Joe

    2016-01-01

While cognition is clearly affected by aging, it is unclear whether the process of brain aging is driven solely by accumulation of environmental damage, or involves biological pathways. We applied quantitative image analysis to profile the alteration of brain tissues during aging. A dataset of 463 brain MRI images taken from a cohort of 416 subjects was analyzed using a large set of low-level numerical image content descriptors computed from the entire brain MRI images. The correlation between the numerical image content descriptors and age was computed, and the alterations of the brain tissues during aging were quantified and profiled using machine learning. The comprehensive set of global image content descriptors provides a high Pearson correlation of ~0.9822 with chronological age, indicating that the machine learning analysis of global features is sensitive to the age of the subjects. Profiling of the predicted age shows several periods of mild change separated by shorter periods of more rapid alteration; the most rapid changes occurred around the ages of 55 and 65. The results show that the process of brain aging is not linear, exhibiting short periods of rapid aging separated by periods of milder change. These results are in agreement with patterns observed in cognitive decline, mental health status, and general human aging, suggesting that brain aging might not be driven solely by accumulation of environmental damage. Code and data used in the experiments are publicly available.
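The core measurement here, the Pearson correlation of each numerical image descriptor with chronological age, is simple to compute. A toy example with one age-related and two uninformative descriptors (all synthetic, standing in for the low-level image content descriptors):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in: 463 scans, 3 numerical image descriptors
age = rng.uniform(18, 90, 463)
descriptors = np.column_stack([
    0.02 * age + rng.normal(0, 0.3, 463),   # descriptor that tracks age
    rng.normal(size=463),                   # uninformative descriptor
    rng.normal(size=463),                   # uninformative descriptor
])

# Pearson correlation of each descriptor with chronological age
r = [float(np.corrcoef(descriptors[:, j], age)[0, 1]) for j in range(3)]
print([round(v, 2) for v in r])
```

Ranking descriptors by |r| like this is a common first screen for which image features carry age information before any multivariate model is fitted.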

  17. Photogrammetry in 3d Modelling of Human Bone Structures from Radiographs

    NASA Astrophysics Data System (ADS)

    Hosseinian, S.; Arefi, H.

    2017-05-01

Photogrammetry can have a great impact on the success of medical processes for diagnosis, treatment, and surgery. Precise 3D models, which can be achieved by photogrammetry, considerably improve the results of orthopedic surgeries and processes. The usual 3D imaging techniques, computed tomography (CT) and magnetic resonance imaging (MRI), have limitations such as being usable only in non-weight-bearing positions, cost, high radiation dose (for CT), and the unsuitability of MRI for patients with ferromagnetic implants or objects in their bodies. 3D reconstruction of bony structures from biplanar X-ray images is a reliable and accepted alternative for obtaining accurate 3D information with a low radiation dose in weight-bearing positions; this information can be obtained from multi-view radiographs using photogrammetry. The primary step in 3D reconstruction of human bone structure from medical X-ray images is calibration, which is done by applying the principles of photogrammetry. After the calibration step, 3D reconstruction can be performed using efficient methods with different levels of automation. Because X-ray images differ in nature from optical images, the calibration step of stereoradiography poses distinct challenges in medical applications. In this paper, after demonstrating the general steps and principles of 3D reconstruction from X-ray images, calibration methods for 3D reconstruction from radiographs are compared and assessed from a photogrammetric point of view, considering criteria such as their camera models, calibration objects, accuracy, availability, patient-friendliness, and cost.

  18. Autonomous Landmark Calibration Method for Indoor Localization

    PubMed Central

    Kim, Jae-Hoon; Kim, Byoung-Seop

    2017-01-01

    Machine-generated data expansion is a global phenomenon in recent Internet services. The proliferation of mobile communication and smart devices has increased the utilization of machine-generated data significantly. One of the most promising applications of machine-generated data is the estimation of the location of smart devices. The motion sensors integrated into smart devices generate continuous data that can be used to estimate the location of pedestrians in an indoor environment. We focus on the estimation of the accurate location of smart devices by determining the landmarks appropriately for location error calibration. In the motion sensor-based location estimation, the proposed threshold control method determines valid landmarks in real time to avoid the accumulation of errors. A statistical method analyzes the acquired motion sensor data and proposes a valid landmark for every movement of the smart devices. Motion sensor data used in the testbed are collected from the actual measurements taken throughout a commercial building to demonstrate the practical usefulness of the proposed method. PMID:28837071
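A threshold-control rule of the kind described (accept a landmark only when a motion-sensor reading deviates strongly from its recent statistics, so that noise does not accumulate into spurious calibration points) can be sketched as follows. The window length, the threshold factor `k`, and the synthetic signal are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic accelerometer-magnitude stream: walking noise, plus a pronounced
# spike where the pedestrian hits a physical landmark (e.g., a stair step).
signal = rng.normal(1.0, 0.05, 500)
signal[240:245] += 1.2                           # landmark event

# Threshold control: flag a landmark only when the reading deviates from the
# running window statistics by more than k standard deviations.
k, window = 4.0, 100
landmarks = []
for t in range(window, len(signal)):
    mu = signal[t - window:t].mean()
    sd = signal[t - window:t].std()
    if abs(signal[t] - mu) > k * sd:
        landmarks.append(t)

print(landmarks[:3])
```

Raising `k` trades missed landmarks for fewer false positives; the statistical analysis in the paper serves exactly this purpose of deciding, per movement, whether a candidate deviation is a valid landmark.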

  19. Calibrators measurement system for headlamp tester of motor vehicle base on machine vision

    NASA Astrophysics Data System (ADS)

    Pan, Yue; Zhang, Fan; Xu, Xi-ping; Zheng, Zhe

    2014-09-01

With the development of photoelectric detection technology, machine vision is finding wider use in industry. This paper mainly introduces a measuring system for the calibrators of motor vehicle headlamp testers, the core of which is a CCD image sampling system. It presents the measuring principle for the optical axial angle and light intensity, and establishes the linear relationship between the calibrator's light-spot illuminance and the image-plane illuminance, providing an important specification of the CCD imaging system. Image processing in MATLAB extracts the light spot's geometric midpoint and average gray level, and fitting these statistics by the method of least squares yields the regression equation relating illuminance to gray level. The paper analyzes the error in the experimental results of the measurement system and gives the combined standard uncertainty and its sources for the optical axial angle. The average measuring accuracy of the optical axial angle is kept within 40''. The whole testing process is digital rather than dependent on operator judgment, giving higher accuracy and better repeatability than comparable measuring systems.
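The least-squares calibration step (regressing gray level on illuminance, then inverting the fit to read illuminance off a new image) can be sketched with numpy's `polyfit`. The gain, offset, and noise values below are invented for illustration; only the fitting procedure mirrors the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative calibration samples: image-plane illuminance (lx) vs the mean
# gray level of the light spot extracted from the CCD image.
illuminance = np.linspace(5, 50, 10)
gray = 4.8 * illuminance + 12 + rng.normal(0, 1.5, 10)   # near-linear response

# Least-squares fit gives the regression (calibration) equation gray = a*E + b
a, b = np.polyfit(illuminance, gray, deg=1)

# Invert the fit to estimate illuminance from a new gray-level reading
estimated_lx = float((140 - b) / a)
print(round(float(a), 2), round(estimated_lx, 1))
```

Once the regression equation is in hand, every subsequent measurement reduces to reading a gray level from the image and mapping it through the inverted fit, which is what removes the subjective, operator-dependent steps from the test.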

  20. A novel Bayesian approach to accounting for uncertainty in fMRI-derived estimates of cerebral oxygen metabolism fluctuations

    PubMed Central

    Simon, Aaron B.; Dubowitz, David J.; Blockley, Nicholas P.; Buxton, Richard B.

    2016-01-01

    Calibrated blood oxygenation level dependent (BOLD) imaging is a multimodal functional MRI technique designed to estimate changes in cerebral oxygen metabolism from measured changes in cerebral blood flow and the BOLD signal. This technique addresses fundamental ambiguities associated with quantitative BOLD signal analysis; however, its dependence on biophysical modeling creates uncertainty in the resulting oxygen metabolism estimates. In this work, we developed a Bayesian approach to estimating the oxygen metabolism response to a neural stimulus and used it to examine the uncertainty that arises in calibrated BOLD estimation due to the presence of unmeasured model parameters. We applied our approach to estimate the CMRO2 response to a visual task using the traditional hypercapnia calibration experiment as well as to estimate the metabolic response to both a visual task and hypercapnia using the measurement of baseline apparent R2′ as a calibration technique. Further, in order to examine the effects of cerebral spinal fluid (CSF) signal contamination on the measurement of apparent R2′, we examined the effects of measuring this parameter with and without CSF-nulling. We found that the two calibration techniques provided consistent estimates of the metabolic response on average, with a median R2′-based estimate of the metabolic response to CO2 of 1.4%, and R2′- and hypercapnia-calibrated estimates of the visual response of 27% and 24%, respectively. However, these estimates were sensitive to different sources of estimation uncertainty. The R2′-calibrated estimate was highly sensitive to CSF contamination and to uncertainty in unmeasured model parameters describing flow-volume coupling, capillary bed characteristics, and the iso-susceptibility saturation of blood. The hypercapnia-calibrated estimate was relatively insensitive to these parameters but highly sensitive to the assumed metabolic response to CO2. PMID:26790354

  2. Improving near-infrared prediction model robustness with support vector machine regression: a pharmaceutical tablet assay example.

    PubMed

    Igne, Benoît; Drennen, James K; Anderson, Carl A

    2014-01-01

    Changes in raw materials and process wear and tear can have significant effects on the prediction error of near-infrared calibration models. When the variability that is present during routine manufacturing is not included in the calibration, test, and validation sets, the long-term performance and robustness of the model will be limited. Nonlinearity is a major source of interference. In near-infrared spectroscopy, nonlinearity can arise from light path-length differences, which can come from differences in particle size or density. The usefulness of support vector machine (SVM) regression for handling nonlinearity and improving the robustness of calibration models, in scenarios where the calibration set did not include all the variability present in the test set, was evaluated. Compared to partial least squares (PLS) regression, SVM regression was less affected by physical (particle size) and chemical (moisture) differences. The linearity of the SVM-predicted values was also improved. Nevertheless, although visualization and interpretation tools have been developed to enhance the usability of SVM-based methods, work remains to be done to provide chemometricians in the pharmaceutical industry with a regression method that can supplement PLS-based methods.
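As a rough illustration of why a kernel method can absorb nonlinearity that defeats a linear calibration, the sketch below fits a numpy-only kernel ridge regression (a simplified stand-in for the epsilon-insensitive SVM regression used in the study; the data are synthetic, not real spectra) and compares it to a straight-line fit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "spectral" feature with a nonlinear path-length effect.
x_train = np.linspace(0.0, 1.0, 40)
y_train = np.sin(2.5 * x_train) + 0.01 * rng.standard_normal(40)

def rbf_kernel(a, b, gamma=10.0):
    # Gaussian kernel matrix between two sets of scalar inputs.
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

# Kernel ridge regression: closed-form dual weights with lambda*I regularization.
lam = 1e-3
K = rbf_kernel(x_train, x_train)
alpha = np.linalg.solve(K + lam * np.eye(len(x_train)), y_train)

def predict(x_new):
    return rbf_kernel(np.atleast_1d(x_new), x_train) @ alpha

# A purely linear fit for comparison; it cannot follow the curvature.
lin_coef = np.polyfit(x_train, y_train, 1)
```

The kernel model tracks the nonlinear response closely, while the linear fit leaves a large structured residual, mirroring the PLS-versus-SVM comparison described above.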

  3. Effective Data-Driven Calibration for a Galvanometric Laser Scanning System Using Binocular Stereo Vision.

    PubMed

    Tu, Junchao; Zhang, Liyan

    2018-01-12

    A new solution to the problem of galvanometric laser scanning (GLS) system calibration is presented. Under a machine learning framework, we build a single-hidden-layer feedforward neural network (SLFN) to represent the GLS system, which takes the digital control signal at the drives of the GLS system as input and the space vector of the corresponding outgoing laser beam as output. The training data set is obtained with the aid of a moving mechanism and a binocular stereo system. The parameters of the SLFN are efficiently solved in closed form by using an extreme learning machine (ELM). By quantitatively analyzing the regression precision with respect to the number of hidden neurons in the SLFN, we demonstrate that the number of hidden neurons can be safely chosen from a broad interval while still guaranteeing good generalization performance. Compared to traditional model-driven calibration, the proposed calibration method does not need a complex modeling process and is more accurate and stable. As the output of the network is the space vectors of the outgoing laser beams, it requires much less training time and provides a uniform solution to both laser projection and 3D reconstruction, in contrast with the existing data-driven calibration method, which only works for the laser triangulation problem. Calibration, projection and 3D-reconstruction experiments were conducted to test the proposed method, and good results were obtained.
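The ELM training step described above (a fixed random hidden layer with output weights solved in closed form by least squares) can be sketched as follows; the toy_beam mapping, layer sizes, and data are invented stand-ins for the real control-signal-to-beam-vector measurements:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical training set: 2-D control signals -> 3-D beam direction vectors.
X = rng.uniform(-1.0, 1.0, size=(200, 2))

def toy_beam(x):
    # Stand-in ground-truth mapping, used only to generate data for this sketch.
    v = np.stack([np.sin(x[:, 0]), np.sin(x[:, 1]), np.ones(len(x))], axis=1)
    return v / np.linalg.norm(v, axis=1, keepdims=True)

Y = toy_beam(X)

# Extreme learning machine: random hidden layer, closed-form output weights.
n_hidden = 50
W = rng.standard_normal((2, n_hidden))        # fixed random input weights
b = rng.standard_normal(n_hidden)             # fixed random biases
H = np.tanh(X @ W + b)                        # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # least-squares output weights

def slfn_predict(x_new):
    return np.tanh(x_new @ W + b) @ beta
```

Because only beta is learned, and by ordinary least squares, training is a single linear solve rather than an iterative optimization, which is what makes ELM fast.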

  4. Quantitative MRI for hepatic fat fraction and T2* measurement in pediatric patients with non-alcoholic fatty liver disease.

    PubMed

    Deng, Jie; Fishbein, Mark H; Rigsby, Cynthia K; Zhang, Gang; Schoeneman, Samantha E; Donaldson, James S

    2014-11-01

    Non-alcoholic fatty liver disease (NAFLD) is the most common cause of chronic liver disease in children. The gold standard for diagnosis is liver biopsy. MRI is a non-invasive imaging method that provides quantitative measurement of hepatic fat content, and the methodology is particularly appealing for the pediatric population because of its rapidity and radiation-free imaging techniques. We aimed to develop a multi-point Dixon MRI method with multi-interference models (multi-fat-peak modeling and bi-exponential T2* correction) for accurate hepatic fat fraction (FF) and T2* measurements in pediatric patients with NAFLD. A phantom study was first performed to validate the accuracy of the MRI fat fraction measurement by comparing it with the chemical fat composition of an ex-vivo pork liver-fat homogenate. The most accurate model determined from the phantom study was used for fat fraction and T2* measurements in 52 children and young adults referred from the pediatric hepatology clinic with suspected or identified NAFLD. Separate T2* values of the water (T2*W) and fat (T2*F) components derived from the bi-exponential fitting were evaluated and plotted as a function of fat fraction. In ten patients undergoing liver biopsy, we compared histological analysis of liver fat fraction with MRI fat fraction. In the phantom study, the 6-point Dixon with 5-fat-peak, bi-exponential T2* modeling demonstrated the best precision and accuracy in fat fraction measurements compared with the other methods. This model was further calibrated with the chemical fat fraction and applied in patients, where patterns similar to the phantom study were observed: conventional 2-point and 3-point Dixon methods underestimated fat fraction compared to the calibrated 6-point 5-fat-peak bi-exponential model (P < 0.0001). With increasing fat fraction, T2*W (27.9 ± 3.5 ms) decreased, whereas T2*F (20.3 ± 5.5 ms) increased; T2*W and T2*F became increasingly similar when the fat fraction was higher than 15-20%.
Histological fat fraction measurements in ten patients were highly correlated with calibrated MRI fat fraction measurements (Pearson correlation coefficient r = 0.90 with P = 0.0004). Liver MRI using multi-point Dixon with multi-fat-peak and bi-exponential T2* modeling provided accurate fat quantification in children and young adults with non-alcoholic fatty liver disease and may be used to screen at-risk or affected individuals and to monitor disease progress noninvasively.
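The study uses a 6-point, 5-fat-peak, bi-exponential T2* model; for orientation, the basic fat-fraction arithmetic underlying the simpler 2-point Dixon variant (which the study found to underestimate FF) is just in-phase and opposed-phase signal combination; the signal values below are invented:

```python
# Simplified two-point Dixon sketch (single fat peak, no T2* correction),
# illustrating the fat-fraction arithmetic with hypothetical signal values.
s_in_phase = 100.0   # water + fat signal
s_opposed = 60.0     # water - fat signal

water = 0.5 * (s_in_phase + s_opposed)
fat = 0.5 * (s_in_phase - s_opposed)
fat_fraction = fat / (water + fat)   # equals fat / in-phase signal

print(f"FF = {fat_fraction:.1%}")    # prints: FF = 20.0%
```

The multi-fat-peak and T2*-corrected models refine exactly this ratio by modeling the several spectral peaks of fat and the independent T2* decay of water and fat.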

  5. A general prediction model for the detection of ADHD and Autism using structural and functional MRI.

    PubMed

    Sen, Bhaskar; Borle, Neil C; Greiner, Russell; Brown, Matthew R G

    2018-01-01

    This work presents a novel method for learning a model that can diagnose Attention Deficit Hyperactivity Disorder (ADHD), as well as Autism, using structural texture and functional connectivity features obtained from 3-dimensional structural magnetic resonance imaging (MRI) and 4-dimensional resting-state functional magnetic resonance imaging (fMRI) scans of subjects. We explore a series of three learners: (1) The LeFMS learner first extracts features from the structural MRI images using the texture-based filters produced by a sparse autoencoder. These filters are then convolved with the original MRI image using an unsupervised convolutional network. The resulting features are used as input to a linear support vector machine (SVM) classifier. (2) The LeFMF learner produces a diagnostic model by first computing spatially non-stationary independent components of the fMRI scans, which it uses to decompose each subject's fMRI scan into the time courses of these common spatial components. These features can then be used with a learner by themselves or in combination with other features to produce the model. Regardless of which approach is used, the final set of features is input to a linear support vector machine (SVM) classifier. (3) Finally, the overall LeFMSF learner uses the combined features obtained from the two feature extraction processes in (1) and (2) above as input to an SVM classifier, achieving an accuracy of 0.673 on the ADHD-200 holdout data and 0.643 on the ABIDE holdout data. Both of these results, obtained with the same LeFMSF framework, are the best known hold-out accuracies on these datasets when using only imaging data, exceeding previously published results by 0.012 for ADHD and 0.042 for Autism. Our results show that combining multi-modal features can yield good classification accuracy for diagnosis of ADHD and Autism, which is an important step towards computer-aided diagnosis of these psychiatric diseases and perhaps others as well.

  6. Functional connectivity analysis of resting-state fMRI networks in nicotine dependent patients

    NASA Astrophysics Data System (ADS)

    Smith, Aria; Ehtemami, Anahid; Fratte, Daniel; Meyer-Baese, Anke; Zavala-Romero, Olmo; Goudriaan, Anna E.; Schmaal, Lianne; Schulte, Mieke H. J.

    2016-03-01

    Brain imaging studies have identified brain networks that play a key role in nicotine dependence-related behavior. Functional connectivity of the brain is dynamic; it changes over time due to different causes, such as learning or quitting a habit. Functional connectivity analysis is useful for discovering and comparing patterns between functional magnetic resonance imaging (fMRI) scans of patients' brains. In the resting state, the patient is asked to remain calm and not perform any task, to minimize the contribution of external stimuli. Studies of resting-state fMRI networks have shown functionally connected brain regions that have a high level of activity during this state. In this project, we are interested in the relationship between these functionally connected brain regions in order to identify nicotine-dependent patients who underwent a smoking-cessation treatment. Our approach is based on comparing the set of connections between the fMRI scans before and after treatment. We applied support vector machines, a machine learning technique, to classify patients based on whether they received the treatment or a placebo. Using the functional connectivity (CONN) toolbox, we formed a correlation matrix based on the functional connectivity between different regions of the brain. The experimental results show that there is inadequate predictive information to classify nicotine-dependent patients using the SVM classifier. We propose that other classification methods be explored to better classify nicotine-dependent patients.

  7. Automatic Determination of the Need for Intravenous Contrast in Musculoskeletal MRI Examinations Using IBM Watson's Natural Language Processing Algorithm.

    PubMed

    Trivedi, Hari; Mesterhazy, Joseph; Laguna, Benjamin; Vu, Thienkhai; Sohn, Jae Ho

    2018-04-01

    Magnetic resonance imaging (MRI) protocoling can be time- and resource-intensive, and protocols can often be suboptimal dependent upon the expertise or preferences of the protocoling radiologist. Providing a best-practice recommendation for an MRI protocol has the potential to improve efficiency and decrease the likelihood of a suboptimal or erroneous study. The goal of this study was to develop and validate a machine learning-based natural language classifier that can automatically assign the use of intravenous contrast for musculoskeletal MRI protocols based upon the free-text clinical indication of the study, thereby improving efficiency of the protocoling radiologist and potentially decreasing errors. We utilized a deep learning-based natural language classification system from IBM Watson, a question-answering supercomputer that gained fame after challenging the best human players on Jeopardy! in 2011. We compared this solution to a series of traditional machine learning-based natural language processing techniques that utilize a term-document frequency matrix. Each classifier was trained with 1240 MRI protocols plus their respective clinical indications and validated with a test set of 280. Ground truth of contrast assignment was obtained from the clinical record. For evaluation of inter-reader agreement, a blinded second reader radiologist analyzed all cases and determined contrast assignment based on only the free-text clinical indication. In the test set, Watson demonstrated overall accuracy of 83.2% when compared to the original protocol. This was similar to the overall accuracy of 80.2% achieved by an ensemble of eight traditional machine learning algorithms based on a term-document matrix. When compared to the second reader's contrast assignment, Watson achieved 88.6% agreement. When evaluating only the subset of cases where the original protocol and second reader were concordant (n = 251), agreement climbed further to 90.0%. 
The classifier was relatively robust to spelling and grammatical errors, which were frequent. Implementation of this automated MR contrast determination system as a clinical decision support tool may save considerable time and effort for the radiologist while potentially decreasing error rates, and requires no change in order entry or workflow.
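A term-document-matrix classifier of the traditional kind compared against Watson can be sketched minimally as follows; the indications, labels, and nearest-centroid rule here are invented for illustration and are far simpler than the eight-algorithm ensemble used in the study:

```python
import numpy as np
from collections import Counter

# Toy stand-in for a term-document-matrix classifier; indications are invented.
train = [
    ("rule out osteomyelitis or abscess", 1),   # 1 = contrast
    ("evaluate soft tissue mass", 1),
    ("suspected tumor recurrence", 1),
    ("acute meniscal tear", 0),                 # 0 = no contrast
    ("chronic rotator cuff tear", 0),
    ("ligament injury after trauma", 0),
]

vocab = sorted({w for text, _ in train for w in text.split()})

def vectorize(text):
    # Term-frequency vector over the training vocabulary.
    counts = Counter(text.split())
    return np.array([counts[w] for w in vocab], dtype=float)

X = np.array([vectorize(t) for t, _ in train])
y = np.array([label for _, label in train])

# Nearest-centroid rule over term-frequency vectors.
centroids = {c: X[y == c].mean(axis=0) for c in (0, 1)}

def predict(text):
    v = vectorize(text)
    return max(centroids, key=lambda c: float(v @ centroids[c]))
```

Real systems add TF-IDF weighting, stemming, and a stronger classifier, but the pipeline shape (tokenize, build a term-document matrix, classify) is the same.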

  8. Framework for 2D-3D image fusion of infrared thermography with preoperative MRI.

    PubMed

    Hoffmann, Nico; Weidner, Florian; Urban, Peter; Meyer, Tobias; Schnabel, Christian; Radev, Yordan; Schackert, Gabriele; Petersohn, Uwe; Koch, Edmund; Gumhold, Stefan; Steiner, Gerald; Kirsch, Matthias

    2017-11-27

    Multimodal medical image fusion combines information from one or more images in order to improve diagnostic value. While previous applications mainly focus on merging images from computed tomography, magnetic resonance imaging (MRI), ultrasound and single-photon emission computed tomography, we propose a novel approach for the registration and fusion of preoperative 3D MRI with intraoperative 2D infrared thermography. Image-guided neurosurgeries are based on neuronavigation systems, which also allow us to track the position and orientation of arbitrary cameras. We are thereby able to relate the 2D coordinate system of the infrared camera to the 3D MRI coordinate system. The registered image data are then combined by calibration-based image fusion in order to map our intraoperative 2D thermographic images onto the respective brain surface recovered from preoperative MRI. In extensive accuracy measurements, we found that the proposed framework achieves a mean accuracy of 2.46 mm.

  9. An fMRI-Based Neural Signature of Decisions to Smoke Cannabis.

    PubMed

    Bedi, Gillinder; Lindquist, Martin A; Haney, Margaret

    2015-11-01

    Drug dependence may be at its core a pathology of choice, defined by continued decisions to use drugs irrespective of negative consequences. Despite evidence of dysregulated decision making in addiction, little is known about the neural processes underlying the most clinically relevant decisions drug users make: decisions to use drugs. Here, we combined functional magnetic resonance imaging (fMRI), machine learning, and human laboratory drug administration to investigate neural activation underlying decisions to smoke cannabis. Nontreatment-seeking daily cannabis smokers completed an fMRI choice task, making repeated decisions to purchase or decline 1-12 placebo or active cannabis 'puffs' ($0.25-$5/puff). One randomly selected decision was implemented. If the selected choice had been bought, the cost was deducted from study earnings and the purchased cannabis smoked in the laboratory; alternatively, the participant remained in the laboratory without cannabis. Machine learning with leave-one-subject-out cross-validation identified distributed neural activation patterns discriminating decisions to buy cannabis from declined offers. A total of 21 participants were included in behavioral analyses; 17 purchased cannabis and were thus included in fMRI analyses. Purchasing varied lawfully with dose and cost. The classifier discriminated with 100% accuracy between fMRI activation patterns for purchased vs declined cannabis at the level of the individual. Dorsal striatum, insula, posterior parietal regions, anterior and posterior cingulate, and dorsolateral prefrontal cortex all contributed reliably to this neural signature of decisions to smoke cannabis. These findings provide the basis for a brain-based characterization of drug-related decision making in drug abuse, including effects of psychological and pharmacological interventions on these processes.

  10. Differentiating between bipolar and unipolar depression in functional and structural MRI studies.

    PubMed

    Han, Kyu-Man; De Berardis, Domenico; Fornaro, Michele; Kim, Yong-Ku

    2018-03-28

    Distinguishing depression in bipolar disorder (BD) from unipolar depression (UD) solely based on clinical clues is difficult, which has led to the exploration of promising neural markers in neuroimaging measures for discriminating between BD depression and UD. In this article, we review structural and functional magnetic resonance imaging (MRI) studies that directly compare UD and BD depression based on neuroimaging modalities including functional MRI studies on regional brain activation or functional connectivity, structural MRI on gray or white matter morphology, and pattern classification analyses using a machine learning approach. Numerous studies have reported distinct functional and structural alterations in emotion- or reward-processing neural circuits between BD depression and UD. Different activation patterns in neural networks including the amygdala, anterior cingulate cortex (ACC), prefrontal cortex (PFC), and striatum during emotion-, reward-, or cognition-related tasks have been reported between BD and UD. A stronger functional connectivity pattern in BD was pronounced in default mode and in frontoparietal networks and brain regions including the PFC, ACC, parietal and temporal regions, and thalamus compared to UD. Gray matter volume differences in the ACC, hippocampus, amygdala, and dorsolateral prefrontal cortex (DLPFC) have been reported between BD and UD, along with a thinner DLPFC in BD compared to UD. BD showed reduced integrity in the anterior part of the corpus callosum and posterior cingulum compared to UD. Several studies performed pattern classification analysis using structural and functional MRI data to distinguish between UD and BD depression using a supervised machine learning approach, which yielded a moderate level of accuracy in classification. Copyright © 2018 Elsevier Inc. All rights reserved.

  11. Spatially Regularized Machine Learning for Task and Resting-state fMRI

    PubMed Central

    Song, Xiaomu; Panych, Lawrence P.; Chen, Nan-kuei

    2015-01-01

    Background: Reliable mapping of brain function across sessions and/or subjects in task- and resting-state has been a critical challenge for quantitative fMRI studies, although it has been intensively addressed in the past decades. New Method: A spatially regularized support vector machine (SVM) technique was developed for reliable brain mapping in task- and resting-state. Unlike most existing SVM-based brain mapping techniques, which implement supervised classification of specific brain functional states or disorders, the proposed method performs a semi-supervised classification for general brain function mapping in which the spatial correlation of fMRI is integrated into the SVM learning. The method can adapt to intra- and inter-subject variations induced by fMRI nonstationarity, and identify a true boundary between active and inactive voxels, or between functionally connected and unconnected voxels, in a feature space. Results: The method was evaluated using synthetic and experimental data at the individual and group level. Multiple features were evaluated in terms of their contributions to the spatially regularized SVM learning. Reliable mapping results in both task- and resting-state were obtained from individual subjects and at the group level. Comparison with Existing Methods: A comparison study was performed with independent component analysis, general linear model, and correlation analysis methods. Experimental results indicate that the proposed method can provide better or comparable mapping performance at the individual and group level. Conclusions: The proposed method provides accurate and reliable mapping of brain function in task- and resting-state, and is applicable to a variety of quantitative fMRI studies. PMID:26470627

  12. Prediction of individual brain maturity using fMRI.

    PubMed

    Dosenbach, Nico U F; Nardos, Binyam; Cohen, Alexander L; Fair, Damien A; Power, Jonathan D; Church, Jessica A; Nelson, Steven M; Wig, Gagan S; Vogel, Alecia C; Lessov-Schlaggar, Christina N; Barnes, Kelly Anne; Dubis, Joseph W; Feczko, Eric; Coalson, Rebecca S; Pruett, John R; Barch, Deanna M; Petersen, Steven E; Schlaggar, Bradley L

    2010-09-10

    Group functional connectivity magnetic resonance imaging (fcMRI) studies have documented reliable changes in human functional brain maturity over development. Here we show that support vector machine-based multivariate pattern analysis extracts sufficient information from fcMRI data to make accurate predictions about individuals' brain maturity across development. The use of only 5 minutes of resting-state fcMRI data from 238 scans of typically developing volunteers (ages 7 to 30 years) allowed prediction of individual brain maturity as a functional connectivity maturation index. The resultant functional maturation curve accounted for 55% of the sample variance and followed a nonlinear asymptotic growth curve shape. The greatest relative contribution to predicting individual brain maturity was made by the weakening of short-range functional connections between the adult brain's major functional networks.

  13. STAMPS: Software Tool for Automated MRI Post-processing on a supercomputer.

    PubMed

    Bigler, Don C; Aksu, Yaman; Miller, David J; Yang, Qing X

    2009-08-01

    This paper describes a Software Tool for Automated MRI Post-processing (STAMP) of multiple types of brain MRIs on a workstation and for parallel processing on a supercomputer (STAMPS). This software tool enables the automation of nonlinear registration for a large image set and for multiple MR image types. The tool uses standard brain MRI post-processing tools (such as SPM, FSL, and HAMMER) for multiple MR image types in a pipeline fashion. It also contains novel MRI post-processing features. The STAMP image outputs can be used to perform brain analysis using Statistical Parametric Mapping (SPM) or single-/multi-image modality brain analysis using Support Vector Machines (SVMs). Since STAMPS is PBS-based, the supercomputer may be a multi-node computer cluster or one of the latest multi-core computers.

  14. Quantitative prediction of radio frequency induced local heating derived from measured magnetic field maps in magnetic resonance imaging: A phantom validation at 7 T

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaotong; Liu, Jiaen; Van de Moortele, Pierre-Francois

    2014-12-15

    Electrical Properties Tomography (EPT) utilizes measurable radio frequency (RF) coil-induced magnetic fields (B1 fields) in a Magnetic Resonance Imaging (MRI) system to quantitatively reconstruct the local electrical properties (EP) of biological tissues. Information derived from the same data set, e.g., the complex B1 distribution used for electric field calculation, can be used to estimate local Specific Absorption Rate (SAR) on a subject-specific basis. SAR plays a significant role in RF pulse design for high-field MRI applications, where maximum local tissue heating remains one of the most constraining limits. The purpose of the present work is to investigate the feasibility of such B1-based local SAR estimation, expanding on previously proposed EPT approaches. To this end, B1 calibration was obtained in a gelatin phantom at 7 T with a multi-channel transmit coil under a particular multi-channel B1-shim setting (B1-shim I). Using this single set of B1 calibration data, the local SAR distribution was subsequently predicted for B1-shim I, as well as for another B1-shim setting (B1-shim II), considering a specific parameter set for a heating MRI protocol consisting of RF pulses applied at a 1% duty cycle. Local SAR results, which could not be directly measured with MRI, were subsequently converted into temperature changes, which in turn were validated against temperature changes measured by MRI thermometry based on the proton chemical shift.

  15. Standard method of test for grindability of coal by the Hardgrove-machine method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1975-01-01

    A procedure is described for sampling coal, grinding in a Hardgrove grinding machine, and passing through standard sieves to determine the degree of pulverization of coals. The grindability index of the coal tested is calculated from a calibration chart prepared by plotting weight of material passing a No. 200 sieve versus the Hardgrove Grindability Index for the standard reference samples. The Hardgrove machine is shown schematically. The method for preparing and determining grindability indexes of standard reference samples is given in the appendix. (BLM)
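The calibration-chart lookup can be sketched as follows; the chart points below are invented, whereas a real chart is plotted from the four standard reference coal samples:

```python
import numpy as np

# Hypothetical calibration chart: weight (g) passing a No. 200 sieve for the
# standard reference coals vs. their certified Hardgrove Grindability Index.
chart_weight = np.array([7.0, 12.0, 19.0, 27.0])   # g passing No. 200 sieve
chart_hgi = np.array([40.0, 60.0, 80.0, 100.0])    # certified HGI

def grindability_index(weight_passing):
    # Read the plotted calibration curve by linear interpolation.
    return float(np.interp(weight_passing, chart_weight, chart_hgi))
```

In practice the chart is read graphically, but the operation is exactly this: interpolate the measured sieve weight against the reference-sample curve to obtain the HGI.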

  16. System and method for calibrating a rotary absolute position sensor

    NASA Technical Reports Server (NTRS)

    Davis, Donald R. (Inventor); Permenter, Frank Noble (Inventor); Radford, Nicolaus A (Inventor)

    2012-01-01

    A system includes a rotary device, a rotary absolute position (RAP) sensor generating encoded pairs of voltage signals describing positional data of the rotary device, a host machine, and an algorithm. The algorithm calculates calibration parameters usable to determine an absolute position of the rotary device using the encoded pairs, and is adapted for linearly-mapping an ellipse defined by the encoded pairs to thereby calculate the calibration parameters. A method of calibrating the RAP sensor includes measuring the rotary position as encoded pairs of voltage signals, linearly-mapping an ellipse defined by the encoded pairs to thereby calculate the calibration parameters, and calculating an absolute position of the rotary device using the calibration parameters. The calibration parameters include a positive definite matrix (A) and a center point (q) of the ellipse. The voltage signals may include an encoded sine and cosine of a rotary angle of the rotary device.
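The decoding step implied by the method can be sketched as follows, assuming the calibration parameters A (the positive definite matrix) and q (the ellipse center) have already been fitted from the encoded voltage pairs; the numeric values are invented:

```python
import numpy as np

# Sketch of the decoding step only: calibration parameters A (positive
# definite) and q are assumed to have been fitted to the voltage ellipse.
A = np.array([[1.2, 0.1],
              [0.1, 0.9]])   # hypothetical gain/coupling matrix
q = np.array([2.5, 2.4])     # hypothetical ellipse center (volts)

def encode(theta):
    # Forward model: encoded (cosine, sine) voltage pair lying on the ellipse.
    return A @ np.array([np.cos(theta), np.sin(theta)]) + q

def absolute_position(voltages):
    # Linearly map the ellipse back to the unit circle, then read the angle.
    c, s = np.linalg.solve(A, np.asarray(voltages) - q)
    return np.arctan2(s, c)
```

Because the mapping from circle to ellipse is affine (scale/skew A plus offset q), inverting it with the fitted parameters recovers the rotary angle absolutely, with no incremental counting.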

  17. Decoding of visual activity patterns from fMRI responses using multivariate pattern analyses and convolutional neural network.

    PubMed

    Zafar, Raheel; Kamel, Nidal; Naufal, Mohamad; Malik, Aamir Saeed; Dass, Sarat C; Ahmad, Rana Fayyaz; Abdullah, Jafri M; Reza, Faruque

    2017-01-01

    Decoding of human brain activity has always been a primary goal in neuroscience, especially with functional magnetic resonance imaging (fMRI) data. In recent years, the convolutional neural network (CNN) has become a popular method for feature extraction due to its higher accuracy; however, it requires substantial computation and training data. In this study, an algorithm is developed using multivariate pattern analysis (MVPA) and a modified CNN to decode brain responses to different images with a limited data set. Selection of significant features is an important part of fMRI data analysis, since it reduces the computational burden and improves prediction performance; significant features are selected using a t-test. MVPA uses machine learning algorithms to classify different brain states and helps in prediction during the task. A general linear model (GLM) is used to find the unknown parameters of every individual voxel, and classification is done using a multi-class support vector machine (SVM). The proposed MVPA-CNN-based algorithm is compared with a region-of-interest (ROI) based method and with MVPA-based estimated values. The proposed method showed better overall accuracy (68.6%) compared to ROI (61.88%) and estimation values (64.17%).

  18. SU-D-18C-02: Feasibility of Using a Short ASL Scan for Calibrating Cerebral Blood Flow Obtained From DSC-MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, P; Chang, T; Huang, K

    2014-06-01

Purpose: This study aimed to evaluate the feasibility of using a short arterial spin labeling (ASL) scan for calibrating dynamic susceptibility contrast (DSC) MRI in a group of patients with internal carotid artery (ICA) stenosis. Methods: Six patients with unilateral ICA stenosis were enrolled in the study on a 3T clinical MRI scanner. ASL cerebral blood flow (CBF) maps were calculated by averaging different numbers of dynamic points (N=1-45) acquired with a Q2TIPS sequence. For DSC perfusion analysis, an arterial input function was selected to derive the relative cerebral blood flow (rCBF) map and the delay (Tmax) map. A patient-specific calibration factor (CF) was calculated from the mean ASL- and DSC-CBF obtained from three different masks: (1) Tmax < 3 s, (2) a combined gray matter mask with mask 1, (3) mask 2 with large vessels removed. One CF value was created for each number of averages by using each of the three masks for calibrating the DSC-CBF map. The CF value for the largest number of averages (NL=45) was used to determine the acceptable range (<10%, <15%, and <20%) of CF values corresponding to the minimally acceptable number of averages (NS) for each patient. Results: Comparing DSC-CBF maps corrected by CF values of NL (CBFL) in the ACA, MCA and PCA territories, all masks resulted in smaller CBF on the ipsilateral side than the contralateral side of the MCA territory (p<.05). The values obtained from mask 1 were significantly different from those of mask 3 (p<.05). Using mask 3, the median values of NS were 4 (<10%), 2 (<15%) and 2 (<20%), with a worst-case (maximum NS) of 25, 4, and 4, respectively. Conclusion: This study found that reliable calibration of DSC-CBF can be achieved from a short pulsed ASL scan. We suggest using a mask based on the Tmax threshold, the inclusion of gray matter only, and the exclusion of large vessels for performing the calibration.
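The calibration step described here reduces to scaling the relative DSC map by a patient-specific factor computed over a reliable-tissue mask. A schematic sketch of that factor (function and variable names are ours, not from the paper):

```python
import numpy as np

def calibration_factor(asl_cbf, dsc_rcbf, tmax, gm_mask, vessel_mask,
                       tmax_thresh=3.0):
    """Patient-specific factor CF mapping relative DSC-CBF to absolute
    units, averaged over voxels with Tmax below threshold, restricted
    to gray matter, with large vessels excluded (the study's mask 3)."""
    mask = (tmax < tmax_thresh) & gm_mask & ~vessel_mask
    return asl_cbf[mask].mean() / dsc_rcbf[mask].mean()
```

The calibrated map is then simply `CF * dsc_rcbf`, with CF re-estimated per patient from however many ASL averages are available.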

  19. SU-F-T-284: The Effect of Linear Accelerator Output Variation On the Quality of Patient Specific Rapid Arc Verification Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandhu, G; Cao, F; Szpala, S

    2016-06-15

Purpose: The aim of the current study is to investigate the effect of machine output variation on the delivery of RapidArc verification plans. Methods: Three verification plans were generated using the Eclipse™ treatment planning system (V11.031) with a plan normalization value of 100.0%. These plans were delivered on the linear accelerators using the ArcCHECK™ device, with a machine output of 1.000 cGy/MU at the calibration point. These planned and delivered dose distributions were used as reference plans. Additional plans were created in Eclipse™ with normalization values ranging from 92.80% to 102% to mimic machine output ranging from 1.072 cGy/MU to 0.980 cGy/MU at the calibration point. These plans were compared against the reference plans using gamma indices (3%, 3 mm) and (2%, 2 mm). The calculated gammas were studied for their dependence on machine output. Plans were considered passed if 90% of the points satisfied the defined gamma criterion. Results: The gamma index (3%, 3 mm) was insensitive to output fluctuation within the output tolerance level (2% of calibration), and showed failures when the machine output error reached 3% or more. Gamma (2%, 2 mm) was found to be more sensitive to output variation than gamma (3%, 3 mm), showing failures when the output error reached 1.7% or more. The variation of the gamma indices with output also depended on plan parameters (e.g. MLC movement and gantry rotation). The percentage of points passing the gamma criteria decreased non-linearly with output variation beyond the tolerance level. Conclusion: Data from the limited plans and output conditions showed that gamma (2%, 2 mm) is more sensitive to output fluctuations than gamma (3%, 3 mm). Work in progress, including detailed data from a large number of plans and a wide range of output conditions, may allow quantitative conclusions about the dependence of the gamma indices on machine output, and hence about the effect on the quality of delivered RapidArc plans.
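The gamma comparison behind these pass rates combines a dose-difference criterion with a distance-to-agreement criterion. A simplified 1-D global-gamma sketch follows; clinical tools such as ArcCHECK software work in 3-D with interpolation, so this is illustrative only:

```python
import numpy as np

def gamma_1d(ref, evl, x, dose_tol=0.03, dist_tol=3.0):
    """Global 1-D gamma: for each reference point, the minimum over all
    evaluated points of sqrt((dose diff / tol)^2 + (distance / tol)^2).
    Dose differences are normalized to the reference maximum."""
    dd = (evl[None, :] - ref[:, None]) / (dose_tol * ref.max())
    dx = (x[None, :] - x[:, None]) / dist_tol
    return np.sqrt(dd ** 2 + dx ** 2).min(axis=1)

def pass_rate(gamma):
    """Percentage of points with gamma <= 1 (plan passes if >= 90%)."""
    return 100.0 * np.mean(gamma <= 1.0)
```

Scaling the evaluated profile by a simulated output error and re-running with (2%, 2 mm) instead of (3%, 3 mm) shows directly why the tighter criterion is the more sensitive detector of output drift.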

  20. Linear Discriminant Analysis Achieves High Classification Accuracy for the BOLD fMRI Response to Naturalistic Movie Stimuli

    PubMed Central

    Mandelkow, Hendrik; de Zwart, Jacco A.; Duyn, Jeff H.

    2016-01-01

Naturalistic stimuli like movies evoke complex perceptual processes, which are of great interest in the study of human cognition by functional MRI (fMRI). However, conventional fMRI analysis based on statistical parametric mapping (SPM) and the general linear model (GLM) is hampered by a lack of accurate parametric models of the BOLD response to complex stimuli. In this situation, statistical machine-learning methods, a.k.a. multivariate pattern analysis (MVPA), have received growing attention for their ability to generate stimulus response models in a data-driven fashion. However, machine-learning methods typically require large amounts of training data as well as computational resources. In the past, this has largely limited their application to fMRI experiments involving small sets of stimulus categories and small regions of interest in the brain. By contrast, the present study compares several classification algorithms, namely Nearest Neighbor (NN), Gaussian Naïve Bayes (GNB), and (regularized) Linear Discriminant Analysis (LDA), in terms of their classification accuracy in discriminating the global fMRI response patterns evoked by a large number of naturalistic visual stimuli presented as a movie. Results show that LDA regularized by principal component analysis (PCA) achieved high classification accuracies, above 90% on average for single fMRI volumes acquired 2 s apart during a 300 s movie (chance level 0.7% = 2 s/300 s). The largest source of classification errors was autocorrelation in the BOLD signal compounded by the similarity of consecutive stimuli. All classifiers performed best when given input features from a large region of interest comprising around 25% of the voxels that responded significantly to the visual stimulus. Consistent with this, the most informative principal components represented widespread distributions of co-activated brain regions that were similar between subjects and may represent functional networks.
In light of these results, the combination of naturalistic movie stimuli and classification analysis in fMRI experiments may prove to be a sensitive tool for the assessment of changes in natural cognitive processes under experimental manipulation. PMID:27065832
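The winning classifier, LDA regularized by projecting onto leading principal components, is straightforward to reproduce in outline. A synthetic scikit-learn sketch (not the study's data or code); PCA keeps fewer components than training samples so LDA's covariance estimate stays well-conditioned even when voxels far outnumber volumes:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n_classes, per_class, n_vox = 10, 12, 500
means = rng.normal(scale=1.5, size=(n_classes, n_vox))   # stimulus patterns
X = np.repeat(means, per_class, axis=0) \
    + rng.normal(size=(n_classes * per_class, n_vox))    # noisy volumes
y = np.repeat(np.arange(n_classes), per_class)

# PCA-then-LDA: dimensionality reduction acts as the regularizer.
clf = make_pipeline(PCA(n_components=40), LinearDiscriminantAnalysis())
acc = cross_val_score(clf, X, y, cv=4).mean()            # chance = 0.1
```

The same two-stage structure scales to whole-brain feature vectors, which is what makes it practical for the large stimulus sets used with movie paradigms.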

  1. SU-G-JeP2-12: Quantification of 3D Geometric Distortion for 1.5T and 3T MRI Scanners Used for Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stowe, M; Gupta, N; Raterman, B

Purpose: To quantify the magnitude of geometric distortion for MRI scanners and provide recommendations for MRI imaging for radiation therapy. Methods: A novel phantom, QUASAR MRID3D [Modus Medical Devices Inc.], was scanned to evaluate the level of 3D geometric distortion present in five MRI scanners used for radiation therapy in our department. The phantom was scanned using the body coil with 1 mm image slice thickness to acquire 3D images of the phantom body. The phantom was aligned to its geometric center for each scan, and the field of view was set to visualize the entire phantom. The dependence of distortion magnitude on distance from the imaging isocenter and on magnetic field strength (1.5T and 3T) was investigated. Additionally, the characteristics of distortion for Siemens and GE machines were compared. The image distortion for each scanner was quantified in terms of mean, standard deviation (STD), maximum distortion, and skewness. Results: The 3T and 1.5T scans show a similar absolute distortion, with a mean of 1.38 mm (0.33 mm STD) for 3T and 1.39 mm (0.34 mm STD) for 1.5T at a 100 mm radius from the isocenter. Some machines can have a distortion larger than 10 mm at a distance of 200 mm from the isocenter. The distortions are presented with plots of the x, y, and z directional components. Conclusion: The results indicate that quantification of MRI image distortion is crucial in radiation oncology for target and organ delineation and treatment planning. The magnitude of geometric distortion determines the margin needed for target contouring, which is usually neglected in the treatment planning process, especially for SRS/SBRT treatments. Understanding the 3D distribution of MRI image distortion will improve the accuracy of target delineation and, hence, treatment efficacy. MRI imaging with proper patient alignment to the isocenter is vital to reducing the effects of MRI distortion in treatment planning.
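The summary statistics used here (mean, SD, maximum, and skewness of displacement magnitudes) are simple to compute once phantom control points have been matched to their nominal positions. A sketch with invented array names:

```python
import numpy as np

def distortion_stats(measured, nominal):
    """Summarize 3-D geometric distortion from matched control points
    (N x 3 arrays of positions in mm)."""
    d = np.linalg.norm(measured - nominal, axis=1)   # displacement magnitudes
    m, s = d.mean(), d.std(ddof=1)
    skew = float(np.mean(((d - m) / s) ** 3)) if s > 0 else 0.0
    return {"mean": m, "std": s, "max": d.max(), "skew": skew}
```

Binning the displacement magnitudes by distance from isocenter before summarizing reproduces radius-dependent figures like the 1.38 mm mean reported at a 100 mm radius.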

  2. Calibrating random forests for probability estimation.

    PubMed

    Dankowski, Theresa; Ziegler, Andreas

    2016-09-30

Probabilities can be consistently estimated using random forests. It is, however, unclear how random forests should be updated to make predictions for other centers or at different time points. In this work, we present two approaches for updating random forests for probability estimation. The first method has been proposed by Elkan and may be used for updating any machine learning approach yielding consistent probabilities, so-called probability machines. The second approach is a new strategy specifically developed for random forests. Using the terminal nodes, which represent conditional probabilities, the random forest is first translated to logistic regression models. These are, in turn, used for re-calibration. The two updating strategies were compared in a simulation study and are illustrated with data from the German Stroke Study Collaboration. In most simulation scenarios, both methods led to similar improvements. In the simulation scenario in which the stricter assumptions of Elkan's method were not met, the logistic regression-based re-calibration approach for random forests outperformed Elkan's method. It also performed better on the stroke data than Elkan's method. The strength of Elkan's method is its general applicability to any probability machine. However, if the strict assumptions underlying this approach are not met, the logistic regression-based approach is preferable for updating random forests for probability estimation. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
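The logistic-regression re-calibration idea can be illustrated in miniature: train a random forest at one "center", then at a new center with a shifted baseline risk refit a logistic regression on the logit of the forest's predicted probability. This is a Platt-style sketch of the strategy, not the authors' terminal-node translation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# "Original center": outcome depends on two of five covariates.
X = rng.normal(size=(600, 5))
y = rng.binomial(1, sigmoid(X[:, 0] + 0.5 * X[:, 1]))
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# "New center": same covariate effects, higher baseline risk.
Xn = rng.normal(size=(300, 5))
yn = rng.binomial(1, sigmoid(1.0 + Xn[:, 0] + 0.5 * Xn[:, 1]))

# Re-calibrate: 1-D logistic regression on the logit of the
# forest's predicted probability at the new center.
p_rf = np.clip(rf.predict_proba(Xn)[:, 1], 1e-6, 1 - 1e-6)
z = np.log(p_rf / (1 - p_rf)).reshape(-1, 1)
p_cal = LogisticRegression().fit(z, yn).predict_proba(z)[:, 1]
```

Because the logistic regression includes an intercept, the re-calibrated probabilities match the new center's observed event rate on average, while the uncorrected forest keeps predicting the old baseline.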

  3. Emotional Intelligence: Advocating for the Softer Side of Leadership

    DTIC Science & Technology

    2013-03-01

handles social rejection and physical pain.30 In one study, patients in fMRI machines were told they were playing a game with two other players — a...operated more freely."43 Yet these results do not indicate the cognitive system can be allowed to take a backseat. In another study, fMRI showed that...The roots of empathy can be found at an early age, which implies empathy is hardwired into the primitive limbic system. One study observed a toddler

  4. Neuroimaging of neurocutaneous diseases.

    PubMed

    Nandigam, Kaveer; Mechtler, Laszlo L; Smirniotopoulos, James G

    2014-02-01

    An in-depth knowledge of the imaging characteristics of the common neurocutaneous diseases (NCD) described in this article will help neurologists understand the screening imaging modalities in these patients. The future of neuroimaging is geared towards developing and refining magnetic resonance imaging (MRI) sequences. The detection of tumors in NCD has greatly improved with availability of high-field strength 3T MRI machines. Neuroimaging will remain at the heart and soul of the multidisciplinary care of such complex diagnoses to guide early detection and monitor treatment. Published by Elsevier Inc.

  5. Testing the quality of images for permanent magnet desktop MRI systems using specially designed phantoms.

    PubMed

    Qiu, Jianfeng; Wang, Guozhu; Min, Jiao; Wang, Xiaoyan; Wang, Pengcheng

    2013-12-21

Our aim was to measure the performance of desktop magnetic resonance imaging (MRI) systems using specially designed phantoms, by testing imaging parameters and analysing the imaging quality. We designed multifunction phantoms with diameters of 18 and 60 mm for desktop MRI scanners in accordance with the American Association of Physicists in Medicine (AAPM) report no. 28. We scanned the phantoms with three permanent magnet 0.5 T desktop MRI systems, measured the MRI image parameters, and analysed imaging quality by comparing the data with the AAPM criteria and Chinese national standards. Image parameters included: resonance frequency, high contrast spatial resolution, low contrast object detectability, slice thickness, geometrical distortion, signal-to-noise ratio (SNR), and image uniformity. The image parameters of three desktop MRI machines could be measured using our specially designed phantoms, and most parameters were in line with MRI quality control criteria, including: resonance frequency, high contrast spatial resolution, low contrast object detectability, slice thickness, geometrical distortion, image uniformity and slice position accuracy. However, SNR was significantly lower than in some references. Imaging tests and quality control are necessary for desktop MRI systems, and should be performed with the applicable phantom and corresponding standards.
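Of the parameters listed, SNR is the one that fell short of reference values. One common way to estimate it from a single magnitude image is the mean of a uniform signal ROI over the noise SD inferred from a background (air) ROI, with the Rayleigh correction for magnitude data. This is a generic QC formula, not necessarily the exact procedure of the paper:

```python
import numpy as np

def phantom_snr(img, signal_mask, background_mask):
    """SNR estimate from a single magnitude image. The 0.655 factor
    converts the SD of Rayleigh-distributed background magnitude noise
    to the underlying Gaussian noise SD."""
    return 0.655 * img[signal_mask].mean() / img[background_mask].std()
```

Two-acquisition subtraction methods avoid the Rayleigh assumption and are preferred when scan time allows repeated identical acquisitions.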

  6. Control-group feature normalization for multivariate pattern analysis of structural MRI data using the support vector machine.

    PubMed

    Linn, Kristin A; Gaonkar, Bilwaj; Satterthwaite, Theodore D; Doshi, Jimit; Davatzikos, Christos; Shinohara, Russell T

    2016-05-15

Normalization of feature vector values is a common practice in machine learning. Generally, each feature is either scaled to the unit hypercube or standardized to zero mean and unit variance. Classification decisions based on support vector machines (SVMs) or other methods are sensitive to the specific normalization used on the features. In the context of multivariate pattern analysis using neuroimaging data, standardization effectively up- and down-weights features based on their individual variability. Since the standard approach uses the entire data set to guide the normalization, it utilizes the total variability of these features. This total variation is inevitably dependent on the amount of marginal separation between groups. Thus, such a normalization may attenuate the separability of the data in high dimensional space. In this work we propose an alternate approach that uses an estimate of the control-group standard deviation to normalize features before training. We study our proposed approach in the context of group classification using structural MRI data. We show that control-based normalization leads to better reproducibility of estimated multivariate disease patterns and improves the classifier performance in many cases. Copyright © 2016 Elsevier Inc. All rights reserved.
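The proposed normalization differs from standard z-scoring only in which rows contribute the statistics; a minimal sketch:

```python
import numpy as np

def control_normalize(X, is_control):
    """Z-score each feature using the mean and SD of the control group
    only, so that patient-control separation does not inflate the
    per-feature scale and attenuate separability."""
    mu = X[is_control].mean(axis=0)
    sd = X[is_control].std(axis=0, ddof=1)
    return (X - mu) / sd
```

Features on which patients differ strongly from controls keep their full separation in control-SD units, instead of being shrunk by their own group difference as under whole-sample standardization.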

  7. First steps in using machine learning on fMRI data to predict intrusive memories of traumatic film footage

    PubMed Central

    Clark, Ian A.; Niehaus, Katherine E.; Duff, Eugene P.; Di Simplicio, Martina C.; Clifford, Gari D.; Smith, Stephen M.; Mackay, Clare E.; Woolrich, Mark W.; Holmes, Emily A.

    2014-01-01

After psychological trauma, why do only some parts of the traumatic event return as intrusive memories while others do not? Intrusive memories are key to cognitive behavioural treatment for post-traumatic stress disorder, and an aetiological understanding is warranted. We present here analyses using multivariate pattern analysis (MVPA) and a machine learning classifier to investigate whether peri-traumatic brain activation was able to predict later intrusive memories (i.e. before they had happened). To provide a methodological basis for understanding the context of the current results, we first show how functional magnetic resonance imaging (fMRI) during an experimental analogue of trauma (a trauma film) via a prospective event-related design was able to capture an individual's later intrusive memories. Results showed widespread increases in brain activation at encoding when viewing a scene in the scanner that would later return as an intrusive memory in the real world. These fMRI results were replicated in a second study. While traditional mass univariate regression analysis highlighted an association between brain processing and symptomatology, this is not the same as prediction. Using MVPA and a machine learning classifier, it was possible to predict later intrusive memories across participants with 68% accuracy, and within a participant with 97% accuracy; i.e. the classifier could identify, out of multiple scenes, those that would later return as an intrusive memory. We also report here brain networks key in intrusive memory prediction. MVPA opens the possibility of decoding brain activity to reconstruct idiosyncratic cognitive events with relevance to understanding and predicting mental health symptoms. PMID:25151915

  8. Machine learning based analytics of micro-MRI trabecular bone microarchitecture and texture in type 1 Gaucher disease.

    PubMed

    Sharma, Gulshan B; Robertson, Douglas D; Laney, Dawn A; Gambello, Michael J; Terk, Michael

    2016-06-14

Type 1 Gaucher disease (GD) is an autosomal recessive lysosomal storage disease, affecting bone metabolism, structure and strength. Current bone assessment methods are not ideal. Semi-quantitative MRI scoring is unreliable, not standardized, and only evaluates bone marrow. DXA BMD is also used but is a limited predictor of bone fragility/fracture risk. Our purpose was to measure trabecular bone microarchitecture, as a biomarker of bone disease severity, in type 1 GD individuals with different GD genotypes and to apply machine learning based analytics to discriminate between GD patients and healthy individuals. Micro-MR imaging of the distal radius was performed on 20 type 1 GD patients and 10 healthy controls (HC). Fifteen stereological and textural measures (STM) were calculated from the MR images. General linear models demonstrated significant differences between GD and HC, and between GD genotypes. Stereological measures, the main contributors to the first two principal components (PCs), explained ~50% of data variation and were significantly different between males and females. Textural measures, the main contributors to subsequent PCs, were significantly different between GD patients and HC individuals. Textural measures also differed significantly between GD genotypes, and distinguished between GD patients with normal and pathologic DXA scores. PCA and SVM predictive analyses discriminated between GD and HC with a maximum accuracy of 73% and area under the ROC curve of 0.79. Trabecular STM differences can be quantified between GD patients and HC, and between GD sub-types, using micro-MRI and machine learning based analytics. Work is underway to expand this approach to evaluate GD disease burden and treatment efficacy. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Force supplementary comparison at SIM (SIM.M.F-S5), compression testing machines calibration, up to 200 kN

    NASA Astrophysics Data System (ADS)

    Cárdenas Moctezuma, A.; Torres Guzmán, J. C.

    2016-01-01

CENAM, through the Force and Pressure Division, organized a comparison on testing machines calibration, in compression mode. The participating laboratories were SIM National Institutes of Metrology from Colombia, Peru and Costa Rica, with CENAM, Mexico, as the pilot and reference laboratory. The results obtained by the laboratories are presented in this paper, as well as the analysis of compatibility. The main text of this paper is available as the Final Report, which appears in Appendix B of the BIPM key comparison database (kcdb.bipm.org). The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  10. Deep Learning for Brain MRI Segmentation: State of the Art and Future Directions.

    PubMed

    Akkus, Zeynettin; Galimzianova, Alfiia; Hoogi, Assaf; Rubin, Daniel L; Erickson, Bradley J

    2017-08-01

    Quantitative analysis of brain MRI is routine for many neurological diseases and conditions and relies on accurate segmentation of structures of interest. Deep learning-based segmentation approaches for brain MRI are gaining interest due to their self-learning and generalization ability over large amounts of data. As the deep learning architectures are becoming more mature, they gradually outperform previous state-of-the-art classical machine learning algorithms. This review aims to provide an overview of current deep learning-based segmentation approaches for quantitative brain MRI. First we review the current deep learning architectures used for segmentation of anatomical brain structures and brain lesions. Next, the performance, speed, and properties of deep learning approaches are summarized and discussed. Finally, we provide a critical assessment of the current state and identify likely future developments and trends.

  11. Creep Laboratory manual

    NASA Astrophysics Data System (ADS)

    Osgerby, S.; Loveday, M. S.

    1992-06-01

A manual for the NPL Creep Laboratory, a collective name given to two testing laboratories, the Uniaxial Creep Laboratory and the Advanced High Temperature Mechanical Testing Laboratory, is presented. The first laboratory is devoted to uniaxial creep testing and houses approximately 50 high sensitivity creep machines including 10 constant stress cam lever machines. The second laboratory houses a low cycle fatigue testing machine of 100 kN capacity driven by a servo-electric actuator, five machines for uniaxial tensile creep testing of engineering ceramics at temperatures up to 1600 °C, and an electronic creep machine. Details of the operational procedures for carrying out uniaxial creep testing are given. Calibration procedures to be followed in order to comply with the specifications laid down by British Standards, and to provide traceability back to the primary standards, are described.

  12. Impact of Human like Cues on Human Trust in Machines: Brain Imaging and Modeling Studies for Human-Machine Interactions

    DTIC Science & Technology

    2018-01-05

research team recorded fMRI or event-related potentials while subjects were playing two cognitive games. In the first experiment, human subjects played a...theory-of-mind bilateral game with two types of computerized agents: with or without humanlike cues. In the second experiment, human subjects played...a unilateral game in which the human subjects played the role of the Coach (or supervisor) while a computer agent played as the Player

  13. AUTOMATIC CALIBRATING SYSTEM FOR PRESSURE TRANSDUCERS

    DOEpatents

    Amonette, E.L.; Rodgers, G.W.

    1958-01-01

An automatic system for calibrating a number of pressure transducers is described. The disclosed embodiment of the invention uses a mercurial manometer to measure the air pressure applied to the transducer. A servo system follows the top of the mercury column as the pressure is changed and operates an analog-to-digital converter. This converter furnishes electrical pulses, each representing an increment of pressure change, to a reversible counter. The transducer furnishes a signal at each calibration point, causing an electric typewriter and a card-punch machine to record the pressure at that instant as indicated by the counter. Another counter keeps track of the calibration points so that a number identifying each point is recorded with the corresponding pressure. A special relay control system controls the pressure trend and programs the sequential calibration of several transducers.

  14. A High Performance Torque Sensor for Milling Based on a Piezoresistive MEMS Strain Gauge

    PubMed Central

    Qin, Yafei; Zhao, Yulong; Li, Yingxue; Zhao, You; Wang, Peng

    2016-01-01

In high speed and high precision machining applications, it is important to monitor the machining process in order to ensure high product quality. For this purpose, it is essential to develop a dynamometer with high sensitivity and high natural frequency suited to these conditions. This paper describes the design, calibration and performance of a milling torque sensor based on a piezoresistive MEMS strain gauge. A detailed design study is carried out to optimize the two mutually contradictory indicators, sensitivity and natural frequency. The developed torque sensor principally consists of a thin-walled cylinder and a piezoresistive MEMS strain gauge bonded on the surface of the sensing element where the shear strain is maximum. The strain gauge includes eight piezoresistors, four of which are connected in a full Wheatstone bridge circuit, which is used to measure the applied torque during machining. Experimental static calibration results show that the sensitivity of the torque sensor has been improved to 0.13 mV/Nm. A modal impact test indicates that the natural frequency of the torque sensor reaches 1216 Hz, which is suitable for high speed machining processes. The dynamic test results indicate that the developed torque sensor is stable and practical for monitoring the milling process. PMID:27070620
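With the statically calibrated sensitivity reported above (0.13 mV/Nm), bridge readings convert to torque by a single scale factor. A trivial sketch; the zero-offset handling is our assumption, not a detail from the paper:

```python
import numpy as np

SENSITIVITY_MV_PER_NM = 0.13   # static calibration result from the paper

def torque_from_bridge(v_out_mv, zero_offset_mv=0.0):
    """Convert full-bridge output (mV) to applied torque (Nm) using the
    calibrated sensitivity; zero_offset_mv is the no-load reading."""
    return (np.asarray(v_out_mv, dtype=float) - zero_offset_mv) \
        / SENSITIVITY_MV_PER_NM
```

In practice the no-load offset is re-measured before each run, since bridge imbalance drifts with temperature.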

  15. Individualized prediction of illness course at the first psychotic episode: a support vector machine MRI study.

    PubMed

    Mourao-Miranda, J; Reinders, A A T S; Rocha-Rego, V; Lappin, J; Rondina, J; Morgan, C; Morgan, K D; Fearon, P; Jones, P B; Doody, G A; Murray, R M; Kapur, S; Dazzan, P

    2012-05-01

    To date, magnetic resonance imaging (MRI) has made little impact on the diagnosis and monitoring of psychoses in individual patients. In this study, we used a support vector machine (SVM) whole-brain classification approach to predict future illness course at the individual level from MRI data obtained at the first psychotic episode. One hundred patients at their first psychotic episode and 91 healthy controls had an MRI scan. Patients were re-evaluated 6.2 years (s.d.=2.3) later, and were classified as having a continuous, episodic or intermediate illness course. Twenty-eight subjects with a continuous course were compared with 28 patients with an episodic course and with 28 healthy controls. We trained each SVM classifier independently for the following contrasts: continuous versus episodic, continuous versus healthy controls, and episodic versus healthy controls. At baseline, patients with a continuous course were already distinguishable, with significance above chance level, from both patients with an episodic course (p=0.004, sensitivity=71, specificity=68) and healthy individuals (p=0.01, sensitivity=71, specificity=61). Patients with an episodic course could not be distinguished from healthy individuals. When patients with an intermediate outcome were classified according to the discriminating pattern episodic versus continuous, 74% of those who did not develop other episodes were classified as episodic, and 65% of those who did develop further episodes were classified as continuous (p=0.035). We provide preliminary evidence of MRI application in the individualized prediction of future illness course, using a simple and automated SVM pipeline. When replicated and validated in larger groups, this could enable targeted clinical decisions based on imaging data.

  16. Individualized prediction of illness course at the first psychotic episode: a support vector machine MRI study

    PubMed Central

    Mourao-Miranda, J.; Reinders, A. A. T. S.; Rocha-Rego, V.; Lappin, J.; Rondina, J.; Morgan, C.; Morgan, K. D.; Fearon, P.; Jones, P. B.; Doody, G. A.; Murray, R. M.; Kapur, S.; Dazzan, P.

    2012-01-01

    Background To date, magnetic resonance imaging (MRI) has made little impact on the diagnosis and monitoring of psychoses in individual patients. In this study, we used a support vector machine (SVM) whole-brain classification approach to predict future illness course at the individual level from MRI data obtained at the first psychotic episode. Method One hundred patients at their first psychotic episode and 91 healthy controls had an MRI scan. Patients were re-evaluated 6.2 years (s.d.=2.3) later, and were classified as having a continuous, episodic or intermediate illness course. Twenty-eight subjects with a continuous course were compared with 28 patients with an episodic course and with 28 healthy controls. We trained each SVM classifier independently for the following contrasts: continuous versus episodic, continuous versus healthy controls, and episodic versus healthy controls. Results At baseline, patients with a continuous course were already distinguishable, with significance above chance level, from both patients with an episodic course (p=0.004, sensitivity=71, specificity=68) and healthy individuals (p=0.01, sensitivity=71, specificity=61). Patients with an episodic course could not be distinguished from healthy individuals. When patients with an intermediate outcome were classified according to the discriminating pattern episodic versus continuous, 74% of those who did not develop other episodes were classified as episodic, and 65% of those who did develop further episodes were classified as continuous (p=0.035). Conclusions We provide preliminary evidence of MRI application in the individualized prediction of future illness course, using a simple and automated SVM pipeline. When replicated and validated in larger groups, this could enable targeted clinical decisions based on imaging data. PMID:22059690

  17. Unsupervised nonlinear dimensionality reduction machine learning methods applied to multiparametric MRI in cerebral ischemia: preliminary results

    NASA Astrophysics Data System (ADS)

    Parekh, Vishwa S.; Jacobs, Jeremy R.; Jacobs, Michael A.

    2014-03-01

The evaluation and treatment of acute cerebral ischemia requires a technique that can determine the total area of tissue at risk for infarction using diagnostic magnetic resonance imaging (MRI) sequences. Typical MRI data sets consist of T1- and T2-weighted imaging (T1WI, T2WI) along with advanced MRI parameters of diffusion-weighted imaging (DWI) and perfusion weighted imaging (PWI) methods. Each of these parameters has distinct radiological-pathological meaning. For example, DWI interrogates the movement of water in the tissue and PWI gives an estimate of the blood flow; both are critical measures during the evolution of stroke. In order to integrate these data and estimate the tissue at risk or damaged, we have developed advanced machine learning methods based on unsupervised non-linear dimensionality reduction (NLDR) techniques. NLDR methods are a class of algorithms that use mathematically defined manifolds for statistical sampling of multidimensional classes to generate a discrimination rule of guaranteed statistical accuracy; they can generate a two- or three-dimensional map, which represents the prominent structures of the data and provides an embedded image of meaningful low-dimensional structures hidden in their high-dimensional observations. In this manuscript, we develop NLDR methods on high-dimensional MRI data sets of preclinical animals and clinical patients with stroke. On analyzing the performance of these methods, we observed a high degree of similarity between the multiparametric embedded images from the NLDR methods and the ADC and perfusion maps. It was also observed that the embedded scattergram of abnormal (infarcted or at-risk) tissue can be visualized, providing a mechanism for automatic methods to delineate potential stroke volumes and early tissue at risk.
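The kind of low-dimensional embedding described can be sketched with an off-the-shelf NLDR method. Here Isomap from scikit-learn is applied to synthetic voxels whose four parameters (T1WI, T2WI, ADC from DWI, CBF from PWI) interpolate between invented "healthy" and "infarct core" values; all numbers are illustrative, not clinical:

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(3)
t = rng.uniform(0.0, 1.0, 300)                 # 0 = healthy ... 1 = infarct core
healthy = np.array([500.0, 400.0, 0.9, 50.0])  # T1WI, T2WI, ADC, CBF (made up)
infarct = np.array([480.0, 700.0, 0.5, 15.0])
X = healthy + t[:, None] * (infarct - healthy)
X += rng.normal(scale=[2.0, 6.0, 0.01, 0.8], size=X.shape)

Xs = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize each parameter
emb = Isomap(n_neighbors=10, n_components=2).fit_transform(Xs)
r = np.corrcoef(emb[:, 0], t)[0, 1]            # leading axis tracks tissue state
```

Standardizing first matters because the parameters live on wildly different scales; without it the embedding is dominated by the T1/T2 intensities alone.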

  18. Radiological and microwave Protection at NRL, January - December 1983

    DTIC Science & Technology

    1984-06-27

    reduced to background. 18 Surveys with TLD badges were made on pulsed electron beam machines in Buildings 101 and A68 throughout the year. The Gamble...calibration of radiation dosimetry systems required by the Laboratory’s radiological safety program, or by other Laboratory or Navy groups. The Section...provides consultation and assistance on dosimetry problems to the Staff, Laboratory, and Navy. The Section maintains and calibrates fixed-field radiac

  19. Calibration procedures of the Tore-Supra infrared endoscopes

    NASA Astrophysics Data System (ADS)

    Desgranges, C.; Jouve, M.; Balorin, C.; Reichle, R.; Firdaouss, M.; Lipa, M.; Chantant, M.; Gardarein, J. L.; Saille, A.; Loarer, T.

    2018-01-01

    Five endoscopes equipped with infrared cameras working in the medium infrared range (3-5 μm) are installed on the controlled thermonuclear fusion research device Tore-Supra. These endoscopes aim at monitoring the surface temperature of the plasma facing components to prevent their overheating. Signals delivered by the infrared cameras through the endoscopes are analysed and used, on the one hand, in a real-time feedback control loop acting on the plasma heating systems to decrease the surface temperatures of the plasma facing components when necessary, and on the other hand for physics studies such as determination of the incoming heat flux. To fulfil these two roles, very accurate knowledge of the absolute surface temperatures is mandatory. Consequently, the infrared endoscopes must be calibrated through a very careful procedure. This means determining their transmission coefficients, which is a delicate operation. Methods to calibrate the infrared endoscopes during the shutdown period of the Tore-Supra machine are presented. As these methods cannot track a possible evolution of the transmittances during operation, an in-situ method is also presented. It permits validation of the calibration performed in the laboratory as well as monitoring of its evolution during machine operation. This is made possible by the use of the endoscope shutter and a dedicated plasma scenario developed to heat it. Possible improvements of this method are briefly discussed.

  20. NMR, MRI, and spectroscopic MRI in inhomogeneous fields

    DOEpatents

    Demas, Vasiliki; Pines, Alexander; Martin, Rachel W; Franck, John; Reimer, Jeffrey A

    2013-12-24

    A method for locally creating effectively homogeneous or "clean" magnetic field gradients (of high uniformity) for imaging (with NMR, MRI, or spectroscopic MRI) in both in-situ and ex-situ systems with highly inhomogeneous field strength. The method of imaging comprises: a) providing a functional approximation of an inhomogeneous static magnetic field strength B₀(r) at a spatial position r; b) providing a temporal functional approximation of G_shim(t) with i basis functions and j variables for each basis function, resulting in v_ij variables; c) providing a measured value Ω, the temporally accumulated dephasing due to the inhomogeneities of B₀(r); and d) minimizing the difference between Ω and the local dephasing angle

        φ(r, t) = γ ∫₀ᵗ √( |B₁(r, t′)|² + ( r·G_shim(t′) + ‖B₀(r)‖ Δω(r, t′)/γ )² ) dt′

    by varying the v_ij variables to form a set of minimized v_ij variables. The method requires calibration of the static fields prior to minimization, but may thereafter be implemented without such calibration; it may be used in open or closed systems, and potentially in portable systems.
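As a toy illustration of the minimization step only (a simplified 1-D stand-in with made-up basis functions and field terms, not the patented formulation), one can vary the shim coefficients v_i numerically to drive the accumulated dephasing toward a target:

```python
import numpy as np
from scipy.optimize import minimize

gamma = 1.0                      # gyromagnetic ratio (arbitrary units)
t = np.linspace(0.0, 1.0, 50)    # time samples over one shim interval
dt = t[1] - t[0]
r = np.linspace(-1.0, 1.0, 9)    # 1-D spatial positions

# Inhomogeneous static field: a linear deviation standing in for B0(r).
b0_dev = 0.3 * r

# Basis functions for the time-varying shim gradient G_shim(t),
# with one coefficient v_i per basis function.
basis = np.stack([np.ones_like(t), t, t**2])   # i = 3 basis functions

def dephasing(v):
    """Accumulated local dephasing phi(r) for shim coefficients v."""
    g = v @ basis                               # G_shim(t) on the time grid
    # phase-accrual rate at each position: r*G_shim(t) + B0 deviation
    rates = np.outer(r, g) + b0_dev[:, None]
    return gamma * rates.sum(axis=1) * dt       # time integral per position

omega = 0.0   # target: no net dephasing anywhere

def cost(v):
    return np.sum((dephasing(v) - omega) ** 2)

res = minimize(cost, x0=np.zeros(3), method="Nelder-Mead")
print(cost(np.zeros(3)), cost(res.x))
```

In this toy setup a linear field deviation can be cancelled by the r·G_shim term, so the residual dephasing shrinks well below its unshimmed value; the patent's real problem adds the B₁ term, 3-D positions, and many more v_ij variables.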

  1. Random forest feature selection, fusion and ensemble strategy: Combining multiple morphological MRI measures to discriminate among healthy elderly, MCI, cMCI and Alzheimer's disease patients: From the Alzheimer's disease neuroimaging initiative (ADNI) database.

    PubMed

    Dimitriadis, S I; Liparas, Dimitris; Tsolaki, Magda N

    2018-05-15

    In the era of computer-assisted diagnostic tools for various brain diseases, Alzheimer's disease (AD) covers a large percentage of neuroimaging research, with the main scope being its use in daily practice. However, there has been no study attempting to simultaneously discriminate among Healthy Controls (HC), early mild cognitive impairment (MCI), late MCI (cMCI) and stable AD, using features derived from a single modality, namely MRI. Based on preprocessed MRI images from the organizers of a neuroimaging challenge, we attempted to quantify the prediction accuracy of multiple morphological MRI features to simultaneously discriminate among HC, MCI, cMCI and AD. We explored the efficacy of a novel scheme that includes multiple feature selections via Random Forest from subsets of the whole set of features (e.g. whole set, left/right hemisphere, etc.), Random Forest classification using a fusion approach, and ensemble classification via majority voting. From the ADNI database, 60 HC, 60 MCI, 60 cMCI and 60 AD subjects were used as a training set with known labels. An extra dataset of 160 subjects (HC: 40, MCI: 40, cMCI: 40 and AD: 40) was used as an external blind validation dataset to evaluate the proposed machine learning scheme. On the blind dataset, we achieved a four-class classification accuracy of 61.9% by combining MRI-based features with a Random Forest-based ensemble strategy. We achieved the best classification accuracy of all teams that participated in this neuroimaging competition. The results demonstrate the effectiveness of the proposed scheme in simultaneously discriminating among four groups using morphological MRI features, for the very first time in the literature. Hence, the proposed machine learning scheme can be used to define single- and multi-modal biomarkers for AD. Copyright © 2017 Elsevier B.V. All rights reserved.
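A minimal sketch of the subset-plus-majority-voting idea, using synthetic data and scikit-learn in place of the authors' actual pipeline (the particular subsets and the top-10 importance cut are illustrative assumptions):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for morphological MRI features of four groups
# (HC, MCI, cMCI, AD); the column halves mimic left/right-hemisphere subsets.
X, y = make_classification(n_samples=240, n_features=40, n_informative=12,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Hypothetical feature subsets: whole set, "left" half, "right" half.
subsets = [np.arange(40), np.arange(20), np.arange(20, 40)]

preds = []
for cols in subsets:
    # Random Forest importances double as the per-subset feature selection.
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    rf.fit(X_tr[:, cols], y_tr)
    keep = cols[np.argsort(rf.feature_importances_)[-10:]]  # top-10 features
    # Refit on the selected features and predict the held-out cases.
    rf2 = RandomForestClassifier(n_estimators=200, random_state=0)
    rf2.fit(X_tr[:, keep], y_tr)
    preds.append(rf2.predict(X_te[:, keep]))

# Ensemble by majority voting across the subset classifiers.
votes = np.stack(preds)
final = np.array([np.bincount(col).argmax() for col in votes.T])
accuracy = (final == y_te).mean()
print(round(accuracy, 2))
```

Ties in a three-voter, four-class vote fall back to the lowest label here; a production version would break ties by class probability instead.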

  2. Robot calibration with a photogrammetric on-line system using reseau scanning cameras

    NASA Astrophysics Data System (ADS)

    Diewald, Bernd; Godding, Robert; Henrich, Andreas

    1994-03-01

    The possibility of testing and calibrating industrial robots is becoming more and more important for manufacturers and users of such systems. Demanding applications, such as off-line programming techniques or the use of robots as measuring machines, are impossible without a preceding robot calibration. At the LPA an efficient calibration technique has been developed. Instead of modeling the kinematic behavior of a robot, the new method describes the pose deviations within a user-defined section of the robot's working space. High-precision determination of the 3D coordinates of defined path positions is necessary for calibration and can be done by digital photogrammetric systems. For the calibration of a robot at the LPA, a digital photogrammetric system with three Rollei Reseau Scanning Cameras was used. This system allows automatic measurement of a large number of robot poses with high accuracy.

  3. Exploiting Task Constraints for Self-Calibrated Brain-Machine Interface Control Using Error-Related Potentials

    PubMed Central

    Iturrate, Iñaki; Grizou, Jonathan; Omedes, Jason; Oudeyer, Pierre-Yves; Lopes, Manuel; Montesano, Luis

    2015-01-01

    This paper presents a new approach to self-calibrating BCI for reaching tasks using error-related potentials. The proposed method exploits task constraints to simultaneously calibrate the decoder and control the device, using a robust likelihood function and an ad-hoc planner to cope with the large uncertainty resulting from the unknown task and decoder. The method has been evaluated in closed-loop online experiments with 8 users using a previously proposed BCI protocol for reaching tasks over a grid. The results show that it is possible to have usable BCI control from the beginning of the experiment without any prior calibration. Furthermore, comparisons with simulations and previous results obtained using standard calibration suggest that both the quality of the recorded signals and the performance of the system were comparable to those obtained with a standard calibration approach. PMID:26131890

  4. Game controller modification for fMRI hyperscanning experiments in a cooperative virtual reality environment.

    PubMed

    Trees, Jason; Snider, Joseph; Falahpour, Maryam; Guo, Nick; Lu, Kun; Johnson, Douglas C; Poizner, Howard; Liu, Thomas T

    2014-01-01

    Hyperscanning, an emerging technique in which data from multiple interacting subjects' brains are simultaneously recorded, has become an increasingly popular way to address complex topics, such as "theory of mind." However, most previous fMRI hyperscanning experiments have been limited to abstract social interactions (e.g. phone conversations). Our new method utilizes a virtual reality (VR) environment used for military training, Virtual Battlespace 2 (VBS2), to create realistic avatar-avatar interactions and cooperative tasks. To control the virtual avatar, subjects use an MRI-compatible PlayStation 3 game controller, modified by removing all extraneous metal components and replacing any necessary ones with 3D-printed plastic models. Control of both scanners' operation is initiated by a VBS2 plugin to sync scanner time to the known time within the VR environment. Our modifications include:
    • Modification of the game controller to be MRI compatible.
    • Design of the VBS2 virtual environment for cooperative interactions.
    • Syncing two MRI machines for simultaneous recording.

  5. Brain MRI analysis for Alzheimer's disease diagnosis using an ensemble system of deep convolutional neural networks.

    PubMed

    Islam, Jyoti; Zhang, Yanqing

    2018-05-31

    Alzheimer's disease is an incurable, progressive neurological brain disorder. Earlier detection of Alzheimer's disease can help with proper treatment and prevent brain tissue damage. Several statistical and machine learning models have been exploited by researchers for Alzheimer's disease diagnosis. Analyzing magnetic resonance imaging (MRI) is a common practice for Alzheimer's disease diagnosis in clinical research. Detection of Alzheimer's disease is exacting due to the similarity in Alzheimer's disease MRI data and standard healthy MRI data of older people. Recently, advanced deep learning techniques have successfully demonstrated human-level performance in numerous fields including medical image analysis. We propose a deep convolutional neural network for Alzheimer's disease diagnosis using brain MRI data analysis. While most of the existing approaches perform binary classification, our model can identify different stages of Alzheimer's disease and obtains superior performance for early-stage diagnosis. We conducted ample experiments to demonstrate that our proposed model outperformed comparative baselines on the Open Access Series of Imaging Studies dataset.

  6. Game controller modification for fMRI hyperscanning experiments in a cooperative virtual reality environment

    PubMed Central

    Trees, Jason; Snider, Joseph; Falahpour, Maryam; Guo, Nick; Lu, Kun; Johnson, Douglas C.; Poizner, Howard; Liu, Thomas T.

    2014-01-01

    Hyperscanning, an emerging technique in which data from multiple interacting subjects' brains are simultaneously recorded, has become an increasingly popular way to address complex topics, such as "theory of mind." However, most previous fMRI hyperscanning experiments have been limited to abstract social interactions (e.g. phone conversations). Our new method utilizes a virtual reality (VR) environment used for military training, Virtual Battlespace 2 (VBS2), to create realistic avatar-avatar interactions and cooperative tasks. To control the virtual avatar, subjects use an MRI-compatible PlayStation 3 game controller, modified by removing all extraneous metal components and replacing any necessary ones with 3D-printed plastic models. Control of both scanners' operation is initiated by a VBS2 plugin to sync scanner time to the known time within the VR environment. Our modifications include:
    • Modification of the game controller to be MRI compatible.
    • Design of the VBS2 virtual environment for cooperative interactions.
    • Syncing two MRI machines for simultaneous recording.
    PMID:26150964

  7. Mathematical calibration procedure of a capacitive sensor-based indexed metrology platform

    NASA Astrophysics Data System (ADS)

    Brau-Avila, A.; Santolaria, J.; Acero, R.; Valenzuela-Galvan, M.; Herrera-Jimenez, V. M.; Aguilar, J. J.

    2017-03-01

    The demand for faster and more reliable measuring tasks for the control and quality assurance of modern production systems has created new challenges for the field of coordinate metrology. Thus, the search for new solutions in coordinate metrology systems and the need for the development of existing ones persist. One example of such a system is the portable coordinate measuring machine (PCMM), whose use in industry has increased considerably in recent years, mostly due to its flexibility for accomplishing in-line measuring tasks as well as its reduced cost and operational advantages compared to traditional coordinate measuring machines. Nevertheless, PCMMs have a significant drawback derived from the techniques applied in the verification and optimization procedures of their kinematic parameters. These techniques are based on capturing data with the measuring instrument from a calibrated gauge object, fixed successively in various positions so that most of the instrument's measuring volume is covered, which results in time-consuming, tedious and expensive verification and optimization procedures. In this work the mathematical calibration procedure of a capacitive sensor-based indexed metrology platform (IMP) is presented. This calibration procedure is based on the readings and geometric features of six capacitive sensors and their targets with nanometer resolution. The final goal of the IMP calibration procedure is to optimize the geometric features of the capacitive sensors and their targets in order to use the optimized data in the verification procedures of PCMMs.

  8. Machine learning-based analysis of MR radiomics can help to improve the diagnostic performance of PI-RADS v2 in clinically relevant prostate cancer.

    PubMed

    Wang, Jing; Wu, Chen-Jiang; Bao, Mei-Ling; Zhang, Jing; Wang, Xiao-Ning; Zhang, Yu-Dong

    2017-10-01

    To investigate whether machine learning-based analysis of MR radiomics can help improve the performance of PI-RADS v2 in clinically relevant prostate cancer (PCa). This IRB-approved study included 54 patients with PCa undergoing multi-parametric (mp) MRI before prostatectomy. Imaging analysis was performed on 54 tumours, 47 normal peripheral zone (PZ) and 48 normal transitional zone (TZ) regions based on histological-radiological correlation. Mp-MRI was scored via PI-RADS, and quantified by measuring radiomic features. A predictive model was developed using a novel support vector machine trained with: (i) radiomics, (ii) PI-RADS scores, (iii) radiomics and PI-RADS scores. Paired comparison was made via ROC analysis. For PCa versus normal TZ, the model trained with radiomics had a significantly higher area under the ROC curve (Az) (0.955 [95% CI 0.923-0.976]) than PI-RADS (Az: 0.878 [0.834-0.914], p < 0.001). The difference in Az between them was not significant for PCa versus PZ (0.972 [0.945-0.988] vs. 0.940 [0.905-0.965], p = 0.097). When radiomics was added, the performance of PI-RADS was significantly improved for PCa versus PZ (Az: 0.983 [0.960-0.995]) and PCa versus TZ (Az: 0.968 [0.940-0.985]). Machine learning analysis of MR radiomics can help improve the performance of PI-RADS in clinically relevant PCa.
    • Machine learning analysis of MR radiomics outperformed PI-RADS in TZ cancer.
    • Adding MR radiomics significantly improved the performance of PI-RADS.
    • DKI-derived Dapp and Kapp were two strong markers for the diagnosis of PCa.
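The "add radiomics to the reader score and compare by ROC" design can be sketched as follows, on entirely synthetic data (the SVM settings, the 5-level score model, and the cross-validation scheme are illustrative assumptions, not the study's protocol):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in: 10 radiomic features per lesion plus a 5-level
# "PI-RADS-like" reader score loosely correlated with the label.
X_rad, y = make_classification(n_samples=200, n_features=10,
                               n_informative=5, random_state=0)
score = np.clip(np.round(3 + 1.2 * (y - 0.5) + rng.normal(0, 1, 200)), 1, 5)

svm = make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))

# Cross-validated probabilities for: (i) score alone, (ii) radiomics + score.
p_score = cross_val_predict(svm, score.reshape(-1, 1), y,
                            cv=5, method="predict_proba")[:, 1]
p_both = cross_val_predict(svm, np.column_stack([X_rad, score]), y,
                           cv=5, method="predict_proba")[:, 1]

# Paired comparison by area under the ROC curve (the study's Az).
az_score = roc_auc_score(y, p_score)
az_both = roc_auc_score(y, p_both)
print(round(az_score, 3), round(az_both, 3))
```

A rigorous comparison would additionally test whether the Az difference is significant (e.g. a DeLong test), which scikit-learn does not provide out of the box.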

  9. Invited Article: A novel calibration method for the JET real-time far infrared polarimeter and integration of polarimetry-based line-integrated density measurements for machine protection of a fusion plant.

    PubMed

    Boboc, A; Bieg, B; Felton, R; Dalley, S; Kravtsov, Yu

    2015-09-01

    In this paper, we present the implementation of a new calibration for the JET real-time polarimeter based on the complex amplitude ratio technique, together with a new self-validation mechanism for the data. This allowed easy integration of the polarimetry measurements into the JET plasma density control (gas feedback control) as well as the machine protection systems (neutral beam injection heating safety interlocks). The new addition was used successfully during the 2014 JET campaign and it is envisaged that it will operate routinely from the 2015 campaign onwards in any plasma condition (including ITER-relevant scenarios). This mode of operation elevated the importance of polarimetry as a diagnostic tool in view of future fusion experiments.

  10. Invited Article: A novel calibration method for the JET real-time far infrared polarimeter and integration of polarimetry-based line-integrated density measurements for machine protection of a fusion plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boboc, A., E-mail: Alexandru.Boboc@ccfe.ac.uk; Felton, R.; Dalley, S.

    2015-09-15

    In this paper, we present the implementation of a new calibration for the JET real-time polarimeter based on the complex amplitude ratio technique, together with a new self-validation mechanism for the data. This allowed easy integration of the polarimetry measurements into the JET plasma density control (gas feedback control) as well as the machine protection systems (neutral beam injection heating safety interlocks). The new addition was used successfully during the 2014 JET campaign and it is envisaged that it will operate routinely from the 2015 campaign onwards in any plasma condition (including ITER-relevant scenarios). This mode of operation elevated the importance of polarimetry as a diagnostic tool in view of future fusion experiments.

  11. Radiomics based targeted radiotherapy planning (Rad-TRaP): a computational framework for prostate cancer treatment planning with MRI.

    PubMed

    Shiradkar, Rakesh; Podder, Tarun K; Algohary, Ahmad; Viswanath, Satish; Ellis, Rodney J; Madabhushi, Anant

    2016-11-10

    Radiomics, or computer-extracted texture features, have been shown to achieve superior performance to multiparametric MRI (mpMRI) signal intensities alone in targeting prostate cancer (PCa) lesions. Radiomics along with deformable co-registration tools can be used to develop a framework to generate targeted focal radiotherapy treatment plans. The Rad-TRaP framework comprises three distinct modules. Firstly, a module for radiomics-based detection of PCa lesions on mpMRI via a feature-enabled machine learning classifier. The second module comprises a multi-modal deformable co-registration scheme to map tissue, organ, and delineated target volumes from MRI onto CT. Finally, the third module involves generation of a radiomics-based dose plan on MRI for brachytherapy and on CT for EBRT using the target delineations transferred from the MRI to the CT. The Rad-TRaP framework was evaluated using a retrospective cohort of 23 patient studies from two different institutions. 11 patients from the first institution were used to train a radiomics classifier, which was used to detect tumor regions in 12 patients from the second institution. The ground truth cancer delineations for training the machine learning classifier were made by an experienced radiation oncologist using mpMRI, knowledge of biopsy location and radiology reports. The detected tumor regions were used to generate treatment plans for brachytherapy using mpMRI, and tumor regions mapped from MRI to CT to generate corresponding treatment plans for EBRT. For each of EBRT and brachytherapy, three dose plans were generated: whole gland homogeneous ([Formula: see text]), which is the current clinical standard; radiomics-based focal ([Formula: see text]); and whole gland with a radiomics-based focal boost ([Formula: see text]). Comparison of [Formula: see text] against conventional [Formula: see text] revealed that targeted focal brachytherapy would result in a marked reduction in dosage to the OARs while ensuring that the prescribed dose is delivered to the lesions. [Formula: see text] resulted in only a marginal increase in dosage to the OARs compared to [Formula: see text]. A similar trend was observed in the case of EBRT with [Formula: see text] and [Formula: see text] compared to [Formula: see text]. A radiotherapy planning framework to generate targeted focal treatment plans has been presented. The focal treatment plans generated using the framework showed a reduction in dosage to the organs at risk and a boosted dose delivered to the cancerous lesions.

  12. Modeling and Calibration of a Novel One-Mirror Galvanometric Laser Scanner

    PubMed Central

    Yu, Chengyi; Chen, Xiaobo; Xi, Juntong

    2017-01-01

    A laser stripe sensor has limited application when a point cloud of geometric samples on the surface of the object needs to be collected, so a galvanometric laser scanner is designed by using a one-mirror galvanometer element as its mechanical device to drive the laser stripe to sweep along the object. A novel mathematical model is derived for the proposed galvanometer laser scanner without any position assumptions and then a model-driven calibration procedure is proposed. Compared with available model-driven approaches, the influence of machining and assembly errors is considered in the proposed model. Meanwhile, a plane-constraint-based approach is proposed to extract a large number of calibration points effectively and accurately to calibrate the galvanometric laser scanner. Repeatability and accuracy of the galvanometric laser scanner are evaluated on the automobile production line to verify the efficiency and accuracy of the proposed calibration method. Experimental results show that the proposed calibration approach yields similar measurement performance compared with a look-up table calibration method. PMID:28098844

  13. Assessment of the extent of pituitary macroadenomas resection in immediate postoperative MRI.

    PubMed

    Taberner López, E; Vañó Molina, M; Calatayud Gregori, J; Jornet Sanz, M; Jornet Fayos, J; Pastor Del Campo, A; Caño Gómez, A; Mollá Olmos, E

    To evaluate whether the extent of pituitary macroadenoma resection can be determined on immediate postoperative pituitary magnetic resonance imaging (MRI). MRI studies of patients with pituitary macroadenomas from January 2010 to October 2014 were reviewed. Patients who had a diagnostic MRI, an immediate post-surgical MRI and at least one follow-up MRI were included. We evaluated whether the findings of the immediate post-surgical MRI and the subsequent MRI were concordant. Cases without follow-up studies and reoperations for recurrence were excluded. The degree of tumour resection was divided into groups: total resection, partial resection and doubtful. All MRI studies were performed on a 1.5T machine following the same protocol sequences in all cases: a morphological part, a dynamic IV contrast part and a late contrast part. Of the 73 cases included, immediate postoperative pituitary MRI was interpreted as total resection in 38 cases and residual tumour in 28 cases, with uncertainty between residual tumour and inflammatory changes in 7 cases. Follow-up MRI identified total resection in 41 cases and residual tumour in 32. Sensitivity and specificity of 0.78 and 0.82, and positive and negative predictive values (PPV and NPV) of 0.89 and 0.89, respectively, were calculated. Immediate post-surgical pituitary MRI is useful for assessing the degree of tumour resection and is a good predictor of the final degree of resection compared with subsequent MRI studies. It allows the most appropriate treatment to be decided at an early stage. Copyright © 2017 SERAM. Published by Elsevier España, S.L.U. All rights reserved.

  14. Evaluating the diagnostic utility of applying a machine learning algorithm to diffusion tensor MRI measures in individuals with major depressive disorder.

    PubMed

    Schnyer, David M; Clasen, Peter C; Gonzalez, Christopher; Beevers, Christopher G

    2017-06-30

    Using MRI to diagnose mental disorders has been a long-term goal. Despite this, the vast majority of prior neuroimaging work has been descriptive rather than predictive. The current study applies support vector machine (SVM) learning to MRI measures of brain white matter to classify adults with Major Depressive Disorder (MDD) and healthy controls. In a precisely matched group of individuals with MDD (n = 25) and healthy controls (n = 25), SVM learning accurately (74%) classified patients and controls across a brain map of white matter fractional anisotropy (FA) values. The study revealed three main findings: 1) SVM applied to DTI-derived FA maps can accurately classify MDD vs. healthy controls; 2) prediction is strongest when only right hemisphere white matter is examined; and 3) removing FA values from a region identified by univariate contrast as significantly different between MDD and healthy controls does not change the SVM accuracy. These results indicate that SVM learning applied to neuroimaging data can classify the presence versus absence of MDD and that predictive information is distributed across brain networks rather than being highly localized. Finally, MDD group differences revealed through typical univariate contrasts do not necessarily reveal patterns that provide accurate predictive information. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  15. Differentiation of Glioblastoma and Lymphoma Using Feature Extraction and Support Vector Machine.

    PubMed

    Yang, Zhangjing; Feng, Piaopiao; Wen, Tian; Wan, Minghua; Hong, Xunning

    2017-01-01

    Differentiation of glioblastoma multiformes (GBMs) and lymphomas using multi-sequence magnetic resonance imaging (MRI) is an important task that is valuable for treatment planning. However, this task is a challenge because GBMs and lymphomas may have a similar appearance in MRI images. This similarity may lead to misclassification and could affect the treatment results. In this paper, we propose a semi-automatic method based on multi-sequence MRI to differentiate these two types of brain tumors. Our method consists of three steps: 1) the key slice is selected from 3D MRIs and regions of interest (ROIs) are drawn around the tumor region; 2) different features are extracted based on prior clinical knowledge and validated using a t-test; and 3) features that are helpful for classification are used to build an original feature vector and a support vector machine is applied to perform classification. In total, 58 GBM cases and 37 lymphoma cases are used to validate our method. A leave-one-out cross-validation strategy is adopted in our experiments. The global accuracy of our method was determined as 96.84%, which indicates that our method is effective for the differentiation of GBM and lymphoma and can be applied in clinical diagnosis. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
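The t-test feature validation plus SVM with leave-one-out cross-validation can be sketched as follows on synthetic case-level features (the 0.05 threshold, linear kernel, and feature counts are assumptions, not the authors' exact settings):

```python
import numpy as np
from scipy import stats
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic stand-in for per-case MRI features of GBM vs. lymphoma:
# a few features differ between the classes, the rest are noise.
n_gbm, n_lym, n_feat = 30, 20, 12
X = rng.normal(size=(n_gbm + n_lym, n_feat))
y = np.array([0] * n_gbm + [1] * n_lym)
X[y == 1, :4] += 1.5                  # class difference in 4 features

correct = 0
for train, test in LeaveOneOut().split(X):
    # Step 2 of the pipeline above: keep features that pass a t-test
    # between the two classes (computed on the training cases only).
    _, p = stats.ttest_ind(X[train][y[train] == 0], X[train][y[train] == 1])
    keep = p < 0.05
    # Step 3: linear SVM on the selected features, tested on the held-out case.
    clf = SVC(kernel="linear").fit(X[train][:, keep], y[train])
    correct += clf.predict(X[test][:, keep])[0] == y[test][0]

accuracy = correct / len(y)
print(round(accuracy, 2))
```

Re-running the t-test inside every fold matters: selecting features on all cases first would leak the held-out label into the classifier and inflate the leave-one-out estimate.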

  16. Using human brain activity to guide machine learning.

    PubMed

    Fong, Ruth C; Scheirer, Walter J; Cox, David D

    2018-03-29

    Machine learning is a field of computer science that builds algorithms that learn. In many cases, machine learning algorithms are used to recreate a human ability like adding a caption to a photo, driving a car, or playing a game. While the human brain has long served as a source of inspiration for machine learning, little effort has been made to directly use data collected from working brains as a guide for machine learning algorithms. Here we demonstrate a new paradigm of "neurally-weighted" machine learning, which takes fMRI measurements of human brain activity from subjects viewing images, and infuses these data into the training process of an object recognition learning algorithm to make it more consistent with the human brain. After training, these neurally-weighted classifiers are able to classify images without requiring any additional neural data. We show that our neural-weighting approach can lead to large performance gains when used with traditional machine vision features, as well as to significant improvements with already high-performing convolutional neural network features. The effectiveness of this approach points to a path forward for a new class of hybrid machine learning algorithms which take both inspiration and direct constraints from neuronal data.

  17. Efficient solution methodology for calibrating the hemodynamic model using functional Magnetic Resonance Imaging (fMRI) measurements.

    PubMed

    Zambri, Brian; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem

    2015-08-01

    Our aim is to propose a numerical strategy for retrieving accurately and efficiently the biophysiological parameters as well as the external stimulus characteristics corresponding to the hemodynamic mathematical model that describes changes in blood flow and blood oxygenation during brain activation. The proposed method employs the TNM-CKF method developed in [1], but in a prediction/correction framework. We present numerical results using both real and synthetic functional Magnetic Resonance Imaging (fMRI) measurements to highlight the performance characteristics of this computational methodology.

  18. Assessing perioperative complications associated with use of intraoperative magnetic resonance imaging during glioma surgery - a single centre experience with 516 cases.

    PubMed

    Ahmadi, Rezvan; Campos, Benito; Haux, Daniel; Rieke, Jörn; Beigel, Bernhard; Unterberg, Andreas

    2016-08-01

    Intraoperative magnetic resonance imaging (io-MRI) improves the extent of glioma resection. Due to the magnetic field, patients have to be covered with sterile drapes and are then transferred into an io-MRI chamber, where ferromagnetic anaesthesia monitors and machines must be kept at a distance and can only be applied with limitations. Despite the development of specific paramagnetic equipment for io-MRI use, this method is suspected to carry a higher risk of anaesthesiological and surgical complications. In particular, serial draping and un-draping cycles as well as the extended surgery duration might increase the risk of perioperative infection. Given the importance of io-MRI for glioma surgery, the question of io-MRI safety needs to be answered. We prospectively evaluated the perioperative anaesthesiological and surgical complications in 516 cases of brain tumour surgery involving io-MRI (MRI cohort). As a control group, we evaluated a cohort of 610 cases of brain tumour surgery performed without io-MRI (control group). The io-MRI procedure (including draping/undraping, transfer to and from the MRI cabinet and the io-MRI scan) significantly extended surgery, defined as "skin to skin" time, by 57 min (SD = 16 min) (p ≤ 0.01). Still, we show low and comparable rates of surgical complications in the MRI cohort and the control group. Postoperative haemorrhage (3.7% versus 3.0% in MRI cohort versus control group; p = 0.49) and infections (2.2% versus 1.8%; p = 0.69) were not significantly different between the groups. No anaesthesiological disturbances were reported. Despite prolonged surgery and serial draping and un-draping cycles, io-MRI was not linked to higher rates of infection and postoperative haemorrhage in this study.

  19. Optical/MRI Multimodality Molecular Imaging

    NASA Astrophysics Data System (ADS)

    Ma, Lixin; Smith, Charles; Yu, Ping

    2007-03-01

Multimodality molecular imaging that combines anatomical and functional information has shown promise in the development of tumor-targeted pharmaceuticals for cancer detection or therapy. We present a new multimodality imaging technique that combines fluorescence molecular tomography (FMT) and magnetic resonance imaging (MRI) for in vivo molecular imaging of preclinical tumor models. Unlike other optical/MRI systems, the new molecular imaging system uses parallel phase acquisition based on the heterodyne principle. The system offers higher accuracy of phase measurements, reduced noise bandwidth, and efficient modulation of the fluorescence diffuse density waves. Fluorescent Bombesin probes were developed for targeting breast cancer cells and prostate cancer cells. Tissue phantom and small animal experiments were performed to calibrate the imaging system and validate the targeting probes.

  20. In vivo measurements of proton relaxation times in human brain, liver, and skeletal muscle: a multicenter MRI study.

    PubMed

    de Certaines, J D; Henriksen, O; Spisni, A; Cortsen, M; Ring, P B

    1993-01-01

Quantitative magnetic resonance imaging may offer unique potential for tissue characterization in vivo. In this connection, texture analysis of quantitative MR images may be of special importance. Because evaluation of texture analysis requires a large body of data, multicenter approaches become mandatory. Within the framework of the BME Concerted Action on Tissue Characterization by MRI and MRS, a pilot multicenter study was launched to evaluate the technical problems, including the comparability of relaxation time measurements carried out at the individual sites. Human brain, skeletal muscle, and liver were used as models. A total of 218 healthy volunteers were studied. Fifteen MRI scanners with field strengths ranging from 0.08 T to 1.5 T were included. Measurement accuracy was tested on the Eurospin relaxation time test object (TO5), and the obtained calibration curve was used to correct the in vivo data. The results established that, by following a standardized procedure, comparable quantitative measurements can be obtained in vivo from a number of MR sites. The overall coefficient of variation in vivo was of the same order of magnitude as that of ex vivo relaxometry. Thus, it is possible to carry out international multicenter studies on quantitative imaging, provided that quality control with respect to measurement accuracy and calibration of the MR equipment is performed.
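The phantom-based correction described above amounts to fitting a calibration line between scanner-measured and certified reference values, then applying it to in vivo measurements. A minimal sketch, with hypothetical T1 values (not the Eurospin TO5 data):

```python
# Sketch of a phantom-based correction: fit a linear calibration curve
# between scanner-reported and certified relaxation times of a test object,
# then map in vivo values onto the reference scale. All numbers invented.

def fit_line(x, y):
    """Closed-form ordinary least squares for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
        (xi - mx) ** 2 for xi in x)
    return a, my - a * mx

measured = [250.0, 510.0, 760.0, 1020.0]    # scanner-reported T1 (ms)
reference = [240.0, 500.0, 740.0, 1000.0]   # certified phantom T1 (ms)

a, b = fit_line(measured, reference)

def correct(t1_measured_ms):
    """Map an in vivo measurement onto the reference scale."""
    return a * t1_measured_ms + b

print(round(correct(800.0), 1))
```

With a calibration curve per scanner, values pooled across the fifteen sites become comparable, which is the point of the multicenter design.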

  1. Four in vivo g-ratio-weighted imaging methods: Comparability and repeatability at the group level.

    PubMed

    Ellerbrock, Isabel; Mohammadi, Siawoosh

    2018-01-01

A recent method, denoted in vivo g-ratio-weighted imaging, has related the microscopic g-ratio, only accessible by ex vivo histology, to noninvasive MRI markers for the fiber volume fraction (FVF) and myelin volume fraction (MVF). Different MRI markers have been proposed for g-ratio weighted imaging, leaving open the question of which combination of imaging markers is optimal. To address this question, the repeatability and comparability of four g-ratio methods based on different combinations of, respectively, two imaging markers for FVF (tract-fiber density, TFD, and neurite orientation dispersion and density imaging, NODDI) and two imaging markers for MVF (magnetization transfer saturation rate, MT, and, from proton density maps, macromolecular tissue volume, MTV) were tested in a scan-rescan experiment in two groups. Moreover, it was tested how the repeatability and comparability were affected by two key processing steps, namely the masking of unreliable voxels (e.g., due to partial volume effects) at the group level and the calibration value used to link MRI markers to MVF (and FVF). Our data showed that repeatability and comparability depend largely on the marker for the FVF (NODDI outperformed TFD), and that they were improved by masking. Overall, the g-ratio method based on NODDI and MT showed the highest repeatability (90%) and lowest variability between groups (3.5%). Finally, our results indicate that the calibration procedure is crucial, for example, calibration to a lower g-ratio value (g = 0.6) than the commonly used one (g = 0.7) can change not only repeatability and comparability but also the reported dependency on the FVF imaging marker. Hum Brain Mapp 39:24-41, 2018. © 2017 Wiley Periodicals, Inc.
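The calibration step the abstract highlights, fixing the scaling that links an MRI myelin marker to MVF by forcing a reference region to a target g-ratio, can be sketched with one common aggregate formulation, g = sqrt(1 - MVF/FVF). The marker values and scaling below are hypothetical, not the study's:

```python
import math

# One common aggregate g-ratio formulation: g = sqrt(1 - MVF/FVF).
# The scaling alpha that turns an MRI myelin marker (e.g. MT saturation)
# into MVF is fixed by forcing a reference region to a target g-ratio.
# All marker values here are invented for illustration.

def gratio(mvf, fvf):
    return math.sqrt(1.0 - mvf / fvf)

def calibrate_alpha(mt_ref, fvf_ref, g_target):
    """Choose alpha so that alpha*mt_ref and fvf_ref yield g_target."""
    # g^2 = 1 - alpha*mt/fvf  =>  alpha = (1 - g^2) * fvf / mt
    return (1.0 - g_target ** 2) * fvf_ref / mt_ref

mt_ref, fvf_ref = 0.02, 0.55          # hypothetical reference-region values
for g_target in (0.7, 0.6):
    alpha = calibrate_alpha(mt_ref, fvf_ref, g_target)
    g = gratio(alpha * 0.025, 0.60)   # apply to another hypothetical voxel
    print(g_target, round(g, 3))
```

The loop makes the abstract's point concrete: calibrating to g = 0.6 instead of 0.7 shifts every derived g-ratio value, so the calibration choice propagates through the whole map.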

  2. Cardiac Iron Determines Cardiac T2*, T2, and T1 in the Gerbil Model of Iron Cardiomyopathy

    PubMed Central

    Wood, John C.; Otto-Duessel, Maya; Aguilar, Michelle; Nick, Hanspeter; Nelson, Marvin D.; Coates, Thomas D.; Pollack, Harvey; Moats, Rex

    2010-01-01

    Background Transfusional therapy for thalassemia major and sickle cell disease can lead to iron deposition and damage to the heart, liver, and endocrine organs. Iron causes the MRI parameters T1, T2, and T2* to shorten in these organs, which creates a potential mechanism for iron quantification. However, because of the danger and variability of cardiac biopsy, tissue validation of cardiac iron estimates by MRI has not been performed. In this study, we demonstrate that iron produces similar T1, T2, and T2* changes in the heart and liver using a gerbil iron-overload model. Methods and Results Twelve gerbils underwent iron dextran loading (200 mg · kg−1 · wk−1) from 2 to 14 weeks; 5 age-matched controls were studied as well. Animals had in vivo assessment of cardiac T2* and hepatic T2 and T2* and postmortem assessment of cardiac and hepatic T1 and T2. Relaxation measurements were performed in a clinical 1.5-T magnet and a 60-MHz nuclear magnetic resonance relaxometer. Cardiac and liver iron concentrations rose linearly with administered dose. Cardiac 1/T2*, 1/T2, and 1/T1 rose linearly with cardiac iron concentration. Liver 1/T2*, 1/T2, and 1/T1 also rose linearly, proportional to hepatic iron concentration. Liver and heart calibrations were similar on a dry-weight basis. Conclusions MRI measurements of cardiac T2 and T2* can be used to quantify cardiac iron. The similarity of liver and cardiac iron calibration curves in the gerbil suggests that extrapolation of human liver calibration curves to heart may be a rational approximation in humans. PMID:16027257
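The linear relationships reported above are what make MRI-based iron quantification possible: the relaxation rate R2* = 1/T2* is fitted against iron concentration, and the line is inverted to estimate iron from a measured T2*. A sketch with invented numbers (not the gerbil data):

```python
# Illustrative linear relaxivity calibration: R2* = 1/T2* rises linearly
# with tissue iron, so the fitted line can be inverted to estimate iron
# from a measured T2*. All numbers are invented for the sketch.

iron = [1.0, 2.0, 4.0, 8.0]            # iron concentration (arbitrary units)
t2star_ms = [25.0, 14.0, 7.5, 3.9]     # hypothetical measured T2* (ms)
r2star = [1000.0 / t for t in t2star_ms]   # R2* in s^-1

n = len(iron)
mi, mr = sum(iron) / n, sum(r2star) / n
slope = sum((c - mi) * (r - mr) for c, r in zip(iron, r2star)) / sum(
    (c - mi) ** 2 for c in iron)
intercept = mr - slope * mi

def iron_from_t2star(t2_ms):
    """Invert the calibration line: C = (R2* - intercept) / slope."""
    return (1000.0 / t2_ms - intercept) / slope

print(round(iron_from_t2star(5.0), 2))
```

The study's conclusion, that liver and heart calibration lines are similar per dry weight, is what would justify reusing one fitted (slope, intercept) pair across organs.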

  3. New concept on an integrated interior magnetic resonance imaging and medical linear accelerator system for radiation therapy.

    PubMed

    Jia, Xun; Tian, Zhen; Xi, Yan; Jiang, Steve B; Wang, Ge

    2017-01-01

Image guidance plays a critical role in radiotherapy. Currently, cone-beam computed tomography (CBCT) is routinely used in clinics for this purpose. While this modality can provide an attenuation image for therapeutic planning, low soft-tissue contrast affects the delineation of anatomical and pathological features. Efforts have recently been devoted to several MRI linear accelerator (LINAC) projects that have led to the successful combination of a full diagnostic MRI scanner with a radiotherapy machine. We present a new concept for the development of the MRI-LINAC system. Instead of combining a full MRI scanner with the LINAC platform, we propose using an interior MRI (iMRI) approach to image a specific region of interest (RoI) containing the radiation treatment target. While the conventional CBCT component still delivers a global image of the patient's anatomy, the iMRI offers local imaging of high soft-tissue contrast for tumor delineation. We describe a top-level system design for the integration of an iMRI component into an existing LINAC platform. We performed numerical analyses of the magnetic field for the iMRI to show potentially acceptable field properties in a spherical RoI with a diameter of 15 cm. This field could be shielded to a sufficiently low level around the LINAC region to avoid electromagnetic interference. Furthermore, we investigate the dosimetric impacts of this integration on the radiotherapy beam.

  4. Machine Learning Classification of Cirrhotic Patients with and without Minimal Hepatic Encephalopathy Based on Regional Homogeneity of Intrinsic Brain Activity.

    PubMed

    Chen, Qiu-Feng; Chen, Hua-Jun; Liu, Jun; Sun, Tao; Shen, Qun-Tai

    2016-01-01

Machine learning-based approaches play an important role in examining functional magnetic resonance imaging (fMRI) data in a multivariate manner and extracting features predictive of group membership. This study was performed to assess the potential of measuring brain intrinsic activity to identify minimal hepatic encephalopathy (MHE) in cirrhotic patients, using the support vector machine (SVM) method. Resting-state fMRI data were acquired in 16 cirrhotic patients with MHE and 19 cirrhotic patients without MHE. The regional homogeneity (ReHo) method was used to investigate the local synchrony of intrinsic brain activity. The Psychometric Hepatic Encephalopathy Score (PHES) was used to define the MHE condition. An SVM classifier was then applied, using leave-one-out cross-validation, to determine the discriminative ReHo map for MHE. The discrimination map highlights a set of regions, including the prefrontal cortex, anterior cingulate cortex, anterior insular cortex, inferior parietal lobule, precentral and postcentral gyri, superior and medial temporal cortices, and middle and inferior occipital gyri. The optimized discriminative model showed a total accuracy of 82.9% and a sensitivity of 81.3%. Our results suggest that a combination of the SVM approach and brain intrinsic activity measurement could be helpful for the detection of MHE in cirrhotic patients.
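The leave-one-out protocol used above (train on n-1 subjects, test on the held-out one, repeat for every subject) can be sketched as follows. To keep the example stdlib-only, a toy nearest-centroid classifier stands in for the SVM, and the one-dimensional "feature" values are invented:

```python
# Leave-one-out cross-validation, sketched with a toy nearest-centroid
# stand-in for the study's SVM, on invented scalar "ReHo summary" features.

def centroid(vals):
    return sum(vals) / len(vals)

def loocv_accuracy(x, y):
    """Hold out each sample in turn; train on the rest; score predictions."""
    correct = 0
    for i in range(len(x)):
        tr_x, tr_y = x[:i] + x[i + 1:], y[:i] + y[i + 1:]
        c0 = centroid([v for v, l in zip(tr_x, tr_y) if l == 0])
        c1 = centroid([v for v, l in zip(tr_x, tr_y) if l == 1])
        pred = 0 if abs(x[i] - c0) < abs(x[i] - c1) else 1
        correct += pred == y[i]
    return correct / len(x)

# Hypothetical scalar features: label 1 = MHE, label 0 = no MHE
x = [0.8, 0.9, 1.0, 1.1, 1.9, 2.0, 2.1, 2.3]
y = [0,   0,   0,   0,   1,   1,   1,   1]
print(loocv_accuracy(x, y))
```

Leave-one-out is attractive for small clinical cohorts like the 16 + 19 subjects here because every subject contributes to both training and testing without ever being in both at once.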

  5. Automated assessment of thigh composition using machine learning for Dixon magnetic resonance images.

    PubMed

    Yang, Yu Xin; Chong, Mei Sian; Tay, Laura; Yew, Suzanne; Yeo, Audrey; Tan, Cher Heng

    2016-10-01

To develop and validate a machine learning-based automated segmentation method that jointly analyzes the four contrasts provided by the Dixon MRI technique for improved thigh composition segmentation accuracy. The automatic detection of body composition is formulated as a three-class classification problem. Each image voxel in the training dataset is assigned a correct label. A voxel classifier is trained and subsequently used to predict unseen data. Morphological operations are finally applied to generate volumetric segmented images for the different structures. We applied this algorithm to datasets of (1) four contrast images, (2) water and fat images, and (3) unsuppressed images acquired from 190 subjects. The proposed method using four contrasts achieved the most accurate and robust segmentation compared with the use of combined fat and water images or of unsuppressed images alone; average Dice coefficients of 0.94 ± 0.03, 0.96 ± 0.03, 0.80 ± 0.03, and 0.97 ± 0.01 were achieved for the bone region, subcutaneous adipose tissue (SAT), inter-muscular adipose tissue (IMAT), and muscle, respectively. Our proposed method based on machine learning produces accurate tissue quantification and makes effective use of the rich information provided by the four contrast images of Dixon MRI.
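The Dice coefficients reported above measure the overlap between predicted and reference labels: twice the intersection divided by the sum of the two region sizes. A minimal sketch on flattened toy label volumes:

```python
# Dice overlap for one tissue class, computed on two tiny made-up label
# volumes (flattened voxel lists; labels 0/1/2 stand for three classes).

def dice(a, b, label):
    a_set = {i for i, v in enumerate(a) if v == label}
    b_set = {i for i, v in enumerate(b) if v == label}
    if not a_set and not b_set:
        return 1.0  # both empty: perfect (vacuous) agreement
    return 2 * len(a_set & b_set) / (len(a_set) + len(b_set))

truth = [0, 1, 1, 2, 2, 2, 0, 1]
pred  = [0, 1, 1, 2, 2, 0, 0, 1]
print(round(dice(truth, pred, 2), 2))
```

One Dice value per class, averaged across subjects, yields exactly the kind of per-tissue figures (0.94, 0.96, 0.80, 0.97) the abstract reports.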

  6. Computer-assisted segmentation of white matter lesions in 3D MR images using support vector machine.

    PubMed

    Lao, Zhiqiang; Shen, Dinggang; Liu, Dengfeng; Jawad, Abbas F; Melhem, Elias R; Launer, Lenore J; Bryan, R Nick; Davatzikos, Christos

    2008-03-01

Brain lesions, especially white matter lesions (WMLs), are associated with cardiac and vascular disease, but also with normal aging. Quantitative analysis of WMLs in large clinical trials is becoming increasingly important. In this article, we present a computer-assisted WML segmentation method based on local features extracted from multiparametric magnetic resonance imaging (MRI) sequences (i.e., T1-weighted, T2-weighted, proton density-weighted, and fluid attenuation inversion recovery MRI scans). A support vector machine classifier is first trained on expert-defined WMLs, and is then used to classify new scans. Postprocessing analysis further reduces false positives by using anatomic knowledge and measures of distance from the training set. Cross-validation on a population of 35 patients from three different imaging sites, with WMLs of varying sizes, shapes, and locations, tests the robustness and accuracy of the proposed segmentation method compared with the manual segmentation results of two experienced neuroradiologists.

  7. Calibration of 3D ultrasound to an electromagnetic tracking system

    NASA Astrophysics Data System (ADS)

    Lang, Andrew; Parthasarathy, Vijay; Jain, Ameet

    2011-03-01

The use of electromagnetic (EM) tracking is an important guidance tool that can aid procedures requiring accurate localization, such as needle injections or catheter guidance. Using EM tracking, information from different modalities can easily be combined using pre-procedural calibration information. These calibrations are performed individually, per modality, allowing different imaging systems to be mixed and matched according to the procedure at hand. In this work, a framework for the calibration of a 3D transesophageal echocardiography probe to EM tracking is developed. The complete calibration framework includes three required steps: data acquisition, needle segmentation, and calibration. Ultrasound (US) images of an EM-tracked needle must be acquired, with the positions of the needle in each volume subsequently extracted by segmentation. The calibration transformation is determined through a registration between the segmented points and the recorded EM needle positions. Additionally, the speed of sound is compensated for, since calibration is performed in water, which has a different speed of sound than is assumed by the US machine. A statistical validation framework has also been developed to provide further information on the accuracy and consistency of the calibration. Further validation of the calibration showed an accuracy of 1.39 mm.
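The pipeline described above ends in a point-based rigid registration preceded by a sound-speed rescaling. A minimal 2-D sketch of both steps, using the closed-form rotation estimate; the actual calibration is 3-D, and every number below is invented:

```python
import math

# 2-D sketch of point-based rigid registration with sound-speed correction:
# segmented needle points in the US frame are rescaled along depth for the
# true speed of sound in water, then aligned to EM-tracked positions.

C_ASSUMED, C_WATER = 1540.0, 1480.0   # m/s: scanner assumption vs water

def register(src, dst):
    """Closed-form 2-D rigid registration (rotation theta + translation)."""
    mx = [sum(c) / len(src) for c in zip(*src)]
    md = [sum(c) / len(dst) for c in zip(*dst)]
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - mx[0], sy - mx[1]
        bx, by = dx - md[0], dy - md[1]
        num += ax * by - ay * bx
        den += ax * bx + ay * by
    theta = math.atan2(num, den)
    tx = md[0] - (math.cos(theta) * mx[0] - math.sin(theta) * mx[1])
    ty = md[1] - (math.sin(theta) * mx[0] + math.cos(theta) * mx[1])
    return theta, (tx, ty)

# Depth-rescaled needle points in the US frame (sound-speed compensation)
us_pts = [(x, y * C_WATER / C_ASSUMED)
          for x, y in [(0.0, 10.0), (5.0, 10.0), (5.0, 20.0)]]

# Synthetic "EM" positions generated from a known transform for the sketch
true_theta, true_t = math.pi / 2, (1.0, 2.0)
em_pts = [(math.cos(true_theta) * x - math.sin(true_theta) * y + true_t[0],
           math.sin(true_theta) * x + math.cos(true_theta) * y + true_t[1])
          for x, y in us_pts]

theta, t = register(us_pts, em_pts)
print(round(theta - true_theta, 9), round(t[0], 6), round(t[1], 6))
```

Recovering the known transform from synthetic points is also the usual sanity check before applying such a registration to real segmented data.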

  8. Study of a non-equilibrium plasma pinch with application for microwave generation

    NASA Astrophysics Data System (ADS)

    Al Agry, Ahmad Farouk

The Non-Equilibrium Plasma Pinch (NEPP), also known as the Dense Plasma Focus (DPF), is well known as a source of energetic ions, relativistic electrons, and neutrons, as well as electromagnetic radiation extending from the infrared to X-rays. In this dissertation, the operation of a 15 kJ, Mather-type NEPP machine is studied in detail. A large number of experiments are carried out to tune the machine parameters for best performance using helium and hydrogen as filling gases. The NEPP machine is modified to extract the copious electrons generated at the pinch. A hollow anode with a small hole at the flat end, and a mock magnetron without a biasing magnetic field, are built. The electrons generated at the pinch are very difficult to capture; therefore a novel device is built to capture and transport the electrons from the pinch to the magnetron. The novel cup-rod-needle device successfully serves to capture and transport electrons to monitor the pinch current. Further, the device has the potential to field-emit charges from its needle end, acting as a pulsed electron source for other devices such as the magnetron. Diagnostic tools are designed, modeled, built, calibrated, and implemented in the machine to measure the pinch dynamics. Novel, UNLV-patented electromagnetic dot sensors are successfully calibrated and implemented in the machine. A new calibration technique is developed, and test stands are designed and built to measure the dot's ability to track the impetus signal over its dynamic range, starting and ending in the noise region. The patented EM-dot sensor shows superior performance over traditional electromagnetic sensors such as Rogowski coils. On the other hand, the cup-rod structure, when grounded on the rod side, serves as a diagnostic tool to monitor the pinch current by sampling the actual current, a quantity that has always been very challenging to measure without perturbing the pinch.
To the best of our knowledge, this method of measuring the pinch current is unique and has never been done before. Agreement with other models is shown. The operation of the NEPP machine with the hole in the center of the anode and the magnetron connected, including the cup-rod structure, is examined against the NEPP machine signature with a solid anode. Both cases show excellent agreement. This suggests that the hole and the diagnostic tool inside the anode have negligible effects on the pinch.

  9. Multi-Parametric MRI and Texture Analysis to Visualize Spatial Histologic Heterogeneity and Tumor Extent in Glioblastoma.

    PubMed

    Hu, Leland S; Ning, Shuluo; Eschbacher, Jennifer M; Gaw, Nathan; Dueck, Amylou C; Smith, Kris A; Nakaji, Peter; Plasencia, Jonathan; Ranjbar, Sara; Price, Stephen J; Tran, Nhan; Loftus, Joseph; Jenkins, Robert; O'Neill, Brian P; Elmquist, William; Baxter, Leslie C; Gao, Fei; Frakes, David; Karis, John P; Zwart, Christine; Swanson, Kristin R; Sarkaria, Jann; Wu, Teresa; Mitchell, J Ross; Li, Jing

    2015-01-01

    Genetic profiling represents the future of neuro-oncology but suffers from inadequate biopsies in heterogeneous tumors like Glioblastoma (GBM). Contrast-enhanced MRI (CE-MRI) targets enhancing core (ENH) but yields adequate tumor in only ~60% of cases. Further, CE-MRI poorly localizes infiltrative tumor within surrounding non-enhancing parenchyma, or brain-around-tumor (BAT), despite the importance of characterizing this tumor segment, which universally recurs. In this study, we use multiple texture analysis and machine learning (ML) algorithms to analyze multi-parametric MRI, and produce new images indicating tumor-rich targets in GBM. We recruited primary GBM patients undergoing image-guided biopsies and acquired pre-operative MRI: CE-MRI, Dynamic-Susceptibility-weighted-Contrast-enhanced-MRI, and Diffusion Tensor Imaging. Following image coregistration and region of interest placement at biopsy locations, we compared MRI metrics and regional texture with histologic diagnoses of high- vs low-tumor content (≥80% vs <80% tumor nuclei) for corresponding samples. In a training set, we used three texture analysis algorithms and three ML methods to identify MRI-texture features that optimized model accuracy to distinguish tumor content. We confirmed model accuracy in a separate validation set. We collected 82 biopsies from 18 GBMs throughout ENH and BAT. The MRI-based model achieved 85% cross-validated accuracy to diagnose high- vs low-tumor in the training set (60 biopsies, 11 patients). The model achieved 81.8% accuracy in the validation set (22 biopsies, 7 patients). Multi-parametric MRI and texture analysis can help characterize and visualize GBM's spatial histologic heterogeneity to identify regional tumor-rich biopsy targets.

  10. Direct estimation of evoked hemoglobin changes by multimodality fusion imaging

    PubMed Central

    Huppert, Theodore J.; Diamond, Solomon G.; Boas, David A.

    2009-01-01

In the last two decades, both diffuse optical tomography (DOT) and blood oxygen level dependent (BOLD)-based functional magnetic resonance imaging (fMRI) methods have been developed as noninvasive tools for imaging evoked cerebral hemodynamic changes in studies of brain activity. Although these two technologies measure functional contrast from similar physiological sources, i.e., changes in hemoglobin levels, the two modalities are based on distinct physical and biophysical principles, leading to both limitations and strengths for each method. In this work, we describe a unified linear model to combine the complementary spatial, temporal, and spectroscopic resolutions of concurrently measured optical tomography and fMRI signals. Using numerical simulations, we demonstrate that concurrent optical and BOLD measurements can be used to create cross-calibrated estimates of absolute micromolar deoxyhemoglobin changes. We apply this new analysis tool to experimental data acquired simultaneously with both DOT and BOLD imaging during a motor task, demonstrate the ability to more robustly estimate hemoglobin changes in comparison to DOT alone, and show how this approach can provide cross-calibrated estimates of hemoglobin changes. Using this multimodal method, we estimate the calibration of the 3 tesla BOLD signal to be −0.55% ± 0.40% signal change per micromolar change of deoxyhemoglobin. PMID:19021411

  11. Magnetic Field Gradient Calibration as an Experiment to Illustrate Magnetic Resonance Imaging

    ERIC Educational Resources Information Center

    Seedhouse, Steven J.; Hoffmann, Markus M.

    2008-01-01

    A nuclear magnetic resonance (NMR) spectroscopy experiment for the undergraduate physical chemistry laboratory is described that encompasses both qualitative and quantitative pedagogical goals. Qualitatively, the experiment illustrates how images are obtained in magnetic resonance imaging (MRI). Quantitatively, students experience the…

  12. Design features and results from fatigue reliability research machines.

    NASA Technical Reports Server (NTRS)

    Lalli, V. R.; Kececioglu, D.; Mcconnell, J. B.

    1971-01-01

The design, fabrication, development, operation, calibration, and results from reversed-bending combined with steady-torque fatigue research machines are presented. Fifteen-centimeter-long, notched SAE 4340 steel specimens are subjected to various combinations of these stresses and cycled to failure. Failure occurs when the crack in the notch passes through the specimen, automatically shutting down the test machine. These cycles-to-failure data are statistically analyzed to develop a probabilistic S-N diagram. These diagrams have many uses; a rotating-component design example given in the literature shows that minimum size and weight for a specified number of cycles and reliability can be calculated using these diagrams.

  13. Deriving global parameter estimates for the Noah land surface model using FLUXNET and machine learning

    NASA Astrophysics Data System (ADS)

    Chaney, Nathaniel W.; Herman, Jonathan D.; Ek, Michael B.; Wood, Eric F.

    2016-11-01

    With their origins in numerical weather prediction and climate modeling, land surface models aim to accurately partition the surface energy balance. An overlooked challenge in these schemes is the role of model parameter uncertainty, particularly at unmonitored sites. This study provides global parameter estimates for the Noah land surface model using 85 eddy covariance sites in the global FLUXNET network. The at-site parameters are first calibrated using a Latin Hypercube-based ensemble of the most sensitive parameters, determined by the Sobol method, to be the minimum stomatal resistance (rs,min), the Zilitinkevich empirical constant (Czil), and the bare soil evaporation exponent (fxexp). Calibration leads to an increase in the mean Kling-Gupta Efficiency performance metric from 0.54 to 0.71. These calibrated parameter sets are then related to local environmental characteristics using the Extra-Trees machine learning algorithm. The fitted Extra-Trees model is used to map the optimal parameter sets over the globe at a 5 km spatial resolution. The leave-one-out cross validation of the mapped parameters using the Noah land surface model suggests that there is the potential to skillfully relate calibrated model parameter sets to local environmental characteristics. The results demonstrate the potential to use FLUXNET to tune the parameterizations of surface fluxes in land surface models and to provide improved parameter estimates over the globe.
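The Kling-Gupta Efficiency used above as the calibration objective has a standard closed form (in its original 2009 formulation): KGE = 1 - sqrt((r - 1)^2 + (alpha - 1)^2 + (beta - 1)^2), where r is the linear correlation between simulated and observed series, alpha the ratio of their standard deviations, and beta the ratio of their means. A stdlib sketch on made-up data:

```python
import math

# Kling-Gupta Efficiency (2009 form): decomposes model skill into
# correlation (r), variability ratio (alpha), and bias ratio (beta).
# A perfect simulation gives KGE = 1. Data below are invented.

def kge(sim, obs):
    n = len(sim)
    ms, mo = sum(sim) / n, sum(obs) / n
    ss = math.sqrt(sum((v - ms) ** 2 for v in sim) / n)
    so = math.sqrt(sum((v - mo) ** 2 for v in obs) / n)
    r = sum((a - ms) * (b - mo) for a, b in zip(sim, obs)) / (n * ss * so)
    return 1 - math.sqrt((r - 1) ** 2 + (ss / so - 1) ** 2 + (ms / mo - 1) ** 2)

obs = [1.0, 2.0, 3.0, 4.0]
print(kge(obs, obs))                       # perfect simulation
print(round(kge([1.1, 2.0, 2.9, 4.2], obs), 3))
```

An increase in mean KGE from 0.54 to 0.71, as reported above, therefore reflects some mix of improved correlation, variability matching, and bias reduction.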

  14. Fully automated system for the quantification of human osteoarthritic knee joint effusion volume using magnetic resonance imaging.

    PubMed

    Li, Wei; Abram, François; Pelletier, Jean-Pierre; Raynauld, Jean-Pierre; Dorais, Marc; d'Anjou, Marc-André; Martel-Pelletier, Johanne

    2010-01-01

    Joint effusion is frequently associated with osteoarthritis (OA) flare-up and is an important marker of therapeutic response. This study aimed at developing and validating a fully automated system based on magnetic resonance imaging (MRI) for the quantification of joint effusion volume in knee OA patients. MRI examinations consisted of two axial sequences: a T2-weighted true fast imaging with steady-state precession and a T1-weighted gradient echo. An automated joint effusion volume quantification system using MRI was developed and validated (a) with calibrated phantoms (cylinder and sphere) and effusion from knee OA patients; (b) with assessment by manual quantification; and (c) by direct aspiration. Twenty-five knee OA patients with joint effusion were included in the study. The automated joint effusion volume quantification was developed as a four stage sequencing process: bone segmentation, filtering of unrelated structures, segmentation of joint effusion, and subvoxel volume calculation. Validation experiments revealed excellent coefficients of variation with the calibrated cylinder (1.4%) and sphere (0.8%) phantoms. Comparison of the OA knee joint effusion volume assessed by the developed automated system and by manual quantification was also excellent (r = 0.98; P < 0.0001), as was the comparison with direct aspiration (r = 0.88; P = 0.0008). The newly developed fully automated MRI-based system provided precise quantification of OA knee joint effusion volume with excellent correlation with data from phantoms, a manual system, and joint aspiration. Such an automated system will be instrumental in improving the reproducibility/reliability of the evaluation of this marker in clinical application.

  15. Construction of a Calibrated Probabilistic Classification Catalog: Application to 50k Variable Sources in the All-Sky Automated Survey

    NASA Astrophysics Data System (ADS)

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; Brink, Henrik; Crellin-Quick, Arien

    2012-12-01

    With growing data volumes from synoptic surveys, astronomers necessarily must become more abstracted from the discovery and introspection processes. Given the scarcity of follow-up resources, there is a particularly sharp onus on the frameworks that replace these human roles to provide accurate and well-calibrated probabilistic classification catalogs. Such catalogs inform the subsequent follow-up, allowing consumers to optimize the selection of specific sources for further study and permitting rigorous treatment of classification purities and efficiencies for population studies. Here, we describe a process to produce a probabilistic classification catalog of variability with machine learning from a multi-epoch photometric survey. In addition to producing accurate classifications, we show how to estimate calibrated class probabilities and motivate the importance of probability calibration. We also introduce a methodology for feature-based anomaly detection, which allows discovery of objects in the survey that do not fit within the predefined class taxonomy. Finally, we apply these methods to sources observed by the All-Sky Automated Survey (ASAS), and release the Machine-learned ASAS Classification Catalog (MACC), a 28 class probabilistic classification catalog of 50,124 ASAS sources in the ASAS Catalog of Variable Stars. We estimate that MACC achieves a sub-20% classification error rate and demonstrate that the class posterior probabilities are reasonably calibrated. MACC classifications compare favorably to the classifications of several previous domain-specific ASAS papers and to the ASAS Catalog of Variable Stars, which had classified only 24% of those sources into one of 12 science classes.
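The probability calibration the authors motivate is conventionally checked with a reliability table: predictions are binned by predicted probability, and within each bin the mean predicted probability is compared with the observed fraction of positives. A stdlib sketch with invented probabilities and labels:

```python
# Reliability table: a standard check that predicted class probabilities
# are calibrated, i.e. that "p = 0.8" events happen about 80% of the time.
# Probabilities and labels below are invented.

def reliability_table(probs, labels, n_bins=5):
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        idx = min(int(p * n_bins), n_bins - 1)   # clamp p = 1.0 into last bin
        bins[idx].append((p, y))
    table = []
    for b in bins:
        if b:
            mean_p = sum(p for p, _ in b) / len(b)
            frac_pos = sum(y for _, y in b) / len(b)
            table.append((mean_p, frac_pos))
    return table

probs = [0.05, 0.15, 0.30, 0.35, 0.50, 0.55, 0.70, 0.75, 0.90, 0.95]
labels = [0, 0, 0, 1, 0, 1, 1, 1, 1, 1]
for mean_p, frac_pos in reliability_table(probs, labels):
    print(round(mean_p, 2), frac_pos)
```

For a well-calibrated classifier the two columns track each other across bins; systematic gaps indicate over- or under-confidence, which is what recalibration corrects.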

  16. CONSTRUCTION OF A CALIBRATED PROBABILISTIC CLASSIFICATION CATALOG: APPLICATION TO 50k VARIABLE SOURCES IN THE ALL-SKY AUTOMATED SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.

    2012-12-15

With growing data volumes from synoptic surveys, astronomers necessarily must become more abstracted from the discovery and introspection processes. Given the scarcity of follow-up resources, there is a particularly sharp onus on the frameworks that replace these human roles to provide accurate and well-calibrated probabilistic classification catalogs. Such catalogs inform the subsequent follow-up, allowing consumers to optimize the selection of specific sources for further study and permitting rigorous treatment of classification purities and efficiencies for population studies. Here, we describe a process to produce a probabilistic classification catalog of variability with machine learning from a multi-epoch photometric survey. In addition to producing accurate classifications, we show how to estimate calibrated class probabilities and motivate the importance of probability calibration. We also introduce a methodology for feature-based anomaly detection, which allows discovery of objects in the survey that do not fit within the predefined class taxonomy. Finally, we apply these methods to sources observed by the All-Sky Automated Survey (ASAS), and release the Machine-learned ASAS Classification Catalog (MACC), a 28 class probabilistic classification catalog of 50,124 ASAS sources in the ASAS Catalog of Variable Stars. We estimate that MACC achieves a sub-20% classification error rate and demonstrate that the class posterior probabilities are reasonably calibrated. MACC classifications compare favorably to the classifications of several previous domain-specific ASAS papers and to the ASAS Catalog of Variable Stars, which had classified only 24% of those sources into one of 12 science classes.

  17. Support vector machine for breast cancer classification using diffusion-weighted MRI histogram features: Preliminary study.

    PubMed

    Vidić, Igor; Egnell, Liv; Jerome, Neil P; Teruel, Jose R; Sjøbakk, Torill E; Østlie, Agnes; Fjøsne, Hans E; Bathen, Tone F; Goa, Pål Erik

    2018-05-01

    Diffusion-weighted MRI (DWI) is currently one of the fastest developing MRI-based techniques in oncology. Histogram properties from model fitting of DWI are useful features for differentiation of lesions, and classification can potentially be improved by machine learning. To evaluate classification of malignant and benign tumors and breast cancer subtypes using support vector machine (SVM). Prospective. Fifty-one patients with benign (n = 23) and malignant (n = 28) breast tumors (26 ER+, whereof six were HER2+). Patients were imaged with DW-MRI (3T) using twice refocused spin-echo echo-planar imaging with echo time / repetition time (TR/TE) = 9000/86 msec, 90 × 90 matrix size, 2 × 2 mm in-plane resolution, 2.5 mm slice thickness, and 13 b-values. Apparent diffusion coefficient (ADC), relative enhanced diffusivity (RED), and the intravoxel incoherent motion (IVIM) parameters diffusivity (D), pseudo-diffusivity (D*), and perfusion fraction (f) were calculated. The histogram properties (median, mean, standard deviation, skewness, kurtosis) were used as features in SVM (10-fold cross-validation) for differentiation of lesions and subtyping. Accuracies of the SVM classifications were calculated to find the combination of features with highest prediction accuracy. Mann-Whitney tests were performed for univariate comparisons. For benign versus malignant tumors, univariate analysis found 11 histogram properties to be significant differentiators. Using SVM, the highest accuracy (0.96) was achieved from a single feature (mean of RED), or from three feature combinations of IVIM or ADC. Combining features from all models gave perfect classification. No single feature predicted HER2 status of ER + tumors (univariate or SVM), although high accuracy (0.90) was achieved with SVM combining several features. Importantly, these features had to include higher-order statistics (kurtosis and skewness), indicating the importance to account for heterogeneity. 
    Our findings suggest that SVM, using features from a combination of diffusion models, improves prediction accuracy for differentiation of benign versus malignant breast tumors, and may further assist in subtyping of breast cancer. Level of Evidence: 3. Technical Efficacy: Stage 3. J. Magn. Reson. Imaging 2018;47:1205-1216. © 2017 International Society for Magnetic Resonance in Medicine.
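
    As a sketch of the feature-extraction step, the five histogram properties used as SVM inputs can be computed per parameter map. This is a minimal NumPy version with invented sample values and population-moment definitions, not the authors' code:

```python
import numpy as np

def histogram_features(values):
    """Summary statistics used as classifier features in the study:
    median, mean, standard deviation, skewness, and (excess) kurtosis."""
    v = np.asarray(values, dtype=float)
    mean = v.mean()
    std = v.std()
    z = (v - mean) / std
    return {
        "median": float(np.median(v)),
        "mean": float(mean),
        "std": float(std),
        "skewness": float(np.mean(z ** 3)),      # third standardized moment
        "kurtosis": float(np.mean(z ** 4) - 3),  # excess kurtosis (normal -> 0)
    }

# Example: synthetic "ADC-like" values; a symmetric distribution has
# near-zero skewness and near-zero excess kurtosis.
rng = np.random.default_rng(0)
feats = histogram_features(rng.normal(loc=1.2e-3, scale=2e-4, size=10_000))
```

In the study, such features (computed from the ADC, RED, and IVIM maps of each lesion) would then be fed to an SVM with 10-fold cross-validation, for example scikit-learn's `SVC`.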

  18. Camera calibration based on the back projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui

    2015-12-01

    Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee the minimum of 2D projection errors on the image plane, but not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration, which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimation of the camera parameters is refined by a non-linear function minimization process. The proposed method can obtain a more accurate calibration result, which is more physically useful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.
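
    The difference between the two objective functions can be illustrated with a toy pinhole model: the forward imaging process measures residuals in pixels on the image plane, whereas the back projection process casts detected pixels back onto the checkerboard plane (Z = 0) and measures residuals in 3D. All numbers below are hypothetical, not from the paper:

```python
import numpy as np

# Hypothetical intrinsics and pose (world frame = checkerboard plane Z = 0).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                        # camera axes aligned with the board
t = np.array([0.0, 0.0, 1000.0])     # board 1000 mm in front of the camera

def project(Xw):
    """Forward imaging process: world point -> pixel."""
    Xc = R @ Xw + t
    u = K @ (Xc / Xc[2])
    return u[:2]

def back_project(uv):
    """Back projection: pixel -> viewing ray -> intersection with Z = 0."""
    ray = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])  # camera-frame direction
    ray_w = R.T @ ray
    origin = -R.T @ t                                       # camera centre, world frame
    s = -origin[2] / ray_w[2]                               # origin + s*ray lies on Z = 0
    return origin + s * ray_w

Xw = np.array([30.0, 20.0, 0.0])                    # a checkerboard corner (mm)
uv_detected = project(Xw) + np.array([0.5, -0.3])   # corner detection with pixel noise
err_3d = np.linalg.norm(back_project(uv_detected) - Xw)   # residual in mm, not pixels
```

Refining the parameters against `err_3d`-type residuals (summed over all corners and views) is what makes the BPP result "more physically useful": the minimized quantity lives in object space, where 3D measurements are ultimately taken.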

  19. A Java-based fMRI processing pipeline evaluation system for assessment of univariate general linear model and multivariate canonical variate analysis-based pipelines.

    PubMed

    Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C

    2008-01-01

    As functional magnetic resonance imaging (fMRI) becomes widely used, the demand for evaluation of fMRI processing pipelines and validation of fMRI analysis results is increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics. Thus, it cannot fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. In order to overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrated YALE (a machine learning environment) into Fiswidgets (an fMRI software environment) to obtain system interoperability and applied an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data based on prediction accuracy (classification accuracy) and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed in which four fMRI processing pipelines with GLM and CVA modules such as FSL.FEAT and NPAIRS.CVA were evaluated with the system. The results indicated that (1) the system can compare different fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the ranking of pipeline performance is highly dependent on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.
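
    The two evaluation metrics the system reports, prediction (classification) accuracy and SPI reproducibility, can be sketched on synthetic data. Using a plain split-half correlation as the reproducibility proxy is a simplification of the full NPAIRS procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical split-half resampling: two statistical parametric images (SPIs)
# sharing a true activation pattern plus independent half-specific noise.
pattern = rng.normal(size=5000)
spi_half1 = pattern + 0.5 * rng.normal(size=5000)
spi_half2 = pattern + 0.5 * rng.normal(size=5000)

# Reproducibility metric: Pearson correlation between the two SPIs.
reproducibility = np.corrcoef(spi_half1, spi_half2)[0, 1]

# Prediction metric: classification accuracy on held-out scans (labels invented).
true_labels = np.array([0, 1, 0, 1, 0, 1, 0, 1])
predicted = np.array([0, 1, 0, 1, 1, 1, 0, 1])
prediction_accuracy = float(np.mean(true_labels == predicted))
```

A pipeline that preprocesses more aggressively may raise one metric while lowering the other, which is why the system scores pipelines on both axes.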

  20. Surface-Based fMRI-Driven Diffusion Tractography in the Presence of Significant Brain Pathology: A Study Linking Structure and Function in Cerebral Palsy

    PubMed Central

    Cunnington, Ross; Boyd, Roslyn N.; Rose, Stephen E.

    2016-01-01

    Diffusion MRI (dMRI) tractography analyses are difficult to perform in the presence of brain pathology. Automated methods that rely on cortical parcellation for structural connectivity studies often fail, while manually defining regions is extremely time consuming and can introduce human error. Both methods also make assumptions about structure-function relationships that may not hold after cortical reorganisation. Seeding tractography with functional-MRI (fMRI) activation is an emerging method that reduces these confounds, but inherent smoothing of the fMRI signal may result in the inclusion of irrelevant pathways. This paper describes a novel fMRI-seeded dMRI-analysis pipeline based on surface meshes that reduces these issues and utilises machine learning to generate task-specific white matter pathways, minimising the requirement for manually drawn ROIs. We directly compared this new strategy to a standard voxelwise fMRI-dMRI approach by investigating correlations between clinical scores and dMRI metrics of thalamocortical and corticomotor tracts in 31 children with unilateral cerebral palsy. The surface-based approach successfully processed more participants (87%) than the voxel-based approach (65%), and provided significantly more coherent tractography. Significant correlations between dMRI metrics and five clinical scores of function were found for the more superior regions of these tracts. These significant correlations were stronger and more frequently found with the surface-based method (15/20 investigated were significant; R2 = 0.43–0.73) than with the voxelwise analysis (2 significant correlations; R2 = 0.38 and 0.49). More restricted fMRI signal, better-constrained tractography, and the novel track-classification method all appeared to contribute toward these differences. PMID:27487011

  1. Computer-aided design/computer-aided manufacturing skull base drill.

    PubMed

    Couldwell, William T; MacDonald, Joel D; Thomas, Charles L; Hansen, Bradley C; Lapalikar, Aniruddha; Thakkar, Bharat; Balaji, Alagar K

    2017-05-01

    The authors have developed a simple device for computer-aided design/computer-aided manufacturing (CAD-CAM) that uses an image-guided system to define a cutting tool path that is shared with a surgical machining system for drilling bone. Information from 2D images (obtained via CT and MRI) is transmitted to a processor that produces a 3D image. The processor generates code defining an optimized cutting tool path, which is sent to a surgical machining system that can drill the desired portion of bone. This tool has applications for bone removal in both cranial and spine neurosurgical approaches. Such applications have the potential to reduce surgical time and associated complications such as infection or blood loss. The device enables rapid removal of bone within 1 mm of vital structures. The validity of such a machining tool is exemplified in the rapid (< 3 minutes machining time) and accurate removal of bone for transtemporal (for example, translabyrinthine) approaches.

  2. [Application of near infrared spectroscopy combined with particle swarm optimization based least square support vector machine to rapid quantitative analysis of Corni Fructus].

    PubMed

    Liu, Xue-song; Sun, Fen-fang; Jin, Ye; Wu, Yong-jiang; Gu, Zhi-xin; Zhu, Li; Yan, Dong-lan

    2015-12-01

    A novel method was developed for the rapid determination of multiple quality indicators in Corni Fructus by means of near infrared (NIR) spectroscopy. A particle swarm optimization (PSO) based least squares support vector machine (PSO-LS-SVM) was investigated to increase the level of quality control. The calibration models of moisture, extractum, morroniside and loganin were established using the PSO-LS-SVM algorithm. The performance of the PSO-LS-SVM models was compared with partial least squares regression (PLSR) and back propagation artificial neural network (BP-ANN) models. The calibration and validation results of PSO-LS-SVM were superior to both PLSR and BP-ANN. For the PSO-LS-SVM models, the correlation coefficients (r) of the calibrations were all above 0.942. The optimal prediction results were also achieved by the PSO-LS-SVM models, with RMSEP (root mean square error of prediction) and RSEP (relative standard error of prediction) less than 1.176 and 15.5%, respectively. The results suggest that the PSO-LS-SVM algorithm has good model performance and high prediction accuracy. NIR has potential value for the rapid determination of multiple quality indicators in Corni Fructus.
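
    The two prediction-error statistics quoted above can be computed as follows. This is a minimal sketch: the RSEP formula shown is one common definition (the abstract does not spell out its exact formula), and the moisture values are invented:

```python
import numpy as np

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

def rsep(y_true, y_pred):
    """Relative standard error of prediction, in percent
    (one common definition; assumed here, not taken from the paper)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(100.0 * np.sqrt(np.sum((y_pred - y_true) ** 2)
                                 / np.sum(y_true ** 2)))

# Hypothetical reference vs. NIR-predicted moisture contents (%):
y_ref = [9.8, 10.4, 11.1, 9.5, 10.0]
y_pred = [9.9, 10.2, 11.3, 9.4, 10.1]
```

Both metrics would be computed on an independent prediction set, never on the calibration samples themselves.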

  3. Design and Mechanical Evaluation of a Capacitive Sensor-Based Indexed Platform for Verification of Portable Coordinate Measuring Instruments

    PubMed Central

    Avila, Agustín Brau; Mazo, Jorge Santolaria; Martín, Juan José Aguilar

    2014-01-01

    During the last years, the use of Portable Coordinate Measuring Machines (PCMMs) in industry has increased considerably, mostly due to their flexibility for accomplishing in-line measuring tasks as well as their reduced costs and operational advantages as compared to traditional coordinate measuring machines (CMMs). However, their operation has a significant drawback derived from the techniques applied in the verification and optimization procedures of their kinematic parameters. These techniques are based on the capture of data with the measuring instrument from a calibrated gauge object, fixed successively in various positions so that most of the instrument measuring volume is covered, which results in time-consuming, tedious and expensive verification procedures. In this work the mechanical design of an indexed metrology platform (IMP) is presented. The aim of the IMP is to increase the final accuracy and to radically simplify the calibration, identification and verification of geometrical parameter procedures of PCMMs. The IMP allows us to fix the calibrated gauge object and move the measuring instrument in such a way that it is possible to cover most of the instrument working volume, reducing the time and operator fatigue to carry out these types of procedures. PMID:24451458

  5. Towards System Calibration of Panoramic Laser Scanners from a Single Station

    PubMed Central

    Medić, Tomislav; Holst, Christoph; Kuhlmann, Heiner

    2017-01-01

    Terrestrial laser scanner measurements suffer from systematic errors due to internal misalignments. The magnitude of the resulting errors in the point cloud in many cases exceeds the magnitude of random errors. Hence, the task of calibrating a laser scanner is important for applications with high accuracy demands. This paper primarily addresses the case of panoramic terrestrial laser scanners. Herein, it is proven that most of the calibration parameters can be estimated from a single scanner station without a need for any reference information. This hypothesis is confirmed through an empirical experiment, which was conducted in a large machine hall using a Leica Scan Station P20 panoramic laser scanner. The calibration approach is based on the widely used target-based self-calibration approach, with small modifications. A new angular parameterization is used in order to implicitly introduce measurements in two faces of the instrument and for the implementation of calibration parameters describing genuine mechanical misalignments. Additionally, a computationally preferable calibration algorithm based on the two-face measurements is introduced. In the end, the calibration results are discussed, highlighting all necessary prerequisites for the scanner calibration from a single scanner station. PMID:28513548
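
    The value of two-face measurements can be seen in a one-parameter toy model: a collimation-type misalignment enters the two faces with opposite sign, so the face average is free of it and the face difference estimates it. This is a simplified textbook model, not the paper's full parameterization:

```python
def observed_hz(true_hz, c_err, face):
    """Horizontal direction (degrees) read by an idealized instrument with a
    collimation-type misalignment c_err; the error changes sign in face II
    (simplified model with the telescope horizontal)."""
    if face == 1:
        return (true_hz + c_err) % 360.0
    return (true_hz + 180.0 - c_err) % 360.0

true_hz, c_err = 47.3, 0.02                   # degrees; hypothetical values
face1 = observed_hz(true_hz, c_err, 1)
face2 = observed_hz(true_hz, c_err, 2)

# The two-face mean cancels the misalignment; the difference estimates it.
hz_mean = (face1 + (face2 - 180.0)) / 2.0
c_est = (face1 - (face2 - 180.0)) / 2.0
```

This cancellation/estimation property is what lets two-face observations of targets from a single station constrain mechanical calibration parameters without external reference coordinates.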

  6. Hybrid MRI-Ultrasound acquisitions, and scannerless real-time imaging.

    PubMed

    Preiswerk, Frank; Toews, Matthew; Cheng, Cheng-Chieh; Chiou, Jr-Yuan George; Mei, Chang-Sheng; Schaefer, Lena F; Hoge, W Scott; Schwartz, Benjamin M; Panych, Lawrence P; Madore, Bruno

    2017-09-01

    To combine MRI, ultrasound, and computer science methodologies toward generating MRI contrast at the high frame rates of ultrasound, inside and even outside the MRI bore. A small transducer, held onto the abdomen with an adhesive bandage, collected ultrasound signals during MRI. Based on these ultrasound signals and their correlations with MRI, a machine-learning algorithm created synthetic MR images at frame rates up to 100 per second. In one particular implementation, volunteers were taken out of the MRI bore with the ultrasound sensor still in place, and MR images were generated on the basis of ultrasound signal and learned correlations alone in a "scannerless" manner. Hybrid ultrasound-MRI data were acquired in eight separate imaging sessions. Locations of liver features, in synthetic images, were compared with those from acquired images: The mean error was 1.0 pixel (2.1 mm), with best case 0.4 and worst case 4.1 pixels (in the presence of heavy coughing). For results from outside the bore, qualitative validation involved optically tracked ultrasound imaging with/without coughing. The proposed setup can generate an accurate stream of high-speed MR images, up to 100 frames per second, inside or even outside the MR bore. Magn Reson Med 78:897-908, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
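
    A heavily simplified stand-in for the learned ultrasound-to-MRI mapping is nearest-neighbour retrieval: given a new ultrasound frame, return the training MR image whose ultrasound signature is closest. The one-dimensional "breathing state" data below are synthetic, and the actual paper trains a machine-learning model rather than doing a lookup:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training pairs from a hybrid session: ultrasound feature
# vectors recorded simultaneously with (flattened) MR images.
n_train, us_dim, img_dim = 200, 64, 16
phase = rng.uniform(0, 2 * np.pi, n_train)           # breathing-state proxy
us_train = (np.sin(phase)[:, None] * np.ones(us_dim)
            + 0.02 * rng.normal(size=(n_train, us_dim)))
mr_train = np.sin(phase)[:, None] * np.ones(img_dim)  # image depends on state

def synthesize_mr(us_frame):
    """Nearest-neighbour surrogate for the learned US -> MR correlation:
    return the stored MR image with the closest ultrasound signature."""
    dists = np.linalg.norm(us_train - us_frame, axis=1)
    return mr_train[int(np.argmin(dists))]

# A new ultrasound frame at a known breathing state yields a synthetic image
# at the ultrasound frame rate, with no MR acquisition at that instant.
us_new = np.sin(0.7) * np.ones(us_dim)
mr_synth = synthesize_mr(us_new)
```

The "scannerless" mode works on the same principle: once the US-to-MR relationship is learned, only the lightweight ultrasound sensor is needed to keep producing MR-contrast frames.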

  7. Imaging patterns predict patient survival and molecular subtype in glioblastoma via machine learning techniques

    PubMed Central

    Macyszyn, Luke; Akbari, Hamed; Pisapia, Jared M.; Da, Xiao; Attiah, Mark; Pigrish, Vadim; Bi, Yingtao; Pal, Sharmistha; Davuluri, Ramana V.; Roccograndi, Laura; Dahmane, Nadia; Martinez-Lage, Maria; Biros, George; Wolf, Ronald L.; Bilello, Michel; O'Rourke, Donald M.; Davatzikos, Christos

    2016-01-01

    Background: MRI characteristics of brain gliomas have been used to predict clinical outcome and molecular tumor characteristics. However, previously reported imaging biomarkers have not been sufficiently accurate or reproducible to enter routine clinical practice and often rely on relatively simple MRI measures. The current study leverages advanced image analysis and machine learning algorithms to identify complex and reproducible imaging patterns predictive of overall survival and molecular subtype in glioblastoma (GB). Methods: One hundred five patients with GB were first used to extract approximately 60 diverse features from preoperative multiparametric MRIs. These imaging features were used by a machine learning algorithm to derive imaging predictors of patient survival and molecular subtype. Cross-validation ensured generalizability of these predictors to new patients. Subsequently, the predictors were evaluated in a prospective cohort of 29 new patients. Results: Survival curves yielded a hazard ratio of 10.64 for predicted long versus short survivors. The overall, 3-way (long/medium/short survival) accuracy in the prospective cohort approached 80%. Classification of patients into the 4 molecular subtypes of GB achieved 76% accuracy. Conclusions: By employing machine learning techniques, we were able to demonstrate that imaging patterns are highly predictive of patient survival. Additionally, we found that GB subtypes have distinctive imaging phenotypes. These results reveal that when imaging markers related to infiltration, cell density, microvascularity, and blood–brain barrier compromise are integrated via advanced pattern analysis methods, they form very accurate predictive biomarkers. These predictive markers used solely preoperative images, hence they can significantly augment diagnosis and treatment of GB patients. PMID:26188015

  8. Analysing exoplanetary data using unsupervised machine-learning

    NASA Astrophysics Data System (ADS)

    Waldmann, I. P.

    2012-04-01

    The field of transiting extrasolar planets and especially the study of their atmospheres is one of the youngest and most dynamic subjects in current astrophysics. Permanently at the edge of technical feasibility, we are successfully discovering and characterising smaller and smaller planets. To study exoplanetary atmospheres, we typically require a 10⁻⁴ to 10⁻⁵ level of accuracy in flux. Achieving such a precision has become the central challenge to exoplanetary research and is often impeded by systematic (non-Gaussian) noise from the instrument, stellar activity or both. Dedicated missions, such as Kepler, feature an a priori instrument calibration plan to the required accuracy but nonetheless remain limited by stellar systematics. More generic instruments often lack a sufficiently defined instrument response function, making them very hard to calibrate. In these cases, it becomes interesting to know how well we can calibrate the data without any additional or prior knowledge of the instrument or star. Here, we present a non-parametric machine-learning algorithm, based on the concept of independent component analysis, to de-convolve the systematic noise and all non-Gaussian signals from the desired astrophysical signal. Such 'blind' signal de-mixing is commonly known as the 'cocktail party problem' in signal processing. We showcase the importance and broad applicability of unsupervised machine learning in exoplanetary data analysis by discussing: 1) the removal of instrument systematics in a re-analysis of an HD 189733b transmission spectrum obtained with Hubble/NICMOS; 2) the removal of time-correlated stellar noise in individual light curves observed by the Kepler mission.
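
    As a toy illustration of the "cocktail party" unmixing, a two-signal ICA can be written in a few lines of NumPy: whitening followed by a kurtosis-based rotation search. The signals and mixing matrix are invented, and a real pipeline would use a full ICA implementation (e.g. FastICA) rather than this grid search:

```python
import numpy as np

t = np.linspace(0, 8, 4000, endpoint=False)

# Two independent "sources": a periodic astrophysical signal and a
# sawtooth-like instrument systematic (both hypothetical).
S = np.c_[np.sin(3 * np.pi * t), 2 * (t % 1.0) - 1.0]

# The observed light curves are unknown linear mixtures ("cocktail party").
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = S @ A.T

# Step 1: whiten the observations (zero mean, identity covariance).
Xc = X - X.mean(axis=0)
evals, evecs = np.linalg.eigh(np.cov(Xc.T))
Z = Xc @ evecs / np.sqrt(evals)

# Step 2: in 2-D, ICA reduces to finding the rotation of the whitened data
# that extremizes non-Gaussianity (here, squared excess kurtosis).
def kurt(y):
    return np.mean(y ** 4) / np.mean(y ** 2) ** 2 - 3.0

def nongaussianity(a):
    y1 = Z @ np.array([np.cos(a), np.sin(a)])
    y2 = Z @ np.array([-np.sin(a), np.cos(a)])
    return kurt(y1) ** 2 + kurt(y2) ** 2

best = max(np.linspace(0, np.pi / 2, 1000), key=nongaussianity)
S_est = Z @ np.array([[np.cos(best), -np.sin(best)],
                      [np.sin(best), np.cos(best)]])

# Recovered components match the true sources up to order, sign and scale.
corr = np.abs(np.corrcoef(S.T, S_est.T)[:2, 2:])
```

Crucially, nothing about the mixing matrix or the source shapes is assumed, which is exactly the "no prior knowledge of the instrument or star" setting described above.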

  9. MRI Detects Myocardial Iron in the Human Heart

    PubMed Central

    Ghugre, Nilesh R.; Enriquez, Cathleen M.; Gonzalez, Ignacio; Nelson, Marvin D.; Coates, Thomas D.; Wood, John C.

    2010-01-01

    Iron-induced cardiac dysfunction is a leading cause of death in transfusion-dependent anemia. The MRI relaxation rates R2 (1/T2) and R2* (1/T2*) accurately predict liver iron concentration, but their ability to predict cardiac iron has been challenged by some investigators. Studies in animal models support similar R2 and R2* behavior with heart and liver iron, but human studies are lacking. To determine the relationship between MRI relaxivities and cardiac iron, regional variations in R2 and R2* were compared with iron distribution in one freshly deceased, unfixed, iron-loaded heart. R2 and R2* were proportionally related to regional iron concentrations and highly concordant with one another within the interventricular septum. A comparison of postmortem and in vitro measurements supports the notion that cardiac R2* should be assessed in the septum rather than the whole heart. These data, along with measurements from controls, provide bounds on MRI-iron calibration curves in the human heart and further support the clinical use of cardiac MRI in iron-overload syndromes. PMID:16888797

  10. Standing on the shoulders of giants: improving medical image segmentation via bias correction.

    PubMed

    Wang, Hongzhi; Das, Sandhitsu; Pluta, John; Craige, Caryne; Altinay, Murat; Avants, Brian; Weiner, Michael; Mueller, Susanne; Yushkevich, Paul

    2010-01-01

    We propose a simple strategy to improve automatic medical image segmentation. The key idea is that without deep understanding of a segmentation method, we can still improve its performance by directly calibrating its results with respect to manual segmentation. We formulate the calibration process as a bias correction problem, which is addressed by machine learning using training data. We apply this methodology on three segmentation problems/methods and show significant improvements for all of them.
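
    The calibration idea can be sketched with segmentation volume as the summary statistic: learn a regression from the automatic method's output to the manual ground truth on training cases, then apply it to new cases. The numbers are synthetic, and the paper learns richer corrections than this scalar linear model, so this conveys only the flavor of the approach:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical paired volumes (ml): an automatic segmenter that systematically
# over-segments by ~10% plus a fixed offset, vs. manual ground truth.
manual = rng.uniform(2.0, 5.0, size=40)
auto = 1.10 * manual + 0.3 + 0.05 * rng.normal(size=40)

# "Standing on the shoulders" of the existing segmenter: learn a bias
# correction from training data (here, simple linear regression).
train = slice(0, 30)
a, b = np.polyfit(auto[train], manual[train], deg=1)

# Apply the learned correction to held-out cases.
test = slice(30, 40)
corrected = a * auto[test] + b
err_before = np.mean(np.abs(auto[test] - manual[test]))
err_after = np.mean(np.abs(corrected - manual[test]))
```

The segmenter itself is never modified or even inspected; only its outputs are calibrated against manual segmentations, which is what makes the strategy applicable to any method.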

  11. New concept on an integrated interior magnetic resonance imaging and medical linear accelerator system for radiation therapy

    PubMed Central

    Jia, Xun; Tian, Zhen; Xi, Yan; Jiang, Steve B.; Wang, Ge

    2017-01-01

    Abstract. Image guidance plays a critical role in radiotherapy. Currently, cone-beam computed tomography (CBCT) is routinely used in clinics for this purpose. While this modality can provide an attenuation image for therapeutic planning, low soft-tissue contrast affects the delineation of anatomical and pathological features. Efforts have recently been devoted to several MRI linear accelerator (LINAC) projects that lead to the successful combination of a full diagnostic MRI scanner with a radiotherapy machine. We present a new concept for the development of the MRI-LINAC system. Instead of combining a full MRI scanner with the LINAC platform, we propose using an interior MRI (iMRI) approach to image a specific region of interest (RoI) containing the radiation treatment target. While the conventional CBCT component still delivers a global image of the patient’s anatomy, the iMRI offers local imaging of high soft-tissue contrast for tumor delineation. We describe a top-level system design for the integration of an iMRI component into an existing LINAC platform. We performed numerical analyses of the magnetic field for the iMRI to show potentially acceptable field properties in a spherical RoI with a diameter of 15 cm. This field could be shielded to a sufficiently low level around the LINAC region to avoid electromagnetic interference. Furthermore, we investigate the dosimetric impacts of this integration on the radiotherapy beam. PMID:28331888

  12. A Set of Functional Brain Networks for the Comprehensive Evaluation of Human Characteristics.

    PubMed

    Sung, Yul-Wan; Kawachi, Yousuke; Choi, Uk-Su; Kang, Daehun; Abe, Chihiro; Otomo, Yuki; Ogawa, Seiji

    2018-01-01

    Many human characteristics must be evaluated to comprehensively understand an individual, and measurements of the corresponding cognition/behavior are required. Brain imaging by functional MRI (fMRI) has been widely used to examine brain function related to human cognition/behavior. However, few aspects of the cognition/behavior of individuals or experimental groups can be examined through task-based fMRI. Recently, resting state fMRI (rs-fMRI) signals have been shown to represent functional infrastructure in the brain that is highly involved in processing information related to cognition/behavior. Using rs-fMRI may allow diverse information about the brain to be obtained through a single MRI scan, as rs-fMRI does not require stimulus tasks. In this study, we attempted to identify a set of functional networks representing cognition/behavior that are related to a wide variety of human characteristics and to evaluate these characteristics using rs-fMRI data. If successful, these findings would support the potential of rs-fMRI to provide diverse information about the brain. We used resting-state fMRI and a set of 130 psychometric parameters that cover most human characteristics, including those related to intelligence and emotional quotients and social ability/skill. We identified 163 brain regions by VBM analysis using regression analysis with the 130 psychometric parameters. Next, using a 163 × 163 correlation matrix, we identified functional networks related to 111 of the 130 psychometric parameters. Finally, we constructed 8-class support vector machine classifiers corresponding to these 111 functional networks. Our results demonstrate that rs-fMRI signals contain intrinsic information about brain function related to cognition/behavior and that this set of 111 networks/classifiers can be used to comprehensively evaluate human characteristics.

  13. Kernel Principal Component Analysis for dimensionality reduction in fMRI-based diagnosis of ADHD.

    PubMed

    Sidhu, Gagan S; Asgarian, Nasimeh; Greiner, Russell; Brown, Matthew R G

    2012-01-01

    This study explored various feature extraction methods for use in automated diagnosis of Attention-Deficit Hyperactivity Disorder (ADHD) from functional Magnetic Resonance Image (fMRI) data. Each participant's data consisted of a resting state fMRI scan as well as phenotypic data (age, gender, handedness, IQ, and site of scanning) from the ADHD-200 dataset. We used machine learning techniques to produce support vector machine (SVM) classifiers that attempted to differentiate between (1) all ADHD patients vs. healthy controls and (2) ADHD combined (ADHD-c) type vs. ADHD inattentive (ADHD-i) type vs. controls. In different tests, we used only the phenotypic data, only the imaging data, or both the phenotypic and imaging data. For feature extraction on fMRI data, we tested the Fast Fourier Transform (FFT), different variants of Principal Component Analysis (PCA), and combinations of FFT and PCA. PCA variants included PCA over time (PCA-t), PCA over space and time (PCA-st), and kernelized PCA (kPCA-st). Baseline chance accuracy was 64.2%, produced by guessing healthy control (the majority class) for all participants. Using only phenotypic data produced 72.9% accuracy on two-class diagnosis and 66.8% on three-class diagnosis. Diagnosis using only imaging data did not perform as well as phenotypic-only approaches. Using both phenotypic and imaging data with combined FFT and kPCA-st feature extraction yielded accuracies of 76.0% on two-class diagnosis and 68.6% on three-class diagnosis, better than phenotypic-only approaches. Our results demonstrate the potential of using FFT and kPCA-st with resting-state fMRI data as well as phenotypic data for automated diagnosis of ADHD. These results are encouraging given the known challenges of learning ADHD diagnostic classifiers using the ADHD-200 dataset (see Brown et al., 2012).
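
    A compact sketch of the FFT-plus-kernel-PCA feature-extraction chain, on synthetic time series. The RBF kernel, its bandwidth, and the double-centering step are written out in NumPy rather than calling a library, and none of the values come from the study:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical resting-state data: 20 "participants", 128 time points each;
# half the group carries extra oscillatory power.
ts = rng.normal(size=(20, 128))
ts[:10] += 0.8 * np.sin(np.linspace(0, 16 * np.pi, 128))

# Step 1 (FFT features): power spectra, as in the paper's FFT variant.
power = np.abs(np.fft.rfft(ts, axis=1)) ** 2          # shape (20, 65)

# Step 2 (kernel PCA): RBF kernel matrix, centering in feature space,
# eigendecomposition, and projection onto the leading components.
gamma = 1.0 / (power.shape[1] * power.var())          # heuristic bandwidth
sq = np.sum(power ** 2, axis=1)
K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * power @ power.T))
n = K.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J                                        # double centering
evals, evecs = np.linalg.eigh(Kc)
top = np.argsort(evals)[::-1][:5]
features = evecs[:, top] * np.sqrt(np.abs(evals[top]))  # 5-D embedding
```

The resulting low-dimensional `features` (optionally concatenated with phenotypic variables) are what an SVM would then be trained on.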

  14. Quantitative performance evaluation of 124I PET/MRI lesion dosimetry in differentiated thyroid cancer

    NASA Astrophysics Data System (ADS)

    Wierts, R.; Jentzen, W.; Quick, H. H.; Wisselink, H. J.; Pooters, I. N. A.; Wildberger, J. E.; Herrmann, K.; Kemerink, G. J.; Backes, W. H.; Mottaghy, F. M.

    2018-01-01

    The aim was to investigate the quantitative performance of 124I PET/MRI for pre-therapy lesion dosimetry in differentiated thyroid cancer (DTC). Phantom measurements were performed on a PET/MRI system (Biograph mMR, Siemens Healthcare) using 124I and 18F. The PET calibration factor and the influence of radiofrequency coil attenuation were determined using a cylindrical phantom homogeneously filled with radioactivity. The calibration factor was 1.00 ± 0.02 for 18F and 0.88 ± 0.02 for 124I. Near the radiofrequency surface coil an underestimation of less than 5% in radioactivity concentration was observed. Soft-tissue sphere recovery coefficients were determined using the NEMA IEC body phantom. Recovery coefficients were systematically higher for 18F than for 124I. In addition, the six spheres of the phantom were segmented using a PET-based iterative segmentation algorithm. For all 124I measurements, the deviations in segmented lesion volume and mean radioactivity concentration relative to the actual values were smaller than 15% and 25%, respectively. The effect of MR-based attenuation correction (three- and four-segment µ-maps) on bone lesion quantification was assessed using radioactive spheres filled with a K2HPO4 solution mimicking bone lesions. The four-segment µ-map resulted in an underestimation of the imaged radioactivity concentration of up to 15%, whereas the three-segment µ-map resulted in an overestimation of up to 10%. For twenty lesions identified in six patients, a comparison of 124I PET/MRI to PET/CT was performed with respect to segmented lesion volume and radioactivity concentration. The intraclass correlation coefficients showed excellent agreement in segmented lesion volume and radioactivity concentration (0.999 and 0.95, respectively). In conclusion, it is feasible that accurate quantitative 124I PET/MRI could be used to perform radioiodine pre-therapy lesion dosimetry in DTC.

  15. Separation of parallel encoded complex-valued slices (SPECS) from a single complex-valued aliased coil image.

    PubMed

    Rowe, Daniel B; Bruce, Iain P; Nencka, Andrew S; Hyde, James S; Kociuba, Mary C

    2016-04-01

    Achieving a reduction in scan time with minimal inter-slice signal leakage is one of the significant obstacles in parallel MR imaging. In fMRI, multiband imaging techniques accelerate data acquisition by simultaneously magnetizing the spatial frequency spectrum of multiple slices. The SPECS model eliminates the consequential inter-slice signal leakage from the slice unaliasing, while maintaining an optimal reduction in scan time and activation statistics in fMRI studies. When the combined k-space array is inverse Fourier reconstructed, the resulting aliased image is separated into the un-aliased slices through a least squares estimator. Without the additional spatial information from a phased array of receiver coils, slice separation in SPECS is accomplished with aliased images acquired in a shifted-FOV aliasing pattern and a bootstrapping approach that incorporates reference calibration images in an orthogonal Hadamard pattern. The aliased slices are effectively separated with minimal expense to the spatial and temporal resolution. Functional activation is observed in the motor cortex, as the number of aliased slices is increased, in a bilateral finger-tapping fMRI experiment. The SPECS model incorporates calibration reference images together with coefficients of orthogonal polynomials into an un-aliasing estimator to achieve separated images, with virtually no residual artifacts and with functional activation detection preserved in the separated images. Copyright © 2015 Elsevier Inc. All rights reserved.
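
    The Hadamard-style separation at the heart of such unaliasing can be sketched per voxel. This is a noiseless two-slice toy: the actual model additionally uses shifted-FOV aliasing and orthogonal-polynomial coefficients, and with noise or more slices the inversion becomes a genuine least-squares estimate:

```python
import numpy as np

rng = np.random.default_rng(6)
n_vox = 64 * 64

# Two hypothetical complex-valued slice images (flattened).
s1 = rng.normal(size=n_vox) + 1j * rng.normal(size=n_vox)
s2 = rng.normal(size=n_vox) + 1j * rng.normal(size=n_vox)

# Hadamard-encoded acquisitions: sum and difference of the two slices.
y_plus = s1 + s2
y_minus = s1 - s2

# Per-voxel unaliasing: solve the 2x2 encoding system at every voxel.
A = np.array([[1.0, 1.0], [1.0, -1.0]])
S_hat = np.linalg.solve(A, np.vstack([y_plus, y_minus]))
```

Because the Hadamard encoding matrix is orthogonal, the separation is perfectly conditioned, which is why leakage between the recovered slices can be driven so low.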

  16. Relaxivity-iron calibration in hepatic iron overload: Probing underlying biophysical mechanisms using a Monte Carlo model

    PubMed Central

    Ghugre, Nilesh R.; Wood, John C.

    2010-01-01

    Iron overload is a serious condition for patients with β-thalassemia, transfusion-dependent sickle cell anemia and inherited disorders of iron metabolism. MRI is becoming increasingly important in the non-invasive quantification of tissue iron, overcoming the drawbacks of traditional techniques (liver biopsy). R2* (1/T2*) rises linearly with iron, while R2 (1/T2) has a curvilinear relationship in human liver. Although recent work has demonstrated clinically valid estimates of human liver iron, the calibration varies with MRI sequence, field strength, iron chelation therapy and organ imaged, forcing recalibration in patients. To understand and correct these limitations, a thorough understanding of the underlying biophysics is of critical importance. Toward this end, a Monte Carlo based approach, using human liver as a 'model' tissue system, was employed to determine the contribution of particle size and distribution to MRI signal relaxation. Relaxivities were determined for hepatic iron concentrations (HIC) ranging from 0.5 to 40 mg iron/g dry tissue weight. Model predictions captured the linear and curvilinear relationships of R2* and R2 with HIC, respectively, and were within in vivo confidence bounds; contact or chemical exchange mechanisms were not necessary. A validated and optimized model will aid understanding and quantification of iron-mediated relaxivity in tissues where biopsy is not feasible (heart, spleen). PMID:21337413
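
    The general shape of such a simulation can be sketched in a toy 2-D version: protons random-walk through a random field of iron-induced frequency offsets, accumulate phase, and the net transverse signal decays. All units and field shapes below are arbitrary inventions; the paper's validated model is 3-D and physically scaled:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy Monte Carlo: protons diffusing among point-like field perturbers.
n_protons, n_steps, dt = 2000, 200, 1e-4          # time step in s
particles = rng.uniform(0, 1, size=(50, 2))       # perturbers in a unit box

def field(pos):
    """Frequency offset (rad/s) from all perturbers, capped 1/r^2 falloff."""
    d2 = np.sum((pos[:, None, :] - particles[None, :, :]) ** 2, axis=2)
    return np.sum(1.0 / (d2 + 0.01), axis=1)

pos = rng.uniform(0, 1, size=(n_protons, 2))
phase = np.zeros(n_protons)
signal = []
for _ in range(n_steps):
    pos = (pos + 0.01 * rng.normal(size=pos.shape)) % 1.0   # diffusion step
    phase += field(pos) * dt                                # phase accrual
    signal.append(np.abs(np.mean(np.exp(1j * phase))))      # net signal

# Effective relaxation rate from a mono-exponential fit to the decay.
tvec = dt * np.arange(1, n_steps + 1)
r2_eff = -np.polyfit(tvec, np.log(signal), 1)[0]
```

Sweeping the perturber density (standing in for HIC) and re-fitting `r2_eff` is how a model of this kind traces out a relaxivity-iron calibration curve.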

  17. Left and right brain-oriented hemisity subjects show opposite behavioral preferences.

    PubMed

    Morton, Bruce E

    2012-01-01

    Recently, three independent, intercorrelated biophysical measures have provided the first quantitative measures of a binary form of behavioral laterality called "Hemisity," a term referring to inherent opposite right or left brain-oriented differences in thinking and behavioral styles. Crucially, the right or left brain-orientation of individuals assessed by these methods was later found to be essentially congruent with the thicker side of their ventral gyrus of the anterior cingulate cortex (vgACC) as revealed by a 3 min MRI procedure. Laterality of this putative executive structural element has thus become the primary standard defining individual hemisity. Here, the behavior of 150 subjects, whose hemisity had been calibrated by MRI, was assessed using five MRI-calibrated preference questionnaires, two of which were new. Right and left brain-oriented subjects selected opposite answers (p < 0.05) for 47 of the 107 "either-or," forced-choice preference questionnaire items. The resulting 30 hemisity subtype preference differences were present in several areas. These were: (1) in logical orientation, (2) in type of consciousness, (3) in fear level and sensitivity, (4) in social-professional orientation, and (5) in pair bonding-spousal dominance style. The right and left brain-oriented hemisity subtype subjects, sorted anatomically by which side of the brain their vgACC was thicker, showed 30 significant differences in their "either-or" type of behavioral preferences.

  18. Real-time distortion correction of spiral and echo planar images using the gradient system impulse response function.

    PubMed

    Campbell-Washburn, Adrienne E; Xue, Hui; Lederman, Robert J; Faranesh, Anthony Z; Hansen, Michael S

    2016-06-01

    MRI-guided interventions demand high frame rate imaging, making fast imaging techniques such as spiral imaging and echo planar imaging (EPI) appealing. In this study, we implemented a real-time distortion correction framework to enable the use of these fast acquisitions for interventional MRI. Distortions caused by gradient waveform inaccuracies were corrected using the gradient impulse response function (GIRF), which was measured by standard equipment and saved as a calibration file on the host computer. This file was used at runtime to calculate the predicted k-space trajectories for image reconstruction. Additionally, the off-resonance reconstruction frequency was modified in real time to interactively deblur spiral images. Real-time distortion correction for arbitrary image orientations was achieved in phantoms and healthy human volunteers. The GIRF-predicted k-space trajectories matched measured k-space trajectories closely for spiral imaging. Spiral and EPI image distortion was visibly improved using the GIRF-predicted trajectories. The GIRF calibration file showed no systematic drift over 4 months and was demonstrated to correct distortions after 30 min of continuous scanning despite gradient heating. Interactive off-resonance reconstruction was used to sharpen anatomical boundaries during continuous imaging. This real-time distortion correction framework will enable the use of these high frame rate imaging methods for MRI-guided interventions. Magn Reson Med 75:2278-2285, 2016. © 2015 Wiley Periodicals, Inc.
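    The trajectory-prediction step reduces to convolving the nominal gradient waveform with the measured impulse response and integrating to k-space. The trapezoidal waveform and the exponential "GIRF" below are hypothetical stand-ins for measured scanner data; only the structure of the computation follows the abstract.

```python
import numpy as np

gamma = 2 * np.pi * 42.577e6   # proton gyromagnetic ratio, rad/s/T
dt = 4e-6                      # gradient raster time, s

# Nominal trapezoidal readout gradient in T/m (illustrative numbers)
g_nom = np.concatenate([np.linspace(0.0, 0.02, 50),
                        np.full(100, 0.02),
                        np.linspace(0.02, 0.0, 50)])

# Toy GIRF: a short low-pass impulse response with unit DC gain
# (a real GIRF is measured on the scanner, as in the study)
girf = np.exp(-np.arange(20) / 5.0)
girf /= girf.sum()

# Predicted (actual) gradient = nominal waveform convolved with the GIRF
g_pred = np.convolve(g_nom, girf)[:g_nom.size]

# Predicted k-space trajectory = gamma * running integral of the gradient
k_pred = gamma * np.cumsum(g_pred) * dt
```

    The low-pass GIRF rounds the trapezoid's corners, which is the kind of gradient-chain smoothing that shifts the true trajectory away from the nominal one.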

  19. Real-time distortion correction of spiral and echo planar images using the gradient system impulse response function

    PubMed Central

    Campbell-Washburn, Adrienne E; Xue, Hui; Lederman, Robert J; Faranesh, Anthony Z; Hansen, Michael S

    2015-01-01

    Purpose MRI-guided interventions demand high frame-rate imaging, making fast imaging techniques such as spiral imaging and echo planar imaging (EPI) appealing. In this study, we implemented a real-time distortion correction framework to enable the use of these fast acquisitions for interventional MRI. Methods Distortions caused by gradient waveform inaccuracies were corrected using the gradient impulse response function (GIRF), which was measured by standard equipment and saved as a calibration file on the host computer. This file was used at runtime to calculate the predicted k-space trajectories for image reconstruction. Additionally, the off-resonance reconstruction frequency was modified in real-time to interactively deblur spiral images. Results Real-time distortion correction for arbitrary image orientations was achieved in phantoms and healthy human volunteers. The GIRF-predicted k-space trajectories matched measured k-space trajectories closely for spiral imaging. Spiral and EPI image distortion was visibly improved using the GIRF-predicted trajectories. The GIRF calibration file showed no systematic drift over 4 months and was demonstrated to correct distortions after 30 minutes of continuous scanning despite gradient heating. Interactive off-resonance reconstruction was used to sharpen anatomical boundaries during continuous imaging. Conclusions This real-time distortion correction framework will enable the use of these high frame-rate imaging methods for MRI-guided interventions. PMID:26114951

  20. Magnetic resonance imaging following InterStim®: an institutional experience with imaging safety and patient satisfaction.

    PubMed

    Chermansky, Christopher J; Krlin, Ryan M; Holley, Thomas D; Woo, Howard H; Winters, J Christian

    2011-11-01

    We retrospectively assessed patient safety and satisfaction after magnetic resonance imaging (MRI) in patients with an InterStim® unit. The records of all patients implanted with InterStim® between 1998 and 2006 were reviewed. Nine of these patients underwent MRI following InterStim® implantation. The patients' neurologists requested the MRI exams for medical reasons. Both 0.6 Tesla (T) and 1.5 T machines were used. Patient safety, interference of the implanted pulse generator (IPG) with radiological interpretation, and patient satisfaction were assessed in these patients. The first patient in the series had IPG failure following MRI. For this patient, the voltage amplitude was set to zero, the IPG was turned off, and the IPG magnetic switch was left on. The patient underwent MRI uneventfully; however, the IPG did not function upon reprogramming. The IPG magnetic switch was turned off for the eight subsequent patients, all of whom underwent MRI safely. In addition, all of their IPGs functioned appropriately following reprogramming. Of the 15 MRIs performed, the lumbar spine was imaged in eight studies, the pelvis was imaged in one study, and the remaining examinations involved imaging the brain or cervical spine. Neither the IPG nor the sacral leads interfered with MRI interpretation. None of the eight patients reported a change in perception or satisfaction following MRI. Although we do not advocate the routine use of MRI following InterStim® implantation, our experience suggests MRI may be feasible under controlled conditions and without adverse events. Copyright © 2011 Wiley Periodicals, Inc.

  1. Smart Cutting Tools and Smart Machining: Development Approaches, and Their Implementation and Application Perspectives

    NASA Astrophysics Data System (ADS)

    Cheng, Kai; Niu, Zhi-Chao; Wang, Robin C.; Rakowski, Richard; Bateman, Richard

    2017-09-01

    Smart machining has tremendous potential and is becoming one of the new generation of high-value precision manufacturing technologies, in line with the advance of Industry 4.0 concepts. This paper presents some innovative design concepts and, in particular, the development of four types of smart cutting tools: a force-based smart cutting tool, a temperature-based internally-cooled cutting tool, a fast tool servo (FTS), and smart collets for ultraprecision and micro-manufacturing purposes. Implementation and application perspectives of these smart cutting tools are explored and discussed, particularly for smart machining against a number of industrial application requirements, including contamination-free machining, machining of tool-wear-prone Si-based infrared devices and medical applications, and high-speed micro-milling and micro-drilling. Furthermore, implementation techniques are presented focusing on: (a) the plug-and-produce design principle and the associated smart control algorithms, (b) piezoelectric film and surface acoustic wave transducers to measure cutting forces in process, (c) critical cutting temperature control in real-time machining, (d) in-process calibration through machining trials, (e) FE-based design and analysis of smart cutting tools, and (f) application exemplars on adaptive smart machining.

  2. TU-F-CAMPUS-J-05: Fast Volumetric MRI On An MRI-Linac Enables On-Line QA On Dose Deposition in the Patient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crijns, S; Glitzner, M; Kontaxis, C

    Purpose: The introduction of the MRI-linac in radiotherapy brings MRI-guided treatment with daily plan adaptations within reach. This paradigm demands on-line QA. With its ability to perform continuous volumetric imaging with outstanding soft-tissue contrast, the MRI-linac promises to elucidate the dose deposition process during a treatment session. Here we study, for a prostate case, how dynamic MRI combined with linac machine parameters and a fast dose engine can be used for on-line dose accumulation. Methods: Prostate imaging was performed in a healthy volunteer on a 1.5T MR scanner (Philips, Best, NL) according to a clinical MR-sim protocol, followed by 10 min of dynamic imaging (FLASH, 4s/volume, FOV 40×40×12 cm³, voxels 3×3×3 mm³, TR/TE/α=3.5ms/1.7ms/5°). An experienced radiation oncologist made delineations, considering the prostate CTV. Planning was performed on a two-compartment pseudo-CT (air/water density) according to clinical constraints (77Gy in PTV) using a Monte Carlo (MC) based TPS that accounts for magnetic fields. Delivery of one fraction (2.2Gy) was simulated on an emulator for the Axesse linac (Elekta, Stockholm, SE). Machine parameters (MLC settings, gantry angle, dose rate, etc.) were recorded at 25Hz. These were re-grouped per dynamic volume and fed into the MC engine to calculate the dose delivered for each of the dynamics. Deformations derived from non-rigid registration of each dynamic against the first allowed dose accumulation on a common reference grid. Results: The DVH parameters on the PTV showed little change compared to the optimized plan. Local deformations, however, resulted in local deviations, primarily around the air/rectum interface. This clearly indicates the potential of intra-fraction adaptations based on the accumulated dose. Application in each fraction helps to track the influence of plan adaptations on the eventual dose distribution. Calculation times were about twice the delivery time. Conclusion: These results pave the way to performing on-line treatment delivery QA on the MRI-linac in the near future.
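    The re-grouping of 25 Hz machine-parameter logs into 4 s imaging dynamics is essentially a binning step; it can be sketched as below. The one-minute log and the constant dose rate are hypothetical values for illustration only.

```python
import numpy as np

log_rate = 25.0          # machine parameters logged at 25 Hz
dyn_time = 4.0           # one dynamic MRI volume every 4 s
n_samples = int(log_rate * 60)          # one minute of logging (illustrative)

t = np.arange(n_samples) / log_rate
dose_rate = np.full(n_samples, 2.0)     # hypothetical constant dose rate, MU/s

# Assign every log entry to the dynamic volume being acquired at that time
dyn_idx = (t // dyn_time).astype(int)
n_dyn = int(dyn_idx.max()) + 1

# Dose delivered during each dynamic = sum of (dose rate x sample interval)
dose_per_dyn = np.bincount(dyn_idx, weights=dose_rate / log_rate, minlength=n_dyn)
```

    Each per-dynamic dose would then be computed on the corresponding deformed anatomy and warped onto the common reference grid for accumulation.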

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Prinzio, Renato; Almeida, Carlos Eduardo de; Laboratorio de Ciencias Radiologicas-Universidade do Estado do Rio de Janeiro

    In Brazil there are over 100 high dose rate (HDR) brachytherapy facilities using well-type chambers for the determination of the air kerma rate of ¹⁹²Ir sources. This paper presents the methodology developed and extensively tested by the Laboratorio de Ciencias Radiologicas (LCR) and presently in use to calibrate those types of chambers. The system was initially used to calibrate six well-type chambers of brachytherapy services, and a maximum deviation of only 1.0% was observed between the calibration coefficients obtained and the ones in the calibration certificates provided by the University of Wisconsin Accredited Dosimetry Calibration Laboratory (UWADCL). In addition to its traceability to the Brazilian National Standards, the whole system was taken to the UWADCL for a direct comparison, and the same formalism to calculate the air kerma was used. The comparison results between the two laboratories show an agreement of 0.9% for the calibration coefficients. Three Brazilian well-type chambers were calibrated at the UWADCL and by the LCR, in Brazil, using the developed system and a clinical HDR machine. The results of the calibration of the three well chambers have shown an agreement better than 1.0%. Uncertainty analyses involving the measurements made both at the UWADCL and LCR laboratories are discussed.
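    The comparison underlying such an agreement figure reduces to computing each laboratory's air-kerma calibration coefficient (reference air-kerma rate divided by the corrected chamber reading) and the percent deviation between them. The readings and correction factor below are hypothetical, chosen only so the sketch lands near the reported ~1% level; they are not the paper's data.

```python
def calibration_coefficient(k_ref, m_raw, k_tp=1.0):
    """Air-kerma calibration coefficient N_K = K_ref / (corrected reading)."""
    return k_ref / (m_raw * k_tp)

def percent_deviation(n_a, n_b):
    return 100.0 * abs(n_a - n_b) / n_b

# Hypothetical readings of one well chamber at two laboratories
n_lcr = calibration_coefficient(k_ref=4.0e4, m_raw=8.25e-3, k_tp=1.002)
n_uw = calibration_coefficient(k_ref=4.0e4, m_raw=8.18e-3, k_tp=1.001)
dev = percent_deviation(n_lcr, n_uw)
```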

  4. Tracked ultrasound calibration studies with a phantom made of LEGO bricks

    NASA Astrophysics Data System (ADS)

    Soehl, Marie; Walsh, Ryan; Rankin, Adam; Lasso, Andras; Fichtinger, Gabor

    2014-03-01

    In this study, spatial calibration of tracked ultrasound was compared by using a calibration phantom made of LEGO® bricks and two 3-D printed N-wire phantoms. METHODS: The accuracy and variance of calibrations were compared under a variety of operating conditions. Twenty trials were performed using an electromagnetic tracking device with a linear probe and three trials were performed using varied probes, varied tracking devices and the three aforementioned phantoms. The accuracy and variance of spatial calibrations found through the standard deviation and error of the 3-D image reprojection were used to compare the calibrations produced from the phantoms. RESULTS: This study found no significant difference between the measured variables of the calibrations. The average standard deviation of multiple 3-D image reprojections with the highest-performing printed phantom and those from the phantom made of LEGO® bricks differed by 0.05 mm and the error of the reprojections differed by 0.13 mm. CONCLUSION: Given that the phantom made of LEGO® bricks is significantly less expensive, more readily available, and more easily modified than precision-machined N-wire phantoms, it promises to be a viable calibration tool, especially for quick laboratory research and proof-of-concept implementations of tracked ultrasound navigation.

  5. Diffusion and utilization of magnetic resonance imaging in Asia.

    PubMed

    Hutubessy, Raymond C W; Hanvoravongchai, Piya; Edejer, Tessa Tan-Torres

    2002-01-01

    An assessment of the current status of magnetic resonance imaging (MRI) was undertaken to provide input for future government decisions on the introduction of new technologies in Asia. The objective of the study is to describe and explain the diffusion pattern of this costly technology in several Asian settings. Data on the diffusion pattern of MRI for different Asian countries (the Republic of Korea, Malaysia, Indonesia, the Philippines and Thailand) and regions (the cities of Shanghai and Hong Kong in China and the state of Tamil Nadu in India) were obtained from national representatives of professional bodies by using standardized questionnaires for the year 1997-98. In addition, utilization data were collected at the hospital level in three countries before and after the economic crisis in the region. For four countries plus Hong Kong, background information on the legal framework for "big ticket" technologies was collected. Since the introduction of the first MRI in the region in 1987, the number of MRIs has gradually increased both in public and private facilities in Asia. In 1998 the average number of MRI machines installed varied from less than 0.5 machine per million population to more than 5 machines per million population. The maintenance and operating costs, and not the absence of regulation, account for the low number of MRIs in the Philippines and Malaysia. Overall, installed MRIs have low magnetic field strength, vary with respect to brand and type, and are mostly in the private sector and in the urban areas of the region. The diffusion pattern of MRIs in countries of the Asian region appears to follow two types of patterns of diffusion: one set of countries seems to be composed of mostly early adopters and another set of countries appears to be composed mostly of late adopters. 
    The total number of MRIs per population in this region, though quite small compared with most OECD countries, reflects a higher share of each country's health resources devoted to expensive high-technology devices. It is difficult to state the appropriate number of MRIs for each country; however, the study shows that there are observable problems in terms of efficiency, equity, and quality of MRI services. The research team proposes a few key recommendations to counteract these problems. Purchasing and regulatory bodies must be empowered with skill and knowledge of health technology assessment. Likewise, the fundamental problems resulting from inefficient and unfair health financing should not be overlooked, so that there is more equitable use of the technology.

  6. Metrological Characterization of the Vickers Hardness Primary Standard Machine Established at CSIR-NPL

    NASA Astrophysics Data System (ADS)

    Titus, S. Seelakumar; Vikram; Girish; Jain, Sushil Kumar

    2018-06-01

    CSIR-National Physical Laboratory (CSIR-NPL) is the National Metrological Institute (NMI) of India, which has the mandate for the realization of SI units of measurement and their dissemination to user organizations. CSIR-NPL has established a hardness standardizing machine for realizing the Vickers hardness scale as per the ISO 6507-3 standard, providing national traceability in hardness measurement. Direct verification of the machine has been carried out by measuring the uncertainty in the generated force, the indenter geometry and the indentation measuring system. From these measurements, it is found that the machine exhibits a calibration and measurement capability (CMC) of ±1.5% for the HV1–HV3 scales, ±1.0% for the HV5–HV50 scales, and ±0.8% for the HV100 scale.
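    For context, the Vickers number itself follows a simple closed form: HV = 0.1891·F/d², with the applied force F in newtons and the mean indentation diagonal d in millimetres (the 0.1891 constant is the standard geometric factor for the 136° indenter with F in N). The HV30 example values below are illustrative.

```python
def vickers_hv(force_n, diagonal_mm):
    """Vickers hardness number: HV = 0.1891 * F / d^2,
    with F in newtons and d the mean indentation diagonal in mm."""
    return 0.1891 * force_n / diagonal_mm ** 2

# Illustrative HV30 measurement: 30 kgf = 294.2 N, 0.5 mm mean diagonal
hv = vickers_hv(294.2, 0.5)
```

    Because HV scales with 1/d², a relative error in the diagonal measurement roughly doubles in the hardness value, which is why the indentation measuring system features in the uncertainty budget above.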

  7. Machine learning for neuroimaging with scikit-learn.

    PubMed

    Abraham, Alexandre; Pedregosa, Fabian; Eickenberg, Michael; Gervais, Philippe; Mueller, Andreas; Kossaifi, Jean; Gramfort, Alexandre; Thirion, Bertrand; Varoquaux, Gaël

    2014-01-01

    Statistical machine learning methods are increasingly used for neuroimaging data analysis. Their main virtue is their ability to model high-dimensional datasets, e.g., multivariate analysis of activation images or resting-state time series. Supervised learning is typically used in decoding or encoding settings to relate brain images to behavioral or clinical observations, while unsupervised learning can uncover hidden structures in sets of images (e.g., resting state functional MRI) or find sub-populations in large cohorts. By considering different functional neuroimaging applications, we illustrate how scikit-learn, a Python machine learning library, can be used to perform some key analysis steps. Scikit-learn contains a very large set of statistical learning algorithms, both supervised and unsupervised, and its application to neuroimaging data provides a versatile tool to study the brain.
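    A minimal decoding example in the spirit described, using scikit-learn on synthetic "activation images"; the data dimensions, effect size, and the particular pipeline (standardization plus a linear SVM) are illustrative choices, not the paper's.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic "activation images": 200 samples x 500 voxels, two conditions,
# with a modest effect confined to the first 20 voxels (values hypothetical)
rng = np.random.default_rng(42)
y = np.repeat([0, 1], 100)
X = rng.normal(size=(200, 500))
X[y == 1, :20] += 1.0

# Decoding pipeline: standardize voxel values, then a linear SVM
clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
scores = cross_val_score(clf, X, y, cv=5)
mean_acc = scores.mean()
```

    The pipeline/cross-validation pattern shown here is the key scikit-learn idiom for decoding: it keeps the scaler fit inside each training fold, avoiding leakage into the test fold.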

  8. Machine learning for neuroimaging with scikit-learn

    PubMed Central

    Abraham, Alexandre; Pedregosa, Fabian; Eickenberg, Michael; Gervais, Philippe; Mueller, Andreas; Kossaifi, Jean; Gramfort, Alexandre; Thirion, Bertrand; Varoquaux, Gaël

    2014-01-01

    Statistical machine learning methods are increasingly used for neuroimaging data analysis. Their main virtue is their ability to model high-dimensional datasets, e.g., multivariate analysis of activation images or resting-state time series. Supervised learning is typically used in decoding or encoding settings to relate brain images to behavioral or clinical observations, while unsupervised learning can uncover hidden structures in sets of images (e.g., resting state functional MRI) or find sub-populations in large cohorts. By considering different functional neuroimaging applications, we illustrate how scikit-learn, a Python machine learning library, can be used to perform some key analysis steps. Scikit-learn contains a very large set of statistical learning algorithms, both supervised and unsupervised, and its application to neuroimaging data provides a versatile tool to study the brain. PMID:24600388

  9. Rey's Auditory Verbal Learning Test scores can be predicted from whole brain MRI in Alzheimer's disease.

    PubMed

    Moradi, Elaheh; Hallikainen, Ilona; Hänninen, Tuomo; Tohka, Jussi

    2017-01-01

    Rey's Auditory Verbal Learning Test (RAVLT) is a powerful neuropsychological tool for testing episodic memory, which is widely used for cognitive assessment in dementia and pre-dementia conditions. Several studies have shown that an impairment in RAVLT scores reflects well the underlying pathology caused by Alzheimer's disease (AD), thus making RAVLT an effective early marker to detect AD in persons with memory complaints. We investigated the association between RAVLT scores (RAVLT Immediate and RAVLT Percent Forgetting) and the structural brain atrophy caused by AD. The aim was to comprehensively study to what extent the RAVLT scores are predictable based on structural magnetic resonance imaging (MRI) data using machine learning approaches as well as to find the most important brain regions for the estimation of RAVLT scores. For this, we built a predictive model to estimate RAVLT scores from gray matter density via an elastic net penalized linear regression model. The proposed approach provided highly significant cross-validated correlations between the estimated and observed RAVLT Immediate (R = 0.50) and RAVLT Percent Forgetting (R = 0.43) in a dataset consisting of 806 AD, mild cognitive impairment (MCI) or healthy subjects. In addition, the selected machine learning method provided more accurate estimates of RAVLT scores than the relevance vector regression used earlier for the estimation of RAVLT based on MRI data. The top predictors were medial temporal lobe structures and amygdala for the estimation of RAVLT Immediate and angular gyrus, hippocampus and amygdala for the estimation of RAVLT Percent Forgetting. Further, the conversion of MCI subjects to AD within 3 years could be predicted based on either observed or estimated RAVLT scores with an accuracy comparable to MRI-based biomarkers.
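    The elastic net estimation step can be sketched on synthetic data; the dimensions, sparsity pattern, and noise level below are hypothetical, and scikit-learn's `ElasticNetCV` stands in for the authors' penalized regression.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

# Synthetic stand-in for gray-matter maps: 300 subjects x 1000 "voxels",
# with a sparse set of 10 informative regions (dimensions hypothetical)
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 1000))
w = np.zeros(1000)
w[:10] = 0.5
score = X @ w + rng.normal(scale=0.5, size=300)   # stand-in memory score

# Elastic net: L1 term gives sparsity (region selection), L2 term stability
model = ElasticNetCV(l1_ratio=0.5, n_alphas=20, cv=3, random_state=0)
model.fit(X, score)
r = np.corrcoef(model.predict(X), score)[0, 1]
n_selected = int(np.sum(model.coef_ != 0))
```

    The nonzero entries of `model.coef_` are what would be mapped back to brain regions to read off the "top predictors".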

  10. Hierarchical feature representation and multimodal fusion with deep learning for AD/MCI diagnosis.

    PubMed

    Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang

    2014-11-01

    For the last decade, it has been shown that neuroimaging can be a potential tool for the diagnosis of Alzheimer's Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), and also fusion of different modalities can further provide the complementary information to enhance diagnostic accuracy. Here, we focus on the problems of both feature representation and fusion of multimodal information from Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). To the best of our knowledge, the previous methods in the literature mostly used hand-crafted features such as cortical thickness, gray matter densities from MRI, or voxel intensities from PET, and then combined these multimodal features by simply concatenating into a long vector or transforming into a higher-dimensional kernel space. In this paper, we propose a novel method for a high-level latent and shared feature representation from neuroimaging modalities via deep learning. Specifically, we use a Deep Boltzmann Machine (DBM), a deep network with a restricted Boltzmann machine as a building block, to find a latent hierarchical feature representation from a 3D patch, and then devise a systematic method for a joint feature representation from the paired patches of MRI and PET with a multimodal DBM. To validate the effectiveness of the proposed method, we performed experiments on the ADNI dataset and compared with the state-of-the-art methods. In three binary classification problems of AD vs. healthy Normal Control (NC), MCI vs. NC, and MCI converter vs. MCI non-converter, we obtained the maximal accuracies of 95.35%, 85.67%, and 74.58%, respectively, outperforming the competing methods. By visual inspection of the trained model, we observed that the proposed method could hierarchically discover the complex latent patterns inherent in both MRI and PET. Copyright © 2014 Elsevier Inc. All rights reserved.
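    A true DBM is trained jointly across layers; a simpler, classic approximation is greedy layer-wise stacking of restricted Boltzmann machines, which can be sketched with scikit-learn's `BernoulliRBM`. The toy binary patches and layer sizes below are hypothetical, and this is emphatically not the authors' multimodal DBM, only the RBM building block it names.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

# Toy binary "patches" standing in for 3D MRI/PET patches
rng = np.random.default_rng(0)
X = (rng.random((200, 64)) < 0.3).astype(float)

# Greedy layer-wise stack of two RBMs: each layer learns a latent
# representation of the layer below (the classic pre-training recipe)
rbm1 = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=10, random_state=0)
h1 = rbm1.fit_transform(X)
rbm2 = BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=10, random_state=0)
h2 = rbm2.fit_transform(h1)
```

    `fit_transform` returns the hidden-unit activation probabilities, so `h2` is the hierarchical feature representation a downstream classifier would consume.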

  11. Decoding the individual finger movements from single-trial functional magnetic resonance imaging recordings of human brain activity.

    PubMed

    Shen, Guohua; Zhang, Jing; Wang, Mengxing; Lei, Du; Yang, Guang; Zhang, Shanmin; Du, Xiaoxia

    2014-06-01

    Multivariate pattern classification analysis (MVPA) has been applied to functional magnetic resonance imaging (fMRI) data to decode brain states from spatially distributed activation patterns. Decoding upper limb movements from non-invasively recorded human brain activation is crucial for implementing a brain-machine interface that directly harnesses an individual's thoughts to control external devices or computers. The aim of this study was to decode the individual finger movements from fMRI single-trial data. Thirteen healthy human subjects participated in a visually cued delayed finger movement task, and only one slight button press was performed in each trial. Using MVPA, the decoding accuracy (DA) was computed separately for the different motor-related regions of interest. For the construction of feature vectors, the feature vectors from two successive volumes in the image series for a trial were concatenated. With these spatial-temporal feature vectors, we obtained a 63.1% average DA (84.7% for the best subject) for the contralateral primary somatosensory cortex and a 46.0% average DA (71.0% for the best subject) for the contralateral primary motor cortex; both of these values were significantly above the chance level (20%). In addition, we implemented searchlight MVPA to search for informative regions in an unbiased manner across the whole brain. Furthermore, by applying searchlight MVPA to each volume of a trial, we visually demonstrated the information for decoding, both spatially and temporally. The results suggest that the non-invasive fMRI technique may provide informative features for decoding individual finger movements and the potential of developing an fMRI-based brain-machine interface for finger movement. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
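    The spatial-temporal feature construction (concatenating two successive volumes per trial) and an above-chance five-class decode can be sketched as follows; the data, effect size, and the nearest-centroid classifier are synthetic stand-ins for the study's fMRI data and MVPA classifier.

```python
import numpy as np

rng = np.random.default_rng(1)
n_per, n_vox = 30, 50          # 30 trials per finger, 50 "voxels" per volume
X_parts, y_parts = [], []
for finger in range(5):
    # two successive volumes per trial; each finger activates its own voxels
    v1 = rng.normal(size=(n_per, n_vox)); v1[:, finger*5:(finger+1)*5] += 1.5
    v2 = rng.normal(size=(n_per, n_vox)); v2[:, finger*5:(finger+1)*5] += 1.5
    X_parts.append(np.concatenate([v1, v2], axis=1))  # spatial-temporal features
    y_parts.append(np.full(n_per, finger))
X, y = np.vstack(X_parts), np.concatenate(y_parts)

# Hold out the last 10 trials of each finger for testing
train = np.concatenate([np.arange(f*n_per, f*n_per + 20) for f in range(5)])
test = np.setdiff1d(np.arange(5 * n_per), train)

# Nearest-centroid decoder (a simple stand-in for the MVPA classifier)
centroids = np.stack([X[train][y[train] == f].mean(axis=0) for f in range(5)])
pred = np.argmin(((X[test][:, None, :] - centroids) ** 2).sum(axis=-1), axis=1)
acc = (pred == y[test]).mean()
chance = 1 / 5   # five fingers, as in the study's 20% chance level
```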

  12. Hierarchical Feature Representation and Multimodal Fusion with Deep Learning for AD/MCI Diagnosis

    PubMed Central

    Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang

    2014-01-01

    For the last decade, it has been shown that neuroimaging can be a potential tool for the diagnosis of Alzheimer’s Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), and also fusion of different modalities can further provide the complementary information to enhance diagnostic accuracy. Here, we focus on the problems of both feature representation and fusion of multimodal information from Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). To the best of our knowledge, the previous methods in the literature mostly used hand-crafted features such as cortical thickness, gray matter densities from MRI, or voxel intensities from PET, and then combined these multimodal features by simply concatenating into a long vector or transforming into a higher-dimensional kernel space. In this paper, we propose a novel method for a high-level latent and shared feature representation from neuroimaging modalities via deep learning. Specifically, we use a Deep Boltzmann Machine (DBM), a deep network with a restricted Boltzmann machine as a building block, to find a latent hierarchical feature representation from a 3D patch, and then devise a systematic method for a joint feature representation from the paired patches of MRI and PET with a multimodal DBM. To validate the effectiveness of the proposed method, we performed experiments on the ADNI dataset and compared with the state-of-the-art methods. In three binary classification problems of AD vs. healthy Normal Control (NC), MCI vs. NC, and MCI converter vs. MCI non-converter, we obtained the maximal accuracies of 95.35%, 85.67%, and 74.58%, respectively, outperforming the competing methods. By visual inspection of the trained model, we observed that the proposed method could hierarchically discover the complex latent patterns inherent in both MRI and PET. PMID:25042445

  13. Predicting complications of percutaneous coronary intervention using a novel support vector method.

    PubMed

    Lee, Gyemin; Gurm, Hitinder S; Syed, Zeeshan

    2013-01-01

    To explore the feasibility of a novel approach using an augmented one-class learning algorithm to model in-laboratory complications of percutaneous coronary intervention (PCI). Data from the Blue Cross Blue Shield of Michigan Cardiovascular Consortium (BMC2) multicenter registry for the years 2007 and 2008 (n=41 016) were used to train models to predict 13 different in-laboratory PCI complications using a novel one-plus-class support vector machine (OP-SVM) algorithm. The performance of these models in terms of discrimination and calibration was compared to the performance of models trained using the following classification algorithms on BMC2 data from 2009 (n=20 289): logistic regression (LR), one-class support vector machine classification (OC-SVM), and two-class support vector machine classification (TC-SVM). For the OP-SVM and TC-SVM approaches, variants of the algorithms with cost-sensitive weighting were also considered. The OP-SVM algorithm and its cost-sensitive variant achieved the highest area under the receiver operating characteristic curve for the majority of the PCI complications studied (eight cases). Similar improvements were observed for the Hosmer–Lemeshow χ² value (seven cases) and the mean cross-entropy error (eight cases). The OP-SVM algorithm based on an augmented one-class learning problem improved discrimination and calibration across different PCI complications relative to LR and traditional support vector machine classification. Such an approach may have value in a broader range of clinical domains.
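    The comparison baselines named here are standard: a one-class SVM fit only to "normal" cases, and a two-class model fit to both classes. The sketch below uses scikit-learn with synthetic data; the OP-SVM itself is the authors' method and is not reproduced, and all class sizes and effect magnitudes are hypothetical.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Rare-complication setting (all numbers hypothetical): many normal cases,
# few complications shifted away from the normal cluster
rng = np.random.default_rng(0)
X_norm = rng.normal(size=(500, 10))
X_comp = rng.normal(loc=2.0, size=(25, 10))
X = np.vstack([X_norm, X_comp])
y = np.array([0] * 500 + [1] * 25)

# One-class SVM fit on normal cases only (the OC-SVM baseline)
oc = OneClassSVM(nu=0.05, gamma="scale").fit(X_norm)
flagged = (oc.predict(X_comp) == -1).mean()  # fraction of complications flagged

# Two-class logistic regression baseline, scored by AUC (discrimination)
lr = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, lr.predict_proba(X)[:, 1])
```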

  14. Predicting complications of percutaneous coronary intervention using a novel support vector method

    PubMed Central

    Lee, Gyemin; Gurm, Hitinder S; Syed, Zeeshan

    2013-01-01

    Objective To explore the feasibility of a novel approach using an augmented one-class learning algorithm to model in-laboratory complications of percutaneous coronary intervention (PCI). Materials and methods Data from the Blue Cross Blue Shield of Michigan Cardiovascular Consortium (BMC2) multicenter registry for the years 2007 and 2008 (n=41 016) were used to train models to predict 13 different in-laboratory PCI complications using a novel one-plus-class support vector machine (OP-SVM) algorithm. The performance of these models in terms of discrimination and calibration was compared to the performance of models trained using the following classification algorithms on BMC2 data from 2009 (n=20 289): logistic regression (LR), one-class support vector machine classification (OC-SVM), and two-class support vector machine classification (TC-SVM). For the OP-SVM and TC-SVM approaches, variants of the algorithms with cost-sensitive weighting were also considered. Results The OP-SVM algorithm and its cost-sensitive variant achieved the highest area under the receiver operating characteristic curve for the majority of the PCI complications studied (eight cases). Similar improvements were observed for the Hosmer–Lemeshow χ² value (seven cases) and the mean cross-entropy error (eight cases). Conclusions The OP-SVM algorithm based on an augmented one-class learning problem improved discrimination and calibration across different PCI complications relative to LR and traditional support vector machine classification. Such an approach may have value in a broader range of clinical domains. PMID:23599229

  15. Cost-Benefit Analysis of Computer Resources for Machine Learning

    USGS Publications Warehouse

    Champion, Richard A.

    2007-01-01

    Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.
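
    The cost-benefit trade-off described above, where more calibration points buy a better fit with diminishing returns, can be illustrated with a toy stand-in for the PNN: a polynomial fit to a noisy nonlinear pattern. The target function, noise level and sample sizes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    """Hypothetical nonlinear pattern to be learned (stand-in for the PNN target)."""
    return np.sin(3 * x)

x_test = np.linspace(0, 1, 200)
y_test = target(x_test)

def calibration_error(n_samples, degree=5):
    """Cost grows with n_samples; the benefit is goodness-of-fit on held-out data."""
    x = rng.uniform(0, 1, n_samples)
    y = target(x) + rng.normal(0, 0.05, n_samples)  # noisy observations
    coeffs = np.polyfit(x, y, degree)               # the "calibration" step
    return float(np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)))

# the error flattens out while the cost keeps growing: diminishing returns
for n in (10, 40, 160, 640):
    print(n, round(calibration_error(n), 4))
```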

  16. Mapping the pharmacological modulation of brain oxygen metabolism: The effects of caffeine on absolute CMRO2 measured using dual calibrated fMRI.

    PubMed

    Merola, Alberto; Germuska, Michael A; Warnert, Esther Ah; Richmond, Lewys; Helme, Daniel; Khot, Sharmila; Murphy, Kevin; Rogers, Peter J; Hall, Judith E; Wise, Richard G

    2017-07-15

    This study aims to map the acute effects of caffeine ingestion on grey matter oxygen metabolism and haemodynamics with a novel MRI method. Sixteen healthy caffeine consumers (8 males, age=24.7±5.1) were recruited to this randomised, double-blind, placebo-controlled study. Each participant was scanned on two days before and after the delivery of an oral caffeine (250 mg) or placebo capsule. Our measurements were obtained with a newly proposed estimation approach applied to data from a dual calibrated fMRI experiment that uses hypercapnia and hyperoxia to modulate brain blood flow and oxygenation. Estimates were based on a forward model that describes analytically the contributions of cerebral blood flow (CBF) and of the measured end-tidal partial pressures of CO2 and O2 to the acquired dual-echo GRE signal. The method allows the estimation of grey matter maps of: oxygen extraction fraction (OEF), CBF, CBF-related cerebrovascular reactivity (CVR) and cerebral metabolic rate of oxygen consumption (CMRO2). Other estimates from a multi inversion time ASL acquisition (mTI-ASL), salivary samples of the caffeine concentration and behavioural measurements are also reported. We observed significant differences between caffeine and placebo on average across grey matter, with OEF showing an increase of 15.6% (SEM±4.9%, p<0.05) with caffeine, while CBF and CMRO2 showed differences of -30.4% (SEM±1.6%, p<0.01) and -18.6% (SEM±2.9%, p<0.01) respectively with caffeine administration. The reduction in oxygen metabolism found is somewhat unexpected, but consistent with a hypothesis of decreased energetic demand, supported by previous electrophysiological studies reporting reductions in EEG spectral power. Moreover, the maps of the estimated physiological parameters illustrate the spatial distribution of changes across grey matter, enabling us to localise the effects of caffeine with voxel-wise resolution. CBF changes were widespread, as reported by previous findings, while changes in OEF were more restricted, leading to unprecedented mapping of significant CMRO2 reductions mainly in the frontal gyrus and the parietal and occipital lobes. In conclusion, we propose the estimation framework based on our novel forward model with a dual calibrated fMRI experiment as a viable MRI method to map the effects of drugs on brain oxygen metabolism and haemodynamics with voxel-wise resolution. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
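
    As a rough consistency check, the reported CBF and OEF changes can be combined under the Fick relationship CMRO2 = CaO2 * CBF * OEF, assuming constant arterial oxygen content (an assumption the abstract does not state explicitly):

```python
# Fick principle: CMRO2 = CaO2 * CBF * OEF. With CaO2 constant, the
# fractional CMRO2 change is (1 + dCBF) * (1 + dOEF) - 1.
d_oef = +0.156   # +15.6% OEF with caffeine (from the abstract)
d_cbf = -0.304   # -30.4% CBF
d_cmro2 = (1 + d_cbf) * (1 + d_oef) - 1
print(f"predicted CMRO2 change: {d_cmro2:+.1%}")   # about -19.5%, vs -18.6% reported
```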

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buckley, L; Lambert, C; Nyiri, B

    Purpose: To standardize the tube calibration for Elekta XVI cone beam CT (CBCT) systems in order to provide a meaningful estimate of the daily imaging dose and reduce the variation between units in a large centre with multiple treatment units. Methods: Initial measurements of the output from the CBCT systems were made using a Farmer chamber and standard CTDI phantom. The correlation between the measured CTDI and the tube current was confirmed using an Unfors Xi detector, which was then used to perform a tube current calibration on each unit. Results: Initial measurements showed measured tube current variations of up to 25% between units for scans with the same image settings. In order to reasonably estimate the imaging dose, a systematic approach to x-ray generator calibration was adopted to ensure that the imaging dose was consistent across all units at the centre, and this calibration was made part of the routine quality assurance program. Subsequent measurements show that the variation in measured dose across nine units is on the order of 5%. Conclusion: Increasingly, patients receiving radiation therapy have extended life expectancies, and therefore the cumulative dose from daily imaging should not be ignored. In theory, an estimate of imaging dose can be made from the imaging parameters. However, measurements have shown that there are large differences in the x-ray generator calibration as installed at the clinic. Current protocols recommend routine checks of dose to ensure constancy. The present study suggests that in addition to constancy checks on a single machine, a tube current calibration should be performed on every unit to ensure agreement across multiple machines. This is crucial at a large centre with multiple units in order to provide physicians with a meaningful estimate of the daily imaging dose.
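
    The before/after constancy figures above suggest a simple check that can be scripted; the dose readings below are hypothetical, not the paper's data:

```python
import numpy as np

# hypothetical CTDI-like dose readings (mGy) from nine CBCT units
doses_before = np.array([16.0, 18.5, 14.2, 17.1, 15.0, 19.0, 16.8, 15.5, 17.9])

def spread(doses):
    """Relative spread, (max - min) / mean, as a constancy check across units."""
    return (doses.max() - doses.min()) / doses.mean()

print(f"spread before calibration: {spread(doses_before):.1%}")

# after a per-unit tube-current calibration the spread should be ~5%
doses_after = np.array([16.5, 16.9, 16.2, 16.7, 16.4, 17.0, 16.8, 16.3, 16.6])
print(f"spread after calibration:  {spread(doses_after):.1%}")
```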

  18. Intelligent and automatic in vivo detection and quantification of transplanted cells in MRI.

    PubMed

    Afridi, Muhammad Jamal; Ross, Arun; Liu, Xiaoming; Bennewitz, Margaret F; Shuboni, Dorela D; Shapiro, Erik M

    2017-11-01

    Magnetic resonance imaging (MRI)-based cell tracking has emerged as a useful tool for identifying the location of transplanted cells, and even their migration. Magnetically labeled cells appear as dark contrast in T2*-weighted MRI, with sensitivity down to individual cells. One key hurdle to the widespread use of MRI-based cell tracking is the inability to determine the number of transplanted cells based on this contrast feature. In the case of single cell detection, manual enumeration of spots in three-dimensional (3D) MRI is in principle possible; however, it is a tedious and time-consuming task that is prone to subjectivity and inaccuracy on a large scale. This research presents the first comprehensive study on how a computer-based intelligent, automatic, and accurate cell quantification approach can be designed for spot detection in MRI scans. Magnetically labeled mesenchymal stem cells (MSCs) were transplanted into rats using an intracardiac injection, accomplishing single cell seeding in the brain. T2*-weighted MRI of these rat brains was performed, where labeled MSCs appeared as spots. Using machine learning and computer vision paradigms, approaches were designed to systematically explore the possibility of automatic detection of these spots in MRI. Experiments were validated against known in vitro scenarios. Using the proposed deep convolutional neural network (CNN) architecture, an in vivo accuracy of up to 97.3% and an in vitro accuracy of up to 99.8% was achieved for automated spot detection in MRI data. The proposed approach for automatic quantification of MRI-based cell tracking will facilitate the use of MRI in large-scale cell therapy studies. Magn Reson Med 78:1991-2002, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
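
    The candidate-spot stage that precedes any learned classifier can be sketched as connected-component counting of dark voxels. This is an illustrative stand-in on a synthetic 2D slice, not the paper's CNN pipeline:

```python
import numpy as np

def count_dark_spots(img, threshold):
    """Count connected regions darker than `threshold` (4-connectivity),
    a stand-in for the candidate-detection stage before classification."""
    mask = img < threshold
    seen = np.zeros_like(mask, dtype=bool)
    count = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and not seen[i, j]:
                count += 1
                stack = [(i, j)]            # flood-fill this spot
                seen[i, j] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
    return count

# synthetic T2*-like slice: bright background with three dark spots
img = np.full((32, 32), 100.0)
for cy, cx in ((5, 5), (15, 20), (28, 10)):
    img[cy-1:cy+2, cx-1:cx+2] = 10.0
print(count_dark_spots(img, threshold=50))   # 3
```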

  19. Fully automated system for the quantification of human osteoarthritic knee joint effusion volume using magnetic resonance imaging

    PubMed Central

    2010-01-01

    Introduction: Joint effusion is frequently associated with osteoarthritis (OA) flare-up and is an important marker of therapeutic response. This study aimed at developing and validating a fully automated system based on magnetic resonance imaging (MRI) for the quantification of joint effusion volume in knee OA patients. Methods: MRI examinations consisted of two axial sequences: a T2-weighted true fast imaging with steady-state precession and a T1-weighted gradient echo. An automated joint effusion volume quantification system using MRI was developed and validated (a) with calibrated phantoms (cylinder and sphere) and effusion from knee OA patients; (b) with assessment by manual quantification; and (c) by direct aspiration. Twenty-five knee OA patients with joint effusion were included in the study. Results: The automated joint effusion volume quantification was developed as a four stage sequencing process: bone segmentation, filtering of unrelated structures, segmentation of joint effusion, and subvoxel volume calculation. Validation experiments revealed excellent coefficients of variation with the calibrated cylinder (1.4%) and sphere (0.8%) phantoms. Comparison of the OA knee joint effusion volume assessed by the developed automated system and by manual quantification was also excellent (r = 0.98; P < 0.0001), as was the comparison with direct aspiration (r = 0.88; P = 0.0008). Conclusions: The newly developed fully automated MRI-based system provided precise quantification of OA knee joint effusion volume with excellent correlation with data from phantoms, a manual system, and joint aspiration. Such an automated system will be instrumental in improving the reproducibility/reliability of the evaluation of this marker in clinical application. PMID:20846392
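
    The final subvoxel volume stage can be approximated with partial-volume weighting. The scheme below, where each voxel contributes its estimated inside fraction, is a generic stand-in for the paper's unspecified subvoxel calculation, validated here against an analytic sphere as in the phantom experiments:

```python
import numpy as np

# Synthetic stand-in for the calibrated sphere phantom used in validation.
r = 20.0                                       # sphere radius, in voxels
z, y, x = np.ogrid[-25:26, -25:26, -25:26]
dist = np.sqrt(x**2 + y**2 + z**2)

# Subvoxel volume step (stage 4): rather than counting binary voxels,
# each voxel contributes its estimated inside fraction. This linear
# partial-volume model is an illustrative choice, not the paper's.
fraction = np.clip(r + 0.5 - dist, 0.0, 1.0)
voxel_vol_mm3 = 1.0
volume_ml = float(fraction.sum() * voxel_vol_mm3 / 1000.0)

true_ml = 4.0 / 3.0 * np.pi * r**3 / 1000.0    # analytic volume, ~33.5 mL
print(f"estimated {volume_ml:.2f} mL vs analytic {true_ml:.2f} mL")
```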

  20. Annual Screening Strategies in BRCA1 and BRCA2 Gene Mutation Carriers: A Comparative Effectiveness Analysis

    PubMed Central

    Lowry, Kathryn P.; Lee, Janie M.; Kong, Chung Y.; McMahon, Pamela M.; Gilmore, Michael E.; Cott Chubiz, Jessica E.; Pisano, Etta D.; Gatsonis, Constantine; Ryan, Paula D.; Ozanne, Elissa M.; Gazelle, G. Scott

    2011-01-01

    Background: While breast cancer screening with mammography and MRI is recommended for BRCA mutation carriers, there is no current consensus on the optimal screening regimen. Methods: We used a computer simulation model to compare six annual screening strategies [film mammography (FM), digital mammography (DM), FM and magnetic resonance imaging (MRI) or DM and MRI contemporaneously, and alternating FM/MRI or DM/MRI at six-month intervals] beginning at ages 25, 30, 35, and 40, and two strategies of annual MRI with delayed alternating DM/FM, to clinical surveillance alone. Strategies were evaluated without and with mammography-induced breast cancer risk, using two models of excess relative risk. Input parameters were obtained from the medical literature, publicly available databases, and calibration. Results: Without radiation risk effects, alternating DM/MRI starting at age 25 provided the highest life expectancy (BRCA1: 72.52 years, BRCA2: 77.63 years). When radiation risk was included, a small proportion of diagnosed cancers were attributable to radiation exposure (BRCA1: <2%, BRCA2: <4%). With radiation risk, alternating DM/MRI at age 25 or annual MRI at age 25/delayed alternating DM at age 30 were most effective, depending on the radiation risk model used. Alternating DM/MRI starting at age 25 also had the highest number of false-positive screens/person (BRCA1: 4.5, BRCA2: 8.1). Conclusions: Annual MRI at 25/delayed alternating DM at age 30 is likely the most effective screening strategy in BRCA mutation carriers. Screening benefits, associated risks and personal acceptance of false-positive results should be considered in choosing the optimal screening strategy for individual women. PMID:21935911

  1. Recalibrating sleep: is recalibration and readjustment of sense organs and brain-body connections the core function of sleep?

    PubMed

    Smetacek, Victor

    2010-10-01

    Sleep is an enigma because we all know what it means and does to us, yet a scientific explanation for why animals including humans need to sleep is still lacking. However, the enigma can be resolved if the animal body is regarded as a purposeful machine whose moving parts are coordinated with spatial information provided by a disparate array of sense organs. The performance of all complex machines deteriorates with time due to inevitable instrument drift of the individual sensors combined with wear and tear of the moving parts which result in declining precision and coordination. Peak performance is restored by servicing the machine, which involves calibrating the sensors against baselines and standards, then with one another, and finally readjusting the connections between instruments and moving parts. It follows that the animal body and its sensors will also require regular calibration of sense organs and readjustment of brain-body connections which will need to be carried out while the animal is not in functional but in calibration mode. I suggest that this is the core function of sleep. This recalibration hypothesis of sleep can be tested subjectively. We all know from personal experience that sleep is needed to recover from tiredness that sets in towards the end of a long day. This tiredness, which is quite distinct from mental or muscular exhaustion caused by strenuous exertion, manifests itself in deteriorating general performance: the sense organs lose precision, movements become clumsy and the mind struggles to maintain focus. We can all agree that sleep sharpens the sense organs and restores agility to mind and body. I now propose that the sense of freshness and buoyancy after a good night's sleep is the feeling of recalibrated sensory and motor systems. The hypothesis can be tested rigorously by examining available data on sleep cycles and stages against this background. 
For instance, REM and deep sleep cycles can be interpreted as successive, separate calibration runs of the vestibulo-ocular reflex and the sensory-motor systems, respectively, amongst other functions running in parallel, such as dreaming. Because the split-second connections between sensory information and emotional responses will also require calibration, some aspects of dreaming could be interpreted in this light. Much of the baffling behaviour and patterns of brain activity of sleeping animals and humans make sense in the framework of this technological paradigm since different animal lineages will have evolved different techniques to achieve calibration. Copyright 2010 Elsevier Ltd. All rights reserved.

  2. Nano Mechanical Machining Using AFM Probe

    NASA Astrophysics Data System (ADS)

    Mostofa, Md. Golam

    Complex miniaturized components with high form accuracy will play key roles in the future development of many products, as they provide portability, disposability, lower material consumption in production, low power consumption during operation, lower sample requirements for testing, and higher heat transfer due to their very high surface-to-volume ratio. Given the high market demand for such micro and nano featured components, different manufacturing methods have been developed for their fabrication. Some of the common technologies in micro/nano fabrication are photolithography, electron beam lithography, X-ray lithography and other semiconductor processing techniques. Although these methods are capable of fabricating micro/nano structures with a resolution of less than a few nanometers, some of their shortcomings, such as high production costs for customized products and limited material choices, necessitate the development of other fabrication techniques. Micro/nano mechanical machining, such as atomic force microscope (AFM) probe-based nano fabrication, has therefore been used to overcome some of the major restrictions of the traditional processes. This technique removes material from the workpiece by engaging a micro/nano-sized cutting tool (i.e. an AFM probe) and is applicable to a wider range of materials than the photolithographic process. In spite of the unique benefits of nano mechanical machining, the technique also faces challenges as the scale is reduced, such as size effects, burr formation, chip adhesion, tool fragility and tool wear. Moreover, AFM-based machining does not have any rotational movement, which makes fabrication of 3D features more difficult. Thus, vibration-assisted machining is introduced into AFM probe-based nano mechanical machining to overcome the limitations associated with the conventional AFM probe-based scratching method. Vibration-assisted machining reduces the cutting forces and burr formation through intermittent cutting. Combining AFM probe-based machining with vibration-assisted machining enhances nano mechanical machining by improving accuracy, productivity and surface finish. In this study, several scratching tests are performed with a single crystal diamond AFM probe to investigate the cutting characteristics and model the ploughing cutting forces. Calibration of the probe for lateral force measurements, which is essential, is also extended through the force balance method. Furthermore, a vibration-assisted machining system is developed and applied to different materials to overcome some of the limitations of AFM probe-based single point nano mechanical machining. The novelty of this study includes the application of vibration-assisted AFM probe-based nano scale machining to fabricate micro/nano scale features, the calibration of an AFM considering different factors, and the investigation of the nano scale material removal process from a different perspective.

  3. Pattern Recognition Approaches for Breast Cancer DCE-MRI Classification: A Systematic Review.

    PubMed

    Fusco, Roberta; Sansone, Mario; Filice, Salvatore; Carone, Guglielmo; Amato, Daniela Maria; Sansone, Carlo; Petrillo, Antonella

    2016-01-01

    We performed a systematic review of several pattern analysis approaches for classifying breast lesions using dynamic, morphological, and textural features in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Several machine learning approaches, namely artificial neural networks (ANN), support vector machines (SVM), linear discriminant analysis (LDA), tree-based classifiers (TC), and Bayesian classifiers (BC), and features used for classification are described. The findings of a systematic review of 26 studies are presented. The sensitivity and specificity are respectively 91 and 83 % for ANN, 85 and 82 % for SVM, 96 and 85 % for LDA, 92 and 87 % for TC, and 82 and 85 % for BC. The sensitivity and specificity are respectively 82 and 74 % for dynamic features, 93 and 60 % for morphological features, 88 and 81 % for textural features, 95 and 86 % for a combination of dynamic and morphological features, and 88 and 84 % for a combination of dynamic, morphological, and other features. LDA and TC have the best performance. A combination of dynamic and morphological features gives the best performance.
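
    For reference, the pooled sensitivity and specificity figures are simple ratios of confusion-matrix counts; the counts below are hypothetical numbers chosen to reproduce the SVM figures:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical lesion-classification counts (malignant = positive class)
sens, spec = sens_spec(tp=85, fn=15, tn=82, fp=18)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")   # 85%, 82%
```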

  4. Gage for 3-d contours

    NASA Technical Reports Server (NTRS)

    Haynie, C. C.

    1980-01-01

    A simple gage, used with a template, can help inspectors determine whether a three-dimensional curved surface has the correct contour. The gage was developed as an aid in explosive forming of the Space Shuttle emergency-escape hatch. For even greater accuracy, the wedge can be made of metal and calibrated with an indexing machine.

  5. Guide for fabricating and installing shallow ground water observation wells

    Treesearch

    Carolyn C. Bohn

    2001-01-01

    The fabrication and use of three tools to assist in the manual installation of shallow ground water observation wells are described. These tools are easily fabricated at a local machine shop. A method for calibrating pressure transducers is also described.

  6. Camera positioning and calibration techniques for integrating traffic surveillance video systems with machine-vision vehicle detection devices.

    DOT National Transportation Integrated Search

    2002-12-01

    The Virginia Department of Transportation, like many other transportation agencies, has invested significantly in extensive closed circuit television (CCTV) systems to monitor freeways in urban areas. Although these systems have proven very effective...

  7. Computing moment to moment BOLD activation for real-time neurofeedback

    PubMed Central

    Hinds, Oliver; Ghosh, Satrajit; Thompson, Todd W.; Yoo, Julie J.; Whitfield-Gabrieli, Susan; Triantafyllou, Christina; Gabrieli, John D.E.

    2013-01-01

    Estimating moment to moment changes in blood oxygenation level dependent (BOLD) activation levels from functional magnetic resonance imaging (fMRI) data has applications for learned regulation of regional activation, brain state monitoring, and brain-machine interfaces. In each of these contexts, accurate estimation of the BOLD signal in as little time as possible is desired. This is a challenging problem due to the low signal-to-noise ratio of fMRI data. Previous methods for real-time fMRI analysis have either sacrificed the ability to compute moment to moment activation changes by averaging several acquisitions into a single activation estimate or have sacrificed accuracy by failing to account for prominent sources of noise in the fMRI signal. Here we present a new method for computing the amount of activation present in a single fMRI acquisition that separates moment to moment changes in the fMRI signal intensity attributable to neural sources from those due to noise, resulting in a feedback signal more reflective of neural activation. This method computes an incremental general linear model fit to the fMRI timeseries, which is used to calculate the expected signal intensity at each new acquisition. The difference between the measured intensity and the expected intensity is scaled by the variance of the estimator in order to transform this residual difference into a statistic. Both synthetic and real data were used to validate this method and compare it to the only other published real-time fMRI method. PMID:20682350
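
    The incremental GLM feedback computation described above can be sketched with a running least-squares fit: each new acquisition is compared against the intensity expected from past data, and the residual is scaled into a statistic. The design matrix, noise level and activation size in the demonstration are invented:

```python
import numpy as np

class IncrementalGLM:
    """Sketch of an incremental GLM for real-time fMRI feedback."""

    def __init__(self, n_regressors, ridge=1e-6):
        self.p = n_regressors
        self.XtX = np.eye(n_regressors) * ridge   # running normal equations
        self.Xty = np.zeros(n_regressors)
        self.sse = 0.0                            # residual sum of squares
        self.n = 0

    def update(self, x, y):
        beta = np.linalg.solve(self.XtX, self.Xty)   # fit from past data only
        resid = y - x @ beta                         # measured minus expected
        dof = self.n - self.p
        sigma = np.sqrt(self.sse / dof) if dof > 0 else 0.0
        stat = resid / sigma if sigma > 0 else 0.0   # the feedback signal
        self.XtX += np.outer(x, x)                   # fold in the new point
        self.Xty += x * y
        if self.n >= self.p:                         # skip ill-fitted start
            self.sse += resid ** 2
        self.n += 1
        return stat

# synthetic voxel timeseries: baseline + scanner drift + noise,
# with a brief "activation" added at acquisition 150
rng = np.random.default_rng(1)
glm = IncrementalGLM(2)
stats = []
for t in range(200):
    x = np.array([1.0, t / 100.0])                   # intercept + drift
    y = 10.0 + 0.5 * t / 100.0 + rng.normal(0, 0.1)
    if t == 150:
        y += 2.0                                     # neural signal change
    stats.append(glm.update(x, y))
print(f"statistic at the activation: {stats[150]:.1f}")
```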

  8. Non-invasive in vivo evaluation of in situ forming PLGA implants by benchtop magnetic resonance imaging (BT-MRI) and EPR spectroscopy.

    PubMed

    Kempe, Sabine; Metz, Hendrik; Pereira, Priscila G C; Mäder, Karsten

    2010-01-01

    In the present study, we used benchtop magnetic resonance imaging (BT-MRI) for non-invasive and continuous in vivo studies of in situ forming poly(lactide-co-glycolide) (PLGA) implants without the use of contrast agents. Polyethylene glycol (PEG) 400 was used as an alternative solvent to the clinically used NMP. In addition to BT-MRI, we applied electron paramagnetic resonance (EPR) spectroscopy to characterize implant formation and drug delivery processes in vitro and in vivo. We were able to follow key processes of implant formation by EPR and MRI. Because EPR spectra are sensitive to polarity and mobility, we were able to follow the kinetics of the solvent/non-solvent exchange and the PLGA precipitation. Due to the high water affinity of PEG 400, we observed a transient accumulation of water in the implant neighbourhood. Furthermore, we detected the encapsulation by BT-MRI of the implant as a response of the biological system to the polymer, followed by degradation over a period of two months. We could show that MRI in general has the potential to get new insights in the in vivo fate of in situ forming implants. The study also clearly shows that BT-MRI is a new viable and much less expensive alternative for superconducting MRI machines to monitor drug delivery processes in vivo in small mammals. Copyright 2009 Elsevier B.V. All rights reserved.

  9. Motion prediction in MRI-guided radiotherapy based on interleaved orthogonal cine-MRI

    NASA Astrophysics Data System (ADS)

    Seregni, M.; Paganelli, C.; Lee, D.; Greer, P. B.; Baroni, G.; Keall, P. J.; Riboldi, M.

    2016-01-01

    In-room cine-MRI guidance can provide non-invasive target localization during radiotherapy treatment. However, in order to cope with finite imaging frequency and system latencies between target localization and dose delivery, tumour motion prediction is required. This work proposes a framework for motion prediction dedicated to cine-MRI guidance, aiming at quantifying the geometric uncertainties introduced by this process for both tumour tracking and beam gating. The tumour position, identified through scale invariant features detected in cine-MRI slices, is estimated at high frequency (25 Hz) using three independent predictors, one for each anatomical coordinate. Linear extrapolation, auto-regressive and support vector machine algorithms are compared against systems that use no prediction or surrogate-based motion estimation. Geometric uncertainties are reported as a function of image acquisition period and system latency. Average results show that the tracking error RMS can be decreased down to a [0.2; 1.2] mm range, for acquisition periods between 250 and 750 ms and system latencies between 50 and 300 ms. Except for the linear extrapolator, tracking and gating prediction errors were, on average, lower than those measured for surrogate-based motion estimation. This finding suggests that cine-MRI guidance, combined with appropriate prediction algorithms, could substantially decrease geometric uncertainties in motion compensated treatments.
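
    The interplay of acquisition period, latency and prediction can be illustrated with the simplest of the three predictors, linear extrapolation, on a synthetic sinusoidal trajectory. The amplitude and period below are invented; the paper tracked scale-invariant features in real cine-MRI:

```python
import numpy as np

def predict_linear(t_obs, y_obs, t_pred):
    """Linear extrapolation from the last two observed target positions."""
    slope = (y_obs[-1] - y_obs[-2]) / (t_obs[-1] - t_obs[-2])
    return y_obs[-1] + slope * (t_pred - t_obs[-1])

period, latency = 0.5, 0.2          # s: imaging period and system latency
t = np.arange(0.0, 60.0, period)    # cine-MRI acquisition times
pos = 10.0 * np.sin(2 * np.pi * t / 4.0)   # breathing-like motion (mm)

err_none, err_lin = [], []
for k in range(2, len(t)):
    t_delivery = t[k] + latency     # beam acts after the latency
    truth = 10.0 * np.sin(2 * np.pi * t_delivery / 4.0)
    err_none.append(truth - pos[k])                       # last observed
    err_lin.append(truth - predict_linear(t[:k+1], pos[:k+1], t_delivery))

rms = lambda e: float(np.sqrt(np.mean(np.square(e))))
print(f"RMS error, no prediction:        {rms(err_none):.2f} mm")
print(f"RMS error, linear extrapolation: {rms(err_lin):.2f} mm")
```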

  10. Accuracy of magnetic resonance based susceptibility measurements

    NASA Astrophysics Data System (ADS)

    Erdevig, Hannah E.; Russek, Stephen E.; Carnicka, Slavka; Stupic, Karl F.; Keenan, Kathryn E.

    2017-05-01

    Magnetic Resonance Imaging (MRI) is increasingly used to map the magnetic susceptibility of tissue to identify cerebral microbleeds associated with traumatic brain injury and pathological iron deposits associated with neurodegenerative diseases such as Parkinson's and Alzheimer's disease. Accurate measurements of susceptibility are important for determining oxygen and iron content in blood vessels and brain tissue for use in noninvasive clinical diagnosis and treatment assessments. Induced magnetic fields, with amplitudes on the order of 100 nT, can be detected using MRI phase images. The induced field distributions can then be inverted to obtain quantitative susceptibility maps. The focus of this research was to determine the accuracy of MRI-based susceptibility measurements using simple phantom geometries and to compare the susceptibility measurements with magnetometry measurements where SI-traceable standards are available. The susceptibilities of paramagnetic salt solutions in cylindrical containers were measured as a function of orientation relative to the static MRI field. The observed induced fields as a function of orientation of the cylinder were in good agreement with simple models. The MRI susceptibility measurements were compared with SQUID magnetometry using NIST-traceable standards. MRI can accurately measure relative magnetic susceptibilities, while SQUID magnetometry measures absolute magnetic susceptibility. Given the accuracy of moment measurements of tissue mimicking samples, and the need to look at small differences in tissue properties, the use of existing NIST standard reference materials to calibrate MRI reference structures is problematic and better reference materials are required.
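
    For cylindrical phantoms, the orientation dependence of the induced field follows the standard Lorentz-corrected expression for an infinite cylinder. A quick numerical check, with an illustrative susceptibility contrast and field strength:

```python
import numpy as np

def cylinder_field_shift(delta_chi, b0_tesla, theta_deg):
    """Field shift inside an infinite cylinder of susceptibility contrast
    delta_chi (SI, dimensionless) at angle theta to B0, in the standard
    Lorentz-corrected approximation: dB = B0 * delta_chi/6 * (3cos^2(theta) - 1)."""
    theta = np.deg2rad(theta_deg)
    return b0_tesla * delta_chi / 6.0 * (3.0 * np.cos(theta) ** 2 - 1.0)

# a 1 ppm paramagnetic salt solution at 3 T
for theta in (0.0, 54.7, 90.0):
    db_nt = cylinder_field_shift(1e-6, 3.0, theta) * 1e9
    print(f"theta = {theta:5.1f} deg -> shift = {db_nt:8.1f} nT")
```

    The shift vanishes near the magic angle (54.7 degrees) and changes sign between the parallel and perpendicular orientations, which is the behaviour exploited when validating against simple models.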

  11. An RF-induced voltage sensor for investigating pacemaker safety in MRI.

    PubMed

    Barbier, Thérèse; Piumatti, Roberto; Hecker, Bertrand; Odille, Freddy; Felblinger, Jacques; Pasquier, Cédric

    2014-12-01

    Magnetic resonance imaging (MRI) is inadvisable for patients with pacemakers, as radiofrequency (RF) voltages induced in the pacemaker leads may cause the device to malfunction. Our goal is to develop a sensor to measure such RF-induced voltages during MRI safety tests. A sensor was designed (16.6 cm²) for measuring voltages at the connection between the pacemaker lead and its case. The induced voltage is demodulated, digitized, and transferred by optical fibres. The sensor was calibrated on the bench using RF pulses of known amplitude and duration. Then the sensor was tested during MRI scanning at 1.5 T in a saline gel filled phantom. Bench tests showed measurement errors below 5% with a (-40 V; +40 V) range, a precision of 0.06 V, and a temporal resolution of 24.2 μs. In MRI tests, variability in the measured voltages was below 3.7% for 996 measurements with different sensors and RF exposure. Coupling between the sensor and the MRI electromagnetic environment was estimated with a second sensor connected and was below 6.2%. For a typical clinical MRI sequence, voltages around ten Vp were detected. We have built an accurate and reproducible tool for measuring RF-induced voltages in pacemaker leads during MR safety investigations. The sensor might also be used with other conducting cables, including those used for electrocardiography and neurostimulation.
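
    The bench calibration against RF pulses of known amplitude amounts to a linear fit of sensor reading against applied voltage. The digitizer counts below are invented for illustration, not the real sensor's transfer curve:

```python
import numpy as np

# bench calibration: known RF pulse amplitudes (V) vs raw sensor readings
applied_v = np.array([-40, -30, -20, -10, 0, 10, 20, 30, 40], dtype=float)
raw_adc = np.array([102, 230, 359, 486, 615, 742, 871, 998, 1126], dtype=float)

gain, offset = np.polyfit(raw_adc, applied_v, 1)   # linear calibration fit

def to_volts(adc):
    """Convert a raw reading to volts using the bench calibration."""
    return gain * adc + offset

residuals = to_volts(raw_adc) - applied_v
print(f"max calibration error: {np.abs(residuals).max():.3f} V")
```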

  12. External validation of the MRI-DRAGON score: early prediction of stroke outcome after intravenous thrombolysis.

    PubMed

    Turc, Guillaume; Aguettaz, Pierre; Ponchelle-Dequatre, Nelly; Hénon, Hilde; Naggara, Olivier; Leclerc, Xavier; Cordonnier, Charlotte; Leys, Didier; Mas, Jean-Louis; Oppenheim, Catherine

    2014-01-01

    The aim of our study was to validate, in an independent cohort, the MRI-DRAGON score, an adaptation of the (CT-) DRAGON score to predict 3-month outcome in acute ischemic stroke patients undergoing MRI before intravenous thrombolysis (IV-tPA). We reviewed consecutive (2009-2013) anterior circulation stroke patients treated within 4.5 hours by IV-tPA in the Lille stroke unit (France), where MRI is the first-line pretherapeutic work-up. We assessed the discrimination and calibration of the MRI-DRAGON score to predict poor 3-month outcome, defined as a modified Rankin Scale score >2, using the c-statistic and the Hosmer-Lemeshow test, respectively. We included 230 patients (mean ±SD age 70.4±16.0 years, median [IQR] baseline NIHSS 8 [5-14]; poor outcome in 78 (34%) patients). The c-statistic was 0.81 (95%CI 0.75-0.87), and the Hosmer-Lemeshow test was not significant (p = 0.54). The MRI-DRAGON score showed good prognostic performance in the external validation cohort. It could therefore be used to inform the patient's relatives about long-term prognosis and help to identify poor responders to IV-tPA alone, who may be candidates for additional therapeutic strategies, if they are otherwise eligible for such procedures based on the institutional criteria.
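
    The calibration assessment used here, the Hosmer-Lemeshow test, groups patients by predicted risk and compares observed with expected event counts per group; a minimal sketch on synthetic data (the decile grouping follows the usual convention, not this paper's code):

```python
import numpy as np

def hosmer_lemeshow(y, p, n_groups=10):
    """Hosmer-Lemeshow chi-square: group by predicted risk, compare observed
    and expected event counts per group (smaller = better calibrated). The
    statistic is referred to a chi-square with n_groups - 2 df."""
    order = np.argsort(p)
    chi2 = 0.0
    for g in np.array_split(order, n_groups):
        n, obs, exp = len(g), y[g].sum(), p[g].sum()
        mean_p = exp / n
        denom = n * mean_p * (1.0 - mean_p)
        if denom > 0:
            chi2 += (obs - exp) ** 2 / denom
    return float(chi2)

# perfectly calibrated toy predictions: observed rates match predicted risks
p = np.repeat(np.arange(0.1, 1.0, 0.1), 10)
y = np.concatenate([np.r_[np.ones(k), np.zeros(10 - k)] for k in range(1, 10)])
print(hosmer_lemeshow(y, p, n_groups=9))   # ~0: no detectable miscalibration
```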

  13. Imaging patterns predict patient survival and molecular subtype in glioblastoma via machine learning techniques.

    PubMed

    Macyszyn, Luke; Akbari, Hamed; Pisapia, Jared M; Da, Xiao; Attiah, Mark; Pigrish, Vadim; Bi, Yingtao; Pal, Sharmistha; Davuluri, Ramana V; Roccograndi, Laura; Dahmane, Nadia; Martinez-Lage, Maria; Biros, George; Wolf, Ronald L; Bilello, Michel; O'Rourke, Donald M; Davatzikos, Christos

    2016-03-01

    MRI characteristics of brain gliomas have been used to predict clinical outcome and molecular tumor characteristics. However, previously reported imaging biomarkers have not been sufficiently accurate or reproducible to enter routine clinical practice and often rely on relatively simple MRI measures. The current study leverages advanced image analysis and machine learning algorithms to identify complex and reproducible imaging patterns predictive of overall survival and molecular subtype in glioblastoma (GB). One hundred five patients with GB were first used to extract approximately 60 diverse features from preoperative multiparametric MRIs. These imaging features were used by a machine learning algorithm to derive imaging predictors of patient survival and molecular subtype. Cross-validation ensured generalizability of these predictors to new patients. Subsequently, the predictors were evaluated in a prospective cohort of 29 new patients. Survival curves yielded a hazard ratio of 10.64 for predicted long versus short survivors. The overall, 3-way (long/medium/short survival) accuracy in the prospective cohort approached 80%. Classification of patients into the 4 molecular subtypes of GB achieved 76% accuracy. By employing machine learning techniques, we were able to demonstrate that imaging patterns are highly predictive of patient survival. Additionally, we found that GB subtypes have distinctive imaging phenotypes. These results reveal that when imaging markers related to infiltration, cell density, microvascularity, and blood-brain barrier compromise are integrated via advanced pattern analysis methods, they form very accurate predictive biomarkers. These predictive markers rely solely on preoperative images and can therefore significantly augment the diagnosis and treatment of GB patients.
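
    The evaluation protocol described here, cross-validation on the training cohort followed by a held-out prospective cohort, can be sketched with scikit-learn. The feature matrix below is synthetic (separable clusters standing in for imaging features), and the classifier choice is illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(9)
# Hypothetical imaging features for three survival groups (long/medium/short):
# 105 patients, ~60 features, mirroring the cohort sizes in the abstract.
n, d = 105, 60
y = rng.integers(0, 3, n)
centers = rng.normal(0, 1, (3, d))
X = centers[y] + rng.normal(0, 0.5, (n, d))

clf = RandomForestClassifier(n_estimators=200, random_state=0)
cv_acc = cross_val_score(clf, X, y, cv=5).mean()   # guards against overfitting

# A held-out cohort of 29 "new patients" mimics the prospective evaluation.
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=29, random_state=0)
prospective_acc = clf.fit(Xtr, ytr).score(Xte, yte)
```

    The key design point is that the prospective accuracy, not the cross-validated one, is the honest estimate of performance on future patients.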

  14. Stable Atlas-based Mapped Prior (STAMP) machine-learning segmentation for multicenter large-scale MRI data.

    PubMed

    Kim, Eun Young; Magnotta, Vincent A; Liu, Dawei; Johnson, Hans J

    2014-09-01

    Machine learning (ML)-based segmentation methods are a common technique in the medical image processing field. Although numerous research groups have investigated ML-based segmentation frameworks, there remain unanswered aspects of performance variability for the choice of two key components: the ML algorithm and intensity normalization. This investigation reveals that the choice of those elements plays a major part in determining segmentation accuracy and generalizability. The approach used in this study aims to evaluate the relative benefits of the two elements within a subcortical MRI segmentation framework. Experiments were conducted to contrast eight machine-learning algorithm configurations and 11 normalization strategies for our brain MR segmentation framework. For intensity normalization, a Stable Atlas-based Mapped Prior (STAMP) was utilized to take better account of contrast along boundaries of structures. Comparing the eight machine learning algorithms on down-sampled segmentation MR data, a significant improvement was obtained using ensemble-based ML algorithms (i.e., random forest) or ANN algorithms. Further investigation of these two algorithms revealed that the random forest results provided exceptionally good agreement with manual delineations by experts. Additional experiments showed that STAMP-based intensity normalization also improved the robustness of segmentation for multicenter data sets. The constructed framework obtained good multicenter reliability and was successfully applied to a large multicenter MR data set (n>3000). Less than 10% of automated segmentations were recommended for minimal expert intervention. These results demonstrate the feasibility of using ML-based segmentation tools for processing large amounts of multicenter MR images. We demonstrated dramatically different segmentation-accuracy profiles according to the choice of ML algorithm and intensity normalization.

  15. Stable Local Volatility Calibration Using Kernel Splines

    NASA Astrophysics Data System (ADS)

    Coleman, Thomas F.; Li, Yuying; Wang, Cheng

    2010-09-01

    We propose an optimization formulation using L1 norm to ensure accuracy and stability in calibrating a local volatility function for option pricing. Using a regularization parameter, the proposed objective function balances the calibration accuracy with the model complexity. Motivated by the support vector machine learning, the unknown local volatility function is represented by a kernel function generating splines and the model complexity is controlled by minimizing the 1-norm of the kernel coefficient vector. In the context of the support vector regression for function estimation based on a finite set of observations, this corresponds to minimizing the number of support vectors for predictability. We illustrate the ability of the proposed approach to reconstruct the local volatility function in a synthetic market. In addition, based on S&P 500 market index option data, we demonstrate that the calibrated local volatility surface is simple and resembles the observed implied volatility surface in shape. Stability is illustrated by calibrating local volatility functions using market option data from different dates.
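
    The core idea of the record above, representing an unknown function by kernel basis functions and controlling complexity by minimizing the 1-norm of the coefficient vector, can be sketched with a Gaussian kernel and scikit-learn's Lasso. The data and kernel width are synthetic, and a sine curve stands in for the unknown local volatility function:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 80))
y_true = np.sin(2 * np.pi * x)               # stand-in for the unknown function
y = y_true + rng.normal(0, 0.05, x.size)

# Kernel matrix: one Gaussian basis function centred at each observation.
def gaussian_kernel(a, b, width=0.1):
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * width ** 2))

K = gaussian_kernel(x, x)

# Minimising ||K c - y||^2 + alpha * ||c||_1 drives most coefficients to zero --
# the analogue of keeping only a few "support vectors" for predictability.
model = Lasso(alpha=1e-3, fit_intercept=True, max_iter=50000).fit(K, y)
n_support = np.count_nonzero(model.coef_)
rmse = np.sqrt(np.mean((model.predict(K) - y_true) ** 2))
```

    Increasing `alpha` trades reconstruction accuracy for a sparser (simpler, more stable) model, which is the balance the regularization parameter controls in the calibration formulation.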

  16. Irradiation dose detection of irradiated milk powder using visible and near-infrared spectroscopy and chemometrics.

    PubMed

    Kong, W W; Zhang, C; Liu, F; Gong, A P; He, Y

    2013-08-01

    The objective of this study was to examine the possibility of applying visible and near-infrared spectroscopy to the quantitative detection of the irradiation dose of irradiated milk powder. A total of 150 samples were used: 100 for the calibration set and 50 for the validation set. The samples were irradiated at 5 different dose levels in the dose range 0 to 6.0 kGy. Six different pretreatment methods were compared. The prediction results of full spectra given by linear and nonlinear calibration methods suggested that Savitzky-Golay smoothing and first derivative were suitable pretreatment methods in this study. Regression coefficient analysis was applied to select effective wavelengths (EW). Fewer than 10 EW were selected, and they are useful for portable detection instrument or sensor development. Partial least squares, extreme learning machine, and least squares support vector machine were used. The best prediction performance was achieved by the EW-extreme learning machine model with first-derivative spectra, with a correlation coefficient of 0.97 and a root mean square error of prediction of 0.844. This study provided a new approach for the fast detection of the irradiation dose of milk powder. The results could be helpful for quality detection and safety monitoring of milk powder.

  17. Enhanced Quality Control in Pharmaceutical Applications by Combining Raman Spectroscopy and Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Martinez, J. C.; Guzmán-Sepúlveda, J. R.; Bolañoz Evia, G. R.; Córdova, T.; Guzmán-Cabrera, R.

    2018-06-01

    In this work, we applied machine learning techniques to Raman spectra for the characterization and classification of manufactured pharmaceutical products. Our measurements were taken with commercial equipment, for accurate assessment of variations with respect to one calibrated control sample. Unlike the typical use of Raman spectroscopy in pharmaceutical applications, in our approach the principal components of the Raman spectrum are used concurrently as attributes in machine learning algorithms. This permits an efficient comparison and classification of the spectra measured from the samples under study. This also allows for accurate quality control as all relevant spectral components are considered simultaneously. We demonstrate our approach with respect to the specific case of acetaminophen, which is one of the most widely used analgesics in the market. In the experiments, commercial samples from thirteen different laboratories were analyzed and compared against a control sample. The raw data were analyzed based on an arithmetic difference between the nominal active substance and the measured values in each commercial sample. The principal component analysis was applied to the data for quantitative verification (i.e., without considering the actual concentration of the active substance) of the difference in the calibrated sample. Our results show that by following this approach adulterations in pharmaceutical compositions can be clearly identified and accurately quantified.
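
    The approach described above, using principal components of the spectra concurrently as attributes in a classifier, can be sketched as below. The Raman-like spectra are simulated: an "adulterated" class carries an extra band, and the classifier choice is illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
# Hypothetical Raman spectra: a control formulation vs. an adulterated one that
# shows an additional band near 1400 cm^-1.
shift = np.linspace(200, 1800, 400)
base = np.exp(-((shift - 1000) ** 2) / 5e3)
control = base + rng.normal(0, 0.02, (40, 400))
adulterated = (base + 0.15 * np.exp(-((shift - 1400) ** 2) / 2e3)
               + rng.normal(0, 0.02, (40, 400)))

X = np.vstack([control, adulterated])
y = np.array([0] * 40 + [1] * 40)

# Principal-component scores of the spectra serve concurrently as attributes.
scores = PCA(n_components=5).fit_transform(X)
acc = cross_val_score(SVC(kernel="rbf"), scores, y, cv=5).mean()
```

    Working in the PCA score space lets all relevant spectral components contribute to the decision simultaneously, rather than comparing one band at a time.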

  18. The Optics and Alignment of the Divergent Beam Laboratory X-ray Powder Diffractometer and its Calibration Using NIST Standard Reference Materials.

    PubMed

    Cline, James P; Mendenhall, Marcus H; Black, David; Windover, Donald; Henins, Albert

    2015-01-01

    The laboratory X-ray powder diffractometer is one of the primary analytical tools in materials science. It is applicable to nearly any crystalline material, and with advanced data analysis methods, it can provide a wealth of information concerning sample character. Data from these machines, however, are beset by a complex aberration function that can be addressed through calibration with the use of NIST Standard Reference Materials (SRMs). Laboratory diffractometers can be set up in a range of optical geometries; considered herein are those of Bragg-Brentano divergent beam configuration using both incident and diffracted beam monochromators. We review the origin of the various aberrations affecting instruments of this geometry and the methods developed at NIST to align these machines in a first principles context. Data analysis methods are considered as being in two distinct categories: those that use empirical methods to parameterize the nature of the data for subsequent analysis, and those that use model functions to link the observation directly to a specific aspect of the experiment. We consider a multifaceted approach to instrument calibration using both the empirical and model based data analysis methods. The particular benefits of the fundamental parameters approach are reviewed.
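
    One simple empirical calibration of the kind discussed above is recovering a goniometer zero-offset from an SRM with certified line positions. The sketch below uses Bragg's law with illustrative numbers (a LaB6-like cubic standard and Cu K-alpha1 radiation); the offset model is deliberately minimal:

```python
import numpy as np

wavelength = 1.5406            # Cu K-alpha1, angstroms (illustrative)
a = 4.1569                     # cubic lattice parameter, angstroms (LaB6-like)
hkl = np.array([[1, 0, 0], [1, 1, 0], [1, 1, 1], [2, 0, 0], [2, 1, 0]])

d = a / np.sqrt((hkl ** 2).sum(axis=1))                      # cubic d-spacings
two_theta = 2 * np.degrees(np.arcsin(wavelength / (2 * d)))  # Bragg's law

rng = np.random.default_rng(4)
true_offset = 0.05             # degrees: the aberration to be calibrated out
observed = two_theta + true_offset + rng.normal(0, 0.002, two_theta.size)

# Empirical calibration: least-squares estimate of the constant zero-offset.
est_offset = np.mean(observed - two_theta)
```

    Real instruments need a full aberration function (specimen displacement, axial divergence, etc.) rather than a single constant, which is where the model-based fundamental parameters approach mentioned in the abstract comes in.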

  19. An Automated and Intelligent Medical Decision Support System for Brain MRI Scans Classification.

    PubMed

    Siddiqui, Muhammad Faisal; Reza, Ahmed Wasif; Kanesan, Jeevan

    2015-01-01

    A wide interest has been observed in medical health care applications that interpret neuroimaging scans by machine learning systems. This research proposes an intelligent, automatic, accurate, and robust classification technique to classify a human brain magnetic resonance image (MRI) as normal or abnormal, in order to reduce human error in identifying diseases in brain MRIs. In this study, fast discrete wavelet transform (DWT), principal component analysis (PCA), and least squares support vector machine (LS-SVM) are used as basic components. Firstly, fast DWT is employed to extract the salient features of the brain MRI, followed by PCA, which reduces the dimensions of the features. These reduced feature vectors also shrink memory storage consumption by 99.5%. Finally, an advanced classification technique based on LS-SVM is applied to brain MR image classification using the reduced features. To improve efficiency, LS-SVM is used with a non-linear radial basis function (RBF) kernel. The proposed algorithm intelligently determines the optimized values of the hyper-parameters of the RBF kernel and also applies stratified k-fold cross-validation to enhance the generalization of the system. The method was tested on benchmark datasets of 340 patients' T1-weighted and T2-weighted scans. From the analysis of experimental results and performance comparisons, it is observed that the proposed medical decision support system outperformed all other modern classifiers and achieves a 100% accuracy rate (specificity/sensitivity 100%/100%). Furthermore, in terms of computation time, the proposed technique is significantly faster than recent well-known methods, improving efficiency by 71%, 3%, and 4% in the feature extraction, feature reduction, and classification stages, respectively. These results indicate that the proposed well-trained machine learning system has the potential to make accurate predictions about brain abnormalities from individual subjects; therefore, it can be used as a significant tool in clinical practice.
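
    The DWT -> PCA -> kernel-SVM pipeline can be sketched as below. The "scans" are synthetic images with a bright blob marking the abnormal class, a one-level Haar approximation subband stands in for the fast DWT features, and a standard RBF-kernel SVC stands in for LS-SVM:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

# Hypothetical 64x64 "slices": abnormal ones carry a bright region.
def make_image(abnormal):
    img = rng.normal(0, 0.1, (64, 64))
    if abnormal:
        img[24:40, 24:40] += 1.0
    return img

def haar_approx(img):
    """One-level 2D Haar DWT approximation (LL) subband."""
    rows = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
    return (rows[0::2, :] + rows[1::2, :]) / np.sqrt(2)

X = np.array([haar_approx(make_image(i % 2 == 1)).ravel() for i in range(80)])
y = np.array([i % 2 for i in range(80)])

# PCA shrinks the wavelet features; an RBF-kernel SVM classifies them.
feats = PCA(n_components=10).fit_transform(X)
acc = cross_val_score(SVC(kernel="rbf"), feats, y, cv=5).mean()
```

    The wavelet step concentrates spatial structure into few coefficients and PCA then discards the rest, which is how the original feature vectors can shrink so dramatically before classification.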

  20. Development of advanced Czochralski growth process to produce low-cost 150 kG silicon ingots from a single crucible for technology readiness

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The modified CG2000 crystal grower construction, installation, and machine check out was completed. The process development check out proceeded with several dry runs and one growth run. Several machine calibrations and functional problems were discovered and corrected. Exhaust gas analysis system alternatives were evaluated and an integrated system approved and ordered. Several growth runs on a development CG2000 RC grower show that complete neck, crown, and body automated growth can be achieved with only one operator input.

  1. Temperature measurement on neurological pulse generators during MR scans

    PubMed Central

    Kainz, Wolfgang; Neubauer, Georg; Überbacher, Richard; Alesch, François; Chan, Dulciana Dias

    2002-01-01

    According to manufacturers of both magnetic resonance imaging (MRI) machines and implantable neurological pulse generators (IPGs), MRI is contraindicated for patients with IPGs. A major argument for this restriction is the risk of heat induced in the leads by the electromagnetic field, which could be dangerous for the surrounding brain parenchyma. The temperature change on the surface of the case of an ITREL-III (Medtronic Inc., Minneapolis, MN) and at the lead tip during MRI was determined. An anatomically realistic phantom and a cubic phantom, filled with phantom material mimicking human tissue, and a typical lead configuration were used to imitate a patient who carries an IPG for deep brain stimulation. The measurements were performed in a 1.5 T and a 3.0 T MRI. A temperature increase of 2.1°C at the lead tip identified it as the most critical location for heating in IPGs. Temperature increases at other locations were low compared to the one at the lead tip. The measured temperature increase of 2.1°C cannot be considered harmful to the patient. Comparison with the results of other studies revealed the avoidance of loops as a practical method to reduce heating during MRI procedures. PMID:12437766

  2. Vision 20/20: Magnetic resonance imaging-guided attenuation correction in PET/MRI: Challenges, solutions, and opportunities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mehranian, Abolfazl; Arabi, Hossein; Zaidi, Habib, E-mail: habib.zaidi@hcuge.ch

    Attenuation correction is an essential component of the long chain of data correction techniques required to achieve the full potential of quantitative positron emission tomography (PET) imaging. The development of combined PET/magnetic resonance imaging (MRI) systems mandated the widespread interest in developing novel strategies for deriving accurate attenuation maps with the aim to improve the quantitative accuracy of these emerging hybrid imaging systems. The attenuation map in PET/MRI should ideally be derived from anatomical MR images; however, MRI intensities reflect proton density and relaxation time properties of biological tissues rather than their electron density and photon attenuation properties. Therefore, in contrast to PET/computed tomography, there is a lack of standardized global mapping between the intensities of MRI signal and linear attenuation coefficients at 511 keV. Moreover, in standard MRI sequences, bones and lung tissues do not produce measurable signals owing to their low proton density and short transverse relaxation times. MR images are also inevitably subject to artifacts that degrade their quality, thus compromising their applicability for the task of attenuation correction in PET/MRI. MRI-guided attenuation correction strategies can be classified in three broad categories: (i) segmentation-based approaches, (ii) atlas-registration and machine learning methods, and (iii) emission/transmission-based approaches. This paper summarizes past and current state-of-the-art developments and latest advances in PET/MRI attenuation correction. The advantages and drawbacks of each approach for addressing the challenges of MR-based attenuation correction are comprehensively described. The opportunities brought by both MRI and PET imaging modalities for deriving accurate attenuation maps and improving PET quantification will be elaborated. Future prospects and potential clinical applications of these techniques and their integration in commercial systems will also be discussed.

  3. Vision 20/20: Magnetic resonance imaging-guided attenuation correction in PET/MRI: Challenges, solutions, and opportunities.

    PubMed

    Mehranian, Abolfazl; Arabi, Hossein; Zaidi, Habib

    2016-03-01

    Attenuation correction is an essential component of the long chain of data correction techniques required to achieve the full potential of quantitative positron emission tomography (PET) imaging. The development of combined PET/magnetic resonance imaging (MRI) systems mandated the widespread interest in developing novel strategies for deriving accurate attenuation maps with the aim to improve the quantitative accuracy of these emerging hybrid imaging systems. The attenuation map in PET/MRI should ideally be derived from anatomical MR images; however, MRI intensities reflect proton density and relaxation time properties of biological tissues rather than their electron density and photon attenuation properties. Therefore, in contrast to PET/computed tomography, there is a lack of standardized global mapping between the intensities of MRI signal and linear attenuation coefficients at 511 keV. Moreover, in standard MRI sequences, bones and lung tissues do not produce measurable signals owing to their low proton density and short transverse relaxation times. MR images are also inevitably subject to artifacts that degrade their quality, thus compromising their applicability for the task of attenuation correction in PET/MRI. MRI-guided attenuation correction strategies can be classified in three broad categories: (i) segmentation-based approaches, (ii) atlas-registration and machine learning methods, and (iii) emission/transmission-based approaches. This paper summarizes past and current state-of-the-art developments and latest advances in PET/MRI attenuation correction. The advantages and drawbacks of each approach for addressing the challenges of MR-based attenuation correction are comprehensively described. The opportunities brought by both MRI and PET imaging modalities for deriving accurate attenuation maps and improving PET quantification will be elaborated. 
Future prospects and potential clinical applications of these techniques and their integration in commercial systems will also be discussed.

  4. Navigation-supported diagnosis of the substantia nigra by matching midbrain sonography and MRI

    NASA Astrophysics Data System (ADS)

    Salah, Zein; Weise, David; Preim, Bernhard; Classen, Joseph; Rose, Georg

    2012-03-01

    Transcranial sonography (TCS) is a well-established neuroimaging technique that allows for visualizing several brainstem structures, including the substantia nigra, and helps for the diagnosis and differential diagnosis of various movement disorders, especially in Parkinsonian syndromes. However, proximate brainstem anatomy can hardly be recognized due to the limited image quality of B-scans. In this paper, a visualization system for the diagnosis of the substantia nigra is presented, which utilizes neuronavigated TCS to reconstruct tomographical slices from registered MRI datasets and visualizes them simultaneously with corresponding TCS planes in realtime. To generate MRI tomographical slices, the tracking data of the calibrated ultrasound probe are passed to an optimized slicing algorithm, which computes cross sections at arbitrary positions and orientations from the registered MRI dataset. The extracted MRI cross sections are finally fused with the region of interest from the ultrasound image. The system allows for the computation and visualization of slices at a near real-time rate. Primary tests of the system show an added value to the pure sonographic imaging. The system also allows for reconstructing volumetric (3D) ultrasonic data of the region of interest, and thus contributes to enhancing the diagnostic yield of midbrain sonography.
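
    The core slicing step described above, computing an MRI cross section at an arbitrary position and orientation given the tracked probe pose, can be sketched with `scipy.ndimage.map_coordinates`. The volume, slice centre, and in-plane axes below are synthetic placeholders:

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Hypothetical registered volume containing a bright plane at index 32.
vol = np.zeros((64, 64, 64))
vol[32, :, :] = 1.0

def extract_slice(volume, center, u, v, size=64):
    """Sample a size x size slice centred at `center`, spanned by unit vectors u, v."""
    ii, jj = np.meshgrid(np.arange(size) - size / 2,
                         np.arange(size) - size / 2, indexing="ij")
    coords = (center[:, None, None]
              + u[:, None, None] * ii[None]
              + v[:, None, None] * jj[None])
    # Trilinear interpolation of the volume at the requested oblique grid.
    return map_coordinates(volume, coords, order=1, mode="constant")

center = np.array([32.0, 32.0, 32.0])   # would come from the tracked probe pose
u = np.array([0.0, 1.0, 0.0])           # slice axes; here aligned with the plane
v = np.array([0.0, 0.0, 1.0])
sl = extract_slice(vol, center, u, v)
```

    In the navigated setting, `center`, `u`, and `v` are derived from the calibrated probe's tracking data, so the extracted MRI slice stays co-planar with the live ultrasound image.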

  5. Development of Advanced Czochralski Growth Process to produce low cost 150 KG silicon ingots from a single crucible for technology readiness

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The modified CG2000 crystal grower construction, installation, and machine check-out was completed. The process development check-out proceeded with several dry runs and one growth run. Several machine calibrations and functional problems were discovered and corrected. Several exhaust gas analysis system alternatives were evaluated and an integrated system approved and ordered. A contract presentation was made at the Project Integration Meeting at JPL, including cost projections using contract projected throughput and machine parameters. Several growth runs on a development CG2000 RC grower show that complete neck, crown, and body automated growth can be achieved with only one operator input. Work continued on melt level, melt temperature, and diameter sensor development.

  6. Self-Calibrated In-Process Photogrammetry for Large Raw Part Measurement and Alignment before Machining

    PubMed Central

    Mendikute, Alberto; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai

    2017-01-01

    Photogrammetry methods are being used more and more as a 3D technique for large scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, where measuring traceability is provided by precise off-process pre-calibrated digital cameras and scale bars. According to the 2D target image coordinates, target 3D coordinates and camera views are jointly computed. One of the applications of photogrammetry is the measurement of raw part surfaces prior to its machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. With that approach, a high computation time is observed, leading in practice to time consuming and user dependent iterative review and re-processing procedures until an adequate set of images is taken, limiting its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer grade desktop PC, enabling quasi real time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach due to its potential for highest precision when using low cost non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special purpose calibration artifacts. 
The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g., 0.1 mm error in 1 m) with an error RMS below 0.2 pixels at image plane, ranging at the same performance reported for portable photogrammetry with precise off-process pre-calibrated cameras. PMID:28891946
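
    The bundle adjustment problem at the heart of this record is a nonlinear least-squares fit of camera poses and target coordinates to the observed 2D image points. A tiny single-camera slice of it, recovering a camera translation that explains observed target projections, can be sketched with `scipy.optimize.least_squares`; all numbers below are synthetic:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(10)
f = 1000.0                                   # focal length in pixels (assumed known)
pts3d = rng.uniform(-1, 1, (20, 3)) + np.array([0, 0, 5])  # targets in front of camera
t_true = np.array([0.2, -0.1, 0.3])          # translation to be recovered

def project(points, t):
    """Pinhole projection of 3D points under camera translation t (rotation fixed)."""
    q = points + t
    return f * q[:, :2] / q[:, 2:3]

obs = project(pts3d, t_true) + rng.normal(0, 0.1, (20, 2))  # ~0.1 px image noise

def residuals(t):
    # Reprojection error: the quantity bundle adjustment minimises.
    return (project(pts3d, t) - obs).ravel()

sol = least_squares(residuals, x0=np.zeros(3))
rms_px = np.sqrt(np.mean(sol.fun ** 2))
```

    Full bundle adjustment jointly optimises all camera poses, target coordinates, and (in the self-calibrated case) lens distortion parameters; the sub-pixel residual RMS reported in the abstract corresponds to `rms_px` here.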

  7. Self-Calibrated In-Process Photogrammetry for Large Raw Part Measurement and Alignment before Machining.

    PubMed

    Mendikute, Alberto; Yagüe-Fabra, José A; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai

    2017-09-09

    Photogrammetry methods are being used more and more as a 3D technique for large scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, where measuring traceability is provided by precise off-process pre-calibrated digital cameras and scale bars. According to the 2D target image coordinates, target 3D coordinates and camera views are jointly computed. One of the applications of photogrammetry is the measurement of raw part surfaces prior to its machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. With that approach, a high computation time is observed, leading in practice to time consuming and user dependent iterative review and re-processing procedures until an adequate set of images is taken, limiting its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer grade desktop PC, enabling quasi real time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach due to its potential for highest precision when using low cost non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special purpose calibration artifacts. The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g. 
0.1 mm error in 1 m) with an error RMS below 0.2 pixels at image plane, ranging at the same performance reported for portable photogrammetry with precise off-process pre-calibrated cameras.

  8. RELIABILITY TESTING OF AN ON-HARVESTER COTTON WEIGHT MEASUREMENT SYSTEM

    USDA-ARS?s Scientific Manuscript database

    A system for weighing seed cotton onboard stripper harvesters was developed and installed on several producer owned and operated machines. The weight measurement system provides critical information to producers when in the process of calibrating yield monitors or conducting on-farm research. The ...

  9. Enhanced Reality Visualization in a Surgical Environment.

    DTIC Science & Technology

    1995-01-13

    perhaps by a significant amount following calibration. Examples of these methods include [Brown, 1965, Lenz and Tsai, 1988, Maybank and Faugeras...Analysis And Machine Intelligence, 10(5):713-720, September 1988. [Maybank and Faugeras, 1992] Stephen J. Maybank and Olivier D. Faugeras. A

  10. A catalog of stellar spectrophotometry

    NASA Technical Reports Server (NTRS)

    Adelman, S. J.; Pyper, D. M.; Shore, S. N.; White, R. E.; Warren, W. H., Jr.

    1989-01-01

    A machine-readable catalog of stellar spectrophotometric measurements made with a rotating grating scanner is introduced. Consideration is given to the processes by which the stellar data were collected and calibrated with the fluxes of Vega (Hayes and Latham, 1975). A sample page from the spectrophotometric catalog is presented.

  11. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras.

    PubMed

    Quinn, Mark Kenneth; Spinosa, Emanuele; Roberts, David A

    2017-07-25

    Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research presents the relevant imaging characteristics and demonstrates the applicability of such imaging technology for PSP. Camera performance is benchmarked against standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on-board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access.
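
    The static-chamber calibration mentioned above is commonly modelled with the Stern-Volmer relation, I_ref/I = A + B*(P/P_ref). A minimal sketch, with synthetic coefficients and chamber pressures, fits the calibration and inverts it to recover pressure from a measured intensity ratio:

```python
import numpy as np

rng = np.random.default_rng(6)
A_true, B_true = 0.3, 0.7                       # illustrative paint coefficients
p_ratio = np.linspace(0.4, 1.6, 12)             # chamber pressures / reference
intensity_ratio = 1.0 / (A_true + B_true * p_ratio) + rng.normal(0, 1e-3, 12)

# Linear fit of I_ref/I against P/P_ref recovers the Stern-Volmer coefficients.
B_fit, A_fit = np.polyfit(p_ratio, 1.0 / intensity_ratio, 1)

def pressure_from_intensity(i_ratio, a=A_fit, b=B_fit):
    """Convert a measured I/I_ref into P/P_ref using the calibrated paint."""
    return (1.0 / i_ratio - a) / b
```

    The camera's role in this chain is to deliver `intensity_ratio` with low noise and good linearity, which is exactly what the benchmarking in the record assesses for machine vision sensors.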

  12. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras

    PubMed Central

    Spinosa, Emanuele; Roberts, David A.

    2017-01-01

    Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research presents the relevant imaging characteristics and demonstrates the applicability of such imaging technology for PSP. Camera performance is benchmarked against standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on-board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access. PMID:28757553

  13. Application of Hyperspectral Imaging to Detect Sclerotinia sclerotiorum on Oilseed Rape Stems

    PubMed Central

    Kong, Wenwen; Zhang, Chu; Huang, Weihao

    2018-01-01

    Hyperspectral imaging covering the spectral range of 384–1034 nm combined with chemometric methods was used to detect Sclerotinia sclerotiorum (SS) on oilseed rape stems by two sample sets (60 healthy and 60 infected stems for each set). Second derivative spectra and PCA loadings were used to select the optimal wavelengths. Discriminant models were built and compared to detect SS on oilseed rape stems, including partial least squares-discriminant analysis, radial basis function neural network, support vector machine and extreme learning machine. The discriminant models using full spectra and optimal wavelengths showed good performance with classification accuracies of over 80% for the calibration and prediction set. Comparing all developed models, the optimal classification accuracies of the calibration and prediction set were over 90%. The similarity of selected optimal wavelengths also indicated the feasibility of using hyperspectral imaging to detect SS on oilseed rape stems. The results indicated that hyperspectral imaging could be used as a fast, non-destructive and reliable technique to detect plant diseases on stems. PMID:29300315

  14. [Determination of process variable pH in solid-state fermentation by FT-NIR spectroscopy and extreme learning machine (ELM)].

    PubMed

    Liu, Guo-hai; Jiang, Hui; Xiao, Xia-hong; Zhang, Dong-juan; Mei, Cong-li; Ding, Yu-han

    2012-04-01

    Fourier transform near-infrared (FT-NIR) spectroscopy was attempted to determine pH, which is one of the key process parameters in solid-state fermentation of crop straws. First, near-infrared spectra of 140 solid-state fermented product samples were obtained by a near-infrared spectroscopy system in the wavenumber range of 10 000-4 000 cm(-1), and the reference measurements of pH were then obtained with a pH meter. Thereafter, an extreme learning machine (ELM) was employed to build the calibration model. In the calibration model, the optimal number of PCs and the optimal number of hidden-layer nodes of the ELM network were determined by cross-validation. Experimental results showed that the optimal ELM model was achieved with a 1040-1 topology, with Rp = 0.9618 and RMSEP = 0.1044 in the prediction set. This research achievement could provide a technological basis for the on-line measurement of process parameters in solid-state fermentation.
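An extreme learning machine fixes random input weights and solves only the output layer by (ridge-regularized) least squares, which is what makes it fast to calibrate. The sketch below is a minimal stand-in trained on a synthetic linear "pH vs. spectral score" relation, not the paper's FT-NIR data; the layer sizes and ridge constant are illustrative assumptions.

```python
import math
import random

def solve(A, b):
    # Gaussian elimination with partial pivoting for the normal equations.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

class ELM:
    """Random hidden layer + least-squares output layer."""
    def __init__(self, n_inputs, n_hidden, ridge=1e-6, seed=0):
        rng = random.Random(seed)
        self.W = [[rng.uniform(-1, 1) for _ in range(n_inputs)]
                  for _ in range(n_hidden)]
        self.b = [rng.uniform(-1, 1) for _ in range(n_hidden)]
        self.ridge = ridge
        self.beta = None

    def _hidden(self, x):
        return [math.tanh(sum(w * v for w, v in zip(row, x)) + bi)
                for row, bi in zip(self.W, self.b)]

    def fit(self, X, y):
        # Only the output weights are learned: (H^T H + ridge*I) beta = H^T y.
        H = [self._hidden(x) for x in X]
        n = len(self.W)
        HtH = [[sum(h[i] * h[j] for h in H) + (self.ridge if i == j else 0.0)
                for j in range(n)] for i in range(n)]
        Hty = [sum(h[i] * t for h, t in zip(H, y)) for i in range(n)]
        self.beta = solve(HtH, Hty)

    def predict(self, x):
        return sum(b * h for b, h in zip(self.beta, self._hidden(x)))

# Synthetic calibration: a pH-like target that depends linearly on one score.
X = [[t / 20.0] for t in range(-20, 21)]
y = [5.0 + 2.0 * x[0] for x in X]
elm = ELM(n_inputs=1, n_hidden=10)
elm.fit(X, y)
rmse = math.sqrt(sum((elm.predict(x) - t) ** 2 for x, t in zip(X, y)) / len(X))
print(rmse)
```

In the paper's setting the inputs would be PCA scores of the NIR spectra and the hidden-layer size would be chosen by cross-validation, as the abstract describes.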

  15. Current Status of Efforts on Standardizing Magnetic Resonance Imaging of Juvenile Idiopathic Arthritis: Report from the OMERACT MRI in JIA Working Group and Health-e-Child.

    PubMed

    Nusman, Charlotte M; Ording Muller, Lil-Sofie; Hemke, Robert; Doria, Andrea S; Avenarius, Derk; Tzaribachev, Nikolay; Malattia, Clara; van Rossum, Marion A J; Maas, Mario; Rosendahl, Karen

    2016-01-01

    To report on the progress of an ongoing research collaboration on magnetic resonance imaging (MRI) in juvenile idiopathic arthritis (JIA) and describe the proceedings of a meeting, held prior to Outcome Measures in Rheumatology (OMERACT) 12, bringing together the OMERACT MRI in JIA working group and the Health-e-Child radiology group. The goal of the meeting was to establish agreement on scoring definitions, locations, and scales for the assessment of MRI of patients with JIA for both large and small joints. The collaborative work process included premeeting surveys, presentations, group discussions, consensus on scoring methods, pilot scoring, conjoint review, and discussion of a future research agenda. The meeting resulted in preliminary statements on the MR imaging protocol of the JIA knee and wrist and determination of the starting point for development of MRI scoring systems based on previous studies. It was also considered important to be descriptive rather than explanatory in the assessment of MRI in JIA (e.g., "thickening" instead of "hypertrophy"). Further, the group agreed that well-designed calibration sessions were warranted before any future scoring exercises were conducted. The combined efforts of the OMERACT MRI in JIA working group and Health-e-Child included the assessment of currently available material in the literature and determination of the basis from which to start the development of MRI scoring systems for both the knee and wrist. The future research agenda for the knee and wrist will include establishment of MRI scoring systems, an atlas of MR imaging in healthy children, and MRI protocol requisites.

  16. Reliably detectable flaw size for NDE methods that use calibration

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2017-04-01

    Probability of detection (POD) analysis is used in assessing reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. In this paper, POD analysis is applied to an NDE method, such as eddy current testing, where calibration is used. NDE calibration standards have known-size artificial flaws, such as electro-discharge machined (EDM) notches and flat bottom hole (FBH) reflectors, which are used to set instrument sensitivity for detection of real flaws. Real flaws such as cracks and crack-like flaws are desired to be detected using these NDE methods. A reliably detectable crack size is required for safe life analysis of fracture critical parts. Therefore, it is important to correlate signal responses from real flaws with signal responses from artificial flaws used in the calibration process to determine reliably detectable flaw size.

  17. Reliably Detectable Flaw Size for NDE Methods that Use Calibration

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2017-01-01

    Probability of detection (POD) analysis is used in assessing reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. In this paper, POD analysis is applied to an NDE method, such as eddy current testing, where calibration is used. NDE calibration standards have known-size artificial flaws, such as electro-discharge machined (EDM) notches and flat bottom hole (FBH) reflectors, which are used to set instrument sensitivity for detection of real flaws. Real flaws such as cracks and crack-like flaws are desired to be detected using these NDE methods. A reliably detectable crack size is required for safe life analysis of fracture critical parts. Therefore, it is important to correlate signal responses from real flaws with signal responses from artificial flaws used in the calibration process to determine reliably detectable flaw size.
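The â-versus-a regression at the heart of MIL-HDBK-1823-style POD analysis can be sketched as follows: fit ln(signal) against ln(flaw size), treat the residual scatter as Gaussian, and invert the model for the flaw size detected with 90% probability (a90). The data, detection threshold, and coefficients below are synthetic stand-ins, not values from the handbook or the paper.

```python
import math
import random

def fit_line(xs, ys):
    # Ordinary least squares for ln(signal) = b0 + b1*ln(size) + noise.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    b0 = my - b1 * mx
    resid = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
    tau = math.sqrt(sum(r * r for r in resid) / (n - 2))
    return b0, b1, tau

def pod(a, b0, b1, tau, ln_threshold):
    # POD(a) = Phi((b0 + b1*ln(a) - ln_threshold) / tau), Phi via math.erf.
    z = (b0 + b1 * math.log(a) - ln_threshold) / tau
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def a90(b0, b1, tau, ln_threshold, z90=1.28155):
    # Flaw size detected with 90% probability (z90 is the 0.90 normal quantile).
    return math.exp((ln_threshold + z90 * tau - b0) / b1)

# Synthetic "a-hat vs a" data: signal grows with flaw size plus scatter.
rng = random.Random(1)
sizes = [0.2 + 0.05 * i for i in range(40)]            # flaw sizes, e.g. mm
ln_a = [math.log(a) for a in sizes]
ln_ahat = [1.0 + 1.5 * x + rng.gauss(0, 0.1) for x in ln_a]
b0, b1, tau = fit_line(ln_a, ln_ahat)
ln_th = math.log(2.0)                                  # detection threshold
size_90 = a90(b0, b1, tau, ln_th)
print(round(size_90, 3), round(pod(size_90, b0, b1, tau, ln_th), 3))
```

Correlating real-flaw responses with calibration-artifact responses, as the paper proposes, would amount to adjusting the threshold or the regression before computing a90.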

  18. Breast cancer Ki67 expression preoperative discrimination by DCE-MRI radiomics features

    NASA Astrophysics Data System (ADS)

    Ma, Wenjuan; Ji, Yu; Qin, Zhuanping; Guo, Xinpeng; Jian, Xiqi; Liu, Peifang

    2018-02-01

    To investigate whether quantitative radiomics features extracted from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) are associated with Ki67 expression of breast cancer. In this institutional review board-approved retrospective study, we collected 377 cases of Chinese women who were diagnosed with invasive breast cancer in 2015. This cohort included 53 cases with low Ki67 expression (Ki67 proliferation index less than 14%) and 324 cases with high Ki67 expression (Ki67 proliferation index more than 14%). A binary classification of low vs. high Ki67 expression was performed. A set of 52 quantitative radiomics features, including morphological, gray-scale statistic, and texture features, was extracted from the segmented lesion area. Three of the most common machine learning classification methods, including Naive Bayes, k-nearest neighbor and support vector machine with Gaussian kernel, were employed for the classification, and the least absolute shrinkage and selection operator (LASSO) method was used to select the most predictive feature set for the classifiers. Classification performance was evaluated by the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity and specificity. The model that used the Naive Bayes classification method achieved better performance than the other two methods, yielding an AUC value of 0.773, accuracy of 0.757, sensitivity of 0.777 and specificity of 0.769. Our study showed that quantitative radiomics imaging features of breast tumors extracted from DCE-MRI are associated with breast cancer Ki67 expression. Future larger studies are needed in order to further evaluate the findings.
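A minimal stand-in for one of the record's classifiers: Gaussian Naive Bayes scores on two synthetic "radiomic" features, evaluated by AUC computed as a rank (Mann-Whitney) statistic. The feature distributions and sample sizes are invented for illustration; the study's 52 features, LASSO selection, and patient data are not reproduced.

```python
import math
import random

def gaussian_nb_fit(X, y):
    # Per-class feature means, variances, and priors.
    model = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        variances = [sum((v - m) ** 2 for v in col) / len(rows)
                     for col, m in zip(zip(*rows), means)]
        model[label] = (means, variances, len(rows) / len(y))
    return model

def log_posterior(model, x, label):
    means, variances, prior = model[label]
    ll = math.log(prior)
    for v, m, s2 in zip(x, means, variances):
        ll += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
    return ll

def auc(scores, labels):
    # AUC as the probability a positive outscores a negative (ties = 0.5).
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic two-feature data for "low" (0) vs "high" (1) expression.
rng = random.Random(0)
X, y = [], []
for _ in range(100):
    X.append([rng.gauss(0, 1), rng.gauss(0, 1)]); y.append(0)
    X.append([rng.gauss(1.5, 1), rng.gauss(1.0, 1)]); y.append(1)
model = gaussian_nb_fit(X, y)
scores = [log_posterior(model, x, 1) - log_posterior(model, x, 0) for x in X]
result = auc(scores, y)
print(round(result, 3))
```

A real evaluation would score held-out cases (e.g., via cross-validation) rather than the training set as this toy does.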

  19. Use of Fetal Magnetic Resonance Image Analysis and Machine Learning to Predict the Need for Postnatal Cerebrospinal Fluid Diversion in Fetal Ventriculomegaly.

    PubMed

    Pisapia, Jared M; Akbari, Hamed; Rozycki, Martin; Goldstein, Hannah; Bakas, Spyridon; Rathore, Saima; Moldenhauer, Julie S; Storm, Phillip B; Zarnow, Deborah M; Anderson, Richard C E; Heuer, Gregory G; Davatzikos, Christos

    2018-02-01

    Which children with fetal ventriculomegaly, or enlargement of the cerebral ventricles in utero, will develop hydrocephalus requiring treatment after birth is unclear. To determine whether extraction of multiple imaging features from fetal magnetic resonance imaging (MRI) and integration using machine learning techniques can predict which patients require postnatal cerebrospinal fluid (CSF) diversion after birth. This retrospective case-control study used an institutional database of 253 patients with fetal ventriculomegaly from January 1, 2008, through December 31, 2014, to generate a predictive model. Data were analyzed from January 1, 2008, through December 31, 2015. All 25 patients who required postnatal CSF diversion were selected and matched by gestational age with 25 patients with fetal ventriculomegaly who did not require CSF diversion (discovery cohort). The model was applied to a sample of 24 consecutive patients with fetal ventriculomegaly who underwent evaluation at a separate institution (replication cohort) from January 1, 1998, through December 31, 2007. Data were analyzed from January 1, 1998, through December 31, 2009. To generate the model, linear measurements, area, volume, and morphologic features were extracted from the fetal MRI, and a machine learning algorithm analyzed multiple features simultaneously to find the combination that was most predictive of the need for postnatal CSF diversion. Accuracy, sensitivity, and specificity of the model in correctly classifying patients requiring postnatal CSF diversion. A total of 74 patients (41 girls [55%] and 33 boys [45%]; mean [SD] gestational age, 27.0 [5.6] weeks) were included from both cohorts. In the discovery cohort, median time to CSF diversion was 6 days (interquartile range [IQR], 2-51 days), and patients with fetal ventriculomegaly who did not develop symptoms were followed up for a median of 29 months (IQR, 9-46 months). 
The model correctly classified patients who required CSF diversion with 82% accuracy, 80% sensitivity, and 84% specificity. In the replication cohort, the model achieved 91% accuracy, 75% sensitivity, and 95% specificity. Image analysis and machine learning can be applied to fetal MRI findings to predict the need for postnatal CSF diversion. The model provides prognostic information that may guide clinical management and select candidates for potential fetal surgical intervention.

  20. Technical Note: A safe, cheap, and easy-to-use isotropic diffusion MRI phantom for clinical and multicenter studies.

    PubMed

    Pullens, Pim; Bladt, Piet; Sijbers, Jan; Maas, Andrew I R; Parizel, Paul M

    2017-03-01

    Since Diffusion Weighted Imaging (DWI) data acquisition and processing are not standardized, substantial differences in DWI-derived measures such as the Apparent Diffusion Coefficient (ADC) may arise which are related to the acquisition or MRI processing method, but not to the sample under study. Quality assurance using a standardized test object, or phantom, is a key factor in standardizing DWI across scanners. Current diffusion phantoms are either complex to use, not available in larger quantities, contain substances unwanted in a clinical environment, or are expensive. A diffusion phantom based on a polyvinylpyrrolidone (PVP) solution, together with a phantom holder, is presented and compared to existing diffusion phantoms for use in clinical DWI scans. An ADC vs. temperature calibration curve was obtained. The ADC of the phantom (808 to 857 ± 0.2 × 10⁻⁶ mm²/s) is in the same range as ADC values found in brain tissue. ADC measurements are highly reproducible across time, with an intra-class correlation coefficient of > 0.8. ADC as a function of temperature T (in Kelvin) can be estimated as ADCm(T) = exp(-7.09) · exp[-2903.81(1/T - 1/293.55)] with a total uncertainty (95% confidence limit) of ± 1.7%. We present an isotropic diffusion MRI phantom, together with its temperature calibration curve, that is easy to use in a clinical environment, cost-effective, reproducible to produce, and that contains no harmful substances. © 2017 American Association of Physicists in Medicine.
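A calibration curve of this Arrhenius-type form (ln ADC linear in 1/T) can be refit from measured (temperature, ADC) pairs with ordinary least squares. The data points and coefficients below are synthetic stand-ins chosen only to land in a plausible ADC range; they are not the published calibration.

```python
import math

def fit_arrhenius(temps_K, adcs):
    # Least-squares fit of ln(ADC) = c0 + c1 / T.
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(a) for a in adcs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    c1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    c0 = my - c1 * mx
    return c0, c1

def adc_model(T, c0, c1):
    # ADC in units of 1e-6 mm^2/s for temperature T in Kelvin.
    return math.exp(c0 + c1 / T)

# Synthetic, noiseless calibration points generated from assumed coefficients.
true_c0, true_c1 = 13.75, -2060.0
temps = [291.15, 293.15, 295.15, 297.15, 299.15]
adcs = [adc_model(T, true_c0, true_c1) for T in temps]
c0, c1 = fit_arrhenius(temps, adcs)
print(round(c0, 4), round(c1, 1), round(adc_model(294.15, c0, c1), 1))
```

With noiseless inputs the fit recovers the generating coefficients exactly; in practice the phantom temperature would be logged at scan time and ADC read from the fitted curve.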

  1. Calibration of raw accelerometer data to measure physical activity: A systematic review.

    PubMed

    de Almeida Mendes, Márcio; da Silva, Inácio C M; Ramires, Virgílio V; Reichert, Felipe F; Martins, Rafaela C; Tomasi, Elaine

    2018-03-01

    Most calibration studies based on accelerometry have used count-based analyses. In contrast, calibration studies based on raw acceleration signals are relatively recent and their evidence is still incipient. The aim of the current study was to systematically review the literature in order to summarize methodological characteristics and results from raw-data calibration studies. The review was conducted up to May 2017 using four databases: PubMed, Scopus, SPORTDiscus and Web of Science. Methodological quality of the included studies was evaluated using the Landis and Koch guidelines. Initially, 1669 titles were identified and, after assessing titles, abstracts and full articles, 20 studies were included. All studies were conducted in high-income countries, most of them with relatively small samples and specific population groups. Physical activity protocols differed among studies, and indirect calorimetry was the most commonly used criterion measure. High mean values of sensitivity, specificity and accuracy were observed for the intensity thresholds of cut-point-based studies (93.7%, 91.9% and 95.8%, respectively). The most frequent statistical approach was machine learning-based modelling, in which the mean coefficient of determination to predict physical activity energy expenditure was 0.70. Regarding the recognition of physical activity types, the mean values of accuracy for sedentary, household and locomotive activities were 82.9%, 55.4% and 89.7%, respectively. In conclusion, considering the construct of physical activity that each approach assesses, linear regression, machine-learning and cut-point-based approaches presented promising validity parameters. Copyright © 2017 Elsevier B.V. All rights reserved.
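The cut-point approach the review summarizes reduces to thresholding an acceleration summary metric and scoring agreement against a criterion measure. The sketch below uses invented cut-points and labels; they are illustrative only, not thresholds from any of the reviewed studies.

```python
def classify_intensity(value, cuts):
    # cuts: ascending (threshold, label) pairs; values below the first
    # threshold get the first label, and so on; the rest are "vigorous".
    for threshold, label in cuts:
        if value < threshold:
            return label
    return "vigorous"

def sensitivity_specificity(pred, truth, positive):
    tp = sum(p == positive and t == positive for p, t in zip(pred, truth))
    fn = sum(p != positive and t == positive for p, t in zip(pred, truth))
    tn = sum(p != positive and t != positive for p, t in zip(pred, truth))
    fp = sum(p == positive and t != positive for p, t in zip(pred, truth))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical cut-points on an acceleration summary value in milli-g.
cuts = [(50, "sedentary"), (100, "light"), (400, "moderate")]
values = [10, 30, 70, 120, 500, 450, 90, 20]
truth = ["sedentary", "sedentary", "light", "moderate",
         "vigorous", "vigorous", "light", "sedentary"]
pred = [classify_intensity(v, cuts) for v in values]
sens, spec = sensitivity_specificity(pred, truth, "sedentary")
print(pred)
print(sens, spec)
```

Machine-learning calibration, by contrast, replaces the fixed thresholds with a model trained against the criterion measure (e.g., indirect calorimetry).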

  2. SU-F-T-485: Independent Remote Audits for TG51 NonCompliant Photon Beams Performed by the IROC Houston QA Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alvarez, P; Molineu, A; Lowenstein, J

    Purpose: IROC-H conducts external audits for output check verification of photon and electron beams. Many of these beams can meet the geometric requirements of the TG 51 calibration protocol. For those photon beams that are not TG 51 compliant, like the Elekta GammaKnife, Accuray CyberKnife and TomoTherapy, IROC-H has specific audit tools to monitor the reference calibration. Methods: IROC-H used its TLD and OSLD remote monitoring systems to verify the output of machines with TG 51 non-compliant beams. Acrylic OSLD miniphantoms are used for the CyberKnife. Special TLD phantoms are used for TomoTherapy and GammaKnife machines to accommodate the specific geometry of each machine. These remote audit tools are sent to institutions to be irradiated and returned to IROC-H for analysis. Results: The average IROC-H/institution ratios for 480 GammaKnife, 660 CyberKnife and 907 rotational TomoTherapy beams are 1.000±0.021, 1.008±0.019 and 0.974±0.023, respectively. In the particular case of TomoTherapy, the overall ratio is 0.977±0.022 for HD units. The standard deviations of all results are consistent with values determined for TG 51 compliant photon beams. These ratios have shown some changes compared to values presented in 2008. The GammaKnife results were corrected by an experimentally determined scatter factor of 1.025 in 2013. The TomoTherapy helical beam results are now from a rotational beam, whereas in 2008 the results were from a static beam. The decision to change modality was based on recommendations from the users. Conclusion: External audits of beam outputs are a valuable tool to confirm the calibrations of photon beams regardless of whether the machine is TG 51 compliant or not. The difference found for TomoTherapy units is under investigation. This investigation was supported by IROC grant CA180803 awarded by the NCI.
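The audit statistics above are means and standard deviations of measured-to-stated output ratios, with out-of-tolerance results flagged for follow-up. A minimal sketch with invented ratios follows; the 5% action level is a common convention assumed here, not taken from IROC-H's published criteria.

```python
import math

def summarize_ratios(ratios, tolerance=0.05):
    # Mean, sample SD, and out-of-tolerance flags for audit/institution
    # dose ratios, mimicking a remote TLD/OSLD output-audit summary.
    n = len(ratios)
    mean = sum(ratios) / n
    sd = math.sqrt(sum((r - mean) ** 2 for r in ratios) / (n - 1))
    flagged = [r for r in ratios if abs(r - 1.0) > tolerance]
    return mean, sd, flagged

# Hypothetical audit results for one machine type.
audits = [1.002, 0.998, 1.013, 0.991, 1.062, 0.987, 1.004]
mean, sd, flagged = summarize_ratios(audits)
print(round(mean, 3), round(sd, 3), flagged)
```

A ratio of 1.000 means the institution's stated output matched the audit measurement exactly; the flagged beam would trigger a repeat irradiation or an on-site review.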

  3. Radiomics for ultrafast dynamic contrast-enhanced breast MRI in the diagnosis of breast cancer: a pilot study

    NASA Astrophysics Data System (ADS)

    Drukker, Karen; Anderson, Rachel; Edwards, Alexandra; Papaioannou, John; Pineda, Fred; Abe, Hiroyuke; Karzcmar, Gregory; Giger, Maryellen L.

    2018-02-01

    Radiomics for dynamic contrast-enhanced (DCE) breast MRI have shown promise in the diagnosis of breast cancer as applied to conventional DCE-MRI protocols. Here, we investigate the potential of using such radiomic features in the diagnosis of breast cancer applied to ultrafast breast MRI, in which images are acquired every few seconds. The dataset consisted of 64 lesions (33 malignant and 31 benign) imaged with both 'conventional' and ultrafast DCE-MRI. After automated lesion segmentation in each image sequence, we calculated 38 radiomic features categorized as describing size, shape, margin, enhancement texture, kinetics, and enhancement variance kinetics. For each feature, we calculated the 95% confidence interval of the area under the ROC curve (AUC) to determine whether the performance of each feature in the task of distinguishing between malignant and benign lesions was better than random guessing. Subsequently, we assessed the performance of radiomic signatures in 10-fold cross-validation repeated 10 times, using a support vector machine with as input all the features as well as features by category. We found that many of the features remained useful (AUC>0.5) for the ultrafast protocol, with the exception of some features, e.g., those designed for late-phase kinetics such as the washout rate. For ultrafast MRI, the radiomics enhancement-texture signature achieved the best performance, which was comparable to that of the kinetics signature for 'conventional' DCE-MRI, both achieving AUC values of 0.71. Radiomics developed for 'conventional' DCE-MRI show promise for translation to the ultrafast protocol, where enhancement texture appears to play a dominant role.

  4. Computed tomography synthesis from magnetic resonance images in the pelvis using multiple random forests and auto-context features

    NASA Astrophysics Data System (ADS)

    Andreasen, Daniel; Edmund, Jens M.; Zografos, Vasileios; Menze, Bjoern H.; Van Leemput, Koen

    2016-03-01

    In radiotherapy treatment planning that is only based on magnetic resonance imaging (MRI), the electron density information usually obtained from computed tomography (CT) must be derived from the MRI by synthesizing a so-called pseudo CT (pCT). This is a non-trivial task since MRI intensities are neither uniquely nor quantitatively related to electron density. Typical approaches involve either a classification or regression model requiring specialized MRI sequences to solve intensity ambiguities, or an atlas-based model necessitating multiple registrations between atlases and subject scans. In this work, we explore a machine learning approach for creating a pCT of the pelvic region from conventional MRI sequences without using atlases. We use a random forest provided with information about local texture, edges and spatial features derived from the MRI. This helps to solve intensity ambiguities. Furthermore, we use the concept of auto-context by sequentially training a number of classification forests to create and improve context features, which are finally used to train a regression forest for pCT prediction. We evaluate the pCT quality in terms of the voxel-wise error and the radiologic accuracy as measured by water-equivalent path lengths. We compare the performance of our method against two baseline pCT strategies, which either set all MRI voxels in the subject equal to the CT value of water, or in addition transfer the bone volume from the real CT. We show an improved performance compared to both baseline pCTs suggesting that our method may be useful for MRI-only radiotherapy.
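The two baseline pCT strategies the abstract compares can be illustrated on a toy voxel line: assign water everywhere, or water plus the true CT's bone voxels, and score each by mean absolute error in Hounsfield units. All numbers below are invented for illustration.

```python
def mean_absolute_error(pred_hu, true_hu):
    # Voxel-wise mean absolute error between a pseudo CT and the real CT.
    return sum(abs(p - t) for p, t in zip(pred_hu, true_hu)) / len(true_hu)

# Toy 1-D "CT line" through soft tissue and bone (values in HU).
true_ct = [-5, 20, 40, 300, 700, 650, 45, 10]
water_pct = [0] * len(true_ct)                       # baseline 1: all water
bone_pct = [t if t > 150 else 0 for t in true_ct]    # baseline 2: + real bone
mae_water = mean_absolute_error(water_pct, true_ct)
mae_bone = mean_absolute_error(bone_pct, true_ct)
print(mae_water, mae_bone)
```

A learned pCT must beat both baselines to be worthwhile, which is exactly the comparison the paper reports (alongside water-equivalent path lengths).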

  5. The Neuro Bureau ADHD-200 Preprocessed repository.

    PubMed

    Bellec, Pierre; Chu, Carlton; Chouinard-Decorte, François; Benhajali, Yassine; Margulies, Daniel S; Craddock, R Cameron

    2017-01-01

    In 2011, the "ADHD-200 Global Competition" was held with the aim of identifying biomarkers of attention-deficit/hyperactivity disorder from resting-state functional magnetic resonance imaging (rs-fMRI) and structural MRI (s-MRI) data collected on 973 individuals. Statisticians and computer scientists were potentially the most qualified for the machine learning aspect of the competition, but generally lacked the specialized skills to implement the necessary steps of data preparation for rs-fMRI. Realizing this barrier to entry, the Neuro Bureau prospectively collaborated with all competitors by preprocessing the data and sharing these results at the Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC) (http://www.nitrc.org/frs/?group_id=383). This "ADHD-200 Preprocessed" release included multiple analytical pipelines to cater to different philosophies of data analysis. The processed derivatives included denoised and registered 4D fMRI volumes, regional time series extracted from brain parcellations, maps of 10 intrinsic connectivity networks, fractional amplitude of low frequency fluctuation, and regional homogeneity, along with grey matter density maps. The data were used by several teams who competed in the ADHD-200 Global Competition, including the winning entry by a group of biostatisticians. To the best of our knowledge, the ADHD-200 Preprocessed release was the first large public resource of preprocessed resting-state fMRI and structural MRI data, and remains to this day the only resource featuring a battery of alternative processing paths. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Using temporal ICA to selectively remove global noise while preserving global signal in functional MRI data.

    PubMed

    Glasser, Matthew F; Coalson, Timothy S; Bijsterbosch, Janine D; Harrison, Samuel J; Harms, Michael P; Anticevic, Alan; Van Essen, David C; Smith, Stephen M

    2018-06-02

    Temporal fluctuations in functional Magnetic Resonance Imaging (fMRI) have been profitably used to study brain activity and connectivity for over two decades. Unfortunately, fMRI data also contain structured temporal "noise" from a variety of sources, including subject motion, subject physiology, and the MRI equipment. Recently, methods have been developed to automatically and selectively remove spatially specific structured noise from fMRI data using spatial Independent Components Analysis (ICA) and machine learning classifiers. Spatial ICA is particularly effective at removing spatially specific structured noise from high temporal and spatial resolution fMRI data of the type acquired by the Human Connectome Project and similar studies. However, spatial ICA is mathematically, by design, unable to separate spatially widespread "global" structured noise from fMRI data (e.g., blood flow modulations from subject respiration). No methods currently exist to selectively and completely remove global structured noise while retaining the global signal from neural activity. This has left the field in a quandary: to do or not to do global signal regression, given that both choices have substantial downsides. Here we show that temporal ICA can selectively segregate and remove global structured noise while retaining global neural signal in both task-based and resting state fMRI data. We compare the results before and after temporal ICA cleanup to those from global signal regression and show that temporal ICA cleanup removes the global positive biases caused by global physiological noise without inducing the network-specific negative biases of global signal regression. We believe that temporal ICA cleanup provides a "best of both worlds" solution to the global signal and global noise dilemma and that temporal ICA itself unlocks interesting neurobiological insights from fMRI data. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. Juvenile Osteochondritis Dissecans: Correlation Between Histopathology and MRI.

    PubMed

    Zbojniewicz, Andrew M; Stringer, Keith F; Laor, Tal; Wall, Eric J

    2015-07-01

    The objective of our study was to correlate specimens of juvenile osteochondritis dissecans (OCD) lesions of the knee with MRI examinations to elucidate the histopathologic basis of characteristic imaging features. Five children (three boys and two girls; age range, 12-13 years old) who underwent transarticular biopsy of juvenile OCD lesions of the knee were retrospectively included in this study. Two radiologists reviewed the MRI examinations, and a pathologist reviewed the histopathologic specimens and recorded characteristic features. Digital specimen photographs were calibrated to the size of the respective MR image with the use of a reference scale. Photographs were rendered semitransparent and overlaid onto the MR image, with the location chosen on the basis of the site of the prior biopsy. A total of seven biopsy specimens were included. On MRI, all lesions showed cystlike foci in the subchondral bone, bone marrow edema pattern on proton density- or T2-weighted images, and relatively thick unossified epiphyseal cartilage. In four patients, a laminar signal intensity pattern was seen, and two patients had multiple breaks in the subchondral bone plate. Fibrovascular tissue was found at histopathology in all patients. Cleft spaces near the cartilage-bone interface were seen in all patients, and chondrocyte cloning was present in most cases. Focal bone necrosis and inflammation were infrequent histopathologic findings. Precise correlation of the MRI appearance to the histopathologic overlays was consistently found. A direct correlation exists between the histopathologic findings and the MRI features in patients with juvenile OCD. Additional studies are needed to correlate these MRI features with juvenile OCD healing success rates.

  8. A computerized MRI biomarker quantification scheme for a canine model of Duchenne muscular dystrophy.

    PubMed

    Wang, Jiahui; Fan, Zheng; Vandenborne, Krista; Walter, Glenn; Shiloh-Malawsky, Yael; An, Hongyu; Kornegay, Joe N; Styner, Martin A

    2013-09-01

    Golden retriever muscular dystrophy (GRMD) is a widely used canine model of Duchenne muscular dystrophy (DMD). Recent studies have shown that magnetic resonance imaging (MRI) can be used to non-invasively detect consistent changes in both DMD and GRMD. In this paper, we propose a semiautomated system to quantify MRI biomarkers of GRMD. Our system was applied to a database of 45 MRI scans from 8 normal and 10 GRMD dogs in a longitudinal natural history study. We first segmented six proximal pelvic limb muscles using a semiautomated full muscle segmentation method. We then performed preprocessing, including intensity inhomogeneity correction, spatial registration of different image sequences, intensity calibration of T2-weighted and T2-weighted fat-suppressed images, and calculation of MRI biomarker maps. Finally, for each of the segmented muscles, we automatically measured MRI biomarkers of muscle volume, intensity statistics over MRI biomarker maps, and statistical image texture features. The muscle volume and the mean intensities in T2 value, fat, and water maps showed group differences between normal and GRMD dogs. For the statistical texture biomarkers, both the histogram and run-length matrix features showed obvious group differences between normal and GRMD dogs. The full muscle segmentation showed significantly less error and variability in the proposed biomarkers when compared to the standard, limited muscle range segmentation. The experimental results demonstrated that this quantification tool could reliably quantify MRI biomarkers in GRMD dogs, suggesting that it would also be useful for quantifying disease progression and measuring therapeutic effect in DMD patients.

  9. fMRI Validation of fNIRS Measurements During a Naturalistic Task

    PubMed Central

    Noah, J. Adam; Ono, Yumie; Nomoto, Yasunori; Shimada, Sotaro; Tachibana, Atsumichi; Zhang, Xian; Bronner, Shaw; Hirsch, Joy

    2015-01-01

    We present a method to compare brain activity recorded with near-infrared spectroscopy (fNIRS) in a dance video game task to that recorded in a reduced version of the task using fMRI (functional magnetic resonance imaging). Recently, it has been shown that fNIRS can accurately record functional brain activities equivalent to those concurrently recorded with functional magnetic resonance imaging for classic psychophysical tasks and simple finger tapping paradigms. However, an often quoted benefit of fNIRS is that the technique allows for studying neural mechanisms of complex, naturalistic behaviors that are not possible using the constrained environment of fMRI. Our goal was to extend the findings of previous studies that have shown high correlation between concurrently recorded fNIRS and fMRI signals to compare neural recordings obtained in fMRI procedures to those separately obtained in naturalistic fNIRS experiments. Specifically, we developed a modified version of the dance video game Dance Dance Revolution (DDR) to be compatible with both fMRI and fNIRS imaging procedures. In this methodology we explain the modifications to the software and hardware for compatibility with each technique as well as the scanning and calibration procedures used to obtain representative results. The results of the study show a task-related increase in oxyhemoglobin in both modalities and demonstrate that it is possible to replicate the findings of fMRI using fNIRS in a naturalistic task. This technique represents a methodology to compare fMRI imaging paradigms which utilize a reduced-world environment to fNIRS in closer approximation to naturalistic, full-body activities and behaviors. Further development of this technique may apply to neurodegenerative diseases, such as Parkinson’s disease, late states of dementia, or those with magnetic susceptibility which are contraindicated for fMRI scanning. PMID:26132365

  10. Small animal simultaneous PET/MRI: initial experiences in a 9.4 T microMRI

    NASA Astrophysics Data System (ADS)

    Harsha Maramraju, Sri; Smith, S. David; Junnarkar, Sachin S.; Schulz, Daniela; Stoll, Sean; Ravindranath, Bosky; Purschke, Martin L.; Rescia, Sergio; Southekal, Sudeepti; Pratte, Jean-François; Vaska, Paul; Woody, Craig L.; Schlyer, David J.

    2011-04-01

    We developed a non-magnetic positron-emission tomography (PET) device based on the rat conscious animal PET that operates in a small-animal magnetic resonance imaging (MRI) scanner, thereby enabling us to carry out simultaneous PET/MRI studies. The PET detector comprises 12 detector blocks, each being a 4 × 8 array of lutetium oxyorthosilicate crystals (2.22 × 2.22 × 5 mm3) coupled to a matching non-magnetic avalanche photodiode array. The detector blocks, housed in a plastic case, form a 38 mm inner diameter ring with an 18 mm axial extent. Custom-built MRI coils fit inside the PET device, operating in transceiver mode. The PET insert is integrated with a Bruker 9.4 T 210 mm clear-bore diameter MRI scanner. We acquired simultaneous PET/MR images of phantoms, of in vivo rat brain, and of cardiac-gated mouse heart using [11C]raclopride and 2-deoxy-2-[18F]fluoro-d-glucose PET radiotracers. There was minor interference between the PET electronics and the MRI during simultaneous operation, and small effects on the signal-to-noise ratio in the MR images in the presence of the PET, but no noticeable visual artifacts. Gradient echo and high-duty-cycle spin echo radio frequency (RF) pulses resulted in a 7% and a 28% loss in PET counts, respectively, due to high PET counts during the RF pulses that had to be gated out. The calibration of the activity concentration of PET data during MR pulsing is reproducible within less than 6%. Our initial results demonstrate the feasibility of performing simultaneous PET and MRI studies in adult rats and mice using the same PET insert in a small-bore 9.4 T MRI.

  11. Role of Ongoing, Intrinsic Activity of Neuronal Populations for Quantitative Neuroimaging of Functional Magnetic Resonance Imaging–Based Networks

    PubMed Central

    Herman, Peter; Sanganahalli, Basavaraju G.; Coman, Daniel; Blumenfeld, Hal; Rothman, Douglas L.

    2011-01-01

A primary objective in neuroscience is to determine how neuronal populations process information within networks. In humans and animal models, functional magnetic resonance imaging (fMRI) is gaining increasing popularity for network mapping. Although neuroimaging with fMRI—conducted with or without tasks—is actively discovering new brain networks, current fMRI data analysis schemes disregard the importance of the total neuronal activity in a region. In task fMRI experiments, the baseline is differenced away to disclose areas of small evoked changes in the blood oxygenation level-dependent (BOLD) signal. In resting-state fMRI experiments, the spotlight is on regions revealed by correlations of tiny fluctuations in the baseline (or spontaneous) BOLD signal. Interpretation of fMRI-based networks is obscured further because the BOLD signal reflects neuronal activity only indirectly, and difference/correlation maps are thresholded. Since the small changes of BOLD signal typically observed in cognitive fMRI experiments represent a minimal fraction of the total energy/activity in a given area, the relevance of fMRI-based networks is uncertain, because the majority of neuronal energy/activity is ignored. An alternative for quantitative neuroimaging of fMRI-based networks is therefore a perspective in which the activity of a neuronal population is accounted for by the demanded oxidative energy (CMRO2). In this article, we argue that network mapping can be improved by including the total neuronal energy/activity underlying both the baseline and the small differences/fluctuations of the BOLD signal. This total energy/activity information can be obtained through use of calibrated fMRI to quantify differences of ΔCMRO2 and through resting-state positron emission tomography/magnetic resonance spectroscopy measurements of average CMRO2. PMID:22433047

  12. Improved spatial accuracy of functional maps in the rat olfactory bulb using supervised machine learning approach.

    PubMed

    Murphy, Matthew C; Poplawsky, Alexander J; Vazquez, Alberto L; Chan, Kevin C; Kim, Seong-Gi; Fukuda, Mitsuhiro

    2016-08-15

    Functional MRI (fMRI) is a popular and important tool for noninvasive mapping of neural activity. As fMRI measures the hemodynamic response, the resulting activation maps do not perfectly reflect the underlying neural activity. The purpose of this work was to design a data-driven model to improve the spatial accuracy of fMRI maps in the rat olfactory bulb. This system is an ideal choice for this investigation since the bulb circuit is well characterized, allowing for an accurate definition of activity patterns in order to train the model. We generated models for both cerebral blood volume weighted (CBVw) and blood oxygen level dependent (BOLD) fMRI data. The results indicate that the spatial accuracy of the activation maps is either significantly improved or at worst not significantly different when using the learned models compared to a conventional general linear model approach, particularly for BOLD images and activity patterns involving deep layers of the bulb. Furthermore, the activation maps computed by CBVw and BOLD data show increased agreement when using the learned models, lending more confidence to their accuracy. The models presented here could have an immediate impact on studies of the olfactory bulb, but perhaps more importantly, demonstrate the potential for similar flexible, data-driven models to improve the quality of activation maps calculated using fMRI data. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. An EEG Finger-Print of fMRI deep regional activation.

    PubMed

    Meir-Hasson, Yehudit; Kinreich, Sivan; Podlipsky, Ilana; Hendler, Talma; Intrator, Nathan

    2014-11-15

This work introduces a general framework for producing an EEG Finger-Print (EFP) which can be used to predict specific brain activity, as measured by fMRI, at a given deep region. This new approach allows for improved EEG spatial resolution based on simultaneous fMRI activity measurements. Advanced signal processing and machine learning methods were applied to EEG data acquired simultaneously with fMRI during relaxation training guided by on-line continuous feedback on a changing alpha/theta EEG measure. We focused on demonstrating improved EEG prediction of activation in sub-cortical regions such as the amygdala. Our analysis shows that a ridge regression model based on a time/frequency representation of EEG data from a single electrode can predict amygdala-related activity significantly better than a traditional theta/alpha measure sampled from the best electrode and, in about one-third of cases, significantly better than a linear combination of frequencies with a pre-defined delay. The far-reaching goal of our approach is to reduce the need for fMRI scanning when probing specific sub-cortical regions such as the amygdala as the basis for brain-training procedures. Conversely, activity in those regions can be characterized with higher temporal resolution than is obtained by fMRI alone, thus revealing additional information about their processing mode. Copyright © 2013 Elsevier Inc. All rights reserved.
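The paper's exact pipeline is not reproduced in the abstract; its core idea, predicting a target signal from a time/frequency feature vector with ridge regression, can be sketched in a few lines of numpy. The data below are synthetic stand-ins, not EEG or fMRI recordings.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Synthetic stand-in: 200 samples of a 40-dimensional time/frequency
# feature vector, and a target signal generated from a known weighting.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 40))
w_true = rng.standard_normal(40)
y = X @ w_true + 0.1 * rng.standard_normal(200)

w_hat = ridge_fit(X, y, lam=0.5)
corr = float(np.corrcoef(X @ w_hat, y)[0, 1])  # in-sample fit quality
```

The regularization parameter `lam` trades bias against variance; in a real EFP pipeline it would be chosen by cross-validation.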

  14. Obtaining Accurate Probabilities Using Classifier Calibration

    ERIC Educational Resources Information Center

    Pakdaman Naeini, Mahdi

    2016-01-01

    Learning probabilistic classification and prediction models that generate accurate probabilities is essential in many prediction and decision-making tasks in machine learning and data mining. One way to achieve this goal is to post-process the output of classification models to obtain more accurate probabilities. These post-processing methods are…
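The record is truncated, so the specific post-processing method is not stated; one of the simplest calibrators of the kind this line of work builds on is histogram binning, sketched below on synthetic, deliberately distorted scores.

```python
import numpy as np

def fit_histogram_binning(scores, labels, n_bins=10):
    """Learn a per-bin empirical positive rate from held-out data."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bins = np.clip(np.digitize(scores, edges) - 1, 0, n_bins - 1)
    rates = np.full(n_bins, 0.5)
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            rates[b] = labels[mask].mean()
    return edges, rates

def apply_histogram_binning(scores, edges, rates):
    """Replace each raw score by its bin's empirical positive rate."""
    bins = np.clip(np.digitize(scores, edges) - 1, 0, len(rates) - 1)
    return rates[bins]

# Synthetic miscalibrated classifier: raw scores are the squared true
# probabilities, so they systematically understate the positive rate.
rng = np.random.default_rng(1)
true_p = rng.uniform(0.0, 1.0, 5000)
labels = (rng.uniform(0.0, 1.0, 5000) < true_p).astype(float)
scores = true_p ** 2

edges, rates = fit_histogram_binning(scores, labels)
calibrated = apply_histogram_binning(scores, edges, rates)
brier_raw = float(np.mean((scores - labels) ** 2))
brier_cal = float(np.mean((calibrated - labels) ** 2))  # lower is better
```

After binning, the Brier score of the calibrated probabilities drops relative to the raw scores, which is the behavior a calibration method is judged on.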

  15. The Alzheimer's Disease Neuroimaging Initiative (ADNI): MRI Methods

    PubMed Central

    Jack, Clifford R.; Bernstein, Matt A.; Fox, Nick C.; Thompson, Paul; Alexander, Gene; Harvey, Danielle; Borowski, Bret; Britson, Paula J.; Whitwell, Jennifer L.; Ward, Chadwick; Dale, Anders M.; Felmlee, Joel P.; Gunter, Jeffrey L.; Hill, Derek L.G.; Killiany, Ron; Schuff, Norbert; Fox-Bosetti, Sabrina; Lin, Chen; Studholme, Colin; DeCarli, Charles S.; Krueger, Gunnar; Ward, Heidi A.; Metzger, Gregory J.; Scott, Katherine T.; Mallozzi, Richard; Blezek, Daniel; Levy, Joshua; Debbins, Josef P.; Fleisher, Adam S.; Albert, Marilyn; Green, Robert; Bartzokis, George; Glover, Gary; Mugler, John; Weiner, Michael W.

    2008-01-01

The Alzheimer's Disease Neuroimaging Initiative (ADNI) is a longitudinal multisite observational study of healthy elders, mild cognitive impairment (MCI), and Alzheimer's disease. Magnetic resonance imaging (MRI), (18F)-fluorodeoxyglucose positron emission tomography (FDG PET), urine, serum, and cerebrospinal fluid (CSF) biomarkers, as well as clinical/psychometric assessments, are acquired at multiple time points. All data will be cross-linked and made available to the general scientific community. The purpose of this report is to describe the MRI methods employed in ADNI. The ADNI MRI core established specifications that guided protocol development. A major effort was devoted to evaluating 3D T1-weighted sequences for morphometric analyses. Several options for this sequence were optimized for the relevant manufacturer platforms and then compared in a reduced-scale clinical trial. The protocol selected for the ADNI study includes: back-to-back 3D magnetization prepared rapid gradient echo (MP-RAGE) scans; B1-calibration scans when applicable; and an axial proton density-T2 dual contrast (i.e., echo) fast spin echo/turbo spin echo (FSE/TSE) for pathology detection. ADNI MRI methods seek to maximize scientific utility while minimizing the burden placed on participants. The approach taken in ADNI to standardization across sites and platforms of the MRI protocol, postacquisition corrections, and phantom-based monitoring of all scanners could be used as a model for other multisite trials. PMID:18302232

  16. Fronto-Temporal Connectivity Predicts ECT Outcome in Major Depression.

    PubMed

    Leaver, Amber M; Wade, Benjamin; Vasavada, Megha; Hellemann, Gerhard; Joshi, Shantanu H; Espinoza, Randall; Narr, Katherine L

    2018-01-01

    Electroconvulsive therapy (ECT) is arguably the most effective available treatment for severe depression. Recent studies have used MRI data to predict clinical outcome to ECT and other antidepressant therapies. One challenge facing such studies is selecting from among the many available metrics, which characterize complementary and sometimes non-overlapping aspects of brain function and connectomics. Here, we assessed the ability of aggregated, functional MRI metrics of basal brain activity and connectivity to predict antidepressant response to ECT using machine learning. A radial support vector machine was trained using arterial spin labeling (ASL) and blood-oxygen-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) metrics from n = 46 (26 female, mean age 42) depressed patients prior to ECT (majority right-unilateral stimulation). Image preprocessing was applied using standard procedures, and metrics included cerebral blood flow in ASL, and regional homogeneity, fractional amplitude of low-frequency modulations, and graph theory metrics (strength, local efficiency, and clustering) in BOLD data. A 5-repeated 5-fold cross-validation procedure with nested feature-selection validated model performance. Linear regressions were applied post hoc to aid interpretation of discriminative features. The range of balanced accuracy in models performing statistically above chance was 58-68%. Here, prediction of non-responders was slightly higher than for responders (maximum performance 74 and 64%, respectively). Several features were consistently selected across cross-validation folds, mostly within frontal and temporal regions. Among these were connectivity strength among: a fronto-parietal network [including left dorsolateral prefrontal cortex (DLPFC)], motor and temporal networks (near ECT electrodes), and/or subgenual anterior cingulate cortex (sgACC). 
Our data indicate that pattern classification of multimodal fMRI metrics can successfully predict ECT outcome, particularly for individuals who will not respond to treatment. Notably, connectivity with networks highly relevant to ECT and depression were consistently selected as important predictive features. These included the left DLPFC and the sgACC, which are both targets of other neurostimulation therapies for depression, as well as connectivity between motor and right temporal cortices near electrode sites. Future studies that probe additional functional and structural MRI metrics and other patient characteristics may further improve the predictive power of these and similar models.
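Balanced accuracy, the metric reported above, is simply the mean of per-class recalls; it matters here because responders and non-responders are unequally represented. A minimal numpy illustration with made-up counts:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; insensitive to class imbalance."""
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(recalls))

# Made-up counts: 40 non-responders (0), 6 responders (1),
# with only half of the responders correctly identified.
y_true = np.array([0] * 40 + [1] * 6)
y_pred = np.array([0] * 40 + [1] * 3 + [0] * 3)

plain_acc = float(np.mean(y_true == y_pred))  # inflated by imbalance
bal_acc = balanced_accuracy(y_true, y_pred)   # (1.0 + 0.5) / 2 = 0.75
```

Plain accuracy rewards always predicting the majority class; balanced accuracy does not, which is why it is the fairer yardstick for responder prediction.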

  17. Man versus machine: comparison of radiologists' interpretations and NeuroQuant® volumetric analyses of brain MRIs in patients with traumatic brain injury.

    PubMed

    Ross, David E; Ochs, Alfred L; Seabaugh, Jan M; Shrader, Carole R

    2013-01-01

    NeuroQuant® is a recently developed, FDA-approved software program for measuring brain MRI volume in clinical settings. The purpose of this study was to compare NeuroQuant with the radiologist's traditional approach, based on visual inspection, in 20 outpatients with mild or moderate traumatic brain injury (TBI). Each MRI was analyzed with NeuroQuant, and the resulting volumetric analyses were compared with the attending radiologist's interpretation. The radiologist's traditional approach found atrophy in 10.0% of patients; NeuroQuant found atrophy in 50.0% of patients. NeuroQuant was more sensitive for detecting brain atrophy than the traditional radiologist's approach.

  18. Machine processing of ERTS and ground truth data

    NASA Technical Reports Server (NTRS)

    Rogers, R. H. (Principal Investigator); Peacock, K.

    1973-01-01

The author has identified the following significant results. Results achieved by ERTS-Atmospheric Experiment PR303, whose objective is to establish a radiometric calibration technique, are reported. This technique, which determines and removes solar and atmospheric parameters that degrade the radiometric fidelity of ERTS-1 data, transforms the ERTS-1 sensor radiance measurements to absolute target reflectance signatures. A radiant power measuring instrument and its use in determining atmospheric parameters needed for ground truth are discussed. The procedures used and results achieved in machine processing ERTS-1 computer-compatible tapes and atmospheric parameters to obtain target reflectance are reviewed.

  19. Graph theory for feature extraction and classification: a migraine pathology case study.

    PubMed

    Jorge-Hernandez, Fernando; Garcia Chimeno, Yolanda; Garcia-Zapirain, Begonya; Cabrera Zubizarreta, Alberto; Gomez Beldarrain, Maria Angeles; Fernandez-Ruanova, Begonya

    2014-01-01

Graph theory is widely used to represent and characterize brain connectivity networks, as is machine learning for classifying groups based on the features extracted from images. Many of these studies use different techniques, such as preprocessing, correlations, features or algorithms. This paper proposes an automatic tool that performs a standard process on magnetic resonance imaging (MRI) images. The process includes pre-processing, building the graph per subject with different correlations, atlas selection, extraction of features deemed relevant in the literature, and finally a set of machine learning algorithms which can produce analyzable results for physicians or specialists. In order to verify the process, a set of images from prescription drug abusers and patients with migraine was used. In this way, the proper functioning of the tool was demonstrated, achieving success rates of 87% and 92% depending on the classifier used.
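The abstract does not list the specific graph features the tool extracts; two common candidates are node degree and the clustering coefficient computed from a thresholded correlation matrix. A numpy sketch on a toy 4-node network (not the study's data):

```python
import numpy as np

def binarize(corr, threshold=0.3):
    """Adjacency matrix from a correlation matrix (self-loops removed)."""
    A = (np.abs(corr) >= threshold).astype(float)
    np.fill_diagonal(A, 0.0)
    return A

def degree(A):
    """Number of connections per node."""
    return A.sum(axis=1)

def clustering_coefficient(A):
    """Per-node fraction of closed triangles in a binary undirected graph."""
    triangles = np.diag(A @ A @ A) / 2.0   # closed length-3 walks / 2
    k = degree(A)
    possible = k * (k - 1) / 2.0
    return np.where(possible > 0,
                    triangles / np.where(possible > 0, possible, 1.0),
                    0.0)

# Toy network: triangle (0,1,2) plus a pendant node 3 attached to node 0
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
c = clustering_coefficient(A)  # node 1 -> 1.0, node 0 -> 1/3, node 3 -> 0.0
```

Such per-node values, concatenated or averaged across atlas regions, are the kind of feature vector a downstream classifier would consume.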

  20. Generic decoding of seen and imagined objects using hierarchical visual features.

    PubMed

    Horikawa, Tomoyasu; Kamitani, Yukiyasu

    2017-05-22

    Object recognition is a key function in both human and machine vision. While brain decoding of seen and imagined objects has been achieved, the prediction is limited to training examples. We present a decoding approach for arbitrary objects using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those derived from a deep convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower-/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, decoding of imagined objects reveals progressive recruitment of higher-to-lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval.

  1. SU-F-J-93: Automated Segmentation of High-Resolution 3D WholeBrain Spectroscopic MRI for Glioblastoma Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreibmann, E; Shu, H; Cordova, J

Purpose: We report on an automated segmentation algorithm for defining radiation therapy target volumes using spectroscopic MR images (sMRI) acquired at a nominal voxel resolution of 100 microliters. Methods: Whole-brain sMRI combining 3D echo-planar spectroscopic imaging, generalized auto-calibrating partially-parallel acquisitions, and elliptical k-space encoding was conducted on a 3T MRI scanner with a 32-channel head coil array. Metabolite maps generated include choline (Cho), creatine (Cr), and N-acetylaspartate (NAA), as well as Cho/NAA, Cho/Cr, and NAA/Cr ratio maps. Automated segmentation was achieved by concomitantly considering sMRI metabolite maps with standard contrast-enhancing (CE) imaging in a pipeline that first uses the water signal for skull stripping. Subsequently, an initial blob of the tumor region is identified by searching for regions of FLAIR abnormalities that also display reduced NAA activity, using a mean ratio correlation and morphological filters. These regions are used as the starting point for a geodesic level-set refinement that adapts the initial blob to the fine details specific to each metabolite. Results: Accuracy of the segmentation model was tested on a cohort of 12 patients with sMRI datasets acquired pre-, mid- and post-treatment, providing a broad range of enhancement patterns. Whereas heterogeneity in tumor appearance and shape across patients posed a greater challenge to the algorithm in classical imaging, regions of abnormal activity were readily detected in the sMRI metabolite maps by combining the detail available in standard imaging with the local enhancement produced by the metabolites. Results can be imported into treatment planning, leading in general to an increase in the target volumes (GTV60) when using sMRI+CE MRI compared with standard CE MRI alone. Conclusion: Integration of automated segmentation of sMRI metabolite maps into planning is feasible and will likely streamline acceptance of this new acquisition modality in clinical practice.

  2. Normal tissue complication probability (NTCP) modelling using spatial dose metrics and machine learning methods for severe acute oral mucositis resulting from head and neck radiotherapy.

    PubMed

    Dean, Jamie A; Wong, Kee H; Welsh, Liam C; Jones, Ann-Britt; Schick, Ulrike; Newbold, Kate L; Bhide, Shreerang A; Harrington, Kevin J; Nutting, Christopher M; Gulliford, Sarah L

    2016-07-01

    Severe acute mucositis commonly results from head and neck (chemo)radiotherapy. A predictive model of mucositis could guide clinical decision-making and inform treatment planning. We aimed to generate such a model using spatial dose metrics and machine learning. Predictive models of severe acute mucositis were generated using radiotherapy dose (dose-volume and spatial dose metrics) and clinical data. Penalised logistic regression, support vector classification and random forest classification (RFC) models were generated and compared. Internal validation was performed (with 100-iteration cross-validation), using multiple metrics, including area under the receiver operating characteristic curve (AUC) and calibration slope, to assess performance. Associations between covariates and severe mucositis were explored using the models. The dose-volume-based models (standard) performed equally to those incorporating spatial information. Discrimination was similar between models, but the RFCstandard had the best calibration. The mean AUC and calibration slope for this model were 0.71 (s.d.=0.09) and 3.9 (s.d.=2.2), respectively. The volumes of oral cavity receiving intermediate and high doses were associated with severe mucositis. The RFCstandard model performance is modest-to-good, but should be improved, and requires external validation. Reducing the volumes of oral cavity receiving intermediate and high doses may reduce mucositis incidence. Copyright © 2016 The Author(s). Published by Elsevier Ireland Ltd.. All rights reserved.
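The calibration slope reported above is conventionally obtained by refitting a logistic model of the observed outcomes on the logit of the predicted probabilities; a slope near 1 indicates good calibration, while values far from 1 indicate miscalibration. A self-contained numpy sketch on synthetic, well-calibrated predictions (not the study's data):

```python
import numpy as np

def logit(p):
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return np.log(p / (1 - p))

def calibration_slope(y, p_hat, lr=0.1, n_iter=5000):
    """Fit y ~ sigmoid(a + b * logit(p_hat)) by gradient descent.
    The slope b is ~1 for well-calibrated predictions."""
    x = logit(p_hat)
    a, b = 0.0, 1.0
    for _ in range(n_iter):
        pred = 1.0 / (1.0 + np.exp(-(a + b * x)))
        a -= lr * np.mean(pred - y)
        b -= lr * np.mean((pred - y) * x)
    return b

# Synthetic well-calibrated predictions: outcome drawn with probability p
rng = np.random.default_rng(2)
p = rng.uniform(0.05, 0.95, 4000)
y = (rng.uniform(0, 1, 4000) < p).astype(float)
slope = calibration_slope(y, p)  # expected to be close to 1
```

Feeding this routine systematically over- or under-dispersed probabilities would move the slope away from 1, which is how a value like the reported 3.9 signals a calibration problem.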

  3. Light-Field Correction for Spatial Calibration of Optical See-Through Head-Mounted Displays.

    PubMed

    Itoh, Yuta; Klinker, Gudrun

    2015-04-01

A critical requirement for AR applications with Optical See-Through Head-Mounted Displays (OST-HMD) is to project 3D information correctly into the current viewpoint of the user - more particularly, according to the user's eye position. Recently-proposed interaction-free calibration methods [16], [17] automatically estimate this projection by tracking the user's eye position, thereby freeing users from tedious manual calibrations. However, these methods are still prone to systematic calibration errors. Such errors stem from eye-/HMD-related factors and are not represented in the conventional eye-HMD model used for HMD calibration. This paper investigates one of these factors - the fact that optical elements of OST-HMDs distort incoming world-light rays before they reach the eye, just as corrective glasses do. Any OST-HMD requires an optical element to display a virtual screen. Each such optical element has different distortions. Since users see a distorted world through the element, ignoring this distortion degrades the projection quality. We propose a light-field correction method, based on a machine learning technique, which compensates for the world-scene distortion caused by OST-HMD optics. We demonstrate that our method reduces the systematic error and significantly increases the calibration accuracy of the interaction-free calibration.

  4. Multi-class SVM model for fMRI-based classification and grading of liver fibrosis

    NASA Astrophysics Data System (ADS)

    Freiman, M.; Sela, Y.; Edrei, Y.; Pappo, O.; Joskowicz, L.; Abramovitch, R.

    2010-03-01

We present a novel non-invasive automatic method for the classification and grading of liver fibrosis from fMRI maps based on hepatic hemodynamic changes. This method automatically creates a model for liver fibrosis grading based on training datasets. Our supervised learning method evaluates hepatic hemodynamics from an anatomical MRI image and three T2*-W fMRI signal intensity time-course scans acquired during the breathing of air, air-carbon dioxide, and carbogen. It constructs a statistical model of liver fibrosis from these fMRI scans using a binary-based one-against-all multi-class Support Vector Machine (SVM) classifier. We evaluated the resulting classification model with the leave-one-out technique and compared it to both full multi-class SVM and K-Nearest Neighbor (KNN) classifications. Our experimental study analyzed 57 slice sets from 13 mice, and yielded a 98.2% separation accuracy between healthy and low grade fibrotic subjects, and an overall accuracy of 84.2% for fibrosis grading. These results are better than the existing image-based methods which can only discriminate between healthy and high grade fibrosis subjects. With appropriate extensions, our method may be used for non-invasive classification and progression monitoring of liver fibrosis in human patients instead of more invasive approaches, such as biopsy or contrast-enhanced imaging.
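The one-against-all decomposition used here trains one binary scorer per class and predicts the class whose scorer responds most strongly. As a hedged sketch of the scheme (substituting regularized least-squares scorers for SVMs, and synthetic 2-D data for fMRI features):

```python
import numpy as np

def _augment(X):
    """Append a constant column so each scorer gets a bias term."""
    return np.hstack([X, np.ones((X.shape[0], 1))])

def train_one_vs_all(X, y, n_classes, lam=1e-3):
    """One regularized least-squares scorer per class (targets +1/-1)."""
    Xb = _augment(X)
    G = Xb.T @ Xb + lam * np.eye(Xb.shape[1])
    W = np.zeros((n_classes, Xb.shape[1]))
    for c in range(n_classes):
        t = np.where(y == c, 1.0, -1.0)   # this class vs. all the rest
        W[c] = np.linalg.solve(G, Xb.T @ t)
    return W

def predict_one_vs_all(X, W):
    """The class whose scorer responds most strongly wins."""
    return np.argmax(_augment(X) @ W.T, axis=1)

# Synthetic 2-D data standing in for three fibrosis "grades"
rng = np.random.default_rng(3)
means = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
X = np.vstack([m + 0.5 * rng.standard_normal((60, 2)) for m in means])
y = np.repeat(np.arange(3), 60)

W = train_one_vs_all(X, y, 3)
acc = float(np.mean(predict_one_vs_all(X, W) == y))
```

Replacing each least-squares scorer with a trained binary SVM recovers the one-against-all multi-class SVM the paper describes; only the per-class scorer changes, not the decomposition.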

  5. The radiation metrology network related to the field of mammography: implementation and uncertainty analysis of the calibration system

    NASA Astrophysics Data System (ADS)

    Peixoto, J. G. P.; de Almeida, C. E.

    2001-09-01

It is recognized by the international guidelines that it is necessary to offer calibration services for mammography beams in order to improve the quality of clinical diagnosis. Major efforts have been made by several laboratories to establish an appropriate and traceable calibration infrastructure and to provide the basis for a quality control programme in mammography. The contribution of the radiation metrology network to the users of mammography is reviewed in this work. The steps required for implementing a mammography calibration system using a constant-potential x-ray unit and a clinical mammography x-ray machine are also presented. The various qualities of mammography radiation discussed in this work are in accordance with the IEC 61674 and the AAPM recommendations. They are at present available at several primary standard dosimetry laboratories (PSDLs), namely the PTB, NIST and BEV, and a few secondary standard dosimetry laboratories (SSDLs), such as at the University of Wisconsin and at the IAEA's SSDL. We discuss the uncertainties involved in all steps of the calibration chain in accordance with the ISO recommendations.

  6. A theoretical framework to model DSC-MRI data acquired in the presence of contrast agent extravasation

    NASA Astrophysics Data System (ADS)

    Quarles, C. C.; Gochberg, D. F.; Gore, J. C.; Yankeelov, T. E.

    2009-10-01

Dynamic susceptibility contrast (DSC) MRI methods rely on compartmentalization of the contrast agent such that a susceptibility gradient can be induced between the contrast-containing compartment and adjacent spaces, such as between intravascular and extravascular spaces. When there is a disruption of the blood-brain barrier, as is frequently the case with brain tumors, the contrast agent leaks out of the vasculature, resulting in additional T1, T2 and T2* relaxation effects in the extravascular space, thereby affecting the signal intensity time course and reducing the reliability of the computed hemodynamic parameters. In this study, a theoretical model describing these dynamic intra- and extravascular T1, T2 and T2* relaxation interactions is proposed. The applicability of using the proposed model to investigate the influence of relevant MRI pulse sequence (e.g. echo time, flip angle), physical (e.g. susceptibility calibration factors, pre-contrast relaxation rates) and physiological parameters (e.g. permeability, blood flow, compartmental volume fractions) on DSC-MRI signal time curves is demonstrated. Such a model could yield important insights into the biophysical basis of contrast-agent-extravasation-induced effects on measured DSC-MRI signals and provide a means to investigate pulse sequence optimization and appropriate data analysis methods for the extraction of physiologically relevant imaging metrics.

  7. Long-term reproducibility of phantom signal intensities in nonuniformity corrected STIR-MRI examinations of skeletal muscle.

    PubMed

    Viddeleer, Alain R; Sijens, Paul E; van Ooijen, Peter M A; Kuypers, Paul D L; Hovius, Steven E R; Oudkerk, Matthijs

    2009-08-01

Nerve regeneration could be monitored by comparing MRI image intensities over time, as denervated muscles display increased signal intensity in STIR sequences. In this study, long-term reproducibility of STIR image intensity was assessed under clinical conditions, and the required image intensity nonuniformity correction was improved by using phantom scans obtained at multiple positions. Three-dimensional image intensity nonuniformity was investigated in phantom scans. Next, over a three-year period, 190 clinical STIR hand scans were obtained using a standardized acquisition protocol and corrected for intensity nonuniformity using the results of phantom scanning. The results of correction with 1, 3, and 11 phantom scans were compared. The image intensities in calibration tubes close to the hands were measured each time to determine the reproducibility of our method. With calibration, the reproducibility of STIR image intensity improved from 7.8% to 6.4%. Image intensity nonuniformity correction with 11 phantom scans gave significantly better results than correction with 1 or 3 scans. The image intensities in clinical STIR images acquired at different times can be compared directly, provided that the acquisition protocol is standardized and that nonuniformity correction is applied. Nonuniformity correction is preferably based on multiple phantom scans.
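A minimal sketch of phantom-based multiplicative nonuniformity correction, assuming (as the study's design suggests but does not fully specify) that the bias field is estimated by averaging several phantom scans and normalizing to unit mean. The 64×64 arrays below are synthetic, not the study's data:

```python
import numpy as np

def bias_field_from_phantoms(phantom_scans):
    """Average phantom scans and normalize to unit mean, estimating the
    scanner's multiplicative intensity nonuniformity."""
    field = np.mean(phantom_scans, axis=0)
    return field / field.mean()

def correct(image, field, eps=1e-8):
    """Divide out the estimated bias field."""
    return image / (field + eps)

# Synthetic example: a uniform object modulated by a smooth bias field
x = np.linspace(-1.0, 1.0, 64)
falloff = np.exp(-x ** 2)
bias = 1.0 + 0.3 * np.outer(falloff, falloff)   # coil-like shading
true_img = np.full((64, 64), 100.0)
observed = true_img * bias                      # what the scanner records

rng = np.random.default_rng(4)
phantoms = [bias * 50.0 * (1.0 + 0.01 * rng.standard_normal((64, 64)))
            for _ in range(11)]                 # 11 noisy phantom scans
field = bias_field_from_phantoms(phantoms)
corrected = correct(observed, field)            # near-uniform again
```

Averaging 11 scans suppresses per-scan noise in the field estimate, which is consistent with the study's finding that 11 phantom scans outperform 1 or 3.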

  8. Non-invasive estimate of blood glucose and blood pressure from a photoplethysmograph by means of machine learning techniques.

    PubMed

    Monte-Moreno, Enric

    2011-10-01

This work presents a system for a simultaneous non-invasive estimate of the blood glucose level (BGL) and the systolic (SBP) and diastolic (DBP) blood pressure, using a photoplethysmograph (PPG) and machine learning techniques. The method is independent of the person whose values are being measured and does not need calibration over time or subjects. The architecture of the system consists of a photoplethysmograph sensor, an activity detection module, a signal processing module that extracts features from the PPG waveform, and a machine learning algorithm that estimates the SBP, DBP and BGL values. The idea that underlies the system is that there is a functional relationship between the shape of the PPG waveform and the blood pressure and glucose levels. As described in this paper, we tested this method on 410 individuals without performing any personalized calibration. The results were computed after cross validation. The machine learning techniques tested were: ridge linear regression, a multilayer perceptron neural network, support vector machines and random forests. The best results were obtained with the random forest technique. In the case of blood pressure, the resulting coefficients of determination for reference vs. prediction were R(SBP)(2)=0.91, R(DBP)(2)=0.89, and R(BGL)(2)=0.90. For the glucose estimation, the distribution of the points on a Clarke error grid placed 87.7% of points in zone A, 10.3% in zone B, and 1.9% in zone D. Blood pressure values complied with the grade B protocol of the British Hypertension Society. An effective system for estimating blood glucose and blood pressure from a photoplethysmograph is presented. The main advantage of the system is that, for clinical use, it complies with the grade B protocol of the British Hypertension Society for blood pressure and failed to detect hypoglycemia or hyperglycemia in only 1.9% of cases. Copyright © 2011 Elsevier B.V. All rights reserved.
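The coefficients of determination quoted above compare reference against predicted values; R² is straightforward to compute directly. The values below are synthetic stand-ins, not the paper's measurements:

```python
import numpy as np

def r_squared(y_ref, y_pred):
    """Coefficient of determination of predictions against a reference."""
    ss_res = np.sum((y_ref - y_pred) ** 2)
    ss_tot = np.sum((y_ref - np.mean(y_ref)) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic stand-in: "reference" SBP values (mmHg) and predictions
# carrying roughly 5 mmHg of additive error.
rng = np.random.default_rng(5)
sbp_ref = rng.uniform(100.0, 160.0, 300)
sbp_pred = sbp_ref + rng.normal(0.0, 5.0, 300)
r2 = r_squared(sbp_ref, sbp_pred)
```

Note that R² depends on the spread of the reference values as well as the prediction error, so it should be read alongside absolute-error measures such as the Clarke error grid used for glucose.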

  9. A semi-supervised Support Vector Machine model for predicting the language outcomes following cochlear implantation based on pre-implant brain fMRI imaging.

    PubMed

    Tan, Lirong; Holland, Scott K; Deshpande, Aniruddha K; Chen, Ye; Choo, Daniel I; Lu, Long J

    2015-12-01

    We developed a machine learning model to predict whether or not a cochlear implant (CI) candidate will develop effective language skills within 2 years after the CI surgery by using the pre-implant brain fMRI data from the candidate. The language performance was measured 2 years after the CI surgery by the Clinical Evaluation of Language Fundamentals-Preschool, Second Edition (CELF-P2). Based on the CELF-P2 scores, the CI recipients were designated as either effective or ineffective CI users. For feature extraction from the fMRI data, we constructed contrast maps using the general linear model, and then utilized the Bag-of-Words (BoW) approach that we previously published to convert the contrast maps into feature vectors. We trained both supervised models and semi-supervised models to classify CI users as effective or ineffective. Compared with the conventional feature extraction approach, which used each single voxel as a feature, our BoW approach gave rise to much better performance for the classification of effective versus ineffective CI users. The semi-supervised model with the feature set extracted by the BoW approach from the contrast of speech versus silence achieved a leave-one-out cross-validation AUC as high as 0.97. Recursive feature elimination unexpectedly revealed that two features were sufficient to provide highly accurate classification of effective versus ineffective CI users based on our current dataset. We have validated the hypothesis that pre-implant cortical activation patterns revealed by fMRI during infancy correlate with language performance 2 years after cochlear implantation. The two brain regions highlighted by our classifier are potential biomarkers for the prediction of CI outcomes. Our study also demonstrated the superiority of the semi-supervised model over the supervised model. It is always worthwhile to try a semi-supervised model when unlabeled data are available.
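The Bag-of-Words step assigns each local descriptor to its nearest codebook word and histograms the assignments into a fixed-length feature vector. A minimal numpy sketch with a hypothetical 3-word codebook (the codebook and descriptors below are illustrative, not from the paper):

```python
import numpy as np

def bow_features(descriptors, codebook):
    """Assign each descriptor to its nearest codeword and return the
    normalized histogram of codeword counts (the BoW vector)."""
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :],
                           axis=2)                  # (n_desc, n_words)
    words = np.argmin(dists, axis=1)                # nearest-word index
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Hypothetical codebook of 3 "words" in a 2-D descriptor space
codebook = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
descriptors = np.array([[0.1, -0.2], [4.8, 0.3], [5.2, -0.1], [0.2, 4.9]])
v = bow_features(descriptors, codebook)   # -> [0.25, 0.5, 0.25]
```

Because the histogram length equals the codebook size regardless of how many voxels or patches a contrast map contains, BoW yields a compact, fixed-dimensional input for the downstream classifier, in contrast to the voxel-as-feature approach the paper compares against.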

  10. Magnetic Resonance Medical Imaging (MRI)-from the inside

    NASA Astrophysics Data System (ADS)

    Bottomley, Paul

    There are about 36,000 magnetic resonance imaging (MRI) scanners in the world, with annual sales of 2500. In the USA about 34 million MRI studies are done annually, and 60-70% of all scanners operate at 1.5 Tesla (T). In 1982 there were none. How MRI got to be, and how it got to 1.5 T, is the subject of this talk. It is an insider's view (mine), from a physics PhD student at Nottingham University when MRI (almost) began, through to the invention of the 1.5 T clinical MRI scanner at GE's research center in Schenectady, NY. Before 1977 all MRI was done on laboratory nuclear magnetic resonance instruments used for analyzing small specimens via chemical shift spectroscopy (MRS). It began with Lauterbur's 1973 observation that turning up the spectrometer's linear gradient magnetic field generated a spectrum that was a 1D projection of the sample in the direction of the gradient. What followed in the 1970s was the development of three key methods of 3D spatial localization that remain fundamental to MRI today. As the 1980s began, the once unimaginable upscaling from 2 cm test tubes to human body-sized magnet, gradient and RF transmit/receive systems was well underway, evolving from arm-sized to whole-body electromagnet-based systems operating at <0.2 T. I moved to Johns Hopkins University to apply MRI methods to localized MRS and study cardiac metabolism, and then to GE to build a whole-body MRS machine. The largest uniform magnet then possible, a 1.5 T superconducting system, was required. Body MRI was first thought impossible above 0.35 T because of RF penetration, detector coil and signal-to-noise ratio (SNR) issues. When GE finally did take on MRI, their plan was to drop the field to 0.3 T. We opted to make MRI work at 1.5 T instead. The result was a scanner that could study both anatomy and metabolism with an SNR well beyond its lower-field rivals. MRI's success truly reflects the team efforts of many: from the NMR physics to the engineering of magnets, gradient and RF systems.

  11. Detection of physiological noise in resting state fMRI using machine learning.

    PubMed

    Ash, Tom; Suckling, John; Walter, Martin; Ooi, Cinly; Tempelmann, Claus; Carpenter, Adrian; Williams, Guy

    2013-04-01

    We present a technique for predicting cardiac and respiratory phase on a time-point-by-time-point basis from fMRI image data. These predictions have utility in attempts to detrend effects of the physiological cycles from fMRI image data. We demonstrate the technique both in the case where it can be trained on a subject's own data, and when it cannot. The prediction scheme uses a multiclass support vector machine algorithm. Predictions are demonstrated to have a close fit to recorded physiological phase, with median Pearson correlation scores between recorded and predicted values of 0.99 for the best case scenario (cardiac cycle trained on a subject's own data) down to 0.83 for the worst case scenario (respiratory predictions trained on group data), as compared to a random chance correlation score of 0.70. When the predictions were used with RETROICOR, a popular physiological noise removal tool, the effects were compared to those obtained using recorded phase values. Using Fourier transforms and seed based correlation analysis, RETROICOR is shown to produce similar effects whether recorded physiological phase values are used, or they are predicted using this technique. This was seen as similar levels of noise reduction in the same regions of the Fourier spectra, and changes in seed based correlation scores in similar regions of the brain. This technique has a use in situations where data from direct monitoring of the cardiac and respiratory cycles are incomplete or absent, but researchers still wish to reduce this source of noise in the image data. Copyright © 2011 Wiley Periodicals, Inc.
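The Pearson correlation scores quoted above can be reproduced for any pair of recorded and predicted series with a few lines of standard-library Python. Note that real cardiac and respiratory phase is circular, so this plain linear correlation is only a sketch of the comparison, not the paper's exact procedure:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length series, e.g. recorded
    versus predicted physiological phase at each fMRI time point."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# a perfectly tracked phase series gives r = 1.0
print(pearson([0.1, 0.5, 0.9, 0.4], [0.1, 0.5, 0.9, 0.4]))  # 1.0
```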

  12. A Hybrid Hierarchical Approach for Brain Tissue Segmentation by Combining Brain Atlas and Least Square Support Vector Machine

    PubMed Central

    Kasiri, Keyvan; Kazemi, Kamran; Dehghani, Mohammad Javad; Helfroush, Mohammad Sadegh

    2013-01-01

    In this paper, we present a new semi-automatic brain tissue segmentation method based on a hybrid hierarchical approach that combines a brain atlas as a priori information and a least-square support vector machine (LS-SVM). The method consists of three steps. In the first two steps, the skull is removed and the cerebrospinal fluid (CSF) is extracted. These two steps are performed using FMRIB's automated segmentation tool integrated in the FSL software (FSL-FAST), developed at the Oxford Centre for Functional MRI of the Brain (FMRIB). Then, in the third step, the LS-SVM is used to segment grey matter (GM) and white matter (WM). The training samples for the LS-SVM are selected from the registered brain atlas. The voxel intensities and spatial positions are selected as the two feature groups for training and testing. SVM as a powerful discriminator is able to handle nonlinear classification problems; however, it cannot provide posterior probabilities. Thus, we use a sigmoid function to map the SVM output into probabilities. The proposed method is used to segment CSF, GM and WM from simulated magnetic resonance imaging (MRI) data generated with the BrainWeb MRI simulator and from real data provided by the Internet Brain Segmentation Repository. The semi-automatically segmented brain tissues were evaluated by comparison with the corresponding ground truth. The Dice and Jaccard similarity coefficients, sensitivity and specificity were calculated for the quantitative validation of the results. The quantitative results show that the proposed method segments brain tissues accurately with respect to the corresponding ground truth. PMID:24696800
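Two of the ingredients named in the abstract, the sigmoid mapping of SVM outputs to probabilities (Platt-style; the A and B parameters below are illustrative placeholders, not fitted values) and the Dice coefficient used for validation, can be sketched as:

```python
import math

def svm_output_to_probability(f, A=-1.0, B=0.0):
    """Platt-style sigmoid mapping of a raw SVM decision value f to a
    posterior probability.  A and B are hypothetical fitted parameters;
    in practice they are estimated from held-out decision values."""
    return 1.0 / (1.0 + math.exp(A * f + B))

def dice(seg, truth):
    """Dice similarity between two binary voxel masks (flat lists of 0/1):
    2|A∩B| / (|A| + |B|)."""
    inter = sum(s and t for s, t in zip(seg, truth))
    return 2.0 * inter / (sum(seg) + sum(truth))

print(svm_output_to_probability(0.0))   # 0.5: on the decision boundary
print(dice([1, 1, 0, 0], [1, 0, 0, 0]))
```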

  13. SU-E-J-198: Out-Of-Field Dose and Surface Dose Measurements of MRI-Guided Cobalt-60 Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamb, J; Agazaryan, N; Cao, M

    2015-06-15

    Purpose: To measure quantities of dosimetric interest in an MRI-guided cobalt radiotherapy machine that was recently introduced to clinical use. Methods: Out-of-field dose due to photon scatter and leakage was measured using an ion chamber and solid water slabs mimicking a human body. Surface dose was measured by irradiating stacks of radiochromic film and extrapolating to zero thickness. Electron out-of-field dose was characterized using solid water slabs and radiochromic film. Results: For some phantom geometries, up to 50% of Dmax was observed up to 10 cm laterally from the edge of the beam. The maximum penetration was between 1 and 2 mm in solid water, indicating an electron energy not greater than approximately 0.4 MeV. Out-of-field dose from photon scatter measured at 1 cm depth in solid water was found to fall to less than 10% of Dmax at a distance of 1.2 cm from the edge of a 10.5 × 10.5 cm field, and less than 1% of Dmax at a distance of 10 cm from the field edge. Surface dose was measured to be 8% of Dmax. Conclusion: Surface dose and out-of-field dose from the MRI-guided cobalt radiotherapy machine were measured and found to be within acceptable limits. Electron out-of-field dose, an effect unique to MRI-guided radiotherapy and presumed to arise from low-energy electrons trapped by the Lorentz force, was quantified. Dr. Low is a member of the scientific advisory board of ViewRay, Inc.

  14. Voxel-based automated detection of focal cortical dysplasia lesions using diffusion tensor imaging and T2-weighted MRI data.

    PubMed

    Wang, Yanming; Zhou, Yawen; Wang, Huijuan; Cui, Jin; Nguchu, Benedictor Alexander; Zhang, Xufei; Qiu, Bensheng; Wang, Xiaoxiao; Zhu, Mingwang

    2018-05-21

    The aim of this study was to automatically detect focal cortical dysplasia (FCD) lesions in patients with extratemporal lobe epilepsy by relying on diffusion tensor imaging (DTI) and T2-weighted magnetic resonance imaging (MRI) data. We implemented an automated classifier using voxel-based multimodal features to identify gray and white matter abnormalities of FCD in patient cohorts. In addition to the commonly used T2-weighted image intensity feature, DTI-based features were also utilized. A Gaussian processes for machine learning (GPML) classifier was tested on 12 patients with FCD (8 with histologically confirmed FCD) scanned at 1.5 T and cross-validated using a leave-one-out strategy. Moreover, we compared the multimodal GPML paradigm's performance with that of single-modal GPML and a classical support vector machine (SVM). Our results demonstrated that the GPML performance on DTI-based features (mean AUC = 0.63) matches the GPML performance on the T2-weighted image intensity feature (mean AUC = 0.64). More promisingly, GPML yielded significantly improved performance (mean AUC = 0.76) when the DTI-based features were added to the multimodal paradigm. Based on the results, it can also be clearly stated that the proposed GPML strategy performed better and is robust to unbalanced datasets, in contrast to the SVM, which performed poorly (AUC = 0.69). Therefore, the GPML paradigm using multimodal MRI data containing the DTI modality shows promising results for detection of FCD lesions and provides an effective direction for future research. Copyright © 2018 Elsevier Inc. All rights reserved.
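The AUC values reported above can be computed from per-subject leave-one-out scores and labels via the Mann-Whitney formulation (the probability that a positive case outranks a negative one); a minimal stdlib sketch:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic.
    scores: classifier outputs (e.g. leave-one-out predictive probabilities);
    labels: 1 for lesion-positive, 0 for negative."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    # count pairwise wins, with ties worth half a win
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# a perfectly separating classifier scores AUC = 1.0
print(auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # 1.0
```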

  15. Accuracy Study of a Robotic System for MRI-guided Prostate Needle Placement

    PubMed Central

    Seifabadi, Reza; Cho, Nathan BJ.; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Fichtinger, Gabor; Iordachita, Iulian

    2013-01-01

    Background: Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified, and minimized to the possible extent. Methods and Materials: The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called before-insertion error) and the error associated with needle-tissue interaction (called due-to-insertion error). The before-insertion error was measured directly in a soft phantom and the different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. Results: The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the super soft phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was approximated to be 2.13 mm, thus making the larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. Conclusions: The experimental methodology presented in this paper may help researchers to identify, quantify, and minimize the different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. 
In the robotic system analyzed here, the overall error of the studied system remained within the acceptable range. PMID:22678990
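The 2.13 mm figure follows directly from the orthogonality assumption stated in the abstract, overall² = before-insertion² + due-to-insertion², so the interaction error is recovered by subtraction in quadrature. A one-line check:

```python
import math

def due_to_insertion_error(overall, before_insertion):
    """Approximate the needle-tissue interaction error assuming the two
    components add orthogonally: overall^2 = before^2 + due_to_insertion^2."""
    return math.sqrt(overall ** 2 - before_insertion ** 2)

# 2.5 mm overall, 1.3 mm before insertion -> ~2.13 mm, as reported
print(due_to_insertion_error(2.5, 1.3))
```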

  16. Development and internal validation of a side-specific, multiparametric magnetic resonance imaging-based nomogram for the prediction of extracapsular extension of prostate cancer.

    PubMed

    Martini, Alberto; Gupta, Akriti; Lewis, Sara C; Cumarasamy, Shivaram; Haines, Kenneth G; Briganti, Alberto; Montorsi, Francesco; Tewari, Ashutosh K

    2018-04-19

    To develop a nomogram for predicting side-specific extracapsular extension (ECE) for planning nerve-sparing radical prostatectomy. We retrospectively analysed data from 561 patients who underwent robot-assisted radical prostatectomy between February 2014 and October 2015. To develop a side-specific predictive model, we considered the prostatic lobes separately. Four variables were included: prostate-specific antigen; highest ipsilateral biopsy Gleason grade; highest ipsilateral percentage core involvement; and ECE on multiparametric magnetic resonance imaging (mpMRI). A multivariable logistic regression analysis was fitted to predict side-specific ECE. A nomogram was built based on the coefficients of the logit function. Internal validation was performed using 'leave-one-out' cross-validation. Calibration was graphically investigated. Decision curve analysis was used to evaluate the net clinical benefit. The study population consisted of 829 side-specific cases, after excluding negative biopsy observations (n = 293). ECE was reported on mpMRI and final pathology in 115 (14%) and 142 (17.1%) cases, respectively. Among these, mpMRI was able to predict ECE correctly in 57 (40.1%) cases. All variables in the model except highest percentage core involvement were predictors of ECE (all P ≤ 0.006). All variables were considered for inclusion in the nomogram. After internal validation, the area under the curve was 82.11%. The model demonstrated excellent calibration and improved clinical risk prediction, especially when compared with relying on mpMRI prediction of ECE alone. When the nomogram-derived probability was retrospectively applied with a 20% threshold for performing nerve-sparing, nine out of 14 positive surgical margins (PSMs) at the site of ECE fell above the threshold. 
We developed an easy-to-use model for the prediction of side-specific ECE, and hope it serves as a tool for planning nerve-sparing radical prostatectomy and in the reduction of PSM in future series. © 2018 The Authors BJU International © 2018 BJU International Published by John Wiley & Sons Ltd.
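A side-specific nomogram of this kind is, at bottom, a logistic model over the four predictors listed above. The sketch below uses purely illustrative coefficients (not the published ones) together with the 20% nerve-sparing threshold mentioned in the abstract:

```python
import math

def ece_probability(psa, gleason_grade, pct_core, mpmri_ece,
                    coef=(0.05, 0.4, 0.02, 1.5), intercept=-4.0):
    """Hypothetical side-specific ECE nomogram: a logistic model over the
    four predictors named in the abstract.  Coefficients and intercept
    are illustrative placeholders, not the fitted values."""
    logit = (intercept + coef[0] * psa + coef[1] * gleason_grade
             + coef[2] * pct_core + coef[3] * mpmri_ece)
    return 1.0 / (1.0 + math.exp(-logit))

# nerve-sparing decision at the 20% threshold used in the abstract
p = ece_probability(psa=8.0, gleason_grade=3, pct_core=50, mpmri_ece=1)
print(p > 0.20)  # True with these illustrative numbers
```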

  17. Accuracy study of a robotic system for MRI-guided prostate needle placement.

    PubMed

    Seifabadi, Reza; Cho, Nathan B J; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M; Fichtinger, Gabor; Iordachita, Iulian

    2013-09-01

    Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified and minimized to the possible extent. The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called 'before-insertion error') and the error associated with needle-tissue interaction (called 'due-to-insertion error'). Before-insertion error was measured directly in a soft phantom and the different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the Super Soft plastic phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was found to be approximately 2.13 mm, thus making the larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. The experimental methodology presented in this paper may help researchers to identify, quantify and minimize the different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. 
In the robotic system analysed here, the overall error of the studied system remained within the acceptable range. Copyright © 2012 John Wiley & Sons, Ltd.

  18. Psychological reactions in women undergoing fetal magnetic resonance imaging.

    PubMed

    Leithner, Katharina; Pörnbacher, Susanne; Assem-Hilger, Eva; Krampl, Elisabeth; Ponocny-Seliger, Elisabeth; Prayer, Daniela

    2008-02-01

    To investigate women's psychological reactions when undergoing fetal magnetic resonance imaging (MRI), and to estimate whether certain groups, based on clinical and sociodemographic variables, differ in their subjective experiences with fetal MRI and in their anxiety levels related to the scanning procedure. This study is a prospective cohort investigation of 62 women before and immediately after fetal MRI. Anxiety levels and subjective experiences were measured by questionnaires. Groups based on clinical and sociodemographic variables were compared with regard to anxiety levels and to the scores on the Prescan and Postscan Imaging Distress Questionnaire. Anxiety scores before fetal MRI were 8.8 points higher than those of the female, nonclinical, normative population (P<.001). The severity of the referral diagnosis showed a linearly increasing effect on anxiety level before MRI (weighted linear term: F1,59=5.325, P=.025). Magnetic resonance imaging was experienced as unpleasant by 33.9% (95% confidence interval [CI] 21.2-46.6%) and as hardly bearable by 4.8% (95% CI 0-17.5%) of the women. Physical restraint (49.9%, 95% CI 37.4-62.4%), noise level (53.2%, 95% CI 40.7-65.7%), anxiety for the infant (53.2%, 95% CI 40.7-65.7%), and the duration of the examination (51.6%, 95% CI 39.1-64.1%) were major distressing factors. Women who undergo fetal magnetic resonance imaging experience considerable distress, especially those with poor fetal prognoses. Ongoing technical developments, such as a reduction of noise, shortening of the duration of the MRI, and a more comfortable position in open MRI machines, may have the potential to improve the subjective experiences of women during fetal MRI. Level of evidence: III.

  19. Precise on-machine extraction of the surface normal vector using an eddy current sensor array

    NASA Astrophysics Data System (ADS)

    Wang, Yongqing; Lian, Meng; Liu, Haibo; Ying, Yangwei; Sheng, Xianjun

    2016-11-01

    To satisfy the requirements of on-machine measurement of the surface normal during complex-surface manufacturing, a highly robust normal vector extraction method using an eddy current (EC) displacement sensor array is developed; its output is almost unaffected by surface brightness, machining coolant and environmental noise. A precise normal vector extraction model based on a triangular-distributed EC sensor array is first established. The calibration accounts for the effects of object surface inclination and coupling interference on the measurement results, as well as for the relative positions of the EC sensors. A novel apparatus employing three EC sensors and a force transducer was designed that can be easily integrated into a computer numerical control (CNC) machine tool spindle and/or a robot end-effector. Finally, to test the validity and practicability of the proposed method, typical experiments were conducted on specified test pieces, such as an inclined plane and cylindrical and spherical surfaces, using the developed approach and system.
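With three non-collinear surface points sensed by a triangular array, the surface normal follows from a cross product. The sketch below shows only that geometric core and ignores the inclination and coupling corrections that the paper calibrates for:

```python
def surface_normal(p1, p2, p3):
    """Unit normal of the plane through three sensed surface points
    (e.g. the contact points under a triangular displacement-sensor array)."""
    u = [b - a for a, b in zip(p1, p2)]
    v = [b - a for a, b in zip(p1, p3)]
    # cross product u x v
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(c * c for c in n) ** 0.5
    return [c / norm for c in n]

# three points on the plane z = 0 give the +z normal
print(surface_normal([0, 0, 0], [1, 0, 0], [0, 1, 0]))  # [0.0, 0.0, 1.0]
```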

  20. Biodiesel content determination in diesel fuel blends using near infrared (NIR) spectroscopy and support vector machines (SVM).

    PubMed

    Alves, Julio Cesar L; Poppi, Ronei J

    2013-01-30

    This work verifies the potential of the support vector machine (SVM) algorithm, applied to near-infrared (NIR) spectroscopy data, for developing multivariate calibration models to determine biodiesel content in diesel fuel blends, providing the extended analytical range now required for this type of fuel with the necessary accuracy. Considering the difficulty of developing suitable models over an extended analytical range, and that in practice biodiesel/diesel blends currently contain between 0 and 30% (v/v) biodiesel, a calibration model is suggested for the range 0-35% (v/v) of biodiesel in diesel blends. The possibility of using a calibration model for the range 0-100% (v/v) was also investigated, and the difficulty of obtaining adequate results over this full analytical range is discussed. The SVM models are compared with PLS models. The best result was obtained by the SVM model using the spectral region 4400-4600 cm⁻¹, giving an RMSEP of 0.11% for the 0-35% biodiesel content calibration model. This model determines biodiesel content with the accuracy required by the ABNT NBR and ASTM reference methods and without interference from vegetable oil present in the mixture. The better fit of the SVM model for the relationship studied is also verified by its similar prediction results over the 4400-6200 cm⁻¹ spectral range, where the PLS results are much worse. Copyright © 2012 Elsevier B.V. All rights reserved.
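The RMSEP figure of merit used above to compare the SVM and PLS calibration models is simply the root mean square deviation between predicted and reference contents; the biodiesel values below are hypothetical:

```python
import math

def rmsep(predicted, reference):
    """Root mean square error of prediction: the standard figure of merit
    for multivariate calibration models."""
    n = len(predicted)
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(predicted, reference)) / n)

# hypothetical biodiesel contents, % (v/v)
print(rmsep([5.1, 10.0, 19.9, 30.1], [5.0, 10.0, 20.0, 30.0]))
```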

  1. Development of a new calibration procedure and its experimental validation applied to a human motion capture system.

    PubMed

    Royo Sánchez, Ana Cristina; Aguilar Martín, Juan José; Santolaria Mazo, Jorge

    2014-12-01

    Motion capture systems are often used for checking and analyzing human motion in biomechanical applications. It is important, in this context, that the systems provide the best possible accuracy. Among existing capture systems, optical systems are those with the highest accuracy. In this paper, the development of a new calibration procedure for optical human motion capture systems is presented. The performance and effectiveness of the new calibration procedure are also checked by experimental validation. The new calibration procedure consists of two stages. In the first stage, initial estimates of the intrinsic and extrinsic parameters are sought. The camera calibration method used in this stage is the one proposed by Tsai. These parameters are determined from the camera characteristics, the spatial position of the camera, and the center of the capture volume. In the second stage, a simultaneous nonlinear optimization of all parameters is performed to identify the optimal values that minimize the objective function. The objective function, in this case, combines two errors. The first is the distance error between two markers placed on a wand. The second is the position and orientation error of the retroreflective markers of a static calibration object. The true co-ordinates of both objects are calibrated on a co-ordinate measuring machine (CMM). The OrthoBio system is used to validate the new calibration procedure. The resulting errors are 90% lower than those from the previous calibration software and broadly comparable with results from a similarly configured Vicon system.
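The second-stage optimization minimizes, among other terms, the deviation of the reconstructed wand length from its CMM-calibrated value. The sketch below is deliberately simplified: it reduces the parameter set to a single scale factor searched over a grid, whereas the real procedure optimizes all camera parameters nonlinearly:

```python
import math

def wand_length_error(scale, observed_pairs, true_length):
    """Simplified stage-two objective: mean absolute deviation of the
    reconstructed wand length from the CMM-calibrated length, for a
    single hypothetical scale parameter."""
    err = 0.0
    for a, b in observed_pairs:
        err += abs(scale * math.dist(a, b) - true_length)
    return err / len(observed_pairs)

# toy observations of a 500 mm wand reconstructed ~2% too long
pairs = [((0, 0, 0), (510, 0, 0)),
         ((10, 5, 0), (10, 515, 0))]
best = min((s / 1000 for s in range(900, 1100)),
           key=lambda s: wand_length_error(s, pairs, 500.0))
print(best)  # 0.98: the grid point closest to 500/510
```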

  2. Large Scale Structure From Motion for Autonomous Underwater Vehicle Surveys

    DTIC Science & Technology

    2004-09-01

    Govern the Formation of Multiple Images of a Scene and Some of Their Applications. MIT Press, 2001. [26] O. Faugeras and S. Maybank. Motion from point... Machine Vision Conference, volume 1, pages 384-393, September 2002. [69] S. Maybank and O. Faugeras. A theory of self-calibration of a moving camera

  3. 78 FR 76035 - Airworthiness Directives; Maule Aerospace Technology, Inc. Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-16

    ... precision machined step wedge made of 4340 steel (or similar steel with equivalent sound velocity) or at... unless an alternative instrument calibration procedure is used to set the sound velocity. 6. Obtain a... reflection of the thick section. If the digital display does not agree with the thickest thickness, follow...

  4. Proceedings of the 1984 IEEE international conference on systems, man and cybernetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1984-01-01

    This conference contains papers on artificial intelligence, pattern recognition, and man-machine systems. Topics considered include concurrent minimization, a robot programming system, system modeling and simulation, camera calibration, thermal power plants, image processing, fault diagnosis, knowledge-based systems, power systems, hydroelectric power plants, expert systems, and electrical transients.

  5. Testing of molded high temperature plastic actuator rod seals for use in advanced aircraft hydraulic systems

    NASA Technical Reports Server (NTRS)

    Waterman, A. W.; Huxford, R. L.; Nelson, W. G.

    1976-01-01

    Molded high temperature plastic first and second stage rod seal elements were evaluated in seal assemblies to determine performance characteristics. These characteristics were compared with the performance of machined seal elements. The 6.35 cm second stage Chevron seal assembly was tested using molded Chevrons fabricated from five molding materials. Impulse screening tests conducted over a range of 311 K to 478 K revealed thermal setting deficiencies in the aromatic polyimide molding materials. Seal elements fabricated from aromatic copolyester materials structurally failed during impulse cycle calibration. Endurance testing of 3.85 million cycles at 450 K using MIL-H-83283 fluid showed poorer seal performance with the unfilled aromatic polyimide material than had been attained with seals machined from Vespel SP-21 material. The 6.35 cm first stage step-cut compression loaded seal ring fabricated from copolyester injection molding material failed structurally during impulse cycle calibration. Molding of complex shape rod seals was shown to be a potentially controllable technique, but additional molding material property testing is recommended.

  6. Position calibration of a 3-DOF hand-controller with hybrid structure

    NASA Astrophysics Data System (ADS)

    Zhu, Chengcheng; Song, Aiguo

    2017-09-01

    A hand-controller is a human-robot interactive device, which measures the 3-DOF (degree of freedom) position of the human hand and sends it as a command to control robot movement. The device also receives 3-DOF force feedback from the robot and applies it to the human hand. Thus, the precision of 3-DOF position measurements is a key performance factor for hand-controllers. However, when using a hybrid-type 3-DOF hand-controller, various errors occur that are considered to originate from machining and assembly variations within the device. This paper presents a calibration method to improve the position tracking accuracy of hybrid-type hand-controllers by determining the actual size of the hand-controller parts. By re-measuring and re-calibrating this kind of hand-controller, the actual sizes of the key parts that cause errors are determined. Modifying the formula parameters with the actual sizes obtained in the calibration process improves the end-position tracking accuracy of the device.

  7. A computerized MRI biomarker quantification scheme for a canine model of Duchenne muscular dystrophy

    PubMed Central

    Wang, Jiahui; Fan, Zheng; Vandenborne, Krista; Walter, Glenn; Shiloh-Malawsky, Yael; An, Hongyu; Kornegay, Joe N.; Styner, Martin A.

    2015-01-01

    Purpose: Golden retriever muscular dystrophy (GRMD) is a widely used canine model of Duchenne muscular dystrophy (DMD). Recent studies have shown that magnetic resonance imaging (MRI) can be used to non-invasively detect consistent changes in both DMD and GRMD. In this paper, we propose a semi-automated system to quantify MRI biomarkers of GRMD. Methods: Our system was applied to a database of 45 MRI scans from 8 normal and 10 GRMD dogs in a longitudinal natural history study. We first segmented six proximal pelvic limb muscles using two competing schemes: 1) standard, limited muscle range segmentation and 2) semi-automatic full muscle segmentation. We then performed pre-processing, including: intensity inhomogeneity correction, spatial registration of different image sequences, intensity calibration of T2-weighted (T2w) and T2-weighted fat suppressed (T2fs) images, and calculation of MRI biomarker maps. Finally, for each of the segmented muscles, we automatically measured MRI biomarkers of muscle volume and intensity statistics over MRI biomarker maps, and statistical image texture features. Results: The muscle volume and the mean intensities in the T2 value, fat, and water maps showed group differences between normal and GRMD dogs. For the statistical texture biomarkers, both the histogram and run-length matrix features showed obvious group differences between normal and GRMD dogs. The full muscle segmentation shows significantly less error and variability in the proposed biomarkers when compared to the standard, limited muscle range segmentation. Conclusion: The experimental results demonstrated that this quantification tool can reliably quantify MRI biomarkers in GRMD dogs, suggesting that it would also be useful for quantifying disease progression and measuring therapeutic effect in DMD patients. PMID:23299128

  8. Longitudinal MRI assessment: the identification of relevant features in the development of Posterior Fossa Syndrome in children

    NASA Astrophysics Data System (ADS)

    Spiteri, M.; Lewis, E.; Windridge, D.; Avula, S.

    2015-03-01

    Up to 25% of children who undergo brain tumour resection surgery in the posterior fossa develop posterior fossa syndrome (PFS). This syndrome is characterised by mutism and disturbance in speech. Our hypothesis is that there is a correlation between PFS and the occurrence of hypertrophic olivary degeneration (HOD) in lobes within the posterior fossa, known as the inferior olivary nuclei (ION). HOD is exhibited as an increase in size and intensity of the ION on an MR image. Intra-operative MRI (IoMRI) is used during surgical procedures at the Alder Hey Children's Hospital, Liverpool, England, in the treatment of posterior fossa tumours and allows visualisation of the brain during surgery. The final MR scan on the IoMRI allows early assessment of the ION immediately after the surgical procedure. The longitudinal MRI data of 28 patients was analysed in a collaborative study with Alder Hey Children's Hospital, in order to identify the most relevant imaging features that relate to the development of PFS, specifically related to HOD. A semi-automated segmentation process was carried out to delineate the ION on each MRI. Feature selection techniques were used to identify the most relevant features amongst the MRI data, demographics and clinical data provided by the hospital. A support vector machine (SVM) was used to analyse the discriminative ability of the selected features. The results indicate the presence of HOD as the most efficient feature that correlates with the development of PFS, followed by the change in intensity and size of the ION and whether HOD occurred bilaterally or unilaterally.

  9. Wavelet Entropy and Directed Acyclic Graph Support Vector Machine for Detection of Patients with Unilateral Hearing Loss in MRI Scanning

    PubMed Central

    Wang, Shuihua; Yang, Ming; Du, Sidan; Yang, Jiquan; Liu, Bin; Gorriz, Juan M.; Ramírez, Javier; Yuan, Ti-Fei; Zhang, Yudong

    2016-01-01

    Highlights: We develop a computer-aided diagnosis system for unilateral hearing loss detection in structural magnetic resonance imaging. Wavelet entropy is introduced to extract global features from brain images. A directed acyclic graph is employed to endow the support vector machine with the ability to handle multi-class problems. The developed computer-aided diagnosis system achieves an overall accuracy of 95.1% for the three-class problem of differentiating left-sided and right-sided hearing loss from healthy controls. Aim: Sensorineural hearing loss (SNHL) is correlated with many neurodegenerative diseases, and computer vision based methods are increasingly being used to detect it automatically. Materials: We have in total 49 subjects, scanned by 3.0 T MRI (Siemens Medical Solutions, Erlangen, Germany): 14 patients with right-sided hearing loss (RHL), 15 patients with left-sided hearing loss (LHL), and 20 healthy controls (HC). Method: We treat this as a three-class classification problem: RHL, LHL, and HC. Wavelet entropy (WE) was extracted from the magnetic resonance images of each subject and then submitted to a directed acyclic graph support vector machine (DAG-SVM). Results: Ten repetitions of 10-fold cross-validation show that 3-level decomposition yields an overall accuracy of 95.10% for this three-class classification problem, higher than a feedforward neural network, a decision tree, or a naive Bayesian classifier. Conclusions: This computer-aided diagnosis system is promising. We hope this study will attract more computer vision methods for detecting hearing loss. PMID:27807415
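Wavelet entropy, as commonly defined, is the Shannon entropy of the relative wavelet energies across decomposition levels. A Haar-based sketch of that definition (our simplified reading of the feature, not the paper's exact pipeline; the signal length must be divisible by 2^levels):

```python
import math

def haar_level_energies(signal, levels):
    """Relative energy of the Haar detail coefficients at each level."""
    s = list(signal)
    energies = []
    for _ in range(levels):
        approx = [(s[i] + s[i + 1]) / math.sqrt(2) for i in range(0, len(s), 2)]
        detail = [(s[i] - s[i + 1]) / math.sqrt(2) for i in range(0, len(s), 2)]
        energies.append(sum(d * d for d in detail))
        s = approx
    total = sum(energies) or 1.0  # guard against an all-constant signal
    return [e / total for e in energies]

def wavelet_entropy(signal, levels=3):
    """Shannon entropy of the relative wavelet energies: the scalar
    feature fed to the multi-class classifier."""
    return -sum(p * math.log(p)
                for p in haar_level_energies(signal, levels) if p > 0)

# a constant signal has no detail energy anywhere, so entropy is 0
print(wavelet_entropy([1.0] * 8))  # 0.0
```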

  10. Frontotemporal correlates of impulsivity and machine learning in retired professional athletes with a history of multiple concussions.

    PubMed

    Goswami, R; Dufort, P; Tartaglia, M C; Green, R E; Crawley, A; Tator, C H; Wennberg, R; Mikulis, D J; Keightley, M; Davis, Karen D

    2016-05-01

    The frontotemporal cortical network is associated with behaviours such as impulsivity and aggression. The health of the uncinate fasciculus (UF) that connects the orbitofrontal cortex (OFC) with the anterior temporal lobe (ATL) may be a crucial determinant of behavioural regulation. Behavioural changes can emerge after repeated concussion and thus we used MRI to examine the UF and connected gray matter as it relates to impulsivity and aggression in retired professional football players who had sustained multiple concussions. Behaviourally, athletes had faster reaction times and an increased error rate on a go/no-go task, and increased aggression and mania compared to controls. MRI revealed that the athletes had (1) cortical thinning of the ATL, (2) negative correlations of OFC thickness with aggression and task errors, indicative of impulsivity, (3) negative correlations of UF axial diffusivity with error rates and aggression, and (4) elevated resting-state functional connectivity between the ATL and OFC. Using machine learning, we found that UF diffusion imaging differentiates athletes from healthy controls, with significant classifiers based on UF mean and radial diffusivity showing 79-84% sensitivity and specificity, and areas under the ROC curve of 0.8. The spatial pattern of classifier weights revealed hot spots at the orbitofrontal and temporal ends of the UF. These data implicate the UF system in the pathological outcomes of repeated concussion as they relate to impulsive behaviour. Furthermore, a support vector machine has potential utility in the general assessment and diagnosis of brain abnormalities following concussion.
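A minimal sketch of the kind of evaluation reported here (sensitivity, specificity, and ROC area for an SVM on diffusion features), using scikit-learn. The two-feature data standing in for UF mean and radial diffusivity are invented; group sizes and separations are assumptions, not the study's values.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# Hypothetical UF mean/radial diffusivity features (values invented):
controls = rng.normal([0.70, 0.45], 0.05, size=(30, 2))
athletes = rng.normal([0.80, 0.55], 0.05, size=(30, 2))
X = np.vstack([controls, athletes])
y = np.array([0] * 30 + [1] * 30)

clf = SVC(kernel="linear")
# Cross-validated predictions avoid evaluating on the training folds:
pred = cross_val_predict(clf, X, y, cv=5)
scores = cross_val_predict(clf, X, y, cv=5, method="decision_function")

tp = np.sum((pred == 1) & (y == 1)); fn = np.sum((pred == 0) & (y == 1))
tn = np.sum((pred == 0) & (y == 0)); fp = np.sum((pred == 1) & (y == 0))
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auc = roc_auc_score(y, scores)
print(f"sens={sensitivity:.2f} spec={specificity:.2f} auc={auc:.2f}")
```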

  11. Disrupted white matter connectivity underlying developmental dyslexia: A machine learning approach.

    PubMed

    Cui, Zaixu; Xia, Zhichao; Su, Mengmeng; Shu, Hua; Gong, Gaolang

    2016-04-01

    Developmental dyslexia has been hypothesized to result from multiple causes and exhibit multiple manifestations, implying a distributed multidimensional effect on human brain. The disruption of specific white-matter (WM) tracts/regions has been observed in dyslexic children. However, it remains unknown if developmental dyslexia affects the human brain WM in a multidimensional manner. Being a natural tool for evaluating this hypothesis, the multivariate machine learning approach was applied in this study to compare 28 school-aged dyslexic children with 33 age-matched controls. Structural magnetic resonance imaging (MRI) and diffusion tensor imaging were acquired to extract five multitype WM features at a regional level: white matter volume, fractional anisotropy, mean diffusivity, axial diffusivity, and radial diffusivity. A linear support vector machine (LSVM) classifier achieved an accuracy of 83.61% using these MRI features to distinguish dyslexic children from controls. Notably, the most discriminative features that contributed to the classification were primarily associated with WM regions within the putative reading network/system (e.g., the superior longitudinal fasciculus, inferior fronto-occipital fasciculus, thalamocortical projections, and corpus callosum), the limbic system (e.g., the cingulum and fornix), and the motor system (e.g., the cerebellar peduncle, corona radiata, and corticospinal tract). These results were well replicated using a logistic regression classifier. These findings provided direct evidence supporting a multidimensional effect of developmental dyslexia on WM connectivity of human brain, and highlighted the involvement of WM tracts/regions beyond the well-recognized reading system in dyslexia. Finally, the discriminating results demonstrated a potential of WM neuroimaging features as imaging markers for identifying dyslexic individuals. © 2016 Wiley Periodicals, Inc.
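The reported analysis, a linear SVM over multitype regional WM features whose weights indicate the most discriminative regions, can be sketched as follows. The feature names, the planted group difference, and the random data are illustrative assumptions, not the study's measurements.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
n_regions = 10
# Five multitype WM feature families per region, as in the study:
feature_names = [f"{t}_r{r}" for t in ("WMV", "FA", "MD", "AD", "RD")
                 for r in range(n_regions)]
X = rng.normal(size=(61, len(feature_names)))   # 28 dyslexic + 33 controls
y = np.array([1] * 28 + [0] * 33)
X[y == 1, 3] += 2.0        # plant a group difference in one (invented) feature

Xs = StandardScaler().fit_transform(X)
clf = LinearSVC(C=1.0, max_iter=5000).fit(Xs, y)
ranking = np.argsort(-np.abs(clf.coef_[0]))     # most discriminative first
print([feature_names[i] for i in ranking[:3]])
```

Ranking the absolute weights of a standardized linear classifier is one common way to read off "most discriminative features"; the authors' exact contribution measure may differ.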

  12. Wavelet Entropy and Directed Acyclic Graph Support Vector Machine for Detection of Patients with Unilateral Hearing Loss in MRI Scanning.

    PubMed

    Wang, Shuihua; Yang, Ming; Du, Sidan; Yang, Jiquan; Liu, Bin; Gorriz, Juan M; Ramírez, Javier; Yuan, Ti-Fei; Zhang, Yudong

    2016-01-01

    Highlights: We develop a computer-aided diagnosis system for unilateral hearing loss detection in structural magnetic resonance imaging. Wavelet entropy is introduced to extract global image features from brain images. A directed acyclic graph is employed to endow the support vector machine with the ability to handle multi-class problems. The developed computer-aided diagnosis system achieves an overall accuracy of 95.1% for the three-class problem of differentiating left-sided and right-sided hearing loss from healthy controls. Aim: Sensorineural hearing loss (SNHL) is correlated with many neurodegenerative diseases. More and more computer vision based methods are now being used to detect it automatically. Materials: We have in total 49 subjects, scanned by 3.0T MRI (Siemens Medical Solutions, Erlangen, Germany). The subjects comprise 14 patients with right-sided hearing loss (RHL), 15 patients with left-sided hearing loss (LHL), and 20 healthy controls (HC). Method: We treat this as a three-class classification problem: RHL, LHL, and HC. Wavelet entropy (WE) was extracted from the magnetic resonance images of each subject and then submitted to a directed acyclic graph support vector machine (DAG-SVM). Results: Ten repetitions of 10-fold cross validation show that a 3-level decomposition yields an overall accuracy of 95.10% for this three-class classification problem, higher than feedforward neural network, decision tree, and naive Bayesian classifiers. Conclusions: This computer-aided diagnosis system is promising. We hope this study will attract more computer vision methods for detecting hearing loss.

  13. Calibration of fluorescence resonance energy transfer in microscopy

    DOEpatents

    Youvan, Douglas C.; Silva, Christopher M.; Bylina, Edward J.; Coleman, William J.; Dilworth, Michael R.; Yang, Mary M.

    2003-12-09

    Imaging hardware, software, calibrants, and methods are provided to visualize and quantitate the amount of Fluorescence Resonance Energy Transfer (FRET) occurring between donor and acceptor molecules in epifluorescence microscopy. The MicroFRET system compensates for overlap among donor, acceptor, and FRET spectra using well characterized fluorescent beads as standards in conjunction with radiometrically calibrated image processing techniques. The MicroFRET system also provides precisely machined epifluorescence cubes to maintain proper image registration as the sample is illuminated at the donor and acceptor excitation wavelengths. Algorithms are described that pseudocolor the image to display pixels exhibiting radiometrically-corrected fluorescence emission from the donor (blue), the acceptor (green) and FRET (red). The method is demonstrated on samples exhibiting FRET between genetically engineered derivatives of the Green Fluorescent Protein (GFP) bound to the surface of Ni chelating beads by histidine-tags.
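The overlap compensation described here amounts to a per-pixel linear unmixing problem: a calibration matrix measured from bead standards maps pure donor, acceptor and FRET contributions onto the detection channels, and is inverted at each pixel. A sketch with an invented 3×3 calibration matrix (the real matrix would come from the radiometrically calibrated bead measurements):

```python
import numpy as np

# Hypothetical 3x3 spectral calibration matrix from bead standards:
# rows = detection channels, columns = pure donor, acceptor and FRET
# emission (arbitrary, radiometrically corrected units).
M = np.array([[0.90, 0.10, 0.05],
              [0.08, 0.85, 0.30],
              [0.02, 0.05, 0.65]])

def unmix(pixel_channels):
    """Solve M @ [donor, acceptor, fret] = measured channel intensities."""
    return np.linalg.solve(M, pixel_channels)

# A pixel whose true composition is 30% donor, 50% acceptor, 20% FRET:
true = np.array([0.3, 0.5, 0.2])
measured = M @ true
print(unmix(measured))
```

The recovered fractions could then drive the blue/green/red pseudocoloring the record describes.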

  14. Calibration of fluorescence resonance energy transfer in microscopy

    DOEpatents

    Youvan, Douglas C.; Silva, Christopher M.; Bylina, Edward J.; Coleman, William J.; Dilworth, Michael R.; Yang, Mary M.

    2002-09-24

    Imaging hardware, software, calibrants, and methods are provided to visualize and quantitate the amount of Fluorescence Resonance Energy Transfer (FRET) occurring between donor and acceptor molecules in epifluorescence microscopy. The MicroFRET system compensates for overlap among donor, acceptor, and FRET spectra using well characterized fluorescent beads as standards in conjunction with radiometrically calibrated image processing techniques. The MicroFRET system also provides precisely machined epifluorescence cubes to maintain proper image registration as the sample is illuminated at the donor and acceptor excitation wavelengths. Algorithms are described that pseudocolor the image to display pixels exhibiting radiometrically-corrected fluorescence emission from the donor (blue), the acceptor (green) and FRET (red). The method is demonstrated on samples exhibiting FRET between genetically engineered derivatives of the Green Fluorescent Protein (GFP) bound to the surface of Ni chelating beads by histidine-tags.

  15. Transient Cognitive Dynamics, Metastability, and Decision Making

    DTIC Science & Technology

    2008-05-02

    imaging (fMRI) and EEG have opened new possibilities for understanding and modeling cognition [11–15]. Experimental recordings have revealed detailed...between different phase-synchronized states of alpha activity in spontaneous EEG. Alpha activity has been characterized as a series of globally...novel protocols of assisted neurofeedback [59–62], which can open a wide variety of new medical and brain-machine applications. Methods Stable

  16. A SVM-based quantitative fMRI method for resting-state functional network detection.

    PubMed

    Song, Xiaomu; Chen, Nan-kuei

    2014-09-01

    Resting-state functional magnetic resonance imaging (fMRI) aims to measure baseline neuronal connectivity independent of specific functional tasks and to capture changes in the connectivity due to neurological diseases. Most existing network detection methods rely on a fixed threshold to identify functionally connected voxels under the resting state. Due to fMRI non-stationarity, the threshold cannot adapt to variation of data characteristics across sessions and subjects, and generates unreliable mapping results. In this study, a new method is presented for resting-state fMRI data analysis. Specifically, the resting-state network mapping is formulated as an outlier detection process that is implemented using a one-class support vector machine (SVM). The results are refined by using a spatial-feature domain prototype selection method and two-class SVM reclassification. The final decision on each voxel is made by comparing its probabilities of being functionally connected and unconnected, rather than by applying a threshold. Multiple features for resting-state analysis were extracted and examined using an SVM-based feature selection method, and the most representative features were identified. The proposed method was evaluated using synthetic and experimental fMRI data. A comparison study was also performed with independent component analysis (ICA) and correlation analysis. The experimental results show that the proposed method can provide comparable or better network detection performance than ICA and correlation analysis. The method is potentially applicable to various resting-state quantitative fMRI studies. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Real-Time fMRI Pattern Decoding and Neurofeedback Using FRIEND: An FSL-Integrated BCI Toolbox

    PubMed Central

    Sato, João R.; Basilio, Rodrigo; Paiva, Fernando F.; Garrido, Griselda J.; Bramati, Ivanei E.; Bado, Patricia; Tovar-Moll, Fernanda; Zahn, Roland; Moll, Jorge

    2013-01-01

    The demonstration that humans can learn to modulate their own brain activity based on feedback of neurophysiological signals opened up exciting opportunities for fundamental and applied neuroscience. Although EEG-based neurofeedback has been long employed both in experimental and clinical investigation, functional MRI (fMRI)-based neurofeedback emerged as a promising method, given its superior spatial resolution and ability to gauge deep cortical and subcortical brain regions. In combination with improved computational approaches, such as pattern recognition analysis (e.g., Support Vector Machines, SVM), fMRI neurofeedback and brain decoding represent key innovations in the field of neuromodulation and functional plasticity. Expansion in this field and its applications critically depend on the existence of freely available, integrated and user-friendly tools for the neuroimaging research community. Here, we introduce FRIEND, a graphic-oriented user-friendly interface package for fMRI neurofeedback and real-time multivoxel pattern decoding. The package integrates routines for image preprocessing in real-time, ROI-based feedback (single-ROI BOLD level and functional connectivity) and brain decoding-based feedback using SVM. FRIEND delivers an intuitive graphic interface with flexible processing pipelines involving optimized procedures embedding widely validated packages, such as FSL and libSVM. In addition, a user-defined visual neurofeedback module allows users to easily design and run fMRI neurofeedback experiments using ROI-based or multivariate classification approaches. FRIEND is open-source and free for non-commercial use. Processing tutorials and extensive documentation are available. PMID:24312569

  18. The Potential for an Enhanced Role for MRI in Radiation-therapy Treatment Planning

    PubMed Central

    Metcalfe, P.; Liney, G. P.; Holloway, L.; Walker, A.; Barton, M.; Delaney, G. P.; Vinod, S.; Tomé, W.

    2013-01-01

    The exquisite soft-tissue contrast of magnetic resonance imaging (MRI) has meant that the technique is having an increasing role in contouring the gross tumor volume (GTV) and organs at risk (OAR) in radiation therapy treatment planning systems (TPS). MRI-planning scans from diagnostic MRI scanners are currently incorporated into the planning process by being registered to CT data. The soft-tissue data from the MRI provides target outline guidance and the CT provides a solid geometric and electron density map for accurate dose calculation on the TPS computer. There is increasing interest in MRI machine placement in radiotherapy clinics as an adjunct to CT simulators. Most vendors now offer 70 cm bores with flat couch inserts and specialised RF coil designs. We would refer to these devices as MR-simulators. There is also research into the future application of MR-simulators independent of CT and as in-room image-guidance devices. It is within the background of this increased interest in the utility of MRI in radiotherapy treatment planning that this paper is couched. The paper outlines publications that deal with standard MRI sequences used in current clinical practice. It then discusses the potential for using processed functional diffusion maps (fDM) derived from diffusion weighted image sequences in tracking tumor activity and tumor recurrence. Next, this paper reviews publications that describe the use of MRI in patient-management applications that may, in turn, be relevant to radiotherapy treatment planning. The review briefly discusses the concepts behind functional techniques such as dynamic contrast enhanced (DCE), diffusion-weighted (DW) MRI sequences and magnetic resonance spectroscopic imaging (MRSI). Significant applications of MR are discussed in terms of the following treatment sites: brain, head and neck, breast, lung, prostate and cervix. 
While not yet routine, the use of apparent diffusion coefficient (ADC) map analysis indicates an exciting future application for functional MRI. Although DW-MRI has not yet been routinely used in boost adaptive techniques, it is being assessed in cohort studies for sub-volume boosting in prostate tumors. PMID:23617289

  19. Neuroimaging in epilepsy.

    PubMed

    Sidhu, Meneka Kaur; Duncan, John S; Sander, Josemir W

    2018-05-17

    Epilepsy neuroimaging is important for detecting the seizure onset zone, predicting and preventing deficits from surgery and illuminating mechanisms of epileptogenesis. An aspiration is to integrate imaging and genetic biomarkers to enable personalized epilepsy treatments. The ability to detect lesions, particularly focal cortical dysplasia and hippocampal sclerosis, is increased using ultra high-field imaging and postprocessing techniques such as automated volumetry, T2 relaxometry, voxel-based morphometry and surface-based techniques. Statistical analysis of PET and single photon emission computed tomography (STATISCOM) is superior to qualitative analysis alone in identifying focal abnormalities in MRI-negative patients. These methods have also been used to study mechanisms of epileptogenesis and pharmacoresistance. Recent language fMRI studies aim to localize, as well as lateralize, language functions. Memory fMRI has been recommended to lateralize mnemonic function and predict outcome after surgery in temporal lobe epilepsy. Combinations of structural, functional and post-processing methods have been used in multimodal and machine learning models to improve the identification of the seizure onset zone and increase understanding of mechanisms underlying structural and functional aberrations in epilepsy.

  20. Neural Representations of Physics Concepts.

    PubMed

    Mason, Robert A; Just, Marcel Adam

    2016-06-01

    We used functional MRI (fMRI) to assess neural representations of physics concepts (momentum, energy, etc.) in juniors, seniors, and graduate students majoring in physics or engineering. Our goal was to identify the underlying neural dimensions of these representations. Using factor analysis to reduce the number of dimensions of activation, we obtained four physics-related factors that were mapped to sets of voxels. The four factors were interpretable as causal motion visualization, periodicity, algebraic form, and energy flow. The individual concepts were identifiable from their fMRI signatures with a mean rank accuracy of .75 using a machine-learning (multivoxel) classifier. Furthermore, there was commonality in participants' neural representation of physics; a classifier trained on data from all but one participant identified the concepts in the left-out participant (mean accuracy = .71 across all nine participant samples). The findings indicate that abstract scientific concepts acquired in an educational setting evoke activation patterns that are identifiable and common, indicating that science education builds abstract knowledge using inherent, repurposed brain systems. © The Author(s) 2016.
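Rank accuracy, the metric used here, scores how highly the correct concept ranks among all candidates when the classifier's similarity scores are sorted (1 means ranked first, 0 ranked last). A small sketch of one common way to compute it; the exact normalization used by the authors may differ:

```python
import numpy as np

def mean_rank_accuracy(similarity, true_idx):
    """similarity: (items x concepts) classifier scores;
    true_idx: the correct concept index for each item.
    Returns 1.0 if every true concept ranks first, 0.0 if last."""
    n = similarity.shape[1]
    accs = []
    for row, t in zip(similarity, true_idx):
        rank = np.argsort(-row).tolist().index(t)   # 0 = best match
        accs.append(1.0 - rank / (n - 1))
    return float(np.mean(accs))
```

With this definition, a mean rank accuracy of .75 over many concepts means the true concept is, on average, ranked in the top quarter of candidates.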

  1. MR to CT registration of brains using image synthesis

    NASA Astrophysics Data System (ADS)

    Roy, Snehashis; Carass, Aaron; Jog, Amod; Prince, Jerry L.; Lee, Junghoon

    2014-03-01

    Computed tomography (CT) is the preferred imaging modality for patient dose calculation for radiation therapy. Magnetic resonance (MR) imaging (MRI) is used along with CT to identify brain structures due to its superior soft tissue contrast. Registration of MR and CT is necessary for accurate delineation of the tumor and other structures, and is critical in radiotherapy planning. Mutual information (MI) or its variants are typically used as a similarity metric to register MRI to CT. However, unlike CT, MRI intensity does not have an accepted calibrated intensity scale. Therefore, MI-based MR-CT registration may vary from scan to scan as MI depends on the joint histogram of the images. In this paper, we propose a fully automatic framework for MR-CT registration by synthesizing a CT image from the MRI, using a co-registered pair of MR and CT images as an atlas. Patches of the subject MRI are matched to the atlas and the synthetic CT patches are estimated in a probabilistic framework. The synthetic CT is registered to the original CT using a deformable registration and the computed deformation is applied to the MRI. In contrast to most existing methods, we do not need any manual intervention such as picking landmarks or regions of interest. The proposed method was validated on ten brain cancer patient cases, showing 25% improvement in MI and correlation between MR and CT images after registration compared to state-of-the-art registration methods.
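The synthesis step, matching subject-MR patches against a co-registered MR/CT atlas, can be sketched as a brute-force nearest-neighbour lookup in NumPy. This is a deliberately simplified stand-in: the paper estimates patches in a probabilistic framework and follows with deformable registration, both omitted here.

```python
import numpy as np

def synthesize_ct(subject_mr, atlas_mr, atlas_ct, patch=3):
    """For each subject-MR patch, find the most similar atlas-MR patch
    (brute force) and copy the atlas-CT value at its centre."""
    r = patch // 2
    s = np.pad(subject_mr, r, mode="edge")
    am = np.pad(atlas_mr, r, mode="edge")
    H, W = atlas_mr.shape
    atlas_patches = np.array([am[i:i + patch, j:j + patch].ravel()
                              for i in range(H) for j in range(W)])
    atlas_centres = np.asarray(atlas_ct, dtype=float).ravel()
    out = np.zeros(subject_mr.shape)
    for i in range(subject_mr.shape[0]):
        for j in range(subject_mr.shape[1]):
            q = s[i:i + patch, j:j + patch].ravel()
            k = np.argmin(np.sum((atlas_patches - q) ** 2, axis=1))
            out[i, j] = atlas_centres[k]
    return out

# Sanity check: when subject and atlas MR coincide, the pseudo-CT must
# reproduce the atlas CT exactly.
atlas_mr = np.arange(36, dtype=float).reshape(6, 6)
pseudo = synthesize_ct(atlas_mr, atlas_mr, 2.0 * atlas_mr)
```

Real implementations replace the brute-force search with an approximate nearest-neighbour index and blend several candidate patches rather than copying a single centre value.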

  2. A Unified Framework for Brain Segmentation in MR Images

    PubMed Central

    Yazdani, S.; Yusof, R.; Karimian, A.; Riazi, A. H.; Bennamoun, M.

    2015-01-01

    Brain MRI segmentation is an important issue for discovering the brain structure and diagnosing subtle anatomical changes in different brain diseases. However, due to several artifacts, brain tissue segmentation remains a challenging task. The aim of this paper is to improve the automatic segmentation of brain into gray matter, white matter, and cerebrospinal fluid in magnetic resonance images (MRI). We propose an automatic hybrid image segmentation method that integrates a modified statistical expectation-maximization (EM) method and spatial information combined with a support vector machine (SVM). The combined method yields more accurate results than either of its individual techniques, as demonstrated through experiments carried out on both synthetic and real MRI. The results of the proposed technique are evaluated against manual segmentation results and other methods, based on real T1-weighted scans from the Internet Brain Segmentation Repository (IBSR) and simulated images from BrainWeb. The Kappa index is calculated to assess the performance of the proposed framework relative to the ground truth and expert segmentations. The results demonstrate that the proposed combined method gives satisfactory results on both simulated MRI and real brain datasets. PMID:26089978
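A toy version of the hybrid idea: EM (here a Gaussian mixture over intensities) provides high-confidence tissue labels, which train an SVM on intensity plus spatial features; the SVM then relabels the ambiguous voxels. All intensities, coordinates and the confidence threshold are invented, and this is not the authors' modified EM.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(4)
# Hypothetical T1 intensities for CSF, gray matter, white matter:
intens = np.concatenate([rng.normal(m, 8.0, 400) for m in (30, 90, 150)])
coords = rng.uniform(0, 100, size=(1200, 2))     # stand-in voxel positions

# Step 1: EM fits a 3-component Gaussian mixture to the intensities.
gmm = GaussianMixture(n_components=3, random_state=0).fit(intens.reshape(-1, 1))
post = gmm.predict_proba(intens.reshape(-1, 1))
conf = post.max(axis=1)
labels = post.argmax(axis=1)

# Step 2: high-confidence voxels train an SVM on intensity + spatial
# features; the SVM relabels the remaining ambiguous voxels.
X = np.column_stack([intens, coords])
sure = conf > 0.95
svm = SVC(kernel="rbf", gamma="scale").fit(X[sure], labels[sure])
labels[~sure] = svm.predict(X[~sure])
print(np.bincount(labels))
```

In a real pipeline the spatial features would carry genuine neighbourhood information (e.g. smoothed label maps), which is what lets the SVM outperform EM alone near tissue boundaries.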

  3. [Comparison of magnetic resonance imaging artifacts of five common dental materials].

    PubMed

    Xu, Yisheng; Yu, Risheng

    2015-06-01

    To compare five materials commonly used in dentistry, including three types of metals and two types of ceramics, using different sequences at three magnetic resonance imaging (MRI) field strengths (0.35, 1.5, and 3.0 T). Three types of metals and two types of ceramics that were fabricated into the same size and thickness as an incisor crown were placed in a plastic tank filled with saline. The crowns were scanned using a magnetic resonance (MR) machine at 0.35, 1.5, and 3.0 T field strengths. T1WI and T2WI images were obtained. The differences in artifacts among the materials at the different MR field strengths were determined. The zirconia crown presented no significant artifacts when scanned under the three MRI field strengths. The artifacts of casting ceramic were minimal. All dental precious metal alloys, nickel-chromium alloy dental porcelain, and cobalt-chromium ceramic alloy showed varying degrees of artifacts under the three MRI field strengths. Zirconia and casting ceramics present almost no or faint artifacts. By contrast, precious metal alloys, nickel-chromium alloy dental porcelain and cobalt-chromium ceramic alloy display MRI artifacts. The artifact area increases with increasing magnetic field strength.

  4. Consistency of signal intensity and T2* in frozen ex vivo heart muscle, kidney, and liver tissue.

    PubMed

    Kaye, Elena A; Josan, Sonal; Lu, Aiming; Rosenberg, Jarrett; Daniel, Bruce L; Pauly, Kim Butts

    2010-03-01

    To investigate tissue dependence of the MRI-based thermometry in frozen tissue by quantification and comparison of signal intensity and T2* of ex vivo frozen tissue of three different types: heart muscle, kidney, and liver. Tissue samples were frozen and imaged on a 0.5 Tesla MRI scanner with ultrashort echo time (UTE) sequence. Signal intensity and T2* were determined as the temperature of the tissue samples was decreased from room temperature to approximately -40 degrees C. Statistical analysis was performed for (-20 degrees C, -5 degrees C) temperature interval. The findings of this study demonstrate that signal intensity and T2* are consistent across three types of tissue for (-20 degrees C, -5 degrees C) temperature interval. Both parameters can be used to calculate a single temperature calibration curve for all three types of tissue and potentially in the future serve as a foundation for tissue-independent MRI-based thermometry.
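A single temperature calibration curve of the kind the authors propose can be built by least-squares fitting temperature against UTE signal intensity over the (-20 °C, -5 °C) interval. The numbers below are invented stand-ins, not the paper's measurements:

```python
import numpy as np

# Hypothetical UTE measurements in the (-20 °C, -5 °C) interval:
# signal intensity (a.u.) rising roughly linearly with temperature.
temp = np.array([-20.0, -17.5, -15.0, -12.5, -10.0, -7.5, -5.0])
signal = np.array([0.21, 0.27, 0.33, 0.40, 0.46, 0.52, 0.59])

# Least-squares line mapping signal intensity back to temperature,
# i.e. one calibration curve usable for all three tissue types.
slope, intercept = np.polyfit(signal, temp, 1)

def temp_from_signal(s):
    return slope * s + intercept

print(round(temp_from_signal(0.40), 1))
```

The same fit could be repeated with T2* as the predictor; the study's point is that either curve is tissue-independent within this interval.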

  5. QA in Radiation Therapy: The RPC Perspective

    NASA Astrophysics Data System (ADS)

    Ibbott, G. S.

    2010-11-01

    The Radiological Physics Center (RPC) is charged with assuring the consistent delivery of radiation doses to patients on NCI-sponsored clinical trials. To accomplish this, the RPC conducts annual mailed audits of machine calibration, dosimetry audit visits to institutions, reviews of treatment records, and credentialing procedures requiring the irradiation of anthropomorphic phantoms. Through these measurements, the RPC has gained an understanding of the level of quality assurance practiced in this cohort of institutions, and a database of measurements of beam characteristics of a large number of treatment machines. The results of irradiations of phantoms have yielded insight into the delivery of advanced technology treatment procedures.

  6. Residual Stresses in 21-6-9 Stainless Steel Warm Forgings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Everhart, Wesley A.; Lee, Jordan D.; Broecker, Daniel J.

    Forging residual stresses are detrimental to the production and performance of derived machined parts because they drive machining distortions, corrosion and fatigue cracking. Residual strains in a 21-6-9 stainless steel warm High Energy Rate Forging (HERF) were measured via neutron diffraction. The finite element analysis (FEA) method was used to predict the residual stresses that occur during forging and water quenching. The experimentally measured residual strains were used to calibrate simulations of the three-dimensional residual stress state of the forging. ABAQUS simulation tools predicted residual strains that tend to match the experimental results when varying yield strength is considered.

  7. Study of wear between piston ring and cylinder housing of an internal combustion engine by thin layer activation technique

    NASA Astrophysics Data System (ADS)

    Chowdhury, D. P.; Chaudhuri, Jayanta; Raju, V. S.; Das, S. K.; Bhattacharjee, B. B.; Gangadharan, S.

    1989-07-01

    The wear analysis of a compression ring and cylinder housing of an Internal Combustion Engine by thin layer activation (TLA) with 40 MeV α-particles from the Variable Energy Cyclotron at Calcutta is reported. The calibration curves have been obtained for Fe and Ni using stacked foil activation technique for determining the absolute wear in these machine parts. It has been possible to determine the pattern of wear on the points along the surface of machine components. The minimum detectable depth in this wear study has been estimated at 0.11 ± 0.04 μm.
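The use of a calibration curve to convert residual surface activity into absolute wear depth can be sketched as a simple inversion by interpolation. The activity-depth pairs below are invented, not the measured Fe/Ni curves from the stacked-foil technique:

```python
import numpy as np

# Hypothetical calibration curve from stacked-foil activation:
# residual activity fraction vs depth of material removed (µm).
depth_um = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 80.0])
activity = np.array([1.00, 0.93, 0.85, 0.70, 0.45, 0.10])

def wear_depth(measured_fraction):
    """Invert the calibration curve: activity fraction -> worn depth."""
    # np.interp needs increasing x, so flip both arrays.
    return float(np.interp(measured_fraction, activity[::-1], depth_um[::-1]))

print(wear_depth(0.85))
```

In the actual study the curve shape comes from the depth profile of the 40 MeV α-activation, and the quoted 0.11 ± 0.04 µm detection limit reflects the counting statistics near an activity fraction of 1.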

  8. An Inexpensive Method to use an Ocean Optics Spectrometer for Telescopic Spectroscopy

    NASA Astrophysics Data System (ADS)

    Joel, Berger; Sugerman, B. E. K.

    2012-01-01

    We present a relatively inexpensive method for using an Ocean Optics spectrometer for telescopic spectroscopy. The Ocean Optics spectrometer is a highly sensitive, affordable and versatile fiber-optic spectrometer that can be used in a variety of physics and astronomy classes and labs. With about $275 and a small amount of machining, this spectrometer can be easily adapted for any telescope that accepts 2" eyepieces. We provide the equipment list, machining specs, and calibration process, as well as sample stellar spectra. This work was supported by the Department of Physics and Astronomy and the Office of the Provost of Goucher College.

  9. Real-Time Deflection Monitoring for Milling of a Thin-Walled Workpiece by Using PVDF Thin-Film Sensors with a Cantilevered Beam as a Case Study

    PubMed Central

    Luo, Ming; Liu, Dongsheng; Luo, Huan

    2016-01-01

    Thin-walled workpieces, such as aero-engine blisks and casings, are usually made of hard-to-cut materials. The wall thickness is very small and it is easy to deflect during milling process under dynamic cutting forces, leading to inaccurate workpiece dimensions and poor surface integrity. To understand the workpiece deflection behavior in a machining process, a new real-time nonintrusive method for deflection monitoring is presented, and a detailed analysis of workpiece deflection for different machining stages of the whole machining process is discussed. The thin-film polyvinylidene fluoride (PVDF) sensor is attached to the non-machining surface of the workpiece to copy the deflection excited by the dynamic cutting force. The relationship between the input deflection and the output voltage of the monitoring system is calibrated by testing. Monitored workpiece deflection results show that the workpiece experiences obvious vibration during the cutter entering the workpiece stage, and vibration during the machining process can be easily tracked by monitoring the deflection of the workpiece. During the cutter exiting the workpiece stage, the workpiece experiences forced vibration firstly, and free vibration exists until the amplitude reduces to zero after the cutter exits the workpiece. Machining results confirmed the suitability of the deflection monitoring system for machining thin-walled workpieces with the application of PVDF sensors. PMID:27626424

  10. Development of hardware system using temperature and vibration maintenance models integration concepts for conventional machines monitoring: a case study

    NASA Astrophysics Data System (ADS)

    Adeyeri, Michael Kanisuru; Mpofu, Khumbulani; Kareem, Buliaminu

    2016-03-01

    This article describes the integration of temperature and vibration models for maintenance monitoring of conventional machinery parts whose optimal functionality is affected by abnormal changes in temperature and vibration values, resulting in machine failures, machine breakdowns, poor product quality, inability to meet customers' demand, and poor inventory control, to mention a few. The work entails the use of temperature and vibration sensors as monitoring probes programmed in a microcontroller using the C language. The developed hardware consists of an ADXL345 vibration sensor, an AD594/595 temperature sensor with a type K thermocouple, a microcontroller, a graphic liquid crystal display, a real-time clock, etc. The hardware is divided into two units: one based at the workstation (mainly meant to monitor machine behaviour) and the other at the base station (meant to receive the machine information transmitted from the workstation), working cooperatively for effective functionality. The resulting hardware was calibrated, tested using model verification, and validated through a least-squares and regression analysis approach using data read from the gearboxes of the extruding and cutting machines used for polyethylene bag production. The results obtained confirmed the correlation existing between time, vibration and temperature, reflecting the effective formulation of the developed concept.
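The least-squares/regression validation mentioned here can be sketched as fitting one monitored variable against another and checking the goodness of fit. The gearbox readings below are simulated, not the authors' data, and the linear model is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
# Simulated gearbox readings: vibration (g) and temperature (°C)
# logged over machine running time (hours).
hours = np.linspace(0, 200, 50)
vibration = 0.5 + 0.004 * hours + rng.normal(0, 0.02, 50)
temperature = 40.0 + 0.05 * hours + rng.normal(0, 0.5, 50)

# Least-squares check: does temperature track vibration linearly?
A = np.column_stack([np.ones_like(vibration), vibration])
coef, res, rank, sv = np.linalg.lstsq(A, temperature, rcond=None)
pred = A @ coef
ss_res = np.sum((temperature - pred) ** 2)
ss_tot = np.sum((temperature - temperature.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(coef, round(r2, 2))
```

A high R² between the two sensor channels is the kind of correlation the article reports; in deployment the fitted line would feed the maintenance thresholds on the workstation unit.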

  11. Classification of sodium MRI data of cartilage using machine learning.

    PubMed

    Madelin, Guillaume; Poidevin, Frederick; Makrymallis, Antonios; Regatte, Ravinder R

    2015-11-01

    To assess the possible utility of machine learning for classifying subjects with and without osteoarthritis using sodium magnetic resonance imaging data. Theory: Support vector machine, k-nearest neighbors, naïve Bayes, discriminant analysis, linear regression, logistic regression, neural networks, decision tree, and tree bagging were tested. Sodium magnetic resonance imaging with and without fluid suppression by inversion recovery was acquired on the knee cartilage of 19 controls and 28 osteoarthritis patients. Sodium concentrations were measured in regions of interest in the knee for both acquisitions. The mean (MEAN) and standard deviation (STD) of these concentrations were measured in each region of interest, and the minimum, maximum, and mean of these two measurements were calculated over all regions of interest for each subject. The resulting 12 variables per subject were used as predictors for classification. Either Min [STD] alone, or in combination with Mean [MEAN] or Min [MEAN], all from fluid-suppressed data, were the best predictors, with an accuracy >74%, mainly with linear logistic regression and linear support vector machine. Other good classifiers include discriminant analysis, linear regression, and naïve Bayes. Machine learning is a promising technique for classifying osteoarthritis patients and controls from sodium magnetic resonance imaging data. © 2014 Wiley Periodicals, Inc.
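Comparing several of the listed classifiers by cross-validation on a single best predictor (such as Min [STD]) might look like the following scikit-learn sketch. The sodium concentrations are invented; only the group sizes match the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
# Hypothetical Min[STD] sodium values (mM); group sizes match the study:
controls = rng.normal(25.0, 3.0, size=(19, 1))
patients = rng.normal(32.0, 4.0, size=(28, 1))
X = np.vstack([controls, patients])
y = np.array([0] * 19 + [1] * 28)

for name, clf in [("logistic regression", LogisticRegression()),
                  ("linear SVM", SVC(kernel="linear")),
                  ("naive Bayes", GaussianNB())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```

Stratified k-fold accuracy of this kind is how the >74% figures in the abstract would typically be obtained from only 47 subjects.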

  12. Z-correction, a method for achieving ultraprecise self-calibration on large area coordinate measurement machines for photomasks

    NASA Astrophysics Data System (ADS)

    Ekberg, Peter; Stiblert, Lars; Mattsson, Lars

    2014-05-01

    High-quality photomasks are a prerequisite for the production of flat-panel TVs, tablets and other kinds of high-resolution displays. Over the past years, resolution demands have accelerated: today the high-definition standard HD, 1920 × 1080 pixels, is well established, and the next-generation so-called ultra-high-definition (UHD or 4K) display is already entering the market. Highly advanced mask writers are used to produce the photomasks needed for the production of such displays. The dimensional tolerance in X and Y on absolute pattern placement on these photomasks, with sizes on the order of square meters, has been in the range of 200-300 nm (3σ), but is now on the way to be <150 nm (3σ). To verify these photomasks, 2D ultra-precision coordinate measurement machines are used, with even tighter tolerance requirements. The metrology tool MMS15000 is today the world-standard tool used for the verification of large-area photomasks. This paper presents a method called Z-correction that has been developed for the purpose of improving the absolute X, Y placement accuracy of features on the photomask in the writing process. However, Z-correction is also a prerequisite for achieving X and Y uncertainty levels <90 nm (3σ) in the self-calibration process of the MMS15000 stage area of 1.4 × 1.5 m². At uncertainty specifications below 200 nm (3σ) over such a large area, the calibration object used, here an 8-16 mm thick quartz plate approximately a square meter in size, cannot be treated as a rigid body. The reason is that the absolute shape of the plate is affected by gravity and will therefore not be the same at the different placements on the measurement machine stage used in the self-calibration process. This mechanical deformation stretches or compresses the top surface (i.e. the image side) of the plate where the pattern resides, and therefore spatially deforms the mask pattern in the X- and Y-directions. Errors due to this deformation can easily reach several hundred nanometers. When Z-correction is used in the writer, it is also possible to relax the flatness demand on the photomask backside, leading to reduced manufacturing costs for the plates.
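A minimal sketch of the neutral-plane effect behind the deformation the abstract describes: if the mid-plane of a bending plate stays unstretched, the patterned top surface shifts laterally by roughly -(t/2)·dz/dx. The deflection profile, sag amplitude, and thickness below are assumed for illustration; this is not the MMS15000 correction algorithm.

```python
import numpy as np

# Simplified 1D illustration of gravity-induced pattern displacement:
# a local surface slope dz/dx displaces the patterned top surface of a
# plate of thickness t laterally by about -(t/2) * dz/dx (neutral mid-plane).
t = 12e-3                       # plate thickness: 12 mm (within the 8-16 mm range)
x = np.linspace(0.0, 1.0, 201)  # 1 m span across the plate
sag = 25e-6                     # assumed 25 um gravity sag at mid-span
z = -sag * np.sin(np.pi * x)    # assumed deflection profile
slope = np.gradient(z, x)
dx = -(t / 2.0) * slope         # in-plane shift of the top-surface pattern

print(f"max lateral pattern shift: {np.abs(dx).max()*1e9:.0f} nm")
```

Even a few tens of microns of sag produce pattern shifts of several hundred nanometers, consistent with the error magnitude quoted in the abstract.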

  13. Stepwise Regression Analysis of MDOE Balance Calibration Data Acquired at DNW

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard; Philipsen, Iwan

    2007-01-01

    This paper reports a comparison of two experiment design methods applied to the calibration of a strain-gage balance. One features a 734-point test matrix in which loads are varied systematically according to a method commonly applied in aerospace research and known in the experiment design literature as One Factor At a Time (OFAT) testing. Two variations of an alternative experiment design were also executed on the same balance, each with different features of an MDOE experiment design. The Modern Design of Experiments (MDOE) is an integrated process of experiment design, execution, and analysis applied at NASA's Langley Research Center to achieve significant reductions in cycle time, direct operating cost, and experimental uncertainty in aerospace research generally and in balance calibration experiments specifically. Personnel in the Instrumentation and Controls Department of the German-Dutch Wind Tunnels (DNW) applied MDOE methods to the calibration of a balance on an automated calibration machine in order to evaluate them. The data were sent to Langley Research Center for analysis and comparison. This paper reports key findings from this analysis. The chief result is that a 100-point calibration exploiting MDOE principles delivered quality comparable to a 700+ point OFAT calibration, with significantly reduced cycle time and attendant savings in direct and indirect costs. While the DNW test matrices implemented key MDOE principles and produced excellent results, additional MDOE concepts implemented in balance calibrations at Langley Research Center are also identified and described.

  14. The classification of the patients with pulmonary diseases using breath air samples spectral analysis

    NASA Astrophysics Data System (ADS)

    Kistenev, Yury V.; Borisov, Alexey V.; Kuzmin, Dmitry A.; Bulanova, Anna A.

    2016-08-01

    A technique of exhaled breath sampling is discussed. A procedure for wavelength auto-calibration is proposed and tested. The experimental data are compared with model absorption spectra of 5% CO2. Classification results for three study groups, obtained using support vector machine and principal component analysis methods, are presented.
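The PCA stage of such an analysis can be illustrated as follows, with synthetic absorption spectra standing in for the breath-air data (the band positions and noise level are invented); a support vector machine would then be trained on the resulting component scores.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical absorbance spectra (200 wavelength bins) for two study groups;
# the groups differ by a broad absorption band (illustration only).
wl = np.linspace(0.0, 1.0, 200)

def band(center):
    return np.exp(-((wl - center) ** 2) / 0.005)

group_a = band(0.3) * 1.0 + rng.normal(0, 0.05, size=(20, 200))
group_b = band(0.3) * 0.4 + band(0.6) * 0.6 + rng.normal(0, 0.05, size=(20, 200))
X = np.vstack([group_a, group_b])

# PCA via SVD of the mean-centered data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T  # projection onto the first two principal components

# The groups separate along PC1; a linear SVM would then split the scores.
pc1_a, pc1_b = scores[:20, 0].mean(), scores[20:, 0].mean()
print(f"group means on PC1: {pc1_a:.2f} vs {pc1_b:.2f}")
```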

  15. Accuracy of Bayes and Logistic Regression Subscale Probabilities for Educational and Certification Tests

    ERIC Educational Resources Information Center

    Rudner, Lawrence

    2016-01-01

    In the machine learning literature, it is commonly accepted as fact that as calibration sample sizes increase, Naïve Bayes classifiers initially outperform Logistic Regression classifiers in terms of classification accuracy. Applied to subtests from an on-line final examination and from a highly regarded certification examination, this study shows…

  16. 78 FR 49774 - Petitions for Modification of Application of Existing Mandatory Safety Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-15

    ... the well. (7) Calibrate the methane monitors on the longwall, continuous mining machine, or cutting..., test methane levels with a hand- held methane detector at least every 10 minutes from the time that... methane levels are less than 1.0 percent in all areas that will be exposed to flames and sparks from the...

  17. Assessing Writing in MOOCs: Automated Essay Scoring and Calibrated Peer Review™

    ERIC Educational Resources Information Center

    Balfour, Stephen P.

    2013-01-01

    Two of the largest Massive Open Online Course (MOOC) organizations have chosen different methods for the way they will score and provide feedback on essays students submit. EdX, MIT and Harvard's non-profit MOOC federation, recently announced that they will use a machine-based Automated Essay Scoring (AES) application to assess written work in…

  18. Machine Learning Techniques for Global Sensitivity Analysis in Climate Models

    NASA Astrophysics Data System (ADS)

    Safta, C.; Sargsyan, K.; Ricciuto, D. M.

    2017-12-01

    Climate model studies are challenged not only by the compute-intensive nature of these models but also by the high dimensionality of the input parameter space. In our previous work with the land model components (Sargsyan et al., 2014), we identified subsets of 10 to 20 parameters relevant for each QoI via Bayesian compressive sensing and variance-based decomposition. Nevertheless, the algorithms were challenged by the nonlinear input-output dependencies for some of the relevant QoIs. In this work we will explore a combination of techniques to extract relevant parameters for each QoI and subsequently construct surrogate models with quantified uncertainty, necessary for future developments, e.g., model calibration and prediction studies. In the first step, we will compare the skill of machine-learning models (e.g., neural networks, support vector machines) to identify the optimal number of classes in selected QoIs and to construct robust multi-class classifiers that will partition the parameter space into regions with smooth input-output dependencies. These classifiers will be coupled with techniques aimed at building sparse and/or low-rank surrogate models tailored to each class. Specifically, we will explore and compare sparse learning techniques with low-rank tensor decompositions. These models will be used to identify parameters that are important for each QoI. Surrogate accuracy requirements are higher for subsequent model calibration studies, and we will ascertain the performance of this workflow for multi-site ALM simulation ensembles.

  19. Multivariate detrending of fMRI signal drifts for real-time multiclass pattern classification.

    PubMed

    Lee, Dongha; Jang, Changwon; Park, Hae-Jeong

    2015-03-01

    Signal drift in functional magnetic resonance imaging (fMRI) is an unavoidable artifact that limits classification performance in multi-voxel pattern analysis of fMRI. Conventional methods to reduce signal drift, such as global demeaning or proportional scaling, disregard regional variations of drift, whereas voxel-wise univariate detrending is too sensitive to noisy fluctuations. To overcome these drawbacks, we propose a multivariate real-time detrending method for multiclass classification that involves spatial demeaning at each scan and the recursive detrending of drifts in the classifier outputs driven by a multiclass linear support vector machine. Experiments using binary and multiclass data showed that the linear trend estimation of the classifier output drift for each class (a weighted sum of drifts in the class-specific voxels) was more robust than voxel-wise detrending against voxel-wise artifacts that lead to inconsistent spatial patterns and against the effects of online processing. The classification performance of the proposed method was significantly better, especially for multiclass data, than that of voxel-wise linear detrending, global demeaning, and classifier output detrending without demeaning. We concluded that the multivariate approach using classifier output detrending of fMRI signals with spatial demeaning preserves spatial patterns, is less sensitive than conventional methods to sample size, and increases classification performance, which is a useful feature for real-time fMRI classification. Copyright © 2014 Elsevier Inc. All rights reserved.
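A toy, offline illustration of the demean-then-detrend-the-classifier-output idea (not the paper's recursive real-time implementation): all data are synthetic, and a fixed linear readout stands in for the trained support vector machine.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy fMRI-like data: 100 scans x 50 voxels, with a voxel-dependent drift
# plus a class-specific activation pattern (illustration only).
n_scans, n_vox = 100, 50
drift = np.linspace(0, 5, n_scans)[:, None] * rng.uniform(0.5, 1.5, n_vox)
pattern = rng.normal(0, 1, n_vox)           # class-specific spatial pattern
labels = rng.integers(0, 2, n_scans)        # class of each scan
data = drift + labels[:, None] * pattern + rng.normal(0, 0.2, (n_scans, n_vox))

# Step 1: spatial demeaning at each scan (preserves the spatial pattern).
demeaned = data - data.mean(axis=1, keepdims=True)

# Step 2: remove the residual linear trend from the classifier output
# (here a fixed readout w = pattern, assumed known) rather than per voxel.
out = demeaned @ pattern
t = np.arange(n_scans)
A = np.vstack([t, np.ones(n_scans)]).T
coef, *_ = np.linalg.lstsq(A, out, rcond=None)
detrended = out - A @ coef

# After detrending, the output separates the two classes.
gap = detrended[labels == 1].mean() - detrended[labels == 0].mean()
print(f"class separation of detrended output: {gap:.1f}")
```

Detrending one output series per class, instead of 50 voxel series, is what makes the multivariate variant cheap enough for real-time use.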

  20. MRI/TRUS data fusion for prostate brachytherapy. Preliminary results.

    PubMed

    Reynier, Christophe; Troccaz, Jocelyne; Fourneret, Philippe; Dusserre, André; Gay-Jeune, Cécile; Descotes, Jean-Luc; Bolla, Michel; Giraud, Jean-Yves

    2004-06-01

    Prostate brachytherapy involves implanting radioactive seeds (I125 for instance) permanently in the gland for the treatment of localized prostate cancers, e.g., cT1c-T2a N0 M0 with good prognostic factors. Treatment planning and seed implanting are most often based on the intensive use of transrectal ultrasound (TRUS) imaging. This is not easy because prostate visualization is difficult in this imaging modality, particularly as regards the apex of the gland, and from an intra- and interobserver variability standpoint. Radioactive seeds are implanted inside open interventional MR machines in some centers. Since MRI has been shown to be sensitive and specific for prostate imaging, whilst open MR is prohibitive for most centers and makes surgical procedures very complex, this work suggests bringing the MR virtually into the operating room through MRI/TRUS data fusion. This involves providing the physician with bi-modality images (TRUS plus MRI) intended to improve treatment planning from the data registration stage. The paper describes the method developed and implemented in the PROCUR system. Results are reported for a phantom and a first series of patients. Phantom experiments helped characterize the accuracy of the process. Patient experiments have shown that using MRI data linked with TRUS data improves TRUS image segmentation, especially regarding the apex and base of the prostate. This may significantly modify prostate volume definition and have an impact on treatment planning.

  1. A machine learning calibration model using random forests to improve sensor performance for lower-cost air quality monitoring

    NASA Astrophysics Data System (ADS)

    Zimmerman, Naomi; Presto, Albert A.; Kumar, Sriniwasa P. N.; Gu, Jason; Hauryliuk, Aliaksei; Robinson, Ellis S.; Robinson, Allen L.; Subramanian, R.

    2018-01-01

    Low-cost sensing strategies hold the promise of denser air quality monitoring networks, which could significantly improve our understanding of personal air pollution exposure. Additionally, low-cost air quality sensors could be deployed to areas where limited monitoring exists. However, low-cost sensors are frequently sensitive to environmental conditions and pollutant cross-sensitivities, which have historically been poorly addressed by laboratory calibrations, limiting their utility for monitoring. In this study, we investigated different calibration models for the Real-time Affordable Multi-Pollutant (RAMP) sensor package, which measures CO, NO2, O3, and CO2. We explored three methods: (1) laboratory univariate linear regression, (2) empirical multiple linear regression, and (3) machine-learning-based calibration models using random forests (RF). Calibration models were developed for 16-19 RAMP monitors (varied by pollutant) using training and testing windows spanning August 2016 through February 2017 in Pittsburgh, PA, US. The random forest models matched (CO) or significantly outperformed (NO2, CO2, O3) the other calibration models, and their accuracy and precision were robust over time for testing windows of up to 16 weeks. Following calibration, average mean absolute error on the testing data set from the random forest models was 38 ppb for CO (14 % relative error), 10 ppm for CO2 (2 % relative error), 3.5 ppb for NO2 (29 % relative error), and 3.4 ppb for O3 (15 % relative error), and Pearson r versus the reference monitors exceeded 0.8 for most units. Model performance is explored in detail, including a quantification of model variable importance, accuracy across different concentration ranges, and performance in a range of monitoring contexts including the National Ambient Air Quality Standards (NAAQS) and the US EPA Air Sensors Guidebook recommendations of minimum data quality for personal exposure measurement. 
A key strength of the RF approach is that it accounts for pollutant cross-sensitivities. This highlights the importance of developing multipollutant sensor packages (as opposed to single-pollutant monitors); we determined this is especially critical for NO2 and CO2. The evaluation reveals that only the RF-calibrated sensors meet the US EPA Air Sensors Guidebook recommendations of minimum data quality for personal exposure measurement. We also demonstrate that the RF-model-calibrated sensors could detect differences in NO2 concentrations between a near-road site and a suburban site less than 1.5 km away. From this study, we conclude that combining RF models with carefully controlled state-of-the-art multipollutant sensor packages as in the RAMP monitors appears to be a very promising approach to address the poor performance that has plagued low-cost air quality sensors.
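The benefit of multipollutant calibration over a univariate fit can be sketched with an ordinary multiple linear regression (method 2 in the study; the random forest generalizes this to nonlinear responses). All concentrations and sensor coefficients below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic colocation data illustrating cross-sensitivity: the raw NO2
# sensor signal also responds to O3 and temperature (coefficients invented).
n = 500
true_no2 = rng.uniform(5, 50, n)   # ppb, from a reference monitor
o3 = rng.uniform(10, 60, n)        # ppb, interfering oxidant
temp = rng.uniform(0, 30, n)       # deg C
raw = 0.8 * true_no2 + 0.5 * o3 + 0.3 * temp + rng.normal(0, 1.0, n)

def mae(pred):
    return np.abs(pred - true_no2).mean()

# Univariate calibration: regress reference NO2 on the raw signal alone.
A1 = np.vstack([raw, np.ones(n)]).T
c1, *_ = np.linalg.lstsq(A1, true_no2, rcond=None)
print(f"univariate MAE:   {mae(A1 @ c1):.2f} ppb")

# Multivariate calibration: include the cross-sensitive covariates.
A2 = np.vstack([raw, o3, temp, np.ones(n)]).T
c2, *_ = np.linalg.lstsq(A2, true_no2, rcond=None)
print(f"multivariate MAE: {mae(A2 @ c2):.2f} ppb")
```

The univariate model cannot distinguish NO2 from the interferents, while the multivariate fit recovers the reference values to within the sensor noise, mirroring why the multipollutant RAMP package matters most for NO2 and CO2.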

  2. TU-F-BRB-02: Motion Artifacts and Suppression in MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, X.

    The current clinical standard of organ respiratory imaging, 4D-CT, is fundamentally limited by poor soft-tissue contrast and imaging dose. These limitations are potential barriers to beneficial “4D” radiotherapy methods which optimize the target and OAR dose-volume considering breathing motion but rely on a robust motion characterization. Conversely, MRI imparts no known radiation risk and has excellent soft-tissue contrast. MRI-based motion management is therefore highly desirable and holds great promise to improve radiotherapy of moving cancers, particularly in the abdomen. Over the past decade, MRI techniques have improved significantly, making MR-based motion management clinically feasible. For example, cine MRI has high temporal resolution up to 10 f/s and has been used to track and/or characterize tumor motion, study correlation between external and internal motions. New MR technologies, such as 4D-MRI and MRI hybrid treatment machines (i.e. MR-linac or MR-Co60), have been recently developed. These technologies can lead to more accurate target volume determination and more precise radiation dose delivery via direct tumor gating or tracking. Despite all these promises, great challenges exist and the achievable clinical benefit of MRI-based tumor motion management has yet to be fully explored, much less realized. In this proposal, we will review novel MR-based motion management methods and technologies, the state-of-the-art concerning MRI development and clinical application and the barriers to more widespread adoption. Learning Objectives: Discuss the need of MR-based motion management for improving patient care in radiotherapy. Understand MR techniques for motion imaging and tumor motion characterization. Understand the current state of the art and future steps for clinical integration. Henry Ford Health System holds research agreements with Philips Healthcare. Research sponsored in part by a Henry Ford Health System Internal Mentored Grant.

  3. TU-F-BRB-01: Resolving and Characterizing Breathing Motion for Radiotherapy with MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tryggestad, E.


  4. TU-F-BRB-00: MRI-Based Motion Management for RT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE


  5. MRI for transformation of preserved organs and their pathologies into digital formats for medical education and creation of a virtual pathology museum. A pilot study.

    PubMed

    Venkatesh, S K; Wang, G; Seet, J E; Teo, L L S; Chong, V F H

    2013-03-01

    To evaluate the feasibility of magnetic resonance imaging (MRI) for the transformation of preserved organs and their disease entities into digital formats for medical education and the creation of a virtual museum. MRI of 114 selected pathology specimen jars representing different organs and their diseases was performed using a 3 T MRI machine with two or more MRI sequences, including three-dimensional (3D) T1-weighted (T1W), 3D-T2W, 3D-FLAIR (fluid attenuated inversion recovery), fat-water separation (DIXON), and gradient-recalled echo (GRE) sequences. Qualitative assessment of MRI for depiction of disease and internal anatomy was performed. Volume rendering was performed on commercially available workstations. The digital images, 3D models, and photographs of specimens were archived on a workstation serving as a virtual pathology museum. MRI was successfully performed on all specimens. The 3D-T1W and 3D-T2W sequences demonstrated the best contrast between normal and pathological tissues. The digital material is a useful aid for understanding disease, giving insights into internal structural changes not apparent on visual inspection alone. Volume rendering produced vivid 3D models, in some cases with better contrast between normal and diseased tissue than real specimens or their photographs. The digital library provides good illustration material for radiological-pathological correlation by enhancing pathological anatomy and information on the nature and signal characteristics of tissues. In some specimens, the MRI appearance may differ from the corresponding organ and disease in vivo owing to tissue death and changes induced by prolonged contact with preservative fluid. MRI of pathology specimens is feasible and provides excellent images for education and for creating a virtual pathology museum that can serve as a permanent record of digital material for self-directed learning, improving teaching aids, and radiological-pathological correlation. 
Copyright © 2012 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  6. Combination of rs-fMRI and sMRI Data to Discriminate Autism Spectrum Disorders in Young Children Using Deep Belief Network.

    PubMed

    Akhavan Aghdam, Maryam; Sharifi, Arash; Pedram, Mir Mohsen

    2018-05-07

    In recent years, the use of advanced magnetic resonance (MR) imaging methods such as functional magnetic resonance imaging (fMRI) and structural magnetic resonance imaging (sMRI) has increased greatly in the study of neuropsychiatric disorders. Deep learning is a branch of machine learning that is increasingly being used for applications of medical image analysis such as computer-aided diagnosis. For the classification and representation learning tasks, this study utilized one of the most powerful deep learning algorithms, the deep belief network (DBN), on combined data from the Autism Brain Imaging Data Exchange I and II (ABIDE I and ABIDE II) datasets. The DBN was employed to combine resting-state fMRI (rs-fMRI), gray matter (GM), and white matter (WM) data, based on brain regions defined using automated anatomical labeling (AAL), in order to classify autism spectrum disorders (ASDs) against typical controls (TCs). Since the diagnosis of ASD is much more effective at an early age, only 185 individuals (116 ASD and 69 TC) ranging in age from 5 to 10 years were included in this analysis. In contrast to older methods, which consider only simple low-level features extracted from neuroimages, the proposed method exploits the latent, abstract high-level features inside the rs-fMRI and sMRI data. Moreover, combining multiple data types and increasing the depth of the DBN can improve classification accuracy. In this study, the best combination comprised rs-fMRI, GM, and WM for a DBN of depth 3, with 65.56% accuracy (sensitivity = 84%, specificity = 32.96%, F1 score = 74.76%) obtained via 10-fold cross-validation. This result outperforms previously presented methods on the ABIDE I dataset.

  7. Calorimetric method of ac loss measurement in a rotating magnetic field.

    PubMed

    Ghoshal, P K; Coombs, T A; Campbell, A M

    2010-07-01

    A method is described for calorimetric ac-loss measurements of high-T(c) superconductors (HTS) at 80 K. It is based on a technique used at 4.2 K for conventional superconducting wires that allows an easy loss measurement in parallel or perpendicular external field orientation. This paper focuses on the ac loss measurement setup and its calibration in a rotating magnetic field. The experimental setup demonstrates loss measurement by a temperature-rise method under the influence of a rotating magnetic field: the slight temperature increase of the sample in an ac field is used as a measure of the losses. The aim is to simulate the loss in rotating machines using HTS. This is a unique technique for measuring the total ac loss in HTS at power frequencies. The sample is mounted onto a cold finger extended from a liquid nitrogen heat exchanger (HEX). The thermal insulation between the HEX and the sample is provided by a material of low thermal conductivity and a sample holder with low eddy-current heating, in a vacuum vessel. A temperature sensor and a noninductive heater have been incorporated in the sample holder, allowing a rapid sample change. The main part of the data obtained in the calorimetric measurement is used for calibration. The focus is on the accuracy and calibrations required to predict the actual ac losses in HTS. This setup has the advantage of being able to measure the total ac loss under the influence of a continuously moving field, as experienced by any rotating machine.
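The calibration-based conversion at the heart of such a temperature-rise method can be sketched as follows: drive the noninductive heater at known powers with the field off, record the steady-state temperature rise, then invert that curve for the rise measured in the rotating field. All numbers below are invented for illustration, not the paper's measurements.

```python
import numpy as np

# Calibration: known heater powers (mW) vs steady-state temperature rise (K),
# measured with the ac field off (illustrative values).
heater_mw = np.array([0.0, 10.0, 20.0, 40.0, 80.0])
delta_t_k = np.array([0.0, 0.12, 0.25, 0.52, 1.10])

# Measurement: temperature rise of the HTS sample in the rotating field.
sample_rise = 0.33  # K, assumed

# Interpolate the calibration curve to convert the rise into an ac loss.
ac_loss_mw = np.interp(sample_rise, delta_t_k, heater_mw)
print(f"estimated ac loss: {ac_loss_mw:.1f} mW")
```

Because the heater sits in the same thermal environment as the sample, this substitution calibration cancels most systematic errors in the thermal link.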

  8. Development of an extended straightness measurement reference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schenz, R.F.; Griffith, L.V.; Sommargren, G.E.

    1988-09-06

    The most accurate diamond turning machines have used physical straightness references. These references commonly are made of optical materials, such as Zerodur, and are flat enough to permit straightness measurements with an accuracy of 100--150 nm (4--6 microinches) p-v. In most cases, the flatness error is stable and can be accommodated by using a calibration table. The straightedges for the Large Optics Diamond Turning Machine (LODTM) at Lawrence Livermore National Laboratory (LLNL) are 1.1 meters in length and allow a straightness reference accuracy of 25--50 nm (1--2 microinches) p-v after calibration. Fabrication problems become insurmountable when a straightness reference for a length of up to 4 meters is desired. Moreover, the method of calibration by straightedge reversal does not account for gravitational sag when the sensing direction is vertical. Vertical sensing would be required in a four meter system and sag would become unacceptably large. Recent developments published in the literature suggest that the use of a laser beam for a reference may be feasible. Workers at Osaka University have reported a laser beam straightness reference that has a resolution of 3.5 nm, although tests were done only over a 200 mm length. LLNL has begun an investigation on the use of a directionally stabilized laser beam as a straightness measurement reference. The goal of the investigation is to provide a reference that is accurate to 25 nm (1 microinch) over a four meter distance. 3 refs., 2 figs.

  9. Single element ultrasonic imaging of limb geometry: an in-vivo study with comparison to MRI

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang; Fincke, Jonathan R.; Anthony, Brian W.

    2016-04-01

    Despite advancements in medical imaging, current prosthetic fitting methods remain subjective, operator-dependent, and non-repeatable. The standard plaster casting method relies on the prosthetist's experience and tactile feel of the limb to design the prosthetic socket. Often, many fitting iterations are required to achieve an acceptable fit. Improper socket fittings can lead to painful pathologies including neuromas, inflammation, soft tissue calcification, and pressure sores, often forcing the wearer into a wheelchair and reducing mobility and quality of life. Computer software along with MRI/CT imaging has already been explored to aid the socket design process. In this paper, we explore the use of ultrasound instead of MRI/CT to accurately obtain the underlying limb geometry to assist the prosthetic socket design process. Using a single-element ultrasound system, multiple subjects' proximal limbs were imaged using 1, 2.25, and 5 MHz single-element transducers. Each ultrasound transducer was calibrated to ensure acoustic exposure within the limits defined by the FDA. To validate image quality, each patient was also imaged with MRI. Fiducial markers visible in both MRI and ultrasound were used to compare the same limb cross-sectional image for each patient. After applying a migration algorithm, B-mode ultrasound cross-sections showed sufficiently high image resolution to characterize the skin and bone boundaries along with the underlying tissue structures.

  10. Large enhancement of perfusion contribution on fMRI signal

    PubMed Central

    Wang, Xiao; Zhu, Xiao-Hong; Zhang, Yi; Chen, Wei

    2012-01-01

    The perfusion contribution to the total functional magnetic resonance imaging (fMRI) signal was investigated using a rat model with mild hypercapnia at 9.4 T, and human subjects with visual stimulation at 4 T. It was found that the total fMRI signal change could be approximated as a linear superposition of the 'true' blood oxygenation level-dependent (BOLD; T2/T2*) effect and the blood flow-related (T1) effect. The latter effect was significantly enhanced by using a short repetition time and a large radiofrequency pulse flip angle, and became comparable to the 'true' BOLD signal in response to a mild hypercapnia in the rat brain, resulting in an improved contrast-to-noise ratio (CNR). Bipolar diffusion gradients suppressed the intravascular signals but had no significant effect on the flow-related signal. Similar results of enhanced fMRI signal were observed in the human study. The overall results suggest that the observed flow-related signal enhancement likely originates from perfusion, and this enhancement can improve CNR and the spatial specificity for mapping brain activity and physiology changes. The nature of mixed BOLD and perfusion-related contributions in the total fMRI signal also has implications for BOLD quantification, in particular for the BOLD calibration model commonly used to estimate the change of cerebral metabolic rate of oxygen. PMID:22395206

  11. In vitro and in vivo magnetic resonance imaging with chlorotoxin-conjugated superparamagnetic nanoprobes for targeting hepatocarcinoma.

    PubMed

    Chen, Zhu; Xiao, En-Hua; Kang, Zhen; Zeng, Wen-Bin; Tan, Hui-Long; Li, Hua-Bing; Bian, Du-Jun; Shang, Quan-Liang

    2016-05-01

    The present study aimed to assess the in vitro and in vivo magnetic resonance imaging (MRI) features of chlorotoxin (CTX)-conjugated superparamagnetic iron oxide (SPIO) nanoprobes. The CTX-conjugated nanoprobes were composed of SPIO coated with polyethylene glycol (PEG) and conjugated with CTX, and were termed SPIO-PEG-CTX. MRI of the SPIO and SPIO-PEG-CTX solutions at different concentrations was performed with a 3.0-T MRI scanner (Philips Achieva 3.0T X Series; Philips Healthcare, The Netherlands). Rabbit VX2 hepatocarcinoma was established by a traditional laparotomy method (injection of the tumor particles into the liver using a 15G syringe needle) following approval by the institutional animal care and use committee. Contrast-enhanced MRI of VX2 rabbits (n=8) was performed using the same MRI scanner with SPIO-PEG-CTX solutions as the contrast agent. Data were analyzed with a calibration curve and a paired t-test. The SPIO-PEG-CTX nanoparticles were successfully prepared. With increasing concentrations of the solutions, the MRI signal intensity increased on T1WI but decreased on T2WI, the same pattern as that observed for SPIO. Rabbit VX2 carcinoma appeared as a low MRI signal on T1WI and a high signal on T2WI. After injection of the contrast agent, the MRI signal of the carcinoma on T2WI was decreased relative to that before injection (1,161±331.5 vs. 1,346±300.5; P=0.004<0.05), while the signal of the adjacent normal hepatic tissues was unchanged (480.6±165.1 vs. 563.4±67.8; P=0.202>0.05). The SPIO-PEG-CTX nanoparticles showed negative MRI enhancement on T2WI and a targeting effect in liver cancer, which provides a theoretical basis for further study of the early diagnosis of hepatocellular carcinoma.
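The opposite T1WI/T2WI trends with concentration follow from the standard relaxivity model, 1/Ti = 1/Ti,0 + ri·C. The relaxivities and sequence timings below are illustrative assumptions, not measured values for SPIO-PEG-CTX:

```python
import math

# Hypothetical relaxivities for an SPIO-type agent (not from the paper):
R1, R2 = 5.0, 90.0       # s^-1 per mM
T1_0, T2_0 = 0.8, 0.08   # baseline tissue relaxation times, s

def t1w_signal(c_mM, tr=0.5):
    """Saturation-recovery T1-weighted signal: rises with concentration."""
    r1 = 1.0 / T1_0 + R1 * c_mM
    return 1.0 - math.exp(-tr * r1)

def t2w_signal(c_mM, te=0.08):
    """Spin-echo T2-weighted signal: falls with concentration."""
    r2 = 1.0 / T2_0 + R2 * c_mM
    return math.exp(-te * r2)

for c in (0.0, 0.1, 0.2):
    print(round(t1w_signal(c), 3), round(t2w_signal(c), 4))
# T1W values increase and T2W values decrease with concentration,
# matching the trend reported for both SPIO and SPIO-PEG-CTX.
```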

  12. Monitoring local heating around an interventional MRI antenna with RF radiometry

    PubMed Central

    Ertürk, M. Arcan; El-Sharkawy, AbdEl-Monem M.; Bottomley, Paul A.

    2015-01-01

    Purpose: Radiofrequency (RF) radiometry uses thermal noise detected by an antenna to measure the temperature of objects independent of medical imaging technologies such as magnetic resonance imaging (MRI). Here, an active interventional MRI antenna can be deployed as an RF radiometer to measure local heating, as a possible new method of monitoring device safety and thermal therapy. Methods: A 128 MHz radiometer receiver was fabricated to measure the RF noise voltage from an interventional 3 T MRI loopless antenna and calibrated for temperature in a uniformly heated bioanalogous gel phantom. Local heating (ΔT) was induced using the antenna for RF transmission and measured by RF radiometry, fiber-optic thermal sensors, and MRI thermometry. The spatial thermal sensitivity of the antenna radiometer was numerically computed using a method-of-moments electric field analysis. The gel’s thermal conductivity was measured by MRI thermometry, and the localized time-dependent ΔT distribution computed from the bioheat transfer equation and compared with radiometry measurements. An “H-factor” relating the 1 g-averaged ΔT to the radiometric temperature was introduced to estimate peak temperature rise in the antenna’s sensitive region. Results: The loopless antenna radiometer linearly tracked temperature inside a thermally equilibrated phantom up to 73 °C to within ±0.3 °C at a 2 Hz sample rate. Computed and MRI thermometric measures of peak ΔT agreed within 13%. The peak 1 g-average temperature was H = 1.36 ± 0.02 times higher than the radiometric temperature for any media with a thermal conductivity of 0.15–0.50 (W/m)/K, indicating that the radiometer can measure peak 1 g-averaged ΔT in physiologically relevant tissue within ±0.4 °C. 
Conclusions: Active internal MRI detectors can serve as RF radiometers at the MRI frequency to provide accurate independent measures of local and peak temperature without the artifacts that can accompany MRI thermometry or the extra space needed to accommodate alternative thermal transducers. An RF radiometer could be integrated into an MRI scanner to permit “self-monitoring” for assuring device safety and/or monitoring delivery of thermal therapy. PMID:25735295
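The radiometric principle here is Johnson-Nyquist noise: a matched receiver sees noise power P = kB·T·B, so once gain and bandwidth are calibrated, temperature follows from measured noise power. A minimal round-trip sketch; the 1 MHz bandwidth and unit gain are assumptions for illustration, not the paper's receiver parameters:

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def noise_power(temp_K, bandwidth_Hz):
    """Available Johnson-Nyquist noise power from a matched source."""
    return K_B * temp_K * bandwidth_Hz

def radiometric_temperature(power_W, bandwidth_Hz, gain=1.0):
    """Invert the calibrated power-temperature relation."""
    return power_W / (gain * K_B * bandwidth_Hz)

# Round-trip check at body temperature over an assumed 1 MHz bandwidth.
p = noise_power(310.0, 1e6)
print(radiometric_temperature(p, 1e6))  # 310.0 K
```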

  13. Prediction of individual clinical scores in patients with Parkinson's disease using resting-state functional magnetic resonance imaging.

    PubMed

    Hou, YanBing; Luo, ChunYan; Yang, Jing; Ou, RuWei; Song, Wei; Wei, QianQian; Cao, Bei; Zhao, Bi; Wu, Ying; Shang, Hui-Fang; Gong, QiYong

    2016-07-15

    Neuroimaging holds promise that it may one day aid clinical assessment. However, the vast majority of studies using resting-state functional magnetic resonance imaging (fMRI) have reported average differences between Parkinson's disease (PD) patients and healthy controls, which do not permit inferences at the level of individuals. This study aimed to develop a model for the prediction of PD illness severity ratings from individual fMRI brain scans. Resting-state fMRI scans were obtained from 84 patients with PD, and Unified Parkinson's Disease Rating Scale-III (UPDRS-III) scores were obtained before scanning. The relevance vector regression (RVR) method was used to predict clinical scores (UPDRS-III) from fMRI scans. The application of RVR to whole-brain resting-state fMRI data allowed prediction of UPDRS-III scores with statistically significant accuracy (correlation=0.35, P-value=0.001; mean sum of squares=222.17, P-value=0.002). This prediction was informed strongly by negative-weight areas including the prefrontal lobe and medial occipital lobe, and positive-weight areas including the medial parietal lobe. This suggests that fMRI scans contain sufficient information about neurobiological change in patients with PD to permit accurate prediction of illness severity on an individual-subject basis. Our results provide preliminary, proof-of-concept evidence that fMRI might become a clinically useful quantitative assessment aid in PD at the individual level, for example for patients unable to cooperate with clinical rating, allowing a more efficient use of health care resources. Copyright © 2016 Elsevier B.V. All rights reserved.
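As a rough illustration of the score-prediction workflow: RVR itself is a sparse Bayesian kernel method, so plain ridge regression is used below as a simple stand-in, on synthetic data rather than the study's scans:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_vox = 100, 50
X = rng.normal(size=(n_subj, n_vox))        # synthetic "fMRI features"
w_true = np.zeros(n_vox)
w_true[:5] = 2.0                            # a few informative voxels
y = X @ w_true + rng.normal(size=n_subj)    # synthetic severity scores

# Closed-form ridge regression as a simple stand-in for RVR.
lam = 10.0
Xtr, ytr, Xte, yte = X[:70], y[:70], X[70:], y[70:]
w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_vox), Xtr.T @ ytr)
r = np.corrcoef(Xte @ w, yte)[0, 1]         # predicted vs. true scores
print(r > 0.5)  # clearly positive out-of-sample correlation
```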

  14. MRI Customized Play Therapy in Children Reduces the Need for Sedation--A Randomized Controlled Trial.

    PubMed

    Bharti, Bhavneet; Malhi, Prahbhjot; Khandelwal, N

    2016-03-01

    To evaluate the effectiveness of an MRI-specific play therapy intervention on the need for sedation in young children. All children in the age group of 4-10 y who were advised an MRI scan over a period of one year were randomized. Exclusion criteria included children with neurodevelopmental disorders impairing cognition and children who had previously undergone diagnostic MRI. A total of 79 children were randomized to a control or an intervention condition. The intervention involved familiarizing the child with the MRI model machine, listing the steps involved in the scan to the child in vivid detail, training the child to lie still for 5 min, and conducting several dry runs with a doll or a favorite toy. The study was approved by the Institute's ethics committee. The need for sedation was 41 % (n = 16) in the control group, and this declined to 20 % (n = 8) in the intervention group (χ² = 4.13; P = 0.04). The relative risk of sedation decreased by 49 % in the intervention group as compared to the control group (RR 0.49; 95 % CI: 0.24-1.01), and this difference was statistically significant (P = 0.04). The absolute risk difference in sedation use between the intervention and control groups was 21 % (95 % CI 1.3 %-40.8 %). Even on adjusting for age, the relative risk of sedation remained significantly lower in children undergoing play therapy as compared to the control group (RR 0.57, 95 % CI: 0.32-0.98) with a P value of 0.04. The use of MRI customized play therapy with pediatric patients undergoing diagnostic MRI resulted in a significant reduction in the use of sedation.
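The reported effect sizes can be reproduced from the abstract's counts; the group sizes of 39 and 40 are inferred from the percentages and the total n = 79, so treat them as approximate:

```python
# Counts reported in the abstract: sedation in 16 controls (41 %) and
# 8 intervention children (20 %); denominators inferred, not stated.
sed_ctrl, n_ctrl = 16, 39
sed_int, n_int = 8, 40

risk_ctrl = sed_ctrl / n_ctrl
risk_int = sed_int / n_int
rr = risk_int / risk_ctrl      # relative risk
ard = risk_ctrl - risk_int     # absolute risk difference

print(round(rr, 2), round(ard * 100))  # 0.49 21
```

These match the abstract's RR of 0.49 and 21 % absolute risk difference.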

  15. Rats avoid high magnetic fields: dependence on an intact vestibular system

    PubMed Central

    Houpt, Thomas A.; Cassell, Jennifer A.; Riccardi, Christina; DenBleyker, Megan D.; Hood, Alison; Smith, James C.

    2009-01-01

    High strength static magnetic fields are thought to be benign and largely undetectable by mammals. As magnetic resonance imaging (MRI) machines increase in strength, however, potential aversive effects may become clinically relevant. Here we report that rats find entry into a 14.1 T magnet aversive, and that they can detect and avoid entry into the magnet at a point where the magnetic field is 2 T or lower. Rats were trained to climb a ladder through the bore of a 14.1 T superconducting magnet. After their first climb into 14.1 T, most rats refused to re-enter the magnet or climb past the 2 T field line. This result was confirmed in a resistive magnet in which the magnetic field was varied from 1 to 14 T. Detection and avoidance required the vestibular apparatus of the inner ear, because labyrinthectomized rats readily traversed the magnet. The inner ear is a novel site for magnetic field transduction in mammals, but perturbation of the vestibular apparatus would be consistent with human reports of vertigo and nausea around high strength MRI machines. PMID:17585969

  16. MRI texture features as biomarkers to predict MGMT methylation status in glioblastomas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korfiatis, Panagiotis; Kline, Timothy L.; Erickson, Bradley J., E-mail: bje@mayo.edu

    Purpose: Imaging biomarker research focuses on discovering relationships between radiological features and histological findings. In glioblastoma patients, methylation of the O6-methylguanine methyltransferase (MGMT) gene promoter is positively correlated with an increased effectiveness of the current standard of care. In this paper, the authors investigate texture features as potential imaging biomarkers for capturing the MGMT methylation status of glioblastoma multiforme (GBM) tumors when combined with supervised classification schemes. Methods: A retrospective study of 155 GBM patients with known MGMT methylation status was conducted. Co-occurrence and run length texture features were calculated, and both support vector machines (SVMs) and random forest classifiers were used to predict MGMT methylation status. Results: The best classification system (an SVM-based classifier) had a maximum area under the receiver-operating characteristic (ROC) curve of 0.85 (95% CI: 0.78–0.91) using four texture features (correlation, energy, entropy, and local intensity) originating from the T2-weighted images, yielding, at the optimal threshold of the ROC curve, a sensitivity of 0.803 and a specificity of 0.813. Conclusions: The results show that supervised machine learning of MRI texture features can predict MGMT methylation status in preoperative GBM tumors, thus providing a new noninvasive imaging biomarker.
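Co-occurrence (GLCM) features such as the energy and entropy named in the Results can be sketched in a few lines of numpy; the toy patches below are illustrative and unrelated to the study's images:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    p = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            p[img[i, j], img[i + dy, j + dx]] += 1
    return p / p.sum()

def energy(p):
    return float((p ** 2).sum())

def entropy(p):
    nz = p[p > 0]
    return float(-(nz * np.log2(nz)).sum())

# A uniform toy patch has high energy and low entropy;
# a random patch shows the opposite pattern.
flat = np.zeros((8, 8), dtype=int)
noisy = np.random.default_rng(1).integers(0, 4, size=(8, 8))
p_flat, p_noisy = glcm(flat, 4), glcm(noisy, 4)
print(energy(p_flat) > energy(p_noisy), entropy(p_flat) < entropy(p_noisy))
```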

  17. Computer-aided classification of Alzheimer's disease based on support vector machine with combination of cerebral image features in MRI

    NASA Astrophysics Data System (ADS)

    Jongkreangkrai, C.; Vichianin, Y.; Tocharoenchai, C.; Arimura, H.; Alzheimer's Disease Neuroimaging Initiative

    2016-03-01

    Several studies have differentiated Alzheimer's disease (AD) using cerebral image features derived from MR brain images. In this study, we were interested in combining hippocampus and amygdala volumes and entorhinal cortex thickness to improve the performance of AD differentiation. Thus, our objective was to investigate the useful features obtained from MRI for classification of AD patients using a support vector machine (SVM). T1-weighted MR brain images of 100 AD patients and 100 normal subjects were processed using FreeSurfer software to measure hippocampus and amygdala volumes and entorhinal cortex thicknesses in both brain hemispheres. Relative volumes of the hippocampus and amygdala were calculated to correct for variation in individual head size. SVM was employed with five combinations of features (H: hippocampus relative volumes, A: amygdala relative volumes, E: entorhinal cortex thicknesses, HA: hippocampus and amygdala relative volumes, and ALL: all features). Receiver operating characteristic (ROC) analysis was used to evaluate the method. AUC values of the five combinations were 0.8575 (H), 0.8374 (A), 0.8422 (E), 0.8631 (HA) and 0.8906 (ALL). Although “ALL” provided the highest AUC, there were no statistically significant differences among them except for the “A” feature. Our results showed that all suggested features may be feasible for computer-aided classification of AD patients.
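ROC AUC, the study's evaluation metric, can be computed directly from the rank (Mann-Whitney) statistic. The volumes below are hypothetical toy values, not FreeSurfer output:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """ROC AUC via the Mann-Whitney U statistic (ties get half credit)."""
    sp = np.asarray(scores_pos, dtype=float)[:, None]
    sn = np.asarray(scores_neg, dtype=float)[None, :]
    wins = (sp > sn).sum() + 0.5 * (sp == sn).sum()
    return wins / (sp.size * sn.size)

# Toy example: hippocampal relative volumes, lower in the "AD" group,
# so the negated volume serves as the classification score.
ad = [2.1, 2.3, 2.0, 2.4, 2.2]  # hypothetical relative volumes (x1e-3)
hc = [2.8, 2.6, 2.9, 2.5, 2.7]
print(auc([-v for v in ad], [-v for v in hc]))  # 1.0: fully separated groups
```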

  18. Neural mechanisms of cue-approach training

    PubMed Central

    Bakkour, Akram; Lewis-Peacock, Jarrod A.; Poldrack, Russell A.; Schonberg, Tom

    2016-01-01

    Biasing choices may prove a useful way to implement behavior change. Previous work has shown that a simple training task (the cue-approach task), which does not rely on external reinforcement, can robustly influence choice behavior by biasing choice toward items that were targeted during training. In the current study, we replicate previous behavioral findings and explore the neural mechanisms underlying the shift in preferences following cue-approach training. Given recent successes in the development and application of machine learning techniques to task-based fMRI data, which have advanced understanding of the neural substrates of cognition, we sought to leverage the power of these techniques to better understand neural changes during cue-approach training that subsequently led to a shift in choice behavior. Contrary to our expectations, we found that machine learning techniques applied to fMRI data during non-reinforced training were unsuccessful in elucidating the neural mechanism underlying the behavioral effect. However, univariate analyses during training revealed that the relationship between BOLD and choices for Go items increases as training progresses compared to choices of NoGo items primarily in lateral prefrontal cortical areas. This new imaging finding suggests that preferences are shifted via differential engagement of task control networks that interact with value networks during cue-approach training. PMID:27677231

  19. Comparison of Artificial Immune System and Particle Swarm Optimization Techniques for Error Optimization of Machine Vision Based Tool Movements

    NASA Astrophysics Data System (ADS)

    Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod

    2015-10-01

    In the conventional tool positioning technique, sensors embedded in the motion stages provide accurate tool position information. In this paper, a machine vision based system and image processing technique for motion measurement of a lathe tool from two-dimensional sequential images, captured using a charge-coupled device camera with a resolution of 250 microns, is described. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the value of the distance traversed by the tool calculated from these images. Optimization of the errors in lathe tool movement due to the machine vision system, calibration, environmental factors, etc. was carried out using two soft computing techniques, namely artificial immune system (AIS) and particle swarm optimization (PSO). The results show the better capability of AIS over PSO.
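A minimal sketch of PSO applied to a toy calibration-error objective; the pixel-to-mm scale factor and swarm settings are illustrative assumptions, not the paper's:

```python
import random

def pso(f, lo, hi, n=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Minimal 1-D particle swarm optimization of f over [lo, hi]."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n)]
    v = [0.0] * n
    pb = x[:]                              # personal best positions
    gb = min(pb, key=f)                    # global best position
    for _ in range(iters):
        for i in range(n):
            v[i] = (w * v[i] + c1 * rng.random() * (pb[i] - x[i])
                    + c2 * rng.random() * (gb - x[i]))
            x[i] = min(hi, max(lo, x[i] + v[i]))
            if f(x[i]) < f(pb[i]):
                pb[i] = x[i]
        gb = min(pb + [gb], key=f)
    return gb

# Toy calibration-error model: squared error minimized at scale = 1.04
# (a hypothetical pixel-to-mm scale factor, not the paper's data).
err = lambda s: (s - 1.04) ** 2
print(round(pso(err, 0.5, 1.5), 2))  # ≈ 1.04
```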

  20. Prediction of lithium response in first-episode mania using the LITHium Intelligent Agent (LITHIA): Pilot data and proof-of-concept.

    PubMed

    Fleck, David E; Ernest, Nicholas; Adler, Caleb M; Cohen, Kelly; Eliassen, James C; Norris, Matthew; Komoroski, Richard A; Chu, Wen-Jang; Welge, Jeffrey A; Blom, Thomas J; DelBello, Melissa P; Strakowski, Stephen M

    2017-06-01

    Individualized treatment for bipolar disorder based on neuroimaging treatment targets remains elusive. To address this shortcoming, we developed a linguistic machine learning system based on a cascading genetic fuzzy tree (GFT) design called the LITHium Intelligent Agent (LITHIA). Using multiple objectively defined functional magnetic resonance imaging (fMRI) and proton magnetic resonance spectroscopy (1H-MRS) inputs, we tested whether LITHIA could accurately predict the lithium response in participants with first-episode bipolar mania. We identified 20 subjects with first-episode bipolar mania who received an adequate trial of lithium over 8 weeks and both fMRI and 1H-MRS scans at baseline pre-treatment. We trained LITHIA using 18 1H-MRS and 90 fMRI inputs over four training runs to classify treatment response and predict symptom reductions. Each training run contained a randomly selected 80% of the total sample and was followed by a 20% validation run. Over a different randomly selected distribution of the sample, we then compared LITHIA to eight common classification methods. LITHIA demonstrated nearly perfect classification accuracy and was able to predict post-treatment symptom reductions at 8 weeks with at least 88% accuracy in training and 80% accuracy in validation. Moreover, LITHIA exceeded the predictive capacity of the eight comparator methods and showed little tendency towards overfitting. The results provided proof-of-concept that a novel GFT is capable of providing control to a multidimensional bioinformatics problem, namely prediction of the lithium response, in a pilot data set. Future work on this, and similar machine learning systems, could help assign psychiatric treatments more efficiently, thereby optimizing outcomes and limiting unnecessary treatment. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. Tracking neural coding of perceptual and semantic features of concrete nouns

    PubMed Central

    Sudre, Gustavo; Pomerleau, Dean; Palatucci, Mark; Wehbe, Leila; Fyshe, Alona; Salmelin, Riitta; Mitchell, Tom

    2015-01-01

    We present a methodological approach employing magnetoencephalography (MEG) and machine learning techniques to investigate the flow of perceptual and semantic information decodable from neural activity in the half second during which the brain comprehends the meaning of a concrete noun. Important information about the cortical location of neural activity related to the representation of nouns in the human brain has been revealed by past studies using fMRI. However, the temporal sequence of processing from sensory input to concept comprehension remains unclear, in part because of the poor time resolution provided by fMRI. In this study, subjects answered 20 questions (e.g. is it alive?) about the properties of 60 different nouns prompted by simultaneous presentation of a pictured item and its written name. Our results show that the neural activity observed with MEG encodes a variety of perceptual and semantic features of stimuli at different times relative to stimulus onset, and in different cortical locations. By decoding these features, our MEG-based classifier was able to reliably distinguish between two different concrete nouns that it had never seen before. The results demonstrate that there are clear differences between the time course of the magnitude of MEG activity and that of decodable semantic information. Perceptual features were decoded from MEG activity earlier in time than semantic features, and features related to animacy, size, and manipulability were decoded consistently across subjects. We also observed that regions commonly associated with semantic processing in the fMRI literature may not show high decoding results in MEG. We believe that this type of approach and the accompanying machine learning methods can form the basis for further modeling of the flow of neural information during language processing and a variety of other cognitive processes. PMID:22565201

  2. Optimal classification for the diagnosis of duchenne muscular dystrophy images using support vector machines.

    PubMed

    Zhang, Ming-Huan; Ma, Jun-Shan; Shen, Ying; Chen, Ying

    2016-09-01

    This study aimed to investigate the optimal support vector machine (SVM)-based classifier of Duchenne muscular dystrophy (DMD) magnetic resonance imaging (MRI) images. T1-weighted (T1W) and T2-weighted (T2W) images of 15 boys with DMD and 15 normal controls were obtained. Textural features of the images were extracted and wavelet decomposed, and principal features were then selected. Scale transform was then performed for the MRI images. Afterward, SVM-based classifiers of the MRI images were analyzed based on the radial basis function and decomposition levels. The cost parameter C and kernel parameter [Formula: see text] were used for classification. Then, the optimal SVM-based classifier, expressed as [Formula: see text], was identified by performance evaluation (sensitivity, specificity and accuracy). Eight of 12 textural features were selected as principal features (eigenvalues [Formula: see text]). The 16 SVM-based classifiers were obtained using combinations of (C, [Formula: see text]), and those with lower C and [Formula: see text] values showed higher performance, especially the classifier [Formula: see text]. The SVM-based classifiers of T1W images showed higher performance than those of T2W images at the same decomposition level. The T1W images in the classifier [Formula: see text] at level 2 decomposition showed the highest performance of all, with sensitivity, specificity, and accuracy reaching 96.9, 97.3, and 97.1 %, respectively, demonstrating that it was the optimal classifier for the diagnosis of DMD.
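The performance-evaluation step (sensitivity, specificity, accuracy) and the RBF kernel governed by the gamma parameter can be sketched as follows; the confusion-matrix counts are hypothetical, chosen only to land near the reported figures:

```python
import math

def rbf_kernel(x, y, gamma):
    """RBF (radial basis function) kernel value used by an SVM."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def performance(tp, fn, tn, fp):
    """Sensitivity, specificity, accuracy from confusion-matrix counts."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fn + tn + fp)
    return sens, spec, acc

# Hypothetical counts (not the study's): 31/32 patients and
# 35/36 controls classified correctly.
print([round(v, 3) for v in performance(31, 1, 35, 1)])
# Identical points give kernel value 1; distant points approach 0.
print(rbf_kernel([1, 2], [1, 2], 0.5), rbf_kernel([0, 0], [5, 5], 0.5) < 1e-6)
```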

  3. Development of gait segmentation methods for wearable foot pressure sensors.

    PubMed

    Crea, S; De Rossi, S M M; Donati, M; Reberšek, P; Novak, D; Vitiello, N; Lenzi, T; Podobnik, J; Munih, M; Carrozza, M C

    2012-01-01

    We present an automated segmentation method based on the analysis of plantar pressure signals recorded from two synchronized wireless foot insoles. Given the strict limits on computational power and power consumption typical of wearable electronic components, our aim is to investigate the capability of a Hidden Markov Model machine-learning method to detect gait phases with different levels of complexity in the processing of the wearable pressure sensor signals. Therefore, three different datasets are developed: raw voltage values, calibrated sensor signals, and a calibrated estimation of the total ground reaction force and the position of the plantar center of pressure. The method is tested on a pool of 5 healthy subjects through leave-one-out cross-validation. The results show high classification performance achieved using the estimated biomechanical variables, averaging 96%. Calibrated signals and raw voltage values show higher delays and dispersions in phase transition detection, suggesting lower reliability for online applications.
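The decoding step of an HMM segmenter is the Viterbi algorithm. Below is a self-contained two-phase (stance/swing) sketch on a toy 1-D pressure trace, not the paper's insole data or model:

```python
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    """Most likely state path for an HMM (e.g., gait phases)."""
    T, S = log_emit.shape
    score = log_init + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans        # prev state -> next state
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_emit[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two toy phases from a normalized pressure level: high pressure favors
# stance (state 0), low pressure favors swing (state 1).
pressure = np.array([0.9, 0.8, 0.7, 0.1, 0.05, 0.1, 0.95])
log_emit = np.log(np.c_[pressure, 1 - pressure] + 1e-9)
log_trans = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))  # sticky phases
log_init = np.log(np.array([0.5, 0.5]))
print(viterbi(log_emit, log_trans, log_init))  # [0, 0, 0, 1, 1, 1, 0]
```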

  4. Impact and Estimation of Balance Coordinate System Rotations and Translations in Wind-Tunnel Testing

    NASA Technical Reports Server (NTRS)

    Toro, Kenneth G.; Parker, Peter A.

    2017-01-01

    Discrepancies between the model and balance coordinate systems lead to biases in the aerodynamic measurements during wind-tunnel testing. The reference coordinate system, relative to the calibration coordinate system at which the forces and moments are resolved, is crucial to the overall accuracy of force measurements. This paper discusses sources of discrepancies and estimates of coordinate system rotation and translation due to machining and assembly differences. A methodology for numerically estimating the coordinate system biases is developed and discussed. Two case studies are presented using this methodology to estimate the model alignment. Examples span from angle measurement system shifts on the calibration system to discrepancies in actual wind-tunnel data. The results from these case studies will help aerodynamic researchers and force balance engineers better understand and identify potential differences in calibration systems due to coordinate system rotation and translation.
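The core effect of a coordinate-system rotation bias can be shown with a 2-D rotation of a force vector; the 0.1 degree misalignment and 1000 N load below are illustrative assumptions, not values from the paper:

```python
import math

def rotate2d(fx, fy, theta_rad):
    """Force components re-expressed after a frame rotation by theta."""
    c, s = math.cos(theta_rad), math.sin(theta_rad)
    return c * fx - s * fy, s * fx + c * fy

# A hypothetical 0.1 deg misalignment between balance and model axes
# leaks a fraction of a 1000 N normal force into the axial channel.
fx, fy = rotate2d(0.0, 1000.0, math.radians(0.1))
print(round(fx, 2))  # ≈ -1.75 N of spurious axial force
```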

  5. Geometrical pose and structural estimation from a single image for automatic inspection of filter components

    NASA Astrophysics Data System (ADS)

    Liu, Yonghuai; Rodrigues, Marcos A.

    2000-03-01

    This paper describes research on the application of machine vision techniques to a real time automatic inspection task of air filter components in a manufacturing line. A novel calibration algorithm is proposed based on a special camera setup where defective items would show a large calibration error. The algorithm makes full use of rigid constraints derived from the analysis of geometrical properties of reflected correspondence vectors which have been synthesized into a single coordinate frame and provides a closed form solution to the estimation of all parameters. For a comparative study of performance, we also developed another algorithm based on this special camera setup using epipolar geometry. A number of experiments using synthetic data have shown that the proposed algorithm is generally more accurate and robust than the epipolar geometry based algorithm and that the geometric properties of reflected correspondence vectors provide effective constraints to the calibration of rigid body transformations.

  6. gr-MRI: A software package for magnetic resonance imaging using software defined radios

    NASA Astrophysics Data System (ADS)

    Hasselwander, Christopher J.; Cao, Zhipeng; Grissom, William A.

    2016-09-01

    The goal of this work is to develop software that enables the rapid implementation of custom MRI spectrometers using commercially-available software defined radios (SDRs). The developed gr-MRI software package comprises a set of Python scripts, flowgraphs, and signal generation and recording blocks for GNU Radio, an open-source SDR software package that is widely used in communications research. gr-MRI implements basic event sequencing functionality, and tools for system calibrations, multi-radio synchronization, and MR signal processing and image reconstruction. It includes four pulse sequences: a single-pulse sequence to record free induction signals, a gradient-recalled echo imaging sequence, a spin echo imaging sequence, and an inversion recovery spin echo imaging sequence. The sequences were used to perform phantom imaging scans with a 0.5 Tesla tabletop MRI scanner and two commercially-available SDRs. One SDR was used for RF excitation and reception, and the other for gradient pulse generation. The total SDR hardware cost was approximately $2000. The frequency of radio desynchronization events and the frequency with which the software recovered from those events were also measured, and the SDR's ability to generate frequency-swept RF waveforms was validated and compared to the scanner's commercial spectrometer. The spin echo images geometrically matched those acquired using the commercial spectrometer, with no unexpected distortions. Desynchronization events were more likely to occur at the very beginning of an imaging scan, but were nearly eliminated if the user invoked the sequence for a short period before beginning data recording. The SDR produced a 500 kHz bandwidth frequency-swept pulse with high fidelity, while the commercial spectrometer produced a waveform with large frequency spike errors. In conclusion, the developed gr-MRI software can be used to develop high-fidelity, low-cost custom MRI spectrometers using commercially-available SDRs.
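The frequency-swept pulse mentioned in the results is a linear chirp; a minimal numpy sketch of generating one (the baseband sweep limits and sample rate are illustrative assumptions, not gr-MRI's actual settings):

```python
import numpy as np

def linear_chirp(f0, f1, duration_s, sample_rate):
    """Frequency-swept (linear chirp) waveform, unit amplitude."""
    n = int(round(duration_s * sample_rate))
    t = np.arange(n) / sample_rate
    k = (f1 - f0) / duration_s                     # sweep rate, Hz/s
    phase = 2 * np.pi * (f0 * t + 0.5 * k * t ** 2)
    return np.cos(phase)

# A 500 kHz-wide sweep (-250 to +250 kHz at baseband) over 1 ms,
# sampled at 2 MS/s: parameter values chosen only for illustration.
x = linear_chirp(-250e3, 250e3, 1e-3, 2e6)
print(len(x), float(np.abs(x).max()) <= 1.0)  # 2000 True
```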

  7. Reproducibility of Brain Morphometry from Short-Term Repeat Clinical MRI Examinations: A Retrospective Study

    PubMed Central

    Liu, Hon-Man; Chen, Shan-Kai; Chen, Ya-Fang; Lee, Chung-Wei; Yeh, Lee-Ren

    2016-01-01

    Purpose To assess the inter-session reproducibility of automatic segmented MRI-derived measures by FreeSurfer in a group of subjects with normal-appearing MR images. Materials and Methods After retrospectively reviewing a brain MRI database from our institute consisting of 14,758 adults, those subjects who had repeat scans and had no history of neurodegenerative disorders were selected for morphometry analysis using FreeSurfer. A total of 34 subjects were grouped by MRI scanner model. After automatic segmentation using FreeSurfer, label-wise comparison (involving area, thickness, and volume) was performed on all segmented results. An intraclass correlation coefficient was used to estimate the agreement between sessions. The Wilcoxon signed rank test was used to assess the population mean rank differences across sessions. Mean-difference analysis was used to evaluate the difference intervals across scanners. Absolute percent difference was used to estimate the reproducibility errors across the MRI models. The Kruskal-Wallis test was used to determine the across-scanner effect. Results The agreement in segmentation results for area, volume, and thickness measurements of all segmented anatomical labels was generally higher in the Signa Excite and Verio models when compared with the Sonata and TrioTim models. There were significant rank differences found across sessions in some labels of different measures. Smaller difference intervals in global volume measurements were noted on images acquired by the Signa Excite and Verio models. For some brain regions, significant MRI model effects were observed on certain segmentation results. Conclusions Short-term scan-rescan reliability of automatic brain MRI morphometry is feasible in the clinical setting. 
However, since the repeatability of software performance is contingent on the reproducibility of scanner performance, the scanner must be calibrated before conducting such studies or before using such software for retrospective review. PMID:26812647
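The absolute percent difference used here as a scan-rescan reproducibility error has a one-line definition; the volumes below are hypothetical values, not study data:

```python
def absolute_percent_difference(a, b):
    """Scan-rescan reproducibility error, relative to the session mean."""
    return abs(a - b) / ((a + b) / 2.0) * 100.0

# Hypothetical hippocampal volumes (mm^3) from two sessions:
print(round(absolute_percent_difference(4200.0, 4100.0), 2))  # 2.41
```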

  8. Focal versus distributed temporal cortex activity for speech sound category assignment

    PubMed Central

    Bouton, Sophie; Chambon, Valérian; Tyrand, Rémi; Seeck, Margitta; Karkar, Sami; van de Ville, Dimitri; Giraud, Anne-Lise

    2018-01-01

    Percepts and words can be decoded from distributed neural activity measures. However, the existence of widespread representations might conflict with the more classical notions of hierarchical processing and efficient coding, which are especially relevant in speech processing. Using fMRI and magnetoencephalography during syllable identification, we show that sensory and decisional activity colocalize to a restricted part of the posterior superior temporal gyrus (pSTG). Next, using intracortical recordings, we demonstrate that early and focal neural activity in this region distinguishes correct from incorrect decisions and can be machine-decoded to classify syllables. Crucially, significant machine decoding was possible from neuronal activity sampled across different regions of the temporal and frontal lobes, despite weak or absent sensory or decision-related responses. These findings show that speech-sound categorization relies on an efficient readout of focal pSTG neural activity, while more distributed activity patterns, although classifiable by machine learning, instead reflect collateral processes of sensory perception and decision. PMID:29363598

  9. Effect of echo spacing and readout bandwidth on basic performances of EPI-fMRI acquisition sequences implemented on two 1.5 T MR scanner systems.

    PubMed

    Giannelli, Marco; Diciotti, Stefano; Tessa, Carlo; Mascalchi, Mario

    2010-01-01

    Although in EPI-fMRI analyses typical acquisition parameters (TR, TE, matrix, slice thickness, etc.) are generally employed, various readout bandwidth (BW) values are used as a function of the gradient characteristics of the MR scanner. Echo spacing (ES) is another fundamental parameter of EPI-fMRI acquisition sequences, but the employed ES value is not usually reported in fMRI studies. In the present work, the authors investigated the effect of ES and BW on basic performances of EPI-fMRI sequences in terms of temporal stability and overall image quality of time series acquisition. EPI-fMRI acquisitions of the same water phantom were performed using two clinical MR scanner systems (scanners A and B) with different gradient characteristics and functional designs of radiofrequency coils. For both scanners, the employed ES values ranged from 0.75 to 1.33 ms. The used BW values ranged from 125.0 to 250.0 kHz/64 pixels and from 78.1 to 185.2 kHz/64 pixels for scanners A and B, respectively. The temporal stability of the EPI-fMRI sequence was assessed by measuring the signal-to-fluctuation noise ratio (SFNR) and signal drift (DR), while the overall image quality was assessed by evaluating the signal-to-noise ratio (SNR(ts)) and nonuniformity (NU(ts)) of the time series acquisition. For both scanners, no significant effect of ES and BW on signal drift was revealed. The SFNR, NU(ts) and SNR(ts) values of scanner A did not significantly vary with ES. On the other hand, the SFNR, NU(ts), and SNR(ts) values of scanner B significantly varied with ES. SFNR (5.8%) and SNR(ts) (5.9%) increased with increasing ES. SFNR (25% scanner A, 32% scanner B) and SNR(ts) (26.2% scanner A, 30.1% scanner B) values of both scanners significantly decreased with increasing BW. NU(ts) values of scanners A and B were less than 3% for all BW and ES values. 
Nonetheless, scanner A was characterized by a significant upward trend (3% variation) of time series nonuniformity with increasing BW, while NU(ts) of scanner B significantly increased (19% variation) with increasing ES. Temporal stability (SFNR and DR) and overall image quality (NU(ts) and SNR(ts)) of EPI-fMRI time series can significantly vary with echo spacing and readout bandwidth. The specific pattern of variation may depend on the performance of each single MR scanner system in terms of gradient characteristics, EPI sequence calibrations (eddy currents, shimming, etc.), and functional design of the radiofrequency coil. Our results indicate that the use of a low BW improves not only the signal-to-noise ratio of EPI-fMRI time series but also the temporal stability of functional acquisitions. The use of minimum ES values is not entirely advantageous when the MR scanner system is characterized by low-performance gradients and suboptimal EPI sequence calibration. Since differences in the basic performance of MR scanner systems are a potential source of variability for fMRI activation, phantom measurements of SFNR, DR, NU(ts), and SNR(ts) can be performed before subject acquisitions to monitor the stability of MR scanner performance in clinical group comparison and longitudinal studies.
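
    The stability metrics above follow standard definitions: SFNR is the mean signal divided by the temporal standard deviation of the detrended time series, and drift is the fitted linear trend expressed as a percentage of the baseline. A minimal sketch of these conventional formulas (not necessarily the authors' exact implementation):

```python
import statistics

def linear_detrend(ts):
    # ordinary least-squares line fit; returns (slope, intercept, residuals)
    n = len(ts)
    xbar = (n - 1) / 2.0
    ybar = sum(ts) / n
    sxx = sum((x - xbar) ** 2 for x in range(n))
    slope = sum((x - xbar) * (y - ybar) for x, y in enumerate(ts)) / sxx
    intercept = ybar - slope * xbar
    resid = [y - (slope * x + intercept) for x, y in enumerate(ts)]
    return slope, intercept, resid

def sfnr(ts):
    # signal-to-fluctuation-noise ratio: mean signal over the temporal
    # standard deviation of the detrended ROI time series
    _, _, resid = linear_detrend(ts)
    return (sum(ts) / len(ts)) / statistics.pstdev(resid)

def percent_drift(ts):
    # fitted linear trend across the run, as a percentage of baseline
    slope, intercept, _ = linear_detrend(ts)
    return 100.0 * slope * (len(ts) - 1) / intercept
```

    In a phantom QA run these two numbers are tracked over time to flag scanner instability before subject acquisitions.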

  10. Early postnatal myelin content estimate of white matter via T1w/T2w ratio

    NASA Astrophysics Data System (ADS)

    Lee, Kevin; Cherel, Marie; Budin, Francois; Gilmore, John; Zaldarriaga Consing, Kirsten; Rasmussen, Jerod; Wadhwa, Pathik D.; Entringer, Sonja; Glasser, Matthew F.; Van Essen, David C.; Buss, Claudia; Styner, Martin

    2015-03-01

    To develop and evaluate a novel processing framework for the relative quantification of myelin content in cerebral white matter (WM) regions from brain MRI data via a computed ratio of T1 to T2 weighted intensity values. We employed high-resolution (1 mm³ isotropic) T1- and T2-weighted MRI from 46 (28 male, 18 female) neonate subjects (typically developing controls) scanned on a Siemens Tim Trio 3T at UC Irvine. We developed a novel, yet relatively straightforward image processing framework for WM myelin content estimation based on earlier work by Glasser et al. We first co-register the structural MRI data to correct for motion. Then, background areas are masked out via a jointly computed T1w and T2w foreground mask. Raw T1w/T2w-ratio images are computed next. For the purpose of calibration across subjects, we first coarsely segment the fat-rich facial regions via an atlas co-registration. Linear intensity rescaling based on median T1w/T2w-ratio values in those facial regions yields calibrated T1w/T2w-ratio images. Mean values in lobar regions are evaluated using standard statistical analysis to investigate their interaction with age at scan. Several lobes have strongly positive significant interactions of age at scan with the computed T1w/T2w-ratio. Most regions do not show sex effects. A few regions, such as cingulate and CC areas, show no measurable change in myelin content within the first few weeks of postnatal development, which we attribute to sample size and measurement variability. We developed and evaluated a novel way to estimate white matter myelin content for use in studies of brain white matter development.
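
    The calibration step described above (linear rescaling so that the median T1w/T2w ratio in the fat-rich facial mask takes a fixed value) can be sketched as follows; the target value of 1.0 and the function names are hypothetical choices for illustration:

```python
def t1w_t2w_ratio(t1w, t2w, eps=1e-6):
    # voxel-wise ratio; eps guards divide-by-zero in background voxels
    return [a / max(b, eps) for a, b in zip(t1w, t2w)]

def median(vals):
    s = sorted(vals)
    n = len(s)
    return (s[n // 2 - 1] + s[n // 2]) / 2.0 if n % 2 == 0 else s[n // 2]

def calibrate(ratio, face_idx, target=1.0):
    # rescale so the median ratio inside the facial-fat mask equals `target`,
    # making ratio images comparable across subjects
    m = median([ratio[i] for i in face_idx])
    return [r * (target / m) for r in ratio]
```

    After calibration, mean lobar values can be compared across subjects and regressed against age at scan.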

  11. Human-machine interface for a VR-based medical imaging environment

    NASA Astrophysics Data System (ADS)

    Krapichler, Christian; Haubner, Michael; Loesch, Andreas; Lang, Manfred K.; Englmeier, Karl-Hans

    1997-05-01

    Modern 3D scanning techniques like magnetic resonance imaging (MRI) or computed tomography (CT) produce high-quality images of the human anatomy. Virtual environments open new ways to display and to analyze those tomograms. Compared with today's inspection of 2D image sequences, physicians are empowered to recognize spatial coherencies and examine pathological regions more easily, and diagnosis and therapy planning can be accelerated. For that purpose a powerful human-machine interface is required, which offers a variety of tools and features to enable both exploration and manipulation of the 3D data. Man-machine communication has to be intuitive and efficacious to avoid long familiarization times and to enhance familiarity with and acceptance of the interface. Hence, interaction capabilities in virtual worlds should be comparable to those in the real world to allow utilization of our natural experiences. In this paper the integration of hand gestures and visual focus, two important aspects in modern human-computer interaction, into a medical imaging environment is shown. With the presented human-machine interface, including virtual reality displaying and interaction techniques, radiologists can be supported in their work. Further, virtual environments can even facilitate communication between specialists from different fields or in educational and training applications.

  12. Decoding Lifespan Changes of the Human Brain Using Resting-State Functional Connectivity MRI

    PubMed Central

    Wang, Lubin; Su, Longfei; Shen, Hui; Hu, Dewen

    2012-01-01

    The development of large-scale functional brain networks is a complex, lifelong process that can be investigated using resting-state functional connectivity MRI (rs-fcMRI). In this study, we aimed to decode the developmental dynamics of the whole-brain functional network in seven decades (8–79 years) of the human lifespan. We first used parametric curve fitting to examine linear and nonlinear age effect on the resting human brain, and then combined manifold learning and support vector machine methods to predict individuals' “brain ages” from rs-fcMRI data. We found that age-related changes in interregional functional connectivity exhibited spatially and temporally specific patterns. During brain development from childhood to senescence, functional connections tended to linearly increase in the emotion system and decrease in the sensorimotor system; while quadratic trajectories were observed in functional connections related to higher-order cognitive functions. The complex patterns of age effect on the whole-brain functional network could be effectively represented by a low-dimensional, nonlinear manifold embedded in the functional connectivity space, which uncovered the inherent structure of brain maturation and aging. Regression of manifold coordinates with age further showed that the manifold representation extracted sufficient information from rs-fcMRI data to make prediction about individual brains' functional development levels. Our study not only gives insights into the neural substrates that underlie behavioral and cognitive changes over age, but also provides a possible way to quantitatively describe the typical and atypical developmental progression of human brain function using rs-fcMRI. PMID:22952990

  13. Decoding lifespan changes of the human brain using resting-state functional connectivity MRI.

    PubMed

    Wang, Lubin; Su, Longfei; Shen, Hui; Hu, Dewen

    2012-01-01

    The development of large-scale functional brain networks is a complex, lifelong process that can be investigated using resting-state functional connectivity MRI (rs-fcMRI). In this study, we aimed to decode the developmental dynamics of the whole-brain functional network in seven decades (8-79 years) of the human lifespan. We first used parametric curve fitting to examine linear and nonlinear age effect on the resting human brain, and then combined manifold learning and support vector machine methods to predict individuals' "brain ages" from rs-fcMRI data. We found that age-related changes in interregional functional connectivity exhibited spatially and temporally specific patterns. During brain development from childhood to senescence, functional connections tended to linearly increase in the emotion system and decrease in the sensorimotor system; while quadratic trajectories were observed in functional connections related to higher-order cognitive functions. The complex patterns of age effect on the whole-brain functional network could be effectively represented by a low-dimensional, nonlinear manifold embedded in the functional connectivity space, which uncovered the inherent structure of brain maturation and aging. Regression of manifold coordinates with age further showed that the manifold representation extracted sufficient information from rs-fcMRI data to make prediction about individual brains' functional development levels. Our study not only gives insights into the neural substrates that underlie behavioral and cognitive changes over age, but also provides a possible way to quantitatively describe the typical and atypical developmental progression of human brain function using rs-fcMRI.
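
    The parametric curve fitting mentioned above (testing linear and quadratic age effects on each functional connection) amounts to ordinary least squares with a quadratic design matrix. A minimal pure-Python sketch, not the authors' code:

```python
def solve3(A, b):
    # Gaussian elimination with partial pivoting on a 3x3 system A x = b
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_quadratic(xs, ys):
    # least-squares fit y ~ b0 + b1*x + b2*x^2 via the normal equations
    n = len(xs)
    cols = [[1.0] * n, list(xs), [x * x for x in xs]]
    A = [[sum(c1[i] * c2[i] for i in range(n)) for c2 in cols] for c1 in cols]
    b = [sum(cols[r][i] * ys[i] for i in range(n)) for r in range(3)]
    return solve3(A, b)
```

    A near-zero quadratic coefficient indicates a linear age trajectory for that connection; a clearly non-zero one indicates a quadratic trajectory.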

  14. Pattern recognition and functional neuroimaging help to discriminate healthy adolescents at risk for mood disorders from low risk adolescents.

    PubMed

    Mourão-Miranda, Janaina; Oliveira, Leticia; Ladouceur, Cecile D; Marquand, Andre; Brammer, Michael; Birmaher, Boris; Axelson, David; Phillips, Mary L

    2012-01-01

    There are no known biological measures that accurately predict future development of psychiatric disorders in individual at-risk adolescents. We investigated whether machine learning and fMRI could help to: 1. differentiate healthy adolescents genetically at-risk for bipolar disorder and other Axis I psychiatric disorders from healthy adolescents at low risk of developing these disorders; 2. identify those healthy genetically at-risk adolescents who were most likely to develop future Axis I disorders. 16 healthy offspring genetically at risk for bipolar disorder and other Axis I disorders by virtue of having a parent with bipolar disorder and 16 healthy, age- and gender-matched low-risk offspring of healthy parents with no history of psychiatric disorders (12-17 year-olds) performed two emotional face gender-labeling tasks (happy/neutral; fearful/neutral) during fMRI. We used Gaussian Process Classifiers (GPC), a machine learning approach that assigns a predictive probability of group membership to an individual person, to differentiate groups and to identify those at-risk adolescents most likely to develop future Axis I disorders. Using GPC, activity to neutral faces presented during the happy experiment accurately and significantly differentiated groups, achieving 75% accuracy (sensitivity = 75%, specificity = 75%). Furthermore, predictive probabilities were significantly higher for those at-risk adolescents who subsequently developed an Axis I disorder than for those at-risk adolescents remaining healthy at follow-up. We show that a combination of two promising techniques, machine learning and neuroimaging, not only discriminates healthy low-risk from healthy adolescents genetically at-risk for Axis I disorders, but may ultimately help to predict which at-risk adolescents subsequently develop these disorders.
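
    The reported accuracy, sensitivity and specificity follow from thresholding the GPC's predictive probability of group membership. A small sketch of that bookkeeping (the 0.5 threshold is the conventional default, assumed here):

```python
def predict(probs, threshold=0.5):
    # predictive probability of group membership -> hard label (1 = at-risk)
    return [1 if p >= threshold else 0 for p in probs]

def sens_spec_acc(pred, truth):
    # confusion-matrix summary of a two-class prediction
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / len(truth)
```

    In the study, the continuous predictive probabilities themselves carried extra information: at-risk adolescents with higher probabilities were more likely to develop a disorder at follow-up.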

  15. Research of a smart cutting tool based on MEMS strain gauge

    NASA Astrophysics Data System (ADS)

    Zhao, Y.; Zhao, Y. L.; Shao, YW; Hu, T. J.; Zhang, Q.; Ge, X. H.

    2018-03-01

    Cutting force is an important factor that affects machining accuracy, cutting vibration and tool wear. Machining condition monitoring by cutting force measurement is a key technology for intelligent manufacturing. Current cutting force sensors suffer from large volume, complex structure and poor compatibility in practical applications; to address these problems, a smart cutting tool for cutting force measurement is proposed in this paper. Commercial MEMS (Micro-Electro-Mechanical System) strain gauges with high sensitivity and small size are adopted as the transducing elements of the smart tool, and a structurally optimized cutting tool is fabricated for MEMS strain gauge bonding. Static calibration results show that the developed smart cutting tool is able to measure cutting forces in both X and Y directions, and the cross-interference error is within 3%. Its general accuracy is 3.35% and 3.27% in the X and Y directions, and its sensitivity is 0.1 mV/N, which is very suitable for measuring small cutting forces in high-speed precision machining. The smart cutting tool is portable and reliable for practical application in CNC machine tools.
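
    The static calibration above determines how measured bridge voltages map to forces in X and Y, including cross-interference. A sketch of that two-axis inversion; the sensitivity matrix below is hypothetical, chosen only to be consistent with the reported 0.1 mV/N sensitivity and 3% cross-interference bound:

```python
def invert2(C):
    # inverse of a 2x2 matrix
    (a, b), (c, d) = C
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def forces_from_voltages(C, v):
    # F = C^-1 V for a two-axis sensor with sensitivity matrix C (mV/N);
    # off-diagonal terms model cross-sensitivity between the X and Y bridges
    Ci = invert2(C)
    return [Ci[0][0] * v[0] + Ci[0][1] * v[1],
            Ci[1][0] * v[0] + Ci[1][1] * v[1]]

def cross_interference(C):
    # worst-case off-axis output relative to on-axis sensitivity, in percent
    return 100.0 * max(abs(C[0][1]) / abs(C[0][0]), abs(C[1][0]) / abs(C[1][1]))
```

    Inverting the calibrated matrix, rather than using each channel independently, removes the cross-interference from the recovered forces.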

  16. MRI-based quantification of Duchenne muscular dystrophy in a canine model

    NASA Astrophysics Data System (ADS)

    Wang, Jiahui; Fan, Zheng; Kornegay, Joe N.; Styner, Martin A.

    2011-03-01

    Duchenne muscular dystrophy (DMD) is a progressive and fatal X-linked disease caused by mutations in the DMD gene. Magnetic resonance imaging (MRI) has shown potential to provide non-invasive and objective biomarkers for monitoring disease progression and therapeutic effect in DMD. In this paper, we propose a semi-automated scheme to quantify MRI features of golden retriever muscular dystrophy (GRMD), a canine model of DMD. Our method was applied to a natural history data set and a hydrodynamic limb perfusion data set. The scheme is composed of three modules: pre-processing, muscle segmentation, and feature analysis. The pre-processing module includes: calculation of T2 maps, spatial registration of T2 weighted (T2WI) images, T2 weighted fat suppressed (T2FS) images, and T2 maps, and intensity calibration of T2WI and T2FS images. We then manually segment six pelvic limb muscles. For each of the segmented muscles, we finally automatically measure volume and intensity statistics of the T2FS images and T2 maps. For the natural history study, our results showed that four of six muscles in affected dogs had smaller volumes and all had higher mean intensities in T2 maps as compared to normal dogs. For the perfusion study, the muscle volumes and mean intensities in T2FS were increased in the post-perfusion MRI scans as compared to pre-perfusion MRI scans, as predicted. We conclude that our scheme successfully performs quantitative analysis of muscle MRI features of GRMD.
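
    The abstract mentions calculating T2 maps but not the formula; a standard two-point mono-exponential estimate assuming S(TE) = S0·exp(-TE/T2) is sketched below, with hypothetical echo times:

```python
import math

def t2_map(s_te1, s_te2, te1, te2):
    """Two-point T2 estimate per voxel from signals at echo times te1 < te2.

    From S(TE) = S0*exp(-TE/T2): T2 = (te2 - te1) / ln(S(te1)/S(te2)).
    Voxels with non-decaying or non-positive signal are set to NaN.
    """
    out = []
    for s1, s2 in zip(s_te1, s_te2):
        out.append((te2 - te1) / math.log(s1 / s2) if s1 > s2 > 0 else float('nan'))
    return out
```

    Per-muscle statistics (volume, mean intensity) are then computed within the manually segmented muscle masks.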

  17. Bayesian model calibration of ramp compression experiments on Z

    NASA Astrophysics Data System (ADS)

    Brown, Justin; Hund, Lauren

    2017-06-01

    Bayesian model calibration (BMC) is a statistical framework to estimate inputs for a computational model in the presence of multiple uncertainties, making it well suited to dynamic experiments which must be coupled with numerical simulations to interpret the results. Often, dynamic experiments are diagnosed using velocimetry and this output can be modeled using a hydrocode. Several calibration issues unique to this type of scenario including the functional nature of the output, uncertainty of nuisance parameters within the simulation, and model discrepancy identifiability are addressed, and a novel BMC process is proposed. As a proof of concept, we examine experiments conducted on Sandia National Laboratories' Z-machine which ramp compressed tantalum to peak stresses of 250 GPa. The proposed BMC framework is used to calibrate the cold curve of Ta (with uncertainty), and we conclude that the procedure results in simple, fast, and valid inferences. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
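
    The core of BMC can be illustrated with a deliberately simplified one-dimensional example: a grid posterior over one model input, given a simulated versus observed velocimetry trace, under a Gaussian likelihood and flat prior. The real framework additionally handles functional output, nuisance parameters and model discrepancy; this sketch shows only the basic Bayes update:

```python
import math

def grid_posterior(thetas, simulate, observed, sigma):
    # posterior over candidate inputs `thetas`, assuming independent Gaussian
    # observation noise with standard deviation `sigma` and a flat prior
    logpost = []
    for th in thetas:
        sim = simulate(th)
        sse = sum((o - s) ** 2 for o, s in zip(observed, sim))
        logpost.append(-sse / (2 * sigma ** 2))
    m = max(logpost)                      # subtract max for numerical stability
    w = [math.exp(lp - m) for lp in logpost]
    z = sum(w)
    return [wi / z for wi in w]
```

    In practice the "simulate" step is a hydrocode run, so surrogate models are typically used to make the posterior evaluation affordable.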

  18. Automatic Estimation of Volumetric Breast Density Using Artificial Neural Network-Based Calibration of Full-Field Digital Mammography: Feasibility on Japanese Women With and Without Breast Cancer.

    PubMed

    Wang, Jeff; Kato, Fumi; Yamashita, Hiroko; Baba, Motoi; Cui, Yi; Li, Ruijiang; Oyama-Manabe, Noriko; Shirato, Hiroki

    2017-04-01

    Breast cancer is the most common invasive cancer among women and its incidence is increasing. Risk assessment is valuable and recent methods are incorporating novel biomarkers such as mammographic density. Artificial neural networks (ANN) are adaptive algorithms capable of performing pattern-to-pattern learning and are well suited for medical applications. They are potentially useful for calibrating full-field digital mammography (FFDM) for quantitative analysis. This study uses ANN modeling to estimate volumetric breast density (VBD) from FFDM on Japanese women with and without breast cancer. ANN calibration of VBD was performed using phantom data for one FFDM system. Mammograms of 46 Japanese women diagnosed with invasive carcinoma and 53 with negative findings were analyzed using the learned ANN models. ANN-estimated VBD was validated against phantom data, compared intra-patient with qualitative composition scoring and with MRI VBD, and compared inter-patient with classical risk factors of breast cancer as well as cancer status. Phantom validations reached an R² of 0.993. Intra-patient validations ranged from an R² of 0.789 with VBD to 0.908 with breast volume. ANN VBD agreed well with BI-RADS scoring and MRI VBD, with R² ranging from 0.665 with VBD to 0.852 with breast volume. VBD was significantly higher in women with cancer. Associations with age, BMI, menopause, and cancer status previously reported were also confirmed. ANN modeling appears to produce reasonable measures of mammographic density validated with phantoms, with existing measures of breast density, and with classical biomarkers of breast cancer. FFDM VBD is significantly higher in Japanese women with cancer.

  19. Constitutive Model Calibration via Autonomous Multiaxial Experimentation (Postprint)

    DTIC Science & Technology

    2016-09-17

    test machine. Experimental data is reduced and finite element simulations are conducted in parallel with the test based on experimental strain conditions. Optimization methods...be used directly in finite element simulations of more complex geometries. Keywords Axial/torsional experimentation • Plasticity • Constitutive model

  20. Research on the method of improving the accuracy of CMM (coordinate measuring machine) testing aspheric surface

    NASA Astrophysics Data System (ADS)

    Cong, Wang; Xu, Lingdi; Li, Ang

    2017-10-01

    Large aspheric surfaces, which deviate from spherical surfaces, are widely used in a variety of optical systems. Compared with spherical surfaces, large aspheric surfaces offer many advantages, such as improving image quality, correcting aberration, expanding the field of view, increasing the effective distance and making the optical system compact and lightweight. In particular, with the rapid development of space optics, space sensors require higher resolution and larger viewing angles, so aspheric surfaces will become essential components of such optical systems. After coarse grinding of an aspheric surface, the surface profile error is about tens of microns[1]. In order to achieve the final surface accuracy requirement, the aspheric surface must be modified quickly; high-precision testing is the basis of rapid convergence of the surface error. There are many methods for aspheric surface testing[2], including geometric ray detection, Hartmann testing, the Ronchi test, the knife-edge method, direct profile testing and interferometry, but all of them have disadvantages[6]. In recent years, measurement of aspheric surfaces has become one of the important factors restricting the development of aspheric surface processing. A two-meter-aperture industrial CMM (coordinate measuring machine) is available, but it has drawbacks such as large detection error and low repeatability in the measurement of coarsely ground aspheric surfaces, which seriously affects convergence efficiency during aspheric mirror processing. To solve these problems, this paper presents an effective error control, calibration and removal method based on real-time monitoring of the calibration mirror position, probe correction and measurement mode selection, together with the development of a measurement point distribution program. 
Verified on real engineering examples, the method improves the nominal measurement accuracy (PV value) of the original industrial-grade CMM from 7 microns to 4 microns, which effectively improves the grinding efficiency of aspheric mirrors and confirms the correctness of the method. This paper also investigates the error detection and operation control method, the error calibration of the CMM and the random error calibration of the CMM.

  1. Man vs. Machine: An interactive poll to evaluate hydrological model performance of a manual and an automatic calibration

    NASA Astrophysics Data System (ADS)

    Wesemann, Johannes; Burgholzer, Reinhard; Herrnegger, Mathew; Schulz, Karsten

    2017-04-01

    In recent years, a lot of research effort in hydrological modelling has been invested in improving the automatic calibration of rainfall-runoff models. This includes, for example, (1) the implementation of new optimisation methods, (2) the incorporation of new and different objective criteria and signatures in the optimisation and (3) the usage of auxiliary data sets apart from runoff. Nevertheless, in many applications manual calibration is still justifiable and frequently applied. The hydrologist performing the manual calibration, with his expert knowledge, is able to judge the hydrographs simultaneously concerning details but also in a holistic view. This integrated eye-ball verification procedure available to man can be difficult to formulate in objective criteria, even when using a multi-criteria approach. Comparing the results of automatic and manual calibration is not straightforward. Automatic calibration often solely involves objective criteria such as the Nash-Sutcliffe efficiency coefficient or the Kling-Gupta efficiency as a benchmark during the calibration. Consequently, a comparison based on such measures is intrinsically biased towards automatic calibration. Additionally, objective criteria do not cover all aspects of a hydrograph, leaving questions concerning the quality of a simulation open. This contribution therefore seeks to examine the quality of manually and automatically calibrated hydrographs by interactively involving expert knowledge in the evaluation. Simulations have been performed for the Mur catchment in Austria with the rainfall-runoff model COSERO using two parameter sets derived from a manual and an automatic calibration. A subset of resulting hydrographs for observation and simulation, representing the typical flow conditions and events, will be evaluated in this study. 
In an interactive crowdsourcing approach experts attending the session can vote for their preferred simulated hydrograph without having information on the calibration method that produced the respective hydrograph. Therefore, the result of the poll can be seen as an additional quality criterion for the comparison of the two different approaches and help in the evaluation of the automatic calibration method.

  2. Pattern-Based Inverse Modeling for Characterization of Subsurface Flow Models with Complex Geologic Heterogeneity

    NASA Astrophysics Data System (ADS)

    Golmohammadi, A.; Jafarpour, B.; M Khaninezhad, M. R.

    2017-12-01

    Calibration of heterogeneous subsurface flow models leads to ill-posed nonlinear inverse problems, where too many unknown parameters are estimated from limited response measurements. When the underlying parameters form complex (non-Gaussian) structured spatial connectivity patterns, classical variogram-based geostatistical techniques cannot describe the underlying connectivity patterns. Modern pattern-based geostatistical methods that incorporate higher-order spatial statistics are more suitable for describing such complex spatial patterns. Moreover, when the underlying unknown parameters are discrete (geologic facies distribution), conventional model calibration techniques that are designed for continuous parameters cannot be applied directly. In this paper, we introduce a novel pattern-based model calibration method to reconstruct discrete and spatially complex facies distributions from dynamic flow response data. To reproduce complex connectivity patterns during model calibration, we impose a feasibility constraint to ensure that the solution follows the expected higher-order spatial statistics. For model calibration, we adopt a regularized least-squares formulation, involving data mismatch, pattern connectivity, and feasibility constraint terms. Using an alternating directions optimization algorithm, the regularized objective function is divided into a continuous model calibration problem, followed by mapping the solution onto the feasible set. The feasibility constraint to honor the expected spatial statistics is implemented using a supervised machine learning algorithm. The two steps of the model calibration formulation are repeated until the convergence criterion is met. Several numerical examples are used to evaluate the performance of the developed method.
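
    The alternating-directions idea above (a continuous model-calibration update followed by mapping onto the feasible set) can be caricatured in a few lines. In the paper the feasibility step uses a learned pattern model; here a plain 0/1 threshold stands in for it, so this is only a structural sketch with hypothetical names:

```python
def calibrate_facies(G, d, n, iters=50, step=0.1):
    """Alternate a continuous data-mismatch update with a feasibility mapping.

    G: linear forward operator (rows = measurements, cols = parameters)
    d: observed data; n: number of unknown (discrete facies) parameters
    """
    m = [0.5] * n
    for _ in range(iters):
        # continuous step: gradient descent on ||G m - d||^2
        r = [sum(G[i][j] * m[j] for j in range(n)) - d[i] for i in range(len(d))]
        m = [m[j] - step * sum(G[i][j] * r[i] for i in range(len(d)))
             for j in range(n)]
        # feasibility step: map onto the discrete set {0, 1}; the paper uses
        # a supervised pattern-learning projection here instead of a threshold
        m = [1.0 if x >= 0.5 else 0.0 for x in m]
    return m
```

    The two steps are repeated until the solution both fits the data and honors the expected spatial statistics.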

  3. Quantification of uncertainty in machining operations for on-machine acceptance.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Claudet, Andre A.; Tran, Hy D.; Su, Jiann-Chemg

    2008-09-01

    Manufactured parts are designed with acceptance tolerances, i.e. deviations from ideal design conditions, due to unavoidable errors in the manufacturing process. It is necessary to measure and evaluate the manufactured part, compared to the nominal design, to determine whether the part meets design specifications. The scope of this research project is dimensional acceptance of machined parts; specifically, parts machined using numerically controlled (NC, or also CNC for Computer Numerically Controlled) machines. In the design/build/accept cycle, the designer will specify both a nominal value and an acceptable tolerance. As part of the typical design/build/accept business practice, it is required to verify that the part met acceptable values prior to acceptance. Manufacturing cost must include not only raw materials and added labor, but also the cost of ensuring conformance to specifications. Ensuring conformance is a substantial portion of the cost of manufacturing. In this project, the costs of measurements were approximately 50% of the cost of the machined part. In production, the cost of measurement would be smaller, but still a substantial proportion of manufacturing cost. The results of this research project will point to a science-based approach to reducing the cost of ensuring conformance to specifications. The approach that we take is to determine, a priori, how well a CNC machine can manufacture a particular geometry from stock. Based on the knowledge of the manufacturing process, we are then able to separate features which need further measurement from features which can be accepted 'as is' from the CNC. By calibration of the machine tool, and establishing a machining accuracy ratio, we can validate the ability of the CNC to fabricate to a particular level of tolerance. This will eliminate the costs of checking for conformance for relatively large tolerances.
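
    The machining accuracy ratio mentioned above supports a simple acceptance decision: measure only those features whose tolerance is not comfortably wider than the demonstrated machining uncertainty. A sketch, where the 4:1 threshold is a common gauging rule of thumb assumed for illustration (the report's actual ratio is not stated here):

```python
def needs_inspection(tolerance, machine_uncertainty, min_ratio=4.0):
    # accept "as is" from the CNC only when the tolerance is at least
    # `min_ratio` times wider than the demonstrated machining uncertainty
    return (tolerance / machine_uncertainty) < min_ratio
```

    Features that pass this screen skip dimensional inspection entirely, which is where the claimed measurement-cost savings come from.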

  4. PyMVPA: A python toolbox for multivariate pattern analysis of fMRI data.

    PubMed

    Hanke, Michael; Halchenko, Yaroslav O; Sederberg, Per B; Hanson, Stephen José; Haxby, James V; Pollmann, Stefan

    2009-01-01

    Decoding patterns of neural activity onto cognitive states is one of the central goals of functional brain imaging. Standard univariate fMRI analysis methods, which correlate cognitive and perceptual function with the blood oxygenation-level dependent (BOLD) signal, have proven successful in identifying anatomical regions based on signal increases during cognitive and perceptual tasks. Recently, researchers have begun to explore new multivariate techniques that have proven to be more flexible, more reliable, and more sensitive than standard univariate analysis. Drawing on the field of statistical learning theory, these new classifier-based analysis techniques possess explanatory power that could provide new insights into the functional properties of the brain. However, unlike the wealth of software packages for univariate analyses, there are few packages that facilitate multivariate pattern classification analyses of fMRI data. Here we introduce a Python-based, cross-platform, and open-source software toolbox, called PyMVPA, for the application of classifier-based analysis techniques to fMRI datasets. PyMVPA makes use of Python's ability to access libraries written in a large variety of programming languages and computing environments to interface with the wealth of existing machine learning packages. We present the framework in this paper and provide illustrative examples on its usage, features, and programmability.

  5. PyMVPA: A Python toolbox for multivariate pattern analysis of fMRI data

    PubMed Central

    Hanke, Michael; Halchenko, Yaroslav O.; Sederberg, Per B.; Hanson, Stephen José; Haxby, James V.; Pollmann, Stefan

    2009-01-01

    Decoding patterns of neural activity onto cognitive states is one of the central goals of functional brain imaging. Standard univariate fMRI analysis methods, which correlate cognitive and perceptual function with the blood oxygenation-level dependent (BOLD) signal, have proven successful in identifying anatomical regions based on signal increases during cognitive and perceptual tasks. Recently, researchers have begun to explore new multivariate techniques that have proven to be more flexible, more reliable, and more sensitive than standard univariate analysis. Drawing on the field of statistical learning theory, these new classifier-based analysis techniques possess explanatory power that could provide new insights into the functional properties of the brain. However, unlike the wealth of software packages for univariate analyses, there are few packages that facilitate multivariate pattern classification analyses of fMRI data. Here we introduce a Python-based, cross-platform, and open-source software toolbox, called PyMVPA, for the application of classifier-based analysis techniques to fMRI datasets. PyMVPA makes use of Python's ability to access libraries written in a large variety of programming languages and computing environments to interface with the wealth of existing machine-learning packages. We present the framework in this paper and provide illustrative examples on its usage, features, and programmability. PMID:19184561
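
    The classifier-based analyses PyMVPA supports can be illustrated without the toolbox itself: below is a leave-one-out cross-validated nearest-centroid decoder over per-trial feature vectors, a plain-Python stand-in for the classifiers PyMVPA wraps (the function name and data layout are illustrative, not PyMVPA's API):

```python
def nearest_centroid_loocv(samples, labels):
    # leave-one-out cross-validation of a nearest-centroid decoder:
    # for each held-out sample, build class centroids from the rest and
    # predict the label of the nearest centroid (squared Euclidean distance)
    correct = 0
    for i in range(len(samples)):
        train = [(s, l) for j, (s, l) in enumerate(zip(samples, labels)) if j != i]
        cents = {}
        for lab in set(l for _, l in train):
            feats = [s for s, l in train if l == lab]
            cents[lab] = [sum(col) / len(feats) for col in zip(*feats)]
        dists = {lab: sum((a - b) ** 2 for a, b in zip(samples[i], c))
                 for lab, c in cents.items()}
        if min(dists, key=dists.get) == labels[i]:
            correct += 1
    return correct / len(samples)
```

    Above-chance cross-validated accuracy is the evidence that the feature pattern carries information about the cognitive state, which is the core logic of multivariate pattern analysis.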

  6. Enhancing the discrimination accuracy between metastases, gliomas and meningiomas on brain MRI by volumetric textural features and ensemble pattern recognition methods.

    PubMed

    Georgiadis, Pantelis; Cavouras, Dionisis; Kalatzis, Ioannis; Glotsos, Dimitris; Athanasiadis, Emmanouil; Kostopoulos, Spiros; Sifaki, Koralia; Malamas, Menelaos; Nikiforidis, George; Solomou, Ekaterini

    2009-01-01

    Three-dimensional (3D) texture analysis of volumetric brain magnetic resonance (MR) images has been identified as an important indicator for discriminating among different brain pathologies. The purpose of this study was to evaluate the efficiency of 3D textural features using a pattern recognition system in the task of discriminating benign, malignant and metastatic brain tissues on T1 postcontrast MR imaging (MRI) series. The dataset consisted of 67 brain MRI series obtained from patients with verified and untreated intracranial tumors. The pattern recognition system was designed as an ensemble classification scheme employing a support vector machine classifier, specially modified to integrate the least squares features transformation logic in its kernel function. This modification, together with the 3D textural features, boosted the performance of the system in discriminating metastatic, malignant and benign brain tumors to 77.14%, 89.19% and 93.33% accuracy, respectively. The method was evaluated using an external cross-validation process; thus, the results might be considered indicative of the generalization performance of the system on "unseen" cases. The proposed system might be used as an assisting tool for brain tumor characterization on volumetric MRI series.
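As a rough illustration of the kind of 3D textural features used in such studies (not the authors' exact feature set, nor their modified SVM kernel), the following sketch computes a 3D gray-level co-occurrence matrix (GLCM) for one voxel offset and two classical Haralick-style descriptors on toy two-level volumes:

```python
def glcm_3d(volume, offset, levels):
    """Gray-level co-occurrence matrix for a 3D volume and one voxel offset."""
    dz, dy, dx = offset
    Z, Y, X = len(volume), len(volume[0]), len(volume[0][0])
    glcm = [[0] * levels for _ in range(levels)]
    for z in range(Z):
        for y in range(Y):
            for x in range(X):
                z2, y2, x2 = z + dz, y + dy, x + dx
                if 0 <= z2 < Z and 0 <= y2 < Y and 0 <= x2 < X:
                    glcm[volume[z][y][x]][volume[z2][y2][x2]] += 1
    total = sum(map(sum, glcm)) or 1
    return [[c / total for c in row] for row in glcm]

def texture_features(glcm):
    """Contrast and energy, two classical Haralick-style descriptors."""
    contrast = sum(p * (i - j) ** 2 for i, row in enumerate(glcm) for j, p in enumerate(row))
    energy = sum(p * p for row in glcm for p in row)
    return {"contrast": contrast, "energy": energy}

# Toy 2-level volumes: a homogeneous block vs. a checkerboard-like block.
uniform = [[[0] * 4 for _ in range(4)] for _ in range(4)]
checker = [[[(z + y + x) % 2 for x in range(4)] for y in range(4)] for z in range(4)]

f_uniform = texture_features(glcm_3d(uniform, (0, 0, 1), 2))
f_checker = texture_features(glcm_3d(checker, (0, 0, 1), 2))
print(f_uniform, f_checker)
```

The homogeneous volume yields zero contrast and maximal energy, while the checkerboard yields the opposite; feature vectors like these, computed over many offsets and statistics, are what the classifier discriminates on.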

  7. Correlation between resting state fMRI total neuronal activity and PET metabolism in healthy controls and patients with disorders of consciousness.

    PubMed

    Soddu, Andrea; Gómez, Francisco; Heine, Lizette; Di Perri, Carol; Bahri, Mohamed Ali; Voss, Henning U; Bruno, Marie-Aurélie; Vanhaudenhuyse, Audrey; Phillips, Christophe; Demertzi, Athena; Chatelle, Camille; Schrouff, Jessica; Thibaut, Aurore; Charland-Verville, Vanessa; Noirhomme, Quentin; Salmon, Eric; Tshibanda, Jean-Flory Luaba; Schiff, Nicholas D; Laureys, Steven

    2016-01-01

    The mildly invasive 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) is a well-established imaging technique to measure 'resting state' cerebral metabolism. This technique has made it possible to assess changes in metabolic activity in clinical applications, such as the study of severe brain injury and disorders of consciousness. We assessed the possibility of creating functional MRI activity maps that could estimate the relative levels of activity in FDG-PET cerebral metabolic maps. Although no absolute metabolic measures can be extracted, our approach may still be of clinical use in centers without access to FDG-PET. It also overcomes the problem of recognizing individual networks during independent component selection in functional magnetic resonance imaging (fMRI) resting state analysis. We extracted resting state fMRI functional connectivity maps using independent component analysis and combined only components of neuronal origin. To assess whether components were of neuronal origin, a classification based on a support vector machine (SVM) was used. We compared the generated maps with the FDG-PET maps in 16 healthy controls, 11 vegetative state/unresponsive wakefulness syndrome patients and four locked-in patients. The results show a significant similarity between the FDG-PET and the fMRI-based maps, with ρ = 0.75 ± 0.05 for healthy controls and ρ = 0.58 ± 0.09 for vegetative state/unresponsive wakefulness syndrome patients. FDG-PET, fMRI neuronal maps, and the conjunction analysis show decreases in frontoparietal and medial regions in vegetative patients with respect to controls. Subsequent analysis in locked-in syndrome patients also produced maps consistent with those of healthy controls. The constructed resting state fMRI functional connectivity map points toward the possibility of using resting state fMRI to estimate relative levels of activity in a metabolic map.
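The similarity values ρ reported above are correlations between the fMRI-derived neuronal map and the FDG-PET metabolic map. A minimal sketch of a voxelwise Pearson correlation between two flattened maps, using purely synthetic data (a hypothetical "PET" map plus noise stands in for the fMRI-derived map):

```python
import math
import random

def pearson(x, y):
    """Voxelwise Pearson correlation between two flattened brain maps."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic maps: the fMRI-derived map is the "PET" map plus independent noise.
random.seed(1)
pet = [random.gauss(0, 1) for _ in range(500)]
fmri = [v + random.gauss(0, 0.8) for v in pet]
rho = pearson(pet, fmri)
print(round(rho, 2))
```

A higher noise level in the fMRI-derived map pulls ρ down from 1, which is one way to read the gap between the healthy-control (0.75) and patient (0.58) similarities.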

  8. TU-PIS-Exhibit Hall-2: How to Move Beyond Dose Monitoring to Imaging Performance Utilization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valencia, D.

    The current clinical standard of organ respiratory imaging, 4D-CT, is fundamentally limited by poor soft-tissue contrast and imaging dose. These limitations are potential barriers to beneficial “4D” radiotherapy methods which optimize the target and OAR dose-volume considering breathing motion but rely on a robust motion characterization. Conversely, MRI imparts no known radiation risk and has excellent soft-tissue contrast. MRI-based motion management is therefore highly desirable and holds great promise to improve radiotherapy of moving cancers, particularly in the abdomen. Over the past decade, MRI techniques have improved significantly, making MR-based motion management clinically feasible. For example, cine MRI has high temporal resolution, up to 10 frames/s, and has been used to track and/or characterize tumor motion and to study the correlation between external and internal motion. New MR technologies, such as 4D-MRI and hybrid MRI treatment machines (e.g., MR-linac or MR-Co60), have recently been developed. These technologies can lead to more accurate target volume determination and more precise radiation dose delivery via direct tumor gating or tracking. Despite all these promises, great challenges exist and the achievable clinical benefit of MRI-based tumor motion management has yet to be fully explored, much less realized. In this proposal, we will review novel MR-based motion management methods and technologies, the state of the art concerning MRI development and clinical application, and the barriers to more widespread adoption. Learning Objectives: Discuss the need for MR-based motion management for improving patient care in radiotherapy. Understand MR techniques for motion imaging and tumor motion characterization. Understand the current state of the art and future steps for clinical integration. Henry Ford Health System holds research agreements with Philips Healthcare. Research sponsored in part by a Henry Ford Health System Internal Mentored Grant.
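Tumor tracking on cine MRI frames, mentioned above, can be sketched at its simplest as following the centroid of a binary tumor mask from frame to frame. This is an illustrative toy (not a method from the session): real trackers must first segment or register the tumor in each frame.

```python
def centroid(mask):
    """Center of mass (row, col) of a binary tumor mask."""
    pts = [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(r for r, _ in pts) / n, sum(c for _, c in pts) / n)

def track(frames):
    """Centroid trajectory across a cine MRI frame sequence."""
    return [centroid(f) for f in frames]

def make_frame(top, left, size=2, shape=(8, 8)):
    """Toy binary frame with a square 'tumor' at (top, left)."""
    return [[1 if top <= r < top + size and left <= c < left + size else 0
             for c in range(shape[1])] for r in range(shape[0])]

# Simulated cranio-caudal breathing motion: the tumor drifts down, then back up.
positions = [1, 2, 3, 4, 3, 2]
frames = [make_frame(p, 3) for p in positions]
traj = track(frames)
print(traj)
```

The resulting trajectory (one centroid per frame) is the kind of motion trace that gating or tracking delivery would act on.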

  9. TU-F-BRB-03: Clinical Implementation of MR-Based Motion Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glide-Hurst, C.

  10. TU-PIS-Exhibit Hall-4: How to implement a dose monitoring solution in the real world: a technical perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Massey, S.

  11. TU-PIS-Exhibit Hall-1: Tools for Collecting and Analyzing Patient Dose Index Information from Imaging Equipment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, J. (Stanford University)

  12. TU-PIS-Exhibit Hall-5: Use of the Enterprise-wide Dose Tracking Software Radimetrics In an Academic Medical System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goode, A.

  13. Magnetic resonance imaging in precision radiation therapy for lung cancer

    PubMed Central

    Bainbridge, Hannah; Salem, Ahmed; Tijssen, Rob H. N.; Dubec, Michael; Wetscherek, Andreas; Van Es, Corinne; Belderbos, Jose; Faivre-Finn, Corinne

    2017-01-01

    Radiotherapy remains the cornerstone of curative treatment for inoperable locally advanced lung cancer, given concomitantly with platinum-based chemotherapy. With poor overall survival, research efforts continue to explore whether integration of advanced radiation techniques will assist safe treatment intensification with the potential for improving outcomes. One advance is the integration of magnetic resonance imaging (MRI) in the treatment pathway, providing anatomical and functional information with excellent soft tissue contrast without exposing the patient to radiation. MRI may complement or improve the diagnostic staging accuracy of F-18 fluorodeoxyglucose positron emission tomography and computed tomography imaging, particularly in assessing local tumour invasion, and is also effective for identification of nodal and distant metastatic disease. Incorporating anatomical MRI sequences into lung radiotherapy treatment planning is a novel application and may improve the reproducibility of target volume and organs at risk delineation. Furthermore, functional MRI may facilitate dose painting for heterogeneous target volumes and prediction of normal tissue toxicity to guide adaptive strategies. MRI sequences are rapidly developing and, although intra-thoracic motion has historically hindered MR image quality, progress is being made in this field. Four-dimensional MRI has the potential to complement or supersede 4D CT and 4D F-18-FDG PET by providing superior spatial resolution. A number of MR-guided radiotherapy delivery units are now available, combining a radiotherapy delivery machine (linear accelerator or cobalt-60 unit) with MRI at varying magnetic field strengths. This novel hybrid technology is evolving, with many technical challenges to overcome. It is anticipated that the clinical benefits of MR-guided radiotherapy will be derived from the ability to adapt treatment on the fly for each fraction and in real time, using ‘beam-on’ imaging. The lung tumour site group of the Atlantic MR-Linac consortium is working to generate a challenging MR-guided adaptive workflow for multi-institution treatment intensification trials in this patient group. PMID:29218271

  14. TU-PIS-Exhibit Hall-3: Simultaneous tracking of patient and real time staff dose to optimize interventional workflow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boon, S.

  15. Comprehensive Reproductive System Care Program - Clinical Breast Care Project (CRSCP-CBCP)

    DTIC Science & Technology

    2013-01-01

    biomedical informatics group here, the ProLogic team, and the MDR Global leader. This Pathology Checklist tablet data capturing system development with...initiative in developing a prototype tablet application using the Pathology Checklist as the first example following a decision made at the last CBCP...enabling surgery within the center. The Breast Imaging Center has a designated Aurora Breast MRI machine. The merging of the Army and Navy Breast

  16. Noninvasive iPhone Measurement of Left Ventricular Ejection Fraction Using Intrinsic Frequency Methodology.

    PubMed

    Pahlevan, Niema M; Rinderknecht, Derek G; Tavallali, Peyman; Razavi, Marianne; Tran, Thao T; Fong, Michael W; Kloner, Robert A; Csete, Marie; Gharib, Morteza

    2017-07-01

    The study is based on a previously reported mathematical analysis of the arterial waveform that extracts hidden oscillations in the waveform, which we call intrinsic frequencies. The goal of this clinical study was to compare the accuracy of left ventricular ejection fraction derived noninvasively from intrinsic frequencies versus left ventricular ejection fraction obtained with cardiac MRI, the most accurate method for left ventricular ejection fraction measurement. After informed consent, in one visit, subjects underwent cardiac MRI examination and noninvasive capture of a carotid waveform using an iPhone camera; the waveform is captured using a custom app that constructs it from skin displacement images during the cardiac cycle. The waveform was analyzed using the intrinsic frequency algorithm. Setting: outpatient MRI facility. Adults able to undergo MRI were referred by local physicians or self-referred in response to local advertisement, and included patients with heart failure with reduced ejection fraction diagnosed by a cardiologist. Standard cardiac MRI sequences were used, with periodic breath holding for image stabilization. To minimize motion artifact, the iPhone camera was held in a cradle over the carotid artery during iPhone measurements. Regardless of neck morphology, carotid waveforms were captured in all subjects within seconds to minutes. Seventy-two patients were studied, ranging in age from 20 to 92 years. The main endpoint of the analysis was left ventricular ejection fraction; overall, the correlation between ejection fraction-iPhone and ejection fraction-MRI was 0.74 (r = 0.74; p < 0.0001; ejection fraction-MRI = 0.93 × [ejection fraction-iPhone] + 1.9). Analysis of carotid waveforms using intrinsic frequency methods can be used to document left ventricular ejection fraction with accuracy comparable to that of MRI. The measurements require no training to perform or interpret, require no calibration, and can be repeated at the bedside to generate almost continuous analysis of left ventricular ejection fraction without arterial cannulation.
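The intrinsic frequency algorithm itself is a specialized time-domain decomposition that is not reproduced here. As a loose illustration of one ingredient, extracting the dominant frequency from a camera-sampled waveform, the sketch below does a direct DFT peak pick on a synthetic "carotid" waveform (all parameters, including the 60 frames/s camera rate, are made up for the example):

```python
import math

def dominant_frequency(signal, fs):
    """Dominant nonzero frequency (Hz) of a sampled waveform via a direct DFT."""
    n = len(signal)
    best_k, best_power = 1, 0.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * t / n) for t, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * k * t / n) for t, s in enumerate(signal))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return best_k * fs / n

# Toy waveform: 1.2 Hz fundamental (72 beats/min) plus a weaker harmonic.
fs = 60.0                          # hypothetical camera frame rate (frames/s)
t = [i / fs for i in range(300)]   # 5 s of data
wave = [math.sin(2 * math.pi * 1.2 * x) + 0.3 * math.sin(2 * math.pi * 2.4 * x)
        for x in t]
print(dominant_frequency(wave, fs))
```

Here the peak falls exactly on the 1.2 Hz heart-rate fundamental; a real pipeline would work on the reconstructed skin-displacement waveform and apply the intrinsic frequency decomposition rather than a plain DFT.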

  17. Automatic and Reproducible Positioning of Phase-Contrast MRI for the Quantification of Global Cerebral Blood Flow

    PubMed Central

    Liu, Peiying; Lu, Hanzhang; Filbey, Francesca M.; Pinkham, Amy E.; McAdams, Carrie J.; Adinoff, Bryon; Daliparthi, Vamsi; Cao, Yan

    2014-01-01

    Phase-Contrast MRI (PC-MRI) is a noninvasive technique to measure blood flow. In particular, global but highly quantitative cerebral blood flow (CBF) measurement using PC-MRI complements several other CBF mapping methods such as arterial spin labeling and dynamic susceptibility contrast MRI by providing a calibration factor. The ability to estimate blood supply in physiological units also lays a foundation for assessment of brain metabolic rate. However, a major obstacle before wider applications of this method is that the slice positioning of the scan, ideally placed perpendicular to the feeding arteries, requires considerable expertise and can present a burden to the operator. In the present work, we proposed that the majority of PC-MRI scans can be positioned using an automatic algorithm, leaving only a small fraction of arteries requiring manual positioning. We implemented and evaluated an algorithm for this purpose based on feature extraction of a survey angiogram, which is of minimal operator dependence. In a comparative test-retest study with 7 subjects, the blood flow measurement using this algorithm showed an inter-session coefficient of variation (CoV) of . The Bland-Altman method showed that the automatic method differs from the manual method by between and , for of the CBF measurements. This is comparable to the variance in CBF measurement using manually-positioned PC MRI alone. In a further application of this algorithm to 157 consecutive subjects from typical clinical cohorts, the algorithm provided successful positioning in 89.7% of the arteries. In 79.6% of the subjects, all four arteries could be planned using the algorithm. Chi-square tests of independence showed that the success rate was not dependent on the age or gender, but the patients showed a trend of lower success rate (p = 0.14) compared to healthy controls. In conclusion, this automatic positioning algorithm could improve the application of PC-MRI in CBF quantification. PMID:24787742
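PC-MRI quantifies flow by converting each pixel's phase to a through-plane velocity, v = VENC · φ/π (phase φ in radians, |φ| ≤ π), and integrating velocity times pixel area over the vessel lumen. A minimal sketch with made-up ROI values:

```python
import math

def flow_from_phase(phase_roi, venc_cm_s, pixel_area_cm2):
    """Volumetric flow (mL/s) from a phase-contrast ROI.

    Velocity per pixel: v = VENC * phase / pi  (phase in radians, |phase| <= pi).
    Flow: sum of velocity * pixel area over the vessel ROI.
    """
    velocities = [venc_cm_s * p / math.pi for p in phase_roi]
    return sum(v * pixel_area_cm2 for v in velocities)  # cm^3/s == mL/s

# Hypothetical 3-pixel vessel ROI with VENC = 80 cm/s and 0.01 cm^2 pixels.
phase = [math.pi / 2, math.pi / 4, math.pi / 4]
q = flow_from_phase(phase, venc_cm_s=80.0, pixel_area_cm2=0.01)
print(round(q, 3))  # mL/s
```

Summing such per-slice flows over the four feeding arteries, and dividing by brain mass, is how PC-MRI yields the global CBF calibration factor described above.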

  18. Low-field and high-field magnetic resonance contrast imaging of magnetoferritin as a pathological model system of iron accumulation

    NASA Astrophysics Data System (ADS)

    Strbak, Oliver; Balejcikova, Lucia; Baciak, Ladislav; Kovac, Jozef; Masarova-Kozelova, Marta; Krafcik, Andrej; Dobrota, Dusan; Kopcansky, Peter

    2017-09-01

    Various pathological processes, including neurodegenerative disorders, are associated with the accumulation of iron, and ferritin is believed to be a precursor of iron accumulation. Physiological ferritin has low relaxivity, which results in only weak detection by magnetic resonance imaging (MRI) techniques. On the other hand, pathological ferritin is associated with disrupted iron homeostasis and structural changes in the mineral core, and should increase the hypointensive artefacts in MRI. On the basis of recent findings with respect to the pathological ferritin structure, we prepared magnetoferritin particles as a possible pathological ferritin model system. The particles were characterised with dynamic light scattering, as well as with superconducting quantum interference device measurements. With the help of low-field (0.2 T) and high-field (4.7 T) MRI standard T2-weighted protocols, we found that it is possible to clearly distinguish between native ferritin as a physiological model system and magnetoferritin as a pathological model system. Surprisingly, the T2-weighted short TI inversion recovery protocol on the low-field system showed the optimum contrast differentiation. Such findings are highly promising for exploiting iron accumulation as a noninvasive diagnostic tool for pathological processes, where magnetoferritin particles could be utilised as MRI iron quantification calibration samples.
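The contrast mechanism rests on relaxivity: an iron-rich mineral core raises the transverse relaxation rate, 1/T2 = 1/T2₀ + r2·C, shortening T2 and darkening T2-weighted images (S = S0·exp(−TE/T2)). A sketch with purely illustrative r2 values and concentrations, not measurements from the study:

```python
import math

def t2_with_agent(t2_base_ms, r2_per_mM_per_s, conc_mM):
    """T2 (ms) after adding a relaxation agent: 1/T2 = 1/T2_0 + r2 * C."""
    rate_s = 1000.0 / t2_base_ms + r2_per_mM_per_s * conc_mM  # relaxation rate, 1/s
    return 1000.0 / rate_s

def t2w_signal(s0, te_ms, t2_ms):
    """T2-weighted signal: S = S0 * exp(-TE / T2)."""
    return s0 * math.exp(-te_ms / t2_ms)

# Illustrative numbers only: weak (ferritin-like) vs strong (magnetoferritin-like) agent.
t2_native = t2_with_agent(100.0, r2_per_mM_per_s=1.0, conc_mM=1.0)
t2_magneto = t2_with_agent(100.0, r2_per_mM_per_s=50.0, conc_mM=1.0)
s_native = t2w_signal(1.0, te_ms=80.0, t2_ms=t2_native)
s_magneto = t2w_signal(1.0, te_ms=80.0, t2_ms=t2_magneto)
print(round(s_native, 3), round(s_magneto, 3))
```

The much lower signal from the high-relaxivity case is the hypointense artefact the abstract describes, and inverting this relation against calibration samples is how iron could be quantified.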

  19. Two-dimensional imaging in a lightweight portable MRI scanner without gradient coils.

    PubMed

    Cooley, Clarissa Zimmerman; Stockmann, Jason P; Armstrong, Brandon D; Sarracanie, Mathieu; Lev, Michael H; Rosen, Matthew S; Wald, Lawrence L

    2015-02-01

    As the premier modality for brain imaging, MRI could find wider applicability if lightweight, portable systems were available for siting in unconventional locations such as intensive care units, physician offices, surgical suites, ambulances, emergency rooms, sports facilities, or rural healthcare sites. We construct and validate a truly portable (<100 kg) and silent proof-of-concept MRI scanner which replaces conventional gradient encoding with a rotating lightweight, cryogen-free, low-field magnet. When rotated about the object, the inhomogeneous field pattern is used as a rotating spatial encoding magnetic field (rSEM) to create generalized projections which encode the iteratively reconstructed two-dimensional (2D) image. Multiple receive channels are used to disambiguate the nonbijective encoding field. The system is validated with experimental images of 2D test phantoms. As with other nonlinear field encoding schemes, the spatial resolution is position dependent, with blurring in the center, but is shown to be likely sufficient for many medical applications. The presented MRI scanner demonstrates the potential for portability by simultaneously relaxing the magnet homogeneity criteria and eliminating the gradient coil. This new architecture and encoding scheme shows convincing proof-of-concept images that are expected to be further improved with refinement of the calibration and methodology. © 2014 Wiley Periodicals, Inc.
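Reconstruction from generalized projections with a known (nonbijective) encoding field amounts to solving a linear inverse problem s = E·m for the image m, typically iteratively. The toy sketch below uses made-up encoding weights and solves the least-squares problem directly via the normal equations, standing in for the iterative solver such a scanner would actually use:

```python
def matvec(A, x):
    """Matrix-vector product for a list-of-rows matrix."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def solve_normal_equations(E, s):
    """Least-squares image estimate: solve (E^T E) m = E^T s by Gaussian elimination."""
    Et = transpose(E)
    A = [[sum(Et[i][k] * E[k][j] for k in range(len(E))) for j in range(len(Et))]
         for i in range(len(Et))]
    b = matvec(Et, s)
    n = len(A)
    # Gaussian elimination with partial pivoting on the augmented matrix.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Toy encoding: each row is a "generalized projection" of a 3-pixel image through
# an inhomogeneous field at a different rotation angle (made-up weights).
E = [[1.0, 0.5, 0.2],
     [0.2, 1.0, 0.5],
     [0.5, 0.2, 1.0],
     [0.3, 0.3, 1.0]]
true_image = [2.0, 0.0, 1.0]
signal = matvec(E, true_image)
recon = solve_normal_equations(E, signal)
print([round(v, 6) for v in recon])
```

When the encoding rows (rotation angles plus multiple receive coils) make E well conditioned, the image is recovered; position-dependent conditioning is exactly why such scanners blur toward the center.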

  20. LORAKS Makes Better SENSE: Phase-Constrained Partial Fourier SENSE Reconstruction without Phase Calibration

    PubMed Central

    Kim, Tae Hyung; Setsompop, Kawin; Haldar, Justin P.

    2016-01-01

    Purpose Parallel imaging and partial Fourier acquisition are two classical approaches for accelerated MRI. Methods that combine these approaches often rely on prior knowledge of the image phase, but the need to obtain this prior information can place practical restrictions on the data acquisition strategy. In this work, we propose and evaluate SENSE-LORAKS, which enables combined parallel imaging and partial Fourier reconstruction without requiring prior phase information. Theory and Methods The proposed formulation is based on combining the classical SENSE model for parallel imaging data with the more recent LORAKS framework for MR image reconstruction using low-rank matrix modeling. Previous LORAKS-based methods have successfully enabled calibrationless partial Fourier parallel MRI reconstruction, but have been most successful with nonuniform sampling strategies that may be hard to implement for certain applications. By combining LORAKS with SENSE, we enable highly-accelerated partial Fourier MRI reconstruction for a broader range of sampling trajectories, including widely-used calibrationless uniformly-undersampled trajectories. Results Our empirical results with retrospectively undersampled datasets indicate that when SENSE-LORAKS reconstruction is combined with an appropriate k-space sampling trajectory, it can provide substantially better image quality at high-acceleration rates relative to existing state-of-the-art reconstruction approaches. Conclusion The SENSE-LORAKS framework provides promising new opportunities for highly-accelerated MRI. PMID:27037836
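For context on the parallel-imaging half of the combination (classical SENSE, not the LORAKS low-rank part), each aliased pixel in an R = 2 uniformly undersampled image is unfolded by solving a small linear system built from the coil sensitivities at the two fold-over locations. A toy two-coil sketch with made-up sensitivities:

```python
def sense_unfold(aliased, sens_a, sens_b):
    """Unfold one R=2 aliased pixel pair from multi-coil data.

    aliased: per-coil aliased values y_c = S_c(a)*rho_a + S_c(b)*rho_b.
    sens_a, sens_b: per-coil sensitivities at the two fold-over locations.
    Solves the least-squares 2x2 normal equations for (rho_a, rho_b).
    """
    saa = sum(s * s for s in sens_a)
    sbb = sum(s * s for s in sens_b)
    sab = sum(a * b for a, b in zip(sens_a, sens_b))
    ya = sum(s * y for s, y in zip(sens_a, aliased))
    yb = sum(s * y for s, y in zip(sens_b, aliased))
    det = saa * sbb - sab * sab
    rho_a = (sbb * ya - sab * yb) / det
    rho_b = (saa * yb - sab * ya) / det
    return rho_a, rho_b

# Two coils, made-up real-valued sensitivities; true pixel values rho_a=3, rho_b=1.
sens_a, sens_b = [1.0, 0.2], [0.3, 0.9]
rho = (3.0, 1.0)
aliased = [sa * rho[0] + sb * rho[1] for sa, sb in zip(sens_a, sens_b)]
print(sense_unfold(aliased, sens_a, sens_b))
```

Real data are complex-valued and require measured sensitivity maps; the phase constraint and low-rank modeling that SENSE-LORAKS adds are what let it go further without a separate phase calibration scan.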

  1. 2D Imaging in a Lightweight Portable MRI Scanner without Gradient Coils

    PubMed Central

    Cooley, Clarissa Zimmerman; Stockmann, Jason P.; Armstrong, Brandon D.; Sarracanie, Mathieu; Lev, Michael H.; Rosen, Matthew S.; Wald, Lawrence L.

    2014-01-01

    Purpose As the premier modality for brain imaging, MRI could find wider applicability if lightweight, portable systems were available for siting in unconventional locations such as Intensive Care Units, physician offices, surgical suites, ambulances, emergency rooms, sports facilities, or rural healthcare sites. Methods We construct and validate a truly portable (<100 kg) and silent proof-of-concept MRI scanner which replaces conventional gradient encoding with a rotating lightweight, cryogen-free, low-field magnet. When rotated about the object, the inhomogeneous field pattern is used as a rotating Spatial Encoding Magnetic field (rSEM) to create generalized projections which encode the iteratively reconstructed 2D image. Multiple receive channels are used to disambiguate the non-bijective encoding field. Results The system is validated with experimental images of 2D test phantoms. As with other non-linear field encoding schemes, the spatial resolution is position dependent, with blurring in the center, but is shown to be likely sufficient for many medical applications. Conclusion The presented MRI scanner demonstrates the potential for portability by simultaneously relaxing the magnet homogeneity criteria and eliminating the gradient coil. This new architecture and encoding scheme shows convincing proof-of-concept images that are expected to be further improved with refinement of the calibration and methodology. PMID:24668520

  2. SU-F-303-15: Ion Chamber Dose Response in Magnetic Fields as a Function of Incident Photon Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malkov, V. N.; Rogers, D. W. O.

    2015-06-15

    Purpose: In considering the continued development of synergetic MRI-radiation therapy machines, we seek to quantify the variability of ion chamber response per unit dose in the presence of magnetic fields of varying strength as a function of incident photon beam quality and geometric configuration. Methods: To account for the effect of magnetic fields on the trajectory of charged particles, a new algorithm was introduced into the EGSnrc Monte Carlo code. In the egs-chamber user code, the dose to the cavity of an NE2571 ion chamber is calculated in two configurations, in 0 to 2 T magnetic fields, with an incoming parallel 10×10 cm² photon beam with energies ranging between 0.5 MeV and 8 MeV. In the first, the photon beam is incident on the long axis of the ion chamber (config-1), and in the second the beam is parallel to the long axis and incident from the conical end of the chamber (config-2). For both, the magnetic field is perpendicular to the direction of the beam and the long axis of the chamber. Results: The ion chamber response per unit dose to water at the same point is determined as a function of magnetic field and is normalized to the 0 T case for each of the incoming photon energies. For both configurations, accurate modeling of the ion chamber yielded closer agreement with the experimental results obtained by Meijsing et al. (2009). Config-1 yields a gradual increase in response with increasing field strength, to a maximum of 13.4% and 1.4% for 1 MeV and 8 MeV photon beams, respectively. Config-2 produced a decrease in response of up to 6% and 13% for 0.5 MeV and 8 MeV beams, respectively. Conclusion: These results provide further support for ion chamber calibration in MRI-radiotherapy coupled systems and demonstrate a noticeable energy dependence for clinically relevant fields.

  3. A fuzzy integral method based on the ensemble of neural networks to analyze fMRI data for cognitive state classification across multiple subjects.

    PubMed

    Cacha, L A; Parida, S; Dehuri, S; Cho, S-B; Poznanski, R R

    2016-12-01

    The huge number of voxels in fMRI over time poses a major challenge for effective analysis. Fast, accurate, and reliable classifiers are required for estimating the decoding accuracy of brain activities. Although machine-learning classifiers seem promising, individual classifiers have their own limitations. To address this limitation, the present paper proposes a method based on an ensemble of neural networks to analyze fMRI data for cognitive state classification across multiple subjects. The fuzzy integral (FI) approach has been employed as an efficient tool for combining the different classifiers. The FI approach led to the development of a classifier-ensemble technique that performs better than any single classifier by reducing the misclassification, the bias, and the variance. The proposed method successfully classified the different cognitive states for multiple subjects with high classification accuracy. Comparison of the performance of the ensemble neural network method with that of the individual neural networks strongly points toward the usefulness of the proposed method.
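    The fuzzy-integral combination step can be sketched as a Sugeno integral over sorted classifier confidences; the additive fuzzy measure built from per-classifier "worths" below is a simplifying assumption, not necessarily the measure used in the paper:

```python
import numpy as np

def sugeno_integral(scores, worth):
    """Sugeno fuzzy integral of classifier confidences `scores` for one class,
    using an additive fuzzy measure built from per-classifier `worth`
    (a simplification; the paper's fuzzy measure may differ)."""
    order = np.argsort(scores)[::-1]                 # sort confidences descending
    h = np.asarray(scores, float)[order]
    g = np.cumsum(np.asarray(worth, float)[order])   # measure of the top-i subset
    g = np.minimum(g, 1.0)
    return max(min(hi, gi) for hi, gi in zip(h, g))

# three networks' confidences for two hypothetical cognitive states
worth = [0.4, 0.35, 0.25]                 # normalized classifier worths (assumed)
state_a = sugeno_integral([0.9, 0.7, 0.2], worth)
state_b = sugeno_integral([0.3, 0.4, 0.8], worth)
predicted = "A" if state_a > state_b else "B"
```

    The integral rewards agreement among highly weighted classifiers instead of simply averaging their outputs.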

  4. Evaluating effects of methylphenidate on brain activity in cocaine addiction: a machine-learning approach

    NASA Astrophysics Data System (ADS)

    Rish, Irina; Bashivan, Pouya; Cecchi, Guillermo A.; Goldstein, Rita Z.

    2016-03-01

    The objective of this study is to investigate effects of methylphenidate on brain activity in individuals with cocaine use disorder (CUD) using functional MRI (fMRI). Methylphenidate hydrochloride (MPH) is an indirect dopamine agonist commonly used for treating attention deficit/hyperactivity disorders; it was also shown to have some positive effects on CUD subjects, such as improved stop-signal reaction times associated with better control/inhibition [1], as well as normalized task-related brain activity [2] and resting-state functional connectivity in specific areas [3]. While prior fMRI studies of MPH in CUDs have focused on mass-univariate statistical hypothesis testing, this paper evaluates multivariate, whole-brain effects of MPH as captured by the generalization (prediction) accuracy of different classification techniques applied to features extracted from resting-state functional networks (e.g., node degrees). Our multivariate predictive results based on resting-state data from [3] suggest that MPH tends to normalize network properties such as voxel degrees in CUD subjects, thus providing additional evidence for potential benefits of MPH in treating cocaine addiction.
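    The node-degree features mentioned above can be sketched by thresholding a correlation matrix of region time series; the threshold of 0.5 and the synthetic data are assumptions for illustration:

```python
import numpy as np

# Node degrees from a thresholded resting-state correlation network.
rng = np.random.default_rng(2)
T, nrois = 120, 10
ts = rng.normal(size=(T, nrois))                  # stand-in ROI time series
ts[:, 1] = ts[:, 0] + 0.1 * rng.normal(size=T)    # make ROIs 0 and 1 correlated

corr = np.corrcoef(ts.T)                          # nrois x nrois correlation matrix
adj = (np.abs(corr) > 0.5) & ~np.eye(nrois, dtype=bool)  # threshold, drop self-loops
degrees = adj.sum(axis=1)                         # per-node degree feature vector
```

    The resulting degree vector is the kind of network feature that can be fed to a classifier to compare groups or drug conditions.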

  5. Discriminant analysis of resting-state functional connectivity patterns on the Grassmann manifold

    NASA Astrophysics Data System (ADS)

    Fan, Yong; Liu, Yong; Jiang, Tianzi; Liu, Zhening; Hao, Yihui; Liu, Haihong

    2010-03-01

    The functional networks extracted from fMRI images using independent component analysis have been demonstrated to be informative for distinguishing brain states of cognitive functions and neurological diseases. In this paper, we propose a novel algorithm for discriminant analysis of functional networks encoded by spatial independent components. The functional networks of each individual are used as bases for a linear subspace, referred to as a functional connectivity pattern, which facilitates a comprehensive characterization of temporal signals of fMRI data. The functional connectivity patterns of different individuals are analyzed on the Grassmann manifold by adopting a principal angle based subspace distance. In conjunction with a support vector machine classifier, a forward component selection technique is proposed to select independent components for constructing the most discriminative functional connectivity pattern. The discriminant analysis method has been applied to an fMRI-based schizophrenia study with 31 schizophrenia patients and 31 healthy individuals. The experimental results demonstrate that the proposed method not only achieves a promising classification performance for distinguishing schizophrenia patients from healthy controls, but also identifies discriminative functional networks that are informative for schizophrenia diagnosis.
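    The principal-angle subspace distance on the Grassmann manifold can be computed from the SVD of the product of orthonormal bases; a minimal sketch (not the authors' full pipeline):

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles between the subspaces spanned by the columns of A and B."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)  # cosines of the angles
    return np.arccos(np.clip(s, -1.0, 1.0))

def grassmann_distance(A, B):
    """Geodesic-style distance: L2 norm of the principal-angle vector."""
    return np.linalg.norm(principal_angles(A, B))

X = np.array([[1., 0.], [0., 1.], [0., 0.]])        # xy-plane in R^3
Y = np.array([[1., 0.], [0., 0.], [0., 1.]])        # xz-plane in R^3
d = grassmann_distance(X, Y)                        # planes share the x-axis
```

    The two planes share one direction (angle 0) and differ by a right angle in the other, so the distance is pi/2.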

  6. Gadolinium-Doped Gallic Acid-Zinc/Aluminium-Layered Double Hydroxide/Gold Theranostic Nanoparticles for a Bimodal Magnetic Resonance Imaging and Drug Delivery System.

    PubMed

    Sani Usman, Muhammad; Hussein, Mohd Zobir; Fakurazi, Sharida; Masarudin, Mas Jaffri; Ahmad Saad, Fathinul Fikri

    2017-08-31

    We have developed gadolinium-based theranostic nanoparticles for co-delivery of a drug and a magnetic resonance imaging (MRI) contrast agent, using Zn/Al-layered double hydroxide as the nanocarrier platform, a naturally occurring phenolic compound, gallic acid (GA), as the therapeutic agent, and Gd(NO₃)₃ as the diagnostic agent. Gold nanoparticles (AuNPs) were grown on the system to enhance the contrast for MRI. The nanoparticles were characterized using techniques such as Hi-TEM, XRD, and ICP-ES. A kinetic release study of the GA from the nanoparticles showed that about 70% of the GA was released over a period of 72 h. The in vitro cell viability test showed relatively low toxicity of the nanoparticles to a human cell line (3T3) and improved toxicity against a cancerous cell line (HepG2). A preliminary test of the contrast properties of the nanoparticles, performed on a 3 Tesla MRI machine at various concentrations of GAGZAu with water as a reference, indicates that the nanoparticles have promising dual diagnostic and therapeutic features for the further development of clinical cancer treatment.

  7. SU-E-T-145: MRI Gel Dosimetry Applied to Dose Profile Determination for 50kV X-Ray Tube.

    PubMed

    Schwarcke, M; Marques, T; Nicolucci, P; Filho, O Baffa

    2012-06-01

    The aim of this study was to use MRI gel dosimetry to determine the dose profile of a 50 kV MAGNUM® X-ray tube (MOXTEK Inc.), in order to calibrate small solid dosimeters of alanine, tooth enamel and LiF TLDs, commonly used in clinical quality assurance and dating dosimetry. MAGIC-f polymer gel was kept in two plastic containers of 100 mL, avoiding attenuation of the primary beam through the wall. A beam aperture of 3 mm and a dose rate of 16.5 Gy/min were set, reproducing irradiation conditions of interest. The dose rate was assumed based on the vendor information for the tube, and a dose of 30 Gy was delivered at the surface of the gel. The MAGIC-f gel was irradiated at source-surface distances (SSD) of 0.1 cm and 1.0 cm. Twenty-four hours after irradiation, the gel was scanned in an Achieva® 3T Philips® MRI scanner using a relaxometry sequence with 32 echoes, time-to-echo (TE) of 15.0 ms, time-to-repetition (TR) of 6000 ms and field-of-view (FOV) of 0.5×0.5×2.0 mm. The dose map at the central plane of irradiation was calculated from the T2 relaxometry map. The gel dosimetry results evidenced a build-up depth of 0.13 cm for SSD = 0.1 cm, while no build-up was detected for SSD = 1.0 cm. However, the dose profile evidenced a high dose gradient at SSD = 0.1 cm, the dose decreasing from 100% to 30% within 1.4 cm depth inside the gel; in turn, the dose distribution is homogeneous beyond 0.4 cm depth for SSD = 1.0 cm. MRI gel dosimetry using MAGIC-f was presented as a feasible technique to determine dose profiles for kilovoltage X-ray tubes. The results evidenced that the calibration of small solid dosimeters can be performed using an SSD of 1.0 cm in the 50 kV MAGNUM® X-ray tube using a 0.4 cm/g/cm³ filter. This work was funded by CNPQ, CAPES and FAPESP. © 2012 American Association of Physicists in Medicine.
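    The calibration step, mapping measured relaxation rates to dose, can be sketched as a linear fit over calibration vials that is then inverted on the phantom's R2 map; the linear response and all numbers below are synthetic assumptions:

```python
import numpy as np

# Gel-dosimetry calibration sketch: fit R2 (= 1/T2) versus known dose from
# calibration vials, then invert the fit to turn a measured R2 map into dose.
doses = np.array([0., 5., 10., 20., 30.])     # known vial doses (Gy), synthetic
r2 = 2.0 + 0.15 * doses                       # assumed linear R2 response (1/s)
slope, intercept = np.polyfit(doses, r2, 1)   # fit R2 = slope * D + intercept

r2_map = np.array([[2.0, 3.5],                # measured R2 values in the gel
                   [5.0, 6.5]])
dose_map = (r2_map - intercept) / slope       # inverted calibration -> dose (Gy)
```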

  8. An Automatic Image Processing Workflow for Daily Magnetic Resonance Imaging Quality Assurance.

    PubMed

    Peltonen, Juha I; Mäkelä, Teemu; Sofiev, Alexey; Salli, Eero

    2017-04-01

    The performance of magnetic resonance imaging (MRI) equipment is typically monitored with a quality assurance (QA) program. The QA program includes various tests performed at regular intervals; users may execute specific tests daily, weekly, or monthly, with the exact interval varying according to department policies, machine setup and usage, the manufacturer's recommendations, and available resources. In our experience, a single image acquired before the first patient of the day offers a low-effort and effective system check. When this daily QA check is repeated with identical imaging parameters and phantom setup, the data can be used to derive various time series of scanner performance. However, daily QA with manual processing can quickly become laborious in a multi-scanner environment. Fully automated image analysis and results output can positively impact the QA process by decreasing reaction time, improving repeatability, and offering novel performance evaluation methods. In this study, we have developed a daily MRI QA workflow that can measure multiple scanner performance parameters with minimal manual labor. The daily QA system is built around a phantom image taken by the radiographers at the beginning of the day. The image is acquired with a consistent phantom setup and standardized imaging parameters. Recorded parameters are processed into graphs available to everyone involved in the MRI QA process via a web-based interface. The presented automatic MRI QA system provides an efficient tool for following the short- and long-term stability of MRI scanners.
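    One way such a daily time series can be monitored automatically is with simple control limits around a baseline window; this is a hedged sketch, not the authors' actual analysis, and the SNR values are synthetic:

```python
import numpy as np

def qa_flags(series, baseline_n=20, nsigma=3.0):
    """Flag daily QA readings outside mean +/- nsigma * std of a baseline window."""
    base = np.asarray(series[:baseline_n], float)
    mu, sd = base.mean(), base.std(ddof=1)
    lo, hi = mu - nsigma * sd, mu + nsigma * sd
    flagged = [i for i, v in enumerate(series) if not (lo <= v <= hi)]
    return flagged, (lo, hi)

rng = np.random.default_rng(3)
snr = list(100 + rng.normal(0, 2, size=40))  # daily phantom SNR readings (synthetic)
snr.append(80.0)                             # a sudden drop, e.g. a coil fault
flagged, limits = qa_flags(snr)
```

    The faulty day falls far outside the control limits and would trigger an alert before the first patient is scanned.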

  9. Textural kinetics: a novel dynamic contrast-enhanced (DCE)-MRI feature for breast lesion classification.

    PubMed

    Agner, Shannon C; Soman, Salil; Libfeld, Edward; McDonald, Margie; Thomas, Kathleen; Englander, Sarah; Rosen, Mark A; Chin, Deanna; Nosher, John; Madabhushi, Anant

    2011-06-01

    Dynamic contrast-enhanced (DCE)-magnetic resonance imaging (MRI) of the breast has emerged as an adjunct imaging tool to conventional X-ray mammography due to its high detection sensitivity. Despite the increasing use of breast DCE-MRI, specificity in distinguishing malignant from benign breast lesions is low, and interobserver variability in lesion classification is high. The novel contribution of this paper is in the definition of a new DCE-MRI descriptor that we call textural kinetics, which attempts to capture spatiotemporal changes in breast lesion texture in order to distinguish malignant from benign lesions. We qualitatively and quantitatively demonstrated on 41 breast DCE-MRI studies that textural kinetic features outperform signal intensity kinetics and lesion morphology features in distinguishing benign from malignant lesions. A probabilistic boosting tree (PBT) classifier in conjunction with textural kinetic descriptors yielded an accuracy of 90%, sensitivity of 95%, specificity of 82%, and an area under the curve (AUC) of 0.92. Graph embedding, used for qualitative visualization of a low-dimensional representation of the data, showed the best separation between benign and malignant lesions when using textural kinetic features. The PBT classifier results and trends were also corroborated via a support vector machine classifier which showed that textural kinetic features outperformed the morphological, static texture, and signal intensity kinetics descriptors. When textural kinetic attributes were combined with morphologic descriptors, the resulting PBT classifier yielded 89% accuracy, 99% sensitivity, 76% specificity, and an AUC of 0.91.
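    The idea of textural kinetics, tracking a texture statistic rather than only mean intensity across DCE timepoints, can be sketched as follows (local variance is used as a stand-in texture feature; the paper's descriptors are richer):

```python
import numpy as np

# Textural kinetics sketch: for a lesion imaged at several DCE timepoints,
# track both the mean enhancement and a texture statistic over time.
rng = np.random.default_rng(4)
T = 6
lesion = [rng.normal(loc=t, scale=1.0 + 0.3 * t, size=(8, 8)) for t in range(T)]

# signal-intensity kinetics: mean enhancement per timepoint
intensity_curve = np.array([img.mean() for img in lesion])

# textural kinetics: a texture statistic (here, variance) tracked over time
texture_curve = np.array([img.var() for img in lesion])

# the concatenated curves form a feature vector for a lesion classifier
feature_vector = np.concatenate([intensity_curve, texture_curve])
```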

  10. Evaluation of dose delivery accuracy of gamma knife using MRI polymer gel dosimeter in an inhomogeneous phantom

    NASA Astrophysics Data System (ADS)

    Pourfallah, T. A.; Riahi Alam, N.; Allahverdi, M.; Ay, M.; Zahmatkesh, M.

    2009-05-01

    Polymer gel dosimetry is still the only dosimetry method for directly measuring three-dimensional dose distributions. MRI polymer gel dosimeters are tissue equivalent and can act as a phantom material. Because of its high dose-response sensitivity, MRI was chosen as the readout device. In this study, dose profiles calculated with the treatment-planning software (LGP) and measured with the MR polymer gel dosimeter for single-shot irradiations were compared. A custom-built 16 cm diameter spherical Plexiglas head phantom was used in this study. Inside the phantom there is a cubic cutout for insertion of gel phantoms and another cutout for inserting the inhomogeneities. The phantoms were scanned with a 1.5 T MRI scanner (Siemens syngo MR 2004A 4VA25A). A multiple spin-echo sequence with 32 echoes was used for the MRI scans. Calibration relations between the spin-spin relaxation rate and the absorbed dose were obtained by using small cylindrical vials, which were filled with PAGAT polymer gel from the same batch as for the spherical phantom. 1D and 2D data obtained using the gel dosimeter for homogeneous and inhomogeneous phantoms were compared with doses obtained using the LGP calculation. The distance between relative isodose curves obtained for the homogeneous and heterogeneous phantoms exceeds the accepted total positioning error (>±2 mm). The findings of this study indicate that dose measurement using the PAGAT gel dosimeter can be used for verifying the dose delivery accuracy of the Gamma Knife (GK) unit in the presence of inhomogeneities.

  11. The SED Machine: a dedicated transient IFU spectrograph

    NASA Astrophysics Data System (ADS)

    Ben-Ami, Sagi; Konidaris, Nick; Quimby, Robert; Davis, Jack T.; Ngeow, Chow Choong; Ritter, Andreas; Rudy, Alexander

    2012-09-01

    The Spectral Energy Distribution (SED) Machine is an Integral Field Unit (IFU) spectrograph designed specifically to classify transients. It is comprised of two subsystems. A lenslet-based IFU, with a 26" × 26" Field of View (FoV) and ˜0.75" spaxels, feeds a constant-resolution (R˜100) triple-prism. The dispersed rays are then imaged onto an off-the-shelf CCD detector. The second subsystem, the Rainbow Camera (RC), is a 4-band seeing-limited imager with a 12.5' × 12.5' FoV around the IFU that will allow real-time spectrophotometric calibrations with ˜5% accuracy. Data from both subsystems will be processed in real time using a dedicated reduction pipeline. The SED Machine will be mounted on the Palomar 60-inch robotic telescope (P60), covers a wavelength range of 370-920 nm at high throughput, and will classify transients from on-going and future surveys at a high rate. This will provide good statistics for common types of transients, and a better ability to discover and study rare and exotic ones. We present the science cases, optical design, and data reduction strategy of the SED Machine. The SED Machine is currently being constructed at the California Institute of Technology, and will be commissioned in the spring of 2013.

  12. Geometric Calibration of Full Spherical Panoramic Ricoh-Theta Camera

    NASA Astrophysics Data System (ADS)

    Aghayari, S.; Saadatseresht, M.; Omidalizarandi, M.; Neumann, I.

    2017-05-01

    A novel calibration process for RICOH-THETA, a full-view fisheye camera, is proposed, which has numerous applications as a low-cost sensor in different disciplines such as photogrammetry, robotics and machine vision. Ricoh Company developed this camera in 2014; it consists of two lenses and is able to capture the whole surrounding environment in one shot. In this research, each lens is calibrated separately and the interior/relative orientation parameters (IOPs and ROPs) of the camera are determined on the basis of a calibration network designed on the central and side images captured by the aforementioned lenses. Accordingly, the designed calibration network is considered as a free distortion grid and applied to the measured control points in image space as correction terms by means of bilinear interpolation. After the corresponding corrections are applied, image coordinates are transformed to the unit sphere, an intermediate space between object space and image space, in the form of spherical coordinates. Afterwards, the IOPs and EOPs of each lens are determined separately through a statistical bundle adjustment procedure based on collinearity condition equations. Subsequently, the ROPs of the two lenses are computed from both sets of EOPs. Our experiments show that by applying a 3×3 free distortion grid, image measurement residuals diminish from 1.5 to 0.25 degrees on the aforementioned unit sphere.
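    The pixel-to-unit-sphere transformation can be sketched with an equidistant fisheye model; the projection model and all parameters below are illustrative assumptions, not the calibrated Ricoh Theta model:

```python
import numpy as np

def pixel_to_sphere(u, v, cx, cy, f):
    """Map a fisheye pixel (u, v) to a unit vector, assuming an equidistant
    projection r = f * theta (the actual Ricoh Theta lens model may differ).
    (cx, cy) is the principal point and f the focal length in pixels."""
    du, dv = u - cx, v - cy
    r = np.hypot(du, dv)
    theta = r / f                          # angle from the optical axis
    phi = np.arctan2(dv, du)               # azimuth around the axis
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

# a pixel slightly right of an assumed principal point (640, 480), f = 300 px
p = pixel_to_sphere(660.0, 480.0, 640.0, 480.0, 300.0)
```

    Once all control points live on the unit sphere, orientation parameters can be estimated independently of the severe fisheye distortion in the raw images.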

  13. Design of a tracked ultrasound calibration phantom made of LEGO bricks

    NASA Astrophysics Data System (ADS)

    Walsh, Ryan; Soehl, Marie; Rankin, Adam; Lasso, Andras; Fichtinger, Gabor

    2014-03-01

    PURPOSE: Spatial calibration of tracked ultrasound systems is commonly performed using precisely fabricated phantoms. Machining or 3D printing has relatively high cost and is not easily available. Moreover, the possibilities for modifying the phantoms are very limited. Our goal was to find a method to construct a calibration phantom from affordable, widely available components, which can be built in a short time, can be easily modified, and provides accuracy comparable to existing solutions. METHODS: We designed an N-wire calibration phantom made of LEGO® bricks. To affirm the phantom's reproducibility and build time, ten builds were done by first-time users. The phantoms were used for tracked ultrasound calibration by an experienced user. The success of each user's build was determined by the lowest root mean square (RMS) wire reprojection error of three calibrations. The accuracy and variance of the calibrations were evaluated for various tracked ultrasound probes. The proposed model was compared to two of the currently available phantom models for both electromagnetic and optical tracking. RESULTS: The phantom was successfully built by all ten first-time users in an average time of 18.8 minutes. It cost approximately $10 CAD for the required LEGO® bricks and averaged 0.69 mm of error in calibration reproducibility for ultrasound calibrations. It is one third the cost of similar 3D printed phantoms and takes much less time to build. The proposed phantom's image reprojections were 0.13 mm more erroneous than those of the highest-performing current phantom model. The average standard deviation of multiple 3D image reprojections differed by 0.05 mm between the phantoms. CONCLUSION: The phantom could be built in less time and at one third the cost of similar 3D printed models, and was found to be capable of producing calibrations equivalent to those of 3D printed phantoms.
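    The N-wire principle behind such phantoms: the middle wire of each "N" crosses the image plane at a point whose position along the diagonal is given by a distance ratio measured in the ultrasound image. A minimal sketch with made-up coordinates:

```python
import numpy as np

# Phantom coordinates (mm) of one "N": the diagonal wire runs from p1 to p2
# between two parallel wires (illustrative geometry, not a specific phantom).
p1 = np.array([0.0, 0.0, 0.0])     # diagonal start at the front wire
p2 = np.array([40.0, 0.0, 0.0])    # diagonal end at the back wire

# In the image, the three wire cross-sections appear as dots; the ratio of
# in-plane distances between the middle dot and the outer dots locates the
# intersection point along the diagonal.
d_left, d_right = 12.0, 28.0               # measured image-plane distances (mm)
alpha = d_left / (d_left + d_right)        # fractional position on the diagonal

middle_phantom = p1 + alpha * (p2 - p1)    # known 3D point in phantom coordinates
```

    Pairing such known 3D points with their 2D image positions yields the image-to-probe calibration transform.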

  14. Sensory Feedback for Lower Extremity Prostheses Incorporating Targeted Muscle Reinnervation (TMR)

    DTIC Science & Technology

    2017-10-01

    …map and characterize the sensory capabilities of lower extremity Targeted Reinnervation (TR) sites under tactile stimulation, and (2) measure the…descent machine; developed new tactile stimulators that we expect to use in later stages of this project; and completed baseline studies to calibrate

  15. Abnormal brain structure as a potential biomarker for venous erectile dysfunction: evidence from multimodal MRI and machine learning.

    PubMed

    Li, Lingli; Fan, Wenliang; Li, Jun; Li, Quanlin; Wang, Jin; Fan, Yang; Ye, Tianhe; Guo, Jialun; Li, Sen; Zhang, Youpeng; Cheng, Yongbiao; Tang, Yong; Zeng, Hanqing; Yang, Lian; Zhu, Zhaohui

    2018-03-29

    To investigate the cerebral structural changes related to venous erectile dysfunction (VED) and the relationship of these changes to clinical symptoms and disorder duration, and to distinguish patients with VED from healthy controls using machine learning classification. 45 VED patients and 50 healthy controls were included. Voxel-based morphometry (VBM), tract-based spatial statistics (TBSS) and correlation analyses of VED patients and clinical variables were performed. A machine learning classification method was adopted to confirm its effectiveness in distinguishing VED patients from healthy controls. Compared to healthy control subjects, VED patients showed significantly decreased cortical volumes in the left postcentral gyrus and precentral gyrus, while only the right middle temporal gyrus showed a significant increase in cortical volume. Increased axial diffusivity (AD), radial diffusivity (RD) and mean diffusivity (MD) values were observed in widespread brain regions. Certain regions showing these alterations were significantly correlated with clinical symptoms and disorder durations. Machine learning analyses discriminated patients from controls with an overall accuracy of 96.7%, sensitivity of 93.3% and specificity of 99.0%. Cortical volume and white matter (WM) microstructural changes were observed in VED patients and showed significant correlations with clinical symptoms and dysfunction durations. Various DTI-derived indices of some brain regions could be regarded as reliable discriminating features between VED patients and healthy control subjects, as shown by machine learning analyses. • Multimodal magnetic resonance imaging helps clinicians to assess patients with VED. • VED patients show cerebral structural alterations related to their clinical symptoms. • Machine learning analyses discriminated VED patients from controls with excellent performance.
• Machine learning classification provided a preliminary demonstration of DTI's clinical use.
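    The reported accuracy, sensitivity and specificity follow directly from confusion-matrix counts; the counts below are illustrative, not the study's raw data:

```python
def classification_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)            # true-positive rate (patients found)
    specificity = tn / (tn + fp)            # true-negative rate (controls cleared)
    return accuracy, sensitivity, specificity

# hypothetical counts for a 45-patient / 50-control split (for illustration only)
acc, sens, spec = classification_metrics(tp=42, fn=3, tn=50, fp=0)
```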

  16. Resting-state functional magnetic resonance imaging for surgical planning in pediatric patients: a preliminary experience.

    PubMed

    Roland, Jarod L; Griffin, Natalie; Hacker, Carl D; Vellimana, Ananth K; Akbari, S Hassan; Shimony, Joshua S; Smyth, Matthew D; Leuthardt, Eric C; Limbrick, David D

    2017-12-01

    OBJECTIVE Cerebral mapping for surgical planning and operative guidance is a challenging task in neurosurgery. Pediatric patients are often poor candidates for many modern mapping techniques because of inability to cooperate due to their immature age, cognitive deficits, or other factors. Resting-state functional MRI (rs-fMRI) is uniquely suited to benefit pediatric patients because it is inherently noninvasive and does not require task performance or significant cooperation. Recent advances in the field have made mapping cerebral networks possible on an individual basis for use in clinical decision making. The authors present their initial experience translating rs-fMRI into clinical practice for surgical planning in pediatric patients. METHODS The authors retrospectively reviewed cases in which the rs-fMRI analysis technique was used prior to craniotomy in pediatric patients undergoing surgery in their institution. Resting-state analysis was performed using a previously trained machine-learning algorithm for identification of resting-state networks on an individual basis. Network maps were uploaded to the clinical imaging and surgical navigation systems. Patient demographic and clinical characteristics, including need for sedation during imaging and use of task-based fMRI, were also recorded. RESULTS Twenty patients underwent rs-fMRI prior to craniotomy between December 2013 and June 2016. Their ages ranged from 1.9 to 18.4 years, and 12 were male. Five of the 20 patients also underwent task-based fMRI and one underwent awake craniotomy. Six patients required sedation to tolerate MRI acquisition, including resting-state sequences. Exemplar cases are presented including anatomical and resting-state functional imaging. CONCLUSIONS Resting-state fMRI is a rapidly advancing field of study allowing for whole brain analysis by a noninvasive modality. It is applicable to a wide range of patients and effective even under general anesthesia. 
The nature of resting-state analysis precludes any need for task cooperation. These features make rs-fMRI an ideal technology for cerebral mapping in pediatric neurosurgical patients. This review of the use of rs-fMRI mapping in an initial pediatric case series demonstrates the feasibility of utilizing this technique in pediatric neurosurgical patients. The preliminary experience presented here is a first step in translating this technique to a broader clinical practice.

  17. Task-specific feature extraction and classification of fMRI volumes using a deep neural network initialized with a deep belief network: Evaluation using sensorimotor tasks

    PubMed Central

    Jang, Hojin; Plis, Sergey M.; Calhoun, Vince D.; Lee, Jong-Hwan

    2016-01-01

    Feedforward deep neural networks (DNN), artificial neural networks with multiple hidden layers, have recently demonstrated a record-breaking performance in multiple areas of applications in computer vision and speech processing. Following the success, DNNs have been applied to neuroimaging modalities including functional/structural magnetic resonance imaging (MRI) and positron-emission tomography data. However, no study has explicitly applied DNNs to 3D whole-brain fMRI volumes and thereby extracted hidden volumetric representations of fMRI that are discriminative for a task performed as the fMRI volume was acquired. Our study applied fully connected feedforward DNN to fMRI volumes collected in four sensorimotor tasks (i.e., left-hand clenching, right-hand clenching, auditory attention, and visual stimulus) undertaken by 12 healthy participants. Using a leave-one-subject-out cross-validation scheme, a restricted Boltzmann machine-based deep belief network was pretrained and used to initialize weights of the DNN. The pretrained DNN was fine-tuned while systematically controlling weight-sparsity levels across hidden layers. Optimal weight-sparsity levels were determined from a minimum validation error rate of fMRI volume classification. Minimum error rates (mean ± standard deviation; %) of 6.9 (± 3.8) were obtained from the three-layer DNN with the sparsest condition of weights across the three hidden layers. These error rates were even lower than the error rates from the single-layer network (9.4 ± 4.6) and the two-layer network (7.4 ± 4.1). The estimated DNN weights showed spatial patterns that are remarkably task-specific, particularly in the higher layers. The output values of the third hidden layer represented distinct patterns/codes of the 3D whole-brain fMRI volume and encoded the information of the tasks as evaluated from representational similarity analysis. 
Our reported findings show the ability of the DNN to classify a single fMRI volume based on the extraction of hidden representations of fMRI volumes associated with tasks across multiple hidden layers. Our study may be beneficial to the automatic classification/diagnosis of neuropsychiatric and neurological diseases and prediction of disease severity and recovery in (pre-) clinical settings using fMRI volumes without requiring an estimation of activation patterns or ad hoc statistical evaluation. PMID:27079534
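    The representational similarity analysis mentioned above compares the similarity structure of a layer's activation patterns with that of the task conditions; a minimal sketch with synthetic data (correlation-distance RDMs, Pearson comparison of the upper triangles):

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - correlation between patterns
    (one row per condition)."""
    return 1.0 - np.corrcoef(patterns)

def rsa_score(patterns_a, patterns_b):
    """Pearson correlation between the upper triangles of the two RDMs."""
    ra, rb = rdm(patterns_a), rdm(patterns_b)
    iu = np.triu_indices_from(ra, k=1)
    return np.corrcoef(ra[iu], rb[iu])[0, 1]

rng = np.random.default_rng(5)
conds = rng.normal(size=(6, 50))                  # 6 task conditions, 50 "units"
layer = conds + 0.05 * rng.normal(size=(6, 50))   # layer codes echo task structure
score = rsa_score(conds, layer)                   # high when structures match
```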

  18. The impact of inspired oxygen levels on calibrated fMRI measurements of M, OEF and resting CMRO2 using combined hypercapnia and hyperoxia

    PubMed Central

    Lajoie, Isabelle; Tancredi, Felipe B.; Hoge, Richard D.

    2017-01-01

    Recent calibrated fMRI techniques using combined hypercapnia and hyperoxia allow mapping of the resting cerebral metabolic rate of oxygen (CMRO2) in absolute units, the oxygen extraction fraction (OEF) and the calibration parameter M (maximum BOLD). Adopting such a technique requires knowledge of the precision and accuracy of the model-derived parameters. One factor that may affect precision and accuracy is the level of oxygen provided during periods of hyperoxia (HO). A high level of oxygen may bring the BOLD responses closer to the maximum M value and hence reduce the error associated with the M interpolation. However, an increased concentration of paramagnetic oxygen in the inhaled air may result in a larger susceptibility area around the frontal sinuses and nasal cavity. Additionally, a higher O2 level may generate a larger arterial blood T1 shortening, which requires a larger cerebral blood flow (CBF) T1 correction. To evaluate the impact of inspired oxygen levels on M, OEF and CMRO2 estimates, a cohort of six healthy adults underwent two different protocols: one in which 60% O2 was administered during HO (low HO, or LHO) and one in which 100% O2 was administered (high HO, or HHO). The QUantitative O2 (QUO2) MRI approach was employed, in which CBF and R2* are acquired simultaneously during periods of hypercapnia (HC) and hyperoxia, using a clinical 3 T scanner. Scan sessions were repeated to assess the repeatability of results at the different O2 levels. T1 values during periods of hyperoxia were estimated from an empirical ex-vivo relationship between T1 and the arterial partial pressure of O2. As expected, the T1 estimates revealed a larger T1 shortening in arterial blood when administering 100% O2 relative to 60% O2 (T1LHO = 1.56±0.01 s vs. T1HHO = 1.47±0.01 s, P < 4×10−13). Regarding susceptibility artifacts, the patterns and number of affected voxels were comparable irrespective of the O2 concentration.
Finally, the model-derived estimates were consistent regardless of the HO level, indicating that the different effects are adequately accounted for within the model. PMID:28362834
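The calibration idea underlying M can be illustrated with the widely used Davis model, in which the fractional BOLD change relates CBF and CMRO2 via dBOLD/BOLD0 = M·(1 − (CBF/CBF0)^(α−β)·(CMRO2/CMRO2_0)^β). The sketch below shows only the hypercapnia leg (where CMRO2 is assumed unchanged), not the full QUO2 generalized model, which additionally fits the hyperoxia data; the exponent values and example responses are illustrative assumptions.

```python
# Minimal sketch of Davis-model hypercapnia calibration. The exponents
# alpha (flow-volume coupling) and beta (field/vessel term) are commonly
# assumed literature values, not measured quantities.
ALPHA, BETA = 0.2, 1.3

def calibration_m(dbold_hc, cbf_ratio_hc, alpha=ALPHA, beta=BETA):
    """M from a hypercapnia challenge, assuming CMRO2 is unchanged:
       dBOLD/BOLD0 = M * (1 - (CBF/CBF0)**(alpha - beta))."""
    return dbold_hc / (1.0 - cbf_ratio_hc ** (alpha - beta))

def cmro2_ratio(dbold, cbf_ratio, m, alpha=ALPHA, beta=BETA):
    """Invert the Davis model for relative CMRO2 under any condition."""
    return ((1.0 - dbold / m) / cbf_ratio ** (alpha - beta)) ** (1.0 / beta)

# Example: a 2% BOLD increase with a 40% CBF increase under hypercapnia.
m = calibration_m(0.02, 1.4)
print(f"M = {m:.3f}")  # the BOLD ceiling that a strong HO block approaches
```

In this framing, M is the ceiling toward which a 100%-O2 hyperoxia block pushes the BOLD response; the abstract's question is whether estimating M nearer to, or farther from, that ceiling changes the derived OEF and CMRO2, and the reported result is that it does not.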

  20. The impact of inspired oxygen levels on calibrated fMRI measurements of M, OEF and resting CMRO2 using combined hypercapnia and hyperoxia.

    PubMed

    Lajoie, Isabelle; Tancredi, Felipe B; Hoge, Richard D

    2017-01-01

    Recent calibrated fMRI techniques using combined hypercapnia and hyperoxia allow mapping of the resting cerebral metabolic rate of oxygen (CMRO2) in absolute units, the oxygen extraction fraction (OEF) and the calibration parameter M (maximum BOLD). Adopting such a technique requires knowledge of the precision and accuracy of the model-derived parameters. One factor that may affect precision and accuracy is the level of oxygen provided during periods of hyperoxia (HO). A high level of oxygen may bring the BOLD responses closer to the maximum M value and hence reduce the error associated with the M interpolation. However, an increased concentration of paramagnetic oxygen in the inhaled air may result in a larger susceptibility area around the frontal sinuses and nasal cavity. Additionally, a higher O2 level may generate a larger arterial blood T1 shortening, which requires a larger cerebral blood flow (CBF) T1 correction. To evaluate the impact of inspired oxygen levels on M, OEF and CMRO2 estimates, a cohort of six healthy adults underwent two different protocols: one in which 60% O2 was administered during HO (low HO, or LHO) and one in which 100% O2 was administered (high HO, or HHO). The QUantitative O2 (QUO2) MRI approach was employed, in which CBF and R2* are acquired simultaneously during periods of hypercapnia (HC) and hyperoxia, using a clinical 3 T scanner. Scan sessions were repeated to assess the repeatability of results at the different O2 levels. T1 values during periods of hyperoxia were estimated from an empirical ex-vivo relationship between T1 and the arterial partial pressure of O2. As expected, the T1 estimates revealed a larger T1 shortening in arterial blood when administering 100% O2 relative to 60% O2 (T1LHO = 1.56±0.01 s vs. T1HHO = 1.47±0.01 s, P < 4×10−13). Regarding susceptibility artifacts, the patterns and number of affected voxels were comparable irrespective of the O2 concentration.
Finally, the model-derived estimates were consistent regardless of the HO level, indicating that the different effects are adequately accounted for within the model.
