Sample records for calibrating MRI machines

  1. Efficient gradient calibration based on diffusion MRI.

    PubMed

    Teh, Irvin; Maguire, Mahon L; Schneider, Jürgen E

    2017-01-01

    To propose a method for calibrating gradient systems and correcting gradient nonlinearities based on diffusion MRI measurements. The gradient scaling in x, y, and z were first offset by up to 5% from precalibrated values to simulate a poorly calibrated system. Diffusion MRI data were acquired in a phantom filled with cyclooctane, and corrections for gradient scaling errors and nonlinearity were determined. The calibration was assessed with diffusion tensor imaging and independently validated with high resolution anatomical MRI of a second structured phantom. The errors in apparent diffusion coefficients along orthogonal axes ranged from -9.2% ± 0.4% to +8.8% ± 0.7% before calibration and -0.5% ± 0.4% to +0.8% ± 0.3% after calibration. Concurrently, fractional anisotropy decreased from 0.14 ± 0.03 to 0.03 ± 0.01. Errors in geometric measurements in x, y and z ranged from -5.5% to +4.5% precalibration and were likewise reduced to -0.97% to +0.23% postcalibration. Image distortions from gradient nonlinearity were markedly reduced. Periodic gradient calibration is an integral part of quality assurance in MRI. The proposed approach is both accurate and efficient, can be set up with readily available materials, and improves accuracy in both anatomical and diffusion MRI to within ±1%. Magn Reson Med 77:170-179, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. © 2016 Wiley Periodicals, Inc.
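Since the diffusion b-value scales with the square of the gradient amplitude, a per-axis gradient scaling error shows up quadratically in the apparent diffusion coefficient. A minimal sketch of the correction idea, with hypothetical ADC numbers (not the paper's data):

```python
def gradient_scale_correction(adc_measured, adc_reference):
    # b ~ g^2: a gradient amplitude error (1 + e) inflates the
    # apparent ADC by (1 + e)^2, so the amplitude correction is
    # the square root of the ADC ratio.
    return [(adc_reference / a) ** 0.5 for a in adc_measured]

adc_ref = 1.00e-3                          # mm^2/s, assumed phantom diffusivity
adc_meas = [1.092e-3, 0.915e-3, 1.041e-3]  # mm^2/s, hypothetical per-axis ADCs
scale = gradient_scale_correction(adc_meas, adc_ref)
# Each axis's gradient amplitude is then multiplied by its correction factor.
```

An axis that reads too high an ADC gets a correction factor below one, and vice versa.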

  2. Efficient gradient calibration based on diffusion MRI

    PubMed Central

    Teh, Irvin; Maguire, Mahon L.

    2016-01-01

    Purpose To propose a method for calibrating gradient systems and correcting gradient nonlinearities based on diffusion MRI measurements. Methods The gradient scaling in x, y, and z were first offset by up to 5% from precalibrated values to simulate a poorly calibrated system. Diffusion MRI data were acquired in a phantom filled with cyclooctane, and corrections for gradient scaling errors and nonlinearity were determined. The calibration was assessed with diffusion tensor imaging and independently validated with high resolution anatomical MRI of a second structured phantom. Results The errors in apparent diffusion coefficients along orthogonal axes ranged from −9.2% ± 0.4% to +8.8% ± 0.7% before calibration and −0.5% ± 0.4% to +0.8% ± 0.3% after calibration. Concurrently, fractional anisotropy decreased from 0.14 ± 0.03 to 0.03 ± 0.01. Errors in geometric measurements in x, y and z ranged from −5.5% to +4.5% precalibration and were likewise reduced to −0.97% to +0.23% postcalibration. Image distortions from gradient nonlinearity were markedly reduced. Conclusion Periodic gradient calibration is an integral part of quality assurance in MRI. The proposed approach is both accurate and efficient, can be set up with readily available materials, and improves accuracy in both anatomical and diffusion MRI to within ±1%. Magn Reson Med 77:170–179, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. PMID:26749277

  3. Integrated calibration sphere and calibration step fixture for improved coordinate measurement machine calibration

    DOEpatents

    Clifford, Harry J. [Los Alamos, NM]

    2011-03-22

    A method and apparatus for mounting a calibration sphere to a calibration fixture for Coordinate Measurement Machine (CMM) calibration and qualification is described, decreasing the time required for such qualification, thus allowing the CMM to be used more productively. A number of embodiments are disclosed that allow for new and retrofit manufacture to perform as integrated calibration sphere and calibration fixture devices. This invention renders unnecessary the removal of a calibration sphere prior to CMM measurement of calibration features on calibration fixtures, thereby greatly reducing the time spent qualifying a CMM.

  4. Linear positioning laser calibration setup of CNC machine tools

    NASA Astrophysics Data System (ADS)

    Sui, Xiulin; Yang, Congjing

    2002-10-01

    The linear positioning laser calibration setup of CNC machine tools is capable of executing machine tool laser calibration and backlash compensation. Using this setup, hole locations on CNC machine tools will be correct and machine tool geometry will be evaluated and adjusted. Machine tool laser calibration and backlash compensation is a simple and straightforward process. First, the stroke limits of the axis are 'found' and the laser head is brought into rough alignment. Second, the machine axis is moved to the other extreme and the laser head is aligned using rotation and elevation adjustments. Finally, the machine is moved to the start position and final alignment is verified. The stroke of the machine and the machine compensation interval dictate the amount of data required for each axis. These factors determine the amount of time required for a thorough compensation of the linear positioning accuracy. The Laser Calibrator System monitors the material temperature and the air density; this takes into consideration machine thermal growth and laser beam frequency. This linear positioning laser calibration setup can be used on CNC machine tools, CNC lathes, horizontal machining centers and vertical machining centers.

  5. Application of calibrated fMRI in Alzheimer's disease.

    PubMed

    Lajoie, Isabelle; Nugent, Scott; Debacker, Clément; Dyson, Kenneth; Tancredi, Felipe B; Badhwar, AmanPreet; Belleville, Sylvie; Deschaintre, Yan; Bellec, Pierre; Doyon, Julien; Bocti, Christian; Gauthier, Serge; Arnold, Douglas; Kergoat, Marie-Jeanne; Chertkow, Howard; Monchi, Oury; Hoge, Richard D

    2017-01-01

    Calibrated fMRI based on arterial spin-labeling (ASL) and blood oxygenation level-dependent (BOLD) contrast, combined with periods of hypercapnia and hyperoxia, can provide information on cerebrovascular reactivity (CVR), resting blood flow (CBF), oxygen extraction fraction (OEF), and resting oxidative metabolism (CMRO2). Vascular and metabolic integrity are believed to be affected in Alzheimer's disease (AD); thus, the use of calibrated fMRI in AD may help understand the disease and monitor therapeutic responses in future clinical trials. In the present work, we applied a calibrated fMRI approach referred to as Quantitative O2 (QUO2) in a cohort of probable AD dementia and age-matched control participants. The resulting CBF, OEF and CMRO2 values fell within the range of previous studies using positron emission tomography (PET) with 15O labeling. Moreover, the typical parietotemporal pattern of hypoperfusion and hypometabolism in AD was observed, especially in the precuneus, a particularly vulnerable region. We detected no deficit in frontal CBF, nor in whole grey matter CVR, which supports the hypothesis that the effects observed were associated specifically with AD rather than generalized vascular disease. Some key pitfalls affecting both ASL and BOLD methods were encountered, such as prolonged arterial transit times (particularly in the occipital lobe), the presence of susceptibility artifacts obscuring medial temporal regions, and the challenges associated with the hypercapnic manipulation in AD patients and elderly participants. The present results are encouraging and demonstrate the promise of calibrated fMRI measurements as potential biomarkers in AD. Although CMRO2 can be imaged with 15O PET, the QUO2 method uses more widely available imaging infrastructure, avoids exposure to ionizing radiation, and integrates with other MRI-based measures of brain structure and function.

  6. A Contemporary Approach for Evaluation of the Best Measurement Capability of a Force Calibration Machine

    NASA Astrophysics Data System (ADS)

    Kumar, Harish

    The present paper discusses the procedure for evaluation of the best measurement capability of a force calibration machine. The best measurement capability of a force calibration machine is evaluated by comparison, through precision force transfer standards, to force standard machines. The force transfer standards are calibrated by the force standard machine and then by the force calibration machine following a similar procedure. The results are reported and discussed for a force calibration machine of 200 kN capacity. Different force transfer standards of nominal capacities 20 kN, 50 kN and 200 kN are used. It is found that there are significant variations in the uncertainty of force realization by the force calibration machine according to the proposed method in comparison to the earlier method adopted.

  7. Self-Calibrating Surface Measuring Machine

    NASA Astrophysics Data System (ADS)

    Greenleaf, Allen H.

    1983-04-01

    A new kind of surface-measuring machine has been developed under government contract at Itek Optical Systems, a Division of Itek Corporation, to assist in the fabrication of large, highly aspheric optical elements. The machine uses four steerable distance-measuring interferometers at the corners of a tetrahedron to measure the positions of a retroreflective target placed at various locations against the surface being measured. Using four interferometers gives redundant information so that, from a set of measurement data, the dimensions of the machine as well as the coordinates of the measurement points can be determined. The machine is, therefore, self-calibrating and does not require a structure made to high accuracy. A wood-structured prototype of this machine was made whose key components are a simple form of air bearing steering mirror, a wide-angle cat's eye retroreflector used as the movable target, and tracking sensors and servos to provide automatic tracking of the cat's eye by the four laser beams. The data are taken and analyzed by computer. The output is given in terms of error relative to an equation of the desired surface. In tests of this machine, measurements of a 0.7 m diameter mirror blank have been made with an accuracy on the order of 0.2µm rms.
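The tetrahedral geometry above reduces to multilateration: each interferometer constrains the target to a sphere around its station, and the fourth distance provides the redundancy used for self-calibration. A sketch of the target-location step alone, solved by Gauss-Newton least squares (station coordinates and distances are invented for illustration):

```python
import numpy as np

def locate_target(stations, distances, x0, iters=50):
    """Gauss-Newton multilateration: find the point whose distances to
    the known stations best match the interferometer readings. With
    four stations the system is overdetermined (4 equations, 3
    unknowns), which is the redundancy self-calibration exploits."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        diff = x - stations                # (4, 3)
        d = np.linalg.norm(diff, axis=1)   # predicted distances
        r = d - distances                  # residuals
        J = diff / d[:, None]              # Jacobian of d w.r.t. x
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < 1e-12:
            break
    return x

stations = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
target = np.array([0.3, 0.2, 0.4])
dist = np.linalg.norm(stations - target, axis=1)
est = locate_target(stations, dist, x0=[0.5, 0.5, 0.5])
```

The full machine additionally treats the station coordinates as unknowns across many target positions, which is what makes it self-calibrating.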

  8. Using machine learning for sequence-level automated MRI protocol selection in neuroradiology.

    PubMed

    Brown, Andrew D; Marotta, Thomas R

    2018-05-01

    Incorrect imaging protocol selection can lead to important clinical findings being missed, contributing to both wasted health care resources and patient harm. We present a machine learning method for analyzing the unstructured text of clinical indications and patient demographics from magnetic resonance imaging (MRI) orders to automatically protocol MRI procedures at the sequence level. We compared 3 machine learning models - support vector machine, gradient boosting machine, and random forest - to a baseline model that predicted the most common protocol for all observations in our test set. The gradient boosting machine model significantly outperformed the baseline and demonstrated the best performance of the 3 models in terms of accuracy (95%), precision (86%), recall (80%), and Hamming loss (0.0487). This demonstrates the feasibility of automating sequence selection by applying machine learning to MRI orders. Automated sequence selection has important safety, quality, and financial implications and may facilitate improvements in the quality and safety of medical imaging service delivery.
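Because each MRI order maps to a set of sequences, sequence-level protocolling is a multilabel problem, which is why Hamming loss is reported alongside accuracy. A small self-contained illustration of that metric (sequence names and label sets are invented):

```python
def hamming_loss(true_sets, pred_sets, n_labels):
    """Average fraction of sequence labels assigned incorrectly.
    The symmetric difference counts both missed sequences and
    spuriously added ones, per order, over all possible labels."""
    wrong = sum(len(t ^ p) for t, p in zip(true_sets, pred_sets))
    return wrong / (len(true_sets) * n_labels)

# Toy example with hypothetical sequence names (not the paper's data).
labels = ["T1", "T2", "FLAIR", "DWI", "SWI"]
truth = [{"T1", "T2", "FLAIR"}, {"T2", "DWI"}]
pred  = [{"T1", "T2"},          {"T2", "DWI", "SWI"}]
loss = hamming_loss(truth, pred, len(labels))  # one miss + one extra
```

Here one missed FLAIR and one spurious SWI over 2 orders x 5 labels give a loss of 0.2.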

  9. Calibrated thermal microscopy of the tool-chip interface in machining

    NASA Astrophysics Data System (ADS)

    Yoon, Howard W.; Davies, Matthew A.; Burns, Timothy J.; Kennedy, M. D.

    2000-03-01

    A critical parameter in predicting tool wear during machining, and in accurate computer simulations of machining, is the spatially-resolved temperature at the tool-chip interface. We describe the development and the calibration of a nearly diffraction-limited thermal-imaging microscope to measure the spatially-resolved temperatures during the machining of an AISI 1045 steel with a tungsten-carbide tool bit. The microscope images a 0.5 mm × 0.5 mm square region with < 5 µm spatial resolution and is based on a commercial InSb 128 × 128 focal plane array with an all-reflective microscope objective. The minimum frame image acquisition time is < 1 ms. The microscope is calibrated using a standard blackbody source from the radiance temperature calibration laboratory at the National Institute of Standards and Technology, and the emissivity of the machined material is deduced from infrared reflectivity measurements. The steady-state thermal images from the machining of 1045 steel are compared to previous determinations of tool temperatures from micro-hardness measurements and are found to be in agreement with those studies. The measured average chip temperatures are also in agreement with the temperature rise estimated from energy balance considerations. From these calculations and the agreement between the experimental and the calculated determinations of the emissivity of the 1045 steel, the standard uncertainty of the temperature measurements is estimated to be about 45 °C at 900 °C.
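The step from a blackbody-calibrated radiance temperature to the true surface temperature uses the measured emissivity; under the Wien approximation the correction is linear in 1/T. A sketch with illustrative numbers (the band wavelength and emissivity below are assumptions, not the paper's values):

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def true_temperature(t_radiance_K, emissivity, wavelength_m):
    """Wien-approximation correction from radiance temperature (the
    blackbody-calibrated reading) to true surface temperature:
    1/T_true = 1/T_rad + (lambda / c2) * ln(emissivity)."""
    inv_t = 1.0 / t_radiance_K + (wavelength_m / C2) * math.log(emissivity)
    return 1.0 / inv_t

# Hypothetical mid-IR band center and an assumed emissivity.
t_true = true_temperature(1100.0, 0.35, 4.5e-6)
```

For any emissivity below one the true temperature exceeds the radiance temperature, which is why the reflectivity-derived emissivity matters so much to the stated 45 °C uncertainty.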

  10. Development of a Machine-Vision System for Recording of Force Calibration Data

    NASA Astrophysics Data System (ADS)

    Heamawatanachai, Sumet; Chaemthet, Kittipong; Changpan, Tawat

    This paper presents the development of a new system for recording force calibration data using machine vision technology. A real-time camera and computer system were used to capture images of the readings from the instruments during calibration. The measurement images were then transformed and translated to numerical data using an optical character recognition (OCR) technique. These numerical data, along with the raw images, were automatically saved as the calibration database files. With this new system, human recording errors are eliminated. Verification experiments were performed by using this system to record the measurement results from an amplifier (DMP 40) with a load cell (HBM-Z30-10kN). The NIMT's 100-kN deadweight force standard machine (DWM-100kN) was used to generate test forces. The experiments were done in three categories: 1) dynamic condition (recording during load changes), 2) static condition (recording during fixed load), and 3) full calibration in accordance with ISO 376:2011. The captured images from the dynamic-condition experiment gave >94% of images without overlapping of numbers. The results from the static-condition experiment were >98% of images without overlapping. All measurement images without overlapping were translated to numbers by the developed program with 100% accuracy. The full calibration experiments also gave 100% accurate results. Moreover, in case of incorrect translation of any result, it is possible to trace back to the raw calibration image to check and correct it. Therefore, this machine-vision-based system and program should be appropriate for recording force calibration data.
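The OCR stage can be illustrated with the simplest possible recognizer: template matching against stored digit shapes, keeping the match distance so that doubtful readings can be flagged and traced back to the raw image, as the paper's system allows. The 3x5 glyphs below are invented for illustration, not the system's actual templates:

```python
# Minimal template-matching "OCR" sketch; real systems segment digits
# from camera frames first and use far richer features.
TEMPLATES = {
    "0": ["111", "101", "101", "101", "111"],
    "1": ["010", "110", "010", "010", "111"],
    "7": ["111", "001", "010", "010", "010"],
}

def match_digit(glyph):
    """Classify a 3x5 binary glyph by minimum Hamming distance to the
    stored templates; the distance is returned so dubious matches can
    be reviewed against the archived raw image."""
    def dist(a, b):
        return sum(ca != cb
                   for row_a, row_b in zip(a, b)
                   for ca, cb in zip(row_a, row_b))
    best = min(TEMPLATES, key=lambda d: dist(glyph, TEMPLATES[d]))
    return best, dist(glyph, TEMPLATES[best])
```

A clean glyph matches with distance 0; a one-pixel corruption still matches its digit but with a nonzero distance that can trigger a manual check.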

  11. Data filtering with support vector machines in geometric camera calibration.

    PubMed

    Ergun, B; Kavzoglu, T; Colkesen, I; Sahin, C

    2010-02-01

    The use of non-metric digital cameras in close-range photogrammetric applications and machine vision has become a popular research agenda. Being an essential component of photogrammetric evaluation, camera calibration is a crucial stage for non-metric cameras. Therefore, accurate camera calibration and orientation procedures have become prerequisites for the extraction of precise and reliable 3D metric information from images. The lack of accurate inner orientation parameters can lead to unreliable results in the photogrammetric process. A camera can be well defined by its principal distance, principal point offset and lens distortion parameters. Different camera models have been formulated and used in close-range photogrammetry, but generally sensor orientation and calibration are performed with a perspective geometrical model by means of the bundle adjustment. In this study, a support vector machine (SVM) with a radial basis function kernel is employed to model the distortions measured for an Olympus E10 camera system with an aspherical zoom lens, which are later used in the geometric calibration process. The intention is to introduce an alternative approach for the on-the-job photogrammetric calibration stage. Experimental results for the DSLR camera with three focal length settings (9, 18 and 36 mm) were estimated using bundle adjustment with additional parameters, and analyses were conducted based on object point discrepancies and standard errors. Results show the robustness of the SVM approach to the correction of image coordinates by modelling total distortions in the on-the-job calibration process using a limited number of images.
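The paper's SVM distortion model sits inside the full bundle-adjustment pipeline; as a dependency-free stand-in, the sketch below fits an RBF kernel ridge regressor (the same kernel family, with a closed-form solution) to a synthetic radial-distortion field of the classic k1·r³ form. All data here are synthetic:

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # K[i, j] = exp(-gamma * ||a_i - b_j||^2)
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kernel_ridge(x, y, gamma=1.0, lam=1e-6):
    """Closed-form kernel ridge: alpha = (K + lam*I)^-1 y."""
    K = rbf_kernel(x, x, gamma)
    return np.linalg.solve(K + lam * np.eye(len(x)), y)

def predict(x_train, alpha, x_new, gamma=1.0):
    return rbf_kernel(x_new, x_train, gamma) @ alpha

# Synthetic radial distortion dr = k1 * r^3 sampled on a grid of
# normalized image coordinates.
pts = np.array([[x, y] for x in np.linspace(-1, 1, 9)
                       for y in np.linspace(-1, 1, 9)])
r2 = (pts ** 2).sum(1)
distortion = 0.05 * r2 ** 1.5          # k1 * r^3 with k1 = 0.05
alpha = fit_kernel_ridge(pts, distortion, gamma=2.0)
pred = predict(pts, alpha, np.array([[0.5, 0.5]]), gamma=2.0)
```

The learned map then supplies a distortion correction for arbitrary image coordinates, mirroring how the SVM-modelled distortions feed the calibration.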

  12. Multivariate analysis of fMRI time series: classification and regression of brain responses using machine learning.

    PubMed

    Formisano, Elia; De Martino, Federico; Valente, Giancarlo

    2008-09-01

    Machine learning and pattern recognition techniques are being increasingly employed in functional magnetic resonance imaging (fMRI) data analysis. By taking into account the full spatial pattern of brain activity measured simultaneously at many locations, these methods allow detecting subtle, non-strictly localized effects that may remain invisible to the conventional analysis with univariate statistical methods. In typical fMRI applications, pattern recognition algorithms "learn" a functional relationship between brain response patterns and a perceptual, cognitive or behavioral state of a subject expressed in terms of a label, which may assume discrete (classification) or continuous (regression) values. This learned functional relationship is then used to predict the unseen labels from a new data set ("brain reading"). In this article, we describe the mathematical foundations of machine learning applications in fMRI. We focus on two methods, support vector machines and relevance vector machines, which are respectively suited for the classification and regression of fMRI patterns. Furthermore, by means of several examples and applications, we illustrate and discuss the methodological challenges of using machine learning algorithms in the context of fMRI data analysis.

  13. Calibration and validation of TRUST MRI for the estimation of cerebral blood oxygenation

    PubMed Central

    Lu, Hanzhang; Xu, Feng; Grgac, Ksenija; Liu, Peiying; Qin, Qin; van Zijl, Peter

    2011-01-01

    Recently, a T2-Relaxation-Under-Spin-Tagging (TRUST) MRI technique was developed to quantitatively estimate blood oxygen saturation fraction (Y) via the measurement of pure blood T2. This technique has shown promise for normalization of fMRI signals, for the assessment of oxygen metabolism, and in studies of cognitive aging and multiple sclerosis. However, a human validation study has not been conducted. In addition, the calibration curve used to convert blood T2 to Y has not accounted for the effects of hematocrit (Hct). In the present study, we first conducted experiments on blood samples under physiologic conditions, and the Carr-Purcell-Meiboom-Gill (CPMG) T2 was determined for a range of Y and Hct values. The data were fitted to a two-compartment exchange model to allow the characterization of a three-dimensional plot that can serve to calibrate the in vivo data. Next, in a validation study in humans, we showed that arterial Y estimated using TRUST MRI was 0.837±0.036 (N=7) during the inhalation of 14% O2, which was in excellent agreement with the gold-standard Y values of 0.840±0.036 based on Pulse-Oximetry. These data suggest that the availability of this calibration plot should enhance the applicability of TRUST MRI for non-invasive assessment of cerebral blood oxygenation. PMID:21590721
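The calibration plot converts a measured blood T2 into Y by inverting an empirical relation; a common form is quadratic in (1 − Y), with coefficients depending on hematocrit and the CPMG refocusing interval. A sketch of the inversion with made-up coefficients (illustrative only, not the paper's fitted surface):

```python
import math

def t2_from_y(y, a, b, c):
    """Forward model R2(Y) = a + b*(1-Y) + c*(1-Y)^2, returned as T2."""
    u = 1.0 - y
    return 1.0 / (a + b * u + c * u * u)

def y_from_t2(t2_s, a, b, c):
    """Invert the quadratic for Y in [0, 1]: solve
    c*u^2 + b*u + (a - R2) = 0 for u = 1 - Y and take the
    physically meaningful (non-negative) root."""
    r2 = 1.0 / t2_s
    disc = b * b - 4.0 * c * (a - r2)
    u = (-b + math.sqrt(disc)) / (2.0 * c)
    return 1.0 - u

# Round trip with invented coefficients (units: s^-1); in practice
# a, b, c are looked up from the hematocrit-dependent calibration.
A, B, C = 6.0, 25.0, 120.0
t2 = t2_from_y(0.84, A, B, C)
y = y_from_t2(t2, A, B, C)
```

The round trip recovers the assumed oxygenation exactly, which is the consistency one expects from a well-posed calibration curve.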

  14. Whole-machine calibration approach for phased array radar with self-test

    NASA Astrophysics Data System (ADS)

    Shen, Kai; Yao, Zhi-Cheng; Zhang, Jin-Chang; Yang, Jian

    2017-06-01

    The performance of missile-borne phased array radar is greatly influenced by inter-channel amplitude and phase inconsistencies. In order to ensure its performance, the amplitude and phase characteristics of the radar should be calibrated. Commonly used methods mainly focus on antenna calibration, such as FFT, REV, etc. However, the radar channel also contains T/R components, channels, ADC and messenger. In order to achieve rapid whole-machine calibration and compensation of the phased array radar's amplitude and phase information, we adopt a high-precision plane scanning test platform for amplitude and phase testing. A calibration approach for the whole channel system based on the radar frequency source test is proposed. Finally, the advantages and the application prospects of this approach are analysed.

  15. Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard

    2014-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend to a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, rendering building energy modeling unfeasible for smaller projects. In this paper, we describe the Autotune research, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.

  16. Measurement of liver iron overload: noninvasive calibration of MRI-R2* by magnetic iron detector susceptometer.

    PubMed

    Gianesin, B; Zefiro, D; Musso, M; Rosa, A; Bruzzone, C; Balocco, M; Carrara, P; Bacigalupo, L; Banderali, S; Rollandi, G A; Gambaro, M; Marinelli, M; Forni, G L

    2012-06-01

    An accurate assessment of body iron accumulation is essential for the diagnosis and therapy of iron overload in diseases such as thalassemia or hemochromatosis. Magnetic iron detector susceptometry and MRI are noninvasive techniques capable of detecting iron overload in the liver. Although the transverse relaxation rate measured by MRI can be correlated with the presence of iron, a calibration step is needed to obtain the liver iron concentration. The magnetic iron detector provides an evaluation of the iron overload in the whole liver. In this article, we describe a retrospective observational study comparing magnetic iron detector and MRI examinations performed on the same group of 97 patients with transfusional or congenital iron overload. A biopsy-free linear calibration to convert the average transverse relaxation rate into iron overload (R² = 0.72), or into liver iron concentration evaluated in wet tissue (R² = 0.68), is presented. This article also compares liver iron concentrations calculated in dry tissue using MRI and the existing biopsy calibration with liver iron concentrations evaluated in wet tissue by magnetic iron detector to obtain an estimate of the wet-to-dry conversion factor of 6.7 ± 0.8 (95% confidence level). Copyright © 2011 Wiley-Liss, Inc.
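The biopsy-free calibration amounts to an ordinary least-squares line from the MRI relaxation rate to wet-tissue liver iron concentration, followed by the wet-to-dry conversion. A sketch with invented paired data (only the 6.7 conversion factor comes from the abstract):

```python
def linear_fit(xs, ys):
    """Ordinary least squares y = m*x + q, pure Python."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    m = sxy / sxx
    return m, my - m * mx

WET_TO_DRY = 6.7  # the paper's estimated wet-to-dry conversion factor

# Hypothetical paired data: relaxation rate (s^-1) vs. susceptometer
# liver iron concentration (mg/g wet tissue) -- invented numbers.
rate = [50.0, 120.0, 300.0, 620.0, 900.0]
lic_wet = [0.4, 1.1, 2.9, 6.2, 8.8]
slope, intercept = linear_fit(rate, lic_wet)

def lic_dry_from_rate(r):
    # Wet-tissue calibration line, then conversion to dry weight.
    return (slope * r + intercept) * WET_TO_DRY
```

In practice the regression would be accompanied by the reported R² and confidence intervals before clinical use.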

  17. Calibration standard of body tissue with magnetic nanocomposites for MRI and X-ray imaging

    NASA Astrophysics Data System (ADS)

    Rahn, Helene; Woodward, Robert; House, Michael; Engineer, Diana; Feindel, Kirk; Dutz, Silvio; Odenbach, Stefan; StPierre, Tim

    2016-05-01

    We present a first study of a long-term phantom for Magnetic Resonance Imaging (MRI) and X-ray imaging of biological tissues with magnetic nanocomposites (MNC), suitable for 3-dimensional and quantitative imaging of tissues after, e.g., magnetically assisted cancer treatments. We performed a cross-calibration of X-ray microcomputed tomography (XμCT) and MRI with a joint calibration standard for both imaging techniques. For this, we have designed a phantom for MRI and X-ray computed tomography which represents biological tissue enriched with MNC. The developed phantoms consist of an elastomer with different concentrations of multi-core MNC. The matrix material is a synthetic thermoplastic gel, PermaGel (PG). The developed phantoms have been analyzed with Nuclear Magnetic Resonance (NMR) relaxometry (Bruker minispec mq 60) at 1.4 T to obtain R2 transverse relaxation rates, with SQUID (Superconducting QUantum Interference Device) magnetometry and Inductively Coupled Plasma Mass Spectrometry (ICP-MS) to verify the magnetite concentration, and with XμCT and 9.4 T MRI to visualize the phantoms 3-dimensionally and also to obtain T2 relaxation times. A sensitivity range is determined for the standard imaging techniques X-ray computed tomography (XCT) and MRI as well as for NMR. These novel phantoms show long-term stability over several months up to years. It was possible to suspend a particular MNC within the PG, reaching a concentration range from 0 mg/ml to 6.914 mg/ml. The R2 relaxation rates from 1.4 T NMR relaxometry show a clear correlation (R² = 0.994) with MNC concentrations between 0 mg/ml and 4.5 mg/ml. The MRI experiments have shown a linear correlation of R2 relaxation and MNC concentrations as well, but in a range between MNC concentrations of 0 mg/ml and 1.435 mg/ml. XμCT best displays moderate and high MNC concentrations; the sensitivity range for this particular XμCT apparatus extends from 0.569 mg/ml to 6.914 mg/ml.

  18. Assessment of New Load Schedules for the Machine Calibration of a Force Balance

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Gisler, R.; Kew, R.

    2015-01-01

    New load schedules for the machine calibration of a six-component force balance are currently being developed and evaluated at the NASA Ames Balance Calibration Laboratory. One of the proposed load schedules is discussed in the paper. It has a total of 2082 points that are distributed across 16 load series. Several criteria were applied to define the load schedule. It was decided, for example, to specify the calibration load set in force balance format as this approach greatly simplifies the definition of the lower and upper bounds of the load schedule. In addition, all loads are assumed to be applied in a calibration machine by using the one-factor-at-a-time approach. At first, all single-component loads are applied in six load series. Then, three two-component load series are applied. They consist of the load pairs (N1, N2), (S1, S2), and (RM, AF). Afterwards, four three-component load series are applied. They consist of the combinations (N1, N2, AF), (S1, S2, AF), (N1, N2, RM), and (S1, S2, RM). In the next step, one four-component load series is applied. It is the load combination (N1, N2, S1, S2). Finally, two five-component load series are applied. They are the load combination (N1, N2, S1, S2, AF) and (N1, N2, S1, S2, RM). The maximum difference between loads of two subsequent data points of the load schedule is limited to 33 % of capacity. This constraint helps avoid unwanted load "jumps" in the load schedule that can have a negative impact on the performance of a calibration machine. Only loadings of the single- and two-component load series are loaded to 100 % of capacity. This approach was selected because it keeps the total number of calibration points to a reasonable limit while still allowing for the application of some of the more complex load combinations. Data from two of NASA's force balances is used to illustrate important characteristics of the proposed 2082-point calibration load schedule.
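The 33%-of-capacity constraint on consecutive points is straightforward to verify mechanically. A sketch of such a check on a toy single- and two-component schedule (the schedule below is invented, not the paper's 2082-point design):

```python
def max_step_fraction(schedule, capacities):
    """Largest component-wise load change between consecutive points,
    as a fraction of that component's capacity; the proposed schedule
    keeps this at or below 0.33 to avoid abrupt load jumps that can
    degrade calibration-machine performance."""
    worst = 0.0
    for prev, cur in zip(schedule, schedule[1:]):
        for p, c, cap in zip(prev, cur, capacities):
            worst = max(worst, abs(c - p) / cap)
    return worst

# Toy 3-component schedule, loads given in units where capacity = 100.
caps = [100.0, 100.0, 100.0]
ok_schedule = [[0, 0, 0], [33, 0, 0], [66, 0, 0], [99, 0, 0],
               [66, 0, 0], [66, 33, 0]]
bad_schedule = [[0, 0, 0], [50, 0, 0]]  # 50% jump violates the rule
```

A schedule generator would call such a check after every candidate point before committing it to the load series.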

  19. Comparison between laser interferometric and calibrated artifacts for the geometric test of machine tools

    NASA Astrophysics Data System (ADS)

    Sousa, Andre R.; Schneider, Carlos A.

    2001-09-01

    A touch probe is used on a 3-axis vertical machining center to check against a hole plate calibrated on a coordinate measuring machine (CMM). By comparing the results obtained from the machine tool and the CMM, the main machine tool error components are measured, attesting to the machine's accuracy. The error values can also be used to update the error compensation table at the CNC, enhancing the machine accuracy. The method is easy to use, has a lower cost than classical test techniques, and preliminary results have shown that its uncertainty is comparable to well-established techniques. In this paper the method is compared with the laser interferometric system regarding reliability, cost and time efficiency.

  20. Long Term Uncertainty Investigations of 1 MN Force Calibration Machine at NPL, India (NPLI)

    NASA Astrophysics Data System (ADS)

    Kumar, Rajesh; Kumar, Harish; Kumar, Anil; Vikram

    2012-01-01

    The present paper is an attempt to study the long term uncertainty of 1 MN hydraulic multiplication system (HMS) force calibration machine (FCM) at the National Physical Laboratory, India (NPLI), which is used for calibration of the force measuring instruments in the range of 100 kN - 1 MN. The 1 MN HMS FCM was installed at NPLI in 1993 and was built on the principle of hydraulic amplifications of dead weights. The best measurement capability (BMC) of the machine is ± 0.025% (k = 2) and it is traceable to national standards by means of precision force transfer standards (FTS). The present study discusses the uncertainty variations of the 1 MN HMS FCM over the years and describes the other parameters in detail, too. The 1 MN HMS FCM was calibrated in the years 2004, 2006, 2007, 2008, 2009 and 2010 and the results have been reported.

  21. Calibrated LCD/TFT stimulus presentation for visual psychophysics in fMRI.

    PubMed

    Strasburger, H; Wüstenberg, T; Jäncke, L

    2002-11-15

    Standard projection techniques using liquid crystal (LCD) or thin-film transistor (TFT) technology show drastic distortions in luminance and contrast characteristics across the screen and across grey levels. Common luminance measurement and calibration techniques are not applicable in the vicinity of MRI scanners. With the aid of a fibre optic, we measured screen luminances for the full space of screen position and image grey values and on that basis developed a compensation technique that involves both luminance homogenisation and position-dependent gamma correction. By the technique described, images displayed to a subject in functional MRI can be specified with high precision by a matrix of desired luminance values rather than by local grey value.
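Position-dependent gamma correction inverts the per-pixel display law L(g) = Lmax · (g/255)^γ, where both Lmax and γ vary across the screen. A sketch with hypothetical centre and corner calibration values (the display law and numbers are illustrative assumptions):

```python
def corrected_grey(desired_luminance, l_max, gamma):
    """Grey value (0-255) that yields the desired luminance at a pixel
    whose measured peak luminance and local gamma are known, assuming
    the display law L(g) = l_max * (g / 255) ** gamma."""
    return round(255.0 * (desired_luminance / l_max) ** (1.0 / gamma))

def displayed_luminance(grey, l_max, gamma):
    """Forward display law, for checking the correction."""
    return l_max * (grey / 255.0) ** gamma

# Hypothetical per-pixel calibration: the corner is dimmer and has a
# different gamma, so the same target luminance (50 cd/m^2) needs a
# different grey value there than at the centre.
g_centre = corrected_grey(50.0, l_max=200.0, gamma=2.2)
g_corner = corrected_grey(50.0, l_max=120.0, gamma=1.9)
```

In the actual technique these per-pixel (Lmax, γ) maps come from the fibre-optic luminance measurements over the full position and grey-value space.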

  22. Support vector machine learning-based fMRI data group analysis.

    PubMed

    Wang, Ze; Childress, Anna R; Wang, Jiongjiong; Detre, John A

    2007-07-15

    To explore the multivariate nature of fMRI data and to consider the inter-subject brain response discrepancies, a multivariate and brain response model-free method is fundamentally required. Two such methods are presented in this paper by integrating a machine learning algorithm, the support vector machine (SVM), and the random effect model. Without any brain response modeling, SVM was used to extract a whole brain spatial discriminance map (SDM), representing the brain response difference between the contrasted experimental conditions. Population inference was then obtained through the random effect analysis (RFX) or permutation testing (PMU) on the individual subjects' SDMs. Applied to arterial spin labeling (ASL) perfusion fMRI data, SDM RFX yielded lower false-positive rates in the null hypothesis test and higher detection sensitivity for synthetic activations with varying cluster size and activation strengths, compared to the univariate general linear model (GLM)-based RFX. For a sensory-motor ASL fMRI study, both SDM RFX and SDM PMU yielded similar activation patterns to GLM RFX and GLM PMU, respectively, but with higher t values and cluster extensions at the same significance level. Capitalizing on the absence of temporal noise correlation in ASL data, this study also incorporated PMU in the individual-level GLM and SVM analyses accompanied by group-level analysis through RFX or group-level PMU. Providing inferences on the probability of being activated or deactivated at each voxel, these individual-level PMU-based group analysis methods can be used to threshold the analysis results of GLM RFX, SDM RFX or SDM PMU.
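The spatial discriminance map is the weight vector of a linear classifier trained over voxels. The sketch below substitutes a plain perceptron for the SVM (to stay dependency-light) on synthetic two-condition "voxel" patterns; everything here is simulated:

```python
import numpy as np

def train_linear_map(patterns, labels, epochs=20, lr=0.1):
    """Train a linear classifier on whole-brain activity patterns; the
    learned weight vector plays the role of a spatial discriminance
    map. A perceptron stands in for the SVM purely for brevity."""
    w = np.zeros(patterns.shape[1])
    for _ in range(epochs):
        for x, y in zip(patterns, labels):   # y in {-1, +1}
            if y * (x @ w) <= 0:             # misclassified sample
                w += lr * y * x
    return w

# Synthetic "voxel" patterns: condition B adds signal in voxels 2-3.
rng = np.random.default_rng(0)
base = rng.normal(0.0, 0.1, size=(20, 6))
base[10:, 2:4] += 1.0
labels = np.array([-1] * 10 + [1] * 10)
sdm = train_linear_map(base, labels)
```

The informative voxels dominate the learned map, which is the property the group-level random-effect and permutation analyses then operate on.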

  3. 3D artifact for calibrating kinematic parameters of articulated arm coordinate measuring machines

    NASA Astrophysics Data System (ADS)

    Zhao, Huining; Yu, Liandong; Xia, Haojie; Li, Weishi; Jiang, Yizhou; Jia, Huakun

    2018-06-01

    In this paper, a 3D artifact is proposed for calibrating the kinematic parameters of articulated arm coordinate measuring machines (AACMMs). The artifact comprises 14 reference points at three different heights, providing 91 distinct reference lengths, and a method is proposed to calibrate the artifact using multiple laser-tracker stations. The kinematic parameters of an AACMM can therefore be calibrated in a single setup of the proposed artifact, rather than adjusting 1D or 2D artifacts to different positions and orientations as existing methods require. As a result, calibration with the proposed artifact is faster than with traditional 1D or 2D artifacts. The performance of the AACMM calibrated with the proposed artifact is verified with a 600.003 mm gauge block. The results show that the measurement accuracy of the AACMM is effectively improved through calibration with the proposed artifact.
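The count of reference lengths follows directly from the geometry: 14 points give C(14, 2) = 91 point pairs. A quick sketch with made-up coordinates:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
# 14 reference points at three heights (illustrative coordinates, not the paper's).
pts = np.column_stack([rng.uniform(0, 800, 14),
                       rng.uniform(0, 800, 14),
                       rng.choice([0.0, 150.0, 300.0], 14)])

# Every point pair yields one reference length: C(14, 2) = 91.
lengths = {(i, j): np.linalg.norm(pts[i] - pts[j])
           for i, j in combinations(range(14), 2)}
```

Kinematic-parameter identification then minimizes the discrepancy between these reference lengths and the lengths measured by the AACMM.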

  4. Accelerated dynamic cardiac MRI exploiting sparse-Kalman-smoother self-calibration and reconstruction (k  -  t SPARKS)

    NASA Astrophysics Data System (ADS)

    Park, Suhyung; Park, Jaeseok

    2015-05-01

    Accelerated dynamic MRI, which exploits spatiotemporal redundancies in k  -  t space and the coil dimension, has been widely used to reduce the number of signal encodings and thus increase imaging efficiency with minimal loss of image quality. Nonetheless, particularly in cardiac MRI it still suffers from artifacts and amplified noise in the presence of time-drifting coil sensitivity due to relative motion between coil and subject (e.g. free breathing). Furthermore, a substantial number of additional calibrating signals must be acquired to warrant accurate calibration of coil sensitivity. In this work, we propose a novel accelerated dynamic cardiac MRI with sparse-Kalman-smoother self-calibration and reconstruction (k  -  t SPARKS), which is robust to time-varying coil sensitivity even with a small number of calibrating signals. The proposed k  -  t SPARKS incorporates Kalman-smoother self-calibration in k  -  t space and sparse signal recovery in x  -   f space into a single optimization problem, leading to iterative, joint estimation of time-varying convolution kernels and missing signals in k  -  t space. In the Kalman-smoother calibration, motion-induced uncertainties over the entire time frames were included in modeling the state transition, while a coil-dependent noise statistic was employed in describing the measurement process. The sparse signal recovery iteratively alternates with the self-calibration to tackle the ill-conditioning problem potentially resulting from insufficient calibrating signals. Simulations and experiments were performed using both the proposed and conventional methods for comparison, revealing that the proposed k  -  t SPARKS yields a higher signal-to-error ratio and superior temporal fidelity in both breath-hold and free-breathing cardiac applications over all reduction factors.
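The Kalman-smoother ingredient can be illustrated on a scalar toy problem: a slowly drifting coefficient (standing in for a time-varying sensitivity kernel) tracked by a forward Kalman filter and a Rauch-Tung-Striebel smoother. This is our illustration, not the k - t SPARKS implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200
# Slowly drifting "coil sensitivity" coefficient (scalar stand-in for a kernel).
truth = 1.0 + 0.3 * np.sin(np.linspace(0, 2 * np.pi, T))
h = rng.normal(size=T)                       # known regressors (calibration signals)
y = h * truth + 0.1 * rng.normal(size=T)     # noisy measurements

q, r = 1e-3, 0.01                            # state / measurement noise variances
x, p = np.zeros(T), np.zeros(T)              # filtered means and variances
xp_, pp_ = np.zeros(T), np.zeros(T)          # one-step predictions
xf, pf = 0.0, 1.0
for t in range(T):                           # forward Kalman filter
    xp, pp = xf, pf + q                      # random-walk state prediction
    xp_[t], pp_[t] = xp, pp
    k = pp * h[t] / (h[t] ** 2 * pp + r)     # Kalman gain
    xf = xp + k * (y[t] - h[t] * xp)
    pf = (1 - k * h[t]) * pp
    x[t], p[t] = xf, pf

xs = x.copy()                                # Rauch-Tung-Striebel backward pass
for t in range(T - 2, -1, -1):
    c = p[t] / pp_[t + 1]
    xs[t] = x[t] + c * (xs[t + 1] - xp_[t + 1])
```

The smoother uses all time frames, which is what makes the calibration robust to motion-induced drift over the whole acquisition.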

  5. Accelerated dynamic cardiac MRI exploiting sparse-Kalman-smoother self-calibration and reconstruction (k  -  t SPARKS).

    PubMed

    Park, Suhyung; Park, Jaeseok

    2015-05-07

    Accelerated dynamic MRI, which exploits spatiotemporal redundancies in k  -  t space and the coil dimension, has been widely used to reduce the number of signal encodings and thus increase imaging efficiency with minimal loss of image quality. Nonetheless, particularly in cardiac MRI it still suffers from artifacts and amplified noise in the presence of time-drifting coil sensitivity due to relative motion between coil and subject (e.g. free breathing). Furthermore, a substantial number of additional calibrating signals must be acquired to warrant accurate calibration of coil sensitivity. In this work, we propose a novel accelerated dynamic cardiac MRI with sparse-Kalman-smoother self-calibration and reconstruction (k  -  t SPARKS), which is robust to time-varying coil sensitivity even with a small number of calibrating signals. The proposed k  -  t SPARKS incorporates Kalman-smoother self-calibration in k  -  t space and sparse signal recovery in x  -   f space into a single optimization problem, leading to iterative, joint estimation of time-varying convolution kernels and missing signals in k  -  t space. In the Kalman-smoother calibration, motion-induced uncertainties over the entire time frames were included in modeling the state transition, while a coil-dependent noise statistic was employed in describing the measurement process. The sparse signal recovery iteratively alternates with the self-calibration to tackle the ill-conditioning problem potentially resulting from insufficient calibrating signals. Simulations and experiments were performed using both the proposed and conventional methods for comparison, revealing that the proposed k  -  t SPARKS yields a higher signal-to-error ratio and superior temporal fidelity in both breath-hold and free-breathing cardiac applications over all reduction factors.

  6. Optimizing a machine learning based glioma grading system using multi-parametric MRI histogram and texture features

    PubMed Central

    Hu, Yu-Chuan; Li, Gang; Yang, Yang; Han, Yu; Sun, Ying-Zhi; Liu, Zhi-Cheng; Tian, Qiang; Han, Zi-Yang; Liu, Le-De; Hu, Bin-Quan; Qiu, Zi-Yu; Wang, Wen; Cui, Guang-Bin

    2017-01-01

    Current machine learning techniques provide the opportunity to develop noninvasive and automated glioma grading tools, by utilizing quantitative parameters derived from multi-modal magnetic resonance imaging (MRI) data. However, the efficacies of different machine learning methods in glioma grading have not been investigated. A comprehensive comparison of varied machine learning methods in differentiating low-grade gliomas (LGGs) from high-grade gliomas (HGGs), as well as WHO grade II, III and IV gliomas, based on multi-parametric MRI images was proposed in the current study. The parametric histogram and image texture attributes of 120 glioma patients were extracted from the perfusion, diffusion and permeability parametric maps of preoperative MRI. Then, 25 commonly used machine learning classifiers combined with 8 independent attribute selection methods were applied and evaluated using a leave-one-out cross validation (LOOCV) strategy. In addition, the influences of parameter selection on the classifying performances were investigated. We found that the support vector machine (SVM) exhibited superior performance to the other classifiers. By combining all tumor attributes with the synthetic minority over-sampling technique (SMOTE), the highest classifying accuracy of 0.945 (LGG vs. HGG) or 0.961 (grade II, III and IV gliomas) was achieved. Application of the Recursive Feature Elimination (RFE) attribute selection strategy further improved the classifying accuracies. Moreover, the performances of the LibSVM, SMO and IBk classifiers were influenced by key parameters such as kernel type, C, gamma and K. SVM is a promising tool for developing automated preoperative glioma grading systems, especially when combined with the RFE strategy. Model parameters should be considered in glioma grading model optimization. PMID:28599282
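The LOOCV-with-RFE evaluation loop might look like the following sketch. The features are synthetic, the "4 informative attributes" are an assumption, and SMOTE is omitted for brevity.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Synthetic stand-in for histogram/texture attributes: 40 "patients",
# 20 features, of which the first 4 carry grade information.
n, p = 40, 20
y = np.repeat([0, 1], n // 2)                 # 0 = LGG, 1 = HGG (labels illustrative)
X = rng.normal(size=(n, p))
X[y == 1, :4] += 1.5

# RFE uses the linear SVM's weights to prune attributes, then an SVM classifies.
clf = make_pipeline(StandardScaler(),
                    RFE(SVC(kernel="linear"), n_features_to_select=4),
                    SVC(kernel="linear"))
hits = 0
for train, test in LeaveOneOut().split(X):    # leave-one-out cross validation
    clf.fit(X[train], y[train])
    hits += int(clf.predict(X[test])[0] == y[test][0])
acc = hits / n
```

Fitting the feature selector inside the cross-validation loop, as here, avoids the optimistic bias of selecting features on the full dataset.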

  7. Optimizing a machine learning based glioma grading system using multi-parametric MRI histogram and texture features.

    PubMed

    Zhang, Xin; Yan, Lin-Feng; Hu, Yu-Chuan; Li, Gang; Yang, Yang; Han, Yu; Sun, Ying-Zhi; Liu, Zhi-Cheng; Tian, Qiang; Han, Zi-Yang; Liu, Le-De; Hu, Bin-Quan; Qiu, Zi-Yu; Wang, Wen; Cui, Guang-Bin

    2017-07-18

    Current machine learning techniques provide the opportunity to develop noninvasive and automated glioma grading tools, by utilizing quantitative parameters derived from multi-modal magnetic resonance imaging (MRI) data. However, the efficacies of different machine learning methods in glioma grading have not been investigated. A comprehensive comparison of varied machine learning methods in differentiating low-grade gliomas (LGGs) from high-grade gliomas (HGGs), as well as WHO grade II, III and IV gliomas, based on multi-parametric MRI images was proposed in the current study. The parametric histogram and image texture attributes of 120 glioma patients were extracted from the perfusion, diffusion and permeability parametric maps of preoperative MRI. Then, 25 commonly used machine learning classifiers combined with 8 independent attribute selection methods were applied and evaluated using a leave-one-out cross validation (LOOCV) strategy. In addition, the influences of parameter selection on the classifying performances were investigated. We found that the support vector machine (SVM) exhibited superior performance to the other classifiers. By combining all tumor attributes with the synthetic minority over-sampling technique (SMOTE), the highest classifying accuracy of 0.945 (LGG vs. HGG) or 0.961 (grade II, III and IV gliomas) was achieved. Application of the Recursive Feature Elimination (RFE) attribute selection strategy further improved the classifying accuracies. Moreover, the performances of the LibSVM, SMO and IBk classifiers were influenced by key parameters such as kernel type, C, gamma and K. SVM is a promising tool for developing automated preoperative glioma grading systems, especially when combined with the RFE strategy. Model parameters should be considered in glioma grading model optimization.

  8. Machine learning algorithm accurately detects fMRI signature of vulnerability to major depression.

    PubMed

    Sato, João R; Moll, Jorge; Green, Sophie; Deakin, John F W; Thomaz, Carlos E; Zahn, Roland

    2015-08-30

    Standard functional magnetic resonance imaging (fMRI) analyses cannot assess the potential of a neuroimaging signature as a biomarker to predict individual vulnerability to major depression (MD). Here, we use machine learning for the first time to address this question. Using a recently identified neural signature of guilt-selective functional disconnection, the classification algorithm was able to distinguish remitted MD from control participants with 78.3% accuracy. This demonstrates the high potential of our fMRI signature as a biomarker of MD vulnerability. Crown Copyright © 2015. Published by Elsevier Ireland Ltd. All rights reserved.

  9. Machine-Learning Based Co-adaptive Calibration: A Perspective to Fight BCI Illiteracy

    NASA Astrophysics Data System (ADS)

    Vidaurre, Carmen; Sannelli, Claudia; Müller, Klaus-Robert; Blankertz, Benjamin

    "BCI illiteracy" is one of the biggest problems and challenges in BCI research. It means that BCI control cannot be achieved by a non-negligible number of subjects (estimated at 20% to 25%). There are two main causes of BCI illiteracy in BCI users: either no SMR idle rhythm is observed over motor areas, or this idle rhythm is not attenuated during motor imagery, resulting in classification performance below the 70% criterion level even for offline calibration data. In previous work by the same authors, the concept of machine-learning-based co-adaptive calibration was introduced. This new type of calibration provided substantially improved performance for a variety of users. Here, we use a similar approach and investigate to what extent co-adaptive learning enables substantial BCI control for completely novice users and for those who previously suffered from BCI illiteracy.
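Co-adaptive calibration can be caricatured as retraining the classifier after every trial on all data seen so far, so user and machine adapt concurrently. The feature model and class means below are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

def trial(cls):
    """Simulated band-power feature for one motor-imagery trial (toy model)."""
    return rng.normal(loc=(-1.0 if cls == 0 else 1.0), scale=1.5, size=2)

X, y, correct = [], [], []
clf = LogisticRegression()
for t in range(100):
    cls = t % 2                                  # cued class alternates
    x = trial(cls)
    if len(set(y)) == 2:                         # classifier exists: give feedback
        correct.append(int(clf.predict([x])[0] == cls))
    X.append(x)
    y.append(cls)
    if len(set(y)) == 2:                         # retrain on all trials so far
        clf.fit(np.asarray(X), np.asarray(y))
```

In a real co-adaptive session the feature distribution itself changes as the user receives feedback; here it is stationary, which is the main simplification.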

  10. Simultaneous auto-calibration and gradient delays estimation (SAGE) in non-Cartesian parallel MRI using low-rank constraints.

    PubMed

    Jiang, Wenwen; Larson, Peder E Z; Lustig, Michael

    2018-03-09

    To correct gradient timing delays in non-Cartesian MRI while simultaneously recovering corruption-free auto-calibration data for parallel imaging, without additional calibration scans. The calibration matrix constructed from multi-channel k-space data should be inherently low-rank. This property is used to construct reconstruction kernels or sensitivity maps. Delays between the gradient hardware on different axes and the RF receive chain, which are relatively benign in Cartesian MRI (excluding EPI), lead to trajectory deviations and hence data inconsistencies for non-Cartesian trajectories. These in turn lead to a higher-rank, corrupted calibration matrix, which hampers reconstruction. Here, a method named Simultaneous Auto-calibration and Gradient delays Estimation (SAGE) is proposed that estimates the actual k-space trajectory while simultaneously recovering the uncorrupted auto-calibration data. This is done by estimating the gradient delays that result in the lowest rank of the calibration matrix. The Gauss-Newton method is used to solve the non-linear problem. The method is validated in simulations using center-out radial, projection reconstruction and spiral trajectories. Feasibility is demonstrated on phantom and in vivo scans with center-out radial and projection reconstruction trajectories. SAGE is able to estimate gradient timing delays with high accuracy at a signal-to-noise ratio as low as 5. The method is able to effectively remove artifacts resulting from gradient timing delays and restore image quality in center-out radial, projection reconstruction, and spiral trajectories. The low-rank based method introduced simultaneously estimates gradient timing delays and provides accurate auto-calibration data for improved image quality, without any additional calibration scans. © 2018 International Society for Magnetic Resonance in Medicine.
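The core idea, that an uncompensated delay raises the rank of the calibration matrix, can be shown in one dimension: scan candidate delays and keep the one minimizing the tail singular values. The two-channel toy signal and the grid search stand in for the paper's multi-channel data and Gauss-Newton solver.

```python
import numpy as np

n = 256
t = np.arange(n)
base = np.exp(-((t - n / 2) ** 2) / (2 * 20.0 ** 2))      # smooth k-space profile

def sample_with_delay(delay):
    return np.interp(t + delay, t, base)

true_delay = 0.37                                         # fractional-sample delay
ch1 = sample_with_delay(0.0)
ch2 = sample_with_delay(true_delay)                       # delayed "channel"

def tail_energy(delay):
    """Sum of singular values beyond the first, after delay correction.
    Zero tail energy means the two channels are consistent (rank 1)."""
    corrected = np.interp(t - delay, t, ch2)              # undo candidate delay
    s = np.linalg.svd(np.vstack([ch1, corrected]), compute_uv=False)
    return s[1:].sum()

grid = np.linspace(-1, 1, 401)
est = grid[np.argmin([tail_energy(d) for d in grid])]
```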

  11. Decoder calibration with ultra small current sample set for intracortical brain-machine interface

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Ma, Xuan; Chen, Luyao; Zhou, Jin; Wang, Changyong; Li, Wei; He, Jiping

    2018-04-01

    Objective. Intracortical brain-machine interfaces (iBMIs) aim to restore efficient communication and movement ability for paralyzed patients. However, frequent recalibration is required for consistency and reliability, and each recalibration requires a relatively large current sample set. The aim of this study is to develop an effective decoder calibration method that achieves good performance while minimizing recalibration time. Approach. Two rhesus macaques implanted with intracortical microelectrode arrays were trained separately on a movement and a sensory paradigm. Neural signals were recorded to decode reaching positions or grasping postures. A novel principal component analysis-based domain adaptation (PDA) method was proposed to recalibrate the decoder with only an ultra-small current sample set by taking advantage of large historical data, and the decoding performance was compared with three other calibration methods for evaluation. Main results. The PDA method closed the gap between historical and current data effectively, making it possible to exploit large historical data for decoder recalibration in current data decoding. Using only an ultra-small current sample set (five trials of each category), the decoder calibrated with the PDA method achieved much better and more robust performance in all sessions than with the other three calibration methods in both monkeys. Significance. (1) This study brings transfer learning theory into iBMI decoder calibration for the first time. (2) Unlike most transfer learning studies, the target data here were an ultra-small sample set and were transferred to the source data. (3) By taking advantage of historical data, the PDA method was demonstrated to be effective in reducing recalibration time for both the movement and the sensory paradigm, indicating viable generalization. By reducing the demand for large current training data, this new method may facilitate the practical application of iBMIs.
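A hypothetical PCA-alignment sketch in the spirit of PDA (not the authors' exact algorithm): project both sessions into a subspace learned from the large historical set, shift the tiny current set by the estimated session drift, and train a decoder on the combined data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(5)

def session(n_per_class, drift):
    """Toy two-class neural features; `drift` models session-to-session change."""
    X0 = rng.normal(size=(n_per_class, 10)) + drift
    X1 = rng.normal(size=(n_per_class, 10)) + drift + np.r_[2.0, np.zeros(9)]
    return np.vstack([X0, X1]), np.repeat([0, 1], n_per_class)

Xh, yh = session(200, drift=0.0)              # large historical set
Xc, yc = session(5, drift=1.5)                # ultra-small current set (5 per class)
Xt, yt = session(100, drift=1.5)              # current-session test data

pca = PCA(n_components=3).fit(Xh)             # subspace learned from historical data
Zh, Zc, Zt = pca.transform(Xh), pca.transform(Xc), pca.transform(Xt)
shift = Zc.mean(0) - Zh.mean(0)               # estimate of session drift
clf = LinearDiscriminantAnalysis().fit(np.vstack([Zh, Zc - shift]),
                                       np.r_[yh, yc])
acc = clf.score(Zt - shift, yt)
```

The key point mirrored here is that five trials per class are far too few to train a decoder alone, but suffice to estimate a drift correction.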

  12. A novel approach to calibrate the hemodynamic model using functional Magnetic Resonance Imaging (fMRI) measurements.

    PubMed

    Khoram, Nafiseh; Zayane, Chadia; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem

    2016-03-15

    The calibration of the hemodynamic model that describes changes in blood flow and blood oxygenation during brain activation is a crucial step for successfully monitoring and possibly predicting brain activity. This in turn has the potential to provide diagnosis and treatment of brain diseases at early stages. We propose an efficient numerical procedure for calibrating the hemodynamic model using fMRI measurements. The proposed solution methodology is a regularized iterative method equipped with a Kalman filtering-type procedure. The Newton component of the proposed method addresses the nonlinear aspect of the problem, the regularization feature ensures the stability of the algorithm, and the Kalman filter procedure addresses the noise in the data. Numerical results obtained with synthetic data as well as with real fMRI measurements are presented to illustrate the accuracy, robustness to noise, and cost-effectiveness of the proposed method, and clearly demonstrate that it outperforms the Cubature Kalman Filter (CKF), one of the most prominent existing numerical methods. We have designed an iterative numerical technique, called the TNM-CKF algorithm, for calibrating the mathematical model that describes the single-event-related brain response when fMRI measurements are given. The method appears to be highly accurate and effective in reconstructing the BOLD signal even when the measurements are tainted with noise levels as high as 30%. Published by Elsevier B.V.
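The regularized-Newton flavor of such a calibration can be sketched with a toy nonlinear response in place of the hemodynamic model; the Kalman filtering step is omitted, and the model y = a(1 − e^(−t/b)) plus all numbers are purely illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(6)
t = np.linspace(0, 20, 80)
a_true, b_true = 2.0, 3.0
clean = a_true * (1 - np.exp(-t / b_true))               # toy "response model"
y = clean + 0.3 * clean.std() * rng.normal(size=t.size)  # heavily noisy data

lam = 1e-3                                    # Tikhonov regularization weight
def residual(p):
    a, b = p
    fit = a * (1 - np.exp(-t / b))
    # data misfit augmented with a regularization term for stability
    return np.r_[fit - y, lam * (p - 1.0)]

# trust-region Gauss-Newton-type iterations (scipy's `trf` with bounds)
sol = least_squares(residual, x0=[1.0, 1.0], bounds=([0.1, 0.1], [10, 10]))
a_hat, b_hat = sol.x
```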

  13. Articulated Arm Coordinate Measuring Machine Calibration by Laser Tracker Multilateration

    PubMed Central

    Majarena, Ana C.; Brau, Agustín; Velázquez, Jesús

    2014-01-01

    A new procedure for the calibration of an articulated arm coordinate measuring machine (AACMM) is presented in this paper. First, a self-calibration algorithm of four laser trackers (LTs) is developed. The spatial localization of a retroreflector target, placed in different positions within the workspace, is determined by means of a geometric multilateration system constructed from the four LTs. Next, a nonlinear optimization algorithm for the identification procedure of the AACMM is explained. An objective function based on Euclidean distances and standard deviations is developed. This function is obtained from the captured nominal data (given by the LTs used as a gauge instrument) and the data obtained by the AACMM, and compares the measured and calculated coordinates of the target to obtain the identified model parameters that minimize this difference. Finally, results show that the presented procedure, using the measurements of the LTs as a gauge instrument, is very effective in improving the AACMM precision. PMID:24688418
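Multilateration itself reduces to a small least-squares problem: recover the retroreflector position from its distances to the four tracker stations. Station and target coordinates below are made up.

```python
import numpy as np
from scipy.optimize import least_squares

# Four laser-tracker stations (illustrative coordinates, metres).
stations = np.array([[0.0, 0.0, 0.0],
                     [2.0, 0.0, 0.0],
                     [0.0, 2.0, 0.0],
                     [0.0, 0.0, 1.5]])
target = np.array([0.7, 1.1, 0.4])
d = np.linalg.norm(stations - target, axis=1)        # "measured" distances

def residual(p):
    # mismatch between candidate-position distances and measured distances
    return np.linalg.norm(stations - p, axis=1) - d

est = least_squares(residual, x0=np.zeros(3)).x
```

With noisy distances the same residual is simply minimized in the least-squares sense, which is why four (rather than three) stations improve robustness.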

  14. Quantitative Machine Learning Analysis of Brain MRI Morphology throughout Aging.

    PubMed

    Shamir, Lior; Long, Joe

    2016-01-01

    While cognition is clearly affected by aging, it is unclear whether the process of brain aging is driven solely by accumulation of environmental damage, or involves biological pathways. We applied quantitative image analysis to profile the alteration of brain tissues during aging. A dataset of 463 brain MRI images taken from a cohort of 416 subjects was analyzed using a large set of low-level numerical image content descriptors computed from the entire brain MRI images. The correlation between the numerical image content descriptors and age was computed, and the alterations of the brain tissues during aging were quantified and profiled using machine learning. The comprehensive set of global image content descriptors provides a high Pearson correlation of ~0.9822 with chronological age, indicating that the machine learning analysis of global features is sensitive to the age of the subjects. Profiling of the predicted age shows several periods of mild changes, separated by shorter periods of more rapid alterations. The periods with the most rapid changes were around the age of 55, and around the age of 65. The results show that the process of brain aging is not linear, and exhibits short periods of rapid aging separated by periods of milder change. These results are in agreement with patterns observed in cognitive decline, mental health status, and general human aging, suggesting that brain aging might not be driven solely by accumulation of environmental damage. Code and data used in the experiments are publicly available.
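Ranking descriptors by their Pearson correlation with age can be sketched as follows; the features are synthetic and the age-linked columns are an assumption.

```python
import numpy as np

rng = np.random.default_rng(7)
# Toy stand-in: 463 "scans", 30 numeric image descriptors, 5 of them age-linked.
age = rng.uniform(20, 90, 463)
feats = rng.normal(size=(463, 30))
feats[:, :5] += 0.05 * age[:, None]                  # age-sensitive descriptors

# Rank descriptors by |Pearson correlation| with chronological age.
r = np.array([np.corrcoef(feats[:, j], age)[0, 1] for j in range(30)])
top = np.argsort(-np.abs(r))[:5]
```

An age predictor would then be trained on the highest-ranked descriptors, and "predicted minus chronological age" profiled across the cohort.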

  15. Calibrated FMRI.

    PubMed

    Hoge, Richard D

    2012-08-15

    Functional magnetic resonance imaging with blood oxygenation level-dependent (BOLD) contrast has had a tremendous influence on human neuroscience in the last twenty years, providing a non-invasive means of mapping human brain function with often exquisite sensitivity and detail. However the BOLD method remains a largely qualitative approach. While the same can be said of anatomic MRI techniques, whose clinical and research impact has not been diminished in the slightest by the lack of a quantitative interpretation of their image intensity, the quantitative expression of BOLD responses as a percent of the baseline T2*-weighted signal has been viewed as necessary since the earliest days of fMRI. Calibrated MRI attempts to dissociate changes in oxygen metabolism from changes in blood flow and volume, the latter three quantities contributing jointly to determine the physiologically ambiguous percent BOLD change. This dissociation is typically performed using a "calibration" procedure in which subjects inhale a gas mixture containing small amounts of carbon dioxide or enriched oxygen to produce changes in blood flow and BOLD signal which can be measured under well-defined hemodynamic conditions. The outcome is a calibration parameter M which can then be substituted into an expression providing the fractional change in oxygen metabolism given changes in blood flow and BOLD signal during a task. The latest generation of calibrated MRI methods goes beyond fractional changes to provide absolute quantification of resting-state oxygen consumption in micromolar units, in addition to absolute measures of evoked metabolic response. This review discusses the history, challenges, and advances in calibrated MRI, from the personal perspective of the author. Copyright © 2012 Elsevier Inc. All rights reserved.
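The calibration arithmetic follows the standard Davis model, ΔBOLD/BOLD = M(1 − f^(α−β)·r^β) with f the CBF ratio and r the CMRO2 ratio; the gas-challenge and task numbers below are illustrative, with typical exponents α = 0.38 and β = 1.5.

```python
# Davis-model sketch of calibrated BOLD (illustrative numbers throughout).
alpha, beta = 0.38, 1.5

# Calibration scan: CO2 inhalation is assumed isometabolic (r = 1), so the
# measured BOLD change and CBF ratio determine the calibration parameter M.
db_cal, f_cal = 0.015, 1.40                  # 1.5 % BOLD change, 40 % CBF increase
M = db_cal / (1 - f_cal ** (alpha - beta))

# Task scan: invert the same expression for the fractional CMRO2 change.
db_task, f_task = 0.010, 1.50                # 1.0 % BOLD change, 50 % CBF increase
r_cmro2 = ((1 - db_task / M) / f_task ** (alpha - beta)) ** (1 / beta)
```

With these numbers M comes out near 5 % (the ceiling BOLD change) and the task CMRO2 increase near 16 %, which is in the physiologically plausible range.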

  16. Calibrated fMRI in the Medial Temporal Lobe During a Memory Encoding Task

    PubMed Central

    Restom, Khaled; Perthen, Joanna E.; Liu, Thomas T.

    2008-01-01

    Prior measures of the blood oxygenation level dependent (BOLD) and cerebral blood flow (CBF) responses to a memory encoding task within the medial temporal lobe have suggested that the coupling between functional changes in CBF and changes in the cerebral metabolic rate of oxygen (CMRO2) may be tighter in the medial temporal lobe as compared to the primary sensory areas. In this study, we used a calibrated functional magnetic resonance imaging (fMRI) approach to directly estimate memory-encoding-related changes in CMRO2 and to assess the coupling between CBF and CMRO2 in the medial temporal lobe. The CBF-CMRO2 coupling ratio was estimated using a linear fit to the flow and metabolism changes observed across subjects. In addition, we examined the effect of region-of-interest (ROI) selection on the estimates. In response to the memory encoding task, CMRO2 increased by 23.1% ± 8.8 to 25.3% ± 5.7 (depending upon ROI), with an estimated CBF-CMRO2 coupling ratio of 1.66 ± 0.07 to 1.75 ± 0.16. There was no significant effect of ROI selection on either the CMRO2 or coupling ratio estimates. The observed coupling ratios were significantly lower than the values (2 to 4.5) that have been reported in previous calibrated fMRI studies of the visual and motor cortices. In addition, the estimated coupling ratio was found to be less sensitive to the calibration procedure for functional responses in the medial temporal lobe as compared to the primary sensory areas. PMID:18329291

  17. Geometric calibration of a coordinate measuring machine using a laser tracking system

    NASA Astrophysics Data System (ADS)

    Umetsu, Kenta; Furutnani, Ryosyu; Osawa, Sonko; Takatsuji, Toshiyuki; Kurosawa, Tomizo

    2005-12-01

    This paper proposes a calibration method for a coordinate measuring machine (CMM) using a laser tracking system. The laser tracking system can measure three-dimensional coordinates based on the principle of trilateration with high accuracy and is easy to set up. The accuracy of length measurement of a single laser tracking interferometer (laser tracker) is about 0.3 µm over a length of 600 mm. In this study, we first measured 3D coordinates using the laser tracking system. Secondly, 21 geometric errors, namely, parametric errors of the CMM, were estimated by the comparison of the coordinates obtained by the laser tracking system and those obtained by the CMM. As a result, the estimated parametric errors agreed with those estimated by a ball plate measurement, which demonstrates the validity of the proposed calibration system.

  18. Perspectives on Machine Learning for Classification of Schizotypy Using fMRI Data.

    PubMed

    Madsen, Kristoffer H; Krohne, Laerke G; Cai, Xin-Lu; Wang, Yi; Chan, Raymond C K

    2018-03-15

    Functional magnetic resonance imaging is capable of estimating functional activation and connectivity in the human brain, and lately there has been increased interest in the use of these functional modalities combined with machine learning for identification of psychiatric traits. While these methods bear great potential for early diagnosis and better understanding of disease processes, there is a wide range of processing choices and pitfalls that may severely hamper interpretation and generalization performance unless carefully considered. In this perspective article, we aim to motivate the use of machine learning in schizotypy research. To this end, we describe common data processing steps while commenting on best practices and procedures. First, we introduce the important role of schizotypy to motivate the importance of reliable classification, and summarize the existing machine learning literature on schizotypy. Then, we describe procedures for extraction of features based on fMRI data, including statistical parametric mapping, parcellation, complex network analysis, and decomposition methods, as well as classification with a special focus on support vector classification and deep learning. We provide more detailed descriptions and software as supplementary material. Finally, we present current challenges in machine learning for classification of schizotypy and comment on future trends and perspectives.

  19. Automated discrimination of dementia spectrum disorders using extreme learning machine and structural T1 MRI features.

    PubMed

    Jongin Kim; Boreom Lee

    2017-07-01

    The classification of neuroimaging data for the diagnosis of Alzheimer's Disease (AD) is one of the main research goals of the neuroscience and clinical fields. In this study, we applied an extreme learning machine (ELM) classifier to discriminate AD and mild cognitive impairment (MCI) from normal controls (NC). We compared the performance of ELM with that of a linear-kernel support vector machine (SVM) for 718 structural MRI images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The data consisted of normal controls, MCI converters (MCI-C), MCI non-converters (MCI-NC), and AD. We employed the SVM-based recursive feature elimination (RFE-SVM) algorithm to find the optimal subset of features. We found that the RFE-SVM feature selection approach in combination with ELM yields superior classification accuracy to that of the linear-kernel SVM for structural T1 MRI data.
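An ELM is compact enough to write out in full: a fixed random hidden layer followed by a closed-form least-squares solve for the output weights. The data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(8)
n, d, hidden = 200, 10, 60
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(float)            # toy two-class labels

# Extreme learning machine: the input-to-hidden weights are random and are
# never trained; only the output weights are fitted, in closed form.
W = rng.normal(size=(d, hidden))
b = rng.normal(size=hidden)
H = np.tanh(X @ W + b)                                # hidden-layer activations
beta = np.linalg.lstsq(H, y, rcond=None)[0]           # least-squares output weights

pred = (H @ beta > 0.5).astype(float)
train_acc = (pred == y).mean()
```

The absence of iterative training is what makes ELM fast to fit compared with SVM model selection, at the cost of sensitivity to the random hidden layer.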

  20. Machine learning classification with confidence: application of transductive conformal predictors to MRI-based diagnostic and prognostic markers in depression.

    PubMed

    Nouretdinov, Ilia; Costafreda, Sergi G; Gammerman, Alexander; Chervonenkis, Alexey; Vovk, Vladimir; Vapnik, Vladimir; Fu, Cynthia H Y

    2011-05-15

    There is rapidly accumulating evidence that the application of machine learning classification to neuroimaging measurements may be valuable for the development of diagnostic and prognostic prediction tools in psychiatry. However, current methods do not produce a measure of the reliability of the predictions. Knowing the risk of error associated with a given prediction is essential for the development of neuroimaging-based clinical tools. We propose a general probabilistic classification method to produce measures of confidence for magnetic resonance imaging (MRI) data. We describe the application of the transductive conformal predictor (TCP) to MRI images. TCP generates the most likely prediction and a valid measure of confidence, as well as the set of all possible predictions for a given confidence level. We present the theoretical motivation for TCP and apply it to structural and functional MRI data in patients and healthy controls to investigate diagnostic and prognostic prediction in depression. We verify that TCP predictions are as accurate as those obtained with more standard machine learning methods, such as the support vector machine, while providing the additional benefit of a valid measure of confidence for each prediction. Copyright © 2010 Elsevier Inc. All rights reserved.
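The transductive step, re-scoring the whole augmented set for each candidate label and reading off a p-value, can be sketched with a 1-NN nonconformity measure on toy data.

```python
import numpy as np

rng = np.random.default_rng(9)
X0 = rng.normal(loc=-2, size=(30, 2))
X1 = rng.normal(loc=+2, size=(30, 2))
X, y = np.vstack([X0, X1]), np.r_[np.zeros(30), np.ones(30)]

def score(i, Xa, ya):
    """Nonconformity: nearest same-class distance / nearest other-class distance."""
    d = np.linalg.norm(Xa - Xa[i], axis=1)
    d[i] = np.inf                                     # exclude the point itself
    return d[ya == ya[i]].min() / d[ya != ya[i]].min()

def p_value(x_new, label):
    """Transductive p-value: add (x_new, label), re-score everything, and take
    the fraction of examples at least as nonconforming as the new one."""
    Xa, ya = np.vstack([X, x_new]), np.r_[y, label]
    scores = np.array([score(i, Xa, ya) for i in range(len(ya))])
    return (scores >= scores[-1]).mean()

x_test = np.array([-1.8, -2.1])                       # clearly class-0 territory
p0, p1 = p_value(x_test, 0.0), p_value(x_test, 1.0)
```

The prediction is the label with the larger p-value, and the smaller p-value bounds the confidence; a prediction set at level ε keeps every label whose p-value exceeds ε.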

  1. Classification of fMRI resting-state maps using machine learning techniques: A comparative study

    NASA Astrophysics Data System (ADS)

    Gallos, Ioannis; Siettos, Constantinos

    2017-11-01

    We compare the efficiency of Principal Component Analysis (PCA) and nonlinear manifold learning algorithms (ISOMAP and Diffusion Maps) for classifying brain maps between groups of schizophrenia patients and healthy controls from fMRI scans acquired during a resting-state experiment. After a standard pre-processing pipeline, we applied spatial independent component analysis (ICA) to reduce (a) noise and (b) the spatial-temporal dimensionality of the fMRI maps. On the cross-correlation matrix of the ICA components, we applied PCA, ISOMAP and Diffusion Maps to find an embedded low-dimensional space. Finally, support vector machine (SVM) and k-NN algorithms were used to evaluate the performance of the algorithms in classifying between the two groups.

  2. A Comparison of Supervised Machine Learning Algorithms and Feature Vectors for MS Lesion Segmentation Using Multimodal Structural MRI

    PubMed Central

    Sweeney, Elizabeth M.; Vogelstein, Joshua T.; Cuzzocreo, Jennifer L.; Calabresi, Peter A.; Reich, Daniel S.; Crainiceanu, Ciprian M.; Shinohara, Russell T.

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors than to the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance. PMID:24781953

  3. A comparison of supervised machine learning algorithms and feature vectors for MS lesion segmentation using multimodal structural MRI.

    PubMed

    Sweeney, Elizabeth M; Vogelstein, Joshua T; Cuzzocreo, Jennifer L; Calabresi, Peter A; Reich, Daniel S; Crainiceanu, Ciprian M; Shinohara, Russell T

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors than to the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance.

  4. Performance of a Machine Learning Classifier of Knee MRI Reports in Two Large Academic Radiology Practices: A Tool to Estimate Diagnostic Yield.

    PubMed

    Hassanpour, Saeed; Langlotz, Curtis P; Amrhein, Timothy J; Befera, Nicholas T; Lungren, Matthew P

    2017-04-01

    The purpose of this study is to evaluate the performance of a natural language processing (NLP) system in classifying a database of free-text knee MRI reports at two separate academic radiology practices. An NLP system that uses terms and patterns in manually classified narrative knee MRI reports was constructed. The NLP system was trained and tested on expert-classified knee MRI reports from two major health care organizations. Radiology reports were modeled in the training set as vectors, and a support vector machine framework was used to train the classifier. A separate test set from each organization was used to evaluate the performance of the system. We evaluated the performance of the system both within and across organizations. Standard evaluation metrics, such as accuracy, precision, recall, and F1 score (i.e., the harmonic mean of precision and recall), and their respective 95% CIs were used to measure the efficacy of our classification system. The accuracy for radiology reports belonging to the model's clinically significant concept classes, when the classifier was trained on data from the same institution, was good, yielding an F1 score greater than 90% (95% CI, 84.6-97.3%). Performance of the classifier on cross-institutional application without institution-specific training data yielded F1 scores of 77.6% (95% CI, 69.5-85.7%) and 90.2% (95% CI, 84.5-95.9%) at the two organizations studied. The results show excellent accuracy by the NLP machine learning classifier in classifying free-text knee MRI reports, supporting the institution-independent reproducibility of knee MRI report classification. Furthermore, the machine learning classifier performed well on free-text knee MRI reports from another institution. These data support the feasibility of multi-institutional classification of radiologic imaging text reports with a single machine learning classifier without requiring institution-specific training data.
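
    The evaluation metrics named above follow directly from confusion counts; a small sketch with hypothetical counts (F1 is the harmonic mean of precision and recall):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1 (the harmonic mean of the two),
    the metrics used to score the report classifier above."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical confusion counts for one concept class of knee MRI reports
p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=10)
print(p, r, round(f1, 3))  # 0.9 0.9 0.9
```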

  5. The National Aeronautics and Space Administration's Gilmore Load Cell Machine: Load Cell Calibrations to 2.22 × 10^7 Newtons

    NASA Technical Reports Server (NTRS)

    Haynes, Michael W.

    2000-01-01

    Designed in 1964 and erected in 1966, the mission of the Gilmore Load Cell Machine was to provide highly accurate calibrations for large capacity load cells in support of NASA's Apollo Program. Still in use today, the Gilmore Machine is a national treasure with no equal.

  6. Spatially Regularized Machine Learning for Task and Resting-state fMRI

    PubMed Central

    Song, Xiaomu; Panych, Lawrence P.; Chen, Nan-kuei

    2015-01-01

    Background: Reliable mapping of brain function across sessions and/or subjects in task- and resting-state has been a critical challenge for quantitative fMRI studies, although it has been intensively addressed in the past decades. New Method: A spatially regularized support vector machine (SVM) technique was developed for reliable brain mapping in task- and resting-state. Unlike most existing SVM-based brain mapping techniques, which implement supervised classifications of specific brain functional states or disorders, the proposed method performs a semi-supervised classification for general brain function mapping in which the spatial correlation of fMRI is integrated into the SVM learning. The method can adapt to intra- and inter-subject variations induced by fMRI nonstationarity, and identify a true boundary between active and inactive voxels, or between functionally connected and unconnected voxels, in a feature space. Results: The method was evaluated using synthetic and experimental data at the individual and group level. Multiple features were evaluated in terms of their contributions to the spatially regularized SVM learning. Reliable mapping results in both task- and resting-state were obtained from individual subjects and at the group level. Comparison with Existing Methods: A comparison study was performed with independent component analysis, general linear model, and correlation analysis methods. Experimental results indicate that the proposed method can provide a better or comparable mapping performance at the individual and group level. Conclusions: The proposed method can provide accurate and reliable mapping of brain function in task- and resting-state, and is applicable to a variety of quantitative fMRI studies. PMID:26470627

  7. WE-G-BRB-08: TG-51 Calibration of First Commercial MRI-Guided IMRT System in the Presence of 0.35 Tesla Magnetic Field.

    PubMed

    Goddu, S; Green, O Pechenaya; Mutic, S

    2012-06-01

    The first real-time MRI-guided radiotherapy system has been installed in a clinic and is being evaluated. The presence of a magnetic field (MF) during radiation output calibration may have implications for ionization measurements, and standard calibration protocols may not be suitable for dose measurements on such devices. In this study, we evaluated whether a standard calibration protocol (AAPM TG-51) is appropriate for absolute dose measurement in the presence of an MF. Treatment delivery of the ViewRay (VR) system is via three 15,000 Ci Cobalt-60 heads positioned 120 degrees apart, and all calibration measurements were done in the presence of the 0.35 T MF. Two ADCL-calibrated ionization chambers (Exradin A12 and A16) were used for TG-51 calibration. Chambers were positioned at 5 cm depth (SSD = 105 cm: VR's isocenter), and the MLC leaves were shaped to a 10.5 cm × 10.5 cm field size. Percent-depth-dose (PDD) measurements were performed at 5 and 10 cm depths. The individual output of each head was measured using the AAPM TG-51 protocol. Calibration accuracy for each head was subsequently verified by Radiological Physics Center (RPC) TLD measurements. Measured ion-recombination (Pion) and polarity (Ppol) correction factors were less than 1.002 and 1.006, respectively. Measured PDDs agreed with BJR-25 within ±0.2%. Maximum dose rates for the reference field size at VR's isocenter for heads 1, 2 and 3 were 1.445±0.005, 1.446±0.107, and 1.431±0.006 Gy/minute, respectively. Our calibrations agreed with RPC TLD measurements within ±1.3%, ±2.6% and ±2.0% for treatment heads 1, 2 and 3, respectively. At the time of calibration, the mean activity of the Co-60 sources was 10,800 Ci ± 0.1%. This study shows that TG-51 calibration is feasible in the presence of a 0.35 T MF and that the measurement agreement is within the range of results obtainable for conventional treatment machines. Drs. Green, Goddu, and Mutic served as scientific consultants for ViewRay, Inc.
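
    For context, TG-51 arrives at dose by applying the measured correction factors to the raw chamber reading. A sketch of the formalism with illustrative numbers: only the Pion and Ppol bounds come from the abstract; the reading, N_D,w, and k_Q (unity for Co-60) are hypothetical.

```python
def tg51_dose(m_raw, p_ion, p_pol, p_tp, p_elec, k_q, n_dw):
    """TG-51 formalism: absorbed dose to water is the fully corrected
    chamber reading M times the beam quality factor k_Q times the
    chamber's absorbed-dose calibration coefficient N_D,w."""
    m = m_raw * p_ion * p_pol * p_tp * p_elec   # corrected reading (C)
    return m * k_q * n_dw                       # dose (Gy)

# Illustrative values only: P_ion and P_pol are the upper bounds quoted
# above; the raw reading, N_D,w, and k_Q are hypothetical.
dose = tg51_dose(m_raw=20.0e-9, p_ion=1.002, p_pol=1.006,
                 p_tp=1.000, p_elec=1.000, k_q=1.000, n_dw=5.0e7)
print(round(dose, 3))  # 1.008 (Gy)
```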

  8. Classification of fMRI independent components using IC-fingerprints and support vector machine classifiers.

    PubMed

    De Martino, Federico; Gentile, Francesco; Esposito, Fabrizio; Balsi, Marco; Di Salle, Francesco; Goebel, Rainer; Formisano, Elia

    2007-01-01

    We present a general method for the classification of independent components (ICs) extracted from functional MRI (fMRI) data sets. The method consists of two steps. In the first step, each fMRI-IC is associated with an IC-fingerprint, i.e., a representation of the component in a multidimensional space of parameters. These parameters are post hoc estimates of global properties of the ICs and are largely independent of a specific experimental design and stimulus timing. In the second step a machine learning algorithm automatically separates the IC-fingerprints into six general classes after preliminary training performed on a small subset of expert-labeled components. We illustrate this approach in a multisubject fMRI study employing visual structure-from-motion stimuli encoding faces and control random shapes. We show that: (1) IC-fingerprints are a valuable tool for the inspection, characterization and selection of fMRI-ICs and (2) automatic classifications of fMRI-ICs in new subjects present a high correspondence with those obtained by expert visual inspection of the components. Importantly, our classification procedure highlights several neurophysiologically interesting processes. The most intriguing of which is reflected, with high intra- and inter-subject reproducibility, in one IC exhibiting a transiently task-related activation in the 'face' region of the primary sensorimotor cortex. This suggests that in addition to or as part of the mirror system, somatotopic regions of the sensorimotor cortex are involved in disambiguating the perception of a moving body part. Finally, we show that the same classification algorithm can be successfully applied, without re-training, to fMRI collected using acquisition parameters, stimulation modality and timing considerably different from those used for training.

  9. Multiclass Classification for the Differential Diagnosis on the ADHD Subtypes Using Recursive Feature Elimination and Hierarchical Extreme Learning Machine: Structural MRI Study

    PubMed Central

    Qureshi, Muhammad Naveed Iqbal; Min, Beomjun; Jo, Hang Joon; Lee, Boreom

    2016-01-01

    The classification of neuroimaging data for the diagnosis of certain brain diseases is one of the main research goals of the neuroscience and clinical communities. In this study, we performed multiclass classification using a hierarchical extreme learning machine (H-ELM) classifier. We compared the performance of this classifier with that of a support vector machine (SVM) and basic extreme learning machine (ELM) for cortical MRI data from attention deficit/hyperactivity disorder (ADHD) patients. We used 159 structural MRI images of children from the publicly available ADHD-200 MRI dataset. The data consisted of three types, namely, typically developing (TDC), ADHD-inattentive (ADHD-I), and ADHD-combined (ADHD-C). We carried out feature selection using standard SVM-based recursive feature elimination (RFE-SVM), which enabled us to achieve good classification accuracy (60.78%). We found that the RFE-SVM feature selection approach, in combination with H-ELM, effectively yields high multiclass classification accuracy for structural neuroimaging data. In addition, we found that the most important features for classification were the surface area of the superior frontal lobe, and the cortical thickness, volume, and mean surface area of the whole cortex. PMID:27500640

  10. Multiclass Classification for the Differential Diagnosis on the ADHD Subtypes Using Recursive Feature Elimination and Hierarchical Extreme Learning Machine: Structural MRI Study.

    PubMed

    Qureshi, Muhammad Naveed Iqbal; Min, Beomjun; Jo, Hang Joon; Lee, Boreom

    2016-01-01

    The classification of neuroimaging data for the diagnosis of certain brain diseases is one of the main research goals of the neuroscience and clinical communities. In this study, we performed multiclass classification using a hierarchical extreme learning machine (H-ELM) classifier. We compared the performance of this classifier with that of a support vector machine (SVM) and basic extreme learning machine (ELM) for cortical MRI data from attention deficit/hyperactivity disorder (ADHD) patients. We used 159 structural MRI images of children from the publicly available ADHD-200 MRI dataset. The data consisted of three types, namely, typically developing (TDC), ADHD-inattentive (ADHD-I), and ADHD-combined (ADHD-C). We carried out feature selection using standard SVM-based recursive feature elimination (RFE-SVM), which enabled us to achieve good classification accuracy (60.78%). We found that the RFE-SVM feature selection approach, in combination with H-ELM, effectively yields high multiclass classification accuracy for structural neuroimaging data. In addition, we found that the most important features for classification were the surface area of the superior frontal lobe, and the cortical thickness, volume, and mean surface area of the whole cortex.
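
    The RFE step described above can be sketched with a plain least-squares linear model standing in for the linear SVM; the toy data below are hypothetical, with only features 1 and 4 truly informative:

```python
import numpy as np

def rfe_linear(X, y, n_keep):
    """Recursive feature elimination sketch: fit a least-squares linear
    model (a stand-in for the linear SVM in RFE-SVM) on the surviving
    features and repeatedly drop the one with the smallest |weight|."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        w, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
        keep.pop(int(np.argmin(np.abs(w))))
    return keep

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3 * X[:, 1] - 2 * X[:, 4] + 0.1 * rng.normal(size=200)
print(sorted(rfe_linear(X, y, n_keep=2)))  # [1, 4]
```

    The same loop works with any model that exposes per-feature weights, which is why RFE pairs naturally with linear SVMs.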

  11. Ensemble support vector machine classification of dementia using structural MRI and mini-mental state examination.

    PubMed

    Sørensen, Lauge; Nielsen, Mads

    2018-05-15

    The International Challenge for Automated Prediction of MCI from MRI data offered independent, standardized comparison of machine learning algorithms for multi-class classification of normal control (NC), mild cognitive impairment (MCI), converting MCI (cMCI), and Alzheimer's disease (AD) using brain imaging and general cognition. We proposed to use an ensemble of support vector machines (SVMs) that combined bagging without replacement and feature selection. SVM is the most commonly used algorithm in multivariate classification of dementia, and it was therefore valuable to evaluate the potential benefit of ensembling this type of classifier. The ensemble SVM, using either a linear or a radial basis function (RBF) kernel, achieved multi-class classification accuracies of 55.6% and 55.0% in the challenge test set (60 NC, 60 MCI, 60 cMCI, 60 AD), resulting in a third place in the challenge. Similar feature subset sizes were obtained for both kernels, and the most frequently selected MRI features were the volumes of the two hippocampal subregions left presubiculum and right subiculum. Post-challenge analysis revealed that enforcing a minimum number of selected features and increasing the number of ensemble classifiers improved classification accuracy up to 59.1%. The ensemble SVM outperformed single SVM classifications consistently in the challenge test set. Ensemble methods using bagging and feature selection can improve the performance of the commonly applied SVM classifier in dementia classification. This resulted in competitive classification accuracies in the International Challenge for Automated Prediction of MCI from MRI data. Copyright © 2018 Elsevier B.V. All rights reserved.
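
    The ensemble recipe above (bagging without replacement plus a vote over base classifiers) can be sketched with a nearest-centroid classifier standing in for the SVM; all data and parameters below are hypothetical:

```python
import numpy as np

def bagged_predict(X_train, y_train, X_test, n_learners=11, frac=0.7, seed=0):
    """Ensemble sketch: each base learner sees a random subset drawn
    *without* replacement, and test labels are decided by majority
    vote.  A nearest-centroid classifier stands in for the SVM."""
    rng = np.random.default_rng(seed)
    n = len(X_train)
    votes = np.zeros((len(X_test), 2), dtype=int)
    for _ in range(n_learners):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        Xs, ys = X_train[idx], y_train[idx]
        c0, c1 = Xs[ys == 0].mean(axis=0), Xs[ys == 1].mean(axis=0)
        pred = (np.linalg.norm(X_test - c1, axis=1)
                < np.linalg.norm(X_test - c0, axis=1)).astype(int)
        votes[np.arange(len(X_test)), pred] += 1
    return votes.argmax(axis=1)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=-2, size=(50, 3)),
               rng.normal(loc=+2, size=(50, 3))])
y = np.array([0] * 50 + [1] * 50)
print(bagged_predict(X, y, np.array([[-2., -2., -2.], [2., 2., 2.]])))  # [0 1]
```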

  12. Multiparametric MRI characterization and prediction in autism spectrum disorder using graph theory and machine learning.

    PubMed

    Zhou, Yongxia; Yu, Fang; Duong, Timothy

    2014-01-01

    This study employed graph theory and machine learning analysis of multiparametric MRI data to improve characterization and prediction in autism spectrum disorders (ASD). Data from 127 children with ASD (13.5±6.0 years) and 153 age- and gender-matched typically developing children (14.5±5.7 years) were selected from the multi-center Functional Connectome Project. Regional gray matter volume and cortical thickness increased, whereas white matter volume decreased in ASD compared to controls. Small-world network analysis of quantitative MRI data demonstrated decreased global efficiency based on gray matter cortical thickness but not with functional connectivity MRI (fcMRI) or volumetry. An integrative model of 22 quantitative imaging features was used for classification and prediction of phenotypic features that included the autism diagnostic observation schedule, the revised autism diagnostic interview, and intelligence quotient scores. Among the 22 imaging features, four (caudate volume, caudate-cortical functional connectivity and inferior frontal gyrus functional connectivity) were found to be highly informative, markedly improving classification and prediction accuracy when compared with the single imaging features. This approach could potentially serve as a biomarker in prognosis, diagnosis, and monitoring disease progression.

  13. Advice Taking from Humans and Machines: An fMRI and Effective Connectivity Study.

    PubMed

    Goodyear, Kimberly; Parasuraman, Raja; Chernyak, Sergey; Madhavan, Poornima; Deshpande, Gopikrishna; Krueger, Frank

    2016-01-01

    With new technological advances, advice can come from different sources such as machines or humans, but how individuals respond to such advice and the neural correlates involved need to be better understood. We combined functional MRI and multivariate Granger causality analysis with an X-ray luggage-screening task to investigate the neural basis and corresponding effective connectivity involved with advice utilization from agents framed as experts. Participants were asked to accept or reject good or bad advice from a human or machine agent with low reliability (high false alarm rate). We showed that unreliable advice decreased performance overall and participants interacting with the human agent had a greater depreciation of advice utilization during bad advice compared to the machine agent. These differences in advice utilization can be perceivably due to reevaluation of expectations arising from association of dispositional credibility for each agent. We demonstrated that differences in advice utilization engaged brain regions that may be associated with evaluation of personal characteristics and traits (precuneus, posterior cingulate cortex, temporoparietal junction) and interoception (posterior insula). We found that the right posterior insula and left precuneus were the drivers of the advice utilization network that were reciprocally connected to each other and also projected to all other regions. Our behavioral and neuroimaging results have significant implications for society because of progressions in technology and increased interactions with machines.

  14. Advice Taking from Humans and Machines: An fMRI and Effective Connectivity Study

    PubMed Central

    Goodyear, Kimberly; Parasuraman, Raja; Chernyak, Sergey; Madhavan, Poornima; Deshpande, Gopikrishna; Krueger, Frank

    2016-01-01

    With new technological advances, advice can come from different sources such as machines or humans, but how individuals respond to such advice and the neural correlates involved need to be better understood. We combined functional MRI and multivariate Granger causality analysis with an X-ray luggage-screening task to investigate the neural basis and corresponding effective connectivity involved with advice utilization from agents framed as experts. Participants were asked to accept or reject good or bad advice from a human or machine agent with low reliability (high false alarm rate). We showed that unreliable advice decreased performance overall and participants interacting with the human agent had a greater depreciation of advice utilization during bad advice compared to the machine agent. These differences in advice utilization can be perceivably due to reevaluation of expectations arising from association of dispositional credibility for each agent. We demonstrated that differences in advice utilization engaged brain regions that may be associated with evaluation of personal characteristics and traits (precuneus, posterior cingulate cortex, temporoparietal junction) and interoception (posterior insula). We found that the right posterior insula and left precuneus were the drivers of the advice utilization network that were reciprocally connected to each other and also projected to all other regions. Our behavioral and neuroimaging results have significant implications for society because of progressions in technology and increased interactions with machines. PMID:27867351

  15. Calibrators measurement system for headlamp tester of motor vehicle base on machine vision

    NASA Astrophysics Data System (ADS)

    Pan, Yue; Zhang, Fan; Xu, Xi-ping; Zheng, Zhe

    2014-09-01

    With the development of photoelectric detection technology, machine vision has found wider use in industry. This paper introduces a measuring system for the calibrators of motor-vehicle headlamp testers, with a CCD image sampling system at its core. It presents the measuring principle for the optical axis angle and light intensity, and establishes the linear relationship between the calibrator's facula illuminance and the image-plane illuminance, providing an important specification for the CCD imaging system. Image processing in MATLAB extracts the flare's geometric midpoint and average gray level, and fitting these data by the method of least squares yields a regression equation relating illuminance and gray level. The paper analyzes the errors of the measurement system, giving the combined standard uncertainty and the error sources for the optical axis angle; the average measuring accuracy of the optical axis angle is controlled within 40''. Because the whole testing process is digital rather than manual, the system offers higher accuracy and better repeatability than previous measuring systems.
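
    The least-squares regression step described above can be sketched as follows; the calibration pairs below are hypothetical, not the paper's measurements:

```python
import numpy as np

# Hypothetical calibration pairs: facula illuminance (lux) vs. mean
# gray level of the imaged flare.  The real pairs come from the CCD
# experiments described above.
illuminance = np.array([10., 20., 30., 40., 50.])
gray_level = np.array([21., 39., 61., 80., 99.])

# Least-squares fit of the regression equation: gray = a*illuminance + b
slope, intercept = np.polyfit(illuminance, gray_level, 1)
print(round(float(slope), 2), round(float(intercept), 2))  # slope ~ 1.97, intercept ~ 0.9
```

    Inverting the fitted line then maps a measured gray level back to an illuminance estimate during testing.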

  16. A new vibrator to stimulate muscle proprioceptors in fMRI.

    PubMed

    Montant, Marie; Romaiguère, Patricia; Roll, Jean-Pierre

    2009-03-01

    Studying cognitive brain functions by functional magnetic resonance imaging (fMRI) requires appropriate stimulation devices that do not interfere with the magnetic fields. Since the emergence of fMRI in the 1990s, a number of stimulation devices have been developed for the visual and auditory modalities. Only a few devices, however, have been developed for the somesthetic modality. Here, we present a vibration device for studying somesthesia that is compatible with high magnetic field environments and that can be used in fMRI machines. This device consists of a polyvinyl chloride (PVC) vibrator containing a wind turbine and of a pneumatic apparatus that controls 1-6 vibrators simultaneously. Just like classical electromagnetic vibrators, our device stimulates muscle mechanoreceptors (muscle spindles) and generates reliable illusions of movement. We provide the fMRI compatibility data (phantom test), the calibration curve (vibration frequency as a function of air flow), as well as the results of a kinesthetic test (perceived speed of the illusory movement as a function of vibration frequency). This device was used successfully in several brain imaging studies using both fMRI and magnetoencephalography.

  17. Discriminative analysis of schizophrenia using support vector machine and recursive feature elimination on structural MRI images.

    PubMed

    Lu, Xiaobing; Yang, Yongzhe; Wu, Fengchun; Gao, Minjian; Xu, Yong; Zhang, Yue; Yao, Yongcheng; Du, Xin; Li, Chengwei; Wu, Lei; Zhong, Xiaomei; Zhou, Yanling; Fan, Ni; Zheng, Yingjun; Xiong, Dongsheng; Peng, Hongjun; Escudero, Javier; Huang, Biao; Li, Xiaobo; Ning, Yuping; Wu, Kai

    2016-07-01

    Structural abnormalities in schizophrenia (SZ) patients have been well documented with structural magnetic resonance imaging (MRI) data using voxel-based morphometry (VBM) and region of interest (ROI) analyses. However, these analyses can only detect group-wise differences and thus, have a poor predictive value for individuals. In the present study, we applied a machine learning method that combined support vector machine (SVM) with recursive feature elimination (RFE) to discriminate SZ patients from normal controls (NCs) using their structural MRI data. We first employed both VBM and ROI analyses to compare gray matter volume (GMV) and white matter volume (WMV) between 41 SZ patients and 42 age- and sex-matched NCs. The method of SVM combined with RFE was used to discriminate SZ patients from NCs using significant between-group differences in both GMV and WMV as input features. We found that SZ patients showed GM and WM abnormalities in several brain structures primarily involved in the emotion, memory, and visual systems. An SVM with an RFE classifier using the significant structural abnormalities identified by the VBM analysis as input features achieved the best performance (an accuracy of 88.4%, a sensitivity of 91.9%, and a specificity of 84.4%) in the discriminative analyses of SZ patients. These results suggested that distinct neuroanatomical profiles associated with SZ patients might provide a potential biomarker for disease diagnosis, and machine-learning methods can reveal neurobiological mechanisms in psychiatric diseases.

  18. An innovative method for coordinate measuring machine one-dimensional self-calibration with simplified experimental process.

    PubMed

    Fang, Cheng; Butler, David Lee

    2013-05-01

    In this paper, an innovative method for CMM (Coordinate Measuring Machine) self-calibration is proposed. In contrast to conventional CMM calibration, which relies heavily on a high-precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact fabricated from commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. Mathematically, the number of samples can be minimized by eliminating the redundant equations among those configured by the experimental data array. The section lengths of the artefact are measured at the arranged positions, from which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short of one equation, which can be supplemented either by measuring the total length of the artefact with a higher-precision CMM or by calibrating the single-point error at the extreme position with a laser interferometer; in this paper, the latter is selected. With spline interpolation, the error compensation curve can be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that, with the error compensation curve, the measurement uncertainty can be reduced by 50%.
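
    The central observation above is that section-length measurements constrain only differences of the positional errors, leaving the equation set short by one. A sketch with hypothetical readings, supplying the missing equation by anchoring the first-position error to zero (the paper instead calibrates one extreme position with a laser interferometer):

```python
import numpy as np

# Section lengths measured between adjacent artefact positions constrain
# only *differences* of the positional errors e[i].  Anchoring e[0] = 0
# closes the otherwise underdetermined system.  Readings are hypothetical.
nominal = 100.0                                   # nominal section length (mm)
measured = np.array([100.002, 99.999, 100.003])   # measured section lengths (mm)

# measured[i] = nominal + e[i+1] - e[i]  ->  cumulative sum recovers e
e = np.concatenate([[0.0], np.cumsum(measured - nominal)])
print(np.round(e, 3))  # positional error at each position, relative to e[0]
```

    A spline through the recovered e values would then give the continuous error compensation curve the abstract describes.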

  19. Application of machine learning classification for structural brain MRI in mood disorders: Critical review from a clinical perspective.

    PubMed

    Kim, Yong-Ku; Na, Kyoung-Sae

    2018-01-03

    Mood disorders are a highly prevalent group of mental disorders causing substantial socioeconomic burden. There are various methodological approaches for identifying the underlying mechanisms of the etiology, symptomatology, and therapeutics of mood disorders; however, neuroimaging studies have provided the most direct evidence for mood disorder neural substrates by visualizing the brains of living individuals. The prefrontal cortex, hippocampus, amygdala, thalamus, ventral striatum, and corpus callosum are associated with depression and bipolar disorder. Identifying the distinct and common contributions of these anatomical regions to depression and bipolar disorder have broadened and deepened our understanding of mood disorders. However, the extent to which neuroimaging research findings contribute to clinical practice in the real-world setting is unclear. As traditional or non-machine learning MRI studies have analyzed group-level differences, it is not possible to directly translate findings from research to clinical practice; the knowledge gained pertains to the disorder, but not to individuals. On the other hand, a machine learning approach makes it possible to provide individual-level classifications. For the past two decades, many studies have reported on the classification accuracy of machine learning-based neuroimaging studies from the perspective of diagnosis and treatment response. However, for the application of a machine learning-based brain MRI approach in real world clinical settings, several major issues should be considered. Secondary changes due to illness duration and medication, clinical subtypes and heterogeneity, comorbidities, and cost-effectiveness restrict the generalization of the current machine learning findings. Sophisticated classification of clinical and diagnostic subtypes is needed. Additionally, as the approach is inevitably limited by sample size, multi-site participation and data-sharing are needed in the future.

  20. Supervised machine learning-based classification scheme to segment the brainstem on MRI in multicenter brain tumor treatment context.

    PubMed

    Dolz, Jose; Laprie, Anne; Ken, Soléakhéna; Leroy, Henri-Arthur; Reyns, Nicolas; Massoptier, Laurent; Vermandel, Maximilien

    2016-01-01

    To constrain the risk of severe toxicity in radiotherapy and radiosurgery, precise volume delineation of organs at risk is required. This task is still performed manually, which is time-consuming and prone to observer variability. To address these issues, and as an alternative to atlas-based segmentation methods, machine learning techniques such as support vector machines (SVM) have recently been presented for segmenting subcortical structures on magnetic resonance images (MRI). Here, an SVM is proposed to segment the brainstem on MRI in a multicenter brain cancer context. A dataset composed of 14 adult brain MRI scans is used to evaluate its performance. In addition to spatial and probabilistic information, five different image intensity value (IIV) configurations are evaluated as features to train the SVM classifier. Segmentation accuracy is evaluated by computing the Dice similarity coefficient (DSC), absolute volume difference (AVD), and percentage volume difference between automatic and manual contours. Mean DSC for all proposed IIV configurations ranged from 0.89 to 0.90. Mean AVD values were below 1.5 cm(3); the value for the best-performing IIV configuration was 0.85 cm(3), representing an absolute mean difference of 3.99% with respect to the manually segmented volumes. Results suggest consistent volume estimation and high spatial similarity with respect to expert delineations. The proposed approach outperformed previously presented methods for segmenting the brainstem, not only in volume similarity metrics but also in segmentation time. Preliminary results suggest that the approach might be promising for adoption in clinical use.
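
    The two headline metrics above, DSC and AVD, are simple to compute from binary masks; a sketch on a hypothetical 2×3 toy mask:

```python
import numpy as np

def dice_and_avd(auto_mask, manual_mask, voxel_volume=1.0):
    """Dice similarity coefficient and absolute volume difference
    between automatic and manual binary segmentations, the two
    headline metrics quoted above."""
    a, m = auto_mask.astype(bool), manual_mask.astype(bool)
    dsc = 2.0 * np.logical_and(a, m).sum() / (a.sum() + m.sum())
    avd = abs(int(a.sum()) - int(m.sum())) * voxel_volume
    return dsc, avd

auto = np.array([[1, 1, 0], [1, 0, 0]])
manual = np.array([[1, 1, 0], [0, 1, 0]])
dsc, avd = dice_and_avd(auto, manual)
print(round(dsc, 3), avd)  # 0.667 0.0
```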

  1. Prostate cancer localization with multispectral MRI using cost-sensitive support vector machines and conditional random fields.

    PubMed

    Artan, Yusuf; Haider, Masoom A; Langer, Deanna L; van der Kwast, Theodorus H; Evans, Andrew J; Yang, Yongyi; Wernick, Miles N; Trachtenberg, John; Yetik, Imam Samil

    2010-09-01

    Prostate cancer is a leading cause of cancer death among men in the United States. Fortunately, the survival rate for patients diagnosed early is relatively high, so in vivo imaging plays an important role in the detection and treatment of the disease. Accurate prostate cancer localization with noninvasive imaging can be used to guide biopsy, radiotherapy, and surgery, as well as to monitor disease progression. Magnetic resonance imaging (MRI) performed with an endorectal coil provides higher prostate cancer localization accuracy than transrectal ultrasound (TRUS). In general, however, a single type of MRI is not sufficient for reliable tumor localization. As an alternative, multispectral MRI, i.e., the use of multiple MRI-derived datasets, has emerged as a promising noninvasive imaging technique for the localization of prostate cancer; however, almost all studies to date have relied on human readers. There is significant inter- and intraobserver variability among human readers, and it is difficult for humans to analyze the large datasets of multispectral MRI. To address these problems, this study presents an automated localization method using cost-sensitive support vector machines (SVMs) and shows that it achieves improved localization accuracy over classical SVM. Additionally, we develop a new segmentation method by combining conditional random fields (CRF) with a cost-sensitive framework and show that it further improves on cost-sensitive SVM results by incorporating spatial information. We test SVM, cost-sensitive SVM, and the proposed cost-sensitive CRF on multispectral MRI datasets acquired from 21 biopsy-confirmed cancer patients. Our results show that multispectral MRI increases the accuracy of prostate cancer localization compared with single MR images, and that advanced methods such as cost-sensitive SVM and the proposed cost-sensitive CRF boost performance significantly compared with SVM.
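    The cost-sensitive idea can be illustrated with a minimal stand-in (the study itself uses cost-sensitive SVMs; the simple thresholding rule and names below are our own): weighting missed cancers more heavily than false alarms lowers the probability a voxel needs before being flagged.

```python
# Hypothetical illustration of asymmetric misclassification costs, not the
# paper's method: under expected-cost minimization, a voxel is flagged when
# p * cost_miss >= (1 - p) * cost_false_alarm, i.e. when
# p >= cost_false_alarm / (cost_false_alarm + cost_miss).

def cost_sensitive_decision(p_cancer, cost_miss=5.0, cost_false_alarm=1.0):
    """Flag a voxel as cancer when its probability exceeds the
    cost-weighted threshold."""
    threshold = cost_false_alarm / (cost_false_alarm + cost_miss)
    return p_cancer >= threshold
```

    With equal costs the threshold is 0.5; raising the miss cost to 5x drops it to about 0.17, trading false alarms for fewer missed tumors.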

  2. Machine Learning and Radiology

    PubMed Central

    Wang, Shijun; Summers, Ronald M.

    2012-01-01

    In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focus on six categories of applications: medical image segmentation; registration; computer-aided detection and diagnosis; brain function or activity analysis and neurological disease diagnosis from fMR images; content-based image retrieval systems for CT or MRI images; and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images, and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has been shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers. PMID:22465077

  3. Machine learning and radiology.

    PubMed

    Wang, Shijun; Summers, Ronald M

    2012-07-01

    In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focus on six categories of applications: medical image segmentation; registration; computer-aided detection and diagnosis; brain function or activity analysis and neurological disease diagnosis from fMR images; content-based image retrieval systems for CT or MRI images; and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images, and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has been shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers. Copyright © 2012. Published by Elsevier B.V.

  4. Self-Calibrated In-Process Photogrammetry for Large Raw Part Measurement and Alignment before Machining

    PubMed Central

    Mendikute, Alberto; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai

    2017-01-01

    Photogrammetry methods are increasingly used as a 3D technique for large-scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, with measurement traceability provided by precise, off-process pre-calibrated digital cameras and scale bars. From the 2D target image coordinates, the target 3D coordinates and camera views are jointly computed. One application of photogrammetry is the measurement of raw part surfaces prior to machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. That approach incurs a high computation time, leading in practice to time-consuming and user-dependent iterative review and re-processing procedures until an adequate set of images is taken, limiting its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer-grade desktop PC, enabling quasi-real-time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach, given its potential for the highest precision when using low-cost, non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special-purpose calibration artifacts. The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image, up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g., 0.1 mm error in 1 m) with

  5. Self-Calibrated In-Process Photogrammetry for Large Raw Part Measurement and Alignment before Machining.

    PubMed

    Mendikute, Alberto; Yagüe-Fabra, José A; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai

    2017-09-09

    Photogrammetry methods are increasingly used as a 3D technique for large-scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, with measurement traceability provided by precise, off-process pre-calibrated digital cameras and scale bars. From the 2D target image coordinates, the target 3D coordinates and camera views are jointly computed. One application of photogrammetry is the measurement of raw part surfaces prior to machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. That approach incurs a high computation time, leading in practice to time-consuming and user-dependent iterative review and re-processing procedures until an adequate set of images is taken, limiting its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer-grade desktop PC, enabling quasi-real-time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach, given its potential for the highest precision when using low-cost, non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special-purpose calibration artifacts. The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image, up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g., 0.1 mm error in 1 m) with

  6. Machine learning on brain MRI data for differential diagnosis of Parkinson's disease and Progressive Supranuclear Palsy.

    PubMed

    Salvatore, C; Cerasa, A; Castiglioni, I; Gallivanone, F; Augimeri, A; Lopez, M; Arabia, G; Morelli, M; Gilardi, M C; Quattrone, A

    2014-01-30

    Supervised machine learning has been proposed as a revolutionary approach for identifying sensitive medical image biomarkers (or combinations of them), allowing for automatic diagnosis of individual subjects. The aim of this work was to assess the feasibility of a supervised machine learning algorithm for the assisted diagnosis of patients with clinically diagnosed Parkinson's disease (PD) and Progressive Supranuclear Palsy (PSP). Morphological T1-weighted magnetic resonance images (MRIs) of PD patients (n = 28), PSP patients (n = 28) and healthy control subjects (n = 28) were analyzed with a supervised machine learning algorithm combining principal component analysis for feature extraction with support vector machines for classification. The algorithm was able to obtain voxel-based morphological biomarkers of PD and PSP. The algorithm allowed individual diagnosis of PD versus controls, PSP versus controls and PSP versus PD with accuracy, specificity and sensitivity > 90%. Voxels influencing classification between PD and PSP patients involved the midbrain, pons, corpus callosum and thalamus, four critical regions known to be strongly involved in the pathophysiological mechanisms of PSP. Classification accuracy for individual PSP patients was consistent with previous manual morphological metrics and with other supervised machine learning applications to MRI data, whereas accuracy in the detection of individual PD patients was significantly higher with our classification method. The algorithm provides excellent discrimination of PD patients from PSP patients at an individual level, thus encouraging the application of computer-based diagnosis in clinical practice. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Vessel calibre—a potential MRI biomarker of tumour response in clinical trials

    PubMed Central

    Emblem, Kyrre E.; Farrar, Christian T.; Gerstner, Elizabeth R.; Batchelor, Tracy T.; Borra, Ronald J. H.; Rosen, Bruce R.; Sorensen, A. Gregory; Jain, Rakesh K.

    2015-01-01

    Our understanding of the importance of blood vessels and angiogenesis in cancer has increased considerably over the past decades, and the assessment of tumour vessel calibre and structure has become increasingly important for in vivo monitoring of therapeutic response. The preferred method for in vivo imaging of most solid cancers is MRI, and the concept of vessel-calibre MRI has evolved since its inception in the early 1990s. Almost a quarter of a century later, however, vessel-calibre MRI, unlike traditional contrast-enhanced MRI techniques, remains widely inaccessible to the general clinical community. The narrow availability of the technique is, in part, attributable to limited awareness and a lack of imaging standardization. Thus, the role of vessel-calibre MRI in early-phase clinical trials remains to be determined. By contrast, regulatory approvals of antiangiogenic agents that are not directly cytotoxic have created an urgent need for clinical trials incorporating advanced imaging analyses, going beyond traditional assessments of tumour volume. To this end, we review the field of vessel-calibre MRI and summarize the emerging evidence supporting the use of this technique to monitor response to anticancer therapy. We also discuss the potential use of this biomarker in clinical imaging trials and highlight relevant avenues for future research. PMID:25113840

  8. Support vector machine for breast cancer classification using diffusion-weighted MRI histogram features: Preliminary study.

    PubMed

    Vidić, Igor; Egnell, Liv; Jerome, Neil P; Teruel, Jose R; Sjøbakk, Torill E; Østlie, Agnes; Fjøsne, Hans E; Bathen, Tone F; Goa, Pål Erik

    2018-05-01

    Diffusion-weighted MRI (DWI) is currently one of the fastest-developing MRI-based techniques in oncology. Histogram properties from model fitting of DWI are useful features for differentiation of lesions, and classification can potentially be improved by machine learning. To evaluate classification of malignant and benign tumors and breast cancer subtypes using a support vector machine (SVM). Prospective. Fifty-one patients with benign (n = 23) and malignant (n = 28) breast tumors (26 ER+, of which six were HER2+). Patients were imaged with DW-MRI (3T) using twice-refocused spin-echo echo-planar imaging with repetition time / echo time (TR/TE) = 9000/86 msec, 90 × 90 matrix size, 2 × 2 mm in-plane resolution, 2.5 mm slice thickness, and 13 b-values. Apparent diffusion coefficient (ADC), relative enhanced diffusivity (RED), and the intravoxel incoherent motion (IVIM) parameters diffusivity (D), pseudo-diffusivity (D*), and perfusion fraction (f) were calculated. The histogram properties (median, mean, standard deviation, skewness, kurtosis) were used as features in SVM (10-fold cross-validation) for differentiation of lesions and subtyping. Accuracies of the SVM classifications were calculated to find the combination of features with the highest prediction accuracy. Mann-Whitney tests were performed for univariate comparisons. For benign versus malignant tumors, univariate analysis found 11 histogram properties to be significant differentiators. Using SVM, the highest accuracy (0.96) was achieved from a single feature (mean of RED), or from three-feature combinations of IVIM or ADC. Combining features from all models gave perfect classification. No single feature predicted HER2 status of ER+ tumors (univariate or SVM), although high accuracy (0.90) was achieved with SVM combining several features. Importantly, these features had to include higher-order statistics (kurtosis and skewness), indicating the importance of accounting for heterogeneity. Our
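    The five histogram properties used as SVM features above can be sketched as follows (an illustrative stdlib-only implementation, not the authors' code; population moments are assumed for skewness and kurtosis):

```python
import statistics as st

# Sketch: the five per-lesion histogram features named in the abstract,
# computed from a flat list of per-voxel parameter values (e.g. an ADC map).

def histogram_features(values):
    n = len(values)
    m = st.mean(values)
    sd = st.pstdev(values)  # population standard deviation
    skew = sum((v - m) ** 3 for v in values) / (n * sd ** 3)
    kurt = sum((v - m) ** 4 for v in values) / (n * sd ** 4)
    return {"median": st.median(values), "mean": m, "std": sd,
            "skewness": skew, "kurtosis": kurt}
```

    Skewness and kurtosis are the higher-order statistics the abstract highlights as necessary for capturing tumor heterogeneity.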

  9. An fMRI and effective connectivity study investigating miss errors during advice utilization from human and machine agents.

    PubMed

    Goodyear, Kimberly; Parasuraman, Raja; Chernyak, Sergey; de Visser, Ewart; Madhavan, Poornima; Deshpande, Gopikrishna; Krueger, Frank

    2017-10-01

    As society becomes more reliant on machines and automation, understanding how people utilize advice is a necessary endeavor. Our objective was to reveal the underlying neural associations during advice utilization from expert human and machine agents with fMRI and multivariate Granger causality analysis. During an X-ray luggage-screening task, participants accepted or rejected good or bad advice from either the human or machine agent framed as experts with manipulated reliability (high miss rate). We showed that the machine-agent group decreased their advice utilization compared to the human-agent group and these differences in behaviors during advice utilization could be accounted for by high expectations of reliable advice and changes in attention allocation due to miss errors. Brain areas involved with the salience and mentalizing networks, as well as sensory processing involved with attention, were recruited during the task and the advice utilization network consisted of attentional modulation of sensory information with the lingual gyrus as the driver during the decision phase and the fusiform gyrus as the driver during the feedback phase. Our findings expand on the existing literature by showing that misses degrade advice utilization, which is represented in a neural network involving salience detection and self-processing with perceptual integration.

  10. Absolute calibration for complex-geometry biomedical diffuse optical spectroscopy

    NASA Astrophysics Data System (ADS)

    Mastanduno, Michael A.; Jiang, Shudong; El-Ghussein, Fadi; diFlorio-Alexander, Roberta; Pogue, Brian W.; Paulsen, Keith D.

    2013-03-01

    We present a methodology for calibrating NIRS/MRI imaging data against an absolute reference phantom, with results in both phantoms and healthy volunteers. This method directly calibrates data to a diffusion-based model, takes advantage of patient-specific geometry from MRI prior information, and generates an initial guess without the need for a large data set. This method of calibration allows for more accurate quantification of total hemoglobin, oxygen saturation, water content, scattering, and lipid concentration compared with other, slope-based methods. We found the main source of error in the method to be incorrect assignment of reference phantom optical properties rather than the initial guess in reconstruction. We also present examples of phantom and breast images from a combined frequency-domain and continuous-wave MRI-coupled NIRS system. We were able to recover phantom data within 10% of expected contrast and within 10% of the actual value using this method, and compare these results with slope-based calibration methods. Finally, we were able to use this technique to calibrate and reconstruct images from healthy volunteers. Representative images are shown and discussion is provided for comparison with existing literature. These methods work towards fully combining the synergistic attributes of MRI and NIRS for in vivo imaging of breast cancer. Complete software and hardware integration in dual-modality instruments is especially important due to the complexity of the technology, and success will contribute to complex anatomical and molecular prognostic information that can be readily obtained in clinical use.

  11. Individualized prediction of illness course at the first psychotic episode: a support vector machine MRI study.

    PubMed

    Mourao-Miranda, J; Reinders, A A T S; Rocha-Rego, V; Lappin, J; Rondina, J; Morgan, C; Morgan, K D; Fearon, P; Jones, P B; Doody, G A; Murray, R M; Kapur, S; Dazzan, P

    2012-05-01

    To date, magnetic resonance imaging (MRI) has made little impact on the diagnosis and monitoring of psychoses in individual patients. In this study, we used a support vector machine (SVM) whole-brain classification approach to predict future illness course at the individual level from MRI data obtained at the first psychotic episode. One hundred patients at their first psychotic episode and 91 healthy controls had an MRI scan. Patients were re-evaluated 6.2 years (s.d.=2.3) later, and were classified as having a continuous, episodic or intermediate illness course. Twenty-eight subjects with a continuous course were compared with 28 patients with an episodic course and with 28 healthy controls. We trained each SVM classifier independently for the following contrasts: continuous versus episodic, continuous versus healthy controls, and episodic versus healthy controls. At baseline, patients with a continuous course were already distinguishable, with significance above chance level, from both patients with an episodic course (p=0.004, sensitivity=71, specificity=68) and healthy individuals (p=0.01, sensitivity=71, specificity=61). Patients with an episodic course could not be distinguished from healthy individuals. When patients with an intermediate outcome were classified according to the discriminating pattern episodic versus continuous, 74% of those who did not develop other episodes were classified as episodic, and 65% of those who did develop further episodes were classified as continuous (p=0.035). We provide preliminary evidence of MRI application in the individualized prediction of future illness course, using a simple and automated SVM pipeline. When replicated and validated in larger groups, this could enable targeted clinical decisions based on imaging data.

  12. Individualized prediction of illness course at the first psychotic episode: a support vector machine MRI study

    PubMed Central

    Mourao-Miranda, J.; Reinders, A. A. T. S.; Rocha-Rego, V.; Lappin, J.; Rondina, J.; Morgan, C.; Morgan, K. D.; Fearon, P.; Jones, P. B.; Doody, G. A.; Murray, R. M.; Kapur, S.; Dazzan, P.

    2012-01-01

    Background To date, magnetic resonance imaging (MRI) has made little impact on the diagnosis and monitoring of psychoses in individual patients. In this study, we used a support vector machine (SVM) whole-brain classification approach to predict future illness course at the individual level from MRI data obtained at the first psychotic episode. Method One hundred patients at their first psychotic episode and 91 healthy controls had an MRI scan. Patients were re-evaluated 6.2 years (s.d.=2.3) later, and were classified as having a continuous, episodic or intermediate illness course. Twenty-eight subjects with a continuous course were compared with 28 patients with an episodic course and with 28 healthy controls. We trained each SVM classifier independently for the following contrasts: continuous versus episodic, continuous versus healthy controls, and episodic versus healthy controls. Results At baseline, patients with a continuous course were already distinguishable, with significance above chance level, from both patients with an episodic course (p=0.004, sensitivity=71, specificity=68) and healthy individuals (p=0.01, sensitivity=71, specificity=61). Patients with an episodic course could not be distinguished from healthy individuals. When patients with an intermediate outcome were classified according to the discriminating pattern episodic versus continuous, 74% of those who did not develop other episodes were classified as episodic, and 65% of those who did develop further episodes were classified as continuous (p=0.035). Conclusions We provide preliminary evidence of MRI application in the individualized prediction of future illness course, using a simple and automated SVM pipeline. When replicated and validated in larger groups, this could enable targeted clinical decisions based on imaging data. PMID:22059690

  13. Power Doppler signal calibration between ultrasound machines by use of a capillary-flow phantom for pannus vascularity in rheumatoid finger joints: a basic study.

    PubMed

    Sakano, Ryosuke; Kamishima, Tamotsu; Nishida, Mutsumi; Horie, Tatsunori

    2015-01-01

    Ultrasound allows the detection and grading of inflammation in rheumatology. Despite these advantages of ultrasound in the management of rheumatoid patients, it is well known that there are significant machine-to-machine disagreements regarding signal quantification. In this study, we tried to calibrate the power Doppler (PD) signal of two models of ultrasound machines by using a capillary-flow phantom. After flow velocity analysis in the perfusion cartridge at various injection rates (0.1-0.5 ml/s), we measured the signal count in the perfusion cartridge at various injection rates and pulse repetition frequencies (PRFs) by using PD, perfusing an ultrasound microbubble contrast agent diluted with normal saline to simulate human blood. Using the data from two models of ultrasound machines, the Aplio 500 (Toshiba) and the Avius (Hitachi Aloka), the quantitative PD (QPD) index [the summation of the colored pixels in a 1 cm × 1 cm rectangular region of interest (ROI)] was calculated with ImageJ (free software available online). We found a positive correlation between the injection rate and the flow velocity. For both the Aplio 500 and the Avius, we found negative correlations between the PRF and the QPD index when the flow velocity was constant, and a positive correlation between flow velocity and the QPD index at constant PRF. The equation relating the PRFs of the Aplio 500 and the Avius was: y = 0.023x + 0.36 [y = PRF of Avius (kHz), x = PRF of Aplio 500 (kHz)]. Our results suggest that signal calibration across different models of ultrasound machines is possible by adjustment of the PRF setting.
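    The two quantities in this record can be sketched directly (illustrative helper functions, not the authors' code): the QPD index as a colored-pixel count inside the ROI, and the reported linear PRF matching equation between the two machines.

```python
# Sketch of the quantities defined in the abstract.

def qpd_index(roi_pixels):
    """QPD index: count of colored (nonzero) pixels inside the ROI,
    here taken as a flat list of pixel values."""
    return sum(1 for p in roi_pixels if p)

def avius_prf_from_aplio(prf_aplio_khz):
    """Matched Avius PRF (kHz) for a given Aplio 500 PRF (kHz), using the
    linear relationship reported in the study: y = 0.023x + 0.36."""
    return 0.023 * prf_aplio_khz + 0.36
```

    For example, an Aplio 500 PRF of 10 kHz maps to an Avius PRF of about 0.59 kHz under the reported fit.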

  14. Detection of physiological noise in resting state fMRI using machine learning.

    PubMed

    Ash, Tom; Suckling, John; Walter, Martin; Ooi, Cinly; Tempelmann, Claus; Carpenter, Adrian; Williams, Guy

    2013-04-01

    We present a technique for predicting cardiac and respiratory phase, on a time-point-by-time-point basis, from fMRI image data. These predictions are useful for detrending effects of the physiological cycles from fMRI image data. We demonstrate the technique both when it can be trained on a subject's own data and when it cannot. The prediction scheme uses a multiclass support vector machine algorithm. Predictions are demonstrated to have a close fit to recorded physiological phase, with median Pearson correlation scores between recorded and predicted values of 0.99 for the best-case scenario (cardiac cycle trained on a subject's own data) down to 0.83 for the worst-case scenario (respiratory predictions trained on group data), compared with a chance correlation score of 0.70. When the predictions were used with RETROICOR, a popular physiological noise-removal tool, the effects were compared to those obtained using recorded phase values. Using Fourier transforms and seed-based correlation analysis, RETROICOR is shown to produce similar effects whether recorded physiological phase values are used or they are predicted with this technique. This was seen in similar levels of noise reduction in the same regions of the Fourier spectra, and changes in seed-based correlation scores in similar regions of the brain. This technique is useful in situations where data from direct monitoring of the cardiac and respiratory cycles are incomplete or absent, but researchers still wish to reduce this source of noise in the image data. Copyright © 2011 Wiley Periodicals, Inc.
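    The correlation score used above to compare recorded and predicted phase can be sketched as plain Pearson correlation (a simplification: phase is cyclic, so a full treatment may call for circular statistics):

```python
# Sketch: Pearson correlation between two equal-length sequences,
# e.g. recorded vs. predicted physiological phase values.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

    A score of 1.0 indicates a perfect linear fit; the study's 0.99 (best case) and 0.83 (worst case) sit against a chance level of 0.70.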

  15. Machine learning based analytics of micro-MRI trabecular bone microarchitecture and texture in type 1 Gaucher disease.

    PubMed

    Sharma, Gulshan B; Robertson, Douglas D; Laney, Dawn A; Gambello, Michael J; Terk, Michael

    2016-06-14

    Type 1 Gaucher disease (GD) is an autosomal recessive lysosomal storage disease affecting bone metabolism, structure and strength. Current bone assessment methods are not ideal. Semi-quantitative MRI scoring is unreliable, not standardized, and only evaluates bone marrow. DXA BMD is also used but is a limited predictor of bone fragility/fracture risk. Our purpose was to measure trabecular bone microarchitecture, as a biomarker of bone disease severity, in type 1 GD individuals with different GD genotypes, and to apply machine learning-based analytics to discriminate between GD patients and healthy individuals. Micro-MR imaging of the distal radius was performed on 20 type 1 GD patients and 10 healthy controls (HC). Fifteen stereological and textural measures (STM) were calculated from the MR images. General linear models demonstrated significant differences between GD and HC, and between GD genotypes. Stereological measures, the main contributors to the first two principal components (PCs), explained ~50% of the data variation and were significantly different between males and females. In subsequent PCs, textural measures were significantly different between GD patients and HC individuals. Textural measures also differed significantly between GD genotypes, and distinguished between GD patients with normal and pathologic DXA scores. PCA and SVM predictive analyses discriminated between GD and HC with a maximum accuracy of 73% and an area under the ROC curve of 0.79. Trabecular STM differences can be quantified between GD patients and HC, and between GD subtypes, using micro-MRI and machine learning-based analytics. Work is underway to expand this approach to evaluate GD disease burden and treatment efficacy. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Using Active Learning for Speeding up Calibration in Simulation Models.

    PubMed

    Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2016-07-01

    Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
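    The acceleration idea above can be reduced to a toy loop (the study trains artificial neural networks as the surrogate; here `surrogate_score` is any callable that ranks parameter combinations, and all names are illustrative): run the expensive simulator only on the highest-ranked combinations.

```python
# Toy sketch of surrogate-guided calibration: rank all candidate parameter
# combinations with a cheap surrogate, then simulate only the top `budget`
# of them, keeping the combinations that match observed data.

def active_calibration(combos, simulate, surrogate_score, budget):
    ranked = sorted(combos, key=surrogate_score, reverse=True)
    return [c for c in ranked[:budget] if simulate(c)]
```

    In the study's terms, a good surrogate let the authors find all 69 matching combinations after simulating only 5620 of 378 000 candidates; in a real active-learning loop the surrogate would also be retrained as simulation results accumulate.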

  17. Using Active Learning for Speeding up Calibration in Simulation Models

    PubMed Central

    Cevik, Mucahit; Ali Ergun, Mehmet; Stout, Natasha K.; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2015-01-01

    Background Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Methods Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We develop an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin Breast Cancer Simulation Model (UWBCS). Results In a recent study, calibration of the UWBCS required the evaluation of 378,000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378,000 combinations. Conclusion Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. PMID:26471190

  18. Application of advanced machine learning methods on resting-state fMRI network for identification of mild cognitive impairment and Alzheimer's disease.

    PubMed

    Khazaee, Ali; Ebrahimzadeh, Ata; Babajani-Feremi, Abbas

    2016-09-01

    The study of brain networks by resting-state functional magnetic resonance imaging (rs-fMRI) is a promising method for identifying patients with dementia from healthy controls (HC). Using graph theory, different aspects of the brain network can be efficiently characterized by calculating measures of integration and segregation. In this study, we combined a graph theoretical approach with advanced machine learning methods to study the brain network in 89 patients with mild cognitive impairment (MCI), 34 patients with Alzheimer's disease (AD), and 45 age-matched HC. The rs-fMRI connectivity matrix was constructed using a brain parcellation based on 264 putative functional areas. Using the optimal features extracted from the graph measures, we were able to accurately classify the three groups (i.e., HC, MCI, and AD) with an accuracy of 88.4%. We also investigated the performance of our proposed method for binary classification of one group (e.g., MCI) from the two other groups (e.g., HC and AD). The classification accuracies for identifying HC from AD and MCI, AD from HC and MCI, and MCI from HC and AD, were 87.3%, 97.5%, and 72.0%, respectively. In addition, results based on the parcellation of 264 regions were compared to those of the automated anatomical labeling (AAL) atlas, consisting of 90 regions. The accuracy of three-group classification using AAL degraded to 83.2%. Our results show that combining graph measures with a machine learning approach, on the basis of rs-fMRI connectivity analysis, may assist in the diagnosis of AD and MCI.
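    The integration and segregation measures mentioned above can be computed directly from a binarized connectivity matrix. This is a generic graph-theory sketch on synthetic data (node count and threshold are invented), not the authors' 264-region pipeline; the feature-selection and SVM stages are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for an rs-fMRI connectivity matrix over N regions (the study
# used 264 putative functional areas; N = 16 keeps the demo small).
N = 16
corr = np.corrcoef(rng.standard_normal((N, 200)))
adj = (np.abs(corr) > 0.08) & ~np.eye(N, dtype=bool)  # binarize, no self-loops

# Integration: global efficiency = mean inverse shortest-path length,
# with unit edge weights and Floyd-Warshall path lengths.
dist = np.where(adj, 1.0, np.inf)
np.fill_diagonal(dist, 0.0)
for k in range(N):
    dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
off_diag = dist[~np.eye(N, dtype=bool)]
global_efficiency = float(np.mean(1.0 / off_diag))  # 1/inf -> 0 if disconnected

# Segregation: mean clustering coefficient (closed triangles around a node
# divided by the number of possible triangles).
a = adj.astype(int)
deg = adj.sum(1)
closed = np.diag(a @ a @ a)
possible = deg * (deg - 1)
clustering = float(np.mean(np.divide(closed, possible,
                                     out=np.zeros(N, float),
                                     where=possible > 0)))
```

    In a full pipeline, such per-node and whole-network measures become the feature vector handed to a classifier.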

  19. SU-D-18C-02: Feasibility of Using a Short ASL Scan for Calibrating Cerebral Blood Flow Obtained From DSC-MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, P; Chang, T; Huang, K

    2014-06-01

    Purpose: This study aimed to evaluate the feasibility of using a short arterial spin labeling (ASL) scan for calibrating dynamic susceptibility contrast (DSC) MRI in a group of patients with internal carotid artery (ICA) stenosis. Methods: Six patients with unilateral ICA stenosis were enrolled in the study and scanned on a 3T clinical MRI scanner. ASL cerebral blood flow (CBF) maps were calculated by averaging different numbers of dynamic points (N = 1-45) acquired using a Q2TIPS sequence. For DSC perfusion analysis, an arterial input function was selected to derive the relative cerebral blood flow (rCBF) map and the delay (Tmax) map. A patient-specific calibration factor (CF) was calculated from the mean ASL- and DSC-CBF obtained from three different masks: (1) Tmax < 3 s; (2) mask 1 combined with a gray matter mask; (3) mask 2 with large vessels removed. One CF value was created for each number of averages by using each of the three masks for calibrating the DSC-CBF map. The CF value of the largest number of averages (NL = 45) was used to determine the acceptable range (<10%, <15%, and <20%) of CF values corresponding to the minimally acceptable number of averages (NS) for each patient. Results: Comparing DSC-CBF maps corrected by CF values of NL (CBFL) in the ACA, MCA and PCA territories, all masks resulted in smaller CBF on the ipsilateral side than the contralateral side of the MCA territory (p < .05). The values obtained from mask 1 were significantly different from those of mask 3 (p < .05). Using mask 3, the median values of NS were 4 (<10%), 2 (<15%) and 2 (<20%), with a worst-case (maximum NS) of 25, 4, and 4, respectively. Conclusion: This study found that reliable calibration of DSC-CBF can be achieved from a short pulsed ASL scan. We suggest using a mask based on the Tmax threshold, the inclusion of gray matter only, and the exclusion of large vessels when performing the calibration.
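    The calibration itself reduces to a single scale factor. The sketch below illustrates the idea on synthetic maps (all values and mask rules are invented): a quantitative ASL-CBF map rescales a relative DSC map via the mean ratio inside a mask that mimics mask 3 (low Tmax, gray matter only, large vessels excluded).

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins: an ASL CBF map in absolute units (ml/100g/min) and a
# DSC relative-CBF map that differs from truth by an unknown global scale.
shape = (64, 64)
true_cbf = rng.uniform(20, 80, shape)
asl_cbf = true_cbf + rng.normal(0, 5, shape)   # noisy but quantitative
dsc_rcbf = 0.013 * true_cbf                    # arbitrary units

# Calibration mask in the spirit of "mask 3": low Tmax (no severe delay),
# gray matter only, large vessels excluded.  All maps here are simulated.
tmax = rng.uniform(0, 6, shape)
gray = rng.random(shape) > 0.4
vessel = dsc_rcbf > np.percentile(dsc_rcbf, 98)
mask = (tmax < 3) & gray & ~vessel

# Patient-specific calibration factor: ratio of mean ASL-CBF to mean
# DSC-rCBF inside the mask, applied voxel-wise to the DSC map.
cf = asl_cbf[mask].mean() / dsc_rcbf[mask].mean()
dsc_cbf = cf * dsc_rcbf

mean_err = abs(dsc_cbf.mean() - true_cbf.mean()) / true_cbf.mean()
```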

  20. Comparison of fMRI data analysis by SPM99 on different operating systems.

    PubMed

    Shinagawa, Hideo; Honda, Ei-ichi; Ono, Takashi; Kurabayashi, Tohru; Ohyama, Kimie

    2004-09-01

    The hardware chosen for fMRI data analysis may depend on the platform already present in the laboratory or the supporting software. In this study, we ran SPM99 software on multiple platforms to examine whether we could analyze fMRI data by SPM99, and to compare their differences and limitations in processing fMRI data, which can be attributed to hardware capabilities. Six normal right-handed volunteers participated in a study of hand-grasping to obtain fMRI data. Each subject performed a run that consisted of 98 images. The run was measured using a gradient echo-type echo planar imaging sequence on a 1.5T apparatus with a head coil. We used several personal computer (PC), Unix and Linux machines to analyze the fMRI data. There were no differences in the results obtained on several PC, Unix and Linux machines. The only limitations in processing large amounts of the fMRI data were found using PC machines. This suggests that the results obtained with different machines were not affected by differences in hardware components, such as the CPU, memory and hard drive. Rather, it is likely that the limitations in analyzing a huge amount of the fMRI data were due to differences in the operating system (OS).

  1. Demystifying liver iron concentration measurements with MRI.

    PubMed

    Henninger, B

    2018-06-01

    This Editorial comment refers to the article: Non-invasive measurement of liver iron concentration using 3-Tesla magnetic resonance imaging: validation against biopsy. D'Assignies G, et al. Eur Radiol Nov 2017. • MRI is a widely accepted, reliable tool to determine liver iron concentration. • MRI cannot measure iron directly; it requires calibration. • Calibration curves for 3.0T are rare in the literature. • The study by D'Assignies et al. provides valuable information on this topic. • Evaluation of liver iron overload should no longer be restricted to experts.

  2. Calibration of helical tomotherapy machine using EPR/alanine dosimetry.

    PubMed

    Perichon, Nicolas; Garcia, Tristan; François, Pascal; Lourenço, Valérie; Lesven, Caroline; Bordy, Jean-Marc

    2011-03-01

    Current codes of practice for clinical reference dosimetry of high-energy photon beams in conventional radiotherapy recommend using a 10 x 10 cm2 square field, with the detector at a reference depth of 10 cm in water and 100 cm source-to-surface distance (SSD) (AAPM TG-51) or 100 cm source-to-axis distance (SAD) (IAEA TRS-398). However, the maximum field size of a helical tomotherapy (HT) machine is 40 x 5 cm2, defined at 85 cm SAD. These nonstandard conditions prevent a direct implementation of these protocols. The purpose of this study is twofold: to check the absorbed dose to water and dose rate calibration of a tomotherapy unit, and to check the accuracy of the tomotherapy treatment planning system (TPS) calculations for a specific test case. Both rely on electron paramagnetic resonance (EPR) with alanine as a transfer dosimeter between the Laboratoire National Henri Becquerel (LNHB) 60Co gamma-ray reference beam and the Institut Curie's HT beam. Irradiations performed in the LNHB reference 60Co gamma-ray beam allowed setting up the calibration method, which was then implemented and tested on the LNHB 6 MV linac x-ray beam, resulting in a deviation of 1.6% (at a 1% standard uncertainty) relative to the reference value determined with the standard IAEA TRS-398 protocol. HT beam dose rate estimation shows a difference of 2% from the value stated by the manufacturer, at a 2% standard uncertainty. A 4% deviation between the measured dose and the calculation from the tomotherapy TPS was found, originating from an inadequate representation of the phantom CT-scan values and, consequently, of the mass densities within the phantom used by the TPS. Once corrected, using Monte Carlo N-Particle simulations to validate the accuracy of this process, the difference between corrected TPS calculations and alanine measured dose values was then

  3. Unsupervised nonlinear dimensionality reduction machine learning methods applied to multiparametric MRI in cerebral ischemia: preliminary results

    NASA Astrophysics Data System (ADS)

    Parekh, Vishwa S.; Jacobs, Jeremy R.; Jacobs, Michael A.

    2014-03-01

    The evaluation and treatment of acute cerebral ischemia requires a technique that can determine the total area of tissue at risk for infarction using diagnostic magnetic resonance imaging (MRI) sequences. Typical MRI data sets consist of T1- and T2-weighted imaging (T1WI, T2WI) along with the advanced MRI parameters of diffusion-weighted imaging (DWI) and perfusion-weighted imaging (PWI). Each of these parameters has a distinct radiological-pathological meaning. For example, DWI interrogates the movement of water in the tissue and PWI gives an estimate of the blood flow; both are critical measures during the evolution of stroke. In order to integrate these data and give an estimate of the tissue at risk or damaged, we have developed advanced machine learning methods based on unsupervised nonlinear dimensionality reduction (NLDR) techniques. NLDR methods are a class of algorithms that use mathematically defined manifolds for statistical sampling of multidimensional classes to generate a discrimination rule of guaranteed statistical accuracy, and they can generate a two- or three-dimensional map that represents the prominent structures of the data and provides an embedded image of meaningful low-dimensional structures hidden in the high-dimensional observations. In this manuscript, we develop NLDR methods on high-dimensional MRI data sets of preclinical animals and clinical patients with stroke. On analyzing the performance of these methods, we observed a high degree of similarity between the multiparametric embedded images from NLDR methods and the ADC and perfusion maps. It was also observed that the embedded scattergram of abnormal (infarcted or at-risk) tissue can be visualized, providing a mechanism for automatic methods to delineate potential stroke volumes and early tissue at risk.
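    A minimal NLDR sketch of the idea above, on simulated voxels rather than real MRI: each voxel is a 4-vector (T1WI, T2WI, ADC, CBF) lying on a continuum from normal to at-risk tissue, and Isomap is used as a stand-in for the manuscript's manifold-learning methods. All tissue values are illustrative, not clinical.

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(4)

# Toy multiparametric MRI voxels: (T1WI, T2WI, ADC, CBF) per voxel, with a
# continuum from normal tissue to at-risk tissue (invented values).
normal = np.array([60.0, 55.0, 0.9, 50.0])
at_risk = np.array([58.0, 70.0, 0.5, 18.0])
t = rng.random(600)                       # 0 = normal, 1 = at risk
X = normal + t[:, None] * (at_risk - normal) + rng.normal(0, 1.5, (600, 4))

# Unsupervised NLDR: embed the 4-D voxels into a 2-D map.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

# The leading embedded coordinate should track the normal-to-at-risk axis,
# which is what makes the embedded scattergram useful for delineation.
r = abs(np.corrcoef(t, embedding[:, 0])[0, 1])
```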

  4. An RF dosimeter for independent SAR measurement in MRI scanners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qian, Di; Bottomley, Paul A.; El-Sharkawy, AbdEl-Monem M.

    2013-12-15

    Purpose: The monitoring and management of radio frequency (RF) exposure is critical for ensuring magnetic resonance imaging (MRI) safety. Commercial MRI scanners can overestimate specific absorption rates (SAR) and improperly restrict clinical MRI scans or the application of new MRI sequences, while underestimation of SAR can lead to tissue heating and thermal injury. Accurate scanner-independent RF dosimetry is essential for measuring actual exposure when SAR is critical for ensuring regulatory compliance and MRI safety, for establishing RF exposure while evaluating interventional leads and devices, and for routine MRI quality assessment by medical physicists. However, at present there are no scanner-independent SAR dosimeters. Methods: An SAR dosimeter with an RF transducer comprises two orthogonal, rectangular copper loops and a spherical MRI phantom. The transducer is placed in the magnet bore and calibrated to approximate the resistive loading of the scanner's whole-body birdcage RF coil for human subjects in Philips, GE and Siemens 3 tesla (3T) MRI scanners. The transducer loop reactances are adjusted to minimize interference with the transmit RF field (B₁) at the MRI frequency. Power from the RF transducer is sampled with a high dynamic range power monitor and recorded on a computer. The deposited power is calibrated and tested on eight different MRI scanners. Whole-body absorbed power vs weight and body mass index (BMI) is measured directly on 26 subjects. Results: A single linear calibration curve sufficed for RF dosimetry at 127.8 MHz on three different Philips and three GE 3T MRI scanners. An RF dosimeter operating at 123.2 MHz on two Siemens 3T scanners required a separate transducer and a slightly different calibration curve. Measurement accuracy was ∼3%. With the torso landmarked at the xiphoid, human adult whole-body absorbed power varied approximately linearly with patient weight and BMI. This indicates that whole-body torso SAR is
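    The single linear calibration curve reported above amounts to a first-order fit of deposited power against the transducer reading. The sketch below uses invented gains and noise (not the published transducer data) to show the calibrate-then-divide-by-mass structure of a whole-body SAR estimate.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical calibration data: power sampled from the RF transducer (W)
# versus known deposited whole-body power (W) for a set of phantom loads.
# The gain 14.2, offset 1.1, and noise level are all assumed values.
transducer_w = np.linspace(0.5, 8.0, 12)
deposited_w = 14.2 * transducer_w + 1.1 + rng.normal(0, 0.3, 12)

# A single linear calibration curve, as the abstract reports sufficed for
# the 127.8 MHz (Philips/GE 3T) scanners.
slope, intercept = np.polyfit(transducer_w, deposited_w, 1)

def whole_body_sar(transducer_reading_w, patient_kg):
    """Scanner-independent SAR estimate: calibrated power / patient mass."""
    return (slope * transducer_reading_w + intercept) / patient_kg

sar = whole_body_sar(4.0, 70.0)   # W/kg for a hypothetical 70 kg subject
```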

  5. Single slice US-MRI registration for neurosurgical MRI-guided US

    NASA Astrophysics Data System (ADS)

    Pardasani, Utsav; Baxter, John S. H.; Peters, Terry M.; Khan, Ali R.

    2016-03-01

    Image-based ultrasound to magnetic resonance image (US-MRI) registration can be an invaluable tool in image-guided neuronavigation systems. State-of-the-art commercial and research systems utilize image-based registration to assist in functions such as brain-shift correction, image fusion, and probe calibration. Since traditional US-MRI registration techniques use reconstructed US volumes or a series of tracked US slices, the functionality of this approach can be compromised by the limitations of optical or magnetic tracking systems in the neurosurgical operating room. These drawbacks include ergonomic issues, line-of-sight/magnetic interference, and maintenance of the sterile field. For those seeking a US vendor-agnostic system, these issues are compounded with the challenge of instrumenting the probe without permanent modification and calibrating the probe face to the tracking tool. To address these challenges, this paper explores the feasibility of a real-time US-MRI volume registration in a small virtual craniotomy site using a single slice. We employ the Linear Correlation of Linear Combination (LC2) similarity metric in its patch-based form on data from MNI's Brain Images for Tumour Evaluation (BITE) dataset as a PyCUDA enabled Python module in Slicer. By retaining the original orientation information, we are able to improve on the poses using this approach. To further assist the challenge of US-MRI registration, we also present the BOXLC2 metric which demonstrates a speed improvement to LC2, while retaining a similar accuracy in this context.
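    The LC2 idea is that local US intensity can be explained as a linear combination of MRI intensity and MRI gradient magnitude; the metric is the fraction of US variance that the best such combination explains. The sketch below implements that definition globally over one synthetic slice; a real implementation evaluates it patch-wise, as the paper's patch-based form does, and the simulated images are not BITE data.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic MRI slice and a pseudo-US derived from it: US intensity is
# modeled as a linear combination of MRI intensity and MRI gradient
# magnitude, which is the assumption behind the LC2 metric.
mri = rng.normal(100, 20, (48, 48))
gy, gx = np.gradient(mri)
us = 0.5 * mri + 1.2 * np.hypot(gx, gy) + rng.normal(0, 3, (48, 48))

def lc2(us_img, mri_img):
    """Linear Correlation of Linear Combination over one patch/region."""
    g = np.hypot(*np.gradient(mri_img))
    M = np.column_stack([mri_img.ravel(), g.ravel(), np.ones(mri_img.size)])
    coef, *_ = np.linalg.lstsq(M, us_img.ravel(), rcond=None)
    resid = us_img.ravel() - M @ coef
    return 1.0 - resid.var() / us_img.var()

score_aligned = lc2(us, mri)                       # correct pose
score_shifted = lc2(us, np.roll(mri, 5, axis=1))   # misaligned candidate
```

    Registration then amounts to searching over poses for the one that maximizes this similarity.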

  6. Automated Segmentation of the Parotid Gland Based on Atlas Registration and Machine Learning: A Longitudinal MRI Study in Head-and-Neck Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaofeng; Wu, Ning; Cheng, Guanghui

    Purpose: To develop an automated magnetic resonance imaging (MRI) parotid segmentation method to monitor radiation-induced parotid gland changes in patients after head and neck radiation therapy (RT). Methods and Materials: The proposed method combines the atlas registration method, which captures the global variation of anatomy, with a machine learning technology, which captures the local statistical features, to automatically segment the parotid glands from the MRIs. The segmentation method consists of 3 major steps. First, an atlas (pre-RT MRI and manually contoured parotid gland mask) is built for each patient. A hybrid deformable image registration is used to map the pre-RT MRI to the post-RT MRI, and the transformation is applied to the pre-RT parotid volume. Second, the kernel support vector machine (SVM) is trained with the subject-specific atlas pair consisting of multiple features (intensity, gradient, and others) from the aligned pre-RT MRI and the transformed parotid volume. Third, the well-trained kernel SVM is used to differentiate the parotid from surrounding tissues in the post-RT MRIs by statistically matching multiple texture features. A longitudinal study of 15 patients undergoing head and neck RT was conducted: baseline MRI was acquired prior to RT, and the post-RT MRIs were acquired at 3-, 6-, and 12-month follow-up examinations. The resulting segmentations were compared with the physicians' manual contours. Results: Successful parotid segmentation was achieved for all 15 patients (42 post-RT MRIs). The average percentage of volume differences between the automated segmentations and those of the physicians' manual contours were 7.98% for the left parotid and 8.12% for the right parotid. The average volume overlap was 91.1% ± 1.6% for the left parotid and 90.5% ± 2.4% for the right parotid. The parotid gland volume reduction at follow-up was 25% at 3 months, 27% at 6 months, and 16% at 12 months. Conclusions: We have validated

  7. Automated segmentation of the parotid gland based on atlas registration and machine learning: a longitudinal MRI study in head-and-neck radiation therapy.

    PubMed

    Yang, Xiaofeng; Wu, Ning; Cheng, Guanghui; Zhou, Zhengyang; Yu, David S; Beitler, Jonathan J; Curran, Walter J; Liu, Tian

    2014-12-01

    To develop an automated magnetic resonance imaging (MRI) parotid segmentation method to monitor radiation-induced parotid gland changes in patients after head and neck radiation therapy (RT). The proposed method combines the atlas registration method, which captures the global variation of anatomy, with a machine learning technology, which captures the local statistical features, to automatically segment the parotid glands from the MRIs. The segmentation method consists of 3 major steps. First, an atlas (pre-RT MRI and manually contoured parotid gland mask) is built for each patient. A hybrid deformable image registration is used to map the pre-RT MRI to the post-RT MRI, and the transformation is applied to the pre-RT parotid volume. Second, the kernel support vector machine (SVM) is trained with the subject-specific atlas pair consisting of multiple features (intensity, gradient, and others) from the aligned pre-RT MRI and the transformed parotid volume. Third, the well-trained kernel SVM is used to differentiate the parotid from surrounding tissues in the post-RT MRIs by statistically matching multiple texture features. A longitudinal study of 15 patients undergoing head and neck RT was conducted: baseline MRI was acquired prior to RT, and the post-RT MRIs were acquired at 3-, 6-, and 12-month follow-up examinations. The resulting segmentations were compared with the physicians' manual contours. Successful parotid segmentation was achieved for all 15 patients (42 post-RT MRIs). The average percentage of volume differences between the automated segmentations and those of the physicians' manual contours were 7.98% for the left parotid and 8.12% for the right parotid. The average volume overlap was 91.1% ± 1.6% for the left parotid and 90.5% ± 2.4% for the right parotid. The parotid gland volume reduction at follow-up was 25% at 3 months, 27% at 6 months, and 16% at 12 months. We have validated our automated parotid segmentation algorithm in a longitudinal study
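    Steps 2-3 of the pipeline above (train a kernel SVM on the subject-specific atlas pair, then label the follow-up scan) can be sketched on a toy 2-D slice. Everything here is synthetic, and the feature set is reduced to intensity plus gradient magnitude; it is an illustration of the idea, not the validated method.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)

# Synthetic one-slice stand-in: a bright elliptical "parotid" on a darker
# background; the atlas-aligned pre-RT slice provides the training labels.
def make_slice(shift):
    yy, xx = np.mgrid[0:48, 0:48]
    mask = ((xx - 24 - shift) ** 2 / 90 + (yy - 24) ** 2 / 45) < 1
    img = np.where(mask, 150.0, 80.0) + rng.normal(0, 8, (48, 48))
    return img, mask

pre_img, pre_mask = make_slice(shift=0)     # atlas: image + contoured gland
post_img, post_mask = make_slice(shift=2)   # follow-up scan to segment

def features(img):
    gy, gx = np.gradient(img)               # intensity + gradient features
    return np.column_stack([img.ravel(), np.hypot(gx, gy).ravel()])

# Kernel SVM trained on the subject-specific atlas pair, then used to
# label parotid vs background voxels in the post-RT image.
svm = SVC(kernel="rbf", gamma="scale").fit(features(pre_img), pre_mask.ravel())
pred = svm.predict(features(post_img)).reshape(48, 48)

# Dice overlap against the (synthetic) ground-truth follow-up contour.
dice = 2 * (pred & post_mask).sum() / (pred.sum() + post_mask.sum())
```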

  8. First steps in using machine learning on fMRI data to predict intrusive memories of traumatic film footage

    PubMed Central

    Clark, Ian A.; Niehaus, Katherine E.; Duff, Eugene P.; Di Simplicio, Martina C.; Clifford, Gari D.; Smith, Stephen M.; Mackay, Clare E.; Woolrich, Mark W.; Holmes, Emily A.

    2014-01-01

    After psychological trauma, why do only some parts of the traumatic event return as intrusive memories while others do not? Intrusive memories are key to cognitive behavioural treatment for post-traumatic stress disorder, and an aetiological understanding is warranted. We present here analyses using multivariate pattern analysis (MVPA) and a machine learning classifier to investigate whether peri-traumatic brain activation was able to predict later intrusive memories (i.e. before they had happened). To provide a methodological basis for understanding the context of the current results, we first show how functional magnetic resonance imaging (fMRI) during an experimental analogue of trauma (a trauma film), acquired via a prospective event-related design, was able to capture an individual's later intrusive memories. Results showed widespread increases in brain activation at encoding when viewing a scene in the scanner that would later return as an intrusive memory in the real world. These fMRI results were replicated in a second study. While traditional mass univariate regression analysis highlighted an association between brain processing and symptomatology, this is not the same as prediction. Using MVPA and a machine learning classifier, it was possible to predict later intrusive memories across participants with 68% accuracy, and within a participant with 97% accuracy; i.e. the classifier could identify, out of multiple scenes, those that would later return as an intrusive memory. We also report here the brain networks key to intrusive memory prediction. MVPA opens the possibility of decoding brain activity to reconstruct idiosyncratic cognitive events with relevance to understanding and predicting mental health symptoms. PMID:25151915
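    An MVPA prediction analysis of this kind has a simple skeleton: one multivoxel pattern per event, a binary label for whether the event later became an intrusion, and a cross-validated linear classifier so that accuracy reflects prediction rather than mere association. The data below are simulated (invented scene/voxel counts and effect size), not the study's fMRI.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(10)

# Illustrative MVPA setup: one row per film scene, holding that scene's
# voxel-wise activation pattern at encoding; label 1 marks scenes that
# later returned as intrusive memories.
n_scenes, n_voxels = 120, 500
y = (rng.random(n_scenes) < 0.5).astype(int)
X = rng.normal(0.0, 1.0, (n_scenes, n_voxels))
X[y == 1, :40] += 0.5             # weak signal spread over many voxels

# A linear classifier on the multivariate pattern, scored with 5-fold
# cross-validation (regularization strength is an arbitrary choice here).
clf = LinearSVC(C=0.01, dual=False)
acc = cross_val_score(clf, X, y, cv=5).mean()
```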

  9. The prototype of high stiffness load cell for Rockwell hardness testing machine calibration according to ISO 6508-2:2015

    NASA Astrophysics Data System (ADS)

    Pakkratoke, M.; Sanponpute, T.

    2017-09-01

    The penetration depth in Rockwell hardness testing is normally not more than 0.260 mm. A commercial load cell cannot achieve the force calibration proposed in ISO 6508-2 [1]. For this reason, a high stiffness load cell (HSL) was fabricated. Its key advantage is a deformation of less than 0.020 mm at the maximum applied load of 150 kgf. The HSL prototype was designed on the principle of direct compression and then verified with finite element analysis (FEA). The results showed that the maximum deformation at capacity was lower than 0.012 mm.

  10. Portable MRI developed at Los Alamos

    ScienceCinema

    Espy, Michelle

    2018-02-14

    Scientists at Los Alamos National Laboratory are developing an ultra-low-field Magnetic Resonance Imaging (MRI) system that could be low-power and lightweight enough for forward deployment on the battlefield and to field hospitals in the World's poorest regions. "MRI technology is a powerful medical diagnostic tool," said Michelle Espy, the Battlefield MRI (bMRI) project leader, "ideally suited for imaging soft-tissue injury, particularly to the brain." But hospital-based MRI devices are big and expensive, and require considerable infrastructure, such as large quantities of cryogens like liquid nitrogen and helium, and they typically use a large amount of energy. "Standard MRI machines just can't go everywhere," said Espy. "Soldiers wounded in battle usually have to be flown to a large hospital and people in emerging nations just don't have access to MRI at all. We've been in contact with doctors who routinely work in the Third World and report that MRI would be extremely valuable in treating pediatric encephalopathy, and other serious diseases in children." So the Los Alamos team started thinking about a way to make an MRI device that could be relatively easy to transport, set up, and use in an unconventional setting. Conventional MRI machines use very large magnetic fields that align the protons in water molecules to then create magnetic resonance signals, which are detected by the machine and turned into images. The large magnetic fields create exceptionally detailed images, but they are difficult and expensive to make. Espy and her team wanted to see if images of sufficient quality could be made with ultra-low-magnetic fields, similar in strength to the Earth's magnetic field. To achieve images at such low fields they use exquisitely sensitive detectors called Superconducting Quantum Interference Devices, or SQUIDs. SQUIDs are among the most sensitive magnetic field detectors available, so interference with the signal is the primary stumbling block. 
"SQUIDs are

  11. Portable MRI developed at Los Alamos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Espy, Michelle

    Scientists at Los Alamos National Laboratory are developing an ultra-low-field Magnetic Resonance Imaging (MRI) system that could be low-power and lightweight enough for forward deployment on the battlefield and to field hospitals in the World's poorest regions. "MRI technology is a powerful medical diagnostic tool," said Michelle Espy, the Battlefield MRI (bMRI) project leader, "ideally suited for imaging soft-tissue injury, particularly to the brain." But hospital-based MRI devices are big and expensive, and require considerable infrastructure, such as large quantities of cryogens like liquid nitrogen and helium, and they typically use a large amount of energy. "Standard MRI machines just can't go everywhere," said Espy. "Soldiers wounded in battle usually have to be flown to a large hospital and people in emerging nations just don't have access to MRI at all. We've been in contact with doctors who routinely work in the Third World and report that MRI would be extremely valuable in treating pediatric encephalopathy, and other serious diseases in children." So the Los Alamos team started thinking about a way to make an MRI device that could be relatively easy to transport, set up, and use in an unconventional setting. Conventional MRI machines use very large magnetic fields that align the protons in water molecules to then create magnetic resonance signals, which are detected by the machine and turned into images. The large magnetic fields create exceptionally detailed images, but they are difficult and expensive to make. Espy and her team wanted to see if images of sufficient quality could be made with ultra-low-magnetic fields, similar in strength to the Earth's magnetic field. To achieve images at such low fields they use exquisitely sensitive detectors called Superconducting Quantum Interference Devices, or SQUIDs. SQUIDs are among the most sensitive magnetic field detectors available, so interference with the signal is the primary stumbling block

  12. Application of coordinate transform on ball plate calibration

    NASA Astrophysics Data System (ADS)

    Wei, Hengzheng; Wang, Weinong; Ren, Guoying; Pei, Limei

    2015-02-01

    For the ball plate calibration method using a coordinate measuring machine (CMM) equipped with a laser interferometer, it is essential to adjust the ball plate parallel to the direction of the laser beam, which is very time-consuming. To solve this problem, a method based on the coordinate transformation between the machine system and the object system is presented. With the coordinates of the ball plate's fixed points measured in both the object and machine systems, the transformation matrix between the coordinate systems is calculated. The laser interferometer measurement error due to the placement of the ball plate can then be corrected with this transformation matrix. Experimental results indicate that this method is consistent with the manual adjustment method while avoiding the complexity of adjusting the ball plate. It can also be applied to ball beam calibration.
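    The machine-object transformation can be estimated from paired fixed-point coordinates with an orthogonal Procrustes (Kabsch) fit. The sketch below uses invented coordinates and a known ground-truth pose so the recovered rotation and translation can be checked against the machine-system measurements; it illustrates the general technique, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(6)

# Fixed points measured in the object (ball-plate) system and the same
# points in the machine (CMM) system, related by an unknown rotation R
# and translation t; all coordinates are illustrative (mm).
obj_pts = rng.uniform(0, 300, (6, 3))
theta = np.deg2rad(2.5)                               # slight misalignment
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([12.0, -4.0, 1.5])
machine_pts = obj_pts @ R_true.T + t_true

# Kabsch/Procrustes estimate of the machine<-object transform from the
# paired fixed-point coordinates: center both sets, SVD the covariance.
po = obj_pts - obj_pts.mean(0)
pm = machine_pts - machine_pts.mean(0)
U, _, Vt = np.linalg.svd(pm.T @ po)
D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # no reflections
R = U @ D @ Vt
t = machine_pts.mean(0) - R @ obj_pts.mean(0)

# Interferometer data can now be corrected for the plate's placement
# instead of physically aligning the plate to the beam.
residual = np.max(np.abs(obj_pts @ R.T + t - machine_pts))
```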

  13. Z-correction, a method for achieving ultraprecise self-calibration on large area coordinate measurement machines for photomasks

    NASA Astrophysics Data System (ADS)

    Ekberg, Peter; Stiblert, Lars; Mattsson, Lars

    2014-05-01

    High-quality photomasks are a prerequisite for the production of flat-panel TVs, tablets and other kinds of high-resolution displays. In recent years the demands on resolution have accelerated: today the high-definition (HD) standard of 1920 × 1080 pixels is well established, and the next-generation ultra-high-definition (UHD, or 4K) display is already entering the market. Highly advanced mask writers are used to produce the photomasks needed for the production of such displays. The dimensional tolerance in X and Y on absolute pattern placement on these photomasks, with sizes of square meters, has been in the range of 200-300 nm (3σ), but is now on the way to <150 nm (3σ). To verify these photomasks, 2D ultra-precision coordinate measurement machines are used, with even tighter tolerance requirements. The metrology tool MMS15000 is today the world-standard tool for the verification of large area photomasks. This paper presents a method called Z-correction that has been developed to improve the absolute X, Y placement accuracy of features on the photomask in the writing process. Z-correction is also a prerequisite for achieving X and Y uncertainty levels <90 nm (3σ) in the self-calibration process of the MMS15000 stage area of 1.4 × 1.5 m2. At uncertainty specifications below 200 nm (3σ) over such a large area, the calibration object used, here an 8-16 mm thick quartz plate of approximately one square meter, cannot be treated as a rigid body. The reason is that the absolute shape of the plate is affected by gravity and will therefore not be the same at different places on the measurement machine stage when it is used in the self-calibration process. This mechanical deformation stretches or compresses the top surface (i.e. the image side) of the plate where the pattern resides, and therefore spatially deforms the mask pattern in the X- and Y-directions. Errors due

  14. Control-group feature normalization for multivariate pattern analysis of structural MRI data using the support vector machine.

    PubMed

    Linn, Kristin A; Gaonkar, Bilwaj; Satterthwaite, Theodore D; Doshi, Jimit; Davatzikos, Christos; Shinohara, Russell T

    2016-05-15

    Normalization of feature vector values is a common practice in machine learning. Generally, each feature is either scaled to the unit hypercube or standardized to zero mean and unit variance. Classification decisions based on support vector machines (SVMs) or other methods are sensitive to the specific normalization used on the features. In the context of multivariate pattern analysis using neuroimaging data, standardization effectively up- and down-weights features based on their individual variability. Since the standard approach uses the entire data set to guide the normalization, it utilizes the total variability of these features. This total variation is inevitably dependent on the amount of marginal separation between groups. Thus, such a normalization may attenuate the separability of the data in high-dimensional space. In this work we propose an alternative approach that uses an estimate of the control-group standard deviation to normalize features before training. We study our proposed approach in the context of group classification using structural MRI data. We show that control-based normalization leads to better reproducibility of estimated multivariate disease patterns and improves classifier performance in many cases. Copyright © 2016 Elsevier Inc. All rights reserved.
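    The contrast between pooled and control-based normalization is easy to demonstrate. The sketch below uses simulated features (invented group sizes and effect size, not the study's MRI data) to show that dividing by the pooled SD shrinks exactly the features that separate the groups, while control-group statistics preserve the separation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative feature matrix: rows are subjects, columns are imaging
# features; patients differ from controls on the first three features.
controls = rng.normal(0.0, 1.0, (40, 10))
patients = rng.normal(0.0, 1.0, (40, 10))
patients[:, :3] += 1.5            # group effect also inflates the pooled SD
X = np.vstack([controls, patients])

# Standard z-scoring uses the pooled SD, which is inflated by the very group
# separation we want to detect; the control-based alternative normalizes
# every subject by the control group's mean and SD only.
z_pooled = (X - X.mean(0)) / X.std(0, ddof=1)
z_ctrl = (X - controls.mean(0)) / controls.std(0, ddof=1)

def group_sep(z):
    # Mean patient-minus-control difference over the affected features.
    return float(z[40:, :3].mean() - z[:40, :3].mean())

sep_pooled, sep_ctrl = group_sep(z_pooled), group_sep(z_ctrl)
```

    `sep_ctrl` exceeds `sep_pooled` because the pooled SD of an affected feature includes the between-group variance itself.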

  15. Low-cost precision rotary index calibration

    NASA Astrophysics Data System (ADS)

    Ng, T. W.; Lim, T. S.

    2005-08-01

    The traditional method for calibrating angular indexing repeatability of rotary axes on machine tools and measuring equipment is with a precision polygon (usually 12 sided) and an autocollimator or angular interferometer. Such a setup is typically expensive. Here, we propose a far more cost-effective approach that uses just a laser, diffractive optical element, and CCD camera. We show that significantly high accuracies can be achieved for angular index calibration.

  16. Stable Atlas-based Mapped Prior (STAMP) machine-learning segmentation for multicenter large-scale MRI data.

    PubMed

    Kim, Eun Young; Magnotta, Vincent A; Liu, Dawei; Johnson, Hans J

    2014-09-01

    Machine learning (ML)-based segmentation methods are a common technique in the medical image processing field. Although numerous research groups have investigated ML-based segmentation frameworks, questions remain about the performance variability attributable to the choice of two key components: the ML algorithm and the intensity normalization. This investigation reveals that the choice of those elements plays a major part in determining segmentation accuracy and generalizability. The approach used in this study evaluates the relative benefits of the two elements within a subcortical MRI segmentation framework. Experiments were conducted to contrast eight machine-learning algorithm configurations and 11 normalization strategies for our brain MR segmentation framework. For the intensity normalization, a Stable Atlas-based Mapped Prior (STAMP) was utilized to better account for contrast along the boundaries of structures. Comparing the eight machine learning algorithms on down-sampled segmentation MR data, a significant improvement was obtained using ensemble-based ML algorithms (i.e., random forest) or ANN algorithms. Further investigation of these two algorithms revealed that the random forest results provided exceptionally good agreement with manual delineations by experts. Additional experiments showed that STAMP-based intensity normalization also improved the robustness of segmentation for multicenter data sets. The constructed framework obtained good multicenter reliability and was successfully applied to a large multicenter MR data set (n>3000). Less than 10% of automated segmentations were recommended for minimal expert intervention. These results demonstrate the feasibility of using ML-based segmentation tools for processing large amounts of multicenter MR images. We demonstrated dramatically different result profiles in segmentation accuracy according to the choice of ML algorithm and intensity

  17. A machine learning calibration model using random forests to improve sensor performance for lower-cost air quality monitoring

    NASA Astrophysics Data System (ADS)

    Zimmerman, Naomi; Presto, Albert A.; Kumar, Sriniwasa P. N.; Gu, Jason; Hauryliuk, Aliaksei; Robinson, Ellis S.; Robinson, Allen L.; Subramanian, R.

    2018-01-01

    Low-cost sensing strategies hold the promise of denser air quality monitoring networks, which could significantly improve our understanding of personal air pollution exposure. Additionally, low-cost air quality sensors could be deployed to areas where limited monitoring exists. However, low-cost sensors are frequently sensitive to environmental conditions and pollutant cross-sensitivities, which have historically been poorly addressed by laboratory calibrations, limiting their utility for monitoring. In this study, we investigated different calibration models for the Real-time Affordable Multi-Pollutant (RAMP) sensor package, which measures CO, NO2, O3, and CO2. We explored three methods: (1) laboratory univariate linear regression, (2) empirical multiple linear regression, and (3) machine-learning-based calibration models using random forests (RF). Calibration models were developed for 16-19 RAMP monitors (varied by pollutant) using training and testing windows spanning August 2016 through February 2017 in Pittsburgh, PA, US. The random forest models matched (CO) or significantly outperformed (NO2, CO2, O3) the other calibration models, and their accuracy and precision were robust over time for testing windows of up to 16 weeks. Following calibration, average mean absolute error on the testing data set from the random forest models was 38 ppb for CO (14 % relative error), 10 ppm for CO2 (2 % relative error), 3.5 ppb for NO2 (29 % relative error), and 3.4 ppb for O3 (15 % relative error), and Pearson r versus the reference monitors exceeded 0.8 for most units. Model performance is explored in detail, including a quantification of model variable importance, accuracy across different concentration ranges, and performance in a range of monitoring contexts including the National Ambient Air Quality Standards (NAAQS) and the US EPA Air Sensors Guidebook recommendations of minimum data quality for personal exposure measurement. A key strength of the RF approach is that
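A minimal sketch of the random-forest calibration idea, under the assumption of a synthetic sensor whose raw signal carries a temperature cross-sensitivity (the RAMP data and fitted models themselves are not reproduced here). It contrasts method (1), a univariate linear calibration, with method (3), an RF model that also sees temperature:

```python
# Sketch: random-forest calibration of a low-cost gas sensor whose raw signal
# drifts with temperature, versus a univariate linear calibration.
# All data are synthetic; this illustrates the approach, not the RAMP results.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 1000
true_ppb = rng.uniform(0, 100, n)          # reference monitor concentration
temp = rng.uniform(0, 35, n)               # ambient temperature, deg C
# Raw sensor: linear response plus a temperature cross-sensitivity and noise.
raw = 0.8 * true_ppb + 1.5 * temp + rng.normal(0, 2.0, n)

train, test = slice(0, 700), slice(700, None)

# (1) univariate linear calibration: raw signal only
lin = LinearRegression().fit(raw[train, None], true_ppb[train])
mae_lin = np.abs(lin.predict(raw[test, None]) - true_ppb[test]).mean()

# (3) random-forest calibration: raw signal + temperature as predictors
X = np.column_stack([raw, temp])
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X[train], true_ppb[train])
mae_rf = np.abs(rf.predict(X[test]) - true_ppb[test]).mean()

print(mae_lin, mae_rf)
```

Because the RF model can learn the cross-sensitivity, its held-out mean absolute error is far below that of the univariate fit, mirroring the qualitative finding above.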

  18. Machine-learning in grading of gliomas based on multi-parametric magnetic resonance imaging at 3T.

    PubMed

    Citak-Er, Fusun; Firat, Zeynep; Kovanlikaya, Ilhami; Ture, Ugur; Ozturk-Isik, Esin

    2018-06-15

    The objective of this study was to assess the contribution of multi-parametric (mp) magnetic resonance imaging (MRI) quantitative features in the machine learning-based grading of gliomas with a multi-region-of-interest approach. Forty-three patients who were newly diagnosed as having a glioma were included in this study. The patients were scanned prior to any therapy using a standard brain tumor magnetic resonance (MR) imaging protocol that included T1- and T2-weighted, diffusion-weighted, diffusion tensor, MR perfusion and MR spectroscopic imaging. Three different regions-of-interest were drawn for each subject to encompass the tumor, the immediate tumor periphery, and distant peritumoral edema/normal tissue. The normalized mp-MRI features were used to build machine-learning models for differentiating low-grade gliomas (WHO grades I and II) from high-grade gliomas (WHO grades III and IV). In order to assess the contribution of regional mp-MRI quantitative features to the classification models, a support vector machine-based recursive feature elimination method was applied prior to classification. A machine-learning model based on a support vector machine algorithm with linear kernel achieved an accuracy of 93.0%, a specificity of 86.7%, and a sensitivity of 96.4% for the grading of gliomas using ten-fold cross validation based on the proposed subset of the mp-MRI features. In this study, machine learning based on multi-regional and multi-parametric MRI data has proven to be an important tool in grading glial tumors accurately, even in this limited patient population. Future studies are needed to investigate the use of machine learning algorithms for brain tumor classification in a larger patient cohort. Copyright © 2018. Published by Elsevier Ltd.
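The SVM-based recursive feature elimination (RFE) step followed by a linear SVM can be sketched as below. This is a hedged, synthetic-data illustration of the technique named in the abstract, not the study's mp-MRI features or results; the number of informative features and class shift are made up.

```python
# Sketch of SVM-based recursive feature elimination followed by a linear SVM,
# evaluated with 10-fold cross validation. Synthetic stand-in data.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 120
# 20 features, of which only the first 4 carry class information.
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 20))
X[:, :4] += 1.8 * y[:, None]

svm = SVC(kernel="linear")
selector = RFE(estimator=svm, n_features_to_select=4, step=1)
model = make_pipeline(selector, svm)

acc = cross_val_score(model, X, y, cv=10).mean()
selector.fit(X, y)
chosen = np.flatnonzero(selector.support_)   # indices of retained features
print(acc, chosen)
```

RFE repeatedly drops the feature with the smallest linear-SVM weight, so with a clear signal it recovers the informative subset before the final classifier is trained.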

  19. Deriving stable multi-parametric MRI radiomic signatures in the presence of inter-scanner variations: survival prediction of glioblastoma via imaging pattern analysis and machine learning techniques

    NASA Astrophysics Data System (ADS)

    Rathore, Saima; Bakas, Spyridon; Akbari, Hamed; Shukla, Gaurav; Rozycki, Martin; Davatzikos, Christos

    2018-02-01

    There is mounting evidence that assessment of multi-parametric magnetic resonance imaging (mpMRI) profiles can noninvasively predict survival in many cancers, including glioblastoma. The clinical adoption of mpMRI as a prognostic biomarker, however, depends on its applicability in a multicenter setting, which is hampered by inter-scanner variations. This concept has not been addressed in existing studies. We developed a comprehensive set of within-patient normalized tumor features such as intensity profile, shape, volume, and tumor location, extracted from multicenter mpMRI of two large (npatients=353) cohorts, comprising the Hospital of the University of Pennsylvania (HUP, npatients=252, nscanners=3) and The Cancer Imaging Archive (TCIA, npatients=101, nscanners=8). Inter-scanner harmonization was conducted by normalizing the tumor intensity profile with that of the contralateral healthy tissue. The extracted features were integrated by support vector machines to derive survival predictors. The predictors' generalizability was evaluated within each cohort by two cross-validation configurations: i) pooled/scanner-agnostic, and ii) across scanners (training in multiple scanners and testing in one). The median survival in each configuration was used as a cut-off to divide patients into long- and short-survivors. Accuracy (ACC) for predicting long- versus short-survivors was ACCpooled=79.06% and ACCacross=73.55% in the HUP dataset, and ACCpooled=84.7% and ACCacross=74.76% in the TCIA dataset. The hazard ratio at 95% confidence interval was 3.87 (2.87-5.20, P<0.001) and 6.65 (3.57-12.36, P<0.001) for the HUP and TCIA datasets, respectively. Our findings suggest that adequate data normalization coupled with machine learning classification allows robust prediction of survival estimates on mpMRI acquired by multiple scanners.
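Two ingredients of the pipeline above, within-patient normalization against contralateral healthy tissue and the median-survival cut-off, can be sketched as follows. The scanner intensity scales and survival numbers are invented for illustration; this is not the HUP/TCIA data.

```python
# Sketch of (i) harmonizing tumor intensities by z-scoring against
# contralateral healthy tissue of the same scan, and (ii) dichotomizing
# survival at the median to define long/short survivors. Synthetic numbers.
import numpy as np

rng = np.random.default_rng(3)

def harmonize(tumor_vox, healthy_vox):
    """Within-patient normalization: express tumor intensities in units of
    the contralateral healthy-tissue distribution (a z-score)."""
    mu, sigma = healthy_vox.mean(), healthy_vox.std()
    return (tumor_vox - mu) / sigma

# Two "scanners" with different global intensity scales and offsets, but the
# same underlying tumor-to-healthy contrast.
tumor_a = rng.normal(1200, 80, 2000)
healthy_a = rng.normal(1000, 60, 2000)
tumor_b = rng.normal(480, 32, 2000)      # scanner B: everything scaled by 0.4
healthy_b = rng.normal(400, 24, 2000)

za = harmonize(tumor_a, healthy_a)
zb = harmonize(tumor_b, healthy_b)
# After harmonization the two scanners' tumor profiles become comparable.
print(za.mean(), zb.mean())

# Median-survival cut-off for long- vs. short-survivor labels.
survival_days = rng.integers(50, 900, 40)
labels = (survival_days > np.median(survival_days)).astype(int)
print(labels.sum(), len(labels))
```

The z-scored tumor profiles agree across "scanners" even though the raw intensities differ by more than a factor of two, which is the point of the harmonization step.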

  20. Mapping the pharmacological modulation of brain oxygen metabolism: The effects of caffeine on absolute CMRO2 measured using dual calibrated fMRI.

    PubMed

    Merola, Alberto; Germuska, Michael A; Warnert, Esther Ah; Richmond, Lewys; Helme, Daniel; Khot, Sharmila; Murphy, Kevin; Rogers, Peter J; Hall, Judith E; Wise, Richard G

    2017-07-15

    This study aims to map the acute effects of caffeine ingestion on grey matter oxygen metabolism and haemodynamics with a novel MRI method. Sixteen healthy caffeine consumers (8 males, age=24.7±5.1) were recruited to this randomised, double-blind, placebo-controlled study. Each participant was scanned on two days before and after the delivery of an oral caffeine (250mg) or placebo capsule. Our measurements were obtained with a newly proposed estimation approach applied to data from a dual calibration fMRI experiment that uses hypercapnia and hyperoxia to modulate brain blood flow and oxygenation. Estimates were based on a forward model that describes analytically the contributions of cerebral blood flow (CBF) and of the measured end-tidal partial pressures of CO2 and O2 to the acquired dual-echo GRE signal. The method allows the estimation of grey matter maps of: oxygen extraction fraction (OEF), CBF, CBF-related cerebrovascular reactivity (CVR) and cerebral metabolic rate of oxygen consumption (CMRO2). Other estimates from a multi-inversion-time ASL acquisition (mTI-ASL), salivary samples of the caffeine concentration and behavioural measurements are also reported. We observed significant differences between caffeine and placebo on average across grey matter, with OEF showing an increase of 15.6% (SEM±4.9%, p<0.05) with caffeine, while CBF and CMRO2 showed differences of -30.4% (SEM±1.6%, p<0.01) and -18.6% (SEM±2.9%, p<0.01) respectively with caffeine administration. The reduction in oxygen metabolism found is somewhat unexpected, but consistent with a hypothesis of decreased energetic demand, supported by previous electrophysiological studies reporting reductions in spectral power with EEG. Moreover, the maps of the estimated physiological parameters illustrate the spatial distribution of changes across grey matter, enabling us to localise the effects of caffeine with voxel-wise resolution. CBF changes were widespread as reported by previous findings, while
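The reported group-average changes are roughly mutually consistent under the Fick relation, CMRO2 proportional to CBF × OEF (with arterial O2 content assumed unchanged), bearing in mind that a group average of a product need not equal the product of group averages. A quick arithmetic check:

```python
# Consistency check of the reported caffeine effects under the Fick principle:
# CMRO2 is proportional to CBF * OEF (arterial O2 content assumed constant).
d_oef = +0.156   # +15.6 % change in OEF
d_cbf = -0.304   # -30.4 % change in CBF

d_cmro2_pred = (1 + d_oef) * (1 + d_cbf) - 1
print(f"{d_cmro2_pred:+.1%}")   # about -19.5 %, close to the reported -18.6 %
```

The small gap between -19.5% and -18.6% is expected because the published figures are averages over subjects and voxels.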

  1. OT calibration and service maintenance manual.

    DOT National Transportation Integrated Search

    2012-01-01

    The machine conditions, as well as the values of the calibration and control parameters, may determine the quality of each test result obtained. In order to maintain consistency and accuracy, the conditions, performance and measurements of an OT must be...

  2. Force supplementary comparison at SIM (SIM.M.F-S5), compression testing machines calibration, up to 200 kN

    NASA Astrophysics Data System (ADS)

    Cárdenas Moctezuma, A.; Torres Guzmán, J. C.

    2016-01-01

    CENAM, through its Force and Pressure Division, organized a comparison on the calibration of testing machines in compression mode. The participating laboratories were SIM National Metrology Institutes from Colombia, Peru and Costa Rica, with CENAM, Mexico acting as the pilot and reference laboratory. The results obtained by the laboratories are presented in this paper, together with the analysis of compatibility. Main text: this text appears in Appendix B of the BIPM key comparison database, kcdb.bipm.org. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  3. The production of calibration specimens for impact testing of subsize Charpy specimens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexander, D.J.; Corwin, W.R.; Owings, T.D.

    1994-09-01

    Calibration specimens have been manufactured for checking the performance of a pendulum impact testing machine that has been configured for testing subsize specimens, both half-size (5.0 × 5.0 × 25.4 mm) and third-size (3.33 × 3.33 × 25.4 mm). Specimens were fabricated from quenched-and-tempered 4340 steel heat treated to produce different microstructures that would result in either high or low absorbed energy levels on testing. A large group of both half- and third-size specimens were tested at −40 °C. The results of the tests were analyzed for average value and standard deviation, and these values were used to establish calibration limits for the Charpy impact machine when testing subsize specimens. These average values plus or minus two standard deviations were set as the acceptable limits for the average of five tests for calibration of the impact testing machine.

  4. Calibrator device for the extrusion of cable coatings

    NASA Astrophysics Data System (ADS)

    Garbacz, Tomasz; Dulebová, Ľudmila; Spišák, Emil; Dulebová, Martina

    2016-05-01

    This paper presents selected results of theoretical and experimental research on a new calibration device (calibrator) used to produce coatings for electric cables. The aim of this study is to present the design of this calibration equipment and a new calibration machine, an important element of modernized extrusion lines for coating cables. As a result of the extrusion of PVC modified with blowing agents, an extrudate in the form of an electrical cable was obtained. The conditions of the extrusion process were properly selected, which made it possible to obtain a product with a solid external surface and a cellular core.

  5. Method and apparatus for calibrating multi-axis load cells in a dexterous robot

    NASA Technical Reports Server (NTRS)

    Wampler, II, Charles W. (Inventor); Platt, Jr., Robert J. (Inventor)

    2012-01-01

    A robotic system includes a dexterous robot having robotic joints, angle sensors adapted for measuring joint angles at a corresponding one of the joints, load cells for measuring a set of strain values imparted to a corresponding one of the load cells during a predetermined pose of the robot, and a host machine. The host machine is electrically connected to the load cells and angle sensors, and receives the joint angle values and strain values during the predetermined pose. The robot presses together mating pairs of load cells to form the poses. The host machine executes an algorithm to process the joint angles and strain values and, from the set of all calibration matrices that minimize error in the force balance equations, selects the set of calibration matrices that is closest in value to a pre-specified value. A method for calibrating the load cells via the algorithm is also provided.
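At its core, fitting a calibration matrix that maps strain readings to forces is a least-squares problem. A hedged sketch of that core idea, simplified to a single load cell with known applied loads (not the patent's paired-cell force-balance formulation, and all numbers invented):

```python
# Sketch of load-cell calibration as least squares: find the calibration
# matrix C that maps raw strain readings to forces, from poses with known
# applied loads. Simplified single-cell version; synthetic ground truth.
import numpy as np

rng = np.random.default_rng(4)

C_true = np.array([[2.0, 0.1, 0.0],
                   [0.0, 1.8, 0.2],
                   [0.1, 0.0, 2.2]])      # unknown "true" calibration matrix

n_poses = 50
strains = rng.normal(size=(n_poses, 3))                   # raw gauge readings
forces = strains @ C_true.T + rng.normal(0, 0.01, (n_poses, 3))  # known loads

# Least squares: forces ~ strains @ C.T, so solve strains @ X = forces.
X, *_ = np.linalg.lstsq(strains, forces, rcond=None)
C_est = X.T

err = np.abs(C_est - C_true).max()
print(err)
```

With 50 poses and small measurement noise, the recovered matrix agrees with the ground truth to a few parts in a thousand; the patent's extra step is choosing, among matrices that fit equally well, the one closest to a pre-specified value.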

  6. Axial calibration methods of piezoelectric load sharing dynamometer

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Chang, Qingbing; Ren, Zongjin; Shao, Jun; Wang, Xinlei; Tian, Yu

    2018-06-01

    The relationship between the input and output of a load sharing dynamometer is strongly non-linear across different loading points in a plane, so precise calibration of this non-linear relationship is essential for accurate force measurement. In this paper, calibration experiments at different loading points in a plane are first performed on a piezoelectric load sharing dynamometer. The load sharing testing system is then calibrated using a BP (back-propagation) neural network algorithm and an ELM (Extreme Learning Machine) algorithm, respectively. Finally, the results show that ELM calibrates the non-linear relationship between the input and output of the load sharing dynamometer at different loading points better than BP, which verifies that the ELM algorithm is feasible for solving this non-linear force measurement problem.
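An Extreme Learning Machine is a single-hidden-layer network whose input weights are random and whose output weights are solved analytically by least squares. A minimal sketch, fitted to a made-up nonlinear "loading point to sensor output" relationship rather than the dynamometer data:

```python
# Minimal Extreme Learning Machine: random hidden layer, output weights by
# least squares. Fitted to a synthetic nonlinear load-point relationship.
import numpy as np

rng = np.random.default_rng(5)

def elm_fit(X, y, n_hidden=60):
    W = rng.normal(size=(X.shape[1], n_hidden))      # random input weights
    b = rng.normal(size=n_hidden)                    # random biases
    H = np.tanh(X @ W + b)                           # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy nonlinear "sensor output vs. (x, y) loading point" relationship.
X = rng.uniform(-1, 1, size=(400, 2))
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] ** 2

W, b, beta = elm_fit(X[:300], y[:300])
pred = elm_predict(X[300:], W, b, beta)
rmse = np.sqrt(np.mean((pred - y[300:]) ** 2))
print(rmse)
```

Because only the output layer is trained, and by a single linear solve, ELM calibration is much faster than iterative back-propagation, which is the practical appeal noted in the abstract.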

  7. Visual brain activity patterns classification with simultaneous EEG-fMRI: A multimodal approach.

    PubMed

    Ahmad, Rana Fayyaz; Malik, Aamir Saeed; Kamel, Nidal; Reza, Faruque; Amin, Hafeez Ullah; Hussain, Muhammad

    2017-01-01

    Classification of visual information from brain activity data is a challenging task. Many studies reported in the literature are based on brain activity patterns from either fMRI or EEG/MEG alone. EEG and fMRI are considered complementary neuroimaging modalities in terms of their temporal and spatial resolution for mapping brain activity. To obtain both high spatial and high temporal resolution of the brain at the same time, simultaneous EEG-fMRI appears fruitful. In this article, we propose a new method based on simultaneous EEG-fMRI data and a machine learning approach to classify visual brain activity patterns. We acquired EEG-fMRI data simultaneously from ten healthy human participants while showing them visual stimuli. A data fusion approach is used to merge the EEG and fMRI data, and a machine learning classifier is used for classification. Results showed that superior classification performance was achieved with simultaneous EEG-fMRI data compared to EEG or fMRI data alone. This shows that the multimodal approach improved classification accuracy compared with other approaches reported in the literature. The proposed simultaneous EEG-fMRI approach for classifying brain activity patterns can be helpful in predicting or fully decoding brain activity patterns.

  8. AUTOMATIC CALIBRATING SYSTEM FOR PRESSURE TRANSDUCERS

    DOEpatents

    Amonette, E.L.; Rodgers, G.W.

    1958-01-01

    An automatic system for calibrating a number of pressure transducers is described. The disclosed embodiment of the invention uses a mercurial manometer to measure the air pressure applied to the transducer. A servo system follows the top of the mercury column as the pressure is changed and operates an analog-to-digital converter. This converter furnishes electrical pulses, each representing an increment of pressure change, to a reversible counter. The transducer furnishes a signal at each calibration point, causing an electric typewriter and a card-punch machine to record the pressure at the instant as indicated by the counter. Another counter keeps track of the calibration points so that a number identifying each point is recorded with the corresponding pressure. A special relay control system controls the pressure trend and programs the sequential calibration of several transducers.

  9. A Non-Parametric Approach for the Activation Detection of Block Design fMRI Simulated Data Using Self-Organizing Maps and Support Vector Machine.

    PubMed

    Bahrami, Sheyda; Shamsi, Mousa

    2017-01-01

    Functional magnetic resonance imaging (fMRI) is a popular method for probing the functional organization of the brain using hemodynamic responses. In this method, volume images of the entire brain are obtained with very good spatial resolution but low temporal resolution. However, the data always suffer from high dimensionality, which poses challenges for classification algorithms. In this work, we combine a support vector machine (SVM) with a self-organizing map (SOM) to obtain a feature-based classification: the SOM is used for feature extraction and labeling of the datasets, and a linear-kernel SVM is then used for detecting the active areas. The SOM has two major advantages: (i) it reduces the dimension of the data sets, lowering computational complexity, and (ii) it is useful for identifying brain regions with small onset differences in hemodynamic responses. Our non-parametric model is compared with parametric and non-parametric methods. We use simulated fMRI data sets with block design inputs and consider a contrast-to-noise ratio (CNR) of 0.6; the simulated dataset has 1-4% contrast in active areas. The accuracy of our proposed method is 93.63% and the error rate is 6.37%.
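The SOM-for-features-then-SVM idea can be sketched end to end on synthetic block-design time courses. This is a hedged toy version (a tiny pure-NumPy 1-D SOM, distances to SOM prototypes as the reduced features), not the authors' parameter choices:

```python
# Sketch of the SOM -> SVM pipeline: a small self-organizing map compresses
# high-dimensional "voxel time courses" into distances to learned prototypes,
# which a linear SVM then classifies as active vs. inactive. Synthetic data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)

def train_som(X, n_units=16, n_iter=2000, lr0=0.5, sigma0=4.0):
    units = X[rng.integers(0, len(X), n_units)].copy()   # init from data
    pos = np.arange(n_units, dtype=float)                # 1-D map topology
    for t in range(n_iter):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((units - x) ** 2).sum(axis=1))  # best matching unit
        frac = t / n_iter
        lr = lr0 * (1.0 - frac)                          # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5              # shrinking neighborhood
        h = np.exp(-((pos - bmu) ** 2) / (2.0 * sigma ** 2))
        units += lr * h[:, None] * (x - units)           # pull neighborhood
    return units

def som_features(X, units):
    # distance of each sample to every SOM prototype: the reduced feature set
    return np.sqrt(((X[:, None, :] - units[None]) ** 2).sum(axis=2))

# Synthetic time courses: "active" voxels follow a block paradigm.
T = 40
paradigm = np.tile([1.0] * 5 + [0.0] * 5, 4)             # on/off blocks
active = paradigm + rng.normal(0, 0.4, size=(100, T))
inactive = rng.normal(0, 0.4, size=(100, T))
X = np.vstack([active, inactive])
y = np.array([1] * 100 + [0] * 100)

units = train_som(X)
feat = som_features(X, units)                            # 40-dim -> 16-dim
clf = SVC(kernel="linear").fit(feat[::2], y[::2])        # train on half
acc = clf.score(feat[1::2], y[1::2])                     # test on the rest
print(acc)
```

The SOM step cuts the dimensionality (here 40 time points down to 16 prototype distances) before the SVM ever sees the data, which is advantage (i) from the abstract.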

  10. Teaching Camera Calibration by a Constructivist Methodology

    ERIC Educational Resources Information Center

    Samper, D.; Santolaria, J.; Pastor, J. J.; Aguilar, J. J.

    2010-01-01

    This article describes the Metrovisionlab simulation software and practical sessions designed to teach the most important machine vision camera calibration aspects in courses for senior undergraduate students. By following a constructivist methodology, having received introductory theoretical classes, students use the Metrovisionlab application to…

  11. Machine tools error characterization and compensation by on-line measurement of artifact

    NASA Astrophysics Data System (ADS)

    Wahid Khan, Abdul; Chen, Wuyi; Wu, Lili

    2009-11-01

    Most manufacturing machine tools are utilized for mass production or batch production with high accuracy under a deterministic manufacturing principle. The volumetric accuracy of a machine tool depends on the positional accuracy of the cutting tool, probe or end effector relative to the workpiece in the workspace volume. In this research paper, a methodology is presented for the volumetric calibration of machine tools by on-line measurement of an artifact or an object of a similar type. The machine tool geometric error characterization was carried out through a standard or an artifact having geometry similar to the mass production or batch production product. The artifact was measured at an arbitrary position in the volumetric workspace with a calibrated Renishaw touch trigger probe system. Positional errors were stored in a computer for compensation purposes, to run the manufacturing batch through compensated codes. This methodology proved effective for manufacturing high-precision components with improved dimensional accuracy and reliability. Calibration by on-line measurement improves the manufacturing process through the deterministic manufacturing principle and was found efficient and economical, although it is limited to the workspace or envelope surface of the measured artifact's geometry or profile.

  12. Classification of sodium MRI data of cartilage using machine learning.

    PubMed

    Madelin, Guillaume; Poidevin, Frederick; Makrymallis, Antonios; Regatte, Ravinder R

    2015-11-01

    To assess the possible utility of machine learning for classifying subjects with and subjects without osteoarthritis using sodium magnetic resonance imaging data. Theory: Support vector machine, k-nearest neighbors, naïve Bayes, discriminant analysis, linear regression, logistic regression, neural networks, decision tree, and tree bagging were tested. Sodium magnetic resonance imaging with and without fluid suppression by inversion recovery was acquired on the knee cartilage of 19 controls and 28 osteoarthritis patients. Sodium concentrations were measured in regions of interest in the knee for both acquisitions. The mean (MEAN) and standard deviation (STD) of these concentrations were measured in each region of interest, and the minimum, maximum, and mean of these two measurements were calculated over all regions of interest for each subject. The resulting 12 variables per subject were used as predictors for classification. Either Min [STD] alone, or in combination with Mean [MEAN] or Min [MEAN], all from fluid-suppressed data, were the best predictors, with an accuracy >74%, mainly with linear logistic regression and linear support vector machine. Other good classifiers include discriminant analysis, linear regression, and naïve Bayes. Machine learning is a promising technique for classifying osteoarthritis patients and controls from sodium magnetic resonance imaging data. © 2014 Wiley Periodicals, Inc.
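The best-performing configuration above, linear logistic regression on a couple of summary predictors, is easy to sketch. The two features below are hypothetical stand-ins for Min[STD] and Mean[MEAN], with an invented group shift; only the cohort sizes (19 vs. 28) follow the abstract:

```python
# Sketch of the sodium-MRI classification step: linear logistic regression on
# two summary predictors, cross-validated. Synthetic cohort, not study data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)

n_ctl, n_oa = 19, 28
# Hypothetical shift: OA cartilage tends toward lower sodium concentration.
ctl = np.column_stack([rng.normal(35, 5, n_ctl), rng.normal(250, 25, n_ctl)])
oa = np.column_stack([rng.normal(25, 5, n_oa), rng.normal(200, 25, n_oa)])
X = np.vstack([ctl, oa])
y = np.array([0] * n_ctl + [1] * n_oa)

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, y, cv=5).mean()
print(acc)
```

With so few subjects, cross-validated accuracy is the honest figure of merit, which is presumably why the study reports it rather than training accuracy.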

  13. Structural brain changes versus self-report: machine-learning classification of chronic fatigue syndrome patients.

    PubMed

    Sevel, Landrew S; Boissoneault, Jeff; Letzen, Janelle E; Robinson, Michael E; Staud, Roland

    2018-05-30

    Chronic fatigue syndrome (CFS) is a disorder associated with fatigue, pain, and structural/functional abnormalities seen during magnetic resonance brain imaging (MRI). Therefore, we evaluated the performance of structural MRI (sMRI) abnormalities in the classification of CFS patients versus healthy controls and compared it to machine learning (ML) classification based upon self-report (SR). Participants included 18 CFS patients and 15 healthy controls (HC). All subjects underwent T1-weighted sMRI and provided visual analogue-scale ratings of fatigue, pain intensity, anxiety, depression, anger, and sleep quality. sMRI data were segmented using FreeSurfer, and 61 regions were selected based on functional and structural abnormalities previously reported in patients with CFS. Classification was performed in RapidMiner using a linear support vector machine and bootstrap optimism correction. We compared ML classifiers based on (1) the 61 a priori sMRI regional estimates and (2) the SR ratings. The sMRI model achieved 79.58% classification accuracy. The SR model (accuracy = 95.95%) outperformed the sMRI models. Estimates from multiple brain areas related to cognition, emotion, and memory contributed strongly to group classification. This is the first ML-based group classification of CFS. Our findings suggest that sMRI abnormalities are useful for discriminating CFS patients from HC, but SR ratings remain most effective in classification tasks.
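Bootstrap optimism correction, the validation scheme named above, estimates how much a model's apparent (training-sample) accuracy flatters it, and subtracts that amount. A hedged sketch using a simple nearest-centroid classifier so the example stays self-contained (the study used a linear SVM in RapidMiner; the data here are synthetic):

```python
# Sketch of bootstrap optimism correction for a classifier's apparent accuracy.
# Nearest-centroid classifier on synthetic "patients vs. controls" features.
import numpy as np

rng = np.random.default_rng(8)

def fit_centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(cent, X):
    classes = sorted(cent)
    d = np.stack([np.linalg.norm(X - cent[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

def accuracy(cent, X, y):
    return (predict(cent, X) == y).mean()

X = np.vstack([rng.normal(0.0, 1, (18, 5)), rng.normal(1.0, 1, (15, 5))])
y = np.array([0] * 18 + [1] * 15)

apparent = accuracy(fit_centroids(X, y), X, y)

# Optimism = average over bootstrap resamples of
#   (accuracy on the resample) - (that model's accuracy on the full sample).
optimism, B = 0.0, 200
for _ in range(B):
    idx = rng.integers(0, len(y), len(y))
    cent_b = fit_centroids(X[idx], y[idx])
    optimism += accuracy(cent_b, X[idx], y[idx]) - accuracy(cent_b, X, y)
optimism /= B

corrected = apparent - optimism
print(apparent, corrected)
```

With only 33 subjects, as in cohorts of this size, the correction matters: the optimism term directly quantifies small-sample overfitting.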

  14. Binary pressure-sensitive paint measurements using miniaturised, colour, machine vision cameras

    NASA Astrophysics Data System (ADS)

    Quinn, Mark Kenneth

    2018-05-01

    Recent advances in machine vision technology and capability have led to machine vision cameras becoming applicable for scientific imaging. This study aims to demonstrate the applicability of machine vision colour cameras for the measurement of dual-component pressure-sensitive paint (PSP). The presence of a second luminophore component in the PSP mixture significantly reduces its inherent temperature sensitivity, increasing its applicability at low speeds. All of the devices tested are smaller than the cooled CCD cameras traditionally used and most are of significantly lower cost, thereby increasing the accessibility of such technology and techniques. Comparisons between three machine vision cameras, a three CCD camera, and a commercially available specialist PSP camera are made on a range of parameters, and a detailed PSP calibration is conducted in a static calibration chamber. The findings demonstrate that colour machine vision cameras can be used for quantitative, dual-component, pressure measurements. These results give rise to the possibility of performing on-board dual-component PSP measurements in wind tunnels or on real flight/road vehicles.

  15. Evaluating Statistical Process Control (SPC) techniques and computing the uncertainty of force calibrations

    NASA Technical Reports Server (NTRS)

    Navard, Sharon E.

    1989-01-01

    In recent years there has been a push within NASA to use statistical techniques to improve the quality of production. Two areas where statistics are used are in establishing product and process quality control of flight hardware and in evaluating the uncertainty of calibration of instruments. The Flight Systems Quality Engineering branch is responsible for developing and assuring the quality of all flight hardware; the statistical process control methods employed are reviewed and evaluated. The Measurement Standards and Calibration Laboratory performs the calibration of all instruments used on-site at JSC as well as those used by all off-site contractors. These calibrations must be performed in such a way as to be traceable to national standards maintained by the National Institute of Standards and Technology, and they must meet a four-to-one ratio of the instrument specifications to calibrating standard uncertainty. In some instances this ratio is not met, and in these cases it is desirable to compute the exact uncertainty of the calibration and determine ways of reducing it. A particular example where this problem is encountered is with a machine which does automatic calibrations of force. The process of force calibration using the United Force Machine is described in detail. The sources of error are identified and quantified when possible. Suggestions for improvement are made.

  16. Calibration Device Designed for proof ring used in SCC Experiment

    NASA Astrophysics Data System (ADS)

    Hu, X. Y.; Kang, Z. Y.; Yu, Y. L.

    2017-11-01

    In this paper, a calibration device for the proof rings used in SCC (Stress Corrosion Cracking) experiments was designed. A compact loading device was developed to replace the traditional force standard machine or long screw nut. The deformation of the proof ring was measured by a CCD (Charge-Coupled Device) during calibration, instead of a digital caliper or a dial gauge. Laboratory verification showed that the precision of force loading is ±0.1% and the precision of deformation measurement is ±0.002 mm.

  17. Design and calibration of a scanning tunneling microscope for large machined surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grigg, D.A.; Russell, P.E.; Dow, T.A.

    During the last year the large sample STM has been designed, built and used for the observation of several different samples. Calibration of the scanner for proper dimensional interpretation of surface features has been a chief concern, as well as corrections for non-linear effects such as hysteresis during scans. Several procedures used in the calibration and correction of the piezoelectric scanners used in the laboratory's STMs are described.

  18. NMR, MRI, and spectroscopic MRI in inhomogeneous fields

    DOEpatents

    Demas, Vasiliki; Pines, Alexander; Martin, Rachel W; Franck, John; Reimer, Jeffrey A

    2013-12-24

    A method for locally creating effectively homogeneous or "clean" magnetic field gradients (of high uniformity) for imaging (with NMR, MRI, or spectroscopic MRI) in both in-situ and ex-situ systems with high degrees of field inhomogeneity. The method of imaging comprises: a) providing a functional approximation of an inhomogeneous static magnetic field strength B0(r) at a spatial position r; b) providing a temporal functional approximation of G_shim(t) with i basis functions and j variables for each basis function, resulting in v_ij variables; c) providing a measured value Ω, which is the temporally accumulated dephasing due to the inhomogeneities of B0(r); and d) minimizing the difference between Ω and the local dephasing angle φ(r, t) = γ ∫₀ᵗ √( |B1(r, t′)|² + ( r·G_shim(t′) + ‖B0(r)‖ Δω(r, t′)/γ )² ) dt′ by varying the v_ij variables to form a set of minimized v_ij variables (here r, B0, B1 and G_shim denote vector quantities). The method requires calibration of the static fields prior to minimization, but may thereafter be implemented without such calibration, may be used in open or closed systems, and is potentially portable.
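The minimization in step (d) is nonlinear in general, but its core, choosing shim gradient coefficients so that the term r·G_shim cancels the measured inhomogeneity, is linear and can be sketched in a static, greatly simplified form. All field values below are invented for illustration; this is not the patent's time-integrated formulation:

```python
# Greatly simplified, static version of the shim-optimization idea: fit shim
# gradient coefficients g = (gx, gy, gz) so that r . g cancels a measured
# field inhomogeneity dB0(r) over sampled positions, in the least-squares
# sense. (The patent minimizes a time-integrated dephasing; this is the
# linear core of that problem.)
import numpy as np

rng = np.random.default_rng(9)

# Sampled positions in the region of interest (metres).
r = rng.uniform(-0.05, 0.05, size=(200, 3))
# "Measured" inhomogeneity: a linear term plus a small residue (tesla).
g_true = np.array([2e-4, -1e-4, 5e-5])
dB0 = r @ g_true + 1e-7 * rng.normal(size=len(r))

# Least squares: minimize || dB0 + r @ g_shim ||, i.e. negate the fit.
g_fit, *_ = np.linalg.lstsq(r, dB0, rcond=None)
g_shim = -g_fit

residual = dB0 + r @ g_shim
print(np.abs(residual).max(), np.abs(dB0).max())
```

In the full method the coefficients become the time-dependent v_ij variables and the objective becomes the accumulated dephasing, but the fitting principle is the same.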

  19. Exploiting Task Constraints for Self-Calibrated Brain-Machine Interface Control Using Error-Related Potentials

    PubMed Central

    Iturrate, Iñaki; Grizou, Jonathan; Omedes, Jason; Oudeyer, Pierre-Yves; Lopes, Manuel; Montesano, Luis

    2015-01-01

    This paper presents a new approach to self-calibrated BCI control of reaching tasks using error-related potentials. The proposed method exploits task constraints to simultaneously calibrate the decoder and control the device, using a robust likelihood function and an ad hoc planner to cope with the large uncertainty resulting from the unknown task and decoder. The method was evaluated in closed-loop online experiments with 8 users, using a previously proposed BCI protocol for reaching tasks over a grid. The results show that usable BCI control is possible from the beginning of the experiment without any prior calibration. Furthermore, comparisons with simulations and previous results obtained using standard calibration suggest that both the quality of the recorded signals and the performance of the system were comparable to those obtained with a standard calibration approach. PMID:26131890

  20. An Introduction to Normalization and Calibration Methods in Functional MRI

    ERIC Educational Resources Information Center

    Liu, Thomas T.; Glover, Gary H.; Mueller, Bryon A.; Greve, Douglas N.; Brown, Gregory G.

    2013-01-01

    In functional magnetic resonance imaging (fMRI), the blood oxygenation level dependent (BOLD) signal is often interpreted as a measure of neural activity. However, because the BOLD signal reflects the complex interplay of neural, vascular, and metabolic processes, such an interpretation is not always valid. There is growing evidence that changes…

  1. Small mammal MRI imaging in spinal cord injury: a novel practical technique for using a 1.5 T MRI.

    PubMed

    Levene, Howard B; Mohamed, Feroze B; Faro, Scott H; Seshadri, Asha B; Loftus, Christopher M; Tuma, Ronald F; Jallo, Jack I

    2008-07-30

    The field of spinal cord injury (SCI) research is an active one, and the pathophysiology of SCI is not yet fully understood. As such, animal models are required for the exploration of new therapies and treatments. We present a novel technique using available hospital MRI machines to examine SCI in a mouse model: a 60 kdyne direct contusion injury to the mouse thoracic spine. No new electronic equipment is required. A 1.5 T MRI machine with a human wrist coil is employed, and a standard multisection 2D fast spin-echo (FSE) T2-weighted sequence is used for imaging the mouse. The contrast-to-noise ratio (CNR) between the injured and normal areas of the spinal cord showed a three-fold increase between these two regions. The MRI findings could be correlated with kinematic outcome scores of ambulation, such as BBB or BMS. The ability to follow an SCI in the same animal over time should improve the quality of data while reducing the number of animals required in SCI research. It is the aim of the authors to share this non-invasive technique and make it available to the scientific research community.

  2. Nano Mechanical Machining Using AFM Probe

    NASA Astrophysics Data System (ADS)

    Mostofa, Md. Golam

    and burr formations through intermittent cutting. Combining the AFM probe based machining with vibration-assisted machining enhanced nano mechanical machining processes by improving the accuracy, productivity and surface finishes. In this study, several scratching tests are performed with a single crystal diamond AFM probe to investigate the cutting characteristics and model the ploughing cutting forces. Calibration of the probe for lateral force measurements, which is essential, is also extended through the force balance method. Furthermore, vibration-assisted machining system is developed and applied to fabricate different materials to overcome some of the limitations of the AFM probe based single point nano mechanical machining. The novelty of this study includes the application of vibration-assisted AFM probe based nano scale machining to fabricate micro/nano scale features, calibration of an AFM by considering different factors, and the investigation of the nano scale material removal process from a different perspective.

  3. Motion prediction in MRI-guided radiotherapy based on interleaved orthogonal cine-MRI

    NASA Astrophysics Data System (ADS)

    Seregni, M.; Paganelli, C.; Lee, D.; Greer, P. B.; Baroni, G.; Keall, P. J.; Riboldi, M.

    2016-01-01

    In-room cine-MRI guidance can provide non-invasive target localization during radiotherapy treatment. However, in order to cope with finite imaging frequency and system latencies between target localization and dose delivery, tumour motion prediction is required. This work proposes a framework for motion prediction dedicated to cine-MRI guidance, aiming at quantifying the geometric uncertainties introduced by this process for both tumour tracking and beam gating. The tumour position, identified through scale invariant features detected in cine-MRI slices, is estimated at high frequency (25 Hz) using three independent predictors, one for each anatomical coordinate. Linear extrapolation, auto-regressive and support vector machine algorithms are compared against systems that use no prediction or surrogate-based motion estimation. Geometric uncertainties are reported as a function of image acquisition period and system latency. Average results show that the tracking error RMS can be reduced to a [0.2; 1.2] mm range, for acquisition periods between 250 and 750 ms and system latencies between 50 and 300 ms. Except for the linear extrapolator, tracking and gating prediction errors were, on average, lower than those measured for surrogate-based motion estimation. This finding suggests that cine-MRI guidance, combined with appropriate prediction algorithms, could substantially reduce geometric uncertainties in motion compensated treatments.
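The simplest of the three predictors, linear extrapolation, is easy to sketch. The following is a minimal illustration, not the authors' implementation: it extends a synthetic 1-D tumour trace one system latency ahead from the two most recent samples and compares the tracking RMSE against simply using the latest observed position. All signal parameters (amplitude, breathing period) are invented.

```python
import numpy as np

# Synthetic 1-D respiratory-like tumour trace; all parameters are invented.
amp, period = 10.0, 4.0        # motion amplitude (mm) and breathing period (s)
acq_period = 0.25              # cine-MRI image acquisition period (s)
latency = 0.3                  # localisation-to-delivery system latency (s)

t = np.arange(0.0, 20.0, acq_period)
x = amp * np.sin(2.0 * np.pi * t / period)

# Linear extrapolation: extend the line through the two most recent samples
# `latency` seconds ahead (one independent predictor per coordinate).
pred = x[1:] + (x[1:] - x[:-1]) / acq_period * latency
truth = amp * np.sin(2.0 * np.pi * (t[1:] + latency) / period)

rmse = np.sqrt(np.mean((pred - truth) ** 2))
rmse_hold = np.sqrt(np.mean((x[1:] - truth) ** 2))   # no-prediction baseline
print(f"tracking RMSE: {rmse:.2f} mm with extrapolation, {rmse_hold:.2f} mm without")
```

Even this crude predictor beats the no-prediction baseline for latencies short relative to the breathing period, which is the regime the abstract reports.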

  4. Temperature Measurement and Numerical Prediction in Machining Inconel 718

    PubMed Central

    Tapetado, Alberto; Vázquez, Carmen; Miguélez, Henar

    2017-01-01

    Thermal issues are critical when machining Ni-based superalloy components designed for high temperature applications. The low thermal conductivity and extreme strain hardening of this family of materials results in elevated temperatures around the cutting area. This elevated temperature could lead to machining-induced damage such as phase changes and residual stresses, resulting in reduced service life of the component. Measurement of temperature during machining is crucial in order to control the cutting process, avoiding workpiece damage. On the other hand, the development of predictive tools based on numerical models helps in the definition of machining processes and the obtainment of difficult to measure parameters such as the penetration of the heated layer. However, the validation of numerical models strongly depends on the accurate measurement of physical parameters such as temperature, ensuring the calibration of the model. This paper focuses on the measurement and prediction of temperature during the machining of Ni-based superalloys. The temperature sensor was based on a fiber-optic two-color pyrometer developed for localized temperature measurements in turning of Inconel 718. The sensor is capable of measuring temperature in the range of 250 to 1200 °C. Temperature evolution is recorded in a lathe at different feed rates and cutting speeds. Measurements were used to calibrate a simplified numerical model for prediction of temperature fields during turning. PMID:28665312

  5. Temperature Measurement and Numerical Prediction in Machining Inconel 718.

    PubMed

    Díaz-Álvarez, José; Tapetado, Alberto; Vázquez, Carmen; Miguélez, Henar

    2017-06-30

    Thermal issues are critical when machining Ni-based superalloy components designed for high temperature applications. The low thermal conductivity and extreme strain hardening of this family of materials results in elevated temperatures around the cutting area. This elevated temperature could lead to machining-induced damage such as phase changes and residual stresses, resulting in reduced service life of the component. Measurement of temperature during machining is crucial in order to control the cutting process, avoiding workpiece damage. On the other hand, the development of predictive tools based on numerical models helps in the definition of machining processes and the obtainment of difficult to measure parameters such as the penetration of the heated layer. However, the validation of numerical models strongly depends on the accurate measurement of physical parameters such as temperature, ensuring the calibration of the model. This paper focuses on the measurement and prediction of temperature during the machining of Ni-based superalloys. The temperature sensor was based on a fiber-optic two-color pyrometer developed for localized temperature measurements in turning of Inconel 718. The sensor is capable of measuring temperature in the range of 250 to 1200 °C. Temperature evolution is recorded in a lathe at different feed rates and cutting speeds. Measurements were used to calibrate a simplified numerical model for prediction of temperature fields during turning.

  6. A Fabry-Perot Interferometry Based MRI-Compatible Miniature Uniaxial Force Sensor for Percutaneous Needle Placement

    PubMed Central

    Shang, Weijian; Su, Hao; Li, Gang; Furlong, Cosme; Fischer, Gregory S.

    2014-01-01

    Robot-assisted surgical procedures, taking advantage of the high soft tissue contrast and real-time imaging of magnetic resonance imaging (MRI), are developing rapidly. However, it is crucial to maintain tactile force feedback in MRI-guided needle-based procedures. This paper presents a Fabry-Perot interference (FPI) based system of an MRI-compatible fiber optic sensor which has been integrated into a piezoelectrically actuated robot for prostate cancer biopsy and brachytherapy in 3T MRI scanner. The opto-electronic sensing system design was minimized to fit inside an MRI-compatible robot controller enclosure. A flexure mechanism was designed that integrates the FPI sensor fiber for measuring needle insertion force, and finite element analysis was performed for optimizing the correct force-deformation relationship. The compact, low-cost FPI sensing system was integrated into the robot and calibration was conducted. The root mean square (RMS) error of the calibration among the range of 0–10 Newton was 0.318 Newton comparing to the theoretical model which has been proven sufficient for robot control and teleoperation. PMID:25126153

  7. Using the cloud to speed-up calibration of watershed-scale hydrologic models (Invited)

    NASA Astrophysics Data System (ADS)

    Goodall, J. L.; Ercan, M. B.; Castronova, A. M.; Humphrey, M.; Beekwilder, N.; Steele, J.; Kim, I.

    2013-12-01

    This research focuses on using the cloud to address computational challenges associated with hydrologic modeling. One example is calibration of a watershed-scale hydrologic model, which can take days of execution time on typical computers. While parallel algorithms for model calibration exist and some researchers have used multi-core computers or clusters to run these algorithms, these solutions do not fully address the challenge because (i) calibration can still be too time consuming even on multicore personal computers and (ii) few in the community have the time and expertise needed to manage a compute cluster. Given this, another option for addressing this challenge that we are exploring through this work is the use of the cloud for speeding up calibration of watershed-scale hydrologic models. The cloud used in this capacity provides a means for renting a specific number and type of machines for only the time needed to perform a calibration model run. The cloud allows one to precisely balance the duration of the calibration with the financial costs so that, if the budget allows, the calibration can be performed more quickly by renting more machines. Focusing specifically on the SWAT hydrologic model and a parallel version of the DDS calibration algorithm, we show significant speed-ups across a range of watershed sizes using up to 256 cores to perform a model calibration. The tool provides a simple web-based user interface and the ability to monitor job submission and progress during calibration. Finally, this talk concludes with initial work to leverage the cloud for other tasks associated with hydrologic modeling, including tasks related to preparing inputs for constructing place-based hydrologic models.

  8. Calibrating random forests for probability estimation.

    PubMed

    Dankowski, Theresa; Ziegler, Andreas

    2016-09-30

    Probabilities can be consistently estimated using random forests. It is, however, unclear how random forests should be updated to make predictions for other centers or at different time points. In this work, we present two approaches for updating random forests for probability estimation. The first method has been proposed by Elkan and may be used for updating any machine learning approach yielding consistent probabilities, so-called probability machines. The second approach is a new strategy specifically developed for random forests. Using the terminal nodes, which represent conditional probabilities, the random forest is first translated to logistic regression models. These are, in turn, used for re-calibration. The two updating strategies were compared in a simulation study and are illustrated with data from the German Stroke Study Collaboration. In most simulation scenarios, both methods led to similar improvements. In the simulation scenario in which the stricter assumptions of Elkan's method were not met, the logistic regression-based re-calibration approach for random forests outperformed Elkan's method. It also performed better on the stroke data than Elkan's method. The strength of Elkan's method is its general applicability to any probability machine. However, if the strict assumptions underlying this approach are not met, the logistic regression-based approach is preferable for updating random forests for probability estimation. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
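The general idea of updating a forest's probabilities for a new center can be sketched with a Platt-style logistic re-calibration of the forest's log-odds. This is a simplified stand-in for the paper's terminal-node translation, not the authors' method; the data and the prior shift between "centers" are invented.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=4000, n_features=10, flip_y=0.2,
                           random_state=0)

# Grow the forest at the "original center".
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:2000], y[:2000])

# Simulate a "new center" with a different case mix: keep only ~25% of positives.
X_rest, y_rest = X[2000:], y[2000:]
keep = (y_rest == 0) | (rng.rand(len(y_rest)) < 0.25)
Xn, yn = X_rest[keep], y_rest[keep]
half = len(yn) // 2
X_cal, y_cal, X_te, y_te = Xn[:half], yn[:half], Xn[half:], yn[half:]

def logit(p):
    """Log-odds with clipping to avoid +/- infinity."""
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return np.log(p / (1 - p))

# Fit a one-dimensional logistic model on the forest's log-odds using a
# calibration sample drawn at the new center (Platt-style updating).
lr = LogisticRegression().fit(
    logit(rf.predict_proba(X_cal)[:, 1]).reshape(-1, 1), y_cal)

p_raw = rf.predict_proba(X_te)[:, 1]
p_cal = lr.predict_proba(logit(p_raw).reshape(-1, 1))[:, 1]
print("Brier, raw forest:   ", brier_score_loss(y_te, p_raw))
print("Brier, recalibrated: ", brier_score_loss(y_te, p_cal))
```

Because the new center's event rate differs from the training data's, the raw forest probabilities are systematically biased; the logistic update absorbs the shift mostly through its intercept.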

  9. Stable Local Volatility Calibration Using Kernel Splines

    NASA Astrophysics Data System (ADS)

    Coleman, Thomas F.; Li, Yuying; Wang, Cheng

    2010-09-01

    We propose an optimization formulation using the L1 norm to ensure accuracy and stability in calibrating a local volatility function for option pricing. Using a regularization parameter, the proposed objective function balances the calibration accuracy with the model complexity. Motivated by support vector machine learning, the unknown local volatility function is represented by a kernel function generating splines and the model complexity is controlled by minimizing the 1-norm of the kernel coefficient vector. In the context of the support vector regression for function estimation based on a finite set of observations, this corresponds to minimizing the number of support vectors for predictability. We illustrate the ability of the proposed approach to reconstruct the local volatility function in a synthetic market. In addition, based on S&P 500 market index option data, we demonstrate that the calibrated local volatility surface is simple and resembles the observed implied volatility surface in shape. Stability is illustrated by calibrating local volatility functions using market option data from different dates.

  10. Quantification of uncertainty in machining operations for on-machine acceptance.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Claudet, Andre A.; Tran, Hy D.; Su, Jiann-Chemg

    2008-09-01

    Manufactured parts are designed with acceptance tolerances, i.e. deviations from ideal design conditions, due to unavoidable errors in the manufacturing process. It is necessary to measure and evaluate the manufactured part, compared to the nominal design, to determine whether the part meets design specifications. The scope of this research project is dimensional acceptance of machined parts; specifically, parts machined using numerically controlled (NC, or also CNC for Computer Numerically Controlled) machines. In the design/build/accept cycle, the designer will specify both a nominal value, and an acceptable tolerance. As part of the typical design/build/accept business practice, it is required to verify that the part did meet acceptable values prior to acceptance. Manufacturing cost must include not only raw materials and added labor, but also the cost of ensuring conformance to specifications. Ensuring conformance is a substantial portion of the cost of manufacturing. In this project, the costs of measurements were approximately 50% of the cost of the machined part. In production, cost of measurement would be smaller, but still a substantial proportion of manufacturing cost. The results of this research project will point to a science-based approach to reducing the cost of ensuring conformance to specifications. The approach that we take is to determine, a priori, how well a CNC machine can manufacture a particular geometry from stock. Based on the knowledge of the manufacturing process, we are then able to decide features which need further measurements from features which can be accepted 'as is' from the CNC. By calibration of the machine tool, and establishing a machining accuracy ratio, we can validate the ability of CNC to fabricate to a particular level of tolerance. This will eliminate the costs of checking for conformance for relatively large tolerances.

  11. Evaluating the diagnostic utility of applying a machine learning algorithm to diffusion tensor MRI measures in individuals with major depressive disorder.

    PubMed

    Schnyer, David M; Clasen, Peter C; Gonzalez, Christopher; Beevers, Christopher G

    2017-06-30

    Using MRI to diagnose mental disorders has been a long-term goal. Despite this, the vast majority of prior neuroimaging work has been descriptive rather than predictive. The current study applies support vector machine (SVM) learning to MRI measures of brain white matter to classify adults with Major Depressive Disorder (MDD) and healthy controls. In a precisely matched group of individuals with MDD (n = 25) and healthy controls (n = 25), SVM learning accurately (74%) classified patients and controls across a brain map of white matter fractional anisotropy values (FA). The study revealed three main findings: 1) SVM applied to DTI derived FA maps can accurately classify MDD vs. healthy controls; 2) prediction is strongest when only right hemisphere white matter is examined; and 3) removing FA values from a region identified by univariate contrast as significantly different between MDD and healthy controls does not change the SVM accuracy. These results indicate that SVM learning applied to neuroimaging data can classify the presence versus absence of MDD and that predictive information is distributed across brain networks rather than being highly localized. Finally, MDD group differences revealed through typical univariate contrasts do not necessarily reveal patterns that provide accurate predictive information. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
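The SVM classification step can be sketched minimally with scikit-learn. This is an illustration only, not the study's pipeline: the FA "maps" are synthetic, with a small group difference distributed across many voxels to mimic non-localized predictive information; the group sizes match the abstract, all other numbers are invented.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.RandomState(0)
n_per_group, n_voxels = 25, 500   # 25 MDD, 25 controls; toy voxel-wise FA "map"

# Synthetic FA values in [0, 1]: patients get a small decrease spread over
# many voxels rather than one localized region (made-up effect size).
controls = np.clip(rng.normal(0.45, 0.05, (n_per_group, n_voxels)), 0, 1)
patients = np.clip(rng.normal(0.43, 0.05, (n_per_group, n_voxels)), 0, 1)
X = np.vstack([controls, patients])
y = np.array([0] * n_per_group + [1] * n_per_group)

# Linear SVM with cross-validation, as is typical for small neuroimaging samples.
acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

The linear kernel aggregates many weak per-voxel differences into one discriminating direction, which is why distributed effects that no single univariate contrast captures can still classify well.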

  12. System and method for calibrating a rotary absolute position sensor

    NASA Technical Reports Server (NTRS)

    Davis, Donald R. (Inventor); Permenter, Frank Noble (Inventor); Radford, Nicolaus A (Inventor)

    2012-01-01

    A system includes a rotary device, a rotary absolute position (RAP) sensor generating encoded pairs of voltage signals describing positional data of the rotary device, a host machine, and an algorithm. The algorithm calculates calibration parameters usable to determine an absolute position of the rotary device using the encoded pairs, and is adapted for linearly-mapping an ellipse defined by the encoded pairs to thereby calculate the calibration parameters. A method of calibrating the RAP sensor includes measuring the rotary position as encoded pairs of voltage signals, linearly-mapping an ellipse defined by the encoded pairs to thereby calculate the calibration parameters, and calculating an absolute position of the rotary device using the calibration parameters. The calibration parameters include a positive definite matrix (A) and a center point (q) of the ellipse. The voltage signals may include an encoded sine and cosine of a rotary angle of the rotary device.
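The ellipse-mapping step described above can be sketched as follows: fit a conic to the encoded voltage pairs by linear least squares, recover the centre point (q) and positive definite matrix from its coefficients, and whiten the samples back onto a circle so the rotary angle can be read off with atan2 (up to a fixed rotation). The distortion matrix and offset below are invented for illustration.

```python
import numpy as np

# Ground truth distortion: a gain/cross-coupling matrix A and offset q turn
# the ideal (cosine, sine) voltage pair into an ellipse. Values are invented.
A_true = np.array([[1.3, 0.25], [0.1, 0.8]])
q_true = np.array([0.4, -0.2])
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
V = (A_true @ np.vstack([np.cos(theta), np.sin(theta)])).T + q_true

# Linear least-squares conic fit: a x^2 + b xy + c y^2 + d x + e y = 1.
x, y = V[:, 0], V[:, 1]
G = np.column_stack([x * x, x * y, y * y, x, y])
coef, *_ = np.linalg.lstsq(G, np.ones(len(V)), rcond=None)
a, b, c, d, e = coef

# Calibration parameters: positive definite matrix M and centre point q
# (the conic's gradient vanishes at the ellipse centre).
M = np.array([[a, b / 2.0], [b / 2.0, c]])
q = np.linalg.solve(2.0 * M, -np.array([d, e]))

# Whitening with the Cholesky factor of M maps the ellipse back onto a
# circle; the absolute rotary angle then follows from atan2, up to a
# fixed rotation of the sensor frame.
L = np.linalg.cholesky(M)
U = (V - q) @ L
radii = np.hypot(U[:, 0], U[:, 1])
angle = np.arctan2(U[:, 1], U[:, 0])
print("recovered centre:", q, "radius spread:", radii.std() / radii.mean())
```

With noiseless samples the fit recovers the centre exactly and collapses the radius spread to numerical precision; with real sensor noise the same least-squares fit acts as an averaging estimator.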

  13. Machine Learning Principles Can Improve Hip Fracture Prediction.

    PubMed

    Kruse, Christian; Eiken, Pia; Vestergaard, Peter

    2017-04-01

    Apply machine learning principles to predict hip fractures and estimate predictor importance in Dual-energy X-ray absorptiometry (DXA)-scanned men and women. Dual-energy X-ray absorptiometry data from two Danish regions between 1996 and 2006 were combined with national Danish patient data to comprise 4722 women and 717 men with 5 years of follow-up time (original cohort n = 6606 men and women). Twenty-four statistical models were built on 75% of data points through 5-fold, 5-repeat cross-validation, and then validated on the remaining 25% of data points to calculate area under the curve (AUC) and calibrate probability estimates. The best models were retrained with restricted predictor subsets to estimate the best subsets. For women, bootstrap aggregated flexible discriminant analysis ("bagFDA") performed best with a test AUC of 0.92 [0.89; 0.94] and well-calibrated probabilities following Naïve Bayes adjustments. A "bagFDA" model limited to 11 predictors (among them bone mineral densities (BMD), biochemical glucose measurements, general practitioner and dentist use) achieved a test AUC of 0.91 [0.88; 0.93]. For men, eXtreme Gradient Boosting ("xgbTree") performed best with a test AUC of 0.89 [0.82; 0.95], but with poor calibration in higher probabilities. A ten predictor subset (BMD, biochemical cholesterol and liver function tests, penicillin use and osteoarthritis diagnoses) achieved a test AUC of 0.86 [0.78; 0.94] using an "xgbTree" model. Machine learning can improve hip fracture prediction beyond logistic regression using ensemble models. Compiling data from international cohorts of longer follow-up and performing similar machine learning procedures has the potential to further improve discrimination and calibration.
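The generic train/validate pattern the study follows — fit ensemble models on 75% of the data with cross-validated probability calibration, then report AUC on the held-out 25% — can be sketched with scikit-learn. The data, model, and numbers below are synthetic stand-ins, not the study's cohort or models.

```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic cohort with a rare outcome (~10% events), a stand-in for
# fracture data; class balance and size are invented.
X, y = make_classification(n_samples=3000, n_features=20, weights=[0.9, 0.1],
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y,
                                          random_state=1)

# Fit a boosted-tree ensemble on 75% of the data with cross-validated
# sigmoid (Platt) calibration of its probabilities, then validate on the
# held-out 25%.
clf = CalibratedClassifierCV(GradientBoostingClassifier(random_state=1), cv=5)
clf.fit(X_tr, y_tr)
p = clf.predict_proba(X_te)[:, 1]
auc = roc_auc_score(y_te, p)
print(f"held-out test AUC: {auc:.2f}")
```

Calibrating inside cross-validation, as here, avoids the optimistic bias of calibrating on the same data the ensemble was fitted to.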

  14. A quantitative comparison of two methods to correct eddy current-induced distortions in DT-MRI.

    PubMed

    Muñoz Maniega, Susana; Bastin, Mark E; Armitage, Paul A

    2007-04-01

    Eddy current-induced geometric distortions of single-shot, diffusion-weighted, echo-planar (DW-EP) images are a major confounding factor to the accurate determination of water diffusion parameters in diffusion tensor MRI (DT-MRI). Previously, it has been suggested that these geometric distortions can be removed from brain DW-EP images using affine transformations determined from phantom calibration experiments using iterative cross-correlation (ICC). Since this approach was first described, a number of image-based registration methods have become available that can also correct eddy current-induced distortions in DW-EP images. However, as yet no study has investigated whether separate eddy current calibration or image-based registration provides the most accurate way of removing these artefacts from DT-MRI data. Here we compare how ICC phantom calibration and affine FLIRT (http://www.fmrib.ox.ac.uk), a popular image-based multi-modal registration method that can correct both eddy current-induced distortions and bulk subject motion, perform when registering DW-EP images acquired with different slice thicknesses (2.8 and 5 mm) and b-values (1000 and 3000 s/mm(2)). With the use of consistency testing, it was found that ICC was a more robust algorithm for correcting eddy current-induced distortions than affine FLIRT, especially at high b-value and small slice thickness. In addition, principal component analysis demonstrated that the combination of ICC phantom calibration (to remove eddy current-induced distortions) with rigid body FLIRT (to remove bulk subject motion) provided a more accurate registration of DT-MRI data than that achieved by affine FLIRT.

  15. [How do metallic middle ear implants behave in the MRI?].

    PubMed

    Kwok, P; Waldeck, A; Strutz, J

    2003-01-01

    Magnetic resonance imaging (MRI) has gained in frequency and importance as a diagnostic procedure. Given the close anatomical relationships in the temporal bone, it is necessary to know whether MRI is hazardous to patients with metallic middle ear implants with respect to displacement and temperature rise. For good MR image quality, artefacts caused by metallic prostheses should be small. Four different stapes prostheses made from titanium, gold, teflon/platinum and teflon/steel, a titanium total ossicular reconstruction prosthesis (TORP) and two ventilation tubes (made from titanium and gold) were tested in a 1.5 Tesla MRI machine regarding their displacement. All objects were first placed in a petri dish, then suspended from a thread and finally immersed in a dish filled with Gadolinium. Temperature changes of the implants were recorded by a pyrometer. None of the implants moved when they were placed in the petri dish or suspended from the thread. On the water surface, the teflon/platinum and teflon/steel pistons aligned their long axes with the MRI scanner opening, and the teflon/steel piston floated towards the MRI machine when placed close enough to the scanner opening. No rise in temperature was recorded. All implants produced artefacts small enough that evaluation of the surrounding tissue remained possible. Patients with any of the metallic middle ear implants examined in this study may undergo MRI investigations without significant adverse effects.

  16. Optical/MRI Multimodality Molecular Imaging

    NASA Astrophysics Data System (ADS)

    Ma, Lixin; Smith, Charles; Yu, Ping

    2007-03-01

    Multimodality molecular imaging that combines anatomical and functional information has shown promise in development of tumor-targeted pharmaceuticals for cancer detection or therapy. We present a new multimodality imaging technique that combines fluorescence molecular tomography (FMT) and magnetic resonance imaging (MRI) for in vivo molecular imaging of preclinical tumor models. Unlike other optical/MRI systems, the new molecular imaging system uses parallel phase acquisition based on heterodyne principle. The system has a higher accuracy of phase measurements, reduced noise bandwidth, and an efficient modulation of the fluorescence diffuse density waves. Fluorescent Bombesin probes were developed for targeting breast cancer cells and prostate cancer cells. Tissue phantom and small animal experiments were performed for calibration of the imaging system and validation of the targeting probes.

  17. Classifying Cognitive Profiles Using Machine Learning with Privileged Information in Mild Cognitive Impairment.

    PubMed

    Alahmadi, Hanin H; Shen, Yuan; Fouad, Shereen; Luft, Caroline Di B; Bentham, Peter; Kourtzi, Zoe; Tino, Peter

    2016-01-01

    Early diagnosis of dementia is critical for assessing disease progression and potential treatment. State-of-the-art machine learning techniques have been increasingly employed to take on this diagnostic task. In this study, we employed Generalized Matrix Learning Vector Quantization (GMLVQ) classifiers to discriminate patients with Mild Cognitive Impairment (MCI) from healthy controls based on their cognitive skills. Further, we adopted a "Learning with privileged information" approach to combine cognitive and fMRI data for the classification task. The resulting classifier operates solely on the cognitive data while it incorporates the fMRI data as privileged information (PI) during training. This novel classifier is of practical use as the collection of brain imaging data is not always possible with patients and older participants. MCI patients and healthy age-matched controls were trained to extract structure from temporal sequences. We ask whether machine learning classifiers can be used to discriminate patients from controls and whether differences between these groups relate to individual cognitive profiles. To this end, we tested participants in four cognitive tasks: working memory, cognitive inhibition, divided attention, and selective attention. We also collected fMRI data before and after training on a probabilistic sequence learning task and extracted fMRI responses and connectivity as features for machine learning classifiers. Our results show that the PI guided GMLVQ classifiers outperform the baseline classifier that only used the cognitive data. In addition, we found that for the baseline classifier, divided attention is the only relevant cognitive feature. When PI was incorporated, divided attention remained the most relevant feature while cognitive inhibition also became relevant for the task. Interestingly, this analysis for the fMRI GMLVQ classifier suggests that (1) when overall fMRI signal is used as inputs to the classifier, the post

  18. Biomarkers for Musculoskeletal Pain Conditions: Use of Brain Imaging and Machine Learning.

    PubMed

    Boissoneault, Jeff; Sevel, Landrew; Letzen, Janelle; Robinson, Michael; Staud, Roland

    2017-01-01

    Chronic musculoskeletal pain conditions often show poor correlations between tissue abnormalities and clinical pain. Therefore, classification of pain conditions like chronic low back pain, osteoarthritis, and fibromyalgia depends mostly on self-report and less on objective findings like X-ray or magnetic resonance imaging (MRI) changes. However, recent advances in structural and functional brain imaging have identified brain abnormalities in chronic pain conditions that can be used for illness classification. Because the analysis of complex and multivariate brain imaging data is challenging, machine learning techniques have been increasingly utilized for this purpose. The goal of machine learning is to train specific classifiers to best identify variables of interest on brain MRIs (i.e., biomarkers). This report describes classification techniques capable of separating MRI-based brain biomarkers of chronic pain patients from healthy controls with high accuracy (70-92%) using machine learning, as well as critical scientific, practical, and ethical considerations related to their potential clinical application. Although self-report remains the gold standard for pain assessment, machine learning may aid in the classification of chronic pain disorders like chronic back pain and fibromyalgia as well as provide mechanistic information regarding their neural correlates.

  19. Implementation of compressive sensing for preclinical cine-MRI

    NASA Astrophysics Data System (ADS)

    Tan, Elliot; Yang, Ming; Ma, Lixin; Zheng, Yahong Rosa

    2014-03-01

    This paper presents a practical implementation of Compressive Sensing (CS) for a preclinical MRI machine to acquire randomly undersampled k-space data in cardiac function imaging applications. First, random undersampling masks were generated based on Gaussian, Cauchy, wrapped Cauchy and von Mises probability distribution functions by the inverse transform method. The best masks for undersampling ratios of 0.3, 0.4 and 0.5 were chosen for animal experimentation, and were programmed into a Bruker Avance III BioSpec 7.0T MRI system through method programming in ParaVision. Three undersampled mouse heart datasets were obtained using a fast low angle shot (FLASH) sequence, along with a control undersampled phantom dataset. ECG and respiratory gating was used to obtain high quality images. After CS reconstructions were applied to all acquired data, resulting images were quantitatively analyzed using the performance metrics of reconstruction error and Structural Similarity Index (SSIM). The comparative analysis indicated that CS reconstructed images from MRI machine undersampled data were indeed comparable to CS reconstructed images from retrospective undersampled data, and that CS techniques are practical in a preclinical setting. The implementation achieved 2 to 4 times acceleration for image acquisition and satisfactory quality of image reconstruction.
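Generating a random undersampling mask from a heavy-tailed distribution by the inverse transform method — mapping uniform draws through the distribution's inverse CDF — can be sketched as follows. This is an illustration, not the paper's implementation: the Cauchy scale parameter and matrix size are arbitrary choices.

```python
import numpy as np

rng = np.random.RandomState(0)
n_pe, ratio = 256, 0.4            # phase-encode lines and undersampling ratio

def cauchy_mask_lines(n_lines, n_pe, scale=0.1, rng=rng):
    """Draw distinct phase-encode line indices whose density follows a
    Cauchy distribution centred on the middle of k-space, via the inverse
    transform method: u ~ U(0, 1) mapped through the inverse Cauchy CDF."""
    lines = set()
    while len(lines) < n_lines:
        u = rng.rand()
        pos = 0.5 + scale * np.tan(np.pi * (u - 0.5))   # inverse Cauchy CDF
        k = int(round(pos * (n_pe - 1)))
        if 0 <= k < n_pe:                               # reject out-of-grid draws
            lines.add(k)
    return sorted(lines)

mask = np.zeros(n_pe, dtype=bool)
mask[cauchy_mask_lines(int(ratio * n_pe), n_pe)] = True

# The centre of k-space should be sampled far more densely than the edges,
# which is the property that makes such masks attractive for CS-MRI.
centre = mask[3 * n_pe // 8: 5 * n_pe // 8].mean()
edge = (mask[: n_pe // 8].mean() + mask[-(n_pe // 8):].mean()) / 2
print(f"{mask.sum()} of {n_pe} lines; centre density {centre:.2f}, edge {edge:.2f}")
```

The heavy Cauchy tails keep some high-frequency lines in the mask while concentrating samples near the k-space centre, where most image energy lies.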

  20. The Usefulness of Zone Division Using Belt Partition at the Entry Zone of MRI Machine Room: An Analysis of the Restrictive Effect of Dangerous Action Using a Questionnaire.

    PubMed

    Funada, Tatsuro; Shibuya, Tsubasa

    2016-08-01

    The American College of Radiology recommends dividing magnetic resonance imaging (MRI) machine rooms into four zones depending on the education level. However, structural limitations prevent applying this recommendation in most Japanese facilities. This study examines the effectiveness of using a belt partition to create this zonal division through a questionnaire survey covering three critical parameters: the influence of individuals' backgrounds (relevance to MRI, years of experience, post, occupation [i.e., nurse or nursing assistant], outpatient section or ward); the presence or absence of a door or belt partition (open or closed); and four personnel scenarios that may be encountered during a visit to an MRI site (e.g., visiting the MRI site to receive a patient). In this survey, the influence of individuals' backgrounds (maximum odds ratio: 6.3, 95% CI: 1.47-27.31) and of the personnel scenarios (maximum risk ratio: 2.4, 95% CI: 1.16-4.85) on dangerous actions was uncertain. Conversely, the presence of the door and belt partition had a significant influence (maximum risk ratio: 17.4, 95% CI: 7.94-17.38). For that reason, we suggest that visual impression has a strong influence on individuals' actions. Even if structural limitations are present, zonal division by belt partition will provide a visual deterrent, and the partitioned zone will serve as a buffer zone. We conclude that if the belt partition is used properly, it is an inexpensive and effective safety management device for MRI rooms.

  1. The role of fMRI in cognitive neuroscience: where do we stand?

    PubMed

    Poldrack, Russell A

    2008-04-01

    Functional magnetic resonance imaging (fMRI) has quickly become the most prominent tool in cognitive neuroscience. In this article, I outline some of the limits on the kinds of inferences that can be supported by fMRI, focusing particularly on reverse inference, in which the engagement of specific mental processes is inferred from patterns of brain activation. Although this form of inference is weak, newly developed methods from the field of machine learning offer the potential to formalize and strengthen reverse inferences. I conclude by discussing the increasing presence of fMRI results in the popular media and the ethical implications of the increasing predictive power of fMRI.

  2. A Novel Diffusion MRI Phantom, and a Method for Enhancing MR Image Quality | NCI Technology Transfer Center | TTC

    Cancer.gov

    The use of Polyvinyl Pyrrolidone (PVP) solutions of varying concentrations as phantoms for diffusion MRI calibration and quality control is disclosed. This diffusion MRI phantom material is already being adopted by radiologists for quality control and assurance in clinical studies.

  3. Machine learning for predicting the response of breast cancer to neoadjuvant chemotherapy

    PubMed Central

    Mani, Subramani; Chen, Yukun; Li, Xia; Arlinghaus, Lori; Chakravarthy, A Bapsi; Abramson, Vandana; Bhave, Sandeep R; Levy, Mia A; Xu, Hua; Yankeelov, Thomas E

    2013-01-01

    Objective To employ machine learning methods to predict the eventual therapeutic response of breast cancer patients after a single cycle of neoadjuvant chemotherapy (NAC). Materials and methods Quantitative dynamic contrast-enhanced MRI and diffusion-weighted MRI data were acquired on 28 patients before and after one cycle of NAC. A total of 118 semiquantitative and quantitative parameters were derived from these data and combined with 11 clinical variables. We used Bayesian logistic regression in combination with feature selection using a machine learning framework for predictive model building. Results The best predictive models using feature selection obtained an area under the curve of 0.86 and an accuracy of 0.86, with a sensitivity of 0.88 and a specificity of 0.82. Discussion With the numerous options for NAC available, development of a method to predict response early in the course of therapy is needed. Unfortunately, by the time most patients are found not to be responding, their disease may no longer be surgically resectable, and this situation could be avoided by the development of techniques to assess response earlier in the treatment regimen. The method outlined here is one possible solution to this important clinical problem. Conclusions Predictive modeling approaches based on machine learning using readily available clinical and quantitative MRI data show promise in distinguishing breast cancer responders from non-responders after the first cycle of NAC. PMID:23616206

  4. Camera calibration based on the back projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui

    2015-12-01

    Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee the minimum of 2D projection errors on the image plane, but not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration, which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimation of the camera parameters is refined by a non-linear function minimization process. The proposed method can obtain a more accurate calibration result, which is more physically useful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.
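    The back projection process described above can be sketched in a few lines: a pixel is mapped back through a pinhole model onto the calibration-board plane and compared with the ideal corner coordinates in 3D, which is the residual a BPP-based refinement would minimize. The intrinsics, pose, and corner layout below are illustrative values, not the paper's data.

```python
import numpy as np

# Hypothetical intrinsics and pose (illustrative, not from the paper).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                      # board parallel to image plane
t = np.array([0.0, 0.0, 500.0])    # board 500 mm in front of camera

# Ideal checkerboard corners on the board plane Z = 0 (mm).
board_pts = np.array([[0.0, 0.0, 0.0],
                      [30.0, 0.0, 0.0],
                      [0.0, 30.0, 0.0]])

def project(K, R, t, X):
    """Forward imaging process (FIP): 3D board point -> pixel."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def back_project(K, R, t, uv):
    """Back projection process (BPP): pixel -> intersection of the
    viewing ray with the board plane Z = 0, in board coordinates."""
    ray_cam = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    Rt = R.T
    origin = -Rt @ t          # camera centre in board coordinates
    direction = Rt @ ray_cam  # ray direction in board coordinates
    s = -origin[2] / direction[2]   # solve for Z = 0
    return origin + s * direction

# Round trip: the 3D residual that a BPP calibration would minimise.
residuals = [np.linalg.norm(back_project(K, R, t, project(K, R, t, X)) - X)
             for X in board_pts]
max_residual = max(residuals)   # ~0 when parameters are self-consistent
```

    With noisy extracted corners and imperfect parameters, these 3D residuals would be fed to a non-linear minimizer instead of being near zero.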

  5. Calibration procedures of the Tore-Supra infrared endoscopes

    NASA Astrophysics Data System (ADS)

    Desgranges, C.; Jouve, M.; Balorin, C.; Reichle, R.; Firdaouss, M.; Lipa, M.; Chantant, M.; Gardarein, J. L.; Saille, A.; Loarer, T.

    2018-01-01

    Five endoscopes equipped with infrared cameras working in the medium infrared range (3-5 μm) are installed on the controlled thermonuclear fusion research device Tore-Supra. These endoscopes aim at monitoring the surface temperature of the plasma facing components to prevent their overheating. Signals delivered by the infrared cameras through the endoscopes are analysed and used, on the one hand, in a real-time feedback control loop acting on the plasma heating systems to decrease plasma facing component surface temperatures when necessary, and on the other hand for physics studies such as determination of the incoming heat flux. To fulfil these two roles, very accurate knowledge of the absolute surface temperatures is mandatory; consequently, the infrared endoscopes must be calibrated through a very careful procedure. This means determining their transmission coefficients, which is a delicate operation. Methods to calibrate the infrared endoscopes during the shutdown period of the Tore-Supra machine are presented. As these do not allow determining possible transmittance evolution during operation, an in-situ method is also presented. It permits validation of the calibration performed in the laboratory as well as monitoring of transmittance evolution during machine operation, by use of the endoscope shutter and a dedicated plasma scenario developed to heat it. Possible improvements of this method are briefly discussed.

  6. Decision forests for learning prostate cancer probability maps from multiparametric MRI

    NASA Astrophysics Data System (ADS)

    Ehrenberg, Henry R.; Cornfeld, Daniel; Nawaf, Cayce B.; Sprenkle, Preston C.; Duncan, James S.

    2016-03-01

    Objectives: Advances in multiparametric magnetic resonance imaging (mpMRI) and ultrasound/MRI fusion imaging offer a powerful alternative to the typical undirected approach to diagnosing prostate cancer. However, these methods require the time and expertise needed to interpret mpMRI image scenes. In this paper, a machine learning framework for automatically detecting and localizing cancerous lesions within the prostate is developed and evaluated. Methods: Two studies were performed to gather MRI and pathology data. The 12 patients in the first study underwent an MRI session to obtain structural, diffusion-weighted, and dynamic contrast enhanced image volumes of the prostate, and regions suspected of being cancerous from the MRI data were manually contoured by radiologists. Whole-mount slices of the prostate were obtained for the patients in the second study, in addition to structural and diffusion-weighted MRI data, for pathology verification. A 3-D feature set for voxel-wise appearance description combining intensity data, textural operators, and zonal approximations was generated. Voxels in a test set were classified as normal or cancer using a decision forest-based model initialized using Gaussian discriminant analysis. A leave-one-patient-out cross-validation scheme was used to assess the predictions against the expert manual segmentations confirmed as cancer by biopsy. Results: We achieved an area under the average receiver-operator characteristic curve of 0.923 for the first study, and visual assessment of the probability maps showed 21 out of 22 tumors were identified while a high level of specificity was maintained. In addition to evaluating the model against related approaches, the effects of the individual MRI parameter types were explored, and pathological verification using whole-mount slices from the second study was performed. Conclusions: The results of this paper show that the
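    The leave-one-patient-out scheme described above can be sketched with scikit-learn's LeaveOneGroupOut, using a random forest as a stand-in for the decision-forest model; all features and labels here are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)

# Toy stand-in for voxel-wise appearance features: 12 "patients",
# 40 voxels each, 5 features per voxel (all values synthetic).
n_patients, n_voxels, n_features = 12, 40, 5
X = rng.normal(size=(n_patients * n_voxels, n_features))
y = rng.integers(0, 2, size=n_patients * n_voxels)      # 0 = normal, 1 = cancer
groups = np.repeat(np.arange(n_patients), n_voxels)     # patient ID per voxel

# Shift class-1 voxels so the forest has signal to learn.
X[y == 1] += 1.5

# One fold per held-out patient, so no voxel from a test patient
# ever appears in that fold's training set.
logo = LeaveOneGroupOut()
accuracies = []
for train_idx, test_idx in logo.split(X, y, groups):
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))

mean_accuracy = float(np.mean(accuracies))
```

    Grouping by patient (rather than plain k-fold over voxels) is what prevents optimistic bias from correlated voxels of the same prostate leaking across folds.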

  7. Small animal simultaneous PET/MRI: initial experiences in a 9.4 T microMRI

    NASA Astrophysics Data System (ADS)

    Harsha Maramraju, Sri; Smith, S. David; Junnarkar, Sachin S.; Schulz, Daniela; Stoll, Sean; Ravindranath, Bosky; Purschke, Martin L.; Rescia, Sergio; Southekal, Sudeepti; Pratte, Jean-François; Vaska, Paul; Woody, Craig L.; Schlyer, David J.

    2011-04-01

    We developed a non-magnetic positron-emission tomography (PET) device based on the rat conscious animal PET that operates in a small-animal magnetic resonance imaging (MRI) scanner, thereby enabling us to carry out simultaneous PET/MRI studies. The PET detector comprises 12 detector blocks, each being a 4 × 8 array of lutetium oxyorthosilicate crystals (2.22 × 2.22 × 5 mm3) coupled to a matching non-magnetic avalanche photodiode array. The detector blocks, housed in a plastic case, form a 38 mm inner diameter ring with an 18 mm axial extent. Custom-built MRI coils fit inside the positron-emission tomography (PET) device, operating in transceiver mode. The PET insert is integrated with a Bruker 9.4 T 210 mm clear-bore diameter MRI scanner. We acquired simultaneous PET/MR images of phantoms, of in vivo rat brain, and of cardiac-gated mouse heart using [11C]raclopride and 2-deoxy-2-[18F]fluoro-d-glucose PET radiotracers. There was minor interference between the PET electronics and the MRI during simultaneous operation, and small effects on the signal-to-noise ratio in the MR images in the presence of the PET, but no noticeable visual artifacts. Gradient echo and high-duty-cycle spin echo radio frequency (RF) pulses resulted in a 7% and a 28% loss in PET counts, respectively, due to high PET counts during the RF pulses that had to be gated out. The calibration of the activity concentration of PET data during MR pulsing is reproducible within less than 6%. Our initial results demonstrate the feasibility of performing simultaneous PET and MRI studies in adult rats and mice using the same PET insert in a small-bore 9.4 T MRI.

  8. Prediction of individual brain maturity using fMRI.

    PubMed

    Dosenbach, Nico U F; Nardos, Binyam; Cohen, Alexander L; Fair, Damien A; Power, Jonathan D; Church, Jessica A; Nelson, Steven M; Wig, Gagan S; Vogel, Alecia C; Lessov-Schlaggar, Christina N; Barnes, Kelly Anne; Dubis, Joseph W; Feczko, Eric; Coalson, Rebecca S; Pruett, John R; Barch, Deanna M; Petersen, Steven E; Schlaggar, Bradley L

    2010-09-10

    Group functional connectivity magnetic resonance imaging (fcMRI) studies have documented reliable changes in human functional brain maturity over development. Here we show that support vector machine-based multivariate pattern analysis extracts sufficient information from fcMRI data to make accurate predictions about individuals' brain maturity across development. The use of only 5 minutes of resting-state fcMRI data from 238 scans of typically developing volunteers (ages 7 to 30 years) allowed prediction of individual brain maturity as a functional connectivity maturation index. The resultant functional maturation curve accounted for 55% of the sample variance and followed a nonlinear asymptotic growth curve shape. The greatest relative contribution to predicting individual brain maturity was made by the weakening of short-range functional connections between the adult brain's major functional networks.
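    The nonlinear asymptotic growth curve mentioned above can be illustrated by fitting a saturating exponential to a maturation index over age; the curve form and all data below are synthetic illustrations, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def asymptotic_growth(age, a, b, c):
    """Saturating curve: rises steeply in childhood, then plateaus."""
    return a - b * np.exp(-c * age)

rng = np.random.default_rng(1)
ages = np.linspace(7, 30, 60)                  # years, as in the study's range
truth = asymptotic_growth(ages, 1.0, 1.2, 0.15)
index = truth + rng.normal(scale=0.05, size=ages.size)  # noisy maturation index

params, _ = curve_fit(asymptotic_growth, ages, index, p0=[1.0, 1.0, 0.1])
fitted = asymptotic_growth(ages, *params)

# Variance explained by the curve (the paper reports ~55% on real data).
ss_res = np.sum((index - fitted) ** 2)
ss_tot = np.sum((index - np.mean(index)) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```

    In the actual study the per-subject index comes from a support vector regression over functional connections, and the curve is fit to those predictions across subjects.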

  9. Using human brain activity to guide machine learning.

    PubMed

    Fong, Ruth C; Scheirer, Walter J; Cox, David D

    2018-03-29

    Machine learning is a field of computer science that builds algorithms that learn. In many cases, machine learning algorithms are used to recreate a human ability like adding a caption to a photo, driving a car, or playing a game. While the human brain has long served as a source of inspiration for machine learning, little effort has been made to directly use data collected from working brains as a guide for machine learning algorithms. Here we demonstrate a new paradigm of "neurally-weighted" machine learning, which takes fMRI measurements of human brain activity from subjects viewing images, and infuses these data into the training process of an object recognition learning algorithm to make it more consistent with the human brain. After training, these neurally-weighted classifiers are able to classify images without requiring any additional neural data. We show that our neural-weighting approach can lead to large performance gains when used with traditional machine vision features, as well as to significant improvements with already high-performing convolutional neural network features. The effectiveness of this approach points to a path forward for a new class of hybrid machine learning algorithms which take both inspiration and direct constraints from neuronal data.

  10. Thoughts turned into high-level commands: Proof-of-concept study of a vision-guided robot arm driven by functional MRI (fMRI) signals.

    PubMed

    Minati, Ludovico; Nigri, Anna; Rosazza, Cristina; Bruzzone, Maria Grazia

    2012-06-01

    Previous studies have demonstrated the possibility of using functional MRI to control a robot arm through a brain-machine interface by directly coupling haemodynamic activity in the sensory-motor cortex to the position of two axes. Here, we extend this work by implementing interaction at a more abstract level, whereby imagined actions deliver structured commands to a robot arm guided by a machine vision system. Rather than extracting signals from a small number of pre-selected regions, the proposed system adaptively determines at the individual level how to map representative brain areas to the input nodes of a classifier network. In this initial study, a median action recognition accuracy of 90% was attained on five volunteers performing a game consisting of collecting randomly positioned coloured pawns and placing them into cups. The "pawn" and "cup" instructions were imparted through four mental imagery tasks, linked to robot arm actions by a state machine. With the current implementation in the MATLAB language, the median action recognition time was 24.3 s and the robot execution time was 17.7 s. We demonstrate the notion of combining haemodynamic brain-machine interfacing with computer vision to implement interaction at the level of high-level commands rather than individual movements, which may find application in future fMRI approaches relevant to brain-lesioned patients, and provide source code supporting further work on larger command sets and real-time processing. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.

  11. Calibration of 3D ultrasound to an electromagnetic tracking system

    NASA Astrophysics Data System (ADS)

    Lang, Andrew; Parthasarathy, Vijay; Jain, Ameet

    2011-03-01

    The use of electromagnetic (EM) tracking is an important guidance tool that can be used to aid procedures requiring accurate localization such as needle injections or catheter guidance. Using EM tracking, the information from different modalities can be easily combined using pre-procedural calibration information. These calibrations are performed individually, per modality, allowing different imaging systems to be mixed and matched according to the procedure at hand. In this work, a framework for the calibration of a 3D transesophageal echocardiography probe to EM tracking is developed. The complete calibration framework includes three required steps: data acquisition, needle segmentation, and calibration. Ultrasound (US) images of an EM tracked needle must be acquired, with the positions of the needle in each volume subsequently extracted by segmentation. The calibration transformation is determined through a registration between the segmented points and the recorded EM needle positions. Additionally, the speed of sound is compensated for, since calibration is performed in water, which has a different speed of sound than is assumed by the US machine. A statistical validation framework has also been developed to provide further information related to the accuracy and consistency of the calibration. Further validation of the calibration showed an accuracy of 1.39 mm.
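    The core registration step described above is a rigid point-set alignment, which can be sketched with the closed-form Kabsch/Procrustes solution, together with a simple speed-of-sound rescaling; the sound speeds, pose, and point sets below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rotation R and translation t with R @ P + t ≈ Q
    (Kabsch/Procrustes on corresponding point sets of shape (3, N))."""
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    H = (P - p_mean) @ (Q - q_mean).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Synthetic segmented needle points in US-volume coordinates (mm).
rng = np.random.default_rng(2)
us_pts = rng.uniform(-20, 20, size=(3, 30))

# Speed-of-sound compensation: scanners typically assume ~1540 m/s
# (soft tissue), while room-temperature water is closer to ~1480 m/s.
us_pts_corrected = us_pts * (1480.0 / 1540.0)

# Ground-truth pose mapping corrected US points into EM coordinates.
angle = np.deg2rad(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[10.0], [-5.0], [40.0]])
em_pts = R_true @ us_pts_corrected + t_true

R_est, t_est = rigid_register(us_pts_corrected, em_pts)
fre = np.linalg.norm(R_est @ us_pts_corrected + t_est - em_pts, axis=0).mean()
```

    With noiseless correspondences the fiducial registration error (fre) is numerically zero; real segmented points carry noise, which is what the statistical validation framework quantifies.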

  12. Wavelet Entropy and Directed Acyclic Graph Support Vector Machine for Detection of Patients with Unilateral Hearing Loss in MRI Scanning.

    PubMed

    Wang, Shuihua; Yang, Ming; Du, Sidan; Yang, Jiquan; Liu, Bin; Gorriz, Juan M; Ramírez, Javier; Yuan, Ti-Fei; Zhang, Yudong

    2016-01-01

    Highlights: We develop a computer-aided diagnosis system for unilateral hearing loss detection in structural magnetic resonance imaging. Wavelet entropy is introduced to extract global features from brain images. A directed acyclic graph is employed to endow the support vector machine with the ability to handle multi-class problems. The developed computer-aided diagnosis system achieves an overall accuracy of 95.1% for the three-class problem of differentiating left-sided and right-sided hearing loss from healthy controls. Aim: Sensorineural hearing loss (SNHL) is correlated with many neurodegenerative diseases. More and more computer vision-based methods are being used to detect it automatically. Materials: We had in total 49 subjects, scanned by 3.0T MRI (Siemens Medical Solutions, Erlangen, Germany). The subjects comprise 14 patients with right-sided hearing loss (RHL), 15 patients with left-sided hearing loss (LHL), and 20 healthy controls (HC). Method: We treat this as a three-class classification problem: RHL, LHL, and HC. Wavelet entropy (WE) was extracted from the magnetic resonance images of each subject and then submitted to a directed acyclic graph support vector machine (DAG-SVM). Results: The results of 10 repetitions of 10-fold cross-validation show that 3-level decomposition yields an overall accuracy of 95.10% for this three-class classification problem, higher than feedforward neural network, decision tree, and naive Bayesian classifiers. Conclusions: This computer-aided diagnosis system is promising. We hope this study can attract more computer vision methods for detecting hearing loss.
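    Wavelet entropy of the kind used above is the Shannon entropy of the relative energies across wavelet subbands. A minimal 1-D sketch using a hand-rolled Haar decomposition (the paper works on 2-D images and its wavelet choice is not specified here) is:

```python
import numpy as np

def haar_step(x):
    """One level of the 1-D Haar wavelet transform."""
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def wavelet_entropy(signal, levels=3):
    """Shannon entropy of relative subband energies after a
    `levels`-deep Haar decomposition."""
    coeffs = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        coeffs.append(detail)
    coeffs.append(approx)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()
    p = p[p > 0]                     # convention: 0 * log 0 = 0
    return float(-np.sum(p * np.log(p)))

# A pure tone concentrates energy in few subbands -> low entropy;
# white noise spreads energy across subbands -> higher entropy.
n = 256
tone = np.sin(2 * np.pi * 4 * np.arange(n) / n)
noise = np.random.default_rng(3).normal(size=n)
we_tone, we_noise = wavelet_entropy(tone), wavelet_entropy(noise)
```

    The single entropy value (or a small vector of them, one per decomposition level) is what makes this a very compact global feature compared with raw voxel intensities.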

  13. Identification of Alzheimer's disease and mild cognitive impairment using multimodal sparse hierarchical extreme learning machine.

    PubMed

    Kim, Jongin; Lee, Boreom

    2018-05-07

    Different modalities such as structural MRI, FDG-PET, and CSF provide complementary information, which is likely to be very useful for diagnosis of AD and MCI. It is therefore possible to develop a more effective and accurate AD/MCI automatic diagnosis method by integrating the complementary information of different modalities. In this paper, we propose a multi-modal sparse hierarchical extreme learning machine (MSH-ELM). We used volume and mean intensity extracted from 93 regions of interest (ROIs) as features of MRI and FDG-PET, respectively, and used p-tau, t-tau, and Aβ42 as CSF features. In detail, a high-level representation was individually extracted from each of MRI, FDG-PET, and CSF using a stacked sparse extreme learning machine auto-encoder (sELM-AE). Then, another stacked sELM-AE was devised to acquire a joint hierarchical feature representation by fusing the high-level representations obtained from each modality. Finally, we classified the joint hierarchical feature representation using a kernel-based extreme learning machine (KELM). The results of MSH-ELM were compared with those of conventional ELM, single-kernel support vector machine (SK-SVM), multiple-kernel support vector machine (MK-SVM) and stacked auto-encoder (SAE). Performance was evaluated through 10-fold cross-validation. In the classification of the AD vs. HC and MCI vs. HC problems, the proposed MSH-ELM method showed mean balanced accuracies of 96.10% and 86.46%, respectively, which is much better than those of competing methods. In summary, the proposed algorithm exhibits consistently better performance than SK-SVM, ELM, MK-SVM and SAE in the two binary classification problems (AD vs. HC and MCI vs. HC). © 2018 Wiley Periodicals, Inc.
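    The ELM auto-encoder building block above has a simple core: random fixed input weights, a nonlinear hidden layer, and output weights solved in closed form by least squares. A minimal numpy sketch (without the sparsity penalty, and with synthetic stand-in features) is:

```python
import numpy as np

rng = np.random.default_rng(4)

def elm_autoencoder(X, n_hidden, rng):
    """Plain ELM auto-encoder sketch: random input weights W and bias b,
    hidden activations H, output weights beta solved analytically so
    that H @ beta ≈ X (the sparse variant adds an L1 penalty on beta)."""
    n_features = X.shape[1]
    W = rng.normal(size=(n_features, n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # random hidden layer
    beta, *_ = np.linalg.lstsq(H, X, rcond=None)  # closed-form output weights
    return H, beta

# Toy stand-in for 93-ROI features from one modality (50 subjects).
X = rng.normal(size=(50, 93))
H, beta = elm_autoencoder(X, n_hidden=40, rng=rng)
reconstruction_error = float(np.mean((H @ beta - X) ** 2))

# Stacking: the hidden representation of one sELM-AE becomes the input
# of the next; in MSH-ELM the per-modality representations (MRI,
# FDG-PET, CSF) would then be fused by a further joint encoder.
H2, beta2 = elm_autoencoder(H, n_hidden=20, rng=rng)
```

    The appeal over backprop-trained auto-encoders is that only one linear system is solved per layer, which makes training essentially instantaneous at this scale.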

  14. Machine-Specific Magnetic Resonance Imaging Quality Control Procedures for Stereotactic Radiosurgery Treatment Planning

    PubMed Central

    Taghizadeh, Somayeh; Yang, Claus Chunli; R. Kanakamedala, Madhava; Morris, Bart; Vijayakumar, Srinivasan

    2017-01-01

    Purpose Magnetic resonance (MR) images are necessary for accurate contouring of intracranial targets, determination of gross target volume and evaluation of organs at risk during stereotactic radiosurgery (SRS) treatment planning procedures. Many centers use magnetic resonance imaging (MRI) simulators or regular diagnostic MRI machines for SRS treatment planning; while both types of machine require two stages of quality control (QC), both machine- and patient-specific, before use for SRS, no accepted guidelines for such QC currently exist. This article describes appropriate machine-specific QC procedures for SRS applications. Methods and materials We describe the adaptation of American College of Radiology (ACR)-recommended QC tests using an ACR MRI phantom for SRS treatment planning. In addition, commercial Quasar MRID3D and Quasar GRID3D phantoms were used to evaluate the effects of static magnetic field (B0) inhomogeneity, gradient nonlinearity, and a Leksell G frame (SRS frame) and its accessories on geometrical distortion in MR images. Results QC procedures found in-plane distortions in the X-direction (Maximum = 3.5 mm, Mean = 0.91 mm, Standard deviation = 0.67 mm, >2.5 mm (%) = 2), the Y-direction (Maximum = 2.51 mm, Mean = 0.52 mm, Standard deviation = 0.39 mm, >2.5 mm (%) = 0), and the Z-direction (Maximum = 13.1 mm, Mean = 2.38 mm, Standard deviation = 2.45 mm, >2.5 mm (%) = 34), and < 1 mm distortion at a head-sized region of interest. MR images acquired using a Leksell G frame and localization devices showed a mean absolute deviation of 2.3 mm from isocenter. The results of modified ACR tests were all within recommended limits, and baseline measurements have been defined for regular weekly QC tests. Conclusions With appropriate QC procedures in place, it is possible to routinely obtain clinically useful MR images suitable for SRS treatment planning purposes. MRI examination for SRS planning can benefit from the improved localization and planning possible

  15. Machine-Specific Magnetic Resonance Imaging Quality Control Procedures for Stereotactic Radiosurgery Treatment Planning.

    PubMed

    Fatemi, Ali; Taghizadeh, Somayeh; Yang, Claus Chunli; R Kanakamedala, Madhava; Morris, Bart; Vijayakumar, Srinivasan

    2017-12-18

    Purpose Magnetic resonance (MR) images are necessary for accurate contouring of intracranial targets, determination of gross target volume and evaluation of organs at risk during stereotactic radiosurgery (SRS) treatment planning procedures. Many centers use magnetic resonance imaging (MRI) simulators or regular diagnostic MRI machines for SRS treatment planning; while both types of machine require two stages of quality control (QC), both machine- and patient-specific, before use for SRS, no accepted guidelines for such QC currently exist. This article describes appropriate machine-specific QC procedures for SRS applications. Methods and materials We describe the adaptation of American College of Radiology (ACR)-recommended QC tests using an ACR MRI phantom for SRS treatment planning. In addition, commercial Quasar MRID3D and Quasar GRID3D phantoms were used to evaluate the effects of static magnetic field (B0) inhomogeneity, gradient nonlinearity, and a Leksell G frame (SRS frame) and its accessories on geometrical distortion in MR images. Results QC procedures found in-plane distortions in the X-direction (Maximum = 3.5 mm, Mean = 0.91 mm, Standard deviation = 0.67 mm, >2.5 mm (%) = 2), the Y-direction (Maximum = 2.51 mm, Mean = 0.52 mm, Standard deviation = 0.39 mm, >2.5 mm (%) = 0), and the Z-direction (Maximum = 13.1 mm, Mean = 2.38 mm, Standard deviation = 2.45 mm, >2.5 mm (%) = 34), and < 1 mm distortion at a head-sized region of interest. MR images acquired using a Leksell G frame and localization devices showed a mean absolute deviation of 2.3 mm from isocenter. The results of modified ACR tests were all within recommended limits, and baseline measurements have been defined for regular weekly QC tests. Conclusions With appropriate QC procedures in place, it is possible to routinely obtain clinically useful MR images suitable for SRS treatment planning purposes. MRI examination for SRS planning can benefit from the improved localization and planning

  16. Automated Quality Assessment of Structural Magnetic Resonance Brain Images Based on a Supervised Machine Learning Algorithm.

    PubMed

    Pizarro, Ricardo A; Cheng, Xi; Barnett, Alan; Lemaitre, Herve; Verchinski, Beth A; Goldman, Aaron L; Xiao, Ena; Luo, Qian; Berman, Karen F; Callicott, Joseph H; Weinberger, Daniel R; Mattay, Venkata S

    2016-01-01

    High-resolution three-dimensional magnetic resonance imaging (3D-MRI) is being increasingly used to delineate morphological changes underlying neuropsychiatric disorders. Unfortunately, artifacts frequently compromise the utility of 3D-MRI yielding irreproducible results, from both type I and type II errors. It is therefore critical to screen 3D-MRIs for artifacts before use. Currently, quality assessment involves slice-wise visual inspection of 3D-MRI volumes, a procedure that is both subjective and time consuming. Automating the quality rating of 3D-MRI could improve the efficiency and reproducibility of the procedure. The present study is one of the first efforts to apply a support vector machine (SVM) algorithm in the quality assessment of structural brain images, using global and region of interest (ROI) automated image quality features developed in-house. SVM is a supervised machine-learning algorithm that can predict the category of test datasets based on the knowledge acquired from a learning dataset. The performance (accuracy) of the automated SVM approach was assessed, by comparing the SVM-predicted quality labels to investigator-determined quality labels. The accuracy for classifying 1457 3D-MRI volumes from our database using the SVM approach is around 80%. These results are promising and illustrate the possibility of using SVM as an automated quality assessment tool for 3D-MRI.

  17. Autonomous Landmark Calibration Method for Indoor Localization

    PubMed Central

    Kim, Jae-Hoon; Kim, Byoung-Seop

    2017-01-01

    Machine-generated data expansion is a global phenomenon in recent Internet services. The proliferation of mobile communication and smart devices has increased the utilization of machine-generated data significantly. One of the most promising applications of machine-generated data is the estimation of the location of smart devices. The motion sensors integrated into smart devices generate continuous data that can be used to estimate the location of pedestrians in an indoor environment. We focus on the estimation of the accurate location of smart devices by determining the landmarks appropriately for location error calibration. In the motion sensor-based location estimation, the proposed threshold control method determines valid landmarks in real time to avoid the accumulation of errors. A statistical method analyzes the acquired motion sensor data and proposes a valid landmark for every movement of the smart devices. Motion sensor data used in the testbed are collected from the actual measurements taken throughout a commercial building to demonstrate the practical usefulness of the proposed method. PMID:28837071
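    The threshold control idea above — flagging motion-sensor events as landmarks while avoiding repeated detections of a single event — can be sketched as follows; the signal, threshold, and refractory gap are all hypothetical illustration values.

```python
import numpy as np

def detect_landmarks(accel_mag, threshold, min_gap):
    """Flag samples whose acceleration magnitude exceeds `threshold` as
    candidate landmarks, keeping at most one landmark per `min_gap`
    samples so a single physical event is not counted repeatedly."""
    landmarks = []
    last = -min_gap
    for i, a in enumerate(accel_mag):
        if a > threshold and i - last >= min_gap:
            landmarks.append(i)
            last = i
    return landmarks

# Synthetic accelerometer magnitude: quiet walking around 1 g with two
# sharp events (e.g. a door push at sample 100, stairs at sample 300).
rng = np.random.default_rng(5)
signal = rng.normal(loc=1.0, scale=0.05, size=500)
signal[100] += 2.0
signal[300] += 2.5

landmarks = detect_landmarks(signal, threshold=2.0, min_gap=20)
```

    In the paper the threshold is adjusted in real time and each candidate is statistically validated before it is used to reset the accumulated position error.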

  18. Robot calibration with a photogrammetric on-line system using reseau scanning cameras

    NASA Astrophysics Data System (ADS)

    Diewald, Bernd; Godding, Robert; Henrich, Andreas

    1994-03-01

    The possibility for testing and calibration of industrial robots becomes more and more important for manufacturers and users of such systems. Exacting applications in connection with the off-line programming techniques or the use of robots as measuring machines are impossible without a preceding robot calibration. At the LPA an efficient calibration technique has been developed. Instead of modeling the kinematic behavior of a robot, the new method describes the pose deviations within a user-defined section of the robot's working space. High-precision determination of 3D coordinates of defined path positions is necessary for calibration and can be done by digital photogrammetric systems. For the calibration of a robot at the LPA a digital photogrammetric system with three Rollei Reseau Scanning Cameras was used. This system allows an automatic measurement of a large number of robot poses with high accuracy.

  19. Towards System Calibration of Panoramic Laser Scanners from a Single Station

    PubMed Central

    Medić, Tomislav; Holst, Christoph; Kuhlmann, Heiner

    2017-01-01

    Terrestrial laser scanner measurements suffer from systematic errors due to internal misalignments. The magnitude of the resulting errors in the point cloud in many cases exceeds the magnitude of random errors. Hence, the task of calibrating a laser scanner is important for applications with high accuracy demands. This paper primarily addresses the case of panoramic terrestrial laser scanners. Herein, it is proven that most of the calibration parameters can be estimated from a single scanner station without a need for any reference information. This hypothesis is confirmed through an empirical experiment, which was conducted in a large machine hall using a Leica Scan Station P20 panoramic laser scanner. The calibration approach is based on the widely used target-based self-calibration approach, with small modifications. A new angular parameterization is used in order to implicitly introduce measurements in two faces of the instrument and for the implementation of calibration parameters describing genuine mechanical misalignments. Additionally, a computationally preferable calibration algorithm based on the two-face measurements is introduced. In the end, the calibration results are discussed, highlighting all necessary prerequisites for the scanner calibration from a single scanner station. PMID:28513548

  20. Predicting primary progressive aphasias with support vector machine approaches in structural MRI data.

    PubMed

    Bisenius, Sandrine; Mueller, Karsten; Diehl-Schmid, Janine; Fassbender, Klaus; Grimmer, Timo; Jessen, Frank; Kassubek, Jan; Kornhuber, Johannes; Landwehrmeyer, Bernhard; Ludolph, Albert; Schneider, Anja; Anderl-Straub, Sarah; Stuke, Katharina; Danek, Adrian; Otto, Markus; Schroeter, Matthias L

    2017-01-01

    Primary progressive aphasia (PPA) encompasses three subtypes: the nonfluent/agrammatic variant, the semantic variant, and the logopenic variant, which are characterized by distinct patterns of language difficulties and regional brain atrophy. To validate the potential of structural magnetic resonance imaging data for early individual diagnosis, we used support vector machine classification on grey matter density maps obtained by voxel-based morphometry analysis to discriminate PPA subtypes (44 patients: 16 nonfluent/agrammatic variant PPA, 17 semantic variant PPA, 11 logopenic variant PPA) from 20 healthy controls (matched for sample size, age, and gender) in the cohort of the multi-center study of the German consortium for frontotemporal lobar degeneration. Here, we compared a whole-brain approach with a meta-analysis-based, disease-specific regions-of-interest approach for support vector machine classification. We also used support vector machine classification to discriminate the three PPA subtypes from each other. Whole-brain support vector machine classification achieved very high accuracy, between 91 and 97%, for identifying specific PPA subtypes vs. healthy controls, and 78/95% for discriminating the semantic variant from the nonfluent/agrammatic or logopenic variants. Accuracy was low (55%) only for the discrimination between the nonfluent/agrammatic and logopenic variants. Interestingly, the regions that contributed most to the support vector machine classification of patients corresponded largely to the regions that were atrophic in these patients, as revealed by group comparisons. Although the whole-brain approach also took into account regions not covered in the regions-of-interest approach, both approaches showed similar accuracies due to the disease specificity of the selected networks. In conclusion, support vector machine classification of multi-center structural magnetic resonance imaging data enables prediction of PPA subtypes with

  1. Study on on-machine defects measuring system on high power laser optical elements

    NASA Astrophysics Data System (ADS)

    Luo, Chi; Shi, Feng; Lin, Zhifan; Zhang, Tong; Wang, Guilin

    2017-10-01

    Surface defects on high power laser optical elements degrade the performance of the imaging system, increasing energy consumption and damaging the film layer. To improve the detection of surface defects on high power laser optical elements, an on-machine defects measuring system was investigated. First, the selection and design were completed through a working-condition analysis of the on-machine defects detection system. Processing algorithms were designed to realize the classification, recognition, and evaluation of surface defects. A calibration experiment for scratches was performed using a self-made standard alignment plate. Finally, the detection and evaluation of surface defects on a large-diameter semi-cylindrical silicon mirror were realized. The calibration results show that the size deviation is less than 4%, which meets the precision requirement for defect detection. Through image-based detection, the on-machine defects detection system can realize accurate identification of surface defects.

  2. Identifying patients with Alzheimer's disease using resting-state fMRI and graph theory.

    PubMed

    Khazaee, Ali; Ebrahimzadeh, Ata; Babajani-Feremi, Abbas

    2015-11-01

    Study of the brain network on the basis of resting-state functional magnetic resonance imaging (fMRI) has provided promising results for investigating disease-related changes in connectivity among brain regions. Graph theory can efficiently characterize different aspects of the brain network by calculating measures of integration and segregation. In this study, we combine graph theoretical approaches with advanced machine learning methods to study functional brain network alteration in patients with Alzheimer's disease (AD). A support vector machine (SVM) was used to explore the ability of graph measures in the diagnosis of AD. We applied our method to resting-state fMRI data from twenty patients with AD and twenty age- and gender-matched healthy subjects. The data were preprocessed and each subject's graph was constructed by parcellation of the whole brain into 90 distinct regions using the automated anatomical labeling (AAL) atlas. The graph measures were then calculated and used as the discriminating features. The extracted network-based features were fed to different feature selection algorithms to choose the most significant features. In addition to the machine learning approach, statistical analysis was performed on the connectivity matrices to find altered connectivity patterns in patients with AD. Using the selected features, we were able to classify patients with AD from healthy subjects with an accuracy of 100%. The results of this study show that pattern recognition and graph analysis of the brain network, on the basis of resting-state fMRI data, can efficiently assist in the diagnosis of AD. Classification based on resting-state fMRI can be used as a non-invasive and automatic tool for the diagnosis of Alzheimer's disease. Copyright © 2015 International Federation of Clinical Neurophysiology. All rights reserved.
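The pipeline above (ROI time series, then a graph, then integration/segregation measures used as features) can be sketched with plain NumPy. This is an illustrative toy, not the authors' implementation; the 90-region count merely mirrors the AAL parcellation mentioned in the abstract.

```python
import numpy as np

def graph_measures(ts, threshold=0.3):
    """Toy illustration: build a binary graph from ROI time series by
    thresholding pairwise correlations, then compute two simple measures."""
    corr = np.corrcoef(ts)                      # ROI-by-ROI correlation matrix
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)                    # no self-connections
    degree = adj.sum(axis=1)                    # node degree (integration proxy)
    clustering = np.zeros(len(adj))             # local clustering (segregation proxy)
    for i in range(len(adj)):
        nbrs = np.flatnonzero(adj[i])
        k = len(nbrs)
        if k > 1:
            links = adj[np.ix_(nbrs, nbrs)].sum() / 2
            clustering[i] = 2 * links / (k * (k - 1))
    return degree, clustering

rng = np.random.default_rng(0)
ts = rng.standard_normal((90, 120))             # 90 AAL regions x 120 volumes
deg, clu = graph_measures(ts)
features = np.concatenate([deg, clu])           # per-subject feature vector
```

The feature vector would then be passed to a feature selector and an SVM, as the abstract describes.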

  3. Mathematical calibration procedure of a capacitive sensor-based indexed metrology platform

    NASA Astrophysics Data System (ADS)

    Brau-Avila, A.; Santolaria, J.; Acero, R.; Valenzuela-Galvan, M.; Herrera-Jimenez, V. M.; Aguilar, J. J.

    2017-03-01

    The demand for faster and more reliable measuring tasks for the control and quality assurance of modern production systems has created new challenges for the field of coordinate metrology. Thus, the search for new solutions in coordinate metrology systems and the need for the development of existing ones still persists. One example of such a system is the portable coordinate measuring machine (PCMM), the use of which in industry has considerably increased in recent years, mostly due to its flexibility for accomplishing in-line measuring tasks as well as its reduced cost and operational advantages compared to traditional coordinate measuring machines. Nevertheless, PCMMs have a significant drawback derived from the techniques applied in the verification and optimization procedures of their kinematic parameters. These techniques are based on the capture of data with the measuring instrument from a calibrated gauge object, fixed successively in various positions so that most of the instrument measuring volume is covered, which results in time-consuming, tedious and expensive verification and optimization procedures. In this work the mathematical calibration procedure of a capacitive sensor-based indexed metrology platform (IMP) is presented. This calibration procedure is based on the readings and geometric features of six capacitive sensors and their targets with nanometer resolution. The final goal of the IMP calibration procedure is to optimize the geometric features of the capacitive sensors and their targets in order to use the optimized data in the verification procedures of PCMMs.

  4. Modeling and Calibration of a Novel One-Mirror Galvanometric Laser Scanner

    PubMed Central

    Yu, Chengyi; Chen, Xiaobo; Xi, Juntong

    2017-01-01

    A laser stripe sensor has limited application when a point cloud of geometric samples on the surface of the object needs to be collected, so a galvanometric laser scanner is designed by using a one-mirror galvanometer element as its mechanical device to drive the laser stripe to sweep along the object. A novel mathematical model is derived for the proposed galvanometer laser scanner without any position assumptions and then a model-driven calibration procedure is proposed. Compared with available model-driven approaches, the influence of machining and assembly errors is considered in the proposed model. Meanwhile, a plane-constraint-based approach is proposed to extract a large number of calibration points effectively and accurately to calibrate the galvanometric laser scanner. Repeatability and accuracy of the galvanometric laser scanner are evaluated on the automobile production line to verify the efficiency and accuracy of the proposed calibration method. Experimental results show that the proposed calibration approach yields similar measurement performance compared with a look-up table calibration method. PMID:28098844
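The plane-constraint idea rests on a standard geometric primitive: fitting a plane to measured points by least squares. Below is a minimal sketch of that primitive only, assuming NumPy; it is not the paper's full scanner model or calibration procedure.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit via SVD: returns (centroid, unit normal).
    The normal is the direction of least variance of the point cloud."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return centroid, normal

# synthetic calibration points near the plane z = 0.5x + 2
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.5 * xy[:, 0] + 2 + 0.001 * rng.standard_normal(200)
pts = np.column_stack([xy, z])
c, n = fit_plane(pts)
residuals = (pts - c) @ n            # signed point-to-plane distances
```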

  5. Predicting conversion from MCI to AD using resting-state fMRI, graph theoretical approach and SVM.

    PubMed

    Hojjati, Seyed Hani; Ebrahimzadeh, Ata; Khazaee, Ali; Babajani-Feremi, Abbas

    2017-04-15

    We investigated identifying patients with mild cognitive impairment (MCI) who progress to Alzheimer's disease (AD), MCI converters (MCI-C), from those with MCI who do not progress to AD, MCI non-converters (MCI-NC), based on resting-state fMRI (rs-fMRI). Graph theory and machine learning approaches were utilized to predict the progression of patients with MCI to AD using rs-fMRI. Eighteen MCI converters (average age 73.6 years; 11 male) and 62 age-matched MCI non-converters (average age 73.0 years, 28 male) were included in this study. We trained and tested a support vector machine (SVM) to classify MCI-C from MCI-NC using features constructed from local and global graph measures. A novel feature selection algorithm was developed and utilized to select an optimal subset of features. Using the optimal subset of features in the SVM, we classified MCI-C from MCI-NC with an accuracy, sensitivity, specificity, and area under the receiver operating characteristic (ROC) curve of 91.4%, 83.24%, 90.1%, and 0.95, respectively. Furthermore, the results of our statistical analyses were used to identify the affected brain regions in AD. To the best of our knowledge, this is the first study that combines graph measures (constructed based on rs-fMRI) with a machine learning approach to accurately classify MCI-C from MCI-NC. The results of this study demonstrate the potential of the proposed approach for early AD diagnosis and the capability of rs-fMRI to predict conversion from MCI to AD by identifying the affected brain regions underlying this conversion. Copyright © 2017 Elsevier B.V. All rights reserved.
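The paper's feature selection algorithm is novel and not reproduced here. A common filter-style baseline against which such algorithms are compared is ranking features by a two-sample t-statistic and keeping the top k; a hedged sketch on synthetic data:

```python
import numpy as np

def top_k_by_tstat(X, y, k=10):
    """Filter-style feature selection (an illustration only, not the
    paper's algorithm): rank features by the absolute two-sample
    t-statistic between groups and keep the top k."""
    g0, g1 = X[y == 0], X[y == 1]
    s = np.sqrt(g0.var(0, ddof=1) / len(g0) + g1.var(0, ddof=1) / len(g1))
    t = np.abs(g0.mean(0) - g1.mean(0)) / (s + 1e-12)
    return np.argsort(t)[::-1][:k]

rng = np.random.default_rng(2)
X = rng.standard_normal((80, 200))     # 80 subjects x 200 graph features
y = np.repeat([0, 1], 40)              # MCI-NC vs MCI-C labels (synthetic)
X[y == 1, :5] += 1.5                   # make the first 5 features informative
selected = top_k_by_tstat(X, y, k=5)
```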

  6. The impact of inspired oxygen levels on calibrated fMRI measurements of M, OEF and resting CMRO2 using combined hypercapnia and hyperoxia

    PubMed Central

    Lajoie, Isabelle; Tancredi, Felipe B.; Hoge, Richard D.

    2017-01-01

    Recent calibrated fMRI techniques using combined hypercapnia and hyperoxia allow the mapping of the resting cerebral metabolic rate of oxygen (CMRO2) in absolute units, the oxygen extraction fraction (OEF), and the calibration parameter M (maximum BOLD). The adoption of such a technique requires knowledge of the precision and accuracy of the model-derived parameters. One of the factors that may impact the precision and accuracy is the level of oxygen provided during periods of hyperoxia (HO). A high level of oxygen may bring the BOLD responses closer to the maximum M value, and hence reduce the error associated with the M interpolation. However, an increased concentration of paramagnetic oxygen in the inhaled air may result in a larger susceptibility area around the frontal sinuses and nasal cavity. Additionally, a higher O2 level may generate larger arterial blood T1 shortening, which requires a larger cerebral blood flow (CBF) T1 correction. To evaluate the impact of inspired oxygen levels on M, OEF and CMRO2 estimates, a cohort of six healthy adults underwent two different protocols: one where 60% O2 was administered during HO (low HO or LHO) and one where 100% O2 was administered (high HO or HHO). The QUantitative O2 (QUO2) MRI approach was employed, in which CBF and R2* are acquired simultaneously during periods of hypercapnia (HC) and hyperoxia, using a clinical 3 T scanner. Scan sessions were repeated to assess the repeatability of results at the different O2 levels. T1 values during periods of hyperoxia were estimated from an empirical ex-vivo relationship between T1 and the arterial partial pressure of O2. As expected, the T1 estimates revealed a larger T1 shortening in arterial blood when administering 100% O2 relative to 60% O2 (T1LHO = 1.56±0.01 sec vs. T1HHO = 1.47±0.01 sec, P < 4*10^-13). In regard to the susceptibility artifacts, the patterns and number of affected voxels were comparable irrespective of the O2 concentration. Finally, the model

  7. The impact of inspired oxygen levels on calibrated fMRI measurements of M, OEF and resting CMRO2 using combined hypercapnia and hyperoxia.

    PubMed

    Lajoie, Isabelle; Tancredi, Felipe B; Hoge, Richard D

    2017-01-01

    Recent calibrated fMRI techniques using combined hypercapnia and hyperoxia allow the mapping of the resting cerebral metabolic rate of oxygen (CMRO2) in absolute units, the oxygen extraction fraction (OEF), and the calibration parameter M (maximum BOLD). The adoption of such a technique requires knowledge of the precision and accuracy of the model-derived parameters. One of the factors that may impact the precision and accuracy is the level of oxygen provided during periods of hyperoxia (HO). A high level of oxygen may bring the BOLD responses closer to the maximum M value, and hence reduce the error associated with the M interpolation. However, an increased concentration of paramagnetic oxygen in the inhaled air may result in a larger susceptibility area around the frontal sinuses and nasal cavity. Additionally, a higher O2 level may generate larger arterial blood T1 shortening, which requires a larger cerebral blood flow (CBF) T1 correction. To evaluate the impact of inspired oxygen levels on M, OEF and CMRO2 estimates, a cohort of six healthy adults underwent two different protocols: one where 60% O2 was administered during HO (low HO or LHO) and one where 100% O2 was administered (high HO or HHO). The QUantitative O2 (QUO2) MRI approach was employed, in which CBF and R2* are acquired simultaneously during periods of hypercapnia (HC) and hyperoxia, using a clinical 3 T scanner. Scan sessions were repeated to assess the repeatability of results at the different O2 levels. T1 values during periods of hyperoxia were estimated from an empirical ex-vivo relationship between T1 and the arterial partial pressure of O2. As expected, the T1 estimates revealed a larger T1 shortening in arterial blood when administering 100% O2 relative to 60% O2 (T1LHO = 1.56±0.01 sec vs. T1HHO = 1.47±0.01 sec, P < 4*10^-13). In regard to the susceptibility artifacts, the patterns and number of affected voxels were comparable irrespective of the O2 concentration. Finally, the model

  8. Wavelet Entropy and Directed Acyclic Graph Support Vector Machine for Detection of Patients with Unilateral Hearing Loss in MRI Scanning

    PubMed Central

    Wang, Shuihua; Yang, Ming; Du, Sidan; Yang, Jiquan; Liu, Bin; Gorriz, Juan M.; Ramírez, Javier; Yuan, Ti-Fei; Zhang, Yudong

    2016-01-01

    Highlights: We develop a computer-aided diagnosis system for unilateral hearing loss detection in structural magnetic resonance imaging. Wavelet entropy is introduced to extract global features from brain images. A directed acyclic graph is employed to endow the support vector machine with the ability to handle multi-class problems. The developed computer-aided diagnosis system achieves an overall accuracy of 95.1% for the three-class problem of differentiating left-sided and right-sided hearing loss from healthy controls. Aim: Sensorineural hearing loss (SNHL) is correlated with many neurodegenerative diseases. More and more computer vision based methods are being used to detect it automatically. Materials: We had in total 49 subjects, scanned by 3.0T MRI (Siemens Medical Solutions, Erlangen, Germany): 14 patients with right-sided hearing loss (RHL), 15 patients with left-sided hearing loss (LHL), and 20 healthy controls (HC). Method: We treat this as a three-class classification problem: RHL, LHL, and HC. Wavelet entropy (WE) was extracted from the magnetic resonance images of each subject and then submitted to a directed acyclic graph support vector machine (DAG-SVM). Results: Ten repetitions of 10-fold cross validation show that 3-level decomposition yields an overall accuracy of 95.10% for this three-class classification problem, higher than feedforward neural network, decision tree, and naive Bayesian classifiers. Conclusions: This computer-aided diagnosis system is promising. We hope this study can attract more computer vision methods for detecting hearing loss. PMID:27807415
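Wavelet entropy can be illustrated in one dimension: decompose the signal, compute each sub-band's relative energy, and take the Shannon entropy of that distribution. The sketch below uses a hand-rolled Haar transform as a stand-in for the 2-D wavelet decomposition of MR slices described in the abstract.

```python
import numpy as np

def haar_step(x):
    """One level of the 1-D Haar wavelet transform: (approximation, detail)."""
    x = x[: len(x) // 2 * 2]
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def wavelet_entropy(signal, levels=3):
    """Shannon entropy of the relative sub-band energies after a
    multi-level wavelet decomposition (illustrative 1-D version)."""
    approx, energies = signal, []
    for _ in range(levels):
        approx, detail = haar_step(approx)
        energies.append(np.sum(detail ** 2))
    energies.append(np.sum(approx ** 2))
    p = np.array(energies) / np.sum(energies)   # relative sub-band energy
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(3)
noise = rng.standard_normal(256)                 # broadband: energy spread out
tone = np.sin(np.linspace(0, 8 * np.pi, 256))    # narrowband: energy concentrated
```

A broadband signal spreads energy across sub-bands (high entropy), while a narrowband one concentrates it (low entropy), which is why the measure discriminates image textures.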

  9. A semi-supervised Support Vector Machine model for predicting the language outcomes following cochlear implantation based on pre-implant brain fMRI imaging.

    PubMed

    Tan, Lirong; Holland, Scott K; Deshpande, Aniruddha K; Chen, Ye; Choo, Daniel I; Lu, Long J

    2015-12-01

    We developed a machine learning model to predict whether or not a cochlear implant (CI) candidate will develop effective language skills within 2 years after the CI surgery by using the pre-implant brain fMRI data from the candidate. The language performance was measured 2 years after the CI surgery by the Clinical Evaluation of Language Fundamentals-Preschool, Second Edition (CELF-P2). Based on the CELF-P2 scores, the CI recipients were designated as either effective or ineffective CI users. For feature extraction from the fMRI data, we constructed contrast maps using the general linear model, and then utilized the Bag-of-Words (BoW) approach that we previously published to convert the contrast maps into feature vectors. We trained both supervised models and semi-supervised models to classify CI users as effective or ineffective. Compared with the conventional feature extraction approach, which used each single voxel as a feature, our BoW approach gave rise to much better performance for the classification of effective versus ineffective CI users. The semi-supervised model with the feature set extracted by the BoW approach from the contrast of speech versus silence achieved a leave-one-out cross-validation AUC as high as 0.97. Recursive feature elimination unexpectedly revealed that two features were sufficient to provide highly accurate classification of effective versus ineffective CI users based on our current dataset. We have validated the hypothesis that pre-implant cortical activation patterns revealed by fMRI during infancy correlate with language performance 2 years after cochlear implantation. The two brain regions highlighted by our classifier are potential biomarkers for the prediction of CI outcomes. Our study also demonstrated the superiority of the semi-supervised model over the supervised model. It is always worthwhile to try a semi-supervised model when unlabeled data are available.
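The Bag-of-Words step, quantizing contrast-map patches against a learned codebook and histogramming the assignments, can be sketched as follows. The patch and codebook sizes are illustrative assumptions, not the paper's settings, and the k-means here is a minimal stand-in.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means for codebook learning (illustrative only)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return centers

def bow_features(patches, centers):
    """Assign each patch to its nearest codeword, histogram the counts."""
    labels = np.argmin(((patches[:, None] - centers) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()                     # normalized BoW vector

rng = np.random.default_rng(4)
train_patches = rng.standard_normal((500, 16))   # e.g. 4x4 contrast-map patches
codebook = kmeans(train_patches, k=8)
subject = rng.standard_normal((100, 16))         # one subject's patches
feat = bow_features(subject, codebook)
```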

  10. Hybrid MRI-Ultrasound acquisitions, and scannerless real-time imaging.

    PubMed

    Preiswerk, Frank; Toews, Matthew; Cheng, Cheng-Chieh; Chiou, Jr-Yuan George; Mei, Chang-Sheng; Schaefer, Lena F; Hoge, W Scott; Schwartz, Benjamin M; Panych, Lawrence P; Madore, Bruno

    2017-09-01

    To combine MRI, ultrasound, and computer science methodologies toward generating MRI contrast at the high frame rates of ultrasound, inside and even outside the MRI bore. A small transducer, held onto the abdomen with an adhesive bandage, collected ultrasound signals during MRI. Based on these ultrasound signals and their correlations with MRI, a machine-learning algorithm created synthetic MR images at frame rates up to 100 per second. In one particular implementation, volunteers were taken out of the MRI bore with the ultrasound sensor still in place, and MR images were generated on the basis of ultrasound signal and learned correlations alone in a "scannerless" manner. Hybrid ultrasound-MRI data were acquired in eight separate imaging sessions. Locations of liver features, in synthetic images, were compared with those from acquired images: The mean error was 1.0 pixel (2.1 mm), with best case 0.4 and worst case 4.1 pixels (in the presence of heavy coughing). For results from outside the bore, qualitative validation involved optically tracked ultrasound imaging with/without coughing. The proposed setup can generate an accurate stream of high-speed MR images, up to 100 frames per second, inside or even outside the MR bore. Magn Reson Med 78:897-908, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  11. The precision measurement and assembly for miniature parts based on double machine vision systems

    NASA Astrophysics Data System (ADS)

    Wang, X. D.; Zhang, L. F.; Xin, M. Z.; Qu, Y. Q.; Luo, Y.; Ma, T. M.; Chen, L.

    2015-02-01

    In the process of miniature parts' assembly, structural features on the bottom or side of the parts often need to be aligned and positioned. General assembly equipment integrated with a single vertical, downward-looking machine vision system cannot satisfy this requirement. A precision automatic assembly system was therefore developed with double machine vision systems integrated. In this system, a horizontal vision system measures the position of feature structures in the parts' side view, which cannot be seen by the vertical one. The position measured by the horizontal camera is converted into the vertical vision system's frame using calibration information. With careful calibration, the alignment and positioning of the parts during assembly can be guaranteed. The developed assembly equipment is easy to implement, modular, and cost-effective. The handling of the miniature parts and the assembly procedure are briefly introduced. The calibration procedure is given and the assembly error is analyzed for compensation.
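Converting positions measured by one camera into another camera's frame amounts to applying a calibrated rigid transform, and estimating that transform from matched points is a standard least-squares (Kabsch-style) problem. A schematic 2-D version follows; it illustrates the principle only, not the equipment's actual calibration routine.

```python
import numpy as np

def estimate_rigid_2d(src, dst):
    """Estimate rotation R and translation t such that dst ≈ src @ R.T + t,
    from matched calibration points (Kabsch/Procrustes in 2-D)."""
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dc - R @ sc
    return R, t

theta = np.deg2rad(30)                # ground-truth transform for the demo
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([5.0, -2.0])
src = np.random.default_rng(5).uniform(-10, 10, (20, 2))
dst = src @ R_true.T + t_true         # "other camera" coordinates
R, t = estimate_rigid_2d(src, dst)
```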

  12. Common component classification: what can we learn from machine learning?

    PubMed

    Anderson, Ariana; Labus, Jennifer S; Vianna, Eduardo P; Mayer, Emeran A; Cohen, Mark S

    2011-05-15

    Machine learning methods have been applied to classifying fMRI scans by studying locations in the brain that exhibit temporal intensity variation between groups, frequently reporting classification accuracy of 90% or better. Although empirical results are quite favorable, one might doubt the ability of classification methods to withstand changes in task ordering and the reproducibility of activation patterns over runs, and question how much of the classification machines' power is due to artifactual noise versus genuine neurological signal. To examine the true strength and power of machine learning classifiers we create and then deconstruct a classifier to examine its sensitivity to physiological noise, task reordering, and across-scan classification ability. The models are trained and tested both within and across runs to assess stability and reproducibility across conditions. We demonstrate the use of independent components analysis for both feature extraction and artifact removal and show that removal of such artifacts can reduce predictive accuracy even when data has been cleaned in the preprocessing stages. We demonstrate how mistakes in the feature selection process can cause the cross-validation error seen in publication to be a biased estimate of the testing error seen in practice and measure this bias by purposefully making flawed models. We discuss other ways to introduce bias and the statistical assumptions lying behind the data and model themselves. Finally we discuss the complications in drawing inference from the smaller sample sizes typically seen in fMRI studies, the effects of small or unbalanced samples on the Type 1 and Type 2 error rates, and how publication bias can give a false confidence of the power of such methods. Collectively this work identifies challenges specific to fMRI classification and methods affecting the stability of models. Copyright © 2010 Elsevier Inc. All rights reserved.
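The feature-selection pitfall described above is easy to reproduce: selecting features on the full data set (test subjects included) before cross-validation yields optimistic accuracy even on pure noise. A self-contained demonstration, with a deliberately simple nearest-centroid classifier standing in for the fMRI pipeline:

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.standard_normal((60, 1000))   # pure-noise "voxel" features
y = np.repeat([0, 1], 30)             # arbitrary labels: true accuracy ~50%

# FLAWED protocol: select features on ALL data (test subjects included),
# then cross-validate only the classifier -- the bias discussed above.
corr = np.abs(np.corrcoef(X.T, y)[-1, :-1])
keep = np.argsort(corr)[::-1][:10]    # 10 features most correlated with y
Xs = X[:, keep]

correct = 0
for i in range(len(y)):               # leave-one-out nearest-centroid CV
    tr = np.arange(len(y)) != i
    c0 = Xs[tr & (y == 0)].mean(0)
    c1 = Xs[tr & (y == 1)].mean(0)
    pred = int(np.linalg.norm(Xs[i] - c1) < np.linalg.norm(Xs[i] - c0))
    correct += pred == y[i]
biased_acc = correct / len(y)         # typically far above the 50% chance level
```

Moving the selection step inside the cross-validation loop removes the bias and returns accuracy to chance on this data.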

  13. Machine learning: a useful radiological adjunct in determination of a newly diagnosed glioma's grade and IDH status.

    PubMed

    De Looze, Céline; Beausang, Alan; Cryan, Jane; Loftus, Teresa; Buckley, Patrick G; Farrell, Michael; Looby, Seamus; Reilly, Richard; Brett, Francesca; Kearney, Hugh

    2018-05-16

    Machine learning methods have been introduced as a computer-aided diagnostic tool, with applications to glioma characterisation on MRI. Such an algorithmic approach may provide a useful adjunct for a rapid and accurate diagnosis of a glioma. The aim of this study is to devise a machine learning algorithm that may be used by radiologists in routine practice to aid diagnosis of both WHO grade and IDH mutation status in de novo gliomas. To evaluate the status quo, we interrogated the accuracy of neuroradiology reports in relation to WHO grade: grade II 96.49% (95% confidence intervals [CI] 0.88, 0.99); III 36.51% (95% CI 0.24, 0.50); IV 72.9% (95% CI 0.67, 0.78). We derived five MRI parameters from the same diagnostic brain scans, in under two minutes per case, and then supplied these data to a random forest algorithm. Machine learning resulted in a high level of accuracy in prediction of tumour grade: grade II/III, area under the receiver operating characteristic curve (AUC) = 98%, sensitivity = 0.82, specificity = 0.94; grade II/IV, AUC = 100%, sensitivity = 1.0, specificity = 1.0; grade III/IV, AUC = 97%, sensitivity = 0.83, specificity = 0.97. Furthermore, machine learning also facilitated the discrimination of IDH status: AUC of 88%, sensitivity = 0.81, specificity = 0.77. These data demonstrate the ability of machine learning to accurately classify diffuse gliomas by both WHO grade and IDH status from routine MRI alone, without significant image processing, which may facilitate its use as a diagnostic adjunct in clinical practice.

  14. Effective Data-Driven Calibration for a Galvanometric Laser Scanning System Using Binocular Stereo Vision.

    PubMed

    Tu, Junchao; Zhang, Liyan

    2018-01-12

    A new solution to the problem of galvanometric laser scanning (GLS) system calibration is presented. Under the machine learning framework, we build a single-hidden-layer feedforward neural network (SLFN) to represent the GLS system, which takes the digital control signal at the drives of the GLS system as input and the space vector of the corresponding outgoing laser beam as output. The training data set is obtained with the aid of a moving mechanism and a binocular stereo system. The parameters of the SLFN are efficiently solved in closed form by using an extreme learning machine (ELM). By quantitatively analyzing the regression precision with respect to the number of hidden neurons in the SLFN, we demonstrate that the proper number of hidden neurons can be safely chosen from a broad interval to guarantee good generalization performance. Compared to traditional model-driven calibration, the proposed calibration method does not need a complex modeling process and is more accurate and stable. As the output of the network is the space vectors of the outgoing laser beams, it costs much less training time and can provide a uniform solution to both laser projection and 3D reconstruction, in contrast with the existing data-driven calibration method, which only works for the laser triangulation problem. A calibration experiment, a projection experiment, and a 3D reconstruction experiment were conducted to test the proposed method, and good results were obtained.
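The core of the extreme learning machine is that the input-to-hidden weights are random and fixed, and only the hidden-to-output weights are solved, in closed form, by linear least squares. A minimal regression sketch (fitting a smooth 1-D function as a stand-in for the control-signal-to-beam-vector mapping; this is not the paper's network):

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """ELM training: random fixed hidden layer, least-squares output layer."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random, never trained
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                            # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # closed-form solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = np.linspace(-np.pi, np.pi, 200)[:, None]
y = np.sin(X[:, 0])
W, b, beta = elm_train(X, y)
rmse = np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```

Because only a linear system is solved, training is non-iterative and fast, which is the property the abstract exploits.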

  15. MRI Detects Myocardial Iron in the Human Heart

    PubMed Central

    Ghugre, Nilesh R.; Enriquez, Cathleen M.; Gonzalez, Ignacio; Nelson, Marvin D.; Coates, Thomas D.; Wood, John C.

    2010-01-01

    Iron-induced cardiac dysfunction is a leading cause of death in transfusion-dependent anemia. MRI relaxation rates R2(1/T2) and R2∗(1∕T2∗) accurately predict liver iron concentration, but their ability to predict cardiac iron has been challenged by some investigators. Studies in animal models support similar R2 and R2∗ behavior with heart and liver iron, but human studies are lacking. To determine the relationship between MRI relaxivities and cardiac iron, regional variations in R2 and R2∗ were compared with iron distribution in one freshly deceased, unfixed, iron-loaded heart. R2 and R2∗ were proportionally related to regional iron concentrations and highly concordant with one another within the interventricular septum. A comparison of postmortem and in vitro measurements supports the notion that cardiac R2∗ should be assessed in the septum rather than the whole heart. These data, along with measurements from controls, provide bounds on MRI-iron calibration curves in human heart and further support the clinical use of cardiac MRI in iron-overload syndromes. PMID:16888797
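R2* is typically estimated by fitting a monoexponential decay to multi-echo signal magnitudes. A log-linear fitting sketch with illustrative values (the echo times and R2* below are assumptions for the demo, not measurements from the study):

```python
import numpy as np

# Multi-echo gradient-echo magnitudes decay as S(TE) = S0 * exp(-R2s * TE);
# a straight-line fit to log(S) vs. TE recovers R2*.
TE = np.array([2, 4, 6, 8, 10, 12]) * 1e-3      # echo times (s), illustrative
R2s_true, S0 = 150.0, 1000.0                    # 1/s and a.u., illustrative
S = S0 * np.exp(-R2s_true * TE)                 # noiseless synthetic signal
slope, intercept = np.polyfit(TE, np.log(S), 1)
R2s_est = -slope                                # recovered R2* in 1/s
```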

  16. Analysing exoplanetary data using unsupervised machine-learning

    NASA Astrophysics Data System (ADS)

    Waldmann, I. P.

    2012-04-01

    The field of transiting extrasolar planets and especially the study of their atmospheres is one of the youngest and most dynamic subjects in current astrophysics. Permanently at the edge of technical feasibility, we are successfully discovering and characterising smaller and smaller planets. To study exoplanetary atmospheres, we typically require a 10^-4 to 10^-5 level of accuracy in flux. Achieving such precision has become the central challenge of exoplanetary research and is often impeded by systematic (non-Gaussian) noise from the instrument, stellar activity, or both. Dedicated missions, such as Kepler, feature an a priori instrument calibration plan to the required accuracy but nonetheless remain limited by stellar systematics. More generic instruments often lack a sufficiently well-defined instrument response function, making them very hard to calibrate. In these cases, it becomes interesting to know how well we can calibrate the data without any additional or prior knowledge of the instrument or star. Here, we present a non-parametric machine-learning algorithm, based on the concept of independent component analysis, to de-convolve the systematic noise and all non-Gaussian signals from the desired astrophysical signal. Such 'blind' signal de-mixing is commonly known as the 'Cocktail Party problem' in signal processing. We showcase the importance and broad applicability of unsupervised machine learning in exoplanetary data analysis by discussing: 1) the removal of instrument systematics in a re-analysis of an HD189733b transmission spectrum obtained with Hubble/NICMOS; 2) the removal of time-correlated stellar noise in individual lightcurves observed by the Kepler mission.
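The "Cocktail Party" separation can be illustrated on two synthetic signals: whiten the observed mixtures, then rotate them to maximize non-Gaussianity. The toy grid search below stands in for the full independent component analysis used in the work; the sine/square sources and mixing matrix are assumptions for the demo.

```python
import numpy as np

def separate_two(mixed):
    """Blind separation of two mixed signals: whiten, then search for the
    rotation maximizing total absolute excess kurtosis (a toy ICA)."""
    X = mixed - mixed.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    Xw = np.diag(d ** -0.5) @ E.T @ X          # whitened mixtures
    def kurt(s):
        return np.mean(s ** 4) / np.mean(s ** 2) ** 2 - 3
    best, best_score = Xw, -np.inf
    for theta in np.linspace(0, np.pi / 2, 180):
        c, s = np.cos(theta), np.sin(theta)
        Y = np.array([[c, -s], [s, c]]) @ Xw
        score = abs(kurt(Y[0])) + abs(kurt(Y[1]))
        if score > best_score:
            best, best_score = Y, score
    return best

t = np.linspace(0, 1, 2000)
s1 = np.sin(2 * np.pi * 13 * t)                # "systematic" component
s2 = np.sign(np.sin(2 * np.pi * 3 * t))        # "astrophysical" component
A = np.array([[1.0, 0.6], [0.4, 1.0]])         # unknown mixing matrix
recovered = separate_two(A @ np.vstack([s1, s2]))
```

The recovered components match the sources only up to sign, order, and scale, which is the well-known ambiguity of blind source separation.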

  17. MATE: Machine Learning for Adaptive Calibration Template Detection

    PubMed Central

    Donné, Simon; De Vylder, Jonas; Goossens, Bart; Philips, Wilfried

    2016-01-01

    The problem of camera calibration is two-fold. On the one hand, the parameters are estimated from known correspondences between the captured image and the real world. On the other, these correspondences themselves—typically in the form of chessboard corners—need to be found. Many distinct approaches for this feature template extraction are available, often of large computational and/or implementational complexity. We exploit the generalized nature of deep learning networks to detect checkerboard corners: our proposed method is a convolutional neural network (CNN) trained on a large set of example chessboard images, which generalizes several existing solutions. The network is trained explicitly against noisy inputs, as well as inputs with large degrees of lens distortion. The trained network that we evaluate is as accurate as existing techniques while offering improved execution time and increased adaptability to specific situations with little effort. The proposed method is not only robust against the types of degradation present in the training set (lens distortions, and large amounts of sensor noise), but also to perspective deformations, e.g., resulting from multi-camera set-ups. PMID:27827920

  18. Machine learning for neuroimaging with scikit-learn.

    PubMed

    Abraham, Alexandre; Pedregosa, Fabian; Eickenberg, Michael; Gervais, Philippe; Mueller, Andreas; Kossaifi, Jean; Gramfort, Alexandre; Thirion, Bertrand; Varoquaux, Gaël

    2014-01-01

    Statistical machine learning methods are increasingly used for neuroimaging data analysis. Their main virtue is their ability to model high-dimensional datasets, e.g., multivariate analysis of activation images or resting-state time series. Supervised learning is typically used in decoding or encoding settings to relate brain images to behavioral or clinical observations, while unsupervised learning can uncover hidden structures in sets of images (e.g., resting state functional MRI) or find sub-populations in large cohorts. By considering different functional neuroimaging applications, we illustrate how scikit-learn, a Python machine learning library, can be used to perform some key analysis steps. Scikit-learn contains a very large set of statistical learning algorithms, both supervised and unsupervised, and its application to neuroimaging data provides a versatile tool to study the brain.

  19. Machine learning for neuroimaging with scikit-learn

    PubMed Central

    Abraham, Alexandre; Pedregosa, Fabian; Eickenberg, Michael; Gervais, Philippe; Mueller, Andreas; Kossaifi, Jean; Gramfort, Alexandre; Thirion, Bertrand; Varoquaux, Gaël

    2014-01-01

    Statistical machine learning methods are increasingly used for neuroimaging data analysis. Their main virtue is their ability to model high-dimensional datasets, e.g., multivariate analysis of activation images or resting-state time series. Supervised learning is typically used in decoding or encoding settings to relate brain images to behavioral or clinical observations, while unsupervised learning can uncover hidden structures in sets of images (e.g., resting state functional MRI) or find sub-populations in large cohorts. By considering different functional neuroimaging applications, we illustrate how scikit-learn, a Python machine learning library, can be used to perform some key analysis steps. Scikit-learn contains a very large set of statistical learning algorithms, both supervised and unsupervised, and its application to neuroimaging data provides a versatile tool to study the brain. PMID:24600388

  20. Man vs. Machine: An interactive poll to evaluate hydrological model performance of a manual and an automatic calibration

    NASA Astrophysics Data System (ADS)

    Wesemann, Johannes; Burgholzer, Reinhard; Herrnegger, Mathew; Schulz, Karsten

    2017-04-01

    In recent years, a great deal of research in hydrological modelling has been devoted to improving the automatic calibration of rainfall-runoff models. This includes, for example, (1) the implementation of new optimisation methods, (2) the incorporation of new and different objective criteria and signatures in the optimisation, and (3) the use of auxiliary data sets apart from runoff. Nevertheless, in many applications manual calibration is still justifiable and frequently applied. The hydrologist performing the manual calibration can draw on expert knowledge to judge the hydrographs both in detail and as a whole. This integrated eye-ball verification procedure is difficult to formulate as objective criteria, even with a multi-criteria approach. Comparing the results of automatic and manual calibration is not straightforward. Automatic calibration often relies solely on objective criteria such as the Nash-Sutcliffe efficiency or the Kling-Gupta efficiency as a benchmark during calibration. Consequently, a comparison based on such measures is intrinsically biased towards automatic calibration. Additionally, objective criteria do not cover all aspects of a hydrograph, leaving open questions concerning the quality of a simulation. This contribution therefore seeks to examine the quality of manually and automatically calibrated hydrographs by interactively involving expert knowledge in the evaluation. Simulations have been performed for the Mur catchment in Austria with the rainfall-runoff model COSERO, using two parameter sets, one from a manual and one from an automatic calibration. A subset of the resulting hydrographs for observation and simulation, representing the typical flow conditions and events, will be evaluated in this study. In an interactive crowdsourcing approach, experts attending the session can vote for their preferred simulated hydrograph without having information on the calibration method that

  1. Tracked ultrasound calibration studies with a phantom made of LEGO bricks

    NASA Astrophysics Data System (ADS)

    Soehl, Marie; Walsh, Ryan; Rankin, Adam; Lasso, Andras; Fichtinger, Gabor

    2014-03-01

    In this study, spatial calibration of tracked ultrasound was compared by using a calibration phantom made of LEGO® bricks and two 3-D printed N-wire phantoms. METHODS: The accuracy and variance of calibrations were compared under a variety of operating conditions. Twenty trials were performed using an electromagnetic tracking device with a linear probe, and three trials were performed using varied probes, varied tracking devices and the three aforementioned phantoms. The standard deviation and error of the 3-D image reprojection were used to compare the accuracy and variance of the spatial calibrations produced from the phantoms. RESULTS: This study found no significant difference between the measured variables of the calibrations. The average standard deviation of multiple 3-D image reprojections with the highest performing printed phantom and those from the phantom made of LEGO® bricks differed by 0.05 mm, and the error of the reprojections differed by 0.13 mm. CONCLUSION: Given that the phantom made of LEGO® bricks is significantly less expensive, more readily available, and more easily modified than precision-machined N-wire phantoms, it promises to be a viable calibration tool, especially for quick laboratory research and proof-of-concept implementations of tracked ultrasound navigation.

  2. The Alzheimer's Disease Neuroimaging Initiative (ADNI): MRI Methods

    PubMed Central

    Jack, Clifford R.; Bernstein, Matt A.; Fox, Nick C.; Thompson, Paul; Alexander, Gene; Harvey, Danielle; Borowski, Bret; Britson, Paula J.; Whitwell, Jennifer L.; Ward, Chadwick; Dale, Anders M.; Felmlee, Joel P.; Gunter, Jeffrey L.; Hill, Derek L.G.; Killiany, Ron; Schuff, Norbert; Fox-Bosetti, Sabrina; Lin, Chen; Studholme, Colin; DeCarli, Charles S.; Krueger, Gunnar; Ward, Heidi A.; Metzger, Gregory J.; Scott, Katherine T.; Mallozzi, Richard; Blezek, Daniel; Levy, Joshua; Debbins, Josef P.; Fleisher, Adam S.; Albert, Marilyn; Green, Robert; Bartzokis, George; Glover, Gary; Mugler, John; Weiner, Michael W.

    2008-01-01

    The Alzheimer's Disease Neuroimaging Initiative (ADNI) is a longitudinal multisite observational study of healthy elders, mild cognitive impairment (MCI), and Alzheimer's disease. Magnetic resonance imaging (MRI), (18)F-fluorodeoxyglucose positron emission tomography (FDG PET), urine, serum, and cerebrospinal fluid (CSF) biomarkers, as well as clinical/psychometric assessments, are acquired at multiple time points. All data will be cross-linked and made available to the general scientific community. The purpose of this report is to describe the MRI methods employed in ADNI. The ADNI MRI core established specifications that guided protocol development. A major effort was devoted to evaluating 3D T1-weighted sequences for morphometric analyses. Several options for this sequence were optimized for the relevant manufacturer platforms and then compared in a reduced-scale clinical trial. The protocol selected for the ADNI study includes: back-to-back 3D magnetization prepared rapid gradient echo (MP-RAGE) scans; B1-calibration scans when applicable; and an axial proton density-T2 dual contrast (i.e., echo) fast spin echo/turbo spin echo (FSE/TSE) for pathology detection. ADNI MRI methods seek to maximize scientific utility while minimizing the burden placed on participants. The approach taken in ADNI to standardization across sites and platforms of the MRI protocol, postacquisition corrections, and phantom-based monitoring of all scanners could be used as a model for other multisite trials. PMID:18302232

  3. STAMPS: Software Tool for Automated MRI Post-processing on a supercomputer.

    PubMed

    Bigler, Don C; Aksu, Yaman; Miller, David J; Yang, Qing X

    2009-08-01

    This paper describes a Software Tool for Automated MRI Post-processing (STAMP) of multiple types of brain MRIs on a workstation and for parallel processing on a supercomputer (STAMPS). This software tool enables the automation of nonlinear registration for a large image set and for multiple MR image types. The tool uses standard brain MRI post-processing tools (such as SPM, FSL, and HAMMER) for multiple MR image types in a pipeline fashion. It also contains novel MRI post-processing features. The STAMP image outputs can be used to perform brain analysis using Statistical Parametric Mapping (SPM) or single-/multi-image modality brain analysis using Support Vector Machines (SVMs). Since STAMPS is PBS-based, the supercomputer may be a multi-node computer cluster or one of the latest multi-core computers.

  4. New algorithms for motion error detection of numerical control machine tool by laser tracking measurement on the basis of GPS principle.

    PubMed

    Wang, Jindong; Chen, Peng; Deng, Yufen; Guo, Junjie

    2018-01-01

    As a three-dimensional measuring instrument, the laser tracker is widely used in industrial measurement. To avoid the influence of angle measurement error on the overall measurement accuracy, multi-station, time-sharing measurement with a laser tracker is introduced on the basis of the global positioning system (GPS) principle in this paper. For the proposed method, accurately determining the coordinates of each measuring point from a large amount of measured data is a critical issue. Taking the detection of motion error of a numerical control machine tool as an example, the corresponding measurement algorithms are investigated thoroughly. By establishing the mathematical model of detecting the motion error of a machine tool with this method, an analytical algorithm for base station calibration and measuring point determination is derived that does not require an initial iterative value. However, when the motion area of the machine tool lies in a 2D plane, the coefficient matrix of the base station calibration is singular, which distorts the result. To overcome this limitation of the original algorithm, an improved analytical algorithm is also derived. The calibration accuracy of the base station with the improved algorithm is compared with that of the original analytical algorithm and of iterative algorithms such as the Gauss-Newton and Levenberg-Marquardt algorithms. Experiments further verify the feasibility and effectiveness of the improved algorithm. In addition, the different motion areas of the machine tool have a certain influence on the calibration accuracy of the base station, and the influence of measurement error on the calibration result is analyzed in terms of the condition number of the coefficient matrix.
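    The measuring-point determination at the heart of this method is a multilateration problem: recover a point from its distances to several known base stations. The paper derives an analytical, non-iterative algorithm; as a generic stand-in, the sketch below solves the same geometric problem with Gauss-Newton iteration on an invented station layout and point.

```python
import numpy as np

# Four non-coplanar base stations and a point to recover (all invented).
stations = np.array([[0., 0., 0.], [2., 0., 0.], [0., 2., 0.], [0., 0., 2.]])
p_true = np.array([0.7, 1.1, 0.4])
d = np.linalg.norm(stations - p_true, axis=1)  # "measured" distances

# Gauss-Newton on the residuals r_i(p) - d_i (an iterative stand-in for
# the paper's analytical solution).
p = np.array([1.0, 1.0, 1.0])                  # initial guess
for _ in range(20):
    r = np.linalg.norm(stations - p, axis=1)
    J = (p - stations) / r[:, None]            # Jacobian d r_i / d p
    p = p + np.linalg.lstsq(J, d - r, rcond=None)[0]
print(np.round(p, 3))
```

    With noise-free distances and four well-placed stations the iteration converges to the true point; with real, noisy data one would solve the same least-squares problem over many measuring points, which is where the conditioning issues discussed in the abstract arise.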

  5. New algorithms for motion error detection of numerical control machine tool by laser tracking measurement on the basis of GPS principle

    NASA Astrophysics Data System (ADS)

    Wang, Jindong; Chen, Peng; Deng, Yufen; Guo, Junjie

    2018-01-01

    As a three-dimensional measuring instrument, the laser tracker is widely used in industrial measurement. To avoid the influence of angle measurement error on the overall measurement accuracy, multi-station, time-sharing measurement with a laser tracker is introduced on the basis of the global positioning system (GPS) principle in this paper. For the proposed method, accurately determining the coordinates of each measuring point from a large amount of measured data is a critical issue. Taking the detection of motion error of a numerical control machine tool as an example, the corresponding measurement algorithms are investigated thoroughly. By establishing the mathematical model of detecting the motion error of a machine tool with this method, an analytical algorithm for base station calibration and measuring point determination is derived that does not require an initial iterative value. However, when the motion area of the machine tool lies in a 2D plane, the coefficient matrix of the base station calibration is singular, which distorts the result. To overcome this limitation of the original algorithm, an improved analytical algorithm is also derived. The calibration accuracy of the base station with the improved algorithm is compared with that of the original analytical algorithm and of iterative algorithms such as the Gauss-Newton and Levenberg-Marquardt algorithms. Experiments further verify the feasibility and effectiveness of the improved algorithm. In addition, the different motion areas of the machine tool have a certain influence on the calibration accuracy of the base station, and the influence of measurement error on the calibration result is analyzed in terms of the condition number of the coefficient matrix.

  6. Prediction of activation patterns preceding hallucinations in patients with schizophrenia using machine learning with structured sparsity.

    PubMed

    de Pierrefeu, Amicie; Fovet, Thomas; Hadj-Selem, Fouad; Löfstedt, Tommy; Ciuciu, Philippe; Lefebvre, Stephanie; Thomas, Pierre; Lopes, Renaud; Jardri, Renaud; Duchesnay, Edouard

    2018-04-01

    Despite significant progress in the field, the detection of fMRI signal changes during hallucinatory events remains difficult and time-consuming. This article first proposes a machine-learning algorithm to automatically identify resting-state fMRI periods that precede hallucinations versus periods that do not. When applied to whole-brain fMRI data, state-of-the-art classification methods, such as support vector machines (SVM), yield dense solutions that are difficult to interpret. We propose to extend existing sparse classification methods by taking the spatial structure of brain images into account with structured sparsity, using the total variation penalty. Based on this approach, we obtained reliable classification performance associated with interpretable predictive patterns, composed of two clearly identifiable clusters in speech-related brain regions. The variation in transition-to-hallucination functional patterns, not only from one patient to another but also from one occurrence to the next (e.g., depending on the sensory modalities involved), appeared to be the major difficulty when developing effective classifiers. Consequently, the second aim of this article is to characterize the variability within the prehallucination patterns using an extension of principal component analysis with spatial constraints. The principal components (PCs) and the associated basis patterns shed light on the intrinsic structures of the variability present in the dataset. Such results are promising in the scope of innovative fMRI-guided therapy for drug-resistant hallucinations, such as fMRI-based neurofeedback. © 2018 Wiley Periodicals, Inc.
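    A simplified stand-in for the sparse-classification idea: the paper couples an l1 penalty with a total variation term to enforce spatially structured sparsity, whereas the sketch below shows only the sparsity ingredient, an l1-penalized logistic classifier on synthetic data (all sizes and the injected signal are invented).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic "voxel" data: 60 fMRI periods, 200 features, 5 predictive.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 200))
y = np.repeat([0, 1], 30)              # pre-hallucination vs. other periods
X[y == 1, :5] += 1.5                   # 5 informative "voxels"

# l1 penalty drives most weights to exactly zero, giving a sparse,
# more interpretable weight map (no spatial TV term in this sketch).
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
n_active = np.count_nonzero(clf.coef_)
print(n_active)
```

    The point of the total variation extension in the paper is that the surviving weights also form spatially contiguous clusters rather than scattered voxels, which plain l1 does not guarantee.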

  7. Prediction of brain maturity in infants using machine-learning algorithms.

    PubMed

    Smyser, Christopher D; Dosenbach, Nico U F; Smyser, Tara A; Snyder, Abraham Z; Rogers, Cynthia E; Inder, Terrie E; Schlaggar, Bradley L; Neil, Jeffrey J

    2016-08-01

    Recent resting-state functional MRI investigations have demonstrated that much of the large-scale functional network architecture supporting motor, sensory and cognitive functions in older pediatric and adult populations is present in term- and prematurely-born infants. Application of new analytical approaches can help translate the improved understanding of early functional connectivity provided through these studies into predictive models of neurodevelopmental outcome. One approach to achieving this goal is multivariate pattern analysis, a machine-learning, pattern classification approach well-suited for high-dimensional neuroimaging data. It has previously been adapted to predict brain maturity in children and adolescents using structural and resting state-functional MRI data. In this study, we evaluated resting state-functional MRI data from 50 preterm-born infants (born at 23-29 weeks of gestation and without moderate-severe brain injury) scanned at term equivalent postmenstrual age compared with data from 50 term-born control infants studied within the first week of life. Using 214 regions of interest, binary support vector machines distinguished term from preterm infants with 84% accuracy (p<0.0001). Inter- and intra-hemispheric connections throughout the brain were important for group categorization, indicating that widespread changes in the brain's functional network architecture associated with preterm birth are detectable by term equivalent age. Support vector regression enabled quantitative estimation of birth gestational age in single subjects using only term equivalent resting state-functional MRI data, indicating that the present approach is sensitive to the degree of disruption of brain development associated with preterm birth (using gestational age as a surrogate for the extent of disruption). This suggests that support vector regression may provide a means for predicting neurodevelopmental outcome in individual infants.
Copyright © 2016 Elsevier
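    The support vector regression step can be illustrated with synthetic data. This is not the study's pipeline: the single age-linked connectivity feature, the noise level, and the sample sizes below are invented.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVR

# Synthetic "connectivity" features for 100 infants; only feature 0
# carries information about gestational age at birth.
rng = np.random.default_rng(2)
n_infants, n_connections = 100, 300
age = rng.uniform(23, 42, size=n_infants)   # gestational age (weeks)
X = rng.normal(size=(n_infants, n_connections))
X[:, 0] = (age - age.mean()) / age.std() + rng.normal(scale=0.3, size=n_infants)

# Cross-validated single-subject age estimates from the features alone.
pred = cross_val_predict(SVR(kernel="linear"), X, age, cv=5)
r = np.corrcoef(age, pred)[0, 1]
print(round(r, 2))
```

    A positive correlation between predicted and true age in held-out subjects is the kind of evidence the abstract cites for the method being sensitive to the degree of disrupted development.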

  8. Prediction of brain maturity in infants using machine-learning algorithms

    PubMed Central

    Smyser, Christopher D.; Dosenbach, Nico U.F.; Smyser, Tara A.; Snyder, Abraham Z.; Rogers, Cynthia E.; Inder, Terrie E.; Schlaggar, Bradley L.; Neil, Jeffrey J.

    2016-01-01

    Recent resting-state functional MRI investigations have demonstrated that much of the large-scale functional network architecture supporting motor, sensory and cognitive functions in older pediatric and adult populations is present in term- and prematurely-born infants. Application of new analytical approaches can help translate the improved understanding of early functional connectivity provided through these studies into predictive models of neurodevelopmental outcome. One approach to achieving this goal is multivariate pattern analysis, a machine-learning, pattern classification approach well-suited for high-dimensional neuroimaging data. It has previously been adapted to predict brain maturity in children and adolescents using structural and resting state-functional MRI data. In this study, we evaluated resting state-functional MRI data from 50 preterm-born infants (born at 23–29 weeks of gestation and without moderate–severe brain injury) scanned at term equivalent postmenstrual age compared with data from 50 term-born control infants studied within the first week of life. Using 214 regions of interest, binary support vector machines distinguished term from preterm infants with 84% accuracy (p < 0.0001). Inter- and intra-hemispheric connections throughout the brain were important for group categorization, indicating that widespread changes in the brain's functional network architecture associated with preterm birth are detectable by term equivalent age. Support vector regression enabled quantitative estimation of birth gestational age in single subjects using only term equivalent resting state-functional MRI data, indicating that the present approach is sensitive to the degree of disruption of brain development associated with preterm birth (using gestational age as a surrogate for the extent of disruption). This suggests that support vector regression may provide a means for predicting neurodevelopmental outcome in individual infants. PMID:27179605

  9. Design features and results from fatigue reliability research machines.

    NASA Technical Reports Server (NTRS)

    Lalli, V. R.; Kececioglu, D.; Mcconnell, J. B.

    1971-01-01

    The design, fabrication, development, operation, calibration and results from reversed bending combined with steady torque fatigue research machines are presented. Fifteen-centimeter long, notched, SAE 4340 steel specimens are subjected to various combinations of these stresses and cycled to failure. Failure occurs when the crack in the notch passes through the specimen automatically shutting down the test machine. These cycles-to-failure data are statistically analyzed to develop a probabilistic S-N diagram. These diagrams have many uses; a rotating component design example given in the literature shows that minimum size and weight for a specified number of cycles and reliability can be calculated using these diagrams.
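    A deterministic version of building an S-N relation from cycles-to-failure data can be sketched by fitting the Basquin law S = A * N**b in log-log space. The stress levels and lifetimes below are invented, and the probabilistic spread around the median curve, which these reliability machines were built to characterize, is omitted here.

```python
import numpy as np

# Invented fatigue data: alternating stress (MPa) vs. cycles to failure.
stress = np.array([600.0, 550.0, 500.0, 450.0, 400.0])
cycles = np.array([2e4, 5e4, 1.2e5, 4e5, 1.5e6])

# Basquin law S = A * N**b is linear in log-log coordinates.
b, logA = np.polyfit(np.log10(cycles), np.log10(stress), 1)
A = 10.0 ** logA

# Allowable stress for a 100,000-cycle design life from the fitted curve.
s_design = A * 1e5 ** b
print(round(s_design, 1))
```

    A probabilistic S-N diagram of the kind described in the abstract would add a fitted life distribution (e.g., lognormal or Weibull) at each stress level, so that a stress could be chosen for a specified life and reliability.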

  10. Design and Analysis of a Sensor System for Cutting Force Measurement in Machining Processes

    PubMed Central

    Liang, Qiaokang; Zhang, Dan; Coppola, Gianmarc; Mao, Jianxu; Sun, Wei; Wang, Yaonan; Ge, Yunjian

    2016-01-01

    Multi-component force sensors have infiltrated a wide variety of automation products since the 1970s. However, one seldom finds full-component sensor systems available in the market for cutting force measurement in machining processes. In this paper, a new six-component sensor system with a compact monolithic elastic element (EE) is designed and developed to detect the tangential cutting forces Fx, Fy and Fz (i.e., forces along x-, y-, and z-axis) as well as the cutting moments Mx, My and Mz (i.e., moments about x-, y-, and z-axis) simultaneously. Optimal structural parameters of the EE are carefully designed via simulation-driven optimization. Moreover, a prototype sensor system is fabricated, which is applied to a 5-axis parallel kinematic machining center. Calibration experimental results demonstrate that the system is capable of measuring cutting forces and moments with good linearity while minimizing coupling error. Both the Finite Element Analysis (FEA) and calibration experimental studies validate the high performance of the proposed sensor system that is expected to be adopted into machining processes. PMID:26751451
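    The calibration experiment for such a sensor amounts to a least-squares fit of a 6x6 calibration matrix mapping gauge readings to applied loads. The sketch below is an illustration only; the cross-coupling magnitudes and load values are invented.

```python
import numpy as np

# Simulate a six-component sensor with mild cross-axis coupling.
rng = np.random.default_rng(3)
true_C = np.eye(6) + 0.05 * rng.normal(size=(6, 6))
loads = rng.normal(size=(50, 6))            # applied [Fx, Fy, Fz, Mx, My, Mz]
readings = loads @ np.linalg.inv(true_C)    # simulated gauge outputs

# Least-squares calibration: solve readings @ C ~= loads for C.
C, *_ = np.linalg.lstsq(readings, loads, rcond=None)
max_err = np.abs(C - true_C).max()
print(max_err)
```

    With noise-free data the fitted matrix recovers the simulated one exactly; in a real calibration the off-diagonal entries of C quantify the coupling error the abstract says the design minimizes.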

  11. Design and Analysis of a Sensor System for Cutting Force Measurement in Machining Processes.

    PubMed

    Liang, Qiaokang; Zhang, Dan; Coppola, Gianmarc; Mao, Jianxu; Sun, Wei; Wang, Yaonan; Ge, Yunjian

    2016-01-07

    Multi-component force sensors have infiltrated a wide variety of automation products since the 1970s. However, one seldom finds full-component sensor systems available in the market for cutting force measurement in machining processes. In this paper, a new six-component sensor system with a compact monolithic elastic element (EE) is designed and developed to detect the tangential cutting forces Fx, Fy and Fz (i.e., forces along x-, y-, and z-axis) as well as the cutting moments Mx, My and Mz (i.e., moments about x-, y-, and z-axis) simultaneously. Optimal structural parameters of the EE are carefully designed via simulation-driven optimization. Moreover, a prototype sensor system is fabricated, which is applied to a 5-axis parallel kinematic machining center. Calibration experimental results demonstrate that the system is capable of measuring cutting forces and moments with good linearity while minimizing coupling error. Both the Finite Element Analysis (FEA) and calibration experimental studies validate the high performance of the proposed sensor system that is expected to be adopted into machining processes.

  12. Juvenile Osteochondritis Dissecans: Correlation Between Histopathology and MRI.

    PubMed

    Zbojniewicz, Andrew M; Stringer, Keith F; Laor, Tal; Wall, Eric J

    2015-07-01

    The objective of our study was to correlate specimens of juvenile osteochondritis dissecans (OCD) lesions of the knee with MRI examinations to elucidate the histopathologic basis of characteristic imaging features. Five children (three boys and two girls; age range, 12-13 years old) who underwent transarticular biopsy of juvenile OCD lesions of the knee were retrospectively included in this study. Two radiologists reviewed the MRI examinations, and a pathologist reviewed the histopathologic specimens and recorded characteristic features. Digital specimen photographs were calibrated to the size of the respective MR image with the use of a reference scale. Photographs were rendered semitransparent and overlaid onto the MR image at the location of the prior biopsy. A total of seven biopsy specimens were included. On MRI, all lesions showed cystlike foci in the subchondral bone, a bone marrow edema pattern on proton density- or T2-weighted images, and relatively thick unossified epiphyseal cartilage. In four patients a laminar signal intensity pattern was seen, and two patients had multiple breaks in the subchondral bone plate. Fibrovascular tissue was found at histopathology in all patients. Cleft spaces near the cartilage-bone interface were seen in all patients, and chondrocyte cloning was present in most cases. Focal bone necrosis and inflammation were infrequent findings. Precise correlation of the MRI appearance with the histopathologic overlays was consistently found. A direct correlation exists between the histopathologic findings and the MRI features in patients with juvenile OCD. Additional studies are needed to correlate these MRI features with juvenile OCD healing success rates.

  13. Reliably detectable flaw size for NDE methods that use calibration

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2017-04-01

    Probability of detection (POD) analysis is used in assessing reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. In this paper, POD analysis is applied to an NDE method, such as eddy current testing, where calibration is used. NDE calibration standards have known-size artificial flaws, such as electro-discharge machined (EDM) notches and flat bottom hole (FBH) reflectors, which are used to set instrument sensitivity for detection of real flaws. These NDE methods are intended to detect real flaws such as cracks and crack-like flaws. A reliably detectable crack size is required for safe-life analysis of fracture critical parts. Therefore, it is important to correlate signal responses from real flaws with signal responses from artificial flaws used in the calibration process to determine reliably detectable flaw size.

  14. Reliably Detectable Flaw Size for NDE Methods that Use Calibration

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2017-01-01

    Probability of detection (POD) analysis is used in assessing reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. In this paper, POD analysis is applied to an NDE method, such as eddy current testing, where calibration is used. NDE calibration standards have known-size artificial flaws, such as electro-discharge machined (EDM) notches and flat bottom hole (FBH) reflectors, which are used to set instrument sensitivity for detection of real flaws. These NDE methods are intended to detect real flaws such as cracks and crack-like flaws. A reliably detectable crack size is required for safe-life analysis of fracture critical parts. Therefore, it is important to correlate signal responses from real flaws with signal responses from artificial flaws used in the calibration process to determine reliably detectable flaw size.
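    A minimal hit/miss POD fit can be sketched as follows. The flaw sizes and the underlying POD curve are invented; the maximum-likelihood logit fit below is one of the standard MIL-HDBK-1823 model forms, but mh1823 additionally provides the confidence bounds (e.g., a90/95) that this sketch omits.

```python
import numpy as np

# Invented hit/miss data: detection outcome vs. flaw size a (mm).
rng = np.random.default_rng(4)
a = rng.uniform(0.1, 2.0, size=200)
pod_true = 1.0 / (1.0 + np.exp(-(a - 0.8) / 0.15))   # hidden POD(a)
hit = (rng.uniform(size=200) < pod_true).astype(float)

# Maximum-likelihood logistic fit POD(a) = 1/(1+exp(-(b0 + b1*a)))
# via Newton-Raphson (iteratively reweighted least squares).
X = np.column_stack([np.ones_like(a), a])
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    W = p * (1.0 - p)
    beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (hit - p))

# Flaw size with 90% probability of detection: solve POD(a90) = 0.9.
a90 = (np.log(9.0) - beta[0]) / beta[1]
print(round(a90, 2))
```

    The resulting a90 (and, with confidence bounds, a90/95) is the "reliably detectable flaw size" that feeds into safe-life analysis.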

  15. A SYSTEMS APPROACH UTILIZING GENERAL-PURPOSE AND SPECIAL-PURPOSE TEACHING MACHINES.

    ERIC Educational Resources Information Center

    SILVERN, LEONARD C.

    IN ORDER TO IMPROVE THE EMPLOYEE TRAINING-EVALUATION METHOD, TEACHING MACHINES AND PERFORMANCE AIDS MUST BE PHYSICALLY AND OPERATIONALLY INTEGRATED INTO THE SYSTEM, THUS RETURNING TRAINING TO THE ACTUAL JOB ENVIRONMENT. GIVEN THESE CONDITIONS, TRAINING CAN BE MEASURED, CALIBRATED, AND CONTROLLED WITH RESPECT TO ACTUAL JOB PERFORMANCE STANDARDS AND…

  16. Volumetric brain magnetic resonance imaging predicts functioning in bipolar disorder: A machine learning approach.

    PubMed

    Sartori, Juliana M; Reckziegel, Ramiro; Passos, Ives Cavalcante; Czepielewski, Leticia S; Fijtman, Adam; Sodré, Leonardo A; Massuda, Raffael; Goi, Pedro D; Vianna-Sulzbach, Miréia; Cardoso, Taiane de Azevedo; Kapczinski, Flávio; Mwangi, Benson; Gama, Clarissa S

    2018-08-01

    Neuroimaging studies have been steadily explored in Bipolar Disorder (BD) in the last decades. Neuroanatomical changes tend to be more pronounced in patients with repeated episodes. Although the role of such changes in cognition and memory is well established, daily-life functioning impairment stands out among the consequences of the proposed progression. The objective of this study was to analyze MRI volumetric modifications in BD and healthy controls (HC) as possible predictors of daily-life functioning through a machine learning approach. Ninety-four participants (35 DSM-IV BD type I and 59 HC) underwent clinical and functioning assessments and structural MRI. Functioning was assessed using the Functioning Assessment Short Test (FAST). The machine learning analysis used a support vector regression algorithm to identify candidate regional brain volumes that could predict functioning status. Patients with BD and HC did not differ in age, education and marital status. There were significant differences between groups in gender, BMI, FAST score, and employment status. There was a significant correlation between observed and predicted FAST scores for patients with BD, but not for controls. According to the model, the brain structure volumes that could predict FAST scores were: left superior frontal cortex, left rostral medial frontal cortex, right white matter total volume and right lateral ventricle volume. The machine learning approach demonstrated that brain volume changes on MRI were predictors of FAST score in patients with BD and could identify specific brain areas related to functioning impairment. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Photometric Redshift Calibration Strategy for WFIRST Cosmology

    NASA Astrophysics Data System (ADS)

    Hemmati, Shoubaneh; WFIRST, WFIRST-HLS-COSMOLOGY

    2018-01-01

    In order for WFIRST and other Stage IV dark energy experiments (e.g., LSST, Euclid) to infer cosmological parameters not limited by systematic errors, accurate redshift measurements are needed. This accuracy can only be met using spectroscopic subsamples to calibrate the full sample. In this poster, we employ the machine-learning, SOM-based spectroscopic sampling technique developed in Masters et al. 2015, using the empirical color-redshift relation among galaxies to find the minimum set of spectra required for the WFIRST weak lensing calibration. We use galaxies from the CANDELS survey to build the LSST+WFIRST lensing analog sample of ~36k objects and train the LSST+WFIRST SOM. We show that 26% of the WFIRST lensing sample consists of sources fainter than the Euclid depth in the optical, 91% of which live in color cells already occupied by brighter galaxies. We demonstrate the similarity between faint and bright galaxies as well as the feasibility of redshift measurements at different brightness levels. However, 4% of SOM cells are occupied only by faint galaxies, for which we recommend extra spectroscopy of ~200 new sources. Acquiring the spectra of these sources will enable the comprehensive calibration of the WFIRST color-redshift relation.

  18. INFLUENCE OF IRON CHELATION ON R1 AND R2 CALIBRATION CURVES IN GERBIL LIVER AND HEART

    PubMed Central

    Wood, John C.; Aguilar, Michelle; Otto-Duessel, Maya; Nick, Hanspeter; Nelson, Marvin D.; Moats, Rex

    2008-01-01

    MRI is gaining increasing importance for the noninvasive quantification of organ iron burden. Since transverse relaxation rates depend on iron distribution as well as iron concentration, physiologic and pharmacologic processes that alter iron distribution could change MRI calibration curves. This paper compares the effect of three iron chelators, deferoxamine, deferiprone, and deferasirox, on R1 and R2 calibration curves according to two iron loading and chelation strategies. Thirty-three Mongolian gerbils underwent iron loading (iron dextran 500 mg/kg/wk) for 3 weeks followed by 4 weeks of chelation. An additional 56 animals received less aggressive loading (200 mg/kg/week) for 10 weeks, followed by 12 weeks of chelation. R1 and R2 calibration curves were compared to results from 23 iron-loaded animals that had not received chelation. Acute iron loading and chelation biased R1 and R2 away from the unchelated reference calibration curves, but chelator-specific changes were not observed, suggesting physiologic rather than pharmacologic differences in iron distribution. Long-term deferiprone treatment increased liver R1 by 50% (p<0.01), while long-term deferasirox lowered liver R2 by 30.9% (p<0.0001). The relationship between R1 and R2 and organ iron concentration may depend upon the acuity of iron loading and unloading as well as the iron chelator administered. PMID:18581418
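    The use of such calibration curves can be illustrated with a simple linear fit and its inversion: fit relaxation rate against known iron concentration, then invert to estimate iron burden from a measured rate. The numbers below are synthetic placeholders, not the paper's gerbil data.

```python
import numpy as np

# Synthetic calibration data: liver iron (mg Fe / g dry weight) vs. R2 (1/s),
# a linear trend with small invented residuals.
iron = np.array([1.0, 2.5, 5.0, 7.5, 10.0, 15.0])
r2 = 25.0 + 6.0 * iron + np.array([1.2, -0.8, 0.5, -1.0, 0.9, -0.4])

# Fit the calibration curve R2 = slope * iron + intercept.
slope, intercept = np.polyfit(iron, r2, 1)

# Invert the curve: estimate iron burden from a measured R2 of 70 1/s.
iron_est = (70.0 - intercept) / slope
print(round(iron_est, 1))
```

    The paper's point is that the fitted slope and intercept themselves can shift with iron distribution (e.g., after acute loading or chelation), so the inversion step is only as reliable as the calibration conditions.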

  19. Magnetic Resonance Medical Imaging (MRI)-from the inside

    NASA Astrophysics Data System (ADS)

    Bottomley, Paul

    There are about 36,000 magnetic resonance imaging (MRI) scanners in the world, with annual sales of 2500. In the USA about 34 million MRI studies are done annually, and 60-70% of all scanners operate at 1.5 Tesla (T). In 1982 there were none. How MRI got to be, and how it got to 1.5T, is the subject of this talk. It's an insider's view, mine, as a physics PhD student at Nottingham University when MRI (almost) began, through to the invention of the 1.5T clinical MRI scanner at GE's research center in Schenectady, NY. Before 1977 all MRI was done on laboratory nuclear magnetic resonance instruments used for analyzing small specimens via chemical shift spectroscopy (MRS). It began with Lauterbur's 1973 observation that turning up the spectrometer's linear gradient magnetic field generated a spectrum that was a 1D projection of the sample in the direction of the gradient. What followed in the 1970s was the development of three key methods of 3D spatial localization that remain fundamental to MRI today. As the 1980s began, the once unimaginable prospect of upscaling from 2 cm test tubes to human body-sized magnets, gradient and RF transmit/receive systems was well underway, evolving from arm-sized to whole-body electromagnet-based systems operating at <0.2T. I moved to Johns Hopkins University to apply MRI methods to localized MRS and study cardiac metabolism, and then to GE to build a whole-body MRS machine. The largest uniform magnet possible (then, a 1.5T superconducting system) was required. Body MRI was first thought impossible above 0.35T due to RF penetration, detector coil and signal-to-noise ratio (SNR) issues. When GE finally did take on MRI, their plan was to drop the field to 0.3T. We opted to make MRI work at 1.5T instead. The result was a scanner that could study both anatomy and metabolism with an SNR far beyond its lower-field rivals. MRI's success truly reflects the team efforts of many: from the NMR physics to the engineering of magnets, gradient and RF systems.

  20. Machine Learning-based Texture Analysis of Contrast-enhanced MR Imaging to Differentiate between Glioblastoma and Primary Central Nervous System Lymphoma.

    PubMed

    Kunimatsu, Akira; Kunimatsu, Natsuko; Yasaka, Koichiro; Akai, Hiroyuki; Kamiya, Kouhei; Watadani, Takeyuki; Mori, Harushi; Abe, Osamu

    2018-05-16

    Although advanced MRI techniques are increasingly available, imaging differentiation between glioblastoma and primary central nervous system lymphoma (PCNSL) is sometimes confusing. We aimed to evaluate the performance of image classification by support vector machine, a method of traditional machine learning, using texture features computed from contrast-enhanced T1-weighted images. This retrospective study on preoperative brain tumor MRI included 76 consecutive, initially treated patients with glioblastoma (n = 55) or PCNSL (n = 21) from one institution, consisting of an independent training group (n = 60: 44 glioblastomas and 16 PCNSLs) and a test group (n = 16: 11 glioblastomas and 5 PCNSLs) separated sequentially by time period. A total set of 67 texture features was computed on routine contrast-enhanced T1-weighted images of the training group, and the top four most discriminating features were selected as input variables to train support vector machine classifiers. These features were then evaluated on the test group with subsequent image classification. The area under the receiver operating characteristic curve on the training data was 0.99 (95% confidence interval [CI]: 0.96-1.00) for the classifier with a Gaussian kernel and 0.87 (95% CI: 0.77-0.95) for the classifier with a linear kernel. On the test data, both classifiers showed a prediction accuracy of 75% (12/16). Although further improvement is needed, our preliminary results suggest that machine learning-based image classification may provide complementary diagnostic information on routine brain MRI.
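A minimal, hypothetical sketch of the pipeline described above (top-4 feature selection followed by Gaussian- and linear-kernel SVMs, evaluated by ROC AUC) using scikit-learn; the feature values, group sizes and injected signal are synthetic stand-ins, not the study's data:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for 67 texture features per tumor (1 = glioblastoma, 0 = PCNSL)
X_train = rng.normal(size=(60, 67))
y_train = np.r_[np.ones(44), np.zeros(16)].astype(int)
X_train[y_train == 1, :4] += 1.0              # inject a separable signal into 4 features

X_test = rng.normal(size=(16, 67))
y_test = np.r_[np.ones(11), np.zeros(5)].astype(int)
X_test[y_test == 1, :4] += 1.0

# Select the 4 most discriminating features on training data, then train one SVM per kernel
aucs = {}
for kernel in ("rbf", "linear"):
    clf = make_pipeline(
        SelectKBest(f_classif, k=4),          # top-4 feature ranking
        StandardScaler(),
        SVC(kernel=kernel, probability=True, random_state=0),
    )
    clf.fit(X_train, y_train)
    aucs[kernel] = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
```

Fitting the feature selector inside the pipeline keeps the test group untouched during selection, which is what makes the reported test accuracy meaningful.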

  1. SU-E-I-65: Estimation of Tagging Efficiency in Pseudo-Continuous Arterial Spin Labeling (pCASL) MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jen, M; Yan, F; Tseng, Y

    2015-06-15

    Purpose: pCASL has been recommended as a potent approach for absolute cerebral blood flow (CBF) quantification in clinical practice. However, uncertainty in the tagging efficiency of pCASL remains an issue. This study aimed to estimate tagging efficiency by using a short quantitative pulsed ASL scan (FAIR-QUIPSSII) and to compare the resultant CBF values with those calibrated using 2D Phase Contrast (PC) MRI. Methods: Fourteen normal volunteers participated in this study. All images, including whole brain (WB) pCASL, WB FAIR-QUIPSSII and single-slice 2D PC, were collected on a 3T clinical MRI scanner with an 8-channel head coil. A deltaM map was calculated by averaging the subtraction of tag/control pairs in the pCASL and FAIR-QUIPSSII images and used for CBF calculation. Tagging efficiency was then calculated as the ratio of mean gray matter CBF obtained from pCASL and FAIR-QUIPSSII. For comparison, tagging efficiency was also estimated with 2D PC, a previously established method, by contrasting WB CBF in pCASL and 2D PC. The feasibility of estimation from a short FAIR-QUIPSSII scan was evaluated by the number of averages required to obtain a stable deltaM value. Setting the deltaM calculated from the maximum number of averages (50 pairs) as reference, stable results were defined as within ±10% variation. Results: Tagging efficiencies obtained by 2D PC MRI (0.732±0.092) were significantly lower than those obtained by FAIR-QUIPSSII (0.846±0.097) (P<0.05). Feasibility results revealed that four pairs of images in the FAIR-QUIPSSII scan were sufficient to obtain a robust calibration, with less than 10% difference from using 50 pairs. Conclusion: This study found that a reliable estimate of tagging efficiency could be obtained from a few pairs of FAIR-QUIPSSII images, suggesting that a short calibration scan (within 30 s) is feasible. Considering recent reports concerning the variability of PC MRI-based calibration, this study proposes an effective alternative for CBF quantification with pCASL.
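The calibration logic above (tagging efficiency as a ratio of mean gray-matter CBF, and deltaM stability within ±10% of a 50-pair reference) can be sketched as follows; all numbers are simulated placeholders, not measured values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated deltaM (control - tag) for 50 pCASL-style pairs in a gray-matter ROI (a.u.)
pairs = rng.normal(loc=1.0, scale=0.3, size=50)

reference = pairs.mean()                      # deltaM from all 50 pairs

def pairs_needed(signal, ref, tol=0.10):
    """Smallest n such that the running mean stays within +/-tol of the reference."""
    for n in range(1, len(signal) + 1):
        if all(abs(signal[:m].mean() - ref) <= tol * abs(ref)
               for m in range(n, len(signal) + 1)):
            return n
    return len(signal)

n_stable = pairs_needed(pairs, reference)

# Tagging efficiency as the ratio of mean GM CBF from the two methods
cbf_pcasl, cbf_fair = 38.0, 45.0              # hypothetical mean GM CBF (mL/100 g/min)
efficiency = cbf_pcasl / cbf_fair
```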

  2. An fMRI-Based Neural Signature of Decisions to Smoke Cannabis.

    PubMed

    Bedi, Gillinder; Lindquist, Martin A; Haney, Margaret

    2015-11-01

    Drug dependence may be at its core a pathology of choice, defined by continued decisions to use drugs irrespective of negative consequences. Despite evidence of dysregulated decision making in addiction, little is known about the neural processes underlying the most clinically relevant decisions drug users make: decisions to use drugs. Here, we combined functional magnetic resonance imaging (fMRI), machine learning, and human laboratory drug administration to investigate neural activation underlying decisions to smoke cannabis. Nontreatment-seeking daily cannabis smokers completed an fMRI choice task, making repeated decisions to purchase or decline 1-12 placebo or active cannabis 'puffs' ($0.25-$5/puff). One randomly selected decision was implemented. If the selected choice had been bought, the cost was deducted from study earnings and the purchased cannabis smoked in the laboratory; alternatively, the participant remained in the laboratory without cannabis. Machine learning with leave-one-subject-out cross-validation identified distributed neural activation patterns discriminating decisions to buy cannabis from declined offers. A total of 21 participants were included in behavioral analyses; 17 purchased cannabis and were thus included in fMRI analyses. Purchasing varied lawfully with dose and cost. The classifier discriminated with 100% accuracy between fMRI activation patterns for purchased vs declined cannabis at the level of the individual. Dorsal striatum, insula, posterior parietal regions, anterior and posterior cingulate, and dorsolateral prefrontal cortex all contributed reliably to this neural signature of decisions to smoke cannabis. These findings provide the basis for a brain-based characterization of drug-related decision making in drug abuse, including effects of psychological and pharmacological interventions on these processes.
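Leave-one-subject-out cross-validation, as used in this analysis, can be sketched with scikit-learn's LeaveOneGroupOut; the subject counts, trial structure and injected signal below are illustrative stand-ins, not the study's fMRI data:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Toy stand-in: 17 subjects x 20 trials each, 50 voxel features per trial,
# label 1 = "buy" decision, 0 = declined offer
n_subjects, n_trials, n_feat = 17, 20, 50
subjects = np.repeat(np.arange(n_subjects), n_trials)
y = np.tile(np.r_[np.zeros(n_trials // 2), np.ones(n_trials // 2)], n_subjects).astype(int)
X = rng.normal(size=(n_subjects * n_trials, n_feat))
X[y == 1] += 0.8        # inject a separable activation pattern

# Leave-one-subject-out: every fold holds out all trials of one subject
logo = LeaveOneGroupOut()
acc = cross_val_score(SVC(kernel="linear"), X, y, groups=subjects, cv=logo)
mean_acc = acc.mean()
```

Holding out whole subjects (rather than random trials) is what allows the claim of discrimination "at the level of the individual".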

  3. TU-F-CAMPUS-J-05: Fast Volumetric MRI On An MRI-Linac Enables On-Line QA On Dose Deposition in the Patient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crijns, S; Glitzner, M; Kontaxis, C

    Purpose: The introduction of the MRI-linac in radiotherapy brings MRI-guided treatment with daily plan adaptations within reach. This paradigm demands on-line QA. With its ability to perform continuous volumetric imaging with outstanding soft-tissue contrast, the MRI-linac promises to elucidate the dose deposition process during a treatment session. Here we study, for a prostate case, how dynamic MRI combined with linac machine parameters and a fast dose engine can be used for on-line dose accumulation. Methods: Prostate imaging was performed in a healthy volunteer on a 1.5T MR scanner (Philips, Best, NL) according to a clinical MR-sim protocol, followed by 10 min of dynamic imaging (FLASH, 4 s/volume, FOV 40×40×12 cm³, voxels 3×3×3 mm³, TR/TE/α = 3.5 ms/1.7 ms/5°). An experienced radiation oncologist made delineations, considering the prostate CTV. Planning was performed on a two-compartment pseudo-CT (air/water density) according to clinical constraints (77 Gy in PTV) using a Monte-Carlo (MC) based TPS that accounts for magnetic fields. Delivery of one fraction (2.2 Gy) was simulated on an emulator for the Axesse linac (Elekta, Stockholm, SE). Machine parameters (MLC settings, gantry angle, dose rate, etc.) were recorded at 25 Hz. These were re-grouped per dynamic volume and fed into the MC engine to calculate the dose delivered for each of the dynamics. Deformations derived from non-rigid registration of each dynamic against the first allowed dose accumulation on a common reference grid. Results: The DVH parameters on the PTV showed little change compared to the optimized plan. Local deformations, however, resulted in local deviations, primarily around the air/rectum interface. This clearly indicates the potential of intra-fraction adaptations based on the accumulated dose. Application in each fraction helps to track the influence of plan adaptations on the eventual dose distribution. Calculation times were about twice the delivery time. Conclusion: The
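The regrouping of a 25 Hz machine-parameter log into 4 s dynamic volumes can be sketched as a binning operation; the log structure and dose-rate values below are hypothetical placeholders, not the study's delivery data:

```python
import numpy as np

# Machine-parameter log sampled at 25 Hz over a 600 s (10 min) delivery,
# regrouped into 4 s dynamic MRI volumes (hypothetical structure)
log_rate_hz = 25
dyn_duration_s = 4.0
n_samples = int(600 * log_rate_hz)            # 15000 log entries

timestamps = np.arange(n_samples) / log_rate_hz
dose_rate = np.full(n_samples, 2.0)           # placeholder dose-rate samples (a.u./s)

# Index of the dynamic volume each log entry belongs to
volume_index = (timestamps // dyn_duration_s).astype(int)
n_volumes = volume_index.max() + 1            # 150 dynamics of 4 s each

# Dose attributed to each dynamic = sum of its samples divided by the log rate
dose_per_volume = np.bincount(volume_index, weights=dose_rate) / log_rate_hz
total_dose = dose_per_volume.sum()
```

Each per-volume group of parameters would then be handed to the dose engine together with the matching dynamic image.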

  4. Calibration of fluorescence resonance energy transfer in microscopy

    DOEpatents

    Youvan, Douglas C.; Silva, Christopher M.; Bylina, Edward J.; Coleman, William J.; Dilworth, Michael R.; Yang, Mary M.

    2003-12-09

    Imaging hardware, software, calibrants, and methods are provided to visualize and quantitate the amount of Fluorescence Resonance Energy Transfer (FRET) occurring between donor and acceptor molecules in epifluorescence microscopy. The MicroFRET system compensates for overlap among donor, acceptor, and FRET spectra using well characterized fluorescent beads as standards in conjunction with radiometrically calibrated image processing techniques. The MicroFRET system also provides precisely machined epifluorescence cubes to maintain proper image registration as the sample is illuminated at the donor and acceptor excitation wavelengths. Algorithms are described that pseudocolor the image to display pixels exhibiting radiometrically-corrected fluorescence emission from the donor (blue), the acceptor (green) and FRET (red). The method is demonstrated on samples exhibiting FRET between genetically engineered derivatives of the Green Fluorescent Protein (GFP) bound to the surface of Ni chelating beads by histidine-tags.
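The pseudocoloring step described (donor to blue, acceptor to green, FRET to red) can be sketched as a channel-stacking operation; the image shapes and values here are synthetic, not MicroFRET output:

```python
import numpy as np

rng = np.random.default_rng(3)

# Three radiometrically corrected emission images (hypothetical 64x64 floats in [0, 1])
donor = rng.random((64, 64))
acceptor = rng.random((64, 64))
fret = rng.random((64, 64))

# Pseudocolor composite: FRET -> red, acceptor -> green, donor -> blue channel
rgb = np.stack([fret, acceptor, donor], axis=-1)

# Dominant signal per pixel (0 = FRET/red, 1 = acceptor/green, 2 = donor/blue)
dominant = rgb.argmax(axis=-1)
```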

  5. Calibration of fluorescence resonance energy transfer in microscopy

    DOEpatents

    Youvan, Douglas C.; Silva, Christopher M.; Bylina, Edward J.; Coleman, William J.; Dilworth, Michael R.; Yang, Mary M.

    2002-09-24

    Imaging hardware, software, calibrants, and methods are provided to visualize and quantitate the amount of Fluorescence Resonance Energy Transfer (FRET) occurring between donor and acceptor molecules in epifluorescence microscopy. The MicroFRET system compensates for overlap among donor, acceptor, and FRET spectra using well characterized fluorescent beads as standards in conjunction with radiometrically calibrated image processing techniques. The MicroFRET system also provides precisely machined epifluorescence cubes to maintain proper image registration as the sample is illuminated at the donor and acceptor excitation wavelengths. Algorithms are described that pseudocolor the image to display pixels exhibiting radiometrically-corrected fluorescence emission from the donor (blue), the acceptor (green) and FRET (red). The method is demonstrated on samples exhibiting FRET between genetically engineered derivatives of the Green Fluorescent Protein (GFP) bound to the surface of Ni chelating beads by histidine-tags.

  6. Assessment of the extent of pituitary macroadenomas resection in immediate postoperative MRI.

    PubMed

    Taberner López, E; Vañó Molina, M; Calatayud Gregori, J; Jornet Sanz, M; Jornet Fayos, J; Pastor Del Campo, A; Caño Gómez, A; Mollá Olmos, E

    To evaluate whether the extent of pituitary macroadenoma resection can be determined on immediate postoperative pituitary magnetic resonance imaging (MRI). MRI studies of patients with pituitary macroadenomas from January 2010 to October 2014 were reviewed. Patients who had a diagnostic MRI, an immediate post-surgical MRI and at least one follow-up MRI were included. We evaluated whether the findings of the immediate post-surgical MRI and the subsequent MRI were concordant. Cases without follow-up studies and reoperations for recurrence were excluded. The degree of tumor resection was divided into groups: total resection, partial resection and doubtful. All MRI studies were performed on a 1.5T machine following the same protocol sequences for all cases: a morphological part, a dynamic intravenous contrast part and a late contrast part. Of the 73 cases included, the immediate postoperative pituitary MRI was interpreted as total resection in 38 cases and residual tumor in 28 cases, with uncertainty between residual tumor and inflammatory changes in 7 cases. Follow-up MRI identified total resection in 41 cases and residual tumor in 32. Sensitivity and specificity were 0.78 and 0.82, and positive and negative predictive values (PPV and NPV) were 0.89 and 0.89, respectively. Immediate post-surgical pituitary MRI is useful for assessing the degree of tumor resection and is a good predictor of the final extent of resection as confirmed on subsequent MRI studies. It allows the most appropriate treatment to be decided at an early stage. Copyright © 2017 SERAM. Published by Elsevier España, S.L.U. All rights reserved.
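The reported diagnostic metrics follow from the standard 2×2 confusion-table formulas; a minimal sketch with hypothetical counts (not the study's actual case tallies):

```python
# Diagnostic-performance formulas from a 2x2 confusion table.
# tp/fp/fn/tn are hypothetical counts for illustration only.
tp, fp, fn, tn = 25, 3, 7, 30

sensitivity = tp / (tp + fn)          # true positives among all actual positives
specificity = tn / (tn + fp)          # true negatives among all actual negatives
ppv = tp / (tp + fp)                  # positive predictive value
npv = tn / (tn + fn)                  # negative predictive value
```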

  7. Computer-aided classification of Alzheimer's disease based on support vector machine with combination of cerebral image features in MRI

    NASA Astrophysics Data System (ADS)

    Jongkreangkrai, C.; Vichianin, Y.; Tocharoenchai, C.; Arimura, H.; Alzheimer's Disease Neuroimaging Initiative

    2016-03-01

    Several studies have differentiated Alzheimer's disease (AD) using cerebral image features derived from MR brain images. In this study, we were interested in combining hippocampus and amygdala volumes and entorhinal cortex thickness to improve the performance of AD differentiation. Thus, our objective was to investigate the useful features obtained from MRI for classification of AD patients using a support vector machine (SVM). T1-weighted MR brain images of 100 AD patients and 100 normal subjects were processed using FreeSurfer software to measure hippocampus and amygdala volumes and entorhinal cortex thicknesses in both brain hemispheres. Relative volumes of the hippocampus and amygdala were calculated to correct for variation in individual head size. SVM was employed with five combinations of features (H: hippocampus relative volumes, A: amygdala relative volumes, E: entorhinal cortex thicknesses, HA: hippocampus and amygdala relative volumes, and ALL: all features). Receiver operating characteristic (ROC) analysis was used to evaluate the method. AUC values of the five combinations were 0.8575 (H), 0.8374 (A), 0.8422 (E), 0.8631 (HA) and 0.8906 (ALL). Although “ALL” provided the highest AUC, there were no statistically significant differences among the combinations except for the “A” feature. Our results show that all the suggested features may be feasible for computer-aided classification of AD patients.
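A minimal sketch of the feature-combination comparison (head-size correction by relative volumes, then an SVM scored by ROC AUC per combination); all measurements and effect sizes below are synthetic stand-ins, not FreeSurfer output:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n = 200  # synthetic stand-ins for 100 AD patients + 100 controls
y = np.r_[np.ones(100), np.zeros(100)].astype(int)

icv = rng.normal(1500.0, 120.0, size=n)               # intracranial volume, cm^3
hippo = rng.normal(3.5, 0.4, size=n) - 0.5 * y        # raw hippocampal volume, cm^3
amyg = rng.normal(1.6, 0.2, size=n) - 0.2 * y
entorhinal = rng.normal(3.2, 0.3, size=n) - 0.3 * y   # cortical thickness, mm

# Correct the volumes for head size by dividing by intracranial volume
h_rel = hippo / icv
a_rel = amyg / icv

combos = {
    "H": np.c_[h_rel],
    "A": np.c_[a_rel],
    "E": np.c_[entorhinal],
    "HA": np.c_[h_rel, a_rel],
    "ALL": np.c_[h_rel, a_rel, entorhinal],
}
auc = {name: cross_val_score(SVC(), X, y, cv=5, scoring="roc_auc").mean()
       for name, X in combos.items()}
```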

  8. Machine learning methods for the classification of gliomas: Initial results using features extracted from MR spectroscopy.

    PubMed

    Ranjith, G; Parvathy, R; Vikas, V; Chandrasekharan, Kesavadas; Nair, Suresh

    2015-04-01

    With the advent of new imaging modalities, radiologists are faced with handling increasing volumes of data for diagnosis and treatment planning. The use of automated and intelligent systems is becoming essential in such a scenario. Machine learning, a branch of artificial intelligence, is increasingly being used in medical image analysis applications such as image segmentation, registration and computer-aided diagnosis and detection. Histopathological analysis is currently the gold standard for classification of brain tumors. The use of machine learning algorithms along with extraction of relevant features from magnetic resonance imaging (MRI) holds promise of replacing conventional invasive methods of tumor classification. The aim of the study is to classify gliomas into benign and malignant types using MRI data. Retrospective data from 28 patients who were diagnosed with glioma were used for the analysis. WHO Grade II (low-grade astrocytoma) was classified as benign while Grade III (anaplastic astrocytoma) and Grade IV (glioblastoma multiforme) were classified as malignant. Features were extracted from MR spectroscopy. The classification was done using four machine learning algorithms: multilayer perceptrons, support vector machine, random forest and locally weighted learning. Three of the four machine learning algorithms gave an area under ROC curve in excess of 0.80. Random forest gave the best performance in terms of AUC (0.911) while sensitivity was best for locally weighted learning (86.1%). The performance of different machine learning algorithms in the classification of gliomas is promising. An even better performance may be expected by integrating features extracted from other MR sequences. © The Author(s) 2015 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  9. Standard method of test for grindability of coal by the Hardgrove-machine method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1975-01-01

    A procedure is described for sampling coal, grinding in a Hardgrove grinding machine, and passing through standard sieves to determine the degree of pulverization of coals. The grindability index of the coal tested is calculated from a calibration chart prepared by plotting weight of material passing a No. 200 sieve versus the Hardgrove Grindability Index for the standard reference samples. The Hardgrove machine is shown schematically. The method for preparing and determining grindability indexes of standard reference samples is given in the appendix. (BLM)
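Reading a grindability index off the calibration chart amounts to interpolating between the standard reference points; a minimal sketch with hypothetical chart values:

```python
import numpy as np

# Hypothetical calibration chart: weight (g) passing a No. 200 sieve for four
# standard reference coals vs. their assigned Hardgrove Grindability Index (HGI)
weight_passing = np.array([5.2, 9.8, 15.1, 21.4])   # must be increasing for np.interp
hgi_reference = np.array([40.0, 60.0, 80.0, 100.0])

# Read the index of a test coal off the chart by linear interpolation
test_weight = 12.0
hgi_test = float(np.interp(test_weight, weight_passing, hgi_reference))
```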

  10. Combination of rs-fMRI and sMRI Data to Discriminate Autism Spectrum Disorders in Young Children Using Deep Belief Network.

    PubMed

    Akhavan Aghdam, Maryam; Sharifi, Arash; Pedram, Mir Mohsen

    2018-05-07

    In recent years, the use of advanced magnetic resonance (MR) imaging methods such as functional magnetic resonance imaging (fMRI) and structural magnetic resonance imaging (sMRI) has greatly increased in neuropsychiatric disorders. Deep learning is a branch of machine learning that is increasingly being used for applications of medical image analysis such as computer-aided diagnosis. For this classification and representation-learning task, this study utilized one of the most powerful deep learning algorithms, the deep belief network (DBN), on combined data from the Autism Brain Imaging Data Exchange I and II (ABIDE I and ABIDE II) datasets. The DBN was applied to a combination of resting-state fMRI (rs-fMRI), gray matter (GM), and white matter (WM) data, based on brain regions defined using automated anatomical labeling (AAL), in order to classify autism spectrum disorders (ASDs) from typical controls (TCs). Since the diagnosis of ASD is much more effective at an early age, only 185 individuals (116 ASD and 69 TC) ranging in age from 5 to 10 years were included in this analysis. Whereas older methods consider only simple low-level features extracted from neuroimages, the proposed method exploits the latent, abstract high-level features inside the rs-fMRI and sMRI data. Moreover, combining multiple data types and increasing the depth of the DBN can improve classification accuracy. In this study, the best combination comprised rs-fMRI, GM, and WM for a DBN of depth 3, with 65.56% accuracy (sensitivity = 84%, specificity = 32.96%, F1 score = 74.76%) obtained via 10-fold cross-validation. This result outperforms previously presented methods on the ABIDE I dataset.
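A rough approximation of a depth-3 DBN can be sketched in scikit-learn as a stack of Bernoulli RBMs feeding a logistic classifier; this is unsupervised pretraining only, without the joint fine-tuning of a full DBN, and the data here are synthetic placeholders for the combined rs-fMRI/GM/WM features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(5)

# Synthetic stand-in for concatenated rs-fMRI + GM + WM features over AAL regions
X = rng.random((185, 90))            # 185 subjects (116 ASD, 69 TC in the study)
y = np.r_[np.ones(116), np.zeros(69)].astype(int)

# Depth-3 stack of RBMs feeding a logistic classifier: a DBN-like architecture
dbn_like = Pipeline([
    ("scale", MinMaxScaler()),                     # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=64, n_iter=5, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, n_iter=5, random_state=0)),
    ("rbm3", BernoulliRBM(n_components=16, n_iter=5, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])

acc = cross_val_score(dbn_like, X, y, cv=10).mean()   # 10-fold cross-validation
```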

  11. Geometric Calibration of Full Spherical Panoramic Ricoh-Theta Camera

    NASA Astrophysics Data System (ADS)

    Aghayari, S.; Saadatseresht, M.; Omidalizarandi, M.; Neumann, I.

    2017-05-01

    A novel calibration process is proposed for the RICOH THETA, a full-view fisheye camera with numerous applications as a low-cost sensor in disciplines such as photogrammetry, robotics and machine vision. Ricoh developed this camera in 2014; it consists of two lenses and is able to capture the whole surrounding environment in one shot. In this research, each lens is calibrated separately and the interior/relative orientation parameters (IOPs and ROPs) of the camera are determined on the basis of a designed calibration network, using the central and side images captured by the two lenses. Accordingly, the designed calibration network is treated as a free distortion grid and applied to the measured control points in image space as correction terms by means of bilinear interpolation. After these corrections, image coordinates are transformed to the unit sphere, an intermediate space between object space and image space, in the form of spherical coordinates. Afterwards, the IOPs and EOPs of each lens are determined separately through a statistical bundle adjustment procedure based on collinearity condition equations. Subsequently, the ROPs of the two lenses are computed from both sets of EOPs. Our experiments show that by applying a 3×3 free distortion grid, image measurement residuals diminish from 1.5 to 0.25 degrees on the aforementioned unit sphere.
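The two geometric steps described (bilinear interpolation of a 3×3 correction grid, then mapping corrected coordinates to the unit sphere) can be sketched as follows; the grid values and angles are hypothetical:

```python
import numpy as np

# 3x3 free distortion grid over the image (hypothetical corrections, normalized coords)
corr = np.array([[0.0, 0.1, 0.0],
                 [0.2, 0.3, 0.1],
                 [0.0, 0.2, 0.0]])       # correction value at each grid node

def bilinear(u, v):
    """Bilinearly interpolate the 3x3 correction grid at normalized point (u, v)."""
    i = min(int(u * 2), 1)               # cell index along x (2 cells for 3 nodes)
    j = min(int(v * 2), 1)
    fu = u * 2 - i                       # fractional position inside the cell
    fv = v * 2 - j
    c00, c10 = corr[j, i], corr[j, i + 1]
    c01, c11 = corr[j + 1, i], corr[j + 1, i + 1]
    return (c00 * (1 - fu) * (1 - fv) + c10 * fu * (1 - fv)
            + c01 * (1 - fu) * fv + c11 * fu * fv)

def to_unit_sphere(theta, phi):
    """Spherical coordinates (polar theta, azimuth phi) -> unit 3D vector."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

val = bilinear(0.5, 0.5)                 # correction at the image center
vec = to_unit_sphere(np.pi / 3, np.pi / 4)
```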

  12. An RF-induced voltage sensor for investigating pacemaker safety in MRI.

    PubMed

    Barbier, Thérèse; Piumatti, Roberto; Hecker, Bertrand; Odille, Freddy; Felblinger, Jacques; Pasquier, Cédric

    2014-12-01

    Magnetic resonance imaging (MRI) is inadvisable for patients with pacemakers, as radiofrequency (RF) voltages induced in the pacemaker leads may cause the device to malfunction. Our goal is to develop a sensor to measure such RF-induced voltages during MRI safety tests. A sensor was designed (16.6 cm(2)) for measuring voltages at the connection between the pacemaker lead and its case. The induced voltage is demodulated, digitized, and transferred by optical fibres. The sensor was calibrated on the bench using RF pulses of known amplitude and duration. Then the sensor was tested during MRI scanning at 1.5 T in a saline gel filled phantom. Bench tests showed measurement errors below 5% with a (-40 V; +40 V) range, a precision of 0.06 V, and a temporal resolution of 24.2 μs. In MRI tests, variability in the measured voltages was below 3.7% for 996 measurements with different sensors and RF exposure. Coupling between the sensor and the MRI electromagnetic environment was estimated with a second sensor connected and was below 6.2%. For a typical clinical MRI sequence, voltages around ten Vp were detected. We have built an accurate and reproducible tool for measuring RF-induced voltages in pacemaker leads during MR safety investigations. The sensor might also be used with other conducting cables including those used for electrocardiography and neurostimulation.
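The bench calibration described (fitting sensor readout against RF pulses of known amplitude, then inverting the fit to convert readings back to volts) can be sketched as a linear least-squares fit; the readout counts below are hypothetical:

```python
import numpy as np

# Bench calibration: sensor readout (ADC counts) vs. known applied RF amplitudes (V)
applied_v = np.array([-40.0, -20.0, 0.0, 20.0, 40.0])
readout = np.array([-812.0, -404.0, 3.0, 409.0, 816.0])   # hypothetical ADC counts

# Fit readout = gain * voltage + offset, then invert to convert counts to volts
gain, offset = np.polyfit(applied_v, readout, 1)

def counts_to_volts(counts):
    return (counts - offset) / gain

# Residual measurement error at each calibration point
errors = np.abs(counts_to_volts(readout) - applied_v)
```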

  13. Stepwise Regression Analysis of MDOE Balance Calibration Data Acquired at DNW

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard; Philipsen, Iwan

    2007-01-01

    This paper reports a comparison of two experiment design methods applied in the calibration of a strain-gage balance. One features a 734-point test matrix in which loads are varied systematically according to a method commonly applied in aerospace research and known in the literature of experiment design as One Factor At a Time (OFAT) testing. Two variations of an alternative experiment design were also executed on the same balance, each with different features of an MDOE experiment design. The Modern Design of Experiments (MDOE) is an integrated process of experiment design, execution, and analysis applied at NASA's Langley Research Center to achieve significant reductions in cycle time, direct operating cost, and experimental uncertainty in aerospace research generally and in balance calibration experiments specifically. Personnel in the Instrumentation and Controls Department of the German Dutch Wind Tunnels (DNW) have applied MDOE methods to evaluate them in the calibration of a balance using an automated calibration machine. The data have been sent to Langley Research Center for analysis and comparison. This paper reports key findings from this analysis. The chief result is that a 100-point calibration exploiting MDOE principles delivered quality comparable to a 700+ point OFAT calibration with significantly reduced cycle time and attendant savings in direct and indirect costs. While the DNW test matrices implemented key MDOE principles and produced excellent results, additional MDOE concepts implemented in balance calibrations at Langley Research Center are also identified and described.
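The stepwise regression analysis named in the title can be sketched as greedy forward selection of calibration terms by residual sum of squares; the regressors, coefficients and stopping tolerance here are illustrative assumptions, not the balance model actually used:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic balance calibration: the response depends on two of four candidate terms
n = 100
X = rng.normal(size=(n, 4))                     # candidate regressors (loads, interactions)
y = 2.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.05, size=n)

def forward_select(X, y, tol=0.05):
    """Greedy forward stepwise regression: add the regressor that most reduces RSS."""
    selected, remaining = [], list(range(X.shape[1]))
    rss = float(np.sum((y - y.mean()) ** 2))
    while remaining:
        best = None
        for j in remaining:
            A = np.c_[np.ones(len(y)), X[:, selected + [j]]]
            coef, res, *_ = np.linalg.lstsq(A, y, rcond=None)
            r = float(res[0]) if res.size else float(np.sum((y - A @ coef) ** 2))
            if best is None or r < best[1]:
                best = (j, r)
        if rss - best[1] < tol * rss:            # stop when the improvement is negligible
            break
        selected.append(best[0])
        remaining.remove(best[0])
        rss = best[1]
    return selected

model_terms = forward_select(X, y)
```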

  14. Differentiating between bipolar and unipolar depression in functional and structural MRI studies.

    PubMed

    Han, Kyu-Man; De Berardis, Domenico; Fornaro, Michele; Kim, Yong-Ku

    2018-03-28

    Distinguishing depression in bipolar disorder (BD) from unipolar depression (UD) solely based on clinical clues is difficult, which has led to the exploration of promising neural markers in neuroimaging measures for discriminating between BD depression and UD. In this article, we review structural and functional magnetic resonance imaging (MRI) studies that directly compare UD and BD depression based on neuroimaging modalities including functional MRI studies on regional brain activation or functional connectivity, structural MRI on gray or white matter morphology, and pattern classification analyses using a machine learning approach. Numerous studies have reported distinct functional and structural alterations in emotion- or reward-processing neural circuits between BD depression and UD. Different activation patterns in neural networks including the amygdala, anterior cingulate cortex (ACC), prefrontal cortex (PFC), and striatum during emotion-, reward-, or cognition-related tasks have been reported between BD and UD. A stronger functional connectivity pattern in BD was pronounced in default mode and in frontoparietal networks and brain regions including the PFC, ACC, parietal and temporal regions, and thalamus compared to UD. Gray matter volume differences in the ACC, hippocampus, amygdala, and dorsolateral prefrontal cortex (DLPFC) have been reported between BD and UD, along with a thinner DLPFC in BD compared to UD. BD showed reduced integrity in the anterior part of the corpus callosum and posterior cingulum compared to UD. Several studies performed pattern classification analysis using structural and functional MRI data to distinguish between UD and BD depression using a supervised machine learning approach, which yielded a moderate level of accuracy in classification. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. Deep Learning for Brain MRI Segmentation: State of the Art and Future Directions.

    PubMed

    Akkus, Zeynettin; Galimzianova, Alfiia; Hoogi, Assaf; Rubin, Daniel L; Erickson, Bradley J

    2017-08-01

    Quantitative analysis of brain MRI is routine for many neurological diseases and conditions and relies on accurate segmentation of structures of interest. Deep learning-based segmentation approaches for brain MRI are gaining interest due to their self-learning and generalization ability over large amounts of data. As the deep learning architectures are becoming more mature, they gradually outperform previous state-of-the-art classical machine learning algorithms. This review aims to provide an overview of current deep learning-based segmentation approaches for quantitative brain MRI. First we review the current deep learning architectures used for segmentation of anatomical brain structures and brain lesions. Next, the performance, speed, and properties of deep learning approaches are summarized and discussed. Finally, we provide a critical assessment of the current state and identify likely future developments and trends.

  16. Testing the quality of images for permanent magnet desktop MRI systems using specially designed phantoms.

    PubMed

    Qiu, Jianfeng; Wang, Guozhu; Min, Jiao; Wang, Xiaoyan; Wang, Pengcheng

    2013-12-21

    Our aim was to measure the performance of desktop magnetic resonance imaging (MRI) systems using specially designed phantoms, by testing imaging parameters and analysing image quality. We designed multifunction phantoms with diameters of 18 and 60 mm for desktop MRI scanners in accordance with the American Association of Physicists in Medicine (AAPM) report no. 28. We scanned the phantoms with three permanent-magnet 0.5 T desktop MRI systems, measured the MRI image parameters, and analysed image quality by comparing the data with the AAPM criteria and Chinese national standards. Image parameters included: resonance frequency, high-contrast spatial resolution, low-contrast object detectability, slice thickness, geometrical distortion, signal-to-noise ratio (SNR), and image uniformity. The image parameters of the three desktop MRI machines could be measured using our specially designed phantoms, and most parameters were in line with the MRI quality control criteria, including: resonance frequency, high-contrast spatial resolution, low-contrast object detectability, slice thickness, geometrical distortion, image uniformity and slice position accuracy. However, SNR was significantly lower than in some references. Imaging tests and quality control are necessary for desktop MRI systems, and should be performed with the applicable phantom and corresponding standards.
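Two of the listed parameters, SNR and image uniformity, can be computed from phantom ROIs; a minimal sketch using one common QC definition of each (the exact AAPM formulas may differ), on a synthetic phantom slice:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic phantom slice: a bright uniform disc on a noisy background
img = rng.normal(loc=20.0, scale=5.0, size=(128, 128))      # background noise
yy, xx = np.mgrid[:128, :128]
disc = (yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2
img[disc] += 500.0

signal_roi = img[54:74, 54:74]                # central ROI inside the phantom
background = img[:20, :20]                    # corner air ROI

# SNR: mean signal over background standard deviation (one common QC definition)
snr = signal_roi.mean() / background.std()

# Percent integral uniformity: 100 * (1 - (Smax - Smin) / (Smax + Smin))
smax, smin = signal_roi.max(), signal_roi.min()
uniformity = 100.0 * (1.0 - (smax - smin) / (smax + smin))
```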

  17. Multivariate data analysis and machine learning in Alzheimer's disease with a focus on structural magnetic resonance imaging.

    PubMed

    Falahati, Farshad; Westman, Eric; Simmons, Andrew

    2014-01-01

    Machine learning algorithms and multivariate data analysis methods have been widely utilized in the field of Alzheimer's disease (AD) research in recent years. Advances in medical imaging and medical image analysis have provided a means to generate and extract valuable neuroimaging information. Automatic classification techniques provide tools to analyze this information and observe inherent disease-related patterns in the data. In particular, these classifiers have been used to discriminate AD patients from healthy control subjects and to predict conversion from mild cognitive impairment to AD. In this paper, recent studies are reviewed that have used machine learning and multivariate analysis in the field of AD research. The main focus is on studies that used structural magnetic resonance imaging (MRI), but studies that included positron emission tomography and cerebrospinal fluid biomarkers in addition to MRI are also considered. A wide variety of materials and methods has been employed in different studies, resulting in a range of different outcomes. Influential factors such as classifiers, feature extraction algorithms, feature selection methods, validation approaches, and cohort properties are reviewed, as well as key MRI-based and multi-modal based studies. Current and future trends are discussed.

  18. Regional autonomy changes in resting-state functional MRI in patients with HIV associated neurocognitive disorder

    NASA Astrophysics Data System (ADS)

    DSouza, Adora M.; Abidin, Anas Z.; Chockanathan, Udaysankar; Wismüller, Axel

    2018-03-01

    In this study, we investigate whether there are discernable changes in influence that brain regions have on themselves once patients show symptoms of HIV Associated Neurocognitive Disorder (HAND) using functional MRI (fMRI). Simple functional connectivity measures, such as correlation cannot reveal such information. To this end, we use mutual connectivity analysis (MCA) with Local Models (LM), which reveals a measure of influence in terms of predictability. Once such measures of interaction are obtained, we train two classifiers to characterize difference in patterns of regional self-influence between healthy subjects and subjects presenting with HAND symptoms. The two classifiers we use are Support Vector Machines (SVM) and Localized Generalized Matrix Learning Vector Quantization (LGMLVQ). Performing machine learning on fMRI connectivity measures is popularly known as multi-voxel pattern analysis (MVPA). By performing such an analysis, we are interested in studying the impact HIV infection has on an individual's brain. The high area under receiver operating curve (AUC) and accuracy values for 100 different train/test separations using MCA-LM self-influence measures (SVM: mean AUC=0.86, LGMLVQ: mean AUC=0.88, SVM and LGMLVQ: mean accuracy=0.78) compared with standard MVPA analysis using cross-correlation between fMRI time-series (SVM: mean AUC=0.58, LGMLVQ: mean AUC=0.57), demonstrates that self-influence features can be more discriminative than measures of interaction between time-series pairs. Furthermore, our results suggest that incorporating measures of self-influence in MVPA analysis used commonly in fMRI analysis has the potential to provide a performance boost and indicate important changes in dynamics of regions in the brain as a consequence of HIV infection.
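Evaluating a classifier over 100 different train/test separations, as done here for the MCA-LM self-influence features, can be sketched with repeated stratified splits; the features below are synthetic stand-ins, not connectivity measures:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.svm import SVC

rng = np.random.default_rng(8)

# Synthetic stand-in for per-subject feature vectors (e.g., regional self-influence)
X = rng.normal(size=(60, 30))
y = np.r_[np.zeros(30), np.ones(30)].astype(int)
X[y == 1] += 0.5                      # inject a group difference

# Mean AUC over 100 different train/test separations
splitter = StratifiedShuffleSplit(n_splits=100, test_size=0.3, random_state=0)
aucs = []
for train, test in splitter.split(X, y):
    clf = SVC(kernel="linear").fit(X[train], y[train])
    aucs.append(roc_auc_score(y[test], clf.decision_function(X[test])))
mean_auc = float(np.mean(aucs))
```

Averaging over many random separations gives a more stable performance estimate than a single split, at the cost of the folds no longer being disjoint.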

  19. Improvement of the repeatability of parallel transmission at 7T using interleaved acquisition in the calibration scan.

    PubMed

    Kameda, Hiroyuki; Kudo, Kohsuke; Matsuda, Tsuyoshi; Harada, Taisuke; Iwadate, Yuji; Uwano, Ikuko; Yamashita, Fumio; Yoshioka, Kunihiro; Sasaki, Makoto; Shirato, Hiroki

    2017-12-04

Respiration-induced phase shift affects B0/B1+ mapping repeatability in parallel transmission (pTx) calibration for 7T brain MRI, but is improved by breath-holding (BH). However, BH cannot be applied during long scans. To examine whether interleaved acquisition during calibration scanning could improve pTx repeatability and image homogeneity. Prospective. Nine healthy subjects. 7T MRI with a two-channel RF transmission system was used. Calibration scanning for B0/B1+ mapping was performed under sequential acquisition/free-breathing (Seq-FB), Seq-BH, and interleaved acquisition/FB (Int-FB) conditions. The B0 map was calculated with two echo times, and the B1+ map was obtained using the Bloch-Siegert method. Actual flip-angle imaging (AFI) and gradient echo (GRE) imaging were performed using pTx and quadrature-Tx (qTx). All scans were acquired in five sessions. Repeatability was evaluated using intersession standard deviation (SD) or coefficient of variance (CV), and in-plane homogeneity was evaluated using in-plane CV. A paired t-test with Bonferroni correction for multiple comparisons was used. The intersession CV/SDs for the B0/B1+ maps were significantly smaller in Int-FB than in Seq-FB (Bonferroni-corrected P < 0.05 for all). The intersession CVs for the AFI and GRE images were also significantly smaller in Int-FB, Seq-BH, and qTx than in Seq-FB (Bonferroni-corrected P < 0.05 for all). The in-plane CVs for the AFI and GRE images in Seq-FB, Int-FB, and Seq-BH were significantly smaller than in qTx (Bonferroni-corrected P < 0.01 for all). Using interleaved acquisition during calibration scans of pTx for 7T brain MRI improved the repeatability of B0/B1+ mapping, AFI, and GRE images, without BH. Level of Evidence: 1. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017. © 2017 International Society for Magnetic Resonance in Medicine.
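The two statistics used in this record are simple to compute once the five sessions' maps are stacked: intersession CV (per-voxel SD across sessions over per-voxel mean) and in-plane CV (SD over the slice within one session over its mean). A minimal sketch on a simulated B1+ map; the 64×64 grid and noise levels are illustrative assumptions, not 7T data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Five sessions of a simulated 64x64 B1+ map; session-to-session noise
# stands in for respiration-induced phase-shift effects (all synthetic).
true_map = 1.0 + 0.1 * rng.normal(size=(64, 64))
sessions = true_map + 0.02 * rng.normal(size=(5, 64, 64))

# Intersession CV per voxel: SD across sessions / mean across sessions.
inter_cv = sessions.std(axis=0) / sessions.mean(axis=0)

# In-plane CV per session: SD over the slice / mean over the slice.
inplane_cv = sessions.std(axis=(1, 2)) / sessions.mean(axis=(1, 2))

print(round(float(inter_cv.mean()), 4), round(float(inplane_cv.mean()), 4))
```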

  20. Invited Article: A novel calibration method for the JET real-time far infrared polarimeter and integration of polarimetry-based line-integrated density measurements for machine protection of a fusion plant.

    PubMed

    Boboc, A; Bieg, B; Felton, R; Dalley, S; Kravtsov, Yu

    2015-09-01

In this paper, we present the implementation of a new calibration for the JET real-time polarimeter based on the complex amplitude ratio technique, together with a new self-validation mechanism for the data. This allowed easy integration of the polarimetry measurements into the JET plasma density control (gas feedback control) as well as the machine protection systems (neutral beam injection heating safety interlocks). The new addition was used successfully during the 2014 JET campaign and is envisaged to operate routinely from the 2015 campaign onwards in any plasma condition (including ITER-relevant scenarios). This mode of operation has elevated the importance of polarimetry as a diagnostic tool in view of future fusion experiments.

  1. Functional connectivity analysis of resting-state fMRI networks in nicotine dependent patients

    NASA Astrophysics Data System (ADS)

    Smith, Aria; Ehtemami, Anahid; Fratte, Daniel; Meyer-Baese, Anke; Zavala-Romero, Olmo; Goudriaan, Anna E.; Schmaal, Lianne; Schulte, Mieke H. J.

    2016-03-01

Brain imaging studies identified brain networks that play a key role in nicotine dependence-related behavior. Functional connectivity of the brain is dynamic; it changes over time due to different causes such as learning, or quitting a habit. Functional connectivity analysis is useful in discovering and comparing patterns between functional magnetic resonance imaging (fMRI) scans of patients' brains. In the resting state, the patient is asked to remain calm and not perform any task, to minimize the contribution of external stimuli. Studies of resting-state fMRI networks have shown functionally connected brain regions that have a high level of activity during this state. In this project, we use the relationships between these functionally connected brain regions to identify nicotine-dependent patients who underwent smoking cessation treatment. Our approach is based on comparing the sets of connections between the fMRI scans acquired before and after treatment. We applied support vector machines, a machine learning technique, to classify patients based on whether they received the treatment or the placebo. Using the functional connectivity (CONN) toolbox, we formed a correlation matrix based on the functional connectivity between different regions of the brain. The experimental results show that there is inadequate predictive information to classify nicotine-dependent patients using the SVM classifier. We propose that other classification methods be explored to better classify these patients.
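The connectivity features behind such an analysis are typically built by correlating ROI time series and keeping the upper triangle of the resulting matrix as one feature vector per scan. A minimal numpy sketch with simulated time series; the ROI count, series length, and injected correlation are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated resting-state time series: 200 volumes x 10 ROIs, with one
# correlated ROI pair standing in for functionally connected regions.
ts = rng.normal(size=(200, 10))
ts[:, 1] = 0.7 * ts[:, 0] + 0.3 * rng.normal(size=200)

corr = np.corrcoef(ts, rowvar=False)     # 10 x 10 connectivity matrix
iu = np.triu_indices_from(corr, k=1)     # unique region pairs only
features = np.arctanh(corr[iu])          # Fisher z: one vector per scan

print(features.shape)
```

Stacking one such vector per scan (before vs. after treatment) yields the feature matrix that a classifier such as an SVM would be trained on.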

  2. Optimization and validation of accelerated golden-angle radial sparse MRI reconstruction with self-calibrating GRAPPA operator gridding.

    PubMed

    Benkert, Thomas; Tian, Ye; Huang, Chenchan; DiBella, Edward V R; Chandarana, Hersh; Feng, Li

    2018-07-01

    Golden-angle radial sparse parallel (GRASP) MRI reconstruction requires gridding and regridding to transform data between radial and Cartesian k-space. These operations are repeatedly performed in each iteration, which makes the reconstruction computationally demanding. This work aimed to accelerate GRASP reconstruction using self-calibrating GRAPPA operator gridding (GROG) and to validate its performance in clinical imaging. GROG is an alternative gridding approach based on parallel imaging, in which k-space data acquired on a non-Cartesian grid are shifted onto a Cartesian k-space grid using information from multicoil arrays. For iterative non-Cartesian image reconstruction, GROG is performed only once as a preprocessing step. Therefore, the subsequent iterative reconstruction can be performed directly in Cartesian space, which significantly reduces computational burden. Here, a framework combining GROG with GRASP (GROG-GRASP) is first optimized and then compared with standard GRASP reconstruction in 22 prostate patients. GROG-GRASP achieved approximately 4.2-fold reduction in reconstruction time compared with GRASP (∼333 min versus ∼78 min) while maintaining image quality (structural similarity index ≈ 0.97 and root mean square error ≈ 0.007). Visual image quality assessment by two experienced radiologists did not show significant differences between the two reconstruction schemes. With a graphics processing unit implementation, image reconstruction time can be further reduced to approximately 14 min. The GRASP reconstruction can be substantially accelerated using GROG. This framework is promising toward broader clinical application of GRASP and other iterative non-Cartesian reconstruction methods. Magn Reson Med 80:286-293, 2018. © 2017 International Society for Magnetic Resonance in Medicine. © 2017 International Society for Magnetic Resonance in Medicine.

  3. Position calibration of a 3-DOF hand-controller with hybrid structure

    NASA Astrophysics Data System (ADS)

    Zhu, Chengcheng; Song, Aiguo

    2017-09-01

A hand-controller is a human-robot interactive device, which measures the 3-DOF (Degree of Freedom) position of the human hand and sends it as a command to control robot movement. The device also receives 3-DOF force feedback from the robot and applies it to the human hand. Thus, the precision of 3-DOF position measurements is a key performance factor for hand-controllers. However, when using a hybrid-type 3-DOF hand-controller, various errors occur that are thought to originate from machining and assembly variations within the device. This paper presents a calibration method to improve the position tracking accuracy of hybrid-type hand-controllers by determining the actual size of the hand-controller parts. By re-measuring and re-calibrating this kind of hand-controller, the actual size of the key parts that cause errors is determined. Modifying the formula parameters with the actual sizes obtained during the calibration process improves the end position tracking accuracy of the device.

  4. Calibration and verification of thermographic cameras for geometric measurements

    NASA Astrophysics Data System (ADS)

    Lagüela, S.; González-Jorge, H.; Armesto, J.; Arias, P.

    2011-03-01

Infrared thermography is a technique with a growing range of applications. Quality assessment of the measurements performed with thermal cameras should be achieved through metrological calibration and verification. Infrared cameras acquire temperature and geometric information, although calibration and verification procedures are usual only for thermal data, using black bodies for these purposes. Moreover, the geometric information is important for many fields such as architecture, civil engineering and industry. This work presents a calibration procedure that allows photogrammetric restitution, and a portable artefact to verify the geometric accuracy, repeatability and drift of thermographic cameras. These results allow the incorporation of this information into the quality control processes of companies. A grid based on burning lamps is used for the geometric calibration of thermographic cameras. The artefact designed for the geometric verification consists of five Delrin spheres and seven cubes of different sizes. Metrological traceability for the artefact is obtained from a coordinate measuring machine. Two sets of targets with different reflectivity are fixed to the spheres and cubes to make data processing and photogrammetric restitution possible. Reflectivity was chosen as the material property because both the thermographic and visible cameras can detect it. Two thermographic cameras from the Flir and Nec manufacturers, and one visible camera from Jai, are calibrated, verified and compared using the calibration grids and the standard artefact. The calibration system based on burning lamps shows its capability to perform the internal orientation of the thermal cameras. Verification results show repeatability better than 1 mm for all cases, and better than 0.5 mm for the visible camera. As expected, accuracy is also higher for the visible camera, and the geometric comparison between thermographic cameras shows slightly better
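Verification against a traceable artefact of spheres comes down to fitting each measured sphere and comparing the fitted size with the CMM value. A minimal sketch of the standard algebraic least-squares sphere fit on simulated points; the centre, radius, and 0.1 mm noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Points sampled on a sphere of known (CMM-traceable) centre and radius
# [mm], with 0.1 mm measurement noise; all values are illustrative.
centre, radius = np.array([10.0, -5.0, 30.0]), 25.0
u = rng.normal(size=(500, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)     # random unit directions
pts = centre + radius * u + 0.1 * rng.normal(size=(500, 3))

# Algebraic least-squares sphere fit: |p|^2 = 2 c.p + (r^2 - |c|^2)
# is linear in the unknowns (c, d), so one lstsq call suffices.
A = np.c_[2 * pts, np.ones(len(pts))]
b = (pts ** 2).sum(axis=1)
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
c_fit, r_fit = sol[:3], np.sqrt(sol[3] + sol[:3] @ sol[:3])

print(np.round(c_fit, 2), round(float(r_fit), 2))
```

The deviation between `r_fit` and the traceable radius is the kind of accuracy figure reported in the verification.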

  5. Proton beam deflection in MRI fields: Implications for MRI-guided proton therapy.

    PubMed

    Oborn, B M; Dowdell, S; Metcalfe, P E; Crozier, S; Mohan, R; Keall, P J

    2015-05-01

This paper investigates, via magnetic modeling and Monte Carlo simulation, the ability to deliver proton beams to the treatment zone inside a split-bore MRI-guided proton therapy system. Field maps from a split-bore 1 T MRI-Linac system are used as input to geant4 Monte Carlo simulations which model the trajectory of proton beams during their paths to the isocenter of the treatment area. Both inline (along the MRI bore) and perpendicular (through the split-bore gap) orientations are simulated. Monoenergetic parallel and diverging beams of energy 90, 195, and 300 MeV starting from 1.5 and 5 m above isocenter are modeled. A phase space file detailing a 2D calibration pattern is used to set the particle starting positions, and their spatial location as they cross isocenter is recorded. No beam scattering, collimation, or modulation of the proton beams is modeled. In the inline orientation, the radial symmetry of the solenoidal style fringe field acts to rotate the protons around the beam's central axis. For protons starting at 1.5 m from isocenter, this rotation is 19° (90 MeV) and 9.8° (300 MeV). A minor focusing toward the beam's central axis is also seen, but is only significant, i.e., a 2 mm shift at 150 mm off-axis, for 90 MeV protons. For the perpendicular orientation, the main MRI field and the near fringe field act most strongly, deflecting the protons in a consistent direction. When starting from 1.5 m above isocenter, shifts of 135 mm (90 MeV) and 65 mm (300 MeV) were observed. Further to this, off-axis protons are slightly deflected toward or away from the central axis in the direction perpendicular to the main deflection direction. This leads to a distortion of the phase space pattern, not just a shift. This distortion increases from zero at the central axis to 10 mm (90 MeV) and 5 mm (300 MeV) for a proton 150 mm off-axis. In both orientations, there is a small but subtle difference in the deflection and distortion pattern between protons fired parallel to the
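The energy dependence of the lateral shift follows from the relativistic gyroradius, r = pc/(qcB): lower-energy protons carry less momentum and bend more. The sketch below evaluates this for a uniform transverse field over an assumed 0.3 m field extent, an idealised stand-in for the paper's measured fringe-field maps, not a reproduction of them:

```python
import numpy as np

# Relativistic gyroradius of a proton and its lateral deflection after
# traversing a region of uniform transverse field. The 1 T strength and
# 0.3 m field extent are illustrative assumptions.
MP_MEV = 938.272          # proton rest energy [MeV]

def deflection_mm(T_mev, B_tesla=1.0, path_m=0.3):
    pc = np.sqrt(T_mev**2 + 2 * T_mev * MP_MEV)   # momentum * c [MeV]
    r = pc / (299.792458 * B_tesla)               # gyroradius [m]
    return 1e3 * (r - np.sqrt(r**2 - path_m**2))  # sagitta-style shift [mm]

d90, d300 = deflection_mm(90.0), deflection_mm(300.0)
print(round(float(d90), 1), round(float(d300), 1))
```

As in the paper, the 90 MeV beam is deflected roughly twice as far as the 300 MeV beam for the same field region.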

  6. Machine learning for the automatic localisation of foetal body parts in cine-MRI scans

    NASA Astrophysics Data System (ADS)

    Bowles, Christopher; Nowlan, Niamh C.; Hayat, Tayyib T. A.; Malamateniou, Christina; Rutherford, Mary; Hajnal, Joseph V.; Rueckert, Daniel; Kainz, Bernhard

    2015-03-01

Being able to automate the location of individual foetal body parts has the potential to dramatically reduce the work required to analyse time resolved foetal Magnetic Resonance Imaging (cine-MRI) scans, for example, for use in the automatic evaluation of foetal development. Currently, manual preprocessing of every scan is required to locate body parts before analysis can be performed, leading to a significant time overhead. With the volume of available scans set to increase as cine-MRI becomes more prevalent in clinical practice, this stage of manual preprocessing is a bottleneck, limiting the data available for further analysis. Any tools which can automate this process will therefore save many hours of research time and increase the rate of new discoveries in what is a key area in understanding early human development. Here we present a series of techniques which can be applied to foetal cine-MRI scans in order to first locate and then differentiate between individual body parts. A novel approach to maternal movement suppression and segmentation using Fourier transforms is put forward as a preprocessing step, allowing for easy extraction of short movements of individual foetal body parts via the clustering of optical flow vector fields. These body part movements are compared to a labelled database and probabilistically classified before being spatially and temporally combined to give a final estimate for the location of each body part.
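As one simple illustration of Fourier-based movement suppression (not necessarily the authors' exact pipeline), slow whole-field motion can be removed by zeroing the lowest temporal frequencies of each pixel's time course, leaving faster localised movements intact. The data and the 0.05 cycles/frame cut-off below are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy cine series: 64 frames x 32 pixels. A slow whole-field drift stands
# in for maternal movement; a fast oscillation in a few pixels stands in
# for a foetal body-part movement. (Illustrative, not real cine-MRI.)
t = np.arange(64)
series = np.outer(np.sin(2 * np.pi * t / 64), np.ones(32))    # slow drift
series[:, 12:16] += np.sin(2 * np.pi * 10 * t / 64)[:, None]  # fast motion
series += 0.01 * rng.normal(size=(64, 32))

# Temporal high-pass via FFT: zero the lowest temporal frequencies, which
# suppresses the slow component and keeps the fast one.
F = np.fft.fft(series, axis=0)
F[np.abs(np.fft.fftfreq(64)) < 0.05] = 0
filtered = np.fft.ifft(F, axis=0).real

print(round(float(filtered[:, :8].std()), 3),
      round(float(filtered[:, 12:16].std()), 3))
```

After filtering, the drift-only pixels are close to zero while the fast-moving pixels retain their signal, which is what makes the subsequent optical-flow clustering tractable.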

  7. Framework for 2D-3D image fusion of infrared thermography with preoperative MRI.

    PubMed

    Hoffmann, Nico; Weidner, Florian; Urban, Peter; Meyer, Tobias; Schnabel, Christian; Radev, Yordan; Schackert, Gabriele; Petersohn, Uwe; Koch, Edmund; Gumhold, Stefan; Steiner, Gerald; Kirsch, Matthias

    2017-11-27

Multimodal medical image fusion combines information of one or more images in order to improve the diagnostic value. While previous applications mainly focus on merging images from computed tomography, magnetic resonance imaging (MRI), ultrasonic and single-photon emission computed tomography, we propose a novel approach for the registration and fusion of preoperative 3D MRI with intraoperative 2D infrared thermography. Image-guided neurosurgeries are based on neuronavigation systems, which further allow us to track the position and orientation of arbitrary cameras. Hereby, we are able to relate the 2D coordinate system of the infrared camera to the 3D MRI coordinate system. The registered image data are then combined by calibration-based image fusion in order to map our intraoperative 2D thermographic images onto the respective brain surface recovered from preoperative MRI. In extensive accuracy measurements, we found that the proposed framework achieves a mean accuracy of 2.46 mm.
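At the core of such calibration-based 2D-3D fusion is the pinhole projection of points from the 3D (MRI/world) coordinate system into the 2D camera image, using the pose supplied by the neuronavigation system. A minimal sketch; the intrinsics `K` and pose `(R, tvec)` are illustrative values, not the paper's calibration:

```python
import numpy as np

# Pinhole projection of a 3D point (MRI/world coordinates) into the 2D
# thermal image. Intrinsics and pose are illustrative assumptions.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                       # camera orientation from the tracker
tvec = np.array([0.0, 0.0, 0.5])    # camera offset along optical axis [m]

def project(p_world):
    p_cam = R @ p_world + tvec      # world -> camera coordinates
    uv = K @ p_cam                  # apply intrinsics
    return uv[:2] / uv[2]           # perspective divide -> pixel coords

print(project(np.array([0.01, -0.02, 0.0])))
```

Inverting this mapping along each pixel's ray onto the MRI-derived brain surface is what assigns a temperature to each surface point.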

  8. Coupling machine learning with mechanistic models to study runoff production and river flow at the hillslope scale

    NASA Astrophysics Data System (ADS)

    Marçais, J.; Gupta, H. V.; De Dreuzy, J. R.; Troch, P. A. A.

    2016-12-01

Geomorphological structure and geological heterogeneity of hillslopes are major controls on runoff responses. The diversity of hillslopes (morphological shapes and geological structures) on the one hand, and the highly nonlinear runoff response on the other, make it difficult to transpose what has been learnt at one specific hillslope to another. Therefore, making reliable predictions of runoff appearance or river flow for a given hillslope is a challenge. Classic model calibration (based on inverse-problem techniques) must be performed for each specific hillslope and requires calibration data, which is impractical when applied to thousands of cases. Here we propose a novel modeling framework that couples process-based models with a data-based approach. First, we develop a mechanistic model, based on hillslope storage Boussinesq equations (Troch et al. 2003), able to model nonlinear runoff responses to rainfall at the hillslope scale. Second, we set up a model database representing thousands of uncalibrated simulations. These simulations investigate different hillslope shapes (real ones obtained by analyzing a 5 m digital elevation model of Brittany, and synthetic ones), different hillslope geological structures (i.e. different parametrizations) and different hydrologic forcing terms (i.e. different infiltration chronicles). Then, we train a machine learning model on this physically based database. Machine learning performance is then assessed in a classic validation phase (testing it on new hillslopes and comparing machine-learning outputs with mechanistic ones). Finally, we use this machine learning model to identify the hillslope properties controlling runoff. This methodology will be further tested by combining synthetic datasets with real ones.
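The framework's core idea (run a mechanistic model many times without calibration, then train a data-based surrogate on the resulting library) can be sketched end to end. The reservoir model, parameter ranges, and quadratic surrogate below are toy stand-ins for the hillslope-storage Boussinesq simulations, chosen only to show the workflow:

```python
import numpy as np

rng = np.random.default_rng(5)

# "Mechanistic" stand-in: peak outflow of a nonlinear reservoir
# dS/dt = P - k (1 + slope) S^1.5, integrated by explicit Euler for a
# fixed rain pulse. Parameters and forcing are illustrative.
def simulate(k, slope):
    S, peak = 0.0, 0.0
    for step in range(200):
        P = 1.0 if step < 50 else 0.0        # rain pulse
        Q = k * (1 + slope) * S ** 1.5       # nonlinear outflow
        S = max(S + 0.1 * (P - Q), 0.0)
        peak = max(peak, Q)
    return peak

# Model library: 2000 uncalibrated runs over the parameter space.
params = rng.uniform([0.05, 0.0], [0.5, 1.0], size=(2000, 2))
y = np.array([simulate(k, s) for k, s in params])

# Data-based surrogate: quadratic polynomial features + least squares.
def feats(p):
    k, s = p[:, 0], p[:, 1]
    return np.c_[np.ones_like(k), k, s, k * s, k**2, s**2]

tr, te = slice(0, 1500), slice(1500, None)
w, *_ = np.linalg.lstsq(feats(params[tr]), y[tr], rcond=None)
pred = feats(params[te]) @ w
r2 = 1 - ((y[te] - pred)**2).sum() / ((y[te] - y[te].mean())**2).sum()
print(round(float(r2), 2))
```

The held-out R² plays the role of the validation phase described in the abstract; the fitted surrogate can then be interrogated to see which parameters control the response.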

  9. TU-F-BRB-02: Motion Artifacts and Suppression in MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, X.

The current clinical standard of organ respiratory imaging, 4D-CT, is fundamentally limited by poor soft-tissue contrast and imaging dose. These limitations are potential barriers to beneficial “4D” radiotherapy methods which optimize the target and OAR dose-volume considering breathing motion but rely on a robust motion characterization. Conversely, MRI imparts no known radiation risk and has excellent soft-tissue contrast. MRI-based motion management is therefore highly desirable and holds great promise to improve radiotherapy of moving cancers, particularly in the abdomen. Over the past decade, MRI techniques have improved significantly, making MR-based motion management clinically feasible. For example, cine MRI has high temporal resolution up to 10 f/s and has been used to track and/or characterize tumor motion, study correlation between external and internal motions. New MR technologies, such as 4D-MRI and MRI hybrid treatment machines (i.e. MR-linac or MR-Co60), have been recently developed. These technologies can lead to more accurate target volume determination and more precise radiation dose delivery via direct tumor gating or tracking. Despite all these promises, great challenges exist and the achievable clinical benefit of MRI-based tumor motion management has yet to be fully explored, much less realized. In this proposal, we will review novel MR-based motion management methods and technologies, the state-of-the-art concerning MRI development and clinical application and the barriers to more widespread adoption. Learning Objectives: Discuss the need of MR-based motion management for improving patient care in radiotherapy. Understand MR techniques for motion imaging and tumor motion characterization. Understand the current state of the art and future steps for clinical integration. Henry Ford Health System holds research agreements with Philips Healthcare. Research sponsored in part by a Henry Ford Health System Internal Mentored Grant.

  10. TU-F-BRB-00: MRI-Based Motion Management for RT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

The current clinical standard of organ respiratory imaging, 4D-CT, is fundamentally limited by poor soft-tissue contrast and imaging dose. These limitations are potential barriers to beneficial “4D” radiotherapy methods which optimize the target and OAR dose-volume considering breathing motion but rely on a robust motion characterization. Conversely, MRI imparts no known radiation risk and has excellent soft-tissue contrast. MRI-based motion management is therefore highly desirable and holds great promise to improve radiotherapy of moving cancers, particularly in the abdomen. Over the past decade, MRI techniques have improved significantly, making MR-based motion management clinically feasible. For example, cine MRI has high temporal resolution up to 10 f/s and has been used to track and/or characterize tumor motion, study correlation between external and internal motions. New MR technologies, such as 4D-MRI and MRI hybrid treatment machines (i.e. MR-linac or MR-Co60), have been recently developed. These technologies can lead to more accurate target volume determination and more precise radiation dose delivery via direct tumor gating or tracking. Despite all these promises, great challenges exist and the achievable clinical benefit of MRI-based tumor motion management has yet to be fully explored, much less realized. In this proposal, we will review novel MR-based motion management methods and technologies, the state-of-the-art concerning MRI development and clinical application and the barriers to more widespread adoption. Learning Objectives: Discuss the need of MR-based motion management for improving patient care in radiotherapy. Understand MR techniques for motion imaging and tumor motion characterization. Understand the current state of the art and future steps for clinical integration. Henry Ford Health System holds research agreements with Philips Healthcare. Research sponsored in part by a Henry Ford Health System Internal Mentored Grant.

  11. Game controller modification for fMRI hyperscanning experiments in a cooperative virtual reality environment.

    PubMed

    Trees, Jason; Snider, Joseph; Falahpour, Maryam; Guo, Nick; Lu, Kun; Johnson, Douglas C; Poizner, Howard; Liu, Thomas T

    2014-01-01

Hyperscanning, an emerging technique in which data from multiple interacting subjects' brains are simultaneously recorded, has become an increasingly popular way to address complex topics, such as "theory of mind." However, most previous fMRI hyperscanning experiments have been limited to abstract social interactions (e.g. phone conversations). Our new method utilizes a virtual reality (VR) environment used for military training, Virtual Battlespace 2 (VBS2), to create realistic avatar-avatar interactions and cooperative tasks. To control the virtual avatar, subjects use an MRI-compatible PlayStation 3 game controller, modified by removing all extraneous metal components and replacing any necessary ones with 3D-printed plastic models. Control of both scanners' operation is initiated by a VBS2 plugin to sync scanner time to the known time within the VR environment. Our modifications include: • modification of the game controller to be MRI compatible; • design of the VBS2 virtual environment for cooperative interactions; • syncing two MRI machines for simultaneous recording.

  12. Game controller modification for fMRI hyperscanning experiments in a cooperative virtual reality environment

    PubMed Central

    Trees, Jason; Snider, Joseph; Falahpour, Maryam; Guo, Nick; Lu, Kun; Johnson, Douglas C.; Poizner, Howard; Liu, Thomas T.

    2014-01-01

Hyperscanning, an emerging technique in which data from multiple interacting subjects’ brains are simultaneously recorded, has become an increasingly popular way to address complex topics, such as “theory of mind.” However, most previous fMRI hyperscanning experiments have been limited to abstract social interactions (e.g. phone conversations). Our new method utilizes a virtual reality (VR) environment used for military training, Virtual Battlespace 2 (VBS2), to create realistic avatar-avatar interactions and cooperative tasks. To control the virtual avatar, subjects use an MRI-compatible PlayStation 3 game controller, modified by removing all extraneous metal components and replacing any necessary ones with 3D-printed plastic models. Control of both scanners’ operation is initiated by a VBS2 plugin to sync scanner time to the known time within the VR environment. Our modifications include: • modification of the game controller to be MRI compatible; • design of the VBS2 virtual environment for cooperative interactions; • syncing two MRI machines for simultaneous recording. PMID:26150964

  13. Light-Field Correction for Spatial Calibration of Optical See-Through Head-Mounted Displays.

    PubMed

    Itoh, Yuta; Klinker, Gudrun

    2015-04-01

A critical requirement for AR applications with Optical See-Through Head-Mounted Displays (OST-HMD) is to project 3D information correctly into the current viewpoint of the user - more particularly, according to the user's eye position. Recently proposed interaction-free calibration methods [16], [17] automatically estimate this projection by tracking the user's eye position, thereby freeing users from tedious manual calibrations. However, the method is still prone to systematic calibration errors. Such errors stem from eye-/HMD-related factors and are not represented in the conventional eye-HMD model used for HMD calibration. This paper investigates one of these factors - the fact that optical elements of OST-HMDs distort incoming world-light rays before they reach the eye, just as corrective glasses do. Any OST-HMD requires an optical element to display a virtual screen. Each such optical element has different distortions. Since users see a distorted world through the element, ignoring this distortion degrades the projection quality. We propose a light-field correction method, based on a machine learning technique, which compensates for the world-scene distortion caused by OST-HMD optics. We demonstrate that our method reduces the systematic error and significantly increases the calibration accuracy of the interaction-free calibration.
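The correction learns a mapping that undoes the optics' distortion of incoming rays. A minimal regression sketch on synthetic 2D data: the radial distortion model, its 0.15 coefficient, and the polynomial basis are all illustrative assumptions standing in for the paper's learned light-field correction:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic optical distortion: world rays are bent radially before
# reaching the eye; we learn the inverse map from correspondences.
pts = rng.uniform(-1, 1, size=(400, 2))      # undistorted directions
r2 = (pts ** 2).sum(axis=1, keepdims=True)
distorted = pts * (1 + 0.15 * r2)            # assumed distortion model

# Polynomial (odd radial) regression from distorted -> undistorted.
def design(p):
    x, y = p[:, 0], p[:, 1]
    rr = x**2 + y**2
    return np.c_[x, y, x * rr, y * rr, x * rr**2, y * rr**2]

W, *_ = np.linalg.lstsq(design(distorted), pts, rcond=None)
residual = float(np.abs(design(distorted) @ W - pts).max())
print(residual)
```

The residual after correction is the analogue of the systematic error the paper reduces; any ray passed through the fitted map lands close to its true, undistorted direction.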

  14. Calibration of High Frequency MEMS Microphones

    NASA Technical Reports Server (NTRS)

    Shams, Qamar A.; Humphreys, William M.; Bartram, Scott M.; Zuckewar, Allan J.

    2007-01-01

Understanding and controlling aircraft noise is one of the major research topics of the NASA Fundamental Aeronautics Program. One of the measurement technologies used to acquire noise data is the microphone directional array (DA). Traditional directional array hardware, consisting of commercially available condenser microphones and preamplifiers, can be too expensive, and its installation in hard-walled wind tunnel test sections too complicated. An emerging micro-machining technology, coupled with the latest cutting-edge technologies for smaller and faster systems, has opened the way for the development of MEMS microphones. MEMS microphone devices are available on the market but suffer from certain important shortcomings. Based on early experiments with array prototypes, it was found that both the bandwidth and the sound pressure level dynamic range of the microphones should be increased significantly to improve the performance and flexibility of the overall array. Thus, in collaboration with an outside MEMS design vendor, NASA Langley modified a commercially available MEMS microphone, as shown in Figure 1, to meet the new requirements. Coupled with the design of the enhanced MEMS microphones was the development of a new calibration method for simultaneously obtaining the sensitivity and phase response of the devices over their entire broadband frequency range. Over the years, several methods have been used for microphone calibration. Some of the common methods are coupler (reciprocity, substitution, and simultaneous), pistonphone, electrostatic actuator, and free-field calibration (reciprocity, substitution, and simultaneous). Traditionally, electrostatic actuators (EA) have been used to characterize air-condenser microphones over wideband frequency ranges; however, MEMS microphones are not adaptable to the EA method due to their construction and very small diaphragm size. Hence a substitution-based, free-field method was developed to
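In a substitution-based calibration, the device under test replaces a reference microphone at the same point in the free field, and its sensitivity follows from the ratio of the two output voltages. A minimal sketch of that arithmetic with illustrative numbers (not measured values from this work):

```python
import numpy as np

# Substitution-style sensitivity transfer: the MEMS device under test
# (DUT) replaces a calibrated reference mic in the same sound field.
# All numbers below are illustrative.
ref_sens_db = -38.0            # reference sensitivity [dB re 1 V/Pa]
v_ref, v_dut = 0.020, 0.012    # rms outputs in the same field [V]

dut_sens_db = ref_sens_db + 20 * np.log10(v_dut / v_ref)
print(round(float(dut_sens_db), 2))
```

Repeating this at each excitation frequency (and comparing phase the same way) yields the broadband sensitivity and phase response the abstract describes.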

  15. Bayesian model calibration of ramp compression experiments on Z

    NASA Astrophysics Data System (ADS)

    Brown, Justin; Hund, Lauren

    2017-06-01

    Bayesian model calibration (BMC) is a statistical framework to estimate inputs for a computational model in the presence of multiple uncertainties, making it well suited to dynamic experiments which must be coupled with numerical simulations to interpret the results. Often, dynamic experiments are diagnosed using velocimetry and this output can be modeled using a hydrocode. Several calibration issues unique to this type of scenario including the functional nature of the output, uncertainty of nuisance parameters within the simulation, and model discrepancy identifiability are addressed, and a novel BMC process is proposed. As a proof of concept, we examine experiments conducted on Sandia National Laboratories' Z-machine which ramp compressed tantalum to peak stresses of 250 GPa. The proposed BMC framework is used to calibrate the cold curve of Ta (with uncertainty), and we conclude that the procedure results in simple, fast, and valid inferences. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
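The essence of BMC (treat a simulation input as uncertain, score it against the measured trace with a likelihood, and report a posterior) can be shown with a one-parameter toy problem. The "hydrocode" below is a made-up velocity model, not a real simulator, and the noise level and flat prior range are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy "hydrocode": a velocity trace predicted from one uncertain input,
# a stiffness-like parameter theta (a stand-in for the cold-curve
# parameter calibrated on Z). Entirely illustrative.
t = np.linspace(0.0, 1.0, 50)
def model(theta):
    return theta * t + 0.3 * np.sin(4 * t * theta)

theta_true = 1.8
data = model(theta_true) + 0.05 * rng.normal(size=t.size)

# Grid-based Bayesian calibration: flat prior, Gaussian likelihood.
grid = np.linspace(0.5, 3.0, 501)
loglik = np.array([-0.5 * ((data - model(g))**2).sum() / 0.05**2
                   for g in grid])
post = np.exp(loglik - loglik.max())
post /= post.sum()

mean = float((grid * post).sum())
sd = float(np.sqrt(((grid - mean)**2 * post).sum()))
print(round(mean, 2), round(sd, 3))
```

The posterior mean and standard deviation are the "estimate with uncertainty" a BMC analysis reports; the full framework in the record additionally handles functional outputs, nuisance parameters, and model discrepancy.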

  16. Large enhancement of perfusion contribution on fMRI signal

    PubMed Central

    Wang, Xiao; Zhu, Xiao-Hong; Zhang, Yi; Chen, Wei

    2012-01-01

The perfusion contribution to the total functional magnetic resonance imaging (fMRI) signal was investigated using a rat model with mild hypercapnia at 9.4 T, and human subjects with visual stimulation at 4 T. It was found that the total fMRI signal change could be approximated as a linear superposition of the 'true' blood oxygenation level-dependent (BOLD; T2/T2*) effect and the blood flow-related (T1) effect. The latter effect was significantly enhanced by using a short repetition time and a large radiofrequency pulse flip angle, and became comparable to the 'true' BOLD signal in response to mild hypercapnia in the rat brain, resulting in an improved contrast-to-noise ratio (CNR). Bipolar diffusion gradients suppressed the intravascular signals but had no significant effect on the flow-related signal. Similar results of enhanced fMRI signal were observed in the human study. The overall results suggest that the observed flow-related signal enhancement likely originates from perfusion, and this enhancement can improve CNR and the spatial specificity for mapping brain activity and physiology changes. The nature of mixed BOLD and perfusion-related contributions in the total fMRI signal also has implications for BOLD quantification, in particular the BOLD calibration model commonly used to estimate the change in cerebral metabolic rate of oxygen. PMID:22395206

  17. Smart Cutting Tools and Smart Machining: Development Approaches, and Their Implementation and Application Perspectives

    NASA Astrophysics Data System (ADS)

    Cheng, Kai; Niu, Zhi-Chao; Wang, Robin C.; Rakowski, Richard; Bateman, Richard

    2017-09-01

    Smart machining has tremendous potential and is becoming one of new generation high value precision manufacturing technologies in line with the advance of Industry 4.0 concepts. This paper presents some innovative design concepts and, in particular, the development of four types of smart cutting tools, including a force-based smart cutting tool, a temperature-based internally-cooled cutting tool, a fast tool servo (FTS) and smart collets for ultraprecision and micro manufacturing purposes. Implementation and application perspectives of these smart cutting tools are explored and discussed particularly for smart machining against a number of industrial application requirements. They are contamination-free machining, machining of tool-wear-prone Si-based infra-red devices and medical applications, high speed micro milling and micro drilling, etc. Furthermore, implementation techniques are presented focusing on: (a) plug-and-produce design principle and the associated smart control algorithms, (b) piezoelectric film and surface acoustic wave transducers to measure cutting forces in process, (c) critical cutting temperature control in real-time machining, (d) in-process calibration through machining trials, (e) FE-based design and analysis of smart cutting tools, and (f) application exemplars on adaptive smart machining.

  18. Invited Article: A novel calibration method for the JET real-time far infrared polarimeter and integration of polarimetry-based line-integrated density measurements for machine protection of a fusion plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boboc, A., E-mail: Alexandru.Boboc@ccfe.ac.uk; Felton, R.; Dalley, S.

    2015-09-15

    In this paper, we present the implementation of a new calibration for the JET real-time polarimeter based on the complex amplitude ratio technique, together with a new self-validation mechanism for the data. This allowed easy integration of the polarimetry measurements into the JET plasma density control (gas feedback control) as well as the machine protection systems (neutral beam injection heating safety interlocks). The new addition was used successfully during the 2014 JET campaign and is envisaged to operate routinely from the 2015 campaign onwards in any plasma condition (including ITER-relevant scenarios). This mode of operation elevated the importance of polarimetry as a diagnostic tool in view of future fusion experiments.

  19. Calibration of raw accelerometer data to measure physical activity: A systematic review.

    PubMed

    de Almeida Mendes, Márcio; da Silva, Inácio C M; Ramires, Virgílio V; Reichert, Felipe F; Martins, Rafaela C; Tomasi, Elaine

    2018-03-01

    Most calibration studies based on accelerometry were developed using count-based analyses. In contrast, calibration studies based on raw acceleration signals are relatively recent and their evidence is incipient. The aim of the current study was to systematically review the literature in order to summarize methodological characteristics and results from raw data calibration studies. The review was conducted up to May 2017 using four databases: PubMed, Scopus, SPORTDiscus and Web of Science. Methodological quality of the included studies was evaluated using the Landis and Koch guidelines. Initially, 1669 titles were identified and, after assessing titles, abstracts and full articles, 20 studies were included. All studies were conducted in high-income countries, most of them with relatively small samples and specific population groups. Physical activity protocols differed among studies, and indirect calorimetry was the most commonly used criterion measure. High mean values of sensitivity, specificity and accuracy were observed for the intensity thresholds of cut-point-based studies (93.7%, 91.9% and 95.8%, respectively). The most frequent statistical approach applied was machine-learning-based modelling, with a mean coefficient of determination of 0.70 for predicting physical activity energy expenditure. Regarding the recognition of physical activity types, the mean values of accuracy for sedentary, household and locomotive activities were 82.9%, 55.4% and 89.7%, respectively. In conclusion, considering the construct of physical activity that each approach assesses, linear regression, machine-learning and cut-point-based approaches presented promising validity parameters. Copyright © 2017 Elsevier B.V. All rights reserved.
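    The cut-point logic reviewed above can be sketched in a few lines. This is an illustrative Python sketch: ENMO is a common raw-acceleration metric, but the threshold values below are placeholders, not values from any of the reviewed calibration studies.

```python
# Sketch of a cut-point-based intensity classifier for raw accelerometry.
# Thresholds (in milli-g of the ENMO metric) are illustrative placeholders.
import numpy as np

CUT_POINTS_MG = [(0, "sedentary"), (50, "light"), (100, "moderate"), (400, "vigorous")]

def enmo(acc_xyz):
    """Euclidean Norm Minus One (ENMO) of tri-axial acceleration, in g."""
    norm = np.linalg.norm(acc_xyz, axis=1)
    return np.maximum(norm - 1.0, 0.0)

def classify_intensity(acc_xyz, fs=100, epoch_s=5):
    """Average ENMO over fixed epochs and map each epoch to an intensity label."""
    e = enmo(acc_xyz)
    n_epochs = len(e) // (fs * epoch_s)
    epochs = e[: n_epochs * fs * epoch_s].reshape(n_epochs, -1).mean(axis=1) * 1000  # mg
    labels = []
    for v in epochs:
        label = CUT_POINTS_MG[0][1]
        for thresh, name in CUT_POINTS_MG:
            if v >= thresh:
                label = name  # keep the highest threshold that v reaches
        labels.append(label)
    return labels

# Example: 5 s of rest (1 g on z) followed by 5 s of sustained movement
rest = np.tile([0.0, 0.0, 1.0], (500, 1))
move = np.tile([0.3, 0.3, 1.2], (500, 1))
print(classify_intensity(np.vstack([rest, move])))
```

A machine-learning calibration would replace the fixed thresholds with a model trained against a criterion measure such as indirect calorimetry.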

  20. Metrological Characterization of the Vickers Hardness Primary Standard Machine Established at CSIR-NPL

    NASA Astrophysics Data System (ADS)

    Titus, S. Seelakumar; Vikram; Girish; Jain, Sushil Kumar

    2018-06-01

    CSIR-National Physical Laboratory (CSIR-NPL) is the National Metrology Institute (NMI) of India, which has the mandate for the realization of the SI units of measurement and their dissemination to user organizations. CSIR-NPL has established a hardness standardizing machine for realizing the Vickers hardness scale as per the ISO 6507-3 standard, providing national traceability in hardness measurement. Direct verification of the machine has been carried out by measuring the uncertainty in the generated force, the indenter geometry and the indentation measuring system. From these measurements, it is found that the machine exhibits a calibration and measurement capability (CMC) of ±1.5% for the HV1-HV3 scales, ±1.0% for the HV5-HV50 scales and ±0.8% for the HV100 scale.
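    For context, the Vickers hardness number realized by such a machine follows from the applied force and the mean indentation diagonal. The sketch below uses the standard ISO 6507 relation (constant 0.1891 for force in newtons and diagonal in millimetres, which follows from the 136° indenter geometry); the example values are illustrative, not CSIR-NPL measurements.

```python
# Vickers hardness from force and mean indentation diagonal (ISO 6507 relation).
def vickers_hv(force_n, diagonal_mm):
    """HV = 0.1891 * F / d^2, with F in N and d in mm."""
    return 0.1891 * force_n / diagonal_mm ** 2

# Example: an HV30 test (294.2 N) leaving a 0.5 mm mean diagonal
print(round(vickers_hv(294.2, 0.5), 1))  # → 222.5
```

Uncertainty in the generated force and in the diagonal measurement propagate directly into this number, which is why direct verification targets exactly those quantities.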

  1. Linear Discriminant Analysis Achieves High Classification Accuracy for the BOLD fMRI Response to Naturalistic Movie Stimuli

    PubMed Central

    Mandelkow, Hendrik; de Zwart, Jacco A.; Duyn, Jeff H.

    2016-01-01

    Naturalistic stimuli like movies evoke complex perceptual processes, which are of great interest in the study of human cognition by functional MRI (fMRI). However, conventional fMRI analysis based on statistical parametric mapping (SPM) and the general linear model (GLM) is hampered by a lack of accurate parametric models of the BOLD response to complex stimuli. In this situation, statistical machine-learning methods, a.k.a. multivariate pattern analysis (MVPA), have received growing attention for their ability to generate stimulus response models in a data-driven fashion. However, machine-learning methods typically require large amounts of training data as well as computational resources. In the past, this has largely limited their application to fMRI experiments involving small sets of stimulus categories and small regions of interest in the brain. By contrast, the present study compares several classification algorithms, namely Nearest Neighbor (NN), Gaussian Naïve Bayes (GNB), and (regularized) Linear Discriminant Analysis (LDA), in terms of their classification accuracy in discriminating the global fMRI response patterns evoked by a large number of naturalistic visual stimuli presented as a movie. Results show that LDA regularized by principal component analysis (PCA) achieved high classification accuracies, above 90% on average for single fMRI volumes acquired 2 s apart during a 300 s movie (chance level 0.7% = 2 s/300 s). The largest source of classification errors was autocorrelations in the BOLD signal compounded by the similarity of consecutive stimuli. All classifiers performed best when given input features from a large region of interest comprising around 25% of the voxels that responded significantly to the visual stimulus. Consistent with this, the most informative principal components represented widespread distributions of co-activated brain regions that were similar between subjects and may represent functional networks. In light of these
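    A minimal sketch of the PCA-regularized LDA classifier described above, using synthetic stand-in data rather than real fMRI volumes (the dimensions, number of classes, and noise level are arbitrary choices for illustration):

```python
# PCA-regularized LDA: PCA reduces dimensionality so that the within-class
# covariance matrix LDA must invert is well conditioned.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_classes, n_per_class, n_voxels = 10, 20, 500

# Each "stimulus" evokes a distinct mean activation pattern plus noise.
patterns = rng.normal(size=(n_classes, n_voxels))
X = np.vstack([p + 0.5 * rng.normal(size=(n_per_class, n_voxels)) for p in patterns])
y = np.repeat(np.arange(n_classes), n_per_class)

clf = make_pipeline(PCA(n_components=40), LinearDiscriminantAnalysis())
clf.fit(X[::2], y[::2])              # train on even trials
acc = clf.score(X[1::2], y[1::2])    # test on odd trials
print(f"held-out accuracy: {acc:.2f}")
```

In the real experiment consecutive volumes are temporally autocorrelated, which this i.i.d. simulation deliberately ignores; that autocorrelation is exactly the error source the study identifies.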

  2. fMRI Validation of fNIRS Measurements During a Naturalistic Task

    PubMed Central

    Noah, J. Adam; Ono, Yumie; Nomoto, Yasunori; Shimada, Sotaro; Tachibana, Atsumichi; Zhang, Xian; Bronner, Shaw; Hirsch, Joy

    2015-01-01

    We present a method to compare brain activity recorded with functional near-infrared spectroscopy (fNIRS) in a dance video game task to that recorded in a reduced version of the task using fMRI (functional magnetic resonance imaging). Recently, it has been shown that fNIRS can accurately record functional brain activities equivalent to those concurrently recorded with functional magnetic resonance imaging for classic psychophysical tasks and simple finger tapping paradigms. However, an often-quoted benefit of fNIRS is that the technique allows for studying neural mechanisms of complex, naturalistic behaviors that are not possible in the constrained environment of fMRI. Our goal was to extend the findings of previous studies that have shown high correlation between concurrently recorded fNIRS and fMRI signals, by comparing neural recordings obtained in fMRI procedures to those separately obtained in naturalistic fNIRS experiments. Specifically, we developed a modified version of the dance video game Dance Dance Revolution (DDR) to be compatible with both fMRI and fNIRS imaging procedures. In this methodology we explain the modifications to the software and hardware for compatibility with each technique, as well as the scanning and calibration procedures used to obtain representative results. The results of the study show a task-related increase in oxyhemoglobin in both modalities and demonstrate that it is possible to replicate the findings of fMRI using fNIRS in a naturalistic task. This technique represents a methodology to compare fMRI imaging paradigms, which utilize a reduced-world environment, to fNIRS in closer approximation to naturalistic, full-body activities and behaviors. Further development of this technique may apply to neurodegenerative diseases, such as Parkinson's disease, late stages of dementia, or patients with magnetically susceptible implants, for whom fMRI scanning is contraindicated. PMID:26132365

  3. An EEG Finger-Print of fMRI deep regional activation.

    PubMed

    Meir-Hasson, Yehudit; Kinreich, Sivan; Podlipsky, Ilana; Hendler, Talma; Intrator, Nathan

    2014-11-15

    This work introduces a general framework for producing an EEG Finger-Print (EFP) which can be used to predict specific brain activity as measured by fMRI at a given deep region. This new approach allows for improved EEG spatial resolution based on simultaneous fMRI activity measurements. Advanced signal processing and machine learning methods were applied to EEG data acquired simultaneously with fMRI during relaxation training guided by on-line continuous feedback on a changing alpha/theta EEG measure. We focused on demonstrating improved EEG prediction of activation in sub-cortical regions such as the amygdala. Our analysis shows that a ridge regression model based on a time/frequency representation of EEG data from a single electrode can predict amygdala-related activity significantly better than a traditional theta/alpha activity sampled from the best electrode and, in about one third of cases, significantly better than a linear combination of frequencies with a pre-defined delay. The far-reaching goal of our approach is to reduce the need for fMRI scanning for probing specific sub-cortical regions such as the amygdala as the basis for brain-training procedures. On the other hand, activity in those regions can be characterized with higher temporal resolution than is obtained by fMRI alone, thus revealing additional information about their processing mode. Copyright © 2013 Elsevier Inc. All rights reserved.
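    The core modelling step, ridge regression from a time/frequency representation of a single EEG channel to a deep-region fMRI signal, can be sketched as follows. All signals here are simulated stand-ins; no hemodynamic delay or real EEG/fMRI data are modeled.

```python
# Ridge regression from per-frequency power features to a target time series.
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(1)
n_t, n_freqs = 2000, 12

# Simulated time/frequency power features from one electrode.
F = rng.normal(size=(n_t, n_freqs)) ** 2
true_w = rng.normal(size=n_freqs)
target = F @ true_w + 0.5 * rng.normal(size=n_t)  # stand-in for ROI BOLD signal

# Closed-form ridge solution: w = (F^T F + lam*I)^-1 F^T y
lam = 10.0
w = solve(F.T @ F + lam * np.eye(n_freqs), F.T @ target)
pred = F @ w
r = np.corrcoef(pred, target)[0, 1]
print(f"correlation between prediction and simulated target: {r:.2f}")
```

The regularization term `lam` trades variance for bias, which matters far more on noisy single-electrode EEG features than in this clean simulation.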

  4. Study of Environmental Data Complexity using Extreme Learning Machine

    NASA Astrophysics Data System (ADS)

    Leuenberger, Michael; Kanevski, Mikhail

    2017-04-01

    The main goals of environmental data science using machine learning algorithms revolve, in a broad sense, around the calibration, prediction, and visualization of hidden relationships between input and output variables. In order to optimize the models and to understand the phenomenon under study, the characterization of complexity (at different levels) should be taken into account. Therefore, identifying the linear or non-linear behavior between input and output variables adds valuable information for understanding the phenomenon's complexity. The present research highlights and investigates the different issues that can occur when identifying the complexity (linear/non-linear) of environmental data using machine learning algorithms. In particular, the main attention is paid to the description of a self-consistent methodology for the use of Extreme Learning Machines (ELM, Huang et al., 2006), which have recently gained great popularity. By applying two ELM models (with linear and non-linear activation functions) and by comparing their efficiency, the degree of linearity can be quantified. The considered approach is accompanied by simulated and real high-dimensional, multivariate case studies. In conclusion, the current challenges and future developments in complexity quantification using environmental data mining are discussed. References - Huang, G.-B., Zhu, Q.-Y., Siew, C.-K., 2006. Extreme learning machine: theory and applications. Neurocomputing 70 (1-3), 489-501. - Kanevski, M., Pozdnoukhov, A., Timonin, V., 2009. Machine Learning for Spatial Environmental Data. EPFL Press; Lausanne, Switzerland, p.392. - Leuenberger, M., Kanevski, M., 2015. Extreme Learning Machines for spatial environmental data. Computers and Geosciences 85, 64-73.
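    The linearity-quantification idea can be illustrated by fitting two ELMs, one with a linear and one with a sigmoid activation, to the same data and comparing their errors. The target function and sizes below are arbitrary illustrations, not the paper's case studies.

```python
# Extreme Learning Machine: random hidden weights, output weights by least squares.
import numpy as np

rng = np.random.default_rng(2)

def elm_fit_predict(X, y, activation, n_hidden=200, rng=rng):
    """Fit an ELM in one shot and return its in-sample predictions."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = activation(X @ W + b)                      # random hidden-layer features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # closed-form output weights
    return H @ beta

X = rng.uniform(-2, 2, size=(1000, 2))
y = np.sin(X[:, 0]) * X[:, 1]                      # nonlinear input-output relationship

pred_lin = elm_fit_predict(X, y, activation=lambda z: z)
pred_sig = elm_fit_predict(X, y, activation=lambda z: 1 / (1 + np.exp(-z)))

mse_lin = np.mean((y - pred_lin) ** 2)
mse_sig = np.mean((y - pred_sig) ** 2)
print(f"linear ELM MSE: {mse_lin:.3f}, sigmoid ELM MSE: {mse_sig:.3f}")
```

A linear-activation ELM collapses to a linear model regardless of hidden-layer size, so a large gap between the two errors signals a nonlinear relationship between inputs and outputs.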

  5. The SED Machine: a dedicated transient IFU spectrograph

    NASA Astrophysics Data System (ADS)

    Ben-Ami, Sagi; Konidaris, Nick; Quimby, Robert; Davis, Jack T.; Ngeow, Chow Choong; Ritter, Andreas; Rudy, Alexander

    2012-09-01

    The Spectral Energy Distribution (SED) Machine is an Integral Field Unit (IFU) spectrograph designed specifically to classify transients. It comprises two subsystems. A lenslet-based IFU, with a 26" × 26" Field of View (FoV) and ˜ 0.75" spaxels, feeds a constant-resolution (R˜100) triple-prism. The dispersed rays are then imaged onto an off-the-shelf CCD detector. The second subsystem, the Rainbow Camera (RC), is a 4-band seeing-limited imager with a 12.5' × 12.5' FoV around the IFU that will allow real-time spectrophotometric calibrations with ˜ 5% accuracy. Data from both subsystems will be processed in real time using a dedicated reduction pipeline. The SED Machine will be mounted on the Palomar 60-inch robotic telescope (P60), covers a wavelength range of 370-920 nm at high throughput, and will classify transients from ongoing and future surveys at a high rate. This will provide good statistics for common types of transients, and a better ability to discover and study rare and exotic ones. We present the science cases, optical design, and data reduction strategy of the SED Machine. The SED Machine is currently being constructed at the California Institute of Technology, and will be commissioned in the spring of 2013.

  6. Design of a tracked ultrasound calibration phantom made of LEGO bricks

    NASA Astrophysics Data System (ADS)

    Walsh, Ryan; Soehl, Marie; Rankin, Adam; Lasso, Andras; Fichtinger, Gabor

    2014-03-01

    PURPOSE: Spatial calibration of tracked ultrasound systems is commonly performed using precisely fabricated phantoms. Machining or 3D printing is relatively costly and not readily available. Moreover, the possibilities for modifying the phantoms are very limited. Our goal was to find a method to construct a calibration phantom from affordable, widely available components, which can be built in a short time, can be easily modified, and provides accuracy comparable to existing solutions. METHODS: We designed an N-wire calibration phantom made of LEGO® bricks. To assess the phantom's reproducibility and build time, ten builds were done by first-time users. The phantoms were used for tracked ultrasound calibration by an experienced user. The success of each user's build was determined by the lowest root mean square (RMS) wire reprojection error of three calibrations. The accuracy and variance of the calibrations were evaluated for various tracked ultrasound probes. The proposed model was compared to two of the currently available phantom models for both electromagnetic and optical tracking. RESULTS: The phantom was successfully built by all ten first-time users in an average time of 18.8 minutes. It cost approximately $10 CAD for the required LEGO® bricks and averaged 0.69 mm of error in calibration reproducibility for ultrasound calibrations. It is one third the cost of similar 3D-printed phantoms and takes much less time to build. The proposed phantom's image reprojections were 0.13 mm more erroneous than those of the highest-performing current phantom model. The average standard deviation of multiple 3D image reprojections differed by 0.05 mm between the phantoms. CONCLUSION: The phantom could be built in less time and at one third the cost of similar 3D-printed models, and was found to be capable of producing calibrations equivalent to those of 3D-printed phantoms.

  7. Can multi-slice or navigator-gated R2* MRI replace single-slice breath-hold acquisition for hepatic iron quantification?

    PubMed

    Loeffler, Ralf B; McCarville, M Beth; Wagstaff, Anne W; Smeltzer, Matthew P; Krafft, Axel J; Song, Ruitian; Hankins, Jane S; Hillenbrand, Claudia M

    2017-01-01

    Liver R2* values calculated from multi-gradient echo (mGRE) magnetic resonance images (MRI) are strongly correlated with hepatic iron concentration (HIC), as shown in several independently derived biopsy calibration studies. These calibrations were established for axial single-slice breath-hold imaging at the location of the portal vein. Scanning in multi-slice mode makes the exam more efficient, since whole-liver coverage can be achieved with two breath-holds and the optimal slice can be selected afterward. Navigator echoes remove the need for breath-holds and allow use in sedated patients. To evaluate whether the existing biopsy calibrations can be applied to multi-slice and navigator-controlled mGRE imaging in children with hepatic iron overload, we tested whether there is a bias-free correlation between single-slice R2* and multi-slice or multi-slice navigator-controlled R2*. This study included MRI data from 71 patients with transfusional iron overload, who received an MRI exam to estimate HIC using gradient echo sequences. Patient scans contained 2 or 3 of the following imaging methods used for analysis: single-slice images (n = 71), multi-slice images (n = 69) and navigator-controlled images (n = 17). Small and large blood-corrected regions of interest were selected on axial images of the liver to obtain R2* values for all data sets. Bland-Altman and linear regression analyses were used to compare R2* values from single-slice images to those of multi-slice images and navigator-controlled images. Bland-Altman analysis showed that all imaging method comparisons were strongly associated with each other and had high correlation coefficients (0.98 ≤ r ≤ 1.00) with P-values ≤0.0001. Linear regression yielded slopes that were close to 1. We found that navigator-gated or breath-held multi-slice R2* MRI for HIC determination measures R2* values comparable to the biopsy-validated single-slice, single breath-hold scan. We conclude that these three R2
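    The statistical comparison used above, Bland-Altman analysis plus linear regression between two sets of R2* values, can be sketched as follows with simulated values (the ranges and noise level are illustrative, not patient data):

```python
# Bland-Altman statistics (bias and 95% limits of agreement) and a linear fit
# between R2* values measured by two acquisition modes.
import numpy as np

rng = np.random.default_rng(3)
r2star_single = rng.uniform(30, 800, size=70)                    # s^-1, single-slice
r2star_multi = r2star_single * 1.01 + rng.normal(0, 5, size=70)  # near-identical mode

diff = r2star_multi - r2star_single
bias = diff.mean()                      # systematic offset between the methods
loa = 1.96 * diff.std(ddof=1)           # half-width of the 95% limits of agreement
slope, intercept = np.polyfit(r2star_single, r2star_multi, 1)
r = np.corrcoef(r2star_single, r2star_multi)[0, 1]
print(f"bias={bias:.1f}, LoA=±{loa:.1f}, slope={slope:.3f}, r={r:.3f}")
```

A slope close to 1 with negligible bias is what justifies reusing the single-slice biopsy calibration for the other acquisition modes.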

  8. Head MRI

    MedlinePlus

    ... the head; MRI - cranial; NMR - cranial; Cranial MRI; Brain MRI; MRI - brain; MRI - head ... the test, tell your provider if you have: brain aneurysm clips, an artificial heart valve, a heart defibrillator ...

  9. Intelligent and automatic in vivo detection and quantification of transplanted cells in MRI.

    PubMed

    Afridi, Muhammad Jamal; Ross, Arun; Liu, Xiaoming; Bennewitz, Margaret F; Shuboni, Dorela D; Shapiro, Erik M

    2017-11-01

    Magnetic resonance imaging (MRI)-based cell tracking has emerged as a useful tool for identifying the location of transplanted cells, and even their migration. Magnetically labeled cells appear as dark contrast in T2*-weighted MRI, with sensitivity down to individual cells. One key hurdle to the widespread use of MRI-based cell tracking is the inability to determine the number of transplanted cells based on this contrast feature. In the case of single cell detection, manual enumeration of spots in three-dimensional (3D) MRI is in principle possible; however, it is a tedious and time-consuming task that is prone to subjectivity and inaccuracy on a large scale. This research presents the first comprehensive study on how a computer-based intelligent, automatic, and accurate cell quantification approach can be designed for spot detection in MRI scans. Magnetically labeled mesenchymal stem cells (MSCs) were transplanted into rats using an intracardiac injection, accomplishing single cell seeding in the brain. T2*-weighted MRI of these rat brains was performed, where labeled MSCs appeared as spots. Using machine learning and computer vision paradigms, approaches were designed to systematically explore the possibility of automatic detection of these spots in MRI. Experiments were validated against known in vitro scenarios. Using the proposed deep convolutional neural network (CNN) architecture, in vivo accuracy of up to 97.3% and in vitro accuracy of up to 99.8% were achieved for automated spot detection in MRI data. The proposed approach for automatic quantification of MRI-based cell tracking will facilitate the use of MRI in large-scale cell therapy studies. Magn Reson Med 78:1991-2002, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  10. Detection of subjects and brain regions related to Alzheimer's disease using 3D MRI scans based on eigenbrain and machine learning

    PubMed Central

    Zhang, Yudong; Dong, Zhengchao; Phillips, Preetha; Wang, Shuihua; Ji, Genlin; Yang, Jiquan; Yuan, Ti-Fei

    2015-01-01

    Purpose: Early diagnosis or detection of Alzheimer's disease (AD), distinguishing it from normal elderly controls (NC), is very important. However, computer-aided diagnosis (CAD) is not yet widely used, and classification performance has not reached the standard of practical use. We proposed a novel CAD system for MR brain images based on eigenbrains and machine learning, with two goals: accurate detection of both AD subjects and AD-related brain regions. Method: First, we used maximum inter-class variance (ICV) to select key slices from 3D volumetric data. Second, we generated an eigenbrain set for each subject. Third, the most important eigenbrain (MIE) was obtained by Welch's t-test (WTT). Finally, kernel support vector machines with different kernels, trained by particle swarm optimization, were used to make an accurate prediction of AD subjects. Coefficients of the MIE with values higher than the 0.98 quantile were highlighted to obtain the discriminant regions that distinguish AD from NC. Results: The experiments showed that the proposed method can predict AD subjects with performance competitive with existing methods; in particular, the accuracy of the polynomial kernel (92.36 ± 0.94) was better than that of the linear kernel (91.47 ± 1.02) and the radial basis function (RBF) kernel (86.71 ± 1.93). The proposed eigenbrain-based CAD system detected 30 AD-related brain regions (Anterior Cingulate, Caudate Nucleus, Cerebellum, Cingulate Gyrus, Claustrum, Inferior Frontal Gyrus, Inferior Parietal Lobule, Insula, Lateral Ventricle, Lentiform Nucleus, Lingual Gyrus, Medial Frontal Gyrus, Middle Frontal Gyrus, Middle Occipital Gyrus, Middle Temporal Gyrus, Paracentral Lobule, Parahippocampal Gyrus, Postcentral Gyrus, Posterior Cingulate, Precentral Gyrus, Precuneus, Subcallosal Gyrus, Sub-Gyral, Superior Frontal Gyrus, Superior Parietal Lobule, Superior Temporal Gyrus, Supramarginal Gyrus, Thalamus, Transverse Temporal Gyrus, and Uncus). The results were coherent with existing
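    A toy version of the eigenbrain pipeline (PCA components as "eigenbrains", Welch's t-test to select the most important eigenbrain, and a polynomial-kernel SVM) might look like this. The data are synthetic, and the particle-swarm optimization of SVM hyperparameters used in the study is omitted.

```python
# Eigenbrain-style pipeline on synthetic flattened "slices".
import numpy as np
from scipy.stats import ttest_ind
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_per_group, n_voxels = 30, 256        # flattened 16x16 slices

template = rng.normal(size=n_voxels)
ad = template + rng.normal(size=(n_per_group, n_voxels))
ad[:, :32] += 1.5                      # simulated AD-related regional change
nc = template + rng.normal(size=(n_per_group, n_voxels))
X = np.vstack([ad, nc])
y = np.array([1] * n_per_group + [0] * n_per_group)

pca = PCA(n_components=10)
scores = pca.fit_transform(X)          # projections onto the "eigenbrains"

# Welch's t-test (unequal variances) picks the most discriminative component.
t, _ = ttest_ind(scores[y == 1], scores[y == 0], equal_var=False)
mie = int(np.argmax(np.abs(t)))        # index of the most important eigenbrain

clf = SVC(kernel="poly", degree=3, coef0=1).fit(scores, y)
print(f"most important eigenbrain: component {mie}, "
      f"training accuracy: {clf.score(scores, y):.2f}")
```

In the study, the high-loading voxels of the selected eigenbrain (above the 0.98 quantile) are the ones mapped back to anatomical regions.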

  11. A general prediction model for the detection of ADHD and Autism using structural and functional MRI.

    PubMed

    Sen, Bhaskar; Borle, Neil C; Greiner, Russell; Brown, Matthew R G

    2018-01-01

    This work presents a novel method for learning a model that can diagnose Attention Deficit Hyperactivity Disorder (ADHD), as well as Autism, using structural texture and functional connectivity features obtained from 3-dimensional structural magnetic resonance imaging (MRI) and 4-dimensional resting-state functional magnetic resonance imaging (fMRI) scans of subjects. We explore a series of three learners: (1) The LeFMS learner first extracts features from the structural MRI images using the texture-based filters produced by a sparse autoencoder. These filters are then convolved with the original MRI image using an unsupervised convolutional network. The resulting features are used as input to a linear support vector machine (SVM) classifier. (2) The LeFMF learner produces a diagnostic model by first computing spatial non-stationary independent components of the fMRI scans, which it uses to decompose each subject's fMRI scan into the time courses of these common spatial components. These features can then be used with a learner by themselves or in combination with other features to produce the model. Regardless of which approach is used, the final set of features is input to a linear support vector machine (SVM) classifier. (3) Finally, the overall LeFMSF learner uses the combined features obtained from the two feature extraction processes in (1) and (2) above as input to an SVM classifier, achieving an accuracy of 0.673 on the ADHD-200 holdout data and 0.643 on the ABIDE holdout data. Both of these results, obtained with the same LeFMSF framework, are the best known hold-out accuracies on these datasets when using only imaging data, exceeding previously published results by 0.012 for ADHD and 0.042 for Autism. Our results show that combining multi-modal features can yield good classification accuracy for diagnosis of ADHD and Autism, which is an important step towards computer-aided diagnosis of these psychiatric diseases and perhaps others as well.

  12. Characterization of Adrenal Lesions on Unenhanced MRI Using Texture Analysis: A Machine-Learning Approach.

    PubMed

    Romeo, Valeria; Maurea, Simone; Cuocolo, Renato; Petretta, Mario; Mainenti, Pier Paolo; Verde, Francesco; Coppola, Milena; Dell'Aversana, Serena; Brunetti, Arturo

    2018-01-17

    Adrenal adenomas (AA) are the most common benign adrenal lesions, often characterized based on intralesional fat content as either lipid-rich (LRA) or lipid-poor (LPA). The differentiation of AA, particularly LPA, from nonadenoma adrenal lesions (NAL) may be challenging. Texture analysis (TA) can extract quantitative parameters from MR images. Machine learning is a technique for recognizing patterns that can be applied to medical images by identifying the best combination of TA features to create a predictive model for the diagnosis of interest. To assess the diagnostic efficacy of TA-derived parameters extracted from MR images in characterizing LRA, LPA, and NAL using a machine-learning approach. Retrospective, observational study. Sixty MR examinations, including 20 LRA, 20 LPA, and 20 NAL. Unenhanced T1-weighted in-phase (IP) and out-of-phase (OP) as well as T2-weighted (T2-w) MR images acquired at 3T. Adrenal lesions were manually segmented, placing a spherical volume of interest on the IP, OP, and T2-w images. Different feature selection methods were trained and tested using the J48 machine-learning classifier. The feature selection method that obtained the highest diagnostic performance using the J48 classifier was identified; the diagnostic performance was also compared with that of a senior radiologist by means of McNemar's test. A total of 138 TA-derived features were extracted; among these, four features were selected, extracted from the IP (Short_Run_High_Gray_Level_Emphasis), OP (Mean_Intensity and Maximum_3D_Diameter), and T2-w (Standard_Deviation) images; the J48 classifier obtained a diagnostic accuracy of 80%. The expert radiologist obtained a diagnostic accuracy of 73%. McNemar's test did not show significant differences in terms of diagnostic performance between the J48 classifier and the expert radiologist. Machine learning conducted on MR TA-derived features is a potential tool to characterize adrenal lesions. Level of Evidence: 4. Technical Efficacy: Stage 2.
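    The classification step can be sketched as follows. J48 is Weka's C4.5 decision-tree implementation; scikit-learn's DecisionTreeClassifier with entropy splitting is used here as an approximate stand-in, and the feature values are random placeholders (only the feature names mirror those selected in the study).

```python
# Decision-tree classification of three lesion classes from texture features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
features = ["Short_Run_High_Gray_Level_Emphasis", "Mean_Intensity",
            "Maximum_3D_Diameter", "Standard_Deviation"]

# 60 lesions, 3 classes (LRA / LPA / NAL), with class-dependent feature shifts.
y = np.repeat([0, 1, 2], 20)
X = rng.normal(size=(60, len(features))) + y[:, None] * 2.0

clf = DecisionTreeClassifier(criterion="entropy", random_state=0)
acc = cross_val_score(clf, X, y, cv=5).mean()    # stratified 5-fold accuracy
print(f"cross-validated accuracy: {acc:.2f}")
```

Cross-validation on such a small cohort, as here with 60 cases, is essential to keep the reported accuracy from being an artifact of overfitting the tree.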

  13. Computed inverse MRI for magnetic susceptibility map reconstruction

    PubMed Central

    Chen, Zikuan; Calhoun, Vince

    2015-01-01

    Objective This paper reports on a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a two-step computational approach. Methods The forward T2*-weighted MRI (T2*MRI) process is decomposed into two steps: 1) from magnetic susceptibility source to fieldmap establishment via magnetization in a main field, and 2) from fieldmap to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes two inverse steps to reverse the T2*MRI procedure: fieldmap calculation from the MR phase image and susceptibility source calculation from the fieldmap. The inverse step from fieldmap to susceptibility map is a 3D ill-posed deconvolution problem, which can be solved by three kinds of approaches: Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Results Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. Conclusions The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI in two computational steps: calculating the fieldmap from the phase image and reconstructing the susceptibility map from the fieldmap. The crux of CIMRI lies in an ill-posed 3D deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm. PMID:22446372
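    The crux of CIMRI, an ill-posed deconvolution, can be illustrated in 1D with one of the three solvers named above: inverse filtering with a truncated filter. This is a toy analogue (a Gaussian blur kernel, not the 3D dipole kernel of susceptibility physics).

```python
# 1D toy of ill-posed deconvolution via truncated inverse filtering in k-space.
import numpy as np

n = 256
x = np.zeros(n)
x[100:120] = 1.0                                   # susceptibility-like source

# Forward model: circular convolution with a Gaussian kernel (in Fourier domain).
kernel = np.fft.fft(np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 4.0))
kernel /= np.abs(kernel).max()
field = np.fft.ifft(np.fft.fft(x) * kernel)
field += 1e-6 * np.random.default_rng(6).normal(size=n)  # measurement noise

# Truncated inverse filter: invert only where the kernel magnitude is not tiny,
# zeroing the frequencies where division would blow up the noise.
thresh = 1e-3
inv = np.where(np.abs(kernel) > thresh, 1.0 / kernel, 0.0)
x_rec = np.fft.ifft(np.fft.fft(field) * inv).real

corr = np.corrcoef(x, x_rec)[0, 1]
print(f"spatial correlation of reconstruction: {corr:.3f}")
```

TV-regularized solvers such as split Bregman replace the hard frequency cutoff with an edge-preserving prior, which is why they recover sharp boundaries that truncated inversion smooths away.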

  14. Multi-Parametric MRI and Texture Analysis to Visualize Spatial Histologic Heterogeneity and Tumor Extent in Glioblastoma.

    PubMed

    Hu, Leland S; Ning, Shuluo; Eschbacher, Jennifer M; Gaw, Nathan; Dueck, Amylou C; Smith, Kris A; Nakaji, Peter; Plasencia, Jonathan; Ranjbar, Sara; Price, Stephen J; Tran, Nhan; Loftus, Joseph; Jenkins, Robert; O'Neill, Brian P; Elmquist, William; Baxter, Leslie C; Gao, Fei; Frakes, David; Karis, John P; Zwart, Christine; Swanson, Kristin R; Sarkaria, Jann; Wu, Teresa; Mitchell, J Ross; Li, Jing

    2015-01-01

    Genetic profiling represents the future of neuro-oncology but suffers from inadequate biopsies in heterogeneous tumors like Glioblastoma (GBM). Contrast-enhanced MRI (CE-MRI) targets enhancing core (ENH) but yields adequate tumor in only ~60% of cases. Further, CE-MRI poorly localizes infiltrative tumor within surrounding non-enhancing parenchyma, or brain-around-tumor (BAT), despite the importance of characterizing this tumor segment, which universally recurs. In this study, we use multiple texture analysis and machine learning (ML) algorithms to analyze multi-parametric MRI, and produce new images indicating tumor-rich targets in GBM. We recruited primary GBM patients undergoing image-guided biopsies and acquired pre-operative MRI: CE-MRI, Dynamic-Susceptibility-weighted-Contrast-enhanced-MRI, and Diffusion Tensor Imaging. Following image coregistration and region of interest placement at biopsy locations, we compared MRI metrics and regional texture with histologic diagnoses of high- vs low-tumor content (≥80% vs <80% tumor nuclei) for corresponding samples. In a training set, we used three texture analysis algorithms and three ML methods to identify MRI-texture features that optimized model accuracy to distinguish tumor content. We confirmed model accuracy in a separate validation set. We collected 82 biopsies from 18 GBMs throughout ENH and BAT. The MRI-based model achieved 85% cross-validated accuracy to diagnose high- vs low-tumor in the training set (60 biopsies, 11 patients). The model achieved 81.8% accuracy in the validation set (22 biopsies, 7 patients). Multi-parametric MRI and texture analysis can help characterize and visualize GBM's spatial histologic heterogeneity to identify regional tumor-rich biopsy targets.

  15. Brain MRI analysis for Alzheimer's disease diagnosis using an ensemble system of deep convolutional neural networks.

    PubMed

    Islam, Jyoti; Zhang, Yanqing

    2018-05-31

    Alzheimer's disease is an incurable, progressive neurological brain disorder. Earlier detection of Alzheimer's disease can help with proper treatment and prevent brain tissue damage. Several statistical and machine learning models have been exploited by researchers for Alzheimer's disease diagnosis. Analyzing magnetic resonance imaging (MRI) is a common practice for Alzheimer's disease diagnosis in clinical research. Detection of Alzheimer's disease is challenging due to the similarity between Alzheimer's disease MRI data and the MRI data of healthy older people. Recently, advanced deep learning techniques have successfully demonstrated human-level performance in numerous fields including medical image analysis. We propose a deep convolutional neural network for Alzheimer's disease diagnosis using brain MRI data analysis. While most of the existing approaches perform binary classification, our model can identify different stages of Alzheimer's disease and obtains superior performance for early-stage diagnosis. We conducted extensive experiments to demonstrate that our proposed model outperformed comparative baselines on the Open Access Series of Imaging Studies dataset.

  16. Realistic Analytical Polyhedral MRI Phantoms

    PubMed Central

    Ngo, Tri M.; Fung, George S. K.; Han, Shuo; Chen, Min; Prince, Jerry L.; Tsui, Benjamin M. W.; McVeigh, Elliot R.; Herzka, Daniel A.

    2015-01-01

    Purpose Analytical phantoms have closed form Fourier transform expressions and are used to simulate MRI acquisitions. Existing 3D analytical phantoms are unable to accurately model shapes of biomedical interest. It is demonstrated that polyhedral analytical phantoms have closed form Fourier transform expressions and can accurately represent 3D biomedical shapes. Theory The derivations of the Fourier transform of a polygon and polyhedron are presented. Methods The Fourier transform of a polyhedron was implemented and its accuracy in representing faceted and smooth surfaces was characterized. Realistic anthropomorphic polyhedral brain and torso phantoms were constructed and their use in simulated 3D/2D MRI acquisitions was described. Results Using polyhedra, the Fourier transform of faceted shapes can be computed to within machine precision. Smooth surfaces can be approximated with increasing accuracy by increasing the number of facets in the polyhedron; the additional accumulated numerical imprecision of the Fourier transform of polyhedra with many faces remained small. Simulations of 3D/2D brain and 2D torso cine acquisitions produced realistic reconstructions free of high frequency edge aliasing as compared to equivalent voxelized/rasterized phantoms. Conclusion Analytical polyhedral phantoms are easy to construct and can accurately simulate shapes of biomedical interest. PMID:26479724
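    The appeal of analytical phantoms is that their k-space signal can be evaluated in closed form rather than rasterized. A minimal sketch of the idea, using a centered rectangle (whose Fourier transform is a product of sinc functions) instead of the paper's polyhedra, and checking the closed form against brute-force numerical integration:

```python
import numpy as np

def box_kspace(kx, ky, Lx=0.2, Ly=0.1):
    """Closed-form Fourier transform of a centered Lx-by-Ly rectangle.
    np.sinc is the normalized sinc, sin(pi*x)/(pi*x)."""
    return Lx * Ly * np.sinc(kx * Lx) * np.sinc(ky * Ly)

def box_kspace_numeric(kx, ky, Lx=0.2, Ly=0.1, n=400):
    """Midpoint-rule integration of the same rectangle, for comparison."""
    dx, dy = Lx / n, Ly / n
    x = -Lx / 2 + (np.arange(n) + 0.5) * dx
    y = -Ly / 2 + (np.arange(n) + 0.5) * dy
    X, Y = np.meshgrid(x, y)
    return np.exp(-2j * np.pi * (kx * X + ky * Y)).sum() * dx * dy

analytic = box_kspace(7.0, -3.0)
numeric = box_kspace_numeric(7.0, -3.0)
print(abs(analytic - numeric))
```

    The polyhedral phantoms of the paper generalize exactly this property to arbitrary faceted surfaces.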

  17. Navigation-supported diagnosis of the substantia nigra by matching midbrain sonography and MRI

    NASA Astrophysics Data System (ADS)

    Salah, Zein; Weise, David; Preim, Bernhard; Classen, Joseph; Rose, Georg

    2012-03-01

    Transcranial sonography (TCS) is a well-established neuroimaging technique that allows for visualizing several brainstem structures, including the substantia nigra, and aids in the diagnosis and differential diagnosis of various movement disorders, especially in Parkinsonian syndromes. However, adjacent brainstem anatomy can hardly be recognized due to the limited image quality of B-scans. In this paper, a visualization system for the diagnosis of the substantia nigra is presented, which utilizes neuronavigated TCS to reconstruct tomographical slices from registered MRI datasets and visualizes them simultaneously with corresponding TCS planes in real time. To generate MRI tomographical slices, the tracking data of the calibrated ultrasound probe are passed to an optimized slicing algorithm, which computes cross sections at arbitrary positions and orientations from the registered MRI dataset. The extracted MRI cross sections are finally fused with the region of interest from the ultrasound image. The system allows for the computation and visualization of slices at a near real-time rate. Initial tests of the system show added value over pure sonographic imaging. The system also allows for reconstructing volumetric (3D) ultrasonic data of the region of interest, and thus contributes to enhancing the diagnostic yield of midbrain sonography.

  18. Segmentation of white matter hyperintensities using convolutional neural networks with global spatial information in routine clinical brain MRI with none or mild vascular pathology.

    PubMed

    Rachmadi, Muhammad Febrian; Valdés-Hernández, Maria Del C; Agan, Maria Leonora Fatimah; Di Perri, Carol; Komura, Taku

    2018-06-01

    We propose an adaptation of a convolutional neural network (CNN) scheme proposed for segmenting brain lesions with considerable mass-effect, to segment white matter hyperintensities (WMH) characteristic of brains with none or mild vascular pathology in routine clinical brain magnetic resonance images (MRI). This is a rather difficult segmentation problem because of the small area (i.e., volume) of the WMH and their similarity to non-pathological brain tissue. We investigate the effectiveness of the 2D CNN scheme by comparing its performance against those obtained from another deep learning approach: Deep Boltzmann Machine (DBM), two conventional machine learning approaches: Support Vector Machine (SVM) and Random Forest (RF), and a public toolbox: Lesion Segmentation Tool (LST), all reported to be useful for segmenting WMH in MRI. We also introduce a way to incorporate spatial information at the convolutional level of the CNN for WMH segmentation, named global spatial information (GSI). Analysis of covariance corroborated known associations between WMH progression, as assessed by all methods evaluated, and demographic and clinical data. Deep learning algorithms outperform conventional machine learning algorithms by excluding MRI artefacts and pathologies that appear similar to WMH. Our proposed approach of incorporating GSI also successfully helped the CNN achieve better automatic WMH segmentation regardless of the network settings tested. The mean Dice Similarity Coefficient (DSC) values for LST-LGA, SVM, RF, DBM, CNN and CNN-GSI were 0.2963, 0.1194, 0.1633, 0.3264, 0.5359 and 0.5389, respectively. Crown Copyright © 2018. Published by Elsevier Ltd. All rights reserved.
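    The Dice Similarity Coefficient reported above is a standard overlap measure between a predicted and a reference segmentation mask. A minimal sketch with made-up masks:

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Hypothetical ground truth and a prediction shifted by one row.
truth = np.zeros((10, 10), dtype=int); truth[2:6, 2:6] = 1
pred  = np.zeros((10, 10), dtype=int); pred[3:7, 2:6] = 1
print(dice(truth, pred))
```

    A DSC of 1 indicates perfect overlap and 0 indicates none, which is why the jump from ~0.3 (LST, DBM) to ~0.54 (CNN variants) is a substantial improvement.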

  19. MRI/TRUS data fusion for prostate brachytherapy. Preliminary results.

    PubMed

    Reynier, Christophe; Troccaz, Jocelyne; Fourneret, Philippe; Dusserre, André; Gay-Jeune, Cécile; Descotes, Jean-Luc; Bolla, Michel; Giraud, Jean-Yves

    2004-06-01

    Prostate brachytherapy involves implanting radioactive seeds (I125 for instance) permanently in the gland for the treatment of localized prostate cancers, e.g., cT1c-T2a N0 M0 with good prognostic factors. Treatment planning and seed implanting are most often based on the intensive use of transrectal ultrasound (TRUS) imaging. This is difficult because the prostate is hard to visualize in this imaging modality, particularly at the apex of the gland, and because of intra- and interobserver variability. Radioactive seeds are implanted inside open interventional MR machines in some centers. Since MRI has been shown to be sensitive and specific for prostate imaging, whilst open MR is prohibitive for most centers and makes surgical procedures very complex, this work suggests bringing the MR virtually into the operating room with MRI/TRUS data fusion. This involves providing the physician with bi-modality images (TRUS plus MRI) intended to improve treatment planning from the data registration stage. The paper describes the method developed and implemented in the PROCUR system. Results are reported for a phantom and a first series of patients. Phantom experiments helped characterize the accuracy of the process. Patient experiments have shown that using MRI data linked with TRUS data improves TRUS image segmentation, especially regarding the apex and base of the prostate. This may significantly modify prostate volume definition and have an impact on treatment planning.
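    Data fusion of this kind rests on registering the MRI dataset into the TRUS frame. One common building block, sketched here with hypothetical paired landmarks (the PROCUR system's actual registration method may differ), is least-squares rigid alignment (Kabsch/Procrustes) of corresponding points:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst
    (Kabsch/Procrustes); both inputs are (N, 3) arrays of paired landmarks."""
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t

# Recover a known rotation + translation from synthetic landmarks.
rng = np.random.default_rng(0)
pts = rng.normal(size=(6, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([2.0, -1.0, 0.5])
moved = pts @ R_true.T + t_true
R, t = rigid_register(pts, moved)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```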

  20. Polarimetric SAR calibration experiment using active radar calibrators

    NASA Astrophysics Data System (ADS)

    Freeman, Anthony; Shen, Yuhsyen; Werner, Charles L.

    1990-03-01

    Active radar calibrators are used to derive both the amplitude and phase characteristics of a multichannel polarimetric SAR from the complex image data. Results are presented from an experiment carried out using the NASA/JPL DC-8 aircraft SAR over a calibration site at Goldstone, California. As part of the experiment, polarimetric active radar calibrators (PARCs) with adjustable polarization signatures were deployed. Experimental results demonstrate that the PARCs can be used to calibrate polarimetric SAR images successfully. Restrictions on the application of the PARC calibration procedure are discussed.
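    Conceptually, a calibrator with a known complex response lets one solve for a per-channel amplitude/phase correction and apply it across the image. A heavily simplified single-channel sketch with invented numbers (real polarimetric calibration, as in this work, solves for full transmit/receive distortion matrices across the HH/HV/VH/VV channels):

```python
import numpy as np

# A calibrator with a known radar cross-section gives an expected complex
# response; its ratio to the measured response yields one amplitude/phase
# correction factor for the channel (all values below are made up).
expected = 10.0 * np.exp(1j * 0.0)   # known calibrator response
measured = 8.5 * np.exp(1j * 0.4)    # what the SAR actually recorded
cal = expected / measured            # complex correction factor

# Applying the correction to image pixels restores amplitude and phase.
scene = np.array([8.5 * np.exp(1j * 0.4), 4.25 * np.exp(1j * 1.4)])
calibrated = cal * scene
print(np.abs(calibrated), np.angle(calibrated))
```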

  1. Polarimetric SAR calibration experiment using active radar calibrators

    NASA Technical Reports Server (NTRS)

    Freeman, Anthony; Shen, Yuhsyen; Werner, Charles L.

    1990-01-01

    Active radar calibrators are used to derive both the amplitude and phase characteristics of a multichannel polarimetric SAR from the complex image data. Results are presented from an experiment carried out using the NASA/JPL DC-8 aircraft SAR over a calibration site at Goldstone, California. As part of the experiment, polarimetric active radar calibrators (PARCs) with adjustable polarization signatures were deployed. Experimental results demonstrate that the PARCs can be used to calibrate polarimetric SAR images successfully. Restrictions on the application of the PARC calibration procedure are discussed.

  2. Kernel Principal Component Analysis for dimensionality reduction in fMRI-based diagnosis of ADHD.

    PubMed

    Sidhu, Gagan S; Asgarian, Nasimeh; Greiner, Russell; Brown, Matthew R G

    2012-01-01

    This study explored various feature extraction methods for use in automated diagnosis of Attention-Deficit Hyperactivity Disorder (ADHD) from functional Magnetic Resonance Image (fMRI) data. Each participant's data consisted of a resting state fMRI scan as well as phenotypic data (age, gender, handedness, IQ, and site of scanning) from the ADHD-200 dataset. We used machine learning techniques to produce support vector machine (SVM) classifiers that attempted to differentiate between (1) all ADHD patients vs. healthy controls and (2) ADHD combined (ADHD-c) type vs. ADHD inattentive (ADHD-i) type vs. controls. In different tests, we used only the phenotypic data, only the imaging data, or else both the phenotypic and imaging data. For feature extraction on fMRI data, we tested the Fast Fourier Transform (FFT), different variants of Principal Component Analysis (PCA), and combinations of FFT and PCA. PCA variants included PCA over time (PCA-t), PCA over space and time (PCA-st), and kernelized PCA (kPCA-st). Baseline chance accuracy was 64.2%, produced by guessing healthy control (the majority class) for all participants. Using only phenotypic data produced 72.9% accuracy on two-class diagnosis and 66.8% on three-class diagnosis. Diagnosis using only imaging data did not perform as well as phenotypic-only approaches. Using both phenotypic and imaging data with combined FFT and kPCA-st feature extraction yielded accuracies of 76.0% on two-class diagnosis and 68.6% on three-class diagnosis, better than phenotypic-only approaches. Our results demonstrate the potential of using FFT and kPCA-st with resting-state fMRI data as well as phenotypic data for automated diagnosis of ADHD. These results are encouraging given known challenges of learning ADHD diagnostic classifiers using the ADHD-200 dataset (see Brown et al., 2012).
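    Kernel PCA replaces the covariance eigendecomposition of ordinary PCA with an eigendecomposition of a centered kernel matrix. A minimal RBF-kernel sketch (one plausible reading of kernelized PCA in general; the study's exact kPCA-st formulation over space and time is not spelled out in the abstract):

```python
import numpy as np

def kpca(X, n_components=2, gamma=1.0):
    """Kernel PCA with an RBF kernel: eigendecompose the centered kernel
    matrix and project the data onto the leading components."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                         # RBF kernel matrix
    n = len(X)
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one      # center in feature space
    vals, vecs = np.linalg.eigh(Kc)                 # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]     # take the largest
    alphas = vecs[:, idx] / np.sqrt(vals[idx])      # normalize coefficients
    return Kc @ alphas                              # projected features

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 5))    # 20 synthetic samples, 5 features each
Z = kpca(X, n_components=3)
print(Z.shape)
```

    The reduced features Z would then be fed to the SVM classifier, as in the study's pipeline.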

  3. Heart MRI

    MedlinePlus

    Magnetic resonance imaging - cardiac; Magnetic resonance imaging - heart; Nuclear magnetic resonance - cardiac; NMR - cardiac; MRI of the heart; Cardiomyopathy - MRI; Heart failure - MRI; Congenital heart disease - MRI

  4. A catalog of stellar spectrophotometry (Adelman, et al. 1989): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.; Adelman, Saul J.

    1990-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the astronomical data centers, is described. The catalog is a collection of spectrophotometric observations made using rotating grating scanners and calibrated with the fluxes of Vega. The observations cover various wavelength regions between about 330 and 1080 nm.

  5. UNFOLD-SENSE: a parallel MRI method with self-calibration and artifact suppression.

    PubMed

    Madore, Bruno

    2004-08-01

    This work aims at improving the performance of parallel imaging by using it with our "unaliasing by Fourier-encoding the overlaps in the temporal dimension" (UNFOLD) temporal strategy. A self-calibration method called "self, hybrid referencing with UNFOLD and GRAPPA" (SHRUG) is presented. SHRUG combines the UNFOLD-based sensitivity mapping strategy introduced in the TSENSE method by Kellman et al. (5), with the strategy introduced in the GRAPPA method by Griswold et al. (10). SHRUG merges the two approaches to alleviate their respective limitations, and provides fast self-calibration at any given acceleration factor. UNFOLD-SENSE further includes an UNFOLD artifact suppression scheme to significantly suppress artifacts and amplified noise produced by parallel imaging. This suppression scheme, which was published previously (4), is related to another method that was presented independently as part of TSENSE. While the two are equivalent at accelerations ≤ 2.0, the present approach is shown here to be significantly superior at accelerations > 2.0, with up to double the artifact suppression at high accelerations. Furthermore, a slight modification of Cartesian SENSE is introduced, which allows departures from purely Cartesian sampling grids. This technique, termed variable-density SENSE (vdSENSE), allows the variable-density data required by SHRUG to be reconstructed with the simplicity and fast processing of Cartesian SENSE. UNFOLD-SENSE is given by the combination of SHRUG for sensitivity mapping, vdSENSE for reconstruction, and UNFOLD for artifact/amplified noise suppression. The method was implemented, with online reconstruction, on both an SSFP and a myocardium-perfusion sequence. The results from six patients scanned with UNFOLD-SENSE are presented.
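    At the heart of Cartesian SENSE is a small per-pixel linear system: at acceleration R = 2, each aliased pixel mixes two true pixels half a field of view apart through the coil sensitivities. A toy unfolding sketch with invented sensitivities:

```python
import numpy as np

# Coil sensitivities at the two overlapped pixel locations (invented):
# row i = coil i, column j = pixel j.
S = np.array([[1.0, 0.3],
              [0.2, 0.9]])

true_pix = np.array([5.0, 2.0])          # true signal at the two pixels
aliased = S @ true_pix                   # what each coil measures (folded)
unfolded = np.linalg.pinv(S) @ aliased   # SENSE: invert the per-pixel system
print(unfolded)
```

    Noise amplification arises when this small system is ill-conditioned, which is what UNFOLD-based suppression in the paper targets.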

  6. Automated assessment of thigh composition using machine learning for Dixon magnetic resonance images.

    PubMed

    Yang, Yu Xin; Chong, Mei Sian; Tay, Laura; Yew, Suzanne; Yeo, Audrey; Tan, Cher Heng

    2016-10-01

    To develop and validate a machine learning based automated segmentation method that jointly analyzes the four contrasts provided by the Dixon MRI technique for improved thigh composition segmentation accuracy. The automatic detection of body composition is formulated as a three-class classification problem. Each image voxel in the training dataset is assigned a correct label. A voxel classifier is trained and subsequently used to predict unseen data. Morphological operations are finally applied to generate volumetric segmented images for different structures. We applied this algorithm on datasets of (1) four contrast images, (2) water and fat images, and (3) unsuppressed images acquired from 190 subjects. The proposed method using four contrasts achieved the most accurate and robust segmentation compared to the use of combined fat and water images or of the unsuppressed image alone; average Dice coefficients of 0.94 ± 0.03, 0.96 ± 0.03, 0.80 ± 0.03, and 0.97 ± 0.01 were achieved for the bone region, subcutaneous adipose tissue (SAT), inter-muscular adipose tissue (IMAT), and muscle, respectively. Our proposed method based on machine learning produces accurate tissue quantification and makes effective use of the rich information provided by the four contrast images from Dixon MRI.

  7. The Potential for an Enhanced Role for MRI in Radiation-therapy Treatment Planning

    PubMed Central

    Metcalfe, P.; Liney, G. P.; Holloway, L.; Walker, A.; Barton, M.; Delaney, G. P.; Vinod, S.; Tomé, W.

    2013-01-01

    The exquisite soft-tissue contrast of magnetic resonance imaging (MRI) has meant that the technique is having an increasing role in contouring the gross tumor volume (GTV) and organs at risk (OAR) in radiation therapy treatment planning systems (TPS). MRI-planning scans from diagnostic MRI scanners are currently incorporated into the planning process by being registered to CT data. The soft-tissue data from the MRI provides target outline guidance and the CT provides a solid geometric and electron density map for accurate dose calculation on the TPS computer. There is increasing interest in MRI machine placement in radiotherapy clinics as an adjunct to CT simulators. Most vendors now offer 70 cm bores with flat couch inserts and specialised RF coil designs. We would refer to these devices as MR-simulators. There is also research into the future application of MR-simulators independent of CT and as in-room image-guidance devices. It is within the background of this increased interest in the utility of MRI in radiotherapy treatment planning that this paper is couched. The paper outlines publications that deal with standard MRI sequences used in current clinical practice. It then discusses the potential for using processed functional diffusion maps (fDM) derived from diffusion weighted image sequences in tracking tumor activity and tumor recurrence. Next, this paper reviews publications that describe the use of MRI in patient-management applications that may, in turn, be relevant to radiotherapy treatment planning. The review briefly discusses the concepts behind functional techniques such as dynamic contrast enhanced (DCE), diffusion-weighted (DW) MRI sequences and magnetic resonance spectroscopic imaging (MRSI). Significant applications of MR are discussed in terms of the following treatment sites: brain, head and neck, breast, lung, prostate and cervix. 
While not yet routine, the use of apparent diffusion coefficient (ADC) map analysis indicates an exciting future
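    The ADC maps underlying fDM analysis come from fitting the monoexponential diffusion signal decay S(b) = S0·exp(-b·ADC). A two-point sketch with made-up signal values:

```python
import math

# Two-point ADC estimate: ADC = ln(S1/S2) / (b2 - b1).
b1, b2 = 0.0, 1000.0     # b-values, s/mm^2
S1, S2 = 1000.0, 400.0   # DWI signal at each b-value (illustrative numbers)
adc = math.log(S1 / S2) / (b2 - b1)
print(f"ADC = {adc:.2e} mm^2/s")
```

    An fDM then tracks voxelwise changes in such ADC values between serial scans.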

  8. Monitoring local heating around an interventional MRI antenna with RF radiometry

    PubMed Central

    Ertürk, M. Arcan; El-Sharkawy, AbdEl-Monem M.; Bottomley, Paul A.

    2015-01-01

    Purpose: Radiofrequency (RF) radiometry uses thermal noise detected by an antenna to measure the temperature of objects independent of medical imaging technologies such as magnetic resonance imaging (MRI). Here, an active interventional MRI antenna can be deployed as an RF radiometer to measure local heating, as a possible new method of monitoring device safety and thermal therapy. Methods: A 128 MHz radiometer receiver was fabricated to measure the RF noise voltage from an interventional 3 T MRI loopless antenna and calibrated for temperature in a uniformly heated bioanalogous gel phantom. Local heating (ΔT) was induced using the antenna for RF transmission and measured by RF radiometry, fiber-optic thermal sensors, and MRI thermometry. The spatial thermal sensitivity of the antenna radiometer was numerically computed using a method-of-moments electric field analysis. The gel’s thermal conductivity was measured by MRI thermometry, and the localized time-dependent ΔT distribution computed from the bioheat transfer equation and compared with radiometry measurements. A “H-factor” relating the 1 g-averaged ΔT to the radiometric temperature was introduced to estimate peak temperature rise in the antenna’s sensitive region. Results: The loopless antenna radiometer linearly tracked temperature inside a thermally equilibrated phantom up to 73 °C to within ±0.3 °C at a 2 Hz sample rate. Computed and MRI thermometric measures of peak ΔT agreed within 13%. The peak 1 g-average temperature was H = 1.36 ± 0.02 times higher than the radiometric temperature for any media with a thermal conductivity of 0.15–0.50 (W/m)/K, indicating that the radiometer can measure peak 1 g-averaged ΔT in physiologically relevant tissue within ±0.4 °C. Conclusions: Active internal MRI detectors can serve as RF radiometers at the MRI frequency to provide accurate independent measures of local and peak temperature without the artifacts that can accompany MRI thermometry or
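    Since thermal (Johnson) noise power grows linearly with temperature, the radiometer can be calibrated with a simple linear fit, and a scaling factor like the paper's H then relates radiometric ΔT to peak 1 g-averaged ΔT. A sketch with illustrative readings (only the H = 1.36 value is taken from the abstract):

```python
import numpy as np

# Hypothetical calibration data: radiometer output vs. known bath temperature.
temps = np.array([20.0, 40.0, 60.0, 73.0])     # bath temperatures, deg C
readings = 0.05 * temps + 1.3                  # radiometer output (a.u., made up)
slope, offset = np.polyfit(readings, temps, 1)  # linear calibration fit

def to_celsius(v):
    """Convert a radiometer reading to temperature via the fitted line."""
    return slope * v + offset

H = 1.36  # peak 1 g-averaged / radiometric temperature ratio (from abstract)
dT_radiometric = to_celsius(3.8) - to_celsius(3.3)
peak_1g_dT = H * dT_radiometric
print(round(peak_1g_dT, 2))
```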

  9. Monitoring local heating around an interventional MRI antenna with RF radiometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ertürk, M. Arcan; El-Sharkawy, AbdEl-Monem M.; Bottomley, Paul A., E-mail: bottoml@mri.jhu.edu

    Purpose: Radiofrequency (RF) radiometry uses thermal noise detected by an antenna to measure the temperature of objects independent of medical imaging technologies such as magnetic resonance imaging (MRI). Here, an active interventional MRI antenna can be deployed as an RF radiometer to measure local heating, as a possible new method of monitoring device safety and thermal therapy. Methods: A 128 MHz radiometer receiver was fabricated to measure the RF noise voltage from an interventional 3 T MRI loopless antenna and calibrated for temperature in a uniformly heated bioanalogous gel phantom. Local heating (ΔT) was induced using the antenna for RF transmission and measured by RF radiometry, fiber-optic thermal sensors, and MRI thermometry. The spatial thermal sensitivity of the antenna radiometer was numerically computed using a method-of-moments electric field analysis. The gel’s thermal conductivity was measured by MRI thermometry, and the localized time-dependent ΔT distribution computed from the bioheat transfer equation and compared with radiometry measurements. A “H-factor” relating the 1 g-averaged ΔT to the radiometric temperature was introduced to estimate peak temperature rise in the antenna’s sensitive region. Results: The loopless antenna radiometer linearly tracked temperature inside a thermally equilibrated phantom up to 73 °C to within ±0.3 °C at a 2 Hz sample rate. Computed and MRI thermometric measures of peak ΔT agreed within 13%. The peak 1 g-average temperature was H = 1.36 ± 0.02 times higher than the radiometric temperature for any media with a thermal conductivity of 0.15–0.50 (W/m)/K, indicating that the radiometer can measure peak 1 g-averaged ΔT in physiologically relevant tissue within ±0.4 °C. Conclusions: Active internal MRI detectors can serve as RF radiometers at the MRI frequency to provide accurate independent measures of local and peak temperature without the artifacts that can accompany MRI

  10. Automated detection of focal cortical dysplasia type II with surface-based magnetic resonance imaging postprocessing and machine learning.

    PubMed

    Jin, Bo; Krishnan, Balu; Adler, Sophie; Wagstyl, Konrad; Hu, Wenhan; Jones, Stephen; Najm, Imad; Alexopoulos, Andreas; Zhang, Kai; Zhang, Jianguo; Ding, Meiping; Wang, Shuang; Wang, Zhong Irene

    2018-05-01

    Focal cortical dysplasia (FCD) is a major pathology in patients undergoing surgical resection to treat pharmacoresistant epilepsy. Magnetic resonance imaging (MRI) postprocessing methods may provide essential help for detection of FCD. In this study, we utilized surface-based MRI morphometry and machine learning for automated lesion detection in a mixed cohort of patients with FCD type II from 3 different epilepsy centers. Sixty-one patients with pharmacoresistant epilepsy and histologically proven FCD type II were included in the study. The patients had been evaluated at 3 different epilepsy centers using 3 different MRI scanners. A T1 volumetric sequence was used for postprocessing. A normal database was constructed with 120 healthy controls. We also included 35 healthy test controls and 15 disease test controls with histologically confirmed hippocampal sclerosis to assess specificity. Features were calculated and incorporated into a nonlinear neural network classifier, which was trained to identify the lesional cluster. We optimized the threshold of the output probability map from the classifier by performing receiver operating characteristic (ROC) analyses. Success of detection was defined by overlap between the final cluster and the manual labeling. Performance was evaluated using k-fold cross-validation. A threshold of 0.9 showed an optimal sensitivity of 73.7% and specificity of 90.0%. The area under the curve for the ROC analysis was 0.75, which suggests a discriminative classifier. Sensitivity and specificity were not significantly different for patients from different centers, suggesting robustness of performance. The correct detection rate was significantly lower in patients with initially normal MRI than in patients with unequivocally positive MRI. Subgroup analysis showed the size of the training group and normal control database impacted classifier performance. 
Automated surface-based MRI morphometry equipped with machine learning showed robust performance across
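    Threshold selection on a classifier's output probability map, as in the ROC analysis above, can be sketched by sweeping thresholds and scoring sensitivity/specificity. Here the threshold maximizes Youden's J on invented scores; the study's actual selection criterion may differ:

```python
import numpy as np

def sens_spec(scores, labels, thr):
    """Sensitivity/specificity of thresholding classifier probabilities."""
    pred = scores >= thr
    sens = (pred & (labels == 1)).sum() / (labels == 1).sum()
    spec = (~pred & (labels == 0)).sum() / (labels == 0).sum()
    return sens, spec

# Invented probability outputs for lesional (1) and control (0) clusters.
scores = np.array([0.95, 0.92, 0.88, 0.70, 0.40, 0.30, 0.93, 0.10])
labels = np.array([1,    1,    1,    1,    0,    0,    0,    0])

# Sweep thresholds and pick the one maximizing Youden's J = sens + spec - 1.
thrs = np.linspace(0.0, 1.0, 101)
J = [sum(sens_spec(scores, labels, t)) - 1 for t in thrs]
best = thrs[int(np.argmax(J))]
print(best, sens_spec(scores, labels, best))
```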

  11. Machine rates for selected forest harvesting machines

    Treesearch

    R.W. Brinker; J. Kinard; Robert Rummer; B. Lanford

    2002-01-01

    Very little new literature has been published on the subject of machine rates and machine cost analysis since 1989 when the Alabama Agricultural Experiment Station Circular 296, Machine Rates for Selected Forest Harvesting Machines, was originally published. Many machines discussed in the original publication have undergone substantial changes in various aspects, not...
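    A machine rate combines hourly ownership costs (depreciation and interest on the average yearly investment) with operating and labor costs. A textbook-style arithmetic sketch with hypothetical figures, not values from the circular:

```python
# Hypothetical inputs for one harvesting machine.
purchase = 250_000.0   # purchase price, $
salvage = 50_000.0     # residual (salvage) value, $
life_yr = 5            # economic life, years
hours_yr = 2_000       # scheduled machine hours per year
rate_int = 0.08        # interest/insurance/taxes, as a fraction of AYI

# Straight-line depreciation per scheduled hour.
depreciation = (purchase - salvage) / (life_yr * hours_yr)

# Average yearly investment (AYI) and the interest charge per hour.
avg_investment = (purchase - salvage) * (life_yr + 1) / (2 * life_yr) + salvage
interest = rate_int * avg_investment / hours_yr

# Operating and labor costs, $ per hour (hypothetical).
fuel_lube = 28.0
labor = 22.0

machine_rate = depreciation + interest + fuel_lube + labor
print(round(machine_rate, 2))
```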

  12. Ultra-low field MRI: bringing MRI to new arenas

    DOE PAGES

    Magnelind, Per Erik; Matlashov, Andrei Nikolaevich; Newman, Shaun Garrett; ...

    2016-11-01

    Conventional magnetic resonance imaging (MRI) is moving toward the use of stronger and stronger magnetic fields, with 3 T and even 7 T systems being increasingly used in routine clinical applications. However, there is another branch of MRI, namely Ultra-Low Field MRI (ULF-MRI), where the magnetic fields during readout are several orders of magnitude smaller, namely 1–100 μT. While conventional high-field MRI remains the gold standard, there are several situations, such as in military emergencies or in developing countries, where for cost and logistical reasons conventional MRI is not practical. In such scenarios, ULF-MRI could provide a solution. Lastly, this article describes the basic principles and the potential of ULF-MRI.
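    The readout-field difference is easy to quantify: the proton Larmor frequency scales linearly with field strength (f = γ̄·B0, with γ̄ ≈ 42.577 MHz/T), so ULF readout falls in the audio band rather than at radio frequencies:

```python
# Proton gyromagnetic ratio divided by 2*pi, in Hz per tesla.
gamma_bar = 42.577e6

# Compare a 3 T clinical field with the 1-100 microtesla ULF readout range.
for B0 in (3.0, 100e-6, 1e-6):
    print(f"{B0:g} T -> Larmor frequency {gamma_bar * B0:.4g} Hz")
```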

  13. Ultra-low field MRI: bringing MRI to new arenas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magnelind, Per Erik; Matlashov, Andrei Nikolaevich; Newman, Shaun Garrett

    Conventional magnetic resonance imaging (MRI) is moving toward the use of stronger and stronger magnetic fields, with 3 T and even 7 T systems being increasingly used in routine clinical applications. However, there is another branch of MRI, namely Ultra-Low Field MRI (ULF-MRI), where the magnetic fields during readout are several orders of magnitude smaller, namely 1–100 μT. While conventional high-field MRI remains the gold standard, there are several situations, such as in military emergencies or in developing countries, where for cost and logistical reasons conventional MRI is not practical. In such scenarios, ULF-MRI could provide a solution. Lastly, this article describes the basic principles and the potential of ULF-MRI.

  14. Abnormal brain structure as a potential biomarker for venous erectile dysfunction: evidence from multimodal MRI and machine learning.

    PubMed

    Li, Lingli; Fan, Wenliang; Li, Jun; Li, Quanlin; Wang, Jin; Fan, Yang; Ye, Tianhe; Guo, Jialun; Li, Sen; Zhang, Youpeng; Cheng, Yongbiao; Tang, Yong; Zeng, Hanqing; Yang, Lian; Zhu, Zhaohui

    2018-03-29

    To investigate the cerebral structural changes related to venous erectile dysfunction (VED) and the relationship of these changes to clinical symptoms and disorder duration and distinguish patients with VED from healthy controls using a machine learning classification. Forty-five VED patients and 50 healthy controls were included. Voxel-based morphometry (VBM), tract-based spatial statistics (TBSS) and correlation analyses of VED patients and clinical variables were performed. The machine learning classification method was adopted to confirm its effectiveness in distinguishing VED patients from healthy controls. Compared to healthy control subjects, VED patients showed significantly decreased cortical volumes in the left postcentral gyrus and precentral gyrus, while only the right middle temporal gyrus showed a significant increase in cortical volume. Increased axial diffusivity (AD), radial diffusivity (RD) and mean diffusivity (MD) values were observed in widespread brain regions. Certain regions of these alterations related to VED patients showed significant correlations with clinical symptoms and disorder durations. Machine learning analyses discriminated patients from controls with an overall accuracy of 96.7%, sensitivity of 93.3% and specificity of 99.0%. Cortical volume and white matter (WM) microstructural changes were observed in VED patients, and showed significant correlations with clinical symptoms and dysfunction durations. Various DTI-derived indices of some brain regions could be regarded as reliable discriminating features between VED patients and healthy control subjects, as shown by machine learning analyses. • Multimodal magnetic resonance imaging helps clinicians to assess patients with VED. • VED patients show cerebral structural alterations related to their clinical symptoms. • Machine learning analyses discriminated VED patients from controls with excellent performance. • Machine learning classification provided a preliminary demonstration of DTI

  15. Evaluation of factors to convert absorbed dose calibrations from graphite to water for the NPL high-energy photon calibration service.

    PubMed

    Nutbrown, R F; Duane, S; Shipley, D R; Thomas, R A S

    2002-02-07

    The National Physical Laboratory (NPL) provides a high-energy photon calibration service using 4-19 MV x-rays and 60Co gamma-radiation for secondary standard dosemeters in terms of absorbed dose to water. The primary standard used for this service is a graphite calorimeter and so absorbed dose calibrations must be converted from graphite to water. The conversion factors currently in use were determined prior to the launch of this service in 1988. Since then, it has been found that the differences in inherent filtration between the NPL LINAC and typical clinical machines are large enough to affect absorbed dose calibrations and, since 1992, calibrations have been performed in heavily filtered qualities. The conversion factors for heavily filtered qualities were determined by interpolation and extrapolation of lightly filtered results as a function of tissue phantom ratio 20,10 (TPR20,10). This paper aims to evaluate these factors for all megavoltage photon energies provided by the NPL LINAC for both lightly and heavily filtered qualities and for 60Co gamma-radiation in two ways. The first method involves the use of the photon fluence-scaling theorem. This states that if two blocks of different material are irradiated by the same photon beam, and if all dimensions are scaled in the inverse ratio of the electron densities of the two media, then, assuming that all photon interactions occur by Compton scatter, the photon attenuation and scatter factors at corresponding scaled points of measurement in the phantom will be identical. The second method involves making in-phantom measurements of chamber response at a constant target-chamber distance. Monte Carlo techniques are then used to determine the corresponding dose to the medium in order to determine the chamber calibration factor directly. Values of the ratio of absorbed dose calibration factors in water and in graphite determined in these two ways agree with each other to within 0.2% (1σ uncertainty). 
The best fit

  16. Spectrophotometric determination of ternary mixtures of thiamin, riboflavin and pyridoxal in pharmaceutical and human plasma by least-squares support vector machines.

    PubMed

    Niazi, Ali; Zolgharnein, Javad; Afiuni-Zadeh, Somaie

    2007-11-01

    Ternary mixtures of thiamin, riboflavin and pyridoxal have been simultaneously determined in synthetic and real samples by application of spectrophotometry and least-squares support vector machines. The calibration graphs were linear in the ranges of 1.0–20.0, 1.0–10.0 and 1.0–20.0 μg ml(-1), with detection limits of 0.6, 0.5 and 0.7 μg ml(-1) for thiamin, riboflavin and pyridoxal, respectively. The experimental calibration matrix was designed with 21 mixtures of these chemicals. The concentrations were varied within the linear ranges of the calibration graphs. The simultaneous determination of these vitamin mixtures by spectrophotometric methods is a difficult problem, due to spectral interference. Partial least squares (PLS) modeling and least-squares support vector machines were used for the multivariate calibration of the spectrophotometric data. An excellent model was built using LS-SVM, with low prediction errors and superior performance relative to PLS. The root mean square errors of prediction (RMSEP) for thiamin, riboflavin and pyridoxal with PLS and LS-SVM were 0.6926, 0.3755, 0.4322 and 0.0421, 0.0318, 0.0457, respectively. The proposed method was satisfactorily applied to the rapid simultaneous determination of thiamin, riboflavin and pyridoxal in commercial pharmaceutical preparations and human plasma samples.
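    In the idealized case, multicomponent spectrophotometry is the linear Beer-Lambert system A = CK (mixtures × wavelengths = concentrations × component spectra), solvable by classical least squares; PLS and LS-SVM are needed precisely because real spectra depart from this ideal. An illustrative least-squares stand-in with synthetic spectra, including the RMSEP figure of merit:

```python
import numpy as np

# Synthetic Beer-Lambert system: rows of C are mixture concentrations,
# rows of K_true are per-component absorption spectra (all invented).
rng = np.random.default_rng(2)
K_true = rng.uniform(0.1, 1.0, size=(3, 40))   # 3 vitamins x 40 wavelengths
C_cal = rng.uniform(1.0, 20.0, size=(21, 3))   # 21 calibration mixtures
A_cal = C_cal @ K_true                         # calibration absorbances

# Calibration step: estimate the component spectra from known mixtures.
K_est, *_ = np.linalg.lstsq(C_cal, A_cal, rcond=None)

# Prediction step: recover concentrations of unseen test mixtures.
C_test = rng.uniform(1.0, 20.0, size=(5, 3))
A_test = C_test @ K_true
C_pred = np.linalg.lstsq(K_est.T, A_test.T, rcond=None)[0].T

# Root mean square error of prediction, per component.
rmsep = np.sqrt(((C_pred - C_test) ** 2).mean(axis=0))
print(rmsep)
```

    With noise-free synthetic data the RMSEP is essentially zero; the nonzero RMSEP values in the abstract reflect real spectral overlap and noise.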

  17. Imaging patterns predict patient survival and molecular subtype in glioblastoma via machine learning techniques

    PubMed Central

    Macyszyn, Luke; Akbari, Hamed; Pisapia, Jared M.; Da, Xiao; Attiah, Mark; Pigrish, Vadim; Bi, Yingtao; Pal, Sharmistha; Davuluri, Ramana V.; Roccograndi, Laura; Dahmane, Nadia; Martinez-Lage, Maria; Biros, George; Wolf, Ronald L.; Bilello, Michel; O'Rourke, Donald M.; Davatzikos, Christos

    2016-01-01

    Background MRI characteristics of brain gliomas have been used to predict clinical outcome and molecular tumor characteristics. However, previously reported imaging biomarkers have not been sufficiently accurate or reproducible to enter routine clinical practice and often rely on relatively simple MRI measures. The current study leverages advanced image analysis and machine learning algorithms to identify complex and reproducible imaging patterns predictive of overall survival and molecular subtype in glioblastoma (GB). Methods One hundred five patients with GB were first used to extract approximately 60 diverse features from preoperative multiparametric MRIs. These imaging features were used by a machine learning algorithm to derive imaging predictors of patient survival and molecular subtype. Cross-validation ensured generalizability of these predictors to new patients. Subsequently, the predictors were evaluated in a prospective cohort of 29 new patients. Results Survival curves yielded a hazard ratio of 10.64 for predicted long versus short survivors. The overall, 3-way (long/medium/short survival) accuracy in the prospective cohort approached 80%. Classification of patients into the 4 molecular subtypes of GB achieved 76% accuracy. Conclusions By employing machine learning techniques, we were able to demonstrate that imaging patterns are highly predictive of patient survival. Additionally, we found that GB subtypes have distinctive imaging phenotypes. These results reveal that when imaging markers related to infiltration, cell density, microvascularity, and blood–brain barrier compromise are integrated via advanced pattern analysis methods, they form very accurate predictive biomarkers. These predictive markers used solely preoperative images, hence they can significantly augment diagnosis and treatment of GB patients. PMID:26188015

  18. Radiomics based targeted radiotherapy planning (Rad-TRaP): a computational framework for prostate cancer treatment planning with MRI.

    PubMed

    Shiradkar, Rakesh; Podder, Tarun K; Algohary, Ahmad; Viswanath, Satish; Ellis, Rodney J; Madabhushi, Anant

    2016-11-10

    Radiomics, or computer-extracted texture features, have been shown to achieve superior performance to multiparametric MRI (mpMRI) signal intensities alone in targeting prostate cancer (PCa) lesions. Radiomics along with deformable co-registration tools can be used to develop a framework to generate targeted focal radiotherapy treatment plans. The Rad-TRaP framework comprises three distinct modules. The first is a module for radiomics-based detection of PCa lesions on mpMRI via a feature-enabled machine learning classifier. The second module comprises a multi-modal deformable co-registration scheme to map tissue, organ, and delineated target volumes from MRI onto CT. Finally, the third module involves generation of a radiomics-based dose plan on MRI for brachytherapy and on CT for EBRT using the target delineations transferred from the MRI to the CT. The Rad-TRaP framework was evaluated using a retrospective cohort of 23 patient studies from two different institutions. 11 patients from the first institution were used to train a radiomics classifier, which was used to detect tumor regions in 12 patients from the second institution. The ground truth cancer delineations for training the machine learning classifier were made by an experienced radiation oncologist using mpMRI, knowledge of biopsy location and radiology reports. The detected tumor regions were used to generate treatment plans for brachytherapy using mpMRI, and tumor regions mapped from MRI to CT to generate corresponding treatment plans for EBRT. For each of EBRT and brachytherapy, 3 dose plans were generated - whole gland homogeneous ([Formula: see text]) which is the current clinical standard, radiomics-based focal ([Formula: see text]), and whole gland with a radiomics-based focal boost ([Formula: see text]). Comparison of [Formula: see text] against conventional [Formula: see text] revealed that targeted focal brachytherapy would result in a marked reduction in dosage to the OARs while ensuring that the

  19. MR signal intensity: staying on the bright side in MR image interpretation

    PubMed Central

    Bloem, Johan L; Reijnierse, Monique; Huizinga, Tom W J

    2018-01-01

    In 2003, the Nobel Prize for Medicine was awarded for contributions to the invention of MRI, reflecting the incredible value of MRI for medicine. Since 2003, enormous technical advancements have been made in acquiring MR images. However, MRI has a complicated, accident-prone dark side: images are not calibrated, and the resulting images depend on all kinds of subjective choices in machine settings, acquisition technique parameters, reconstruction techniques, data transmission, filtering and postprocessing techniques. The bright side is that understanding MR techniques increases opportunities to unravel characteristics of tissue. In this viewpoint, we summarise the different subjective choices that can be made to generate MR images and stress the importance of communication between radiologists and rheumatologists to correctly interpret images.

  20. Rapid geodesic mapping of brain functional connectivity: implementation of a dedicated co-processor in a field-programmable gate array (FPGA) and application to resting state functional MRI.

    PubMed

    Minati, Ludovico; Cercignani, Mara; Chan, Dennis

    2013-10-01

    Graph theory-based analyses of brain network topology can be used to model the spatiotemporal correlations in neural activity detected through fMRI, and such approaches have wide-ranging potential, from detection of alterations in preclinical Alzheimer's disease through to command identification in brain-machine interfaces. However, due to prohibitive computational costs, graph-based analyses to date have principally focused on measuring connection density rather than mapping the topological architecture in full by exhaustive shortest-path determination. This paper outlines a solution to this problem through parallel implementation of Dijkstra's algorithm in programmable logic. The processor design is optimized for large, sparse graphs and provided in full as synthesizable VHDL code. An acceleration factor between 15 and 18 is obtained on a representative resting-state fMRI dataset, and maps of Euclidean path length reveal the anticipated heterogeneous cortical involvement in long-range integrative processing. These results enable high-resolution geodesic connectivity mapping for resting-state fMRI in patient populations and real-time geodesic mapping to support identification of imagined actions for fMRI-based brain-machine interfaces. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
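    The exhaustive shortest-path step that the FPGA co-processor accelerates is, at its core, a repeated single-source Dijkstra computation over a sparse graph. A plain-software reference version using an adjacency list and a binary heap (not the paper's VHDL implementation) looks like this:

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest paths on a sparse graph with non-negative
    edge weights. adj maps node -> list of (neighbor, weight) pairs;
    returns a dict of shortest path lengths from src."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

    Mapping the full topology means running this once per node, which is exactly the workload that motivates parallel hardware for large connectivity graphs.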

  1. A computerized MRI biomarker quantification scheme for a canine model of Duchenne muscular dystrophy.

    PubMed

    Wang, Jiahui; Fan, Zheng; Vandenborne, Krista; Walter, Glenn; Shiloh-Malawsky, Yael; An, Hongyu; Kornegay, Joe N; Styner, Martin A

    2013-09-01

    Golden retriever muscular dystrophy (GRMD) is a widely used canine model of Duchenne muscular dystrophy (DMD). Recent studies have shown that magnetic resonance imaging (MRI) can be used to non-invasively detect consistent changes in both DMD and GRMD. In this paper, we propose a semiautomated system to quantify MRI biomarkers of GRMD. Our system was applied to a database of 45 MRI scans from 8 normal and 10 GRMD dogs in a longitudinal natural history study. We first segmented six proximal pelvic limb muscles using a semiautomated full muscle segmentation method. We then performed preprocessing, including intensity inhomogeneity correction, spatial registration of different image sequences, intensity calibration of T2-weighted and T2-weighted fat-suppressed images, and calculation of MRI biomarker maps. Finally, for each of the segmented muscles, we automatically measured MRI biomarkers of muscle volume, intensity statistics over MRI biomarker maps, and statistical image texture features. The muscle volume and the mean intensities in T2 value, fat, and water maps showed group differences between normal and GRMD dogs. For the statistical texture biomarkers, both the histogram and run-length matrix features showed obvious group differences between normal and GRMD dogs. The full muscle segmentation showed significantly less error and variability in the proposed biomarkers when compared to the standard, limited muscle range segmentation. The experimental results demonstrated that this quantification tool could reliably quantify MRI biomarkers in GRMD dogs, suggesting that it would also be useful for quantifying disease progression and measuring therapeutic effect in DMD patients.

  2. MRI-based quantification of Duchenne muscular dystrophy in a canine model

    NASA Astrophysics Data System (ADS)

    Wang, Jiahui; Fan, Zheng; Kornegay, Joe N.; Styner, Martin A.

    2011-03-01

    Duchenne muscular dystrophy (DMD) is a progressive and fatal X-linked disease caused by mutations in the DMD gene. Magnetic resonance imaging (MRI) has shown potential to provide non-invasive and objective biomarkers for monitoring disease progression and therapeutic effect in DMD. In this paper, we propose a semi-automated scheme to quantify MRI features of golden retriever muscular dystrophy (GRMD), a canine model of DMD. Our method was applied to a natural history data set and a hydrodynamic limb perfusion data set. The scheme is composed of three modules: pre-processing, muscle segmentation, and feature analysis. The pre-processing module includes: calculation of T2 maps, spatial registration of T2 weighted (T2WI) images, T2 weighted fat suppressed (T2FS) images, and T2 maps, and intensity calibration of T2WI and T2FS images. We then manually segment six pelvic limb muscles. For each of the segmented muscles, we finally automatically measure volume and intensity statistics of the T2FS images and T2 maps. For the natural history study, our results showed that four of six muscles in affected dogs had smaller volumes and all had higher mean intensities in T2 maps as compared to normal dogs. For the perfusion study, the muscle volumes and mean intensities in T2FS were increased in the post-perfusion MRI scans as compared to pre-perfusion MRI scans, as predicted. We conclude that our scheme successfully performs quantitative analysis of muscle MRI features of GRMD.

  3. Machine processing of ERTS and ground truth data

    NASA Technical Reports Server (NTRS)

    Rogers, R. H. (Principal Investigator); Peacock, K.

    1973-01-01

    The author has identified the following significant results. Results achieved by ERTS-Atmospheric Experiment PR303, whose objective is to establish a radiometric calibration technique, are reported. This technique, which determines and removes solar and atmospheric parameters that degrade the radiometric fidelity of ERTS-1 data, transforms the ERTS-1 sensor radiance measurements to absolute target reflectance signatures. A radiant power measuring instrument and its use in determining atmospheric parameters needed for ground truth are discussed. The procedures used and results achieved in machine processing ERTS-1 computer-compatible tapes and atmospheric parameters to obtain target reflectance are reviewed.
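    The radiance-to-reflectance transformation described above has a standard simplified form: subtract the additive path radiance, then divide out the solar irradiance and atmospheric transmittance. This sketch uses generic symbols and assumes a Lambertian target; it is not the exact PR303 formulation:

```python
import math

def target_reflectance(l_sensor, l_path, e_solar, sun_zenith_deg, transmittance):
    """Convert at-sensor radiance to surface reflectance by removing the
    additive path radiance and the multiplicative solar/atmospheric terms.
    l_sensor, l_path: radiances; e_solar: exoatmospheric solar irradiance."""
    e_ground = e_solar * math.cos(math.radians(sun_zenith_deg)) * transmittance
    return math.pi * (l_sensor - l_path) / e_ground
```

    With no atmosphere (zero path radiance, unit transmittance) the formula collapses to the familiar pi * L / (E * cos(theta)) top-of-atmosphere reflectance.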

  4. TU-F-BRB-01: Resolving and Characterizing Breathing Motion for Radiotherapy with MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tryggestad, E.

    The current clinical standard of organ respiratory imaging, 4D-CT, is fundamentally limited by poor soft-tissue contrast and imaging dose. These limitations are potential barriers to beneficial “4D” radiotherapy methods which optimize the target and OAR dose-volume considering breathing motion but rely on a robust motion characterization. Conversely, MRI imparts no known radiation risk and has excellent soft-tissue contrast. MRI-based motion management is therefore highly desirable and holds great promise to improve radiotherapy of moving cancers, particularly in the abdomen. Over the past decade, MRI techniques have improved significantly, making MR-based motion management clinically feasible. For example, cine MRI has highmore » temporal resolution up to 10 f/s and has been used to track and/or characterize tumor motion, study correlation between external and internal motions. New MR technologies, such as 4D-MRI and MRI hybrid treatment machines (i.e. MR-linac or MR-Co60), have been recently developed. These technologies can lead to more accurate target volume determination and more precise radiation dose delivery via direct tumor gating or tracking. Despite all these promises, great challenges exist and the achievable clinical benefit of MRI-based tumor motion management has yet to be fully explored, much less realized. In this proposal, we will review novel MR-based motion management methods and technologies, the state-of-the-art concerning MRI development and clinical application and the barriers to more widespread adoption. Learning Objectives: Discuss the need of MR-based motion management for improving patient care in radiotherapy. Understand MR techniques for motion imaging and tumor motion characterization. Understand the current state of the art and future steps for clinical integration. Henry Ford Health System holds research agreements with Philips Healthcare. Research sponsored in part by a Henry Ford Health System Internal Mentored Grant.« less

  5. Time-dependent correlation of cerebral blood flow with oxygen metabolism in activated human visual cortex as measured by fMRI.

    PubMed

    Lin, Ai-Ling; Fox, Peter T; Yang, Yihong; Lu, Hanzhang; Tan, Li-Hai; Gao, Jia-Hong

    2009-01-01

    The aim of this study was to investigate the relationship between relative cerebral blood flow (delta CBF) and relative cerebral metabolic rate of oxygen (delta CMRO(2)) during continuous visual stimulation (21 min at 8 Hz) with fMRI biophysical models by simultaneously measuring BOLD, CBF and CBV fMRI signals. The delta CMRO(2) was determined by both a newly calibrated single-compartment model (SCM) and a multi-compartment model (MCM) and was in agreement between these two models (P>0.5). The duration-varying delta CBF and delta CMRO(2) showed a negative correlation with time (r=-0.97, P<0.001); i.e., delta CBF declines while delta CMRO(2) increases during continuous stimulation. This study also illustrated that without properly calibrating the critical parameters employed in the SCM, an incorrect and even an opposite appearance of the flow-metabolism relationship during prolonged visual stimulation (positively linear coupling) can result. The time-dependent negative correlation between flow and metabolism demonstrated in this fMRI study is consistent with a previous PET observation and further supports the view that the increase in CBF is driven by factors other than oxygen demand and the energy demands will eventually require increased aerobic metabolism as stimulation continues.
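    The reported time correlation (r = -0.97) is an ordinary Pearson coefficient between two paired time series; computing it directly is straightforward (the sample values in the test are synthetic, not data from the study):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5
```

    A value near -1, as in the study, indicates that one quantity declines almost linearly as the other rises.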

  6. Waveguide Calibrator for Multi-Element Probe Calibration

    NASA Technical Reports Server (NTRS)

    Sommerfeldt, Scott D.; Blotter, Jonathan D.

    2007-01-01

    A calibrator, referred to as the spider design, can be used to calibrate probes incorporating multiple acoustic sensing elements. The application is an acoustic energy density probe, although the calibrator can be used for other types of acoustic probes. The calibrator relies on the use of acoustic waveguide technology to produce the same acoustic field at each of the sensing elements. As a result, the sensing elements can be separated from each other, but still calibrated through use of the acoustic waveguides. Standard calibration techniques involve placement of an individual microphone into a small cavity with a known, uniform pressure to perform the calibration. If a cavity is manufactured with sufficient size to insert the energy density probe, it has been found that a uniform pressure field can only be created at very low frequencies, due to the size of the probe. The size of the energy density probe prevents one from having the same pressure at each microphone in a cavity, due to the wave effects. The "spider" design probe is effective in calibrating multiple microphones separated from each other. The spider design ensures that the same wave effects exist for each microphone, each with an individual sound path. The calibrator's speaker is mounted at one end of a small plane-wave tube, 14 cm long and 4.1 cm in diameter. This length was chosen so that the first evanescent cross mode of the plane-wave tube would be attenuated by about 90 dB, thus leaving just the plane wave at the termination plane of the tube. The tube terminates with a small, acrylic plate with five holes placed symmetrically about the axis of the speaker. Four ports are included for the four microphones on the probe. The fifth port is included for the pre-calibrated reference microphone. The ports in the acrylic plate are in turn connected to the probe sensing elements via flexible PVC tubes. These five tubes are the same length, so the acoustic wave effects are the same in each tube. The
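    The quoted cross-mode attenuation can be sanity-checked with the textbook expression for evanescent decay below a duct mode's cutoff. The mode constant 1.841 (first azimuthal mode of a circular duct) and the sound speed are standard values, but the result depends on the operating frequency, so this sketch will not reproduce the 90 dB design figure exactly:

```python
import math

def evanescent_attenuation_db(length_m, diam_m, freq_hz, c=343.0):
    """Attenuation over length_m of the first circular-duct cross mode,
    which is evanescent (non-propagating) below its cutoff frequency."""
    fc = 1.841 * c / (math.pi * diam_m)  # cutoff of the first azimuthal mode
    if freq_hz >= fc:
        return 0.0  # mode propagates; no evanescent decay
    kc = 2.0 * math.pi * fc / c
    alpha = kc * math.sqrt(1.0 - (freq_hz / fc) ** 2)  # nepers per metre
    return 8.686 * alpha * length_m  # convert nepers to dB
```

    For the 4.1 cm tube, the cutoff is near 5 kHz, so at typical calibration frequencies the 14 cm length attenuates the cross mode by on the order of 100 dB, consistent with the design intent of leaving only the plane wave.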

  7. [A new machinability test machine and the machinability of composite resins for core built-up].

    PubMed

    Iwasaki, N

    2001-06-01

    A new machinability test machine especially for dental materials was contrived. The purpose of this study was to evaluate the effects of grinding conditions on machinability of core built-up resins using this machine, and to confirm the relationship between machinability and other properties of composite resins. The experimental machinability test machine consisted of a dental air-turbine handpiece, a control weight unit, a driving unit of the stage fixing the test specimen, and so on. The machinability was evaluated as the change in volume after grinding using a diamond point. Five kinds of core built-up resins and human teeth were used in this study. The machinabilities of these composite resins increased with an increasing load during grinding, and decreased with repeated grinding. There was no obvious correlation between the machinability and Vickers' hardness; however, a negative correlation was observed between machinability and scratch width.

  8. Cost-Benefit Analysis of Computer Resources for Machine Learning

    USGS Publications Warehouse

    Champion, Richard A.

    2007-01-01

    Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.
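    The stratified strategy described above amounts to splitting a fixed calibration budget across regions in proportion to how much each region currently contributes to the error. A minimal sketch (region names and the error metric are hypothetical, not from the report):

```python
def stratified_allocation(region_errors, total_n):
    """Split a fixed sampling budget across regions in proportion to each
    region's current calibration error, concentrating samples where the
    PNN fit is worst."""
    total_err = sum(region_errors.values())
    return {name: round(total_n * err / total_err)
            for name, err in region_errors.items()}
```

    A region responsible for most of the error therefore receives most of the new calibration points, while well-fit regions are sampled sparsely.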

  9. Quantitative MRI for hepatic fat fraction and T2* measurement in pediatric patients with non-alcoholic fatty liver disease.

    PubMed

    Deng, Jie; Fishbein, Mark H; Rigsby, Cynthia K; Zhang, Gang; Schoeneman, Samantha E; Donaldson, James S

    2014-11-01

    Non-alcoholic fatty liver disease (NAFLD) is the most common cause of chronic liver disease in children. The gold standard for diagnosis is liver biopsy. MRI is a non-invasive imaging method to provide quantitative measurement of hepatic fat content. The methodology is particularly appealing for the pediatric population because of its rapidity and radiation-free imaging techniques. To develop a multi-point Dixon MRI method with multi-interference models (multi-fat-peak modeling and bi-exponential T2* correction) for accurate hepatic fat fraction (FF) and T2* measurements in pediatric patients with NAFLD. A phantom study was first performed to validate the accuracy of the MRI fat fraction measurement by comparing it with the chemical fat composition of the ex-vivo pork liver-fat homogenate. The most accurate model determined from the phantom study was used for fat fraction and T2* measurements in 52 children and young adults referred from the pediatric hepatology clinic with suspected or identified NAFLD. Separate T2* values of water (T2*W) and fat (T2*F) components derived from the bi-exponential fitting were evaluated and plotted as a function of fat fraction. In ten patients undergoing liver biopsy, we compared histological analysis of liver fat fraction with MRI fat fraction. In the phantom study the 6-point Dixon with 5-fat-peak, bi-exponential T2* modeling demonstrated the best precision and accuracy in fat fraction measurements compared with other methods. This model was further calibrated with chemical fat fraction and applied in patients, where, as in the phantom study, conventional 2-point and 3-point Dixon methods underestimated fat fraction compared to the calibrated 6-point 5-fat-peak bi-exponential model (P < 0.0001). With increasing fat fraction, T2*W (27.9 ± 3.5 ms) decreased, whereas T2*F (20.3 ± 5.5 ms) increased; and T2*W and T2*F became increasingly more similar when fat fraction was higher than
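    The bi-exponential model in the study is a sum of two decaying terms, one for water and one for fat. Its mono-exponential building block, and the closed-form two-echo estimate of T2* for a single component, can be sketched as follows (echo times and signal values in the test are synthetic):

```python
import math

def t2star_signal(s0, t2star_ms, te_ms):
    """Mono-exponential decay S(TE) = S0 * exp(-TE / T2*). The study's
    bi-exponential model sums one such term for water and one for fat."""
    return s0 * math.exp(-te_ms / t2star_ms)

def t2star_from_two_echoes(s1, s2, te1_ms, te2_ms):
    """Closed-form T2* from two echo signals of a single decaying component,
    obtained by taking the log-ratio of the two measurements."""
    return (te2_ms - te1_ms) / math.log(s1 / s2)
```

    Separating the water and fat T2* values, as the study does, requires fitting the full two-term model across several echoes rather than this two-echo shortcut.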

  10. A machine learning approach to the accurate prediction of monitor units for a compact proton machine.

    PubMed

    Sun, Baozhou; Lam, Dao; Yang, Deshan; Grantham, Kevin; Zhang, Tiezhi; Mutic, Sasa; Zhao, Tianyu

    2018-05-01

    Clinical treatment planning systems for proton therapy currently do not calculate monitor units (MUs) in passive scatter proton therapy due to the complexity of the beam delivery systems. Physical phantom measurements are commonly employed to determine the field-specific output factors (OFs) but are often subject to limited machine time, measurement uncertainties and intensive labor. In this study, a machine learning-based approach was developed to predict output (cGy/MU) and derive MUs, incorporating the dependencies on gantry angle and field size for a single-room proton therapy system. The goal of this study was to develop a secondary check tool for OF measurements and eventually eliminate patient-specific OF measurements. The OFs of 1754 fields previously measured in a water phantom with calibrated ionization chambers and electrometers for patient-specific fields with various range and modulation width combinations for 23 options were included in this study. The training data sets for machine learning models in three different methods (Random Forest, XGBoost and Cubist) included 1431 (~81%) OFs. Ten-fold cross-validation was used to prevent "overfitting" and to validate each model. The remaining 323 (~19%) OFs were used to test the trained models. The difference between the measured and predicted values from machine learning models was analyzed. Model prediction accuracy was also compared with that of the semi-empirical model developed by Kooy (Phys. Med. Biol. 50, 2005). Additionally, gantry angle dependence of OFs was measured for three groups of options categorized on the selection of the second scatters. Field size dependence of OFs was investigated for the measurements with and without patient-specific apertures. All three machine learning methods showed higher accuracy than the semi-empirical model, which shows discrepancies of up to 7.7% for the treatment fields with full range and full modulation width. The Cubist-based solution
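    The ten-fold cross-validation used to guard against overfitting partitions the measured fields into disjoint train/test splits. A minimal index-level sketch, independent of any particular learner:

```python
def k_fold_indices(n, k=10):
    """Partition indices 0..n-1 into k disjoint folds (round-robin) and
    return k (train, test) index splits for cross-validation."""
    folds = [list(range(i, n, k)) for i in range(k)]
    splits = []
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        splits.append((train, test))
    return splits
```

    Each model is fitted k times, once per split, and the held-out fold errors are averaged; a large gap between training and held-out error signals overfitting.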

  11. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras

    PubMed Central

    Spinosa, Emanuele; Roberts, David A.

    2017-01-01

    Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show relevant imaging characteristics and also show the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on-board small-scale models such as those used for wind tunnel testing or measurements in confined spaces with limited optical access. PMID:28757553

  12. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras.

    PubMed

    Quinn, Mark Kenneth; Spinosa, Emanuele; Roberts, David A

    2017-07-25

    Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show relevant imaging characteristics and also show the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on-board small-scale models such as those used for wind tunnel testing or measurements in confined spaces with limited optical access.

  13. Construction of a Calibrated Probabilistic Classification Catalog: Application to 50k Variable Sources in the All-Sky Automated Survey

    NASA Astrophysics Data System (ADS)

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; Brink, Henrik; Crellin-Quick, Arien

    2012-12-01

    With growing data volumes from synoptic surveys, astronomers necessarily must become more abstracted from the discovery and introspection processes. Given the scarcity of follow-up resources, there is a particularly sharp onus on the frameworks that replace these human roles to provide accurate and well-calibrated probabilistic classification catalogs. Such catalogs inform the subsequent follow-up, allowing consumers to optimize the selection of specific sources for further study and permitting rigorous treatment of classification purities and efficiencies for population studies. Here, we describe a process to produce a probabilistic classification catalog of variability with machine learning from a multi-epoch photometric survey. In addition to producing accurate classifications, we show how to estimate calibrated class probabilities and motivate the importance of probability calibration. We also introduce a methodology for feature-based anomaly detection, which allows discovery of objects in the survey that do not fit within the predefined class taxonomy. Finally, we apply these methods to sources observed by the All-Sky Automated Survey (ASAS), and release the Machine-learned ASAS Classification Catalog (MACC), a 28 class probabilistic classification catalog of 50,124 ASAS sources in the ASAS Catalog of Variable Stars. We estimate that MACC achieves a sub-20% classification error rate and demonstrate that the class posterior probabilities are reasonably calibrated. MACC classifications compare favorably to the classifications of several previous domain-specific ASAS papers and to the ASAS Catalog of Variable Stars, which had classified only 24% of those sources into one of 12 science classes.
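    The claim that class posterior probabilities are "reasonably calibrated" is typically checked with a reliability curve: bin the predicted probabilities and compare each bin's mean prediction with the observed class frequency. A minimal sketch (the bin count is an arbitrary choice):

```python
def reliability_curve(probs, labels, n_bins=5):
    """Bin predicted probabilities and, for each non-empty bin, return
    (mean predicted probability, observed fraction of positives).
    For a well-calibrated classifier the two values track each other."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        bins[min(int(p * n_bins), n_bins - 1)].append((p, y))
    curve = []
    for b in bins:
        if b:
            curve.append((sum(p for p, _ in b) / len(b),
                          sum(y for _, y in b) / len(b)))
    return curve
```

    Systematic deviation of the curve from the diagonal indicates over- or under-confident probabilities, which calibration methods then correct.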

  14. CONSTRUCTION OF A CALIBRATED PROBABILISTIC CLASSIFICATION CATALOG: APPLICATION TO 50k VARIABLE SOURCES IN THE ALL-SKY AUTOMATED SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.

    2012-12-15

    With growing data volumes from synoptic surveys, astronomers necessarily must become more abstracted from the discovery and introspection processes. Given the scarcity of follow-up resources, there is a particularly sharp onus on the frameworks that replace these human roles to provide accurate and well-calibrated probabilistic classification catalogs. Such catalogs inform the subsequent follow-up, allowing consumers to optimize the selection of specific sources for further study and permitting rigorous treatment of classification purities and efficiencies for population studies. Here, we describe a process to produce a probabilistic classification catalog of variability with machine learning from a multi-epoch photometric survey. In addition to producing accurate classifications, we show how to estimate calibrated class probabilities and motivate the importance of probability calibration. We also introduce a methodology for feature-based anomaly detection, which allows discovery of objects in the survey that do not fit within the predefined class taxonomy. Finally, we apply these methods to sources observed by the All-Sky Automated Survey (ASAS), and release the Machine-learned ASAS Classification Catalog (MACC), a 28 class probabilistic classification catalog of 50,124 ASAS sources in the ASAS Catalog of Variable Stars. We estimate that MACC achieves a sub-20% classification error rate and demonstrate that the class posterior probabilities are reasonably calibrated. MACC classifications compare favorably to the classifications of several previous domain-specific ASAS papers and to the ASAS Catalog of Variable Stars, which had classified only 24% of those sources into one of 12 science classes.

  15. A Baseline Load Schedule for the Manual Calibration of a Force Balance

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Gisler, R.

    2013-01-01

    A baseline load schedule for the manual calibration of a force balance was developed that takes current capabilities at the NASA Ames Balance Calibration Laboratory into account. The load schedule consists of 18 load series with a total of 194 data points. It was designed to satisfy six requirements: (i) positive and negative loadings should be applied for each load component; (ii) at least three loadings should be applied between 0 % and 100 % load capacity; (iii) normal and side force loadings should be applied at the forward gage location, the aft gage location, and the balance moment center; (iv) the balance should be used in UP and DOWN orientation to get axial force loadings; (v) the constant normal and side force approaches should be used to get the rolling moment loadings; (vi) rolling moment loadings should be obtained for 0, 90, 180, and 270 degrees balance orientation. Three different approaches are also reviewed that may be used to independently estimate the natural zeros of the balance. These three approaches provide gage output differences that may be used to estimate the weight of both the metric and non-metric part of the balance. Manual calibration data of NASA's MK29A balance and machine calibration data of NASA's MC60D balance are used to illustrate and evaluate different aspects of the proposed baseline load schedule design.

  16. Development of a new calibration procedure and its experimental validation applied to a human motion capture system.

    PubMed

    Royo Sánchez, Ana Cristina; Aguilar Martín, Juan José; Santolaria Mazo, Jorge

    2014-12-01

    Motion capture systems are often used for checking and analyzing human motion in biomechanical applications. It is important, in this context, that the systems provide the best possible accuracy. Among existing capture systems, optical systems are those with the highest accuracy. In this paper, the development of a new calibration procedure for optical human motion capture systems is presented. The performance and effectiveness of that new calibration procedure are also checked by experimental validation. The new calibration procedure consists of two stages. In the first stage, initial estimators of intrinsic and extrinsic parameters are sought. The camera calibration method used in this stage is the one proposed by Tsai. These parameters are determined from the camera characteristics, the spatial position of the camera, and the center of the capture volume. In the second stage, a simultaneous nonlinear optimization of all parameters is performed to identify the optimal values, which minimize the objective function. The objective function, in this case, minimizes two errors. The first error is the distance error between two markers placed in a wand. The second error is the error of position and orientation of the retroreflective markers of a static calibration object. The real co-ordinates of the two objects are calibrated in a co-ordinate measuring machine (CMM). The OrthoBio system is used to validate the new calibration procedure. Results are 90% lower than those from the previous calibration software and broadly comparable with results from a similarly configured Vicon system.
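    The wand-distance part of such an objective function can be sketched in closed form: given pairs of reconstructed marker positions and the known wand length, a single least-squares scale correction follows directly. The data and the one-parameter model below are hypothetical simplifications; the actual procedure optimizes all intrinsic and extrinsic camera parameters simultaneously.

```python
import math

def marker_distance(p, q):
    """Euclidean distance between two 3-D marker positions."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def scale_correction(marker_pairs, wand_length):
    """Least-squares scale s minimizing sum_i (s*d_i - L)^2, where d_i are the
    reconstructed wand lengths and L is the true length: s = L*sum(d)/sum(d^2)."""
    d = [marker_distance(p, q) for p, q in marker_pairs]
    return wand_length * sum(d) / sum(di * di for di in d)

# Toy capture session: the system reconstructs a 500 mm wand 2% too long.
pairs = [((0.0, 0.0, 0.0), (510.0, 0.0, 0.0)),
         ((10.0, 20.0, 30.0), (10.0, 530.0, 30.0))]
s = scale_correction(pairs, 500.0)
print(round(s, 4))  # 0.9804
```

Multiplying reconstructed coordinates by `s` removes the systematic scale error; the full calibration additionally penalizes position and orientation errors of the static object measured on the CMM.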

  17. Machine Learning Techniques for Global Sensitivity Analysis in Climate Models

    NASA Astrophysics Data System (ADS)

    Safta, C.; Sargsyan, K.; Ricciuto, D. M.

    2017-12-01

    Climate model studies are challenged not only by the compute-intensive nature of these models but also by the high dimensionality of the input parameter space. In our previous work with the land model components (Sargsyan et al., 2014) we identified subsets of 10 to 20 parameters relevant for each QoI via Bayesian compressive sensing and variance-based decomposition. Nevertheless, the algorithms were challenged by the nonlinear input-output dependencies for some of the relevant QoIs. In this work we will explore a combination of techniques to extract relevant parameters for each QoI and subsequently construct surrogate models with quantified uncertainty, necessary for future developments, e.g. model calibration and prediction studies. In the first step, we will compare the skill of machine-learning models (e.g. neural networks, support vector machines) to identify the optimal number of classes in selected QoIs and construct robust multi-class classifiers that will partition the parameter space in regions with smooth input-output dependencies. These classifiers will be coupled with techniques aimed at building sparse and/or low-rank surrogate models tailored to each class. Specifically we will explore and compare sparse learning techniques with low-rank tensor decompositions. These models will be used to identify parameters that are important for each QoI. Surrogate accuracy requirements are higher for subsequent model calibration studies and we will ascertain the performance of this workflow for multi-site ALM simulation ensembles.

  18. Human-machine interface for a VR-based medical imaging environment

    NASA Astrophysics Data System (ADS)

    Krapichler, Christian; Haubner, Michael; Loesch, Andreas; Lang, Manfred K.; Englmeier, Karl-Hans

    1997-05-01

    Modern 3D scanning techniques like magnetic resonance imaging (MRI) or computed tomography (CT) produce high-quality images of the human anatomy. Virtual environments open new ways to display and to analyze those tomograms. Compared with today's inspection of 2D image sequences, physicians are empowered to recognize spatial coherencies and examine pathological regions more easily, so diagnosis and therapy planning can be accelerated. For that purpose a powerful human-machine interface is required, which offers a variety of tools and features to enable both exploration and manipulation of the 3D data. Man-machine communication has to be intuitive and efficacious to avoid long accustoming times and to enhance familiarity with and acceptance of the interface. Hence, interaction capabilities in virtual worlds should be comparable to those in the real world to allow utilization of our natural experiences. In this paper the integration of hand gestures and visual focus, two important aspects in modern human-computer interaction, into a medical imaging environment is shown. With the presented human-machine interface, including virtual reality displaying and interaction techniques, radiologists can be supported in their work. Further, virtual environments can even alleviate communication between specialists from different fields or in educational and training applications.

  19. A computerized MRI biomarker quantification scheme for a canine model of Duchenne muscular dystrophy

    PubMed Central

    Wang, Jiahui; Fan, Zheng; Vandenborne, Krista; Walter, Glenn; Shiloh-Malawsky, Yael; An, Hongyu; Kornegay, Joe N.; Styner, Martin A.

    2015-01-01

    Purpose Golden retriever muscular dystrophy (GRMD) is a widely used canine model of Duchenne muscular dystrophy (DMD). Recent studies have shown that magnetic resonance imaging (MRI) can be used to non-invasively detect consistent changes in both DMD and GRMD. In this paper, we propose a semi-automated system to quantify MRI biomarkers of GRMD. Methods Our system was applied to a database of 45 MRI scans from 8 normal and 10 GRMD dogs in a longitudinal natural history study. We first segmented six proximal pelvic limb muscles using two competing schemes: 1) standard, limited muscle range segmentation and 2) semi-automatic full muscle segmentation. We then performed pre-processing, including: intensity inhomogeneity correction, spatial registration of different image sequences, intensity calibration of T2-weighted (T2w) and T2-weighted fat suppressed (T2fs) images, and calculation of MRI biomarker maps. Finally, for each of the segmented muscles, we automatically measured MRI biomarkers of muscle volume and intensity statistics over MRI biomarker maps, and statistical image texture features. Results The muscle volume and the mean intensities in T2 value, fat, and water maps showed group differences between normal and GRMD dogs. For the statistical texture biomarkers, both the histogram and run-length matrix features showed obvious group differences between normal and GRMD dogs. The full muscle segmentation shows significantly less error and variability in the proposed biomarkers when compared to the standard, limited muscle range segmentation. Conclusion The experimental results demonstrated that this quantification tool can reliably quantify MRI biomarkers in GRMD dogs, suggesting that it would also be useful for quantifying disease progression and measuring therapeutic effect in DMD patients. PMID:23299128

  20. Quantitative performance evaluation of 124I PET/MRI lesion dosimetry in differentiated thyroid cancer

    NASA Astrophysics Data System (ADS)

    Wierts, R.; Jentzen, W.; Quick, H. H.; Wisselink, H. J.; Pooters, I. N. A.; Wildberger, J. E.; Herrmann, K.; Kemerink, G. J.; Backes, W. H.; Mottaghy, F. M.

    2018-01-01

    The aim was to investigate the quantitative performance of 124I PET/MRI for pre-therapy lesion dosimetry in differentiated thyroid cancer (DTC). Phantom measurements were performed on a PET/MRI system (Biograph mMR, Siemens Healthcare) using 124I and 18F. The PET calibration factor and the influence of radiofrequency coil attenuation were determined using a cylindrical phantom homogeneously filled with radioactivity. The calibration factor was 1.00  ±  0.02 for 18F and 0.88  ±  0.02 for 124I. Near the radiofrequency surface coil an underestimation of less than 5% in radioactivity concentration was observed. Soft-tissue sphere recovery coefficients were determined using the NEMA IEC body phantom. Recovery coefficients were systematically higher for 18F than for 124I. In addition, the six spheres of the phantom were segmented using a PET-based iterative segmentation algorithm. For all 124I measurements, the deviations in segmented lesion volume and mean radioactivity concentration relative to the actual values were smaller than 15% and 25%, respectively. The effect of MR-based attenuation correction (three- and four-segment µ-maps) on bone lesion quantification was assessed using radioactive spheres filled with a K2HPO4 solution mimicking bone lesions. The four-segment µ-map resulted in an underestimation of the imaged radioactivity concentration of up to 15%, whereas the three-segment µ-map resulted in an overestimation of up to 10%. For twenty lesions identified in six patients, a comparison of 124I PET/MRI to PET/CT was performed with respect to segmented lesion volume and radioactivity concentration. The interclass correlation coefficients showed excellent agreement in segmented lesion volume and radioactivity concentration (0.999 and 0.95, respectively). In conclusion, it is feasible that accurate quantitative 124I PET/MRI could be used to perform radioiodine pre-therapy lesion dosimetry in DTC.

  1. A SVM-based quantitative fMRI method for resting-state functional network detection.

    PubMed

    Song, Xiaomu; Chen, Nan-kuei

    2014-09-01

    Resting-state functional magnetic resonance imaging (fMRI) aims to measure baseline neuronal connectivity independent of specific functional tasks and to capture changes in the connectivity due to neurological diseases. Most existing network detection methods rely on a fixed threshold to identify functionally connected voxels under the resting state. Due to fMRI non-stationarity, the threshold cannot adapt to variation of data characteristics across sessions and subjects, and generates unreliable mapping results. In this study, a new method is presented for resting-state fMRI data analysis. Specifically, the resting-state network mapping is formulated as an outlier detection process that is implemented using one-class support vector machine (SVM). The results are refined by using a spatial-feature domain prototype selection method and two-class SVM reclassification. The final decision on each voxel is made by comparing its probabilities of functionally connected and unconnected instead of a threshold. Multiple features for resting-state analysis were extracted and examined using an SVM-based feature selection method, and the most representative features were identified. The proposed method was evaluated using synthetic and experimental fMRI data. A comparison study was also performed with independent component analysis (ICA) and correlation analysis. The experimental results show that the proposed method can provide comparable or better network detection performance than ICA and correlation analysis. The method is potentially applicable to various resting-state quantitative fMRI studies. Copyright © 2014 Elsevier Inc. All rights reserved.
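    The outlier-detection formulation can be illustrated with a deliberately simple stand-in for the one-class SVM: score each voxel's feature vector by its distance from the feature-space centroid and flag the most distant points. This is a conceptual sketch only, with made-up feature values, not the authors' method:

```python
import math

def outlier_scores(features):
    """Distance of each feature vector from the centroid of all vectors,
    used here as a stand-in for a one-class SVM decision function."""
    dim = len(features[0])
    centroid = [sum(f[k] for f in features) / len(features) for k in range(dim)]
    return [math.dist(f, centroid) for f in features]

# Nine "connected" voxels cluster near (1, 1); one unconnected voxel lies far away.
feats = [(1.0, 1.1), (0.9, 1.0), (1.1, 0.9), (1.0, 1.0), (0.8, 1.1),
         (1.2, 1.0), (1.0, 0.9), (0.9, 0.9), (1.1, 1.1), (5.0, 5.0)]
scores = outlier_scores(feats)
print(scores.index(max(scores)))  # 9 -> the far-away voxel
```

In the actual method the scores come from a trained one-class SVM and are refined by prototype selection and two-class reclassification rather than compared to a fixed threshold.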

  2. Adaptation of a haptic robot in a 3T fMRI.

    PubMed

    Snider, Joseph; Plank, Markus; May, Larry; Liu, Thomas T; Poizner, Howard

    2011-10-04

    Functional magnetic resonance imaging (fMRI) provides excellent functional brain imaging via the BOLD signal with advantages including non-ionizing radiation, millimeter spatial accuracy of anatomical and functional data, and nearly real-time analyses. Haptic robots provide precise measurement and control of position and force of a cursor in a reasonably confined space. Here we combine these two technologies to allow precision experiments involving motor control with haptic/tactile environment interaction such as reaching or grasping. The basic idea is to attach an 8-foot end effector supported in the center to the robot, allowing the subject to use the robot while shielding it and keeping it out of the most extreme part of the magnetic field from the fMRI machine (Figure 1). The Phantom Premium 3.0, 6DoF, high-force robot (SensAble Technologies, Inc.) is an excellent choice for providing force-feedback in virtual reality experiments, but it is inherently non-MR safe, introduces significant noise to the sensitive fMRI equipment, and its electric motors may be affected by the fMRI's strongly varying magnetic field. We have constructed a table and shielding system that allows the robot to be safely introduced into the fMRI environment and limits both the degradation of the fMRI signal by the electrically noisy motors and the degradation of the electric motor performance by the strongly varying magnetic field of the fMRI. With the shield, the signal to noise ratio (SNR: mean signal/noise standard deviation) of the fMRI goes from a baseline of ~380 to ~330, and ~250 without the shielding. The remaining noise appears to be uncorrelated and does not add artifacts to the fMRI of a test sphere (Figure 2). The long, stiff handle allows placement of the robot out of range of the most strongly varying parts of the magnetic field so there is no significant effect of the fMRI on the robot. 
The effect of the handle on the robot's kinematics is minimal since it is lightweight (~2
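    The SNR figure quoted above (mean signal divided by the noise standard deviation) is straightforward to compute; a toy sketch with made-up voxel intensities:

```python
from statistics import mean, stdev

def snr(signal_voxels, background_voxels):
    """Mean signal intensity divided by the standard deviation of
    background (noise) voxels -- the SNR definition quoted above."""
    return mean(signal_voxels) / stdev(background_voxels)

# Hypothetical intensities: bright signal region, near-zero background.
signal = [380.0, 385.0, 378.0, 382.0]
noise  = [0.2, -0.5, 1.1, -0.8, 0.4, -0.4]
print(round(snr(signal, noise), 1))
```

Comparing such SNR values across shielded and unshielded acquisitions is exactly the comparison reported in the abstract (~380 baseline vs. ~330 shielded vs. ~250 unshielded).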

  3. Can machines think? Interaction and perspective taking with robots investigated via fMRI.

    PubMed

    Krach, Sören; Hegel, Frank; Wrede, Britta; Sagerer, Gerhard; Binkofski, Ferdinand; Kircher, Tilo

    2008-07-09

    When our PC goes on strike again we tend to curse it as if it were a human being. Why and under which circumstances do we attribute human-like properties to machines? Although humans increasingly interact directly with machines it remains unclear whether humans implicitly attribute intentions to them and, if so, whether such interactions resemble human-human interactions on a neural level. In social cognitive neuroscience the ability to attribute intentions and desires to others is being referred to as having a Theory of Mind (ToM). With the present study we investigated whether an increase of human-likeness of interaction partners modulates the participants' ToM associated cortical activity. By means of functional magnetic resonance imaging (subjects n = 20) we investigated cortical activity modulation during highly interactive human-robot game. Increasing degrees of human-likeness for the game partner were introduced by means of a computer partner, a functional robot, an anthropomorphic robot and a human partner. The classical iterated prisoner's dilemma game was applied as experimental task which allowed for an implicit detection of ToM associated cortical activity. During the experiment participants always played against a random sequence unknowingly to them. Irrespective of the surmised interaction partners' responses participants indicated having experienced more fun and competition in the interaction with increasing human-like features of their partners. Parametric modulation of the functional imaging data revealed a highly significant linear increase of cortical activity in the medial frontal cortex as well as in the right temporo-parietal junction in correspondence with the increase of human-likeness of the interaction partner (computer

  4. Altered neural correlates of reward and loss processing during simulated slot-machine fMRI in pathological gambling and cocaine dependence☆

    PubMed Central

    Worhunsky, Patrick D.; Malison, Robert T.; Rogers, Robert D.; Potenza, Marc N.

    2014-01-01

    Background Individuals with gambling or substance-use disorders exhibit similar functional alterations in reward circuitry suggestive of a shared underlying vulnerability in addictive disorders. Additional research into common and unique alterations in reward-processing in substance-related and non-substance-related addictions may identify neural factors that could be targeted in treatment development for these disorders. Methods To investigate contextual reward-processing in pathological gambling, a slot-machine fMRI task was performed by three groups (with pathological gambling, cocaine dependence and neither disorder; N=24 each) to determine the extent to which two groups with addictions (non-substance-related and substance-related) showed similarities and differences with respect to each other and a non-addicted group during anticipatory periods and following the delivery of winning, losing and ‘near-miss’ outcomes. Results Individuals with pathological gambling or cocaine dependence compared to those with neither disorder exhibited exaggerated anticipatory activity in mesolimbic and ventrocortical regions, with pathological-gambling participants displaying greater positive possible-reward anticipation and cocaine-dependent participants displaying more negative certain-loss anticipation. Neither clinical sample exhibited medial frontal or striatal responses that were observed following near-miss outcomes in healthy comparison participants. Conclusions Alterations in anticipatory processing may be sensitive to the valence of rewards and content-disorder-specific. Common and unique findings in pathological gambling and cocaine dependence with respect to anticipatory reward and near-miss loss processing suggest shared and unique elements that might be targeted through behavioral or pharmacological interventions in the treatment of addictions. PMID:25448081

  5. MODIS calibration

    NASA Technical Reports Server (NTRS)

    Barker, John L.

    1992-01-01

    The MODIS/MCST (MODIS Characterization Support Team) Status Report contains an outline of the calibration strategy, handbook, and plan. It also contains an outline of the MODIS/MCST action item from the 4th EOS Cal/Val Meeting, for which the objective was to locate potential MODIS calibration targets on the Earth's surface that are radiometrically homogeneous on a scale of 3 by 3 km. As appendices, draft copies of the handbook table of contents, calibration plan table of contents, and detailed agenda for the MODIS calibration working group are included.

  6. Voxel-wise prostate cell density prediction using multiparametric magnetic resonance imaging and machine learning.

    PubMed

    Sun, Yu; Reynolds, Hayley M; Wraith, Darren; Williams, Scott; Finnegan, Mary E; Mitchell, Catherine; Murphy, Declan; Haworth, Annette

    2018-04-26

    There are currently no methods to estimate cell density in the prostate. This study aimed to develop predictive models to estimate prostate cell density from multiparametric magnetic resonance imaging (mpMRI) data at a voxel level using machine learning techniques. In vivo mpMRI data were collected from 30 patients before radical prostatectomy. Sequences included T2-weighted imaging, diffusion-weighted imaging and dynamic contrast-enhanced imaging. Ground truth cell density maps were computed from histology and co-registered with mpMRI. Feature extraction and selection were performed on mpMRI data. Final models were fitted using three regression algorithms including multivariate adaptive regression spline (MARS), polynomial regression (PR) and generalised additive model (GAM). Model parameters were optimised using leave-one-out cross-validation on the training data and model performance was evaluated on test data using root mean square error (RMSE) measurements. Predictive models to estimate voxel-wise prostate cell density were successfully trained and tested using the three algorithms. The best model (GAM) achieved a RMSE of 1.06 (± 0.06) × 10³ cells/mm² and a relative deviation of 13.3 ± 0.8%. Prostate cell density can be quantitatively estimated non-invasively from mpMRI data using high-quality co-registered data at a voxel level. These cell density predictions could be used for tissue classification, treatment response evaluation and personalised radiotherapy.
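    The evaluation loop described above (leave-one-out cross-validation scored by RMSE) can be sketched with ordinary least squares standing in for the GAM/MARS/PR models; the data below are synthetic, not the mpMRI features:

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x (stand-in for the GAM/MARS/PR models)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def loocv_rmse(xs, ys):
    """Leave-one-out CV: refit on n-1 points, predict the held-out point,
    then report the root mean square of the held-out errors."""
    errs = []
    for i in range(len(xs)):
        a, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        errs.append((a + b * xs[i] - ys[i]) ** 2)
    return math.sqrt(sum(errs) / len(errs))

# Synthetic feature vs. cell density: exactly linear, so LOOCV RMSE is ~0.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.0, 6.0, 8.0, 10.0]
print(round(loocv_rmse(xs, ys), 9))  # 0.0
```

The study's reported RMSE of 1.06 × 10³ cells/mm² is the same statistic computed on held-out test data with the richer regression models.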

  7. External validation of the MRI-DRAGON score: early prediction of stroke outcome after intravenous thrombolysis.

    PubMed

    Turc, Guillaume; Aguettaz, Pierre; Ponchelle-Dequatre, Nelly; Hénon, Hilde; Naggara, Olivier; Leclerc, Xavier; Cordonnier, Charlotte; Leys, Didier; Mas, Jean-Louis; Oppenheim, Catherine

    2014-01-01

    The aim of our study was to validate in an independent cohort the MRI-DRAGON score, an adaptation of the (CT-) DRAGON score to predict 3-month outcome in acute ischemic stroke patients undergoing MRI before intravenous thrombolysis (IV-tPA). We reviewed consecutive (2009-2013) anterior circulation stroke patients treated within 4.5 hours by IV-tPA in the Lille stroke unit (France), where MRI is the first-line pretherapeutic work-up. We assessed the discrimination and calibration of the MRI-DRAGON score to predict poor 3-month outcome, defined as a modified Rankin Scale score >2, using the c-statistic and the Hosmer-Lemeshow test, respectively. We included 230 patients (mean ± SD age 70.4 ± 16.0 years, median [IQR] baseline NIHSS 8 [5-14]; poor outcome in 78 (34%) patients). The c-statistic was 0.81 (95%CI 0.75-0.87), and the Hosmer-Lemeshow test was not significant (p = 0.54). The MRI-DRAGON score showed good prognostic performance in the external validation cohort. It could therefore be used to inform the patient's relatives about long-term prognosis and help to identify poor responders to IV-tPA alone, who may be candidates for additional therapeutic strategies, if they are otherwise eligible for such procedures based on the institutional criteria.
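    The c-statistic used above is the probability that a randomly chosen patient with a poor outcome receives a higher risk score than one without; a minimal pairwise implementation on hypothetical scores (not the study data):

```python
def c_statistic(scores, outcomes):
    """Concordance statistic (equivalent to ROC AUC): the fraction of
    (event, non-event) pairs ranked correctly; ties count as 0.5."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    concordant = sum(1.0 if p > n else 0.5 if p == n else 0.0
                     for p in pos for n in neg)
    return concordant / (len(pos) * len(neg))

# Hypothetical risk scores (higher = worse predicted outcome) and observed outcomes.
scores   = [9, 7, 6, 3, 2, 1]
outcomes = [1, 1, 0, 1, 0, 0]
print(round(c_statistic(scores, outcomes), 2))  # 0.89
```

A value of 0.5 means no discrimination and 1.0 perfect discrimination, which is how the reported c-statistic of 0.81 should be read.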

  8. Arm MRI scan

    MedlinePlus

    ... MRI and often available in the emergency room. Alternative Names MRI - arm; Wrist MRI; MRI - wrist; Elbow ... any medical emergency or for the diagnosis or treatment of any medical condition. A licensed physician should ...

  9. A TEX86 surface sediment database and extended Bayesian calibration

    NASA Astrophysics Data System (ADS)

    Tierney, Jessica E.; Tingley, Martin P.

    2015-06-01

    Quantitative estimates of past temperature changes are a cornerstone of paleoclimatology. For a number of marine sediment-based proxies, the accuracy and precision of past temperature reconstructions depends on a spatial calibration of modern surface sediment measurements to overlying water temperatures. Here, we present a database of 1095 surface sediment measurements of TEX86, a temperature proxy based on the relative cyclization of marine archaeal glycerol dialkyl glycerol tetraether (GDGT) lipids. The dataset is archived in a machine-readable format with geospatial information, fractional abundances of lipids (if available), and metadata. We use this new database to update surface and subsurface temperature calibration models for TEX86 and demonstrate the applicability of the TEX86 proxy to past temperature prediction. The TEX86 database confirms that surface sediment GDGT distribution has a strong relationship to temperature, which accounts for over 70% of the variance in the data. Future efforts, made possible by the data presented here, will seek to identify variables with secondary relationships to GDGT distributions, such as archaeal community composition.

  10. A new polarimetric active radar calibrator and calibration technique

    NASA Astrophysics Data System (ADS)

    Tang, Jianguo; Xu, Xiaojian

    2015-10-01

    Polarimetric active radar calibrator (PARC) is one of the most important calibrators with high radar cross section (RCS) for polarimetry measurement. In this paper, a new double-antenna polarimetric active radar calibrator (DPARC) is proposed, which consists of two rotatable antennas with wideband electromagnetic polarization filters (EMPF) to achieve lower cross-polarization for transmission and reception. With two antennas which are rotatable around the radar line of sight (LOS), the DPARC provides a variety of standard polarimetric scattering matrices (PSM) through the rotation combination of receiving and transmitting polarization, which are useful for polarimetric calibration in different applications. In addition, a technique based on Fourier analysis is proposed for calibration processing. Numerical simulation results are presented to demonstrate the superior performance of the proposed DPARC and processing technique.

  11. Vision 20/20: Magnetic resonance imaging-guided attenuation correction in PET/MRI: Challenges, solutions, and opportunities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mehranian, Abolfazl; Arabi, Hossein; Zaidi, Habib, E-mail: habib.zaidi@hcuge.ch

    Attenuation correction is an essential component of the long chain of data correction techniques required to achieve the full potential of quantitative positron emission tomography (PET) imaging. The development of combined PET/magnetic resonance imaging (MRI) systems mandated the widespread interest in developing novel strategies for deriving accurate attenuation maps with the aim to improve the quantitative accuracy of these emerging hybrid imaging systems. The attenuation map in PET/MRI should ideally be derived from anatomical MR images; however, MRI intensities reflect proton density and relaxation time properties of biological tissues rather than their electron density and photon attenuation properties. Therefore, in contrast to PET/computed tomography, there is a lack of standardized global mapping between the intensities of MRI signal and linear attenuation coefficients at 511 keV. Moreover, in standard MRI sequences, bones and lung tissues do not produce measurable signals owing to their low proton density and short transverse relaxation times. MR images are also inevitably subject to artifacts that degrade their quality, thus compromising their applicability for the task of attenuation correction in PET/MRI. MRI-guided attenuation correction strategies can be classified in three broad categories: (i) segmentation-based approaches, (ii) atlas-registration and machine learning methods, and (iii) emission/transmission-based approaches. This paper summarizes past and current state-of-the-art developments and latest advances in PET/MRI attenuation correction. The advantages and drawbacks of each approach for addressing the challenges of MR-based attenuation correction are comprehensively described. The opportunities brought by both MRI and PET imaging modalities for deriving accurate attenuation maps and improving PET quantification will be elaborated. Future prospects and potential clinical applications of these techniques and their integration in

  12. Vision 20/20: Magnetic resonance imaging-guided attenuation correction in PET/MRI: Challenges, solutions, and opportunities.

    PubMed

    Mehranian, Abolfazl; Arabi, Hossein; Zaidi, Habib

    2016-03-01

    Attenuation correction is an essential component of the long chain of data correction techniques required to achieve the full potential of quantitative positron emission tomography (PET) imaging. The development of combined PET/magnetic resonance imaging (MRI) systems mandated the widespread interest in developing novel strategies for deriving accurate attenuation maps with the aim to improve the quantitative accuracy of these emerging hybrid imaging systems. The attenuation map in PET/MRI should ideally be derived from anatomical MR images; however, MRI intensities reflect proton density and relaxation time properties of biological tissues rather than their electron density and photon attenuation properties. Therefore, in contrast to PET/computed tomography, there is a lack of standardized global mapping between the intensities of MRI signal and linear attenuation coefficients at 511 keV. Moreover, in standard MRI sequences, bones and lung tissues do not produce measurable signals owing to their low proton density and short transverse relaxation times. MR images are also inevitably subject to artifacts that degrade their quality, thus compromising their applicability for the task of attenuation correction in PET/MRI. MRI-guided attenuation correction strategies can be classified in three broad categories: (i) segmentation-based approaches, (ii) atlas-registration and machine learning methods, and (iii) emission/transmission-based approaches. This paper summarizes past and current state-of-the-art developments and latest advances in PET/MRI attenuation correction. The advantages and drawbacks of each approach for addressing the challenges of MR-based attenuation correction are comprehensively described. The opportunities brought by both MRI and PET imaging modalities for deriving accurate attenuation maps and improving PET quantification will be elaborated. 
Future prospects and potential clinical applications of these techniques and their integration in commercial

  13. Decoding of visual activity patterns from fMRI responses using multivariate pattern analyses and convolutional neural network.

    PubMed

    Zafar, Raheel; Kamel, Nidal; Naufal, Mohamad; Malik, Aamir Saeed; Dass, Sarat C; Ahmad, Rana Fayyaz; Abdullah, Jafri M; Reza, Faruque

    2017-01-01

    Decoding of human brain activity has always been a primary goal in neuroscience, especially with functional magnetic resonance imaging (fMRI) data. In recent years, the convolutional neural network (CNN) has become a popular method for feature extraction due to its higher accuracy; however, it requires substantial computation and training data. In this study, an algorithm is developed using multivariate pattern analysis (MVPA) and a modified CNN to decode the behavior of the brain for different images with a limited data set. Selection of significant features is an important part of fMRI data analysis, since it reduces the computational burden and improves the prediction performance; significant features are selected using the t-test. MVPA uses machine learning algorithms to classify different brain states and helps in prediction during the task. A general linear model (GLM) is used to find the unknown parameters of every individual voxel and the classification is done using a multi-class support vector machine (SVM). The proposed MVPA-CNN based algorithm is compared with a region of interest (ROI) based method and MVPA based estimated values. The proposed method showed better overall accuracy (68.6%) compared to ROI (61.88%) and estimation values (64.17%).
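    The t-test feature-selection step can be sketched as ranking features by the absolute two-sample t statistic between conditions and keeping the top k. The data and the Welch (unequal-variance) variant used here are illustrative assumptions, not the paper's exact configuration:

```python
from statistics import mean, variance

def t_statistic(a, b):
    """Welch two-sample t statistic for one feature across two conditions."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

def select_features(class_a, class_b, k):
    """Rank features (columns) by |t| between the two conditions, keep top k."""
    n_feat = len(class_a[0])
    ranked = sorted(
        ((abs(t_statistic([row[j] for row in class_a],
                          [row[j] for row in class_b])), j)
         for j in range(n_feat)),
        reverse=True)
    return [j for _, j in ranked[:k]]

# Toy voxel features: feature 0 separates the two conditions, feature 1 is noise.
cond_a = [(5.0, 0.1), (5.2, -0.2), (4.9, 0.3), (5.1, 0.0)]
cond_b = [(1.0, 0.2), (1.1, -0.1), (0.9, 0.1), (1.0, -0.3)]
print(select_features(cond_a, cond_b, 1))  # [0]
```

Only the selected features would then be passed on to the SVM classifier, which is the computational saving the abstract refers to.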

  14. Trend analysis of Terra/ASTER/VNIR radiometric calibration coefficient through onboard and vicarious calibrations as well as cross calibration with MODIS

    NASA Astrophysics Data System (ADS)

    Arai, Kohei

    2012-07-01

    Radiometric Calibration Coefficients (RCC) derived from more than 11 years of onboard and vicarious calibrations are compared, together with a cross comparison against the well-calibrated MODIS RCC. Fault Tree Analysis (FTA) is also conducted to clarify possible causes of the RCC degradation, together with a sensitivity analysis for vicarious calibration. One suspected cause of the RCC degradation is identified through the FTA. Test site dependency of the vicarious calibration is quite obvious, because the vicarious calibration RCC is sensitive to surface reflectance measurement accuracy rather than to atmospheric optical depth. The results from cross calibration with MODIS confirm the significant sensitivity of vicarious calibration to the surface reflectance measurements.

  15. Can Machines Think? Interaction and Perspective Taking with Robots Investigated via fMRI

    PubMed Central

    Krach, Sören; Hegel, Frank; Wrede, Britta; Sagerer, Gerhard; Binkofski, Ferdinand; Kircher, Tilo

    2008-01-01

    Background When our PC goes on strike again we tend to curse it as if it were a human being. Why and under which circumstances do we attribute human-like properties to machines? Although humans increasingly interact directly with machines it remains unclear whether humans implicitly attribute intentions to them and, if so, whether such interactions resemble human-human interactions on a neural level. In social cognitive neuroscience the ability to attribute intentions and desires to others is being referred to as having a Theory of Mind (ToM). With the present study we investigated whether an increase of human-likeness of interaction partners modulates the participants' ToM associated cortical activity. Methodology/Principal Findings By means of functional magnetic resonance imaging (subjects n = 20) we investigated cortical activity modulation during highly interactive human-robot game. Increasing degrees of human-likeness for the game partner were introduced by means of a computer partner, a functional robot, an anthropomorphic robot and a human partner. The classical iterated prisoner's dilemma game was applied as experimental task which allowed for an implicit detection of ToM associated cortical activity. During the experiment participants always played against a random sequence unknowingly to them. Irrespective of the surmised interaction partners' responses participants indicated having experienced more fun and competition in the interaction with increasing human-like features of their partners. Parametric modulation of the functional imaging data revealed a highly significant linear increase of cortical activity in the medial frontal cortex as well as in the right temporo-parietal junction in correspondence with the increase of human-likeness of the interaction partner (computer

  16. Multivariate detrending of fMRI signal drifts for real-time multiclass pattern classification.

    PubMed

    Lee, Dongha; Jang, Changwon; Park, Hae-Jeong

    2015-03-01

    Signal drift in functional magnetic resonance imaging (fMRI) is an unavoidable artifact that limits classification performance in multi-voxel pattern analysis of fMRI. As conventional methods to reduce signal drift, global demeaning or proportional scaling disregards regional variations of drift, whereas voxel-wise univariate detrending is too sensitive to noisy fluctuations. To overcome these drawbacks, we propose a multivariate real-time detrending method for multiclass classification that involves spatial demeaning at each scan and the recursive detrending of drifts in the classifier outputs driven by a multiclass linear support vector machine. Experiments using binary and multiclass data showed that the linear trend estimation of the classifier output drift for each class (a weighted sum of drifts in the class-specific voxels) was more robust against voxel-wise artifacts that lead to inconsistent spatial patterns and the effect of online processing than voxel-wise detrending. The classification performance of the proposed method was significantly better, especially for multiclass data, than that of voxel-wise linear detrending, global demeaning, and classifier output detrending without demeaning. We concluded that the multivariate approach using classifier output detrending of fMRI signals with spatial demeaning preserves spatial patterns, is less sensitive than conventional methods to sample size, and increases classification performance, which is a useful feature for real-time fMRI classification. Copyright © 2014 Elsevier Inc. All rights reserved.
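
    The core detrending step can be illustrated as a minimal numpy sketch: fit and subtract a linear trend from a classifier's decision-value time series. The data here are synthetic, and the spatial demeaning and recursive online update described in the abstract are omitted.

```python
import numpy as np

def detrend_classifier_output(scores):
    """Remove a linear drift from a time series of classifier
    decision values (one score per fMRI scan)."""
    t = np.arange(len(scores))
    slope, intercept = np.polyfit(t, scores, 1)  # least-squares line
    return scores - (slope * t + intercept)

# Synthetic example: a drifting SVM decision value.
rng = np.random.default_rng(0)
t = np.arange(200)
drift = 0.01 * t                      # slow scanner drift
signal = np.sin(2 * np.pi * t / 40)   # task-related modulation
scores = signal + drift + 0.05 * rng.standard_normal(200)

corrected = detrend_classifier_output(scores)
# After detrending, the residual linear trend is near zero.
residual_slope = np.polyfit(t, corrected, 1)[0]
print(abs(residual_slope) < 1e-3)
```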

  17. Breast cancer Ki67 expression preoperative discrimination by DCE-MRI radiomics features

    NASA Astrophysics Data System (ADS)

    Ma, Wenjuan; Ji, Yu; Qin, Zhuanping; Guo, Xinpeng; Jian, Xiqi; Liu, Peifang

    2018-02-01

To investigate whether quantitative radiomics features extracted from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) are associated with Ki67 expression of breast cancer. In this institutional review board approved retrospective study, we collected 377 cases of Chinese women diagnosed with invasive breast cancer in 2015. This cohort included 53 cases with low Ki67 expression (Ki67 proliferation index less than 14%) and 324 cases with high Ki67 expression (Ki67 proliferation index more than 14%). A binary classification of low- vs. high-Ki67 expression was performed. A set of 52 quantitative radiomics features, including morphological, gray-scale statistical, and texture features, were extracted from the segmented lesion area. Three common machine learning classification methods, namely Naive Bayes, k-Nearest Neighbor and support vector machine with Gaussian kernel, were employed for the classification, and the least absolute shrinkage and selection operator (LASSO) method was used to select the most predictive feature set for the classifiers. Classification performance was evaluated by the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity and specificity. The model using the Naive Bayes classification method achieved the best performance of the three, yielding an AUC of 0.773, accuracy of 0.757, sensitivity of 0.777 and specificity of 0.769. Our study showed that quantitative radiomics imaging features of breast tumors extracted from DCE-MRI are associated with breast cancer Ki67 expression. Future larger studies are needed in order to further evaluate the findings.
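
    The Gaussian Naive Bayes step can be sketched from scratch in numpy on synthetic two-class feature clusters standing in for low- vs. high-Ki67 lesions. The data are purely illustrative; the LASSO feature selection and the actual radiomics features are omitted.

```python
import numpy as np

def gaussian_nb_fit(X, y):
    """Fit per-class Gaussian parameters (mean, variance, prior)."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[int(c)] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9,
                          len(Xc) / len(X))
    return params

def gaussian_nb_predict(params, X):
    """Predict the class with the highest log-posterior."""
    classes = sorted(params)
    scores = []
    for c in classes:
        mu, var, prior = params[c]
        log_lik = -0.5 * np.sum(np.log(2 * np.pi * var)
                                + (X - mu) ** 2 / var, axis=1)
        scores.append(log_lik + np.log(prior))
    return np.array(classes)[np.argmax(scores, axis=0)]

# Two synthetic "radiomics feature" clusters (illustrative data only).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (50, 4)),
               rng.normal(3.0, 1.0, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

params = gaussian_nb_fit(X, y)
acc = (gaussian_nb_predict(params, X) == y).mean()
print(acc > 0.95)
```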

  18. The radiation metrology network related to the field of mammography: implementation and uncertainty analysis of the calibration system

    NASA Astrophysics Data System (ADS)

    Peixoto, J. G. P.; de Almeida, C. E.

    2001-09-01

    It is recognized by the international guidelines that it is necessary to offer calibration services for mammography beams in order to improve the quality of clinical diagnosis. Major efforts have been made by several laboratories in order to establish an appropriate and traceable calibration infrastructure and to provide the basis for a quality control programme in mammography. The contribution of the radiation metrology network to the users of mammography is reviewed in this work. Also steps required for the implementation of a mammography calibration system using a constant potential x-ray and a clinical mammography x-ray machine are presented. The various qualities of mammography radiation discussed in this work are in accordance with the IEC 61674 and the AAPM recommendations. They are at present available at several primary standard dosimetry laboratories (PSDLs), namely the PTB, NIST and BEV and a few secondary standard dosimetry laboratories (SSDLs) such as at the University of Wisconsin and at the IAEA's SSDL. We discuss the uncertainties involved in all steps of the calibration chain in accord with the ISO recommendations.
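
    For uncorrelated inputs, the ISO-style combination of standard uncertainties along a calibration chain reduces to a root-sum-of-squares; a tiny sketch follows. The component values are illustrative placeholders, not the laboratory's actual uncertainty budget.

```python
import math

def combined_standard_uncertainty(components):
    """Root-sum-of-squares combination of independent standard
    uncertainties (uncorrelated inputs)."""
    return math.sqrt(sum(u ** 2 for u in components))

# Illustrative (not measured) relative standard uncertainties, in %:
# primary standard, transfer chamber, positioning, beam monitoring.
components = [0.5, 0.3, 0.2, 0.1]
u_c = combined_standard_uncertainty(components)
U = 2.0 * u_c  # expanded uncertainty, coverage factor k = 2
print(round(u_c, 3), round(U, 3))
```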

  19. Enhanced Quality Control in Pharmaceutical Applications by Combining Raman Spectroscopy and Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Martinez, J. C.; Guzmán-Sepúlveda, J. R.; Bolañoz Evia, G. R.; Córdova, T.; Guzmán-Cabrera, R.

    2018-06-01

    In this work, we applied machine learning techniques to Raman spectra for the characterization and classification of manufactured pharmaceutical products. Our measurements were taken with commercial equipment, for accurate assessment of variations with respect to one calibrated control sample. Unlike the typical use of Raman spectroscopy in pharmaceutical applications, in our approach the principal components of the Raman spectrum are used concurrently as attributes in machine learning algorithms. This permits an efficient comparison and classification of the spectra measured from the samples under study. This also allows for accurate quality control as all relevant spectral components are considered simultaneously. We demonstrate our approach with respect to the specific case of acetaminophen, which is one of the most widely used analgesics in the market. In the experiments, commercial samples from thirteen different laboratories were analyzed and compared against a control sample. The raw data were analyzed based on an arithmetic difference between the nominal active substance and the measured values in each commercial sample. The principal component analysis was applied to the data for quantitative verification (i.e., without considering the actual concentration of the active substance) of the difference in the calibrated sample. Our results show that by following this approach adulterations in pharmaceutical compositions can be clearly identified and accurately quantified.
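
    The step that turns spectra into classifier attributes, projection onto principal components, can be sketched with an SVD on synthetic spectra (illustrative Gaussian peaks, not the study's acetaminophen measurements):

```python
import numpy as np

def pca_scores(X, n_components):
    """Project spectra onto their leading principal components
    (computed via SVD of the mean-centered data matrix)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Synthetic "spectra": two groups differing along one spectral band,
# standing in for control vs. off-formulation samples.
rng = np.random.default_rng(2)
base = np.exp(-((np.arange(100) - 50) / 10.0) ** 2)   # common peak
group_a = base + 0.01 * rng.standard_normal((20, 100))
group_b = (base + 0.2 * np.roll(base, 15)
           + 0.01 * rng.standard_normal((20, 100)))
X = np.vstack([group_a, group_b])

scores = pca_scores(X, 2)
# The first PC separates the two groups (group means have opposite sign).
print(scores[:20, 0].mean() * scores[20:, 0].mean() < 0)
```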

  20. Automatic Determination of the Need for Intravenous Contrast in Musculoskeletal MRI Examinations Using IBM Watson's Natural Language Processing Algorithm.

    PubMed

    Trivedi, Hari; Mesterhazy, Joseph; Laguna, Benjamin; Vu, Thienkhai; Sohn, Jae Ho

    2018-04-01

Magnetic resonance imaging (MRI) protocoling can be time- and resource-intensive, and protocols can often be suboptimal, depending on the expertise or preferences of the protocoling radiologist. Providing a best-practice recommendation for an MRI protocol has the potential to improve efficiency and decrease the likelihood of a suboptimal or erroneous study. The goal of this study was to develop and validate a machine learning-based natural language classifier that can automatically assign the use of intravenous contrast for musculoskeletal MRI protocols based upon the free-text clinical indication of the study, thereby improving the efficiency of the protocoling radiologist and potentially decreasing errors. We utilized a deep learning-based natural language classification system from IBM Watson, a question-answering supercomputer that gained fame after challenging the best human players on Jeopardy! in 2011. We compared this solution to a series of traditional machine learning-based natural language processing techniques that utilize a term-document frequency matrix. Each classifier was trained with 1240 MRI protocols plus their respective clinical indications and validated with a test set of 280. Ground truth of contrast assignment was obtained from the clinical record. For evaluation of inter-reader agreement, a blinded second reader radiologist analyzed all cases and determined contrast assignment based on only the free-text clinical indication. In the test set, Watson demonstrated overall accuracy of 83.2% when compared to the original protocol. This was similar to the overall accuracy of 80.2% achieved by an ensemble of eight traditional machine learning algorithms based on a term-document matrix. When compared to the second reader's contrast assignment, Watson achieved 88.6% agreement. When evaluating only the subset of cases where the original protocol and second reader were concordant (n = 251), agreement climbed further to 90.0%. The classifier was
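
    A term-document frequency matrix of the kind used by the traditional baselines can be sketched as follows. The clinical indications and the nearest-centroid classifier here are hypothetical stand-ins, not the study's data set or its eight-algorithm ensemble.

```python
import numpy as np

def term_document_matrix(docs, vocab):
    """Rows = documents, columns = term counts."""
    index = {t: j for j, t in enumerate(vocab)}
    M = np.zeros((len(docs), len(vocab)))
    for i, doc in enumerate(docs):
        for tok in doc.lower().split():
            if tok in index:
                M[i, index[tok]] += 1
    return M

# Hypothetical indications (not from the study's data set);
# label 1 = contrast, label 0 = no contrast.
train = [
    ("rule out osteomyelitis possible infection", 1),
    ("evaluate soft tissue mass", 1),
    ("acl tear suspected after injury", 0),
    ("chronic knee pain meniscal tear", 0),
]
vocab = sorted({t for doc, _ in train for t in doc.split()})
X = term_document_matrix([d for d, _ in train], vocab)
y = np.array([label for _, label in train])

# Nearest-centroid classification of a new indication.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
query = term_document_matrix(["suspected infection of soft tissue"], vocab)[0]
pred = int(np.argmin(np.linalg.norm(centroids - query, axis=1)))
print(pred)  # 1 -> contrast recommended
```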

  1. Decoding Lifespan Changes of the Human Brain Using Resting-State Functional Connectivity MRI

    PubMed Central

    Wang, Lubin; Su, Longfei; Shen, Hui; Hu, Dewen

    2012-01-01

The development of large-scale functional brain networks is a complex, lifelong process that can be investigated using resting-state functional connectivity MRI (rs-fcMRI). In this study, we aimed to decode the developmental dynamics of the whole-brain functional network in seven decades (8–79 years) of the human lifespan. We first used parametric curve fitting to examine linear and nonlinear age effects on the resting human brain, and then combined manifold learning and support vector machine methods to predict individuals' “brain ages” from rs-fcMRI data. We found that age-related changes in interregional functional connectivity exhibited spatially and temporally specific patterns. During brain development from childhood to senescence, functional connections tended to linearly increase in the emotion system and decrease in the sensorimotor system; while quadratic trajectories were observed in functional connections related to higher-order cognitive functions. The complex patterns of age effect on the whole-brain functional network could be effectively represented by a low-dimensional, nonlinear manifold embedded in the functional connectivity space, which uncovered the inherent structure of brain maturation and aging. Regression of manifold coordinates with age further showed that the manifold representation extracted sufficient information from rs-fcMRI data to make predictions about individual brains' functional development levels. Our study not only gives insights into the neural substrates that underlie behavioral and cognitive changes over age, but also provides a possible way to quantitatively describe the typical and atypical developmental progression of human brain function using rs-fcMRI. PMID:22952990

  2. Decoding lifespan changes of the human brain using resting-state functional connectivity MRI.

    PubMed

    Wang, Lubin; Su, Longfei; Shen, Hui; Hu, Dewen

    2012-01-01

The development of large-scale functional brain networks is a complex, lifelong process that can be investigated using resting-state functional connectivity MRI (rs-fcMRI). In this study, we aimed to decode the developmental dynamics of the whole-brain functional network in seven decades (8-79 years) of the human lifespan. We first used parametric curve fitting to examine linear and nonlinear age effects on the resting human brain, and then combined manifold learning and support vector machine methods to predict individuals' "brain ages" from rs-fcMRI data. We found that age-related changes in interregional functional connectivity exhibited spatially and temporally specific patterns. During brain development from childhood to senescence, functional connections tended to linearly increase in the emotion system and decrease in the sensorimotor system; while quadratic trajectories were observed in functional connections related to higher-order cognitive functions. The complex patterns of age effect on the whole-brain functional network could be effectively represented by a low-dimensional, nonlinear manifold embedded in the functional connectivity space, which uncovered the inherent structure of brain maturation and aging. Regression of manifold coordinates with age further showed that the manifold representation extracted sufficient information from rs-fcMRI data to make predictions about individual brains' functional development levels. Our study not only gives insights into the neural substrates that underlie behavioral and cognitive changes over age, but also provides a possible way to quantitatively describe the typical and atypical developmental progression of human brain function using rs-fcMRI.
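
    The quadratic (inverted-U) trajectory fitting described above can be sketched with a polynomial fit on synthetic connectivity-vs-age data. The numbers are illustrative, and the manifold learning and SVM stages are omitted.

```python
import numpy as np

# Quadratic fit of a functional connection's strength against age,
# of the kind used to characterize lifespan trajectories.
rng = np.random.default_rng(3)
age = np.linspace(8, 79, 120)
# Synthetic connectivity: peaks in mid-life, plus noise (illustrative).
conn = -0.0004 * (age - 45) ** 2 + 0.6 + 0.02 * rng.standard_normal(age.size)

coef = np.polyfit(age, conn, 2)          # [a, b, c] for a*age^2 + b*age + c
peak_age = -coef[1] / (2 * coef[0])      # vertex of the fitted parabola
# An inverted-U trajectory has a < 0 and a mid-life peak.
print(coef[0] < 0 and 35 < peak_age < 55)
```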

  3. Imaging patterns predict patient survival and molecular subtype in glioblastoma via machine learning techniques.

    PubMed

    Macyszyn, Luke; Akbari, Hamed; Pisapia, Jared M; Da, Xiao; Attiah, Mark; Pigrish, Vadim; Bi, Yingtao; Pal, Sharmistha; Davuluri, Ramana V; Roccograndi, Laura; Dahmane, Nadia; Martinez-Lage, Maria; Biros, George; Wolf, Ronald L; Bilello, Michel; O'Rourke, Donald M; Davatzikos, Christos

    2016-03-01

MRI characteristics of brain gliomas have been used to predict clinical outcome and molecular tumor characteristics. However, previously reported imaging biomarkers have not been sufficiently accurate or reproducible to enter routine clinical practice and often rely on relatively simple MRI measures. The current study leverages advanced image analysis and machine learning algorithms to identify complex and reproducible imaging patterns predictive of overall survival and molecular subtype in glioblastoma (GB). One hundred five patients with GB were first used to extract approximately 60 diverse features from preoperative multiparametric MRIs. These imaging features were used by a machine learning algorithm to derive imaging predictors of patient survival and molecular subtype. Cross-validation ensured generalizability of these predictors to new patients. Subsequently, the predictors were evaluated in a prospective cohort of 29 new patients. Survival curves yielded a hazard ratio of 10.64 for predicted long versus short survivors. The overall, 3-way (long/medium/short survival) accuracy in the prospective cohort approached 80%. Classification of patients into the 4 molecular subtypes of GB achieved 76% accuracy. By employing machine learning techniques, we were able to demonstrate that imaging patterns are highly predictive of patient survival. Additionally, we found that GB subtypes have distinctive imaging phenotypes. These results reveal that when imaging markers related to infiltration, cell density, microvascularity, and blood-brain barrier compromise are integrated via advanced pattern analysis methods, they form very accurate predictive biomarkers. These predictive markers used solely preoperative images, hence they can significantly augment diagnosis and treatment of GB patients. © The Author(s) 2015. Published by Oxford University Press on behalf of the Society for Neuro-Oncology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  4. gr-MRI: A software package for magnetic resonance imaging using software defined radios

    NASA Astrophysics Data System (ADS)

    Hasselwander, Christopher J.; Cao, Zhipeng; Grissom, William A.

    2016-09-01

The goal of this work is to develop software that enables the rapid implementation of custom MRI spectrometers using commercially-available software defined radios (SDRs). The developed gr-MRI software package comprises a set of Python scripts, flowgraphs, and signal generation and recording blocks for GNU Radio, an open-source SDR software package that is widely used in communications research. gr-MRI implements basic event sequencing functionality, and tools for system calibrations, multi-radio synchronization, and MR signal processing and image reconstruction. It includes four pulse sequences: a single-pulse sequence to record free induction signals, a gradient-recalled echo imaging sequence, a spin echo imaging sequence, and an inversion recovery spin echo imaging sequence. The sequences were used to perform phantom imaging scans with a 0.5 Tesla tabletop MRI scanner and two commercially-available SDRs. One SDR was used for RF excitation and reception, and the other for gradient pulse generation. The total SDR hardware cost was approximately $2000. The frequency of radio desynchronization events and the frequency with which the software recovered from those events was also measured, and the SDR's ability to generate frequency-swept RF waveforms was validated and compared to the scanner's commercial spectrometer. The spin echo images geometrically matched those acquired using the commercial spectrometer, with no unexpected distortions. Desynchronization events were more likely to occur at the very beginning of an imaging scan, but were nearly eliminated if the user invoked the sequence for a short period before beginning data recording. The SDR produced a 500 kHz bandwidth frequency-swept pulse with high fidelity, while the commercial spectrometer produced a waveform with large frequency spike errors. In conclusion, the developed gr-MRI software can be used to develop high-fidelity, low-cost custom MRI spectrometers using commercially-available SDRs.

  5. gr-MRI: A software package for magnetic resonance imaging using software defined radios.

    PubMed

    Hasselwander, Christopher J; Cao, Zhipeng; Grissom, William A

    2016-09-01

The goal of this work is to develop software that enables the rapid implementation of custom MRI spectrometers using commercially-available software defined radios (SDRs). The developed gr-MRI software package comprises a set of Python scripts, flowgraphs, and signal generation and recording blocks for GNU Radio, an open-source SDR software package that is widely used in communications research. gr-MRI implements basic event sequencing functionality, and tools for system calibrations, multi-radio synchronization, and MR signal processing and image reconstruction. It includes four pulse sequences: a single-pulse sequence to record free induction signals, a gradient-recalled echo imaging sequence, a spin echo imaging sequence, and an inversion recovery spin echo imaging sequence. The sequences were used to perform phantom imaging scans with a 0.5 Tesla tabletop MRI scanner and two commercially-available SDRs. One SDR was used for RF excitation and reception, and the other for gradient pulse generation. The total SDR hardware cost was approximately $2000. The frequency of radio desynchronization events and the frequency with which the software recovered from those events was also measured, and the SDR's ability to generate frequency-swept RF waveforms was validated and compared to the scanner's commercial spectrometer. The spin echo images geometrically matched those acquired using the commercial spectrometer, with no unexpected distortions. Desynchronization events were more likely to occur at the very beginning of an imaging scan, but were nearly eliminated if the user invoked the sequence for a short period before beginning data recording. The SDR produced a 500 kHz bandwidth frequency-swept pulse with high fidelity, while the commercial spectrometer produced a waveform with large frequency spike errors. In conclusion, the developed gr-MRI software can be used to develop high-fidelity, low-cost custom MRI spectrometers using commercially-available SDRs.
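
    The frequency-swept pulse validated above is, at baseband, a linear chirp; the waveform math can be sketched in numpy (this is not the gr-MRI API, and the duration and sample rate are illustrative, matching only the 500 kHz bandwidth quoted in the abstract):

```python
import numpy as np

def linear_chirp(duration_s, bandwidth_hz, fs):
    """Complex baseband linear frequency sweep spanning
    [-bandwidth/2, +bandwidth/2] over the pulse duration."""
    t = np.arange(int(duration_s * fs)) / fs
    f0 = -bandwidth_hz / 2
    k = bandwidth_hz / duration_s              # sweep rate, Hz/s
    phase = 2 * np.pi * (f0 * t + 0.5 * k * t ** 2)
    return np.exp(1j * phase)

# 500 kHz sweep over 2 ms, sampled at 2 MS/s (illustrative numbers).
fs = 2e6
pulse = linear_chirp(2e-3, 500e3, fs)

# Instantaneous frequency from the phase derivative spans ~500 kHz.
inst_f = np.diff(np.unwrap(np.angle(pulse))) * fs / (2 * np.pi)
print(round((inst_f.max() - inst_f.min()) / 1e3))  # ~500 (kHz)
```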

  6. Precise on-machine extraction of the surface normal vector using an eddy current sensor array

    NASA Astrophysics Data System (ADS)

    Wang, Yongqing; Lian, Meng; Liu, Haibo; Ying, Yangwei; Sheng, Xianjun

    2016-11-01

To satisfy the requirements of on-machine measurement of the surface normal during complex surface manufacturing, a highly robust normal vector extraction method using an eddy current (EC) displacement sensor array is developed, the output of which is almost unaffected by surface brightness, machining coolant and environmental noise. A precise normal vector extraction model based on a triangular-distributed EC sensor array is first established. Calibration of the effects of object surface inclination and coupling interference on measurement results, and of the relative position of the EC sensors, is involved. A novel apparatus employing three EC sensors and a force transducer was designed, which can be easily integrated into the computer numerical control (CNC) machine tool spindle and/or robot terminal execution. Finally, to test the validity and practicability of the proposed method, typical experiments were conducted using the developed approach and system on specified test pieces, such as an inclined plane and cylindrical and spherical surfaces.
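
    The core geometric step, recovering a surface normal from three displacement measurements at known sensor positions, amounts to a plane through three points. A minimal, idealized sketch follows (it ignores the inclination and coupling-interference calibration that the paper performs; the sensor coordinates are made up):

```python
import numpy as np

def surface_normal(p1, p2, p3):
    """Unit normal of the plane through three non-collinear points
    (e.g., contact points inferred from three displacement sensors)."""
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)

# Triangular sensor layout measuring a plane tilted by 30 degrees.
p1 = np.array([0.0, 0.0, 0.0])
p2 = np.array([10.0, 0.0, 0.0])
p3 = np.array([5.0, 8.66, 8.66 * np.tan(np.radians(30))])
n = surface_normal(p1, p2, p3)
tilt = np.degrees(np.arccos(abs(n[2])))  # angle from machine z-axis
print(round(tilt))  # 30
```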

  7. Wind Tunnel Strain-Gage Balance Calibration Data Analysis Using a Weighted Least Squares Approach

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Volden, T.

    2017-01-01

A new approach is presented that uses a weighted least squares fit to analyze wind tunnel strain-gage balance calibration data. The weighted least squares fit is specifically designed to increase the influence of single-component loadings during the regression analysis. The weighted least squares fit also reduces the impact of calibration load schedule asymmetries on the predicted primary sensitivities of the balance gages. A weighting factor between zero and one, which depends on a simple count of the data point's intentionally loaded load components or gages, is assigned to each calibration data point. The more intentionally loaded load components or gages a data point has, the smaller its weighting factor becomes. The proposed approach is applicable to both the Iterative and Non-Iterative Methods that are used for the analysis of strain-gage balance calibration data in the aerospace testing community. The Iterative Method uses a reasonable estimate of the tare corrected load set as input for the determination of the weighting factors. The Non-Iterative Method, on the other hand, uses gage output differences relative to the natural zeros as input for the determination of the weighting factors. Machine calibration data of a six-component force balance is used to illustrate the benefits of the proposed weighted least squares fit. In addition, a detailed derivation of the PRESS residuals associated with a weighted least squares fit is given in the appendices of the paper, as this information could not be found in the literature. These PRESS residuals may be needed to evaluate the predictive capabilities of the final regression models that result from a weighted least squares fit of the balance calibration data.
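
    The weighted fit itself solves the normal equations (X^T W X) b = X^T W y with W a diagonal matrix of the per-point weights. A numpy sketch follows; the data and the exact weighting rule are illustrative (the paper derives weights from a count of intentionally loaded components, which is mimicked crudely here):

```python
import numpy as np

def weighted_least_squares(X, y, w):
    """Solve (X^T W X) b = X^T W y with W = diag(w)."""
    Xw = X * w[:, None]
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

# Illustrative balance-style data: gage output vs. two load components.
rng = np.random.default_rng(4)
loads = rng.uniform(-1, 1, (30, 2))
X = np.column_stack([np.ones(30), loads])          # intercept + loads
y = X @ np.array([0.1, 2.0, -1.5]) + 0.01 * rng.standard_normal(30)

# Weight scheme in the spirit of the abstract: points that load at most
# one component strongly get full weight, combined loadings get less.
n_loaded = (np.abs(loads) > 0.5).sum(axis=1)
w = np.where(n_loaded <= 1, 1.0, 0.25)

b = weighted_least_squares(X, y, w)
print(np.allclose(b, [0.1, 2.0, -1.5], atol=0.05))
```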

  8. Two-dimensional imaging in a lightweight portable MRI scanner without gradient coils.

    PubMed

    Cooley, Clarissa Zimmerman; Stockmann, Jason P; Armstrong, Brandon D; Sarracanie, Mathieu; Lev, Michael H; Rosen, Matthew S; Wald, Lawrence L

    2015-02-01

    As the premiere modality for brain imaging, MRI could find wider applicability if lightweight, portable systems were available for siting in unconventional locations such as intensive care units, physician offices, surgical suites, ambulances, emergency rooms, sports facilities, or rural healthcare sites. We construct and validate a truly portable (<100 kg) and silent proof-of-concept MRI scanner which replaces conventional gradient encoding with a rotating lightweight cryogen-free, low-field magnet. When rotated about the object, the inhomogeneous field pattern is used as a rotating spatial encoding magnetic field (rSEM) to create generalized projections which encode the iteratively reconstructed two-dimensional (2D) image. Multiple receive channels are used to disambiguate the nonbijective encoding field. The system is validated with experimental images of 2D test phantoms. Similar to other nonlinear field encoding schemes, the spatial resolution is position dependent with blurring in the center, but is shown to be likely sufficient for many medical applications. The presented MRI scanner demonstrates the potential for portability by simultaneously relaxing the magnet homogeneity criteria and eliminating the gradient coil. This new architecture and encoding scheme shows convincing proof of concept images that are expected to be further improved with refinement of the calibration and methodology. © 2014 Wiley Periodicals, Inc.
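
    Iterative reconstruction from generalized projections can be illustrated with a Kaczmarz row-action solver on a toy encoding matrix, a generic sketch of solving y = A x, not the scanner's actual rSEM model or multi-channel disambiguation:

```python
import numpy as np

def kaczmarz(A, y, n_sweeps=200):
    """Kaczmarz iteration for y = A x, a classic row-action solver of
    the kind used for iterative reconstruction from projections."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x += (y[i] - a @ x) / (a @ a) * a
    return x

# Toy encoding: a random matrix standing in for the rotating-field
# encoding of a small 4x4 image vectorized into x (illustrative only).
rng = np.random.default_rng(5)
x_true = rng.uniform(0, 1, 16)
A = rng.standard_normal((40, 16))       # overdetermined, consistent
y = A @ x_true
x_rec = kaczmarz(A, y)
print(np.abs(x_rec - x_true).max() < 1e-4)
```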

  9. 2D Imaging in a Lightweight Portable MRI Scanner without Gradient Coils

    PubMed Central

    Cooley, Clarissa Zimmerman; Stockmann, Jason P.; Armstrong, Brandon D.; Sarracanie, Mathieu; Lev, Michael H.; Rosen, Matthew S.; Wald, Lawrence L.

    2014-01-01

    Purpose As the premiere modality for brain imaging, MRI could find wider applicability if lightweight, portable systems were available for siting in unconventional locations such as Intensive Care Units, physician offices, surgical suites, ambulances, emergency rooms, sports facilities, or rural healthcare sites. Methods We construct and validate a truly portable (<100kg) and silent proof-of-concept MRI scanner which replaces conventional gradient encoding with a rotating lightweight cryogen-free, low-field magnet. When rotated about the object, the inhomogeneous field pattern is used as a rotating Spatial Encoding Magnetic field (rSEM) to create generalized projections which encode the iteratively reconstructed 2D image. Multiple receive channels are used to disambiguate the non-bijective encoding field. Results The system is validated with experimental images of 2D test phantoms. Similar to other non-linear field encoding schemes, the spatial resolution is position dependent with blurring in the center, but is shown to be likely sufficient for many medical applications. Conclusion The presented MRI scanner demonstrates the potential for portability by simultaneously relaxing the magnet homogeneity criteria and eliminating the gradient coil. This new architecture and encoding scheme shows convincing proof of concept images that are expected to be further improved with refinement of the calibration and methodology. PMID:24668520

  10. Radiomics for ultrafast dynamic contrast-enhanced breast MRI in the diagnosis of breast cancer: a pilot study

    NASA Astrophysics Data System (ADS)

    Drukker, Karen; Anderson, Rachel; Edwards, Alexandra; Papaioannou, John; Pineda, Fred; Abe, Hiroyuke; Karzcmar, Gregory; Giger, Maryellen L.

    2018-02-01

Radiomics for dynamic contrast-enhanced (DCE) breast MRI have shown promise in the diagnosis of breast cancer as applied to conventional DCE-MRI protocols. Here, we investigate the potential of using such radiomic features in the diagnosis of breast cancer applied to ultrafast breast MRI in which images are acquired every few seconds. The dataset consisted of 64 lesions (33 malignant and 31 benign) imaged with both `conventional' and ultrafast DCE-MRI. After automated lesion segmentation in each image sequence, we calculated 38 radiomic features categorized as describing size, shape, margin, enhancement-texture, kinetics, and enhancement variance kinetics. For each feature, we calculated the 95% confidence interval of the area under the ROC curve (AUC) to determine whether the performance of each feature in the task of distinguishing between malignant and benign lesions was better than random guessing. Subsequently, we assessed performance of radiomic signatures in 10-fold cross-validation repeated 10 times using a support vector machine, with all of the features as well as the features by category as inputs. We found that many of the features remained useful (AUC>0.5) for the ultrafast protocol, with the exception of some features, e.g., those designed for late-phase kinetics such as the washout rate. For ultrafast MRI, the radiomics enhancement-texture signature achieved the best performance, which was comparable to that of the kinetics signature for `conventional' DCE-MRI, both achieving AUC values of 0.71. Radiomics developed for `conventional' DCE-MRI show promise for translation to the ultrafast protocol, where enhancement texture appears to play a dominant role.
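
    The per-feature test, whether the AUC's 95% confidence interval excludes 0.5, can be sketched with a rank-based AUC and a bootstrap. The scores are synthetic; only the 33/31 class split mirrors the abstract's dataset.

```python
import numpy as np

def auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def bootstrap_auc_ci(labels, scores, n_boot=2000, seed=0):
    """95% bootstrap confidence interval for the AUC; if it excludes
    0.5, the feature beats random guessing."""
    rng = np.random.default_rng(seed)
    aucs = []
    n = len(labels)
    while len(aucs) < n_boot:
        idx = rng.integers(0, n, n)
        if 0 < labels[idx].sum() < n:          # need both classes
            aucs.append(auc(labels[idx], scores[idx]))
    return np.percentile(aucs, [2.5, 97.5])

# Synthetic feature: malignant lesions (label 1) score higher on average.
rng = np.random.default_rng(6)
labels = np.array([1] * 33 + [0] * 31)
scores = np.where(labels == 1, rng.normal(1.5, 1.0, 64),
                  rng.normal(0.0, 1.0, 64))
lo, hi = bootstrap_auc_ci(labels, scores)
print(lo > 0.5)  # CI excludes 0.5 -> informative feature
```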

  11. Comparative studies of brain activation with MEG and functional MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George, J.S.; Aine, C.J.; Sanders, J.A.

The past two years have witnessed the emergence of MRI as a functional imaging methodology. Initial demonstrations involved the injection of a paramagnetic contrast agent and required ultrafast echo planar imaging capability to adequately resolve the passage of the injected bolus. By measuring the local reduction in image intensity due to magnetic susceptibility, it was possible to calculate blood volume, which changes as a function of neural activation. Later developments have exploited endogenous contrast mechanisms to monitor changes in blood volume or in venous blood oxygen content. Recently, we and others have demonstrated that it is possible to make such measurements in a clinical imager, suggesting that the large installed base of such machines might be utilized for functional imaging. Although it is likely that functional MRI (fMRI) will subsume some of the clinical and basic neuroscience applications now touted for MEG, it is also clear that these techniques offer different, largely complementary capabilities. At the very least, it is useful to compare and cross-validate the activation maps produced by these techniques. Such studies will be valuable as a check on results of neuromagnetic distributed current reconstructions and will allow better characterization of the relationship between neurophysiological activation and associated hemodynamic changes. A more exciting prospect is the development of analyses that combine information from the two modalities to produce a better description of underlying neural activity than is possible with either technique in isolation. In this paper we describe some results from initial comparative studies and outline several techniques that can be used to treat MEG and fMRI data within a unified computational framework.

  12. A proposed standard method for polarimetric calibration and calibration verification

    NASA Astrophysics Data System (ADS)

    Persons, Christopher M.; Jones, Michael W.; Farlow, Craig A.; Morell, L. Denise; Gulley, Michael G.; Spradley, Kevin D.

    2007-09-01

    Accurate calibration of polarimetric sensors is critical to reducing and analyzing phenomenology data, producing uniform polarimetric imagery for deployable sensors, and ensuring predictable performance of polarimetric algorithms. It is desirable to develop a standard calibration method, including verification reporting, in order to increase credibility with customers and foster communication and understanding within the polarimetric community. This paper seeks to facilitate discussions within the community on arriving at such standards. Both the calibration and verification methods presented here are performed easily with common polarimetric equipment, and are applicable to visible and infrared systems with either partial Stokes or full Stokes sensitivity. The calibration procedure has been used on infrared and visible polarimetric imagers over a six year period, and resulting imagery has been presented previously at conferences and workshops. The proposed calibration method involves the familiar calculation of the polarimetric data reduction matrix by measuring the polarimeter's response to a set of input Stokes vectors. With this method, however, linear combinations of Stokes vectors are used to generate highly accurate input states. This allows the direct measurement of all system effects, in contrast with fitting modeled calibration parameters to measured data. This direct measurement of the data reduction matrix allows higher order effects that are difficult to model to be discovered and corrected for in calibration. This paper begins with a detailed tutorial on the proposed calibration and verification reporting methods. Example results are then presented for a LWIR rotating half-wave retarder polarimeter.
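
    The central calculation, estimating the system matrix from responses to known input Stokes vectors and inverting it to obtain the data reduction matrix, can be sketched in an idealized, noise-free form. This is a generic sketch: the proposed method additionally uses linear combinations of generated states and a verification report, which are omitted here.

```python
import numpy as np

# A polarimeter maps an input Stokes vector s to readings p = W s.
# Measuring the response P to known calibration inputs S (as columns)
# gives P = W S, from which W and its pseudoinverse (the data
# reduction matrix) follow.
rng = np.random.default_rng(7)
W_true = rng.uniform(-1, 1, (4, 4))          # unknown system (illustrative)

S = np.column_stack([                        # known calibration inputs
    [1, 1, 0, 0],    # horizontal linear
    [1, -1, 0, 0],   # vertical linear
    [1, 0, 1, 0],    # +45 degree linear
    [1, 0, 0, 1],    # right circular
]).astype(float)
P = W_true @ S                               # measured responses

W_est = P @ np.linalg.inv(S)                 # recover the system matrix
DRM = np.linalg.pinv(W_est)                  # data reduction matrix

# Any new measurement can now be reduced to a Stokes vector.
s_unknown = np.array([1.0, 0.3, -0.2, 0.1])
p = W_true @ s_unknown
print(np.allclose(DRM @ p, s_unknown))
```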

  13. A Boltzmann machine for the organization of intelligent machines

    NASA Technical Reports Server (NTRS)

    Moed, Michael C.; Saridis, George N.

    1989-01-01

In the present technological society, there is a major need to build machines that would execute intelligent tasks operating in uncertain environments with minimum interaction with a human operator. Although some designers have built smart robots, utilizing heuristic ideas, there is no systematic approach to design such machines in an engineering manner. Recently, cross-disciplinary research from the fields of computers, systems, AI, and information theory has served to set the foundations of the emerging area of the design of intelligent machines. Since 1977 Saridis has been developing an approach, defined as Hierarchical Intelligent Control, designed to organize, coordinate and execute anthropomorphic tasks by a machine with minimum interaction with a human operator. This approach utilizes analytical (probabilistic) models to describe and control the various functions of the intelligent machine structured by the intuitively defined principle of Increasing Precision with Decreasing Intelligence (IPDI) (Saridis 1979). This principle, even though it resembles the managerial structure of organizational systems (Levis 1988), has been derived on an analytic basis by Saridis (1988). The purpose is to derive analytically a Boltzmann machine suitable for optimal connection of nodes in a neural net (Fahlman, Hinton, Sejnowski, 1985). Then this machine will serve to search for the optimal design of the organization level of an intelligent machine. In order to accomplish this, some mathematical theory of the intelligent machines will first be outlined. Then some definitions of the variables associated with the principle, like machine intelligence, machine knowledge, and precision, will be made (Saridis, Valavanis 1988). Then a procedure to establish the Boltzmann machine on an analytic basis will be presented and illustrated by an example in designing the organization level of an Intelligent Machine. A new search technique, the Modified Genetic Algorithm, is presented and proved.
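The stochastic unit update at the heart of a Boltzmann machine can be sketched briefly. This is a generic, minimal illustration of the standard Gibbs/Boltzmann update rule, not the analytic construction the abstract describes; the weights and annealing schedule below are toy values.

```python
import math
import random

def energy(state, w):
    """E(s) = -1/2 * sum_ij w[i][j] * s_i * s_j for binary units in {0, 1}."""
    n = len(state)
    return -0.5 * sum(w[i][j] * state[i] * state[j]
                      for i in range(n) for j in range(n))

def gibbs_step(state, w, T):
    """Set one randomly chosen unit on with the Boltzmann probability at temperature T."""
    i = random.randrange(len(state))
    net = sum(w[i][j] * state[j] for j in range(len(state)) if j != i)
    p_on = 1.0 / (1.0 + math.exp(-net / T))
    state[i] = 1 if random.random() < p_on else 0

# Toy search: two mutually reinforcing units, annealed toward the low-energy state.
random.seed(1)
w = [[0, 2], [2, 0]]
s = [0, 0]
for T in (4.0, 2.0, 1.0, 0.5, 0.25):
    for _ in range(50):
        gibbs_step(s, w, T)
```

As the temperature drops, the update increasingly favors configurations of low energy, which is what makes the machine usable as a search procedure over candidate organizations.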

  14. Auto calibration of a cone-beam-CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gross, Daniel; Heil, Ulrich; Schulze, Ralf

    2012-10-15

Purpose: This paper introduces a novel autocalibration method for cone-beam CTs (CBCT) or flat-panel CTs, assuming a perfect rotation. The method is based on ellipse fitting. Autocalibration refers to accurate recovery of the geometric alignment of a CBCT device from projection images alone, without any manual measurements. Methods: The authors use test objects containing small, arbitrarily positioned radio-opaque markers. No information regarding the relative positions of the markers is used. In practice, the authors use three to eight metal ball bearings (diameter of 1 mm), e.g., positioned roughly in a vertical line such that their projection image curves on the detector preferably form large ellipses over the circular orbit. From this ellipse-to-curve mapping and also from its inversion the authors derive an explicit formula. Nonlinear optimization based on this mapping enables them to determine the six relevant parameters of the system up to the device rotation angle, which is sufficient to define the geometry of a CBCT machine assuming a perfect rotational movement. These parameters also include out-of-plane rotations. The authors evaluate their method by simulation based on data used in two similar approaches [L. von Smekal, M. Kachelriess, E. Stepina, and W. A. Kalender, 'Geometric misalignment and calibration in cone-beam tomography,' Med. Phys. 31(12), 3242-3266 (2004); K. Yang, A. L. C. Kwan, D. F. Miller, and J. M. Boone, 'A geometric calibration method for cone beam CT systems,' Med. Phys. 33(6), 1695-1706 (2006)]. This allows a direct comparison of accuracy. Furthermore, the authors present real-world 3D reconstructions of a dry human spine segment and an electronic device. The reconstructions were computed from projections taken with a commercial dental CBCT device having two different focus-to-detector distances that were both calibrated with their method. The authors compare their reconstruction with a reconstruction computed by the manufacturer of the CBCT
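The ellipse-fitting step that this method builds on can be sketched with a plain algebraic conic fit: a marker's projected trajectory over the circular orbit traces a conic, and a least-squares fit recovers its coefficients. This is a hedged, simplified illustration (a generic conic fit on synthetic points), not the authors' explicit ellipse-to-curve formula.

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0,
    normalized with f = -1 (valid when the curve does not pass through the origin)."""
    D = np.column_stack([x * x, x * y, y * y, x, y])
    coeffs, *_ = np.linalg.lstsq(D, np.ones_like(x), rcond=None)
    a, b, c, d, e = coeffs
    return a, b, c, d, e, -1.0

# Synthetic marker trajectory: detector points on an ellipse centred at (3, 2).
t = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
x = 3.0 + 5.0 * np.cos(t)
y = 2.0 + 2.0 * np.sin(t)

a, b, c, d, e, f = fit_conic(x, y)
residual = np.max(np.abs(a * x * x + b * x * y + c * y * y + d * x + e * y + f))
```

For an ellipse the fitted coefficients satisfy the discriminant condition b² − 4ac < 0; in the actual calibration, the recovered ellipse parameters are what feed the nonlinear optimization for the device geometry.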

  15. Accuracy Study of a Robotic System for MRI-guided Prostate Needle Placement

    PubMed Central

    Seifabadi, Reza; Cho, Nathan BJ.; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Fichtinger, Gabor; Iordachita, Iulian

    2013-01-01

Background Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified, and minimized to the possible extent. Methods and Materials The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called before-insertion error) and the error associated with needle-tissue interaction (called due-to-insertion error). The before-insertion error was measured directly in a soft phantom, and the different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator’s error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator’s accuracy and repeatability was also studied. Results The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the super soft phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was approximated to be 2.13 mm, thus making a larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator’s targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot’s repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot’s accuracy and repeatability. Conclusions The experimental methodology presented in this paper may help researchers to identify, quantify, and minimize different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analyzed here, the overall error of the studied system

  16. Accuracy study of a robotic system for MRI-guided prostate needle placement.

    PubMed

    Seifabadi, Reza; Cho, Nathan B J; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M; Fichtinger, Gabor; Iordachita, Iulian

    2013-09-01

Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified and minimized to the possible extent. The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called 'before-insertion error') and the error associated with needle-tissue interaction (called 'due-to-insertion error'). Before-insertion error was measured directly in a soft phantom, and different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the Super Soft plastic phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was found to be approximately 2.13 mm, thus making a larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. The experimental methodology presented in this paper may help researchers to identify, quantify and minimize different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analysed here, the overall error of the studied system
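The "assuming orthogonal error components" step in both versions of this record is a subtraction in quadrature: if the before-insertion error and the due-to-insertion error are independent, their variances add, so the unknown component follows directly from the two measured ones.

```python
import math

def due_to_insertion(overall_mm, before_insertion_mm):
    """Assuming independent (orthogonal) error components:
    overall^2 = before_insertion^2 + due_to_insertion^2."""
    return math.sqrt(overall_mm ** 2 - before_insertion_mm ** 2)

# Numbers from the study: 2.5 mm overall, 1.3 mm before insertion.
err = due_to_insertion(2.5, 1.3)   # ≈ 2.13 mm, matching the reported value
```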

  17. Using temporal ICA to selectively remove global noise while preserving global signal in functional MRI data.

    PubMed

    Glasser, Matthew F; Coalson, Timothy S; Bijsterbosch, Janine D; Harrison, Samuel J; Harms, Michael P; Anticevic, Alan; Van Essen, David C; Smith, Stephen M

    2018-06-02

Temporal fluctuations in functional Magnetic Resonance Imaging (fMRI) have been profitably used to study brain activity and connectivity for over two decades. Unfortunately, fMRI data also contain structured temporal "noise" from a variety of sources, including subject motion, subject physiology, and the MRI equipment. Recently, methods have been developed to automatically and selectively remove spatially specific structured noise from fMRI data using spatial Independent Components Analysis (ICA) and machine learning classifiers. Spatial ICA is particularly effective at removing spatially specific structured noise from high temporal and spatial resolution fMRI data of the type acquired by the Human Connectome Project and similar studies. However, spatial ICA is mathematically, by design, unable to separate spatially widespread "global" structured noise from fMRI data (e.g., blood flow modulations from subject respiration). No methods currently exist to selectively and completely remove global structured noise while retaining the global signal from neural activity. This has left the field in a quandary (to do or not to do global signal regression), given that both choices have substantial downsides. Here we show that temporal ICA can selectively segregate and remove global structured noise while retaining global neural signal in both task-based and resting state fMRI data. We compare the results before and after temporal ICA cleanup to those from global signal regression and show that temporal ICA cleanup removes the global positive biases caused by global physiological noise without inducing the network-specific negative biases of global signal regression. We believe that temporal ICA cleanup provides a "best of both worlds" solution to the global signal and global noise dilemma and that temporal ICA itself unlocks interesting neurobiological insights from fMRI data. Copyright © 2018 Elsevier Inc. All rights reserved.
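Global signal regression, the baseline this record argues against, is itself a one-line projection: regress the mean timeseries out of every voxel. A minimal sketch on synthetic data (the array shapes and noise amplitudes are hypothetical) makes the operation, and hence the record's point about what it removes indiscriminately, concrete.

```python
import numpy as np

def global_signal_regression(data):
    """Regress the mean timeseries (the 'global signal') out of every voxel.
    data: (n_voxels, n_timepoints) array."""
    g = data.mean(axis=0)
    g = g - g.mean()                       # demeaned global regressor
    beta = data @ g / (g @ g)              # per-voxel regression weight
    return data - np.outer(beta, g)

rng = np.random.default_rng(0)
voxels = rng.standard_normal((100, 200))             # voxel-specific fluctuations
voxels = voxels + 0.8 * rng.standard_normal(200)     # plus a shared "global" component
cleaned = global_signal_regression(voxels)
```

After the regression, every voxel's timeseries is orthogonal to the demeaned global signal, which removes the shared physiological component but also any genuinely global neural signal; temporal ICA is proposed precisely to separate those two.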

  18. Real-Time fMRI Pattern Decoding and Neurofeedback Using FRIEND: An FSL-Integrated BCI Toolbox

    PubMed Central

    Sato, João R.; Basilio, Rodrigo; Paiva, Fernando F.; Garrido, Griselda J.; Bramati, Ivanei E.; Bado, Patricia; Tovar-Moll, Fernanda; Zahn, Roland; Moll, Jorge

    2013-01-01

    The demonstration that humans can learn to modulate their own brain activity based on feedback of neurophysiological signals opened up exciting opportunities for fundamental and applied neuroscience. Although EEG-based neurofeedback has been long employed both in experimental and clinical investigation, functional MRI (fMRI)-based neurofeedback emerged as a promising method, given its superior spatial resolution and ability to gauge deep cortical and subcortical brain regions. In combination with improved computational approaches, such as pattern recognition analysis (e.g., Support Vector Machines, SVM), fMRI neurofeedback and brain decoding represent key innovations in the field of neuromodulation and functional plasticity. Expansion in this field and its applications critically depend on the existence of freely available, integrated and user-friendly tools for the neuroimaging research community. Here, we introduce FRIEND, a graphic-oriented user-friendly interface package for fMRI neurofeedback and real-time multivoxel pattern decoding. The package integrates routines for image preprocessing in real-time, ROI-based feedback (single-ROI BOLD level and functional connectivity) and brain decoding-based feedback using SVM. FRIEND delivers an intuitive graphic interface with flexible processing pipelines involving optimized procedures embedding widely validated packages, such as FSL and libSVM. In addition, a user-defined visual neurofeedback module allows users to easily design and run fMRI neurofeedback experiments using ROI-based or multivariate classification approaches. FRIEND is open-source and free for non-commercial use. Processing tutorials and extensive documentation are available. PMID:24312569

  19. Cursor control by Kalman filter with a non-invasive body–machine interface

    PubMed Central

    Seáñez-González, Ismael; Mussa-Ivaldi, Ferdinando A

    2015-01-01

Objective We describe a novel human–machine interface for the control of a two-dimensional (2D) computer cursor using four inertial measurement units (IMUs) placed on the user’s upper body. Approach A calibration paradigm where human subjects follow a cursor with their body as if they were controlling it with their shoulders generates a map between shoulder motions and cursor kinematics. This map is used in a Kalman filter to estimate the desired cursor coordinates from upper-body motions. We compared cursor control performance in a centre-out reaching task performed by subjects using different amounts of information from the IMUs to control the 2D cursor. Main results Our results indicate that taking advantage of the redundancy of the signals from the IMUs improved overall performance. Our work also demonstrates the potential of non-invasive IMU-based body–machine interface systems as an alternative or complement to brain–machine interfaces for accomplishing cursor control in 2D space. Significance The present study may serve as a platform for people with high tetraplegia to control assistive devices such as powered wheelchairs using a joystick. PMID:25242561
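The benefit of sensor redundancy that this record reports can be illustrated with a much-simplified scalar Kalman filter: several redundant readings per step are fused, shrinking the effective measurement noise. This is a hedged toy sketch (1-D random-walk state, hypothetical noise levels), not the calibrated multi-IMU map of the paper.

```python
import random

def kalman_1d(measurements, q=0.01, r=1.0):
    """Scalar Kalman filter over a random-walk state. Each element of
    `measurements` is a list of redundant noisy readings of the same quantity;
    averaging k readings divides the measurement variance r by k."""
    x, p = 0.0, 1.0
    estimates = []
    for zs in measurements:
        z = sum(zs) / len(zs)
        r_eff = r / len(zs)            # redundancy shrinks measurement noise
        p = p + q                      # predict (random-walk process noise)
        k = p / (p + r_eff)            # Kalman gain
        x = x + k * (z - x)            # update toward the fused measurement
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Hypothetical setup: 4 redundant channels reading a constant position of 5.0.
random.seed(0)
true_pos = 5.0
readings = [[true_pos + random.gauss(0, 0.5) for _ in range(4)] for _ in range(200)]
estimates = kalman_1d(readings, q=0.001, r=0.25)
```

With four channels instead of one, `r_eff` is a quarter of the per-channel variance, so the filter converges faster and settles closer to the true position, which is the redundancy effect the study measured behaviourally.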

  20. SU-G-JeP2-12: Quantification of 3D Geometric Distortion for 1.5T and 3T MRI Scanners Used for Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stowe, M; Gupta, N; Raterman, B

Purpose: To quantify the magnitude of geometric distortion for MRI scanners and provide recommendations for MRI in radiation therapy. Methods: A novel phantom, QUASAR MRID3D [Modus Medical Devices Inc.], was scanned to evaluate the level of 3D geometric distortion present in five MRI scanners used for radiation therapy in our department. The phantom was scanned using the body coil with 1 mm image slice thickness to acquire 3D images of the phantom body. The phantom was aligned to its geometric center for each scan, and the field of view was set to visualize the entire phantom. The dependence of distortion magnitude on distance from imaging isocenter and on magnetic field strength (1.5T and 3T) was investigated. Additionally, the characteristics of distortion for Siemens and GE machines were compared. The image distortion for each scanner was quantified in terms of mean, standard deviation (STD), maximum distortion, and skewness. Results: The 3T and 1.5T scans show a similar absolute distortion with a mean of 1.38 mm (0.33 mm STD) for 3T and 1.39 mm (0.34 mm STD) for 1.5T for a 100 mm radius distance from isocenter. Some machines can have a distortion larger than 10 mm at a distance of 200 mm from the isocenter. The distortions are presented with plots of the x, y, and z directional components. Conclusion: The results indicate that quantification of MRI image distortion is crucial in radiation oncology for target and organ delineation and treatment planning. The magnitude of geometric distortion determines the margin needed for target contouring, which is usually neglected in the treatment planning process, especially for SRS/SBRT treatments. Understanding the 3D distribution of the MRI image distortion will improve the accuracy of target delineation and, hence, treatment efficacy. MRI with proper patient alignment to the isocenter is vital to reducing the effects of MRI distortion in treatment planning.
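The summary statistics this abstract reports (mean, STD, maximum, skewness of displacement within a radius of isocenter) can be sketched directly. The control-point layout and distortion magnitudes below are hypothetical, generated purely to show the bookkeeping; only the statistics mirror the abstract.

```python
import numpy as np

def distortion_stats(measured, nominal, isocenter, radius_mm):
    """Summarize 3D geometric distortion for control points within a given
    radius of the imaging isocenter: mean, STD, max, and Fisher skewness (mm)."""
    d = np.linalg.norm(measured - nominal, axis=1)     # displacement per point
    r = np.linalg.norm(nominal - isocenter, axis=1)    # distance from isocenter
    d = d[r <= radius_mm]                              # keep points inside the radius
    mean, std = d.mean(), d.std()
    skew = np.mean(((d - mean) / std) ** 3)
    return {"mean": mean, "std": std, "max": d.max(), "skew": skew}

# Hypothetical phantom: 500 control points with a synthetic 0.8 mm distortion field.
rng = np.random.default_rng(3)
nominal = rng.uniform(-150, 150, size=(500, 3))
measured = nominal + rng.normal(0, 0.8, size=(500, 3))
stats = distortion_stats(measured, nominal, np.zeros(3), radius_mm=100)
```

Reporting the statistics per radius (100 mm, 200 mm, ...) reproduces the distance dependence the study emphasizes for contouring margins.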

  1. Along-track calibration of SWIR push-broom hyperspectral imaging system

    NASA Astrophysics Data System (ADS)

    Jemec, Jurij; Pernuš, Franjo; Likar, Boštjan; Bürmen, Miran

    2016-05-01

Push-broom hyperspectral imaging systems are increasingly used for various medical, agricultural and military purposes. The acquired images contain spectral information in every pixel of the imaged scene, providing additional information compared to classical RGB color imaging. Due to misalignment and imperfections in the optical components comprising the push-broom hyperspectral imaging system, variable spectral and spatial misalignments and blur are present in the acquired images. To capture these distortions, a spatially and spectrally variant response function must be identified at each spatial and spectral position. In this study, we propose a procedure to characterize the variant response function of Short-Wavelength Infrared (SWIR) push-broom hyperspectral imaging systems in the across-track and along-track directions and remove its effect from the acquired images. Custom laser-machined spatial calibration targets are used for the characterization. The spatial and spectral variability of the response function in the across-track and along-track directions is modeled by a parametrized basis function. Finally, the characterization results are used to restore the distorted hyperspectral images in the across-track and along-track directions by a Richardson-Lucy deconvolution-based algorithm. The proposed calibration method is thoroughly evaluated on images of targets with well-defined geometric properties. The results suggest that the proposed procedure is well suited for fast and accurate spatial calibration of push-broom hyperspectral imaging systems.
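The Richardson-Lucy restoration step named above has a compact 1-D form: iteratively rescale the current estimate by the back-projected ratio of the observed data to the re-blurred estimate. A hedged sketch on a synthetic line profile (the PSF and signal are hypothetical; the paper applies the spatially variant version):

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """1-D Richardson-Lucy deconvolution with a shift-invariant PSF."""
    est = np.full_like(observed, observed.mean())      # flat nonnegative start
    psf_flip = psf[::-1]                               # adjoint of the blur
    for _ in range(iterations):
        blurred = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)  # avoid divide-by-zero
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est

# Hypothetical line profile: two point features blurred by a small response function.
psf = np.array([0.25, 0.5, 0.25])
signal = np.zeros(32)
signal[10] = 1.0
signal[20] = 0.5
observed = np.convolve(signal, psf, mode="same")
restored = richardson_lucy(observed, psf, iterations=200)
```

The multiplicative update keeps the estimate nonnegative throughout, one reason Richardson-Lucy is a common choice for restoring intensity images.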

  2. Multi-class SVM model for fMRI-based classification and grading of liver fibrosis

    NASA Astrophysics Data System (ADS)

    Freiman, M.; Sela, Y.; Edrei, Y.; Pappo, O.; Joskowicz, L.; Abramovitch, R.

    2010-03-01

We present a novel non-invasive automatic method for the classification and grading of liver fibrosis from fMRI maps based on hepatic hemodynamic changes. This method automatically creates a model for liver fibrosis grading based on training datasets. Our supervised learning method evaluates hepatic hemodynamics from an anatomical MRI image and three T2*-weighted fMRI signal intensity time-course scans acquired during the breathing of air, air-carbon dioxide, and carbogen. It constructs a statistical model of liver fibrosis from these fMRI scans using a one-against-all multi-class Support Vector Machine (SVM) classifier built from binary classifiers. We evaluated the resulting classification model with the leave-one-out technique and compared it to both full multi-class SVM and K-Nearest Neighbor (KNN) classifications. Our experimental study analyzed 57 slice sets from 13 mice, and yielded a 98.2% separation accuracy between healthy and low-grade fibrotic subjects, and an overall accuracy of 84.2% for fibrosis grading. These results are better than those of existing image-based methods, which can only discriminate between healthy and high-grade fibrosis subjects. With appropriate extensions, our method may be used for non-invasive classification and progression monitoring of liver fibrosis in human patients instead of more invasive approaches, such as biopsy or contrast-enhanced imaging.
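The one-against-all scheme used here trains one binary classifier per grade and predicts the class whose classifier scores highest. A minimal sketch, assuming synthetic 2-D features and a simplified subgradient-descent linear SVM in place of a full SVM solver:

```python
import numpy as np

def train_binary_svm(X, y, lam=0.01, lr=0.05, epochs=300):
    """Linear SVM via subgradient descent on the regularized hinge loss
    (a simplified stand-in for a full SVM solver)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                                   # margin violators
        grad_w = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / n
        grad_b = -y[viol].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def one_vs_all_fit(X, labels):
    """One binary SVM per class, each trained class-vs-rest."""
    return {c: train_binary_svm(X, np.where(labels == c, 1.0, -1.0))
            for c in np.unique(labels)}

def one_vs_all_predict(models, X):
    classes = sorted(models)
    scores = np.column_stack([X @ models[c][0] + models[c][1] for c in classes])
    return np.asarray(classes)[scores.argmax(axis=1)]

# Hypothetical, well-separated "grade" clusters in a 2-D feature space.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(30, 2)) for m in [(0, 0), (3, 0), (0, 3)]])
labels = np.repeat([0, 1, 2], 30)
models = one_vs_all_fit(X, labels)
accuracy = (one_vs_all_predict(models, X) == labels).mean()
```

The design choice the abstract compares (binary-based one-against-all versus a full multi-class SVM) is exactly the difference between training K independent class-vs-rest models, as here, and solving one joint multi-class problem.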

  3. Monitoring machining conditions by infrared images

    NASA Astrophysics Data System (ADS)

    Borelli, Joao E.; Gonzaga Trabasso, Luis; Gonzaga, Adilson; Coelho, Reginaldo T.

    2001-03-01

During the machining process, knowledge of the temperature is the most important factor in tool analysis. It allows control of the main factors that influence tool use, life time, and waste. The temperature in the contact area between the piece and the tool results from the material removal in the cutting operation, and it is difficult to obtain because the tool and the work piece are in motion. One way to measure the temperature in this situation is to detect the infrared radiation. This work presents a new methodology for diagnosis and monitoring of machining processes with the use of infrared images. The infrared image provides a map in gray tones of the elements in the process: tool, work piece, and chips. Each gray tone in the image corresponds to a certain temperature for each one of those materials, and the relationship between the gray tones and the temperature is obtained by prior calibration of the infrared camera. The system developed in this work uses an infrared camera, a frame grabber board, and software composed of three modules. The first module performs image acquisition and processing. The second module extracts image features and assembles the feature vector. Finally, the third module uses fuzzy logic to evaluate the feature vector and supplies the tool state diagnostic as output.

  4. A theoretical framework to model DSC-MRI data acquired in the presence of contrast agent extravasation

    NASA Astrophysics Data System (ADS)

    Quarles, C. C.; Gochberg, D. F.; Gore, J. C.; Yankeelov, T. E.

    2009-10-01

Dynamic susceptibility contrast (DSC) MRI methods rely on compartmentalization of the contrast agent such that a susceptibility gradient can be induced between the contrast-containing compartment and adjacent spaces, such as between intravascular and extravascular spaces. When there is a disruption of the blood-brain barrier, as is frequently the case with brain tumors, a contrast agent leaks out of the vasculature, resulting in additional T1, T2 and T2* relaxation effects in the extravascular space, thereby affecting the signal intensity time course and reducing the reliability of the computed hemodynamic parameters. In this study, a theoretical model describing these dynamic intra- and extravascular T1, T2 and T2* relaxation interactions is proposed. The applicability of using the proposed model to investigate the influence of relevant MRI pulse sequences (e.g. echo time, flip angle), and physical (e.g. susceptibility calibration factors, pre-contrast relaxation rates) and physiological parameters (e.g. permeability, blood flow, compartmental volume fractions) on DSC-MRI signal time curves is demonstrated. Such a model could yield important insights into the biophysical basis of contrast-agent-extravasation-induced effects on measured DSC-MRI signals and provide a means to investigate pulse sequence optimization and appropriate data analysis methods for the extraction of physiologically relevant imaging metrics.
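For context, the standard single-compartment DSC-MRI signal model (the baseline the paper's two-compartment model extends) relates agent concentration to signal through an added transverse relaxation rate, ΔR2*(t) = r2* · C(t), so S(t) = S0 · exp(−TE · ΔR2*(t)). A sketch with an assumed relaxivity and a gamma-variate-like bolus (both values hypothetical):

```python
import numpy as np

def dsc_signal(conc, te_s, s0=1.0, r2s=87.0):
    """Single-compartment DSC-MRI signal model (ignores the leakage-driven
    T1/T2 effects the paper models): S(t) = S0 * exp(-TE * r2s * C(t)).
    r2s is an assumed transverse relaxivity in s^-1 mM^-1."""
    return s0 * np.exp(-te_s * r2s * conc)

t = np.linspace(0, 60, 120)                         # time in seconds
conc = 0.5 * (t / 10.0) ** 2 * np.exp(-t / 5.0)     # gamma-variate-like bolus (mM)
signal = dsc_signal(conc, te_s=0.030)               # 30 ms echo time
```

The signal dip mirrors the concentration peak; the paper's contribution is precisely that when the agent extravasates, this simple monotone relationship breaks down and the intra/extravascular relaxation terms must be modeled separately.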

  5. Disrupted white matter connectivity underlying developmental dyslexia: A machine learning approach.

    PubMed

    Cui, Zaixu; Xia, Zhichao; Su, Mengmeng; Shu, Hua; Gong, Gaolang

    2016-04-01

Developmental dyslexia has been hypothesized to result from multiple causes and exhibit multiple manifestations, implying a distributed multidimensional effect on the human brain. The disruption of specific white-matter (WM) tracts/regions has been observed in dyslexic children. However, it remains unknown whether developmental dyslexia affects the human brain WM in a multidimensional manner. As a natural tool for evaluating this hypothesis, the multivariate machine learning approach was applied in this study to compare 28 school-aged dyslexic children with 33 age-matched controls. Structural magnetic resonance imaging (MRI) and diffusion tensor imaging were acquired to extract five multitype WM features at a regional level: white matter volume, fractional anisotropy, mean diffusivity, axial diffusivity, and radial diffusivity. A linear support vector machine (LSVM) classifier achieved an accuracy of 83.61% using these MRI features to distinguish dyslexic children from controls. Notably, the most discriminative features that contributed to the classification were primarily associated with WM regions within the putative reading network/system (e.g., the superior longitudinal fasciculus, inferior fronto-occipital fasciculus, thalamocortical projections, and corpus callosum), the limbic system (e.g., the cingulum and fornix), and the motor system (e.g., the cerebellar peduncle, corona radiata, and corticospinal tract). These results were well replicated using a logistic regression classifier. These findings provide direct evidence supporting a multidimensional effect of developmental dyslexia on WM connectivity of the human brain, and highlight the involvement of WM tracts/regions beyond the well-recognized reading system in dyslexia. Finally, the discriminating results demonstrate the potential of WM neuroimaging features as imaging markers for identifying dyslexic individuals. © 2016 Wiley Periodicals, Inc.

  6. TWSTFT Link Calibration Report

    DTIC Science & Technology

    2015-09-01

TWSTFT link calibration report -- Calibration of the Lab(k)-PTB UTC link (calibration reference: CI-888-2015; final version 1 Sept 2015; Annex II: TWSTFT link calibration with a GPS calibrator). Abstract: This report includes the calibration results of the Lab(k)-PTB TWSTFT link and closure measurements of the BIPM ...

  7. Simple transfer calibration method for a Cimel Sun-Moon photometer: calculating lunar calibration coefficients from Sun calibration constants.

    PubMed

    Li, Zhengqiang; Li, Kaitao; Li, Donghui; Yang, Jiuchun; Xu, Hua; Goloub, Philippe; Victori, Stephane

    2016-09-20

The Cimel new technologies allow both daytime and nighttime aerosol optical depth (AOD) measurements. Although the daytime AOD calibration protocols are well established, accurate and simple nighttime calibration is still a challenging task. Standard lunar-Langley and intercomparison calibration methods both require specific conditions in terms of atmospheric stability and site condition. Additionally, the lunar irradiance model also has some known limits on its uncertainty. This paper presents a simple calibration method that transfers the direct-Sun calibration constant, V0,Sun, to the lunar irradiance calibration coefficient, CMoon. Our approach is a pure calculation method, independent of site limits, e.g., Moon phase. The method is also not affected by the lunar irradiance model limitations, which is the largest error source of traditional calibration methods. Besides, this new transfer calibration approach is easy to use in the field, since CMoon can be obtained directly once V0,Sun is known. Error analysis suggests that the average uncertainty of CMoon over the 440-1640 nm bands obtained with the transfer method is 2.4%-2.8%, depending on the V0,Sun approach (Langley or intercomparison), which is theoretically comparable with that of the lunar-Langley approach. In this paper, the Sun-Moon transfer and the Langley methods are compared based on site measurements in Beijing, and the day-night measurement continuity and performance are analyzed.
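The transfer idea can be illustrated in its simplest possible form. This is a deliberately simplified assumption, not the paper's derivation: treat the Sun calibration constant V0,Sun as the instrument's count level for a known exoatmospheric solar band irradiance, so the irradiance-per-count gain it implies can also convert lunar counts to irradiance. All numbers below are hypothetical.

```python
def moon_gain_from_sun(v0_sun_counts, e0_sun_band):
    """Illustrative transfer only (simplified assumption, not the paper's
    full formula): gain in irradiance units per count = E0_Sun / V0_Sun."""
    return e0_sun_band / v0_sun_counts

# Hypothetical values for one band: V0,Sun in counts, E0,Sun in W m^-2 nm^-1.
gain = moon_gain_from_sun(v0_sun_counts=1.2e4, e0_sun_band=1.9)
lunar_irradiance = gain * 45.0     # hypothetical lunar measurement in counts
```

The appeal of a pure calculation like this is exactly what the abstract claims: once V0,Sun is known from a standard daytime calibration, no lunar observing conditions or lunar irradiance model are needed to obtain the coefficient.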

  8. Pharmacological MRI (phMRI) of the Human Central Nervous System.

    PubMed

    Lanfermann, H; Schindler, C; Jordan, J; Krug, N; Raab, P

    2015-10-01

Pharmacological magnetic resonance imaging (phMRI) of the central nervous system (CNS) addresses the increasing demands in the biopharma industry for new methods that can accurately predict, as early as possible, whether novel CNS agents will be effective and safe. Imaging of physiological and molecular-level function can provide a more direct measure of a drug's mechanism of action, enabling more predictive measures of drug activity. The availability of phMRI of the nervous system within the professional infrastructure of the Clinical Research Center (CRC) Hannover as a proof-of-concept center ensures that advances in basic science progress swiftly into benefits for patients. Advanced standardized MRI techniques including quantitative MRI, kurtosis determination, functional MRI, and spectroscopic imaging of the entire brain are necessary for phMRI. As a result, MR scanners will evolve into high-precision measuring instruments for assessment of desirable and undesirable effects of drugs as the basic precondition for individually tailored therapy. The CRC's Imaging Unit with high-end large-scale equipment will allow unique opportunities, for example: identification of MR-based biomarkers to assess the effect of drugs (surrogate parameters), establishment of normal levels and reference ranges for MRI-based biomarkers, and evaluation of the most relevant MRI sequences for drug monitoring in outpatient care. Another very important prerequisite for phMRI is the MHH Core Facility as the scientific and operational study unit of the CRC partner Hannover Medical School. This unit is responsible for the study coordination, conduction, complete study logistics, administration, and application of the quality assurance system based on required industry standards.

  9. Non-invasive in vivo evaluation of in situ forming PLGA implants by benchtop magnetic resonance imaging (BT-MRI) and EPR spectroscopy.

    PubMed

    Kempe, Sabine; Metz, Hendrik; Pereira, Priscila G C; Mäder, Karsten

    2010-01-01

In the present study, we used benchtop magnetic resonance imaging (BT-MRI) for non-invasive and continuous in vivo studies of in situ forming poly(lactide-co-glycolide) (PLGA) implants without the use of contrast agents. Polyethylene glycol (PEG) 400 was used as an alternative solvent to the clinically used NMP. In addition to BT-MRI, we applied electron paramagnetic resonance (EPR) spectroscopy to characterize implant formation and drug delivery processes in vitro and in vivo. We were able to follow key processes of implant formation by EPR and MRI. Because EPR spectra are sensitive to polarity and mobility, we were able to follow the kinetics of the solvent/non-solvent exchange and the PLGA precipitation. Due to the high water affinity of PEG 400, we observed a transient accumulation of water in the implant neighbourhood. Furthermore, we detected by BT-MRI the encapsulation of the implant as a response of the biological system to the polymer, followed by degradation over a period of two months. We could show that MRI in general has the potential to provide new insights into the in vivo fate of in situ forming implants. The study also clearly shows that BT-MRI is a viable and much less expensive alternative to superconducting MRI machines for monitoring drug delivery processes in vivo in small mammals. Copyright 2009 Elsevier B.V. All rights reserved.

  10. In pursuit of precision: the calibration of minds and machines in late nineteenth-century psychology.

    PubMed

    Benschop, R; Draaisma, D

    2000-01-01

    A prominent feature of late nineteenth-century psychology was its intense preoccupation with precision. Precision was at once an ideal and an argument: the quest for precision helped psychology to establish its status as a mature science, sharing a characteristic concern with the natural sciences. We will analyse how psychologists set out to produce precision in 'mental chronometry', the measurement of the duration of psychological processes. In his Leipzig laboratory, Wundt inaugurated an elaborate research programme on mental chronometry. We will look at the problem of calibration of experimental apparatus and will describe the intricate material, literary, and social technologies involved in the manufacture of precision. First, we shall discuss some of the technical problems involved in the measurement of ever shorter time-spans. Next, the Cattell-Berger experiments will help us to argue against the received view that all the precision went into the hardware, and practically none into the social organization of experimentation. Experimenters made deliberate efforts to bring themselves and their subjects under a regime of control and calibration similar to that which reigned over the experimental machinery. In Leipzig psychology, the particular blend of material and social technology resulted in a specific object of study: the generalized mind. We will then show that the distribution of precision in experimental psychology outside Leipzig demanded a concerted effort of instruments, texts, and people. It will appear that the forceful attempts to produce precision and uniformity had some rather paradoxical consequences.

  11. Efficient solution methodology for calibrating the hemodynamic model using functional Magnetic Resonance Imaging (fMRI) measurements.

    PubMed

    Zambri, Brian; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem

    2015-08-01

    Our aim is to propose a numerical strategy for accurately and efficiently retrieving the biophysiological parameters as well as the external stimulus characteristics corresponding to the hemodynamic mathematical model that describes changes in blood flow and blood oxygenation during brain activation. The proposed method employs the TNM-CKF method developed in [1], but in a prediction/correction framework. We present numerical results using both real and synthetic functional Magnetic Resonance Imaging (fMRI) measurements to highlight the performance characteristics of this computational methodology.
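The TNM-CKF details are in the cited work; purely as an illustration of the prediction/correction idea the record mentions, here is a minimal one-dimensional Kalman filter. The state model, noise levels, and data are all assumed for the sketch, not taken from the paper.

```python
import numpy as np

def kalman_1d(z, x0=0.0, p0=1.0, q=1e-3, r=0.1):
    """Minimal 1-D Kalman filter: predict, then correct with each measurement."""
    x, p = x0, p0
    estimates = []
    for zk in z:
        # Prediction: random-walk state model (state persists, uncertainty grows)
        p = p + q
        # Correction: blend prediction with measurement via the Kalman gain
        k = p / (p + r)
        x = x + k * (zk - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(0)
truth = 5.0
z = truth + rng.normal(0.0, 0.3, size=200)   # noisy synthetic measurements
est = kalman_1d(z)
```

The same predict/correct loop generalizes to vector states and nonlinear observation models, which is where cubature-type filters such as the one referenced above come in.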

  12. Pelvis MRI scan

    MedlinePlus

    ... and most often available in the emergency room. Alternative Names MRI - pelvis; MRI - hips; Pelvic MRI with ... any medical emergency or for the diagnosis or treatment of any medical condition. A licensed physician should ...

  13. Spitzer/JWST Cross Calibration: IRAC Observations of Potential Calibrators for JWST

    NASA Astrophysics Data System (ADS)

    Carey, Sean J.; Gordon, Karl D.; Lowrance, Patrick; Ingalls, James G.; Glaccum, William J.; Grillmair, Carl J.; E Krick, Jessica; Laine, Seppo J.; Fazio, Giovanni G.; Hora, Joseph L.; Bohlin, Ralph

    2017-06-01

    We present observations at 3.6 and 4.5 microns using IRAC on the Spitzer Space Telescope of a set of main sequence A stars and white dwarfs that are potential calibrators across the JWST instrument suite. The stars range in brightness from 4.4 to 15 mag in the K band. The calibration observations use a similar redundancy to the observing strategy for the IRAC primary calibrators (Reach et al. 2005) and the photometry is obtained using identical methods and instrumental photometric corrections as those applied to the IRAC primary calibrators (Carey et al. 2009). The resulting photometry is then compared to the predictions based on spectra from the CALSPEC Calibration Database (http://www.stsci.edu/hst/observatory/crds/calspec.html) and the IRAC bandpasses. These observations are part of an ongoing collaboration between IPAC and STScI investigating absolute calibration in the infrared.

  14. A Tool for Creating Regionally Calibrated High-Resolution Land Cover Data Sets for the West African Sahel: Using Machine Learning to Scale Up Hand-Classified Maps in a Data-Sparse Environment

    NASA Astrophysics Data System (ADS)

    Van Gordon, M.; Van Gordon, S.; Min, A.; Sullivan, J.; Weiner, Z.; Tappan, G. G.

    2017-12-01

    Using support vector machine (SVM) learning and high-accuracy hand-classified maps, we have developed a publicly available land cover classification tool for the West African Sahel. Our classifier produces high-resolution and regionally calibrated land cover maps for the Sahel, representing a significant contribution to the data available for this region. Global land cover products are unreliable for the Sahel, and accurate land cover data for the region are sparse. To address this gap, the U.S. Geological Survey and the Regional Center for Agriculture, Hydrology and Meteorology (AGRHYMET) in Niger produced high-quality land cover maps for the region via hand-classification of Landsat images. This method produces highly accurate maps, but the time and labor required constrain the spatial and temporal resolution of the data products. By using these hand-classified maps alongside SVM techniques, we successfully increase the resolution of the land cover maps by 1-2 orders of magnitude, from 2km-decadal resolution to 30m-annual resolution. These high-resolution regionally calibrated land cover datasets, along with the classifier we developed to produce them, lay the foundation for major advances in studies of land surface processes in the region. These datasets will provide more accurate inputs for food security modeling, hydrologic modeling, analyses of land cover change and climate change adaptation efforts. The land cover classification tool we have developed will be publicly available for use in creating additional West Africa land cover datasets with future remote sensing data and can be adapted for use in other parts of the world.

  15. Simultaneous calibration phantom commission and geometry calibration in cone beam CT

    NASA Astrophysics Data System (ADS)

    Xu, Yuan; Yang, Shuai; Ma, Jianhui; Li, Bin; Wu, Shuyu; Qi, Hongliang; Zhou, Linghong

    2017-09-01

    Geometry calibration is a vital step for describing the geometry of a cone beam computed tomography (CBCT) system and is a prerequisite for CBCT reconstruction. In current methods, calibration phantom commission and geometry calibration are divided into two independent tasks. Small errors in ball-bearing (BB) positioning in the phantom-making step will severely degrade the quality of phantom calibration. To solve this problem, we propose an integrated method to simultaneously realize geometry phantom commission and geometry calibration. Instead of assuming the accuracy of the geometry phantom, the integrated method considers BB centers in the phantom as an optimized parameter in the workflow. Specifically, an evaluation phantom and the corresponding evaluation contrast index are used to evaluate geometry artifacts for optimizing the BB coordinates in the geometry phantom. After utilizing particle swarm optimization, the CBCT geometry and BB coordinates in the geometry phantom are calibrated accurately and are then directly used for the next geometry calibration task in other CBCT systems. To evaluate the proposed method, both qualitative and quantitative studies were performed on simulated and realistic CBCT data. The spatial resolution of reconstructed images using dental CBCT can reach up to 15 line pairs cm⁻¹. The proposed method is also superior to the Wiesent method in experiments. This paper shows that the proposed method is attractive for simultaneous and accurate geometry phantom commission and geometry calibration.
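Particle swarm optimization, which the record uses to jointly calibrate BB coordinates and geometry, can be sketched in a bare-bones form. The quadratic "artifact" cost and all constants below are illustrative stand-ins, not the paper's evaluation contrast index:

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=100, seed=0):
    """Bare-bones particle swarm optimization over the box [-1, 1]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # per-particle best positions
    pbest_c = np.array([cost(p) for p in x])
    gbest = pbest[pbest_c.argmin()].copy()           # swarm-wide best position
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, cognitive, social weights
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        c = np.array([cost(p) for p in x])
        better = c < pbest_c
        pbest[better], pbest_c[better] = x[better], c[better]
        gbest = pbest[pbest_c.argmin()].copy()
    return gbest, pbest_c.min()

# Toy "geometry artifact" cost: minimized when BB offsets equal a hidden truth.
truth = np.array([0.3, -0.2, 0.1])
best, best_cost = pso(lambda p: np.sum((p - truth) ** 2), dim=3)
```

In the paper's setting the cost would instead be the contrast index evaluated on a reconstruction, making each cost call far more expensive but the loop identical.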

  16. Reproducibility of Brain Morphometry from Short-Term Repeat Clinical MRI Examinations: A Retrospective Study

    PubMed Central

    Liu, Hon-Man; Chen, Shan-Kai; Chen, Ya-Fang; Lee, Chung-Wei; Yeh, Lee-Ren

    2016-01-01

    Purpose To assess the inter-session reproducibility of automatically segmented MRI-derived measures by FreeSurfer in a group of subjects with normal-appearing MR images. Materials and Methods After retrospectively reviewing a brain MRI database from our institute consisting of 14,758 adults, those subjects who had repeat scans and had no history of neurodegenerative disorders were selected for morphometry analysis using FreeSurfer. A total of 34 subjects were grouped by MRI scanner model. After automatic segmentation using FreeSurfer, label-wise comparison (involving area, thickness, and volume) was performed on all segmented results. An intraclass correlation coefficient was used to estimate the agreement between sessions. The Wilcoxon signed rank test was used to assess the population mean rank differences across sessions. Mean-difference analysis was used to evaluate the difference intervals across scanners. Absolute percent difference was used to estimate the reproducibility errors across the MRI models. The Kruskal-Wallis test was used to determine the across-scanner effect. Results The agreement in segmentation results for area, volume, and thickness measurements of all segmented anatomical labels was generally higher in Signa Excite and Verio models when compared with Sonata and TrioTim models. There were significant rank differences found across sessions in some labels of different measures. Smaller difference intervals in global volume measurements were noted on images acquired by Signa Excite and Verio models. For some brain regions, significant MRI model effects were observed on certain segmentation results. Conclusions Short-term scan-rescan reliability of automatic brain MRI morphometry is feasible in the clinical setting. However, since repeatability of software performance is contingent on the reproducibility of the scanner performance, the scanner performance must be calibrated before conducting such studies or before using such software for retrospective studies.
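The agreement estimate this record relies on, the intraclass correlation coefficient, can be computed directly from a subjects-by-sessions table. Below is a minimal ICC(2,1) (two-way random effects, absolute agreement, single measure) on synthetic scan-rescan volumes; all numbers are illustrative, not the study's data:

```python
import numpy as np

def icc_2_1(y):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    y is an (n_subjects, k_sessions) array, e.g. regional volumes per session."""
    n, k = y.shape
    grand = y.mean()
    row_m = y.mean(axis=1)      # per-subject means
    col_m = y.mean(axis=0)      # per-session means
    msr = k * np.sum((row_m - grand) ** 2) / (n - 1)    # between-subjects mean square
    msc = n * np.sum((col_m - grand) ** 2) / (k - 1)    # between-sessions mean square
    sse = np.sum((y - row_m[:, None] - col_m[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                     # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(1)
true_vol = rng.normal(1000.0, 100.0, size=(34, 1))      # subject-specific "volumes"
scans = true_vol + rng.normal(0.0, 5.0, size=(34, 2))   # two sessions with rescan noise
icc = icc_2_1(scans)
```

With between-subject spread much larger than rescan noise, as here, the ICC approaches 1; scanner-dependent biases would show up as a lower value.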

  17. SPRT Calibration Uncertainties and Internal Quality Control at a Commercial SPRT Calibration Facility

    NASA Astrophysics Data System (ADS)

    Wiandt, T. J.

    2008-06-01

    The Hart Scientific Division of the Fluke Corporation operates two accredited standard platinum resistance thermometer (SPRT) calibration facilities, one at the Hart Scientific factory in Utah, USA, and the other at a service facility in Norwich, UK. The US facility is accredited through National Voluntary Laboratory Accreditation Program (NVLAP), and the UK facility is accredited through UKAS. Both provide SPRT calibrations using similar equipment and procedures, and at similar levels of uncertainty. These uncertainties are among the lowest available commercially. To achieve and maintain low uncertainties, it is required that the calibration procedures be thorough and optimized. However, to minimize customer downtime, it is also important that the instruments be calibrated in a timely manner and returned to the customer. Consequently, subjecting the instrument to repeated calibrations or extensive repeated measurements is not a viable approach. Additionally, these laboratories provide SPRT calibration services involving a wide variety of SPRT designs. These designs behave differently, yet predictably, when subjected to calibration measurements. To this end, an evaluation strategy involving both statistical process control and internal consistency measures is utilized to provide confidence in both the instrument calibration and the calibration process. This article describes the calibration facilities, procedure, uncertainty analysis, and internal quality assurance measures employed in the calibration of SPRTs. Data will be reviewed and generalities will be presented. Finally, challenges and considerations for future improvements will be discussed.

  18. Support vector machine in machine condition monitoring and fault diagnosis

    NASA Astrophysics Data System (ADS)

    Widodo, Achmad; Yang, Bo-Suk

    2007-08-01

    Recently, the issue of machine condition monitoring and fault diagnosis as a part of maintenance systems has become global due to the potential advantages to be gained from reduced maintenance costs, improved productivity and increased machine availability. This paper presents a survey of machine condition monitoring and fault diagnosis using the support vector machine (SVM). It attempts to summarize and review the recent research and developments of SVM in machine condition monitoring and diagnosis. Numerous methods have been developed based on intelligent systems such as artificial neural networks, fuzzy expert systems, condition-based reasoning, random forest, etc. However, the use of SVM for machine condition monitoring and fault diagnosis is still rare. SVM has excellent generalization performance, so it can produce high classification accuracy for machine condition monitoring and diagnosis. Up to 2006, the use of SVM in machine condition monitoring and fault diagnosis tended to develop towards expertise-oriented and problem-oriented domains. Finally, the ability to continually adapt and generate novel ideas for machine condition monitoring and fault diagnosis using SVM remains a subject for future work.

  19. SU-F-J-166: Volumetric Spatial Distortions Comparison for 1.5 Tesla Versus 3 Tesla MRI for Gamma Knife Radiosurgery Scans Using Frame Marker Fusion and Co-Registration Modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neyman, G

    Purpose: To compare typical volumetric spatial distortions for 1.5 Tesla versus 3 Tesla MRI Gamma Knife radiosurgery scans in the frame marker fusion and co-registration frame-less modes. Methods: A Quasar phantom by Modus Medical Devices Inc. with GRID image distortion software was used for measurements of volumetric distortions. 3D volumetric T1-weighted scans of the phantom were produced on 1.5 T Avanto and 3 T Skyra Siemens MRI scanners. The analysis was done two ways: for scans with localizer markers from the Leksell frame and relative to the phantom only (simulated co-registration technique). The phantom grid contained a total of 2002 vertices or control points that were used in the assessment of volumetric geometric distortion for all scans. Results: Volumetric mean absolute spatial deviations relative to the frame localizer markers for the 1.5 and 3 Tesla machines were 1.39 ± 0.15 and 1.63 ± 0.28 mm, with max errors of 1.86 and 2.65 mm respectively. Mean 2D errors from the Gamma Plan were 0.3 and 1.0 mm. For the simulated co-registration technique, the volumetric mean absolute spatial deviations relative to the phantom for the 1.5 and 3 Tesla machines were 0.36 ± 0.08 and 0.62 ± 0.13 mm, with max errors of 0.57 and 1.22 mm respectively. Conclusion: Volumetric spatial distortions are lower for 1.5 Tesla versus 3 Tesla MRI machines localized with markers on frames and significantly lower for co-registration techniques with no frame localization. The results show the advantage of using the co-registration technique for minimizing MRI volumetric spatial distortions, which can be especially important for the steep dose gradient fields typically used in Gamma Knife radiosurgery. Consultant for Elekta AB.

  20. Improving near-infrared prediction model robustness with support vector machine regression: a pharmaceutical tablet assay example.

    PubMed

    Igne, Benoît; Drennen, James K; Anderson, Carl A

    2014-01-01

    Changes in raw materials and process wear and tear can have significant effects on the prediction error of near-infrared calibration models. When the variability that is present during routine manufacturing is not included in the calibration, test, and validation sets, the long-term performance and robustness of the model will be limited. Nonlinearity is a major source of interference. In near-infrared spectroscopy, nonlinearity can arise from light path-length differences that can come from differences in particle size or density. The usefulness of support vector machine (SVM) regression to handle nonlinearity and improve the robustness of calibration models in scenarios where the calibration set did not include all the variability present in the test set was evaluated. Compared to partial least squares (PLS) regression, SVM regression was less affected by physical (particle size) and chemical (moisture) differences. The linearity of the SVM predicted values was also improved. Nevertheless, although visualization and interpretation tools have been developed to enhance the usability of SVM-based methods, work is yet to be done to provide chemometricians in the pharmaceutical industry with a regression method that can supplement PLS-based methods.
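A full SVM regressor requires a quadratic-programming solver; as a compact stand-in that shows why a kernel method tolerates the nonlinearity described above, here is closed-form RBF kernel ridge regression versus an ordinary linear fit on a synthetic curved response (not the paper's tablet spectra):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 80)[:, None]
y = np.sin(3.0 * x[:, 0]) + rng.normal(0.0, 0.05, 80)   # nonlinear synthetic "assay"

# Ordinary linear least squares (analogue of a purely linear calibration)
A = np.hstack([x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
lin_pred = A @ coef

# RBF kernel ridge regression, closed form: alpha = (K + lam*I)^-1 y
def rbf(a, b, gamma=20.0):
    d2 = (a[:, None, 0] - b[None, :, 0]) ** 2
    return np.exp(-gamma * d2)

K = rbf(x, x)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(x)), y)
krr_pred = K @ alpha

lin_rmse = np.sqrt(np.mean((lin_pred - y) ** 2))
krr_rmse = np.sqrt(np.mean((krr_pred - y) ** 2))
```

SVM regression replaces the squared loss with an epsilon-insensitive one, but the key robustness mechanism, fitting in a kernel-induced feature space, is the same.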

  1. Radiomic machine-learning classifiers for prognostic biomarkers of advanced nasopharyngeal carcinoma.

    PubMed

    Zhang, Bin; He, Xin; Ouyang, Fusheng; Gu, Dongsheng; Dong, Yuhao; Zhang, Lu; Mo, Xiaokai; Huang, Wenhui; Tian, Jie; Zhang, Shuixing

    2017-09-10

    We aimed to identify optimal machine-learning methods for radiomics-based prediction of local failure and distant failure in advanced nasopharyngeal carcinoma (NPC). We enrolled 110 patients with advanced NPC. A total of 970 radiomic features were extracted from MRI images for each patient. Six feature selection methods and nine classification methods were evaluated in terms of their performance. We applied 10-fold cross-validation as the criterion for feature selection and classification. We repeated each combination 50 times to obtain the mean area under the curve (AUC) and test error. We observed that the combination methods Random Forest (RF) + RF (AUC, 0.8464 ± 0.0069; test error, 0.3135 ± 0.0088) had the highest prognostic performance, followed by RF + Adaptive Boosting (AdaBoost) (AUC, 0.8204 ± 0.0095; test error, 0.3384 ± 0.0097), and Sure Independence Screening (SIS) + Linear Support Vector Machines (LSVM) (AUC, 0.7883 ± 0.0096; test error, 0.3985 ± 0.0100). Our radiomics study identified optimal machine-learning methods for the radiomics-based prediction of local failure and distant failure in advanced NPC, which could enhance the applications of radiomics in precision oncology and clinical practice. Copyright © 2017 Elsevier B.V. All rights reserved.
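The evaluation loop itself, 10-fold cross-validation scored by AUC, can be sketched without the paper's radiomic features or Random Forest. The nearest-centroid classifier and synthetic data below are illustrative stand-ins:

```python
import numpy as np

def auc(scores, labels):
    """Rank-based AUC: P(score of a random positive > score of a random negative)."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    if len(pos) == 0 or len(neg) == 0:   # degenerate fold: no information
        return 0.5
    # Count pairwise wins; ties count half
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def cv10_auc(X, y, seed=0):
    """Mean AUC over a shuffled 10-fold split with a nearest-centroid classifier."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), 10)
    aucs = []
    for k in range(10):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(10) if j != k])
        mu1 = X[train][y[train] == 1].mean(axis=0)   # class centroids from training folds
        mu0 = X[train][y[train] == 0].mean(axis=0)
        # Score = (distance to class 0) - (distance to class 1): higher means more "class 1"
        s = np.linalg.norm(X[test] - mu0, axis=1) - np.linalg.norm(X[test] - mu1, axis=1)
        aucs.append(auc(s, y[test]))
    return np.mean(aucs)

rng = np.random.default_rng(3)
y = (rng.random(110) < 0.5).astype(int)
X = rng.normal(0.0, 1.0, (110, 20)) + y[:, None] * 0.8   # weakly separable features
mean_auc = cv10_auc(X, y)
```

Repeating the whole loop with different shuffles, as the paper does 50 times, yields the mean and spread of the AUC rather than a single point estimate.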

  2. Stereoscopic Machine-Vision System Using Projected Circles

    NASA Technical Reports Server (NTRS)

    Mackey, Jeffrey R.

    2010-01-01

    A machine-vision system capable of detecting obstacles large enough to damage or trap a robotic vehicle is undergoing development. The system includes (1) a pattern generator that projects concentric circles of laser light forward onto the terrain, (2) a stereoscopic pair of cameras that are aimed forward to acquire images of the circles, (3) a frame grabber and digitizer for acquiring image data from the cameras, and (4) a single-board computer that processes the data. The system is being developed as a prototype of machine-vision systems to enable robotic vehicles ("rovers") on remote planets to avoid craters, large rocks, and other terrain features that could capture or damage the vehicles. Potential terrestrial applications of systems like this one could include terrain mapping, collision avoidance, navigation of robotic vehicles, mining, and robotic rescue. This system is based partly on the same principles as those of a prior stereoscopic machine-vision system in which the cameras acquire images of a single stripe of laser light that is swept forward across the terrain. However, this system is designed to afford improvements over some of the undesirable features of the prior system, including the need for a pan-and-tilt mechanism to aim the laser to generate the swept stripe, ambiguities in interpretation of the single-stripe image, the time needed to sweep the stripe across the terrain and process the data from many images acquired during that time, and difficulty of calibration because of the narrowness of the stripe. In this system, the pattern generator does not contain any moving parts and need not be mounted on a pan-and-tilt mechanism: the pattern of concentric circles is projected steadily in the forward direction. The system calibrates itself by use of data acquired during projection of the concentric-circle pattern onto a known target representing flat ground. The calibration-target image data are stored in the computer memory for use as a reference.

  3. Single element ultrasonic imaging of limb geometry: an in-vivo study with comparison to MRI

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang; Fincke, Jonathan R.; Anthony, Brian W.

    2016-04-01

    Despite advancements in medical imaging, current prosthetic fitting methods remain subjective, operator dependent, and non-repeatable. The standard plaster casting method relies on prosthetist experience and tactile feel of the limb to design the prosthetic socket. Oftentimes, many fitting iterations are required to achieve an acceptable fit. Use of improper socket fittings can lead to painful pathologies including neuromas, inflammation, soft tissue calcification, and pressure sores, often forcing the wearer into a wheelchair and reducing mobility and quality of life. Computer software along with MRI/CT imaging has already been explored to aid the socket design process. In this paper, we explore the use of ultrasound instead of MRI/CT to accurately obtain the underlying limb geometry to assist the prosthetic socket design process. Using a single element ultrasound system, multiple subjects' proximal limbs were imaged using 1, 2.25, and 5 MHz single element transducers. Each ultrasound transducer was calibrated to ensure acoustic exposure within the limits defined by the FDA. To validate image quality, each patient was also imaged with MRI. Fiducial markers visible in both MRI and ultrasound were used to compare the same limb cross-sectional image for each patient. After applying a migration algorithm, B-mode ultrasound cross-sections showed sufficiently high image resolution to characterize the skin and bone boundaries along with the underlying tissue structures.

  4. A machine vision system for micro-EDM based on linux

    NASA Astrophysics Data System (ADS)

    Guo, Rui; Zhao, Wansheng; Li, Gang; Li, Zhiyong; Zhang, Yong

    2006-11-01

    Due to the high precision and good surface quality that it can give, Electrical Discharge Machining (EDM) is potentially an important process for the fabrication of micro-tools and micro-components. However, a number of issues remain unsolved before micro-EDM becomes a reliable process with repeatable results. To deal with the difficulties in the on-line fabrication of micro electrodes and tool wear compensation, a micro-EDM machine vision system is developed with a Charge Coupled Device (CCD) camera, with an optical resolution of 1.61 μm and an overall magnification of 113–729. Based on the Linux operating system, an image capturing program is developed with the V4L2 API, and an image processing program is built using OpenCV. The contour of micro electrodes can be extracted by means of the Canny edge detector. Through system calibration, the diameter of micro electrodes can be measured on-line. Experiments have been carried out to prove its performance, and the sources of measurement error are also analyzed.
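The paper measures electrode diameter from Canny-extracted contours in OpenCV; as a dependency-free stand-in, the sketch below estimates diameter from a single intensity profile by locating the two edges and scaling by the quoted 1.61 μm optical resolution. The thresholding scheme and synthetic profile are assumptions for illustration:

```python
import numpy as np

PIXEL_SIZE_UM = 1.61   # optical resolution quoted in the abstract

def diameter_um(profile, thresh=0.5):
    """Estimate the width of a dark electrode on a bright background from one
    image row: find the first and last pixels below a normalized threshold."""
    p = (profile - profile.min()) / (profile.max() - profile.min())
    dark = np.flatnonzero(p < thresh)
    if dark.size == 0:
        return 0.0
    return (dark[-1] - dark[0] + 1) * PIXEL_SIZE_UM

# Synthetic row: bright background (1.0) with a 60-pixel-wide dark electrode
row = np.ones(200)
row[70:130] = 0.1
d = diameter_um(row)
```

A real Canny-based pipeline instead localizes edges from intensity gradients across the full 2-D contour, which is more robust to illumination gradients than a fixed threshold.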

  5. Strategies for reducing large fMRI data sets for independent component analysis.

    PubMed

    Wang, Ze; Wang, Jiongjiong; Calhoun, Vince; Rao, Hengyi; Detre, John A; Childress, Anna R

    2006-06-01

    In independent component analysis (ICA), principal component analysis (PCA) is generally used to reduce the raw data to a few principal components (PCs) through eigenvector decomposition (EVD) on the data covariance matrix. Although this works for spatial ICA (sICA) on moderately sized fMRI data, it is intractable for temporal ICA (tICA), since typical fMRI data have a high spatial dimension, resulting in an unmanageable data covariance matrix. To solve this problem, two practical data reduction methods are presented in this paper. The first solution is to calculate the PCs of tICA from the PCs of sICA. This approach works well for moderately sized fMRI data; however, it is highly computationally intensive, even intractable, when the number of scans increases. The second solution proposed is to perform PCA decomposition via a cascade recursive least squared (CRLS) network, which provides a uniform data reduction solution for both sICA and tICA. Without the need to calculate the covariance matrix, CRLS extracts PCs directly from the raw data, and the PC extraction can be terminated after computing an arbitrary number of PCs without the need to estimate the whole set of PCs. Moreover, when the whole data set becomes too large to be loaded into the machine memory, CRLS-PCA can save data retrieval time by reading the data once, while the conventional PCA requires numerous data retrieval steps for both covariance matrix calculation and PC extractions. Real fMRI data were used to evaluate the PC extraction precision, computational expense, and memory usage of the presented methods.
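CRLS itself is not reproduced here; the underlying idea, extracting leading PCs directly from the data without ever forming the large spatial covariance matrix, can be sketched with power iteration plus deflation, where each covariance-vector product is computed implicitly as X.T @ (X @ v). The data shape and number of iterations are illustrative:

```python
import numpy as np

def leading_pcs(X, n_pcs=2, iters=200, seed=0):
    """Extract the first n_pcs principal directions of X (n_samples x n_features)
    without building the n_features x n_features covariance matrix: every
    matrix-vector product is computed as X.T @ (X @ v)."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)                  # center the data
    comps = []
    for _ in range(n_pcs):
        v = rng.normal(size=X.shape[1])
        v /= np.linalg.norm(v)
        for _ in range(iters):
            w = Xc.T @ (Xc @ v)              # implicit covariance-vector product
            for c in comps:                  # deflate previously found components
                w -= (w @ c) * c
            v = w / np.linalg.norm(w)
        comps.append(v)
    return np.array(comps)

rng = np.random.default_rng(4)
# 50 "scans" of 100 "voxels" with two strong variance axes (scales 10 and 5)
X = rng.normal(size=(50, 100)) * np.r_[10.0, 5.0, np.ones(98)]
V = leading_pcs(X, n_pcs=2)
```

Like CRLS, this touches only products with the raw data, so the extraction can stop after an arbitrary number of PCs and never materializes the voxel-by-voxel covariance.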

  6. Differentiation of Glioblastoma and Lymphoma Using Feature Extraction and Support Vector Machine.

    PubMed

    Yang, Zhangjing; Feng, Piaopiao; Wen, Tian; Wan, Minghua; Hong, Xunning

    2017-01-01

    Differentiation of glioblastoma multiformes (GBMs) and lymphomas using multi-sequence magnetic resonance imaging (MRI) is an important task that is valuable for treatment planning. However, this task is a challenge because GBMs and lymphomas may have a similar appearance in MRI images. This similarity may lead to misclassification and could affect the treatment results. In this paper, we propose a semi-automatic method based on multi-sequence MRI to differentiate these two types of brain tumors. Our method consists of three steps: 1) the key slice is selected from 3D MRIs and regions of interest (ROIs) are drawn around the tumor region; 2) different features are extracted based on prior clinical knowledge and validated using a t-test; and 3) features that are helpful for classification are used to build an original feature vector and a support vector machine is applied to perform classification. In total, 58 GBM cases and 37 lymphoma cases are used to validate our method. A leave-one-out cross-validation strategy is adopted in our experiments. The global accuracy of our method was determined as 96.84%, which indicates that our method is effective for the differentiation of GBM and lymphoma and can be applied in clinical diagnosis. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
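The per-feature t-test validation step can be sketched with a plain Welch statistic, keeping features whose |t| exceeds a cutoff. The threshold and synthetic data below are illustrative, and the paper's exact test variant is not specified in the abstract:

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic per feature (columns), allowing unequal variances."""
    ma, mb = a.mean(axis=0), b.mean(axis=0)
    va, vb = a.var(axis=0, ddof=1), b.var(axis=0, ddof=1)
    return (ma - mb) / np.sqrt(va / len(a) + vb / len(b))

rng = np.random.default_rng(5)
gbm = rng.normal(0.0, 1.0, (58, 10))   # 58 GBM cases, 10 candidate features
lym = rng.normal(0.0, 1.0, (37, 10))   # 37 lymphoma cases
lym[:, :3] += 1.5                      # only the first 3 features truly differ

t = welch_t(gbm, lym)
selected = np.flatnonzero(np.abs(t) > 2.0)   # crude cutoff, roughly p < 0.05
```

Only the discriminative features survive the cutoff (up to occasional false positives), and these would then feed the SVM classification stage.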

  7. SAR calibration technology review

    NASA Technical Reports Server (NTRS)

    Walker, J. L.; Larson, R. W.

    1981-01-01

    Synthetic Aperture Radar (SAR) calibration technology including a general description of the primary calibration techniques and some of the factors which affect the performance of calibrated SAR systems are reviewed. The use of reference reflectors for measurement of the total system transfer function along with an on-board calibration signal generator for monitoring the temporal variations of the receiver to processor output is a practical approach for SAR calibration. However, preliminary error analysis and previous experimental measurements indicate that reflectivity measurement accuracies of better than 3 dB will be difficult to achieve. This is not adequate for many applications and, therefore, improved end-to-end SAR calibration techniques are required.

  8. Research on self-calibration biaxial autocollimator based on ZYNQ

    NASA Astrophysics Data System (ADS)

    Guo, Pan; Liu, Bingguo; Liu, Guodong; Zhong, Yao; Lu, Binghui

    2018-01-01

    Existing autocollimators are mainly based on computers or internet-connected electronic devices; their precision, measurement range, and resolution are limited, and external displays are needed to show images in real time. Moreover, no autocollimator on the market offers real-time calibration. In this paper, we propose a biaxial autocollimator based on the ZYNQ embedded platform to solve the above problems. Firstly, the traditional optical system is improved and a light path is added for real-time calibration. Then, in order to improve measurement speed, a ZYNQ-based embedded platform combining the Linux operating system with the autocollimator is designed. In this part, image acquisition, image processing, image display, and a Qt-based man-machine interaction interface are implemented. Finally, the system realizes two-dimensional small-angle measurement. Experimental results showed that the proposed method can improve the angle measurement accuracy. At close range (1.5 m), the standard deviation is 0.15" in the horizontal direction of the image and 0.24" in the vertical direction; at long range (10 m), the measurement repeatability is improved by 0.12 in the horizontal direction and 0.3 in the vertical direction.
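The measurement principle behind any autocollimator: a mirror tilt θ deflects the returned beam by 2θ, shifting the focused spot on the detector by x = f·tan(2θ), where f is the objective focal length. A minimal conversion from spot shift to arcseconds, with focal length and pixel pitch as assumed illustrative values (not the paper's hardware):

```python
import math

F_MM = 300.0          # assumed objective focal length in mm (illustrative)
PIXEL_MM = 0.00465    # assumed detector pixel pitch in mm (illustrative)

def tilt_arcsec(shift_pixels):
    """Mirror tilt from spot displacement, inverting x = f * tan(2*theta)."""
    x = shift_pixels * PIXEL_MM
    theta_rad = 0.5 * math.atan(x / F_MM)
    return math.degrees(theta_rad) * 3600.0

a = tilt_arcsec(10.0)   # tilt corresponding to a 10-pixel spot shift
```

With these assumed constants a 10-pixel shift corresponds to roughly 16 arcsec, which makes clear why sub-arcsecond repeatability demands sub-pixel spot localization.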

  9. Surface-Based fMRI-Driven Diffusion Tractography in the Presence of Significant Brain Pathology: A Study Linking Structure and Function in Cerebral Palsy

    PubMed Central

    Cunnington, Ross; Boyd, Roslyn N.; Rose, Stephen E.

    2016-01-01

    Diffusion MRI (dMRI) tractography analyses are difficult to perform in the presence of brain pathology. Automated methods that rely on cortical parcellation for structural connectivity studies often fail, while manually defining regions is extremely time consuming and can introduce human error. Both methods also make assumptions about structure-function relationships that may not hold after cortical reorganisation. Seeding tractography with functional-MRI (fMRI) activation is an emerging method that reduces these confounds, but inherent smoothing of fMRI signal may result in the inclusion of irrelevant pathways. This paper describes a novel fMRI-seeded dMRI-analysis pipeline based on surface-meshes that reduces these issues and utilises machine-learning to generate task specific white matter pathways, minimising the requirement for manually-drawn ROIs. We directly compared this new strategy to a standard voxelwise fMRI-dMRI approach, by investigating correlations between clinical scores and dMRI metrics of thalamocortical and corticomotor tracts in 31 children with unilateral cerebral palsy. The surface-based approach successfully processed more participants (87%) than the voxel-based approach (65%), and provided significantly more-coherent tractography. Significant correlations between dMRI metrics and five clinical scores of function were found for the more superior regions of these tracts. These significant correlations were stronger and more frequently found with the surface-based method (15/20 investigated were significant; R2 = 0.43–0.73) than the voxelwise analysis (2 sig. correlations; 0.38 & 0.49). More restricted fMRI signal, better-constrained tractography, and the novel track-classification method all appeared to contribute toward these differences. PMID:27487011

  10. A Java-based fMRI processing pipeline evaluation system for assessment of univariate general linear model and multivariate canonical variate analysis-based pipelines.

    PubMed

    Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C

    2008-01-01

    As functional magnetic resonance imaging (fMRI) becomes widely used, the demands for evaluation of fMRI processing pipelines and validation of fMRI analysis results are increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics. Thus, it cannot fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. In order to overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrated YALE (a machine learning environment) into Fiswidgets (an fMRI software environment) to obtain system interoperability and applied an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data based on prediction accuracy (classification accuracy) and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed in which four fMRI processing pipelines with GLM and CVA modules such as FSL.FEAT and NPAIRS.CVA were evaluated with the system. The results indicated that (1) the system can compare different fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the rank of pipeline performance is highly dependent on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.

  11. PyMVPA: A python toolbox for multivariate pattern analysis of fMRI data.

    PubMed

    Hanke, Michael; Halchenko, Yaroslav O; Sederberg, Per B; Hanson, Stephen José; Haxby, James V; Pollmann, Stefan

    2009-01-01

    Decoding patterns of neural activity onto cognitive states is one of the central goals of functional brain imaging. Standard univariate fMRI analysis methods, which correlate cognitive and perceptual function with the blood oxygenation-level dependent (BOLD) signal, have proven successful in identifying anatomical regions based on signal increases during cognitive and perceptual tasks. Recently, researchers have begun to explore new multivariate techniques that have proven to be more flexible, more reliable, and more sensitive than standard univariate analysis. Drawing on the field of statistical learning theory, these new classifier-based analysis techniques possess explanatory power that could provide new insights into the functional properties of the brain. However, unlike the wealth of software packages for univariate analyses, there are few packages that facilitate multivariate pattern classification analyses of fMRI data. Here we introduce a Python-based, cross-platform, and open-source software toolbox, called PyMVPA, for the application of classifier-based analysis techniques to fMRI datasets. PyMVPA makes use of Python's ability to access libraries written in a large variety of programming languages and computing environments to interface with the wealth of existing machine learning packages. We present the framework in this paper and provide illustrative examples on its usage, features, and programmability.

  12. PyMVPA: A Python toolbox for multivariate pattern analysis of fMRI data

    PubMed Central

    Hanke, Michael; Halchenko, Yaroslav O.; Sederberg, Per B.; Hanson, Stephen José; Haxby, James V.; Pollmann, Stefan

    2009-01-01

    Decoding patterns of neural activity onto cognitive states is one of the central goals of functional brain imaging. Standard univariate fMRI analysis methods, which correlate cognitive and perceptual function with the blood oxygenation-level dependent (BOLD) signal, have proven successful in identifying anatomical regions based on signal increases during cognitive and perceptual tasks. Recently, researchers have begun to explore new multivariate techniques that have proven to be more flexible, more reliable, and more sensitive than standard univariate analysis. Drawing on the field of statistical learning theory, these new classifier-based analysis techniques possess explanatory power that could provide new insights into the functional properties of the brain. However, unlike the wealth of software packages for univariate analyses, there are few packages that facilitate multivariate pattern classification analyses of fMRI data. Here we introduce a Python-based, cross-platform, and open-source software toolbox, called PyMVPA, for the application of classifier-based analysis techniques to fMRI datasets. PyMVPA makes use of Python's ability to access libraries written in a large variety of programming languages and computing environments to interface with the wealth of existing machine-learning packages. We present the framework in this paper and provide illustrative examples on its usage, features, and programmability. PMID:19184561

  13. In vivo measurements of proton relaxation times in human brain, liver, and skeletal muscle: a multicenter MRI study.

    PubMed

    de Certaines, J D; Henriksen, O; Spisni, A; Cortsen, M; Ring, P B

    1993-01-01

    Quantitative magnetic resonance imaging may offer unique potential for tissue characterization in vivo. In this connection, texture analysis of quantitative MR images may be of special importance. Because evaluation of texture analysis requires large amounts of data, multicenter approaches become mandatory. Within the framework of the BME Concerted Action on Tissue Characterization by MRI and MRS, a pilot multicenter study was launched in order to evaluate the technical problems, including comparability of relaxation time measurements carried out at the individual sites. Human brain, skeletal muscle, and liver were used as models. A total of 218 healthy volunteers were studied. Fifteen MRI scanners with field strengths ranging from 0.08 T to 1.5 T were included. Measurement accuracy was tested on the Eurospin relaxation time test object (TO5), and the obtained calibration curve was used for correction of the in vivo data. The results established that, by following a standardized procedure, comparable quantitative measurements can be obtained in vivo from a number of MR sites. The overall coefficient of variation in vivo was of the same order of magnitude as in ex vivo relaxometry. Thus, it is possible to carry out international multicenter studies on quantitative imaging, provided that quality control with respect to measurement accuracy and calibration of the MR equipment is performed.
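    The phantom-based correction described here amounts to fitting a calibration curve on the test object's known reference values and inverting it for the in vivo data. A minimal sketch, assuming a linear scanner bias; all numbers are invented, not the study's data:

```python
import numpy as np

# Known reference T1 values (ms) of a phantom like the Eurospin TO5,
# and the (hypothetical, biased) values one site measures for them.
reference_t1 = np.array([200.0, 400.0, 600.0, 800.0, 1000.0])
measured_t1  = np.array([230.0, 445.0, 655.0, 870.0, 1080.0])

# Fit measured = a * reference + b, then invert to correct new measurements.
a, b = np.polyfit(reference_t1, measured_t1, 1)

def correct(t1_measured):
    """Map a scanner reading back onto the reference scale."""
    return (t1_measured - b) / a

in_vivo = np.array([980.0, 540.0])   # uncorrected in vivo T1 values (ms)
corrected = correct(in_vivo)
print(a, b, corrected)
```

    Applying each site's own inverse calibration is what makes values from fifteen different scanners comparable in a pooled analysis.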

  14. 16. Interior, Machine Shop, Roundhouse Machine Shop Extension, Southern Pacific ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    16. Interior, Machine Shop, Roundhouse Machine Shop Extension, Southern Pacific Railroad Carlin Shops, view to south (90mm lens). Note the large segmental-arched doorway to move locomotives in and out of Machine Shop. - Southern Pacific Railroad, Carlin Shops, Roundhouse Machine Shop Extension, Foot of Sixth Street, Carlin, Elko County, NV

  15. Numerical calibration of the stable poisson loaded specimen

    NASA Technical Reports Server (NTRS)

    Ghosn, Louis J.; Calomino, Anthony M.; Brewer, Dave N.

    1992-01-01

    An analytical calibration of the Stable Poisson Loaded (SPL) specimen is presented. The specimen configuration is similar to the ASTM E-561 compact-tension specimen with displacement controlled wedge loading used for R-Curve determination. The crack mouth opening displacements (CMODs) are produced by the diametral expansion of an axially compressed cylindrical pin located in the wake of a machined notch. Due to the unusual loading configuration, a three-dimensional finite element analysis was performed with gap elements simulating the contact between the pin and specimen. In this report, stress intensity factors, CMODs, and crack displacement profiles are reported for different crack lengths and different contacting conditions. It was concluded that the computed stress intensity factor decreases sharply with increasing crack length, thus making the SPL specimen configuration attractive for fracture testing of brittle, high modulus materials.

  16. Longitudinal MRI assessment: the identification of relevant features in the development of Posterior Fossa Syndrome in children

    NASA Astrophysics Data System (ADS)

    Spiteri, M.; Lewis, E.; Windridge, D.; Avula, S.

    2015-03-01

    Up to 25% of children who undergo brain tumour resection surgery in the posterior fossa develop posterior fossa syndrome (PFS). This syndrome is characterised by mutism and disturbance in speech. Our hypothesis is that there is a correlation between PFS and the occurrence of hypertrophic olivary degeneration (HOD) in structures within the posterior fossa known as the inferior olivary nuclei (ION). HOD is exhibited as an increase in size and intensity of the ION on an MR image. Intra-operative MRI (IoMRI) is used during surgical procedures at the Alder Hey Children's Hospital, Liverpool, England, in the treatment of posterior fossa tumours and allows visualisation of the brain during surgery. The final MR scan on the IoMRI allows early assessment of the ION immediately after the surgical procedure. The longitudinal MRI data of 28 patients were analysed in a collaborative study with Alder Hey Children's Hospital, in order to identify the most relevant imaging features that relate to the development of PFS, specifically related to HOD. A semi-automated segmentation process was carried out to delineate the ION on each MRI. Feature selection techniques were used to identify the most relevant features amongst the MRI data, demographics and clinical data provided by the hospital. A support vector machine (SVM) was used to analyse the discriminative ability of the selected features. The results indicate the presence of HOD as the most discriminative feature correlating with the development of PFS, followed by the change in intensity and size of the ION and whether HOD occurred bilaterally or unilaterally.

  17. High-precision and low-cost vibration generator for low-frequency calibration system

    NASA Astrophysics Data System (ADS)

    Li, Rui-Jun; Lei, Ying-Jun; Zhang, Lian-Sheng; Chang, Zhen-Xin; Fan, Kuang-Chao; Cheng, Zhen-Ying; Hu, Peng-Hao

    2018-03-01

    Low-frequency vibration is one of the harmful factors that affect the accuracy of micro-/nano-measuring machines, because its amplitude is very small and it is difficult to avoid. In this paper, a low-cost and high-precision vibration generator was developed to calibrate an optical accelerometer, which is self-designed to detect low-frequency vibration. A piezoelectric actuator is used as the vibration exciter, a leaf spring made of beryllium copper is used as an elastic component, and a high-resolution, low-thermal-drift eddy current sensor is applied to investigate the vibrator's performance. Experimental results demonstrate that the vibration generator can achieve steady output displacement over a frequency range from 0.6 Hz to 50 Hz, an analytical displacement resolution of 3.1 nm and an acceleration range from 3.72 mm s-2 to 1935.41 mm s-2 with a relative standard deviation less than 1.79%. The effectiveness of the high-precision and low-cost vibration generator was verified by calibrating our optical accelerometer.
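    For sinusoidal motion the displacement and acceleration amplitudes are linked by a = (2*pi*f)^2 * X, which is how such a generator trades micrometre-scale displacement for the quoted accelerations. A small sketch; pairing the abstract's 1935.41 mm/s^2 figure with the 50 Hz endpoint is an assumption for illustration:

```python
import math

def accel_amplitude(freq_hz, disp_amp_m):
    """Peak acceleration of sinusoidal motion x(t) = X*sin(2*pi*f*t)."""
    return (2 * math.pi * freq_hz) ** 2 * disp_amp_m

def disp_for_accel(freq_hz, accel_m_s2):
    """Displacement amplitude needed for a given peak acceleration."""
    return accel_m_s2 / (2 * math.pi * freq_hz) ** 2

# Displacement amplitude needed for 1935.41 mm/s^2 at 50 Hz
# (figures from the abstract; the pairing is an assumption).
x_needed = disp_for_accel(50.0, 1.93541)
print(x_needed)   # metres
```

    The required amplitude comes out in the tens-of-micrometres range, within reach of a piezoelectric actuator, while at 0.6 Hz the same actuator stroke yields only millimetre-per-second-squared accelerations.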

  18. Larger Optics and Improved Calibration Techniques for Small Satellite Observations with the ERAU OSCOM System

    NASA Astrophysics Data System (ADS)

    Bilardi, S.; Barjatya, A.; Gasdia, F.

    OSCOM, Optical tracking and Spectral characterization of CubeSats for Operational Missions, is a system capable of providing time-resolved satellite photometry using commercial-off-the-shelf (COTS) hardware and custom tracking and analysis software. This system has acquired photometry of objects as small as CubeSats using a Celestron 11” RASA and an inexpensive CMOS machine vision camera. For satellites with known shapes, these light curves can be used to verify a satellite’s attitude and the state of its deployed solar panels or antennae. While the OSCOM system can successfully track satellites and produce light curves, there is ongoing improvement towards increasing its automation while supporting additional mounts and telescopes. A newly acquired Celestron 14” Edge HD can be used with a Starizona Hyperstar to increase the SNR for small objects as well as extend beyond the limiting magnitude of the 11” RASA. OSCOM currently corrects instrumental brightness measurements for satellite range and observatory site average atmospheric extinction, but calibrated absolute brightness is required to determine information about satellites other than their spin rate, such as surface albedo. A calibration method that automatically detects and identifies background stars can use their catalog magnitudes to calibrate the brightness of the satellite in the image. We present a photometric light curve from both the 14” Edge HD and 11” RASA optical systems as well as plans for a calibration method that will perform background star photometry to efficiently determine calibrated satellite brightness in each frame.
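    Calibrating against catalog stars in the same frame reduces to the standard differential-photometry relation. A minimal sketch with hypothetical fluxes; the background-subtraction and star-identification steps the abstract describes are omitted:

```python
import math

def satellite_magnitude(flux_sat, flux_star, star_catalog_mag):
    """Calibrate satellite brightness against a background star in the frame.

    Uses the standard differential-photometry relation
        m_sat = m_star - 2.5 * log10(F_sat / F_star),
    where the fluxes are background-subtracted instrumental counts.
    """
    return star_catalog_mag - 2.5 * math.log10(flux_sat / flux_star)

# Hypothetical frame: satellite 4x fainter than a catalog mag-9.0 star.
m = satellite_magnitude(flux_sat=2500.0, flux_star=10000.0,
                        star_catalog_mag=9.0)
print(m)
```

    Because the star and satellite are observed through (nearly) the same atmosphere in the same exposure, extinction largely cancels, which is what makes per-frame calibration attractive compared with a site-average extinction model.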

  19. Absolute radiometric calibration of Landsat using a pseudo invariant calibration site

    USGS Publications Warehouse

    Helder, D.; Thome, K.J.; Mishra, N.; Chander, G.; Xiong, Xiaoxiong; Angal, A.; Choi, Tae-young

    2013-01-01

    Pseudo invariant calibration sites (PICS) have been used for on-orbit radiometric trending of optical satellite systems for more than 15 years. This approach to vicarious calibration has demonstrated a high degree of reliability and repeatability at the level of 1-3% depending on the site, spectral channel, and imaging geometries. A variety of sensors have used this approach for trending because it is broadly applicable and easy to implement. Models to describe the surface reflectance properties, as well as the intervening atmosphere have also been developed to improve the precision of the method. However, one limiting factor of using PICS is that an absolute calibration capability has not yet been fully developed. Because of this, PICS are primarily limited to providing only long term trending information for individual sensors or cross-calibration opportunities between two sensors. This paper builds an argument that PICS can be used more extensively for absolute calibration. To illustrate this, a simple empirical model is developed for the well-known Libya 4 PICS based on observations by Terra MODIS and EO-1 Hyperion. The model is validated by comparing model predicted top-of-atmosphere reflectance values to actual measurements made by the Landsat ETM+ sensor reflective bands. Following this, an outline is presented to develop a more comprehensive and accurate PICS absolute calibration model that can be Système international d'unités (SI) traceable. These initial concepts suggest that absolute calibration using PICS is possible on a broad scale and can lead to improved on-orbit calibration capabilities for optical satellite sensors.

  20. Machine characterization based on an abstract high-level language machine

    NASA Technical Reports Server (NTRS)

    Saavedra-Barrera, Rafael H.; Smith, Alan Jay; Miya, Eugene

    1989-01-01

    Measurements are presented for a large number of machines ranging from small workstations to supercomputers. The authors combine these measurements into groups of parameters which relate to specific aspects of the machine implementation, and use these groups to provide overall machine characterizations. The authors also define the concept of pershapes, which represent the level of performance of a machine for different types of computation. A metric based on pershapes is introduced that provides a quantitative way of measuring how similar two machines are in terms of their performance distributions. The metric is related to the extent to which pairs of machines have varying relative performance levels depending on which benchmark is used.

  1. Learning a common dictionary for subject-transfer decoding with resting calibration.

    PubMed

    Morioka, Hiroshi; Kanemura, Atsunori; Hirayama, Jun-ichiro; Shikauchi, Manabu; Ogawa, Takeshi; Ikeda, Shigeyuki; Kawanabe, Motoaki; Ishii, Shin

    2015-05-01

    Brain signals measured over a series of experiments have inherent variability because of different physical and mental conditions among multiple subjects and sessions. Such variability complicates the analysis of data from multiple subjects and sessions in a consistent way, and degrades the performance of subject-transfer decoding in a brain-machine interface (BMI). To accommodate the variability in brain signals, we propose 1) a method for extracting spatial bases (or a dictionary) shared by multiple subjects, by employing a signal-processing technique of dictionary learning modified to compensate for variations between subjects and sessions, and 2) an approach to subject-transfer decoding that uses the resting-state activity of a previously unseen target subject as calibration data for compensating for variations, eliminating the need for a standard calibration based on task sessions. Applying our methodology to a dataset of electroencephalography (EEG) recordings during a selective visual-spatial attention task from multiple subjects and sessions, where the variability compensation was essential for reducing the redundancy of the dictionary, we found that the extracted common brain activities were reasonable in the light of neuroscience knowledge. The applicability to subject-transfer decoding was confirmed by improved performance over existing decoding methods. These results suggest that analyzing multisubject brain activities on common bases by the proposed method enables information sharing across subjects with low-burden resting calibration, and is effective for practical use of BMI in variable environments. Copyright © 2015 Elsevier Inc. All rights reserved.
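    The resting-calibration idea, standardizing a new subject's task data using statistics from that subject's resting activity instead of a task-based calibration session, can be sketched as follows. The per-channel gains, offsets and dimensions are invented for illustration; this is not the paper's dictionary-learning model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: per-channel gain and offset differ between
# subjects/sessions, and resting-state statistics are used to
# standardize task data before a subject-transfer decoder is applied.
n_channels, n_rest, n_task = 8, 200, 400
gain = rng.uniform(0.5, 2.0, n_channels)       # subject-specific scaling
offset = rng.uniform(-1.0, 1.0, n_channels)    # subject-specific baseline

rest = rng.standard_normal((n_rest, n_channels)) * gain + offset
task = (rng.standard_normal((n_task, n_channels)) + 0.5) * gain + offset

# Calibrate with resting statistics only -- no task-based calibration run.
mu, sd = rest.mean(axis=0), rest.std(axis=0)
task_cal = (task - mu) / sd

# After calibration the task effect (+0.5) is comparable across channels.
print(task_cal.mean(axis=0))
```

    A decoder trained on other subjects' calibrated data can then be applied to the new subject directly, which is the low-burden transfer setting the study targets.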

  2. [Fusion of MRI, fMRI and intraoperative MRI data. Methods and clinical significance exemplified by neurosurgical interventions].

    PubMed

    Moche, M; Busse, H; Dannenberg, C; Schulz, T; Schmitgen, A; Trantakis, C; Winkler, D; Schmidt, F; Kahn, T

    2001-11-01

    The aim of this work was to realize and clinically evaluate an image fusion platform for the integration of preoperative MRI and fMRI data into the intraoperative images of an interventional MRI system, with a focus on neurosurgical procedures. A vertically open 0.5 T MRI scanner was equipped with a dedicated navigation system enabling the registration of additional imaging modalities (MRI, fMRI, CT) with the intraoperatively acquired data sets. These merged image data served as the basis for interventional planning and multimodal navigation. So far, the system has been used in 70 neurosurgical interventions (13 of which involved image data fusion, requiring 15 minutes of extra time). The augmented navigation system is characterized by a higher frame rate and a higher image quality as compared to the system-integrated navigation based on continuously acquired (near) real-time images. Patient movement and tissue shifts can be immediately detected by monitoring the morphological differences between both navigation scenes. The multimodal image fusion allowed a refined navigation planning, especially for the resection of deeply seated brain lesions or pathologies close to eloquent areas. Augmented intraoperative orientation and instrument guidance improve the safety and accuracy of neurosurgical interventions.

  3. Reference tissue quantification of DCE-MRI data without a contrast agent calibration

    NASA Astrophysics Data System (ADS)

    Walker-Samuel, Simon; Leach, Martin O.; Collins, David J.

    2007-02-01

    The quantification of dynamic contrast-enhanced (DCE) MRI data conventionally requires a conversion from signal intensity to contrast agent concentration by measuring a change in the tissue longitudinal relaxation rate, R1. In this paper, it is shown that the use of a spoiled gradient-echo acquisition sequence (optimized so that signal intensity scales linearly with contrast agent concentration) in conjunction with a reference tissue-derived vascular input function (VIF), avoids the need for the conversion to Gd-DTPA concentration. This study evaluates how to optimize such sequences and which dynamic time-series parameters are most suitable for this type of analysis. It is shown that signal difference and relative enhancement provide useful alternatives when full contrast agent quantification cannot be achieved, but that pharmacokinetic parameters derived from both contain sources of error (such as those caused by differences between reference tissue and region of interest proton density and native T1 values). It is shown in a rectal cancer study that these sources of uncertainty are smaller when using signal difference, compared with relative enhancement (15 ± 4% compared with 33 ± 4%). Both of these uncertainties are of the order of those associated with the conversion to Gd-DTPA concentration, according to literature estimates.
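    The two dynamic time-series parameters compared in the study are simple functions of the pre-contrast baseline. A minimal sketch with a hypothetical enhancement curve:

```python
import numpy as np

# Hypothetical DCE signal for one region of interest (arbitrary units).
signal = np.array([100.0, 101.0, 140.0, 180.0, 170.0, 160.0])
n_baseline = 2                     # number of pre-contrast time points

s0 = signal[:n_baseline].mean()    # baseline (pre-contrast) signal

signal_difference    = signal - s0            # SD(t) = S(t) - S0
relative_enhancement = (signal - s0) / s0     # RE(t) = (S(t) - S0) / S0

print(signal_difference, relative_enhancement)
```

    With a spoiled gradient-echo sequence optimized so that signal scales linearly with contrast agent concentration, signal difference tracks concentration directly, whereas relative enhancement also divides by the baseline and therefore inherits the region's proton density and native T1 differences, consistent with the larger uncertainty the study reports for it.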

  4. MRI Scans

    MedlinePlus

    Magnetic resonance imaging (MRI) uses a large magnet and radio waves to look at organs and structures inside your body. Health care professionals use MRI scans to diagnose a variety of conditions, from torn ...

  5. Self-Calibrating Pressure Transducer

    NASA Technical Reports Server (NTRS)

    Lueck, Dale E. (Inventor)

    2006-01-01

    A self-calibrating pressure transducer is disclosed. The device uses an embedded zirconia membrane which pumps a determined quantity of oxygen into the device. The associated pressure can be determined, and thus, the transducer pressure readings can be calibrated. The zirconia membrane obtains oxygen from the surrounding environment when possible. Otherwise, an oxygen reservoir or other source is utilized. In another embodiment, a reversible fuel cell assembly is used to pump oxygen and hydrogen into the system. Since a known amount of gas is pumped across the cell, the pressure produced can be determined, and thus, the device can be calibrated. An isolation valve system is used to allow the device to be calibrated in situ. Calibration is optionally automated so that calibration can be continuously monitored. The device is preferably a fully integrated MEMS device. Since the device can be calibrated without removing it from the process, reductions in costs and down time are realized.
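    The calibration principle, that a known quantity of pumped gas in a known volume implies a computable reference pressure, is just the ideal gas law. A sketch with hypothetical cavity values (the patent's actual geometry is not given here):

```python
R = 8.314462618   # J/(mol*K), molar gas constant

def pressure_from_pumped_gas(n_mol, volume_m3, temp_k):
    """Ideal-gas pressure produced by pumping n moles into a sealed volume."""
    return n_mol * R * temp_k / volume_m3

# Hypothetical calibration step: 1e-6 mol of O2 pumped into a 1 cm^3
# cavity at 300 K gives the reference pressure against which the
# transducer reading is compared.
p_ref = pressure_from_pumped_gas(1e-6, 1e-6, 300.0)
print(p_ref)   # Pa
```

    Comparing the transducer's reading against p_ref at several pumped quantities yields an in-situ calibration curve without removing the device from the process.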

  6. Humanizing machines: Anthropomorphization of slot machines increases gambling.

    PubMed

    Riva, Paolo; Sacchi, Simona; Brambilla, Marco

    2015-12-01

    Do people gamble more on slot machines if they think that they are playing against humanlike minds rather than mathematical algorithms? Research has shown that people have a strong cognitive tendency to imbue humanlike mental states to nonhuman entities (i.e., anthropomorphism). The present research tested whether anthropomorphizing slot machines would increase gambling. Four studies manipulated slot machine anthropomorphization and found that exposing people to an anthropomorphized description of a slot machine increased gambling behavior and reduced gambling outcomes. Such findings emerged using tasks that focused on gambling behavior (Studies 1 to 3) as well as in experimental paradigms that included gambling outcomes (Studies 2 to 4). We found that gambling outcomes decrease because participants primed with the anthropomorphic slot machine gambled more (Study 4). Furthermore, we found that high-arousal positive emotions (e.g., feeling excited) played a role in the effect of anthropomorphism on gambling behavior (Studies 3 and 4). Our research indicates that the psychological process of gambling-machine anthropomorphism can be advantageous for the gaming industry; however, this may come at great expense for gamblers' (and their families') economic resources and psychological well-being. (c) 2015 APA, all rights reserved.

  7. Textural kinetics: a novel dynamic contrast-enhanced (DCE)-MRI feature for breast lesion classification.

    PubMed

    Agner, Shannon C; Soman, Salil; Libfeld, Edward; McDonald, Margie; Thomas, Kathleen; Englander, Sarah; Rosen, Mark A; Chin, Deanna; Nosher, John; Madabhushi, Anant

    2011-06-01

    Dynamic contrast-enhanced (DCE)-magnetic resonance imaging (MRI) of the breast has emerged as an adjunct imaging tool to conventional X-ray mammography due to its high detection sensitivity. Despite the increasing use of breast DCE-MRI, specificity in distinguishing malignant from benign breast lesions is low, and interobserver variability in lesion classification is high. The novel contribution of this paper is in the definition of a new DCE-MRI descriptor that we call textural kinetics, which attempts to capture spatiotemporal changes in breast lesion texture in order to distinguish malignant from benign lesions. We qualitatively and quantitatively demonstrated on 41 breast DCE-MRI studies that textural kinetic features outperform signal intensity kinetics and lesion morphology features in distinguishing benign from malignant lesions. A probabilistic boosting tree (PBT) classifier in conjunction with textural kinetic descriptors yielded an accuracy of 90%, sensitivity of 95%, specificity of 82%, and an area under the curve (AUC) of 0.92. Graph embedding, used for qualitative visualization of a low-dimensional representation of the data, showed the best separation between benign and malignant lesions when using textural kinetic features. The PBT classifier results and trends were also corroborated via a support vector machine classifier which showed that textural kinetic features outperformed the morphological, static texture, and signal intensity kinetics descriptors. When textural kinetic attributes were combined with morphologic descriptors, the resulting PBT classifier yielded 89% accuracy, 99% sensitivity, 76% specificity, and an AUC of 0.91.

  8. Rey's Auditory Verbal Learning Test scores can be predicted from whole brain MRI in Alzheimer's disease.

    PubMed

    Moradi, Elaheh; Hallikainen, Ilona; Hänninen, Tuomo; Tohka, Jussi

    2017-01-01

    Rey's Auditory Verbal Learning Test (RAVLT) is a powerful neuropsychological tool for testing episodic memory, which is widely used for cognitive assessment in dementia and pre-dementia conditions. Several studies have shown that an impairment in RAVLT scores reflects well the underlying pathology caused by Alzheimer's disease (AD), thus making RAVLT an effective early marker to detect AD in persons with memory complaints. We investigated the association between RAVLT scores (RAVLT Immediate and RAVLT Percent Forgetting) and the structural brain atrophy caused by AD. The aim was to comprehensively study to what extent the RAVLT scores are predictable based on structural magnetic resonance imaging (MRI) data using machine learning approaches, as well as to find the most important brain regions for the estimation of RAVLT scores. For this, we built a predictive model to estimate RAVLT scores from gray matter density via an elastic net penalized linear regression model. The proposed approach provided highly significant cross-validated correlation between the estimated and observed RAVLT Immediate (R = 0.50) and RAVLT Percent Forgetting (R = 0.43) in a dataset consisting of 806 AD, mild cognitive impairment (MCI), or healthy subjects. In addition, the selected machine learning method provided more accurate estimates of RAVLT scores than the relevance vector regression used earlier for the estimation of RAVLT based on MRI data. The top predictors were medial temporal lobe structures and amygdala for the estimation of RAVLT Immediate, and angular gyrus, hippocampus and amygdala for the estimation of RAVLT Percent Forgetting. Further, the conversion of MCI subjects to AD within 3 years could be predicted based on either observed or estimated RAVLT scores with an accuracy comparable to MRI-based biomarkers.
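    The penalized model used for the MRI-to-RAVLT regression can be sketched with a naive coordinate-descent elastic net on synthetic data. The solver below is a bare illustration of the penalized objective, not the study's pipeline; dimensions and coefficients are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

def elastic_net(X, y, alpha=0.1, l1_ratio=0.5, n_iter=200):
    """Naive coordinate-descent elastic net (a sketch, not a tuned solver).

    Minimizes (1/2n)||y - Xw||^2 + alpha*l1_ratio*||w||_1
                                 + 0.5*alpha*(1 - l1_ratio)*||w||^2.
    """
    n, p = X.shape
    w = np.zeros(p)
    l1, l2 = alpha * l1_ratio, alpha * (1.0 - l1_ratio)
    col_sq = (X ** 2).sum(axis=0) / n
    r = y - X @ w
    for _ in range(n_iter):
        for j in range(p):
            r = r + X[:, j] * w[j]          # remove feature j's contribution
            rho = X[:, j] @ r / n
            # Soft-thresholding update for the combined L1/L2 penalty.
            w[j] = np.sign(rho) * max(abs(rho) - l1, 0.0) / (col_sq[j] + l2)
            r = r - X[:, j] * w[j]
    return w

# Toy stand-in for "gray matter density -> RAVLT score": 100 subjects,
# 30 voxels, only the first 3 carrying signal.
X = rng.standard_normal((100, 30))
true_w = np.zeros(30)
true_w[:3] = [2.0, -1.5, 1.0]
y = X @ true_w + 0.1 * rng.standard_normal(100)

w = elastic_net(X, y)
print(w[:5])
```

    The L1 term zeroes out uninformative voxels (which is how the study can name "top predictor" regions), while the L2 term stabilizes the fit when predictive voxels are correlated, as neighbouring gray matter voxels are.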

  9. An Automated and Intelligent Medical Decision Support System for Brain MRI Scans Classification.

    PubMed

    Siddiqui, Muhammad Faisal; Reza, Ahmed Wasif; Kanesan, Jeevan

    2015-01-01

    A wide interest has been observed in medical health care applications that interpret neuroimaging scans by machine learning systems. This research proposes an intelligent, automatic, accurate, and robust classification technique to classify a human brain magnetic resonance image (MRI) as normal or abnormal, in order to reduce human error in identifying diseases in brain MRIs. In this study, fast discrete wavelet transform (DWT), principal component analysis (PCA), and least squares support vector machine (LS-SVM) are used as basic components. Firstly, fast DWT is employed to extract the salient features of the brain MRI, followed by PCA, which reduces the dimensionality of the features. These reduced feature vectors also shrink memory storage consumption by 99.5%. At last, an advanced classification technique based on LS-SVM is applied to brain MR image classification using the reduced features. To improve efficiency, LS-SVM is used with a non-linear radial basis function (RBF) kernel. The proposed algorithm intelligently determines the optimized values of the hyper-parameters of the RBF kernel, and k-fold stratified cross validation is applied to enhance the generalization of the system. The method was tested on benchmark datasets of T1-weighted and T2-weighted scans from 340 patients. From the analysis of experimental results and performance comparisons, it is observed that the proposed medical decision support system outperformed all other modern classifiers and achieves a 100% accuracy rate (specificity/sensitivity 100%/100%). Furthermore, in terms of computation time, the proposed technique is significantly faster than recent well-known methods, improving efficiency by 71%, 3%, and 4% in the feature extraction, feature reduction, and classification stages, respectively. These results indicate that the proposed well-trained machine learning system has the potential to make accurate predictions about brain abnormalities from the MRI scans.
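    The DWT -> PCA -> classifier pipeline can be sketched end-to-end in a few lines. Here a one-level Haar transform, an SVD-based PCA, and a nearest-centroid rule stand in for the paper's fast DWT, PCA, and LS-SVM; all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

def haar_dwt_1d(x):
    """One-level Haar transform: approximation + detail coefficients."""
    x = x.reshape(-1, 2)
    return np.concatenate([(x[:, 0] + x[:, 1]) / np.sqrt(2),
                           (x[:, 0] - x[:, 1]) / np.sqrt(2)])

def pca_reduce(X, k):
    """Project onto the top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T

# Toy "slices": 60 flattened images of 64 pixels, two classes differing
# in a low-frequency component (a stand-in for normal vs abnormal scans).
n, d = 60, 64
y = np.repeat([0, 1], n // 2)
X = rng.standard_normal((n, d))
X[y == 1] += np.sin(np.linspace(0, np.pi, d)) * 1.2

feats = np.array([haar_dwt_1d(row) for row in X])   # feature extraction
Z = pca_reduce(feats, k=5)                          # feature reduction

# Nearest-centroid classifier as a simple stand-in for the LS-SVM stage.
c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1)
        < np.linalg.norm(Z - c0, axis=1)).astype(int)
train_acc = (pred == y).mean()
print(train_acc)
```

    The PCA stage is what delivers the drastic storage reduction the abstract reports: only k coefficients per image are retained instead of the full wavelet representation.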

  10. MR Scanner Systems Should Be Adequately Characterized in Diffusion-MRI of the Breast

    PubMed Central

    Giannelli, Marco; Sghedoni, Roberto; Iacconi, Chiara; Iori, Mauro; Traino, Antonio Claudio; Guerrisi, Maria; Mascalchi, Mario; Toschi, Nicola; Diciotti, Stefano

    2014-01-01

    Breast imaging represents a relatively recent and promising field of application of quantitative diffusion-MRI techniques. In view of the importance of guaranteeing and assessing its reliability in clinical as well as research settings, the aim of this study was to specifically characterize how the main MR scanner system-related factors affect quantitative measurements in diffusion-MRI of the breast. In particular, phantom acquisitions were performed on three 1.5 T MR scanner systems by different manufacturers, all equipped with a dedicated multi-channel breast coil as well as acquisition sequences for diffusion-MRI of the breast. We assessed the accuracy, inter-scan and inter-scanner reproducibility of the mean apparent diffusion coefficient measured along the main orthogonal directions (ADC) as well as of diffusion-tensor imaging (DTI)-derived mean diffusivity (MD) measurements. Additionally, we estimated the spatial non-uniformity of ADC (NU_ADC) and MD (NU_MD) maps. We showed that the signal-to-noise ratio as well as the overall calibration of the high-strength diffusion gradient system in typical acquisition sequences for diffusion-MRI of the breast varied across MR scanner systems, introducing systematic bias in the measurements of diffusion indices. While ADC and MD values were not appreciably different from each other, they substantially varied across MR scanner systems. The mean of the accuracies of measured ADC and MD was in the range [−2.3%, 11.9%], and the mean of the coefficients of variation for ADC and MD measurements across MR scanner systems was 6.8%. The coefficient of variation for repeated measurements of both ADC and MD was <1%, while NU_ADC and NU_MD values were <4%. Our results highlight that MR scanner system-related factors can substantially affect quantitative diffusion-MRI of the breast. Therefore, a specific quality control program for assessing and monitoring the performance of MR scanner systems for diffusion-MRI of the breast is recommended.

  11. Concurrent recording of RF pulses and gradient fields - comprehensive field monitoring for MRI.

    PubMed

    Brunner, David O; Dietrich, Benjamin E; Çavuşoğlu, Mustafa; Wilm, Bertram J; Schmid, Thomas; Gross, Simon; Barmet, Christoph; Pruessmann, Klaas P

    2016-09-01

    Reconstruction of MRI data is based on exact knowledge of all magnetic field dynamics, since the interplay of RF and gradient pulses generates the signal, defines the contrast and forms the basis of resolution in spatial and spectral dimensions. Deviations caused by various sources, such as system imperfections, delays, eddy currents, drifts or externally induced fields, can therefore critically limit the accuracy of MRI examinations. This is true especially at ultra-high fields, because many error terms scale with the main field strength, and higher available SNR renders even smaller errors relevant. Higher baseline field also often requires higher acquisition bandwidths and faster signal encoding, increasing hardware demands and the severity of many types of hardware imperfection. To address field imperfections comprehensively, in this work we propose to expand the concept of magnetic field monitoring to also encompass the recording of RF fields. In this way, all dynamic magnetic fields relevant for spin evolution are covered, including low- to audio-frequency magnetic fields as produced by main magnets, gradients and shim systems, as well as RF pulses generated with single- and multiple-channel transmission systems. The proposed approach permits field measurements concurrently with actual MRI procedures on a strict common time base. The combined measurement is achieved with an array of miniaturized field probes that measure low- to audio-frequency fields via (19)F NMR and simultaneously pick up RF pulses in the MRI system's (1)H transmit band. Field recordings can form the basis of system calibration, retrospective correction of imaging data or closed-loop feedback correction, all of which hold potential to render MRI more robust and relax hardware requirements. The proposed approach is demonstrated for a range of imaging methods performed on a 7 T human MRI system, including accelerated multiple-channel RF pulses. Copyright © 2015 John Wiley & Sons, Ltd.

  12. SUMS calibration test report

    NASA Technical Reports Server (NTRS)

    Robertson, G.

    1982-01-01

    Calibration was performed on the shuttle upper atmosphere mass spectrometer (SUMS). The results of the calibration and the as-run test procedures are presented. The output data are described, and engineering data conversion factors, tables and curves, and calibration of instrument gauges are included. Static calibration results are given, including: instrument sensitivity versus external pressure for N2 and O2, data from each calibration scan, data plots for N2 and O2, sensitivity of SUMS at the inlet for N2 and O2, and ratios of 14/28 for nitrogen and 16/32 for oxygen.

  13. Calibration of X-Ray Observatories

    NASA Technical Reports Server (NTRS)

    Weisskopf, Martin C.; O'Dell, Stephen L.

    2011-01-01

    Accurate calibration of x-ray observatories has proved an elusive goal. Inaccuracies and inconsistencies amongst on-ground measurements, differences between on-ground and in-space performance, in-space performance changes, and the absence of cosmic calibration standards whose physics we truly understand have precluded absolute calibration better than several percent and relative spectral calibration better than a few percent. The philosophy "the model is the calibration" relies upon a complete high-fidelity model of performance and an accurate verification and calibration of this model. As high-resolution x-ray spectroscopy begins to play a more important role in astrophysics, additional issues in accurately calibrating at high spectral resolution become more evident. Here we review the challenges of accurately calibrating the absolute and relative response of x-ray observatories. On-ground x-ray testing by itself is unlikely to achieve a high-accuracy calibration of in-space performance, especially when the performance changes with time. Nonetheless, it remains an essential tool in verifying functionality and in characterizing and verifying the performance model. In the absence of verified cosmic calibration sources, we also discuss the notion of an artificial, in-space x-ray calibration standard.

  14. MRI Customized Play Therapy in Children Reduces the Need for Sedation--A Randomized Controlled Trial.

    PubMed

    Bharti, Bhavneet; Malhi, Prahbhjot; Khandelwal, N

    2016-03-01

    To evaluate the effectiveness of an MRI-specific play therapy intervention on the need for sedation in young children. All children in the age group of 4-10 y, who were advised an MRI scan over a period of one year, were randomized. Exclusion criteria included children with neurodevelopmental disorders impairing cognition and children who had previously undergone diagnostic MRI. A total of 79 children were randomized to a control or an intervention condition. The intervention involved familiarizing the child with the MRI model machine, listing the steps involved in the scan to the child in vivid detail, training the child to stand still for 5 min, and conducting several dry runs with a doll or a favorite toy. The study was approved by the Institute's ethics committee. The need for sedation was 41 % (n = 16) in the control group and this declined to 20 % (n = 8) in the intervention group (χ² = 4.13; P = 0.04). The relative risk of sedation decreased by 49 % in the intervention group as compared to the control group (RR 0.49; 95 % CI: 0.24-1.01) and this difference was statistically significant (P = 0.04). The absolute risk difference in sedation use between the intervention and control groups was 21 % (95 % CI 1.3 %-40.8 %). Even on adjusting for age, the relative risk of sedation remained significantly lower in children undergoing play therapy as compared to the control (RR 0.57, 95 % CI: 0.32-0.98) with a P value of 0.04. The use of MRI customized play therapy with pediatric patients undergoing diagnostic MRI resulted in a significant reduction in the use of sedation.
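    The relative-risk arithmetic reported above can be reproduced in a few lines. The per-group sizes (39 control, 40 intervention) are inferred from the stated percentages and counts and are an assumption, since the abstract only gives the total of 79:

```python
import math

# 2x2 summary reconstructed from the abstract (assumed group sizes:
# 16/39 sedated in the control arm, 8/40 in the play-therapy arm).
a, n1 = 8, 40    # sedation events / total, intervention group
b, n2 = 16, 39   # sedation events / total, control group

rr = (a / n1) / (b / n2)                   # relative risk
se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)    # standard error of log(RR)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)

print(f"RR = {rr:.2f}, 95% CI: {lo:.2f}-{hi:.2f}")
# → RR = 0.49, 95% CI: 0.24-1.01 (matching the reported values)
```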

  15. Evaluation of “Autotune” calibration against manual calibration of building energy models

    DOE PAGES

    Chaudhary, Gaurav; New, Joshua; Sanyal, Jibonananda; ...

    2016-08-26

    Our paper demonstrates the application of Autotune, a methodology aimed at automatically producing calibrated building energy models using measured data, in two case studies. In the first case, a building model is de-tuned by deliberately injecting faults into more than 60 parameters. This model was then calibrated using Autotune and its accuracy with respect to the original model was evaluated in terms of the industry-standard normalized mean bias error and coefficient of variation of root mean squared error metrics set forth in ASHRAE Guideline 14. In addition to whole-building energy consumption, outputs including lighting, plug load profiles, HVAC energy consumption, zone temperatures, and other variables were analyzed. In the second case, Autotune calibration is compared directly to experts’ manual calibration of an emulated-occupancy, full-size residential building with comparable calibration results in much less time. Lastly, our paper concludes with a discussion of the key strengths and weaknesses of auto-calibration approaches.
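    The two Guideline 14 metrics named above are straightforward to compute. The sketch below uses invented monthly data and the common convention of p = 1 adjustable parameter; exact conventions vary by guideline edition:

```python
import numpy as np

def nmbe(measured, simulated, p=1):
    """Normalized mean bias error (%), ASHRAE Guideline 14 style."""
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    return 100 * (m - s).sum() / ((m.size - p) * m.mean())

def cv_rmse(measured, simulated, p=1):
    """Coefficient of variation of the RMSE (%)."""
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    return 100 * np.sqrt(((m - s) ** 2).sum() / (m.size - p)) / m.mean()

# Hypothetical monthly energy use (kWh): metered vs. simulated
metered   = [1200, 1150, 1020, 950, 900, 980]
simulated = [1180, 1165, 1000, 970, 880, 1010]
print(f"NMBE = {nmbe(metered, simulated):.2f}%, "
      f"CV(RMSE) = {cv_rmse(metered, simulated):.2f}%")
```

    For monthly data, Guideline 14 commonly cites tolerances on the order of ±5% NMBE and 15% CV(RMSE); the toy model above would pass both.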

  16. Calibrated intercepts for solar radiometers used in remote sensor calibration

    NASA Technical Reports Server (NTRS)

    Gellman, David I.; Biggar, Stuart F.; Slater, Philip N.; Bruegge, Carol J.

    1991-01-01

    Calibrated solar radiometer intercepts allow spectral optical depths to be determined for days with intermittently clear skies. This is of particular importance on satellite sensor calibration days that are cloudy except at the time of image acquisition. This paper describes the calibration of four solar radiometers using the Langley-Bouguer technique for data collected on days with a clear, stable atmosphere. Intercepts are determined with an uncertainty of less than six percent, corresponding to a maximum uncertainty of 0.06 in optical depth. The spread of voltage intercepts calculated in this process is carried through three methods of radiometric calibration of satellite sensors to yield an uncertainty in radiance at the top of the atmosphere of less than one percent associated with the uncertainty in solar radiometer intercepts for a range of ground reflectances.
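    The Langley-Bouguer fit underlying these intercepts can be sketched with synthetic data (the V0 and tau values below are invented): since ln V is linear in air mass, the zero-air-mass voltage intercept and the optical depth both fall out of a straight-line fit.

```python
import numpy as np

# Synthetic clear-day Langley data: V = V0 * exp(-tau * m), small noise.
rng = np.random.default_rng(0)
V0_true, tau_true = 2.50, 0.12                # assumed "true" values
m = np.linspace(1.5, 6.0, 20)                 # relative air mass
V = V0_true * np.exp(-tau_true * m) * (1 + 0.002 * rng.standard_normal(m.size))

# Langley-Bouguer: ln V = ln V0 - tau * m  ->  straight-line fit
slope, intercept = np.polyfit(m, np.log(V), 1)
tau_est, V0_est = -slope, np.exp(intercept)
print(f"tau ~ {tau_est:.3f}, V0 ~ {V0_est:.3f}")
```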

  17. An Enclosed Laser Calibration Standard

    NASA Astrophysics Data System (ADS)

    Adams, Thomas E.; Fecteau, M. L.

    1985-02-01

    We have designed, evaluated and calibrated an enclosed, safety-interlocked laser calibration standard for use in US Army Secondary Reference Calibration Laboratories. This Laser Test Set Calibrator (LTSC) represents the Army's first-generation field laser calibration standard. Twelve LTSCs are now being fielded worldwide. The main requirement on the LTSC is to provide calibration support for the Test Set (TS3620) which, in turn, is a GO/NO GO tester of the Hand-Held Laser Rangefinder (AN/GVS-5). However, we believe its design is flexible enough to accommodate the calibration of other laser test, measurement and diagnostic equipment (TMDE) provided that single-shot capability is adequate to perform the task. In this paper we describe the salient aspects and calibration requirements of the AN/GVS-5 Rangefinder and the Test Set which drove the basic LTSC design. Also, we detail our evaluation and calibration of the LTSC, in particular the LTSC system standards. We conclude with a review of our error analysis from which uncertainties were assigned to the LTSC calibration functions.

  18. The Optics and Alignment of the Divergent Beam Laboratory X-ray Powder Diffractometer and its Calibration Using NIST Standard Reference Materials.

    PubMed

    Cline, James P; Mendenhall, Marcus H; Black, David; Windover, Donald; Henins, Albert

    2015-01-01

    The laboratory X-ray powder diffractometer is one of the primary analytical tools in materials science. It is applicable to nearly any crystalline material, and with advanced data analysis methods, it can provide a wealth of information concerning sample character. Data from these machines, however, are beset by a complex aberration function that can be addressed through calibration with the use of NIST Standard Reference Materials (SRMs). Laboratory diffractometers can be set up in a range of optical geometries; considered herein are those of Bragg-Brentano divergent beam configuration using both incident and diffracted beam monochromators. We review the origin of the various aberrations affecting instruments of this geometry and the methods developed at NIST to align these machines in a first principles context. Data analysis methods are considered as being in two distinct categories: those that use empirical methods to parameterize the nature of the data for subsequent analysis, and those that use model functions to link the observation directly to a specific aspect of the experiment. We consider a multifaceted approach to instrument calibration using both the empirical and model based data analysis methods. The particular benefits of the fundamental parameters approach are reviewed.

  19. Calibrating Wide Field Surveys

    NASA Astrophysics Data System (ADS)

    González Fernández, Carlos; Irwin, M.; Lewis, J.; González Solares, E.

    2017-09-01

    "In this talk I will review the strategies in CASU to calibrate wide field surveys, in particular applied to data taken with the VISTA telescope. These include traditional night-by-night calibrations along with the search for a global, coherent calibration of all the data once observations are finished. The difficulties of obtaining photometric accuracy of a few percent and a good absolute calibration will also be discussed."

  20. One-calibrant kinetic calibration for on-site water sampling with solid-phase microextraction.

    PubMed

    Ouyang, Gangfeng; Cui, Shufen; Qin, Zhipei; Pawliszyn, Janusz

    2009-07-15

    The existing solid-phase microextraction (SPME) kinetic calibration technique, using the desorption of preloaded standards to calibrate the extraction of the analytes, requires that the physicochemical properties of the standard be similar to those of the analyte, which has limited the application of the technique. In this study, a new method, termed the one-calibrant kinetic calibration technique, which can use the desorption of a single standard to calibrate all extracted analytes, was proposed. The theoretical considerations were validated by passive water sampling in the laboratory and rapid water sampling in the field. To mimic variations in the environment, such as temperature, turbulence, and analyte concentration, the flow-through system for the generation of standard aqueous polycyclic aromatic hydrocarbon (PAH) solutions was modified. The experimental results of the passive samplings in the flow-through system illustrated that the effect of the environmental variables was successfully compensated for with the kinetic calibration technique, and all extracted analytes could be calibrated through the desorption of a single calibrant. On-site water sampling with rotated SPME fibers also illustrated the feasibility of the new technique for rapid on-site sampling of hydrophobic organic pollutants in water. This technique will accelerate the application of the kinetic calibration method and will also be useful for other microextraction techniques.
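    The premise of kinetic calibration is the symmetry between absorption and desorption: at equal exposure times the extracted fraction of analyte and the desorbed fraction of a preloaded calibrant satisfy n/nₑ = 1 − q/q₀ (assuming isotropic kinetics). A toy sketch with invented numbers:

```python
# Kinetic-calibration sketch (assumed symmetry relation n/n_e = 1 - q/q0).
def equilibrium_amount(n_extracted, q_remaining, q_preloaded):
    """Estimate the equilibrium amount n_e from a non-equilibrium extraction,
    using the fraction of the preloaded calibrant desorbed in parallel."""
    fraction_desorbed = 1 - q_remaining / q_preloaded
    return n_extracted / fraction_desorbed

# Hypothetical numbers: 40% of the calibrant desorbed during sampling
n_e = equilibrium_amount(n_extracted=12.0, q_remaining=6.0, q_preloaded=10.0)
print(n_e)  # → 30.0
```

    The one-calibrant variant described in the abstract generalizes this by relating the desorption rate constant of the single calibrant to those of the other analytes.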

  1. Initial Radiometric Calibration of the AWiFS using Vicarious Calibration Techniques

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary; Thome, Kurtis; Aaron, David; Leigh, Larry

    2006-01-01

    NASA SSC maintains four ASD FieldSpec FR spectroradiometers: 1) Laboratory transfer radiometers; 2) Ground surface reflectance for V&V field collection activities. Radiometric Calibration consists of a NIST-calibrated integrating sphere which serves as a source with known spectral radiance. Spectral Calibration consists of a laser and pen lamp illumination of integrating sphere. Environmental Testing includes temperature stability tests performed in environmental chamber.

  2. MRI-guided brachytherapy

    PubMed Central

    Tanderup, Kari; Viswanathan, Akila; Kirisits, Christian; Frank, Steven J.

    2014-01-01

    The application of MRI-guided brachytherapy has demonstrated significant growth during the last two decades. Clinical improvements in cervix cancer outcomes have been linked to the application of repeated MRI for identification of residual tumor volumes during radiotherapy. This has changed clinical practice in the direction of individualized dose administration, and mounting evidence of improved clinical outcome with regard to local control, overall survival as well as morbidity. MRI-guided prostate HDR and LDR brachytherapy has improved the accuracy of target and organs-at-risk (OAR) delineation, and the potential exists for improved dose prescription and reporting for the prostate gland and organs at risk. Furthermore, MRI-guided prostate brachytherapy has significant potential to identify prostate subvolumes and dominant lesions to allow for dose administration reflecting the differential risk of recurrence. MRI-guided brachytherapy involves advanced imaging, target concepts, and dose planning. The key issue for safe dissemination and implementation of high quality MRI-guided brachytherapy is establishment of qualified multidisciplinary teams and strategies for training and education. PMID:24931089

  3. Magnetic vesicles as MRI-trackable biogenic nanovectors

    NASA Astrophysics Data System (ADS)

    Andriola Silva, Amanda K.; Luciani, Nathalie; Gazeau, Florence; Wilhelm, Claire

    2012-03-01

    Magnetic labeling renders cells MRI-detectable, which provides attractive solutions for tracking the fate of a transplanted cell population. Understanding the interplay of magnetic nanoparticles and cells is then an important point that should not be neglected. Here we show that under starvation conditions, macrophage cells emit vesicles containing nanoparticles. First, we inferred the intracellular iron oxide load from the magnetophoretic velocity of cells in a calibrated magnetic field gradient. After magnetic labeling and culture in stress conditions, the intracellular iron oxide load was once more determined and a detectable difference was observed before and after stress. Moreover, we identified in the stress-conditioned medium membrane vesicle structures carrying magnetic particles. Besides pointing out the role of cell-derived vesicles in the sequestration of the intracellular magnetic label, the experiments also demonstrated that vesicles were able to chaperone the magnetic cargo into naïve cells.

  4. Comparison Between One-Point Calibration and Two-Point Calibration Approaches in a Continuous Glucose Monitoring Algorithm

    PubMed Central

    Mahmoudi, Zeinab; Johansen, Mette Dencker; Christiansen, Jens Sandahl

    2014-01-01

    Background: The purpose of this study was to investigate the effect of using a 1-point calibration approach instead of a 2-point calibration approach on the accuracy of a continuous glucose monitoring (CGM) algorithm. Method: A previously published real-time CGM algorithm was compared with its updated version, which used a 1-point calibration instead of a 2-point calibration. In addition, the contribution of the corrective intercept (CI) to the calibration performance was assessed. Finally, the sensor background current was estimated real-time and retrospectively. The study was performed on 132 type 1 diabetes patients. Results: Replacing the 2-point calibration with the 1-point calibration improved the CGM accuracy, with the greatest improvement achieved in hypoglycemia (18.4% median absolute relative difference [MARD] in hypoglycemia for the 2-point calibration, and 12.1% MARD in hypoglycemia for the 1-point calibration). Using 1-point calibration increased the percentage of sensor readings in zone A+B of the Clarke error grid analysis (EGA) in the full glycemic range, and also enhanced hypoglycemia sensitivity. Exclusion of CI from calibration reduced hypoglycemia accuracy, while slightly increasing euglycemia accuracy. Both real-time and retrospective estimation of the sensor background current suggest that the background current can be considered zero in the calibration of the SCGM1 sensor. Conclusions: The sensor readings calibrated with the 1-point calibration approach showed higher accuracy than those calibrated with the 2-point calibration approach. PMID:24876420
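    A minimal sketch of the two schemes under an assumed linear sensor model (current = slope × glucose + background); all numbers are invented and the study's actual algorithm is more elaborate:

```python
def two_point(g1, i1, g2, i2):
    """Two-point calibration: slope and background from two references."""
    slope = (i2 - i1) / (g2 - g1)
    background = i1 - slope * g1
    return lambda i: (i - background) / slope

def one_point(g1, i1, background=0.0):
    """One-point calibration: the background current must be assumed
    (the study concludes zero is reasonable for this sensor)."""
    slope = (i1 - background) / g1
    return lambda i: (i - background) / slope

cal2 = two_point(5.0, 55.0, 10.0, 105.0)   # (mmol/L, nA) reference pairs
cal1 = one_point(5.0, 50.0)
print(cal2(75.0), cal1(75.0))  # → 7.0 7.5
```

    The trade-off is visible in the signatures: the 1-point scheme needs only one reference measurement but inherits any error in the assumed background.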

  5. An Automated Thermocouple Calibration System

    NASA Technical Reports Server (NTRS)

    Bethea, Mark D.; Rosenthal, Bruce N.

    1992-01-01

    An Automated Thermocouple Calibration System (ATCS) was developed for the unattended calibration of type K thermocouples. This system operates from room temperature to 650 C and has been used for calibration of thermocouples in an eight-zone furnace system which may employ as many as 60 thermocouples simultaneously. It is highly efficient, allowing for the calibration of large numbers of thermocouples in significantly less time than required for manual calibrations. The system consists of a personal computer, a data acquisition/control unit, and a laboratory calibration furnace. The calibration furnace is a microprocessor-controlled multipurpose temperature calibrator with an accuracy of +/- 0.7 C. The accuracy of the calibration furnace is traceable to the National Institute of Standards and Technology (NIST). The computer software is menu-based to give the user flexibility and ease of use. The user needs no programming experience to operate the systems. This system was specifically developed for use in the Microgravity Materials Science Laboratory (MMSL) at the NASA LeRC.
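    The unattended calibration amounts to collecting (reference, reading) pairs across the furnace range and fitting a correction. A least-squares linear correction is sketched below with invented pairs; the ATCS itself may use a different correction form:

```python
import numpy as np

# Hypothetical calibration pairs over the ATCS range (degrees C):
# furnace reference temperature vs. thermocouple reading.
ref     = np.array([100.0, 200.0, 300.0, 400.0, 500.0, 650.0])
reading = np.array([ 99.2, 198.9, 298.8, 398.5, 498.6, 648.1])

# Least-squares linear correction: T_corrected = a * reading + b
a, b = np.polyfit(reading, ref, 1)
correct = lambda t: a * t + b
print(f"a = {a:.4f}, b = {b:.2f}, corrected 300.0 -> {correct(300.0):.2f}")
```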

  6. An Innovative Software Tool Suite for Power Plant Model Validation and Parameter Calibration using PMU Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yuanyuan; Diao, Ruisheng; Huang, Renke

    Maintaining good quality of power plant stability models is of critical importance to ensure the secure and economic operation and planning of today’s power grid with its increasing stochastic and dynamic behavior. According to North American Electric Reliability Corporation (NERC) standards, all generators in North America with capacities larger than 10 MVA are required to validate their models every five years. Validation is quite costly and can significantly affect the revenue of generator owners, because the traditional staged testing requires generators to be taken offline. Over the past few years, validating and calibrating parameters using online measurements including phasor measurement units (PMUs) and digital fault recorders (DFRs) has been proven to be a cost-effective approach. In this paper, an innovative open-source tool suite is presented for validating power plant models using the PPMV tool, identifying bad parameters with trajectory sensitivity analysis, and finally calibrating parameters using an ensemble Kalman filter (EnKF) based algorithm. The architectural design and the detailed procedures to run the tool suite are presented, with results of tests on a realistic hydro power plant using PMU measurements for 12 different events. The calibrated parameters of the machine, exciter, governor and PSS models demonstrate much better performance than the original models for all the events and show the robustness of the proposed calibration algorithm.

  7. 14. Interior, Machine Shop, Roundhouse Machine Shop Extension, Southern Pacific ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    14. Interior, Machine Shop, Roundhouse Machine Shop Extension, Southern Pacific Railroad Carlin Shops, view to north (90mm lens). - Southern Pacific Railroad, Carlin Shops, Roundhouse Machine Shop Extension, Foot of Sixth Street, Carlin, Elko County, NV

  8. Traceable X,Y self-calibration at single nm level of an optical microscope used for coherence scanning interferometry

    NASA Astrophysics Data System (ADS)

    Ekberg, Peter; Mattsson, Lars

    2018-03-01

    Coherence scanning interferometry, as used in optical profilers, is typically good for Z-calibration at nm levels, but the X,Y accuracy is often left without further notice than typical resolution limits of the optics, i.e. of the order of ~1 µm. For the calibration of metrology tools we rely on traceable artefacts, e.g. gauge blocks for traditional coordinate measurement machines, and lithographically made mask artefacts for microscope calibrations. In situations where the repeatability and accuracy of the measurement tool is much better than the uncertainty of the traceable artefact, we are bound to specify the uncertainty based on the calibration artefact rather than on the measurement tool. This is a big drawback, as the specified uncertainty of a calibrated measurement may shrink the available manufacturing tolerance. To improve the uncertainty in X,Y we can use self-calibration. Then, we do not need to know anything more than that the artefact contains a pattern with some nominal grid. This also gives the opportunity to manufacture the artefact in-house, rather than buying a calibrated and expensive artefact. The self-calibration approach we present here is based on an iteration algorithm, rather than the traditional mathematical inversion, and it leads to much more relaxed constraints on the input measurements. In this paper we show how the X,Y errors, primarily optical distortions, within the field of view (FOV) of an optical coherence scanning interferometry microscope can be reduced by a large factor. By self-calibration we achieve an X,Y consistency in the 175 × 175 µm² FOV of ~2.3 nm (1σ) using the 50× objective. Besides the calibrated X,Y coordinate system of the microscope we also receive, as a bonus, the absolute positions of the pattern in the artefact with a combined uncertainty of 6 nm (1σ) by relying on a traceable 1D linear measurement of a twin artefact at NIST.

  9. Correction of MRI-induced geometric distortions in whole-body small animal PET-MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frohwein, Lynn J., E-mail: frohwein@uni-muenster.de; Schäfers, Klaus P.; Hoerr, Verena

    Purpose: The fusion of positron emission tomography (PET) and magnetic resonance imaging (MRI) data can be a challenging task in whole-body PET-MRI. The quality of the registration between these two modalities in large field-of-views (FOV) is often degraded by geometric distortions of the MRI data. The distortions at the edges of large FOVs mainly originate from MRI gradient nonlinearities. This work describes a method to measure and correct for these kind of geometric distortions in small animal MRI scanners to improve the registration accuracy of PET and MRI data. Methods: The authors have developed a geometric phantom which allows the measurement of geometric distortions in all spatial axes via control points. These control points are detected semiautomatically in both PET and MRI data with a subpixel accuracy. The spatial transformation between PET and MRI data is determined with these control points via 3D thin-plate splines (3D TPS). The transformation derived from the 3D TPS is finally applied to real MRI mouse data, which were acquired with the same scan parameters used in the phantom data acquisitions. Additionally, the influence of the phantom material on the homogeneity of the magnetic field is determined via field mapping. Results: The spatial shift according to the magnetic field homogeneity caused by the phantom material was determined to a mean of 0.1 mm. The results of the correction show that distortion with a maximum error of 4 mm could be reduced to less than 1 mm with the proposed correction method. Furthermore, the control point-based registration of PET and MRI data showed improved congruence after correction. Conclusions: The developed phantom has been shown to have no considerable negative effect on the homogeneity of the magnetic field. The proposed method yields an appropriate correction of the measured MRI distortion and is able to improve the PET and MRI registration. Furthermore, the method is applicable to whole-body small
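    The control-point transformation can be sketched with a thin-plate-spline fit. For brevity this sketch is 2D with the r² log r kernel and invented control points; the paper uses 3D TPS, where the radial kernel differs:

```python
import numpy as np

def tps_fit(src, dst):
    """Fit a 2D thin-plate spline mapping src -> dst control points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = np.zeros_like(d2)
    mask = d2 > 0
    K[mask] = 0.5 * d2[mask] * np.log(d2[mask])   # U(r) = r^2 log r
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    rhs = np.zeros((n + 3, 2))
    rhs[:n] = dst
    return src, np.linalg.solve(A, rhs)

def tps_apply(model, pts):
    """Warp arbitrary points with a fitted spline."""
    src, coef = model
    pts = np.asarray(pts, float)
    d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    U = np.zeros_like(d2)
    mask = d2 > 0
    U[mask] = 0.5 * d2[mask] * np.log(d2[mask])
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ coef[:len(src)] + P @ coef[len(src):]

# Invented control points related by a mild affine "distortion"
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.25]], float)
dst = src @ np.array([[1.02, 0.01], [-0.01, 0.98]]) + [0.5, -0.3]
model = tps_fit(src, dst)
print(np.allclose(tps_apply(model, src), dst))  # exact at control points → True
```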

  10. Calibration method for spectroscopic systems

    DOEpatents

    Sandison, David R.

    1998-01-01

    Calibration spots of optically-characterized material placed in the field of view of a spectroscopic system allow calibration of the spectroscopic system. Response from the calibration spots is measured and used to calibrate for varying spectroscopic system operating parameters. The accurate calibration achieved allows quantitative spectroscopic analysis of responses taken at different times, different excitation conditions, and of different targets.

  11. Calibration method for spectroscopic systems

    DOEpatents

    Sandison, D.R.

    1998-11-17

    Calibration spots of optically-characterized material placed in the field of view of a spectroscopic system allow calibration of the spectroscopic system. Response from the calibration spots is measured and used to calibrate for varying spectroscopic system operating parameters. The accurate calibration achieved allows quantitative spectroscopic analysis of responses taken at different times, different excitation conditions, and of different targets. 3 figs.

  12. Coda Calibration Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Addair, Travis; Barno, Justin; Dodge, Doug

    CCT is a Java based application for calibrating 10 shear wave coda measurement models to observed data using a much smaller set of reference moment magnitudes (MWs) calculated from other means (waveform modeling, etc.). These calibrated measurement models can then be used in other tools to generate coda moment magnitude measurements, source spectra, estimated stress drop, and other useful measurements for any additional events and any new data collected in the calibrated region.

  13. SU-F-J-93: Automated Segmentation of High-Resolution 3D WholeBrain Spectroscopic MRI for Glioblastoma Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreibmann, E; Shu, H; Cordova, J

    Purpose: We report on an automated segmentation algorithm for defining radiation therapy target volumes using spectroscopic MR images (sMRI) acquired at a nominal voxel resolution of 100 microliters. Methods: Wholebrain sMRI combining 3D echo-planar spectroscopic imaging, generalized auto-calibrating partially-parallel acquisitions, and elliptical k-space encoding was conducted on a 3T MRI scanner with a 32-channel head coil array. Metabolite maps generated include choline (Cho), creatine (Cr), and N-acetylaspartate (NAA), as well as Cho/NAA, Cho/Cr, and NAA/Cr ratio maps. Automated segmentation was achieved by concomitantly considering sMRI metabolite maps with standard contrast enhancing (CE) imaging in a pipeline that first uses the water signal for skull stripping. Subsequently, an initial blob of tumor region is identified by searching for regions of FLAIR abnormalities that also display reduced NAA activity using a mean ratio correlation and morphological filters. These regions are used as the starting point for a geodesic level-set refinement that adapts the initial blob to the fine details specific to each metabolite. Results: Accuracy of the segmentation model was tested on a cohort of 12 patients that had sMRI datasets acquired pre-, mid- and post-treatment, providing a broad range of enhancement patterns. Whereas heterogeneity in tumor appearance and shape across patients posed a greater challenge to the algorithm in classical imaging, regions of abnormal activity were easily detected in the sMRI metabolite maps when combining the detail available in the standard imaging with the local enhancement produced by the metabolites. Results can be imported into treatment planning, leading in general to an increase in the target volumes (GTV60) when using sMRI+CE MRI compared to the standard CE MRI alone. Conclusion: Integration of automated segmentation of sMRI metabolite maps into planning is feasible and will likely streamline acceptance

  14. The Calibration Reference Data System

    NASA Astrophysics Data System (ADS)

    Greenfield, P.; Miller, T.

    2016-07-01

    We describe a software architecture and implementation for using rules to determine which calibration files are appropriate for calibrating a given observation. This new system, the Calibration Reference Data System (CRDS), replaces what had been previously used for the Hubble Space Telescope (HST) calibration pipelines, the Calibration Database System (CDBS). CRDS will be used for the James Webb Space Telescope (JWST) calibration pipelines, and is currently being used for HST calibration pipelines. CRDS can be easily generalized for use in similar applications that need a rules-based system for selecting the appropriate item for a given dataset; we give some examples of such generalizations that will likely be used for JWST. The core functionality of the Calibration Reference Data System is available under an Open Source license. CRDS is briefly contrasted with a sampling of other similar systems used at other observatories.
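    The core idea, a rules-based lookup from dataset metadata to reference files, can be sketched in a few lines. The rule format, file names, and keywords here are invented for illustration and do not reflect CRDS's actual formats:

```python
# Ordered rules: first match wins, so specific rules precede wildcards.
RULES = [
    ({"instrument": "WFC3", "detector": "IR", "filter": "F160W"}, "ir_f160w_flat.fits"),
    ({"instrument": "WFC3", "detector": "IR", "filter": "*"},     "ir_generic_flat.fits"),
    ({"instrument": "WFC3", "detector": "UVIS", "filter": "*"},   "uvis_flat.fits"),
]

def matches(criteria, header):
    """A rule matches when each criterion equals the header value or is a wildcard."""
    return all(v == "*" or header.get(k) == v for k, v in criteria.items())

def select_reference(header):
    for criteria, ref in RULES:
        if matches(criteria, header):
            return ref
    raise LookupError("no applicable reference file")

print(select_reference({"instrument": "WFC3", "detector": "IR", "filter": "F160W"}))
# → ir_f160w_flat.fits
```

    Keeping the rules as data rather than code is what makes the scheme easy to generalize to other observatories and missions, as the abstract notes.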

  15. GPI Calibrations

    NASA Astrophysics Data System (ADS)

    Rantakyrö, Fredrik T.

    2017-09-01

    "The Gemini Planet Imager requires a large set of Calibrations. These can be split into two major sets, one set associated with each observation and one set related to biweekly calibrations. The observation set is to optimize the correction of microshifts in the IFU spectra and the latter set is for correction of detector and instrument cosmetics."

  16. Calibration of mass spectrometric peptide mass fingerprint data without specific external or internal calibrants

    PubMed Central

    Wolski, Witold E; Lalowski, Maciej; Jungblut, Peter; Reinert, Knut

    2005-01-01

    Background Peptide Mass Fingerprinting (PMF) is a widely used mass spectrometry (MS) method of analysis of proteins and peptides. It relies on the comparison between experimentally determined and theoretical mass spectra. The PMF process requires calibration, usually performed with external or internal calibrants of known molecular masses. Results We have introduced two novel MS calibration methods. The first method utilises the local similarity of peptide maps generated after separation of complex protein samples by two-dimensional gel electrophoresis. It computes a multiple peak-list alignment of the data set using a modified Minimum Spanning Tree (MST) algorithm. The second method exploits the idea that hundreds of MS samples are measured in parallel on one sample support. It improves the calibration coefficients by applying a two-dimensional Thin Plate Splines (TPS) smoothing algorithm. We studied the novel calibration methods utilising data generated by three different MALDI-TOF-MS instruments. We demonstrate that a PMF data set can be calibrated without resorting to external or relying on widely occurring internal calibrants. The methods developed here were implemented in R and are part of the BioConductor package mscalib available from . Conclusion The MST calibration algorithm is well suited to calibrate MS spectra of protein samples resulting from two-dimensional gel electrophoretic separation. The TPS based calibration algorithm might be used to correct systematic mass measurement errors observed for large MS sample supports. As compared to other methods, our combined MS spectra calibration strategy increases the peptide/protein identification rate by an additional 5 – 15%. PMID:16102175
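    The calibrant-free idea can be sketched as peak-list alignment: peaks shared between a poorly calibrated spectrum and a neighboring (or consensus) peak list are matched by proximity, and an affine mass correction is fitted to the matches. This toy version uses invented masses; the paper's MST and TPS machinery is far more involved:

```python
import numpy as np

def affine_recalibrate(ref, obs, tol=0.5):
    """Match each observed peak to the nearest reference peak within tol (Da)
    and least-squares fit a correction m_corrected = a * m + b."""
    ref, obs = np.asarray(ref, float), np.asarray(obs, float)
    pairs = np.array([(m, ref[np.abs(ref - m).argmin()]) for m in obs
                      if np.abs(ref - m).min() <= tol])
    return np.polyfit(pairs[:, 0], pairs[:, 1], 1)  # a, b

ref = np.array([842.51, 1045.56, 1296.68, 1758.93, 2211.10])
obs = ref * 1.0002 + 0.05            # simulated systematic miscalibration
a, b = affine_recalibrate(ref, obs)
print(np.allclose(a * obs + b, ref, atol=1e-6))  # → True
```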

  17. Learning an Eddy Viscosity Model Using Shrinkage and Bayesian Calibration: A Jet-in-Crossflow Case Study

    DOE PAGES

    Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; ...

    2017-09-07

    In this paper, we demonstrate a statistical procedure for learning a high-order eddy viscosity model (EVM) from experimental data and using it to improve the predictive skill of a Reynolds-averaged Navier–Stokes (RANS) simulator. The method is tested in a three-dimensional (3D), transonic jet-in-crossflow (JIC) configuration. The process starts with a cubic eddy viscosity model (CEVM) developed for incompressible flows. It is fitted to limited experimental JIC data using shrinkage regression. The shrinkage process removes all the terms from the model, except an intercept, a linear term, and a quadratic one involving the square of the vorticity. The shrunk eddy viscosity model is implemented in an RANS simulator and calibrated, using vorticity measurements, to infer three parameters. The calibration is Bayesian and is solved using a Markov chain Monte Carlo (MCMC) method. A 3D probability density distribution for the inferred parameters is constructed, thus quantifying the uncertainty in the estimate. The phenomenal cost of using a 3D flow simulator inside an MCMC loop is mitigated by using surrogate models (“curve-fits”). A support vector machine classifier (SVMC) is used to impose our prior belief regarding parameter values, specifically to exclude nonphysical parameter combinations. The calibrated model is compared, in terms of its predictive skill, to simulations using uncalibrated linear and CEVMs. Finally, we find that the calibrated model, with one quadratic term, is more accurate than the uncalibrated simulator. The model is also checked at a flow condition at which the model was not calibrated.
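    The term-selection step can be illustrated with a tiny coordinate-descent lasso, a generic stand-in for the shrinkage regression used in the paper; the data and penalty are invented. Features that carry no signal are driven exactly to zero by soft thresholding:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
omega = rng.uniform(0.5, 2.0, n)                   # "vorticity" surrogate
# Candidate terms: linear, quadratic, plus three pure-noise features
X = np.column_stack([omega, omega**2, rng.standard_normal((n, 3))])
y = 1.0 + 0.8 * omega + 0.5 * omega**2 + 0.01 * rng.standard_normal(n)

def lasso(X, y, lam, iters=500):
    """Plain coordinate-descent lasso with an unpenalized intercept."""
    Xc, yc = X - X.mean(0), y - y.mean()
    w = np.zeros(X.shape[1])
    col_sq = (Xc ** 2).sum(0)
    for _ in range(iters):
        for j in range(len(w)):
            r = yc - Xc @ w + Xc[:, j] * w[j]       # residual excluding term j
            rho = Xc[:, j] @ r
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w, y.mean() - X.mean(0) @ w

w, b = lasso(X, y, lam=8.0)
print(np.round(w, 3))  # the three pure-noise weights shrink to zero
```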

  18. Learning an Eddy Viscosity Model Using Shrinkage and Bayesian Calibration: A Jet-in-Crossflow Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan

    In this paper, we demonstrate a statistical procedure for learning a high-order eddy viscosity model (EVM) from experimental data and using it to improve the predictive skill of a Reynolds-averaged Navier–Stokes (RANS) simulator. The method is tested in a three-dimensional (3D), transonic jet-in-crossflow (JIC) configuration. The process starts with a cubic eddy viscosity model (CEVM) developed for incompressible flows. It is fitted to limited experimental JIC data using shrinkage regression. The shrinkage process removes all the terms from the model, except an intercept, a linear term, and a quadratic one involving the square of the vorticity. The shrunk eddy viscosity model is implemented in a RANS simulator and calibrated, using vorticity measurements, to infer three parameters. The calibration is Bayesian and is solved using a Markov chain Monte Carlo (MCMC) method. A 3D probability density distribution for the inferred parameters is constructed, thus quantifying the uncertainty in the estimate. The phenomenal cost of using a 3D flow simulator inside an MCMC loop is mitigated by using surrogate models (“curve-fits”). A support vector machine classifier (SVMC) is used to impose our prior belief regarding parameter values, specifically to exclude nonphysical parameter combinations. The calibrated model is compared, in terms of its predictive skill, to simulations using uncalibrated linear and CEVMs. Finally, we find that the calibrated model, with one quadratic term, is more accurate than the uncalibrated simulator. The model is also checked at a flow condition at which the model was not calibrated.

  19. Machine learning-based analysis of MR radiomics can help to improve the diagnostic performance of PI-RADS v2 in clinically relevant prostate cancer.

    PubMed

    Wang, Jing; Wu, Chen-Jiang; Bao, Mei-Ling; Zhang, Jing; Wang, Xiao-Ning; Zhang, Yu-Dong

    2017-10-01

    To investigate whether machine learning-based analysis of MR radiomics can help improve the performance of PI-RADS v2 in clinically relevant prostate cancer (PCa). This IRB-approved study included 54 patients with PCa undergoing multi-parametric (mp) MRI before prostatectomy. Imaging analysis was performed on 54 tumours, 47 normal peripheral-zone (PZ) and 48 normal transitional-zone (TZ) regions based on histological-radiological correlation. Mp-MRI was scored via PI-RADS, and quantified by measuring radiomic features. A predictive model was developed using a novel support vector machine trained with: (i) radiomics, (ii) PI-RADS scores, (iii) radiomics and PI-RADS scores. Paired comparison was made via ROC analysis. For PCa versus normal TZ, the model trained with radiomics had a significantly higher area under the ROC curve (Az) (0.955 [95% CI 0.923-0.976]) than PI-RADS (Az: 0.878 [0.834-0.914], p < 0.001). The difference in Az between them was not significant for PCa versus PZ (0.972 [0.945-0.988] vs. 0.940 [0.905-0.965], p = 0.097). When radiomics was added, the performance of PI-RADS was significantly improved for PCa versus PZ (Az: 0.983 [0.960-0.995]) and PCa versus TZ (Az: 0.968 [0.940-0.985]). Machine learning analysis of MR radiomics can help improve the performance of PI-RADS in clinically relevant PCa. • Machine-based analysis of MR radiomics outperformed PI-RADS in TZ cancer. • Adding MR radiomics significantly improved the performance of PI-RADS. • DKI-derived Dapp and Kapp were two strong markers for the diagnosis of PCa.
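
    The Az values quoted above are areas under the ROC curve, which can be computed directly from classifier scores via the rank (Mann–Whitney) formulation: the AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative case. A minimal sketch with made-up scores:

```python
# Sketch: area under the ROC curve (Az/AUC) from raw classifier scores,
# using the Mann-Whitney pairwise-comparison formulation. Scores are
# illustrative, not from the study.
def auc(pos_scores, neg_scores):
    wins = sum((p > n) + 0.5 * (p == n)       # ties count half
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

az = auc([0.9, 0.8, 0.4], [0.3, 0.5, 0.2])    # 8 of 9 pairs correctly ordered
```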

  20. A variable acceleration calibration system

    NASA Astrophysics Data System (ADS)

    Johnson, Thomas H.

    2011-12-01

    A variable acceleration calibration system that applies loads using gravitational and centripetal acceleration serves as an alternative, efficient and cost effective method for calibrating internal wind tunnel force balances. Two proof-of-concept variable acceleration calibration systems are designed, fabricated and tested. The NASA UT-36 force balance served as the test balance for the calibration experiments. The variable acceleration calibration systems are shown to be capable of performing three component calibration experiments with an approximate applied load error on the order of 1% of the full scale calibration loads. Sources of error are identified using experimental design methods and a propagation of uncertainty analysis. Three types of uncertainty are identified for the systems and are attributed to prediction error, calibration error and pure error. Angular velocity uncertainty is shown to be the largest identified source of prediction error. The calibration uncertainties using a production variable acceleration based system are shown to be potentially equivalent to current methods. The production quality system can be realized using lighter materials and more precise instrumentation. Further research is needed to account for balance deflection, forcing effects due to vibration, and large tare loads. A gyroscope measurement technique is shown to be capable of resolving the balance deflection angle calculation. Long term research objectives include a demonstration of a six degree of freedom calibration, and a large capacity balance calibration.
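
    The centripetal part of the applied load follows F = m·ω²·r, which also shows why angular-velocity uncertainty dominates the prediction error: since F scales with ω², the relative force error is roughly twice the relative ω error. A small sketch with illustrative values (not the UT-36 balance loads):

```python
# Sketch: centripetal load applied by a spinning calibration system,
# F = m * omega**2 * r. Mass, speed, and radius values are illustrative.
import math

def centripetal_load(m, rpm, r):
    omega = rpm * 2 * math.pi / 60.0   # convert rev/min to rad/s
    return m * omega ** 2 * r

f_nominal = centripetal_load(1.0, 60.0, 0.5)   # 1 kg at 60 rpm, 0.5 m arm
f_perturbed = centripetal_load(1.0, 60.6, 0.5) # +1% speed error -> ~+2% force
```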

  1. Machine Learning Classification of Cirrhotic Patients with and without Minimal Hepatic Encephalopathy Based on Regional Homogeneity of Intrinsic Brain Activity.

    PubMed

    Chen, Qiu-Feng; Chen, Hua-Jun; Liu, Jun; Sun, Tao; Shen, Qun-Tai

    2016-01-01

    Machine learning-based approaches play an important role in examining functional magnetic resonance imaging (fMRI) data in a multivariate manner and extracting features predictive of group membership. This study was performed to assess the potential for measuring brain intrinsic activity to identify minimal hepatic encephalopathy (MHE) in cirrhotic patients, using the support vector machine (SVM) method. Resting-state fMRI data were acquired in 16 cirrhotic patients with MHE and 19 cirrhotic patients without MHE. The regional homogeneity (ReHo) method was used to investigate the local synchrony of intrinsic brain activity. The Psychometric Hepatic Encephalopathy Score (PHES) was used to define the MHE condition. An SVM classifier was then applied, using leave-one-out cross-validation, to determine the discriminative ReHo map for MHE. The discrimination map highlights a set of regions, including the prefrontal cortex, anterior cingulate cortex, anterior insular cortex, inferior parietal lobule, precentral and postcentral gyri, superior and medial temporal cortices, and middle and inferior occipital gyri. The optimized discriminative model showed a total accuracy of 82.9% and a sensitivity of 81.3%. Our results suggest that a combination of the SVM approach and brain intrinsic activity measurement could be helpful for detection of MHE in cirrhotic patients.
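
    Leave-one-out cross-validation, as used here, trains on all subjects but one and tests on the held-out subject, cycling through the whole cohort. A minimal sketch in which a nearest-centroid rule stands in for the SVM and the one-dimensional "ReHo features" are synthetic:

```python
# Sketch of leave-one-out cross-validation for a two-class problem.
# A nearest-centroid classifier stands in for the SVM; the 1-D feature
# values and labels are synthetic, not derived from the study data.
def nearest_centroid_predict(train, labels, x):
    cents = {}
    for lab in set(labels):
        pts = [t for t, l in zip(train, labels) if l == lab]
        cents[lab] = sum(pts) / len(pts)
    return min(cents, key=lambda lab: abs(x - cents[lab]))

def loo_accuracy(data, labels):
    correct = 0
    for i in range(len(data)):                 # hold out subject i
        tr = data[:i] + data[i + 1:]
        la = labels[:i] + labels[i + 1:]
        correct += nearest_centroid_predict(tr, la, data[i]) == labels[i]
    return correct / len(data)

feats = [0.1, 0.2, 0.15, 0.9, 1.0, 0.85]       # synthetic "ReHo" values
labs  = [0, 0, 0, 1, 1, 1]                     # 0 = no MHE, 1 = MHE
```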

  2. Radiometer calibration methods and resulting irradiance differences: Radiometer calibration methods and resulting irradiance differences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    Accurate solar radiation measurement by radiometers depends on instrument performance specifications, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methodologies and resulting differences provided by radiometric calibration service providers such as the National Renewable Energy Laboratory (NREL) and manufacturers of radiometers. Some of these methods calibrate radiometers indoors and some outdoors. To establish or understand the differences in calibration methodologies, we processed and analyzed field-measured data from radiometers deployed for 10 months at NREL's Solar Radiation Research Laboratory. These different methods of calibration resulted in a difference of ±1% to ±2% in solar irradiance measurements. Analyzing these differences will ultimately assist in determining the uncertainties of the field radiometer data and will help develop a consensus on a standard for calibration. Further advancing procedures for precisely calibrating radiometers to world reference standards that reduce measurement uncertainties will help the accurate prediction of the output of planned solar conversion projects and improve the bankability of financing solar projects.

  3. MRI Safety during Pregnancy

    MedlinePlus

    If you are pregnant and your doctor wants to perform a magnetic resonance imaging (MRI) exam, there is a possibility that your ...

  4. Psychophysical contrast calibration

    PubMed Central

    To, Long; Woods, Russell L; Goldstein, Robert B; Peli, Eli

    2013-01-01

    Electronic displays and computer systems offer numerous advantages for clinical vision testing. Laboratory and clinical measurements of various functions and in particular of (letter) contrast sensitivity require accurately calibrated display contrast. In the laboratory this is achieved using expensive light meters. We developed and evaluated a novel method that uses only psychophysical responses of a person with normal vision to calibrate the luminance contrast of displays for experimental and clinical applications. Our method combines psychophysical techniques (1) for detection (and thus elimination or reduction) of display saturating nonlinearities; (2) for luminance (gamma function) estimation and linearization without use of a photometer; and (3) to measure without a photometer the luminance ratios of the display’s three color channels that are used in a bit-stealing procedure to expand the luminance resolution of the display. Using a photometer we verified that the calibration achieved with this procedure is accurate for both LCD and CRT displays enabling testing of letter contrast sensitivity to 0.5%. Our visual calibration procedure enables clinical, internet and home implementation and calibration verification of electronic contrast testing. PMID:23643843
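
    The linearization step rests on the display's power-law response: once the gamma exponent has been estimated (psychophysically in the paper), the inverse power law gives the digital value needed for any target relative luminance. The gamma value 2.2 below is a common assumption for illustration, not the paper's measured value.

```python
# Sketch of gamma linearization: invert L_rel = (v / v_max) ** gamma to get
# the digital drive value v for a desired relative luminance L_rel.
def digital_for_luminance(L_rel, gamma=2.2, v_max=255):
    # v = v_max * L_rel ** (1/gamma)
    return round(v_max * L_rel ** (1.0 / gamma))
```

    Driving the display through such an inverse-gamma lookup makes commanded contrast proportional to displayed luminance contrast, which is the precondition for accurate letter-contrast testing.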

  5. An MRI-compatible platform for one-dimensional motion management studies in MRI.

    PubMed

    Nofiele, Joris; Yuan, Qing; Kazem, Mohammad; Tatebe, Ken; Torres, Quinn; Sawant, Amit; Pedrosa, Ivan; Chopra, Rajiv

    2016-08-01

    Abdominal MRI remains challenging because of respiratory motion. Motion compensation strategies are difficult to compare clinically because of the variability across human subjects. The goal of this study was to evaluate a programmable system for one-dimensional motion management MRI research. A system comprising a programmable motorized linear stage and a computer was assembled and tested in the MRI environment. Tests of the mutual interference between the platform and a whole-body MRI were performed. Organ trajectories generated from a high-temporal resolution scan of a healthy volunteer were used in phantom tests to evaluate the effects of motion on image quality and quantitative MRI measurements. No interference between the motion platform and the MRI was observed, and reliable motion could be produced across a wide range of imaging conditions. Motion-related artifacts commensurate with motion amplitude, frequency, and waveform were observed. T2 measurement of a kidney lesion in an abdominal phantom showed that its value decreased by 67% with physiologic motion, but could be partially recovered with navigator-based motion-compensation. The motion platform can produce reliable linear motion within a whole-body MRI. The system can serve as a foundation for a research platform to investigate and develop motion management approaches for MRI. Magn Reson Med 76:702-712, 2016. © 2015 Wiley Periodicals, Inc. © 2015 Wiley Periodicals, Inc.

  6. Link calibration against receiver calibration: an assessment of GPS time transfer uncertainties

    NASA Astrophysics Data System (ADS)

    Rovera, G. D.; Torre, J.-M.; Sherwood, R.; Abgrall, M.; Courde, C.; Laas-Bourez, M.; Uhrich, P.

    2014-10-01

    We present a direct comparison between two different techniques for the relative calibration of time transfer between remote time scales when using the signals transmitted by the Global Positioning System (GPS). Relative calibration estimates the delay of equipment or the delay of a time transfer link with respect to reference equipment. It is based on the circulation of some travelling GPS equipment between the stations in the network, against which the local equipment is measured. Two techniques can be considered: first a station calibration by the computation of the hardware delays of the local GPS equipment; second the computation of a global hardware delay offset for the time transfer between the reference points of two remote time scales. This last technique is called a ‘link’ calibration, with respect to the other one, which is a ‘receiver’ calibration. The two techniques require different measurements on site, which change the uncertainty budgets, and we discuss this and related issues. We report on one calibration campaign organized during Autumn 2013 between Observatoire de Paris (OP), Paris, France, Observatoire de la Côte d'Azur (OCA), Calern, France, and NERC Space Geodesy Facility (SGF), Herstmonceux, United Kingdom. The travelling equipment comprised two GPS receivers of different types, along with the required signal generator and distribution amplifier, and one time interval counter. We show the different ways to compute uncertainty budgets, leading to improvement factors of 1.2 to 1.5 on the hardware delay uncertainties when comparing the relative link calibration to the relative receiver calibration.

  7. Machine Shop Lathes.

    ERIC Educational Resources Information Center

    Dunn, James

    This guide, the second in a series of five machine shop curriculum manuals, was designed for use in machine shop courses in Oklahoma. The purpose of the manual is to equip students with basic knowledge and skills that will enable them to enter the machine trade at the machine-operator level. The curriculum is designed so that it can be used in…

  8. Pattern Recognition Approaches for Breast Cancer DCE-MRI Classification: A Systematic Review.

    PubMed

    Fusco, Roberta; Sansone, Mario; Filice, Salvatore; Carone, Guglielmo; Amato, Daniela Maria; Sansone, Carlo; Petrillo, Antonella

    2016-01-01

    We performed a systematic review of several pattern analysis approaches for classifying breast lesions using dynamic, morphological, and textural features in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Several machine learning approaches, namely artificial neural networks (ANN), support vector machines (SVM), linear discriminant analysis (LDA), tree-based classifiers (TC), and Bayesian classifiers (BC), and features used for classification are described. The findings of a systematic review of 26 studies are presented. The sensitivity and specificity are respectively 91% and 83% for ANN, 85% and 82% for SVM, 96% and 85% for LDA, 92% and 87% for TC, and 82% and 85% for BC. The sensitivity and specificity are respectively 82% and 74% for dynamic features, 93% and 60% for morphological features, 88% and 81% for textural features, 95% and 86% for a combination of dynamic and morphological features, and 88% and 84% for a combination of dynamic, morphological, and other features. LDA and TC have the best performance. A combination of dynamic and morphological features gives the best performance.

  9. Monitoring scanner calibration using the image-derived arterial blood SUV in whole-body FDG-PET.

    PubMed

    Maus, Jens; Hofheinz, Frank; Apostolova, Ivayla; Kreissl, Michael C; Kotzerke, Jörg; van den Hoff, Jörg

    2018-05-15

    The current de facto standard for quantification of tumor metabolism in oncological whole-body PET is the standardized uptake value (SUV) approach. SUV determination requires accurate scanner calibration. Residual inaccuracies of the calibration lead to biased SUV values. Especially, this can adversely affect multicenter trials where it is difficult to ensure reliable cross-calibration across participating sites. The goal of the present work was the evaluation of a new method for monitoring scanner calibration utilizing the image-derived arterial blood SUV (BSUV) averaged over a sufficiently large number of whole-body FDG-PET investigations. Data of 681 patients from three sites which underwent routine 18F-FDG PET/CT or PET/MR were retrospectively analyzed. BSUV was determined in the descending aorta using a three-dimensional ROI concentric to the aorta's centerline. The ROI was delineated in the CT or MRI images and transferred to the PET images. A minimum ROI volume of 5 mL and a concentric safety margin to the aortic wall were observed. Mean BSUV, standard deviation (SD), and standard error of the mean (SE) were computed for three groups of patients at each site, investigated 2 years apart, respectively, with group sizes between 53 and 100 patients. Differences of mean BSUV between the individual groups and sites were determined. SD (SE) of BSUV in the different groups ranged from 14.3 to 20.7% (1.7 to 2.8%). Differences of mean BSUV between intra-site groups were small (1.1-6.3%). Only one out of nine of these differences reached statistical significance. Inter-site differences were distinctly larger (12.6-25.1%) and highly significant (P<0.001). Image-based determination of the group-averaged blood SUV in modestly large groups of whole-body FDG-PET investigations is a viable approach for ensuring consistent scanner calibration over time and across different sites. We propose this approach as a quality control and cross-calibration tool augmenting established …
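
    The group statistics used for monitoring (mean BSUV, percent SD, and percent SE of the mean) are straightforward to compute; a minimal sketch with synthetic SUV values:

```python
# Sketch: group mean, percent standard deviation, and percent standard error
# of image-derived blood SUVs, as used for calibration monitoring.
# The SUV values are synthetic, not patient data.
import math

def group_stats(bsuvs):
    n = len(bsuvs)
    mean = sum(bsuvs) / n
    sd = math.sqrt(sum((b - mean) ** 2 for b in bsuvs) / (n - 1))  # sample SD
    se = sd / math.sqrt(n)                                          # SE of mean
    return mean, 100 * sd / mean, 100 * se / mean
```

    Because SE shrinks with the square root of the group size, groups of 53–100 patients bring the SE of the mean down to a few percent even when the per-patient SD is 15–20%, which is what makes the group mean usable as a calibration monitor.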

  10. Cardiac MRI in patients with complex CHD following primary or secondary implantation of MRI-conditional pacemaker system.

    PubMed

    Al-Wakeel, Nadya; O h-Ici, Darach; Schmitt, Katharina R; Messroghli, Daniel R; Riesenkampff, Eugénie; Berger, Felix; Kuehne, Titus; Peters, Bjoern

    2016-02-01

    In patients with CHD, cardiac MRI is often indicated for functional and anatomical assessment. With the recent introduction of MRI-conditional pacemaker systems, cardiac MRI has become accessible for patients with pacemakers. The present clinical study aims to evaluate safety, susceptibility artefacts, and image reading of cardiac MRI in patients with CHD and MRI-conditional pacemaker systems. Material and methods CHD patients with MRI-conditional pacemaker systems and a clinical need for cardiac MRI were examined with a 1.5-T MRI system. Lead function was tested before and after MRI. Artefacts and image readings were evaluated using a four-point grading scale. A total of nine patients with CHD (mean age 34.0 years, range 19.5-53.6 years) received a total of 11 cardiac MRI examinations. Owing to clinical indications, seven patients had previously been converted from conventional to MRI-conditional pacemaker systems. All MRI examinations were completed without adverse effects. Device testing immediately after MRI and at follow-up showed no alteration of pacemaker device and lead function. Clinical questions could be addressed and answered in all patients. Cardiac MRI can be performed safely with high certainty of diagnosis in CHD patients with MRI-conditional pacemaker systems. In case of clinically indicated lead and box changing, CHD patients with non-MRI-conditional pacemaker systems should be considered for complete conversion to MRI-conditional systems.

  11. Calibrating the Griggs Apparatus using Experiments performed at the Quartz-Coesite Transition

    NASA Astrophysics Data System (ADS)

    Heilbronner, R.; Stunitz, H.; Richter, B.

    2015-12-01

    The Griggs deformation apparatus is increasingly used for shear experiments. The tested material is placed on a 45° pre-cut between two forcing blocks. During the experiment, the axial displacement, load, temperature, and confining pressure are recorded as a function of time. From these records, stress, strain, and other mechanical data can be calculated - provided the machine is calibrated. Experimentalists are well aware that calibrating a Griggs apparatus is not easy. The stiffness correction accounts for the elastic extension of the rig as load is applied to the sample. An 'area correction' accounts for the decreasing overlap of the forcing blocks as slip along the pre-cut progresses. Other corrections are sometimes used to account for machine specific behaviour. While the rig stiffness can be measured very accurately, the area correction involves model assumptions. Depending on the choice of the model, the calculated stresses may vary by as much as 100 MPa. Also, while the assumptions appear to be theoretically valid, in practice they tend to over-correct the data, yielding strain hardening curves even in cases where constant flow stress or weakening is expected. Using the results of experiments on quartz gouge at the quartz-coesite transition (see Richter et al. this conference), we are now able to improve and constrain our corrections. We introduce an elastic salt correction based on the assumption that the confining pressure is increased as the piston advances and reduces the volume in the confining medium. As the compressibility of salt is low, the correction is significant and increases with strain. Applying this correction, the strain hardening artefact introduced by the area correction can be counter-balanced. Using a combination of area correction and salt correction we can now reproduce strain weakening, for which there is evidence in samples where coesite transforms back to quartz.
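
    The geometry behind an area correction can be sketched as the shrinking overlap of the 45° precut contact surface, which is an ellipse with semi-axes a = √2·r and b = r for a cylindrical sample of radius r. The lens-shaped overlap of two identical ellipses offset by t along the major axis follows from rescaling the standard circle-circle overlap formula. This geometry and the numbers are illustrative assumptions, not the authors' specific correction model.

```python
# Sketch: remaining contact area between two forcing blocks as slip t
# accumulates along a 45-degree precut in a cylinder of radius r.
# Rescale the ellipse (semi-axes a = sqrt(2)*r, b = r) to a unit circle,
# apply the circular-lens overlap formula, and scale the area back by a*b.
import math

def overlap_area(r, t):
    a, b = math.sqrt(2) * r, r        # contact ellipse of a 45-degree cut
    u = t / a                          # offset in rescaled unit-circle units
    if u >= 2:
        return 0.0                     # blocks no longer overlap
    return a * b * (2 * math.acos(u / 2) - (u / 2) * math.sqrt(4 - u * u))
```

    Dividing the measured force by this decreasing area (rather than the initial area) is what raises the computed stress with strain, and hence how an over-correction can manufacture apparent strain hardening.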

  12. A method of calibrating wind velocity sensors with a modified gas flow calibrator

    NASA Technical Reports Server (NTRS)

    Stump, H. P.

    1978-01-01

    A procedure was described for calibrating air velocity sensors in the exhaust flow of a gas flow calibrator. The average velocity in the test section located at the calibrator exhaust was verified from the mass flow rate accurately measured by the calibrator's precision sonic nozzles. Air at elevated pressures flowed through a series of screens, diameter changes, and flow straighteners, resulting in a smooth flow through the open test section. The modified system generated air velocities of 2 to 90 meters per second with an uncertainty of about two percent for speeds below 15 meters per second and four percent for the higher speeds. Wind tunnel data correlated well with that taken in the flow calibrator.

  13. Synthesis Polarimetry Calibration

    NASA Astrophysics Data System (ADS)

    Moellenbrock, George

    2017-10-01

    Synthesis instrumental polarization calibration fundamentals for both linear (ALMA) and circular (EVLA) feed bases are reviewed, with special attention to the calibration heuristics supported in CASA. Practical problems affecting modern instruments are also discussed.

  14. Calibration of the ARID robot

    NASA Technical Reports Server (NTRS)

    Doty, Keith L

    1992-01-01

    The author has formulated a new, general model for specifying the kinematic properties of serial manipulators. The new model kinematic parameters do not suffer discontinuities when nominally parallel adjacent axes deviate from exact parallelism. From this new theory the author develops a first-order, lumped-parameter, calibration-model for the ARID manipulator. Next, the author develops a calibration methodology for the ARID based on visual and acoustic sensing. A sensor platform, consisting of a camera and four sonars attached to the ARID end frame, performs calibration measurements. A calibration measurement consists of processing one visual frame of an accurately placed calibration image and recording four acoustic range measurements. A minimum of two measurement protocols determine the kinematics calibration-model of the ARID for a particular region: assuming the joint displacements are accurately measured, the calibration surface is planar, and the kinematic parameters do not vary rapidly in the region. No theoretical or practical limitations appear to contra-indicate the feasibility of the calibration method developed here.

  15. Extending the Range for Force Calibration in Magnetic Tweezers

    PubMed Central

    Daldrop, Peter; Brutzer, Hergen; Huhle, Alexander; Kauert, Dominik J.; Seidel, Ralf

    2015-01-01

    Magnetic tweezers are a wide-spread tool used to study the mechanics and the function of a large variety of biomolecules and biomolecular machines. This tool uses a magnetic particle and a strong magnetic field gradient to apply defined forces to the molecule of interest. Forces are typically quantified by analyzing the lateral fluctuations of the biomolecule-tethered particle in the direction perpendicular to the applied force. Since the magnetic field pins the anisotropy axis of the particle, the lateral fluctuations follow the geometry of a pendulum with a short pendulum length along and a long pendulum length perpendicular to the field lines. Typically, the short pendulum geometry is used for force calibration by power-spectral-density (PSD) analysis, because the movement of the bead in this direction can be approximated by a simple translational motion. Here, we provide a detailed analysis of the fluctuations according to the long pendulum geometry and show that for this direction, both the translational and the rotational motions of the particle have to be considered. We provide analytical formulas for the PSD of this coupled system that agree well with PSDs obtained in experiments and simulations and that finally allow a faithful quantification of the magnetic force for the long pendulum geometry. We furthermore demonstrate that this methodology allows the calibration of much larger forces than the short pendulum geometry in a tether-length-dependent manner. In addition, the accuracy of determination of the absolute force is improved. Our force calibration based on the long pendulum geometry will facilitate high-resolution magnetic-tweezers experiments that rely on short molecules and large forces, as well as highly parallelized measurements that use low frame rates. PMID:25992733
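
    For context, the standard short-pendulum calibration that the paper extends can be reduced to the equipartition relation F = kB·T·L/⟨x²⟩, linking the force to the tether length L and the variance of the lateral fluctuations. This is the textbook estimate, not the paper's coupled translational-rotational PSD model; the tether length and fluctuation values below are illustrative.

```python
# Sketch: equipartition-based force estimate in magnetic tweezers,
# F = kB * T * L / <x^2>. Values are illustrative.
KB = 1.380649e-23                      # Boltzmann constant, J/K

def force_equipartition(L, var_x, T=298.0):
    # L: tether length (m); var_x: lateral fluctuation variance (m^2)
    return KB * T * L / var_x          # force in newtons

f = force_equipartition(1e-6, (10e-9) ** 2)   # 1 um tether, 10 nm RMS -> ~41 pN
```

    The paper's contribution is to make the long-pendulum direction equally usable by accounting for the particle's rotational motion in the PSD, which extends the calibratable force range beyond what this simple relation supports in practice.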

  16. Research on camera on-orbit radiometric calibration based on black body and infrared calibration stars

    NASA Astrophysics Data System (ADS)

    Wang, YuDu; Su, XiaoFeng; Zhang, WanYing; Chen, FanSheng

    2018-05-01

    Affected by the launch process and the space environment, the response of a space camera inevitably degrades, so on-orbit radiometric calibration is necessary. In this paper, we propose a calibration method based on accurate infrared standard stars to increase the precision of infrared radiation measurement. Because stars can be treated as point targets, we use them as the radiometric calibration source and establish a Taylor-expansion method and an energy-extrapolation model based on the WISE and 2MASS catalogs. We then update the calibration results obtained from the blackbody. Finally, the calibration mechanism is designed, and the design is verified in an on-orbit test. The experimental result shows that the irradiance extrapolation error is about 3% and the accuracy of the calibration method is about 10%, indicating that the method satisfies the requirements of on-orbit calibration.

  17. Magnetic Resonance Imaging (MRI) Safety

    MedlinePlus

    What is MRI and how does it work? Magnetic resonance imaging, or MRI, is a way of obtaining detailed ...

  18. Machine learning in a graph framework for subcortical segmentation

    NASA Astrophysics Data System (ADS)

    Guo, Zhihui; Kashyap, Satyananda; Sonka, Milan; Oguz, Ipek

    2017-02-01

    Automated and reliable segmentation of subcortical structures from human brain magnetic resonance images is of great importance for volumetric and shape analyses in quantitative neuroimaging studies. However, poor boundary contrast and variable shape of these structures make automated segmentation a challenging task. We propose a 3D graph-based machine learning method, called LOGISMOS-RF, to segment the caudate and the putamen from brain MRI scans in a robust and accurate way. An atlas-based tissue classification and bias-field correction method is applied to the images to generate an initial segmentation for each structure. Then a 3D graph framework is utilized to construct a geometric graph for each initial segmentation. A locally trained random forest classifier is used to assign a cost to each graph node. The max-flow algorithm is applied to solve the segmentation problem. Evaluation was performed on a dataset of T1-weighted MRIs of 62 subjects, with 42 images used for training and 20 images for testing. For comparison, FreeSurfer, FSL and BRAINSCut approaches were also evaluated using the same dataset. Dice overlap coefficients and surface-to-surface distances between the automated segmentation and expert manual segmentations indicate the results of our method are statistically significantly more accurate than the three other methods, for both the caudate (Dice: 0.89 +/- 0.03) and the putamen (0.89 +/- 0.03).
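
    The Dice overlap coefficient used to score the segmentations is twice the intersection of the two label sets divided by the sum of their sizes; a minimal sketch on small voxel-index sets rather than full 3D masks:

```python
# Sketch: Dice overlap coefficient between two segmentations, represented
# here as sets of voxel indices. Index values are illustrative.
def dice(a, b):
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

score = dice([1, 2, 3, 4], [3, 4, 5, 6])   # 2 shared voxels of 4 each -> 0.5
```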

  19. Development of hardware system using temperature and vibration maintenance models integration concepts for conventional machines monitoring: a case study

    NASA Astrophysics Data System (ADS)

    Adeyeri, Michael Kanisuru; Mpofu, Khumbulani; Kareem, Buliaminu

    2016-03-01

    This article describes the integration of temperature and vibration models for maintenance monitoring of conventional machinery parts, whose optimal functioning is affected by abnormal changes in temperature and vibration values, resulting in machine failure and breakdown, poor product quality, inability to meet customers' demand, poor inventory control, and more. The work entails the use of temperature and vibration sensors as monitoring probes programmed in a microcontroller using the C language. The developed hardware consists of an ADXL345 vibration sensor, an AD594/595 temperature sensor with a type-K thermocouple, a microcontroller, a graphic liquid crystal display, a real-time clock, etc. The hardware is divided into two units: one based at the workstation (mainly meant to monitor machine behaviour) and the other at the base station (meant to receive machine information transmitted from the workstation), working cooperatively for effective operation. The resulting hardware was calibrated, tested using model verification, and validated through least-squares and regression analysis of data read from the gearboxes of extruding and cutting machines used for polyethylene-bag production. The results confirmed the correlation existing between time, vibration and temperature, reflecting an effective formulation of the developed concept.

  20. A novel Bayesian approach to accounting for uncertainty in fMRI-derived estimates of cerebral oxygen metabolism fluctuations

    PubMed Central

    Simon, Aaron B.; Dubowitz, David J.; Blockley, Nicholas P.; Buxton, Richard B.

    2016-01-01

    Calibrated blood oxygenation level dependent (BOLD) imaging is a multimodal functional MRI technique designed to estimate changes in cerebral oxygen metabolism from measured changes in cerebral blood flow and the BOLD signal. This technique addresses fundamental ambiguities associated with quantitative BOLD signal analysis; however, its dependence on biophysical modeling creates uncertainty in the resulting oxygen metabolism estimates. In this work, we developed a Bayesian approach to estimating the oxygen metabolism response to a neural stimulus and used it to examine the uncertainty that arises in calibrated BOLD estimation due to the presence of unmeasured model parameters. We applied our approach to estimate the CMRO2 response to a visual task using the traditional hypercapnia calibration experiment as well as to estimate the metabolic response to both a visual task and hypercapnia using the measurement of baseline apparent R2′ as a calibration technique. Further, in order to examine the effects of cerebral spinal fluid (CSF) signal contamination on the measurement of apparent R2′, we examined the effects of measuring this parameter with and without CSF-nulling. We found that the two calibration techniques provided consistent estimates of the metabolic response on average, with a median R2′-based estimate of the metabolic response to CO2 of 1.4%, and R2′- and hypercapnia-calibrated estimates of the visual response of 27% and 24%, respectively. However, these estimates were sensitive to different sources of estimation uncertainty. The R2′-calibrated estimate was highly sensitive to CSF contamination and to uncertainty in unmeasured model parameters describing flow-volume coupling, capillary bed characteristics, and the iso-susceptibility saturation of blood. The hypercapnia-calibrated estimate was relatively insensitive to these parameters but highly sensitive to the assumed metabolic response to CO2. PMID:26790354
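
    The deterministic relation that the Bayesian approach wraps is the Davis-type calibrated BOLD model, δBOLD/BOLD₀ = M·(1 − f^(α−β)·r^β), where f is the CBF ratio and r the CMRO2 ratio; inverting it gives r from a measured BOLD change and flow change. The parameter values below (M, α, β) are typical literature assumptions for illustration, not the paper's Bayesian estimates.

```python
# Sketch: inverting the Davis calibrated-BOLD model for the CMRO2 ratio r.
# dbold = M * (1 - f**(alpha - beta) * r**beta)
#   =>  r = ((1 - dbold/M) * f**(beta - alpha)) ** (1/beta)
# M, alpha, beta are assumed literature-style values, not fitted here.
def cmro2_ratio(dbold, f, M=0.08, alpha=0.38, beta=1.5):
    return ((1.0 - dbold / M) * f ** (beta - alpha)) ** (1.0 / beta)
```

    The sensitivity discussed in the abstract is visible here directly: the recovered r depends on the assumed M, α, and β, which is exactly the uncertainty the Bayesian treatment propagates instead of fixing.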

  1. A novel Bayesian approach to accounting for uncertainty in fMRI-derived estimates of cerebral oxygen metabolism fluctuations.

    PubMed

    Simon, Aaron B; Dubowitz, David J; Blockley, Nicholas P; Buxton, Richard B

    2016-04-01

    Calibrated blood oxygenation level dependent (BOLD) imaging is a multimodal functional MRI technique designed to estimate changes in cerebral oxygen metabolism from measured changes in cerebral blood flow and the BOLD signal. This technique addresses fundamental ambiguities associated with quantitative BOLD signal analysis; however, its dependence on biophysical modeling creates uncertainty in the resulting oxygen metabolism estimates. In this work, we developed a Bayesian approach to estimating the oxygen metabolism response to a neural stimulus and used it to examine the uncertainty that arises in calibrated BOLD estimation due to the presence of unmeasured model parameters. We applied our approach to estimate the CMRO2 response to a visual task using the traditional hypercapnia calibration experiment as well as to estimate the metabolic response to both a visual task and hypercapnia using the measurement of baseline apparent R2' as a calibration technique. Further, in order to examine the effects of cerebral spinal fluid (CSF) signal contamination on the measurement of apparent R2', we examined the effects of measuring this parameter with and without CSF-nulling. We found that the two calibration techniques provided consistent estimates of the metabolic response on average, with a median R2'-based estimate of the metabolic response to CO2 of 1.4%, and R2'- and hypercapnia-calibrated estimates of the visual response of 27% and 24%, respectively. However, these estimates were sensitive to different sources of estimation uncertainty. The R2'-calibrated estimate was highly sensitive to CSF contamination and to uncertainty in unmeasured model parameters describing flow-volume coupling, capillary bed characteristics, and the iso-susceptibility saturation of blood. The hypercapnia-calibrated estimate was relatively insensitive to these parameters but highly sensitive to the assumed metabolic response to CO2. Copyright © 2016 Elsevier Inc. All rights reserved.
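
    The estimation strategy described above — treating unmeasured biophysical parameters as priors and marginalizing over them — can be sketched numerically. The following is a toy sketch (not the authors' code) that marginalizes a simplified Davis-type BOLD model over assumed priors on two unmeasured exponents; the measurement values, the calibration factor M, and all priors are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Davis-type BOLD model (simplified): signal change as a function of the
# flow ratio f and CMRO2 ratio r, with unmeasured exponents alpha, beta.
def bold_change(M, f, r, alpha, beta):
    return M * (1.0 - f ** (alpha - beta) * r ** beta)

# Synthetic "measurement": 40% flow increase, 1.5% BOLD increase
f_meas, ds_meas, sigma = 1.40, 0.015, 0.002
M = 0.08  # calibration scale factor (e.g. from a hypercapnia experiment)

# Broad priors on the unmeasured model parameters (illustrative values)
n_mc = 2000
alpha = rng.normal(0.38, 0.05, n_mc)  # flow-volume coupling exponent
beta = rng.normal(1.5, 0.2, n_mc)     # vessel/field-dependent exponent

# Posterior over r: average the Gaussian likelihood over the prior draws,
# i.e. marginalize out alpha and beta by Monte Carlo
r_grid = np.linspace(0.9, 1.6, 141)
like = np.array([
    np.mean(np.exp(-0.5 * ((bold_change(M, f_meas, r, alpha, beta)
                            - ds_meas) / sigma) ** 2))
    for r in r_grid
])
post = like / like.sum()

r_map = r_grid[np.argmax(post)]
print(f"MAP relative CMRO2 change: {(r_map - 1) * 100:.1f}%")
```

    Widening the priors on alpha and beta broadens the posterior over r, which is exactly the sensitivity to unmeasured model parameters the study quantifies.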

  2. Machine tool locator

    DOEpatents

    Hanlon, John A.; Gill, Timothy J.

    2001-01-01

    Machine tools can be accurately measured and positioned on manufacturing machines within very small tolerances. The system uses an autocollimator on a 3-axis mount on a manufacturing machine, positioned so as to focus on a reference tooling ball or a machine tool; a digital camera connected to the viewing end of the autocollimator; and a marker and measure generator that receives digital images from the camera, displays or measures distances between the projection reticle and the reference reticle on the monitoring screen, and relates those distances to the actual position of the autocollimator relative to the reference tooling ball. The images and measurements are used to set the position of the machine tool, to measure the size and shape of the machine tool tip, and to examine cutting edge wear.

  3. POLCAL - POLARIMETRIC RADAR CALIBRATION

    NASA Technical Reports Server (NTRS)

    Vanzyl, J.

    1994-01-01

    Calibration of polarimetric radar systems is a field of research in which great progress has been made over the last few years. POLCAL (Polarimetric Radar Calibration) is a software tool intended to assist in the calibration of Synthetic Aperture Radar (SAR) systems. In particular, POLCAL calibrates Stokes matrix format data produced as the standard product by the NASA/Jet Propulsion Laboratory (JPL) airborne imaging synthetic aperture radar (AIRSAR). POLCAL was designed to be used in conjunction with data collected by the NASA/JPL AIRSAR system. AIRSAR is a multifrequency (6 cm, 24 cm, and 68 cm wavelength), fully polarimetric SAR system which produces 12 x 12 km imagery at 10 m resolution. AIRSAR was designed as a testbed for NASA's Spaceborne Imaging Radar program. While the images produced after 1991 are thought to be calibrated (phase calibrated, cross-talk removed, channel imbalance removed, and absolutely calibrated), POLCAL can and should still be used to check the accuracy of the calibration and to correct it if necessary. Version 4.0 of POLCAL is an upgrade of POLCAL version 2.0 released to AIRSAR investigators in June, 1990. New options in version 4.0 include automatic absolute calibration of 89/90 data, distributed target analysis, calibration of nearby scenes with calibration parameters from a scene with corner reflectors, altitude or roll angle corrections, and calibration of errors introduced by known topography. Many sources of error can lead to false conclusions about the nature of scatterers on the surface. Errors in the phase relationship between polarization channels result in incorrect synthesis of polarization states. Cross-talk, caused by imperfections in the radar antenna itself, can also lead to error. POLCAL reduces cross-talk and corrects phase calibration without the use of ground calibration equipment. Removing the antenna patterns during SAR processing also forms a very important part of the calibration of SAR data.
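
    The core correction POLCAL performs — undoing cross-talk and channel imbalance introduced on transmit and receive — can be illustrated on a single scattering matrix. This is a toy sketch, not POLCAL itself; the distortion-matrix model and all numerical values are assumptions for illustration.

```python
import numpy as np

# Hypothetical distortion model: the measured scattering matrix is
# M = R @ S @ T, where R and T hold receive/transmit cross-talk terms
# (delta) and a channel-imbalance term (f). Values are illustrative only.
delta1, delta2 = 0.05 + 0.02j, -0.03 + 0.01j
f = 1.1 * np.exp(1j * 0.2)
R = np.array([[1.0, delta1], [delta2, f]])
T = np.array([[1.0, delta2], [delta1, f]])

S_true = np.array([[1.0, 0.1j], [0.1j, -0.8]])  # example target response
M = R @ S_true @ T                              # what the radar records

# Calibration: invert the estimated distortion matrices
S_cal = np.linalg.inv(R) @ M @ np.linalg.inv(T)

print("residual:", np.max(np.abs(S_cal - S_true)))
```

    In practice the hard part, which POLCAL addresses, is estimating R and T from the data themselves (distributed targets, corner reflectors) rather than assuming them known.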

  4. Pattern classification of fMRI data: applications for analysis of spatially distributed cortical networks.

    PubMed

    Yourganov, Grigori; Schmah, Tanya; Churchill, Nathan W; Berman, Marc G; Grady, Cheryl L; Strother, Stephen C

    2014-08-01

    The field of fMRI data analysis is rapidly growing in sophistication, particularly in the domain of multivariate pattern classification. However, the interaction between the properties of the analytical model and the parameters of the BOLD signal (e.g. signal magnitude, temporal variance and functional connectivity) is still an open problem. We addressed this problem by evaluating a set of pattern classification algorithms on simulated and experimental block-design fMRI data. The set of classifiers consisted of linear and quadratic discriminants, linear support vector machine, and linear and nonlinear Gaussian naive Bayes classifiers. For linear discriminant, we used two methods of regularization: principal component analysis, and ridge regularization. The classifiers were used (1) to classify the volumes according to the behavioral task that was performed by the subject, and (2) to construct spatial maps that indicated the relative contribution of each voxel to classification. Our evaluation metrics were: (1) accuracy of out-of-sample classification and (2) reproducibility of spatial maps. In simulated data sets, we performed an additional evaluation of spatial maps with ROC analysis. We varied the magnitude, temporal variance and connectivity of simulated fMRI signal and identified the optimal classifier for each simulated environment. Overall, the best performers were linear and quadratic discriminants (operating on principal components of the data matrix) and, in some rare situations, a nonlinear Gaussian naïve Bayes classifier. The results from the simulated data were supported by within-subject analysis of experimental fMRI data, collected in a study of aging. This is the first study that systematically characterizes interactions between analysis model and signal parameters (such as magnitude, variance and correlation) on the performance of pattern classifiers for fMRI. Copyright © 2014 Elsevier Inc. All rights reserved.
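
    A minimal version of such a classifier comparison can be sketched with scikit-learn. The data here are synthetic stand-ins for block-design fMRI volumes, and the classifier set mirrors the one named in the abstract (PCA-regularized discriminants, a linear SVM, and Gaussian naive Bayes); all sizes and settings are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Synthetic stand-in for block-design fMRI: 200 "volumes" x 500 "voxels",
# labelled by task condition
X, y = make_classification(n_samples=200, n_features=500, n_informative=20,
                           random_state=0)

classifiers = {
    "PCA+LDA": make_pipeline(PCA(30), LinearDiscriminantAnalysis()),
    "PCA+QDA": make_pipeline(PCA(30), QuadraticDiscriminantAnalysis()),
    "linear SVM": LinearSVC(C=1.0),
    "Gaussian NB": GaussianNB(),
}

# Out-of-sample accuracy by 5-fold cross validation (the paper additionally
# scores reproducibility of the derived spatial maps, omitted here)
results = {name: cross_val_score(clf, X, y, cv=5).mean()
           for name, clf in classifiers.items()}
for name, acc in results.items():
    print(f"{name:12s} accuracy: {acc:.2f}")
```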

  5. Deriving global parameter estimates for the Noah land surface model using FLUXNET and machine learning

    NASA Astrophysics Data System (ADS)

    Chaney, Nathaniel W.; Herman, Jonathan D.; Ek, Michael B.; Wood, Eric F.

    2016-11-01

    With their origins in numerical weather prediction and climate modeling, land surface models aim to accurately partition the surface energy balance. An overlooked challenge in these schemes is the role of model parameter uncertainty, particularly at unmonitored sites. This study provides global parameter estimates for the Noah land surface model using 85 eddy covariance sites in the global FLUXNET network. The at-site parameters are first calibrated using a Latin Hypercube-based ensemble of the most sensitive parameters, determined by the Sobol method, to be the minimum stomatal resistance (rs,min), the Zilitinkevich empirical constant (Czil), and the bare soil evaporation exponent (fxexp). Calibration leads to an increase in the mean Kling-Gupta Efficiency performance metric from 0.54 to 0.71. These calibrated parameter sets are then related to local environmental characteristics using the Extra-Trees machine learning algorithm. The fitted Extra-Trees model is used to map the optimal parameter sets over the globe at a 5 km spatial resolution. The leave-one-out cross validation of the mapped parameters using the Noah land surface model suggests that there is the potential to skillfully relate calibrated model parameter sets to local environmental characteristics. The results demonstrate the potential to use FLUXNET to tune the parameterizations of surface fluxes in land surface models and to provide improved parameter estimates over the globe.
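
    The regionalization step — relating site-calibrated parameters to environmental characteristics with Extra-Trees and validating with leave-one-site-out cross validation — can be sketched as follows. The predictors, the synthetic "calibrated" parameter, and all settings are assumptions for illustration, not the study's actual data.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)

# Synthetic stand-in for the 85 FLUXNET sites: a few environmental
# predictors per site and a calibrated Noah parameter (a fake
# rs_min-like quantity) that depends on them.
n_sites = 85
env = rng.uniform(0, 1, size=(n_sites, 3))
rs_min = 40 + 200 * env[:, 0] * (1 - env[:, 1]) + rng.normal(0, 5, n_sites)

# Leave-one-site-out cross validation of the regionalization
errors = []
for train, test in LeaveOneOut().split(env):
    model = ExtraTreesRegressor(n_estimators=100, random_state=0)
    model.fit(env[train], rs_min[train])
    errors.append(abs(model.predict(env[test])[0] - rs_min[test][0]))

print(f"median leave-one-site-out error: {np.median(errors):.1f}")

# The fitted model is then applied to gridded global predictors
global_env = rng.uniform(0, 1, size=(1000, 3))  # stand-in for 5 km cells
global_map = ExtraTreesRegressor(n_estimators=100, random_state=0) \
    .fit(env, rs_min).predict(global_env)
```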

  6. Improved dewpoint-probe calibration

    NASA Technical Reports Server (NTRS)

    Stephenson, J. G.; Theodore, E. A.

    1978-01-01

    Relatively-simple pressure-control apparatus calibrates dewpoint probes considerably faster than conventional methods, with no loss of accuracy. Technique requires only pressure measurement at each calibration point and single absolute-humidity measurement at beginning of run. Several probes can be calibrated simultaneously and points can be checked above room temperature.

  7. MRI of retinoblastoma

    PubMed Central

    Razek, A A K A; Elkhamary, S

    2011-01-01

    We review the role of MRI in retinoblastoma and simulating lesions. Retinoblastoma is the most common paediatric intra-ocular tumour. It may be endophytic, exophytic or a diffuse infiltrating tumour. MRI can detect intra-ocular, extra-ocular and intracranial extension of the tumour. MRI is essential for monitoring patients after treatment and detection of associated second malignancies. It helps to differentiate the tumour from simulating lesions presenting with leukocoria. PMID:21849363

  8. Relaxivity-iron calibration in hepatic iron overload: Probing underlying biophysical mechanisms using a Monte Carlo model

    PubMed Central

    Ghugre, Nilesh R.; Wood, John C.

    2010-01-01

    Iron overload is a serious condition for patients with β-thalassemia, transfusion-dependent sickle cell anemia and inherited disorders of iron metabolism. MRI is becoming increasingly important in non-invasive quantification of tissue iron, overcoming the drawbacks of traditional techniques (liver biopsy). R2*(1/T2*) rises linearly with iron while R2(1/T2) has a curvilinear relationship in human liver. Although recent work has demonstrated clinically-valid estimates of human liver iron, the calibration varies with MRI sequence, field strength, iron chelation therapy and organ imaged, forcing recalibration in patients. To understand and correct these limitations, a thorough understanding of the underlying biophysics is of critical importance. Toward this end, a Monte Carlo based approach, using human liver as a ‘model’ tissue system, was employed to determine the contribution of particle size and distribution on MRI signal relaxation. Relaxivities were determined for hepatic iron concentrations (HIC) ranging from 0.5–40 mg iron/ g dry tissue weight. Model predictions captured the linear and curvilinear relationship of R2* and R2 with HIC respectively and were within in vivo confidence bounds; contact or chemical exchange mechanisms were not necessary. A validated and optimized model will aid understanding and quantification of iron-mediated relaxivity in tissues where biopsy is not feasible (heart, spleen). PMID:21337413
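
    The qualitative behaviour the Monte Carlo model captures for R2* — signal dephasing that grows with iron concentration — can be illustrated with a far simpler static-dephasing toy model (not the authors' proton random-walk simulation). The Lorentzian frequency-offset distribution and the linewidth-per-HIC constant below are assumptions chosen so that R2* rises linearly with HIC.

```python
import numpy as np

rng = np.random.default_rng(1)

def r2star_toy(hic, n_protons=50_000, t=1e-3):
    """Static-dephasing toy model of iron-mediated relaxation.

    Each proton sees a constant frequency offset drawn from a Lorentzian
    whose width scales with hepatic iron concentration (hic, mg Fe/g dry
    weight); the net signal at echo time t then decays mono-exponentially.
    """
    width_per_hic = 80.0  # assumed rad/s of linewidth per unit HIC
    omega = width_per_hic * hic * rng.standard_cauchy(n_protons)
    signal = np.abs(np.mean(np.exp(1j * omega * t)))
    return -np.log(signal) / t  # apparent R2*, 1/s

hics = [5, 10, 20, 40]
rates = [r2star_toy(h) for h in hics]
for h, r in zip(hics, rates):
    print(f"HIC {h:2d} mg/g -> R2* = {r:6.1f} 1/s")
```

    Reproducing the curvilinear R2-HIC relationship additionally requires proton diffusion between field perturbers of realistic size and spatial distribution, which is what the full Monte Carlo model simulates.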

  9. Automatic and Reproducible Positioning of Phase-Contrast MRI for the Quantification of Global Cerebral Blood Flow

    PubMed Central

    Liu, Peiying; Lu, Hanzhang; Filbey, Francesca M.; Pinkham, Amy E.; McAdams, Carrie J.; Adinoff, Bryon; Daliparthi, Vamsi; Cao, Yan

    2014-01-01

    Phase-Contrast MRI (PC-MRI) is a noninvasive technique to measure blood flow. In particular, global but highly quantitative cerebral blood flow (CBF) measurement using PC-MRI complements several other CBF mapping methods such as arterial spin labeling and dynamic susceptibility contrast MRI by providing a calibration factor. The ability to estimate blood supply in physiological units also lays a foundation for assessment of brain metabolic rate. However, a major obstacle before wider applications of this method is that the slice positioning of the scan, ideally placed perpendicular to the feeding arteries, requires considerable expertise and can present a burden to the operator. In the present work, we proposed that the majority of PC-MRI scans can be positioned using an automatic algorithm, leaving only a small fraction of arteries requiring manual positioning. We implemented and evaluated an algorithm for this purpose based on feature extraction of a survey angiogram, which is of minimal operator dependence. In a comparative test-retest study with 7 subjects, the blood flow measurement using this algorithm showed an inter-session coefficient of variation (CoV) of . The Bland-Altman method showed that the automatic method differs from the manual method by between and , for of the CBF measurements. This is comparable to the variance in CBF measurement using manually-positioned PC MRI alone. In a further application of this algorithm to 157 consecutive subjects from typical clinical cohorts, the algorithm provided successful positioning in 89.7% of the arteries. In 79.6% of the subjects, all four arteries could be planned using the algorithm. Chi-square tests of independence showed that the success rate was not dependent on the age or gender, but the patients showed a trend of lower success rate (p = 0.14) compared to healthy controls. In conclusion, this automatic positioning algorithm could improve the application of PC-MRI in CBF quantification. PMID:24787742
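
    The geometric heart of such automatic positioning — placing the imaging slice perpendicular to the local vessel direction — can be sketched by taking the dominant axis (via SVD) of centerline points near the target location. The synthetic centerline and neighbourhood size below are assumptions; the authors' feature-extraction pipeline for the survey angiogram is considerably more involved.

```python
import numpy as np

# Toy vessel centerline: a gently curving path standing in for points
# segmented from a survey angiogram (coordinates in mm, illustrative)
t = np.linspace(0.0, 1.0, 50)
centerline = np.c_[5 * np.sin(t), 2 * t, 40 * t + 3 * np.sin(3 * t)]

# Local vessel direction at the target point = dominant axis (first right
# singular vector) of the centred neighbouring centerline points
i = 25
nbr = centerline[i - 5:i + 6]
_, _, vt = np.linalg.svd(nbr - nbr.mean(axis=0))
normal = vt[0] / np.linalg.norm(vt[0])

# The PC-MRI slice passes through the point, perpendicular to the vessel
point = centerline[i]
print("slice centre:", np.round(point, 1),
      "slice normal:", np.round(normal, 2))
```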

  10. A Natural Language Processing-based Model to Automate MRI Brain Protocol Selection and Prioritization.

    PubMed

    Brown, Andrew D; Marotta, Thomas R

    2017-02-01

    Incorrect imaging protocol selection can contribute to increased healthcare cost and waste. To help healthcare providers improve the quality and safety of medical imaging services, we developed and evaluated three natural language processing (NLP) models to determine whether NLP techniques could be employed to aid in clinical decision support for protocoling and prioritization of magnetic resonance imaging (MRI) brain examinations. To test the feasibility of using an NLP model to support clinical decision making for MRI brain examinations, we designed three different medical imaging prediction tasks, each with a unique outcome: selecting an examination protocol, evaluating the need for contrast administration, and determining priority. We created three models for each prediction task, each using a different classification algorithm-random forest, support vector machine, or k-nearest neighbor-to predict outcomes based on the narrative clinical indications and demographic data associated with 13,982 MRI brain examinations performed from January 1, 2013 to June 30, 2015. Test datasets were used to calculate the accuracy, sensitivity and specificity, predictive values, and the area under the curve. Our optimal results show an accuracy of 82.9%, 83.0%, and 88.2% for the protocol selection, contrast administration, and prioritization tasks, respectively, demonstrating that predictive algorithms can be used to aid in clinical decision support for examination protocoling. NLP models developed from the narrative clinical information provided by referring clinicians and demographic data are feasible methods to predict the protocol and priority of MRI brain examinations. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
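
    A minimal version of the protocol-selection task can be sketched as a text classifier. The tiny indication list, the labels, and the pipeline below are illustrative assumptions, using one of the algorithms named in the abstract (random forest) over TF-IDF features of the narrative indication.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Tiny illustrative indications; the real model trained on the narrative
# indications of 13,982 examinations plus demographic data.
indications = [
    "acute stroke symptoms left sided weakness", "known MS lesion follow up",
    "seizure new onset rule out mass", "pituitary adenoma follow up",
    "worst headache of life rule out bleed", "memory loss dementia workup",
    "known glioma follow up post resection", "sudden aphasia and facial droop",
]
protocols = ["stroke", "ms", "tumor", "pituitary",
             "stroke", "dementia", "tumor", "stroke"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      RandomForestClassifier(n_estimators=200, random_state=0))
model.fit(indications, protocols)

print(model.predict(["new right sided weakness concern for stroke"]))
```

    Contrast administration and prioritization can be framed the same way, as separate classifiers over the same features.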

  11. Continuous Rapid Quantification of Stroke Volume using Magnetohydrodynamic Voltages in 3T MRI

    PubMed Central

    Gregory, T. Stan; Oshinski, John; Schmidt, Ehud J.; Kwong, Raymond Y.; Stevenson, William G.; Tse, Zion Tsz Ho

    2015-01-01

    Background To develop a technique to non-invasively estimate Stroke Volume (SV) in real-time during Magnetic Resonance Imaging (MRI) guided procedures, based on induced Magnetohydrodynamic Voltages (VMHD) that occur in Electrocardiogram (ECG) recordings during MRI exams, leaving the MRI scanner free to perform other imaging tasks. Due to the relationship between blood-flow (BF) and VMHD, we hypothesized that a method to obtain SV could be derived from extracted VMHD vectors in the Vectorcardiogram frame-of-reference (VMHDVCG). Methods and Results To estimate a subject-specific BF-VMHD model, VMHDVCG was acquired during a 20-second breath-hold and calibrated versus aortic BF measured using Phase Contrast Magnetic Resonance (PCMR) in 10 subjects (n=10) and one subject diagnosed with Premature Ventricular Contractions (PVCs). Beat-to-beat validation of VMHDVCG-derived BF was performed using Real-Time Phase Contrast (RTPC) imaging in 7 healthy subjects (n=7) during a 15-minute cardiac exercise stress test and 30 minutes after stress relaxation in 3T MRIs. Subject-specific equations were derived to correlate VMHDVCG to BF at rest, and validated using RTPC. An average error of 7.22% and 3.69% in SV estimation, respectively, was found during peak stress and after complete relaxation. Measured beat-to-beat blood flow time-histories derived from RTPC and VMHD were highly correlated using a Spearman Rank Correlation Coefficient during stress tests (0.89) and after stress relaxation (0.86). Conclusions Accurate beat-to-beat SV and BF were estimated using VMHDVCG extracted from intra-MRI 12-lead ECGs, providing a means to enhance patient monitoring during MR imaging and MR-guided interventions. PMID:26628581
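
    The subject-specific calibration idea — fit a relation between MHD voltage amplitude and phase-contrast flow during a short calibration window, then invert it beat-to-beat — can be sketched with a simple least-squares line. The linear VMHD-flow model and all numbers are toy assumptions, not the authors' vectorcardiogram processing.

```python
import numpy as np

rng = np.random.default_rng(2)

# Calibration window (toy): per-beat aortic flow from phase-contrast MRI
# alongside the per-beat MHD voltage amplitude extracted from the ECG.
flow_pcmr = rng.uniform(60, 110, 20)                        # mL/beat
vmhd = 0.012 * flow_pcmr + 0.05 + rng.normal(0, 0.02, 20)   # mV, toy model

# Subject-specific calibration: least-squares line mapping VMHD -> flow
slope, intercept = np.polyfit(vmhd, flow_pcmr, 1)

# Monitoring phase: beats observed only through the ECG's MHD voltage
new_vmhd = np.array([0.9, 1.1, 1.3])
sv_est = slope * new_vmhd + intercept
print(np.round(sv_est, 1))  # estimated per-beat stroke volume, mL
```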

  12. Optogenetic Functional MRI

    PubMed Central

    Lin, Peter; Fang, Zhongnan; Liu, Jia; Lee, Jin Hyung

    2016-01-01

    The investigation of the functional connectivity of precise neural circuits across the entire intact brain can be achieved through optogenetic functional magnetic resonance imaging (ofMRI), which is a novel technique that combines the relatively high spatial resolution of high-field fMRI with the precision of optogenetic stimulation. Fiber optics that enable delivery of specific wavelengths of light deep into the brain in vivo are implanted into regions of interest in order to specifically stimulate targeted cell types that have been genetically induced to express light-sensitive trans-membrane conductance channels, called opsins. fMRI is used to provide a non-invasive method of determining the brain's global dynamic response to optogenetic stimulation of specific neural circuits through measurement of the blood-oxygen-level-dependent (BOLD) signal, which provides an indirect measurement of neuronal activity. This protocol describes the construction of fiber optic implants, the implantation surgeries, the imaging with photostimulation and the data analysis required to successfully perform ofMRI. In summary, the precise stimulation and whole-brain monitoring ability of ofMRI are crucial factors in making ofMRI a powerful tool for the study of the connectomics of the brain in both healthy and diseased states. PMID:27167840

  13. Machinability of nickel based alloys using electrical discharge machining process

    NASA Astrophysics Data System (ADS)

    Khan, M. Adam; Gokul, A. K.; Bharani Dharan, M. P.; Jeevakarthikeyan, R. V. S.; Uthayakumar, M.; Thirumalai Kumaran, S.; Duraiselvam, M.

    2018-04-01

    The high temperature materials such as nickel based alloys and austenitic steel are frequently used for manufacturing critical aero engine turbine components. Literature on conventional and unconventional machining of steel materials is abundant over the past three decades. However, machining studies on superalloys remain challenging due to their inherent properties, and these materials are difficult to cut with conventional processes. This research therefore focuses on an unconventional machining process for nickel alloys. Inconel 718 and Monel 400 are the two candidate materials used for the electrical discharge machining (EDM) process. The investigation is to prepare a blind hole using a copper electrode of 6 mm diameter. Electrical parameters are varied to produce a plasma spark for the diffusion process, and machining time is held constant so that the experimental results for the two materials can be compared. The influence of process parameters on the tool wear mechanism and material removal is considered in the proposed experimental design. During machining, the tool is prone to discharging more material due to the production of a high energy plasma spark and the eddy current effect. The surface morphology of the machined surface was observed with a high resolution FE-SEM. Fused electrode material was found as spherical clumps over the machined surface. Surface roughness was also measured from the surface profile using a profilometer. It is confirmed that there is no deviation and that precise roundness of the drilled hole is maintained.

  14. Improving Machining Accuracy of CNC Machines with Innovative Design Methods

    NASA Astrophysics Data System (ADS)

    Yemelyanov, N. V.; Yemelyanova, I. V.; Zubenko, V. L.

    2018-03-01

    The article considers achieving the machining accuracy of CNC machines by applying innovative methods in the modelling and design of machining systems, drives and machine processes. The topological method of analysis involves visualizing the system as matrices of block graphs with a varying degree of detail between the upper and lower hierarchy levels. This approach combines the advantages of graph theory and the efficiency of decomposition methods; it also has the visual clarity inherent in both topological models and structural matrices, as well as the resiliency of linear algebra as part of the matrix-based research. The focus of the study is on the design of automated machine workstations, systems, machines and units, which can be broken into interrelated parts and presented as algebraic, topological and set-theoretical models. Every model can be transformed into a model of another type and, as a result, can be interpreted as a system of linear and non-linear equations whose solutions determine the system parameters. This paper analyses the dynamic parameters of the 1716PF4 machine at the design and exploitation stages. Having researched the impact of the system dynamics on component quality, the authors have developed a range of practical recommendations which made it possible to considerably reduce the amplitude of relative motion, exclude some resonance zones within the spindle speed range of 0-6000 min-1, and improve machining accuracy.

  15. Ultra precision machining

    NASA Astrophysics Data System (ADS)

    Debra, Daniel B.; Hesselink, Lambertus; Binford, Thomas

    1990-05-01

    There are a number of fields that require, or can use to advantage, very high precision in machining. For example, further development of high energy lasers and x ray astronomy depends critically on the manufacture of lightweight reflecting metal optical components. To fabricate these optical components with machine tools, they will be made of metal with a mirror quality surface finish. By mirror quality surface finish, it is meant that dimensional tolerances are on the order of 0.02 microns with a surface roughness of 0.07. These accuracy targets fall in the category of ultra precision machining. They cannot be achieved by a simple extension of conventional machining processes and techniques; they require single crystal diamond tools, special attention to vibration isolation, special isolation of the machine metrology, and on-line correction of imperfections in the motion of the machine carriages on their ways.

  16. Quantum machine learning.

    PubMed

    Biamonte, Jacob; Wittek, Peter; Pancotti, Nicola; Rebentrost, Patrick; Wiebe, Nathan; Lloyd, Seth

    2017-09-13

    Fuelled by increasing computer power and algorithmic advances, machine learning techniques have become powerful tools for finding patterns in data. Quantum systems produce atypical patterns that classical systems are thought not to produce efficiently, so it is reasonable to postulate that quantum computers may outperform classical computers on machine learning tasks. The field of quantum machine learning explores how to devise and implement quantum software that could enable machine learning that is faster than that of classical computers. Recent work has produced quantum algorithms that could act as the building blocks of machine learning programs, but the hardware and software challenges are still considerable.

  17. Quantum machine learning

    NASA Astrophysics Data System (ADS)

    Biamonte, Jacob; Wittek, Peter; Pancotti, Nicola; Rebentrost, Patrick; Wiebe, Nathan; Lloyd, Seth

    2017-09-01

    Fuelled by increasing computer power and algorithmic advances, machine learning techniques have become powerful tools for finding patterns in data. Quantum systems produce atypical patterns that classical systems are thought not to produce efficiently, so it is reasonable to postulate that quantum computers may outperform classical computers on machine learning tasks. The field of quantum machine learning explores how to devise and implement quantum software that could enable machine learning that is faster than that of classical computers. Recent work has produced quantum algorithms that could act as the building blocks of machine learning programs, but the hardware and software challenges are still considerable.

  18. Computer-assisted segmentation of white matter lesions in 3D MR images using support vector machine.

    PubMed

    Lao, Zhiqiang; Shen, Dinggang; Liu, Dengfeng; Jawad, Abbas F; Melhem, Elias R; Launer, Lenore J; Bryan, R Nick; Davatzikos, Christos

    2008-03-01

    Brain lesions, especially white matter lesions (WMLs), are associated with cardiac and vascular disease, but also with normal aging. Quantitative analysis of WML in large clinical trials is becoming more and more important. In this article, we present a computer-assisted WML segmentation method, based on local features extracted from multiparametric magnetic resonance imaging (MRI) sequences (ie, T1-weighted, T2-weighted, proton density-weighted, and fluid attenuation inversion recovery MRI scans). A support vector machine classifier is first trained on expert-defined WMLs, and is then used to classify new scans. Postprocessing analysis further reduces false positives by using anatomic knowledge and measures of distance from the training set. Cross-validation on a population of 35 patients from three different imaging sites with WMLs of varying sizes, shapes, and locations tests the robustness and accuracy of the proposed segmentation method, compared with the manual segmentation results from two experienced neuroradiologists.
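
    The voxel-classification core of the method — train an SVM on multiparametric intensities of expert-labelled voxels, then classify new voxels — can be sketched as below. The intensity distributions are synthetic assumptions; the article's pipeline additionally uses local-neighbourhood features, anatomical postprocessing, and distance-from-training-set measures to reduce false positives.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Toy multiparametric voxel features (T1, T2, PD, FLAIR intensities).
# WMLs are assumed bright on T2/FLAIR and darker on T1 than normal tissue.
n = 1000
normal = rng.normal([70, 50, 60, 45], 8, size=(n, 4))
lesion = rng.normal([55, 80, 70, 85], 8, size=(n, 4))
X = np.vstack([normal, lesion])
y = np.r_[np.zeros(n), np.ones(n)]

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)

# Classify new voxels: 1 = lesion, 0 = normal tissue
new_voxels = np.array([[56, 82, 71, 88], [71, 49, 61, 44]])
print(clf.predict(new_voxels))
```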

  19. Stirling machine operating experience

    NASA Technical Reports Server (NTRS)

    Ross, Brad; Dudenhoefer, James E.

    1991-01-01

    Numerous Stirling machines have been built and operated, but the operating experience of these machines is not well known. It is important to examine this operating experience in detail, because it largely substantiates the claim that Stirling machines are capable of reliable and lengthy lives. The amount of data that exists is impressive, considering that many of the machines that have been built are developmental machines intended to show proof of concept, and were not expected to operate for any lengthy period of time. Some Stirling machines (typically free-piston machines) achieve long life through non-contact bearings, while other Stirling machines (typically kinematic) have achieved long operating lives through regular seal and bearing replacements. In addition to engine and system testing, life testing of critical components is also considered.

  20. Sparse network-based models for patient classification using fMRI

    PubMed Central

    Rosa, Maria J.; Portugal, Liana; Hahn, Tim; Fallgatter, Andreas J.; Garrido, Marta I.; Shawe-Taylor, John; Mourao-Miranda, Janaina

    2015-01-01

    Pattern recognition applied to whole-brain neuroimaging data, such as functional Magnetic Resonance Imaging (fMRI), has proved successful at discriminating psychiatric patients from healthy participants. However, predictive patterns obtained from whole-brain voxel-based features are difficult to interpret in terms of the underlying neurobiology. Many psychiatric disorders, such as depression and schizophrenia, are thought to be brain connectivity disorders. Therefore, pattern recognition based on network models might provide deeper insights and potentially more powerful predictions than whole-brain voxel-based approaches. Here, we build a novel sparse network-based discriminative modeling framework, based on Gaussian graphical models and L1-norm regularized linear Support Vector Machines (SVM). In addition, the proposed framework is optimized in terms of both predictive power and reproducibility/stability of the patterns. Our approach aims to provide better pattern interpretation than voxel-based whole-brain approaches by yielding stable brain connectivity patterns that underlie discriminative changes in brain function between the groups. We illustrate our technique by classifying patients with major depressive disorder (MDD) and healthy participants, in two (event- and block-related) fMRI datasets acquired while participants performed a gender discrimination task and an emotional task, respectively, during the visualization of emotionally valent faces. PMID:25463459
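
    The framework's two stages — sparse inverse-covariance (Gaussian graphical model) connectivity features per subject, then an L1-regularized linear SVM across subjects — can be sketched as follows. The toy time series, the injected group difference, and the regularization settings are illustrative assumptions, not the article's optimized pipeline.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)

def connectivity_features(ts, alpha=0.1):
    """Sparse inverse covariance (partial-correlation structure) of regional
    time series, vectorized as one subject's feature vector."""
    prec = GraphicalLasso(alpha=alpha).fit(ts).precision_
    iu = np.triu_indices_from(prec, k=1)
    return prec[iu]

# Toy cohort: 20 "patients" and 20 "controls", 10 regions x 120 time points;
# patients get an extra coupling between regions 0 and 1.
def subject(coupled):
    ts = rng.normal(size=(120, 10))
    if coupled:
        ts[:, 1] = 0.7 * ts[:, 0] + 0.5 * ts[:, 1]
    return connectivity_features(ts)

X = np.array([subject(True) for _ in range(20)] +
             [subject(False) for _ in range(20)])
y = np.r_[np.ones(20), np.zeros(20)]

# L1-regularized linear SVM yields a sparse, interpretable weight pattern
svm = LinearSVC(penalty="l1", dual=False, C=1.0).fit(X, y)
print("non-zero connections in the pattern:", int(np.sum(svm.coef_ != 0)))
```

    The sparsity of the SVM weights is what makes the resulting pattern readable as a small set of discriminative connections rather than a dense whole-brain map.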

  1. Technique for calibrating angular measurement devices when calibration standards are unavailable

    NASA Technical Reports Server (NTRS)

    Finley, Tom D.

    1991-01-01

    A calibration technique is proposed that will allow the calibration of certain angular measurement devices without requiring the use of an absolute standard. The technique assumes that the device to be calibrated has deterministic bias errors. A comparison device that meets the same requirements must be available. The two devices are compared; one device is then rotated with respect to the other, and a second comparison is performed. If the data are reduced using the described technique, the individual errors of the two devices can be determined.
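
    Under the stated assumptions (deterministic, periodic bias curves and a known relative rotation between the two comparisons), the comparison data can be separated into the individual device errors harmonic by harmonic; only the common mean offset remains unobservable. A sketch of one such data-reduction scheme, with hypothetical bias curves:

```python
import numpy as np

N = 36                                    # one comparison every 10 degrees
theta = np.arange(N) * 360.0 / N

# Hypothetical zero-mean deterministic bias curves (arc-seconds)
e_a = 4.0 * np.sin(np.radians(theta)) + 1.5 * np.cos(np.radians(2 * theta))
e_b = -3.0 * np.cos(np.radians(theta)) + 2.0 * np.sin(np.radians(3 * theta))

# Comparison 1: devices aligned.  Comparison 2: device B rotated by exactly
# one sampling step, so it contributes its error one position ahead.
d1 = e_a - e_b
d2 = e_a - np.roll(e_b, -1)

# Per Fourier harmonic: D1_k = A_k - B_k and D2_k = A_k - B_k * c_k,
# where c_k is the phase factor of the one-step shift.  Solve for A_k, B_k.
D1, D2 = np.fft.fft(d1), np.fft.fft(d2)
c = np.exp(2j * np.pi * np.arange(N) / N)
B = np.zeros(N, complex)
B[1:] = (D1[1:] - D2[1:]) / (c[1:] - 1.0)  # k = 0 (mean offset) unobservable
A = D1 + B

e_a_rec, e_b_rec = np.fft.ifft(A).real, np.fft.ifft(B).real
print("max recovery error:", np.max(np.abs(e_a_rec - e_a)))
```

    With noiseless periodic biases the recovery is exact up to the shared constant offset, which is why the technique needs no absolute standard, only the known relative rotation.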

  2. Biparametric MRI of the prostate.

    PubMed

    Scialpi, Michele; D'Andrea, Alfredo; Martorana, Eugenio; Malaspina, Corrado Maria; Aisa, Maria Cristina; Napoletano, Maria; Orlandi, Emanuele; Rondoni, Valeria; Scialpi, Pietro; Pacchiarini, Diamante; Palladino, Diego; Dragone, Michele; Di Renzo, Giancarlo; Simeone, Annalisa; Bianchi, Giampaolo; Brunese, Luca

    2017-12-01

    Biparametric Magnetic Resonance Imaging (bpMRI) of the prostate, combining morphologic T2-weighted imaging (T2WI) and diffusion-weighted imaging (DWI), is emerging as an alternative to multiparametric MRI (mpMRI) to detect and localize tumour and to guide targeted prostate biopsy in patients with suspected prostate cancer (PCa). BpMRI overcomes some limitations of mpMRI such as the costs, the time required to perform the study, the use of gadolinium-based contrast agents and the lack of guidance for management of score 3 lesions equivocal for significant PCa. In our experience, the comparable clinical results of bpMRI and mpMRI are essentially related to DWI, which we consider the dominant sequence for detection of suspicious PCa in both the transition and the peripheral zone. In clinical practice, the adoption of a standardized bpMRI scoring system, indicating the likelihood of a clinically significant PCa and establishing the management of each suspicious category (from 1 to 4), could provide the rationale to simplify and improve the current interpretation of mpMRI based on Prostate Imaging and Reporting Archiving Data System version 2 (PI-RADS v2). In this review article we report and describe the current knowledge about bpMRI in the detection of suspicious PCa and a simplified PI-RADS based on bpMRI for management of each suspicious PCa category, to facilitate the communication between radiologists and urologists. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  3. Biparametric MRI of the prostate

    PubMed Central

    Scialpi, Michele; D’Andrea, Alfredo; Martorana, Eugenio; Malaspina, Corrado Maria; Aisa, Maria Cristina; Napoletano, Maria; Orlandi, Emanuele; Rondoni, Valeria; Scialpi, Pietro; Pacchiarini, Diamante; Palladino, Diego; Dragone, Michele; Di Renzo, Giancarlo; Simeone, Annalisa; Bianchi, Giampaolo; Brunese, Luca

    2017-01-01

    Biparametric magnetic resonance imaging (bpMRI) of the prostate, which combines morphologic T2-weighted imaging (T2WI) with diffusion-weighted imaging (DWI), is emerging as an alternative to multiparametric MRI (mpMRI) for detecting and localizing prostate cancer (PCa) and for guiding targeted prostatic biopsy in patients with suspected PCa. BpMRI overcomes some limitations of mpMRI, such as cost, the time required to perform the study, the use of gadolinium-based contrast agents, and the lack of guidance for managing score 3 lesions equivocal for significant PCa. In our experience, the optimal clinical results of bpMRI, similar to those of mpMRI, are essentially related to DWI, which we consider the dominant sequence for detecting suspicious PCa in both the transition and the peripheral zone. In clinical practice, adopting a standardized bpMRI scoring system that indicates the likelihood of a clinically significant PCa and establishes the management of each suspicious category (from 1 to 4) could provide the rationale for simplifying and improving the current interpretation of mpMRI based on the Prostate Imaging Reporting and Data System version 2 (PI-RADS v2). In this review article we summarize current knowledge about bpMRI in the detection of suspicious PCa and describe a simplified, bpMRI-based PI-RADS for the management of each suspicious PCa category, with the aim of facilitating communication between radiologists and urologists. PMID:29201499

  4. PV Calibration Insights | NREL

    Science.gov Websites

    The Photovoltaic (PV) Calibration Insights blog provides updates on testing done by the NREL PV Device Performance group. This NREL research group measures the performance of PV devices of all technologies and sizes from around the world.

  5. MRI-powered biomedical devices.

    PubMed

    Hovet, Sierra; Ren, Hongliang; Xu, Sheng; Wood, Bradford; Tokuda, Junichi; Tse, Zion Tsz Ho

    2017-11-16

    Magnetic resonance imaging (MRI) is beneficial for imaging-guided procedures because it provides higher resolution images and better soft tissue contrast than computed tomography (CT), ultrasound, and X-ray. MRI can be used to streamline diagnostics and treatment because it does not require patients to be repositioned between scans of different areas of the body. It is even possible to use MRI to visualize, power, and control medical devices inside the human body to access remote locations and perform minimally invasive procedures. Therefore, MR conditional medical devices have the potential to improve a wide variety of medical procedures; this potential is explored in terms of practical considerations pertaining to clinical applications and the MRI environment. Recent advancements in this field are introduced with a review of clinically relevant research in the areas of interventional tools, endovascular microbots, and closed-loop controlled MRI robots. Challenges related to technology and clinical feasibility are discussed, including MRI based propulsion and control, navigation of medical devices through the human body, clinical adoptability, and regulatory issues. The development of MRI-powered medical devices is an emerging field, but the potential clinical impact of these devices is promising.

  6. Frontotemporal correlates of impulsivity and machine learning in retired professional athletes with a history of multiple concussions.

    PubMed

    Goswami, R; Dufort, P; Tartaglia, M C; Green, R E; Crawley, A; Tator, C H; Wennberg, R; Mikulis, D J; Keightley, M; Davis, Karen D

    2016-05-01

    The frontotemporal cortical network is associated with behaviours such as impulsivity and aggression. The health of the uncinate fasciculus (UF) that connects the orbitofrontal cortex (OFC) with the anterior temporal lobe (ATL) may be a crucial determinant of behavioural regulation. Behavioural changes can emerge after repeated concussion and thus we used MRI to examine the UF and connected gray matter as it relates to impulsivity and aggression in retired professional football players who had sustained multiple concussions. Behaviourally, athletes had faster reaction times and an increased error rate on a go/no-go task, and increased aggression and mania compared to controls. MRI revealed that the athletes had (1) cortical thinning of the ATL, (2) negative correlations of OFC thickness with aggression and task errors, indicative of impulsivity, (3) negative correlations of UF axial diffusivity with error rates and aggression, and (4) elevated resting-state functional connectivity between the ATL and OFC. Using machine learning, we found that UF diffusion imaging differentiates athletes from healthy controls, with significant classifiers based on UF mean and radial diffusivity showing 79-84% sensitivity and specificity and areas under the ROC curve of 0.8. The spatial pattern of classifier weights revealed hot spots at the orbitofrontal and temporal ends of the UF. These data implicate the UF system in the pathological outcomes of repeated concussion as they relate to impulsive behaviour. Furthermore, a support vector machine has potential utility in the general assessment and diagnosis of brain abnormalities following concussion.
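The SVM-with-ROC-analysis pattern this abstract describes is standard; a minimal sketch with synthetic stand-in data (the sample sizes, feature count, and injected group difference below are hypothetical, not the study's) might look like:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
# Stand-in for per-voxel UF diffusivity features: controls (0) vs. athletes (1)
X = rng.normal(size=(40, 50))
y = np.array([0] * 20 + [1] * 20)
X[y == 1, :5] += 1.0                       # injected group difference in 5 features

# Cross-validated decision scores, then ROC area under the curve
scores = cross_val_predict(SVC(kernel="linear"), X, y, cv=5,
                           method="decision_function")
auc = roc_auc_score(y, scores)
```

With a linear kernel, the fitted weight vector (`coef_`) can be mapped back onto the feature space to locate the "hot spots" the abstract mentions.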

  7. Comparative analysis of nonlinear dimensionality reduction techniques for breast MRI segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akhbardeh, Alireza; Jacobs, Michael A.; Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205

    2012-04-15

    Purpose: Visualization of anatomical structures using radiological imaging methods is an important tool in medicine to differentiate normal from pathological tissue and can generate large amounts of data for a radiologist to read. Integrating these large data sets is difficult and time-consuming. A new approach uses both supervised and unsupervised advanced machine learning techniques to visualize and segment radiological data. This study describes the application of a novel hybrid scheme, based on combining wavelet transform and nonlinear dimensionality reduction (NLDR) methods, to breast magnetic resonance imaging (MRI) data using three well-established NLDR techniques, namely, ISOMAP, local linear embedding (LLE), and diffusion maps (DfM), to perform a comparative performance analysis. Methods: Twenty-five breast lesion subjects were scanned using a 3T scanner. MRI sequences used were T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) imaging. The hybrid scheme consisted of two steps: preprocessing and postprocessing of the data. The preprocessing step was applied for B1 inhomogeneity correction, image registration, and wavelet-based image compression to match and denoise the data. In the postprocessing step, MRI parameters were considered data dimensions and the NLDR-based hybrid approach was applied to integrate the MRI parameters into a single image, termed the embedded image. This was achieved by mapping all pixel intensities from the higher dimension to a lower dimensional (embedded) space. For validation, the authors compared the hybrid NLDR with the linear methods of principal component analysis (PCA) and multidimensional scaling (MDS) using synthetic data. For the clinical application, the authors used breast MRI data; comparison was performed using the postcontrast DCE MRI image and evaluating the congruence of the segmented lesions. Results: The NLDR-based hybrid approach was able to define and
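The core NLDR step described above — treating each pixel's MRI parameter values as coordinates in a high-dimensional space and mapping them to a single embedded intensity — can be sketched with scikit-learn's manifold module. The data below are random stand-ins; real inputs would be co-registered T1/T2/DWI/DCE parameter maps flattened to one row per pixel:

```python
import numpy as np
from sklearn.manifold import Isomap, LocallyLinearEmbedding

rng = np.random.default_rng(3)
# Stand-in for per-pixel MRI parameter vectors (e.g. T1, T2, DWI, DCE values)
pixels = rng.normal(size=(500, 4))

# Map the 4 MRI "dimensions" per pixel down to a single embedded intensity
iso = Isomap(n_neighbors=10, n_components=1)
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=1)
embedded_iso = iso.fit_transform(pixels)    # shape (500, 1)
embedded_lle = lle.fit_transform(pixels)    # shape (500, 1)
```

Reshaping the embedded column back to the image grid yields the single "embedded image" the abstract refers to; PCA or MDS can be swapped in the same way for the linear baseline comparison.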

  8. National Machine Guarding Program: Part 1. Machine safeguarding practices in small metal fabrication businesses.

    PubMed

    Parker, David L; Yamin, Samuel C; Brosseau, Lisa M; Xi, Min; Gordon, Robert; Most, Ivan G; Stanley, Rodney

    2015-11-01

    Metal fabrication workers experience high rates of traumatic occupational injuries. Machine operators in particular face high risks, often stemming from the absence or improper use of machine safeguarding or the failure to implement lockout procedures. The National Machine Guarding Program (NMGP) was a translational research initiative implemented in conjunction with two workers' compensation insurers. Insurance safety consultants trained in machine guarding used standardized checklists to conduct a baseline inspection of machine-related hazards in 221 businesses. Safeguards at the point of operation were missing or inadequate on 33% of machines. Safeguards for other mechanical hazards were missing on 28% of machines. Older machines were both widely used and less likely than newer machines to be properly guarded. Lockout/tagout procedures were posted at only 9% of machine workstations. The NMGP demonstrates a need for improvement in many aspects of machine safety and lockout in small metal fabrication businesses. © 2015 The Authors. American Journal of Industrial Medicine published by Wiley Periodicals, Inc.

  9. OLI Radiometric Calibration

    NASA Technical Reports Server (NTRS)

    Markham, Brian; Morfitt, Ron; Kvaran, Geir; Biggar, Stuart; Leisso, Nathan; Czapla-Myers, Jeff

    2011-01-01

    Goals: (1) Present an overview of the pre-launch radiance, reflectance & uniformity calibration of the Operational Land Imager (OLI) (1a) Transfer to orbit/heliostat (1b) Linearity (2) Discuss on-orbit plans for radiance, reflectance and uniformity calibration of the OLI

  10. Predicting Long-Term Cognitive Outcome Following Breast Cancer with Pre-Treatment Resting State fMRI and Random Forest Machine Learning.

    PubMed

    Kesler, Shelli R; Rao, Arvind; Blayney, Douglas W; Oakley-Girvan, Ingrid A; Karuturi, Meghan; Palesh, Oxana

    2017-01-01

    We aimed to determine if resting state functional magnetic resonance imaging (fMRI) acquired at pre-treatment baseline could accurately predict breast cancer-related cognitive impairment at long-term follow-up. We evaluated 31 patients with breast cancer (age 34-65) prior to any treatment, post-chemotherapy and 1 year later. Cognitive testing scores were normalized based on data obtained from 43 healthy female controls and then used to categorize patients as impaired or not based on longitudinal changes. We measured clustering coefficient, a measure of local connectivity, by applying graph theory to baseline resting state fMRI and entered these metrics along with relevant patient-related and medical variables into random forest classification. Incidence of cognitive impairment at 1 year follow-up was 55% and was predicted by classification algorithms with up to 100% accuracy (p < 0.0001). The neuroimaging-based model was significantly more accurate than a model involving patient-related and medical variables (p = 0.005). Hub regions belonging to several distinct functional networks were the most important predictors of cognitive outcome. Characteristics of these hubs indicated potential spread of brain injury from default mode to other networks over time. These findings suggest that resting state fMRI is a promising tool for predicting future cognitive impairment associated with breast cancer. This information could inform treatment decision making by identifying patients at highest risk for long-term cognitive impairment.

  11. Predicting Long-Term Cognitive Outcome Following Breast Cancer with Pre-Treatment Resting State fMRI and Random Forest Machine Learning

    PubMed Central

    Kesler, Shelli R.; Rao, Arvind; Blayney, Douglas W.; Oakley-Girvan, Ingrid A.; Karuturi, Meghan; Palesh, Oxana

    2017-01-01

    We aimed to determine if resting state functional magnetic resonance imaging (fMRI) acquired at pre-treatment baseline could accurately predict breast cancer-related cognitive impairment at long-term follow-up. We evaluated 31 patients with breast cancer (age 34–65) prior to any treatment, post-chemotherapy and 1 year later. Cognitive testing scores were normalized based on data obtained from 43 healthy female controls and then used to categorize patients as impaired or not based on longitudinal changes. We measured clustering coefficient, a measure of local connectivity, by applying graph theory to baseline resting state fMRI and entered these metrics along with relevant patient-related and medical variables into random forest classification. Incidence of cognitive impairment at 1 year follow-up was 55% and was predicted by classification algorithms with up to 100% accuracy (p < 0.0001). The neuroimaging-based model was significantly more accurate than a model involving patient-related and medical variables (p = 0.005). Hub regions belonging to several distinct functional networks were the most important predictors of cognitive outcome. Characteristics of these hubs indicated potential spread of brain injury from default mode to other networks over time. These findings suggest that resting state fMRI is a promising tool for predicting future cognitive impairment associated with breast cancer. This information could inform treatment decision making by identifying patients at highest risk for long-term cognitive impairment. PMID:29187817
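The random-forest classification step these two records describe — per-region graph metrics as features, impaired/not-impaired as the label — follows a standard scikit-learn pattern. The sketch below uses synthetic data with hypothetical dimensions (90 regions, an injected two-region signal), not the study's:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_regions = 31, 90                 # one clustering coefficient per region
X = rng.normal(size=(n_patients, n_regions))   # stand-in for baseline graph metrics
y = np.array([0] * 16 + [1] * 15)              # impaired vs. not at 1-year follow-up
X[y == 1, :2] += 1.5                           # inject signal into two "hub" regions

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, X, y, cv=5).mean()  # cross-validated accuracy
```

After fitting, `clf.feature_importances_` ranks the regions, which is the mechanism by which studies like this identify the most predictive hub regions.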

  12. Current Status of Efforts on Standardizing Magnetic Resonance Imaging of Juvenile Idiopathic Arthritis: Report from the OMERACT MRI in JIA Working Group and Health-e-Child.

    PubMed

    Nusman, Charlotte M; Ording Muller, Lil-Sofie; Hemke, Robert; Doria, Andrea S; Avenarius, Derk; Tzaribachev, Nikolay; Malattia, Clara; van Rossum, Marion A J; Maas, Mario; Rosendahl, Karen

    2016-01-01

    To report on the progress of an ongoing research collaboration on magnetic resonance imaging (MRI) in juvenile idiopathic arthritis (JIA) and describe the proceedings of a meeting, held prior to Outcome Measures in Rheumatology (OMERACT) 12, bringing together the OMERACT MRI in JIA working group and the Health-e-Child radiology group. The goal of the meeting was to establish agreement on scoring definitions, locations, and scales for the assessment of MRI of patients with JIA for both large and small joints. The collaborative work process included premeeting surveys, presentations, group discussions, consensus on scoring methods, pilot scoring, conjoint review, and discussion of a future research agenda. The meeting resulted in preliminary statements on the MR imaging protocol of the JIA knee and wrist and determination of the starting point for development of MRI scoring systems based on previous studies. It was also considered important to be descriptive rather than explanatory in the assessment of MRI in JIA (e.g., "thickening" instead of "hypertrophy"). Further, the group agreed that well-designed calibration sessions were warranted before any future scoring exercises were conducted. The combined efforts of the OMERACT MRI in JIA working group and Health-e-Child included the assessment of currently available material in the literature and determination of the basis from which to start the development of MRI scoring systems for both the knee and wrist. The future research agenda for the knee and wrist will include establishment of MRI scoring systems, an atlas of MR imaging in healthy children, and MRI protocol requisites.

  13. NASA Metrology and Calibration, 1980

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The proceedings of the fourth annual NASA Metrology and Calibration Workshop are presented. This workshop covered (1) review and assessment of NASA metrology and calibration activities by NASA Headquarters, (2) results of audits by the Office of Inspector General, (3) review of a proposed NASA Equipment Management System, (4) current and planned field center activities, (5) National Bureau of Standards (NBS) calibration services for NASA, (6) review of NBS's Precision Measurement and Test Equipment Project activities, (7) NASA instrument loan pool operations at two centers, (8) mobile cart calibration systems at two centers, (9) calibration intervals and decals, (10) NASA Calibration Capabilities Catalog, and (11) development of plans and objectives for FY 1981. Several papers in this proceedings are slide presentations only.

  14. Automatic force balance calibration system

    NASA Technical Reports Server (NTRS)

    Ferris, Alice T. (Inventor)

    1995-01-01

    A system for automatically calibrating force balances is provided. The invention uses a reference balance aligned with the balance being calibrated to provide superior accuracy while minimizing the time required to complete the calibration. The reference balance and the test balance are rigidly attached together with closely aligned moment centers. Loads placed on the system equally affect each balance, and the differences in the readings of the two balances can be used to generate the calibration matrix for the test balance. Since the accuracy of the test calibration is determined by the accuracy of the reference balance, and current technology allows reference balances to be calibrated to within ±0.05%, the entire system has an accuracy of ±0.2%. The entire apparatus is relatively small and can be mounted on a movable base for easy transport between test locations. The system can also accept a wide variety of reference balances, thus allowing calibration under diverse load and size requirements.

  15. Automatic force balance calibration system

    NASA Technical Reports Server (NTRS)

    Ferris, Alice T. (Inventor)

    1996-01-01

    A system for automatically calibrating force balances is provided. The invention uses a reference balance aligned with the balance being calibrated to provide superior accuracy while minimizing the time required to complete the calibration. The reference balance and the test balance are rigidly attached together with closely aligned moment centers. Loads placed on the system equally affect each balance, and the differences in the readings of the two balances can be used to generate the calibration matrix for the test balance. Since the accuracy of the test calibration is determined by the accuracy of the reference balance, and current technology allows reference balances to be calibrated to within ±0.05%, the entire system has an accuracy of ±0.2%. The entire apparatus is relatively small and can be mounted on a movable base for easy transport between test locations. The system can also accept a wide variety of reference balances, thus allowing calibration under diverse load and size requirements.
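Generating a calibration matrix from paired reference/test readings, as these two records describe, reduces to a linear least-squares fit. A minimal sketch with simulated data (the 6-component layout and the perturbed sensitivity matrix are hypothetical illustrations, not the patent's values):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 6-component balance: three forces and three moments
true_C = np.eye(6) + 0.01 * rng.normal(size=(6, 6))  # unknown test-balance response
loads = rng.normal(size=(100, 6))                    # applied calibration loads
ref_readings = loads                                 # reference balance: accurate
test_readings = loads @ true_C.T                     # test balance: biased readings

# Calibration matrix maps test readings back to reference loads (least squares)
C_cal, *_ = np.linalg.lstsq(test_readings, ref_readings, rcond=None)
corrected = test_readings @ C_cal                    # ~ equals the reference loads
```

Because both balances see identical loads, the fit's accuracy is bounded by the reference balance's own calibration, which is exactly the ±0.05% vs. ±0.2% relationship the abstract states.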

  16. Non-invasive estimate of blood glucose and blood pressure from a photoplethysmograph by means of machine learning techniques.

    PubMed

    Monte-Moreno, Enric

    2011-10-01

    This work presents a system for simultaneous non-invasive estimation of the blood glucose level (BGL) and the systolic (SBP) and diastolic (DBP) blood pressure, using a photoplethysmograph (PPG) and machine learning techniques. The method is independent of the person whose values are being measured and needs no calibration over time or subjects. The architecture of the system consists of a photoplethysmograph sensor, an activity detection module, a signal processing module that extracts features from the PPG waveform, and a machine learning algorithm that estimates the SBP, DBP and BGL values. The idea underlying the system is that there is a functional relationship between the shape of the PPG waveform and the blood pressure and glucose levels. As described in this paper, we tested this method on 410 individuals without performing any personalized calibration. The results were computed after cross validation. The machine learning techniques tested were: ridge linear regression, a multilayer perceptron neural network, support vector machines and random forests. The best results were obtained with the random forest technique. The resulting coefficients of determination for reference vs. prediction were R²(SBP) = 0.91, R²(DBP) = 0.89, and R²(BGL) = 0.90. For the glucose estimation, the distribution of points on a Clarke error grid placed 87.7% of points in zone A, 10.3% in zone B, and 1.9% in zone D. Blood pressure values complied with the grade B protocol of the British Hypertension Society. An effective system for estimating blood glucose and blood pressure from a photoplethysmograph is presented. The main advantage of the system is that, for clinical use, it complies with the grade B protocol of the British Hypertension Society for blood pressure, and it failed to detect hypoglycemia or hyperglycemia in only 1.9% of cases. Copyright © 2011 Elsevier B.V. All rights reserved.
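The features-to-forest regression pipeline this abstract describes can be sketched as follows. Everything here is synthetic and hypothetical — the 12 waveform features and the linear SBP relationship are stand-ins, not the paper's feature set:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(4)
# Hypothetical PPG waveform features (pulse width, rise time, area, ...) per subject
X = rng.normal(size=(410, 12))
# Synthetic systolic blood pressure target driven by two of the features
sbp = 120 + 15 * X[:, 0] - 8 * X[:, 1] + rng.normal(scale=5, size=410)

# Cross-validated predictions, then the reference-vs-prediction R^2 the paper reports
model = RandomForestRegressor(n_estimators=100, random_state=0)
pred = cross_val_predict(model, X, sbp, cv=5)
r2 = r2_score(sbp, pred)
```

The same structure, with a glucose target and a Clarke-error-grid evaluation in place of R², covers the BGL branch of the system.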

  17. Development of an MRI-compatible digital SiPM detector stack for simultaneous PET/MRI.

    PubMed

    Düppenbecker, Peter M; Weissler, Bjoern; Gebhardt, Pierre; Schug, David; Wehner, Jakob; Marsden, Paul K; Schulz, Volkmar

    2016-02-01

    Advances in solid-state photon detectors paved the way to combine positron emission tomography (PET) and magnetic resonance imaging (MRI) into highly integrated, truly simultaneous, hybrid imaging systems. Based on the most recent digital SiPM technology, we developed an MRI-compatible PET detector stack, intended as a building block for next generation simultaneous PET/MRI systems. Our detector stack comprises an array of 8 × 8 digital SiPM channels with 4 mm pitch using Philips Digital Photon Counting DPC 3200-22 devices, an FPGA for data acquisition, a supply voltage control system and a cooling infrastructure. This is the first detector design that allows the operation of digital SiPMs simultaneously inside an MRI system. We tested and optimized the MRI-compatibility of our detector stack on a laboratory test bench as well as in combination with a Philips Achieva 3 T MRI system. Our design clearly reduces distortions of the static magnetic field compared to a conventional design. The MRI static magnetic field causes weak and directional drift effects on voltage regulators, but has no direct impact on detector performance. MRI gradient switching initially degraded energy and timing resolution. Both distortions could be ascribed to voltage variations induced on the bias and the FPGA core voltage supply respectively. Based on these findings, we improved our detector design and our final design shows virtually no energy or timing degradations, even during heavy and continuous MRI gradient switching. In particular, we found no evidence that the performance of the DPC 3200-22 digital SiPM itself is degraded by the MRI system.

  18. Results from Source-Based and Detector-Based Calibrations of a CLARREO Calibration Demonstration System

    NASA Technical Reports Server (NTRS)

    Angal, Amit; Mccorkel, Joel; Thome, Kurt

    2016-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is formulated to determine long-term climate trends using SI-traceable measurements. The CLARREO mission will include instruments operating in the reflected solar (RS) wavelength region from 320 nm to 2300 nm. The Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS) is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO and facilitates testing and evaluation of calibration approaches. The basis of CLARREO and SOLARIS calibration is the Goddard Laser for Absolute Measurement of Response (GLAMR), which provides a radiance-based calibration at reflective solar wavelengths using continuously tunable lasers. SI-traceability is achieved via detector-based standards that, in GLAMR's case, are a set of NIST-calibrated transfer radiometers. A portable version of the SOLARIS, Suitcase SOLARIS, is used to evaluate GLAMR's calibration accuracies. The calibration of Suitcase SOLARIS using GLAMR agrees with that obtained from source-based results of the Remote Sensing Group (RSG) at the University of Arizona to better than 5% (k=2) in the 720-860 nm spectral range. The differences are within the uncertainties of the NIST-calibrated FEL lamp-based approach of RSG and give confidence that GLAMR is operating at 5% (k=2) absolute uncertainties. Limitations of the Suitcase SOLARIS instrument are also discussed, and the next edition of the SOLARIS instrument (Suitcase SOLARIS-2) is expected to provide an improved mechanism to further assess GLAMR and CLARREO calibration approaches. (2016) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  19. Competitive Advantage of PET/MRI

    PubMed Central

    Jadvar, Hossein; Colletti, Patrick M.

    2013-01-01

    Multimodality imaging has made great strides in the imaging evaluation of patients with a variety of diseases. Positron emission tomography/computed tomography (PET/CT) is now established as the imaging modality of choice in many clinical conditions, particularly in oncology. While the initial development of combined PET/magnetic resonance imaging (PET/MRI) was in the preclinical arena, hybrid PET/MR scanners are now available for clinical use. PET/MRI combines the unique features of MRI, including excellent soft tissue contrast, diffusion-weighted imaging, dynamic contrast-enhanced imaging, fMRI and other specialized sequences, as well as MR spectroscopy, with the quantitative physiologic information that is provided by PET. Most evidence for the potential clinical utility of PET/MRI is based on studies performed with side-by-side comparison or software-fused MRI and PET images. Data on the distinctive utility of hybrid PET/MRI are rapidly emerging. There are potential competitive advantages of PET/MRI over PET/CT. In general, PET/MRI may be preferred over PET/CT where the unique features of MRI provide more robust imaging evaluation in certain clinical settings. The exact role and potential utility of simultaneous data acquisition in specific research and clinical settings will need to be defined. It may be that simultaneous PET/MRI will be best suited for clinical situations that are disease-specific or organ-specific, related to diseases of children, or involve patients undergoing repeated imaging for whom the cumulative radiation dose must be kept as low as reasonably achievable. PET/MRI also offers interesting opportunities for the use of dual-modality probes. Upon clear definition of clinical utility, other important and practical issues related to the business operational model, clinical workflow and reimbursement will also be resolved. PMID:23791129

  20. Competitive advantage of PET/MRI.

    PubMed

    Jadvar, Hossein; Colletti, Patrick M

    2014-01-01

    Multimodality imaging has made great strides in the imaging evaluation of patients with a variety of diseases. Positron emission tomography/computed tomography (PET/CT) is now established as the imaging modality of choice in many clinical conditions, particularly in oncology. While the initial development of combined PET/magnetic resonance imaging (PET/MRI) was in the preclinical arena, hybrid PET/MR scanners are now available for clinical use. PET/MRI combines the unique features of MRI, including excellent soft tissue contrast, diffusion-weighted imaging, dynamic contrast-enhanced imaging, fMRI and other specialized sequences, as well as MR spectroscopy, with the quantitative physiologic information that is provided by PET. Most evidence for the potential clinical utility of PET/MRI is based on studies performed with side-by-side comparison or software-fused MRI and PET images. Data on the distinctive utility of hybrid PET/MRI are rapidly emerging. There are potential competitive advantages of PET/MRI over PET/CT. In general, PET/MRI may be preferred over PET/CT where the unique features of MRI provide more robust imaging evaluation in certain clinical settings. The exact role and potential utility of simultaneous data acquisition in specific research and clinical settings will need to be defined. It may be that simultaneous PET/MRI will be best suited for clinical situations that are disease-specific or organ-specific, related to diseases of children, or involve patients undergoing repeated imaging for whom the cumulative radiation dose must be kept as low as reasonably achievable. PET/MRI also offers interesting opportunities for the use of dual-modality probes. Upon clear definition of clinical utility, other important and practical issues related to the business operational model, clinical workflow and reimbursement will also be resolved. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  1. MRI brain imaging.

    PubMed

    Skinner, Sarah

    2013-11-01

    General practitioners (GPs) are expected to be allowed to request MRI scans for adults for selected clinically appropriate indications from November 2013 as part of the expansion of Medicare-funded MRI services announced by the Federal Government in 2011. This article aims to give a brief overview of MRI brain imaging relevant to GPs, which will facilitate explanation of scan findings and management planning with their patients. Basic imaging techniques, common findings and terminology are presented using some illustrative case examples.

  2. Leg MRI scan

    MedlinePlus

    ... anything that contains metal into the scanner room. Considerations: Tests that may be done instead of an ... Alternative names: Magnetic resonance imaging - ankle; MRI - femur; MRI - leg. Patient instructions: Femur fracture repair - discharge; Hip fracture - discharge ...

  3. Blind calibration of radio interferometric arrays using sparsity constraints and its implications for self-calibration

    NASA Astrophysics Data System (ADS)

    Chiarucci, Simone; Wijnholds, Stefan J.

    2018-02-01

    Blind calibration, i.e. calibration without a priori knowledge of the source model, is robust to the presence of unknown sources such as transient phenomena or (low-power) broad-band radio frequency interference that escaped detection. In this paper, we present a novel method for blind calibration of a radio interferometric array assuming that the observed field only contains a small number of discrete point sources. We show the huge computational advantage over previous blind calibration methods and we assess its statistical efficiency and robustness to noise and the quality of the initial estimate. We demonstrate the method on actual data from a Low-Frequency Array low-band antenna station showing that our blind calibration is able to recover the same gain solutions as the regular calibration approach, as expected from theory and simulations. We also discuss the implications of our findings for the robustness of regular self-calibration to poor starting models.

  4. Assessing and calibrating the ATR-FTIR approach as a carbonate rock characterization tool

    NASA Astrophysics Data System (ADS)

    Henry, Delano G.; Watson, Jonathan S.; John, Cédric M.

    2017-01-01

    ATR-FTIR (attenuated total reflectance Fourier transform infrared) spectroscopy can be used as a rapid and economical tool for qualitative identification of carbonates, calcium sulphates, oxides and silicates, as well as for quantitatively estimating the concentration of minerals. Over 200 powdered samples with known concentrations of two-, three-, four- and five-phase mixtures were made, and a suite of calibration curves was derived that can be used to quantify the minerals. The calibration curves in this study have R² values ranging from 0.93-0.99, an RMSE (root mean square error) of 1-5 wt.% and a maximum error of 3-10 wt.%. The calibration curves were used on 35 geological samples that had previously been studied using XRD (X-ray diffraction). The identification of the minerals using ATR-FTIR is comparable with XRD, and the quantitative results have an RMSD (root mean square deviation) of 14% and 12% for calcite and dolomite, respectively, when compared to XRD results. ATR-FTIR is a rapid technique (identification and quantification take < 5 min) that involves virtually no cost if the machine is available. It is a common tool in most analytical laboratories, but it also has the potential to be deployed on a rig for real-time acquisition of the mineralogy of cores and rock chips at the surface, as it requires no special sample preparation and offers rapid data collection and easy analysis.
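A calibration curve of the kind this record describes is a linear fit of a spectral band measurement against known mineral concentrations, assessed with R² and RMSE. The sketch below uses made-up band-ratio values for a hypothetical calcite series (none of the numbers are from the study):

```python
import numpy as np

# Known calcite fractions in prepared standards (wt.%), hypothetical values
known_wt = np.array([0, 10, 25, 50, 75, 90, 100], dtype=float)
# Corresponding IR band intensity ratios: linear trend plus small measurement noise
band_ratio = (0.0095 * known_wt + 0.02
              + np.array([0.01, -0.02, 0.015, -0.01, 0.02, -0.015, 0.01]))

# Fit the inverse relation (ratio -> wt.%) so unknowns can be quantified directly
slope, intercept = np.polyfit(band_ratio, known_wt, 1)
pred = slope * band_ratio + intercept

# The two quality metrics the study reports for its curves
ss_res = np.sum((known_wt - pred) ** 2)
ss_tot = np.sum((known_wt - known_wt.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
rmse = np.sqrt(np.mean((known_wt - pred) ** 2))
```

Applying `slope * ratio + intercept` to the band ratio measured on an unknown sample then yields its estimated wt.% directly, which is what makes the method attractive for rig-side use.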

  5. Excimer laser calibration system.

    PubMed

    Gottsch, J D; Rencs, E V; Cambier, J L; Hall, D; Azar, D T; Stark, W J

    1996-01-01

    Excimer laser photoablation for refractive and therapeutic keratectomies has been demonstrated to be feasible and practicable. However, corneal laser ablations are not without problems, including the delivery and maintenance of a homogeneous beam. We have developed an excimer laser calibration system capable of characterizing a laser ablation profile. Beam homogeneity is determined by analysis of a polymethylmethacrylate (PMMA)-based thin film using video capture and image processing. The ablation profile is presented as a color-coded map. Interpolation of excimer calibration system analysis provides a three-dimensional representation of elevation profiles that correlates with two-dimensional scanning profilometry. Excimer calibration analysis was performed before treating a monkey undergoing phototherapeutic keratectomy and two human subjects undergoing myopic spherocylindrical photorefractive keratectomy, as well as before and after laser refurbishing. Laser ablation profiles in PMMA are resolved by the excimer calibration system to 0.006 microns/pulse. Correlations with ablative patterns were demonstrated in a monkey cornea with preoperative and postoperative keratometry using corneal topography, and in two human subjects using video-keratography. Excimer calibration analysis predicted a central-steep-island ablative pattern with the VISX Twenty/Twenty laser, which was confirmed by corneal topography immediately postoperatively and at 1 week after reepithelialization in the monkey. Predicted central steep islands in the two human subjects were confirmed by video-keratography at 1 week and at 1 month. Subsequent technical refurbishing of the laser resulted in a beam with an overall increased ablation rate, measured in microns/pulse, with a donut ablation profile. A patient treated after repair of the laser electrodes demonstrated no central island. This excimer laser calibration system can precisely detect laser-beam ablation

  6. Identification of Tool Wear when Machining Austenitic Steels and Titanium by Miniature Machining

    NASA Astrophysics Data System (ADS)

    Pilc, Jozef; Kameník, Roman; Varga, Daniel; Martinček, Juraj; Sadilek, Marek

    2016-12-01

    Application of miniature machining is currently increasing rapidly, mainly in the biomedical industry and in the machining of hard-to-machine materials. The machinability of materials with an increased level of toughness depends on factors that are important to the final state of surface integrity. Because of this, it is necessary to achieve high precision (varying in microns) in miniature machining. To guarantee high machining precision, it is necessary to analyse tool wear intensity in direct interaction with the given machined materials. During a long-term cutting process, different cutting wedge deformations occur, leading in most cases to rapid wear and destruction of the cutting wedge. This article deals with experimental monitoring of tool wear intensity during miniature machining.

  7. Development of an MRI-compatible digital SiPM detector stack for simultaneous PET/MRI

    PubMed Central

    Düppenbecker, Peter M; Weissler, Bjoern; Gebhardt, Pierre; Schug, David; Wehner, Jakob; Marsden, Paul K; Schulz, Volkmar

    2016-01-01

    Abstract Advances in solid-state photon detectors paved the way to combine positron emission tomography (PET) and magnetic resonance imaging (MRI) into highly integrated, truly simultaneous, hybrid imaging systems. Based on the most recent digital SiPM technology, we developed an MRI-compatible PET detector stack, intended as a building block for next generation simultaneous PET/MRI systems. Our detector stack comprises an array of 8 × 8 digital SiPM channels with 4 mm pitch using Philips Digital Photon Counting DPC 3200-22 devices, an FPGA for data acquisition, a supply voltage control system and a cooling infrastructure. This is the first detector design that allows the operation of digital SiPMs simultaneously inside an MRI system. We tested and optimized the MRI-compatibility of our detector stack on a laboratory test bench as well as in combination with a Philips Achieva 3 T MRI system. Our design clearly reduces distortions of the static magnetic field compared to a conventional design. The MRI static magnetic field causes weak and directional drift effects on voltage regulators, but has no direct impact on detector performance. MRI gradient switching initially degraded energy and timing resolution. Both distortions could be ascribed to voltage variations induced on the bias and the FPGA core voltage supply respectively. Based on these findings, we improved our detector design and our final design shows virtually no energy or timing degradations, even during heavy and continuous MRI gradient switching. In particular, we found no evidence that the performance of the DPC 3200-22 digital SiPM itself is degraded by the MRI system. PMID:28458919

  8. VIIRS thermal emissive bands on-orbit calibration coefficient performance using vicarious calibration results

    NASA Astrophysics Data System (ADS)

    Moyer, D.; Moeller, C.; De Luccia, F.

    2013-09-01

    The Visible Infrared Imager Radiometer Suite (VIIRS), a primary sensor on board the Suomi National Polar-orbiting Partnership (SNPP) spacecraft, was launched October 28, 2011. It has 22 bands: 7 thermal emissive bands (TEBs), 14 reflective solar bands (RSBs) and a Day Night Band (DNB). The TEBs cover spectral wavelengths between 3.7 and 12 μm, with two bands at 371 m and five at 742 m spatial resolution. A VIIRS Key Performance Parameter (KPP) is the sea surface temperature (SST), which uses the calibrated Science Data Records (SDRs) of bands M12 (3.7 μm), M15 (10.8 μm) and M16 (12.0 μm). The TEB SDRs rely on pre-launch calibration coefficients used in a quadratic algorithm to convert the detector's response to calibrated radiance. This paper evaluates the performance of these pre-launch calibration coefficients using vicarious calibration information from the Cross-track Infrared Sounder (CrIS), also on board the SNPP spacecraft, and the Infrared Atmospheric Sounding Interferometer (IASI) on board the Meteorological Operational (MetOp) satellite. Changes to the pre-launch calibration coefficients' offset term c0 to improve the SDRs' performance at cold scene temperatures are also discussed.
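    The quadratic count-to-radiance conversion mentioned above, and the effect of adjusting the offset term c0, can be sketched as follows; the coefficient values are invented placeholders, not actual VIIRS coefficients:

```python
def counts_to_radiance(dn, c0, c1, c2):
    """Quadratic conversion of background-subtracted detector counts to radiance."""
    return c0 + c1 * dn + c2 * dn ** 2

# Invented coefficients, for illustration only
c0, c1, c2 = 0.05, 0.002, 1e-8

L_cold = counts_to_radiance(500, c0, c1, c2)
# A vicarious-calibration-driven tweak to the offset term shifts radiances
# almost uniformly, since the linear and quadratic terms are unchanged:
L_cold_adj = counts_to_radiance(500, c0 - 0.01, c1, c2)
```

    Because the correction enters only through c0, its relative impact is largest at cold scenes, where the total radiance is small.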

  9. Your Sewing Machine.

    ERIC Educational Resources Information Center

    Peacock, Marion E.

    The programed instruction manual is designed to aid the student in learning the parts, uses, and operation of the sewing machine. Drawings of sewing machine parts are presented, and space is provided for the student's written responses. Following an introductory section identifying sewing machine parts, the manual deals with each part and its…

  10. Evaluation of dose delivery accuracy of gamma knife using MRI polymer gel dosimeter in an inhomogeneous phantom

    NASA Astrophysics Data System (ADS)

    Pourfallah, T. A.; Riahi Alam, N.; Allahverdi, M.; Ay, M.; Zahmatkesh, M.

    2009-05-01

    Polymer gel dosimetry is still the only dosimetry method for directly measuring three-dimensional dose distributions. MRI polymer gel dosimeters are tissue equivalent and can act as a phantom material. Because of its high dose-response sensitivity, MRI was chosen as the readout device. In this study, dose profiles calculated with the treatment-planning software (LGP) and measured with the MR polymer gel dosimeter for single-shot irradiations were compared. A custom-built 16 cm diameter spherical Plexiglas head phantom was used in this study. Inside the phantom there is a cubic cutout for insertion of gel phantoms and another cutout for inserting the inhomogeneities. The phantoms were scanned with a 1.5 T MRI scanner (Siemens syngo MR 2004A 4VA25A). A multiple spin-echo sequence with 32 echoes was used for the MRI scans. Calibration relations between the spin-spin relaxation rate and the absorbed dose were obtained by using small cylindrical vials filled with PAGAT polymer gel from the same batch as for the spherical phantom. 1D and 2D data obtained using the gel dosimeter for homogeneous and inhomogeneous phantoms were compared with doses obtained using the LGP calculation. The distance between relative isodose curves obtained for the homogeneous and heterogeneous phantoms exceeded the accepted total positioning error (>±2 mm). The findings of this study indicate that dose measurement using the PAGAT gel dosimeter can be used for verifying dose delivery accuracy in a gamma knife (GK) unit in the presence of inhomogeneities.
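    The calibration relation described above between the spin-spin relaxation rate R2 = 1/T2 and absorbed dose is typically linear over the working range. A hedged sketch of the read-out step, with assumed slope and intercept rather than values from this study:

```python
def dose_from_r2(r2, r2_zero, sensitivity):
    """Map a measured R2 (1/s) to absorbed dose (Gy) via a linear calibration."""
    return (r2 - r2_zero) / sensitivity

r2_zero = 1.10       # R2 of unirradiated gel, 1/s (assumed)
sensitivity = 0.045  # R2 increase per Gy, 1/s per Gy (assumed)

measured = [1.10, 1.55, 2.00]  # R2 values from three voxels (illustrative)
doses = [dose_from_r2(r, r2_zero, sensitivity) for r in measured]
```

    Applied voxel by voxel to an R2 map from the multi-echo scan, this yields the 3D dose distribution that is then compared with the planning-system calculation.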

  11. Energy calibration issues in nuclear resonant vibrational spectroscopy: observing small spectral shifts and making fast calibrations.

    PubMed

    Wang, Hongxin; Yoda, Yoshitaka; Dong, Weibing; Huang, Songping D

    2013-09-01

    The conventional energy calibration for nuclear resonant vibrational spectroscopy (NRVS) is usually time-consuming. Meanwhile, taking NRVS samples out of the cryostat increases the chance of sample damage, which makes it impossible to carry out an energy calibration during one NRVS measurement. In this study, by manipulating the 14.4 keV beam through the main measurement chamber without removing the NRVS sample, two alternative calibration procedures have been proposed and established: (i) an in situ calibration procedure, which measures the main NRVS sample at stage A and the calibration sample at stage B simultaneously, and calibrates the energies for observing extremely small spectral shifts; for example, the 0.3 meV energy shift between 100%-(57)Fe-enriched [Fe4S4Cl4](=) and 10%-(57)Fe/90%-(54)Fe labeled [Fe4S4Cl4](=) has been well resolved; (ii) a quick-switching energy calibration procedure, which reduces each calibration time from 3-4 h to about 30 min. Although the quick-switching calibration is not in situ, it is suitable for normal NRVS measurements.

  12. Can MRI-only replace MRI-CT planning with a titanium tandem and ovoid applicator?

    PubMed

    Harkenrider, Matthew M; Patel, Rakesh; Surucu, Murat; Chinsky, Bonnie; Mysz, Michael L; Wood, Abbie; Ryan, Kelly; Shea, Steven M; Small, William; Roeske, John C

    2018-06-23

    To evaluate dosimetric differences between MRI-only and MRI-CT planning with a titanium tandem and ovoid applicator, to determine if all imaging and planning goals can be achieved with MRI only. We evaluated 10 patients who underwent MRI-CT-based cervical brachytherapy with a titanium tandem and ovoid applicator. The high-risk clinical target volume and organs at risk were contoured on the 3D T2 MRI and transferred to the co-registered CT, where the applicator was identified. Retrospectively, three planners independently delineated the applicator on the axial 3D T2 MRI while blinded to the CT. Identical dwell position times from the delivered plan were loaded. Dose-volume histogram parameters were compared to the previously delivered MRI-CT plan. There were no significant differences in the D90 or D98 of the high-risk clinical target volume with MRI vs. MRI-CT planning. MRI vs. MRI-CT planning resulted in a mean bladder D0.1cc of 8.8 ± 3.4 Gy vs. 8.5 ± 3.2 Gy (p = 0.29) and bladder D2cc of 6.2 ± 1.4 Gy vs. 6.0 ± 1.4 Gy (p = 0.33), respectively. Mean rectum D0.1cc was 5.7 ± 1.2 Gy vs. 5.3 ± 1.2 Gy (p = 0.03) and rectum D2cc was 4.0 ± 0.8 Gy vs. 4.2 ± 1.0 Gy (p = 0.18), respectively. Mean sigmoid D0.1cc was 5.2 ± 1.3 Gy vs. 5.4 ± 1.6 Gy (p = 0.23) and sigmoid D2cc was 3.9 ± 1.0 Gy vs. 4.0 ± 1.1 Gy (p = 0.18), respectively. There were no clinically significant dosimetric differences between the MRI and MRI-CT plans. This study demonstrates that cervical brachytherapy with a titanium applicator can be planned with MRI alone, which is now our clinical standard. Copyright © 2018. Published by Elsevier Inc.
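    The D2cc and D0.1cc values reported above are dose-volume-histogram metrics: the minimum dose received by the hottest 2 cc (or 0.1 cc) of an organ at risk. A toy illustration of the calculation (the helper function and numbers are hypothetical, not from this study):

```python
def d_cc(voxel_doses_gy, voxel_volume_cc, volume_cc):
    """Minimum dose (Gy) received by the hottest `volume_cc` of tissue."""
    n_voxels = max(1, round(volume_cc / voxel_volume_cc))
    # Sort doses hottest-first; the n-th hottest voxel sets the metric.
    return sorted(voxel_doses_gy, reverse=True)[n_voxels - 1]

# Toy organ: eight 1-cc voxels with illustrative doses in Gy
doses = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
d2cc = d_cc(doses, voxel_volume_cc=1.0, volume_cc=2.0)  # hottest 2 voxels
```

    In practice the dose grid is much finer and the organ contour determines which voxels are included, but the ranking logic is the same.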

  13. Close Range Calibration of Long Focal Length Lenses in a Changing Environment

    NASA Astrophysics Data System (ADS)

    Robson, Stuart; MacDonald, Lindsay; Kyle, Stephen; Shortis, Mark R.

    2016-06-01

    University College London is currently developing a large-scale multi-camera system for dimensional control tasks in manufacturing, including part machining, assembly and tracking, as part of the Light Controlled Factory project funded by the UK Engineering and Physical Sciences Research Council. In parallel, as part of the EU LUMINAR project funded by the European Association of National Metrology Institutes, refraction models of the atmosphere in factory environments are being developed with the intent of modelling and eliminating the effects of temperature and other variations. The accuracy requirements for both projects are extremely demanding, so improvements in the modelling of both camera imaging and the measurement environment are essential. At the junction of these two projects lies close range camera calibration. The accurate and reliable calibration of cameras across a realistic range of atmospheric conditions in the factory environment is vital in order to eliminate systematic errors. This paper demonstrates the challenge of experimentally isolating environmental effects at the level of a few tens of microns. Longer lines of sight promote the use and calibration of a near perfect perspective projection from a Kern 75 mm lens with maximum radial distortion of the order of 0.5 µm. Coordination of a reference target array, representing a manufactured part, is achieved to better than 0.1 mm at a standoff of 8 m. More widely, the results contribute to better sensor understanding, improved mathematical modelling of factory environments and more reliable coordination of targets to 0.1 mm and better over large volumes.
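    Radial lens distortion of this magnitude is conventionally modelled as a polynomial in the radial distance from the principal point; for a near-perfect lens a single coefficient already captures most of the effect. A minimal sketch (not the authors' model, and with an invented coefficient):

```python
def undistort(x, y, k1):
    """Correct an image point (x, y), in mm from the principal point,
    for radial distortion with a single-term polynomial model."""
    r2 = x * x + y * y
    factor = 1 + k1 * r2
    return x * factor, y * factor

k1 = 1e-6  # mm^-2, an assumed tiny coefficient for a near-perfect lens
xc, yc = undistort(10.0, 0.0, k1)
shift = xc - 10.0  # radial displacement at 10 mm off-axis, in mm
```

    With this assumed coefficient the correction at 10 mm off-axis is about a micron, i.e. the same order as the distortions and environmental effects the paper is trying to separate.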

  14. Automated Heat-Flux-Calibration Facility

    NASA Technical Reports Server (NTRS)

    Liebert, Curt H.; Weikle, Donald H.

    1989-01-01

    Computer control speeds operation of equipment and processing of measurements. A new heat-flux-calibration facility has been developed at Lewis Research Center. It is used for fast-transient heat-transfer testing, durability testing, and calibration of heat-flux gauges. Calibrations are performed at constant or transient heat fluxes ranging from 1 to 6 MW/m2 and at temperatures ranging from 80 K to the melting temperatures of most materials. The facility was developed because of the need to build and calibrate very small heat-flux gauges for the Space Shuttle main engine (SSME). It includes a lamp head attached to the side of a service module, an argon-gas-recirculation module, a reflector, a heat exchanger, and a high-speed positioning system. This type of automated heat-flux-calibration facility can be installed in industrial plants for onsite calibration of heat-flux gauges measuring fluxes of heat in advanced gas-turbine and rocket engines.

  15. Machine Learning

    DTIC Science & Technology

    1990-04-01

    RADC-TR-90-25, Final Technical Report, April 1990. MACHINE LEARNING. The MITRE Corporation; Melissa P. Chase. Contract F19628-89-C-0001. Section 1, Introduction, 1.1 Background: Research in machine learning has taken two directions in the problem of

  16. MODIS airborne simulator visible and near-infrared calibration, 1992 ASTEX field experiment. Calibration version: ASTEX King 1.0

    NASA Technical Reports Server (NTRS)

    Arnold, G. Thomas; Fitzgerald, Michael; Grant, Patrick S.; King, Michael D.

    1994-01-01

    Calibration of the visible and near-infrared (near-IR) channels of the MODIS Airborne Simulator (MAS) is derived from observations of a calibrated light source. For the 1992 Atlantic Stratocumulus Transition Experiment (ASTEX) field deployment, the calibrated light source was the NASA Goddard 48-inch integrating hemisphere. Tests during the ASTEX deployment were conducted to calibrate the hemisphere and then the MAS. This report summarizes the ASTEX hemisphere calibration and then describes how the MAS was calibrated from the hemisphere data. All MAS calibration measurements are presented, and the determination of the MAS calibration coefficients (raw counts to radiance conversion) is discussed. In addition, a comparison to an independent MAS calibration by Ames personnel using their 30-inch integrating sphere is discussed.

  17. Assessment of radiofrequency ablation margin by MRI-MRI image fusion in hepatocellular carcinoma.

    PubMed

    Wang, Xiao-Li; Li, Kai; Su, Zhong-Zhen; Huang, Ze-Ping; Wang, Ping; Zheng, Rong-Qin

    2015-05-07

    To investigate the feasibility and clinical value of magnetic resonance imaging (MRI)-MRI image fusion in assessing the ablative margin (AM) for hepatocellular carcinoma (HCC). A newly developed ultrasound workstation for MRI-MRI image fusion was used to evaluate the AM of 62 tumors in 52 HCC patients after radiofrequency ablation (RFA). The lesions were divided into two groups: group A, in which the tumor was completely ablated and 5 mm AM was achieved (n = 32); and group B, in which the tumor was completely ablated but 5 mm AM was not achieved (n = 29). To detect local tumor progression (LTP), all patients were followed every two months by contrast-enhanced ultrasound, contrast-enhanced MRI or computed tomography (CT) in the first year after RFA. Then, the follow-up interval was prolonged to every three months after the first year. Of the 62 tumors, MRI-MRI image fusion was successful in 61 (98.4%); the remaining case had significant deformation of the liver and massive ascites after RFA. The time required for creating image fusion and AM evaluation was 15.5 ± 5.5 min (range: 8-22 min) and 9.6 ± 3.2 min (range: 6-14 min), respectively. The follow-up period ranged from 1-23 mo (14.2 ± 5.4 mo). In group A, no LTP was detected in 32 lesions, whereas in group B, LTP was detected in 4 of 29 tumors, which occurred at 2, 7, 9, and 15 mo after RFA. The frequency of LTP in group B (13.8%; 4/29) was significantly higher than that in group A (0/32, P = 0.046). All of the LTPs occurred in the area in which the 5 mm AM was not achieved. The MRI-MRI image fusion using an ultrasound workstation is feasible and useful for evaluating the AM after RFA for HCC.

  18. Three-dimensional MRI-linac intra-fraction guidance using multiple orthogonal cine-MRI planes

    NASA Astrophysics Data System (ADS)

    Bjerre, Troels; Crijns, Sjoerd; Rosenschöld, Per Munck af; Aznar, Marianne; Specht, Lena; Larsen, Rasmus; Keall, Paul

    2013-07-01

    The introduction of integrated MRI-radiation therapy systems will offer live intra-fraction imaging. We propose a feasible low-latency multi-plane MRI-linac guidance strategy. In this work we demonstrate how interleaved acquired, orthogonal cine-MRI planes can be used for low-latency tracking of the 3D trajectory of a soft-tissue target structure. The proposed strategy relies on acquiring a pre-treatment 3D breath-hold scan, extracting a 3D target template and performing template matching between this 3D template and pairs of orthogonal 2D cine-MRI planes intersecting the target motion path. For a 60 s free-breathing series of orthogonal cine-MRI planes, we demonstrate that the method was capable of accurately tracking the respiration related 3D motion of the left kidney. Quantitative evaluation of the method using a dataset designed for this purpose revealed a translational error of 1.15 mm for a translation of 39.9 mm. We have demonstrated how interleaved acquired, orthogonal cine-MRI planes can be used for online tracking of soft-tissue target volumes.
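    The orthogonal-plane tracking idea can be illustrated with a toy exhaustive template match on two 2D planes that share the z axis. This is an illustrative sketch only (sum-of-squared-differences score, invented arrays), not the authors' implementation:

```python
def best_offset(plane, template):
    """Return the (row, col) offset minimising SSD between template and plane."""
    ph, pw = len(plane), len(plane[0])
    th, tw = len(template), len(template[0])
    best_score, best_rc = None, (0, 0)
    for r in range(ph - th + 1):
        for c in range(pw - tw + 1):
            ssd = sum((plane[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if best_score is None or ssd < best_score:
                best_score, best_rc = ssd, (r, c)
    return best_rc

template = [[9, 9], [9, 9]]               # 2D slice of the target template
coronal = [[0] * 5 for _ in range(5)]     # plane spanning (z, x)
sagittal = [[0] * 5 for _ in range(5)]    # plane spanning (z, y)
# Plant the target at x=3, y=1, z=2 in both planes
coronal[2][3] = coronal[2][4] = coronal[3][3] = coronal[3][4] = 9
sagittal[2][1] = sagittal[2][2] = sagittal[3][1] = sagittal[3][2] = 9

zx, x = best_offset(coronal, template)
zy, y = best_offset(sagittal, template)
pos_3d = (x, y, (zx + zy) // 2)  # z is seen in both planes; average the two
```

    In the real method, 2D slices of the pre-treatment 3D template are matched against each newly acquired cine plane, and the shared axis provides redundancy that stabilises the 3D estimate.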

  19. Three-dimensional MRI-linac intra-fraction guidance using multiple orthogonal cine-MRI planes.

    PubMed

    Bjerre, Troels; Crijns, Sjoerd; af Rosenschöld, Per Munck; Aznar, Marianne; Specht, Lena; Larsen, Rasmus; Keall, Paul

    2013-07-21

    The introduction of integrated MRI-radiation therapy systems will offer live intra-fraction imaging. We propose a feasible low-latency multi-plane MRI-linac guidance strategy. In this work we demonstrate how interleaved acquired, orthogonal cine-MRI planes can be used for low-latency tracking of the 3D trajectory of a soft-tissue target structure. The proposed strategy relies on acquiring a pre-treatment 3D breath-hold scan, extracting a 3D target template and performing template matching between this 3D template and pairs of orthogonal 2D cine-MRI planes intersecting the target motion path. For a 60 s free-breathing series of orthogonal cine-MRI planes, we demonstrate that the method was capable of accurately tracking the respiration related 3D motion of the left kidney. Quantitative evaluation of the method using a dataset designed for this purpose revealed a translational error of 1.15 mm for a translation of 39.9 mm. We have demonstrated how interleaved acquired, orthogonal cine-MRI planes can be used for online tracking of soft-tissue target volumes.

  20. National machine guarding program: Part 1. Machine safeguarding practices in small metal fabrication businesses

    PubMed Central

    Yamin, Samuel C.; Brosseau, Lisa M.; Xi, Min; Gordon, Robert; Most, Ivan G.; Stanley, Rodney

    2015-01-01

    Background: Metal fabrication workers experience high rates of traumatic occupational injuries. Machine operators in particular face high risks, often stemming from the absence or improper use of machine safeguarding or the failure to implement lockout procedures. Methods: The National Machine Guarding Program (NMGP) was a translational research initiative implemented in conjunction with two workers' compensation insurers. Insurance safety consultants trained in machine guarding used standardized checklists to conduct a baseline inspection of machine-related hazards in 221 businesses. Results: Safeguards at the point of operation were missing or inadequate on 33% of machines. Safeguards for other mechanical hazards were missing on 28% of machines. Older machines were both widely used and less likely than newer machines to be properly guarded. Lockout/tagout procedures were posted at only 9% of machine workstations. Conclusions: The NMGP demonstrates a need for improvement in many aspects of machine safety and lockout in small metal fabrication businesses. Am. J. Ind. Med. 58:1174-1183, 2015. © 2015 The Authors. American Journal of Industrial Medicine published by Wiley Periodicals, Inc. PMID:26332060